\begin{document} \title{Exploring Progressions: A Collection of Problems} \author{Konstantine Zelator\\ Department of Mathematics\\ and Computer Science\\ Rhode Island College \\ 600 Mount Pleasant Avenue \\ Providence, RI 02908\\ USA} \maketitle \section{Introduction} In this work, we study the subject of arithmetic, geometric, mixed, and harmonic progressions. Some of the material found in Sections 2, 3, 4, and 5 can be found in standard precalculus texts; for example, refer to the books in \cite{1} and \cite{2}. A substantial portion of the material in those sections, however, cannot be found in such books. In Section 6, we present 21 problems, with detailed solutions. These are interesting, unusual problems not commonly found in mathematics texts, and most of them are quite challenging. The material of this paper is aimed at mathematics educators as well as math specialists with a keen interest in progressions. \section{Progressions} In this paper we will study arithmetic and geometric progressions, as well as mixed progressions. All three kinds of progressions are examples of sequences. Almost every student who has studied mathematics, at least through a first calculus course, has come across the concept of sequences. Such a student has usually seen some examples of sequences, so the reader of this paper quite likely has at least some informal understanding of what the term sequence means. We start with a formal definition of the term sequence. \noindent{\bf Definition 1:} \begin{enumerate} \item[(a)] A {\bf finite sequence} of $k$ terms ($k$ a fixed positive integer), whose terms are real numbers, is a mapping $f$ from the set $\{1,2,\ldots,k\} $ (the set containing the first $k$ positive integers) to the set of real numbers $\mathbb{R}$. Such a sequence is usually denoted by $a_1,\ldots,a_n,\ldots,a_k$. If $n$ is a positive integer between $1$ and $k$, the \boldmath $n${\bf th term} ${\boldmath a}_{\boldmath n}$ is simply the value of the function $f$ at $n$; $a_{n}=f(n)$. \item[(b)] {\bf An infinite sequence} whose terms are real numbers, is a mapping $f$ from the set of positive integers or natural numbers to the set of real numbers $\mathbb{R}$; we write $f:\mathbb{N} \rightarrow \mathbb{R}$; $f(n)=a_n$. \end{enumerate} Such a sequence is usually denoted by $a_1,a_2,\ldots, a_n,\ldots$ . The term $a_n$ is called the $n$th term of the sequence and it is simply the value of the function at $n$. \noindent{\bf Remark 1:} Unlike sets, for which the order in which the elements are listed does not matter, in a sequence the order in which the elements are listed does matter and makes a particular sequence unique. For example, the sequences $1,\ 8,\ 10$ and $8,\ 10,\ 1$ are regarded as different sequences. In the first case we have a function $f$ from $\{1,2,3\}$ to $\mathbb{R}$ defined as follows: $f: \{1,2,3\} \rightarrow \mathbb{R};\ f(1)=1=a_1,\ f(2) = 8 = a_2$, and $f(3)= 10= a_3$. In the second case, we have a function $g: \{1,2,3\} \rightarrow \mathbb{R};\ g(1)=b_1 =8,\ g(2) = b_2 = 10$, and $g(3) = b_3 =1$. Only if two sequences are {\bf equal as functions} are they regarded as one and the same sequence. \section{Arithmetic Progressions} \noindent{\bf Definition 2:} A sequence $a_1,a_2,\ldots, a_n,\ldots $ with at least two terms, is called an {\bf arithmetic} progression, if, and only if, there exists a (fixed) real number $d$ such that $a_{n+1} = a_n + d$, for every natural number $n$, if the sequence is infinite. 
If the sequence is finite with $k$ terms, then $a_{n+1} = a_n+d$ for $n=1,\ldots,k-1$. The real number $d$ is called the {\bf difference} of the arithmetic progression. \noindent{\bf Remark 2:} What the above definition really says is that, starting with the second term $a_2$, each term of the sequence is equal to the sum of the previous term plus the fixed number $d$. \noindent{\bf Definition 3:} An arithmetic progression is said to be {\bf increasing} if the real number $d$ (in Definition 2) is positive, {\bf decreasing} if the real number $d$ is negative, and {\bf constant} if $d = 0$. \noindent{\bf Remark 3:} Obviously, if $d>0$, each term will be greater than the previous term, while if $d < 0$, each term will be smaller than the previous one. \noindent{\bf Theorem 1:} Let $a_1,a_2,\ldots,a_n,\ldots$ be an arithmetic progression with difference $d$, and let $m$ and $n$ be any natural numbers with $m < n$. The following hold true: \begin{enumerate} \item[(i)] $a_n = a_1 + (n-1)d$ \item[(ii)] $a_n = a_{n-m} + md$ \item[(iii)] $a_{m+1} + a_{n-m} = a_1 + a_n$ \end{enumerate} \noindent {\bf Proof:} \begin{enumerate} \item[(i)] We may proceed by mathematical induction. The statement obviously holds for $n=1$ since $a_1 = a_1 + (1-1)d$; $a_1 = a_1 +0$, which is true. Next we show that if the statement holds for some natural number $t$, then this assumption implies that the statement must also hold for $(t+1)$. Indeed, if the statement holds for $n=t$, then we have $a_t = a_1 + (t-1)d$, but we also know that $a_{t+1} = a_t + d$, since $a_t$ and $a_{t+1}$ are successive terms of the given arithmetic progression. Thus, $a_t = a_1 + (t-1)d \Rightarrow a_t+d = a_1+(t-1)d+d \Rightarrow a_t + d = a_1 + d\cdot t \Rightarrow a_{t+1} = a_1 + d \cdot t$; $a_{t+1} = a_1 + d \cdot [(t+1) -1]$, which proves that the statement also holds for $n=t+1$. The induction process is complete. \item[(ii)] By part (i) we have established that $a_n = a_1+(n-1)d$, for every natural number $n$. So that $$ \begin{array}{rcl} a_n & = & a_1 + (n-1)d-md+ md;\\ \\ a_n & = & a_1 + [( n-m) - 1] d + md. \end{array} $$ Again, by part (i) we know that $a_{n-m} = a_1 + [(n-m)-1] d$. Combining this with the last equation we obtain $a_n = a_{n-m}+md$, and the proof is complete. \item[(iii)] By part (i) we know that $a_{m+1} = a_1 + [(m+1)-1]d \Rightarrow a_{m+1} = a_1 + md$; and by part (ii), we have already established that $a_n = a_{n-m} + md$. Hence, $a_{m+1} + a_{n-m} = a_1 + md + a_{n-m} = a_1+a_n$, and the proof is complete. $\square$ \end{enumerate} \noindent{\bf Remark 4:} Note that what Theorem 1(iii) really says is that in an arithmetic progression $a_1,\ldots, a_n$ with $a_1$ being the first term and $a_n$ being the $n$th or last term, if we pick two in-between terms $a_{m+1}$ and $a_{n-m}$ which are ``equidistant" from the first and last term respectively ($a_{m+1}$ is $m$ places or spaces to the right of $a_1$, while $a_{n-m}$ is $m$ spaces or places to the left of $a_n$), the sum of $a_{m+1}$ and $a_{n-m}$ remains fixed: it is always equal to $(a_1 +a_n)$, no matter what the value of $m$ is ($m$ can take values from $1$ to $(n-1)$). For example, if $a_1,a_2,a_3,a_4,a_5$ is an arithmetic progression we must have $a_1 +a_5 = a_2+a_4 = a_3 +a_3 = 2a_3$. Note that $(a_2+a_4)$ corresponds to $m=1$, while $(a_3+a_3)$ corresponds to $m=2$; but also $a_4 + a_2$ corresponds to $m=3$ and $a_5 +a_1$ corresponds to $m=4$. 
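As a quick numerical illustration of Theorem 1(iii) (an informal sketch, not part of the formal development; the first term and difference below are arbitrary choices), one may verify the equidistant-sum property directly in Python:
\begin{verbatim}
# Check of Theorem 1(iii): a_{m+1} + a_{n-m} = a_1 + a_n for m = 1, ..., n-1.
a1, d, n = 4, 3, 5                         # arbitrary arithmetic progression
a = [a1 + k * d for k in range(n)]         # a[0] = a_1, ..., a[n-1] = a_n
for m in range(1, n):
    assert a[m] + a[n - 1 - m] == a[0] + a[n - 1]
print("equidistant sums all equal", a[0] + a[n - 1])
\end{verbatim}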
Likewise, if $b_1,b_2,b_3,b_4,b_5,b_6$ are the successive terms of an arithmetic progression we must have $b_1+b_6 = b_2 +b_5 = b_3 + b_4$. The following theorem establishes two equivalent formulas for the sum of the first $n$ terms of an arithmetic progression. \noindent{\bf Theorem 2:} Let $a_1,a_2,\ldots ,a_n,\ldots ,$ be an arithmetic progression with difference $d$. \begin{enumerate} \item[(i)] The sum of the first (successive) $n$ terms $a_1,\ldots,a_n$, is equal to the real number $\left( {\displaystyle \frac{a_1+a_n}{2}}\right)\cdot n$; we write $a_1+a_2+ \cdots +a_n = {\displaystyle \sum^n_{i=1}} a_i = {\displaystyle \frac{n\cdot (a_1+a_n)}{2}}$. \item[(ii)] ${\displaystyle \sum^n_{i=1}} a_i = \left( {\displaystyle \frac{a_1+[a_1 +(n-1)d]}{2}}\right) \cdot n$. \end{enumerate} \noindent{\bf Proof:} \begin{enumerate} \item[(i)] We proceed by mathematical induction. For $n=1$ the statement is obviously true since $a_1 = {\displaystyle \frac{1\cdot (a_1+a_1)}{2}} = {\displaystyle \frac{2a_1}{2}}$ . Assume the statement to hold true for some $n=k \geq 1$. We will show that whenever the statement holds true for some value $k$ of $n,\ k\geq 1$, it must also hold true for $n=k+1$. Indeed, assume $a_1 + \cdots + a_k = {\displaystyle \frac{k\cdot (a_1 + a_k)}{2}}$; add $a_{k+1}$ to both sides to obtain $$\begin{array}{rcl} a_1 + \cdots + a_k + a_{k+1}& = & {\displaystyle \frac{k\cdot a_1 +a_k}{2}} + a_{k+1}\\ & & \mathbb{R}a a_1 + \cdots + a_k + a_{k+1}\\ \\ & = & {\displaystyle \frac{ka_1+ka_k + 2a_{k+1}}{2}}\end{array} \eqno{(1)} $$ \noindent But since the given sequence is an arithmetic progression by Theorem 1(i), we must have $a_{k+1} = a_1+kd$ where $d$ is the difference. Substituting back in equation (1) for $a_{k+1}$ we obtain, $$ \begin{array}{rcl} a_1 + \cdots + a_k + a_{k+1}& = &{\displaystyle \frac{ka_1+ka_k + (a_1+kd) + a_{k+1}}{2}} \\ \\ \mathbb{R}a a_1 + \cdots + a_k + a_{k+1} & = & {\displaystyle \frac{(k+1)a_1 + k(a_k+d)+a_{k+1}}{2} } \end{array} \eqno{(2)} $$ \noindent We also have $a_{k+1} = a_k+d$, since $a_k$ and $a_{k+1}$ are successive terms. Replacing $a_k+d$ by $a_{k+1}$ in equation (2) we now have $a_1 + \cdots + a_k + a_{k+1} = {\displaystyle \frac{(k+1) a_1 + ka_{k+1} + a_{k+1}}{2}} = {\displaystyle \frac{(k+1)a_1+(k+1) a_{k+1}}{2}} = (k+1) \cdot {\displaystyle \frac{(a_1+a_{k+1})}{2}} $, and the proof is complete. The statement also holds for $n=k+1$. $\square$ \item[(ii)] This is an immediate consequence of part (i). Since ${\displaystyle \sum^n_{i=1}} a_i = {\displaystyle \frac{n(a_1+a_n)}{2}}$ and $a_n = a_1 + (n-1) d$ (by Theorem 1(i)) we have, $${\displaystyle \sum^n_{i=1}} a_i = n\left( {\displaystyle \frac{a_1+[a_1+(n-1)d]}{2}}\right),$$ \noindent and we are done. $\square $ \end{enumerate} \noindent {\bf Example 1:} \begin{enumerate} \item[(i)] The sequence of positive integers $1,2,3,\ldots ,n,\ldots ,$ is an infinite sequence which is an arithmetic progression with first term $a_1 = 1$, difference $d=1$, and the $n$th term $a_n=n$. According to Theorem 2(i) the sum of the first $n$ terms can be easily found: $1+2 + \ldots + n = {\displaystyle \frac{n\cdot (1+n)}{2}}$. \item[(ii)] The sequence of the even positive integers $2,4,6,8,\ldots ,2n,\ldots $ has first term $a_1 = 2$, difference $d = 2$, and the $n$th term $a_n = 2n$. According to Theorem 2(i), $2 + 4 + \cdots + 2n = {\displaystyle \frac{n \cdot (2 + 2n)}{2}} = {\displaystyle \frac{n\cdot 2 \cdot (n+1)}{2}} = n \cdot (n+1)$. 
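As an informal check of Theorem 2(i) on the two progressions just considered (a sketch; the value of $n$ below is an arbitrary choice), one may compare the closed formulas with direct summation in Python:
\begin{verbatim}
# Check of Theorem 2(i) for Example 1(i) and 1(ii).
n = 10                                    # arbitrary number of terms
naturals = list(range(1, n + 1))          # 1, 2, ..., n      (Example 1(i))
evens = list(range(2, 2 * n + 1, 2))      # 2, 4, ..., 2n     (Example 1(ii))
assert sum(naturals) == n * (1 + n) // 2
assert sum(evens) == n * (n + 1)
print("closed formulas agree with direct summation for n =", n)
\end{verbatim}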
\item[(iii)] The sequence of the odd natural numbers $1,3,5,\ldots ,(2n-1), \ldots $, is an arithmetic progression with first term $a_1 =1$, difference $d = 2$, and $n$th term $a_n = 2n-1$. According to Theorem 2(i) we have $1+3+ \cdots + (2n-1) = n\cdot \left( {\displaystyle \frac{1+(2n-1)}{2}}\right) = {\displaystyle \frac{n\cdot (2n)}{2}} = n^2$. \item[(iv)] The sequence of all natural numbers which are multiples of $3\ : \ 3,6,9,12,$ \linebreak $\ldots ,3n, \ldots$ is an arithmetic progression with first term $a_1 = 3$, difference $d=3$, and $n$th term $a_n=3n$. We have $3+6+ \cdots + 3n = {\displaystyle \frac{n\cdot (3+3n)}{2}} = {\displaystyle \frac{3n(n+1)}{2}}$. Observe that this sum can also be found from (i) by observing that $3+6+ \cdots +3n = 3\cdot (1+2+\cdots + n) = {\displaystyle \frac{3\cdot n(n+1)}{2}}$. If we had to find the sum of all natural numbers which are multiples of $3$, starting with $3$ and ending with $33$, we know that $a_1 = 3$ and that $a_n = 33$; we must find the value of $n$. Indeed, $a_n = a_1 + (n-1)\cdot d$; and since $d = 3$, we have $33 = 3 + (n-1)\cdot 3 \Rightarrow 33 = 3 \cdot [1+(n-1)] \Rightarrow 11 = n$. Thus, $3 + 6 + \cdots + 30 + 33 = {\displaystyle \frac{11\cdot (3 + 33)}{2}} = {\displaystyle \frac{11 \cdot 36}{2}} = 11 \cdot 18 = 198$. \end{enumerate} \noindent {\bf Example 2:} Given an arithmetic progression $a_1,\ldots ,a_m, \ldots , a_n, \ldots$, and natural numbers $m,n$ with $2 \leq m<n$, one can always find the sum $a_m+a_{m+1} + \cdots + a_{n-1} + a_n$; that is, the sum of the $[(n-m)+1]$ terms starting with $a_m$ and ending with $a_n$. If we know the values of $a_m$ and $a_n$, then we do not need to know the value of the difference. Indeed, the finite sequence $a_m,a_{m+1}, \ldots,a_{n-1},a_n$ is a finite arithmetic progression with first term $a_m$, last term $a_n$ (and difference $d$); and it contains exactly $[(n-m)+1]$ terms. According to Theorem 2(i) we must have $a_m+a_{m+1}+ \cdots + a_{n-1} + a_n = \frac{(n-m+1)\cdot [a_m+a_n]}{2}$. If, on the other hand, we only know the values of the first term $a_1$ and difference $d$ (and the values of $m$ and $n$), we can apply Theorem 2(ii) by observing that \noindent$\begin{array}{rcl} a_m+a_{m+1} + \cdots + a_{n-1}+a_n & = & \underset{\underset{n\ {\rm terms}}{{\rm sum\ of\ the\ first}}}{\left(\underbrace{a_1+a_2+ \ldots + a_n}\right)}\\ & & - \underset{\underset{(m-1)\ {\rm terms}}{{\rm sum\ of\ the\ first}}}{\left(\underbrace{a_1+ \ldots + a_{m-1}}\right)}\\ \\ {\rm by\ Th.\ 2(ii)}& = & \left( \frac{2a_1 + (n-1)d}{2} \right) \cdot n\\ & & - \left( \frac{2a_1 + (m-2)d}{2} \right) \cdot (m-1)\\ \\ & = & \frac{2[n-(m-1)]a_1 + [n\cdot (n-1)- (m-2)\cdot (m-1)]d}{2}\\ \\ & = & \frac{2(n-m+1)a_1 + [n(n-1) - (m-2)(m-1) ] d}{2} \end{array} $ \noindent {\bf Example 3:} \begin{enumerate} \item[(a)] Find the sum of all multiples of $7$, starting with $49$ and ending with $133$. Both $49$ and $133$ are terms of the infinite arithmetic progression with first term $a_1 = 7$ and difference $d=7$. If $a_m = 49$, then $49 = a_1 +(m-1)d;\ 49 = 7 + (m-1) \cdot 7 \Rightarrow \frac{49}{7} = m;\ m = 7$. Likewise, if $a_n = 133$, then $133 = a_1 + (n-1)d;\ 133 = 7 + (n-1)\cdot 7 \Rightarrow 19 = n$. According to Example 2, the sum we are looking for is given by $a_7 + a_8 + \ldots + a_{18} + a_{19} = \frac{(19-7+1)(a_7 + a_{19})}{2} = \frac{13 \cdot (49+133)}{2} = \frac{13 \cdot 182}{2} = (13)\cdot (91) = 1183$. 
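The arithmetic in part (a) can be double-checked with a few lines of Python (an informal sketch):
\begin{verbatim}
# Check of Example 3(a): sum of the multiples of 7 from 49 to 133.
multiples = list(range(49, 134, 7))        # 49, 56, ..., 133
assert len(multiples) == 19 - 7 + 1        # thirteen terms, a_7 through a_19
assert sum(multiples) == 13 * (49 + 133) // 2
print(sum(multiples))                      # prints 1183
\end{verbatim}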
\item[(b)] For the arithmetic progression with first term $a_1 = 11$ and difference $d = 5$, find the sum of its terms starting with $a_5$ and ending with $a_{13}$. \noindent We are looking for the sum $a_5 + a_6 + \ldots + a_{12} + a_{13}$; in the usual notation $m = 5$ and $n = 13$. According to Example 2, since we know the first term $a_1=11$ and the difference $d = 5$ we may use the formula we developed there: $$ \begin{array}{rcl} a_m + a_{m+1} + \ldots + a_{n-1} + a_n & = & \frac{2(n-m+1)a_1 + [n(n-1) - (m-2)(m-1)]d}{2};\\ \\ a_5+a_6 + \ldots + a_{12} + a_{13} & = & \frac{2\cdot(13-5+1)\cdot 11 + [13\cdot(13-1)- (5-2)(5-1)]5}{2} \\ \\ & = & \frac{2\cdot 9 \cdot 11 + [(13)(12) - (3)(4)]5}{2} = \frac{198 + (156 -12)\cdot 5}{2} \\ \\ & = & \frac{198 + 720}{2} = \frac{918}{2} = 459 \end{array} $$ \end{enumerate} The following Theorem is simple in both its statement and proof but it serves as an effective tool to check whether three real numbers are successive terms of an arithmetic progression. \noindent {\bf Theorem 3:} Let $a,b,c$ be real numbers with $a < b < c$. \begin{enumerate} \item[(i)] The three numbers $a,b$, and $c$ are successive of an arithmetic progression if, and only if, $2b = a + c$ or equivalently $b = \frac{a+c}{2}$. \item[(ii)] Any arithmetic progression containing $a,b,c$ as successive terms must have the same difference $d$, namely $d = b - a = c - b$ \end{enumerate} \noindent{\bf Proof:} \begin{enumerate} \item[(i)] Suppose that $a,b$, and $c$ are successive terms of an arithmetic progression; then by definition we have $b = a + d$ and $c = b+d$, where $d$ is the difference. So that $ d=b-a=c-b$; from $b - a = c-b$ we obtain $2b = a+c$ or $b = \frac{a+c}{2}$. Conversely, if $2b = a + c$, then $b - a = c-b$; so by setting $d = b - a = c - b$, it immediately follows that $b = a+d$ and $c = b+d$ which proves that the real numbers $a,b,c$ are successive terms of an arithmetic progression with difference $d$. \item[(ii)] This has already been shown in part (i), namely that $d = b - a = c - b$. Thus, any arithmetic progression containing the real numbers $a,b,c$ as successive terms must have difference $d = b - a = c - b$. \end{enumerate} \noindent {\bf Remark 5:} According to Theorem 3, the middle term $b$ is the average of $a$ and $c$. This is generalized in Theorem 4 below. But, first we have the following definition. \noindent {\bf Definition 4:} Let $a_1,a_2,\ldots , a_n$ be a list (or sequence) of $n$ real numbers($n$ a positive integer). The arithmetic {\bf mean or average} of the given list, is the real number $\frac{a_1 +a_2 + \ldots + a_n}{n}$. \noindent {\bf Theorem 4:} Let $m$ and $n$ be natural numbers with $m < n$. Suppose that the real numbers $a_m,a_{m+1},\ldots , a_{n-1},a_n$ are the $(n-m+1)$ successive terms of an arithmetic progression (here, as in the usual notation, $a_k$ stands for the $k$th term of an arithmetic progression whose first term is $a_1$ and difference is $d$). \begin{enumerate} \item[(i)] If the natural number $(n-m+1)$ is odd, then the arithmetic mean or average of the reals $a_m,a_{m+1},\ldots , a_{n-1},a_n$ is the term $a_{(\frac{m+n}{2})}$. In other words, $a_{(\frac{m+n}{2})} = \frac{a_m + a_{m+1} + \ldots + a_{n-1}+ a_n}{n-m+1}$. (Note that since $(n-m+1)$ is odd, it follows that $n-m$ must be even, and thus so must be $n+m$; and hence $\frac{m+n}{2}$ must be a natural number). 
\item[(ii)] If the natural number is even, then the arithmetic mean of the reals $a_m,a_{m+1},\ldots ,a_{n-1}, a_n$ must be the average of the two middle terms $a_{(\frac{n+m-1}{2})}$ and $a_{(\frac{n+m+1}{2})}$. \noindent In other words $\frac{a_m+a_{m+1}+\ldots + a_{n-1} + a_n}{n-m+1} = \frac{a_{(\frac{n+m-1}{2})} + a_{(\frac{n+m+1}{2})}}{2}$. \end{enumerate} \noindent {\bf Remark 6:} To clearly see the workings of Theorem 4, let's look at two examples; first suppose $m = 3$ and $n = 7$. Then $n-m+1=7-3+1=5$; so if $a_3,a_4,a_5,a_6,a_7$ are successive terms of an arithmetic progression, clearly $a_5$ is the middle term. But since the five terms are equally spaced or equidistant from one another (because each term is equal to the sum of the previous terms plus a fixed number, the difference $d$), it makes sense that $a_5$ would also turn out to be the average of the five terms. If, on the other hand, the natural number $n-m+1$ is even; as in the case of $m = 3$ and $n = 8$. Then we have two middle numbers: $a_5$ and $a_6$. \noindent{\bf Proof} (of Theorem 4): \begin{enumerate} \item[(i)] Since $n-m+1$ is odd, it follows $n - m$ is even; and thus $n+m$ is also even. Now, if we look at the integers $m,m+1,\ldots , n-1,n$ we will see that since $m+n$ odd, there is a middle number among them, namely the natural number $\frac{m+n}{2}$. Consequently among the terms $a_m,a_{m+1},\ldots , a_{n-1},a_n$, the term $a_{(\frac{m+n}{2})}$ is the middle term. Next we perform two calculations. First we compute $a_{(\frac{m+n}{2})}$ in terms of $m, n$ the first term $a_1$ and the difference $d$. According to Theorem 1(i), we have, $$ a_{(\frac{m+n}{2})} = a_1 + \left( \frac{m+n}{2} - 1 \right) d = a_1 + \left( \frac{m+n-2}{2}\right) d. $$ Now let us compute the sum $\frac{a_m+a_{m+1} + \ldots + a_{n-1} + a_n}{n-m+1}$. First assume $m \geq 2$; so that $2 \leq m < n$. Observe that $$\begin{array}{rl} & a_m+a_{m+1} + \ldots + a_{n-1} + a_n \\ \\ = & \underset{{\rm sum\ of\ the\ first}\ n\ {\rm terms}}{\left( {\underbrace{a_1+a_2 + \ldots + a_m + a_{m+1} + \ldots + a_{n-1}+a_n}}\right)}\\ \\ & - \underset{\underset{{\rm note\ that}\ m-1 \geq 1,\ {\rm since}\ m \geq 2}{{\rm sum\ of\ the \ first}\ (m-1) \ {\rm terms}}}{(\underbrace{a_1+\ldots +a_{m-1}})} \end{array} $$ Apply Theorem 2(ii), we have, $$ a_1 + a_2 + \ldots + a_m + a_{m+1} + \ldots + a_{n-1} + a_n = \frac{n[2a_1 + (n-1)d]}{2} $$ \noindent and $$ a_1 + \ldots + a_{m-1} = \frac{(m-1)[2a_1 + (m-2)d]}{2}. $$ Putting everything together we have $$\begin{array}{rl}& a_m + a_{m+1} + \ldots + a_{n-1} + a_n\\ \\ = & (a_1+a_2+\ldots +a_m + a_{m+1} + \ldots + a_{n-1} + a_n)\\ \\ & - (a_1 + \ldots + a_{m-1}) = \frac{n[2a_1+(n-1)d]}{2}\\ \\ & - \frac{(m-1)[2a_1 +(m-2)d]}{2}\\ \\ = &\frac{2(n-m+1)a_1 + [n(n-1)-(m-1)(m-2)]d}{2}. \end{array} $$ \noindent Thus, $$\begin{array}{rcl} & & \frac{a_m+a_{m+1}+ \ldots + a_{n-1} +a_n}{n-m+1} \\ \\ & = & \frac{2(n-m+1)a_1 + [n(n-1)-(m-1)(m-2)]d}{2(n-m+1)}\\ \\ & = & a_1 + \frac{[n(n-1)-(m-1)(m-2)]d}{2(n-m+1)}\\ \\ &= & a_1 + \frac{[n^2-m^2 -n+3m-2]d}{2(n-m+1)}\\ \\ & = & a_1 + \frac{[(n-m)(n+m)+(n+m) - 2(n-m) -2]d}{2(n-m+1)}\\ \\ &= & a_1 + \frac{[(n-m)(n+m)+(n+m)-2(n-m+1)]d}{2(n-m+1)}\\ \\ &= & a_1 + \frac{[(n+m)(n-m+1) -2(n-m+1)]d}{2(n-m+1)}\\ \\ & = & a_1 + \frac{(n-m+1)(n+m-2)d}{2(n-m+1)} = a_1 + \frac{(n+m-2)d}{2}, \end{array} $$ \noindent which is equal to the term $a_{(\frac{m+n}{2})}$ as we have already shown. What about the case $m=1$? If $m=1$, then $n-m+1=n$ and $a_m=a_1$. 
In that case, we have the sum $\frac{a_1+a_2+\ldots +a_{n-1}+a_n}{n} =$ (by Theorem 2(ii)) $\frac{n\cdot[2a_1+(n-1)d]}{2n}$; but the middle term $a_{(\frac{m+n}{2})}$ is now $a_{(\frac{n+1}{2})}$ since $m = 1$; and $a_{(\frac{n+1}{2})} = a_1 + (\frac{1+n-2}{2})d \Rightarrow a_{(\frac{n+1}{2})} = a_1 + (\frac{n-1}{2})d$; compare this answer with what we just found right above, namely $$\frac{n\cdot [2a_1 +(n-1)d]}{2n} = \frac{2a_1+(n-1)d}{2} = a_1 + (\frac{n-1}{2})d,$$ \noindent they are the same. The proof is complete. \item[(ii)] This is left as an exercise to the student. (See Exercise 23). \end{enumerate} \noindent{\bf Definition 5:} A sequence $a_1,a_2,\ldots , a_n, \ldots$ (finite or infinite) is called a {\bf harmonic progression}, if, and only if, the corresponding sequence of the reciprocal terms: $$ b_1 = \frac{1}{a_1},\ \ b_2 = \frac{1}{a_2} ,\ldots , b_n = \frac{1}{a_n}, \ldots , $$ \noindent is an arithmetic progression. \noindent {\bf Example 4:} The reader can easily verify that the following three sequences are harmonic progressions: \begin{enumerate} \item[(a)] $\frac{1}{1}, \frac{1}{2},\frac{1}{3},\ldots , \frac{1}{n}, \ldots$ \item[(b)] $\frac{1}{2}, \frac{1}{4}, \frac{1}{6} , \ldots , \frac{1}{2n}, \ldots$ \item[(c)] $\frac{1}{9}, \frac{1}{16}, \frac{1}{23} , \ldots , \frac{1}{7n+2} , \ldots$ \end{enumerate} \section{Geometric Progressions} \noindent {\bf Definition 6:} A sequence $a_1,a_2,\ldots , a_n, \ldots $ (finite or infinite) is called a {\bf geometric progression}, if there exists a (fixed) real number $r$ such that $a_{n+1} = r \cdot a_n$, for every natural number $n$ (if the progression is finite with $k$ terms $a_1, \ldots , a_k$, with $k \geq 2$, then $a_{n+1} = r\cdot a_n$, for all $n = 1,2,\ldots ,k-1$). The real number $r$ is called the {\bf ratio} of the geometric progression. The first term of the geometric progression is usually denoted by $a$; we write $a_1 = a$. \noindent{\bf Theorem 5:} Let $a = a_1,a_2,\ldots , a_n, \ldots$ be a geometric progression with first term $a$ and ratio $r$. \begin{enumerate} \item[(i)] $a_n = a \cdot r^{n-1}$, for every natural number $n$. \item[(ii)] $a_1 + \ldots + a_n = {\displaystyle \sum^n_{i=1}} a_i = \frac{a_n \cdot r - a}{r-1} = \frac{a(r^n-1)}{r-1}$, for every natural number $n$, if $r \neq 1$; if on the other hand $r = 1$, then the sum of the first $n$ terms of the geometric progression is equal to $n \cdot a$. \end{enumerate} \noindent{\bf Proof:} \begin{enumerate} \item[(i)] By induction: the statement is true for $n = 1$, since $a_1 = a \cdot r^{0} = a$. Assume the statement to hold true for $n = k$, for some natural number $k$. We will show that this assumption implies the statement to be also true for $n = (k+1)$. Indeed, since the statement is true for $n = k$, we have $a_k = a\cdot r^{k-1} \Rightarrow r\cdot a_k = r \cdot a\cdot r^{k-1} = a \cdot r^k$; but $k = (k+1 -1)$ and $r\cdot a_k = a_{k+1}$, by the definition of a geometric progression. Hence, $a_{k+1} = a\cdot r^{(k+1)-1}$, and so the statement also holds true for $n = k+1$. \item[(ii)] Most students have probably seen in precalculus the identity $r^n -1 = (r-1)(r^{n-1}+ \ldots + 1)$, which holds true for all natural numbers $n$ and all reals $r$. For example, when $n = 2,\ r^2-1=(r-1)(r+1)$; when $n = 3$, $r^3 - 1 = (r-1)(r^2 +r + 1)$. \end{enumerate} We use induction to actually prove it. 
Note that the statement $n = 1$ simply takes the form, $r-1=r-1$ so it holds true; while for $n = 2$ the statement becomes $r^2-1 = (r-1)(r+1)$, which is again true. Now assume the statement to hold for some $n = k,\ k \geq 2$ a natural number. So we are assuming that the statement $r^k -1 =(r-1)(r^{k-1} + \ldots + r + 1)$. Multiply both sides by $r$: $$\begin{array}{rl} & r\cdot (r^k-1) = r\cdot (r-1) \cdot (r^{k-1} + \ldots + r+1)\\ \\ \mathbb{R}a & r^{k+1} - r = (r-1) \cdot (r^k+r^{k-1}+ \ldots + r^2 + r);\\ \\ & r^{k+1}-r = (r-1)\cdot (r^k + r^{k-1} +\ldots r^2 + r + 1-1)\\ \\ \mathbb{R}a & r^{k+1}-r = (r-1) \cdot (r^k + r^{k-1} + \ldots + r^2 + r + 1)\\ & + (r-1)\cdot (-1)\\ \\ \mathbb{R}a & r^{k+1} - r = (r-1) \cdot (r^k +r^{k-1} + \ldots + r^2 + r + 1) - r+1\\ \\ \mathbb{R}a & r^{k+1}-1 = (r-1) \cdot (r^{(k+1)-1} + r^{(k+1)-2} + \ldots + r^2 + r + 1), \end{array} $$ \noindent which proves that the statement also holds true for $n = k + 1$. The induction process is complete. We have shown that $r^n - 1 = (r-1)(r^{n-1}+r^{n-2} + \ldots + r + 1)$ holds true for every real number $r$ and every natural $n$. If $r \neq 1$, then $r-1 \neq 0$, and so $\frac{r^n-1}{r-1} = r^{n-1} + r^{n-2} + \ldots + r + 1$. Multiply both sides by the first term $a$ we obtain $$\begin{array}{rcl}{\displaystyle \frac{a\cdot (r^n-1)}{r-1}} & = & ar^{n-1} + ar^{n-2} + \ldots ar+a\\ \\ & = & a_n + a_{n-1} + \ldots + a_2 + a_1. \end{array} $$ \noindent Since by part (i) we know that $a_i = a\cdot r^{i-1}$, for $i = 1,2,\ldots , n$; if on the other hand $r = 1$, then the geometric progression is obviously the constant sequence, $a,a, \ldots ,a, \ldots\ ;\ \ a_n=a$ for every natural number $n$. In that case $a_1 + \ldots + a_n = \underset{n\ {\rm times}}{\underbrace{a+ \ldots + a}} = na$. The proof is complete. $\square$ \noindent{\bf Remark 7:} We make some observation about the different types of geometric progressions that might occur according to the different types of values of the ratio $r$. \begin{enumerate} \item[(i)] If $a = 0$, then regardless of the value of the ratio $r$, one obtains the zero sequence $0,0,0,\ldots , 0,\ldots$ . \item[(ii)] If $r = 1$, then for any choice of the first term $a$, the geometric progression is the constant sequence, $a,a,\ldots , a,\ldots$ . \item[(iii)] If the first term $a$ is positive and $r > 1$ one obtains a geometric progression of positive terms, and which is increasing and which eventually exceed any real number (as we will see in Theorem 8, given a positive real number $M$, there is a term $a_n$ that exceeds M; in the language of calculus, we say that it approaches positive infinity). For example: $a = \frac{1}{2}$, and $r = 2$; we have the geometric progression $$ a_1 = a= \frac{1}{2},\ a_2 = \frac{1}{2} \cdot 2 = 1, a_3 = \frac{1}{2} \cdot 2^2 = 2; $$ \noindent The sequence is, $\frac{1}{2}, 1, 2, 2^2, 2^3, 2^4, \ldots , \underset{a_n}{\underbrace{\frac{1}{2} \cdot 2^{n-1}}}$. \item[(iv)] When $a > 0$ and $0 < r < 1$, the geometric progression is decreasing and in the language of calculus, it approaches zero (it has limit value zero). \noindent For example: $a = 4,\ r = \frac{1}{3}$. \noindent We have $a_1 = a = 4,\ a_2 = 4 \cdot \frac{1}{3},\ a_3 = a\cdot \left( \frac{1}{3}\right)^2,\ a_4 = 4 \cdot \left( \frac{1}{3}\right)^3;$\linebreak $ 4,\ \frac{4}{3},\ \frac{4}{9} , \ldots , \frac{4}{3^{n-1}}\ n$th term, $ \ldots$ . 
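The behaviours described in (iii) and (iv) can also be observed numerically; the following Python sketch (using the same arbitrarily chosen first terms and ratios as in the examples above) prints a few terms of each progression:
\begin{verbatim}
# Terms a * r**(n-1) of a geometric progression, for two sample choices of (a, r).
def terms(a, r, how_many):
    return [a * r ** (n - 1) for n in range(1, how_many + 1)]

print(terms(0.5, 2, 8))      # a = 1/2, r = 2:   increasing, eventually exceeds any M
print(terms(4, 1 / 3, 8))    # a = 4,   r = 1/3: decreasing, approaching zero
\end{verbatim}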
\item[(v)] For $a > 0$ and $-1 < r < 0$, the geometric sequence alternates (which means that if we pick any term, the succeeding term will have the opposite sign). Still, in this case, such a sequence approaches zero (has limit value zero). \noindent For example: $a = 9,\ r = - \frac{1}{2}$. \noindent $a_1 = a = 9,\ a_2 = 9\cdot \left( -\frac{1}{2}\right) = - \frac{9}{2},\ a_3 = 9 \cdot \left(-\frac{1}{2}\right)^2 = \frac{9}{4},\ldots $ \noindent $9,\ -\frac{9}{2},\ \frac{9}{2^2}, \ \frac{-9}{2^3} , \ldots , \underset{n{\rm th\ term}}{\underbrace{9\cdot \left( -\frac{1}{2} \right)^{n-1} = \frac{9\cdot (-1)^{n-1}}{2^{n-1}}}}\\ $ \item[(vi)] For $a > 0 $ and $r = -1$, we have a geometric progression that oscillates: $a, -a,a,-a, \ldots , a_n=(-1)^{n-1}a, \ldots $\ . \item[(vii)] For $a > 0$ and $r < -1$, the geometric progression alternates in sign while the absolute values of its terms grow without bound; in the language of calculus, we say that it diverges (it approaches neither a finite limit nor positive or negative infinity). \noindent For example: $a = 3, r = -2$: $$\begin{array}{rcl}a_1 & =& a = 3,\ a_2 = 3\cdot (-2)= -6,\ a_3 = 3\cdot (-2)^2 = 12, \ldots\\ \\ && 3, \ -6, \ 12, \ldots ,\ \displaystyle{\underbrace{3 \cdot (-2)^{n-1} = 3 \cdot 2^{n-1} \cdot (-1)^{n-1}}_{n{\rm th\ term}}} , \ldots\end{array} $$ \item[(viii)] What happens when the first term $a$ is negative? A similar analysis holds (see Exercise 24). \end{enumerate} \noindent {\bf Theorem 6:} Let $a = a_1, a_2,\ldots , a_n,\ldots $ be a geometric progression with ratio $r$. \begin{enumerate} \item[(i)] If $m$ and $n$ are any natural numbers such that $m < n$, then $a_n = a_{n-m} \cdot r^m$. \item[(ii)] If $m$ and $n$ are any natural numbers such that $m < n$, then $a_{m+1}\cdot a_{n-m} = a_1 \cdot a_n$. \item[(iii)] For any natural number $n$, $\left( \overset{n}{\underset{i=1}{\Pi}} a_i\right) ^2 = (a_1 \cdot a_2 \ldots a_n)^2 = (a_1 \cdot a_n)^n$, where $ \overset{n}{\underset{i=1}{\Pi}} a_i$ denotes the product of the first $n$ terms $a_1,a_2,\ldots , a_n$. \end{enumerate} \noindent{\bf Proof:} \begin{enumerate} \item[(i)] By Theorem 5(i) we have $a_n=a\cdot r^{n-1}$ and $a_{n-m} = a\cdot r^{n-m-1}$; thus $a_{n-m} \cdot r^m = a\cdot r^{n-m-1} \cdot r^m = a\cdot r^{n-1} = a_n$, and we are done. $\square$ \item[(ii)] Again by Theorem 5(i) we have, $$a_{m+1} = a \cdot r^m,\ a_{n-m} = a\cdot r^{n-m-1},\ {\rm and}\ a_n = a\cdot r^{n-1} $$ \noindent so that $a_{m+1} \cdot a_{n-m} = a\cdot r^m \cdot a \cdot r^{n-m-1} = a^2 \cdot r^{n-1}$ and $a_1 \cdot a_n = a\cdot (a\cdot r^{n-1}) = a^2 \cdot r^{n-1}$. Therefore, $a_{m+1} \cdot a_{n-m} = a_1 \cdot a_n$. \item[(iii)] We could prove this part by using mathematical induction. Instead, an alternative proof can be offered by making use of the fact that the sum of the first $n$ natural numbers is equal to $\frac{n\cdot(n+1)}{2}$; $1 + 2 + \ldots + n = \frac{n(n+1)}{2}$; we have already seen this in Example 1(i). (Go back and review this example if necessary; $1,2,\ldots , n$ are the consecutive first $n$ terms of the infinite arithmetic progression with first term $1$ and difference $1$). This fact can be applied neatly here: $$ \begin{array}{rcl} a_1 \cdot a_2 \ldots a_i \ldots a_n & = & \ {\rm (by\ Theorem\ 5(i))} \\ \\ & =& a\cdot(a\cdot r) \ldots (a\cdot r^{i-1}) \ldots (a\cdot r^{n-1})\\ \\ & = &\underset{n\ {\rm times}}{(\underbrace{a\cdot a \ldots a})}\cdot r ^{[1+2+\ldots + (i-1) +\ldots + (n-1)]} \end{array} $$ \noindent The sum $[1+2+\ldots + (i-1) + \ldots + (n-1)]$ is simply the sum of the first $(n-1)$ natural numbers, if $n \geq 2$. 
According to Example 1(i) we have, $$ 1+2+\ldots + (i-1) + \ldots + (n-1) = \frac{(n-1)\cdot [(n-1)+1]}{2} = \frac{(n-1)\cdot n}{2}. $$ \noindent Hence, $a_1 \cdot a_2 \ldots a_i \ldots a_n = \underset{n\ {\rm times}}{(\underbrace{a\cdot a \ldots a})} \cdot r^{[1+2+\ldots + (i-1) + \ldots +(n-1)]} = a^n \cdot r^{\frac{(n-1)n}{2}} \Rightarrow (a_1 \cdot a_2 \ldots a_i \ldots a_n)^2 = a^{2n} \cdot r^{(n-1)\cdot n}$. On the other hand, $(a_1 \cdot a_n)^n = [a\cdot (a\cdot r^{n-1})]^n = [a^2 \cdot r^{n-1}]^n = a^{2n} \cdot r^{n(n-1)} = (a_1 \cdot a_2 \ldots a_i \ldots a_n)^2$; we are done. $\square$ \end{enumerate} \noindent {\bf Definition 7:} Let $a_1,a_2, \ldots, a_n$ be positive real numbers. The positive real number $\sqrt[n]{a_1a_2\ldots a_n}$ is called the {\bf geometric mean} of the numbers $a_1,a_2,\ldots, a_n$. We saw in Theorem 3 that if three real numbers $a,b,c$ are consecutive terms of an arithmetic progression, the middle term $b$ must be equal to the arithmetic mean of $a$ and $c$. The same is true for the geometric mean if the positive reals $a,b,c$ are consecutive terms in a geometric progression. We have the following theorem. \noindent {\bf Theorem 7:} If the positive real numbers $a,b,c$ are consecutive terms of a geometric progression, then the geometric mean of $a$ and $c$ must equal $b$. Also, any geometric progression containing $a,b,c$ as consecutive terms must have the same ratio $r$, namely $r = \frac{b}{a} = \frac{c}{b}$. Moreover, the condition $b^2 = ac$ is the necessary and sufficient condition for the three reals $a,b,c$ to be consecutive terms in a geometric progression. \noindent{\bf Proof:} If $a,b,c$ are consecutive terms in a geometric progression, then $b = ar$ and $c=b \cdot r$; and since both $a$ and $b$ are positive and thus nonzero, we must have $r = \frac{b}{a}=\frac{c}{b} \Rightarrow b^2 = ac \Rightarrow b = \sqrt{ac}$, which proves that $b$ is the geometric mean of $a$ and $c$. Conversely, if the condition $b^2 = ac$ is satisfied (which is equivalent to $b = \sqrt{ac}$, since $b$ is positive), then since $a$ and $b$ are positive and thus nonzero, we infer that $\frac{b}{a} = \frac{c}{b}$; thus if we set $r = \frac{b}{a} = \frac{c}{b}$, it is now clear that $a,b,c$ are consecutive terms of a geometric progression whose ratio is uniquely determined in terms of the given reals $a,b,c$, and any other geometric progression containing $a,b,c$ as consecutive terms must have the same ratio $r$. $\square$ \noindent\framebox{ \parbox[t]{5.0in}{For the theorem to follow we will need what is called Bernoulli's Inequality: for every real number $a \geq -1$, and every natural number $n$, \hspace{1.5in}$(a+1)^n \geq 1 + na.$}} \noindent Let $a \geq -1$; Bernoulli's Inequality can be easily proved by induction: clearly the statement holds true for $n=1$ since $1 + a \geq 1 + a$ (the equal sign holds). Assume the statement to hold true for some $n= k \geq 1 : (a+1)^k \geq 1 + ka$; since $a + 1 \geq 0$ we can multiply both sides of this inequality by $a + 1$ without affecting its orientation: $$ \begin{array}{rcl} (a+1)^{k+1} & \geq & (a+1)(1+ka) \Rightarrow \\ \\ (a+1)^{k+1} & \geq & a + ka^2 +1 +ka;\\ \\ (a+1)^{k+1} & \geq & 1+ (k+1)a + ka^2 \geq 1 + (k+1)a, \end{array} $$ \noindent since $ka^2 \geq 0$ (because $a^2 \geq 0$ and $k$ is a natural number). The induction process is complete. \noindent{\bf Theorem 8:} \begin{enumerate} \item[(i)] If $r > 1$ and $M$ is a real number, then there exists a natural number $N$ such that $r^n > M$, for every natural number $n \geq N$. 
For parts (ii), (iii), (iv) and (v), let $a_1 = a, a_2, \ldots , a_n, \ldots ,$ be an infinite geometric progression with first term $a$ and ratio $r$. \item[(ii)] Suppose $r > 1$ and $a > 0$. If $M$ is a real number, then there is a natural number $N$ such that $a_n > M$, for every natural number $n \geq N$. \item[(iii)] Suppose $r > 1$ and $a < 0$. If $M$ is a real number, then there is a natural number $N$ such that $a_n < M$, for every natural number $n \geq N$. \item[(iv)] Suppose $|r| < 1$, and $r \neq 0$. If $\epsilon > 0$ is a positive real number, then there is a natural number $N$ such that $|a_n| < \epsilon$, for every natural number $n \geq N$. \item[(v)] Suppose $|r|<1$ and let $S_n = a_1 + a_2 + \ldots + a_n$. If $\epsilon > 0$ is a positive real number, then there exists a natural number $N$ such that $\left| S_n - \frac{a}{1-r} \right| < \epsilon$, for every natural number $n \geq N$. \end{enumerate} \noindent {\bf Proof:} \begin{enumerate} \item[(i)] We can write $r = (r-1)+1$; let $a = r-1$, since $r > 1$, $a$ must be a positive real. According to the Bernoulli Inequality we have, $r^n=(a+1)^n \geq 1+na$; thus, in order to ensure that $r^n > M$, it is sufficient to have $1+na>M \Leftrightarrow na > M-1 \Leftrightarrow n > {\displaystyle \frac{M-1}{a}}$ (the last step is justified since $a > 0$). Now, if $\left[\hspace*{-.04in}\left[\,{\displaystyle \frac{|M-1|}{a}} \,\right]\hspace*{-.04in}\right]$ stands for the integer part of the positive real number ${\displaystyle \frac{|M-1|}{a}}$ we have by definition, $\left[\hspace*{-.045in}\left[ {\displaystyle \frac{|M-1|}{a}}\ \right]\hspace*{-.04in}\right]\ \leq \ {\displaystyle \frac{| M-1|}{a}} < \left[\hspace*{-.04in}\left[\,{\displaystyle \frac{|M-1|}{a}}\,\right]\hspace*{-.04in}\right]+ 1$. Thus, if we choose $N = \left[\hspace*{-.04in}\left[ {\displaystyle \frac{|M-1|}{a}}\right]\hspace*{-.045in}\right] +1$, it is clear that $N > {\displaystyle \frac{|M-1|}{a}} \geq {\displaystyle \frac{M-1}{a}}$ so that for every natural number $n \geq N$, we will have $n > {\displaystyle \frac{M-1}{a}}$, and subsequently we will have (since $a > 0$), $na > M-1 \mathbb{R}a na + 1 > M$. But $(1+a)^n \geq 1 + na$ (Bernoulli), so that $r^n = (1+a)^n \geq 1 + na > M$; $r^n > M$, for every $n \geq N$. We are done. $\square$ \item[(ii)] By part (i), there exists a natural number $N$ such that $r^n > \frac{M}{a} \cdot r$, for every natural number $n \geq N$ (apply part (i) with $\frac{M}{a}\cdot r$ replacing $M$). Since both $r$ and $a$ are positive, so is $\frac{a}{r}$; multiplying both sides of the above inequality by $\frac{a}{r}$ we obtain $\frac{a}{r} \cdot r^n > \frac{a}{r} \cdot \frac{M}{a} \cdot r \mathbb{R}a a \cdot r ^{n-1} > M$. But $a \cdot r^{n-1}$ is the $n$th term $a_n$ of the geometric progression. Hence $a_n > M$, for every natural number $n \geq N$. $\square$ \item[(iii)] Apply part (ii) to the opposite geometric progression: $-a_1,-a_2, \ldots$,\linebreak $ -a_n , \ldots $\ , where $a_n$ is the $n$th term of the original geometric progression (that has $a_1 = a < 0$ and $r>1$, it is also easy to see that the opposite sequence is itself a geometric progression with the same ratio $r > 1$ and opposite first term $-a$). According to part (ii) there exists a natural number $N$ such that $-a_n > -M$, for every natural number $n\geq N$. Thus $-(-a_n) < -(-M) \mathbb{R}a a_n < M$, for every $n \geq N$. $\square$ \item[(iv)] Since $|r| < 1$, assuming $r \neq 0$ it follows that $\frac{1}{|r|} > 1$. 
Let \linebreak $M = \frac{|a|}{\epsilon\cdot|r|}$. According to part (i), there exists a natural number $N$ such that $\left( \frac{1}{|r|}\right)^n > M = \frac{|a|}{\epsilon |r|}$, for every natural number $n \geq N$ (just apply part (i) with $r$ replaced by $\frac{1}{|r|}$ and $M$ replaced by $\frac{|a|}{\epsilon \cdot |r|}$). Thus $\frac{1}{|r|^n} > \frac{|a|}{\epsilon \cdot |r|}$; multiply both sides by $|r|^n \cdot \epsilon$ to obtain $\frac{|r|^n \cdot \epsilon}{|r|^n} > \frac{|a|\cdot |r|^n\cdot \epsilon}{\epsilon \cdot |r|} \Rightarrow |a|\cdot |r|^{n-1} < \epsilon $; but $|a|\cdot |r|^{n-1} = |ar^{n-1} | = |a_n|$, the absolute value of the $n$th term of the geometric progression; $|a_n| < \epsilon$, for every natural number $n \geq N$. Finally, if $r = 0$, then $a_n = 0$ for $n \geq 2$, and so $|a_n| = 0 < \epsilon$ for all $n \geq 2$. $\square$ \item[(v)] By Theorem 5(ii) we know that, $$ S_n = a_1 + a_2 + \ldots + a_n = a + ar + \ldots + ar^{n-1} = \frac{a(r^n-1)}{r-1} $$ \noindent We have $S_n - \frac{a}{1-r} = \frac{a(r^n-1)}{r-1} + \frac{a}{r-1} = \frac{ar^n-a+a}{r-1} = \frac{ar^n}{r-1}$. Consequently, $\left| S_n - \frac{a}{1-r}\right| = \left| \frac{ar^n}{r-1}\right| = |r|^n \cdot \left|\frac{a}{r-1}\right|$. Assume $r \neq 0$. Since $|r|<1$, we can apply the already proven part (iv), using the positive real number $\frac{\epsilon \cdot |r-1|}{|r|}$ in place of $\epsilon$: there exists a natural number $N$ such that $|a_n| < \frac{\epsilon\cdot|r-1|}{|r|}$, for every natural number $n \geq N$. But $a_n = a \cdot r^{n-1}$, so that, $$ |a| \cdot |r|^{n-1} < \frac{\epsilon \cdot |r-1|}{|r|} \Rightarrow $$ \noindent $\Rightarrow$ (multiplying both sides by $|r|$)\ \ $|a||r|^n < \epsilon \cdot |r-1| \Rightarrow$ \noindent $\Rightarrow$ (dividing both sides by $|r-1|$) \ \ $\frac{|a|\ |r|^n}{|r-1|} < \epsilon$. \noindent And since $\left| S_n - \frac{a}{1-r}\right| = |r|^n \cdot \left| \frac{a}{r-1} \right|$ we conclude that $\left| S_n - \frac{a}{1-r} \right| < \epsilon$. The proof will be complete by considering the case $r = 0$: if $r = 0$, then $a_n = 0$, for all $n \geq 2$. And thus $S_n = \frac{a(r^n-1)}{r-1} = \frac{-a}{-1} = a$, for all natural numbers $n$. Hence, $\left| S_n - \frac{a}{1-r} \right| = \left| a - \frac{a}{1}\right| = |a - a| = 0 < \epsilon$, for all natural numbers $n$. $\square$ \end{enumerate} \noindent{\bf Remark 8:} As the student familiar with calculus will recognize, part (iv) of Theorem 8 establishes the fact that the limit value of the sequence whose $n$th term is $a_n = a \cdot r^{n-1}$, under the assumption $|r|< 1$, is equal to zero. In the language of calculus, when $|r|< 1$, the geometric progression approaches zero. Also, part (v) establishes that the sequence of (partial) sums whose $n$th term is $S_n$ approaches the real number $\frac{a}{1-r}$, under the assumption $|r| < 1$. In the language of calculus we say that the infinite series $a + ar+ar^2 + \ldots + ar^{n-1} + \ldots $ converges to $\frac{a}{1-r}$. \section{\bf Mixed Progressions} The reader of this paper who has also studied calculus may have come across the sum, $$ 1 + 2x + 3x^2 + \ldots + (n+1)x^n. $$ \noindent There are $(n+1)$ terms in this sum; the $i$th term is equal to $i \cdot x^{i-1}$, where $i$ is a natural number between $1$ and $(n+1)$. 
Note that if $a_i = i \cdot x^{i-1},\ b_i = i$, and $c_i = x^{i-1}$, we have $a_i = b_i \cdot c_i$; what is more, $b_i$ is the $i$th term of an arithmetic progression (that has both first term and difference equal to $1$); and $c_i$ is the $i$th term of a geometric progression (with first term $c = 1$ and ratio $r = x$). Thus the term $a_i$ is the product of the $i$th term of an arithmetic progression with the $i$th term of a geometric progression; we then say that $a_i$ is the $i$th term of a mixed progression. We have the following definition. \noindent{\bf Definition 8:} Let $b_1, b_2, \ldots , b_n, \ldots$ be an arithmetic progression; and $c_1,c_2,$ \linebreak $\ldots , c_n, \ldots$ be a geometric progression. The sequence $a_1, a_2, \ldots, a_n, \ldots ,$ where $a_n = b_n \cdot c_n$, for every natural number $n$, is called {\bf a mixed progression}. (Of course, if both the arithmetic and geometric progressions are finite sequences with the same number of terms, then so is the mixed progression.) Back to our example. With a little bit of ingenuity, we can compute this sum; that is, find a closed form expression for it, in terms of $x$ and $n$. Indeed, we can write the given sum in the form, $$ \begin{array}{ll}\underset{(n+1)\ {\rm terms}}{\left(\underbrace{1 + x + x^2 + \ldots + x^{n-1} + x^n}\right)} + \underset{n\ {\rm terms}}{\left( \underbrace{x + x^2 + \ldots + x^{n-1}+x^n}\right)}\\ \\ +\underset{(n-1)\ {\rm terms}}{\left( \underbrace{x^2 + x^3 + \ldots + x^{n-1}+x^n}\right)}+ \ldots + \underset{2\ {\rm terms}}{\left(\underbrace{x^{n-1}+ x^n}\right)} + \underset{{\rm one\ term}}{\underbrace{x^n}}\end{array}.$$ \noindent In other words we have written the original sum $1 + 2x + 3x^2 + \ldots + (n+1)x^n$ as a sum of $(n+1)$ sums, each containing one term less than the previous one. Now according to Theorem 5(ii), $$ 1+x+x^2 + \ldots + x^{n-1} + x^n = \frac{x^{n+1}-1}{x-1}\ {\rm \left(\right.}{\rm assuming}\ x\neq 1\ {\rm \left. \right)}, $$ \noindent since this is the sum of the first $(n+1)$ terms of a geometric progression with first term $1$ and ratio $x$. Next, consider $$\begin{array}{rcl} x + x^2 + \ldots + x^{n-1} + x^n &=& (1+x+x^2+ \ldots + x^{n-1} + x^n) -1 \\ & = & \frac{x^{n+1}-1}{x-1} - \left( \frac{x -1}{x-1}\right)\end{array}. $$ \noindent Continuing this way we have, $$\begin{array}{rcl} x^2 + \ldots + x^{n-1} + x^n & = & (1+x+x^2 + \ldots + x^{n-1} + x^n) - (1 + x) \\ \\ & = & {\displaystyle \frac{x^{n+1}-1}{x-1}} - \left({\displaystyle \frac{x^2-1}{x-1}}\right) . \end{array} $$ \noindent On the $i$th level, $$ \begin{array}{rcl} x^i + \ldots + x^{n-1} + x^n & = & (1+x+\ldots +x^{i-1} + x^i+\ldots + x^{n-1} + x^n)\\ \\ & & -\ (1+x+\ldots + x^{i-1})\\ \\ & = & {\displaystyle \frac{x^{n+1}-1}{x-1}} - \left({\displaystyle \frac{x^i-1}{x-1}}\right) . \end{array} $$ Let us list all of these sums: $$ \begin{array}{crcl} (1) & 1 + x + x^2 + \ldots + x^{n-1} + x^n & = & \frac{x^{n+1}-1}{x-1} \\ \\ (2) & x+ x^2 + \ldots + x^{n-1} + x^n & = & \frac{x^{n+1}-1}{x-1} - \left( \frac{x-1}{x-1}\right)\\ \\ (3) & x^2 + \ldots + x^{n-1} + x^n & = & \frac{x^{n+1}-1}{x-1} - \left( \frac{x^2-1}{x-1}\right)\\ \vdots & & & \\ (i+1) & x^i + \ldots + x^{n-1}+x^n & = & \frac{x^{n+1}-1}{x-1} - \left( \frac{x^i -1}{x-1}\right)\\ \vdots & & & \\ (n) & x^{n-1}+x^n & = & \frac{x^{n+1}-1}{x-1} - \left( \frac{x^{n-1}-1}{x-1}\right)\\ \\ (n+1) & x^n & = & \frac{x^{n+1}-1}{x-1} - \left( \frac{x^n-1}{x-1}\right), \end{array} $$ \noindent with $x \neq 1$. 
If we add the $(n+1)$ equations or identities (they hold true for all reals except for $x = 1$), the sum of the $(n+1)$ left-hand sides is simply the original sum $1+2x+3x^2 + \ldots + nx^{n-1} + (n+1)x^n$. Thus, if we add up the $(n+1)$ equations member-wise we obtain, $$ \begin{array}{rl} & 1+2x+3x^2 + \ldots + nx^{n-1} + (n+1)x^n \\ = & (n+1) \cdot \left(\frac{x^{n+1}-1}{x-1}\right) + \frac{n-(x+x^2 + \ldots + x^i + \ldots + x^{n-1} + x^n)}{x-1} \\ = & (n+1) \cdot \left( \frac{x^{n+1}-1}{x-1}\right) + \frac{(n+1) - (1 + x + x^2 + \ldots + x^n)}{x-1}\\ \Rightarrow & 1 + 2x + 3x^2 + \ldots + nx^{n-1} + (n+1)x^n \\ = & (n+1)\cdot \left(\frac{x^{n+1}-1}{x-1}\right) +\frac{(n+1)-\left(\frac{x^{n+1}-1}{x-1}\right)}{x-1};\\ & 1 + 2x +3x^2 + \ldots + nx^{n-1} + (n+1)x^n \\ = & (n+1) \cdot \left( \frac{x^{n+1}-1}{x-1} \right) + \frac{(n+1)(x-1)-(x^{n+1}-1)}{(x-1)^2};\\ & 1 + 2x+3x^2 + \ldots + nx^{n-1}+(n+1)x^n\\ = &\frac{(n+1)(x^{n+1}-1) (x-1) + (n+1)(x-1) - (x^{n+1}-1)}{(x-1)^2};\\ & 1 + 2x + 3x^2 + \ldots + nx^{n-1}+ (n+1)x^n\\ = & \frac{(n+1)(x-1)\cdot [(x^{n+1}-1)+1] - (x^{n+1}-1)}{(x-1)^2};\\ & 1 + 2x + 3x^2 + \ldots + nx^{n-1}+ (n+1)x^n \\ = & \frac{(n+1) (x-1) \cdot x^{n+1}- (x^{n+1}-1)}{(x-1)^2};\\ & 1 + 2x + 3x^2 + \ldots + nx^{n-1}+ (n+1)x^n\\ = & \frac{(n+1)x^{n+2}-(n+1)x^{n+1}-x^{n+1} +1 }{(x-1)^2};\\ & 1 + 2x + 3x^2 + \ldots + nx^{n-1}+ (n+1)x^n\\ = & \fbox{$\frac{(n+1)x^{n+2}-(n+2)x^{n+1} +1}{(x-1)^2}$} \end{array} $$ \noindent for every natural number $n$. For $x = 1$, the above derived formula is not valid. However, for $x = 1$: $1 + 2x + 3x^2 + \ldots + nx^{n-1}+(n+1)x^n = 1 + 2 + 3 + \ldots + n + (n+1) = \frac{(n+1)(n+2)}{2}$ (the sum of the first $(n+1)$ terms of an arithmetic progression with first term $a_1 = 1$ and difference $d = 1$). The following theorem gives a formula for the sum of the first $n$ terms of a mixed progression. \noindent{\bf Theorem 9:} Let $b_1,b_2, \ldots ,b_n , \ldots$ , be an arithmetic progression with first term $b_1$ and difference $d$; and $c_1,c_2, \ldots , c_n, \ldots $ , be a geometric progression with first term $c_1 = c$ and ratio $r \neq 1$. Let $a_1, a_2,\ldots , a_n , \ldots$ , be the corresponding mixed progression, that is, the sequence whose $n$th term $a_n$ is given by $a_n = b_n \cdot c_n$, for every natural number $n$. \begin{enumerate} \item[(i)] $a_n = \left[ b_1 + (n-1) \cdot d\right] \cdot c \cdot r^{n-1}$, for every natural number $n$. \item[(ii)] For every natural number $n$, $a_{n+1} - r\cdot a_n = d \cdot c_{n+1}$. \item[(iii)] If $S_n = a_1 + a_2 + \ldots + a_n$ (sum of the first $n$ terms of the mixed progression), then $$ \begin{array}{rcl} S_n & = & \frac{a_n \cdot r - a_1}{r-1} + \frac{d\cdot r \cdot c \cdot (1-r^{n-1})}{(r-1)^2};\\ \\ S_n & = & \frac{a_n \cdot r - a_1}{r-1} + \frac{d\cdot r \cdot (c - c_n)}{(r-1)^2} \end{array} $$ \noindent (recall $c_n = c\cdot r ^{n-1}$). \end{enumerate} \noindent{\bf Proof:} \begin{enumerate} \item[(i)] This is immediate, since by Theorem 1(i), $b_n = b_1 + (n-1)\cdot d$ and by Theorem 5(i), $c_n = c\cdot r^{n-1}$, and so $a_n = b_n \cdot c_n = [ b_1 + (n-1)d]\cdot c \cdot r ^{n-1}$. \item[(ii)] We have $a_{n+1} = b_{n+1} \cdot c_{n+1},\ a_n = b_n c_n,\ b_{n+1} = d + b_n$. 
Thus, $a_{n+1} - r \cdot a_n = c_{n+1} \cdot (d + b_n) - r\cdot b_n \cdot c_n = d\cdot c_{n+1} + c_{n+1}b_n - rb_n c_n = d \cdot c_{n+1} + b_n \cdot \underset{0}{(\underbrace{c_{n+1} -rc_n})}= dc_{n+1}$, since $c_{n+1}= rc_n$ by virtue of the fact that $c_n$ and $c_{n+1}$ are consecutive terms of a geometric progression with ratio $r$. End of proof. $\square$ \item[(iii)] We proceed by mathematical induction. The statement is true for $n = 1$ because $S_1 = a_1$ and $\frac{a_1r-a_i}{r-1} + \frac{d\cdot r \cdot (c-c_1)}{(r-1)^2} = \frac{a_1(r-1)}{r-1} + 0 = a_1 = S_1$. Assume the statement to hold for $n=k$: (for some natural number $k \geq 1; \ S_k = \frac{a_k \cdot r - a_1}{r-1} + \frac{d\cdot r \cdot (c- c_k)}{(r-1)^2}$. We have $S_{k+1} = S_k + a_{k+1} = \frac{a_k \cdot r-a_1}{r-1} + \frac{d \cdot r \cdot (c - c_k)}{(r-1)^2} + a_{k+1} = \frac{a_k \cdot r-a_1 + a_{k+1} \cdot r - a_{k+1}}{r-1} + \frac{d \cdot r \cdot (c - c_k)}{(r-1)^2} (1)$. But by part (ii) we know that $a_{k+1} - ra_k = d\cdot c_{k+1}$. Thus, by (1) we now have, $$ \begin{array}{lrcl} & S_{k+1} & = & {\displaystyle \frac{a_{k+1}\cdot r - a_1}{r-1} - \frac{d\cdot c_{k+1}}{r-1} + \frac{d\cdot r \cdot (c - c_k)}{(r-1)^2}}\\ \mathbb{R}a & S_{k+1}& =& {\displaystyle \frac{a_{k+1} \cdot r - a_1}{r-1} + \frac{-(r-1)\cdot d \cdot c_{k+1} + d \cdot r\cdot (c-c_k)}{(r-1)^2}};\\ & S_{k+1} & = & {\displaystyle \frac{a_{k+1}\cdot r-a_1}{r-1} + \frac{d\cdot r\cdot (c - c_{k+1}) + d\cdot \overset{0}{(\overbrace{c_{k+1}- r \cdot c_k})}}{(r-1)^2}}. \end{array} $$ \noindent But $c_{k+1} - r\cdot c_k = 0$ (since $c_{k+1} = r \cdot c_k$) because $c_k$ and $c_{k+1}$ consecutive terms of a geometric progression with ratio $r$. Hence, we obtain $S_{k+1} = \frac{a_{k+1}\cdot r-a_1}{r-1} + \frac{d \cdot r \cdot (c - c_{k+1})}{(r-1)^2}$; the induction is complete. \end{enumerate} The example with which we opened this section is one of a mixed progression. We dealt with the sum $1 + 2x+3x^2 + \ldots + nx^{n-1} + (n+1)x^n$. This is the sum of the first $(n+1)$ terms of a mixed progression whose $n$th term is $a_n=n\cdot x^{n-1}$; in the notation of Theorem 9, $b_n = n,\ d=1,\ c_n = x^{n-1}$, and $r = x$ (we assume $x \neq 1$). According to Theorem 9(iii) $$ \begin{array}{rcl} S_n & = & 1 + 2x+3x^2 + \ldots + nx^{n-1} = \frac{(nx^{n-1}) \cdot x - 1}{x-1} + \frac{x\cdot(1 - x^{n-1})}{(x-1)^2} \\ \\ & = & \frac{nx^n-1}{x-1} + \frac{x-x^n}{(x-1)^2} = \frac{(nx^n -1)(x-1)}{(x-1)^2} + \frac{x-x^n}{(x-1)^2 }\\ \\ & = & \frac{nx^{n+1}-nx^n - x+1 + x - x^n}{(x-1)^2} = \frac{nx^{n+1} - (n+1) x^n + 1}{(x-1)^2}; \end{array} $$ \noindent Thus, if we replace $n$ by $(n+1)$ we obtain, $S_{n+1} = 1 + 2x + 3x^2 + \ldots + nx^{n-1} +(n+1)x^n = \frac{(n+1)x^{n+2}-(n+2)x^{n+1}+1}{(x-1)^2}$, and this is the formula we obtained earlier. \noindent{\bf Definition 9:} Let $a_1, \ldots , a_n$ be nonzero real numbers. The real number $\frac{n}{\frac{1}{a_1} + \ldots + \frac{1}{a_n}}$, is called the {\bf harmonic mean} of the real numbers $a_1, \ldots, a_n$. \noindent {\bf Remark 7:} Note that since $\frac{n}{\frac{1}{a_1} + \ldots + \frac{1}{a_n}} = \frac{1}{(\frac{1}{a_1} + \ldots + \frac{1}{a_n})/n}$, the harmonic mean of the reals $a_1, \ldots , a_n$, is really the reciprocal of the mean of the reciprocal real numbers $\frac{1}{a_1}, \ldots , \frac{1}{a_n}$. We close this section by establishing an interesting, significant and deep inequality, that has many applications in mathematics and has been used to prove a number of other theorems. 
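To make the three means concrete before stating the inequality, here is a small Python sketch (the list of positive reals is an arbitrary sample); the reader is invited to repeat, with other data, the experiment suggested below:
\begin{verbatim}
# Arithmetic, geometric, and harmonic means of a sample list of positive reals.
import math

data = [2.0, 3.0, 7.0, 9.0]                 # arbitrary sample of positive reals
n = len(data)
am = sum(data) / n                          # arithmetic mean
gm = math.prod(data) ** (1 / n)             # geometric mean
hm = n / sum(1 / x for x in data)           # harmonic mean
print(am, gm, hm)                           # observe that am >= gm >= hm
\end{verbatim}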
Given $n$ positive real numbers $a_1, \ldots , a_n$, one can always associate with the given set $\{ a_1, \ldots , a_n \}$ three positive reals: the arithmetic mean denoted by A.M., the geometric mean denoted by G.M., and the harmonic mean H.M. The arithmetic-geometric-harmonic mean inequality asserts that A.M. $\geq$ G.M. $\geq$ H.M. (To the reader: Do an experiment; pick a set of three positive reals; then a set of four positive reals; for each set compute the A.M., G.M., and H.M. values; you will see that the inequality holds; if you are in disbelief do it again with another sample of positive real numbers.) The proof we will offer for the arithmetic-geometric-harmonic inequality is indeed short. To do so, we need a preliminary result: we have already proved (in the proof of Theorem 5(ii)) the identity $r^n -1 = (r-1)(r^{n-1}+ r^{n-2}+ \ldots + r+1)$, which holds true for all real numbers $r$ and all natural numbers $n$. Moreover, if $r \neq 1$, we have $$ \frac{r^n-1}{r-1} = r^{n-1}+ r^{n-2} + \ldots + r + 1 $$ \noindent If we set $ r = \frac{b}{a}$, with $b \neq a$ and $a \neq 0$, in the above equation and we multiply both sides by $a^{n-1}$ we obtain, $$ \frac{b^n - a^n}{b-a} = b^{n-1} + b^{n-2} \cdot a + b^{n-3} \cdot a^2 + \ldots + b^2 \cdot a^{n-3} + b \cdot a^{n-2} + a^{n-1} $$ \noindent Now, if $b > a > 0$ and in the above equation we replace $b$ by $a$, the resulting right-hand side will be smaller. In other words, in view of $b > a > 0$ we have, \ \hspace*{-.25in}$\begin{array}{c} (1)\\ (2)\\ (3)\\ \vdots\\ (n-2)\\ (n-1)\\ (n) \end{array} \left\{ \begin{array}{l} b^{n-1} > a^{n-1}\\ b^{n-2}\cdot a > a^{n-2}\cdot a^1 = a^{n-1}\\ b^{n-3} \cdot a^2 > a^{n-3}\cdot a^2 = a^{n-1}\\ \vdots \\ b^2 \cdot a^{n-3} > a^2 \cdot a^{n-3} = a^{n-1}\\ b \cdot a^{n-2} > a \cdot a ^{n-2} = a^{n-1}\\ a^{n-1} = a^{n-1}\end{array}\right\} \begin{array}{ll}\Rightarrow&{\rm add\ memberwise}\\ \\ & b^{n-1} + b^{n-2} \cdot a+b^{n-3} \cdot a^2 + \ldots \\ +& b^2a^{n-3}+b\cdot a^{n-2} + a^{n-1}\\ >& n\cdot a^{n-1}\end{array}$ \noindent Hence, the identity above, for $b > a > 0$, implies the inequality $\frac{b^n-a^n}{b-a}>na^{n-1}$; multiplying both sides by $b-a > 0$ we arrive at $$ \begin{array}{rl} & b^n-a^n > (b-a)na^{n-1}\\ \\ \Rightarrow & b^n > nba^{n-1} - na^n + a^n;\\ \\ & b^n > nba^{n-1} - (n-1)a^n. \end{array} $$ Finally, by replacing $n$ by $(n+1)$ in the last inequality we obtain, \fbox{$b^{n+1} > (n+1)ba^n - na^{n+1}$, \begin{tabular}{l}for every natural number $n$ and any\\ real numbers such that $b>a>0$\end{tabular}} We are now ready to prove the last theorem of this section. \noindent {\bf Theorem 10:} Let $n$ be a natural number and $a_1 , \ldots , a_n$ positive real numbers. Then, $$\begin{array}{rcccl} \underset{{\rm A.M.}}{\underbrace{\frac{a_1+\ldots + a_n}{n}}} & \geq & \underset{{\rm G.M.}}{\underbrace{\sqrt[n]{a_1\ldots a_n}}} & \geq & \underset{{\rm H.M.}}{\underbrace{\frac{n}{\frac{1}{a_1} +\frac{1}{a_2}+\ldots +\frac{1}{a_n}}}} \end{array} $$ \noindent{\bf Proof:} Before we proceed with the proof, we mention here that if one equal sign holds then the other must also hold, and that can only happen when all $n$ numbers $a_1 , \ldots , a_n$ are equal. We will not prove this here, but the reader may want to verify this in the cases $n=2$ and $n=3$. We will proceed by using mathematical induction to first prove that $\frac{a_1 + \ldots + a_n}{n} \geq \sqrt[n]{a_1 \ldots a_n}$, for every natural number $n$ and all positive reals $a_1, \ldots , a_n$. 
Even though this trivially holds true for $n=1$, we will use as our starting or base value, $n =2$. So we first prove that $\frac{a_1 + a_2}{2} \geq \sqrt{a_1a_2}$ holds true for any two positive reals. Since $a_1$ and $a_2$ are both positive, the square roots $\sqrt{a_1}$ and $\sqrt{a_2}$ are both positive real numbers and $a_1 = (\sqrt{a_1})^2,\ a_2 = (\sqrt{a_2})^2$. Clearly, $$\begin{array}{rl} & (\sqrt{a_1}- \sqrt{a_2})^2 \geq 0 \\ \\ \mathbb{R}a & (\sqrt{a_1})^2 - 2(\sqrt{a_1})(\sqrt{a_2}) + ( \sqrt{a_2})^2 \geq 0\\ \\ \mathbb{R}a & a_1 - 2\sqrt{a_1a_2} + a_2 \geq 0 \\ \\ \mathbb{R}a & a_1 + a_2 \geq 2 \cdot \sqrt{a_1a_2}\\ \\ \mathbb{R}a & \frac{a_1+a_2}{2} \geq \sqrt{a_1a_2}, \end{array} $$ \noindent so the statement holds true for $n = 2$. \noindent{\bf The Inductive Step:} Assume the statement to hold true for some natural number $n = k \geq 2$; and show that this assumption implies that the statement must also hold true for $n = k +1$. So assume, $$ \begin{array}{rlll}& \frac{a_1+\ldots + a_k}{k} & \geq & \sqrt[k]{a_1 \ldots a_k} \\ \\ \mathbb{R}a & a_1 + \ldots + a_k & \geq & k \cdot \sqrt[k]{a_1 \ldots a_k} \end{array} $$ Now we apply the inequality we proved earlier: $$b^{k+1} > (k+1) \cdot b \cdot a^k - k\cdot a^{k+1}; $$ \noindent If we take $b = \sqrt[k+1]{a_{k+1}}$, where $a_{k+1}$ is a positive real and $a = \sqrt[k(k+1)]{a_1 \ldots a_k}$ we now have, $$ \begin{array}{rcl}\left( \sqrt[k+1]{a_{k+1}} \right)^{k+1}& > &(k+1) \cdot \sqrt[k+1]{a_{k+1}} \cdot \left( \sqrt[k(k+1)]{a_1 \ldots a_k} \right)^k - k \cdot \left( \sqrt[k(k+1)]{a_1 \ldots a_k}\right)^{k+1}\\ \\ & \mathbb{R}a & a_{k+1} > (k+1)\cdot \sqrt[k+1]{a_{k+1}} \cdot \sqrt[k+1]{a_1 \ldots a_k} - k \cdot \sqrt[k]{a_1 \ldots a_k}\\ \\ & \mathbb{R}a & a_{k+1} + k \cdot \sqrt[k]{a_1 \ldots a_k} > (k+1)\cdot\sqrt[k+1]{a_1 \ldots a_k \cdot a_{k+1}} \end{array} $$ \noindent But from the inductive step we know that $a_1 + \ldots + a_k \geq k\cdot \sqrt[k]{a_1 \ldots a_k}$; hence we have, $$\begin{array}{rcl}a_{k+1} + (a_1 + \ldots + a_k ) & \geq & a_{k+1} + k\cdot \sqrt[k]{a_1 \ldots a_k} \geq (k+1)\cdot \sqrt[k+1]{a_1 \ldots a_k \cdot a_{k+1}}\\ \\ & \mathbb{R}a & a_1 + \ldots + a_k + a_{k+1} \geq (k+1) \sqrt[k+1]{a_1 \ldots a_k \cdot a_{k+1}}, \end{array} $$ \noindent and the induction is complete. Now that we have established the arithmetic-geometric mean inequality, we prove the geometric-harmonic inequality. Indeed, if $n$ is a natural number and $a_1 , \ldots , a_n$ are positive reals, then so are the real numbers $\frac{1}{a_1} , \ldots , \frac{1}{a_n}$. By applying the already proven arithmetic-geometric mean inequality we infer that, $$ \frac{\frac{1}{a_1} + \ldots + \frac{1}{a_n}}{n} \geq \sqrt[n]{\frac{1}{a_1} \ldots \frac{1}{a_n}} $$ \noindent Multiplying both sides by the product $\left( \frac{n}{\frac{1}{a_1} + \ldots + \frac{1}{a_n}}\right) \cdot \sqrt[n]{a_1 \ldots a_n}$, we arrive at the desired result: $$ \sqrt[n]{a_1 \ldots a_n} \geq \frac{n}{\frac{1}{a_1} +\ldots + \frac{1}{a_n}}. $$ \noindent This concludes the proof of the theorem. $\square$ \section{A collection of 21 problems} \begin{enumerate} \item[P1.] Determine the difference of each arithmetic progression whose first term is $\frac{1}{5}$; and with subsequent terms (but not necessarily consecutive) the rational numbers $\frac{1}{4},\ \frac{1}{3},\ \frac{1}{2}$. \noindent {\bf Solution:} Let $k,m,n$ be natural numbers with $k < m < n$ such that $a_k = \frac{1}{4},\ a_m = \frac{1}{3},$ and $a_n = \frac{1}{2}$. 
And, of course, $a_1 = \frac{1}{5}$ is the first term; $a_1 = \frac{1}{5} , \ldots , a_k = \frac{1}{4}, \ldots , a_m = \frac{1}{3}, \ldots , a_n = \frac{1}{2} , \ldots$ . By Theorem 1(i) we must have, $ \left. \begin{array}{l} \frac{1}{4} = a_k = \frac{1}{5} + (k-1) d\\ \\ \frac{1}{3} = a_m = \frac{1}{5} + (m-1) d\\ \\ \frac{1}{2} = a_n = \frac{1}{5} + (n-1)d \end{array} \right\}$; \begin{tabular}{l} where $d$ is the difference \\ of the arithmetic progression. \end{tabular} \noindent Obviously, $d \neq 0$; the three equations yield, $\left.\begin{array}{l} (k-1) d = \frac{1}{4} - \frac{1}{5} = \frac{1}{20}\\ \\ (m-1)d = \frac{1}{3} - \frac{1}{5} = \frac{2}{15}\\ \\ (n-1) d = \frac{1}{2} - \frac{1}{5} = \frac{3}{10}\end{array} \right\}$\ \begin{tabular}{ll} (1) & Also, it is clear that $1 < k$;\\ \\ (2) & so that $ 1 < k < m < n$.\\ \\ (3) & \end{tabular} \noindent Dividing (1) with (2) member-wise gives $\frac{k-1}{m-1} = \frac{3}{8},\ \mathbb{R}a 8(k-1) = 3(m-1)$ (4) \noindent Dividing (2) with (3) member-wise implies $\frac{m-1}{n-1} = \frac{4}{9} \mathbb{R}a 9(m-1) = 4(n-1)$ (5) \noindent Dividing (1) with (3) member-wise produces $\frac{k-1}{n-1} = \frac{1}{6} \mathbb{R}a 6(k-1) = n-1$ (6) According to Equation (4), 3 must be a divisor of $k-1$ and $8$ must be a divisor of $m-1$; if we put $k-1 = 3t;\ k=3t +1$, where $t$ is a natural number (since $k>1$), then (4) implies $8t = m-1 \mathbb{R}a m = 8t +1$. Going to equation (5) and substituting for $m-1 = 8t$, we obtain, $$18t = n-1 \mathbb{R}a n= 18t + 1.$$ Checking equation (6) we see that $6(3t) = 18t$, which is indeed true for every value of $t$. In conclusion we have the following formulas for $k,\ m,$ and $n$: $$ k=3t+1,\ m = 8t+1,\ n = 18t + 1;\ t \in {\mathbb N}; \ t = 1,2,\ldots $$ \noindent We can now calculate $d$ in terms of $t$ from any of the equations (1), (2), or (3): \noindent From (1), $(k-1)d = \frac{1}{20} \mathbb{R}a 3t\cdot d = \frac{1}{20} \mathbb{R}a$ \fbox{$d = \frac{1}{60t}$.} We see that this problem has infinitely many solutions: there are infinitely many (infinite) arithmetic progressions that satisfy the conditions of the problem. For each positive integer value of $t$, a new such arithmetic progression is determined. For example, for $t = 1$ we have $d=\frac{1}{60},\ k = 4,\ m = 9,\ n = 19$. We have the progression, $$ a_1 = \frac{1}{5},\ldots , a_4 = \frac{1}{4},\ldots \ldots , a_9 = \frac{1}{3}, \ldots \ldots , a_{19} = \frac{1}{2}, \ldots $$ \item[P2.] Determine the arithmetic progressions (by finding the first term $a_1$ and difference $d$) whose first term is $a_1 = 5$, whose difference $d$ is an integer, and which contain the numbers $57$ and $113$ among their terms. \noindent {\bf Solution:} We have $a_1 = 5, \ a_m= 57,\ a_n = 113$ for some natural numbers $m$ and $n$ with $1 < m < n$. We have $57 = 5 + (m-1)d$ and $113 = 5 + (n-1)d$; $(m-1)d = 52$ and $(n-1)d = 108$; the last two conditions say that $d$ is a (positive) common divisor of $52$ and $108$; thus \fbox{$d = 1,2,\ {\rm or}\ 4$} are the only possible values. A quick computation shows that for $d = 1$, we have $m=53$, and $n = 109$; for $d = 2$, we have $m = 27$ and $n = 55$; and for $d = 4, \ m = 14$ and $n = 28$. In conclusion there are exactly three arithmetic progressions satisfying the conditions of this exercise; they have first term $a_1 = 5$ and their differences $d$ are $d = 1,2,$ and $4$ respectively.
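The conclusions of P1 and P2 are easy to confirm computationally. The following minimal Python sketch (using exact rational arithmetic) checks, for the first few values of $t$, that the progression with $a_1 = \frac{1}{5}$ and $d = \frac{1}{60t}$ takes the values $\frac{1}{4}$, $\frac{1}{3}$, $\frac{1}{2}$ at the indices $3t+1$, $8t+1$, $18t+1$, and recovers the admissible differences of P2 as the common divisors of $52$ and $108$.

\begin{verbatim}
# Verification sketch for P1 and P2 (exact arithmetic via fractions).
from fractions import Fraction as F

# P1: with d = 1/(60t), the terms of index 3t+1, 8t+1, 18t+1
# equal 1/4, 1/3, 1/2 respectively.
for t in range(1, 6):
    d = F(1, 60 * t)
    a1 = F(1, 5)
    assert a1 + 3 * t * d == F(1, 4)      # a_{3t+1} = 1/4
    assert a1 + 8 * t * d == F(1, 3)      # a_{8t+1} = 1/3
    assert a1 + 18 * t * d == F(1, 2)     # a_{18t+1} = 1/2

# P2: the integer differences d for which 57 and 113 occur as terms
# are exactly the common divisors of 52 and 108.
found = [d for d in range(1, 109) if 52 % d == 0 and 108 % d == 0]
print(found)                              # [1, 2, 4]
\end{verbatim}
\item[P3.]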
Find the sum of all three-digit natural numbers $k$ which are such that the remainder of the division of $k$ by $18$ and the remainder of the division of $k$ by $30$ are both equal to $7$. \noindent{\bf Solution:} Any natural number divisible by both $18$ and $30$, must be divisible by their least common multiple which is $90$. Thus if $k$ is any natural number satisfying the condition of the exercise, then the number $k - 7$ must be divisible by both $18$ and $30$ and therefore $k-7$ must be divisible by $90$; so that $k - 7 = 90t$, for some nonnegative integer $t$; thus the three-digit numbers of the form $k = 90t+7$ are precisely the numbers we seek to find. These numbers are terms in an infinite arithmetic progression whose first term is $a_1 =7$ and whose difference is $d = 90:\ a_1 = 7,\ a_2 = 7+90,\ a_3 = 7 + 2 \cdot (90), \ldots , a_{t+1} = 7 + 90t,\ldots$ . A quick check shows that the first such three-digit number in the above arithmetic progression is $a_3 = 7 + 90(2) = 187$ (obtained by setting $t = 2$) and the last such three-digit number in the above progression is $a_{12} = 7 + 90(11) = 997$ (obtained by putting $t = 11$ in the formula $a_{t+1} = 7 + 90t$). Thus, we seek to find the sum, $a_3 + a_4 + \ldots + a_{11} + a_{12}$. We can use either of the two formulas developed in Example 2 (after Example 1, which in turn is located below the proof of Theorem 2). Since we know the first and last terms of the sum at hand, namely $a_3$ and $a_{12}$, it is easier to use the first formula in Example 2: $$ \begin{array}{rcl}a_m + a_{m+1} + \ldots + a_{n-1} + a_n & = & \frac{(n-m+1)(a_m+a_n)}{2} \end{array}$$ \noindent In our case $m = 3,\ n = 12, \ a_m = a_3 = 187$, and $a_n = a_{12} = 997$. Thus $$\begin{array}{rcl} a_3 +a_4 + \ldots + a_{11} + a_{12} & = & \frac{(12-3+1)\cdot(187+997)}{2}\\ \\ & = & \frac{10}{2} \cdot (1184) = 5 \cdot (1184) = 5920.\end{array} $$ \item[P4.] Let $a_1, a_2, \ldots , a_n,\ldots$, be an arithmetic progression with first term $a_1$ and positive difference $d$; and $M$ a natural number, such that $a_1 \leq M$. Show that the number of terms of the arithmetic progression that do not exceed $M$, is equal to $\left[\!\left[ \frac{M-a_1}{d}\right]\!\right] + 1$, where $\left[\!\left[ \frac{M-a_1}{d} \right]\!\right]$ stands for the integer part of the real number $\frac{M-a_1}{d}$. \noindent{\bf Solution:} If, among the terms of the arithmetic progression, $a_n$ is the largest term which does not exceed $M$, then $a_n \leq M$ and $a_{\ell} > M$, for every natural number $\ell$ greater than $n$; $\ell = n+1,n+2,\ldots$ . But $a_n = a_1 + (n-1)d$; so that $a_1 + (n-1)d\leq M\mathbb{R}a (n-1)d \leq M - a_1 \mathbb{R}a n-1 \leq \frac{M-a_1}{d}$ since $d > 0$. Since, by definition, $\left[\!\left[ \frac{M-a_1}{d} \right]\!\right]$ is the greatest integer not exceeding $\frac{M-a_1}{d}$ and since $n-1$ does not exceed $\frac{M-a_1}{d}$, we conclude that $n - 1 \leq \left[\!\left[ \frac{M-a_1}{d} \right]\!\right] \mathbb{R}a n \leq \left[\!\left[ \frac{M-a_1}{d} \right]\!\right] +1$. But $n$ is a natural number, that is, a positive integer, and so is the integer $N = \left[\!\left[ \frac{M-a_1}{d} \right]\!\right] +1$. Since $a_n$ was assumed to be the largest term such that $a_n \leq M$, it follows that $n$ must equal $N$; because the term $a_N$ is actually the largest term not exceeding $M$ (note that if $n < N$, then $a_n < a_N$, since the progression is increasing in view of the fact that $d > 0$).
Indeed, if $N = \left[\!\left[ \frac{M-a_1}{d}\right]\!\right] + 1$, then by the definition of the integer part of a real number we must have $N - 1 \leq \frac{M-a_1}{d}<N$. Multiplying by $d > 0$ yields $d(N-1) \leq M - a_1 \mathbb{R}a a_1 + d(N-1) \leq M \mathbb{R}a a_N \leq M$. In conclusion we see that the terms $a_1, \ldots , a_N$ are precisely the terms not exceeding $M$; therefore there are exactly $\left[\!\left[ \frac{M-a_1}{d} \right]\!\right] + 1$ terms not exceeding $M$. \item[P5.] Apply the previous problem P4 to find the value of the sum of all natural numbers $k$ not exceeding $1,000$, and which are such that the remainder of the division of $k^2$ with $17$ is equal to $9$. \noindent{\bf Solution:} First, we divide those numbers $k$ into two disjoint classes or groups. If $q$ is the quotient of the division of $k^2$ with $17$, and with remainder $9$, we must have, $$ k^2 = 17q + 9 \Leftrightarrow (k-3)(k+3) = 17q, $$ \noindent but $17$ is a prime number and as such it must divide at least one of the two factors $k-3$ and $k+3$; but it cannot divide both. Why? Because the greatest common divisor of $k-3$ and $k+3$ must divide their difference, which is $6$; so it can only be equal to $1,2,3$, or $6$, and in particular $17$ cannot divide both factors. Thus, we must have either $k - 3 = 17n$ or $k+3 = 17m$; either $k = 17n+3$ or $$\begin{array}{rcl} k=17 m-3 & = & 17(m-1) + 14\\ & = & 17 \cdot \ell + 14 \end{array} $$ \noindent (here we have set $m-1 = \ell$). The number $n$ is a nonnegative integer and the number $\ell$ is also a nonnegative integer. So the two disjoint classes of the natural numbers $k$ are, $$ \begin{array}{rrcl}& k & = & 3, 20, 37, 54, \ldots\\ \\ {\rm and} & k & = & 14, 31, 48, 65, \ldots \end{array} $$ \noindent Next, we find how many numbers $k$ in each class do not exceed $M = 1,000$. Here, we are dealing with two arithmetic progressions: the first being $3,20,37,54,\ldots ,$ having first term $a_1 = 3$ and difference $d = 17$. The second arithmetic progression has first term $b_1 = 14 $ and the same difference $d = 17$. According to the previous practice problem, P4, there are exactly $N_1 = \left[\!\left[ \frac{M-a_1}{d} \right]\!\right] + 1 = \left[\!\left[ \frac{1000 - 3}{17}\right]\!\right] + 1 = \left[\!\left[ \frac{997}{17} \right]\!\right] + 1 = 58 + 1 = 59$ terms of the first arithmetic progression not exceeding $1000$ (recall that $\left[\!\left[ \frac{997}{17}\right]\!\right]$ is really none other than the quotient of the division of $997$ with $17$). Again, applying problem P4 to the second arithmetic progression, we see that there are $N_2 = \left[\!\left[ \frac{M-b_1}{d} \right]\!\right] + 1 = \left[\!\left[ \frac{1000-14}{17} \right]\!\right] + 1 = \left[\!\left[ \frac{986}{17} \right]\!\right] + 1 = 58+1 = 59$ terms of the second arithmetic progression not exceeding $1000$.
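Before adding up the two classes, the counts $N_1 = N_2 = 59$ can be cross-checked by brute force; a minimal Python sketch:

\begin{verbatim}
# Brute-force cross-check of the counts in P5:
# how many k <= 1000 satisfy k^2 = 9 (mod 17), split by residue class.
first  = [k for k in range(1, 1001) if k % 17 == 3]    # k = 3, 20, 37, ...
second = [k for k in range(1, 1001) if k % 17 == 14]   # k = 14, 31, 48, ...
assert all(k * k % 17 == 9 for k in first + second)
assert len([k for k in range(1, 1001) if k * k % 17 == 9]) == len(first) + len(second)
print(len(first), len(second))                         # 59 59
\end{verbatim}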
\noindent Finally, we must find the two sums: $$ \begin{array}{rcl} S_{N_1} & = & a_1 + \ldots + a_{N_1} = \frac{N_1 \cdot (a_1 + a_{N_1})}{2} = \frac{N_1 \cdot \left[ 2a_1 + (N_1 -1)d\right]}{2} \\ \\ & = & \frac{59\cdot \left[2(3) + (59-1) \cdot 17 \right]}{2} = \frac{59 \cdot \left[ 6 + (58)(17)\right]}{2} \end{array} $$ \noindent and $$ \begin{array}{rcl} S_{N_2} & = & b_1 + \ldots + b_{N_2} = \frac{N_2 \cdot \left[ 2b_1 + (N_2 - 1)d\right]}{2} \\ \\ & = & \frac{59 \cdot \left[2(14)+ (59-1)17\right]}{2} = \frac{59 \cdot \left[ 28 + (58)(17)\right]}{2} \end{array} $$ \noindent Hence, $$ \begin{array}{rcl} S_{N_1} + S_{N_2} & = & \frac{59\cdot \left[ 6 + 28 + 2(58)(17)\right]}{2}\\ \\ & = & \frac{59\left[34 + 1972\right]}{2} = \frac{59 \cdot (2006)}{2} = 59 \cdot (1003) = 59,177. \end{array} $$ \item[P6.] If $S_n,\ S_{2n},\ S_{3n}$, are the sums of the first $n,\ 2n,\ 3n$ terms of an arithmetic progression, find the relation or equation between the three sums. \noindent{\bf Solution:} We have $S_n = \frac{n\cdot \left[ 2a_1 + (n-1)d\right]}{2}$, $S_{2n}= \frac{2n\cdot \left[ 2a_1 + (2n-1)d\right]}{2}$, \linebreak and $S_{3n} = \frac{3n\cdot \left[2a_1 + (3n-1)d\right]}{2}$. We can write $$ \begin{array}{rcl} S_{2n} & = & \frac{2n\cdot \left[ 2\left(2a_1 + (n-1)d\right)+(d-2a_1)\right]}{2}\ {\rm and}\\ \\ S_{3n} & = & \frac{3n \cdot \left[3\left(2a_1 + (n-1)d\right) + (2d-4a_1)\right]}{2}. \end{array} $$ \noindent So that, \hspace*{1.0in}$S_{2n} = \frac{2n\cdot 2 \cdot \left[ 2a_1 + (n-1)d\right]}{2} + \frac{2n\cdot (d-2a_1)}{2}$ (1) \noindent and \hspace*{1.0in}$S_{3n} = \frac{3n\cdot 3 \cdot \left[ 2a_1 + (n-1)d\right]}{2} + \frac{3n \cdot 2\cdot (d - 2a_1)}{2}$ (2) To eliminate the product $n \cdot (d-2a_1)$ in equations (1) and (2) just consider $3S_{2n}-S_{3n}$: equations (1) and (2) imply, $$ \begin{array}{rcl} 3S_{2n} - S_{3n} & = & \frac{3 \cdot 2n \cdot 2 \cdot \left[ 2a_1 + (n-1)d\right]}{2} - \frac{3n\cdot 3 \cdot \left[ 2a_1 + (n-1)d\right]}{2} \\ \\ & & + \underset{0}{\underbrace{\frac{3 \cdot 2n \cdot (d-2a_1)}{2} - \frac{3n \cdot 2 \cdot (d - 2a_1)}{2}}} \\ \\ \mathbb{R}a 3S_{2n} - S_{3n} & = & \frac{3n\cdot \left[ 2a_1 + (n-1)d\right]}{2} \end{array} $$ \noindent but $S_n = \frac{n\cdot \left[ 2a_1 + (n-1)d\right]}{2}$; hence the last equation yields $$ \begin{array}{rl} & 3S_{2n} - S_{3n} = 3 \cdot S_n\\ \\ \mathbb{R}a & \fbox{$3S_{2n} = 3S_n + S_{3n}$};\\ \\ {\rm or} & 3(S_{2n} - S_n) = S_{3n} \end{array} $$ \item[P7.] If the first term of an arithmetic progression is equal to some real number $a$, and the sum of the first $m$ terms is equal to zero, show that the sum of the next $n$ terms must equal $\frac{a\cdot n(m+n)}{1-m}$; here, we assume that $m$ and $n$ are natural numbers with $m > 1$. \noindent{\bf Solution:} We have $a_1 + \ldots + a_m = 0 = \frac{m\cdot \left[2a_1 + d(m-1)\right]}{2} \mathbb{R}a$ (since $m > 1$) $2a_1 + d(m-1) = 0 \mathbb{R}a d = \frac{-2a_1}{m-1} = \frac{2a_1}{1-m} = \frac{2a}{1-m}$.
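As a quick sanity check of this value of $d$, one can verify numerically (a minimal sketch, with $a$ and $m$ chosen arbitrarily) that the first $m$ terms do indeed add up to zero:

\begin{verbatim}
# Check: with a_1 = a and d = 2a/(1-m), the first m terms sum to 0.
from fractions import Fraction as F

a, m = F(7), 5                         # arbitrary nonzero a, arbitrary m > 1
d = 2 * a / (1 - m)
terms = [a + i * d for i in range(m)]
assert sum(terms) == 0
\end{verbatim}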
Consider the sum of the next $n$ terms $$ \begin{array}{rcl} a_{m+1}+\ldots + a_{m+n} & = & \frac{n\cdot (a_{m+1} + a_{m+n})}{2} ;\\ \\ a_{m+1} +\ldots + a_{m+n} & = & \frac{n \cdot \left[(a_1 + md) + (a_1 + (m+n-1)d)\right]}{2};\\ \\ a_{m+1}+ \ldots + a_{m+n} & = & \frac{n \cdot \left[ 2a_1 + (2m+n-1)d\right]}{2} \end{array} $$ Now substitute for $d = \frac{2a}{1-m}$: (and of course, $a = a_1$) $$ \begin{array}{rcl} a_{m+1} + \ldots + a_{m+n} & = & \frac{n[2a +(2m+n-1)\cdot \frac{2a}{1-m}]}{2};\\ \\ a_{m+1} + \ldots + a_{m+n} & = & \frac{n\cdot 2a[(1-m)+(2m+n-1)]}{2(1-m)} ;\\ \\ a_{m+1} + \ldots + a_{m+n} & = & \frac{2an[1-m+2m+n-1]}{2(1-m)} = \frac{a\cdot n\cdot (m+n)}{1-m} \end{array} $$ \item[P8.] Suppose that the sum of the first $m$ terms of an arithmetic progression is $n$; and the sum of the first $n$ terms is equal to $m$. Furthermore, suppose that the first term is $\alpha$ and the difference is $\beta$, where $\alpha$ and $\beta$ are given real numbers. Also, assume $m \neq n$ and $\beta \neq 0$. \begin{enumerate} \item[(a)] Find the sum of the first $(m + n)$ terms in terms of the constants $\alpha$ and $\beta$ only. \item[(b)] Express the product $mn$ and the difference $(m-n)$ in terms of $\alpha$ and $\beta$. \item[(c)] Drop the assumption that $m \neq n$, and suppose that both $\alpha$ and $\beta$ are integers. Describe all such arithmetic progressions. \end{enumerate} \noindent{\bf Solution:} \begin{enumerate} \item[(a)] We have $a_1 + \ldots + a_m = n$ and $a_1 + \ldots + a_n = m$; $$ \frac{m \cdot[2\alpha +(m-1)\beta]}{2} = n\ \ {\rm and}\ \ \frac{n\cdot[2\alpha + (n-1)\beta]}{2} = m, $$ \noindent since $a_1 = \alpha$ and $d = \beta$. Subtract the second equation from the first one to obtain, $$ \begin{array}{rcl} 2\alpha \cdot (m-n) & + & \beta \cdot [m(m-1)-n(n-1)] = 2n -2m;\\ 2\alpha \cdot (m-n) & + & \beta \cdot [(m^2 - n^2) - (m-n)] + 2(m-n) = 0;\\ 2\alpha \cdot (m-n) & + & \beta \cdot [(m-n)(m+n) - (m-n)] + 2(m-n) = 0;\\ 2\alpha \cdot (m-n) & + & \beta \cdot (m-n) \cdot [m+n-1]+2(m-n)=0; \end{array} $$ \noindent $(m-n) \cdot [2\alpha + \beta(m+n-1) + 2] = 0$; but $m-n\neq 0$, since $m \neq n$ by the hypothesis of the problem. Thus, $$ \begin{array}{ll} & 2 \alpha + \beta\cdot (m+n-1) + 2 = 0 \mathbb{R}a \beta (m+n-1) = -2(1 + \alpha)\\ \\ \mathbb{R}a & m + n - 1 = \frac{-2(1+\alpha)}{\beta} \mathbb{R}a m + n = 1 -\frac{2(1+\alpha)}{\beta} = \frac{\beta - 2\alpha - 2}{\beta}. \end{array} $$ \noindent Now, we compute the sum $a_1 + \ldots + a_{m+n} = \frac{(m+n)\cdot[2\alpha +(m+n-1)\beta]}{2}$. Since $(m+n-1)\beta = -2(1+\alpha)$, we have $2\alpha + (m+n-1)\beta = 2\alpha - 2(1+\alpha) = -2$, and therefore $$ \begin{array}{rl} & a_1+\ldots +a_{m+n} = \frac{\left(\frac{\beta - 2\alpha -2}{\beta}\right)\cdot (-2)}{2};\\ \\ & a_1 + \ldots + a_{m+n} = \fbox{$\frac{2\alpha + 2 - \beta}{\beta}$} \end{array} $$ \noindent (note that this value is simply $-(m+n)$). \item[(b)] If we multiply the equations $\frac{m\cdot[2\alpha +(m-1)\beta]}{2} = n$ and $\frac{n\cdot[2 \alpha + (n-1)\beta]}{2} = m$ member-wise we obtain, $\frac{m\cdot n\cdot [2\alpha+(n-1)\beta][2\alpha +(m-1)\beta]}{4} = mn$ and since $mn \neq 0$, we arrive at $$ \begin{array}{rl} & [2\alpha + (n-1)\beta]\cdot[2\alpha +(m-1)\beta] = 4\\ \\ \mathbb{R}a & 4\alpha^2 + 2\alpha \beta \cdot (m-1+n-1) +(n-1)(m-1)\beta^2 = 4\\ \\ \mathbb{R}a & 4\alpha^2 + 2\alpha \beta \cdot (m+n) - 4\alpha \beta +nm\beta^2 -(n+m)\beta^2 + \beta^2 = 4;\\ \\ & (2\alpha - \beta )^2 + (m+n)\cdot (2\alpha \beta - \beta^2) + nm\beta^2 = 4.
\end{array} $$ \noindent Now let us substitute for $m+n = \frac{\beta - 2\alpha -2}{\beta}$ (from part (a)) in the last equation above; we have, $$ \begin{array}{rl} & (2\alpha-\beta)^2 + \left( \frac{\beta - 2\alpha -2}{\beta}\right) \cdot \beta \cdot (2\alpha -\beta) + nm\beta^2 = 4\\ \\ \mathbb{R}a & (2\alpha - \beta)^2 + (\beta - 2\alpha-2)(2\alpha - \beta) + nm\beta^2 = 4\\ \\ \mathbb{R}a & 4\alpha ^2 - 4\alpha \beta + \beta^2 + 2\alpha \beta - \beta^2 - 4 \alpha^2 + 2 \alpha \beta -4\alpha + 2\beta +nm\beta^2 = 4\\ \\ \mathbb{R}a & nm\beta^2 - 4\alpha + 2\beta = 4 \mathbb{R}a nm\beta^2 = 4 + 4\alpha -2\beta\\ \\ \mathbb{R}a & \fbox{$nm = \frac{2\cdot(2+ 2\alpha - \beta)}{\beta^2}$} \end{array} $$ Finally, from the identity $(m-n)^2 = (m+n)^2 - 4nm$, it follows that $$ \begin{array}{rl} &(m-n)^2 = \left(\frac{\beta - 2\alpha -2}{\beta}\right)^2 - \frac{8(2+ 2\alpha - \beta)}{\beta^2}\\ \\ \mathbb{R}a & (m-n)^2 = \frac{\beta^2+4\alpha^2 + 4 - 4 \alpha \beta - 4\beta + 8 \alpha-16 - 16\alpha + 8\beta}{\beta^2}\\ \\ & (m - n)^2 = \frac{\beta^2 + 4\alpha^2 -12 - 4 \alpha \beta +4\beta -8\alpha}{\beta^2};\\ \\ & |m - n| = \frac{\sqrt{\beta^2 + 4\alpha^2 - 12 - 4 \alpha \beta + 4\beta -8\alpha}}{|\beta |}\\ & = \frac{\sqrt{(2\alpha - \beta)^2 -12+4\beta - 8 \alpha}}{|\beta|};\\ \\ & \fbox{$m-n = \pm \frac{\sqrt{(2\alpha - \beta)^2 - 12 + 4\beta - 8\alpha}}{|\beta |}$} \end{array} $$ \noindent the choice of the sign depending on whether $m > n$ or $m < n$ respectively. Also note, that a necessary condition that must hold here is $$ (2\alpha - \beta)^2 -12 + 4\beta -8\alpha > 0. $$ \item[(c)] Now consider $\dfrac{m[2\alpha +(m-1)\beta]}{2} = n$ and $\dfrac{n[2\alpha +(n-1)\beta]}{2} = m$, with $\alpha$ and $\beta$ being integers. There are four cases. \noindent {\bf Case 1:} Suppose that $m$ and $n$ are odd. Then we see that $m\mid n$ and $n \mid m$, which implies $m = n$ (since $m,n$ are positive integers; if they are divisors of each other, they must be equal). We obtain, $$ 2\alpha + (n-1) \beta = 2. $$ Since $n$ is odd, $n-1$ is even; writing $n = 1 + 2\rho$, the equation becomes $2\alpha + 2\rho\beta = 2$, that is, $\alpha = 1 - \rho\beta$. So, the solution is \begin{center}\fbox{\parbox{3.5in}{$m = n = 1 + 2\rho,\ \ \alpha = 1 - \beta \rho,\ \ \rho$ a nonnegative integer, $\beta$ a nonzero integer}}\end{center} (no condition on the parity of $\beta$ is needed here, precisely because $n-1$ is even). \noindent {\bf Case 2:} Suppose that $m $ is even, $n$ is odd; put $m = 2k$. We obtain $$ k \left[ 2 \alpha + (2k-1)\beta\right] = n\ {\rm and}\ n\left[ 2 \alpha + (n-1) \beta \right] = 4k. $$ Since $n$ is odd, $n $ must be a divisor of $k$ and since $k $ is also a divisor of $n$, we conclude that since $n$ and $k$ are positive, we must have $n = k$. So, $2 \alpha + (2n-1) \beta =1$ and $2 \alpha + (n-1) \beta = 4$. From which we obtain $n \beta = -3 \Leftrightarrow (n=1\ {\rm and }\ \beta = -3)$ or $(n=3\ {\rm and}\ \beta = -1$).
The solution is \begin{center}\fbox{\parbox{2.75in}{$\begin{array}{rl}& n = 1,\ \beta = -3,\ m = 2,\ \alpha = 2 \\ {\rm or} & n=3,\ \beta = -1,\ m = 6,\ \alpha = 3\end{array}$}}\end{center} \noindent {\bf Case 3:} $m$ odd and $n$ even. This is exactly analogous to the previous case. One obtains the solutions (just switch $m$ and $n$) \begin{center}\fbox{\parbox{2.5in}{$\begin{array}{l} m=1,\ \beta = -3,\ n = 2,\ \alpha = 2\\ m = 3,\ \beta = -1,\ n = 6,\ \alpha = 3 \end{array}$}}\end{center} \noindent {\bf Case 4:} Assume $m$ and $n$ to be both even. Set $m = 2^e m_1,\ n = 2^f n_1$, where $e,f$ are positive integers and $m_1,n_1$ are odd positive integers. Since $n-1$ and $m-1$ are odd, by inspection we see that $\beta$ must be even (if $\beta$ were odd, the two brackets below would be odd, and comparing powers of $2$ in the two equations would force $e = f+1$ and $f = e+1$, which is impossible). We have, $$ \left\{\begin{array}{rl}& 2^e \cdot m_1 \cdot \left[ 2\alpha +\left(2^e m_1 -1\right)\cdot \beta\right] = 2^{f+1} \cdot n_1 \\ \\ {\rm and }& 2^f \cdot n_1 \cdot \left[ 2\alpha + \left(2^f n_1-1\right) \cdot \beta \right] = 2^{e+1} \cdot m_1. \end{array}\right.$$ Since $\beta$ is even, both brackets are even; so the left-hand side of the first equation is divisible by a power of 2 which is at least $2^{e+1}$; and the left-hand side of the second equation is divisible by at least $2^{f+1}$. This then implies that $e+1 \leq f+1$ and $f+1 \leq e+1$. Hence $e = f$. Consequently, $$\begin{array}{rcll} m_1 \left[ 2 \alpha + \left(2^e m_1-1\right) \beta \right] &=& 2n_1 & {\rm and}\\ n_1 \left[ 2 \alpha + \left(2^e n_1 -1\right) \beta \right]& =& 2m_1 & \end{array} $$ Let $\beta = 2k$. By cancelling the factor 2 from both sides of the two equations, we infer that $m_1$ is a divisor of $n_1$ and $n_1$ a divisor of $m_1$. Thus $m_1 = n_1$. The solution is \begin{center}\fbox{\parbox{1.75in}{$\begin{array}{l} \alpha = 1 - \left( 2^e \cdot n_1-1 \right) k \\ \\ \beta = 2k \\ \\ m = 2^e n_1 = n\end{array}$ }},\end{center} \noindent where $k$ is an arbitrary nonzero integer, $e$ is a positive integer, and $n_1$ can be any odd positive integer. \end{enumerate} \item[P9.] Prove that if the real numbers $\alpha,\beta, \gamma, \delta$ are successive terms of a harmonic progression, then $$ 3(\beta-\alpha) (\delta - \gamma) = (\gamma - \beta)(\delta - \alpha). $$ \noindent{\bf Solution:} Since $\alpha, \beta, \gamma,\delta$ are members of a harmonic progression they must all be nonzero; $\alpha \beta \gamma \delta \neq 0$.
Thus $$ 3(\beta - \alpha)(\delta - \gamma) = ( \gamma - \beta)(\delta - \alpha) $$ \noindent is equivalent to $$ \frac{3(\beta-\alpha)(\delta - \gamma)}{\alpha \beta \gamma \delta} = \frac{(\gamma - \beta)(\delta - \alpha)}{\alpha\beta\gamma\delta} $$ $$ \begin{array}{rl} \Leftrightarrow & 3\cdot \left( \frac{\beta - \alpha}{\beta \alpha}\right) \cdot \left( \frac{\delta - \gamma}{\delta \gamma}\right) = \left( \frac{\gamma - \beta}{\gamma \beta}\right) \cdot \left( \frac{\delta - \alpha}{\alpha \delta}\right)\\ \\ \Leftrightarrow & 3 \cdot \left( \frac{1}{\alpha} - \frac{1}{\beta} \right) \cdot \left(\frac{1}{\gamma} - \frac{1}{\delta}\right) = \left(\frac{1}{\beta}-\frac{1}{\gamma}\right) \cdot \left( \frac{1}{\alpha} - \frac{1}{\delta} \right) \end{array} $$ \noindent By definition, since $\alpha,\beta,\gamma,\delta$ are consecutive terms of a harmonic progression; the numbers $\frac{1}{\alpha},\frac{1}{\beta},\frac{1}{\gamma},\frac{1}{\delta}$ must be successive terms of an arithmetic progression with difference $d$; and $\frac{1}{\alpha} - \frac{1}{\beta} = -d,\ \frac{1}{\gamma}-\frac{1}{\delta} = -d$, \linebreak $\frac{1}{\beta} - \frac{1}{\gamma} = -d$, and $ \frac{1}{\alpha} - \frac{1}{\delta} = -3d$ (since $\frac{1}{\delta} = \frac{1}{\gamma} + d = \frac{1}{\beta} + 2d = \frac{1}{\alpha} + 3d$). Thus the above statement we want to prove is equivalent to $$3 \cdot (-d)\cdot(-d) = (-d)\cdot(-3d) \Leftrightarrow 3d^2 = 3d^2$$ \noindent which is true. \item[P10.] Suppose that $m$ and $n$ are fixed natural numbers such that the $m$th term $a_m$ in a harmonic progression is equal to $n$; and the $n$th term $a_n$ is equal to $m$. We assume $m \neq n$. \begin{enumerate} \item[(a)] Find the $(m+n)$th term $a_{m+n}$ in terms of $m$ and $n$ . \item[(b)] Determine the general $k$th term $a_k$ in terms of $k,m$, and $n$. \end{enumerate} \noindent {\bf Solution:} \begin{enumerate} \item[(a)] Both $\frac{1}{a_m}$ and $\frac{1}{a_n}$ are the $m$th and $n$th terms respectively of an arithmetic progression with first term $\frac{1}{a_1}$ and difference $d$; so that $\frac{1}{a_m} = \frac{1}{a_1} + (m-1)d$ and $\frac{1}{a_n} = \frac{1}{a_1} + (n-1)d$. Subtracting the second equation from the first and using the fact that $a_m = n$ and $a_n=m$ we obtain, $\frac{1}{n} - \frac{1}{m} = (m-n)d \mathbb{R}a \frac{m-n}{nm} = (m-n)d$; but $m-n \neq 0$; cancelling the factor $(m-n)$ from both sides, gives \fbox{$\frac{1}{mn} = d$}. Thus from the first equation, $\frac{1}{n} = \frac{1}{a_1} + (m-1)\cdot \frac{1}{mn}\mathbb{R}a \frac{1}{n} -\frac{(m-1)}{mn}= \frac{1}{a_1} \mathbb{R}a \frac{m-(m-1)}{mn} = \frac{1}{a_1} ;\ \frac{1}{mn} = \frac{1}{a_1} \mathbb{R}a \fbox{$a_1 = mn$}$. Therefore, $\frac{1}{a_{m+n}} = \frac{1}{a_1} + (m+n-1)d \mathbb{R}a \frac{1}{a_{m+n}} = \frac{1}{mn} + \frac{m+n-1}{mn} \mathbb{R}a \fbox{${a}_{m+n} = \frac{mn}{m+n}$}$. \item[(b)] We have $\frac{1}{a_k} = \frac{1}{a_1} + (k-1)d \mathbb{R}a \frac{1}{a_k} = \frac{1}{mn}+ \frac{(k-1)}{mn} = \frac{k}{mn} \mathbb{R}a \fbox{$a_k = \frac{mn}{k}$}$ . \end{enumerate} \item[P11.] Use mathematical induction to prove that if $a_1,a_2, \ldots, a_n$, with $n \geq 3$, are the first $n$ terms of a harmonic progression, then $(n-1)a_1a_n = a_1a_2+a_2a_3 + \ldots + a_{n-1}a_n$. \noindent{\bf Solution:} For $n=3$ the statement is $2a_1 a_3 = a_1a_2+a_2a_3 \Leftrightarrow a_2 \cdot (a_1+a_3)=2a_1a_3$; but $a_1,a_2,a_3$ are all nonzero since they are the first three terms of a harmonic progression.
Thus, the last equation is equivalent to $\frac{2}{a_2} = \frac{a_1+a_3}{a_1a_3} \Leftrightarrow \frac{2}{a_2} = \frac{1}{a_3} + \frac{1}{a_1}$ which is true, because $\frac{1}{a_1},\frac{1}{a_2},\frac{1}{a_3}$ are the first three terms of an arithmetic progression (being the reciprocals of the first three terms of a harmonic progression). The inductive step: prove that whenever the statement holds true for some natural number $n = k \geq 3$, then it must also hold true for $n = k+1$. So we assume $(k-1)a_1a_k = a_1a_2+a_2a_3 + \ldots + a_{k-1}a_k$. Add $a_ka_{k+1}$ to both sides to obtain, $(k-1)a_1a_k + a_ka_{k+1} = a_1a_2+a_2a_3 + \ldots + a_{k-1}a_k + a_ka_{k+1}$ (1) \noindent If we can show that the left-hand side of (1) is equal to $ka_1a_{k+1}$, the induction process will be complete. So we need to show that \hspace*{1.0in}$(k-1)a_1a_k+a_ka_{k+1} = k\cdot a_1 \cdot a_{k+1}$ (2) \noindent (dividing both sides of the equation by $a_1\cdot a_k\cdot a_{k+1} \neq 0$) \hspace{.15in} $\Leftrightarrow \frac{(k-1)}{a_{k+1}} + \frac{1}{a_1} = \frac{k}{a_k}.$ (3) To prove (3), we can use the fact that $\frac{1}{a_{k+1}}$ and $\frac{1}{a_k}$ are the $(k+1)$th and $k$th terms of an arithmetic progression with first term $\frac{1}{a_1}$ and difference $d$: $\frac{1}{a_{k+1}} = \frac{1}{a_1} + k \cdot d$ and $\frac{1}{a_k} = \frac{1}{a_1} + (k-1)d$; so that, $\frac{k-1}{a_{k+1}} = \frac{k-1}{a_1} + (k-1)kd$ and $\frac{k}{a_k} = \frac{k}{a_1} + k(k-1)d$. Subtracting the second equation from the first yields, $$ \frac{k-1}{a_{k+1}} - \frac{k}{a_k} = \frac{(k-1)-k}{a_1} \mathbb{R}a \frac{k-1}{a_{k+1}} + \frac{1}{a_1} = \frac{k}{a_k} $$ \noindent which establishes (3) and thus equation (2). The induction is now complete, since by combining (1) and (2) we have shown that $$ k\cdot a_1a_{k+1} = a_1a_2 +a_2a_3 +\ldots +a_{k-1}a_k+a_k a_{k+1}, $$ \noindent that is, the statement also holds for $n=k+1$. \item[P12.] Find the necessary and sufficient condition that three natural numbers $m,n$, and $k$ must satisfy, in order that the positive real numbers $\sqrt{m},\sqrt{n},\sqrt{k}$ be consecutive terms of a geometric progression. \noindent{\bf Solution:} According to Theorem 7, the three positive reals will be consecutive terms of a geometric progression if, and only if, $(\sqrt{n})^2 = \sqrt{m}\sqrt{k} \Leftrightarrow n = \sqrt{mk} \Leftrightarrow$ (since both $n$ and $mk$ are positive) $n^2 = mk$. Thus, the necessary and sufficient condition is that the product of $m$ and $k$ be equal to the square of $n$. \item[P13.] Show that if $\alpha, \beta, \gamma$ are successive terms of an arithmetic progression, $\beta,\gamma, \delta$ are consecutive terms of a geometric progression, and $\gamma, \delta, \epsilon$ are the successive terms of a harmonic progression, then either the numbers $\alpha, \gamma, \epsilon$ or the numbers $\epsilon, \gamma, \alpha$ must be the consecutive terms of a geometric progression. \noindent{\bf Solution:} Since $\frac{1}{\gamma},\frac{1}{\delta},\frac{1}{\epsilon}$ are by definition successive terms of an arithmetic progression and the same holds true for $\alpha , \beta, \gamma$, Theorem 3 tells us that we must have $2\beta = \alpha + \gamma$ (1) and $\frac{2}{\delta} = \frac{1}{\gamma}+\frac{1}{\epsilon}$ (2). And by Theorem 7, we must also have $\gamma^2 = \beta \delta$ (3). (Note that $\gamma, \delta$, and $\epsilon$ must be nonzero and thus so must be $\beta$.) Equation (2) implies $\delta = \frac{2\gamma \epsilon}{\gamma + \epsilon}$ and equation (1) implies $\beta = \frac{\alpha + \gamma}{2}$.
Substituting for $\beta$ and $\delta$ in equation (3) we now have $$\begin{array}{rl} & \gamma^2 = \left( \frac{\alpha + \gamma}{2}\right) \cdot \left( \frac{2\gamma \epsilon}{\gamma+\epsilon}\right)\\ \\ \mathbb{R}a& \gamma^2 \cdot (\gamma + \epsilon) = (\alpha + \gamma)\cdot \gamma \epsilon \mathbb{R}a \gamma^3 + \gamma^2 \epsilon = \alpha \gamma \epsilon + \gamma^2\epsilon \\ \\ \mathbb{R}a & \gamma^3 - \alpha \gamma \epsilon = 0 \mathbb{R}a \gamma (\gamma^2 - \alpha \epsilon) =0\end{array} $$ \noindent and since $\gamma \neq 0$ we conclude $\gamma^2 - \alpha \epsilon = 0 \mathbb{R}a \gamma^2 = \alpha\epsilon$, which, in accordance with Theorem 7, proves that either $\alpha, \gamma, \epsilon$; or $\epsilon, \gamma, \alpha$ are consecutive terms in a geometric progression. \item[P14.] Prove that if $\alpha$ is the arithmetic mean of the numbers $\beta$ and $\gamma$; and $\beta$, nonzero, the geometric mean of $\alpha$ and $\gamma$, then $\gamma$ must be the harmonic mean of $\alpha$ and $\beta$. (Note: the assumption $\beta \neq 0$, together with the fact that $\beta$ is the geometric mean of $\alpha$ and $\gamma$, does imply that both $\alpha$ and $\gamma$ must be nonzero as well.) \noindent{\bf Solution:} From the problems assumptions we must have $2\alpha = \beta + \gamma$ and $\beta^2 = \alpha \gamma$; $\beta^2 = \alpha \gamma \mathbb{R}a 2\beta^2 = 2 \alpha \gamma$; substituting for $2\alpha = \beta + \gamma$ in the last equation produces $$ \begin{array}{rl} & 2\beta^2 = (\beta+\gamma)\gamma \mathbb{R}a 2\beta^2 = \beta\gamma + \gamma^2\\ \\ \mathbb{R}a & 2\beta^2 - \gamma^2 - \beta \gamma = 0 \mathbb{R}a (\beta^2 - \gamma^2) +(\beta^2 - \beta\gamma) = 0\\ \\ \mathbb{R}a & (\beta-\gamma)(\beta+\gamma) + \beta \cdot (\beta - \gamma) = 0 \mathbb{R}a (\beta - \gamma) \cdot (2\beta + \gamma) = 0. \end{array} $$ \noindent If $\beta - \gamma \neq 0$, then the last equation implies $2\beta + \gamma = 0 \mathbb{R}a \gamma = -2\beta$; and thus from $2a=\beta + \gamma$ we obtain $2\alpha = \beta -2\beta$; $2\alpha = -\beta$; $\alpha = -\beta/2$. Now compute, $\frac{2}{\gamma} = \frac{2}{-2\beta} = - \frac{1}{\beta}$, since $\beta \neq 0$; and $\frac{1}{\alpha} + \frac{1}{\beta} = \frac{1}{-\frac{\beta}{2}} + \frac{1}{\beta} = - \frac{2}{\beta} + \frac{1}{\beta} = - \frac{1}{\beta}$. Therefore $\frac{2}{\gamma} = \frac{1}{\alpha} + \frac{1}{\beta}$, which proves that $\gamma$ is the harmonic mean of $\alpha$ and $\beta$. Finally, by going back to the equation $(\beta - \gamma)(2\beta + \gamma) = 0$ we consider the other possibility, namely $\beta- \gamma =0$; $\beta = \gamma$ (note that $\beta - \gamma$ and $2\beta + \gamma$ cannot both be zero for this would imply $\beta = 0$, violating the problem's assumption that $\beta \neq 0$). Since $\beta = \gamma$ and $2\alpha = \beta + \gamma$, we conclude $\alpha = \beta = \gamma$. And then trivially, $\frac{2}{\gamma} = \frac{1}{\alpha} + \frac{1}{\beta}$, so we are done. \item[P15.] We partition the set of natural numbers in disjoint classes or groups as follows: $\{1\},\{2,3\},\{4,5,6\},\{7,8,9,10\},\ldots $; the $n$th class contains $n$ consecutive positive integers starting with $\frac{n\cdot(n-1)}{2} + 1$. Find the sum of the members of the $n$th class. 
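For orientation before the derivation, the first few classes and their sums can be generated mechanically; a small illustrative Python sketch:

\begin{verbatim}
# The first few classes {1}, {2,3}, {4,5,6}, ... and the sum of each class.
for n in range(1, 6):
    start = n * (n - 1) // 2 + 1            # first member of the n-th class
    members = list(range(start, start + n))
    print(n, members, sum(members))
# n = 1: [1] -> 1,  n = 2: [2, 3] -> 5,  n = 3: [4, 5, 6] -> 15, ...
\end{verbatim}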
\noindent{\bf Solution:} First let us make clear why the first member of $n$th class is the number $\frac{n(n-1)}{2} +1$; observe that the $n$th class is preceded by $(n-1)$ classes; so since the $k$th class, $1 \leq k \leq n-1$, contains exactly $k$ consecutive integers, then there precisely $(1+2+\ldots +k+\ldots +(n-1))$ consecutive natural numbers preceding the $n$th class; but the sum $1+2+\ldots + (n-2)+(n-1)$ is the sum of the first $(n-1)$ terms of the infinite arithmetic progression that has first term $a_1 =1$ difference $d=1$, hence $$ \begin{array}{rcl} 1+2+\ldots+(n-1)& = & a_1+a_2+\ldots+a_{n-1} = \frac{(n-1)\cdot(a_1+a_{n-1})}{2}\\ \\ & = & \frac{(n-1)(1+(n-1))}{2} = \frac{(n-1)\cdot n}{2}.\end{array} $$ \noindent This explains why the $n$th class starts with the natural number $\frac{n(n-1)}{2}+1$; the members of the $n$th class are the numbers $\frac{n(n-1)}{2} + 1, \ \frac{n(n-1)}{2} + 2, \ldots, \frac{n(n-1)}{2}+n$. These $n$ numbers form a finite arithmetic progression with first term $\underset{a}{\underbrace{\frac{n(n-1)}{2}+1}}$ and difference $d=1$. Hence their sum is equal to $$ \begin{array}{rcl} \frac{n\cdot [2a+(n-1)d]}{2}& =& \frac{n\cdot\left[2\left(\frac{n(n-1)}{2}+1\right)+(n-1)\right]}{2}\\ \\ & = & \frac{n\cdot [n(n-1)+2+n-1]}{2} = \frac{n\cdot [n^2-n+2+n-1]}{2} = \fbox{$\frac{n\cdot(n^2+1)}{2}$}\end{array} $$ \item[P16.] We divide 8,000 objects into $(n+1)$ groups of which the first $n$ of them contain $5,8,11,14,\ldots ,[5+3\cdot(n-1)]$ objects respectively; and the $(n+1)$th group contains fewer than $(5+3n)$ objects; find the value of the natural number $n$ and the number of objects that the $(n+1)$th group contains. \noindent{\bf Solution:} The total number of objects that first $n$ groups contain is equal to, $S_n=5+8+11+14+ \ldots + [5+3(n-1)]$; this sum, $S_n$, is the sum of the first $n$ terms of the infinite arithmetic progression with first term $a_1 = 5$ and difference $d = 3$; so that its $n$th term is $a_n=5+3(n-1)$. According to Theorem 2, $S_n = \frac{n\cdot[a_1 + a_n]}{2} = \frac{ n\cdot[5+5+3(n-1)]}{2} = \frac{n\cdot [5+5+3n-3]}{2} = \frac{n\cdot (7+3n)}{2}$. Thus, the $(n+1)$th group must contain, $8,000 - \frac{n(7+3n)}{2}$ objects. By assumption, the $(n+1)$th group contains fewer than $(5+3n)$ objects. Also $8,000 - \frac{n(7+3n)}{2}$ must be a nonnegative integer, since it represents the number of objects in a set (the $(n+1)$th class; theoretically this number may be zero). So we have two simultaneous inequalities to deal with: $$ 0 \leq 8,000 - \frac{n(7+3n)}{2} \Leftrightarrow \frac{n(7+3n)}{2} \leq 8,000;\ \ n(7+3n)\leq 16,000. 
$$ \noindent And (the other inequality) $$ \begin{array}{rcl}8,000 - \frac{n(7+3n)}{2} & < & 5 + 3n \Leftrightarrow 16,000 - n(7+3n)<10+6n \Leftrightarrow 16,000\\ & < &3n^2 + 13n + 10 \Leftrightarrow 16,000 < (3n+10)(n+1).\end{array} $$ \noindent So we have the following system of two simultaneous inequalities $$ \left.\begin{array}{rc} & n(7+3n) \leq 16,000\\ \\ {\rm and} & 16,000 < (3n+10)(n+1) \end{array}\right\} \begin{array}{c}(1)\\ \\ (2)\end{array} $$ \noindent Consider (1): At least one of the factors $n$ and $7+3n$ must be less than or equal to $\sqrt{16,000}$; for if both were greater than $\sqrt{16,000}$ then their product would exceed $\sqrt{16,000}\cdot\sqrt{16,000} = 16,000$, contradicting inequality (1); and since $n<7+3n$, it is now clear that the natural number $n$ cannot exceed $\sqrt{16,000}:n\leq \sqrt{16,000}\Leftrightarrow n \leq \sqrt{16\cdot 10^3};\ n \leq 4\cdot \sqrt{10^2\cdot 10};\ n \leq 4 \cdot 10\cdot \sqrt{10} = 40\sqrt{10}$, so $40\sqrt{10}$ is a necessary upper bound for $n$. The closest positive integer to $40\sqrt{10}$, but less than $40\sqrt{10}$, is the number $126$; but actually, an upper bound for $n$ must be much less than $126$ in view of the factor $7+3n$. If we consider (1), we have $3n^2+7n-16,000 \leq 0$ (3) The two roots of the quadratic equation $3x^2+7x - 16,000 = 0$ are the real numbers $r_1 = \frac{-7+\sqrt{(7)^2-4(3)(-16,000)}}{6} = \frac{-7+\sqrt{192,049}}{6} \approx 71.872326$; and $r_2 = \frac{-7-\sqrt{192,049}}{6} \approx -74.20566$. Now, it is well known from precalculus that if $r_1$ and $r_2$ are the two roots of the quadratic polynomial $ax^2+bx+c$, then $ax^2 + bx+c = a\cdot(x-r_1)(x-r_2)$, for all real numbers $x$. In our case $3x^2+7x-16,000 = 3\cdot(x-r_1)(x-r_2)$, where $r_1$ and $r_2$ are the above calculated real numbers. Thus, in order for the natural number $n$ to satisfy the inequality (3), $3n^2+7n-16,000 \leq 0$; it must satisfy $3(n-r_1)(n-r_2)\leq 0$; but this will be true if, and only if, $r_2 \leq n \leq r_1$; $-74.20566 \leq n \leq 71.872326$; but $n$ is a natural number; thus $1 \leq n \leq 71$; this upper bound for $n$ is much lower than the cruder upper bound $126$ that we estimated earlier. Now consider inequality (2): it must hold true simultaneously with (1); which means we have, $$ \left.\begin{array}{rl} & 16,000 < (3n+10)\cdot(n+1)\\ \\ {\rm and} & 1 \leq n \leq 71 \end{array} \right\} $$ \noindent If we take the highest value possible for $n$; namely $n = 71$, we see that $(3n+10)(n+1) = (3\cdot(71)+10)\cdot(72)= (223)(72) = 16,056$, which exceeds the number $16,000$, as desired. But, if we take the next smaller value, $n = 70$, we have $(3n+10)(n+1) = (220)(71) = 15,620$ which falls below $16,000$. Thus, this problem has a unique solution, \fbox{$n = 71$}. The total number of objects in the first $n$ groups (or 71 groups) is then equal to, $$ \frac{n\cdot(7+3n)}{2} = \frac{(71)\cdot(7+3(71))}{2} = \frac{(71)\cdot(220)}{2} = (71)\cdot(110) = 7,810. $$ \noindent Thus, the $(n+1)$th or $72$nd group contains, $8,000 - 7,810 = \fbox{190}$ objects; note that $190$ is indeed less than $5+3n=5+3(71) = 218$. \item[P17.] \begin{enumerate} \item[(a)] Show that the real numbers $\frac{\sqrt{2}+1}{\sqrt{2}-1},\ \frac{1}{2-\sqrt{2}},\ \frac{1}{2}$, can be three consecutive terms of a geometric progression. Find the ratio $r$ of any geometric progression that contains these three numbers as consecutive terms.
\item[(b)] Find the value of the infinite sum of the terms of the (infinite) geometric progression whose first three terms are the numbers $\frac{\sqrt{2}+1}{\sqrt{2}-1},\ \frac{1}{2-\sqrt{2}},\ \frac{1}{2}; \ \left(\frac{\sqrt{2}+1}{\sqrt{2}-1} \right) + \left( \frac{1}{2-\sqrt{2}} \right)+ \frac{1}{2} + \ldots$ . \end{enumerate} \noindent {\bf Solution:} \begin{enumerate} \item[(a)] Apply Theorem 7: the three numbers will be consecutive terms of a geometric progression if, and only if, \hspace{.5in}$\left({\displaystyle \frac{1}{2-\sqrt{2}}}\right)^2 = {\displaystyle \frac{(\sqrt{2}+1)}{(\sqrt{2} - 1)}} \cdot \frac{1}{2}$ (1) \noindent Compute the left-hand side: $$\begin{array}{rcl}{\displaystyle \frac{1}{(2-\sqrt{2})^2}} & = &{\displaystyle \frac{1}{4-4\sqrt{2}+2}} = {\displaystyle \frac{1}{6-4\sqrt{2}}}\\ \\ & = &{\displaystyle \frac{1}{2(3-2\sqrt{2})}} = {\displaystyle \frac{3+2\sqrt{2}}{2\cdot(3-2\sqrt{2})(3+2\sqrt{2})}}\\ \\ & =& {\displaystyle \frac{3+2\sqrt{2}}{2\cdot[9-8]}} = {\displaystyle \frac{3+2\sqrt{2}}{2}}.\end{array} $$ \noindent Now we simplify the right-hand side: $$\begin{array}{rcl}\left( {\displaystyle \frac{\sqrt{2}+1}{\sqrt{2}-1}}\right) \cdot {\displaystyle \frac{1}{2}} & = &{\displaystyle \frac{1}{2} \cdot \frac{(\sqrt{2}+1)^2}{(\sqrt{2}-1)(\sqrt{2}+1)}}\\ \\ & = & {\displaystyle \frac{1}{2} \cdot \frac{(2+2\sqrt{2} + 1)}{(2-1)} = \frac{3+2\sqrt{2}}{2}} \end{array} $$ \noindent so the two sides of (1) are indeed equal; (1) is a true statement. Thus, the three numbers can be three consecutive terms in a geometric progression. To find $r$, consider $\left(\frac{\sqrt{2}+1}{\sqrt{2}-1}\right) \cdot r = \frac{1}{2-\sqrt{2}}$; and also $\left(\frac{1}{2-\sqrt{2}}\right) \cdot r = \frac{1}{2}$; from either of these two equations we can get the value of $r$; if we use the second equation we have, \fbox{$r = \frac{2-\sqrt{2}}{2}$}. \item[(b)] Since $|r| = \left| \frac{2-\sqrt{2}}{2} \right| = \frac{2-\sqrt{2}}{2} < 1$, according to Remark 6, the sum $a + ar + ar^2 + \ldots + ar^{n-1}+\ldots$ converges to $\frac{a}{1-r}$; in our case $a = \frac{\sqrt{2}+1}{\sqrt{2}-1}$ and $ r = \frac{2-\sqrt{2}}{2}$. Thus the value of the infinite sum is equal to $$\begin{array}{rcl}{\displaystyle \frac{a}{1-r}} & = &{\displaystyle \frac{\frac{\sqrt{2}+1}{\sqrt{2}-1}}{1-\left(\frac{2-\sqrt{2}}{2}\right)}}= {\displaystyle \frac{\frac{\sqrt{2}+1}{\sqrt{2}-1}}{\frac{2-\left(2-\sqrt{2}\right)}{2}}}\\ \\ & = & {\displaystyle \frac{2(\sqrt{2}+1)}{\sqrt{2}(\sqrt{2}-1)} = \frac{2(\sqrt{2}+1)\cdot (\sqrt{2}+1)\cdot \sqrt{2}}{\sqrt{2} \cdot \sqrt{2} \cdot (\sqrt{2}-1)(\sqrt{2}+1)}} \\ \\ & =& {\displaystyle \frac{2\sqrt{2} \cdot (\sqrt{2}+1)^2}{2\cdot (2-1)}= \sqrt{2} \cdot (2+2\sqrt{2}+1) = \sqrt{2} \cdot (3 + 2\sqrt{2})}\\ \\ & = & 3\sqrt{2} + 2 \cdot \sqrt{2} \cdot \sqrt{2} = 3\sqrt{2} + 4 = \fbox{$4+3\sqrt{2}$} \end{array} $$ \end{enumerate} \item[P18.] (For students who have had calculus.) If $|\rho | < 1$ and $|\beta\rho | < 1$, calculate the infinite sum, $$S = \underset{1{\rm st}}{\underbrace{\alpha\rho}} + \underset{2{\rm nd}}{(\underbrace{\alpha+\alpha\beta})} \rho^2 + \ldots + \underset{n{\rm th\ term}}{(\underbrace{\alpha +\alpha\beta + \ldots + \alpha \beta^{n-1}})} \rho^n + \ldots \ . $$ \noindent{\bf Solution:} First we calculate the $n$th term which itself is a sum of $n$ terms: $$ (\alpha+\alpha\beta + \ldots + \alpha \beta^{n-1}) \cdot \rho^n = \alpha \cdot \rho^n \cdot (1+\beta +\ldots + \beta^{n-1}) = \alpha \cdot \rho^n \cdot \left(\frac{\beta^n-1}{\beta-1}\right) $$ \noindent by Theorem 5(ii), assuming also $\beta \neq 1$.
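Before summing, the closed form just obtained for the $n$th term can be spot-checked numerically; a minimal Python sketch with arbitrarily chosen values of $\alpha$, $\beta$, $\rho$ (chosen so that $|\rho| < 1$ and $|\beta\rho| < 1$):

\begin{verbatim}
# Spot-check: (alpha + alpha*beta + ... + alpha*beta^(n-1)) * rho^n
#             equals alpha * rho^n * (beta^n - 1)/(beta - 1).
alpha, beta, rho = 2.0, 3.0, 0.25       # arbitrary; |rho| < 1, |beta*rho| < 1
for n in range(1, 8):
    direct = sum(alpha * beta**j for j in range(n)) * rho**n
    closed = alpha * rho**n * (beta**n - 1) / (beta - 1)
    assert abs(direct - closed) < 1e-12
\end{verbatim}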
Now we have, $$ \begin{array}{rcl} S & = & \alpha \rho + (\alpha + \alpha \beta)\rho^2 + \ldots + \alpha \cdot \rho^n \cdot \left( \frac{\beta^n-1}{\beta-1}\right) + \ldots \\ \\ S & = & \alpha \rho \left( \frac{\beta-1}{\beta-1}\right) + \alpha \rho^2 \cdot \left(\frac{\beta^2-1}{\beta-1}\right)\\ \\ & & + \ldots + \alpha \cdot \rho^n \cdot \left( \frac{\beta^n-1}{\beta -1}\right) + \ldots \end{array} $$ \noindent Note that $S = {\displaystyle \lim_{n\rightarrow \infty}} S_n$, where $$ \begin{array}{rcl} S_n & = & \alpha \rho \cdot \left(\frac{\beta -1}{\beta - 1}\right) + \alpha \rho^2 \cdot \left( \frac{\beta^2 - 1}{\beta - 1}\right) + \ldots + \alpha \cdot \rho^n \cdot \left(\frac{\beta^n-1}{\beta-1}\right);\\ \\ S_n & = & \left(\frac{\alpha\rho}{\beta -1}\right) \left[ (\beta - 1) + \rho (\beta^2 -1) + \ldots + \rho^{n-1} \cdot (\beta^n - 1)\right]\\ \\ S_n & = & \left(\frac{\alpha\rho}{\beta -1}\right) \left[ \beta \cdot [1 + (\rho \beta) + \ldots + (\rho \beta)^{n-1}] - (1 + \rho + \ldots + \rho^{n-1})\right]\\ \\ S_n & = & \left( \frac{\alpha\rho}{\beta -1}\right) \cdot \left[ \beta \cdot \frac{[(\rho\beta)^n-1]}{\rho \beta -1} - \left( \frac{\rho^n-1}{\rho -1}\right)\right] \end{array} $$ \noindent Now, as $n\rightarrow \infty$, in virtue of $|\rho \beta |<1$ and $|\rho | < 1$ we have, ${\displaystyle \lim_{n\rightarrow \infty}} \frac{[(\rho \beta)^n-1]}{\rho \beta - 1} = \frac{-1}{\rho \beta - 1} = \frac{1}{1 - \rho \beta}$ and ${\displaystyle \lim_{n\rightarrow \infty}} \frac{\rho^n -1}{\rho - 1} = \frac{1}{1-\rho}$. Hence, $$ \begin{array}{rcl} S & = & {\displaystyle \lim_{n\rightarrow \infty}} S_n = \left( \frac{\alpha \rho}{\beta -1}\right) \cdot \left[ \beta \cdot \left( \frac{1}{1-\rho\beta}\right)- \left( \frac{1}{1-\rho}\right)\right];\\ \\ S& = & \left( \frac{\alpha \rho}{\beta - 1}\right) \cdot \left[ \frac{\beta (1-\rho)-(1- \rho \beta)}{(1 - \rho \beta) \cdot (1-\rho)}\right] = \frac{\alpha \rho \cdot (\beta -1 )}{(\beta -1) \cdot (1 -\rho\beta)(1-\rho)};\end{array} $$ \hspace*{.45in}\fbox{$\begin{array}{c}S = \frac{\alpha \rho}{(1-\rho \beta) \cdot (1-\rho)}\end{array}$} \item[P19.] Let $m, n$ and $\ell$ be distinct natural numbers; and $a_1 , \ldots , a_k , \ldots$, an infinite arithmetic progression with first nonzero term $a_1$ and difference $d$. \begin{enumerate} \item[(a)] Find the necessary conditions that $n,\ell$, and $m$ must satisfy in order that, $$\underset{\underset{{\rm first}\ m\ {\rm terms}}{\rm sum\ of\ the}}{\underbrace{a_1 +a_2 + \ldots + a_m}} = \underset{\underset{{\rm next}\ n\ {\rm terms}}{\rm sum\ of\ the}}{\underbrace{a_{m+1}+\ldots + a_{m+n}}} = \underset{\underset{{\rm next}\ \ell\ {\rm terms}}{\rm sum\ of\ the}}{\underbrace{a_{m+1}+\ldots +a_{m + \ell}}}$$ \item[(b)] If the three sums in part (a) are equal, what must be the relationship between $a_1$ and $d$? \item[(c)] Give numerical examples.
\end{enumerate} \noindent{\bf Solution:} \begin{enumerate} \item[(a)] We have two simultaneous equations, $ \left.\begin{array}{rl} & a_1 + a_2 + \ldots + a_m = a_{m+1} +\ldots +a_{m+n} \\ {\rm and }\\ & a_{m+1}+ \ldots + a_{m+n} = a_{m+1} + \ldots + a_{m+\ell} \end{array} \right\} $ (1) \noindent According to Theorem 2 we have, $$\begin{array}{rrcl} &a_1 + a_2 + \ldots + a_m & =& \frac{m\cdot [2a_1 + (m-1)d]}{2};\\ \\ & a_{m+1}+\ldots + a_{m+n} & = & \frac{n\cdot[a_{m+1}+a_{m+n}]}{2}\\ \\ && = & \frac{ n\cdot[(a_1 + md) + (a_1 + (m+n-1)d)]}{2}\\ \\ & & = & \frac{n\cdot [2a_1 + (2m+n-1)d]}{2};\\ {\rm and}& & & \\ & a_{m+1}+\ldots + a_{m+\ell} & = & \frac{\ell \cdot[2a_1 + (2m+\ell -1)d]}{2} \end{array} $$ \noindent Now let us use the first equation in (1): $$\begin{array}{rcl} \frac{m\cdot [2a_1 + (m-1)d]}{2} & = & \frac{n\cdot[2a_1 + (2m+n-1)d]}{2};\\ \\ 2ma_1 +m(m-1)d & = & 2na_1 + n\cdot (2m+n-1)d;\\ \\ 2a_1 \cdot (m-n) & = & [n \cdot (2m + n-1) - m(m-1)]d;\\ \\ 2a_1 \cdot (m-n) & = & [ 2nm + n^2 - m^2 + m -n ]d; \end{array} $$ According to hypothesis $a_1 \neq 0$ and $m-n \neq 0$; so the right-hand side must also be nonzero and, \hspace*{1.0in}$d = \frac{2a_1 \cdot (m-n)}{2nm+n^2 - m^2 + m-n}$ (2) Now use the second equation in (1): $$ \begin{array}{rrcl} & \frac{n\cdot[2a_1 + (2m+n-1)d]}{2} & = & \frac{\ell \cdot [2a_1 + (2m + \ell -1)d]}{2}\\ \\ \Leftrightarrow & 2na_1 + n(2m+n-1)d & = & 2\ell a_1 + \ell (2m+\ell -1)d\\ \\ \Leftrightarrow & 2a_1 \cdot (n-\ell) & = & [ \ell (2m+\ell -1) - n(2m+n-1)]d\\ \\ \Leftrightarrow & 2a_1 \cdot (n-\ell ) & = & [2m \cdot (\ell -n)+(\ell^2 - n^2)-(\ell - n)]d\\ \\ \Leftrightarrow & 2a_1 \cdot (n-\ell ) & = &[2m \cdot (\ell - n) + (\ell - n)(\ell + n) - (\ell - n)]d\\ \\ \Leftrightarrow & 2a_1 \cdot (n- \ell ) & = & (\ell - n) \cdot [2m + \ell + n-1]d; \end{array} $$ \noindent and since $n - \ell \neq$, we obtain $-2a_1 = (2m+\ell _n-1)d$; \hspace*{1.5in}$d = \frac{-2a_1}{2m+\ell + n-1}$ (3) \noindent (Again, in virtue of $a_1 \neq 0$, the product $(2m + \ell + n-1)d$ must also be nonzero, so $2m + \ell + n-1 \neq 0$, which is true anyway since, obviously, $2m + \ell + n$ is a natural number greater than 1). Combining Equations (2) and (3) and cancelling out the factor $2a_1 \neq 0$ from both sides we obtain, $$ \frac{m-n}{2nm+n^2 - m^2 + m-n} = \frac{-1}{2m + \ell + n-1} $$ \noindent Cross multiplying we now have, $$ \begin{array}{cl} &(m-n)\cdot (2m + \ell + n-1)\\ \\ = & (-1) \cdot (2nm + n^2 - m^2 + m - n);\\ \\ & 2m^2 + m\ell + mn -m - 2mn - n\ell - n^2 + n\\ \\ = & -2mn - n^2 + m^2 - m + n;\\ \\ & m^2 + m\ell - n\ell + mn = 0. \end{array} $$ \noindent We can solve for $n$ in terms of $m$ and $\ell$ (or for $\ell$ in terms of $m$ and $n$) we have, $$ n\cdot (\ell - m) = m\cdot (m+\ell ) \mathbb{R}a \fbox{$n = \frac{m\cdot(m + \ell)}{\ell - m}$},\ {\rm since}\ \ell - m \neq 0. $$ Also, we must have \fbox{$\ell > m$}, in view of the fact that $n$ is a natural number and hence positive (also note that these two conditions easily imply $n > m$ as well). But, there is more: The natural number $\ell - m$ must be a divisor of the product $m \cdot (m+\ell)$. Thus, the conditions are: \begin{enumerate} \item[(A)] $\ell > m$ \item[(B)] $(\ell - m)$ is a divisor of $m \cdot (m+\ell )$ and \item[(C)] $n = \frac{m\cdot (m+\ell)}{\ell - m}$ \end{enumerate} \item[(b)] As we have already seen $d$ and $a_1$ must satisfy both conditions (2) and (3). 
However, under conditions (A), (B), and (C), the two conditions (2) and (3) are, in fact, equivalent, as we have already seen; so $d = \frac{-2a_1}{2m+\ell + n-1}$ (condition (3)) will suffice. \item[(c)] Note that in condition (C), if we choose $m$ and $\ell$ such $\ell - m$ is \ positive and $(\ell - m)$ is a divisor of $m$, then clearly the number $n = \frac{m\cdot (m+\ell)}{\ell - m}$, will be a natural number. If we set $\ell - m = t$, then $m+\ell = t + 2m$, so that $$ n= \frac{m\cdot(t + 2m)}{t} = m + \frac{2m^2}{t}. $$ \noindent So if we take $t$ to be a divisor of $m$, this will be sufficient for $\frac{2m^2}{t}$ to be a positive integer. Indeed, set $m = M\cdot t$, then $n = M\cdot t + \frac{2M^2t^2}{t} = M\cdot t + 2M^2 \cdot t = t\cdot M \cdot (1 + 2M)$. Also, in condition (3) , if we set $a_1 = a$, then (since $\ell = m+t = Mt+t$) $ \begin{array}{rcl} d & = & \frac{-2a}{2M\cdot t+(Mt+t)+Mt + 2M^2 t - 1};\\ \\ d & = & \frac{-2a}{4Mt + t + 2M^2 t -1} .\end{array}$ (4) \noindent Thus, the formulas $\ell = Mt+t,\ n = Mt+2M^2 \cdot t$ and (4) will generate, for each pair of values of the natural numbers $M$ and $t$, an arithmetic progression that satisfies the conditions of the problem; for any nonzero value of the first term $a$. \noindent{\bf Numerical Example:} If we take $t=3$ and $M=4$, we then have $m = M\cdot t = 3 \cdot 4 = 12;\ n = t\cdot M \cdot (1+2M) = 12 \cdot (1 + 8) = 108$, and $\ell = m + t = 12+3=15$. And, $$d= \frac{-2a}{2m+\ell+n-1} = \frac{-2a}{24+15 + 108-1} = \frac{-2a}{146} = \frac{-a}{73}. $$ Now let us compute $$\begin{array}{rcl}a_1 + \ldots + a_m& =& {\displaystyle \frac{m\cdot [2a + (m-1)d]}{2} = \frac{12\cdot \left[2a+11 \cdot \left(\frac{-a}{73}\right)\right]}{2}}\\ \\ & =& {\displaystyle \frac{12\cdot [146a - 11a]}{2\cdot 73} = \frac{6\cdot(135a)}{73}} = {\displaystyle \frac{810a}{73}}.\end{array} $$ \noindent Next, $$\begin{array}{rl} & a_{m+1}+\ldots + a_{m+n^{\prime}}\\ \\ =& \frac{n\cdot [2a+(m+n-1)d]}{2}\\ \\ = & \frac{108 \cdot \left[2a+(24+108-1)\cdot \left(\frac{-a}{73}\right)\right]}{2}\\ \\ = & \frac{108}{2} \cdot \frac{[146a -131 a]}{73}\\ \\ = &\frac{(54)(15a)}{73} = \frac{810a}{73}\end{array} $$ \noindent and $$ \begin{array}{cl} & a_{m+1} + \ldots + a_{m+\ell} \\ \\ = & {\displaystyle \frac{ \ell \cdot[2a + (2m+\ell-1)d]}{2}}\\ \\ = & {\displaystyle\frac{15\cdot\left[2a+(24+15-1) \cdot \left(\frac{-a}{73}\right)\right]}{2}} \\ \\ = & {\displaystyle \frac{15}{2} \cdot \frac{[146a - 38a]}{73}}\\ \\ = & {\displaystyle \frac{15}{2} \cdot \frac{(108)a}{73} = \frac{(15)(54a)}{73}}\\ \\ = & {\displaystyle \frac{810a}{73}}. \end{array} $$ \noindent Thus, all three sums are equal to $\frac{810a}{73}$. \end{enumerate} \item[P20.] If the real numbers $a,b,c$ are consecutive terms of an arithmetic progression and $a^2, b^2, c^2$ are consecutive terms of a harmonic progression, what conditions must the numbers $a,b,c$ satisfy? Describe all such numbers $a,b,c$. \noindent{\bf Solution:} By hypothesis, we have $$2b=a + c \ {\rm and}\ \frac{2}{b^2} = \frac{1}{a^2} + \frac{1}{c^2} $$ \noindent so $a,b,c$ must all be nonzero real numbers. The second equation is equivalent to $b^2 = \frac{2a^2c^2}{a^2+c^2}$ and $abc \neq 0$; so that, $b^2 (a^2+c^2) = 2a^2c^2 \Leftrightarrow b^2 \cdot [(a+c)^2 - 2ac] = 2a^2 c^2$. 
Now substitute for $a+c = 2b$: $$\begin{array}{rl} & b^2 \cdot [(2b)^2 - 2ac] = 2a^2c^2 \\ \\ \Leftrightarrow & 4b^4 - 2acb^2 - 2a^2c^2 = 0;\\ \\ & 2b^4 - acb^2 - a^2 c^2 = 0 \end{array} $$ \noindent At this stage we could apply the quadratic formula since $b^2$ is a root of the equation $2x^2 -acx-a^2c^2 = 0$; but the above equation can actually be factored. Indeed, \hspace*{1.0in}$\begin{array}{rcl} b^4 - acb^2 + b^4 - a^2c^2 &= & 0;\\ \\ b^2(b^2 - ac) + (b^2)^2 - (ac)^2 & = & 0; \end{array} $ \hspace*{.52in}$\begin{array}{rcl} b^2 \cdot (b^2 - ac) + (b^2 - ac)(b^2 + ac) & = & 0;\\ (b^2 - ac) \cdot (2b^2 + ac) & = & 0 \end{array} $ (1) According to Equation (1), we must have $b^2 - ac= 0$; or alternatively $2b^2 + ac = 0$. Consider the first possibility, $b^2 - ac = 0$. Then, by going back to equation $\frac{2}{b^2}= \frac{1}{a^2} + \frac{1}{c^2}$ we obtain $\frac{2}{ac} = \frac{1}{a^2} + \frac{1}{c^2} \Leftrightarrow \frac{2a^2c^2}{ac} = a^2 + c^2 \Leftrightarrow 2ac = a^2 + c^2$; $a^2 + c^2 - 2ac = 0 \Leftrightarrow (a-c)^2 = 0$; $a = c$ and thus $2b = a+c$ implies $b = a = c$. Next, consider the second possibility in Equation (1): $2b^2 + ac = 0 \Leftrightarrow 2b^2 = -ac$; which clearly implies that one of $a$ and $c$ must be positive, the other negative. Once more going back to \hspace*{.75in}$\begin{array}{rl} & \frac{2}{b^2} = \frac{1}{a^2} + \frac{1}{c^2};\ \frac{4}{2b^2} = \frac{1}{a^2} + \frac{1}{c^2}\\ \\ \Leftrightarrow & \frac{4}{-ac} = \frac{c^2+a^2}{a^2c^2 }\\ \\ \Leftrightarrow & -4ac = c^2 + a^2;\ a^2 + 4ac + c^2 = 0 \end{array}$ (2) Let $t = \frac{a}{c};\ a=c\cdot t$ then Equation (2) yields (since $ac \neq 0$), \hspace*{1.5in}$t^2 + 4t+1 = 0$ (3) \noindent Applying the quadratic formula to Equation (3), we now have $$ \begin{array}{l}t = {\displaystyle \frac{-4\pm \sqrt{16-4}}{2} = \frac{-4 \pm 2\sqrt{3}}{2}};\\ \\ t = -2 \pm \sqrt{3}; \end{array} $$ \noindent note that both numbers $-2+\sqrt{3}$ and $-2-\sqrt{3}$ are negative and hence both acceptable as solutions, since we know that $a$ and $c$ have opposite sign, which means that $t = \frac{a}{c}$ must be negative. So we must have either $a = (-2 + \sqrt{3})c$; or alternatively $a = -(2+\sqrt{3})\cdot c$. Now, we find $b$ in terms of $c$. From $2b^2 = -ac$; $b^2 = -\frac{ac}{2}$; note that the last equation says that the numbers $- \frac{a}{2}, b, c$ are successive terms of a geometric progression (as are, for instance, the numbers $-a,b,\frac{c}{2}$, or the analogous triples obtained by switching $a$ with $c$). So, if $a = (-2+\sqrt{3})c$, then from $2b = a + c;\ b = \frac{a+c}{2} = \frac{(-2+\sqrt{3})c+c}{2} = \frac{(\sqrt{3}-1)c}{2}$. And if $a = -(2+\sqrt{3})c,\ b = \frac{a+c}{2} = \frac{-(2+\sqrt{3}) c+c}{2} = \frac{-(1+\sqrt{3})c}{2}$. So, in conclusion we summarize as follows: Any three real numbers $a,b,c$ such that $a,b,c$ are consecutive terms of an arithmetic progression and $a^2,b^2,c^2$ the successive terms of a harmonic progression must fall in exactly one of three classes: \begin{enumerate} \item[(1)] $a = b = c;\ c$ can be any nonzero real number \item[(2)] $a = (-2+\sqrt{3})\cdot c,\ b = \frac{(\sqrt{3}-1)c}{2};\ c$ can be any nonzero real number; \item[(3)] $a = -(2 + \sqrt{3})c,\ b= \frac{-(1+\sqrt{3})}{2}c;\ c$ can be any nonzero real number. \end{enumerate}
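The three classes above are easy to verify numerically; the following minimal Python sketch (with an arbitrary nonzero $c$) checks that in classes (2) and (3) the triple $a,b,c$ is arithmetic while $a^2,b^2,c^2$ is harmonic:

\begin{verbatim}
# Verify classes (2) and (3): a, b, c arithmetic and a^2, b^2, c^2 harmonic.
import math

def check(a, b, c):
    assert abs(2 * b - (a + c)) < 1e-12                    # arithmetic condition
    assert abs(2 / b**2 - (1 / a**2 + 1 / c**2)) < 1e-12   # harmonic condition

r3 = math.sqrt(3.0)
c = 5.0                                        # arbitrary nonzero c
check((-2 + r3) * c, (r3 - 1) * c / 2, c)      # class (2)
check(-(2 + r3) * c, -(1 + r3) * c / 2, c)     # class (3)
\end{verbatim}
\item[P21.]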
Prove that if the positive real numbers $\alpha, \beta, \gamma$ are consecutive members of a geometric progression, then $\alpha ^k + \gamma ^k \geq 2 \beta^k$, for every natural number $k$. \noindent{\bf Solution:} Given any natural number $k$, we can apply the arithmetic-geometric mean inequality of Theorem 10, with $n = 2$, and $a_1 = \alpha^k,\ a_2 = \gamma^k$, in the notation of that theorem: $$ \frac{\alpha^k + \gamma^k}{2} \geq \sqrt{\alpha^k \cdot \gamma^k} = \sqrt{(\alpha\gamma)^k}. $$ \noindent But since $\alpha ,\beta , \gamma$ are consecutive terms of a geometric progression, we must also have $\beta^2 = \alpha \gamma$. Thus the above inequality implies, $$ \begin{array}{rccl} & \frac{\alpha^k + \gamma^k}{2} & \geq & \sqrt{(\beta^2)^k};\\ & \frac{\alpha^k + \gamma^k}{2} & \geq & \sqrt{(\beta^k)^2} \\ \mathbb{R}a & \frac{\alpha^k + \gamma^k}{2} & \geq & \beta ^k\\ \mathbb{R}a & \alpha^k + \gamma^k & \geq & 2\beta^k, \end{array} $$ \noindent and the proof is complete. \end{enumerate} \section{\bf Unsolved problems} \begin{enumerate} \item[1.] Show that if the sequence $a_1,a_2,\ldots , a_n,\ldots$ , is an arithmetic progression, so is the sequence $c \cdot a_1, c\cdot a_2, \ldots, c \cdot a_n, \ldots$ , where $c$ is a constant. \item[2.] Determine the difference of each arithmetic progression which has first term $a_1 = 6$ and contains the numbers $62$ and $104$ as its terms. \item[3.] Show that the irrational numbers $\sqrt{2}, \ \sqrt{3},\ \sqrt{5}$ cannot be terms of an arithmetic progression. \item[4.] If $a_1, a_2, \ldots , a_n , \ldots$ is an arithmetic progression and $a_k = \alpha,\ a_m=\beta,\ a_{\ell}= \gamma$, show that the natural numbers $k,m,\ell$ and the real numbers $\alpha, \beta, \gamma$, must satisfy the condition $$ \alpha \cdot (m-\ell) + \beta \cdot (\ell - k) + \gamma \cdot (k - m)=0. $$ \noindent{\bf Hint:} Use the usual formula $a_n = a_1 + (n-1)d$, for $n = k,m,\ell$, to obtain three equations; subtract the first two and then the last two (or the first and the third) to eliminate $a_1$; then eliminate the difference $d$ (or solve for $d$ in each of the resulting equations). \item[5.] If the numbers $\alpha,\beta, \gamma$ are successive terms of an arithmetic progression, then the same holds true for the numbers $\alpha^2 \cdot(\beta +\gamma),\ \beta^2 \cdot (\gamma + \alpha),\ \gamma^2 \cdot (\alpha +\beta)$. \item[6.] If $S_k$ denotes the sum of the first $k$ terms of the arithmetic progression with first term $k$ and difference $d = 2k-1$, find the sum $S_1 + S_2 + \ldots + S_k$. \item[7.] We divide the odd natural numbers into groups or classes as follows: $\{1\},\{3,5\},\{7,9,11\}, \ldots $ ; the $n$th group contains $n$ odd numbers starting with $(n \cdot (n-1)+1)$ (verify this). Find the sum of the members of the $n$th group. \item[8.] We divide the even natural numbers into groups as follows: $\{2\},\{4,6\},$ \linebreak $\{8,10,12\},\ldots$ ; the $n$th group contains $n$ even numbers starting with $(n(n-1)+2)$. Find the 'sum of the members of the $n$th group. \item[9.] 
Let $n_1,n_2,\ldots , n_k$ be $k$ natural numbers such that $n_1 < n_2 < \ldots < n_k$; if the real numbers, $a_{n_1}, a_{n_2},\ldots , a_{n_k}$, are members of an arithmetic progression (so that the number $a_{n_i}$ is precisely the $n_i$th term in the progression; $i = 1,2,\ldots ,k)$, show that the real numbers: $$ \frac{a_{n_k}-a_{n_1}}{a_{n_2}-a_{n_1}},\ \frac{a_{n_k}-a_{n_2}}{a_{n_2}-a_{n_1}}\ , \ldots , \frac{a_{n_k}-a_{n_{k-1}}}{a_{n_2}- a_{n_1}}, $$ \noindent are all rational numbers. \item[10.] Let $m$ and $n$ be natural numbers. If in an arithmetic progression $a_1,a_2,\ldots , a_k,\ldots $; the term $a_m$ is equal to $\frac{1}{n}$; $a_m=\frac{1}{n}$, and the term $a_n$ is equal to $\frac{1}{m};\ a_n = \frac{1}{m}$, prove the following three statements. \begin{enumerate} \item[(a)] The first term $a_1$ is equal to the difference $d$. \item[(b)] If $t$ is any natural number, then $a_{t\cdot(mn)} = t$; in other words, the terms $a_{mn},a_{2mn},a_{3mn},\ldots $ , are respectively equal to the natural numbers $1,2,3,\ldots$ . \item[(c)] If $S_{t\cdot (mn)}$ ($t$ a natural number) denotes the sum of the first $(t\cdot m \cdot n)$ terms of the arithmetic progression, then $S_{t\cdot(mn)} = \frac{1}{2} \cdot (t\cdot mn+1)\cdot t$. In other words, $S_{mn} = \frac{1}{2}(mn+1)$, $S_{2mn} = \frac{1}{2} \cdot (2mn+1)\cdot 2,\ S_{3mn} = \frac{1}{2} \cdot (3mn+1)\cdot 3,\ldots $ . \end{enumerate} \item[11.] If the distinct real numbers $a,b,c$ are consecutive terms of a harmonic progression show that \begin{enumerate} \item[(a)] $\frac{2}{b} = \frac{1}{b-a} + \frac{1}{b-c}$ and \item[(b)] $\frac{b+a}{b-a} + \frac{b+c}{b-c} = 2$ \end{enumerate} \item[12.] If the distinct reals $\alpha, \beta, \gamma $ are consecutive terms of a harmonic progression, then the same is true for the numbers $\alpha, \alpha - \gamma, \alpha - \beta$. \item[13.] Let $a_1 = a,\ a_2, a_3, \ldots , a_n, \ldots$ be a geometric progression and $k, \ell, m$ natural numbers. If $a_k = \beta, \ a_{\ell}=\gamma ,\ a_m=\delta$, show that $\beta^{\ell -m} \cdot \gamma^{m-k} \cdot \delta^{k-\ell} = 1$. \item[14.] Suppose that $n$ and $k$ are natural numbers such that $n > k + 1$; and let $a_1 = a, a_2, \ldots , a_t , \ldots $ be a geometric progression, with positive ratio $r \neq 1$, and positive first term $a$. If $A$ is the value of the sum of the first $k$ terms of the progression and $B$ is the value of the sum of the last $k$ terms among the $n$ first terms, express the ratio $r$ in terms of $A$ and $B$ only; and also the first term $a$ in terms of $A$ and $B$. \item[15.] Find the sum $\left( a- \frac{1}{a}\right)^2 + \left(a^2 - \frac{1}{a^2}\right)^2 + \ldots + \left( a^n - \frac{1}{a^n}\right)^2$. \item[16.] Find the infinite sum $\left( \frac{1}{3} + \frac{1}{3^2} + \frac{1}{3^3} + \ldots \right)+ \left( \frac{1}{5} + \frac{1}{5^2} + \frac{1}{5^3} + \ldots \right)$\linebreak $+ \left( \frac{1}{9} + \frac{1}{9^2} + \frac{1}{9^3} + \ldots \right) + \ldots + \underset{k{\rm th\ sum}}{\left(\underbrace{ \frac{1}{(2^k+1)} + \frac{1}{(2^k+1)^2} + \frac{1}{(2^k+1)^3} + \ldots }\right)} + \ldots $ . \item[17.] Find the infinite sum $\frac{2}{7} + \frac{4}{7^2} + \frac{2}{7^3} + \frac{4}{7^4} + \frac{2}{7^5} + \frac{4}{7^6} + \ldots $ . \item[18.] If the numbers $\alpha, \beta, \gamma$ are consecutive terms of an arithmetic progression and the nonzero numbers $\beta , \gamma, \delta$ are consecutive terms of a harmonic progression, show that $\frac{\alpha}{\beta} = \frac{\gamma}{\delta}$. \item[19.]
Suppose that the positive reals $\alpha, \beta ,\gamma $ are successive terms of an arithmetic progression and let $x$ be the geometric mean of $\alpha $ and $\beta$; and let $y$ be the geometric mean of $\beta$ and $\gamma$. Prove that $x^2,\beta^2, y^2$ are successive terms of an arithmetic progression. Give two numerical examples. \item[20.] Show that if the nonzero real numbers $a,b,c$ are consecutive terms of a harmonic progression, then the numbers $a-\frac{b}{2},\ \frac{b}{2},\ c - \frac{b}{2}$, must be consecutive terms of a geometric progression. Give two numerical examples. \item[21.] Compute the following sums: \begin{enumerate} \item[(i)] $\frac{1}{2} + \frac{2}{2^2} + \ldots + \frac{n}{2^n}$ \item[(ii)] $1 + \frac{3}{2} + \frac{5}{4} + \ldots + \frac{2n-1}{2^{n-1}}$ \end{enumerate} \item[22.] Suppose that the sequence $a_1, a_2, \ldots , a_n, \ldots$ satisfies $a_{n+1} = (a_n + \lambda)\cdot \omega$, where $\lambda $ and $\omega$ are fixed real numbers with $\omega \neq 1$. \begin{enumerate} \item[(i)] Use mathematical induction to prove that for every natural number, $a_n = a_1 \cdot \omega^{n-1} + \lambda \cdot \left( \frac{\omega^n - \omega}{\omega -1 }\right)$. \item[(ii)] Use your answer in part (i) to show that, $$ \begin{array}{rcl}S_n & = & a_1 + a_2 + \ldots + a_n\\ & =& a_1 \cdot {\left(\displaystyle \frac{\omega^n - 1}{\omega - 1}\right) + \lambda \cdot \left( \frac{\omega^{n+1} - n \cdot \omega ^2 + (n-1) \omega}{(\omega - 1)^2}\right)}.\end{array} $$ \noindent($*$) Such a sequence is called a {\bf semi-mixed} progression. \end{enumerate} \item[23.] Prove part (ii) of Theorem 4. \item[24.] Work out part (viii) of Remark 5. \item[25.] Prove the analogue of Theorem 4 for geometric progressions: if the $(n - m+1)$ positive real numbers $a_m,a_{m+1}, \ldots , a_{n-1},a_n$ are successive terms of a geometric progression, then \begin{enumerate} \item[(i)] If the natural number $(n-m+1)$ is odd, then the geometric mean of the $(n-m+1)$ terms is simply the middle number $a_{(\frac{m+n}{2})}$. \item[(ii)] If the natural number $(n-m+1)$ is even, then the geometric mean of the $(n-m+1)$ terms must be the geometric mean of the two middle terms $a_{(\frac{n+m-1}{2})}$ and $a_{(\frac{n+m+1}{2})}$. \end{enumerate} \end{enumerate} \end{document}
\begin{document} \preprint{} \title{Precision quantum metrology and nonclassicality in linear and nonlinear detection schemes} \author{\'{A}ngel Rivas$^{1}$ and Alfredo Luis$^{2}$} \email{[email protected]} \affiliation{$^1$Institut f\"{u}r Theoretische Physik, Universit\"{a}t Ulm, Ulm D-89069, Germany.\\ $^2$ Departamento de \'{O}ptica, Facultad de Ciencias F\'{\i}sicas, Universidad Complutense, 28040 Madrid, Spain} \date{\today} \begin{abstract} We examine whether metrological resolution beyond coherent states is a nonclassical effect. We show that this is true for linear detection schemes but false for nonlinear schemes, and propose a very simple experimental setup to test it. We find a nonclassicality criterion derived from quantum Fisher information. \end{abstract} \pacs{03.65.Ca, 03.65.Ta, 42.50.Dv, 42.50.Ar} \maketitle Nonclassicality is a key concept supporting the necessity of the quantum theory. There is widespread consensus that the coherent states $| \alpha \rangle$ are the classical side of the borderline between the quantum and classical realms \cite{MW,border}. In quantum metrology it is usually believed that resolution beyond coherent states is a quantum effect, since this is achieved by famous nonclassical probe states, such as squeezed, number, or coherent superpositions of distinguishable states \cite{CA}. However, this does not mean that every state providing larger resolution than coherent states is nonclassical. In this work we test this belief by examining whether metrological resolution beyond coherent states is necessarily a nonclassical effect or not \cite{AAS}. To this end we find a novel nonclassicality criterion derived from quantum Fisher information. We demonstrate that the belief is true for linear detection schemes but false for nonlinear schemes. Nonlinear detection is a recently introduced item in quantum metrology that has plenty of promising possibilities and is being thoroughly studied and implemented in different areas such as quantum optics \cite{QO,QO2}, Bose-Einstein condensates \cite{BE,pe}, nanomechanical resonators \cite{NR}, and atomic magnetometry \cite{AM}. Throughout we focus on single-mode quantum light beams with complex amplitude operators $a$ such that $[a,a^\dagger]=1$ and $a | \alpha \rangle = \alpha | \alpha \rangle$. Resolution provided by different probe states is compared for the same mean number of photons $\bar{n}$ that represents the energy resources available for the measurement. We examine the following proposition: \textit{Proposition:} A probe state $\rho$ providing larger resolution than coherent states $| \alpha_\rho \rangle$ with the same mean number of photons $\bar{n}$ is nonclassical, where \begin{equation} \label{link} \bar{n} = \langle \alpha_\rho | a^\dagger a | \alpha_\rho \rangle = | \alpha_\rho |^2 = \textrm{tr} (\rho a^\dagger a ) . \end{equation} A customary signature of nonclassical behavior is the failure of the Glauber-Sudarshan $P (\alpha)$ phase-space representation to exhibit all the properties of a classical probability density \cite{MW}. This occurs when $P (\alpha)$ takes negative values, or when it becomes more singular than a delta function. To test the proposition we must specify how resolution is assessed. \textit{Resolution.---} In a detection scheme the signal to be detected $\chi$ is encoded in the input probe state $\rho$ by a transformation $\rho \rightarrow \rho_\chi$. 
For definiteness, we focus on the most common and practical case of unitary transformations with a constant generator $G$ independent of the parameter \begin{equation} \rho_\chi = \exp \left ( i \chi G \right ) \rho \exp \left ( - i \chi G \right ) . \end{equation} The value of $\chi$ is inferred from the outcomes of measurements performed on $\rho_\chi$. The ultimate resolution of such an inference is given by the quantum Fisher information $F_Q (\rho_\chi)$, since the variance of any unbiased estimator $\tilde{\chi}$ is bounded from below in the form \cite{CR,BC} \begin{equation} \left ( \Delta \tilde{\chi} \right )^2 \geq \frac{1}{N F_Q (\rho_\chi)} , \end{equation} where $N$ is the number of independent repetitions of the measurement. Better resolution is equivalent to larger quantum Fisher information, which can be expressed as \cite{BC,FI} \begin{equation} \label{Fi} F_Q (\rho_\chi) = 2 \sum_{j,k} \frac{(r_j - r_k )^2}{r_j + r_k} \left | \langle r_j | G | r_k \rangle \right |^2 , \end{equation} where $| r_j \rangle $ are the eigenvectors of $\rho$ with eigenvalues $r_j$ and the sum includes all the cases with $r_j + r_k \neq 0$. In particular, for uniparametric unitary transformations $F_Q$ is independent of $\chi$ \cite{FI}. In order to reach the ultimate sensitivity predicted by the quantum Fisher information, an optimum measurement and an efficient estimator are required \cite{BC}. If we consider the maximum-likelihood estimator, the number of repetitions required to reach the efficient regime may depend on the probe state \cite{BLC}. In order to focus on the intrinsic capabilities of different schemes we will assume that $N$ is large enough so that optimum conditions are reached in all cases; schemes are then compared through their quantum Fisher information. Note also that resolution depends on the duration of the measurement, so any meaningful comparison between different schemes should be done on an equal-time basis. Let us show three useful properties of the quantum Fisher information: i) For pure states, such as coherent states $| \alpha \rangle$, the quantum Fisher information becomes proportional to the variance of the generator \cite{BC} \begin{equation} \label{var} F_Q (| \alpha \rangle ,G) = 4 \left ( \Delta_\alpha G \right )^2 = 4 \left ( \langle \alpha | G^2 | \alpha \rangle - \langle \alpha | G | \alpha \rangle^2 \right ). \end{equation} ii) The quantum Fisher information is convex. For a proof based on the monotonicity of the quantum Fisher information under completely positive maps see Ref. \cite{FU}. A much simpler proof is given by a straightforward use of the convexity of the Fisher information and the Braunstein-Caves inequality \cite{BC}. Thus, for classical states \begin{equation} \rho_\mathrm{class} = \int d^2 \alpha P_\mathrm{class} (\alpha) | \alpha \rangle \langle \alpha |, \end{equation} where $ P_\mathrm{class} (\alpha)$ is a non-negative function no more singular than a delta function, convexity implies the following bound for the quantum Fisher information of classical states \begin{eqnarray} F_Q (\rho_\mathrm{class} ,G) &\leq& \int d^2 \alpha P_\mathrm{class} (\alpha) F_Q (| \alpha \rangle ,G) \nonumber\\ &=& 4\int d^2 \alpha P_\mathrm{class} (\alpha) \left ( \Delta_\alpha G \right )^2.\label{conv} \end{eqnarray} iii) In most cases it is rather difficult to compute $F_Q (\rho ,G)$ analytically, especially in infinite-dimensional systems.
A similar but simpler performance measure is \begin{equation} \label{spm} \Lambda^2 \left ( \rho,G \right ) = \textrm{tr} \left ( \rho^2 G^2 \right ) - \textrm{tr} \left ( \rho G \rho G \right ) , \end{equation} or, equivalently, in the same conditions of Eq. (\ref{Fi}), \begin{equation} \label{L} \Lambda^2 \left ( \rho,G \right ) = \frac{1}{2} \sum_{j,k} \left ( r_j - r_k \right )^2 \left | \langle r_k | G | r_j \rangle \right |^2 , \end{equation} which for pure states such as coherent states also becomes the variance of the generator $\Lambda ( | \alpha \rangle, G )= \Delta_\alpha G$ \cite{RL}. This is derived from the Hilbert-Schmidt distance between $\rho_\chi$ and $\rho$ in the same terms in which the quantum Fisher information is derived from the Bures distance \cite{BC,BF}. The useful point here is that from Eqs. (\ref{Fi}) and (\ref{L}) and given that $r_k + r_\ell \leq 1$ it holds that \begin{equation} \label{FL} F_Q \left ( \rho,G \right ) \geq 4 \Lambda^2 \left ( \rho,G \right ) , \end{equation} the equality being reached for pure states. \textit{Nonclassicality from quantum Fisher information.---} For the sake of convenience let us express the variance of $G$ on coherent states as a mean value \begin{equation} \label{DGAG} \left ( \Delta_\alpha G \right )^2 = \langle \alpha | A_G | \alpha \rangle , \quad A_G = G^2 - :G^2: , \end{equation} where $: \; :$ denotes normal order, and $G$ in $:G^2:$ must be expressed in its normally ordered form so that $\langle \alpha |:G^2: | \alpha \rangle = \langle \alpha |G | \alpha \rangle^2$ . A key point is that $ \langle \alpha | A_G | \alpha \rangle$ gives the quantum Fisher information of coherent states, \begin{equation} \label{FAc} F_Q (| \alpha \rangle ,G) = 4 \langle \alpha | A_G | \alpha \rangle , \end{equation} so that the bound (\ref{conv}) for the quantum Fisher information of classical states reads \begin{eqnarray} F_Q (\rho_\mathrm{class} ,G) &\leq& 4 \int d^2 \alpha P_\mathrm{class} (\alpha) \langle \alpha | A_G | \alpha \rangle \nonumber\\ &=& 4 \textrm{tr} \left ( \rho_\mathrm{class} A_G \right ). \end{eqnarray} This relation is derived from the convexity of $F_Q (\rho ,G)$, so it relies entirely on the classical nature of $P_\mathrm{class} (\alpha)$. Therefore its violation provides the following nonclassicality criterion: \begin{equation} \label{ncc} F_Q (\rho ,G) > 4 \textrm{tr} \left ( \rho A_G \right ) \longrightarrow \rho \textrm{ is nonclassical} . \end{equation} Since this criterion is formulated in terms of the quantum Fisher information, it will be useful to discuss the interplay between improved metrological resolution and nonclassicality. The key point is to link $\textrm{tr} ( \rho A_G )$ in the nonclassical criterion (\ref{ncc}) with the quantum Fisher information of coherent states with the same mean number of photons $F_Q (| \alpha_\rho \rangle,G) = 4 \langle \alpha_\rho | A_G | \alpha_\rho \rangle$. This is straightforward when $A_G \propto a^\dagger a$. To study this in detail let us split the analysis in linear and nonlinear schemes. \textit{Linear schemes.---} By linear schemes we mean that the signal is encoded via input-output transformations where the output complex amplitudes are linear functions of the input ones and their conjugates. 
Their generators are polynomials of $a, a^\dagger$ up to second order, embracing all traditional interferometric techniques exemplified by the phase shifts generated by the photon-number operator \begin{equation} \label{GAa} G = A_G = a^\dagger a , \end{equation} so that $G$ and $A_G$ coincide. In this case the resolution (quantum Fisher information) provided by coherent probe states is given by its mean number of photons \begin{equation} F_Q (| \alpha_\rho \rangle ,a^\dagger a ) = 4\langle \alpha_\rho | a^\dagger a | \alpha_\rho \rangle = 4 \left | \alpha_\rho \right |^2 = 4 \textrm{tr} \left ( \rho a^\dagger a \right ) , \end{equation} where we have used Eqs. (\ref{link}), (\ref{FAc}), and (\ref{GAa}). The probe states $\rho$ providing larger resolution than coherent states $| \alpha_\rho \rangle$ with the same mean number of photons satisfy \begin{equation} F_Q (\rho ,a^\dagger a ) > F_Q (| \alpha_\rho \rangle ,a^\dagger a ) = 4 \textrm{tr} \left ( \rho a^\dagger a \right ) , \end{equation} so that from the nonclassical criterion (\ref{ncc}) they are necessarily nonclassical states and the proposition being tested is true. This result also holds for other generators of linear transformations such as $G = a \exp (i \theta) + a^\dagger \exp (-i \theta)$, which generates displacements of the quadratures, and $G = a^2 \exp (i \theta) + a^{\dagger 2} \exp (-i \theta)$, which generates quadrature squeezing, where $\theta$ is an arbitrary phase \cite{ds}. This is because $A_G = 1$ and $A_G = 4 a^\dagger a + 2$, respectively, so that $4 \textrm{tr} ( \rho A_G ) = F_Q ( | \alpha_\rho \rangle , G)$. This also holds for two-mode SU(2) generators \begin{equation} \label{GJ} G = \bm{u} \cdot \bm{J} , \quad A_G = a^\dagger_1 a_1 + a_2^\dagger a_2 , \end{equation} where $\bm{u}$ is a three-dimensional unit real vector and $\bm{J}$ are the bosonic realization of the angular momentum operators that generate the SU(2) group \begin{equation} J_x = a^\dagger_1 a_2 + a_1 a_2^\dagger, J_y = i ( a^\dagger_1 a_2 - a_1 a_2^\dagger ), J_z = a^\dagger_1 a_1 - a_2^\dagger a_2 . \end{equation} This describes all two-beam lossless optical devices, such as beam splitters, phase plates, and two-beam interferometers. In this two-mode context the coherent states $| \alpha \rangle$ refer to the product of single-mode coherent states $| \alpha \rangle = | \alpha_1 \rangle | \alpha_2 \rangle$ with mean number of photons $\bar{n}=|\alpha_1|^2+|\alpha_2|^2 = \langle \alpha | A_G | \alpha \rangle$. For a simple derivation of $A_G$ in Eq. (\ref{GJ}), note that any $\bm{u} \cdot \bm{J}$ is in normal order, normal order commutes with SU(2) transformations, $\bm{u} \cdot \bm{J}$ is SU(2) equivalent to $J_z$, with $J_z^2 -: J_z^2 : = a^\dagger_1 a_1 + a_2^\dagger a_2$, and $a^\dagger_1 a_1 + a_2^\dagger a_2$ is SU(2) invariant. This is $A_{UGU^\dagger} = U A_G U^\dagger$ if $G$ is in normal order and $U$ is a SU(2) unitary transformation. When the angular momentum $\bm{J}$ refers collectively to a system of qubits, it has been demonstrated \cite{PS} that improved resolution beyond coherent states implies entanglement between qubits. We recover this result by noticing that spin nonclassicality is equivalent to entanglement \cite{eqb}. This equivalence no longer holds when entanglement refers to the entanglement between field modes; this is to say that nonclassical factorized states $| \psi_1 \rangle | \psi_2 \rangle$, where $| \psi_j \rangle$ is in mode $a_j$, can provide better resolution than coherent states. 
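\textit{Numerical illustration (linear case).---} As a minimal numerical sketch of the criterion (\ref{ncc}) for $G=a^\dagger a$ (assuming the NumPy library, a truncated Fock basis, and an illustrative probe state; the truncation dimension and the chosen state are ours and not taken from the analysis above), one may evaluate Eq. (\ref{Fi}) directly and compare it with $4\,\textrm{tr} ( \rho \, a^\dagger a )$:
\begin{verbatim}
import numpy as np

# Sketch: spectral formula for the quantum Fisher information in a
# truncated Fock basis, compared with the coherent-state benchmark
# 4*tr(rho a^dag a) for the linear generator G = a^dag a.
DIM = 40                                  # Fock-space truncation (illustrative choice)

def coherent(alpha, dim):
    psi = np.zeros(dim, dtype=complex)
    psi[0] = 1.0
    for n in range(1, dim):
        psi[n] = psi[n - 1] * alpha / np.sqrt(n)
    return psi / np.linalg.norm(psi)

def qfi(rho, G, tol=1e-12):
    r, V = np.linalg.eigh(rho)            # eigenvalues r_j and eigenvectors |r_j>
    Gm = V.conj().T @ G @ V               # matrix elements <r_j|G|r_k>
    F = 0.0
    for j in range(len(r)):
        for k in range(len(r)):
            s = r[j] + r[k]
            if s > tol:                   # keep only terms with r_j + r_k != 0
                F += 2.0 * (r[j] - r[k]) ** 2 / s * abs(Gm[j, k]) ** 2
    return F

a = np.diag(np.sqrt(np.arange(1, DIM)), k=1)   # annihilation operator
n_op = a.conj().T @ a

psi = coherent(1.0, DIM) + coherent(-1.0, DIM) # even superposition of coherent states
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

nbar = np.real(np.trace(rho @ n_op))
print("F_Q(rho, n) =", qfi(rho, n_op))
print("4 tr(rho n) =", 4 * nbar)
\end{verbatim}
For the even superposition of coherent states used here the printed $F_Q$ exceeds the coherent-state benchmark $4\bar{n}$, which simultaneously signals super-coherent resolution and, by (\ref{ncc}), nonclassicality; consistently with the bound derived above, a classical mixture of coherent states stays at or below that benchmark.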
\textit{Nonlinear schemes.---} By nonlinear detection schemes we mean that the signal is encoded via input-output transformations where the output complex amplitudes are not linear functions of the input ones. A suitable example is given by \begin{equation} \label{GAGnl} G = \left ( a^\dagger a \right )^2, \qquad A_G = 4 a^{\dagger 3} a^3 + 6 a^{\dagger 2} a^2 + a^\dagger a , \end{equation} and the key point is that $A_G$ is no longer proportional to the number operator. In practical quantum-optical terms this corresponds to light propagation through nonlinear Kerr media \cite{MW}. Next we show that there are classical states that provide larger resolution than coherent states with the same mean number of photons, so that the proposition being tested is false. To this end let us consider the mixed probe state \begin{equation} \label{rclass} \rho_\mathrm{class} = p | \alpha /\sqrt{p} \rangle \langle \alpha /\sqrt{p}| + (1-p)| 0 \rangle \langle 0 |, \end{equation} where $| \alpha /\sqrt{p} \rangle$ is a coherent state, $| 0 \rangle$ is the vacuum, and $0 < p < 1$. The state $\rho_\mathrm{class}$ has the same mean number of photons as the coherent state $| \alpha \rangle$ for every $p$. Since in general $F_Q ( \rho_\mathrm{class},G )$ is difficult to compute when $\rho_\mathrm{class}$ is mixed, we resort to Eq. (\ref{FL}), so that if \begin{equation} \label{cond} 4 \Lambda^2 ( \rho_\mathrm{class} ,G) > F_Q (| \alpha \rangle ,G) , \end{equation} then $F_Q (\rho_\mathrm{class} ,G) > F_Q (| \alpha \rangle ,G)$ and $\rho_\mathrm{class}$ provides larger resolution than $| \alpha \rangle$. Using Eq. (\ref{spm}), the condition (\ref{cond}) is equivalent to the following relation between variances of $G$ in coherent states \begin{equation} \label{apa} p^2 \left ( \Delta_\frac{\alpha }{\sqrt{p}} G \right )^2 > \left ( \Delta_\alpha G \right )^2 , \end{equation} where we have used that $| 0 \rangle$ is an eigenstate of $G$ with null eigenvalue. From Eqs. (\ref{DGAG}) and (\ref{GAGnl}), \begin{equation} \left ( \Delta_\alpha G \right )^2 = 4 | \alpha |^6 + 6 | \alpha |^4 + | \alpha |^2 , \end{equation} and from Eq. (\ref{apa}) the state $\rho_\mathrm{class}$ provides larger resolution than $| \alpha \rangle$ provided that $|\alpha |^2 > \sqrt{p}/2$, which can easily be fulfilled. We are able to observe this improvement even with a very simple and practical measuring scheme such as the homodyne detection illustrated in Fig. 1. To that end we evaluate the Fisher information $F_C ( \rho_\mathrm{class},G )$ of the measurement for $\rho_\mathrm{class}$ in Eq. (\ref{rclass}), \begin{equation} F_C ( \rho_\mathrm{class},G ) = \int dx \frac{1}{P ( x| \chi )} \left ( \frac{ \partial P ( x | \chi)}{\partial \chi} \right )^2 , \end{equation} where $P ( x| \chi ) = \langle x | \rho_\chi | x \rangle $ is the probability of the outcome $x$ of the $X$ quadrature, with $X = a^\dagger + a$ and $X |x \rangle = x | x \rangle $. We consider very small $\chi$, so that the classical Fisher information is evaluated at $\chi=0$. We also assume an optimum value for the phase of the coherent amplitude, $\alpha = i \sqrt{\bar{n}}$. Using the results in Ref. \cite{QO2} we get for large $\bar{n}$ \begin{equation} F_C ( \rho_\mathrm{class},G) = 16 \frac{\bar{n}^3}{p} = \frac{1}{p} F_C ( | \alpha \rangle ,G) . \end{equation} Thus, the Fisher information for the classical probe state $\rho_\mathrm{class}$ is above the value for coherent states with the same mean number of photons $| \alpha \rangle$, especially when $p \rightarrow 0$.
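\textit{Numerical check (nonlinear case).---} The advantage of the classical probe (\ref{rclass}) over $| \alpha \rangle$ for $G=(a^\dagger a)^2$ can also be confirmed with a short numerical sketch (again assuming NumPy and a truncated Fock basis; the values of $\alpha$, $p$ and the truncation below are illustrative choices of ours). It evaluates $\Lambda^2$ from Eq. (\ref{spm}) and uses the bound (\ref{FL}), so that $4 \Lambda^2 (\rho_\mathrm{class},G)$ exceeding $F_Q(|\alpha\rangle,G)=4 ( 4|\alpha|^6+6|\alpha|^4+|\alpha|^2 )$ already certifies the larger resolution:
\begin{verbatim}
import numpy as np

# Sketch: Lambda^2(rho, G) = tr(rho^2 G^2) - tr(rho G rho G) for the mixed
# classical probe rho = p |alpha/sqrt(p)><alpha/sqrt(p)| + (1-p) |0><0|,
# with the nonlinear generator G = (a^dag a)^2, diagonal in the Fock basis.
DIM = 120                          # truncation (illustrative; ample for <n> = 16)
alpha, p = 2.0, 0.25               # illustrative values with |alpha|^2 > sqrt(p)/2

def coherent(beta, dim):
    psi = np.zeros(dim, dtype=complex)
    psi[0] = 1.0
    for n in range(1, dim):
        psi[n] = psi[n - 1] * beta / np.sqrt(n)
    return psi / np.linalg.norm(psi)

G = np.diag(np.arange(DIM, dtype=float) ** 2)      # (a^dag a)^2

big = coherent(alpha / np.sqrt(p), DIM)
vac = np.zeros(DIM, dtype=complex)
vac[0] = 1.0
rho = p * np.outer(big, big.conj()) + (1 - p) * np.outer(vac, vac.conj())

lam2 = np.real(np.trace(rho @ rho @ G @ G) - np.trace(rho @ G @ rho @ G))
benchmark = 4 * (4 * abs(alpha) ** 6 + 6 * abs(alpha) ** 4 + abs(alpha) ** 2)

print("4*Lambda^2(rho_class, G) =", 4 * lam2)
print("F_Q(|alpha>, G)          =", benchmark)
\end{verbatim}
With the values chosen above, $|\alpha|^2 = 4 > \sqrt{p}/2$, and the printed $4\Lambda^2$ indeed exceeds the coherent-state benchmark, in agreement with the analytical condition (\ref{apa}).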
\begin{figure} \caption{Sketch of a homodyne measurement.} \end{figure} \textit{Discussion.---} To some extent this may be regarded as a paradoxical result, especially in the limit $p \rightarrow 0$, where $\rho_\mathrm{class}$ tends to the vacuum, $\langle 0 | \rho_\mathrm{class} | 0 \rangle \rightarrow 1$, since the vacuum state is useless for detection. Nevertheless, next we show that this is a fully meaningful and worthy result. To this end let us consider that we repeat the measurement $N$ times with the probe $\rho_\mathrm{class}$ in Eq. (\ref{rclass}). That is equivalent to getting $Np$ times the result of the probe state $| \alpha /\sqrt{p} \rangle$ and $N(1-p)$ times the useless vacuum. Therefore the useful resources are $N | \alpha |^2$ photons distributed in $Np$ runs of $| \alpha |^2 /p$ photons. When the probe is $| \alpha \rangle$ (this is the case $p=1$), all runs are useful and we get the same resources $N | \alpha |^2$ distributed in $N$ runs of $| \alpha |^2$ photons. For linear detection schemes the two allocations of resources provide essentially the same resolution for every $p$ because for a large number of photons, $\langle \alpha | 0 \rangle \simeq 0$, it holds that $F_Q (\rho_\mathrm{class}, a^\dagger a ) \simeq p F_Q (| \alpha /\sqrt{p} \rangle, a^\dagger a ) = F_Q (| \alpha \rangle, a^\dagger a )$. However, the nonlinearity greatly privileges large photon numbers, so that the best strategy is to put as many photons as possible in a single run, instead of splitting them into several runs. More specifically, for large $|\alpha |$ it holds that $\langle \alpha | 0 \rangle \simeq 0$ and $F_Q (\rho_\mathrm{class} ,G) \simeq 16 |\alpha |^6 /p^2$ while $F_Q (| \alpha \rangle ,G) \simeq 16 |\alpha |^6 $, so that $\rho_\mathrm{class}$ provides much larger resolution than $| \alpha \rangle$ as $p \rightarrow 0$. Incidentally, the above calculation shows that when $\langle \alpha | 0 \rangle \simeq 0$ we get $F_C ( \rho_\mathrm{class} ,G ) \simeq p F_Q ( \rho_\mathrm{class},G )$. This is to say that whereas both $F_{C,Q}$ increase when $p$ decreases, $F_Q$ increases faster than $F_C$. Finally, it might be argued that the improvement of resolution in nonlinear schemes, and the differences between different classical input probes just discussed, may be ascribed to nonclassicality induced by the nonlinear transformations. We can rule out this possibility. The quantum Fisher information does not depend on the value of the signal, so that the optimum sensitivity cannot depend on the amount of nonclassicality induced by the transformation. In particular, for the usual case of small signals the induced nonclassicalities will be negligible. \textit{Conclusions.---} We have obtained a general nonclassicality test derived from quantum Fisher information. For linear detection schemes this test demonstrates that improved resolution beyond coherent states is a nonclassical feature. For nonlinear schemes the situation is different, since mixed classical states can provide better resolution than coherent states. This result is very attractive, since the key point of classical states is that they are extremely robust against experimental imperfections \cite{QO2,pe} and they are easy to generate in the laboratory. A. R. acknowledges Susana F. Huelga for illuminating comments and financial support from the EU Integrated Projects QAP, QESSENCE, and the STREP action CORNER. A. L. acknowledges support from official Spanish projects No. FIS2008-01267 and QUITEMAD S2009-ESP-1594. \end{document}
\begin{document} \begin{abstract} We show that a connection with skew-symmetric torsion satisfying the Einstein metricity condition exists on an almost contact metric manifold exactly when it is D-homothetic to a cosymplectic manifold. In dimension five, we get that the existence of a connection with skew torsion satisfying the Einstein metricity condition is equivalent to the existence of a Sasaki-Einstein 5-manifold and, vice versa, any Sasaki-Einstein 5-manifold generates a two-parameter family of connections with skew torsion satisfying the Einstein metricity condition. Formulas for the curvature and the Ricci tensors of these connections are presented in terms of the Sasaki-Einstein SU(2) structures. \end{abstract} \title[Non-symmetric Riemannian gravity and Sasaki-Einstein 5-manifolds]{Non-symmetric Riemannian gravity and Sasaki-Einstein 5-manifolds} \date{\today} \author{Stefan Ivanov} \address[Ivanov]{University of Sofia ``St. Kl. Ohridski''\\ Faculty of Mathematics and Informatics\\ Blvd. James Bourchier 5\\ 1164 Sofia, Bulgaria; Institute of Mathematics and Informatics, Bulgarian Academy of Sciences} \email{[email protected]} \author{Milan Zlatanovi\'c} \address[Zlatanovi\'c]{Department of Mathematics, Faculty of Science and Mathematics, University of Ni\v s, Vi\v segradska 33, 18000 Ni\v s, Serbia} \email{[email protected]} \date{\today } \maketitle \tableofcontents \setcounter{tocdepth}{2} \section{Introduction} In this note we consider the geometry arising on an odd-dimensional manifold from the Einstein gravitational theory on a non-symmetric (generalized) Riemannian manifold $(M^{2n+1}, G=g+F)$, where the generalized metric $G$ has non-degenerate symmetric part $g$ and a skew-symmetric part $F$ of rank $2n$. \subsection{Motivation from general relativity} General relativity (GR) was developed by Albert Einstein in 1916 \cite{Ein1} and contributed to by many others after 1916. In GR the equation $ds^2=g_{ij}dx^idx^j, \quad (g_{ij}=g_{ji})$ is valid, where $g_{ij}$ are functions of a point. In GR, which is a four-dimensional space-time continuum, metric properties depend on the mass distribution. The magnitudes $g_{ij}$ are known as \emph{gravitational potentials}. Christoffel symbols, commonly denoted by $\Gamma^k_{ij}$, play the role of the magnitudes which determine the gravitational force field. General relativity explains gravity as the curvature of space-time. In GR the metric tensor obeys the Einstein equations $R_{ij}-\frac 12 R g_{ij}=T_{ij}$, where $R_{ij}$ is the Ricci tensor of the metric of space-time, $R$ is the scalar curvature of the metric, and $T_{ij}$ is the energy-momentum tensor of matter. In 1922, Friedmann \cite{Fried} found a solution in which the universe may expand or contract, and later Lema\^{\i}tre \cite{Lema} derived a solution for an expanding universe. However, Einstein believed that the universe was apparently static, and since static cosmology was not supported by the general relativistic field equations, he added the cosmological constant $\Lambda$ to the field equations, which became $R_{ij}-\frac 12 R g_{ij}+\Lambda g_{ij} =T_{ij}$. From 1923 to the end of his life Einstein worked on various variants of Unified Field Theory \cite{Ein}. This theory had the aim of uniting the theory of gravitation and the theory of electromagnetism.
Starting from 1950, Einstein used the real non-symmetric basic tensor $G$, sometimes called a \emph{generalized Riemannian metric/manifold}. In this theory the symmetric part $g_{ij}$ of the basic tensor $G_{ij}$ ($G_{ij}=g_{ij}+F_{ij}$) is related to gravitation, and the skew-symmetric one $F_{ij}$ to electromagnetism. More recently the idea of a non-symmetric metric tensor appears in Moffat's non-symmetric gravitational theory \cite{Mof}. In Moffat's theory the skew-symmetric part of the metric tensor represents a Proca field (massive Maxwell field) which is a part of the gravitational interaction, contributing to the rotation of galaxies. While on a Riemannian space the connection coefficients are expressed as functions of the metric $g_{ij}$, in Einstein's works the connection between these magnitudes is determined by the so-called \emph{Einstein metricity condition}, i.e. the non-symmetric metric tensor $G$ and the connection components $\Gamma_{ij}^k$ are connected by \begin{equation}\label{metein} \frac{\partial G_{ij}}{\partial x^m}-\Gamma ^p_{im}G_{pj}- \Gamma ^p_{mj}G_{ip}=0. \end{equation} A generalized Riemannian manifold satisfying the Einstein metricity condition \eqref{metein} is called an NGT-space \cite{Ein,LMof,Mof}. The choice of a connection in NGT is uniquely determined in terms of the structure tensors \cite{IZl}. Special attention has been paid to the case when the torsion of the NGT connection is totally skew-symmetric with respect to the symmetric part $g$ of $G$ \cite{Eis} (NGTS connection for short). One reason for that comes from supersymmetric string theories and non-linear $\sigma$-models (see e.g. \cite{Str,y4,BSethi,GMPW,GPap} and references therein), as well as from the theory of gravity itself \cite{Ham}. In even dimensions with nondegenerate skew-symmetric part $F$ one arrives at Nearly K\"ahler manifolds; namely, an almost Hermitian manifold is NGTS exactly when it is a Nearly K\"ahler manifold \cite[Theorem~3.3]{IZl}. In this case the NGTS connection coincides with the Gray connection \cite{Gr1,Gr2,Gr3}, which is the unique connection with skew-symmetric torsion preserving the Nearly K\"ahler structure \cite{FI}. Nearly K\"ahler manifolds (called almost Tachibana spaces in \cite{Yano}) were developed by Gray \cite{Gr1,Gr2,Gr3} and have been intensively studied since then in \cite{Ki,N1,MNS,N2,But,FIMU}. Nearly K\"ahler manifolds also appear in supersymmetric string theories (see e.g. \cite{Po1,PS,LNP,GLNP}). The first complete, and therefore compact, inhomogeneous examples of 6-dimensional Nearly K\"ahler manifolds were presented recently in \cite{FosH}. In odd dimensions $2n+1$ with rank $F=2n$ one gets an almost contact metric manifold. It is shown in \cite{IZl} that a connection with skew-symmetric torsion satisfying the Einstein metricity condition exists on an almost contact metric manifold exactly when it is \emph{almost nearly cosymplectic} (the precise definition is given in Definition~\ref{defanc} below). The aim of this note is to investigate the geometry of almost-nearly cosymplectic spaces. It is well known that nearly cosymplectic manifolds are the odd-dimensional analog of Nearly K\"ahler spaces and that the trivial circle bundle over a Nearly K\"ahler space is nearly cosymplectic.
We establish a relation between almost nearly cosymplectic spaces and nearly cosymplectic spaces, namely we present a special D-homothetic deformation relating these two objects. Applying the structure theorem for nearly cosymplectic structures established by de Nicola-Dileo-Yudin in \cite{NDY}, we present a structure theorem for almost nearly cosymplectic structures (Theorem~\ref{ancstructure}) and give a formula relating the curvature and the Ricci tensors of the almost nearly cosymplectic structure to the corresponding D-homothetic nearly cosymplectic structure. In dimension five we find a close relation between almost nearly cosymplectic structures and the 5-dimensional Sasaki-Einstein spaces. Namely, using the fundamental observation connecting nearly cosymplectic 5-manifolds with SU(2) structures and Sasaki-Einstein 5-manifolds due to Cappelletti-Montano-Dileo in \cite{CD}, we show in Theorem~\ref{asasncos} that any almost nearly cosymplectic 5-manifold is D-homothetically equivalent to a 5-dimensional Sasaki-Einstein space. Moreover, we show that a Sasaki-Einstein 5-manifold generates a two-parameter family of almost nearly cosymplectic structures and therefore a two-parameter family of NGTS connections. We give an explicit formula connecting these NGTS connections with the Levi-Civita connection of the corresponding Sasaki-Einstein structure. We express the curvature and Ricci tensor of these NGTS connections in terms of the curvature of the Sasaki-Einstein metric and the corresponding Sasaki-Einstein SU(2) structure. On the other hand, by virtue of admitting real Killing spinors \cite{FK}, Sasakian-Einstein 5-manifolds admit supersymmetry and have received a lot of attention in physics from the point of view of the AdS/CFT correspondence (see e.g. \cite{GauSas,GMPW1,Sas51,BBC,BBC1,Sp}). The AdS/CFT correspondence relates string theory on the product of anti-de Sitter space with a compact Einstein space to quantum field theory on the conformal boundary. The renewed interest in these manifolds has to do with the so-called p-brane solutions in superstring theory. For example, in the case of D3-branes of string theory the relevant near-horizon geometry is that of a product of anti-de Sitter space with a Sasakian-Einstein 5-manifold. This led to the construction of a number of examples of irregular compact Sasaki-Einstein 5-manifolds \cite{GauSas,GMPW1,Sp}. In this spirit, our results may help to establish a possible relation between the NGT with skew-symmetric torsion, supersymmetric string theories and quantum field theory. \section{Preliminaries} \subsection{Einstein metricity condition (NGT)}\label{ngt} In his attempt to construct a unified field theory (Non-symmetric Gravitational Theory, briefly NGT) Einstein \cite{Ein} considered a generalized Riemannian manifold $(G=g+F)$ with nondegenerate symmetric part $g$ and skew-symmetric part $F$, and used the so-called metricity condition \eqref{metein}, which can be written as follows: $ XG(Y,Z)-G({\nabla^{ngt}}_YX,Z)-G(Y,{\nabla^{ngt}}_XZ)=0$. For the non-degenerate symmetric part $g$ we have a (1,1) tensor $A$ given by $F(X,Y)=g(AX,Y)$. The metricity condition \eqref{metein} can be written in terms of the torsion $T(X,Y)={\nabla^{ngt}}_XY-{\nabla^{ngt}}_YX-[X,Y]$ of the NGT connection ${\nabla^{ngt}}$ and the endomorphism $A$ in the form \begin{equation}\label{metein1} ({\nabla^{ngt}}_XG)(Y,Z)=-G(T(X,Y),Z) \quad \Leftrightarrow \quad ({\nabla^{ngt}}_X(g+F))(Y,Z)=-T(X,Y,Z)+T(X,Y,AZ).
{\tilde\eta}nd{equation} A general solution for the connection ${\nabla^{ngt}}$ satisfying {\tilde\eta}qref{metein} is given in terms of $g,F,T$\cite{Hlav} (see also \cite{Mof}) and in terms of $dF$ in \cite{IZl}. \subsection{Almost contact metric and almost nearly cosymplectic structures.} In the case of odd dimension, we consider an almost contact metric manifold $(M^{2n+1}, g, \phi, {\tilde\eta}ta,\xi)$, i.e., a $(2n+1)$ -dimensional Riemannian manifold equipped with a 1-form ${\tilde\eta}ta$, a (1,1)-tensor $\phi$ and { the} vector field $\xi$ dual to ${\tilde\eta}ta$ with respect to the metric $g$, ${\tilde\eta}ta(\xi)=1, {\tilde\eta}ta(X)=g(X,\xi)$ such that the following compatibility conditions are satisfied (see e.g. \cite{Blair}) \begin{equation}\label{acon} \phi^2=-id+{\tilde\eta}ta{\tilde\omega}times\xi, \quad g(\phi X,\phi Y)=g(X,Y)-{\tilde\eta}ta(X){\tilde\eta}ta(Y), \quad \phi\xi={\tilde\eta}ta\phi=0. {\tilde\eta}nd{equation} The fundamental 2-form is defined by $F(X,Y)=g(\phi X, Y)$. Such a space can be considered as a generalized Riemannian manifold with $G=g+F,\quad A=\phi$ and in this case the skew-symmetric part $F$ is degenerate $F(\xi,.)=0$ and has $rank F=2n$. \begin{dfn}\cite{IZl}\label{defanc} An almost contact metric manifold $(M^{2n+1},\phi,g,{\tilde\eta}ta,\xi)$ is said to be {\tilde\eta}mph{almost-nearly cosymplectic} if the covariant derivative of the fundamental tensor $\phi$ with respect to the Levi-Civita connection ${\nabla^{ngt}}abla$ of the Riemannian metric $g$ satisfies the following condition \begin{equation}\label{ff3f} \begin{split} g(({\nabla^{ngt}}abla_X\phi)Y,Z)={{\phi h}rak a}c13dF(X,Y,Z)+{{\phi h}rak a}c13{\tilde\eta}ta(X)d{\tilde\eta}ta(Y,\phi Z)-{{\phi h}rak a}c16{\tilde\eta}ta(Y)d{\tilde\eta}ta(Z,\phi X)-{{\phi h}rak a}c16{\tilde\eta}ta(Z)d{\tilde\eta}ta(\phi X,Y) {\tilde\eta}nd{split} {\tilde\eta}nd{equation} {\tilde\eta}nd{dfn} The following relations are also valid \cite[Section~3.5]{IZl} \begin{equation}\label{ax} \begin{split} d{\tilde\eta}ta(X,Z)=dF(X,\phi Z,\xi)=dF(\phi X,Z,\xi),\quad d{\tilde\eta}ta(X,\xi)=0,\\ d{\tilde\eta}ta(\phi X,Z)=d{\tilde\eta}ta(X,\phi Z)=dF(\phi X,\phi Z,\xi)=-d F(X,Z,\xi)\\ d F(\phi X,\phi Y,\phi Z)+dF(X,Y,\phi Z)={\tilde\eta}ta(X)d{\tilde\eta}ta(Y,Z)+{\tilde\eta}ta(Y){\tilde\eta}ta(Z,X). {\tilde\eta}nd{split} {\tilde\eta}nd{equation} and the vector field $\xi$ is a Killing vector field \begin{equation}\label{kill1} ({\nabla^{ngt}}abla_X{\tilde\eta}ta)Y=g({\nabla^{ngt}}abla_X\xi,Y)={{\phi h}rak a}c12d{\tilde\eta}ta(X,Y), {\tilde\eta}nd{equation} Almost nearly cosymplectic manifolds arise in a natural way from NGT. Namely, we have the following \begin{thrm}\cite[Theorem~3.8]{IZl}\label{acnika} Let $(M,\phi,g,F,{\tilde\eta}ta,\xi)$ be an almost contact metric manifold with a fundamental 2-form F considered as a generalized Riemannian manifold $(M,G)$ with a generalized Riemannian metric $G=g+F$. Then $(M,G)$ satisfies the Einstein metricity condition {\tilde\eta}qref{metein} with a totally skew-symmetric torsion $T$ if and only if it is almost-nearly cosymplectic, i.e. {\tilde\eta}qref{ff3f} holds. 
The skew-symmetric torsion is determined by the condition \begin{equation}\label{nk3ac} T(X,Y,Z)=-{{\phi h}rak a}c13dF(X,Y,Z) {\tilde\eta}nd{equation} The connection is { uniquely} determined by the formula \begin{equation}\label{ngtcon} g({\nabla^{ngt}}_XY,Z)=g({\nabla^{ngt}}abla_XY,Z)-{{\phi h}rak a}c16dF(X,Y,Z)+{{\phi h}rak a}c16\Big[{\tilde\eta}ta(X)d{\tilde\eta}ta(Y,Z)+{\tilde\eta}ta(Y)d{\tilde\eta}ta(X,Z) \Big]. {\tilde\eta}nd{equation} The Einstein metricity condition has the form $$({\nabla^{ngt}}_XG)(Y,Z)={{\phi h}rak a}c13\Big[dF(X,Y,Z)-dF(X,Y,\phi Z)\Big].$$The covariant derivative of $g$ and $F$ are given by \begin{equation*} \begin{split} ({\nabla^{ngt}}_Xg)(Y,Z)={{\phi h}rak a}c16\Big[{\tilde\eta}ta(Y)d{\tilde\eta}ta(Z,X)+{\tilde\eta}ta(Z)d{\tilde\eta}ta(Y,X)\Big];\\ ({\nabla^{ngt}}_XF)(Y,Z)={{\phi h}rak a}c13\Big[dF(X,Y,Z)-dF(X,Y,\phi Z)\Big]-{{\phi h}rak a}c16\Big[{\tilde\eta}ta(Y)d{\tilde\eta}ta(Z,X)+{\tilde\eta}ta(Z)d{\tilde\eta}ta(Y,X)\Big]. {\tilde\eta}nd{split} {\tilde\eta}nd{equation*} {\tilde\eta}nd{thrm} \subsection{Nearly cosymplectic manifolds} We recall here the definition of nearly cosymplectic structures together with their basic properties from \cite{CD,NDY}. An almost contact metric structure is cosymplectic if the endomorphism $\phi$ is parallel with respect to the Levi-Civita connection, ${\nabla^{ngt}}abla\phi=0$ and it is Sasakian if it is normal and contact, $[\phi,\phi]+{\tilde\eta}ta{\tilde\omega}times\xi=0, d{\tilde\eta}ta=-2F$. In terms of the Levi-Civita connection the Sasakian condition has the form $({\nabla^{ngt}}abla_X\phi)Y=g(X,Y)\xi-{\tilde\eta}ta(Y)X.$ For further details on Sasakian and cosymplectic manifolds, we {refer the reader} to \cite{Blair,BG,CDY}. An almost contact metric manifold is called nearly cosymplectic if \cite{Bl,BS} $({\nabla^{ngt}}abla_X\phi) X=0$ which is also equivalent to the condition \begin{equation}\label{ncfxx} g(({\nabla^{ngt}}abla_X\phi)Y,Z)={{\phi h}rak a}c13dF(X,Y,Z) {\tilde\eta}nd{equation} In this case the vector field $\xi$ is Killing, ${\nabla^{ngt}}abla_{\xi}\xi={\nabla^{ngt}}abla_{\xi}{\tilde\eta}ta=0$ and \begin{equation}\label{ncxixx} d{\tilde\eta}ta(\phi X,Y)=d{\tilde\eta}ta(X,\phi Y). {\tilde\eta}nd{equation} The tensor field $h$ of type (1,1) defined by \begin{equation}\label{hh} {\nabla^{ngt}}abla_X\xi=hX {\tilde\eta}nd{equation} has the following properties \begin{eqnarray}\label{ncosh} \aligned g(hX,Y)=-g(X,hY)={{\phi h}rak a}c12d{\tilde\eta}ta(X,Y); \quad Ah+hA=0, {\tilde\eta}ndaligned{\tilde\eta}nd{eqnarray} i.e. it is skew-symmetric, anticommutes with $\phi$ and satisfies $h\xi=0, {\tilde\eta}ta\circ h=0$. The following formulas also hold \cite{Endo,Endo1}: \begin{equation}\label{nc15} g(({\nabla^{ngt}}abla_X\phi)Y,hZ)={\tilde\eta}ta(Y)g(h^2X,\phi Z)-{\tilde\eta}ta(X)g(h^2Y,\phi Z). {\tilde\eta}nd{equation} \begin{equation}\label{nc16} ({\nabla^{ngt}}abla_Xh)Y=g(h^2X,Y)\xi-{\tilde\eta}ta(Y)h^2X. {\tilde\eta}nd{equation} \begin{equation}\label{nc17} tr(h^2)=constant. {\tilde\eta}nd{equation} A consequence of {\tilde\eta}qref{nc16} and {\tilde\eta}qref{nc17} is that the eigenvalues of symmetric operator $h^2$ are constants \cite{CD,NDY}. A fundamental observation that if $0$ is a simple eigenvalue of $h^2$ then the nearly cosymplectic manifold is of dimension five was made by A. de Nicola, G. Dileo and I. 
Yudin in \cite{NDY} which lead to their structure theorem: \begin{thrm} \cite[Theorem~4.5]{NDY} \label{ncstructure} Let $(M,\phi, \xi, {\tilde\eta}ta, g)$ be a nearly cosymplectic non-cosymplectic manifold of dimension $2n+1>5$. Then $M$ is locally isometric to one of the following Riemannian products: $$\mathcal R\times N^{2n}, \quad M^5\times N^{2n-4},$$ where $N^{2n}$ is a nearly K\"ahler non-K\"ahler manifold, $N^{2n-4}$ is a nearly K\"ahler manifold and $M^5$ is a nearly cosymplectic non-cosymplectic manifold of dimension five. If the manifold $M$ is complete and simply connected the above isometry is global. {\tilde\eta}nd{thrm} Note also that the nearly K\"ahler factor can be further decomposed according to \cite[Theorem~1.1 and Proposition~2.1]{N1}. \subsection{Nearly cosymplectic manifolds in dimension 5} According to Theorem~\ref{ncstructure}, the non-trivial case of nearly cosymplectic manifolds is the 5-dimensional case. In this case B. Cappelletti-Montano and G. Dileo show in \cite{CD} that any 5-dimensional nearly cosymplectic manifold carryies a Sasaki-Einstein structure and vice-versa. In order to describe the construction in \cite{CD} we recall below the notion of {an} $SU(2)$ structure developed by D. Conti and S. Salamon in \cite{CS}. An SU(2) structure in dimension 5 is an SU(2)-reduction of the bundle of linear frames and it is equivalent to the existence of three almost contact metric structures $(\phi_i, \xi, {\tilde\eta}ta, g), i=1,2,3$ related {to each other through} $\phi_i\phi_j=-\phi_j\phi_i=\phi_k$ for any even permutation \{i,j,k\} of \{1,2,3\}. In \cite{CS} Conti and Salamon proved that, in the spirit of special geometries, such a structure is equivalently determined by a quadruplet $({\tilde\eta}ta,{\tilde\omega}mega_1,{\tilde\omega}mega_2,{\tilde\omega}mega_3)$, where ${\tilde\eta}ta$ is a 1-form and ${\tilde\omega}mega_i$ are 3 {2-forms} satisfying ${\tilde\omega}mega_i\wedge{\tilde\omega}mega_j=\delta_{ij}v$ for some 4-form $v$ with $v\wedge{\tilde\eta}ta{\nabla^{ngt}}ot=0$ and $X\lrcorner{\tilde\omega}mega_i=Y\lrcorner{\tilde\omega}mega_j\hbox{\ddpp L}ongrightarrow {\tilde\omega}mega_k(X,Y){\tilde g}e 0$. The endomorphisms $\phi_i$, the Riemannian metric $g$ and the 2-forms ${\tilde\omega}mega_i$ are related {to each other through} ${\tilde\omega}mega_i(X,Y)=g(\phi X,Y)$. The class of Sasaki-Einstein structures in dimension 5 is characterized by the following differential equations \begin{equation}\label{sasEinstein} d{\tilde\eta}ta=-2{\tilde\omega}mega_3, \quad d{\tilde\omega}mega_1=3{\tilde\eta}ta\wedge{\tilde\omega}mega_2,\quad d{\tilde\omega}mega_2=-3{\tilde\eta}ta\wedge{\tilde\omega}mega_1. {\tilde\eta}nd{equation} For such a manifold the almost contact metric structure $(\phi_3, \xi, {\tilde\eta}ta, g)$ is Sasakian, with Einstein Riemannian metric $g$. A Sasaki-Einstein 5-manifold may { equivalently be} defined as a Riemannian manifold for which the cone metric is K\"ahler Ricci flat \cite{BG}. There are several generalizations of Sasaki-Einstein structures in dimension five. We only recall that the hypo structures introduced in \cite{CS} are defined by \begin{equation}\label{hypo} d{\tilde\omega}mega_3=0, \quad d({\tilde\eta}ta\wedge{\tilde\omega}mega_1)=0,\quad d({\tilde\eta}ta\wedge{\tilde\omega}mega_2)=0. {\tilde\eta}nd{equation} These structures arise naturally on hypersurfaces of a 6-manifold endowed with an integrable SU(3) structure, i.e. a hypersurface of a K\"ahler Ricci flat 6-manifold. 
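As a quick consistency check (a standard computation, using only \eqref{sasEinstein} and the algebraic relations $\omega_i\wedge\omega_j=\delta_{ij}v$ recalled above, and not specific to the present paper), note that every Sasaki-Einstein SU(2) structure is automatically hypo in the sense of \eqref{hypo}:
\begin{equation*}
\begin{split}
&d\omega_3=-\tfrac12\, d(d\eta)=0,\\
&d(\eta\wedge\omega_1)=d\eta\wedge\omega_1-\eta\wedge d\omega_1=-2\,\omega_3\wedge\omega_1-3\,\eta\wedge\eta\wedge\omega_2=0,\\
&d(\eta\wedge\omega_2)=d\eta\wedge\omega_2-\eta\wedge d\omega_2=-2\,\omega_3\wedge\omega_2+3\,\eta\wedge\eta\wedge\omega_1=0,
\end{split}
\end{equation*}
since $\omega_3\wedge\omega_1=\omega_3\wedge\omega_2=0$ and $\eta\wedge\eta=0$. The same computation, applied verbatim to the rescaled equations \eqref{sasnk} below, explains why the SU(2) structures induced by a nearly cosymplectic 5-manifold are hypo.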
Starting with a 5 dimensional nearly cosymplectic manifold B. Cappelletti-Montano and G. Dileo show in \cite[Theorem~5.1]{CD} that \begin{equation}\label{spech}h^2=-\lambda^2(I-{\tilde\eta}ta{\tilde\omega}times\xi), \lambda=const.{\nabla^{ngt}}ot=0 {\tilde\eta}nd{equation} induces an SU(2) structure $({\tilde\eta}ta,{\tilde\omega}mega_1,{\tilde\omega}mega_2,{\tilde\omega}mega_3)$ determined by \begin{equation}\label{su2} \phi_1=-{{\phi h}rak a}c1{\lambda}\phi h, \quad \phi_2=\phi, \quad \phi_3=-{{\phi h}rak a}c1{\lambda}h; \qquad {\tilde\omega}mega_i(X,Y)=g(\phi_i X,Y) {\tilde\eta}nd{equation} which satisfies the relations \begin{equation}\label{sasnk} d{\tilde\eta}ta=-2\lambda{\tilde\omega}mega_3, \quad d{\tilde\omega}mega_1=3\lambda{\tilde\eta}ta\wedge{\tilde\omega}mega_2,\quad d{\tilde\omega}mega_2=-3\lambda{\tilde\eta}ta\wedge{\tilde\omega}mega_1. {\tilde\eta}nd{equation} In particular these SU(2) structures are hypo. Consider the homothetic SU(2) structures determined by $\tilde{\tilde\eta}ta=\lambda{\tilde\eta}ta, \quad \tilde{\tilde\omega}mega_i=\lambda^2{\tilde\omega}mega_i$ It follows from {\tilde\eta}qref{sasnk} that these new SU(2) structures satisfy {\tilde\eta}qref{sasEinstein} and therefore ${\tilde\omega}_3$ is a Sasaki-Einsten structure while ${\tilde\omega}_1$ and ${\tilde\omega}_2$ are nearly cosymplectic structures \cite{CD}. This shows \begin{thrm}\cite{CD}\label{sasncos} A nearly cosymplectic non cosymplectic 5-manifold carries a Sasaki-Einstein structure and vice versa, any Sasaki-Einstein 5 manifold supports 2 nearly cosymplectic structures. In particular, a nearly cosymplectic non cosymplectic 5-manifold is Einstein with positive scalar curvature. In terms of nearly cosymplectic structures, the attached Sasaki-Einstein structure is given by $$ \tilde\phi=-{{\phi h}rak a}c1{\lambda}h,\quad \tilde\xi={{\phi h}rak a}c1{\lambda}\xi,\quad \tilde{\tilde\eta}ta=\lambda{\tilde\eta}ta,\quad \tilde g=\lambda^2g,\quad Scal=\lambda^2 \tilde Scal=20\lambda^2 $$ while $(\phi,\tilde{\tilde\eta}ta,\tilde g)$ and $(-{{\phi h}rak a}c1{\lambda}\phi h,\tilde{\tilde\eta}ta,\tilde g)$ are nearly cosymplectic structures. {\tilde\eta}nd{thrm} \section{Almost nearly cosymplectic manifolds} Let $(M,\phi, \xi, {\tilde\eta}ta, g)$ be an almost nearly cosymplectic manifold of dimension $2n+1$. For the (1,1) tensor $h$ defined by {\tilde\eta}qref{hh} we have \begin{lemma} On an almost nearly cosymplectic manifolds the following relations { are} valid: \begin{eqnarray}\label{ancosh} \aligned &g(hX,Y)=-g(X,hY)={{\phi h}rak a}c12d{\tilde\eta}ta(X,Y);\\ &h\xi=0,\quad \phi h+h\phi=0;\quad ({\nabla^{ngt}}abla_X\phi)\xi=-\phi hX. {\tilde\eta}ndaligned{\tilde\eta}nd{eqnarray} {\tilde\eta}nd{lemma} \begin{proof} The Killing condition {\tilde\eta}qref{kill1} together with {\tilde\eta}qref{hh} yield $g({\nabla^{ngt}}abla_X\xi,Y)=g(hX,Y)={{\phi h}rak a}c12d{\tilde\eta}ta(X,Y)=-g(hY,X)$ which proves the first equality. The second and the third are consequences of the first one and the first two lines in {\tilde\eta}qref{ax}. Indeed, for the third one we have $$g(h\phi X,Y)={{\phi h}rak a}c12d{\tilde\eta}ta(\phi X,Y)={{\phi h}rak a}c12d{\tilde\eta}ta(X,\phi Y)=g(hX,\phi Y)=-g(\phi hX,Y)$$ The last equation follows directly from {\tilde\eta}qref{ff3f}. 
{\tilde\eta}nd{proof} \subsection{D-homothetic transformations} We recall \cite{Tano, Tano1} that the almost contact metric structure $(\bar\phi,\bar\xi,\bar{\tilde\eta}ta,\bar g)$ defined by \begin{equation}\label{dhom} \bar{\tilde\eta}ta=a{\tilde\eta}ta, \quad \bar\xi={{\phi h}rak a}c1a\xi, \quad \bar\phi=\phi, \quad \bar g=ag+(a^2-a){\tilde\eta}ta{\tilde\omega}times{\tilde\eta}ta, \quad a>0 \quad constant {\tilde\eta}nd{equation} is called D-homothetic to $(\phi,\xi,{\tilde\eta}ta,g)$. We have $\bar F=aF$. Our main result follows \begin{thrm}\label{d-hom} Any almost nearly cosymplectic structure is D-homothetic to a nearly cosymplectic structure and vice versa. The corresponding (1,1) tensors $\bar h$ and $h$ coincide. {\tilde\eta}nd{thrm} \begin{proof} Let $\{X_1,\dots,X_{2n},X_{2n+1}=\xi\}$ be an orthonormal basis. The Kozsul formula and {\tilde\eta}qref{dhom} give \begin{multline}\label{conn} 2\bar g(\bar{\nabla^{ngt}}abla_{X_i}X_j,X_k)=X_i(\bar g(X_j,X_k)+X_j(\bar g(X_i,X_k)-X_k(\bar g(X_i,X_j)\\+\bar g([X_i,X_j],X_k)+\bar g([X_k,X_i],X_j)-\bar g([X_j,X_k],X_i) \\=2a g({\nabla^{ngt}}abla_{X_i}X_j,X_k) +(a^2-a)\Big[X_i[{\tilde\eta}ta(X_j){\tilde\eta}ta(X_k)]+X_j[{\tilde\eta}ta(X_k){\tilde\eta}ta(X_i)]-X_k[{\tilde\eta}ta(X_i){\tilde\eta}ta(X_j)]\Big]\\ +(a^2-a)\Big[{\tilde\eta}ta([X_i,X_j]){\tilde\eta}ta(X_k)]+{\tilde\eta}ta([X_k,X_i]){\tilde\eta}ta(X_i)]-{\tilde\eta}ta([X_j,X_k]){\tilde\eta}ta(X_i)\Big]\\ =2a g({\nabla^{ngt}}abla_{X_i}X_j,X_k) +(a^2-a)\Big[({\nabla^{ngt}}abla_{X_i}{\tilde\eta}ta)X_j+({\nabla^{ngt}}abla_{X_j}{\tilde\eta}ta)X_i+2{\tilde\eta}ta({\nabla^{ngt}}abla_{X_i}X_j){\tilde\eta}ta(X_k)\Big]\\ +(a^2-a)\Big[[{\nabla^{ngt}}abla_{X_i}{\tilde\eta}ta)X_k-({\nabla^{ngt}}abla_{X_k}{\tilde\eta}ta)X_i]{\tilde\eta}ta(X_j)+[{\nabla^{ngt}}abla_{X_j}{\tilde\eta}ta)X_k-({\nabla^{ngt}}abla_{X_k}{\tilde\eta}ta)X_j]{\tilde\eta}ta(X_i)\Big]\\ =2a g({\nabla^{ngt}}abla_{X_i}X_j,X_k) +(a^2-a)\Big[({\nabla^{ngt}}abla_{X_i}{\tilde\eta}ta)X_j+({\nabla^{ngt}}abla_{X_j}{\tilde\eta}ta)X_i+2{\tilde\eta}ta({\nabla^{ngt}}abla_{X_i}X_j){\tilde\eta}ta(X_k)\Big]\\ +(a^2-a)\Big[d{\tilde\eta}ta(X_i,X_k){\tilde\eta}ta(X_j)+d{\tilde\eta}ta(X_j,X_k){\tilde\eta}ta(X_i)\Big] {\tilde\eta}nd{multline} The Killing condition applied to {\tilde\eta}qref{conn} yields \begin{multline}\label{conn1} \bar g(\bar{\nabla^{ngt}}abla_{X_i}X_j,X_k)=a g({\nabla^{ngt}}abla_{X_i}X_j,X_k)\\ +{{\phi h}rak a}c{a^2-a}2\Big[d{\tilde\eta}ta(X_i,X_k){\tilde\eta}ta(X_j)+d{\tilde\eta}ta(X_j,X_k){\tilde\eta}ta(X_i)+2{\tilde\eta}ta({\nabla^{ngt}}abla_{X_i}X_j){\tilde\eta}ta(X_k)\Big] {\tilde\eta}nd{multline} Using the Killing condition, we evaluate the last term in {\tilde\eta}qref{conn1} as follows, \begin{equation}\label{term} 2{\tilde\eta}ta({\nabla^{ngt}}abla_{X_i}X_j)=2g({\nabla^{ngt}}abla_{X_i}X_j,\xi)=-2g({\nabla^{ngt}}abla_{X_i}\xi,X_j)=-d{\tilde\eta}ta(X_i,X_j). 
{\tilde\eta}nd{equation} For a D-homothetic transformation with Killing vector field $\xi$ we obtain substituting {\tilde\eta}qref{term} into {\tilde\eta}qref{conn1} that \begin{multline}\label{conKil} \bar g(\bar{\nabla^{ngt}}abla_{X_i}X_j,X_k)=ag({\nabla^{ngt}}abla_{X_i}X_j,X_k)\\+{{\phi h}rak a}c{a^2-a}2\Big[d{\tilde\eta}ta(X_i,X_k){\tilde\eta}ta(X_j)+d{\tilde\eta}ta(X_j,X_k){\tilde\eta}ta(X_i)-d{\tilde\eta}ta(X_i,X_j){\tilde\eta}ta(X_k)\Big]{\tilde\eta}nd{multline} We have, applying {\tilde\eta}qref{conKil}, that the following tensor equality holds for all vector fields $X,Y,Z$ \begin{multline}\label{barF} \bar g((\bar{\nabla^{ngt}}abla_X\phi)Y,Z)=\bar g(\bar{\nabla^{ngt}}abla_X\phi Y,Z)+\bar g(\bar{\nabla^{ngt}}abla_XY,\phi Z)\\=ag(({\nabla^{ngt}}abla_X\phi)Y,Z)+{{\phi h}rak a}c{a^2-a}2\Big[d{\tilde\eta}ta(\phi Y,Z){\tilde\eta}ta(X)-d{\tilde\eta}ta(X,\phi Y){\tilde\eta}ta(Z)+d{\tilde\eta}ta(X,\phi Z){\tilde\eta}ta(Y)+d{\tilde\eta}ta(Y,\phi Z){\tilde\eta}ta(X)\Big] \\=ag(({\nabla^{ngt}}abla_X\phi)Y,Z)+{{\phi h}rak a}c{a^2-a}2\Big[2d{\tilde\eta}ta(\phi Y,Z){\tilde\eta}ta(X)+d{\tilde\eta}ta(X,\phi Z){\tilde\eta}ta(Y)-d{\tilde\eta}ta(X,\phi Y){\tilde\eta}ta(Z)\Big], {\tilde\eta}nd{multline} where the last equality follows from {\tilde\eta}qref{ncxixx}. Suppose $(\phi,\xi,{\tilde\eta}ta,g)$ is a nearly cocymplectic structure, i.e. {\tilde\eta}qref{ncfxx} holds. Then, taking $a={{\phi h}rak a}c32$, the { equalities} {\tilde\eta}qref{barF}, {\tilde\eta}qref{ncfxx} and {\tilde\eta}qref{ncxixx} { imply} \begin{multline}\label{anc-nc} \bar g((\bar{\nabla^{ngt}}abla_X\phi)Y,Z)={{\phi h}rak a}c32g(({\nabla^{ngt}}abla_X\phi)Y,Z)+{{\phi h}rak a}c38\Big[2d{\tilde\eta}ta(\phi Y,Z){\tilde\eta}ta(X)+d{\tilde\eta}ta(X,\phi Z){\tilde\eta}ta(Y)-d{\tilde\eta}ta(X,\phi Y){\tilde\eta}ta(Z)\Big]\\ ={{\phi h}rak a}c13d\bar F(X,Y,Z)+{{\phi h}rak a}c16\Big[2d\bar{\tilde\eta}ta(Y,\phi Z)\bar{\tilde\eta}ta(X)-d\bar{\tilde\eta}ta(Z,\phi X)\bar{\tilde\eta}ta(Y)-d\bar{\tilde\eta}ta(\phi X,Y)\bar{\tilde\eta}ta(Z)\Big] {\tilde\eta}nd{multline} i.e. the structure $(\bar\phi,\bar\xi,\bar{\tilde\eta}ta,\bar g)$ is an almost nearly cosymplectic. Vice versa, starting from an almost nearly cosymplectic structure and making a D-homothetic deformation with constant $a={{\phi h}rak a}c23$ we get a nearly cosymplectic one which proves the first claim. To show that $\bar h=h$, we put $X_j=\bar \xi={{\phi h}rak a}c23\xi$ in {\tilde\eta}qref{conKil} taken with $a={{\phi h}rak a}c32$ and use {\tilde\eta}qref{ancosh} to get \begin{equation*} \bar g(\bar{\nabla^{ngt}}abla_{X_i}\bar \xi,X_k)=g({\nabla^{ngt}}abla_{X_i}\xi,X_k)+{{\phi h}rak a}c{1}4d{\tilde\eta}ta(X_i,X_k)={{\phi h}rak a}c{3}2g({\nabla^{ngt}}abla_{X_i}\xi,X_k).{\tilde\eta}nd{equation*} On the other { hand}, from (\ref{dhom}), we have \begin{equation*} \bar g({\nabla^{ngt}}abla_{X_i} \xi,X_k)={{\phi h}rak a}c 3 2g( {\nabla^{ngt}}abla_{X_i} \xi,X_k)+{{\phi h}rak a}c{3}4{\tilde\eta}ta( {\nabla^{ngt}}abla_{X_i} \xi){\tilde\eta}ta(X_k)={{\phi h}rak a}c 3 2g({\nabla^{ngt}}abla_{X_i}\xi,X_k). {\tilde\eta}nd{equation*} The last two equalities imply $ \bar hX_i=hX_i$ which completes the proof . {\tilde\eta}nd{proof} \begin{cor}\label{nabla} Let $(M,\phi, \xi, {\tilde\eta}ta, g)$ be an almost contact metric manifold with Killing vector field $\xi$. 
The D-homothetic almost contact metric manifold $(M,\bar\phi, \bar\xi, \bar{\tilde\eta}ta, \bar g)$ has a Killing vector field $\bar\xi$ and { the two corresponding Levi-Civita connections ${\bar\nabla}$ and ${\nabla^{ngt}}abla$ of these manifolds} are related by \begin{equation}\label{d-nabla} g(\bar{\nabla^{ngt}}abla_XY,Z)=g({\nabla^{ngt}}abla_XY,Z)+{{\phi h}rak a}c{a^2-a}{2a}\Big[d{\tilde\eta}ta(X,Z){\tilde\eta}ta(Y)+d{\tilde\eta}ta(Y,Z){\tilde\eta}ta(X)\Big]. {\tilde\eta}nd{equation} {\tilde\eta}nd{cor} \begin{proof} Observe that any orthonormal basis for $g$ is an orthogonal basis for $\bar g$. We obtain from {\tilde\eta}qref{conKil} and {\tilde\eta}qref{dhom} $$ \bar g(\bar{\nabla^{ngt}}abla_{X_i}\bar\xi,X_k)=g({\nabla^{ngt}}abla_{X_i}\xi,X_k)+{{\phi h}rak a}c{a^2-a}{2a}d{\tilde\eta}ta(X_i,X_k) $$ which shows that $\bar\xi$ is a Killing vector field for $\bar g$. Using {\tilde\eta}qref{dhom} together with the Killing condition, we have \begin{multline}\label{barnabla} \bar g(\bar{\nabla^{ngt}}abla_{X_i}X_j,X_k)=ag(\bar{\nabla^{ngt}}abla_{X_i}X_j,X_k)+(a^2-a){\tilde\eta}ta((\bar{\nabla^{ngt}}abla_{X_i}X_j){\tilde\eta}ta(X_k)\\=ag(\bar{\nabla^{ngt}}abla_{X_i}X_j,X_k)+{{\phi h}rak a}c{a^2-a}a\bar{\tilde\eta}ta((\bar{\nabla^{ngt}}abla_{X_i}X_j){\tilde\eta}ta(X_k)=ag(\bar{\nabla^{ngt}}abla_{X_i}X_j,X_k)-{{\phi h}rak a}c{a^2-a}{2a}d\bar{\tilde\eta}ta({X_i}X_j){\tilde\eta}ta(X_k)\\ =ag(\bar{\nabla^{ngt}}abla_{X_i}X_j,X_k)-{{\phi h}rak a}c{a^2-a}{2}d{\tilde\eta}ta({X_i}X_j){\tilde\eta}ta(X_k) {\tilde\eta}nd{multline} Compare {\tilde\eta}qref{barnabla} with {\tilde\eta}qref{conKil} to get \begin{equation*}\label{nnabla} ag(\bar{\nabla^{ngt}}abla_{X_i}X_j-{\nabla^{ngt}}abla_{X_i}X_j,X_k)={{\phi h}rak a}c{a^2-a}2\Big[d{\tilde\eta}ta(X_i,X_k){\tilde\eta}ta(X_j)+d{\tilde\eta}ta(X_j,X_k){\tilde\eta}ta(X_i)\Big] {\tilde\eta}nd{equation*} which implies {\tilde\eta}qref{nabla} since the difference between { these two connections} is a tensor field. {\tilde\eta}nd{proof} Combine Theorem~\ref{d-hom} with Theorem~\ref{ncstructure} to get the structure theorem for almost nearly cosymplectic manifold. \begin{thrm}\label{ancstructure} Let $(M,\phi, \xi, {\tilde\eta}ta, g)$ be an almost nearly cosymplectic non-cosymplectic manifold of dimension $2n+1>5$. Then $M$ is locally D-homothetic with { the} constant $a={{\phi h}rak a}c23$ to one of the following Riemannian products: $$\mathcal R\times N^{2n}, \quad M^5\times N^{2n-4},$$ where $N^{2n}$ is a nearly K\"ahler non-K\"ahler manifold, $N^{2n-4}$ is a nearly K\"ahler manifold and $M^5$ is a nearly cosymplectic non-cosymplectic manifold of dimension five. If the manifold $M$ is complete and simply connected the above D-homothety is global. {\tilde\eta}nd{thrm} We also have \begin{prop} On an almost nearly cosymplectic manifolds $(M,\phi,\bar{\tilde\eta}ta,\bar\xi,\bar g)$ the { following} relations hold. \begin{equation}\label{acn15} \bar g((\bar{\nabla^{ngt}}abla_X\phi)Y,\bar hZ)=\bar{\tilde\eta}ta(Y)\bar g(\bar h^2X,\phi Z), {\tilde\eta}nd{equation} \begin{equation}\label{acn16}-\bar R(\bar\xi, X,Y,Z)= \bar g((\bar{\nabla^{ngt}}abla_X\bar h)Y,Z)=-\bar{\tilde\eta}ta(Y)\bar g(\bar h^2X,Z)+\bar{\tilde\eta}ta(Z)\bar g(\bar h^2X,Y). {\tilde\eta}nd{equation} {\tilde\eta}nd{prop} \begin{proof} We apply Theorem~\ref{d-hom}. 
Using \eqref{anc-nc} and $\bar h=h$, we have
\begin{equation*}
\bar g((\bar\nabla_X\phi)Y,\bar hZ)=\frac{3}{2}g((\nabla_X\phi)Y,hZ)+\frac{3}8\Big[2d\eta(\phi Y,hZ)\eta(X)+d\eta(X,\phi hZ)\eta(Y)\Big].
\end{equation*}
For this D-homothetic transformation, using formula \eqref{nc15}, we obtain
\begin{multline*}
\bar g((\bar\nabla_X\phi)Y,\bar hZ)=\frac{3}{2}\Big[\eta(Y)g(h^2X,\phi Z)-\eta(X)g(h^2Y,\phi Z)\Big]+\frac{3}8\Big[4g(h^2Y,\phi Z)\eta(X)+2g(h^2X,\phi Z)\eta(Y)\Big]\\
=\frac{9}{4}\eta(Y)g(h^2X,\phi Z)=\bar\eta(Y)\bar g(\bar h^2X,\phi Z),
\end{multline*}
which proves \eqref{acn15}. To prove the second line we first note that, since $\bar\xi$ is a Killing vector field, we have the well known relation $g((\bar\nabla_X\bar h)Y,Z)=-\bar R(\bar\xi,X,Y,Z)$ (see e.g. \cite{YanoBochner}). Further, starting from \eqref{d-nabla} taken for $a=\frac32$, we obtain
\begin{equation*}
\bar g((\bar\nabla_X\bar h)Y,Z)=\frac{3}{2}g((\nabla_Xh)Y,Z)+\frac{3}8\Big[d\eta(X,hZ)\eta(Y)-d\eta(X,hY)\eta(Z)\Big],
\end{equation*}
which, in view of \eqref{nc16} and \eqref{ncosh}, takes the form
\begin{multline*}
\bar g((\bar\nabla_X\bar h)Y,Z)=\frac{3}{2}\Big[-\eta(Y)g(h^2X,Z)+\eta(Z)g(h^2X,Y)\Big]+\frac{3}8\Big[-d\eta(hX,Z)\eta(Y)+d\eta(hX,Y)\eta(Z)\Big]\\
=-\frac{9}{4}\eta(Y)g(h^2X,Z)+\frac{9}{4}\eta(Z)g(h^2X,Y)=-\bar\eta(Y)\bar g(h^2X,Z)+\bar\eta(Z)\bar g(h^2X,Y).
\end{multline*}
This completes the proof.
\end{proof}

\subsection{The curvature of an almost nearly cosymplectic manifold}
Let $(M,\bar\phi,\bar\eta,\bar\xi,\bar g)$ be an almost nearly cosymplectic manifold D-homothetically related with $a=\frac32$ to the nearly cosymplectic manifold $(M,\phi,\eta,\xi,g)$ in the sense of Theorem~\ref{d-hom}. Put $a=\frac32$ into \eqref{d-nabla} and use \eqref{ncosh} to get the following relation between the Levi-Civita connections:
\begin{equation}\label{con12}
\bar\nabla_XY=\nabla_XY+\frac12\eta(Y)hX+\frac12\eta(X)hY.
\end{equation}
For the curvatures, we calculate using \eqref{con12}, \eqref{ancosh}, \eqref{ncosh} and $\bar h=h$ that
\begin{multline}\label{curv}
\bar R(Z,X)Y=\bar\nabla_Z\bar\nabla_XY-\bar\nabla_X\bar\nabla_ZY-\bar\nabla_{[Z,X]}Y\\
=R(Z,X)Y+\frac12(\nabla_Z\eta)Y\cdot hX-\frac12(\nabla_X\eta)Y\cdot hZ+\frac12d\eta(Z,X)hY-\frac12\eta(Z)(\nabla_Xh)Y\\
+\frac12\eta(X)(\nabla_Zh)Y+\frac12\eta(Y)\Big[(\nabla_Zh)X-(\nabla_Xh)Z\Big]+\frac14\eta(Y)\eta(Z)h^2X-\frac14\eta(Y)\eta(X)h^2Z.
\end{multline}
Applying \eqref{nc16} to \eqref{curv}, we get
\begin{multline}\label{curv1}
\bar R(Z,X)Y=R(Z,X)Y+\frac12g(hZ,Y)hX-\frac12g(hX,Y)hZ+g(hZ,X)hY\\
+\frac54\eta(Y)\eta(Z)h^2X-\frac54\eta(Y)\eta(X)h^2Z+\frac12\Big[\eta(X)g(h^2Z,Y)-\eta(Z)g(h^2X,Y)\Big]\xi.
\end{multline}
For the Ricci tensors $\overline{Ric}$ and $Ric$, we get
\begin{equation}\label{ric}
\overline{Ric}(X,Y)=Ric(X,Y)+g(h^2X,Y)-\frac54tr(h^2)\eta(X)\eta(Y).
\end{equation}

\section{Almost nearly cosymplectic manifolds in dimension 5}
In view of Theorem~\ref{ancstructure} we restrict our attention to dimension five. Let $(M,\bar\phi,\bar\xi,\bar\eta,\bar g)$ be a 5-dimensional almost nearly cosymplectic manifold, $(M,\phi,\xi,\eta,g)$ be the D-homothetically corresponding nearly cosymplectic manifold according to Theorem~\ref{d-hom} and $(M,\tilde\phi,\tilde\xi,\tilde\eta,\tilde g)$ be the Sasaki-Einstein manifold homothetically attached to the nearly cosymplectic structure $(M,\phi,\xi,\eta,g)$, i.e. we have
\begin{equation}\label{ndhom}
\begin{split}
\bar\eta=\frac32\eta, \quad \bar\xi=\frac23\xi, \quad \bar\phi=\phi, \quad \bar g=\frac32g+\frac34\eta\otimes\eta, \quad g=\frac23\bar g-\frac29\bar\eta\otimes\bar\eta,\\
\tilde\phi=-\frac1{\lambda}h, \quad \tilde\eta=\lambda\eta=\frac{2\lambda}3\bar\eta, \quad \tilde\xi=\frac1{\lambda}\xi=\frac3{2\lambda}\bar\xi, \quad \tilde g=\lambda^2g=\frac{2\lambda^2}3\bar g-\frac{2\lambda^2}9\bar\eta\otimes\bar\eta.
\end{split}
\end{equation}
We recall that an almost contact metric manifold $(M,\phi,\xi,\eta,g)$ is called $\eta$-Einstein if its Ricci tensor satisfies
$$Ric(X,Y)=ag(X,Y)+b\eta(X)\eta(Y),$$
where $a$ and $b$ are smooth functions on $M$. $\eta$-Einstein metrics were introduced and studied by Okumura \cite{Ok}. In particular, he studied the relation between the existence of $\eta$-Einstein metrics and certain harmonic forms, and showed that on a K-contact $\eta$-Einstein manifold of dimension $2n+1$ bigger than 3 the functions $a$ and $b$ are constants satisfying $a+b=2n$. Sasakian $\eta$-Einstein spaces are studied in detail in \cite{BGM}.
\begin{prop}\label{cur}
Let $(M,\bar\phi,\bar\xi,\bar\eta,\bar g)$ be a 5-dimensional almost nearly cosymplectic manifold. Then it is $\eta$-Einstein and the Ricci tensor is given by
\begin{equation}
\overline{Ric}(X,Y)=2\lambda^2\bar g(X,Y)+2\lambda^2\bar\eta(X)\bar\eta(Y).
\end{equation}
\end{prop}
\begin{proof}
We get from \eqref{curv1} and \eqref{spech}
\begin{multline}\label{curv5}
\bar R(Z,X)Y=R(Z,X)Y+\frac12g(hZ,Y)hX-\frac12g(hX,Y)hZ+g(hZ,X)hY\\
-\frac{5\lambda^2}4\eta(Y)\eta(Z)X+\frac{5\lambda^2}4\eta(Y)\eta(X)Z-\frac{\lambda^2}2\Big[\eta(X)g(Z,Y)-\eta(Z)g(X,Y)\Big]\xi.
\end{multline}
For the Ricci tensors $\overline{Ric}$ and $Ric$, we obtain from \eqref{ric}
\begin{multline}\label{ric1}
\overline{Ric}(X,Y)=Ric(X,Y)+g(h^2X,Y)-\frac54tr(h^2)\eta(X)\eta(Y)\\
=Ric(X,Y)-\lambda^2g(X,Y)+6\lambda^2\eta(X)\eta(Y)=3\lambda^2g(X,Y)+6\lambda^2\eta(X)\eta(Y)\\
=3\lambda^2\Big(\frac23\bar g(X,Y)-\frac29\bar\eta(X)\bar\eta(Y)\Big)+6\lambda^2\frac49\bar\eta(X)\bar\eta(Y)=2\lambda^2\bar g(X,Y)+2\lambda^2\bar\eta(X)\bar\eta(Y),
\end{multline}
since, according to Theorem~\ref{sasncos}, $(M,g)$ is an Einstein space with $Ric=\frac{Scal}5g=4\lambda^2g$. This completes the proof.
\end{proof}
Let $(\eta,\omega_1,\omega_2,\omega_3)$ be the SU(2) structure induced by the nearly cosymplectic structure, determined by \eqref{su2} and satisfying \eqref{sasnk} \cite{CD}. Since $\bar\phi=\phi$ and $\bar h=h$, we get an SU(2) structure $(\bar\eta,\bar\omega_1,\bar\omega_2,\bar\omega_3)$, $\bar\omega_i=\bar g(\phi_i\cdot,\cdot)$, induced by the almost nearly cosymplectic structure. These two SU(2) structures are related by
\begin{equation}\label{su22}
\bar\eta=\frac32\eta,\quad \bar\omega_i=\frac32\omega_i.
\end{equation}
We obtain from \eqref{sasnk}, Theorem~\ref{sasncos}, \eqref{su22} and Proposition~\ref{cur} the following
\begin{prop}\label{ancdf}
An almost nearly cosymplectic structure on a 5-dimensional manifold is equivalent to an $SU(2)$-structure $(\bar\eta,\bar\omega_1,\bar\omega_2,\bar\omega_3)$ satisfying
\begin{equation}\label{achom}
d\bar\eta=-2\lambda\bar\omega_3,\quad d\bar\omega_1=2\lambda\bar\eta\wedge\bar\omega_2,\quad d\bar\omega_2=-2\lambda\bar\eta\wedge\bar\omega_1
\end{equation}
for some real number $\lambda\neq0$. These structures are hypo, and the structures $(\bar\eta,\bar\omega_1)$ and $(\bar\eta,\bar\omega_2)$ are almost nearly cosymplectic $\bar\eta$-Einstein structures.
\end{prop}
The homothetic $SU(2)$ structure $\eta^*=\lambda\bar\eta$, $\omega_i^*=\lambda^2\bar\omega_i$, $i=1,2,3$, satisfies \eqref{achom} with $\lambda=1$. The structure $(\eta^*,\omega^*_3)$ is a Sasaki $\eta^*$-Einstein structure, while the structures $(\eta^*,\omega^*_1)$ and $(\eta^*,\omega^*_2)$ are almost nearly cosymplectic $\eta^*$-Einstein structures.
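As a quick check (assuming a $\bar g$-orthonormal frame $\{E_1,\dots,E_5\}$ with $E_5=\bar\xi$, so that $\bar\eta(E_a)=\delta_{a5}$), tracing the $\eta$-Einstein Ricci tensor of Proposition~\ref{cur} recovers the scalar curvature quoted in Theorem~\ref{asasncos} below:
\begin{equation*}
\overline{Scal}=\sum_{a=1}^{5}\overline{Ric}(E_a,E_a)=2\lambda^2\sum_{a=1}^{5}\Big(\bar g(E_a,E_a)+\bar\eta(E_a)^2\Big)=2\lambda^2(5+1)=12\lambda^2.
\end{equation*}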
We have
\begin{cor}
An SU(2) structure $(\eta,\omega_1,\omega_2,\omega_3)$ on a 5-manifold is Sasaki $\eta$-Einstein if and only if it satisfies the relations
\begin{equation}\label{achoms}
d\eta=-2\omega_3,\quad d\omega_1=2\eta\wedge\omega_2,\quad d\omega_2=-2\eta\wedge\omega_1.
\end{equation}
\end{cor}
Consider the structures $(\bar\eta,\bar\Omega^t_1,\bar\Omega^t_2,\bar\omega_3)$, where
\begin{equation}\label{circ}
\begin{split}
\bar\Omega^t_1=\cos t\,\bar\omega_1+\sin t\,\bar\omega_2, \quad \phi_1^t=-\frac1{\lambda}\phi h\cos t+\phi\sin t,\\
\bar\Omega^t_2=-\sin t\,\bar\omega_1+\cos t\,\bar\omega_2, \quad \phi_2^t=\frac1{\lambda}\phi h\sin t+\phi\cos t.
\end{split}
\end{equation}
It is easy to check that these are SU(2) structures satisfying \eqref{achom}, and that $(\bar\eta,\bar\Omega^t_1)$ and $(\bar\eta,\bar\Omega^t_2)$ are almost nearly cosymplectic $\bar\eta$-Einstein structures.
\begin{rmrk}\label{rmnk}
Starting with an SU(2) structure $(\eta,\omega_1,\omega_2,\omega_3)$ on a 5-manifold which is Sasaki-Einstein, one obtains an $(\mathbb R^2-\{0\})$-family of nearly cosymplectic structures $(\eta,\Omega^t_1,\Omega^t_2,\omega_3)$ given by
\begin{equation}\label{circnc}
\begin{split}
\Omega^t_1=\cos t\,\omega_1+\sin t\,\omega_2, \quad \phi_1^t=-\frac1{\lambda}\phi h\cos t+\phi\sin t,\\
\Omega^t_2=-\sin t\,\omega_1+\cos t\,\omega_2, \quad \phi_2^t=\frac1{\lambda}\phi h\sin t+\phi\cos t,
\end{split}
\end{equation}
which satisfy \eqref{sasnk}.
\end{rmrk}
Applying Theorem~\ref{sasncos}, Theorem~\ref{d-hom} and Proposition~\ref{cur}, we obtain
\begin{thrm}\label{asasncos}
An almost nearly cosymplectic non-cosymplectic 5-manifold $(M,\bar\phi,\bar\xi,\bar\eta,\bar g)$ carries a D-homothetic Sasaki-Einstein structure $(M,\tilde\phi,\tilde\xi,\tilde\eta,\tilde g)$ and, vice versa, any Sasaki-Einstein 5-manifold supports an $(\mathbb R^2-\{0\})$-family of almost nearly cosymplectic structures. In particular, an almost nearly cosymplectic non-cosymplectic 5-manifold is $\eta$-Einstein, $\overline{Ric}=2\lambda^2(\bar g+\bar\eta\otimes\bar\eta)$, with positive scalar curvature $\overline{Scal}=12\lambda^2$.
In terms of the almost nearly cosymplectic structure, the attached Sasaki-Einstein structure is given by
$$
\tilde\phi=-\frac1{\lambda}h, \quad \tilde\eta=\frac{2\lambda}3\bar\eta, \quad \tilde\xi=\frac3{2\lambda}\bar\xi, \quad \tilde g=\frac{2\lambda^2}3\bar g-\frac{2\lambda^2}9\bar\eta\otimes\bar\eta.
$$
The structures $(-\frac1{\lambda}\phi h,\bar\eta,\bar g)$ and $(\phi,\bar\eta,\bar g)$ are almost nearly cosymplectic, generating the circle family $(\phi_1^t,\bar\eta,\bar g)$, $(\phi_2^t,\bar\eta,\bar g)$ of almost nearly cosymplectic structures defined by \eqref{circ}.
\end{thrm}
Since the nearly cosymplectic structure is homothetic to the Sasaki-Einstein structure, the corresponding Levi-Civita connections coincide, $\nabla=\tilde\nabla$, and \eqref{con12} takes the form
\begin{equation}\label{aconsas}
\bar\nabla_XY=\tilde\nabla_XY+\frac12\eta(Y)hX+\frac12\eta(X)hY,\quad R(X,Y)Z=\tilde R(X,Y)Z.
\end{equation}
We have
\begin{prop}
The curvature of an almost nearly cosymplectic manifold is connected with the curvature of the associated Sasaki-Einstein manifold by
\begin{multline}\label{curv5s}
\bar R(Z,X)Y=\tilde R(Z,X)Y+\frac12\tilde g(\tilde\phi Z,Y)\tilde\phi X-\frac12\tilde g(\tilde\phi X,Y)\tilde\phi Z+\tilde g(\tilde\phi Z,X)\tilde\phi Y\\
-\frac54\tilde\eta(Y)\tilde\eta(Z)X+\frac54\tilde\eta(Y)\tilde\eta(X)Z-\frac12\Big[\tilde\eta(X)\tilde g(Z,Y)-\tilde\eta(Z)\tilde g(X,Y)\Big]\tilde\xi.
\end{multline}
In particular, we have
\begin{equation}\label{rxi}
\bar R(Z,X)\bar\xi=\lambda^2\Big[\bar\eta(X)Z-\bar\eta(Z)X\Big].
\end{equation}
\end{prop}
\begin{proof}
Applying \eqref{ndhom} to \eqref{curv5} we obtain \eqref{curv5s}. Using the well known identity
\begin{equation}\label{sascurvxi}
R(X,Y)\xi=\eta(Y)X-\eta(X)Y,
\end{equation}
valid for any Sasakian manifold (see e.g. \cite{Blair}), we obtain \eqref{rxi} from \eqref{curv5s} and \eqref{ndhom}.
\end{proof}

\section{The NGT-connections in dimension 5}
Due to Theorem~\ref{asasncos}, any Sasaki-Einstein 5-manifold generates an $(\mathbb R^2-\{0\})$-family of almost nearly cosymplectic structures which are D-homothetic to the Sasaki-Einstein structure. These almost nearly cosymplectic structures have the same Levi-Civita connection. According to Theorem~\ref{acnika}, each almost nearly cosymplectic structure generates a unique NGT-connection with skew-symmetric torsion. The NGTS connections may be different for different almost nearly cosymplectic structures from the family, because the skew-symmetric torsions may be different. We describe these NGTS connections explicitly and express their curvature and Ricci tensors in terms of the Sasaki-Einstein curvature and the generated SU(2)-structures.

According to Theorem~\ref{acnika}, and taking into account Proposition~\ref{ancdf} and \eqref{circ}, the torsions $T_2$, resp.\ $T_1$, of the NGTS connections corresponding to the almost nearly cosymplectic structures
$(\bar\eta,\bar\Omega^t_2)$, resp.\ $(\bar\eta,\bar\Omega^t_1)$, are given respectively by
\begin{eqnarray}\label{ngtor2}
T_2(X,Y,Z)&=&-\frac13d\bar\Omega^t_2(X,Y,Z)=\frac23\lambda(\bar\eta\wedge\bar\Omega^t_1)(X,Y,Z)\\
\nonumber&=&-\frac23\cos t\Big(\bar\eta(X)\bar g(\phi hY,Z)+\bar\eta(Y)\bar g(\phi hZ,X)+\bar\eta(Z)\bar g(\phi hX,Y)\Big)\\
\nonumber &+&\frac23\lambda\sin t\Big(\bar\eta(X)\bar g(\phi Y,Z)+\bar\eta(Y)\bar g(\phi Z,X)+\bar\eta(Z)\bar g(\phi X,Y)\Big);\\
T_1(X,Y,Z)&=&-\frac13d\bar\Omega^t_1(X,Y,Z)=-\frac23\lambda(\bar\eta\wedge\bar\Omega^t_2)(X,Y,Z)\label{ngtor1}\\
\nonumber&=&-\frac23\sin t\Big(\bar\eta(X)\bar g(\phi hY,Z)+\bar\eta(Y)\bar g(\phi hZ,X)+\bar\eta(Z)\bar g(\phi hX,Y)\Big)\\
\nonumber &-&\frac23\lambda\cos t\Big(\bar\eta(X)\bar g(\phi Y,Z)+\bar\eta(Y)\bar g(\phi Z,X)+\bar\eta(Z)\bar g(\phi X,Y)\Big).
\end{eqnarray}
Replacing $t$ by $t+\frac{\pi}2$, we can write all the torsions by a single formula as follows:
\begin{eqnarray}\label{ngtorr}
T_t(X,Y,Z)&=&-\frac23\sin t\Big(\bar\eta(X)\bar g(\phi hY,Z)+\bar\eta(Y)\bar g(\phi hZ,X)+\bar\eta(Z)\bar g(\phi hX,Y)\Big)\\
\nonumber &-&\frac23\lambda\cos t\Big(\bar\eta(X)\bar g(\phi Y,Z)+\bar\eta(Y)\bar g(\phi Z,X)+\bar\eta(Z)\bar g(\phi X,Y)\Big).
\end{eqnarray}

\subsection{The NGTS connections $\nabla^t$ and their curvature}
Insert \eqref{ngtorr} into \eqref{ngtcon} to get for the NGTS connections $\nabla^t$ the following expression:
\begin{multline}\label{ngtcont}
\bar g(\nabla^t_XY,Z)=\bar g(\bar\nabla_XY,Z)+\frac13\Big[\bar\eta(X)\bar g(hY,Z)+\bar\eta(Y)\bar g(hX,Z)\Big]\\
-\frac13\sin t\Big(\bar\eta(X)\bar g(\phi hY,Z)+\bar\eta(Y)\bar g(\phi hZ,X)+\bar\eta(Z)\bar g(\phi hX,Y)\Big)\\
-\frac13\lambda\cos t\Big(\bar\eta(X)\bar g(\phi Y,Z)+\bar\eta(Y)\bar g(\phi Z,X)+\bar\eta(Z)\bar g(\phi X,Y)\Big),
\end{multline}
which, using \eqref{ndhom} and \eqref{con12}, yields
\begin{equation}\label{ngtcontt}
\nabla^t_XY=\nabla_XY-\frac{\lambda}2\Big[\eta(X)(\phi^t_2+2\phi_3)Y-\eta(Y)(\phi^t_2-2\phi_3)X+\frac23 g(\phi^t_2X,Y)\xi\Big].
\end{equation}
The equation \eqref{ngtcontt} implies
\begin{prop}
The NGTS connections $\nabla^t$ with totally skew-symmetric torsion on a Sasaki-Einstein 5-manifold $(M,\tilde\eta,\phi_3,\tilde g,\tilde\xi)$ are related to the Levi-Civita connection $\tilde\nabla$ of $\tilde g$ by
\begin{equation}\label{ngtconttsas}
\nabla^t_XY=\tilde\nabla_XY-\frac12\Big[\tilde\eta(X)(\phi^t_2+2\phi_3)Y-\tilde\eta(Y)(\phi^t_2-2\phi_3)X+\frac23\tilde g(\phi^t_2X,Y)\tilde\xi\Big].
\end{equation}
\end{prop}
Further, to calculate the curvature of the NGTS connection $\nabla^t$ we use \eqref{ngtcontt}.
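Before doing so, note, as a quick check of \eqref{ngtcontt} (using the standard identities $\phi\xi=0$ and $h\xi=\nabla_\xi\xi=0$, which give $\phi^t_2\xi=\phi_3\xi=0$), that setting $X=Y=\xi$ yields
\begin{equation*}
\nabla^t_\xi\xi=\nabla_\xi\xi-\frac{\lambda}2\Big[(\phi^t_2+2\phi_3)\xi-(\phi^t_2-2\phi_3)\xi+\frac23g(\phi^t_2\xi,\xi)\xi\Big]=\nabla_\xi\xi-2\lambda\phi_3\xi=0,
\end{equation*}
so $\xi$ remains a geodesic vector field for each of the connections $\nabla^t$.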
We have
\begin{multline}\label{newt2}
\nabla^t_Z\nabla^t_XY=\nabla_Z\nabla_XY-\frac{\lambda}2\Big[\eta(Z)[\phi^t_2+2\phi_3]\nabla_XY-\eta(\nabla_XY)[\phi^t_2-2\phi_3]Z+\frac23 g(\phi^t_2Z,\nabla_XY)\xi\Big]\\
-\frac{\lambda}2\Big\{Z\eta(X)\phi^t_2Y+\eta(X)\nabla^t_Z\phi^t_2Y-Z\eta(Y)\phi^t_2X-\eta(Y)\nabla^t_Z\phi^t_2X+\frac23Zg(\phi^t_2X,Y)\xi+\frac23g(\phi^t_2X,Y)\nabla^t_Z\xi\Big\}\\
-\lambda\Big\{Z\eta(X)\phi_3Y+\eta(X)\nabla^t_Z\phi_3Y+Z\eta(Y)\phi_3X+\eta(Y)\nabla^t_Z\phi_3X\Big\}.
\end{multline}
Applying the fact that $\phi^t_2$ and $\phi^t_1$ are nearly cosymplectic structures, we get from \eqref{newt2} and \eqref{ngtcontt} that
\begin{multline}\label{rngt1}
\frac2{\lambda}R^{ngt}_t(Z,X)Y=\frac2{\lambda}R(Z,X)Y-d\eta(Z,X)[\phi^t_2+2\phi_3]Y\\
+(\nabla_Z\eta)Y[\phi^t_2-2\phi_3]X-(\nabla_X\eta)Y[\phi^t_2-2\phi_3]Z\\
-\eta(X)[(\nabla_Z\phi^t_2)Y+2(\nabla_Z\phi_3)Y]+\eta(Z)[(\nabla_X\phi^t_2)Y+2(\nabla_X\phi_3)Y]\\
-2\eta(Y)[(\nabla_Z\phi_3)X-(\nabla_X\phi_3)Z-(\nabla_Z\phi^t_2)X]+\lambda\Big[\eta(Y)\eta(Z)\big[\tfrac52-\tfrac12\phi^t_1\big]X-\eta(Y)\eta(X)\big[\tfrac52-\tfrac12\phi^t_1\big]Z\Big]\\
+\frac{4\lambda}3[g(\phi^t_2X,Y)\phi_3Z-g(\phi^t_2Z,Y)\phi_3X]-\frac{\lambda}3[g(\phi^t_2X,Y)\phi^t_2Z-g(\phi^t_2Z,Y)\phi^t_2X]\\
-\frac23[g((\nabla_Z\phi^t_2)X,Y)-g((\nabla_X\phi^t_2)Z,Y)]\xi+\frac{\lambda}3[\eta(X)g(Z,Y)-\eta(Z)g(X,Y)]\xi\\
-\frac{2\lambda}3[\eta(X)g(\phi^t_1Z,Y)-\eta(Z)g(\phi^t_1X,Y)+2\eta(Y)g(\phi^t_1Z,X)]\xi,
\end{multline}
which, due to the Killing condition, \eqref{sasnk}, \eqref{circnc} and \eqref{ngtcont1}, is equivalent to
\begin{multline}\label{rngt2}
\frac2{\lambda}R^{ngt}_t(Z,X)Y=\frac2{\lambda}R(Z,X)Y+2\lambda\omega_3(Z,X)[\phi^t_2+2\phi_3]Y\\
-\lambda\omega_3(Z,Y)[\phi^t_2-2\phi_3]X+\lambda\omega_3(X,Y)[\phi^t_2-2\phi_3]Z\\
-\eta(X)[(\nabla_Z\phi^t_2)Y+2(\nabla_Z\phi_3)Y]+\eta(Z)[(\nabla_X\phi^t_2)Y+2(\nabla_X\phi_3)Y]\\
-2\eta(Y)[(\nabla_Z\phi_3)X-(\nabla_X\phi_3)Z-(\nabla_Z\phi^t_2)X]+\lambda\Big[\eta(Y)\eta(Z)\big[\tfrac52-\tfrac12\phi^t_1\big]X-\eta(Y)\eta(X)\big[\tfrac52-\tfrac12\phi^t_1\big]Z\Big]\\
+\frac{4\lambda}3[\Omega^t_2(X,Y)\phi_3Z-\Omega^t_2(Z,Y)\phi_3X]-\frac{\lambda}3[\Omega^t_2(X,Y)\phi^t_2Z-\Omega^t_2(Z,Y)\phi^t_2X]\\
+\frac{\lambda}3[\eta(X)g(Z,Y)-\eta(Z)g(X,Y)]\xi-2\lambda[\eta(X)\Omega^t_1(Z,Y)-\eta(Z)\Omega^t_1(X,Y)]\xi.
\end{multline}
At this point we need \eqref{ncfxx},
Remark~\ref{rmnk} and \eqref{sasnk}, which yield
\begin{eqnarray}\label{ngtcont1}
g((\nabla_X\phi^t_2)Y,Z)&=&\frac13d\Omega^t_2(X,Y,Z)=-\lambda(\eta\wedge\Omega^t_1)(X,Y,Z)\\
\nonumber&=&-\lambda\Big[\eta(X)g(\phi^t_1Y,Z)+\eta(Y)g(\phi^t_1Z,X)+\eta(Z)g(\phi^t_1X,Y)\Big];\\
\label{ngtcont2}
g((\nabla_X\phi^t_1)Y,Z)&=&\frac13d\Omega^t_1(X,Y,Z)=\lambda(\eta\wedge\Omega^t_2)(X,Y,Z)\\
\nonumber&=&\lambda\Big[\eta(X)g(\phi^t_2Y,Z)+\eta(Y)g(\phi^t_2Z,X)+\eta(Z)g(\phi^t_2X,Y)\Big];\\
\label{ngtcont3}
g((\nabla_X\phi_3)Y,Z)&=&-\frac1{\lambda}g((\nabla_Xh)Y,Z)=\lambda\Big[g(X,Y)\eta(Z)-g(X,Z)\eta(Y)\Big].
\end{eqnarray}
Applying \eqref{ngtcont1}, \eqref{ngtcont2} and \eqref{ngtcont3} to \eqref{rngt2}, we obtain
\begin{multline}\label{rngt4}
\frac2{\lambda^2}R^{ngt}_t(Z,X)Y=\frac2{\lambda^2}R(Z,X)Y+2\omega_3(Z,X)\Big[\phi^t_2+2\phi_3\Big]Y\\
+\frac32\eta(X)\eta(Y)\Big[Z+\phi^t_1Z\Big]+\Big[\omega_3(X,Y)-\frac13\Omega^t_2(X,Y)\Big]\phi^t_2Z-2\Big[\omega_3(X,Y)-\frac23\Omega^t_2(X,Y)\Big]\phi_3Z\\
-\frac32\eta(Z)\eta(Y)\Big[X+\phi^t_1X\Big]-\Big[\omega_3(Z,Y)-\frac13\Omega^t_2(Z,Y)\Big]\phi^t_2X+2\Big[\omega_3(Z,Y)-\frac23\Omega^t_2(Z,Y)\Big]\phi_3X\\
+\Big[\frac53\Big(\eta(Z)g(X,Y)-\eta(X)g(Z,Y)\Big)+\eta(Z)\Omega^t_1(X,Y)-\eta(X)\Omega^t_1(Z,Y)-2\eta(Y)\Omega^t_1(Z,X)\Big]\xi.
\end{multline}
The equation \eqref{rngt4}, the identity \eqref{sascurvxi} and the Sasaki-Einstein condition $\tilde Ric=4\tilde g$ imply
\begin{prop}
The curvatures $R^{ngt}_t$ of the NGTS connections with totally skew-symmetric torsion on a Sasaki-Einstein 5-manifold $(M,\tilde\eta,\phi_3,\tilde g,\tilde\xi)$ are related to the Sasaki-Einstein curvature $\tilde R$ and the corresponding SU(2) structure $(\tilde\eta,\tilde\Omega^t_1,\tilde\Omega^t_2,\tilde\omega_3)$ by
\begin{multline}\label{rngt5}
R^{ngt}_t(Z,X)Y=\tilde R(Z,X)Y+\tilde\omega_3(Z,X)\Big[\phi^t_2+2\phi_3\Big]Y\\
+\frac34\tilde\eta(X)\tilde\eta(Y)\Big[Z+\phi^t_1Z\Big]+\frac12\Big[\tilde\omega_3(X,Y)-\frac13\tilde\Omega^t_2(X,Y)\Big]\phi^t_2Z-\Big[\tilde\omega_3(X,Y)-\frac23\tilde\Omega^t_2(X,Y)\Big]\phi_3Z\\
-\frac34\tilde\eta(Z)\tilde\eta(Y)\Big[X+\phi^t_1X\Big]-\frac12\Big[\tilde\omega_3(Z,Y)-\frac13\tilde\Omega^t_2(Z,Y)\Big]\phi^t_2X+\Big[\tilde\omega_3(Z,Y)-\frac23\tilde\Omega^t_2(Z,Y)\Big]\phi_3X\\
+\Big[\frac56\tilde\eta(Z)\tilde g(X,Y)-\frac56\tilde\eta(X)\tilde g(Z,Y)+\frac12\tilde\eta(Z)\tilde\Omega^t_1(X,Y)-\frac12\tilde\eta(X)\tilde\Omega^t_1(Z,Y)-\tilde\eta(Y)\tilde\Omega^t_1(Z,X)\Big]\tilde\xi.
\end{multline}
In particular,
\begin{equation}\label{rngtxi}
R^{ngt}_t(Z,X)\tilde\xi=\frac74\tilde\eta(X)Z-\frac74\tilde\eta(Z)X+\frac34\tilde\eta(X)\phi^t_1Z-\frac34\tilde\eta(Z)\phi^t_1X-\tilde\Omega^t_1(Z,X)\tilde\xi.
\end{equation}
The Ricci tensors of the NGTS connections are given by
\begin{equation}\label{ricngt6}
Ric^{ngt}_t(X,Y)=\frac53\tilde g(X,Y)+\frac{16}3\tilde\eta(X)\tilde\eta(Y)+\frac43\tilde\Omega^t_1(X,Y).
\end{equation}
\end{prop}

{\bf Acknowledgments.} S.I. is partially supported by Contract DH/12/3/12.12.2017 and Contract 80-10-31/10.04.2019 with the Sofia University ``St. Kl. Ohridski''. M.Z. was partially supported by the project EUROWEB+ and by the Serbian Ministry of Education, Science, and Technological Development, Project 174012.

\begin{thebibliography}{99}

\bibitem{y4} K. Becker, M. Becker, J.-X. Fu, L.-S. Tseng, S.-T. Yau, \emph{Anomaly cancellation and smooth non-K\"ahler solutions in heterotic string theory}, Nuclear Phys. \textbf{B 751} (2006), 108--128.

\bibitem{BSethi} K. Becker, S. Sethi, \emph{Torsional heterotic geometries}, Nuclear Phys. \textbf{B 820} (2009), 1--31.

\bibitem{BBC} M. Bertolini, F. Bigazzi, and A. Cotrone, \emph{New checks and subtleties for AdS/CFT and a-maximization}, JHEP 0412, 024 (2004).

\bibitem{BBC1} S. Benvenuti, S. Franco, A. Hanany, D. Martelli and J. Sparks, \emph{An Infinite Family of Superconformal Quiver Gauge Theories with Sasaki-Einstein Duals}, JHEP 0506, 064 (2005).

\bibitem{Blair} D. E. Blair, Riemannian geometry of contact and symplectic manifolds. Second edition. Progress in Mathematics, 203. Birkh\"auser Boston, Inc., Boston, MA, 2010.

\bibitem{Bl} D. E. Blair, \emph{Almost contact manifolds with Killing structure tensors}, Pacific J. Math. 39 (1971), no. 2, 285-292.

\bibitem{BS} D. E. Blair, D. K. Showers, \emph{Almost contact manifolds with Killing structure tensors. II}, J. Differential Geom. 9 (1974), 577-582.

\bibitem{BG} C. P. Boyer, K. Galicki, Sasakian Geometry, Oxford Univ. Press, Oxford, 2007.

\bibitem{BGM} C. P. Boyer, K. Galicki, P. Matzeu, \emph{On Eta-Einstein Sasakian Geometry}, Commun. Math. Phys. 262 (2006), 177-208.

\bibitem{But} J.-B. Butruille, \emph{Twistors and 3-symmetric spaces}, Proc. London Math. Soc. {\bf 96} (2008), 738-766.

\bibitem{CD} B. Cappelletti-Montano, G. Dileo, \emph{Nearly Sasakian geometry and SU(2)-structures}, Ann. Mat. Pura Appl. (IV) 195 (2016), 897-922.

\bibitem{CDY} B. Cappelletti-Montano, A. De Nicola, I. Yudin, \emph{A survey on cosymplectic geometry}, Rev. Math. Phys. 25 (2013), no. 10, 55 pp.

\bibitem{CS} D. Conti, S. Salamon, \emph{Generalized Killing spinors in dimension 5}, Trans. Amer. Math. Soc. 359 (2007), no. 11, 5319-5343.

\bibitem{NDY} A. De Nicola, G. Dileo, I. Yudin, \emph{On nearly Sasakian and nearly cosymplectic manifolds}, Ann. Mat. Pura Appl. (4) 197 (2018), no. 1, 127-138.

\bibitem{Ein1} A. Einstein, Die Grundlagen der allgemeinen Relativit\"atstheorie, Ann. der Physik, 49, 769, 1916.

\bibitem{Ein} A. Einstein, The Meaning of Relativity, Princeton Univ. Press, 1953.

\bibitem{Eis} L. P. Eisenhart, Generalized Riemannian spaces I, Nat. Acad. Sci. USA, 37, (1951), 311-315.

\bibitem{Endo} H.
Endo, \emph{On the curvature tensors of nearly cosymplectic manifolds of constant $\Phi$-sectional curvature}, Analele \c{S}tin. Ale Univers. ``Ali Cuza'' Ia\c{s}i, Tomul LI, s.I, Math, (2005), 439-454.

\bibitem{Endo1} H. Endo, \emph{On the first Betti number of certain compact nearly cosymplectic manifolds}, J. Geom. 103 (2012), no. 2, 231-236.

\bibitem{FIMU} M. Fernandez, S. Ivanov, V. Mu\~noz, L. Ugarte, \emph{Nearly hypo structures and compact nearly K\"ahler 6-manifolds with conical singularities}, J. London Math. Soc. (2) {\bf 78} (2008) 580-604.

\bibitem{FosH} L. Foscolo, M. Haskins, \emph{New $G_2$ holonomy cones and exotic nearly K\"ahler structures on the $S^6$ and $S^3\times S^3$}, Ann. of Math. (2) 185 (2017), no. 1, 59-130.

\bibitem{FI} Th. Friedrich, S. Ivanov, \emph{Parallel spinors and connections with skew-symmetric torsion in string theory}, Asian J. Math. {\bf 6} (2002), 303-336.

\bibitem{FK} Th. Friedrich, I. Kath, \emph{Einstein manifolds of dimension five with small first eigenvalue of the Dirac operator}, J. Differential Geom. 29 (1989), 263-279.

\bibitem{Fried} A. Friedmann, \emph{\"Uber die Kr\"ummung des Raumes}, Zeitschrift f\"ur Physik 10 (1922), 377-386.

\bibitem{GauSas} J. P. Gauntlett, D. Martelli, J. Sparks, and D. Waldram, \emph{Sasaki-Einstein metrics on $S^2\times S^3$}, Adv. Theor. Math. Phys. \textbf{8} (2004), no. 4, 711-734.

\bibitem{GMPW} J. P. Gauntlett, D. Martelli, S. Pakis, D. Waldram, \emph{$G$-structures and wrapped NS5-branes}, Commun. Math. Phys. \textbf{247} (2004), 421--445.

\bibitem{GMPW1} J. P. Gauntlett, D. Martelli, J. F. Sparks and D. Waldram, \emph{A new infinite class of Sasaki-Einstein manifolds}, Adv. Theor. Math. Phys. \textbf{8} (2006), 987-1000.

\bibitem{GPap} J. Gillard, G. Papadopoulos, D. Tsimpis, \emph{Anomaly, fluxes and $(2,0)$ heterotic-string compactifications}, JHEP \textbf{0306} (2003), 035.

\bibitem{Gr1} A. Gray, \emph{Nearly K\"ahler manifolds}, J. Differential Geom. {\bf 4} (1970) 283-309.

\bibitem{Gr2} A. Gray, \emph{Riemannian manifolds with geodesic symmetries of order 3}, J. Differential Geom. {\bf 7} (1972) 343-369.

\bibitem{Gr3} A. Gray, \emph{The structure of nearly K\"ahler manifolds}, Math. Ann. {\bf 223} (1976) 233-248.

\bibitem{GLNP} K.-P. Gemmer, O. Lechtenfeld, C. N\"olle, A. D. Popov, \emph{Yang-Mills instantons on cones and sine-cones over nearly K\"ahler manifolds}, JHEP 09 (2011) 103.

\bibitem{Ham} R. T. Hammond, \emph{The necessity of torsion in gravity}, Gen. Relativ. Gravit. \textbf{42} (2010), 2345--2348.

\bibitem{Hlav} V. Hlavat\'{y}, Geometry of Einstein's Unified Field Theory, P. Noordhoff Ltd, Groningen, Holland, 1957.

\bibitem{IZl} S. Ivanov, M. Zlatanovi\'c, \emph{Connections on non-symmetric (generalized) Riemannian manifold and gravity}, Class. Quantum Grav., Volume 33, Number 7, 075016, (2016).

\bibitem{Ki} V. Kirichenko, \emph{K-spaces of maximal rank}, Mat. Zam. {\bf 22} (1977) 465-476 (Russian).

\bibitem{LNP} O. Lechtenfeld, C. N\"olle, A. D. Popov, \emph{Heterotic compactifications on nearly K\"ahler manifolds}, JHEP 09 (2010) 074.

\bibitem{Lema} G. Lema\^{\i}tre, \emph{L'univers en expansion}, Ann. Soc. Sci. Brux. 47A, 49 (1927).

\bibitem{LMof} J. Legare, J. W. Moffat, \emph{Geodesic and Path Motion in the Nonsymmetric Gravitational Theory}, Gen. Rel. Grav. 28 (1996), 1221-1249.
\bibitem{Sas51} D. Martelli and J. Sparks, \emph{Toric Geometry, Sasaki-Einstein Manifolds and a New Infinite Class of AdS/CFT Duals}, Commun. Math. Phys. 262 (2006), 51-69.

\bibitem{Mof} J. W. Moffat, \emph{Regularity Theorems in the Nonsymmetric Gravitational Theory}, arXiv:gr-qc/9504009.

\bibitem{MNS} A. Moroianu, P. A. Nagy, U. Semmelmann, \emph{Unit Killing vector fields on nearly K\"ahler manifolds}, Internat. J. Math. {\bf 16} (2005), no. 3, 281-301.

\bibitem{N1} P.-A. Nagy, \emph{Nearly K\"ahler geometry and Riemannian foliations}, Asian J. Math. 6 (2002), no. 3, 481--504.

\bibitem{N2} P.-A. Nagy, \emph{On nearly-K\"ahler geometry}, Ann. Global Anal. Geom. {\bf 22} (2002), no. 2, 167-178.

\bibitem{Ok} M. Okumura, \emph{Some remarks on space with a certain contact structure}, Tohoku Math. J. (2) 14 (1962), 135-145.

\bibitem{Po1} A. D. Popov, \emph{Hermitian-Yang-Mills equations and pseudo-holomorphic bundles on nearly K\"ahler and nearly Calabi-Yau twistor 6-manifolds}, Nucl. Phys. {\bf B 828} (2010), 594-624.

\bibitem{PS} A. D. Popov and R. J. Szabo, \emph{Double quiver gauge theory and nearly K\"ahler flux compactifications}, JHEP 02 (2012) 033.

\bibitem{Sp} J. Sparks, Sasaki-Einstein manifolds, Surveys in differential geometry. Volume XVI. Geometry of special holonomy and related topics, Surv. Differ. Geom., vol. 16, Int. Press, Somerville, MA, 2011, pp. 265-324.

\bibitem{Str} A. Strominger, \emph{Superstrings with torsion}, Nuclear Phys. \textbf{B 274} (1986), 253--284.

\bibitem{Tano} S. Tanno, \emph{The topology of contact Riemannian manifolds}, Illinois J. Math. 12 (1968), 700-717.

\bibitem{Tano1} S. Tanno, \emph{Geodesic flows on $C_L$-manifolds and Einstein metrics on $S^3\times S^2$}, Minimal submanifolds and geodesics (Proc. Japan-United States Sem., Tokyo, 1977), North-Holland, Amsterdam, 1979, pp. 283-292.

\bibitem{Yano} K. Yano, Differential geometry on complex and almost complex spaces, Pergamon Press, New York, 1965.

\bibitem{YanoBochner} K. Yano, S. Bochner, Curvature and Betti Numbers, Princeton (1953).

\end{thebibliography}

\end{document}
\begin{document} \title{ Abelian Varieties into $2g+1$-dim. Linear Systems} \footnotetext{Mathematics Classification Number: 14C20, 14B05, 14E25.} \begin{abstract} We show that polarisations of type $(1,...,1,2g+2)$ on $g$-dimensional abelian varieties are $\it{never}$ very ample, if $g\geq 3$. This disproves a conjecture of Debarre, Hulek and Spandaw. We also give a criterion for non-embeddings of abelian varieties into $2g+1$-dimensional linear systems. \end{abstract} \section{Introduction} Let $L$ be an ample line bundle of type $\delta$ on an abelian variety $A$, of dimension $g$. Classical results of Lefschetz ($n\geq 3$) and Ohbuchi ($n=2$) imply very ampleness of $L^n$, if $|L|$ has no fixed divisor when $n=2$. Suppose $L$ is an ample line bundle of type $(1,...,1,d)$ on $A$. When $g=2$, Ramanan ( see [4]) has shown that if $d\geq 5$ and the abelian surface does not contain elliptic curves then $L$ is very ample. When $g\geq 3$, Debarre, Hulek and Spandaw ( see [3], Corollary 25, p. 201) have shown the following. \bt Let $(A,L)$ be a generic polarized abelian variety of dimension $g$ and type $(1,...,1,d)$. For $d>2^g$, the line bundle $L$ is very ample. \et They further conjecture that if $d\geq 2g+2$, then the line bundle $L$ is very ample, ( see [3], Conjecture 4, p. 184). In particular, when $g=3$ and $d\geq 8$, their results ( for $d\geq 9$) and conjecture ( for $d=8$) imply that $L$ is very ample. The results due to Barth ( [1])and Van de Ven ( [5])show \bt For $g\geq 3$, no abelian variety $A_g$ can be embedded in $\p^d$, for $d\leq 2g$. \et In particular, it implies that line bundles of type $(1,...,1,d)$, $d\leq 2g+1$, are never very ample. We show \bt Suppose $L$ is an ample line buundle of type $(1,...,1,d)$ on an abelian variety $A$, of dimension $g$. If $g\geq 3$ and $d\leq 2g+2$, then $L$ is never very ample. \et This disproves the conjecture of Debarre et.al when $d=2g+2$ and gives a different proof of 1.2, for morphisms into the complete linear system $|L|$. The proof of 1.3 also indicates the type of singularities of the image in $|L|$. Now any abelian variety $A$ of dimension $g$ can be embedded in a projective space of dimension $2g+1$. Consider a morphism $A\lrar |V|$, where $dim|V|=2g+1$. Suppose the involution $i:A\lrar A$, $a\mapsto -a$ lifts to an involution on the vector space $V$, hence on the linear system $|V|$, ( such a situation will arise, for example, if $A$ is embedded by a symmetric line bundle into its complete linear system, of dimension greater than $2g+1$. One may then project the abelian variety from a vertex which is invariant for the involution $i$, to a projective space of dimension $2g+1$ and the involution $i$ will then descend down to this projection ). Then we show \bt Suppose there is a morphism $A\sta{\phi}{\lrar} |V|$, with $dim|V|=2g+1$ and the involution $i$ acting on the vector space $V$. If $degree \phi(A)>2^{2g}$ and $dim V_+\neq dim V_-$, then the morphism $\phi$ is never an embedding, for all $g\geq 1$. Here $V_+$ and $V_-$ denote the $\pm 1$-eigenspaces of $V$, for the involution $i$. \et \section{Proof of 1.3.} Consider a pair $(A,L)$, as in 1.3. We may assume $L$ is an ample line bundle of characteristic $0$ on $A$. Then $L$ is symmetric , i.e. there is an isomorphism $L\simeq i^*L$, for the involution $i:A\lrar A,\,a\mapsto -a$. This induces an involution on the vector space $H^0(L)$, also denoted as $i$. 
Let $H^0(L)^+$ and $H^0(L)^-$ denote the $+1$ and $-1$-eigenspaces of $H^0(L)$, for the involution $i$ and $h^0(L)^+$ and $h^0(L)^-$ denote their respective dimensions. Choose a normalized isomorphism $\psi:L\simeq i^*L$, i.e. the fibre map $\psi(0):L(0)\lrar L(0)$ is $+1$. Let $A_2$ denote the set of torsion $2$ points of $A$. If $a\in A_2$ then $\psi(a):L(a)\lrar L(a)$ is either $+1$ or $-1$. Let $$A_2^+=\{a\in A_2: \psi(a)= +1\}$$ and $$A_2^-=\{a\in A_2:\psi(a)=-1\}$$ and $Card(A_2^+)$ and $Card(A_2^-)$ denote their respective cardinalities. Consider the associated morphism $A\sta{\phi_L}{\lrar }\p H^0(L)$ and let $$\p _+= \p \{s=0 : s\in H^0(L)^-\}$$ and $$\p _-=\p \{ s=0 : s\in H^0(L)^+\}.$$ Then the involution $i$ acts trivially on the subspaces $\p _+$ and $\p _-$, of $\p H^0(L)$. Moreover, $\phi_L(A_2^+)\subset \p _+$ and $\phi_L(A_2^-)\subset \p _-$. \bl If $a\in A_2^+$, then the intersection of the image $\phi_L(A)$ and $\p _+$ is transversal at the point $\phi_L(a)$. \el \pf The action of the involution $i$ at the tangent space, $T_{A,a}$, at $a$, is $-1$. If the intersection of $\phi_L(A)$ with $\p _+$ is not transversal at $\phi_L(a)$, then $\phi_{L*}(T_{A,a})$ intersects $\p_+$, giving a $i$-fixed non-trivial subspace of $T_{A,a}$, which is not true. ( This argument was given by M.Gross.) $\Box$ Let $Z=\phi_L(A)\cap \p _+$, in $\p H^0(L)$. Then $\phi_L(A_2^+)\subset Z$. Suppose $dimZ>0$. Since the involution $i$ acts trivially on $Z$, the morphism $\phi_L$ restricts on $\phi_L^{-1}(Z)\lrar Z$, as a morphism of degree at least $2$, with its Galois group containing $<i>$. If $dimZ=0$, then by 2.1, the points of $\phi_L(A_2^+)$ have multiplicity $1$ in $Z$. Let $r=deg Z-Card(A_2^+)$. Then there are $\frac{r}{2} $-points on $\phi_L(A)$ on which the involution $i$ acts trivially, i.e. there are $\frac{r}{2}$-pairs $(a,-a),\,a\in A-A_2$, which are identified transversally by $\phi_L$. By $K(L)$-invariance of the image $\phi_L(A)$, there are more such pairs. \brm If $dimZ>0$ or $r>0$, then $L$ is not very ample. \erm $\bf{Case\,1:}$ $d=2m $ and $m\leq g+1$. By [2], 4.6.6, $h^0(L)^+=m+1$ and $h^0(L)^-=m-1$. Hence $dim\p _+=m$ and $dim \p_-=m-2$. a) If $m<g+1$, then $dimZ\geq g+m-2m+1>0$. b) If $m=g+1$. By Riemann-Roch, $deg\phi_L(A)=(2g+2).g!$. If $dimZ=0$ then since $\p_+$ and $\phi_L(A)$ have complementary dimensions in $\p H^0(L)$, $degZ=(2g+2).g!$. Now by [2], Exercise 4.12 b)-Remark 4.7.7, $$Card(A_2^+)\leq 2^{2g-(g-1)-1}(2^{g-1}+1)$$ $$=2^g(2^{g-1}+1).$$ Since $g\geq 3$, $r\geq(2g+2).g!-2^g(2^{g-1}+1)>0$. Hence by 2.2, $L$ is not very ample. $\bf{Case\,2:}$ $d=2m-1$ and $m\leq g+1$. Then $h^0(L)^+=m$ and $h^0(L)^-=m-1$. Hence $dim\p _+=m-1$ and $dim\p_-=m-2$. a) If $m<g+1$, then $dimZ\geq g+m+1-2m>0$. b) If $m=g+1$, as in $\bf{Case\,1}$, $deg\phi_L(A)=(2g+1)g!$, and $\p _+$ and $\phi_L(A)$ have complementary dimension in $\p H^0(L)$. Hence if $dimZ=0$, then $degZ=(2g+1)g!$. Also, in this case, $Card(A_2^+)\leq 2^{g-1}(2^g+1)$. Since $g\geq 3$, $r\geq(2g+1)g!-2^{g-1}(2^g+1)>0$. Hence by 2.2, $L$ is not very ample. $\Box$ \section{Morphisms into $i$-invariant linear systems} $\bf{Proof\,of\,1.4}$: Consider the morphism $A\sta{\phi}{\lrar}|V|$, with the involution $i$ acting on the vector space $V$. Let $$\p_+= \p \{s=0: s\in V_-\}$$ and $$\p_-= \p \{s=0: s\in V_+\},$$ where $V_+$ and $V_-$ denote the $+1$ and $-1$-eigenspaces of the vector space $V$, for the involution $i$. Let $d=degree \phi(A)$. Now $dim\p_+>g$ or $dim\p_+< g$ or $dim\p_+=g$. 
Case 1: $dim\p_+> g$ Consider the intersection $Z=\p_+\cap \phi(A)$. Then $dimZ \geq g+ g+1 -2g-1\geq 0$. As in Proof of 1.3, if $dimZ>0$, then the restricted morphism $\phi^{-1}(Z)\lrar Z$ is of degree at least $2$, since $i$ acts trivially on $Z$. Suppose $dim Z=0$. Then the intersection of $\phi(A)$ and $\p_+$ is transversal at the image of torsion $2$ points of $A$, by 2.1. Since $Card(A_2)=2^{2g}$ and $degree(\phi(A))>2^{2g}$, there are pairs $\{a,-a\}$ on $A$ which get identified transversally by the morphism $\phi$. Case 2: $dim\p_+< g$. In this situation, $dim \p_-> g$ and we can repeat the above argument. Hence $\phi$ is never an embedding. $\Box$ \brm When $dim V_+= dim V_-$, the morphism $\phi$ need not identify some pair of points $\{a,-a\}$, in the linear system $|V|$. For example, consider a symmetric line bundle $L$, of type $(1,1,9)$, on a generic abelian threefold $A$. Then $L$ is very ample and $dim H^0(L)_+=5$ and $dim H^0(L)_-=4$. Hence $dim \p_+=4$ and $dim \p_-=3$. Consider the scroll $S_A=\cup_{a\in A} l_{a,-a }$, where $l_{a,-a}$ is the line joining the points $a$ and $-a$, in $ |L|$. Then the line $l_{a,-a}$ is invariant for the involution $i$ and has two fixed points, one of them in $\p_+$ and the other in $\p_-$. Hence $S_A$ intersects $\p_+$ in at most a $3-$dimensional subset. Now we can project from a point of $\p_+$, outside this subset, and the projection will have the fixed spaces of $i$ to be equidimensional. Also, by the choice of the point of projection, there are no pairs $\{a,-a\}$ identified in the projection. \erm $\it{Acknowledgement}$: We thank M.Gross for giving the argument in 2.1 and O.Debarre for pointing out a mistake in an earlier version dealing with the case when $dim V_+= dim V_-$, in 1.4. We also thank French Ministry of National Education, Research and Technology, for their support. \begin{thebibliography}{99} \bib [1]{1} Barth, W.:{\em Transplanting cohomology classes in complex-projective space}, Amer.J. of Math. $\bf{92}$, 951-967, (1970). \bib [2]{2} Birkenhake, Ch., Lange, H. : {\em Complex abelian varieties}, Springer-Verlag, Berlin, (1992). \bib [3]{3} Debarre, O., Hulek, K., Spandaw, J. : { \em Very ample linear systems on abelian varieties}, Math. Ann. $\bf{ 300}$, 181-202, (1994). \bib [4]{4} Ramanan, S.: {\em Ample Divisors on Abelian Surfaces}, Proc. London Math. Soc. (3), $\bf{51}$, 231-245, (1985). \bib [5]{5} Van de Ven : {\em On the embeddings of abelian varieties in projective spaces}, Ann.Mat.Pura Appl.(4), $\bf{103}$, 127-129, (1975). \end {thebibliography} \noindent{Institut de Mathematiques,\\ Case 247, Univ.Paris-6, \\4, Place Jussieu, \\75252, Paris Cedex 05, France.\\ Email: [email protected] } \end{document}
\begin{document} \begin{frontmatter} \begin{quote} \textit{\large "Now, there is a law written in the darkest of the Books of Life, and it is this: If you look at a thing nine hundred and ninety-nine times, you are perfectly safe; if you look at it the thousandth time, you are in frightful danger of seeing it for the first time."}\\ \begin{flushright}{\large Gilbert Keith Chesterton}\\ in ``The Napoleon of Notting Hill''(1906)\end{flushright} \end{quote} \begin{quote} \makeatletter \addcontentsline{toc}{chapter}{Aknowledgements} \makeatother \centerline{\Large AKNOWLEDGEMENTS} This was a very rewarding work by itself, but I certainly have to thank some people that made the experience even more interesting and pleasant. First, I would like to express my gratitude for the great amount (and variety) of knowledge I received from my advisor, Alonso Botero, as well as for his auspicious and patient orientation. Special thanks to Víctor Tapia, for his valuable advice in the writting and general presentation of the document. Thanks also for Liliana Martín, who, besides of her pleasant company, gave me necessary opinions from a different point of view about some aspects of the work and the document . And last but not least, I am indebted to the Department of Physics of Universidad de los Andes, which under the efficacious direction of Dr. Bernardo Gómez has been a wonderful place to develop my work. \end{quote} \cleardoublepage \begin{thesisabstract} \begin{center} {\large CLASSIFICATION OF QUANTUM SYMMETRIC NONZERO-SUM 2X2 GAMES IN THE EISERT SCHEME} \\ Álvaro Francisco Huertas Rosero \\ Universidad de los Andes \\ {\large \textbf{Abstract}} \end{center} A quantum game in the Eisert scheme is defined by the payoff matrix, plus some quantum entanglement parameters. In the symmetric nonzero-sum 2x2 games, the relevant features of the game are given by two parameters in the payoff matrix, and only one extra entanglement parameter is introduced by quantizing it in the Eisert scheme. The criteria adopted in this work to classify quantum games are the amount and characteristics of Nash equilibria and Pareto-optimal positions. A new classification based on them is developed for symmetric nonzero-sum classical 2x2 games, as well as classifications for quantum games with different restricted subsets of the total strategy set. Finally, a classification is presented taking the whole set of strategies into account, both unitary strategies and nonunitary strategies studied as convex mixures of unitary strategies. The classification reproduces features which have been previously found in other works, like appearance of multiple equilibria, changes in the character of equilibria, and entanglement regime transitions. \end{thesisabstract} \end{frontmatter} \chapter{ABOUT THIS WORK} This is a work on a very specific subject in the newborn field of Quantum Game Theory, a field that is familiar only to a restricted group of people (at least, at the date of the publication of this dissertation). The work is presented here mainly for physicists, specially those working on Quantum Mechanics. Therefore, the physical concepts will be explained to a minor extent than those of the mathematical Theory of Games. 
The dissertation is divided in 10 chapters (this is chapter \thechapter), for which a little reading guide is presented here: \begin{enumerate} \setcounter{enumi}{1} \item Why study Quantum Games?\\ In this chapter, an introduction to some of the concepts of both Game Theory and Quantum Information are given with an emphasis on their potential practical application. \item Classical Games, Quantum Games\\ An introduction to the relevant concepts from ``Classical'' game theory, and a description of the Eisert scheme for defining Quantum Games. \item What we know so far\\ In this chapter a brief description is presented of the relevant work developed on the subject of Quantum games, as well as the questions posed by earlier studys, with an emphasis on the questions that have inspired this work \item Classification of Symmetric 2x2 Classical Games\\ A classification is developed for this kind of games, that will be the basis to devise a similar classification of quantum games. The results obtained can have some interest in some developing fields of Game Theory as well as in Quantum Game Theory. \item The Quantum Game with Deterministic Strategies\\ The \textbf{main results} of the entire work can be found in this chapter. \item A Geometrical approach to Unitary Strategies\\ In this chapter a methodology for studying a larger set of strategies is developed. It can be interesting to the reader that is interested in the transformations of qubits, but is not essential to the work, except as a basis for further results. \item Critical Responses in Quantum 2x2 Games\\ Here a methodology is presented to characterize games taking all the possible strategies into account, and some further partial results are presented. \item Exploration of the Strategy Space\\ Here a final exploration of the characteristics of the quantum games is presented, that completes the sought results. \item Conclusions and Perspectives \end{enumerate} To guide the reader, here is also a little map: \footnote{The map shows the main concepts treated in this work, with their relations as arrows. The numbers within the parentheses are the chapters where they are developed.} \begin{figure} \caption{A road map of the work} \end{figure} \chapter{WHY STUDY QUANTUM GAMES?} \section{WHY STUDY GAMES} Games are situations where several deciding agents get a certain individual payoff according to what \textit{all of them} have decided. A certain rationality is assumed from the agents that forces all of them to choose with a payoff maximizing criterion. There is a number of situations in life where the best thing to do, or what can be expected to happen, can be computed by game theoretical tools, and some of these situations have been portrayed in the movies and TV. An example:\\ \textit{In the famous movie ``\textbf{Rebel Without a Cause}'', a simple game is stated, called ``\textbf{The Chicken Game}''. Two rebel young man drive on a highway in opposite directions, each approaching the other at high speed. If one of them get scared and avoid the collision, he will lose control of his car and be ridiculed by his friends. Obviously, the other driver will gain a great prestige among these risk-loving young men. But, on the other hand, if neither driver turns to avoid the collision right on time, both will probably die or be severely hurt.}\cite{GamesTV} We can ask several questions about this game... Is there a ``winning'' strategy? How can one predict which player is going to win? Is it more likely to have a happy ending or a tragic one? 
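To make the situation concrete, the Chicken game can be summarized in a payoff table. The numbers below are purely illustrative (they are an assumption introduced here, not part of the original story; only their ordering matters), with higher numbers meaning a better outcome:
\begin{center}
\begin{tabular}{|c|c|c|}\hline
 & B {\tiny swerves} & B {\tiny drives on}\\ \hline
A {\tiny swerves} & $(2,2)$ & $(1,3)$\\ \hline
A {\tiny drives on} & $(3,1)$ & $(0,0)$\\ \hline
\end{tabular}
\end{center}
Each cell lists (payoff of A, payoff of B): swerving against a driver who goes on means ridicule, driving on against a swerver means prestige, and a crash is the worst outcome for both.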
Game Theory is a systematic attempt to answer this kinds of questions. Game Theory is, indeed, part of the heart of what is called by Aumann ``rational side of social sciences'' \cite{RationalSocial}. Any social situation consists in a number of deciding agents whose decisions affect the whole group. Game theory is, then, very useful to study the behaviour of extremely complex systems in a complex environment (individuals in a society), provided that there is some way of defining the payoff to ensure rationality. Evolution \cite{Evolution}, population dynamics \cite{Population1} and a variety of biological phenomena\cite{Animals} have been studied with game models, obtaining in this way interesting predictions and suggestive interpretations. It is possible, for example, to give a definition for \textit{altruism} as a way to perceive the payoff in a social situation \cite{Altruism}. Game theory has even been proposed for modeling complex physical phenomena such as decoherence and irreversibility \cite{Parrondo}, \cite{Decoherence}. \subsection{RATIONALITY AND UNILATERAL OPTIMIZATION} The concept of instrumental rationality adopted in Game Theory is sometimes problematic when applied to human individuals \cite{Rational} but it can be applied comfortably to other kind of agents. It consists simply in the tendency to maximize a certain kind of \textbf{utility}. Here is an example given by Rapoport in \cite{Utility}: \begin{quote} ``The utility scale, as it has been defined, implies that of two \textit{risky}\footnote{The italics were put by the author} outcomes, the one with the greater expected utility is always preferred. A risky outcome can be thought of essentially as a lottery ticket entitling the owner to any several prizes depending on the outcome of a chance event. Such a lottery ticket carries a list of the prizes with the probability attached to each, namely the probability of the event that entitles the holder of the ticket to the prize in question.'' \end{quote} \begin{definition} \textbf{Expected Payoff} $(\hat{\$})$:= The average payoff obtained in a big number of attempts. \end{definition} This average (expected) payoff gives us a good criterion to decide in situations where there is no complete certainty of what can happen. There is an utility, but there is also a risk that arises from the lack of information. Now suppose we have to choose between two different lotteries. Each has different prizes, and different probabilities to win. The information we have about the prizes and probabilities is in table \ref{Lotteries}. 
\begin{table}
\center
\begin{tabular}{|c|c||c|c|}\hline
\multicolumn{2}{|c||}{Lottery 1} & \multicolumn{2}{|c|}{Lottery 2} \\ \hline\hline
{\small Probability} & {\small Prize} & {\small Probability} & {\small Prize}\\\hline
0.010 & 1000 & 0.001 & 10000\\
0.040 & 250 & 0.005 & 2000\\
0.150 & 40 & 0.104 & 45\\
0.800 & 0 & 0.890 & 0\\ \hline
\end{tabular}
\caption{Comparison of two lotteries}\label{Lotteries}
\end{table}
The expected payoff of the first lottery can be computed as an average, that is, the sum of each probability times the corresponding prize:
\begin{equation*}\begin{aligned}
\boxed{\hat{\$}_1 = 0.010\times 1000 + 0.040\times 250 + 0.150\times 40 + 0.800\times 0 = 10 + 10 + 6 + 0 = 26}\\
\boxed{\hat{\$}_2 = 0.001\times 10000 + 0.005\times 2000 + 0.104\times 45 + 0.890\times 0 = 10 + 10 + 4.68 + 0 = 24.68}
\end{aligned}\end{equation*}
The expected payoff analysis tells us that the better lottery is the first one, which gives us an expected payoff of \$26, even though the second one promises us \$10000 if we win the big prize. This is a very simple way of taking risk into account, but there are more elaborate ones.

There is, then, a way of predicting the behaviour of a decision-making agent accurately in terms of the maximization of a certain payoff, together with a certain use of statistical measures of risk. This is the basis of \textbf{decision theory} \cite{Decision}.

The maximization of quantities, on the other hand, has been used as a cornerstone to construct very elegant theoretical edifices, not only in physics but in all the so-called ``hard sciences''. The best example is probably Lagrangian Mechanics, based on the Maupertuis least action principle \cite{ExtremalPrinc}. Nowadays, as has always happened, this still motivates physicists and other theoretical scientists to untiringly seek \textit{extremal principles} as a foundation for their theories. (An example of this is \cite{ExtremalPhysics}, where three extremal principles are claimed to give rise to the entire formulation of General and Special Relativity and Quantum Mechanics.)

Rationality, then, seems to be a very useful assumption for studying both living and non-living systems, and therefore Game Theory arises as a powerful tool in those fields.

\subsection{MULTIAGENT OPTIMIZATION}\label{Multiagent}
The lottery we have seen can be bought by a large number of people or by none at all, and this does not affect either the probabilities or the prizes. But this is not always the case. For example: the administrators of a lottery decide to do something to give people an incentive to buy tickets. If a certain prize is not claimed, then it is given to the winner of another game, at random. The payoffs now \textit{depend} on what the other lottery players do: if they don't buy, they increase the payoffs of the game.

The theory of multi-agent decision is \textbf{game theory}. A number of interesting new concepts arise when we consider the interplay between many agents. A typical example of the strange things that can happen is the game called \textbf{Prisoner's Dilemma}. Two partners in crime are held by the police in two separate rooms at the police station and given a similar deal. If one implicates the other, he will go free while the other receives a life sentence. If neither implicates the other, both will be given moderate sentences, and if both implicate each other, the sentences for both will be severe \cite{Prisoner}. The payoff in this case would be the number of years in prison avoided.
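Before the qualitative analysis that follows, here is a minimal computational sketch of the unilateral-optimization argument for this game (Table~\ref{TablePrisoner}). The sketch is only illustrative: the numerical payoffs are an assumption introduced here, chosen so that free $>$ moderate $>$ severe $>$ lifetime when measured in avoided years in prison.
\begin{verbatim}
# Minimal sketch: best replies in the qualitative Prisoner's Dilemma.
# The numbers are illustrative assumptions (avoided years in prison).
PAYOFF = {"free": 20, "moderate": 15, "severe": 5, "lifetime": 0}

# Outcomes (A's sentence, B's sentence) indexed by (A's move, B's move);
# "C" = confess, "I" = implicate the other player.
GAME = {("C", "C"): ("moderate", "moderate"),
        ("C", "I"): ("lifetime", "free"),
        ("I", "C"): ("free", "lifetime"),
        ("I", "I"): ("severe", "severe")}

def best_reply_A(b_move):
    # A's payoff-maximizing move against a fixed move of B
    return max("CI", key=lambda a: PAYOFF[GAME[(a, b_move)][0]])

def best_reply_B(a_move):
    # B's payoff-maximizing move against a fixed move of A
    return max("CI", key=lambda b: PAYOFF[GAME[(a_move, b)][1]])

print([best_reply_A(b) for b in "CI"])  # ['I', 'I']: implicating dominates for A
print([best_reply_B(a) for a in "CI"])  # ['I', 'I']: implicating dominates for B
# The mutual best reply ("I", "I") gives both players "severe", although
# ("C", "C") would give both the better outcome "moderate".
\end{verbatim}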
As table \ref{TablePrisoner} shows, A maximizes his payoff by choosing the lower row (implicating B) no matter what player B does, and B maximizes his payoff by choosing the right column (implicating A) no matter what player A does. They reach an equilibrium with \textit{severe} sentences, while if each maximizes \textit{the other player's payoff} both get \textit{moderate} sentences. \begin{table}[hbt] \center \begin{tabular}{|p{6.4cm} p{6.4cm}|}\hline \textbf{Payoff for A} & \textbf{Payoff for B}\\ \begin{tabular}{|c|c|c|}\hline & B {\tiny confesses} & B {\tiny implicates A}\\ \hline A {\tiny confesses} & {\small moderate} & {\small lifetime}\\ \hline A {\tiny implicates B} & {\small free} & {\small severe} \\ \hline \end{tabular} & \begin{tabular}{|c|c|c|}\hline & B {\tiny confesses} & B {\tiny implicates A}\\ \hline A {\tiny confesses} & {\small moderate} & {\small free} \\ \hline A {\tiny implicates B} & {\small lifetime} & {\small severe}\\ \hline \end{tabular}\\ \hline \end{tabular} \caption{Qualitative Payoffs in Prisoner's Dilemma}\label{TablePrisoner} \end{table} In situations involving interacting agents like those described by games, overall payoff optimization can give a different result than unilateral optimization. In the Prisoner's Dilemma, for example, an overall optimization of payoff would lead us to the \textit{both confess} situation, with a moderate sentence for both, while unilateral optimization leads us to the \textit{both implicate one another} situation, where both get severe sentences. \subsection{MODELS OF BEHAVIOUR} Game theory has also proved to be a very useful tool to devise models of learning and, in general, models of behaviour for interacting individuals within a population \cite{Population2}. But this requires a careful choice of the payoffs to be assigned to the players according to each state of affairs. The concept of a payoff function has been applied successfully both as an objective tendency and as a subjective preference. Repeated versions of 2x2 games like the Prisoner's Dilemma are now extensively used to model the evolution of behaviour within populations \cite{Behaviour}. In some works, it is assumed that the perceived payoff function varies with time when the game is played repeatedly, and sometimes it arrives at a stationary state where a very stable equilibrium is established. This is, for example, an explanation proposed by Castillo and Salazar for the stubborn persistence of the guerrilla conflict in Colombia \cite{WarGame}. There are also interesting approaches to a ``rational'' (in a game-theoretical sense) definition of altruistic, aggressive, and other kinds of behaviour, as learning schemes used along a game \cite{Population2}, or even as subjective perceptions of the payoff \cite{Subjective}. This last approach \textit{requires a complete knowledge of the relation between the payoff matrix and the relevant features in the game}, because the characteristics of a game can change dramatically when certain thresholds are reached in the variation of the payoff matrix. It can then be necessary to have a certain \textbf{cartographic knowledge of the space of payoff matrices}, to predict the qualitative changes of behaviour arising from changes in the perceived payoff matrix. \section{THE APPEAL OF QUANTUM INFORMATION} All we have said so far is about the theory of (classical) games. But what about \textit{quantum} games? In quantum mechanics, the description of the state of the system is said to be incomplete \cite{EPR}, because the values of some observables are sharply determined, and the values of some others are not. 
In that sense, we can think of a quantum state as \textit{less informative} than a classical one. This is a consequence of the existence of incompatible observables: \begin{definition} Two observables are \textbf{incompatible} when measuring one implies losing information about the other. \end{definition} This leads us to a question: what, then, is the advantage of quantum information? There are two ways in which the laws of quantum mechanics can be used advantageously for information processing: \begin{enumerate} \item Coherent Superposition \item Entanglement \end{enumerate} \begin{definition} \textbf{COHERENT SUPERPOSITION:} When an observable does not have a definite value in a state, the system behaves as if there were several superposed states with definite values of that observable. Such a state is said to be a \textbf{coherent superposition} of states. \end{definition} The states in the superposition evolve independently, but when the observable is measured, all disappear except one, thus destroying the information about \textit{other} observables in the system. A superposition is represented in the following way: \begin{equation} \psi_{superposition}= c_1\phi_1 + c_2\phi_2 + \ldots + c_n\phi_n \end{equation} where the coefficients $c_i$ are complex numbers, and $\phi_i$ are the states for which the value of the observable $\hat{O}$ is $o_i$. The probability that, when measuring, we get the value $o_i$ is the squared magnitude of the corresponding coefficient \begin{equation} Probability(o_i\text{ in state }\psi) = |c_i|^2. \end{equation} This feature of quantum states led Richard P. Feynman \cite{FeynmanRules} and, independently, Paul Benioff to suggest in 1982 that it could be used to perform parallel computation \cite{BeginingQuantComp}. If such a superposition is processed in a certain way, then every component is processed \textit{independently and simultaneously}. This could be used, they suggested, to accelerate the processing of data. There is still another potential use of coherent superposition. If a measurement on a quantum system destroys information about the state, why not encode a secret message in that way, a message that is destroyed when not properly read? That question may well have been the beginning of \textbf{quantum cryptography} \cite{BeginingQuantComp}. \begin{definition} \textbf{ENTANGLEMENT:} An \textbf{entangled} state is a state of several quantum subsystems where no observable of the individual subsystems is sharply determinate, but some global (nonlocal) observables are. \end{definition} Now suppose we prepare two quantum systems in an entangled state, and take them to places that are far apart from each other. The systems will still share an entangled state. The result of measuring an observable on either of them is unpredictable, because its value is not determinate. But once we measure an observable on one, the other is \textbf{instantly} put in a state where a certain local observable is also determinate. \subsection{INFORMATION AND THE QUANTUM WORLD} There are not only pragmatic motivations to study quantum information, but also philosophical ones. The understanding of how a system \textit{can be described} plays an outstanding role in the ontology\footnote{The set of things that are supposed to exist.} of a theory. And information is now part of the ontology of physics. 
When speaking of information in physics, it is customary to cite the visionary statement of Rolf Landauer: \begin{quote} ``Information is physical''\\ \flushright{Rolf Landauer, cited in \cite{RLandauer}} \end{quote} Information was given a precise mathematical meaning by Shannon in 1948 \cite{TheoryInf}, and since that work physics has used it extensively in statistical mechanics and related fields. This put information in line for the general conceptual scrutiny that quantum mechanics has brought to other physical concepts. One of the first and most important remarks on the problematic character of the quantum description of the world was made in 1935 by Einstein, Podolsky and Rosen. They showed that the new theory included essentially problematic non-local aspects \cite{EPR}, which suggest new ways of thinking about the \textit{information} concepts regarding our knowledge of a physical system \cite{BellPhysics}. \subsubsection{THE EPR PARADOX} A quantum state will have a determinate value for some observables, but not for some others that are incompatible with the former. Let us call $\psi_i$ a state that has a determinate value $a_i$ for the observable $A$. Now suppose $B$ is an observable that is incompatible with $A$, and $\phi_j$ is a state where it gives the value $b_j$. Now suppose we have two quantum systems, both in the state $\psi_i$. The total state is $\psi_i\psi_i$. But if they interact, a new state is formed, and it can be an \textbf{entangled} state. Suppose that the interaction leaves the entire system in the entangled state $\psi_{ent}= c_1\psi_1\psi_2 + c_2\psi_2\psi_1$. According to quantum mechanics, if observable $A$ is measured now on subsystem 1 and the result is $a_1$, then the result of measuring $A$ on subsystem 2 is instantly determinate (it is $a_2$), no matter how far apart the two subsystems are. However, there is no certainty of getting $a_1$ when measuring $A$ on system 1, because individual observables are not sharply defined. Once we have measured $a_1$ on system 1, we know that the result of measuring $A$ on system 2 would be $a_2$. There is then no point in measuring $A$; let us measure $B$ on system 2 instead. We get, for example, $b_1$. But alas! It looks like we fooled nature! We know that the value of $A$ for subsystem 2 is $a_2$, and the value of $B$ for subsystem 2 is $b_1$, \textbf{even though they are incompatible}! That is the EPR paradox. The bad news is that if we want to check $A$ on system 2, just to be sure, it is perfectly possible that the result \textbf{is not} $a_2$ as we expected, because \textit{when we measured B on system 2, we destroyed the information about observable A in that subsystem.} Nature is not so easy to fool. The strange properties of entanglement were indeed used to test the very fitness of Quantum Theory itself experimentally, with favourable results \cite{AspectExperiment}. \subsection{QUANTUM ENTANGLEMENT AND ITS USES} John Stewart Bell showed in 1964 that systems in entangled states exhibit correlations beyond those explainable by local ``hidden'' properties\footnote{Hidden properties are properties that are determinate in the system, but cannot be measured.} \cite{BellInequalities}. Suppose now that we have a black cube, a white cube, a black ball and a white ball to be divided randomly between Alice and Bob: one cube and one ball to each, handed to them inside closed bags. 
But only one object can be taken out of each bag (the hole is too narrow), so each of them must decide whether to look at the cube or at the ball (they can touch the things inside the bag to decide). So far, this is analogous to an entangled state. But now, let us consider the case when Alice saw a black ball and Bob saw a black cube. Everything is now determinate: a further measurement by Alice (on her cube) should give a ``white'' result, and a further measurement by Bob (on his ball) should also give a ``white'' result. But the quantum case is not like that. In the quantum case, Alice's hidden cube must not only remain hidden when Alice sees the ball, but must be able to \textbf{change its colour} with a probability of $\frac{1}{2}$ when she does! If the objects were quantum, we might well end up with two black balls and two black cubes in the considered case. Once a local measurement is done, the quantum correlations (unlike the classical correlations) must disappear. This correlating power of entanglement can be used to speed up communication, but not to send a signal faster than light. This is called \textbf{superdense coding} \cite{SDcoding1}, because several bits are encoded in one particle. Suppose Alice and Bob have particles sharing the following entangled state: \begin{equation} \frac{1}{\sqrt{2}}\left(\phi_0\phi_0 + \phi_1\phi_1\right). \end{equation} Alice applies a transformation to her particle, according to the message she wants to transmit, as in table \ref{SDcoding}: \begin{table}[hbt] \center \begin{tabular}{|c|l|r|}\hline Message & Operation & Final state \\\hline 00 & she does nothing & $\phi_0\phi_0 + \phi_1\phi_1$ \\ 01 & she turns $\phi_0$ into $\phi_1$ and vice versa & $\phi_0\phi_1 + \phi_1\phi_0$\\ 10 & she turns $\phi_0$ into $-\phi_1$ and $\phi_1$ into $\phi_0$ & $\phi_0\phi_1 - \phi_1\phi_0$ \\ 11 & she turns $\phi_0$ into $-\phi_0$ & $\phi_0\phi_0 - \phi_1\phi_1$ \\\hline \end{tabular} \caption{Operations for superdense coding}\label{SDcoding} \end{table} Then Alice sends her particle to Bob, and he measures on the whole two-particle system an observable whose value gives him the message\footnote{This is possible because the final states given in table \ref{SDcoding} are completely distinguishable.}. Thus, Alice transmitted \textbf{two bits} by sending \textbf{one} two-state particle. This has been tested experimentally using photons, with partially successful results \cite{SDcoding2}. \section{THE FIRST QUANTUM GAME} In 1999 a version of the ``penny flipover'' game appeared in a major physics journal \cite{QuantumPenny}. This was the first quantum game that attracted the attention of the physics community. This game was presented as played by Captain Picard of the starship Enterprise and a quantum player \textit{Q}. Q gives Captain Picard a penny, head up, in a closed box, and acts on it first; then Picard decides whether to turn it over or not and gives the box back to Q, who finally decides whether to turn it over or not. Picard wins if the penny is tail up after Q's second turn. \begin{figure} \caption{\small Penny Flipover Game} \end{figure} The quantum feature of this game is that Q can perform a ``half turn'' on the penny, leaving it at first in a quantum state which is a superposition of head up and tail up, and recovering the initial state with the opposite turn at the end, whatever P has done. With this advantage, there is no way Picard can win. 
The state generated by Q can be represented as a quantum superposition of the two classical possibilities: \begin{equation} \vert STATE \rangle = \frac{1}{\sqrt{2}}\left(\vert HEAD-UP \rangle \pm \vert TAIL-UP \rangle \right). \end{equation} A classical turnover turns head-up into tail-up, and tail-up into head-up. Its effect on the state generated by Q is nothing for the symmetric superposition (the one with a plus sign), and a global change of sign for the antisymmetric superposition (the one with a minus sign): {\small \begin{multline} \hat{\boxed{turnover}}\vert STATE \rangle =\\ \frac{1}{\sqrt{2}}\left(\vert TAIL-UP \rangle \pm \vert HEAD-UP \rangle \right)\\ = \pm \vert STATE \rangle. \end{multline}} This overall sign is irrelevant when measuring. \subsection{QUANTUM COINS} This first quantum game deals with a ``quantum coin'', that is, a coin that can be put in stable enough quantum states. This quantum coin can be thought of as a two-level quantum system, or \textit{qubit}. A qubit is a quantum extension of the classical concept of the bit, or, in game theory, of a binary choice. There are a number of physical realizations of quantum coins. The conditions necessary to get a good quantum coin can be grouped into two categories: \begin{enumerate} \item Only two states can be involved in the manipulations. Normally quantum systems have a very large number of states, but if we want to use the system as a coin, it must show only heads, tails, or superpositions of both. Transitions to other states must be avoided. \item The quantum state of the system, whatever it is, must be well defined. Interaction with large systems such as the environment causes a loss of determinacy in the quantum state, which is called \textit{decoherence} \cite{Decoherence}. That is probably the reason why we have no ``quantum casino'' so far. \end{enumerate} Each category of conditions encompasses several technical requirements, which are different for each experimental realization. Some of these realizations are \cite{ExpQuantComp}: \begin{itemize} \item \textbf{Quantum wells :} An electron restricted to move within a small region of semiconducting material can have definite energy states, just like one bound to a nucleus. The technology to limit its transitions to only some of the lower energy states is now developed enough to produce reliable microscopic quantum coins. These coins can be designed with outstanding freedom, to suit the needs of a particular use. \item \textbf{Light states :} Laser light is a collective quantum state of photons (called a coherent state) where the number of photons is not sharply determinate, but its average value can be decreased to nearly one. Then we would have states that are superpositions of, for example, the zero-photon and one-photon states, and these can act as good quantum coins. \item \textbf{Oscillators :} Currents in microcircuits and vibrating nuclei in molecules are fairly good harmonic oscillators. These systems, when small enough, have clearly defined discrete energy states. These states have nearly equal energy differences, and it is possible to control the transitions between them using lasers or modulated electromagnetic fields. If we are able to limit the system to two of these states (``heads'' and ``tails'') or two sets of states, we will have a quantum coin. \item \textbf{Atoms in cavities :} The one-electron states in an atom can be very efficiently controlled when the atom is in an optical cavity\footnote{A cavity with mirrored walls.}. 
The states of this system can be controlled with laser light to high accuracy, and designed with great freedom to fit a wide range of applications. This is in fact one of the most developed fields in experimental quantum computation. \item \textbf{Spinning nuclei: } Atomic nuclei have a property that only makes sense within the framework of quantum mechanics: spin. This is a vector property, and its components are mutually incompatible observables. The values that these components can take are naturally limited to a few multiples of a basic quantity, so it is not necessary to limit transitions to other states. The evolution of these states is driven by modulated magnetic fields, which can be controlled with precision. The drawback of this realization of quantum coins is that a great degree of control at a very small length scale is needed. This particular realization was indeed used by Du et al.\ to actually play a quantum game \cite{ExpQG}. \end{itemize} \section{MULTIPARTY CONTROL AND THE EISERT SCHEME} The game between Picard and Q discussed above would turn rather dull if Captain Picard were allowed to perform quantum operations as well. In fact, the ``both quantum'' version of the game is as unsolvable as the classical game, in the sense that there is no winning strategy. But some months after Meyer's paper, another proposal for quantum games appeared, in which the game remains very interesting when both players play on equal terms. This scheme was proposed by Jens Eisert, of Potsdam University, and is named after him \cite{Eisert}. \subsection{EISERT'S IDEA} It was noticed by John Bell \cite{BellInequalities} that two persons sharing entangled qubits have a correlation resource beyond any classical possibility, and, even though they are not able to communicate faster than light, they can make use of that resource to act in a coordinated way \cite{QuantumCoordination}. Coordination can make a great difference in some two-person games like the Prisoner's Dilemma and the Chicken game \cite{Miracle}. Why not try to play these games using entangled qubits? This was probably the motivation of Eisert. His scheme (explained later in detail) is obtained from the classical game by changing some elements: \begin{enumerate} \item ENTANGLEMENT\\ A referee takes a composite system and prepares an entangled state. The subsystems have as many levels as there are choices in the classical game \item QUANTUM STRATEGIES\\ The players are allowed to perform quantum operations on the entangled subsystems. These operations are the quantum analogue of the classical strategies \item PAYOFF FUNCTION\\ States of the composite system are defined as eigenstates of the payoff operator.\par There must be a ``classical subset'' in each player's strategy set, and players can always choose strategies within this subset that are equivalent to probabilistic strategies in the classical game. Otherwise, we would not have a quantum version of a classical game, but rather a different game. \end{enumerate} The formulation of this game is, by itself, very interesting. There is, for example, \textit{only one way} to entangle the subsystems for 2 players with 2 choices that is consistent with the objectives pursued. The space of local operations on a 2-level quantum system becomes here the strategy space; it is a key concept in the study of quantum computation, and here we study it from a new point of view, that of the maximization of an observable. 
The description of this scheme for quantum games, given in section \ref{EisertScheme}, is therefore the natural next step in this thesis. \subsection{THE SIMPLEST GAMES} The obvious way to approach the study of quantum games is to choose first the simplest possible games that can be interesting. At the current stage, quantum game theory is concerned mainly with the study of 2x2 games, that is, games with two players who (in the classical game) have two possible choices. The quantum games are then 2-qubit games. The most studied game is perhaps the Prisoner's Dilemma, a game where both players are completely equivalent (a symmetric game). The same occurs for the Chicken game, which is perhaps the second most studied game \cite{Miracle}. \section{THE OBJECTIVES OF THIS WORK} The current research on 2x2 classical games shows that in most cases a \textit{complete knowledge of the relation between the payoff matrix and the main characteristics of the game} is a necessary point of departure. The payoff of a game can vary within certain limits without changing the properties of the game, but outside these limits new features can appear, and old features can disappear. In some methodologies the perception of the payoff matrix, or even the payoff matrix itself, is subject to variations \cite{Behaviour},\cite{Subjective}, thus increasing the importance of the qualitative changes generated by continuous variations of the payoffs. There are then a number of situations where it is necessary to have a good cartography of the space of possible payoff matrices. And we can expect that this is also the case in the field of quantum games. In this respect, Rapoport developed a classification of 2x2 games with strictly ordered payoffs\footnote{A strictly ordered payoff matrix is one whose elements have relations of strict order in each column and each row.}, and found 78 classes of them \cite{Rapoport0},\cite{Rapoport1}. If one restricts attention to symmetric 2x2 games, the number of classes is only 9 \cite{Robinson}. It can be advisable, as a first step, to study the games of this simpler classification (that of symmetric games) in their quantum versions. The main purpose of this work can be formulated in the following way: \begin{center} \textbf{Find how quantization affects the classification of symmetric nonzero-sum 2x2 games} \end{center} There are many ways to classify games. In this work two criteria are chosen: \begin{enumerate} \item Number and nature of Nash Equilibria \item Number and nature of Pareto Optimal positions \end{enumerate} \chapter{CLASSICAL GAMES, QUANTUM GAMES} Before starting to develop the work, it can be useful to introduce and define some concepts of Game Theory (both classical and quantum) that are going to be used in the following chapters. \section{CLASSICAL GAMES} A classical game is defined by the following elements \cite{TheoryGamesEB}: \begin{enumerate} \item Two or more agents (players) \item A set of choices for each player at each point of the game. (In this work, the only relevant games are those where each player chooses only once.) \item A payoff function that depends on the choices taken by all the players. \item A set of rules governing the influence of one player over the others, like information restrictions. \end{enumerate} The only element among these that must have some special properties is the payoff function. To be able to define an operative notion of player rationality, the values of the payoff function must belong to an \textit{ordered set}. 
That means that it must be possible to order all the possible values of the payoff in a consistent way, thus always allowing the player to choose a best result. The existence and characteristics of payoffs that can be considered as utilities is a rather cumbersome philosophical problem \cite{Rational}, which we can avoid by simply working with payoffs that are finite real numbers. According to the set of rules about the influences among the players, games can be divided into two classes: \begin{definition} \textbf{Games with Imperfect Information : }In these games the players do not know the choices made by the others. \end{definition} This is the kind of game we will study. \begin{definition} \textbf{Games with Perfect Information : }In these games all players know what the others have decided at all the other steps of the game \end{definition} \subsection{EXTENSIVE FORM}\label{ExtensiveForm} A popular way to represent games (especially complex, multi-stage games) is the extensive form. It consists of a tree where each inner node is a point where an agent chooses, and a payoff is assigned to each terminal node. \begin{definition} The \textbf{extensive form of a game} is a tree representation, where each decision of each player is represented by branching nodes, and a payoff is assigned to every terminal branch. \end{definition} Let us see an example taken from Jim Ratliff's Game Theory Course \cite{GTCourse}. It is called ``Cholesterol: friend or foe''. Suppose that there are two companies planning to launch a new low-cholesterol product, but each can choose not to. But after the launching date, a medical association releases a study about cholesterol. This study can give two results: \begin{itemize} \item Cholesterol is injurious to health. Good news for the companies. \item Cholesterol is health promoting. Bad news for the companies. (But good news for gourmets) \end{itemize} This result will certainly affect the payoffs of the companies, but nature (the agent that determines whether cholesterol is injurious or not) is not affected by the result of the game, and will not maximize anything. Nature simply will choose ``unhealthy'' with a probability $p$ or ``healthy'' with a probability $1-p$.\\ \begin{figure} \caption{Game Tree for ``Cholesterol: friend or foe''} \end{figure} This way of representing games is appropriate to include some rules about available information (example: a curve can enclose the set of nodes that a certain player has information about, defining an ``information set'' of the player at a certain step) \cite{InformationGames}. \subsection{STRATEGIC FORM}\label{StrategicForm} Most games are much more complicated than the one described in the last section. The game tree for that game is already somewhat big, and trees can be too big to be useful for games having more steps, more players, or both. Besides, there are a number of different trees that correspond to the very same game (for complex games, a huge number of them), and it is desirable to remove as much as possible of that multiplicity of representations. A simpler representation would be very useful in those cases. It was proved by von Neumann and Morgenstern that, for games where each player is ignorant of what the others do, there is a simpler representation that is completely equivalent. It is called the \textbf{strategic form} \cite{TheoryGamesEB}. 
To generate the strategic-form representation of a game, we first define the concept of \textbf{strategy}: \begin{definition} A \textbf{strategy} is a certain set of decisions taken by a player all along the game. \end{definition} A strategy, then, describes the entire course of action of a player in the game. With this concept, we can define the strategic-form representation: \begin{definition} The \textbf{strategic-form} representation of a game is a table where a certain payoff for each player is assigned to each possible set of players' strategies. \end{definition} To show an example of this representation, let us define a ($n_1 \times n_2 \times n_3 \times ... \times n_N$) game: \begin{definition} \textbf{A (}$\mathbf{n_1 \times n_2 \times n_3 \times ... \times n_N}$\textbf{) game} is an N-player game where player 1 can use $n_1$ strategies, player 2 can use $n_2$ strategies, and, in general, player $i$ has $n_i$ strategies (where $i$ is an integer between 1 and N) \end{definition} The strategic representation of such a game can be made with N hypermatrices, each with N indexes and dimensions $n_1, n_2, n_3, ..., n_N$. The games to be studied here are 2x2 games. For these games the strategic representation can be made with two 2x2 matrices, one for each player, where the payoffs are tabulated, just as in table \ref{TablePrisoner}. \subsubsection{TWO-PLAYER GAMES} It is necessary at this point to make some definitions about two-player games that will be important later. The first concept is that of \textbf{zero-sum} games. This kind of game is not studied in this work, but such games are tangentially important, because some studies about them are important precedents of this one. \begin{definition} A \textbf{zero-sum game} is a game where the payoffs of both players sum up to zero for any position. \end{definition} In these games, what one player wins, the other player loses. The reason why they are not as studied in their quantum versions is that in these games coordination between players is not important, and new possibilities of coordination are one of the main reasons to study quantum games. The other concept is that of \textbf{symmetric} games. \begin{definition} A \textbf{symmetric game} is one where all the players are exactly in the same conditions, in terms of payoffs, number of choices, and information rules. \end{definition} This concept is important, of course, because this is the kind of game we have chosen for this study. The reason why this kind of game was chosen is simplicity. The symmetry imposes certain conditions on the payoff matrices (or poly-hypermatrices for games with more players), which lower the number of free parameters and make it easier to study the variations of the defining payoffs. This will be noticed in chapter 5, where a complete classification of the 2x2 symmetric games is made that relies heavily on the symmetry condition. A similar study of the non-symmetric games could be made with the same methodology, but the results would be much more complex. \subsection{REDUCTION OF THE GAMES} Even though the strategic representation is much simpler than the extensive representation, it can sometimes still be too complex (or rather, too big). It can be necessary to simplify it further to make it operational. An example of this simplification (reduction) of the strategic form can be performed on the three-player game ``Cholesterol: friend or foe''. The payoffs can be tabulated in three hypermatrices with three indexes, one for Nature, another for Afoods, and another for Bfoods. 
To present the payoffs in matrices, we can consider the two choices of nature separately: \begin{table}[hbt] \center \textbf{CHOLESTEROL IS HEALTHY} \begin{tabular}{|p{4.2cm} p{4.2cm} p{4.2cm}|}\hline \textbf{Nature} &\textbf{Afoods} & \textbf{Bfoods} \\ \begin{tabular}{|c|c|c|}\hline & BL & BW\\ \hline AL & X & X \\ \hline AW & X & X \\ \hline \end{tabular} & \begin{tabular}{|c|c|c|}\hline &BL & BW\\ \hline AL & -9 & -3 \\ \hline AW & -1 & 1 \\ \hline \end{tabular} & \begin{tabular}{|c|c|c|}\hline & BL & BW\\ \hline AL & -9 & -1 \\ \hline AW & -3 & 1 \\ \hline \end{tabular}\\\hline \end{tabular} \textbf{CHOLESTEROL IS UNHEALTHY} \begin{tabular}{|p{4cm} p{4cm} p{4cm}|}\hline \textbf{Nature} &\textbf{Afoods} & \textbf{Bfoods} \\ \begin{tabular}{|c|c|c|}\hline &BL & BW\\ \hline AL & Y & Y \\ \hline AW & Y & Y \\ \hline \end{tabular} & \begin{tabular}{|c|c|c|}\hline & BL & BW\\ \hline AL & 9 & 15 \\ \hline AW & 5 & 7 \\ \hline \end{tabular} & \begin{tabular}{|c|c|c|}\hline & BL & BW\\ \hline AL & 9 & 5 \\ \hline AW & 15 & 7 \\ \hline \end{tabular}\\\hline \end{tabular} \caption{Strategic ``Cholesterol: friend or foe''}\label{TableCFF} \end{table} In table \ref{TableCFF} we called the strategies of launching the product AL and BL respectively, and the strategies of waiting AW and BW. The variables X and Y are random variables which allow us to model the behaviour of Nature in this game. With probability $p$, $Y>X$ and nature will select ``not healthy'', and with probability $1-p$, $X>Y$ and nature will select ``healthy''. The information we have about nature allows us to reduce the strategic form of the game. According to their statistical knowledge of nature, players should work with \textbf{average payoffs} to decide their actions. This puts nature out of the picture, and allows us to convert the game into a 2x2 game, with a parameter $p$ in the payoff: \begin{table}[hbt] \center \begin{tabular}{|p{6.4cm} p{6.4cm}|}\hline \textbf{Payoff for A} & \textbf{Payoff for B}\\ \begin{tabular}{|c|c|c|}\hline & B launches & B waits \\ \hline A launches & 9(2p-1) & 18p-3 \\ \hline A waits & 6p-1 & 6p + 1 \\ \hline \end{tabular} & \begin{tabular}{|c|c|c|}\hline & B launches & B waits \\ \hline A launches & 9(2p-1) & 6p-1 \\ \hline A waits & 18p-3 & 6p + 1 \\ \hline \end{tabular}\\ \hline \end{tabular} \caption{Reduced ``Cholesterol: friend or foe''}\label{ReducedCFF} \end{table} \subsection{OPTIMAL RESPONSES AND NASH EQUILIBRIUM}\label{OptimalNash} The strategic representation of games makes it very easy to forecast what a rational player would do if he knew the strategies of all the others. He will probably react with an optimal response. \begin{definition}\label{DefOptResp} \textbf{Optimal response} is the strategy that gives the player the highest payoff if the other players' strategies are known and fixed. \end{definition} Each player, of course, will not know the strategies of the others in the games that we are studying. But he can assume that they are rational too, so the other players will also want to play optimal strategies. That means that all players will tend to be at a certain position\footnote{We will call \textbf{position} a certain choice of strategies for all players.} where the strategies are mutually optimal. That is called a Nash Equilibrium. \begin{definition}\label{DefNashEq} \textbf{Nash Equilibrium} is a position where all the strategies are mutually optimal responses. That means that no unilateral change of strategy will give a higher payoff to the corresponding player. 
\end{definition} \subsection{PARETO OPTIMAL STRATEGIES} In the Prisoner's Dilemma (section \ref{Multiagent}), we saw that a better result can sometimes be obtained in two-person games if we maximize \textit{the other player's payoff}. Thus we can define a concept related to the Nash Equilibrium, called the Pareto Optimal position \cite{Pareto}: \begin{definition}\label{DefParetoOp} A position is \textbf{Pareto Optimal} if the only way each player has to increase her payoff is by decreasing someone else's payoff. \end{definition} This concept is an interesting approach to include the advantages of a limited sympathy towards the other players. \section{QUANTUM GAMES} \subsection{INTRODUCTION}\label{EisertScheme} The puzzling properties of \textit{entanglement} are currently intensely studied with a number of motivations, and their applicability in communication technology is certainly among the most popular. Entangled quantum systems behave in a somewhat coordinated way, and even though we cannot fool relativity using entanglement, it has been proved possible to enhance communication rates considerably with this resource \cite{QuantumCommunication}. But what about coordination through entanglement? Even if there is no way to communicate, it can be possible for two people to act in a coordinated way if they share entanglement. A natural arena in which to check this possible coordination is \textbf{game theory}. (An example can be found in \cite{QuantumCoordination}.) The most popular scheme for quantum games is perhaps the \textbf{Eisert scheme} \cite{Eisert}. A quantum game in this scheme is played in the following way: \begin{itemize} \item A referee produces an entangled state of N \textit{qunits} (a \textit{qunit} is an n-level quantum system), applying a nonlocal unitary transformation to an initial known separable state \item Each player gets one of the entangled qunits \item Each player acts on her qunit in any way she wants \item The qunits are returned to the referee \item The referee applies the inverse nonlocal transformation, so that if none of the players did anything, the resulting state is the initial known state \item The referee makes a measurement on each of the qunits, and gives a payoff to each player. \begin{figure} \caption{The Eisert Protocol for a Quantum Game} \end{figure} \end{itemize} Let us discuss some of the concepts involved in the quantum game. \subsection{THE INITIAL STATE} This state must be known by all the players, at least to some level of accuracy, because of the rationality condition \cite{LinearUtility}. This condition requires the players to have some criterion for choosing a strategy, which cannot exist without knowledge of the initial state. This state must be separable (at least to some extent), because otherwise the players would have uncertainty in the \textit{local} state of their subsystem. In the ideal case (total certainty) the initial state is given by the state vector $\vert 00 \rangle$, or by the density operator: \begin{equation} \rho_{ini} = (\vert 0 \rangle \langle 0 \vert)\otimes(\vert 0 \rangle \langle 0 \vert). \end{equation} \subsection{THE STRATEGIES OF THE PLAYERS} In the Eisert scheme, each player is given an initial state, and the strategies of a quantum player correspond to quantum operations to be performed on that state. Whenever the game is played, the players get one of the payoff values tabulated in the payoff matrix; which value they get is, in general, random. 
To apply some kind of rationality in these games, it is necessary to work with the \textbf{expected payoff} rather than the actually earned payoff, as is done in games involving random moves \cite{TheoryGamesEB}. We will make a distinction, however subtle, between \textbf{quantum operations} and \textbf{quantum strategies}, the former being physical operations carried out on a physical system (a qubit) and the latter being a particular choice of parameters that leads to some expected payoff. \subsection{A CLASSIFICATION OF QUANTUM STRATEGIES} In classical games, there are only two kinds of strategies: pure strategies (those which define the space of positions) and mixed strategies, defined as probability distributions over the pure ones. The set of quantum operations on a qubit, on the other hand, is much larger. A general quantum operation can be defined by means of the $\chi$ matrix representation \cite{Chi}, which consists of: \begin{equation}\label{EstratChi} \rho_{transformed}=\hat{\hat{O}}\rho = \sum_{i,j} (E_i)^\dagger \rho E_j \chi_{i,j}. \end{equation} where $\hat{\hat{O}}$ is the superoperator describing the quantum operation, $\rho$ is a density matrix, $\{E_i\}$ is a complete basis for the Hermitian operators in the Hilbert-Schmidt space, and $\chi$ is a Hermitian matrix with the dimension of the space. The parameters in the $\chi$ matrix define a quantum strategy. A physically meaningful quantum operation fulfills the positiveness condition \cite{Completeness}: \begin{equation}\label{Completeness} \sum_{i,j} \chi_{i,j} (E_i)^\dagger E_j \leqslant Identity. \end{equation} The number of parameters necessary to describe an operation on an n-level system is $n^4-n^2$. For a qubit, this number is 12 (the space of quantum strategies is 12-dimensional!). From a game-theoretical point of view, we may use a determinacy criterion to classify quantum strategies (operations), as we did in classical games to distinguish pure and mixed strategies. Therefore we can define the following kinds of strategies: \begin{enumerate} \item DETERMINISTIC STRATEGIES\\ These are strategies that ensure a known payoff, without randomness. They consist simply of acting on the density matrix in the following way: \begin{equation}\label{Deterministic} \rho_{final} = \rho_{initial} \hspace{16pt}or\hspace{16pt} \rho_{final} = \sigma_x \rho_{initial} \sigma_x. \end{equation} The reason for using $\sigma_x$ as a deterministic strategy is merely conventional. \item WEAKLY DETERMINISTIC STRATEGIES\\ These are strategies that are deterministic in the totally nonentangled and totally entangled games, but can be otherwise in games with intermediate entanglement. They consist simply of acting on the density matrix in the following way: \begin{equation} \rho_{final} = \sigma_y \rho_{initial}\sigma_y \hspace{16pt}or\hspace{16pt} \rho_{final} = \sigma_z \rho_{initial} \sigma_z. \end{equation} In what follows, we will refer to both deterministic and weakly deterministic strategies as \textbf{semideterministic strategies}. \item NONDETERMINISTIC STRATEGIES\\ These are strategies that involve randomness in the payoff for any entanglement condition.\\ Among these we can still distinguish two cases: \begin{enumerate} \item UNITARY STRATEGIES\\ In these strategies the state of the entangled pair of qubits remains pure, that is, there is no coherence loss. That means that there is no randomness in the \textit{quantum state} of the entangled pair of qubits. 
The source of the randomness in the payoff is, in these cases, the indeterminacy of the payoff in the quantum state of the system. They consist simply of acting on the density matrix in the following way: \begin{equation} \rho_{final} = U^\dagger \rho_{initial}U. \end{equation} The form of the $\chi$ matrix corresponding to a unitary strategy is: \begin{equation}\label{ChiUnitary} \chi = \begin{pmatrix}\sqrt{1-\vec{a}\centerdot\vec{a}} \\ -ia_z \\ -ia_y \\ -ia_x\end{pmatrix} \begin{pmatrix}\sqrt{1-\vec{a}\centerdot\vec{a}} & ia_z & ia_y & ia_x\end{pmatrix}. \end{equation} \item MIXED UNITARY STRATEGIES\\ These consist of a number of unitary strategies used at random with a probability distribution: \begin{equation} \rho_{final} = \sum_j \lambda_j\left((U_j)^\dagger \rho_{qubit} U_j\right) \end{equation} where each $U_j$ is a unitary operator (generated by some Hermitian operator $G_j$, like $\hat{r}\centerdot\vec{\sigma}$ for $\hat{r}$ real and unitary), $0\leqslant \lambda_j \leqslant 1 $ and $\sum_j \lambda_j = 1$. Unitary interaction with other relatively small systems in indeterminate states is a one-step physical realization of this kind of strategy. An important characteristic of this kind of quantum operation is that if the initial density operator is the identity, it remains the identity after applying the operation. The operations that behave in that way are called \textit{unital}. For single qubits, \textbf{all unital operations can be represented by mixtures of unitaries} \cite{UnitalQubit}. Another important characteristic is that these operations \textit{preserve entanglement}. In the Eisert scheme, the consequence is that the final state (the one on which the payoff is measured) is unentangled.\\ This means that if both players act in this way on their entangled qubits, the referee ``unentangles'' them with the inverse of the unitary he used to entangle them before they were transformed: \begin{multline} J \left(\sum_{j,k}\lambda_j \lambda_k (U_j\otimes U_k)^\dagger \left[J^\dagger (\rho_A\otimes\rho_B)J\right] (U_j\otimes U_k)\right) J^\dagger =\\ = \sum_{j,k}\left(\lambda_j \lambda_k(\tilde{\rho}_A)_j \otimes (\tilde{\rho}_B)_k \right) =\\ = \sum_l \left(\tilde{\lambda}_l(\tilde{\rho}_A)_l \otimes (\tilde{\rho}_B)_l\right) \end{multline} where J is the entangling operation. The final density matrix is separable, but \textbf{classically correlated}, thus allowing coordination. \item INFORMATION WITHDRAWING STRATEGIES\\ Non-unital operations can be regarded as information withdrawing strategies, because it is not possible to recover the initial state from the knowledge of the operation and the final state.\\ They are associated with a measurement, and consist simply of acting on the density matrix in the following way: \begin{equation} \rho_{final} = \sum_j (F_j)^\dagger \rho_{initial} F_j. \end{equation} where the $F_j$ are operators such that $\sum_j (F_j)^\dagger F_j \leqslant Identity$.\\ These kinds of operations, even though local in a sense, \textit{do not preserve entanglement}. That means that the final state, the one on which the payoff is going to be measured, is entangled. The payoff operator is separable (even when, in general, payoffs are classically correlated), and the entanglement in the state to be measured becomes an \textbf{additional source of randomness}. 
Since $\chi$ is a Hermitian matrix, which is, besides, positive if the basis operators are positive, it can be decomposed in terms of projectors: \begin{equation}\label{ChiDecomposed} \chi_A = \sum_k \lambda_k \begin{pmatrix}\sqrt{1-(\vec{a}_k)^*\centerdot\vec{a}_k} \\ ((a_k)_z)^* \\ ((a_k)_y)^* \\ ((a_k)_x)^*\end{pmatrix} \begin{pmatrix}\sqrt{1-(\vec{a}_k)^*\centerdot\vec{a}_k} & (a_k)_z & (a_k)_y & (a_k)_x\end{pmatrix} \end{equation} where the $\lambda_k$ are real numbers from 0 to 1 and $\sum_k \lambda_k \leqslant 1$. This expression is very similar to that of a unital map (a convex sum of terms like (\ref{ChiUnitary})), except that the $x,y,z$ components of the vectors $a_k$ are not necessarily purely imaginary. This expansion of the general map will be useful later. \end{enumerate} \end{enumerate} \subsection{THE PAYOFF FUNCTION} In quantum games the payoff function is an observable, and it is, of course, represented by an operator. The payoff matrix gives a definite payoff for the positions defined by deterministic strategies (\ref{Deterministic}). Each of the 4 numbers of the 2x2 payoff matrix is then an eigenvalue of the payoff operator. The eigenvectors are simply direct products of the initial state $\vert 0 \rangle$ transformed by the deterministic operation of A, and the one transformed by the deterministic operation of B. If the payoff matrix for the classical game is {\small$\begin{pmatrix}a & b\\c&d\end{pmatrix}$} then the payoff operator is: \begin{equation} \hat{\$} = a\vert 00 \rangle \langle 00 \vert + b\vert 01 \rangle \langle 01 \vert + c\vert 10 \rangle \langle 10 \vert + d\vert 11 \rangle \langle 11 \vert. \end{equation} The expected payoff will then have the following form: \begin{equation}\label{ChiPayoff} \langle \$ \rangle = \sum_{i,j,k,l}(\chi_A)_{i,j} (\chi_B)_{k,l} Tr \left(J\left((E_i\otimes E_k)^\dagger J^\dagger (\vert 00 \rangle \langle 00 \vert)J(E_j\otimes E_l)\right)J^\dagger \ \hat{\$} \right). \end{equation} The trace in the expression is a bitensor of rank 4 and dimension 12, which we can call $P^{i,j,k,l}$. The payoff will then be a scalar: \begin{equation} \langle \$ \rangle = \sum_{i,j,k,l}(\chi_A)_{i,j} (\chi_B)_{k,l} P^{i,j,k,l}. \end{equation} \subsection{THE CLASSICAL GAME WITHIN THE QUANTUM GAME} One of the guidelines in the quantization of games is that the classical game must be embedded in the quantum game \cite{QGTheory}. Therefore, there must be a classical strategy subset within the total quantum strategy set. In the Eisert scheme, the classical strategies are those constructed with generators commuting with the entangling unitary transformation \cite{Eisert}. As a convention, the chosen entangling unitary is: \begin{equation}\label{entangling} J = \sqrt{1-\gamma^2}\,\mathbb{I}\otimes\mathbb{I} + i\gamma(\sigma_x \otimes \sigma_x) \end{equation} where $\mathbb{I}$ is the 2x2 identity and $\gamma$ is a real entanglement parameter. Any operation $\hat{\hat{O}}$ defined with $E_i=\alpha\mathbb{I} + \beta\sigma_x$ according to equation \ref{EstratChi} will commute with the entangling unitary $J$, and will therefore be a \textit{classical strategy}. Restricting the elements of the quantum operations to the identity and the flip Pauli matrix $\sigma_x$, a classical strategy is represented by a 2x2 Hermitian matrix. 
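This commutation property is easy to check numerically. The following short Python sketch (an illustration only; the values of $\gamma$, $\alpha$ and $\beta$ are arbitrary) builds the entangling unitary of equation (\ref{entangling}) and verifies that a local operator of the form $\alpha\mathbb{I}+\beta\sigma_x$ commutes with it:
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

gamma = 0.4                                   # any entanglement parameter in [0, 1]
J = np.sqrt(1 - gamma**2) * np.kron(I2, I2) + 1j * gamma * np.kron(sx, sx)

# A "classical" local operator for player A: alpha*I + beta*sigma_x
alpha, beta = 0.3 - 0.7j, 1.2 + 0.1j
E = alpha * I2 + beta * sx
E_A = np.kron(E, I2)                          # acting only on A's qubit

# The commutator [E_A, J] vanishes, so E generates a classical strategy
print(np.allclose(E_A @ J - J @ E_A, 0))      # -> True
\end{verbatim}
The same check fails for local operators containing $\sigma_y$ or $\sigma_z$ components, which is what makes those strategies genuinely quantum in this scheme.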
The payoff tensor computed in this subspace is very simple: \begin{multline} P^{i,j,k,l} = \frac{\delta_{ij}\delta_{kl}}{4}\Big((a+b+c+d) + (-1)^{i}(a+b-c-d) \\ + (-1)^{k}(a-b+c-d) + (-1)^{i+k}(a-b-c+d) \Big) \end{multline} where the indexes $i,j,k,l$ take the value 0 for the identity and 1 for $\sigma_x$. The expected payoff is simply: \begin{equation} \langle \$ \rangle = a(\chi_A)_{00}(\chi_B)_{00} + b(\chi_A)_{00}(\chi_B)_{11} + c(\chi_A)_{11}(\chi_B)_{00}+d(\chi_A)_{11}(\chi_B)_{11}. \end{equation} The condition of completeness (\ref{Completeness}) forces the relation $\chi_{00}+\chi_{11} = 1$ for both players, and we can see that the expected value of the payoff is exactly equivalent to the expected value for mixed strategies in the classical game: \begin{equation} \langle \$_{classical} \rangle = (1-P_A)(1-P_B)a + (1-P_A)P_Bb + P_A(1-P_B)c + P_AP_Bd. \end{equation} It must be noted that the nondiagonal elements of the $\chi$ matrix in the classical subspace are irrelevant for the expected payoff. \subsection{THE SPACE OF PLAYER'S PARAMETERS} We have seen that the strategy of each player can be represented by a Hermitian matrix fulfilling the completeness condition, and that not all the elements of the matrix are relevant to the expected payoff ($\chi_{0X}$ and $\chi_{X0}$ are not relevant). The parameters are 24 (12 from each player), but the payoff is only one number. That means that we can find 23 functions of the parameters given by the players that are not relevant for the payoff (we can call them redundant parameters). The two irrelevant parameters mentioned are not independent: they are related by the completeness relation to each other and to other elements (if all the elements outside the classical subspace are zero, the relation is $\chi_{0X}=-\chi_{X0}$). The hermiticity of the matrix forces each to be the conjugate of the other, so they must be plus and minus a purely imaginary number.\\ We can only reduce the space of parameters if some of these redundant functions depend only on one player's parameters. In that case, some unilateral variations would not affect the payoff. \chapter{WHAT WE KNOW SO FAR} This work is restricted to quantum games in the Eisert scheme. There are interesting proposals outside this scheme, but these are not discussed here. However, a brief description is presented in table \ref{OtherGames}: \begin{table}[htb] \center \begin{tabular}{|p{10cm}|c|}\hline {\centering \textbf{Description}} & \textbf{Reference} \\\hline ``\textbf{Quantum solutions for coordination problems}'' and ``\textbf{Correlated Equilibria of Classical Strategic Games with Quantum Signals}'' The use of quantum communication (communication through entanglement) is explored for players playing quantum games & \cite{QuantumCoordination} and \cite{QCorrEquilibr}\\ \hline ``\textbf{Probability amplitude in quantum like games}'' Classical games that emulate quantum situations are explored & \cite{QuantumLike}\\ \hline ``\textbf{Generalized quantum games with Nash Equilibrium}'' Generalized quantum games are defined relaxing some features of the Eisert scheme, like locality of the initial state, to give the probability amplitudes a central role. They call these games ``fully-quantum game theory''. & \cite{GeneralizedQG}\\ \hline ``\textbf{Quantum Games of asymmetric information}'' Asymmetric information is introduced in the Eisert Scheme. 
& \cite{Asymmetric}\\\hline ``\textbf{Continuous-variable Quantum Games}'' & \cite{QContinuousG}\\ \hline \end{tabular} \caption{Other quantum games} \center \label{OtherGames} \end{table} However, the quantum versions of symmetric 2x2 nonzero-sum games are in some cases complex enough to constitute a broad field of research, which is far from exhausted. The first important results to discuss are a quantum version of the Nash Theorem, and some criticisms about the very ``quantumness'' of quantum games. \section{NASH EQUILIBRIA AND PARETO OPTIMAL POSITIONS} A \textbf{Nash Equilibrium} is a position in which the payoff of each player can only remain equal or decrease under unilateral changes of strategy. The Nash Theorem states that \textbf{every finite game has at least one equilibrium point} \cite{NashTheorem}. This Nash equilibrium, however, can correspond to a \textbf{mixed strategy}, a strategy that consists of playing one pure strategy with a certain probability, and playing another with another probability. For symmetric 2x2 games a stronger result holds: every symmetric 2x2 game has at least one \textbf{pure strategy Nash Equilibrium} \cite{NashSym},\cite{Samaritan}. The quantum version of the Nash theorem was proved by Lee and Johnson \cite{QGTheory} for quantum games in the Eisert scheme (quantum games defined in other schemes can fail to fulfill this theorem, for example those studied in \cite{QmatrixStrat}). The Pareto Optimal positions are positions where \textbf{each player can increase his own payoff only by decreasing the other player's payoff} \cite{Pareto}. These have not been studied as extensively as a central feature of a game, like the Nash Equilibria, but rather as an additional stability test. A Nash Equilibrium that is at the same time Pareto Optimal is more stable than one that is not. \subsection{EQUIVALENT CLASSICAL GAMES?} How ``quantum'' are quantum games? van Enk and Pike show in \cite{ClassicalRules} that some quantum games are equivalent to a classical game, in particular Quantum Games with a restricted set of strategies. They criticize the assumption that the quantum version of a game is a quantum version of \textit{that} game, even when it can have the characteristics of \textit{another} classical game. \section{THE PRISONER'S DILEMMA} The preferred example for studying quantum games is the Prisoner's Dilemma. The most exhaustive study of this game can be found in \cite{Eisert}. A popular quantitative assignment of payoffs for this game (in the convention $\begin{pmatrix}a & b\\c&d\end{pmatrix}$ used above) is: \begin{equation} \begin{pmatrix} 3 & 0 \\ 5 & 1 \end{pmatrix} \end{equation} \subsection{ONE-PARAMETER SUBSPACE} The first study done was the exploration of a one-parameter subspace of the strategy space, that of the unitary transformations generated by $\sigma_y$. In this subspace, the Nash Equilibrium in the completely entangled game is the same as in the classical one (position (1,1), payoff 1 for both players). \subsection{TWO-PARAMETER SUBSPACE} The second step was the study of the following parametrization of a subset of SU(2): \begin{equation}\label{TwoParam} U(\theta,\phi)= \begin{pmatrix} \cos(\theta/2)e^{i\phi} & \sin(\theta/2)\\ -\sin(\theta/2) & \cos(\theta/2)e^{-i\phi} \end{pmatrix} \end{equation} A new equilibrium was found in this subspace at $\theta=0$, $\phi=\pi/2$, with the same payoff as the Pareto Optimal position of the classical game, which is 3 for both players. This is the best possible payoff for any symmetrical position. 
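As a check on these statements, here is a minimal numerical sketch (an illustration only, not the computation of \cite{Eisert}) of the Eisert protocol for the Prisoner's Dilemma. It uses the entangling unitary of equation (\ref{entangling}) at maximal entanglement ($\gamma=1/\sqrt{2}$), the payoff matrix $(a,b,c,d)=(3,0,5,1)$, the classical strategies $C=\mathbb{I}$ and $D=\sigma_x$, and the quantum strategy $Q=U(0,\pi/2)=i\sigma_z$ from equation (\ref{TwoParam}):
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Entangling unitary J = sqrt(1-g^2) I + i g (sx kron sx), maximal at g = 1/sqrt(2)
g = 1 / np.sqrt(2)
J = np.sqrt(1 - g**2) * np.kron(I2, I2) + 1j * g * np.kron(sx, sx)

# Prisoner's Dilemma payoffs (a, b, c, d) = (3, 0, 5, 1); 0 = cooperate, 1 = defect
a, b, c, d = 3, 0, 5, 1
basis = np.eye(4)                     # columns are |00>, |01>, |10>, |11>
payoff_A = np.diag([a, b, c, d]).astype(complex)
payoff_B = np.diag([a, c, b, d]).astype(complex)

def expected_payoffs(UA, UB):
    """Eisert protocol: psi = J^dagger (UA kron UB) J |00>, then measure payoffs."""
    psi = J.conj().T @ np.kron(UA, UB) @ J @ basis[:, 0]
    pA = np.real(psi.conj() @ payoff_A @ psi)
    pB = np.real(psi.conj() @ payoff_B @ psi)
    return pA, pB

# Strategies: C = do nothing, D = classical flip (sigma_x), Q = U(0, pi/2) = i sigma_z
C, D, Q = I2.astype(complex), sx, 1j * sz

for name, (UA, UB) in {"(C,C)": (C, C), "(C,D)": (C, D),
                       "(D,D)": (D, D), "(Q,Q)": (Q, Q)}.items():
    print(name, expected_payoffs(UA, UB))
\end{verbatim}
Up to rounding, it prints payoffs (3,3) for (C,C), (0,5) for (C,D), (1,1) for (D,D) and (3,3) for (Q,Q): the classical positions keep their classical payoffs, while the quantum pair (Q,Q) reaches the efficient payoff of 3 for both players mentioned in the text.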
\subsection{THREE PARAMETER SUBSPACE} In the whole unitary group SU(2) it was found that the perfect Nash Equilibrium found in the previous subspace loses its character, and the Nash equilibrium is again the (1,1) position, the same as in the classical game. \subsection{THE GENERAL CASE} In the general case there appears, once again, an efficient Nash Equilibrium, which can be written as a mixed unitary strategy that is different for each player (a nonsymmetric mixed strategy). For player A: \begin{equation} U_1= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\hspace{16pt}\lambda_1=\frac{1}{2}\hspace{1.5cm} U_2= \begin{pmatrix} -i & 0 \\ 0 & i \end{pmatrix}\hspace{16pt}\lambda_2=\frac{1}{2} \end{equation} For player B: \begin{equation} U_1= \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\hspace{16pt}\lambda_1=\frac{1}{2}\hspace{1.5cm} U_2= \begin{pmatrix} 0 & -i \\ -i & 0 \end{pmatrix}\hspace{16pt}\lambda_2=\frac{1}{2} \end{equation} In the total strategy space, the only extra operation is a measurement, which, according to Nielsen \cite{Equivalence}, is equivalent to a mixture of unitaries. \section{THE CHICKEN GAME} The Chicken Game (also known as Hawk-Dove) is another 2x2 symmetric game which is popular among quantum game theoreticians. This game exhibits a dilemma of a different kind, based on the nonsymmetric character of its Nash Equilibria. This produces, however, a rather stable mixed Nash Equilibrium. In this game it is interesting to find out which quantum strategy can mimic the mixed classical strategy. A quantitative payoff matrix for this game is: \begin{equation} \begin{pmatrix} 6 & 2 \\ 8 & 0 \end{pmatrix} \end{equation} \subsection{ONE-PARAMETER SUBSPACE} Again, in the one-parameter subspace the game has the same Nash Equilibria in its classical version and in its quantum version (the classical Nash equilibria are the positions (0,1) and (1,0), with payoffs 8 and 2 for the two players). \subsection{TWO-PARAMETER SUBSPACE} Using the same parametrization (\ref{TwoParam}), a unique diagonal Nash Equilibrium at $\theta=0,\phi=\pi/2$ with a payoff of 6 is found. \subsection{THE WHOLE SPACE} In the whole space the focal point\footnote{A \textbf{focal point} is a Nash Equilibrium which is stable under unilateral departures from rationality} turns out to be the same as the mixed strategy in the classical game, with a payoff of 4 for both players. \section{OTHER GAMES} A number of nonsymmetric games, and even zero-sum games, have been studied. Here are some references to them, even though they are not going to be discussed here. \begin{enumerate} \item The Battle of the Sexes: some common properties of this game and Chicken are described in \cite{QBattleSexes}. This can be a result of the fact that a relabeling of the strategies of the Battle of the Sexes gives a game that is very similar to Chicken. \item Samaritan Dilemma and Welfare Game: this non-symmetric game has no pure strategy Nash Equilibrium, but it is found that in the completely entangled case several equilibria arise, converting the game into a coordination game \cite{Samaritan}. \end{enumerate} \section{TRANSITIONS OF ENTANGLEMENT REGIME} Eisert's work, and others, focus on studying completely entangled games. But, as we have seen, in the Eisert scheme there is the possibility of \textit{modulating} the entanglement. It is then interesting to look for a \textit{critical amount of entanglement} in a game, at which its qualitative properties change. This was studied in \cite{QTransition}, where Du et al. 
found precisely this for games like the Prisoner's Dilemma. They studied games with an ordered payoff matrix \begin{equation} \begin{pmatrix}r & s \\ t & p\end{pmatrix} \end{equation} where $t>r>p>s$ (an ordered payoff matrix means there is an order relation between the entries, like this one). Using the entangling unitary \begin{equation} J = \sqrt{1-\gamma^2}\mathbb{I}\otimes\mathbb{I} + i\gamma \sigma_x\otimes\sigma_x \end{equation} where $\gamma$ is a real parameter between 0 and 1, they found two transitions in the entanglement regime: \begin{align} \vert 1-2\gamma^2 \vert = \sqrt{\frac{p-s}{t-s}}\\ \vert 1-2\gamma^2 \vert = \sqrt{\frac{t-s}{r-s}} \end{align} \begin{figure} \caption{Variation of Nash Equilibrium Payoff with Entanglement (reproduced from \cite{QTransition})} \end{figure} All this was computed, however, for the two-parameter subspace mentioned above. For the total space, they find that there is only one transition (the first) and that after it there is only one symmetric mixed Nash Equilibrium, whose payoff does not change (essentially the same result found by Eisert). All these studies have taken payoff matrices from games that were known to be interesting in their classical version. But what has \textit{not} been studied suggests directions for further work: {\large \begin{enumerate} \item How to choose a payoff matrix to get a game that is interesting in its \textbf{quantum version}? \item How to avoid the choice of rather arbitrary parameters to define a set of quantum strategies? (A more general classification of quantum strategies is perhaps needed.) \item If there are non-unitary strategies, why do unitary strategies seem to show all the important features we can expect? \end{enumerate}} In later chapters we will try to make some contributions toward answering these questions. \chapter[CLASSIFICATION OF CLASSICAL GAMES]{CLASSIFICATION OF NONZERO-SUM SYMMETRIC 2x2 GAMES} \section{INTRODUCTION} There is a large amount of research on dynamic models based on 2x2 symmetric nonzero-sum games \cite{Behaviour},\cite{Selection}, where a certain payoff, and sometimes some uncertainty about it \cite{Subjective}, is chosen to construct the model. The features of the static games defined by the possible payoff matrices are very important in these dynamic models \cite{Suitable}, but so far we lack a complete cartography of the space of possible payoff matrices to guide our selection for a certain model. It is then very important to know when a change in the assigned payoffs will lead to a different kind of game, and if so, to which kind. The aim of this work is to provide a guide to ``move'' in the space of possible 2x2 symmetric nonzero-sum games in the strategic form. \section{PAYOFF MATRIX FOR 2x2 GAMES} Among the nonzero-sum games, the simplest kind of game is probably the 2x2 symmetric game with perfect information, and the simplest way to represent it is the strategic form presented in \ref{StrategicForm}: \begin{equation} \Game = (\{A,B\},\{0,1\} \otimes \{0,1\},\$_{A}:\{0,1\}\otimes\{0,1\}\to \mathbb{R},\$_{B}:\{0,1\}\otimes\{0,1\}\to \mathbb{R}) \end{equation} where $\Game$ represents the game, $\{A,B\}$ is the set of players, $\{0,1\}$ is the space of strategies of either player, and the composite space $\{0,1\}\otimes\{0,1\}$ is the space of \textit{positions}. $\$_A$ and $\$_B$ are the payoff functions for the two players.
Each is a function from the space of positions to some ordered set, which is generally $\mathbb{Z}$, the set of integers. These payoff functions are sometimes referred to as \textit{payoff matrices}, taking the labels of the strategies as indexes \begin{equation} (\mathbb{P}_A)_{ij} = (\$_A(i,j)). \end{equation} In symmetric games, there is a symmetry condition on the payoffs which ensures equality of conditions for the two players \cite{Simetricos}. If we interchange players, both the indexes $i$, $j$ are interchanged, and the payoff matrices $\mathbb{P}_{A}$, $\mathbb{P}_{B}$ are interchanged. \begin{equation} (\mathbb{P}_{A})_{i,j}=(\mathbb{P}_{B})_{j,i}. \end{equation} \section[FEATURES OF THE PAYOFF MATRIX]{ESSENTIAL AND UNESSENTIAL FEATURES IN THE PAYOFF BIMATRIX}\label{ClasParameters} An acceptable payoff can be defined in any set with an order relation \cite{OrdinalUtility}, but it proves to be very useful to define it within a compact set. If we assign a numerical payoff to both players in every choice situation, we need 8 numbers. The condition of symmetry reduces them to 4. But as was stated in \cite{LinearInvariance} and \cite{LinearUtility}, the properties of the game can depend neither on the value of the payoff represented by the number 0 (additive invariance), nor on an overall factor in the scale of payoffs shared by both players (scale invariance). Therefore two parameters can be ruled out, and we are left with only \textbf{two numbers}. Let us represent the payoff matrices for A and B in the following way: \begin{equation} \label{Gclasica} \mathbb{P}_{A}= \begin{pmatrix}a & b \\ c & d \end{pmatrix}\hspace{1cm} \mathbb{P}_{B}= \begin{pmatrix}a & c \\ b & d \end{pmatrix}. \end{equation} The mean of these numbers is irrelevant (additive invariance \cite{LinearUtility}), so we can subtract it from the four parameters (that is, $a'=a-\frac{1}{4}(a+b+c+d)$) and the properties of the game are preserved. We can also use the scale invariance to get rid of the overall scale by considering $(a',b',c',d')$ as a vector in $\mathbb{R}^4$ and normalizing ($a'' = a'/\|(a',b',c',d')\|$, and similarly for the other entries). To make the two relevant parameters explicit, we choose an SO(4) transformation which takes $a''$ into $\frac{1}{2}(a''+b''+c''+d'')$, which we know is always zero, and get, for example, these new parameters: {\small \begin{equation}\label{transformation} (G_0,G_A,G_B,G_{AB})=(a,b,c,d) \frac{1}{2}\begin{pmatrix} 1 & 1 & 1 & 1\\ 1 & 1 & -1 & -1\\ 1 & -1 & 1 & -1\\ 1 & -1 & -1 & 1 \end{pmatrix}. \end{equation}} $G_{0}$ is always zero, while $G_{A}$ is the difference between a player's expected payoffs for choosing 0 and for choosing 1, when \textit{the other} player chooses 0 and 1 with equal probability. $G_{B}$ is the corresponding difference in \textit{the other player}'s expected payoff between a player choosing 0 and choosing 1, when strategies 0 and 1 are otherwise played with equal probability. $G_{AB}$ is the difference in average payoff between the coordinated positions (both players choosing the same strategy) and the anticoordinated ones. The parameter $G_{0}$ is always zero, so we can ignore it. The remaining parameters define a unit vector ($G_{A}$,$G_{B}$,$G_{AB}$), which allows us to devise a geometric representation of the space of games as the surface of a 3D sphere. \section{GEOMETRIC REPRESENTATION} The conditions for the existence of Nash equilibria and Pareto optimal positions are sets of inequalities involving these three parameters. These conditions cut the surface of the sphere along planes, so it is possible to view the different kinds of 2x2 nonzero-sum symmetric games as portions of the unit sphere.
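This normalization and change of coordinates is straightforward to put in code. The following is a minimal Python sketch (the helper name \texttt{g\_parameters} is ours) that maps a payoff matrix for player A onto the $(G_{A},G_{B},G_{AB})$ sphere exactly as described above:
\begin{verbatim}
import numpy as np

def g_parameters(payoff):
    # Map a 2x2 symmetric-game payoff matrix for player A to the (G_A, G_B, G_AB) sphere
    v = np.asarray(payoff, dtype=float).ravel()   # (a, b, c, d)
    v = v - v.mean()                              # additive invariance: subtract the mean
    norm = np.linalg.norm(v)
    if norm == 0:
        raise ValueError("the constant game has no representative on the sphere")
    v = v / norm                                  # scale invariance: normalize
    M = 0.5 * np.array([[1,  1,  1,  1],
                        [1,  1, -1, -1],
                        [1, -1,  1, -1],
                        [1, -1, -1,  1]])
    G0, GA, GB, GAB = v @ M                       # the transformation written above
    return GA, GB, GAB                            # G0 is zero by construction

# Example: the Prisoner's Dilemma matrix used later in the text
print(g_parameters([[3, 0], [5, 1]]))             # G_A < 0, G_B > 0, G_AB < 0
\end{verbatim}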
Every possible game of the considered type, defined by a payoff matrix, has a ``normalized'' representative on the surface of the sphere (except for the trivial one with the same payoff for every situation). \textbf{The fraction of the surface of the unit sphere enclosed by the planes corresponding to the conditions is exactly the fraction of the set of possible games which fulfills those conditions}. \section{NASH EQUILIBRIA (NE)} The definition of a Nash Equilibrium (NE from now on) is simple: if neither player can increase his payoff by departing individually from a certain position, then this position is a NE \cite{NashTheorem}. For example, the position $(i^*,j^*)$ is a NE iff: \begin{equation} \forall (k,l)\in \{positions\}\hspace{16pt} (\mathbb{P}_{A})_{i^*j^*} \geqslant (\mathbb{P}_{A})_{kj^*}\hspace{16pt}\hbox{and}\hspace{16pt} (\mathbb{P}_{B})_{i^*j^*} \geqslant (\mathbb{P}_{B})_{i^*l}. \end{equation} In a symmetric game, that means that we only compare elements in the same column of player A's payoff matrix, or in the same row of player B's payoff matrix. In terms of our payoff parameters, all these conditions have the form: \begin{equation}\label{NashConditions} \begin{aligned} (-1)^i(G_{A}+ (-1)^j G_{AB}) &\geqslant 0\\ (-1)^j(G_{A}+ (-1)^i G_{AB}) &\geqslant 0, \end{aligned} \end{equation} where the indexes $i$ and $j$ take values in the set of strategies $\{0,1\}$. A remarkable feature of these conditions is that they only involve the parameters $G_{A}$ and $G_{AB}$. They are completely insensitive to $G_{B}$, the average difference in payoff caused by a change in the \textit{other player}'s strategy. \subsection{Symmetric positions} For the symmetric positions ($i=j$) it is easily seen that both conditions are the same: \begin{equation} (-1)^i(G_{A}+(-1)^i G_{AB}) \geqslant 0. \end{equation} This condition will be fulfilled for half the games, for it defines a plane that cuts the sphere in two halves, both for $(1-i)(1-j)=1$ and for $ij=1$. The planes are orthogonal, and that allows us to infer that $\frac{1}{4}$ of the games will have only (0,0) as a Nash equilibrium, $\frac{1}{4}$ of the games will have only (1,1), and $\frac{1}{4}$ of the games will have both positions as Nash equilibria. \subsection{Nonsymmetric positions} For the nonsymmetric positions the two conditions are different. The explicit conditions are: \begin{equation}\begin{aligned} (-1)^iG_{A}-G_{AB} &\geqslant 0\\ (-1)^jG_{A}-G_{AB} &\geqslant 0. \end{aligned}\end{equation} These conditions, again, define two perpendicular planes. In fact, the planes defined are the same as in the conditions for the symmetric positions, but the conditions are fulfilled precisely in the zone excluded by the conditions for (0,0) and (1,1). A fraction of $\frac{1}{4}$ of the possible games will then have two nonsymmetric Nash equilibria, and none will have only one. \subsection{A plane representation} So far, the conditions do not involve the $G_{B}$ parameter at all, and therefore we can restrict ourselves to the $(G_{A},G_{AB})$ plane to consider NE alone. The NE conditions for 00 and 11 define two orthogonal planes perpendicular to the plane $G_{B}=0$. Each divides the sphere in two halves, and the halves so defined overlap in $\frac{1}{4}$ of the sphere. The NE conditions for 01 and 10 define the same two planes mentioned above. The region of the sphere they enclose is the quarter of the sphere where there is no symmetric NE.
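These inequalities are simple enough to be checked mechanically. The following small sketch (purely illustrative; the function name is ours) evaluates conditions \ref{NashConditions} for every position of a given game, using the explicit linear combinations of the payoff entries that follow from equation \ref{transformation}; normalization is irrelevant, since the conditions are homogeneous:
\begin{verbatim}
def nash_positions(GA, GAB):
    # Positions (i, j) satisfying both inequalities of the NE conditions
    return [(i, j) for i in (0, 1) for j in (0, 1)
            if (-1)**i * (GA + (-1)**j * GAB) >= 0
            and (-1)**j * (GA + (-1)**i * GAB) >= 0]

# Prisoner's Dilemma payoff matrix [[3, 0], [5, 1]], i.e. (a, b, c, d) = (3, 0, 5, 1)
a, b, c, d = 3, 0, 5, 1
GA  = (a + b - c - d) / 2
GAB = (a - b - c + d) / 2
print(nash_positions(GA, GAB))    # -> [(1, 1)], the usual mutual-defection equilibrium
\end{verbatim}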
Summing up, the fractions are: \begin{itemize} \item One symmetric NE: 1/2 \item Two symmetric NE: 1/4 \item Two nonsymmetric NE: 1/4 \end{itemize} \section{PARETO OPTIMA (PO)} A position is a Pareto optimum (PO from now on) when each player can improve his own payoff, but by doing so diminishes the other player's payoff \cite{Pareto}. One of the conditions for a certain position to be a PO is therefore that \textbf{it is not a NE}. The other, concerning the maximization of the other player's payoff, can be viewed as the NE condition on the \textit{transposed} payoff matrix. Transposing the payoff matrix amounts to interchanging $G_{A}$ and $G_{B}$, so we already have the conditions: \begin{equation}\label{ParetoConditions} \begin{aligned} (-1)^i(G_{B}+ (-1)^j G_{AB}) &\geqslant 0\\ (-1)^j(G_{B}+ (-1)^i G_{AB}) &\geqslant 0. \end{aligned} \end{equation} These conditions have the same form as conditions \ref{NashConditions}, but refer to a different subspace of parameters. \subsection{The complete geometric picture} The NE conditions generate planes defined by $G_{A}\pm G_{AB}=0$, and the PO conditions generate planes defined by $G_{B}\pm G_{AB}=0$. To achieve some symmetry in our partition of the sphere we need to cut the sphere with the planes $G_{A}\pm G_{B}=0$ as well. This involves comparing one diagonal element of the payoff matrix with the other diagonal element, and comparing one nondiagonal element with the other nondiagonal element. Having divided the sphere with these three pairs of orthogonal planes, we get 24 identical regions, which will allow us to compute easily the fraction of the sphere corresponding to a certain set of NE or PO conditions. Every region of the sphere characterized by NE and PO conditions will be composed of some of these 24 small curved ``triangles'', which we will call \textit{elementary regions} from now on. \begin{figure} \caption{Partition of the Sphere} \end{figure} We can also use another norm to choose the set of game representatives, which is going to give us a clearer picture. If we take the highest absolute value of the elements of the payoff matrix as the norm, we are projecting the space of possible games onto the \textbf{surface of the unit cube}. The Nash and Pareto conditions for this cube will be all the diagonal planes cutting the cube in two equal halves. Then it becomes apparent that all the areas are equal. In figure \ref{Cube} this partition of the cube is shown, together with the unfolded surface. In the central square $G_{AB}>0$, while in the square formed by the four extreme triangles of the unfolded cube $G_{AB}<0$. \begin{figure} \caption{Partition of the Cube} \label{Cube} \end{figure} The projection onto the cube works very well for strictly ordered payoffs, but we have to be cautious when using it with non-strictly ordered payoffs. If we impose a bound on the differences of payoffs within a matrix, we are restricting the parameter space to a cube. This makes no difference for the fraction of strictly ordered payoff games in a class, but for non-strictly ordered games the difference mentioned above is important. Non-strictly ordered games lie, in our representation, on the lines between triangles, and the fractions are given by the length of a straight line (for the cube) or by arc lengths (for the sphere). There is, then, a slight difference between the representations.
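The fractions quoted at the beginning of this section, and their PO analogues, can be verified numerically. Below is a small illustrative sketch (names are ours): since the inequalities are homogeneous in the G parameters, sampling them as independent Gaussians reproduces the fractions of the sphere; the same function applied to $(G_{B},G_{AB})$ gives the PO positions, because the PO conditions are the NE conditions of the transposed matrix:
\begin{verbatim}
import numpy as np

def positions(Gx, GAB):
    # NE-type inequalities: Gx = G_A gives the Nash equilibria,
    # Gx = G_B (transposed matrix) gives the Pareto-optimal positions.
    return [(i, j) for i in (0, 1) for j in (0, 1)
            if (-1)**i * (Gx + (-1)**j * GAB) >= 0
            and (-1)**j * (Gx + (-1)**i * GAB) >= 0]

rng = np.random.default_rng(0)
N = 100_000
counts = {"one symmetric NE": 0, "two symmetric NE": 0, "two nonsymmetric NE": 0}
for GA, GAB in rng.normal(size=(N, 2)):
    sym = sum(1 for (i, j) in positions(GA, GAB) if i == j)
    if sym == 1:
        counts["one symmetric NE"] += 1
    elif sym == 2:
        counts["two symmetric NE"] += 1
    else:
        counts["two nonsymmetric NE"] += 1
print({k: round(v / N, 3) for k, v in counts.items()})   # about 0.5, 0.25, 0.25
\end{verbatim}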
\section{A classification of 2x2 nonzero-sum symmetric games} We can see the NE conditions, involving $G_{AB}$ and $G_{A}$, and the PO conditions, involving $G_{AB}$ and $G_{B}$, as classification criteria, meaning the following for a certain position: \begin{itemize} \item If the position is NE it means that each player gets a maximum payoff \textbf{for himself}. \item If the position is PO it means that each player produces a maximum payoff \textbf{for the other player}. \end{itemize} The importance of the first criterion is evident by itself, but the second becomes important in cooperative games \cite{Threats}, where players are allowed to interact before choosing, or in dynamic games, where in some cases players learn that it can be advantageous to behave ``generously'' \cite{BehaviouralStrategies}. Normally the PO definition includes the negation of the NE condition (the PO payoff must be susceptible of being raised by some player) \cite{Pareto}. Here, however, we are going to try a less restrictive approach, taking the conditions on the transpose of the payoff matrix as \textit{independent} from the conditions on the actual payoff matrix. Using these NE and PO conditions independently we can assign each region of the sphere a certain kind of game. The NE and PO planes cut the sphere into 12 regions of 3 different sizes. There are 8 triangular (elementary) regions covering $\frac{1}{24}$ of the sphere each, 4 bigger triangular regions covering $\frac{1}{12}$ of the sphere each (composed of two elementary regions), and 2 square regions covering $\frac{1}{6}$ of the sphere each, composed of 4 elementary regions. We should, however, take into account the arbitrariness of the labeling of strategies 0 and 1, and only distinguish diagonal (symmetric) positions, where both players play the same strategy, from nondiagonal (nonsymmetric) positions, where they don't. Then we would have 9 kinds of games. It can be wise, however, to discriminate whether a certain diagonal PO position (pure or mixed) has a higher or lower payoff than a certain NE position. This gives us 12 kinds of games. This is exactly twice the number obtained by Rapoport and others for classes of 2x2 symmetric games \cite{Rapoport},\cite{Robinson}. \subsection{ROBINSON ORDER GRAPHS}\label{RobinsonNash} To organize the results in a simple way, we will use a very powerful tool presented in \cite{Robinson}. This is a classification scheme for strictly ordered payoffs (where no two elements of the payoff matrix are equal) in which games are classified according to the ordering of the payoff elements. In the diagram, an XY plot of A's payoff vs. B's payoff is drawn, locating the four points corresponding to the four positions. The next step is connecting the points with arrows whenever a player can unilaterally shift from one to the other. The arrow points to the position where the payoff is higher for the involved player. If A's payoff is plotted on the Y axis, then A's arrows always point upwards and B's arrows always point to the right. Doing so, we get an \textbf{order graph} where the NE are points where arrows converge. To find the PO we draw arrows that point in the direction where the \textbf{other player}'s payoff is increased. We will call these arrows \textit{Pareto arrows}; the points fulfilling the Pareto condition (PC from now on) are the points where the Pareto arrows converge. Conversely, we will call the arrows pointing in the direction where the player's own payoff increases \textit{Nash arrows}, and the points where they converge fulfill the corresponding Nash condition (NC).
This can be seen in figure \ref{Robinson1} \\ \begin{figure} \caption{Robinson Diagram for the Prisoner's Dilemma} \label{ExampleRobinson} \label{Robinson1} \end{figure} If the payoff matrix $\begin{pmatrix}3 & 0 \\ 5 & 1 \end{pmatrix}$ is examined instead, the results are different. In figure \ref{ExampleRobinson} we see Nash arrows as solid arrows, and the empty ones are Pareto arrows. Nash Arrows are related to ordinality within one column, and Pareto Arrows to ordinality within one row. It is the same approach used elsewhere to find Nash equilibria and Pareto optima \cite{LineaColumna} A particular game of the kind we study is completely characterized with a simplified Robinson diagram, where we simply show the NE. From the NE we can infer the direction of Nash arrows, and we get the Pareto arrows changing the direction of those arrows with a negative slope. Two examples of this simplified Robinson diagrams is in figure \ref{Robinson3}. The simplified graphs in the figure allow us to distinguish the game with the payoff matrix $\begin{pmatrix}4 & 1 \\ 5 & 0 \end{pmatrix}$ from that with a transpose payoff matrix $\begin{pmatrix}4 & 5 \\ 1 & 0 \end{pmatrix}$. The position (4,4) corresponding to the NE is shown as an empty circle in the first, as well as positions (1,5) and (5,1) in the second. That allows us to infer the direction of the Nash arrows, all pointing at the NE. The Pareto arrows are inferred from the former in the following way: \begin{itemize} \item the arrows with a negative slope (those adjacent to the NE) must be reverted to be converted in Pareto arrows. \item The other Pareto arrows have the same direction as the correspondent Nash arrows. \end{itemize} In figure \ref{Robinson3} an example of this representation with Nash and Pareto arrows is presented. \begin{figure} \caption{Simplified Robinson Diagrams} \label{Robinson3} \end{figure} \section{A LIST OF 2x2 NONZERO-SUM SYMMETRIC GAMES} The game classes in this scheme are: \begin{enumerate}[1] \item One diagonal position is NE \begin{enumerate}[1] \item The same position is PO \hspace{12pt}{\tiny $\begin{pmatrix}4 & 2 \\ 3 & 1 \end{pmatrix}$}\hspace{5pt}\parbox{1cm}{\includegraphics{R4231.eps}} These games are in two opposed pairs of adjacent elementary triangles in the cube. The fraction is then {\large $\frac{1}{6}$} of the total. Games in this class have an essentially unproblematic solution, for there is only one equilibrium, and it is very stable. \item The \textbf{other} diagonal position is PO\\ These case also covers 4 triangles on the sphere. The fraction is then {\large $\frac{1}{6}$} of the total. Games in this class have an essentially unproblematic solution. Here we have two relevant situations: the ``generous'' PO can be more profitable than the NE itself, or the other way around. This can be checked by the sign of $G_{A}+G_{B}$ If it is positive, then the payoff of 00 is higher, if it is negative, then it is lower. \begin{itemize} \item NC payoff is higher than the PC payoff {\tiny $\begin{pmatrix}3 & 4 \\ 1 & 2 \end{pmatrix}$}\hspace{10pt} \parbox{1cm}{\includegraphics{R3412.eps}} This situation corresponds to one of the elementary regions conforming the rhomboidal regions: the outer one in the left graphic and the inner one in the right one. These are then {\large $\frac{1}{12}$} of all the games. An example called ``\textbf{Hershey's kisses game}'' is mentioned in \cite{Kisses}. Another is ``\textbf{Deadlock}'' in \cite{Miracle}. 
A game of this kind called ``exploitation of civil economy'' was proposed to model the decisions of scaling the conflict or negociating between the constitutional Armed Forces of Colombia and the Guerrilla groups \cite{WarGame}. \item NC payoff is lower than the PC payoff {\tiny $\begin{pmatrix}3 & 1 \\ 4 & 2 \end{pmatrix}$}\hspace{10pt} \parbox{1cm}{\includegraphics{R3142.eps}} This situation corresponds to the other 2 elementary regions in the two rhomboids: the inner one in the left graphic and the outer one in the right one. These also are {\large $\frac{1}{12}$} of all the games. In this class we find the famous ``\textbf{Prisoner's Dilemma}'' \cite{SymGames}. \end{itemize} \item The nondiagonal positions fulfills PC This kind of games have payoff matrices which are transpose of those of a ``\textbf{Chicken}-like'' game. Then this kind of game also encloses $\frac{1}{12}$ of all the games. If we compare the payoff of the NE with the mixed symmetric position in the middle of the PO positions, we can assign each triangle a different kind of game: \begin{itemize} \item NE payoff is higher than the mixed diagonal PO payoff{\tiny $\begin{pmatrix}5 & 1 \\ 4 & 3 \end{pmatrix}$}\hspace{10pt} \parbox{1cm}{\includegraphics{R4251.eps}} Here $G_{A}>G_{AB}$ . This is shown as the left triangle in the graphic ($\frac{1}{24}$ of all the games) \item NE payoff is lower than the mixed diagonal PO payoff{\tiny $\begin{pmatrix}5 & 1 \\ 3 & 2 \end{pmatrix}$}\hspace{10pt} \parbox{1cm}{\includegraphics{R3251.eps}} Here $(-G_{A})>G_{AB}$. This is shown as the right triangle in the graphic ($\frac{1}{24}$ of all the games) \end{itemize} \item Both diagonal positions fulfill PC This is the case in other two of the triangular regions (conformed by one elementary region each). This kind of games cover then $\frac{1}{12}$ of the sphere. \end{enumerate} \item Both diagonal positions fulfill NC \begin{enumerate}[1] \item One of the diagonal positions fulfill PC\\ This is the case in two of the triangular regions (conformed by one elementary region each).This kind of games cover then $\frac{1}{12}$ of the sphere. In this class we find the so called \textbf{assurance game} \cite{Strategy} But, again, we can distinguish two cases: \begin{itemize} \item The PC payoff is higher than the other diagonal payoff{\tiny $\begin{pmatrix}5 & 1 \\ 4 & 3 \end{pmatrix}$}\hspace{10pt} \parbox{1cm}{\includegraphics{R5143.eps}} These are $\frac{1}{24}$ of all the games (Upper triangle in the graphic) \item The PC payoff is lower than the other diagonal payoff{\tiny $\begin{pmatrix}5 & 1 \\ 4 & 2 \end{pmatrix}$}\hspace{10pt} \parbox{1cm}{\includegraphics{R5142.eps}} These are $\frac{1}{24}$ of all the games (Lower triangle in the graphic) \end{itemize} \item Both of the diagonal positions fulfill PC{\tiny $\begin{pmatrix}4 & 1 \\ 2 & 3 \end{pmatrix}$}\hspace{10pt} \parbox{1cm}{\includegraphics{R4123.eps}} This is the case in one of the square regions of the sphere (conformed by 4 elementary regions). So, these are $\frac{1}{6}$ of all the games. These can be problematic games, because there are two equilibria, both very stable. Half of them, of course, will give higher payoff to one of the equilibria, and half of them to the other. This can give the players a certain predilection. But in the line between the two halves (shown in the figure) there are two equilibria, and we have a real dilemma for the players. In this class we find the ``\textbf{Pareto Coordination}'' game \cite{Dictionary}. 
\end{enumerate} \item Nondiagonal positions fulfill NC \begin{enumerate}[1] \item One diagonal position fulfills PC\\ This is the case in two of the triangular regions ($\frac{1}{12}$ of all the games). We find in this class the famous ``\textbf{Chicken game}'' \cite{SymGames}, \cite{Strategy}, \cite{Dictionary}, also called ``\textbf{Stag-Hunt}'' \cite{Miracle}. We can, again, distinguish two possibilities: \begin{itemize} \item PO payoff is lower than the mixed diagonal NE payoff{\tiny $\begin{pmatrix}3 & 2 \\ 5 & 1 \end{pmatrix}$}\hspace{10pt} \parbox{1cm}{\includegraphics{R3251.eps}} This class encompasses $\frac{1}{24}$ of all the games (lower region in the graphic). \item PO payoff is higher than the mixed diagonal NE payoff{\tiny $\begin{pmatrix}4 & 2 \\ 5 & 1 \end{pmatrix}$}\hspace{10pt} \parbox{1cm}{\includegraphics{R4251.eps}} This class encompasses $\frac{1}{24}$ of all the games (upper region in the graphic). \end{itemize} \item Nondiagonal positions fulfill PC{\tiny $\begin{pmatrix}2 & 3 \\ 4 & 1 \end{pmatrix}$}\hspace{10pt} \parbox{1cm}{\includegraphics{R2341.eps}} This is the case in the other square region of the sphere ($\frac{1}{6}$ of all the games). These are also games with a nonproblematic solution, because there is a mixed symmetric NE which is extraordinarily stable. \end{enumerate} \end{enumerate} \section{CONCLUSIONS} The following figure presents a map of the different kinds of games according to the Robinson classification, projected on the unfolded cube: \begin{figure} \caption{Zones of the Payoff-Matrix-Space on the Unit Cube} \label{classesmap} \end{figure} \subsection{HOW TO READ THE MAP} Each triangle in the map corresponds to a plane triangle, and all the variations of the payoff matrix are linear within it. But the triangles can lie in one of the 8 planes defined by the signs of the three coordinates. In every triangle a given point corresponds to a convex combination of the payoff matrices written at the vertexes, weighted by their closeness. The payoff matrices are not written in the normalized form, but in an easier-to-use integer form with entries from 0 to 12, to make it easier to recognize the known games. Examples: \textbf{PRISONER'S DILEMMA} \begin{equation} \begin{pmatrix}3 & 0 \\ 5 & 1\end{pmatrix} = \frac{9}{6}\left(\frac{2}{9}\begin{pmatrix}0 & 0 \\ 6 & 0\end{pmatrix} + \frac{4}{9}\begin{pmatrix}3 & 0 \\ 3 & 0\end{pmatrix} + \frac{3}{9}\begin{pmatrix}2 & 0 \\ 2 & 2\end{pmatrix} \right) \end{equation} The overall factor $\frac{9}{6}$ converts the convex combination, which has a total payoff sum of 6, into a game with a total payoff sum of 9. The Prisoner's Dilemma can thus be found in the map with weight $\frac{2}{9}$ on the upper-left vertex of its triangle, weight $\frac{4}{9}$ on the upper-right vertex, and weight $\frac{3}{9}$ on the lower vertex. \textbf{CHOLESTEROL: FRIEND OR FOE} This game has a payoff matrix that lies between the ``cholesterol is healthy'' and the ``cholesterol is harmful'' cases. Each payoff matrix corresponds to a $(G_A,G_B,G_{AB})$ vector that is projected onto a different face of the cube (good cholesterol onto the $(-,-,+)$ face and bad cholesterol onto the $(+,-,-)$ face).
The decompositions are: \begin{equation} \begin{pmatrix}9 & 15 \\ 5 & 7\end{pmatrix} = \frac{16}{6}\left(\frac{3}{8}\begin{pmatrix}0 & 6 \\ 0 & 0\end{pmatrix} + \frac{3}{8}\begin{pmatrix}2 & 2 \\ 0 & 2\end{pmatrix} + \frac{2}{8}\begin{pmatrix}3 & 3 \\ 0 & 0\end{pmatrix} \right) + \begin{pmatrix}5 & 5 \\ 5 & 5\end{pmatrix} \end{equation} for the ``cholesterol is healthy'' case, and \begin{equation} \begin{pmatrix}-9 & -3 \\ -1 & 1\end{pmatrix} = \frac{24}{6}\left(\frac{2}{24}\begin{pmatrix}0 & 0 \\ 0 & 6\end{pmatrix} + \frac{4}{24}\begin{pmatrix}0 & 0 \\ 3 & 3\end{pmatrix} + \frac{18}{24}\begin{pmatrix}0 & 2 \\ 2 & 2\end{pmatrix}\right) + \begin{pmatrix}-9 & -9 \\ -9 & -9 \end{pmatrix} \end{equation} for the ``cholesterol is harmful'' case. Nature would choose a point lying \textbf{between} these two points in the space of payoff matrices. This line, when projected onto the unit cube we used as a map, gives a trajectory of two straight segments joined at an angle. To compute this trajectory, all we need to do is find the point where the projections arrive at the edge where the faces of the cube join. From the decomposition of the payoff matrix we can locate the extremal points of the game. The distances to the vertexes of the triangle are given by the inverse of the coefficients (the higher the coefficient, the nearer to the vertex). The trajectory between the two extremal payoff matrices of the game ``Cholesterol: friend or foe'' is illustrated in figure \ref{Trajectory}. \begin{figure} \caption{Position of ``Cholesterol: friend or foe'' for different p} \label{Trajectory} \end{figure} This map contains all the information we need about the relations between classes of games, for example: \begin{itemize} \item The probability of getting a certain class with a random payoff matrix is the number of cells occupied by this class divided by 24 (the total number of cells). \item By distorting a payoff matrix we can change the class of the game. The map shows us in which direction (in the G parameter space) a change from one class to another happens. \item The games with non-strictly-ordered payoff matrices are represented by the lines between classes, and can also be quantified by the length of the line (one example is the ``\textbf{Friend-or-Foe game}'', a non-ordered-payoff game lying between the Prisoner's Dilemma and the Chicken game \cite{FOF}).\\ Even though on the cube there seem to be two kinds of lines, long ones with unit length and short ones with length 1/$\sqrt{2}$, on the sphere all lines are 60 degrees long. \end{itemize} This map can be used as a starting point for the classification of the quantum games. The quantum game with zero entanglement is equivalent to the classical game, so this map is useful for quantum games in that limit. The questions that the following parts of this work must answer are: {\large \begin{itemize} \item Can two classes of games merge when entanglement is introduced? \item Do new classes appear in quantum games that have no classical analogues in this kind of game? \item Are some differences between classes emphasized or softened in the quantum games? \end{itemize}} \chapter[DETERMINISTIC STRATEGIES]{THE QUANTUM GAME WITH DETERMINISTIC STRATEGIES} In the Eisert scheme a ``strategy'' is an action the player performs on a quantum system, which can be entangled with the other player's system \cite{Eisert}. We are going to call \textit{semideterministic strategies} those that do not involve randomness either in the totally nonentangled or in the totally entangled case.
These strategies can be represented by the identity and the Pauli matrices multiplied by $i$. In what follows, we are going to call them $I$, $Z$, $Y$ and $X$: \begin{equation} I=\begin{pmatrix}1&0\\0&1\end{pmatrix}\hspace{20pt} Z=\begin{pmatrix}i&0\\0&-i\end{pmatrix}\hspace{20pt} Y=\begin{pmatrix}0&-1\\1&0\end{pmatrix}\hspace{20pt} X=\begin{pmatrix}0&i\\i&0\end{pmatrix}. \end{equation} They form a group with a product defined as the matrix product of the hermitian conjugate of the first element times the second: \begin{equation} P:(S_i,S_j)\mapsto S_k \hspace{1cm}P(S_i,S_j)=S_i^\dagger S_j. \end{equation} This group is isomorphic to the V group \cite{Vgroup}. The joint strategies of both players can be described as direct products of the strategies of each player, so the total group of strategies is indeed a product group. Some of the direct products of strategies will commute with the entangling unitary $J$, and thus will not be affected by its action, but others are mixed as if orthogonally transformed: \begin{equation} \begin{aligned} (I\otimes I),\hspace{16pt}(Z\otimes Z)&,\hspace{16pt}(Y\otimes Y),\hspace{16pt}(X\otimes X)\\ (I\otimes X),\hspace{16pt}(X\otimes I)&,\hspace{16pt}(Y\otimes Z), \hspace{16pt}(Z\otimes Y). \end{aligned} \end{equation} The other possible products will be transformed in the following way: \begin{align} J^\dagger \begin{pmatrix}Z\otimes I\\Y\otimes X\end{pmatrix} J &= \begin{pmatrix} \sqrt{1-E} & \sqrt{E} \\-\sqrt{E} & \sqrt{1-E}\end{pmatrix} \begin{pmatrix}Z\otimes I\\Y\otimes X\end{pmatrix}\\ J^\dagger \begin{pmatrix}I\otimes Z\\X\otimes Y\end{pmatrix} J &= \begin{pmatrix} \sqrt{1-E} & \sqrt{E} \\-\sqrt{E} & \sqrt{1-E}\end{pmatrix} \begin{pmatrix} I\otimes Z \\ X\otimes Y \end{pmatrix}\\ J^\dagger \begin{pmatrix}Z\otimes X\\Y\otimes I\end{pmatrix} J &= \begin{pmatrix} \sqrt{1-E} & \sqrt{E} \\-\sqrt{E} & \sqrt{1-E}\end{pmatrix} \begin{pmatrix}Z\otimes X\\Y\otimes I\end{pmatrix}\\ J^\dagger \begin{pmatrix}X\otimes Z\\I\otimes Y\end{pmatrix} J &= \begin{pmatrix} \sqrt{1-E} & \sqrt{E} \\-\sqrt{E} & \sqrt{1-E}\end{pmatrix} \begin{pmatrix} X\otimes Z \\ I\otimes Y \end{pmatrix}. \end{align} where we used the entangling transformation defined in equation \ref{entangling}. The parameter $E$ is defined by $\sqrt{E}=2\gamma \sqrt{1-\gamma^2}$. \section{THE 4x4 EQUIVALENT GAME} Let us consider a symmetric nonzero-sum 2x2 game. Without loss of generality we can describe it by a restricted payoff matrix with zero average payoff and normalized entries: \begin{equation}\begin{aligned} G_A=\begin{pmatrix} a & b \\ c & d \end{pmatrix}&\\ a+b+c+d=0 \hspace{2cm}& a^2+b^2+c^2+d^2=1. \end{aligned}\end{equation} The complete expression for the expected payoff of a player can be written as follows: \begin{equation} \langle G \rangle = Tr \left( J^\dagger(U_A\otimes U_B) J \Pi_{00} J^\dagger(U_A\otimes U_B)^\dagger J \hat{G} \right) \end{equation} where \begin{equation} \Pi_{00} = \begin{pmatrix}1&0\\0&0\end{pmatrix} \otimes \begin{pmatrix}1&0\\0&0\end{pmatrix}\hspace{20pt} \hat{G_A} = \begin{pmatrix}a&0&0&0\\0&b&0&0\\0&0&c&0\\0&0&0&d\end{pmatrix}.
\end{equation} If we construct an extended payoff matrix with the four strategies for each player, we get: \begin{equation}\label{ExtendedPayoff} \begin{matrix} & \begin{matrix} I_B & Z_B & Y_B & X_B \end{matrix}\\ \begin{matrix} I_A \\ Z_A \\ Y_A \\ X_A \end{matrix} & \begin{pmatrix}a&a'&b'&b\\a'&a&b&b'\\ c'&c&d&d' \\ c&c'&d'&d\end{pmatrix} \end{matrix} \end{equation} where \begin{equation}\begin{aligned} a' &= (1-E)a + E d \hspace{20pt} b' = (1-E)b + E c \\ c' &= (1-E)c + E b \hspace{20pt} d' = (1-E)d + E a. \end{aligned}\end{equation} We can expect that constructing 4x4 payoff matrices with the same parameters as the 2x2 payoff matrices plus one more (the entanglement parameter) will lead us to a somewhat symmetric result, and it is in fact so. The resulting payoff matrix has a remarkable symmetry: if we exchange strategies $I$ and $Z$ simultaneously for both players or if we exchange strategies $X$ and $Y$ simultaneously for both players, we get exactly the same payoff matrix. We can think of a two-stage classical game corresponding to this payoff matrix. Each player chooses to play ``classically'' or ``quantumly'', and if they choose opposite options, then there is a probability $E$ that the referee transposes both the columns and the rows of the payoff matrix for the next stage of the game, where they use the same strategies 0 and 1. We can regard the four strategies as being two choices each: \begin{enumerate} \item $I$: classical and 0 \item $Z$: quantum and 0 \item $Y$: quantum and 1 \item $X$: classical and 1 \end{enumerate} The tree representing the choices and payoffs in the classical game is shown in figure \ref{2x2tree}, together with a \textbf{completely transposed game} with payoff matrix: \begin{equation} G^* = \begin{pmatrix}d & c \\ b & a \end{pmatrix} \end{equation} \begin{figure} \caption{Extensive forms of a 2x2 game and its transposed version} \label{2x2tree} \end{figure} The semideterministic quantum game can be represented by a three-player game where one of the players is the entanglement-introducing referee (figure \ref{DetTree}). In the figure, Ac and Bc mean A and B choosing to play classical moves, and A0 and A1 choosing the classical or quantum version of moves 0 and 1. \begin{figure} \caption{Extensive form of a 2x2 deterministic quantum game} \label{DetTree} \end{figure} The deterministic game evolves as if a referee transposed the payoff matrix for the next stage of the game (with probability $E$) or left it unchanged (with probability $1-E$). Observe that our probabilistic knowledge of what the referee does is similar to the knowledge of the action of nature in the ``Cholesterol: friend or foe'' game described in section \ref{ExtensiveForm}. As in that game, we can reduce the game to a two-person game. \subsection{A PROBLEMATIC FEATURE} It must be noted that even though the Harsanyi procedure of including commitments in the game tree can remove the dilemma of choosing among equilibria, the extra choices introduced in the quantum game are indeed adding degeneracy to the game. This happens because in the new payoff matrix every value appears twice, including the Nash equilibria and Pareto optima, leaving the players with even more equilibria to choose from. This problem can be solved by introducing some extra elements into the game, like threats and commitments \cite{Commitments}. \subsection{ORDER OF PAYOFFS IN THE EXTENDED GAME} The extended game is a symmetric nonzero-sum 4x4 classical game, with 16 positions corresponding to all possible strategy choices of the players.
However, the payoff matrix has only 8 different values (each appearing twice), all of them depending on only three independent parameters: two from the classical game (section \ref{ClasParameters}) and the entanglement parameter $E$. Four of the eight values are convex combinations of the others with weights $E$ and $1-E$, so their order will be determined by the order of the elements of the classical payoff matrix and by the entanglement parameter. For example, if $a>b$ and $d>c$, then it is always true that $a'>b'$ and $d'>c'$. But not every ordering relation results in such certainty. For example, if $a>b$ and $c>d$, then the inequalities $a'>b'$ and $d'>c'$ may or may not hold, depending on $E$. This can be visualized in figure \ref{ordering}, where the vertical lines mark the values of $E$ and $1-E$. \begin{figure} \caption{Payoffs vs. Entanglement degree} \label{ordering} \end{figure} It is clear from the drawing that under certain conditions there are qualitative changes in the game when changing the entanglement parameter $E$, for there is a change in the ordering of the different values in the payoff matrix. This can make different entanglement regimes have different qualitative features. \section{AVAILABLE POSITIONS FOR THE PLAYERS}\label{Positions} To examine the consequences of the ordering in the extended payoff matrix we can use the Robinson order graphs, used in the previous chapter to classify the possible 2x2 classical games. In the quantum games with deterministic strategies, the eight different values mark eight nodes in the Robinson graph, corresponding to two positions each. Then any Nash equilibrium or Pareto optimal position corresponds to two positions of the payoff matrix. We will therefore always have an even number of Nash equilibria and Pareto optimal positions. \subsection{CONNECTIVITY} An important thing to be analyzed in the quantum game is the capability of each player to change the payoff from one possible value to another, given a certain strategy used by the other player. When we examine a node in the order graph corresponding to some position, the connections (arrows) tell us which positions are available by unilateral actions. In the quantum game we have more strategies than in the classical game, and there will be more possible payoffs to choose from (given a certain strategy chosen by the other player). Let us take another look at the payoff matrix: \begin{equation*} \begin{matrix} & \begin{matrix} I_B & Z_B & Y_B & X_B \end{matrix}\\ \begin{matrix} I_A \\ Z_A \\ Y_A \\ X_A \end{matrix} & \begin{pmatrix}a&a'&b'&b\\a'&a&b&b'\\ c'&c&d&d' \\ c&c'&d'&d\end{pmatrix} \end{matrix} \end{equation*} \begin{figure} \caption{Connections between payoff positions} \label{Connectivity} \end{figure} A given strategy of player A restricts us to a single row, and a given strategy of B restricts us to a single column. Then we can conclude: \textit{player B can choose payoffs within a row, and player A can choose payoffs within a column}. In figure \ref{Connectivity} the connections associated with player A's moves (same column in the matrix) are represented by dotted lines, and player B's moves (same row in the matrix) are represented by dashed lines. Solid lines represent double relations (A \textbf{and} B moves). If we compare the connectivity graph of the quantum game with that of the classical game, we can see that each line corresponding to a player in the classical game has become a \textit{complete subgraph} in the quantum game.
\begin{itemize} \item \pstree[treemode=R]{\Tcircle{a}}{\Tcircle{b}} turns into \hspace{16pt} \parbox{5cm}{ \begin{pspicture}(0,0)(5,2) \rput(0,1){\circlenode{a}{a}} \rput(2,2){\circlenode{a'}{a'}} \rput(3,0){\circlenode{b'}{b'}} \rput(5,1){\circlenode{b}{b}} \ncline{a}{b} \ncline{a}{a'} \ncline{a}{b'} \ncline{b}{b'} \ncline{a'}{b'} \ncline{a'}{b} \end{pspicture}} \item \parbox{3cm}{ \pstree[treemode=R]{\Tcircle{a}}{\Tcircle{c}}} turns into \hspace{16pt} \parbox{5cm}{ \begin{pspicture}(0,0)(5,2) \rput(0,1){\circlenode{a}{a}} \rput(2,2){\circlenode{a'}{a'}} \rput(3,0){\circlenode{c'}{c'}} \rput(5,1){\circlenode{c}{c}} \ncline{a}{c} \ncline{a}{a'} \ncline{a}{c'} \ncline{c}{c'} \ncline{a'}{c'} \ncline{a'}{c} \end{pspicture}} \end{itemize} and so forth. Note that the first subgraph corresponds to moves of player B, and the second subgraph corresponds to moves of player A.\\ The subgraph originated in $a-b$ is formed by three of the dashed lines in the complete graph, and share one solid line ($a-a'$) with the subgraph originated in $a-c$, and another solid line ($b-b'$) with the subgraph originated in $b-d$. \section{FROM ARROWS TO LATTICES} It is important to know the possibilities available for each player, but when that is clear, we still need to know what will she do in order to maximize her payoff within this set of possibilities. What happen, then, when we introduce strict order relations between the payoff parameters $a$,$b$,$c$,$d$? First of all, there are two order relations for the nodes, one a relation for player A (horizontal in the Robinson graphs) and a relation for player B (vertical in the Robinson graphs). Each subgraph, with the corresponding order relations constitutes indeed a boolean lattice, the lattice of payoffs available for a given player for a given strategy chosen by the other player. The arrow that shows the way each player can improve his payoff in the Robinson graph of a classical game turns now into a lattice in the Robinson graph of the restricted quantum game, with a maximum payoff node which is going to be chosen by the player. \subsection{NASH EQUILIBRIA} Each point belongs to a lattice ``belonging'' to player A, and a lattice belonging to player B. To be a Nash equilibrium, the point must be a supremum in both lattices. So, we still can find Nash equilibria easily with the Robinson graphs as we did in the classical game (chapter \ref{RobinsonNash}). The games where $a>b>c>d$ are a rather trivial example, but will allow us to show how the Robinson graphs look for a quantum restricted game: \begin{figure} \caption{Robinson Graph for a quantum game with deterministic strategies, where a>b>c>d} \label{QRobinson} \end{figure} The existence of the Nash equilibria is guaranteed for the classical game \cite{NashTheorem}. It is, however, not obvious how at least one supremum of a lattice corresponding to player A \textbf{must} correspond to the supremum of a lattice corresponding to player B. But if we see the quantum deterministic-strategy game as a certain 4x4 classical game, the existence of at least Nash Equilibrium is again guaranteed by Nash Theorem. \subsection{PARETO OPTIMA} To find if a position is Pareto optimal, we can also draw the ``Pareto Lattices'' for both players. A Pareto optimal position will be a supremum in the Pareto lattice of each player. As in the Nash arrows and Pareto arrows, the Pareto and Nash lattices for one player have the same nodes, but some arrows can have different directions. 
In the Robinson graphs, one of these lattices has a ``horizontal'' ordering (arrows pointing rightwards), the other has a ``vertical'' order (arrows pointing upwards). \section{REGIME TRANSITIONS} Every one among these payoff lattices involves two \textit{primed} parameters (like $a'$, $b'$, $c'$ and $d'$) which depend on a pair of payoff parameters (like $a$, $b$, $c$ and $d$) and the entanglement parameter $E$. That means that for a given strict ordering of the payoff parameters they can undergo qualitative changes as we change the entanglement parameter, as a result of the order relations discussed above. Three different entanglement regimes can be defined then \textbf{for each lattice}: low, medium and high. They are presented with an example, a trivial game with order relation $a>b>c>d$, and payoff matrix $(a,b,c,d)=(12,4,10,0)$. \subsection{THE LOW ENTANGLEMENT REGIME} In this regime the supremum of the lattice preserves its status. There are no differences with the classical game, except that of the degeneracy of maxima. In the example, the lower lattices (those who are not implied in the Nash equilibrium) are in this regime for $E<\frac{1}{3}$, and the upper lattice are always in it, because being the highest payoff, it is impossible that the supremum lose its character. \begin{figure} \caption{Robinson Graph for Low Entanglement} \end{figure} \subsection{THE MEDIUM ENTANGLEMENT REGIME} In this regime, the supremum of the lattice is the primed node derived from the initial higher point in the classical lattice. For example: if we have a $b\to a$ ($b<a$) order relation , now we have $b\to a \to a'$ ($b<a<a'$). In this case no new Nash equilibria appears, but the payoff of the equilibrium increases with $E$. In figure \ref{MediumRobinson}, the lower lattices are in this regime for $\frac{1}{6}<\sqrt{E}<\frac{1}{3}$. \begin{figure} \caption{Robinson Graph for Medium Entanglement} \label{MediumRobinson} \end{figure} \subsection{THE HIGH ENTANGLEMENT REGIME} In this regime, the supremum of the lattice is the primed node derived from the initial lower point in the classical lattice. For example: if we have a $b\to a$ ($b<a$) order relation , now we have $b\to a \to b'$ ($b<a<b'$). In the example, there appears a new Nash Equilibrium node in the Robinson graph, meaning two more Nash Equilibrium in the game. This Nash equilibrium has a payoff of $12\sqrt{E}$. \begin{figure} \caption{Robinson Graph for High Entanglement} \end{figure} \section{CONCLUSIONS} At this point, we are able to give a number of general results, that even when they are restricted to deterministic strategies, will prove to be valid within bigger sets of strategies: \subsection{EQUIVALENT CLASSICAL GAME} The quantum nonzero-sum 2x2 symmetric game with deterministic strategies is equivalent to a two-stage symmetric game, where the second choice is a normal 2x2 game, but the first choice of both players, if anticorrelated, produces a double (column and row) transposition of the payoff matrix. \subsection{DEGENERACY OF THE EQUIVALENT 4x4 GAME}\label{DeterministicSym} The equivalent game is invariant under interchange of strategies 0 and Z or interchange of strategies X and Y made for both player. This causes each payoff value to be repeated twice in the 4x4 payoff matrix in such a way that there are pairs of completely equivalent positions. 
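As a concrete check of this structure, the following minimal numpy sketch (the helper name \texttt{extended\_payoff} is ours) builds the extended 4x4 matrix from the primed parameters defined above and verifies, for the example game $(a,b,c,d)=(12,4,10,0)$, that simultaneously relabeling $I\leftrightarrow Z$ and $X\leftrightarrow Y$ for both players leaves the matrix unchanged, which is one way of seeing the pairing of equivalent positions described above:
\begin{verbatim}
import numpy as np

def extended_payoff(a, b, c, d, E):
    # 4x4 payoff matrix of the equivalent classical game, strategies ordered (I, Z, Y, X)
    ap = (1 - E)*a + E*d      # a'
    bp = (1 - E)*b + E*c      # b'
    cp = (1 - E)*c + E*b      # c'
    dp = (1 - E)*d + E*a      # d'
    return np.array([[a,  ap, bp, b ],
                     [ap, a,  b,  bp],
                     [cp, c,  d,  dp],
                     [c,  cp, dp, d ]])

M = extended_payoff(12, 4, 10, 0, E=0.4)
perm = [1, 0, 3, 2]                            # swap I<->Z and X<->Y for both players
print(np.allclose(M, M[np.ix_(perm, perm)]))   # True: each value sits in two equivalent positions
\end{verbatim}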
\subsection{POSSIBILITY OF ENTANGLEMENT REGIMES} It is possible to change the qualitative features of the quantum version of some classical games by modulating the entanglement parameter. The ordering among the payoffs of one player for a fixed strategy of the other player (which we named the player's lattices) can change twice from the unentangled case to the totally entangled case. We have named the possible entanglement regimes low, medium and high. In the high entanglement regime, new pairs of Nash equilibria can appear in some games. In all cases the position that becomes a Nash equilibrium is $a'$ if $d>a$, or $d'$ if $a>d$. At the transition, the payoff of this position becomes higher than the higher nondiagonal classical payoff ($b$ or $c$). This allows us to write the condition like this: \begin{equation} (1-E)\ min(a,d) + E\ max(a,d) > max(b,c) \end{equation} which means: \begin{equation} E > \frac{max(b,c)-min(a,d)}{max(a,d)-min(a,d)} \end{equation} In terms of the G parameters defined in equation \ref{transformation}, these maxima and minima are easy to compute: \begin{multline} max(a,d)= \frac{1}{2}(|G_0+G_{AB}|+|G_A+G_B|)\\ min(a,d)=\frac{1}{2}(|G_0+G_{AB}|-|G_A+G_B|)\\ max(b,c)=\frac{1}{2}(|G_0-G_{AB}|+|G_A+G_B|). \end{multline} The transition value for E will be: \begin{equation*} E > 1-\frac{|G_0+G_{AB}|-|G_0-G_{AB}|}{2|G_A+G_B|}; \end{equation*} or, more briefly: \begin{equation} E > 1-\frac{|min(G_0,G_{AB})|}{|G_A+G_B|} \end{equation} We know that $G_0$ can be set to 0, and that means that $G_{AB}$ \textbf{must be a negative number to allow the existence of an entanglement transition}. \section{RESULTS: REGIMES AND CLASSIFICATION}\label{DetermClass} In the following listing the different regimes appearing in the different kinds of games are presented. Each class of games is characterized by a family of payoff matrices with maximum payoff 1, minimum payoff 0, and two free parameters which can vary from 0 to 1. \begin{enumerate}[1] \item \textbf{One diagonal position is Nash Equilibrium} \begin{enumerate}[1] \item The same position is PO \begin{itemize} \item Classical Payoff Matrix: $\begin{pmatrix}1 & xy \\ x & 0 \end{pmatrix}$ \item Quantum Semideterministic Payoff Matrix: {\small \begin{equation} \begin{pmatrix} 1 & 1-E & (1-E)xy + Ex & xy\\ 1-E & 1 & xy & (1-E)xy + Ex\\ (1-E)x + Exy & x & 0 & E\\ x & (1-E)x + Exy & E & 0 \\ \end{pmatrix} \end{equation}} \item Regimes: There is only one relevant transition, which takes place when $E=x(y-E(1-y))$. The maximum value for $E$ is 1, and this means that the transition is only possible when $2xy<1-x$. Games in this class that do not fulfill this condition do not exhibit such a transition. In the high entanglement regime there appears a new Nash equilibrium with payoff $E$, which is nevertheless less appealing than the initial one. It must be noted that even though in the classical game the transposed payoff matrix has the same features, here the game with payoff matrix {\tiny $\begin{pmatrix}1 & x \\ xy & 0 \end{pmatrix}$} has this transition at $E=x$, that is, at a higher payoff. The transition is then present in all such games. This can be seen for a suitable example in the Robinson graphs for low and high entanglement. The Robinson graph shows a new Nash equilibrium for the payoff matrix {\tiny $\begin{pmatrix}6 & 1 \\ 3 & 0 \end{pmatrix}$} but not for {\tiny $\begin{pmatrix}6 & 3 \\ 1 & 0 \end{pmatrix}$}.
\begin{center}\begin{figure} \caption{Diagonal position is NE and PO} \end{figure}\end{center} \end{itemize} \item The \textbf{other} diagonal position is PO. \hspace{8pt} There are two possibilities: \begin{enumerate} \item The Nash Equilibrium payoff is higher than the Pareto Optimal payoff.\\ (Hershey's kisses, Deadlock) \begin{itemize} \item Classical Payoff Matrix: $\begin{pmatrix}x & 0 \\ 1 & xy \end{pmatrix}$ \item Quantum Semideterministic Payoff Matrix: {\small \begin{equation} \begin{pmatrix} x & x(1-E + yE) & 1-E & 1\\ x(1-E + yE) & x & 1 & 1-E\\ E & 0 & xy & x(y(1-E)+E)\\ 0 & E & x(y(1-E)+E) & xy\\ \end{pmatrix} \end{equation}} \item Regimes: In these games there is no transition at all. \end{itemize} \item The Pareto Optimal payoff is higher than the Nash Equilibrium payoff\\ (\textbf{Prisoner's Dilemma}) \begin{itemize} \item Classical Payoff Matrix: $\begin{pmatrix}x & 1 \\ 0 & xy \end{pmatrix}$ \item Quantum Semideterministic Payoff Matrix: {\small \begin{equation} \begin{pmatrix} x & x(1-E + yE) & E & 0\\ x(1-E + yE) & x & 0 & E\\ 1-E & 1 & xy & x(y(1-E)+E)\\ 1 & 1-E & x(y(1-E)+E) & xy\\ \end{pmatrix} \end{equation}} \item Regimes: Here we find another transition where a new equilibrium arises, in a similar way as the first case considered. This happens at $E=x(y(1-E)+E)$. In the known game "`Prisoner's Dilemma"' the scaled payoff matrix is defined by $x=\frac{3}{5}$ and $y=\frac{1}{3}$. The transition is in $E=\frac{1}{3}$, the same result found by \cite{QTransition} For these games $G_{AB}=x+xy-1$, a positive or negative number according to the value of x an y. Games with $G_{AB}>0$ will not have an entanglement transition, and games with $G_{AB}<0$ will have one. In the image an example of either this or the above considered kind of games is illustrated: \begin{center}\begin{figure} \caption{One diagonal position is NE and the other is PO} \end{figure}\end{center} \end{itemize} \item Both diagonal positions are Pareto Optimal \begin{itemize} \item Classical Payoff Matrix: $\begin{pmatrix}1 & x \\ 0 & xy \end{pmatrix}$ \item Quantum Semideterministic Payoff Matrix: {\small \begin{equation} \begin{pmatrix} 1 & 1-(1-xy)E & x(1-E) & x\\ 1-(1-xy)E & 1 & x & x(1-E)\\ Ex & 0 & xy & xy+E(1-xy)\\ 0 & Ex & xy+E(1-xy) & xy\\ \end{pmatrix} \end{equation}} \item Regimes: In these games we find another transition corresponding to the appearance of a new Nash Equilibrium. This happens at $xy+E(1-xy)=x$ \begin{center}\begin{figure} \caption{One diagonal position is NE and both are PO} \end{figure}\end{center} \end{itemize} \end{enumerate} \item \textbf{Both diagonal position are Nash Equilibria} There are two cases here: \begin{enumerate} \item One of the diagonal position is Pareto Optimal\\ (Assurance Game, Stag-Hunt) \begin{itemize} \item Classical Payoff Matrix: $\begin{pmatrix}x & 1 \\ xy & 0 \end{pmatrix}$ \item Quantum Semideterministic Payoff Matrix: {\small \begin{equation} \begin{pmatrix} x & x(1-E) & 1-E(1-xy) & 1\\ x(1-E) & x & 1 & 1-E(1-xy)\\ xy+(1-xy)E & xy & 0 & Ex \\ xy & xy+(1-xy)E & Ex & 0\\ \end{pmatrix} \end{equation}} \item Regimes: In these games there is no transition at all. 
\begin{center}\begin{figure} \caption{Both diagonal are Nash Equilibria, One is Pareto Optimal} \end{figure}\end{center} \item Both positions are Pareto Optimal \begin{itemize} \item Classical Payoff Matrix: $\begin{pmatrix}1 & xy\\ 0 & x \end{pmatrix}$ \item Quantum Semideterministic Payoff Matrix: {\small \begin{equation} \begin{pmatrix} x & x(1-E) & 1-E(1-xy) & 1\\ x(1-E) & x & 1 & 1-E(1-xy)\\ xy+(1-xy)E & xy & 0 & Ex \\ xy & xy+(1-xy)E & Ex & 0\\ \end{pmatrix} \end{equation}} \item Regimes: In these games there is no transition at all. \begin{center}\begin{figure} \caption{Both diagonal are Nash Equilibria and Pareto Optimal} \end{figure}\end{center} \end{itemize} \end{itemize} \end{enumerate} \item Nondiagonal positions are Nash Equilibria \begin{enumerate} \item One diagonal position is Pareto Optimal\\ (Chicken Game) \begin{itemize} \item Classical Payoff Matrix: $\begin{pmatrix}x & xy \\ 1 & 0 \end{pmatrix}$ \item Quantum Semideterministic Payoff Matrix: {\small \begin{equation} \begin{pmatrix} x & x(1-E) & xy+(1-xy)E & xy\\ x(1-E) & x & xy & 1-E(1-xy)\\ 1-E(1-xy) & 1 & 0 & Ex \\ 1 & 1-E(1-xy) & Ex & 0\\ \end{pmatrix} \end{equation}} \item Regimes: In these games there is no transition \begin{center}\begin{figure} \caption{Both diagonal are Nash Equilibria and Pareto Optimal} \end{figure}\end{center} \end{itemize} \item Both diagonal positions are Pareto Optimal \begin{itemize} \item Classical Payoff Matrix: $\begin{pmatrix}xy & x \\ 1 & 0 \end{pmatrix}$ \item Quantum Semideterministic Payoff Matrix: {\small \begin{equation} \begin{pmatrix} xy & xy(1-E) & E+x(1-E) & x\\ xy(1-E) & xy & x & E+x(1-E)\\ 1-E(1-x) & 1 & 0 & xyE\\ 1 & 1-E(1-x) & xyE & 0\\ \end{pmatrix} \end{equation}} \item Regimes: In these games there is no transition \begin{center}\begin{figure} \caption{Both diagonal are Nash Equilibria and Pareto Optimal} \end{figure}\end{center} \end{itemize} \end{enumerate} \end{enumerate} \end{enumerate} We can locate in the game classes map (figure \ref{classesmap}) for example, the games that can have regime transitions for $E<\frac{1}{2}$ These are, according to the map, $\frac{1}{4}$ of all possible games (figure \ref{MapTrans}): \begin{figure} \caption{Entanglement Regimes in the payoff zones} \label{MapTrans} \end{figure} In this chapter we considered the smaller subspace of interesting strategies. The next step in this work, is to consider unitary strategies, and the first step is given in the next chapter, developing a methodology to obtain the pursued results in the space of unitary strategies. \chapter{A GEOMETRICAL APPROACH TO UNITARY STRATEGIES} In this chapter, a methodology to compute optimal response strategies for unitary strategies is presented. It is a little of a digression from the subject of quantum games, but can be interesting for the reader that is interested in the quantum mechanics of entangled qubits. \section{TENSOR REPRESENTATION OF THE SEPARABLE DENSITY OPERATOR} Identity and Pauli matrices constitute a base for the space of operators acting on a bidimensional Hilbert space. Then we can represent any linear operator acting on this space as a 4-vector with the coefficients of the basis matrix as elements. 
That is: \begin{multline*} \vec{O} = \frac{1}{2}(\langle 0\mid\hat{O}\mid0 \rangle +\langle 1\mid\hat{O}\mid1 \rangle,\langle 0\mid\hat{O}\mid1 \rangle +\langle 1\mid\hat{O}\mid0 \rangle,\\ i\langle 0\mid\hat{O}\mid1\rangle -i\langle 1\mid\hat{O}\mid0 \rangle,\langle 0\mid\hat{O}\mid0 \rangle -\langle 1\mid\hat{O}\mid1 \rangle), \end{multline*} or: \begin{equation} \vec{O} =\frac{1}{2}\left(Tr(\mathbb{I}\mathbb{O}),Tr(\sigma_x\mathbb{O}),Tr(\sigma_y\mathbb{O}),Tr(\sigma_z\mathbb{O})\right) \end{equation} where: \begin{equation*} \mathbb{O}_{i,j} = \langle\hbox{i}\mid\hat{O}\mid\hbox{j}\rangle. \end{equation*} If we study a composite system, we can represent the operators as rank-2 tensors of dimension 4. The tensor representation for separable and general linear operators acting on the total space is: \begin{align} (A\otimes B)_{\mu\nu} &= A_{\mu}B_{\nu}\\ O_{(\hbox{\scriptsize spaces A and B})} &= Tr \left(\begin{pmatrix} \mathbb{I}\\ \sigma_{z}\\ \sigma_{y}\\ \sigma_{x} \end{pmatrix}\begin{pmatrix} \mathbb{I} & \sigma_{z} & \sigma_{y} & \sigma_{x} \end{pmatrix} \hat{O}\right). \label{RepSeparable} \end{align} Even if an operator is not separable, it can be written as a sum of separable operators, and the same holds for its tensor representation. \section{ENTANGLED STATES} A transformation generated by a direct product of Pauli matrices is completely nonlocal, in the sense that it can cause a maximal change in local information (defined as von Neumann entropy, length of the Bloch vector of the reduced density operator, etc.). In the Eisert scheme the chosen entangling unitary is: \begin{equation}\label{TransformEnreda} \hat{J}=\sqrt{1-\gamma^{2}}(\mathbb{I}\otimes\mathbb{I})+i\gamma(\sigma_{x}\otimes\sigma_{x}). \end{equation} We will use the notation (ab)=$\sigma_{a}\otimes\sigma_{b}$, where $\sigma_{0}$ is the identity: \begin{equation*} \hat{J}=\sqrt{1-\gamma^{2}}(00)+i\gamma(xx). \end{equation*} Let us examine the effect of this transformation on the state $\mid kl \rangle$: \begin{multline*} \hat{J}^{\dagger}\rho\hbox{(kl)}\hat{J}=(1-\gamma^{2})\rho\hbox{(kl)}+\gamma^{2}(xx)\rho\hbox{(kl)}(xx)\\ +i\gamma\sqrt{1-\gamma^{2}}\left[\rho\hbox{(kl)},(xx)\right]. \end{multline*} Using an explicit expression for $\rho$(kl): \begin{equation} \rho\hbox{(kl)}=\frac{1}{2}\left((00)+(-1)^{k}(z0)+(-1)^{l}(0z)+(-1)^{k+l}(zz)\right). \end{equation} The effect of the transformation $\hat{J}$ on the tensor can be deduced from the entangled deterministic strategies shown before: \begin{align*} \hat{J}^{\dagger}(00)\hat{J}&=(00) \\ \hat{J}^{\dagger}(n0)\hat{J}&=(1-2\gamma^{2})(n0)-i\gamma\sqrt{1-\gamma^{2}}((n\times x)x) \\ \hat{J}^{\dagger}(0n)\hat{J}&=(1-2\gamma^{2})(0n)-i\gamma\sqrt{1-\gamma^{2}}(x(n\times x)) \\ \hat{J}^{\dagger}(nn)\hat{J}&=(nn), \end{align*} where $n$ is $x$, $y$ or $z$. We are particularly interested in those with $n=z$, because they appear in the density operator of the eigenstates of the payoff operator: \begin{align*} \hat{J}^{\dagger}(00)\hat{J}&=(00)\\ \hat{J}^{\dagger}(z0)\hat{J}&=(1-2\gamma^{2})(z0)-i\gamma\sqrt{1-\gamma^{2}}(yx)\\ \hat{J}^{\dagger}(0z)\hat{J}&=(1-2\gamma^{2})(0z)-i\gamma\sqrt{1-\gamma^{2}}(xy)\\ \hat{J}^{\dagger}(zz)\hat{J}&=(zz).
\end{align*} The tensor representation of the transformed initial state is: \begin{equation}\label{TensRhoEnredado} \overleftrightarrow{\rho\hbox{(ij)}}= \begin{pmatrix} 1 & (-1)^{k}\sqrt{1-E} & 0 & 0\\ (-1)^{l}\sqrt{1-E} & (-1)^{k+l} & 0 & 0\\ 0 & 0 & 0 & (-1)^{k+1}\sqrt{E} \\ 0 & 0 & (-1)^{l+1}\sqrt{E} & 0 \\ \end{pmatrix} \end{equation} where we defined: \begin{equation}\label{ParametroE} E=4\gamma^{2}(1-\gamma^{2}). \end{equation} \section{REPRESENTATION FOR THE PAYOFF OPERATOR} The payoff function is represented in classical games in normal form as a matrix whose row is the state chosen by player A and the column that chosen by player B. In the quantum game, we define the payoff as an operator, whose eigenvalues are the entries of the classical payoff matrix: \begin{equation} \hat{G}=(00)G_{prom} +(z0)G_{A} +(0z)G_{B} +(zz)G_{AB}. \end{equation} where the parameters $G_{A}$, $G_{B}$ and $G_{AB}$ are the same defined before for the classical game in section \ref{ClassParameters}: \begin{itemize} \item $G_{A}$ is the difference in the own payoff when a player changes from strategy 0 to strategy 1 \item $G_{B}$ is the difference in the other player's payoff when a player changes from strategy 0 to strategy 1 \item $G_{AB}$ is the difference in any player's payoff when both players change simultaneously from a coordinated strategy to an anticoordinated one. \item $G_{prom}$ is the average payoff for completely random playing by both players. As we have seen before, it is completely irrelevant, and can be set equal to zero. \end{itemize} The tensor representing the non-entangled payoff operator is: \begin{equation} \overleftrightarrow{G}= \begin{pmatrix} 0 & G_{B} & 0 & 0\\ G_{A} & G_{AB} & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ \end{pmatrix}. \end{equation} It must be observed that this operator is not necessarily expressible as a direct product of partial operators. In fact, the games where it can be are rather trivial both in the classical and quantum version. The reason is that there must be some \textit{correlation} between the player A and player B payoff, for the game to be non-trivial. Entanglement acts on the tensor representation in a similar way to how it acts on the representation of density operator: \begin{equation}\label{TensGEnredado} {\overleftrightarrow{G}}= \begin{pmatrix} 0 & \sqrt{1-E}G_{B} & 0 & 0\\ \sqrt{1-E}G_{A} & G_{AB} & 0 & 0\\ 0 & 0 & 0 & \sqrt{E}G_{B} \\ 0 & 0 & \sqrt{E}G_{A} & 0\\ \end{pmatrix}. \end{equation} \section{LOCAL UNITARY OPERATIONS} The action of the unitary operators on the tensor representation is very simple; much simpler than that of nonlocal transformations. This can be easily seen on separable operators (tensor representation given in (\ref{RepSeparable}): \begin{align*} (U_{A}^{\dagger} A U_{A}\otimes U_{B}^{\dagger} B U_{B})_{\mu\nu} &= (U_{A}^{\dagger}A U_{A})_{\mu}(U_{B}^{\dagger} B U_{B})_{\nu}\\ \hat{A} &= \langle A \rangle \mathbb{I}+\vec{A}\centerdot \vec{\sigma}\\ U_{A}^{\dagger} A U_{A} &= \langle A \rangle \mathbb{I}+\vec{A}\centerdot (U_{A}^{\dagger} \vec{\sigma} U_{A})\\ &= \langle A \rangle \mathbb{I}+\overleftrightarrow{R_{A}}\centerdot \vec{A}\centerdot \vec{\sigma}. \end{align*} The tensor $\overleftrightarrow{R_{A}}$ is simply the matrix representation of a SO(3) transformation. To apply this result to the 4-vector representation we enlarge the dimension by one, and replacing the SO(3) matrix to his SO(4) equivalent. \begin{equation} (R_{A})_{\mu,\mu' }= 1 \oplus (R_{A})_{ij}. 
\end{equation} For the subspace of player B we proceed in a similar way \begin{equation} (U_{A}^{\dagger} A U_{A}\otimes U_{A}^{\dagger} B U_{B})_{\mu\nu} = (R_{A})_{\mu,\mu' }(A\otimes B)_{\mu,\nu}(R_{B})_{\nu ,\nu'}. \end{equation} In the last equation all indexes are \textit{subindexes}, but there is no problem with that because the metric is identity. It can be seen, anyway that the formula is that of a \textbf{matrix multiplication}. There is only one detail regarding the representation of the rotation for B. The subindex convention forces us to use \textit{the transpose of the rotation matrix}, and to always multiply $(\mathbb{R}_{B})^{T}$ on the right. \begin{equation}\label{TransLocalTensor} U_{A}^{\dagger} \hat{A} U_{A}\otimes U_{A}^{\dagger} \hat{B} U_{B} = \mathbb{R}_{A}\overleftrightarrow{A\otimes B}(\mathbb{R}_{B})^{T}. \end{equation} \section{EXPECTED PAYOFF} In the tensor representation, there is a very simple expression for the expected payoff: {\large \begin{equation} \langle G \rangle = \overleftrightarrow{G}:\overleftrightarrow{\rho} \end{equation}. } where $\overleftrightarrow{G}$ is given in (\ref{TensGEnredado}), and $\overleftrightarrow{\rho}$ is given in (\ref{TensRhoEnredado}) This contraction is equivalent to the trace of a product of matrices: {\large \begin{equation} \langle G \rangle = (R_{A})_{\mu,\mu' }\rho_{\mu' ,\nu' }(R_{B})_{\nu' ,\nu}G_{\mu,\nu}. \end{equation}} This expression is valid for the payoff of player A. In symmetric games, it is unimportant which player are we considering. If we want to consider the \textit{other player} payoff, all we have to do is change $G_{A}$ by $G_{B}$ in the expression. \section{A SYMMETRY IN THE QUANTUM GAME}\label{symmetryQG} To simplify the problem, we can change the base used for the tensor representation, in a way that the representation of the payoff operator become diagonal in the $x,y,z$ subspace. We achieve this multiplying by a rotation of 90 degrees in z by the right. (that means that we change the convention for the unitary strategies of B): \begin{equation}\label{ConvencionB} \begin{aligned} \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ \end{pmatrix} &\mapsto \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & -1 & 0 & 0\\ 0 & 0 & 0 & 0\\ \end{pmatrix}\\ \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 &-1\\ 0 & 0 & 1 & 0\\ \end{pmatrix} &\mapsto \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 0 &-1 & 0\\ 0 & 0 & 0 & 1\\ 0 & 1 & 0 & 0\\ \end{pmatrix}\\ \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 0 & 0 & -1\\ 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0\\ \end{pmatrix} &\mapsto \begin{pmatrix} 1 & 0 & 0 & 1\\ 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 &-1 & 0\\ \end{pmatrix}\\ \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & -1\\ 0 & 0 & 1 & 0\\ \end{pmatrix} &\mapsto \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ \end{pmatrix}. \end{aligned} \end{equation} In this new base, the tensors will be: {\small \begin{align*} \overleftrightarrow{\rho} &= \begin{pmatrix} 1 & \sqrt{1-E} & 0 & 0\\ \sqrt{1-E} & 1 & 0 & 0\\ 0 & 0 & \sqrt{E} & 0\\ 0 & 0 & 0 & -\sqrt{E}\\ \end{pmatrix}\\ {\overleftrightarrow{G}} &= \begin{pmatrix} 0 & \sqrt{1-E}G_{B} & 0 & 0\\ \sqrt{1-E}G_{A} & G_{AB} & 0 & 0\\ 0 & 0 & \sqrt{E}G_{B} & 0\\ 0 & 0 & 0 & -\sqrt{E}G_{A} \\ \end{pmatrix}. \end{align*} } It is also possible to make a further change of base to make density operator be represented by the identity in the maximally entangled limit. 
We achieve this multiplying by a 180 degree rotation about x on the left: {\small \begin{equation*} R_{x,180degrees} = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & -1 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & 1\\ \end{pmatrix}. \end{equation*} \begin{subequations}\label{TensoresDiagonales} \begin{align} \overleftrightarrow{\rho} &= \begin{pmatrix} 1 & \sqrt{1-E} & 0 & 0\\ -\sqrt{1-E} & -1 & 0 & 0\\ 0 & 0 & -\sqrt{E} & 0\\ 0 & 0 & 0 & -\sqrt{E}\\ \end{pmatrix}\label{OpDensidad}\\ {\overleftrightarrow{G}} &= \begin{pmatrix} 0 & \sqrt{1-E}G_{B} & 0 & 0\\ -\sqrt{1-E}G_{A} & -G_{AB} & 0 & 0\\ 0 & 0 & -\sqrt{E}G_{B} & 0\\ 0 & 0 & 0 & -\sqrt{E}G_{A} \\ \end{pmatrix}.\label{Ganancia} \end{align} \end{subequations} } If we divide the tensor into blocks corresponding to subspaces \textit{(0Z)} and \textit{(XY)}, it becomes obvious that the tensor representation for the density operator is a multiple of the identity in the \textit{(XY)} block, and therefore is not affected by coordinated rotations in z: \begin{equation} \begin{pmatrix} \mathbb{I}_{2x2} & 0\\ 0 & \mathbb{R} \end{pmatrix} \begin{pmatrix} \mathbb{I}_{2x2} & 0\\ 0 & \sqrt{E}\mathbb{I}_{2x2} \end{pmatrix} \begin{pmatrix} \mathbb{I}_{2x2} & 0\\ 0 & \mathbb{R}^T \end{pmatrix}= \begin{pmatrix} \mathbb{I}_{2x2} & 0\\ 0 & \sqrt{E}\mathbb{I}_{2x2} \end{pmatrix}. \end{equation} This means that there is a symmetry: \textbf{coordinated rotations in z do not affect the expected payoff} in this base. {\large \begin{equation}\label{simetriaZ} \mathbb{G}\mathbb{R}_{A} \mathbb{R}_{z}\rho\mathbb{R}_{z}^{T}(\mathbb{R}_{B})^{T}=\mathbb{G}\mathbb{R}_{A} \rho(\mathbb{R}_{B})^{T} \end{equation} } This means that if we change the strategies of the players by multiplying by the mentioned matrices, we do not affect the payoff: \begin{equation} (R_x \mathbb{R}_A)\otimes((\mathbb{R}_B)^TR'_z)\equiv \mathbb{R}_A\otimes(\mathbb{R}_B)^T \end{equation} where $R_x$ is a 180 degree rotation about x and $R'_z$ is a 90 degree rotation about z To study with detail the expression for the expected payoff, it is convenient to translate the problem to matrices of dimension 3. We achieve this dividing the 4-matrices in 1 dimensional and 3 dimensional blocks: {\small \begin{subequations}\label{TensoresBloques} \begin{align} \overleftrightarrow{\rho} &= \begin{pmatrix} 1 & \sqrt{1-E}\langle z \vert\\ -\sqrt{1-E}\vert z \rangle & -\mathbb{T}\\ \end{pmatrix}\label{OpDensidad}\\ {\overleftrightarrow{G}} &= \begin{pmatrix} 0 & \sqrt{1-E}G_{B} \langle z \vert\\ -\sqrt{1-E}G_{A} \vert z \rangle & -\mathbb{G}_{diag}\\ \end{pmatrix}.\label{TGanancia3d} \end{align} \end{subequations} } where \hspace{16pt} $\mathbb{T}=\begin{pmatrix}1 & 0 & 0\\ 0 & \sqrt{E} & 0\\ 0 & 0 & \sqrt{E} \end{pmatrix} $ y $\mathbb{G}=\begin{pmatrix}G_{AB}&0&0\\0&G_{A}\sqrt{E}&0\\0&0&G_{B}\sqrt{E}\end{pmatrix}$ When we compute the tensor product block by block we find the following expression for the expected value of payoff: \begin{equation*} \langle G \rangle = (1-E)\bigl( G_{A} \langle z \vert \mathbb{R}_{A} \vert z \rangle + Tr(G_{B} \vert z \rangle \langle z \vert (\mathbb{R}_{B})^{T}) \bigr) +Tr(\mathbb{R}_{A}\mathbb{T}(\mathbb{R}_{B})^{T}\mathbb{G}). \end{equation*} This can be written simply as the trace of a matrix: \begin{equation*} \langle G \rangle = Tr((1-E)\bigl( G_{A} \mathbb{R}_{A} \vert z \rangle\langle z \vert -(G_{B} \vert z \rangle \langle z \vert (\mathbb{R}_{B})^{T} \bigr) +\mathbb{R}_{A}\mathbb{T}(\mathbb{R}_{B})^{T}\mathbb{G}). 
\end{equation*} This expression can be simplified using the fact that $\vert z \rangle$ is an eigenvector of $\mathbb{G}$ with eigenvalue $G_{AB}$: \begin{multline}\label{ganancia3d} \langle G \rangle = Tr(\bigl(\mathbb{R}_{A}-(1-E)\frac{G_{B}}{G_{AB}} \vert z \rangle\langle z \vert \bigr)\mathbb{T}\bigl((\mathbb{R}_{B})^{T}+(1-E)\frac{G_{A}}{G_{AB}} \vert z \rangle\langle z \vert \bigr) \mathbb{G})\\ + (1-E)^{2}\frac{G_{A}G_{B}}{G_{AB}}. \end{multline} \section{EXTREMAL STRATEGIES} The extremum condition on a player's parameters can be computed constructing the rotations as exponentials of a sum of products of real parameters times the antisymmetric generators of SO(3): \begin{multline*} \mathbb{R}_{A}(\theta + \epsilon) = \mathbb{R}_{A}(\epsilon)\mathbb{R}_{A}(\theta)\\ \mathbb{R}_{A}(\epsilon) = exp(\vec{\epsilon}\centerdot \vec{M}_{generators}). \end{multline*} Expanding around $\theta$: \begin{equation*} \mathbb{R}_{A}(\theta + \epsilon) = (\mathbb{I} +\vec{\epsilon}\centerdot \vec{M}_{generators}+O(\epsilon^{2}))\mathbb{R}_{A}(\theta). \end{equation*} The gradient of the expected payoff can be computed replacing the expansion in (\ref{ganancia3d}) and deriving: \begin{subequations}\label{GradG} \begin{align} \vec{\Delta}_{A}\langle G_{A} \rangle = Tr(\vec{\mathbb{M}}_{generators}\mathbb{R}_{A}\mathbb{T}\bigl((\mathbb{R}_{B})^{T}+(1-E) \frac{G_{A}}{G_{AB}} \vert z \rangle\langle z \vert \bigr) \mathbb{G}_{A})\label{GradGA}\\ \vec{\Delta}_{B}\langle G_{B} \rangle = Tr(\vec{\mathbb{M}}_{generators}\mathbb{R}_{B}\mathbb{T}\bigl((\mathbb{R}_{A})^{T}+(1-E) \frac{G_{A}}{G_{AB}} \vert z \rangle\langle z \vert \bigr) (\mathbb{G}_{A})^{T}).\label{GradGB} \end{align} \end{subequations} For the second expression we used the transpose of (\ref{ganancia3d}).\\ Let us define: \begin{equation}\label{MatrizA} \mathbb{A}=\mathbb{T}\bigl((\mathbb{R}_{B})^{T}-(1-E)\frac{G_{A}}{G_{AB}} \vert z \rangle\langle z \vert \bigr) \mathbb{G}_{A}, \text{ and} \end{equation} \begin{equation}\label{MatrizB} \mathbb{B}= \mathbb{T}\bigl((\mathbb{R}_{A})^{T}+(1-E)\frac{G_{A}}{G_{AB}} \vert z \rangle\langle z \vert \bigr)(\mathbb{G}_{B})^{T}. \end{equation} Matrix $\mathbb{A}$ depends solely on B's parameters and matrix $\mathbb{B}$ depends solely on A's parameters. The expression for the gradient is then: \begin{align}\label{GradienteG} \vec{\Delta}_{A}\langle G_{A} \rangle &= Tr(\vec{\mathbb{M}}_{generators}\mathbb{R}_{A}\mathbb{A})\\ \vec{\Delta}_{B}\langle G_{B} \rangle &= Tr(\vec{\mathbb{M}}_{generators}\mathbb{R}_{B}\mathbb{B}). \end{align} The generating matrices of SO(3) constitute a complete basis for 3x3 antisymmetric matrices, therefore equating this gradient to zero amounts to say that the matrix multiplying the vector of generating matrix is \textbf{symmetric}\\ Conclusion: {\large\textbf{For a critical expected payoff, it is fulfilled that} \begin{equation} \mathbb{R}_{A}\mathbb{A}=\mathbb{A}^{T}\mathbb{R}_{A}^{T}\hspace{8pt}and\hspace{8pt}\mathbb{R}_{B}\mathbb{B}=\mathbb{B}^{T}\mathbb{R}_{B}^{T} \end{equation}} It must be noted that semideterministic strategies are represented by diagonal unitary matrices that commute or anticonmute with the conventional rotation introduced in Z. These are then trivialy mutual critical strategies. \section{SOME EXTREMAL STRATEGIES} The form of the solutions to the extremum condition is rather simple, however difficult, in general, to compute. 
\begin{align*} \mathbb{R}_{A}^{eq} &=(\mathbb{A}^{T}\mathbb{A})^{1/2}\mathbb{A}^{-1}\\ \mathbb{R}_{B}^{eq} &=(\mathbb{B}^{T}\mathbb{B})^{1/2}\mathbb{B}^{-1}. \end{align*} It is useful to rewrite them in the following way: \begin{subequations}\label{soluciones} \begin{align} \mathbb{R}_{A}^{eq}\mathbb{A} &= (\mathbb{A}^{T}\mathbb{A})^{1/2}\label{SolA}\\ \label{SolB} \mathbb{R}_{B}^{eq}\mathbb{B} &= (\mathbb{B}^{T}\mathbb{B})^{1/2}. \end{align} \end{subequations} The matrix $\mathbb{A}$ can be written as the product of three matrices, two orthogonal and one diagonal: \begin{equation*} \begin{split} \mathbb{A} =\mathbb{O}_1\mathbb{D}_{A}\mathbb{O}_2\\ (\mathbb{A}^{T}\mathbb{A})^{1/2} = ((\mathbb{O}_2)^{T}\mathbb{D}_{A}(\mathbb{O}_1)^{T}\mathbb{O}_1\mathbb{D}_{A}\mathbb{O}_2)^{1/2} = (\mathbb{O}_2)^{T}(\mathbb{D}_{A}\mathbb{D}_{A})^{1/2}\mathbb{O}_2. \end{split} \end{equation*} The extremum condition is then: \begin{equation}\label{diagonalA} \begin{split} \mathbb{R}_{A}\mathbb{O}_1\mathbb{D}_{A}\mathbb{O}_2 = (\mathbb{O}_2)^{T}(\mathbb{D}_{A}\mathbb{D}_{A})^{1/2}\mathbb{O}_2\\ \mathbb{O}_2\mathbb{R}_{A}\mathbb{O}_1\mathbb{D}_{A}\mathbb{O}_2(\mathbb{O}_2)^{T} = (\mathbb{D}_{A}\mathbb{D}_{A})^{1/2}\\ \mathbb{O}_2\mathbb{R}_{A}\mathbb{O}_1\mathbb{D}_{A}= (\mathbb{D}_{A}\mathbb{D}_{A})^{1/2}. \end{split} \end{equation} Then $\mathbb{O}_2\mathbb{R}_{A}\mathbb{O}_1$ must be diagonal at a solution. The SO(3) diagonal matrices can be written in the following way: \begin{equation}\label{RotacionDiagonal} \begin{pmatrix} s_{1}s_{2}&0&0\\0&s_{2}&0\\0&0&s_{1} \end{pmatrix} \end{equation} where $s_{1}=\pm1$ and $s_{2}=\pm1$. Given a strategy $R_{B}$ chosen by B, there are 4 \textit{critical strategies} that A can choose: \begin{equation}\label{SolucionesA} R_{A}(s_{1},s_{2})= \hspace{10pt}(O_2)^T\hspace{10pt} \begin{pmatrix} s_{1}s_{2}&0&0\\0&s_{2}&0\\0&0&s_{1} \end{pmatrix}\hspace{10pt} (O_1)^T \end{equation} where $O_1$ and $O_2$ ``diagonalize'' $\mathbb{A}$: \begin{equation*} (O_1)^T\hspace{10pt} \mathbb{A} \hspace{10pt} (O_2)^T \hspace{10pt}= Diagonal. \end{equation*} It is important to characterize the extremal strategies, that is, to find out whether they are maxima, minima or saddle points. Taking the expression (\ref{GradienteG}) and differentiating again, we find the Hessian: \begin{equation} H_{ij} = Tr(\mathbb{M}_{i}\mathbb{M}_{j}\mathbb{R}_{A}\mathbb{A}). \end{equation} The product of the generators of SO(3) can be computed very easily using the Levi-Civita tensor: \begin{multline*} (\mathbb{M}_{i})_{jk} = \epsilon_{jik}\\ (\mathbb{M}_{i}\mathbb{M}_{j})_{kl} = \epsilon_{kim}\epsilon_{mjl} = \delta_{kj}\delta_{il} - \delta_{ij}\delta_{kl}. \end{multline*} The Hessian is the trace of this product; carrying out the contraction, the expression left for the Hessian is: \begin{equation}\label{FormaHessiana} H_{ij} = \bigl(\mathbb{R}_{A}\mathbb{A}-\mathbb{I}Tr(\mathbb{R}_{A}\mathbb{A})\bigr)_{ij}. \end{equation} If we replace $\mathbb{A}$ and $\mathbb{R}_A$ by their decomposed expressions, we get: \begin{equation} \mathbb{H} = (O_2)^T\bigl(\mathbb{D}_R \mathbb{D}_A-\mathbb{I}Tr(\mathbb{D}_R \mathbb{D}_A)\bigr)O_2. \end{equation} Now we can conclude that the diagonal matrix $\mathbb{D}_A$ has the same eigenvalues as $\mathbb{A}$ in absolute value, with the signs $s_1 s_2$, $s_2$ and $s_1$ respectively. These signs characterize the critical response $\mathbb{R}_A$.
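This construction is easy to check numerically. The following sketch (Python with numpy; the matrix playing the role of $\mathbb{A}$ is randomly generated for illustration and is not derived from any particular game) builds the four sign choices from a singular value decomposition, verifies the criticality condition $\mathbb{R}_A\mathbb{A}=(\mathbb{R}_A\mathbb{A})^T$, and evaluates the Hessian of (\ref{FormaHessiana}):
\begin{verbatim}
import numpy as np

# Illustrative 3x3 matrix playing the role of A; in the text A is built from
# the other player's strategy, T, G and the entanglement parameter.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
if np.linalg.det(A) < 0:        # ensure det(A) > 0 so that the candidate
    A[0] *= -1                  # responses below are proper rotations

U, lam, Vt = np.linalg.svd(A)   # A = U diag(lam) Vt

for s1 in (+1, -1):
    for s2 in (+1, -1):
        S = np.diag([s1 * s2, s2, s1])           # det(S) = +1
        R = Vt.T @ S @ U.T                       # candidate critical response
        RA = R @ A                               # = Vt.T (S diag(lam)) Vt
        assert np.allclose(RA, RA.T)             # criticality condition
        assert np.allclose(R @ R.T, np.eye(3))   # R is orthogonal
        H = RA - np.eye(3) * np.trace(RA)        # Hessian of the payoff
        print(s1, s2, "curvatures:", np.round(np.linalg.eigvalsh(H), 3))
\end{verbatim}
The signs of the printed curvatures reproduce the pattern discussed next: each curvature is minus a sum of two signed eigenvalues of $\mathbb{A}$.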
The eigenvalues of the Hessian are then sums of two of the eigenvalues of $\mathbb{A}$, with signs $-s_2$ and $-s_1$ for the first, $-s_1$ and $-s_1s_2$ for the second, and $-s_2$ and $-s_1s_2$ for the third. Whatever the signs of the eigenvalues of $\mathbb{A}$, we can get any sign for the curvatures, so with this criterion we can obtain maxima, minima or saddle points. However, when the matrix $\mathbb{A}$ is \textbf{degenerate}, some of the curvatures can vanish. This will be studied later. \chapter{CRITICAL RESPONSES IN QUANTUM 2x2 GAMES} In chapter \ref{OptimalNash} we introduced the concept of optimal responses, and in the last chapter the concept of critical responses, in the setting of local unitary transformations of entangled qubits. In this chapter, we apply this to quantum game theory. \section{REVIEW OF SOME CONCEPTS} As we have shown before \cite{Quadratic}, quantum games share some common features with quadratic games, the difference being that the former include a quadratic constraint restricting the strategy space. Quantum games can thus be considered as belonging to the class of games with infinitely many strategies whose payoff is a smooth function of some parameters labeling the strategies. We can therefore use a concept developed for such games to characterize a quantum game: the concept of the critical response. It is based on the concept of the \textbf{optimal response} (Definition \ref{DefOptResp}) used in the theory of infinite games \cite{NashResponse}. A strategy which gives a maximum, minimum or saddle-point payoff for a given fixed strategy of the other player is called a \textbf{critical response} to the other player's strategy. A \textbf{Nash equilibrium} (Definition \ref{DefNashEq}) can thus be defined as a pair of strategies that are mutually optimal (maximal) responses with respect to each player's own payoff function, and a \textbf{Pareto optimum} (Definition \ref{DefParetoOp}) as a pair of strategies which are mutually optimal (maximal) responses with respect to \textit{the other player's payoff function} \cite{NashResponse}. \subsection{THE PROBLEM OF STABILITY} These two concepts (Nash equilibrium and Pareto optimum) give us situations which are stable in the sense that players with a certain kind of rationality will not depart from such positions; those who maximize their own payoff will not depart from Nash equilibria, and those who maximize the other player's payoff will not depart from Pareto optima. But what happens when players choose with ``trembling hands'' \cite{Trembling}, that is, when there is some variability of strategies, some limited rationality, curiosity, or another factor that makes players use strategies different from the ``rational'' choices? Will Nash equilibria and Pareto optima be stable under slight departures from rationality? This is a very simple problem when we deal with a few pure strategies and a set of mixed strategies which are convex combinations of the former. But, as we will see, it turns out to be not so simple when we deal with an infinite set of strategies. \section{CONVEXITY AND CONCAVITY} The most important class of games with infinite strategy sets is the class of \textit{continuous games}. The defining characteristic of these games is that the strategies can be labeled with vectors of real numbers such that the payoff is, for both players, a continuous function of the label vectors of both of them.
Within the class of continuous games there are two classes that have been studied extensively: \textit{convex} games and \textit{concave} games \cite{Continuous}. These are defined by the condition: \begin{equation}\label{ConcaveConvex}\begin{aligned} s\left(\alpha G(\vec{a},\vec{b}) + (1-\alpha) G(\vec{a'},\vec{b})\right) &\geqslant s\left(G(\alpha \vec{a} + (1-\alpha) \vec{a'},\vec{b})\right)\\ s\left(\alpha G(\vec{a},\vec{b}) + (1-\alpha) G(\vec{a},\vec{b'})\right) &\geqslant s\left(G(\vec{a}, {\alpha \vec{b} + (1-\alpha) \vec{b'}})\right) \end{aligned}\end{equation} where $G$ is the payoff, $\vec{a},\vec{b}$ are the vectors that label the strategies, and $\alpha$ is a real number between 0 and 1. \begin{definition} A \textbf{convex game} is a game whose payoff fulfills the conditions \ref{ConcaveConvex} with $s=-1$. \end{definition} \begin{definition} A \textbf{concave game} is a game whose payoff fulfills the conditions \ref{ConcaveConvex} with $s=1$. \end{definition} Later we will see that a quantum game is neither completely convex nor completely concave, but can be described either way under certain restrictions of the strategies. \section{THE TENSOR FORMULA FOR THE EXPECTED PAYOFF} In chapter 3 (THE EISERT SCHEME) a tensor formula for the expected payoff was developed, based on the $\chi$ matrix representation of the quantum operations: \begin{equation}\label{ChiPayoff} \langle \$ \rangle = \sum_{i,j,k,l}(\chi_A)_{i,j} (\chi_B)_{k,l} P^{i,j,k,l} \end{equation} where $\chi$ is a 4x4 Hermitian matrix containing the parameters of each player. For any unital map acting on qubit density operators, the $\chi$ matrix can be written as a convex combination of matrices corresponding to unitary transformations. The expected payoff can therefore be written as a sum of weighted ``unitary payoffs'', just as the classical expected payoff can be seen as a sum of weighted ``pure-strategy payoffs''. As has been stressed repeatedly, the core of any fundamentally quantum behaviour lies in the interference of amplitudes \cite{FeynmanRules}, and this is manifest only \textbf{within} the unitary terms. This is, for the moment, an assumption, but it will later be proved to be true. The assumption is this: \textbf{a critical response is always unitary}. \section{MUTUALLY CRITICAL STRATEGIES} The main aim of this work is to classify the quantum games according to Nash equilibria and Pareto optimal positions, and the critical strategies can be used as pathways to them. As was already stated, a Nash equilibrium is formed by two mutually optimal strategies, and a Pareto optimum is formed by two mutually optimal (in a Pareto sense, with the transposed payoff matrix) strategies as well. Suppose Alice and Bob are in a Nash equilibrium position. A slight unilateral departure from this position by Alice will cause Bob to react in a certain way. It is possible that Alice's best response to Bob's reaction is to return towards the equilibrium position (stable equilibrium) or to move away from it (unstable equilibrium). \begin{definition} A \textbf{stable equilibrium} is a position from which a slight departure in the space of strategies of one player causes, by optimal response, a strategy from the other player that compels the departing player to return to the initial equilibrium strategy. \end{definition} The optimal strategy map gives us the tools to examine this feature of Nash equilibria, by computing a \textit{product map}.
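Before turning to product maps, the bilinear payoff formula (\ref{ChiPayoff}) can be illustrated with a short numerical sketch (Python with numpy). The payoff tensor $P$ and the strategy vectors below are random placeholders rather than those of any specific game; the point is only the bilinearity of the expression and its consequence for unital strategies:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def chi_from_vector(a):
    """Rank-one (one-term) chi matrix built from a unit complex 4-vector."""
    a = a / np.linalg.norm(a)
    return np.outer(a.conj(), a)

def random_vector():
    return rng.normal(size=4) + 1j * rng.normal(size=4)

# Placeholder payoff tensor P^{i,j,k,l}; in the Eisert scheme it is fixed by
# the entangling operation J and the classical payoff matrix.  The
# symmetrization keeps the bilinear form real-valued for Hermitian chi.
P = rng.normal(size=(4, 4, 4, 4))
P = 0.5 * (P + P.transpose(1, 0, 3, 2))

def payoff(chiA, chiB):
    return np.einsum('ij,kl,ijkl->', chiA, chiB, P).real

chiB = chi_from_vector(random_vector())
chi1, chi2 = chi_from_vector(random_vector()), chi_from_vector(random_vector())

lam = 0.3                                   # mixing weight
chi_mix = lam * chi1 + (1 - lam) * chi2     # a unital (mixed) strategy

lhs = payoff(chi_mix, chiB)
rhs = lam * payoff(chi1, chiB) + (1 - lam) * payoff(chi2, chiB)
print(np.isclose(lhs, rhs))                 # True: payoff is linear in chi_A
\end{verbatim}
Because the payoff is linear in each $\chi$ separately, the expected payoff of any unital strategy, being a convex combination of one-term $\chi$ matrices, is the corresponding weighted sum of unitary payoffs, as stated above.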
\subsection{PRODUCT MAPS} A certain critical strategy is a map from the set of strategies of one player to the set of strategies of the other: \begin{equation}\begin{aligned} \tilde{C}_B(s1_B,s2_B):\chi_A &\mapsto \chi_B\\ \tilde{C}_A(s1_A,s2_A):\chi_B &\mapsto \chi_A \end{aligned}\end{equation}. We can compose the two maps in one that takes from the set of strategies of one player into itself: \begin{equation} \tilde{C}^2(s1_A,s2_A,s1_B,s2_B)=\tilde{C}_B(s1_B,s2_B)\tilde{C}_A(s1_A,s2_A):\chi_A \mapsto \chi_A. \end{equation} The Nash Equilibrium strategies will be fixed points of this map. The map can also tell when a Nash equilibrium is stable or unstable. Some tools to do it are given in this work, but the stability analysis falls beyond the scope of this study. \subsection{THE CRITICAL RESPONSE MAPS IN SO(3)} We have seen in (THE EISERT SCHEME) that any $\chi$ matrix can be expressed: \begin{equation}\label{ChiDecomposed} \chi_A = \sum_k \lambda_k \begin{pmatrix}((a_k)_0)^* \\ ((a_k)_z)^* \\ ((a_k)_y)^* \\ ((a_k)_x)^*\end{pmatrix} \begin{pmatrix}(a_k)_0 & (a_k)_z & (a_k)_y & (a_k)_x\end{pmatrix} \end{equation} where the summation is over some set $\{a_k\}$ constituting a complete basis for the $\mathbb{C}^4$ space (the space of complex vectors with dimension 4) Let us decompose the $\chi$ matrix for player A, and compute the payoff expression: if A plays a unitary strategy, the payoff expression (\ref{ChiPayoff}) is: \begin{equation}\label{ChiPayoff} \langle \$ \rangle = \sum_{k,l}(\chi_B)_{k,l}\sum_n \left((\lambda_A)_n\sum_{i,j}((a_n)_i)^*(a_n)_j P^{i,j,k,l}\right). \end{equation} where $a_k$ are unitary complex vectors, and $(\lambda_A)_n$ are real numbers between 0 and 1 that sum up to 1. This is simply a convex combination of quadratic forms. Let us fix an arbitrary strategy $\chi_B$, and find the critical vectors $a_n$ without varying the $(\lambda_A)_n$. In these conditions the critical responses of A are given by some $a_i$ that are \textbf{eigenvectors} of the matrix: \begin{equation} (\lambda_A)_n\sum_{k,l} (\chi_B)_{k,l} P^{i,j,k,l} \end{equation} Once we found some critical vectors $\tilde{a}_i$, the expression for the payoff is: \begin{equation} \sum_n(\lambda_A)_n\sum_{k,l} (\chi_B)_{k,l} P^{i,j,k,l}(\tilde{a}_i)^* \tilde{a}_j. \end{equation} The fact that all the critical vectors are identical allows us to write the expression in the following way: \begin{equation} \left(\sum_{k,l} (\chi_B)_{k,l} P^{i,j,k,l}(\tilde{a}_i)^* \tilde{a}_j\right)\sum_n(\lambda_A)_n. \end{equation} The parameters $\lambda_n$ turn out to be irrelevant, for the completeness condition forces $\sum_n(\lambda_A)_n$ to be 1. Then the critical responses are always \textbf{``one-term''} quantum operations, defined only by a complex 4-dimensional unitary vector. A unitary operation, on the other hand, is defined by a 4-dimensional unitary vector $a_i$ where $a_0$ is real and $a_z$,$a_y$ and $a_x$ are imaginary. \subsection{ONE-TERM QUANTUM OPERATIONS} It is very important at this point to check what kind of quantum operations can be described by only one complex 4-dimensional vector, that is, by a $\chi$ matrix corresponding to a projector in the $\mathbb{C}^4$ space. \begin{equation} \chi_{one-term} = \begin{pmatrix}(a_0)^*\\(a_z)^*\\(a_y)^*\\(a_x)^*\end{pmatrix} \centerdot \begin{pmatrix}a_0&a_z&a_y&a_x\end{pmatrix}. 
\end{equation} The action of this operation on a density matrix $\rho$ is given by \begin{equation} \hat{\chi} \rho = \left((a_0)^*\mathbb{I} + (a_z)^*\sigma_z + (a_y)^*\sigma_y + (a_x)^*\sigma_x)\right)\rho\left(a_0\mathbb{I} + a_z\sigma_z + a_y\sigma_y + a_x\sigma_x)\right). \end{equation} This operation must be \textbf{trace-preserving} to be physically meaningful. This means that the coefficient of the identity must remain as $\frac{1}{2}$. This implies: \begin{equation}\begin{aligned} \text{for }&\rho = \frac{1}{2}(\mathbb{I}+\vec{r}\centerdot\vec{\sigma})\\ &|a_0|^2 +||\vec{a}||^2 - 2i(Im\left[(\vec{a}^{\ *}\times \vec{r})\centerdot \vec{a}\right]) + 2Re\left[(a_0)^*(\vec{r}\centerdot \vec{a})\right] = 1\\ \text{By unitarity: }& |a_0|^2 +||\vec{a}||^2 = 1\\ \text{therefore }&i(Im\left[(\vec{a}^{\ *}\times \vec{r})\centerdot \vec{a}\right]) = Re\left[(a_0)^*(\vec{r}\centerdot \vec{a})\right]\\ \text{Which can be written: }& i(Im\left[\vec{a}\right]\times \vec{r})\centerdot Re\left[\vec{a}\right] = Re\left[(a_0)^*\vec{a}\right]\centerdot\vec{r} \end{aligned}\end{equation} where $\vec{r}=(r_z,r_y,r_x)\ \epsilon \ \mathbb{R}^3$, $\vec{\sigma}=(\sigma_z,\sigma_y,\sigma_x)$ and $\vec{a}=(a_z,a_y,a_x)\ \epsilon \ \mathbb{C}^3$ For unitary transformations $Im\left[a_0\right]=0$ and $Re\left[\vec{a}\right]=0$, and the condition is automatically fulfilled regardless of the vector $\vec{r}$. There is a vector relation \cite{Identity} that states: \begin{equation} (\vec{u}\times\vec{v})\centerdot\vec{w}=\vec{u}\centerdot(\vec{v}\times{w}) \end{equation} We can use it to rewrite the trace-preservation condition: \begin{equation}\begin{aligned} i(Re\left[\vec{a}\right]\times Im\left[\vec{a}\right])\centerdot \vec{r} = (Im\left[a_0\right]Im\left[\vec{a}\right]-Re\left[a_0\right]Re\left[\vec{a}\right])\centerdot\vec{r}\\ (iRe\left[\vec{a}\right]\times Im\left[\vec{a}\right]- Im\left[a_0\right]Im\left[\vec{a}\right]+Re\left[a_0\right]Re\left[\vec{a}\right])\centerdot\vec{r}=0. \end{aligned}\end{equation} The first term within the parentheses is necessarily perpendicular to both of the other two terms, therefore the only possibility to fulfill the condition regardless of the vector $\vec{r}$ is that one of the parts (real or imaginary) of the vector $\vec{a}$ is null, as well as the complementary part of $a_0$. We can conclude that there are only two kinds of one-term operations that preserve trace: \begin{enumerate} \item Those where $Re\left[\vec{a}\right]=0$ and $Im\left[a_0\right]=0$ (unitary operations) \item Those where $Im\left[\vec{a}\right]=0$ and $Re\left[a_0\right]=0$ (antiunitary operations) \end{enumerate} If the only physically meaningful one-term quantum operations are unitary or antiunitary, we can conclude: \begin{center} {\large The critical-response strategies are always unitary} \end{center} We have to remark, however, that when there is some degeneracy in the matrix $\sum_{k,l}(\chi_B)_{k,l}P^{i,j,k,l}$ there are mixed strategies that can be considered critical-response strategies. \subsection{REVISITING GEOMETRIC REPRESENTATION} In chapter 6 we studied unitary strategies, and found some useful expressions for critical responses (called also extremal strategies). These expressions were found using a geometric representation of the operators acting on the 2-dimensional Hilbert space: the so called ``Bloch vector representation'' \cite{BlochVector}. 
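A minimal numerical sketch of this representation follows (Python with numpy). The state, the rotation axis and the angle are arbitrary illustrative choices, and the Pauli matrices are ordered here as $(\sigma_x,\sigma_y,\sigma_z)$ rather than the $(\mathbb{I},\sigma_z,\sigma_y,\sigma_x)$ ordering used in the text; the correspondence between SU(2) conjugation and SO(3) rotations is unaffected:
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def bloch(rho):
    """Bloch vector r_i = Tr(sigma_i rho) of a single-qubit density operator."""
    return np.real(np.array([np.trace(s @ rho) for s in paulis]))

def su2(theta, axis):
    """U = cos(theta/2) I - i sin(theta/2) (n . sigma), with n a unit vector."""
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * (
        n[0] * sx + n[1] * sy + n[2] * sz)

# An arbitrary mixed state: 0.7 |0><0| + 0.3 |+><+|
rho = 0.7 * np.array([[1, 0], [0, 0]], dtype=complex) \
    + 0.3 * 0.5 * np.ones((2, 2), dtype=complex)

U = su2(1.2, [0.3, -0.5, 0.8])            # arbitrary angle and axis
R = np.array([[0.5 * np.real(np.trace(si @ U @ sj @ U.conj().T))
               for sj in paulis] for si in paulis])

print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
print(np.allclose(bloch(U @ rho @ U.conj().T), R @ bloch(rho)))   # all True
\end{verbatim}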
In this representation a density operator is represented by a 3-dimensional vector with length from 0 to 1, and unitary operations belonging to SU(2) correspond to SO(3) rotations that act on that vector. Given a certain unitary strategy chosen by A ($\mathbb{R}_A$) and another chosen by B ($\mathbb{R}_B$), the payoff can be computed with the formula: \begin{multline}\label{ganancia3d} \langle G \rangle = Tr(\bigl(\mathbb{R}_{A}\mathbb{R}_{1}-(1-E)\frac{G_{B}}{G_{AB}} \vert z \rangle\langle z \vert \bigr)\mathbb{T}\bigl(\mathbb{R}_{2}(\mathbb{R}_{B})^{T}+(1-E)\frac{G_{A}}{G_{AB}} \vert z \rangle\langle z \vert \bigr) \mathbb{G})\\ + (1-E)^{2}\frac{G_{A}G_{B}}{G_{AB}} \end{multline} Where: \begin{equation} \mathbb{T}= \begin{pmatrix} 1 & 0 & 0 \\ 0 & \sqrt{E} & 0\\ 0 & 0 & \sqrt{E} \end{pmatrix} \text{ and } \mathbb{G}=\begin{pmatrix} G_{AB} & 0 & 0 \\ 0 & \sqrt{E}G_B & 0\\ 0 & 0 & \sqrt{E}G_A \end{pmatrix} \end{equation} The rotations $\mathbb{R}_1=${\tiny$\begin{pmatrix}-1&0&0\\0&-1&0\\0&0&1\end{pmatrix}$} and $\mathbb{R}_2=${\tiny$\begin{pmatrix}1&0&0\\0&0&1\\0&1&0\end{pmatrix}$} are put there to cast $\mathbb{T}$ and $\mathbb{G}$ in positive diagonal form. The payoff parameters, according to a payoff matrix {\tiny $\begin{pmatrix}a&b\\c&d\end{pmatrix}$} are: \begin{equation} G_A = a + b - c - d \hspace{1cm} G_B = a - b + c - d \hspace{1cm} G_{AB} = a - b - c + d. \end{equation} To examine the unilateral changes that player A can perform, it is useful to define a matrix $A$: \begin{equation} \mathbb{A}=\mathbb{T}\bigl(\mathbb{R}_{2}(\mathbb{R}_{B})^{T}-(1-E)\frac{G_{A}}{G_{AB}} \vert z \rangle\langle z \vert \bigr) \mathbb{G} \end{equation} With this matrix we can compute critical strategies and their correspondent Hessian matrix. Here we reproduce equation \ref{FormaHessiana}, where the expression for the Hessian matrix is given: \begin{equation*} H_{ij} = \bigl(\mathbb{R}_{A}\mathbb{A}-\mathbb{I}Tr(\mathbb{R}_{A}\mathbb{A})\bigr)_{ij} \end{equation*} \subsection{THE CRITICAL RESPONSE MAP IN SO(3)} There are four possible critical responses (a minimum, two saddle points and a maximum, according to the number of negative eigenvalues of the Hessian), and each corresponds to a certain assignations for the signs of the eigenvectors of a square root 3x3 matrix. This can be viewed as a map which takes from the space of unitary strategies of one player to the space of unitary strategies of the other. 
\begin{equation} \mathbb{R}_{A}^{critical} =\left[(\mathbb{A}^{T}\mathbb{A})^{1/2}\right]_{s1,s2}\mathbb{A}^{-1}(\mathbb{R}_1)^{t} \end{equation} where: \begin{equation} \left[(\mathbb{A}^{T}\mathbb{A})^{1/2}\right]_{s1,s2} = \mathbb{O}_1 \begin{pmatrix} (s1\ s2)\lambda_{12} & 0 & 0 \\ 0 & s2\ \lambda_2 & 0 \\ 0 & 0 & s1\ \lambda_1 \end{pmatrix} \mathbb{O}_2 \end{equation} for $\lambda_{12},\lambda_2,\lambda_1$ positive and $\mathbb{O}_1,\mathbb{O}_2 \in SO(3)$.\\ If we can diagonalize the $\mathbb{A}$ matrix, it is easy to compute the critical response: \begin{multline}\label{CriticalComputed} \mathbb{A} = (\mathbb{O}_2)^t(\mathbb{O}_1)\mathbb{D}_A(\mathbb{O}_2)\\ \mathbb{R}_{A}^{critical} = (\mathbb{O}_2)^t\begin{pmatrix} s1\ s2 & 0 & 0 \\ 0 & s2 & 0 \\ 0 & 0 & s1 \end{pmatrix}(\mathbb{O}_1)^t(\mathbb{O}_2)(\mathbb{R}_1)^t \end{multline} The Hessian matrix for a certain response of A can be computed according to: \begin{equation} \mathbb{H}_A = \bigl(\mathbb{R}_{A}\mathbb{R}_1\mathbb{A}-\mathbb{I}Tr(\mathbb{R}_{A}\mathbb{R}_1\mathbb{A})\bigr) \end{equation} \subsubsection{A ONE-PARAMETER SUBSET} As an example, we can parametrize some one-parameter families of rotations for which it is possible to find an explicit expression for the critical response map. To illustrate how this can be computed, let us begin with rotations about the Z axis: \begin{equation}\begin{aligned} \mathbb{R}_2(\mathbb{R}_B)^t = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\theta) & \sin(\theta) \\ 0 & -\sin(\theta) & \cos(\theta) \end{pmatrix}\\ \mathbb{A}= \mathbb{R}_2(\mathbb{R}_B)^{t}\begin{pmatrix} (1-E)G_A & 0 & 0 \\ 0 & E\ G_B & 0 \\ 0 & 0 & E\ G_A \end{pmatrix} \end{aligned}\end{equation} where we used the fact that $\mathbb{R}_2(\mathbb{R}_B)^t$ commutes with $\mathbb{T}$.\\ The expression for the critical response is then rather simple: \begin{equation} \mathbb{R}_A = \begin{pmatrix} s1\ s2 & 0 & 0 \\ 0 & s2 & 0 \\ 0 & 0 & s1 \end{pmatrix}\mathbb{R}_B(\mathbb{R}_2)^t(\mathbb{R}_1)^t \end{equation} \subsubsection{THE OTHER ONE-PARAMETER SUBSETS} It is also possible to get an explicit formula for the critical response to a rotation around the X axis: \begin{equation}\begin{aligned} \mathbb{R}_2(\mathbb{R}_B)^t = \begin{pmatrix} \cos(\theta) & \sin(\theta) & 0 \\ -\sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{pmatrix}\\ \mathbb{A}= \begin{pmatrix} \cos(\theta)G_{AB}+(1-E)G_A & \sin(\theta)\sqrt{E}G_B & 0 \\ -\sin(\theta)\sqrt{E}G_{AB} & \cos(\theta)E\ G_B & 0 \\ 0 & 0 & E\ G_A \end{pmatrix} \end{aligned}\end{equation} The problem of finding critical responses here amounts to diagonalizing a 2x2 block with two SO(2) transformations: one, applied on the left, to make the matrix symmetric, and then another one to make it diagonal.
A general matrix can be made symmetric by multiplying it on the left by a certain orthogonal matrix: \begin{multline} \left[ \frac{1}{\sqrt{(t+w)^2+(v-u)^2}} \begin{pmatrix} t+w & v-u \\ -v+u & t+w \end{pmatrix}\right] \begin{pmatrix} t & u \\ v & w \end{pmatrix}\\ = \frac{1}{\sqrt{(t+w)^2+(v-u)^2}} \begin{pmatrix} t^2 + v^2 + tw - uv & tu + vw \\ tu + vw & u^2 + w^2 + tw - uv \end{pmatrix} \end{multline} The symmetric matrix can be diagonalized by another SO(2) transformation: \begin{multline} \frac{1}{N^2}\begin{pmatrix} v' + \sqrt{(v')^2+(w')^2} & w' \\ -w' & v' + \sqrt{(v')^2+(w')^2} \end{pmatrix} \begin{pmatrix} u' + v' & w' \\ w' & u'- v' \end{pmatrix}\\ \begin{pmatrix} v' + \sqrt{(v')^2+(w')^2} & -w' \\ w' & v' + \sqrt{(v')^2+(w')^2} \end{pmatrix}\\ = \begin{pmatrix} u' + \sqrt{(v')^2+(w')^2} & 0 \\ 0 & u' - \sqrt{(v')^2+(w')^2} \end{pmatrix} \end{multline} where $N$ is a normalization factor for the orthogonal matrix: \begin{equation} N^2 = 2\left((v')^2 + (w')^2 + v'\sqrt{(v')^2+(w')^2}\right) \end{equation} What we need for the computation of the critical response are only the orthogonal transformations used. With the expression for $\mathbb{A}$ in terms of a diagonal matrix and two orthogonal transformations, we can easily compute the critical response according to (\ref{CriticalComputed}). Applying the transformations described above, we get an explicit (though somewhat involved) expression for the critical responses. For rotations around X: \begin{equation} \mathbb{R}_{A}^{critical}=\frac{1}{N_1(N_2)^2}\begin{pmatrix} -s1\ s2 \alpha\ \gamma + s2\ \beta \ \delta & s1\ s2\ \alpha \ \delta - s1\ \beta \ \gamma & 0\\ s1\ s2 \ \beta \gamma - s2\ \alpha \ \delta & s1\ s2\ \beta \delta + s2\ \alpha \ \gamma & 0\\ 0 & 0 & s1 \end{pmatrix} \end{equation} where: \begin{equation}\begin{aligned} \gamma &= \cos(\theta)(G_{AB}+E\ G_B)+(1-E)\ G_A\\ \delta &= -\sin(\theta)\sqrt{E}\ (G_{AB}+ G_B)\\ \alpha &= \gamma\ (\gamma - 2\cos(\theta)E\ G_B) + \delta\ (\delta + 2\sin(\theta)\sqrt{E}\ G_B)\\ \beta &= \sin(\theta)\sqrt{E}(1-E)G_B(\cos(\theta)G_{AB}+G_A)\\ N_1 &= \sqrt{\gamma^2+\delta^2}\\ (N_2)^2 &= \sqrt{\alpha^2+\beta^2}. \end{aligned}\end{equation} For rotations around axis Y the expression is similar: \begin{equation} \mathbb{R}_{A}^{critical}=\frac{1}{N_1(N_2)^2}\begin{pmatrix} -s1\ s2 \alpha\ \gamma + s1\ \beta \ \delta & 0 & s1\ s2\ \alpha \ \delta - s1\ \beta \ \gamma\\ 0 & s2 & 0 \\ s1\ s2 \ \beta \gamma - s1\ \alpha \ \delta & 0 & s1\ s2\ \beta \delta + s1\ \alpha \ \gamma \end{pmatrix} \end{equation} where: \begin{equation}\begin{aligned} \gamma &= \cos(\theta)(G_{AB}+E\ G_A)+(1-E)\ G_A\\ \delta &= -\sin(\theta)\sqrt{E}\ (G_{AB}+ G_A)\\ \alpha &= \gamma\ (\gamma - 2\cos(\theta)E\ G_A) + \delta\ (\delta + 2\sin(\theta)\sqrt{E}\ G_A)\\ \beta &= \sin(\theta)\sqrt{E}(1-E)G_A(\cos(\theta)G_{AB}+G_A)\\ N_1 &= \sqrt{\gamma^2+\delta^2}\\ (N_2)^2 &= \alpha^2+\beta^2. \end{aligned}\end{equation} \subsection{SOME TWO-PARAMETER SUBSPACES} It is possible to combine these last two expressions with that for rotations about the Z axis, and also compute critical responses to two-parameter strategies that can be written in the following way: \begin{equation} \mathbb{R}\mathbb{R}_1 = \mathbb{R}_z\mathbb{R}_x \hspace{24pt}\text{or}\hspace{24pt}\mathbb{R}\mathbb{R}_1 = \mathbb{R}_z\mathbb{R}_y. \end{equation} \subsubsection{THE COMPLETE GROUP} Computing the critical response to an arbitrary SO(3) rotation means diagonalizing 3x3 nonsymmetric matrices. This is, of course, possible, but the resulting expressions are too complicated and are therefore not shown here.
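Although the closed-form expressions quickly become cumbersome, the symmetrize-then-diagonalize construction used above is straightforward to carry out numerically. The sketch below (Python with numpy; the 2x2 block is randomly generated and purely illustrative) applies the orthogonal symmetrizing factor given above and then diagonalizes the symmetric result with its eigenvector rotation:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(2, 2))      # generic 2x2 block [[t, u], [v, w]]
t, u = M[0]
v, w = M[1]

# Step 1: orthogonal factor that symmetrizes M when applied on the left.
n1 = np.hypot(t + w, v - u)
Q1 = np.array([[t + w, v - u], [-(v - u), t + w]]) / n1
S = Q1 @ M
assert np.allclose(S, S.T)       # S is symmetric

# Step 2: SO(2) rotation built from the eigenvectors of S.
eigval, Q2 = np.linalg.eigh(S)   # S = Q2 diag(eigval) Q2^T
if np.linalg.det(Q2) < 0:        # stay inside SO(2)
    Q2[:, 0] *= -1

D = Q2.T @ Q1 @ M @ Q2           # signed diagonal matrix
print(np.round(D, 6))
assert np.allclose(D, np.diag(np.diag(D)))

# Equivalently, M = O_1 D O_2 with two orthogonal factors.
O1, O2 = Q1.T @ Q2, Q2.T
assert np.allclose(O1 @ D @ O2, M)
\end{verbatim}
For the full 3x3 case the analogous factors can be obtained numerically, for example from a polar or singular value decomposition.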
\chapter{EXPLORATION OF THE STRATEGY SPACE} \section{ACCESSIBLE AREAS ON THE PAYOFF SPACE} Even though the strategy spaces in quantum 2x2 games are relatively big (3 parameters per player), the space of the players' payoffs is part of the definition of the game and remains two-dimensional. Thus it is very useful to track the results of the quantum strategies in that arena, as was done in the case of semideterministic strategies in chapter \ref{Positions}. \subsection{THE INITIAL CLASSICAL GAME} In the initial classical game, we get the Robinson Graphs mentioned in chapter 4 (like figure \ref{ExampleRobinson}). There is some delimited area of the payoff space that represents the attainable payoffs for both players. \begin{center}\begin{figure} \caption{Accessible Payoff Area (the circle marks the Nash Equilibrium)} \end{figure}\end{center} The critical responses (there are only two of them) lead to the upper and lower borders of this area for one player, and to the left and right borders for the other player. \subsection{THE EXTENDED (SEMIDETERMINISTIC) GAME} In the extended (semideterministic) game, four extra points appear in the Robinson Graph; we can call them \textit{quantum points}. In this case, the lines that in the classical game demarcated the border of the accessible area have turned into lattices, and critical responses to a mixed strategy of the other player take each player to points on the lines that draw the lattice. If we draw a point for each critical response to every possible mixed semideterministic strategy, we draw the lines of the Robinson Graph. \begin{center}\begin{figure} \caption{Lines of Critical Responses to Mixed Semideterministic Strategies} \end{figure}\end{center} \subsection{THE GAME WITH UNITARY STRATEGIES} The space of unitary strategies has the same dimension as the space of mixed semideterministic strategies. But the similarity between them goes no further. When departing from a semideterministic strategy in the unitary space, the trajectory in the payoff space is far more interesting than that generated with mixed strategies. The quadratic nature of the payoff function is reflected in concavity and convexity relations, and in curved shapes in the payoff space. When one player wanders in her unitary strategy space and the other answers with a critical response, a trajectory in the payoff space is traced out. Next we are going to see how curved these trajectories can be. The famous game of the \textbf{Prisoner's Dilemma} is a good example to show this, because it has an entanglement regime transition. The payoff matrix found in the literature for this game is {\tiny$\begin{pmatrix}3&0\\5&1\end{pmatrix}$} \cite{Prisoner}. Let us first check the figure generated by points in the payoff space corresponding to one player choosing random unitary strategies and the other using the critical response to them: \begin{center}\begin{figure} \caption{Points of Critical Responses to Random Unitary Strategies (low entanglement)} \end{figure}\end{center} Including mixed unitary strategies will cause the appearance of new points \textbf{between the existing ones} in the figure, thus filling two convex areas (left and right in the example). The entanglement transitions are determined by the positions in the payoff space of the extremal points of the lattices (in the figure, the black circles), and are thus not affected by the introduction of general unitary strategies.
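A sketch of how the payoff region can be explored numerically with randomly drawn unitary strategies is the following (Python with numpy). It uses the payoff expression and the matrices $\mathbb{T}$, $\mathbb{G}$, $\mathbb{R}_1$, $\mathbb{R}_2$ as written in the previous chapter and the Prisoner's Dilemma matrix quoted above; producing the critical-response clouds of the figures would additionally require the response map of the previous chapter. The identification of $\vert z\rangle$ with the first basis vector, the swap $G_A\leftrightarrow G_B$ for the second player's payoff, and the way random rotations are drawn are assumptions of this sketch, not prescriptions taken from the text:
\begin{verbatim}
import numpy as np

a, b, c, d = 3.0, 0.0, 5.0, 1.0            # Prisoner's Dilemma payoff matrix
G_A, G_B, G_AB = a + b - c - d, a - b + c - d, a - b - c + d
E = 0.8                                     # entanglement parameter (example)

z = np.array([1.0, 0.0, 0.0])               # assumption: |z> = first basis vector
Pz = np.outer(z, z)
T = np.diag([1.0, np.sqrt(E), np.sqrt(E)])
R1 = np.diag([-1.0, -1.0, 1.0])
R2 = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float)

def payoff(RA, RB, gA, gB):
    """Payoff expression of the previous chapter, taken as written there."""
    G = np.diag([G_AB, np.sqrt(E) * gB, np.sqrt(E) * gA])
    M = (RA @ R1 - (1 - E) * gB / G_AB * Pz) @ T \
        @ (R2 @ RB.T + (1 - E) * gA / G_AB * Pz) @ G
    return np.trace(M) + (1 - E) ** 2 * gA * gB / G_AB

def random_rotation(rng):
    """Roughly random SO(3) element (not exactly Haar, enough for a sketch)."""
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    return Q

rng = np.random.default_rng(3)
points = []
for _ in range(2000):
    RA, RB = random_rotation(rng), random_rotation(rng)
    points.append((payoff(RA, RB, G_A, G_B),      # player A
                   payoff(RA, RB, G_B, G_A)))     # player B: G_A <-> G_B swap
points = np.array(points)
print(points.min(axis=0), points.max(axis=0))     # extent of the payoff cloud
\end{verbatim}
Plotting the collected pairs is one way to generate payoff clouds like the ones shown in the figures above.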
It is important to mention that the convex areas from these critical responses to unitary strategies are only slightly different from those occupied by the semideterministic lattices. We have chosen the Prisoner's Dilemma because it has an entanglement regime transition. What is the consequence of this in the payoff graph? The curves drawn by critical responses to unitary strategies can be seen in figure \ref{DPHighEnt}. For high entanglement the curvatures are even more pronounced, but the convex area is still almost the same as that delimited by the semideterministic lattice. \begin{figure} \caption{Points of Critical Responses to Random Unitary Strategies (high entanglement)} \label{DPHighEnt} \end{figure} Here, too, the unitary strategies do not introduce new elements beyond those mentioned in chapter \ref{DetermClass}, just as in the low entanglement regime. \section{SYMMETRY} In section \ref{symmetryQG} a symmetry of the quantum game was found for the unitary strategy group, and therefore for the unital strategy set\footnote{A quantum operation is unital when it maps the identity to the identity. A strategy is unital when it corresponds to a unital quantum operation.}. But what about the complete strategy set? According to formula (\ref{ChiPayoff}), every term in the expected payoff depends on the elements of the basis in the following way: \begin{equation} Tr \left(J\left((F_i\otimes F_k)^\dagger J^\dagger (\vert 00 \rangle \langle 00 \vert)J(F_j\otimes F_l)\right)J^\dagger \ \hat{\$} \right). \end{equation} As we found in section \ref{DeterministicSym}, when we examined expected payoffs consisting only of a few of these terms, there are some changes we can make to the terms without changing the expected payoff. There is one such equivalence for each repeated entry in the 4x4 payoff matrix of equation (\ref{ExtendedPayoff}): \parbox{6cm}{\begin{itemize} \item 00 $\leftrightarrow$ ZZ \item Z0 $\leftrightarrow$ 0Z \item Y0 $\leftrightarrow$ XZ \item X0 $\leftrightarrow$ YZ \end{itemize}}\hspace{1cm} \parbox{6cm}{\begin{itemize} \item 0X $\leftrightarrow$ ZY \item ZX $\leftrightarrow$ 0Y \item YX $\leftrightarrow$ XY \item XX $\leftrightarrow$ YY \end{itemize}} There are many continuous transformations of the individual $\chi$ matrices that act in that way on the deterministic strategies. The symmetry of the unitary game, on the other hand, suggests choosing the following: {\footnotesize \begin{equation} \chi_A \to \begin{pmatrix} \cos(\theta) & -i\sin(\theta) & 0 & 0\\ -i\sin(\theta) & \cos(\theta) & 0 & 0\\ 0 & 0 & \cos(\theta) & -\sin(\theta)\\ 0 & 0 & \sin(\theta) & \cos(\theta) \end{pmatrix}\chi_A \begin{pmatrix} \cos(\theta) & i\sin(\theta) & 0 & 0\\ i\sin(\theta) & \cos(\theta) & 0 & 0\\ 0 & 0 & \cos(\theta) & \sin(\theta)\\ 0 & 0 & -\sin(\theta) & \cos(\theta) \end{pmatrix} \end{equation}} for A and {\footnotesize \begin{equation} \chi_B \to \begin{pmatrix} \cos(\phi) & i\sin(\phi) & 0 & 0\\ i\sin(\phi) & \cos(\phi) & 0 & 0\\ 0 & 0 & -\sin(\theta) & \cos(\theta)\\ 0 & 0 & -\cos(\theta) & -\sin(\theta) \end{pmatrix}\chi_B \begin{pmatrix} \cos(\phi) & -i\sin(\phi) & 0 & 0\\ -i\sin(\phi) & \cos(\phi) & 0 & 0\\ 0 & 0 & -\sin(\theta) & -\cos(\theta)\\ 0 & 0 & \cos(\theta) & -\sin(\theta) \end{pmatrix} \end{equation}} for B, where $\phi=\theta+\pi/4$. \chapter{CONCLUSIONS AND PERSPECTIVES} \section{CONCLUSIONS} The main results of this work are those obtained in chapter \ref{ConcDeterm}.
Here a summary of them is presented: \subsection{EQUIVALENT CLASSICAL GAMES} As was stated by van Enk in \cite{ClassicalRules}, there is an extended classical game that has all the Nash equilibria and Pareto optima present in the quantum game. However, there is one reason not to consider this extended game as \textit{equivalent} to the quantum game: \begin{center} {\large The unitary strategies are not equivalent to the mixed semideterministic ones. The displacements in the payoff space generated by changes in the unitary strategy parameters are curved and qualitatively different from those generated by changes in the probabilities that define mixed strategies.} \end{center} Even though the structure of Nash equilibria can be the same in the extended classical game\footnote{We call a classical game with a referee that introduces entanglement to simulate a semideterministic quantum game an extended classical game.}, the dynamics of a player moving in the complete strategy space can be very different from the dynamics of a player in the extended game. \subsection{SYMMETRY OF THE QUANTUM 2X2 SYMMETRIC GAMES}\label{ConcluSymm} The quantum game with semideterministic strategies is invariant under interchange of strategies 0 and Z or interchange of strategies X and Y made by both players, and this was found to be a consequence of a symmetry of all possible strategies with respect to certain \textbf{coordinated} operations generated by $\sigma_z\otimes\mathbb{I}$ and $\mathbb{I}\otimes\sigma_z$ (section \ref{symmetryQG}). This symmetry causes the following features in quantum games: \begin{enumerate} \item A quantum game always has an even number of Nash equilibria. \item Several equilibria will be equivalent in terms of payoff.\\ This will make them less stable, because of the dilemma of choosing between equilibrium points \cite{Commitments}. \end{enumerate} \subsection{CRITICAL RESPONSES} From computations with general quantum strategies it was found that critical responses must be unitary strategies, at least in the 2x2 case. The two-player game with more strategies (for example 2x3, 3x3, etc.) must be checked as a separate case. A geometric methodology was developed to compute the critical response map for unitary strategies, and explicit expressions were computed for one- and two-parameter spaces. These expressions become very simple for semideterministic strategies, and it is easy to show that there will always be at least two Nash equilibria within this set of strategies. The existence of at least two semideterministic Nash equilibria can be proved by thinking of the semideterministic quantum game as a 4x4 classical game with the symmetry mentioned in section \ref{ConcluSymm}. \subsection{POSSIBILITY OF ENTANGLEMENT REGIMES} Some quantum games (but not all of them) show several different entanglement regimes, depending on the entanglement parameter (controlled by the referee in the Eisert scheme). In section \ref{DetermClass} the entanglement regime transitions are enumerated. The main result was that there are comparatively few games showing interesting entanglement regime transitions ($\frac{1}{4}$ according to figure \ref{MapTrans}). The games with a high value of the entanglement parameter usually show problematic features, like multiple maxima with similar characteristics. It was found that the analysis of the semideterministic strategies is enough to check for the existence of entanglement regimes, and that the inclusion of unitary or other strategies does not affect this feature of the game.
\section{PERSPECTIVES} A number of suggestions for future work arise; they can be classified as follows: \begin{itemize} \item On 2x2 symmetric games: \begin{itemize} \item To study the stability properties of Nash equilibria (both intrinsic and entanglement-generated) by means of the product of critical response maps. \item To check situations of asymmetric information, commitments and other refinements in quantum games \item To devise ways of using unitary strategies to mimic complex behaviours of interacting agents in a 2x2 game \item To introduce effects of decoherence and classical loss of information in this kind of game \end{itemize} \item Extensions\\ There are some natural suggestions that are already being pursued by some workers on quantum games: \begin{itemize} \item Study games with larger sets of strategies (2x3, 3x3, etc.) \item Study different kinds of entanglement in games with more players (2x2x2, 2x2x2x2, 3x3x3, etc.) \item Study quantum games on networks \item Study quantum games with a continuum of players \end{itemize} \end{itemize} \end{document}
\begin{document} \thispagestyle{empty} \baselineskip=28pt \thispagestyle{empty} \baselineskip=28pt {\LARGE{\bf \begin{center} Improved Semiparametric Analysis of Polygenic Gene-Environment Interactions in Case-Control Studies \end{center} }} \baselineskip=12pt \vskip 2mm \begin{center} Tianying Wang \footnote{Corresponding author at: Weiqinglou Rm 212-A, Center for Statistical Science, Tsinghua University, Beijing 100084, China. E-mail address: [email protected] (T. Wang). }\\ Center for Statistical Science, Tsinghua University,\\ Department of Industrial Engineering, Tsinghua University\\ Beijing, China \\ [email protected] \\ \hskip 5mm\\ Alex Asher\\ StataCorp LLC, \\ 4905 Lakeway Dr, College Station, TX 77845, U.S.A\\ [email protected]\\ \end{center} \begin{center} {\Large{\bf Abstract}} \end{center} \baselineskip=12pt Standard logistic regression analysis of case-control data has low power to detect gene-environment interactions, but until recently it was the only method that could be used on complex polygenic data for which parametric distributional models are not feasible. Under the assumption of gene-environment independence in the underlying population, \citeauthor{Stalder2017} (2017, \emph{Biometrika}, {\bf 104}, 801-812) developed a retrospective method that treats both genetic and environmental variables nonparametrically. However, that method overlooks the mathematical symmetry of the genetic and environmental variables. We propose an improvement to the method of \citeauthor{Stalder2017} that increases the efficiency of the estimates with no additional assumptions and modest computational cost. This improvement is achieved by treating the genetic and environmental variables symmetrically to generate two sets of parameter estimates that are combined to generate a more efficient estimate. We employ a semiparametric framework to develop the asymptotic theory of the estimator, show its asymptotic efficiency gain, and evaluate its performance via simulation studies. The method is illustrated using data from a case-control study of breast cancer. \baselineskip=12pt \par \noindent \underline{\bf Some Key Words}: Gene-environment interaction; Case-control study; Genetic epidemiology; Semiparametric estimation; Biased samples. \par \noindent \underline{\bf Short title}: Semiparametric Analysis of Gene-Environment Interactions \pagebreak \pagenumbering{arabic} \newlength{\gnat} \setlength{\gnat}{22pt} \baselineskip=\gnat \section{Introduction}\label{sec:Intro} Genetic epidemiologists have identified both genetic and environmental factors that influence the incidence of complex diseases such as cancers, heart diseases, depression, and diabetes \citep{Nickels2013interaction, Rudolph2015interaction, mullins2016depression, Gustavsson2016FTO, Krischer2017genetic}. As new studies identify additional genetic variants associated with a disease, attention turns to exploring the interaction between genetic susceptibility and environmental risk factors. Researchers studying gene-environment interactions often adopt a case-control study design, wherein diseased cases and healthy control subjects are identified and their covariate information is collected retrospectively. When the disease is rare, sampling cases and controls separately provides substantial cost and time savings over a prospective cohort study, but it makes statistical inference more complicated.
\citet{PrenticePyke1979} demonstrated that standard prospective logistic regression of case-control data, which ignores the retrospective sampling scheme, nevertheless yields consistent estimates of all parameters except the logistic intercept. Logistic regression is equivalent to maximum likelihood estimation under a model that places no assumptions on the joint distribution of the genetic and environmental variables, and it achieves the variance lower bound under this model \citep{Breslow2000}. However, logistic regression of case-control data lacks power to detect gene-environment interaction effects. To improve estimation efficiency, studies of gene-environment interactions often take advantage of the relatively mild assumption that the genetic and environmental variables are independently distributed in the source population. This assumption is easy to test, is frequently valid, and enables the use of specialized methods for the analysis of case-control data. \citet{Piegorsch1994} proposed a case-only approach that efficiently estimates multiplicative interactions (but not main effects) under the assumptions of gene-environment independence and rare disease. \citet{ChatterjeeCarroll2005} exploited the gene-environment independence assumption to develop a semiparametric retrospective profile likelihood framework that treats environmental variables nonparametrically but assumes that the genetic variables have a known, discrete distribution. Further developments have yielded additional retrospective methods based on parametric modeling of the distribution of genetic variables given the environmental variables; see, for example, \cite{Lobach2008, Ma2010, han2012likelihood, liang2019semiparametric}. Genome-wide association studies have shown that genetic predisposition to a single disease tends to be highly polygenic, with many genetic variants influencing disease risk \citep{chatterjee2016developing, Fuchsberger2016}. To provide a more complete picture of genetic risk and gene-environment interactions, it is often advantageous to include multiple genetic loci in the disease model \citep{chatterjee2006powerful, jiao2013sberia, lin2015test}. In the interest of parsimony, many studies have focused on developing polygenic risk scores through a weighted combination of all known genetic variants associated with a disease \citep{ChatterjeeWheeler2013polygenic, Dudbridge2013}. Handling multiple genetic variants, polygenic risk scores, or a combination of both is straightforward with prospective logistic regression, but can be unwieldy or even impossible when using retrospective methods that exploit gene-environment independence to gain efficiency but require a parametric model for the distribution of the genetic component. The method of \citet{Stalder2017} extends the Chatterjee-Carroll retrospective profile likelihood framework by treating both the genetic and environmental variables nonparametrically, requiring only the assumption of gene-environment independence in the source population. This assumption of independence can be weakened if a discrete stratification variable is found such that genes and environment are independent within strata of the source population. However, \citet{Stalder2017} overlooked the mathematical symmetry of the genetic and environmental variables in the retrospective likelihood, resulting in a sub-optimal efficiency gain.
Motivated by this observation, we propose an improvement to the method of \citet{Stalder2017} that substantially increases the efficiency of the estimates with no additional assumptions and modest computational cost. This development relies on the observation that the method of \citeauthor{Stalder2017} removes dependence on the distribution of the genetic and environmental variables in two different ways; by treating the genetic and environmental variables symmetrically, we generate two sets of parameter estimates that are then combined into a more efficient estimator. We employ a semiparametric framework to develop the asymptotic theory of the estimator. The properties of the new method are illustrated through simulations in \cref{sec:Simulations} and through an example in \cref{sec:DataAnalysis}. \section{Methodology and Theory}\label{sec:MethodTheory} \subsection{Background}\label{sec:Background} We adopt notation similar to that of \citeauthor{Stalder2017}, with disease status, genetic information, and environmental risk factors denoted by $D$, $G$, and $X$, respectively. Both $G$ and $X$ are potentially multivariate and can contain both discrete and continuous components. For a given case-control study, $n_1$ is the number of cases ($D=1$) and $n_0$ is the number of controls ($D=0$), while $\pi_1 = \hbox{pr}(D=1)$ is the disease rate in the source population and $\pi_0 = 1 - \pi_1$. We maintain the assumption of \citeauthor{Stalder2017} that $\pi_1$ is either known or can be estimated well. The assumption of gene-environment independence in the source population can be written as $f_{GX}(g, x) = f_{G}(g) \times f_{X}(x)$, where $f_{GX}(\cdot, \cdot)$ is the joint density or mass function of $G$ and $X$ in the underlying population, and $f_{G}(\cdot)$ and $f_{X}(\cdot)$ are the marginal density or mass functions of $G$ and $X$, respectively, in the underlying population. We assume $f_X(x)$ and $f_G(g)$ are completely unspecified. Given the genetic and environmental covariates, we assume the risk of disease in the underlying population follows the model $\hbox{pr}(D=1 \mid G,X) = H\{\alpha_0 + m(G,X,\mbox{\boldmath $\beta$})\},$ where $H(x)=\{ 1 + \exp(-x)\}^{-1}$ is the logistic distribution function and $m(G,X,\mbox{\boldmath $\beta$})$ is a function that describes the joint effect of $G$ and $X$ and is known up to the unspecified parameters of interest $\mbox{\boldmath $\beta$}$. Given the subject's disease status, the retrospective likelihood is the probability of observing the genetic and environmental variables. Under gene-environment independence in the source population, the retrospective likelihood is \begin{equation*} \frac{f_G(g) f_X(x) \exp[d\{\alpha_0 + m(g,x,\beta)\}]/ [1 + \exp\{\alpha_0 + m(g,x,\beta)\}]} {\int f_G(u) f_X(v) \exp[d\{\alpha_0 + m(u,v,\beta)\}]/ [1 + \exp\{\alpha_0 + m(u,v,\beta)\}] du dv}. \label{eq:RetroLik} \end{equation*} The logistic intercept $\alpha_0$, typically of little scientific interest, is not consistently estimated using prospective logistic regression, which instead converges to $\kappa = \alpha_0 + \hbox{log}(n_1/n_0) - \hbox{log}(\pi_1/\pi_0)$ \citep{PrenticePyke1979}. For convenience, we parameterize everything in terms of $\kappa$, and we define $\Omega = (\kappa,\mbox{\boldmath $\beta$}^{\rm T})^{\rm T}$.
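For concreteness, the following minimal Python sketch evaluates the assumed risk model for one possible specification of $m(G,X,\mbox{\boldmath $\beta$})$ with genetic and environmental main effects plus all pairwise interactions, the form used later in \cref{sec:Simulations}. The sketch is purely illustrative; the function names are not part of any package.
\begin{verbatim}
import numpy as np

def H(x):
    """Logistic distribution function H(x) = 1 / {1 + exp(-x)}."""
    return 1.0 / (1.0 + np.exp(-x))

def m_linear(G, X, beta_G, beta_X, beta_GX):
    """One possible specification of m(G, X, beta): main effects of the
    genetic vector G and the scalar environmental variable X, plus all
    G-by-X interactions."""
    G = np.asarray(G, dtype=float)
    return G @ beta_G + X * beta_X + (G * X) @ beta_GX

def disease_probability(G, X, alpha0, beta_G, beta_X, beta_GX):
    """pr(D = 1 | G, X) = H{alpha0 + m(G, X, beta)} under the assumed model."""
    return H(alpha0 + m_linear(G, X, beta_G, beta_X, beta_GX))
\end{verbatim}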
\citet{ChatterjeeCarroll2005} profiled out $f_X(\cdot)$ to create a semiparametric profile likelihood \begin{equation} L_X(D,G,X,\Omega,f_{G}) = f_G(G) \frac{S(D,G,X,\Omega)}{R_X(X,\Omega)}, \label{eq:ProfLik_X} \end{equation} where \begin{align} S(d,g,x,\Omega) &= \frac{\exp[d\{\kappa + m(g,x,\beta)\}]}{1 + \exp\{\kappa - \hbox{log}(n_1/n_0) + \hbox{log}(\pi_1/\pi_0) + m(g,x,\beta)\}}; \nonumber \\ R_X(x,\Omega) &= \hbox{$\sum_{r=0}^{1}$} \int f_{G}(v) S(r,v,x,\Omega) dv. \label{eq:R_X} \end{align} The key insight of \citet{Stalder2017} was to develop an unbiased estimator of $R_X(x,\Omega)$ that treats $f_G(\cdot)$ nonparametrically, defined as \begin{equation} \widehat{R}_X(x,\Omega) = \hbox{$\sum_{j=1}^{n}$} \hbox{$\sum_{r=0}^{1}$} \hbox{$\sum_{d=0}^{1}$} (\pi_d / n_d) I(D_j=d) S(r,G_j,x,\Omega). \label{eq:Rhat_X} \end{equation} The leading term $f_G(G)$ in \cref{eq:ProfLik_X} is constant with respect to $\Omega$, and can be ignored for the purpose of estimation. Replacing $R_X(x,\Omega)$ with $\widehat{R}_X(x,\Omega)$ and taking the logarithm yields the estimated profile loglikelihood of $\Omega$ given the data as \begin{equation} \widehat{{\cal L}}_X(\Omega) = \hbox{$\sum_{i=1}^n$} \hbox{log}\{S(D_i,G_i,X_i,\Omega)\} - \hbox{$\sum_{i=1}^n$} \hbox{log}\{\widehat{R}_{X}(X_i,\Omega)\}. \label{eq:Lik_X} \end{equation} Define $S_{\Omega}(d,g,x,\Omega) = \partial S(d,g,x,\Omega) / \partial \Omega$ and $\widehat{R}_{X \Omega}(x,\Omega) = \partial \widehat{R}_{X}(x, \Omega) / \partial \Omega$. The profile likelihood score function, ${\cal S}_{X}(\Omega)$, is unknown but can be estimated consistently by \begin{equation} \widehat{{\cal S}}_{X}(\Omega) = n^{-1/2}\sum_{i=1}^{n} \left\{\frac{S_{\Omega}(D_i,G_i,X_i,\Omega)}{S(D_i,G_i,X_i,\Omega)} -\frac{\widehat{R}_{X\Omega}(X_i,\Omega)}{\widehat{R}_{X}(X_i,\Omega)}\right\}. \label{eq:ScoreHat_X} \end{equation} By solving $\widehat{{\cal S}}_{X}(\Omega)=0$, we obtain a consistent estimate of $\Omega$, which we will denote by $\widehat{\Omega}_X$ and which is called the SPMLE by \citet{Stalder2017}.
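To make the estimation recipe concrete, the following minimal and deliberately unoptimized ($O(n^2)$) Python sketch evaluates $S(d,g,x,\Omega)$, $\widehat{R}_X(x,\Omega)$, and $\widehat{{\cal L}}_X(\Omega)$. The function names and the signature of the user-supplied risk function \texttt{m} are illustrative assumptions, not the \texttt{caseControlGE} implementation; in practice $\widehat{\Omega}_X$ would be obtained by maximizing this criterion numerically (equivalently, solving $\widehat{{\cal S}}_X(\Omega)=0$), for example with a general-purpose optimizer.
\begin{verbatim}
import numpy as np

def S(d, g, x, Omega, n1, n0, pi1, m):
    """S(d, g, x, Omega) with Omega = (kappa, beta); m(g, x, beta) is the
    user-supplied risk function."""
    kappa, beta = Omega[0], Omega[1:]
    eta = m(g, x, beta)
    num = np.exp(d * (kappa + eta))
    den = 1.0 + np.exp(kappa - np.log(n1 / n0) + np.log(pi1 / (1 - pi1)) + eta)
    return num / den

def R_hat_X(x, Omega, D, G, n1, n0, pi1, m):
    """Nonparametric estimate of R_X(x, Omega): average of S(r, G_j, x, Omega)
    over the empirical distribution of G, reweighted to the population via
    the case/control weights pi_d / n_d."""
    total = 0.0
    for Dj, Gj in zip(D, G):
        w = (pi1 / n1) if Dj == 1 else ((1 - pi1) / n0)
        for r in (0, 1):
            total += w * S(r, Gj, x, Omega, n1, n0, pi1, m)
    return total

def profile_loglik_X(Omega, D, G, X, pi1, m):
    """Estimated profile loglikelihood of Omega given the case-control data."""
    n1, n0 = int(np.sum(D == 1)), int(np.sum(D == 0))
    ll = 0.0
    for Di, Gi, Xi in zip(D, G, X):
        ll += np.log(S(Di, Gi, Xi, Omega, n1, n0, pi1, m))
        ll -= np.log(R_hat_X(Xi, Omega, D, G, n1, n0, pi1, m))
    return ll
\end{verbatim}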
\subsection{Symmetric Combination Estimator}\label{sec:Symm} The above equations are equivalent to those found in \citet{Stalder2017}, with the addition of the subscript $X$ in \crefrange{eq:ProfLik_X}{eq:ScoreHat_X} to emphasize that the density of $X$ has been profiled out, leaving the density of $G$ to be treated nonparametrically. Because our only assumption about $G$ and $X$ is their independence in the source population, we could just as well have interchanged them and profiled out the distribution of $G$. The notation in this symmetric case is analogous to the above, but with subscript $G$ instead of $X$. It follows that the analogous estimated score function \begin{equation*} \widehat{{\cal S}}_{G}(\Omega) = n^{-1/2}\sum_{i=1}^{n}\left\{ \frac{S_{\Omega}(D_i,G_i,X_i,\Omega)}{S(D_i,G_i,X_i,\Omega)} - \frac{\widehat{R}_{G \Omega}(G_i,\Omega)}{\widehat{R}_{G}(G_i,\Omega)}\right\} \label{eq:ScoreHat_G} \end{equation*} can be used to obtain $\widehat{\Omega}_G$, another consistent estimate of $\Omega$. The optimal combination of the symmetric estimators $\widehat{\Omega}_{X}$ and $\widehat{\Omega}_{G}$ follows the principle of generalized least squares. Suppose the dimension of $\Omega$ is $p$. Let $I_p$ be the $p\times p$ identity matrix and define ${\cal X} = (I_p,I_p)^{\rm T}$. Define ${\cal Y} = (\widehat{\Omega}_X^{\rm T},\widehat{\Omega}_G^{\rm T})^{\rm T}$ and $\Lambda_{\rm all} = \hbox{\rm cov}({\cal Y})$. \Cref{thm:AsympNormSymm}, in \cref{sec:AsympTheory}, shows that ${\cal Y}$ is asymptotically $\hbox{Normal}({\cal X}\Omega, \Lambda_{\rm all})$. Treating this as a generalized least squares problem, we can rewrite it as ${\cal Y} = {\cal X}\Omega+\epsilon,$ where $\epsilon \sim \hbox{Normal} (0, \Lambda_{\rm all})$. The Symmetric Combination Estimator is the solution to this linear model, namely \begin{equation} \widehat{\Omega}_{\rm Symm} = ({\cal X}^{\rm T} \Lambda^{-1}_{\rm all}{\cal X})^{-1}{\cal X}^{\rm T} \Lambda^{-1}_{\rm all}{\cal Y}. \label{eq:OmegaHatSymm} \end{equation} An alternative method of combining the two estimates is to average the two estimated profile likelihoods into a single composite likelihood. The resulting Composite Likelihood Estimator yields minimal efficiency gains over the SPMLE of \citeauthor{Stalder2017}, and is presented in \cref{sec:CompositeLik} of the \emph{Supplementary Material}{}. \subsection{Asymptotic Theory}\label{sec:AsympTheory} In this subsection we first demonstrate that the joint distribution of $\widehat\Omega_X$ and $\widehat\Omega_G$ is asymptotically normal. We then give the asymptotic results for the Symmetric Combination Estimator. In practice, asymptotic standard errors for the Symmetric Combination Estimator proved unreliable due to slow convergence, so bootstrap standard errors are used instead. To state the asymptotic results, let $Z_i = (D_i,G_i,X_i)$ and define \begin{eqnarray*} \Gamma_{X} &=& \sum_{d=0}^{1} \frac{n_d}{n} E\left[ \frac{\partial}{\partial \Omega^{\rm T}} \left\{ \frac{S_{\Omega}(Z, \Omega)}{S(Z, \Omega)} - \frac{R_{X \Omega}(X,\Omega)}{R_{X}(X,\Omega)} \right\} \bigg| D=d \right]; \\ \Gamma_{G} &=& \sum_{d=0}^{1} \frac{n_d}{n} E\left[ \frac{\partial}{\partial \Omega^{\rm T}} \left\{ \frac{S_{\Omega}(Z, \Omega)}{S(Z, \Omega)} - \frac{R_{G \Omega}(G,\Omega)}{R_{G}(G,\Omega)} \right\} \bigg| D=d \right]; \\ \zeta_{X}(Z_i,\Omega) &=&\frac{S_{\Omega}(Z_i,\Omega)}{S(Z_i,\Omega)} - \frac{R_{X \Omega}(X_i,\Omega)}{R_{X}(X_i,\Omega)}\\ && -\sum_{d=0}^1\sum_{r=0}^1 \frac{n_d\,\pi_{D_i}}{n_{D_i}} E\left\{\frac{S_\Omega(r,g,X,\Omega)}{R_{X}(X, \Omega)} -\frac{R_{X \Omega}(X, \Omega)S(r,g,X,\Omega)}{R^2_{X}(X, \Omega)} \bigg| D=d\right\}_{g=G_i};\\ \zeta_{G}(Z_i,\Omega) &=& \frac{S_{\Omega}(Z_i,\Omega)}{S(Z_i,\Omega)} - \frac{R_{G \Omega}(G_i,\Omega)}{R_{G}(G_i,\Omega)}\\ && -\sum_{d=0}^1\sum_{r=0}^1 \frac{n_d\,\pi_{D_i}}{n_{D_i}} E\left\{\frac{S_\Omega(r,G,x,\Omega)}{R_{G}(G, \Omega)} -\frac{R_{G \Omega}(G, \Omega)S(r,G,x,\Omega)}{R^2_{G}(G, \Omega)} \bigg| D=d\right\}_{x=X_i};\\ \zeta_{X*}(Z_i,\Omega) &=& \zeta_{X}(Z_i,\Omega) - E\{\zeta_{X}(Z,\Omega) \vert D=D_i\};\\ \zeta_{G*}(Z_i,\Omega) &=& \zeta_{G}(Z_i,\Omega) - E\{\zeta_{G}(Z,\Omega) \vert D=D_i\}. \end{eqnarray*} By profiling out $X$ and $G$ separately, we have the following two expansions: \begin{align} n^{1/2}(\widehat\Omega_X-\Omega)&=-\Gamma_X^{-1}n^{-1/2}\hbox{$\sum_{i=1}^n$} \zeta_{X*}(Z_i,\Omega)+o_p(1);\label{eq:asymp_X}\\ n^{1/2}(\widehat\Omega_G-\Omega)&=-\Gamma_G^{-1}n^{-1/2}\hbox{$\sum_{i=1}^n$} \zeta_{G*}(Z_i,\Omega)+o_p(1).\label{eq:asymp_G} \end{align} \Cref{eq:asymp_X} is proved in Theorem 1 of \citet{Stalder2017}, and the proof of the symmetric case in \cref{eq:asymp_G} is analogous.
To demonstrate the asymptotic properties of the Symmetric Combination Estimator, define the block-diagonal matrix $\Gamma_{\rm all}^{-1}=\hbox{diag}(\Gamma_{X}^{-1}, \Gamma_{G}^{-1})$. \begin{Thm}\label{thm:AsympNormSymm}{ Suppose that $0 < \displaystyle\lim_{n\to\infty} n_d/n<1$, and $\pi_1$ is known. Then \begin{equation*} n^{1/2}({\cal Y}-{\cal X}\Omega) = -\Gamma_{\rm all}^{-1} \;n^{-1/2} \; \sum_{i=1}^{n} \left\{\begin{matrix} \zeta_{X*}(Z_i,\Omega) \\ \zeta_{G*}(Z_i,\Omega) \end{matrix}\right\} + o_p(1). \end{equation*} The $Z_i$ are independent and $E\{\zeta_{X*}(Z_i,\Omega)\vert D_i\} = E\{\zeta_{G*}(Z_i,\Omega)\vert D_i\} = 0$, so as $n \to \infty,$ \begin{equation} n^{1/2}({\cal Y}-{\cal X}\Omega) \to \hbox{Normal}(0,\Lambda_{\rm all}) \label{eq:AsympNormSymm} \end{equation} in distribution, where \begin{align*} \Lambda_{\rm all} \enspace &= \enspace \Gamma_{\rm all}^{-1} \; \Sigma_{\rm all} \; \Gamma_{\rm all}^{\rm -T}; \\ \Sigma_{\rm all} \enspace &= \enspace \hbox{\rm cov}\left\{\begin{matrix} \zeta_{X*}(Z,\Omega) \\ \zeta_{G*}(Z,\Omega) \end{matrix}\right\} \; = \enspace \hbox{\rm cov}\left\{\begin{matrix} \zeta_{X}(Z,\Omega) \\ \zeta_{G}(Z,\Omega) \end{matrix}\right\}. \end{align*} }\end{Thm} The proof of \cref{thm:AsympNormSymm} follows directly from the proofs of \cref{eq:asymp_X,eq:asymp_G} and the properties of the M-estimators $\widehat{\Omega}_X$ and $\widehat{\Omega}_G$. \begin{Rem}\label{rem:BootstrapCov} In \cref{sec:Symm}, we constructed a linear model from \cref{eq:AsympNormSymm} and used generalized least squares to calculate $\widehat{\Omega}_{\rm Symm}$. The asymptotic properties of GLS estimators imply that as $n \to \infty$, \begin{equation*} n^{1/2}(\widehat{\Omega}_{\rm Symm}-\Omega) \to \hbox{Normal}\{0,({\cal X}^{\rm T} \Lambda^{-1}_{\rm all}{\cal X})^{-1}\}. \end{equation*} In practice, $\widehat\Omega_{X}$ and $\widehat\Omega_{G}$ are highly correlated, which slows convergence to the asymptotic covariance matrix. Asymptotic estimates of standard errors proved unreliable in simulations, and are not recommended. Instead, we estimate $\hbox{\rm cov}(\widehat{\Omega}_{\rm Symm})$ using a balanced bootstrap, in which cases and controls are resampled separately, thus maintaining their respective sample sizes. \end{Rem} \subsection{Rare Diseases When $\pi_1$ is Unknown} \label{sec:RareDisease} Due to its sampling efficiency, the case-control design is typically used to study relatively rare diseases. If the true disease rate in the source population is unknown, it is common to assume that $\pi_1 \approx 0$ \citep{Piegorsch1994,modan2001parity,lin2006likelihood,kwee2007simple}. Under this rare disease assumption, $\widehat\Omega_{X}$ and $\widehat\Omega_{G}$ converge not to $\Omega$, but to $\Omega_{X*}$ and $\Omega_{G*}$, the solutions to their respective score equations with $\pi_1=0$. Using estimates of $\Omega_{X*}$ and $\Omega_{G*}$ to calculate $\widehat{\Omega}_{\rm Symm}$ runs the risk of introducing bias, but in practice the small potential bias is typically inconsequential unless the sample size is very large and standard errors unusually small. Examples in \cref{sec:Misspec} of the \emph{Supplementary Material}{} demonstrate that under the rare disease approximation, coverage remains near nominal until the true disease rate exceeds 8\%.
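Before turning to efficiency, the following Python sketch illustrates how the combination in \cref{eq:OmegaHatSymm} and the bootstrap covariance estimate of \cref{rem:BootstrapCov} fit together. The helper \texttt{fit\_both}, which returns $(\widehat{\Omega}_X,\widehat{\Omega}_G)$ for a data set, is a hypothetical placeholder rather than an actual \texttt{caseControlGE} routine; the whole block is a sketch, not the package implementation.
\begin{verbatim}
import numpy as np

def symmetric_combination(omega_X, omega_G, Lambda_all):
    """GLS combination of the two symmetric p-dimensional estimates;
    Lambda_all is their estimated 2p x 2p joint covariance matrix."""
    p = len(omega_X)
    X_design = np.vstack([np.eye(p), np.eye(p)])      # the matrix "cal X"
    Y = np.concatenate([omega_X, omega_G])            # the stacked vector "cal Y"
    Lam_inv = np.linalg.inv(Lambda_all)
    A = X_design.T @ Lam_inv @ X_design
    omega_symm = np.linalg.solve(A, X_design.T @ Lam_inv @ Y)
    return omega_symm, np.linalg.inv(A)               # estimate and its covariance

def bootstrap_Lambda_all(D, G, X, fit_both, n_boot=200, seed=1):
    """Bootstrap estimate of cov{(omega_X, omega_G)}: cases and controls are
    resampled separately so that n_1 and n_0 are preserved."""
    rng = np.random.default_rng(seed)
    cases, controls = np.where(D == 1)[0], np.where(D == 0)[0]
    draws = []
    for _ in range(n_boot):
        idx = np.concatenate([rng.choice(cases, size=len(cases), replace=True),
                              rng.choice(controls, size=len(controls), replace=True)])
        oX, oG = fit_both(D[idx], G[idx], X[idx])
        draws.append(np.concatenate([oX, oG]))
    return np.cov(np.asarray(draws), rowvar=False)
\end{verbatim}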
{\subsection{Asymptotic efficiency} In this subsection, we discuss the efficiency of the proposed estimator $\widehat{\Omega}_{\rm Symm}$ relative to the SPMLE $\widehat{\Omega}_{X}$. Recall that $\Omega = (\kappa, \mbox{\boldmath $\beta$}^{\rm T})^{\rm T}$, where $\mbox{\boldmath $\beta$} = (\beta_1,\ldots,\beta_p)$ is the parameter of interest. Since both estimators are consistent, our goal is to show that ${\rm var}(\beta_{\rm Symm, j})\leq{\rm var}(\beta_{\rm X, j})$ for $j =1,\ldots,p$. We start with their asymptotic covariance matrices, $({\cal{X}}^{\rm T} \Lambda_{\rm all}^{-1}{\cal{X}})^{-1}$ and $\Gamma^{-1}_X{\rm cov}\{\zeta_{X}(Z,\Omega)\}\Gamma_X^{\rm -T}$. Recall that \begin{eqnarray*} \Lambda_{\rm all}^{-1} &=& \begin{pmatrix}\Gamma^{-1}_X{\rm cov}\{\zeta_X(Z,\Omega)\}\Gamma^{\rm -T}_X&\Gamma^{-1}_X{\rm cov}\{\zeta_X(Z,\Omega), \zeta_G(Z,\Omega)\}\Gamma^{\rm -T}_G\\ \Gamma^{-1}_G{\rm cov}\{\zeta_G(Z,\Omega), \zeta_X(Z,\Omega)\}\Gamma^{\rm -T}_X& \Gamma^{-1}_G{\rm cov}\{\zeta_G(Z,\Omega)\}\Gamma^{\rm -T}_G\\ \end{pmatrix}^{-1}.\\ \end{eqnarray*} To simplify notation, we write $A = \Gamma^{-1}_X{\rm cov}\{\zeta_X(Z,\Omega)\}\Gamma^{\rm -T}_X$, $B= \Gamma^{-1}_X{\rm cov}\{\zeta_X(Z,\Omega), \zeta_G(Z,\Omega)\}\Gamma^{\rm -T}_G$, and $D = \Gamma^{-1}_G{\rm cov}\{\zeta_G(Z,\Omega)\}\Gamma^{\rm -T}_G$. By \cite[Theorem 2.1]{lu2002inverses}, we can then write $\Lambda^{-1}_{\rm all}$ as \begin{eqnarray*} \Lambda_{\rm all}^{-1}&=&\begin{pmatrix}A&B\\ B^{\rm T}&D\end{pmatrix}^{-1}= \begin{pmatrix} A^{-1}+A^{-1}B(D-B^{\rm T} A^{-1}B)^{-1}B^{\rm T} A^{-1} & -A^{-1}B(D-B^{\rm T} A^{-1}B)^{-1}\\ -(D-B^{\rm T} A^{-1}B)^{-1}B^{\rm T} A^{-1}& (D-B^{\rm T} A^{-1}B)^{-1} \end{pmatrix}. \end{eqnarray*} Thus, \begin{eqnarray*} {\cal{X}}^{\rm T} \Lambda_{\rm all}^{-1}{\cal{X}}&=& \begin{pmatrix} I_p&I_p\end{pmatrix} \begin{pmatrix} A^{-1}+A^{-1}B(D-B^{\rm T} A^{-1}B)^{-1}B^{\rm T} A^{-1} & -A^{-1}B(D-B^{\rm T} A^{-1}B)^{-1}\\ -(D-B^{\rm T} A^{-1}B)^{-1}B^{\rm T} A^{-1}& (D-B^{\rm T} A^{-1}B)^{-1} \end{pmatrix} \begin{pmatrix} I_p\\I_p\end{pmatrix}\\ &=& A^{-1}+A^{-1}B(D-B^{\rm T} A^{-1}B)^{-1}B^{\rm T} A^{-1} -A^{-1}B(D-B^{\rm T} A^{-1}B)^{-1}\\ && \hskip 3mm -(D-B^{\rm T} A^{-1}B)^{-1}B^{\rm T} A^{-1}+ (D-B^{\rm T} A^{-1}B)^{-1}\\ &=& A^{-1} + \left\{(D-B^{\rm T} A^{-1}B)^{-1/2}(B^{\rm T} A^{-1} -I_p)\right\}^{\rm T} \left\{(D-B^{\rm T} A^{-1}B)^{-1/2}(B^{\rm T} A^{-1} -I_p)\right\}. \end{eqnarray*} Writing $C = (D-B^{\rm T} A^{-1}B)^{-1/2}(B^{\rm T} A^{-1} -I_p)$, we have ${\cal{X}}^{\rm T} \Lambda_{\rm all}^{-1}{\cal{X}}=A^{-1}+C^{\rm T} C$. Thus, by the Woodbury identity, \begin{eqnarray*} {\rm cov}(\widehat{\Omega}_{X}) - {\rm cov}(\widehat{\Omega}_{\rm Symm}) &=& A - (A^{-1}+C^{\rm T} C)^{-1}\\ &=& A - \{A - AC^{\rm T} (I_p+CAC^{\rm T})^{-1}CA \}\\ &=& AC^{\rm T} (I_p+CAC^{\rm T})^{-1}CA\\ &\geq&0. \end{eqnarray*} The matrix $AC^{\rm T} (I_p+CAC^{\rm T})^{-1}CA$ is positive semi-definite, so its diagonal elements are non-negative. Hence, we conclude that ${\rm var}(\beta_{\rm Symm, j}) \leq {\rm var}(\beta_{\rm X, j})$, $j = 1,\ldots,p$.
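This matrix inequality is easy to verify numerically. In the sketch below, $\Lambda_{\rm all}$ is drawn at random (purely as a sanity check, not from any real model), and the generalized least squares covariance $({\cal X}^{\rm T}\Lambda_{\rm all}^{-1}{\cal X})^{-1}$ is compared with the upper-left block $A$, the asymptotic covariance of $\widehat{\Omega}_X$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p = 4

# Random symmetric positive-definite joint covariance Lambda_all = [[A, B], [B^T, D]].
M = rng.normal(size=(2 * p, 2 * p))
Lambda_all = M @ M.T + 0.1 * np.eye(2 * p)
A = Lambda_all[:p, :p]                    # covariance of the first (SPMLE) estimate

# Covariance of the GLS (Symmetric Combination) estimator: (X^T Lambda^-1 X)^-1.
X_design = np.vstack([np.eye(p), np.eye(p)])
cov_symm = np.linalg.inv(X_design.T @ np.linalg.inv(Lambda_all) @ X_design)

# A - cov_symm is positive semi-definite, so every variance is reduced (or unchanged).
diff = A - cov_symm
print(np.linalg.eigvalsh(diff).min() >= -1e-10)   # True
print(np.all(np.diag(diff) >= -1e-10))            # True
\end{verbatim}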
Equality, ${\rm var}(\beta_{\rm Symm, j}) = {\rm var}(\beta_{\rm X, j})$, occurs when $D = B^{\rm T} A^{-1} B$. By the same symmetry argument, ${\rm var}(\beta_{\rm Symm, j}) \leq {\rm var}(\beta_{\rm G, j})$, $j = 1,\ldots,p$, with equality when $A = B^{\rm T} D^{-1} B$. As discussed in the Introduction, although \citet{Stalder2017} proposed the estimator $\widehat{\Omega}_X$, the estimator $\widehat{\Omega}_G$ is an equally valid alternative because of the mathematical symmetry, and choosing between $\widehat{\Omega}_X$ and $\widehat{\Omega}_G$ in practice can be tricky and may require prior information. The derivation above shows that the proposed estimator $\widehat{\Omega}_{\rm Symm}$ is guaranteed to be more efficient than, or at least as efficient as, either $\widehat{\Omega}_X$ or $\widehat{\Omega}_G$, so this arbitrary choice is avoided altogether. } \section{Simulations} \label{sec:Simulations} \subsection{Scenario} \label{sec:SimScenario} To investigate the performance of the Symmetric Combination Estimator, we adopt the same simulation settings as reported in \citet{Stalder2017}. The environmental variable $X$ is binary with population frequency 0.5, and $G$ consists of five correlated single nucleotide polymorphisms (SNPs). The SNPs follow a trinomial distribution in Hardy-Weinberg equilibrium, wherein SNP $G_j$ takes values $(0, 1, 2)$ with probabilities $\{(1-p_j)^2, \enspace 2p_j(1-p_j), \enspace p_j^2\}$, respectively. To generate correlated SNPs, we first simulated a 5-variate normal random variable with mean 0 and covariance between the $j$th and $k$th components equal to $0.7^{|j-k|}$. We then trichotomized the variates with appropriate thresholds so that the frequencies of 0, 1, and 2 followed Hardy-Weinberg equilibrium with minor allele frequencies $(p_1, p_2, p_3, p_4, p_5) = (0.1, 0.3, 0.3, 0.3, 0.1)$. Disease status was simulated according to the risk model $H\{\alpha_0 + m(G,X,\mbox{\boldmath $\beta$})\}$, with $m(G,X,\beta) = G^{\rm T} \beta_G+X\beta_X + (GX)^{\rm T}\beta_{GX}$. Here $\beta_G = \{\hbox{log}(1.2), \hbox{log}(1.2), 0, \hbox{log}(1.2), 0\}$, $\beta_X = \hbox{log}(1.5)$, and $\beta_{GX}=\{\hbox{log}(1.3), 0, 0, \hbox{log}(1.3), 0\}$. We set the logistic intercept $\alpha_0 = -4.165$ to yield a population disease rate $\pi_1 = 0.03$. A sample of 1000 cases and 1000 controls was drawn from the simulated population, and parameters were estimated using logistic regression, the SPMLE of \citet{Stalder2017}, the Symmetric Combination Estimator with known $\pi_1$, and the Symmetric Combination Estimator with a rare disease approximation. Standard error estimates for both logistic regression and the SPMLE were based on asymptotic theory, while those for the Symmetric Combination Estimator were calculated using 200 balanced bootstrap samples, as described in \cref{rem:BootstrapCov}.
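For reference, the data-generating mechanism of this scenario can be coded along the following lines (a Python sketch only; the latent-normal thresholds use the normal distribution function to reproduce the stated Hardy--Weinberg genotype frequencies, and the population pool size and random seed are arbitrary choices of ours).
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2023)
maf     = np.array([0.1, 0.3, 0.3, 0.3, 0.1])      # minor allele frequencies
beta_G  = np.log([1.2, 1.2, 1.0, 1.2, 1.0])
beta_X  = np.log(1.5)
beta_GX = np.log([1.3, 1.0, 1.0, 1.3, 1.0])
alpha0  = -4.165                                    # stated to give pi_1 ~ 0.03

def simulate_population(N):
    # Correlated latent normals with cov 0.7^|j-k|, trichotomized so that each
    # SNP takes values 0/1/2 with probabilities (1-p)^2, 2p(1-p), p^2.
    cov = 0.7 ** np.abs(np.subtract.outer(np.arange(5), np.arange(5)))
    Z = rng.multivariate_normal(np.zeros(5), cov, size=N)
    U = norm.cdf(Z)                                 # uniform margins
    G = (U > (1 - maf) ** 2).astype(int) + (U > 1 - maf ** 2).astype(int)
    X = rng.binomial(1, 0.5, size=N)                # binary environmental variable
    eta = alpha0 + G @ beta_G + X * beta_X + (G * X[:, None]) @ beta_GX
    D = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
    return D, G, X

def case_control_sample(n1=1000, n0=1000, pool=1_000_000):
    D, G, X = simulate_population(pool)
    cases = rng.choice(np.where(D == 1)[0], n1, replace=False)
    controls = rng.choice(np.where(D == 0)[0], n0, replace=False)
    idx = np.concatenate([cases, controls])
    return D[idx], G[idx], X[idx]
\end{verbatim}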
\subsection{Results} \label{sec:SimResults} \Cref{tab:sim1} presents the results of 1000 simulations comparing standard logistic regression, the SPMLE proposed by \citeauthor{Stalder2017} with known $\pi_1$, the SPMLE proposed by \citeauthor{Stalder2017} using the rare disease approximation, our proposed Symmetric Combination Estimator with known $\pi_1$, and the Symmetric Combination Estimator using the rare disease approximation. Standard error estimates for logistic regression and the SPMLE were calculated using asymptotic theory, and the standard error estimates for both versions of the Symmetric Combination Estimator were calculated using 200 bootstrap samples as described in \cref{rem:BootstrapCov}. The Symmetric Combination Estimator, both with known $\pi_1$ and when using the rare disease approximation, shows negligible bias and has coverage percentages near the nominal level. Like the SPMLE, both versions of our Symmetric Combination Estimator provide slightly more than a 25\% improvement in mean squared error efficiency over ordinary logistic regression for the main effect of $X$. More impressively, our estimator nearly doubles the mean squared error efficiency of logistic regression for the main effects of $G$, and nearly triples the mean squared error efficiency for the interaction terms. This is a marked improvement even over the performance of the SPMLE, and it is accomplished without modeling the distribution of either $G$ or $X$. \begin{table}[!ht] \ttabbox {\caption{\baselineskip=12pt Results of 1000 simulations as described in \cref{sec:SimScenario}, comparing the bias, coverage, and efficiency of five estimators: ordinary logistic regression, the SPMLE of \citeauthor{Stalder2017} with known $\pi_1$ and using the rare disease approximation, and our proposed Symmetric Combination Estimator with known $\pi_1$ and using the rare disease approximation} \label{tab:sim1}} {\begin{tabular}{ | l | ll l l l | l | l l l l l| } \hline &$\beta_{G1}$ &$\beta_{G2}$&$\beta_{G3}$ & $\beta_{G4}$ & $\beta_{G5}$ & $\beta_{X}$ & $\beta_{XG1}$ & $\beta_{XG2}$ & $\beta_{XG3}$ &$\beta_{XG4}$ & $\beta_{XG5}$ \\ \hline True & 0.18& 0.18& 0.00& 0.18& 0.00 &0.41& 0.26& 0.00& 0.00& 0.26& 0.00 \\ \hline \multicolumn{12}{|c|}{}\\[-.80em]\multicolumn{12}{|c|}{Logistic Regression}\\ \hline Bias & 0.01& 0.00& 0.00&0.00& -0.01 &0.00& 0.00& 0.00& 0.00 &0.00& 0.00 \\ CI(\%) & 95.2& 95.5& 94.4& 94.7& 95.3& 95.8& 94.5 &95.9& 94.7& 94.6& 95.3 \\ \hline \multicolumn{12}{|c|}{}\\[-.80em] \multicolumn{12}{|c|}{SPMLE, known $\pi_1$} \\ \hline Bias & 0.01 & 0.00 & 0.00 & 0.00 & -0.01 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.01 \\ CI(\%) & 95.4 & 95.8 & 94.8 & 96.1 & 96.3 & 95.6 & 94.6 & 96.0 & 94.3 & 95.6 & 95.1 \\ MSE Eff & 1.32 & 1.25 & 1.26 & 1.32 & 1.30 & 1.27 & 2.08 & 1.78 & 1.88 & 1.95 & 2.12 \\ \hline \multicolumn{12}{|c|}{}\\[-.80em] \multicolumn{12}{|c|}{SPMLE, rare} \\ \hline Bias & 0.02 & 0.00 & 0.00 & 0.01 & -0.01 & 0.02 & -0.02 & 0.00 & 0.00 & -0.01 & 0.01 \\ CI(\%) & 95.0 & 95.7 & 94.8 & 95.9 & 96.4 & 95.5 & 94.4 & 96.0 & 94.7 & 95.4 & 95.7 \\ MSE Eff & 1.31 & 1.26 & 1.27 & 1.32 & 1.30 & 1.26 & 2.18 & 1.89 & 2.01 & 2.06 & 2.27 \\ \hline \multicolumn{12}{|c|}{}\\[-.80em]\multicolumn{12}{|c|}{Symmetric Combination Estimator, known $\pi_1$}\\ \hline Bias &0.00 &-0.03& 0.00& 0.00& -0.01& 0.01& -0.03& 0.02& 0.00& -0.02& 0.01 \\ CI$^*$(\%) & 96.7& 95.7& 96.7& 96.5& 97.8& 95.4& 94.8& 96.7& 96.2& 96.6& 97.2 \\
MSE Eff & 1.92 & 1.71 & 2.00 & 1.83 & 2.05 & 1.31 & 2.84 & 2.51 & 2.99 & 2.68 & 3.34 \\ \hline \multicolumn{12}{|c|}{}\\[-.80em] \multicolumn{12}{|c|}{Symmetric Combination Estimator, rare} \\ \hline Bias & 0.01 & -0.02 & 0.00 & 0.01 & -0.01 & 0.02 & -0.05 & 0.02 & 0.00 & -0.03 & 0.00 \\ CI$^*$(\%)& 96.4 & 95.7 & 95.7 & 96.3 & 98.1 & 94.9 & 94.0 & 97.0 & 96.4 & 95.5 & 97.5 \\ MSE Eff & 1.86 & 1.71 & 1.92 & 1.78 & 1.95 & 1.27 & 2.75 & 2.66 & 3.08 & 2.69 & 3.58 \\ \hline \end{tabular} \floatfoot{\baselineskip=11pt \emph{CI}: coverage of a 95\% nominal confidence interval, calculated using asymptotic standard error. \emph{CI$^*$}: coverage of a 95\% nominal confidence interval, calculated using 200 bootstrap samples. \emph{MSE Eff}: mean squared error efficiency when compared to logistic regression.}} \end{table} \subsection{Further Simulations} \label{sec:SimFurther} Further simulations were conducted with multiple correlated SNPs and a binary environmental risk factor, but with changes to the number of SNPs (3 or 8), the population disease rate (1\% or 5\%), or the sample size (500 or 3000 cases \& controls). All such simulations yielded results similar to those in \cref{tab:sim1} with regard to coverage, efficiency gains, and unbiasedness, and are thus not reported. \Cref{sec:AddlSims} of the \emph{Supplementary Material}{} contains the results of simulations examining the behavior of the Symmetric Combination Estimator in a variety of settings. \Cref{sec:sim1ALL} contains an unabridged version of \cref{tab:sim1} that includes the SPMLE\_G ($\widehat{\Omega}_G$) and the Composite Likelihood Estimator, neither of which approaches the MSE efficiency of the Symmetric Combination Estimator. \Cref{sec:Misspec} presents the results of simulations with a misspecified population disease rate; we found the Symmetric Combination Estimator fairly robust to misspecification of the disease rate. \Cref{sec:ViolAssump} contains simulation studies examining the robustness of our method with respect to violations of the gene-environment independence assumption. Those simulations demonstrate that there will be bias in the estimated interaction parameter between a specific gene and a correlated environmental variable, but the rest of the parameter estimates remain unbiased, and the average mean squared error for all $G \times X$ interactions can still be substantially lower than that obtained from prospective logistic regression. \Cref{sec:AltDist} presents the results of simulations with different distributions for $G$ and $X$. \section{Data Analysis}\label{sec:DataAnalysis} \subsection{Data}\label{sec:DataData} Here we apply the proposed methodology to a case-control study of breast cancer. This case-control sample is taken from a large prospective cohort at the National Cancer Institute: the Prostate, Lung, Colorectal and Ovarian cancer screening trial \citep{canzian2010comprehensive}. This cohort enrolled 64,440 non-Hispanic white women aged 55 to 74, of whom 3.72\% developed breast cancer \citep{pfeiffer2013risk}. The case-control study analyzed here consists of 658 cases and 753 controls. Each of the 1411 subjects was genotyped for 21 SNPs that have previously been associated with breast cancer in large genome-wide association studies. These SNPs were weighted by their log-odds-ratio coefficients and summed to define a polygenic risk score.
A scaled version of this polygenic risk score, with mean zero and standard deviation one, was used as the genetic risk factor $G$. The individual SNPs and their coefficients can be found in \cref{sec:PRSweights} of the \emph{Supplementary Material}{}. Early menarche is a known risk factor for breast cancer \citep{anderson2007estimating}, and the environmental variable $X$ is a binary indicator of whether the age at menarche is less than 14. The interaction between age at menarche and genetic breast cancer risk is a topic of interest, but the power to detect such interactions in previous studies has been limited \citep{gail2008discriminatory}. The model fitted is $\hbox{pr}(D = 1) = H(\beta_0+\beta_G G + \beta_X X + \beta_{GX} GX)$. While $\pi_1$ is known in this population, we apply our method using both the known disease rate and the rare disease approximation. \subsection{Verifying Gene-Environment Independence}\label{sec:DataIndepend} Before applying our approach, we performed analyses to check the assumption of gene-environment independence in the population. Using the 753 controls, we ran a $t$-test of the polygenic risk score against the two levels of $X$. The $p$-value was 0.91, indicating no evidence of association between $G$ and $X$. We also ran chi-squared tests of each of the 21 individual SNPs against $X$ and found no significant association after controlling the false discovery rate: the minimum $q$-value was 0.09. We also checked for correlation, known as linkage disequilibrium, between the 21 SNPs used to create the polygenic risk score and 32 SNPs known to influence age at menarche \citep{elks2010menarche}. Using phased haplotypes from subjects of European descent from \emph{1000 Genomes} \citep{1000Genomes2015} and \emph{HapMap} \citep{gibbs2003international}, we were able to analyze 651 of the 672 possible linkages, and no evidence of linkage disequilibrium was found: the maximum $R^2$ was 0.1 and the minimum $q$-value was 0.85. Finally, a recent study of breast cancer susceptibility loci examined the relationship between age at menarche and 10 of the 21 SNPs used to create our polygenic risk score, none of which were found to influence age at menarche \citep{andersen2014susceptibility}.
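Checks of this kind can be scripted along the following lines (a sketch only, assuming \texttt{numpy}, \texttt{scipy}, and \texttt{statsmodels}; this is not the analysis code used for the paper, and the linkage-disequilibrium look-ups against the \emph{1000 Genomes} and \emph{HapMap} haplotypes are not shown).
\begin{verbatim}
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def check_ge_independence(prs, snps, X):
    """Independence checks run on controls only.
    prs : (n,) polygenic risk score; snps : (n, 21) genotype matrix (0/1/2);
    X   : (n,) binary indicator of age at menarche < 14."""
    # Two-sample t-test of the risk score across the two levels of X.
    t_pval = stats.ttest_ind(prs[X == 1], prs[X == 0])[1]

    # Chi-squared test of each SNP against X, with a Benjamini-Hochberg
    # false discovery rate correction to obtain q-values.
    pvals = []
    for j in range(snps.shape[1]):
        table = np.array([[np.sum((snps[:, j] == g) & (X == x))
                           for x in (0, 1)] for g in (0, 1, 2)], dtype=float)
        table = table[table.sum(axis=1) > 0]       # drop unobserved genotypes
        pvals.append(stats.chi2_contingency(table)[1])
    qvals = multipletests(pvals, method="fdr_bh")[1]
    return t_pval, qvals
\end{verbatim}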
\subsection{Results}\label{sec:DataResults} \renewcommand{\FBaskip}{14pt} \floatbox[\captop]{table}[0.9\hsize][][] { \caption{\baselineskip=12pt Results of the analysis of the Prostate, Lung, Colorectal and Ovarian cancer screening trial data} \label{tab:PLCO} }{ \begin{tabular}{|l|lll|} \hline & $\beta_G$ &$\beta_X$ &$\beta_{GX}$ \\ \hline {Logistic Regression}&&& \\ \qquad\qquad Estimate & 0.539 & 0.124 &-0.242 \\*[-.40em] \qquad\qquad Standard Error (asymptotic)& 0.117 & 0.128 & 0.133 \\*[-.40em] \qquad\qquad p-value (asymptotic) &$<1e$-4& 0.331 & 0.068 \\ \hline {Symmetric Combination, known $\pi_1=3.72\%$}&&& \\ \qquad\qquad Estimate & 0.495 & 0.093 &-0.215 \\*[-.40em] \qquad\qquad Standard Error (bootstrap) & 0.094 & 0.133 & 0.089 \\*[-.40em] \qquad\qquad p-value (bootstrap) &$<1e$-4& 0.484 & 0.016 \\ \hline {SPMLE, known $\pi_1=3.72\%$}&&& \\ \qquad\qquad Estimate & 0.587&0.127&-0.274 \\*[-.40em] \qquad\qquad Standard Error (asymptotic) & 0.103& 0.128 & 0.108 \\*[-.40em] \qquad\qquad p-value (asymptotic) &$<1e$-4& 0.321 & 0.011 \\ \hline {Symmetric Combination, rare disease approximation}&&& \\ \qquad\qquad Estimate & 0.538 & 0.116 &-0.237 \\*[-.40em] \qquad\qquad Standard Error (bootstrap) & 0.089 & 0.124 & 0.099 \\*[-.40em] \qquad\qquad p-value (bootstrap) &$<1e$-4& 0.352 & 0.016 \\ \hline {SPMLE, rare disease approximation}&&& \\ \qquad\qquad Estimate & 0.590& 0.129 &-0.269 \\*[-.40em] \qquad\qquad Standard Error (asymptotic) & 0.104 & 0.128 & 0.106 \\*[-.40em] \qquad\qquad p-value (asymptotic) &$<1e$-4&0.315 & 0.012 \\ \hline \end{tabular} \floatfoot{\baselineskip=11pt $\beta_{G}$ and $\beta_{X}$ are the main effects for the polygenic risk score $G$ and the environmental variable $X$ (age at menarche $<$ 14), and $\beta_{GX}$ is the gene-environment interaction.} } \Cref{tab:PLCO} presents the results of our analysis with known $\pi_1$ and under a rare disease approximation. In both cases, standard errors for the Symmetric Combination Estimator were calculated using 500 bootstrap samples. The two approaches yield very similar results, indicating that a valid analysis can be conducted even if $\pi_1$ is not known. The p-values for the SPMLE are slightly smaller, because its point estimates are larger than those of the Symmetric Combination Estimator; however, the Symmetric Combination Estimator has smaller standard errors, especially for the gene-environment interaction term. The polygenic risk score was strongly associated with the breast cancer status of the women in the study, which is to be expected given that each of its component SNPs has a known association with breast cancer risk. Standard logistic regression analysis provides some indication of an interaction between the polygenic risk score and age at menarche, but the result is not statistically significant at the 0.05 level. Using the assumption of gene-environment independence in the population, the Symmetric Combination Estimator finds stronger evidence of this interaction. The improved power to detect this interaction is due to the much smaller standard error estimates of the Symmetric Combination Estimator. Using logistic regression, the estimated standard error of $\beta_{GX}$ is 49\% larger than with our method, indicating a variance increase of 121\% (when applying the rare disease approximation, the variance increase is 81\%).
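As a quick arithmetic check of this gain, using the rounded standard errors reported in \cref{tab:PLCO} for the known-$\pi_1$ analyses,
\begin{equation*}
\left(\frac{\widehat{\rm se}_{\rm LR}(\beta_{GX})}{\widehat{\rm se}_{\rm Symm}(\beta_{GX})}\right)^{2}
= \left(\frac{0.133}{0.089}\right)^{2} \approx 1.49^{2} \approx 2.2,
\end{equation*}
that is, roughly a 120\% increase in variance for logistic regression, consistent with the 121\% figure above (presumably computed from the unrounded standard errors); the corresponding calculation with the rare-disease bootstrap standard error of 0.099 gives $(0.133/0.099)^2 \approx 1.8$, matching the reported 81\% increase.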
\section{Discussion and Extensions}\label{sec:Discussion} Researchers investigating gene-environment interactions in case-control studies have traditionally had two broad options for analysis: standard logistic regression, which is flexible but has low power to detect interactions, or retrospective methods, which lack flexibility but offer improved efficiency by exploiting the assumption of gene-environment independence. Improved understanding of genetic risk factors has led to the need for efficient estimators that can model complex gene-environment interactions. \citet{Stalder2017} proposed a retrospective profile method that exploits the assumption of gene-environment independence while treating the genetic and environmental variables nonparametrically. By obviating the need for a parametric model of genotype distributions, their method is well suited for the analysis of multimarker genetic data and polygenic risk scores. We observed that the retrospective likelihood is mathematically symmetric in the genetic and environmental variables, a symmetry that \citet{Stalder2017} do not exploit, resulting in a sub-optimal efficiency gain. We therefore proposed an improvement to the method of \citet{Stalder2017} that substantially increases the efficiency of the estimates with modest computational cost and no additional assumptions, making it applicable anywhere that the method of \citeauthor{Stalder2017} can be used. Simulations under a variety of scenarios demonstrate a consistent improvement in mean squared error efficiency over the method of \citet{Stalder2017} and logistic regression in the estimation of both main effects and gene-environment interaction terms. Our methods are implemented in the R package \texttt{caseControlGE}, freely available at \url{github.com/alexasher/caseControlGE}. The proposed Symmetric Combination Estimator places no distributional assumptions on the genetic or environmental variables, but it does rely on three assumptions. The first assumption, that the logistic risk model $H\{\alpha_0 + m(G,X,\mbox{\boldmath $\beta$})\}$ is known up to the parameters $\alpha_0$ and $\mbox{\boldmath $\beta$}$, is minimally restrictive because a flexible function, such as a function of B-splines, can be defined for $m(G,X,\mbox{\boldmath $\beta$})$. The second assumption, that $\pi_1$ is known or can be well estimated, can be relaxed by using the rare disease approximation of \cref{sec:RareDisease}. Even if the true disease rate is not rare, the Symmetric Combination Estimator is generally robust to the misspecification of $\pi_1$, as demonstrated in \cref{sec:Misspec} of the \emph{Supplementary Material}{}. The final assumption is gene-environment independence in the source population. In \cref{sec:ViolAssump} of the \emph{Supplementary Material}{}, we present the results of simulations demonstrating that bias is introduced in the estimated interaction parameter between correlated genetic and environmental variables, but that the rest of the parameter estimates remain unbiased. We recommend that researchers verify gene-environment independence before applying the Symmetric Combination Estimator, as we did in \cref{sec:DataIndepend}. To relax the gene-environment independence assumption, it should be straightforward to adapt the Symmetric Combination Estimator to the case where $G$ and $X$ are conditionally independent within the strata of an observed factor, as demonstrated in the \emph{Supplementary Material}{} of \citet{Stalder2017}.
If suitable strata cannot be found, another possibility is to construct an empirical Bayes-type shrinkage estimator like that of \citet{mukherjee2008exploiting}, which would shrink the estimate from standard logistic regression toward the Symmetric Combination Estimator when the gene-environment independence assumption appears valid. \section*{Supplementary Material} The \emph{Supplementary Material}\ includes methodology and theory for the Composite Likelihood Estimator, the unabridged version of \cref{tab:sim1} with all estimators, and additional simulation results on model robustness when the disease rate is misspecified or the gene-environment independence assumption is violated. \pagebreak \pagestyle{fancy} \fancyhf{} \rhead{\bfseries S.\arabic{page}} \lhead{\bfseries NOT FOR PUBLICATION SUPPLEMENTARY MATERIAL} \begin{center} {\LARGE\bf Supplementary Material } \end{center} \vskip 2mm \setcounter{figure}{0} \setcounter{equation}{0} \setcounter{Thm}{0} \setcounter{page}{1} \setcounter{table}{1} \setcounter{section}{0} \renewcommand{\thefigure}{S.\arabic{figure}} \renewcommand{\theequation}{S.\arabic{equation}} \renewcommand{\theThm}{S.\arabic{Thm}} \renewcommand{\thesection}{S.\arabic{section}} \renewcommand{\thesubsection}{S.\arabic{section}.\arabic{subsection}} \renewcommand{\thepage}{S.\arabic{page}} \renewcommand{\thetable}{S.\arabic{table}} \section{Composite Likelihood Estimator}\label{sec:CompositeLik} The estimated composite profile likelihood is the average of the two symmetric profile likelihoods, \begin{eqnarray*} \widehat{\cal L}_{\rm CL}(\Omega) &=& \{\widehat{\cal L}_{X}(\Omega) + \widehat{\cal L}_{G}(\Omega)\}/2 \\ &=& \hbox{$\sum_{i=1}^n$} \hbox{log}\{S(D_i,G_i,X_i,\Omega)\} - 0.5\hbox{$\sum_{i=1}^n$} \hbox{log}\{\widehat{R}_{X}(X_i,\Omega)\}- 0.5\hbox{$\sum_{i=1}^n$} \hbox{log}\{\widehat{R}_{G}(G_i,\Omega)\}. \end{eqnarray*} The estimated score function is thus the average of the two symmetric score functions, \begin{eqnarray*} \widehat{{\cal S}}_{\rm CL}(\Omega) &=& \{\widehat{{\cal S}}_{X}(\Omega) + \widehat{{\cal S}}_{G}(\Omega)\}/2 \\ &=& n^{-1/2}\sum_{i=1}^{n}\left\{ \frac{S_{\Omega}(D_i,G_i,X_i,\Omega)}{S(D_i,G_i,X_i,\Omega)} - \frac{1}{2}\frac{\widehat{R}_{X \Omega}(X_i,\Omega)}{\widehat{R}_{X}(X_i,\Omega)}-\frac{1}{2}\frac{\widehat{R}_{G \Omega}(G_i,\Omega)}{\widehat{R}_{G}(G_i,\Omega)}\right\}. \label{eq:composite_score} \end{eqnarray*} The estimate $\widehat{\Omega}_{\rm CL}$ is calculated by solving $\widehat{{\cal S}}_{\rm CL}(\Omega)=0$, or equivalently, by maximizing $\widehat{\cal L}_{\rm CL}(\Omega)$. Following the notation defined previously, we sum \cref{eq:asymp_X,eq:asymp_G} instead of stacking them as in \cref{thm:AsympNormSymm}. \begin{Thm}\label{thm:AsympNormComposite}{ Suppose that $0 < \displaystyle\lim_{n\to\infty} n_d/n<1$, and $\pi_1$ is known. Then \begin{equation*} n^{1/2}(\widehat{\Omega}_{\rm CL}- \Omega) = -(\Gamma_{X} + \Gamma_{G})^{-1} n^{-1/2} \hbox{$\sum_{i=1}^n$} \{\zeta_{X*}(Z_i,\Omega)+\zeta_{G*}(Z_i,\Omega)\} + o_p(1).
\end{equation*} To calculate the asymptotic variance, write \begin{align*} \Sigma_{\rm all} \enspace &= \enspace \left[\begin{matrix} \Sigma_{XX} & \Sigma_{XG}\\ \Sigma_{GX} & \Sigma_{GG} \end{matrix}\right] \; = \enspace \hbox{\rm cov}\left\{\begin{matrix} \zeta_{X*}(Z,\Omega) \\ \zeta_{G*}(Z,\Omega) \end{matrix}\right\} \; = \enspace \hbox{\rm cov}\left\{\begin{matrix} \zeta_{X}(Z,\Omega) \\ \zeta_{G}(Z,\Omega) \end{matrix}\right\};\\ \Sigma_{XX} \enspace &= \enspace \hbox{$\sum_{d=0}^{1}$} (n_d/n) \hbox{\rm cov}\{\zeta_{X*}(Z,\Omega) \vert D=d\} \enspace = \enspace \hbox{$\sum_{d=0}^{1}$} (n_d/n) \hbox{\rm cov}\{\zeta_{X}(Z,\Omega) \vert D=d\}; \\ \Sigma_{GG} \enspace &= \enspace \hbox{$\sum_{d=0}^{1}$} (n_d/n) \hbox{\rm cov}\{\zeta_{G*}(Z,\Omega) \vert D=d\} \enspace = \enspace \hbox{$\sum_{d=0}^{1}$} (n_d/n) \hbox{\rm cov}\{\zeta_{G}(Z,\Omega) \vert D=d\}; \\ \Sigma_{XG} \enspace &= \enspace \hbox{$\sum_{d=0}^{1}$} (n_d/n) \hbox{\rm cov}\{\zeta_{X*}(Z,\Omega),\zeta_{G*}(Z,\Omega) \vert D=d\} \\ &= \enspace \hbox{$\sum_{d=0}^{1}$} (n_d/n) \hbox{\rm cov}\{\zeta_{X}(Z,\Omega),\zeta_{G}(Z,\Omega) \vert D=d\} \enspace = \enspace \Sigma_{GX}^{\rm T} . \end{align*} Since the $Z_i$ are independent and $E\{\zeta_{X*}(Z_i,\Omega)\vert D_i\}=E\{\zeta_{G*}(Z_i,\Omega)\vert D_i\} = 0$, as $n \to \infty$, \begin{eqnarray*} n^{1/2}(\widehat{\Omega}_{\rm CL}- \Omega) &\to& \hbox{Normal}(0,\Lambda_{\rm CL}); \\ \Sigma_{\rm CL} &=& \hbox{$\sum_{d=0}^{1}$} (n_d/n) \hbox{\rm cov}\{\zeta_{X*}(Z,\Omega)+\zeta_{G*}(Z,\Omega) \vert D=d\} \\ &=& \Sigma_{XX}+\Sigma_{GG}+\Sigma_{XG}+\Sigma_{GX}; \\ \Lambda_{\rm CL} &=& (\Gamma_{X} + \Gamma_{G})^{-1} \Sigma_{\rm CL} \{(\Gamma_{X} + \Gamma_{G})^{-1}\}^{\rm T}. \end{eqnarray*} }\end{Thm} The proof of \cref{thm:AsympNormComposite} follows directly from the proofs of \cref{eq:asymp_X,eq:asymp_G} and the properties of the M-estimators $\widehat{\Omega}_X$ and $\widehat{\Omega}_G$. \section{Additional Simulations}\label{sec:AddlSims} \subsection{Unabridged version of \cref{tab:sim1} from \cref{sec:Simulations}} \label{sec:sim1ALL} \Cref{tab:sim1} in \cref{sec:Simulations} of the main paper reports the results of five estimators: logistic regression, the SPMLE with known $\pi_1$, the SPMLE using the rare disease approximation, our Symmetric Combination Estimator with known $\pi_1$, and our Symmetric Combination Estimator using the rare disease approximation. \Cref{tab:sim1ALL} presents the results of \emph{all} estimators in 1000 simulations under the simulation settings of \cref{sec:SimScenario}. In addition to logistic regression, four retrospective methods are presented: the SPMLE ($\widehat{\Omega}_X$), the SPMLE\_G ($\widehat{\Omega}_G$), the Composite Likelihood Estimator ($\widehat{\Omega}_{\rm CL}$), and the Symmetric Combination Estimator ($\widehat{\Omega}_{\rm Symm}$). Each retrospective estimator was calculated under two conditions: with known $\pi_1$, and with unknown $\pi_1$ using the rare disease approximation. We see that the rare disease approximation of each retrospective estimator closely matches the version calculated with known $\pi_1$. The efficiency of the Composite Likelihood Estimator is essentially equivalent to that of the SPMLE and its symmetric counterpart, the SPMLE\_G. The Symmetric Combination Estimator stands out as markedly more efficient than the other estimators.
\begin{table}[H] \ttabbox {\caption{\baselineskip=12pt Results of 1000 simulations as described in \cref{sec:SimScenario}, comparing the bias, coverage, and efficiency of all estimators} \label{tab:sim1ALL}} {\begin{tabular}{ | l | l l l l l | l | l l l l l | } \hline &$\beta_{G1}$ &$\beta_{G2}$&$\beta_{G3}$ & $\beta_{G4}$ & $\beta_{G5}$ & $\beta_{X}$ & $\beta_{XG1}$ & $\beta_{XG2}$ & $\beta_{XG3}$ &$\beta_{XG4}$ & $\beta_{XG5}$ \\ \hline True & 0.18 & 0.18 & 0.00 & 0.18 & 0.00 & 0.41 & 0.26 & 0.00 & 0.00 & 0.26 & 0.00 \\ \hline \multicolumn{12}{|c|}{Logistic Regression} \\ \hline Bias & 0.01 & 0.00 & 0.00 & 0.00 & -0.01 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ CI(\%) & 95.2 & 95.5 & 94.4 & 94.7 & 95.3 & 95.8 & 94.5 & 95.9 & 94.7 & 94.6 & 95.3 \\ \hline \multicolumn{12}{|c|}{SPMLE, known $\pi_1$} \\ \hline Bias & 0.01 & 0.00 & 0.00 & 0.00 & -0.01 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.01 \\ CI(\%) & 95.4 & 95.8 & 94.8 & 96.1 & 96.3 & 95.6 & 94.6 & 96.0 & 94.3 & 95.6 & 95.1 \\ MSE Eff & 1.32 & 1.25 & 1.26 & 1.32 & 1.30 & 1.27 & 2.08 & 1.78 & 1.88 & 1.95 & 2.12 \\ \hline \multicolumn{12}{|c|}{SPMLE, rare} \\ \hline Bias & 0.02 & 0.00 & 0.00 & 0.01 & -0.01 & 0.02 & -0.02 & 0.00 & 0.00 & -0.01 & 0.01 \\ CI(\%) & 95.0 & 95.7 & 94.8 & 95.9 & 96.4 & 95.5 & 94.4 & 96.0 & 94.7 & 95.4 & 95.7 \\ MSE Eff & 1.31 & 1.26 & 1.27 & 1.32 & 1.30 & 1.26 & 2.18 & 1.89 & 2.01 & 2.06 & 2.27 \\ \hline \multicolumn{12}{|c|}{SPMLE\_G, known $\pi_1$} \\ \hline Bias & 0.01 & 0.00 & 0.00 & 0.00 & -0.01 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.01 \\ CI(\%) & 95.0 & 95.8 & 94.8 & 96.1 & 96.3 & 95.2 & 94.8 & 95.5 & 94.1 & 95.6 & 95.6 \\ MSE Eff & 1.35 & 1.27 & 1.29 & 1.34 & 1.33 & 1.28 & 2.12 & 1.82 & 1.90 & 1.98 & 2.14 \\ \hline \multicolumn{12}{|c|}{SPMLE\_G, rare} \\ \hline Bias & 0.02 & 0.00 & 0.00 & 0.01 & -0.01 & 0.02 & -0.02 & 0.00 & 0.00 & -0.01 & 0.01 \\ CI(\%) & 94.9 & 95.7 & 94.9 & 95.9 & 96.3 & 94.8 & 94.1 & 95.5 & 94.3 & 95.0 & 95.7 \\ MSE Eff & 1.35 & 1.29 & 1.31 & 1.35 & 1.35 & 1.27 & 2.25 & 1.94 & 2.04 & 2.10 & 2.32 \\ \hline \multicolumn{12}{|c|}{Composite Likelihood Estimator, known $\pi_1$} \\ \hline Bias & 0.01 & 0.00 & 0.00 & 0.00 & -0.01 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.01 \\ CI(\%) & 94.9 & 95.8 & 94.9 & 96.1 & 96.5 & 95.4 & 94.7 & 95.7 & 94.4 & 95.7 & 95.5 \\ MSE Eff & 1.34 & 1.26 & 1.28 & 1.33 & 1.32 & 1.28 & 2.11 & 1.81 & 1.90 & 1.98 & 2.14 \\ \hline \multicolumn{12}{|c|}{Composite Likelihood Estimator, rare} \\ \hline Bias & 0.02 & 0.00 & 0.00 & 0.01 & -0.01 & 0.02 & -0.02 & 0.00 & 0.00 & -0.01 & 0.01 \\ CI(\%) & 94.9 & 95.6 & 94.9 & 95.9 & 96.4 & 95.2 & 94.1 & 95.8 & 94.8 & 95.3 & 95.7 \\ MSE Eff & 1.32 & 1.27 & 1.29 & 1.33 & 1.32 & 1.27 & 2.23 & 1.92 & 2.03 & 2.09 & 2.31 \\ \hline \multicolumn{12}{|c|}{Symmetric Combination Estimator, known $\pi_1$} \\ \hline Bias & 0.00 & -0.03 & 0.00 & 0.00 & -0.01 & 0.01 & -0.03 & 0.02 & 0.00 & -0.02 & 0.01 \\ CI$^*$(\%)& 96.7 & 95.7 & 96.7 & 96.5 & 97.8 & 95.4 & 94.8 & 96.7 & 96.2 & 96.6 & 97.2 \\ MSE Eff & 1.92 & 1.71 & 2.00 & 1.83 & 2.05 & 1.31 & 2.84 & 2.51 & 2.99 & 2.68 & 3.34 \\ \hline \multicolumn{12}{|c|}{Symmetric Combination Estimator, rare} \\ \hline Bias & 0.01 & -0.02 & 0.00 & 0.01 & -0.01 & 0.02 & -0.05 & 0.02 & 0.00 & -0.03 & 0.00 \\ CI$^*$(\%)& 96.4 & 95.7 & 95.7 & 96.3 & 98.1 & 94.9 & 94.0 & 97.0 & 96.4 & 95.5 & 97.5 \\ MSE Eff & 1.86 & 1.71 & 1.92 & 1.78 & 1.95 & 1.27 & 2.75 & 2.66 & 3.08 & 2.69 & 3.58 \\ \hline
\end{tabular} \floatfoot{\baselineskip=11pt \emph{CI}: coverage of a 95\% nominal confidence interval, calculated using asymptotic standard error. \emph{CI$^*$}: coverage of a 95\% nominal confidence interval, calculated using 200 bootstrap samples. \emph{MSE Eff}: mean squared error efficiency when compared to logistic regression.}} \end{table} \subsection{Simulation when the disease rate is misspecified}\label{sec:Misspec} \Cref{tab:Misspec} presents the results of a simulation to evaluate the robustness of our method to misspecification of the population disease rate. A sample of 1000 cases and 1000 controls was simulated using the same scenario as described in \cref{sec:SimScenario}, except that the logistic intercept was modified to yield true population disease rates of 0.05, 0.085, and 0.12. In each instance, 1000 data sets were simulated and the Symmetric Combination Estimator was calculated with misspecified ``known $\pi_1=0.03$'' and again using the rare disease approximation. When using the rare disease approximation, coverage remained near nominal until the true disease rate reached 0.085, and even then the lowest coverage rate was 91.3\% (for the interaction parameter $\beta_{XG1}$, which still demonstrated a mean squared error efficiency of 2.51 compared to logistic regression). When the disease rate was assumed ``known $\pi_1=0.03$'', nominal coverage was seen except when the population disease rate was 0.12. This indicates that the Symmetric Combination Estimator is fairly robust to disease rate misspecification, and even an imprecise estimate of $\pi_1$ is likely to be sufficient to conduct a valid analysis. \begin{table}[!ht] \ttabbox {\caption{\baselineskip=12pt Results of simulations as described in \cref{sec:SimScenario}, but with population disease rates of 0.05, 0.085, and 0.12.
For each disease rate, we simulated 1000 data sets and compared logistic regression, our method with misspecified ``known $\pi_1=0.03$'', and our method using the rare disease approximation.} \label{tab:Misspec}} {\begin{tabular}{ | l | l l l l l | l | l l l l l | } \hline &$\beta_{G1}$ &$\beta_{G2}$&$\beta_{G3}$ & $\beta_{G4}$ & $\beta_{G5}$ & $\beta_{X}$ & $\beta_{XG1}$ & $\beta_{XG2}$ & $\beta_{XG3}$ &$\beta_{XG4}$ & $\beta_{XG5}$ \\ \hline True & 0.18 & 0.18 & 0.00 & 0.18 & 0.00 & 0.41 & 0.26 & 0.00 & 0.00 & 0.26 & 0.00 \\ \hline \multicolumn{4}{l}{\bf Disease Rate = 0.05}&\multicolumn{8}{l}{Logistic Regression} \\ \hline Bias & 0.00 & 0.00 & 0.00 & 0.00 & -0.01 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.01 \\ CI(\%) & 95.8 & 95.2 & 95.9 & 94.7 & 94.4 & 95.6 & 95.7 & 95.5 & 95.3 & 94.8 & 95.3 \\ \hline \multicolumn{4}{|l}{}&\multicolumn{8}{l|}{Symmetric Combination Estimator, ``known $\pi_1=0.03$''} \\ \hline Bias & 0.00 & -0.03 & 0.00 & 0.00 & -0.01 & 0.03 & -0.05 & 0.02 & 0.00 & -0.04 & 0.01 \\ CI$^*$(\%)& 97.6 & 94.1 & 97.0 & 94.8 & 95.8 & 95.1 & 93.7 & 95.6 & 96.8 & 94.8 & 96.8 \\ MSE Eff & 1.84 & 1.74 & 2.07 & 1.69 & 1.97 & 1.30 & 2.61 & 2.65 & 3.14 & 2.37 & 2.97 \\ \hline \multicolumn{4}{|l}{}&\multicolumn{8}{l|}{Symmetric Combination Estimator, rare} \\ \hline Bias & 0.01 & -0.02 & 0.00 & 0.01 & -0.01 & 0.04 & -0.06 & 0.02 & 0.00 & -0.05 & 0.00 \\ CI$^*$(\%)& 96.9 & 94.3 & 97.4 & 94.4 & 96.1 & 94.7 & 92.2 & 95.4 & 96.7 & 93.4 & 96.6 \\ MSE Eff & 1.75 & 1.73 & 2.03 & 1.60 & 1.89 & 1.22 & 2.48 & 2.76 & 3.22 & 2.25 & 3.11 \\ \hline \multicolumn{4}{l}{\bf Disease Rate = 0.085}&\multicolumn{8}{l}{Logistic Regression} \\ \hline Bias & -0.01& 0.01 & 0.00 & 0.00 & 0.00 & 0.00 & 0.01 & -0.01 & 0.00 & 0.00 & 0.01 \\ CI(\%) & 94.3 & 94.9 & 95.4 & 94.4 & 93.5 & 94.5 & 94.8 & 94.0 & 95.1 & 95.8 & 94.5 \\ \hline \multicolumn{4}{|l}{}&\multicolumn{8}{l|}{Symmetric Combination Estimator, ``known $\pi_1=0.03$''} \\ \hline Bias & 0.00 & -0.02 & 0.00 & 0.01 & -0.01 & 0.05 & -0.07 & 0.01 & 0.00 & -0.06 & 0.00 \\ CI$^*$(\%)& 96.4 & 95.2 & 96.9 & 95.7 & 95.7 & 93.6 & 92.7 & 96.0 & 97.6 & 92.9 & 97.1 \\ MSE Eff & 1.84 & 1.81 & 1.99 & 1.61 & 1.90 & 1.18 & 2.65 & 2.81 & 3.18 & 2.15 & 3.20 \\ \hline \multicolumn{4}{|l}{}&\multicolumn{8}{l|}{Symmetric Combination Estimator, rare} \\ \hline Bias & 0.01 & -0.01 & 0.00 & 0.02 & -0.01 & 0.06 & -0.08 & 0.01 & 0.00 & -0.06 & 0.00 \\ CI$^*$(\%)& 96.5 & 95.8 & 96.5 & 95.7 & 95.3 & 92.5 & 91.3 & 96.1 & 97.3 & 91.8 & 97.1 \\ MSE Eff & 1.77 & 1.81 & 1.98 & 1.59 & 1.86 & 1.12 & 2.51 & 2.91 & 3.21 & 2.07 & 3.30 \\ \hline \multicolumn{4}{l}{\bf Disease Rate = 0.12}&\multicolumn{8}{l}{Logistic Regression} \\ \hline Bias & 0.00 & 0.01 & -0.01& 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ CI(\%) & 94.6 & 95.4 & 94.9 & 94.8 & 93.7 & 95.7 & 94.4 & 95.9 & 94.8 & 94.8 & 94.8 \\ \hline \multicolumn{4}{|l}{}&\multicolumn{8}{l|}{Symmetric Combination Estimator, ``known $\pi_1=0.03$''} \\ \hline Bias & 0.00 & -0.02 & 0.00 & 0.02 & -0.01 & 0.06 & -0.08 & 0.01 & 0.00 & -0.07 & 0.01 \\ CI$^*$(\%)& 96.4 & 95.6 & 96.1 & 94.5 & 95.6 & 93.5 & 89.2 & 96.4 & 97.2 & 89.0 & 96.9 \\ MSE Eff & 1.83 & 1.71 & 1.95 & 1.59 & 1.86 & 1.08 & 2.33 & 2.80 & 3.07 & 1.90 & 3.02 \\ \hline \multicolumn{4}{|l}{}&\multicolumn{8}{l|}{Symmetric Combination Estimator, rare} \\ \hline Bias & 0.02 & -0.01 & 0.00 & 0.03 & -0.01 & 0.07 & -0.10 & 0.01 & 0.00 & -0.08 & 0.00 \\
CI$^*$(\%)& 95.6 & 96.1 & 96.4 & 94.5 & 95.1 & 91.8 & 86.0 & 96.6 & 97.0 & 87.4 & 96.0 \\ MSE Eff & 1.72 & 1.72 & 1.91 & 1.53 & 1.79 & 0.99 & 2.11 & 2.95 & 3.14 & 1.78 & 3.11 \\ \hline \end{tabular} \floatfoot{\baselineskip=11pt \emph{CI}: coverage of a 95\% nominal confidence interval, calculated using asymptotic standard error. \emph{CI$^*$}: coverage of a 95\% nominal confidence interval, calculated using 200 bootstrap samples. \\ \emph{MSE Eff}: mean squared error efficiency when compared to logistic regression.}} \end{table} \subsection{Violations of the Gene-Environment Independence Assumption} \label{sec:ViolAssump} \Cref{tab:ViolAssump} presents the results of simulations to examine the robustness of our methods to violations of the gene-environment independence assumption. In these simulations, a sample of 1000 cases and 1000 controls is simulated with genetic variables as described in \cref{sec:SimScenario}, but the environmental variable is normally distributed with mean $\alpha G_1$, $\alpha G_2$, or $\alpha G_3$. We set $\alpha=0.032$ to induce dependence between $X$ and $G_j$ with $R^2=0.001$. Here $\beta_G = \{\hbox{log}(1.2), \hbox{log}(1.2), 0, \hbox{log}(1.2), 0\}$ as in \cref{sec:SimScenario}, but $\beta_X = \hbox{log}(1.35)$ and $\beta_{GX}=\{\hbox{log}(1.21), 0, 0, \hbox{log}(1.21), 0\}$. In each simulation, the logistic intercept was selected to give a population disease rate of 0.03. In the first simulation, $X$ is correlated with $G_1$, which has a nonzero main effect and a nonzero interaction; in the second simulation, $X$ is correlated with $G_2$, which has a nonzero main effect but no interaction effect; in the third simulation, $X$ is correlated with $G_3$, which has neither main nor interaction effects. We find that violating the gene-environment independence assumption induces bias in the estimated interaction parameter between the environmental variable and the specific SNP that violates the assumption, while the estimated interaction parameters of the other SNPs are unaffected. When $\pi_1$ is known, estimates of the main effect of the SNP that violates the assumption are also unaffected. \begin{table}[!ht] \ttabbox {\caption{\baselineskip=12pt Results of simulations violating the gene-environment independence assumption, with $X$ normally distributed with mean $0.032\,G_j$ for each of the SNPs $G_1$, $G_2$, and $G_3$ in turn.
In each instance, we simulated 1000 data sets and compared our method, both with known $\pi_1$ and using the rare disease approximation, to logistic regression.} \label{tab:ViolAssump}} {\begin{eqnarray}gin{tabular}{ | l | l l l l l | l | l l l l l | } \hline &$\begin{eqnarray}ta_{G1}$ &$\begin{eqnarray}ta_{G2}$&$\begin{eqnarray}ta_{G3}$ & $\begin{eqnarray}ta_{G4}$ & $\begin{eqnarray}ta_{G5}$ & $\begin{eqnarray}ta_{X}$ & $\begin{eqnarray}ta_{XG1}$ & $\begin{eqnarray}ta_{XG2}$ & $\begin{eqnarray}ta_{XG3}$ &$\begin{eqnarray}ta_{XG4}$ & $\begin{eqnarray}ta_{XG5}$ \\ \hline True & 0.18 & 0.18 & 0.00 & 0.18 & 0.00 & 0.30 & 0.19 & 0.00 & 0.00 & 0.19 & 0.00 \\ \hline \multicolumn{4}{l}{\bf $X$ correlated with $G_1$}&\multicolumn{8}{l}{Logistic Regression} \\ \hline Bias & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.01 & 0.00 & 0.00 & 0.00 & 0.01 \\ CI(\%) & 95.3 & 94.9 & 95.3 & 94.3 & 93.6 & 95.4 & 95.5 & 94.0 & 94.9 & 94.1 & 95.6 \\ \hline \multicolumn{4}{|l}{}&\multicolumn{8}{l|}{Symmetric Combination Estimator, known $\pi_1$} \\ \hline Bias & 0.00 & -0.03 & 0.00 & 0.00 & 0.00 & -0.01 & 0.05 & 0.01 & 0.00 & -0.02 & 0.00 \\ CI$^*$(\%)& 95.8 & 93.3 & 95.6 & 94.3 & 95.4 & 94.2 & 92.5 & 92.7 & 95.6 & 93.5 & 95.4 \\ MSE Eff & 1.31 & 1.13 & 1.37 & 1.26 & 1.44 & 1.34 & 1.91 & 2.40 & 2.71 & 2.41 & 2.91 \\ \hline \multicolumn{4}{|l}{}&\multicolumn{8}{l|}{Symmetric Combination Estimator, rare} \\ \hline Bias & 0.00 & -0.03 & 0.00 & 0.00 & 0.00 & 0.01 & 0.01 & 0.01 & 0.00 & -0.03 & 0.00 \\ CI$^*$(\%)& 96.4 & 92.5 & 96.3 & 94.8 & 95.7 & 95.5 & 95.3 & 93.5 & 95.8 & 90.3 & 95.3 \\ MSE Eff & 1.35 & 1.14 & 1.47 & 1.34 & 1.51 & 1.43 & 3.24 & 2.67 & 2.96 & 2.27 & 3.59 \\ \hline \multicolumn{4}{l}{\bf $X$ correlated with $G_2$}&\multicolumn{8}{l}{Logistic Regression} \\ \hline Bias & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.01 & 0.00 \\ CI(\%) & 94.8 & 95.1 & 94.5 & 94.5 & 95.3 & 96.2 & 94.3 & 93.4 & 94.7 & 95.3 & 95.2 \\ \hline \multicolumn{4}{|l}{}&\multicolumn{8}{l|}{Symmetric Combination Estimator, known $\pi_1$} \\ \hline Bias & -0.01 & -0.02 & 0.00 & 0.00 & 0.00 & -0.03 & -0.02 & 0.06 & 0.00 & -0.02 & 0.00 \\ CI$^*$(\%)& 95.6 & 95.2 & 95.8 & 95.5 & 96.9 & 93.1 & 94.2 & 83.5 & 95.3 & 94.1 & 96.1 \\ MSE Eff & 1.33 & 1.27 & 1.35 & 1.32 & 1.42 & 1.11 & 2.79 & 1.54 & 2.84 & 2.63 & 3.21 \\ \hline \multicolumn{4}{|l}{}&\multicolumn{8}{l|}{Symmetric Combination Estimator, rare} \\ \hline Bias & -0.01 & -0.02 & 0.00 & 0.00 & 0.00 & -0.01 & -0.05 & 0.05 & 0.00 & -0.03 & 0.00 \\ CI$^*$(\%)& 95.8 & 94.9 & 95.9 & 95.1 & 96.3 & 96.0 & 87.7 & 84.7 & 95.3 & 90.3 & 97.0 \\ MSE Eff & 1.34 & 1.27 & 1.44 & 1.35 & 1.45 & 1.37 & 2.41 & 1.79 & 3.21 & 2.43 & 3.95 \\ \hline \multicolumn{4}{l}{\bf $X$ correlated with $G_3$}&\multicolumn{8}{l}{Logistic Regression} \\ \hline Bias & 0.00 & 0.00 & 0.00 & 0.00 & -0.01 & 0.00 & 0.01 & 0.00 & 0.00 & 0.00 & 0.01 \\ CI(\%) & 94.9 & 94.8 & 96.0 & 94.9 & 95.2 & 95.0 & 95.9 & 95.0 & 95.6 & 94.7 & 94.3 \\ \hline \multicolumn{4}{|l}{}&\multicolumn{8}{l|}{Symmetric Combination Estimator, known $\pi_1$} \\ \hline Bias & -0.01 & -0.02 & 0.01 & 0.00 & -0.01 & -0.03 & -0.01 & 0.01 & 0.05 & -0.02 & 0.00 \\ CI$^*$(\%)& 96.0 & 93.8 & 96.3 & 96.4 & 96.5 & 92.2 & 93.5 & 95.6 & 89.9 & 94.0 & 94.8 \\ MSE Eff & 1.33 & 1.16 & 1.34 & 1.25 & 1.34 & 1.15 & 2.56 & 2.62 & 1.63 & 2.33 & 2.89 \\ \hline \multicolumn{4}{|l}{}&\multicolumn{8}{l|}{Symmetric Combination Estimator, rare} \\ \hline Bias & -0.01 & -0.03 & 0.01 & 0.00 & -0.01 & -0.01 & -0.04 & 0.01 & 0.04 & -0.03 & 0.00 \\ CI$^*$(\%)& 
95.6 & 93.5 & 96.3 & 96.3 & 96.5 & 94.4 & 88.2 & 96.5 & 89.1 & 90.8 & 95.5 \\ MSE Eff & 1.41 & 1.16 & 1.35 & 1.28 & 1.37 & 1.39 & 2.37 & 3.01 & 1.78 & 2.17 & 3.62 \\ \hline \end{tabular} \floatfoot{\baselineskip=11pt \emph{CI}: coverage of a 95\% nominal confidence interval, calculated using asymptotic standard error. \emph{CI$^*$}: coverage of a 95\% nominal confidence interval, calculated using 100 bootstrap samples. \\ \emph{MSE Eff}: mean squared error efficiency when compared to logistic regression.}} \end{table} \subsection{Simulations with alternative distributions for $G$ and $X$} \label{sec:AltDist} {\cal C}ref{tab:AltDist} presents the results of a simulation in which $X$ and $G$ are both multivariate with a combination of discrete and continuous components. $G_1$ and $G_2$ are correlated SNPs in Hardy-Weinberg equilibrium with minor allele frequencies (0.2, 0.3), and $G_3$ has a gamma distribution with shape = 20 and scale = 20 (to simulate a skewed polygenic risk score). $X_1$ is binary with frequency 0.5 and $X_2$ has a standard normal distribution. Here $\begin{eqnarray}ta_G = \{\hbox{log}(1.2), 0, \hbox{log}(1.38)\}$, $\begin{eqnarray}ta_X = \{\hbox{log}(1.5), \hbox{log}(1.14)\}$, $\begin{eqnarray}ta_{GX}=\{\hbox{log}(1.1), 0, 0, 0, 0, 0\}$, and the logistic intercept was selected to give a population disease rate of 0.05. Using these settings, 1000 data sets were simulated with 1000 cases and 1000 controls each. \begin{eqnarray}gin{table}[!ht] \ttabbox {\caption{\baselineskip=12pt Results of 1000 simulations with multivariate $G$ and $X$, comparing the bias, coverage, and efficiency of standard logistic regression to our Symmetric Combination Estimator, both with known $\pi_1$ and using the rare disease approximation.} \label{tab:AltDist}} {\begin{eqnarray}gin{tabular}{ | l | l l l l l l l l l l l | } \hline &$\begin{eqnarray}ta_{G1}$ &$\begin{eqnarray}ta_{G2}$ &$\begin{eqnarray}ta_{G3}$ &$\begin{eqnarray}ta_{X1}$ &$\begin{eqnarray}ta_{X2}$ &$\begin{eqnarray}ta_{X1G1}$ &$\begin{eqnarray}ta_{X1G2}$ &$\begin{eqnarray}ta_{X1G3}$ &$\begin{eqnarray}ta_{X2G1}$ &$\begin{eqnarray}ta_{X2G2}$ &$\begin{eqnarray}ta_{X2G3}$ \\ \hline True & 0.18 & 0.00 & 0.32 & 0.41 & 0.14 & 0.10 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline \multicolumn{12}{|c|}{Logistic Regression} \\ \hline Bias & 0.01 & -0.01 & 0.00 & 0.00 & 0.00 & -0.01 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ CI(\%) & 94.4 & 94.3 & 95.2 & 94.6 & 94.0 & 95.7 & 94.7 & 94.7 & 94.5 & 95.4 & 94.6 \\ \hline \multicolumn{12}{|c|}{Symmetric Combination Estimator, known $\pi_1$} \\ \hline Bias & -0.02 & 0.00 & -0.06 & -0.01 & -0.02 & -0.02 & 0.00 & 0.01 & 0.00 & 0.00 & 0.00 \\ CI$^*$(\%)& 93.4 & 94.7 & 94.9 & 94.8 & 96.0 & 94.1 & 95.4 & 96.1 & 95.8 & 96.9 & 96.5 \\ MSE Eff & 1.48 & 1.66 & 1.61 & 2.44 & 2.75 & 2.22 & 2.67 & 2.84 & 2.90 & 2.80 & 3.03 \\ \hline \multicolumn{12}{|c|}{Symmetric Combination Estimator, rare} \\ \hline Bias & -0.01 & 0.00 & -0.06 & 0.00 & -0.02 & -0.03 & 0.00 & 0.01 & 0.00 & 0.00 & 0.00 \\ CI$^*$(\%)& 93.3 & 94.5 & 95.3 & 94.8 & 95.8 & 94.1 & 95.6 & 96.3 & 95.6 & 96.9 & 96.2 \\ MSE Eff & 1.54 & 1.73 & 1.66 & 2.60 & 2.98 & 2.37 & 2.98 & 3.08 & 3.25 & 3.07 & 3.32 \\ \hline \end{tabular} \floatfoot{\baselineskip=11pt \emph{CI}: coverage of a 95\% nominal confidence interval, calculated using asymptotic standard error. \\ \emph{CI$^*$}: coverage of a 95\% nominal confidence interval, calculated using 100 bootstrap samples. 
\\ \emph{MSE Eff}: mean squared error efficiency when compared to logistic regression.}} \end{table} \subsection{Creating the polygenic risk score for the PLCO data analysis} \label{sec:PRSweights} {\cal C}ref{tab:PRSweights} displays the SNPs used in the calculation of the polygenic risk score for the analysis of the Prostate, Lung, Colorectal and Ovarian cancer screening trial data described in \cref{sec:DataData}. \begin{eqnarray}gin{table}[ht] \begin{eqnarray}gin{center} \caption{\baselineskip=12pt SNPs involved in creating the polygenic risk score, and their regression coefficients} \label{tab:PRSweights} \begin{eqnarray}gin{tabular}{lr} {\bf RS Number} & {\bf Coefficient} \\ rs11249433 & -0.02813492 \\ rs1045485 & -0.09307971 \\ rs13387042 & -0.26203658 \\ rs4973768 & 0.08013260 \\ rs10069690 & 0.06459363 \\ rs10941679 & 0.09185539 \\ rs889312 & -0.00565121 \\ rs17530068 & 0.09668742 \\ rs2046210 & 0.09851217 \\ rs1562430 & -0.14871719 \\ rs1011970 & 0.05329783 \\ rs865686 & -0.02913340 \\ rs2380205 & -0.01821032 \\ rs10995190 & -0.04275836 \\ rs2981582 & 0.14008397 \\ rs909116 & 0.04955235 \\ rs614367 & 0.06438418 \\ rs3803662 & 0.27080105 \\ rs6504950 & -0.17586244 \\ rs8170 & 0.08570773 \\ rs999737 & -0.13737833 \\ \end{tabular} \end{center} \end{table} \end{document}
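The simulation studies summarized in the tables above share one basic recipe: draw a large source population, assign disease status from a logistic model whose intercept is tuned to the target population disease rate, and then sample a fixed number of cases and controls. The following Python sketch illustrates that recipe with a single SNP and placeholder effect sizes; it is only a minimal illustration of the sampling design, not the code used to produce the results.
\begin{verbatim}
import numpy as np

# Minimal sketch of the case-control sampling design described above
# (placeholder parameters; one SNP instead of the full covariate set).
rng = np.random.default_rng(0)

def simulate_case_control(n_cases=1000, n_controls=1000, target_rate=0.05,
                          pool=200_000):
    g = rng.binomial(2, 0.25, size=pool)      # one SNP in HWE, MAF 0.25 (placeholder)
    x = rng.normal(size=pool)                 # independent environmental covariate
    eta = np.log(1.2) * g + np.log(1.4) * x + np.log(1.25) * g * x

    # Tune the logistic intercept by bisection so that the population
    # disease rate matches target_rate.
    lo, hi = -20.0, 5.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        rate = np.mean(1.0 / (1.0 + np.exp(-(mid + eta))))
        lo, hi = (mid, hi) if rate < target_rate else (lo, mid)

    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(mid + eta))))
    cases = rng.choice(np.flatnonzero(y == 1), n_cases, replace=False)
    controls = rng.choice(np.flatnonzero(y == 0), n_controls, replace=False)
    idx = np.concatenate([cases, controls])
    return y[idx], g[idx], x[idx]

y, g, x = simulate_case_control()
print(y.mean(), g.mean(), x.mean())           # half cases by construction
\end{verbatim}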
\begin{document} \title{Violation of Leggett-Garg inequalities in single quantum dot} \author{Yong-Nan Sun, Yang Zou, Rong-Chun Ge, Jian-Shun Tang, Chuan-Feng Li $\footnote{email: [email protected]}$, and Guang-Can Guo} \affiliation{Key Laboratory of Quantum Information, University of Science and Technology of China, CAS, Hefei, 230026, People's Republic of China} \date{\today } \pacs{03.67.Mn, 42.50.-p, 78.67.Hc} \begin{abstract} We investigate the violation of Leggett-Garg (LG) inequalities in quantum dots under the stationarity assumption. By comparing two types of LG inequalities, we find a better one, which is easier to test in experiment. In addition, we show that the fine-structure splitting, background noise and temperature of quantum dots all influence the violation of LG inequalities. \end{abstract} \maketitle The transition from the strange quantum description to our familiar classical description is one of the fundamental questions in understanding the world. This issue was first investigated by Leggett and Garg in 1985~\cite{1} and led to the formulation of the so-called temporal Bell inequalities. Instead of the correlations between the states of two spatially separated systems, they considered the correlations of the state of a single system at different times. Moreover, in order to obtain the inequalities, Leggett and Garg made two general assumptions~\cite{2}: (1) any two-level macroscopic system will be, at any time, in one of the two accessible states (macroscopic realism); (2) the actual state of the system can be determined with arbitrarily small perturbation of its subsequent dynamics (non-invasive measurability). Leggett-Garg (LG) inequalities provide a criterion to characterize the boundary between the quantum realm and the classical one, and offer the possibility of identifying macroscopic quantum coherence. Due to the assumption of noninvasive measurement, which describes the ability to determine the state of the system without any influence on its subsequent dynamics, it is difficult to test the LG inequalities experimentally. There have been many proposals to test such inequalities by employing superconducting quantum interference devices~\cite{1,3,4}, but the extreme difficulty of experiments with truly macroscopic systems, as well as the requirement of noninvasive measurement, has made it impossible to draw any clear conclusion. The LG inequalities have been tested in an optical system with a CNOT gate~\cite{5}. A more recent approach takes advantage of weak measurement~\cite{6,7,8}, in which the dynamics of the system is only slightly disturbed. In this work, we follow a different approach. Testable LG inequalities are derived by replacing the noninvasive measurement assumption with the stationarity assumption~\cite{9,10}. Consider an observable $Q(t)$ of a two-level system, such as the polarization of a photon; in this case, we can define $Q(t)=1$ when the state of the photon is $|H\rangle$, and $Q(t)=-1$ when the state of the photon is $|V\rangle$. The autocorrelation function of this observable is defined as $K(t_{1},t_{2})=\langle Q(t_{1})Q(t_{2}) \rangle$. For three different times $t_{1}$, $t_{2}$ and $t_{3}$ $(t_{1}<t_{2}<t_{3})$, we obtain the two LG inequalities under the realistic description~\cite{1} \begin{equation} K(t_{1},t_{2})+K(t_{2},t_{3})+K(t_{1},t_{3})\geq -1 , \end{equation} \begin{equation} K(t_{1},t_{3})-K(t_{1},t_{2})-K(t_{2},t_{3})\geq -1 .
\end{equation} Once the assumption of stationarity is introduced, in the case of $t_{2}-t_{1}=t_{3}-t_{2}=t$, the inequalities become~\cite{9} \begin{equation} K_{+}=K(2t)+2K(t)\geq-1 , \end{equation} \begin{equation} K_{-}=K(2t)-2K(t)\geq-1 . \end{equation} These inequalities set the boundary of the temporal correlations and are amenable to experimental testing by a two-shot experiment. In this paper, we compare the two types of LG inequalities and discuss how the fine-structure splitting, the background noise and the temperature influence the violation of the LG inequalities in a quantum dot system. Semiconductor quantum dots, often referred to as ``artificial atoms'', have well-defined discrete energy levels~\cite{11} due to their three-dimensional confinement of electrons. The atomlike properties of the single semiconductor quantum dot, together with its ease of integration into more complicated device structures, have made it an attractive and widely studied system for applications in quantum information~\cite{12}. One of their potentially useful properties is the emission of polarization-entangled photon pairs by the radiative decay of the biexciton state~\cite{13,14,15,16}. \begin{figure} \caption{(Color online). Energy-level schematic of the biexciton cascade process: the ground state $|0\rangle$, the two linearly polarized exciton states $|2\rangle$ and $|1\rangle$, the biexciton state $|3\rangle$, and the fine-structure splitting $S$.} \end{figure} The energy-level structure of a quantum dot is shown in Figure 1; we focus on the two exciton states $|2\rangle$ and $|1\rangle$ of the quantum dot. We define $Q(t)=1$ when the quantum dot state is $\frac{1}{\sqrt{2}}(|2\rangle+|1\rangle)$ and $Q(t)=-1$ when the quantum dot state is $\frac{1}{\sqrt{2}}(|2\rangle-|1\rangle)$. The quantum dot is initially excited to the biexciton state by a short-pulsed laser and then evolves in a thermal bath. After emitting a biexciton photon, the proposed single quantum dot system will be in an entangled photon-exciton state. For an ideal quantum dot with degenerate intermediate exciton states $|2\rangle$ and $|1\rangle$, this is the maximally entangled state $ |\psi \rangle=\frac{1}{\sqrt{2}}[|H \rangle |2 \rangle+|V \rangle |1 \rangle ] $. Then, if we find the biexciton photon in the state $\frac{1}{\sqrt{2}}(|H\rangle+|V\rangle)$, the state of the quantum dot system will be $\frac{1}{\sqrt{2}}(|2\rangle+|1\rangle)$. With the help of this measurement, we can prepare the initial state of the quantum dot system. After a period of evolution, the quantum dot system will emit the second (exciton) photon. Then, if the state of the second photon is detected to be $\frac{1}{\sqrt{2}}(|H\rangle-|V\rangle)$, it indicates that the quantum dot system ends up in $\frac{1}{\sqrt{2}}(|2\rangle - |1\rangle)$. In this case, because of the correlation between the photons and the quantum dot system, we can detect the photon states to evaluate the LG inequalities. Taking the acoustic phonon-assisted transition process into account, the density matrix of this four-level system can be described by the master equation~\cite{17} \begin{eqnarray} \dot{\hat{\rho}}=-i[\hat{H_{0}},\hat{\rho}]+L(\hat{\rho}), \end{eqnarray} where $\hat{H_{0}}=\sum_{i=0}^{3} \omega_{i}|i \rangle \langle i|$ and $L(\hat{\rho})$ is the Lindblad term describing dissipation~\cite{18}.
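As a minimal numerical illustration of this master equation (with placeholder energies and decay rates, and without the phonon-assisted, temperature-dependent terms that enter the actual model), the Lindblad dynamics of the four-level cascade can be integrated directly:
\begin{verbatim}
import numpy as np

# Sketch only: evolve a 4-level density matrix under
# d(rho)/dt = -i[H0, rho] + sum_k ( L_k rho L_k^+ - (1/2){L_k^+ L_k, rho} ).
# Energies and decay rates below are placeholders, not fitted values.
dim = 4
w = np.array([0.0, 1.0, 1.003, 2.0])            # |0>,|1>,|2>,|3>; small splitting of |1>,|2>
H0 = np.diag(w).astype(complex)

def lower(i, j):
    """Jump operator |i><j| describing decay from level j to level i."""
    L = np.zeros((dim, dim), dtype=complex)
    L[i, j] = 1.0
    return L

gamma = 0.1                                     # placeholder radiative decay rate
c_ops = [np.sqrt(gamma) * lower(2, 3),          # biexciton -> exciton |2>
         np.sqrt(gamma) * lower(1, 3),          # biexciton -> exciton |1>
         np.sqrt(gamma) * lower(0, 2),          # exciton |2> -> ground
         np.sqrt(gamma) * lower(0, 1)]          # exciton |1> -> ground

def rhs(rho):
    drho = -1j * (H0 @ rho - rho @ H0)
    for L in c_ops:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return drho

def evolve(rho, t, steps=2000):
    """Fixed-step fourth-order Runge-Kutta integration of the master equation."""
    dt = t / steps
    for _ in range(steps):
        k1 = rhs(rho)
        k2 = rhs(rho + 0.5 * dt * k1)
        k3 = rhs(rho + 0.5 * dt * k2)
        k4 = rhs(rho + dt * k3)
        rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

rho0 = np.zeros((dim, dim), dtype=complex)
rho0[3, 3] = 1.0                                # start in the biexciton state |3>
print(np.real(np.diag(evolve(rho0, 20.0))))     # level populations after the evolution
\end{verbatim}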
When projected onto the subspace spanned by the four basis states $\{|H_{1}H_{2}\rangle,|H_{1}V_{2}\rangle,|V_{1}H_{2}\rangle,|V_{1}V_{2}\rangle\}$, we can get the corresponding two-photon polarization density matrix $\hat{\rho}_{_{pol}}$. Taking into account the background noise and the overlap of the photon-pair frequency distributions, the total density matrix of the photon pairs is divided into three parts~\cite{18} \begin{eqnarray} \hat{\rho}_{_{tot}}=\frac{1}{1+g}[\eta\hat{\rho}_{_{pol}}+(1-\eta)\hat{\rho}_{_{noc}}+g\hat{\rho}_{_{noise}}], \end{eqnarray} where $\hat{\rho}_{_{noc}}$ arises from the distinguishability between the two decay paths induced by the fine-structure splitting, representing the nonoverlap of the photon frequency distributions. The third term $\hat{\rho}_{_{noise}}$, which describes the background noise, is set as an identity matrix. Finally, the total density matrix of the photon pairs can be expressed as follows \begin{displaymath} \mathbf{\hat{\rho}_{_{tot}}}= \left( \begin{array}{cccc} \rho_{11} & 0 & 0 & \rho_{14} \\ 0 & \rho_{22} & 0 & 0 \\ 0 & 0 & \rho_{33} & 0 \\ \rho_{41} & 0 & 0 & \rho_{44} \end{array} \right) \end{displaymath} In our model, we can define the observable $Q(t)=1$ when the state of the photon is $|+\rangle=\frac{1}{\sqrt{2}}(|H\rangle+|V\rangle)$, and $Q(t)=-1$ when the state of the photon is $|-\rangle=\frac{1}{\sqrt{2}}(|H\rangle-|V\rangle)$. At time $t=0$, we measure the first photon in the basis $\{|+\rangle,|-\rangle\}$ and postselect the outcome $|+\rangle$ to prepare the initial state of the quantum dot system. After a period of time $t$, we measure the second photon in the same basis. We denote by $P_{++}$ the joint probability that the second photon is found in $|+\rangle$, and by $P_{+-}$ the joint probability that it is found in $|-\rangle$. We detect the photon pairs with delays $\tau$ in the range $t\leq \tau\leq (t+\omega)$, by employing a single timing gate~\cite{19}, where $t$ is the start time of the gate. Finally, we can evaluate the autocorrelation by the expression \begin{eqnarray} K(t)=P_{++}(t)-P_{+-}(t) . \end{eqnarray} The autocorrelation is measured twice, in two independent experiments that begin with the system in an identical initial state and evolve under identical conditions. In the first experiment the autocorrelation is measured at time $t$, and in the second experiment it is measured at time $2t$. The joint probabilities can be calculated from the total density matrix of the photon pairs, so the autocorrelation can be expressed as \begin{eqnarray} K(t)=\frac{\rho_{14}(t)+\rho_{41}(t)}{\rho_{11}(t)+\rho_{22}(t)+\rho_{33}(t)+\rho_{44}(t)} . \end{eqnarray} Therefore, we can evaluate the LG inequalities through the expressions $K_{+}=K(2t)+2K(t)$ and $K_{-}=K(2t)-2K(t)$. \begin{figure} \caption{Two types of LG inequalities. The fine-structure splitting is $S=3$ $\mu$eV, the background noise $g=0$, the gate width $\omega=50$ ps and the temperature $T=5$ K.} \end{figure} Our results are shown in Fig. 2. The LG correlators oscillate with time, and the amplitude of the oscillation decays as time increases. By comparing the two types of LG inequalities, we find that $K_{-}$ reaches the classical limit $-1$ first. As the time delay between the photon pairs increases, the biphoton coincidence rate decreases; therefore, $K_{-}$ is easier to test in the experiment, and we discuss $K_{-}$ in the following results.
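The evaluation just described requires only the four relevant entries of $\hat{\rho}_{_{tot}}$. The following sketch (with an illustrative density matrix of the displayed form, not one computed from the full model) shows the computation of $K(t)$, $K_{+}$ and $K_{-}$:
\begin{verbatim}
import numpy as np

# Sketch only: given rho_tot(t) in the basis {|H1H2>,|H1V2>,|V1H2>,|V1V2>},
# K(t) = (rho_14 + rho_41) / (rho_11 + rho_22 + rho_33 + rho_44).
def K(rho):
    return np.real((rho[0, 3] + rho[3, 0]) / np.trace(rho))

def lg_parameters(rho_t, rho_2t):
    """Return (K+, K-); a macrorealistic description requires both to be >= -1."""
    kt, k2t = K(rho_t), K(rho_2t)
    return k2t + 2 * kt, k2t - 2 * kt

def example_rho(phi, visibility=1.0):
    """Toy matrix of the displayed form, with coherence rho_14 = visibility*cos(phi)/2."""
    rho = np.diag([0.5, 0.0, 0.0, 0.5]).astype(complex)
    rho[0, 3] = rho[3, 0] = 0.5 * visibility * np.cos(phi)
    return rho

k_plus, k_minus = lg_parameters(example_rho(2.0), example_rho(4.0))
print(k_plus, k_minus, "violation:", (k_plus < -1) or (k_minus < -1))
\end{verbatim}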
\begin{figure} \caption{The relationship between the fine-structure splitting and the Leggett-Garg inequality. The background noise is $g=0.3$, the gate width $\omega=50$ ps and the temperature $T=5$ K. The violation of the LG inequality becomes easier when the fine-structure splitting decreases.} \end{figure} \begin{figure} \caption{The relationship between the background noise and the LG inequality. The fine-structure splitting is $S=3$ $\mu$eV, the gate width $\omega=50$ ps and the temperature $T=5$ K.} \end{figure} \begin{figure} \caption{The relationship between the temperature and the LG inequality. The fine-structure splitting is $S=2.5$ $\mu$eV, the background noise $g=0.2$ and the gate width $\omega=50$ ps.} \end{figure} We then analyze the relationship between the LG inequality and the fine-structure splitting. The result is presented in Fig. 3. In this case, the background noise is $g=0.3$, the gate width $\omega=50$ ps and the temperature $T=5$ K. When the fine-structure splitting is small, the LG inequality is violated easily. However, with the increase of the fine-structure splitting, the violation of the LG inequality becomes increasingly weak. It can be seen that $K_{-}$ does not violate the classical limit $-1$ when the fine-structure splitting becomes large enough. This implies that when the fine-structure splitting becomes large, the evolution process can be described by a classical realistic theory. Furthermore, the influence of the background noise and of the temperature is investigated. As shown in Fig. 4 and Fig. 5, when the background noise and the temperature decrease, the LG correlator goes below the classical limit $-1$ more easily. The transition from a quantum process to a classical process can also be seen clearly. Finally, it should be noticed that there is an approximation in our model. We use the state of the emitted photon to determine the state of the quantum dot; this is exact when the fine-structure splitting is zero. In a real situation, the fine-structure splitting must be nonzero in order to introduce the evolution, and the partial distinguishability of the spectra of $|H\rangle$ and $|V\rangle$ will introduce errors. However, it can easily be verified that these errors only make the LG inequality more difficult to violate, i.e., the requirement is not loosened in our model. In summary, we have investigated the violation of the LG inequalities in a quantum dot system. When the fine-structure splitting of the quantum dot system, the background noise and the temperature become small, we achieve the maximal violation. Better results may be obtained by changing the method used to prepare the initial state. Finally, we can see clearly from the results that the LG inequalities can be used as a criterion to identify the boundary of the classical realistic description. This work was supported by the National Fundamental Research Program and the National Natural Science Foundation of China (Grant Nos. 60921091 and 10874162). \end{document}
\begin{document} \keywords{Gain graph, $G$-cospectrality, $\pi$-cospectrality, Unitary representation, Switching equivalence, Switching isomorphism.} \title{On cospectrality of gain graphs} \author[M. Cavaleri]{Matteo Cavaleri} \address{Matteo Cavaleri, Universit\`{a} degli Studi Niccol\`{o} Cusano - Via Don Carlo Gnocchi, 3 00166 Roma, Italia} \email{[email protected] \textrm{(Corresponding Author)}} \author[A. Donno]{Alfredo Donno} \address{Alfredo Donno, Universit\`{a} degli Studi Niccol\`{o} Cusano - Via Don Carlo Gnocchi, 3 00166 Roma, Italia} \email{[email protected]} \begin{abstract} We define $G$-cospectrality of two $G$-gain graphs $(\Gamma,\psi)$ and $(\Gamma',\psi')$, proving that it is a switching isomorphism invariant. When $G$ is a finite group, we prove that $G$-cospectrality is equivalent to cospectrality with respect to all unitary representations of $G$. Moreover, we show that two connected gain graphs are switching equivalent if and only if the gains of their closed walks centered at an arbitrary vertex $v$ can be simultaneously conjugated. In particular, the number of switching equivalence classes on an underlying graph $\Gamma$ with $n$ vertices and $m$ edges is equal to the number of simultaneous conjugacy classes of the group $G^{m-n+1}$. We provide examples of $G$-cospectral non-switching isomorphic graphs and we prove that any gain graph on a cycle is determined by its $G$-spectrum. Moreover, we show that when $G$ is a finite cyclic group, the cospectrality with respect to a faithful irreducible representation implies the cospectrality with respect to any other faithful irreducible representation, and that the same assertion is false in general. \end{abstract} \maketitle \begin{center} {\footnotesize{\bf Mathematics Subject Classification (2020)}: 05C22, 05C25, 05C50, 20C15.} \end{center} \section{Introduction} The spectrum of the adjacency matrix of a graph determines many properties of the graph: number of edges, connectivity, bipartiteness, automorphisms, etc. (see \cite{cve}). This relation between graph structural properties and linear algebra is the focus of \emph{spectral graph theory}. Indeed, one can be interested in what properties of a graph are determined by its spectrum, but also, more drastically, in which graphs are determined by their spectrum \cite{deter}. Actually, the latter issue is old (it originated from chemistry \cite{gp}) and it is clearly related to the problem of constructing pairs of \emph{cospectral} non-isomorphic graphs. From the first examples of such pairs \cite{Colla} to today, many works have been devoted to the subject: we point out the result of Schwenk, asserting that \emph{almost all} trees have cospectral, non-isomorphic mates \cite{tree}, and the development by Godsil and McKay of a tool to generate cospectral graphs \cite{GoMc}, known as \emph{Godsil-McKay switching}. \\\indent Part of the success of graphs is due to the fact that they model systems of \emph{things} that, pairwise, either interact or do not. The introduction of signed graphs originates from the need to distinguish also between positive and negative interactions. A signed graph is a graph whose edges can be positive or negative \cite{Harary}: more precisely, it is a pair $(\Gamma,\sigma)$ where $\Gamma$ is the \emph{underlying graph} and $\sigma$ is the \emph{signature}, that is a map from the set of the edges of $\Gamma$ to $\{\pm 1\}$.
As graphs are investigated up to isomorphism, signed graphs are investigated up to \emph{switching isomorphism} \cite{zasign,zasgraph}. The notion of switching isomorphism is inspired by Seidel's switching \cite{sei}, but it is based on switching the signs of the edges incident to a fixed vertex. A \emph{balanced signed graph} is a graph where the product of the signs of the edges along any cycle is positive or, equivalently, a signed graph whose edges can be switched to be all positive \cite{Harary}. A signed graph has a natural \emph{signed adjacency matrix}, a matrix whose non-zero entries belong to $\{\pm 1\}$. It turns out that the spectrum of the signed adjacency matrix is invariant under switching isomorphism: all the requisites are there for the development of a \emph{spectral theory of signed graphs} \cite{zasmat}. \\\indent Spectral theory of signed graphs is not only a collection of extensions of results from the classical setting. One of its first results, for example, is Acharya's characterization of balance \cite{acharya}, which is peculiar to signed graphs. It states that a signed graph is balanced if and only if it is cospectral with its underlying graph. Moreover, even direct generalizations from the classical theory are often far from trivial, and many questions still remain open \cite{open}. Many works on the subject can be found in the recent literature, even restricting the field to cospectrality \cite{ak,BB,cos,lolli}. In particular, in \cite{cos} the Godsil-McKay switching is extended to signed graphs. \\\indent A further generalization of the notion of graph is that of gain graph, also called voltage graph. For a given group $G$, a $G$-gain graph (or gain graph over $G$) is a pair $(\Gamma,\psi)$ where $\Gamma$ is the underlying graph and $\psi$ is the gain function, that is a map from the pairs $(u,v)$ of adjacent vertices $u,v$ of $\Gamma$ to the group $G$, with the property that $\psi(u,v)=\psi(v,u)^{-1}$, so that $\psi(u,v)$ is the gain from $u$ to $v$. The reason behind the introduction of gain graphs is to be found, on the one hand, in the theory of biased graphs, of which gain graphs are special cases \cite{zaslavsky1}, and, on the other hand, in the theory of coverings in topological graph theory \cite{voltage}. For the numerous interconnections with other fields and for a rich and regularly updated bibliography, we refer to \cite{zasbib}. In any case, it is evident how the concept of balance is immediately generalized from signed graphs to gain graphs, as well as, perhaps less immediately, that of switching isomorphism. \\\indent When the group $G$ is a subgroup of the multiplicative group of the complex numbers $\mathbb C$, there is a natural complex matrix playing the role of adjacency matrix of a gain graph $(\Gamma, \psi)$. Moreover, if $G$ is a subgroup of the group $\mathbb T=\{z\in \mathbb{C} : |z|=1\}$ of the complex units, this matrix is also Hermitian with a real spectrum, which is also a switching isomorphism invariant. This allowed the development of a \emph{spectral theory of $\mathbb T$-gain graphs} \cite{reff1}. In particular, Acharya's characterization of balance \cite{adun} and Godsil-McKay switching \cite{gm} have been extended to $\mathbb T$-gain graphs. \\\indent Some other special groups $G$ have been considered as groups of gains for a graph: for example, the group of invertible elements of a finite field in \cite{shahulgermina} and the group of quaternion units in \cite{quat}.
In both cases, there was already a definition of spectrum for matrices with entries in $G\cup\{0\}$, and then a definition for the spectrum of a $G$-gain graph. Among other things, an extension of Acharya's characterization to these gain graphs has been proven in \cite{quat,shahulgermina}. \\\indent The main obstruction to a spectral theory of general gain graphs, without any assumption on the group of gains, is that the adjacency matrices are not complex valued. Possible solutions are offered by the work \cite{JACO}, where the generalization of Acharya's characterization is given in terms of \emph{group algebra valued matrices} $M_n(\mathbb C G)$ (see \cite[Theorem 4.2]{JACO}) and in terms of \emph{$\pi$-spectrum}, or \emph{spectrum of a gain graph with respect to a representation} $\pi$ of the group $G$ of gains (\cite[Theorem 5.1]{JACO}). In both cases, the balance of $(\Gamma,\psi)$ is characterized via some kind of cospectrality with $(\Gamma,\bold{1_G})$, that is, the gain graph whose underlying graph is $\Gamma$ and whose gain function is constantly $1_G$, the unit element of $G$. \\\indent This manuscript is conceptually the continuation of \cite{JACO}, aiming at investigating cospectrality of $(\Gamma,\psi)$ with any other gain graph, and not only with $(\Gamma,\bold{1_G})$ as in \cite{JACO}. This generalization is not trivial and there are several unexpected results, especially in the group representation approach. \\\indent As we said, spectral graph theory is the investigation of graphs via linear algebra, viewing them as operators on a vector space. This is exactly what group representation theory does with groups and group elements. It seems very natural to combine these two approaches in order to develop a spectral theory of gain graphs. Once a (unitary) representation $\pi$ of $G$ is fixed, every $G$-gain graph $(\Gamma,\psi)$ has a complex (Hermitian) adjacency matrix, obtained via an extension to $M_n(\mathbb C G)$ of the Fourier transform at $\pi$. The $\pi$-spectrum and $\pi$-cospectrality are then defined. Moreover, the signed and complex-unit spectral investigations we have discussed are covered as particular cases with a suitable choice of the representation $\pi$ (see \Cref{ex:2,ex:3}). \\\indent When $\pi$ is faithful (e.g., when it is the left regular representation $\lambda_G$), we know from \cite{JACO} that if a gain graph $(\Gamma,\psi)$ is $\pi$-cospectral with $(\Gamma,\bold{1_G})$ then it is balanced and switching isomorphic with $(\Gamma,\bold{1_G})$. In particular, $(\Gamma,\psi)$ and $(\Gamma,\bold{1_G})$ are cospectral with respect to any other representation. On the contrary, two gain graphs $(\Gamma,\psi)$ and $(\Gamma',\psi')$ can be cospectral with respect to a faithful representation even if they are not switching isomorphic and even if they are not cospectral with respect to some other (faithful, irreducible) representation (see \Cref{exa:2}). For finite cyclic groups $\mathbb T_m$, it is at least true that cospectrality with respect to a faithful irreducible representation is equivalent to cospectrality with respect to any other faithful irreducible representation (see \Cref{coro:ultimo}). But this is false in general (see \Cref{exa:s4}). This gives rise to the need for a more canonical definition of cospectrality, which was not evident after \cite{JACO}; this is one of the main motivations of this paper, and it leads to the introduction of the notion of $G$-cospectrality. \noindent The paper is organized as follows.
\\ \indent In \Cref{sec:2} we provide the essential tools on group representations and their extension to the algebras $\mathbb C G$ and $M_n(\mathbb C G).$ \\ \indent In \Cref{sec:3} we give some preliminaries on gain graphs. We characterize the switching equivalence class of a gain graph in terms of \emph{simultaneous conjugation} of the gains of its closed walks centered at a vertex (see \Cref{thm:primo}) and we prove that switching equivalence classes of gain graphs on a graph $\Gamma$ with $n$ vertices and $m$ edges are as many as the simultaneous conjugacy classes of the group $G^{m-n+1}$ (see \Cref{coro:card}). We believe that this is a result whose interest can even go beyond the spectral theory. \\ \indent In \Cref{sec:4} we introduce the notion of $G$-cospectrality in $M_n(\mathbb C G)$ (see \Cref{def:cosp}), inspired from the trace characterization of cospectrality in $M_n(\mathbb C)$, but in such a way that this definition is switching isomorphism invariant (see \Cref{teo:benposto}). We prove in \Cref{teo:cosp} that when $G$ is finite, two gain graphs $(\Gamma,\psi)$ and $(\Gamma',\psi')$ are $G$-cospectral if and only if they are cospectral with respect to all unitary representations of $G$, or equivalently, with respect to a complete system of unitary irreducible representations. In order to prove this last result, we show how actually all information about the $\pi$-spectrum of a gain graph is on the characters of the gains of the closed walks (see \Cref{teo:pico}). \\ \indent In addition to the examples already mentioned, in \Cref{sec:5} we analyze, in the light of these new results, the cases of signed graphs, $\mathbb T_m$-gain graphs and cyclic gain graphs. In particular we provide non-trivial pairs of $G$-cospectral non-switching isomorphic gain graphs (see \Cref{exa:1,exa:nuovofine}), and, on the other extreme, we prove in \Cref{coro:det} that every gain graph with cyclic underlying graph is \emph{determined by its $G$-spectrum}. We did this by completely determining the switching equivalence and the switching isomorphism classes on a cyclic graph. \\ \indent The results of this paper allow to summarize the relationships between different cospectrality notions for two gain graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2, \psi_2)$ as in Fig. \ref{tableau}. \begin{figure} \caption{Cospectrality properties for two $G$-gain graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2, \psi_2)$ and their connections, with $G$ finite.} \label{tableau} \end{figure} \section{Preliminaries on representations}\label{sec:2} Let $G$ be a (finite or infinite) group, with unit element $1_G$. Let $M_k(\mathbb{C})$ be the set of all square matrices of size $k$ with entries in $\mathbb C$ and let $GL_k(\mathbb{C})$ be the group of all invertible matrices in $M_k(\mathbb{C})$. A representation $\pi$ of \emph{degree $k$} (we write $\deg \pi=k$) of $G$ is a group homomorphism $\pi\colon G\to GL_k(\mathbb{C})$. In other words, we are looking at the elements of $G$ as automorphisms of a vector space $V$ on $\mathbb C$ with $\dim V=k$.\\ \indent Let $U_k(\mathbb{C}) = \{M\in GL_k(\mathbb{C}) : M^{-1}=M^\ast\}$ be the set of unitary matrices of size $k$, where $M^*$ is the Hermitian (or conjugate) transpose of $M$. Then the representation $\pi$ is said to be \emph{unitary} if $\pi(g)\in U_k(\mathbb{C})$, for each $g\in G$. 
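For a concrete instance of these definitions, the finite cyclic group $\mathbb T_m=\{z\in \mathbb{C} : z^m=1\}$, which will reappear later on, admits the degree-one representations $$\pi_j(z^k)=e^{2\pi i jk/m}, \qquad j=0,\ldots,m-1,$$ and each of them is unitary, since every value $\pi_j(g)$ is a complex number of modulus $1$, so that $\pi_j(g)\in U_1(\mathbb{C})$.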
Two representations $\pi$ and $\pi'$ of degree $k$ of a group $G$ are said to be equivalent, and we write $\pi\sim\pi'$, if there exists a matrix $S\in GL_k(\mathbb{C})$ such that, for any $g\in G$, one has $\pi'(g)=S^{-1}\pi(g)S$. This is an equivalence relation and it is known that, for a finite group $G$, each class contains a unitary representative: this is the reason why working with unitary representations is not restrictive in the finite case. \\ \indent We will denote by $I_k$ the identity matrix of size $k$. The \textit{kernel} of a representation $\pi$ is $\ker \pi=\{g\in G:\, \pi(g)=I_k\}$, and a representation is said to be \textit{faithful} if $\ker \pi=\{1_G\}$. The \textit{character} $\chi_\pi$ of a representation $\pi$ is the map $\chi_{\pi}:G\to \mathbb{C}$ defined by $\chi_\pi(g)=Tr(\pi(g))$, that is, the trace of the matrix $\pi(g)$. It is a well known fact that two representations have the same character if and only if they are equivalent. \\ \indent Given two representations $\pi_1$ and $\pi_2$ of $G$, one can construct \emph{the direct sum representation $\pi=\pi_1 \oplus \pi_2$ of $G$}, defined by $\pi(g):=\pi_1(g)\oplus \pi_2(g)$, for every $g\in G$, where $\pi_1(g)\oplus \pi_2(g)$ is the direct sum of the matrices $\pi_1(g)$ and $\pi_2(g)$. We will use the notation $\pi^{\oplus i}$ for the $i$-th iterated direct sum of $\pi$ with itself. \\ \indent A representation $\pi$ of $G$ is said to be \textit{irreducible} if there is no non-trivial invariant subspace for the action of $G$. It is proven that $\pi$ is irreducible if and only if it is not equivalent to any direct sum of representations. It is well known that if $G$ has $m$ conjugacy classes, then there exists a list of $m$ irreducible, pairwise inequivalent, representations $\pi_0,\ldots,\pi_{m-1}$, called a \emph{complete system of irreducible representations of $G$}. Moreover, for every representation $\pi$ of $G$, there exist $k_0,\ldots,k_{m-1} \in \mathbb N \cup \{0\}$ such that \begin{equation*}\label{deco} \pi\sim \bigoplus_{i=0}^{m-1} \pi_i^{\oplus k_i}\quad \mbox{ or, equivalently, } \quad \chi_\pi=\sum_{i=0}^{m-1} k_i \chi_{\pi_i}. \end{equation*} We say that $\pi$ \emph{contains} the irreducible representation $\pi_i$ with multiplicity $k_i$ if $k_i\neq 0$, or also that $\pi_i$ is a \emph{subrepresentation} of $\pi$. \\ \indent Now we recall some remarkable representations of $G$. The first one is the trivial representation $\pi_0\colon G\to \mathbb C$, with $\pi_0(g)=1$ for every $g\in G$. It is unitary and irreducible and it always belongs to a complete system of unitary irreducible representations of $G$. \\ \indent The group $G$ naturally acts by left multiplication on itself. This action can be regarded as the action of $G$ on the vector space $\mathbb{C}G=\{\sum_{x\in G} c_x x: \ c_x\in \mathbb{C}\}$. When $G$ is finite, this gives rise to the \emph{left regular representation} $\lambda_G$, which is faithful and has degree $|G|$. It is well known that $\lambda_G$ contains, up to equivalence, each irreducible representation $\pi_i$ of $G$ with multiplicity $\deg \pi_i$ and its character is non-zero only on $1_G$. In formulae, we have: \begin{equation*}\label{regdec} \lambda_G \sim \bigoplus_{i=0}^{m-1} \pi_i^{\oplus \deg \pi_i}\quad \mbox{ and } \quad \chi_{\lambda_G}(g)= \begin{cases} |G| &\mbox{ if } g=1_G\\ 0 &\mbox{ otherwise.} \end{cases} \end{equation*} For further information and a more general discussion on group representation theory, we refer the interested reader to \cite{fulton}. 
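As a toy illustration (a Python sketch for $G=S_3$, with permutations written as tuples), the left regular representation can be built explicitly as permutation matrices, checking the homomorphism property and recovering the character described above:
\begin{verbatim}
import numpy as np
from itertools import permutations

# Sketch only: the left regular representation of S3 as permutation matrices.
# Each group element g acts on the basis {e_x : x in G} by e_x -> e_{gx}.
def compose(p, q):
    """Composition p o q of permutations given as tuples (apply q, then p)."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))                  # the 6 elements of S3
index = {g: i for i, g in enumerate(G)}

def lam(g):
    """Matrix of the left regular representation at g."""
    M = np.zeros((len(G), len(G)))
    for x in G:
        M[index[compose(g, x)], index[x]] = 1.0   # e_x -> e_{g x}
    return M

# Homomorphism check: lam(g h) = lam(g) lam(h) for all g, h in G.
assert all(np.allclose(lam(compose(g, h)), lam(g) @ lam(h)) for g in G for h in G)

# Character of the regular representation: |G| at the identity, 0 elsewhere.
for g in G:
    print(g, np.trace(lam(g)))
\end{verbatim}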
Even when $G$ is not finite, the space of all finite $\mathbb C$-linear combinations of elements of $G$ is a vector space, denoted by $\mathbb{C}G$. Endowed with the product $$ \left(\sum_{x\in G} f_x x\right) \left(\sum_{y\in G} h_y y\right):= \sum_{x,y\in G} f_x h_y \, x y, \qquad \mbox{ for each } f=\sum_{x\in G} f_x x,\;h=\sum_{y\in G} h_y y\in \mathbb CG, $$ and with the involution $$ f^*:= \sum_{x\in G} \overline{f_{x^{-1}} }x, $$ it is an algebra with involution, known as \emph{group algebra} of $G$. A representation $\pi$ of degree $k$ of $G$ can be naturally extended by linearity to $\mathbb{C}G$. With a little abuse of notation, we denote with $\pi$ this extension, that is in fact a homomorphism of algebras: \begin{equation*} \begin{split} \pi\colon \mathbb{C}G&\to M_k(\mathbb C)\\ \sum_{x\in G} f_x x&\mapsto \sum_{x\in G} f_x \pi(x). \end{split} \end{equation*} Moreover, if $\pi$ is unitary, then $\pi(f^*)=\pi(f)^*$ for any $f\in\mathbb CG$. Also the space $M_n(\mathbb{C}G)$ of the group algebra valued matrices $n\times n$ is an algebra with involution. An element $F \in M_n(\mathbb C G)$ is in fact a square matrix of size $n$ whose entry $F_{i,j}$ is an element of $\mathbb C G$. The product of $F,H\in M_n(\mathbb{C}G)$ is such that $(F H)_{i,j}=\sum_{k=1}^n F_{i,k} H_{k,j},$ where $F_{i,k} H_{k,j}$ is the product in the group algebra $\mathbb C G$, and the involution $^*$ in $M_n(\mathbb C G)$ is defined by $(F^*)_{i,j}=(F_{j,i})^*$, where the $^*$ on the right is the involution in $\mathbb C G$. A representation $\pi$ of degree $k$ of $G$ can be also extended by linearity to $M_n(\mathbb{C}G)$: $$\pi\colon M_n(\mathbb{C}G)\to M_{nk}(\mathbb C).$$ For $A\in M_n(\mathbb{C}G)$, the element $\pi(A)\in M_{nk}(\mathbb C)$ is called the \emph{Fourier transform of $A$ at $\pi$}. Roughly speaking, $\pi(A)$ is the matrix obtained from the matrix $A\in M_{n}(\mathbb C G)$ by replacing each occurrence of $g\in G$ with the block $\pi(g)$ and each $0\in \mathbb C G$ with a zero block of size $k$. One can check that also this extension of $\pi$ is a homomorphism of algebras (see, e.g., \cite{JACO}). Moreover, equivalence relations and subrepresentations are also preserved in their extensions to $M_n(\mathbb{C}G)$ as the next proposition claims. \begin{proposition}\label{productfou}{\cite[Proposition 3.4]{JACO}} Let $A \in M_n(\mathbb C G)$ and let $\pi,\pi'$ be two representations. Then: \begin{itemize} \item if $\pi \sim \pi'$ then the matrices $\pi(A)$ and $\pi'(A)$ are similar; \item the matrix $(\pi\oplus\pi')(A)$ is similar to the matrix $\pi(A)\oplus \pi'(A)$. \end{itemize} Moreover, if $\pi$ is unitary, one has $\pi(A^*)=\pi(A)^*.$ \end{proposition} \section{Gain graphs and switching equivalence}\label{sec:3} Let $\Gamma=(V_\Gamma,E_\Gamma)$ be a finite simple graph. If $e=\{u,v\}\in E_\Gamma$ we write $u\sim v$ and we say that $u$ and $v$ are \emph{adjacent vertices} of $\Gamma$. Let $G$ be a group and consider a map $\psi$ from the set of ordered pairs of adjacent vertices of $\Gamma$ to $G$, such that $\psi(u,v)=\psi(v,u)^{-1}$. The pair $(\Gamma,\psi)$ is said to be a \emph{$G$-gain graph} (or equivalently, a gain graph on $G$): the graph $\Gamma$ is called \emph{underlying graph} and $\psi$ is called a \emph{gain function} on $\Gamma$. \\When $G=\mathbb T_2=\{\pm1\}$, then a $\mathbb T_2$-gain graph, usually denoted by $(\Gamma,\sigma)$, is called a \emph{signed graph} and the gain function $\sigma$ is called a \emph{signature} on $\Gamma$. 
\\Let us fix an ordering $v_1,v_2,\ldots, v_n$ in $V_\Gamma$. The adjacency matrix of a gain graph $(\Gamma,\psi)$ is the group algebra valued matrix $A_{(\Gamma,\psi)}\in M_n(\mathbb C G)$ with entries $$ (A_{(\Gamma,\Psi)})_{i,j}= \begin{cases} \psi(v_i,v_j) &\mbox{if } v_i\sim v_j; \\ 0 &\mbox{otherwise.} \end{cases} $$ Notice that $$ A_{(\Gamma,\psi)}^*=A_{(\Gamma,\psi)}. $$ Let $W$ be a \emph{walk} of length $h$, that is, an ordered sequence of $h+1$ (not necessarily distinct) vertices of $\Gamma$, say $w_0,w_1,\ldots, w_h$, with $w_i\sim w_{i+1}$. One can define $\psi(W)\in G$, the \emph{gain of the walk $W$}, as follows: $$ \psi(W):=\psi(w_0,w_1)\cdots \psi(w_{h-1},w_h). $$ A \emph{closed walk} of length $h$ is a walk of length $h$ with $w_0=w_{h}$. In what follows, we denote by $\mathcal W^h_{i,j}(\Gamma)$ the set of walks in $\Gamma$ of length $h$ from $v_i$ to $v_j$, and by $\mathcal C^h(\Gamma):=\bigcup_{i=1}^{n} \mathcal W_{i,i}^h$ the set of all closed walks in $\Gamma$ of length $h$. Notice that for $W_1\in\mathcal W^{h_1}_{i,j}(\Gamma)$ and $W_2\in\mathcal W^{h_2}_{j,k}(\Gamma)$ one can define the \emph{concatenation} $W_1W_2\in \mathcal W^{h_1+h_2}_{i,k}$ in the obvious way, satisfying the property $\psi(W_1W_2)=\psi(W_1)\psi(W_2)$. Moreover, for any vertex $v\in V_\Gamma$, we denote by $\mathcal C_v$ the set of all closed walks starting and ending at the vertex $v$, or closed walks \emph{centered} at $v$ for short. A gain graph $(\Gamma,\psi)$ is \emph{balanced} if $\Psi(W)=1_G$ for every closed walk $W$. An example of a balanced gain graph is that endowed with the \emph{trivial gain function $\bold{1}_G$}, defined as $\bold{1}_G(u,v)=1_G$ for every pair of adjacent vertices $u$ and $v$. When the underlying graph $\Gamma$ is a tree, the gain graph $(\Gamma,\psi)$ is automatically balanced. \\A fundamental concept in the theory of gain graphs, inherited from the theory of signed graphs, is the \emph{switching equivalence}: two gain functions $\psi_1$ and $\psi_2$ on the same underlying graph $\Gamma$ are switching equivalent, and we shortly write $(\Gamma,\psi_1)\sim(\Gamma,\psi_2)$, if there exists $f\colon V_\Gamma\to G$ such that \begin{equation}\label{eqsw} \psi_2(v_i,v_j)=f(v_i)^{-1} \psi_1(v_i,v_j)f(v_j). \end{equation} for any pair of adjacent vertices $v_i$ and $v_j$ of $\Gamma$. If Eq.~\eqref{eqsw} holds, we simply write $\psi_2=\psi_1^f$. We denote by $[\psi]$ the switching equivalence class of the $G$-gain function $\psi$. It turns out that a gain graph $(\Gamma, \Psi)$ is balanced if and only if $(\Gamma, \Psi)\sim(\Gamma, \bold{1_G})$ (see \cite[Lemma~5.3]{zaslavsky1} or \cite[Lemma~1.1]{refft}). Moreover, in analogy with the complex case (see \cite[Lemma~3.2]{reff1}), the following result holds. \begin{theorem}{\cite[Theorem 4.1]{JACO}}\label{teo-vecchio} Let $\psi_1$ and $\psi_2$ be two gain functions on the same underlying graph $\Gamma$, with adjacency matrices $A_1$ and $A_2\in M_n(\mathbb C G)$, respectively. Then $(\Gamma,\psi_1)\sim(\Gamma,\psi_2)$ if and only if there exists a diagonal matrix $F\in M_n(\mathbb C G)$, with $F_{i,i}\in G$ for each $i=1,\ldots, n$, such that $F^*A_1F=A_2$. \end{theorem} It is worth mentioning that it is also possible to introduce the notion of a \emph{group algebra valued Laplacian matrix of $(\Gamma,\psi)$} obtaining the analogous result \cite[Theorem 3.5]{GLine}. In particular, Theorem \ref{teo-vecchio} implies that the group algebra valued adjacency matrices of switching equivalent gain graphs are, in a way, similar. 
Two gain graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are said to be \emph{switching isomorphic} if there is a graph isomorphism $\phi\colon V_{\Gamma_1}\to V_{\Gamma_2}$ such that $(\Gamma_1,\psi_1)\sim(\Gamma_1,\psi_2\circ \phi)$, where $\psi_2\circ \phi$ is the gain function on $\Gamma_1$ such that $(\psi_2\circ \phi)(u,v)=\psi_2(\phi(u),\phi(v))$. \begin{remark}\label{sw-isomorphism} It is easy to see that also the adjacency matrices of switching isomorphic gain graphs can be obtained from each other by conjugating with a suitable group algebra valued matrix. More precisely, two gain graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are switching isomorphic if and only if \begin{equation*}\label{eq:swiso} (PF)^*A_{(\Gamma_1,\psi_1)}(PF)=A_{(\Gamma_2,\psi_2)}, \end{equation*} for some diagonal matrix $F\in M_n(\mathbb C G)$, with $F_{i,i}\in G$ for each $i=1,\ldots, n$, and for some matrix $P\in M_n(\mathbb C G)$ whose non-zero entries are equal to $1_G$ and where the positions of the non-zero entries correspond to those of a permutation matrix. \end{remark} We have given the definitions of switching equivalence and switching isomorphism and their matrix characterizations. Now we are going to present a different description of switching equivalence in terms of the gains of closed walks, which will be useful for the spectral investigations that we are going to develop in the next sections. \\\indent It is known \cite[Section~2]{reff1} that, when $G$ is Abelian, it is possible to establish whether $(\Gamma,\psi_1)$ and $(\Gamma, \psi_2)$ are switching equivalent just by looking at the gain of the closed walks. We extend this characterization also to the non-Abelian case. For two given elements $g,h\in G$, we set $g^h = h^{-1}gh$. \begin{theorem}\label{thm:primo} Let $(\Gamma, \psi_1)$ and $(\Gamma, \psi_2)$ be two $G$-gain graphs on the same connected underlying graph $\Gamma$. The following are equivalent. \begin{enumerate}[(i)] \item $(\Gamma, \psi_1)\sim (\Gamma, \psi_2)$; \item $\exists v\in V_\Gamma$, $\exists g_v\in G$ such that $\psi_1(W)=\psi_2(W)^{g_v}$ for all $W\in \mathcal C_v$; \item $\forall v\in V_\Gamma$, $\exists g_v\in G$ such that $\psi_1(W)=\psi_2(W)^{g_v}$ for all $W\in \mathcal C_v$. \end{enumerate} \end{theorem} \begin{proof} (i)$\implies$(iii)\\ Suppose that $\psi_1=\psi_2^f$ for some $f\colon V_\Gamma\to G$. For every $v\in V_\Gamma$, and for any closed walk $W\in \mathcal C_v$, with ordered vertices $v, w_1,w_2,\ldots, w_k,v$, we have: \begin{equation}\label{eq:co} \begin{split} \psi_1(W)&=\psi_1(v,w_1)\psi_1(w_1,w_2)\cdots \psi_1(w_k,v)\\ &=f(v)^{-1}\psi_2(v,w_1)f(w_1)f(w_1)^{-1}\psi_2(w_1,w_2)f(w_2)f(w_2)^{-1}\cdots \psi_2(w_k,v)f(v) \\&={f(v)}^{-1}\psi_2(v,w_1)\psi_2(w_1,w_2)\cdots \psi_2(w_k,v)f(v) \\&=\psi_2(W)^{f(v)}, \end{split} \end{equation} and then setting $g_v:=f(v)$ the thesis follows. \\ (iii)$\implies$(ii)\\ It is obvious.\\ (ii)$\implies$(i)\\ Let us fix a spanning tree $T$ of $\Gamma$ and denote by $E_T$ the subset of $E_\Gamma$ consisting of the edges of $T$. By hypothesis, there exist $v\in V_\Gamma$ and $g_{v}\in G$ such that $\psi_1(W)=\psi_2(W)^{g_{v}}$ for all $W\in \mathcal C_{v}$. Since a gain graph on a tree is balanced, we can find $p,q\colon V_\Gamma\to G$ such that \begin{equation}\label{eq:1} \psi'_1(v_i,v_j)=\psi'_2(v_i,v_j)=1_G\quad \mbox{ if } \{v_i,v_j\}\in E_T, \end{equation} where $\psi'_1=\psi_1^p$ and $\psi'_2=\psi_2^q$. 
This construction comes from a standard technique in gain graphs (see for example \cite[Lemma~2.2]{reff1} or \cite[Lemma~2.5]{ELine}) inherited from that for signed graphs. Moreover, if we set $g'_{v}:=q(v)^{-1}g_{v}p(v)$, performing computations similar to that of Eq.~\eqref{eq:co}, for any $W\in \mathcal C_{v}$ we get: \begin{equation}\label{eq:proof2} \psi'_1(W)=\psi_1(W)^{p(v)}=\psi_2(W)^{g_{v}p(v)}=\psi'_2(W)^{q(v)^{-1}g_{v}p(v)}=\psi'_2(W)^{g'_{v}}. \end{equation} Notice that, since $T$ is a spanning tree, for any two vertices $v_i$ and $v_j$ adjacent in $\Gamma$ but not in $T$ (that is, $\{v_i,v_j\}\in E_\Gamma \setminus E_T$) there exists a closed walk $W_{i,j}\in \mathcal C_{v}$, visiting $v_i$ first and then $v_j$, whose all edges but $\{v_i,v_j\}$ are in $E_T$. By Eq.~\eqref{eq:1} we have $\psi'_1(W_{i,j})=\psi'_1(v_i,v_j)$ and $\psi'_2(W_{i,j})=\psi'_2(v_i,v_j)$. As a consequence of Eq.~\eqref{eq:proof2}, we obtain: \begin{equation}\label{eq:ff} \psi'_1(v_i,v_j)=\psi'_2(v_i,v_j)^{g'_{v}}. \end{equation} But clearly Eq.~\eqref{eq:ff} trivially holds also when $\{v_i,v_j\}\in E_T$. Therefore, if we define $f\colon V_\Gamma\to G$ as $f(w)=g'_{v}$ for any $w\in V_\Gamma$, from Eq.~\eqref{eq:ff} we have $$\psi'_1={\psi'_2}^f.$$ As a consequence, $\psi'_1$ and $\psi'_2$ are switching equivalent and so $\psi_1$ and $\psi_2$ are switching equivalent by transitivity. \end{proof} In particular, Theorem \ref{thm:primo} says that $\psi_1$ and $\psi_2$ are switching equivalent if and only if their gains on every closed walk are conjugated in $G$ by an element that only depends on the starting vertex of the walk. \begin{remark}\label{rem:preco} Notice that, in the proof of Theorem \ref{thm:primo}, for implication (ii)$\implies$(i) we actually do not need that $\psi_1(W)=\psi_2(W)^{g_v}$ for all $W\in \mathcal C_v$; fixed a spanning tree $T$, it is enough to check this condition on $|E_\Gamma|-|E_T|=|E_\Gamma|-|V_\Gamma|+1$ closed walks centered at $v$, each crossing exactly one distinct edge in $E_\Gamma\setminus E_T$. \end{remark} The number $|E_\Gamma|-|V_\Gamma|+1$ is usually called \emph{circuit rank} of $\Gamma$, and we denote it by $cr(\Gamma)$: it is the cardinality of a cycle basis (see, for example, \cite{Bharary}). This observation allows us to show in Corollary \ref{coro:card} the existence of a bijection between the switching equivalence classes of gain functions on $\Gamma$ and the \emph{simultaneous conjugacy classes} of $G^{cr(\Gamma)}$. Here $G^{cr(\Gamma)}$ denotes the $cr(\Gamma)$-th iterated Cartesian product of $G$ with itself; the simultaneous conjugacy class of an element $\left(x_1,\ldots, x_{cr(\Gamma)}\right)\in G^{cr(\Gamma)}$ is the subset $\left\{\left(g^{-1}x_1g,\ldots, g^{-1}x_{cr(\Gamma)}g\right), g\in G \right\} \subset G^{cr(\Gamma)}$. \begin{corollary}\label{coro:card} Let $\Gamma = (V_\Gamma, E_\Gamma)$ be a connected graph with $n$ vertices and $m$ edges. There is a bijection between the set of switching equivalence classes of $G$-gain functions on $\Gamma$ and the set of simultaneous conjugacy classes of $G^{m-n+1}$. \end{corollary} \begin{proof} Let us start by fixing a spanning tree $T$ and a vertex $v$ of $\Gamma$. 
Set $E_\Gamma\setminus E_T=\{e_1,\dots, e_{m-n+1}\}$ and let $W_1,W_2,\ldots, W_{m-n+1}$ be closed walks centered at $v$ such that the only edge in $E_\Gamma\setminus E_T$ crossed by $W_i$ is $e_i$.\\ We first define a map $\phi$ from the $G$-gain functions on $\Gamma$ to $G^{m-n+1}$: $$\phi(\psi)=(\psi(W_1),\psi(W_2),\ldots,\psi(W_{m-n+1}) ).$$ The map $\phi$ is surjective. In fact, for every $h=(h_1,\ldots, h_{m-n+1})\in G^{m-n+1}$ one can define a gain function $\psi_h$ such that $\phi(\psi_h)=h$ as follows. Let us make $\psi_h$ be trivial on the pairs of vertices adjacent in $T$. If instead $\{u,v\}\in E_\Gamma\setminus E_T$, then there exists $i\in\{1,\ldots,{m-n+1} \}$ such that $u,v$ or $v,u$ are consecutive vertices in $W_i$. In the first case we set $\psi_h(u,v)=h_i$ (and then $\psi_h(v,u)=h_i^{-1}$), in the second case we set $\psi_h(v,u)=h_i$ (and then $\psi_h(u,v)=h_i^{-1}$). By definition of $\phi$ we have $\phi(\psi_h)=(h_1,h_2,\ldots,h_{m-n+1})$ and then $\phi$ is surjective. By virtue of Theorem \ref{thm:primo} $(i)\implies(ii)$, if $(\Gamma,\psi_1)\sim (\Gamma,\psi_2)$, then $\phi(\psi_1)$ and $\phi(\psi_2)$ are simultaneously conjugated. Vice versa, if two gain functions $\psi_1$ and $\psi_2$ are such that $\phi(\psi_1)$ and $\phi(\psi_2)$ are simultaneously conjugated, then there exist $g_v\in G$ such that $\psi_1(W_i)=\psi_2(W_i)^{g_v}$ for each $i=1,\ldots, |E_\Gamma|-|V_\Gamma|+1$. By virtue of Remark \ref{rem:preco} this is enough to ensure that $(\Gamma,\psi_1)\sim (\Gamma,\psi_2)$. As a consequence, the map $\phi$ composed with the projection onto the simultaneous conjugacy class, induces a bijection between the set of switching equivalence classes of gain functions on $\Gamma$ and the simultaneous conjugacy classes of $G^{m-n+1}$. \end{proof} Notice that, when $G$ is finite and Abelian, Corollary \ref{coro:card} implies that for a connected graph $\Gamma$ there are $|G|^{m-n+1}$ switching classes of $G$-gain functions on $\Gamma$. In particular there are $2^{m-n+1}$ switching classes of signed graphs on $\Gamma$ (see \cite[Proposition~3.1]{nase}). \section{$G$-cospectrality, $\pi$-spectrum and $\pi$-cospectrality}\label{sec:4} For classical graphs, signed graphs, and $\mathbb T$-gain graphs, there is one natural definition of the (adjacency) spectrum, that is in particular invariant under switching isomorphism. In those settings there is no ambiguity, essentially because the adjacency matrices are complex valued matrices. On the contrary, several definitions of spectrum may be associated with an element of $M_n(\mathbb C G)$ (and therefore with a $G$-gain graph) within the framework of operator algebras \cite{Chu}. As already done in \cite{JACO}, we will deal with the problem bringing us back to the theory of complex matrices. More precisely, as a first step, we define \emph{cospectrality} in $M_n(\mathbb C G)$ without giving an explicit definition for the spectrum in $M_n(\mathbb C G)$. Only after fixing a unitary representation of $G$, we define the spectrum associated with that representation. As already said, in order to define cospectrality in $M_n(\mathbb C G)$, we are inspired from what happens in the classical setting of complex matrices. The following lemma is a consequence of Specht's theorem \cite{specht} for Hermitian matrices. \begin{lemma}\label{specht} Let $A,B\in\mathbb M_n(\mathbb C)$ be Hermitian. 
Then $$A \mbox{ and } B \mbox{ are cospectral } \iff Tr(A^h)=Tr(B^h)\quad \forall h\in \mathbb N.$$ \end{lemma} Alternatively, elementary symmetric polynomials can be used to prove that, for $A,B\in M_n(\mathbb C)$, one has that $Tr(A^h)=Tr(B^h)$ for $h=1,\ldots,n$ is equivalent to the cospectrality of $A$ and $B$, see \cite{rob}. Observe that if $A$ is the adjacency matrix of a graph $\Gamma=(V_\Gamma,E_\Gamma)$, then the entry $(A^h)_{i,j}$ equals the number of walks of length $h$ from $v_i$ to $v_j$. As a consequence, looking at $i=j$, one has: \begin{equation}\label{eq:nogain} \Gamma_1 \mbox{ and } \Gamma_2 \mbox{ are cospectral graphs } \iff |\mathcal C^h(\Gamma_1)|=|\mathcal C^h(\Gamma_2)|\quad \forall h\in \mathbb N. \end{equation} Keeping this characterization in mind, let us introduce a group algebra valued trace map on $M_n(\mathbb C G)$ as follows: \begin{equation}\label{eq:trace} \begin{split} Tr\colon M_n(\mathbb C G)&\to \mathbb C G\\ A&\mapsto \sum_{i=1}^n A_{i,i}. \end{split} \end{equation} If $A\in M_n(\mathbb C G)$ is the adjacency matrix of a $G$-gain graph $(\Gamma,\psi)$, by virtue of \cite[Lemma~4.1]{JACO} the entry $(A^h)_{i,j}\in\mathbb CG$ is the sum of the gains of all walks from $v_i$ to $v_j$ of length $h$. In particular, we have: \begin{equation}\label{eq:trko} Tr(A^h)=\sum_{W \in\mathcal C^h(\Gamma)} \psi(W). \end{equation} We might be tempted to define the cospectrality of $A,B\in M_n(\mathbb C G)$ by requiring the equality of $Tr(A^h)$ and $Tr(B^h)$ in $\mathbb C G$ for all $h\in \mathbb N$ and, therefore, to define the cospectrality of two $G$-gain graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ by requiring the equality of the sum of the gains of all closed walks of length $h$, for all $h\in \mathbb N$. However, this request when the group $G$ is not Abelian is too strong. In fact the map $Tr\colon M_n(\mathbb C G)\to \mathbb C G$, defined in Eq.~\eqref{eq:trace}, is not invariant under conjugation and there are pairs of switching isomorphic (even switching equivalent) graphs which would not be cospectral in this sense (see Section \ref{sec:cicli}). What we really want from two cospectral matrices $A,B\in M_n(\mathbb C G)$ is that $Tr(A^h)$ and $Tr(B^h)$ are equal only up to group conjugations of some of their addends. Let us denote by $[G]$ the set of conjugacy classes of $G$, and by $[g]$ the conjugacy class of $g\in G$. A \emph{class function} $f\colon G\to \mathbb C$ is a map such that $g_1,g_2\in[g]\implies f(g_1)=f(g_2)$. The set $\mathbb C_{Class}[G]$ of finitely supported class functions is a $\mathbb C$-vector space. Moreover, if $G$ is finite, the vector space $\mathbb C_{Class}[G]$ can be endowed with a Hermitian inner product: \begin{equation}\label{eq:pro} \begin{split} &\langle\;,\;\rangle\colon \mathbb C_{Class}[G]\times \mathbb C_{Class}[G] \to \mathbb C\\ &\langle f , h \rangle=\frac{1}{|G|}\sum_{g\in G} f(g) \overline{h(g)}.\\ \end{split} \end{equation} There is a natural map $\mu$ from $\mathbb CG$ to $\mathbb C_{Class}[G]$, defined as the sum of the coefficients on each conjugacy class: \begin{equation}\label{eq:projection} \begin{split} &\mu\colon \mathbb C G\to \mathbb C_{Class}[G]\\ &\mu\left(\sum_{x\in G} a_x x\right)(g)=\sum_{x\in[g]} a_x. \end{split} \end{equation} Notice that, if $G$ is Abelian, each conjugacy class contains only one element and $\mu$ is nothing but an isomorphism between $\mathbb C$-vector spaces. 
\begin{definition}\label{def:cosp} Two group algebra valued matrices $A,B\in M_n(\mathbb C G)$, with $A^*=A$ and $B^*=B$, are \emph{$G$-cospectral} if and only if $$\mu(Tr(A^h))=\mu(Tr(B^h))\quad \forall h\in \mathbb N.$$ Two gain graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are \emph{$G$-cospectral} if $A_{(\Gamma_1,\psi_1)}$ and $A_{(\Gamma_2,\psi_2)}$ are $G$-cospectral. \end{definition} Our first result consists in reformulating $G$-cospectrality in terms of the gains of closed walks, in analogy with the classical case (see Eq.~\eqref{eq:nogain}). \begin{proposition}\label{prop:cicli} Two gain graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $G$-cospectral if and only if \begin{equation}\label{eq:prop} \left|\{W\in \mathcal C^h(\Gamma_1): \psi_1(W)\in [g]\} \right| = \left|\{W\in \mathcal C^h(\Gamma_2): \psi_2(W)\in [g]\} \right|,\quad \forall [g]\in[G],\;\forall h\in\mathbb N. \end{equation} \end{proposition} \begin{proof} Let $A:=A_{(\Gamma_1,\psi_1)}$ and $B:=A_{(\Gamma_2,\psi_2)}$. By Definition \ref{def:cosp}, the graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $G$-cospectral if and only if $\mu(Tr(A^h))=\mu(Tr(B^h))$, for all $h\in \mathbb N$. By Eq.~\eqref{eq:trko}, this is equivalent to \begin{equation}\label{eq:num} \mu\left( \sum_{W\in \mathcal C^h(\Gamma_1)} \psi_1(W) \right)=\mu\left( \sum_{W\in \mathcal C^h(\Gamma_2)} \psi_2(W) \right),\quad \forall h\in \mathbb N. \end{equation} On the other hand, by the definition of $\mu$ in Eq.~\eqref{eq:projection}, for every $g\in G$, we have: $$\mu\left( \sum_{W\in \mathcal C^h(\Gamma_i)} \psi_i(W) \right)(g)= \left|\{W\in \mathcal C^h(\Gamma_i): \psi_i(W)\in [g]\} \right|.$$ Therefore Eq.~\eqref{eq:num} is equivalent to Eq.~\eqref{eq:prop} and the thesis follows. \end{proof} A famous result of Acharya states that a signed graph is balanced if and only if it is cospectral with its underlying graph \cite{acharya}. There are several generalizations of this result \cite{adun,JACO}. Definition \ref{def:cosp} and Proposition \ref{prop:cicli} allow us to easily prove the following. \begin{theorem} A gain graph $(\Gamma,\psi)$ is balanced if and only if it is $G$-cospectral with $(\Gamma,\bold{1}_G)$. \end{theorem} \begin{proof} It is clear that all closed walks of length $h$ in $(\Gamma,\bold{1}_G)$ have gain $1_G$, for each $h\in\mathbb N$. Then, by Proposition \ref{prop:cicli}, $(\Gamma,\psi)$ is $G$-cospectral with $(\Gamma,\bold{1}_G)$ if and only if also all closed walks of length $h$ in $(\Gamma,\psi)$ have gain $1_G$, for each $h\in\mathbb N$, that is, if and only if $(\Gamma,\psi)$ is balanced. \end{proof} Combining Theorem \ref{thm:primo} with Proposition \ref{prop:cicli}, we deduce that if $(\Gamma,\psi_1)\sim(\Gamma,\psi_2)$, then $(\Gamma,\psi_1)$ and $(\Gamma,\psi_2)$ are $G$-cospectral. Actually, the following result guarantees that the property of $G$-cospectrality is invariant under switching isomorphism. \begin{theorem}\label{teo:benposto} If two $G$-gain graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are switching isomorphic, then they are $G$-cospectral. \end{theorem} \begin{proof} The isomorphism $\phi$ from $\Gamma_1$ to $\Gamma_2$ induces a bijection $\phi$ from $\mathcal C^h(\Gamma_1)$ to $\mathcal C^h(\Gamma_2)$, for all $h\in\mathbb N$. In particular, if $W$ is a closed walk of length $h$ in $\Gamma_1$, $\phi(W)$ is a closed walk of length $h$ in $\Gamma_2$. 
Moreover, by definition of switching isomorphism, $(\Gamma_1,\psi_1)\sim(\Gamma_1,\psi_2\circ \phi)$ and then, by Theorem \ref{thm:primo}, the elements $\psi_1(W)$ and $\psi_2(\phi(W))$ belong to the same conjugacy class in $G$. This implies that the number of closed walks in $\Gamma_1$ of length $h$ whose gain is in $[g]$ is equal to the number of closed walks in $\Gamma_2$ of length $h$ whose gain is in $[g]$, for all $h\in\mathbb N$ and all $[g]\in[G]$. The thesis follows from Proposition \ref{prop:cicli}. \end{proof} As one would expect, the converse of Theorem \ref{teo:benposto} is not true: there are pairs of $G$-gain graphs that are $G$-cospectral but non-switching isomorphic. One can construct such pairs by considering two non-isomorphic cospectral graphs $\Gamma_1$ and $\Gamma_2$. Clearly $(\Gamma_1, \bold{1_G})$ and $(\Gamma_2, \bold{1_G})$ are $G$-cospectral but non-switching isomorphic (see \Cref{exa:1,exa:nuovofine} for non-trivial examples). On the other extreme, there exist $G$-gain graphs determined by their $G$-spectrum (see Corollary \ref{coro:det}), according to the following definition. \begin{definition} A $G$-gain graph $(\Gamma,\psi)$ is \emph{determined by its $G$-spectrum} if $$(\Gamma,\psi)\mbox{ and }(\Gamma',\psi') \mbox{ $G$-cospectral } \implies (\Gamma,\psi) \mbox{ and } (\Gamma',\psi') \mbox{ switching isomorphic.}$$ \end{definition} In the rest of this section, we recall the notion of spectrum with respect a unitary representation from \cite{JACO} and investigate its relation with $G$-cospectrality. For a representation $\pi$ of $G$ of degree $k$ we denote by $\chi_\pi\colon G\to \mathbb C$ its character. Notice that $\chi_\pi(1_G)= \deg \pi = k$. As already mentioned, the representation $\pi$ can be extended to $\mathbb C G$ and to $M_n(\mathbb C G)$ via Fourier transforms. \\ \indent Similarly, the character $\chi_\pi$ can be linearly extended to $\mathbb C G$, and to finitely supported functions in $\mathbb C_{class} [G]$: \begin{equation}\label{eq:cara} \begin{split} \chi_\pi\left(\sum_{x\in G} a_x x\right)&:=Tr\left( \pi\left(\sum_{x\in G} a_x x\right)\right)=\sum_{x\in G} a_x \chi_\pi(x),\qquad \mbox{with }\sum_{x\in G} a_x x\in \mathbb C G;\\ \chi_\pi(h)&:= \sum_{x\in G} h(x) \chi_\pi(x)=\sum_{[g]\in [G]}|[g]|h(g)\chi_\pi(g),\qquad \mbox{with } h\in\mathbb C_{Class} [G]. \end{split} \end{equation} Consider a $G$-gain graph $(\Gamma,\psi)$ with adjacency matrix $A\in\mathbb M_n(\mathbb C G)$. The matrix $A_\pi:=\pi(A)\in M_{nk}(\mathbb C)$ is called the \emph{represented adjacency matrix} of $(\Gamma,\psi)$ with respect to $\pi$. By Proposition \ref{productfou}, $A_\pi$ is a Hermitian matrix and we say that its spectrum is the \emph{$\pi$-spectrum of $(\Gamma,\psi)$}, denoted with $\sigma_\pi(\Gamma,\psi)$ or also $\sigma_\pi(A)$. Notice that, if $\pi$ is unitary, then the $\pi$-spectrum is real. Moreover, by Proposition \ref{productfou}, if $\pi\sim\pi'$, then $\sigma_\pi(A)=\sigma_{\pi'}(A)$. \begin{definition} Let $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ be two $G$-gain graphs. Let $\pi$ be a unitary representation of $G$. The graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are said to be $\pi$-cospectral if $\sigma_\pi(\Gamma_1,\psi_1)=\sigma_\pi(\Gamma_2,\psi_2)$. \end{definition} Although it was formally introduced only in \cite{JACO}, particular cases of $\pi$-spectra have often been considered in the literature, as shown in the next examples. 
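
Before turning to these examples, it may help to see how $A_\pi$ is assembled in practice. The following Python sketch (ours; it assumes \texttt{numpy}) replaces each group algebra valued entry of $A$ with the corresponding $k\times k$ block and computes the $\pi$-spectrum; the signed triangle used as a demonstration anticipates Example \ref{ex:2}.
\begin{verbatim}
# Sketch (assuming numpy): build the represented adjacency matrix A_pi by
# replacing each entry sum_g a_g g of A in M_n(CG) with the k x k block
# sum_g a_g pi(g), then compute its (real) spectrum.
import numpy as np

def represented_adjacency(A, pi, k):
    """A: n x n nested list of dicts {group element: coefficient};
    pi: dict sending each group element to a k x k unitary numpy array."""
    n = len(A)
    M = np.zeros((n * k, n * k), dtype=complex)
    for i in range(n):
        for j in range(n):
            block = np.zeros((k, k), dtype=complex)
            for g, c in A[i][j].items():
                block = block + c * pi[g]
            M[i*k:(i+1)*k, j*k:(j+1)*k] = block
    return M

# Demonstration with the group T_2 = {+1, -1} and its identical
# one-dimensional representation pi_id(g) = g: a signed triangle.
pi_id = {+1: np.array([[1.0]]), -1: np.array([[-1.0]])}

def signed_triangle(s):                  # s = sign of the edge {v1, v3}
    return [[{},      {+1: 1}, {s: 1}],
            [{+1: 1}, {},      {+1: 1}],
            [{s: 1},  {+1: 1}, {}]]

for s in (+1, -1):
    A_pi = represented_adjacency(signed_triangle(s), pi_id, 1)
    print(s, np.linalg.eigvalsh(A_pi))
# expected:  +1 -> {-1, -1, 2}   (balanced triangle)
#            -1 -> {-2,  1, 1}   (unbalanced triangle)
\end{verbatim}
The first spectrum is that of the underlying graph $K_3$, i.e.\ the $\pi_0$-spectrum of any gain graph on $K_3$, while the second is the spectrum of an unbalanced signed triangle.
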
\begin{example}\label{ex:1} Let $(\Gamma,\psi)$ be a $G$-gain graph and let $\pi_0$ be the trivial representation of $G$, that is, the one-dimensional representation such that $\pi_0(g)=1$ for any $g\in G$. Then $A_{\pi_0}$ is nothing but the adjacency matrix of the underlying graph $\Gamma$. Thus the $\pi_0$-spectrum of $(\Gamma,\psi)$ is the spectrum of its underlying graph. \end{example} \begin{example}\label{ex:2} Let $(\Gamma,\sigma)$ be a signed graph and let $\pi_{id} \colon \mathbb T_2\to \mathbb C$ be the identical one-dimensional representation of $\mathbb T_2$ with values $+1$ and $-1$. The represented adjacency matrix $A_{\pi_{id}}$ is the classical adjacency matrix of the signed graph (see, e.g., \cite{zasmat}) and so the $\pi_{id}$-spectrum is the classical spectrum of the signed graph. \end{example} \begin{example}\label{ex:3} Let $(\Gamma,\psi)$ be a $\mathbb T$-gain graph and let $\pi_{id} \colon \mathbb T\to \mathbb C$ the identical one-dimensional representation of $\mathbb T$. The represented adjacency matrix $A_{\pi_{id}}$ is the classical adjacency matrix for complex unit gain graphs (in the sense of \cite{reff1}) and the $\pi_{id}$-spectrum is the classical spectrum of a complex unit gain graph. \end{example} \begin{example}\label{ex:4} Let $(\Gamma,\psi)$ be a $G$-gain graph, and let $\lambda_G$ be the left regular representation of $G$. Then the represented adjacency matrix $A_{\lambda_G}$ is the adjacency matrix of the \emph{(left) cover graph of $(\Gamma,\psi)$} \cite[Lemma~6.1]{JACO}, and the $\lambda_G$-spectrum of $(\Gamma,\psi)$ is the spectrum of the cover graph of $(\Gamma,\psi)$. \end{example} As a first result about \emph{represented cospectrality}, in analogy with Proposition \ref{prop:cicli}, we are going to prove that the $\pi$-spectrum of a gain graph is fully identified by the characters of the gains of the closed walks. \begin{theorem}\label{teo:pico} Two gain graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $\pi$-cospectral if and only if \begin{equation}\label{eq:inteo} \sum_{W \in\mathcal C^h} \chi_\pi(\psi_1(W))= \sum_{W \in\mathcal C^h} \chi_\pi(\psi_2(W)), \quad \forall h\in\mathbb N. \end{equation} \end{theorem} \begin{proof} Recall that the extension to $M_n(\mathbb C G)$ of a unitary representation $\pi$ of $G$ of degree $k$ is a homomorphism, and then, for every $A\in M_n(\mathbb C G)$, for any $h\in\mathbb N$, one has: $$\pi(A^h)=\pi(A)^h.$$ If $(\Gamma,\psi)$ is a $G$-gain graph on $n$ vertices, $A=A_{(\Gamma,\psi)}\in M_n(\mathbb C G)$ is its adjacency matrix and $A_\pi\in M_{nk}(\mathbb C)$ is its represented adjacency matrix with respect to $\pi$, then by using Eq.~\eqref{eq:trko} and Eq.~\eqref{eq:cara} one has: $$Tr\left((A_\pi)^h\right)=Tr\left(\pi\left( A^h\right) \right)=\chi_\pi(A^h)=\sum_{W \in\mathcal C^{h}} \chi_\pi (\psi(W)), \qquad \forall h\in \mathbb N.$$ Combining with Lemma \ref{specht} one can conclude that the represented adjacency matrices of $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are cospectral if and only if Eq.~\eqref{eq:inteo} holds. \end{proof} If we apply Theorem \ref{teo:pico} to the trivial representation $\pi_0$ (see Example \ref{ex:1}), we can say that the underlying graphs of two gain graphs are cospectral if and only if they have, for all $h\in\mathbb N$, the same number of closed walks of length $h$. 
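
The count criterion just stated can be checked directly on a classical pair of non-isomorphic cospectral graphs, the star $K_{1,4}$ and the disjoint union $C_4\cup K_1$. The short sketch below (ours, assuming \texttt{numpy}) verifies both the equality of the spectra and the equality of the closed walk counts $Tr(A^h)$ for the first few values of $h$.
\begin{verbatim}
# Sketch (assuming numpy): K_{1,4} and C_4 + K_1 are non-isomorphic but
# cospectral, hence they have the same number of closed walks of every length.
import numpy as np

star = np.zeros((5, 5))
star[0, 1:] = star[1:, 0] = 1                   # K_{1,4}, centre at vertex 0

c4_k1 = np.zeros((5, 5))
for i in range(4):                              # 4-cycle on vertices 0..3
    c4_k1[i, (i + 1) % 4] = c4_k1[(i + 1) % 4, i] = 1
                                                # vertex 4 stays isolated

print(np.linalg.eigvalsh(star))                 # spectrum {-2, 0, 0, 0, 2}
print(np.linalg.eigvalsh(c4_k1))                # spectrum {-2, 0, 0, 0, 2}

for h in range(1, 9):                           # equal closed-walk counts
    t1 = np.trace(np.linalg.matrix_power(star, h))
    t2 = np.trace(np.linalg.matrix_power(c4_k1, h))
    assert abs(t1 - t2) < 1e-9
\end{verbatim}
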
For two signed graphs or two $\mathbb T$-gain graphs with the representation $\pi_{id}$ (see \Cref{ex:2,ex:3}), \Cref{teo:pico} implies that they are cospectral if and only if, for all $h\in\mathbb N$, they share the sum of the gains of their closed walks of length $h$.
Finally, if we apply Theorem \ref{teo:pico} to the left regular representation (see Example \ref{ex:4}), we can say that $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ have cospectral cover graphs if and only if they have, for all $h\in\mathbb N$, the same number of \emph{balanced closed walks} of length $h$, where a balanced closed walk is a closed walk whose gain is $1_G$, the only group element on which the character of $\lambda_G$ is non-zero.
Surprisingly, $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ can be $\pi_1$-cospectral but not $\pi_2$-cospectral, even if both unitary representations $\pi_1$ and $\pi_2$ are faithful (see Example \ref{exa:2}), or even if $\pi_1$ and $\pi_2$ are faithful and irreducible (see Example \ref{exa:s4}). The following proposition shows, however, that it is possible to prove $\pi$-cospectrality just by looking at the irreducible subrepresentations of $\pi$.
\begin{proposition}\label{prop:somma} If two $G$-gain graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $\pi_1$-cospectral and $\pi_2$-cospectral, where $\pi_1$ and $\pi_2$ are unitary representations, then $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $(\pi_1\oplus \pi_2)$-cospectral. \end{proposition}
\begin{proof} By Proposition \ref{productfou} the matrix $(\pi_1\oplus \pi_2)(A)$ is similar to the matrix $\pi_1(A)\oplus \pi_2(A)$ for any $A\in M_n(\mathbb CG)$. As a consequence, the $(\pi_1\oplus \pi_2)$-spectrum of a gain graph is the union (as multisets) of the $\pi_1$-spectrum with the $\pi_2$-spectrum. The thesis easily follows. \end{proof}
Notice that the converse of Proposition \ref{prop:somma} is not true; see Example \ref{exa:2}. In any case, as a consequence of Proposition \ref{prop:somma}, if $\pi_0,\ldots,\pi_{m-1}$ is a complete system of irreducible unitary representations of $G$, and $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $\pi_i$-cospectral for each $i=0,\ldots,m-1$, then $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $\pi$-cospectral for every unitary representation $\pi$. The following theorem ensures that, when $G$ is finite, one can check $G$-cospectrality just by looking at cospectrality with respect to a complete system of irreducible unitary representations of $G$.
\begin{theorem}\label{teo:cosp} Let $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ be two $G$-gain graphs, with $G$ finite. Let $\pi_0,\ldots,\pi_{m-1}$ be a complete system of irreducible unitary representations of $G$. The following are equivalent. \begin{enumerate} \item $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $G$-cospectral; \item $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $\pi$-cospectral, for every unitary representation $\pi$ of $G$; \item $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $\pi_i$-cospectral, for each $i=0,\ldots,m-1$. \end{enumerate} \end{theorem}
\begin{proof} (1)$\implies$(2)\\ By Proposition \ref{prop:cicli}, if $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $G$-cospectral, then Eq.~\eqref{eq:prop} holds. Let $\pi$ be a unitary representation. Since $\chi_{\pi}$ is a class function we have \begin{equation}\label{eq:dimo2} \sum_{W \in\mathcal C^h} \chi_\pi(\psi_i(W))=\sum_{[g]\in[G]} \left|\{W\in \mathcal C^h(\Gamma_i): \psi_i(W)\in [g]\} \right| \chi_{\pi}(g), \quad i=1,2; \;\forall h\in\mathbb N.
\end{equation} Combining Eq.~\eqref{eq:prop} with Eq.~\eqref{eq:dimo2} we have $$\sum_{W \in\mathcal C^h} \chi_\pi(\psi_1(W))= \sum_{W \in\mathcal C^h} \chi_\pi(\psi_2(W)), \quad \forall h\in\mathbb N,$$ and this implies (2) by Theorem \ref{teo:pico}.\\ (2)$\implies$(3)\\It is obvious.\\ (3)$\implies$(1)\\ Let $A,B\in M_n(\mathbb C G)$ be the adjacency matrices of $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$, respectively. Since $A_{\pi_i}$ and $B_{\pi_i}$ are cospectral, by Theorem \ref{teo:pico} we have: \begin{equation}\label{eq:dimofi2} \chi_{\pi_i}\left(Tr\left(A^h\right)\right)=\sum_{W \in\mathcal C^{h}} \chi_{\pi_i} (\psi_1(W))= \sum_{W \in\mathcal C^{h}} \chi_{\pi_i}(\psi_2(W))=\chi_{\pi_i}\left(Tr\left(B^h\right)\right), \quad\forall h\in\mathbb N. \end{equation} We want to prove that $\mu(Tr(A^h))=\mu(Tr(B^h))$ for all $h\in\mathbb N$, that is, $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $G$-cospectral. Since $G$ is finite, for any $f\in \mathbb CG$, one can define $\overline{\mu}(f)\in \mathbb C_{Class}[G]$, that is, the \emph{normalization of $\mu(f)$}, in the following way: $$\overline{\mu}(f)(g)=\frac{1}{|[g]|} \sum_{x\in[g]} f_x=\frac{1}{|[g]|} \mu(f)(g).$$ One can easily check that $\mu(Tr(A^h))=\mu(Tr(B^h))$ for all $h\in\mathbb N$ if and only if $\overline{\mu}(Tr(A^h))=\overline{\mu}(Tr(B^h))$ for all $h\in\mathbb N$, since both conditions are equivalent to that of Eq.~\eqref{eq:prop} in Proposition \ref{prop:cicli}. Moreover, since $G$ is finite, the characters $\chi_{\pi_0}, \chi_{\pi_1}, \ldots, \chi_{\pi_{m-1}}$ form an orthonormal basis of $\mathbb C_{Class}[G]$ with respect to the Hermitian product defined in Eq.~\eqref{eq:pro} (see \cite[Theorem 2.12]{fulton}). Therefore, in order to conclude the proof, we only need to show that \begin{equation}\label{eq:dimofinale} \langle \chi_{\pi_i},\overline{\mu}\left(Tr\left(A^h\right)\right)\rangle =\langle \chi_{\pi_i},\overline{\mu}\left(Tr\left(B^h\right)\right)\rangle, \quad\forall h\in\mathbb N,\ i=0,1,\ldots, m-1. \end{equation} Notice that the coefficient of $Tr\left(A^h\right)$ and $Tr\left(B^h\right)$ are real, and then $Tr\left(A^h\right), Tr\left(B^h\right)\in \mathbb R G$. For $f\in\mathbb R G$, one has \begin{equation}\label{eq:ultima} \langle \chi_{\pi_i},\overline{\mu}(f)\rangle= \frac{1}{|G|}\sum_{g\in G} \chi_{\pi_i}(g) \overline{\frac{1}{|[g]|}\sum_{x\in [g]} f_x} =\frac{1}{|G|} \sum_{x\in G} \chi_{\pi_i}(x) f_x= \frac{1}{|G|} \chi_{\pi_i}(f), \end{equation} since $\chi_{\pi_i}(x)=\chi_{\pi_i}(g)$ for all $x\in [g]$. By gluing together Eq.~\eqref{eq:ultima} with Eq.~\eqref{eq:dimofi2} we obtain Eq.~\eqref{eq:dimofinale}, and the thesis follows. \end{proof} \begin{remark}\label{rem:nonleft} As already mentioned, the left regular representation $\lambda_G$ contains every irreducible representations. However the condition for a pair of $G$-gain graphs of being $\lambda_G$-cospectral is not equivalent to those of Theorem \ref{teo:cosp}, see Example \ref{exa:2}. \end{remark} \section{Examples and applications}\label{sec:5} This section is devoted to the application of the results obtained in the previous sections to some remarkable cases: signed graphs, gain cyclic graphs, $\mathbb{T}_m$-gain graphs. Finally, we discuss a particular example where a pair of gain graphs over the symmetric group $S_4$ is considered. \subsection{Signed graphs} Signed graph can be regarded as $\mathbb T_2$-gain graphs, being $\mathbb T_2$ the group of order two. 
The irreducible representations of $\mathbb T_2$ are the trivial representation $\pi_0$, and the signed representation, that coincides with $\pi_{id}$. In the light of \Cref{teo:cosp} and \Cref{ex:1,ex:2}, two signed graphs are $\mathbb T_2$-cospectral if and only if they are cospectral as signed graphs and they have cospectral underlying graphs. A trivial example is given by a pair of cospectral graphs (as unsigned graphs) both endowed with the all positive signature. We present now a non-trivial example. \begin{example}\label{exa:1} Let $(K_8,\sigma)$ be the signed graph depicted in Fig. \ref{fig:1}. An explicit computation shows that its spectrum (that is, its $\pi_{id}$-spectrum) is $$\{\pm 1,\pm \sqrt{5}^{ (2)},\pm \sqrt{17}\};$$ it is symmetric with respect to $0$. As a consequence, $(K_8,\sigma)$ and $(K_8,-\sigma)$ are cospectral as signed graphs, that is, they are $\pi_{id}$-cospectral. Moreover, $(K_8,\sigma)$ and $(K_8,-\sigma)$ are also $\pi_0$-cospectral since they have isomorphic underlying graphs. By virtue of Theorem \ref{teo:cosp}, $(K_8,\sigma)$ and $(K_8,-\sigma)$ are $\mathbb T_2$-cospectral, even if they are not switching isomorphic, because $(K_8,\sigma)$ is an example of a non-signsymmetric graph with symmetric spectrum (see \cite[Fig.~2]{open}, \cite{sss}). \begin{figure} \caption{The signed graph $(K_8,\sigma)$ of Example \ref{exa:1} \label{fig:1} \end{figure} \end{example} \subsection{Cycles}\label{sec:cicli} We are going to use the results of the previous two sections to completely characterize switching equivalence classes, switching isomorphism classes, $G$-cospectrality classes and $\lambda_G$-cospectrality classes for $G$-gain graphs on the cyclic graph $C_n$ with vertices $v_1,\ldots,v_n$, for any group $G$. \subsubsection{Switching equivalence classes.} The circuit rank of $C_n$ is $1$ and then, by virtue of Corollary \ref{coro:card}, the switching equivalence classes of $G$-gain functions on $C_n$ are in bijection with $[G]$, the set of conjugacy classes of $G$. More precisely, fixing $W_0$ the closed walk $v_1,v_2,\ldots,v_n,v_1$ on $C_n$, we have $$ (C_n,\psi_1)\sim (C_n,\psi_2) \iff [\psi_1(W_0)]=[\psi_2(W_0)]. $$ \subsubsection{Switching isomorphism classes.} We have just said that on $C_n$ there are $|[G]|$ distinct switching equivalence classes of $G$-gain functions. A natural question is: which of these are in the same switching isomorphism class? It is easy to see that, in general, the automorphism group of a graph $\Gamma$ acts on the set of the switching equivalence classes: the orbits of this action are exactly the switching isomorphism classes of gain graphs whose underlying graph is isomorphic to $\Gamma$. The group of the automorphisms of $C_n$ is well known; it is isomorphic to the \emph{dihedral group $D_{2n}$} and its non-trivial elements can be partitioned into \emph{reflections} and \emph{rotations}. A rotation $\phi$ acts trivially on the switching equivalence classes, not affecting the conjugacy class of the gain of the closed walk $W_0$. More precisely: $$ [\psi\circ \phi(W_0)]=[\psi(W_0)],\qquad \mbox{ for any $G$-gain function $\psi$ on $C_n$}; $$ thus the switching equivalence classes are fixed by $\phi$. On the contrary, a reflection $\sigma$ changes the orientation of the walk $W_0$, that is $$ [\psi\circ \sigma(W_0)]=[\psi(W_0)^{-1}],\qquad \mbox{ for any $G$-gain function $\psi$ on $C_n$}, $$ and then two gain functions in the same switching isomorphism class can also have inverse gains on $W_0$. 
In particular, the switching isomorphisms classes of $C_n$ are in bijection with the conjugacy classes of $G$ up to group inversion. More explicitly, \begin{equation*} \begin{split} (C_n,\psi_1) \mbox{ and } (C_n,\psi_2)\quad \iff \qquad &[\psi_1(W_0)]=[\psi_2(W_0)] \\ \mbox{ are switching isomorphic }\qquad \quad\mbox{ or } &[\psi_1(W_0)]=[\psi_2(W_0)^{-1}]. \end{split} \end{equation*} Notice that if $G$ is such that $[g]=[g^{-1}]$ for every $g\in G$, then the switching isomorphism classes and the switching equivalence classes of $C_n$ coincide. A group with the property that every element is conjugate to its inverse is said to be \emph{ambivalent} \cite{ambi}, and the symmetric group $S_n$ is an example of such a group. \subsubsection{$G$-cospectrality classes.} By virtue of Theorem \ref{teo:benposto}, two switching isomorphic $G$-gain graphs $(C_n,\psi_1)$ and $(C_n,\psi_2)$ are $G$-cospectral. \\We are going to prove that actually $(C_n,\psi_1)$ and $(C_n,\psi_2)$ are $G$-cospectral if and only if they are switching isomorphic. Suppose, by the contradiction, that $(C_n,\psi_1)$ and $(C_n,\psi_2)$ are $G$-cospectral but they are not switching isomorphic. Let us set $a:=\psi_1(W_0)$ and $b:=\psi_2(W_0)$ with $a,b\in G$. Since $(C_n,\psi_1)$ and $(C_n,\psi_2)$ are non-switching isomorphic, it must be $[a]\neq[b]$ and $[a]\neq[b^{-1}]$. In particular, $a$ and $b$ cannot be both trivial; without loss of generality, we assume $a\neq 1_G$. Since $(C_n,\psi_1)$ and $(C_n,\psi_2)$ are $G$-cospectral, by Proposition \ref{prop:cicli}, taking $h=n$ and $g=a$, one has: \begin{equation}\label{eq:contra} 0<\left|\{W\in \mathcal C^n(C_n): \psi_1(W)\in [a]\} \right| = \left|\{W\in \mathcal C^n(C_n): \psi_2(W)\in [a]\} \right|. \end{equation} But in $\mathcal C^n(C_n)$ we have $2n$ closed walks visiting all vertices (as many as the pairs of centers and orientations), and, when $n$ is even, we have also acyclic walks which are automatically balanced, since each edge is crossed the same number of times in the two opposite directions. Then $$ \psi_2(W)\in[b]\cup[b^{-1}]\cup\{1_G\} $$ for every $W\in \mathcal C^n(C_n)$. But $a\notin [b]\cup[b^{-1}]\cup\{1_G\}$, that is in contradiction with Eq.\ \eqref{eq:contra}. \begin{corollary}\label{coro:det} A $G$-gain graph whose underlying graph is a cycle is determined by its $G$-spectrum. \end{corollary} \begin{proof} Let $(C_n,\psi)$ and $(\Gamma,\psi')$ be $G$-cospectral. In particular, they are $\pi_0$-cospectral, that is, the underlying graphs $C_n$ and $\Gamma$ are cospectral. Since $C_n$, as (ungained) graph, is determined by its spectrum (e.g., \cite[Proposition 5]{deter}), also $\Gamma$ is a cycle. But we proved above that two $G$-cospectral gain graphs whose underlying graph is a cycle, must be switching isomorphic. \end{proof} \subsubsection{$\lambda_G$-cospectrality classes.} Here, we focus our attention on cospectrality of cyclic gain graphs with respect to the regular representation $\lambda_G$. We have already noticed that, as a consequence of \Cref{teo:pico}, two gain graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $\lambda_G$-cospectral if and only if for every $h\in \mathbb N$, they have the same number of balanced closed walks of length $h$. When $\Gamma_1$ and $\Gamma_2$ are both isomorphic to $C_n$, this simply means that $\psi_1(W_0)$ and $\psi_2(W_0)$ have the same \emph{order} in $G$, as the next proposition shows. 
\begin{proposition}\label{prop:ordine} Two gain graphs $(C_n,\psi_1)$ and $(C_n,\psi_2)$ are $\lambda_G$-cospectral if and only if $o(\psi_1(W_0))=o(\psi_2(W_0))$, where $W_0$ is the walk $v_1,v_2,\ldots,v_n,v_1$. \end{proposition} \begin{proof} Suppose that $o(\psi_1(W_0))=o(\psi_2(W_0))$. The gain $\psi_i$ of a closed walk $W$ is conjugated with a (possibly negative, depending on the orientation) power of $\psi_i(W_0)$. By assumption, this power of $\psi_i(W_0)$ is equal to $1_G$ for $i=1$ if and only if it is equal to $1_G$ for $i=2$. As a consequence $(C_n,\psi_1)$ and $(C_n,\psi_2)$ have the same number of balanced closed walks of length $h$ for every $h\in\mathbb N$, and so they are $\lambda_G$-cospectral.\\ \indent In order to prove the converse implication, suppose now that $k:=o(\psi_1(W_0))<o(\psi_2(W_0))$. We claim that \begin{equation}\label{eq:nuova} |\{W\in \mathcal C^{nk}(C_n): \psi_1(W)=1_G \}|\geq |\{W\in \mathcal C^{nk}(C_n): \psi_2(W)=1_G \}|+2n. \end{equation} More precisely $2n$ is the number of closed walks which make exactly $k$ turns of the cycle (they are as many as the pairs of central vertices and orientations of the cycle): these walks are balanced with respect to $\psi_1$ (their gains are conjugated with $\psi_1(W_0)^k=1_G$ or its inverse) while they are unbalanced with respect to $\psi_2$ (since $\psi_2(W_0)^k\neq1_G$). Moreover, any closed walk of length $nk$ balanced for $\psi_2$ is automatically balanced for $\psi_1$. \\From Eq.~\eqref{eq:nuova} it follows that $(C_n,\psi_1)$ and $(C_n,\psi_2)$ cannot be $\lambda_G$-cospectral. \end{proof} This characterization highlights that, even if $\lambda_G$ is a faithful representation containing every irreducible representation of $G$, the information on a gain graph given by its $\lambda_G$-spectrum is quite poor. Certainly, the $\lambda_G$-cospectrality is weaker than $G$-cospectrality, as already announced in Remark \ref{rem:nonleft}. \begin{example}\label{exa:2} Let $\mathbb T_5=\{1_{\mathbb{T}_5},\xi,\xi^2,\xi^3,\xi^4\}$ be the cyclic group of order $5$. Since $\mathbb T_5$ is Abelian, the switching equivalence classes of $\mathbb T_5$-gain functions on $C_n$ are $5$. Moreover, since $\xi^4 =\xi^{-1}$, and $\xi^3 = (\xi^2)^{-1}$, there are $3$ switching isomorphism classes of $\mathbb T_5$-gain graphs whose underlying graph is $C_n$, and then also the number of $\mathbb T_5$-cospectrality classes is $3$. Finally, every non-trivial element of $\mathbb T_5$ has order $5$: by Proposition \ref{prop:ordine} there are only $2$ possible $\lambda_{\mathbb T_5}$-spectra for a $\mathbb T_5$-gain graph on $C_n$, one for the balanced case and the other for the unbalanced case. \\More explicitly, when a $\mathbb T_5$-gain graph $(C_n,\psi)$ is balanced, its cover graph is isomorphic to the disjoint union of $5$ copies of $C_n$ and the $\lambda_{\mathbb T_5}$-spectrum of $(C_n,\psi)$ is given by $$ \left\{ \left(2\cos\frac{2\pi j}{n} \right)^{(5)},\; j=0,\ldots, n-1 \right\}.$$ On the other hand, when $(C_n,\psi)$ is unbalanced one can check that its cover graph is isomorphic to the cyclic graph $C_{5n}$ and the $\lambda_{\mathbb T_5}$-spectrum of $(C_n,\psi)$ is given by $$\left\{ 2\cos \frac{2\pi j}{5n},\; j=0,\ldots, 5n-1 \right\}.$$ In particular, there exists a pair $(C_n,\psi_1)$ and $(C_n,\psi_2)$ of $\lambda_{\mathbb T_5}$-cospectral graphs that are not $\mathbb T_5$-cospectral. 
And then, by Theorem \ref{teo:cosp}, there exists an irreducible representation $\pi$ such that $(C_n,\psi_1)$ and $(C_n,\psi_2)$ are not $\pi$-cospectral. This provides an example of gain graphs that are cospectral with respect to a sum of representations but that are not cospectral with respect to some addends, explicitly showing that the converse of Proposition \ref{prop:somma} is not true. Notice that $(C_n,\psi_1)$ and $(C_n,\psi_2)$ are always $\pi_0$-cospectral; since any other irreducible representation of $\mathbb T_5$ is faithful, it follows that $(C_n,\psi_1)$ and $(C_n,\psi_2)$ are not $\pi$-cospectral for some faithful representation $\pi$. \end{example} \subsection{$\mathbb T_m$-gain graphs} As we have shown in the previous example, cospectrality with respect to a faithful representation does not imply cospectrality with respect to any other faithful representation. What happens if one restricts to faithful irreducible representations? In this subsection we investigate this question for the cyclic group $$ \mathbb T_m=\{1_{\mathbb T_m},\xi,\xi^2,\ldots, \xi^{m-1}\} $$ or order $m$, which is isomorphic to the group of $m$-th roots of unity. An element $f=\sum_{i=0}^{m-1} f_{\xi^i} \,\xi^i \in \mathbb C \mathbb T_m$ can be identified with a polynomial of degree at most $m-1$ with coefficients in $\mathbb C$: $$ f(z):=\sum_{i=0}^{m-1} f_{\xi^i } z^{i}.$$ Two $\mathbb T_m$-gain graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ with $A:=A_{(\Gamma_1,\psi_1)}$ and $B:=A_{(\Gamma_2,\psi_2)}$, are $\mathbb T_m$-cospectral if and only if, for every $h\in \mathbb N$, the polynomials $Tr(A^h)$ and $Tr(B^h)$ coincide (see Definition \ref{def:cosp} and remember that $\mathbb T_m$ is Abelian). Thus $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $\mathbb T_m$-cospectral if and only if, for every $h\in \mathbb N$, the polynomial \begin{equation}\label{eq:pp} P_h(z):= Tr(A^h)(z)-Tr(B^h)(z) \end{equation} is the zero polynomial. The unitary irreducible representations of $\mathbb T_m$ have degree $1$ and each of them maps the generator $\xi$ to an $m$-th root of unit in $\mathbb C$. More precisely, a complete system of unitary irreducible representations of $\mathbb T_m$ is given by $\pi_0,\pi_1,\ldots, \pi_{m-1}$, where for each $j$ one has: \begin{equation}\label{eq:defi} \pi_j(\xi)=e^{ \frac{2\pi ij}{m}}\in\mathbb C. \end{equation} Actually $\mathbb T_m$ can be seen already embedded in $\mathbb T$ (with $\xi=e^{ \frac{2\pi i}{m}}$), and in this case the representation $\pi_1$ coincides with $\pi_{id}$. Notice that, for each $j=0,\ldots,m-1$, the linear extension of $\pi_j$ to $\mathbb C \mathbb T_m$ is such that: \begin{equation}\label{eq:valu} \pi_j\left(\sum_{i=0}^{m-1} f_{\xi^i} \,\xi^i\right)=\sum_{i=0}^{m-1} f_{\xi^i}\pi_j(\xi^i)=\sum_{i=0}^{m-1} f_{\xi^i}\pi_j(\xi)^i=f(\pi_j(\xi)), \qquad \forall f\in\mathbb C \mathbb T_m. \end{equation} We are going to analyze $\pi_j$-cospectrality of $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ when $\pi_j$ is faithful. A representation $\pi_j$ is faithful if and only if $\pi_j(\xi)$ is an $m$-th primitive root, that is, if and only if $j$ and $m$ are coprime. The Eq.~\eqref{eq:inteo} of Theorem \ref{teo:pico} for the representation $\pi_j$ is equivalent, by linearity, to \begin{equation}\label{eq:inpi} \pi_j\left(Tr\left(A^h\right)\right)=\pi_j\left(Tr\left(B^h\right)\right),\qquad \forall h\in\mathbb N. 
\end{equation} Combining \Cref{eq:valu} with \Cref{eq:inpi}, recalling the definition of the polynomial $P_h$ from \Cref{eq:pp}, one has that $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $\pi_j$-cospectral if and only if $$P_h( \pi_j(\xi))=0, \quad \forall h\in\mathbb N.$$ The \emph{cyclotomic polynomial $\Phi_m$} is irreducible on $\mathbb Z$ and its roots are the $m$-th primitive roots of unit. Notice that for each $h$, the polynomial $P_h$ has integer coefficients, this implies that if an $m$-th primitive root is a root of $P_h$, then the cyclotomic polynomial $\Phi_m$ divides $P_h$ and then any other $m$-th primitive root is a root of $P_h$. The next corollary directly follows. \begin{corollary}\label{coro:ultimo} Let $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ be two $\mathbb T_m$-gain graphs. Let $\pi$ and $\pi'$ be two unitary, faithful, irreducible representations of $\mathbb T_m$. Then $$(\Gamma_1,\psi_1)\mbox{ and }(\Gamma_2,\psi_2) \mbox{ are $\pi$-cospectral } \iff (\Gamma_1,\psi_1)\mbox{ and }(\Gamma_2,\psi_2) \mbox{ are $\pi'$-cospectral}.$$ In particular, when $m$ is prime, the gain graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $\mathbb T_m$-cospectral if and only if they are $\pi_{id}$-cospectral and their underlying graphs $\Gamma_1$ and $\Gamma_2$ are cospectral. \end{corollary} \begin{proof} The representations $\pi$ and $\pi'$ are unitary, faithful, irreducible by hypothesis. Therefore we can assume $\pi=\pi_j$ and $\pi'=\pi_{j'}$ (see \Cref{eq:defi}), with $j,j'\in\{1,\ldots,m-1\}$ both coprime with $m$. In particular $\pi_j(\xi)$ and $\pi_{j'}(\xi)$ are both $m$-th primitive roots, and then, for every $h\in\mathbb N$, $$P_h(\pi_j(\xi))=0\iff P_h(\pi_{j'}(\xi))=0,$$ and then $(\Gamma_1,\psi_1)\mbox{ and }(\Gamma_2,\psi_2)$ are $\pi_{j}$-cospectral if and only if they are $\pi_{j'}$-cospectral. \\ \indent The second part of the statement follows from the fact that, when $m$ is prime, all non-trivial irreducible representations are faithful, and from the fact that the $\pi_0$-cospectrality is equivalent to the cospectrality of the underlying graphs. \end{proof} \begin{example}\label{exa:nuovofine} Let $\mathbb T_5=\{1_{\mathbb T_5},\xi,\xi^2,\xi^3,\xi^4\}$ be the cyclic group of order $5$. Consider the $\mathbb T_5$-gain graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ depicted in Fig. \ref{fig:uu}, where the gain of each unlabelled edge is $1_{\mathbb T_5}$ and where if an oriented edge from a vertex $u$ to a vertex $v$ in $(\Gamma_i,\psi_i)$ has label $\xi$ then $\psi_i(u,v)=\xi$ and $\psi_i(v,u)=\xi^{-1}$. \\The underlying graphs $\Gamma_1$ and $\Gamma_2$ are cospectral, both with characteristic polynomial equal to: $$(x-1)(x^3-x^2-5x+1)(x+1)^2.$$ Moreover, $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $\pi_{id}$-cospectral, since both the matrices $$\begin{pmatrix} 0&1&0&0&0&0\\ 1&0&e^{\frac{2\pi i}{5}}&0&1&0\\ 0&e^{-\frac{2\pi i}{5}}&0&1&1&0\\ 0&0&1&0&e^{-\frac{2\pi i}{5}}&1\\ 0&1&1&e^{\frac{2\pi i}{5}}&0&0\\ 0&0&0&1&0&0\\ \end{pmatrix} \mbox{ and } \begin{pmatrix} 0&1&1&1&1&1\\ 1&0&0&0&0&0\\ 1&0&0&e^{\frac{2\pi i}{5}}&0&0\\ 1&0&e^{-\frac{2\pi i}{5}}&0&0&0\\ 1&0&0&0&0&e^{\frac{2\pi i}{5}}\\ 1&0&0&0&e^{-\frac{2\pi i}{5}}&0\\ \end{pmatrix} $$ have characteristic polynomial: $$\left(x^4-6x^2-4x\cos\frac{2\pi}{5}+1\right)(x^2-1).$$ By \Cref{coro:ultimo}, this is enough to state that $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $\mathbb T_5$-cospectral. 
Notice that this implies also that $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ are $\mathbb T$-cospectral as $\mathbb T$-gain graphs. \begin{center} \begin{figure} \caption{The $\mathbb T_5$-gain graphs $(\Gamma_1,\psi_1)$ and $(\Gamma_2,\psi_2)$ of Example \ref{exa:nuovofine} \label{fig:uu} \end{figure} \end{center} \end{example} In \Cref{coro:ultimo} we proved that two $\mathbb T_m$-gain graphs are cospectral with respect to either all irreducible faithful representations or none of them. It is natural to ask whether this behavior is not peculiar of finite subgroups of $\mathbb T$ and if it is more generally valid, maybe even for non-Abelian groups. A positive answer would suggest to consider a milder definition of cospectrality for $G$-gain graphs, less strong than $G$-cospectrality, but still independent from the choice of the representation (provided it is faithful and irreducible), which would be also more coherent with the ``classical'' concept of cospectrality in $\mathbb T$-gain graphs and signed graphs, where only the $\pi_{id}$-spectrum is considered. The answer is no: there are pairs of gain graphs, cospectral with respect to a faithful irreducible representation and non-cospectral with respect to another faithful irreducible representation. In order to show that, we are going to investigate a group with a richer representation theory, such as the symmetric group $S_4$ is. \subsection{An example over the symmetric group $S_4$} The symmetric group $S_4$ admits two faithful irreducible representations: the standard representation and its tensor product with the alternating representation \cite[Section~2.3]{fulton}, both of degree $3$. We explicitly construct unitary representatives $\pi_{St}$ and $\pi_{St\otimes A}$ for both representations in \Cref{table}. \begin{center} \begin{tabular}{c|c|c|} & $(12)$& $(1234)$\\ \hline $\pi_{St}$ & $\begin{pmatrix} -1&0&0\\ 0&1&0\\ 0&0&1\\ \end{pmatrix}$ & $\begin{pmatrix} -\frac{1}{2}&\frac{\sqrt{3}}{2}&0\\ -\frac{\sqrt{3}}{6}&-\frac{1}{6}& \frac{2\sqrt{2}}{3}&\\ -\frac{\sqrt{6}}{3}&-\frac{\sqrt{2}}{3}&-\frac{1}{3}\\ \end{pmatrix}$ \\ \hline $\pi_{St\otimes A}$ & $\begin{pmatrix} 1&0&0\\ 0&-1&0\\ 0&0&-1\\ \end{pmatrix}$ & $\begin{pmatrix} \frac{1}{2}&-\frac{\sqrt{3}}{2}&0\\ \frac{\sqrt{3}}{6}&\frac{1}{6}& -\frac{2\sqrt{2}}{3}&\\ \frac{\sqrt{6}}{3}&\frac{\sqrt{2}}{3}&\frac{1}{3}\\ \end{pmatrix}$ \\ \hline \end{tabular} \captionof{table}{The images of $(12)$ and $(1234)$ via the representations $\pi_{St}$ and $\pi_{St\otimes A}$.}\label{table} \end{center} The permutations $(12)$ and $(1234)$ generate $S_4$ and one can check that $\pi_{St}$ and $\pi_{St\otimes A}$ extend to homomorphisms from $S_4$ to $U_3(\mathbb{C})$. In other words, by using \Cref{table}, one can compute the values of $\pi_{St}$ and $\pi_{St\otimes A}$ on any permutation in $S_4$ (with the convention of multiplying from the left to the right). For example, from the relation $(34)=(1234)^{-2}(12)(1234)^{2}$, we obtain: \begin{equation*}\label{eq:34} \begin{split} \pi_{St}((34))= \pi_{St}((1234))^{-2} \pi_{St}((12)) \pi_{St}((1234))^2&=\begin{pmatrix} 1&0&0\\ 0&\frac{1}{3}& \frac{2\sqrt{2}}{3}&\\ 0& \frac{2\sqrt{2}}{3}&-\frac{1}{3}\\ \end{pmatrix};\\ \pi_{St\otimes A}((34))= \pi_{St\otimes A}((1234))^{-2} \pi_{St\otimes A}((12)) \pi_{St\otimes A}((1234))^2&=\begin{pmatrix} -1&0&0\\ 0&-\frac{1}{3}& -\frac{2\sqrt{2}}{3}&\\ 0& -\frac{2\sqrt{2}}{3}&\frac{1}{3}\\ \end{pmatrix}. 
\end{split} \end{equation*} Also: \begin{equation*}\label{eq:1234} \begin{split} \pi_{St}((12)(34))= \pi_{St}((12)) \pi_{St}((34)) &= \begin{pmatrix} -1&0&0\\ 0&\frac{1}{3}& \frac{2\sqrt{2}}{3}&\\ 0& \frac{2\sqrt{2}}{3}&-\frac{1}{3}\\ \end{pmatrix};\\ \pi_{St\otimes A}((12)(34))= \pi_{St\otimes A}((12)) \pi_{St\otimes A}((34)) &=\begin{pmatrix} -1&0&0\\ 0&\frac{1}{3}& \frac{2\sqrt{2}}{3}&\\ 0& \frac{2\sqrt{2}}{3}&-\frac{1}{3}\\ \end{pmatrix}. \end{split} \end{equation*} \begin{example}\label{exa:s4} Let $(\Gamma,\psi)$ be the $S_4$-gain graph depicted in Fig.~\ref{fig:2}, whose group algebra valued adjacency matrix is $$A_{(\Gamma,\psi)}=\begin{pmatrix} 0&(12)(34)&0&1_{S_4}&0&0&0&0&0\\ (12)(34)&0&1_{S_4}&0&1_{S_4}&0&0&0&0\\ 0&1_{S_4}&0&(12)(34)&1_{S_4}&0&0&0&0\\ 1_{S_4}&0&(12)(34)&0&1_{S_4}&0&0&0&0\\ 0&1_{S_4}&1_{S_4}&1_{S_4}&0&1_{S_4}&0&0&0\\ 0&0&0&0& 1_{S_4}&0&(12)&0&(34)\\ 0&0&0&0& 0&(12)&0&(34)&0\\ 0&0&0&0& 0&0&(34)&0& (12)\\ 0&0&0&0& 0&(34)&0&(12)&0\\ \end{pmatrix}. $$ Let $(\Gamma,\psi')$ be the $S_4$-gain graph depicted in Fig.~\ref{fig:3}, whose group algebra valued adjacency matrix is $$A_{(\Gamma,\psi')}=\begin{pmatrix} 0&(34)&0&(12)&0&0&0&0&0\\ (34)&0&(12)&0&1_{S_4}&0&0&0&0\\ 0&(12)&0&(34)&1_{S_4}&0&0&0&0\\ (12)&0&(34)&0&1_{S_4}&0&0&0&0\\ 0&1_{S_4}&1_{S_4}&1_{S_4}&0&1_{S_4}&0&0&0\\ 0&0&0&0& 1_{S_4}&0&1_{S_4}&0&(12)(34)\\ 0&0&0&0& 0&1_{S_4}&0&(12)(34)&0\\ 0&0&0&0& 0&0&(12)(34)&0& 1_{S_4}\\ 0&0&0&0& 0&(12)(34)&0&1_{S_4}&0\\ \end{pmatrix}. $$ Notice that each unlabelled edge in Fig. \ref{fig:2} and Fig. \ref{fig:3} corresponds to an edge whose gain is $1_{S_4}$, and in both graphs each gain is an involution in $S_4$ and then it is not necessary to specify the edge direction associated with the label. By a direct computation, one can check that the $27\times 27$ matrices $\pi_{St}(A_{(\Gamma,\psi)})$ and $\pi_{St}(A_{(\Gamma,\psi')})$ are cospectral but $\pi_{St\otimes A}(A_{(\Gamma,\psi)})$ and $\pi_{St\otimes A}(A_{(\Gamma,\psi')})$ are not. More precisely, $\pi_{St}(A_{(\Gamma,\psi)})$ and $\pi_{St}(A_{(\Gamma,\psi')})$ and $\pi_{St\otimes A}(A_{(\Gamma,\psi)})$ have the same characteristic polynomial, that is: \begin{equation*} \begin{split} (x^9-12x^7+44x^5-48x^3)\cdot (&x^{18}-24x^{16}-4x^{15} +224x^{14}+64x^{13}-1024x^{12}-368x^{11}\\&+2352x^{10}+896x^9-2432x^8-768x^7+768x^6). \end{split} \end{equation*} On the other hand, the characteristic polynomial of $\pi_{St\otimes A}(A_{(\Gamma,\psi')})$ is: \begin{equation*} \begin{split} (x^9-12x^7+44x^5-48x^3)\cdot (&x^{18}-24x^{16}+4x^{15} +224x^{14}-64x^{13}-1024x^{12}+368x^{11}\\&+2352x^{10}-896x^9-2432x^8+768x^7+768x^6). \end{split} \end{equation*} Therefore $(\Gamma,\psi)$ and $(\Gamma,\psi')$ are $\pi_{St}$-cospectral but they are not $\pi_{St\otimes A}$-cospectral. Notice that the idea behind the construction is that the element $1_{S_4}+(12)(34)-(12)-(34)\in\mathbb C S_4$ is in the kernel of the extension of $\pi_{St}$ to $\mathbb C S_4$ (it is also in the kernel of the extension of the natural permutation representation, that is equivalent to $\pi_{0}\oplus \pi_{St}$). But the same element is not in the kernel of the extension of $\pi_{St\otimes A}$. More precisely: \begin{equation}\label{eq:finale} \begin{split} \pi_{St}(1_{S_4})+\pi_{St}((12)(34))&=\pi_{St}((12))+\pi_{St}((34))\\ \pi_{St\otimes A}(1_{S_4})+\pi_{St\otimes A}((12)(34))&=-\pi_{St\otimes A}((12))-\pi_{St\otimes A}((34)). 
\end{split} \end{equation} This way, looking at the gains in $(\Gamma,\psi)$ and $(\Gamma,\psi')$ of the two triangles, one has $$\sum_{W \in\mathcal C^3} \chi_{\pi_{St}}(\psi(W))= \sum_{W \in\mathcal C^3} \chi_{\pi_{St}}(\psi'(W))$$ but $$\sum_{W \in\mathcal C^3} \chi_{\pi_{St\otimes A}}(\psi(W))\neq \sum_{W \in\mathcal C^3} \chi_{\pi_{St\otimes A}}(\psi'(W)),$$ and then, for the representation $\pi_{St\otimes A}$, the condition of Eq.~\eqref{eq:inteo} does not hold for $h=3$; it follows from Theorem \ref{teo:pico} that $(\Gamma,\psi)$ and $(\Gamma,\psi')$ are not $\pi_{St\otimes A}$-cospectral. \end{example} \begin{center} \begin{figure} \caption{The $S_4$-gain graph $(\Gamma,\psi)$ of Example \ref{exa:s4} \caption{The $S_4$-gain graph $(\Gamma,\psi')$ of Example \ref{exa:s4} \label{fig:2} \label{fig:3} \end{figure} \end{center} \end{document}
\begin{document} \title{Noiseless amplification of weak coherent fields exploiting\\ energy fluctuations of the field} \date{4 December 2012} \author{Mikko Partanen} \author{Teppo H\"ayrynen} \author{Jani Oksanen} \author{Jukka Tulkki} \affiliation{Department of Biomedical Engineering and Computational Science, Aalto University, P.O. Box 12200, 00076 Aalto, Finland} \keywords{quantum optics, coherent field, noiseless amplification, Wigner function} \pacs{42.50.Lc, 42.50.Gy} \begin{abstract} Quantum optics dictates that amplification of a pure state by any linear deterministic amplifier always introduces noise in the signal and results in a mixed output state. However, it has recently been shown that noiseless amplification becomes possible if the requirement of a deterministic operation is relaxed. Here we propose and analyze a noiseless amplification scheme where the energy required to amplify the signal originates from the stochastic fluctuations in the field itself. In contrast to previous amplification setups, our setup shows that a signal can be amplified even if no energy is added to the signal from external sources. We investigate the relation between the amplification and its success rate as well as the statistics of the output states after successful and failed amplification processes. Furthermore, we also optimize the setup to find the maximum success rates in terms of the reflectivities of the beam splitters used in the setup and discuss the relation of our setup with the previous setups. \end{abstract} \maketitle \section{Introduction} Any conventional amplification process unavoidably introduces quantum noise into the signal \cite{Caves1982}. However, this can be circumvented by implementing the amplification nondeterministically so that an amplified noiseless output signal occurs randomly \cite{Zavatta2011,Ferreyrol2010,Ferreyrol2011,Barbieri2011}. Noiseless amplification schemes generally rely on applying quantum operations, such as a sequence of single-photon addition and subtraction, to the optical field \cite{Marek2010,Hayrynen2009,Hayrynen2011}. In an experimental implementation, the addition and subtraction of photons is typically performed by using single-photon light sources, beam splitters, and photodetectors \cite{Zavatta2011,Ferreyrol2010,Ferreyrol2011,Barbieri2011}, thus leading to nondeterministic amplification. The noiseless high-fidelity amplification schemes are expected to become an essential tool for quantum communications and metrology, by recovering information transmitted over lossy channels or by enhancing the discrimination between partially overlapping quantum states \cite{Zavatta2011,Josse2006}. In this paper, we propose and analyze a noiseless amplification scheme where, in contrast to previous works \cite{Ferreyrol2010,Zavatta2011,Ferreyrol2011}, the energy required to amplify the signal does not originate from an external energy source (i.e., a single-photon source) but from the stochastic fluctuations in the field itself. More concretely, the signal is amplified even if no external energy is added to it. The proposed scheme consists of devices that are frequently used in experiments. Furthermore, as one of the key challenges in nondeterministic quantum optical amplification of light is the small success rate, we also consider improving the success probability. We start by a short summary of the basic principles of noiseless amplification and the related figures of merit. 
This is followed by describing the proposed amplification setup and calculations. \section{Amplification scheme} The proposed amplification scheme, illustrated in Fig.~\ref{fig:scheme}, utilizes the energy fluctuations of the initial field to replace the single-photon source that would otherwise be needed as in the scheme suggested by Zavatta \emph{et al.} \cite{Zavatta2011}. In our scheme, a similar action is obtained by a configuration where the successful subtraction of a single photon from the initial field by a beam splitter is verified by a quantum nondemolition (QND) measurement \cite{Munro2005,Nogues1999,Guerlin2007,Grangier1998,Brune1990,Milburn1984}, which is followed by adding the photon back to the field by the second beam splitter if no photons are detected at photodetector PD1. Finally, another photon is subtracted from the field at the third beam splitter, if photodetector PD2 detects a photon. The final output state resulting from these events is an amplified coherent state with high fidelity when the QND, PD1, and PD2 detectors detect 1, 0, and 1 photons, respectively. The action of an ideal noiseless amplifier for coherent states can be described as $|\alpha\rangle\rightarrow|g\alpha\rangle$, where $|\alpha\rangle$ is the initial coherent field, $|g\alpha\rangle$ is the amplified field, and $g$ is the gain of amplification. This operation cannot be implemented by deterministic amplifiers, but it can be approximated probabilistically. It has been shown that the operator $\hat G=\hat a\hat a^\dag$, where $\hat a$ and $\hat a^\dag$ are the annihilation and creation operators of the field, approximates the action of amplification for weak coherent fields with nominal gain $g=2$ \cite{Fiurasek2009}. The scheme suggested by Zavatta \emph{et al.} \cite{Zavatta2011} is based on this. The same outcome is also obtained by an operator $\hat G^{\,\prime}=\hat a\hat a^\dag\hat a$ implemented by the setup used in this paper because the input field $|\alpha\rangle$ is an eigenstate of $\hat a$. \begin{figure} \caption{\label{fig:scheme} \label{fig:scheme} \end{figure} \subsection{Output field of the amplifier} The output fields of our setup have been calculated using the standard Wigner function formalism. The Wigner function of the initial coherent field $|\alpha\rangle$ is \cite{Schleich2001} \begin{equation} W_{\mathrm{coh}}(x,p)=\frac{1}{\pi\hbar}\exp\!\Big[-(\kappa x-\sqrt{2}\mathrm{Re}\alpha)^2-\left(\frac{p}{\hbar\kappa}-\sqrt{2}\mathrm{Im}\alpha\right)^2\Big], \label{eq:W_coherent} \end{equation} where $x$ and $p$ are position and momentum quadratures of the field, $\alpha$ is a complex variable defining the coherent field amplitude, $\kappa$ is the spring constant of the field oscillator, and $\hbar$ is the reduced Planck constant. When plotting the Wigner functions, it is conventional to set $\hbar=\kappa=1$ \cite{Schleich2001}. The entangled Wigner function $W_\mathrm{BS}$ emerging as a result from fields $W_\mathrm{field 1}$ and $W_\mathrm{field 2}$ interfering on a beam splitter is given by \cite{Leonhardt1997,Leonhardt2003} \begin{align} W_\mathrm{BS}(x_1,p_1,x_2,p_2) =\; & W_\mathrm{field 1}(tx_1+rx_2,tp_1+rp_2)\nonumber \\ & \times W_\mathrm{field 2}(tx_2-rx_1,tp_2-rp_1), \label{eq:bs} \end{align} where $x_1$, $p_1$, $x_2$, and $p_2$ are the position and momentum quadratures of the transmitted and reflected fields and the beam splitter reflection and transmission coefficients $r$ and $t$ obey the relation $r^2+t^2=1$. 
In our notation $W_\mathrm{field 1}$ is the field incident to the beam splitter from the left and transmitted field quadratures refer to the field emerging from the beam splitter to the right in Fig.~\ref{fig:scheme}. The probability of detecting $n$ photons on the reflected field (the field that emerges from the beam splitter and travels vertically in Fig.~\ref{fig:scheme}) can be expressed as \begin{align} P(n) =\; & 2\pi\hbar\int W_\mathrm{BS}(x_1,p_1,x_2,p_2)\nonumber \\ & \times W_\mathrm{n}(x_2,p_2)\,dx_1\,dp_1\,dx_2\,dp_2, \label{eq:probability} \end{align} where $W_n$ is the Wigner function of the $n$-photon Fock state $|n\rangle$, to which the reflected field collapses after the detection, and is expressed as \cite{Schleich2001} \begin{align} W_n(x,p) = \; & \frac{(-1)^n}{\pi\hbar}\exp\!\Big[-(\kappa x)^2-\left(\frac{p}{\hbar\kappa}\right)^2\Big]\nonumber\\ & \times L_n\Big[2(\kappa x)^2+2\left(\frac{p}{\hbar\kappa}\right)^2\Big], \label{eq:W_fock} \end{align} where $L_n(x)$ denotes a Laguerre polynomial of degree $n$. After detecting $n$ photons on the reflected field, the transmitted field collapses to \begin{align} W_\mathrm{T}(x_1,p_1) = \; & \frac{2\pi\hbar}{P(n)}\int W_\mathrm{BS}(x_1,p_1,x_2,p_2)\nonumber \\ & \times W_\mathrm{n}(x_2,p_2)\,dx_2\,dp_2. \label{eq:transmitted} \end{align} The collapsed transmitted field is then used as the input for the second beam splitter. Despite the physical difference between the QND and PD, their effect on the transmitted field is exactly the same, and to calculate the final output of the setup Eqs.~(\ref{eq:bs})--(\ref{eq:transmitted}) are applied for the remaining two beam splitters as described in more detail below. For simplicity, we have made the usual assumption that the photodetectors PD1 and PD2 are ideal. The same assumption is also made for the QND since measurements made with QND detectors have been reported to yield single-photon Fock states with good accuracy \cite{Munro2005,Nogues1999,Guerlin2007,Grangier1998,Brune1990,Milburn1984}. The details of the analysis of how the state is propagated through the setup resulting in the conditional state of interest are as follows. In the first beam splitter, the initial coherent field $|\alpha\rangle$ in Eq.~\eqref{eq:W_coherent} is mixed with a vacuum state $|0\rangle$ [a zero-photon Fock state $n=0$ in Eq.~\eqref{eq:W_fock}] using Eq.~\eqref{eq:bs}. Then, one photon is measured by the QND detector. The probability for this and the transmitted field are given by Eqs.~\eqref{eq:probability} and \eqref{eq:transmitted} with $n=1$. In the second beam splitter, the transmitted field is mixed with a single-photon Fock state using Eq.~\eqref{eq:bs} since a photon coming from the QND detector is added to the field. No photons are measured by photodetector PD1. The probability for this and the transmitted field are given by Eqs.~\eqref{eq:probability} and \eqref{eq:transmitted} with $n=0$. In the third beam splitter, a photon is subtracted from the field. This is performed by mixing the field with a vacuum state using Eq.~\eqref{eq:bs} and using Eqs.~\eqref{eq:probability} and \eqref{eq:transmitted} with $n=1$ for calculating the probability and the transmitted field that is the final output state of the setup. 
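
The propagation just described can also be reproduced numerically. The following Python sketch (ours; it assumes \texttt{numpy} and \texttt{scipy}) carries out the same sequence of conditional measurements in a truncated Fock basis rather than with Wigner functions; for the weak fields considered here the two descriptions agree, and the values $|\alpha|=0.5$ and $r=0.4$ are only illustrative choices.
\begin{verbatim}
# Conditional propagation through the three beam splitters in a truncated
# Fock basis (our sketch, assuming numpy and scipy); ideal detectors.
from math import factorial
import numpy as np
from scipy.linalg import expm

N = 15                                         # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)       # annihilation operator
A1 = np.kron(a, np.eye(N))                     # signal (transmitted) mode
A2 = np.kron(np.eye(N), a)                     # ancilla (detected) mode

def coherent(alpha):
    psi = np.array([alpha**k / np.sqrt(factorial(k)) for k in range(N)],
                   dtype=complex) * np.exp(-abs(alpha)**2 / 2)
    return psi / np.linalg.norm(psi)           # compensate the truncation

def fock(m):
    psi = np.zeros(N, dtype=complex); psi[m] = 1.0
    return psi

def beam_splitter(r):
    theta = np.arcsin(r)                       # transmissivity t = cos(theta)
    return expm(theta * (A1.conj().T @ A2 - A1 @ A2.conj().T))

def mix_and_detect(signal, ancilla, r, n_detected):
    """Mix the signal with the ancilla on a beam splitter of reflectivity r,
    then project the reflected mode on the Fock state |n_detected>."""
    out = beam_splitter(r) @ np.kron(signal, ancilla)
    cond = out.reshape(N, N)[:, n_detected]    # conditional signal amplitudes
    p = np.vdot(cond, cond).real
    return cond / np.sqrt(p), p

alpha, r = 0.5, 0.4
s0 = coherent(alpha)
s1, p1 = mix_and_detect(s0, fock(0), r, 1)     # QND detects one photon
s2, p2 = mix_and_detect(s1, fock(1), r, 0)     # photon re-added, PD1 sees none
s3, p3 = mix_and_detect(s2, fock(0), r, 1)     # PD2 subtracts one photon

g_eff = abs(np.vdot(s3, a @ s3)) / abs(np.vdot(s0, a @ s0))
print(p1 * p2 * p3, g_eff)                     # roughly 1.4e-3 and 1.4
\end{verbatim}
For these parameters the two printed numbers should reproduce, up to truncation error, the success probability and effective gain given by Eqs.~\eqref{eq:optfunc} and \eqref{eq:gefffull} below.
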
The total probability for this successfully amplified output state $P_\mathrm{succ}$ is the product of the mentioned three photon detection probabilities given by \begin{equation} P_\mathrm{succ}=(1+|t_1t_2t_3\alpha|^2(3+|t_1t_2t_3\alpha|^2)) |r_1r_2r_3\alpha|^2 e^{|t_1t_2t_3\alpha|^2-|\alpha|^2}, \label{eq:optfunc} \end{equation} where $r_i$ and $t_i$, $i=1,2,3$, are reflectivities and transmittivities of the beam splitters in the setup obeying $r_i^2+t_i^2=1$. \subsection{Effective gain and fidelity of the amplified state} In the calculations depending on the parameters of the setup, effective gain values different from the nominal gain of 2 can be found. The effective gain can be defined as the ratio of the expectation values of the annihilation operator $\hat a$ for the output and input fields \cite{Zavatta2011} \begin{equation} g_\mathrm{eff}=\frac{|\langle\hat a_\mathrm{out}\rangle|}{|\langle\hat a_\mathrm{in}\rangle|}, \label{eq:geff} \end{equation} which corresponds to the effective amplification of the electric-field amplitude. In the Wigner function formalism, the expectation value of the annihilation operator can be calculated using the operator correspondence relation as \cite{Gardiner2004} \begin{align} \langle\hat a\rangle =\; & \int\left[\frac{\kappa}{\sqrt{2}}\left(x+\frac{i\hbar}{2}\frac{\partial}{\partial p}\right) +\frac{i}{\sqrt{2}\hbar\kappa}\left(p-\frac{i\hbar}{2}\frac{\partial}{\partial x}\right)\right]\nonumber\\ & \times W(x,p)\,dx\,dp. \label{eq:annih} \end{align} The calculations produce the following expression for the effective gain: \begin{equation} g_\mathrm{eff}=\frac{t_1 t_2 t_3 (2+4|t_1 t_2 t_3\alpha|^2+|t_1 t_2 t_3\alpha|^4)} {1+3|t_1 t_2 t_3\alpha|^2+|t_1 t_2 t_3\alpha|^4}. \label{eq:gefffull} \end{equation} It is also useful to quantify how much the output state differs from an ideally amplified coherent state. A practical measure for this purpose is the fidelity $F$, which is the overlap between the states calculated using Wigner functions $W_1$ and $W_2$ of the compared fields \cite{Lee2000} \begin{equation} F(W_1,W_2)=2\pi\hbar\int W_1(x,p)W_2(x,p)\,dx\,dp. \label{eq:fidelityw} \end{equation} The fidelity obtained for the successfully amplified field with respect to a coherent field $|g_\mathrm{eff}\alpha\rangle$ is \begin{equation} F_\mathrm{eff}=\frac{(1+2g_\mathrm{eff}t_1t_2t_3|\alpha|^2+g_\mathrm{eff}^2t_1^2t_2^2t_3^2|\alpha|^4) e^{-(g_\mathrm{eff}^2-t_1t_2t_3)^2|\alpha|^2}} {1+3|t_1 t_2 t_3\alpha|^2+|t_1 t_2 t_3\alpha|^4}. \label{eq:fidelityeff} \end{equation} \begin{figure} \caption{\label{fig:succ} \label{fig:succ} \end{figure} The effective gain, fidelity, and Wigner functions of successfully amplified fields are presented in Fig.~\ref{fig:succ}. In Fig.~\ref{fig:succ}(a) the effective gain is very close to the nominal gain value $g=2$ for small values of $|\alpha|$ and $r$. As the input field amplitude or the beam splitter reflectivity increases, the gain decreases. Increasing the input field amplitude results in the reduction of the effective fidelity $F_\mathrm{eff}$ as shown in Fig.~\ref{fig:succ}(b), where the fidelity is calculated with respect to a coherent field $|g_\mathrm{eff}\alpha\rangle$. However, the reduction of fidelity can be partly compensated by increasing the beam splitter reflectivity. 
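
The closed-form expressions of Eqs.~\eqref{eq:optfunc} and \eqref{eq:gefffull} are also easy to evaluate directly. The short sketch below (ours, assuming \texttt{numpy}) does so for the illustrative values $|\alpha|=0.5$ and $r_1=r_2=r_3=0.4$ used later in Table~\ref{tbl:states}.
\begin{verbatim}
# Direct evaluation of the closed-form expressions for P_succ and g_eff
# given above (our sketch, assuming numpy).
import numpy as np

def p_succ(alpha, r1, r2, r3):
    t1, t2, t3 = (np.sqrt(1 - r**2) for r in (r1, r2, r3))
    T = abs(t1 * t2 * t3 * alpha)**2            # |t1 t2 t3 alpha|^2
    R = abs(r1 * r2 * r3 * alpha)**2            # |r1 r2 r3 alpha|^2
    return (1 + T * (3 + T)) * R * np.exp(T - abs(alpha)**2)

def g_eff(alpha, r1, r2, r3):
    t1, t2, t3 = (np.sqrt(1 - r**2) for r in (r1, r2, r3))
    T = abs(t1 * t2 * t3 * alpha)**2
    return t1 * t2 * t3 * (2 + 4*T + T**2) / (1 + 3*T + T**2)

print(p_succ(0.5, 0.4, 0.4, 0.4))               # ~1.36e-3
print(g_eff(0.5, 0.4, 0.4, 0.4))                # ~1.37
\end{verbatim}
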
Figure \ref{fig:succ}(c) shows the fidelity calculated with respect to an ideal maximally amplified coherent field $|2\alpha\rangle$ for comparison with the results obtained by Zavatta \emph{et al.} \cite{Zavatta2011} for the setup, including a specific single-photon source. The values for this ideal fidelity $F_\mathrm{ideal}$ decrease faster than the effective fidelities $F_\mathrm{eff}$ in Fig. \ref{fig:succ}(b) due to the reduction in $g_\mathrm{eff}$ for stronger input fields. Thus $F_\mathrm{eff}$ is a better measure for the quality of the resulting output field. One can also see that $F_\mathrm{ideal}$ decreases when the beam splitter reflectivity increases while the opposite is true for $F_\mathrm{eff}$. This is also due to the reduction in the effective gain. The contour plots in Fig.~\ref{fig:succ}(d) demonstrate how the Wigner function deforms in the amplification. For small initial field amplitudes, the output field is very close to a pure coherent field, but it increasingly deviates from a coherent state when the initial field amplitude increases. \subsection{Optimizing the scheme} Next we investigate how to optimize the probability of successful amplification while maintaining a given effective gain. The optimization problem for maximizing the probability of successful amplification [Eq.~\eqref{eq:optfunc}] with a constraint requiring the effective gain [Eq.~\eqref{eq:gefffull}] exceeding a threshold value $g_\mathrm{eff,0}$ can be presented as \begin{equation} \max_{g_\mathrm{eff}\ge g_\mathrm{eff,0}} P_\mathrm{succ}(|\alpha|,r_1,r_2,r_3). \label{eq:optproblem} \end{equation} Here, $|\alpha|$ is the input field amplitude and the beam splitter reflectivities are $r_1$, $r_2$, and $r_3$. The four-dimensional nonlinear optimization problem in Eq.~\eqref{eq:optproblem} was solved using a barrier function method \cite{Bazaraa2006}. For a certain $g_\mathrm{eff,0}$, one finds a single maximum $P_\mathrm{opt}$ with an optimal input field amplitude $|\alpha|_\mathrm{opt}$ and beam splitter reflectivities $r_\mathrm{1,opt}$, $r_\mathrm{2,opt}$, and $r_\mathrm{3,opt}$. The optimization problem was solved multiple times changing the minimum effective gain parameter $g_\mathrm{eff,0}$. It was found that at the optimum all the beam splitter reflectivities have the same value $r_\mathrm{i,opt}=r_\mathrm{opt}$, $i=1,2,3$. \begin{figure} \caption{\label{fig:opti} \label{fig:opti} \end{figure} Figure \ref{fig:opti} shows how the optimized probability of successful amplification $P_\mathrm{opt}$, the input field amplitude $|\alpha|_\mathrm{opt}$, the beam splitter reflectivity $r_\mathrm{opt}$, and the fidelity of the successfully amplified state $F_\mathrm{opt}$ evolve as a function of the minimum effective gain parameter $g_\mathrm{eff,0}$. The probability of successful amplification in Fig.~\ref{fig:opti}(a) decreases exponentially when the effective gain increases. For instance, if one wants to have an effective gain of 1.4, the maximum success probability of $10^{-3}$ is achievable with $|\alpha|_\mathrm{opt}=0.51$ and $r_\mathrm{opt}=0.38$. For comparison, Ferreyrol \emph{et al.} \cite{Ferreyrol2010,Ferreyrol2011} reported success rates of order $10^{-2}$ for a conventional scheme based on quantum scissors \cite{Ralph2008}. However, their scheme required a single photon source whose effect is not included in the reported success rates. Thus the obtained values can not be directly compared. 
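
For readers wishing to reproduce the trends in Fig.~\ref{fig:opti}, the constrained maximization of Eq.~\eqref{eq:optproblem} can also be carried out with an off-the-shelf solver. The sketch below (ours, assuming \texttt{numpy} and \texttt{scipy}) uses sequential least squares programming instead of the barrier method employed above, with the illustrative threshold $g_\mathrm{eff,0}=1.4$; the bounds and starting point are our own choices.
\begin{verbatim}
# Constrained maximization of P_succ subject to g_eff >= g_eff,0
# (our sketch, assuming numpy and scipy; SLSQP instead of a barrier method).
import numpy as np
from scipy.optimize import minimize

def p_succ(x):
    alpha, r1, r2, r3 = x
    t = np.sqrt((1 - r1**2) * (1 - r2**2) * (1 - r3**2))
    T, R = (t * alpha)**2, (r1 * r2 * r3 * alpha)**2
    return (1 + T * (3 + T)) * R * np.exp(T - alpha**2)

def g_eff(x):
    alpha, r1, r2, r3 = x
    t = np.sqrt((1 - r1**2) * (1 - r2**2) * (1 - r3**2))
    T = (t * alpha)**2
    return t * (2 + 4*T + T**2) / (1 + 3*T + T**2)

g0 = 1.4
res = minimize(lambda x: -p_succ(x), x0=[0.5, 0.3, 0.3, 0.3],
               method="SLSQP",
               bounds=[(0.05, 2.0)] + [(0.05, 0.95)] * 3,
               constraints=[{"type": "ineq", "fun": lambda x: g_eff(x) - g0}])

alpha_opt, *r_opt = res.x
print(alpha_opt, r_opt, -res.fun)
# expected to be close to the reported optimum: alpha ~ 0.51, r ~ 0.38, P ~ 1e-3
\end{verbatim}
The optimum found in this way has three (nearly) equal reflectivities, consistently with the observation made above.
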
In Fig.~\ref{fig:opti}(b) one sees that for useful values of $g_\mathrm{eff,0}$ the optimal input field amplitude is limited to $|\alpha|_\mathrm{opt}<1$. The optimal beam splitter reflectivity in Fig.~\ref{fig:opti}(c) has a maximum $r_\mathrm{opt}=0.42$ at the effective gain $g_\mathrm{eff,0}=1.18$ and it approaches zero when the required effective gain approaches 2. The fidelity curve in Fig.~\ref{fig:opti}(d) has a minimum $F_\mathrm{opt}=0.982$ at $g_\mathrm{eff,0}=1.08$. In the nominal gain limit, the fidelity approaches unity. \subsection{Failed amplification} So far, we have only discussed the case of successful amplification. For completeness, we next analyze the other possible output states. If the initial field is weak and the beam splitter reflectivities are $<\hspace{-2.5pt}0.5$, the probability that any of the photodetectors detects more than one photon is typically $<\hspace{-4pt}10^{-2}$. This probability is not completely negligible but since it is still small and the occurrence of these processes can be detected, we can focus on the processes where only at most one photon is detected at a time. These single photon processes and the corresponding eight possible output states are described in Table \ref{tbl:states}. The output states can be experimentally identified by photon detection measurement outcomes. The first state is the successfully amplified field and the last row shows the probability that more than one photon is detected by some of the photodetectors. The fidelities in Table \ref{tbl:states} clearly show that states from 5 to 8 are exactly coherent. This can be understood by considering what happens if the first photon subtraction fails. In this case, the output from the first beam splitter can be shown to be $|t\alpha\rangle$, which is a perfectly coherent field. This further means that at beam splitters 2 and 3, only single-photon subtraction or no photon subtraction can take place. Both operations result in coherent fields, albeit with reduced amplitude. This is because the photon subtractions only decrease the amplitude and keep the state coherent since coherent states are eigenstates of the annihilation operator \cite{Hayrynen2009,Kim2008a}. The states 3 and 4 are not exactly coherent since, in these cases, the input field arriving to the second beam splitter from the QND device is a single-photon Fock state producing superposition states at the output. \begin{table} \caption{\label{tbl:states}Photon detection measurement outcomes, amplitude expectation values $|\langle\hat a\rangle|$, degradations of fidelities $1-F_\mathrm{eff}$, and probabilities $P$ for possible single-photon process output states of the amplification setup with $g_\mathrm{eff}=1.4$. The initial field is a coherent field with $|\alpha|=0.5$ and the reflectivity of the beam splitters is $r=0.4$. Successful amplification corresponds to the first state. 
} \centering \renewcommand{1.5}{1.5} \begin{tabular}{ccccccc} \hline\hline & \multicolumn{3}{c}{Measurements} & & & \\ \cline{2-4} State & QND & PD1 & PD2 & $|\langle\hat a\rangle|$ & $1-F_\mathrm{eff}$ & $P$\\ \hline 1 & 1 & 0 & 1 & 0.686 & $4.84\times 10^{-3}$ & $1.36\times 10^{-3}$\\ 2 & 1 & 0 & 0 & 0.720 & 0.362 & $5.58\times 10^{-3}$\\ 3 & 1 & 1 & 1 & 0.292 & $3.79\times 10^{-5}$ & $5.27\times 10^{-4}$\\ 4 & 1 & 1 & 0 & 0.310 & $1.60\times 10^{-5}$ & $2.88\times 10^{-2}$\\ 5 & 0 & 1 & 1 & 0.385 & 0 & $8.57\times 10^{-4}$\\ 6 & 0 & 1 & 0 & 0.385 & 0 & $3.03\times 10^{-2}$\\ 7 & 0 & 0 & 1 & 0.385 & 0 & $2.55\times 10^{-2}$\\ 8 & 0 & 0 & 0 & 0.385 & 0 & 0.903\\ other & & & & & & $3.84\times 10^{-3}$\\ \hline\hline \end{tabular} \end{table} The amplitude expectation values in Table \ref{tbl:states} showed that the amplitude of the coherent output states 5--8 $|\langle\hat a\rangle|=0.385$ is clearly smaller than the amplitude of the initial field $|\alpha|=0.5$. This follows from the relatively large reflectivity of $r=0.4$. For a smaller reflectivity of $r=0.1$, the amplitude expectation value for the exactly coherent output states is $|\langle\hat a\rangle|=0.493$, which is much closer to the initial field amplitude. Since this output is also the most probable output and, in the case of small reflectivities, it is a nearly unchanged coherent state one could also try to repeat the amplification process in order to increase the probability of successful amplification. However, experimental realization of the repeated amplification setup would be challenging. \section{Conclusions} In conclusion, we have studied noiseless amplification of coherent signals in a setup where all the energy added to the amplified signal originates from the fluctuations in the quantum field in a purely stochastic manner, i.e., the field is amplified even when no additional energy is added to the field from external sources in contrast to the previously reported noiseless amplifiers. We have shown that the probability of successful amplification can be maximized by finding optimal values for the beam splitter reflectivities depending on the desired effective gain. Our results show that the purely stochastic amplification scheme can amplify weak coherent fields with very good fidelities much like the conventional stochastic amplification setups relying on single-photon sources. Most importantly, however, all parts of our setup have been experimentally demonstrated so that the proposed amplification scheme is experimentally feasible and may allow experimentally demonstrating noiseless amplification that requires no external energy. \end{document}
\begin{document} \title{{\Large The rank of a CM elliptic curve and a recurrence formula} \begin{abstract} Let $p$ be a prime number and $E_{p}$ denote the elliptic curve $y^2=x^3+px$. It is known that for $p$ which is congruent to $1, 9$ modulo $16$, the rank of $E_{p}$ over $\mathbb Q$ is equal to $0, 2$. Under the condition that the Birch and Swinnerton-Dyer conjecture is true, we give a necessary and sufficient condition that the rank is $2$ in terms of the constant term of some polynomial that is defined by a recurrence formula. \end{abstract} \section{Introduction} Which prime number $p$ can be written as the sum of two cubes of rational numbers? This is one of the classical Diophantine problems and there are various works. ({\it cf}. \cite{DasguptaVoight}, \cite{Yin}) This problem is equivalent to the existence of non-torsion $\mathbb Q$-rational points of the curve $A_p: x^3+y^3=p$. The curve $A_p$ has the structure of an elliptic curve over $\mathbb Q$ with the point $\infty=[1: -1: 0]$. For an odd prime number $p$, we have $A_p(\mathbb Q)_{\rm tors}=\{\infty\}$. Therefore an odd prime number $p$ is written as the sum of cubes if and only if the rank of $A_p$ over $\mathbb Q$ is not $0$. \cite{Satge} shows the upper bound \begin{align} \rank A_p(\mathbb Q)\leq \begin{cases} 0 & (p\equiv 2, 5\mod 9),\\ 1 & (p\equiv 4, 7, 8\mod 9),\\ 2 & (p\equiv 1\mod 9). \end{cases} \end{align} Let $\varepsilon(A_p/\mathbb Q)$ be the sign of the functional equation for the Hasse-Weil $L$ function of $A_p$ over $\mathbb Q$. The parity conjecture for $3$-Selmer groups ({\it cf}. \cite{Nekovar}) leads to \begin{align} (-1)^{\corank {\rm Sel}_{3^{\infty}}(A_p/\mathbb Q)}=\varepsilon(A_p/\mathbb Q)=\begin{cases} +1 & (p\equiv 1, 2, 5\mod 9),\\ -1 & ({\rm otherwise}). \end{cases} \end{align} Thus for the case where $p\equiv 1\bmod 9$ (resp. $p\equiv 4, 7, 8\bmod 9$), the rank of $A_p$ is $0$ or $2$ (resp. $1$) if we assume the Tate-Shafarevich group is finite. The remaining problem is essentially whether the rank is $0$ or $2$ for the case where $p$ is congruent to $1$ modulo $9$. In the paper \cite{VillegasZagier}, Villegas and Zagier have given three necessary and sufficient conditions that the rank is equal to $2$ under the Birch and Swinnerton-Dyer(BSD) conjecture. One of the conditions is described in terms of a recurrence formula although they did not give the details of the proof. In this paper, we give a similar formula for the elliptic curve $E_p: y^2=x^3+px$. A $2$-descent (\cite{Silverman}) shows the upper bound \begin{align} \rank E_{p}(\mathbb Q)\leq \begin{cases} 0 & (p\equiv 7, 11\mod 16),\\ 1 & (p\equiv 3, 5, 13, 15\mod 16),\\ 2 & (p\equiv 1, 9\mod 16). \end{cases} \end{align} For the case where $p$ is congruent to $1, 9$ modulo $16$, the sign of functional equation of the Hasse-Weil $L$-function of $E_{p}$ over $\mathbb Q$ is $+1$. Similarly for the case $E_{p}$, we see that $\rank E_{p}(\mathbb Q)=0$ or $2$ if we assume the Tate-Shafarevich group is finite. Let $\Omega_{E}=\Gamma(1/4)^2/(2\varpi^{1/2})$ be the real period of $E_{1}$ and let $S_{p}$ be the constant satisfying \begin{align} L(E_{p}/\mathbb Q, 1)=\dfrac{\Omega_{E}S_{p}}{2p^{1/4}}.\label{eq:Sp} \end{align} Here $\varpi=3.1415\cdots$. The BSD conjecture predicts that the constant $S_{p}$ is equal to the order of the Tate-Shafarevich group if $\rank E_{p}(\mathbb Q)= 0$ and is $0$ otherwise. \begin{thm}\label{thm:mainthm1} Let $p$ be a prime number which is congruent to $1, 9$ modulo $16$. Suppose that $S_p\in \mathbb Z$. 
If the rank of $E_{p}$ over $\mathbb Q$ is equal to $2$, then $p$ divides $f_{3(p-1)/8}(0)$, where the polynomial $f_n(t)\in \mathbb{Z}[t]$ is defined by the recurrence formula \begin{align} f_{n+1}(t)=-12(t+1)(t+2)f_n'(t)+(4n+1)(2t+3)f_n(t)-2n(2n-1)(t^2+3t+3)f_{n-1}(t). \end{align} The initial conditions are $f_0(t)=1$ and $f_1(t)=2t+3$. Moreover, if we assume the BSD conjecture, then the converse is also true. \end{thm} We will show that $S_p$ is a rational integer for more general elliptic curves in a subsequent paper. We attempted to recover the proof of \cite[Theorem 3]{VillegasZagier}, which is restated below as Theorem \ref{thm:VZthm}. Although we could not reconstruct that proof, the attempt led us to Theorem \ref{thm:mainthm2} below. Our recurrence formula \eqref{eq:RFmainthm2} is simpler than \eqref{eq:VZformula}. In Table \ref{tab:VZformula} and Table \ref{tab:RFmainthm2}, we show the first several terms for the two recurrence formulas; the degrees and the numbers of terms of the polynomials produced by \eqref{eq:RFmainthm2} are smaller than those produced by \eqref{eq:VZformula}. It may be possible to simplify the recurrence formula \eqref{eq:RFmainthm2} further. The procedure used to obtain the recurrence formula \eqref{eq:RFmainthm2} is essentially the same as that of \cite{VillegasZagier}. \begin{thm}[{\cite[Theorem 3]{VillegasZagier}}]\label{thm:VZthm} Let $p$ be a prime number which is congruent to $1$ modulo $9$. If the rank of $A_p$ over $\mathbb Q$ is equal to $2$, then $p$ divides $a_{(p-1)/3}(0)$, where the polynomial $a_n(t)\in \mathbb{Z}[t]$ is defined by the recurrence formula \begin{align} a_{n+1}(t)=-(1-8t^3)a_n'(t)-(16n+3)t^2a_n(t)-4n(2n-1)ta_{n-1}(t).\label{eq:VZformula} \end{align} The initial conditions are $a_0(t)=1$ and $a_1(t)=-3t^2$. Moreover, if we assume the BSD conjecture, then the converse is also true. \end{thm} \begin{thm}\label{thm:mainthm2} Let $p$ be a prime number which is congruent to $1$ modulo $9$. If the rank of $A_p$ over $\mathbb Q$ is equal to $2$, then $p$ divides $x_{(p-1)/3}(0)$, where the polynomial $x_n(t)\in \mathbb{Z}[t]$ is defined by the recurrence formula \begin{align} x_{n+1}(t)=-2(1-8t^3)x_n'(t)-8nt^2x_n(t)-n(2n-1)tx_{n-1}(t).\label{eq:RFmainthm2} \end{align} The initial conditions are $x_0(t)=1$ and $x_1(t)=0$. Moreover, if we assume the BSD conjecture, then the converse is also true. \end{thm} We now discuss the proof of Theorem \ref{thm:mainthm1}. For the case where $p$ is congruent to $1, 9$ modulo $16$, we see that $\rank E_{p}(\mathbb Q)=2$ if and only if $L(E_{p}/\mathbb Q, 1)=0$ under the BSD conjecture. The calculation of $L(E_{p}/\mathbb Q, 1)$ reduces to that of $L(\psi^{2k-1}, k)$ for some Hecke character $\psi$ and some positive integer $k$. More precisely, by the theory of $p$-adic $L$-functions, there is a $\bmod~p$ congruence relation between the algebraic part of $L(E_{p}/\mathbb Q, 1)$ and that of $L(\psi^{2k-1}, k)$. Therefore, together with an estimate of $|L(E_{p}/\mathbb Q, 1)|$, it holds that $L(E_p/\mathbb Q, 1)=0$ if and only if $p$ divides the algebraic part of $L(\psi^{2k-1}, k)$. We write the algebraic part of $L(\psi^{2k-1}, k)$ in terms of a recurrence formula by using the method of \cite{Villegas}. In Section $2.1$, we show that the rank of $E_{p}$ is equal to $2$ if and only if $p$ divides the algebraic part of $L(\psi^{2k-1}, k)$. In Section $2.2$, we review some basic properties of the Maass-Shimura operator $\partial_k$. In Section $2.3$, we express the special value $L(\psi^{2k-1}, k)$ in terms of a special value of a $\partial_k$-derivative of a modular form.
In Section 3, we write the special value of $\partial_k$-derivative of the modular form as the constant term of some polynomial that is defined by a recurrence formula. \begin{table}[H]\label{tab:VZformula} \centering \begin{tabular}{|c||l|} \hline $n$ & $a_n(t)$\\ \hline \hline $0$ & $1$ \\ \hline $1$ & $-3t^2$ \\ \hline $2$ & $9t^4+2t$ \\ \hline $3$ & $-27t^6-18t^3-2$ \\ \hline $4$ & $81t^8+108t^5+36t^2$ \\ \hline $5$ & $-243t^{10}-540t^7-360t^4+152t$ \\ \hline $6$ & $729t^{12}+2430t^9+2700t^6-16440t^3-152$ \\ \hline $7$ & $-2187t^{14}+10206t^{11}-17010t^8+1311840t^5+24240t^2$ \\ \hline $8$ & $6561t^{16}+40824t^{13}+95256t^{10}-99234720t^7-2974800t^4+6848t$ \\ \hline $9$ & $-19683t^{18}-157464t^{15}-489888t^{12}+7449816240t^9+359465040t^6-578304t^3-6848$ \\ \hline \end{tabular} \caption{the recurrence formula for $a_n(t)$} \end{table} \begin{table}[H] \begin{tabular}{cc} \begin{minipage}[h]{7cm} \centering \begin{tabular}{|c||l|} \hline $n$ & $x_n(t)$\\ \hline \hline $0$ & $1$ \\ \hline $1$ & $0$ \\ \hline $2$ & $-t$ \\ \hline $3$ & $2$ \\ \hline $4$ & $-33t^2$ \\ \hline $5$ & $76t$ \\ \hline $6$ & $-339t^3$ \\ \hline $7$ & $4314t^2$ \\ \hline $8$ & $-72687t^4-3424t$ \\ \hline $9$ & $228168t^3+6848$\\ \hline \end{tabular} \caption{the recurrence formula for $x_n(t)$} \label{tab:RFmainthm2} \end{minipage} \begin{minipage}[h]{8cm} \centering \begin{tabular}{|c||l||c||l|} \hline $p$ & $p|f_{3(p-1)/8}(0)$ & $p$ & $p|f_{3(p-1)/8}(0)$ \\ \hline \hline $17$ & false & 257 & false \\ \hline $41$ & false & 281 & true \\ \hline $73$ & true & 313 & false \\ \hline $89$ & true & 337 & true \\ \hline $97$ & false & 353 & true \\ \hline $113$ & true & 401 & false \\ \hline $137$ & false & 409 & false \\ \hline $193$ & false & 433 & false \\ \hline $233$ & true & 449 & false \\ \hline $241$ & false & 457 & false \\ \hline \end{tabular} \caption{the constant term $f_{3(p-1)/8}(0)$} \end{minipage} \end{tabular} \end{table} \begin{table}[H] \label{table:RFmainthm1} \centering \begin{tabular}{|c||l|}\hline $n$ & $f_n(t)$\\ \hline \hline $0$ & $1$ \\ \hline $1$ & $2t+3$ \\ \hline $2$ & $-6t^2-18t-9$ \\ \hline $3$ & $12t^3+54t^2+108t+81$ \\ \hline $4$ & $60t^4+360t^3+1296t^2+2268t+1377$ \\ \hline $5$ & $-1512t^5-11340t^4-\dots-34992t^2-13122t+2187$ \\ \hline $6$ & $21816t^6+196344t^5+\dots+1027890t^2+433026t+80919$ \\ \hline $7$ & $-280368t^7-2943864t^6-\dots-46517490t^2-24074496t-5189751$ \\ \hline $8$ & $3319056t^8+39828672t^7+\dots+1016482608t^2+423420696t+82097793$ \\ \hline $9$ & $-32283360t^9-435825360t^8-\dots+2060573904t^2+4373050842t+1702205523$ \\ \hline \end{tabular} \caption{the recurrence formula for $f_n(t)$} \end{table} \section{The algebraic part of the special value $L(E_{p}/\mathbb Q, 1)$} \subsection{congruence of a special value of $L$-function} Here we show that there exists a mod $p$ congruence relation between the special value $L(E_p/\mathbb Q, 1)$ and some special value of a Hecke $L$-function associated to the elliptic curve $E_1: y^2=x^3+x$.\\ Suppose $p$ satisfies $p\equiv 1, 9\bmod 16$ and $p$ splits as $\mathfrak{p}\bar{\mathfrak{p}}$ in the integer ring of $K=\mathbb Q(i)$. If necessary by repalcing $\bar{\mathfrak{p}}$ by $\mathfrak{p}$, we may assume there is a generator $\pi=a+bi$ of $\mathfrak{p}$ satisfying \begin{align} a\equiv 1\mod 4, \ \ b\equiv -\skakko{\dfrac{p-1}{2}}!a\mod p. 
\end{align} We fix inclusions $i_\infty: \overline{\mathbb Q}\hookrightarrow \mathbb C, i_p: \overline{\mathbb Q}\hookrightarrow \mathbb C_p$ so that $i_p$ is compatible with $\mathfrak{p}$-adic topology. Let $\Omega_{E}=\Gamma(1/4)^2/(2\varpi^{1/2})$ be the real period of $E_{1}$ and let $S_{p}$ be the constant satisfying \begin{align} L(E_{p}/\mathbb Q, 1)=\dfrac{\Omega_{E}S_{p}}{2p^{1/4}}.\label{eq:Sp} \end{align} The BSD conjecture predicts that the constant $S_{p}$ is equal to the order of the Tate-Shafarevich group if $\rank E_{p}(\mathbb Q)= 0$ and is $0$ otherwise. The elliptic curve $E_{1}: y^2=x^3+x$ has complex multiplication by $\mathcal O_K$. Let $\psi$ be the Hecke character of $K$ associated to $E_{1}$ and let $\chi$ be the quartic character such that $L(E_p/\mathbb Q, s)=L(\psi\chi, s)$. These characters are explicitly given by \begin{align} &\psi(\mathfrak{a})=\skakko{\dfrac{-1}{\alpha}}_4\alpha=(-1)^{(a-1)/2}\alpha \ \ \ \text{if} \ (\mathfrak{a}, 4)=1,\\ &\chi(\mathfrak{a})=\overline{\skakko{\dfrac{\alpha}{p}}_4} \ \ \ \text{if} \ (\mathfrak{a}, p)=1, \end{align} where $\alpha=a+bi$ is the primary generator of $\mathfrak{a}$ and $(\cdot/\cdot)_4$ is the quartic residue character ({\it cf}. \cite[II, Exercice 2.34]{Silverman}). Let $k$ be a positive interger. We define the algebraic part of $L(\psi^{2k-1}, k)$ to be \begin{align} L_{E, k}=\dfrac{2^{k+1}3^{k-1}\varpi^{k-1}(k-1)!}{\Omega_{E}^{2k-1}}L(\psi^{2k-1}, k). \end{align} \begin{lem}\label{lem:CRT} Let $p$ be a prime number such that $p\equiv 1, 9\bmod 16$ and $k=(3p+1)/4$. For all non-zero integral ideals $\mathfrak{a}$ of $\mathcal O_K$ which is prime to $4p$, we have \begin{align} \chi(\mathfrak{a})\equiv \skakko{\dfrac{\alpha}{\overline{\alpha}}}^{k-1} \mod p, \end{align} where $\alpha$ is the primary generator of $\mathfrak{a}$. \end{lem} \begin{proof} Since $3(N(\pi)-1)=4(k-1)$, we have \begin{align} \alpha^{k-1}\equiv \skakko{\dfrac{\alpha^3}{\pi}}_4 \mod \pi, \ \alpha^{k-1}\equiv \skakko{\dfrac{\alpha^3}{\overline{\pi}}}_4 \mod \overline{\pi}. \end{align} We take $a\in \mathfrak{p}, b\in \bar{\mathfrak{p}}$ so that $a+b=1$. Then by the Chinese Remainder Theorem, we have \begin{align} &\alpha^{k-1}\equiv a\skakko{\dfrac{\alpha^3}{\overline{\pi}}}_4+b\skakko{\dfrac{\alpha^3}{\pi}}_4 \mod p\mathcal O_K,\label{eq:CRT1}\\ &\overline{\alpha}^{k-1} \equiv a\skakko{\dfrac{\overline{\alpha}^3}{\overline{\pi}}}_4+b\skakko{\dfrac{\overline{\alpha}^3}{\pi}}_4 \mod p\mathcal O_K\label{eq:CRT2}. \end{align} Since the equation \eqref{eq:CRT1} multiplied by $(\overline{\alpha}^3/\pi)_4$ equals to the equation \eqref{eq:CRT2} multiplied by $(\alpha^3/\pi)_4$, it holds that \begin{align} \skakko{\dfrac{\overline{\alpha}^3}{\pi}}_4\alpha^{k-1}\equiv \skakko{\dfrac{\alpha^3}{\pi}}_4\overline{\alpha}^{k-1} \mod p\mathcal O_K. \end{align} Therefore we obtain \begin{align} \dfrac{\alpha^{k-1}}{\overline{\alpha}^{k-1}}\equiv \skakko{\dfrac{\alpha^3}{\pi}}_4\skakko{\dfrac{\alpha^3}{\overline{\pi}}}_4=\skakko{\dfrac{\alpha}{p}}^3=\chi(\mathfrak{a}) \mod p\mathcal O_K. \end{align} \end{proof} \begin{prop}\label{prop:algebraicpart} We suppose that $S_{p}\in \mathbb Z$. Under the same assumptions as in Lemma \ref{lem:CRT}, the constant $S_{p}$ is equal to $0$ if and only if $p$ divides $L_{E, k}$. \end{prop} \begin{proof} The estimate (\cite[Proposition 2]{VillegasZagier}) of the real number $\absolute{L(E_{p}/\mathbb Q, 1)}$ yields $|S_{p}|<p$. 
Thus we only need to show that $S_{p}$ is congruent to $0$ modulo $p$ if and only if $L_{E, k}$ is congruent to $0$ modulo $p$. It is straight forward to check \begin{align} &L(\psi\chi, 1)=\left.\sum_{(\mathfrak{a}, 4p)=1}\chi(\mathfrak{a})\dfrac{1}{\overline{\psi}(\mathfrak{a})N\mathfrak{a}^s}\right|_{s=0},\label{eq:Specialvalue1}\\ &L(\psi^{2k-1}, k)=\left.\sum_{(\mathfrak{a}, 4)=1}\skakko{\dfrac{\alpha}{\overline{\alpha}}}^{k-1}\dfrac{1}{\overline{\psi}(\mathfrak{a})N\mathfrak{a}^s}\right|_{s=0}.\label{eq:Specialvalue2} \end{align} We set $\varepsilon_1(\mathfrak{a})=\chi(\mathfrak{a})\psi(\mathfrak{a}), \varepsilon_2(\mathfrak{a})=(\psi(\mathfrak{a})/\bar{\psi}(\mathfrak{a}))^{k-1}\psi(\mathfrak{a})$. Since the elliptic curve $E_{1}$ is ordinary at $p$, there exists a $p$-adic $L$-function interpolating special values \eqref{eq:Specialvalue1} and \eqref{eq:Specialvalue2}. We denote $L_\mathfrak{f}(\varepsilon, s)$ by the Hecke $L$-function associated to a Hecke character $\varepsilon$ without the Euler factor at the primes dividing $\mathfrak{f}$. Since the elliptic curve $E_p$ is defined over $\mathbb Q$, we have $L_{4p}(\varepsilon_1^{-1}, 0)=L(\psi\chi, 1)$ and $L_{4}(\varepsilon_2^{-1}, 0)=L(\psi^{2k-1}, k)$. Let $(\Omega, \Omega_p)$ be the pair of complex period and $p$-adic period as in \cite[p. 68, DEFINITION]{deShalit} and let $\mu$ be the $p$-adic measure on $\mathcal G={\rm Gal}(K(4p^{\infty})/K)$ related to the $p$-adic $L$-function of $E_1$. Then the following identities, both sides of which lie in $\bar{\mathbb Q}$, holds: \begin{align} &\dfrac{1}{\Omega_p}\int_{\mathcal G}\varepsilon_1(\sigma)d\mu(\sigma)=\dfrac{1}{\Omega}G(\varepsilon_1)L_{4p}(\varepsilon_1^{-1}, 0),\\ &\dfrac{1}{\Omega_p^{2k-1}}\int_{\mathcal G}\varepsilon_2(\sigma)d\mu(\sigma)=\dfrac{(k-1)!}{\Omega^{2k-1}}\varpi^{k-1}G(\varepsilon_2)\skakko{1-\dfrac{\varepsilon_2(\mathfrak{p})}{p}}^2L_{4}(\varepsilon_2^{-1}, 0), \end{align} where $G(\varepsilon)$ is a "Gauss sum" (see definition \cite[p. 80]{deShalit}). Lemma \ref{lem:CRT} shows \begin{align} \absolute{\int_{\mathcal G}\varepsilon_1(\sigma)d\mu(\sigma)-\int_{\mathcal G}\varepsilon_2(\sigma)d\mu(\sigma)}_{\pi}\leq \max_{(\mathfrak{a}, 4p)=1}\absolute{\varepsilon_1(\mathfrak{a})-\varepsilon_2(\mathfrak{a})}_{\pi}\leq \dfrac{1}{p}. \end{align} Therefore we obtain the congruence relation \begin{align} \dfrac{\Omega_p}{\Omega}G(\varepsilon_1)L_{4p}(\varepsilon_1^{-1}, 0)\equiv \dfrac{\Omega_p^{2k-1}(k-1)!}{\Omega^{2k-1}}\varpi^{k-1}L_{4}(\varepsilon_2^{-1}, 0) \mod p. \end{align} By \cite[p. 91, Lemma]{deShalit} and \cite[p. 8, (14)]{Loxton}, $G(\varepsilon_1)^2$ is equal to $\sqrt{p}\bar{\pi}$ up to units in $\mathcal O_K^\times$ and $G(\varepsilon_2)$ is equal to $1$. Moreover, (\cite[p, 9-10]{deShalit}) shows $\Omega_p^{p-1}\equiv \bar{\pi}^{-1}\bmod p$. Hence it follows that \begin{align} \bar{\pi}S_{p}\equiv u2^{4k-5}3^{3k-3}L_{E, k}\mod p\label{eq:equivalencerelation} \end{align} for some $u\in \mathcal O_K^\times$. The assertion follows from this. \end{proof} \begin{rem} It is known that $(\frac{p-1}{2})!^2\equiv -1\bmod p$ and \cite[Corollary 6.6]{Lemmermeyer} shows \begin{align} \binom{\frac{p-1}{2}}{\frac{p-1}{4}}\equiv \pi+\bar{\pi} \mod p. \end{align} Thus \eqref{eq:equivalencerelation} can be rewritten as \begin{align} S_p\equiv \pm \skakko{\dfrac{p-1}{4}}!^22^{4k-5}3^{3k-3}L_{E, k} \mod p. \end{align} The proof of Proposition \ref{prop:algebraicpart} essentially shows Villegas' and Zagier's congruence relation \cite[p. 
7]{VillegasZagier} \begin{align} S_{A, p}\equiv (-3)^{(p-10)/3}\skakko{\dfrac{p-1}{3}}!^2L_{A, k} \mod p, \end{align} where $S_{A, p}$ is the algebraic part of the special value $L(A_p/\mathbb Q, 1)$. The algebraic number $L_{A, k}$ is explained in detail below. \end{rem} By Proposition \ref{prop:algebraicpart}, we only need to calculate the algebraic part $L_{-4, k}$. (Actually, $L_{-4, k}$ is a square of a rational integer. We calculate the square root of it. ) Let $\psi'$ be the Hecke character of $\mathbb Q(\sqrt{-3})$ associated to $A_{1}: x^3+y^3=1$. We define the algebraic part of $L(\psi'^{2k-1}, k)$ to be \begin{align} L_{A, k}=3\nu \left(\dfrac{2\varpi}{2\sqrt{3}\Omega_{A}^2} \right)^{k-1}\dfrac{(k-1)!}{\Omega_{A}}L(\psi'^{2k-1}, k), \end{align} where $\Omega_{A}=\Gamma(1/3)^3/(2\varpi\sqrt{3})$ is the real period of $E_{1}$ and $\nu=2$ if $k\equiv 2\bmod 6$, $\nu=1$ otherwise. For the case where $p$ is congruent to $1$ modulo $9$, we see that the rank of $A_{p}$ is equal to $0$ if and only if $p$ divides $L_{A, k}$ in the same way for $E_{p}$. \subsection{Maass-Shimura operator} Unless otherwise stated, we denote by $\Gamma\subset SL_2(\mathbb R)$ a congruence subgroup. Let $M_k(\Gamma)$ be the space of holomorphic modular forms of weight $k$ for $\Gamma$. In general, $M_k^\ast(\Gamma)$ denotes the space of differentiable modular form, possibly with some character or multiplier system. Let $D$ be the differential operator \begin{align} D=\dfrac{1}{2\varpi i}\dfrac{d}{dz}=q\dfrac{d}{dq} \quad(q=e^{2\varpi iz}). \end{align} The Maass-Shimura operator \begin{align} \partial_k=D-\dfrac{k}{4\varpi y} \quad(z=x+iy), \end{align} preserves a modular relation. This is because the Maass-Shimura operator is compatible with the slash operator, that is, the following holds: \begin{align} {}^\forall \gamma\in SL_2(\mathbb R),\, \partial_k(f|[\gamma]_k)=(\partial_kf)|[\gamma]_{k+2}.\label{eq:compatiblity} \end{align} Moreover if $f\in M_k^\ast(\Gamma)$, then $\partial_k^{(h)}f\in M_{k+2h}^\ast(\Gamma)$, where \begin{align} \partial_k^{(h)}=\partial_{k+2h-2}\circ \partial_{k+2h-4}\circ \dots \circ \partial_{k+2}\circ \partial_{k}. \end{align} \begin{prop}[{\cite[p.4, (16)]{Villegas}}]\label{prop:MSEisenstein} \begin{align} \partial_k^{(h)}\left(\dfrac{1}{(mz+n)^k} \right)=\dfrac{(h+k-1)!}{(k-1)!}\left(\dfrac{-1}{4\varpi y}\dfrac{m\bar{z}+n}{mz+n} \right)^h\dfrac{1}{(mz+n)^k}. \end{align} \end{prop} We define the $h$-th generalized Laguerre polynomial to be \begin{align} L_h^\alpha(z)=\sum_{j=0}^\infty \binom{h+\alpha}{h-j}\dfrac{(-z)^j}{j!} \quad (h\in \mathbb Z_{\geq 0},\,\alpha\in \mathbb C). \end{align} In the special case $\alpha=1/2,\,-1/2$, we see that \begin{align} H_{2n}(z)=(-4)^nn!L_n^{-1/2}(z^2),\quad H_{2n+1}(z)=2(-4)^nn!zL_n^{1/2}(z^2),\label{eq:LaguerreHermite} \end{align} where \begin{align} H_n(z)=\sum_{0\leq j \leq n/2}\dfrac{n!}{j!(n-2j)!}(-1)^j(2z)^{n-2j} \end{align} is the $n$-th Hermite polynomial. \begin{prop}[{\cite[p.3, (9)]{Villegas}}]\label{prop:MSq} The following holds. \begin{align} \partial_k^{(h)}\left(\sum_{n=0}^\infty a(n)e^{2\varpi i nz} \right)=\dfrac{(-1)^hh!}{(4\varpi y)^h}\sum_{n=0}^\infty a(n)L_h^{k-1}(4\varpi ny)e^{2\pi inz}. 
\end{align} In particular for $k=1/2, 3/2$, we have \begin{align} &\partial_{1/2}^{(h)}\left(\sum_{n=0}^\infty a(n)e^{\varpi in^2z} \right)=\dfrac{(-1)^hh!}{(4\varpi y)^h}\sum_{n=0}^\infty a(n)L_h^{-1/2}(2n^2\varpi y)e^{\varpi in^2z},\\ &\partial_{3/2}^{(h)}\left(\sum_{n=0}^\infty a(n)e^{\varpi in^2z} \right)=\dfrac{(-1)^hh!}{(4\varpi y)^h}\sum_{n=0}^\infty a(n)L_h^{1/2}(2n^2\varpi y)e^{\varpi in^2z}. \end{align} \end{prop} We introduce the following theta series, whose notation is based on \cite{FarkasKra}. \begin{align} &\theta\character{\epsilon}{\epsilon'}(z,\tau):=\sum_{n\in \mathbb Z}\exp 2\varpi i\mkakko{\dfrac{1}{2}\left(n+\dfrac{\epsilon}{2} \right)^2\tau+\left(n+\dfrac{\epsilon}{2} \right)\left(z+\dfrac{\epsilon'}{2} \right)}\quad (\epsilon, \epsilon'\in \mathbb Q),\label{eq:FKtheta1}\\ &\theta'\character{\epsilon}{\epsilon'}(0, \tau):=\dfrac{\partial}{\partial z}\left.\theta\character{\epsilon}{\epsilon'}(z, \tau)\right|_{z=0}=2\varpi i\sum_{n\in \mathbb Z}\skakko{n+\dfrac{\epsilon}{2}} \exp 2\varpi i\mkakko{\dfrac{1}{2}\skakko{n+\dfrac{\epsilon}{2}}^2\tau+\dfrac{\epsilon'}{2}\skakko{n+\dfrac{\epsilon}{2}}}\label{eq:FKtheta2}. \end{align} The action of the Maass-Shimura operator on \eqref{eq:FKtheta1} and \eqref{eq:FKtheta2} is described by \begin{align} &\theta_{(p)}\character{\mu}{\nu}(z):=i^{-p}(2\varpi y)^{-p/2}\sum_{n\in \mathbb Z+\mu}H_p(n\sqrt{2\varpi y})\exp(\varpi in^2z+2\varpi i\nu n)\quad (\mu, \nu\in \mathbb Q, p\in \mathbb Z_{\geq 0}). \end{align} \begin{prop}\label{prop:partialtheta} For $h\in \mathbb Z_{\geq 0}$, it holds that \begin{align} &\theta_{(2h)}\character{\mu}{\nu}(z)=(-1)^h2^{3h}\partial_{1/2}^{(h)}\skakko{\theta\character{2\mu}{2\nu}(0, z)},\\ &\theta_{(2h+1)}\character{\mu}{\nu}(z)=-i(-1)^{h}2^{3h+1}\partial_{3/2}^{(h)}\skakko{\dfrac{1}{2\varpi i}\theta'\character{2\mu}{2\nu}(0, z)}. \end{align} \end{prop} \begin{proof} It follows by Proposition \ref{prop:MSq} and the identities \eqref{eq:LaguerreHermite}. \end{proof} \subsection{The special value of $L$-function with the Maass-Shimura operator} Let $\psi$ be the Hecke character of $K=\mathbb Q(i)$ associated to $E_{1}: y^2=x^3+x$, where $i=\sqrt{-1}$. For an integral ideal $\mathfrak{a}$ of $\mathcal O_K$ which is prime to $4$, we have \begin{align} \psi(\mathfrak{a})=(-1)^{(a-1)/2}(a+bi), \end{align} where $a+bi$ is the primary generator, that is, $a+bi$ satisfies $(a, b)\equiv (1, 0), (3, 2)\bmod 4$. We set $\varepsilon(a+bi)=(-1)^{(a-1)/2}$. \begin{lem}\label{lem:idealform} An integral ideal $\mathfrak{a}$ of $\mathcal O_K$ which is prime to $4$ is written in the form \begin{align} \mathfrak{a}=(r+4N-2mi) \quad (r\in \{1, 3\}, N, m\in \mathbb Z). \end{align} \end{lem} \begin{proof} An ideal $(a+bi)$ is prime to $4$ if and only if the norm $a^2+b^2$ is prime to $4$. Therefore such an ideal $(a+bi)$ must satisfy $(a, b)\equiv (1, 0), (0, 1)\bmod 2$. There is nothing to prove the former case. For the latter case, it follows from $(a+bi)=(b-ai)$. \end{proof} Let $\Theta(z)$ be the theta series \begin{align} \Theta(z)=\sum_{\lambda\in \mathcal O_K}q^{N_{K/\mathbb Q}\lambda}=\sum_{n, m\in \mathbb Z}q^{n^2+m^2}\in M_1(\Gamma_1(4)). \end{align} \begin{prop} We have \begin{align} L(\psi^{2k-1}, k)=\dfrac{(-1)^{k-1}2^{-3}\varpi^k}{(k-1)!}\skakko{\partial_1^{(k-1)}\Theta(z)|_{z=i/4}+\partial_1^{(k-1)}\Theta(z)|_{z=i/4+1/2}}. 
\end{align} \end{prop} \begin{proof} We consider the Eisenstein series of weight $1$ for $\Gamma_1(4)$ \begin{align} G_{1, \varepsilon}(z)=\lim_{s\to 0}\dfrac{1}{2}\sum_{n, m}\dfrac{\varepsilon(n)}{(4mz+n)\absolute{4mz+n}^{2s}} \quad(z\in \mathbb H), \end{align} where $\sum'$ implies that $(n, m)=(0, 0)$ is excluded. By using Proposition \ref{prop:MSEisenstein}, we have \begin{align} \partial_1^{(k-1)}G_{1, \varepsilon}(z)=(k-1)!\skakko{\dfrac{-1}{4\varpi y}}^{k-1}\dfrac{1}{2}\sum_{n, m}\dfrac{\varepsilon(n)(n+4m\bar{z})^{2k-1}}{|n+4mz|^{2k}}. \end{align} Since $G_{1, \varepsilon}(z)=\varpi/4 \cdot \Theta(z)$ (Note that $\dim M_1(\Gamma_1(4))=1$.), it holds that \begin{align} L(\psi^{2k-1}, k)&=\sum_{r, N, m}\dfrac{\psi((r+4N-2mi))^{2k-1}}{|r+4N-2mi|^{2k}}\nonumber \\ &=\dfrac{1}{2}\sum_{r, N, m}\dfrac{\varepsilon(r+4N)(r+4N-2mi)^{2k-1}}{|r+4N+2mi|^{2k}}\nonumber \\ &=\dfrac{1}{2}\sideset{}{^{'}}\sum_{n, m}\dfrac{\varepsilon(n)(n-2mi)^{2k-1}}{|n+2mi|^{2k}}\nonumber \\ &=\dfrac{(-1)^{k-1}2^{k-3}\varpi^k}{(k-1)!}\partial_1^{(k-1)}\Theta(z)|_{z=i/2},\label{eq:forevenkcorollary} \end{align} Finally the identity (\cite[p.192]{Kohler}) \begin{align} 2\Theta(z)=\Theta\skakko{\dfrac{z}{2}}+\Theta\skakko{\dfrac{z+1}{2}} \end{align} yields the claim. \end{proof} \begin{cor} If $k$ is an even integer, then $L(\psi^{2k-1}, k)=0$. \end{cor} \begin{proof} For $\gamma=\skakko{\begin{array}{cc} 0 & -1\\ 4 & 0\end{array}}\in GL_2^+(\mathbb Q)$, we have $\Theta(z)|[\gamma]_1=-i\Theta(z)$ (\cite[p. 124]{Koblitz}). By \eqref{eq:compatiblity}, we have \begin{align} \partial_1^{(k-1)}\Theta(z)=i(2z)^{-2k+1}\partial_1^{(k-1)}\Theta(z)|_{z=-1/4z}. \end{align} Thus we obtain $\partial_1^{(k-1)}\Theta(z)|_{z=i/2}=0$ and the colollary follows by \eqref{eq:forevenkcorollary}. \end{proof} Next, we write the special value $L(\psi^{2k-1}, k)$ as a square of the $\partial_k$-derivative of some modular form. The key is Theorem \ref{thm:FF} below. Note that by Proposition \ref{prop:MSq}, it holds that \begin{align} \partial_{1}^{(k-1)}\Theta(z)&|_{z=i/4}+\partial_1^{(k-1)}\Theta(z)|_{z=i/4+1/2}\nonumber \\ =&2\dfrac{(-1)^{k-1}(k-1)!}{\varpi^{k-1}}\sum_{(0, 0), (1, 1)}L_{k-1}^0(2\varpi Q_i(n, m))e^{-\varpi(n^2+m^2)/2}, \end{align} where $\sum_{(a, b)}$ implies that $(n, m)$ runs over all pairs of integer which satisfy $(n, m)\equiv (a, b)\bmod 2$. We set \begin{align} a_{n, m}:=L_{k-1}^0(2\pi Q_i(n, m))e^{-\varpi(n^2+m^2)/2}. \end{align} \begin{thm}[{\cite[p.7]{Villegas}}]\label{thm:FF} For $a\in \mathbb Z_{>0},\, z\in \mathbb H,\, \mu, \nu\in \mathbb Q,\, p, \alpha\in \mathbb Z_{\geq 0}$, the following identity holds. \begin{multline*} \dfrac{(-1)^pp!}{(\varpi y)^p}\sum_{n, m\in \mathbb Z}e^{2\varpi i(n\mu+m\nu)}\left(\dfrac{mz-n}{ay} \right)^\alpha L_p^\alpha\left(\dfrac{2\varpi}{a}Q_z(n, m) \right)e^{\varpi (inm-Q_z(n, m))/a}\\ =\sqrt{2ay}(ay)^\alpha \theta_{(p)}\character{a\mu}{\nu}(a^{-1}z)\theta_{(p+\alpha)}\character{\mu}{-a\nu}(-a\bar{z}). \end{multline*} In particular for the case $a=1,\,\alpha=0$, the right hand side is \begin{align} (-1)^p\sqrt{2y}\left|\theta_{(p)}\character{\mu}{\nu}(z) \right|^2. \end{align} \end{thm} We define $\theta_2, \theta_4$ to be \begin{align*} \theta_2(z):=\theta\character{1}{0}(0, z)=\sum_{n\in \mathbb Z+1/2}e^{\varpi in^2z},\quad \theta_4(z):=\theta\character{0}{1}(0, z)=\sum_{n\in \mathbb Z}(-1)^ne^{\varpi in^2z}. \end{align*} \begin{thm}\label{thm:mainthm3} Let $\psi$ be the Hecke character of $K=\mathbb Q(i)$ associated to $E_{1}: y^2=x^3+x$. 
Then for $L(\psi^{2k-1}, s)$, we have \begin{align} L(\psi^{2k-1}, k)=\begin{cases} \dfrac{2^{3k-9/2}\varpi^k}{(k-1)!}\absolute{\partial_{1/2}^{(N)}\theta_2(z)|_{z=i}}^2 & (k=2N+1),\\ 0 & (k=2N). \end{cases} \end{align} \end{thm} \begin{proof} We apply for $p=k-1, a=1,\,\alpha=0, z=i$ in Theorem \ref{thm:FF}. By substituting $(\mu,\,\nu)=(1/2,\,0),\,(0,\,1/2)$, we see that \begin{align} &\dfrac{(k-1)!}{\varpi^{k-1}}\skakko{\sum_{(0, 0), (0, 1), (1, 1)}a_{n, m}-\sum_{(1, 0)}a_{n, m}}=\sqrt{2}\absolute{\theta_{(k-1)}\character{1/2}{0}(i)}^2,\label{eq:FF1}\\ &\dfrac{(k-1)!}{\varpi^{k-1}}\skakko{\sum_{(0, 0), (1, 0), (1, 1)}a_{n, m}-\sum_{(0, 1)}a_{n, m}}=\sqrt{2}\absolute{\theta_{(k-1)}\character{0}{1/2}(i)}^2.\label{eq:FF2} \end{align} Note that \begin{align} \absolute{\theta_{(k-1)}\character{1/2}{0}(z)}^2= \absolute{\theta_{(k-1)}\character{0}{1/2}(z)}^2. \end{align} By adding \eqref{eq:FF1} and \eqref{eq:FF2}, we obtain \begin{align} \partial_1^{(k-1)}\Theta(z)|_{z=i/4}+\partial_1^{(k-1)}\Theta(z)|_{i/4+1/2}=(-1)^{k-1}2^{3/2}\absolute{\theta_{(k-1)}\character{1/2}{0}(i)}^2. \end{align} Therefore the theorem follows by Proposition \ref{prop:partialtheta}. \end{proof} \begin{cor} Under the same condition as Theorem \ref{thm:mainthm3}, we have \begin{align} L(\psi^{2k-1}, k)\geq 0. \end{align} \end{cor} \subsubsection{The case for $A_p$} Let $\psi'$ be the Hecke character of $K=\mathbb Q(\omega)$ associated to $A_{1}: x^3+y^3=1$, where $\omega=(-1+\sqrt{-3})/2$. For an integral ideal $\mathfrak{a}$ of $\mathcal O_K$ which is prime to $3$, we have \begin{align} \psi'(\mathfrak{a})=\psi'((a+bi))=\varepsilon'(a+bi)(a+bi), \end{align} where $\varepsilon': \skakko{\mathcal O_K/3\mathcal O_K}^\times \to \mathbb C^\times$ is some sextic character. \begin{lem} An integral ideal $\mathfrak{a}$ of $\mathcal O_K$ which is prime to $3$ is written in the form \begin{align} \mathfrak{a}=(r+3(N+m\omega^2)) \quad(r\in \{1, 2 \}, N, m\in \mathbb Z), \end{align} \end{lem} \begin{proof} A proof is the same as Lemma \ref{lem:idealform}. \end{proof} Let $\Theta'(z)$ be the theta series \begin{align*} \Theta'(z)=\sum_{\lambda\in \mathcal O_K}q^{N\lambda}=\sum_{n, m}q^{n^2+nm+m^2}\in M_1(\Gamma_1(3)). \end{align*} \begin{prop} We have \begin{align} L(\psi'^{2k-1}, k)=\dfrac{(-1)^{k-1}2^{k-1}3^{-k/2-2}\varpi^k}{(k-1)!}\omega^{k-1}(1-\omega)\partial_1^{(k-1)}\Theta'(z)|_{z=(\omega-2)/3}. \end{align} \end{prop} \begin{proof} Similarly for the case $E_{p}$, we obtain \begin{align} L(\psi'^{2k-1}, k)&=\dfrac{1}{2}\sideset{}{^{'}}\sum_{n, m}\dfrac{\varepsilon'(n)(n+3m\omega^2)^{2k-1}}{|n+3m\omega|^{2k}}\\ &=\dfrac{(-1)^{k-1}2^{k-1}3^{k/2-2}\varpi^k}{(k-1)!}\partial_1^{(k-1)}\Theta'(z)|_{z=\omega}. \end{align} For the Atkin-Lehner involution $W_3=\begin{pmatrix} 0 & -1/\sqrt{3} \\ \sqrt{3} & 0 \end{pmatrix}$, we have $\Theta'(z)|[W_3]_1=-i\Theta'(z)$ (\cite[p.155]{Kohler}). By \eqref{eq:compatiblity}, we have \begin{align} \partial_1^{(k-1)}\Theta'(z)=i(\sqrt{3}z)^{-2k+1}\partial_1^{(k-1)}\Theta'(z)|_{z=-1/3z}. \end{align} The proposition follows by substituting $z=\omega$. \end{proof} By Proposition \ref{prop:MSq}, it holds that \begin{align} \partial_1^{(k-1)}&\Theta(z)|_{z=(\omega-2)/3}\nonumber\\ =&\dfrac{(-1)^{k-1}\sqrt{3}^{k-1}(k-1)!}{2^{k-1}\varpi^{k-1}}\sum_{n, m\in \mathbb Z}L_{k-1}^0(2\varpi Q_\omega(n, m))e^{2\varpi i(n^2+nm+m^2)(\omega-2)/3}. \end{align} We set \begin{align} a_{n, m}:=L_{k-1}^0(2\varpi Q_\omega(n, m))e^{2\varpi i(n^2+nm+m^2)(\omega-2)/3}. 
\end{align} \begin{lem}\label{lem:mainlem} For $h, N\in \mathbb Z_{\geq 0}$, the following holds. \begin{itemize} \item[$(1)$] $\partial_{1/2}^{(h)}\left.\theta \character{1/3}{-1/3}(0, z)\right|_{z=\omega}~=e^{h\varpi i/3-\varpi i/4}3^{1/4}\left.\partial_{1/2}^{(h)}\eta(z)\right|_{z=\omega}$,\\ \item[$(2)$] $\partial_{3/2}^{(h)}\left.\dfrac{1}{2\varpi i}\theta'\character{1}{1}(z)\right|_{z=\omega}=e^{\varpi i/2}\partial_{3/2}^{(h)}\eta(z)^3|_{z=\omega}$,\\ \item[$(3)$] $\partial_{3/2}^{(3N+1)}\left.\dfrac{1}{2\varpi i}\theta'\character{1/3}{-1/3}(z)\right|_{z=\omega}=e^{N\varpi i-13\varpi i/36}2^{-1}3^{5/4}\partial_{3/2}^{(3N+1)}\eta(3z)^3|_{z=\omega}$. \end{itemize} \end{lem} \begin{proof} (1)~By using identity (\cite[p. 241]{FarkasKra}) \begin{align} \theta\character{1/3}{1}(0, z)=e^{\varpi i/6}\eta(z), \end{align} we have \begin{align} \theta\character{1/3}{-1/3}(0, z)=e^{-7\varpi i/36}\theta\character{1/3}{1}(0, z)=e^{-\varpi i/36}\eta\skakko{\dfrac{z-1}{3}} \end{align} It follows from this and \eqref{eq:compatiblity}. (2) It follows from the identity \cite[p.289, (4.14)]{FarkasKra} \begin{align} \theta'\character{1}{1}(0, \tau)=-2\varpi \eta(\tau)^3.\label{eq:etaidentity} \end{align} (3)~By \eqref{eq:compatiblity}, we have \begin{align} \partial_{3/2}^{(3N+1)}\eta(z)^3|_{z=\omega}=0.\label{eq:partialetazero} \end{align} It follows from \eqref{eq:etaidentity}, \eqref{eq:partialetazero} and the identity \cite[p.240, (3.40)]{FarkasKra} \begin{align} 6e^{\varpi i/3}\theta'\character{1/3}{1}(0, 3z)&=\theta'\character{1}{1}(0, z/3)+3\theta'\character{1}{1}(0, 3z). \end{align} \end{proof} \begin{thm}\label{thm:mainthm4} Let $\psi'$ be the Hecke character of $K=\mathbb Q(\omega)$ associated to $A_1: x^3+y^3=1$. Then for $L(\psi'^{2k-1}, s)$, we have \begin{align} L(\psi'^{2k-1}, k)=\begin{cases} \dfrac{\varpi^k}{(k-1)!}2^{2k-1}3^{k/2-9/4}\absolute{\partial_{1/2}^{(3N)}\eta(z)|_{z=\omega}}^2 & (k=6N+1),\\ \dfrac{\varpi^k}{(k-1)!}2^{2k-3}3^{k/2-11/4}\absolute{\partial_{3/2}^{(3N+1)}\eta(z)^3|_{z=\omega}}^2 & (k=6N+2),\\ \dfrac{\varpi^k}{(k-1)!}2^{2k-4}3^{k/2-1/4}\absolute{\partial_{3/2}^{(3N+1)}\eta(3z)^3|_{z=\omega}}^2 & (k=6N+4),\\ 0 & ({\rm otherwise}). \end{cases} \end{align} \end{thm} \begin{proof} We apply for $p=k-1,\, a=1,\,\alpha=0,\, z=\omega$ in Theorem \ref{thm:FF}. By substituting $(\mu, \nu)=(1/2, 1/2)$ with multiplication $\omega^2$, $(\mu, \nu)=(1/6, -1/6)$ and $(-1/6, 1/6)$, we see that \begin{align} &\dfrac{2^{k-1}(k-1)!}{\sqrt{3}^{k-1}\varpi^{k-1}}\left(\sum_{n-m\equiv 1, 2, 4, 5}a_{n, m}+\sum_{n-m\equiv 0, 3}a_{n, m} \right)=\omega^2\sqrt[4]{3}\left|\theta_{(k-1)}\left[\begin{array}{c}1/2 \\ 1/2 \end{array}\right](\omega) \right|^2,\label{eq:FF3}\\ &\dfrac{2^{k-1}(k-1)!}{\sqrt{3}^{k-1}\varpi^{k-1}}\left(\sum_{n-m\equiv 0, 1, 3, 4}a_{n, m}+\sum_{n-m\equiv 2, 5}a_{n, m} \right)=\sqrt[4]{3}\left|\theta_{(k-1)}\left[\begin{array}{c}1/6 \\ -1/6 \end{array}\right](\omega) \right|^2,\label{eq:FF4}\\ &\dfrac{2^{k-1}(k-1)!}{\sqrt{3}^{k-1}\varpi^{k-1}}\left(\sum_{n-m\equiv 0, 2, 3, 5}a_{n, m}+\sum_{n-m\equiv 1, 4}a_{n, m} \right)=\sqrt[4]{3}\left|\theta_{(k-1)}\left[\begin{array}{c}-1/6 \\ 1/6 \end{array}\right](\omega) \right|^2,\label{eq:FF5} \end{align} where $\sum_{n-m\equiv a}$ implies that $(n, m)$ runs over all pairs of integer which satisfy $n-m\equiv a\bmod 6$. Note that \begin{align} \left|\theta_{(p)}\left[\begin{array}{c}\mu \\ -\nu \end{array}\right](z) \right|^2=\left|\theta_{(p)}\left[\begin{array}{c}-\nu \\ \mu \end{array}\right](z) \right|^2. 
\end{align} By adding \eqref{eq:FF3}, \eqref{eq:FF4} and \eqref{eq:FF5}, we obtain \begin{align} L(\psi'^{2k-1}, k)=\dfrac{2^{-k+1}3^{k/2-11/4}\varpi^k}{(k-1)!}\mkakko{\omega^{k+1}\absolute{\theta_{(k-1)}\character{1/2}{1/2}(\omega)}^2+2\omega^{k-1}\absolute{\theta_{(k-1)}\character{1/6}{-1/6}(\omega)}^2}. \end{align} Since $L(\psi'^{2k-1}, k )$ takes a real number, it holds that \begin{align} L(\psi'^{2k-1}, k)=\begin{cases} 0 & (k\equiv 0, 3\mod 6),\\ \dfrac{2^{-k+2}3^{k/2-11/4}\varpi^k}{(k-1)!}\absolute{\theta_{(k-1)}\character{1/6}{1/6}(\omega)}^2 & (k\equiv 1, 4\mod 6),\\ \dfrac{2^{-k+1}3^{k/2-11/4}\varpi^k}{(k-1)!}\absolute{\theta_{(k-1)}\character{1/2}{1/2}(\omega)}^2 & (k\equiv 2, 5\mod 6). \end{cases} \end{align} Since \begin{align} \theta\character{1}{1}(0,z)=0, \end{align} the theorem follows by Proposition \ref{prop:partialtheta} and Lemma \ref{lem:mainlem}. \end{proof} \begin{cor} Under the same condition as Theorem \ref{thm:mainthm4}, we have \begin{align} L(\psi'^{2k-1}, k)\geq 0. \end{align} \end{cor} \section{Recurrence formula} In this section, we also denote $\varpi=3.1415\cdots$ by the real number. The Maass-Shimura operator does not map a modular form to a modular form in general, but the Ramanujan-Serre operator \begin{align} \vartheta_k=D-\dfrac{k}{12}E_2 \end{align} does, where $E_2(z)=1-24\sum_{n=1}^\infty \sigma_1(n)q^n$ is the Eisenstein series of weight $2$. This Eisenstein series is not a modular form, but the function $E_2^\ast(z)=E_2(z)-3/\varpi y$ is non-holomorphic modular form. Since the Ramanujan-Serre operator is also expressed as $\vartheta_k=\partial_k-kE_2^\ast/12$, we see that $\vartheta_k$ maps a modular form of weight $k$ to a modular form of weight $k+2$. To express the difference between $\partial_k$ and $\vartheta_k$, Villegas and Zagier have introduced the following series. \begin{align} &f_\partial(z, X):=\sum_{n=0}^\infty \dfrac{\partial_k^{(n)}f(z)}{k(k+1)\dots (k+n-1)}\dfrac{X^n}{n!} \quad(z\in \mathbb H,\,X\in\mathbb C,\,f\in M_k(\Gamma))\nonumber\\ &f_\vartheta(z, X):=e^{-E_2^\ast(z)X/12}f_\partial(z, X)\label{eq:recurrenceseries} \end{align} \begin{prop}[{\cite[p. 12]{VillegasZagier}}]\label{prop:recurrence} Let $f\in M_k(\Gamma)$. Then the series $f_\vartheta(z, X)$ has the expansion \begin{align} f_\vartheta(z, X)=\sum_{n=0}^\infty \dfrac{F_n(z)}{k(k+1)\dots (k+n-1)}\dfrac{X^n}{n!}, \end{align} where $F_n\in M_{k+2n}(\Gamma)$ is the modular form that is defined by the following recurrence formula. \begin{align} F_{n+1}=\vartheta_{k+2n} F_n-\dfrac{n(n+k-1)}{144}E_4F_{n-1}\label{eq:recurrence} \end{align} The initial condition is $F_0=f,\,F_1=\vartheta_k f$. \end{prop} If a CM point $z_0$ satisfy $E_2^\ast(z_0)=0$, then $f_\partial(z_0, X)=f_\vartheta(z_0, X)$ by \eqref{eq:recurrenceseries}. Therefore by Proposition \ref{prop:recurrence}, we see that \begin{align} \partial_k^{(n)}f(z)|_{z=z_0}=F_n(z_0), \end{align} where $F_n$ is the modular form that is defined by the recurrence formula \eqref{eq:recurrence}. We apply Proposition \ref{prop:recurrence} for $f=\theta_2, \Gamma=\Gamma(2)$. The graded ring $\oplus_{k\in \frac{1}{2}\mathbb Z}M_k(\Gamma(2))$ is isomorphic to $\mathbb C[\theta_2, \theta_4]$ as $\mathbb C$-algebra ({\it cf}. \cite[p.28-29]{Zagier123}). Since $\theta_2$ and $\theta_4$ is algebraically independent over $\mathbb C$, we sometimes regard $\theta_2$ and $\theta_4$ as indeterminates and $\mathbb C[\theta_2, \theta_4]$ as the polynomial ring in two variables over $\mathbb C$. 
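Before carrying out the computation, we note that the recurrences stated in Theorems \ref{thm:mainthm1} and \ref{thm:mainthm2} are easy to experiment with symbolically. The following Python/SymPy sketch is an added illustration (not part of the argument): it regenerates the first polynomials listed in the tables of the introduction and evaluates the constant term appearing in the divisibility criterion for one sample prime.
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')

def poly_sequence(f0, f1, step, n_max):
    """Generate f_0, ..., f_{n_max} from a three-term recurrence."""
    seq = [sp.Poly(f0, t), sp.Poly(f1, t)]
    for n in range(1, n_max):
        nxt = sp.expand(step(n, seq[n].as_expr(), seq[n - 1].as_expr()))
        seq.append(sp.Poly(nxt, t))
    return seq

# Theorem (mainthm1):
# f_{n+1} = -12(t+1)(t+2) f_n' + (4n+1)(2t+3) f_n
#           - 2n(2n-1)(t^2+3t+3) f_{n-1}
f_step = lambda n, fn, fnm1: (-12*(t + 1)*(t + 2)*sp.diff(fn, t)
                              + (4*n + 1)*(2*t + 3)*fn
                              - 2*n*(2*n - 1)*(t**2 + 3*t + 3)*fnm1)

# Theorem (mainthm2):
# x_{n+1} = -2(1-8t^3) x_n' - 8n t^2 x_n - n(2n-1) t x_{n-1}
x_step = lambda n, xn, xnm1: (-2*(1 - 8*t**3)*sp.diff(xn, t)
                              - 8*n*t**2*xn
                              - n*(2*n - 1)*t*xnm1)

f = poly_sequence(1, 2*t + 3, f_step, 30)
x = poly_sequence(1, 0, x_step, 10)

print(f[2].as_expr())   # expected: -6*t**2 - 18*t - 9 (see the tables)
print(x[4].as_expr())   # expected: -33*t**2

p = 73                  # p = 9 mod 16; the table of constant terms
const = f[3*(p - 1)//8].as_expr().subs(t, 0)   # lists 73 as "true"
print(p, const % p == 0)
\end{verbatim}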
\begin{lem}\label{lem:mainlem5} We have \begin{align} \vartheta \theta_2=\dfrac{1}{12}\theta_2\theta_4^4+\dfrac{1}{24}\theta_2^5, ~\vartheta\theta_4=-\dfrac{1}{12}\theta_2^4\theta_4-\dfrac{1}{24}\theta_4^5 \end{align} \end{lem} \begin{proof} It follows from the fact that $\vartheta \theta_2^4$ and $\vartheta \theta_4^4$ are of weight $4$ and the ring $M_4(\Gamma(2))$ is generated by $\theta_2^4, \theta_4^4$. \end{proof} By the above lemma, the Ramanujan-Serre operator $\vartheta$ acts on $\mathbb C[\theta_2, \theta_4]$ as \begin{align} \vartheta=\skakko{\dfrac{1}{12}\theta_2\theta_4^4+\dfrac{1}{24}\theta_2^5}\dfrac{\partial}{\partial \theta_2}-\skakko{\dfrac{1}{12}\theta_2^4\theta_4+\dfrac{1}{24}\theta_4^5}\dfrac{\partial}{\partial \theta_4}. \end{align} \begin{thm}\label{thm:mainthm5} We define the algebraic part of $L(\psi^{2k-1}, k)$ to be \begin{align} L_{E, k}=\dfrac{2^{k+1}3^{k-1}\varpi^{k-1}(k-1)!}{\Omega_{E}^{2k-1}}L(\psi^{2k-1}, k). \end{align} Then $L_{E, k}$ is a square of a rational integer and \begin{align} \sqrt{L_{E, k}}=\begin{cases} \absolute{f_N(0)} & (k=2N+1),\\ 0 & (k=2N), \end{cases} \end{align} where $f_n(t)\in\mathbb Z[t]$ is the polynomial that is defined by the recurrence formula \begin{align} f_{n+1}(t)=(4n+1)(2t+3)f_n(t)-12(t+1)(t+2)f_n'(t)-2n(2n-1)(t^2+3t+3)f_n(t) \end{align} The initial condition is $f_0(t)=1,\, f_1(t)=2t+3$. \end{thm} \begin{proof} By Proposition \ref{prop:recurrence} and Lemma \ref{lem:mainlem5}, we have $\partial_{1/2}^{(n)}\theta_2(z)|_{z=i}=F_n(i)$, where $F_n$ is the modular form that is defined by the recurrence formula \begin{align} F_{n+1}=\skakko{\dfrac{1}{12}\theta_2\theta_4^2+\dfrac{1}{24}\theta_2^5}\dfrac{\partial F_n}{\partial \theta_2}-\skakko{\dfrac{1}{12}\theta_2^4\theta_4+\dfrac{1}{24}\theta_4^5}\dfrac{\partial F_n}{\partial \theta_4}-\dfrac{n(n-1/2)}{144}E_4F_{n-1}. \label{eq:step1ofmainthm5} \end{align} We set $f_n={24}^nF_n/\theta_2^{4n+1}$, which has degree $0$. Then we can rewrite the recurrence formula \eqref{eq:step1ofmainthm5} as follows: \begin{align} f_{n+1}=(4n+1)\dfrac{\theta_2^4+2\theta_4^4}{\theta_2^4}f_n+\dfrac{\theta_2^4+2\theta_4^4}{\theta_2^4}\dfrac{\partial f_n}{\partial \theta_2}-\dfrac{2\theta_2^4\theta_4+\theta_4^5}{\theta_2^4}\dfrac{\partial f_n}{\partial \theta_4}-2n(2n-1)\dfrac{E_4}{\theta_2^8}f_{n-1}.\label{eq:step2ofmainthm5} \end{align} Moreover we set $t=(\theta_4^4-\theta_2^4)/\theta_2^4$ which satisfies $t(i)=0$. Note that $E_4=\theta_2^8+\theta_2^4\theta_4^4+\theta_4^8$. Then the recurrence formula \eqref{eq:step2ofmainthm5} transforms \begin{align} f_{n+1}(t)=(4n+1)(2t+3)f_n(t)-12(t+1)(t+2)f_n'(t)-2n(2n-1)(t^2+3t+3)f_n(t). \end{align} The initial condition is $f_0(t)=1,\, f_1(t)=2t+3$. By the complex multiplication theory, we have \begin{align} \absolute{\theta_2(i)}=2^{-1/4}\varpi^{-1/2}\Omega_{E}^{1/2}. \end{align} Therefore we obtain \begin{align} \absolute{\partial_{1/2}^{(N)}\theta_2(z)|_{z=i}}^2=2^{-4k+7/2}3^{-k+1}\varpi^{-2k+1}\Omega_{E}^{2k-1}\absolute{f_N(0)}^2. \end{align} \end{proof} \subsection{The case $A_p$} First we consider the case for $k=6N+1$. (The case for $k=6N+2$ is almost the same. ) We apply Proposition \ref{prop:recurrence} for $f=\eta, \Gamma=\Gamma(1)$. The graded ring $\oplus_{k\in \mathbb Z}M_k(\Gamma(1))$ is isomorphic to $\mathbb C[E_4, E_6]$ as $\mathbb C$-algebra. 
Since $E_4$ and $E_6$ are algebraically independent over $\mathbb C$, we sometimes regard $E_4$ and $E_6$ as indeterminates and $\mathbb C[E_4, E_6]$ as the polynomial ring in two variables over $\mathbb C$. We denote by $\frac{\partial}{\partial E_4}$ and $\frac{\partial}{\partial E_6}$ the derivative with respect to formal variables $E_4$ and $E_6$. We take a sufficiently small neighborhood $D$ of $\omega$ so that $E_6^{1/3}$ can be defined. (Note that $E_6(\omega)\neq 0$.) In the following, we restrict the domain of functions in $\mathbb C[E_4, E_6, E_6^{1/3}, E_6^{-1}, \eta]$ to $D$. \begin{lem} We have \begin{align} \vartheta E_4=-\dfrac{1}{3}E_6,\quad \vartheta E_6=-\dfrac{1}{2}E_4^2,\quad \vartheta\eta=0. \end{align} \end{lem} \begin{proof} The proof is the same as Lemma \ref{lem:mainlem5}. \end{proof} By the above lemma, the Ramanujan-Serre operator $\vartheta$ acts on $\mathbb C[E_4, E_6]$ as \begin{align} \vartheta=-\dfrac{E_6}{3}\dfrac{\partial}{\partial E_4}-\dfrac{E_4^2}{2}\dfrac{\partial}{\partial E_6}.\label{eq:6N+1op} \end{align} The derivatives $\frac{\partial}{\partial E_4}$ and $\frac{\partial}{\partial E_6}$ on $\mathbb C[E_4, E_6]$ are uniquely extended on $\mathbb C[E_4, E_6, E_6^{-1}, E_6^{1/3}, \eta]$ satisfying the following: \begin{align} \dfrac{\partial}{\partial E_6} E_6^{-1}=-E_6^{-2}, \ \ \dfrac{\partial}{\partial E_6} E_6^{1/3}=\dfrac{1}{3}E_6^{-1}E_6^{1/3}. \end{align} Next we consider the case for $k=6N+4$. We apply Proposition \ref{prop:recurrence} for $f=\eta_3, \Gamma=\Gamma_0(3)$, where $\eta_3(z)=\eta(3z)^3$. It is known that the graded ring $\oplus_{k\in \mathbb Z}M_k(\Gamma_0(3))$ is isomorphic to $\mathbb C[C, \alpha, \beta]/(\alpha^2-C\beta)\cong \mathbb C[C, C^{-1}, \alpha]$ ({\it cf}. \cite{Suda}) as $\mathbb C$-algebra, where \begin{align} &C=\dfrac{1}{2}\skakko{3E_2(3z)-E_2(z)},\,\alpha=\dfrac{1}{240}\skakko{E_4(z)-E_4(3z)},\\ &\beta=\dfrac{1}{12}\mkakko{\dfrac{1}{504}\skakko{E_6(3z)-E_6(z)}-C\alpha}. \end{align} Since $C$ and $\alpha$ are algebraically independent over $\mathbb C$, we sometimes regard $C$ and $\alpha$ as indeterminates and $\mathbb C[C, \alpha]$ as the polynomial ring in two variables over $\mathbb C$. In the following, we consider the extension $\mathbb C[C, C^{-1}, \alpha, \eta_3]$ of $\mathbb C[C, \alpha]$. \begin{lem} We have \begin{align} \vartheta C=-\dfrac{1}{6}C^2+18\alpha, ~\vartheta \alpha=\dfrac{2}{3}C\alpha+9C^{-1}\alpha^2. \end{align} \end{lem} \begin{proof} The proof is the same as Lemma \ref{lem:mainlem5}. \end{proof} Similarly in the case for $k=6N+1$, the Ramanujan-Serre operator $\vartheta$ acts on $\mathbb C[C, C^{-1}, \alpha, \eta_3]$ as \begin{align} \vartheta=\skakko{-\dfrac{1}{6}C^2+18\alpha}\dfrac{\partial}{\partial C}+\skakko{\dfrac{2}{3}C\alpha+9C^{-1}\alpha^2}\dfrac{\partial }{\partial \alpha}.\label{eq:6N+3op} \end{align} \begin{thm}\label{thm:mainthm6} We define the algebraic part of $L(\psi^{2k-1}, k)$ to be \begin{align} L_{A, k}=3\nu \skakko{\dfrac{2\varpi}{2\sqrt{3}\Omega_{A}^2}}^{k-1}\dfrac{(k-1)!}{\Omega_{A}}L(\psi'^{2k-1}, k) \end{align} where $\nu =2$ if $k\equiv 2\mod 6$, $\nu=1$ otherwise. 
Then $L_{A, k}$ is a square of a rational integer and \begin{align} \sqrt{L_{A, k}}=\begin{cases} \absolute{x_{3N}(0)} & (k=6N+1),\\ \absolute{y_{3N}(0)} & (k=6N+2),\\ \absolute{z_{3N+1}(0)} & (k=6N+4),\\ 0 & ({\rm otherwise}), \end{cases} \end{align} where $x_{n}(t), y_{n}(t), z_{n}(t)\in \mathbb Z[t]$ are polynomials that is defined by the following recurrece formulas \begin{align} &x_{n+1}(t)=-2(1-8t^3)x_n'(t)-8nt^2x_n(t)-n(2n-1)tx_{n-1}(t)\\ &y_{n+1}(t)=-2(1-8t^3)y_n'(t)-8nt^2y_n(t)-n(2n+1)ty_{n-1}(t)\\ &z_{n+1}(t)=-(t-1)(9t-1)z_n'(t)+\mkakko{(6t-2)n+2}z_n(t)-2n(2n+1)tz_{n-1}(t). \end{align} The initial conditions are \begin{align} &x_0(t)=1,\, x_1(t)=0,\\ &y_0(t)=1,\, y_1(t)=0,\\ &z_0(t)=1/2,\, z_1(t)=1. \end{align} \end{thm} \begin{proof} Since the proof for the case $k=6N+2$ is the same for $k=6N+1$, we prove for the case $k=6N+1, 6N+4$. First we proof for $k=6N+1$. By Proposition \ref{prop:recurrence} and \eqref{eq:6N+1op}, we have $\partial_{1/2}^{(n)}\eta(z)|_{z=\omega}=X_n(\omega)$, where $X_n$ is the modular form that is defined by the recurrence formula \begin{align} X_{n+1}=-\dfrac{E_6}{3}\dfrac{\partial X_n}{\partial E_4}-\dfrac{E_4^2}{2}\dfrac{\partial X_n}{\partial E_6}-\dfrac{n(n-1/2)}{144}E_4X_{n-1}.\label{eq:step1ofmainthm6} \end{align} We set $x_n=12^nX_n/\eta E_6^{n/3}$ and $t=E_4E_6^{-2/3}/2$ which satisfies $t(\omega)=0$. Then we can rewrite the recurrence formula \eqref{eq:step1ofmainthm6} as follows: \begin{align} x_{n+1}(t)=-2(1-8t^3)x_n'(t)-8nt^2x_n(t)-n(2n+1)tx_{n-1}(t). \end{align} The initial condition is $x_0(t)=1,\,x_1(t)=0$. By the complex multiplication theory, we have \begin{align} \eta(\omega)=\dfrac{3^{3/8}\Omega_{A}^{1/2}}{2^{1/2}\varpi^{1/2}},\, E_6(\omega)=\dfrac{3^6\Omega_{A}^6}{2^3\varpi^6} \end{align} Therefore we have \begin{align} \absolute{\partial_{1/2}^{(3N)}\eta(z)|_{z=\omega}}^2=\dfrac{\Omega_{A}^{2k-1}}{\varpi^{2k-1}}2^{-3k+2}3^{k-1/4}\absolute{x_{3N}(0)}^2. \end{align} Next we proof for $k=6N+4$. We set $\eta_3(z)=\eta(3z)^3$. We have $\partial_{3/2}^{(n)}\eta_3(z)|_{z=\omega}=Z_n(\omega)$, where $Z_n$ is the modular form that is defined by the recurrence formula \begin{align} Z_{n+1}=\skakko{-\dfrac{1}{6}C^2+18\alpha}\dfrac{\partial Z_n}{\partial C}+\skakko{\dfrac{2}{3}C\alpha+9C^{-1}\alpha^2}\dfrac{\partial Z_n}{\partial \alpha}-\dfrac{n(n+1/2)}{144}E_4Z_{n-1}.\label{eq:step2ofmainthm6} \end{align} We set $z_n=2^{3n-1}Z_n/\eta_3C^n, t=(1+216C^{-2}\alpha)/9$, which satisfies $t(\omega)=0$. Then we can rewrite the recurrence formula \eqref{eq:step2ofmainthm6} as follows: \begin{align} z_{n+1}(t)=-(t-1)(9t-1)z_n'(t)+\mkakko{(6t-2)n+2}z_n(t)-2n(2n+1)tz_{n-1}(t). \end{align} The initial condition is $z_0(t)=1/2,\, z_1(t)=1$. By the complex multiplication theory, we have \begin{align} \eta_3(\omega)=\dfrac{\Omega_{A}^{3/2}}{2^{3/2}3^{1/8}\varpi^{3/2}}, ~~C(\omega)=\dfrac{3\Omega_{A}^2}{\varpi^2}. \end{align} Therefore we obtain \begin{align} \absolute{\partial_{3/2}^{(3N+1)}\eta_3(z)|_{z=\omega}}=\dfrac{\Omega_{A}^{2k-1}}{\varpi^{2k-1}}2^{-3k+5}3^{k-9/4}\absolute{z_{3N+1}(0)}^2. \end{align} \end{proof} \begin{ac} This paper is based on his master thesis at Kyushu University. The author would like to thank his advisor Shinichi Kobayashi at Kyushu University for suggesting to him the topic in this paper and giving him advice and comments. He would also like to thank S. Yokoyama at Tokyo Metropolitan University for giving him helpful advice for calculators. He is grateful for F. Rodriguez-Villegas and D. Zagier. 
He has received significant inspiration from their paper. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. \end{ac} \end{document}
\begin{document} \title{Weitzman's Rule for Pandora's Box with Correlations} \begin{abstract} \textsc{Pandora's Box} is a central problem in decision making under uncertainty that can model various real life scenarios. In this problem we are given $n$ boxes, each with a fixed opening cost and an unknown value drawn from a known distribution, only revealed if we pay the opening cost. Our goal is to find a strategy for opening boxes to minimize the sum of the value selected and the opening cost paid. In this work we revisit \textsc{Pandora's Box} when the value distributions are correlated, first studied in \cite{ChawGergTengTzamZhan2020}. We show that the optimal algorithm for the independent case, given by Weitzman's rule, directly works for the correlated case. In fact, it results in significantly better approximation guarantees than the previous work. We also show how to implement the rule given only sample access to the correlated distribution of values. Specifically, we find that a number of samples that is polynomial in the number of boxes is sufficient for the algorithm to work. \end{abstract} \setcounter{page}{0} \thispagestyle{empty} \section{Introduction} In various minimization problems where uncertainty exists in the input, we are allowed to obtain information to remove this uncertainty by paying an extra price. Our goal is to sequentially decide which piece of information to acquire next, in order to minimize the sum of the search cost and the value of the option we chose. This family of problems is naturally modeled by \textsc{Pandora's Box}, first formulated by Weitzman~\cite{Weit1979} in an economics setting. In this problem we are given $n$ boxes, each containing a value drawn from a known distribution and each having a fixed known \emph{opening cost}. We can only see the exact value realized in a box if we open it and pay the opening cost. Our goal is to minimize the sum of the value we select and the opening costs of the boxes we opened. In the original work of Weitzman, an optimal solution was proposed when the distributions on the values of the boxes were independent~\cite{Weit1979}. This algorithm was based on calculating a \emph{reservation value} ($\sigma$) for each box, and then choosing the box with the lowest reservation value to open at every step. Given that independence is an unrealistic assumption in real life, \cite{ChawGergTengTzamZhan2020} first studied the problem where the distributions are correlated, and designed an algorithm giving a constant approximation guarantee. This algorithm is quite involved: it requires solving an LP to convert the \textsc{Pandora's Box} instance to a \textsc{Min Sum Set Cover}{} one, and then solving this instance to obtain an ordering for opening the boxes. Finally, it reduces the problem of deciding when to stop to an online algorithm question corresponding to \textsc{Ski-Rental}. \subsection{Our Contribution} In this work we revisit \textsc{Pandora's Box} with correlations, and provide \textbf{simpler}, \textbf{learnable} algorithms with \textbf{better approximation guarantees}, which directly \textbf{generalize} Weitzman's reservation values. More specifically, our results are the following. \begin{itemize} \item \textbf{Generalizing}: we first show how the original reservation values given by Weitzman~\cite{Weit1979} can be generalized to work with correlated distributions, thus allowing us to use a version of the original greedy algorithm.
\item \textbf{Better approximation}: we give two different variants of our main algorithm, each of which uses a different update of the distribution $\mathcal{D}$ after every step. \begin{enumerate} \item \emph{Variant 1: partial updates}. We condition on the algorithm not having stopped yet. \item \emph{Variant 2: full updates}. We condition on the exact value $v$ revealed in the box opened. \end{enumerate} Both variants improve the approximation given by \cite{ChawGergTengTzamZhan2020} from $9.22$ to $4.428$ for Variant 1 and to $5.828$ for Variant 2. \item \textbf{Simplicity}: our algorithms are greedy and only rely on the generalized version of the reservation value, while the algorithms in previous work rely on solving a linear program and reducing first to \textsc{Min Sum Set Cover}{} and then to \textsc{Ski-Rental}, making them not straightforward to implement. A $9.22$ approximation was also given in \cite{GergTzam2022}, which followed the same approach but bypassed the need to reduce to \textsc{Min Sum Set Cover}{} by directly rounding the linear program via randomized rounding. \item \textbf{Learnability}: we show how, given only sample access to the correlated distribution $\mathcal{D}$, we are still able to maintain the approximation guarantees. Specifically, for Variant 1 only $\text{poly}(n, 1/\varepsilon, \log (1/\delta))$ samples are enough to obtain a $4.428+\varepsilon$ approximation with probability at least $1-\delta$. Variant 2, however, is impossible to learn. \end{itemize} Our analysis is enabled by drawing similarities from \mathcal{PB}Text{} to \textsc{Min Sum Set Cover}{}, which corresponds to the special case where the values inside the boxes are $0$ or $\infty$. For \textsc{Min Sum Set Cover}{} a simple greedy algorithm was shown to achieve the optimal $4$-approximation~\cite{FeigUrieLovaTeta2002}. Surprisingly, Weitzman's algorithm can be seen as a direct generalization of that algorithm. Our analysis follows the histogram method introduced in~\cite{FeigUrieLovaTeta2002} for bounding the approximation ratio. However, we significantly generalize it to handle values in the boxes and to work with the tree-histograms required to handle the case of full updates.
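To make this special case concrete, the following is a minimal Python sketch (ours, not taken from the papers; all names are hypothetical) of how a \textsc{Min Sum Set Cover}{} instance can be encoded as a \mathcal{PB}Text{} instance with unit opening costs and $0/\infty$ values, with one equally likely scenario per set.
\begin{verbatim}
import math

def mssc_to_pandora(num_elements, sets):
    """Encode a Min Sum Set Cover instance as a Pandora's Box instance:
    one box per element with opening cost 1, one (equally likely) scenario
    per set, and value 0 inside box e for scenario s if element e covers
    set s, and +infinity otherwise."""
    costs = {e: 1.0 for e in range(num_elements)}
    scenarios = [{e: (0.0 if e in s else math.inf) for e in range(num_elements)}
                 for s in sets]
    return costs, scenarios

# tiny example: 3 elements and sets {0, 1}, {1, 2}, {2}
costs, scenarios = mssc_to_pandora(3, [{0, 1}, {1, 2}, {2}])
\end{verbatim}
Covering every scenario at minimum total cost in this encoding is exactly the \textsc{Min Sum Set Cover}{} objective.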
In this line of work, more recent papers study the structure of approximately optimal rules for combinatorial problems~\cite{GoelGuhaMuna2006, GuptNaga2013, AdamSvirWard2016, GuptNagaSing2016, GuptNagaSing2017, Sing2018, GuptJianSing2019}. For the special case of \textsc{Min Sum Set Cover}{}, since the original work of~\cite{FeigUrieLovaTeta2002}, there has been many follow-ups and generalizations where every set has a requirement of how many elements contained in it we need to choose~\cite{AzaGamzIftaYinr2009, BansGuptRavi2010, AzarGamz2010,SkutWill2011,ImSvirZwaa2012}. \sigmaection{Preliminaries} In \mathcal{PB}Text{} ($\mathcal{PB}$) we are given a set of $n$ boxes $\mathcal{B}$, each with a known opening cost $c_b\in\mathbb{R}^+$, and a distribution $\mathcal{D}$ over a vector of unknown values $\bm v = (v_1,\ldots,v_n) \in \mathbb{R}_+^d$ inside the boxes. Each box $b\in \mathcal{B}$, once it is opened, reveals the value $v_{b}$. The algorithm can open boxes sequentially, by paying the opening cost each time, and observe the value instantiated inside the box. The goal of the algorithm is to choose a box of small value, while spending as little cost as possible ``opening" boxes. Formally, denoting by $\mathcal{O} \sigmaubseteq \mathcal{B}$ the set of opened boxes, we want to minimize \[\E{v\sigmaim \mathcal{D}}{\sigmaum_{b\in \mathcal{O}}c_b + \min_{b\in \mathcal{O}} v_b }.\] A \varepsilonmph{strategy} for \mathcal{PB}Text{} is an algorithm that in every step decides which is the next box to open and when to stop. A strategy can pick any open box to select at any time. To model this, we assume wlog that after a box is opened the opening cost becomes $0$, allowing us to select the value without opening it again. In its full generality, a strategy can make decisions based on every box opened and value seen so far. We call this the \varepsilonmph{Fully-Adaptive} (FA) strategy. \paragraph{Different Benchmarks.} As it was initially observed in~\cite{ChawGergTengTzamZhan2020}, optimizing over the class of fully-adaptive strategies is intractable, therefore we consider the simpler benchmark of \varepsilonmph{partially-adaptive} (PA) strategies. In this case, the algorithm has to fix the opening order of the boxes, while the stopping rule can arbitrarily depend on the values revealed. \sigmaubsection{Weitzman's Algorithm} When the distributions of values in the boxes are independent, Weitzman~\cite{Weit1979} described a greedy algorithm that is also the optimal strategy. In this algorithm, we first calculate an index for every box $b$, called \varepsilonmph{reservation value} $\sigma_b$, defined as the value that satisfies the following equation \begin{equation}\label{eq:reservation_value} \E{\bm v \sigmaim \mathcal{D}}{(\sigma_b-v_b)^+} = c_b. \varepsilonnd{equation} Then, the boxes are ordered by increasing $\sigmaigma_b$ and opened until the minimum value revealed is less than the next box in the order. Observe that this is a \varepsilonmph{partially-adaptive} strategy. \sigmaection{Competing with the Partially-Adaptive}\label{sec:main_algo} We begin by showing how Weitzman's algorithm can be extended to correlated distributions. Our algorithm calculates a reservation value $\sigma$ for every box at each step, and opens the box $b\in \mathcal{B}$ with the minimum $\sigma_b$. We stop if the value is less than the reservation value calculated, and proceed in making this box \varepsilonmph{free}; we can re-open this for no cost, to obtain the value just realized at any later point. 
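For intuition, the following is a minimal Python sketch (ours, not from the original papers; the function names are hypothetical) of the classical rule for a box with a known discrete marginal: the reservation value solves $\E{}{(\sigma_b - v_b)^+} = c_b$, and in the independent case boxes are opened in increasing $\sigma_b$ until the best value seen is at most the next reservation value. The correlated algorithm described above differs only in that these quantities are recomputed from the conditioned distribution after every step.
\begin{verbatim}
def reservation_value(cost, values, probs, tol=1e-9):
    """Solve E[(sigma - v)^+] = cost for sigma by bisection; `values` and
    `probs` describe the (discrete) marginal distribution of one box."""
    def excess(sigma):
        return sum(p * max(sigma - v, 0.0) for v, p in zip(values, probs))
    lo, hi = min(values), max(values) + cost  # excess(lo) <= cost <= excess(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if excess(mid) < cost:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0


def weitzman_independent(costs, marginals, realized):
    """Weitzman's rule for independent boxes: open boxes in increasing
    reservation value and stop once the best value seen so far is at most
    the reservation value of the next box.  `realized` holds the values
    actually found inside the boxes."""
    sigma = {b: reservation_value(costs[b], *marginals[b]) for b in costs}
    best, paid = float("inf"), 0.0
    for b in sorted(costs, key=lambda box: sigma[box]):
        if best <= sigma[b]:
            break
        paid += costs[b]
        best = min(best, realized[b])
    return paid + best
\end{verbatim}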
The formal statement is shown in Algorithm~\ref{alg:weitzman_main}. We give two different variants based on the type of update we do after every step on the distribution $\mathcal{D}$. In the case of partial updates, we only condition on $V_b>\sigma_b$, which is equivalent to the algorithm not having stopped. On the other hand, for full updates we condition on the exact value that was instantiated in the box opened. Theorem~\ref{thm:main_vs_pa} gives the approximation guarantees for both versions of this algorithm. \begin{algorithm}[ht] \KwIn{Boxes with costs $c_i\in \mathbb{R}$, distribution over scenarios $\mathcal{D}$.} An unknown vector of values $v \sim \mathcal{D}$ is drawn\\ \Repeat{termination}{ Calculate $\sigma_b$ for each box $b \in \mathcal{B}$ by solving: $$ \E{\bm v \sim \mathcal{D}}{(\sigma_b-v_b)^+} = c_b. $$\\ Open box $b = \text{argmin}_{b\in \mathcal{B}}\sigma_b$\\ Stop if the observed value $V_b = v_b \le \sigma_b$\\ $c_b\leftarrow 0$ \tcp{Box is always open now} Update the prior distribution \begin{itemize} \item[-] \textbf{Variant 1}: $\mathcal{D} \leftarrow \mathcal{D}|_{V_b > \sigma_b}$ (partial updates) \item[-] \textbf{Variant 2}: $\mathcal{D} \leftarrow \mathcal{D}|_{V_b = v_b} $ (full updates) \end{itemize} } \caption{Weitzman's algorithm for correlated $\mathcal{D}$.}\label{alg:weitzman_main} \end{algorithm} \begin{restatable}{theorem}{mainThm}\label{thm:main_vs_pa} Algorithm~\ref{alg:weitzman_main} is a $4.428$-approximation for Variant 1 and a $5.828$-approximation for Variant 2 of \mathcal{PB}Text{} against the partially-adaptive optimal. \end{restatable} \begin{proof} We prove the two parts of this theorem separately in Theorems~\ref{thm:main_vs_pa_partial} and~\ref{thm:main_vs_pa_equal}. \end{proof} Observe that for independent distributions this algorithm is exactly the same as Weitzman's \cite{Weit1979}, since the product prior $\mathcal{D}$ remains the same regardless of the values realized. Therefore, the calculation of the reservation values does not change in every round, and it suffices to calculate them only once at the beginning. \paragraph{Scenarios} To proceed with the analysis of Theorem~\ref{thm:main_vs_pa}, we assume that $\mathcal{D}$ is supported on a collection of $m$ vectors, $(\bm v^s)_{s \in \scenarios}$, which we call scenarios, and sometimes abuse notation to say that a scenario is sampled from the distribution $\mathcal{D}$. We assume that all scenarios have equal probability. The general case with unequal probabilities follows by creating more copies of the higher probability scenarios until the distribution is uniform. A scenario is \emph{covered} when the algorithm decides to stop and choose a value from the opened boxes. For a specific scenario $s\in\scenarios$ we denote by $c(s)$ the total opening cost paid by an algorithm before this scenario is covered and by $v(s)$ the value chosen for this scenario. \paragraph{Reservation Values} To analyze Theorem~\ref{thm:main_vs_pa}, we introduce a new way of defining the reservation values of the boxes that is equivalent to~\eqref{eq:reservation_value}.
For a box $b$, we have that \[\sigma_b = \min_{A \sigmaubseteq \sigmacenarios} \frac{c_b+ \sigmaum_{s\in A} \Pr{\mathcal{D}}{s} v^s_b}{\sigmaum_{s\in A} \Pr{\mathcal{D}}{s}} \] The equivalence to~\varepsilonqref{eq:reservation_value}, follows since $\sigma_b$ is defined as the root of the expression \begin{align*} \E{s \sigmaim \mathcal{D}}{(\sigma_b-v^s_b)^+} & - c_b = \sigmaum_{s\in \sigmacenarios} \Pr{\mathcal{D}}{s} (\sigma_b - v^s_b)^+ - c_b \\ &= \max_{A \sigmaubseteq \sigmacenarios} \sigmaum_{s\in A} \Pr{\mathcal{D}}{s} (\sigma_b - v^s_b) - c_b. \varepsilonnd{align*} Thus, $\sigma_b$ is also the root of \begin{align*} \max_{A \sigmaubseteq \sigmacenarios}& \frac { \sigmaum_{s\in A} \Pr{\mathcal{D}}{s} (\sigma_b - v^s_b) - c_b } {\sigmaum_{s\in A} \Pr{\mathcal{D}}{s}} = \sigma_b - \min_{A \sigmaubseteq \sigmacenarios} \frac { c_b + \sigmaum_{s\in A} \Pr{\mathcal{D}}{s} v^s_b } {\sigmaum_{s\in A} \Pr{\mathcal{D}}{s}}. \varepsilonnd{align*} This, gives our formula for computing $\sigma_b$, which we can further simplify using our assumption that all scenarios have equal probability. In this case, $ \Pr{\mathcal{D}}{s}= 1/|\sigmacenarios|$ which implies that \begin{equation}\label{eq:equivalent_reserv} \sigma_b = \min_{A \sigmaubseteq \sigmacenarios} \frac{c_b|\sigmacenarios|+\sigmaum_{s\in A}v_s}{|A|}. \varepsilonnd{equation} \sigmaubsection{Conditioning on $V_b>\sigmaigma_b$}\label{subsec:proof_algo} We start by describing the simpler variant of our algorithm where after opening each box we update the distribution by conditioning on the event $V_b > \sigma_b$. This algorithm is \varepsilonmph{partially adaptive}, since the order for each scenario does not depend on the actual value that is realized every time. At every step the algorithm will either stop or continue opening boxes conditioned on the event ``We have not stopped yet" which does not differentiate among the surviving scenarios. \begin{restatable}{theorem}{mainThm}\label{thm:main_vs_pa_partial} Algorithm~\ref{alg:weitzman_main} is a $4.428$-approximation for \mathcal{PB}Text{} against the partially-adaptive optimal, when conditioning on $V_b > \sigmaigma_b$. \varepsilonnd{restatable} In this section we show a simpler proof for Theorem~\ref{thm:main_vs_pa_partial} that gives a $3+2\sigmaqrt{2} \approx 5.828$-approximation. The full proof for the $4.428$-approximation is given in section~\ref{apn:main_algo} of the Appendix. Using the equivalent definition of the reservation value (Equation~\varepsilonqref{eq:equivalent_reserv}) we can rewrite Algorithm~\ref{alg:weitzman_main} as follows.\\ \begin{algorithm}[H] \KwIn{Boxes with costs $c_i\in \mathbb{R}$, set of scenarios $\sigmacenarios$.} $t\leftarrow 0$\\ $R_0 \leftarrow \mathcal{S}$ the set of scenarios still uncovered\\ \While{$R_t\neq \varepsilonmptyset$}{ Let $\sigma_t \leftarrow \text{min}_{b\in \mathcal{B}, A\sigmaubseteq R_t} \frac{c_b|R_t| + \sigmaum_{s\in A}v^s_{b}}{|A|}$\\ Let $b_t$ and $A_t$ be the box and the set of scenarios that achieve the minimum\\ Open box $b_t$ and pay $c_{b_t}$\\ Stop and choose the value at box $b_t$ if it is less than $\sigma_t$: this holds \textbf{iff} $s \in A_t$\\ Set $c_{b_t} \leftarrow 0$\\ $R_t \leftarrow R_t \sigmaetminus A_t$\\ $t\leftarrow t+1$ } \caption{Weitzman's rule for Partial Updates}\label{alg:general_v} \varepsilonnd{algorithm} We first start by giving a bound on the cost of the algorithm. The cost can be broken down into opening cost plus the value obtained. 
Since at any time $t$, all remaining scenarios $R_t$ pay the opening cost $c_{b_t}$, we have that the total opening cost is $$\sigmaum_t c_{b_t} |R_t|.$$ Moreover, the chosen value is given as $$\sigmaum_t \sigmaum_{s \in A_t} v^s_{b_t}. $$ Overall, we have that \begin{align*} \alg & = \sigmaum_t \left( c_{b_t} |R_t| + \sigmaum_{s \in A_t} v^s_{b_t} \right) = \sigmaum_t |A_t| \frac { c_{b_t} |R_t| + \sigmaum_{s \in A_t} v^s_{b_t} } { |A_t|} = \sigmaum_t |A_t| \sigma_t. \varepsilonnd{align*} Defining $\sigma_s$ to be the reservation value of scenario $s$ at the time it is covered, i.e. when $s \in A_t$, we get $\alg = \sigmaum_{s \in \sigmacenarios} \sigma_s$. We follow a \varepsilonmph{histogram analysis} similar to the proof of Theorem~4 in \cite{FeigUrieLovaTeta2004} for \textsc{Min Sum Set Cover}{} and construct the following histograms. \begin{itemize} \item The $\opt_o$ histogram: put the scenarios on the x-axis on increasing opening cost order $c_s^\opt$ according to $\opt$, the height of each scenario is the opening cost it paid. \item The $\opt_v$ histogram: put the scenarios on the x-axis on increasing covering value order $v_s^\opt$ according to $\opt$, the height of each scenario is the value with which it was covered. \item The $\alg$ histogram: put scenarios on the x-axis in the order the algorithm covers them. The height of each scenario is $\sigma_s$. Observe that the area of the $\alg$ histogram is exactly the cost of the algorithm. \varepsilonnd{itemize} \input{one_pa_optimized} \sigmaubsection{Conditioning on $V_b=v$}\label{subsec:equal} In this section we switch gears to our second variant of Algorithm~\ref{alg:weitzman_main}, where in each step we update the prior $\mathcal{D}$ conditioning on the event $V_b = v$. We state our result in Theorem~\ref{thm:main_vs_pa_equal}. In this case, the conditioning on $\mathcal{D}$ implies that the algorithm at every step removes the scenarios that are \varepsilonmph{inconsistent} with the value realized. \begin{restatable}{theorem}{mainThmVarEqual}\label{thm:main_vs_pa_equal} Algorithm~\ref{alg:weitzman_main} is a $3+2\sigmaqrt{2} \approx 5.828$-approximation for \mathcal{PB}Text{} against the partially-adaptive optimal, when conditioning on $V_b = v$. \varepsilonnd{restatable} The main challenge was that the algorithm's solution is now a tree with respect to scenarios instead of a line as in the case of $\mathcal{D}|_{V_b>\sigmaigma_b}$. Specifically, in the $D|_{V_b>\sigmaigma_b}$ variant at every step all scenarios that had $V_b \leq \sigmaigma_b$ were covered and removed from consideration. However in the $D|_{V_b=v}$ variant the remaining scenarios are split into different cases, based on the realization of $V$, as shown in the example of Figure~\ref{fig:tree_solution_equal}. \begin{figure}[ht] \centering \input{tikz_alg_solution} \caption{Algorithm's solution when conditioning on $V=v$, for an instance with scenarios $\sigmacenarios=\{s_1, s_2, s_3\}$, and boxes $\mathcal{B} = \{b_1, b_2\}$. The nodes contain the consistent scenarios at each step, and the values $V$ are revealed once we open the corresponding box.} \label{fig:tree_solution_equal} \varepsilonnd{figure} This results into the ALG histogram not being well defined, since there is no unique order of covering the scenarios. We overcome this by generalizing the histogram approach to trees. \begin{proof}[Proof of Theorem~\ref{thm:main_vs_pa_equal}] The proof follows similar steps to that of Theorem~\ref{thm:main_vs_pa_partial}, thus we only highlight the differences. 
The algorithm is presented below, the only change is line 5 where we remove the inconsistent with the value revealed scenarios, which also leads to our solution branching out for different scenarios and forming a tree. \begin{algorithm}[ht] \KwIn{Boxes with costs $c_i\in \mathbb{R}$, set of scenarios $\sigmacenarios$.} Define a root node $u$ corresponding to the set $\mathcal{S}$\\ $R_u \leftarrow \mathcal{S}$ the set of scenarios still uncovered\\ \While{$R_u\neq \varepsilonmptyset$}{ Let $\sigma_u \leftarrow \text{min}_{b\in \mathcal{B}, A\sigmaubseteq R_u} \frac{c_b|R_u| + \sigmaum_{s\in A}v^s_{b}}{|A|}$\\ Let $b_u$ and $A_u$ be the box and the set of scenarios that achieve the minimum\\ Open box $b_u$ paying $c_{b_u}$ and observe value $v$\\ Stop and choose the value at box $b_u$ if it is less than $\sigma_u$: this holds \textbf{iff} $s \in A_u$\\ Set $c_{b_u} \leftarrow 0$\\ Let $u'$ be a vertex corresponding to the set of consistent scenarios with $R_{u'} \triangleq R_u \sigmaetminus \left(A_u \cup \{s\in R_u: v_{b_u}^s\neq v\}\right)$ \tcp{Remove inconsistent scenarios} Set $u\leftarrow u'$ } \caption{Weitzman's rule for Full Updates }\label{alg:general_v_equal} \varepsilonnd{algorithm} \paragraph{Bounding the opening cost} Consider the tree $\mathcal{T}$ of $\alg$ where at every node $u$ a set $A_u$ of scenarios is covered. We associate this tree with node weights, where at every node $u$, we assign $|A_u|$ weights $(\sigma_u,...,\sigma_u)$. Denote, the weighted tree by $\mathcal{T}_\alg$. As before, the total cost of $\alg$ is equal to the sum of the weights of the tree. We now consider two alternative ways of assigning weights to the the nodes, forming trees $\mathcal{T}_{\opt_o}$, $\mathcal{T}_{\opt_v}$ using the following process. \begin{itemize} \item $\mathcal{T}_{\opt_o}.$ At every node $u$ we create a vector of weights $\bm w^{\opt_o}_u = (c^\opt_{s})_{s \in A_u}$ where each $c^\opt_{s}$ is the opening cost that scenario $s \in A_u$ has in the optimal solution. \item $\mathcal{T}_{\opt_v}.$ At every node $u$ we create a vector of weights $\bm w^{\opt_v}_u = (v^\opt_{s})_{s \in A_u}$ where each $v^\opt_{s}$ is the value the optimal uses to cover scenario $s \in A_u$. \varepsilonnd{itemize} We denote by $\text{cost}(\mathcal{T}_\alg)$ the sum of all weights in every node of the tree $\mathcal{T}$. We have that $\text{cost}(\mathcal{T})$ is equal to the total cost of $\alg$, while $\text{cost}(\mathcal{T}_{\opt_o})$ and $\text{cost}(\mathcal{T}_{\opt_v})$ is equal to the optimal opening cost $\opt_o$ and optimal value $\opt_v$ respectively. Intuitively, the weighted trees correspond to the histograms in the previous analysis of Theorem~\ref{thm:main_vs_pa_partial}. We want to relate the cost of $\alg$, to that of $\mathcal{T}_{\opt_o}$ and $\mathcal{T}_{\opt_v}$. To do this, we define an operation similar to histogram scaling, which replaces the weights of every node $u$ in a tree with the top $\rho$-percentile of the weights in the subtree rooted at $u$. As the following lemma shows, this changes the cost of a tree by a bounded multiplicative factor. \begin{restatable}{lemma}{genHist}\label{lem:general_histogram} Let $\mathcal{T}$ be a tree with a vector of weights $\bm w_u$ at each node $u\in \mathcal{T}$, and let ${ \mathcal{T}^{(\rho)} }$ be the tree we get when we substitute the weights of every node with the top $\rho$-percentile of all the weights in the subtree of $\mathcal{T}$ rooted at $u$. 
Then \[\rho \cdot \text{cost}(\mathcal{T}^{(\rho)}) \leq \text{cost}(\mathcal{T}).\] \end{restatable} We defer the proof of Lemma~\ref{lem:general_histogram} to Section~\ref{apn:equal} of the Appendix. To complete the proof of Theorem~\ref{thm:main_vs_pa_equal} and bound $\text{cost}(\mathcal{T}_\alg)$, we show as before that the weights at every node $u$ are bounded by the weights of $\mathcal{T}^{(1-\gamma)}_{\opt_o}$ scaled by $\frac 1 {\beta \gamma}$ plus the weights of $\mathcal{T}^{((1-\beta)\gamma)}_{\opt_v}$, for the constants $\beta, \gamma \in (0,1)$ chosen in the proof of Theorem~\ref{thm:main_vs_pa_partial}. This implies that \begin{align*} \text{cost}(\mathcal{T}_{\alg}) \le & \frac 1 {\beta \gamma} \text{cost}(\mathcal{T}^{(1-\gamma)}_{\opt_o}) + \text{cost}(\mathcal{T}^{((1-\beta)\gamma)}_{\opt_v})\\ \le& \frac 1 {\beta \gamma (1-\gamma)} \text{cost}(\mathcal{T}_{\opt_o}) + \frac 1 {(1-\beta)\gamma} \text{cost}(\mathcal{T}_{\opt_v}) \end{align*} which gives $\alg \le 5.828\, \opt $ for the choice of $\beta$ and $\gamma$. The details of the proof are similar to those of Theorem~\ref{thm:main_vs_pa}, and are deferred to Section~\ref{apn:equal} of the Appendix. \end{proof} \subsection{Lower Bound} To show that our algorithm is almost tight, we observe that the lower bound for \textsc{Min Sum Set Cover}{} presented in~\cite{FeigUrieLovaTeta2004} also applies to \mathcal{PB}Text{}. In \textsc{Min Sum Set Cover}{} we are given $n$ elements $e_i$ and $m$ sets $s_j$, where each $s_j \subseteq [n]$. We say an element $e_i$ \emph{covers} a set $s_j$ if $e_i \in s_j$. The goal is to select elements in order to minimize the sum of the \emph{covering times} of all the sets, where the \emph{covering time} of a set $s_j$ is the first time an element $e_i\in s_j$ is chosen. In \cite{FeigUrieLovaTeta2004} the authors show that \textsc{Min Sum Set Cover}{} cannot be approximated better than $4-\varepsilon$ even in the special case where every set contains the same number of elements\footnote{Equivalently, the instance forms a uniform hypergraph, where sets are hyperedges and elements are vertices.}. We restate the theorem below. \begin{theorem}[Theorem 13 of \cite{FeigUrieLovaTeta2004}]\label{thm:hardness_feige} For every $\varepsilon> 0$, it is NP-hard to approximate min sum set cover within a ratio of $4 - \varepsilon$ on uniform hypergraphs. \end{theorem} Our main observation is that \textsc{Min Sum Set Cover}{} is a special case of \mathcal{PB}Text{}. When the boxes all have the same opening cost $c_b=1$ and the values inside are $v^b_s \in \{0,\infty\}$, we are required to find a $0$ for each scenario, which is equivalent to \emph{covering} a scenario. The optimal solution of \textsc{Min Sum Set Cover}{} is an algorithm that selects elements one by one, and stops whenever all the sets are covered. This is exactly the partially-adaptive optimal we defined for \mathcal{PB}Text{}. The theorem restated above results in the following corollary. \begin{corollary} It is NP-hard to approximate Pandora's Box against the partially-adaptive optimal better than $4-\varepsilon$. \end{corollary} \section{Learning from Samples}\label{sec:learning} In this section we show that our algorithm also works when we are only given sample access to the correlated distribution $\mathcal{D}$. We will mainly focus on the first variant with partial updates $\mathcal{D}|_{V>\sigma}$.
The second variant, with full Bayesian updates $\mathcal{D}|_{V=v}$, requires full knowledge of the underlying distribution and can only work with sample access if one can learn the full distribution. To see this, consider for example an instance where the values are drawn uniformly from $[0,1]^d$. No matter how many samples one draws, it is impossible to know the conditional distribution $\mathcal{D}|_{V=v}$ after opening the first box for a fresh sample $v$, and the Bayesian update is not well defined. Variant 1 does not have this problem and can be learned from samples if the costs of the boxes are polynomially bounded by $n$, i.e.\ if there is a constant $c > 0$ such that for all $b \in \mathcal{B}$, $c_b \in [1,n^c]$. If the costs are unbounded, it is impossible to get a good approximation with few samples. To see this, consider the following instance. Box 1 has cost $1/H \rightarrow 0$, while every other box has cost $H$ for a very large $H>0$. Now consider a distribution where with probability $1-\frac 1 H \rightarrow 1$ the value in the first box is $0$, and with probability $1/H$ it is $+\infty$. In this case, with a small number of samples we never observe any scenario where $v_1 \neq 0$, and we believe the overall cost is near $0$. However, the true cost is at least $H \cdot 1/H = 1$ and is determined by how the order of boxes is chosen when the scenario has $v_1 \neq 0$. Without any such samples it is impossible to pick a good order. Therefore, we proceed to analyze Variant 1 with $\mathcal{D}|_{V>\sigma}$ in the case when the box costs are similar. We show that a number of samples polynomial in the number of boxes suffices to obtain an approximately optimal algorithm, as we formally state in the following theorem. We present the case where all boxes have cost 1, but the case where the costs are polynomially bounded easily follows. \begin{theorem}\label{thm:learning} Consider an instance of Pandora's Box with opening costs equal to 1. For any given parameters $\varepsilon, \delta>0$, using $m = \text{poly}(n, 1/\varepsilon, \log(1/\delta))$ samples from $\mathcal{D}$, Algorithm~\ref{alg:weitzman_main} (Variant 1) obtains a $(4.428 + \varepsilon)$-approximate policy against the partially-adaptive optimal, with probability at least $1-\delta$. \end{theorem} To prove the theorem, we first note that Variant 1 of Algorithm~\ref{alg:weitzman_main} takes a surprisingly simple form, which we call a threshold policy. It can be described by a permutation $\pi$ of visiting the boxes and a vector of thresholds $\bm{\tau}$ that indicates when to stop. The threshold for every box corresponds to the reservation value the first time the box is opened. To analyze the sample complexity of Algorithm~\ref{alg:weitzman_main}, we study a broader class of algorithms parameterized by a permutation and a vector of thresholds, given in Algorithm~\ref{alg:learning}. \begin{algorithm}[ht] \KwIn{Set of boxes, permutation $\pi$, vector of thresholds $\bm{\tau}\in \mathbb{R}^n$} best $\leftarrow \infty$\\ \ForEach{$i \in [n]$}{ \uIf{best $>\tau_i$}{ Open box $\pi_i$, see value $v_i$\\ best $\leftarrow \min(\text{best}, v_i)$ } \uElse{ Accept best } } \caption{General format of a \mathcal{PB}Text{} algorithm.}\label{alg:learning} \end{algorithm} Our goal now is to show that polynomially many samples from the distribution $\mathcal{D}$ suffice to learn good parameters for Algorithm~\ref{alg:learning}.
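As a concrete illustration, the following is a minimal Python sketch of a threshold policy in the spirit of Algorithm~\ref{alg:learning} (ours; the function names are hypothetical): it visits the boxes in the order $\pi$, accepts as soon as the best value seen so far is at most the next threshold, and pays the total opening cost plus the best value.
\begin{verbatim}
def run_threshold_policy(perm, thresholds, costs, values):
    """One run of a threshold policy (permutation `perm`, thresholds tau):
    before visiting box perm[i], accept the best value seen so far if it is
    at most thresholds[i]; otherwise pay the opening cost and open the box."""
    best, paid = float("inf"), 0.0
    for box, tau in zip(perm, thresholds):
        if best <= tau:
            break                      # accept best
        paid += costs[box]
        best = min(best, values[box])
    return paid + best


def empirical_cost(perm, thresholds, costs, samples):
    """Average cost over a list of sampled value vectors, i.e. the cost of
    the policy on an empirical distribution."""
    return sum(run_threshold_policy(perm, thresholds, costs, v)
               for v in samples) / len(samples)
\end{verbatim}
Averaging over samples, as in the second function, is how the cost of a fixed pair $(\pi,\bm\tau)$ is estimated on the empirical distribution $\hat{\mathcal{D}}$ in the analysis below.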
We first show a lemma that bounds the difference between the cost of the algorithm calculated on the empirical distribution $\hat{\mathcal{D}}$ and on the original $\mathcal{D}$ (Lemma~\ref{lem:closeness}), and a second lemma (Lemma~\ref{lem:capping}) showing that capping the reservation values at $n/\varepsilon$ can also be done with negligible cost. \begin{restatable}{lemma}{closeness}\label{lem:closeness} Let $\varepsilon, \delta > 0$ and let $\hat{\mathcal{D}}$ be the empirical distribution obtained from $\text{poly}(n, 1/\varepsilon, \log(1/\delta))$ samples from $\mathcal{D}$. Then, with probability $1-\delta$, it holds that $$\left| \E{\hat D}{\alg(\pi,\tau) - \min_{b\in \mathcal{B}} v_b} - \E{D}{\alg(\pi,\tau) - \min_{b\in \mathcal{B}} v_b} \right| \le \varepsilon$$ for any permutation $\pi$ and any vector of thresholds $\bm \tau \in \left[0, \frac n \varepsilon\right]^n$. \end{restatable} We defer the proof of Lemma~\ref{lem:closeness} to Section~\ref{apn:learning} of the Appendix. \begin{lemma}\label{lem:capping} Let $\mathcal{D}$ be any distribution of values. Let $\varepsilon > 0$ and consider a permutation $\pi$ and thresholds $\bm \tau$. Moreover, let $\tau'$ be the thresholds capped at $n/\varepsilon$, i.e.\ setting $\tau'_b = \min \{ \tau_b, n/\varepsilon \} $ for all boxes $b$. Then, $$\E{v \sim D}{\alg(\pi,\tau')} \le (1+\varepsilon) \E{v \sim D}{\alg(\pi,\tau)}.$$ \end{lemma} \begin{proof} We compare the expected cost of $\alg$ with the original thresholds and of the transformed algorithm $\alg'$ with the capped thresholds. For any value vector $\bm v \sim \mathcal{D}$, either (1) the two algorithms stopped at the same point, having the same opening cost and value, or (2) $\alg$ stopped earlier at a threshold $\tau > n/\varepsilon$, while $\alg'$ continued. In the latter case, the value $v$ that $\alg$ gets is greater than $n/\varepsilon$, while the value $v'$ that $\alg'$ gets is smaller, $v' \le v$. For such a scenario, the opening cost $c$ of $\alg$ and the opening cost $c'$ of $\alg'$ satisfy $c' \le c + n$. Thus, the total cost is $c' + v' \le c + v + n \le (1+\varepsilon) (c+v)$, where the last inequality holds because $v > n/\varepsilon$ implies $n \le \varepsilon v \le \varepsilon(c+v)$. Overall, we get that \[ \E{\mathcal{D}}{\alg'} \leq (1+\varepsilon)\E{\mathcal{D}}{\alg}.\] \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:learning}] With $\text{poly}(n,1/\varepsilon,\log(1/\delta))$ samples from $\mathcal{D}$, we obtain an empirical distribution $\hat{\mathcal{D}}$. From Lemma~\ref{lem:closeness}, we have that with probability at least $1-\delta \varepsilon / \log(1/\delta)$, the following holds \begin{align}\label{eq:error_thres} \bigg| \E{v \sim \hat D}{\alg(\pi,\tau) - \min_{b\in \mathcal{B}} v_b} - \E{v \sim D}{\alg(\pi,\tau) - \min_{b\in \mathcal{B}} v_b} \bigg| \le \varepsilon \end{align} for any permutation $\pi$ and any vector of thresholds $\bm \tau \in \left[0, \frac n \varepsilon\right]^n$. This gives us that we can estimate the cost of a threshold policy accurately. To compare with the set of all partially adaptive policies, which may not take the form of a threshold policy, we consider the set of scenario-aware policies (SA). These are policies $\text{SA}(\pi)$ parameterized by a permutation $\pi$ of the boxes and are forced to visit the boxes in that order. However, they are aware of all values in the boxes in advance and know precisely when to stop. These are unrealistic policies introduced in \cite{ChawGergTengTzamZhan2020} which serve as an upper bound to the set of all partially adaptive policies.
As shown in \cite{ChawGergTengTzamZhan2020} (Lemma 3.3), scenario-aware policies are also learnable from samples. With probability at least $1-\delta \varepsilon / \log(1/\delta)$, it holds that for any permutation $\pi$ \begin{align}\label{eq:error_sa} \bigg| & \E{v \sigmaim \hat D}{SA(\pi) - \min_{b\in \mathcal{B}} v_b} - \E{v \sigmaim D}{SA(\pi) - \min_{b\in \mathcal{B}} v_b} \bigg| \le \varepsilon. \varepsilonnd{align} The $\alpha$-approximation guarantees (with $a \approx 4.428$) of Algorithm~\ref{alg:weitzman_main} hold even against scenario aware policies as there is no restriction on how the partially-adaptive policy may choose to stop. So for the empirical distribution, we can compute a permutation $\hat \pi$ and thresholds $\hat \tau$ such that: $$ \E{\hat D}{\alg(\hat \pi,\hat \tau)} \le \alpha \cdot \min_\pi \E{\hat D}{ SA(\pi) } $$ Clipping the thresholds to obtain $\hat \tau' = \min\{ \hat \tau, n/\varepsilon \}$, and letting $\Delta = \E{v \sigmaim \hat D}{\min_{b\in \mathcal{B}} v_b} - \E{v \sigmaim D}{\min_{b\in \mathcal{B}} v_b}$, we have that: \begin{align*} &\E{D}{\alg(\hat \pi,\hat \tau')} \le \E{\hat D}{\alg(\hat \pi,\hat \tau')} - \Delta + \varepsilon \\ &\le (1+\varepsilon) \E{\hat D}{\alg(\hat \pi,\hat \tau)} + \Delta + \varepsilon / 4 \\ &\le (1+\varepsilon) \alpha \cdot \min_{\pi} \E{\hat D}{SA(\pi)} - \Delta + \varepsilon / 4 \\ &\le (1+\varepsilon) \alpha \cdot \min_{\pi} \E{D}{SA(\pi)} + O( \Delta + \varepsilon ) \varepsilonnd{align*} By Markov's inequality, we have that $\Pr{}{ \E{v \sigmaim \hat D}{\min_{b\in \mathcal{B}} v_b} \le (1+\varepsilon) \E{v \sigmaim D}{\min_{b\in \mathcal{B}} v_b} } \ge \frac \varepsilon { 1+ \varepsilon} \ge \varepsilon/2$. Thus, repeating the sampling process $\frac {O(\log 1/\delta)} {\varepsilon}$ times and picking the empirical distribution with minimum $\E{v \sigmaim \hat D}{\min_{b\in \mathcal{B}} v_b}$ satisfies $\Delta \le \varepsilon \E{v \sigmaim D}{\min_{b\in \mathcal{B}} v_b}$ with probability at least $1-\delta$ and simultaneously satisfies equations \varepsilonqref{eq:error_thres} and \varepsilonqref{eq:error_sa}. This shows that $\E{D}{\alg(\hat \pi,\hat \tau')} \le (1+O(\varepsilon)) \alpha \cdot \min_{\pi} \E{D}{SA(\pi)}$ which completes the proof by rescaling $\varepsilon$ by a constant. \varepsilonnd{proof} \appendix \sigmaection{Appendix} \sigmaubsection{Proofs from Section~\ref{sec:main_algo}}\label{apn:main_algo} \mainThm* The tighter guarantee proof follows the steps of the proof in section~\ref{subsec:proof_algo} for the opening cost, but provides a tighter analysis for the values cost. \begin{proof}[Tight proof of Theorem~\ref{thm:main_vs_pa_partial}] Denote by $\sigmaigma_s$ the reservation value for scenario $s$ when it was covered by $\alg$ and by $\mathcal{T}$ the set of boxes opened i.e. the steps taken by the algorithm. Then we can write the cost paid by the algorithm as follows \begin{equation}\label{eq:alg_cost_tight} \alg = \frac{1}{|\sigmacenarios|} \sigmaum_{s\in \sigmacenarios} \sigmaigma_s = \frac{1}{|\sigmacenarios|}\sigmaum_{p\in \mathcal{T}} |A_t| \sigmaigma_p . \varepsilonnd{equation} We use the same notation as section~\ref{subsec:proof_algo} which we repeat here for convenience. Consider any point $p$ in the $\alg$ histogram, and let $s$ be its corresponding scenario and $t$ be the time this scenario is covered. 
\begin{itemize} \item $R_t:$ set of uncovered scenarios at step $t$ \item $A_t:$ set of scenarios that $\alg$ chooses to cover at step $t$ \item $c^*$: the opening cost such that $\gamma|R_t|$ of the scenarios in $R_t$ have opening cost less than $c^*$ \item $R_{\text{low}} = \{s\in R_t: c_s^\opt \leq c^*\}$ the set of these scenarios \item $v^*$: the value of scenarios in $R_{\text{low}}$ such that $b|R_{\text{low}}|$ of the scenarios have value less than $v^*$ \item $L = \{s\in R_{\text{low}}: v_s^\opt\leq v^*\}$ the set of scenarios with value at most $v^*$ \item $B_L$: set of boxes the optimal uses to cover the scenarios in $L$ of step $t$ \varepsilonnd{itemize} The split described in the definitions above is again shown in Figure~\ref{fig:t_v_split_apn}, and the constants $1> \beta,\gamma>0$ will be determined in the end of the proof. \begin{figure}[H] \centering \input{split.tex} \caption{Split of scenarios in $R_t$.} \label{fig:t_v_split_apn} \varepsilonnd{figure} Continuing from equation~\varepsilonqref{eq:alg_cost_tight} we obtain the following. \begin{align*} \alg & \leq \frac{1}{|\sigmacenarios|}\sigmaum_{t\in \mathcal{T}} |A_t| \frac{|R_t|\sigmaum_{b\in B_L} c_b + \sigmaum_{s\in L }v^\opt_s}{|L|} & \text{Inequality~\ref{eq:sigma_UB}}\\ & \leq \frac{1}{|\sigmacenarios|} \sigmaum_{t\in\mathcal{T}} \left( |A_t|\frac{c^*}{\beta \gamma} + \frac{\sigmaum_{s\in L}v^\opt_s}{|L|}\right) & \text{Ineq.~\ref{eq:sigma_UB} and } |L| = \gamma \beta|R_t| \\ & \leq \frac{\opt_o}{\beta \gamma(1-\gamma)} \sigmaum_{t\in \mathcal{T}}\frac{|A_t|}{|\sigmacenarios|} + \sigmaum_{t\in\mathcal{T}}\frac{|A_t|}{|\sigmacenarios|} \frac{\sigmaum_{s\in L}v^\opt_s}{|L|} & \text{Since }c^* \leq \opt_o/(1-\gamma)\\ & = \frac{\opt_o}{\beta \gamma(1-\gamma)} + \sigmaum_{p\in\mathcal{T}}\frac{|A_t|}{|\sigmacenarios|} \frac{\sigmaum_{s\in L}v^\opt_s}{|L|} & \text{Since} \sigmaum_{t}|A_t| = |\sigmacenarios| \varepsilonnd{align*} Where in the second to last inequality we used the same histogram argument from section~\ref{subsec:proof_algo}, to bound $c^*$ by $\opt_o/(1-\gamma)$. To bound the values term, observe that if we sorted the optimal values $v^\opt_s$ that cover each scenario by decreasing order, and denote $j_s$ the index of $v_s^\opt$ in this ordering, we add $v_s^\opt$ multiplied by the length of the interval every time $j_s\in \big[(1-\beta)\gamma|R_t|, \gamma |R_t|\big]$. This implies that the length of the intervals we sum up for $v_s^\opt$ ranges from $j_s/\gamma$ to $j_s/((1-\beta)\gamma)$, therefore the factor for each $v_s^\opt$ is \[ \frac{1}{\gamma} \sigmaum_{i=j_s/\gamma}^{j_s/(1-\beta)\gamma } \frac{1}{i} \leq \frac{1}{\gamma} \log\left( \frac{1}{1-\beta} \right)\] We want to balance the terms $1/(\beta \gamma(1-\gamma))$ and $1/\gamma\log(1/(1-\beta))$ which gives that \[ \gamma= 1- \frac{1}{\beta \log\left( \frac{1}{1-\beta}\right)}.\] Since we balanced the opening cost and value terms, by substituting the expression for $\gamma$ we get that the approximation factor is \[ \frac{1}{\beta \gamma(1-\gamma)} = \frac{\beta \log^2\left( \frac{1}{1-\beta}\right)}{\beta\log\left( \frac{1}{1-\beta}\right) -1}. \] Numerically minimizing that ratio for $\beta$ and ensuring that $0<\beta,\gamma<1$ we get that the minimum is $4.428$ obtained at $\beta \approx 0.91$ and $\gamma\approx 0.55$. 
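As a quick numerical sanity check (ours, not part of the original argument), the following short Python sketch reproduces this minimization by a grid search over $\beta$, using the balancing choice of $\gamma$ derived above.
\begin{verbatim}
import math

def ratio(beta):
    # approximation factor 1/(beta*gamma*(1-gamma)) with the balancing choice
    # gamma = 1 - 1/(beta*log(1/(1-beta))); infeasible betas map to infinity
    L = math.log(1.0 / (1.0 - beta))
    if beta * L <= 1.0:
        return float("inf")
    return beta * L * L / (beta * L - 1.0)

best_beta = min((b / 10000.0 for b in range(1, 10000)), key=ratio)
best_gamma = 1.0 - 1.0 / (best_beta * math.log(1.0 / (1.0 - best_beta)))
print(best_beta, best_gamma, ratio(best_beta))   # approx. 0.91, 0.55, 4.428
\end{verbatim}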
\varepsilonnd{proof} \sigmaubsection{Proofs from Section~\ref{subsec:equal}}\label{apn:equal} \mainThmVarEqual* \begin{proof}[Continued proof of Theorem~\ref{thm:main_vs_pa_equal}] We now proceed to give the bound on the weights of the nodes of $\mathcal{T}_\alg$. Consider any node $u$. We have that the weights at this node are equal to \[\sigma_u = \frac{c_{b_u} |R_u| + \sigmaum_{s\in A_t}v^s_{b_{u}}}{|A_t|} \leq \frac{c_b|R_u| + \sigmaum_{s\in A}v^s_{b}}{|A|}\] where the last inequality holds for all $A\sigmaubseteq R_u$ and any $b\in \mathcal{B}$. Let $c^*_u$ the opening cost such that $\gamma|R_u|$ of the scenarios in $R_u$ have opening cost less than $c^*_u$, and by $R_{\text{low}} = \{s\in R_u: c_s^\opt \leq c^*_u\}$ the set of these scenarios. Similarly denote by $v^*_u$ the value of scenarios in $R_{\text{low}}$ such that $\beta |R_{\text{low}}|$ of the scenarios have value less than $v^*_u$ and by $L = \{s\in R_{\text{low}}^p: v_s^\opt\leq v^*_u\}$ these scenarios. This split is shown in Figure~\ref{fig:t_v_split}. Note that, $c^*_u$ corresponds to the weights of node $u$ in $\mathcal{T}^{(1-\gamma)}_{\opt_o}$, while the weights of node $u$ at $\mathcal{T}^{(1-\gamma)}_{\opt_v}$ are at least $v^*_u$. Let $B_L$ be the set of boxes that the optimal solution uses to cover the scenarios in $L$. Let $L_b \sigmaubseteq L \sigmaubseteq R_u$ be the subset of scenarios in $L$ that choose the value at box $b$ in $\opt$. Using inequality~\varepsilonqref{eq:sigma_p} with $b\in B_L$ and $A = L_b$, we obtain $\sigmaigma_u |L_b| \leq c_b |R_u| + \sigmaum_{s \in L_b} v_s^\opt $, and by summing up the inequalities for all $b\in B_L$ we get \begin{equation}\label{eq:sigma_UB} \sigmaigma_u \leq \frac{|R_u| \sigmaum_{b\in B_L} c_b + \sigmaum_{s\in L} v_s^\opt}{|L|} \leq \frac{|R_u| c^* + \sigmaum_{s\in L} v_s^\opt}{|L|} \leq \frac{c^*_u}{\beta \cdot \gamma} + v^*_u \varepsilonnd{equation} where for the second inequality we used that the cost for covering the scenarios in $L$ is at most $c^*_u$ by construction, and in the last inequality that $|L| = |R_t|/(\beta \cdot \gamma)$. We consider each term above separately, to show that the point $p$ is within the histograms. \varepsilonnd{proof} \genHist* \begin{proof}[Proof of Lemma~\ref{lem:general_histogram}] We denote by $\mathcal{T}_u$ the subtree rooted at $u$, by $W(\mathcal{T}) = \{ w: w \in \bm w_v \text{ for } v \in \mathcal{T} \}$ the (multi)set of weights in the tree $\mathcal{T}$. Denote, by $q^\rho( \mathcal{T} )$ be the top $\rho$ percentile of all the weights in $\mathcal{T}$. Finally, we define $Q(\rho | \mathcal{T})$ for any tree $\mathcal{T}$ as follows: \begin{itemize} \item We create a histogram $H(x)$ of the weights in $W(\mathcal{T})$ in increasing order. \item We calculate the area enclosed within $(1-\rho) |W(\mathcal{T})|$ until $|W(\mathcal{T})|$: $$Q\left( \rho | \mathcal{T} \right) = \int_{(1-\rho)|W(\mathcal{T})|}^{|W(\mathcal{T})|} H(x) dx$$ This is approximately equal to the sum of all the values greater than $q^\rho( \mathcal{T} )$ with values exactly $q^\rho( \mathcal{T} )$ taken fractionally so that exactly $\rho$ fraction of values are selected. \varepsilonnd{itemize} We show by induction that for every node $u$, it holds that $\rho \cdot \text{cost}(\mathcal{T}^{(\rho)}_u) \le Q\left( \rho | \mathcal{T} \right)$ \begin{itemize} \item For the base case, for all leaves $u$, the subtree $\mathcal{T}_u$ only has one node and the lemma holds as $\rho q^\rho(\mathcal{T}_u) \le Q\left( \rho | \mathcal{T}_u \right)$. 
\item Now, let $r$ be any node of the tree, and denote by $\text{child}(r)$ the set of the children nodes of $r$. \begin{align*} \rho \cdot \text{cost}(\mathcal{T}^{(\rho)}_r) & = \rho \cdot q^\rho(\mathcal{T}_r) |\bm w_r| + \rho \cdot \sigmaum_{v \in \text{child}(r)} \text{cost}(\mathcal{T}^{(\rho)}_v) & \text{Definition of cost($\mathcal{T}^{(\rho)}_r$)} \\ & \leq \rho \cdot q^\rho(\mathcal{T}_r) |\bm w_r| + \rho \cdot \sigmaum_{v\in \text{child}(r)} Q(\rho| T_v) & \text{From induction hypothesis}\\ & \leq \rho \cdot q^\rho(\mathcal{T}_r) |\bm w_r| + Q\left(\rho \frac {|W(\mathcal{T}_r)| - |\bm w_r|} {|W(\mathcal{T}_r)|}\, \Biggr| \, T_r\right)& \text{Since }\mathcal{T}_v \sigmaubseteq T_r\\ & \leq Q\left( \rho | T_r \right)& \varepsilonnd{align*} \varepsilonnd{itemize} The second-to-last inequality follows since $Q$ is defined as the area of the largest weights of the histogram. Including more weights only increases and keeping the length of the integration range the same (equal to $\rho (|W(\mathcal{T}_r)| - |\bm w_r|)$) can only increase the value $Q$. The last inequality follows by noting that if $H(x)$ is the histogram corresponding to the values of $\mathcal{T}_r$, then \begin{align*} Q\left( \rho | T_r \right) - Q\left( \rho \frac {|W(\mathcal{T}_r)| - |\bm w_r|} {|W(\mathcal{T}_r)|}\, \Biggr| \, T_r \right) &= \int_{(1-\rho)|W(\mathcal{T}_r)|}^{|W(\mathcal{T}_r)|} H(x) dx - \int_{(1-\rho)|W(\mathcal{T}_r)| + \rho |\bm w _r|}^{|W(\mathcal{T}_r)|} H(x)dx \\ &=\int_{(1-\rho)|W(\mathcal{T}_r)|}^{(1-\rho)|W(\mathcal{T}_r)| + \rho |\bm w _r|} H(x) dx \ge \int_{(1-\rho)|W(\mathcal{T}_r)|}^{(1-\rho)|W(\mathcal{T}_r)| + \rho |\bm w _r|} q^\rho(\mathcal{T}_r) dx \\ &= \rho q^\rho(\mathcal{T}_r) |\bm w _r| \varepsilonnd{align*} where the inequality follows since $H(x) \ge q^\rho(\mathcal{T}_r)$ for $x \ge (1-\rho)|W(\mathcal{T}_r)|$ by the definition of $q^\rho(\mathcal{T}_r)$ as the top-$r$ quantile of the weights in $\mathcal{T}_r$. \begin{figure}[H] \centering \input{quantiles.tex} \caption{Picture depicting the proof above.} \label{fig:histogram_apn} \varepsilonnd{figure} \varepsilonnd{proof} \sigmaubsection{Proofs from Section~\ref{sec:learning}}\label{apn:learning} \closeness* \begin{proof}[Proof of Lemma~\ref{lem:closeness}] We first argue that we can accurately estimate the cost for any vector of thresholds $\bm \tau$ when the order of visiting boxes is fixed. Consider any fixed permutation $\pi= \pi_1, \pi_2, \ldots , \pi_n$ be any permutation of the boxes, we relabel the boxes wlog so that $\pi_i$ is box $i$. Denote by $\hat{V}_i = \min_{j\leq i} v_j$, and observe that $\hat{V}_i$ is a random variable that depends on the distribution $\mathcal{D}$. Then we can write the expected cost of the algorithm as the expected sum of the opening cost and the chosen value: $\E{\mathcal{D}}{\alg} = \E{\mathcal{D}}{\alg_o} + \E{\mathcal{D}}{\alg_v}$. 
We have that: \begin{align*} \E{\mathcal{D}}{\alg_o} =\sigmaum_{i=1}^n \Pr{\mathcal{D}}{\text{reach }i} = \sigmaum_{i=1}^n \Pr{\mathcal{D}}{\bigwedge_{j=1}^{i-1} (\hat{V}_j > \tau_{j+1})} \varepsilonnd{align*} Moreover, we denote by $\overline{V}^i_{\bm \tau} = \bigwedge_{j=1}^{i-1} \left( \hat{V}_j > \tau_{j+1} \right)$ and we have \begin{align*} \E{\mathcal{D}}{\alg_v - \hat{V}_n} & =\sigmaum_{i=1}^n \E{\mathcal{D}}{ (\hat{V}_i - \hat{V}_n) \cdot \ind{\text{stop at }i}}\\ & = \sigmaum_{i=1}^{n-1} \E{\mathcal{D}}{(\hat{V}_i - \hat{V}_n) \cdot \ind{\overline{V}^i_{\bm \tau} \wedge \left( \hat{V}_i \leq \tau_{i+1}\right) }} \\ & = \sigmaum_{i=1}^{n-1} \mathbb{E}_{\mathcal{D}} \Bigg[ \tau_{i+1} \Pr{r \sigmaim U[0,\tau_{i+1}]}{r < \hat{V}_i - \hat{V}_n} \cdot \ind{ \overline{V}^i_{\bm \tau} \wedge \left( \hat{V}_i \leq \tau_{i+1}\right) }\Bigg]\\ &= \sigmaum_{i=1}^{n-1} \tau_{i+1} \textbf{Pr}_{\mathcal{D}, r \sigmaim U[0,\tau_{i+1}]}\Bigg[ {\overline{V}^i_{\bm \tau} \wedge \left( r + \hat{V}_n \le \hat{V}_i \leq \tau_{i+1}\right) }\Bigg] \varepsilonnd{align*} In order to show our result, we use from~\cite{BlumEhreHausWarm1989} that for a class with VC dimension $d<\infty$ that we can learn it with error at most $\varepsilon$ with probability $1-\delta$ using $m=\text{poly}( 1/\varepsilon, d, \log\left( 1/\delta\right))$ samples. Consider the class $\mathcal{F}_{\bm \tau}(\hat{V}, r) = {\bigwedge_{j=1}^{i-1} (\hat{V}_j > \tau_{j+1})} $. This defines an axis parallel rectangle in $\mathbb{R}^i$, therefore its VC-dimension is $2i$. Using the observation above we have that using $m=\text{poly}( 1/\varepsilon, n, \log\left( 1/\delta\right))$ samples, , with probability at least $1-\delta$, it holds \[ \bigg|\Pr{\mathcal{D}}{\mathcal{F}_{\bm \tau}(\hat{V}, r)} - \Pr{\hat{\mathcal{D}}}{\mathcal{F}_{\bm \tau}(\hat{V}, r)} \bigg| \leq \varepsilon \] for all $\bm \tau \in\mathbb{R}^n$. Similarly, the class $\mathcal{C}_{\bm \tau}(\hat{V}, r)=\bigwedge_{j=1}^{i-1} \left( \hat{V}_j > \tau_{j+1} \right) \wedge \left( r + \hat{V}_n \le \hat{V}_i \leq \tau_{i+1}\right)$ has VC-dimension $O(n)$ since it is an intersection of at most $n$ (sparse) halfspaces. Therefore, the same argument as before applies and for $m=\text{poly}( 1/\varepsilon, n, \log\left( 1/\delta\right))$ samples, we get \begin{align*} \bigg| \Pr{\mathcal{D}, r \sigmaim U[0,\tau_{i+1}]}{ \mathcal{C}_{\bm \tau}(\hat{V},r)} - \Pr{\hat{\mathcal{D}}, r \sigmaim U[0,\tau_{i+1}]}{ \mathcal{C}_{\bm \tau}(\hat{V},r)} \bigg| \leq \varepsilon \varepsilonnd{align*} for all $\bm \tau \in\mathbb{R}^n$, with probability at least $1-\delta$. Putting it all together, the error can still be unbounded if the thresholds $\tau$ are too large. However, since we assume that $\tau_i \leq n/\varepsilon$ for all $i\in [n]$, $\text{poly}(n, 1/\varepsilon, \log(1/\delta))$ samples suffice to get $\varepsilon$ error overall, by setting $\varepsilon \leftarrow \frac {\varepsilon^2}{n}$. While we obtain the result for a fixed permutation, we can directly obtain the result for all $n!$ permutations through a union bound. Setting $\delta \leftarrow \frac \delta {n!}$ only introduces an additional factor of $\log(n!) = n \log n$ in the overall sample complexity. \varepsilonnd{proof} \varepsilonnd{document}
\begin{document} \title{Oriented Riordan graphs and their fractal property\thanks{This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Education of Korea (NRF-2019R1I1A1A01044161).}} \begin{abstract} In this paper, we use the theory of Riordan matrices to introduce the notion of an oriented Riordan graph. The oriented Riordan graphs are a far-reaching generalization of the well-known and well-studied Toeplitz oriented graphs and tournaments. The main focus of this paper is the study of structural properties of oriented Riordan graphs, which include a fundamental decomposition theorem and a fractal property. Finally, we introduce a generalization of the oriented Riordan graph, called a $p$-Riordan graph. \noindent {\bf Key Words:} oriented Riordan graph, graph decomposition, fractal, $p$-Riordan graph \\[3mm] {\bf 2010 Mathematics Subject Classification:} 05C20, 05A15, 28A80 \end{abstract} \section{Introduction} An {\it oriented graph} is a directed graph having no symmetric pair of directed edges. Let $G^\sigma$ be a simple graph with an orientation $\sigma$, which assigns to each edge a direction so that $G^\sigma$ becomes a directed graph \cite{SW}. With respect to a labeling, the {\it skew-adjacency matrix} $\mathcal{S}(G^\sigma)$ is the $(-1,0,1)$ real skew-symmetric matrix $[s_{ij}]_{1\le i,j\le n}$ where $s_{ij} = 1$ and $s_{ji}=-1$ if $i\rightarrow j$ is an arc of $G^\sigma$, and otherwise $s_{ij}=s_{ji}= 0.$ A complete oriented graph is called a {\it tournament}. An oriented graph $G^\sigma_n$ with $n$ vertices is {\em Riordan} if there exists a labeling $1,2,\ldots,n$ of $G^\sigma_n$ such that the lower triangular part of the skew-adjacency matrix $\mathcal{S}(G^\sigma_n)$ is given by the leading principal matrix of order $n-1$ of some Riordan array $(g,f)=[\ell_{ij}]_{i,j\ge0}$ over the finite field ${\mathbb Z}_3$ defined as \begin{eqnarray}\label{e:a sam} \ell_{ij}\equiv[z^i]gf^j\;({\rm mod}\; 3)\;{\rm and}\;\ell_{i,j}\in\{-1,0,1\} \end{eqnarray} where $g=\sum_{n\ge0}g_nz^n$ and $f=\sum_{n\ge1}f_nz^n$ are formal power series over the integers ${\mathbb Z}$. By using Riordan language, the skew-adjacency matrix $\mathcal{S}(G^\sigma_n)$ can be written as \begin{eqnarray*} \mathcal{S}(G^\sigma)\equiv (zg,f)_n-(zg,f)_n^T\;({\rm mod}\;3) \end{eqnarray*} where $2\equiv -1\;({\rm mod}\;3)$. We denote such a graph by $G^\sigma_n(g,f)$, or simply by $G^\sigma_n$ when the Riordan array $(g,f)$ is understood from the context, or it is not important. For Riordan graphs, see~\cite{CJKM1,CJKM2}. Note that every Riordan array $(g,f)$ over ${\mathbb Z}$ defines the oriented Riordan graph $G^\sigma(g,f)$ with respect to the labeling that matches the column indices of the Riordan array $(g,f)$. For instance, consider the Pascal array given by $(1/(1-z),z/(1-z))$. The corresponding skew-adjacency matrix of order 7 and its oriented Riordan graph $G_7^\sigma=G^\sigma_7(1/(1-z),z/(1-z))$ are illustrated in Figure \ref{PG_7}. \begin{figure} \caption{$\mathcal{S}(G_7^\sigma)$ and the oriented Riordan graph $G^\sigma_7(1/(1-z),z/(1-z))$} \label{PG_7} \end{figure} Throughout this paper, we write $a\equiv b$ for $a\equiv b\;({\rm mod\;3})$. \begin{proposition}\label{num-graphs} The number of oriented Riordan graphs on $n$ vertices is \begin{align*} {3^{2(n-1)}+3\over 4}. \end{align*} \end{proposition} \begin{proof} Let $G_n^\sigma=G_n^\sigma(g,f)$ be an oriented Riordan graph on $n\ge2$ vertices and let $i$ be the smallest index such that $g_i=[z^i]g\not\equiv 0$. \begin{itemize} \item If $i\ge n-1$ then $G_n^\sigma$ is the null graph $N_n$.
\item If $0\le i\le n-2$ then we may assume that $g=\sum_{k=i}^{n-2}g_kz^k$ and $f=\sum_{k=1}^{n-2-i}f_kz^k$. \end{itemize} Since $g_i\in\{-1,1\}$ and $g_{i+1},\ldots,g_{n-2},f_1,\ldots, f_{n-i-2}\in\{-1,0,1\}$, it follows that the number of possibilities to create an $n\times n$ skew-adjacency matrix $\mathcal{S}(G^\sigma_n)$ is \begin{align*} 1+2\sum_{i=0}^{n-2}3^{2(n-i-2)}={3^{2(n-1)}+3\over 4} \end{align*} where the $1$ corresponds to the null graph. \end{proof} \begin{remark}{\rm It is known \cite{oeis} that the numbers $1,3,21,183,1641,\ldots,(3^{2(n-1)}+3)/4,\ldots$ count closed walks of length $2n\;(n\ge1)$ along the edges of a cube based at a vertex. The numbers are also equal to the numbers of words of length $2n\;(n\ge1)$ over the alphabet $\{0,1,2\}$ with an even number (possibly zero) of each letter. See OEIS sequence A054879.} \end{remark} \begin{figure} \caption{Vertex labeling on nonisomorphic oriented graphs of order up to 4} \label{Riordan-graphs-up-to-4} \end{figure} From Figure \ref{Riordan-graphs-up-to-4}, we can see that the following graph is the only nonisomorphic oriented graph of order up to 4 that is not Riordan. \begin{center} \epsfig{file=C4.eps,scale=0.15} \end{center} This, together with the following proposition, shows that not all nonisomorphic oriented graphs on $n$ vertices are Riordan for $n\geq 4$. \begin{proposition}\label{non-oriented-Riordan-graph} Let $H_n\cong K_{n-1}\cup K_1$ be a graph obtained from a complete graph $K_{n-1}$ by adding an isolated vertex. Then any orientation of $H_{n}$ for $n\ge5$ is not an oriented Riordan graph. \end{proposition} \begin{proof} Let $H_{n}^\sigma$ be any oriented graph of $H_n$ with an orientation $\sigma$. Suppose that there exist $g$ and $f$ such that a labelled copy of $H_{n}^\sigma$ is the oriented Riordan graph $G_n^\sigma(g,f)$. Let the isolated vertex be labelled by 1. Since neither $1\rightarrow i$ nor $i\rightarrow 1$ is an arc of $H_{n}^\sigma$ for $i=2,\ldots,n$, we have $g=0$ so that $G_n^\sigma(g,f)$ is the null graph $N_n$. This is a contradiction. Now let $i\neq1$ be the label of the isolated vertex and $\mathcal{S}(H_{n}^\sigma)=[s_{i,j}]_{1\leq i,j\leq n}$. Then we obtain \begin{align*} (s_{2,1},\ldots,s_{n,n-1})=\left\{ \begin{array}{ll} (a_1,\ldots,a_{i-2},0,0,a_{i+1},\ldots,a_{n-1}) & \text{if $i\in\{2,\ldots,n-1\};$} \\ (a_1,\ldots,a_{n-2},0) & \text{if $i=n$} \end{array} \right. \end{align*} where $a_{i}\in\{-1,1\}$. This implies that \begin{itemize} \item $[z^{i-1}]gf^{i-1}\equiv0$ and $[z^{i}]gf^{i}\not\equiv0$ if $i\in\{2,\ldots,n-2\}$; \item $[z^{n-4}]gf^{n-4}\not\equiv0$ and $[z^{n-3}]gf^{n-3}\equiv0$ if $i=n-1$; \item $[z^{n-3}]gf^{n-3}\not\equiv0$ and $[z^{n-2}]gf^{n-2}\equiv0$ if $i=n$. \end{itemize} This is also a contradiction. Hence the proof follows. \end{proof} \section{Riordan arrays} Let $\kappa[[z]]$ be the ring of formal power series in the variable $z$ over an integral domain $\kappa$. If there exists a pair of generating functions $(g,f)\in \kappa[[z]]\times \kappa[[z]]$ with $f(0)=0$ such that for $j\ge 0$, \begin{eqnarray*} gf^j=\sum_{i\ge0}\ell_{i,j}z^i, \end{eqnarray*} then the matrix $L=[\ell_{ij}]_{i,j\ge0}$ is called a {\it Riordan matrix} (or a {\it Riordan array}) over $\kappa$ generated by $g$ and $f$. Usually, we write $L=(g,f)$. Since $f(0)=0$, every Riordan matrix $(g,f)$ is an infinite lower triangular matrix. If a Riordan matrix is invertible, it is called {\it proper}. Note that $(g,f)$ is invertible if and only if $g(0)\ne0$, $f(0)=0$ and $f^\prime(0)\ne0$.
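For readers who wish to experiment, the following is a small Python sketch (ours, not part of the paper; all names are hypothetical) that builds the leading principal matrix $(g,f)_n$ directly from truncated coefficient lists via $\ell_{i,j}=[z^i]gf^j$; reducing the entries modulo 3 gives the ${\mathbb Z}_3$ setting used for oriented Riordan graphs.
\begin{verbatim}
def series_mul(a, b, n):
    """Truncated product of two power series given as coefficient lists."""
    c = [0] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[:n - i]):
            c[i + j] += ai * bj
    return c

def riordan_matrix(g, f, n):
    """Leading principal matrix (g, f)_n: column j holds the coefficients
    of g*f^j, so the (i, j) entry is [z^i] g f^j."""
    L = [[0] * n for _ in range(n)]
    col = list(g[:n]) + [0] * (n - len(g[:n]))
    for j in range(n):
        for i in range(n):
            L[i][j] = col[i]
        col = series_mul(col, f, n)   # next column: multiply by f
    return L

# Pascal's triangle from the Pascal array (1/(1-z), z/(1-z)):
g = [1] * 8          # 1/(1-z) = 1 + z + z^2 + ...
f = [0] + [1] * 7    # z/(1-z) = z + z^2 + ...
for row in riordan_matrix(g, f, 8):
    print(row)
\end{verbatim}
Running the example prints the binomial coefficients of Pascal's triangle, matching the Pascal array used in the introduction.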
If we multiply $(g,f)$ by a column vector $(c_0,c_1,\ldots)^T$ with the generating function $\Phi$ over an integral domain $\kappa$ with characteristic zero, then the resulting column vector has the generating function $g\Phi(f)$. This property is known as the {\it fundamental theorem of Riordan matrices} ({\it FTRM}). Simply, we write the FTRM as $(g,f)\Phi=g\Phi(f)$. This leads to the multiplication of Riordan matrices, which can be described in terms of generating functions as \begin{eqnarray}\label{e:Rm} (g,f)*(h,\ell)=(gh(f),\ell(f)). \end{eqnarray} The set of all proper Riordan matrices under the above {\em Riordan multiplication} forms a group called the \textit{Riordan group}. The identity of the group is $(1,z)$, the usual identity matrix and $(g,f)^{-1}=({1/g(\overline{f})},\overline{f})$ where $\overline{f}$ is the {\em compositional inverse} of $f$, i.e.\ $\overline{f}(f(z))=f(\overline{f}(z))=z$. The {\em leading principal matrix of order $n$} of $(g,f)$ is denoted by $(g, f)_n$. If $\kappa={\mathbb Z}$ then the fundamental theorem gives \begin{align}\label{e:ftbrm} (g,f)\Phi\equiv g\Phi(f). \end{align} It is known \cite{MRSV} that an infinite lower triangular matrix $L=[\ell_{i,j}]_{i,j\ge0}$ is a Riordan matrix $(g,f)$ with $[z^1]f\neq0$ if and only if there is a unique sequence $(a_0,a_1,\ldots)$ with $a_0\neq0$ such that, for $i\ge j\ge0$, \begin{align*} \ell_{i,0}&=[z^i]g;\\ \ell_{i+1,j+1}&=a_0\ell_{i,j}+a_1\ell_{i,j+1}+\cdots+a_{i-j}\ell_{i,i}. \end{align*} This sequence is called the {\it $A$-sequence} of the Riordan array. Also, if $L=(g,f)$ then \begin{align}\label{e:eq12} f=zA(f),\quad{\rm or\ equivalently}\quad A=z/\bar f \end{align} where $A$ is the generating function of the $A$-sequence of $(g,f)$. In particular, if $L$ is a Riordan array $(g,f)$ over ${\mathbb Z}_3$ with $f'(0)=1$ then the sequence is called the {\it ternary $A$-sequence} $(1,a_1,a_2,\ldots)$ where $a_k\in\{-1,0,1\}$. \section{Structural properties of oriented Riordan graphs} A {\em fractal} is an object exhibiting similar patterns at increasingly small scales. Thus, fractals use the idea of a detailed pattern that repeats itself. In this section, we show that every oriented Riordan graph $G_n^\sigma(g,f)$ with $f'(0)=1$ has fractal properties by using the notion of the $A$-sequence of a Riordan matrix. The set of labelings $1,2,\ldots,n$ of the graph $G_n^\sigma(g,f)$ is`denoted as $[n]$. \begin{definition}{\rm Let $G^\sigma$ be an oriented Riordan graph. A pair of vertices $\{k,t\}$ in $G^\sigma$ is a {\em cognate pair} with a pair of vertices $\{i,j\}$ in $G^\sigma$ if \begin{itemize} \item $|i-j|=|k-t|$ and \item $i\rightarrow j$ is an arc of $G^\sigma$ if and only if $k\rightarrow t$ is an arc of $G^\sigma$. \end{itemize} The set of all cognate pairs of $\{i,j\}$ is denoted by cog$(i,j)$.} \end{definition} \begin{lemma}\label{p-power} Let $g,f\in\mathbb{Z}[[z]]$ with $f(0)=0$. For a prime $p$, we obtain \begin{eqnarray*} g^{p^k}(f)\equiv g(f^{p^k})\;({\rm mod}\;p) \;\textrm{for any integer $k\ge0$.} \end{eqnarray*} \end{lemma} Throughout this paper, we write $a\equiv b$ for $a\equiv b\;({\rm mod\;3})$. The following theorem gives a relationship between cognate pairs and the $A$-sequence of a Riordan array. \begin{theorem}\label{e:cognate} Let $G^\sigma_n(g,f)$ be an oriented Riordan graph of order $n$ where $f\ne z$ and $[z^1]f=1$. 
If the ternary $A$-sequence of $(g,f)$ is of the form \begin{eqnarray}\label{e:ta} A=(a_k)_{k\ge0}=(1,\underbrace{0,\ldots,0}_{\ell\ge0 \;{\rm times}},a_{\ell+1},a_{\ell+2},\ldots),\;\;a_0,a_{\ell+1}\not\equiv0 \end{eqnarray} then \begin{align*} \mbox{\em cog}(i,j)=\left\{\left\{i+m3^{s}, j+m3^{s}\right\}\ |\ i+m3^{s},\ j+m3^{s}\in[n]\right\} \end{align*} where $s\ge0$ is an integer such that $\lfloor(|i-j|-1)/3^s\rfloor\le \ell$. \end{theorem} \begin{proof} Let $\mathcal{S}(G_n^\sigma)=(r_{i,j})_{1\leq i,j\leq n}$ be the skew-adjacency matrix of $G_n^\sigma(g,f)$. Without loss of generality, we may assume that $i>j\ge1$. By Lemma \ref{p-power}, we obtain \begin{align} r_{i+3^s,j+3^s}&\equiv\left[z^{i+3^s-2}\right] gf^{j+3^s-1}=\left[z^{i+3^s-2}\right] gf^{j-1}\left(zA(f)\right)^{3^s}\nonumber\\ &\equiv\left[z^{i-2}\right]gf^{j-1}A(f^{3^{s}})=\left[z^{i-2}\right]\sum_{k\ge0}a_kgf^{j-1+k3^s}\nonumber\\ &\equiv \sum_{k=0}^{\alpha}a_kr_{i,j+k3^s}\label{e:eq6} \end{align} where $\alpha={\rm max}\{k\in\mathbb{N}_0\;|\;0\le k3^s\le i-j-1\}=\lfloor(i-j-1)/3^s\rfloor$. Since $a_0=1$ and $a_k=0$ for $1\le k\le \ell$, it follows that if $\alpha\le \ell$ then \begin{align}\label{e:eq7} r_{i+3^s,j+3^s}\equiv r_{i,j}. \end{align} Now, let $s\ge0$ be an integer with $\lfloor(i-j-1)/3^s\rfloor\le\ell$. By \eref{e:eq7}, $i$ is adjacent to $j$ with $i\rightarrow j$ in $G_n^\sigma$ if and only if $i+3^s$ is adjacent to $j+3^s$ with $i+3^s\rightarrow j+3^s$ in $G_n^\sigma$. In turn, $i+3^s$ is adjacent to $j+3^s$ with $i+3^s\rightarrow j+3^s$ in $G_n^\sigma$ if and only if $i+2\cdot3^s$ is adjacent to $j+2\cdot3^s$ with $i+2\cdot3^s\rightarrow j+2\cdot3^s$ in $G_n^\sigma$. By repeating this process, we obtain the desired result. \end{proof} The following theorem shows that if $(g,f)$ is a Riordan array over the integers where $f\ne z$ and $f'(0)=1$ then every oriented Riordan graph $G_n^\sigma(g,f)$ has a fractal property. \begin{theorem}\label{ORG-fractal} Let $G^\sigma_n(g,f)$ be an oriented Riordan graph as in Theorem \ref{e:cognate}. If $a_0=1$ in \eref{e:ta} then $G_n^\sigma$ has the following fractal properties for each $s\ge 0$ and $k\in\{0,\ldots,\ell\}$: \begin{itemize} \item[{\rm(i)}] $\left<\{1,\ldots,(k+1)3^{s}+1\}\right>\cong \left<\{\alpha(k+1)3^{s}+1,\ldots,(\alpha+1)(k+1)3^{s}+1\}\right>$ \item[{\rm(ii)}]$\left<\{1,\ldots,(k+1)3^{s}\}\right>\cong \left<\{\alpha(k+1)3^{s}+1,\ldots,(\alpha+1)(k+1)3^{s}\}\right>$ \end{itemize} where $\alpha\ge1$. \end{theorem} \begin{proof} Let $i,j\in\{1,\ldots,(k+1)3^{s}+1\}\subseteq V(G_n^\sigma)$ with $0\le k\le\ell$. Since \begin{align*} \left\lfloor{|i-j|-1\over 3^s}\right\rfloor\le\left\lfloor{(k+1)3^s-1\over 3^s}\right\rfloor\le\ell, \end{align*} it follows from Theorem \ref{e:cognate} that \begin{align}\label{e:con2} \left\{i+\alpha(k+1)3^{s}, j+\alpha(k+1)3^{s}\right\}\in\textrm{cog}(i,j). \end{align} Thus $i$ is adjacent to $j$ with $i\rightarrow j$ in $G_n^\sigma$ if and only if $i+\alpha(k+1)3^{s}$ is adjacent to $j+\alpha(k+1)3^{s}$ with $i+\alpha(k+1)3^{s}\rightarrow j+\alpha(k+1)3^{s}$ in $G_n^\sigma$. Hence we obtain (i), and (ii) follows similarly. This completes the proof.\end{proof} Note that the skew-adjacency matrices of an oriented graph with respect to different labelings are permutationally similar. \begin{example} {\rm Let us consider an oriented Pascal graph $PG_n^\sigma=G_n^\sigma(1/(1-z),z/(1-z))$.
Since by \eref{e:eq12} its $A$-sequence is given by $(1,1,0,\ldots)$, i.e.\ $\ell=0$ so that $k=0$, it follows from Theorem \ref{ORG-fractal} that \begin{align} \left<\{1,\ldots,3^{s}+1\}\right>\cong \left<\{\alpha3^{s}+1,\ldots,(\alpha+1)3^{s}+1\}\right>. \end{align} For instance, when $n=19$, $s=2$ and $\alpha=1$, we have $\left<\{1,\ldots,10\}\right>\cong \left<\{10,\ldots,19\}\right>$; i.e., if $\mathcal{S}(PG_{19}^\sigma)=(p_{n,k})_{1\le n,k\le19}$ then $(p_{n,k})_{1\le n,k\le10}=(p_{n,k})_{10\le n,k\le19}$, see Figure \ref{PG19}.} \end{example} \begin{figure} \caption{$\mathcal{S}(PG_{19}^\sigma)$} \label{PG19} \end{figure} \begin{lemma}\label{p-1-derivitive} Let $h=\sum_{n\ge0}h_nz^n\in\mathbb{Z}[[z]]$. For a prime $p$, we obtain \begin{eqnarray*} {d^{p-1}\over dz^{p-1}}h(\sqrt[p]{z})\equiv-\sum_{k\ge0}h_{(k+1)p-1}z^{k}\;({\rm mod}\;p). \end{eqnarray*} \end{lemma} \begin{proof} Taking the $(p-1)$th derivative of $h(z)$, we obtain \begin{align*} [z^m]{d^{p-1}\over dz^{p-1}}h(z)&=(m+p-1)(m+p-2)\cdots(m+1)h_{m+p-1}=(p-1)!{m+p-1\choose p-1}h_{m+p-1}. \end{align*} By Wilson's theorem and Lucas' theorem, the right-hand side of the above equation can be written as \begin{align*} (p-1)!{m+p-1\choose p-1}h_{m+p-1}\equiv_p-{m+p-1\choose p-1}h_{m+p-1}\equiv_p\left\{ \begin{array}{ll} -h_{m+p-1} & \text{if $m=kp$} \\ 0 & \text{otherwise} \end{array} \right. \end{align*} where $k\ge0$ and $a\equiv_p b$ denotes $a\equiv b\;({\rm mod}\;p)$. This implies \begin{align*} {d^{p-1}\over dz^{p-1}}h(z)\equiv_p-\sum_{k\ge0}h_{(k+1)p-1}z^{pk}, \end{align*} and substituting $\sqrt[p]{z}$ for $z$ yields the stated congruence. Hence the proof follows. \end{proof} We now consider the vertex set $V=[n]$ of an oriented Riordan graph $G_n^\sigma$ of order $n\ge 3$. Then $V$ can be partitioned into three subsets $V_1$, $V_2$ and $V_3$ where $V_i=\{j\in[n]\;|\;i\equiv j\}$ for $i=1,2,3$. There exists a permutation matrix $P$ such that \begin{align*} \mathcal{S}(G^\sigma)=P^{T}\left( \begin{array}{ccc} \mathcal{S}(\left<V_1\right>) & B_{1,2} & B_{1,3} \\ -B_{1,2}^T & \mathcal{S}(\left<V_2\right>) & B_{2,3} \\ -B_{1,3}^T & -B_{2,3}^T & \mathcal{S}(\left<V_3\right>) \end{array} \right)P \end{align*} where the $V_i$, $i=1,2,3$, are nonempty pairwise disjoint subsets of $V$ with $V_1\cup V_2\cup V_3=V$. \begin{theorem}[Oriented Riordan Graph Decomposition]\label{e:th} Let $G_n^\sigma=G_n^\sigma(g,f)$ be an oriented Riordan graph of order $n\ge 3$ with the vertex set $V=V_1\cup V_2\cup V_3$ where $V_i=\{j\in[n]\;|\;j\equiv i\}$. Then its skew-adjacency matrix $\mathcal{S}(G_n^{\sigma})$ is permutationally similar to the $3\times3$ block matrix: \begin{eqnarray}\label{e:bm} \left( \begin{array}{ccc} X_1 & B_{1,2} & B_{1,3} \\ -B_{1,2}^T & X_2 & B_{2,3} \\ -B_{1,3}^T & -B_{2,3}^T & X_3 \end{array} \right) \end{eqnarray} where $X_i=\mathcal{S}(\left<V_i\right>)$, the skew-adjacency matrix of the induced subgraph of $G_n^\sigma$ by $V_i$, $i=1,2,3$. In particular, $\left<V_i\right>$ is isomorphic to the oriented Riordan graph of order $\ell_i=\lfloor (n-i)/3\rfloor+1$ given~by \begin{align*} G^\sigma_{\ell_i}\left(-{d^{2}\over dz^{2}}\left({gf^{i-1}\over z^{i-1}}\right)(\sqrt[3]{z}),f(z)\right), \end{align*} and $B_{i,j}$ representing the edges between $V_i$ and $V_j$ can be expressed as the sum of two Riordan matrices as follows: \begin{align*} B_{i,j}\equiv\left(-z{d^{2}\over dz^{2}}\left({gf^{j-1}\over z^{i-1}}\right)(\sqrt[3]{z}),f(z)\right)_{\ell_i \times\ell_j}+\left({d^{2}\over dz^{2}}(z^{4-j}gf^{i-1})(\sqrt[3]{z}),f(z)\right)_{\ell_j\times\ell_i}^{T}.
\end{align*} \end{theorem} \begin{proof} Using the $n\times n$ permutation matrix defined as $$ P=\left[e_{1}|e_{4}|\cdots|e_{3\lceil n/3\rceil -2}|\cdots|e_{3}|e_{6} \;|\; \cdots \;|\; e_{3\lfloor n/3\rfloor }\right] ^{T} $$ where $e_{i}$ is the elementary column vector with the $i$th entry being $1$ and the others entries being $0$, it can be shown that $P\mathcal{S}(G_n^\sigma)P^{T}$ is equal to the block matrix in \eref{e:bm}. Taking into account the form of $P$, clearly $X_i$ is the skew-adjacency matrix of the induced subgraph $\left<V_i\right>$ of order $\ell_i=\lfloor (n-i)/3\rfloor+1$ in $G_n^\sigma=G_n^\sigma(g,f)$. Let $\mathcal{S}\left(G_n^\sigma\right)=[r_{i,j}]_{1\leq i,j\leq n}$. Since $f=zA(f)$ where $A(z)\in \mathbb{Z}[[z]] $ is the generating function of the $A$-sequence $(a_0,a_1,\ldots)$ for the Riordan matrix $(g,f)$, it follows from Lemma \ref{p-power} that for $i>j\geq p+1$ \begin{align}\label{e:eq} r_{i,j} &\equiv\left[ z^{i-2}\right] gf^{j-1}=\left[ z^{i-2}\right] gf^{j-4}\left(zA(f)\right)^3 \equiv \left[ z^{i-5}\right] gf^{j-4}A\left( f^{3}\right)\nonumber \\ &=\left[ z^{i-5}\right] \left( a_{0}gf^{j-4}+a_{1}gf^{j-1}+a_{2}gf^{j+2}+\cdots \right) \nonumber \\ &=\sum_{k=0}^{\lfloor(i-j-1)/3\rfloor}a_kr_{i-3,j-3+3k}. \end{align} Since $X_i=[r_{3(u-1)+i,3(v-1)+i}]_{1\le u,v\le\ell_i}$ and by Lemma \ref{p-1-derivitive} $${d^{2}\over dz^{2}}\left({gf^{i-1}\over z^{i-1}}\right)={d^{2}\over dz^{2}}\left(\sum_{k\ge0}r_{k+i+1,i}z^k\right)\equiv-\sum_{k\ge0}r_{3(k+1)+i,i}z^{3k},$$ it follows from \eref{e:eq} that \begin{align*} \left<V_i\right>\cong G^\sigma_{\ell_i}\left(-{d^{2}\over dz^{2}}\left({gf^{i-1}\over z^{i-1}}\right)(\sqrt[3]{z}),f(z)\right). \end{align*} Let \begin{align*} L_{i,j}=\left[ \begin{array}{cccc} 0 & 0 & 0 & \cdots \\ r_{3+i,j} & 0 & 0 & \cdots \\ r_{6+i,j} & r_{6+i,3+j} & 0 & \cdots \\ r_{9+i,j} & r_{9+i,3+j} & r_{9+i,6+j} & \ddots \\ \vdots & \vdots & \vdots & \ddots \end{array} \right]\;{\rm and}\; U_{i,j}=\left[ \begin{array}{ccccc} r_{j,i} & 0 & 0 & 0 &\cdots \\ r_{3+j,i} & r_{3+j,3+i} & 0 & 0 & \cdots \\ r_{6+j,i} & r_{6+j,3+i} & r_{6+j,6+i} & 0 & \cdots\\ r_{9+j,i} & r_{9+j,3+i} & r_{9+j,6+i} & r_{9+j,9+i} & \ddots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{array} \right]. \end{align*} Since by Lemma \ref{p-1-derivitive} we obtain \begin{align*} {d^{2}\over dz^{2}}\left({gf^{j-1}\over z^{i-1}}\right)={d^{2}\over dz^{2}}\left(\sum_{k\ge0}r_{k+i+1,j}z^k\right)\equiv-\sum_{k\ge0}r_{3(k+1)+i,j}z^{3k} \end{align*} and \begin{align*} {d^{2}\over dz^{2}}(z^{4-j}gf^{i-1})={d^{2}\over dz^{2}}\left(\sum_{k\ge0}r_{k+j-2,i}z^k\right)\equiv-\sum_{k\ge0}r_{3k+j,i}z^{3k}, \end{align*} we obtain \begin{align*} L_{i,j}=\left(-z{d^{2}\over dz^{2}}\left({gf^{j-1}\over z^{i-1}}\right)(\sqrt[3]{z}),f(z)\right)\;{\rm and}\;U_{i,j}=\left(-{d^{2}\over dz^{2}}(z^{4-j}gf^{i-1})(\sqrt[3]{z}),f(z)\right). \end{align*} One can see that $B_{i,j}=(L_{i,j}-U_{i,j}^T)_{\ell_i\times\ell_j}$ where $(L_{i,j}-U_{i,j}^T)_{\ell_i\times\ell_j}$ is the $\ell_i\times\ell_j$ leading principal matrix of $L_{i,j}-U_{i,j}$. Hence the proof follows. \end{proof} \begin{example} {\rm Let us consider an oriented Pascal graph $PG_{13}^\sigma=G_{13}^\sigma(1/(1-z),z/(1-z))$. 
Then we have {\small\begin{align}\label{PG13} \mathcal{S}(PG_{13}^\sigma)=\left(\begin{array}{ccccccccccccc} 0 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\ 1 & 0 & -1 & 1 & 0 & -1 & 1 & 0 & -1 & 1 & 0 & -1 & 1 \\ 1 & 1 & 0 & -1 & 0 & 0 & -1 & 0 & 0 & -1 & 0 & 0 & -1 \\ 1 & -1 & 1 & 0 & -1 & -1 & -1 & 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 & -1 & 1 & 0 & 1 & -1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 1 & 0 & -1 & 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & -1 & 1 & 1 & -1 & 1 & 0 & -1 & -1 & -1 & 0 & 0 & 0 \\ 1 & 0 & 0 & -1 & 0 & 0 & 1 & 0 & -1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & -1 & -1 & 0 & 1 & 1 & 0 & -1 & 0 & 0 & 0 \\ 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & 0 & -1 & -1 & -1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & -1 & 1 \\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & -1 \\ 1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & 1 & 0 \end{array}\right). \end{align}}For a permutation matrix $P=\left[e_{1}|e_{4}|e_{7}|e_{10}|e_{13}|e_{2}|e_{5}|e_{8}|e_{11}|e_{3}|e_{6}|e_{9}|e_{12}\right] ^{T}$, we obtain {\small\begin{align}\label{PPG13PT} P\mathcal{S}(PG_{13}^\sigma)P^T=\left(\begin{array}{ccccc|cccc|cccc} 0 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\ 1 & 0 & -1 & 1 & 0 & -1 & -1 & 1 & 0 & 1 & -1 & 1 & 0 \\ 1 & 1 & 0 & -1 & 0 & -1 & -1 & -1 & 0 & 1 & 1 & -1 & 0 \\ 1 & -1 & 1 & 0 & -1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 & -1 \\ 1 & 0 & 0 & 1 & 0 & -1 & 0 & 0 & -1 & 1 & 0 & 0 & 1 \\\hline 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & -1 & -1 & -1 & -1 \\ 1 & 1 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 \\ 1 & -1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\\hline 1 & -1 & -1 & -1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & -1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & -1 & 1 & -1 & 0 & 1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & -1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{array}\right). \end{align}}Since ${1\over (1-z)^3}\equiv {1\over 1-z^3}$ and \begin{align*} {d^{2}\over dz^{2}}\left({gf^{i-1}\over z^{i-1}}\right)={d^{2}\over dz^{2}}\left({1\over (1-z)^i}\right)={i(i+1)\over (1-z)^{i+2}}, \end{align*} it follows from Theorem \ref{e:th} that we may check \begin{align*} \left<V_1\right>\cong G_{5}^\sigma\left({1\over 1-z},{z\over 1-z}\right)= PG_5^\sigma\;{\rm and}\;\left<V_2\right>\cong\left<V_3\right>\cong G_{4}^\sigma\left(0,{z\over 1-z}\right)\cong N_4 \end{align*} where $V_1=\{1,4,7,10,13\}$, $V_2=\{2,5,8,11\}$ and $V_3=\{3,6,9,12\}$. Since \begin{align*} {d^{2}\over dz^{2}}\left({gf^{2-1}\over z^{1-1}}\right)&= {d^{2}\over dz^{2}}\left({z\over (1-z)^2}\right)={2(2+z)\over (1-z)^4}\equiv {1\over (1-z)^3}\equiv {1\over 1-z^3},\\ {d^{2}\over dz^{2}}(z^{4-2}gf^{1-1})&={d^{2}\over dz^{2}}{z^2\over 1-z}={2\over (1-z)^3}\equiv-{1\over 1-z^3}, \end{align*} it follows from Theorem \ref{e:th} that we may check also \begin{align*} B_{1,2}&\equiv\left(-{z\over 1-z},{z\over 1-z}\right)_{5,4}+\left(-{1\over 1-z},{z\over 1-z}\right)_{4,5}^T\\ &=\left(\begin{array}{cccc} 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ -1 & -1 & 0 & 0 \\ -1 & -2 & -1 & 0 \\ -1 & -3 & -3 & -1 \end{array}\right) + \left(\begin{array}{ccccc} -1 & 0 & 0 & 0 & 0 \\ -1 & -1 & 0 & 0 & 0 \\ -1 & -2 & -1 & 0 & 0 \\ -1 & -3 & -3 & -1 & 0 \end{array}\right)^T\\ &\equiv\left( \begin{array}{cccc} -1 & -1 & -1 & -1 \\ -1 & -1 & 1 & 0 \\ -1 & -1 & -1 & 0 \\ -1 & 1 & -1 & -1 \\ -1 & 0 & 0 & -1 \end{array} \allowbreak \right). 
\end{align*} Similarly, we may check that Theorem \ref{e:th} holds for $B_{1,3}$ and $B_{2,3}$.} \end{example} \begin{theorem}\label{e:th13} Let $G_n^\sigma=G_n^\sigma(g,f)$ be an oriented Riordan graph of order $n\ge 3$ and $V_i=\{j\in[n]\;|\;j\equiv i\}$, $i=1,2,3$. Then \begin{itemize} \item[{\rm(i)}] For $n=3k$ ($k\ge1$), $\left\langle V_i\right\rangle \cong \left\langle V_j\right\rangle$ if and only if $[z^{3m+i-2}]gf^{i-1}\equiv [z^{3m+j-2}]gf^{j-1}$ for all $m\ge1$. \item[{\rm(ii)}] The induced subgraph $\left\langle V_{i}\right\rangle$ is a null graph if and only if $[z^{3m+i-2}]gf^{i-1}\equiv 0$ for all $m\ge1$. \item[{\rm(iii)}] $G_{n}^\sigma$ is a 3-partite graph with parts $V_1,V_2,V_{3}$ if and only if $[z^{3m+i-2}]gf^{i-1}\equiv0$ for all $m\ge1$ and $i=1,2,3$. \item[{\rm(iv)}] For $j>i$, there are no arcs between a vertex $u\in V_i$ and a vertex $v\in V_j$ if and only if $[z^{3m+i-2}]gf^{j-1}\equiv[z^{3(m-1)+j-2}]gf^{i-1}\equiv 0$ for all $m\ge1$. \end{itemize} \end{theorem} \begin{proof} (i) Let $n=3k$ with $k\ge1$. From Theorem~\ref{e:th}, $\left\langle V_i\right\rangle \cong \left\langle V_j\right\rangle$ if and only if the matrices $X_i$ and $X_j$ in the block matrix in \eref{e:bm} are given by \begin{align*} {d^{2}\over dz^{2}}\left({gf^{i-1}\over z^{i-1}}\right)\equiv {d^{2}\over dz^{2}}\left({gf^{j-1}\over z^{j-1}}\right)&\;\Leftrightarrow\;[z^{3m-1}]{gf^{i-1}\over z^{i-1}}\equiv[z^{3m-1}]{gf^{j-1}\over z^{j-1}}\\ &\;\Leftrightarrow\;[z^{3m+i-2}]{gf^{i-1}}\equiv[z^{3m+j-2}]{gf^{j-1}} \end{align*} for all $m\ge1$ which proves (i). (ii) From Theorem \ref{e:th}, the induced subgraph $\left\langle V_i\right\rangle$ is a null graph if and only if the matrix $X_i$ in \eref{e:bm} is a zero matrix, i.e.\ \begin{align}\label{ii-proof} {d^{2}\over dz^{2}}\left({gf^{i-1}\over z^{i-1}}\right)\equiv 0\;\Leftrightarrow\;[z^{3m-1}]{gf^{i-1}\over z^{i-1}}\equiv0\;\Leftrightarrow\;[z^{3m+i-2}]{gf^{i-1}}\equiv0 \end{align} for all $m\ge1$ which proves (ii). (iii) From Theorem \ref{e:th}, $G_{n}^\sigma$ is a 3-partite graph with parts $V_1,V_2,V_{3}$ if and only if the matrices $X_1$, $X_2$ and $X_3$ in \eref{e:bm} are zero matrices. Hence by \eref{ii-proof} we obtain desired result. (iv) Form Theorem \ref{e:th}, there are no arcs between a vertex $u\in V_i$ and a vertex $v\in V_j$ with $j>i$ if and only if the matrix $B_{i,j}$ in \eref{e:bm} is a zero matrices, i.e.\ \begin{align*} {d^{2}\over dz^{2}}\left({gf^{j-1}\over z^{i-1}}\right)\equiv0\;\;{\rm and}\;\;{d^{2}\over dz^{2}}(z^{4-j}gf^{i-1})\equiv0\;&\Leftrightarrow\; [z^{3m-1}]{gf^{j-1}\over z^{i-1}}\equiv0\;\textrm{and}\;[z^{3m-1}]z^{4-j}gf^{i-1}\equiv0\\ &\Leftrightarrow[z^{3m+i-2}]gf^{j-1}\equiv[z^{3(m-1)+j-2}]gf^{i-1}\equiv 0 \end{align*} for all $m\ge1$. Hence the proof follows. \end{proof} \begin{example} {\rm Let us consider an oriented Pascal graph $PG_n^\sigma=G_n^\sigma(1/(1-z),z/(1-z))$. Since for any $m\ge1$ \begin{align*} [z^{3m+2-2}]{gf^{2-1}}&=[z^{3m}]{z\over (1-z)^2}=[z^{3m}]\sum_{k\ge1}kz^k=3m\equiv0;\\ [z^{3m+3-2}]{gf^{3-1}}&=[z^{3m}]{z^2\over (1-z)^3}=[z^{3m}]\sum_{k\ge2}{k\choose 2}z^k={3m(3m-1)\over2}\equiv0, \end{align*} it follows from (ii) of Theorem \ref{e:th13} that induced subgraphs $\left<V_2\right>$ and $\left<V_3\right>$ are null. For instance, when $n=13$ see \eref{PPG13PT}}. 
\end{example} \section{Oriented Riordan graph of the Bell type} \begin{definition}\label{faithful-def} {\rm Let $G_n^\sigma=G_n^\sigma(g,f)$ be a proper oriented Riordan graph with the vertex sets $V_i=\{j\in[n]\;|\;j\equiv i\}$, $i=1,2,3$. If $\left< V_{1}\right> \cong G_{\lceil n/3\rceil}^\sigma(g,f)$ and $\left< V_{2}\right>$ and $\left< V_{3}\right>$ are null graphs then $G_n^\sigma$ is {\em i1-decomposable}.} \end{definition} An oriented Riordan graph $G_n^\sigma(g,f)$ is said to be of {\it Bell type} if $f=zg$. \begin{lemma}\label{lem1} Let $G_n^\sigma(g,zg)$ be proper. Then the induced subgraph $\left<V_3\right>$ is a null graph. \end{lemma} \begin{proof} Since $[z^{3m+1}]g(z)(zg(z))^{2}=[z^{3m-1}]g^3(z)\equiv [z^{3m-1}]g(z^3)\equiv0$, it follows from (ii) of Theorem \ref{e:th13} that the induced subgraph $\left\langle V_3\right\rangle$ is a null graph. Hence the proof follows.\end{proof} \begin{theorem}\label{i1-decomposable-Theorem} An oriented Riordan graph $G_n^{\sigma}(g,zg)$ is i1-decomposable if and only if \begin{align}\label{eq5} g'\equiv\pm g^2. \end{align} \end{theorem} \begin{proof} Let $V_i=\{j\in[n]\;|\;j\equiv i\}$ be the vertex subsets of $G_n^{\sigma}(g,zg)$ for $i=1,2,3$. From Lemma \ref{lem1}, $\left<V_3\right>$ is the null graph. By definition, $G_n^{\sigma}(g,zg)$ is i1-decomposable if and only if $\left< V_{1}\right> \cong G_{\lceil n/3\rceil}^\sigma(g,zg)$ and $\left< V_{2}\right>$ is the null graph. By the Oriented Riordan Graph Decomposition (Theorem \ref{e:th}), $\left< V_{1}\right> \cong G_{\lceil n/3\rceil}^\sigma(g,zg)$ and $\left< V_{2}\right>$ is the null graph if and only if \begin{align*} &-g''(\sqrt[3]{z})\equiv g(z)\;\;{\rm and}\;\;-(g^2)''(\sqrt[3]{z})\equiv 0\\ \Leftrightarrow\;\;&g''(z)\equiv -g(z^3)\equiv -g^3(z)\;\;{\rm and}\;\;(g')^2\equiv -g''g\equiv g^4\\ \Leftrightarrow\;\;&g'\equiv \pm g^2. \end{align*} Hence the proof follows. \end{proof} \begin{theorem}\label{i1-decomposable-A-seq} An oriented Riordan graph $G_n^{\sigma}(g,zg)$ is i1-decomposable if and only if the ternary $A$-sequence $A=(a_k)_{k\ge0}$ of $G_n^{\sigma}(g,zg)$ has one of the forms \begin{align}\label{A-seq-i1} (1,1,0,a_3,a_3,0,a_6,a_6,0,\ldots),\quad a_i\in\{-1,0,1\}, \end{align} or \begin{align}\label{A-seq-i1-1} (1,-1,0,a_3,-a_3,0,a_6,-a_6,0,\ldots),\quad a_i\in\{-1,0,1\}. \end{align} \end{theorem} \begin{proof} Let $G_n^{\sigma}(g,zg)$ be i1-decomposable. Since there is a unique generating function $A=\sum_{i\ge0}a_iz^i$ such that $g=A(zg)$, differentiating both sides we obtain \begin{align}\label{eq1} g'\equiv(g+zg')\cdot A'(zg). \end{align} By Theorem \ref{i1-decomposable-Theorem}, the equation \eref{eq1} is equivalent to \begin{align}\label{eq2} g\equiv(\pm1+ zg)\cdot A'(zg),\;\;{\rm i.e.}\;\; A(zg)\equiv(\pm1+ zg)\cdot A'(zg). \end{align} Let $f=zg$ and $A(z)=\sum_{n\ge0}a_nz^n$ with $a_0=1$. Since $[z^0]f=0$ and $[z^1]f=1$, there is a compositional inverse of $f$ so that the equation \eref{eq2} is equivalent to either \begin{align*} \sum_{n\ge0}a_nz^n\equiv(1+z)\cdot A'(z)\equiv\sum_{i\ge0}\left(a_{3i+1}+(a_{3i+1}-a_{3i+2})z-a_{3i+2}z^2\right)z^{3i} \end{align*} or \begin{align*} \sum_{n\ge0}a_nz^n\equiv(-1+ z)\cdot A'(z)\equiv\sum_{i\ge0}\left(-a_{3i+1}+(a_{3i+1}+a_{3i+2})z-a_{3i+2}z^2\right)z^{3i}, \end{align*} which implies the desired result. \end{proof} \begin{example}\label{ex} {\rm Let $PG_n^\sigma=G_n^{\sigma}({1\over 1-z},{z\over 1-z})$ and $CG_n^\sigma=G_n^{\sigma}(C,zC)$ where $C={1-\sqrt{1-4z}\over 2z}$ is the Catalan generating function.
Since the $A$-sequences of $PG_n^\sigma$ and $CG_n^\sigma$ are $(1,1,0,\ldots)$ and $(1,1,1,\ldots)$ respectively, by Theorem \ref{i1-decomposable-A-seq} the graph $PG_n^\sigma$ is i1-decomposable but $CG_n^\sigma$ is not.} \end{example} \begin{theorem}\label{th1} Let $G_n^\sigma=G_n^{\sigma}(g,zg)$ be an i1-decomposable graph with $g'\equiv g^2$. Then we have the following: \begin{itemize} \item[(i)] If $n=3^{i}+1$ for $i\ge1$ then the $n$th row of the skew-adjacency matrix $\mathcal{S}(G_n^\sigma)$ is given by \begin{align*} (1,-1,\ldots,1,-1,1,0); \end{align*} \item[(ii)] If $n=2\cdot3^{i}+1$ for $i\ge0$ then the $n$th row of the skew-adjacency matrix $\mathcal{S}(G_n^\sigma)$ is given by \begin{align*} (\underbrace{1,-1,\ldots,1,-1,1}_{3^i\;{\rm terms}},\underbrace{1,-1,\ldots,1,-1,1}_{3^i\;{\rm terms}},0). \end{align*} \end{itemize} \end{theorem} \begin{proof} (i) Let $\mathcal{S}(G_n^\sigma)=(s_{i,j})_{1\le i,j\le n}$ and $n=3^i+1$. First we show that $s_{n,k}=-s_{n,k+1}$ for $1\le k<n-1$. There exists an integer $t\not\equiv0$ such that $k=3^st$ for some nonnegative integer $s< i$. Since $[z^{m-1}]g'=m[z^m]g$ and $g'\equiv g^2$, we obtain \begin{align*} ts_{n,k}&\equiv t[z^{3^i-1}]z^{k-1}g^k\equiv -{(3^i-k)[z^{3^i-k}]g^k\over 3^s}=-{[z^{3^i-k-1}](g^k)'\over 3^s}\\ &=-{k[z^{3^i-k-1}]g^{k-1}g'\over 3^s}\equiv-t[z^{3^i-1}]z^{k}g^{k+1}\equiv-ts_{n,k+1}. \end{align*} Thus we have $s_{n,k}=-s_{n,k+1}$ for all $k=1,\ldots,n-2$. Now it is enough to show that $s_{n,1}\equiv[z^{3^i-1}]g\equiv1$. Since \begin{align*} g'(z)\equiv g^2(z)\;\Rightarrow\;g''(z)=2g(z)g'(z)\equiv-g^3(z)\equiv-g(z^3), \end{align*} by comparing the coefficients of $g''(z)$ and $g(z^3)$ we obtain \begin{align*} [z^{3j+2}]g\equiv [z^j]g\;\;(j\ge0)\;\Rightarrow\;[z^{3^i-1}]g\equiv[z^0]g=1. \end{align*} This proves (i). (ii) Let $\mathcal{S}(G_n^\sigma)=(s_{i,j})_{1\le i,j\le n}$ and $n=2\cdot3^i+1$. First we show that $s_{n,k}=-s_{n,k+1}$ for $1\le k<3^i$. There exists an integer $t\not\equiv0$ such that $k=3^st$ for some nonnegative integer $s< i$. Since $[z^{m-1}]g'=m[z^m]g$ and $g'\equiv g^2$, we obtain \begin{align*} ts_{n,k}&\equiv t[z^{2\cdot3^i-1}]z^{k-1}g^k\equiv -{(2\cdot3^i-k)[z^{2\cdot3^i-k}]g^k\over 3^s}=-{[z^{2\cdot3^i-k-1}](g^k)'\over 3^s}\\ &=-{k[z^{2\cdot3^i-k-1}]g^{k-1}g'\over 3^s}\equiv-t[z^{2\cdot3^i-1}]z^{k}g^{k+1}\equiv-ts_{n,k+1}. \end{align*} Thus we obtain \begin{align}\label{eq6} s_{n,k}=-s_{n,k+1}\;\;(1\le k<3^i). \end{align} Similarly, we can show that \begin{align}\label{eq7} s_{n,k}=-s_{n,k+1}\;\;(3^i+1\le k<2\cdot3^i). \end{align}Since \begin{align*} s_{n,3^i}&\equiv [z^{2\cdot3^i-1}]z^{3^i-1}g^{3^i}\equiv{3^i[z^{3^i}]g^{3^i}\over 3^i}={[z^{3^i-1}](g^{3^i})'\over 3^i}\\ &={3^i[z^{3^i-1}]g^{3^i-1}g'\over 3^i}\equiv[z^{2\cdot3^i-1}]z^{3^i}g^{3^i+1}\equiv s_{n,3^i+1}, \end{align*} by \eref{eq6} and \eref{eq7} the $n$th row of the skew-adjacency matrix $\mathcal{S}(G_n^\sigma)$ is given by \begin{align*} (\underbrace{x,-x,\ldots,x,-x,x}_{3^i\;{\rm terms}},\underbrace{x,-x,\ldots,x,-x,x}_{3^i\;{\rm terms}},0) \end{align*} where $x=s_{n,1}$. Now it is enough to show that $s_{n,1}\equiv[z^{2\cdot3^i-1}]g\equiv1$. Since from \eref{A-seq-i1} we have $[z^1]g=[z^0]g=1$, by comparing the coefficients of $g''(z)$ and $g(z^3)$ we obtain $[z^{2\cdot3^i-1}]g\equiv[z^1]g=1$. Hence we complete the proof. \end{proof} \begin{example} {\rm Let us consider the oriented Pascal graph $PG_n^\sigma=G_n^{\sigma}({1\over 1-z},{z\over 1-z})$.
Since $PG_n^\sigma$ is i1-decomposable by Example \ref{ex} and $g'=\left({1\over 1-z}\right)'={1\over (1-z)^2}=g^2$, it follows from (ii) of Theorem \ref{th1} that the 19th row of $\mathcal{S}(PG_{19}^\sigma)$ is given by \begin{align*} (1,-1,1,-1,1,-1,1,-1,1,1,-1,1,-1,1,-1,1,-1,1,0),\;\; \textrm{see Figure \ref{PG19}}. \end{align*} } \end{example} By an argument similar to that of Theorem \ref{th1}, we obtain the following result. \begin{theorem}\label{th2} Let $G_n^\sigma=G_n^\sigma(g,zg)$ be an i1-decomposable graph with $g'\equiv-g^2$. Then we have the following: \begin{itemize} \item[(i)] If $n=3^{i}+1$ for $i\ge1$ then the $n$th row of the skew-adjacency matrix $\mathcal{S}(G_n^\sigma)$ is given by \begin{align*} (1,1,\ldots,1,0); \end{align*} \item[(ii)] If $n=2\cdot3^{i}+1$ for $i\ge0$ then the $n$th row of the skew-adjacency matrix $\mathcal{S}(G_n^\sigma)$ is given by \begin{align*} (\underbrace{-1,-1,\ldots,-1}_{3^i\;{\rm terms}},\underbrace{1,1,\ldots,1}_{3^i\;{\rm terms}},0). \end{align*} \end{itemize} \end{theorem} \begin{proof} (i) Let $\mathcal{S}(G_n^\sigma)=(s_{i,j})_{1\le i,j\le n}$ and $n=3^i+1$. First we show that $s_{n,k}=s_{n,k+1}$ for $1\le k<n-1$. There exists an integer $t\not\equiv0$ such that $k=3^st$ for some nonnegative integer $s< i$. Since $[z^{m-1}]g'=m[z^m]g$ and $g'\equiv-g^2$, we obtain \begin{align*} ts_{n,k}&\equiv t[z^{3^i-1}]z^{k-1}g^k\equiv -{(3^i-k)[z^{3^i-k}]g^k\over 3^s}=-{[z^{3^i-k-1}](g^k)'\over 3^s}\\ &=-{k[z^{3^i-k-1}]g^{k-1}g'\over 3^s}\equiv t[z^{3^i-1}]z^{k}g^{k+1}\equiv ts_{n,k+1}. \end{align*} Thus we have $s_{n,k}=s_{n,k+1}$ for all $k=1,\ldots,n-2$. Now it is enough to show that $s_{n,1}\equiv[z^{3^i-1}]g\equiv1$. Since \begin{align*} g'(z)\equiv -g^2(z)\;\Rightarrow\;g''(z)=-2g(z)g'(z)\equiv -g^3(z)\equiv -g(z^3), \end{align*} by comparing the coefficients of $g''(z)$ and $g(z^3)$ we obtain \begin{align*} [z^{3j+2}]g\equiv [z^j]g\;\;(j\ge0) \;\Rightarrow\;[z^{3^i-1}]g\equiv[z^0]g=1. \end{align*} Hence we complete the proof. \end{proof} \section{$p$-Riordan graphs} \begin{definition}\cite{GX} {\rm Let $G^\sigma$ be a simple weighted undirected graph with an orientation $\sigma$, which assigns to each edge a direction so that $G^\sigma$ becomes a {\em weighted oriented graph}. (An unweighted oriented graph is just a weighted oriented graph in which every arc has weight $1$.) The skew-adjacency matrix associated to the weighted oriented graph $G^\sigma$ with the vertex set $[n]$ is defined as the $n\times n$ matrix $\mathcal{S}(G^\sigma)=(s_{i,j})_{1\le i,j\le n}$ whose $(i,j)$-entry satisfies: \begin{align*} s_{i,j}=\left\{ \begin{array}{lll} \omega,& \text{if there is an arc with weight $\omega$ from $i$ to $j$;} \\ -\omega, & \text{if there is an arc with weight $\omega$ from $j$ to $i$;} \\ 0, & \text{otherwise.} \end{array} \right. \end{align*} In particular, when every weight $\omega$ equals $1$ the weighted oriented graph $G^\sigma$ is an (unweighted) oriented graph.} \end{definition} \begin{definition} {\rm Let $p\ge3$ be a prime. A weighted oriented graph $G^\sigma$ with $n$ vertices is called a \emph{$p$-Riordan graph} if there exists a labeling $1,2,\ldots,n$ of $G^\sigma$ such that its skew-adjacency matrix $\mathcal{S}(G^\sigma)=[s_{ij}]_{1\le i,j\le n}$ is given by \begin{align*} \mathcal{S}(G^\sigma)\equiv (zg,f)_n-(zg,f)_n^T\;({\rm mod}\;p)\;\; {\rm and}\;\;s_{i,j}\in\{-\lfloor p/2\rfloor,\ldots,-1,0,1,\ldots,\lfloor p/2\rfloor\}.
\end{align*} We denote such graph by $G^\sigma_{n,p}(g,f)$, or simply by $G^\sigma_{n,p}$ when the Riordan array $(g,f)$ is understood from the context, or it is not important.} \end{definition} \begin{proposition} The number of weighted oriented $p$-Riordan graphs of order $n\ge1$ is \begin{align*} {p^{2(n-1)}+p\over p+1}. \end{align*} \end{proposition} \begin{definition}{\rm Let $G^\sigma_{n,p}=[s_{i,j}]_{1\le i,j\le n}$ be a weighted oriented $p$-Riordan graph. A pair of vertices $\{k,t\}$ in $G$ is a {\em weighted cognate pair} with a pair of vertices $\{i,j\}$ in $G$ if \begin{itemize} \item $|i-j|=|k-t|$ and \item an arc $i\rightarrow j$ with the weight $s_{i,j}$ if and only if an arc $k\rightarrow t$ with the weight $s_{i,j}$. \end{itemize} The set of all weighted cognate pairs of $\{i,j\}$ is denoted by wcog$(i,j)$.} \end{definition} In this section, we simply denote $a\equiv_p b$ if $a\equiv_p b$ (mod $p$). \begin{lemma}\label{p-power} Let $g,f\in\mathbb{Z}[[z]]$ with $f(0)=0$. For a prime $p$ and $k\ge0$, we obtain \begin{eqnarray*} g^{p^k}(f)\equiv_p g(f^{p^k}). \end{eqnarray*} \end{lemma} The following theorem gives a relationship between weighted cognate pair cognate pairs and the $A$-sequence of a Riordan graph. \begin{theorem}\label{e:cognate} For $\ell\ge 0$, let $A=(a_k)_{k\ge0}=(a_0,\underbrace{0,\ldots,0}_{\ell \;{\rm times}},a_{\ell+1},a_{\ell+2},\ldots)$ with $a_0,a_{\ell+1}\not\equiv_p0$ be the $p$-ary $A$-sequence for a weighted oriented $p$-Riordan graph $G_{n,p}^\sigma(g,f)$ where $f\ne z$ and $f'(0)=1$. Then \begin{align*} \mbox{\em wcog}(i,j)=\left\{\left\{i+mp^{s}, j+mp^{s}\right\}\ |\ i+mp^{s},\ j+mp^{s}\in[n]\right\} \end{align*} where $s\ge0$ is an integer with $p\lfloor(|i-j|-1)/p^s\rfloor\le \ell$. \end{theorem} \begin{theorem}\label{e:coro2} For $\ell\ge 0$, let $A=(a_k)_{k\ge0}=(1,\underbrace{0,\ldots,0}_{\ell \;{\rm times}},a_{\ell+1},a_{\ell+2},\ldots)$ with $a_{\ell+1}\not\equiv_p0$ be the $p$-ary $A$-sequence for a weighted oriented Riordan graph $G_{n,p}^\sigma=G_{n,p}^\sigma(g,f)$ where $f\ne z$ and $f'(0)=1$. For each $s\ge 0$ and $k\in\{0,\ldots,\ell\}$, $G_{n,p}^\sigma$ has the following fractal properties: \begin{itemize} \item[{\rm(i)}] $\left<\{1,\ldots,(k+1)p^{s}+1\}\right>\cong \left<\{\alpha(k+1)p^{s}+1,\ldots,(\alpha+1)(k+1)p^{s}+1\}\right>$ \item[{\rm(ii)}]$\left<\{1,\ldots,(k+1)p^{s}\}\right>\cong \left<\{\alpha(k+1)p^{s}+1,\ldots,(\alpha+1)(k+1)p^{s}\}\right>$ \end{itemize} where $\alpha\ge1$. \end{theorem} \begin{theorem} Let $G_{n,p}^\sigma=G_{n,p}^\sigma(g,f)$ be a weighted oriented $p$-Riordan graph of order $n\ge p$ and $V_i=\{j\in[n]\;|\;j\equiv_pi\}$. If $G_n^{(p)}$ is proper then \begin{itemize} \item[{\rm(i)}] There exists a permutation matrix $P$ such that the skew-adjacency matrix $\mathcal{S}(G_n^{(p)})$ satisfies \begin{eqnarray}\label{e:bm-p} \mathcal{S}(G_{n,p}^\sigma)=P^{T}\left( \begin{array}{cccc} X_1 & -B_{1,2} &\cdots & -B_{1,p} \\ B_{1,2}^T & X_2 & \ddots &\vdots \\ \vdots & \ddots &\ddots &-B_{p-1,p}\\ B_{1,p}^T &\cdots & B_{p-1,p}^T & X_p \end{array} \right)P \end{eqnarray} where $P=\left[e_{1}|e_{p+1}|\cdots|e_{p\lceil n/p\rceil -p+1}|\cdots|e_{p}|e_{2p} \;|\; \cdots \;|\; e_{p\lfloor n/p\rfloor }\right] ^{T}$ is the $n\times n$ permutation matrix and $e_{i}$ is the elementary column vector with the $i$th entry being $1$ and the others entries being $0$. \item[{\rm(ii)}] The matrix $X_t$ is the skew-adjacency matrix of the induced subgraph of $G_{n,p}^\sigma$ by $V_t$. 
In particular, the induced subgraph $\left<V_t\right>$ is isomorphic to a weighted oriented $p$-Riordan graph of order $\ell_t$ given~by \begin{align*} G_{\ell_t,p}^\sigma\left(-{d^{p-1}\over dz^{p-1}}\left({gf^{t-1}\over z^{t-1}}\right)(\sqrt[p]{z}),f(z)\right). \end{align*} \item[{\rm(iii)}] The matrix $B_{t,k}$ representing the edges between $V_t$ and $V_k$ can be expressed as the sum of two matrices as follows: \begin{align*} B_{t,k}\equiv_p\left(-z{d^{p-1}\over dz^{p-1}}\left({gf^{k-1}\over z^{t-1}}\right)(\sqrt[p]{z}),f(z)\right)_{\ell_t \times\ell_k}+\left({d^{p-1}\over dz^{p-1}}(z^{p-k+1}gf^{t-1})(\sqrt[p]{z}),f(z)\right)_{\ell_k\times\ell_t}^{T}. \end{align*} \end{itemize} where $\ell_t=\lfloor (n-t)/p\rfloor+1$. \end{theorem} \end{document}
\begin{document} \centerline{\bf Thai Journal of Mathematics} \centerline{\small www.math.science.cmu.ac.th/thaijournal \quad Online ISSN 1686-0209} \vskip1.5cm \centerline {\bf \chuto Some classical and recent results } \vskip.2cm \centerline {\bf \chuto concerning renorming theory} \vskip.8cm \centerline {\kamy Amanollah Assadi$^\dag$ and Hadi Haghshenas$^\ddag,$\footnote{{\tt Corresponding author email:[email protected] (H. Haghshenas)}\\ \\ {\kamy Copyright \copyright\, 2011 by the Mathematical Association of Thailand. All rights reserved.}}} \vskip.5cm \centerline {$^\dag$Department of Pure Mathematics, Birjand University, Birjand, Iran} \centerline {e-mail : {\tt [email protected]}} \centerline {$^\ddag$Department of Pure Mathematics, Birjand University, Birjand, Iran} \centerline {e-mail : {\tt [email protected]}} \vskip.5cm \hskip-.5cm{\small{\bf Abstract :} The problems connected with equivalent norms lie at the heart of Banach space theory. This is a short survey on some recent as well as classical results and open problems in renormings of Banach spaces. \vskip0.3cm\noindent {\bf Keywords :} Renormings, differentiable norms, Kadec-Klee property, weakly compactly generated space, uniform Eberlein compact space, super-reflexive space. \noindent{\bf 2010 Mathematics Subject Classification: }46B20, 46B03.} \hrulefill \section{Introduction.} Banach space theory is a classic topic in functional analysis. The study of the structure of Banach spaces provides a framework for many branches of mathematics like differential calculus, linear and nonlinear analysis, abstract analysis, topology, probability, harmonic analysis, etc. The geometry of Banach spaces plays an important role in Banach space theory. Since it is easier to do analysis on a Banach space which has a norm with good geometric properties than on a general space, we consider in this survey an area of Banach space theory known as \emph{renorming theory}. Renorming theory is concerned with the construction of \emph{equivalent norms} on a Banach space that have nice geometrical properties of convexity or differentiability. An excellent monograph containing the main advances on renorming theory until 1993 is \cite{14}.\\\\ We consider only Banach spaces over the reals. Given a Banach space $X$ with norm $\|.\|$, we denote by $S(X)$ the unit sphere, and by $X^*$ the dual space with (original) dual norm $\|.\|^{*}$. All undefined terms and notation are standard and can be found, for example, in \cite{3, 15, 19, 21, 23, 45, 49}. \section{Differentiable norms.} Differentiability of functions on Banach spaces is a natural extension of the notion of a directional derivative on $\mathbb{R}^{n}$.
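Concretely, for a function $f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ which is differentiable at a point $x$, the directional derivative in a direction $y$ is $$\displaystyle{\lim_{t\rightarrow 0}\frac{f(x+ty)-f(x)}{t}}=\langle\nabla f(x),y\rangle,$$ which depends linearly on $y$. The definitions below extend exactly this requirement to infinite-dimensional spaces; the difference between the G\^{a}teaux and Fr\'{e}chet notions lies in whether the limit is taken pointwise or uniformly over the directions $y$.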
A function $f: X\rightarrow \mathbb{R}$ is said to be \emph{G\^{a}teaux differentiable} at $x \in X$ if there exists a functional $g \in X^{*}$ such that $g(y)=\displaystyle{\lim_{t\rightarrow 0}\frac{f(x+ty)-f(x)}{t}},$ for all $y \in X$. In this case, $g$ is called the \emph{G\^{a}teaux derivative} of $f$. If the limit above exists uniformly in $y\in S(X)$, then $f$ is called \emph{Fr\'{e}chet differentiable} at $x$ with \emph{Fr\'{e}chet derivative} $g$. In this paper, most of our attention will be concentrated on the differentiability of the norm. Two important stronger notions of differentiability are obtained as uniform versions of Fr\'{e}chet and G\^{a}teaux differentiability. The norm $\|.\|$ on $X$ is \emph{uniformly Fr\'{e}chet differentiable} if $\displaystyle{\lim_{t\rightarrow 0}\frac{\|x+ty\|-\|x\|}{t}}$ exists uniformly for $(x,y)\in S(X)\times S(X)$. Also, it is \emph{uniformly G\^{a}teaux differentiable} if for each $y \in S(X)$, $\displaystyle{\lim_{t\rightarrow 0}\frac{\|x+ty\|-\|x\|}{t}}$ exists uniformly in $x \in S(X)$.\\ Clearly, Fr\'{e}chet differentiability implies G\^{a}teaux differentiability, but the converse is true, in general, only for finite-dimensional Banach spaces. As an example, the mapping $f: L^{1}[0,\pi]\rightarrow \mathbb{R}$ defined by $f(x)=\int_0^\pi{\rm sin}(x(t))dt$ is everywhere G\^{a}teaux differentiable, but nowhere Fr\'{e}chet differentiable \cite{37}.\\ Around 1940, \u{S}mulyan proved the following fundamental dual characterization of differentiability of norms, which is used in many basic renorming results.\\\\ \textbf{Theorem 2.1.} \cite[Ch. VIII]{23} \emph{For each $x \in S(X)$, the following are equivalent:\\ \textbf{(i)} $\|.\|$ is Fr\'{e}chet differentiable at $x$.\\ \textbf{(ii)} For all $(f_{n})_{n=1}^{\infty},(g_{n})_{n=1}^{\infty}\subseteq S(X^{*})$, if $\displaystyle{\lim_{n\rightarrow \infty}f_{n}(x)=1}$ and $\displaystyle{\lim_{n\rightarrow \infty}g_{n}(x)=1}$ then $\displaystyle{\lim_{n\rightarrow \infty}\|f_{n}-g_{n}\|^{*}=0}$.\\ \textbf{(iii)} Each $(f_{n})_{n=1}^{\infty}\subseteq S(X^{*})$ with $\displaystyle{\lim_{n\rightarrow \infty}f_{n}(x)=1}$ is convergent in $S(X^{*})$.}\\\\ As a direct application of \u{S}mulyan's theorem, we have the following corollary:\\\\ \textbf{Corollary 2.2.} \cite[Ch. VIII]{23} \emph{If the dual norm of $X^{*}$ is Fr\'{e}chet differentiable then $X$ is reflexive.} \begin{proof} A celebrated and deep theorem of James states that $X$ is reflexive if and only if each nonzero $f\in X^{*}$ attains its norm at some $x\in S(X)$. Let $f\in S(X^{*})$ and choose $(x_{n})_{n=1}^{\infty}\subseteq S(X)$ such that $f(x_{n})\rightarrow1$. By Theorem 2.1, $\displaystyle{\lim_{n\rightarrow\infty}x_{n}=x\in S(X)}$.
Therefore $f(x)=f(\displaystyle{\lim_{n\rightarrow\infty}x_{n})= \displaystyle{\lim_{n\rightarrow\infty}f(x_{n})=1=\|f\|^{*}.}}$ If now $f\in X^{*}$ is non-zero, then $\frac{f}{\|f\|^{*}}\in S(X^{*})$ and according to the reasoning above there exists $x\in S(X)$ such that $\frac{f}{\|f\|^{*}}(x)=1$.\end{proof} \textbf{Theorem 2.3.} \emph{The following assertions imply the reflexivity of $X$:\\ \textbf{(i)} The norm of $X$ is uniformly Fr\'{e}chet differentiable} \cite[page 434]{21}.\\ \emph{\textbf{(ii)} The third dual norm of $X$ is G\^{a}teaux differentiable} \cite[page 276]{23}.\\\\ \textbf{Theorem 2.4.} \cite[page 275]{23} \emph{If $X$ is separable and the second dual norm of $X$ is G\^{a}teaux differentiable then $X^{*}$ is separable.}\\\\ Since the separable and reflexive spaces contain numerous nice structural aspects, they have an important role in our investigation. In fact, there are many renorming characterizations of reflexivity and separability, such as follows:\\\\ \textbf{Theorem 2.5.} \cite{26} \emph{If $X$ is reflexive then can be renormed in such a way that both $X$ and $X^{*}$ have Fr\'{e}chet differentiable norm.}\\\\ There exist reflexive spaces which do not admit any equivalent uniformly G\^{a}teaux differentiable norm. This example can be found in Kutzarova and Troyanski \cite{38}. However, \u{S}mulyan proved the following positive result. His norm was the predual norm to the norm defined on $X^{*}$ by $|\|f\||^{2}=\|f\|^{*}{^{2}}+\sum_{i=1}^{\infty}2^{-i}f^{2}(x_{i}),$ where $(x_{i})_{i=1}^{\infty}$ is dense in $S(X)$.\\\\ \textbf{Theorem 2.6.} \cite[Ch. II]{14} \emph{Any separable space admits an equivalent uniformly G\^{a}teaux differentiable norm.}\\\\ \textbf{Theorem 2.7.} (Kadec, Restrepo) \cite{36} \emph{If a separable space $X$ admits an equivalent Fr\'{e}chet differentiable norm then $X^{*}$ is separable.}\begin{proof} Observe that the set $B=\{\|.\|^{'}: x \in X , x\neq0\}$ is norm-separable, where $\|x\|^{'}$ denotes the derivative of $\|.\|$ at $x$. The set $B$ contains all norm-attaining functionals, and is thus norm-dense in $X^{*}$ by the Bishop-Phelps theorem.\end{proof} The norm $\|.\|$ on $X$ is called \emph{2-rotund (resp. weakly 2-rotund)} if for every $(x_{n})_{n=1}^{\infty} \subseteq S(X)$ such that $\displaystyle{\lim_{m, n\rightarrow \infty} \|x_{m}+x_{n}\|}=0,$ there is an $x \in X$ such that $\displaystyle{\lim_{n\rightarrow\infty}x_{n}=x}$ in the norm (resp. weak) topology of $X$.\\\\ By using Theorem 2.1, it is proved that if a norm on $X$ is 2-rotund then its dual norm is Fr\'{e}chet differentiable. 
Also, if a norm on $X$ is weakly 2-rotund then its dual norm is G\^{a}teaux differentiable \cite{22}.\\\\ \textbf{Theorem 2.8.} \cite[page 208]{32} \emph{$X$ is reflexive if and only if it admits an equivalent weakly 2-rotund norm.}\\\\ \textbf{Theorem 2.9.} \cite[page 208]{32} \emph{A separable space $X$ is reflexive if and only if $X$ admits an equivalent 2-rotund norm.}\\\\ Note that it is not known if the separability of $X$ has to be assumed in theorem above.\\\\ The space $X$ is \emph{Hilbert generated space} if there is a Hilbert space $H$ and a bounded linear operator $T$ from $H$ into $X$ such that $T(H)$ is dense in $X$.\\\\ \textbf{Theorem 2.10.} \cite{22} \emph{$X$ is a subspace of a Hilbert generated space if and only if $X$ admits an equivalent uniformly G\^{a}teaux differentiable norm.} \section{Asplund spaces.} It is a well-known theorem that every continuous convex function on a separable space $X$ is G\^{a}teaux differentiable at the points of a $G_{\delta}$-dense subset of $X$ \cite[page 384]{21}. Let $f$ be a continuous convex function on $X$. Then the set $G$ of all points in $X$ where $f$ is Fr\'{e}chet differentiable (possibly empty) is a $G_{\delta}$ set in $X$ \cite[page 357]{21}. The space $X$ is said to be \emph{Asplund} if every continuous convex function on it is Fr\'{e}chet differentiable at each point of a dense $G_{\delta}$ subset of $X$. There exist many well-known equivalent characterizations of the Asplund spaces. For example, $X$ is Asplund if and only if $Y^{*}$ is separable whenever $Y$ is a separable subspace of $X$. Every Banach space with a Fr\'{e}chet differentiable norm is Asplund \cite{40} but, on the other hand, Haydon \cite{34} constructed Asplund spaces admitting no G\^{a}teaux differentiable norm.\\\\ \textbf{Theorem 3.1.} \cite{17, 40} \emph{For each separable space $X$ the following are equivalent:\\ \textbf{(i)} $X^{*}$ is separable.\\ \textbf{(ii)} $X$ is Asplund.\\ \textbf{(iii)} $X$ admits an equivalent Fr\'{e}chet differentiable norm.\\ \textbf{(iv)} There is no equivalent rough norm on $X$.}\\\\ Recall that the norm $\|.\|$ on $X$ is \emph{rough} if for some $\varepsilon >0$,$$\displaystyle{\limsup_{h\rightarrow 0} \frac{1}{\|h\|}(\|x+h\|+\|x-h\|-2)}\geq \varepsilon,$$for every $x \in S(X)$.\\ By using the canonical norm of $C([0,1])$ as a rough norm, we obtain that $C([0,1])$ does not admit any Fr\'{e}chet differentiable norm. \section{Kadec-Klee property.} The norm $\|.\|$ on $X$ has \emph{weak-Kadec-Klee} property provided that whenever $(x_{n})_{n=1}^{\infty}\subseteq X $ converges weakly to some $x \in X$ and $\displaystyle{\lim_{n\rightarrow \infty}\|x_{n}\|=\|x\|}$, then $\displaystyle{\lim_{n\rightarrow \infty}\|x_{n}-x\|=0}$. Also, a dual norm $\|.\|_{*}$ on $X^{*}$ has \emph{$weak^{*}$-Kadec-Klee} property if $\displaystyle{\lim_{n\rightarrow \infty}\|f_{n}-f\|_{*}=0}$, whenever $(f_{n})_{n=1}^{\infty}\subseteq X^{*}$ is weak$^{*}$-convergent to some $f \in X^{*}$ and $\displaystyle{\lim_{n\rightarrow \infty}\|f_{n}\|_{*}=\|f\|_{*}.}$\\ The weak-Kadec-Klee norms play an important role in geometric Banach space theory and its applications.\\\\ \textbf{Theorem 4.1.} \cite[page 422]{21} \emph{\textbf{(i)} Let $X$ be a separable space. 
If $X^{*}$ has the weak$^{*}$-Kadec-Klee property then $X^{*}$ is separable.\\\textbf{(ii)} If $X^{*}$ is separable then $X$ admits an equivalent norm such that $X^{*}$ has the weak$^{*}$-Kadec-Klee property.} \section{Strictly convex spaces.} One interesting and fruitful line of research, dating from the early days of Banach space theory, has been to relate analytic properties of a Banach space to various geometric conditions on that space. The simplest example of such a condition is that of strict convexity.\\ The space $X$ (or the norm $\|.\|$ on $X$) is called \emph{strictly convex} (R) if for $x, y \in S(X)$, $\|x+y\|=2$ implies $x=y$.\\\\ The following result is a consequence of Theorem 2.1 of \u{S}mulyan:\\\\ \textbf{Theorem 5.1.} \cite[Ch. VIII]{23} \emph{If a dual norm of $X^{*}$ is strictly convex (G\^{a}teaux differentiable) then its predual norm is G\^{a}teaux differentiable (strictly convex).}\\\\ The converse implications in the theorem above are true for reflexive spaces, but not in general.\\\\ Strict convexity is not preserved by equivalent norms. It is well known that $\|.\|_{\infty}$ and $\|.\|_{2}$ are equivalent norms on $BM(F)bb{R}^{n}$, $\|.\|_{2}$ is strictly convex but $\|.\|_{\infty}$ is not.\\ A most common strictly convex renorming is based on the following simple observation. Let $Y$ be a strictly convex space and $T: X \rightarrow Y$ a linear one-to-one bounded operator; then $\||x|\|=\|x\|+\|T(x)\|$, $x \in X$, is an equivalent strictly convex norm on $X$.\\\\ \textbf{Theorem 5.2.} \cite{35, 36} \emph{Any separable space $X$ admits an equivalent norm whose dual norm is strictly convex.} \begin{proof} Let $\{x_{i}\}_{i=1}^{\infty}$ be dense in $S(X)$. Define a norm $\||.|\|$ on $X^{*}$ by $|\|f\||^{2}=\|f\|^{*}{^{2}}+\sum_{i=1}^{\infty}2^{-i}f^{2}(x_{i}).$ It is not hard to show that $|\|.\||$ is a weak$^{*}$ lower semicontinuous function on $X^{*}$ equivalent with $\|.\|^{*}$. Hence $|\|.\||$ is the dual of a norm $|.|$ equivalent with $\|.\|$, and also it is strictly convex.\end{proof} \section{Locally uniformly convex spaces.} The concept of a locally uniformly convex norm was introduced by Lovaglia in \cite{42}. The space $X$ (or the norm $\|.\|$ on $X$) is said to be \emph{locally uniformly convex} (LUR) if $$\displaystyle{\lim_{n\rightarrow \infty} \bigg( 2\|x\|^{2}+ 2\|x_{n}\|^{2}-\|x+x_{n}\|^{2} \bigg)}=0 \hspace{0.5cm}\Longrightarrow \hspace{0.5cm}\displaystyle{\lim_{n\rightarrow \infty}\|x-x_{n}\|}=0,$$ for any sequence $(x_{n})_{n=1}^{\infty}$ and $x$ in $X$.\\ Lovaglia showed, as a straightforward consequence of Theorem 2.1, that the norm of a Banach space is Fr\'{e}chet differentiable if the dual norm is LUR. The converse does not hold, even up to renormings. In fact, there exists a space with a Fr\'{e}chet differentiable norm, which does not admit any equivalent norm with a strictly convex dual norm \cite{14}. However, in the class of spaces with unconditional bases, we do have equivalence up to a renorming.\\Many efforts have been dedicated in the renorming theory to obtain sufficient conditions for a Banach space to admit an equivalent LUR norm. 
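Before turning to these renorming results, a simple model case may clarify what the LUR condition requires: in a Hilbert space $H$ the parallelogram law gives, for all $x, x_{n} \in H$, $$2\|x\|^{2}+ 2\|x_{n}\|^{2}-\|x+x_{n}\|^{2}=\|x-x_{n}\|^{2},$$ so the quantity appearing in the definition tends to $0$ exactly when $\|x-x_{n}\|\rightarrow 0$; hence every Hilbert space norm is LUR (and, as recalled in the next section, even uniformly convex).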
In 1979, Troyanski stated the first characterization of the existence of LUR renormings.\\\\ \textbf{Theorem 6.1.}\cite{33} \emph{If $X^{*}$ has a dual LUR norm then $X$ admits an equivalent LUR norm.}\\\\ \textbf{Theorem 6.2.} \cite[page 387]{21} \emph{Any space $X$ with a Fr\'{e}chet differentiable norm which has a G\^{a}teaux differentiable dual norm admits an equivalent LUR norm.}\\\\ \textbf{Theorem 6.3.} (Kadec) \cite{35} \emph{\textbf{(i)} If $X$ is separable then $X$ admits an equivalent LUR norm.\\\textbf{(ii)} If $X^{*}$ is separable then $X$ admits an equivalent norm whose dual norm is LUR.} \begin{proof} We show the second statement. Let $\{x_{i}\}_{i=1}^{\infty}$ be dense in $S(X)$ and let $\{f_{i}\}_{i=1}^{\infty}$ be dense in $S(X^{*})$. For $i \in \mathbb{N}$, put $F_{i}={\rm span}\{f_{1}, f_{2}, ..., f_{i}\}$. Define a norm $|\|.\||$ on $X^{*}$ by $$|\|f\||^{2}=\|f\|^{*}{^{2}}+\sum_{i=1}^{\infty}2^{-i}{\rm dist}(f, F_{i})^{2}+\sum_{i=1}^{\infty}2^{-i}f^{2}(x_{i}).$$It is not hard to show that $|\|.\||$ is a weak$^{*}$ lower semicontinuous function on $X^{*}$ equivalent with $\|.\|^{*}$. Hence $|\|.\||$ is the dual of a norm $|.|$ equivalent with $\|.\|$, and it is also LUR.\end{proof} The theorem above shows that, in particular, every separable space admits an equivalent strictly convex norm. By using Theorems 3.1 and 6.3, we see that if $X$ is an Asplund space then $X^{*}$ admits an equivalent LUR norm. The next theorem is a powerful result of Troyanski \cite{55}:\\\\ \textbf{Theorem 6.4.} \emph{$X$ admits an equivalent LUR norm if and only if it admits an equivalent weak-Kadec-Klee norm and an equivalent strictly convex norm.}\\\\ \textbf{Theorem 6.5.} \cite{51} \emph{Let $Y$ be a closed subspace of $X$ such that both $Y$ and $X/Y$ admit equivalent norms whose dual norms are LUR. Then $X$ admits an equivalent norm whose dual norm is LUR.}\\\\ Let us mention here that the analogue of Theorem 6.5 for Fr\'{e}chet differentiable norms is still an open question. Talagrand \cite{53} proved that the corresponding result for G\^{a}teaux differentiable norms is false.\\\\ Here we compare LUR renormings with the notion of Lipschitz separated spaces:\\\\ Given a positive scalar $M$, we will let $L_{X, M}$ be the space of all functions $f:X \rightarrow \mathbb{R}$ such that $|f(x)-f(y)| \leq \|x-y\|$ for each $x, y \in X$ and sup$\{|f(x)|: x \in X\}\leq M$ endowed with the metric $\rho(f,g)=$ sup$\{|f(x)-g(x)|: x \in X \}$. With this metric, $L_{X, M}$ is a complete metric space. Given a closed nonempty subset $Y \subseteq X$ and $f \in L_{X,M}$, we let $L_{f, M}=\{\widetilde{f} \in L_{X, M} : \widetilde{f}_{|Y}=f\}$. We say $X$ is \emph{Lipschitz separated} if for every proper closed subspace $Y \subseteq X$ and every $f \in L_{Y, M}$, we have $\displaystyle{\sup_{\widetilde{f} \in L_{f, M}} \widetilde{f}(x)}> \displaystyle{\inf_{\widetilde{f} \in L_{f, M}} \widetilde{f}(x)}$ for all $x \in X \setminus Y$.\\\\ \textbf{Theorem 6.6.} \cite{6} \emph{Any separable space $X$ can be equivalently renormed so that it is LUR but not Lipschitz separated.}\\\\ \textbf{Theorem 6.7.} \cite{6} \emph{Any space $X$ with a separable dual admits an equivalent norm under which $X$ is Lipschitz separated but not LUR.} \section{Uniformly convex spaces.} A Banach space is strictly convex if the midpoint of each chord of the unit ball lies beneath the surface. In 1936, Clarkson introduced the stronger notion of uniform convexity.
A Banach space is uniformly convex if the midpoints of all chords of the unit ball whose lengths are bounded below by a positive number are uniformly buried beneath the surface. The class of uniformly convex Banach spaces is very interesting and has numerous applications.\\\\\\\\The space $X$ (or the norm $\|.\|$ on $X$) is said to be \emph{uniformly convex} (UR) if for all sequences $(x_{n})_{n=1}^{\infty}$, $(y_{n})_{n=1}^{\infty} \subseteq X$ $$\displaystyle{\lim_{n\rightarrow \infty}\bigg( 2\|x_{n}\|^{2}+ 2\|y_{n}\|^{2}-\|x_{n}+y_{n}\|^{2} \bigg)}=0 \hspace{0.5cm} \Longrightarrow \hspace{0.5cm} \displaystyle{\lim_{n\rightarrow \infty}\|x_{n}- y_{n}\|=0}.$$ For example, any Hilbert space is uniformly convex and it can be shown that $L^{p}$ spaces are uniformly convex whenever $1<p<\infty$. We have (UR)$\Rightarrow$(LUR)$\Rightarrow$(R) but the converse is not true. For example, define a norm $|\|.\||$ on $C([0,1])$ by $|\|f\||^{2}=\|f\|_{\infty}^{2}+\|f\|_{2}^{2}$, where $\|.\|_{\infty}$ denotes the standard supremum norm of $C([0,1])$ and $\|.\|_{2}$ denotes the canonical norm of $L^{2}[0,1]$. Then $|\|.\||$ is strictly convex but not LUR on $C([0,1])$.\\%************************** There is a complete duality between uniform convexity and uniform Fr\'{e}chet differentiability.\\\\ \textbf{Theorem 7.1.} (Lindenstrauss) \cite{41} \emph{For any space $X$, the dual norm of $X^{*}$ is uniformly convex if and only if its predual norm is uniformly Fr\'{e}chet differentiable. Also, the dual norm of $X^{*}$ is uniformly Fr\'{e}chet differentiable if and only if its predual norm is uniformly convex.}\\\\ One of the first theorems to relate the geometry of the norm to linear topological properties is the following;\\\\ \textbf{Theorem 7.2.} \cite[pages 37-50]{15} \emph{Any uniformly convex Banach space is reflexive.} \begin{proof} Assume that the norm of $X$ is uniformly convex. Then the dual norm of $X^{*}$ is uniformly Fr\'{e}chet differentiable by Theorem 7.1. Therefore $X^{*}$ is reflexive by Theorem 2.3 and thus X is reflexive.\end{proof} The theorem above shows that any Hilbert space is a reflexive Banach space which is a well-known result in functional analysis. Note that the class of uniformly convex Banach spaces does not coincide with the all reflexive Banach spaces: an example of a reflexive Banach space which is not uniformly convex can be given.\\\\ Notice that, the space $C([0,1])$ is a separable non reflexive space. Consequently, $C([0,1])$ admits no equivalent uniformly convex norm, although, by Theorem 6.3, it does admit an equivalent LUR norm.\\\\ \textbf{Theorem 7.3.} \cite[Ch. XI]{23} \emph{Any space that admits an equivalent WUR norm is an Asplund space.}\\\\ The norm $\|.\|$ on $X$ is \emph{weakly uniformly convex} (WUR) if for all sequences $(x_{n})_{n=1}^{\infty}$, $(y_{n})_{n=1}^{\infty} \subseteq X$ with $\displaystyle{\lim_{n\rightarrow \infty} 2\|x_{n}\|^{2}+ 2\|y_{n}\|^{2}-\|x_{n}+y_{n}\|^{2} }=0$, then $\displaystyle{\lim_{n\rightarrow \infty} x_{n}- y_{n}=0}$, in the weak topology of $X$.\\\\\\\\ Using Theorems 3.1 and 7.3, we have the following corollary.\\\\ \textbf{Corollary 7.4.} \emph{If the norm of a separable space $X$ is WUR then $X^{*}$ is separable.} \section{Super-reflexive spaces.} Given Banach space $Y$, we say that $Y$ is \emph{finitely representable} in $X$ if for every $\varepsilon > 0$ and for every finite-dimensional subspace $Z$ of $Y$, there is an isomorphism $T$ of $Z$ onto $T(Z)\subseteq X$ such that $\|T\| \|T^{-1}\| < 1 + \varepsilon$. 
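Two standard facts illustrate this notion: by Dvoretzky's theorem, the Hilbert space $\ell^{2}$ is finitely representable in every infinite-dimensional Banach space, and by the principle of local reflexivity, $X^{**}$ is finitely representable in $X$ for every Banach space $X$.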
The space $X$ is said to be \emph{super-reflexive} if every finitely representable space in $X$ is reflexive.\\ Clearly, every super-reflexive space is reflexive. One of the well-known super-reflexive spaces are Hilbert spaces. The $L^{p}$ spaces for $1<p<\infty$ are other examples of super-reflexive spaces. But there are many other super-reflexive spaces. This class is mapped by the following equivalence.\\\\ \textbf{Theorem 8.1.} \cite[page 436]{21} \emph{The following assertions are equivalent:\\\textbf{(i)} $X$ is superreflexive.\\\textbf{(ii)} $X$ admits an equivalent uniformly convex norm.\\\textbf{(iii)} $X$ admits an equivalent uniformly Fr\'{e}chet differentiable norm.\\\textbf{(iv)} $X$ admits an equivalent norm which is uniformly convex and uniformly Fr\'{e}chet differentiable.} \section{Mazur intersection property.} In 1933, it was Mazur who first studied Banach spaces which have the so called \emph{Mazur intersection property} (MIP): every bounded closed convex set can be represented as an intersection of closed balls. A systematic study of this topic was initiated by Phelps \cite{48}. In 1978, Giles, Gregory and Sims gave some characterizations of this property \cite{27}. They raised the question whether every Banach space with the MIP is an Asplund space. They also characterized the associated property for a dual space, called the \emph{weak$^{*}$ Mazur intersection property}: every bounded weak$^{*}$ closed convex set can be represented as an intersection of closed dual balls.\\ Associated with MIP, we have also the following concepts:\\ A set $C$ in $X$ is a \emph{Mazur set} if given $f \in X^{*}$ with sup $f(C) < \lambda$, then there exists a closed ball $D$ such that $C \subseteq D$ and sup $f(D) < \lambda$. The space $X$ is called a \emph{Mazur space} provided that any intersection of closed balls in $X$ is a Mazur set.\\\\ \textbf{Theorem 9.1.} \cite{31} \emph{Any space with Fr\'{e}chet differentiable norm satisfies the MIP. Also, every space whose dual satisfies the MIP is reflexive and each reflexive space with a Fr\'{e}chet differentiable norm is a Mazur space. Finally, Mazur spaces with the MIP are Asplund and G\^{a}teaux differentiable.}\\\\ \textbf{Theorem 9.2.} \cite{50} \emph{A Mazur space with the MIP admits an equivalent Fr\'{e}chet differentiable norm.}\\\\ \textbf{Theorem 9.3.} \cite{51} \emph{Let $Y$ be a subspace of $X$ such that both $Y^{*}$ and $(X/Y)^{*}$ can be renormed to have the weak$^{*}$ Mazur intersection property. Then $X^{*}$ can be renormed to have the weak$^{*}$ Mazur intersection property.} \section{Weakly compactly generated spaces.} The space $X$ is said to be \emph{weakly compactly generated} (WCG) if $X$ is the closed linear span of a weakly compact set $K \subseteq X$. The class of WCG spaces has been intensively studied during the last forty years and now is in the core of modern Banach space theory \cite{15, 19, 21}.\\ Recall that the space $X$ is separable if there exists a countable set $\{x_{n}\}_{n=1}^{\infty}$ with $\overline{\{x_{n}\}_{n=1}^{\infty}}=X $. An important characterization of reflexivity is the result that $X$ is reflexive if and only if $B(X)$, the closed unit ball of $X$, is weakly compact.\\ Notice that if $X$ is reflexive, then one may take $K=B(X)$ in the definition above, whereas if $X$ is separable, with $\{x_{n}\}_{n=1}^{\infty}$ dense in the $S(X)$, we can take $K=\{n^{-1} x_{n} \}_{n=1}^{\infty} \bigcup \{0\}$. 
In this way we see that both separable and reflexive spaces are WCG.\\\\ \textbf{Theorem 10.1.} \emph{\textbf{(i)} If $X$ is WCG then $X$ admits an equivalent norm that is simultaneously LUR and G\^{a}teaux differentiable \cite{1, 56}.\\\textbf{(ii)} If $X^{*}$ is WCG then $X$ admits an equivalent norm $|.|$ the dual norm of which is LUR. In particular, $|.|$ is Fr\'{e}chet differentiable \cite{30}.}\\\\ The first part of the theorem above shows that every reflexive space admits an equivalent norm with the weak-Kadec-Klee property.\\\\ \textbf{Corollary 10.2.} \cite[page 589]{21} \emph{If $X^{*}$ is WCG then $X$ is Asplund.}\\\\ The corollary above shows that every reflexive Banach space is Asplund.\\\\ \textbf{Theorem 10.3.} \cite{1} \emph{If $X$ is WCG then $X^{*}$ admits an equivalent strictly convex dual norm.}\\\\ \textbf{Theorem 10.4.} \cite{5} \emph{If $X$ is WCG and Asplund then $X^{*}$ admits an equivalent LUR dual norm.}\\\\ If $M$ is a bounded total set in $X$ (i.e., a bounded set $M$ in $X$ such that $\overline{span}M=X$), we will say that the norm of $X$ is \emph{dually $M$-2-rotund} if $(f_{n})_{n=1}^{\infty}$ is convergent to some $f \in B(X^{*})$ uniformly on $M$ whenever $f_{n} \in S(X^{*})$ are such that $\displaystyle{\lim_{m, n\rightarrow \infty} \|f_{m}+f_{n}\|}=2$.\\\\ \textbf{Theorem 10.5.} \cite{22} \emph{$X$ is WCG if and only if $X$ admits an equivalent dually $M$-2-rotund norm for some bounded total set $M$ in $X$.} \section{Va\v{s}\'{a}k spaces.} A class of spaces wider than WCG spaces, known as weakly countably determined or Va\v{s}\'{a}k spaces, was originally defined and investigated by Va\v{s}\'{a}k.\\ The space $X$ is \emph{Va\v{s}\'{a}k} if there is a sequence $(B_{n})_{n=1}^{\infty}$ of weak$^{*}$-compact sets in $X^{**}$ such that given $x \in X$ and $u \in X^{**}\backslash X$, there is $n \in \mathbb{N}$ such that $x \in B_{n}$ and $u \not \in B_{n}$.\\\\ \textbf{Theorem 11.1.} \cite[Ch. XI]{23} \emph{If $X^{*}$ is Va\v{s}\'{a}k then $X$ admits an equivalent Fr\'{e}chet differentiable norm.}\\\\ \textbf{Theorem 11.2.} \cite{43} \emph{Every Va\v{s}\'{a}k space has an equivalent norm the dual norm of which is strictly convex.}\\\\ Many of the renorming results for WCG spaces are actually valid for Va\v{s}\'{a}k spaces. For example, any Va\v{s}\'{a}k space admits an equivalent G\^{a}teaux differentiable norm \cite[Ch. VII]{14}. Further details can be found in \cite[Ch. VII]{19}. \section{Uniform Eberlein compact spaces.} A compact space $K$ is said to be \emph{uniform Eberlein} if $K$ is homeomorphic to a weakly compact subset of a Hilbert space in its weak topology.\\\\ \textbf{Theorem 12.1.} \cite[page 624]{21} \emph{\textbf{(i)} $(B(X^{*}),w^{*})$ is uniform Eberlein compact if and only if $X$ admits an equivalent uniformly G\^{a}teaux differentiable norm.\\\textbf{(ii)} Let $K$ be a compact space.
$C(K)$ admits an equivalent uniformly G\^{a}teaux differentiable norm if and only if $K$ is uniform Eberlein.} \section{Bases and renorming theory.} A \emph{Schauder} basis for $X$ is a sequence $(x_{n})_{n=1}^{\infty}$ of vectors in $X$ such that every vector in $X$ has a unique representation of the form $\displaystyle{\sum_{n=1}^{\infty}a_{n}x_{n}}$ with each $a_{n}$ a scalar, where the series converges in the norm topology.\\Recall that a series $\displaystyle{\sum_{n=1}^{\infty}x_{n}}$ is said to be \emph{unconditionally convergent} if the series $\displaystyle{\sum_{i=1}^{\infty}x_{n_{i}}}$ converges for every choice of indices $n_{1} < n_{2} < n_{3} <\cdots$.\\A Schauder basis $(x_{n})_{n=1}^{\infty}$ for $X$ is said to be \emph{unconditional} if for every $x \in X$, its expansion in terms of the basis $\displaystyle{\sum_{n=1}^{\infty}a_{n}x_{n}}$ converges unconditionally.\\\\ \textbf{Theorem 13.1.} \cite{52} \emph{Let $X$ have an unconditional basis. Then $X$ admits an equivalent norm with an LUR dual norm whenever $X$ admits an equivalent Fr\'{e}chet differentiable norm.}\\\\ A biorthogonal system $\{x_{i};f_{i}\}_{i \in I}$ in $X \times X^{*}$ \big(i.e., $f_{i}(x_{j})=\delta_{ij}$ (the Kronecker delta) for $i, j \in I$\big) is called \emph{fundamental} provided that $\overline{span}(x_{i})_{i \in I}=X$. A fundamental biorthogonal system $\{x_{i};f_{i}\}_{i \in I}$ is a \emph{Markushevich} basis if $(f_{i})_{i \in I}$ separates the points of $X$. A Markushevich basis $\{x_{i};f_{i}\}_{i \in I}$ is called \emph{shrinking} if $\overline{span}(f_{i})_{i \in I}=X^{*}$. Clearly, every Schauder basis, together with its coefficient functionals, is a Markushevich basis. An example of a Markushevich basis that is not a Schauder basis is the sequence of trigonometric polynomials $\{e^{2\pi i n t}: n=0, \pm 1, \pm 2, \ldots\}$ in the space $\widetilde{C}([0,1])$ of complex continuous functions on $[0,1]$ whose values at $0$ and $1$ are equal, with the sup-norm. If $X^{*}$ is separable then $X$ has a shrinking Markushevich basis \cite[page 231]{21}.\\ A compact space $K$ is called a \emph{Corson compact space} if $K$ is homeomorphic to a subset $C$ of $[-1,1]^{\Gamma}$, for some set $\Gamma$, such that each point in $C$ has only a countable number of nonzero coordinates.\\For example, any metrizable compact space is Corson compact, any weakly compact set in a Banach space (in its weak topology) is Corson compact, and the dual ball of a Va\v{s}\'{a}k space in its weak$^{*}$-topology is Corson compact \cite[Ch. VI]{14}.\\A Banach space $X$ is called \emph{weakly Lindel\"{o}f determined} (WLD) if $(B(X^{*}),w^{*})$ is a Corson compact. Every Va\v{s}\'{a}k space is WLD.\\\\ \textbf{Theorem 13.2.} \cite[page 211]{32} \emph{For any space $X$ the following are equivalent:\\ \textbf{(i)} $X$ has a shrinking Markushevich basis.\\ \textbf{(ii)} $X$ is WCG and Asplund.\\ \textbf{(iii)} $X$ is WLD and Asplund.\\ \textbf{(iv)} $X$ is WLD and admits an equivalent norm whose dual norm is LUR.\\ \textbf{(v)} $X$ is WLD and admits an equivalent Fr\'{e}chet differentiable norm.}\\\\ \textbf{Theorem 13.3.} \cite{31, 47} \emph{Let $X$ have a fundamental biorthogonal system $\{x_{i};f_{i}\}_{i \in I} \subseteq X \times X^{*}$. Then the subspace $Y={\rm span}(x_{i})_{i \in I}$ admits an equivalent LUR norm.}\\\\ \textbf{Theorem 13.4.} \cite{47} \emph{Let $X$ have a fundamental biorthogonal system.
Then $X^{*}$ admits an equivalent norm with the weak$^{*}$-Mazur intersection property (every bounded weak$^{*}$-closed convex set can be represented as an intersection of closed dual balls).}\begin{proof} A dual Banach space has the weak$^{*}$-Mazur intersection property provided its predual has a dense set of LUR points. Let us consider a biorthogonal system $\{x_{i};f_{i}\}_{i \in I} \subseteq X \times X^{*}$ such that $X=\overline{span}(x_{i})_{i \in I}$ and put $Y={\rm span}(x_{i})_{i \in I}$. Using Theorem 13.3, we obtain an equivalent LUR norm $|.|$ on $Y$. Let $\|.\|$ be the norm on $X$ obtained by extending $|.|$; its unit ball is the closure of the unit ball of $|.|$. We claim that $\|.\|$ is LUR at each point of $Y$. Take $y \in Y \backslash \{0\}$ and a sequence $(x_{n})_{n \in \mathbb{N}}$ in $X$ so that $\displaystyle{\lim_{n\rightarrow \infty} 2\|y\|^{2}+ 2\|x_{n}\|^{2}-\|y+x_{n}\|^{2}}=0.$ If we choose $y_{n} \in Y$ with $\|y_{n}-x_{n}\|<\frac{1}{n}$, then $\displaystyle{\lim_{n\rightarrow \infty} 2|y|^{2}+ 2|y_{n}|^{2}-|y+y_{n}|^{2}}=0$ and, hence, $\displaystyle{\lim_{n\rightarrow \infty} \|x_{n}-y\|}=\displaystyle{\lim_{n\rightarrow \infty} |y_{n}-y|}=0$.\end{proof} In fact, the proof above shows that every Banach space with a fundamental biorthogonal system admits an equivalent norm with a dense set of LUR points.\\\\ \textbf{Theorem 13.5.} \cite{51} \emph{Let $X^{*}$ be a dual Banach space with a fundamental biorthogonal system $\{x_{i};f_{i}\}_{i \in I} \subseteq X^{*} \times X $. Then $X$ admits an equivalent norm with the MIP.} \section{Some interesting problems.} To the author's knowledge and taste, the following problems arise in this area:\\%**************************** (Q1) If the space $X$ has the Radon-Nikodym property (i.e., for every $\varepsilon>0$ every bounded subset of $X$ has a non-empty slice of diameter less than $\varepsilon$), does it follow that $X$ admits an equivalent weak-Kadec-Klee norm? Does it admit an equivalent strictly convex norm? Is it true that $X$ admits an equivalent LUR norm?\\%**************************** (Q2) Does every Asplund space admit an equivalent SSD norm? \big(If $X$ admits an equivalent SSD norm, then $X$ is Asplund (G. Godefroy)\big). Recall that the norm $\|.\|$ on $X$ is called strongly subdifferentiable (SSD) if for each $x \in X$, the one-sided limit $\displaystyle{\lim_{t\rightarrow 0^{+}}\frac{\|x+ty\|-\|x\|}{t}}$ exists uniformly for $y$ in $S(X)$. Note that the norm $\|.\|$ is Fr\'{e}chet differentiable if and only if it is G\^{a}teaux differentiable and at the same time SSD.\\%**************************** (Q3) Assume that a Banach space $X$ admits an equivalent G\^{a}teaux differentiable norm and that $X$ admits also an equivalent SSD norm. Does $X$ admit an equivalent Fr\'{e}chet differentiable norm?\\%**************************** (Q4) Assume that $X$ is a nonseparable non-Asplund space. Does $X$ admit an equivalent norm that is nowhere SSD except at the origin? For separable non-Asplund spaces the answer is yes.\\ (Q5) Assume that the norm of a separable Banach space $X$ has the property that its restriction to every infinite-dimensional closed subspace $Y \subseteq X$ has a point of Fr\'{e}chet differentiability on $Y$. Is $X^{*}$ then necessarily separable?\\%**************************** (Q6) Assume that $X$ is Va\v{s}\'{a}k.
Does $X$ admit an equivalent norm that has the following property: $(f_{n})_{n=1}^{\infty}$ is weak-convergent to some $f \in B(X^{*})$ whenever $f_{n} \in S(X^{*})$ are such that $\|f_{n}+f_{m}\|\rightarrow2$ as $n,m\rightarrow\infty$?\\%**************************** (Q7) Assume that $X$ has an unconditional basis and admits a G\^{a}teaux differentiable norm. Does $X$ admit a norm the dual norm of which is strictly convex?\\%**************************** (Q8) (Godefroy) Assume an Asplund space $X$ has a Markushevich basis $\{x_{i},f_{i}\}_{i \in I}$ with ${\rm span}\{f_{i}\}_{i \in I}$ a norming subspace of $X^{*}$. Is $X$ WCG?\\%**************************** (Q9) Assume $X$ admits an equivalent Fr\'{e}chet differentiable norm. Does $X$ admit an equivalent LUR norm?\\%**************************** (Q10) It is proved in \cite{6} that every weakly uniformly convex Banach space is Lipschitz separated. Can a Lipschitz separated Banach space be equivalently renormed with a weakly uniformly convex norm? A related question: is a Lipschitz separated Banach space necessarily an Asplund space?\\%**************************** (Q11) Is it true that an equivalent Fr\'{e}chet differentiable norm in a subspace of a separable and reflexive Banach space can be extended to an equivalent Fr\'{e}chet differentiable norm in the whole space?\\%**************************** (Q12) Assume that for every nonempty closed, bounded and convex subset $A$ of $X^{*}$ there exists $x \in X$ which attains its supremum on $A$. Is $X$ Asplund?\\ (Q13) A separable Banach space $X$ is reflexive if and only if $X$ admits an equivalent 2-rotund norm. Is this true in general for nonseparable spaces?\\ (Q14) Assume that the norm of a separable Banach space $X$ is such that its restriction to every subspace of $X$ is Fr\'{e}chet differentiable at some point. Must $X^{*}$ be separable?\\ (Q15) Let $X$ be a WLD space admitting a G\^{a}teaux differentiable norm. Does $X$ admit a norm whose dual norm is strictly convex?\\ (Q16) Let $X$ be a WLD space. Is every convex continuous function on $X$ G\^{a}teaux differentiable at some point?\\ \textbf{Acknowledgement.} We thank the referee for many valuable remarks which led to significant improvement of the present paper. (Received -- -- --) (Accepted -- -- --) \vskip.5cm \end{document}
\begin{document} \pagestyle{plain} \title{Popular Edges and Dominant Matchings} \begin{abstract} Given a bipartite graph $G = (A \cup B,E)$ with strict preference lists and $e^* \in E$, we ask if there exists a popular matching in $G$ that contains the edge~$e^*$. We call this the {\em popular edge} problem. A matching $M$ is popular if there is no matching $M'$ such that the vertices that prefer $M'$ to $M$ outnumber those that prefer $M$ to~$M'$. It is known that every stable matching is popular; however $G$ may have no stable matching with the edge $e^*$ in it. In this paper we identify another natural subclass of popular matchings called ``dominant matchings'' and show that if there is a popular matching that contains the edge $e^*$, then there is either a stable matching that contains $e^*$ or a dominant matching that contains~$e^*$. This allows us to design a linear time algorithm for the popular edge problem. We also use dominant matchings to efficiently test if every popular matching in $G$ is stable or not. \end{abstract} \section{Introduction} \label{intro} We are given a bipartite graph $G = (A \cup B, E)$ where each vertex has a strict preference list ranking its neighbors and we are also given $e^* \in E$. Our goal is to compute a matching $M$ that contains the edge $e^*$; in other words, $e^*$ is an essential edge that must be included in our matching. However, $M$ also has to be globally acceptable, that is, $M$ has to be {\em popular} (defined below). We say a vertex $u \in A \cup B$ prefers matching $M$ to matching $M'$ if either $u$ is matched in $M$ and unmatched in $M'$, or $u$ is matched in both and prefers $M(u)$, i.e., $u$'s partner in $M$, to $M'(u)$, i.e., $u$'s partner in~$M'$. For matchings $M$ and $M'$ in $G$, let $\phi(M,M')$ be the number of vertices that prefer $M$ to~$M'$. If $\phi(M',M) > \phi(M,M')$ then we say $M'$ is {\em more popular than}~$M$. \begin{definition} A matching $M$ is {\em popular} if there is no matching that is more popular than $M$; in other words, $\phi(M,M') \ge \phi(M',M)$ for all matchings $M'$ in~$G$. \end{definition} Thus in an election between any pair of matchings, where each vertex casts a vote for the matching that it prefers, a popular matching never loses. Popular matchings always exist in $G$ since every stable matching is popular~\cite{Gar75}. Recall that a matching $M$ is stable if it has no {\em blocking pair}, i.e., no pair $(a,b)$ such that both $a$ and $b$ prefer each other to their respective assignments in~$M$. It is known that every stable matching is a minimum size popular matching~\cite{HK13}, thus the notion of popularity is a relaxation of stability. Our problem is to determine if there exists a popular matching in $G$ that contains the essential edge~$e^*$. We call this the {\em popular edge} problem. It is easy to check if there exists a stable matching that contains~$e^*$. However as stability is stricter than popularity, it may be the case that there is no stable matching that contains $e^*$ while there is a popular matching that contains $e^*$. Fig.~\ref{fig:1} has such an example. \begin{figure} \caption{The top-choice of both $a_1$ and of $a_2$ is $b_1$; the second choice of $a_1$ is~$b_2$. The preference lists of the $b_i$'s are symmetric. There is no edge between $a_2$ and~$b_2$. The matching $S = \{(a_1,b_1)\} \label{fig:1} \end{figure} It is a theoretically interesting problem to identify those edges that can occur in a popular matching and those that cannot.
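Since popularity is defined by pairwise elections, it can be verified by brute force on very small instances, which is sometimes convenient for building intuition. The following sketch is purely illustrative: the function names (\texttt{phi}, \texttt{is\_popular}) and the toy instance at the end are ours and are not taken from the figures of this paper; preference lists are assumed to be given as dictionaries mapping each vertex to its ranked list of neighbors, best first.
\begin{verbatim}
from itertools import combinations

def all_matchings(edges):
    # All matchings (as sets of frozenset edges) of a small edge list.
    out = [set()]
    for r in range(1, len(edges) + 1):
        for sub in combinations(edges, r):
            if len({v for e in sub for v in e}) == 2 * len(sub):
                out.append({frozenset(e) for e in sub})
    return out

def partner(M, v):
    # Partner of vertex v in matching M, or None if v is unmatched.
    for e in M:
        if v in e:
            (w,) = e - {v}
            return w
    return None

def phi(M1, M2, pref):
    # phi(M1, M2): number of vertices that prefer matching M1 to M2.
    count = 0
    for v in pref:
        p1, p2 = partner(M1, v), partner(M2, v)
        if p1 == p2:
            continue
        if p2 is None or (p1 is not None
                          and pref[v].index(p1) < pref[v].index(p2)):
            count += 1
    return count

def is_popular(M, edges, pref):
    # M is popular iff no matching of the instance is more popular.
    return all(phi(N, M, pref) <= phi(M, N, pref)
               for N in all_matchings(edges))

# A toy instance (our own, not the one of Fig. 1); lists are best-first.
pref = {'a1': ['b1', 'b2'], 'a2': ['b1'], 'b1': ['a1', 'a2'], 'b2': ['a1']}
edges = [('a1', 'b1'), ('a1', 'b2'), ('a2', 'b1')]
print(is_popular({frozenset(('a1', 'b1'))}, edges, pref))   # True
\end{verbatim}
In this toy instance the matching $\{(a_1,b_1)\}$ is stable and hence popular, consistent with the discussion above; enumerating all matchings is of course feasible only for tiny examples.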
This solution is also applicable in a setting where $A$ is a set of applicants and $B$ is a set of posts, each applicant seeks to be matched to an adjacent post and vice versa; also vertices have strict preferences over their neighbors. The central authority desires to pair applicant $a$ and post $b$ to each other. However if the resulting matching $M$ should lose an election where vertices cast votes, then $M$ is unpopular and not globally stable; the central authority wants to avoid such a situation. Thus what is sought here is a matching that satisfies both these conditions: (1)~$(a,b) \in M$ and (2)~$M$ is popular. A first attempt to solve this problem may be to ask for a {\em stable} matching $S$ in the subgraph obtained by deleting the endpoints of $e^*$ from $G$ and add $e^*$ to~$S$. However $S \cup \{e^*\}$ need not be popular. Fig.~\ref{fig:arranged_pair} has a simple example where $e^* = (a_2,b_2)$ and the subgraph induced by $a_1,b_1,a_3,b_3$ has a unique stable matching~$\{(a_1,b_1)\}$. However $\{(a_1,b_1),(a_2,b_2)\}$ is not popular in $G$ as $\{(a_1,b_3),(a_2,b_1)\}$ is more popular. Note that there {\em is} a popular matching $M^* = \{(a_1,b_3),(a_2,b_2),(a_3,b_1)\}$ that contains~$e^*$. \begin{figure} \caption{Here we have $e^* = (a_2,b_2)$. The top choice for $a_1$ and $a_2$ is $b_1$ while $b_2$ is their second choice; $b_1$'s top choice is $a_2$, second choice is $a_1$, and third choice is~$a_3$. The vertices $b_2,b_3$, and $a_3$ have $a_2,a_1$, and $b_1$ as their only neighbors.} \label{fig:arranged_pair} \end{figure} It would indeed be surprising if it was the rule that for every edge $e^*$, there is always a popular matching which can be decomposed as $\{e^*\}$ $\cup$ a stable matching on $E \setminus \{e^*\}$, as popularity is a far more flexible notion than stability; for instance, the set of vertices matched in every stable matching in $G$ is the same~\cite{GS85} while there can be a large variation (up to a factor of 2) in the sizes of popular matchings in $G$. We need a larger palette than the set of stable matchings to solve the popular edge problem. We now identify another natural subclass of popular matchings called {\em dominant} popular matchings or dominant matchings, in short. In order to define dominant matchings, we use the relation ``defeats'', defined as follows. \begin{definition} \label{def:defeat} Matching $M$ {\em defeats} matching $M'$ if either of these two conditions holds: \begin{enumerate}[(i)] \item $M$ is more popular than $M'$, i.e., $\phi(M,M') > \phi(M',M)$; \item $\phi(M,M') = \phi(M',M)$ and $|M| > |M'|$. \end{enumerate} \end{definition} When $M$ and $M'$ gather the same number of votes in the election between $M$ and $M'$, instead of declaring these matchings as incomparable (as done under the ``more popular than'' relation), it seems natural to regard the larger of $M,M'$ as the {\em winner} of the election. Condition~(ii) of the {\em defeats} relation exactly captures this. We define dominant matchings to be those popular matchings that are never defeated (as per Definition~\ref{def:defeat}). \begin{definition} \label{def:dominant} Matching $M$ is {\em dominant} if there is no matching that {\em defeats} it; in other words, $M$ is popular and for any matching $M'$, if $|M'| > |M|$, then $M$ is more popular than~$M'$. \end{definition} Note that a dominant matching has to be a maximum size popular matching since smaller-sized popular matchings get defeated by a popular matching of maximum size. 
However not every maximum size popular matching is a dominant matching, as the example (from~\cite{HK13}) in Fig.~\ref{fig:0} demonstrates. \begin{figure} \caption{The vertex $b_1$ is the top choice for all $a_i$'s and $b_2$ is the second choice for $a_1$ and $a_2$ while $b_3$ is the third choice for~$a_1$. The preference lists of the $b_i$'s are symmetric. There are 2 maximum size popular matchings here: $M_1 = \{(a_1,b_1),(a_2,b_2)\} \label{fig:0} \end{figure} Analogous to Definition~\ref{def:dominant}, we can define the following subclass of popular matchings: those popular matchings $M$ such that {\em for any matching $M'$, if $|M'| < |M|$ then $M$ is more popular than $M'$}. It is easy to show that this class of matchings is exactly the set of stable matchings. That is, we can show that any popular matching $M$ that is more popular than every {\em smaller-sized} matching is a stable matching and conversely, every stable matching is more popular than any smaller-sized matching. Thus dominant matchings are to the class of maximum size popular matchings what stable matchings are to the class of minimum size popular matchings: these are popular matchings that carry the proof of their maximality (similarly, minimality) by being more popular than every matching of larger (resp., smaller) size. \subsubsection{Our contribution.} Theorem~\ref{main-thm} is our main result here. This enables us to solve the popular edge problem in linear time. \begin{theorem} \label{main-thm} If there exists a popular matching in $G = (A\cup B,E)$ that contains the edge $e^*$, then there exists either a stable matching in $G$ that contains $e^*$ or a dominant matching in $G$ that contains~$e^*$. \end{theorem} \begin{enumerate}[(i)] \item To show Theorem~\ref{main-thm}, we show that any popular matching $M$ can be partitioned as $M_0 \mathbin{\dot\cup} M_1$, where $M_0$ is dominant in the subgraph induced by the vertices matched in $M_0$ and in the subgraph induced by the remaining vertices, $M_1$ is stable. If $M$ contains $e^*$, then $e^*$ is either in $M_0$ or in~$M_1$. In the former case, we show a dominant matching in $G$ that contains $e^*$ and in the latter case, we show a stable matching in $G$ that contains~$e^*$. \item We also show that every dominant matching in $G$ can be realized as an image (under a simple and natural mapping) of a stable matching in a new graph~$G'$. This allows us to determine in linear time if there is a dominant matching in $G$ that contains the edge~$e^*$. This mapping between stable matchings in $G'$ and dominant matchings in $G$ can also be used to find a min-cost dominant matching in~$G$ efficiently, where we assume there is a rational cost function on~$E$. \item When all popular matchings in $G$ have the same size, it could be the case that every popular matching in $G$ is also stable. That is, in $G$ we have $\{$popular matchings$\} = \{$stable matchings$\}$. We use dominant matchings to efficiently check if this is the case or not. We show that if there exists an unstable popular matching in $G$, then there has to exist an unstable dominant matching in $G$. This allows us to design an $O(m^2)$ algorithm (where $|E| = m$) to check if every popular matching in $G$ is also stable. \end{enumerate} \subsubsection{Related Results.} Stable matchings were defined by Gale and Shapley in their landmark paper~\cite{GS62}.
The attention of the community was drawn very early to the characterization of \emph{stable edges}: edges and sets of edges that can appear in a stable matching. In the seminal book of Knuth~\cite{Knu76}, stable edges first appeared under the term ``arranged marriages''. Knuth presented an algorithm to find a stable matching with a given stable set of edges or report that none exists. This method is a modified version of the Gale-Shapley algorithm and runs in $O(m)$ time. Gusfield and Irving~\cite{GI89} provided a similar, simple method for the stable edge problem with the same running time. The stable edge problem is a highly restricted case of the \emph{min-cost stable matching problem}, where a stable matching that has the minimum edge cost among all stable matchings is sought. With the help of edge costs, various stable matching problems can be modeled, such as stable matchings with restricted edges~\cite{DFFS03} or egalitarian stable matchings~\cite{ILG87}. A simple and elegant formulation of the stable matching polytope of $G = (A \cup B,E)$ is known~\cite{Rot92} and using this, a min-cost stable matching can be computed in polynomial time via linear programming. The size of a stable matching in $G$ can be as small as $|M_{\max}|/2$, where $M_{\max}$ is a maximum size matching in $G$. Relaxing stability to popularity yields larger matchings and it is easy to show that a largest popular matching has size at least~$2|M_{\max}|/3$. Efficient algorithms for computing a popular matching of maximum size were shown in~\cite{HK13,Kav12-journal}. The algorithm in \cite{HK13} runs in $O(mn_0)$ time, where $n_0 = \min(|A|,|B|)$ and the algorithm in \cite{Kav12-journal} runs in linear time. In fact, both these algorithms compute dominant matchings -- thus dominant matchings always exist in a stable marriage instance with strict preference lists. Interestingly, all the polynomial time algorithms currently known for computing {\em any} popular matching in $G=(A\cup B,E)$ compute either a stable matching or a dominant matching in~$G$. \paragraph{Organization of the paper.} A characterization of dominant matchings is given in Section~\ref{sec:char}. In Section~\ref{sec:dom-mat} we show a surjective mapping between stable matchings in a larger graph $G'$ and dominant matchings in~$G$. Section~\ref{sec:pop-edge} has our algorithm for the popular edge problem and Section~\ref{sec:dom-vs-stab} has our algorithm to test if every popular matching in $G$ is also stable. The Appendix has a brief overview of the maximum size popular matching algorithms in \cite{HK13,Kav12-journal}. \section{A characterization of dominant matchings} \label{sec:char} Let $M$ be any matching in $G = (A \cup B, E)$ and let $M(u)$ denote $u$'s partner in $M$, where $u \in A \cup B$. Label each edge $e=(a,b)$ in $E\setminus M$ by the pair $(\alpha_e,\beta_e)$, where $\alpha_e = \mathsf{vote}_a(b,M(a))$ and $\beta_e = \mathsf{vote}_b(a,M(b))$, i.e., $\alpha_e$ is $a$'s vote for $b$ vs.\ $M(a)$ and $\beta_e$ is $b$'s vote for $a$ vs.~$M(b)$. The function $\mathsf{vote}(\cdot,\cdot)$ is defined below.
\begin{definition} For any $u \in A\cup B$ and neighbors $x$ and $y$ of $u$, define $u$'s vote between $x$ and $y$ as: \begin{equation*} \vspace*{-2mm} \label{vote-defn} \mathsf{vote}_u(x,y) = \begin{cases} + & \text{if $u$ prefers $x$ to $y$}\\ - & \text{if $u$ prefers $y$ to $x$}\\ 0 & \text{otherwise (i.e., $x = y$).} \end{cases} \end{equation*} \end{definition} If a vertex $u$ is unmatched, then $M(u)$ is undefined and we define $\mathsf{vote}_u(v,M(u))$ to be $+$ for every neighbor $v$ of $u$, since every vertex prefers being matched to being unmatched. Note that if an edge $(a,b)$ is labeled $(+,+)$, then $(a,b)$ blocks $M$ in the stable matching sense. If an edge $(a,b)$ is labeled $(-,-)$, then both $a$ and $b$ prefer their respective partners in $M$ to each other. Let $G_M$ be the subgraph of $G$ obtained by deleting edges that are labeled~$(-,-)$. The following theorem characterizes popular matchings. \begin{theorem}[from \cite{HK13}] \label{thm:pop-char} A matching $M$ is popular if and only if the following three conditions are satisfied in the subgraph $G_M$: \begin{enumerate} \item[(i)]There is no alternating cycle with respect to $M$ that contains a $(+,+)$ edge. \item[(ii)]There is no alternating path starting from an unmatched vertex wrt $M$ that contains a $(+,+)$ edge. \item[(iii)]There is no alternating path with respect to $M$ that contains two or more $(+,+)$ edges. \end{enumerate} \end{theorem} Lemma~\ref{thm:domn-char} characterizes those popular matchings that are dominant. The ``if'' side of Lemma~\ref{thm:domn-char} was shown in \cite{Kav12-journal}: it was shown that if there is no augmenting path with respect to a popular matching $M$ in $G_M$ then $M$ is more popular than all larger matchings, thus $M$ is a maximum size popular matching. Here we show that the converse holds as well, i.e., if $M$ is a popular matching such that $M$ is more popular than all larger matchings, in other words, if $M$ is a dominant matching, then there is no augmenting path with respect to $M$ in~$G_M$. \begin{lemma} \label{thm:domn-char} A popular matching $M$ is dominant if and only if there is no augmenting path wrt $M$ in~$G_M$. \end{lemma} \begin{proof} Let $M$ be a popular matching in~$G$. Suppose there is an augmenting path $\rho$ with respect to $M$ in~$G_M$. Let us use $M \approx M'$ to denote both matchings getting the same number of votes in an election between them, i.e., $\phi(M,M') = \phi(M',M)$. We will now show that $M\oplus\rho \approx M$. Since $M\oplus\rho$ is a larger matching than $M$, if $M\oplus\rho \approx M$, then it means that $M\oplus\rho$ defeats $M$, thus $M$ is not dominant. Consider $M\oplus\rho$ versus $M$: every vertex that does not belong to the path $\rho$ gets the same partner in both these matchings. Hence vertices outside $\rho$ are indifferent between these two matchings. Consider the vertices on~$\rho$. First, note that there is no edge in $\rho\setminus M$ that is labeled $(+,+)$, otherwise that would contradict condition~(ii) of Theorem~\ref{thm:pop-char}. Since the path $\rho$ belongs to $G_M$, no edge is labeled $(-,-)$ either. Hence every edge in $\rho\setminus M$ is labeled either $(+,-)$ or~$(-,+)$. Note that the $+$ signs count the number of votes for $M\oplus\rho$ while the $-$ signs count the number of votes for~$M$. Thus the number of votes for $M\oplus\rho$ equals the number of votes for $M$ on vertices of $\rho$, and thus in the entire graph~$G$. Hence $M\oplus\rho \approx M$.
Now we show the other direction: if there is no augmenting path with respect to a popular matching $M$ in $G_M$ then $M$ is dominant. Let $M'$ be a larger matching. Consider $M \oplus M'$ in $G$: this is a collection of alternating paths and alternating cycles and since $|M'| > |M|$, there is at least one augmenting path with respect to $M$ here. Call this path $p$, running from vertex $u$ to vertex~$v$. Let us count the number of votes for $M$ versus $M'$ among the vertices of~$p$. \begin{figure} \caption{The $u$-$v$ augmenting path $p$ in $G$ where the bold edges are in $M$; at least one edge here (say, $(x,y)$) is labeled~$(-,-)$.} \label{fig:thm2} \end{figure} No edge in $p$ is labeled $(+,+)$ as that would contradict condition~(ii) of Theorem~\ref{thm:pop-char}, thus all the edges of $M'$ in $p$ are labeled $(-,+)$, $(+,-)$, or~$(-,-)$. Since $p$ does not exist in $G_M$, there is at least one edge that is labeled $(-,-)$ here (see Fig.~\ref{fig:thm2}): thus among the vertices of $p$, $M$ gets more votes than $M'$ (recall that $+$'s are votes for $M'$ and $-$'s are votes for $M$). Thus $M$ is more popular than $M'$ among the vertices of~$p$. By the popularity of $M$, we know that $M$ gets at least as many votes as $M'$ over all other paths and cycles in $M \oplus M'$; this is because if $\rho$ is an alternating path/cycle in $M \oplus M'$ such that the number of vertices on $\rho$ that prefer $M'$ to $M$ is more than the number that prefer $M$ to $M'$, then $M\oplus\rho$ is more popular than $M$, a contradiction to the popularity of~$M$. Thus adding up over all the vertices in $G$, it follows that $\phi(M,M') > \phi(M',M)$. Hence $M$ is more popular than any larger matching and so $M$ is a dominant matching. \qed \end{proof} Corollary~\ref{cor0} is a characterization of dominant matchings. This follows immediately from Lemma~\ref{thm:domn-char} and Theorem~\ref{thm:pop-char}. \begin{corollary} \label{cor0} Matching $M$ is a dominant matching if and only if $M$ satisfies conditions~(i)-(iii) of Theorem~\ref{thm:pop-char} and condition~(iv): there is no augmenting path wrt $M$ in~$G_M$. \end{corollary} \section{The set of dominant matchings} \label{sec:dom-mat} In this section we show a surjective mapping from the set of stable matchings in a new instance $G' = (A'\cup B', E')$ to the set of dominant matchings in $G = (A\cup B,E)$. It will be convenient to refer to vertices in $A$ and $A'$ as {\em men} and vertices in $B$ and $B'$ as {\em women}. The construction of $G' = (A'\cup B', E')$ is as follows. Corresponding to every man $a \in A$, there will be two men $a_0$ and $a_1$ in $A'$ and one woman $d(a)$ in~$B'$. The vertex $d(a)$ will be referred to as the dummy woman corresponding to~$a$. Corresponding to every woman $b \in B$, there will be exactly one woman in $B'$ -- for the sake of simplicity, we will use $b$ to refer to this woman as well. Thus $B' = B \cup d(A)$, where $d(A) = \{d(a): a \in A\}$ is the set of dummy women. Regarding the other side of the graph, $A' = A_0 \cup A_1$, where $A_i = \{a_i: a \in A\}$ for $i = 0,1$, and vertices in $A_0$ are called level~0 vertices, while vertices in $A_1$ are called level~1 vertices. We now describe the edge set $E'$ of~$G'$. For each $a \in A$, the vertex $d(a)$ has exactly two neighbors: these are $a_0$ and $a_1$ and $d(a)$'s preference order is $a_0$ followed by~$a_1$. The dummy woman $d(a)$ is $a_1$'s most preferred neighbor and $a_0$'s least preferred neighbor. 
\begin{itemize} \item The preference list of $a_0$ is all the neighbors of $a$ (in $a$'s preference order) followed by~$d(a)$. \item The preference list of $a_1$ is $d(a)$ followed by the neighbors of $a$ (in $a$'s preference order) in~$G$. \end{itemize} For any $b \in B$, its preference list in $G'$ is its level~1 neighbors in the same order of preference as in $G$ followed by its level~0 neighbors in the same order of preference as in~$G$. For instance, if $b$'s preference list in $G$ is $a$ followed by $a'$, then $b$'s preference list in $G'$ is top-choice $a_1$, then $a'_1$, and then $a_0$, and the last-choice is $a'_0$. We show an example in Fig.~\ref{fig:new-graph}. \begin{center} \begin{figure} \caption{The graph $G'$ on the right corresponding to $G$ on the left. We used blue to color edges in $(A_1 \times B) \cup (A_0 \times d(A))$ and orange to color edges in $(A_0 \times B) \cup (A_1 \times d(A))$.} \label{fig:new-graph} \end{figure} \end{center} \vspace*{-0.5in} We now define the mapping $T: \{$stable matchings in $G'\} \rightarrow \{$dominant matchings in~$G\}$. Let $M'$ be any stable matching in~$G'$. \begin{itemize} \item $T(M')$ is the set of edges obtained by deleting all edges involving vertices in $d(A)$ (i.e., dummy women) from $M'$ and replacing every edge $(a_i,b) \in M'$, where $b \in B$ and $i \in \{0,1\}$, by the edge~$(a,b)$. \end{itemize} It is easy to see that $T(M')$ is a valid matching in~$G$. This is because $M'$ has to match $d(a)$, for every $a \in A$, since $d(a)$ is the top-choice for~$a_1$ (if $d(a)$ were unmatched, then $(a_1,d(a))$ would block $M'$). Thus for each $a \in A$, one of $a_0, a_1$ has to be matched to~$d(a)$. Hence at most one of $a_0,a_1$ is matched to a non-dummy woman $b$ and thus $M = T(M')$ is a matching in~$G$. \paragraph{The proof that $M$ is a dominant matching in $G$.} This proof is similar to the proof of correctness of the maximum size popular matching algorithm in \cite{Kav12-journal}. As described in Section~\ref{sec:char}, in the graph $G$, label each edge $e = (a,b)$ in $E \setminus M$ by the pair $(\alpha_e,\beta_e)$, where $\alpha_e \in \{+,-\}$ is $a$'s vote for $b$ vs.\ $M(a)$ and $\beta_e \in \{+,-\}$ is $b$'s vote for $a$ vs.~$M(b)$. \begin{itemize} \item It will be useful to assign a value in $\{0,1\}$ to each $a \in A$. If $M'(a_1) = d(a)$, then $f(a) = 0$ else $f(a) = 1$. So if $a \in A$ is unmatched in $M$ then $(a_0,d(a)) \in M'$ and so $f(a) = 1$. \item We will now define $f$-values for vertices in $B$ as well. If $M'(b) \in A_1$ then $f(b) = 1$, else $f(b) = 0$. So if $b \in B$ is unmatched in $M'$ (and thus in $M$) then $f(b) = 0$. \end{itemize} \begin{new-claim} \label{clm1} The following statements hold on the edge labels: \begin{itemize} \item[(1)] If the edge $(a,b)$ is labeled $(+,+)$, then $f(a) = 0$ and $f(b) = 1$. \item[(2)] If $(y,z)$ is an edge such that $f(y) = 1$ and $f(z) = 0$, then $(y,z)$ has to be labeled~$(-,-)$. \end{itemize} \end{new-claim} \begin{proof} We show part~(1) first. The edge $(a,b)$ is labeled~$(+,+)$. Let $M(a) = z$ and $M(b) = y$. Thus in $a$'s preference list, $b$ ranks better than $z$ and similarly, in $b$'s preference list, $a$ ranks better than~$y$. We know from the definition of our function $T$ that $M'(z) \in \{a_0,a_1\}$ and $M'(b) \in \{y_0,y_1\}$. So there are 4 possibilities: $M'$ contains (1)~$(a_0,z)$ and $(y_0,b)$, (2)~$(a_1,z)$ and $(y_0,b)$, (3)~$(a_1,z)$ and $(y_1,b)$, (4)~$(a_0,z)$ and~$(y_1,b)$. We know that $M'$ has no blocking pairs in $G'$ since it is a stable matching.
In (1), the pair $(a_0,b)$ blocks $M'$, and in (2) and (3), the pair $(a_1,b)$ blocks~$M'$. Thus the only possibility is (4). That is, $M'(b) \in A_1$ and $M'(a_1) = d(a)$. In other words, $f(a) = 0$ and $f(b) = 1$. We now show part~(2) of Claim~\ref{clm1}. We are given that $f(y) = 1$, so $M'(y_0) = d(y)$. We know that $d(y)$ is $y_0$'s last choice and $y_0$ is adjacent to $z$, thus $y_0$ must have been rejected by~$z$. Since we are given that $f(z) = 0$, i.e., $M'(z) \in A_0$, it follows that $M'(z) = u_0$, where $u$ ranks better than $y$ in $z$'s preference list in~$G$. In the graph $G'$, the vertex $z$ prefers $y_1$ to $u_0$ since it prefers any level~1 neighbor to a level~0 neighbor. Thus, by the stability of $M'$, $y_1$ is matched to a neighbor that is ranked better than $z$ in $y$'s preference list, i.e., $M'(y_1) = v$, where $y$ prefers $v$ to~$z$. We have the edges $(y,v)$ and $(u,z)$ in $M$, thus both $y$ and $z$ prefer their respective partners in $M$ to each other. Hence the edge $(y,z)$ has to be labeled~$(-,-)$. \qed \end{proof} Lemmas~\ref{lem:aug-path} and \ref{lem:popular} shown below, along with Lemma~\ref{thm:domn-char}, imply that $M$ is a dominant matching in~$G$. \begin{lemma} \label{lem:aug-path} There is no augmenting path with respect to $M$ in~$G_M$. \end{lemma} \begin{proof} Let $a \in A$ and $b \in B$ be unmatched in~$M$. Then $f(a) = 1$ and $f(b) = 0$. If there is an augmenting path $\rho = \langle a, \cdots, b\rangle$ with respect to $M$ in $G_M$, then in $\rho$ we move from a man whose $f$-value is 1 to a woman whose $f$-value is 0. Thus there have to be two consecutive vertices $y \in A$ and $z \in B$ on $\rho$ such that $f(y) = 1$ and $f(z) = 0$. However part~(2) of Claim~\ref{clm1} tells us that such an edge $(y,z)$ has to be labeled~$(-,-)$. In other words, $G_M$ does not contain the edge $(y,z)$ or equivalently, there is no augmenting path $\rho$ in~$G_M$. \qed \end{proof} \begin{lemma} \label{lem:popular} $M$ is a popular matching in~$G$. \end{lemma} \begin{proof} We will show that $M$ satisfies conditions~(i)-(iii) of Theorem~\ref{thm:pop-char}. \noindent{\em Condition~(i).} Consider any alternating cycle $C$ with respect to $M$ in $G_M$ and let $a$ be any man in $C$: if $f(a) = 0$ then its partner $b = M(a)$ also satisfies $f(b) = 0$ and part~(2) of Claim~\ref{clm1} tells us that there is no edge in $G_M$ between $b$ and any $a'$ such that $f(a') = 1$. Similarly, if $f(a) = 1$ then its partner $b = M(a)$ also satisfies $f(b) = 1$ and though there can be an edge $(y,b)$ labeled $(+,+)$ incident on $b$, part~(1) of Claim~\ref{clm1} tells us that $f(y) = 0$ and thus there is no way the cycle $C$ can return to $a$, whose $f$-value is 1. Hence if $G_M$ contains an alternating cycle $C$ with respect to $M$, then all vertices in $C$ have the same $f$-value. Since there can be no edge labeled $(+,+)$ between 2 vertices whose $f$-value is the same (by part~(1) of Claim~\ref{clm1}), it follows that $C$ has no edge labeled $(+,+)$. \noindent{\em Condition~(ii).} Consider any alternating path $p$ with respect to $M$ in $G_M$ starting from an unmatched vertex, and let this starting vertex be~$a \in A$. Since $a$ is unmatched in $M$, we have $f(a) = 1$ and we know from part~(2) of Claim~\ref{clm1} that there is no edge in $G_M$ between such a man and a woman whose $f$-value is 0. Thus $a$'s neighbor in $p$ is a woman $b'$ such that $f(b')=1$.
Since $f(b') = 1$, its partner $a' = M(b')$ also satisfies $f(a') = 1$ and part~(2) of Claim~\ref{clm1} tells us that there is no edge in $G_M$ between $a'$ and any $b''$ such that $f(b'') = 0$, thus all vertices of $p$ have $f$-value 1 and thus there is no edge labeled $(+,+)$ in~$p$. Suppose the starting vertex in $p$ is~$b \in B$. Since $b$ is unmatched in $M$, we have $f(b) = 0$ and we again know from part~(2) of Claim~\ref{clm1} that there is no edge in $G_M$ between such a woman and a man whose $f$-value is 1. Thus $b$'s neighbor in $p$ is a man $a'$ such that~$f(a')=0$. Since $f(a') = 0$, its partner $b' = M(a')$ also satisfies $f(b') = 0$ and part~(2) of Claim~\ref{clm1} tells us that there is no edge in $G_M$ between $b'$ and any $a''$ such that $f(a'') = 1$, thus all vertices of $p$ have $f$-value 0 and thus there is no edge labeled $(+,+)$ in~$p$. \noindent{\em Condition~(iii).} Consider any alternating path $\rho$ with respect to $M$ in~$G_M$. We can assume that the starting vertex in $\rho$ is matched in $M$ (as condition~(ii) has dealt with the case when this vertex is unmatched). Suppose the starting vertex is~$a \in A$. If $f(a) = 0$ then its partner $b = M(a)$ also satisfies $f(b) = 0$ and part~(2) of Claim~\ref{clm1} tells us that there is no edge in $G_M$ between $b$ and any $a'$ such that $f(a') = 1$, thus all vertices of $\rho$ have $f$-value 0 and thus there is no edge labeled $(+,+)$ in~$\rho$. If $f(a) = 1$ then after traversing some vertices whose $f$-value is 1, we can encounter an edge $(y,z)$ that is labeled $(+,+)$ where $f(z) = 1$ and $f(y) = 0$. However once we reach $y$, we get stuck in vertices whose $f$-value is 0 and thus we can see no more edges labeled $(+,+)$. Suppose the starting vertex in $\rho$ is $b \in B$. If $f(b) = 1$ then its partner $a = M(b)$ also satisfies $f(a) = 1$ and part~(2) of Claim~\ref{clm1} tells us that there is no edge in $G_M$ between $a$ and any $b'$ such that $f(b') = 0$, thus all vertices of $\rho$ have $f$-value 1 and thus there is no edge labeled $(+,+)$ in~$\rho$. If $f(b) = 0$ then after traversing some vertices whose $f$-value is 0, we can encounter an edge $(y,z)$ labeled $(+,+)$ where $f(y) = 0$ and $f(z) = 1$. However once we reach $z$, we get stuck in vertices whose $f$-value is 1 and thus we can see no more edges labeled $(+,+)$. Thus in all cases there is at most one edge labeled $(+,+)$ in~$\rho$. \qed \end{proof} \subsection{$T$ is surjective} \label{sec:onto} We now show that corresponding to any dominant matching $M$ in $G$, there is a stable matching $M'$ in $G'$ such that $T(M') = M$. Given a dominant matching $M$ in $G$, we first label each edge $e=(a,b)$ in $E \setminus M$ by the pair $(\alpha_e,\beta_e)$ where $\alpha_e$ is $a$'s vote for $b$ vs.\ $M(a)$ and $\beta_e$ is $b$'s vote for $a$ vs.~$M(b)$. We will work in $G_M$, the subgraph of $G$ obtained by deleting all edges labeled $(-,-)$. We now construct sets $A_0,A_1 \subseteq A$ and $B_0, B_1 \subseteq B$ as described in the algorithm below. These sets will be useful in constructing the matching~$M'$. \begin{enumerate} \item[0.] Initialize $A_0 = B_1 = \emptyset$, \ $A_1 = \{$unmatched men in $M\}$, and $B_0 = \{$unmatched women in $M\}$. \item[1.] For every edge $(y,z) \in E \setminus M$ that is labeled $(+,+)$ do: \begin{itemize} \item let $A_0 = A_0 \cup \{y\}$, \ $B_0 = B_0 \cup \{M(y)\}$, \ $B_1 = B_1 \cup \{z\}$, \ and $A_1 = A_1 \cup \{M(z)\}$. \end{itemize} \item[2.]
While there exists a matched man $a \notin A_0$ that is adjacent in $G_M$ to a woman in $B_0$ do: \begin{itemize} \item $A_0 = A_0 \cup \{a\}$ and $B_0 = B_0 \cup \{M(a)\}$. \end{itemize} \item[3.] While there exists a matched woman $b \notin B_1$ that is adjacent in $G_M$ to a man in $A_1$ do: \begin{itemize} \item $B_1 = B_1 \cup \{b\}$ and $A_1 = A_1 \cup \{M(b)\}$. \end{itemize} \end{enumerate} All unmatched men are in $A_1$ and all unmatched women are in~$B_0$. For every edge $(y,z)$ that is labeled $(+,+)$, we add $y$ and its partner to $A_0$ and $B_0$, respectively while $z$ and its partner are added to $B_1$ and $A_1$, respectively. For any man $a$, if $a$ is adjacent to a vertex in $B_0$ and $a$ is not in $A_0$, then $a$ and its partner get added to $A_0$ and $B_0$, respectively. Similarly, for any woman $b$, if $b$ is adjacent to a vertex in $A_1$ and $b$ is not in $B_1$, then $b$ and its partner get added to $B_1$ and $A_1$, respectively. The following observations are easy to see (refer to Fig.~\ref{fig:lem4}). Every $a \in A_1$ has an even length alternating path in $G_M$ to either \begin{itemize} \item[(1)] a man unmatched in $M$ (by Step~0 and Step~3) or \item[(2)] a man $M(z)$ where $z$ has an edge labeled $(+,+)$ incident on it (by Step~1 and Step~3). \end{itemize} Similarly, every $a \in A_0$ has an odd length alternating path in $G_M$ to either \begin{itemize} \item[(3)] a woman unmatched in $M$ (by Step~0 and Step~2) or \item[(4)] a woman $M(y)$ where $y$ has an edge labeled $(+,+)$ incident on it (by Step~1 and Step~2). \end{itemize} \begin{figure} \caption{Vertices get added to $A_1$ and $A_0$ by alternating paths in $G_M$ from either unmatched vertices or endpoints of edges labeled~$(+,+)$. The bold black edges are in $M$ and the red edge $(y,z)$ is labeled $(+,+)$ with respect to~$M$.} \label{fig:lem4} \end{figure} We show the following lemma here and its proof is based on the characterization of dominant matchings in terms of conditions~(i)-(iv) as given by Corollary~\ref{cor0}. We will also use (1)-(4) observed above in our proof. \begin{lemma} \label{clm3} $A_0 \cap A_1 = \emptyset$. \end{lemma} \begin{proof} \noindent{\em Case~1.} Suppose $a$ satisfies reasons~(1) and (3) for its inclusion in $A_1$ and in $A_0$, respectively. So $a$ is in $A_1$ because it is reachable via an even alternating path in $G_M$ from an unmatched man $u$; also $a$ is in $A_0$ because it is reachable via an odd length alternating path in $G_M$ from an unmatched woman~$v$. Then there is an augmenting path $\langle u,\ldots,v\rangle$ wrt $M$ in $G_M$ -- a contradiction to the fact that $M$ is dominant (by Lemma~\ref{thm:domn-char}). \noindent{\em Case~2.} Suppose $a$ satisfies reasons~(1) and (4) for its inclusion in $A_1$ and in $A_0$, respectively. So $a$ is in $A_1$ because it is reachable via an even alternating path wrt $M$ in $G_M$ from an unmatched man $u$; also $a$ is in $A_0$ because it is reachable via an odd length alternating path in $G_M$ from $z$, where edge $(y,z)$ is labeled~$(+,+)$. Then there is an alternating path wrt $M$ in $G_M$ from an unmatched man $u$ to the edge $(y,z)$ labeled $(+,+)$ and this is a contradiction to condition~(ii) of popularity. \noindent{\em Case~3.} Suppose $a$ satisfies reasons~(2) and (3) for its inclusion in $A_1$ and in $A_0$, respectively. This case is absolutely similar to Case~2. This will cause an alternating path wrt $M$ in $G_M$ from an unmatched woman to an edge labeled $(+,+)$, a contradiction again to condition~(ii) of popularity. 
\noindent{\em Case~4.} Suppose $a$ satisfies reasons~(2) and (4) for its inclusion in $A_1$ and in $A_0$, respectively. So $a$ is reachable via an even length alternating path in $G_M$ from an edge labeled $(+,+)$ and $M(a)$ is also reachable via an even length alternating path in $G_M$ from an edge labeled~$(+,+)$. If it is the same edge labeled $(+,+)$ that both $a$ and $M(a)$ are reachable from, then there is an alternating cycle in $G_M$ with a $(+,+)$ edge -- a contradiction to condition~(i) of popularity. If these are 2 different edges labeled $(+,+)$, then we have an alternating path in $G_M$ with two edges labeled $(+,+)$ -- a contradiction to condition~(iii) of popularity. These four cases finish the proof that $A_0\cap A_1 = \emptyset$. \qed \end{proof} We now describe the construction of the matching $M'$. Initially $M' = \emptyset$. \begin{itemize} \item For each $a \in A_0$: add the edges $(a_0,M(a))$ and $(a_1,d(a))$ to~$M'$. \item For each $a \in A_1$: add the edge $(a_0,d(a))$ to $M'$ and if $a$ is matched in $M$ then add $(a_1,M(a))$ to~$M'$. \item For $a \notin (A_0\cup A_1)$: add the edges $(a_0,M(a))$ and $(a_1,d(a))$ to~$M'$. {\em (Note that the men outside $A_0 \cup A_1$ are not reachable from either unmatched vertices or edges labeled $(+,+)$ via alternating paths in~$G_M$.)} \end{itemize} \begin{lemma} \label{lem:stable} $M'$ is a stable matching in~$G'$. \end{lemma} \begin{proof} Suppose $M'$ is not stable in~$G'$. Then there are edges $(u_i,v)$ and $(a_j,b)$ in $M'$ where $i,j \in\{0,1\}$, such that in the graph $G'$, the vertices $v$ and $a_j$ prefer each other to $u_i$ and $b$, respectively. There cannot be a blocking pair involving a dummy woman, thus the edges $(u,v)$ and $(a,b)$ are in~$M$. If $i = j$, then the pair $(a,v)$ blocks $M$ in~$G$. However, from the construction of the sets $A_0,A_1,B_0,B_1$, we know that all the blocking pairs with respect to $M$ are in $A_0 \times B_1$. Thus there is no blocking pair in $A_0 \times B_0$ or in $A_1 \times B_1$ with respect to $M$ and so~$i \ne j$. Since $v$ prefers $a_j$ to $u_i$ in $G'$, the only possibility is $i=0$ and~$j=1$. It has to be the case that $a$ prefers $v$ to $b$, so there is an edge labeled $(+,-)$ between $a \in A_1$ and $v \in B_0$ (see Fig.~\ref{fig:lem5}). \begin{figure} \caption{If the vertex $a_1$ prefers $v$ to $b$ in $G'$, then $a$ prefers $v$ to $b$ in $G$; thus the edge $(a,v)$ has to be present in~$G_M$.} \label{fig:lem5} \end{figure} So once $v$ got added to $B_0$, since $a$ is adjacent in $G_M$ to a vertex in $B_0$, vertex $a$ satisfied Step~2 of our algorithm to construct the sets $A_0,A_1,B_0$, and~$B_1$. So $a$ would have got added to $A_0$ as well, i.e., $a \in A_0 \cap A_1$, a contradiction to Lemma~\ref{clm3}. Thus there is no blocking pair with respect to $M'$ in~$G'$. \qed \end{proof} For each $a \in A$, note that exactly one of $(a_0,d(a))$, $(a_1,d(a))$ is in~$M'$. In order to form the set $T(M')$, the edges of $M'$ with women in $d(A)$ are pruned and each edge $(a_i,b) \in M'$, where $b \in B$ and $i \in \{0,1\}$, is replaced by~$(a,b)$. It is easy to see that $T(M') = M$. This concludes the proof that every dominant matching in $G$ can be realized as an image under $T$ of some stable matching in~$G'$. Thus $T$ is surjective. Our mapping $T$ can also be used to solve the min-cost dominant matching problem in polynomial time. Here we are given a cost function $c: E \rightarrow Q$ and the problem is to find a dominant matching in $G$ whose sum of edge costs is the least. 
We will use the mapping $T$ established from $\{$stable matchings in $G'\}$ to $\{$dominant matchings in $G\}$ to solve the min-cost dominant matching problem in~$G$. It is easy to extend $c$ to the edge set of~$G'$. For each edge $(a,b)$ in $G$, we will assign $c(a_0,b) = c(a_1,b) = c(a,b)$ and we will set $c(a_0,d(a)) = c(a_1,d(a)) = 0$. Thus the cost of any stable matching $M'$ in $G'$ is the same as the cost of the dominant matching $T(M')$ in~$G$. Since every dominant matching $M$ in $G$ equals $T(M')$ for some stable matching $M'$ in $G'$, it follows that the min-cost dominant matching problem in $G$ is the same as the min-cost stable matching problem in~$G'$. Since a min-cost stable matching in $G'$ can be computed in polynomial time, we can conclude Theorem~\ref{thm:min-cost-dom}. \begin{theorem} \label{thm:min-cost-dom} Given a graph $G = (A \cup B,E)$ with strict preference lists and a cost function $c: E \rightarrow \mathbb{Q}$, the problem of computing a min-cost dominant matching can be solved in polynomial time. \end{theorem} \section{The popular edge problem} \label{sec:pop-edge} In this section we show a decomposition for any popular matching in terms of a stable matching and a dominant matching. We use this result to design a linear time algorithm for the {\em popular edge} problem. Here we are given an edge $e^*=(u,v)$ in $G = (A\cup B,E)$ and we would like to know if there exists a popular matching in $G$ that contains~$e^*$. We claim the following algorithm solves the above problem. \begin{enumerate} \item Check if there is a stable matching $M_{e^*}$ in $G$ that contains edge~$e^*$. If so, then return~$M_{e^*}$. \item Check if there is a dominant matching $M'_{e^*}$ in $G$ that contains edge~$e^*$. If so, then return~$M'_{e^*}$. \item Return ``there is no popular matching that contains edge $e^*$ in~$G$''. \end{enumerate} \paragraph{Running time of the above algorithm.} In step~1 of our algorithm, we have to determine if there exists a stable matching $M_{e^*}$ in $G$ that contains $e^* = (u,v)$. We modify the Gale-Shapley algorithm so that the woman $v$ rejects all proposals from anyone worse than~$u$ (an illustrative sketch of this forced-edge proposal rule is given below). If the modified Gale-Shapley algorithm produces a matching $M$ containing $e^*$, then it is the men-optimal stable matching containing $e^*$ in~$G$. Otherwise, there is no stable matching in $G$ that contains~$e^*$. We refer the reader to~\cite[Section 2.2.2]{GI89} for the correctness of the modified Gale-Shapley algorithm; it is based on the following fact: \begin{itemize} \item If $G$ admits a stable matching that contains $e^*=(u,v)$, then exactly one of (1), (2), (3) occurs in any stable matching $M$ of $G$: {\em (1)~$e^* \in M$, (2)~$v$ is matched to a neighbor better than $u$, (3)~$u$ is matched to a neighbor better than $v$}. \end{itemize} In step~2 of our algorithm for the popular edge problem, we have to determine if there exists a dominant matching in $G$ that contains $e^* = (u,v)$. This is equivalent to checking if there exists a stable matching in $G'$ that contains either the edge $(u_0,v)$ or the edge~$(u_1,v)$. This can be determined by using the same modified Gale-Shapley algorithm as given in the previous paragraph, once for each of these two edges. Thus both steps~1 and 2 of our algorithm can be implemented in $O(m)$ time. \paragraph{Correctness of the algorithm.} Let $M$ be a popular matching in $G$ that contains edge~$e^*$. We will use $M$ to show that there is either a stable matching or a dominant matching that contains~$e^*$.
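Before continuing, we record the illustrative sketch of the forced-edge proposal rule promised above. The sketch is not part of the algorithm's specification: the function and variable names (\texttt{forced\_edge\_stable\_matching}, \texttt{men\_pref}, \texttt{women\_pref}) are ours, the preference lists are assumed to be given as dictionaries mapping each vertex to its ranked list of neighbors (best first), and $e^*$ is assumed to be an edge of the instance.
\begin{verbatim}
from collections import deque

def forced_edge_stable_matching(men_pref, women_pref, e_star):
    # Men-proposing Gale-Shapley in which the woman of the forced edge
    # e* = (u, v) rejects every proposal she ranks below u.  Returns the
    # resulting matching (as a man -> woman dictionary) if it contains
    # e*, and None otherwise.
    u_star, v_star = e_star
    # rank[w][m]: position of man m in woman w's list (smaller = better)
    rank = {w: {m: i for i, m in enumerate(lst)}
            for w, lst in women_pref.items()}
    nxt = {m: 0 for m in men_pref}       # index of m's next proposal
    fiance = {}                          # woman -> currently engaged man
    free = deque(men_pref)               # men who still have to propose
    while free:
        m = free.popleft()
        if nxt[m] >= len(men_pref[m]):
            continue                     # m has exhausted his list
        w = men_pref[m][nxt[m]]
        nxt[m] += 1
        if w == v_star and rank[w][m] > rank[w][u_star]:
            free.append(m)               # v* refuses anyone worse than u*
            continue
        if w not in fiance:
            fiance[w] = m                # w was free: she accepts
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])       # w trades up; her old partner is free
            fiance[w] = m
        else:
            free.append(m)               # w keeps her current partner
    matching = {man: woman for woman, man in fiance.items()}
    return matching if matching.get(u_star) == v_star else None
\end{verbatim}
For step~1 this would be run on $G$ with the forced edge $e^*=(u,v)$; for step~2 it would be run on $G'$, once with the forced edge $(u_0,v)$ and once with $(u_1,v)$.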
As before, label each edge $e = (a,b)$ outside $M$ by the pair of votes $(\alpha_e,\beta_e)$, where $\alpha_e$ is $a$'s vote for $b$ vs.\ $M(a)$ and $\beta_e$ is $b$'s vote for $a$ vs.~$M(b)$. We run the following algorithm now -- this is similar to the algorithm in Section~\ref{sec:onto} to build the subsets $A_0,A_1$ of $A$ and $B_0,B_1$ of $B$, except that all the sets $A_0,A_1,B_0,B_1$ are initialized to empty sets here. \begin{enumerate} \item[0.] Initialize $A_0 = A_1 = B_0 = B_1 = \emptyset$. \item[1.] For every edge $(a,b) \in E \setminus M$ that is labeled $(+,+)$: \begin{itemize} \item let $A_0 = A_0 \cup \{a\}$, \ $B_1 = B_1 \cup \{b\}$, \ $A_1 = A_1 \cup \{M(b)\}$, \ and $B_0 = B_0 \cup \{M(a)\}$. \end{itemize} \item[2.] While there exists a man $a' \notin A_0$ that is adjacent in $G_M$ to a woman in $B_0$ do: \begin{itemize} \item $A_0 = A_0 \cup \{a'\}$ and $B_0 = B_0 \cup \{M(a')\}$. \end{itemize} \item[3.] While there exists a woman $b' \notin B_1$ that is adjacent in $G_M$ to a man in $A_1$ do: \begin{itemize} \item $B_1 = B_1 \cup \{b'\}$ and $A_1 = A_1 \cup \{M(b')\}$. \end{itemize} \end{enumerate} All vertices added to the sets $A_0$ and $B_1$ are matched in $M$ -- otherwise there would be an alternating path from an unmatched vertex to an edge labeled $(+,+)$ and this contradicts condition~(ii) of popularity of $M$ (see Theorem~\ref{thm:pop-char}). Note that every vertex in $A_1$ is reachable via an even length alternating path wrt $M$ in $G_M$ from some man $M(b)$ whose partner $b$ has an edge labeled $(+,+)$ incident on it. Similarly, every vertex in $A_0$ is reachable via an odd length alternating path wrt $M$ in $G_M$ from some woman $M(a)$ whose partner $a$ has an edge labeled $(+,+)$ incident on it. The proof of Case~4 of Lemma~\ref{clm3} shows that $A_0 \cap A_1 = \emptyset$. We have $B_1 = M(A_1)$ and $B_0 = M(A_0)$ (see Fig.~\ref{fig:first}). All edges labeled $(+,+)$ are in $A_0 \times B_1$ (from our algorithm) and all edges in $A_1 \times B_0$ have to be labeled $(-,-)$ (otherwise we would contradict either condition~(i) or (iii) of popularity of $M$). Let $A' = A_0 \cup A_1$ and $B' = B_0 \cup B_1$. Let $M_0$ be the matching $M$ restricted to $A' \cup B'$. The matching $M_0$ is popular on $A' \cup B'$. Suppose not, and let $N_0$ be a matching on $A' \cup B'$ that is more popular. Then the matching $N_0 \cup (M\setminus M_0)$ is more popular than $M$, a contradiction to the popularity of~$M$. Since $M_0$ matches all vertices in $A' \cup B'$, it follows that $M_0$ is dominant on $A' \cup B'$ (there is no larger matching on this vertex set). \begin{figure} \caption{$M_0$ is the matching $M$ restricted to $A' \cup B'$. All unmatched vertices are in $(A \setminus A') \cup (B \setminus B')$.} \label{fig:first} \end{figure} Let $M_1 = M \setminus M_0$ and let $Y = A\setminus A'$ and $Z = B \setminus B'$. The matching $M_1$ is stable on $Y \cup Z$ as there is no edge labeled $(+,+)$ in $Y \times Z$ (all such edges are in $A_0 \times B_1$ by Step~1 of our algorithm above). The subgraph $G_M$ contains no edge in $A_1 \times Z$ -- otherwise such a woman $z \in Z$ would have been in $B_1$ (by Step~3 of the algorithm above) and similarly, $G_M$ contains no edge in $Y \times B_0$ -- otherwise such a man $y \in Y$ would have been in $A_0$ (by Step~2 of this algorithm). We will now show Lemmas~\ref{lem:domn-e} and \ref{lem:stab-e}. These lemmas prove the correctness of our algorithm. \begin{lemma} \label{lem:domn-e} If the edge $e^* \in M_0$ then there exists a dominant matching in $G$ that contains~$e^*$.
\end{lemma} \begin{proof} Let $H$ be the induced subgraph of $G$ on $Y \cup Z$. We will transform the stable matching $M_1$ in $H$ to a dominant matching $M^*_1$ in~$H$. We do this by computing a stable matching in the graph $H' = (Y' \cup Z', E')$ -- the definition of $H'$ (with respect to $H$) is analogous to the definition of $G'$ (with respect to $G$) in Section~\ref{sec:dom-mat}. So for each man $y \in Y$, we have two men $y_0$ and $y_1$ in $Y'$ and one dummy woman $d(y)$ in $Z'$; the set $Z' = Z \cup d(Y)$ and the preference lists of the vertices in $Y' \cup Z'$ are exactly as given in Section~\ref{sec:char} for the vertices in~$G'$. We wish to compute a dominant matching in $H$, equivalently, a stable matching in~$H'$. However, we will not compute a stable matching in $H'$ from scratch since we want to obtain a dominant matching in $H$ using~$M_1$. So we compute a stable matching in $H'$ by starting with the following matching in $H'$ (this is essentially the same as $M_1$): \begin{itemize} \item for each edge $(y,z)$ in $M_1$, include the edges $(y_0,z)$ and $(y_1,d(y))$ in this initial matching and for each unmatched man $y$ in $M_1$, include the edge $(y_0,d(y))$ in this matching. This is a feasible starting matching as there is no blocking pair with respect to this matching. \end{itemize} Now run the Gale-Shapley algorithm in $H'$ with unmatched men proposing and women disposing. Note that the starting set of unmatched men is the set of all men $y_1$ where $y$ is unmatched in~$M_1$. However, as the algorithm progresses, other men could also get unmatched and propose. Let $M'_1$ be the resulting stable matching in~$H'$. Let $M^*_1$ be the dominant matching in $H$ corresponding to the stable matching $M'_1$ in~$H'$. Observe that $M_0$ is untouched by the transformation $M_1 \leadsto M^*_1$. Let $M^* = M_0\cup M^*_1$. Since $e^* \in M_0$, the matching $M^*$ contains~$e^*$. \begin{new-claim} \label{clm2} $M^*$ is a dominant matching in~$G$. \end{new-claim} The proof of Claim~\ref{clm2} involves some case analysis and is given at the end of this section. Thus there is a dominant matching $M^*$ in $G$ that contains $e^*$ and this finishes the proof of Lemma~\ref{lem:domn-e}. \qed \end{proof} \begin{lemma} \label{lem:stab-e} If the edge $e^* \in M_1$ then there exists a stable matching in $G$ that contains~$e^*$. \end{lemma} \begin{proof} Here we will leave $M_1$ untouched and transform the dominant matching $M_0$ on $A' \cup B'$ to a stable matching $M'_0$ on $A' \cup B'$. We do this by {\em demoting} all men in~$A_1$. That is, we run the stable matching algorithm on $A' \cup B'$ with preference lists as in the original graph $G$, i.e., men in $A_1$ are not promoted over the ones in~$A_0$. Our starting matching is $M_0$ restricted to edges in~$A_1 \times B_1$. Since there is no blocking pair with respect to $M_0$ in $A_1 \times B_1$, this is a feasible starting matching. Now unmatched men (all those in $A_0$) propose in decreasing order of preference to the women in $B'$ and when a woman receives a better proposal than what she currently has, she discards her current partner and accepts the new proposal. This may make men in $A_1$ single and so they too propose. This is the Gale-Shapley algorithm, the only difference being that our starting matching is not empty but $M_0$ restricted to the edges of~$A_1 \times B_1$. Let $M'_0$ be the resulting matching on~$A'\cup B'$. Let $M' = M'_0 \cup M_1$. This is a matching that contains the edge $e^*$ since $e^* \in M_1$.
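Both constructions above resume the Gale-Shapley algorithm from a non-empty starting matching. The following minimal sketch is ours and purely illustrative, not part of the proof: it shows the mechanics of such a warm start, assuming preference lists stored as dictionaries ordered from most to least preferred, with a displaced man resuming his proposals just below the partner he lost. Whether the resulting matching is stable depends on the structural properties of the starting matching established in these proofs.
\begin{verbatim}
# Illustrative only: Gale-Shapley resumed from a given starting matching.
# seed: dict man -> woman for the initially matched men.
def gale_shapley_from_seed(men_pref, women_pref, seed):
    rank = {w: {m: i for i, m in enumerate(p)}
            for w, p in women_pref.items()}
    partner = {w: m for m, w in seed.items()}          # woman -> man
    nxt = {m: (men_pref[m].index(seed[m]) + 1 if m in seed else 0)
           for m in men_pref}
    free = [m for m in men_pref if m not in seed]      # unmatched men propose
    while free:
        m = free.pop()
        while nxt[m] < len(men_pref[m]):
            w = men_pref[m][nxt[m]]
            nxt[m] += 1
            if w not in partner:
                partner[w] = m
                break
            if rank[w][m] < rank[w][partner[w]]:       # w upgrades to m
                free.append(partner[w])
                partner[w] = m
                break
    return {m: w for w, m in partner.items()}
\end{verbatim}
In the present proof, the seed is $M_0$ restricted to the edges of $A_1 \times B_1$ and the initially free men are those in~$A_0$; in the proof of Lemma~\ref{lem:domn-e}, the seed encodes $M_1$ in $H'$ and the initially free men are the men $y_1$ with $y$ unmatched in~$M_1$.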
\begin{new-claim} \label{clm4} $M'$ is a stable matching in~$G$. \end{new-claim} The proof of Claim~\ref{clm4} again involves some case analysis and is given at the end of this section. Thus there is a stable matching $M'$ in $G$ that contains the edge $e^*$ and this finishes the proof of Lemma~\ref{lem:stab-e}. \qed \end{proof} We have thus shown the correctness of our algorithm. We can now conclude the following theorem. \begin{theorem} \label{thm:popular-edge} Given a stable marriage instance $G = (A \cup B,E)$ with strict preference lists and an edge $e^* \in E$, we can determine in linear time if there exists a popular matching in $G$ that contains~$e^*$. \end{theorem} \subsubsection{Proof of Claim~\ref{clm2}.} We need to show that $M^* = M_0 \cup M^*_1$ is a dominant matching, where $M^*_1$ is the dominant matching in $H$ corresponding to the stable matching $M'_1$ in~$H'$. Let $Y_0$ be the set of men $y \in Y$ such that $(y_1,d(y)) \in M'_1$ and let $Y_1$ be the set of men $y \in Y$ such that $(y_0,d(y)) \in M'_1$. Let $Z_1$ be the set of those women in $Z$ that are matched in $M'_1$ to men in $Y_1$ and let $Z_0 = Z\setminus Z_1$. The following properties will be useful to us: \begin{itemize} \item[(i)] If $y \in Y_1$, then $M^*_1(y)$ ranks at least as good as $M_1(y)$ in $y$'s preference list. This is because $y \in Y_1$ and note that $Y_1$ is a {\em promoted} set when compared to~$Y_0$. Thus $y_1$ gets at least as good a partner in $M^*_1$ as in the men-optimal stable matching in $H$, which is at least as good as $M_1(y)$, as $M_1$ is a stable matching in~$H$. \item[(ii)] If $z \in Z_0$, then $M^*_1(z)$ ranks at least as good as $M_1(z)$ in $z$'s preference list. This is because in the computation of the stable matching $M'_1$, if the vertex $z$ rejects $M_1(z)$, then it was upon receiving a better proposal from a neighbor in $Y_0$ (since $z \in Z_0$). Thus $z$'s final partner in $M'_1$, and hence in $M^*_1$, ranks at least as good as $M_1(z)$ in her preference list. \end{itemize} Label each edge $e=(a,b)$ in $E \setminus M^*$ by the pair of votes $(\alpha_e,\beta_e)$, where $\alpha_e$ is $a$'s vote for $b$ vs.\ $M^*(a)$ and $\beta_e$ is $b$'s vote for $a$ vs.~$M^*(b)$. We will first show the following claim here. \begin{claim} Every edge in $(A_1 \cup Y_1) \times (B_0 \cup Z_0)$ is labeled $(-,-)$ with respect to~$M^*$. \end{claim} We already know that every edge in $A_1 \times B_0$ is labeled $(-,-)$ with respect to $M_0$ and as shown in part~(1) of Claim~\ref{clm1}, it is easy to see that every edge in $Y_1 \times Z_0$ is labeled $(-,-)$ with respect to~$M^*_1$. We will now show that all edges in $(Y_1 \times B_0) \cup (A_1 \times Z_0)$ are labeled~$(-,-)$ with respect to $M$. \begin{itemize} \item Consider any edge $(y,b) \in Y_1 \times B_0$. We know that $(y,b)$ was labeled $(-,-)$ with respect to~$M$. We have $M^*(b) = M_0(b) = M(b)$. Thus $b$ prefers $M^*(b)$ to~$y$. The man $y$ preferred $M(y)$ to $b$ and since $y \in Y_1$, we know from (i) above that $y$ ranks $M^*_1(y)$ at least as good as $M_1(y) = M(y)$. Thus the edge $(y,b)$ is labeled $(-,-)$ with respect to $M^*$ as well. \item Consider any edge in $(a,z) \in A_1 \times Z_0$. We know that $(a,z)$ was labeled $(-,-)$ with respect to~$M$. We have $M^*(a) = M_0(a) = M(a)$. Thus $a$ prefers $M^*(a)$ to~$z$. The woman $z$ preferred $M_1(z)$ to $a$ and we know from (ii) above that $z$ ranks $M^*_1(z)$ at least as good as~$M_1(z)$. Thus the edge $(a,z)$ is labeled $(-,-)$ with respect to $M^*$ as well. 
\end{itemize} Thus we have shown that every edge in $(A_1 \cup Y_1) \times (B_0 \cup Z_0)$ is labeled~$(-,-)$. We will now show the following claim. \begin{claim} Any edge labeled $(+,+)$ with respect to $M^*$ has to be in $(A_0 \cup Y_0) \times (B_1 \cup Z_1)$. \end{claim} \begin{figure} \caption{All edges in $(A_1\cup Y_1)\times(B_0\cup Z_0)$ are labeled $(-,-)$ wrt $M^*$ and all edges labeled $(+,+)$ wrt $M^*$ are in $(A_0\cup Y_0) \times (B_1\cup Z_1)$.} \label{fig:lemma6} \end{figure} Note that we already know that no edge in $A_i \times B_i$ is labeled $(+,+)$ wrt $M_0$ and no edge in $Y_i \times Z_i$ is labeled $(+,+)$ wrt $M^*_1$, for $i = 0,1$. We will now show that no edge in $\cup_{i=0}^1(A_i \times Z_i) \cup (Y_i \times B_i)$ is labeled $(+,+)$ (see Fig.~\ref{fig:lemma6}). \begin{enumerate} \item[(1)] Consider any edge in $(a,z) \in A_1 \times Z_1$. We know that $(a,z)$ was labeled $(-,-)$ wrt $M$. Since $M^*(a) = M_0(a) = M(a)$, the first coordinate in this edge label wrt $M^*$ is still~$-$. Thus this edge is not labeled $(+,+)$ wrt~$M^*$. \item[(2)] Consider any edge in $(y,b) \in Y_1 \times B_1$: there was no edge labeled $(+,+)$ wrt $M$ in~$Y \times B_1$. -- Suppose $(y,b)$ was labeled $(-,-)$ or $(+,-)$ wrt~$M$. Since $M^*(b) = M_0(b) = M(b)$, the second coordinate in this edge label with respect to $M^*$ is still~$-$. Thus this edge is not labeled $(+,+)$ wrt~$M^*$. -- Suppose $(y,b)$ was labeled $(-,+)$ wrt~$M$. Since $y \in Y_1$, we know from (i) above that $y$ ranks $M^*_1(y)$ at least as good as~$M_1(y)$. Hence the first coordinate in this edge label wrt $M^*$ is still~$-$. Thus this edge is not labeled $(+,+)$ wrt~$M^*$. \item[(3)] Consider any edge $(y,b) \in Y_0 \times B_0$: we know that $(y,b)$ was labeled $(-,-)$ wrt~$M$. Since $M^*(b) = M_0(b) = M(b)$, the second coordinate in this edge label wrt $M^*$ is still~$-$. Thus this edge is not labeled $(+,+)$ wrt~$M^*$. \item[(4)] Consider any edge in $(a,z) \in A_0 \times Z_0$: there was no edge labeled $(+,+)$ wrt $M$ in~$A_0 \times Z$. -- Suppose $(a,z)$ was labeled $(-,-)$ or $(-,+)$ wrt~$M$. Since $M^*(a) = M_0(a) = M(a)$, the first coordinate in this edge label wrt $M^*$ is still~$-$. Thus this edge is not labeled $(+,+)$ wrt~$M^*$. -- Suppose $(a,z)$ was labeled $(+,-)$ wrt~$M$. Since $z \in Z_0$, we know from (ii) above that $z$ ranks $M^*_1(z)$ at least as good as~$M_1(z)$. Hence the second coordinate in this edge label wrt $M^*$ is still~$-$. Thus this edge is not labeled $(+,+)$ wrt~$M^*$. \end{enumerate} Thus any edge labeled $(+,+)$ has to be in $(A_0 \cup Y_0) \times (B_1 \cup Z_1)$. This fact along with the earlier claim that all edges in $(A_1 \cup Y_1) \times (B_0 \cup Z_0)$ are labeled $(-,-)$, immediately implies that Claim~\ref{clm1} holds here, where we assign $f$-values to all vertices in $A \cup B$ as follows: if $a \in A_1 \cup Y_1$ then $f(a) = 1$ else $f(a) = 0$; similarly, if $b \in B_1 \cup Z_1$ then $f(b) = 1$ else~$f(b) = 0$. Thus if the edge $(a,b)$ is labeled $(+,+)$, then $f(a) = 0$ and $f(b) = 1$, and if $(y,z)$ is an edge such that $f(y) = 1$ and $f(z) = 0$, then $(y,z)$ has to be labeled~$(-,-)$. Lemmas~\ref{lem:aug-path} and \ref{lem:popular} with $M^*$ replacing $M$ follow now (since all they need is Claim~\ref{clm1}). We can conclude that $M^*$ is dominant in~$G$. Thus there is a dominant matching in $G$ that contains~$e^*$. \qed \subsubsection{Proof of Claim~\ref{clm4}.} We will now show that $M' = M'_0 \cup M_1$ is a stable matching. 
We already know that there is no edge labeled $(+,+)$ in $A' \times B'$ with respect to $M'_0$ and there is no edge labeled $(+,+)$ in $Y \times Z$ with respect to~$M_1$. Now we need to show that there is no edge labeled $(+,+)$ either in $A' \times Z$ or in~$Y \times B'$. Label each edge $e=(a,b)$ in $E\setminus M'$ by the pair of votes $(\alpha_e,\beta_e)$, where $\alpha_e$ is $a$'s vote for $b$ vs.\ $M'(a)$ and $\beta_e$ is $b$'s vote for $a$ vs.~$M'(b)$. We will first show that there is no edge labeled $(+,+)$ in $A' \times Z$, i.e., in $(A_0\cup A_1) \times Z$. \begin{itemize} \item[(1)] Consider any $(a,z) \in A_1 \times Z$: this edge was labeled $(-,-)$ wrt~$M$. Since $M'(z) = M_1(z) = M(z)$, the second coordinate of the label of this edge wrt $M'$ is~$-$. Thus this edge cannot be labeled $(+,+)$ wrt~$M'$. \item[(2)] Consider any $(a,z) \in A_0 \times Z$: there was no edge labeled $(+,+)$ wrt $M$ in $A' \times Z$. -- Suppose $(a,z)$ was labeled $(+,-)$ or $(-,-)$ wrt~$M$. Since $M'(z) = M_1(z) = M(z)$, the second coordinate of the label of this edge wrt $M'$ is~$-$. -- Suppose $(a,z)$ was labeled~$(-,+)$. Since $a \in A_0$, his neighbor $M'_0(a)$ is ranked at least as good as $M_0(a)$ in his preference list. This is because women in $B_0$ are unmatched in our starting matching and no woman $b \in B_0$ prefers any neighbor in $A_1$ to $M_0(b)$ (all edges in $A_1\times B_0$ are labeled $(-,-)$ wrt $M_0$). Thus in our algorithm that computes~$M'_0$, $a$ will get accepted either by $M_0(a)$ or a better neighbor. Hence the first coordinate of this edge label wrt $M'$ is still~$-$. \end{itemize} We will now show that there is no edge labeled $(+,+)$ with respect to $M'$ in $Y \times B'$, i.e., in $Y \times (B_0 \cup B_1)$. \begin{itemize} \item[(3)] Consider any $(y,b) \in Y \times B_0$: the edge $(y,b)$ was labeled $(-,-)$ wrt~$M$. Since $M'(y) = M_1(y) = M(y)$, the first coordinate of the label of this edge wrt $M'$ is~$-$. Thus this edge cannot be labeled $(+,+)$ wrt~$M'$. \item[(4)] Consider any $(y,b) \in Y \times B_1$: there was no edge labeled $(+,+)$ wrt $M$ in $Y \times B'$. -- Suppose $(y,b)$ was labeled $(-,+)$ or $(-,-)$ wrt~$M$. Since $M'(y) = M_1(y) = M(y)$, the first coordinate of the label of this edge wrt $M'$ is~$-$. -- Suppose $(y,b)$ was labeled~$(+,-)$. Since $b \in B_1$, her neighbor $M'_0(b)$ is ranked at least as good as $M_0(b)$ in her preference list. This is because our starting matching matched $b$ to $M_0(b)$ and $b$ would reject $M_0(b)$ only upon receiving a better proposal. Thus the second coordinate of the label of this edge with respect to $M'$ is~$-$. \end{itemize} This completes the proof that there is no edge labeled $(+,+)$ with respect to $M'$ in~$G$. In other words, $M'$ is a stable matching in~$G$. \qed \section{Finding an unstable popular matching} \label{sec:dom-vs-stab} We are given $G = (A \cup B,E)$ with strict preference lists and we would like to know if every popular matching in $G$ is also stable. In order to answer this question, we could compute a dominant matching $D$ and a stable matching $S$ in~$G$. If $|D| > |S|$, then it is obviously the case that not every popular matching in $G$ is stable. However it could be the case that $D$ is stable (and so $|D| = |S|$). We now show an efficient algorithm to check if $\{$popular matchings$\} = \{$stable matchings$\}$ or not in $G$. Note that in the latter case, we have $\{$stable matchings$\} \subsetneq \{$popular matchings$\}$ in~$G$. Let $G$ admit an unstable popular matching~$M$. 
We know that $M$ can be partitioned into $M_0 \mathbin{\mathaccent\cdot\cup} M_1$, as described in Section~\ref{sec:pop-edge}. Here $M_0$ is a dominant matching on $A' \cup B'$ and $M_1$ is stable on $Y \cup Z$, where $Y = A \setminus A'$ and $Z = B \setminus B'$ (refer to Fig.~\ref{fig:first}). Since $M$ is unstable, there is an edge $(a,b)$ that blocks~$M$. Since there is no blocking pair involving on any vertex in $Y \cup Z$, it has to be the case that $a \in A'$ and $b \in B'$ (in particular, $a \in A_0$ and $b \in B_1$). Run the transformation $M_1 \leadsto M^*_1$ performed in the proof of Lemma~\ref{lem:domn-e}. Claim~\ref{clm2} tells us that $M^* = M_0 \cup M^*_1$ is a dominant matching. The edge $(a,b)$ is a blocking pair to $M^*$ since $M^*(a) = M_0(a)$ and $M^*(b) = M_0(b)$, so $a$ and $b$ prefer each other to their respective partners in~$M^*$. Thus $M^*$ is an unstable dominant matching and Lemma~\ref{non-stab-domn} follows. \begin{lemma} \label{non-stab-domn} If $G$ admits an unstable popular matching then $G$ admits an unstable dominant matching. \end{lemma} Hence in order to answer the question of whether every popular matching in $G$ is stable or not, we need to decide if there exists a dominant matching $M$ in $G$ with a blocking pair. We will use the mapping $T: \{$stable matchings in $G'\} \rightarrow \{$dominant matchings in $G\}$ defined in Section~\ref{sec:dom-mat} here. Our task is to determine if there exists a stable matching in $G'$ that includes a pair of edges $(a_0,v)$ and $(u_1,b)$ such that $a$ and $b$ prefer each other to $v$ and $u$, respectively, in~$G$. It is easy to decide in $O(m^3)$ time whether such a stable matching exists or not in~$G'$. \begin{itemize} \item For every pair of edges $e_1 = (a,v)$ and $e_2 = (u,b)$ in $G$ such that $a$ and $b$ prefer each other to $v$ and $u$, respectively: determine if there is a stable matching in $G'$ that contains the pair of edges $(a_0,v)$ and~$(u_1,b)$. \end{itemize} An algorithm to construct a stable matching that contains a pair of edges (if such a matching exists) is similar to the algorithm described earlier to construct a stable matching that contains a single edge~$e$. In the graph $G'$, we modify Gale-Shapley algorithm so that $b$ rejects proposals from all neighbors ranked worse than $u_1$ and $v$ rejects all proposals from neighbors ranked worse than~$a_0$. If the resulting algorithm returns a stable matching that contains the edges $(a_0,v)$ and $(u_1,b)$, then we have the desired matching; else $G'$ has no stable matching that contains this particular pair of edges. In order to determine if there exists an unstable dominant matching, we may need to go through all pairs of edges $(e_1,e_2) \in E\times E$. Since we can determine in linear time if there exists a stable matching in $G'$ with any particular pair of edges, the entire running time of this algorithm is $O(m^3)$, where $m = |E|$. \paragraph{A faster algorithm.} It is easy to improve the running time to $O(m^2)$. For each edge $(a,b) \in E$ we check for the following: \begin{itemize} \item[($\ast$)] a stable matching in $G'$ such that (1)~$a_0$ is matched to a neighbor that is ranked worse than $b$, and (2)~$b$ is matched to a neighbor $u_1$ where $u$ is ranked worse than $a$ in $b$'s list. 
\end{itemize} We modify the Gale-Shapley algorithm in $G'$ so that (1)~$b$ rejects all offers from level~0 neighbors, i.e., $b$ accepts proposals only from level~1 neighbors, and (2)~every neighbor of $a_0$ that is ranked better than $b$ rejects proposals from~$a_0$. Suppose ($\ast$) holds. Then this modified Gale-Shapley algorithm returns among all such stable matchings, the most men-optimal and women-pessimal one~\cite{GI89}. Thus among all stable matchings that match $a_0$ to a neighbor ranked worse than $b$ and the woman $b$ to a level~1 neighbor, the matching returned by the above algorithm matches $b$ to its least preferred neighbor and $a_0$ to its most preferred neighbor. Hence if the modified Gale-Shapley algorithm returns a matching that is (i)~unstable or (ii)~matches $a_0$ to $d(a_0)$ or (iii)~matches $b$ to a neighbor better than $a_1$, then there is no dominant matching $M$ in $G$ such that the pair $(a,b)$ blocks~$M$. Else we have the desired stable matching in $G'$, call this matching~$M'$. The matching $T(M')$ will be a dominant matching in $G$ where the pair $(a,b)$ blocks~$T(M')$. Since we may need to go through all edges in $E$ and the time taken for any edge $(a,b)$ is $O(m)$, the entire running time of this algorithm is~$O(m^2)$. We have thus shown the following theorem. \begin{theorem} \label{thm:unstable} Given $G = (A \cup B,E)$ where every vertex has a strict ranking over its neighbors, we can decide in $O(m^2)$ time whether every popular matching in $G$ is stable or not; if not, we can return an unstable popular matching. \end{theorem} \subsubsection{Conclusions and Open problems.} We considered the popular edge problem in a stable marriage instance $G= (A \cup B,E)$ with strict preference lists and showed a linear time algorithm for this problem. A natural extension is that we are given $k$ edges $e_1,\ldots,e_k$, for $k \ge 2$, and we would like to know if there exists a popular matching that contains {\em all} these $k$ edges. Another open problem is to efficiently find among all popular matchings that contain a particular edge $e^*$, one of largest size. There are no polynomial time algorithms currently known to construct a popular matching in $G$ that is neither stable nor dominant. The first polynomial time algorithm for the stable matching problem in general graphs (not necessarily bipartite) was given by Irving~\cite{Irv85}. On the other hand, the complexity of the popular matching problem in general graphs is open. Is there a polynomial time algorithm for the dominant matching problem in $G$? \noindent{\em Acknowledgment.} Thanks to Chien-Chung Huang for useful discussions which led to the definition of dominant matchings. \iffalse \fi \subsubsection*{Appendix: An overview of the maximum size popular matching algorithms in \cite{HK13,Kav12-journal}.} Theorem~\ref{thm:pop-char} (from \cite{HK13}) stated in Section~\ref{sec:char} showed conditions~(i)-(iii) as necessary and sufficient conditions for a matching to be popular in~$G = (A \cup B,E)$. It was also observed in \cite{HK13} that condition~(iv): {\em there is no augmenting path with respect to $M$ in $G_M$} was a sufficient condition for a popular matching to be a maximum size popular matching. The goal was to construct a matching $M$ that satisfied conditions~(i)-(iv). 
The algorithm in \cite{HK13} computed appropriate subsets $L$ and $R$ of $A \cup B$ and showed that running Gale-Shapley algorithm with vertices of $L$ proposing and vertices of $R$ disposing resulted in a matching that obeyed conditions~(i)-(iv). Constructing these sets $L$ and $R$ took $O(n_0)$ iterations, where $n_0 = \min(|A|,|B|)$; each iteration involved two invocations of the Gale-Shapley algorithm. Thus the running time of this algorithm was~$O(mn_0)$. A simpler and more efficient algorithm for constructing a matching that satisfied conditions~(i)-(iv) was given in \cite{Kav12-journal}. This algorithm worked with a graph $\tilde{G} = (\tilde{A}\cup B, \tilde{E})$ which is quite similar to the graph $G'$ used in Section~\ref{sec:dom-mat}, except that there were no dummy women in~$\tilde{G}$. The set $\tilde{A}$ had two copies $a_0$ and $a_1$ of each $a \in A$ and at most one of $a_0,a_1$ was {\em active} at any point in time. Every $b \in B$ preferred subscript~1 neighbors to subscript~0 neighbors (within subscript~$i$ neighbors, it was $b$'s original order of preference). The algorithm here was ``active men propose and women dispose''. To begin with, only the men in $\{a_0: a \in A\}$ were active but when a man $a_0$ got rejected by all his neighbors, he became inactive and his counterpart $a_1$ became active. It was shown that the matching returned satisfied conditions~(i)-(iv). This was a linear time algorithm for computing a maximum size popular matching in $G = (A \cup B,E)$. \end{document}
\begin{document} \title{Fractional colorings of cubic graphs with large girth} \author{Franti\v sek Kardo\v s\thanks{Institute of Mathematics, Faculty of Science, University of Pavol Jozef \v Saf\'arik, Jesenn\'a 5, 041 54 Ko\v sice, Slovakia. E-mail: {\tt [email protected]}. This author was supported by Slovak Research and Development Agency under the contract no. APVV-0007-07.} \and Daniel Kr\'al'\thanks{Department of Applied Mathematics and Institute for Theoretical Computer Science (ITI), Faculty of Mathematics and Physics, Charles University, Malostransk\'e n\'am\v est\'\i{} 25, 118 00 Prague 1, Czech Republic. E-mail: {\tt [email protected]}. Institute for Theoretical Computer Science is supported as project 1M0545 by the Czech Ministry of Education. This author was supported by the grant GACR 201/09/0197 and in part also by the grant GAUK~60310.} \and Jan Volec\thanks{Department of Applied Mathematics, Faculty of Mathematics and Physics, Charles University, Malostransk\'e n\'am\v est\'\i{} 25, 118 00 Prague 1, Czech Republic. E-mail: {\tt [email protected]}. This author was supported by the grant GACR 201/09/0197.} } \date{} \maketitle \begin{abstract} We show that every (sub)cubic $n$-vertex graph with sufficiently large girth has fractional chromatic number at most $1/0.4352$, which implies that it contains an independent set of size at least $0.4352\,n$. Our bound on the independence number also applies to random cubic graphs, and it improves existing lower bounds on the maximum cut in cubic graphs with large girth. \end{abstract} \section{Introduction} \label{sect-intro} An {\it independent set} is a subset of vertices such that no two of them are adjacent. The {\it independence number} $\alpha(G)$ of a graph $G$ is the size of the largest independent set in $G$. In this paper, we study independent sets in cubic graphs with large girth. Recall that a graph $G$ is {\em cubic} if every vertex of $G$ has degree three, $G$ is {\em subcubic} if every vertex has degree at most three, and the {\em girth} of $G$ is the length of the shortest cycle of $G$. A {\em fractional coloring} of a graph $G$ is an assignment of weights to independent sets in $G$ such that for each vertex $v$ of $G$, the sum of weights of the sets containing $v$ is at least one. The {\em fractional chromatic number $c_f(G)$} of $G$ is the minimum sum of weights of independent sets forming a fractional coloring. The main result of this paper asserts that if $G$ is a cubic graph with sufficiently large girth, then there exists a probability distribution on its independent sets such that each vertex is contained in an independent set drawn according to this distribution with probability at least $0.4352$. This implies that every $n$-vertex cubic graph $G$ with large girth contains an independent set of size at least $0.4352\,n$ and its fractional chromatic number is at most $1/0.4352$ (to see the latter, consider the probability distribution and assign each independent set $I$ in $G$ a weight equal to $c\,p(I)$ where $p(I)$ is the probability of $I$ and $c=1/0.4352$). In addition, our lower bound on the independence number also translates to random cubic graphs. Let us now survey previous results in this area. Inspired by Ne\v set\v ril's Pentagon Conjecture~\cite{bib-nesetril}, Hatami and Zhu~\cite{bib-hatami} showed that every cubic graph with sufficiently large girth has fractional chromatic number at most $8/3$.
For the independence number, Hoppen~\cite{bib-hoppen} showed that every $n$-vertex cubic graph with sufficiently large girth has independence number at least $0.4328\,n$; this bound matches an earlier bound of Wormald~\cite{bib-wormald} for random cubic graphs, which was also proven independently by Frieze and Suen~\cite{bib-frieze}. The bound for random cubic graphs was further improved by Duckworth and Zito~\cite{bib-duckworth} to $0.4347\,n$, an improvement of a bound that had been unchallenged for almost 15 years. \subsection{Random cubic graphs} The independence numbers of random cubic graphs and cubic graphs with large girth are closely related. Here, we consider the model where a random cubic graph is chosen uniformly at random from all cubic graphs on $n$ vertices. Any lower bound on the independence number for the class of cubic graphs with large girth is also a lower bound for random cubic graphs. First observe that a lower bound on the independence number for cubic graphs with large girth translates to subcubic graphs with large girth: assume that $H$ is a subcubic graph with $m_1$ vertices of degree one and $m_2$ vertices of degree two. Now consider a $(2m_1+m_2)$-regular graph with large girth and replace each of its vertices with a copy of $H$ in such a way that its edges are incident with vertices of degree one and two in the copies. The obtained graph $G$ is cubic and has large girth; an application of the lower bound on the independence number for cubic graphs yields that one of the copies of $H$ contains a large independent set, too. Since a random cubic graph contains asymptotically almost surely only $o(n)$ cycles shorter than a fixed integer $g$~\cite{bib-wormald-shortcycles}, any lower bound for cubic graphs with large girth also applies to random cubic graphs. Conversely, Hoppen and Wormald~\cite{bib-wormald-private} have recently developed a technique for translating lower bounds (obtained in a specific but quite general way) for the independence number of a random cubic graph to cubic graphs with large girth. In the other direction, any upper bound on the independence number for a~random cubic graph also applies to cubic graphs with large girth (just remove a few short cycles from a random graph). The currently best upper bound for random cubic graphs with $n$ vertices, and thus for cubic graphs with large girth, is $0.455\,n$, derived by McKay~\cite{bib-mckay}. In~\cite{bib-mckay}, McKay mentions that experimental evidence suggests that almost all $n$-vertex cubic graphs contain independent sets of size $0.439\,n$; newer experiments of McKay~\cite{bib-mckay-private} suggest that the lower bound can even be ${0.439}new n$. \subsection{Maximum cut} Our results also improve known bounds on the size of the maximum cut in cubic graphs with large girth. A {\em cut} of a graph $G$ is a set of edges obtained by partitioning the vertices of $G$ into two subsets $A$ and $B$ and taking all edges with one end-vertex in $A$ and the other in $B$. The {\em maximum cut} is a cut with the largest number of edges. It is known that if $G$ is an $m$-edge cubic graph with sufficiently large girth, then its maximum cut has size at least $6m/7-o(m)$~\cite{bib-zyka}. On the other hand, there exist $m$-edge cubic graphs with large girth whose maximum cut has size at most $0.9351\,m$. This result was first announced by McKay~\cite{bib-mckay-cut}; the proof can be found in~\cite{bib-hh}.
If $G$ has an independent set of size $\alpha$, then it also has a cut of size $3\alpha$ (put the independent set on one side and all the other vertices on the other side). So, if $G$ is a cubic graph with $n$ vertices and $m=3n/2$ edges and the girth of $G$ is sufficiently large, then it has a cut of size at least $3\cdot 0.4352\,n=0.8704\,m$. Let us remark that this bound can be further improved to ${0.4352}cutnot m$ by considering a randomized procedure tuned for producing a cut of large size. Since we focus on independent sets in cubic graphs with large girth in this paper, we omit the details of how the improved bound can be obtained. \section{Structure of the proof} Our proof is inspired by the proof of Hoppen from~\cite{bib-hoppen}. We develop a procedure for obtaining a random independent set in a cubic graph with large girth. Our main result is thus the following. \begin{theorem} \label{thm-distribution} There exists $g>0$ such that for every cubic graph $G$ with girth at least $g$, there is a probability distribution on the independent sets of $G$ such that each vertex is contained in an independent set drawn according to this distribution with probability at least $0.4352$. \end{theorem} We describe our procedure for obtaining a random independent set in a cubic graph in Section~\ref{sect-procedure}. To analyze its performance, we first focus on its behavior for infinite cubic trees in Section~\ref{sect-tree}. The core of our analysis is the independence lemma (Lemma~\ref{indep}) which is used to simplify the recurrence relations appearing in the analysis. We then show that the procedure can be modified for cubic graphs with sufficiently large girth while keeping its performance. The actual performance of our randomized procedure is determined by solving the derived recurrences numerically. As we have explained in Section~\ref{sect-intro}, Theorem~\ref{thm-distribution} has the following three corollaries. \begin{corollary} \label{cor-indep} There exists $g>0$ such that every $n$-vertex subcubic graph with girth at least $g$ contains an independent set of size at least $0.4352\,n$. \end{corollary} \begin{corollary} \label{cor-frac} There exists $g>0$ such that every cubic graph with girth at least $g$ has fractional chromatic number at most $1/0.4352$. \end{corollary} \begin{corollary} \label{cor-cut} There exists $g>0$ such that every cubic graph $G$ with girth at least $g$ has a cut containing at least $0.8704\,m$ edges, where $m$ is the total number of edges of $G$. \end{corollary} \section{Description of the randomized procedure} \label{sect-procedure} The procedure for creating a random independent set is parametrized by three numbers: the number $K$ of its rounds and probabilities $p_1$ and $p_2$. Throughout the procedure, vertices of the input graph have one of three colors: white, blue and red. At the beginning, all vertices are white. In each round, some of the white vertices are recolored in such a way that {\em red} vertices always form an independent set and all vertices adjacent to red vertices as well as some other vertices (see details below) are {\em blue}. All other vertices of the input graph are {\em white}. Red and blue vertices are never again recolored during the procedure. Because of this, when we talk of the {\em degree} of a vertex, we mean the number of its white neighbors. Observe that the neighbors of a white vertex that are not white must be blue. The first round of the procedure is special and differs from the other rounds.
As we have already said, at the very beginning, all vertices of the input graph are white. In the first round, we mark each vertex as active randomly and independently with probability $p_1$. Active vertices with no active neighbor become red and vertices with at least one active neighbor become blue. In particular, if two adjacent vertices are active, they both become blue (as well as their neighbors). At this point, we should note that the probability $p_1$ will be very small. In the second and all subsequent rounds, we first color all white vertices of degree zero red. We then consider all paths formed by white vertices with end vertices of degree one or three and with all inner vertices of degree two. Based on the degrees of their end vertices, we refer to these paths as paths of type $1{\leftrightarrow}1$, $1{\leftrightarrow}3$ or $3{\leftrightarrow}3$. Note that such a path may have no inner vertices. Each vertex of degree two is now activated with probability $p_2$ independently of all other vertices. For each path of type $1{\leftrightarrow}3$, we color the end vertex of degree one with red and we then color all the inner vertices with red and blue in an alternating way. If the neighbor of the end vertex of degree three becomes red, we also color the end vertex of degree three with blue. In other words, we color the end vertex of degree three blue if the path has odd length. For a path of type $1{\leftrightarrow}1$, we choose randomly one of its end vertices, color it red and color all the remaining vertices of the path with red and blue in an alternating way. We refer to the chosen end vertex as the beginning of this path. Note that whether and which vertices of degree two on paths of type $1{\leftrightarrow}1$ or $1{\leftrightarrow}3$ are activated does not influence the above procedure. Paths of type $3{\leftrightarrow}3$ are processed as follows. A path of type $3{\leftrightarrow}3$ becomes active if at least one of its inner vertices is activated, i.e., a path of length $\ell$ is active with probability $1-(1-p_2)^{\ell-1}$. Note that paths with no inner vertices, i.e., edges between two vertices of degree three, are never active. For each active path, we flip a fair coin to select one of its end vertices of degree three, color this vertex blue and its neighbor on the path red. The remaining inner vertices of the path are colored with red and blue in an alternating way. Again, we refer to the chosen end vertex as the beginning of this path. The other end vertex of degree three is colored blue if its neighbor on the path becomes red, i.e., if the path has even length. Note that a vertex that has degree three at the beginning of the round cannot become red during the round but it can become blue because of several different paths ending at it. \section{Analysis for infinite cubic trees} \label{sect-tree} Let us start by introducing some notation used in the analysis. For any edge $uv$ of a cubic graph $G$, $T_{u,v}$ is the component of $G-v$ containing the vertex $u$; we sometimes refer to $u$ as the root of $T_{u,v}$. If $G$ is the infinite cubic tree, then it is the union of $T_{u,v}, T_{v,u}$ and the edge $uv$. The subgraph of $T_{u,v}$ induced by vertices at distance at most $d$ from $u$ is denoted by $T_{u,v}^d$. Observe that if the girth of $G$ is larger than $2d+1$, all the subgraphs $T_{u,v}^d$ are isomorphic to the same rooted tree ${\cal T}^d$ of depth $d$.
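Before analyzing the procedure on the infinite cubic tree, we include a minimal sketch of its first round on a finite graph. The sketch is only illustrative and not part of the analysis; it assumes the graph is stored as a dictionary mapping each vertex to the set of its neighbors, and the function name is ours.
\begin{verbatim}
import random

# Illustrative only: the first round of the randomized procedure.
# Every vertex is activated independently with probability p1; an active
# vertex with no active neighbor turns red, and every vertex with at
# least one active neighbor turns blue; all other vertices stay white.
def first_round(graph, p1, rng=random):
    active = {v for v in graph if rng.random() < p1}
    color = {}
    for v in graph:
        if graph[v] & active:          # some neighbor of v is active
            color[v] = 'blue'
        elif v in active:              # v is active, no active neighbor
            color[v] = 'red'
        else:
            color[v] = 'white'
    return color
\end{verbatim}
The subsequent rounds, which operate on the maximal paths of white degree-two vertices described above, can be simulated along the same lines.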
The infinite rooted tree with the root of degree two and all inner vertices of degree three will be denoted as $T^\infty$. Since any automorphism of the cubic tree yields an automorphism of the probability space of the vertex colorings constructed by our procedure, the probability that a vertex $u$ has a fixed color after $k$ rounds does not depend on the choice of $u$. Hence, we can use $w_k$, $b_k$ and $r_k$ to denote a probability that a fixed vertex of the infinite cubic tree is white, blue and red, respectively, after $k$ rounds. Similarly, $w^i_k$ denotes the probability that a fixed vertex has degree $i$ after $k$ rounds conditioned by the event that it is white after $k$ rounds, i.e., the probability that a fixed vertex white and has degree two after $k$ rounds is $w_k\cdot w^2_k$. The sets of white, blue and red vertices after $k$ rounds of the randomized procedure described in Section~\ref{sect-procedure} will be denoted by $W_k$, $B_k$ and $R_k$, respectively. Similarly, $W^i_k$ denotes the set of white vertices with degree $i$ after $k$ rounds, i.e., $W_k=W^0_k \cup W^1_k\cup W^2_k\cup W^3_k$. Finally, $c_k\left(T_{u,v}\right)$ denotes the coloring of $T_{u,v}$ and $c_k\left(T_{u,v}^d\right)$ the coloring of $T_{u,v}^d$ after $k$ rounds. The set ${\cal C}^d_k$ will consist of all possible colorings $\gamma$ of~${\cal T}^d$ such that the probability of $c_k\left(T_{u,v}^d\right)=\gamma$ is non-zero in the infinite cubic tree (note that this probability does not depend on the edge $uv$) and such that the root of ${\cal T}^d$ is colored white. We extend this notation and use ${\cal C}^\infty_k$ to denote all such colorings of~${\cal T}^\infty$. Since the infinite cubic tree is strongly edge-transitive, we conclude that the probability that $T_{u,v}$ has a given coloring from ${\cal C}^\infty_k$ after $k$ rounds does not depend on the choice of $uv$. Similarly, the probability that both $u$ and $v$ are white after $k$ rounds does not depend on the choice of the edge $uv$. To simplify our notation, this event is denoted by $uv\subseteq W_k$. Finally, the probability that $u$ has degree $i\in\{1,2,3\}$ after $k$ rounds conditioned by $uv\subseteq W_k$ does also not depend on the choice of the edge $uv$. This probability is denoted by $q^i_k$. \subsection{Independence lemma} We now show that the probability that both the vertices of an edge $uv$ are white after $k$ rounds can be computed as a product of two probabilities. This will be crucial in the proof of the Independence Lemma. To be able to state the next lemma, we need to introduce additional notation. 
The probability $P_k(u,v,\gamma)$ for $\gamma\in{\cal C}^\infty_{k-1}$ is the probability that $${\mathbf P}\bigl[u \; \mbox{\rm stays white regarding} \; T_{u,v} { \; | \; } c_{k-1}\left(T_{u,v}\right)=\gamma \land v \in W_{k-1}\bigr]$$ where the phrase ``$u$ stays white regarding $T_{u,v}$'' means \begin{itemize} \item if $k=1$, that neither $u$ nor a neighbor of it in $T_{u,v}$ is active, and \item if $k>1$, that neither \begin{itemize} \item $u$ has degree one after $k-1$ rounds, \item $u$ has degree two after $k-1$ rounds and the path formed by vertices of degree two in $T_{u,v}$ ends at a vertex of degree one, \item $u$ has degree two after $k-1$ rounds, the path formed by vertices of degree two in $T_{u,v}$ ends at a vertex of degree three and at least one of the vertices of degree two on this path is activated, nor \item $u$ has degree three after $k-1$ rounds and is colored blue because of a path of type $1{\leftrightarrow}3$ or $3{\leftrightarrow} 3$ fully contained in $T_{u,v}$. \end{itemize} \end{itemize} Informally, this phrase represents that there is no reason for $u$ not to stay white based on the coloring of $T_{u,v}$ and vertices in $T_{u,v}$ activated in the $k$-th round. \begin{lemma} \label{claim} Consider the randomized procedure for the infinite cubic tree. Let $k$ be an integer, $uv$ an edge of the tree and $\gamma_u$ and $\gamma_v$ two colorings from ${\cal C}^\infty_{k-1}$. The probability \begin{equation} \label{eq-claim} {\mathbf P}\bigl[uv \subseteq W_k { \; | \; } c_{k-1}\left(T_{u,v}\right)=\gamma_u \land c_{k-1}\left(T_{v,u}\right)=\gamma_v\bigr]\;\mbox{,} \end{equation} i.e., the probability that both $u$ and $v$ are white after $k$ rounds conditioned by $c_{k-1}\left(T_{u,v}\right)=\gamma_u$ and $c_{k-1}\left(T_{v,u}\right)=\gamma_v$, is equal to $$P_{k}(u,v,\gamma_u)\cdot P_{k}(v,u,\gamma_v)\;\mbox{.}$$ \end{lemma} \begin{proof} We distinguish the cases $k=1$ and $k>1$. If $k=1$, ${\cal C}^\infty_0$ contains a single coloring $\gamma_0$ where all vertices are white. Hence, the probability (\ref{eq-claim}) is ${\mathbf P}\bigl[uv \subseteq W_1 { \; | \; } c_{k-1}\left(T_{u,v}\right)=\gamma_0 \land c_{k-1}\left(T_{v,u}\right)=\gamma_0\bigr]$ and it is equal to ${\mathbf P}\bigl[uv\subseteq W_1\bigr]=(1-p_1)^6$. On the other hand, $P_1(u,v,\gamma_0)=P_1(v,u,\gamma_0)=(1-p_1)^3$. The assertion of the lemma follows. Suppose that $k>1$. Note that the colorings $\gamma_u$ and $\gamma_v$ completely determine the coloring after $k-1$ rounds. If $u$ has degree one after $k-1$ rounds, then (\ref{eq-claim}) is zero as well as $P_k(u,v,\gamma_u)=0$. If $u$ has degree two and lie on a path of type $1{\leftrightarrow}1$ or $1{\leftrightarrow}3$, then (\ref{eq-claim}) is zero and $P_k(u,v,\gamma_u)=0$ or $P_k(v,u,\gamma_v)=0$ depending which of the trees $T_{u,v}$ and $T_{v,u}$ contains the vertex of degree one. If $u$ has degree two and lie on a path of type $3{\leftrightarrow}3$ of length $\ell$, then (\ref{eq-claim}) is equal to $(1-p_2)^{\ell-1}$. Let $\ell_1$ and $\ell_2$ be the number of vertices of degree two on this path in $T_{u,v}$ and $T_{v,u}$, respectively. Observe that $\ell_1+\ell_2=\ell-1$. Since $P_k(u,v,\gamma_u)=(1-p_2)^{\ell_1}$ and $P_k(v,u,\gamma_v)=(1-p_2)^{\ell_2}$, the claimed equality holds. Hence, we can now assume that the degree of $u$ is three. Similarly, the degree of $v$ is three. Note that $u$ can only become blue and only because of an active path of type $3{\leftrightarrow}3$ ending at $u$. 
This happens with probability $1-P_k(u,v,\gamma_u)$. Similarly, $v$ becomes blue with probability $1-P_k(v,u,\gamma_v)$. Since the event that $u$ becomes blue and $v$ becomes blue conditioned by $c_{k-1}\left(T_{u,v}\right)=\gamma_u$ and $c_{k-1}\left(T_{v,u}\right)=\gamma_v$ are independent, it follows that (\ref{eq-claim}) is also equal to $P_{k}(u,v,\gamma_u)\cdot P_{k}(v,u,\gamma_v)$ in this case. \end{proof} Lemma~\ref{claim} plays a crucial role in the Independence Lemma, which we now prove. Its proof enlights how we designed our randomized procedure. \begin{lemma}[Independence Lemma] \label{indep} Consider the randomized procedure for the infinite cubic tree. Let $k$ be an integer, $uv$ an edge of the tree and $\Gamma_u$ and $\Gamma_v$ two measurable subsets of ${\cal C}^\infty_{k-1}$. Conditioned by the event $uv\subseteq W_{k}$, the events that $c_{k}\left(T_{u,v}\right)\in\Gamma_u$ and $c_k\left(T_{v,u}\right)\in\Gamma_v$ are independent. In other words, $${\mathbf P}\bigl[c_{k}\left(T_{u,v}\right)\in\Gamma_u { \; | \; } uv \subseteq~W_{k} \bigr]= {\mathbf P}\bigl[c_{k}\left(T_{u,v}\right)\in\Gamma_u { \; | \; } uv \subseteq~W_{k} \land c_k\left(T_{v,u}\right)\in\Gamma_v\bigr]\;\mbox{.}$$ \end{lemma} \begin{proof} The proof proceeds by induction on $k$. If $k=1$, the event $uv\subseteq W_{1}$ implies that neither $u$, $v$ nor their neighbors is active during the first round. Conditioned by this, the other vertices of the infinite cubic tree marked active with probability $p_1$ randomly and independently. The result of the marking in $T_{u,v}$ fully determine the coloring of the vertices of $T_{u,v}$ and is independent of the marking and coloring of $T_{v,u}$. Hence, the claim follows. Assume that $k>1$. Fix subsets $\Gamma_u$ and $\Gamma_v$. We aim at showing that the probabilities \begin{equation} \label{eq-a} {\mathbf P}\bigl[c_k\left(T_{u,v}\right)\in\Gamma_u { \; | \; } uv \subseteq W_k \bigr] \end{equation} and \begin{equation} \label{eq-b} {\mathbf P}\bigl[c_k\left(T_{u,v}\right)\in\Gamma_u { \; | \; } u \in W_k \land c_k\left(T_{v,u}\right) \in \Gamma_v \bigr] \end{equation} are equal. The definition of the conditional probability and the fact that ${\mathbf P}[uv\subseteq W_{k-1}|uv \subseteq W_k]=1$ yield that (\ref{eq-a}) is equal to \begin{equation} \frac{{\mathbf P}\bigl[c_k\left(T_{u,v}\right)\in\Gamma_u \land v \in W_k { \; | \; } uv \subseteq W_{k-1} \bigr]} {{\mathbf P}\bigl[uv \subseteq W_k { \; | \; } uv \subseteq W_{k-1} \bigr]}\;\mbox{.} \label{eq-aa} \end{equation} By the induction, for any two subsets $\Gamma_u'$ and $\Gamma_v'$ from $C^\infty_{k-1}$, the probabilities ${\mathbf P}\bigl[c_{k-1}\left(T_{v,u}\right)\in\Gamma_v' { \; | \; } u \in W_{k-1} \land c_{k-1}\left(T_{u,v}\right) \in \Gamma_u' \bigr]$ and ${\mathbf P}\bigl[c_{k-1}\left(T_{v,u}\right)\in\Gamma_v' { \; | \; } uv \subseteq W_{k-1} \bigr]$ are the same. 
Hence, the numerator of (\ref{eq-aa}) is equal to the following: \begin{align*} \int_{\gamma_u', \gamma_v' \in C^\infty_{k-1}} & {\mathbf P}\bigl[uv \subseteq W_k { \; | \; } c_{k-1}\left(T_{u,v}\right)=\gamma_u' \land c_{k-1}\left(T_{v,u}\right)=\gamma_v'\bigr]\times \\* &{\mathbf P}\bigl[c_k\left(T_{u,v}\right)\in\Gamma_u { \; | \; } uv \subseteq W_k \land c_{k-1}\left(T_{u,v}\right)=\gamma_u' \land c_{k-1}\left(T_{v,u}\right)=\gamma_v' \bigr] \\* & {\mathbf d}{\mathbf P}\bigl[c_{k-1}\left(T_{u,v}\right)=\gamma_u' { \; | \; } uv \subseteq W_{k-1} \bigr] {\mathbf d}{\mathbf P}\bigl[c_{k-1}\left(T_{v,u}\right)=\gamma_v' { \; | \; } uv \subseteq W_{k-1} \bigr] \mbox{.} \end{align*} Observe that when conditioning by $uv\subseteq W_k$, the event $c_k\left(T_{u,v}\right)\in\Gamma_u$ is independent of $c_{k-1}\left(T_{v,u}\right)=\gamma_v'$. Hence, the double sum can be rewritten to \begin{align*} \int_{\gamma_u', \gamma_v' \in C^\infty_{k-1}} & {\mathbf P}\bigl[uv \subseteq W_k { \; | \; } c_{k-1}\left(T_{u,v}\right)=\gamma_u' \land c_{k-1}\left(T_{v,u}\right)=\gamma_v'\bigr] \times \\* & {\mathbf P}\bigl[c_k\left(T_{u,v}\right)\in\Gamma_u { \; | \; } uv \subseteq W_k \land c_{k-1}\left(T_{u,v}\right)=\gamma_u' \bigr] \\* & {\mathbf d}{\mathbf P}\bigl[c_{k-1}\left(T_{u,v}\right)=\gamma_u' { \; | \; } uv \subseteq W_{k-1} \bigr] {\mathbf d}{\mathbf P}\bigl[c_{k-1}\left(T_{v,u}\right)=\gamma_v' { \; | \; } uv \subseteq W_{k-1} \bigr] \; \mbox{.} \end{align*} An application of Lemma~\ref{claim} then yields that the double sum is equal to \begin{align*} \int_{\gamma_u', \gamma_v' \in C^\infty_{k-1}} & P_k(u,v,\gamma_u') \cdot P_k(v,u,\gamma_v') \cdot {\mathbf P}\bigl[c_k\left(T_{u,v}\right)=\gamma_u { \; | \; } uv \subseteq W_k \land c_{k-1}\left(T_{u,v}\right)=\gamma_u' \bigr] \\* & {\mathbf d}{\mathbf P}\bigl[c_{k-1}\left(T_{u,v}\right)=\gamma_u' { \; | \; } uv \subseteq W_{k-1} \bigr] {\mathbf d}{\mathbf P}\bigl[c_{k-1}\left(T_{v,u}\right)=\gamma_v' { \; | \; } uv \subseteq W_{k-1} \bigr] \; \mbox{.} \end{align*} Regrouping the terms containing $\gamma_u'$ only and $\gamma_v'$ only, we obtain that the numerator of (\ref{eq-aa}) is equal to \begin{align*} \biggl( \int_{\gamma_u' \in C^\infty_{k-1}} & P_k(u,v,\gamma_u) \times {\mathbf P}\bigl[c_k\left(T_{u,v}\right)\in\Gamma_u { \; | \; } uv \subseteq W_k \land c_{k-1}\left(T_{u,v}\right)=\gamma_u' \bigr] \\* & {\mathbf d}{\mathbf P}\bigl[c_{k-1}\left(T_{u,v}\right)=\gamma_u' { \; | \; } uv \subseteq W_{k-1} \bigr]\biggr) \times \\* \biggl( \int_{\gamma_v' \in C^\infty_{k-1}} & P_k(v,u,\gamma_v) {\mathbf d}{\mathbf P}\bigl[c_{k-1}\left(T_{v,u}\right)=\gamma_v' { \; | \; } uv \subseteq W_{k-1} \bigr] \biggr) \; \mbox{.} \end{align*} Along the same lines, the denominator of~$(\ref{eq-aa})$ can be expressed as \begin{align*} \biggl( \int_{\gamma_u' \in C^\infty_{k-1}} & P_k(u,v,\gamma_u) {\mathbf d}{\mathbf P}\bigl[c_{k-1}\left(T_{u,v}\right)=\gamma_u' { \; | \; } uv \subseteq W_{k-1} \bigr]\biggr) \times \\* \biggl( \int_{\gamma_v' \in C^\infty_{k-1}} & P_k(v,u,\gamma_v) {\mathbf d}{\mathbf P}\bigl[c_{k-1}\left(T_{v,u}\right)=\gamma_v' { \; | \; } uv \subseteq W_{k-1} \bigr] \biggr) \; \mbox{.} \end{align*} Cancelling out the integral over $\gamma_v'$ which is the same in the numerator and the denominator of $(\ref{eq-aa})$, we obtain that $(\ref{eq-a})$ is equal to {\scriptsize \begin{equation} \frac{\int_{\gamma_u' \in C^\infty_{k-1}} P_k(u,v,\gamma_u') {\mathbf P}\bigl[c_k\left(T_{u,v}\right)\in\Gamma_u { \; | \; } uv \subseteq W_k \land 
c_{k-1}\left(T_{u,v}\right)=\gamma_u' \bigr] {\mathbf d}{\mathbf P}\bigl[c_{k-1}\left(T_{u,v}\right)=\gamma_u' { \; | \; } uv \subseteq W_{k-1} \bigr] } {\int_{\gamma_u' \in C^\infty_{k-1}} P_k(u,v,\gamma_u') {\mathbf d}{\mathbf P}\bigl[c_{k-1}\left(T_{u,v}\right)=\gamma_u' { \; | \; } uv \subseteq W_{k-1} \bigr]} \;\mbox{.}\label{eq-a-final} \end{equation} } The same trimming is applied to (\ref{eq-b}). First, the probability (\ref{eq-b}) is expressed as \begin{equation} \frac{{\mathbf P}\bigl[c_k\left(T_{u,v}\right)\in\Gamma_u \land c_k\left(T_{v,u}\right) \in \Gamma_v { \; | \; } uv \subseteq W_{k-1} \bigr]} {{\mathbf P}\bigl[u\in W_k \land c_k\left(T_{v,u}\right) \in \Gamma_v { \; | \; } uv \subseteq W_{k-1} \bigr]} \;\mbox{.} \label{eq-bb} \end{equation} The numerator of $(\ref{eq-bb})$ is then expanded to \begin{align*} \biggl( \int_{\gamma_u' \in C^\infty_{k-1}} & P_k(u,v,\gamma_u') {\mathbf P}\bigl[c_k\left(T_{u,v}\right)\in\Gamma_u { \; | \; } uv \subseteq W_k \land c_{k-1}\left(T_{u,v}\right)=\gamma_u' \bigr] \\* &{\mathbf d}{\mathbf P}\bigl[c_{k-1}\left(T_{u,v}\right)=\gamma_u' { \; | \; } uv \subseteq W_{k-1} \bigr] \biggr) \times \\* \biggl( \int_{\gamma_v' \in C^\infty_{k-1}} & {\mathbf P}\bigl[c_k\left(T_{v,u}\right)\in\Gamma_v { \; | \; } uv \subseteq W_k \land c_{k-1}\left(T_{v,u}\right)=\gamma_v' \cdot P_k(v,u,\gamma_v')\bigr] \\* &{\mathbf d}{\mathbf P}\bigl[c_{k-1}\left(T_{v,u}\right)=\gamma_v' { \; | \; } uv \subseteq W_{k-1} \bigr] \biggr) \end{align*} and the denominator of $(\ref{eq-b})$ is expanded to \begin{align*} \biggl( \int_{\gamma_u' \in C^\infty_{k-1}} & P_k(u,v,\gamma_u') {\mathbf d}{\mathbf P}\bigl[c_{k-1}\left(T_{u,v}\right)=\gamma_u' { \; | \; } uv \subseteq W_{k-1} \bigr]\biggr) \times \\* \biggl( \int_{\gamma_v' \in C^\infty_{k-1}} & {\mathbf P}\bigl[c_k\left(T_{v,u}\right)\in\Gamma_v { \; | \; } uv \subseteq W_k \land c_{k-1}\left(T_{v,u}\right)=\gamma_v' \cdot P_k(v,u,\gamma_v')\bigr] \\* &{\mathbf d}{\mathbf P}\bigl[c_{k-1}\left(T_{v,u}\right)=\gamma_v' { \; | \; } uv \subseteq W_{k-1} \bigr] \biggr) \; \mbox{.} \end{align*} We obtain (\ref{eq-a-final}) by cancelling out the integrals over $\gamma_v'$. The proof is now finished. \end{proof} \subsection{Recurrence relations} \label{sect-recurr} We now derive recurence relations for the probabilities describing the behavior of the randomized procedure. We show how to compute the probabilities after $(k+1)$ rounds only from the probabilities after $k$ rounds. Recall that $w^i_k$ is the probability that a fixed vertex $u$ has degree $i$ after $k$ rounds conditioned by the event that $u$ is white after $k$ rounds. Also recall that $q^i_k$ is the probability that a fixed vertex $u$ with a fixed neighbor $v$ has degree $i$ after $k$ rounds conditioned by the event that both $u$ and $v$ are white after $k$ rounds, i.e., $uv \subseteq W_k$. Finally, $w_k, r_k$ and $b_k$ are probabilities that a fixed vertex is white, red and blue, respectively, after $k$ rounds. If $u$ is white, the {\em white subtree} of $T_{u,v}$ is the maximal subtree containing $u$ and white vertices only. We claim that the probability that the white subtree of $T_{u,v}$ is isomorphic to a tree in a given subset ${\cal T}_0$ after $k$ rounds, conditioned by the event that both $u$ and $v$ are white after $k$ rounds, can be computed from the values of $q^i_k$ only. Indeed, if $T_0\in {\cal T}_0$, the probability that $v$ has degree $i$ as in $T_0$ is $q^i_k$. 
Now, if the degree of $v$ is $i$ as in $T_0$ and $z$ is a neighbor of $v$, the values of $q^i_k$ again determine the probability that the degree of $z$ is as in $T_0$. By Lemma~\ref{indep}, the probabilities that $v$ and $z$ have certain degrees, conditioned by the event that they are both white, are independent. Inductively, we can proceed with other vertices of $T_0$. Applying standard probability arguments, we see that the values of $q^i_k$ fully determine the probability that the white subtree of $T_{u,v}$ after $k$ rounds is isomorphic to a tree in ${\cal T}_0$. After the first round, the probabilities $w_k, r_k$, $b_k$ and $q^i_k$, $i\in\{0,1,2,3\}$, are the following. \begin{align*} & b_1 = 1 - \left(1-p_1\right)^3 & q^1_1 =& \left(1-\left(1-p_1\right)^2\right)^2 \\ & r_1 = p_1 \left(1-b_1\right) = p_1 \left(1-p_1\right)^3 & w^3_1 =& \left(1-p_1\right)^6 \\ & w_1 = 1 - b_1 - r_1 & w^2_1 =& 3 \cdot \left(1-p_1\right)^4 \left(1-\left(1-p_1\right)^2\right) \\ & q^3_1 = \left(1-p_1\right)^4 & w^1_1 =& 3 \cdot \left(1-p_1\right)^2 \left(1-\left(1-p_1\right)^2\right)^2 \\ & q^2_1 = 2 \cdot \left(1-p_1\right)^2 \left(1-\left(1-p_1\right)^2\right) & w^0_1 =& \left(1-\left(1-p_1\right)^2\right)^3 \end{align*} A vertex becomes blue if at least one of its neighbors is active and it becomes red if it is active and none of its neighbors is also active. Otherwise, a vertex stays white. This leads to the formulas above. To derive formulas for the probabilities $w_k, r_k$, $b_k$ and $q^i_k$ for $k\ge 2$, we introduce additional notation. The recurrence relations can be expressed using $q^i_k$ only, but additional notation will help us to simplify expressions appearing in our analysis. For a given edge $uv$ of the infinite tree, let $P^{\to1}_k$ is the probability that the white subtree of $T_{u,v}$ contains a path from $u$ to a vertex of degree one with all inner vertices of degree two after $k$ rounds conditioned by the event $uv\subseteq W_k$. Note that such a path may end at $v$. $P^{E\to1}_k$ and $P^{O\to1}_k$ are probabilities that the length of such a path is even or odd, respectively. Analogously, $P^{\to3}_k$, $P^{E\to3}_k$ and $P^{O\to3}_k$ are probabilities that the white subtree of $T_{u,v}$ contains a path, an even path and an odd path, respectively, from $u$ to a vertex of degree three with all inner vertices of degree two after $k$ rounds conditioned by the event $uv\subseteq W_k$. Using Lemma~\ref{indep}, we conclude that the values of the just defined probabilities can be computed as follows. \begin{align*} P^{\to1}_k =& q^1_k \cdot \textstyle\sum_{\ell\ge0}\left(q^2_k\right)^\ell = \frac{q^1_k}{1-q^2_k} & &P^{\to3}_k = q^3_k \cdot \textstyle\sum_{\ell\ge0}\left(q^2_k\right)^\ell = \frac{q^3_k}{1-q^2_k} \\ P^{O\to1}_k =& q^1_k \cdot \textstyle\sum_{\ell\ge0}\left(q^2_k\right)^{2\ell} = \frac{q^1_k}{1-\left(q^2_k\right)^2} & &P^{O\to3}_k = q^3_k \cdot \textstyle\sum_{\ell\ge0}\left(q^2_k\right)^{2\ell} = \frac{q^3_k}{1-\left(q^2_k\right)^2} \\ P^{E\to1}_k =& P^{\to1}_k - P^{O\to1}_k = q^2_k \cdot P^{O\to1}_k & &P^{E\to3}_k = P^{\to3}_k - P^{O\to3}_k = q^2_k \cdot P^{O\to3}_k \end{align*} Observe that $P^{\to1}_k + P^{\to3}_k = 1$. The formulas for the above probabilities can be easily altered to express the probabilities that a path exists and one of its inner vertices is active; simply, instead of multiplying by $q^2_k$, we multiply by $p_2q^2_k$. 
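For instance, the closed forms displayed above can be evaluated directly. The following snippet is only an illustration, not part of the analysis; it computes the six path probabilities from given values of $q^1_k$, $q^2_k$ and $q^3_k$ (assuming $q^2_k<1$).
\begin{verbatim}
# Illustrative only: evaluates the closed forms above, with
# q1 = q^1_k, q2 = q^2_k, q3 = q^3_k and q1 + q2 + q3 = 1.
def path_probabilities(q1, q2, q3):
    P_to1 = q1 / (1 - q2)         # path ending at a vertex of degree one
    P_to3 = q3 / (1 - q2)         # path ending at a vertex of degree three
    P_O1  = q1 / (1 - q2 ** 2)    # ... of odd length
    P_O3  = q3 / (1 - q2 ** 2)
    P_E1  = q2 * P_O1             # ... of even length
    P_E3  = q2 * P_O3
    return P_to1, P_O1, P_E1, P_to3, P_O3, P_E3
\end{verbatim}
One can check that the returned values satisfy $P^{\to1}_k + P^{\to3}_k = 1$, in agreement with the observation above.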
$\widehat{P}^{\to3}_k$ is now the probability that the white subtree of $T_{u,v}$ contains a path from $u$ to a vertex of degree three with all inner vertices of degree two after $k$ rounds and none of them becomes active, conditioned by $uv\subseteq W_k$. Analogously to the previous paragraph, we use $\widehat{P}^{O\to3}_k$ and $\widehat{P}^{E\to3}_k$. The probabilities $\widehat{P}^{\to3}_k$, $\widehat{P}^{O\to3}_k$ and $\widehat{P}^{E\to3}_k$ can be computed in the following way. \begin{align*} \widehat{P}^{\to3}_k =& q^3_k \cdot \textstyle\sum_{\ell\ge0} \bigl(q^2_k \cdot\left(1-p_2\right)\bigr)^\ell = \frac{q^3_k}{1-q^2_k\cdot\left(1-p_2\right)} \\ \widehat{P}^{O\to3}_k =& q^3_k \cdot \textstyle\sum_{\ell\ge0}\bigl(q^2_k \cdot\left(1-p_2\right)\bigr)^{2\ell} = \frac{q^3_k}{1-\left(q^2_k\right)^2\cdot\left(1-p_2\right)^2} \\ \widehat{P}^{E\to3}_k =& \widehat{P}^{\to3}_k - \widehat{P}^{O\to3}_k = q^2_k\cdot\left(1-p_2\right) \cdot \widehat{P}^{O\to3}_k \end{align*} The probabilities $\widetilde{P}^{O\to3}_k$ and $\widetilde{P}^{E\to3}_k$ are the probabilities that such an odd or even path, respectively, exists and at least one of its inner vertices becomes active. Note that $P^{O\to3}_k=\widehat{P}^{O\to3}_k+\widetilde{P}^{O\to3}_k$. The value of $\widetilde{P}^{O\to3}_k$ is given by the equation $$\widetilde{P}^{O\to3}_k = \left(q^2_k\right)^2 \cdot \left(\left(1-p_2\right)^2 \cdot \widetilde{P}^{O\to3}_k + \left(1-\left(1-p_2\right)^2\right) \cdot P^{O\to3}_k\right)\;\mbox{,} $$ which can be manipulated to $$\widetilde{P}^{O\to3}_k = \frac{\left(q^2_k\right)^2 \cdot \left(1-\left(1-p_2\right)^2\right) \cdot P^{O\to3}_k}{1-\left(q^2_k\right)^2\cdot\left(1-p_2\right)^2} \; \mbox{.}$$ Using the expression for $\widetilde{P}^{O\to3}_k$, we derive that $\widetilde{P}^{E\to3}_k$ is equal to the following. $$ \widetilde{P}^{E\to3}_k = q^2_k\cdot \left( p_2 \cdot P^{O\to3}_k + \left(1-p_2\right) \cdot \widetilde{P}^{O\to3}_k \right) $$ We now show how to compute the probabilities $w_{k+1}, b_{k+1}$ and $r_{k+1}$. Since blue and red vertices keep their colors once assigned, we have to focus on the probability that a white vertex changes its color. We distinguish vertices based on their degrees. \begin{description} \item[A vertex of degree zero.] Such a vertex is always recolored red. \item[A vertex of degree one.] Such a vertex is always recolored. Its new color is blue only if it lies on an odd path to another vertex of degree one and the other end is chosen to be the beginning of the path. This leads to the following equalities. $$ {\mathbf P}\big[u \in R_{k+1} { \; | \; } u \in W^1_k \big] = \frac{1}{2}P^{O\to1}_k + P^{E\to1}_k + P^{\to3}_k \; \mbox{,} $$ and $$ {\mathbf P}\big[u \in B_{k+1} { \; | \; } u \in W^1_k \big] = \frac{1}{2}P^{O\to1}_k \; \mbox{.} $$ \item[A vertex of degree two.] Since we have already computed the probabilities that the paths of white degree-two vertices leading in the two directions from the vertex end at a vertex of degree one or three, have odd or even length, and contain an active vertex, we can easily determine the probability that the vertex stays white or becomes red or blue. Note that for odd paths of type $1{\leftrightarrow}1$ and $3{\leftrightarrow}3$, the random choice of the beginning of the path additionally comes into play. It is then straightforward to derive the following.
{\scriptsize \begin{align*} {\mathbf P}&\big[u \in R_{k+1} { \; | \; } u \in W^2_k \big] = \left(P^{E\to1}_k\right)^2 + \frac{2}{2} P^{O\to1}_k P^{E\to1}_k + 2 P^{E\to1}_k P^{\to3}_k \\* & + (1-p_2) \cdot \left( \left( \widetilde{P}^{O\to3}_k\right)^2 + 2\widetilde{P}^{O\to3}_k \widehat{P}^{O\to3}_k + \frac{2}{2} \widetilde{P}^{O\to3}_k \widetilde{P}^{E\to3}_k + \frac{2}{2} \widehat{P}^{O\to3}_k \widetilde{P}^{E\to3}_k + \frac{2}{2} \widetilde{P}^{O\to3}_k \widehat{P}^{E\to3}_k \right) \\* & + p_2 \cdot \left(\left(P^{O\to3}_k\right)^2 + \frac{2}{2} P^{E\to3}_k P^{O\to3}_k\right) \\ {\mathbf P}&\big[u \in B_{k+1} { \; | \; } u \in W^2_k \big] = \left(P^{O\to1}_k\right)^2 + \frac{2}{2} P^{O\to1}_k P^{E\to1}_k + 2 P^{O\to1}_k P^{\to3}_k \\* & + (1-p_2) \cdot \left( \left( \widetilde{P}^{E\to3}_k\right)^2 + 2\widetilde{P}^{E\to3}_k \widehat{P}^{E\to3}_k + \frac{2}{2} \widetilde{P}^{O\to3}_k \widetilde{P}^{E\to3}_k + \frac{2}{2} \widehat{P}^{O\to3}_k \widetilde{P}^{E\to3}_k + \frac{2}{2} \widetilde{P}^{O\to3}_k \widehat{P}^{E\to3}_k \right) \\* & + p_2 \cdot \left(\left(P^{E\to3}_k\right)^2 + \frac{2}{2} P^{E\to3}_k P^{O\to3}_k\right) \end{align*} } \item[A vertex of degree three.] The vertex either stays white or is recolored to blue. It stays white if (and only if) all the three paths with inner vertices being white and with degree two are among the following: even paths to a vertex of degree one, non-activated paths to a vertex of degree three, or activated odd paths to a vertex of degree three that was chosen as the beginning (and thus recolored with blue). Hence we obtain that \begin{equation}\label{eq-W3toB} {\mathbf P}\big[u \in B_{k+1} { \; | \; } u \in W^3_k \big] = 1 - \left( P^{E\to1}_k + \widehat{P}^{\to3}_k + \frac{1}{2}\widetilde{P}^{O\to3}_k\right)^3 \; \mbox{.} \end{equation} \end{description} Plugging all the probabilites from the above analysis together yields that \begin{align*} r_{k+1} =& r_k + w_k \cdot \left( \sum_{i=0}^2 w^i_k \cdot {\mathbf P}\big[u \in R_{k+1} { \; | \; } u \in W^i_k\big] \right) \\ b_{k+1} =& b_k + w_k \cdot \left( \sum_{i=1}^3 w^i_k \cdot {\mathbf P}\big[u \in B_{k+1} { \; | \; } u \in W^i_k\big] \right) \\ w_{k+1} =& 1- r_{k+1} - b_{k+1} \; \mbox{.} \end{align*} The crucial for the whole analysis is computing the values of $w^i_{k+1}$. Suppose that $u$ is a white vertex after $k$ rounds. The values of $w^i_k$ determine the probability that $u$ has degree $i$ and the values of $q^i_k$ determine the probabilities that white neighbors of $u$ have certain degrees. In particular, the probability that $u$ has degree $i$ and its neighbors have degrees $j_1,\dots,j_i$ after $k$ rounds conditioned by $u$ being white after $k$ rounds is equal to $$w^i_k \cdot \prod\limits_{j\in \left\{ j_1,\dots,j_i \right\}} q^j_k \; \mbox{.}$$ In what follows, the vector of degrees $j_1,\ldots,j_i$ will be denoted by $\overrightarrow{J}$. Let $R^i_{k+1}(\overrightarrow{J})$ be the probability that $u$ is white after $(k+1)$ rounds conditioned by the event that $u$ is white, has degree $i$ and its white neighbors have degrees $\overrightarrow{J}$ after $k$ rounds. Note that the value of $R^i_{k+1}(\overrightarrow{J})$ is the same for all permutation of entries/degrees of the vector $\overrightarrow{J}$. If $u$ has degree three and all its neighbors also have degree three after $k$ rounds, the probability $R^3_{k+1}(3,3,3)$ is equal to one: no vertex of degree three can be colored by red and thus the color of $u$ stays white. 
On the other hand, if $u$ or any of its neighbors has degree one, $u$ definitely does not stay white and the corresponding probability $R^i_{k+1}(\overrightarrow{J})$ is equal to zero. We now analyze the value of $R^i_{k+1}(\overrightarrow{J})$ for the remaining combinations of $i$ and $\overrightarrow{J}$. If $i=2$, then $u$ stays white only if it lies on a non-active path of degree-two vertices between two vertices of degree three. Consequently, it holds that \begin{align*} R^2_{k+1}(2,2) =& \left(1-p_2\right)^3 \cdot \left(\widehat{P}^{\to3}_k\right)^2 \\ R^2_{k+1}(3,2) =& \left(1-p_2\right)^2 \cdot \widehat{P}^{\to3}_k \\ R^2_{k+1}(3,3) =& \left(1-p_2\right) \end{align*} If $i=3$, then $u$ stays white if and only if for every degree-two neighbor $v$ of $u$, the path of degree-two vertices from $v$ to $u$ is \begin{itemize} \item an odd path to a vertex of degree one, \item a non-activated path to a vertex of degree three, \item an activated even path to a vertex of degree three and $u$ is not chosen as the beginning. \end{itemize} Based on this, we obtain that the values of $R^3_{k+1}(\overrightarrow{J})$ for the remaining choices of $\overrightarrow{J}$ are as follows. \begin{align*} R^3_{k+1}(2,2,2) =& \left( P^{O\to 1}_k + \left(1-p_2\right) \cdot \left(\widehat{P}^{\to 3}_k + \frac{1}{2}\widetilde{P}^{E\to 3}_k \right) + p_2\cdot \frac{1}{2} P^{E\to 3}_k \right)^3 \\ R^3_{k+1}(2,2,3) =& \left( P^{O\to 1}_k + \left(1-p_2\right) \cdot \left(\widehat{P}^{\to 3}_k + \frac{1}{2}\widetilde{P}^{E\to 3}_k \right) + p_2\cdot \frac{1}{2} P^{E\to 3}_k \right)^2 \\ R^3_{k+1}(2,3,3) =& P^{O\to 1}_k + \left(1-p_2\right) \cdot \left(\widehat{P}^{\to 3}_k + \frac{1}{2}\widetilde{P}^{E\to 3}_k \right) + p_2\cdot \frac{1}{2} P^{E\to 3}_k \end{align*} We now focus on computing the probabilities $R^{i\to i'}_{k+1}(\overrightarrow{J})$ that a vertex $u$ is a white vertex of degree $i'$ after $(k+1)$ rounds conditioned by the event that $u$ is a white vertex with degree $i$ with neighbors of degrees in $\overrightarrow{J}$ after $k$ rounds and $u$ is also white after $(k+1)$ rounds. For example, $R^{2\to i'}_{k+1}(2,2)$ is equal to one for $i'=2$ and to zero for $i'\ne2$. To derive formulas for the probabilities $R^{i\to i'}_{k+1}$, we have to introduce some additional notation. $S^{(i,j)}_{k+1}$ for $(i,j)\in\{(2,2),(2,3),(3,2),(3,3)\}$ will denote the probability that a~vertex~$v$ is white after $(k+1)$ rounds conditioned by the event that $v$ is a white vertex of degree $j$ after $k$ rounds, a fixed (white) neighbor $u$ of $v$ has degree $i$ after $k$ rounds and $u$ is white after $(k+1)$ rounds. It is easy to see that $S^{(2,2)}_{k+1} = 1$. If $j=3$, the event we condition by guarantees that one of the neighbors of $v$ is white after $(k+1)$ rounds. Hence, we derive that $$S^{(2,3)}_{k+1} = S^{(3,3)}_{k+1} = \left( P^{E\to1}_k + \widehat{P}^{\to3}_k + \frac{1}{2}\widetilde{P}^{O\to3}_k\right)^2 \; \mbox{.}$$ Using the probabilities $S^{(i,j)}_{k+1}$, we can easily express some of the probabilities $R^{i\to i'}_{k+1}(\overrightarrow{J})$. 
{\footnotesize \begin{align*} R^{2\to 2}_{k+1}(2,3) =& S^{(2,3)}_{k+1} & R^{2\to 2}_{k+1}(3,3) =& \left(S^{(2,3)}_{k+1}\right)^2 \\* R^{2\to 1}_{k+1}(2,3) =& 1 - S^{(2,3)}_{k+1} & R^{2\to 1}_{k+1}(3,3) =& 2 \cdot S^{(2,3)}_{k+1} \left(1 - S^{(2,3)}_{k+1}\right) \\* R^{2\to 0}_{k+1}(2,3) =& 0 & R^{2\to 0}_{k+1}(3,3) =& \left(1 - S^{(2,3)}_{k+1}\right)^2 \\ R^{3\to 3}_{k+1}(3,3,3) =& \left(S^{(3,3)}_{k+1}\right)^3 & R^{3\to 1}_{k+1}(3,3,3) =& 3 \cdot S^{(3,3)}_{k+1} \left(1 - S^{(3,3)}_{k+1}\right)^2 \\* R^{3\to 2}_{k+1}(3,3,3) =& 3 \cdot \left(S^{(3,3)}_{k+1}\right)^2 \left(1 - S^{(3,3)}_{k+1}\right) & R^{3\to 0}_{k+1}(3,3,3) =& \left(1 - S^{(3,3)}_{k+1}\right)^3 \end{align*} } We now determine $S^{(3,2)}_{k+1}$, i.e., the probability that a vertex $v$ is white after $(k+1)$ rounds conditioned by the event that $v$ has degree two after $k$ rounds and a fixed white neighbor $u$ of $u$ that has degree three after $k$ rounds is white after $(k+1)$ rounds. Observe that $v$ is white after $(k+1)$ rounds only if $v$ is contained in a non-active $3{\leftrightarrow}3$ path. Since we condition by the event that $u$ is white after $(k+1)$ rounds, $v$ cannot be contained in an active $3{\leftrightarrow}3$ path of even length or an active $3{\leftrightarrow}3$ odd path with $u$ being chosen as the beginning of this path. Hence, the value of $S^{(3,2)}_{k+1}$ is the following. $$S^{(3,2)}_{k+1} = \frac{\left(1-p_2\right) \cdot \widehat{P}^{\to 3}_k} {P^{O\to 1}_k + \left(1-p_2\right) \cdot \left(\widehat{P}^{\to 3}_k + \frac{1}{2}\widetilde{P}^{E\to 3}_k \right) + p_2\cdot \frac{1}{2} P^{E\to 3}_k} \; \mbox{.}$$ Using $S^{(3,2)}_{k+1}$, the remaining values of $R^{i\to i'}_{k+1}(\overrightarrow{J})$ can be expressed as follows. {\footnotesize \begin{align*} R^{3\to3}_{k+1}(3,3,2) =& \left(S^{(3,3)}_{k+1}\right)^2 \cdot S^{(3,2)}_{k+1} \\ R^{3\to2}_{k+1}(3,3,2) =& \left(S^{(3,3)}_{k+1}\right)^2 \left(1-S^{(3,2)}_{k+1}\right) + 2\cdot \left(1-S^{(3,3)}_{k+1}\right) S^{(3,3)}_{k+1} \cdot S^{(3,2)}_{k+1} \\ R^{3\to1}_{k+1}(3,3,2) =& \left(1-S^{(3,3)}_{k+1}\right)^2 S^{(3,2)}_{k+1} + 2\cdot S^{(3,3)}_{k+1} \left(1-S^{(3,3)}_{k+1}\right) \left(1-S^{(3,2)}_{k+1}\right) \\ R^{3\to0}_{k+1}(3,3,2) =& \left(1-S^{(3,3)}_{k+1}\right)^2 \left(1-S^{(3,2)}_{k+1}\right) \\ R^{3\to3}_{k+1}(2,2,3) =& \left(S^{(3,2)}_{k+1}\right)^2 \cdot S^{(3,3)}_{k+1} \\ R^{3\to2}_{k+1}(2,2,3) =& \left(S^{(3,2)}_{k+1}\right)^2 \left(1-S^{(3,3)}_{k+1}\right) + 2\cdot \left(1-S^{(3,2)}_{k+1}\right) S^{(3,2)}_{k+1} \cdot S^{(3,3)}_{k+1} \\ R^{3\to1}_{k+1}(2,2,3) =& \left(1-S^{(3,2)}_{k+1}\right)^2 S^{(3,3)}_{k+1} + 2\cdot S^{(3,2)}_{k+1} \left(1-S^{(3,2)}_{k+1}\right) \left(1-S^{(3,3)}_{k+1}\right) \\ R^{3\to0}_{k+1}(2,2,3) =& \left(1-S^{(3,2)}_{k+1}\right)^2 \left(1-S^{(3,3)}_{k+1}\right) \\ R^{3\to 3}_{k+1}(2,2,2) =& \left(S^{(3,2)}_{k+1}\right)^3 \\ R^{3\to 2}_{k+1}(2,2,2) =& 3 \cdot \left(S^{(3,2)}_{k+1}\right)^2 \left(1 - S^{(3,2)}_{k+1}\right) \\ R^{3\to 1}_{k+1}(2,2,2) =& 3 \cdot S^{(3,2)}_{k+1} \left(1 - S^{(3,2)}_{k+1}\right)^2 \\ R^{3\to 0}_{k+1}(2,2,2) =& \left(1 - S^{(3,2)}_{k+1}\right)^3 \end{align*} } Using $R^{i\to i'}_{k+1}(\overrightarrow{J})$, we can compute $w^{i'}_{k+1}$, $i'\in \left\{0,1,2,3\right\}$. In the formula below, ${\cal J}_i$ denotes the set of all possible vectors $\overrightarrow{J}$ with $i$ entries and all entries either two or three. 
The denominator of the formula is the probability that the vertex $u$ is white after $k+1$ rounds conditioned by the event that $u$ is white after $k$ rounds; the numerator is the probability that $u$ is white and has degree $i'$ after $k+1$ rounds conditioned by the event that $u$ is white after $k$ rounds. {\footnotesize $$w^{i'}_{k+1}=\frac {\displaystyle\sum_{\substack{i\ge2 \\ i\ge i'}} \sum\limits_{\overrightarrow{J}\in{\cal J}_i} w^i_k \cdot \prod_{j\in\overrightarrow{J}} q^j_k \cdot R^i_{k+1}\left(\overrightarrow{J}\right) \cdot R^{i\to i'}_{k+1}\left(\overrightarrow{J}\right)} {\displaystyle\sum_{i\ge2} \sum\limits_{\overrightarrow{J}\in{\cal J}_i}w^i_k \cdot \prod_{j\in\overrightarrow{J}} q^j_k \cdot R^i_{k+1}\left(\overrightarrow{J}\right)} $$ } It remains to exhibit the recurrence relations for the values of $q^i_{k+1}$. Let $uv$ be an edge of the tree, $i\ge 1$ and $\overrightarrow{J} \in {\cal J}_i$. Observe that the probability that $u$ has degree $i$ and its neighbors have degrees $\overrightarrow{J}$ after $k$ rounds conditioned by the event $uv \subseteq W_k$ is exactly $$q^i_k \cdot \prod\limits_{j \in \overrightarrow{J}} q^j_k \; \mbox{.}$$ In what follows, we will assume that the first coordinate $j_1$ of $\overrightarrow{J}$ corresponds to the vertex $v$. The probabilities $Q^i_{k+1}$ are defined analogously to $R^i_{k+1}$ with the additional requirement that $v$ is also white after $k+1$ rounds. Formally, $Q^i_{k+1}\left(\overrightarrow{J}\right)$ is the probability that a vertex $u$ and its fixed neighbor $v$ are both white after $(k+1)$ rounds conditioned by the event that $uv \subseteq W_k$, $u$ has degree $i$ and its white neighbors have degrees $\overrightarrow{J}$ after $k$~rounds. Observe that the following holds. \begin{align*} & Q^2_{k+1}(2,2) = R^2_{k+1}(2,2) & Q^2_{k+1}(3,2) =& R^2_{k+1}(3,2) \cdot S^{(2,3)}_{k+1} \\ & Q^2_{k+1}(2,3) = R^2_{k+1}(2,3) = R^2_{k+1}(3,2) & Q^2_{k+1}(3,3) =& R^2_{k+1}(3,3) \cdot S^{(2,3)}_{k+1} \\ & Q^3_{k+1}(j_1,j_2,j_3) = R^3_{k+1}(j_1,j_2,j_3) \cdot S^{(3,j_1)}_{k+1} \end{align*} Similarly, $Q^{i\to i'}_{k+1}$ is the probability that a vertex $u$ is a white vertex of degree $i'\ge 1$ and its fixed neighbor $v$ is white after $(k+1)$~rounds conditioned by the event that $u$ is a white vertex with degree $i$ with neighbors of degrees in $\overrightarrow{J}$ after $k$ rounds and $uv \subseteq W_{k+1}$. Using arguments analogous to those used to derive the formulas for $R^{i\to i'}_{k+1}(\overrightarrow{J})$, we obtain the following formulas for $Q^{i\to i'}_{k+1}$. We provide the list of recurrences to compute the values of $Q^{i\to i'}_{k+1}$ and leave the actual derivation to the reader. 
{\scriptsize \begin{align*} & Q^{2\to2}_{k+1}(2,2) = Q^{2\to 2}_{k+1}(3,2) = 1 & Q^{2\to 2}_{k+1}(2,3) &= Q^{2\to 2}_{k+1}(3,3) = S^{(2,3)}_{k+1} \\* & Q^{2\to1}_{k+1}(2,2) = Q^{2\to 1}_{k+1}(3,2) = 0 & Q^{2\to 1}_{k+1}(2,3) &= Q^{2\to 1}_{k+1}(3,3) = 1 - S^{(2,3)}_{k+1} \end{align*} \begin{align*} & Q^{3\to 3}_{k+1}(3,3,3) = \left(S^{(3,3)}_{k+1}\right)^2 & Q^{3\to3}_{k+1}(3,2,2) =& \left(S^{(3,2)}_{k+1}\right)^2 \\* & Q^{3\to 2}_{k+1}(3,3,3) = 2 \cdot S^{(3,3)}_{k+1} \left(1 - S^{(3,3)}_{k+1}\right) & Q^{3\to2}_{k+1}(3,2,2) =& 2\cdot \left(1-S^{(3,2)}_{k+1}\right) S^{(3,2)}_{k+1} \\* & Q^{3\to 1}_{k+1}(3,3,3) = \left(1 - S^{(3,3)}_{k+1}\right)^2 & Q^{3\to1}_{k+1}(3,2,2) =& \left(1-S^{(3,2)}_{k+1}\right)^2 \\ & Q^{3\to3}_{k+1}(3,3,2) = S^{(3,3)}_{k+1} \cdot S^{(3,2)}_{k+1} & Q^{3\to3}_{k+1}(2,3,3) =& \left(S^{(3,3)}_{k+1}\right)^2 \\* & Q^{3\to2}_{k+1}(3,3,2) = S^{(3,3)}_{k+1} \left(1-S^{(3,2)}_{k+1}\right) + \left(1-S^{(3,3)}_{k+1}\right) S^{(3,2)}_{k+1} & Q^{3\to2}_{k+1}(2,3,3) =& 2\cdot \left(1-S^{(3,3)}_{k+1}\right) S^{(3,3)}_{k+1} \\ & Q^{3\to1}_{k+1}(3,3,2) = \left(1-S^{(3,3)}_{k+1}\right) \left(1-S^{(3,2)}_{k+1}\right) & Q^{3\to1}_{k+1}(2,3,3) =& \left(1-S^{(3,3)}_{k+1}\right)^2 \\ & Q^{3\to3}_{k+1}(2,2,3) = S^{(3,2)}_{k+1} \cdot S^{(3,3)}_{k+1} & Q^{3\to 3}_{k+1}(2,2,2) =& \left(S^{(3,2)}_{k+1}\right)^2 \\ & Q^{3\to2}_{k+1}(2,2,3) = \left(1-S^{(3,2)}_{k+1}\right) S^{(3,3)}_{k+1} + S^{(3,2)}_{k+1} \left(1-S^{(3,3)}_{k+1}\right) & Q^{3\to 2}_{k+1}(2,2,2) =& 2 \cdot \left(1 - S^{(3,2)}_{k+1}\right) S^{(3,2)}_{k+1} \\ & Q^{3\to1}_{k+1}(2,2,3) = \left(1-S^{(3,2)}_{k+1}\right) \left(1-S^{(3,3)}_{k+1}\right) & Q^{3\to 1}_{k+1}(2,2,2) =& \left(1 - S^{(3,2)}_{k+1}\right)^2 \end{align*} } Using the values of $Q^{i}_{k+1}$ and $Q^{i\to i'}_{k+1}$, we can compute the values of $q^{i'}_{k+1}$ for $i' \in \left\{1,2,3\right\}$. The denominator of the formula is the probability that the vertices $u$ and $v$ are white after $k+1$ rounds conditioned by the event that $u$ and $v$ are white after $k$ rounds; the nominator is the probability that $u$ are $v$ are white and $u$ has degree $i'$ after $k+1$ rounds conditioned by the event that $u$ and $v$ are white after $k$ rounds. {\footnotesize $$q^{i'}_{k+1}=\frac {\displaystyle\sum_{\substack{i\ge2 \\ i\ge i'}} \sum\limits_{\overrightarrow{J}\in{\cal J}_i} q^i_k \cdot \prod_{j\in\overrightarrow{J}} q^j_k \cdot Q^i_{k+1}\left(\overrightarrow{J}\right) \cdot Q^{i\to i'}_{k+1}\left(\overrightarrow{J}\right)} {\displaystyle\sum_{i\ge2} \sum\limits_{\overrightarrow{J}\in{\cal J}_i}q^i_k \cdot \prod_{j\in\overrightarrow{J}} q^j_k \cdot Q^i_{k+1}\left(\overrightarrow{J}\right)} $$ } \subsection{Solving the recurrences} The recurrences presented in this section were solved numerically using the Python program provided in the Appendix. The particular choice of parameters used in the program was $p_1=p_2=10^{-5}$ and $K={307449}$. The choice of $K$ was made in such a way that $w_K\le 10^{-6}$. We also estimated the precision of our calculations based on the representation of float numbers to avoid rounding errors effecting the presented bound on significant digits. Solving the recurrences for the above choice of parameters we obtain that $r_K > {0.4352}$. \section{High-girth graphs} \label{sect-girth} In this section, we show how to modify the randomized procedure from the previous section that it can be applied to cubic graphs with large girth. 
In order to use the analysis of the randomized procedure presented in the previous section, we have to cope with the dependence of some of the events caused by the presence of cycles in the graph. To do so, we introduce an additional parameter $L$ which will control the length of paths causing the dependencies. Then, we will be able to guarantee that the probability that a fixed vertex of a given cubic graph is red is at least $r_K-o(1)$ assuming the girth of the graph is at least $8KL+2$. In particular, if $L$ tends to infinity, we approach the same probability as for the infinite cubic tree. \subsection{Randomized procedure} We now describe how the randomized procedure is altered. Let $G$ be the given cubic graph. We produce a sequence $G_0,\overline{G}_1,G_1,\ldots,\overline{G}_K, G_K$ of vertex-colored subcubic graphs; the only vertices in these graphs that have less than three neighbors will always be assigned a new color---black. The graphs can also contain some additional vertices which do not correspond to the vertices of $G$; such vertices will be called {\em virtual}. The graph $G_0$ is the cubic graph $G$ with all vertices colored white. Assume that $G_{k-1}$ is already defined. Let $\overline{G}_{k}$ be the graph obtained from $G_{k-1}$ using the randomized procedure for the $k$-th round exactly as described for the infinite tree. Before we describe how the graph $G_k$ is obtained from $\overline{G}_k$, we need to introduce additional notation. Let $y_0,y_1,\dots,y_{2L}$ be a fixed path in the cubic tree. Now consider all possible colorings of $T_{y_1,y_0}$ after $k$ rounds that satisfy that \begin{enumerate} \item the vertices $y_0,y_1,\dots,y_{2L}$ are white after $k$ rounds and \item the vertices $y_1,\dots,y_{2L-1}$ have degree two. \end{enumerate} Let ${\cal D}_k$ be the probability distribution on the colorings of $T_{y_1,y_0}$ that satisfy these two constraints such that the probability of each coloring is proportional to its probability after $k$ rounds. In other words, we discard the colorings that do not satisfy the two constraints and normalize the probabilities. The graph $G_k$ is obtained from $\overline{G}_k$ by performing the following operation for every path $P$ of type $1{\leftrightarrow}1$, $1{\leftrightarrow}3$ or $3{\leftrightarrow}3$ between vertices $a$ and $b$ that has length at least $2L$ and contains at least one non-virtual inner vertex. Let $x_u$ be the non-virtual inner vertex of $P$ that is the closest to $a$ and $x_v$ the one closest to $b$. Let $P_x$ be the subpath between $x_u$ and $x_v$ (inclusively) in $\overline{G}_{k}$. The process we describe will guarantee that the non-virtual vertices of $P$ form a subpath of $P$, i.e., $P_x$ containts exactly non-virtual inner vertices of $P$. Let $u$ be the neighbor of $x_u$ on $P$ towards $a$, and $v$ the neighbor of $x_v$ on $P$ towards $b$; $u$ and $a$ are the same if $a$ is non-virtual. Similarly, $v$ and $b$ are the same if $b$ is non-virtual. We now modify the graph $\overline{G}_k$ as follows. Color the vertices of $P_x$ black and remove the edges $x_uu$ and $x_vv$ from the graph. Then attach to $u$ and $v$ rooted trees $T_u$ and $T_v$, all of them fully comprised of virtual vertices, such that the colorings of $T_u$ and $T_v$ are randomly sampled according to the distribution ${\cal D}_{k}$. The roots of the trees will become adjacent to $u$ or $v$, respectively. These trees are later referred to as {\em virtual trees}. 
Observe that we have created no path between two non-virtual vertices containing a virtual vertex. After $K$ rounds, the vertices of $G$ receive colors of their counterparts in $G_K$. In this way, the vertices of $G$ are colored white, blue, red and black and the red vertices form an independent set. \subsection{Refining the analysis} We argue that the analysis for the infinite cubic trees presented in Section~\ref{sect-tree} also applies to cubic graphs with large girth. We start with some additional definitions. Suppose that $G_k$ is the graph obtained after $k$ rounds of the randomized procedure. Let $u$ and $v$ be two vertices of $G_k$. The vertex $v$ is {\em reachable} from $u$ if both $u$ and $v$ are white and there exists a path in $G_k$ between $u$ and $v$ comprised of white vertices with all inner vertices having degree two (recall that the degree of a vertex is the number of its white neighbors). Clearly, the relation of being reachable is symmetric. The vertex $v$ is {\em near} to $u$ if both $u$ and $v$ are white and either $v$ is reachable from $u$ or $v$ is a neighbor of a white vertex reachable from $u$. Note that the relation of being near is not symmetric in general. For a subset $X\subseteq V\left(G_k\right)$ of white vertices, $N_k(X) \subseteq V\left(G_k\right)$ is the set of white vertices that are near to a vertex of $X$ in $G_k$. We now prove the following theorem. \begin{theorem} Let $K$ be an integer, $p_1$ and $p_2$ positive reals, $G$ a cubic graph and $v$ a vertex of $G$. For every $\varepsilon >0$ there exists an integer $L$ such that if $G$ has girth at least $8KL+2$, then the probability that the vertex $v$ will be red in $G$ at the end of the randomized procedure is at least $r_K-\varepsilon$ where $r_K$ is the probability that a fixed vertex of the infinite cubic tree is red after $K$ rounds of the randomized procedure with parameters $p_1$ and $p_2$. \end{theorem} \begin{proof} We keep the notation introduced in the description of the randomized procedure. As the first step in the proof, we establish the following two claims. \begin{claim} \label{cl-1} Let $k$ be a non-negative integer, $u$ a vertex of $G_k$, $c$ one of the colors, $\gamma_1$ and $\gamma_2$ two colorings of vertices of $G_k$ such that $u$ is white in both $\gamma_1$ and $\gamma_2$. If the set of vertices near to $u$ in $\gamma_1$ and $\gamma_2$ induce isomorphic trees rooted at $u$, then the probability that $u$ has the color $c$ in $\overline{G}_{k+1}$ conditioned by the event that $G_k$ is colored as in $\gamma_1$, and the probability that $u$ has the color $c$ in $\overline{G}_{k+1}$ conditioned by the event that $G_k$ is colored as in $\gamma_2$, are the same. \end{claim} Indeed, the color of $u$ in $\overline{G}_{k+1}$ is influenced only by the length and the types of the white paths in $G_k$ containing $u$. All the vertices of these paths are near to $u$ as well as the white neighbors of the other end-vertices of these paths (which are necessary to determine the types of the paths). By the assumption on the colorings $\gamma_1$ and $\gamma_2$, the two probabilities from the statement of the claim are the same. We now introduce yet another definiton. 
For a subcubic graph $H$ with vertices colored white, red, blue or black, a vertex $v \in H$ is said to be {\em $d$-close} to a white vertex $u \in H$ if $v$ is white and there exists a path $P$ from $u$ to $v$ comprised of white vertices such that \begin{itemize} \item the length of $P$ is at most $d$, or \item $P$ contains a vertex $w$ such that $w$ is at distance at most $d$ from $u$ on $P$ and each of the first $(2L-1)$ vertices following $w$ (if they exist) has degree two (recall that the degree of a vertex refers to the number of its white neighbors). \end{itemize} Finally, a $d$-close tree of $u$ is the subgraph comprised of all vertices $d$-close to $u$ (for our choice of $d$, this subgraph will always be tree) that is rooted at $u$. By the definition, the $d$-close tree of $u$ contains white vertices only. Observe that if $v$ is $d$-close to $u$, then it is also $d'$-close to $u$ for every $d'>d$. Also observe that if a vertex $v$ of a virtual subtree is $d$-close to a non-virtual vertex $u$, then all the white vertices lying in the same white component of the virtual subtree are also $d$-close to $u$: indeed, consider a path $P$ witnessing that $v$ is $d$-close to $u$ and let $P_0$ be its subpath from $u$ to the root of the virtual tree. The path $P_0$ witnesses that the root is $d$-close to $u$ and $P_0$ can be prolonged by the path comprised of $2L-1$ degree-two virtual vertices to a path to any white vertex $v'$ of the same white component as $v$. The new path now witnesses that $v'$ is also $d$-close to $u$. Let us look at $d$-close sets in the infinite cubic trees. \begin{claim} \label{cl-3} Let $d$ be a non-negative integer, $T$ an infinite cubic tree with vertices colored red, blue and white, and $u$ a white vertex of $T$. If a vertex $v$ is $d$-close to $u$ in $T$, then every vertex $v'$ that is near to $v$ is $(d+2L)$-close to $u$. \end{claim} Let $P$ be a path from $u$ to $v$ that witnesses that $v$ is $d$-close to $u$. Assume first that the length of $P$ is at most $d$. If $v'$ lies on $P$ or is a neighbor of a vertex of $P$, then $v'$ is $(d+1)$-close to $u$. Otherwise, consider the path $P'$ from $u$ to $v'$; observe that $P$ is a subpath of $P'$ and all the vertices following $v$ on $P$ with a possible exception of $v'$ and the vertex immediately preceding it have degree two. If the length of $P'$ is at most $d+2L$, then $v'$ is $(d+2L)$-close to $u$. Otherwise, $v$ is followed by at least $2L-1$ vertices of degree two and $v'$ is again $(d+2L)$-close to $u$. Assume now that the length of $P$ is larger than $d$. Then, $P$ contains a vertex $w$ at distance at most $d$ from $u$ such that the first $2L-1$ vertices following $w$ (if they exist) have degree two. If $v'$ lies on $P$ or is adjacent to a vertex of $P$, then $v'$ lies on $P$ after $w$ or is adjacent to $w$. In both cases, $v'$ is $(d+1)$-close to $u$. In the remaining case, we again consider the path $P'$ from $u$ to $v'$ which must be an extension of $P$ (otherwise, $v'$ would lie on $P$ or it would be adjacent to a vertex on $P$). If there are at least $2L-1$ vertices following $w$ on $P$, then $v'$ is $d$-close to $u$. Otherwise, either $P'$ contains $2L-1$ vertices of degree two following $w$ or the length of $P'$ is at most $d+2L$. In both cases, $v'$ is $(d+2L)$-close to $u$. We are ready to prove our main claim. 
\begin{claim} \label{cl-5} Let $k\le K-1$ be a positive integer, $T$ a rooted subcubic tree such that its root is not contained in a path of degree-two vertices of length at least $2L$, $u$ a vertex of the infinite cubic tree and $v$ a non-virtual vertex of $G_k$. The probability that $u$ is white and the $4(K-k)L$-close tree of $u$ in the infinite tree is isomorphic to $T$ after $k$ rounds is the same as the probability that $v$ is white and the $4(K-k)L$-close tree of $v$ in $G_k$ is isomorphic to $T$ after $k$ rounds. \end{claim} The proof proceeds by induction on $k$. For $k=0$, both in the infinite tree and in $G_0$, the probability is equal to one if $T$ is the full rooted cubic tree of depth $4KL$ and it is zero, otherwise. Here, we use the girth assumption to derive that the subgraph of $G$ induced by $4KL$-close vertices to $u$ is a tree (otherwise, $G$ would contain a cycle of length at most $8KL+1$). Suppose $k>0$. Let $\widetilde{T}$ be the $4(K-k)L$-close tree of $v$ in $G_k$ and $W$ the set of vertices that are $4(K-k)L+2L$-close to $v$ in $G_{k-1}$. The degree and the color of each vertex in $\overline{G}_k$ is determined by vertices that are near to it in $G_{k-1}$ by Claim~\ref{cl-1}. The induction assumption and Claim~\ref{cl-3} imply that every vertex of $W$ is white and has a given degree $i$ in $\overline{G}_k$ with the same probability as its counterpart in the infinite cubic tree assuming that the vertex $u$ of the infinite tree does not lie on a path of degree-two vertices of length at least $2L$ after $k-1$ rounds. If $v$ lies on a path with at least $2L-1$ degree-two vertices in $\overline{G}_k$, it becomes black. Othwerwise, the set $W$ contains all vertices that are $4(K-k)L$-close to $v$ with the exception of the new virtual vertices that are $4(K-k)L$-close to $v$. Since the colorings of newly added virtual trees have been sampled according to the distribution ${\cal D}_{k+1}$, Lemma~\ref{indep} implies that the probability that $v$ is white and the $4(K-k)L$-close tree of $v$ is equal to $T$ is the same as the corresponding probability for $u$ in the infinite cubic tree. \begin{claim} \label{cl-6} Let $k$ be a non-negative integer and $u$ a vertex of the infinite cubic tree. The probability that $u$ is white and lies on a white path of length at least $2L$ after $k$ rounds is at most $2 \cdot \left(q^2_{k}\right)^{L-1}$. \end{claim} If $u$ lies on such a path, its degree must be two and the length of the path from $u$ in one of the two directions is at least $L-1$. The probability that this happens for each of the two possible directions from $u$ is at most $\left(q^2_{k}\right)^{L-1}$. The claim now follows. Let $p_0=\sum_{k=1}^K 2\left(q^2_{k}\right)^{L-1}$. Observe that $q^2_k<1$. Indeed, if a vertex $u$ of the infinite tree and all the vertices at distance at most $2K$ from $u$ are white after the first round, then $u$ and its three neighbors must have degree three after $K$ rounds (all vertices at distance at most $2(K+1-k)$ from $u$ are white after $k$ rounds). Since this happens with non-zero probability, $q^3_k>0$ and thus $q^2_k<1$. This implies that $p_0$ tends to $0$ with $L$ approaching the infinity. Fix a vertex $v$ of $G$. By Claim~\ref{cl-1}, the probability that $v$ is colored red in the $k$-th round is fully determined by the vertices that are near to $v$ in $G_{k-1}$. All such vertices are also $2L$-close to $v$ by Claim~\ref{cl-3}. Consider a rooted subcubic tree $T$. 
If the root of $T$ does not lie on a path with inner vertices of degree two with length at least $2L$, then the probability that $v$ is white and the $2L$-close tree of $v$ is isomorphic to $T$ is the same as the analogous probability for a vertex of the infinite tree by Claim~\ref{cl-5}. Since the probability that $v$ is white and it lies on a path of length at least $2L$ in its $2L$-close tree at some point during the randomized procedure is at most $p_0$ by Claim~\ref{cl-6}, the probability that $v$ is colored red in $G_K$ is at least $r_K-p_0$. Since $p_0<\varepsilon$ for $L$ sufficiently large, the statement of the theorem follows. \end{proof} \section{Conclusion} The method we presented here, similarly to the method of Hoppen~\cite{bib-hoppen}, can be applied to $r$-regular graphs for $r\ge 4$. Another related question is whether the fractional chromatic number of cubic graphs is bounded away from $3$ under a weaker assumption that the odd girth (the length of the shortest odd cycle) is large. This is indeed the case as we now show. \begin{theorem} \label{thm-odd} Let $g\ge 5$ be an odd integer. The fractional chromatic number of every subcubic graph $G$ with odd girth at least $g$ is at most $\frac{8}{3-6/(g+1)}$. \end{theorem} \begin{proof} Clearly, we can assume that $G$ is bridgeless. If $G$ contains two or more vertices of degree two, then we include $G$ in a large cubic bridgeless graph with the same odd girth. Hence, we can assume that $G$ contains at most one vertex of degree two. Consequently, $G$ has a $2$-factor $F$. We now construct a probability distribution on the independent sets such that each vertex is included in the independent set chosen according to this distribution with probability at least $3(1-2/(g+1))/8$. This implies the claim of the theorem. Number the vertices of each cycle of $F$ from $1$ to $\ell$ where $\ell$ is the length of the cycle. Choose randomly a number $k$ between $1$ and $(g+1)/2$ and let $W$ be the set of all vertices with indices equal to $k$ modulo $(g+1)/2$. Hence, each vertex is not in $W$ with probability $1-2/(g+1)$. Let $V_1,\ldots,V_m$ be the sets formed by the paths of $F\setminus W$. Since each set $V_i$, $i=1,\ldots,m$, contains at most $g-1$ vertices, the subgraph $G[V_i]$ induced in $G$ by $V_i$ is bipartite. Choose randomly (and independently of the other subgraphs) one of its two color classes and color its vertices red. Observe that if an edge has both its end-points colored red, then it must an edge of the matching $M$ complementary to $F$. If this happens, choose randomly one vertex of this edge and uncolor it. The resulting set of red vertices is independent. We estimate the probability that a vertex $v$ is red conditioned by $v\not\in W$. With probability $1/2$, $v$ is initially colored red. However, with probability at most $1/2$ its neighbor through an edge of $M$ is also colored red (it can happen that this neighbor is in $W$). If this is the case, then the vertex $v$ is uncolored with probability $1/2$. Consequently, the probability that $v$ is red is at least $1/2\cdot (1-1/4)=3/8$. Multiplying by the probability that $v\not\in W$, which is $1-2/(g+1)$, we obtain that the vertex $v$ is included in the independent set with probability at least $3(1-2/(g+1))/8$ as claimed earlier. 
\end{proof} \section*{Appendix} \begin{python}[label=prg-indep] W_THOLD = 1.0/1000000 p_1 = 0.00001 p_2 = 0.00001 def state(): print(" def gen_C(): S= {1: q[1], 2: q[2], 3: q[3]} for u in (1,2,3): A=(1,2,3) B=S if (u>1) else {"-":1} C=S if (u>2) else {"-":1} for a in A: for b in B: for c in C: C3[(u,a,b,c)] = w[u]*q[a]*B[b]*C[c] C2[(u,a,b,c)] = q[u]*q[a]*B[b]*C[c] def init(): global p_r, p_w, p_b t = (1-p_1)**2 w[3] = t**3 w[2] = 3*(t**2)*(1-t) w[1] = 3*t*(1-t)**2 w[0] = (1-t)**3 q[3] = t**2 q[2] = 2*t*(1-t) q[1] = (1-t)**2 p_b = 1-(1-p_1)**3 p_r = p_1*(1-p_b) p_w = 1 - p_r - p_b k=0 p_w = 1 p_b = p_r = 0 w = [0,0,0,3] q = [0,0,0,1] C3 = {} C2 = {} while (p_w > W_THOLD): k+=1 if (k==1): init() state() continue o1 = q[1]/(1-q[2]**2) e1 = q[2]*o1 p1 = o1 + e1 o3 = q[3]/(1-q[2]**2) e3 = q[2]*o3 p3 = o3 + e3 p3_n = q[3]/(1-q[2]*(1-p_2)) o3_n = q[3]/(1-q[2]**2 * (1-p_2)**2) e3_n = q[2]*(1-p_2)*o3_n o3_y = (q[2]**2 *(1-(1-p_2)**2)*o3)/(1-q[2]**2 *(1-p_2)**2) e3_y = q[2]*(p_2*o3 + (1-p_2)*o3_y) Pr = [1,0,0,0] Pb = [0,0,0,0] Pr[1] = (e1 + o1*.5) + p3 Pb[1] = o1*.5 Pr[2] = e1**2 + e1*o1 + 2*e1*p3 Pr[2]+= (1-p_2)*(o3_y**2 +2*o3_y*o3_n+o3_y*e3_y+o3_n*e3_y+o3_y*e3_n) Pr[2]+= p_2*(o3**2 + o3*e3) Pb[2] = o1**2 + e1*o1 + 2*o1*p3 Pb[2]+= (1-p_2)*(e3_y**2 +2*e3_y*e3_n+o3_y*e3_y+o3_n*e3_y+o3_y*e3_n) Pb[2]+= p_2*(e3**2+o3*e3) Pb[3] = 1-(p3_n+e1+o3_y/2)**3 p_r+= p_w*(w[0]+w[1]*Pr[1]+w[2]*Pr[2]) p_b+= p_w*(w[1]*Pb[1]+w[2]*Pb[2]+w[3]*Pb[3]) p_w = 1-p_r-p_b r32 = o1+(1-p_2)*(p3_n+.5*e3_y)+p_2*(.5*e3) s33 = (p3_n+e1+o3_y/2)**2 s32 = (1-p_2)*p3_n / r32 gen_C() T = [0,0,0,0] sum = 0 for C in C3: deg1=False for x in C: if x==1: deg1=True break if deg1: continue N={} C_p = C3[C] if (C[0]==2): N[3] = {0:1} C_p*=1-p_2 for i in (1,2): if (C[i]==3): N[i] = {0: s33 } N[i][1] = 1-N[i][0] elif (C[i]==2): C_p *= (1-p_2)*p3_n N[i] = {0: 1} elif (C[0]==3): for i in (1,2,3): if (C[i]==3): N[i] = {0: s33 } elif (C[i]==2): C_p *= r32 N[i] = {0: s32 } N[i][1] = 1-N[i][0] sum += C_p for a in N[1]: for b in N[2]: for c in N[3]: T[C[0]-a-b-c] += C_p*N[1][a]*N[2][b]*N[3][c] for i in (0,1,2,3): w[i] = T[i]/sum T = [0,0,0,0] sum = 0 for C in C2: deg1=False for x in C: if x==1: deg1=True break if deg1: continue N={} C_p = C2[C] if (C[1]==3): C_p *= s33 elif (C[1]==2): C_p *= (1-p_2)*p3_n if (C[0]==2): N[3] = {0:1} C_p*=1-p_2 if (C[2]==3): N[2] = {0: s33 } N[2][1] = 1-N[2][0] elif (C[2]==2): C_p *= (1-p_2)*p3_n N[2] = {0: 1} elif (C[0]==3): for i in (2,3): if (C[i]==3): N[i] = {0: s33 } elif (C[i]==2): C_p *= r32 N[i] = {0: s32 } N[i][1] = 1-N[i][0] sum += C_p for b in N[2]: for c in N[3]: T[C[0]-b-c] += C_p*N[2][b]*N[3][c] for i in (1,2,3): q[i] = T[i]/sum state() \end{python} \end{document}
\begin{document} \title{Locality and applications to subsumption testing and interpolation in $\mathcal{EL}$ and some of its extensions}\thanks{This work was partly supported by the German Research Council (DFG) as part of the Transregional Collaborative Research Center ``Automatic Verification and Analysis of Complex Systems'' (SFB/TR 14 AVACS). See \texttt{www.avacs.org} for more information.} \titlerunning{Locality and applications to subsumption testing} \author{Viorica Sofronie-Stokkermans} \institute{Viorica Sofronie-Stokkermans \\ University Koblenz-Landau, Koblenz, Germany and \\ Max-Planck-Institut f{\"u}r Informatik, Saarbr{\"u}cken, Germany \\ \email{[email protected]}} \date{} \maketitle \begin{abstract} In this paper we show that subsumption problems in lightweight description logics (such as $\mathcal{EL}$ and $\mathcal{EL}^+$) can be expressed as uniform word problems in classes of semilattices with monotone operators. We use possibilities of efficient local reasoning in such classes of algebras, to obtain uniform PTIME decision procedures for CBox subsumption in $\mathcal{EL}$, $\mathcal{EL}^+$ and extensions thereof. These locality considerations allow us to present a new family of (possibly many-sorted) logics which extend $\mathcal{EL}$ and $\mathcal{EL}^+$ with $n$-ary roles and/or numerical domains. As a by-product, this allows us to show that the algebraic models of ${\cal EL}$ and ${\cal EL}^+$ have ground interpolation and thus that ${\cal EL}$, ${\cal EL}^+$, and their extensions studied in this paper have interpolation. We also show how these ideas can be used for the description logic $\mathcal{EL}^{++}$. \end{abstract} \section{Introduction} \label{intro} Description logics are logics for knowledge representation used in databases and ontologies. They provide a logical basis for modeling and reasoning about objects, classes of objects (concepts), and relationships between them (roles). Recently, tractable description logics such as $\mathcal{EL}$ \cite{Baader2003} have attracted much interest. Although they have restricted expressivity, this expressivity is sufficient for formalizing the type of knowledge used in widely used ontologies such as the medical ontology SNOMED \cite{snomed1,snomed2}. Several papers were dedicated to studying the properties of $\mathcal{EL}$ and its extensions $\mathcal{EL}^+$ \cite{Baader-2005,Baader-dl-2006} and $\mathcal{EL}^{++}$ \cite{Baader-ijcai-2005}, and to understanding the limits of tractability in extensions of $\mathcal{EL}$. Undecidability results for extensions of $\mathcal{EL}$ are obtained in \cite{Baader-dl-2003} using a reduction to the word problem for semi-Thue systems. In this paper we show that the subsumption problem in $\mathcal{EL}$ and $\mathcal{EL}^+$ can be expressed as a uniform word problem in certain varieties of semilattices with monotone operators. We identify a large class of such algebras for which the uniform word problem is decidable in PTIME. For this, we use results on so-called {\em local theory extensions} which we introduced in \cite{Sofronie-cade-05} and further developed in \cite{Sofronie-ijcar-06,Sofronie-lmcs-08,sofronie-ihlemann-ismvl-07}. In \cite{Jacobs-Sofronie-pdpar-entcs07,ihlemann-jacobs-sofronie-tacas08,sofronie-ki08} we proved that local theory extensions occur in a natural way in verification (especially in program verification, and in the verification of parametric systems) and in mathematics. 
The purpose of this paper is to show that the concept of local theory extension turns out to be useful also for identifying and studying tractable extensions of ${\cal EL}$. General results on local theories allow us to: \begin{itemize} \item uniformly present extensions of $\mathcal{EL}$ and $\mathcal{EL}^+$ with $n$-ary roles (and concrete domains); \item provide uniform complexity analysis for $\mathcal{EL}$ and $\mathcal{EL}^+$ and their extensions; \item analyze interpolation in the corresponding algebraic models and its consequences. \end{itemize} \begin{figure} \caption{Constructors considered in this paper and their semantics} \label{fig-summary} \end{figure} The concept constructors, role constructors and role inclusions we can consider are summarized in Figure~\ref{fig-summary}. The main contributions of the paper are: \begin{itemize} \item We show that the subsumption problem in $\mathcal{EL}$ (resp.\ $\mathcal{EL}^+$) can be expressed as a uniform word problem in classes of semilattices with monotone operators (possibly satisfying certain composition laws). \item We show that the corresponding classes of semilattices with operators have local presentations and we use methods for efficient reasoning in local theories or in local theory extensions in order to obtain PTIME decision procedures for $\mathcal{EL}$ and $\mathcal{EL}^+$. \item These locality considerations allow us to present new families of PTIME logics with $n$-ary roles (and possibly also concrete domains) which extend $\mathcal{EL}$ and $\mathcal{EL}^+$. \item In particular, we identify a PTIME extension of $\mathcal{EL}$ with two sorts, ${\sf concept}$ and ${\sf num}$, where the concepts of sort ${\sf num}$ are interpreted as elements in the ORD-Horn, convex fragment of Allen's interval algebra. \item We notice that the axioms which correspond, at an algebraic level, to the role inclusions in ${\cal EL}^+$ are exactly of the type studied in the context of hierarchical interpolation in \cite{Sofronie-ijcar-06}. As a by-product, we thus show that the algebraic models of ${\cal EL}$ and ${\cal EL}^+$ have the ground interpolation property and infer that ${\cal EL}$, ${\cal EL}^+$, and their extensions studied in this paper have interpolation. \item We end the paper with some considerations on possibilities of handling ${\cal EL}^{++}$ constructors and ABoxes. \end{itemize} Some of the results of this paper were reported -- in preliminary form -- in \cite{sofronie-dl08,sofronie-aiml08}. At that time we could only prove a weak locality property in the presence of role inclusions. In this paper we considerably improve the results presented in \cite{sofronie-dl08,sofronie-aiml08} by showing that ${\cal EL}$, ${\cal EL}^+$ as well as some of their extensions enjoy {\em the same type} of locality property, which allows to reduce, ultimately, CBox subsumption checking to checking the satisfiability of ground clauses in the theory of partially-ordered sets. We thus obtain a cubic time decision procedures for CBox subsumption in a class of extensions of ${\cal EL}$. New contributions of this paper are also (i) the applications of our results on interpolation in local theory extensions \cite{Sofronie-ijcar-06,Sofronie-lmcs-08} to interpolation in ${\cal EL}^+$ and (ii) the presentation of PTIME results in ${\cal EL}^{++}$ in the framework of locality. 
\noindent {\em Structure of the paper.} In Sect.\ \ref{dl-gen} we present generalities on description logics and introduce the description logics $\mathcal{EL}$ and $\mathcal{EL}^+$. In Sect.\ \ref{algebra} we provide the notions from algebra and correspondence theory needed in the paper. In Sect.~\ref{dl-alg-sem} we show that for many extensions of ${\cal EL}$ CBox subsumption can be expressed as a uniform word problem in the class of semilattices with monotone operators satisfying certain composition axioms. In Sect.~\ref{locality} we present general definitions and results on local theory extensions and in Sect.~\ref{complexity} we show that the algebraic models of $\mathcal{EL}$ and $\mathcal{EL}^+$ have local presentations, thus providing an alternative proof of the fact that CBox subsumption in $\mathcal{EL}$ and $\mathcal{EL}^+$ is decidable in PTIME. Locality results for more general classes of semilattices with operators are used in Sect.~\ref{sect-extensions} for defining extensions of $\mathcal{EL}$ and $\mathcal{EL}^+$ with a subsumption problem decidable in PTIME. In Sect.~\ref{interpolation} we use these results for obtaining interpolation results for ${\cal EL}$ and its extensions. The results in Sect.~\ref{el++} show that PTIME decidability of CBox subsumption in ${\cal EL}^{++}$ can also be explained within the framework of locality. \section{Description logics: generalities} \label{dl-gen} The central notions in description logics are concepts and roles. In any description logic a set $N_C$ of {\em concept names} and a set $N_R$ of {\em roles} are assumed to be given. Complex concepts are defined starting with the concept names in $N_C$, with the help of a set of {\em concept constructors}. The available constructors determine the expressive power of a description logic. The semantics of description logics is defined in terms of interpretations ${\mathcal I} = (D^{\mathcal I}, \cdot^{\mathcal I})$, where $D^{\mathcal I}$ is a non-empty set, and the function $\cdot^{\mathcal I}$ maps each concept name $C \in N_C$ to a set $C^{\mathcal I} \subseteq D^{\mathcal I}$ and each role name $r \in N_R$ to a binary relation $r^{\mathcal I} \subseteq D^{\mathcal I} \times D^{\mathcal I}$. Fig.~\ref{table-dl-constr} shows the constructor names used in the description logic ${\mathcal A}{\mathcal L}{\mathcal C}$ and their semantics. The extension of $\cdot^{\mathcal I}$ to concept descriptions is inductively defined using the semantics of the constructors. \begin{figure} \caption{${\cal ALC}$ constructors and their semantics} \label{table-dl-constr} \end{figure} \noindent \begin{definition}[Terminology] A {\em terminology}\/ (or TBox, for short) is a finite set consisting of {\em primitive concept definitions} of the form $C \equiv D$, where $C$ is a concept name and $D$ a concept description; and {\em general concept inclusions} (GCI) of the form $C \sqsubseteq D$, where $C$ and $D$ are concept descriptions. \end{definition} \begin{definition}[Model of a TBox] An interpretation ${\mathcal I}$ is a model of a TBox ${\mathcal T}$ if it satisfies: \begin{itemize} \item all concept definitions in ${\mathcal T}$, i.e.\ $C^{\mathcal I} {=} D^{\mathcal I}$ for all definitions $C {\equiv} D \in {\mathcal T}$; \item all general concept inclusions in ${\mathcal T}$, i.e.\ $C^{\mathcal I} {\subseteq} D^{\mathcal I}$ for every $C {\sqsubseteq} D \in {\mathcal T}$. \end{itemize} \end{definition} Since definitions can be expressed as double inclusions, in what follows we will refer to TBoxes consisting of general concept inclusions (GCIs) only. 
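\noindent {\em Example} (ours, for illustration only){\em .} Let $N_C = \{ A, B, C \}$ and $N_R = \{ r \}$, and consider the TBox ${\mathcal T} = \{ A \sqsubseteq \exists r.B, \; B \sqsubseteq C \}$ consisting of two GCIs. The interpretation ${\mathcal I}$ with $D^{\mathcal I} = \{ d_1, d_2 \}$, $A^{\mathcal I} = \{ d_1 \}$, $B^{\mathcal I} = C^{\mathcal I} = \{ d_2 \}$ and $r^{\mathcal I} = \{ (d_1, d_2) \}$ is a model of ${\mathcal T}$: $(\exists r.B)^{\mathcal I} = \{ x \mid \exists y \, ((x, y) \in r^{\mathcal I} \text{ and } y \in B^{\mathcal I}) \} = \{ d_1 \} \supseteq A^{\mathcal I}$ and $B^{\mathcal I} \subseteq C^{\mathcal I}$. Moreover, in every model ${\mathcal I}$ of ${\mathcal T}$ we have $A^{\mathcal I} \subseteq (\exists r.B)^{\mathcal I} \subseteq (\exists r.C)^{\mathcal I}$, an instance of the subsumption relation defined next.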
\begin{definition}[TBox subsumption] Let ${\mathcal T}$ be a TBox, and $C_1, C_2$ two concept descriptions. $C_1$ is subsumed by $C_2$ w.r.t.\ ${\mathcal T}$ (for short, $C_1 \sqsubseteq_{\mathcal T} C_2$) if and only if $C_1^{\mathcal I} \subseteq C_2^{\mathcal I}$ for every model ${\mathcal I}$ of ${\mathcal T}$. \end{definition} \subsection{The description logics $\mathcal{EL}$, $\mathcal{EL}^+$ and some extensions} \label{sect-el} By restricting the type of allowed concept constructors less expressive but tractable description logics can be defined. If we only allow intersection and existential restriction as concept constructors, we obtain the description logic ${\mathcal E}{\mathcal L}$ \cite{Baader2003}, a logic used in terminological reasoning in medicine \cite{snomed1,snomed2}. In \cite{Baader-2005,Baader-dl-2006}, the extension ${\mathcal E}{\mathcal L}^+$ of ${\mathcal E}{\mathcal L}$ with role inclusion axioms is studied. Relationships between concepts and roles are described using CBoxes. \begin{definition}[Constraint box] A CBox consists of a terminology ${\mathcal T}$ and a set $RI$ of role inclusions of the form $r_1 {\circ} {\dots} {\circ} r_n \sqsubseteq s$. Since terminologies can be expressed as sets of general concept inclusions, we will view CBoxes as unions $GCI {\cup} RI$ of a set $GCI$ of general concept inclusions and a set $RI$ of role inclusions of the form $r_1 {\circ} {\dots} {\circ} r_n \sqsubseteq s$, with $n {\geq} 1$. \end{definition} \begin{definition}[Models of CBoxes] An interpretation ${\mathcal I}$ is a model of the CBox ${\mathcal C} = GCI \cup RI$ if it is a model of $GCI$ and satisfies all role inclusions in ${\mathcal C}$, i.e.\ $r_1^{\mathcal I} \circ \dots \circ r_n^{\mathcal I} \subseteq s^{\mathcal I}$ for all $r_1 \circ \dots \circ r_n \subseteq s \in RI$. \end{definition} \begin{definition}[CBox subsumption] If ${\mathcal C}$ is a CBox, and $C_1, C_2$ are concept descriptions then $C_1 \sqsubseteq_{\mathcal C} C_2$ if and only if $C_1^{\mathcal I} \subseteq C_2^{\mathcal I}$ for every model ${\mathcal I}$ of ${\mathcal C}$. \end{definition} \noindent In \cite{Baader-2005} it was shown that subsumption w.r.t.\ CBoxes in $\mathcal{EL}^+$ can be reduced in linear time to subsumption w.r.t.\ {\em normalized} CBoxes, in which all GCIs have one of the forms: $C \sqsubseteq D, C_1 \sqcap C_2 \sqsubseteq D, C \sqsubseteq \exists r.D, \exists r.C \sqsubseteq D$, where $C, C_1, C_2, D$ are concept names, and all role inclusions are of the form $r \sqsubseteq s$ or $r_1 \circ r_2 \sqsubseteq r$. Therefore, in what follows, we consider w.l.o.g.\ that CBoxes only contain role inclusions of the form $r \sqsubseteq s$ and $r_1 \circ r_2 \sqsubseteq r$. \noindent In \cite{Baader-ijcai-2005}, the extension $\mathcal{EL}^{++}$ of $\mathcal{EL}^+$ is introduced. In addition to the constructions in $\mathcal{EL}^+$, $\mathcal{EL}^{++}$ can be parameterized by one or more concrete domains ${\mathcal D}_1, \dots, {\mathcal D}_m$, which correspond to standard data types and permit reference to concrete data objects such as strings and integers. Formally, a concrete domain is a pair ${\cal D} = (D^{\cal D}, {\cal P}^{\cal D})$, where $D^{\cal D}$ is a set and ${\cal P}^{\cal D}$ is a family of predicate names with given (strictly positive) arity, and given interpretations as relations on $D^{\cal D}$. 
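\noindent For instance (an illustration of ours, not a concrete domain fixed by the paper), ${\cal D} = ({\mathbb Z}, \{ <, \leq, = \})$, where the binary predicates $<$, $\leq$ and $=$ are interpreted as the usual relations on the integers, is a concrete domain in the sense of this definition; a concrete domain of strings with a binary prefix predicate can be defined analogously.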
The link between the description logic and the concrete domains is established by means of a set of {\em feature names} ${\sf N}_F$, interpreted as maps $f : D \rightarrow D_i$, where $D$ is the universe of the interpretation ${\cal I}$ of the description logic and $D_i$ is the universe of a concrete domain ${\cal D}_i$. TBoxes can contain constraints referring to features and concrete domains, of the form $$\begin{array}{|c|c|c|} \hline \text{ Name } & \text{ Syntax } & \text{ Semantics } \\ \hline ~{\sf concrete}~ & ~p(f_1, \dots, f_n)~ & ~\{ x \in D^{\cal I} \mid \exists y_1, \dots, y_n {\in} D^{{\cal D}_i}: f^{\cal I}_j(x) {=} y_j \text{ for } 1 {\leq} j {\leq} n~ \\ ~{\sf domains}~ & ~p \in {\cal P}^{{\cal D}_i}, f_j \in {\sf N}_F~ & ~\text{ and } p^{{\cal D}_i}(y_1, \dots, y_n) \}~\\ \hline \end{array}$$ \noindent In this paper we show that CBox subsumption for $\mathcal{EL}$ and $\mathcal{EL}^+$ can be expressed as a uniform word problem for classes of semilattices with monotone operators. We then analyze various other types of axioms leading to extensions of $\mathcal{EL}$ and $\mathcal{EL}^+$, including a variant of $\mathcal{EL}^{++}$ without ABoxes. \noindent We start by presenting the necessary notions from algebra. \section{Algebra: preliminaries} \label{algebra} We assume the reader is familiar with notions such as partially-ordered sets and order filters/ideals in a partially-ordered set. For further information cf.\ \cite{Priestley90}. In what follows we will use one-sorted as well as many-sorted algebraic structures. \noindent Let $\Sigma$ be a (one-sorted) signature consisting of a set of function symbols, together with an arity function $a : \Sigma \rightarrow {\mathbb N}$ which associates with every function symbol its arity. An algebraic structure (over $\Sigma$) is a tuple ${\cal A} = (A, \{ f_A \}_{f \in \Sigma})$, where $A$ is a non-empty set (the universe of ${\cal A}$) and for every $f \in \Sigma$, if $a(f) = n$ then $f_A : A^n \rightarrow A$. \noindent Let $(S, \Sigma)$ be a many-sorted signature consisting of a set $S$ of sorts and a set $\Sigma$ of function symbols, together with an arity function $a : \Sigma \rightarrow (S^* \rightarrow S)$ which associates with every function symbol $f$ its arity $a(f) = s_1, \dots, s_n \rightarrow s$ (which specifies the sorts of the $n$ arguments of $f$ and the sort of the output). A (many-sorted) algebraic structure (over $(S, \Sigma)$) is a tuple ${\cal A} = ( \{A_s\}_{s \in S}, \{ f_A \}_{f \in \Sigma})$, where for every $s \in S$, $A_s$ is a non-empty set (the universe of ${\cal A}$ of sort $s$) and for every $f \in \Sigma$, if $a(f) = s_1\dots s_n \rightarrow s$ then $f_A : A_{s_1} \times \dots \times A_{s_n} \rightarrow A_s$. \subsection{Semilattices, (distributive) lattices, Boolean algebras} An algebraic structure $(L, \wedge)$ consisting of a non-empty set $L$ together with a binary operation $\wedge$ is called a {\em semilattice} if $\wedge$ is associative, commutative and idempotent. An algebraic structure $(L, \vee, \wedge)$ consisting of a non-empty set $L$ together with two binary operations $\vee$ and $\wedge$ on $L$ is called a {\em lattice} if $\vee$ and $\wedge$ are associative, commutative and idempotent and satisfy the absorption laws. A {\em distributive lattice} is a lattice that satisfies either of the distributive laws $(D_{\wedge})$ or $(D_{\vee})$, which are equivalent in a lattice. 
\begin{eqnarray*} (D_{\wedge}) & \quad \quad \quad \forall x, y, z ~~~~ x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z) \\ (D_{\vee}) & \quad \quad \quad \forall x, y, z ~~~~ x \vee (y \wedge z) = (x \vee y) \wedge (x \vee z) \end{eqnarray*} In any semilattice $(L, \wedge)$ or lattice $(L, \vee, \wedge)$ an order can be defined in a canonic way by $$x \leq y \text{ if and only if } x \wedge y = x.$$ An element $0$ which is smaller than all other elements w.r.t.\ $\leq$ is called first element; an element $1$ which is larger than all other elements w.r.t.\ $\leq$ is called last element. A lattice having both a first and a last element is called {\em bounded}. A Boolean algebra is a structure $(B, \vee, \wedge, \neg, 0, 1)$, such that $(B, \vee, \wedge, 0, 1)$ is a bounded distributive lattice and $\neg$ is a unary operation that satisfies: \begin{eqnarray*} {\sf (Complement)} & \quad \forall x & \neg x \vee x = 1 \quad \quad \quad \forall x ~~~ \neg x \wedge x = 0 \end{eqnarray*} \noindent Let ${\mathcal V}$ be a class of algebras. The {\em universal Horn theory} of ${\mathcal V}$ is the collection of those closed formulae valid in ${\mathcal V}$ which are of the form \begin{eqnarray} \forall x_1 \dots \forall x_n (\bigwedge_{i=1}^n s_{i1} = s_{i2} \rightarrow t_{1} = t_{2}) \label{horn-1} \end{eqnarray} \noindent The formula~(\ref{horn-1}) above is valid in ${\mathcal V}$ if for each algebra ${\mathcal A} \in {\mathcal V}$ with universe $A$ and for each assignment $v$ of values in $A$ to the variables, if $v(s_{i1}) = v(s_{i2})$ for all $i \in \{ 1, \dots, n \}$ then $v(t_{1}) = v(t_{2})$.\footnote{If ${\cal A}$ is an algebra with universe $A$ and $v : X \rightarrow A$ an assignment, then $v$ extends in a canonical way to a homomorphism ${\overline v}$ from the algebra of terms with variables $X$ to ${\cal A}$. For every term $t$ with variables in $X$ we will, for the sake of simplicity, write $v(t)$ instead of ${\overline v}(t)$.} The problem of deciding the validity of universal Horn sentences in a class ${\mathcal V}$ of algebras is also called the {\em uniform word problem} for ${\mathcal V}$. It is known that the uniform word problem is decidable for the following classes of algebras: The class ${\sf SL}$ of semilattices (in PTIME), the class ${\sf DL}$ of distributive lattices (coNP-complete), and the class ${\sf Bool}$ of Boolean algebras (NP-complete). \subsection{Boolean algebras with operators} In what follows we will consider the following class of Boolean algebras with operators: \begin{definition} Let ${\sf BAO}({\Sigma})$ be the class of Boolean algebras with operators in $\Sigma$, of the form $(B, \vee, \wedge, \neg, 0, 1, \{ f_B \}_{f \in \Sigma})$, such that for every $f \in \Sigma$ of arity $n = a(f)$, $f_B : B^{n} \rightarrow B$ is a join-hemimorphism, i.e.\ $\begin{array}{lrcl} \forall x_1, \dots, x_i, x'_i, \dots, x_n~~~ & f(x_1, \dots, x_i \vee x'_i, \dots, x_n) & = & f(x_1, \dots, x_i, \dots, x_n) \vee f(x_1, \dots, x'_i, \dots, x_n)\\ \forall x_1, \dots, \dots, x_n ~~ & f(x_1,~ \dots~, ~0~ ,~ \dots~, x_n) & = & 0. 
\end{array}$ \end{definition} With every join-hemimorphism on a Boolean algebra $B$, $f_B : B^n \rightarrow B$ we can associate a map $g_B : B^n \rightarrow B$ defined for every $(x_1, \dots, x_n) \in B^n$ by $ g_B(x_1, \dots, x_n) = \neg f_B(\neg x_1, \dots, \neg x_n).$ The map $g_B$ is a meet-hemimorphism in every argument, i.e.\ it satisfies, for every $1 \leq i \leq n$: $\begin{array}{lrcl} \forall x_1, \dots, x_i, x'_i, \dots, x_n ~~~ & g(x_1, \dots, x_i \wedge x'_i, \dots, x_n) & = & g(x_1, \dots, x_i, \dots, x_n) \wedge g(x_1, \dots, x'_i, \dots, x_n)\\ \forall x_1, \dots, \dots, x_n & g(x_1,~ \dots~, ~1~ ,~ \dots~, x_n) & = & 1. \end{array}$ \noindent In relationship with ${\cal EL}$ and ${\cal EL}^+$ we will also use the following types of algebras: \begin{itemize} \item ${\sf DLO}({\Sigma})$ the class of bounded distributive lattices with operators $(L, \vee, \wedge, 0, 1, \{ f_L \}_{f \in \Sigma})$, such that $f_L : L^n \rightarrow L$ is a join-hemimorphism of arity $n = a(f)$; \item ${\sf SLO}({\Sigma})$ the class of all $\wedge$-semilattices with operators $(S, \wedge, 0, 1, \{ f_S \}_{f \in \Sigma})$, such that $f_S$ is monotone and $f_S(0) = 0$. \end{itemize} In what follows we will denote join-hemimorphisms by $f_{\exists}$ and the associated meet-he\-mi\-morphisms by $f_{\forall}$. The reason for this notation will become clear in Section~\ref{correspondence}, and especially in Section~\ref{dl-alg-sem}. \subsection{Correspondence theory} \label{correspondence} We now present some links between axioms satisfied in Boolean algebras with operators and properties of relational spaces.\footnote{Most calculations in the results presented here are simple; the correspondence results presented here could be also obtained as a consequence of a general result in algebraic logic, namely Sahlqvist's theorem.} \begin{definition}[Duals of Boolean algebras with operators] Let ${\bf B} = (B, \wedge, \vee, \neg, 0, 1, \{ f_{\exists}\}_{f \in \Sigma})$ be a Boolean algebra with operators having the property that for every $f \in \Sigma$, $f_{\exists} : B^{a(f)} \rightarrow B$ is a join-hemimorphism in every argument, and let $f_{\forall} : B^{a(f)} \rightarrow B$ be defined by $f_{\forall}(x_1, \dots, x_n) = \neg f_{\exists}(\neg x_1, \dots, \neg x_n)$ for every $x_i \in B^{a(f)}$ (a meet-hemimorphism in every argument). The {\em Stone dual of ${\bf B}$} is the topological relational space $D({\bf B}) = ({\cal F}_p(B), \{ r_f \}_{f \in N_R}, \tau)$ having as support the set ${\cal F}_p(B)$ of all prime filters of $B$ with the Stone topology, and relations associated with the operators of $B$ in a canonical way by: $$r_{f}(F, F_1, \dots, F_n) \text{ iff } f_{\exists}(F_1, \dots, F_n) \subseteq F.$$ \end{definition} \begin{definition}[Canonical extension of a Boolean algebra with operators] The {\em canonical extension} of ${\bf B}$ is the Boolean algebra of subsets of the Stone dual $D({\bf B})$ of ${\bf B}$, ${\cal P}(D({\bf B})) = ({\cal P}({\cal F}_p(B)), \cap, \cup, \emptyset,{\cal F}_p(B), \{ f_{\exists r_f} \}_{f \in \Sigma})$, where $$ f_{\exists r_f}(U_1, \dots, U_{n}) = \{ F \mid \exists F_1, \dots, F_n \in {\cal F}_p(B), r_f(F, F_1, \dots, F_n) \}$$ \end{definition} \subsubsection{From algebras to relational spaces} \label{corresp-bao-rel} We now analyze the link between properties of Boolean algebras with operators and properties of their duals. We focus on the properties related to the role inclusions considered in the study of ${\cal EL}^+$. 
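Before turning to role inclusions, we illustrate the constructions above on a small example (ours, included for illustration only). Let $W = \{1, 2\}$, let $R = \{ (1,1), (1,2), (2,2) \}$ and let ${\bf B} = ({\cal P}(W), \cup, \cap, \neg, \emptyset, W, f_{\exists})$, where $f_{\exists}(X) = \{ w \in W \mid \exists x \in X \text{ with } (w, x) \in R \}$. Then $f_{\exists}$ is a join-hemimorphism ($f_{\exists}(X \cup Y) = f_{\exists}(X) \cup f_{\exists}(Y)$ and $f_{\exists}(\emptyset) = \emptyset$). The prime filters of ${\bf B}$ are the principal filters $F_w = \{ X \subseteq W \mid w \in X \}$ for $w \in W$, and, since $f_{\exists}$ is monotone, $f_{\exists}(F_{w'}) \subseteq F_w$ holds if and only if $w \in f_{\exists}(\{w'\})$, i.e.\ if and only if $(w, w') \in R$. Hence the relation $r_f$ of the Stone dual $D({\bf B})$ is (up to identifying $F_w$ with $w$) the relation $R$ we started from; ${\bf B}$ being finite, its canonical extension ${\cal P}(D({\bf B}))$ is isomorphic to ${\bf B}$.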
We consider slightly more general {\em guarded role inclusions} of the form: $$ \begin{array}{ll} \forall x, y & (y \in C \wedge r(x, y) \rightarrow s(x, y)) \\ \forall x, y & (y \in C \wedge r_1 \circ s(x, y) \rightarrow r_2(x, y)) \\ \forall x, y & (y \in C \wedge r_1 \circ s(x, y) \rightarrow x = y) \end{array}$$ \begin{theorem} Let $B \in BAO(\Sigma)$, let $f, g, h \in \Sigma$ be unary join-hemimorphisms on $B$; and let $c$ be a constant and $C$ be the predicate associated in a canonical way with $c$ in $D(B)$ by $$C(F) \text{ iff } c \in F.$$ \begin{itemize} \item[~(1)] If $B \models \forall x (x \leq c \rightarrow g(x) \leq h(x))$ then $D(B) \models \forall x,y (y \in C \wedge r_g(x, y) \rightarrow r_h(x, y))$.\item[(2)] If $B \models \forall x (x \leq c \rightarrow f(g(x)) \leq h(x))$ then $D(B) ~\models~ \forall x, y ~(y \in C ~\wedge$ $r_f \circ r_g(x, y) \rightarrow r_h(x, y))$. \item[(3)] If $B \models \forall x (x \leq c \rightarrow f(g(x)) \leq x)$ then $D(B) \models \forall x, y (y \in C \wedge r_f \circ r_g(x, y) \rightarrow x = y)$. \end{itemize} \label{bao-rel-guards} \end{theorem} \noindent {\it Proof\/}:\ (1) Assume that $B \models \forall x (x \leq c \rightarrow g(x) \leq h(x))$. Let $F, G \in {\cal F}_p(B)$. Assume that $G \in C$ and $r_g(F, G)$. Then $c \in G$ and $g(G) \subseteq F$. We show that $h(G) \subseteq F$. Let $x \in G$. Then $c \wedge x \in G$, so $g(c \wedge x) \in F$. Since $c \wedge x \leq c$, $g(c \wedge x) \leq h(c \wedge x)$, and since $c \wedge x \leq x$ and $h$ is monotone, $h(c \wedge x) \leq h(x)$. As $F$ is upward closed, $h(x) \in F$, i.e.\ $h(G) \subseteq F$. Hence, $(F, G) \in r_h$. (2) Let $F, G \in {\cal F}_p(B)$. Assume that $G \in C$ (i.e.\ $c \in G$) and $(F, G) \in r_f \circ r_g$. Then there exists $H \in {\cal F}_p(B)$ such that $(F, H) \in r_f$ and $(H, G) \in r_g$, i.e.\ such that $f(H) \subseteq F$ and $g(G) \subseteq H$. Then $f(g(G)) \subseteq f(H) \subseteq F$. Let $x \in G$. Then $c \wedge x \in G$, hence $f(g(c \wedge x)) \in F$. Since $c \wedge x \leq c$, $f(g(c \wedge x)) \leq h(c \wedge x)$, and since $c \wedge x \leq x$, $h(c \wedge x) \leq h(x)$, so $h(x) \in F$. This shows that $h(G) \subseteq F$, i.e.\ $(F, G) \in r_h$. The proof of (3) is analogous to that of (2). \hspace*{\fill}$\Box$ \noindent In the particular case when $c = 1$ we obtain the following correspondence result: \begin{corollary} Let $B \in BAO(\Sigma)$, and let $f, g, h \in \Sigma$ be unary join-hemimorphisms on $B$. \begin{itemize} \item[(1)] If $B \models g(x) \leq h(x)$ then in $D(B)$, $r_g \subseteq r_h$. \item[(2)] If $B \models f(g(x)) \leq h(x)$ then in $D(B)$, $r_f \circ r_g \subseteq r_h$. \item[(3)] If $B \models f(g(x)) \leq x$ then in $D(B)$, $r_f \circ r_g \subseteq id$, where $id = \{ (x, x) \mid x \in {\cal F}_p(B) \}$. \end{itemize} \label{bao-rel} \end{corollary} Analogons of Theorem~\ref{bao-rel-guards} and Corollary~\ref{bao-rel} can also be proved for operators with higher arity: \begin{theorem} Let $B \in BAO(\Sigma)$, and let $f, g, g_1, \dots, g_n, h \in \Sigma$ such that $f,g$ are $n$-ary, $g_i$ are $n_i$-ary, and $h$ is an $m$-ary join-hemimorphism on $B$, and let $c_i$ (resp.\ $c^i_j$) be constants and $C_i$ (resp.\ $C^i_j$) the predicates associated in a canonical way with $c_i$ (resp.\ $c^i_j$) in $D(B)$ as explained above.
Then: \begin{itemize} \item[(1)] If $n = m$ and $B \models \forall x_1, \dots, x_n (\bigwedge_i x_i \leq c_i \rightarrow g(x_1, \dots, x_n) \leq h(x_1, \dots, x_n))$ then \[D(B) \models \forall x, x_1, \dots, x_n (x_1 \in C_1 \wedge \dots \wedge x_n \in C_n \wedge r_g(x, x_1, \dots, x_n) \rightarrow r_h(x, x_1, \dots, x_n)).\] \item[(2)] If $B {\models} \forall \overline{x_1}, {\dots}, \overline{x_n} (\bigwedge_{i = 1}^n x^i_1 {\leq} c^i_1 \wedge \dots \wedge x^i_{n_i} {\leq} c^i_{n_i} \rightarrow f(g_1({\overline x}_1), {\dots}, g_n({\overline x}_n)) {\leq} h({\overline x}_1, {\dots}, {\overline x}_n))$ (where $\sum_{i=1}^n n_i = m$) then \[D(B) \models \forall x, \overline{x_1}, \dots, \overline{x_n} (\bigwedge_{i = 1}^n x^i_1 \in C^i_1 \wedge \dots \wedge x^i_{n_i} \in C^i_{n_i} \wedge r_f(x, r_{g_1}({\overline x_1}), \dots, r_{g_n}(\overline{x_n})) \rightarrow r_h(x, {\overline x}_1, \dots, {\overline x}_n)).\] \item[(3)] If $g_i$ are unary and $B \models \forall x (x \leq c \rightarrow f(g_1(x), \dots, g_n(x)) \leq x)$ then \[D(B) \models \forall x, y (y \in C \wedge r_f(x, r_{g_1}(y), \dots, r_{g_n}(y)) \rightarrow x = y).\] \end{itemize} \label{bao-rel-n-guards} \end{theorem} \noindent {\it Proof\/}:\ The proof of (1) is similar to the proof of item (1) in Theorem~\ref{bao-rel-guards}. (2) Let $F \in {\cal F}_p(B)$ and ${\overline F}_1, \dots, {\overline F}_n$ be tuples of prime filters such that ${\overline F}_i$'s length corresponds to the arity of $g_i$. Assume that $F^i_j \in C^i_j$ (i.e.\ $c^i_j \in F^i_j$) and that $(F, {\overline F}_1, \dots, {\overline F}_n) \in r_f \circ (r_{g_1}, \dots, r_{g_n})$. Then there exist $F_1, \dots, F_n \in {\cal F}_p(B)$ such that $(F, F_1, \dots, F_n) \in r_f$ and $(F_i, {\overline F}_i) \in r_{g_i}$. Then $f(F_1, \dots, F_n) \subseteq F$ and $g_i({\overline F}_i) \subseteq F_i$. It follows that $f(g_1({\overline F}_1), \dots, g_n({\overline F}_n)) \subseteq F$. As in the proof of (2) in Theorem~\ref{bao-rel-guards} we can then conclude that $(F, {\overline F}_1, \dots, {\overline F}_n) \in r_h$. The proof of (3) is similar. \hspace*{\fill}$\Box$ \begin{corollary} Let $B \in BAO(\Sigma)$, and let $f, g, g_1, \dots, g_n, h \in \Sigma$ be such that $f, g$ are $n$-ary, $g_i$ are $n_i$-ary, and $h$ is an $m$-ary join-hemimorphism on $B$. Then: \begin{itemize} \item[(1)] If $n = m$ and $B \models g(\overline{x}) \leq h(\overline{x})$ then in $D(B)$, $r_g \subseteq r_h$. \item[(2)] If $B \models f(g_1({\overline x}_1), \dots, g_n({\overline x}_n)) \leq h({\overline x}_1, \dots, {\overline x}_n)$ (where $\sum n_i = m$) then in $D(B)$, $$r_f \circ (r_{g_1}, \dots, r_{g_n}) \subseteq r_h.$$ \item[(3)] If $g_i$ are unary and $B \models f(g_1(x), \dots, g_n(x)) \leq x$ then in $D(B)$, $r_f \circ (r_{g_1}, \dots, r_{g_n}) \subseteq id$, where $id = \{ (x, x) \mid x \in {\cal F}_p(B) \}$ is the identity relation. \end{itemize} \label{bao-rel-n} \end{corollary} \subsubsection{From relational spaces to algebras} We now consider relational spaces, i.e.\ structures of the form ${\bf D} = (D, \{ r_D \}_{r \in \Sigma})$, where $D$ is a set and for every $r \in \Sigma$, $r_D$ is a relation on $D$. The dual of a Boolean algebra (if we ignore the topology) is a relational space.
The canonical extension associated with a Boolean algebra $B$ is the Boolean algebra $$({\cal P}({\cal F}_p(B)), \cup, \cap, \neg, \emptyset, {\cal F}_p(B), \{ f_{\exists r_f} \}_{f \in \Sigma})$$ of subsets of ${\cal F}_p(B)$, with operators $f_{\exists r_f}, f_{\forall r_f}$ defined from the relations $r_f$ by: $\begin{array}{lll} f_{\exists r_f}(U_1, \dots, U_n) & = & \{ x \mid \exists y_1, \dots, y_n ((x,y_1, \dots, y_n) \in r_f \mbox{ and } y_i \in U_i \text { for } 1 \leq i \leq n)\} \label{1}\\ f_{\forall r_f}(U_1, \dots, U_n) & = & \{ x \mid \forall y_1, \dots, y_n ((x,y_1, \dots, y_n) \in r_f \Rightarrow y_i \in U_i \text { for } 1 \leq i \leq n)\}. \label{2} \end{array}$ \noindent With every relational space one can associate a Boolean algebra, with the universe consisting of all subsets of $D$. \begin{theorem} Let ${\bf D} = (D, \{ r_D \}_{r \in \Sigma})$ be a relational space, let $C \subseteq D$ and let ${\cal P}({\bf D}) $ be the Boolean algebra with operators $({\cal P}(D), \cup, \cap, \emptyset, D, \{ f_{\exists r}\}_{r \in \Sigma})$, where $f_{\exists r}$ is as in definition~(\ref{1}) above, and $c$ be a constant symbol with interpretation $C$. Then the following hold: \begin{itemize} \item[(1)] If~ ${\bf D} \models \forall x, y ( y \in C \wedge r_1(x, y) \rightarrow r_2(x, y))$ then ${\cal P}({\bf D}) \models \forall x ~~(x \leq c \rightarrow f_{\exists r_1}(x) \leq f_{\exists r_2}(x)).$ \item[(2)] If~ ${\bf D} \models \forall x, y (y {\in} C \wedge r_1 {\circ} s(x, y) {\rightarrow} r_2(x, y))$ then ${\cal P}({\bf D}) \models \forall x ~~( x {\leq} c {\rightarrow} f_{\exists r_1}(f_{\exists s}(x)) {\leq} f_{\exists r_2}(x)).$ \item[(3)] If~ ${\bf D} \models \forall x, y (y \in C \wedge r_1 \circ s(x, y) \rightarrow x = y)$ then ${\cal P}({\bf D}) \models \forall x (x \leq c \rightarrow f_{\exists r_1}(f_{\exists s}(x)) \leq x).$ \end{itemize} \label{rel-to-bao-guards} \end{theorem} \noindent {\it Proof\/}:\ Clearly, ${\mathcal P}({\bf D}) \in {\sf BAO}(\Sigma)$. Let $r_1, r_2, s {\in} \Sigma$ and $U {\in} {\mathcal P}(D)$ with $U \subseteq C$. (1) Assume that ${\bf D} \models \forall x, y ( y \in C \wedge r_1(x, y) \rightarrow r_2(x, y))$. Let $x \in f_{\exists r_1}(U)$. Then there exists $y \in U$ such that $r_1(x, y)$. As $U \subseteq C$, $y \in C$, so $r_2(x, y)$ and hence $x \in f_{\exists r_2}(U)$. (2) Assume that ${\bf D} \models \forall x, y (y \in C \wedge r_1 \circ s(x, y) \rightarrow r_2(x, y))$. Let $x \in f_{\exists r_1}(f_{\exists s}(U)) = f_{\exists r_1 \circ s}(U)$. Then there exists $y \in U$ such that $(x, y) \in r_1 \circ s$. As before, $y \in C$, so $r_2(x, y)$ and hence $x \in f_{\exists r_2}(U)$. The proof of (3) is similar. \hspace*{\fill}$\Box$ \begin{corollary} Let ${\bf D} = (D, \{ r_D \}_{r \in \Sigma})$ be a relational space and let ${\cal P}({\bf D}) $ be the Boolean algebra with operators $({\cal P}(D), \cup, \cap, \emptyset, D, \{ f_{\exists r}\}_{r \in \Sigma})$, where $f_{\exists r}$ is as in definition~(\ref{1}) above. The following hold: \begin{itemize} \item[(1)] If~ ${\bf D} \models r_1 \subseteq r_2$ then ${\cal P}({\bf D}) \models \forall x ~~ f_{\exists r_1}(x) \leq f_{\exists r_2}(x)$. \item[(2)] If~ ${\bf D} \models r_1 \circ s \subseteq r_2$ then ${\cal P}({\bf D}) \models \forall x ~~ f_{\exists r_1}(f_{\exists s}(x)) \leq f_{\exists r_2}(x)$. \item[(3)] If~ ${\bf D} \models r_1 \circ s \subseteq id$ then ${\cal P}({\bf D}) \models \forall x ~~ f_{\exists r_1}(f_{\exists s}(x)) \leq x$. \end{itemize} \label{rel-to-bao} \end{corollary} \noindent Similar results hold also for $n$-ary relations.
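\noindent Before stating the $n$-ary versions of these results, the following small Python sketch (purely illustrative, and not part of the formal development) checks item (2) of Corollary~\ref{rel-to-bao} by brute force on one concrete finite relational space; the relations and the helper functions \texttt{compose} and \texttt{f\_exists} are hypothetical and chosen only for this example.
\begin{verbatim}
from itertools import chain, combinations

# A small relational space (D, {r1, s, r2}); all data are illustrative.
D = {0, 1, 2, 3}
r1 = {(0, 1), (2, 3)}
s = {(1, 2), (3, 0)}
r2 = {(0, 2), (2, 0)}            # chosen so that r1 o s is contained in r2

def compose(r, t):
    # (x, y) in r o t  iff  there is some z with (x, z) in r and (z, y) in t
    return {(x, y) for (x, z1) in r for (z2, y) in t if z1 == z2}

def f_exists(r, U):
    # f_{exists r}(U) = { x | there is y in U with (x, y) in r }
    return {x for (x, y) in r if y in U}

def subsets(X):
    xs = list(X)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

# Item (2): if r1 o s is a subset of r2, then for every U,
# f_{exists r1}(f_{exists s}(U)) is a subset of f_{exists r2}(U).
assert compose(r1, s) <= r2
for U in subsets(D):
    U = set(U)
    assert f_exists(r1, f_exists(s, U)) <= f_exists(r2, U)
print("Corollary, item (2), verified on this relational space.")
\end{verbatim}
\noindent Items (1) and (3) can be checked in the same brute-force manner by adapting the assertion accordingly.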
\begin{theorem} Let ${\bf D} = (D, \{ r_D \}_{r \in \Sigma})$ be a relational space and let ${\cal P}({\bf D}) $ be the Boolean algebra with operators $({\cal P}(D), \cup, \cap, \emptyset, D, \{ f_{\exists r}\}_{r \in \Sigma}) \in BAO(\Sigma)$, where for every $r \in \Sigma, f_{\exists r}$ is defined as in formula~(\ref{1}) above. Let $r_1, s_1, \dots, s_n, r_2 \in \Sigma$ such that $r_1$ is an $n+1$-ary, $s_i$ are $n_i +1$-ary, and $r_2$ an $m+1$-ary relation. Let $C_i, C^j_k \subseteq D$ and let $c_i, c^j_k$ be constant symbols which are interpreted as $C_i, C^j_k$ respectively. The following hold: \begin{itemize} \item[(1)] If $n = m$ and ${\bf D} \models \forall x, {\overline y} (\bigwedge_i y_i \in C_i \wedge r_1(x, {\overline y}) \rightarrow r_2(x, {\overline y}))$ then \[{\cal P}({\bf D}) \models \forall x_1, \dots, x_n ~~(\bigwedge_i x_i \leq c_i \rightarrow f_{\exists r_1}(x_1, \dots, x_n) \leq f_{\exists r_2}(x_1, \dots, x_n)).\] \item[(2)] If ${\bf D} \models \forall x, {\overline y_1}, {\dots},{\overline y_n} (\bigwedge_{i = 1}^n \bigwedge_{j = 1}^{n_i} y^j_i {\in} C^j_i \wedge r_1 {\circ} (s_1, \dots, s_n)(x, {\overline y_1}, {\dots},{\overline y_n}) \rightarrow r_2(x, {\overline y_1}, {\dots},{\overline y_n}))$ then \[{\cal P}({\bf D}) \models \forall {\overline x^1}, \dots, {\overline x^n} ~~ \bigwedge_{i,j} x^i_j \leq c^i_j \rightarrow f_{\exists r_1}(f_{\exists s_1}({\overline x^1}), \dots, f_{\exists s_n}({\overline x^n})) \leq f_{\exists r_2}({\overline x^1}, \dots, {\overline x^n}).\] \item[(3)] If $s_i$ are binary and ${\bf D} \models \forall x, y (y \in C \wedge r_1 \circ (s_1, \dots, s_n)(x, y) \rightarrow x = y)$ then \[{\cal P}({\bf D}) \models \forall x ~~ x \leq c \rightarrow f_{\exists r_1}(f_{\exists s_1}(x), \dots, f_{\exists s_n}(x)) \leq x.\] \end{itemize} \label{rel-to-bao-n} \end{theorem} \noindent {\it Proof\/}:\ Analogous to the proof of Theorem~\ref{rel-to-bao-guards}. \hspace*{\fill}$\Box$ \noindent If all $C^i_j$ are equal to $D$ all guards disappear and we obtain an $n$-ary analogon of Corollary~\ref{rel-to-bao}. \section{Algebraic semantics for description logics} \label{dl-alg-sem} A translation of concept descriptions into terms in a signature naturally associated with the set of constructors can be defined as follows. For every role name $r$, we introduce unary function symbols, $f_{\exists r}$ and $f_{\forall r}$. The translation is inductively defined by: \begin{itemize} \item ${\overline C} = C$ for every concept name $C$; \item $\overline{\neg C} = \neg {\overline C}$; $\quad \overline{C_1 \sqcap C_2} = {\overline C_1} \wedge {\overline C_2}$,$\quad \overline{C_1 \sqcup C_2} = {\overline C_1} \vee {\overline C_2}$; \item $\overline{\exists r. C} = f_{\exists r}({\overline C})$, $\quad \overline{\forall r. C} = f_{\forall r}({\overline C})$. \end{itemize} There exists a one-to-one correspondence between interpretations ${\mathcal I} = (D, \cdot^{\mathcal I})$ and Boolean algebras of sets with additional operators, $({\mathcal P}(D), \cup, \cap, \neg, \emptyset, D, \{ f_{\exists r}, f_{\forall r}\}_{r \in N_R})$, together with valuations $v : N_C \rightarrow {\mathcal P}(D)$, where $f_{\exists r}, f_{\forall r}$ are defined, for every $U \subseteq D$, by: \begin{eqnarray*} f_{\exists r}(U) & = & \{ x \mid \exists y ((x,y) \in r^{\mathcal I} \mbox{ and } y \in U)\} \\ f_{\forall r}(U) & = & \{ x \mid \forall y ((x,y) \in r^{\mathcal I} \Rightarrow y \in U)\}.
\end{eqnarray*} It is easy to see that, with these definitions: \begin{itemize} \item $f_{\exists r}$ is a join-hemimorphism, i.e.\ $f_{\exists r}(x \vee y) = f_{\exists r}(x) \vee f_{\exists r}(y)$, $f_{\exists r}(0) = 0$; \item $f_{\forall r}$ is a meet-hemimorphism, i.e.\ $f_{\forall r}(x \wedge y) {=} f_{\forall r}(x) \wedge f_{\forall r}(y)$, $f_{\forall r}(1) {=} 1$; \item $f_{\forall r}(x) = \neg f_{\exists r}(\neg x)$ for every $x \in B$. \end{itemize} Let $v : N_C \rightarrow {\mathcal P}(D)$ with $v(A) = A^{\mathcal I}$ for all $A \in N_C$, and let ${\overline v}$ be the (unique) homomorphic extension of $v$ to terms. Let $C$ be a concept description and $\overline{C}$ be its associated term. Then $C^{\mathcal I} = {\overline v}({\overline C})$ (denoted by ${\overline C}^{\mathcal I}$). \noindent The TBox subsumption problem for the description logic ${\cal ALC}$ (which was defined in Section~\ref{dl-gen}) can be expressed as uniform word problem for Boolean algebras with suitable operators. \begin{theorem} If ${\mathcal T}$ is an ${\mathcal A}{\mathcal L}{\mathcal C}$ TBox consisting of general concept inclusions between concept terms formed from concept names $N_C = \{ C_1, \dots, C_n \}$, and $D_1, D_2$ are concept descriptions, the following are equivalent: \begin{itemize} \item[(1)] $D_1 {\sqsubseteq}_{\mathcal T} D_2$. \item[(2)] ${\bf {\mathcal P}(D)} \models \forall C_1 ... C_n \left(\left( \bigwedge_{C {\sqsubseteq} D \in {\mathcal T}} \overline{C} {\leq} \overline{D} \right) \rightarrow \overline{D_1} {\leq} \overline{D_2}\right)$ for all in\-ter\-pre\-ta\-tions ${\mathcal I} = (D, \cdot^{\mathcal I})$, \\ where ${\bf {\mathcal P}(D)} = ({\mathcal P}(D), \cup, \cap, \neg, \emptyset, D, \{ f_{\exists r}, f_{\forall r}\}_{r \in N_R})$. \item[(3)] ${\sf BAO}_{N_R} {\models} \forall C_1 ... C_n \left(\left( \bigwedge_{C {\sqsubseteq} D \in {\mathcal T}} \overline{C} {\leq} \overline{D} \right) \rightarrow \overline{D_1} {\leq} \overline{D_2}\right).$ \end{itemize} \label{bao} \end{theorem} \noindent {\it Proof\/}:\ The equivalence of (1) and (2) follows from the definition of $D_1 \sqsubseteq_{\mathcal T} D_2$. $(3) \Rightarrow (2)$ is immediate. $(2) \Rightarrow (3)$ follows from the fact that every algebra in ${\sf BAO}_{N_R}$ homomorphically embeds into a Boolean algebra of sets, its canonical extension.\hspace*{\fill}$\Box$ \noindent An analogon of Theorem~\ref{bao} can be used for more general description logics in which in addition to the TBoxes also properties of roles need to be taken into account. We consider properties $R$ of roles which can be expressed by sets $R_a$ of clauses at an algebraic level. The main restriction we impose is that the sets of clauses $R_a$ are preserved when taking canonical extensions of Boolean algebras. We denote by $BAO_{N_R}(R_a)$ the family of all algebras in $BAO_{N_R}$ which satisfy the axioms in $R_a$. 
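\noindent Before turning to this more general setting, the following small Python sketch makes the algebraic reading of TBox subsumption concrete: it checks the quasi-identity in condition (2) of Theorem~\ref{bao} on a single, fixed finite interpretation (for one interpretation this is of course only a necessary condition for subsumption). The interpretation, the TBox $\{ C_1 \sqsubseteq C_2 \}$ and the concept descriptions $D_1 = \exists r. C_1$ and $D_2 = \exists r. C_2$ are hypothetical and serve only as an illustration.
\begin{verbatim}
from itertools import chain, combinations, product

# One fixed finite interpretation I = (D, .^I); all data are illustrative.
D = frozenset({0, 1, 2})
r = {(0, 1), (1, 2)}                 # interpretation of the role name r

def f_exists(rel, U):
    # f_{exists r}(U) = { x | there is y with (x, y) in rel and y in U }
    return frozenset(x for (x, y) in rel if y in U)

def f_forall(rel, U):
    # f_{forall r}(U) = complement of f_{exists r}(complement of U)
    return D - f_exists(rel, D - U)

def subsets(X):
    xs = list(X)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))]

# Translations of D1 = exists r. C1 and D2 = exists r. C2.
def D1(C1, C2): return f_exists(r, C1)
def D2(C1, C2): return f_exists(r, C2)

# Condition (2) of the theorem, restricted to this single interpretation:
# for all valuations of C1, C2 in P(D), if C1 <= C2 then D1 <= D2.
holds = all(D1(C1, C2) <= D2(C1, C2)
            for C1, C2 in product(subsets(D), repeat=2)
            if C1 <= C2)
print("Quasi-identity holds in P(D) for this interpretation:", holds)
\end{verbatim}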
\begin{theorem} Let ${\mathcal T}$ be an ${\mathcal A}{\mathcal L}{\mathcal C}$ TBox consisting of general concept inclusions between concept terms formed from concept names $N_C = \{ C_1, \dots, C_n \}$, and let $R$ be a family of general (e.g.\ guarded) {\em role inclusions} with the additional property that there exists a set $R_a$ of clauses in the signature of $BAO_{N_R}$ such that: \begin{itemize} \item[(i)] For each in\-ter\-pre\-ta\-tion ${\mathcal I} = (D, \cdot^{\mathcal I})$, which satisfies the constraints on roles in $R$, we have that ${\bf {\mathcal P}(D)} \models R_a,$ where ${\bf {\mathcal P}(D)}$ stands for $({\mathcal P}(D), \cup, \cap, \neg, \emptyset, D, \{ f_{\exists r}, f_{\forall r}\}_{r \in N_R})$. \item[(ii)] Every $B \in BAO_{N_R}(R_a)$ embeds into an algebra of sets of the form ${\bf {\cal P}(D)}$ (defined as above), where $(D, \{ r \}_{r \in N_R})$ satisfies $R$. \end{itemize} Then for any concept descriptions $D_1, D_2$ the following are equivalent: \begin{itemize} \item[(1)] $D_1 {\sqsubseteq}_{{\mathcal T} \cup R} D_2$. \item[(2)] ${\bf {\cal P}(D)} \models \forall C_1 ... C_n \left(\left( \bigwedge_{C {\sqsubseteq} D \in {\mathcal T}} \overline{C} {\leq} \overline{D} \right) \rightarrow \overline{D_1} {\leq} \overline{D_2}\right)$ for all in\-ter\-pre\-ta\-tions ${\mathcal I} = (D, \cdot^{\mathcal I})$ \\ which are models of $R$. \item[(3)] ${\sf BAO}_{N_R}(R_a) {\models} \forall C_1 ... C_n \left(\left( \bigwedge_{C {\sqsubseteq} D \in {\mathcal T}} \overline{C} {\leq} \overline{D} \right) \rightarrow \overline{D_1} {\leq} \overline{D_2}\right).$ \end{itemize} \label{bao-ext-ax} \end{theorem} \noindent {\it Proof\/}:\ (1) $\Leftrightarrow$ (2) Let ${\mathcal I} = (D, \cdot^{\mathcal I})$ be an interpretation which is a model of $R$. Let $v : N_C \rightarrow {\cal P}(D)$ be a valuation with the property that ${\overline v}(C) \subseteq {\overline v}(D)$ for all $C {\sqsubseteq} D \in {\mathcal T}$. The interpretation which coincides with ${\mathcal I}$ on the roles and interprets every $C_i$ as $v(C_i)$ is a model of ${\mathcal T} \cup R$; hence, since $D_1 {\sqsubseteq}_{{\mathcal T} \cup R} D_2$, it follows that ${\overline v}(D_1) \subseteq {\overline v}(D_2)$. Conversely, if (2) holds and ${\mathcal I}$ is a model of ${\mathcal T} \cup R$, then the valuation defined by $v(C_i) = C_i^{\mathcal I}$ satisfies the premise of the implication in (2), hence $D_1^{\mathcal I} \subseteq D_2^{\mathcal I}$. (3) $\Rightarrow$ (2) follows from the fact that, by assumption (i), ${\bf {\mathcal P}(D)} = ({\mathcal P}(D), \cup, \cap, \neg, \emptyset, D, \{ f_{\exists r}, f_{\forall r}\}_{r \in N_R}) \in {\sf BAO}_{N_R}(R_a)$. (2) $\Rightarrow$ (3) follows from the fact that, by Assumption (ii), for every Boolean algebra with operators $B \in {\sf BAO}_{N_R}(R_a)$ there exists a relational space $D$ which satisfies $R$, such that $B$ homomorphically embeds into a Boolean algebra of sets of the form ${\cal P}(D)$ which satisfies the conditions in (2). Hence, ${\cal P}(D) \models \forall C_1 ... C_n \left(\left( \bigwedge_{C {\sqsubseteq} D \in {\mathcal T}} \overline{C} {\leq} \overline{D} \right) \rightarrow \overline{D_1} {\leq} \overline{D_2}\right)$. As $B$ is isomorphic to a subalgebra of ${\cal P}(D)$, it follows that $B \models \forall C_1 ... C_n \left(\left( \bigwedge_{C {\sqsubseteq} D \in {\mathcal T}} \overline{C} {\leq} \overline{D} \right) \rightarrow \overline{D_1} {\leq} \overline{D_2}\right)$. \hspace*{\fill}$\Box$ \begin{example} Assume that $R$ consists of role inclusions of the form $r \sqsubseteq s$ and $r_1 \circ s \sqsubseteq r_2$ and $r_1 \circ s \sqsubseteq id$ and $R_a$ consists of the corresponding axioms $\forall x (f_{\exists r}(x) \leq f_{\exists s}(x))$, $\forall x (f_{\exists r_1}(f_{\exists s}(x)) \leq f_{\exists r_2}(x))$, and $\forall x (f_{\exists r_1}(f_{\exists s}(x)) \leq x)$.
Then, by Corollaries~\ref{rel-to-bao} and~\ref{bao-rel} premises (i) and (ii) of Theorem~\ref{bao-ext-ax} hold, hence the CBox subsumption problem can be expressed as a uniform word problem in ${\sf BAO}_{N_R}(R_a)$. \end{example} \subsection{Algebraic semantics for $\mathcal{EL}$, $\mathcal{EL}^+$ and extensions thereof} \label{alg-sem-el} In \cite{Sofronie-amai-07} we studied the link between TBox subsumption in $\mathcal{EL}$ and uniform word problems in the corresponding classes of semilattices with monotone functions. We now show that these results naturally extend to the description logic $\mathcal{EL}^+$. We will consider the following classes of algebras: \begin{itemize} \item ${\sf BAO}^{\exists}_{N_R}$: the class of boolean algebras with operators $(B, \vee, \wedge, \neg, 0, 1, \{ f_{\exists r} \}_{r \in N_R})$, such that $f_{\exists r}$ is a unary join-hemimorphism; \item ${\sf DLO}^{\exists}_{N_R}$: the class of bounded distributive lattices with operators $(L, \vee, \wedge, 0, 1, \{ f_{\exists r} \}_{r \in N_R})$, such that $f_{\exists r}$ is a unary join-hemimorphism; \item ${\sf SLO}^{\exists}_{N_R}$: the class of all $\wedge$-semilattices with operators $(S, \wedge, 0, 1, \{ f_{\exists R} \}_{R \in N_R})$, such that $f_{\exists R}$ is a monotone unary function and $f_{\exists R}(0) = 0$. \footnote{For the sake of simplicity, in this paper we assume that the description logics $\mathcal{EL}$ and $\mathcal{EL}^+$ contain the additional constructors $\perp, \top$, which will be interpreted as $0$ and $1$. Similar considerations can be used to show that the algebraic semantics for variants of $\mathcal{EL}$ and $\mathcal{EL}^+$ having only $\top$ (or $\perp$) is given by semilattices with $1$ (resp.\ 0).} \end{itemize} \subsection{Algebraic semantics for $\mathcal{EL}^+$} \label{sect:slo-el+} In ${\cal EL}^+$ the following types of role inclusions are considered: $$ r \sqsubseteq s \quad \quad \text{ and } \quad \quad r_1 \circ s \sqsubseteq r_2.$$ In \cite{Baader-2008} it is proved that subsumption w.r.t.\ $GCI$'s in the extension $\mathcal{ELI}$ of $\mathcal{EL}$ with inverse roles is ExpTime complete. It is also proved that subsumption w.r.t.\ general TBoxes in the extension $\mathcal{EL}^{\sf sym}$ of $\mathcal{EL}$ with symmetric roles is ExpTime complete. We will now start by considering also CBoxes containing role inclusion axioms which describe weaker, left- and right-inverse properties of roles, of the form: $r \circ s \subseteq id.$ \noindent Let $RI$ be a set of axioms of the form $r \sqsubseteq s$, $r_1 \circ r_2 \sqsubseteq r$, and $r_1 \circ r_2 \sqsubseteq id$ with $r_1, r_2, r \in N_R. $ We associate with $RI$ the following set $RI_a$ of axioms: \begin{eqnarray*} RI_a & = & \{\forall x ~~ f_{\exists r}(x) \leq f_{\exists s}(x) \mid r \sqsubseteq s \in RI\} \cup \\ & & \{ \forall x ~~(f_{\exists r_2} \circ f_{\exists r_1})(x) \leq f_{\exists r}(x) \mid r_1 \circ r_2 \sqsubseteq r \in RI \} \cup \\ & & \{ \forall x ~~(f_{\exists r_2} \circ f_{\exists r_1})(x) \leq x \mid r_1 \circ r_2 \sqsubseteq id \in RI \}. \end{eqnarray*} Let ${\sf BAO}^{\exists}_{N_R}(RI)$ (resp.\ ${\sf DLO}^{\exists}_{N_R}(RI)$, ${\sf SLO}^{\exists}_{N_R}(RI)$) be the subclass of ${\sf BAO}^{\exists}_{N_R}$ (resp.\ ${\sf DLO}^{\exists}_{N_R}$, ${\sf SLO}^{\exists}_{N_R}$) consisting of those algebras which satisfy $RI_a$. \begin{lemma} Let ${\mathcal I} = (D, \cdot^{\mathcal I})$ be a model of an $\mathcal{EL}^+$ CBox ${\mathcal C} = GCI \cup RI$. 
Then the algebra ${\cal P}({\bf D})_{|(\wedge, 0, 1)} = ({\mathcal P}(D), \cap, \emptyset, D, \{ f_{\exists r} \}_{r \in N_R})$ is a semilattice with operators in ${\sf SLO}^{\exists}_{N_R}(RI)$. \end{lemma} \noindent {\it Proof\/}:\ Clearly, $({\mathcal P}(D), \cap, \emptyset, D, \{ f_{\exists r} \}_{r \in N_R}) \in {\sf SLO}^{\exists}_{N_R}$. The proof of the second part uses exactly the same arguments as the proof of Theorem~\ref{rel-to-bao-guards} and Corollary~\ref{rel-to-bao}. \hspace*{\fill}$\Box$ \noindent We will now show that every algebra in ${\sf SLO}^{\exists}_{N_R}(RI)$ embeds into (the bounded semilattice reduct of) an algebra in ${\sf BAO}^{\exists}_{N_R}(RI)$. We start with a more general lemma, which will be important also for proving the locality results in Section~\ref{complexity}. \begin{lemma} For every structure ${\cal S} = (S, \wedge, 0, 1, \{ f_S \}_{f \in \Sigma})$ in which $f_S$ are partial functions, if properties (i), (ii) and (iii) below hold, then ${\cal S}$ embeds into a semilattice with operators in ${\sf SLO}^{\exists}_{N_R}(RI)$. \begin{itemize} \item[(i)] $(S, \wedge, 0, 1)$ is a bounded semilattice; $\leq$ is the partial order on $S$ defined by $x {\leq} y$ iff $x {\wedge} y = x$. \item[(ii)] For every $f \in \Sigma$ with arity $n$, $f_S$ is a partial $n$-ary function on $S$ which satisfies the monotonicity axiom ${\sf Mon}(f)$ whenever all terms are defined. $${\sf Mon}(f) ~~~ \forall x, y (x \leq y \rightarrow f(x) \leq f(y))$$ \item[(iii)] There exists a set $RI^{\sf flat}$ of axioms of the form\footnote{These axioms are logically equivalent to those discussed before; the reason for preferring the flat version will become apparent in Section~\ref{complexity}.}: \begin{eqnarray*} \forall x ~~~& & g(x) \leq h(x) \\ \forall x, y ~~~& x \leq g(y) \rightarrow & f(x) \leq h(y) \\ \forall x, y ~~~& x \leq g(y) \rightarrow & f(x) \leq y \end{eqnarray*} such that: \begin{itemize} \item if $g, h$ appear in a rule as above and $g_S(s)$ is defined then also $h_S(s)$ is defined; \item for every $\beta : \{ x, y \} \rightarrow S$, and every axiom $D \in RI^{\sf flat}$ if all terms in ${\overline \beta}(D)$ are defined, then ${\overline \beta}(D)$ is true in $S$ (where ${\overline \beta}$ is the canonical extension of $\beta$ to formulae). \end{itemize} \end{itemize} \label{local-alg-el+} \end{lemma} \noindent {\it Proof\/}:\ Let ${\bf S} = (S, \wedge, 0, 1, \{ f_S \}_{f \in \Sigma})$ be a 0,1 semilattice, and let $f_S, g_S : S \rightarrow S$ be partially defined functions which satisfy the conditions above. Consider the lattice of all order-ideals of $S$, ${\cal OI}({\bf S}) = ({\cal OI}(S), \cap, \cup, \{ 0 \}, S, \{ {\overline f}_S \}_{f \in \Sigma})$, where join is set union, meet is set intersection, and the additional operators in $\Sigma$ are defined, for every order ideal $U$ of $S$, by $${\overline f}_S(U) = {\downarrow} \{ f_S(u) \mid u \in U, f_S(u) \mbox{ defined} \}.$$ Note that ${\overline f}_S(\{ 0 \}) = \{ 0 \}$ and ${\overline f}_S(U_1 \cup U_2) ~=~ {\downarrow} f_S(U_1 \cup U_2) = {\downarrow}(f_S(U_1) \cup f_S(U_2)) = {\downarrow} f_S(U_1) \cup {\downarrow} f_S(U_2)$. Thus, ${\cal OI}({\bf S}) \in {\sf DLO}^{\exists}_{N_R}$.
\footnote{A similar construction can be made starting from $\wedge$-semilattices with monotone operators which have only 1 (resp.\ 0) or neither 0 nor 1.} Moreover, $\eta : {\bf S} \rightarrow {\cal OI}({\bf S})$ defined by $\eta(x) := {\downarrow} x$ is an injective homomorphism w.r.t.\ the bounded semilattice operations and $\eta(f_S(x)) = {\downarrow} f_S(x) = {\overline f}_S({\downarrow} x)$. We prove that ${\overline f}_S, {\overline g}_S, {\overline h}_S$ satisfy the axioms in $RI^{\sf flat}$. Consider first the axiom: \begin{eqnarray} \forall x, y ~ & (y \leq g(x) \rightarrow & f(y) \leq x) \label{ax1} \end{eqnarray} Let $U, V \in {\overline S}$ be such that $U \subseteq{\overline g}(V)$. Let $x \in {\overline f}_S(U)$. Then there exists $u \in U$ such that $f_S(u)$ is defined and $x \leq f_S(u)$. Since $U \subseteq g(V)$, we know that there exists $v \in V$ with $g_S(v)$ defined and $u \leq g(v)$. Since $S$ satisfies Axiom~(\ref{ax1}), and $g_S(v), f_S(u)$ are defined and $u \leq g_S(v)$ it follows that $f_S(u) \leq v.$ Thus, $x \leq f_S(u) \leq v \in V$, so $x \in V$. This shows that for all $U, V \in {\overline S}$: $$ U \leq {\overline g}(V) \rightarrow {\overline f}(U) \subseteq V.$$ We now check preservation of the axioms of the form: \begin{eqnarray} \forall x ~ & & g(x) \leq h(x) \label{ax2} \\ \forall x, y ~ & (y \leq g(x) \rightarrow & f(y) \leq h(x)) \label{ax3} \end{eqnarray} We assume that $S$ has the property that $h_S(a)$ is defined whenever $g_S(a)$ is defined. We have to show that if $f_S, g_S, h_S$ are monotone whenever defined and satisfy one of the axioms above (say (\ref{ax3}); the case of Axiom~(\ref{ax2}) is similar) whenever defined then ${\overline f}_S, {\overline g}_S$ and ${\overline h}_S$ satisfy (\ref{ax3}). Let $U, V \in {\overline S}$ be such that $U \subseteq{\overline g}_S(V)$. Let $x \in {\overline f}_S(U)$. Then there exists $u \in U$ such that $f_S(u)$ is defined and $x \leq f_S(u)$. Since $U \subseteq {\overline g}_S(V)$, we know that there exists $v \in V$ with $g_S(v)$ defined and $u \leq g_S(v)$. Due to the first condition in (iii), $h_S(v)$ must be defined as well. Since $S$ satisfies Axiom~(\ref{ax3}) and $g_S(v), f_S(u), h_S(v)$ are defined and $u \leq g_S(v)$ it follows that $f_S(u) \leq h_S(v).$ Thus, there exists $v \in V$ such that $x \leq f_S(u) \leq h_S(v)$, so $x \in {\overline h}_S(V)$. This shows that for all $U, V \in {\overline S}$: $$ U \leq {\overline g}(V) \rightarrow {\overline f}(U) \subseteq {\overline h}(V).$$ \hspace*{\fill}$\Box$ \begin{lemma} Every ${\bf S} \in {\sf SLO}^{\exists}_{N_R}(RI)$ embeds into (the bounded semilattice reduct of) a lattice in ${\sf DLO}^{\exists}_{N_R}(RI)$. Every lattice in ${\sf DLO}^{\exists}_{N_R}(RI)$ embeds into (the bounded lattice reduct of) an algebra in ${\sf BAO}^{\exists}_{N_R}(RI)$. \label{embeddings-slo-dlo-bao} \end{lemma} \noindent {\it Proof\/}:\ The first part follows from Lemma~\ref{local-alg-el+}. The second statement is a consequence of Priestley duality for distributive lattices. Let ${\bf L} \in {\sf DLO}^{\exists}_{N_R}(RI)$. Let ${\mathcal F}_p$ be the set of prime filters of $L$, and $B({\bf L}) = ({\mathcal P}({\mathcal F}_p), \cup, \cap, \{ {\overline f}_{\exists r} \}_{r \in N_r})$, where for $r \in R$, ${\overline f}_{\exists r}$ is defined by $$ {\overline f}_{\exists r}(U) = \{ F \in {\mathcal F}_p \mid \exists G \in U: f_{\exists r}(G) \subseteq F \}.$$ Let $i : {\bf L} \rightarrow B({\bf L})$ be defined by $i(x) = \{ F \in {\mathcal F}_p \mid x \in F \}$. 
Obviously, $i$ is a lattice homomorphism. We show that $i(f_{\exists r}(x)) = {\overline f}_{\exists r}(i(x))$. \begin{eqnarray*} {\overline f}_{\exists r}(i(x)) & = & \{ F \in {\mathcal F}_p \mid \exists G \in i(x): f_{\exists r}(G) \subseteq F \} \\ & = & \{ F \in {\mathcal F}_p \mid \exists G: x \in G \text{ and } f_{\exists r}(G) \subseteq F \} \\ & \subseteq & \{ F \in {\mathcal F}_p \mid f_{\exists r}(x) \in F \} = i(f_{\exists r}(x)). \end{eqnarray*} To prove the converse inclusion, let $F \in i(f_{\exists r}(x))$. Then $F \in {\mathcal F}_p$ and $f_{\exists r}(x) \in F$. Let $G = f_{\exists r}^{-1}(F)$. As $F$ is a prime filter, and $f_{\exists r}$ is a join-hemimorphism, $G$ is a prime filter with $x \in G$ and $f_{\exists r}(G) \subseteq F$, so $F \in {\overline f}_{\exists r}(i(x))$. Finally, we show that $B({\bf L})$ satisfies the axioms in $RI_a$. Let $U \in B({\bf L})$. By definition, \begin{eqnarray*} {\overline f}_{\exists r_1}(U) & = & \{ F \in {\mathcal F}_p \mid \exists G_1 \in U: f_{\exists r_1}(G_1) \subseteq F \}, \\ {\overline f}_{\exists r_2}({\overline f}_{\exists r_1}(U)) & = & \{ F \in {\mathcal F}_p \mid \exists G_1 \in {\overline f}_{\exists r_1}(U): f_{\exists r_2}(G_1) \subseteq F \} \\ & = & \{ F \in {\mathcal F}_p \mid \exists G_1, \exists G_2 \in U: f_{\exists r_1}(G_2) \subseteq G_1 \text{ and } f_{\exists r_2}(G_1) \subseteq F \} \\ & \subseteq & \{ F \in {\mathcal F}_p \mid \exists G_2 \in U: f_{\exists r_2}(f_{\exists r_1}(G_2)) \subseteq F \}. \end{eqnarray*} Assume that $r_1 \sqsubseteq r \in RI$. We know that ${\bf L} \models \forall x ~ (f_{\exists r_1}(x) \leq f_{\exists r}(x))$. Let $F \in {\overline f}_{\exists r_1}(U)$. Then $f_{\exists r_1}(G_1) \subseteq F$ for some $G_1 \in U$, so also $f_{\exists r}(G_1) \subseteq F$. Hence, ${\overline f}_{\exists r_1}(U) \subseteq {\overline f}_{\exists r}(U)$. Similarly we can prove that if $r_1 \circ r_2 \sqsubseteq r \in RI$ then ${\overline f}_{\exists r_2}({\overline f}_{\exists r_1}(U)) \subseteq {\overline f}_{\exists r}(U)$ and that if $r_1 \circ r_2 \sqsubseteq id \in RI$ then ${\overline f}_{\exists r_2}({\overline f}_{\exists r_1}(U)) \subseteq U$. \hspace*{\fill}$\Box$ \begin{theorem} If the only concept constructors are intersection and existential restriction, then for all concept descriptions $D_1, D_2$ and every $\mathcal{EL}^+$ CBox ${\mathcal C} {=} GCI {\cup} RI$, with concept names $N_C = \{ C_1, \dots, C_n \}$ the following are equivalent: \begin{itemize} \item[(1)] $D_1 {\sqsubseteq}_{\mathcal C} D_2$. \item[(2)] ${\sf SLO}^{\exists}_{N_R}(RI) \models \forall C_1 \dots C_n \left(\left( \bigwedge_{C {\sqsubseteq} D \in GCI} \overline{C} {\leq} \overline{D} \right) \rightarrow \overline{D_1} {\leq} \overline{D_2}\right).$ \end{itemize} \label{el-slat} \end{theorem} \noindent {\it Proof\/}:\ We know that $D_1 \sqsubseteq_{\mathcal C} D_2$ iff $D_1^{\mathcal I} \subseteq D_2^{\mathcal I}$ for every model ${\mathcal I}$ of the CBox ${\mathcal C}$. Assume first that (2) holds. Let ${\mathcal I} = (D, \cdot^{\mathcal I})$ be an interpretation that satisfies ${\mathcal C}$.
Then ${\cal P}({\bf D})_{|\wedge} = ({\mathcal P}(D), \cap, \emptyset, D, \{ f_{\exists r} \}_{r \in N_R}) \in {\sf SLO}^{\exists}_{N_R}(RI)$, hence ${\cal P}({\bf D})_{|\wedge} \models \left( \bigwedge_{C \sqsubseteq D \in GCI} \overline{C} \leq \overline{D} \right) \rightarrow \overline{D_1} \leq \overline{D_2}.$ As ${\mathcal I}$ is a model of $GCI$, $\overline{C}^{\mathcal I} \subseteq \overline{D}^{\mathcal I}$ for all $C \sqsubseteq D \in GCI$, so $D_1^{\mathcal I} \, {=} \, \overline{D_1}^{\mathcal I} \, {\subseteq} \, \overline{D_2}^{\mathcal I} \, {=} \, D_2^{\mathcal I}.$ To prove $(1) \Rightarrow (2)$ note first that in this case the premises of Thm.\ \ref{bao-ext-ax} are fulfilled. By Thm.\ \ref{bao-ext-ax}, if $D_1 \sqsubseteq_{\mathcal C} D_2$ then ${\sf BAO}_{N_R}(RI) \models \left( \bigwedge_{C \sqsubseteq D \in GCI} {\overline C} \leq {\overline D} \right) \rightarrow \overline{D_1} \leq \overline{D_2}.$ Let ${\bf S} \in {\sf SLO}^{\exists}_{N_R}(RI)$. By Lemma~\ref{embeddings-slo-dlo-bao}, ${\bf S}$ embeds into an algebra in ${\sf BAO}^{\exists}_{N_R}$ which satisfies $RI_a$. Therefore, ${\bf S} \models \left( \bigwedge_{C \sqsubseteq D \in GCI} \overline{C} \leq \overline{D} \right) \rightarrow \overline{D_1} \leq \overline{D_2}.$ \hspace*{\fill}$\Box$ We will show that the word problem for the class of algebras ${\sf SLO}^{\exists}_{N_R}(RI)$ is decidable in PTIME. For this we will prove that ${\sf SLO}^{\exists}_{N_R}(RI)$ has a ``local'' presentation. The general locality definitions, as well as methods for recognizing local presentations, are given in Sect.~\ref{locality}. The application to the class of models for $\mathcal{EL}$ and $\mathcal{EL}^+$ is given in Sect.~\ref{complexity}. Before doing this, we present some additional types of constraints on the roles which can be handled similarly. This will allow us to obtain a new tractable extension of $\mathcal{EL}^+$. \subsection{Guarded role inclusions} In applications it may be interesting to consider role inclusions guarded by membership in a certain concept, i.e.\ role inclusions of the form: \begin{eqnarray} \forall x, y & (y \in C \wedge r(x, y) & \rightarrow r'(x, y)) \label{c1}\\ \forall x, y & (y \in C \wedge r \circ s (x, y) & \rightarrow r'(x, y)) \label{c2}\\ \forall x, y & (y \in C \wedge r \circ s (x, y) & \rightarrow x = y). \label{c3} \end{eqnarray} The corresponding axioms at the algebra level we consider are: \begin{eqnarray} \forall x & (x \leq C & \rightarrow f_r(x) \leq f_{r'}(x)) \label{d1}\\ \forall x & (x \leq C & \rightarrow f_r(f_s(x)) \leq f_{r'}(x)) \label{d2}\\ \forall x & (x \leq C & \rightarrow f_r(f_s(x)) \leq x). \label{d3} \end{eqnarray} \begin{theorem} Assume that the only concept constructors are intersection and existential restriction. Let ${\mathcal C} {=} GCI {\cup} RI {\cup} GRI$ be a CBox containing a set $GCI$ of general concept inclusions, a set $RI$ of role inclusions of the type considered in Sect.~\ref{sect:slo-el+} and a set $GRI$ of guarded role inclusions of the form~(\ref{c1})--(\ref{c3}), with concept names $N_C = \{ C_1, \dots, C_n \}$. Then for all concept descriptions $D_1, D_2$ the following are equivalent: \begin{itemize} \item[(1)] $D_1 {\sqsubseteq}_{\mathcal C} D_2$. \item[(2)] $GRI(C_1, \dots, C_n) \wedge \left( \bigwedge_{C {\sqsubseteq} D \in GCI} \overline{C} {\leq} \overline{D} \right) \wedge \overline{D_1} {\not\leq} \overline{D_2}$ is unsatisfiable w.r.t.\ ${\sf BAO}^{\exists}_{N_R}(RI)$.
\item[(3)] $GRI(C_1, \dots, C_n) \wedge \left( \bigwedge_{C {\sqsubseteq} D \in GCI} \overline{C} {\leq} \overline{D} \right) \wedge \overline{D_1} {\not\leq} \overline{D_2}$ is unsatisfiable w.r.t.\ ${\sf SLO}^{\exists}_{N_R}(RI)$. \end{itemize} \end{theorem} \noindent {\it Proof\/}:\ The proof is analogous to that of Theorem~\ref{el-slat} and uses the results in Theorems~\ref{bao-rel-guards} and~\ref{rel-to-bao-guards}, as well as an analogon of Lemma~\ref{embeddings-slo-dlo-bao}. \hspace*{\fill}$\Box$ \subsection{Extensions of $\mathcal{EL}^+$ with $n$-ary roles and concrete domains} \label{extensions-of-el-n-ary} We now present a possibility of extending $\mathcal{EL}^+$ with concrete domains, which is a natural generalization of the extension in Section~\ref{alg-sem-el}. This extension is different from the extensions with concrete domains and those with $n$-ary quantifiers studied in the description logic literature (cf.\ e.g.\ \cite{Baader-ijcai-2005,Baader2005-ki}). Later, in Section~\ref{el++} we will present another extension (the one used in $\mathcal{EL}^{++}$). \begin{figure} \caption{Constructors for $\mathcal{EL}$ with $n$-ary roles} \label{table:el-ext-constr} \end{figure} \noindent We consider $n$-ary roles because in relational databases, relations of higher arity are often used. This is especially important when we need to express dependencies between several (not only two) individuals. \begin{example} We would like to express, for instance, information about all the routes from cities in a set $C_1$ to cities in a set $C_2$ passing through cities in a set $C_3$. This could be done using ternary roles interpreted as ternary relations. \end{example} \subsubsection{An extension of $\mathcal{EL}^+$ with $n$-ary roles} \label{ext1-n-ary} \noindent An extension of the description logic $\mathcal{ALC}$, containing $n$-ary roles instead of binary roles (interpreted as $n$-ary relations) can easily be defined. The definition of TBox subsumption can be extended naturally to the $n$-ary case. In this paper we will restrict to ${\cal EL}$ (cf.\ Figure~\ref{table:el-ext-constr}), i.e.\ consider only existential restrictions, which are in this case $n$-ary -- of the form $\exists r.(C_1, \dots, C_n)$ -- and are interpreted in any interpretation ${\cal I} = (D, \cdot^{\cal I})$ as: $$(\exists r.(C_1, \dots, C_n))^{\cal I} = \{ x \mid \exists y_1, \dots, y_n (y_1 \in C_1 \wedge \dots \wedge y_n \in C_n \wedge r^{\cal I}(x, y_1, \dots, y_n)) \}.$$ A translation of concept descriptions into terms can be defined in a natural way also in this case, with the difference that for every role name $r$ with arity $n+1$, we introduce an $n$-ary function symbol $f_{\exists r}$. The translation is inductively defined as in the binary case, with the difference that: $$\overline{\exists r.(C_1, \dots, C_n)} = f_{\exists r}({\overline C_1}, \dots, {\overline C_n}).$$ Also in the $n$-ary case we denote by ${\sf BAO}^{\exists}_{N_R}$ the class of Boolean algebras with operators $(B, \vee, \wedge, \neg, 0, 1, \{ f_{\exists r} \}_{r \in N_R})$, such that for every $r \in N_R$ with arity $n+1$, $f_{\exists r}$ is a join-hemimorphism with arity $n$; ${\sf DLO}^{\exists}_{N_R}$ and ${\sf SLO}^{\exists}_{N_R}$ are defined similarly.
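\noindent As a small illustration of the $n$-ary existential restriction defined above (and of the $n$-ary operation $f_{\exists r}$ into which it is translated), the following Python sketch computes $(\exists r.(C_1, C_2))^{\cal I}$ for a hypothetical ternary role; all names and data are chosen only for this example.
\begin{verbatim}
# A hypothetical ternary role: route(x, y, z); all data are illustrative.
route = {("a", "b", "c"), ("a", "d", "c"), ("b", "c", "d")}

def f_exists_nary(rel, *concepts):
    # f_{exists r}(U1, ..., Un) = { x | there are y1, ..., yn with
    #                               (x, y1, ..., yn) in rel and yi in Ui for all i }
    n = len(concepts)
    return {t[0] for t in rel
            if len(t) == n + 1 and all(t[i + 1] in concepts[i] for i in range(n))}

C1 = {"b", "d"}                       # concept constraining the first argument
C2 = {"c"}                            # concept constraining the second argument
print(f_exists_nary(route, C1, C2))   # {'a'}:  (exists route.(C1, C2))^I
\end{verbatim}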
An extension of $\mathcal{EL}^+$ with $n$-ary roles can be obtained by allowing role inclusions of type: \begin{eqnarray} r_1 & \sqsubseteq & r_2 \label{n-ary-incl} \\ r_1 \circ (s_1, \dots, s_n) & \sqsubseteq & r_2 \label{n-ary} \\ r_1 \circ (s_1, \dots, s_n) & \sqsubseteq & id ~~~ \text{ for binary relations } s_i \label{n-ary-inv} \end{eqnarray} \noindent An interpretation ${\cal I} = (D, \cdot^{\cal I})$ satisfies a role inclusion of type~(\ref{n-ary}) if it satisfies the formula: $$ \forall x, {\overline x_i}, {\overline y^k_j} ~~~(r_1(x, x_1, \dots, x_n) \wedge \bigwedge_{k = 1}^n s_k(x_k, y^k_1, \dots, y^k_{m_k})) \rightarrow r_2(x, y^1_1, \dots, y^1_{m_1}, \dots, y^n_1, \dots, y^n_{m_n}).$$ The truth of role inclusions of type~(\ref{n-ary-incl}) resp.~(\ref{n-ary-inv}) is defined in a similar way. As in the case of $\mathcal{EL}^+$ we can also prove that TBox subsumption can be expressed as a uniform word problem w.r.t.\ the class of semilattices with monotone operators associated with the roles, satisfying axioms corresponding in a natural way to the role inclusion laws above: $$\begin{array}{lrcl} \forall x_1, \dots, x_n ~~~ & f_{\exists r_1}(x_1, \dots, x_n) & \leq & f_{\exists r_2}(x_1, \dots, x_n) \\ \forall {\overline y^k_j}~~~ & f_{\exists r_1}(f_{\exists s_1}(y^1_1, \dots, y^1_{m_1}), \dots, f_{\exists s_n}(y^n_1, \dots, y^n_{m_n})) & \leq & f_{\exists r_2}(y^1_1, \dots, y^1_{m_1}, \dots, y^n_1, \dots, y^n_{m_n}) \\ \forall x ~~~ & f_{\exists r_1}(f_{\exists s_1}(x), \dots, f_{\exists s_n}(x)) & \leq & x \end{array}$$ \noindent Inequalities of this type are exactly of the form studied in Section~\ref{correspondence}. A straightforward generalization of Theorem~\ref{el-slat}, using the corresponding corollaries of Theorems~\ref{bao-rel-n-guards} and~\ref{rel-to-bao-n}, yields: \begin{theorem} If the only concept constructors are intersection and existential restriction, then for all concept descriptions $D_1, D_2$ and every $\mathcal{EL}^+$ CBox ${\mathcal C} {=} GCI {\cup} RI$ -- where $RI$ consists of role inclusions of type~(\ref{n-ary-incl})--(\ref{n-ary-inv}) -- with concept names $N_C = \{ C_1, \dots, C_n \}$ the following are equivalent: \begin{itemize} \item[(1)] $D_1 {\sqsubseteq}_{\mathcal C} D_2$. \item[(2)] ${\sf BAO}^{\exists}_{N_R}(RI) \models \forall C_1 \dots C_n \left(\left( \bigwedge_{C {\sqsubseteq} D \in GCI} \overline{C} {\leq} \overline{D} \right) \rightarrow \overline{D_1} {\leq} \overline{D_2}\right).$ \item[(3)] ${\sf SLO}^{\exists}_{N_R}(RI) \models \forall C_1 \dots C_n \left(\left( \bigwedge_{C {\sqsubseteq} D \in GCI} \overline{C} {\leq} \overline{D} \right) \rightarrow \overline{D_1} {\leq} \overline{D_2}\right).$ \end{itemize} \label{el-+-slat-gen} \end{theorem} A similar result is obtained if we also consider guarded role inclusions. \begin{theorem} Assume that the only concept constructors are intersection and existential restriction. Let ${\mathcal C} {=} GCI {\cup} RI {\cup} GRI$ be a CBox containing a set $GCI$ of general concept inclusions, a set $RI$ of role inclusions and a set $GRI$ of guarded role inclusions of the form discussed above, with concept names $N_C = \{ C_1, \dots, C_n \}$. Then for all concept descriptions $D_1, D_2$ the following are equivalent: \begin{itemize} \item[(1)] $D_1 {\sqsubseteq}_{\mathcal C} D_2$.
\item[(2)] $GRI(C_1, \dots, C_n) \wedge \left( \bigwedge_{C {\sqsubseteq} D \in GCI} \overline{C} {\leq} \overline{D} \right) \wedge \overline{D_1} {\not\leq} \overline{D_2}$ is unsatisfiable w.r.t.\ ${\sf BAO}^{\exists}_{N_R}(RI)$. \item[(3)] $GRI(C_1, \dots, C_n) \wedge \left( \bigwedge_{C {\sqsubseteq} D \in GCI} \overline{C} {\leq} \overline{D} \right) \wedge \overline{D_1} {\not\leq} \overline{D_2}$ is unsatisfiable w.r.t.\ ${\sf SLO}^{\exists}_{N_R}(RI)$. \end{itemize} \end{theorem} \subsubsection{$\mathcal{EL}^+$ with $n$-ary roles and concrete domains} \label{ext1-n-ary-concrete} A further extension is obtained by allowing for certain concrete sorts (having the same support in all interpretations), or by additionally assuming that there exist specific concrete concepts which have a fixed semantics (or additional fixed properties) in all interpretations. \begin{example} \label{ex1} Consider a description logic having a usual (${\sf concept}$) sort and a ``concrete'' sort ${\sf num}$ with fixed domain ${\mathbb R}$. We may be interested in general concrete concepts of sort ${\sf num}$ (interpreted as subsets of ${\mathbb R}$) or in special concepts of sort ${\sf num}$ such as ${\uparrow} n$, ${\downarrow} n$, or $[n, m]$ for $m, n \in {\mathbb R}$. For any interpretation ${\mathcal I}$, ${\uparrow} n^{\mathcal I} = \{ x \in {\mathbb R} \mid x \geq n \}$, ${\downarrow} n^{\mathcal I} = \{ x \in {\mathbb R} \mid x \leq n \}$, and $[n, m]^{\mathcal I} = \{ x \in {\mathbb R} \mid n \leq x \leq m \}$. We will denote the arities of roles using a many-sorted framework. Let $(D, {\mathbb R}, \cdot^{\mathcal I})$ be an interpretation with two sorts ${\sf concept}$ and ${\sf num}$. A role with arity $(s_1, \dots, s_n)$ is interpreted as a subset of $D_{s_1} \times \dots \times D_{s_n}$, where $D_{\sf concept} = D$ and $D_{\sf num} = {\mathbb R}$. \begin{enumerate} \item Let ${\sf price}$ be a binary role of arity $({\sf concept}, {\sf num})$, which associates with every element of sort ${\sf concept}$ its possible prices. The concept $$\exists {\sf price}.{\uparrow} n = \{ x \mid \exists k \geq n: {\sf price}(x, k) \}$$ represents the class of all individuals with some {\sf price} greater than or equal to $n$. \item Let {\sf has-weight-price} be a role of arity $({\sf concept},{\sf num},{\sf num})$. The concept $$\exists \mbox{ {\sf has-weight-price}}.({\uparrow} {\sf y}, {\downarrow} {\sf p}) = \{ x \mid \exists y' {\geq} {\sf y}, \exists p' {\leq} {\sf p} \mbox{ and } \mbox{{\sf has-weight-price}}(x, y', p') \}$$ denotes the family of individuals for which a weight above ${\sf y}$ and a price below ${\sf p}$ exist. \end{enumerate} \end{example} The example above can be generalized by allowing a set of concrete sorts. We discuss the algebraic semantics of this type of extension of $\mathcal{EL}$. \noindent Let ${\sf SLO}^{\exists}_{N_R, S}$ denote the class of all structures $(S, {\mathcal P}(A_1), \dots, {\mathcal P}(A_n), \{ f_{\exists r} \mid r \in N_R \})$, where $S$ is a semilattice, $A_1, \dots, A_n$ are concrete domains, and $\{ f_{\exists r} \mid r \in N_R \}$ are $n$-ary monotone operators. We may allow constants of concrete sort, interpreted as sets in ${\mathcal P}(A_i)$.
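\noindent The following Python sketch illustrates the intended semantics of the concrete concepts in Example~\ref{ex1}: it computes $(\exists {\sf price}. {\uparrow} n)^{\mathcal I}$ for a hypothetical interpretation of the role ${\sf price}$; the data and the helper functions are illustrative only.
\begin{verbatim}
# Hypothetical interpretation of the role 'price' of arity (concept, num).
price = {("book", 12.0), ("book", 9.5), ("laptop", 900.0)}

def up(n):
    # the concrete concept (up n) = { k in R | k >= n }
    return lambda k: k >= n

def exists_price(concrete_concept):
    # (exists price. P)^I = { x | there is k with price(x, k) and k in P }
    return {x for (x, k) in price if concrete_concept(k)}

print(exists_price(up(10.0)))    # {'book', 'laptop'}: some price >= 10 exists
\end{verbatim}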
\begin{theorem} If the only concept constructors are intersection and existential restriction, then for all concept descriptions $D_1, D_2,$ and every CBox ${\cal C} = GCI \cup RI$ consisting of general concept inclusions $GCI$ with concrete domains as defined above, and role inclusions $RI$ of the type considered in Sect.~\ref{sect:slo-el+} or Sect.~\ref{ext1-n-ary} the following are equivalent: \begin{itemize} \item[(1)] $D_1 \sqsubseteq_{\mathcal C} D_2$. \item[(2)] ${\sf SLO}^{\exists}_{N_R, S}(RI) \models \forall C_1, \dots, C_n \left(\left( \bigwedge_{C \sqsubseteq D \in GCI} \overline{C} \leq \overline{D} \right) \rightarrow \overline{D_1} \leq \overline{D_2} \right).$ \end{itemize} \label{el-slat-ext} \end{theorem} \noindent {\it Proof\/}:\ Analogous to the proof of Theorem~\ref{el-slat}. \hspace*{\fill}$\Box$ \noindent We can also consider guarded role inclusions for $n$-ary many-sorted roles. All the previous results lift without problems. \subsection{Existential restrictions for roles} We will also consider relationships of the form $$\{ (x, y_1, \dots, y_{i-1}, y_{i+1}, \dots, y_n) \mid \exists y_i \in C: r( x, y_1, \dots, y_n)\}.$$ \noindent In analogy to concept construction by existential restrictions, we can apply existential restriction to $n+1$-ary roles for obtaining $n$-ary roles. The syntax and semantics are: $$\begin{array}{|l|l|} \hline \text{ Role construction } & \text{ Semantics } \\ \hline \exists r. (j, C) ~~(1\leq j\leq n)& \exists r. (j, C)^{\cal I} = \{ (x, x_1, \dots, x_{j-1}, x_{j+1}, \dots, x_n) \mid \exists x_j \in C: r(x, x_1, \dots, x_n) \} \\ \hline \end{array}$$ \begin{example} \label{ex2} Consider a database where we can express relationships of the form: \begin{eqnarray*} r_{\sf interm}(x, y, z) & & \text{(there exists a route from $x$ to $y$ passing through $z$)} \\ r(x, y) & & \text{(there exists a route from $x$ to $y$).} \end{eqnarray*} We will also want to express relationships of the form {\em ``For all $x_1, x_2$, if there exists a route from $x_1$ to $x_2$ passing through some city in $C_3$, then there exists a route from $x_1$ to $x_2$.''} We therefore need to express a new relation $r'$, where $r'(x_1, x_2)$ stands for ``there exists a route from $x_1$ to $x_2$ passing through some city in $C_3$''. For this we will need constructors of the type $\exists r.(j, C)$. They help to formulate the property above as $\exists r_{\sf interm}(3, C_3) \sqsubseteq r$, interpreted as: $$\{ (x_1, x_2) \mid \exists x_3 \in C_3: r_{\sf interm}(x_1, x_2, x_3) \} \subseteq r, \text{ ~~i.e.\ ~~}\forall x_1, x_2 ~~ \exists r_{\sf interm}(3, C_3)(x_1, x_2) \rightarrow r(x_1, x_2). $$ \end{example} \begin{lemma} Assume that $s = \exists r. (i, C)$. Then \begin{eqnarray*} f_{\exists s}(U_1, \dots, U_{i-1}, U_{i+1}, \dots, U_n) & = & \{ x \mid \exists x_j \in U_j ~ (j \in \{ 1, \dots, n \} \backslash \{ i \}): s(x, x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_n) \} \\ & = & \{ x \mid \exists x_j \in U_j ~ (j \in \{ 1, \dots, n \} \backslash \{ i \}), \exists x_i \in C: r(x, x_1, \dots, x_n) \} \\ & = & f_{\exists r}(U_1, \dots, U_{i-1}, C, U_{i+1}, \dots, U_n). \end{eqnarray*} \end{lemma} \noindent The axioms which correspond to such role restrictions are of the type: \begin{eqnarray}f_{\exists (\exists r.(i,C))}(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_n) = f_{\exists r}(x_1, \dots, x_{i-1}, C, x_{i+1}, \dots, x_n).\label{er} \end{eqnarray} \noindent All results established for ${\cal EL}^+$ hold also if this kind of role construction is considered.
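\noindent The existential restriction of roles can also be sketched computationally; in the following hypothetical Python fragment the first tuple component is the distinguished argument $x$, the index \texttt{j} refers to the remaining argument positions as in the table above, and all concrete data are illustrative only.
\begin{verbatim}
# Hypothetical ternary role r_interm(x, y, z): a route from x to y through z.
r_interm = {("a", "b", "p"), ("a", "c", "q"), ("b", "c", "p")}

def exists_role(rel, j, C):
    # (exists r.(j, C))^I : keep the tuples whose argument x_j lies in C and
    # project that argument away (position 0 is the distinguished argument x).
    return {t[:j] + t[j + 1:] for t in rel if t[j] in C}

C3 = {"p"}                                  # cities we may pass through
r_prime = exists_role(r_interm, 2, C3)      # j = 2 selects the 'through' position
print(r_prime)                              # {('a', 'b'), ('b', 'c')}
\end{verbatim}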
\begin{theorem} Assume the only concept constructors are intersection and existential restriction. Let ${\mathcal C} = GCI {\cup} RI {\cup} GRI {\cup ER}$ be a CBox containing general concept inclusions ($GCI$), (guarded) role inclusions ($RI$, resp.\ $GRI$) and a set $ER$ of definitions of roles by existential restrictions with concept names $N_C = \{ C_1, \dots, C_n \}$. Then for all concept descriptions $D_1, D_2$ the following are equivalent: \begin{itemize} \item[(1)] $D_1 {\sqsubseteq}_{\mathcal C} D_2$. \item[(2)] $GRI(C_1, \dots, C_n) \wedge \left( \bigwedge_{C {\sqsubseteq} D \in GCI} \overline{C} {\leq} \overline{D} \right) \wedge \overline{D_1} {\not\leq} \overline{D_2}$ is unsatisfiable w.r.t.\ ${\sf SLO}^{\exists}_{N_R}(RI \cup ER)$. \end{itemize} \end{theorem} In addition, we may also need to express numerical information. \begin{example} Consider a variant of Example~\ref{ex2}, in which we use a role with arity 4, $r_{il}$, where $r_{il}(x_1, x_2, x_3, n)$ expresses the fact that there exists a route from $x_1$ to $x_2$ passing through $x_3$ of length $n$. Also in this situation we would like to talk about all routes from $x_1$ to $x_2$ passing through $x_3$ which are shorter than a certain length $l$. This can also be expressed using projections as the relation $\exists r_{il}(4, \downarrow l)$, where: $$\exists r_{il}(4, \downarrow l) = \{ (x_1, x_2, x_3) \mid \exists x_4 (x_4 \leq l \wedge r_{il}(x_1, x_2, x_3, x_4)) \}.$$ \end{example} \noindent We will show that the axioms describing the algebraic models for the extensions of ${\cal EL}^+$ we considered here are ``local'', a property which ensures that the uniform word problem (resp. the problem of checking the validity of a set of ground unit clauses) is decidable in PTIME. We start by presenting a few important results on local theories and local theory extensions. \section{Local theories; local theory extensions} \label{locality} First-order theories are sets of formulae (closed under logical consequence), typically the set of all consequences of a set of axioms. Alternatively, we may consider the set of all models of a theory. In this paper we consider theories specified by their sets of axioms. (At places, however, -- usually when talking about local extensions of a theory -- we will refer to a theory, and mean the set of all its models.) \noindent Before defining the notion of local theory and local theory extension we will introduce some preliminary notions on partial models of a theory. \begin{definition}[Partial and total models] Let $\Pi = (S, \Sigma, {\sf Pred})$ be a many-sorted signature with set of sorts $S$, set of function symbol $\Sigma$ and set of predicates ${\sf Pred}$. A partial $\Pi$-structure is a structure $(\{ A_s \}_{s \in S}, \{ f_A \}_{f \in \Sigma}, \{ P_A \}_{P \in {\sf Pred}})$ in which for some function symbols $f \in \Sigma$, $f_A$ may be partial. 
\end{definition} \begin{definition} A {\em weak $\Pi$-embedding} between the partial structures $A = (\{ A_s \}_{s \in S},$ $\{ f_A \}_{f \in \Sigma},$ $\{ P_A \}_{P \in {\sf Pred}})$ and $B = ( \{ B_s \}_{s \in S}, \{ f_B \}_{f \in \Sigma}, \{ P_B \}_{P \in {\sf Pred}})$ is a (many-sorted) family $i = (i_s)_{s \in S}$ of total maps $i_s : A_s \rightarrow B_s$ such that \begin{itemize} \item[(i)] if $f_A(a_1, \dots, a_n)$ is defined (in $A$) then also $f_B(i_{s_1}(a_1), \dots, i_{s_n}(a_n))$ is defined (in $B$) and $i_s(f_A(a_1, \dots, a_n)) = f_B(i_{s_1}(a_1), \dots, i_{s_n}(a_n))$; \item[(ii)] for each sort $s$, $i_s$ is injective and an embedding w.r.t.\ ${\sf Pred}$, i.e.\ for every $P \in {\sf Pred}$ with arity $s_1 \dots s_n$ and every $a_1, \dots, a_n$ where $a_i \in A_{s_i}$, $P_A(a_1, \dots, a_n)$ if and only if $P_B(i_{s_1}(a_1), \dots, i_{s_n}(a_n))$. \end{itemize} In this case we say that $A$ {\em weakly embeds} into $B$. \end{definition} \begin{definition} If $A$ is a partial structure and $\beta : X \rightarrow A$ is a valuation then for every literal $L = (\neg) P(t_1, \dots, t_n)$ with $P \in {\sf Pred} {\cup} \{ = \}$ we say that $(A, \beta) \models_w L$ if: \begin{itemize} \item[(i)] either $\beta(t_i)$ are all defined and $(\neg) P_A(\beta(t_1), \dots, \beta(t_n))$ is true in $A$, \item[(ii)] or $\beta(t_i)$ is not defined for some argument $t_i$ of $P$. \end{itemize} Weak satisfaction of clauses ($(A, \beta) \models_w C$) can then be defined in the usual way. We say that $A$ is a {\em weak partial model} of a set of clauses ${\mathcal K}$ if $(A, \beta) \models_w C$ for every $\beta : X \rightarrow A$ and for every clause $C \in {\mathcal K}$. \end{definition} The notion of {\em local theory} was introduced by Givan and McAllester \cite{GivanMcAllester92,McAllester-acm-tocl-02}. They studied sets of Horn clauses ${\mathcal K}$ with the property that, for any ground Horn clause $C$, ${\mathcal K} \models C$ only if already ${\mathcal K}[C] \models C$ (where ${\mathcal K}[C]$ is the set of instances of ${\mathcal K}$ in which all terms are subterms of ground terms in either ${\mathcal K}$ or $C$). Since the size of ${\mathcal K}[C]$ is polynomial in the size of $C$ for a fixed ${\mathcal K}$ and satisfiability of sets of ground Horn clauses can be checked in linear time \cite{DowlingGallier}, it follows that for local theories, validity of ground Horn clauses can be checked in polynomial time. Givan and McAllester proved that every problem which is decidable in PTIME can be encoded as an entailment problem of ground clauses w.r.t.\ a local theory \cite{McAllester-acm-tocl-02}. The property above can easily be generalized to the notion of locality of a set of (Horn) clauses: \begin{definition} A {\em local theory} is a set of Horn clauses ${\mathcal K}$ such that, for any set $G$ of ground Horn clauses, ${\mathcal K} \cup G \models \perp$ if and only if already ${\mathcal K}[G] \cup G \models \perp$, where ${\mathcal K}[G]$ is the set of instances of ${\mathcal K}$ in which all terms are subterms of ground terms in either ${\mathcal K}$ or $G$. \end{definition} In \cite{Ganzinger-01-lics}, Ganzinger established a link between proof theoretic and semantic concepts for polynomial time decidability of uniform word problems which had already been studied in algebra \cite{Skolem20,Burris95}. 
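\noindent The following toy Python sketch (not the procedure of \cite{GivanMcAllester92} itself) illustrates the instantiate-then-saturate idea behind local theories on a single, hypothetical monotonicity clause: only those instances of the clause whose terms are subterms of the ground terms of ${\mathcal K}$ and $G$ are generated, and the resulting finite set of ground Horn clauses is then closed under forward chaining.
\begin{verbatim}
from itertools import product

# K = { x <= y  ->  f(x) <= f(y) }   (monotonicity of a hypothetical symbol f)
# G = { a <= b }  together with the goal  f(a) <= f(b).

def f(t):                            # syntactic application: the term f(t)
    return "f(" + t + ")"

# Subterms of the ground terms occurring in G and in the goal.
ground_terms = {"a", "b", "f(a)", "f(b)"}

# K[G]: instances of the clause all of whose terms belong to ground_terms.
instances = [(("<=", x, y), ("<=", f(x), f(y)))
             for x, y in product(sorted(ground_terms), repeat=2)
             if f(x) in ground_terms and f(y) in ground_terms]

facts = {("<=", "a", "b")}           # the ground unit clauses of G
goal = ("<=", "f(a)", "f(b)")

# Forward chaining on the finite set of ground Horn clauses K[G] and G.
changed = True
while changed:
    changed = False
    for body, head in instances:
        if body in facts and head not in facts:
            facts.add(head)
            changed = True

print("Goal entailed by K[G] and G:", goal in facts)    # True
\end{verbatim}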
\subsection{Local theory extensions} \label{local-ext} We will also consider extensions of theories, in which the signature is extended by new {\em function symbols} (i.e.\ we assume that the set of predicate symbols remains unchanged in the extension). Let ${\mathcal T}_0$ be an arbitrary theory with signature $\Pi_0 = (S, \Sigma_0, {\sf Pred})$, where $S$ is a set of sorts, $\Sigma_0$ a set of function symbols, and ${\sf Pred}$ a set of predicate symbols. We consider extensions ${\mathcal T}_1$ of ${\mathcal T}_0$ with signature $\Pi = (S, {\Sigma}, {\sf Pred})$, where the set of function symbols is $\Sigma = \Sigma_0 \cup \Sigma_1$ (i.e.\ the signature is extended by new function symbols). We assume that ${\mathcal T}_1$ is obtained from ${\mathcal T}_0$ by adding a set ${\mathcal K}$ of (universally quantified) clauses in the signature $\Pi$. Thus, ${\sf Mod}({\mathcal T}_1)$ consists of all $\Pi$-structures which are models of ${\mathcal K}$ and whose reduct to $\Pi_0$ is a model of ${\mathcal T}_0$. In what follows, when referring to {\em (weak) partial models} of ${\mathcal T}_0 \cup {\mathcal K}'$, we mean (weak) partial models of ${\mathcal K}'$ whose reduct to $\Pi_0$ is a total model of ${\mathcal T}_0$. \subsubsection{Locality of an extension} In what follows, when we refer to sets $G$ of ground clauses we assume that they are in the signature $\Pi^c = (S, \Sigma \cup \Sigma_c, {\sf Pred})$, where $\Sigma_c$ is a set of new constants. \noindent We will focus on the following type of locality of a theory extension ${\mathcal T}_0 \subseteq {\mathcal T}_1$, where ${\mathcal T}_1 = {\mathcal T}_0 \cup {\mathcal K}$ with ${\mathcal K}$ a set of (universally quantified) clauses: \noindent \begin{tabular}{@{}ll} ${\sf (Loc)}$ & For every finite set $G$ of ground clauses ${\mathcal T}_1 {\cup} G \models \perp$ iff ${\mathcal T}_0 {\cup} {\mathcal K}[G] {\cup} G$ \\ & has no weak partial model with all terms in ${\sf st}({\mathcal K}, G)$ defined. \\ \end{tabular} \noindent Here, ${\sf st}({\mathcal K}, G)$ is the set of all ground terms occurring in ${\mathcal K}$ or $G$. \noindent We say that an extension ${\mathcal T}_0 \subseteq {\mathcal T}_1$ is {\em local} if it satisfies condition ${\sf (Loc)}$. (Note that a local equational theory \cite{Ganzinger-01-lics} is a local extension of the pure theory of equality (with no function symbols).) A more general notion, namely $\Psi$-locality of an extension theory (in which the instances to be considered are described by a closure operation $\Psi$) is introduced in \cite{ihlemann-jacobs-sofronie-tacas08}. Let ${\mathcal K}$ be a set of clauses. Let $\Psi_{\mathcal K}$ be a function associating with any set $T$ of ground terms a set $\Psi_{\mathcal K}(T)$ of ground terms such that \begin{itemize} \item[(i)] all ground subterms in ${\mathcal K}$ and $T$ are in $\Psi_{\mathcal K}(T)$; \item[(ii)] for all sets of ground terms $T, T'$ if $T \subseteq T'$ then $\Psi_{\mathcal K}(T) \subseteq \Psi_{\mathcal K}(T')$; \item[(iii)] for all sets of ground terms $T$, $\Psi_{\mathcal K}(\Psi_{\mathcal K}(T)) \subseteq \Psi_{\mathcal K}(T)$; \item[(iv)] $\Psi$ is compatible with any map $h$ between constants, i.e.\ for any map $h : C \rightarrow C$, $\Psi_{\mathcal K}({\overline h}(T)) = {\overline h}(\Psi_{\mathcal K}(T))$, where ${\overline h}$ is the unique extension of $h$ to terms. 
\end{itemize} Let ${\mathcal K}{[\Psi_{\mathcal K}(G)]}$ be the set of instances of ${\mathcal K}$ where the variables are instantiated with terms in $\Psi_{\mathcal K}({\sf st}({\mathcal K}, G))$ (set denoted in what follows by $\Psi_{\mathcal K}(G)$), where ${\sf st}({\mathcal K}, G)$ is the set of all ground terms occurring in ${\mathcal K}$ or $G$. We say that ${\mathcal K}$ is $\Psi$-stably local if it satisfies: \noindent \begin{tabular}{@{}l@{}l} $({\sf Loc}^{\Psi})~$ & for every finite set $G$ of ground clauses, ${\mathcal K} {\cup} G$ has a model which is a model of ${\cal T}_0$ \\ & iff ${\mathcal K}{[\Psi_{\mathcal K}(G)]} {\cup} G$ has a partial model which is a total model of ${\cal T}_0$ and in which all \\ & terms in $\Psi_{\mathcal K}(G)$ are defined. \end{tabular} \noindent If ${\Psi}_{\mathcal K}(G) = {\sf st}({\mathcal K}, G)$ we recover the definition of local theory extension. \noindent In $\Psi$-local theories and theory extensions hierarchical reasoning is possible. We present the ideas for the case of local theories. \subsubsection{Hierarchical reasoning} Consider a $\Psi$-local theory extension ${\mathcal T}_0 \subseteq {\cal T}_1 = {\mathcal T}_0 \cup {\mathcal K}$. The locality conditions defined above require that, for every set $G$ of ground clauses, ${\mathcal T}_1 \cup G$ is satisfiable if and only if ${\mathcal T}_0 \cup {\mathcal K}[\Psi_{\cal K}(G)] \cup G$ has a weak partial model with additional properties. All clauses in the set ${\mathcal K}[\Psi_{\cal K}(G)] \cup G$ have the property that the function symbols in $\Sigma_1$ have as arguments only ground terms. Therefore, ${\mathcal K}[\Psi_{\cal K}(G)] \cup G$ can be flattened and purified (i.e.\ the function symbols in $\Sigma_1$ are separated from the other symbols) by introducing, in a bottom-up manner, new constants $c_t$ for subterms $t = f(g_1, \dots, g_n)$ with $f \in \Sigma_1$, $g_i$ ground $\Sigma_0 \cup \Sigma_c$-terms (where $\Sigma_c$ is a set of constants which contains the constants introduced by flattening, resp.\ purification), together with corresponding definitions $c_t = t$. The set of clauses thus obtained has the form ${\mathcal K}_0 \cup G_0 \cup {\sf Def}$, where ${\sf Def}$ is a set of ground unit clauses of the form $f(g_1, \dots, g_n) = c$, where $f \in \Sigma_1$, $c$ is a constant, $g_1, \dots, g_n$ are ground terms without function symbols in $\Sigma_1$, and ${\mathcal K}_0$ and $G_0$ are clauses without function symbols in $\Sigma_1$. Flattening and purification preserve both satisfiability and unsatisfiability w.r.t.\ total algebras, and also w.r.t.\ partial algebras in which all ground subterms which are flattened are defined \cite{Sofronie-cade-05}. \noindent For the sake of simplicity in what follows we will always flatten and then purify ${\mathcal K}[\Psi_{\mathcal K}(G)] \cup G$. Thus we ensure that ${\sf Def}$ consists of ground unit clauses of the form $f(c_1, \dots, c_n) = c$, where $f \in \Sigma_1$, and $c_1, \dots, c_n, c$ are constants. \begin{theorem}[\cite{Sofronie-cade-05,ihlemann-jacobs-sofronie-tacas08}] Let ${\mathcal K}$ be a set of clauses. Assume that ${\mathcal T}_0 \subseteq {\cal T}_1 = {\mathcal T}_0 \cup {\mathcal K}$ is a $\Psi$-local theory extension, and that for every finite set $T$ of terms $\Psi_{\cal K}(T)$ is finite. For any set $G$ of ground clauses, let ${\mathcal K}_0 \cup G_0 \cup {\sf Def}$ be obtained from ${\mathcal K}[\Psi_{\cal K}(G)] \cup G$ by flattening and purification, as explained above. 
Then the following are equivalent: \begin{itemize} \item[(1)] $G$ is satisfiable w.r.t.\ ${\cal T}_1$. \item[(2)] ${\mathcal T}_0 {\cup} {\mathcal K}[\Psi_{\cal K}(G)] {\cup} G$ has a partial model with all terms in ${\sf st}({\mathcal K}, G)$ defined. \item[(3)] ${\mathcal T}_0 {\cup} {\mathcal K}_0 {\cup} G_0 {\cup} {\sf Def}$ has a partial model with all terms in ${\sf st}({\mathcal K}, G)$ defined. \item[(4)] ${\mathcal T}_0 \cup {\mathcal K}_0 \cup G_0 \cup {\sf Con}[G]_0$ has a (total) model, where $\displaystyle{~~~ {\sf Con}[G]_0 = \{ \bigwedge_{i = 1}^n c_i = d_i \rightarrow c = d \mid f(c_1, \dots, c_n) = c, f(d_1, \dots, d_n) = d \in {\sf Def} \}}.$ \end{itemize} \label{lemma-rel-transl} \end{theorem} \subsubsection{Parameterized decidability and complexity} Theorem~\ref{lemma-rel-transl} allows us to show that: \begin{itemize} \item decidability of checking satisfiability in a $\Psi$-local extension of a theory ${\cal T}_0$ is a consequence of the decidability of the problem of checking the satisfiability of ground clauses in ${\cal T}_0$, and \item the complexity of the task of checking the satisfiability of sets of ground clauses w.r.t.\ a $\Psi$-local extension of a base theory ${\cal T}_0$ can be expressed as a function of the complexity of checking the satisfiability of sets of ground clauses in ${\cal T}_0$. \end{itemize} \begin{theorem}[\cite{Sofronie-cade-05}] Assume that the theory extension ${\mathcal T}_0 \subseteq {\mathcal T}_1$ satisfies condition ${\sf (Loc)}$. If all variables in the clauses in ${\mathcal K}$ occur below some function symbol\footnote{This requirement ensures that all variables are instantiated in ${\mathcal K}[G]$, and that therefore the satisfiability problem can be reduced without problems to testing the satisfiability of a set of ground clauses.} from $\Sigma_1$ and if testing satisfiability of ground clauses in ${\mathcal T}_0$ is decidable, then testing satisfiability of ground clauses in ${\mathcal T}_1$ is decidable. Assume in addition that the complexity of testing the satisfiability of a set of ground clauses of size $m$ w.r.t.\ ${\cal T}_0$ can be described by a function $g(m)$. Let $G$ be a set of ${\cal T}_1$-clauses of size $n$. Then the complexity of checking the satisfiability of $G$ w.r.t.\ ${\cal T}_1$ is of order $g(n^k)$, where $k$ is the maximum number of free variables in a clause in ${\cal K}$, at least $2$. \label{complex} \end{theorem} \noindent {\it Proof\/}:\ This follows from the fact that: \begin{itemize} \item the number of clauses in ${\cal K}_0$ is polynomial in the size of $\Psi_{\cal K}(G)$, where the degree $d$ of the polynomial is at most the maximum number of free variables in a clause in ${\cal K}$; \item the number of clauses in $G_0$ is linear in the size of $G$; \item the number of clauses in ${\sf Con}[G]_0$ is quadratic in the size of $G$.\hspace*{\fill}$\Box$ \end{itemize} \subsubsection{Recognizing local theory extensions} The locality of an extension can be recognized by proving embeddability of partial models into total models \cite{Sofronie-cade-05,sofronie-ihlemann-ismvl-07,ihlemann-jacobs-sofronie-tacas08}. 
We will use the following notation: \noindent \begin{tabular}{@{}ll} ${\sf PMod^{\Psi}_w}({\Sigma_1}, {\mathcal T}_1)$ & is the class of all weak partial models $A$ of ${\mathcal T}_1 = {\cal T}_0 \cup {\cal K}$ in which the \\ & $\Sigma_1$-functions are partial, the $\Sigma_0$-functions are total, and the set of terms \\ & $\{ f(a_1, \dots, a_n) \mid f_A(a_1, \dots, a_n) \text{ defined} \}$ is closed under $\Psi_{\cal K}$. \end{tabular} \noindent For extensions ${\mathcal T}_0 \subseteq {\mathcal T}_1 = {\mathcal T}_0 \cup {\mathcal K}$, where ${\mathcal K}$ is a set of clauses, we consider the condition: \noindent \begin{tabular}{@{}ll} ${\sf (Emb^{\Psi}_w)}$ & Every $A \in {\sf PMod^{\Psi}_w}({\Sigma_1}, {\mathcal T}_1)$ weakly embeds into a total model of ${\mathcal T}_1$. \end{tabular} \noindent In what follows we say that a non-ground clause is $\Sigma_1$-{\em flat} if function symbols (including constants) do not occur as arguments of function symbols in $\Sigma_1$. A $\Sigma_1$-flat non-ground clause is called $\Sigma_1$-{\em linear} if whenever a variable occurs in two terms in the clause which start with function symbols in $\Sigma_1$, the two terms are identical, and if no term which starts with a function symbol in $\Sigma_1$ contains two occurrences of the same variable. Flatness and linearity are important because, for flat and linear sets of axioms, locality can be checked by semantic means. It is easy to see that every set of clauses can be flattened and linearized. Note, however, that after flattening and linearization the set of instances ${\cal K}[G]$ (resp.\ ${\cal K}[\Psi(G)]$) usually changes. \begin{theorem}[\cite{ihlemann-jacobs-sofronie-tacas08}] Let ${\mathcal K}$ be a set of $\Sigma_1$-flat and $\Sigma_1$-linear clauses. If the extension ${\mathcal T}_0 \subseteq {\mathcal T}_1 = {\cal T}_0 \cup {\cal K}$ satisfies ${\sf (Emb^{\Psi}_w)}$ -- where $\Psi$ satisfies conditions (i)--(iv) in Section~\ref{local-ext} -- then the extension satisfies ${\sf (Loc^{\Psi})}$. \label{rel-loc-embedding} \end{theorem} \noindent {\it Proof\/}:\ Assume that ${\mathcal T}_0 \cup {\mathcal K}$ is not a $\Psi$-local extension of ${\mathcal T}_0$. Then there exists a set $G$ of ground clauses (with additional constants) such that ${\mathcal T}_0 \cup {\mathcal K} \cup G \models \perp$ but ${\mathcal T}_0 \cup {\mathcal K}[\Psi_{\cal K}(G)] \cup G$ has a weak partial model $P$ in which all terms in $\Psi_{\cal K}(G)$ are defined. We assume w.l.o.g.\ that $G = G_0 \cup G_1$, where $G_0$ contains no function symbols in $\Sigma_1$ and $G_1$ consists of ground unit clauses of the form $f(c_1, \dots, c_n) \approx c,$ where $c_i, c$ are constants in $\Sigma_0 \cup \Sigma_c$ and $f \in \Sigma_1$.\footnote{All results below hold if only purified goals are considered; flattening and linearity of goals is not absolutely necessary.} We construct another structure, $A$, having the same support as $P$, which inherits all relations in ${\sf Pred}$ and all maps in $\Sigma_0 \cup \Sigma_c$ from $P$, but on which the domains of definition of the $\Sigma_1$-functions are restricted as follows: for every $f \in \Sigma_1$, $f_A(a_1, \dots, a_n)$ is defined if and only if there exist constants $c^1, \dots, c^n$ such that $f(c^1, \dots, c^n)$ is in $\Psi_{\cal K}(G)$ and $a_i = c^i_P$ for all $i \in \{ 1, \dots, n \}$. In this case we define $f_A(a_1, \dots, a_n) := f_P(c^1_P, \dots, c^n_P)$. The reduct of $A$ to $(\Sigma_0 \cup \Sigma_c, {\sf Pred})$ coincides with that of $P$.
Thus, $A$ is a model of ${\mathcal T}_0 \cup G_0$. By the way the operations in $\Sigma_1$ are defined in $A$, it is clear that $A$ satisfies $G_1$, so $A$ satisfies $G$. To show that $A \models_w {\mathcal K}$ we use the fact that if $D$ is a clause in ${\mathcal K}$ and $\beta : X \rightarrow A$ is an assignment in which $\beta(t)$ is defined for every term $t$ occurring in $D$, then (by the way $\Sigma_1$-functions are defined in $A$) we can construct a substitution $\sigma$ with $\sigma(D) \in {\mathcal K}[\Psi_{\cal K}(G)]$ and $\beta \circ \sigma = \beta$. As $(P, \beta) \models_w \sigma(D)$ we can infer $(A, \beta) \models_w D$. We now show that $D(A) = \{ f(a_1, \dots, a_n) \mid f_A(a_1, \dots, a_n) \text{ defined} \}$ is closed under $\Psi_{\cal K}$. By definition, $f(a_1, \dots, a_n) \in D(A)$ if and only if there exist constants $c_1, \dots, c_n$ with ${c_i}_A = a_i$ for all $i$ and $f(c_1, \dots, c_n) \in \Psi_{\cal K}(G)$. Thus, $$\begin{array}{rcll} D(A) & = & \{ f(a_1, \dots, a_n) \mid f_A(a_1, \dots, a_n) \text{ defined} \} & \\ & = & \{ f({c_1}_A, \dots, {c_n}_A) \mid c_i \text{ constants with } f(c_1, \dots, c_n) \in \Psi_{\cal K}(G) \} & \\ & = & {\overline h}(\Psi_{\cal K}(G)), & \text{where } h(c_i) = a_i \text{ for all } i, \\ \Psi_{\cal K}(D(A)) & = & \Psi_{\cal K}({\overline h}(\Psi_{\cal K}(G))) = {\overline h}(\Psi_{\cal K}(\Psi_{\cal K}(G))) & \text{by property (iv) of } \Psi \\ & \subseteq & {\overline h}(\Psi_{\cal K}(G)) = D(A) & \text{by property (iii) of } \Psi. \end{array}$$ As $A \models_w {\mathcal K}$ and $D(A)$ is closed under $\Psi_{\cal K}$, we have $A \in {\sf PMod^{\Psi}_w}({\Sigma_1}, {\mathcal T}_1)$, so by ${\sf (Emb^{\Psi}_w)}$ the structure $A$ weakly embeds into a total algebra $B$ satisfying ${\mathcal T}_0 \cup {\mathcal K}$. But then $B \models G$, so $B \models {\mathcal T}_0 \cup {\mathcal K} \cup G$, which is a contradiction. \hspace*{\fill}$\Box$ \noindent Analyzing the proof of Theorem~\ref{rel-loc-embedding} we notice that the $\Sigma_1$-linearity restriction can be relaxed. We can allow a variable $x$ to occur below two unary function symbols $g$ and $h$ in a clause $C$ if $\Psi_{\cal K}$ has the property that for every constant $c$, if $g(c) \in \Psi_{\cal K}(G)$ then $h(c) \in \Psi_{\cal K}(G)$ or vice versa. (In terms of partial models this means that we consider models $A$ with the property that if $g_A(a)$ is defined then $h_A(a)$ is defined or vice versa.) The linearity condition can be similarly relaxed in the presence of $n$-ary functions, namely for groups of function symbols $(g_1, \dots, g_n, h)$ which occur together in clauses containing, at the same time, the terms $$\{ g_i({\overline x}_i) \mid 1 \leq i \leq n \} \cup \{ h({\overline x}_1, \dots, {\overline x}_n)\},$$ where the sets of variables ${\overline x}_i$ and ${\overline x}_j$ are disjoint for $i \neq j$; the required property is that if $g_i(c^i_1, \dots, c^i_{n_i}) \in \Psi_{\cal K}(G)$ for all $i$, then $h({\overline c}^1, \dots, {\overline c}^n) \in \Psi_{\cal K}(G)$, or vice versa. \section{Locality and complexity of $\mathcal{EL}^+$ and $\mathcal{EL}$ and extensions thereof} \label{complexity} We now show that the classes of algebraic models of $\mathcal{EL}^+$ and of $\mathcal{EL}$ (and of their extensions presented in Sections~\ref{alg-sem-el} and~\ref{extensions-of-el-n-ary}) have presentations which satisfy certain locality properties.
This gives an alternative, algebraic explanation of the fact that CBox subsumption in these logics is decidable in PTIME, and makes generalizations possible. \subsection{Locality and $\mathcal{EL}$} \label{loc-el} In \cite{Sofronie-amai-07} we proved that the algebraic counterpart of the description logic $\mathcal{EL}$ -- namely the class of semilattices with monotone operators -- has a local axiomatization ${\cal SL} \cup {\sf Mon}(\Sigma)$, i.e.\ an axiomatization with the property that for every set $G$ of ground clauses \[ {\cal SL} \cup {\sf Mon}(\Sigma) \cup G \models \perp \quad \mbox{ if and only if } \quad ({\cal SL} \cup {\sf Mon}(\Sigma))[G] \cup G \models \perp. \] We denoted by ${\sf Mon}(\Sigma)$ the set $\{ {\sf Mon}(f) \mid f \in \Sigma \}$, where $${\sf Mon}(f)~~~~ \forall x, y (x \leq y \rightarrow f(x) \leq f(y)).$$ In \cite{Sofronie-cade-05} we showed that the extension $SLO_{\Sigma} = SL {\cup} {\sf Mon}(\Sigma)$ of the theory $SL$ of bounded semilattices with a family of monotone functions is local. \begin{theorem}[\cite{Sofronie-cade-05,sofronie-ihlemann-ismvl-07}] \label{locality-of-el} Let $G$ be a set of ground clauses. The following are equivalent: \begin{itemize} \item[(1)] $SL \cup {\sf Mon}(\Sigma) \cup G \models \perp$. \item[(2)] $SL \cup {\sf Mon}(\Sigma)[G] \cup G$ has no partial model $A$ such that its $\{ \wedge, 0, 1 \}$-reduct is a (total) bounded semilattice, the functions in $\Sigma$ are partial and all $\Sigma$-subterms of $G$ are defined. \end{itemize} \end{theorem} Let ${\sf Mon}(\Sigma)[G]_0 \cup G_0 \cup {\sf Def}$ be obtained from ${\sf Mon}(\Sigma)[G] \cup G$ by purification, i.e.\ by replacing, in a bottom-up manner, all subterms $f(g)$ with $f \in \Sigma$ with newly introduced constants $c_{f(g)}$ and adding the definitions $f(g) = c_{f(g)}$ to the set ${\sf Def}$. \begin{theorem} The following are equivalent (and equivalent to (1) and (2) above): \begin{itemize} \item[(3)] ${\sf Mon}(\Sigma)[G]_0 \cup G_0 \cup {\sf Def}$ has no partial model $A$ such that its $\{ \wedge, 0, 1 \}$-reduct is a (total) bounded semilattice, the functions in $\Sigma$ are partial and all $\Sigma$-subterms of $G$ are defined. \item[(4)] ${\sf Mon}(\Sigma)[G]_0 \cup G_0$ is unsatisfiable in $SL$. (Note that in the presence of ${\sf Mon}(\Sigma)$ the instances $${\sf Con}[G]_0 = \{ g {=} g' \rightarrow c_{f(g)} {=} c_{f(g')} \mid f(g) {=} c_{f(g)}, f(g') {=} c_{f(g')} \in {\sf Def} \}$$ of the congruence axioms for the functions in $\Sigma$ are not necessary.) \end{itemize} \end{theorem} This equivalence allows us to hierarchically reduce, in polynomial time, proof tasks in $SL \cup {\sf Mon}(\Sigma)$ to proof tasks in $SL$ (cf.\ e.g.\ \cite{sofronie-ihlemann-ismvl-07}), which can then be solved in polynomial time. \begin{example} We illustrate the method on an example first considered in \cite{Baader2003}. Consider the $\mathcal{EL}$ TBox ${\mathcal T}$ consisting of the following definitions: $$\begin{array}{lll} A_1 & = & P_1 \sqcap A_2 \sqcap \exists r_1. \exists r_2. A_3 \\ A_2 & = & P_2 \sqcap A_3 \sqcap \exists r_2. \exists r_1. A_1 \\ A_3 & = & P_3 \sqcap A_2 \sqcap \exists r_1.(P_1 \sqcap P_2) \\ \end{array}$$ We want to prove that $P_3 \sqcap A_2 \sqcap \exists r_1.(A_1 \sqcap A_2) \sqsubseteq_{\mathcal T} A_3$.
We translate this subsumption problem to the following satisfiability problem: \begin{eqnarray*} {\sf SL} \cup {\sf Mon}(f_1, f_2) & \cup & \{ \, a_1 = (p_1 \wedge a_2 \wedge f_1(f_2(a_3))), \\ & & ~~ a_2 = (p_2 \wedge a_3 \wedge f_2(f_1(a_1))), \\ & & ~~ a_3 = (p_3 \wedge a_2 \wedge f_1(p_1 \wedge p_2)), \\ & & ~~\neg (p_3 \wedge a_2 \wedge f_1(a_1 \wedge a_2) \leq a_3) \} \models \perp. \end{eqnarray*} We proceed as follows: We flatten and purify the set $G$ of ground clauses by introducing new names for the terms starting with the function symbols $f_1$ or $f_2$. Let ${\sf Def}$ be the corresponding set of definitions. We then take into account only those instances of the monotonicity and congruence axioms for $f_1$ and $f_2$ which correspond to the instances in ${\sf Def}$, and purify them as well, by replacing the terms themselves with the constants which denote them. We obtain the following separated set of formulae: $$\begin{array}{|l|ll|} \hline ~{\sf Def} & ~~~~~~~~~~~~~~~G_0 ~~~~~~~ \cup & ({\sf Mon}(f_1, f_2)[G])_0 \cup {\sf Con}[G]_0 \\ \hline \hline ~f_2(a_3) = c_1~ & ~(a_1 = p_1 \wedge a_2 \wedge c_2)~~~~~ & a_1 R c_1 \rightarrow c_3 R c_2, ~~R \in \{ \leq, \geq, = \} \\ ~f_1(c_1) = c_2~ & ~(a_2 = p_2 \wedge a_3 \wedge c_4) & a_3 R c_3 \rightarrow c_1 R c_4, ~~R \in \{ \leq, \geq, = \} \\ ~f_1(a_1) = c_3~ & ~(a_3 = p_3 \wedge a_2 \wedge d_1) & a_1 R e_1 \rightarrow c_3 R d_1, ~~R \in \{ \leq, \geq, = \} \\ ~f_2(c_3) = c_4~ & ~(p_3 \wedge a_2 \wedge d_2 \not\leq a_3) & a_1 R e_2 \rightarrow c_3 R d_2, ~~R \in \{ \leq, \geq, = \} \\ ~f_1(e_1) = d_1~ & ~p_1 \wedge p_2 = e_1 & c_1 R e_1 \rightarrow c_2 R d_1, ~~R \in \{ \leq, \geq, = \} \\ ~f_1(e_2) = d_2~ & ~a_1 \wedge a_2 = e_2 & c_1 R e_2 \rightarrow c_2 R d_2, ~~R \in \{ \leq, \geq, = \} \\ & & e_1 R e_2 \rightarrow d_1 R d_2, ~~R \in \{ \leq, \geq, = \} \\ \hline \end{array}$$ The subsumption is true iff $G_0 \cup ({\sf Mon}(f_1, f_2)[G])_0 \cup {\sf Con}[G]_0$ is unsatisfiable in the theory of semilattices. We can see this as follows: note that $a_1 \wedge a_2 \leq p_1 \wedge p_2$, i.e. $e_2 \leq e_1$. Then (using an instance of monotonicity) $d_2 \leq d_1$, so $p_3 \wedge a_2 \wedge d_2 \leq p_3 \wedge a_2 \wedge d_1 = a_3$. This can also be checked automatically in PTIME either by using the fact that there exists a local presentation of ${\sf SL}$ (cf.\ also Sect.~\ref{el-compl}) or using the fact that ${\sf SL} = ISP(S_2)$ (i.e. every semilattice is isomorphic with a sublattice of a power of $S_2$), where $S_2$ is the semilattice with two elements, hence ${\sf SL}$ and $S_2$ satisfy the same Horn clauses. Since the theory of semilattices is convex, satisfiability of ground clauses w.r.t. ${\sf SL}$ can be reduced to SAT solving. 
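For concreteness, the entailment can also be replayed mechanically. The following sketch is only our own illustration (ad-hoc data structures and a naive, non-polynomial saturation loop; it is {\em not} the decision procedure discussed in Section~\ref{el-compl}): it encodes the purified constraints, where each ground term is a meet of constants represented as a set, closes them under a few sound semilattice rules together with the monotonicity instances from the table, and then checks whether the negated unit clause is contradicted.
\begin{verbatim}
from itertools import product

def m(*names):                     # a meet of constants, encoded as a frozenset
    return frozenset(names)

# G_0: every definition a = t contributes a <= t and t <= a.
facts = set()
for left, right in [(m('a1'), m('p1', 'a2', 'c2')),
                    (m('a2'), m('p2', 'a3', 'c4')),
                    (m('a3'), m('p3', 'a2', 'd1')),
                    (m('p1', 'p2'), m('e1')),
                    (m('a1', 'a2'), m('e2'))]:
    facts |= {(left, right), (right, left)}

# (Mon(f1, f2)[G])_0: x <= y -> f(x) <= f(y), written on the purified names.
defs_f1 = [('c1', 'c2'), ('a1', 'c3'), ('e1', 'd1'), ('e2', 'd2')]
defs_f2 = [('a3', 'c1'), ('c3', 'c4')]
mon = [((m(x), m(y)), (m(fx), m(fy)))
       for defs in (defs_f1, defs_f2)
       for (x, fx), (y, fy) in product(defs, repeat=2) if x != y]

goal = (m('p3', 'a2', 'd2'), m('a3'))   # positive form of the negated unit clause

base = {s for rel in facts | {goal} for s in rel}
universe = base | {m(c) for s in base for c in s}

changed = True
while changed:
    changed = False
    new = set()
    for s, t in product(universe, repeat=2):
        if (s, t) in facts:
            continue
        if (t <= s                                              # meet(S) <= meet(T) if T is a subset of S
                or any((s, u) in facts and (u, t) in facts for u in universe)   # transitivity
                or all((s, m(c)) in facts for c in t)):         # S below every conjunct of T
            new.add((s, t))
    for prem, concl in mon:                                     # purified monotonicity instances
        if prem in facts and concl not in facts:
            new.add(concl)
    if new:
        facts |= new
        changed = True

print("subsumption holds" if goal in facts else "not derived by this sketch")
\end{verbatim}
Under these (purely illustrative) encodings the saturation derives $p_3 \wedge a_2 \wedge d_2 \leq a_3$, contradicting the negated unit clause, so the constraint set is unsatisfiable and the subsumption holds, in agreement with the derivation given above.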
\end{example} \subsection{Locality and $\mathcal{EL}^+$} \label{loc-el+} We prove that similar results hold for the class $SLO_{\Sigma}(RI)$ of semilattices with monotone operators in a set $\Sigma$ satisfying a family $RI$ of axioms of the form: \begin{eqnarray*} \forall x ~ & g(x) & \leq h(x) \\ \forall x ~ & f(g(x)) & \leq x\\ \forall x ~ & f(g(x)) & \leq h(x) \end{eqnarray*} Since the characterization of locality in Theorem~\ref{rel-loc-embedding} refers to sets of {\em flat} clauses, instead of $RI$ we consider the flat versions $RI^{\sf flat}$ of this family of axioms: \begin{eqnarray*} \forall x ~ ~ ~ ~ & & g(x) \leq h(x) \\ \forall x, y ~ & (y \leq g(x) \rightarrow & f(y) \leq x)\\ \forall x, y ~ & (y \leq g(x) \rightarrow & f(y) \leq h(x)) \end{eqnarray*} \begin{theorem} The extension $SL \cup {\sf Mon}(\Sigma) \cup RI^{\sf flat}(2)$ of the theory of semilattices with monotone functions $f, g$ satisfying axioms of the second type in $RI^{\sf flat}$ above is local. \label{loc-2} \end{theorem} \noindent {\it Proof\/}:\ We have to prove that every weak partial model of $SL \cup {\sf Mon}(\Sigma) \cup RI^{\sf flat}(2)$ weakly embeds into a total model of $SL \cup {\sf Mon}(\Sigma) \cup RI^{\sf flat}(2)$. This follows from Lemma~\ref{local-alg-el+}. \hspace*{\fill}$\Box$ \begin{theorem} The extension $SL \cup {\sf Mon}(\Sigma) \cup RI^{\sf flat}(1, 3)$ of the theory of semilattices with monotone functions satisfying axioms of the first or third type in $RI^{\sf flat}$ above is $\Psi$-local, where $\Psi(T) = \bigcup_{i \geq 0} \Psi_i(T)$, with $\Psi_0(T) = T$ and $$\begin{array}{rcl} \Psi_{i+1}(T) & = & \Psi_i(T) \cup \{ h(c) \mid \forall x (g(x) \leq h(x)) \in RI^{\sf flat} \text{ and } g(c) \in \Psi_i(T) \} \cup \\ & & \{ h(c) \mid \forall x, y (y \leq g(x) \rightarrow f(y) \leq h(x)) \in RI^{\sf flat} \text{ and } g(c) \in \Psi_i(T) \}. \end{array}$$ \label{loc-1-3} \end{theorem} \noindent {\it Proof\/}:\ Note first that the clauses we consider (see below) are flat, but not linear. \begin{eqnarray*} \forall x ~~~~ ~ & & g(x) \leq h(x) \\ \forall x, y ~ & (y \leq g(x) \rightarrow & f(y) \leq h(x)) \end{eqnarray*} As mentioned before, a small change in the proof of Theorem~\ref{rel-loc-embedding} allows us to relax the linearity condition on the sets of clauses. By Theorem~\ref{rel-loc-embedding}, an extension of $SL$ with monotonicity axioms and clauses of the type above is $\Psi$-local provided that every partial model $S$ of $SL \cup {\sf Mon}(\Sigma) \cup RI(1, 3)$ with a total bounded semilattice reduct and with the property that if $g_S(a)$ is defined then $h_S(a)$ is defined (for all $g$ and $h$ occurring at the positions they have in the axioms above) weakly embeds into a total model of $SL \cup {\sf Mon}(\Sigma) \cup RI(1, 3)$. This embeddability result is a consequence of Lemma~\ref{local-alg-el+}. \hspace*{\fill}$\Box$ \begin{theorem} Any extension of the theory $SL$ of semilattices with a set of monotone functions satisfying axioms of type $RI$ is $\Psi$-local, where $\Psi$ is defined as above. \label{locality-of-el+} \end{theorem} \noindent {\it Proof\/}:\ This is a consequence of Theorems~\ref{loc-2} and~\ref{loc-1-3} and of the fact that the same completion was used in all cases.
\hspace*{\fill}$\Box$ \begin{theorem} Any theory of the form $SL \cup {\sf Mon}(\Sigma) \cup RI \cup GRI(c_1, \dots, c_n)$ -- where $GRI$ are guarded forms of axioms corresponding to role inclusions, as discussed in Section~\ref{sect:slo-el+} -- is $\Psi$-local, where $\Psi(T)$ is as defined above. \end{theorem} \noindent {\it Proof\/}:\ The proof is analogous to the proof of Theorems~\ref{loc-2} and~\ref{loc-1-3}. We illustrate, as an example, the completion process for the case of axioms of the type $$ \forall x, y (x \leq c \wedge y \leq g(x) \rightarrow f(y) \leq h(x)).$$ Let $S$ be a bounded semilattice with partial operators satisfying the axioms in ${\sf Mon}(\Sigma) \cup RI \cup GRI(c_1, \dots, c_n)$. We extend the functions to ${\overline S} = {\cal OI}(S)$ as explained in Lemma~\ref{local-alg-el+}. Let $\eta : S \rightarrow {\cal OI}(S)$ be defined by $\eta(x) = {\downarrow} x$; in particular, the guard constant $c$ is interpreted in ${\overline S}$ as ${\downarrow} c$. Let now $U, V \in {\overline S}$ be such that $V \subseteq \downarrow c$ and $U \subseteq {\overline g}(V)$. Let $x \in {\overline f}(U)$; then there exists $u \in U$ for which $f(u)$ is defined and $x \leq f(u)$, and, since $U \subseteq {\overline g}(V)$ and $V \subseteq {\downarrow} c$, there exists $v \in V$ with $g(v)$ defined such that $u \leq g(v)$ and $v \leq c$. By the $\Psi$-closure condition, $h(v)$ is defined as well. Thus, $x \leq f(u) \leq h(v)$, i.e.\ $x \in {\overline h}(V)$. The other guarded cases can be handled similarly. \hspace*{\fill}$\Box$ \begin{example} We illustrate the ideas on an example presented in \cite{Baader-2005} (here slightly simplified). Consider the CBox ${\mathcal C}$ consisting of the following GCIs: $$\begin{array}{@{}r@{}c@{}l} {\sf Endocard} & \,\sqsubseteq\, & {\sf Tissue} \sqcap \exists {\sf cont}\text{-}{\sf in}.{\sf HeartWall} \sqcap \exists {\sf cont}\text{-}{\sf in}.{\sf HeartValve} \\ {\sf HeartWall} & \,\sqsubseteq\, & \exists {\sf part}\text{-}{\sf of}.{\sf Heart} \\ {\sf HeartValve} & \,\sqsubseteq\, & \exists {\sf part}\text{-}{\sf of}.{\sf Heart} \\ {\sf Endocarditis} & \,\sqsubseteq\, & {\sf Inflammation} \sqcap \exists {\sf has}\text{-}{\sf loc}.{\sf Endocard} \\ {\sf Inflammation} & \,\sqsubseteq\, & {\sf Disease} \\ {\sf Heartdisease} & = & {\sf Disease} \sqcap \exists {\sf has}\text{-}{\sf loc}.{\sf Heart} \end{array}$$ \noindent and the following role inclusions $RI$: $$\begin{array}{@{}r@{}c@{}l} {\sf part}\text{-}{\sf of} \circ {\sf part}\text{-}{\sf of} & \,\sqsubseteq\, & {\sf part}\text{-}{\sf of} \\ {\sf part}\text{-}{\sf of} & \,\sqsubseteq\, & {\sf cont}\text{-}{\sf in}\\ {\sf has}\text{-}{\sf loc} \circ {\sf cont}\text{-}{\sf in} & \,\sqsubseteq\, & {\sf has}\text{-}{\sf loc} \end{array}$$ \noindent We want to check whether ${\sf Endocarditis} \sqsubseteq_{\mathcal C} {\sf Heartdisease}$. This is the case iff (with some abbreviations -- e.g. $f_{\sf ci}$ stands for $f_{\exists {\sf cont}\text{-}{\sf in}}$ and $f_{\sf po}$ for $f_{\exists {\sf part}\text{-}{\sf of}}$, $h_w$ and $h_v$ for ${\sf HeartWall}$ resp.
${\sf HeartValve}$, $e$ for ${\sf Endocard}$, $h$ for ${\sf Heart}$, etc.): \noindent $\begin{array}{@{}lllll} SL & \cup & {\sf Mon}(f_{\sf ci}, f_{\sf hl}, f_{\sf po}) & \cup & \{ \forall x ~ y \leq f_{\sf ci}(x) \rightarrow f_{\sf ci}(y) {\leq} f_{\sf ci}(x), \\ && & & ~\, \forall x ~ f_{\sf po}(x) {\leq} f_{\sf ci}(x), \\ && & & ~\, \forall x ~ y \leq f_{\sf ci}(x) \rightarrow f_{\sf hl}(y) {\leq} f_{\sf hl}(x) \} \\ \end{array}$ $\begin{array}{lll} ~~~~ & \cup & \{ e \leq t \wedge f_{\sf ci}(h_w) \wedge f_{\sf ci}(h_v), h_w \leq f_{\sf po}(h), ~~h_v \leq f_{\sf po}(h), \\ & & ~ {\sf Endocarditis} \leq i \wedge f_{\sf hl}(e), ~~ i \leq d, ~~ {\sf Heartdisease}= d \wedge f_{\sf hl}(h), \\ & & ~ {\sf Endocarditis} \not\leq {\sf Heartdisease} \} ~~\models~~ \perp. \end{array}$ \noindent Then ${\sf st}({\mathcal K}, G) = \{ f_{\sf ci}(h_w),f_{\sf ci}(h_v), f_{\sf po}(h), f_{\sf hl}(e), f_{\sf hl}(h)\}$. It follows that $\Psi_{\mathcal K}(G)$ consists of the following terms: $\{ f_{\sf ci}(h_w),f_{\sf ci}(h_v), f_{\sf ci}(h), f_{\sf po}(h), f_{\sf hl}(e), f_{\sf hl}(h), f_{\sf hl}(h_w), f_{\sf hl}(h_v)\}$. After computing $( RI_a \cup {\sf Mon}(f_{\sf ci}, f_{\sf hl}, f_{\sf po}) \cup {\sf Con}){[\Psi(G)]}$ we obtain: $$\begin{array}{|l|l@{}l|} \hline ~G & (RI_a \cup {\sf Mon} \cup {\sf Con})[\Psi(G)] & \\ \hline \hline ~e \leq t \wedge f_{\sf ci}(h_w) \wedge f_{\sf ci}(h_v) & ~y \leq f_{\sf ci}(x) \rightarrow f_{\sf ci}(y) \leq f_{\sf ci}(x)~~~ & \text{ for } x, y \in \{ h_v, h_w, h \}, x \neq y \\ ~h_w \leq f_{\sf po}(h) & ~f_{\sf po}(h) \leq f_{\sf ci}(h) & \\ ~h_v \leq f_{\sf po}(h) & ~y \leq f_{\sf ci}(x) \rightarrow f_{\sf hl}(y) \leq f_{\sf hl}(x) & \text{ for } x \in \{ h_v, h_w, h \}\\ & & ~~~~~~ y \in \{ e, h, h_w, h_v \}, x \neq y \\ ~{\sf Endocarditis} \leq i \wedge f_{\sf hl}(e) & ~~~ & \\ ~i \leq d & ~x R y \rightarrow f_{\sf ci}(x) R f_{\sf ci}(y) & \text{ for } x, y \in \{ h_w, h_v, h \}, x \neq y \\ ~ {\sf Heartdisease}= d \wedge f_{\sf hl}(h) & ~ x R y \rightarrow f_{\sf hl}(x) R f_{\sf hl}(y) & \text{ for } x, y \in \{ e, h, h_w, h_v \} \\ ~{\sf Endocarditis} \not\leq {\sf Heartdisease}~ & R \in \{ \leq, \geq \} & \\[1ex] \hline \end{array}$$ We can simplify the problem even further by replacing the ground terms in $\Psi(G)$ with new constants, and taking into account the corresponding definitions $c_t = t$. Let $(RI_a \cup {\sf Mon} \cup {\sf Con}){[\Psi(G)]}_0$ be the set of clauses obtained this way. 
$$\begin{array}{|l|l@{}l|} \hline ~G_0 & (RI_a \cup {\sf Mon} \cup {\sf Con})[\Psi(G)]_0 & \\ \hline \hline ~e \leq t \wedge c_{f_{\sf ci}(h_w)} \wedge c_{f_{\sf ci}(h_v)} & ~y \leq c_{f_{\sf ci}(x)} \rightarrow c_{f_{\sf ci}(y)} \leq c_{f_{\sf ci}(x)} & \text{ for } x, y \in \{ h_v, h_w, h \}, x \neq y \\ ~h_w \leq c_{f_{\sf po}(h)} & ~c_{f_{\sf po}(h)} \leq c_{f_{\sf ci}(h)} & \\ ~h_v \leq c_{f_{\sf po}(h)} & ~y \leq c_{f_{\sf ci}(x)} \rightarrow c_{f_{\sf hl}(y)} \leq c_{f_{\sf hl}(x)}~~~ & \text{ for } x \in \{ h_v, h_w, h \}\\ & & ~~~~~~ y \in \{ e, h, h_w, h_v \}, x \neq y~~\\ ~{\sf Endocarditis} \leq i \wedge c_{f_{\sf hl}(e)} & ~~~ & \\ ~i \leq d & ~x R y \rightarrow c_{f_{\sf ci}(x)} R c_{f_{\sf ci}(y)} & \text{ for } x, y \in \{ h_w, h_v, h \}, x \neq y \\ ~ {\sf Heartdisease}= d \wedge c_{f_{\sf hl}(h)} & ~ x R y \rightarrow c_{f_{\sf hl}(x)} R c_{f_{\sf hl}(y)} & \text{ for } x, y \in \{ e, h, h_w, h_v \} \\ ~{\sf Endocarditis} \not\leq {\sf Heartdisease}~ & R \in \{ \leq, \geq \} & \\[1ex] \hline \end{array}$$ With the notation in the previous table, by Corollary~\ref{cor-stable-loc}, ${\sf Endocarditis} \sqsubseteq_{\mathcal C} {\sf Heartdisease}$ iff $G_0 \cup (RI_a \cup {\sf Mon} \cup {\sf Con}){[\Psi(G)]}_0 \models_{SL} \perp$ (i.e.\ it is unsatisfiable w.r.t. the theory of semilattices with 0 and 1). The satisfiability of $\phi$ can therefore be checked automatically in polynomial time in the size of $\phi$ which in its turn is polynomial in the size of ${\Psi}_{\mathcal K}(G)$. Hence, in this case, the size of $\phi$ is polynomial in the size of $G$. Unsatisfiability can also be proved directly: $G$ entails the inequalities: $$\begin{array}{lllll} (1) & {\sf Endocarditis} \leq (d \wedge f_{\sf hl}(e)); & ~~~~& (2) & e \leq (f_{\sf ci}(h_w) \wedge f_{ci}(h_v)); \\ (3) & (h_w \leq f_{\sf po}(h));~~~~~~~~~~~~~~~~ & & (4) & (h_v \leq f_{\sf po}(h)). \end{array}$$ Hence $G \wedge (RI_a \wedge {\sf Mon} \wedge {\sf Con})^{[\Psi(G)]} \models e \leq f_{\sf ci}(f_{\sf po}(h)) \leq f_{\sf ci}(f_{\sf ci}(h)) \leq f_{\sf ci}(h)$. Thus, $G \wedge (RI_a \wedge {\sf Mon} \wedge {\sf Con})^{[\Psi(G)]} \models f_{\sf hl}(e) \leq f_{\sf hl}(f_{\sf ci}(h)) \leq f_{\sf hl}(h)$, so $G \wedge (RI_a \wedge {\sf Mon} \wedge {\sf Con})^{[\Psi(G)]} \models {\sf Endocarditis} \leq d \wedge f_{\sf hl}(h)$, which together with $d \wedge f_{\sf hl}(h) = {\sf Heartdisease}$ and ${\sf Endocarditis} \not\leq {\sf Heartdisease}$ leads to a contradiction. \end{example} \subsection{Complexity} \label{el-compl} We now analyze the complexity of the problem of checking CBox subsumption in the extensions of ${\cal EL}^+$ considered in this paper. Note that by Theorems~\ref{locality-of-el} and~\ref{locality-of-el+}, in all cases considered in Section~\ref{loc-el} and \ref{loc-el+} we can reduce CBox subsumption to the task of checking the satisfiability of a set of constraints of the form $$RI[\Psi(G)]_0 \cup {\sf Mon}(\Sigma)[\Psi(G)]_0 \cup G_0$$ w.r.t.\ the theory of bounded semilattices. \begin{lemma} For the specific closure operator $\Psi$ we consider, the following hold: \begin{itemize} \item The size of $\Psi(G)$ is linear in the size of $|st(G)|$, where $|st(G)|$ is the number of subterms of $G$ which start with a function symbol in $\Sigma$. \item The size of ${\sf Mon}(\Sigma)[\Psi(G)]$ (and hence also the size of ${\sf Mon}(\Sigma)[\Psi(G)]$) is $2 |\Psi(G)|^2$, hence it is quadratic in the size of $|st(G)|$. 
\item The size of $RI[\Psi(G)]$ (hence also the size of $RI[\Psi(G)]_0$) is quadratic in the size of $\Psi(G)$, hence also in the size of $|st(G)|$. \end{itemize} \end{lemma} We reduced the initial problem to the problem of checking satisfiability, w.r.t.\ the theory of bounded semilattices, of the conjunction of a set $G_0$ of ground unit clauses of the form $$c_1 \wedge c_2 \leq c, \quad c_1 \leq c_2, \quad d_1 \not\leq d_2$$ of size linear in $|st(G)|$ and a set of Horn clauses of length at most $n+1$ (where $n$ is the maximal arity of a function symbol in $\Sigma$) of the form $$ c_1 \leq c'_1 \wedge \dots \wedge c_n \leq c'_n \rightarrow c \leq c'.$$ It is easy to see (cf.\ also \cite{Sofronie-ijcar-06,Sofronie-lmcs-08}) that one can give a polynomial decision procedure for checking the satisfiability of such sets of clauses, by noticing that if the set of clauses is unsatisfiable then there exists an instance of monotonicity with all premises entailed by the unit clauses from $G_0$. We can add the conclusion to $G_0$ and recursively repeat the argument. \noindent In order to obtain an even more efficient method for checking TBox subsumption we use a reduction to reachability in the theory of posets. It is known that the theory of semilattices allows a local Horn axiomatization (cf.\ e.g.\ \cite{Skolem20,Burris95}), by means of the following axioms: $$\begin{array}{lll} (S1) & \forall x, y, z & (x \leq y \wedge y \leq z ~ \rightarrow ~ x \leq z)\\ (S2) & \forall x & (0 \leq x \wedge x \leq 1) \\ (S3) & \forall x, y & (x \wedge y \leq x \wedge x \wedge y \leq y) \\ (S4) & \forall x, y, z & (z \leq x \wedge z \leq y ~\rightarrow~ z \leq x \wedge y) \end{array}$$ We denote by ${\cal SL}$ this set of axioms for the theory of bounded semilattices. \begin{theorem} The set of Horn clauses ${\cal SL}$ defines a local extension of the pure theory of bounded partial orders, i.e.\ for every set $G$ of ground clauses in the signature of bounded semilattices, ${\cal SL} \cup G \models \perp$ iff ${\cal SL}[G] \cup G \models \perp$. \end{theorem} \noindent {\it Proof\/}:\ Let $(P, \leq, \wedge, 0, 1)$ be a weak partial model of ${\cal SL}$. Then $(P, \leq, 0, 1)$ is a poset with first and last element. Let ${\cal OI}(P) = ({\cal OI}(P, \leq), \cap, \{ 0 \}, P)$ be the semilattice of all order ideals of $P$. We show that the map $i : P \rightarrow {\cal OI}(P)$ defined by $i(x) = {\downarrow} x$ is a weak embedding: $i$ is obviously injective and an order embedding. Clearly, $i(0) = {\downarrow} 0 = \{0 \}$ and $i(1) = P$. Assume that $x \wedge y$ is defined in $P$; then $i(x \wedge y) = {\downarrow} (x \wedge y)$. Since $P$ weakly satisfies (S3), $x \wedge y \leq x$ and $x \wedge y \leq y$, so $x \wedge y \in {\downarrow} x \cap {\downarrow} y$. Hence, ${\downarrow}(x \wedge y) \subseteq {\downarrow} x \cap {\downarrow} y$. Conversely, let $z \in {\downarrow} x \cap {\downarrow} y$. Then $z \leq x$ and $z \leq y$, and as $x \wedge y$ is defined and $P$ weakly satisfies (S4), $z \leq x \wedge y$. It follows that $i(x \wedge y) = {\downarrow} (x \wedge y) = {\downarrow} x \cap {\downarrow} y = i(x) \cap i(y). $ \hspace*{\fill}$\Box$ \begin{corollary} The following are equivalent: \begin{itemize} \item[(1)] $SL \cup {\sf Mon}(\Sigma) \cup RI \models \forall {\overline x} \bigwedge_{i = 1}^n s_i({\overline x}) \leq s'_i({\overline x}) \rightarrow s({\overline x}) \leq s'({\overline x})$.
\item[(2)] $SL \cup {\sf Mon}(\Sigma) \cup RI {\cup} G {\models} \perp$, where $G = \bigwedge_{i = 1}^n s_i({\overline c}) {\leq} s'_i({\overline c}) {\wedge} s({\overline c}) {\not\leq} s'({\overline c})$. \item[(3)] $SL \cup ({\sf Mon}(\Sigma) \cup RI)[\Psi(G)] {\cup} G {\models} \perp$, where $G = \bigwedge_{i = 1}^n s_i({\overline c}) {\leq} s'_i({\overline c}) {\wedge} s({\overline c}) {\not\leq} s'({\overline c})$, and $\Psi$ is defined as in Theorem~\ref{loc-1-3}. \item[(4)] ${\cal SL} \cup ({\sf Mon}(\Sigma) \cup RI)[\Psi(G)]_0 {\cup} G_0 {\models} \perp$, for the purified semilattice part of the problem. \item[(5)] ${\cal SL}[G'] \cup G' \models \perp,$ where $G' = ({\sf Mon}(\Sigma) \cup RI){[{\Psi}(G)]}_0 \cup G_0$. \item[(6)] ${\cal SL}[G']_0 \cup G'_0 \cup {\sf Con}(\wedge)[G'] \models \perp$. \end{itemize} \label{cor-stable-loc} \end{corollary} \begin{theorem} CBox subsumption can be checked in cubic time in the size of the original CBox for all CBoxes in the language of the extension of ${\cal EL}^+$ considered in this paper. \end{theorem} \noindent {\it Proof\/}:\ We analyze the complexity of the problem in item (6) of Corollary~\ref{cor-stable-loc}, as a function of the size of the input CBox, i.e.\ as a function of the size of $RI$ and $G$. We first estimate the size of $G'$. Note that $\Psi(G)$ can have at most $|{\sf st}(G)| \cdot |N_R|$ elements. Thus, its size is linear in the size of $G$ if $N_R$ is fixed. The number of clauses in $({\sf Mon}(\Sigma) \cup RI){[{\Psi}(G)]}$ is quadratic in $|\Psi(G)|$. By purification, the size grows only linearly. Thus: \begin{itemize} \item The size of $G'$ is quadratic in the number of subterms of $G$. \item $G'$ contains a set of ground unit clauses (of size linear in the size of $G$) and a set of ground Horn clauses (of size quadratic in the size of $G$). \item The number of subterms in $G'$ is linear in the number of subterms of $G$.\end{itemize} If we consider the form of the clauses in ${\cal SL}$ we note that the number of clauses in ${\cal SL}[G'] \cup {\sf Con}(\wedge)[G']$ is at most cubic in the number of subterms in $G'$, i.e.\ cubic in the number of subterms of $G$. The conclusion of the theorem now follows easily if we note that \begin{itemize} \item ${\cal SL}[G']_0 \cup G'_0 \cup {\sf Con}(\wedge)[G']$ is a set of ground Horn clauses, and \item in order to check the satisfiability of any set of $N$ ground clauses w.r.t.\ the theory of posets we only need to take into account those instances of the poset axioms in which the variables are instantiated with the (ground) terms occurring in $N$. \end{itemize} We can thus reduce the verification problem to the problem of checking the satisfiability of a set of ground Horn clauses of size at most cubic in the number of subterms of $G$. Since the satisfiability of ground Horn clauses can be tested in linear time \cite{DowlingGallier}, this shows that the uniform word problem for the class ${\sf SLO}_{\Sigma}(RI)$ (and thus for ${\sf SLO}^{\exists}_{NR}(RI)$) is decidable in cubic time. \hspace*{\fill}$\Box$ \subsection{Extensions of $\mathcal{EL}$ with $n$-ary roles and concrete domains} \label{sect-extensions} The previous results can easily be generalized to semilattices with $n$-ary monotone functions satisfying composition axioms. \subsubsection{Extensions of $\mathcal{EL}$ with $n$-ary roles} We now consider the extensions of $\mathcal{EL}$ with $n$-ary roles introduced in Section~\ref{ext1-n-ary}.
The semantics is defined in terms of interpretations ${\mathcal I} = (D^{\mathcal I}, \cdot^{\mathcal I})$, where $D^{\mathcal I}$ is a non-empty set, concepts are interpreted as usual, and each $n$-ary role $R \in N_R$ is interpreted as an $n$-ary relation $R^{\mathcal I} \subseteq (D^{\mathcal I})^n$. All results in the previous section extend in a natural way to this case, because, independently of the arities of the functions, the extension of the theory of bounded semilattices with monotone functions is local and the number of instances of the monotonicity axioms in ${\sf Mon}[\Psi(G)]$ is quadratic in the size of $\Psi(G)$. \subsubsection{Extensions of $\mathcal{EL}^+$ with $n$-ary roles} In this case we need to take into account role inclusions of type: \begin{eqnarray} r_1 & \sqsubseteq & r_2 \label{n-ary-incl-1} \\ r_1 \circ (s_1, \dots, s_n) & \sqsubseteq & r_2 \label{n-ary-1} \\ r_1 \circ (s_1, \dots, s_n) & \sqsubseteq & id ~~~ \text{ for binary relations } s_i \label{n-ary-inv-1} \end{eqnarray} We proved that TBox subsumption can be expressed as a uniform word problem w.r.t.\ the class of semilattices with monotone operators associated with the roles, satisfying axioms $RI_a$ corresponding in a natural way to the role inclusion laws above. Below we write the flat form of those axioms $RI^{\sf flat}$: \begin{eqnarray*} \forall x_1, \dots, x_n & & f_{\exists r_1}(x_1, \dots, x_n) \leq f_{\exists r_2}(x_1, \dots, x_n) \\ \forall {\overline y^k_j} z_1,\dots,z_n~~ & \bigwedge_i z_i {\leq} f_{\exists s_i}(y^i_1, \dots, y^i_{m_i}) \rightarrow & f_{\exists r_1}(z_1, \dots, z_n) \leq f_{\exists r_2}(y^1_1, \dots, y^1_{m_1}, \dots, y^n_1, \dots, y^n_{m_n}) \\ \forall x, z_1,\dots,z_n~~ & \bigwedge_i z_i {\leq} f_{\exists s_i}(x) \rightarrow & f_{\exists r_1}(z_1, \dots, z_n) \leq x\end{eqnarray*} \begin{theorem} Any extension of the theory $SL$ of lattices with a set of monotone functions satisfying any combination of axioms containing axioms of type $RI^{\sf flat}$ is $\Psi$-local, where $\Psi(T) = \bigcup_{i \geq 1} \Psi^i(T)$, with $\Psi_0(T) = T$, and $$\begin{array}{rcl} \Psi_{i+1}(T) & = & \{ h({\overline c}) \mid \exists \forall {\overline x} (\bigwedge_i g_i({\overline x}) \rightarrow h({\overline x})) \in RI^{\sf flat} \text{ and } g({\overline c}) \in T \} \cup \\ & & \{ h(c) \mid \exists \forall x,{\overline y} (\bigwedge y_i \leq g_i(x) \rightarrow f(y_1, \dots, y_n) \leq h(x)) \in RI^{\sf flat} \text{ and } \forall i g_i(c) \in T \} \cup \\ & & \{ h({\overline c_1}, \dots, {\overline c}_n) \mid \exists \forall {\overline x}_i,{\overline y} (\bigwedge y_i \leq g_i({\overline x}_i) \rightarrow f(y_1, \dots, y_n) \leq h({\overline x}_1, \dots, {\overline x}_n)) \in RI^{\sf flat} \\ & & ~~~~~~~~~~~~~~~~~~~~~~~~\text{ and } g_i({\overline c}_i) \in T \text{ for all } i \}. \end{array}$$ \label{local-el+-n-ary-psi} \end{theorem} \noindent {\it Proof\/}:\ The proof is analogous to the proof of Theorem~\ref{locality-of-el+}. We illustrate as an example the fact that any axiom in $RI^{\sf flat}$ of the second type is $\Psi$-local. Consider an axiom of this type $$ \forall {\overline y^k_j} z_1,\dots,z_n~~ \bigwedge_i z_i {\leq} f_{\exists s_i}(y^i_1, \dots, y^i_{m_i}) \rightarrow f_{\exists r_1}(z_1, \dots, z_n) \leq f_{\exists r_2}(y^1_1, \dots, y^1_{m_1}, \dots, y^n_1, \dots, y^n_{m_n})$$ Let $U^k_j, V_1, \dots, V_n \in {\cal OI}(S)$ be such that $V_i \subseteq {\overline g}_i(U^i_1, \dots, U^i_{m_i})$. Let $x \in {\overline f}(V_1, \dots, V_n)$. 
Then there exist $v_i \in V_i$ such that $f(v_1, \dots, v_n)$ is defined and $x \leq f(v_1, \dots, v_n)$. Since $v_i \in V_i \subseteq {\overline g}_i(U^i_1, \dots, U^i_{m_i})$, there exist $u^i_j \in U^i_j$ with $g_i({\overline u}^i)$ defined and such that $v_i \leq g_i({\overline u}^i)$. By the $\Psi$-closure properties of the models we consider it follows that $h({\overline u}^1, \dots, {\overline u}^n)$ is also defined and since $S$ weakly satisfies the corresponding axiom, it follows that $x \leq f(v_1, \dots, v_n) \leq h({\overline u}^1, \dots, {\overline u}^n)$. Thus, $x \in {\overline h}({\overline U}^1, \dots, {\overline U}^n)$. \hspace*{\fill}$\Box$ \noindent The extension to guarded role inclusions follows exactly as in the case of binary relations. Because of the flatness restriction in the definition of locality we need to consider flat versions of $GRI_a$ axioms, $GRI^{\sf flat}$ which are defined analogously to $RI^{\sf flat}$. \subsubsection{Extensions with existential role restrictions} In the presence of existential role restrictions we can prove the following result. \begin{theorem} Any extension of the theory $SL$ of lattices with a set of monotone functions satisfying any combination of axioms containing axioms of type $RI^{\sf flat}$, $GRI^{\sf flat}$ and existential restrictions $ER$ of the form: $$ \forall x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_n~~ g(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_n) = h(x_1, \dots, x_{i-1}, c, x_{i+1}, \dots, x_n),$$ is $\Psi$-local, where $\Psi(T) = \bigcup_{i \geq 1} \Psi^i(T)$, with $\Psi_0(T) = T$, $$\begin{array}{rcl} \Psi_{i+1}(T) & = & \{ h({\overline c}) \mid \exists \forall {\overline x} (g \wedge \bigwedge_i g_i({\overline x}) \rightarrow h({\overline x})) \in (G)RI^{\sf flat} \text{ and } g({\overline c}) \in T \} \cup \\ & & \{ h(c) \mid \exists \forall x,{\overline y} (g \wedge \bigwedge y_i \leq g_i(x) \rightarrow f(y_1, \dots, y_n) \leq h(x)) \in (G)RI^{\sf flat} \\ & & ~~~~~~~~~~~~~\text{ and } g_i(c) \in T \text{ for all } i \} \cup \\ & & \{ h({\overline c_1}, \dots, {\overline c}_n) \mid \exists \forall {\overline x}_i,{\overline y} (g \wedge \bigwedge y_i \leq g_i({\overline x}_i) \rightarrow f(y_1, \dots, y_n) \leq h({\overline x}_1, \dots, {\overline x}_n)) \in (G)RI^{\sf flat} \\ & & ~~~~~~~~~~~~~~~~~~~~~~~~~ \text{ and } g_i({\overline c}_i) \in T \text{ for all } i \}, \end{array}$$ where $g$ are either $true$ or a suitable conjunction of guards of the form $x_i \leq d_i$. \end{theorem} \noindent {\it Proof\/}:\ The only issue to be clarified is the locality of the extension with axioms in $ER$. The axioms in $ER$ are extensions by definitions like the ones considered in \cite{sofronie-ihlemann-ismvl-07}. Due to arity reasons, they are acyclic. Thus, we have the following chain of extensions: $SLO_{\Sigma} \subseteq SLO_{\Sigma}(ER) \subseteq SLO_{\Sigma}(RI \cup ER)$. \hspace*{\fill}$\Box$ \subsubsection{Extensions with $n$-ary roles and concrete domains} We now consider the extension with concrete domains studied in Section~\ref{ext1-n-ary-concrete}. 
We showed that an algebraic semantics can be given in terms of the class $SL_S$ of all structures ${\mathcal A} = (A, {\mathcal P}(A_1), \dots, {\mathcal P}(A_n))$, with signature $\Pi = (S, \{ \wedge \} {\cup} \Sigma, {\sf Pred})$ with $S {=} \{ {\sf concept}, {\sf s_1}, \dots, {\sf s_n} \}$, ${\sf Pred} {=} \{ \leq \} {\cup} \{ \subseteq_i \mid 1 \leq i \leq n \}$, where $A \in SL$, the support of sort ${\sf concept}$ of ${\mathcal A}$ is $A$, and for all $i$ the support sort $s_i$ of ${\mathcal A}$ is ${\mathcal P}(A_i)$. \begin{theorem}[\cite{sofronie-ihlemann-ismvl-07}] Every structure $(A, {\mathcal P}(A_1), \dots, {\mathcal P}(A_n), \{ f_A \}_{f \in \Sigma})$, where \begin{itemize} \item[(i)] $(A, {\mathcal P}(A_1), \dots, {\mathcal P}(A_n)) \in SL_S$, and \item[(ii)] for every $f {\in} \Sigma$ of arity $s'_1 {\dots} s'_n {\rightarrow} s$, with $s'_1, \dots, s'_n, s \in S$, $f_A$ is a partial function from $\prod_{i = 1}^n U_{s'_i}$ to $U_s$ which is monotone on its domain of definition (here $U_{\sf concept} = A$ and $U_{s_i} = {\mathcal P}(A_i)$ are the universes of the many-sorted structure in (i)). \end{itemize} weakly embeds into a total model of $SL_S {\cup} {\sf Mon}(\Sigma)$. \end{theorem} \begin{corollary} \label{cor-red-n-ary} Let $G = \bigwedge_{i = 1}^n s_i({\overline c}) {\leq} s'_i({\overline c}) \wedge s({\overline c}) {\not\leq} s'({\overline c})$ be a set of ground unit clauses in the extension $\Pi^c$ of $\Pi$ with new constants $\Sigma_c$. The following are equivalent: \begin{itemize} \item[(1)] $SL_S \cup {\sf Mon}(\Sigma) \cup G \models \perp$. \item[(2)] $SL_S \cup {\sf Mon}(\Sigma)[G] \cup G$ has no partial model with a total $\{ \wedge_{SL} \}$-reduct in which all terms in $G$ are defined. \end{itemize} \end{corollary} A hierarchical reduction to the problem of checking satisfiability of constraints in the disjoint combinations of the theory of semilattices and the theories ${\cal P}(A_i)$ follows immediately from this locality result. Let $\bigcup_{i = 0}^n {\sf Mon}(\Sigma)[G]_i \cup G_i \cup {\sf Def}$ be obtained from ${\sf Mon}(\Sigma)[G] \cup G$ by purification, i.e. by replacing, in a bottom-up manner, all subterms $f(g)$ of sort $s$ with $f \in \Sigma$, with newly introduced constants $c_{f(g)}$ of sort $s$ and adding the definitions $f(g) = c_t$ to the set ${\sf Def}$. We thus separate ${\sf Mon}(\Sigma)[G] \cup G$ into a conjunction of constraints $\Gamma_i = {\sf Mon}(\Sigma)[G]_i \cup G_i$, where $\Gamma_0$ is a constraint of sort ${\sf semilattice}$ and for $1 \leq i \leq n$, $\Gamma_i$ is a set of constraints over terms of sort $i$ ($i$ being the concrete sort with fixed support ${\mathcal P}(A_i)$). \begin{corollary} The following are equivalent (and are also equivalent to (1) and (2)): \begin{itemize} \item[(3)] $\bigcup_{i = 0}^n {\sf Mon}(\Sigma)[G]_i \cup G_i \cup {\sf Def}$ has no partial model with a total $\{ \wedge, 0, 1 \}$-reduct in which all terms in ${\sf Def}$ are defined. \item[(4)] $\bigcup_{i = 0}^n {\sf Mon}(\Sigma)[G]_i \cup G_i$ is unsatisfiable in the many-sorted disjoint combination of $SL$ and the concrete theories of ${\mathcal P}(A_i)$, $1 \leq i \leq n$. 
\end{itemize} \end{corollary} The complexity of the uniform word problem of $SL_S \cup {\sf Mon}(\Sigma)$ depends on the complexity of the problem of testing the satisfiability --- in the many-sorted disjoint combination of $SL$ with the concrete theories of ${\mathcal P}(A_i)$, $1 \leq i \leq n$ --- of sets of clauses $C_{\sf concept} \cup \bigcup_{i = 1}^n C_i \cup {\sf Mon}$, where $C_{\sf concept}$ and $C_i$ are unit clauses of sort ${\sf concept}$ resp. $s_i$, and ${\sf Mon}$ consists of possibly mixed ground Horn clauses. Specific extensions of the logic $\mathcal{EL}$ can be obtained by imposing additional restrictions on the interpretation of the ``concrete''-type concepts within ${\mathcal P}(A_i)$. For instance, we can require that numerical concepts are always interpreted as intervals, as in Example~\ref{ex1}. \begin{theorem} Consider the extension of $\mathcal{EL}$ with two sorts, ${\sf concept}$ and ${\sf num}$, where the semantics of classical concepts is the usual one, and the concepts of sort ${\sf num}$ are interpreted as elements in the ORD-Horn, convex fragment of Allen's interval algebra \cite{Nebel-Buerkert95}, where any CBox can contain many-sorted GCI's over concepts, as well as constraints over the numerical data expressible in the ORD-Horn fragment. In this extension, CBox subsumption is decidable in PTIME. \label{el-many-sorted} \end{theorem} \noindent {\it Proof\/}:\ The assumption on the semantics of the extension of $\mathcal{EL}$ we made ensures that all algebraic models are two-sorted structures of the form ${\mathcal A} = ((A, \wedge), {\sf Int}({\mathbb R}, O), \{ f_{\mathcal A} \}_{f \in \Sigma})$, with sorts $\{{\sf concept}, {\sf num} \}$, such that $(A, \wedge)$ is a semilattice, ${\sf Int}({\mathbb R}, O)$ is an interval algebra in the Ord-Horn fragment of Allen's interval arithmetic \cite{Nebel-Buerkert95}, and for all $f \in \Sigma$, $f_A$ is a monotone (many-sorted) function. We will denote the class of all these structures by $SL_{\sf OrdHorn}$. Note that the Ord-Horn fragment of Allen's interval arithmetic has the property that all operations and relations between intervals can be represented by Ord-Horn clauses, i.e.\ clauses over atoms $x \leq y, x = y$, containing at most one positive literal ($x \leq y$ or $x = y$) and arbitrarily many negative literals (of the form $x \neq y$). Nebel and B{\"u}rckert \cite{Nebel-Buerkert95} proved that a finite set of Ord-Horn clauses is satisfiable over the real numbers iff it is satisfiable over posets. As the theory of partial orders is convex, this means that although the theory of reals is not convex w.r.t. $\leq$, we can always assume that the theory of Ord-Horn clauses is convex. The main result in Corollary~\ref{cor-red-n-ary} can be adapted without problems to show that if $G = \bigwedge_{i = 1}^n s_i({\overline c}) {\leq} s'_i({\overline c}) \wedge s({\overline c}) {\not\leq} s'({\overline c})$ is a set of ground unit clauses in the extension $\Pi^c$ of $\Pi$ with new constants $\Sigma_c$, and if ${\sf Mon}(\Sigma)[G]_{\sf c} \cup {\sf Mon}(\Sigma)[G]_{\sf num} \cup G_{\sf c} \cup G_{\sf num} \cup {\sf Def}$ are obtained from ${\sf Mon}(\Sigma)[G] \cup G$ by purification, the following are equivalent: \begin{itemize} \item $SL_{\sf OrdHorn} \cup {\sf Mon}(\Sigma) \cup G \models \perp$; \item ${\sf Mon}(\Sigma)[G]_0 \cup G_0 \cup {\sf Con}[\sf Def]_0$ is unsatisfiable in the combination of $SL$ and the Ord-Horn fragment of Allen's interval arithmetic. 
\end{itemize} In order to test the unsatisfiability of the latter problem we proceed as follows. We first note that, due to the convexity of the theories involved and to the fact that all constraints in $G_0 \cup {\sf Mon}(\Sigma)[G]_0 \cup {\sf Con}[\sf Def]_0$ are separated (in the sense that there are no mixed atoms) if \begin{itemize} \item[(1)] $G_0 \cup {\sf Mon}(\Sigma)[G]_0 \cup {\sf Con}[\sf Def]_0 \models \perp$, then: \item[(2)] there exists a clause $C = (\bigwedge c_i = d_i \rightarrow c = d)$ in ${\sf Mon}(\Sigma)[G]_0 \cup {\sf Con}[\sf Def]_0$ such that $G_0 \models \bigwedge c_i = d_i$ and $G_0 \cup \{ c = d \} \cup ({\sf Mon}(\Sigma)[G]_0 \cup {\sf Con}[\sf Def]_0) \backslash \{C \} \models \perp$. \end{itemize} In order to prove this, let ${\mathcal D}$ be the set of all atoms $c_i R_i d_i$ occurring in premises of clauses in ${\sf Mon}(\Sigma)[G]_0 \cup {\sf Con}[\sf Def]_0$. As every model of $G_0 \wedge \bigwedge_{(c R d) \in {\mathcal D}} \neg (c R d)$ is also a model of $G_0 \cup {\sf Mon}(\Sigma)[G]_0 \cup {\sf Con}[\sf Def]_0$, and the last formula is by (1) unsatisfiable, it follows that $G_0 \wedge \bigwedge_{(c R d) \in {\mathcal D}} \neg (c R d) \models \perp$ in the combination of the Ord-Horn fragment over posets with the theory of semilattices. Let $G_0^+$ be the conjunction of all atoms in $G_0$, and $G_0^-$ be the set of all negative literals in $G_0$. Then $G_0^+ \models \bigvee _{(c R d) \in {\mathcal D}} (c R d) \vee \bigvee_{\neg L \in G_0^-} L.$ Since the constraints are sort-separated and both theories involved are convex, it follows that either $G_0 \models \perp$ or else $G_0 \models c R d$ for some $(c R d) \in {\mathcal D}$. We can repeat the process until all the premises of some clause in ${\sf Mon}(\Sigma)[G]_0 \cup {\sf Con}[\sf Def]_0$ are proved to be entailed by $G_0$. Thus, (2) holds. By iterating the argument above we can always -- if (1) holds -- successively entail sufficiently many premises of monotonicity and congruence axioms in order to ensure that, in the end, \begin{itemize} \item[(3)] there exists a set $\{ C_1, \dots, C_n \}$ of clauses in ${\sf Mon}(\Sigma)[G]_0 \cup {\sf Con}[\sf Def]_0$ with $C_j = (\bigwedge c^j_i = d^j_i \rightarrow c^j = d^j)$, such that for all $k \in \{0, \dots, n-1 \}$, $$G_0 \wedge \bigwedge_{j = 1}^k (c^j = d^j) \models \bigwedge c^{k+1}_i = d^{k+1}_i \text{ and } G_0 \wedge \bigwedge_{j = 1}^n (c^j = d^j) \models \perp.$$ \end{itemize} Note that (3) implies (1), since the conditions in (3) imply that $G_0 \wedge \bigwedge_{j = 1}^n (c^j = d^j)$ is logically equivalent with $G_0 \wedge C_1 \wedge \dots C_n$, which (as set of clauses) is contained in the set of clauses $G_0 \cup {\sf Mon}(\Sigma)[G]_0 \cup {\sf Con}[\sf Def]_0$. This means that in order to test satisfiability of $G_0 \cup {\sf Mon}(\Sigma)[G]_0 \cup {\sf Con}[\sf Def]_0$ we need to test entailment of the premises of ${\sf Mon}(\Sigma)[G]_0 \cup {\sf Con}[\sf Def]_0$ from $G_0$; when all premises of some clause are provably true we delete the clause and add its conclusion to $G_0$. The PTIME assumptions for concept subsumption and for the Ord-Horn fragment ensure that this process terminates in PTIME. \hspace*{\fill}$\Box$ \begin{example} Consider the special case described in Example~\ref{ex1}. Assume that the concepts of sort ${\sf num}$ used in any TBox are of the form ${\uparrow}n, {\downarrow}m$ and $[n, m]$. 
Consider the TBox ${\mathcal T}$ consisting of the following GCIs: $$\begin{array}{@{}l@{}l} \{ & \exists {\sf price}({\downarrow}n_1) \sqsubseteq {\sf affordable}, ~~\exists {\sf weight}({\uparrow}m_1) \sqcap {\sf car} \sqsubseteq {\sf truck}, \\ & \exists \mbox{ {\sf has-weight-price}}({\uparrow}m, {\downarrow}n) \sqsubseteq \exists {\sf price}({\downarrow}n) \sqcap \exists {\sf weight}({\uparrow}m), \\ & {\downarrow}n \sqsubseteq {\downarrow}n_1, ~~{\uparrow}m \sqsubseteq {\uparrow}m_1,~~ C \sqsubseteq {\sf car}, ~~ C \sqsubseteq \exists \mbox{ {\sf has-weight-price}}({\uparrow}m, {\downarrow}n) ~~\} \end{array}$$ In order to prove that $C \sqsubseteq_{\mathcal T} {\sf affordable} \sqcap {\sf truck}$ we proceed as follows. We refute $\bigwedge_{D \sqsubseteq D' \in {\mathcal T}} {\overline D} \leq {\overline D}' \wedge {\overline C} \not\leq {\sf affordable} \wedge {\sf truck}$. We purify the problem by introducing definitions for the terms starting with existential restrictions, express the interval constraints as constraints over ${\mathbb Q}$, and obtain the following set of constraints: \noindent $\begin{array}{|l||l|l|l|} \hline {\sf Def} & C_{\sf num} & C_{\sf concept} & {\sf Mon} \\ \hline \hline f_{\sf price}({\downarrow}n_1) = c_1 & n \leq n_1 & c_1 \leq {\sf affordable} & n_1 \leq n \rightarrow c_1 \leq c \\ f_{\sf price}({\downarrow}n) = c & m \geq m_1 & d_1 \wedge {\sf car} \leq {\sf truck} & n_1 \geq n \rightarrow c_1 \geq c \\ f_{\sf weight}({\uparrow}m_1) = d_1 & & e \leq c \wedge d & m_1 \geq m \rightarrow d_1 \leq d \\ f_{\sf weight}({\uparrow}m) = d & & C \leq {\sf car} & m_1 \leq m \rightarrow d_1 \geq d \\ f_{\sf \text{h-w-p}}({\uparrow}m, {\downarrow}n) = e & & C \leq e & \\ & & C \not\leq {\sf affordable} \wedge {\sf truck} & \\ \hline \end{array}$ \noindent The task of proving $C \sqsubseteq_{\mathcal T} {\sf affordable} \sqcap {\sf truck}$ can therefore be reduced to checking whether $C_{\sf num} \wedge C_{\sf concept} \wedge {\sf Mon}$ is satisfiable w.r.t.\ the combination of $SL$ (sort {\sf concept}) with $LI({\mathbb Q})$ (sort ${\sf num}$). For this, we note that $C_{\sf num}$ entails the premises $n_1 \geq n$ and $m_1 \leq m$ of the second and fourth monotonicity rules. Thus, we can add $c \leq c_1$ and $d \leq d_1$ to $C_{\sf concept}$. We then deduce that $C \leq e \wedge {\sf car} \leq (c \wedge d) \wedge {\sf car} \leq c_1 \wedge (d_1 \wedge {\sf car}) \leq {\sf affordable} \wedge {\sf truck}$, which contradicts the last clause in $C_{\sf concept}$. \noindent A similar procedure can be used in general for testing (in PTIME) the satisfiability of mixed constraints in the many-sorted combination of $SL$ with concrete domains of sort ${\sf num}$, assuming that all concepts of sort ${\sf num}$ are interpreted as intervals and the constraints $C_{\sf num}$ are expressible in a PTIME, convex fragment of Allen's interval algebra. \end{example} These results lift in a natural way to $n$-ary roles satisfying (guarded) role inclusion axioms. \section{Interpolation in semilattices with operators and applications} \label{interpolation} Interpolation theorems are important in the study of distributed or evolving ontologies. A theory ${\mathcal T}$ has interpolation if, for all formulae $\phi$ and $\psi$ in the signature of ${\mathcal T}$, if $\phi \models_{\mathcal T} \psi$ then there exists a formula $I$ containing only symbols which occur in both $\phi$ and $\psi$ such that $\phi \models_{\mathcal T} I$ and $I \models_{\mathcal T} \psi$.
First order logic has interpolation but -- for an arbitrary theory ${\mathcal T}$ -- even if $\phi$ and $\psi$ are e.g. conjunctions of ground literals, $I$ may still be an arbitrary formula, containing alternations of quantifiers. It is often important to identify situations in which {\em ground clauses} have {\em ground interpolants}. In recent literature, when defining ground interpolation, instead of considering formulae $\phi$ and $\psi$ such that $\phi \models_{\mathcal T} \psi$, formulae $A$ and $B$ are considered such that $A \wedge B \models_{\mathcal T} \perp$. The two formulations are clearly equivalent. In what follows we will use the second one. \begin{definition}[Ground interpolation] We say that a theory ${\mathcal T}$ has the {\em ground interpolation property} (or, shorter, that ${\mathcal T}$ has {\em ground interpolation}) if for all ground clauses $A({\overline c}, {\overline d})$ and $B({\overline c}, {\overline e})$, if $A({\overline c}, {\overline d}) \wedge B({\overline c}, {\overline e}) \models_{\mathcal T} \perp$ then there exists a ground formula $I({\overline c})$, containing only the constants ${\overline c}$ occurring both in $A$ and $B$ (and, ideally, only function symbols shared by $A$ and $B$), such that $A({\overline c}, {\overline d}) \models_{\mathcal T} I({\overline c}) \text{ and } B({\overline c}, {\overline e}) \wedge I({\overline c}) \models_{\mathcal T} \perp.$ \end{definition} \begin{definition}[Equational interpolation property] An equational theory ${\mathcal T}$ (in signature $\Pi = (\Sigma, {\sf Pred})$ where ${\sf Pred} = \{ \approx \}$) has the {\em equational interpolation property} if whenever $$\bigwedge_i A_i({\overline a}, {\overline c}) \wedge \bigwedge_j B_j({\overline c}, {\overline b}) \wedge \neg B({\overline c}, {\overline b}) \models_{\mathcal T} \perp,$$ where $A_i$, $B_j$ and $B$ are ground atoms, there exists a conjunction $I({\overline c})$ of ground atoms containing only the constants ${\overline c}$ occurring both in $\bigwedge_i A_i({\overline a}, {\overline c})$ and $\bigwedge_j B_j({\overline c}, {\overline b}) \wedge \neg B({\overline c}, {\overline b})$, such that $\bigwedge_i A_i({\overline a}, {\overline c}) \models_{\mathcal T} I({\overline c}) \text{ and } I({\overline c}) \wedge\bigwedge_j B_j \models_{\mathcal T} B.$ \label{equational-interpolation} \end{definition} There exist results which relate ground interpolation to amalgamation or the injection transfer property \cite{Jonsson65,Bacsich75,Wronski86} and thus allow us to recognize many theories with ground interpolation. However, just knowing that ground interpolants exist is usually not sufficient: we would like to construct the interpolants fast. In \cite{Sofronie-ijcar-06,Sofronie-lmcs-08} a class of theory extensions was identified which have ground interpolation, and for which hierarchical methods for computing the interpolants exist. We present the results below. 
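To fix ideas, here is a small illustration of these definitions (ours, not taken from the works cited above); it can be stated in any theory ${\mathcal T}$ in which $\leq$ is transitive, for instance the theory of bounded semilattices used below, where $x \leq y$ stands for $x \wedge y \approx x$. Let $c_1, c_2$ be shared constants, $d$ a constant occurring only in $A$, and $e$ a constant occurring only in $B$, and set $$ A(c_1, c_2, d) := (c_1 \leq d \wedge d \leq c_2), \qquad B(c_1, c_2, e) := (c_2 \leq e \wedge \neg(c_1 \leq e)). $$ By transitivity $A \wedge B \models_{\mathcal T} \perp$, and $I(c_1, c_2) := (c_1 \leq c_2)$ is a ground interpolant: $A \models_{\mathcal T} I$ and $B \wedge I \models_{\mathcal T} \perp$. Note that $I$ is a single ground atom over the shared constants, as in the equational interpolation property above (with $c_2 \leq e$ playing the role of the positive atoms $B_j$ and $c_1 \leq e$ that of the atom $B$).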
The theories we consider are theory extensions ${\cal T}_0 \subseteq {\cal T}_1 = {\cal T}_0 \cup {\cal K}$ which satisfy the following assumptions: \noindent ${\mathcal T}_0$ is a theory with the following properties: \begin{description} \item[Assumption 1:] ${\mathcal T}_0$ is {\em convex}\/ w.r.t.\ the set ${\sf Pred}$ (including equality $\approx$), i.e., for all conjunctions $\Gamma$ of ground atoms, relations $R_1, \dots, R_m \in {\sf Pred}$ and ground tuples of corresponding arity ${\overline t}_1, \dots, {\overline t}_n$, if $\Gamma \models_{{\mathcal T}_0} \bigvee_{i = 1}^m R_i({\overline t}_i)$ then there exists $j \in \{ 1, \dots, m \}$ such that $\Gamma \models_{{\mathcal T}_0} R_j({\overline t}_j)$. \item[Assumption 2:] ${\mathcal T}_0$ is {\em $P$-interpolating} w.r.t.\ a subset $P \subseteq {\sf Pred}$ and the separating terms $t_i$ can be effectively computed, i.e.\ for all conjunctions $A$ and $B$ of ground literals, all binary predicates $R \in P$ and all constants $a$ and $b$ such that $a$ occurs in $A$ and $b$ occurs in $B$ (or vice versa), if $A \wedge B \models_{{\mathcal T}_0} a R b$ then there exists a term $t$ containing only constants common to $A$ and $B$ with $A \wedge B \models_{{\mathcal T}_0} a R t \wedge t R b$. (If we can always find a term $t$ containing only constants common to $A$ and $B$ with $A \models_{{\mathcal T}_0} a R t$ and $B \models_{{\mathcal T}_0} t R b$ we say that ${\mathcal T}_0$ is {\em strongly $P$-interpolating}.). \item[Assumption 3:] ${\mathcal T}_0$ has ground interpolation. \end{description} The extension ${\mathcal T}_1 = {\mathcal T}_0 \cup {\mathcal K}$ of ${\mathcal T}_0$ has the following properties: \begin{description} \item[Assumption 4:] ${\mathcal T}_1$ is a local extension of ${\mathcal T}_0$; and \item[Assumption 5:] ${\mathcal K}$ consists of the following type of combinations of clauses: \begin{eqnarray*} \left\{ \begin{array}{l} x_1 \, R_1 \, s_1 \wedge \dots \wedge x_n \, R_n \, s_n \rightarrow f(x_1, \dots, x_n) \, R \, g(y_1, \dots, y_n) \\ x_1 \, R_1 \, y_1 \wedge \dots \wedge x_n \, R_n \, y_n \rightarrow f(x_1, \dots, x_n) \, R \, f(y_1, \dots, y_n) \end{array} \right. \end{eqnarray*} where $n \geq 1$, $x_1, \dots, x_n$ are variables, $R_1, \dots, R_n, R$ are binary relations, $R_1, \dots, R_n \in P$, $R$ is transitive, and each $s_i$ is either a variable among the arguments of $g$, or a term of the form $f_i(z_1, \dots, z_k)$, where $f_i \in \Sigma_1$ and all the arguments of $f_i$ are variables occurring among the arguments of $g$. \end{description} Because of the presence of several function symbols in the axioms in ${\cal K}$ we need to define a more general notion of ``shared function symbols''. \begin{definition}[Shared function symbols] We define a relation $\sim$ between extension functions, where $f \sim g$ if $f$ and $g$ occur in the same clause in ${\mathcal K}$. We henceforth consider that a function $f \in \Sigma_1$ is common to $A$ and $B$ if there exist $g, h \in \Sigma_1$ such that $f \sim g$, $f \sim h$, $g$ occurs in $A$ and $h$ occurs in $B$. \label{shared} \end{definition} \begin{theorem} Assume that the theories ${\cal T}_0$ and ${\cal T}_0 \cup {\cal K}$ satisfy Assumptions 1--5. For every conjunction $A \wedge B$ of ground unit clauses in the signature $\Pi^c$ of ${\mathcal T}_1$ (possibly containing additional constants) with $A \wedge B \models_{{\mathcal T}_1} \perp$ a ground interpolant $I$ for $A \wedge B$ exists. 
In \cite{Sofronie-ijcar-06,Sofronie-lmcs-08} a procedure for hierarchically computing interpolants is given. \noindent If in addition ${\cal T}_0$ is strongly $P$-interpolating and the interpolants for conjunctions of ground literals are again conjunctions of ground literals, the same is true in the extension. \end{theorem} The theory ${\cal T}_0$ of bounded semilattices has the following properties (cf.\ \cite{Sofronie-ijcar-06,Sofronie-lmcs-08}): \begin{itemize} \item it is convex w.r.t. $\approx$ and $\leq$; \item it is strongly $P$-interpolating w.r.t.\ $\leq$ and separating terms can be effectively computed; \item it has ground interpolation (in fact, the equational interpolation property (cf.\ \cite{Sofronie-lmcs-08})). \end{itemize} Thus, Assumptions 1, 2 and 3 above are fulfilled. The class $SLO_{\Sigma}(RI)$ of all semilattices with monotone operators which satisfy a set $RI$ of axioms satisfies also Assumptions 4 and 5 provided that $RI$ contains (flat) axioms of the following types: $$\begin{array}{lrrcl} \forall x ~~~~~~~~~~~~& & f(x) & \leq & g(x) \\ \forall x, y & x \leq g(y) \rightarrow & f(x) & \leq & h(y) \\ \forall x, y & x \leq g(y) \rightarrow & f(x) & \leq & y \\ \end{array}$$ as well as of the more general type: $$\begin{array}{lrrcl} \forall x_1, \dots, x_n & & f(x_1, \dots, x_n) & \leq & g(x_1, \dots, x_n) \\ \forall x_1, \dots, x_n, y^k_1, \dots, y^k_n ~~~~~~& \displaystyle{\bigwedge_k} x_k \leq g_k(y^k_1, \dots y^k_{m_k}) \rightarrow & f(x_1, \dots, x_n) & \leq & g({\overline y}^1, \dots, {\overline y}^n) \\ \forall x_1, \dots, x_n, y^k_1, \dots, y^k_n & \displaystyle{\bigwedge_k} x_k \leq g_k(y) \rightarrow & f(x_1, \dots, x_n) & \leq & y \\ \end{array}$$ \begin{corollary} The class $SLO_{\Sigma}(RI)$ has ground interpolation (in fact the equational interpolation property) and interpolants can be computed in a hierarchical manner. \label{int-slo} \end{corollary} \begin{example}[cf.\ also \cite{Sofronie-lmcs-08}] Let ${\mathcal T}_1 = SL \cup {\sf SGc}(f, g) \cup {\sf Mon}(f, g)$ be the extension of the theory of semilattices with two monotone functions $f, g$ satisfying the semi-Galois condition $${\sf SGc}(f, g)~~~~~~~~~~~~~~~~ \forall x, y ~~~ x \leq g(y) \rightarrow f(x) \leq y.$$ Consider the following ground formulae $A$, $B$ in the signature of ${\mathcal T}_1$: \noindent ~~~~~~~~~~~~~$A:~~ d \leq g(a) ~\wedge~ a \leq c \quad \quad B:~~ b \leq d ~\wedge~ f(b) \not\leq c.$ \noindent where $c$ and $d$ are shared constants. We proved that ${\mathcal T}_1$ is a local extension of the theory of (bounded) semilattices. To prove that $A \wedge B \models_{{\mathcal T}_1} \perp$ we proceed as follows: \noindent {\bf Step 1:} {\em Use locality.} By the locality condition, $A \wedge B$ is unsatisfiable w.r.t.\ $SL \wedge {\sf SGc}(f, g) \wedge {\sf Mon}(f, g)$ iff $SL \wedge {\sf SGc}(f, g)[A \wedge B] \wedge {\sf Mon}(f, g)[A \wedge B] \wedge A \wedge B$ has no weak partial model in which all terms in $A$ and $B$ are defined. The extension terms occurring in $A \wedge B$ are $f(b)$ and $g(a)$, hence: \begin{eqnarray*} {\sf Mon}(f, g)[A \wedge B] & = & \{ a \leq a \rightarrow g(a) \leq g(a),~~ b \leq b \rightarrow f(b) \leq f(b) \} \\ {\sf SGc}(f, g)[A \wedge B] & = & \{ b \leq g(a) \rightarrow f(b) \leq a \} \end{eqnarray*} \noindent {\bf Step 2:} {\em Flattening and purification.} We purify and flatten the formula ${\sf SGc}(f, g) \wedge {\sf Mon}(f, g)$ by replacing the ground terms starting with $f$ and $g$ with new constants. 
The clauses are separated into a part containing definitions for terms starting with extension functions, $D_A \wedge D_B$, and a conjunction of formulae in the base signature, $A_0 \wedge B_0 \wedge {\sf SGc}_0 \wedge {\sf Mon}_0$. \noindent {\bf Step 3:} {\em Reduction to testing satisfiability in ${\mathcal T}_0$.} As the extension $SL \subseteq {\mathcal T}_1$ is local, we have: $$A \wedge B \models_{{\mathcal T}_1} \perp \quad \text{ iff } \quad A_0 \wedge B_0 \wedge {\sf SGc}_0 \wedge {\sf Mon}_0 \wedge {\sf Con}_0 \text{ is unsatisfiable w.r.t.\ } SL,$$ where ${\sf Con}_0 = {\sf Con}[A \wedge B]_0$ consists of the flattened form of those instances of the congruence axioms containing only $f$- and $g$-terms which occur in $D_A$ or $D_B$, and ${\sf SGc}_0 \wedge {\sf Mon}_0$ consists of those instances of axioms in ${\sf SGc}(f, g) \wedge {\sf Mon}(f, g)$ containing only $f$- and $g$-terms which occur in $D_A$ or $D_B$. $$\begin{array}{l|ll} \hline {\sf Extension} & ~~~~~~{\sf Base} \\ D_A \wedge D_B & ~A_0 \wedge B_0 \wedge {\sf SGc}_0 \wedge {\sf Mon}_0 \wedge {\sf Con}_0 & ~~~~~~~~~~~~~~~ \\ \hline a_1 \approx g(a) ~ & ~A_0 = d \leq a_1 \wedge a \leq c & {\sf SGc}_0 = b \leq a_1 \rightarrow b_1 \leq a \\ b_1 \approx f(b) & ~B_0 = b \leq d \wedge b_1 \not\leq c & {\sf Con}_A \wedge {\sf Mon}_A = a \lhd a \rightarrow a_1 \lhd a_1, \lhd \in \{ \approx, \leq \}\\ & & {\sf Con}_B \wedge {\sf Mon}_B = b \lhd b \rightarrow b_1 \lhd b_1,~ \lhd \in \{ \approx, \leq \} \\ \hline \end{array}$$ \noindent It is easy to see that $A_0 \wedge B_0 \wedge {\sf SGc}_0 \wedge {\sf Mon}_0 \wedge {\sf Con}_0 $ is unsatisfiable w.r.t.\ ${\mathcal T}_0$: $A_0 \wedge B_0$ entails $b \leq a_1$; together with ${\sf SGc}_0$ this yields $b_1 \leq a$, which together with $a \leq c$ and $b_1 \not\leq c$ leads to a contradiction. In order to compute an interpolant we proceed as follows: Consider the conjunction $A_0 \wedge D_A \wedge B_0 \wedge D_B \wedge {\sf Con}[D_A \wedge D_B]_0 \wedge {\sf Mon}_0 \wedge {\sf SGc}_0$. The $A$ and $B$-part share the constants $c$ and $d$, and no function symbols. However, as $f$ and $g$ occur together in ${\sf SGc}$, $f \sim g$, so they are considered to be all shared. (Thus, the interpolant is allowed to contain both $f$ and $g$.) We obtain a separation for the clause $b \leq a_1 \rightarrow b_1 \leq a$ of ${\sf SGc}_0$ as follows: \begin{itemize} \item[(i)] We note that $A_0 \wedge B_0 \models b \leq a_1$. \item[(ii)] We can find an $SL$-term $t$ containing only shared constants of $A_0$ and $B_0$ such that $A_0 \wedge B_0 \models b \leq t \wedge t \leq a_1$. (Indeed, such a term is $t = d$.) \item[(iii)] We show that, instead of the axiom $b \leq g(a) \rightarrow f(b) \leq a$, whose flattened form is in ${\sf SGc}_0$, we can use, without loss of unsatisfiability: \begin{quote} \begin{itemize} \item[(1)] an instance of the monotonicity axiom for $f$: $b \leq d \rightarrow f(b) \leq f(d)$, \item[(2)] another instance of ${\sf SGc}$, namely: $d \leq g(a) \rightarrow f(d) \leq a$. 
\end{itemize} \end{quote} For this, we introduce a new constant $c_{f(d)}$ for $f(d)$ (its definition, $c_{f(d)} \approx f(d)$, is stored in a set $D_T$), and the corresponding instances ${\mathcal H}_{\sf sep} = {\mathcal H}^{A}_{\sf sep} \wedge {\mathcal H}^{B}_{\sf sep}$ of the congruence, monotonicity and ${\sf SGc}(f, g)$-axioms, which are now separated into an $A$-part (${\mathcal H}^{A}_{\sf sep}: d \leq a_1 \rightarrow c_{f(d)} \leq a$) and a $B$-part (${\mathcal H}^{B}_{\sf sep}: b \leq d \rightarrow b_1 \leq c_{f(d)}$). We thus obtain a separated conjunction ${\overline A}_0 \wedge {\overline B}_0$ (where ${\overline A}_0 = {\mathcal H}^{A}_{\sf sep} \wedge A_0$ and ${\overline B}_0 = {\mathcal H}^{B}_{\sf sep} \wedge B_0$), which can be proved to be unsatisfiable in ${\mathcal T}_0 = SL$. \item[(iv)] To compute an interpolant in $SL$ for ${\overline A}_0 \wedge {\overline B}_0$ note that ${\overline A}_0$ is logically equivalent to the conjunction of unit literals \/ $d \leq a_1 ~\wedge~ a \leq c ~\wedge~ c_{f(d)} \leq a$ and ${\overline B}_0$ is logically equivalent to \/ $b \leq d ~\wedge~ b_1 {\not\leq} c ~\wedge~ b_1 \leq c_{f(d)}$. An interpolant is $I_0 = c_{f(d)} \leq c$. \item[(v)] By replacing the new constants with the terms they denote we obtain the interpolant $I = f(d) \leq c$ for $A \wedge B$. \end{itemize} \end{example} An immediate consequence of Corollary~\ref{int-slo} is interpolation in ${\cal EL}, {\cal EL}^+$ and their extensions considered in this paper. A variant of the result for the case of ${\cal EL}$ occurs in \cite{wolter-ijcar-08}. \begin{theorem} ${\cal EL}^+$ has the interpolation property, i.e.\ if ${\cal T} \cup RI \models C \sqsubseteq D$ then there exists a finite set ${\cal T}_I$ of general concept inclusions containing only concept names and role names common\footnote{In the case of roles, by ``common'' we mean common or ``shared'' according to Definition~\ref{shared}.} to ${\cal T}$ and $C \sqsubseteq D$ such that ${\cal T} \cup RI \models {\cal T}_I$ and ${\cal T}_I \cup RI \models C \sqsubseteq D$. \noindent The same holds also for the generalization of ${\cal EL}^+$ with $n$-ary roles. \end{theorem} \noindent {\it Proof\/}:\ Assume that ${\cal T} \cup RI \models C \sqsubseteq D$. Then $SLO^{\exists}_{N_R}(RI) \wedge A \wedge B \models \perp$, where $A = \bigwedge_{C_1 \sqsubseteq C_2 \in {\cal T}} {\overline C_1} \leq {\overline C_2}$ and $B = {\overline C} \not\leq {\overline D}$. By Corollary~\ref{int-slo}, there exists a formula $I$ containing only constant names and role names common to $A$ and $B$ such that $SLO^{\exists}_{N_R}(RI) \wedge A \models I$ and $SLO^{\exists}_{N_R}(RI) \wedge I \wedge B \models \perp$. We actually showed that ${\sf SLO}_{\Sigma}(RI)$ has the equational interpolation property, so we can find an interpolant $I$ which is a conjunction of (positive) literals. Then ${\cal T}_I$ is this interpolant.
\hspace*{\fill}$\Box$ \section{$\mathcal{EL}^{++}$ constructors} \label{el++} In the definition of $\mathcal{EL}^{++}$ the following concept constructors are considered: $${\sf ConcDom} ~~~~ p(f_1, \dots, f_n) = \{ x \mid \exists y_1, \dots, y_n: f_i(x) = y_i \text{ and } p(y_1, \dots, y_n) \}.$$ Here, we show how to approach this type of problems, as well as the related concept constructions of the following type\footnote{These constructors are allowed if we allow concept construction also on the concrete domains.} (where $D_1, \dots, D_n$ are concepts terms in the concrete domains): $${\sf ConcDom} ~~~~ p(f_1, \dots, f_n)(D_1, \dots, D_n) = \{ x \mid \exists y_1 \in D_1, \dots, y_n \in D_n: f_i(x) = y_i \text{ and } p(y_1, \dots, y_n) \}$$ within the framework of locality. Note that the following transfer of locality results holds: \begin{theorem} Let ${\cal T}_0$ be a theory and let ${\cal T}_0'$ be another theory, in the same signature $(\Sigma_0, {\sf Pred})$, with the property that every model of ${\cal T}_0'$ is a model of ${\cal T}_0$. Let $\Sigma_1$ be an additional set of function symbols, not contained in the signature of ${\cal T}_0$, and let ${\cal K}$ be a set of clauses over the signature $(\Sigma_0 \cup \Sigma_1, {\sf Pred})$. If the extension ${\cal T}_0 \subseteq {\cal T}_0 \cup {\cal K}$ has the property that every model in ${\sf PMod}_w(\Sigma_1, {\cal T}_0 \cup {\cal K})$ weakly embeds into a total model of ${\cal T}_0' \cup {\cal K}$ then every model in ${\sf PMod}_w(\Sigma_1, {\cal T}_0' \cup {\cal K})$ weakly embeds into a total model of ${\cal T}_0' \cup {\cal K}$. \label{transfer} \end{theorem} \begin{theorem} Assume that the only concept constructors are intersection, existential restriction, and ${\sf ConcDom}$. Let ${\mathcal C} {=} GCI {\cup} CD {\cup} RI$ be a CBox containing a set $GCI$ of general concept inclusions, a set $CD$ of definitions of domains $\{ c_1, \dots, c_k \}$ using rules in ${\sf ConcDom}$: $$ C_k = p_k(f^k_1, \dots, f^k_{n_k})$$ and a set $RI$ of (guarded) role inclusions. Assume that the only concepts names that appear are $N_C {=} \{ C_1, {\dots}, C_n \}$. Then for all concept descriptions $D_1, D_2$ the following are equivalent: \begin{itemize} \item[(1)] $D_1 {\sqsubseteq}_{\mathcal C} D_2$. \item[(2)] $\left( \bigwedge_{C {\sqsubseteq} D \in GCI} \overline{C} {\leq} \overline{D} \right) \wedge \overline{D_1} {\not\leq} \overline{D_2}$ is unsatisfiable w.r.t.\ the class $SetBAO(c_1, \dots, c_n)(RI)$ of all Boolean algebras of sets with monotone operators satisfying $RI_a$ (of the form ${\cal P}({\bf D}) = ({\cal P}(D), \cap, \cup, \neg, \emptyset, D, \{ f_r \}_{r \in N_R}, c_1, \dots, c_k)$). \item[(3)] $\left( \bigwedge_{C {\sqsubseteq} D \in GCI} \overline{C} {\leq} \overline{D} \right) \wedge \overline{D_1} {\not\leq} \overline{D_2}$ is unsatisfiable w.r.t.\ the class $SetSL(c_1, \dots, c_n)(RI)$ of all semilattices of sets with monotone operators (i.e.\ semilattices of the form ${\cal P}({\bf D}) = ({\cal P}(D), \cap, \emptyset, D, \{ f_r \}_{r \in N_R}, c_1, \dots, c_k)$) which satisfy $RI_a$. \end{itemize} \label{set-sem} \end{theorem} \noindent {\it Proof\/}:\ (2) $\Rightarrow$ (1) follows from the definition of $D_1 {\sqsubseteq}_{\mathcal C} D_2$, and (3) $\Rightarrow$ (2) is immediate. To prove that (1) $\Rightarrow$ (3), assume that (1) holds and (3) does not. 
Then there would exist a model ${\cal P}({\bf D}) = ({\cal P}(D), \cap, \emptyset, D, \{ f_r \}_{r \in N_R}, c_1, \dots, c_k) \in SetSL(c_1, \dots, c_n)(RI)$ of $$G = \left( \bigwedge_{C {\sqsubseteq} D \in GCI} \overline{C} {\leq} \overline{D} \right) \wedge \overline{D_1} {\not\leq} \overline{D_2}.$$ Then ${\cal P}({\bf D}) = ({\cal P}(D), \cap, \cup, \emptyset, D, \{ f_r \}_{r \in N_R}, c_1, \dots, c_k) \in SetSL(c_1, \dots, c_n)(RI)$ is a model of $G$. As the set of maximal filters of ${\cal P}({\bf D})$ is in bijective correspondence with $D$, the canonical definition of relations associated with the monotone functions $f_r$ on the Stone dual of ${\cal P}({\bf D})$ induces a model ${\cal I} = (D, \cdot^{\cal I})$ which satisfies $G$, $RI$ and also $CD$. This contradicts (1). \hspace*{\fill}$\Box$ \noindent We now show that $SetSL(c_1, \dots, c_n)(RI)$ is a local extension of $SetSL(c_1, \dots, c_n)$. We use the criterion in Theorem~\ref{transfer}. \begin{lemma} Let ${\bf S} = (S, \wedge, 0, 1, \{ f_r \}_{r \in \Sigma})$ be a bounded semilattice with partial unary functions $f_r$ weakly satisfying the monotonicity axioms and the $RI$ axioms. Then ${\bf S}$ weakly embeds into a total semilattice of sets with monotone operators satisfying the axioms $RI_a$. \label{emb-sets} \end{lemma} \noindent {\it Proof\/}:\ By the proof of Theorem~\ref{local-alg-el+}, ${\bf S}$ weakly embeds into the total semilattice reduct (in $SLO_{\Sigma}$) of the distributive lattice $L = {\cal OI}({\bf S}) \in DLO^{\exists}_{N_R}(RI)$. We can now use the proof of the last part in Lemma~\ref{embeddings-slo-dlo-bao} to show that if ${\cal F}_p$ is the set of prime filters of $L$ then the Boolean algebra of sets $B(L) = ({\cal P}({\cal F}_p), \cap, \cup, \emptyset, {\cal F}_p, \{ \overline{f}_{\exists r} \}_{r \in N_R})$ (defined in Lemma~\ref{embeddings-slo-dlo-bao}) is a Boolean algebra in $BAO_{N_R}^{\exists}(RI)$. \hspace*{\fill}$\Box$ \noindent We therefore can hierarchically reduce the problem of checking if $D_1 {\sqsubseteq}_{\mathcal C} D_2$ as follows: \begin{corollary} Assume that the only concept constructors are intersection, existential restriction, and ${\sf ConcDom}$. Let ${\mathcal C} {=} GCI {\cup} CD {\cup} RI$ be a CBox containing a set $GCI$ of general concept inclusions, a set $CD$ of definitions of domains $\{ c_1, \dots, c_k \}$ using rules in ${\sf ConcDom}$, as: $$ c_k = p_k(f^k_1, \dots, f^k_{n_k})$$ and sets $RI$, $GRI$ of (guarded) role inclusions. Assume that the concepts names that appear are $N_C {=} \{ C_1, \dots, C_n \}$. Then for all concept descriptions $D_1, D_2$ the following are equivalent: \begin{itemize} \item[(1)] $D_1 {\sqsubseteq}_{\mathcal C} D_2$. \item[(2)] $CD \wedge G$ --- where $ G = \left( \bigwedge_{C {\sqsubseteq} D \in GCI} \overline{C} {\leq} \overline{D} \right) \wedge \overline{D_1} {\not\leq} \overline{D_2}$ --- is unsatisfiable w.r.t.\ the class $SetSL(c_1, \dots, c_n)(RI)$ of all semilattices of sets with monotone operators satisfying $RI_a$, of the form ${\cal P}({\bf D}) = ({\cal P}(D), \cap, \emptyset, D, \{ f_r \}_{r \in N_R}, c_1, \dots, c_k)$. \item[(3)] $CD \wedge G_0 \wedge RI[G]_0 \wedge {\sf Con}_0 \wedge {\sf Def}$ is unsatisfiable w.r.t.\ the class $SetSL(c_1, \dots, c_n)(RI)$ of all semilattices of sets with monotone operators satisfying $RI_a$, of the form ${\cal P}({\bf D}) = ({\cal P}(D), \cap, \emptyset, D, \{ f_r \}_{r \in N_R}, c_1, \dots, c_k)$. 
\item[(4)] $CD_0 \wedge G_0 \wedge RI[G]_0 \wedge {\sf Mon}_0$ is unsatisfiable w.r.t.\ the extensions with free function symbols $\{ f_1, \dots, f_n \}$ of the many-sorted disjoint combination $(SetSL, {\sf Dom})$ of the theory $SetSL$ of sets with intersection and the theory ${\sf Dom}$ of the concrete domains. \end{itemize} \label{set-red} \end{corollary} \noindent {\it Proof\/}:\ (1) and (2) are equivalent by Theorem~\ref{set-sem}. It is obvious that (3) implies (2). We show that (2) implies (3). Assume that $CD \wedge G_0 \wedge RI[G]_0 \wedge {\sf Con}_0 \wedge {\sf Def}$ has a (partial) model ${\bf S} = {\cal P}({\bf D}) = ({\cal P}(D), \cap, \emptyset, D, \{ f_r \}_{r \in N_R}, c_1, \dots, c_k)$. By Theorem~\ref{emb-sets}, ${\bf S}$ weakly embeds into a semilattice with operators ${\bf S'} = {\cal P}({\bf D'}) = ({\cal P}(D'), \cap, \emptyset, D, \{ f_r \}_{r \in N_R})$ which satisfies $RI \cup GRI$ (the interpretation of the constants is translated too). Then ${\bf S'}$ is also a model of $G_0, CD_0$ and ${\sf Def}$, hence of $G \wedge CD$. Contradiction. The equivalence of (3) and (4) follows as a special case of Theorem~\ref{lemma-rel-transl}. \hspace*{\fill}$\Box$ \section{Conclusions} In this paper we have shown that subsumption problems in $\mathcal{EL}$ can be expressed as uniform word problems in classes of semilattices with monotone operators, and that subsumption problems in $\mathcal{EL}^+$ can be expressed as uniform word problems in classes of semilattices with monotone operators satisfying certain composition laws. This allowed us to obtain, in a uniform way, PTIME decision procedures for $\mathcal{EL}$, $\mathcal{EL}^+$, and extensions thereof. The use of the notion of local theory extensions allowed us to present a new family of PTIME (many-sorted) logics which extend $\mathcal{EL}$ with $n$-ary roles, (guarded) role inclusions, existential role restrictions and/or with numerical domains. These extensions are different from other types of extensions studied in the description logic literature such as extensions with $n$-ary existential quantifiers (cf.\ e.g.\ \cite{Baader2005-ki}) or with concrete domains \cite{Baader-ijcai-2005}, but are, in our opinion, very natural and very likely to occur in ontologies. Moreover, we showed that the results in this paper can also be used for the extension ${\cal EL}^{++}$ introduced in \cite{Baader-ijcai-2005} (it seems that the results on ${\cal EL}^{++}$ can be extended to tackle also ABoxes). In the future we would like to also analyze generalizations of existential concept restrictions in ${\cal EL}$ to existential relation restrictions of the form $\exists r.r1$ interpreted as $$ \{ x \mid \exists x_1, x_2: r(x, x_1, x_2) \wedge r_1(x_1, x_2) \},$$ implications of the form: $$ r_1(x, {\overline y}) \wedge r_2(x, {\overline y}) \rightarrow r_3(x, {\overline y})$$ and guarded role inclusions of the form: $$ r(x_1, x_2) \wedge r_1(x, x_1, x_2) \rightarrow r_2(x, x_1, x_2).$$ We also showed that the results in \cite{Sofronie-ijcar-06} can be used to prove that the class of semilattices with monotone operations satisfying the types of axioms considered here allows ground (equational) interpolation. We used this for proving interpolation properties in extensions of $\mathcal{EL}^+$. We would like to further explore the area of applications of such results for efficient (modular) reasoning in combinations of ontologies based on extensions of $\mathcal{EL}$ and $\mathcal{EL}^+$. 
\noindent {\bf Acknowledgments.} We thank St{\'e}phane Demri and Michael Zakharyaschev for asking the right questions and Carsten Ihlemann for his comments on a previous version of the paper. \end{document}
\begin{document} \date{} \title{\Large\textbf{Hausdorff measure of sets of distributional chaotic pairs for shift maps}} \maketitle \begin{abstract} Let $\sigma_{K}: \Sigma_{K}\rightarrow \Sigma_{K}$ be a shift map. For an interval $[p,q]\subset[0,1]$, let $D_{\sigma_{K}}([p,q])$ denote the set of pairs for which the density spectrum of the $\epsilon$-approach time set equals $[p,q]$ when $\epsilon$ is small, and let $E_{\sigma_{K}}([p,q])$ denote the set of pairs for which the density spectrum of the $\epsilon$-approach time set converges to $[p,q]$ when $\epsilon\rightarrow 0^+$. Then $\dim_{H} D_{\sigma_{K}}([p,q])=\dim_{H} E_{\sigma_{K}}([p,q])=2-q$. Moreover, $\mathscr{H}^{2-q}(E_{\sigma_{K}}([p,q]))=1$ when $q=0$ and $\mathscr{H}^{2-q}(E_{\sigma_{K}}([p,q]))=+\infty$ when $q>0$. Meanwhile, $\mathscr{H}^{2-q}(D_{\sigma_{K}}([p,q]))=+\infty$ when $q=1$ and $\mathscr{H}^{2-q}(D_{\sigma_{K}}([p,q]))=0$ when $q<1$. \end{abstract} \noindent {\bf Keywords.} Distributional density spectrum, Hausdorff measure, distributional chaotic pair, shift map.\\ {\bf MSC2010:} 37B05/10/20, 37C45, 28A75/78/80 \section{Introduction} The notion of chaos, used to describe the processes in which trajectories of a dynamical system repeatedly approach each other and then disperse, was first used in \cite{LY}. Suppose $(X,\rho,f)$ is a {\em topological dynamical system} (TDS for short), namely, $(X,\rho)$ is a compact metric space and $f$ a continuous surjective self-map on $X$. Then $(x,y)\in X\times X$ is said to be a {\em Li--Yorke pair} (\cite{BGKM}) if \[ \liminf_{i\rightarrow \infty}\rho(f^i(x),f^i(y))=0\ \text{and}\ \limsup_{i\rightarrow \infty}\rho(f^i(x),f^i(y))>0. \] A set $C\subset X$ is called a (Li--Yorke) {\em scrambled set} if each pair of distinct points in $C$ forms a Li--Yorke pair. In general, $f$ is said to be {\em Li--Yorke chaotic} if it has an uncountable scrambled set. It is proved in \cite{LY} that an interval map that has a periodic point of period $3$ is Li--Yorke chaotic. The existence of asymptotic pairs and of Li--Yorke scrambled sets contained in the stable sets is studied in \cite{Huang2015}. Based on Li--Yorke chaos, different types of chaos, such as Devaney chaos (\cite{Dev}), generic chaos (\cite{Pio}), $\omega$-chaos (\cite{Li}), and strong chaos (\cite{Xio1}), have been studied. Distributional chaos, which was first introduced in \cite{SS} and was generalized in \cite{BSS}, \cite{PS} and \cite{PS1}, has been a focus of the study of chaos for more than ten years. By describing the densities of trajectory approach time sets, distributional chaos reveals, in a more quantitative way, the complexity hidden in Li--Yorke chaos. We will now briefly review the definitions of the three types of distributional chaos. Let $(X,\rho,f)$ be a TDS. For ${x},{y}\in X$, define the lower distributional function $F_{{x},{y}}$ and the upper distributional function $F_{{x},{y}}^*$ from $(0,+\infty)$ to $[0,1]$ by \begin{equation} \label{deqiis} \begin{aligned} &F_{x, y}(\epsilon) =\liminf _{n \rightarrow \infty} \frac{1}{n} \#\left(\left\{0 \leq i<n : \rho\left(f^{i}(x), f^{i}(y)\right)<\epsilon\right\}\right), \\ &F_{x, y}^{*}(\epsilon) =\limsup _{n \rightarrow \infty} \frac{1}{n} \#\left(\left\{0 \leq i<n : \rho\left(f^{i}(x), f^{i}(y)\right)<\epsilon\right\}\right), \end{aligned} \end{equation} where $\#(\cdot)$ denotes the cardinality of a set.
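Although nothing below depends on it, the distributional functions are easy to approximate numerically. The following Python sketch (ours, and purely illustrative: the finite horizon, the tail cut-off and the example map are assumptions of the sketch, since the actual definition involves a $\liminf$/$\limsup$ as $n\rightarrow\infty$) estimates $F_{x,y}(\epsilon)$ and $F^*_{x,y}(\epsilon)$ by the minimum and maximum of the finite-time frequencies $\frac{1}{n}\#(\{0\leq i<n:\rho(f^i(x),f^i(y))<\epsilon\})$ over a tail of values of $n$.
\begin{verbatim}
# Estimate the lower/upper distributional functions F_{x,y}(eps) and
# F*_{x,y}(eps) along a finite orbit segment.  The liminf/limsup over all n
# is replaced by min/max over n in [n_tail, n_max] -- an approximation.

def distributional_functions(f, rho, x, y, eps, n_max=10000, n_tail=1000):
    """Return estimates (F_lower, F_upper) from the first n_max iterates."""
    hits = 0        # number of indices i < n with rho(f^i(x), f^i(y)) < eps
    freqs = []      # running frequencies hits / n, for n = 1, ..., n_max
    for n in range(1, n_max + 1):
        if rho(x, y) < eps:
            hits += 1
        freqs.append(hits / n)
        x, y = f(x), f(y)          # advance both orbits by one step
    tail = freqs[n_tail - 1:]      # discard an initial transient
    return min(tail), max(tail)

if __name__ == "__main__":
    # Example system (an assumption of this sketch, not taken from the paper):
    # the logistic map t -> 4t(1-t) on [0,1] with the usual metric |s-t|.
    def logistic(t):
        return 4.0 * t * (1.0 - t)

    def metric(s, t):
        return abs(s - t)

    F, F_star = distributional_functions(logistic, metric,
                                         0.123456, 0.123457, eps=0.1)
    print(F, F_star)
\end{verbatim}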
A couple $({x},{y})\in X\times X$ is called a DC1 { pair} if \[ F_{x, y}^{*}(\epsilon) \equiv 1 \text { on }(0,+\infty) \text { and } F_{x, y}(\epsilon) \equiv 0 \text { on some }\left(0, \epsilon_{0}\right], \] a DC2 { pair} if \[ F_{x,y}^*(\epsilon)\equiv 1 \text { on }(0,+\infty) \text { and } F_{{x},{y}}(\epsilon)<1 \text { on some }\left(0,\epsilon_{0}\right], \] and a DC3 { pair} if \[ F_{{x},{y}}(\epsilon)<F_{{x},{y}}^*(\epsilon) \text{ on some } \left(\epsilon_{0},\epsilon_{1}\right ]. \] A set $C\subset X$ is said to be a $\rm {DC}i$ ($i=1,2$ or 3) { scrambled set} if each pair of different points in $C$ forms a $\rm {DC}i$ pair. In general, $f$ is said to be $\rm {DC}i$ chaotic if it has an uncountable $\rm {DC}i$ scrambled set. A pair $(x,y)\in X\times X$ is said to be a { mean Li--Yorke pair} if $$ \liminf_{n\rightarrow\infty}\dfrac{1}{n}\sum\limits_{i=0}^{n-1}\rho(f^i(x),f^i(y))=0\text{ and } \limsup_{n\rightarrow\infty}\dfrac{1}{n}\sum\limits_{i=0}^{n-1}\rho(f^i(x),f^i(y))>0. $$ A set $C\subset X$ is called a mean Li--Yorke chaotic set if each pair of different points in $C$ forms a mean Li--Yorke pair. In general, $f$ is said to be { mean Li--Yorke chaotic} if it has an uncountable mean Li--Yorke chaotic set. It is proved in \cite{Huang2014} that the intersections of the sets of asymptotic tuples and mean Li--Yorke tuples with the set of topological entropy tuples are dense in the set of topological entropy tuples. It is observed in \cite{Dow} that DC2 chaos is equivalent to mean Li--Yorke chaos (see \cite{Dow} for details). Cardinality (uncountable or not) is a simple description of the size of a scrambled set. In fact, measures are widely used to characterize the sizes of scrambled sets. The Lebesgue measures of scrambled sets are investigated in \cite{Smi1}, \cite{Smi2} and \cite{Mis}. The Bowen entropy dimensions of scrambled sets are studied in \cite{Hua} and \cite{FHYZ}. The Hausdorff dimensions of strong scrambled sets and DC1 scrambled sets are discussed in \cite{Xio1} and \cite{OS}, respectively. In the more recent paper \cite{BL}, the Lebesgue measure of Li--Yorke pairs for interval maps is thoroughly discussed. In the present paper we study the Hausdorff measure of the set of distributional chaos pairs for shift maps. A pair $(x,y)$ from a TDS is a DC1 pair if and only if the approach time sets of $x,y$ have upper density 1 and lower density 0, and is a DC2 pair if and only if the approach time sets have upper density 1 and lower density $<1$. So the set of DC1 pairs and the set of DC2 pairs are saturated sets with diverging Birkhoff averages of approach time sets. They are fractals generated by the distributional functions. This viewpoint motivates the application of multifractal analysis to the study of chaos, so as to investigate distributional chaos in a more refined way than in terms of DC1 and DC2. To give a more detailed description of our results, we introduce several definitions and notations. Let $\mathcal{C}([0,1])$ be the set of nonempty compact sub-intervals of ${[0,1]}$. Let $(X,\rho,f)$ be a TDS. For $[p,q]\in\mathcal{C}([0,1])$, define $$ \begin{aligned} &E_f({[p,q]})=\{(x,y)\in X\times X:\lim_{\epsilon\rightarrow 0^{+}}F_{{x},{y}}^*(\epsilon)=q,\ \lim_{\epsilon\rightarrow 0^{+}}F_{{x},{y}}(\epsilon)=p\}, \\ &D_f({[p,q]})=\{(x,y)\in X\times X:F_{{x},{y}}^*(\epsilon)=q,\ F_{{x},{y}}(\epsilon)=p\ \text{on some}\ (0,\epsilon_{0}]\}. 
\end{aligned} $$ For $\mathcal{J}\subset\mathcal{C}([0,1])$, write $$ E_{f}(\mathcal{J})=\bigcup_{I\in \mathcal{J}}E_{f}(I),\ D_{f} (\mathcal{J})=\bigcup_{I\in \mathcal{J} }D_{f}(I). $$ The sets $E_f([p,q])$, $[p,q]\in\mathcal{C}({[0,1]})$, form a spectral decomposition of the product space $X\times X$, while $E_f({[p,q]})$ and $D_f({[p,q]})$ are generalizations of the relations DC1 and DC2. In fact, for the map $f$, the relation DC1 equals $D_f({[0,1]})$ and the relation DC2 equals $ E_f({\{[p,1]:0\leq p<1\}})$. For $[p,q]\in\mathcal{C}([0,1])$, we calculate the Hausdorff measures of the sets $E_{\sigma_K}({[p,q]})$ and $D_{\sigma_K}({[p,q]})$. They are as follows. \begin{theorem} \label{t:main} Let $[p,q]\in \mathcal{C}({[0,1]})$. Then $$ \mathscr{H}^{2-q}(E_{\sigma_K}([p,q]))=\left\{ \begin{aligned} &1,&q&=0,\\ &+\infty,&0&<q\leq 1 \end{aligned} \right. $$ and $$ \mathscr{H}^{2-q}(D_{\sigma_K}([p,q]))=\left\{ \begin{aligned} &0,&0&\leq q<1,\\ &+\infty,&q&=1. \end{aligned} \right. $$ \end{theorem} As an application, we get the following corollaries on the sizes of Li--Yorke and mean Li--Yorke chaos in symbolic space. \begin{corollary} Let ${\rm LY}(\sigma_K)$ be the set of all Li--Yorke pairs of the symbolic space $(\Sigma_K,\sigma_K).$ Then $\dim_{H}{\rm LY}(\sigma_K)=2$ and $\mathscr{H}^2({\rm LY}(\sigma_K))=1$. \end{corollary} \begin{corollary} Let ${\rm MLY}(\sigma_K)$ be the set of all mean Li--Yorke pairs of the symbolic space $(\Sigma_K,\sigma_K).$ Then $\dim_{H}{\rm MLY}(\sigma_K)=1$ and $\mathscr{H}^1({\rm MLY}(\sigma_K))=+\infty$. \end{corollary} The main body of this paper is organized as follows. In Section \ref{desooi}, some necessary definitions and notations are specified. In Section \ref{desoow}, the distributional functions $\mathcal{F}_{f}$ and $\mathcal{E}_{f}$ are defined. The distributional chaotic relations $E_{f}({[p,q]})$, $D_{f}({[p,q]})$, $E_{f} ({\mathcal{J}})$ and $D_{f}({\mathcal{J}})$ are introduced, and some invariance properties of these relations are discussed. Section \ref{desoou} is a review of the basic properties of Hausdorff measure on symbolic spaces, where some useful lemmas are proved. In Section \ref{desoos}, we give a useful variational inequality for calculating the Hausdorff dimensions and Hausdorff measures of the sets $E_{\sigma_K}([p,q])$ and $D_{\sigma_K}([p,q])$. In Section \ref{desooz}, we study the Hausdorff dimensions of $E_{\sigma_K}({\mathcal{J}})$ and $D_{\sigma_K}(\mathcal{J})$. It is proved that for $\emptyset\neq\mathcal{J}\subset\mathcal{C}({[0,1]})$, \begin{equation} \label{deqivi} \dim_H E_{\sigma_K}({\mathcal{J}})=\dim_H D_{\sigma_K}({\mathcal{J}})=2-\inf\{\sup I:{I\in\mathcal{J}}\}. \end{equation} In particular, for $[p,q]\in\mathcal{C}({[0,1]})$, \begin{equation} \label{q1203141204} \dim_H E_{\sigma_K}({[p,q]})=\dim_H D_{\sigma_K}({[p,q]})=2-q. \end{equation} From (\ref{deqivi}) we have that the Hausdorff dimension of the set of DC1 (or DC2) pairs for $\sigma_K$ is 1. Corollary 1.2 is also proved there. Section \ref{desooe} is the proof of Theorem \ref{t:main}. \section{Some definitions and notations}\label{desooi} For a number $a$ and sets of numbers $B,C$, we make use of the following notation: $$ \begin{aligned} &a+B=B+a=\{a+b:b\in B\},\ aB=Ba=\{ab:b\in B\},\\ &B+C=\{b+c:b\in B,\ c\in C\},\ BC=\{bc:b\in B,\ c\in C\}. \end{aligned} $$ When $X$ is a set, $\mathcal{P}(X)$ denotes the power set of $X$. For $A\subset X$, $A^c$ denotes the complement of $A$, $X\setminus A$. We use $\Delta=\Delta(X)$ to denote the diagonal $\{(x,x):x\in X\}$ in $X\times X$.
In this paper we use $\rho$ to denote any metric. Suppose that $X$ is a metric space. For $\epsilon>0$ we use $\Delta_\epsilon$ to denote the set $\{(x,y)\in X\times X:\rho(x,y)<\epsilon\}$. For $x\in X$ and nonempty sets $A,B\subset X$, define $$ \begin{aligned} &B_\epsilon(x)=\{y\in X:\rho(x,y)<\epsilon\},\ B_\epsilon(A)=\bigcup_{y\in A}B_\epsilon(y),\\ &\rho(A,B)=\inf\{\rho(a,b):a\in A,\ b\in B\},\ \rho(x,A)=\rho(A,x)=\rho(\{x\},A). \end{aligned} $$ We use $|\cdot|$ to denote the diameter of a set. Let $Y=\prod_{0\leq i<\alpha} X_i$, where $\alpha$ is an ordinal number $\leq\omega_{0}$. If $x\in Y$, we use $x_j$ to denote the $(j+1)$th coordinate of $x$. If $(y_i)$ is a sequence in $Y$, we use $y_{i,j}$ to denote the $(j+1)$th coordinate of $y_i$, i.e., $y_{i,j}=(y_i)_j$. For a product $Y=\prod_{0\leq i<n}X_i$ of finitely many metric spaces, unless otherwise specified, we endow $Y$ with the sup metric $$ \rho(x,y)=\sup_{0\leq i<n}\rho(x_i,y_i). $$ Suppose $X$ is a nonempty separable metric space. We use $\mathcal{C}(X)$ to denote the set of nonempty compact connected subsets of $X$. For a sequence $\alpha=(x_n)_{n\geq 0}$ of points in $X$, we use $\omega(\alpha)$ to denote the set of limit points of $\alpha$, i.e., $$ \omega(\alpha)=\{x\in X: \text{for each neighborhood}\ U\ \text{of}\ x,\ x_{n}\in U \text{ for infinitely many}\ n\}. $$ \begin{lemma}\label{detvxv} Let $X$ be a nonempty compact metric space. Suppose $(x_i)_{i\geq 0}$ is a sequence of points in $X$ with $\lim_{i\rightarrow \infty}\rho(x_i,x_{i+1})=0$. Then $\omega(x_i:i\geq0)\in\mathcal{C}(X)$. \end{lemma} We omit the proof of Lemma \ref{detvxv}, since it is straightforward. Let $\alpha=({n_i})_{i\geq 0}$ be a sequence of positive integers with infinitely many $n_i\geq 2$. Write $$\Sigma_\alpha=\prod_{i\geq 0}\{0,\cdots,n_{i}-1\}=\{(x_i)_{i\geq 0}:x_i\in\{0,\cdots,n_{i}-1\},\ i\geq 0\}.$$ For $x,y\in\Sigma_\alpha$, write $$ \delta(x,y)=\inf \{i\geq0:x_i\neq y_i\}, $$ where $\delta(x,x)=+\infty$. Endow $\Sigma_\alpha$ with the metric $$ \rho(x,y)=\prod_{0\leq i<\delta(x,y)}n_i^{-1}. $$ Write $$ W_{\alpha,i}=\prod_{0\leq j<i}\{0,\cdots,n_{j}-1\},\ W_\alpha=\bigcup_{i\geq 0}W_{\alpha,i}. $$ If $\omega\in W_{\alpha,i}$, then $\omega$ is called a {\bf word} with {\bf length} $|\omega|=i$. Write $$ [\omega]=\left\{x\in\Sigma_\alpha:x_{0}\cdots x_{i-1}=\omega\right\}, $$ where $x_{0}\cdots x_{i-1}$ denotes the concatenation of the letters $x_{0},\cdots,x_{i-1}$. In addition, $[\omega]$ is said to be a {\bf cylinder} in $\Sigma_\alpha$ of {\bf length} $i$. For $W\subset W_\alpha$, write $$ [W]=\bigcup\left\{[\omega]:\omega\in W\right\}. $$ Suppose each $n_i=k$. Then we write $\Sigma_k$, $W_{k,i}$, $W_k$ for $\Sigma_\alpha$, $W_{\alpha,i}$, $W_\alpha$ respectively. The properties stated in the lemma below are straightforward. \newcommand{\detozi}{Lemma \ref{detozi}} \begin{lemma}\label{detozi} Let $\alpha=({n_i})_{i\geq 0}$ be a sequence of positive integers with infinitely many $n_i\geq2$. Then $\Sigma_\alpha=\prod_{i\geq 0}\{0,\cdots,n_{i}-1\}$ is a Cantor space. For each word $\omega\in W_\alpha$, the cylinder $[\omega]$ is closed and open with diameter $|[\omega]|=\prod_{0\leq i<|\omega|} n_i^{-1}$. The set $\{[\omega]:\omega\in W_\alpha\}\cup\{\emptyset\}$ is a base of the topology of $\Sigma_\alpha$. \end{lemma} Throughout this paper, $K\geq2$ denotes a fixed natural number.
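As a purely illustrative aside (ours, not part of the original text), the quantities just introduced are easy to compute on finite prefixes. The following Python sketch works with finite truncations of points of $\Sigma_\alpha$ (this truncation is an assumption of the sketch) and computes $\delta(x,y)$, the metric $\rho(x,y)=\prod_{0\leq i<\delta(x,y)}n_i^{-1}$ and the cylinder diameter $|[\omega]|=\prod_{0\leq i<|\omega|}n_i^{-1}$ from Lemma \ref{detozi}.
\begin{verbatim}
# Points of Sigma_alpha are represented by finite prefixes (lists of digits);
# n is the corresponding list of the first few n_i.
import math

def delta(x, y):
    """First index at which the prefixes x and y differ (math.inf if none)."""
    for i, (a, b) in enumerate(zip(x, y)):
        if a != b:
            return i
    return math.inf

def rho(x, y, n):
    """rho(x, y) = prod_{0 <= i < delta(x, y)} 1/n_i on Sigma_alpha."""
    d = delta(x, y)
    if d == math.inf:      # equal prefixes: distance 0 (up to truncation)
        return 0.0
    return math.prod(1.0 / n[i] for i in range(d))

def cylinder_diameter(word, n):
    """Diameter of the cylinder [word], cf. Lemma detozi."""
    return math.prod(1.0 / n[i] for i in range(len(word)))

if __name__ == "__main__":
    n = [2, 3, 2, 5, 2, 2]                  # the first few n_i (an assumption)
    x = [1, 2, 0, 4, 1, 0]
    y = [1, 2, 0, 3, 0, 1]                  # first differs from x at index 3
    print(delta(x, y), rho(x, y, n))        # 3  and  (1/2)(1/3)(1/2) = 1/12
    print(cylinder_diameter([1, 2, 0], n))  # also 1/12
\end{verbatim}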
By Lemma \ref{detozi}, $\Sigma_K$ is a Cantor space, each cylinder $[\omega]$ is a closed and open subset of $\Sigma_K$ with diameter $K^{-|\omega|}$, and the set $\{[\omega]:\omega\in W_K\}\cup\{\emptyset\}$ is a topological base for $\Sigma_K$. Define the {\bf shift map} $\sigma_K$ on $\Sigma_K$ as the map $(\sigma_{K}(x))_i=x_{i+1}$, $i\geq 0$. It is a continuous $K$-to-$1$ map. Suppose $\emptyset\neq A_i\subset W_{K,n_{i}}$, $i\geq 0$. We write $$ \prod_{i\geq 0} A_i=\left\{\omega_{0}\omega_{1}\cdots \in\Sigma_K: \omega_j\in A_j,\ j\geq 0\right\}, $$ where $\omega_{0}\omega_{1}\cdots$ is the concatenation of the words $\omega_{0},\omega_{1},\cdots$. Let $N\subset\mathbb{N}$. For $n\geq1$, write $\zeta_n(N)=\#(N\cap\{0,\cdots,n-1\})$ and $\mu_n(N)=\frac{\zeta_{n}(N)}{n}$. Put $\mu(N)=\omega(\mu_n(N):n\geq1)$ and call it the $\mathbf{density\ spectrum}$ of $N$. By Lemma \ref{detvxv}, $\mu(N)$ is a nonempty subinterval of $[0,1]$, i.e., $\mu(N)\in\mathcal{C} ({[0,1]})$. We call $\mu_*(N):=\inf \mu(N)$ the $\mathbf{lower \ density}$ of $N$ and $\mu^*(N):=\sup \mu(N)$ the $\mathbf{upper\ density}$ of $N$. When $\mu(N)=[p,p]$, we also write $\mu(N)=p$. Define a partial order $\preceq$ on $\mathcal{C}({[0,1]})$ by $$I\preceq J\Leftrightarrow\inf I\leq\inf J\ \text{and}\ \sup I\leq\sup J.$$ The properties stated in the lemma below are straightforward. \begin{lemma}\label{dstoxm} Let $N$, $M\subset\mathbb{N}$. Write $N=\{n_i:0\leq i<\#( N)\}$ with $n_i<n_{i+1}$. \begin{enumerate} \rnc{\labelenumi}{(\alph{enumi})} \item If $M\subset N$, then $\mu(M)\preceq\mu(N)$. \item If $\liminf_{i\rightarrow\infty}(n_{i+1}-n_i)\geq k\geq1$, then $$\mu(N+\{0,\cdots,k-1\})=\mu((N-\{0,\cdots,k-1\})\cap\mathbb{N})=k\mu(N).$$ So, if $\lim_{i\rightarrow\infty}(n_{i+1}-n_i)=+\infty$, then $\mu (N)=0$. \item $\mu(kN+\{0,\cdots,k-1\})=\mu((kN-\{0,\cdots,k-1\})\cap\mathbb{N})=\mu (N)$ for $k\geq 1$. $ \Box$ \end{enumerate} \end{lemma} \section{Distributional functions and distributional chaotic relations}\label{desoow} Let $(X,\rho,f)$ be a TDS. For $x\in X$ and $A\subset X$, define the {\bf recurrence time set} $N_f(x,A)$ by $$N_f(x,A)=\{i\geq 0:f^i(x)\in A\}.$$ Define $\mathcal{F}_{f} :(X\times X)\times(0,+\infty)\rightarrow\mathcal{C}({[0,1]})$ by \begin{equation} \label{deqozw} \mathcal{F}_{f} ((x,y),\epsilon)=\mu({N_{f\times f}((x,y),\Delta_{\epsilon})}). \end{equation} For $(x,y)\in X\times X$, \begin{equation} \label{deqozz} \begin{aligned} 0<\epsilon_{0}<\epsilon_1&\Rightarrow N_{f\times f}((x,y),\Delta_{\epsilon_{0}})\subset N_{f\times f}((x,y),\Delta_{\epsilon_{1}})\\ &\Rightarrow \mathcal{F}_{f}((x,y),\epsilon_0)\preceq \mathcal{F}_{f}((x,y),\epsilon_1). \end{aligned} \end{equation} By (\ref{deqozz}), we define $ \mathcal{E}_f:X\times X\rightarrow\mathcal{C}({[0,1]})$ by \begin{equation} \label{deqozx} \mathcal{E}_f(x,y)=[\lim_{\epsilon\rightarrow 0^+}\inf\mathcal{F}_{f}((x,y),\epsilon),\lim_{\epsilon\rightarrow 0^+}\sup\mathcal{F}_{f}((x,y),\epsilon)]. \end{equation} For $[p,q]\in\mathcal{C}({[0,1]})$, write $$ E_{f}({[p,q]})=\{(x,y)\in X\times X:\mathcal{E}_{f}({x,y})=[p,q]\} $$ and define $$ D_{f}({[p,q]})=\{(x,y)\in X\times X:\mathcal{F}_{f}((x,y),\epsilon)\equiv [p,q]\ \text{on some interval}\ (0,\epsilon_0]\}. $$ Note that $D_{f}({[p,q]})\subset E_{f}({[p,q]})$. For $\mathcal{J}\subset\mathcal{C}({[0,1]})$, put $$ E_f(\mathcal{J})=\bigcup_{I\in\mathcal{J}}E_{f}(I),\ D_{f}(\mathcal{J})=\bigcup_{I\in\mathcal{J}}D_{f}(I).
$$ \begin{remark} \label{dstios} {\rm The distributional chaotic relation with respect to DC1 is $D_f({[0,1]})$ and the distributional chaotic relation with respect to DC2 is $E_f({\{[p,1]:0\leq p<1\}})$.} \end{remark} \begin{lemma}\label{detoss} Let $(X,f)$ be a TDS and $n\geq1$. Then $$ \mathcal{E}_{f^n}=\mathcal{E}_f $$ and, for ${[p,q]}\in\mathcal{C}({[0,1]})$, $$ E_{f^n}({[p,q]})=E_{f}({[p,q]})\ \text{and}\ D_{f^n}({[p,q]})=D_{f}({[p,q]}). $$ \end{lemma} \begin{proof} By the uniform continuity of $f$, we may choose positive numbers $\epsilon_i\rightarrow0^+$ such that, for each $(x,y)\in X\times X$ and $i\geq0$, \begin{equation} \label{deqoos} \rho({x,y})<\epsilon_{i+1}\Rightarrow \rho({f^j(x),f^j(y)})<\epsilon_i\ \text{for}\ 0\leq j<n. \end{equation} Let $(x,y)\in X\times X$. We are to verify \begin{equation} \label{deqoox} nN_{f^n\times f^n}(({x,y}),\Delta_{\epsilon_{i+1}})+\{0,\cdots,n-1\}\subset N_{f\times f}(({x,y}),\Delta_{\epsilon_i}) \end{equation} and \begin{equation} \label{deqoou} (nN_{f^n\times f^n}((x,y),\Delta_{\epsilon_i}^c)-\{0,\cdots,n-1\})\cap\mathbb{N}\subset N_{f\times f}((x,y),\Delta_{\epsilon_{i+1}}^c). \end{equation} Suppose $i\in N_{f^n\times f^n}((x,y),\Delta_{\epsilon_{i+1}})$ and $0\leq j<n$. Then $\rho(f^{in}(x),f^{in}(y))<\epsilon_{i+1}$. By (\ref{deqoos}), $\rho(f^{in+j}(x),f^{in+j}(y) )<\epsilon_{i}$, which means $in+j\in N_{f\times f}((x,y),\Delta_{\epsilon_i})$. So (\ref{deqoox}) holds. Suppose $i\geq0$ satisfies $i\in N_{f^n\times f^n}((x,y),\Delta_{\epsilon_i}^c)$, $0\leq j<n$ and $in-j\geq0$. Now $\rho( {f^{in}(x),f^{in}(y)})\geq\epsilon_{i}$. By (\ref{deqoos}), $\rho(f^{in-j}(x),f^{in-j}(y))\geq\epsilon_{i+1}$, which means $in-j\in N_{f\times f}((x,y),\Delta_{\epsilon_{i+1}}^c)$. So (\ref{deqoou}) holds. Eq\. (\ref{deqoou}) leads to \begin{equation} \label{deqooe} \begin{aligned} &N_{f\times f}((x,y),\Delta_{\epsilon_{i+1}})\\ =&(N_{f\times f}((x,y),\Delta_{\epsilon_{i+1}}^c))^c\\ \subset&((nN_{f^n\times f^n}((x,y),\Delta_{\epsilon_i}^c)-\{0,\cdots,n-1\}\cap\mathbb{N})^c\\ =&\mathbb{N}\setminus(nN_{f^n\times f^n}((x,y),\Delta_{\epsilon_i}^c)-\{0,\cdots,n-1\})\\ =&(n(N_{f^n\times f^n}((x,y),\Delta_{\epsilon_i}^c))^c-\{0,\cdots,n-1\})\cap\mathbb{N}\\ =&(nN_{f^n\times f^n}((x,y),\Delta_{\epsilon_i})-\{0,\cdots,n-1\})\cap\mathbb{N}. \end{aligned} \end{equation} Eqs (\ref{deqoox}) and (\ref{deqooe}) lead to \begin{equation} \label{qivioveviuo} \begin{aligned} &nN_{f^n\times f^n}( ({x,y}),\Delta_{\epsilon_{i+2}})+\{0,\cdots,n-1\}\\ \subset&N_{f\times f}((x,y),\Delta_{\epsilon_{i+1}})\\ \subset&(nN_{f^n\times f^n}((x,y),\Delta_{\epsilon_i})-\{0,\cdots,n-1\})\cap\mathbb{N}. \end{aligned} \end{equation} Applying (c) of Lemma \ref{dstoxm} to (\ref{qivioveviuo}), we get \begin{equation} \label{deqooz} \mathcal{F}_{f^n}((x,y),\epsilon_{i+2})\preceq \mathcal{F}_{f}((x,y),\epsilon_{i+1})\preceq \mathcal{F}_{f^n}((x,y),\epsilon_{i}). \end{equation} Letting $i\rightarrow\infty$ in (\ref{deqooz}) we obtain \begin{equation} \label{deqons} \mathcal{E}_{f^n}(x,y)=\mathcal{E}_{f}(x,y). \end{equation} So $\mathcal{E}_{f^n}=\mathcal{E}_f$ and, for $[p,q]\in\mathcal{C}({[0,1]})$, $$ E_{f^n}({[p,q]}) =\mathcal{E}_{f^n}^{-1}([p,q])=\mathcal{E}_{f}^{-1}([p,q])=E_{f}({[p,q]}). $$ For $(x,y)\in X\times X$ and $\epsilon>0$, put \begin{equation} \label{deqwov} \begin{aligned} &\mathcal{G}_{f,\epsilon}(x,y)\\ =&[\inf \mathcal{F}_f((x,y),\epsilon)-\inf \mathcal{E}_f(x,y),\sup \mathcal{F}_f((x,y),\epsilon)-\sup \mathcal{E}_f(x,y)]\\ \in&\mathcal{C} ({[0,1]}). 
\end{aligned} \end{equation} Then, by (\ref{deqooz}) and (\ref{deqons}), we have \begin{equation} \label{deqoio} \mathcal{G}_{f^n,\epsilon_{i+2}}(x,y)\preceq \mathcal{G}_{f,\epsilon_{i+1}}(x,y)\preceq \mathcal{G}_{f^n,\epsilon_{i}}(x,y). \end{equation} So \begin{equation} \label{deqozn} \text{if}\ \mathcal{G}_{f,\epsilon_{i}}(x,y)=[0,0]\ \text{then}\ \mathcal{G}_{f^n,\epsilon_{i+1}}(x,y)=[0,0] \end{equation} and \begin{equation} \label{deqiio} \text{if}\ \mathcal{G}_{f^n,\epsilon_{i}}(x,y)=[0,0]\ \text{then}\ \mathcal{G}_{f,\epsilon_{i+1}}(x,y)=[0,0]. \end{equation} Now (\ref{deqozn}), (\ref{deqiio}), (\ref{deqwov}) and (\ref{deqons}) imply $$ (x,y)\in D_{f^n}({[p,q]})\Leftrightarrow (x,y)\in D_{f}({[p,q]}). $$ Then $D_{f^n}({[p,q]})=D_f({[p,q]})$. \end{proof} \section{Hausdorff measure on symbolic spaces}\label{desoou} We will now briefly review the concept of Hausdorff measure and Hausdorff dimension. See \cite{Fal} for more details. Let $X$ be a separable metric space. Then $\mathcal{A}\subset\mathcal{P}(X)$ is called a {\bf cover} of $X$ if $\bigcup\mathcal{A}=X$. A cover $\mathcal{A}$ with $|\mathcal{A}|:=\sup_{A\in\mathcal{A}}|A|<\delta$ is called a {\bf $\delta$-cover}. Let $\mathscr{C}(X)$ denote the set of countable covers of $X$ and $\mathscr{C}(X,\delta)$ the set of countable $\delta$-covers of $X$. For $0\leq s\leq+\infty$ and $\delta>0$, define the {\bf $\mathscr{H}_\delta^s$ measure} of $X$ as $$ \mathscr{H}_\delta^s(X)=\inf_{\mathcal{A}\in\mathscr{C}(X,\delta)}\sum_{A\in\mathcal{A}}|A|^s, $$ where $0^0=0^{+\infty}=0$. It is obvious that $\mathscr{H}_\delta^s(X)$ is nondecreasing while $\delta$ decreases. Then define the {\bf $s$-dimensional Hausdorff measure} of $X$ as $$ \mathscr{H}^s(X)=\lim_{\delta\rightarrow 0^+}\mathscr{H}_\delta^s(X)=\sup_{\delta>0}\mathscr{H}_\delta^s(X)\in[0,+\infty]. $$ For $0\leq s<t$, by $$ \mathscr{H}_\delta^s(X)\geq\delta^{s-t}\mathscr{H}_\delta^t(X),\ 0<\delta<1, $$ if $\mathscr{H}^t(X)>0$, then $\mathscr{H}^s(X)=+\infty$. Then there is a unique value $\dim_{H} X\in[0,+\infty]$, called the {\bf Hausdorff dimension} of $X$, such that $$ 0\leq s<\dim_{H} X\Rightarrow \mathscr{H}^s(X)=+\infty\ \text{and}\ \dim_{H} X<s\leq+\infty\Rightarrow \mathscr{H}^s(X)=0. $$ The next two lemmas are well known. \begin{lemma}\label{detoex} Let $X$ be a separable metric space and $\mathcal{A}$ a countable set of subsets of $X$. Then, for $0\leq s\leq+\infty$, \begin{equation} \label{deqvuu} \sup_{A\in\mathcal{A}}\mathscr{H}^s(A)\leq\mathscr{H}^s\left(\bigcup\mathcal{A}\right)\leq\sum_{A\in\mathcal{A}}\mathscr{H}^s(A). \end{equation} Thus \begin{equation} \label{deqvus} \dim_{H}\bigcup\mathcal{A}=\sup_{A\in\mathcal{A}}\dim_{H} A.\ \Box \end{equation} \end{lemma} \begin{lemma}\label{detviw} Suppose $X,Y$ are separable metric spaces and $\pi$ is a surjective map from $X$ to $Y$. Let $s,c\in(0,+\infty)$. If for some $\delta_0>0$, $$ \rho(\pi(x),\pi(y))\leq c(\rho(x,y))^s\ \text{while}\ x,y\in X\ \text{with}\ 0<\rho(x,y)<\delta_0, $$ then $$ \mathscr{H}^t(Y)\leq c^t\mathscr{H}^{st}(X)\ \text{for}\ t\in(0,+\infty), $$ and thus $$ \dim_H Y\leq \frac{1}{s}\dim_H X.\ \Box $$ \end{lemma} \begin{lemma}\label{detouu} Let $X$, $Y$ be separable metric spaces. Suppose $\pi:X\rightarrow Y$ is surjective with $$ \lim_{\delta\rightarrow 0^+}\inf\left\{\frac{\ln\rho( {\pi(x),\pi(y)})}{\ln\rho (x,y)}:(x,y)\in X\times X,\ 0<\rho(x,y)<\delta\right\}=s. $$ Then \begin{equation} \label{deqouv} s\dim_HY\leq\dim_HX. 
\end{equation} So, if $s>0$, or $s\geq0$ and $\dim_HX>0$, then \begin{equation} \label{deqouw} \dim_HY\leq\frac{1}{s}\dim_HX. \end{equation} \end{lemma} \begin{proof} If $s\leq0$, the inequality (\ref{deqouv}) is obvious. Now suppose $s>0$. Let $0<\tau<s$. Pick $0<\delta_0<1$ such that \newcommand{\deqoiv}{(\ref{deqoiv})} $$ \begin{aligned} \frac{\ln\rho( {\pi(x),\pi(y)})}{\ln\rho (x,y)}\geq\tau,\ &\text{i.e.}\ \rho(\pi(x),\pi(y))\leq(\rho(x,y))^\tau,\\ &\text{for}\ (x,y)\in X\times X\ \text{and}\ 0<\rho(x,y)<\delta_0. \end{aligned} $$ By Lemma \ref{detviw}, $\dim_H Y\leq\frac{1}{\tau}\dim_H X$, i.e. $\tau\dim_HY\leq\dim_HX$. Letting $\tau\nearrow s$ we get (\ref{deqouv}). \end{proof} \begin{lemma}\label{detouw} Let $\alpha=({n_i})_{i\geq 0}$ be a sequence of positive integers with infinitely many $n_i\geq2$. Let $\Sigma_\alpha=\prod_{i\geq 0}\{0,\cdots,n_{i}-1\}$. Then $\dim_H\Sigma_\alpha=1$ and $\mathscr{H}^1(\Sigma_\alpha)=1$. \end{lemma} \begin{proof} Let $\delta>0$. Choose $k$ with $\prod_{0\leq i<k}n_i^{-1}<\delta$. Then $\{[\omega]:\omega\in W_{\alpha,k}\}$ is a finite $\delta$-cover of $\Sigma_\alpha$ with $$ \sum_{\omega\in W_{\alpha,k}}|[\omega]|=\prod_{0\leq i<k} n_i\cdot\prod_{0\leq i<k}n_i^{-1}=1. $$ Since $\delta>0$ was arbitrary, we have $\mathscr{H}^1({\Sigma_\alpha})\leq1$ and thus $\dim_H\Sigma_\alpha\leq1$. Let $(A_i)_{i\geq0}$ be a countable cover of $\Sigma_\alpha$. We are to show \begin{equation} \label{deqoui} \sum_{i\geq0}|A_i|\geq1. \end{equation} Let $\epsilon>0$. For each $i$, let $B_i$ be a cylinder containing $A_i$ with $$ |B_i|<|A_i|+2^{-i}\epsilon. $$ Now $(B_i)$ is a sequence of closed and open sets covering $\Sigma_\alpha$. Since $\Sigma_\alpha$ is compact and each $B_i$ is open, we can choose a finite subcover $(B_{i_{j}})_{0\leq j< k}$. Suppose $B_{i_{j}}=[\omega_{i_{j}}]$, $0\leq j< k$. Let $l_{i_{j}}$ be the length of $\omega_{i_{j}}$ and put $l=\sup_{0\leq j< k}l_{i_{j}}$. Then $$ \begin{aligned} |[\omega_{i_j}]|&=\prod_{0\leq i<l_{i_j}}n_i^{-1}\\ &=\prod_{l_{i_j}\leq i<l}n_i\cdot\prod_{0\leq i<l}n_i^{-1}\\ &=\sum\{|[\omega]|:\omega\in W_{\alpha,l},\ \omega|_{\{0,\cdots,l_{i_j}-1\}}=\omega_{i_j}\}. \end{aligned} $$ Now $$ \begin{aligned} \sum|A_i|&\geq\sum|B_i|-2\epsilon\\ &\geq\sum_{0\leq j<k}|B_{i_j}|-2\epsilon\\ &=\sum_{0\leq j<k}|[\omega_{i_j}]|-2\epsilon\\ &\geq\sum\{|[\omega]|:\omega\in W_{\alpha,l}\}-2\epsilon\\ &=\prod_{0\leq i<l}n_i\cdot\prod_{0\leq i<l}n_i^{-1}-2\epsilon\\ &=1-2\epsilon. \end{aligned} $$ Since $\epsilon>0$ was arbitrary, we have $\sum|A_i|\geq1$. Since $(A_i)$ was arbitrary, we have $\mathscr{H}^{1}{(\Sigma_\alpha)}\geq1$ and thus $\dim_H\Sigma_\alpha\geq1$. \end{proof} \begin{lemma}\label{detoue} Suppose, for $i\geq 0$, that $\emptyset\neq A_i\subset W_{K,{n_i}}$ and $\#( A_i)=a_i$. Then \begin{equation} \label{deqwow} \dim_H\prod_{i\geq 0} A_i\geq\liminf_{j\rightarrow \infty}\frac{\sum_{0\leq i<j}\ln a_{i}}{\sum_{0\leq i<j+1}\ln K^{n_i}}. \end{equation} \end{lemma} \begin{proof} Write $X=\prod_{i\geq 0} A_i$ and $Y=\prod_{i\geq 0}\{0,\cdots, a_{i}-1\}$. If there are at most finitely many $a_i\geq2$, then (\ref{deqwow}) is obviously true. Now suppose there are infinitely many $a_i\geq2$. Note that, by Lemma \ref{detouw}, $\dim_H Y=1$. Write $A_i=\{\omega_{i,j}:0\leq j< a_{i}\}$. Define $\pi:X \rightarrow Y$ by $$ \pi(\omega_{0,j_{0}}\omega_{1,j_{1}}\cdots)= j_{0}j_{1}\cdots. $$ It is obvious that $\pi$ is a bijection.
For $x$, $y\in X$ with $$ \sum_{0\leq i<j}{n_i}\leq\delta(x,y)<\sum_{0\leq i<j+1}{n_i}, $$ we have $\delta({\pi(x)},{\pi(y)})=j$ and $$ \frac{\ln\rho( {\pi(x),\pi(y)})}{\ln\rho (x,y)}=\frac{\sum_{0\leq i<j}\ln a_i}{\ln K^{\delta(x,y)}}\geq\frac{\sum_{0\leq i<j}\ln a_i}{\sum_{0\leq i<j+1}\ln K^{n_i}}. $$ So $$ \begin{aligned} \lim_{\delta\rightarrow 0^+}\inf&\left\{\frac{\ln\rho( {\pi(x),\pi(y)})}{\ln\rho (x,y)}:(x,y)\in X\times X,\ 0<\rho(x,y)<\delta\right\}\\ \geq&\liminf_{j\rightarrow \infty}\frac{\sum_{0\leq i<j}\ln a_i}{\sum_{0\leq i<j+1}\ln K^{n_i}}. \end{aligned} $$ Applying Lemma \ref{detouu} to this last inequality and using the fact that $\dim_H Y=1$, we get (\ref{deqwow}). \end{proof} Let $(X,f),(Y,g)$ be TDSs and $\pi:X\to Y$ be surjective and continuous. If $\pi$ satisfies $\pi\circ f=g\circ\pi$, then we call $\pi$ a {\bf semi-conjugation} from $f$ to $g$. In this case we say $g$ is a {\bf factor} of $f$ and $f$ an {\bf extension} of $g$. If in addition $\pi$ is injective, then we call $\pi$ a {\bf conjugation} from $f$ to $g$. Define $$ \tau_K:W_K\rightarrow\mathbb{N},\ \tau_K(\omega)=\sum_{0\leq i<{|\omega|}}K^i\omega_i. $$ For $n\geq1$, define $$ \tau_{K,n}:\Sigma_K\rightarrow\Sigma_{K^n},\ (\tau_{K,n}(x))_i=\tau_K(x|_{\{in,\cdots,(i+1)n-1\}}),\ i\geq0. $$ \begin{lemma}\label{detosx} Let $n\geq1$. \rnc{\labelenumi}{(\alph{enumi})} \begin{enumerate} \item $\tau_{K,n}$ is a conjugation from $\sigma_K^n$ to $\sigma_{K^n}$. \item For $x,y\in\Sigma_K$, $$ \rho(x,y)\leq\rho(\tau_{K,n}(x),\tau_{K,n}(y))\leq K^n\rho(x,y). $$ \item For $X\subset\Sigma_K$ and $0\leq s\leq+\infty$, $$ \mathscr{H}^{s}(X)\leq\mathscr{H}^{s}(\tau_{K,n}(X))\leq K^{sn}\mathscr{H}^{s}(X). $$ So $$ \dim_H\tau_{K,n}(X)=\dim_H X. $$ \end{enumerate} \end{lemma} \begin{proof} We prove (b) first. Let $x,y\in\Sigma_K$. If $x=y$, then the inequalities in (b) are obvious. Suppose $x\neq y$, $\delta(x,y)=in+j$, and $ 0\leq j<n$. Then $\delta(\tau_{K,n}(x),\tau_{K,n}(y))=i$. So $$ \rho(x,y)=K^{-in-j}\leq K^{-in}=\rho(\tau_{K,n}(x),\tau_{K,n}(y))\leq K^{-in-j+n}=K^n\rho(x,y). $$ Next we prove (a). By (b), $\tau_{K,n}$ is injective and continuous. Suppose $x\in\Sigma_{K^n}$. For $i\geq0$, $x_i\in\{0,\cdots,K^n-1\}$, and so there are unique $z_{i,j}\in \{0,\cdots,K-1\}$, $0\leq j< n$, with $x_i=\sum_{0\leq j< n}K^{j}z_{i,j}$. Define $y\in\Sigma_K$ by $y_{in+j}=z_{i,j}$ for $i\geq 0$ and $0\leq j< n$. Then $\tau_{K,n}(y)=x$. So $\tau_{K,n}$ is surjective. Now, using (b) again we see that $\tau_{K,n}^{-1}$ is continuous. So $\tau_{K,n}$ is a homeomorphism from $\Sigma_K$ to $\Sigma_{K^n}$. Let $x\in\Sigma_K$. Then $$ \begin{aligned} (\sigma_{K^n}(\tau_{K,n}(x)))_i&=\sum_{0\leq j< n}K^jx_{(i+1)n+j}\\ &=\sum_{0\leq j< n}K^j(\sigma_K^n(x))_{in+j}\\ &=(\tau_{K,n}(\sigma_K^n(x)))_i,\ i\geq 0. \end{aligned} $$ So $\sigma_{K^n}\circ \tau_{K,n}=\tau_{K,n}\circ \sigma_K^n$. Now (c) follows from (b) and Lemma \ref{detviw}. \end{proof} Define $$ \pi_K:\Sigma_K\times\Sigma_K \rightarrow\Sigma_{K^2},\ (\pi_K(x_0,x_1))_i=x_{0,i}+Kx_{1,i}\in\{0,\cdots,K^{2}-1\},\ i\geq 0, $$ where $(x_0,x_1)\in\Sigma_K\times\Sigma_K $, $x_{0,i},x_{1,i}\in\{0,\cdots,K-1\}$ are the $(i+1)$th coordinate values of $x_0,x_1$ respectively. \begin{lemma}\label{detosw} \rnc{\labelenumi}{(\alph{enumi})} \begin{enumerate} \item $\pi_K$ is a conjugation from $\sigma_K\times\sigma_K$ to $\sigma_{K^2}$. \item For $(x_0,x_1),(y_0,y_1)\in\Sigma_K\times\Sigma_K $, $$ \rho(\pi_K(x_0,x_1),\pi_K(y_0,y_1))=(\rho((x_0,x_1),(y_0,y_1)))^2.
$$ \item For $X\subset\Sigma_K\times\Sigma_K$ and $0\leq s\leq+\infty$, $$ \mathscr{H}^{s}(X)=\mathscr{H}^{\frac{s}{2}}(\pi_K(X)). $$ So $$ \dim_H\pi_K(X)=\frac{1}{2}\dim_H X. $$ \end{enumerate} \end{lemma} \begin{proof} We prove (b) first. Let $(x_0,x_1),(y_0,y_1)\in\Sigma_K\times\Sigma_K $. If $(x_0,x_1)=(y_0,y_1)$, then $$ \rho(\pi_K(x_0,x_1),\pi_K(y_0,y_1))=0=(\rho((x_0,x_1),(y_0,y_1)))^2. $$ Suppose $(x_0,x_1)\neq(y_0,y_1)$. Let $i$ be the least natural number satisfying $$ x_{0,i}\neq y_{0,i}\ \text{or}\ x_{1,i}\neq y_{1,i}. $$ Then $\delta({\pi_K(x_0,x_1)},{\pi_K(y_0,y_1)})=i$. So $$ \rho(\pi_K(x_0,x_1),\pi_K(y_0,y_1))=(K^2)^{-i}=(K^{-i})^2=(\rho((x_0,x_1),(y_0,y_1)))^2. $$ Next we prove (a). By (b), $\pi_K$ is injective and continuous. Suppose $x\in\Sigma_{K^2}$. For $i\geq0$, $x_i\in\{0,\cdots,K^{2}-1\}$, and so there are unique $z_{0,i},z_{1,i}\in\{0,\cdots,K-1\} $ with $x_i=z_{0,i}+Kz_{1,i}$. Put $y_0=(z_{0,i})_{i\geq 0}$, $y_1=(z_{1,i})_{i\geq 0}$. Then $(y_0,y_1)\in\Sigma_K\times\Sigma_K $ and $\pi_K(y_0,y_1)=x$. So $\pi_K$ is surjective. Now, use (b) again and we see $\pi_K^{-1}$ is continuous. So $\pi_K$ is a homeomorphism from $\Sigma_K\times\Sigma_K $ to $\Sigma_{K^2}$. Let $(x_0,x_1)\in\Sigma_K\times\Sigma_K $. Then $$ \begin{aligned} (\sigma_{K^2}(\pi_K(x_0,x_1)))_i&=(\pi_K(x_0,x_1))_{i+1}\\ &=x_{0,i+1}+Kx_{1,i+1}\\ &=(\sigma_K(x_0))_i+K(\sigma_K(x_1))_i\\ &=(\pi_K(\sigma_K(x_0),\sigma_K(x_1)))_i,\ i\geq 0. \end{aligned} $$ So $\sigma_{K^2}\circ\pi_K=\pi_K\circ(\sigma_{k}\times \sigma_{k} )$. Now (c) follows from (b) and Lemma \ref{detviw}. \end{proof} \section{Variational inequality}\label{desoos} In this section, we will prove a variational inequality (Lemma 5.8) for calculating the Hausdoff dimensions and Hausdorff measures of the sets $E_{\sigma_K}([p,q])$ and $D_{\sigma_K}([p,q])$. Let $m\geq2$. Denote \begin{equation}\label{ivoevvvwiz} Q_m=\left\{p=(p_{0},\cdots,p_{m-1})\in{[0,1]}^m:\sum_{0\leq i<m}p_i=1\right\}. \end{equation} Let $\mathcal{K}=(K_i)_{0\leq i<m}$ be a partition of $\{0,\cdots,K-1\}$. Define \begin{equation} \label{deqwuw} f_{\mathcal{K}}:Q_m\rightarrow[0,+\infty),\ f_\mathcal{K} (p)=\sum_{0\leq i<m} -p_i\ln\frac{p_i}{\#(K_i)}, \end{equation} where $0\ln0=0$. \begin{lemma}\label{detoui} \rnc{\labelenumi}{(\alph{enumi})} \begin{enumerate} \item $f_\mathcal{K}$ is continuous and strictly concave. \item $f_\mathcal{K}(Q_{m})\subset[0,\ln K]$ and $f_\mathcal{K}(p)=\ln K$ if and only if $p_i=\frac{\#(K_i)}{K}$ for $0\leq i<m$. \item Let $p_i\in{[0,1]}$, $2\leq i<m$, be fixed and satisfy $$ 1-\sum_{2\leq i<m}p_i=a>0. $$ For $c\in{[0,1]}$, define $q_c\in Q_m$ by $$ (q_c)_i= \begin{cases} ca,&i=0,\\ (1-c)a,&i=1,\\ p_i,&2\leq i<m. \end{cases} $$ Define $H:{[0,1]}\to[0,\ln K]$ by $$ H(c)=f_\mathcal{K} ({q_c})=-ca\ln\frac{ca}{\#(K_0)}- (1-c)a\ln\frac{(1-c)a}{\#(K_1)}+\sum_{2\leq i<m} -p_i\ln\frac{p_i}{\#(K_i)}. $$ Then $H$ is continuous and strictly concave and takes its maximal value at $$ c=\frac{\#(K_0)}{\#(K_0)+\#(K_1)}. $$ \end{enumerate} \end{lemma} \begin{proof} Define $\phi:[0,+\infty)\rightarrow\mathbb{R}$, $\phi (x)=x\ln x$. Then $\phi$ is continuous and, since $\phi''(x)=\frac{1}{x}>0$, strictly convex. (a) The continuity of $f_\mathcal{K}$ can be shown by the uniform continuity of the functions $$ f_{\mathcal{K},j}:{[0,1]}\rightarrow[0,+\infty),\ f_{\mathcal{K},j}(x)=-x\ln\frac{x}{\#(K_j)},\ 0\leq j<m . $$ Let $p_i\in Q_m$, $0<c_i<1$, $0\leq i<n$, with $\sum_{0\leq i<n}c_i=1$ and, for some $i_0,i_1$, $p_{i_0}\neq p_{i_1}$. 
Recall that $p_{i,j}$ is used to denote the $(j+1)$th coordinate of $p_i$. Then $$ \begin{aligned} &f_\mathcal{K}\left(\sum_{0\leq i<n} c_{i}p_{i}\right)-\sum_{0\leq i<n}c_{i}f_{\mathcal{K}}(p_{i})\\ =&\sum_{0\leq j<m}\left(-\sum_{0\leq i<n}c_{i}p_{i,j}\cdot\ln\frac{\sum_{0\leq i<n}c_{i}p_{i,j}}{\#(K_j)}\right)\\ &-\sum_{0\leq i<n}c_{i}\sum_{0\leq j<m}\left(-p_{i,j}\ln\frac{p_{i,j}}{\#(K_j)}\right)\\ =&\sum_{0\leq j<m}\left(-\sum_{0\leq i<n}c_{i}p_{i,j}\cdot\ln{\sum_{0\leq i<n}c_{i}p_{i,j}}\right)\\ &-\sum_{0\leq i<n}c_i\sum_{0\leq j<m}\left(-p_{i,j}\ln p_{i,j}\right)\\ =&\sum_{0\leq j<m}\left(\sum_{0\leq i<n}c_i\phi(p_{i,j})-\phi\left(\sum_{0\leq i<n}c_{i}p_{i,j}\right)\right)\\ >&0\quad \text{(by the strict convexity of $\phi$)}. \end{aligned} $$ So $f_{\mathcal{K}}$ is strictly concave. (b) Let $\mathcal{K}_1=(\{i\})_{0\leq i<K}$. Then for $q\in Q_K$, $$ \begin{aligned} f_{\mathcal{K}_1}(q)&=\sum_{0\leq i<K}(-q_i\ln q_i)=-\sum_{0\leq i<K}\phi ({q_i})=-K\sum_{0\leq i<K}\frac{1}{K}\phi ({q_i})\\ &\leq-K\phi\left(\sum_{0\leq i<K}\frac{1}{K}q_i\right)=-K\phi\left(\frac{1}{K}\right)=\ln K \end{aligned} $$ and $f_{\mathcal{K}_1}(q)=\ln K$ if and only if each $q_i=\frac{1}{K}$. Define $\psi:Q_{m}\rightarrow Q_K$ by $$ (\psi(p))_i=\frac{p_{j_i}}{\#(K_{j_i})},\ 0\leq i<K , $$ where $j_i$ is the unique number with $i\in K_{j_i}$. Suppose $p\in Q_m$. Then $$ \begin{aligned} f_{\mathcal{K}}(p)&=\sum_{0\leq j<m}\left(-p_j\ln\frac{p_j}{\#(K_j)}\right)=\sum_{0\leq j<m}\sum_{i\in K_j}\left(-\frac{p_j}{\#(K_j)}\ln\frac{p_j}{\#(K_j)}\right)\\ &=\sum_{0\leq i<K}(-(\psi(p))_i\ln(\psi (p))_i)=f_{\mathcal{K}_1}({\psi(p)}). \end{aligned} $$ Then $f_{\mathcal{K}}=f_{\mathcal{K}_1}\circ\psi$. So $$ \sup f_{\mathcal{K}}(Q_m)\leq\sup f_{\mathcal{K}_1}(Q_K) =\ln K $$ and $$ \begin{aligned} f_{\mathcal{K}}(p)=\ln K&\Leftrightarrow(\psi(p))_i=\frac{1}{K}\ \text{for}\ 0\leq i<K\\ &\Leftrightarrow p_j=\frac{\#(K_j)}{K}\ \text{for}\ 0\leq j<m. \end{aligned} $$ (c) We have $$ H'(c)=a\left(\ln\frac{1-c}{c}-\ln\frac{\#(K_1)}{\#(K_0)}\right). $$ So $$ H'\left(\frac{\#( K_0)}{\#(K_0)+\#( K_1)}\right)=0. $$ Now the conclusion follows from the fact that $$ H''(c)=-\frac{a}{c}-\frac {a}{1-c}<0,\ c\in(0,1). $$ \end{proof} We define the function $g_{\mathcal{K}}:Q_m\rightarrow[0,+\infty)$ by \begin{equation} \label{deqiie} \ g_{\mathcal{K}}(p)=\frac{f_{\mathcal{K}}(p)}{\ln K}=\frac{\sum_{0\leq i<m}- p_i\ln\frac{p_i}{\#(K_i)}}{\ln K}. \end{equation} By Lemma \ref{detoui}, for $g_{\mathcal{K}}$, we have the following properties. \begin{lemma}\label{detvxw} \rnc{\labelenumi}{(\alph{enumi})} \begin{enumerate} \item $g_{\mathcal{K}}$ is continuous and strictly concave. \item $g_{\mathcal{K}}(Q_m)\subset [0,1]$ and $g_{\mathcal{K}}(p)=1$ if and only if $p_i=\frac{\#(K_i)}{K}$ for $0\leq i<m$. \item Let $p_i\in{[0,1]}$, $2\leq i<m$, be fixed and satisfy $$ 1-\sum_{2\leq i<m}p_i=a>0. $$ For $c\in{[0,1]}$, define $q_c\in Q_m$ by $$ (q_c)_i= \begin{cases} ca,&i=0,\\ (1-c)a,&i=1,\\ p_i,&2\leq i<m. \end{cases} $$ Define $h:{[0,1]}\rightarrow[0,1]$ by $$ h(c)=g_{\mathcal{K}}({q_c})=\frac{-ca\ln\frac{ca}{\#(K_0)}- (1-c)a\ln\frac{(1-c)a}{\#(K_1)}+\sum_{2\leq i<m} -p_i\ln\frac{p_i}{\#(K_i)}}{\ln K}.$$ Then $h$ is continuous and strictly concave and takes its maximal value at $$ c=\frac{\#(K_0)}{\#(K_0)+\#(K_1)}.\ \Box $$ \end{enumerate} \end{lemma} For $n\geq1$, write $$ Q_{m,n}=\{p\in Q_m:np_i\in\mathbb{N}\ \text{for}\ 0\leq i<m\}. $$ \begin{lemma}\label{detous} Let $p\in Q_{m}$ and $n\geq1$. 
Then there is $q\in Q_{m,n}$ with \begin{equation} \label{deqoux} \rho(p,q)<\frac{m-1}{n}. \end{equation} \end{lemma} \begin{proof} For $0\leq i<m-1$, let $q_i$ be the number with $p_i-\frac{1}{n}<q_i\leq p_i$ and $nq_i\in\mathbb{N}$. Put $q_{m-1}=1-\sum_{0\leq i<m-1}q_i$. Put $q=(q_i)_{0\leq i<m}$. Then $q\in Q_{m,n}$ and $$ \rho(p,q)=\sup_{0\leq i<m}|p_i-q_i|<\frac{m-1}{n}. $$ \end{proof} \begin{lemma}\label{detouz} $\#(Q_{m,n})\leq(n+1)^m$. \end{lemma} \begin{proof} Let $$ A=\left\{\left(\frac{i_j}{n}\right)_{0\leq j<m}:0\leq i_j < n+1\right\}. $$ Then $\#( A)=(n+1)^m$ and $Q_{m,n}\subset A$. \end{proof} For $\omega\in W_{K,n}$, define $\mu_{\mathcal{K},n}(\omega)\in Q_m$ by $$ (\mu_{\mathcal{K},n}(\omega))_i=\frac{1}{n}\#(\{0\leq j<n:\omega_j\in K_i\}),\ 0\leq i<m. $$ For $p\in Q_{m,n}$, write $$ A_{\mathcal{K},p,n}=\{\omega\in W_{K,n}:\mu_{\mathcal{K},n}(\omega)=p\} $$ and $$ a_{\mathcal{K},p,n}=\#(A_{\mathcal{K},p,n}). $$ For $P\subset Q_{m,n}$, write $$ A_{\mathcal{K},P,n}=\{\omega\in W_{K,n}:\mu_{\mathcal{K},n}(\omega)\in P\} $$ and $$ a_{\mathcal{K},P,n}=\#(A_{\mathcal{K},P,n}). $$ The following lemma is direct. \begin{lemma}\label{detoio} For $n \geq 1$ and $p\in Q_{m,n}$, \begin{equation} \label{deqoiu} a_{\mathcal{K},p,n}=\prod_{0\leq i<m}C_{n-\sum_{0\leq j<i}np_j}^{np_i}(\#(K_i))^{np_i}=\frac{n!\prod_{0\leq i<m}(\#(K_i))^{np_i}}{\prod_{0\leq i<m}(np_i)!}.\ \Box \end{equation} \end{lemma} We will now review Stirling's Formula, which says \begin{equation} \label{deqoiz} \ln n!=n\ln n-n+\ln\sqrt{2\pi n}+\epsilon_n\ \text{where}\ \lim_{n\rightarrow \infty}\epsilon_n=0. \end{equation} \begin{lemma}\label{detoei} \begin{equation} \label{deqouo} \left|\frac{1}{n}\ln a_{\mathcal{K},p,n}-f_{\mathcal{K}}(p)\right|\leq\frac{1}{n}{(m+1)(\ln\sqrt{2\pi n}+E)},\ \text{for}\ p\in Q_{m,n}, \end{equation} where $E=\sup_{n\geq1}|\epsilon_n|$ with $\epsilon_n$ as in (\ref{deqoiz}). Thus \begin{equation} \label{deqoez} \lim_{n\rightarrow \infty}\sup\left\{\left|\frac{1}{n}\ln a_{\mathcal{K},p,n}-f_{\mathcal{K}}(p)\right|:p\in Q_{m,n}\right\}=0. \end{equation} \end{lemma} \begin{proof} Applying (\ref{deqoiz}) to (\ref{deqoiu}), we get $$ \begin{aligned} &\ln a_{\mathcal{K},p,n}\\ =&\sum_{0\leq i<m}np_i\ln\#(K_i)+\left(n\ln n-\sum_{0\leq i<m}np_i\ln np_i\right)-\left(n-\sum_{0\leq i<m}np_i\right)\\ &+\left(\ln\sqrt{2\pi n}-\sum_{0\leq i<m}\ln\sqrt{2\pi np_i}\right)+\left(\epsilon_n-\sum_{0\leq i<m}\epsilon_{np_i}\right)\\ =&\sum_{0\leq i<m}np_i\ln\#(K_i)+\left(n\ln n-\sum_{0\leq i<m}np_i\ln np_i\right)\\ &+\left(\ln\sqrt{2\pi n}-\sum_{0\leq i<m}\ln\sqrt{2\pi np_i}\right)+\left(\epsilon_n-\sum_{0\leq i<m}\epsilon_{np_i}\right)\\ =&\sum_{0\leq i<m}np_i\ln\#(K_i)-\sum_{0\leq i<m}np_i\ln p_i\\ &+\left(\ln\sqrt{2\pi n}-\sum_{0\leq i<m}\ln\sqrt{2\pi np_i}\right)+\left(\epsilon_n-\sum_{0\leq i<m}\epsilon_{np_i}\right)\\ =&-\sum_{0\leq i<m}np_i\ln\frac{p_i}{\#(K_i)}+\left(\ln\sqrt{2\pi n}-\sum_{0\leq i<m}\ln\sqrt{2\pi np_i}\right)+\left(\epsilon_n-\sum_{0\leq i<m}\epsilon_{np_i}\right)\\ =&nf_{\mathcal{K}}(p)+\left(\ln\sqrt{2\pi n}-\sum_{0\leq i<m}\ln\sqrt{2\pi np_i}\right)+\left(\epsilon_n-\sum_{0\leq i<m}\epsilon_{np_i}\right), \end{aligned} $$ i.e., \begin{equation} \label{deqoeo} \begin{aligned} &\frac{1}{n}\ln a_{\mathcal{K},p,n} \\ =&f_{\mathcal{K}}(p)+\frac{1}{n}\left(\ln\sqrt{2\pi n}-\sum_{0\leq i<m}\ln\sqrt{2\pi np_i}\right)+\frac{1}{n}\left(\epsilon_n-\sum_{0\leq i<m}\epsilon_{np_i}\right). 
\end{aligned} \end{equation} Since $\left|\ln\sqrt{2\pi n}-\sum_{0\leq i<m}\ln\sqrt{2\pi np_i}\right|\leq(m+1)\ln\sqrt{2\pi n}$, (\ref{deqoeo}) leads to (\ref{deqouo}). Now (\ref{deqoez}) follows from (\ref{deqouo}) and the fact $\frac{\ln\sqrt{2\pi n}}{n}\rightarrow 0$. \end{proof} Let $x\in\Sigma_K$. For $n\geq1$, define $\mu_{\mathcal{K},n}(x)\in Q_m$ by $$ (\mu_{\mathcal{K},n}(x))_i=\frac{1}{n}\#(\{0\leq j<n:x_j\in K_i\}),\ 0\leq i<m. $$ Define $$ \mu_\mathcal{K} (x)=\omega(\mu_{\mathcal{K},n}(x):n\geq1)\subset Q_m. $$ We call $\mu_\mathcal{K} (x)$ the {\bf distributional density spectrum} of $x$ on $\mathcal{K}$. \begin{lemma}\label{detosi} Let $x\in\Sigma_K$. Then $\mu_\mathcal{K} (x)\in\mathcal{C}(Q_m)$. \end{lemma} \begin{proof} Apply Lemma \ref{detvxv} to $$ \rho(\mu_{\mathcal{K},n}(x),\mu_{\mathcal{K},n+1}(x))\leq\frac{1}{n+1}\rightarrow 0, \ n\rightarrow \infty. $$ \end{proof} \begin{lemma}\label{detoso} Let $\emptyset\neq P\subset Q_m$. Then $\dim_H\{x\in \Sigma_K:\mu_\mathcal{K} (x)\cap P\neq\emptyset\}\leq\sup g_\mathcal{K} (P)$. \end{lemma} \begin{proof} Suppose $\sup g_\mathcal{K}(P) =s$. Let $\epsilon>0$. By the uniform continuity of $f_\mathcal{K}$, we may take $\delta>0$ such that \begin{equation} \label{deqwux} |f_\mathcal{K}(p)-f_\mathcal{K}(q)|<\epsilon\ln K\ \text{for}\ p,q\in Q_m\ \text{with}\ \rho(p,q)<\delta. \end{equation} Take $n_0$ such that \begin{equation} \label{deqwuu} \frac{(m+1)(\ln\sqrt{2\pi n}+E)}{n}<\epsilon\ln K\ \text{for}\ n\geq n_0, \end{equation} where $E=\sup_{n\geq1}|\epsilon_n|$ with $\epsilon_n$ as in (\ref{deqoiz}). Let $$ P_i=Q_{m,i+1}\cap B_\delta (P),\ i\geq0. $$ If $i\geq\frac {m-1}{\delta}-1$, then $\frac{m-1}{i+1}\leq\delta$ and, by Lemma \ref{detous}, $P_i\neq\emptyset$ and $P\subset B_\delta (P_i)$. Let $n_1\geq\sup(n_0,\frac{m-1}{\delta}-1)$. Suppose $i\geq n_1$ and $q\in P_i$. Then \begin{equation} \label{deqwus} \begin{aligned} &\ln a_{\mathcal{K},q,i+1}\\ \leq&(i+1)f_\mathcal{K}(q)+(m+1)(\ln\sqrt{2\pi (i+1)}+E)\quad\text{(by (\ref{deqouo}))}\\ \leq&(i+1)(f_\mathcal{K}(p)+\epsilon\ln K)+(i+1)\epsilon\ln K\quad\text{(by (\ref{deqwux}) and (\ref{deqwuu}))}\\ =&(i+1)(s\ln K+\epsilon\ln K)+(i+1)\epsilon\ln K\\ =&(i+1)(s+2\epsilon)\ln K. \end{aligned} \end{equation} By (\ref{deqwus}), we have \begin{equation} \label{deqwuz} a_{\mathcal{K},P_i,i+1}\leq(i+2)^me^{(i+1)(s+2\epsilon)\ln K}=(i+2)^mK^{(i+1)(s+2\epsilon)}\ \text{for}\ n\geq n_1. \end{equation} Now, if $n\geq n_1$, it follows from (\ref{deqwuz}) that \begin{equation}\label{q11021119} \begin{aligned} \sum_{\omega\in A_{\mathcal{K},P_i,i+1}}|[\omega]|^{s+4\epsilon} &=a_{\mathcal{K},P_i,i+1}K^{-(i+1)(s+4\epsilon)}\\ &\leq(i+2)^mK^{(i+1)(s+2\epsilon)}K^{-(i+1)(s+4\epsilon)}\\ &=(i+2)^mK^{-2(i+1)\epsilon}\\ &=(i+2)^mK^{-(i+1)\epsilon}\cdot K^{-(i+1)\epsilon}. \end{aligned} \end{equation} Note that $(i+2)^mK^{-(i+1)\epsilon}\rightarrow 0$. So, by (\ref{q11021119}), we can take $n_2\geq n_1$ such that \newcommand{\deqwue}{(\ref{deqwue})} \begin{equation} \label{deqwue} \sum_{\omega\in A_{\mathcal{K},P_i,i+1}}|[\omega]|^{s+4\epsilon}<K^{-(i+1)\epsilon}\ \text{for}\ i\geq n_2. \end{equation} By $$ \sum_{i\geq n}K^{-(i+1)\epsilon}=\frac{K^{-(n+1)\epsilon}}{1-K^{-\epsilon}}\rightarrow 0,\ n\rightarrow \infty, $$ we take $n_3\geq n_2$ such that \begin{equation} \label{deqwun} \sum_{i\geq n}K^{-(i+1)\epsilon}<1\ \text{for}\ n\geq n_3. 
\end{equation} Eqs (\ref{deqwue}) and (\ref{deqwun}) lead to \begin{equation} \label{deqwso} \sum_{i\geq n}\sum_{\omega\in A_{\mathcal{K},P_i,i+1}}|[\omega]|^{s+4\epsilon}<1\ \text{for}\ n\geq n_3. \end{equation} Suppose $x\in\{y\in \Sigma_K:\mu_\mathcal{K}(y)\cap P\neq\emptyset\}$. Then there are infinitely many $i$ with $\rho (\mu_{\mathcal{K},i+1}(x),P)<\delta$. For such $i$, we have $\mu_{\mathcal{K},i+1}(x)\in P_i$ and thus $x\in[A_{\mathcal{K},P_i,i+1}]$. So $$ \{x\in \Sigma_K:\mu_\mathcal{K}(x)\cap P\neq\emptyset\}\subset\bigcup_{i\geq n}[A_{\mathcal{K},P_i,i+1}]\ \text{for}\ n\geq0,$$ and thus, for $n\geq0$, $\mathcal{A}_n:=\{[\omega]:\omega\in\ A_{\mathcal{K},P_i,i+1},\ i\geq n\}$ is a countable cover of $\{x\in \Sigma_K:\mu_\mathcal{K}(x)\cap P\neq\emptyset\}$. Note that, for $[\omega]\in\mathcal{A}_n$, $|[\omega]|\leq K^{-(n+1)}$. Suppose $n\geq n_3$. Now we have $$ \begin{aligned} \mathscr{H}_{K^{-n}}^{s+4\epsilon}(\{x\in \Sigma_K:\mu_\mathcal{K} (x)\cap P\neq\emptyset\}) &\leq \sum_{[\omega]\in\mathcal{A}_n}|[\omega]|^{s+4\epsilon}\\ &=\sum_{i\geq n}\sum_{\omega\in A_{\mathcal{K},P_i,i+1}}|[\omega]|^{s+4\epsilon}\\ &<1. \end{aligned} $$ Letting $n\rightarrow \infty$, we have $$ \mathscr{H}^{s+4\epsilon}(\{x\in \Sigma_K:\mu_\mathcal{K}(x)\cap P\neq\emptyset\})\leq 1.$$ Then $$\dim_H\{x\in \Sigma_K:\mu_\mathcal{K}(x) \cap P\neq\emptyset\}\leq s+4\epsilon.$$ Since $\epsilon>0$ was arbitrary, we have $$\dim_H\{x\in \Sigma_K:\mu_\mathcal{K}(x)\cap P\neq\emptyset\}\leq s.$$ \end{proof} \section{Hausdorff dimensions of $E_{\sigma_K}(\mathcal{J})$ and $D_{\sigma_K}(\mathcal{J})$}\label{desooz} We will recall some of the notation defined at the beginning of this section. $$ \tau_K:W_K\rightarrow\mathbb{N},\ \tau_K(\omega)=\sum_{0\leq i<|\omega|}K^i\omega_i $$ $$ \tau_{K,n}:\Sigma_K\rightarrow\Sigma_{K^n},\ (\tau_{K,n}(x))_i=\tau_K(x|_{\{in,\cdots,(i+1)n-1\}}),\ i\geq0,\ n\geq1 $$ $$ \pi_K:\Sigma_K\times\Sigma_K \rightarrow\Sigma_{K^2},\ (\pi_K(x_0,x_1))_i=x_{0,i}+Kx_{1,i}\in \{0,\cdots,K^2-1\},\ i \geq 0 $$ $$ Q_2=\left\{p=(p_0,p_1)\in{[0,1]}^2:p_0+p_1=1\right\} $$ Additionally, we write $$E_K =\{i+Ki:0\leq i<K\}\subset\{0,\cdots,K^2-1\}.$$ Then $$E_{K}^{n}=\prod_{0\leq i<n}E_K\subset\{0,\cdots,K^2-1\}^n,\ n\geq1.$$ \begin{lemma}\label{detonx} Let $\emptyset\neq {\mathcal{J}}\subset{\mathcal{C}({[0,1]})}$. Then $$ \dim_H E_{\sigma_K}({\mathcal{J}})\leq2-\inf\{\sup I:I\in{\mathcal{J}}\}. $$ \end{lemma} \begin{proof} Let $X=\pi_K (E_{{\mathcal{J}}}(\sigma_K))$. Let \begin{equation} \label{deqwoe} q=\inf\{\sup I:I\in{\mathcal{J}}\}. \end{equation} By Lemma \ref{detosw}, to prove Lemma \ref{detonx}, it is enough to show $\dim_H X\leq1-\frac{q}{2}$. If $q=0$, the inequality is obvious. So we suppose $q>0$. Let $P_{q}=\{r=(r_0,r_1)\in Q_2:r_0\geq {q}\}$. For $n \geq 1$, write $$ \begin{aligned} &K_{n,0}=\tau_{K^2}(E_{K}^{n}),\ K_{n,1}=\{0,\cdots,K^{2n}-1\}\setminus K_{n,0};\ \mathcal{K}_n=(K_{n,0},K_{n,1}). \end{aligned} $$ Suppose $x=\pi_K(y,z)\in X$, where $(y,z)\in E_{\mathcal{J}}({\sigma_K})=\mathscr{E}_{\sigma_K}^{-1}(\mathcal{J})$. 
Then for any $n\geq 1$, $$ \begin{aligned} q&\leq\sup \mathscr{E}_{\sigma_K}(y,z)\quad\text{(by $\mathscr{E}_{\sigma_K}(y,z)\in\mathcal{J}$ and (\ref{deqwoe}))}\\ &=\sup \mathscr{E}_{\sigma_{K}^{n}}(y,z)\quad\text{(by Lemma \ref{detoss})}\\ &=\lim_{i\rightarrow \infty}\limsup_{j\rightarrow \infty}\frac{1}{j}\#(\{0\leq k<j:y|_{\{kn,\cdots,(k+i)n-1\}} =z|_{\{kn,\cdots,(k+i)n-1\}}\})\\ &=\lim_{i\rightarrow \infty}\limsup_{j\rightarrow \infty}\frac{1}{j}\#(\{0\leq k<j:\sigma_{K^2}^{kn}(x)|_{\{0,\cdots,in-1\}}\in E_{K}^{in}\})\\ &=\lim_{i\rightarrow \infty}\limsup_{j\rightarrow \infty}\frac{1}{j}\#(\{0\leq k<j:\sigma_{K^{2n}}^{k}(\tau_{K^2,n}(x))|_{\{0,\cdots,i-1\}}\in K_{n,0}^{i}\})\\ &\leq\limsup_{j\rightarrow \infty}\frac{1}{j}\#(\{0\leq k<j:(\sigma_{K^{2n}}^{k}(\tau_{K^2,n}(x)))_0\in K_{n,0}\})\\ &=\sup\{p_0:p=(p_0,p_1)\in\mu_{\mathcal{K}_n}(\tau_{K^2,n}(x))\}\\ &\in\{p_0:p=(p_0,p_1)\in\mu_{\mathcal{K}_n}(\tau_{K^2,n}(x))\}\quad\text{(by the compactness of $\mu_{\mathcal{K}_n}(\tau_{K^2,n}(x))$)}. \end{aligned} $$ So $\mu_{\mathcal{K}_n}(\tau_{K^2,n}(x))\cap P_q\neq\emptyset$, i.e., $\tau_{K^2,n}(x)\in\{x'\in \Sigma_K:\mu_\mathcal{K} ({x'})\cap P_q\neq\emptyset\}$. Then \begin{equation} \label{deqonw} \tau_{K^2,n}(X)\subset\{x'\in \Sigma_K:\mu_\mathcal{K} ({x'})\cap P_q\neq\emptyset\}. \end{equation} Now \begin{equation} \label{deqonx} \begin{aligned} \dim_H X&=\dim_H\tau_{K^2,n}(X)\quad\text{(by Lemma \ref{detosx})}\\ &\leq\dim_H\{x'\in \Sigma_K:\mu_\mathcal{K} ({x'})\cap P_q\neq\emptyset\}\quad\text{(by (\ref{deqonw}))}\\ &\leq\sup g_{\mathcal{K}_n}({P_q})\quad\text{(by Lemma \ref{detoso})}.\\ \end{aligned} \end{equation} Let $n\geq\frac{-\ln q}{\ln K}$. Then \begin{equation} \frac{\#( K_{n,0})}{\#(K_{n,0})+\#(K_{n,1})}=\frac{K^n}{K^{2n}}=\frac{1}{K^n}\leq q. \label{1210290941} \end{equation} For $n\geq\frac{-\ln q}{\ln K}$, we have \begin{equation} \label{deqonu} \begin{aligned} &\sup g_{\mathcal{K}_n}({P_q})\\ =&\sup g_{\mathcal{K}_n}\left(\left\{p=(p_0,p_1)\in Q_2:p_0\geq q\geq\frac{\#(K_{n,0})}{\#(K_{n,0})+\#(K_{n,1}}\right\}\right) \quad \text{by (\ref{1210290941})}\\ =&g_{\mathcal{K}_n}(q)\quad \text{(by Lemma \ref{detvxw} (c))}\\ =&\frac{-q\ln\frac q{K^n}-(1-q)\ln\frac {1-q}{K^{2n}-K^n}}{\ln K^{2n}}\\ =&\frac{-q\ln q-(1-q)\ln(1-q)}{\ln K^{2n}}\\ &+\frac{q\ln{K^n}+(1-q)\ln(K^{2n}-K^n)}{\ln K^{2n}}\\ \rightarrow&\frac12q+(1-q)\quad (n\rightarrow\infty)\\ =&1-\frac {q}{2}. \end{aligned} \end{equation} In (\ref{deqonx}) let $n\rightarrow\infty$. Then using (\ref{deqonu}), we get $\dim_H X\leq1-\frac {q}{2}$. \end{proof} If $(X,\rho,f)$ is a TDS, then $(x,y)\in X\times X$ is said to be an {\bf asymptotical pair} if $$ \lim_{i\rightarrow \infty}\rho(f^i(x),f^i(y))=0, $$ a {\bf proximal pair} if $$ \liminf_{i\rightarrow \infty}\rho(f^i(x),f^i(y))=0, $$ a ($\delta$-){\bf distal pair} if $$ \liminf_{i\rightarrow \infty}\rho(f^i(x),f^i(y))(\geq\delta)>0, $$ and a ($\delta$-){\bf Li--Yorke pair} if $$ \liminf_{i\rightarrow \infty}\rho(f^i(x),f^i(y))=0,\ \limsup_{i\rightarrow \infty}\rho(f^i(x),f^i(y))(\geq\delta)>0. $$ We use Asym$(f)$, Prox$(f)$, Dist$(f)$ and LY$(f)$ to denote the set of asymptotical pairs, the set of proximal pairs, the set of distal pairs and the set of Li--Yorke pairs of $f$, respectively. The properties stated in the two lemmas below are direct. \begin{lemma}\label{detiiu} Let $(X,\rho,f)$ be a TDS. 
Then $$ \begin{aligned} &\mathrm{Asym} (f)\subset D_f({[1,1]}),\\ &\mathrm{Dist} (f)\subset D_f({[0,0]}),\\ &\mathrm{Prox} (f)=X\times X\setminus\mathrm{Dist}(f),\\ &\mathrm{LY} (f)=\mathrm{Prox}(f)\setminus\mathrm{Asym}(f)=X\times X\setminus(\mathrm{Dist}(f)\cup\mathrm{Asym}(f)). \end{aligned} $$ \end{lemma} Recall that $N\subset\mathbb{N}$ is said to be {\bf syndetic} if for some $k\geq1$ we have $N-\{0,\cdots,k-1\} \supset \mathbb{N}$. \begin{lemma}\label{detvvz} Let $x,y\in\Sigma_K$. \rnc{\labelenumi}{(\alph{enumi})} \begin{enumerate} \item $(x,y)$ is an asymptotic pair for $\sigma_K$ if and only if $\{i\geq0:x_i\neq y_i\}$ is finite. \item $(x,y)$ is a distal pair for $\sigma_K$ if and only if $\{i\geq0:x_i\neq y_i\}$ is syndetic. $\Box$ \end{enumerate} \end{lemma} Let $N\subset\mathbb{N}$ with both $N$ and $N^c$ infinite. Write $$ \begin{aligned} &L_N=N\setminus(N+1)=\{l_{N,0}<l_{N,1}<\cdots\},\\ &R_N=(N+1)\setminus N=\{r_{N,0}<r_{N,1}<\cdots\}. \end{aligned} $$ Note that $l_{N,i}<r_{N,i}<l_{N,i+1}$ for $i\geq0$, $N=\mathbb{N}\cap\bigcup_{i\geq 0}[l_{N,i},r_{N,i})$ and \begin{equation} \label{deqwou} R_N\subset L_{N^c}\subset R_N\cup\{0\}. \end{equation} \begin{lemma}\label{detvoe} Let $N\subset\mathbb{N}$ with both $N$ and $N^c$ infinite. Then for $n\geq1$, $$ \{i\geq0:\{i,\cdots,i+n-1\}\subset N\}=N\setminus(R_N-\{0,\cdots,n-1\}). $$ \end{lemma} \begin{proof} Let $n\geq1$. Then $\{i,\cdots,i+n-1\}\subset N$ if and only if $i\in N$, and $\{i,\cdots,i+n-1\}\cap R_N=\emptyset$ if and only if $i\in N$ and $i\not\in R_N-\{0,\cdots,n-1\}$. \end{proof} Recall that $\zeta_n (N)=\#(N\cap\{0,\cdots,n-1\})$. For $i\geq 0$, write $$ \begin{aligned} &t_{N,2i}=l_{N,i},\ t_{N,2i+1}=r_{N,i};\\ &d_{N,i}=t_{N,i+1}-t_{N,i};\\ &t_{N,i,j}=t_{N,i}+j,\ 0\leq j< d_{N,i};\\ &e_{N£¬i}=\zeta_{t_{N,i+1}} (N)=\sum_{0\leq 2j<i+1}d_{N,2j},\ f_{N,i}=\zeta_{t_{N,i+1}}(N^c)=t_{N,i+1}-e_{N,i}.\\ \end{aligned} $$ \begin{lemma}\label{detosn} Let $N\subset\mathbb{N}$ with both $N$ and $N^c$ infinite. Then \rnc{\labelenumi}{(\alph{enumi})} $$ \mu(N)=\left[\liminf_{i\rightarrow\infty}\frac{e_{N,2i+1}}{t_{N,2i+2}},\limsup_{i\rightarrow\infty}\frac{e_{N,2i}}{t_{N,2i+1}}\right]. $$ \end{lemma} \begin{proof} It is obvious that $$ \mu(N)\supset\left[\liminf_{i\rightarrow\infty}\frac{e_{N,2i+1}}{t_{N,2i+2}},\limsup_{i\rightarrow\infty}\frac{e_{N,2i}}{t_{N,2i+1}}\right] $$ Note that, for $0\leq j<d_{N,2i+2}$, \begin{equation} \label{dsqoux} \begin{aligned} \frac{e_{N,2i+1}}{t_{N,2i+2}}&\leq\frac{e_{N,2i+1}+j+1}{t_{N,2i+2}+j+1}=\frac{\zeta_{t_{N,2i+2,j}+1}(N)}{t_{N,2i+2,j}+1}\\ &\leq\frac{e_{N,2i+1}+d_{N,2i+2}}{t_{N,2i+2}+d_{N,2i+2}}=\frac{e_{N,2i+2}}{t_{N,2i+3}} \end{aligned} \end{equation} and for $0\leq j<d_{N,2i+1}$, \begin{equation} \label{dsqouu} \begin{aligned} \frac{e_{N,2i+1}}{t_{N,2i+2}}&=\frac{e_{N,2i}}{t_{N,2i+2}}\leq\frac{e_{N,2i}}{t_{N,2i+1}+j+1}\\ &=\frac{\zeta_{t_{N,2i+1,j}+1}(N)}{t_{N,2i+1,j}+1}=\frac{\zeta_{t_{N,2i+1}}(N)}{t_{N,2i+1,j}+1}\\ &\leq\frac{\zeta_{t_{N,2i+1}}(N)}{t_{N,2i+1}}=\frac{e_{N,2i}}{t_{N,2i+1}}. \end{aligned} \end{equation} Eqs (\ref{dsqoux}) and (\ref{dsqouu}) lead to $$ \liminf_{i\rightarrow\infty}\frac{e_{N,2i+1}}{t_{N,2i+2}}\leq\inf\mu (N)\leq\sup\mu(N)\leq\limsup_{i\rightarrow\infty}\frac{e_{N,2i}}{t_{N,2i+1}}. $$ So $$ \mu(N)\subset\left[\liminf_{i\rightarrow\infty}\frac{e_{N,2i+1}}{t_{N,2i+2}},\limsup_{i\rightarrow\infty}\frac{e_{N,2i}}{t_{N,2i+1}}\right]. $$ \end{proof} Let $[p,q]\in\mathcal{C}({[0,1]})$. 
Define $$ \mathcal{M}({[p,q]})=\left\{N\subset\mathbb{N}:\frac{e_{N,2i+1}}{t_{N,2i+2}}\rightarrow p,\ \frac{e_{N,2i}}{t_{N,2i+1}}\rightarrow q\ \text{and}\ d_{N,i}\rightarrow\infty\right\}. $$ Note that, by Lemma \ref{detosn}, for $N\in\mathcal{M}({[p,q]})$, $\mu(N)=[p,q]$. \begin{lemma}\label{detvou} (\cite{WS}) Let $[p,q]\in\mathcal{C}({[0,1]})$. Then $\mathcal{M}({[p,q]})\neq\emptyset$. \end{lemma} Let $N\subset\mathbb{N}$ with both $N$ and $N^c$ infinite. Define $$ \gamma_N:N\rightarrow\mathbb{N},\ \gamma_N(n)=\zeta_n(N)=\#(N\cap\{0,\cdots,n-1\}). $$ In fact, $\gamma_N$ is the unique order preserving bijective map from $N$ to $\mathbb{N}$. More intuitively, if $N=\{n_{0}<n_{1}<\cdots\}$, then $\gamma_N(n_i)=i$, $i\geq0$. Define $\Phi_{N,a}:\Sigma_K\rightarrow\Sigma_K$ by $$ (\Phi_{N,a}(x))_i= \begin{cases} a_{\gamma N (i)},&i\in N,\\ x_{\gamma{N^c}(i)},&i\in N^c. \end{cases} $$ More intuitively, in the case $l_{N,0}=0$, $$ \begin{aligned} \Phi_{N,a}(x)=&a_0\cdots a_{r_{N,0}-1}x_0\cdots x_{(l_{N,1}-r_{N,0})-1}\\ &a_{r_{N,0}}\cdots a_{r_{N,0}+(r_{N,1}-l_{N,1})-1}x_{l_{N,1}-r_{N,0}}\cdots x_{(l_{N,1}-r_{N,0})+(l_{N,2}-r_{N,1})-1}\\ &\cdots\\ =&a_0\cdots a_{d_{N,0}-1}x_{0}\cdots x_{d_{N,1}-1}\\ &a_{d_{N,0}}\cdots a_{d_{N,0}+d_{N,2}-1}x_{d_{N,1}}\cdots x_{d_{N,1}+d_{N,3}-1}\\ &\cdots\\ =&a_0\cdots a_{e_{N,0}-1}x_{f_{N,0}}\cdots x_{f_{N,1}-1}\\ &a_{e_{N,1}}\cdots a_{e_{N,2}-1}x_{f_{N,2}}\cdots x_{f_{N,3}-1}\\ &\cdots. \end{aligned} $$ Note that $\Phi_{N,a}$ is a continuous injection. The idea of the definition of $\Phi_{N,a}$ comes from \cite{Mis}. Our definition is slightly different from the corresponding one in \cite{Mis}. \begin{lemma}\label{dstowv} Let $a,b,c\in\Sigma_K$. Suppose $(a,b)$ is $(K^{-k+1})$-distal for $\sigma_K$. Let $[p,q]\in\mathcal{C}({[0,1]})$ and $N\in\mathcal{M}({[p,q]})$. Write $x=\Phi_{N,c}(a)$, $y=\Phi_{N,c}(b)$. Then $$ \mathcal{F}_{\sigma_K}((x,y),\epsilon)\equiv [{p,q}]\ \text{for}\ 0<\epsilon\leq K^{-k+1}. $$ Thus, $(x,y)\in D_{[p,q]}({\sigma_K})$. \end{lemma} \begin{proof} Suppose $0<\epsilon\leq K^{-k+1}$. Let $M=N_{\sigma_ K\times\sigma_K}((x,y),\Delta_\epsilon))$. Take $m$ with $K^{-m}<\epsilon$. Let $$ M_m=(R_N-\{0,\cdots,m-1\})\cap\mathbb{N},\ M_k=(R_{N^c}-\{0,\cdots,k-1\})\cap\mathbb{N}. $$ Since $\lim_{i\rightarrow \infty }t_{N,i+1}-t_{N,i}=+\infty$, Lemma \ref{dstoxm} implies $\mu (M_m)=\mu (M_k)=0$. Suppose $i\in N\setminus M_m$. Then Lemma \ref{detvoe} implies that $\{i,\cdots,i+m-1\}\subset N$. So, by the definition of $x,y$, $$ \sigma_K^i(x)|_{\{0,\cdots,m-1\}}=\sigma_K^i(y)|_{\{0,\cdots,m-1\}}=\sigma_K^{\gamma_N(i)}(c)|_{\{0,\cdots,m-1\}}, $$ which means $\rho(\sigma_K^i(x),\sigma_K^i(y))\leq K^{-m}<\epsilon$ and thus $i\in M$. Then $N\setminus M_m\subset M$. So \begin{equation} \label{deqwos} \mu(M)\succeq\mu(N\setminus M_m)=\mu(N)=[p,q]. \end{equation} Since $(a,b)$ is $(K^{-k+1})$-distal for $\sigma_K$, we may choose $t\in\mathbb{N}$ such that \begin{equation} \label{deqoex} \sigma_K^i(a)|_{\{0,\cdots,k-1\}}\neq\sigma_K^i(b)|_{\{0,\cdots,k-1\}}\ \text{for}\ i\geq t. \end{equation} Let $s$ be a number such that $\gamma_{N^c}(s)=t$. Suppose $i\in N^c\setminus M_k\setminus\{0,\cdots,s-1\}$. Then \begin{equation} \label{deqoes} \gamma_{N^c}(i)\geq t \end{equation} By Lemma \ref{detvoe}, \begin{equation} \label{deqoeu} \{i,\cdots,i+k-1\}\subset N^c. 
\end{equation} Now (\ref{deqoes}), (\ref{deqoeu}), (\ref{deqoex}), and the definitions of $x$ and $y$ imply $$ \sigma_K^i(x)|_{\{0,\cdots,k-1\}}=\sigma_K^{\gamma_{N^c}(i)}(a)|_{\{0,\cdots,k-1\}}\neq\sigma_K^{\gamma_{N^c}(i)}(b)|_{\{0,\cdots,k-1\}}=\sigma_K^i(y)|_{\{0,\cdots,k-1\}}, $$ which means $\rho(\sigma_K^i(x),\sigma_K^i(y))\geq K^{-k+1}\geq\epsilon$ and thus $i\in M^c$. Then $$ N^c\setminus M_k\setminus\{0,\cdots,s-1\}\subset M^c, $$ i.e. $M\subset N\cup M_k\cup\{0,\cdots,s-1\}$. Now \begin{equation} \label{deqwoz} \mu(M)\preceq\mu(N\cup M_k\cup\{0,\cdots,s-1\})=\mu(N)=[p,q]. \end{equation} It follows from (\ref{deqwos}) and (\ref{deqwoz}) that $\mathcal{F}_{\sigma_K}((x,y),\epsilon)={[p,q]}$. \end{proof} \begin{lemma}\label{detosz} Let ${[p,q]}\in\mathcal{C}({[0,1]})$. Then $\dim_H D_{\sigma_K}({[p,q]}) \geq2-{q}$. \end{lemma} \begin{proof} By Lemma \ref{detvou}, pick $N\in\mathcal{M}({[p,q]})$. For $n\geq 1$, we define $$ X_n=\prod_{i\geq 0}C_i, $$ where $$ C_i= \begin{cases} E_{K}^n,&i\in N,\\ W_{K^2,n} \setminus E_{K}^n ,&i\in N^c. \end{cases} $$ Let $Y_n=\pi_K^{-1} ({X_n})$. Suppose $x=\pi_K(y,z)\in X_n$, where $(y,z)\in Y_n\subset\Sigma_K\times\Sigma_K $. Then there are $a,b,c\in\Sigma_K$ with \begin{equation} \label{deqosw} y=\Phi_{nN+\{0,\cdots,n-1\},a}(b),\ z=\Phi_{nN+\{0,\cdots,n-1\},a}(c) \end{equation} and \begin{equation} \label{deqosx} \pi_K(b,c)\in\prod_{i\geq 0}C_i. \end{equation} By (\ref{deqosx}) and the definitions of $B_i$ and $\pi_K$, for each $i\geq 0$, $b|_{\{i,\cdots,i+{2n-2}\}}\neq c|_{\{i,\cdots,i+{2n-2}\}}$, i.e., $\rho(\sigma_K^i(b),\sigma_K^i(c))\geq K^{-2n+2}$. So $(b,c)$ is a $(K^{-2n+2})$-distal pair for $\sigma_K$. Note that, since $N\in\mathcal{M}({[p,q]})$, part (c) of Lemma \ref{dstoxm} implies that $nN+\{0,\cdots,n-1\}\in\mathcal{M}({[p,q]})$. Then, by (\ref{deqosx}), (\ref{deqosw}) and Lemma \ref{dstowv}, $(y,z)\in D_{[p,q]}({\sigma_K})$. So \begin{equation} \label{deqosu} Y_n\subset D_{\sigma_K}({[p,q]}). \end{equation} Note that since, for each $k\geq 0$, $$ K^n=\#(\{i+Ki:0\leq i<K\}^{n})\leq\#(C_k)\leq\#( W_{K^2,n} \backslash \{i+Ki:0\leq i<K\}^{n})=K^{2n}-K^n, $$ we have $$ \frac{\ln K^n}{\ln K^{2n}}\leq\frac{\sum_{0 \leq k < t_{N,i,j} }\ln\#( C_k)}{\sum_{0 \leq k < t_{N,i,j} }\ln K^{2n}}\leq\frac{\ln (K^{2n}-K^n)}{\ln K^{2n}}. 
$$ Then \begin{equation} \label{deqoss} \begin{aligned} &\frac{\sum_{0 \leq k < t_{N,2i,j}}\ln\#(C_k)}{\sum_{0 \leq k < t_{N,2i,j}+1}\ln K^{2n}}\\ &=\frac{\sum_{0 \leq k < t_{N,2i}}\ln\#( C_k)+\sum_{0 \leq k < j}\ln\#( C_{t_{N,2i}+k})}{( t_{N,2i}+j)\ln K^{2n}} \cdot\frac{t_{N,2i}+j}{t_{N,2i}+j+1}\\ &=\frac{e_{N,2i-1}\ln K^n+f_{N,2i-1}\ln (K^{2n}-K^n)+j\ln K^n}{(t_{N,2i}+j)\ln K^{2n}} \cdot\frac{t_{N,2i}+j}{t_{N,2i}+j+1}\\ &\geq\frac{e_{N,2i-1}\ln K^n+f_{N,2i-1}\ln (K^{2n}-K^n)+d_{N,2i}\ln K^n}{(t_{N,2i}+d_{N,2i})\ln K^{2n}} \cdot \frac{t_{N,2i}}{t_{N,2i}+1}\\ &=\frac{e_{N,2i}\ln K^n+f_{N,2i}\ln (K^{2n}-K^n)}{t_{N,2i+1}\ln K^{2n}}\cdot\frac{t_{N,2i}}{t_{N,2i}+1}\\ &\rightarrow\frac{q}{2}+(1-{q})\cdot\frac{\ln(K^{2n}-K^n)}{\ln K^{2n}},\ i\rightarrow \infty, \end{aligned} \end{equation} and \begin{equation} \label{deqosz} \begin{aligned} &\frac{\sum_{0 \leq k <t_{N,2i+1,j}}\ln\#( C_k)}{\sum_{0 \leq k <t_{N,2i+1,j}+1}\ln K^{2n}}\\ &=\frac{\sum_{0 \leq k <t_{N,2i+1}}\ln\#( C_k)+\sum_{0 \leq k <j}\ln\#( C_{t_{N,2i+1}+k})}{(t_{N,2i+1}+j)\ln K^{2n}} \cdot\frac{t_{N,2i+1}+j}{t_{N,2i+1}+j+1}\\ &=\frac{e_{N,2i}\ln K^n+f_{N,2i}\ln (K^{2n}-K^n)+j\ln (K^{2n}-K^n)}{(t_{N,2i+1}+j)\ln K^{2n}} \cdot\frac{t_{N,2i+1}+j}{t_{N,2i+1}+j+1}\\ &\geq\frac{e_{N,2i}\ln K^n+f_{N,2i}\ln (K^{2n}-K^n)}{t_{N,2i+1}\ln K^{2n}}\cdot\frac{t_{N,2i+1}}{t_{N,2i+1}+1}\\ &\rightarrow\frac{q}{2}+(1-{q})\cdot\frac{\ln(K^{2n}-K^n)}{\ln K^{2n}},\ i\rightarrow \infty. \end{aligned} \end{equation} Eqs (\ref{deqoss}) and (\ref{deqosz}) lead to \begin{equation} \label{deqose} \liminf\frac{\sum_{0\leq i<j}\ln\#( C_i)}{\sum_{0\leq i<j+1}\ln K^{2n}}\geq\frac {q}2+(1-{q})\cdot\frac{\ln(K^{2n}-K^n)}{\ln K^{2n}}. \end{equation} Applying Lemma \ref{detoue} to (\ref{deqose}), we get \begin{equation} \label{deqosn} \dim_H X_n\geq\frac{q}{2}+(1-{q})\cdot\frac{\ln(K^{2n}-K^n)}{\ln K^{2n}}. \end{equation} Now \begin{equation} \label{deqozo} \begin{aligned} \dim_{H}D_{\sigma_K}({[p,q]})&\geq\dim_{H} Y_n \quad\text{(by (\ref{deqosu}))}\\ &= 2\dim_{H} X_n \quad\text{(by Lemma \ref{detosw})}\\ &\geq 2\left(\frac{q}{2}+(1-{q})\cdot\frac{\ln(K^{2n}-K^n)}{\ln K^{2n}}\right) \quad\text{(by (\ref{deqosn}))}\\ &\rightarrow 2-q,\ n\rightarrow \infty. \end{aligned} \end{equation} Since (\ref{deqozo}) holds for any $n\geq 1$, we have $\dim_{H} D_{\sigma_K}({[p,q]})\geq2-{q}$. \end{proof} \begin{lemma} \label{p1202141124} Let $\emptyset\neq {\mathcal{J}}\subset{\mathcal{C}({[0,1]})}$. Then $$ \dim_{H} D_{\sigma_K}({\mathcal{J}})\geq2-\inf\{\sup I:I\in{\mathcal{J}}\}. $$ \end{lemma} \begin{proof} Let $q=\inf\{\sup I:I\in{{\mathcal{J}}}\}$. Suppose $\epsilon>0$. Pick $[p_0,p_1]\in{{\mathcal{J}}}$ with $p_1<q+\epsilon$. Using Lemma \ref{detosz}, we get \begin{equation} \label{q1202141131} \begin{aligned} \dim_{H} D_{\sigma_K}({\mathcal{J}})\geq \dim_{H} D_{\sigma_K}({[p_0,p_1]}) =2-p_1>2-q-\epsilon. \end{aligned} \end{equation} Since (\ref{q1202141131}) holds for any $\epsilon>0$, it follows that $\dim_{H} D_{\sigma_K}({\mathcal{J}})\geq2-q$. \end{proof} \begin{theorem}\label{detose} Let $\emptyset\neq {{\mathcal{J}}}\subset{\mathcal{C}({[0,1]})}$. Then $$ \dim_{H} E_{\sigma_K}({\mathcal{J}})=\dim_{H} D_{\sigma_K}({\mathcal{J}})=2-\inf\{\sup I:I\in{{\mathcal{J}}}\}. $$ \end{theorem} \begin{proof} Apply Lemma \ref{detonx} and Lemma \ref{p1202141124} to $ D_{\sigma_K}({\mathcal{J}})\subset E_{\sigma_K}({\mathcal{J}})$. \end{proof} \begin{corollary}\label{detonw} Let $[p,q]\in\mathcal{C}({[0,1]})$. 
Then $$ \dim_{H} E_ {\sigma_K}({[p,q]})=\dim_{H} D_{\sigma_K}({[p,q]}) =2-{q}. $$ \end{corollary} \begin{corollary}\label{detonv} For ${\mathcal{J}}\subset{\mathcal{C}({[0,1]})}$, $$ \dim_{H} E_{\sigma_K}({\mathcal{J}})=\sup_{I\in{\mathcal{J}}}\dim_{H} E_{\sigma_K}({I}),\ \dim_{H} D_{\sigma_K}({\mathcal{J}})=\sup_{I\in{\mathcal{J}}}\dim_{H} D_{\sigma_K}({I}). $$ \end{corollary} \begin{proof} This follows directly from Theorem \ref{detose} and Corollary \ref{detonw}. \end{proof} \begin{corollary}\label{detiio} For $\sigma_K$, the distributional chaos relation with respect to DC1 and the distributional chaos relation with respect to DC2 are of Hausdorff dimension $1$. $\Box$ \end{corollary} \begin{theorem}\label{detiiv} $$ \dim_{H}\rm{Asym}(\sigma_K)=1 $$ and $$ \dim_{H}\rm{Prox}(\sigma_K)=\dim_{H}\rm{Dist}(\sigma_K)=\dim_{H}\rm{LY}(\sigma_K)=2. $$ Moreover, $$ \begin{aligned} &\mathscr{H}^1(\rm{Asym}(\sigma_K))=+\infty,\\ &\mathscr{H}^2(\rm{Prox}(\sigma_K))=\mathscr{H}^2(\rm{LY}(\sigma_K))=1,\\ &\mathscr{H}^2(\rm{Dist}(\sigma_K))=0.\\ \end{aligned} $$ \end{theorem} \begin{proof} Since $\rm{Asym}(\sigma_K)\subset D_{\sigma_K}([1,1])$ and $\dim_{H}D_{\sigma_K}([1,1])=1$, we have $\dim_{H}\rm{Asym}(\sigma_K)\leq1$. Since $\Delta({\Sigma_K})\subset\rm{Asym}(\sigma_K)$ and $\dim_{H}\Delta({\Sigma_K})=\dim_{H}\Sigma_K=1$, we have $\dim_{H}\rm{Asym}(\sigma_K)\geq1$. Thus $\dim_{H}\rm{Asym}(\sigma_K)=1$. Next we show $\mathscr{H}^1(\rm{Asym}(\sigma_K))=+\infty$. Note that the map $$ \varepsilon_K:\Sigma_K\rightarrow E_{K}^{\mathbb{N}}=\prod_{i\geq0}E_K\subset\Sigma_{K^2},\ (\varepsilon_K(x))_i=x_i+Kx_i,\ i\geq0 $$ is a homeomorphism with \begin{equation} \label{deqwio} \rho(\varepsilon_K(x),\varepsilon_K(y))=(K^2)^{-\delta(x,y)}=(K^{-\delta(x,y)})^2=(\rho(x,y))^2,\ x,y\in\Sigma_K. \end{equation} Eq. (\ref{deqwio}) and Lemma \ref{detviw} lead to \begin{equation} \label{deqwon} \mathscr{H}^{\frac{1}{2}}(E_{K}^{\mathbb{N}})=\mathscr{H}^1(\Sigma_K)=1. \end{equation} For $n\geq0$ and $\omega\in W_{K^2,n}$, write $$ \omega E_{K}^{\mathbb{N}}=\{x\in\Sigma_{K^2}:x|_{\{0,\cdots,n-1\}}=\omega,\ x|_{\{n,n+1,\cdots\}}\in E_{K}^{\mathbb{N}}\} $$ Then the map $$ \sigma_{K^2}^{n}:\omega E_{K}^{\mathbb{N}}\rightarrow E_{K}^{\mathbb{N}} $$ is a homeomorphism with \begin{equation} \label{deqwii} \rho(\sigma_{K^2}^{n}(x),\sigma_{K^2}^{n}(y))=K^{2n}\rho(x,y),\ x,y\in \omega E_{K}^{\mathbb{N}}. \end{equation} Applying Lemma \ref{detviw} to (\ref{deqwii}) and then using (\ref{deqwon}), we get \begin{equation} \label{deqwiv} \mathscr{H}^{\frac{1}{2}}(\omega E_{K}^{\mathbb{N}})=K^{-n}\mathscr{H}^{\frac{1}{2}}(E_{K}^{\mathbb{N}})=K^{-n}. \end{equation} Let $$ X_n=\bigcup_{\omega\in W_{K^2,n}}\omega E_{K}^{\mathbb{N}}. $$ Since each $\omega E_{K}^{\mathbb{N}}$ is compact and thus $\mathscr{H}^{\frac{1}{2}}$ measurable (see, e.g., \cite{Fal}), by (\ref{deqwiv}), \begin{equation} \label{deqwiw} \mathscr{H}^{\frac{1}{2}}(X_n)=\#(W_{K^2,n})K^{-n}=K^{2n}K^{-n}=K^n. \end{equation} Let $X=\pi_K(\rm{Asym}(\sigma_K))$. Then \begin{equation} \label{deqwix} X=\bigcup_{n\geq 0}X_n. \end{equation} By (\ref{deqwix}) and (\ref{deqwiw}), $$ \mathscr{H}^{\frac{1}{2}}(X)\geq K^n\ \text{for}\ n\geq0. $$ So $\mathscr{H}^{\frac{1}{2}}(X)=+\infty$. Now Lemma \ref{detosw} implies that $\mathscr{H}^{1}(\rm{Asym}(\sigma_K))=+\infty$. For $n\geq 1$, let $Y_n=\prod_{i\geq 0}(W_{K^2,n} \backslash \{i+Ki:0\leq i<K\}^{n})$. We have $$ \dim_{H} Y_n=\frac{\ln\#(C_n)}{\ln K^{2n}}=\frac{\ln(K^{2n}-K^n)}{\ln K^{2n}}. 
$$ Then \begin{equation} \label{deqiiw} \dim_{H} Y_n<1\ \text{and}\ \dim_{H} Y_n\rightarrow1,\ n\rightarrow\infty. \end{equation} Suppose $y,z\in\Sigma_K$ with $\pi_K(y,z)\in Y_n$. Because $y|_{\{in,\cdots,(i+1)n-1\}}\neq z|_{\{in,\cdots,(i+1)n-1\}}$ for $i\geq 0$, we have $y|_{\{i,\cdots,i+2n-2\}}\neq z|_{\{i,\cdots,i+2n-2\}}$ for $i\geq 0$. So $(y,z)$ is a $(K^{-2n+2})$-distal pair for $\sigma_K$. Then \begin{equation} \label{deqivv} Y_n\subset\pi_K(\rm{Dist}(\sigma_K)). \end{equation} On the other hand, suppose $(y,z)$ is a distal pair for $\sigma_K$. Then there is some $n\geq 0$ with $\inf_{i\geq 0}\rho(\sigma_K^i(y),\sigma_K^i(z))\geq K^{-n}$, i.e., $x|_{\{i,\cdots,i+n\}}\neq y|_{\{i,\cdots,i+n\}}$ for each $i\geq 0$, thus $\pi_K(y,z)\in Y_{n+1}$. So \begin{equation} \label{deqivw} \pi_K(\rm{Dist}(\sigma_K))\subset\bigcup_{n\geq 1}Y_n. \end{equation} Eqs (\ref{deqivv}) and (\ref{deqivw}) lead to \begin{equation} \label{deqiix} \pi_K(\rm{Dist}(\sigma_K))=\bigcup_{n\geq 1}Y_n. \end{equation} Applying Lemma \ref{detoex} to (\ref{deqiiw}), we obtain \begin{equation} \label{deqiiu} \dim_{H} \bigcup_{n\geq 1}Y_n=1\ \text{and}\ \mathscr{H}^1\left(\bigcup_{n\geq 1}Y_n\right)=0. \end{equation} Applying Lemma \ref{detosw} and (\ref{deqiiu}) to (\ref{deqiix}), we have $$ \dim_{H} \rm{Dist}(\sigma_K)=2\ \text{and}\ \mathscr{H}^2(\rm{Dist}(\sigma_K))=0. $$ Since $\mathscr{H}^2(\rm{Dist}(\sigma_K))=\mathscr{H}^2(\rm{Asym}(\sigma_K))=0$, by Lemma \ref{detoex}, we have $$ \mathscr{H}^2(\rm{Prox}(\sigma_K))=\mathscr{H}^2(X\times X\setminus\rm{Dist}(\sigma_K))=\mathscr{H}^2(X\times X)=1 $$ and $$ \mathscr{H}^2(\rm{LY}(\sigma_K))=\mathscr{H}^2(\rm{Prox}(\sigma_K)\setminus\rm{Asym}(\sigma_K))=\mathscr{H}^2(\rm{Prox}(\sigma_K))=1. $$ So $$ \dim_{H}\rm{LY}(\sigma_K)=\dim_{H}\rm{Prox}(\sigma_K)=2. $$ \end{proof} \section{The Hausdorff measures of $E_{\sigma_K}({[p,q]})$ and $D_{\sigma_K}({[p,q]})$} \label{desooe} We will now review some measure theoretical properties of $\mathscr{H}^1$ for $(\Sigma_K,\sigma_K)$. For each column $[\omega]$ in $\Sigma_K$, $\mathscr{H}^1([\omega])=K^{-|\omega|}$. Let $\mathcal{B}_{\Sigma_K}$ be the set of Borel subsets of $\Sigma_K$. Then each member of $\mathcal{B}_{\Sigma_K}$ is $\mathscr{H}^1$ measurable, so $\mathscr{H}^1$ is a probability measure on $(\Sigma_K,\mathcal{B}_{\Sigma_K})$. The measure $\mathscr{H}^1$ is ergodic for $\sigma_K$. We say $x\in\Sigma_K$ is a \textbf{generic point} for $(\mathscr{H}^1,\sigma_K)$ provided $\frac{1}{j}\sum_{0\leq i<j}\delta_{\sigma_K^i(x)}\to \mathscr{H}^1$, $j\to\infty$, under the weak* topology, where $\delta_x$ is the measure $\delta_x(A)=1\Leftrightarrow x\in A$. Let $G_{\mathscr{H}^1,\sigma_K}$ denote the set of generic points for $(\mathscr{H}^1,\sigma_K)$. Then $G_{\mathscr{H}^1,\sigma_K}\in\mathcal{B}_{\Sigma_K}$, $\mathscr{H}^1(G_{\mathscr{H}^1,\sigma_K})=1$ and \begin{equation}\label{e:6.1} \mu(N_{\sigma_K}(x,A))=\mathscr{H}^1(A)\text{ for }x\in G_{\mathscr{H}^1,\sigma_K}\text{ and }A\in\mathcal{B}_{\Sigma_K}. \end{equation} See, e.g., \cite{b:42}. \begin{lemma}\label{l:6.1} Suppose $(x,y)\in\Sigma_K\times\Sigma_K$ with $\pi_K(x,y)\in G_{\mathscr{H}^1,\sigma_{K^2}}$. Then $$\mathcal{F}_{\sigma_K}((x,y),K^{-k+1})=K^{-k}\text{ for }k\geq 0.$$ In particular, $(x,y)\in E_{\sigma_K}([0,0])$. \end{lemma} \begin{proof} Let $\pi_K(y,z)=x\in G_{\mathscr{H}^1,\sigma_{K^2}}$. 
For $k\geq 0$, by the definition of $\pi_K$ and (\ref{e:6.1}), we have $$\begin{aligned} \mathcal{F}_{\sigma_K}((y,z),K^{-k+1})&=\mu(\{i\ge 0:y|_{[i,i+k)}=z|_{[i,i+k)}\})\\ &=\mu(N_{\sigma_{K^2}}(x,[E_K^k]))=(K^2)^{-k}K^k=K^{-k}. \end{aligned}$$ \end{proof} \begin{lemma}\label{l:6.2} $\mathscr{H}^2(E_{\sigma_K}([0,0]))=1$. \end{lemma} \begin{proof} Lemma \ref{l:6.1} implies $\pi_K^{-1}(G_{\mathscr{H}^1,\sigma_{K^2}})\subset E_{[0,0]}(\sigma_K)$. So it follows from Lemma $\ref{detosw}$ that $$1\geq \mathscr{H}^2(E_{[0,0]}(\sigma_K))\ge \mathscr{H}^2(\pi_K^{-1}(G_{\mathscr{H}^1,\sigma_{K^2}}))=\mathscr{H}^1(G_{\mathscr{H}^1,\sigma_K})=1.$$ \end{proof} \begin{lemma}\label{l:6.3} Suppose $[p,q]\in\mathcal{C}([0,1])$, $N\in\mathcal{M}([p,q])$, $a,b,c\in\Sigma_K$ with $\pi_K(b,c)=x\in G_{\mathscr{H}^1,\sigma_K}$ and $y=\Phi_{N,c}(a)$, $z=\Phi_{N,c}(b)$. Then $$\mathcal{F}_{\sigma_K}((y,z),t)= \left\{\begin{aligned} &[p+(1-p)K^{-k},q+(1-q)K^{-k}],&&K^{-k}<t\leq K^{-k+1},\,\,k\geq 1\\ &1,&&t>1. \end{aligned} \right. $$ In particular, $(y,z)\in E_{\sigma_K}([p,q])$. \end{lemma} \begin{proof} If $t>1=diam(\Sigma_K)$, then $\mathcal{F}_{\sigma_K}((y,z),t)=1$. Let $k\geq 1$ be a fixed integer and $K^{-k}<t\leq K^{-k+1}$. Let $$\begin{aligned} N_1&=\{i\geq 0:\rho(\sigma_K^i(y),\sigma_K^i(z))<t\}=\{i\geq 0:\rho(\sigma_K^i(y),\sigma_K^i(z))<K^{-k}\},\\ N_2&=\{i\geq 0:\rho(\sigma_K^i(b),\sigma_K^i(c))<t\}=\{i\geq 0:\rho(\sigma_K^i(b),\sigma_K^i(c))<K^{-k}\}\\ &=\{i\geq 0:\sigma_{K^2}^i(x)\in [E_K^k]\},\\ N_3&=(\{t_{N,2j+1}:j\geq 0\}-\{0,1,\cdots,k-1\})\cap\mathbb{N},\\ N_4&=N\setminus N_3,\\ N_5&=(\{t_{N,2j}:j\geq 0\}-\{0,1,\cdots,k-1\})\cap\mathbb{N},\\ N_6&=N^c\setminus N_5,\\ N_7&=\gamma_{N^c}(N_5),\\ N_8&=\gamma_{N^c}(N_6)=N^c_7.\\ \end{aligned}$$ To prove the lemma, it is enough to show \begin{equation}\label{e:6.2} \mu(N_1)=[p+(1-p)K^{-k},q+(1-q)K^{-k}]. \end{equation} Since $x\in G_{\mathscr{H}^1,\sigma_{K^2}}$, we have \begin{equation}\label{e:6.3} \mu(N_2)=\mathscr{H}^1([E_K^k])=(K^2)^{-k}K^k=K^{-k}. \end{equation} Since $t_{N,2j+3}-t_{N,2j+1}=d_{N,2j+1}+d_{N,2j+2}\rightarrow \infty$, $t_{N,2j+2}-t_{N,2j}=d_{N,2j}+d_{N,2j+1}\rightarrow \infty$, $\gamma_{N^c}(t_{N,2j+2})-\gamma_{N^c}(t_{N,2j})=d_{N,2j+1}\rightarrow\infty$, then, by Lemma \ref{dstoxm}, \begin{equation}\label{e:6.4} \mu(N_3)=\mu(N_5)=\mu(N_7)=0. \end{equation} Then \begin{equation}\label{e:6.5} \mu(N_4 \sqcup N_6)=\mu(N_8)=1 \end{equation} Let $$J=\omega\left(\frac{\zeta_n(N_1\cap(N_4\sqcup N_6))}{n}:n\in(N_4\sqcup N_6)+1\right).$$ By (\ref{e:6.5}) and Lemma \ref{dstoxm}, to prove (\ref{e:6.2}) it is enough to prove \begin{equation}\label{e:6.6} \inf J=p+(1-p)K^{-k},\,\,\sup J=q+(q-1)K^{-k}. \end{equation} Suppose $i\in N_4$. Then, for some $j\geq 0$, $i\in[t_{N,2j},t_{N,2j+1}-k+1)\cap\mathbb{N}\subset N$. Then $i+\{0,1,\cdots,k-1\}\subset N$ and $y|_{[i,i+k)}=z|_{[i,i+k)}=a|_{[i-f_{N,2j},i-f_{N,2j}+k)}$. Thus $\rho(\sigma_K^i(y),\sigma_K^i(z))\le K^{-k}$, which means $i\in N_1$. So \begin{equation}\label{e:6.7} N_4\subset N_1. \end{equation} Suppose $i\in N_6$. Then, $i\in [0,t_{N,0}-k+1)\cap\mathbb{N}\subset N^c$ or, for some $j\geq 0$, $i\in [t_{N,2j+1},t_{N,2j+2}-k+1)\cap\mathbb{N}\subset N^c$. 
Then $i+\{0,1,\cdots,k-1\}\subset N^c$ and $y|_{[i,i+k)}=b|_{[\gamma_{N^c}(i),\gamma_{N^c}(i)+k)}$, $z|_{[i,i+k)}=c|_{[\gamma_{N^c}(i),\gamma_{N^c}(i)+k)}$, Thus $$\rho(\sigma_K^i(y),\sigma_K^i(z))\leq K^{-k}\Leftrightarrow\rho(\sigma_K^{\gamma_{N^c}(i)}(b),\sigma_K^{\gamma_{N^c}(i)}(c))\leq K^{-k}.$$ So, if $i\in N_1$, then $\gamma_{N^c}(i)\in N_2$. Hence \begin{equation}\label{e:6.8} \gamma_{N^c}(N_1\cap N_6)\subset N_2\cap\gamma_{N^c}(N_6)=N_2\cap N_8. \end{equation} On the other hand, suppose $i\in N_8$. Then, $\gamma_{N^c}^{-1}(i)\in [0,t_{N,0}-k+1)\cap\mathbb{N}\subset N^c$ or, for some $j\ge 0$, $\gamma_{N^c}^{-1}(i)\in [t_{N,2j+1},t_{N,2j+2}-k+1)\cap\mathbb{N}\subset N^c$. Then $\gamma_{N^c}^{-1}(i)+\{0,1,\cdots,k-1\}\subset N^c$ and $y|_{[\gamma_{N^c}^{-1}(i),\gamma_{N^c}^{-1}(i)+k)}=b|_{[i,i+k)}$, $z|_{[\gamma_{N^c}^{-1}(i),\gamma_{N^c}^{-1}(i)+k)}=c|_{[i,i+k)}$, Thus $$\rho(\sigma_K^{\gamma_{N^c}^{-1}(i)}(y),\sigma_K^{\gamma_{N^c}^{-1}(i)}(z))\leq K^{-k}\Leftrightarrow\rho(\sigma_K^{i}(b),\sigma_K^{i}(c))\leq K^{-k}.$$ So, if $i\in N_2$, then $\gamma_{N^c}^{-1}(i)\in N_1$. Then \begin{equation}\label{e:6.9} \gamma_{N^c}^{-1}(N_2\cap N_8)\subset N_1\cap\gamma_{N^c}^{-1}(N_8)=N_1\cap N_6. \end{equation} Eqs (\ref{e:6.8}) and (\ref{e:6.9}) lead to \begin{equation}\label{e:6.10} N_1\cap N_6=\gamma_{N^c}^{-1}(N_2\cap N_8). \end{equation} Combining (\ref{e:6.7}) and (\ref{e:6.10}) we get \begin{equation}\label{e:6.11} N_1\cap (N_4\sqcup N_6)=N_4 \sqcup (N_1\cap N_6)=N_4 \sqcup \gamma_{N^c}^{-1}(N_2\cap N_8). \end{equation} Suppose $p=q=0$, i.e., $\mu(N)=0$. For $n\in \gamma_{N^c}^{-1}(N_2\cap N_8)$ we have $$\begin{aligned} &\frac{\zeta_n(\gamma_{N^c}^{-1}(N_2\cap N_8)}{n}\\ =&\frac{\#(\gamma_{N^c}^{-1}(N_2\cap N_8)\cap\{0,1,\cdots,n-1\})}{n}\\ =&\frac{\#((N_2\cap N_8)\cap\{0,1,\cdots,\gamma_{N^c}(n)-1\})}{n}\text{ (since $\gamma_{N^c}(n)\in N_2\cap N_8$)}\\ =&\frac{\zeta_{\gamma_{N^c}(n)}(N_2\cap N_8)}{n}\\ =&\frac{\zeta_{\gamma_{N^c}(n)}(N_2\cap N_8)}{\gamma_{N^c}(n)}\cdot\frac{\gamma_{N^c}(n)}{n}\\ \rightarrow& K^{-k}\text{ while }n\rightarrow\infty\\ &\text{(by $\mu(N_2)=K^{-k}$, $\mu(N_8)=\mu(N^c)=1$ and Lemma \ref{dstoxm})}. \end{aligned}$$ Applying Lemma \ref{dstoxm} for this limit we see $\mu(\gamma_{N^c}^{-1}(N_2\cap N_8))=K^{-k}$. Since $\mu(N_4)\leq\mu(N)=0$, we have \begin{equation}\label{e:6.12} \mu(N_4\sqcup\gamma_{N^c}^{-1}(N_2\cap N_8))=\mu(\gamma_{N^c}^{-1}(N_2\cap N_8))=K^{-k}. \end{equation} Now (\ref{e:6.6}) follows from (\ref{e:6.12}) and (\ref{e:6.11}). Suppose $q>0$. As $d_{N,j}\rightarrow\infty$, we may choose $j^*\ge k$ such that if $j\ge j^*$, then $d_{N,j}\ge k$. Suppose $j\ge j^*$. Then $t_{N,2j+1}-k\in N_4$ and $t_{N,2j+2}-k\in N_6$. 
Now $$\begin{aligned} &\frac{\zeta_{t_{N,2j+1}-k+1}(N_4\sqcup(N_1\cap N_6))}{t_{N,2j+1}-k+1}\\ =&\frac{\zeta_{t_{N,2j+1}-k+1}(N_4)}{t_{N,2j+1}-k+1} +\frac{\zeta_{t_{N,2j+1}-k+1}(N_1\cap N_6)}{t_{N,2j+1}-k+1}\\ =&\frac{\zeta_{t_{N,2j+1}-k+1}(N_4)}{t_{N,2j+1}-k+1} +\frac{\zeta_{t_{N,2j}-k+1}(N_1\cap N_6)}{t_{N,2j+1}-k+1} \text{ (since $[t_{N,2j}-k+1,t_{N,2j+1}-k+1)\cap N_6=\emptyset$)}\\ =&\frac{\zeta_{t_{N,2j+1}-k+1}(N_4)}{t_{N,2j+1}-k+1} +\frac{\zeta_{t_{N,2j}-k+1}(\gamma_{N^c}^{-1}(N_2\cap N_8))}{t_{N,2j+1}-k+1}\text{ (by (\ref{e:6.10}))}, \end{aligned}$$ where $$\begin{aligned} &\frac{\zeta_{t_{N,2j+1}-k+1}(N_4)}{t_{N,2j+1}-k+1}\\ =&\left(\frac{\zeta_{t_{N,2j+1}}(N)}{t_{N,2j+1}} -\frac{\zeta_{t_{N,2j+1}}(N_3)}{t_{N,2j+1}}\right) \cdot\frac{t_{N,2j+1}}{t_{N,2j+1}-k+1} \text{ (since $[t_{N,2j}-k+1,t_{N,2j+1})\cap\mathbb{N}\subset N_3$)}\\ \rightarrow&q\text{ while }i\rightarrow\infty\text{ (since $\frac{\zeta_{t_{N,2j+1}}(N)}{t_{N,2j+1}}\rightarrow q$ and $\mu(N_3)=0$)} \end{aligned}$$ and $$ \begin{aligned} &\frac{\zeta_{t_{N,2j}-k+1}(\gamma_{N^c}^{-1}(N_2\cap N_8))}{t_{N,2j+1}-k+1}\\ =&\frac{\zeta_{\gamma_{N^c}(t_{N,2j}-k)+1}(N_2\cap N_8)}{t_{N,2j+1}-k+1}\,\,(\text{since } \gamma_{N^c}(t_{N,2j}-k)\in N_8)\\ =&\frac{\zeta_{\gamma_{N^c}(t_{N,2j}-k)+1}(N_2)}{t_{N,2j+1}-k+1} -\frac{\zeta_{\gamma_{N^c}(t_{N,2j}-k)+1}(N_7)}{t_{N,2j+1}-k+1}\\ =&\left(\frac{\zeta_{\gamma_{N^c}(t_{N,2j}-k)+1}(N_2)}{\gamma_{N^c}(t_{N,2j}-k)+1} -\frac{\zeta_{\gamma_{N^c}(t_{N,2j}-k)+1}(N_7)}{\gamma_{N^c}(t_{N,2j}-k)+1}\right) \cdot\frac{\gamma_{N^c}(t_{N,2j}-k)+1}{t_{N,2j+1}-k+1}\\ =&\left(\frac{\zeta_{\gamma_{N^c}(t_{N,2j}-k)+1}(N_2)}{\gamma_{N^c}(t_{N,2j}-k)+1} -\frac{\zeta_{\gamma_{N^c}(t_{N,2j}-k)+1}(N_7)}{\gamma_{N^c}(t_{N,2j}-k)+1}\right) \cdot\frac{\zeta_{t_{N,2j}-k+1}(N^c)}{t_{N,2j+1}-k+1}\\ \rightarrow&K^{-k}(1-q)\text{ while }i\rightarrow\infty\,\,(\text{since } \mu(N_2)=K^{-k}, \mu(N_7)=0 \text{ and } \frac{\zeta_{t_{N,2j+1}}(N^c)}{t_{N,2j+1}}\rightarrow 1-q). 
\end{aligned} $$ $$\begin{aligned} &\frac{\zeta_{t_{N,2j+2}-k+1}(N_4\sqcup(N_1\cap N_6))}{t_{N,2j+2}-k+1}\\ =&\frac{\zeta_{t_{N,2j+2}-k+1}(N_4)}{t_{N,2j+2}-k+1} +\frac{\zeta_{t_{N,2j+2}-k+1}(N_1\cap N_6)}{t_{N,2j+2}-k+1}\\ =&\frac{\zeta_{t_{N,2j+2}-k+1}(N_4)}{t_{N,2j+2}-k+1} +\frac{\zeta_{t_{N,2j+2}-k+1}(\gamma_{N^c}^{-1}(N_2\cap N_8))}{t_{N,2j+2}-k+1}\text{ (by (\ref{e:6.10}))}, \end{aligned}$$ where $$\begin{aligned} &\frac{\zeta_{t_{N,2j+2}-k+1}(N_4)}{t_{N,2j+2}-k+1}\\ =&\left(\frac{\zeta_{t_{N,2j+2}}(N)}{t_{N,2j+2}} -\frac{\zeta_{t_{N,2j+2}}(N_3)}{t_{N,2j+2}}\right) \cdot\frac{t_{N,2j+2}}{t_{N,2j+2}-k+1} \text{ (since $[t_{N,2j+2}-k+1,t_{N,2j+2})\cap\mathbb{N}\subset N^c$)}\\ \rightarrow&p\text{ while }i\rightarrow\infty\text{ (since $\frac{\zeta_{t_{N,2j+2}}(N)}{t_{N,2j+2}}\rightarrow p$ and $\mu(N_3)=0$)} \end{aligned}$$ and $$ \begin{aligned} &\frac{\zeta_{t_{N,2j+2}-k+1}(\gamma_{N^c}^{-1}(N_2\cap N_8))}{t_{N,2j+2}-k+1}\\ =&\frac{\zeta_{\gamma_{N^c}(t_{N,2j+2}-k)+1}(N_2\cap N_8)}{t_{N,2j+2}-k+1}\,\,(\text{since } \gamma_{N^c}(t_{N,2j+2}-k)\in N_8)\\ =&\frac{\zeta_{\gamma_{N^c}(t_{N,2j+2}-k)+1}(N_2)}{t_{N,2j+2}-k+1} -\frac{\zeta_{\gamma_{N^c}(t_{N,2j+2}-k)+1}(N_7)}{t_{N,2j+2}-k+1}\\ =&\left(\frac{\zeta_{\gamma_{N^c}(t_{N,2j+2}-k)+1}(N_2)}{\gamma_{N^c}(t_{N,2j+2}-k)+1} -\frac{\zeta_{\gamma_{N^c}(t_{N,2j+2}-k)+1}(N_7)}{\gamma_{N^c}(t_{N,2j+2}-k)+1}\right) \cdot\frac{\gamma_{N^c}(t_{N,2j+2}-k)+1}{t_{N,2j+2}-k+1}\\ =&\left(\frac{\zeta_{\gamma_{N^c}(t_{N,2j+2}-k)+1}(N_2)}{\gamma_{N^c}(t_{N,2j+2}-k)+1} -\frac{\zeta_{\gamma_{N^c}(t_{N,2j+2}-k)+1}(N_7)}{\gamma_{N^c}(t_{N,2j+2}-k)+1}\right) \cdot\frac{\zeta_{t_{N,2j+2}-k+1}(N^c)}{t_{N,2j+2}-k+1}\\ \rightarrow&K^{-k}(1-p)\text{ while }i\rightarrow\infty\,\,(\text{since } \mu(N_2)=K^{-k}, \mu(N_7)=0 \text{ and } \frac{\zeta_{t_{N,2j+2}}(N^c)}{t_{N,2j+2}}\rightarrow 1-p). \end{aligned} $$ Then \begin{equation}\label{e:6.13} \lim_{j\rightarrow\infty}\frac{\zeta_{t_{N,2j}-k+1}(N_4\sqcup(N_1\cap N_6))}{t_{N,2j}-k+1}=p+(1-p)K^{-k} \end{equation} and \begin{equation}\label{e:6.14} \lim_{j\rightarrow\infty}\frac{\zeta_{t_{N,2j+1}-k+1}(N_4\sqcup(N_1\cap N_6))}{t_{N,2j+1}-k+1}=q+(1-q)K^{-k} \end{equation} So \begin{equation}\label{e:6.15} \inf J\leq p+(1-p)K^{-k},\,\,\sup J\ge q+(1-q)K^{-k}. \end{equation} Let $$J_0=\omega\left(\frac{\zeta_n(N_1\cap(N_4\sqcup N_6))}{n}:n\in N_4+1\right)$$ and $$J_1=\omega\left(\frac{\zeta_n(N_1\cap(N_4\sqcup N_6))}{n}:n\in N_6+1\right)$$ Then \begin{equation}\label{e:6.16} J=J_0\cup J_1. \end{equation} Suppose $t_{N,2j,r}\in N_4$ with $j\ge j^*+1$. Then $t_{N,2j}\leq t_{N,2j,r}<t_{N,2j+1}-k+1$. We have \begin{equation*} \begin{aligned} &\frac{\zeta_{t_{N,2j+1}-k+1}(N_4\sqcup(N_1\cap N_6))}{t_{N,2j+1}-k+1}\\ \geq &\frac{\zeta_{t_{N,2j+1}-k+1}(N_4\sqcup(N_1\cap N_6))-((t_{N,2j+1}-k+1)-(t_{N,2j}+r+1))}{t_{N,2j+1}-k+1-((t_{N,2j+1}-k+1)-(t_{N,2j}+r+1))}\\ =&\frac{\zeta_{t_{N,2j,r}+1}(N_4\sqcup(N_1\cap N_6))}{t_{N,2j,r}+1}\,\,(\text{since }[t_{N,2j}+r+1,t_{N,2j}-k+1)\cap\mathbb{N}\subset N_4)\\ =&\frac{\zeta_{t_{N,2j}}(N_4\sqcup(N_1\cap N_6))+r+1}{t_{N,2j}+r+1}\,\,(\text{since }[t_{N,2j},t_{N,2j}+r+1)\cap\mathbb{N}\subset N_4)\\ \geq&\frac{\zeta_{t_{N,2j}}(N_4\sqcup(N_1\cap N_6))}{t_{N,2j}}\\ =&\frac{\zeta_{t_{N,2j}-k+1}(N_4\sqcup(N_1\cap N_6))}{t_{N,2j}}\,\,(\text{since }[t_{N,2j}-k+1,t_{N,2j})\cap (N_4\sqcup N_6)=\emptyset)\\ =&\frac{\zeta_{t_{N,2j}-k+1}(N_4\sqcup(N_1\cap N_6))}{t_{N,2j}-k+1} \cdot\frac{t_{N,2j}-k+1}{t_{N,2j}}. 
\end{aligned} \end{equation*} That is, \begin{equation}\label{e:6.17} \begin{aligned} &\frac{\zeta_{t_{N,2j}-k+1}(N_4\sqcup(N_1\cap N_6))}{t_{N,2j}-k+1} \cdot\frac{t_{N,2j}-k+1}{t_{N,2j}}\\ \leq&\frac{\zeta_{t_{N,2j,r}+1}(N_4\sqcup(N_1\cap N_6))}{t_{N,2j,r}+1}\\ \leq&\frac{\zeta_{t_{N,2j+1}-k+1}(N_4\sqcup(N_1\cap N_6))}{t_{N,2j+1}-k+1}. \end{aligned} \end{equation} It follows from (\ref{e:6.17}), (\ref{e:6.13}) and (\ref{e:6.14}) that \begin{equation}\label{e:6.18} J_0\subset [p+(1-p)K^{-k},q+(1-q)K^{-k}]. \end{equation} Suppose $t_{N,2j+1,r}\in N_6$ with $j\ge j^*$. Then $t_{N,2j+1}\leq t_{N,2j+1,r}<t_{N,2j+2}-k+1$. We have \begin{equation*} \begin{aligned} &\frac{\zeta_{t_{N,2j+1,r}+1}(N_4\sqcup(N_1\cap N_6))}{t_{N,2j+1,r}+1}\\ =&\frac{\zeta_{t_{N,2j+1}-k+1}(N_4) +\zeta_{t_{N,2j+1,r}+1}(N_1\cap N_6)}{t_{N,2j+1}+r+1}\\ =&\frac{\zeta_{t_{N,2j+1}-k+1}(N_4)}{t_{N,2j+1}+r+1} +\frac{\zeta_{\gamma_{N^c}(t_{N,2j+1,r})+1}(N_2\cap N_8)}{\gamma_{N^c}(t_{N,2j+1,r})+1} \cdot\frac{\gamma_{N^c}(t_{N,2j+1,r})+1}{t_{N,2j+1}+r+1}\\ =&\frac{\zeta_{t_{N,2j+1}-k+1}(N_4)}{t_{N,2j+1}+r+1} +\frac{\zeta_{\gamma_{N^c}(t_{N,2j+1,r})+1}(N_2\cap N_8)}{\gamma_{N^c}(t_{N,2j+1,r})+1} \cdot\frac{\zeta_{t_{N,2j+1,r}}(N^c)+1}{t_{N,2j+1}+r+1}\\ =&\frac{\zeta_{t_{N,2j+1}-k+1}(N_4)}{t_{N,2j+1}+r+1} +\frac{\zeta_{\gamma_{N^c}(t_{N,2j+1,r})+1}(N_2\cap N_8)}{\gamma_{N^c}(t_{N,2j+1,r})+1} \cdot\frac{\zeta_{t_{N,2j+1}}(N^c)+r+1}{t_{N,2j+1}+r+1}\\ &(\text{since }[t_{N,2j+1},t_{N,2j+1,r})\cap\mathbb{N}\subset N^c) \end{aligned} \end{equation*} That is, \begin{equation}\label{e:6.19} \begin{aligned} &\frac{\zeta_{t_{N,2j+1,r}+1}(N_4\sqcup(N_1\cap N_6))}{t_{N,2j+1,r}+1}\\ =&\frac{\zeta_{t_{N,2j+1}-k+1}(N_4)}{t_{N,2j+1}+r+1} +\frac{\zeta_{\gamma_{N^c}(t_{N,2j+1,r})+1}(N_2\cap N_8)}{\gamma_{N^c}(t_{N,2j+1,r})+1} \cdot\frac{\zeta_{t_{N,2j+1}}(N^c)+r+1}{t_{N,2j+1}+r+1} \end{aligned} \end{equation} Let $\epsilon>0$. Define the function $$g:\left[0,\frac{q}{p}-1+\epsilon\right]\rightarrow\mathbb{R},\,\,g(s)=\frac{q+K^{-k}(1-q+s)}{1+s}.$$ Since $q>0$, $g$ is well defined (and, in the case $p=0$, $g(+\infty)=K^{-k}$). Note that $g(s)$ is non-increasing and hence \begin{equation}\label{e:6.20} \begin{aligned} g\left(\left[0,\frac{q}{p}-1+\epsilon\right]\right) &=\left[g\left(\frac{q}{p}-1+\epsilon\right),g(0)\right]\\ &=\left[\frac{pq+K^{-k}(1-p)q+K^{-k}p\epsilon}{q+p\epsilon},q+K^{-k}(1-q)\right]. \end{aligned} \end{equation} Since $q>0$, the limit $$\frac{t_{N,2j+2}}{t_{N,2j+1}} =\frac{\frac{e_{N,2j}}{t_{N,2j+1}}}{\frac{e_{N,2j}}{t_{N,2j+2}}} =\frac{\frac{e_{N,2j}}{t_{N,2j+1}}}{\frac{e_{N,2j+1}}{t_{N,2j+2}}} \rightarrow \frac{q}{p},\,\,\rightarrow\infty$$ holds (and equals $+\infty$ when $p=0$). Then $$\frac{d_{N,2j+1}}{t_{N,2j+1}}\rightarrow \frac{q}{p}-1,\,\,j\rightarrow\infty.$$ So we can take $j^{**}\geq j^*$ such that if $j\geq j^{**}+1$, then $$\frac{d_{N,2j+1}}{t_{N,2j+1}}\rightarrow \frac{q}{p}-1+\epsilon.$$ Let $j\ge j^{**}+1$ and $t_{N,2j+1,r}\in N_6$. Then $t_{N,2j+1,r}\in[t_{N,2j+1},t_{N,2j+2}-k+1)$. Write $$s=\frac{r+1}{t_{N,2j+1}}.$$ Then $0<s<\frac{q}{p}-1+\epsilon$. 
Using (\ref{e:6.19}) we get \begin{equation*} \begin{aligned} &\frac{\zeta_{t_{N,2j+1,r}+1}(N_4\sqcup(N_1\cap N_6))}{t_{N,2j+1,r}+1}-g(s)\\ =&\left( \frac{ \frac{ \zeta_{t_{N,2j+1}-k+1}(N_4) }{t_{N,2j+1}} }{1+s} -\frac{q}{1+s}\right)\\ &+\left(\frac{\zeta_{\gamma_{N^c}(t_{N,2j+1,r})+1}(N_2\cap N_8)}{\gamma_{N^c}(t_{N,2j+1,r})+1} \cdot \frac{ \frac{ \zeta_{t_{N,2j+1}}(N^c)+r+1 }{t_{N,2j+1}}+s }{1+s} -K^{-k}\cdot\frac{1-q+s}{1+s}\right)\\ \rightarrow&0\text{ independent from }r,\,\,\text{ while }j\rightarrow\infty. \end{aligned} \end{equation*} This limit together with (\ref{e:6.20}) lead to \begin{equation*} J_1\subset\left[\frac{pq+K^{-k}(1-p)q+K^{-k}p\epsilon}{q+p\epsilon},q+K^{-k}(1-q)\right]. \end{equation*} Letting $\epsilon\rightarrow0$ we get \begin{equation}\label{e:6.21} J_1\subset[p+K^{-k}(1-p),q+K^{-k}(1-q)]. \end{equation} Now (\ref{e:6.6}) follows from (\ref{e:6.18}), (\ref{e:6.21}), (\ref{e:6.16}) and (\ref{e:6.15}). \end{proof} For convenience of our later use, we construct $N\in\mathcal{M}([p,q])$ in Example \ref{l:2.24} below. Our constructions are similar to those in \cite{WS}. \begin{example}\label{l:2.24} Let $[p,q]\in\mathcal{C}([0,1])$. We choose a $\delta\in(0,1)$, set $c_0=0$ and $c_1=1$, and choose a $c_i\in(0,1)$, $i\geq 2$, such that \begin{equation}\label{e:2.22} \text{for }i\ge 1,\,\,c_{2i+1}=\left\{ \begin{aligned} &\frac{1}{\sqrt{2i+1}},&\text{if }&q=0,\\ &q-\frac{q}{\sqrt{2i+1}},&\text{if }&0<q\leq 1, \end{aligned} \right. \end{equation} and \begin{equation}\label{e:2.23} c_{2i+2}<c_{2i+1}-\frac{\delta}{\sqrt{2i+2}},\,\,i\ge 0,\text{ and }c_{2i}\rightarrow p,\,\,i\rightarrow\infty. \end{equation} Set $t_0=0$ and $t_1=1$. Define iteratively $t_{2i+2}$ to be the least integer satisfying $t_{2i+2}\geq t_{2i+1}+1$ and \begin{equation}\label{e:2.24} \frac{\sum_{0\leq 2j<2i+2}(t_{2j+1}-t_{2j})}{t_{2i+2}}<c_{2i+2} \end{equation} and $t_{2i+3}$ the least integer satisfying $t_{2i+3}\ge t_{2i+2}+1$ and \begin{equation}\label{e:2.25} \frac{\sum_{0\leq 2j<2i+3}(t_{2j+1}-t_{2j})}{t_{2i+3}}\ge c_{2i+3}. \end{equation} Let $$N=\left(\bigcup_{j\ge 0}[t_{2j},t_{2j+1})\right)\cap\mathbb{N}.$$ Then for $j\ge 0$, $t_{N,j}=t_j$. By $(\ref{e:2.25})$, $(\ref{e:2.22})$, $(\ref{e:2.24})$ and $(\ref{e:2.23})$, \begin{equation}\label{e:2.26} \frac{e_{N,2j}}{t_{N,2j+1}}\rightarrow q \text{ and }\frac{e_{N,2j+1}}{t_{N,2j+2}}\rightarrow p\text{ while }j\rightarrow\infty. \end{equation} and \begin{equation}\label{e:2.27} \left\{ \begin{aligned} &\frac{e_{N,2j+2}}{t_{N,2j+3}} =\frac{e_{N,2j+1}++d_{N,2j+2}}{t_{N,2j+2}+d_{N,2j+2}} \geq c_{2j+3},\\ &\frac{e_{N,2j+1}}{t_{N,2j+2}} <c_{2j+1}-\frac{\delta}{\sqrt{2j+2}}, \end{aligned} \right. \end{equation} \begin{equation}\label{e:2.28} \left\{ \begin{aligned} &\frac{f_{N,2j+1}}{t_{N,2j+2}} =\frac{f_{N,2j+1}+d_{N,2j+1}}{t_{N,2j+1}+d_{N,2j+1}} >1-c_{2j+1}+\frac{\delta}{\sqrt{2j+2}},\\ &\frac{f_{N,2j}}{t_{N,2j+1}} \leq 1-c_{2j+1}. \end{aligned} \right. \end{equation} From $(\ref{e:2.27})$ and $(\ref{e:2.22})$ we get \begin{equation}\label{e:2.29} \begin{aligned} d_{N,2j+2}&>\frac{c_{2j+3}-c_{2j+1}+\frac{\delta}{\sqrt{2j+2}}}{1-c_{2j+3}}\cdot t_{N,2j+2}\\ &\geq \frac{\frac{\delta}{\sqrt{2j+2}} -\left|\frac{1}{\sqrt{2j+3}} -\frac{1}{\sqrt{2j+1}}\right|}{1}\cdot(2j+2)\\ &\rightarrow \infty,\,\,j\rightarrow\infty. 
\end{aligned} \end{equation} From $(\ref{e:2.28})$ we get \begin{equation}\label{e:2.30} \begin{aligned} d_{N,2j+1}&>\frac{\frac{\delta}{\sqrt{2j+2}}}{1-c_{2j+1} +\frac{\delta}{\sqrt{2j+2}}}\cdot t_{N,2j+1}\\ &\geq\frac{\frac{\delta}{\sqrt{2j+2}}}{1+\delta}\cdot (2j+1)\\ &\rightarrow\infty,\,\,j\rightarrow\infty. \end{aligned} \end{equation} By $(\ref{e:2.26})$, $(\ref{e:2.29})$ and $(\ref{e:2.30})$, $N\in\mathcal{M}([p,q])$. Suppose $q>0$. Since $$e_{N,2i}-1<c_{2i+1}(t_{N,2i+1}-1) =\left(q-\frac{q}{\sqrt{2i+1}}\right)(t_{N,2i+1}-1)$$ for large $i$, then, for large $i$, $$\begin{aligned} &f_{N,2i}-(1-q)t_{N,2i+1}\\ =&f_{N,2i}-t_{N,2i+1}+qt_{N,2i+1}\\ =&qt_{N,2i+1}-e_{N,2i}\\ >&qt_{N,2i+1}-\left(q-\frac{q}{\sqrt{2i+1}}\right)(t_{N,2i+1}-1)-1\\ =&\frac{q}{\sqrt{2i+1}}\cdot t_{N,2i+1}+\left(q-\frac{q}{\sqrt{2i+1}}\right)-1\\ \geq&\frac{q}{\sqrt{2i+1}}(2i+1)-1\\ \rightarrow&+\infty,\,\,i\rightarrow\infty. \end{aligned}$$ That is, \begin{equation}\label{e:2.31} \lim_{i\rightarrow\infty}(f_{N,2i}-(1-q)t_{N,2i+1})=+\infty\text{ while }q>0. \end{equation} \end{example} \begin{lemma}\label{l:6.4} Let $0<q\leq 1$ and $0\leq p\leq q$. Then $\mathscr{H}^{2-q}(E_{\sigma_K}([p,q]))=+\infty$. \end{lemma} \begin{proof} Let $G=G_{\mathscr{H}^1,\sigma_{K^2}}$. Let $N\in\mathcal{M}([0,1])$ be as in Example \ref{l:2.24}. Define $$X=\{\Phi_{N,x}(y):x\in E_K^\mathbb{N}, y\in G\}.$$ Lemma \ref{l:6.3} implies that $\pi_K(E_{\sigma_K}([p,q]))\supset X$. Then, by Lemma \ref{detosw}, to prove $\mathscr{H}^{2-q}(E_{\sigma_K}([p,q]))=+\infty$ it is enough to prove $\mathscr{H}^{1-\frac{q}{2}}(X)=+\infty$. For $i\geq0$ let $$V_i=\{v\in \Sigma_{K,i}:[v]\cap X\neq\emptyset\}.$$ Since $G$ is dense in $\Sigma_{K^2}$, by the definition of $X$, we can check \begin{equation}\label{e:6.22} V_i=\{v\in \Sigma_{K,i}:v_j\in E_K\text{ for }j\in N\cap \{0,1,\cdots,i-1\}\}. \end{equation} Suppose $\omega\in V_{t_{N,2j+1,r}+1}$. Define the set $$G_\omega=\{y\in G:y_i=\omega_{\gamma_{N^c}^{-1}(i)}\text{ for }i\in\{0,1,\cdots,\gamma_{N^c}(t_{N,2j+1,r})\}\}.$$ Let $v\in W_k$ be the longest word with $G_\omega\subset[v]$. Then $$|v|=\gamma_{N^c}(t_{N,2j+1,r})+1=\zeta_{t_{N,2j+1,r}+1}(N^c)=f_{N,2j}+r+1.$$ So \begin{equation}\label{e:6.23} \mathscr{H}^1(G_\omega)=\mathscr{H}^1(G\cap[v])=\mathscr{H}^1([v])=(K^2)^{-(f_{N,2j}+r+1)}. \end{equation} Define the function $h_\omega:G\rightarrow\mathbb{R}$ by $$h_\omega(y)=\left\{ \begin{aligned} &K^{-e_{N,2j}},&&y\in G_\omega,\\ &0,&&\text{otherwise}. \end{aligned} \right. $$ Clearly $h_\omega$ is a continuous function on $G$ and, by (\ref{e:6.23}), \begin{equation}\label{e:6.24} \int_G h_\omega \mathrm{d} \mathscr{H}^1=K^{-e_{N,2j}}(K^2)^{-(f_{N,2j}+r+1)}. \end{equation} Moreover, \begin{equation}\label{e:6.25} \begin{aligned} |[\omega]|^{1-\frac{q}{2}}&=(K^2)^{-(t_{N,2j+1,r}+1)(1-\frac{q}{2})}\\ &=(K^2)^{-(e_{N,2j}+f_{N,2j}+r+1)(1-\frac{q}{2})}\\ &=K^{-e_{N,2j}}(K^2)^{-(f_{N,2j}+r+1)}K^{f_{N,2j}q-e_{N,2j}(1-q)+(r+1)q}\\ &\geq K^{-e_{N,2j}}(K^2)^{-(f_{N,2j}+r+1)}K^{f_{N,2j}q-e_{N,2j}(1-q)}\\ &=K^{f_{N,2j}q-e_{N,2j}(1-q)}\int_G h_\omega \mathrm{d} \mathscr{H}^1\\ &=K^{t_{N,2j+1}q-e_{N,2j}}\int_G h_\omega \mathrm{d} \mathscr{H}^1\\ &=K^{f_{N,2j}-t_{N,2j+1}(1-q)}\int_G h_\omega \mathrm{d} \mathscr{H}^1. \end{aligned} \end{equation} Let $M>0$. Because of (\ref{e:2.31}), we may choose $j^*$ such that \begin{equation}\label{e:6.26} K^{f_{N,2j}-t_{N,2j+1}(1-q)}\geq M,\,\,j\geq j^*. 
\end{equation} Then, for $\omega\in V_{t_{N,2j+1,r}+1}$, it follows from (\ref{e:6.25}) and (\ref{e:6.26}) that \begin{equation}\label{e:6.27} |[\omega]|^{1-\frac{q}{2}}\geq M\int_G h_\omega \mathrm{d} \mathscr{H}^1,\,\,j\geq j^*. \end{equation} Let $k\geq t_{N,2j^*+1}$, $\mathcal{B}\in \mathcal{C}_{X,(K^2)^{-k}}$ and $\epsilon>0$. Since $X$ is perfect, then we may choose $\mathcal{B}_0\in\mathscr{C}_{X,K^{-k}}$ such that for each $B_0\in\mathcal{B}_0$ we have $\#(B_0)\geq 2$, for each $B\in\mathcal{B}_0$ with $B\subset B_0$ and \begin{equation}\label{e:6.28} \sum_{B\in\mathcal{B}_0}|B|^{1-\frac{q}{2}}<\sum_{B\in \mathcal{B}}|B|^{1-\frac{q}{2}}+\epsilon. \end{equation} Let $\mathcal{B}_1=\{[\omega_B]\cap X:B\in\mathcal{B}_0\}$, where $[\omega_B]$ is the longest column in $\Sigma_K$ that contains $B$. Since each $|[\omega_B]|=|B|$, \begin{equation}\label{e:6.29} \sum_{B\in \mathcal{B}_1}|B|^{1-\frac{q}{2}}\leq\sum_{B\in \mathcal{B}_0}|B|^{1-\frac{q}{2}}. \end{equation} Note that for any $v,\omega\in\{\omega_B:B\in \mathcal{B}_0\}$, one of the three statements $$[v]\subset[\omega],\,\,[\omega]\subset[v],\,\,[v]=[\omega]$$ is true. Then we may choose a subcover $\mathcal{B}_2\subset\mathcal{B}_1$ with $[\omega_B]$, $B\in \mathcal{B}_2$, pairwise disjoint. Then \begin{equation}\label{e:6.30} \sum_{B\in\mathcal{B}_2}|B|^{1-\frac{q}{2}}\leq\sum_{B\in\mathcal{B}_1}|B|^{1-\frac{q}{2}} \end{equation} Suppose $B\in\mathcal{B}_2$ with $|\omega_B|=t_{N,2j,r}+1\in N+1$. Put $$\mathcal{C}_B=\{[\omega]\cap X:\omega\in V_{t_{N,2j,r}+1},\omega|_{[0,t_{N,2j,r}+1)}=\omega_B\}.$$ Then $\bigcup\mathcal{C}_B=B$. By (\ref{e:6.22}), it follows that $\mathcal{C}_B$ contains $K^{d_{N,2j}-r}$ pairwise disjoint members of diameter $(K^2)^{-t_{N,2j+1}}$. Then \begin{equation}\label{e:6.31} \begin{aligned} \sum_{C\in\mathcal{C}_B}|C|^{1-\frac{q}{2}}&=K^{d_{N,2j}-r}(K^2)^{-t_{N,2j+1}(1-\frac{q}{2})}\\ &=K^{d_{N,2j}-r}(K^2)^{-t_{N,2j+1,r}(1-\frac{q}{2})}(K^2)^{-(d_{N,2j}-r)(1-\frac{q}{2})}\\ &=(K^2)^{-t_{N,2j+1,r}(1-\frac{q}{2})}(K^2)^{-(d_{N,2j}-r)(\frac{1}{2}-\frac{q}{2})}\\ &\leq(K^2)^{-t_{N,2j+1,r}(1-\frac{q}{2})}\\ &=|B|^{1-\frac{q}{2}}. \end{aligned} \end{equation} Let $$\mathcal{B}_3=\{C:C\in\mathcal{C}_B,\,\,B\in\mathcal{B}_2,\,\,|\omega_B|\in N+1\}\cup\{B:B\in \mathcal{B}_2,\,\,|\omega_B|\in N^c+1\}.$$ Then $\mathcal{B}_3$ is a cover of $X$ and a set of pairwise disjoint subsets of $X$, $|\omega_B|\in N^c+1$ for $B\in \mathcal{B}_3$ and, by (\ref{e:6.31}), \begin{equation}\label{e:6.32} \sum_{B\in \mathcal{B}_3}|B|^{1-\frac{q}{2}}\leq \sum_{B\in \mathcal{B}_2}|B|^{1-\frac{q}{2}} \end{equation} Let $j_0\leq j_1$. Suppose $y\in G$ and $\omega\in V_{t_{N,2j_0+1,r+1}}$ such that $$y_i=\omega_{\gamma^{-1}_{N^c}(i)}\text{ for }i\in\{0,1,\cdots,\gamma_{N^c}(t_{N,2j_0+1,r})\}$$ and $V_\omega$ is a maximal subset of $\bigcup_{t_{N,2j_1+1}+1\leq t \leq t_{N,2j_1+2}+1}V_t$ such that \begin{description} \item{(i)} for $v\in V_\omega$, $|v|\geq |\omega|$ and $v|_{[0,|\omega|)}=\omega$, \item{(ii)} for $v\in V_\omega$, $$y_i=\omega_{\gamma^{-1}_{N^c}(i)}\text{ for }i\in\{0,1,\cdots,\gamma_{N^c}(|v|)\},$$ \item{(iii)} for $v_0\neq v_1\in V_\omega$, $[v_0]\cap[v_1]=\emptyset$. \end{description} Then $$\#(V_\omega)=K^{\#(N\cap[t_{N,2j_0+2},t_{N,2j_1+1}))}=K^{e_{N,2j_1}-e_{N,2j_0}}.$$ Thus \begin{equation}\label{e:6.33} \sum_{v\in V_\omega}h_v(y) =K^{e_{N,2j_1}-e_{N,2j_0}}K^{-e_{N,2j_1}} =K^{-e_{N,2j_0}}=h_\omega(y). \end{equation} Let $y\in G$. 
Put $X_y=\{\Phi_{N,x}(y):x\in E_K^\mathbb{N}\}$ and $\mathcal{C}_y=\{B\in \mathcal{B}_3:B\cap X_y\neq\emptyset\}$. Since $X_y$ is compact, $\mathcal{C}_y$ covers $X_y$ and the members of $\mathcal{C}_y$ are open in $X$ and are pairwise disjoint. Hence $\mathcal{C}_y$ is finite. Suppose $\sup\{|\omega_B|:B\in\mathcal{C}_x\}\in [t_{N,2j_1+1}+1,t_{N,2j_1+2}+1)$. For each $\omega\in \{\omega_B:B\in \mathcal{C}_x\}$ choose $V_\omega$ to be a maximal subset of $\bigcup_{t_{N,2j_1+1}+1\leq t \leq t_{N,2j_1+2}+1}V_t$ satisfying (i), (ii) and (iii). Using (\ref{e:6.33}) we have \begin{equation}\label{e:6.34} \sum_{B\in \mathcal{C}_x}h_{\omega_B}(y)=\sum_{B\in \mathcal{C}_x}\sum_{v\in V_{\omega_B}}h_v(y)=K^{e_{N,2j_1}}K^{-e_{N,2j_1}}=1. \end{equation} Eq. (\ref{e:6.34}) leads to \begin{equation}\label{e:6.35} \sum_{B\in\mathcal{B}_3}h_{\omega_B}(y)\equiv 1\text{ for }y\in G. \end{equation} Note that, for $B\in\mathcal{B}_3$, $|\omega_B|\geq t_{N,2j^*+1}+1$. Then it follows from (\ref{e:6.27}) and (\ref{e:6.35}) that \begin{equation}\label{e:6.36} \begin{aligned} \sum_{B\in\mathcal{B}_3}|B|^{1-\frac{q}{2}}&=\sum_{B\in\mathcal{B}_3}|[\omega_B]|^{1-\frac{q}{2}}\geq M\int_G h_\omega \mathrm{d} \mathscr{H}^1\\ &=M\int_G \left(\sum_{B\in\mathcal{B}_3}h_{\omega_B}\right) \mathrm{d} \mathscr{H}^1=M\mathscr{H}^1(G)=M. \end{aligned} \end{equation} By (\ref{e:6.28}), (\ref{e:6.29}), (\ref{e:6.30}), (\ref{e:6.32}) and (\ref{e:6.36}), \begin{equation}\label{e:6.37} \sum_{B\in\mathcal{B}}|B|^{1-\frac{q}{2}}\geq M-\epsilon. \end{equation} Since (\ref{e:6.37}) holds for each $B\in \mathscr{C}_{X,(K^2)^{-k}}$, \begin{equation}\label{e:6.38} \mathscr{H}_{(K^2)^{-k}}^{1-\frac{q}{2}}(X)\geq M-\epsilon. \end{equation} Letting $k\rightarrow\infty$ in (\ref{e:6.38}) and then letting $M\rightarrow\infty$, we get $\mathscr{H}^{1-\frac{q}{2}}(X)=+\infty$. \end{proof} We sum up Lemma \ref{l:6.2} and Lemma \ref{l:6.4} into the following theorem. \begin{theorem}\label{l:6.5} $\mathscr{H}^2(E_{\sigma_K}([0,0]))=1$ and, for $0<q\leq 1$ and $0\leq p\leq q$, $\mathscr{H}^{2-q}(E_{\sigma_K}([p,q]))=+\infty$. \end{theorem} To calculate the Hausdorff measure of $D_{\sigma_K}([p,q])$, we need some lemmas. \begin{lemma}\label{l:2.20} Suppose $0\leq l<r$, $n\geq 1$ and $0\leq j_i<n$, $i=0,1$. Then $$|\#((n\mathbb{N}+j_0)\cap[l,r))-\#((n\mathbb{N}+j_1)\cap[l,r))|\leq 1.$$ \end{lemma} \begin{proof} Let $k$ be the maximal integer satisfying $l+kn\leq r$. Then $$\#((n\mathbb{N}+j_0)\cap[l,l+kn))=\#((n\mathbb{N}+j_1)\cap[l,l+kn))$$ and $$\#((n\mathbb{N}+j_0)\cap[l+kn,r)),\#((n\mathbb{N}+j_1)\cap[l+kn,r))\in\{0,1\}.$$ \end{proof} \begin{lemma}\label{l:2.21} Let $N\subset\mathbb{N}$ with both $N$ and $N^c$ infinite. Suppose $t_k\rightarrow\infty$ satisfy \begin{equation}\label{e:2.16} \lim_{k\rightarrow\infty}\frac{\zeta_{t_k}(N)}{t_k}=p \text{ and }\lim_{k\rightarrow\infty}\frac{\zeta_{t_k}(L_N)}{t_k}=0. \end{equation} Then, for $n\geq 1$ and $0\leq j<n$, \begin{equation}\label{e:2.17} \lim_{k\rightarrow\infty}\frac{\zeta_{t_k}(N\cap(n\mathbb{N}+j))}{t_k}=\frac{p}{n}. \end{equation} \end{lemma} \begin{proof} Let $n\geq 1$ and $0\leq j<n$. Note that $N\cap\{0,1,\cdots,t_k-1\}$ is the union of $\zeta_{t_k}(L_N)$ integer intervals. Then, for $0\leq j'<n$, by Lemma \ref{l:2.20}, \begin{equation}\label{e:2.18} |\zeta_{t_k}(N\cap(n\mathbb{N}+j))-\zeta_{t_k}(N\cap(n\mathbb{N}+j'))|\leq\zeta_{t_k}(L_N). \end{equation} Sum (\ref{e:2.18}) over $0\leq j'<n$ and divide the resulting formula by $nt_k$. 
We get \begin{equation}\label{e:2.19} \left|\frac{\zeta_{t_k}(N\cap(n\mathbb{N}+j))}{t_k}-\frac{\zeta_{t_k}(N)}{nt_k}\right|\leq\frac{\zeta_{t_k}(L_N)}{nt_k}. \end{equation} Now (\ref{e:2.17}) follows from (\ref{e:2.19}) and (\ref{e:2.16}). \end{proof} \begin{lemma}\label{l:6.6} For $0\leq p\leq q<1$, $\mathscr{H}^{2-q}(D_{\sigma_K}([p,q]))=0$. \end{lemma} \begin{proof} Suppose $0\leq p\leq q<1$. Let $X=\pi_K(D_{\sigma_K}([p,q]))\subset \Sigma_{K^2}$. Then, by Lemma \ref{detosx}, to prove $\mathscr{H}^{2-q}(D_{\sigma_K}([p,q]))=0$ it is enough to prove $\mathscr{H}^{1-\frac{q}{2}}(X)=0$. For $n\geq 1 $ let \begin{equation}\label{e:6.39} X_n=\{x\in X:\mu^*(N_{\sigma_{K^2}}(x,[E_K^k]))\equiv q\text{ for }k\geq n\}. \end{equation} Then \begin{equation}\label{e:6.40} X_n\subset X_{n+1}\text{ for } n\geq 1 \text{ and }X=\bigcup_{n\geq 1}X_n. \end{equation} Fix $n\geq 1$ and $x\in X_n$. For $k\geq 1$ write $$N_k=N_{\sigma_{K^2}}(x,[E_K^k])=\{i\geq 0:x|_{[i,i+k)}\in E_K^k\}.$$ For $m\geq k\geq 1$, $$\begin{aligned} i\in N_m&\Leftrightarrow x|_{[i,i+m)}\in E_K^m\Leftrightarrow x|_{[i+j,i+j+k)}\in E_K^k\text{ for }j\in\{0,1,\cdots,m-k\}\\ &\Leftrightarrow i+\{0,1,\cdots,m-k\}\subset N_k.\\ \end{aligned}$$ Then it follows from Lemma \ref{detvoe} that \begin{equation}\label{e:6.41} N_m=N_k\setminus (R_{N_k}-\{0,1,\cdots,m-k\})\text{ for }m\geq k\geq1. \end{equation} Since $x\in X_n$, by (\ref{e:6.40}) and (\ref{e:6.39}), we may choose $t_k\rightarrow \infty$ such that \begin{equation}\label{e:6.42} \lim_{k\rightarrow\infty}\frac{\zeta_{t_k}(N_k)}{t_k}=q. \end{equation} For $m\geq n$, since $N_m\supset N_k$ for $k\geq m$, using (\ref{e:6.42}) we have $$q\geq \limsup_{k\rightarrow\infty}\frac{\zeta_{t_k}(N_m)}{t_k} \geq\liminf_{k\rightarrow\infty}\frac{\zeta_{t_k}(N_m)}{t_k} \geq\lim_{k\rightarrow\infty}\frac{\zeta_{t_k}(N_k)}{t_k}=q.$$ Then \begin{equation}\label{e:6.43} \frac{\zeta_{t_k}(N_m)}{t_k}=q\text{ for }m\geq n. \end{equation} By (\ref{e:6.41}), $N_m=N_{m+1}\sqcup(R_{N_m}-1)$. Then \begin{equation}\label{e:6.44} 0\leq \frac{\zeta_{t_k}(R_{N_m})}{t_k} \leq\frac{\zeta_{t_k}(R_{N_m}-1)}{t_k} =\frac{\zeta_{t_k}(N_m)}{t_k}-\frac{\zeta_{t_k}(N_{m+1})}{t_k} \end{equation} It follows from (\ref{e:6.44}) and (\ref{e:6.43}) that \begin{equation}\label{e:6.45} \lim_{k\rightarrow\infty}\frac{\zeta_{t_k}(R_{N_m})}{t_k}=0\text{ for }m\geq n. \end{equation} Since $\zeta_{t_k}(R_{N_m})\leq\zeta_{t_k}(L_{N_m})\leq\zeta_{t_k}(R_{N_m})+1$, we know from (\ref{e:6.45}) that \begin{equation}\label{e:6.46} \lim_{k\rightarrow\infty}\frac{\zeta_{t_k}(L_{N_m})}{t_k} =\lim_{k\rightarrow\infty}\frac{\zeta_{t_k}(R_{N_m})}{t_k}=0\text{ for }m\geq n. \end{equation} Let $m\geq n$. It follows from (\ref{e:6.43}), (\ref{e:6.46}) and Lemma \ref{l:2.21} that \begin{equation}\label{e:6.47} \lim_{k\rightarrow\infty}\frac{\zeta_{t_k}(N_m\cap m\mathbb{N})}{t_k}=\frac{q}{m}. \end{equation} Note that $$N_{\sigma_{K^{2m}}}(\tau_{K^2,m}(x),[E_{K,m}])\cap \left[0,\left\lfloor\frac{t_k}{m}\right\rfloor\right)=\frac{N_m\cap m\mathbb{N} \cap[0,t_k)}{m},$$ where $\lfloor a\rfloor$ denotes the maximal integer no larger than $a$ for $a\in\mathbb{R}$. 
Then \begin{equation}\label{e:6.48} \begin{aligned} &\frac { \zeta_{\left\lfloor\frac{t_k}{m}\right\rfloor} ( N_{\sigma_{K^{2m}}}(\tau_{K^2,m}(x),[E_{K,m}]) ) } { \left\lfloor\frac{t_k}{m}\right\rfloor }\\ =&\frac {\#(N_m\cap m\mathbb{N} \cap[0,t_k))} {\left\lfloor\frac{t_k}{m}\right\rfloor}\\ =&\frac {\zeta_{t_k}(N_m\cap m\mathbb{N})} {\frac{t_k}{m}}\cdot \frac {\frac{t_k}{m}} {\left\lfloor\frac{t_k}{m}\right\rfloor}. \end{aligned} \end{equation} It follows from (\ref{e:6.48}) and (\ref{e:6.47}) that \begin{equation}\label{e:6.49} \lim_{k\rightarrow\infty}\frac { \zeta_{\left\lfloor\frac{t_k}{m}\right\rfloor} ( N_{\sigma_{K^{2m}}}(\tau_{K^2,m}(x),[E_{K,m}]) ) } { \left\lfloor\frac{t_k}{m}\right\rfloor } =q. \end{equation} In particular, for $s\geq 1$, \begin{equation}\label{e:6.50} \lim_{k\rightarrow\infty}\frac { \zeta_{\left\lfloor\frac{t_k}{sn}\right\rfloor} ( N_{\sigma_{K^{2sn}}}(\tau_{K^2,sn}(x),[E_{K,sn}]) ) } { \left\lfloor\frac{t_k}{sn}\right\rfloor } =q. \end{equation} Now put $M_n=N_n^c$. Then, by (\ref{e:6.43}), \begin{equation}\label{e:6.51} \lim_{k\rightarrow\infty}\frac{\zeta_{t_k}(M_n)}{t_k}=1-q. \end{equation} Since $R_{M_n}\subset L_{M_n}$ and $L_{M_n}\subset R_{N_n}\cup\{0\}$, \begin{equation}\label{e:6.52} \lim_{k\rightarrow\infty}\frac{\zeta_{t_k}(L_{M_n})}{t_k} =\lim_{k\rightarrow\infty}\frac{\zeta_{t_k}(R_{M_n})}{t_k}=0. \end{equation} Let $s\geq 1$. Write $$M_{n,s}=M_n\setminus(R_{M_n}-\{0,1,\cdots,sn-1\})=\{i\geq0:i+\{0,1,\cdots,sn-1\}\subset M_n\}.$$ Since $L_{M_{n,s}}\subset L_{M_n}$ and $R_{M_{n,s}}\subset R_{M_n}-sn+1$, we have by (\ref{e:6.52}) that \begin{equation}\label{e:6.53} \lim_{k\rightarrow\infty}\frac{\zeta_{t_k}(L_{M_{n,s}})}{t_k} =\lim_{k\rightarrow\infty}\frac{\zeta_{t_k}(R_{M_{n,s}})}{t_k}=0. \end{equation} Note that $M_{n,s}\subset M_n\subset M_{n,s}\sqcup(R_{M_n}-1-\{0,1,\cdots,sn-1\})$. Then \begin{equation}\label{e:6.54} \begin{aligned} \frac{\zeta_{t_k}(M_{n,s})}{t_k} &\leq\frac{\zeta_{t_k}(M_n)}{t_k} \leq\frac{\zeta_{t_k}(M_{n,s})}{t_k} +\frac{\zeta_{t_k}(R_{M_n}-1-\{0,1,\cdots,sn-1\})}{t_k}\\ &\leq \frac{\zeta_{t_k}(M_{n,s})}{t_k} +sn\cdot\frac{\zeta_{t_k}(R_{M_n}-1)}{t_k}. \end{aligned} \end{equation} Eqs (\ref{e:6.54}), (\ref{e:6.51}) and (\ref{e:6.53}) lead to \begin{equation}\label{e:6.55} \lim_{k\rightarrow\infty}\frac{\zeta_{t_k}(M_{n,s})}{t_k}=1-q. \end{equation} It follows from (\ref{e:6.55}), (\ref{e:6.53}) and Lemma \ref{l:2.21} that \begin{equation}\label{e:6.56} \lim_{k\rightarrow\infty}\frac{\zeta_{t_k}(M_{n,s})}{t_k}=\frac{1-q}{sn}. \end{equation} Similar to (\ref{e:6.48}), we have \begin{equation}\label{e:6.57} \begin{aligned} &\frac { \zeta_{\left\lfloor\frac{t_k}{sn}\right\rfloor} ( N_{\sigma_{K^{2sn}}}(\tau_{K^2,sn}(x),[\tau_{K^{2n}}(F^s_{K,n})]) ) } { \left\lfloor\frac{t_k}{sn}\right\rfloor }\\ =&\frac {\#(M_{n,s}\cap sn\mathbb{N} \cap[0,t_k))} {\left\lfloor\frac{t_k}{sn}\right\rfloor}\\ =&\frac {\zeta_{t_k}(M_{n,s}\cap sn\mathbb{N})} {\frac{t_k}{sn}}\cdot \frac {\frac{t_k}{sn}} {\left\lfloor\frac{t_k}{sn}\right\rfloor}. \end{aligned} \end{equation} Eqs (\ref{e:6.57}) and (\ref{e:6.56}) lead to \begin{equation}\label{e:6.58} \lim_{k\rightarrow\infty}\frac { \zeta_{\left\lfloor\frac{t_k}{sn}\right\rfloor} ( N_{\sigma_{K^{2sn}}}(\tau_{K^2,sn}(x),[\tau_{K^{2n}}(F^s_{K,n})]) ) } { \left\lfloor\frac{t_k}{sn}\right\rfloor }=1-q. 
\end{equation} Write $$K_{sn,0}=E_{K,sn},\quad K_{sn,1}=\tau_{K^{2n}}(F_{K,n}^s),\quad K_{sn,2}=\{0,1,\cdots K^{2sn}-1\}\setminus (K_{sn,0}\sqcup K_{sn,1})$$ and $$\mathcal{K}_{sn}=(K_{sn,0},K_{sn,1},K_{sn,2}).$$ Let $$r=(q,1-q,0).$$ By (\ref{e:6.50}) and (\ref{e:6.58}), $\tau_{K^2,sn}(x)\in V_{\mathcal{K}_{sn},r}$. As $x\in X_n$ was arbitrary, \begin{equation}\label{e:6.59} \tau_{K^2,sn}(X_n)\subset V_{\mathcal{K}_{sn},r}. \end{equation} Using Lemma \ref{detosx} and Lemma \ref{detoso} for (\ref{e:6.59}) we have $$ \begin{aligned} &\dim_HX_n\\ \leq&g_{\mathcal{K}_{sn}}(r)\\ =&\frac{-q\ln\frac{q}{\#(E_{K,sn})}-(1-q)\ln\frac{1-q}{\#(F^s_{K,n})}} {\ln K^{2sn}}\\ =&\frac{-q\ln\frac{q}{K^{sn}}-(1-q)\ln\frac{1-q}{(K^{2n}-K^n)^s}} {\ln K^{2sn}}\\ =&\frac{-q\ln q+q\ln K^{sn}-(1-q)\ln(1-q)+(1-q)\ln (K^{2n}-K^n)^s} {\ln K^{2sn}} \end{aligned} $$ Then $$ \begin{aligned} &\dim_HX_n-(1-\frac{q}{2})\\ \leq&\frac{-q\ln q+q\ln K^{sn}-(1-q)\ln(1-q)+(1-q)\ln (K^{2n}-K^n)^s} {\ln K^{2sn}}-(1-\frac{q}{2})\\ =&\frac{-q\ln q-(1-q)\ln(1-q)+(1-q)\ln (K^{2n}-K^n)^s-(1-q)\ln K^{2sn}}{\ln K^{2sn}}\\ =&\frac{-q\ln q-(1-q)\ln(1-q)+(1-q)s\ln (1-K^{-n})}{\ln K^{2sn}}\\ <&0\text{ for large }s. \end{aligned} $$ Then $\dim_HX_n<1-\frac{q}{2}$ and thus $\mathscr{H}^{1-\frac{q}{2}}(X_n)=0$. Now $$\mathscr{H}^{1-\frac{q}{2}}(X)=\mathscr{H}^{1-\frac{q}{2}}(\bigcup_{n\geq 0}X_n) \leq\sum_{n\geq 0}\mathscr{H}^{1-\frac{q}{2}}(X_n)=0.$$ \end{proof} \begin{lemma}\label{l:6.7} For $0\leq p\leq 1$, $\mathscr{H}^1(D_{\sigma_K}([p,1]))=+\infty$. \end{lemma} \begin{proof} Let $0\leq p\leq1$. Pick $M\in \mathcal{M}([p,1])$. Let $N=2M+\{0,1\}$. Then $N\in \mathcal{M}([p,1])$. Define $X\subset \Sigma_{K^2}$ by $$ \begin{aligned} X&=\{x\in\Sigma_{K^2}:x|_{[2i,2i+2)}\in E_K^2\text{ for }i\in M\text{ and }x|_{[2i,2i+2)}\in F_{K,2}\text{ for }i\in M^c\}\\ &=\{\Phi_{N,x}(y):x\in E_K^\mathbb{N},\,y\in F_{K,2}^\mathbb{N}\}. \end{aligned} $$ Since $\pi_K^{-1}(E_K^\mathbb{N})=\delta_{\Sigma_K}$ and $\pi_K^{-1}(F_{K,2}^\mathbb{N})=\text{Dist}(\sigma_K)$, by Lemma \ref{dstowv}, $\pi_K^{-1}(X)\subset D_{\sigma_K}([p,1])$. Then, by Lemma \ref{detosw}, to prove $\mathscr{H}^1(D_{\sigma_K}([p,1]))=+\infty$ it is enough to prove $\mathscr{H}^{\frac{1}{2}}(X)=+\infty$. Let $$ \begin{aligned} Y&=\tau_{K^2,2}(X)\\ &=\{x\in\Sigma_{K^4}:x_i\in \tau_{K^2}(E_{K,2})\text{ for }i\in M\text{ and }x_i\in \tau_{K^2}(F_{K,2})\text{ for }i\in M^c\}. \end{aligned} $$ By Lemma \ref{detosx}, to prove $\mathscr{H}^{\frac{1}{2}}(X)=+\infty$ it is enough to prove $\mathscr{H}^{\frac{1}{2}}(Y)=+\infty$. Let $$Z=(\tau_{K^2}(E_{K,2}))^\mathbb{N}\subset \Sigma_{K^4}.$$ Define $\phi:\{0,1,\cdots,K^2-1\}\rightarrow\tau_{K^2}(E_{K,2})$ and $\Phi:\Sigma_{K^2}\rightarrow Z$ by $$\phi(i+Kj)=(i+Ki)+K^2(j+Kj),\,\,i,j\in\{0,1,\cdots,K-1\}$$ and $$\Phi(x)=(\phi(x_i))_{i\geq 0},\,\,x\in \Sigma_{K^2}.$$ Then $\Phi$ is a bijection from $\Sigma_{K^2}$ to $Z$ with $$\rho(\Phi(x),\Phi(y))=(\rho(x,y))^2,\,x,y\in\Sigma_{K^2}.$$ So $$\mathscr{H}^\frac{1}{2}(Z)=\mathscr{H}^\frac{1}{2}(\Sigma_{K^2})=1.$$ Define $$\Psi=\{(\psi_i)_{i\in M^c}:\text{ each $\psi_i$ is an injection from $\tau_{K^2}(E_{K,2})$ into }\tau_{K^2}(F_{K,2})\}.$$ For $\psi\in\Psi$ define $$Y_{\psi}=\{x\in Y:x_i\in\psi_i(\tau_{K^2}(E_{K,2}))\text{ for }i\in M^2\}.$$ and define $T_\psi:Z\rightarrow Y_\psi$ by $$(T_\psi(x))_i=x_i\text{ for }i\in M\text{ and }(T_\psi(x))_i=\psi_i(x_i)\text{ for }i\in M^2.$$ Then $T_\psi$ is an isometry between $Z$ and $Y_\psi$. 
So $$\mathscr{H}^\frac{1}{2}(Y_\psi)=\mathscr{H}^\frac{1}{2}(Z)=1.$$ Since $$\frac{\#(\tau_{K^2}(F_{K,2}))}{\#(\tau_{K^2}(E_{K,2}))} =\frac{\#(F_{K,2})}{\#(E_{K,2})} =\frac{K^4-K^2}{K^2}=K^2-1\geq 3,$$ then there is an uncountable set $\Psi_0\subset\Psi$ such that the sets $Y_\psi$ and $\psi\in\Psi_0$ are pairwise disjoint. Since each $Y_\psi$ is compact and thus $\mathscr{H}^\frac{1}{2}$ measurable, $$\mathscr{H}^\frac{1}{2}(Y)\geq \sum_{\psi\in\Psi_0}\mathscr{H}^\frac{1}{2}(Y_\psi)=+\infty.$$ \end{proof} We sum up Lemma \ref{l:6.6} and Lemma \ref{l:6.7} into the following theorem. \begin{theorem} For $0\leq p\leq q<1$, $\mathscr{H}^{2-q}(D_{\sigma_K}([p,q]))=0$. For $0\leq p\leq 1$, $\mathscr{H}^1(D_{\sigma_K}([p,1]))=+\infty$. \end{theorem} \end{document}
\begin{equation}gin{document} \renewcommand{{\bf Fig.}}{{\bf Fig.}} \renewcommand{{\bf Tab.}}{{\bf Tab.}} \renewcommand{{\bf Fig.}}{{\bf Fig.}} \renewcommand{{\bf Tab.}}{{\bf Tab.}} \title{Quantum entanglement dynamics and decoherence wave in spin chains at finite temperatures} \author{S. D. ~Hamieh and M. I. ~Katsnelson} \affiliation{Institute for Molecules and Materials, Radboud University of Nijmegen, 6525 ED Nijmegen, The Netherlands} \date{\today} \begin{equation}gin{abstract} We analyze the quantum entanglement at the equilibrium in a class of exactly solvable one-dimensional spin models at finite temperatures and identify a region where the quantum fluctuations determine the behavior of the system. We probe the response of the system in this region by studying the spin dynamics after projective measurement of one local spin which leads to the appearance of the ``decoherence wave''. We investigate time-dependent spin correlation functions, the entanglement dynamics, and the fidelity of the quantum information transfer after the measurement. \end{abstract} \pacs{03.65.Ud; 03.67.Mn; 03.65.Ta} \maketitle \section{Introduction} \label{sect:1} Collective behavior in many-body quantum systems is associated with the development of classical correlations, as well as of the correlations which cannot be accounted for in terms of classical physics, namely, entanglement. The entanglement represents in essence the impossibility of giving a local description of a many-body quantum state. Experimental tests of the nonlocality by means of the Bell-type inequality \cite{1} have been made with different kind of particles including photons \cite{2} and massive fermions \cite{3,4,44}. The entanglement is expected to play an essential role at quantum phase transitions \cite{5}, where quantum fluctuations manifest themselves at all length scales. Several groups investigated this problem by studying the quantum spin systems (see, e.g., Refs.\onlinecite{Osbo02,19,Hami05,12,14,24,25,26,27,28,29,30,31,32,Amic04,Subr05,ghos1}). Additional studies have been carried out for more complicated systems including both itinerant electrons and localized spins; the local entanglement for these systems have been discussed in a context of the quantum phase transitions \cite{gu,Anfo} and of the Kondo problem \cite{ourkondo}. In particular, Anfossi {\it et al} \cite{Anfo} performed, within the density-matrix renormalization group method, a numerical comparison between the standard finite-size scaling and the local entanglement for the Hubbard model in a presence of bond charge interaction (the Hirsch model) at the Mott metal-insulator transition. Katsnelson {\it et al} \cite{ourkondo} have considered the suppression of the Kondo resonance by a probing of the charge state of the magnetic impurity which leads to the partial destruction of the entanglement between the localized spin and itinerant-electron Fermi sea. These examples illustrate a relevance of the concept of entanglement for the many-body physics. Moreover, the entanglement overwhelmingly comes into play in the quantum computation and communication theory \cite{6}, being the main physical resource needed for their specific tasks. The essential idea is to encode one particular qubit, and let it be transported to across the chain to recover the code from another qubit some distance away \cite{7}. 
The suppression of the entanglement by decohering actions such as noise, measurements, etc., is one of the central problems in quantum computation and quantum information theory; therefore the concept of entanglement for mixed states is of primary relevance \cite{Benn96}. For example, it is important to know what happens with the quantum computer after the measurement of one qubit state; for the case of the quantum system with broken continuous symmetry such as Bose-Einstein condensate (BEC) or easy-plane antiferromagnet the local measurements lead to the formation of the ``decoherence wave'' \cite{ourbec,ourneel}. It is interesting to investigate the effect of the decoherence wave on the entanglement in the system. Motivated by these results, in this paper, we aim on the evaluation of the pairwise entanglement in the 1D Ising-XY model with transverse magnetic field at finite temperatures. We identify a region where the thermal entanglement is non zero while it is zero at zero temperature, which results from the entanglement of the excited states as it will be explained below (section~\ref{sect:2}). We study the dynamical response of the system in the non vanishing entanglement region, which is the useful region for quantum information processing, after a projective measurement on one local spin. We find that, similar to the case of the BEC considered earlier \cite{ourbec} the spin decoherence wave appears propagating with the velocity proportional to the interaction strength. We investigate also the zero temperature case studying the time-dependent correlation functions and we discuss the relation between the dynamics of the magnetization and the entanglement (section~\ref{sect:3}). Our conclusions are given in section~\ref{sect:4}. \section{Thermal entanglement in the Ising-XY Model with transverse field} \label{sect:2} \begin{equation}gin{figure}[htb] \centering\includegraphics[width=9cm]{Cequ2.eps} \caption{(color online) Pairwise entanglement for the nearest neighbors in the isotropic XY model with transverse field at the equilibrium as function of $\begin{equation}ta$ and $\lambda$. Cequ is defined by Eq.(12) for the thermodynamic equilibrium state.\protect\label{Cequ1}} \end{figure} \begin{equation}gin{figure}[htb] \centering\includegraphics[width=9cm]{Cequ.eps} \caption{(color online)Same as Fig. \ref{Cequ1} for $0.8<\lambda<0.9$. \protect\label{Cequ}} \end{figure} \begin{equation}gin{figure}[htb] \centering\includegraphics[width=10cm]{nspinL0.8ML1.eps} \caption{Site magnetization as function of the temperature and time with $m-l=1$ and $\lambda=0.8$. \protect\label{spin}} \end{figure} In this section we present the solution of the N-sites Ising-XY model with transverse field following the standard method \cite{Lieb61,Baro70}. We proceed with the Hamiltonian \begin{equation} {\cal H}=-\sum_{i=1}^{N}(\lambda[(1+\gamma)\sigma_i^x\sigma_{i+1}^x+(1-\gamma)\sigma_i^y\sigma_{i+1}^y]+\sigma_i^z)\,,\label{1} \end{equation} where $\sigma^a_i$ are the Pauli operators, obeying the usual commutation relations $[\sigma^a_i, \sigma^b_j] = 2i\epsilon^{abc}\delta^{ij}\sigma^c_i$ . The Zeeman energy in the external magnetic field, as well as the Planck constant $\hbar$, have been set to 1. We assume cyclic boundary conditions, i.e., the index $i$ in the sum (\ref{1}) runs over ${1\dots N}$ with $S_{N +1} = S_1$. 
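For short chains the thermal averages used below can also be cross-checked against brute-force diagonalization of the Hamiltonian (\ref{1}). The following minimal sketch (the chain length $N=8$, the values $\lambda=0.8$, $\gamma=0$ and the inverse temperature $\beta=5$ are illustrative choices only) builds the periodic Hamiltonian and the corresponding canonical state:
\begin{verbatim}
# Minimal exact-diagonalization cross-check of the Hamiltonian (1) on a short
# periodic chain; all parameter values below are illustrative.
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, i, N):
    """Embed a single-site operator at site i of an N-site chain."""
    return reduce(np.kron, [op if j == i else I2 for j in range(N)])

def hamiltonian(N, lam, gam):
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N):
        j = (i + 1) % N          # periodic boundary conditions, S_{N+1} = S_1
        H -= lam * (1 + gam) * site_op(sx, i, N) @ site_op(sx, j, N)
        H -= lam * (1 - gam) * site_op(sy, i, N) @ site_op(sy, j, N)
        H -= site_op(sz, i, N)
    return H

N, lam, gam, beta = 8, 0.8, 0.0, 5.0
H = hamiltonian(N, lam, gam)
E, V = np.linalg.eigh(H)
w = np.exp(-beta * (E - E.min())); w /= w.sum()   # Boltzmann weights
rho = (V * w) @ V.conj().T                        # canonical density matrix
mz = np.real(np.trace(rho @ site_op(sz, 0, N)))
print("thermal <sigma_z> at a site:", mz)
\end{verbatim}
Tracing out all but two sites of \texttt{rho} gives the reduced two-site density matrices whose entanglement is analyzed below.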
This Hamiltonian can be diagonalized by means of the Jordan-Wigner transformation \cite{Lieb61,Baro70} that maps spins to one-dimensional spinless fermions with creation and annihilation operators $c_i$ and $c_i^{\dag}$. The Hamiltonian Eq.(1) in the fermionic operator representation is represented by the quadratic form \begin{equation} H=(\sum_{i,j}c_i^{\dag}A_{i,j}c_j+\frac{1}{2}[c_i^{\dag}B_{i,j}c_j^{\dag}+{\rm H.c.}]) +N\,,\label{6} \end{equation} where $A_{i,i}=-1$ and $A_{i,i+1}=-\frac{1}{2}\lambda=A_{i+1,i}$, $B_{i,i+1}=-\frac{1}{2}\lambda\gamma=-B_{i+1,i}$, and all other $A_{i,j}$ and $B_{i,j}$ are zero. The quadratic Hamiltonian (\ref{6}) can be diagonalized by a linear Bogoliubov transformation of the fermionic operators, \begin{equation} \eta_k=\sum_i(g_{ki}c_i+h_{ki}c_i^{\dag})\,,\label{eta1} \end{equation} \begin{equation} \eta_k^{\dag}=\sum_i(g_{ki}c_i^{\dag}+h_{ki}c_i)\label{eta2}\,, \end{equation} where the $g_{ki}$ and $h_{ki}$ can be chosen to be real. After that it takes the diagonal form \begin{equation} H= \sum_k\Lambda_k\eta_k^{\dag}\eta_k-\frac{1}{2}\Lambda_k\,, \end{equation} where \begin{equation} \Lambda_k=\sqrt{(\gamma\lambda \sin k)^2+(1+\lambda \cos k)^2}\,. \end{equation} After the diagonalization of the Hamiltonian now we can proceed with the evaluation of the thermal pairwise entanglement. The pairwise entanglement, as its name indicates, measures how two spins separated by a distance $r$ are entangled. This measure is to be accomplished by evaluating the pairwise entanglement of the two-site density matrix after tracing out all other spins in the chain. We evaluate the entanglement of this states by using the concurrence which is defined as~\cite{Benn96} \begin{equation} {\cal C}=\max\{\lambda_1-\lambda_2-\lambda_3-\lambda_4,0\}\,, \label{eq:concur1} \end{equation} where $\lambda$'s are the square roots of the eigenvalues in decreasing order of the matrix $\rho_{AB}(\sigma_y\otimes\sigma_y\rho_{AB}^{\star} \sigma_y\otimes\sigma_y)$, where $\rho_{AB}^{\star}$ is the corresponding complex conjugation in the computational basis $\{|++\rangle, |+-\rangle, |-+\rangle,|--\rangle\}$. As usual, at the thermal equilibrium the system is described by the canonical ensemble density matrix \begin{equation} \rho=\frac{e^{-\begin{equation}ta H}}{Z}\,, \end{equation} where $Z={\rm Tr} e^{-\begin{equation}ta H}$ is the partition function of the system. 
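The concurrence (\ref{eq:concur1}) is straightforward to evaluate numerically for an arbitrary two-qubit density matrix; a minimal sketch (the Werner-type test state at the end is purely illustrative) is
\begin{verbatim}
# Wootters concurrence C = max{l1 - l2 - l3 - l4, 0} of a 4x4 density matrix,
# with l_i the square roots of the eigenvalues of rho (sy x sy) rho* (sy x sy).
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy)

def concurrence(rho):
    R = rho @ YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return max(lam[0] - lam[1] - lam[2] - lam[3], 0.0)

# Illustrative check: mixture of a Bell state with the maximally mixed state,
# for which the concurrence equals max{0, (3p-1)/2} = 0.85 at p = 0.9.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = 0.9 * np.outer(bell, bell) + 0.1 * np.eye(4) / 4
print(concurrence(rho))
\end{verbatim}
The same routine can be applied directly to the reduced two-site density matrix constructed below.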
The reduced two-site density matrix at thermal equilibrium, after taking symmetry considerations into account, then assumes the following form \begin{eqnarray} \rho_{ij}&=&{\rm Tr}_{k\neq i,j}\frac{e^{-\beta H}}{Z}=\frac{1}{4} ( I\otimes I+\langle\sigma_z\rangle(\sigma_z^i\otimes I+I\otimes\sigma_z^j)\nonumber\\ &+&\sum_{k=1}^3\langle\sigma^{i}_k\sigma^{j}_k\rangle \sigma^{i}_k\otimes\sigma^{j}_k)\,.\label{dens} \end{eqnarray} The correlation functions that show up in the density matrix Eq.(\ref{dens}) are well known \cite{Baro70} and for the nearest-neighbor case considered here these correlation functions are given by \begin{eqnarray} &&\langle\sigma^{i}_x\sigma^{j}_x\rangle={\cal G}_{-1},\quad \langle\sigma^{i}_y\sigma^{j}_y\rangle={\cal G}_{1}, \quad\langle\sigma^{i}_z\sigma^{j}_z\rangle=\langle\sigma_z\rangle^2-{\cal G}_{1}{\cal G}_{-1},\nonumber\\&& \quad \langle\sigma_z\rangle=-{\cal G}_{0}\,, \end{eqnarray} with \begin{eqnarray} {\cal G}_{mi}&=&\frac{1}{\pi}\int_0^{\pi}dk \cos[k(m-i)](1 + \lambda \cos k) \frac{\tanh (\Lambda_k\beta/2)}{\Lambda_k} \nonumber \\ &-& \frac{\lambda\gamma}{\pi}\int_0^{\pi}dk \sin[k(m-i)]\sin k \frac{\tanh (\Lambda_k\beta/2)}{\Lambda_k} \,. \end{eqnarray} Thus the concurrence for the case of the isotropic XY model, $\gamma=0$, reads \begin{equation} {\cal C}=\max\{0,||{\cal G}_{1}|-\sqrt{\frac{1}{4}(1+{\cal G}_{0}^2-{\cal G}_{1}^2)^2-{\cal G}_{0}^2}|\}\,, \end{equation} and at $T=0$ we have ${\cal G}_0=1$ and \[ {\cal G}_1=\left\{\begin{array}[c]{cc}0 &\quad { \lambda\leq 1}\\ \frac{2}{\pi}\sqrt{1-\lambda^{-2}}&\quad \lambda> 1\end{array}\right. \,.\label{rho} \] The two-site entanglement between the nearest neighbors for the isotropic XY model is shown in Fig.\ref{Cequ1}. One can see from this figure that there is a region where the entanglement increases with increasing temperature, whereas for $\lambda \leq 1$ the entanglement is zero at zero temperature and remains zero up to a critical temperature where the system starts to be entangled. Fig.\ref{Cequ} displays in more detail the relevant region $0.8<\lambda<0.99$. It is clearly seen in this figure that there is strong enough entanglement in the region $2<\beta<20$; however, outside this region no entanglement can be observed. This entanglement transition should be understood as an effect of the entanglement of the higher excited states. Note that the ground state in this case is unentangled, with all spins pointing in the same direction. A similar observation has been made in Refs.\onlinecite{Osbo02} and \onlinecite{31}. In the region where the entanglement does not vanish we expect the quantum fluctuations to dominate the behavior of the system; this is the region where quantum information processing should be studied, since it is well known that entanglement is the main resource for quantum information. Thus the study of the dynamics of entanglement in this region is relevant. In the next section we will study the response of the system after a projective measurement in this region. \section{Spin decoherence after a projective measurement} \label{sect:3} \begin{figure}[htb] \centering\includegraphics[width=10cm]{nspinL0.99ML1.eps} \caption{Site magnetization as function of the temperature and time with $m-l=1$ and $\lambda=0.99$.
\protect\label{spin2}} \end{figure} \begin{equation}gin{figure}[htb] \centering\includegraphics[width=10cm]{spinpmL0.8ML1.eps} \caption{Site magnetization as function of the temperature and time with $m-l=1$ and $\lambda=0.8$ for the case without knowledge of the measurement outcome. \protect\label{spinpm}} \end{figure} \begin{equation}gin{figure}[htb] \centering\includegraphics[width=10cm]{spinLpp1T0.1.eps} \caption{Site magnetization as function of the site location $x=m-l$ and time at fixed $\begin{equation}ta=10$ and $\lambda=2$. \protect\label{spinT0.1L2}} \end{figure} As mentioned above we will study the consequences of the local projective measurement and analyze the system behavior in terms of the spin decoherence wave. We introduce the operators \begin{equation} A_i=c_i^{\dag}+c_i\,\quad B_i=c_i^{\dag}-c_i\,. \end{equation} After a selective projective measurement with the projector $P=\frac{\sigma_z^l+1}{2}=\frac{1-A_lB_l}{2}$ ($P=P^{\dag}$), which means that the positive $z$ direction of the local spin $l$ is the measurement result (a general measurement will be considered below), the mean value at time $t$ for an operator $A$ reads \cite{Hami04} \begin{equation} \langle A(t)\rangle= \frac{{\rm Tr}\rho P A(t)P}{{\rm Tr} P\rho P}\,, \end{equation} where $A(t)=e^{iHt}A(0)e^{-iHt}$. Since we are interested in the evaluation of the time-dependent average value of the magnetization in the $z$ direction at site $m$ we have $A(0)=\sigma_z/2=-\frac{A_mB_m}{2}$. In order to evaluate $ \langle A(t)\rangle$ we write the operator $A_i$, and $B_i$ in term of $\eta$ and $\eta{\dag}$ operators using the inverse transformation of Eqs.(\ref{eta1}),(\ref{eta2}). Since the Hamiltonian is diagonal being written in terms of $\eta$ operators we have \begin{equation} \eta^{\dag}_k(t)=\exp{(i\Lambda_kt)}\eta_k^{\dag}(0)\,. \end{equation} Thus after a straightforward little algebra we found the following expression for $A(t)$ \begin{equation}gin{figure}[htb] \centering\includegraphics[width=10cm]{sigmaz.eps} \caption{Site magnetization at distance $x$ from the measurement point.\protect\label{magn}} \end{figure} \begin{equation} A(t)=\frac{-1}{2}\sum_{ii^{\prime}}\alpha_{ii^{\prime}}^mA_iA_{i^{\prime}}+\begin{equation}ta_{ii^{\prime}}^mA_iB_{i^{\prime}}+\gamma_{ii^{\prime}}^mB_iB_{i^{\prime}}\,, \end{equation} where \begin{equation} \alpha_{ii^{\prime}}^m=\phi_{mi}G_{mi^{\prime}};\, \begin{equation}ta_{ii^{\prime}}^m=\phi_{mi}\psi_{mi^{\prime}}-G_{mi}G_{mi^{\prime}};\, \gamma_{ii^{\prime}}^m=G_{mi}\psi_{mi^{\prime}}\,, \end{equation} and in the thermodynamic limit ($N | 1\rangleghtarrow \infty$) we have \begin{equation} \phi_{\mu\nu}=\psi_{\mu\nu}=\frac{1}{\pi}\int_0^{\pi}dk\cos k(\mu-\nu)\cos(\Lambda_kt)\,, \end{equation} \begin{eqnarray} G_{\mu\nu}&=&\frac{i}{\pi}\int_0^{\pi}dk \cos[k(\mu-\nu)](1 + \lambda \cos k) \frac{\sin (\Lambda_kt)}{\Lambda_k} \nonumber \\ &-& \frac{i\lambda\gamma}{\pi}\int_0^{\pi}dk \sin[k(\mu-\nu)]\sin k \frac{\sin (\Lambda_kt)}{\Lambda_k} \,. \end{eqnarray} We define the operator ${\cal A}=\sum_{ii^{\prime}}\begin{equation}ta_{ii^{\prime}}^m A_iB_{i^{\prime}}$ which corresponds to the part of $A(t)$ with real coefficients. 
Clearly we have $\langle{\cal A}\rangle_{\beta}=\langle A\rangle_{\beta}$, therefore after some calculations we obtain \begin{eqnarray} \langle A\rangle_{\beta} &=&\frac{1}{4({\cal G}_{ll}+1)}\left.({\cal G}_{mm}+\sum_{ii^{\prime}}\beta_{ii^{\prime}}^m[\langle A_lB_lA_iB_{i^{\prime}}\rangle_{\beta}\nonumber \right.\\ &+&\left. \langle A_iB_{i^{\prime}}A_lB_l\rangle_{\beta}-\langle A_lB_l A_iB_{i^{\prime}}A_lB_l\rangle_{\beta} \right.])\,, \end{eqnarray} where \begin{equation} \langle A_lB_lA_iB_{i^{\prime}}\rangle_{\beta}={\cal G}_{ii^{\prime}}{\cal G}_{ll}+\delta_{il}\delta_{i^{\prime}l}-{\cal G}_{li}{\cal G}_{li^{\prime}}\,, \end{equation} \begin{equation} \langle A_iB_{i^{\prime}}A_lB_l\rangle_{\beta}={\cal G}_{ii^{\prime}}{\cal G}_{ll}+\delta_{il}\delta_{i^{\prime}l}-{\cal G}_{il}{\cal G}_{i^{\prime}l}\,, \end{equation} \begin{widetext} \begin{eqnarray} \langle A_lB_l A_iB_{i^{\prime}}A_lB_l\rangle_{\beta}= -4\delta_{li^{\prime}}\delta_{il}{\cal G}_{ll}+\delta_{li}({\cal G}_{i^{\prime}l}+ {\cal G}_{li^{\prime}})+\delta_{li^{\prime}}({\cal G}_{il}+{\cal G}_{li}) +{\cal G}_{ll}{\cal G}_{i^{\prime}l}({\cal G}_{il}-{\cal G}_{li}) +{\cal G}_{ll}{\cal G}_{li^{\prime}}({\cal G}_{li}-{\cal G}_{il})-{\cal G}_{ii^{\prime}}\,. \end{eqnarray} \end{widetext} Here we have used the Fermi distribution function \begin{equation} \langle\eta_k\eta^{\dag}_{k^{\prime}}\rangle=\frac{\delta_{kk^{\prime}}}{e^{-\beta\Lambda_k}+1}\,. \end{equation} For simplicity, in this paper we will further consider only the isotropic XY model, $\gamma=0$; the case $\gamma\neq 0$ will be addressed in the future. Thus, the magnetization at the site $m$ in the $z$ direction can be written as follows \begin{widetext} \begin{eqnarray} \langle{A}\rangle_{\beta}=\frac{1}{4({\cal G}_{ll}+1)}\left\{{\cal G}_{mm}+(2{\cal G}_{ll}+1)[2(\phi_{ml}^2-G_{ml}^2)+\alpha-\alpha^{\prime}]\right.+\left.4(G_{ml}\beta_{ml}^{\prime}-\phi_{ml}\beta_{ml})+2(\beta^{\prime 2}_{ml}-\beta_{ml}^2)\right\}\,, \end{eqnarray} \end{widetext} where the expressions of $\alpha,\,\beta_{ml},\,\alpha^{\prime},\,\beta_{ml}^{\prime}$ are given in the Appendix. The magnetizations at the neighboring site $m-l=1$ for the cases $\lambda=0.8$ and $\lambda=0.99$ are shown in Figs.\ref{spin} and \ref{spin2}, respectively, as functions of the inverse temperature $\beta$ and of the time. The magnetization oscillates in time with a frequency proportional to $\lambda$, which is a result of the propagation of the decoherence wave after the local measurement \cite{ourbec}. One can see that the amplitude of these oscillations increases in the close vicinity of the quantum critical point $\lambda=1$. Figs.\ref{spin} and \ref{spin2} also demonstrate that the amplitude increases with increasing temperature. This is probably connected with the thermal entanglement in the system under consideration (see the previous section).
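In the isotropic case the functions $\phi_{ml}(t)$ and $G_{ml}(t)$ entering this expression reduce to one-dimensional integrals that are easy to evaluate numerically; a minimal quadrature sketch (the values of $\lambda$ and of the distance $m-l$ are illustrative) is
\begin{verbatim}
# Numerical evaluation of phi_{ml}(t) and G_{ml}(t) for gamma = 0 by simple
# quadrature over k in [0, pi]; lambda and the distance x = m - l are
# illustrative choices.
import numpy as np

def Lam(k, lam, gam=0.0):
    return np.sqrt((gam * lam * np.sin(k))**2 + (1 + lam * np.cos(k))**2)

def phi(x, t, lam, nk=4000):
    k = np.linspace(0.0, np.pi, nk)
    return np.trapz(np.cos(k * x) * np.cos(Lam(k, lam) * t), k) / np.pi

def G(x, t, lam, nk=4000):
    k = np.linspace(0.0, np.pi, nk)
    L = Lam(k, lam)
    integrand = np.cos(k * x) * (1 + lam * np.cos(k)) * np.sin(L * t) / L
    return 1j * np.trapz(integrand, k) / np.pi

lam, x = 0.8, 1
for t in (0.0, 1.0, 2.0, 5.0):
    print(t, phi(x, t, lam), G(x, t, lam))
\end{verbatim}
Together with the equilibrium functions ${\cal G}$ of Section~\ref{sect:2}, such a routine produces magnetization profiles of the type shown in Figs.~\ref{spin} and \ref{spin2}.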
For a complete von Neumann measurement \cite{6} without knowledge of the measurement outcome the mean value for the operator $A$ at time $t$ is \begin{equation} \langle A(t)\rangle= {{\rm Tr}\rho P A(t)P}+ {{\rm Tr}\rho (1-P) A(t)(1-P)}\,, \end{equation} Thus we have \begin{eqnarray} \langle{A}\rangle_{\begin{equation}ta}&=&\frac{1}{4}\left\{{\cal G}_{mm}+4{\cal G}_{ll}(\phi_{ml}^2-G_{ml}^2)+(\alpha-\alpha^{\prime}) | 1\rangleght.\nonumber \\&+&\left.4(G_{ml}\begin{equation}ta_{ml}^{\prime}-\phi_{ml}\begin{equation}ta_{ml}) | 1\rangleght\}\,, \end{eqnarray} The magnetization at the neighboring site $m-l=1$ after the measurement is plotted in Fig.\ref{spinpm} for $\lambda=0.8$ as a function of $\begin{equation}ta$ and the time. One can clearly see from this figure that there is a reduction of the amplitude of the oscillation with respect to the selective measurement; this effect is due to the mixing of the state as here we don't know the results of the measurement outcome. Fig.\ref{spinT0.1L2} displays the propagation of the decoherence wave, that is, the magnetization distribution as a function of the distance from the measured site $x$ and time $t$ at a fixed value of $\begin{equation}ta=10$ and $\lambda=2$. \begin{equation}gin{figure}[htb] \centering\includegraphics[width=10cm]{test1.eps} \caption{Single-site entanglement $S$ at distance $x$ from the measurement point.\protect\label{sing}} \end{figure} \begin{equation}gin{figure}[htb] \centering\includegraphics[width=10cm]{test2.eps} \caption{Two-site entanglement C between the site $m$ and the site $i$ at zero temperature with $x=i-m$.\protect\label{twos}} \end{figure} \begin{equation}gin{figure}[htb] \centering\includegraphics[width=10cm]{test3.eps} \caption{(color online) $\langle \sigma_z\rangle$ (dotted line) and single site entanglement (solid line) are shown at the site $m$.\protect\label{decoh}} \end{figure} \begin{equation}gin{figure}[htb] \centering\includegraphics[width=10cm]{fidelity.eps} \caption{The fidelity of the quantum channel is shown at site $i$.\protect\label{fide}} \end{figure} Now we consider the effect of the local measurement on the pairwise entanglement in the system. This require the evaluation of a correlations functions such as $\langle\sigma_i^{\alpha}\sigma_j^{\begin{equation}ta}\rangle$. Here we will present the results for an interesting particular case, namely, for $\lambda<1$ and at zero temperature. Then, there is no entanglement in the system before the measurement since all spin are pointing in the same $z$-direction in the ground state. Therefore a projective measurement of the $z$-component of the magnetization will not provide us any nontrivial information and will not generate entanglement. However, we will show that the projective measurement of the $x$-component does create the entanglement. After the projective measurement in the $x$ direction at site $m$ with positive outcome the wave function will be \begin{equation} |\Psi\rangle_{m} =\frac{1+c_m^{\dag}}{\sqrt 2}|vac\rangle\,. \label{psi11} \end{equation} At time $t$ we have \begin{equation} c^{\dag}_m(t)=\sum_l(G_{ml}+\phi_{ml})c^{\dag}_l=\sum_lw_l(t)c^{\dag}_l\,, \end{equation} where in the thermodynamics limit $w_l(t)=J_{m-l}(\lambda t)$ and $J_n(x)$ is the Bessel function of order $n$. 
Thus the time-dependent wave function after the measurement will be \begin{equation} |\Psi_m(t)\rangle=\frac{1+\sum_l w_l(t)c_l^{\dag}}{\sqrt{2}}|vac\rangle\,. \end{equation} The time-dependent two-spin density matrix can then be written in the form \[\rho_{ij}=\frac{1}{2}\left(\begin{array}[c]{cccc}0 & 0 & 0 & 0\\0 & w_i^2 & w_iw_j & w_i\\0 & w_iw_j & w_j^2 & w_j\\0 & w_i & w_j & 2-w_i^2-w_j^2\end{array}\right) \,,\label{rho}\] and the corresponding one-particle density matrix is $\rho_{i}=\frac{1}{2}\left(\begin{array}[c]{cc}w_i^2 & w_i \\w_i& 2-w_i^2\end{array}\right)\,.$ The magnetization of the site $i$ is therefore given by $\langle \sigma_z\rangle/2={\rm Tr }\rho \sigma_z/2=(w_i^2-1)/2$. The single-site entropy, $S(\rho_i)=-{\rm Tr} \rho_i\log \rho_i$, which characterizes the entanglement of one spin with the rest of the chain, can also be evaluated in this case. The pairwise entanglement for the two-site density matrix can be evaluated using Eq.(\ref{eq:concur1}). Straightforward algebra leads to the following expression for the concurrence: \begin{equation} {\cal C}=w_iw_j\,. \end{equation} In Figs.\ref{magn}, \ref{sing}, \ref{twos} we show the single-spin quantum entropy and the magnetization at distance $x$ from the measurement point, as well as the pairwise entanglement between sites $m$ and $i$, as functions of time and of $x=i-m$. We conclude from these figures that the single-site entanglement and the pairwise entanglement propagate with a velocity proportional to the interaction strength, like the spin decoherence wave. In Fig.\ref{decoh} we display the site magnetization together with the single-site entanglement, which demonstrates clearly that these two quantities oscillate coherently. This confirms that the dynamics of the spin decoherence reflects in some sense the dynamics of the entanglement in the system. The fidelity of the communication at site $m$ through the channel is the probability that a channel output passes a test for being the same as the input, conducted by someone who knows what the input was. It can be defined as \cite{Benn196} \begin{equation} F= \langle \psi_m|\rho_i|\psi_m\rangle=\frac{1+w_i}{2}\,, \end{equation} where $|\psi_m\rangle=\frac{|0\rangle+|1\rangle}{\sqrt{2}}$ is the state of the site $m$ right after the measurement. In fact, the spin chain acts as an amplitude damping quantum channel where the initial state is transformed under the action of the superoperator \$ to \cite{6} \begin{equation} \rho \rightarrow \$(\rho)=M_0\rho M_0^{\dag}+M_1\rho M_1^{\dag}\,, \end{equation} with the Kraus operators $M_0=\left(\begin{array}[c]{cc}w_i & 0 \\0& 1\end{array}\right)\,,$ and $M_1=\left(\begin{array}[c]{cc}0 & 0 \\ \sqrt{1-w_i^2}& 0\end{array}\right)\,,$ where as usual $M_1$ describes the quantum jump and $M_0$ represents no quantum jump. The fidelity of the channel is shown in Fig.\ref{fide}. One can see clearly from this figure that the channel can be efficiently used to transmit the quantum information. The fidelity has a maximum value for $x=i-m\sim \lambda t$. This means that the quantum state is transported with a velocity proportional to the interaction strength $\lambda$, similar to the decoherence wave. After a time $t=x/\lambda$ the state can be recovered with maximum fidelity at a distance $x$ from the initial site $m$.
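Since $w_l(t)$ reduces to a Bessel function in the thermodynamic limit, the zero-temperature profiles discussed in this section can be tabulated in a few lines. In the following sketch the values of $\lambda$ and $t$ are illustrative, $w$ at distance $x\geq 0$ from the measured site is taken as $J_{x}(\lambda t)$ (which coincides with $J_{m-i}(\lambda t)$ up to the symmetry $J_{-n}=(-1)^{n}J_{n}$), and the concurrence is printed as $|w_iw_j|$, the sign being immaterial for an entanglement measure.
\begin{verbatim}
# Zero-temperature post-measurement profiles from w = J_x(lambda t):
# magnetization (w^2 - 1)/2, fidelity (1 + w)/2, nearest-neighbor concurrence.
import numpy as np
from scipy.special import jv   # Bessel function of the first kind

def w(x, t, lam):
    return jv(x, lam * t)

lam, t = 2.0, 10.0             # illustrative values
for x in range(0, 26, 5):
    wi = w(x, t, lam)
    mag = 0.5 * (wi**2 - 1)                # <sigma_z>/2 at distance x
    fid = 0.5 * (1 + wi)                   # transfer fidelity F
    conc = abs(wi * w(x + 1, t, lam))      # concurrence between x and x + 1
    print(x, round(mag, 4), round(fid, 4), round(conc, 4))
\end{verbatim}
Scanning $x$ on a finer grid shows the maximum of $F$ moving to larger distances as $t$ grows, in line with the statement above that the fidelity is maximal for $x\sim\lambda t$.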
\section{Conclusions} \label{sect:4} In this paper, we have evaluated the equilibrium pairwise entanglement at finite temperatures in the isotropic one-dimensional Ising-XY model with transverse magnetic field. Our findings indicate that the behavior of entanglement with respect to temperature, at least for moderate values of temperature, is quite complex. In particular, we have found that for some ranges of temperature, entanglement in the system can grow with increasing temperature, which results from the entanglement of the excited states. We have studied the dynamical response of the system, in a region relevant for quantum information processing, after a projective measurement on one local spin, which leads to the appearance of the ``decoherence wave'' that propagates with a velocity proportional to the coupling constant $\lambda$, similar to the case of the Bose-Einstein condensate studied earlier \cite{ourbec}. One motivation behind our study is to know what happens with the quantum computer after the measurement. We have investigated, for the specific case ($T=0$ and $\lambda<1$), the dynamics of the entanglement and the spin decoherence wave, and we have found that these quantities propagate coherently through the chain with the same velocity, proportional to $\lambda$. The spin chain viewed as a quantum channel has been shown to act as an amplitude damping channel. Finally, a generalization to $\gamma\neq 0$ and a study of the entanglement dynamics in such a system are desirable. \section*{Appendix} Using the standard properties of the Fourier transformation, one has \begin{widetext} \begin{eqnarray} \sum_lG_{il}\bar{G}_{i^{\prime}l}=\sum_l\int_{-\pi}^{\pi}\frac{dk}{2\pi}G(k)e^{ik(i-l)}\int_{-\pi}^{\pi}\frac{dk^{\prime}}{2\pi}\bar{G}(k^{\prime})e^{ik^{\prime}(i^{\prime}-l)} =\int_{-\pi}^{\pi}\frac{dk}{2\pi}G(k)\bar{G}(-k)e^{ik(i-i^{\prime})} \end{eqnarray} Putting $\gamma=0$ we find \begin{eqnarray} \alpha&=&\sum_{ii^{\prime}}\phi_{mi}\phi_{mi^{\prime}}{\cal G}_{ii^{\prime}}=\frac{1}{\pi}\int_0^{\pi}dk (1 + \lambda \cos k) \frac{\tanh (\Lambda_k\beta/2)}{\Lambda_k}\cos^2(\Lambda_kt) \nonumber \\ \end{eqnarray} \begin{eqnarray} \beta_{ml}&=& \sum_i\phi_{mi}{\cal G}_{li}=\frac{1}{\pi}\int_0^{\pi}dk \cos[k(m-l)](1 + \lambda \cos k) \frac{\tanh (\Lambda_k\beta/2)}{\Lambda_k}\cos(\Lambda_kt) \nonumber \\ \end{eqnarray} \begin{eqnarray} \alpha^{\prime}&=&\sum_{ii^{\prime}}G_{mi}G_{mi^{\prime}}{\cal G}_{ii^{\prime}}=\frac{-1}{\pi}\int_0^{\pi}dk (1 + \lambda \cos k)^3 \frac{\tanh (\Lambda_k\beta/2)}{\Lambda_k^3}\sin^2(\Lambda_kt) \nonumber \\ \end{eqnarray} \begin{eqnarray} \beta_{ml}^{\prime}&=& \sum_iG_{mi}{\cal G}_{li}= \frac{i}{\pi}\int_0^{\pi}dk \cos[k(m-l)](1 + \lambda \cos k)^2 \frac{\tanh (\Lambda_k\beta/2)}{\Lambda_k^2}\sin(\Lambda_kt) \nonumber \\ \end{eqnarray} \end{widetext} \section*{Acknowledgments} This work was performed as part of the research program of the \textsl{Stichting voor Fundamenteel Onderzoek der Materie (FOM)} with financial support from the \textsl{Nederlandse Organisatie voor Wetenschappelijk Onderzoek}. \begin{thebibliography}{99} \bibitem{1} J. S. Bell, Physics {\bf 1}, 195 (1964); Rev. Mod. Phys. {\bf 38}, 447 (1966). \bibitem{2} A. Aspect, Nature (London) {\bf 398}, 189 (1999). \bibitem{3} M. Lamehi-Rachti and W. Mittig, Phys. Rev. D {\bf 14}, 2543 (1976). \bibitem{4}C. Polachic, C. Rangacharyulu, A.M. van den Berg, S.
Hamieh, M.N. Harakeh, M. Hunyadi, M.A. de Huu, H.J. W\"ortche, J. Heyse, C. Baumer., D. Frekers, S.Rakers, J.A. Brooke and P. Busch, Phys. Lett. A {\bf 323}, 176 (2004). \bibitem{44}S. Hamieh, H.J. W\"ortche, C. Baumer, A.M. van den Berg, D. Frekers, M.N. Harakeh, J. Heyse, M. Hunyadi, M.A. de Huu, C. Polachic and C. Rangacharyulu, J. Phys. G {\bf 30}, 481 (2004). \bibitem{5}S. Sachdev, {\it Quantum Phase Transitions} (Cambridge University Press, Cambridge, 1999). \bibitem{Osbo02} T. Osborne and M. Nielsen, Phys. Rev. A {\bf 66}, 032110 (2002). \bibitem{19} A. Osterloh, L. Amico, G. Falci, and R. Fazio, Nature (London) {\bf 416}, 608 (2002). \bibitem{Hami05} S. Hamieh and A. Tawfik, Acta Phys. Polon. B {\bf 36}, 801 (2005). \bibitem{12} M. A. Nielsen, Ph.D. thesis, University of New Mexico, 1998; quant-ph/0011036. \bibitem{14} P. Zanardi and X. Wang, J. Phys. A {\bf 35}, 7947 (2002). \bibitem{24} X. Wang, H. Fu, and A. I. Solomon, J. Phys. A {\bf 34}, 11307 (2001). \bibitem{25} W. K. Wootters, Contemporary Mathematics {\bf 305}, 299 (2002). \bibitem{26} M. C. Arnesen, S. Bose, and V. Vedral, Phys. Rev. Lett. {\bf 87}, 017901 (2001). \bibitem{27} D. A. Meyer and N. R. Wallach, J. Math. Phys. {\bf 43}, 4273 (2002). \bibitem{28} D. Gunlycke, S. Bose, V. M. Kendon, and V. Vedral, Phys. Rev. A {\bf 64}, 042302 (2001). \bibitem{29} X. Wang, Phys. Rev. A {\bf 64}, 012313 (2001). \bibitem{30} H. Fu, A. I. Solomon, and X. Wang, J. Phys. A {\bf 35}, 4293 (2002). \bibitem{31} X. Wang, Phys. Lett. A {\bf 281}, 101 (2001). \bibitem{32} X. Wang and P. Zanardi, Phys. Lett. A {\bf 301}, 1 (2002). \bibitem{Amic04} L. Amico, A. Osterloh, F. Plastina, R. Fazio, and G. M. Palma, Phys. Rev. A {\bf 69}, 022304 (2004). \bibitem{Subr05} V. Subrahmanyam and A. Lakshminarayan, quant-ph/0409048; V. Subrahmanyam, Phys. Rev. A {\bf 69}, 022311 (2004); V. Subrahmanyam, Phys. Rev. A {\bf 69}, 034304 (2004). \bibitem {ghos1} S. Ghosh, T. F. Rosenbaum, G. Aeppli, and S. N. Coppersmith, Nature (London) {\bf 425}, 48 (2003). \bibitem{gu} S. Gu, S. Deng, Y. Li, and H. Lin, Phys. Rev. Lett. {\bf 93}, 086402 (2004). \bibitem{Anfo} A. Anfossi, C. Boschi, A. Montorsi, and F. Ortolani, cond-mat/0503600. \bibitem{ourkondo} M. I. Katsnelson, V. V. Dobrovitski, H. A. De Raedt, and B. N. Harmon, Phys. Lett. A {\bf 318}, 445 (2003). \bibitem{6}M. Nielsen and I. Chuang, {\it Quantum Computation and Quantum Information} (Cambridge University Press, Cambridge, 2000). \bibitem{7} S. Bose, Phys. Rev. Lett. {\bf 91}, 207901 (2003). \bibitem{Benn96} C. Bennett, D. DiVincenzo, J. Smolin, and W. Wootters, Phys. Rev. A {\bf 54}, 3824 (1996); S. Hill and W. Wootters, Phys. Rev. Lett. {\bf 78}, 5022 (1997); W. Wootters, Phys. Rev. Lett. {\bf 80}, 2245 (1998). \bibitem{ourbec} M. I. Katsnelson, V. V. Dobrovitski, and B. N. Harmon, Phys. Rev. A {\bf 62}, 022118 (2000). \bibitem{ourneel} M. I. Katsnelson, V. V. Dobrovitski, and B. N. Harmon, Phys. Rev. B {\bf 63}, 212404 (2001). \bibitem{Lieb61} E. Lieb, T. Schultz, and D. Mattis, Ann. Phys. (N.Y.) {\bf 60}, 407 (1961). \bibitem{Baro70} E. Barouch and B. McCoy, Phys. Rev. A {\bf 2}, 1075 (1970); E. Barouch and B. McCoy, Phys. Rev. A {\bf 3}, 786 (1971). \bibitem{Hami04} S. Hamieh, R. Kobes, and H. Zaraket, Phys. Rev. A {\bf 70}, 052325 (2004). \bibitem{Benn196} C. Bennett, H. Bernstein, S. Popescu, and B. Schumacher, Phys. Rev. A {\bf 53}, 2046 (1996). \end{thebibliography} \end{document}
\begin{document} \maketitle \begin{abstract} Let $PU_n$ denote the projective unitary group of rank $n$ and $BPU_n$ be its classifying space. For an odd prime $p$, we extend previous results to a complete description of $H^s(BPU_n;\mathbb{Z})_{(p)}$ for $s<2p+5$ by showing that the $p$-primary subgroups of $H^s(BPU_n;\mathbb{Z})$ are trivial for $s = 2p+3$ and $s = 2p+4$. \end{abstract} \section{Introduction}\label{sec:intro} Let $U_n$ denote the group of $n \times n$ unitary matrices. The unit circle $S^1$ can be viewed as the normal subgroup of scalar matrices of $U_n$. We let $PU_n$ denote the quotient group of $U_n$ by $S^1$, and $BPU_n$ be the classifying space of $PU_n$. In this paper we consider $H^*(BPU_n;\mathbb{Z})$, the ordinary cohomology of $BPU_n$ with coefficients in $\mathbb{Z}$. \subsection*{A review of the literature} The ordinary and generalized cohomology of $BPU_n$ for special $n$ has been the subject of various works such as Kono-Mimura \cite{kono1975cohomology}, Kameko-Yagita \cite{Kameko2008brown}, Kono-Yagita \cite{kono1993brown}, Toda \cite{toda1987cohomology}, and Vavpeti{\v{c}}-Viruel \cite{vavpetivc2005mod}. Vezzosi \cite{vezzosi2000chow} and Vistoli \cite{vistoli2007cohomology} studied the Chow ring of the classifying space (in the sense of Totaro \cite{totaro1999chow}) of $BPGL_3(\mathbb{C})$ and $BPGL_p(\mathbb{C})$ for $p$ an odd prime, respectively. Many of their results apply to the ordinary cohomology of $BPU_p$. None of the works above dealt with $H^*(BPU_n;\mathbb{Z})$ for $n$ not a prime number. The first named author considered $H^*(BPU_n;\mathbb{Z})$, as well as the Chow ring of $BPGL_n(\mathbb{C})$ for an arbitrary $n$ in \cite{gu2020almost}, \cite{gu2019cohomology} and \cite{gu2019some}. In particular, in \cite{gu2019cohomology}, the first named author determined the ring structure of $H^*(BPU_n;\mathbb{Z})$ in dimensions less than or equal to $10$. Other related works include Duan \cite{duan2017cohomology}, in which the integral cohomology of $PU_n$ is fully determined, and Crowley-Gu \cite{crowley2021h}, in which the image of the canonical map $H^*(BPU_n;\mathbb{Z})\to H^*(BU_n;\mathbb{Z})$ is studied. The cohomology of $BPU_n$ plays significant roles in the study of the topological period-index problem (\cite{antieau2014period}, \cite{antieau2014topological}, \cite{gu2019topological} and \cite{gu2020topological}), and in the study of anomalies in particle physics (\cite{cordova2020anomalies}, \cite{garcia2019dai}). \subsection*{Notations} Throughout the rest of this paper, $H^* (-)$ denotes $H^*(-;\mathbb{Z})$. For an abelian group $A$ and a prime number $p$, let $A_{(p)}$ be the localization of $A$ at $p$, and let $_pA$ denote the $p$-primary subgroup of $A$, i.e., the subgroup of $A$ of all torsion elements with torsion order a power of $p$. In particular, we have a canonical isomorphism $_pH^*(-)\cong {_p[H^*(-)_{(p)}]}$, and we will not distinguish the two throughout this paper. Tensor products of $\mathbb{Z}_{(p)}$-modules are always taken over $\mathbb{Z}_{(p)}$. \subsection*{The main theorem and some remarks} We review a basic fact on the cohomology of $BPU_n$.
Consider the short exact sequence of Lie groups \begin{equation*} 1 \to \mathbb{Z}/ n \to SU_n \to PSU_n \simeq PU_n \to 1, \end{equation*} which induces a fiber sequence of their classifying spaces \begin{equation}\label{eq:Bn_cover} B(\mathbb{Z}/ n) \to BSU_n \to BPU_n. \end{equation} When $p \nmid n$, the space $B(\mathbb{Z}/ n)$ is $p$-locally contractible and we have \begin{equation} H^*(BPU_n;\mathbb{Z}_{(p)}) \cong H^{*}(BSU_{n};\mathbb{Z}_{(p)}). \end{equation} Since $\mathbb{Z}_{(p)}$ is a flat $\mathbb{Z}$-module, and in particular, $H^*(-;\mathbb{Z}_{(p)})\cong H^*(-)_{(p)}$, we have an isomorphism of $\mathbb{Z}_{(p)}$-algebras \begin{equation}\label{equation: cohomology of BSUn} H^*(BPU_n)_{(p)} \cong H^{*}(BSU_{n})_{(p)} = \mathbb{Z}_{(p)}[c_2,c_3,\dots,c_n], \end{equation} which shows $H^*(BPU_n)_{(p)}$ is torsion-free for $p \nmid n$. In other words, we have the following \begin{prop}\label{pro:n-torsion} Suppose $x\in H^*(BPU_n)$ is a torsion class. Then there exists some $i\geq 0$ such that $n^ix = 0$. \end{prop} Therefore, to determine the graded abelian group structure of $H^s(BPU_n)$, it suffices to consider the $p$-primary subgroup $_pH^s(BPU_n)$ for $p\mid n$. \begin{rem}\label{rem:Vezzosi} In the case of Chow rings, Vezzosi \cite{vezzosi2000chow} proved the stronger result that all torsion classes in the Chow ring of $BPGL_n(\mathbb{C})$ are $n$-torsion. \end{rem} To state the main theorem, recall that, as shown in \cite{gu2019cohomology}, the integral cohomology group $H^3(BPU_n)$ is generated by a class denoted by $x_1$. In addition, $\operatorname{P}^i$ will denote the $i$th Steenrod reduced power operation, and \[\delta: H^*(-;\mathbb{Z}/p)\to H^{* + 1}(-)\] will denote the connecting homomorphism. Finally, a bar over an integral cohomology class will denote the mod $p$ reduction of this class. For instance, $\bar{x}_1$ denotes the mod $p$ reduction of $x_1$, which is in $H^3(BPU_n;\mathbb{Z}/p)$. \begin{thm}\label{thm: main thm, 2p+3 and 2p+4 have trivial p torsion} Let $p > 2$ be a prime number, and $n=p^rm$ for a positive integer $m$ co-prime to $p$. Then the $p$-primary subgroup of $H^{s}(BPU_n)$ in dimensions less than $2p+5$ is as follows: \begin{enumerate} \item For $r > 0$, we have \begin{equation*} _pH^s(BPU_n)\cong \begin{cases} \mathbb{Z}/p^r,\ s=3,\\ \mathbb{Z}/p,\ s=2p+2,\\ 0,\ s<2p+5,\ s\neq 3, 2p+2. \end{cases} \end{equation*} The group $_pH^{2p + 2}(BPU_n)$ is generated by $\delta\operatorname{P}^1(\bar{x}_1)$. \item For $r = 0$, we simply have $_pH^s(BPU_n) = 0$ for all $s\geq 0$. \end{enumerate} \end{thm} \begin{rem} Note $_pH^s(BPU_n)\cong {_pH}^s(BPU_n)_{(p)}$. By the discussion preceding Remark \ref{rem:Vezzosi}, Theorem \ref{thm: main thm, 2p+3 and 2p+4 have trivial p torsion} completely determines $H^s(BPU_n;\mathbb{Z}_{(p)})$ for $0 \leq s< 2p + 5$. \end{rem} For $s\leq 3$, the groups $_pH^s(BPU_n)$ are well known and are part of Theorem 1.1 of \cite{gu2019cohomology}. For $3 < s < 2p+2$, they are given in Theorem 1.2 of \cite{gu2019cohomology}. Therefore, what remains to show is \begin{equation}\label{eq:remain_to_show} _pH^{2p+2}(BPU_n) \cong \mathbb{Z}/p,\ _pH^{2p+3}(BPU_n) = {_pH}^{2p+4}(BPU_n) =0. \end{equation} \begin{rem} For $p = 2$, it was shown by the first named author \cite{gu2019cohomology} that the $2$-torsion subgroup of $H^{s}(BPU_n)$ in dimension $s = 2p+3 = 7$ is $\mathbb{Z}/2$ if $n \equiv 2$ mod $4$, and is $0$ otherwise.
In particular, Theorem \ref{thm: main thm, 2p+3 and 2p+4 have trivial p torsion} does not generalize to the case $p=2$. \end{rem} \begin{rem} For $p = 3$, \eqref{eq:remain_to_show} follows immediately from the computation in \cite{gu2019cohomology} of $H^s(BPU_n)$ in dimensions $8, 9$ and $10$. \end{rem} \subsection*{Organization of the paper} In Section \ref{sec:spectral_sequence}, we discuss some preliminary results on the Serre spectral sequence $^UE$ associated to the fiber sequence $U: ~ BU_n \to BPU_n \to K(\mathbb{Z}, 3)$. This will be our main tool for computing the $p$-primary subgroup $_pH^s(BPU_n)$. We will also show that \eqref{eq:remain_to_show} can be deduced from Theorem 1.2 of \cite{gu2019cohomology} and Proposition \ref{prop: four term exact sequence}, which says that a certain chain complex $\mathcal{M}$ constructed from the differentials in $^UE$ is exact. In Section \ref{sec:exactness}, we prove Proposition \ref{prop: four term exact sequence}. The proof is based on the explicit computation of some relevant differentials in $^UE$. This section finishes our proof of Theorem \ref{thm: main thm, 2p+3 and 2p+4 have trivial p torsion}. \subsection*{Acknowledgments} The authors would like to thank Ben Williams for various helpful editorial suggestions. The first named author would like to thank Professor Jie Wu, the Center for Topology and Geometry based Technology (CTGT), and the School of Mathematical Sciences at Hebei Normal University for their hospitality, and acknowledges support from the National Natural Science Foundation of China (No. 21113062) and from the High-level Scientific Research Foundation of Hebei Province (No. 13113093). The second named author and the third named author would like to thank Xiangjun Wang for helpful discussions. The second named author and the third named author were supported by the National Natural Science Foundation of China (No. 11871284). The fourth named author was supported by the National Natural Science Foundation of China (No. 12001474; 11761072). All authors contributed equally. \section{The spectral sequences}\label{sec:spectral_sequence} \subsection*{The Serre spectral sequence $^UE$} We follow the strategy employed in \cite{gu2019cohomology} to compute the cohomology of $BPU_n$. The short exact sequence of Lie groups $$1 \to S^1 \to U_n \to PU_n \to 1$$ induces a fiber sequence of their classifying spaces $$BS^1 \to BU_n \to BPU_n.$$ Notice that $BS^1$ is of the homotopy type of the Eilenberg-Mac Lane space $K(\mathbb{Z}, 2)$ and indeed we obtain another fiber sequence \begin{equation}\label{eq:chi} U: ~ BU_n \to BPU_n \xrightarrow{\chi} K(\mathbb{Z}, 3). \end{equation} \begin{rem} In general, it is not always possible to obtain a fiber sequence of the form $F\to E\to B$ from a fiber sequence $\Omega B\to F\to E$. See Ganea \cite{ganea1967induced} for more. \end{rem} We will use the Serre spectral sequence associated to the last fiber sequence to compute the cohomology of $BPU_n$. For notational convenience, we denote this spectral sequence by $^UE$. The $E_2$ page of $^UE$ has the form $$^U E^{s, t}_{2} = H^{s}(K(\mathbb{Z},3);H^{t}(BU_{n})) \Longrightarrow H^{s+t}(BPU_{n}).$$ In principle, Cartan and Serre \cite{cartan19551955} determined the cohomology of $K(A,n)$ for all finitely generated abelian groups $A$. Also see Tamanoi \cite{tamanoi1999subalgebras} for a nice treatment. We summarize the $p$-local cohomology of $K(\mathbb{Z}, 3)$ in low dimensions as follows.
\begin{prop}\label{prop: p local cohomology of KZ3 below 2p+5} Let $p > 2$ be a prime. In degrees up to $2p+5$, we have \begin{equation}\label{equation: p local cohomology of KZ3 below 2p+5} H^{s}(K(\mathbb{Z},3))_{(p)} = \begin{cases} \mathbb{Z}_{(p)}, & s = 0,\ 3,\\ \mathbb{Z}/p, & s = 2p+2,\ 2p+5,\\ 0, & s < 2p+5, s \neq 0,\ 3,\ 2p+2, \end{cases} \end{equation} where $x_1,~ y_{p, 0},~ x_1 y_{p, 0}$ are generators in degrees $3$, $2p+2$, $2p+5$, respectively. In addition, we have $y_{p, 0} = \delta\operatorname{P}^1(\bar{x}_1)$. \end{prop} Here we use the same notation for the generators as in \cite{gu2019cohomology}. Sometimes we abuse notation and let $x_1, y_{p, 0}$ denote $\chi^*(x_1)$, $\chi^*(y_{p,0})$, where $\chi: BPU_n\to K(\mathbb{Z},3)$ is defined in \eqref{eq:chi}. For instance, we have \begin{prop}[Theorem 1.2, \cite{gu2019cohomology}]\label{prop:thm1_2} Let $p$ be a prime. In $H^{2p+2}(BPU_{n})$, we have $y_{p,0}\neq 0$ of order $p$ when $p\mid n$, and $y_{p,0}=0$ otherwise. Furthermore, the $p$-torsion subgroup of $H^k(BPU_n)$ is $0$ for $3<k<2p+2$. \end{prop} Also recall \begin{equation}\label{equation: cohomology of BUn} H^{*}(BU_{n}) = \mathbb{Z}[c_1,c_2,\dots,c_n],\ |c_i|=2i. \end{equation} In particular, $H^{*}(BU_{n})$ is torsion-free. We have $$^U E^{s, t}_{2} \cong H^{s}(K(\mathbb{Z},3)) \otimes H^{t}(BU_{n}).$$ \subsection*{The auxiliary fiber sequences and spectral sequences} To determine some of the differentials in $^UE$, we consider two more fiber sequences. Let $T^n$ be the maximal torus of $U_n$ with the inclusion denoted by \[\psi: T^n\to U_n.\] Passing to quotients over $S^1$, we have another inclusion of a maximal torus \[\psi': PT^n\to PU_n.\] The quotient map $T^n\to PT^n$ fits into an exact sequence of Lie groups \[1\to S^1\to T^n\to PT^n\to 1,\] which induces a fiber sequence $$T: ~ BT^n \to BPT^n \to K(\mathbb{Z}, 3).$$ Notice that we have \begin{equation}\label{equation: cohomology of BTn} H^{*}(BT^{n}) = \mathbb{Z}[v_1,v_2,\dots,v_n],\ |v_i|=2. \end{equation} The next fiber sequence is simply the path fibration for the space $K(\mathbb{Z},3)$ $$K: ~ K(\mathbb{Z}, 2) \to * \to K(\mathbb{Z}, 3)$$ where $*$ denotes a contractible space. We denote their associated Serre spectral sequences as $^T E$ and $^K E$, respectively. We denote the corresponding differentials of $^UE$, ${^TE}$, and ${^KE}$ by ${^Ud}_*^{*,*}$, ${^Td}_*^{*,*}$, and ${^Kd}_*^{*,*}$, respectively, if there are risks of ambiguity. Otherwise, we simply denote the differentials by $d_*^{*,*}$. These fiber sequences fit into the following homotopy commutative diagram: \begin{equation}\label{eq:3_by_3_diag} \begin{tikzcd} K\arrow[d,"\Phi"]:& K(\mathbb{Z},2)\arrow[r]\arrow[d,"B\varphi"]& *\arrow[r]\arrow[d]& K(\mathbb{Z},3)\arrow[d,"="]\\ T:\arrow[d,"\Psi"]& BT^n\arrow[r]\arrow[d,"B\psi"]& BPT^n\arrow[r]\arrow[d,"B\psi'"]& K(\mathbb{Z},3)\arrow[d,"="]\\ U:& BU_n\arrow[r]& BPU_n\arrow[r]& K(\mathbb{Z},3) \end{tikzcd} \end{equation} Here, the map $B\varphi: K(\mathbb{Z},2)\simeq BS^1\to BT^n$ is the de-looping of the diagonal map $S^1\to T^n$.
The induced homomorphism between cohomology rings is as follows: \[B\varphi^*:H^*(BT^n) = \mathbb{Z}[v_1,v_2,\cdots,v_n] \to H^*(BS^1) = \mathbb{Z}[v],\ v_i\mapsto v.\] The map $B\psi: BT^n\to BU_n$ induces the injective ring homomorphism \begin{equation*} \begin{split} B\psi^*: H^*(BU_n) = \mathbb{Z}[c_1,\cdots,c_n] &\to H^*(BT^n) = \mathbb{Z}[v_1,\cdots,v_n],\\ c_i &\mapsto \sigma_i(v_1,\cdots,v_n), \end{split} \end{equation*} where $\sigma_j(t_1,t_2,\cdots,t_n)$ is the $j$th elementary symmetric polynomial in the variables $t_1,t_2,\cdots,t_n$: \begin{equation}\label{eq:sigma_def} \begin{split} & \sigma_0(t_1,t_2,\cdots,t_n) = 1,\\ & \sigma_1(t_1,t_2,\cdots,t_n) = t_1+t_2+\cdots+t_n,\\ & \sigma_2(t_1,t_2,\cdots,t_n) = \sum_{i<j}t_it_j,\\ & \vdots\\ & \sigma_n(t_1,t_2,\cdots,t_n) = t_1t_2\cdots t_n. \end{split} \end{equation} We will use the associated maps of spectral sequences to compute the differentials in $^UE$. This is possible because we have a good understanding of the corresponding differentials in $^TE$ and $^KE$. In particular, we have the following results. \begin{prop}[\cite{gu2019cohomology}, Corollary 2.16]\label{prop:diff0} The higher differentials of ${^KE}_{*}^{*,*}$ satisfy \begin{equation*} \begin{split} &d_{3}(v)=x_1,\\ &d_{2p-1}(x_{1}v^{lp^{e}-1})=v^{lp^{e}-1-(p-1)}y_{p,0},\quad e > 0,\ \operatorname{gcd}(l,p)=1,\\ &d_{r}(x_1)=d_{r}(y_{p,0})=0,\quad \textrm{for all }r, \end{split} \end{equation*} and the Leibniz rule. \end{prop} \begin{rem} Proposition \ref{prop:diff0} is a special case of Corollary 2.16, \cite{gu2019cohomology}. Here, we take the opportunity to correct a typo in the original Corollary 2.16, \cite{gu2019cohomology}, where the condition $k \geq e$ should be replaced by $e > k$. \end{rem} \begin{prop}[\cite{gu2019cohomology}, Proposition 3.2]\label{pro:diff1} The differential $^{T}d_{r}^{*,*}$ is partially determined as follows: \begin{equation} ^{T}d_{r}^{*,2t}(v_{i}^{t}\xi)={(B\pi_i)^{*}}({^Kd}_{r}^{*,2t}(v^{t}\xi)), \end{equation} where $\xi\in {^{T}E}_{r}^{*,0}$, a quotient group of $H^*(K(\mathbb{Z}, 3))$, and $\pi_i: T^{n}\rightarrow S^1$ is the projection onto the $i$th diagonal entry. In plain words, $^{T}d_{r}^{*,2t}(v_{i}^{t}\xi)$ is simply $^{K}d_{r}^{*,2t}(v^{t}\xi)$ with $v$ replaced by $v_i$. \end{prop} \begin{rem} Here we correct another typo in the original Proposition 3.2 in \cite{gu2019cohomology}, in which ``$~\xi\in {^{T}E}_{r}^{0,*}$~'' should be replaced by ``~$\xi\in {^{T}E}_{r}^{*,0}$~''. \end{rem} \begin{prop}[\cite{gu2019cohomology}, Proposition 3.3]\label{prop:the_spectral_seq_TE} \begin{enumerate} \item The differential $^{T}d_{3}^{0,t}$ is given by the ``formal divergence'' \[\nabla=\sum_{i=1}^{n}(\partial/\partial v_i): H^{t}(BT^{n};R)\rightarrow H^{t-2}(BT^{n};R),\] in such a way that $^{T}d_{3}^{0,*}=\nabla(-)\cdot x_{1}$, for any ground ring $R=\mathbb{Z}$ or $\mathbb{Z}/m$, $m$ an integer. \item The spectral sequence degenerates at ${{^T}E}^{0,*}_{4}$. Indeed, we have $^{T}E_{\infty}^{0,*}={^{T}E_{4}}^{0,*}=\operatorname{Ker}{^Td}_{3}^{0,*}=\mathbb{Z}[v_{1}-v_{n},\cdots, v_{n-1}-v_{n}]$.
\end{enumerate} \end{prop} \begin{cor}[\cite{gu2019cohomology}, Corollary 3.4]\label{cor:d3} \[^{U}d_{3}^{0,*}(c_{k})=\nabla(c_{k})x_1=(n-k+1)c_{k-1}x_1.\] \end{cor} \subsection*{Computations in the spectral sequence $^UE$} In order to study $$_pH^* (BPU_n)\cong {_p[H^*(BPU_n)_{(p)}]},$$ it suffices to look at the $p$-localized spectral sequence, where the $E_2$ page becomes \begin{equation}\label{equation: E2 tensor form} (^U E^{s, t}_{2})_{(p)} = H^{s}(K(\mathbb{Z},3))_{(p)} \otimes H^{t}(BU_{n}) = H^{s}(K(\mathbb{Z},3)) \otimes H^{t}(BU_{n})_{(p)}. \end{equation} By abuse of notation, for the rest of this paper, we let ${^U E}, {^T E}$ and $^K E$ denote the corresponding $p$-localized Serre spectral sequences. By Proposition \ref{prop: p local cohomology of KZ3 below 2p+5} and \eqref{equation: cohomology of BUn}, in the range $s \leq 2p+5$, the only cases in which $^U E^{s, t}_{2}$ could be nonzero are when $s = 0, 3, 2p+2, 2p+5$ and $t$ is even. To simplify notation, we let $$M^0 = ~ ^U E^{0,2p+2}_{2}, ~ M^1 = ~ ^U E^{3,2p}_{2}, ~ M^2 = ~ ^U E^{2p+2,2}_{2}, ~ M^3 = ~ ^U E^{2p+5,0}_{2}.$$ Inspection of degrees shows that $^U E^{3, 2p}_{*}$ can receive only the $d_3$ differential and support the $d_{2p-1}$ differential. Similarly, $^U E^{2p+2,2}_{*}$ can receive only the $d_{2p-1}$ differential and support the $d_3$ differential. In addition, all $d_2$'s are trivial and therefore we have $^U E^{*,*}_{2} = ~^U E^{*,*}_{3}$. We let $\delta^0$ be the map $$\delta^0: M^0 = ~^U E^{0,2p+2}_{3} \xrightarrow{d_3} ~^U E^{3, 2p}_{3} = M^1.$$ We let $\delta^1$ be the composition $$\delta^1:M^1 = ~^U E^{3, 2p}_{3} \to ~^U E^{3, 2p}_{3}/ \operatorname{Im}d_3 = ~ ^U E^{3,2p}_{2p-1} \xrightarrow{d_{2p-1}} ~ ^U E^{2p+2,2}_{2p-1} = \operatorname{Ker} d_3 \subset M^2.$$ We let $\delta^2$ be the map $$\delta^2:M^2 = ~^U E^{2p+2,2}_{3} \xrightarrow{d_3} ~^U E^{2p+5,0}_{3} = M^3.$$ One immediately sees that \[M^0 \xrightarrow{\delta^0} M^1 \xrightarrow{\delta^1} M^2 \xrightarrow{\delta^2} M^3\] is a chain complex of $\mathbb{Z}_{(p)}$-modules, which we denote by $\mathcal{M}$. We will show later that Theorem \ref{thm: main thm, 2p+3 and 2p+4 have trivial p torsion} is a consequence of the following \begin{prop}\label{prop: four term exact sequence} Let $p \geq 3$ be a prime number such that $p\mid n$. The chain complex $\mathcal{M}$ defined above is exact. \end{prop} \begin{proof}[Proof of Theorem \ref{thm: main thm, 2p+3 and 2p+4 have trivial p torsion} assuming Proposition \ref{prop: four term exact sequence}] Let $n = p^r m$. For $r = 0$, the theorem follows from Proposition \ref{pro:n-torsion}. In the rest of the proof we assume $r > 0$. First, we prove $$_pH^{2p+2}(BPU_n) \cong \mathbb{Z}/p.$$ By Proposition \ref{prop:thm1_2}, $y_{p,0}\in ^UE_2^{2p+2,0}$ survives to a nonzero element in $H^{2p+2}(BPU_{n})$ of order $p$.
Therefore, we have $$^UE_{\infty}^{2p+2,0} = ~^UE_2^{2p+2,0}\cong\mathbb{Z}/p.$$ Since the only nontrivial entries in $^UE_2^{*,*}$ of total degree $2p + 2$ are $^UE_2^{2p+2,0}$ and $^UE_2^{0,2p+2}$, we have a short exact sequence of $\mathbb{Z}_{(p)}$-modules \[0\to ~^UE_{\infty}^{2p+2,0} \to H^{2p+2} (BPU_n)_{(p)} \to ~^UE_{\infty}^{0,2p+2} \to 0.\] Since $^UE_{\infty}^{0,2p+2} \subset ~^UE_2^{0,2p+2}$ is a free $\mathbb{Z}_{(p)}$-module, the above short exact sequence splits and we have \[H^{2p+2} (BPU_n)_{(p)} \cong ~^UE_{\infty}^{2p+2,0} \oplus ~^UE_{\infty}^{0,2p+2},\] from which we deduce \[_pH^{2p+2} (BPU_n) \cong ~^UE_{\infty}^{2p+2,0} \cong \mathbb{Z}/p.\] Since the row $E_{\infty}^{*,0}$ is the image of $\chi^*$, the above implies \begin{equation}\label{eq:imageofchi} _pH^{2p+2} (BPU_n) = \chi^*(H^{2p+2}(K(\mathbb{Z},3))). \end{equation} From \eqref{eq:imageofchi} and Proposition \ref{prop: p local cohomology of KZ3 below 2p+5}, it follows that $_pH^{2p + 2}(BPU_n)$ is generated by $\delta\operatorname{P}^1(\bar{x}_1)$. Next, we prove $$_pH^{2p+3}(BPU_n) = H^{2p+3}(BPU_n)_{(p)} = 0.$$ The exactness of $\mathcal{M}$ at $M^1$ implies $^U E^{3, 2p}_{\infty} = 0$. On the other hand, ${^UE}_2^{3,2p}$ is the only nontrivial entry in ${^UE}_2^{*,*}$ of total degree $2p+3$. Hence, we have \[_pH^{2p+3}(BPU_n)\subset H^{2p+3}(BPU_n)_{(p)} = {^UE}^{3, 2p}_{\infty} = 0.\] Finally, we prove $$_pH^{2p+4}(BPU_n) = 0.$$ The exactness of $\mathcal{M}$ at $M^2$ implies $^U E^{2p+2, 2}_{\infty} = 0$. Since ${^UE}_2^{0,2p+4}$ and ${^UE}_2^{2p+2,2}$ are the only nontrivial entries in ${^UE}_2^{*,*}$ of total degree $2p+4$, we have \[H^{2p+4}(BPU_n)_{(p)}\cong {^UE}_{\infty}^{0,2p+4},\] which is torsion-free. In particular, we have $_pH^{2p+4}(BPU_n) = 0$. \end{proof} The proof of Proposition \ref{prop: four term exact sequence} occupies Section \ref{sec:exactness}. \section{The proof of Proposition \ref{prop: four term exact sequence}}\label{sec:exactness} From \eqref{equation: E2 tensor form}, we can write out the $\mathbb{Z}_{(p)}$-modules $M^0, M^1, M^2, M^3$ more explicitly: $$M^0 = H^{0}(K(\mathbb{Z},3)) \otimes H^{2p+2}(BU_n)_{(p)}\cong H^{2p+2}(BU_n)_{(p)}$$ is the free $\mathbb{Z}_{(p)}$-module generated by monomials in $c_1,\cdots,c_{p+1}$ in dimension $2p+2$, and $$M^1 = H^{3}(K(\mathbb{Z},3)) \otimes H^{2p}(BU_n)_{(p)} \cong H^{2p}(BU_n)_{(p)}$$ is the free $\mathbb{Z}_{(p)}$-module generated by elements of the form $cx_1$ where $c$ is a monomial in $c_1,\cdots,c_p$ in dimension $2p$. Furthermore, we have $$M^2 = H^{2p+2}(K(\mathbb{Z},3)) \otimes H^{2}(BU_n)_{(p)} = \mathbb{Z}_{(p)}\{c_{1}y_{p,0}\}/p\cong\mathbb{Z}/p$$ and $$M^3 = H^{2p+5}(K(\mathbb{Z},3)) \otimes H^{0}(BU_n)_{(p)} = \mathbb{Z}_{(p)}\{x_{1}y_{p,0}\}/p\cong\mathbb{Z}/p.$$ \subsection*{The exactness of $\mathcal{M}$ at $M^2$} \begin{lem}\label{lem:Td_2p-1} In the spectral sequence $^TE$, we have \begin{equation}\label{eq:Td_3v_n} \begin{cases} v_{n}^{k}x_1 \in\operatorname{Im}{^Td}_3 ,\ 0 \le k \le p-2 \textrm{ or }k=p,\\ ^{T}d^{3,*}_{2p-1}(v_{n}^{p-1}x_1) = y_{p,0}. \end{cases} \end{equation} \end{lem} \begin{proof} When $p \nmid k+1$, the first formula in Proposition \ref{prop:diff0} together with Proposition \ref{pro:diff1} implies that $$v_n^k x_1 = {\frac{1}{k+1}}{^Td}_3(v_n^{k+1})$$ is in the image of ${^Td}_3$. This completes the proof for the case $0 \le k \le p-2 \textrm{ or }k=p$. The remaining case is proved by applying the second formula in Proposition \ref{prop:diff0}, taking $e=l=1$, and then Proposition \ref{pro:diff1}.
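Explicitly, setting $e=l=1$ in the second formula of Proposition \ref{prop:diff0} gives \[{^Kd}_{2p-1}(x_{1}v^{p-1})=v^{(p-1)-(p-1)}y_{p,0}=y_{p,0},\] and Proposition \ref{pro:diff1} then replaces $v$ by $v_n$, yielding $^{T}d^{3,*}_{2p-1}(v_{n}^{p-1}x_1)=y_{p,0}$, as claimed.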
\end{proof} \begin{lem}\label{lem:image_of_delta1} The map $\delta^1:M^1\to M^2\cong\mathbb{Z}/p$ is surjective. \end{lem} \begin{proof} Recall the morphism of fiber sequences $\Psi$ introduced in \eqref{eq:3_by_3_diag}, and the induced morphism $\Psi^*: {^U E}\to {^T E}$ of spectral sequences. For $1\le i \le n$, let $v'_i =v_i-v_{n}$. It follows from (2) of Proposition \ref{prop:the_spectral_seq_TE} that the $v'_i$'s are permanent cycles. To determine the value of $\delta^1$ at $c_{p}x_1 \in M^1$, we have \begin{equation}\label{eq:Psi_delta1} \begin{split} &\Psi^*\delta^1(c_{p}x_1) \\ =\ & \Psi^* ~ {^Ud}_{2p-1}^{3,2p}(c_{p}x_1) = {^Td}_{2p-1}^{3,2p}\Psi^{*}(c_{p}x_1)\\ =\ & {^Td}_{2p-1}^{3,2p}(\sum_{n\geq i_1 > i_2 >...> i_p \geq 1}v_{i_1}v_{i_2}...v_{i_p}x_1)\\ =\ & {^Td}_{2p-1}^{3,2p}(\sum_{n\geq i_1 > i_2 >...> i_p\geq 1}(v'_{i_1}+v_n)(v'_{i_2}+v_n)...(v'_{i_p}+v_n)x_1)\\ =\ & {^Td}_{2p-1}^{3,2p}(\sum_{n\geq i_1 > i_2 >...> i_p \geq 1}\sum_{j=0}^{p}\sigma_j(v'_{i_1},\cdots,v'_{i_p})v_n^{p-j}x_1), \end{split} \end{equation} where $\Psi^*: {^UE}\to {^TE}$ is the morphism of spectral sequences induced by the inclusions of maximal tori $T^n\to U_n$ and $PT^n\to PU_n$, as introduced in (\ref{eq:3_by_3_diag}), and $\sigma_j$ denotes the $j$th elementary symmetric polynomial in $p$ variables, as in \eqref{eq:sigma_def}. By Lemma \ref{lem:Td_2p-1}, we simplify \eqref{eq:Psi_delta1} and obtain \begin{equation}\label{eq:Psi_delta1_second} \Psi^{*}\delta^1(c_{p}x_1) ={^Td}_{2p-1}(\sum_{n \geq i_1 > i_2 >...> i_p \geq 1}\sigma_1(v'_{i_1},\cdots,v'_{i_p})v_n^{p-1}x_1). \end{equation} To proceed, we evaluate the expression $$\sum_{n\geq i_1 > i_2 >...> i_p \geq 1}\sigma_1(t_{i_1},\cdots,t_{i_p})$$ for variables $t_i,\ 1\leq i\leq n$. Since it is multi-linear and symmetric in the variables $t_1,\cdots,t_n$, we have \[\sum_{n\geq i_1 > i_2 >...> i_p \geq 1}\sigma_1(t_{i_1},\cdots,t_{i_p}) = \lambda\sum_{i=1}^nt_i\] for some $\lambda\in\mathbb{Z}$. Taking the substitution $t_1=\cdots=t_n=1$ and comparing both sides of the above, we obtain \[\lambda=\frac{p}{n}\binom{n}{p}=\binom{n-1}{p-1}\not\equiv 0\pmod{p}\] and \begin{equation}\label{eq:evaluate_sigma1} \sum_{n\geq i_1 > i_2 >...> i_p \geq 1}\sigma_1(t_{i_1},\cdots,t_{i_p}) = \binom{n-1}{p-1}\sum_{i=1}^nt_i. \end{equation} Consider the following commutative diagram: \begin{equation*} \begin{tikzcd} M^1 = {^UE}_2^{3,2p}\arrow[r,"\Psi^*"]\arrow[d] &{^TE}_{2}^{3,2p}\arrow[d] \\ {^UE}_{2p-1}^{3,2p}\arrow[r,"\Psi^*"]\arrow[d,"^Ud_{2p-1}"] &{^TE}_{2p-1}^{3,2p}\arrow[d, "^Td_{2p-1}"] \\ {^UE}_{2p-1}^{2p+2,2}\arrow[r,"\Psi^*"]\arrow[d, hook] &{^TE}_{2p-1}^{2p+2,2}\arrow[d, hook] \\ M^2 = {^UE}_2^{2p+2,2}\arrow[r,"\Psi^*"] &{^TE}_{2}^{2p+2,2} \end{tikzcd} \end{equation*} where the composition of the left vertical maps is $\delta^1$ and we resume the computation of $\Psi^{*}\delta^1(c_{p}x_1)$ started in \eqref{eq:Psi_delta1_second}: \begin{equation}\label{eq:Psi_delta1_third} \begin{split} &\Psi^{*}\delta^1(c_{p}x_1) \\ =\ &{^Td}_{2p-1}(\binom{n-1}{p-1}\sum_{i=1}^n v_i'v_n^{p-1}x_1) \ \ \ (\textrm{by }\eqref{eq:evaluate_sigma1}) \\ =\ &\binom{n-1}{p-1} \sum_{i=1}^n v'_i y_{p,0} \ \ (\textrm{since the $v_i'$'s are permanent cycles})\\ =\ &\binom{n-1}{p-1}\sum_{i=1}^n v_i y_{p,0} \ \ (\text{since $y_{p,0}$ is $p$-torsion}) \\ =\ &\Psi^{*}(\binom{n-1}{p-1}c_1y_{p,0}).
\end{split} \end{equation} By the injectivity of \[\Psi^{*}: M^2 = {^UE}_2^{2p+2,2}\to {^TE}_2^{2p+2,2}\] together with \eqref{eq:Psi_delta1_third}, we have \[\delta^1(c_{p}x_1) = \binom{n-1}{p-1} c_{1}y_{p,0} \neq 0\] and we conclude. \end{proof} \begin{lem}\label{lem:exactness_at_M2} The chain complex $\mathcal{M}$ is exact at $M^2$. \end{lem} \begin{proof} By Lemma \ref{lem:image_of_delta1}, and the fact that $\mathcal{M}$ is a chain complex, we have $\delta^2 = 0$ and the lemma follows. Alternatively, one may compute $\delta^2 = d_3^{2p+2,2}$ directly with Corollary \ref{cor:d3} and obtain the same result. \end{proof} \subsection*{The exactness of $\mathcal{M}$ at $M^1$} Recall that the $\mathbb{Z}_{(p)}$-module $M^1$ is freely generated by elements of the form $cx_1$ for \[c\in S' := \{c_1^{i_1}c_2^{i_2} \cdots c_p^{i_p} \mid i_k\geq 0 ,\ \sum_k ki_k = p\}.\] Indeed, $S'$ is simply the set of monomials in $c_1,c_2,\cdots,c_n$ in $H^{2p}(BU_n)$. We define a total ordering $\mathfrak{O}$ on monomials in $c_1,c_2,\cdots,c_n$ as follows. We assert \[c_1^{i_1}c_2^{i_2} \cdots c_p^{i_p} > c_1^{j_1}c_2^{j_2} \cdots c_p^{j_p}\] if and only if \begin{enumerate} \item there is at least one $k$ such that $i_k\neq j_k$, and \item for the smallest such $k$, we have $i_k>j_k$. \end{enumerate} Let $S := S'- \{c_p\}$. Then $\mathfrak{O}$ defines total orderings on $S$, $S'$ and $S'x_1$ as well. To compare $cx_1,c'x_1\in S'x_1$, we assert $cx_1>c'x_1$ if and only if $c>c'$. Let $L$ be the $\mathbb{Z}_{(p)}$-submodule of $H^{2p}(BU_n)_{(p)}$ spanned by $S$. We define a $\mathbb{Z}_{(p)}$-linear map \[\tau: L\to M^0 = H^{2p+2}(BU_n)_{(p)}\] as follows. Each element in $S$ is of the form $c_1^{i_1}c_2^{i_2}\cdots c_k^{i_k}$ such that $k<p$ and $i_k>0$, and we define \[\tau(c_1^{i_1}c_2^{i_2}\cdots c_k^{i_k}) := (c_1^{i_1}c_2^{i_2}\cdots c_{k-1}^{i_{k-1}})(c_k^{i_k-1}c_{k+1}).\] \begin{lem}\label{lem:delta0_tau} Let $\bar{\tau}:L/pL\to M^0/pM^0$ and $\bar{\delta}^0:M^0/pM^0\to M^1/pM^1$ denote the mod $p$ reductions of $\tau$ and $\delta^0$, respectively. Then the image of the composition \[L/pL\xrightarrow{\bar{\tau}} M^0/pM^0 \xrightarrow{\bar{\delta}^0} M^1/pM^1\] is $Lx_1/pLx_1$. In particular, we have \begin{equation}\label{eq:delta_0_tau_image} \operatorname{Im}\delta^0\tau \subset W := Lx_1 + (pc_px_1) \subset M^1. \end{equation} \end{lem} \begin{proof} Consider the $\mathbb{Z}_{(p)}$-bases $S$, $S'x_1$ for $L$ and $M^1$, respectively, both in the descending order with respect to the ordering $\mathfrak{O}$. Notice that $c_px_1$ is the smallest element in $S'x_1$. We study the $(N+1)\times N$ matrix $A$ of the map \[\delta^0\tau: L\to M^1\] with respect to these bases, where $N$ is the cardinality of $S$. Consider an arbitrary element \[c:=c_1^{i_1}\cdots c_k^{i_k}\in S\] with $k<p$ and $i_k>0$. By Corollary \ref{cor:d3} and the Leibniz formula, we have \begin{equation*} \begin{split} & \delta^0\tau(c) = \delta^0(c_1^{i_1}\cdots c_{k-1}^{i_{k-1}}c_k^{i_k-1}c_{k+1})\\ =& \begin{cases} (n-k)cx_1 + n i_1 c_1^{i_1-1}c_2^{i_2}\cdots c_k^{i_k-1}c_{k+1}x_1 + (\textrm{higher order terms}),\ i_1>0,\\ (n-k)cx_1 + (\textrm{higher order terms}),\ i_1=0. \end{cases} \end{split} \end{equation*} In both cases, we have \begin{equation*} \delta^0\tau(c) \equiv (n-k)cx_1 + (\textrm{higher order terms})\pmod{p}.
\end{equation*} Therefore, the matrix $A$ satisfies \begin{equation*} A \equiv \begin{pmatrix} \lambda_1 & * & \cdots & * \\ 0 & \lambda_2 & * & \vdots \\ 0 & 0 & \ddots & * \\ 0 & \cdots & 0 &\lambda_N\\ 0 & 0 & \cdots &0 \end{pmatrix}\pmod{p}, \end{equation*} where the $\lambda_i$'s are of the form $n-k$ for $k<p$, which are invertible in $\mathbb{Z}_{(p)}$, and we have verified that the image of the composition \[L/pL\xrightarrow{\bar{\tau}} M^0/pM^0 \xrightarrow{\bar{\delta}^0} M^1/pM^1\] is $Lx_1/pLx_1$. The equation \eqref{eq:delta_0_tau_image} follows from the above and the fact \[M^1 = Lx_1 + (c_px_1).\] \end{proof} \begin{lem}\label{lem:V_to-W} Consider the $\mathbb{Z}_{(p)}$-submodule $V=\tau(L)+(c_1c_p-c_{p+1})$ of $M^0$. We have $\delta^0(V)\subset W$ where \[W := Lx_1 + (pc_px_1)\subset M^1\] is the $\mathbb{Z}_{(p)}$-submodule of $M^1$ defined in Lemma \ref{lem:delta0_tau}. \end{lem} \begin{proof} By Lemma \ref{lem:delta0_tau} we have $\delta^0(\tau(L)) \subset W$. On the other hand, we have \begin{equation}\label{eq:c_1c_p} \delta^0(c_1c_p-c_{p+1}) = (n-p+1)c_1c_{p-1}x_1 + pc_px_1\in W, \end{equation} and we conclude. \end{proof} \begin{lem}\label{lem:exact_at_M1} The chain complex $\mathcal{M}$ is exact at $M^1$. \end{lem} \begin{proof} By Lemma \ref{lem:V_to-W}, the restriction of $\delta^0$ to $V$ has image in $W$. Therefore, we write $\delta^0_V := \delta^0|_V: V\to W$ and consider its mod $p$ reduction \[\bar{\delta}^0_V: V/pV\to W/pW = Lx_1/pLx_1 + (pc_px_1)/(p^2c_px_1).\] By Lemma \ref{lem:delta0_tau}, we have $Lx_1/pLx_1\subset \operatorname{Im}{\bar{\delta}^0_V}$. By $Lx_1/pLx_1\subset \operatorname{Im}{\bar{\delta}^0_V}$ and \eqref{eq:c_1c_p}, we have $[pc_px_1]\in \operatorname{Im}{\bar{\delta}^0_V}$, where $[pc_px_1]$ is the class in $W/pW$ represented by $pc_px_1$. Therefore, $\bar{\delta}^0_V: V/pV\to W/pW$ is surjective. By Nakayama's lemma in commutative algebra (Theorem 2.2, Chapter 1, \cite{matsumura1989commutative}), $\delta^0_V: V\to W$ is surjective. Therefore, we have \begin{equation}\label{eq:exact_at_M1_1} \operatorname{Im}\delta^0 \supset\operatorname{Im}\delta^0_V = W = Lx_1 + (pc_px_1). \end{equation} On the other hand, we have $\operatorname{Ker}\delta^1 \supset \operatorname{Im}\delta^0$, and therefore $\operatorname{Ker}\delta^1 \supset W$. Now, by Lemma \ref{lem:image_of_delta1}, we have \[\mathbb{Z}/p\cong M^1/(Lx_1 + (pc_px_1)) = M^1/W \to M^1/\operatorname{Ker}\delta^1\cong\mathbb{Z}/p,\] where the arrow is the tautological quotient map, which is surjective. Therefore, the above composition is a bijection. It follows that we have \begin{equation}\label{eq:exact_at_M1_2} W = \operatorname{Ker}\delta^1 \supset \operatorname{Im}\delta^0, \end{equation} and the lemma follows from \eqref{eq:exact_at_M1_1} and \eqref{eq:exact_at_M1_2}. \end{proof} Lemma \ref{lem:exact_at_M1} and Lemma \ref{lem:exactness_at_M2} complete the proof of Proposition \ref{prop: four term exact sequence}. \end{document}
\begin{document} \title{Testing a Quantum Inequality \\with a Meta-analysis of Data for Squeezed Light } \author{G. Jordan Maclay \and Eric W. Davis } \institute{G. Jordan Maclay \at Quantum Fields LLC, St. Charles, IL 60174 \\ \email{[email protected]} \and Eric W. Davis \at Institute for Advanced Studies at Austin, 11855 Research Blvd., Austin, TX 78759; Early Universe, Cosmology and Strings Group, Center for Astrophysics, Space Physics and Engineering Research, Baylor University, Waco, TX 76798\\ \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} In quantum field theory, coherent states can be created that have negative energy density, meaning it is below that of empty space, the free quantum vacuum. If no restrictions existed regarding the concentration and permanence of negative energy regions, it might, for example, be possible to produce exotic phenomena such as Lorentzian traversable wormholes, warp drives, time machines, violations of the second law of thermodynamics, and naked singularities. \ Quantum Inequalities (QIs) have been proposed that restrict the size and duration of the regions of negative quantum vacuum energy that can be accessed by observers. \ However, QIs generally are derived for situations in cosmology and are very difficult to test. \ Direct measurement of vacuum energy is difficult and to date no QI has been tested experimentally. \ We test a proposed QI for squeezed light by a meta-analysis of published data obtained from experiments with optical parametric amplifiers (OPA) and balanced homodyne detection. Over the last three decades, researchers in quantum optics have been trying to maximize the squeezing of the quantum vacuum and have succeeded in reducing the variance in the quantum vacuum fluctuations to -15 dB. To apply the QI, a time sampling function is required. In our meta-analysis, different time sampling functions for the QI were examined, but in all physically reasonable cases the QI is violated by much or all of the measured data. This brings into question the basis for QIs. \ Possible explanations are given for this surprising result. \keywords{squeezed light \and quantum inequality \and vacuum energy \and vacuum fluctuations \and negative energy \and optical parametric amplifier} \PACS{42.50Lc \and 42.50Dv \and 3.65Wj \and 11.10.Ef } \end{abstract} \section{Introduction} \label{intro} In quantum field theory, the vacuum expectation value of the normally ordered or renormalized energy density $\langle T_{oo}\rangle$ need not be positive. \ For example, a superposition of a vacuum state (n=0) and a two-photon state (n=2) can have negative renormalized energy density with the proper choice of coefficients. \ Squeezed light can have a negative energy density. From theory and experiment, we know that static negative energy densities associated with vacuum states are concentrated in narrow spatial regions, e.g., inside a parallel plate Casimir cavity with small plate separation or in the region near the Schwarzschild radius in the Boulware vacuum where the energy density is everywhere negative as seen by static observers. There is no known way to directly measure vacuum energy density. \ On the other hand, the total energy of a system is believed to always be positive or zero. \ For example, the sum of the mass energy of the plates plus the negative vacuum energy inside the cavity is positive \cite{beckenstein}\cite{Visser}.
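To make the vacuum-plus-two-photon example mentioned above concrete, here is a minimal single-mode sketch (schematic only: we suppress the mode normalization, display only the electric-field contribution, and evaluate at an instant where the interference term is most negative). For a single mode of frequency $\omega$, take \begin{equation*} |\psi\rangle=\frac{|0\rangle+\varepsilon|2\rangle}{\sqrt{1+\varepsilon^{2}}},\qquad \varepsilon\ \mathrm{real}, \end{equation*} so that the normally ordered square of the field satisfies \begin{equation*} \langle\psi|\!:\!E^{2}\!:\!|\psi\rangle\ \propto\ \frac{2\sqrt{2}\,\varepsilon\cos 2\omega t+4\varepsilon^{2}}{1+\varepsilon^{2}}, \end{equation*} which, at an instant where $\cos 2\omega t=1$, is negative for $-1/\sqrt{2}<\varepsilon<0$.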
\ The classical energy conditions imply that an inertial observer who initially encounters some negative energy density must encounter compensating positive energy density at some arbitrary time in the future. \ Quantum Inequalities (QIs) have been derived for the free vacuum quantum electromagnetic field, with no sources or boundaries, which constrain the magnitude and duration of negative energy densities relative to the energy density of an underlying reference vacuum state. \ The QI places bounds on quantum violations of the classical energy conditions \cite{ford}\cite{davies}. \ The QI is formulated as a mathematical bound on the average of the quantum expectation value of a free field's energy-momentum tensor in the vacuum state, where the average is taken along an observer's timelike or null worldline using time sampling functions. \ Contrary to the classical energy conditions, the QI dictates that the more negative the energy density is in some time interval T, the shorter the duration T of the interval, so that an inertial observer cannot encounter arbitrarily large negative energy densities that last for arbitrarily long time intervals. \ An inertial observer must encounter compensating positive energy density no later than after a time T, which is inversely proportional to the magnitude of the initial negative energy density. In a QI, restrictions are placed on the integral of the vacuum expectation value of the renormalized energy density $\langle T_{oo}\rangle$ multiplied by a sampling function. \ For the electromagnetic field in flat space-time, with a normalized time sampling function of $\ f(t)=(t_{o}/\pi)(1/(t^{2}+t_{o}^{2})),$ Ford has shown \cite{Ford2} \begin{equation} \hat{\rho}\equiv\frac{t_{o}}{\pi}\int_{-\infty}^{+\infty}\frac{\langle T_{oo}\rangle}{t^{2}+t_{o}^{2}}dt \geqslant -\frac{3}{16\pi^{2}}\frac{\hbar c}{(ct_{o})^{4}} \end{equation} To give a frame of reference, this can be compared to the vacuum energy density within an ideal parallel plate Casimir cavity of separation $a$: \begin{equation} \langle T_{oo}\rangle_{Cas}=-\frac{\pi^{2}}{720}\frac{\hbar c}{a^{4}} \end{equation} The ratio of the numerical factors for the free field to the Casimir cavity is 1.4, so a negative energy density $\hat{\rho}$\ equal to that in a perfectly conducting parallel plate cavity of spacing $a$ can exist no longer than for a time $t_{o}\sim a/c$, about $3\times10^{-16}$ seconds for a typical experiment. \ As the sampling time $t_{o}$ increases, $\hat{\rho}$ rapidly goes to zero. \ (Note, however, that as derived, the QIs do not apply directly to the Casimir cavity since it has boundaries. Also, to test Eq. 1 experimentally, one must make an absolute measurement of the vacuum energy density, an experimental challenge for which no solution has yet been found. Some progress is due to Riek et al., who were able to directly probe the spectrum of squeezed vacuum fluctuations of the electric field in the multi-THz range using femtosecond laser pulses \cite{reik}.) If the laws of quantum field theory placed no restrictions on negative energy, then it might be possible to produce surprising macroscopic effects such as violations of the second law of thermodynamics, traversable wormholes, warp drives, and possibly time machines \cite{Ford2}. \ QIs appear to restrict these violations of the second law \cite{ford3}\cite{ford5}. \ A quantum inequality has been derived for squeezed light by Marecki \cite{marecki1}.
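For reference, the numbers in the Casimir comparison above work out as follows (the plate separation $a\approx 100$ nm used here is a representative value we choose for illustration): \begin{equation*} \frac{3/16\pi^{2}}{\pi^{2}/720}=\frac{3\times 720}{16\pi^{4}}\approx 1.4, \qquad t_{o}\sim\frac{a}{c}\approx\frac{10^{-7}\ \mathrm{m}}{3\times 10^{8}\ \mathrm{m/s}}\approx 3\times 10^{-16}\ \mathrm{s}. \end{equation*}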
\ Squeezed light has a nonclassical distribution of the quadrature components (typically phase and amplitude), which may be considered as the canonical momentum and position components of an equivalent harmonic oscillator corresponding to the frequency of the electromagnetic radiation being considered. \ Squeezed states are routinely made in quantum optics experiments in the process of parametric down conversion, in which an incident photon is converted in a non-linear crystal to two entangled photons of the same frequency, which is one half of that of the incident photon. \ The fluctuations of the electric field in the squeezed light are locally lower than the vacuum fluctuations, the so-called shot-noise level. \ There appears to be a limit to the amount of squeezing relative to the free vacuum, which has been measured to be from -0.5 dB to the most recent value of -15 dB \cite{vah}. Detection of the squeezing relative to the free vacuum field is done using balanced homodyne detection (BHD). \ Marecki's QI predicts the maximum degree of squeezing in dB that is possible in terms of the fraction of the cycle during which the variance in the electric field is less than that of the free vacuum limit. Marecki developed the theoretical framework demonstrating the ability of BHD to quantify the vacuum fluctuations of the electric field in terms of vacuum expectation values of products of the electric field operators, the one- and two-point functions of arbitrary states of the electric field \cite{marecki3}\cite{marecki2}. He applied this theory to the measurement of negative Casimir energy densities using BHD. The corresponding experiments require the placement of photodiodes within Casimir cavities and have yet to be performed. To date no QI has been tested experimentally. \ One of the reasons for this was noted by Marecki \cite{marecki1}: ``As far as we know quantum field theoreticians do not know that their inequalities may influence real experiments nor are quantum opticians aware of the existence of such inequalities.'' Most of the quantum inequalities have been developed by quantum cosmologists or quantum field theorists, who are unaware of measurements of vacuum energy done by experts in quantum optics. This paper is the first attempt to bridge this gap and test a quantum inequality with published experimental data. It appears easiest to test the QI for squeezed light because with balanced homodyne detection one measures the squeezing relative to the free vacuum, which corresponds to the theoretical quantity appearing in the quantum inequality. \ However, there may be some subtleties in the comparison because of differences in the measurement protocols. Indeed, we find that the QI as given is violated by most of the experimental data, yet all experimental data are consistent with a theoretical model of the optical parametric amplifier (OPA) used to generate squeezed light. \section{Quantum Inequality for Squeezed Light} \label{sec:1} The QI for squeezed light gives a minimum value for the time-sampled magnitude of the variance $<\Delta>_{A}$ of the quantized electromagnetic field for a state A, where \begin{multline} <\Delta>_A \equiv \\ \int_{-\infty}^{+\infty}f(t)dt(<E^{2}(x,t)>_A-<E^{2}(x,t)>_{vac}) \end{multline} A simplified version of Marecki's derivation is given in Appendix A.
His key result is \cite{marecki1} \begin{multline} <\Delta>_{A}\quad \geq \\ \frac{-2}{(2\pi)^{2}}\int_{0}^{\infty}d\omega\int d^{3}p\mu_{p}^{2}\omega_{p}|(f^{1/2})_{FT}(\omega+\omega_{p})|^{2} \label{res} \end{multline} where $\omega^{2}_{p}=p_{1}^2+p_{2}^2+p_{3}^2$. \ The minimum value of the variance $<\Delta>_{A}$ for a state A is determined by the time window function $f(t)$, specifically by $|(f^{1/2})_{FT}|^{2}$, the magnitude squared of the Fourier transform of the square root of the window function $f(t)$. In order to ensure convergence of the integral, a spectral function $\mu_{p}=\mu(\omega_{p}-\omega_{0})$, a function of $\omega_{p}$ that is strongly peaked at $\omega_{p}=\omega_{0}$, must be included. This term reflects the frequency response of the apparatus measuring the variance. The result Eq. 4 is similar in spirit to those of other researchers in that it involves the Fourier transform of the time window \cite{pfen}. \ There is no proof that Eq. \ref{res} represents the greatest lower bound for $<\Delta>_{A}.$ In formulations of other Quantum Inequalities, other features of the time window, such as the second derivative, determine the minimum average energy over the time sampling \cite{Ford2}. \ In all formulations of Quantum Inequalities to date, the window function determines the minimum energy values. This may seem counterintuitive. \ In all cases, these formulations assume a free plane-wave electromagnetic field without sources or boundaries. We have found, like others, that the specific properties of the window function are very important \cite{davies}. \ Only in Marecki's calculations of the QI does a spectral function $\mu_{p}$ appear. \ This may be a problem because the Fourier transform of the time window $f(t)$ implies a certain frequency response of the apparatus, and this may conflict with the independent requirements for the function $\mu_{p}.$ The quantity that is generally measured in experiments is the $Log_{10}$ of the variance for some state A relative to the variance of the free vacuum: \begin{equation} R_{expt}=10Log_{10}\left( \frac{<\Delta>_{A}+<E^{2}>_{vac}}{<E^{2}>_{vac}}\right) \end{equation} According to Marecki, the measured squeezing in dB must exceed in numeric value $R$, where \begin{multline} R=10Log_{10}\Bigg[\\ \frac{\frac{1}{(2\pi)^{3}}\int d^{3}p\mu_{p}^{2}\omega _{p}(1-\int_{0}^{\infty}d\omega\,4\pi|(f^{1/2})_{FT}(\omega+\omega_{p})|^{2} )}{\frac{1}{(2\pi)^{3}}\int(\mu_{p}^{2}\omega_{p})d^{3}p}\Bigg] \end{multline} \subsection{Evaluation of R for Specific Time Sampling Functions} \label{sec:2} For a Gaussian \begin{equation} f(t)=\frac{1}{t_{0}\sqrt{2\pi}}e^{-t^{2}/2t_{0}^{2}} \end{equation} we obtain \begin{equation} R=10Log_{10}\left[ \frac{\int d^{3}p\mu_{p}^{2}\omega_{p}Erf[\sqrt{2} t_{o}\omega_{p}]}{\int d^{3}p\mu_{p}^{2}\omega_{p}}\right] \end{equation} If $\mu_{p}$ is a function of $\omega_{p}$ sharply peaked at $\omega_{0}$, with width $\delta\omega\ll\omega_{0}$, then to a good approximation \begin{equation} R(\omega_{o}t_{o})=10Log_{10}[Erf(\sqrt{2}\omega_{o}t_{o})] \end{equation} (Marecki has an additional factor of 2 in the Erf function, which we do not get). \ As a check on the role of the frequency windows $\mu_{p}$, we can do all the integrations in $R$ for a Gaussian frequency function, and we get an additional factor of $\omega_{0}^{3}\delta\omega$ for the $\omega_{p}$ integration in the numerator and in the denominator. \ These factors cancel, giving, to lowest order in $\delta\omega$, the result quoted above.
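To indicate how quickly this bound tightens with the sampling time, two sample values of Eq. 9 are (our arithmetic, for illustration only) \begin{equation*} R(0.1)=10Log_{10}[Erf(0.141)]\approx-8.0\ \mathrm{dB},\qquad R(1)=10Log_{10}[Erf(1.41)]\approx-0.2\ \mathrm{dB}, \end{equation*} so an appreciable amount of squeezing is permitted only for sampling times $t_{o}$ much shorter than $1/\omega_{o}$.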
\ On the other hand, if we do not introduce a frequency function, we find that $R=10Log_{10}[1]=0$, indicating that no squeezing is possible. \ In other words, a frequency function is required to get reasonable results; however, as we have noted, the frequency function may not be consistent with the time window. \ We can also compute $R$ for a squared Lorentzian time sampling function (an ordinary Lorentzian does not give well-behaved integrals): \begin{equation} f(t)=\frac{2}{\pi}\frac{t_{0}^{3}}{(t^{2}+t_{0}^{2})^{2}} \end{equation} We find \begin{equation} R(\omega_{o}t_{o})=10Log_{10}(1-e^{-2\omega_{o}t_{o}}) \end{equation} assuming $\mu_{p}$ is strongly peaked at $\omega_{o}.$ \ The equation behaves similarly to the one for the Gaussian time function. We can compute $R$ for a square window function $f(t)$ of width $\Delta T$ with perfectly sharp corners and using a frequency function $\mu_p$. We find that we always get perfect squeezing, $R=10Log_{10}0$, with no dependence on $\Delta T$. Although a perfectly sharp window is not physically possible, and is mathematically unstable, one still wonders about the meaning of this result. A sharp window allows one to do a perfect measurement (at least in principle) in which only regions of perfect squeezing are measured, and one can avoid the regions with partial or antisqueezing. We have also evaluated the variance for a symmetric trapezoidal window with a center region $T_{S}$ long and sloping sides that are each $nT_{S}$ long, normalized to 1. \section{Production of Squeezed Light Using Optical Parametric Amplification} A model for an optical parametric amplifier (OPA) with balanced homodyne detection (BHD) predicts the relative variance $S$ in the quadrature components of the vacuum electromagnetic field for a state A: \begin{equation} S=\frac{\langle E^{2}\rangle_{A}}{\langle E^{2}\rangle_{vac}} \end{equation} \ The model \cite{gar,collandwalls,polzit} predicts that \begin{multline} S(\theta,x,\omega)=1+4\beta x\bigg[ \frac{\cos^{2}\theta}{(1-x)^{2} +(\omega/\gamma)^{2}} -\\ \frac{\sin^{2}\theta}{(1+x)^{2} +(\omega/\gamma)^{2}} \bigg] \end{multline} where $x =P/P_{th}$ is the ratio of the laser power to the power at threshold $(0<x<1)$, $\beta$ is the optical efficiency, $\theta$ is the phase difference between the local oscillator field (LO) and the vacuum field, $\omega$ is the sideband angular frequency of measurement by a spectrum analyzer, $\gamma$ is the halfwidth or cavity decay rate [$\gamma=c(T+L)/l$ where $c=$ speed of light, $T=$transmissivity of coupling mirror, $L=$round trip loss, $l=$round trip length]. The model has been parameterized so the squeezing is a maximum at $\omega=0.$ \ Generally, the squeezing is given in terms of dB: \begin{equation} R=10Log_{10}S(\theta,x,\omega) \end{equation} To clarify the physical basis of the model and derive equations relating it to the QI, we briefly review the OPA model and experimental results. \ In recent experiments, values for the full width $2\gamma/2\pi$ range from $9$ MHz to $84$ MHz. \ Measurement frequencies $\omega$ are typically about $1$ MHz to, at most, $8$ MHz, $0.9<\beta<0.99,$ and laser wavelengths vary from about $795$ nm to $1064$ nm. \ In the measurement range, $(\omega /\gamma)$ varies from about 0 to, at most, 1. Figure \ref{1} shows a recent experimental arrangement \cite{vah}.
A CW laser with frequency $\omega_{LO}$ encounters a polarizing beam splitter PBS, one beam going to a second harmonic generator SHG, while the other beam, which serves as the local oscillator LO, goes to a 50-50 beam splitter in the balanced homodyne detector BHD. The beam leaving the SHG, with frequency $2\omega_{LO}$, goes to the optical parametric amplifier OPA. The OPA is operated below threshold and is composed of a cavity with a nonlinear crystal that is fully reflective at one end and a partially reflective mirror at the other end. The OPA non-linear crystal is driven by the output of a frequency-doubled laser SHG. The crystal has a small probability of producing two photons of the same frequency $\omega_{LO}$ (half the driving frequency) by degenerate parametric down conversion. \ Detection is by balanced homodyne detection in which the difference in photodetector current PD1-PD2 is measured for components of the squeezed vacuum SQZ and the laser LO that have interfered at a 50-50 beam splitter. The difference current is analyzed by a spectrum analyzer, typically with a measurement bandwidth of about 100 kHz to 500 kHz. \begin{figure} \caption{Schematic of experimental setup. Squeezed vacuum states of light SQZ at a wavelength of 1064 nm were generated in a double resonant, type I optical parametric amplifier (OPA) operated below threshold. \ SHG: second-harmonic generator; PBS: polarizing beam splitter; DBS: dichroic beam splitter; LO: local oscillator; PD: photodiode; MC1064: three mirror ring cavity for spatiotemporal mode cleaning; EOM: electro-optical modulator; FI: Faraday isolator. The phase shifter for the relative phase $\theta$ between SQZ and LO was a piezoelectric actuated mirror \cite{vah}.} \label{1} \end{figure} Data on a squeezed vacuum taken from the apparatus illustrated are shown in Figure \ref{sqx} \cite{vah}. Fits based on the maximum and minimum values of $S$ from Eq. 13 are shown in dashed lines for three power settings. \begin{figure} \caption{Squeezing in dB $=10Log_{10}S$ for three power settings, with fits (dashed lines) based on the maximum and minimum values of $S$ from Eq. 13.} \label{sqx} \end{figure} \begin{figure} \caption{Squeezing dB as a function of phase difference $\theta$ measured here by the time to move a mirror. \ Curve a: noise level with all inputs blanked; Curve b: phase is locked to the squeezed quadrature ($\theta=\pi/2$); Curve c: phase is locked to the antisqueezed quadrature ($\theta=0$); Curve d: the phase is scanned. \ The fraction of the period that the squeezing is below zero equals $F_{T}$.} \label{3} \end{figure} In Figure \ref{3}, the vacuum squeezing is presented as a function of the phase difference between the LO and the squeezed vacuum SQZ \cite{vah}. \ In this particular experiment the mirror was vibrated periodically and the abscissa is given in time rather than angle, but these methods are equivalent. \ The first minimum would correspond to the squeezed quadrature $\theta=\pi/2$ radians, the second to $\pi/2+\pi$ radians \cite{suzuki}. The fraction of the period for which the squeezing is negative equals $F_{T}$, which is an indicator of the squeezing. From measuring the graph (using curves d and a), one finds that $F_{T}$ is about 0.14. If $S(\theta,x,\omega)<1$, then the variance or noise of this quadrature component is less than that of the free vacuum and is squeezed. This implies that the other quadrature component $S(\theta+\pi/2,x,\omega)>1$ is antisqueezed.
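As a numerical illustration of Eq. 13, with parameter values chosen by us purely for illustration ($\beta=0.95$, $x=0.5$, $\omega=0$), \begin{equation*} S(0,x,0)=1+\frac{4(0.95)(0.5)}{(0.5)^{2}}=8.6\ (\approx+9.3\ \mathrm{dB}),\qquad S(\pi/2,x,0)=1-\frac{4(0.95)(0.5)}{(1.5)^{2}}\approx 0.16\ (\approx-8.1\ \mathrm{dB}), \end{equation*} so the quadrature at $\theta=\pi/2$ is squeezed while the conjugate quadrature at $\theta=0$ is antisqueezed; the closed-form expressions for these two extremes are given next.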
The minimum value of $S$ for squeezing occurs for $\theta=\pi/2$ and equals \begin{equation} S_{-}(x,\omega)=1-\frac{4\beta x}{(1+x)^{2}+(\omega/\gamma)^{2}} \end{equation} and the maximum antisqueezing occurs for $\theta=0$ or $\pi$ and equals \begin{equation} S_{+}(x,\omega)=1+\frac{4\beta x}{(1-x)^{2}+(\omega/\gamma)^{2}} \end{equation} The maximum possible squeezing $S_{-}(x,0)$ is shown as a function of $x$ in Figure \ref{4}. \begin{figure} \caption{The maximum squeezing $R_{-}$ as a function of $x$.} \label{4} \end{figure} The frequency spectrum of squeezing $1-S_{-}(x,f=\omega/\gamma)$ is Lorentzian with a halfwidth of $\gamma.$ \ The product of the maximum and minimum variances is \begin{multline} S_{-}(x,\omega)\ast S_{+}(x,\omega)=1+\\ \frac{16\beta(1-\beta)x^{2}}{\left[ (1+x)^{2}+(\omega/\gamma)^{2}\right] \left[ (1-x)^{2}+(\omega/\gamma )^{2}\right] } \end{multline} For an ideal optical system with no losses $\beta=1$ and the product is 1, as it must be according to the Heisenberg Uncertainty Principle; for $\beta<1$ the product exceeds unity. For comparison to the quantum inequality, we need to know the angular interval $\Delta\theta$ over which the light is squeezed. The light is squeezed if the term in brackets in Eq. 13 is negative, which implies \begin{equation} \frac{S_{+}(x,\omega)-1}{1-S_{-}(x,\omega)}<\tan^{2}\theta \end{equation} It follows that $F_{T}=\Delta\theta/\pi$, which is the fraction of the period during which the light is squeezed, is given by \begin{equation} F_{T}(x,\omega)=1-\frac{2}{\pi}\tan^{-1}\sqrt{\frac{S_{+}(x,\omega)-1} {1-S_{-}(x,\omega)}} \label{ftgen} \end{equation} For the special case of an ideal OPA, we can substitute $S_{+}=1/S_{-}$ to get \begin{equation} F_{T}(x,\omega)=1-\frac{2}{\pi}\tan^{-1}\sqrt{\frac{1}{S_{-}(x,\omega)}} \label{ftideal} \end{equation} which can be solved for $S_{-}$ to obtain \begin{equation} S_{-}(x,\omega)=\tan^{2}[F_{T}(x,\omega)\frac{\pi}{2}] \end{equation} which is valid for $0<F_{T}<0.5$. A plot of $R = 10Log_{10}S_{-}(x,\omega)$ as a function of $F_{T}(x,\omega)$ for an ideal OPA is shown in Figure \ref{db}. We would not expect experimental points to display squeezing greater than the amount allowed for the ideal OPA. Since most OPA measurements were not ideal, we used the general formula Eq. \ref{ftgen} for $F_{T}(x,\omega)$ to reduce the data. \ The maximum fraction of time in a period during which squeezing can occur is $F_{T}=1/2$, and this only occurs when $x$ approaches zero, so the amount of squeezing is slight. \begin{figure} \caption{$10Log_{10}S_{-}$ as a function of $F_{T}$ for an ideal OPA.} \label{db} \end{figure} \section{Analysis of Data from OPA} We analyzed data from 12 experiments conducted over the last thirty years \cite{vah}\cite{tak}-\cite{hir}. \ We obtained values of $F_{T}$ and squeezing/antisqueezing from plots or from the text, and estimated errors as much as possible. The most recent data are shown in Figure \ref{sqx} \cite{vah}. \ They fit their squeezing data $(S_{-}$ and $S_{+})$ to the model, actually including a small correction for the phase uncertainty, with excellent agreement. \ To get $F_{T}$ from their data, we used the equation from the OPA model in terms of the arctangent in Eq. \ref{ftgen}. \ On the other hand, for the data in Figure \ref{3} we could do calculations of $F_{T}$ graphically from $S_{-}$ and $S_{+}$ and from $\Delta\theta$, using their plots. \ Thus, we can compare the two methods. \ (Some papers plot the squeezing vs. time shown in Figure \ref{sqx} as squeezing versus phase difference.
\ We treat both types of plots in the same manner with an assumed equivalence between time and phase change that corresponds to the rate at which a mirror is moved in degrees/second.) \ In about half the papers, we could compare the two methods and found they agreed to within about $ \pm 8\%$ rms. \ When we could use both methods to compute $F_{T}$, we used the average in our plots. \ For Vahlbruch et al. we took points at three power levels ($x=0.8,\ 0.3,\ 0.1$) \cite{vah}. For all other publications, we had only one power level. \section{Interpretation of Squeezing and Observation Time} Comparing the results of the QI and the OPA data requires the assumption that the squeezing in the OPA analysis is equivalent to the squeezing in the QI analysis as discussed by Marecki \cite{marecki1}. \ In the OPA case, the squeezing depends on the phase difference $\theta$ between the local oscillator LO and the squeezed light SQZ, while in the QI analysis the squeezing depends on the phase change $\omega_{o}t_{o}$ occurring during the observation time. In the equation for $R$, which expresses the QI, $t_{o}$ is the width of the Gaussian time sampling function and $\omega_{o}$ is the center frequency in radians/sec of the frequency sampling function. From Marecki's derivation, one would assume that this corresponds to the center frequency of the laser probe. However, once we are making measurements using BHD, where the detection is based on the interference between the LO and the squeezed light, the spectrum analyzer's output at a frequency $\omega$ is a quadrature noise measurement of the optical field at frequency $\omega_{LO} + \omega$. Consequently, the appropriate expression for the BHD phase change in radians during a measurement lasting a time $t_{o}$ is given by the product $\omega t_{o}$, where $\omega$ is the BHD measurement frequency. If we observe for a time interval $M$, which equals the period of the squeezing $S(\theta,x,\omega)$, then the phase change is $M\omega=\pi$. \ Therefore $\omega t=\pi(t/M)$, where $t$ is the observation time. Defining the fractional observation time $F_{T}=t/M$, we conclude that $\omega t=\pi F_{T}$. \ Thus, for a Gaussian time sampling function, we have \begin{equation} R(F_{T})=10Log_{10}[Erf(\sqrt{2}\pi F_{T})] \label{ftg} \end{equation} On the other hand, Marecki \cite{marecki1} just identified $\omega_{o}t_{o}=\tau$ as the fraction of the period $F_{T}$ in which squeezing occurred, omitted the factor of $\pi$, and also had an additional factor of 2, thus obtaining $R(F_{T})=10Log_{10}[Erf(2\sqrt{2}F_{T})].$ For the squared Lorentzian time function we obtained \begin{equation} R(F_{T})=10Log_{10}(1-e^{-2\pi F_{T}}) \label{loreq} \end{equation} whereas Marecki did not have the factor $\pi$ in the exponent. Note that in the derivation of the QI, $\omega t_{o}$ is the phase change during an observation of the variance of a quadrature component. \ Nothing is said in the derivation about whether the field is squeezed or not. \ The QI appears to place a bound on the variance for this phase change for any quadrature component, squeezed or not, during the observation time. \ Assuming one physically can observe the field only when it is squeezed, then we should obtain the value for $R(F_{T})$ as restricted by the QI.
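As a concrete illustration of Eqs. \ref{ftg} and \ref{loreq} (our arithmetic, using the value $F_{T}\approx 0.14$ read from Figure \ref{3}), \begin{equation*} 10Log_{10}[Erf(\sqrt{2}\,\pi\times 0.14)]\approx-2.1\ \mathrm{dB},\qquad 10Log_{10}(1-e^{-2\pi\times 0.14})\approx-2.3\ \mathrm{dB}, \end{equation*} so either window would limit the squeezing at this $F_{T}$ to roughly $-2$ dB, far less than the $-15$ dB reported in the same experiment \cite{vah}; the systematic comparison over all the data sets is given in the next section.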
\ \section{Comparison of QI Predictions and OPA Data} If we assume that we are observing the variance during half of the period, then $F_{T}=0.5$, and the QI gives a value of $R(F_{T})$ whose absolute value could not be exceeded with the maximum possible squeezing during the half period. \ Similarly, if we are observing for the entire period, then $F_{T}=1.$ \ By our understanding, the longer we observe, the more likely we will have regions of variance that are above the vacuum level and the bigger $R$ will be. \ For the shortest times, we can have the most squeezing. In Figure \ref{11}, for a Gaussian time function, we have plotted our result for $R(F_{T})$ (Eq. \ref{ftg}, top dotted curve), and Marecki's result $R(F_{T}) = 10Log_{10}[Erf(2\sqrt{2}F_{T})]$ (middle dotted curve), and $R(F_{T})$ for an ideal OPA (Eq. \ref{ftideal}, bottom solid curve). The QI is very restrictive; the degree of squeezing obtained in the experiments is greater than that allowed by either form of the QI for all but one experimental point. All data are consistent with the ideal OPA model. \begin{figure} \caption{Squeezing $R$ in dB versus $F_{T}$ for the Gaussian time sampling function.} \label{11} \end{figure} \ \begin{figure} \caption{$R$ in dB versus $F_{T}$ for the squared Lorentzian time sampling function.} \label{loz} \end{figure} The results for the squared Lorentzian time function were better than for the Gaussian, as shown in Figure \ref{loz}. \ Almost all points violated Eq. \ref{loreq}, but only about half the points violate Marecki's version of the QI with no $\pi$ (middle dashed curve). One phenomenological approach to understanding the disagreement between the data and the QI is to try reducing the argument in the equations to improve the agreement of the QI prediction with the data. As the arguments in the error function and the exponential decrease, the agreement does improve. Fitting the functional forms to the data gives the plots as shown in Figure \ref{yyy}. \ No points violate these best fits, but their significance is not clear. Certainly for $F_{T}$ above about 0.3, they do not appear to be sufficiently restrictive. They are not as restrictive as the ideal OPA curve. \begin{figure} \caption{$R$ in dB versus $F_{T}$ for the best-fit functional forms.} \label{yyy} \end{figure} We evaluated the variance for a symmetric trapezoidal window with a center region $T_{S}$ long, and sloping sides that are each $nT_{S}$ long, normalized to 1. The results (dashed curves) are displayed in Figure \ref{xx} for a range of values of $n$ from $0.001$ (most negative black curve, and $f(t)$ is almost a square window) to $5.0$ (dashed curve nearest the origin), which corresponds to a nearly triangular window. \ The solid curves are for the same $n$ values, but a factor of $\pi$ has been omitted in the argument, in agreement with Marecki. The ideal OPA bound is the solid curve crossing all other curves. As the window becomes more triangular, the curves are less restrictive on squeezing and do not agree with the data. \ Only the curve for the nearly square window ($n=0.001$) without the $\pi$ is not inconsistent with all the data. Yet this curve clearly fails to be sufficiently restrictive for values of $F_{T}$ greater than about 0.3 and predicts squeezing exceeding that allowed by an ideal OPA. \ Mathematically, this nearly rectangular window is on the edge of instability, especially for low values of the power, and as $n$ decreases further, this window becomes a square window for which the limit is $R=10Log_{10}0$.
\begin{figure}
\caption{$R$ in dB versus $F_{T}$ for the trapezoidal time windows.}
\label{xx}
\end{figure}
Clearly the form of the time window is very important, yet for all forms examined, the resultant QIs do not appear to have the right functional form or the numerical values one might expect for a QI applicable to the experimental data.

\section{Discussion and Conclusion}

The mathematics of the derivation of the Quantum Inequality appears sound, and the model for the OPA data has been experimentally validated and is consistent with all the data we examined. Yet the QI and the OPA model do not appear to be consistent with each other. The QI was violated by all the data points for the Gaussian time window and about half the points for the Lorentzian squared time window. Although other windows may show improved results, these inconsistencies suggest a deeper problem. The model for the ideal OPA gave the best results: no data exceeded its maximum squeezing, yet the most recently published data came close. It also predicted that the maximum duration of squeezed light does not exceed 1/2 the cycle, which agreed with the data, yet was not predicted by the QI. Nevertheless, the ideal OPA is a model that is an approximation with limitations. One of the issues mentioned was the potential conflict between specifying a time function $f(t)$ and an independent frequency function $\mu(\omega)$. To explore this effect, we did a calculation assuming the Gaussian time function and a frequency function $\mu(\omega)$ given by the Fourier transform of the time function. Explicit calculation showed that the resulting expression for the QI was similar to that obtained without explicitly giving the precise form of the frequency window. Although this conflict is real, it does not appear to be responsible for the systematic disagreement seen between the QI and the OPA data. Another possible issue might be the frequency dependence of the measured squeezing for the OPA data as predicted by Eq. 13. The output beam of an OPA has a Lorentzian squeezing spectrum with center frequency $\omega$ and halfwidth $\gamma$ that depends on the properties of the resonant cavity. Experimental data are typically taken with a phase $\theta$ and frequency which give the maximum squeezing. Since the QI correlates the fraction of time the signal is squeezed with the dB of squeezing, we may need to account for the change in $F_{T}$ for frequencies away from the sideband used for the measurement. We can compute an ``effective'' duration of squeezing $F_{TE}(x)$ which is weighted by integrating the frequency over the variance of the squeezed vacuum. The behavior of $F_{TE}(x)$ will depend on the range over which we integrate and the halfwidth. This integration will increase the effective size of $F_{TE}(x)$, essentially moving all data points to the right, making the disagreement between data and the QI worse, so this is not the explanation. Another critical issue concerns the nature of the assumed measurement in the derivation of the Quantum Inequality. The assumption is that a measurement of the energy density will be made that lasts a fraction of a cycle of oscillation of the electromagnetic field of the laser being employed. On the other hand, making an accurate measurement using BHD requires observation for a number of cycles. It does not appear possible to make a good measurement of the squeezing if the observation lasts only a fraction of a cycle. The measurement of the energy density in the OPA method is actually done over many cycles.
For a fixed phase difference between the LO and the vacuum signal SQZ, the balanced homodyne detection automatically selects the corresponding energy output which is measured continuously over as many cycles of the laser light as desired, ensuring significant accuracy. On the other hand, no corresponding mechanism appears to be available for the measurement assumed to occur in the derivation of the QI. Thus, there may be an inconsistency between the measurement assumed in the derivation of the QI and the measurement method of the OPA. Marecki addressed this issue in an analysis of the BHD method, stressing that in the theory of the QI all operators are restricted to the frequency $\omega_{LO}$ of the LO and time was $2\pi /\omega_{LO}$ periodic and therefore $\omega_{LO}t<2\pi$ for all times \cite{marecki2}. It is not clear if there is an inconsistency and if it is responsible for the disagreement between the QI and the experimental data. The choice of window function is probably the most significant factor when applying the QI to real data. \ Mathematically, the choice of a window function is simple. \ However, when comparing theory to data, it is not clear what window function is actually appropriate for the experiment being done even though the choice dominates the restrictions due to the QI. In addition, Heisenberg and Bohr maintained that measured fields were averages over space-time volumes, whereas Marecki (Eq. A.6) and Ford (Eq. 1) only have a time average. \ This work represents the only comparison to date of experimental data to the theory of a QI. \ Hopefully, the conundrum of the disagreement between the QI and the OPA measurements will be resolved more fully in the future with interdisciplinary collaborations and more experiments, and more detailed theoretical derivations. \ Our results highlight the subtleties that can be implicit in theoretical derivations of QI, particularly in the proposed measurement process. Ideally, an unambiguous experimental procedure could be associated with the theoretical derivations. These issues may also affect the applicability of the QI that have been proposed for other situations. \appendix \section{Derivation of Marecki's Quantum Inequality} \setcounter{equation}{0} \renewcommand{A.\arabic{equation}}{A.\arabic{equation}} Marecki [9] derives a Quantum Inequality for squeezed light and squeezed vacuum following the general approach of Fewster and Teo \cite{fenster} and Pfenning \cite{pfen}. We briefly describe his derivation to clarify the comparisons to the OPA data. Marecki defines the operator variance of the normally ordered electric field$\ \Delta E^{2}(x,t)$: \begin{equation} \Delta E^{2}(x,t)=E^{2}(x,t)-<E^{2}(x,t)>_{vac} \end{equation} and considers a time sampling of the field squared \begin{equation} \Delta=\int_{-\infty}^{+\infty}dtf(t)\Delta E^{2}(x,t) \end{equation} where \begin{equation} 1=\int_{-\infty}^{+\infty}dtf(t) \end{equation} He also mentions the possibility of including a frequency sampling function $\mu_{p}=$ $\mu(\omega_{p}-\omega_{0})$, peaked at $\omega_{0}$, that reflects the frequency response of the apparatus measuring the variance. \ Since it is necessary to use a frequency sampling function to get finite results for $\Delta,$ we will include it in our derivations. \ However, we note that there is a potential consistency issue using an independently selected frequency sampling function since the time sampling function $f(t)$ implies a frequency selection determined by its Fourier transform. 
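As a concrete illustration of this last point, suppose the Gaussian time window is taken in the normalized form $f(t)=e^{-t^{2}/t_{o}^{2}}/(\sqrt{\pi}\,t_{o})$ (a normalization chosen here purely for definiteness). With the Fourier-transform convention used below one finds
\begin{equation*}
f_{FT}(\omega)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}dt\,f(t)\,e^{-i\omega t}
=\frac{1}{2\pi}\,e^{-\omega^{2}t_{o}^{2}/4},
\end{equation*}
so the time window by itself already selects frequencies within roughly $1/t_{o}$, independently of any separately chosen $\mu(\omega)$.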
\ Using the Coulomb gauge, the vector potential is \begin{multline} A_{i}(\mathbf{x},t)=\frac{1}{\sqrt{(2\pi)^{3}}}\int\frac{d^{3}k}{\sqrt {2\omega_{k}}}\times \\ \sum_{\alpha=1,2}\mathbf{e}_{i}^{\alpha}(\mathbf{k})\{a_{\alpha }^{\dag}(\mathbf{k})e^{ikx}+a_{\alpha}(\mathbf{k})e^{-ikx}\} \end{multline} where $\omega_{k}=|k|$ and $\alpha$ denotes the two polarization states which are normalized and orthogonal to $\mathbf{k}$. In the exponentials, $kx=-\mathbf{k}\cdot\mathbf{x}+\omega t$ represents the scalar product. The electric field operator is \begin{equation} E_{i}=-\frac{\partial A_{i}}{\partial t} \end{equation} The expectation value of the time sampled free vacuum field squared is \begin{equation} <E^{2}>_{vac}=\int_{-\infty}^{+\infty}dtf(t)<E^{2}(\mathbf{x},t)>_{vac} \end{equation} \begin{multline} =\frac{1}{2(2\pi)^{2}}\int\mu_{k}d^{3}k\mu_{p}d^{3}p\sqrt{\omega_{k} \omega_{p}}2\pi f_{FT}(\omega_{p}-\omega_{k})\times \\ \sum_{\alpha,\beta =1,2}\mathbf{e}_{i}^{\alpha}(\mathbf{k})\mathbf{e}_{i}^{\beta}(\mathbf{p} )\delta_{\alpha\beta}\delta(\mathbf{p}-\mathbf{k}) \end{multline} where we have used the commutator $[a_{\alpha}(\mathbf{p}),a_{\beta}^{\dag }(\mathbf{k})]=\delta_{\alpha\beta}\delta(\mathbf{p}-\mathbf{k})$ and included the frequency function. The Fourier transform of the time sampling function $f(t)$ is defined as \begin{equation} f_{FT}(\omega)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}dtf(t)e^{-i\omega t} \end{equation} Integrating Eq. A.7 over k, using the unity normalization of the polarization vectors $ {\displaystyle\sum\limits_{i}} \mathbf{e}_{i}^{\alpha}(\mathbf{k})\mathbf{e}_{i}^{\alpha}(\mathbf{p})=1$ , and that $f_{FT}(0)=1/2\pi$ because of the $f(t)$ nomalization, gives \begin{equation} <E^{2}>_{vac}=\frac{1}{(2\pi)^{3}}\int(\mu_{p}^{2}\omega_{p})d^{3}p \end{equation} Substituting this result into the expression for the variance $\Delta$ gives, after integration over time, \begin{multline} \Delta=\frac{1}{2(2\pi)^{2}}\int d^{3}kd^{3}p\mu_{k}\mu_{p}\sqrt{\omega _{k}\omega_{p}}\times\\ \sum_{\alpha,\beta=1,2}\mathbf{e}_{i}^{\alpha} (\mathbf{k})\mathbf{e}_{i}^{\beta}(\mathbf{p})\{a_{\alpha}^{\dag} (\mathbf{k})a_{\beta}(\mathbf{p})e^{i(-\mathbf{k}+\mathbf{p})\mathbf{x}} f_{FT}(\omega_{p}-\omega_{k})\\ -a_{\alpha}(\mathbf{k})a_{\beta}(\mathbf{p})e^{i(\mathbf{k}+\mathbf{p} )\mathbf{x}}f_{FT}(\omega_{p}+\omega_{k})+HC\} \end{multline} \linebreak where HC is the Hermitian conjugate. \ To derive a quantum inequality, Marecki defines a vector operator $B_{i}(\omega)$ and computes the integral over frequency of the norm of $\mathbf{B}$ which has to be positive \begin{equation} \int_{0}^{\infty}d\omega B_{i}^{\dag}(\omega)B_{i}(\omega)>0 \end{equation} We choose \begin{multline} B_{i}(\omega)= \frac{1}{\sqrt{2\pi^{2}}}\int d^{3}p\sqrt{\omega_{p}} \times \\ \sum_{\alpha,\beta=1,2}\mathbf{e}_{i}^{\alpha}(\mathbf{p})\{a_{\alpha }(p)(f^{1/2})_{FT}^{\ast}(\omega-\omega_{p})e^{i\mathbf{px}}- \\ a_{\alpha}^{\dag }(p)(f^{1/2})_{FT}^{\ast}(\omega+\omega_{p})e^{-i\mathbf{px}}\} \end{multline} and substitute this into Eq. A.11, and use the result in Eq. A.10. After taking the expectation value with respect to state A, we obtain Eq. 4 in Section 2. Note that in Eq. A.12, $(f^{1/2})_{FT}^{\ast}(\omega - \omega_{p})$ means the complex conjugate of the Fourier transform of the square root of $f(t)$. \end{document}
\begin{document} \title{(Di)graph decompositions and magic type labelings: a dual relation} \author{S. C. L\'opez} \address{ Departament de Matem\`{a}tiques\\ Universitat Polit\`{e}cnica de Catalunya.\\ C/Esteve Terrades 5\\ 08860 Castelldefels, Spain} \email{[email protected]} \author{F. A. Muntaner-Batle} \address{Graph Theory and Applications Research Group \\ School of Electrical Engineering and Computer Science\\ Faculty of Engineering and Built Environment\\ The University of Newcastle\\ NSW 2308 Australia} \email{[email protected]} \author{M. Prabu} \address{British University Vietnam\\ Hanoi, Vietnam} \email{[email protected]} \maketitle \begin{abstract} A graph $G$ is called edge-magic if there is a bijective function $f$ from the set of vertices and edges to the set $\{1,2,\ldots,|V(G)|+|E(G)|\}$ such that the sum $f(x)+f(xy)+f(y)$ for any $xy$ in $E(G)$ is constant. Such a function is called an edge-magic labeling of $G$ and the constant is called the valence of $f$. An edge-magic labeling with the extra property that $f(V(G))= \{1,2,\ldots,|V(G)|\}$ is called super edge-magic. In this paper, we establish a relationship between the valences of (super) edge-magic labelings of certain types of bipartite graphs and the existence of a particular type of decompositions of such graphs. \end{abstract} \begin{quotation} \noindent{\bf Key Words}: {Edge-magic, super edge-magic, magic sum, $\otimes_h$-product, decompositions} \noindent{\bf 2010 Mathematics Subject Classification}: Primary 05C78, Se\-con\-dary 05C76 \end{quotation} \section{Introduction} For the terminology and notation not introduced in this paper we refer the reader to either one of the following sources \cite{BaMi,CharLes,G,SlMb,Wa}. By a $(p,q)$-graph we mean a graph of order $p$ and size $q$. Let $m\leq n$ be integers; to denote the set $\{m,m+1,\ldots,n\}$ we use $[m,n]$. Kotzig and Rosa introduced in \cite{KotRos70} the concepts of edge-magic graphs and edge-magic labelings as follows: Let $G$ be a $(p,q)$-graph. Then $G$ is called {\it edge-magic} if there is a bijective function $f:V(G)\cup E(G)\rightarrow [1,p+q]$ such that the sum $f(x)+f(xy)+f(y)=k$ for any $xy\in E(G)$. Such a function is called an {\it edge-magic labeling} of $G$ and $k$ is called the {\it valence} \cite{KotRos70} or the {\it magic sum} \cite{Wa} of the labeling $f$. We write $\hbox{val}(f)$ to denote the valence of $f$. Inspired by the notion of edge-magic labelings, Enomoto et al. introduced in \cite{E} the concepts of super edge-magic graphs and super edge-magic labelings as follows: Let $f:V(G)\cup E(G) \rightarrow [1,p+q]$ be an edge-magic labeling of a $(p,q)$-graph $G$ with the extra property that $f(V(G))=[1,p]$. Then $G$ is called {\it super edge-magic} and $f$ is a {\it super edge-magic labeling} of $G$. Notice that although the definitions of (super) edge-magic graphs and labelings were originally provided for simple graphs (that is, graphs with neither loops nor multiple edges), throughout this paper we understand these definitions for any graph. Therefore, unless otherwise specified, the graphs considered in this paper are not necessarily simple. Figueroa-Centeno et al. provided in \cite{F2} the following useful characterization of super edge-magic simple graphs, which works in exactly the same way for not necessarily simple graphs. \begin{lemma}\label{super_consecutive} \cite{F2} Let $G$ be a $(p,q)$-graph.
Then $G$ is super edge-magic if and only if there is a bijective function $g:V(G)\longrightarrow [1,p]$ such that the set $S=\{g(u)+g(v):uv\in E(G)\}$ is a set of $q$ consecutive integers. In this case, $g$ can be extended to a super edge-magic labeling $f$ with valence $p+q+\min S$. \end{lemma} Unless otherwise specified, whenever we refer to a function as a super edge-magic labeling we will assume that it is a function $f$ as in Lemma \ref{super_consecutive}. Before moving on, it is worthwhile mentioning that Acharya and Hegde had already defined in 1991 \cite{AH} the concept of strongly indexable graphs. This concept turns out to be equivalent to the concept of super edge-magic graphs. However in this paper we will use the names super edge-magic graphs and super edge-magic labelings. In \cite{F1} Figueroa et al., introduced the concept of super edge-magic digraph as follows: a digraph $D=(V,E)$ is super edge-magic if its underlying graph is super edge-magic. In general, we say that a digraph D admits a labeling $f$ if its underlying graph admits the labeling $f$. It was also in \cite{F1} that the following product was introduced: let $D$ be a digraph and let $\Gamma$ be a family of digraphs with the same set $V$ of vertices. Assume that $h: E(D) \to \Gamma$ is any function that assigns elements of $\Gamma$ to the arcs of $D$. Then the digraph $D \otimes _{h} \Gamma $ is defined by (i) $V(D \otimes _{h} \Gamma)= V(D) \times V$ and (ii) $((a,i),(b,j)) \in E(D \otimes _{h} \Gamma) \Leftrightarrow (a,b) \in E(D)$ and $(i,j) \in E(h(a,b))$. Note that when $h$ is constant, $D \otimes _{h} \Gamma$ is the Kronecker product. Many relations among labelings have been established using the $\otimes_h$-product and some particular families of graphs, namely $\mathcal{S}_p$ and $\mathcal{S}_p^k$ (see for instance, \cite{ILMR,LopMunRiu1,LopMunRiu6,LopMunRiu7}). The family $\mathcal{S}_p$ contains all super edge-magic $1$-regular labeled digraphs of order $p$ where each vertex takes the name of the label that has been assigned to it. A super edge-magic digraph $F$ is in $\mathcal{S}_p^k$ if $|V(F)|= |E(F)|=p$ and the minimum sum of the labels of adjacent vertices is equal to $k$ (see Lemma \ref{super_consecutive}). Notice that, since each $1$-regular digraph has minimum induced sum equal to $(p+3)/2$, $ \mathcal{S}_p \subset \mathcal{S}_p^{(p+3)/2}$. The following result was introduced in \cite{LopMunRiu6}, generalizing a previous result found in \cite{F1} : \begin{theorem} \label{spk} \cite{LopMunRiu6} Let $D$ be a (super) edge-magic digraph and let $h: E(D) \to \mathcal{S}_p^k$ be any function. Then und($D\otimes _{h} \mathcal{S}_p^k$) is (super) edge-magic. \end{theorem} \begin{remark}\label{remarkspk} The key point in the proof of Theorem \ref{spk} is to rename the vertices of $D$ and each element of $\mathcal{S}_p^k$ after the labels of their corresponding (super) edge-magic labeling $f$ and their super edge-magic labelings respectively. Then the labels of the product are defined as follows: (i) the vertex $(a,i) \in V(D\otimes _{h} \mathcal{S}_p^k)$ receives the label: $p(a-1)+i$ and (ii) the arc $((a,i),(b,j)) \in E(D\otimes _{h} \mathcal{S}_p^k)$ receives the label: $p(e-1)+(k+p)-(i+j)$, where $e$ is the label of $(a,b)$ in D. Thus, for each arc $((a,i),(b,j)) \in E(D\otimes _{h} \mathcal{S}_p^k)$, coming from an arc $ e = (a,b) \in E(D)$ and an arc $ (i,j) \in E(h(a,b))$, the sum of labels is constant and equal to $p(a+b+e-3)+(k+p)$. That is, $p( \hbox{val}(f)-3)+k+p$. 
Thus, the next result is obtained. \end{remark} \begin{lemma}\label{valenceinducedproduct} \cite{LopMunRiu6} Let $\widehat{f}$ be the (super) edge-magic labeling of the graph $D \otimes_h\mathcal{S}_p^k$ induced by a (super) edge-magic labeling $f$ of $D$ (see Remark \ref{remarkspk}). Then the valence of $\widehat{f}$ is given by the formula \begin{eqnarray} \hbox{val}(\widehat{f}) &=& p(\hbox{val}(f)-3) + k + p. \end{eqnarray} \end{lemma} All the results in the literature involving the $\otimes_h$-product had super edge-magic labeled digraphs in the second factor of the product. However, in \cite{LMP2} it was shown that other labeled (di)graphs can be used in order to enlarge the results obtained, showing that the $\otimes_h$-product is a very powerful tool. Next, we introduce the family $\mathcal{T}^q_\sigma$ of edge-magic labeled digraphs. An edge-magic labeled digraph $F$ is in $\mathcal{T}^q_\sigma$ if $V(F)=V$, $|E(F)|=q$ and the magic sum of the edge-magic labeling is equal to $\sigma$. \begin{theorem}\cite{LMP2} \label{producte_super_k} Let $D\in \mathcal{S}_n^k$ and let $h$ be any function $h:E(D)\rightarrow \mathcal{T}^q_\sigma$. Then $D\otimes_h \mathcal{T}^q_\sigma$ admits an edge-magic labeling with valence $(p+q)(k+n-3)+\sigma$, where $p=|V|, \ |E(F)|=q$ and $F \in \mathcal{T}^q_\sigma$. \end{theorem} \begin{remark} Let $p=|V|$. The keypoint in the proof of Theorem \ref{producte_super_k} is to identify the vertices of $D$ and each element of $\mathcal{T}^q_\sigma$ after the labels of their corresponding super edge-magic labeling and edge-magic labeling, respectively. Then the labels of $D\otimes_h \mathcal{T}^q_\sigma$ are defined as follows: (i) if $(i,a)\in V(D\otimes_h \mathcal{T}^q_\sigma)$ we assign to the vertex the label: $(p+q)(i-1)+a$ and (ii) if $((i,a),(j,b))\in E (D\otimes_h \mathcal{T}^q_\sigma)$ we assign to the arc the label: $(p+q)(k+n-(i+j)-1)+(\sigma-(a+b)).$ Notice that, since $D\in \mathcal{S}_n^k$ is labeled with a super edge-magic labeling with minimum sum of the adjacent vertices equal to $k$, we have $\{(k+n)-(i+j): \ (i,j)\in E(D )\}=[1, n].$ Moreover, since each element $F\in \mathcal{T}^q_\sigma$, it follows that $\{(\sigma-(a+b): \ (a,b)\in E(F )\}=[1, p+q]\setminus V .$ Thus, the set of labels in $D\otimes_h \mathcal{T}^q_\sigma$ covers all elements in $[1, n(p+q)]$. Moreover, for each arc $((i,a)(j,b))\in E (D\otimes_h \mathcal{T}^q_\sigma)$ the sum of the labels is constant and is equal to: $(p+q)(k+n-3)+\sigma.$ \end{remark} In \cite{LopMunRiu5} L\'opez et al. introduced the following definitions. Let $G=(V,E)$ be a $(p,q)$-graph. Then the set $S_{G}$ is defined as $S_{G}= \{ 1/q( \Sigma_{u \in V} deg(u)g(u)+ \Sigma_{i=p+1}^{p+q} i ):$ the function $g:V \rightarrow [1,p]$ is bijective\}. If $\lceil\min S_G\rceil\le \lfloor\max S_G\rfloor$ then the {\it super edge-magic interval} of $G$, denoted by $I_G$, is defined to be the set $I_G=\left[\lceil\min S_G\rceil, \lfloor\max S_G\rfloor\right]$ and the {\it super edge-magic set} of $G$, denoted by $\sigma_G$, is the set formed by all integers $k\in I_G$ such that $k$ is the valence of some super edge-magic labeling of $G$. A graph $G$ is called {\it perfect super edge-magic} if $\sigma_G=I_G$. In order to conduct our study in this paper, the following lemma will be of great help. \begin{lemma}\cite{LMP2} \label{k1nsem} The graph formed by a star $K_{1,n}$ and a loop attached to its central vertex, denoted by $K_{1,n}^{l}$, is perfect super edge-magic for all positive integers $n$. 
Furthermore, $|I_{K_{1,n}^{l}}|=|\sigma_{K_{1,n}^{l}}|=n+1$. \end{lemma} In \cite{PEM_LMR} the same authors generalized the previous definitions to edge-magic graphs and labelings as follows: Let $G=(V,E)$ be a $(p,q)$-graph, and denote by $T_G$ the set $$\left\{\frac{\sum_{u\in V}\mbox{deg}(u)g(u)+\sum_{e\in E}g(e)}q:\ g:V\cup E \rightarrow [1,p+q] \ \mbox{ is a bijective function}\right\}.$$ If $\lceil\min T_G\rceil\le \lfloor\max T_G\rfloor$ then the {\it magic interval} of $G$, denoted by $J_G$, is defined to be the set $J_G=\left[\lceil\min T_G\rceil, \lfloor\max T_G\rfloor\right]$ and the {\it magic set} of $G$, denoted by $\tau_G$, is the set $\tau_G=\{n\in J_G:\ n \ \mbox{is the valence of some edge-magic labeling of}\ G\}.$ It is clear that $\tau_G\subseteq J_G$. A graph $G$ is called {\it perfect edge-magic } if $\tau_G=J_G$. In the next lemma, we provide a well known result that gives a lower bound and an upper bound for edge-magic valences. We add the proof as a matter of completeness. Recall that the complementary labeling of an edge-magic labeling $f$ is the labeling $\overline{f}(x)=p+q+1-f(x)$, for all $x \in V(G) \cup E(G)$, and that val$(\overline{f})=3(p+q+1)-\hbox{val}(f)$. \begin{lemma}\label{maxminvalence} Let $G$ be a $(p,q)$-graph with an edge-magic labeling $f$. Then $p+q+3 \leq \hbox{val}(f) \leq 2(p+q).$ \end{lemma} \begin{proof} Let $f:V(G) \cup E(G) \rightarrow [1,p+q]$ be an edge-magic labeling of $G$. The two lowest possible integers in $[1,p+q-1]$ that can be added to $p+q$ are $1$ and $2$. Thus, $\hbox{val}(f) \geq p+q+3.$ By using the complementary labeling, the maximum possible valence has the form $3(p+q+1)-\hbox{val}(g)$ where $\hbox{val}(g)$ is the minimum possible valence. Thus, $\hbox{val}(f) \leq 3(p+q+1)-\hbox{val}(g)\leq 2(p+q).$ \end{proof} The study of the (super) edge-magic properties of the graph $C_m\odot \overline{K}_n$ as a particular subfamily of $S_n^k$ has been of interest recently. See for instance \cite{LMP1,LopMunRiu5,PEM_LMR}. Due to this, many things are known on the (super) edge-magic properties of the graphs $C_{p^k} \otimes \overline{K}_n$ \cite{PEM_LMR} and $C_{pq} \otimes \overline{K}_n$ \cite{LMP1}, where $p$ and $q$ are coprime. However, many other things remain a mystery, and we believe that it is worth the while to work in this direction. In fact, a big hole in the literature, appears when considering graphs of the form $C_m\odot \overline{K}_n$ for $m$ even. In this paper, we will devote Section \ref{section_morevalences} to this type of graphs. This study leads us to consider other classes of graphs and to study the relation existing between the valences of edge-magic and super edge-magic labelings and the well known problem of graph decompositions. A decomposition of a simple graph $G$ is a collection $\{H_i: i \in [1,m] \}$ of subgraphs of $G$ such that $\cup_{i \in [1,m]}E(H_i)$ is a partition of the edge set of $G$. If the set $\{H_i: i \in [1,m] \}$ is a decomposition of $G$, then we denote it by $G \cong H_1 \oplus H_2 \oplus \dots \oplus H_m = \oplus_{i=1}^{m}H_i$. We want to bring this introduction to its end by saying that the interested reader can also find excellent sources of information about the topic of graph labeling in \cite{BaMi,G,MhMi,SlMb,Wa}. \section{More valences} \label{section_morevalences} As we have already mentioned in the introduction, not too much is known about the valences of (super) edge-magic labelings for the graph $C_m\odot \overline{K}_n$ when m is even. 
In fact, as far as we know, the only papers that deal with (super) edge-magic labelings of $C_m\odot \overline{K}_n$ for $m$ even are \cite{FIM02,LMP1}. Hence almost all such results involve only odd cycles. Next, we study the edge-magic valences of $C_m\odot \overline{K}_n$ when $m$ is even. Unless otherwise specified, $\overrightarrow{G}$ denotes any orientation of $G$. The next lemma is an generalization of Lemma 12 in \cite{LMP1}. \begin{lemma}\label{repeatedvalences} Let $g$ be a (super) edge-magic labeling of a graph $G$, and let $f_r$ be the super edge-magic labeling of $K_{1,n}^l$ that assigns label $r$ to the central vertex, $1 \leq r \leq n+1$. Then, \begin{itemize} \item[(i)] the induced (super) edge-magic labeling $\widehat{g}_r$ of $\overrightarrow{G} \otimes \overrightarrow{K}_{1,n}^l$ has valence $(n+1)(\hbox{val}(g)-2)+r+1$. \\ \item[(ii)] Let $g'$ be a different (super) edge-magic labeling of $G$ with $\hbox{val}(g) < \hbox{val}(g')$, then $\hbox{val}(\widehat{g}_{n+1}) < \hbox{val}(\widehat{g}_{1}')$, where $\widehat{g}_r'$ is the induced (super) edge-magic labeling of $\overrightarrow{G} \otimes \overrightarrow{K}_{1,n}^l$ when $K_{1,n}^l$ is labeled with $f_r$ and $G$ with $g'.$ \end{itemize} \end{lemma} \begin{proof} The labeling $f_r$ of $\overrightarrow{K}_{1,n}^l$ has minimum induced sum $r+1.$ Thus, $\overrightarrow{K}_{1,n}^l \in \mathcal{S}_{n+1}^{r+1}.$ By Lemma \ref{valenceinducedproduct}, $\hbox{val}(\widehat{g}_{r})=(n+1)[\hbox{val}(g)-3]+r+1+n+1$, that is, $\hbox{val}(\widehat{g}_{r})=(n+1)[\hbox{val}(g)-2]+r+1$. Let $g'$ be a different (super) edge-magic labeling of $G$ with $\hbox{val}(g) < \hbox{val}(g')$, then $\hbox{val}(\widehat{g}_{n+1})= (n+1)[\hbox{val}(g)-2]+n+2 \leq (n+1)[\hbox{val}(g')-1-2]+n+2$. That is, $\hbox{val}(\widehat{g}_{n+1}) \leq (n+1) [\hbox{val}(g')-2]+1 < \hbox{val}(\widehat{g}_{1}').$ Hence the result follows. \end{proof} \begin{theorem}\label{lowerboundvalences_1} Let $G$ be an edge-magic $(p,q)$-graph. Then $ |\tau_{\overrightarrow{G} \otimes \overrightarrow{K}_{1,n}^l}| \geq (n+1)|\tau_{\overrightarrow{G}}|+2$. \end{theorem} \begin{proof} Let $f_r$ be the super edge-magic labeling of $K_{1,n}^l$ that assigns the label $r$ to the central vertex, $1\leq r \leq n+1$. Let $g:V(G) \cup E(G) \rightarrow [1,p+q]$ be an edge-magic labeling of $G$. By Lemma \ref{repeatedvalences}, $\hbox{val}(\widehat{g}_r)=(n+1)[\hbox{val}(g)-2]+r+1$ and if $\hbox{val}(g) < \hbox{val}(g')$, then $\hbox{val}(\widehat{g}_{n+1}) < \hbox{val}(\widehat{g}_{1}')$ where $\widehat{g}_r$ is the induced edge-magic labeling of $\overrightarrow{G} \otimes \overrightarrow{K}_{1,n}^l$. Therefore, $ |\tau_{\overrightarrow{G} \otimes \overrightarrow{K}_{1,n}^l}| \geq (n+1)|\tau_{\overrightarrow{G}}|.$ Consider $\overrightarrow{K}_{1,n}^l \otimes \overrightarrow{G}.$ By Theorem \ref{producte_super_k}, $\hbox{val}(\tilde{g}_r)=(p+q)[n+r-1]+\hbox{val}(g), 1\leq r \leq n+1$ where $\tilde{g}_r$ is the induced labeling of $\overrightarrow{K}_{1,n}^l \otimes \overrightarrow{G}$ when $\overrightarrow{K}_{1,n}^l$ is labeled with $f_r$ and $\overrightarrow{G}$ with $g'.$ We claim that $\hbox{val}(\tilde{g}_1) < \hbox{val}(\hat{g}_1)$ and $\hbox{val}(\hat{g}_{n+1}) < \hbox{val}(\tilde{g}_{n+1})$. Assume to the contrary that $\hbox{val}(\tilde{g}_1) \geq \hbox{val}(\hat{g}_1)$, we get $\hbox{val}(g) \leq p+q+2$ which is a contradiction to Lemma \ref{maxminvalence}. 
Similarly, if $\hbox{val}(\hat{g}_{n+1}) \geq \hbox{val}(\tilde{g}_{n+1})$, we get $\hbox{val}(g) \geq 2(p+q)+1$ which again is a contradiction to Lemma \ref{maxminvalence}. Hence $|\tau_{\overrightarrow{G} \otimes \overrightarrow{K}_{1,n}^l}| \geq (n+1)|\tau_{\overrightarrow{G}}|+2.$ \end{proof} By adding an extra condition on the smallest and the biggest valence, we can improve the lower bound given in the previous result. \begin{theorem}\label{lowerboundvalences_2} Let $G$ be an edge-magic $(p,q)$-graph. If $\alpha$ and $\beta$ are the smallest and the biggest valences of $G$, respectively, and $\beta-\alpha<(\alpha-(p+q+2))n$ then $ |\tau_{\overrightarrow{G} \otimes \overrightarrow{K}_{1,n}^l}| \geq (n+3)|\tau_{\overrightarrow{G}}|$. \end{theorem} \begin{proof} The previous proof guarantees that, using Lemma \ref{repeatedvalences}, we get $ |\tau_{\overrightarrow{G} \otimes \overrightarrow{K}_{1,n}^l}| \geq (n+1)|\tau_{\overrightarrow{G}}|$. Next we will use Theorem \ref{producte_super_k} to complete the remaining valences. Consider now, the reverse order $\overrightarrow{K}_{1,n}^l \otimes \overrightarrow{G}.$ By Theorem \ref{producte_super_k}, $\hbox{val}(\widetilde{g}_r)=(p+q)[n+r-1]+\hbox{val}(g), 1\leq r \leq n+1$ where $\widetilde{g}_r$ is the induced labeling of $\overrightarrow{K}_{1,n}^l \otimes \overrightarrow{G}$ when $\overrightarrow{K}_{1,n}^l$ is labeled with $f_r$ and $\overrightarrow{G}$ with $g.$ Let $g$ be an edge-magic labeling of $G$ with valence $\alpha$ and $g'$ an edge-magic labeling with valence $\beta$. We claim that $\hbox{val}(\widetilde{g'}_1) < \hbox{val}(\widehat{g}_1)$ and $\hbox{val}(\widehat{g'}_{n+1}) < \hbox{val}(\widetilde{g}_{n+1})$. Assume to the contrary that $\hbox{val}(\widetilde{g'}_1) \ge \hbox{val}(\widehat{g}_1)$, then we get $\beta -\alpha\ge (\alpha-(p+q+2))n$ which is a contradiction to the statement. Similarly, if $\hbox{val}(\widehat{g'}_{n+1}) \ge \hbox{val}(\widetilde{g}_{n+1})$, we get $\beta-\alpha\ge (1+2(p+q)-\beta)n$. Notice that, since $\alpha $ and $\beta$ correspond to the valences of two complementary labelings of $G$, $\beta=3(p+q+1)-\alpha$ and this inequality is equivalent to $\beta -\alpha\ge (\alpha-(p+q+2))n$ which is again a contradiction. Since by construction of the induced labeling, if $\hbox{val}(g)< \hbox{val} (g')$, then $\hbox{val}(\tilde g_r)< \hbox{val} (\tilde g'_r)$, we obtain $\hbox{val}(\tilde g_1)< \ldots < \hbox{val} (\tilde g'_1)<\hbox{val}(\hat g_1)<\ldots <\hbox{val}(\hat g'_{n+1})<\hbox{val}(\tilde g_{n+1})< \ldots < \hbox{val} (\tilde g'_{n+1}).$ Hence $|\tau_{\overrightarrow{G} \otimes \overrightarrow{K}_{1,n}^l}| \geq (n+3)|\tau_{\overrightarrow{G}}|.$ \end{proof} \begin{figure} \caption{All theoretical valences are realizable for $C_4 \odot \overline{K} \label{allvalences of c4coronak2} \end{figure} \begin{corollary}\label{bipartite_2_regular} Let $G$ be any edge-magic (bipartite) 2-regular graph. Then $|\tau_{G\odot\overline{K}_n}| \geq (n+1)|\tau_G|+2$. \end{corollary} \begin{proof} Let $G=C_{m_1}\oplus C_{m_2}\oplus \cdots \oplus C_{m_k}$ and let $\overrightarrow{G}=C_{m_1}^+\oplus C_{m_2}^+\oplus \cdots \oplus C_{m_k}^+$ be an orientation of $G$ in which each cycle is strongly oriented. Then $\overrightarrow{G}\otimes \overrightarrow{K}_{1,n}^l=(C_{m_1}^+\otimes \overrightarrow{K}_{1,n}^l) \oplus (C_{m_2}^+\otimes \overrightarrow{K}_{1,n}^l) \oplus \cdots \oplus (C_{m_k}^+\otimes \overrightarrow{K}_{1,n}^l)$. 
Note that since $G$ is bipartite, all cycles should be of even length and, by definition of the $\otimes$-product, $G \odot \overline{K}_{n} \cong und(\overrightarrow{G}\otimes \overrightarrow{K}_{1,n}^l)$. Thus by Theorem \ref{lowerboundvalences_1}, $|\tau_{G\odot\overline{K}_n}| \geq (n+1)|\tau_G|+2$. \end{proof} \begin{example} Let $g$ be an edge-magic labeling of $\overrightarrow{C_4}$ and $f_r$ be the super edge-magic labeling of $\overrightarrow{K}_{1,2}^l$ that assigns the label $r$ to the central vertex, $1 \leq r \leq 3.$ Then the valence of the induced labeling $\widehat{g}_r$ is $\hbox{val}(\widehat{g}_r)=3(\hbox{val}(g)-2)+r+1 \in [3(\hbox{val}(g)-2)+2,3(\hbox{val}(g)-2)+4]$. Let $\alpha: 1\bar 56\bar 42\bar 73\bar81$, $\beta =1\bar 75\bar 62\bar 38\bar41$, $\gamma= 1\bar 58\bar 24\bar 37\bar61$ and $\delta= 8\bar 43\bar 57\bar 26\bar18$, where $i\bar mj$ indicates that $m$ is the label assigned to the edge $ij$. Since $\tau_{C_4}=[12,15]=[\hbox{val}(\alpha),\hbox{val}(\beta)]$ we get $12$ different edge-magic valences $[32,43]$ for the induced labeling of $C_4 \odot \overline{K}_2 \cong und(\overrightarrow{C_4} \otimes \overrightarrow{K}_{1,2}^l).$ Moreover, since the condition $\hbox{val}(\beta)-\hbox{val}(\alpha)<(\hbox{val}(\alpha)-(p+q+2))n$ is satisfied for $n\ge 2$, by using Theorem \ref{producte_super_k}, $\hbox{val}(\widetilde{g}_r)=8(1+r)+\hbox{val}(g)$, which gives, associated to a labeling $g$, two new valences, namely $\hbox{val}(\widetilde{g_1})$ and $\hbox{val}(\widetilde{g_3})$, giving in total $20$ valences. The induced labelings are shown in Fig. \ref{allvalences of c4coronak2}, according to the notation introduced above (for clarity reasons, only the labels of the vertices are shown). Notice that, by using the missing labels, there is only one way to complete the edge-magic labelings obtained in Fig. \ref{allvalences of c4coronak2}. The minimum induced sum together with the maximum unused label provides the valence of the labeling. \end{example} \begin{remark} For a given even $m$, the magic interval for crowns of the form $C_m\odot \overline{K}_n$ is $[2mn+2+5m/2,4mn+ 1+7m/2]$ (see Section 2 in \cite{PEM_LMR}). Thus, for $m=4$, the magic interval is $[28,47]$. Hence, the crown $C_4\odot \overline{K}_2$ is perfect edge-magic. \end{remark} It is well known that all cycles are edge-magic \cite{GodSla}. Thus, we obtain the following corollary: \begin{corollary} Fix $m \in \mathcal{N}$. Then $\lim_{n\to\infty} |\tau_{C_m\odot \overline{K}_n}|=\infty$. \end{corollary} A similar argument to that of the first part of the proof of Theorem \ref{lowerboundvalences_1} can be used to prove the following theorem. \begin{theorem}\label{sem lowerbound} Let $G$ be a super edge-magic graph. Then $ |\sigma_{\overrightarrow{G} \otimes \overrightarrow{K}_{1,n}^l}| \geq (n+1)|\sigma_{\overrightarrow{G}}|$. \end{theorem} \section{A relation between (super) edge-magic labelings and graph decompositions} \label{section_decompositions} Let $G$ be a bipartite graph with stable sets $X=\{x_i\}_{i=1}^s$ and $Y=\{y_j\}_{j=1}^t$. Assume that $G$ admits a decomposition $G\cong H_1\oplus H_2$. Then we denote by $S_2(G;H_1,H_2)$ the graph with vertex and edge sets defined as follows: \begin{eqnarray*} V(S_2(G;H_1,H_2)) &=& X\cup Y\cup X'\cup Y', \\ E(S_2(G;H_1,H_2)) &=& E(G)\cup \{x_iy_j': x_iy_j\in E(H_1)\}\cup \{x_i'y_j: x_iy_j\in E(H_2)\}, \end{eqnarray*} where $X'=\{x_i'\}_{i=1}^s$ and $Y'=\{y_j'\}_{j=1}^t$. We are ready to state and prove the next theorem.
\begin{theorem}\label{theo: SEM new_bipartite graph} Let $G$ be a bipartite (super) edge-magic simple graph with stable sets $X$ and $Y$. Assume that $G$ admits a decomposition $G\cong H_1\oplus H_2$. Then, the graph $S_2(G;H_1,H_2)$ is (super) edge-magic. \end{theorem} \begin{proof} Let $f$ be a (super) edge-magic labeling of $G$, and assume that the edges of $H_1$ are directed from $X$ to $Y$ and the edges of $H_2$ are directed from $Y$ to $X$ in $G$, obtaining the digraph $\overrightarrow{G}$. Let $\overrightarrow{K}_{1,1}^l$ be the super edge-magic labeled digraph with $V(\overrightarrow{K}_{1,1}^l)=\{1,2\}$ and $E(\overrightarrow{K}_{1,1}^l)=\{(1,1),(1,2)\}$. By Theorem \ref{spk}, we have that the graph und$(\overrightarrow{G}\otimes \overrightarrow{K}_{1,1}^l)$ is (super) edge-magic. Moreover, an easy check shows that the bijective function $\phi: V(\overrightarrow{G}\otimes \overrightarrow{K}_{1,1}^l)\rightarrow V(S_2(G;H_1,H_2))$ defined by $\phi (v,1)=v$ and $\phi (v,2)=v'$ is an isomorphism between und$(\overrightarrow{G}\otimes \overrightarrow{K}_{1,1}^l)$ and $S_2(G;H_1,H_2)$. Therefore, the graph $S_2(G;H_1,H_2)$ is (super) edge-magic. \end{proof} Next, we show an example. \begin{example} Consider the edge-magic labeling of $K_{3,3}$ shown in Fig. \ref{Fig_7}. The same figure shows a partition of the edges and a possible orientation of them when $X=\{1,2,3\}$ and $Y=\{4,8,12\}$. The construction given in the proof of Theorem \ref{theo: SEM new_bipartite graph} when each vertex $(a,i)$ is labeled $2(a-1)+i$ and each edge $(a,i)(b,j)$ is labeled $2(e-1)+4-(i+j)$ (where $e$ is the label of $(a,b)$ in $D$) results into the graph in Fig. \ref{Fig_8}. \begin{figure} \caption{A decomposition of $K_{3,3} \label{Fig_7} \end{figure} \begin{figure} \caption{An edge-magic labeling of $S_2(K_{3,3} \label{Fig_8} \end{figure} \end{example} Kotzig and Rosa \cite{KotRos70} proved that every complete bipartite graph is edge-magic. It is clear that Theorem \ref{theo: SEM new_bipartite graph} works very nicely when the graph $G$ under consideration is a complete bipartite graph and many new edge-magic graphs can be obtained. Theorem \ref{theo: SEM new_bipartite graph} can be easily extended. Let us do so next. Let $G$ be a bipartite graph with stable sets $X=\{x_i\}_{i=1}^s$ and $Y=\{y_j\}_{j=1}^t$. Assume that $G$ admits a decomposition $G\cong H_1\oplus H_2$. Then we define $S_{2n}(G;H_1,H_2)$ to be the graph with vertex and edge sets as follows: \begin{eqnarray*} V(S_{2n}(G;H_1,H_2)) &=& X\cup Y\cup (\cup_{k=1}^nX_k)\cup (\cup_{k=1}^nY_k), \\ E(S_{2n}(G;H_1,H_2)) &=& E(G)\cup \{x_iy_j^k: x_iy_j\in E(H_1)\}\cup \{x_i^ky_j: x_iy_j\in E(H_2)\}, \end{eqnarray*} where $X_k=\{x_i^k\}_{i=1}^s$ and $Y_k=\{y_j^k\}_{j=1}^t$. \begin{lemma}\label{S2n isomorphism} Let $G$ be a bipartite simple graph with stable sets $X$ and $Y$. Assume that $G$ admits a decomposition $G\cong H_1\oplus H_2$. Then, there exists an orientation of $G$ and $K_{1,n}^l$, namely $\overrightarrow{G}$ and $\overrightarrow{K}_{1,n}^l$ respectively, such that $S_{2n}(G;H_1,H_2) \cong und(\overrightarrow{G}\otimes \overrightarrow{K}_{1,n}^l).$ \end{lemma} \begin{proof} Assume that the digraph $\overrightarrow{G}$ is obtained from $G$ by orienting the edges of $H_1$ from $X$ to $Y$ and the edges of $H_2$ from $Y$ to $X$ in $G$. Let $\overrightarrow{K}_{1,n}^l$ be the digraph with $V(\overrightarrow{K}_{1,n}^l)=[1,n+1]$ and $E(\overrightarrow{K}_{1,n}^l)=\{(1,k):k \in [1,n+1]\}$. 
An easy check shows that the bijective function $\phi: V(\overrightarrow{G}\otimes \overrightarrow{K}_{1,n}^l)\rightarrow V(S_{2n}(G;H_1,H_2))$ defined by $\phi (v,1)=v$ and $\phi (v,k+1)=v^{k}, \ k \in [1,n]$ is an isomorphism between und$(\overrightarrow{G}\otimes \overrightarrow{K}_{1,n}^l)$ and $S_{2n}(G;H_1,H_2)$. \end{proof} We are ready to state and prove the next theorem. \begin{theorem}\label{coro: SEM new_multipartite graph} Let $G$ be a bipartite (super) edge-magic simple graph with stable sets $X$ and $Y$. Assume that $G$ admits a decomposition $G\cong H_1\oplus H_2$. Then, the graph $S_{2n}(G;H_1,H_2)$ is (super) edge-magic. \end{theorem} \begin{proof} Let $f$ be a (super) edge-magic labeling of $G$, and assume that the edges of $H_1$ are directed from $X$ to $Y$ and the edges of $H_2$ are directed from $Y$ to $X$ in $G$, obtaining the digraph $\overrightarrow{G}$. Let $\overrightarrow{K}_{1,n}^l$ be the super edge-magic labeled digraph with $V(\overrightarrow{K}_{1,n}^l)=[1,n+1]$ and $E(\overrightarrow{K}_{1,n}^l)=\{(1,k):k \in [1,n+1]\}$. By Theorem \ref{spk}, we have that the graph und$(\overrightarrow{G}\otimes \overrightarrow{K}_{1,n}^l)$ is (super) edge-magic. By Lemma \ref{S2n isomorphism}, $S_{2n}(G;H_1,H_2) \cong und(\overrightarrow{G}\otimes \overrightarrow{K}_{1,n}^l).$ Therefore, the graph $S_{2n}(G;H_1,H_2)$ is (super) edge-magic. \end{proof} With the help of Lemma \ref{k1nsem}, we can generalize Theorem \ref{coro: SEM new_multipartite graph} very easily. We do it in the following two results. \begin{theorem}\label{S_2n SEM} Let $G$ be a bipartite super edge-magic simple graph with stable sets $X$ and $Y$. Assume that $G$ admits a decomposition $G\cong H_1\oplus H_2$. Then $|\sigma_{S_{2n}(G;H_1,H_2)}| \geq (n+1)|\sigma_G|$. \end{theorem} \begin{proof} Let $h$ be a super edge-magic labeling of $G$, and assume that the edges of $H_1$ are directed from $X$ to $Y$ and the edges of $H_2$ are directed from $Y$ to $X$ in $G$, obtaining the digraph $\overrightarrow{G}$. Let $f_r$ be the super edge-magic labeling of $\overrightarrow{K}_{1,n}^l$ that assigns the label $r$ to the central vertex with $\hbox{val}(f_r)=2n+3+r, \ 1 \leq r \leq n+1$. Then by Lemma \ref{S2n isomorphism}, $S_{2n}(G;H_1,H_2) \cong und(\overrightarrow{G}\otimes \overrightarrow{K}_{1,n}^l)$ and by Theorem \ref{coro: SEM new_multipartite graph}, it is super edge-magic. By Theorem \ref{sem lowerbound}, $|\sigma_{S_{2n}(G;H_1,H_2)}| \geq (n+1)|\sigma_G|$. \end{proof} A similar argument to the one of Theorem \ref{S_2n SEM}, but now using Theorem \ref{lowerboundvalences_1}, allows us to prove the following theorem. \begin{theorem} \label{S_2n EM} Let $G$ be a bipartite edge-magic simple graph with stable sets $X$ and $Y$. Assume that $G$ admits a decomposition $G\cong H_1\oplus H_2$. Then $|\tau_{S_{2n}(G;H_1,H_2)}| \geq (n+1)|\tau_G|+2$. \end{theorem} Once again, we have the following two easy corollaries. \begin{corollary} Let $G$ be a bipartite super edge-magic simple graph with stable sets $X$ and $Y$. If $G$ admits a decomposition $G\cong H_1\oplus H_2$, then $\lim_{n\to\infty}|\sigma_{S_{2n}(G;H_1,H_2)}|=\infty$. \end{corollary} \begin{corollary} Let $G$ be a bipartite edge-magic simple graph with stable sets $X$ and $Y$. If $G$ admits a decomposition $G\cong H_1\oplus H_2$, then $\lim_{n\to\infty}|\tau_{S_{2n}(G;H_1,H_2)}|=\infty$. 
\end{corollary} At this point, consider any graph $G^*$ whose vertex set admits a partition of the form $V(G^*)=X\cup Y\cup_{k=1}^n X_k \cup_{k=1}^n Y_k$ and that decomposes as a union of three bipartite graphs $G^*\cong G\oplus H_1\oplus H_2$, where $G^*[X\cup Y]\cong G$, $G^*[X\cup Y_k]\cong H_1$ and $G^*[X_k\cup Y]\cong H_2$ for all $k\in [1,n]$. By Theorem \ref{theo: SEM new_bipartite graph}, we have the following remarks. \begin{remark} If $G$ is a (super) edge-magic graph and $G^*$ is not, then $H_1$ and $H_2$ do not decompose $G$. \end{remark} \begin{remark} If $|\sigma_{S_{2n}(G;H_1,H_2)}| < (n+1)|\sigma_G|$ provided that $G$ is a bipartite super edge-magic graph, then $G\not \cong H_1\oplus H_2 $. \end{remark} \begin{remark} If $|\tau_{S_{2n}(G;H_1,H_2)}| < (n+1)|\tau_G|+2$ provided that $G$ is a bipartite edge-magic graph, then $G\not \cong H_1\oplus H_2 $. \end{remark} We will bring this section to its end, by mentioning that, although some labelings involving differences as for instance, graceful labelings and $\alpha$-valuations have a strong relationship with graph decompositions, the results mentioned in this section are the only ones known relating the subject of decompositions with addition type labelings. This is why we consider these results interesting. \section{Conclusions} The goal of this paper is to show a new application of labeled super edge-magic (di)graphs to graph decompositions. The relation among labelings and decompositions of graphs is not new. In fact, one of the first motivations in order to study graph labelings was the relationship existing between graceful labelings of trees and decompositions of complete graphs into isomorphic trees. What we believe that it is new and surprising about the relation established in this paper is that, as far as we know, there are no relations between labelings involving sums and graph decompositions. In fact, we believe that this is the first relation found in this direction and we believe that to explore this relationship is a very interesting line for future research. \noindent{\bf Acknowledgements} The research conducted in this document by the first author has been supported symbolically by the Catalan Research Council under grant 2014SGR1147. \end{document}
\begin{document} \title{Weakly bound electrons in external magnetic field} \author{I. V. Mamsurov$^1$ and F. Kh. Chibirova$^2$} \affiliation{$^1$Faculty of Physics, Moscow State University, 119899, Moscow, Russia\\ $^2$Karpov Institute of Physical Chemistry, 103064, Moscow, Russia} \begin{abstract} The effect of a uniform magnetic field on an electron in a spherically symmetric square-well potential is studied. A transcendental equation that determines the electron energy spectrum is derived. The approximate value of the lowest (bound) energy state is found. The approximate wave function and probability current density of this state are constructed. \end{abstract} \pacs{PACS numbers: 03.65.Ge, 03.65.-w} \maketitle

\section{Introduction}

Nonrelativistic quantum systems in an external electromagnetic field have attracted permanent interest due to the possible application of their models to many phenomena of quantum mechanics. In particular, this concerns the so-called bound electron states. For example, it is well known that the integer quantum Hall effect is correlated with the presence of weakly bound electron states in the corresponding samples. Nonrelativistic electrons in an external magnetic field are also responsible for such remarkable macroscopic quantum phenomena as, for example, high-temperature superconductivity \cite{W}. Magnetic fields are also likely to affect electrons weakly bound in singular potentials of defects in defect films \cite{Ch1,Ch2}. The effect of magnetic fields on loosely bound electrons in two-dimensional models was studied in \cite{Kh}. Such phenomena as parity violation, the Aharonov-Bohm effect \cite{Ah}, and others are also related to this problem. The behavior of an electron in a constant uniform magnetic field and a single attractive $\delta$ potential in three spatial dimensions was studied in \cite{Kh1}, in which a nontrivial result for the probability current density of the loosely bound electron state was also obtained. This current resembles the ``pancake vortices'' in high-temperature superconductors. In this paper we study a more general case: the behavior of an electron in an external uniform magnetic field in the presence of a spherically symmetric square-well potential of finite radius. The calculations are made assuming that this radius is small compared to the magnetic length $a=\sqrt{\hbar/m \omega}$. In the first approximation in the small parameter $\xi =R^2/2a^2\ll 1$, a transcendental equation for the electron energy spectrum is derived, as well as an approximate value of the bound energy state. Accordingly, in the zeroth approximation a wave function of the bound state is obtained, and the probability current for this state is calculated. This current, as in \cite{Kh1}, appears to have nonzero circulation around the axis parallel to the external magnetic field and to be mostly confined within the perpendicular plane.

\section{Schr\"odinger-Pauli Equation}

Let us consider an electron in a spherically symmetric square-well potential of the form:
\begin{eqnarray}
U(r)= \left\{
\begin{array}{ll}
-U_0, & r<R \\
0, & r>R
\end{array}
\right.
\end{eqnarray}
in the presence of a uniform magnetic field $H$, which is directed along the axis $z$. The vector potential is specified in a cylindrically symmetric gauge:
\begin{eqnarray}
A_{\phi}=\frac{H\rho}{2}, \quad A_{\rho}=A_{z}=0.
\end{eqnarray}
Let us write the Schr\"odinger-Pauli equation for this electron:
\begin{eqnarray}
i\hbar\frac{\partial}{\partial t}\psi(t, {\bf r})=\hat{H} \psi(t, {\bf r}),
\label{Pauli}
\end{eqnarray}
where the Hamiltonian in cylindrical coordinates has the form:
\begin{eqnarray}
\hat{H}=-\frac{\hbar^2}{2m} \left[ \frac{1}{\rho}\frac{\partial}{\partial \rho} \left( \rho \frac{\partial}{\partial \rho} \right) +\frac{\partial^2}{\partial z^2} + \frac{1}{\rho^2}\frac{\partial^2}{\partial \phi^2} \right] -\frac{i\hbar \omega}{2} \frac{\partial}{\partial \phi} +\frac{m\omega^2}{8} \rho^2 +U(\sqrt{\rho^2+z^2}) +\mu \sigma_3 H,
\label{Ham}
\end{eqnarray}
where:
\begin{eqnarray}
\omega=\frac{|e|H}{mc}, \quad \mu=\frac{|e|\hbar}{2mc}, \quad \sigma_3= \left( \begin{array}{cc} 1 & 0\\ 0 & -1 \end{array} \right).
\end{eqnarray}
We are interested in a stationary solution of the equation (\ref{Pauli}):
\begin{eqnarray}
\psi(t, {\bf r})=e^{\frac{-iEt}{\hbar}} \psi_E ({\bf r}).
\label{wf}
\end{eqnarray}
It is reasonable to seek the spatial part of the wave function in the form:
\begin{eqnarray}
\psi_E ({\bf r}) =\int\limits_{-\infty}^{+\infty} dp_z \sum_{l=-\infty}^{+\infty} \sum_{n_{\rho}=0}^{\infty} C_{En_{\rho} l p_z} \psi_{n_{\rho} l p_z} ({\bf r}).
\label{repr}
\end{eqnarray}
The wave functions on the right side of equation (\ref{repr}) are eigenfunctions of the Hamiltonian (\ref{Ham}) in the absence of the spherically symmetric potential (see, for example, \cite{L}):
\begin{eqnarray}
\psi_{n_{\rho} l p_z} ({\bf r})=\frac{1}{2} \frac{e^{i p_z z/ \hbar}}{\sqrt{2\pi\hbar}} \frac{e^{il\phi}}{\sqrt{2\pi}} \frac{1}{a} I_{n_{\rho} l} (\rho^2 /2a^2) \left( \begin{array}{c} 1+s \\ 1-s \end{array} \right),
\end{eqnarray}
where $s=\pm 1$ is a constant spin quantum number of the electron, $a=\sqrt{\hbar/m\omega}$, and the Laguerre functions
\begin{eqnarray}
I_{n_{\rho} l} (x) =\frac{1}{\sqrt{(n_{\rho} + |l|)! n_{\rho}!}} e^{-x/2} x^{|l|/2} Q^{|l|}_{n_{\rho}}(x)
\end{eqnarray}
are expressed through the corresponding polynomials. Multiplying both sides of (\ref{repr}) by $\psi_{N_{\rho} L P_z}$, transferring to the right side the term containing the potential $U({\bf r})$ and to the left side all other terms, and integrating over all spatial coordinates, we obtain:
\begin{eqnarray}
C_{E N_{\rho} L P_z} \left( \hbar \omega \left( N_{\rho} + \frac{|L|+L+1+s}{2}\right) +\frac{P^2_z}{2m} - E \right)= \nonumber \\ =\frac{U_0}{\pi} \int\limits^{+\infty}_{-\infty} dp_z \sum^{\infty}_{n_{\rho}=0} C_{E n_{\rho} L p_z} \frac{1}{P_z -p_z} \nonumber \\ \frac{1}{a^2} \int\limits^{R}_{0} \rho d\rho I_{n_{\rho} L} (\rho^2/2a^2) I_{N_{\rho} L} (\rho^2/2a^2) \sin \left( \frac{P_z-p_z}{\hbar} \sqrt{R^2 - \rho^2} \right).
\label{Eq}
\end{eqnarray}
Taking into account that the radius of the potential well is small compared with the magnetic length, $R^2/a^2\ll 1$, we expand the product of Laguerre functions on the right side of (\ref{Eq}) in a power series in the parameter $\rho^2/2a^2$, up to terms of the first order. We also suppose that in reality the integral over $p_z$ on the right side of (\ref{Eq}) has finite limits. Accordingly we put $\sqrt{R^2-\rho^2}\, (P_z-p_z)/\hbar\ll 1$ and replace the sine on the right side of (\ref{Eq}) by its argument.
Then integrating over $\rho$ and using the notation $U_0 R^3=\lambda$, we obtain the following result:
\begin{eqnarray}
C_{E N_{\rho} 0 P_z} \left( \hbar \omega \left( N_{\rho} + \frac{1+s}{2}\right) +\frac{P^2_z}{2m} - E \right) = \nonumber \\ =\frac{\lambda}{\pi \hbar a^2} \int\limits^{+\infty}_{-\infty} dp_z \sum^{\infty}_{n_{\rho}=0} C_{E n_{\rho} 0 p_z} \left[ \frac{1}{3} - \frac{2}{15} \xi (1+n_{\rho} + N_{\rho}) \right], \quad L=0,
\label{Eq1}
\end{eqnarray}
\begin{eqnarray}
C_{E N_{\rho} L P_z} \left( \hbar \omega \left( N_{\rho} + \frac{|L|+L+1+s}{2}\right) +\frac{P^2_z}{2m} - E \right) = \nonumber \\ =\frac{\lambda}{\pi \hbar a^2} \int\limits^{+\infty}_{-\infty} dp_z \sum^{\infty}_{n_{\rho}=0} C_{E n_{\rho} L p_z} \left[ \frac{2}{15} \xi \sqrt{(n_{\rho} +1) (N_{\rho} + 1)} \right], \quad L=\pm 1,
\label{Eq2}
\end{eqnarray}
where $\xi =m \omega R^2 /2\hbar$. For all other values of $L$, the right side of (\ref{Eq}) equals zero in the first approximation. This means that the corresponding coefficients $C_{E n_{\rho} L p_z}$ also equal zero when $L \ne 0, \pm 1$.

\section{Energy Spectrum}

Because we are interested in the lowest energy state, we consider only the case $L=0$. We seek the coefficients $C_{E n_{\rho} 0 p_z}$ in the form:
\begin{eqnarray}
C_{E n_{\rho} 0 p_z}= C_E \frac{1- \frac{2}{5} \xi (1/2 + n_{\rho})}{\hbar \omega \left( n_{\rho} + \frac{1+s}{2}\right) +\frac{p^2_z}{2m} - E}.
\label{Coef}
\end{eqnarray}
Inserting (\ref{Coef}) in (\ref{Eq1}) and neglecting the term proportional to $\xi^2$, we obtain the equation for the energy spectrum:
\begin{eqnarray}
1 = \frac{\lambda}{3\pi \hbar a^2} \int\limits^{+\infty}_{-\infty} dp_z \sum^{\infty}_{n_{\rho}=0} \frac{1 - \frac{4}{5} \xi (1/2 + n_{\rho})}{\hbar \omega \left( n_{\rho} + \frac{1+s}{2}\right) +\frac{p^2_z}{2m} - E}.
\label{Sp}
\end{eqnarray}
Integrating over $p_z$ we finally have:
\begin{eqnarray}
1=\frac{\sqrt{2m}}{3 \hbar a^2} \lambda \sum^{\infty}_{n_{\rho}=0} \frac{1- \frac{4}{5} \xi (1/2 +n_{\rho})}{\sqrt{\hbar \omega \left( n_{\rho} + \frac{1+s}{2}\right) - E}}.
\label{Sp1}
\end{eqnarray}
This equation may be solved graphically. In order to find the approximate value of the lowest (bound) state, we put $n_{\rho}=0$, $s=-1$ in (\ref{Sp1}). Then we obtain the following result:
\begin{eqnarray}
E_{min}=-\frac{2m\lambda^2}{9 \hbar^2 a^4} \left( 1-\frac{2}{5} \xi \right).
\label{Lev}
\end{eqnarray}

\section{Wave Function and Probability Current}

Let us find, in the zeroth approximation in $\xi$, the wave function $\psi_E ({\bf r})$ of the lowest energy state. In this case, in the expansion of the product of Laguerre functions in (\ref{Eq}) we keep only the term that does not contain $\xi$. Then the right side of (\ref{Eq}) is nonzero only when $L=0$. In the formula (\ref{Coef}) for the coefficients $C_{n_{\rho} 0 p_z}$ we must neglect the term proportional to $\xi$. We find the coefficient $C_E$ from the normalization condition:
\begin{eqnarray}
\int\limits_{-\infty}^{+\infty} dp_z \sum_{l=-\infty}^{+\infty} \sum_{n_{\rho}=0}^{\infty} |C_{En_{\rho}l p_z}|^2 =1.
\label{Norm}
\end{eqnarray}
Taking into account that in the summation over $l$ only one term of zero order is present, after integration over $p_z$ we have:
\begin{eqnarray}
C_E=\frac{1}{m \sqrt{2\pi}} \left[ \sum^{\infty}_{n_{\rho}=0} \frac{1}{(\hbar \omega n_{\rho} -E)^{3/2}} \right]^{-1/2}.
\label{C_E}
\end{eqnarray}
Inserting (\ref{C_E}) in (\ref{Coef}), and (\ref{Coef}) in (\ref{repr}), and again taking into account that in the summation over $l$ only the zero-order term remains, we obtain the following formula:
\begin{eqnarray}
\psi_{E, s=-1}=\frac{1}{2\pi a \sqrt{\hbar}} C_E \sum^{\infty}_{n_{\rho}=0} \int\limits^{+\infty}_{-\infty} dp_z \frac{1}{\hbar \omega n_{\rho} +\frac{p^2_z}{2m} - E} e^{\frac{i p_z z}{\hbar}} I_{n_{\rho} 0} (\rho^2/2a^2).
\label{F}
\end{eqnarray}
Because we are interested in the lowest (bound) state, we consider only the term with $n_{\rho}=0$. Then integrating over $p_z$ we finally obtain:
\begin{eqnarray}
\psi_{E, s=-1}=\frac{1}{2\pi a \sqrt{2m \hbar |E|} } C_E \exp\left(-\sqrt{2m|E|} \frac{\theta (z) z}{\hbar}\right) \exp\left(-\frac{m \omega \rho^2}{4\hbar}\right),
\label{F1}
\end{eqnarray}
where $\theta (z) = 1$ for $z>0$ and $\theta (z) = -1$ for $z<0$. Using the well-known expression for the probability current density:
\begin{eqnarray}
{\bf j} =\frac{i \hbar}{2m} (\psi \nabla \psi^* -\psi^* \nabla\psi) - \frac{e}{mc} {\bf A} \psi^* \psi,
\label{Cur}
\end{eqnarray}
we obtain the following result:
\begin{eqnarray}
j_{\phi}=-\frac{eH\rho}{16\pi^2 a^2 m^2 c \hbar |E|} C_E^2 \exp\left(-2\sqrt{2m|E|} \frac{\theta (z) z}{\hbar}\right) \exp\left(-\frac{m \omega \rho^2}{2\hbar}\right), \quad j_{\rho}=j_z=0.
\label{Cur1}
\end{eqnarray}

\section{Discussion}

Thus it is established that the presence of an external magnetic field and a potential well of finite depth produces an interesting bound energy state of an electron. Its probability current has nonzero circulation around the field axis. This fact is of significant interest, for it contributes to the explanation of many quantum mechanical phenomena, first of all high-temperature superconductivity.

\noindent{\bf Acknowledgments} This paper was supported by a Joint Research Project of the Taiwan National Science Council (NSC-RFBR No. 95WFD0400022, Republic of China) and the Russian Foundation for Basic Research (No. NSC-a-89500.2006.2) under contract No. RP06N04-1, by the U.S. Department of Energy's Initiative for Proliferation Prevention (IPP) Program through Contract No. 94138 with the Brookhaven National Laboratory, and, in part, by the Program for Leading Russian Scientific Schools (Grant No. NSh-5332.2006.2) (I.V.M.). The authors are grateful to V. R. Khalilov for fruitful discussions. \end{document}
\begin{document} \title{Systems of bihomogeneous forms of small bidegree} \author[L. Hochfilzer]{Leonhard Hochfilzer} \address{Mathematisches Institut, Bunsenstraße 3-5, 37073 Göttingen, Germany} \email{[email protected]} \begin{abstract} We use the circle method to count the number of integer solutions to systems of bihomogeneous equations of bidegree $(1,1)$ and $(2,1)$ of bounded height in lopsided boxes. Previously, adjusting Birch's techniques to the bihomogeneous setting, Schindler showed an asymptotic formula provided the number of variables grows at least quadratically with the number of equations considered. Based on recent methods by Rydin Myerson we weaken this assumption and show that the number of variables only needs to satisfy a linear bound in terms of the number of equations. \end{abstract} \maketitle \tableofcontents

\section{Introduction}

Studying the number of rational solutions of bounded height on a system of equations is a fundamental tool in order to understand the distribution of rational points on varieties. A longstanding result by Birch~\cite{birch_forms} establishes an asymptotic formula for the number of integer points of bounded height that are solutions to a system of homogeneous forms of the same degree in a general setting, provided the number of variables is sufficiently large relative to the singular locus of the variety defined by the system of equations. This was recently improved upon by Rydin Myerson~\cite{myerson_quadratic, myerson_cubic} whenever the degree is $2$ or $3$. These results may be used in order to prove Manin's conjecture for certain Fano varieties, which arise as complete intersections in projective space. Analogous to Birch's result, Schindler studied systems of bihomogeneous forms~\cite{schindler_bihomogeneous}. Using the hyperbola method, Schindler established Manin's conjecture for certain bihomogeneous varieties as a result~\cite{schindler_manin_biprojective}. The aim of this paper is to improve Schindler's result by applying the ideas of Rydin Myerson to the bihomogeneous setting. Consider a system of bihomogeneous forms $\bm{F}(\bm{x},\bm{y})= \left( F_1(\bm{x},\bm{y}), \ldots, F_R(\bm{x},\bm{y}) \right)$ with integer coefficients in variables $\bm{x} = (x_1, \ldots, x_{n_1})$ and $\bm{y} = (y_1, \ldots, y_{n_2})$. We assume that all of the forms have the same bidegree, which we denote by $(d_1,d_2)$ for nonnegative integers $d_1,d_2$. By this we mean that for any scalars $\lambda, \mu \in \mathbb{C}$ we have
\begin{equation*}
F_i(\lambda \bm{x}, \mu \bm{y}) = \lambda^{d_1} \mu^{d_2} F_i(\bm{x},\bm{y}),
\end{equation*}
for all $i = 1, \ldots, R$. This system defines a biprojective variety $V \subset \mathbb{P}_{\mathbb{Q}}^{n_1-1} \times \mathbb{P}_{\mathbb{Q}}^{n_2-1}$.
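As a simple illustration (an example chosen only for illustration), a single ($R=1$) form such as $F(\bm{x},\bm{y})=x_1^2y_1+x_2^2y_2-2x_1x_3y_3$ is bihomogeneous of bidegree $(2,1)$ in $n_1=n_2=3$ variables; the bidegrees $(1,1)$ and $(2,1)$ are the two cases treated in this paper.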
One can also interpret the system in the affine variables $(x_1, \dots, x_{n_1}, y_1, \dots, y_{n_2})$, and thus $\bm{F}(\bm{x},\bm{y})$ also defines an affine variety, which we will denote by $V_0 \subset \mathbb{A}_{\mathbb{Q}}^{n_1+n_2}$. We are interested in studying the set of integer solutions to this system of bihomogeneous equations. Consider two boxes $\mathcal{B}_i \subset [-1,1]^{n_i}$, each edge of which has side length at most one and is parallel to the coordinate axes. In order to study these questions from an analytic point of view, for $P_1, P_2 > 1$ we define the following counting function
\begin{equation*}
N(P_1,P_2) = \# \{ (\bm{x}, \bm{y}) \in \mathbb{Z}^{n_1} \times \mathbb{Z}^{n_2} \mid \bm{x}/P_1 \in \mathcal{B}_1, \; \bm{y}/P_2 \in \mathcal{B}_2, \; \bm{F}(\bm{x},\bm{y}) = \bm{0} \}.
\end{equation*}
Generalising the work of Birch~\cite{birch_forms}, Schindler~\cite{schindler_bihomogeneous} used the circle method to obtain an asymptotic formula for $N(P_1,P_2)$ as $P_1,P_2 \rightarrow \infty$, provided certain conditions on the number of variables are satisfied, as we shall describe below. Before we can state Schindler's result, consider the varieties $V_1^*$ and $V_2^*$ in $\mathbb{A}_\mathbb{Q}^{n_1+n_2}$ defined by the conditions
\begin{equation*}
\mathrm{rank}\left(\frac{\partial F_i }{\partial x_j} \right)_{i,j} < R, \quad \text{and} \quad \mathrm{rank}\left(\frac{\partial F_i }{\partial y_j} \right)_{i,j} < R,
\end{equation*}
respectively. Assume that $V_0$ is a complete intersection, which means that $\dim V_0 = n_1+n_2-R$. Write $b = \max\left\{ \frac{\log(P_1)}{\log(P_2)},1\right\}$ and $u = \max\left\{ \frac{\log (P_2)}{\log(P_1)}, 1 \right\}$. If $n_i > R$ and
\begin{equation}
\label{eq.cond_schindler}
n_1+n_2 - \dim V_i^* > 2^{d_1+d_2-2} \max \{R(R+1)(d_1+d_2-1), R(bd_1 + ud_2) \}
\end{equation}
is satisfied for $i=1,2$, then Schindler showed the asymptotic formula
\begin{equation}
\label{eq.schindler_asymptotic}
N(P_1,P_2) = \sigma P_1^{n_1-Rd_1} P_2^{n_2-Rd_2} + O\left(P_1^{n_1-Rd_1} P_2^{n_2-Rd_2} \min \{P_1,P_2 \}^{-\delta}\right),
\end{equation}
for some $\delta > 0$, where $\sigma$ is positive if the system $\bm{F}(\bm{x},\bm{y}) = \bm{0}$ has a smooth $p$-adic zero for all primes $p$ and the variety $V_0$ has a smooth real zero in $\mathcal{B}_1 \times \mathcal{B}_2$. In the case when the equations $F_1(\bm{x},\bm{y}), \dots, F_R(\bm{x}, \bm{y})$ define a smooth complete intersection $V$, and the bidegree is $(1,1)$ or $(2,1)$, the goal of this paper is to weaken the restriction~\eqref{eq.cond_schindler} on the number of variables while still establishing~\eqref{eq.schindler_asymptotic}.
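For instance, in the balanced case $P_1 = P_2$ one has $b = u = 1$, and since $R(R+1)(d_1+d_2-1) \geq R(d_1+d_2)$ whenever $d_1+d_2 \geq 2$, condition~\eqref{eq.cond_schindler} reduces to
\begin{equation*}
n_1+n_2 - \dim V_i^* > 2^{d_1+d_2-2} R(R+1)(d_1+d_2-1),
\end{equation*}
which makes the quadratic growth in $R$ of the required number of variables explicit.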
The result by Schindler generalises a well-known result by Birch~\cite{birch_forms}, which deals with systems of homogeneous equations. Let $\mathcal{B} \subset [-1,1]^n$ be a box containing the origin with side lengths at most $1$ and edges parallel to the coordinate axes. Given homogeneous equations $G_1(\bm{x}), \dots, G_R(\bm{x})$ with rational coefficients of common degree $d$, define the counting function
\begin{equation*}
N(P) = \# \{ \bm{x} \in \mathbb{Z}^n \colon \bm{x}/P \in \mathcal{B}, \; G_1(\bm{x}) = \cdots = G_R(\bm{x}) = 0 \}.
\end{equation*}
Write $V^* \subset \mathbb{A}^n_{\mathbb{Q}}$ for the variety defined by
\begin{equation*}
\mathrm{rank} \left( \frac{\partial G_i}{\partial x_j} \right)_{i,j} < R,
\end{equation*}
commonly referred to as the \emph{Birch singular locus}. Assuming that $G_1, \dots, G_R$ define a complete intersection $X \subset \mathbb{P}^{n-1}_{\mathbb{Q}}$ and that the number of variables satisfies
\begin{equation}
\label{eq.assumption_birch_quadr}
n-\dim V^* > R(R+1)(d-1)2^{d-1},
\end{equation}
Birch showed
\begin{equation}
\label{eq.birch_asymptotic}
N(P) = \tilde{\sigma} P^{n-dR} + O(P^{n-dR-\varepsilon}),
\end{equation}
where $\tilde{\sigma} > 0$ if the system $\bm{G}(\bm{x})$ has a smooth $p$-adic zero for all primes $p$ and the variety $X$ has a smooth real zero in $\mathcal{B}$. Building on ideas of M\"uller~\cite{mueller2, mueller1} on quadratic Diophantine inequalities, Rydin Myerson improved Birch's theorem: he weakened the assumption on the number of variables in the cases $d=2,3$~\cite{myerson_quadratic, myerson_cubic} whenever $R$ is reasonably large. Assuming that $X \subset \mathbb{P}^{n-1}_{\mathbb{Q}}$ is a complete intersection, he was able to replace the condition in \eqref{eq.assumption_birch_quadr} by
\begin{equation}
\label{eq.myerson_assumption}
n - \sigma_{\mathbb{R}} > d2^d R,
\end{equation}
where
\[ \sigma_{\mathbb{R}} = 1+ \max_{\bm{\beta} \in \mathbb{R}^R \setminus \{\bm{0}\}} \dim \mathrm{Sing} \, \mathbb{V}(\bm{\beta} \cdot \bm{G}), \]
and where $\mathbb{V}(\bm{\beta} \cdot \bm{G})$ is the pencil defined by $\sum_{i=1}^R \beta_i G_i(\bm{x})$ in $\mathbb{P}^{n-1}_{\mathbb{Q}}$. We note at this point that several other authors have replaced the Birch singular locus condition with weaker assumptions, such as Schindler~\cite{schindler2015variant} and Dietmann~\cite{dietmann_weyl}, who also considered dimensions of pencils, and very recently Yamagishi~\cite{yamagishi2023birch}, who replaced the Birch singular locus with a condition regarding the Hessian of the system. Returning to Rydin Myerson's result, if $X$ is non-singular then one can show
\[ \sigma_{\mathbb{R}} \leq R-1, \]
and in this case if $n \geq (d 2^d+1)R$ then one obtains the desired asymptotic.
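For a rough comparison in the case $d = 2$: condition~\eqref{eq.myerson_assumption} requires on the order of $8R$ variables, while~\eqref{eq.assumption_birch_quadr} requires on the order of $2R(R+1)$; leaving aside the fact that the two conditions measure the singular locus differently, the linear bound $8R$ is the weaker requirement as soon as $R \geq 4$.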
Notably, the work of Rydin Myerson shows that the number of variables $n$ only has to grow linearly in the number of equations $R$, whereas $R$ appeared quadratically in Birch's work. If $d\geq 4$ he showed that for \textit{generic} systems of forms it suffices to assume \eqref{eq.myerson_assumption} for the asymptotic~\eqref{eq.birch_asymptotic} to hold. Generic here means that the set of coefficients is required to lie in some non-empty Zariski open subset of the parameter space of coefficients of the equations.

Our goal in this paper is to generalise the results obtained by Rydin Myerson to the case of bihomogeneous varieties whenever the bidegree of the forms is $(1,1)$ or $(2,1)$. Those two cases correspond to degrees $2$ and $3$ in the homogeneous case, respectively. We call a bihomogeneous form \emph{bilinear} if its bidegree is $(1,1)$. Given a bilinear form $F_i(\bm{x},\bm{y})$ we may write it as
\begin{equation*}
F_i(\bm{x},\bm{y}) = \bm{y}^T A_i \bm{x},
\end{equation*}
for some $n_2 \times n_1$ matrices $A_i$ with rational entries. Given $\bm{\beta} \in \mathbb{R}^R$ write
\begin{equation*}
A_{\bm{\beta}} = \sum_{i = 1}^R \beta_i A_i.
\end{equation*}
Regarding $A_{\bm{\beta}}$ as a map $\mathbb{R}^{n_1} \rightarrow \mathbb{R}^{n_2}$ and $A_{\bm{\beta}}^T$ as a map $\mathbb{R}^{n_2} \rightarrow \mathbb{R}^{n_1}$, we define the quantities
\begin{equation*}
\sigma_{\mathbb{R}}^{(1)} \coloneqq \max_{\bm{\beta} \in \mathbb{R}^R \setminus \{ \bm{0}\}} \dim \ker (A_{\bm{\beta}}), \quad \text{and} \quad \sigma_{\mathbb{R}}^{(2)} \coloneqq \max_{\bm{\beta} \in \mathbb{R}^R \setminus \{ \bm{0}\}} \dim \ker (A_{\bm{\beta}}^T).
\end{equation*}
We state our first theorem for systems of bilinear forms. Since the situation is completely symmetric with respect to the $\bm{x}$ and $\bm{y}$ variables if the forms are bilinear, we may without loss of generality assume $P_1 \geq P_2$ in the counting function and still obtain the full result.
\begin{theorem}
\label{thm.bilinear}
Let $F_1(\bm{x},\bm{y}), \dots, F_R(\bm{x}, \bm{y})$ be bilinear forms with integer coefficients such that the biprojective variety $\mathbb{V}(F_1, \dots, F_R) \subset \mathbb{P}_{\mathbb{Q}}^{n_1-1} \times \mathbb{P}_{\mathbb{Q}}^{n_2-1}$ is a complete intersection. Let $P_1\geq P_2> 1$, write $b = \frac{\log(P_1)}{\log(P_2)}$ and assume further that
\begin{equation}
\label{eq.assumption_n_i_-sigma_i}
n_i - \sigma_{\mathbb{R}}^{(i)} > (2b+2)R
\end{equation}
holds for $i = 1,2$.
Then there exists some $\delta > 0$ depending at most on $b$, $\bm{F}$, $R$ and $n_i$ such that
\begin{equation*}
N(P_1,P_2) = \sigma P_1^{n_1-R} P_2^{n_2-R} + O(P_1^{n_1-R} P_2^{n_2-R-\delta})
\end{equation*}
holds, where $\sigma > 0$ if the system $\bm{F}(\bm{x},\bm{y}) = \bm{0}$ has a smooth $p$-adic zero for all primes $p$ and if the variety $V_0$ has a smooth real zero in $\mathcal{B}_1 \times \mathcal{B}_2$. Moreover, if we assume $\mathbb{V}(F_1, \dots, F_R) \subset \mathbb{P}^{n_1-1}_{\mathbb{Q}} \times \mathbb{P}_{\mathbb{Q}}^{n_2-1}$ to be smooth, the same conclusions hold if we assume
\begin{equation*}
\min \{n_1, n_2 \} > (2b+2)R \quad \text{and} \quad n_1+n_2 > (4b+5)R
\end{equation*}
instead of \eqref{eq.assumption_n_i_-sigma_i}.
\end{theorem}
We now move on to systems of forms $F_1(\bm{x},\bm{y}), \dots, F_R(\bm{x},\bm{y})$ of bidegree $(2,1)$. We may write such a form $F_i(\bm{x},\bm{y})$ as
\begin{equation*}
F_i(\bm{x},\bm{y}) = \bm{x}^T H_i(\bm{y}) \bm{x},
\end{equation*}
where $H_i(\bm{y})$ is a symmetric $n_1 \times n_1$ matrix whose entries are linear forms in the variables $\bm{y} = (y_1, \dots, y_{n_2})$. Similarly to above, given $\bm{\beta} \in \mathbb{R}^R$ we write
\begin{equation*}
H_{\bm{\beta}}(\bm{y}) = \sum_{i = 1}^R \beta_i H_i(\bm{y}).
\end{equation*}
Given $\ell \in\{ 1, \dots, n_2\}$ write $\bm{e}_\ell \in \mathbb{R}^{n_2}$ for the $\ell$-th standard unit basis vector. Write
\[\mathbb{V}(\bm{x}^T H_{\bm{\beta}} (\bm{e}_\ell) \bm{x})_{\ell = 1, \dots, n_2} = \mathbb{V}(\bm{x}^T H_{\bm{\beta}} (\bm{e}_1) \bm{x}, \dots, \bm{x}^T H_{\bm{\beta}} (\bm{e}_{n_2}) \bm{x}) \subset \mathbb{P}_{\mathbb{Q}}^{n_1-1}\]
for this intersection of pencils, and define
\begin{equation}
\label{eq.defn_s_1}
s^{(1)}_{\mathbb{R}} \coloneqq 1 + \max_{\bm{\beta} \in \mathbb{R}^R \setminus \{ \bm{0} \} } \dim \mathbb{V}(\bm{x}^T H_{\bm{\beta}} (\bm{e}_\ell) \bm{x})_{\ell = 1, \dots, n_2}.
\end{equation}
Further, write $\mathbb{V}(H_{\bm{\beta}}(\bm{y}) \bm{x})$ for the biprojective variety defined by the system of equations
\[ \mathbb{V}(H_{\bm{\beta}}(\bm{y}) \bm{x}) = \mathbb{V}((H_{\bm{\beta}}(\bm{y}) \bm{x})_1, \dots, (H_{\bm{\beta}}(\bm{y}) \bm{x})_{n_1}) \subset \mathbb{P}_{\mathbb{Q}}^{n_1-1} \times \mathbb{P}_{\mathbb{Q}}^{n_2-1} \]
and define
\begin{equation}
\label{eq.defn_s_2}
s_{\mathbb{R}}^{(2)} \coloneqq \left\lfloor \frac{\max_{\bm{\beta} \in \mathbb{R}^R \setminus \{ \bm{0}\}} \dim \mathbb{V}(H_{\bm{\beta}}(\bm{y}) \bm{x})}{2} \right\rfloor +1,
\end{equation}
where $\lfloor x \rfloor$ denotes the largest integer $m$ such that $m \leq x$.
\begin{theorem}
\label{thm.2,1_different_dimensions}
Let $F_1(\bm{x},\bm{y}), \dots, F_R(\bm{x},\bm{y})$ be bihomogeneous forms with integer coefficients of bidegree $(2,1)$ such that the biprojective variety $\mathbb{V}(F_1, \dots, F_R) \subset \mathbb{P}_{\mathbb{Q}}^{n_1-1} \times \mathbb{P}_{\mathbb{Q}}^{n_2-1}$ is a complete intersection. Let $P_1,P_2 > 1$ be real numbers. Write $b = \max\left\{\frac{\log(P_1)}{\log(P_2)}, 1 \right\}$ and $u = \max\left\{\frac{\log(P_2)}{\log(P_1)}, 1 \right\}$. Assume further that
\begin{equation}
\label{eq.assumption_n_i_(2,1)_introduction}
n_1 - s_{\mathbb{R}}^{(1)} > (8b+4u)R \quad \text{and} \quad \frac{n_1+n_2}{2} - s_{\mathbb{R}}^{(2)} > (8b+4u)R
\end{equation}
is satisfied. Then there exists some $\delta > 0$ depending at most on $b$, $u$, $R$, $n_i$ and $\bm{F}$ such that
\begin{equation}
\label{eq.asymptotic_bidegree_2_1}
N(P_1,P_2) = \sigma P_1^{n_1-2R} P_2^{n_2-R} + O(P_1^{n_1-2R} P_2^{n_2-R} \min\{ P_1,P_2\}^{-\delta})
\end{equation}
holds, where $\sigma > 0$ if the system $\bm{F}(\bm{x},\bm{y}) = \bm{0}$ has a smooth $p$-adic zero for all primes $p$, and if the variety $V_0$ has a smooth real zero in $\mathcal{B}_1 \times \mathcal{B}_2$. If we assume that $\mathbb{V}(F_1, \dots, F_R) \subset \mathbb{P}_{\mathbb{Q}}^{n_1-1} \times \mathbb{P}_{\mathbb{Q}}^{n_2-1}$ is smooth, then the same conclusions hold if we assume
\begin{equation}
\label{eq.assumptions_n_i_(2,1)_smooth_case}
n_1 > (16b+8u+1)R, \quad \text{and} \quad n_2 > (8b+4u+1)R
\end{equation}
instead of \eqref{eq.assumption_n_i_(2,1)_introduction}.
\end{theorem}
We remark that we preferred to give conditions in terms of the geometry of the variety regarded as a biprojective variety, as opposed to an affine variety. The reason for this is the potential application of this result to proving Manin's conjecture for this variety, which will be addressed in due course. Compared to the result by Schindler we thus essentially remove the assumption that the number of variables needs to grow at least quadratically in $R$. In particular, if the complete intersection defined by the system is assumed to be smooth, then our result requires fewer variables than Schindler's provided
\[ d_1b+d_2u < \frac{R+1}{2} \]
is satisfied, in the cases $(d_1,d_2) = (1,1)$ or $(2,1)$. In particular, if $R$ is large this means our result provides significantly more flexibility in the choice of $u$ and $b$.

One cannot hope to achieve the asymptotic formula~\eqref{eq.schindler_asymptotic} in general without a condition of the shape $n_i > R(bd_1+ud_2)$. To see this, note that the counting function satisfies
\[ N(P_1,P_2) \gg P_1^{n_1} + P_2^{n_2}, \]
coming from the solutions with $y_1 = \cdots = y_{n_2}= 0$ and with $x_1 = \cdots = x_{n_1}= 0$, respectively. The asymptotic formula~\eqref{eq.schindler_asymptotic} thus implies
\[ P_i^{n_i} \ll P_1^{n_1-d_1R} P_2^{n_2-d_2R}, \]
for $i=1,2$. Noting that $P_1^{u} = P_2$ if $u > 1$ and $P_2^{b} =P_1$ if $b >1$, and comparing the exponents, one necessarily finds $n_i > R(bd_1+ud_2)$. If the forms are diagonal then one can take boxes $\mathcal{B}_i$ which avoid the coordinate axes in order to remedy this obstruction. In fact this is the approach taken by Blomer and Br\"{u}dern~\cite{blomer_bruedern_hyp}, who proved an asymptotic formula for a system of multihomogeneous equations without a restriction on the number of variables of the type described above. If the forms are not diagonal the problem still persists, even if one were to take boxes avoiding the coordinate axes. In general there may be `bad' vectors $\bm{y}$ away from the coordinate axes such that
\[ \#\left\{ \bm{x} \in \mathbb{Z}^{n_1} \colon \bm{F}(\bm{x},\bm{y}) = \bm{0}, \; \lvert \bm{x} \rvert \leq P_1 \right\} \gg P_1^{n_1-a}, \]
where $a < d_1R$, for example. This is in contrast to the diagonal case, where the only vectors $\bm{y}$ for which this occurs lie on at least one coordinate axis. It would be interesting to consider a modified counting function where one excludes such vectors $\bm{y}$, and analogously `bad' vectors $\bm{x}$. In a general setting it seems difficult to control the set of such vectors. In particular, it is not clear how one would deal with the Weyl differencing step if one were to consider such a counting function.
\subsection{Manin's conjecture}
Let $V \subset \mathbb{P}_{\mathbb{Q}}^{n_1-1} \times \mathbb{P}_{\mathbb{Q}}^{n_2-1}$ be a non-singular complete intersection defined by a system of forms $F_i(\bm{x},\bm{y})$, $i = 1, \dots, R$, of common bidegree $(d_1,d_2)$. Assume $n_i > d_iR$, so that $V$ is a Fano variety, which means that the inverse of the canonical bundle in the Picard group, the \emph{anticanonical bundle}, is very ample.
For a field $K$, write $V(K)$ for the set of $K$-rational points of $V$. In the context of Manin's conjecture we define this to be the set of $K$-morphisms
\[ \mathrm{Spec}(K) \rightarrow V_K, \]
where $V_K$ denotes the base change of $V$ to the field $K$. For a subset $U(\mathbb{Q}) \subset V(\mathbb{Q})$ and $P \geq 1$ consider the counting function
\[ N_U(P) = \# \left\{ (\bm{x}, \bm{y}) \in U(\mathbb{Q}) \colon H(\bm{x},\bm{y}) \leq P \right\}, \]
where $H(\cdot, \cdot)$ is the \emph{anticanonical height} induced by the anticanonical bundle and a choice of global sections. In our case one such height may be given explicitly as follows. If $(\bm{x}, \bm{y}) \in U(\mathbb{Q})$ we may pick representatives $\bm{x} \in \mathbb{Z}^{n_1}$ and $\bm{y} \in \mathbb{Z}^{n_2}$ such that $(x_1, \dots, x_{n_1}) = (y_1, \dots, y_{n_2}) = 1$, and we define
\[ H(\bm{x},\bm{y}) = \left( \max_i \lvert x_i \rvert \right)^{n_1-Rd_1} \left( \max_i \lvert y_i \rvert \right)^{n_2-Rd_2}. \]
Manin's conjecture in this context states that, provided $V$ is a Fano variety such that $V(\mathbb{Q}) \subset V$ is Zariski dense, there exists a subset $U(\mathbb{Q}) \subset V(\mathbb{Q})$, where $(V \setminus U)(\mathbb{Q})$ is a \emph{thin} set, such that
\[ N_U(P) \sim c P (\log P)^{\rho-1}, \]
where $\rho$ is the Picard rank of the variety $V$ and $c$ is a constant as predicted and interpreted by Peyre~\cite{peyre_constant}. We briefly recall the definition of a thin set, according to Serre~\cite{serre_thin_sets}. First recall that a set $A \subset V(K)$ is of type
\begin{itemize}
\item[($C_1$)] if $A \subseteq W(K)$, where $W \subsetneq V$ is Zariski closed,
\item[($C_2$)] if $A \subseteq \pi(V'(K))$, where $V'$ is irreducible with $\dim V = \dim V'$ and $\pi \colon V' \rightarrow V$ is a generically finite morphism of degree at least $2$.
\end{itemize}
A subset of the $K$-rational points of $V$ is then called \emph{thin} if it is a finite union of sets of type $(C_1)$ or $(C_2)$. Originally Batyrev--Manin~\cite{batyrev_manin} conjectured that it suffices to assume that $(V \setminus U)$ is Zariski closed, but various counterexamples to this have been found, the first one being due to Batyrev--Tschinkel~\cite{batyrev_tschinkel_counterexample}.

In~\cite{schindler_manin_biprojective} Schindler showed an asymptotic formula of the shape above if $V$ is smooth, $d_1,d_2 \geq 2$ and
\[ n_i > 3 \cdot 2^{d_1+d_2} d_1 d_2 R^3 + R \]
is satisfied for $i=1,2$. If $R=1$ she moreover verified that the constant obtained agrees with the one predicted by Peyre, and thus proved Manin's conjecture for bihomogeneous hypersurfaces when the conditions above are met. The proof uses the asymptotic~\eqref{eq.schindler_asymptotic} established in~\cite{schindler_bihomogeneous} along with uniform counting results on fibres.
That is, for a vector $\bm{y} \in \mathbb{Z}^{n_2}$ one may consider the counting function
\[ N_{\bm{y}}(P) = \# \left\{ \bm{x} \in \mathbb{Z}^{n_1} \colon \bm{F}(\bm{x},\bm{y}) = \bm{0}, \; \lvert \bm{x} \rvert \leq P \right\}, \]
and to understand its asymptotic behaviour uniformly means to understand how the implied constant in the error term depends on $\bm{y}$. Similarly she considered $N_{\bm{x}}(P)$ for `good' $\bm{x}$ and combined the three resulting estimates to obtain an asymptotic formula for the number of solutions $\widetilde{N}(P_1,P_2)$ to the system $\bm{F}(\bm{x},\bm{y}) = \bm{0}$, where $\lvert \bm{x} \rvert \leq P_1$, $\lvert \bm{y} \rvert \leq P_2$ and $\bm{x},\bm{y}$ are `good'. Considering only `good' tuples essentially removes a closed subset from $V$, and thus, after an application of a slight modification of the hyperbola method developed in~\cite{blomer_bruedern_hyp}, she obtained an asymptotic formula for $N_U(P)$ of the desired shape.

In forthcoming work the result established in Theorem~\ref{thm.2,1_different_dimensions} will be used to verify Manin's conjecture for $V$ when $(d_1,d_2) = (2,1)$ in fewer variables than would be expected using Schindler's method as described above. Further, since the Picard rank of $V$ is strictly greater than $1$, it would be interesting to consider the \emph{all heights approach} as suggested by Peyre~\cite[Question V.4.8]{peyre_book_all_heights}. As noted by Peyre himself, in the case when a variety has Picard rank $1$, the answer to his Question 4.8 follows provided one can prove Manin's conjecture with respect to the height function induced by the anticanonical bundle.

Schindler's results have been improved upon in a few special cases. Browning and Hu showed Manin's conjecture in the case of smooth biquadratic hypersurfaces in $\mathbb{P}^{n-1}_{\mathbb{Q}} \times \mathbb{P}^{n-1}_{\mathbb{Q}}$ if the number of variables satisfies $n>35$. If the bidegree is $(2,1)$ then Hu showed that $n>25$ suffices in order to obtain Manin's conjecture. Varieties defined by systems of bilinear forms are flag varieties, and thus Manin's conjecture for them follows from the result for flag varieties, which was proven by Franke, Manin and Tschinkel~\cite{jens_manin_tschinkel_manin_flag} using the theory of Eisenstein series. In the special case when the variety is defined by $\sum_{i=0}^s x_iy_i = 0$, Robbiani~\cite{robbiani_bilinear} showed how one may use the circle method to establish Manin's conjecture if $s \geq 3$, which was later improved to $s \geq 2$ by Spencer~\cite{spencer_manin_bilinear}.
\subsection*{Conventions}
The symbol $\varepsilon >0$ denotes an arbitrarily small value, which we may redefine whenever convenient, as is usual in analytic number theory. Given forms $g_\ell$, $\ell = 1, \dots, k$, we write $\mathbb{V}(g_\ell)_{\ell = 1, \dots, k}$, or sometimes just $\mathbb{V}(g_\ell)_\ell$, for the intersection $\mathbb{V}(g_1, \dots, g_k)$.
Further, we may sometimes consider a vector of forms $\bm{h} = (h_1, \dots, h_k)$, and we similarly write $\mathbb{V}(\bm{h})$ for the intersection $\mathbb{V}(h_1, \dots, h_k)$. For a real number $x \in \mathbb{R}$ we will write $e(x) = e^{2 \pi i x}$. We will use Vinogradov's notation $O(\cdot)$ and $\ll$. We shall repeatedly use the convention that the dimension of the empty set is $-1$.
\section{Multilinear forms}
Both Theorem~\ref{thm.bilinear} and Theorem~\ref{thm.2,1_different_dimensions} follow from a more general result. If we have control over the number of `small' solutions to the associated linearised forms, then we can show that the asymptotic~\eqref{eq.schindler_asymptotic} holds. More explicitly, given a bihomogeneous form $F(\bm{x},\bm{y})$ with integer coefficients of bidegree $(d_1,d_2)$ for positive integers $d_1,d_2$, we may write it as
\begin{equation*}
F(\bm{x},\bm{y}) = \sum_{\bm{j}} \sum_{\bm{k}} F_{\bm{j},\bm{k}} x_{j_1} \cdots x_{j_{d_1}} y_{k_1} \cdots y_{k_{d_2}},
\end{equation*}
where the coefficients $F_{\bm{j},\bm{k}} \in \mathbb{Q}$ are symmetric in $\bm{j}$ and $\bm{k}$. We define the associated multilinear form
\begin{equation*}
\Gamma_{F}(\widetilde{\bm{x}}, \widetilde{\bm{y}}) \coloneqq d_1! d_2! \sum_{\bm{j}} \sum_{\bm{k}} F_{\bm{j},\bm{k}} x^{(1)}_{j_1} \cdots x^{(d_1)}_{j_{d_1}} y^{(1)}_{k_1} \cdots y^{(d_2)}_{k_{d_2}},
\end{equation*}
where $\widetilde{\bm{x}} = (\bm{x}^{(1)}, \dots, \bm{x}^{(d_1)})$ and $\widetilde{\bm{y}} = (\bm{y}^{(1)}, \dots, \bm{y}^{(d_2)})$ for vectors $\bm{x}^{(i)}$ of $n_1$ variables and vectors $\bm{y}^{(i)}$ of $n_2$ variables. Write further $\widehat{\bm{x}} = (\bm{x}^{(1)}, \dots, \bm{x}^{(d_1-1)})$ and $\widehat{\bm{y}} = (\bm{y}^{(1)}, \dots, \bm{y}^{(d_2-1)})$.
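For orientation, it may help to record what $\Gamma_F$ becomes in the two bidegrees studied in this paper; both identities follow directly from the definition together with the matrix notation of the introduction. If $F(\bm{x},\bm{y}) = \bm{y}^T A \bm{x}$ is bilinear, then
\begin{equation*}
\Gamma_F(\bm{x}^{(1)}, \bm{y}^{(1)}) = (\bm{y}^{(1)})^T A \bm{x}^{(1)} = F(\bm{x}^{(1)}, \bm{y}^{(1)}),
\end{equation*}
while if $F(\bm{x},\bm{y}) = \bm{x}^T H(\bm{y}) \bm{x}$ has bidegree $(2,1)$ with $H(\bm{y})$ symmetric, then
\begin{equation*}
\Gamma_F(\bm{x}^{(1)}, \bm{x}^{(2)}, \bm{y}^{(1)}) = 2\, (\bm{x}^{(1)})^T H(\bm{y}^{(1)}) \bm{x}^{(2)},
\end{equation*}
so that in particular $\Gamma_F(\bm{x}^{(1)}, \bm{e}_\ell, \bm{y}^{(1)}) = 2 \bigl(H(\bm{y}^{(1)}) \bm{x}^{(1)}\bigr)_\ell$ for a standard basis vector $\bm{e}_\ell \in \mathbb{R}^{n_1}$.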
Given $\bm{\beta} \in \mathbb{R}^R$ we define the auxiliary counting function $N_1^{\mathrm{aux}}(\bm{\beta}; B)$ to be the number of integer vectors $\widehat{\bm{x}} \in (-B,B)^{(d_1-1)n_1}$ and $\widetilde{\bm{y}} \in (-B,B)^{d_2n_2}$ such that
\begin{equation*}
\bigl\lVert \Gamma_{\bm{\beta} \cdot \bm{F}}(\widehat{\bm{x}}, \bm{e}_\ell, \widetilde{\bm{y}}) \bigr\rVert < \lVert \bm{\beta} \cdot \bm{F} \rVert_\infty B^{d_1+d_2-2},
\end{equation*}
for $\ell = 1, \dots, n_1$, where $\lVert \bm{\beta} \cdot \bm{F} \rVert_\infty \coloneqq \frac{1}{d_1!d_2!} \max_{\bm{j}, \bm{k}} \left\lvert \frac{\partial^{d_1+d_2}(\bm{\beta} \cdot \bm{F})}{\partial x_{j_1} \cdots \partial x_{j_{d_1}} \partial y_{k_1} \cdots \partial y_{k_{d_2}} } \right\rvert$. We define $N_2^{\mathrm{aux}}(\bm{\beta};B)$ analogously. The technical core of this paper is the following theorem.
\begin{theorem}
\label{thm.n_aux_imply_result}
Assume $n_1,n_2 > (d_1+d_2)R$ and let $\bm{F}(\bm{x},\bm{y}) = (F_1(\bm{x},\bm{y}), \dots, F_R(\bm{x},\bm{y}))$ be a system of bihomogeneous forms with integer coefficients of common bidegree $(d_1,d_2)$ such that the variety $\mathbb{V}(\bm{F}) \subset \mathbb{P}_{\mathbb{Q}}^{n_1-1} \times \mathbb{P}_{\mathbb{Q}}^{n_2-1}$ is a complete intersection. Let $P_1,P_2 > 1$ and write $b = \max \left\{\log(P_1) /\log (P_2),1 \right\}$ and $u = \max \left\{\log(P_2) /\log (P_1),1 \right\}$. Assume there exist $C_0 \geq 1$ and $\mathscr{C} > (bd_1+ud_2)R$ such that for all $\bm{\beta} \in \mathbb{R}^R \setminus \{ \bm{0}\}$ and all $B>0$ we have
\begin{equation}
\label{eq.N_aux_general_condition}
N_i^{\mathrm{aux}}(\bm{\beta};B) \leq C_0 B^{d_1n_1 + d_2n_2 - n_i - 2^{d_1+d_2-1}\mathscr{C}}
\end{equation}
for $i=1,2$. Then there exists some $\delta > 0$ depending on $b$, $u$, $C_0$, $R$, $d_i$ and $n_i$ such that
\begin{equation*}
N(P_1,P_2) = \sigma P_1^{n_1-d_1R}P_2^{n_2-d_2R} + O \left(P_1^{n_1-d_1R}P_2^{n_2-d_2R} \min\{P_1,P_2 \}^{-\delta} \right).
\end{equation*}
The factor $\sigma = \mathfrak{I} \mathfrak{S}$ is the product of the singular integral $\mathfrak{I}$ and the singular series $\mathfrak{S}$, as defined in~\eqref{eq.def_singular_integral} and~\eqref{eq.def_singular_series}, respectively.
Moreover, if the system $\bm{F}(\bm{x},\bm{y}) = \bm{0}$ has a non-singular real zero in $\mathcal{B}_1 \times \mathcal{B}_2$ and a non-singular $p$-adic zero for every prime $p$, then $\sigma >0$.
\end{theorem}
While showing that~\eqref{eq.N_aux_general_condition} holds is rather straightforward when the bidegree is $(1,1)$, it becomes significantly more difficult as the bidegree increases. In fact, in Rydin Myerson's work a similar upper bound for a similar auxiliary counting function needs to be shown. He is successful in doing so when the degree is $2$ or $3$ and the system defines a complete intersection, but for higher degrees he was only able to show this upper bound for generic systems.

Our strategy is as follows. We will establish Theorem~\ref{thm.n_aux_imply_result} in Section~\ref{sec.weyl_differencing_etc} and Section~\ref{sec.circle_method}, and then use this to show Theorem~\ref{thm.bilinear} and Theorem~\ref{thm.2,1_different_dimensions} in Section~\ref{sec.bilinear_proof} and in Section~\ref{sec.proof.2-1}, respectively.
\section{Geometric preliminaries}
The following lemma is taken from \cite{schindler_manin_biprojective}.
\begin{lemma}[Lemma 2.2 in \cite{schindler_manin_biprojective}]
\label{lem.geometry_intersections}
Let $W$ be a smooth variety that is complete over some algebraically closed field, and consider a closed irreducible subvariety $Z \subseteq W$ such that $\dim Z \geq 1$. Given an effective divisor $D$ on $W$, the dimension of every irreducible component of $D \cap Z$ is at least $\dim Z-1$. If $D$ is moreover ample, we have in addition that $D \cap Z$ is nonempty.
\end{lemma}
In particular the following corollary will be very useful.
\begin{corollary}
\label{cor.geometry_intersections}
Let $V \subseteq \mathbb{P}^{n_1-1}_{\mathbb{C}} \times \mathbb{P}^{n_2-1}_{\mathbb{C}}$ be a closed variety such that $\dim V \geq 1$. Consider $H = \mathbb{V}(f)$, where $f(\bm{x}, \bm{y})$ is a polynomial of bidegree at least $(1,1)$ in the variables $(\bm{x}, \bm{y}) = (x_1, \dots, x_{n_1}, y_1, \dots, y_{n_2})$. Then
\begin{equation*}
\dim (V \cap H) \geq \dim V - 1;
\end{equation*}
in particular $V \cap H$ is non-empty.
\end{corollary}
\begin{proof}
Since the bidegree of $f$ is at least $(1,1)$, $H$ defines an effective and ample divisor on $\mathbb{P}^{n_1-1}_{\mathbb{C}} \times \mathbb{P}^{n_2-1}_{\mathbb{C}}$. We apply Lemma \ref{lem.geometry_intersections} with $W = \mathbb{P}^{n_1-1}_{\mathbb{C}} \times \mathbb{P}^{n_2-1}_{\mathbb{C}}$, $D = H$ and $Z$ any irreducible component of $V$.
\end{proof}
\begin{lemma}
\label{lem.sing_bf_small}
Let $\bm{F}(\bm{x},\bm{y})$ be a system of $R$ bihomogeneous equations of the same bidegree $(d_1,d_2)$ with $d_1, d_2 \geq 1$. Assume that $\mathbb{V}(\bm{F}) \subset \mathbb{P}_{\mathbb{C}}^{n_1-1} \times \mathbb{P}_{\mathbb{C}}^{n_2-1}$ is a smooth complete intersection.
Given $\bm{\beta} \in \mathbb{R}^R \setminus \{ \bm{0}\}$ we have
\begin{equation*}
\dim \mathrm{Sing} \, \mathbb{V}(\bm{\beta} \cdot \bm{F}) \leq R-2,
\end{equation*}
where we write $\bm{\beta} \cdot \bm{F} = \sum_i \beta_i F_i$.
\end{lemma}
\begin{proof}
The singular locus of $\mathbb{V}(\bm{\beta} \cdot \bm{F})$ is given by
\begin{equation*}
\mathrm{Sing} \, \mathbb{V}(\bm{\beta} \cdot \bm{F}) = \mathbb{V}\left(\frac{\partial (\bm{\beta} \cdot \bm{F})}{\partial x_j} \right)_{j = 1, \dots, n_1} \cap \mathbb{V}\left(\frac{\partial (\bm{\beta} \cdot \bm{F})}{\partial y_j} \right)_{j = 1, \dots, n_2}.
\end{equation*}
Assume without loss of generality $\beta_R \neq 0$, so that $\mathbb{V}(\bm{F}) = \mathbb{V}(F_1, \dots, F_{R-1}, \bm{\beta} \cdot \bm{F})$. We claim that we have the following inclusion
\begin{equation}
\label{eq.singular_loci_containment}
\mathbb{V}(F_1, \dots, F_{R-1}) \cap \mathrm{Sing} \, \mathbb{V}(\bm{\beta} \cdot \bm{F}) \subseteq \mathrm{Sing} \, \mathbb{V}(\bm{F}).
\end{equation}
To see this, note first that $\mathbb{V}(F_1, \dots, F_{R-1}) \cap \mathrm{Sing} \, \mathbb{V}(\bm{\beta} \cdot \bm{F}) \subseteq \mathbb{V}(\bm{F})$. Further, the Jacobian matrix $J(\bm{F})$ of $\bm{F}$ is given by
\begin{equation*}
J(\bm{F}) = \left( \frac{\partial F_i}{\partial z_j} \right)_{ij},
\end{equation*}
where $i = 1, \dots, R$ and $z_j$ ranges through $x_1, \dots, x_{n_1}, y_1, \dots, y_{n_2}$. Now if the equations
\begin{equation*}
\frac{\partial (\bm{\beta} \cdot \bm{F})}{\partial x_j} = \frac{\partial (\bm{\beta} \cdot \bm{F})}{\partial y_j} = 0
\end{equation*}
are satisfied, then this implies that the rows of $J(\bm{F})$ are linearly dependent. Since $\mathbb{V}(\bm{F})$ is a complete intersection, we deduce the claim.
Assume now for a contradiction that $\dim \mathrm{Sing} \, \mathbb{V}(\bm{\beta} \cdot \bm{F}) \geq R-1$ holds. Applying Corollary~\ref{cor.geometry_intersections} $(R-1)$ times with $V = \mathrm{Sing} \, \mathbb{V} (\bm{\beta} \cdot \bm{F})$, noting that the bidegree of $F_i$ is at least $(1,1)$, we find
\begin{equation*}
\mathbb{V}(F_1, \dots, F_{R-1}) \cap \mathrm{Sing} \, \mathbb{V}(\bm{\beta} \cdot \bm{F}) \neq \emptyset.
\end{equation*}
This contradicts~\eqref{eq.singular_loci_containment}, since $\mathrm{Sing} \, \mathbb{V}(\bm{F}) =\emptyset$ by assumption.
\end{proof}
\begin{lemma}
\label{lem.dimensions_varieties_not_too_big}
Let $n_1 \leq n_2$ be two positive integers. For $i = 1, \dots, n_2$ let $A_i \in \mathrm{M}_{n_1 \times n_1}(\mathbb{C})$ be symmetric matrices. Consider the varieties $V_1 \subset \mathbb{P}_\mathbb{C}^{n_1-1}$ and $V_2 \subset \mathbb{P}_\mathbb{C}^{n_1-1} \times \mathbb{P}_\mathbb{C}^{n_2-1}$ defined by
\begin{align*}
V_1 &= \mathbb{V}(\bm{t}^T A_i \bm{t})_{i = 1, \dots, n_2}, \\
V_2 &= \mathbb{V}\left(\sum_{i=1}^{n_2}y_i A_i \bm{x}\right).
\end{align*}
Then we have
\begin{equation*}
\label{eq:dim_varieties_not_too_big}
\dim V_2 \leq \dim V_1 + n_2-1.
\end{equation*}
In particular, if $V_1 = \emptyset$ then $\dim V_2 \leq n_2-2$.
\end{lemma}
\begin{proof}
Consider the variety $V_3 \subset \mathbb{P}_\mathbb{C}^{n_1-1} \times \mathbb{P}_\mathbb{C}^{n_1-1}$ defined by
\[ V_3 = \mathbb{V}(\bm{z}^T A_i \bm{x})_{i = 1, \dots, n_2}. \]
Further, for $\bm{x} = (x_1, \dots, x_{n_1})^T$ consider
\[ A (\bm{x}) = (A_1 \bm{x} \cdots A_{n_2} \bm{x}) \in \mathrm{M}_{n_1 \times n_2}(\mathbb{C})[x_1, \dots, x_{n_1}]. \]
We may write $V_2 = \mathbb{V}(A(\bm{x})\bm{y})$ and $V_3 = \mathbb{V}(\bm{z}^T A(\bm{x}))$. Our first goal is to relate the dimensions of the varieties above as follows:
\begin{equation}
\label{eq.dim_v_2_v_3}
\dim V_2 \leq \dim V_3 +n_2-n_1.
\end{equation}
For $r = 0, \dots, n_1$ define the quasi-projective varieties $D_r \subset \mathbb{P}_\mathbb{C}^{n_1-1}$ given by
\[ D_r = \{ \bm{x} \in \mathbb{P}^{n_1-1}_\mathbb{C} \colon \mathrm{rank}(A (\bm{x})) = r \}. \]
These are quasi-projective since they may be written as the intersection of the vanishing of all $(r+1) \times (r+1)$ minors of $A(\bm{x})$ with the complement of the vanishing of all $r \times r$ minors.
For each $r$ let
\[ D_r = \bigcup_{i \in I_r} D_r^{(i)} \]
be a decomposition into finitely many irreducible components. Since $\bigcup_r D_r = \mathbb{P}_\mathbb{C}^{n_1-1}$ we have
\begin{equation*}
\label{eq.dim_v_2_over_union}
\dim V_2 = \max_{\substack{0 \leq r < n_2 \\ i \in I_r}} \dim ((D_r^{(i)} \times \mathbb{P}_\mathbb{C}^{n_2-1}) \cap V_2).
\end{equation*}
Note that $r = n_2$ does not play a role here, since the intersection $(D_{n_2}^{(i)} \times \mathbb{P}_\mathbb{C}^{n_2-1}) \cap V_2$ is empty. Similarly we get
\begin{equation*}
\label{eq.dim_v_1_over_union}
\dim V_3 = \max_{\substack{0 \leq r < n_2 \\ i \in I_r}} \dim ((D_r^{(i)} \times \mathbb{P}_\mathbb{C}^{n_1-1}) \cap V_3).
\end{equation*}
For $0 \leq r < n_2$ and $i \in I_r$ consider now the surjective projection maps
\[ \pi_{2,r,i} \colon (D_r^{(i)} \times \mathbb{P}_\mathbb{C}^{n_2-1}) \cap V_2 \rightarrow D_r^{(i)}, \; (\bm{x}, \bm{y}) \mapsto \bm{x}, \]
and
\[ \pi_{3,r,i} \colon (D_r^{(i)} \times \mathbb{P}_\mathbb{C}^{n_1-1}) \cap V_3 \rightarrow D_r^{(i)}, \; (\bm{x}, \bm{z}) \mapsto \bm{x}. \]
We note that, by the way the $D_r^{(i)}$ were constructed, the fibres of both of these projection morphisms have constant dimension for fixed $r$. By the rank-nullity theorem we find that the dimensions of the fibres are related as follows:
\begin{equation}
\label{eq.dim_fibres_rank_null}
\dim \pi_{2,r,i}^{-1}(\bm{x}) = \dim \pi_{3,r,i}^{-1}(\bm{x}) +n_2-n_1.
\end{equation}
We claim that the morphism $\pi_{2,r,i}$ is proper. For this note that the structure morphism $\mathbb{P}^{n_2-1}_\mathbb{C} \rightarrow \mathrm{Spec} \, \mathbb{C}$ is proper, whence $D_r^{(i)} \times \mathbb{P}_\mathbb{C}^{n_2-1} \rightarrow D_r^{(i)}$ must be proper too, as properness is preserved under base change. As $(D_r^{(i)} \times \mathbb{P}_\mathbb{C}^{n_2-1}) \cap V_2$ is closed inside $D_r^{(i)} \times \mathbb{P}_\mathbb{C}^{n_2-1}$, the restriction $\pi_{2,r,i}$ must also be proper. By an analogous argument it follows that $\pi_{3,r,i}$ is also proper. Further note that the fibres of $\pi_{2,r,i}$ are irreducible since they define linear subspaces of $(D_r^{(i)} \times \mathbb{P}_\mathbb{C}^{n_2-1}) \cap V_2$, and similarly the fibres of $\pi_{3,r,i}$ are irreducible. Since $D_r^{(i)}$ is irreducible by construction and all the fibres have constant dimension, it follows that $(D_r^{(i)} \times \mathbb{P}_\mathbb{C}^{n_2-1}) \cap V_2$ is irreducible. Similarly $(D_r^{(i)} \times \mathbb{P}_\mathbb{C}^{n_1-1}) \cap V_3$ is irreducible.
Hence all the conditions of Chevalley's upper semicontinuity theorem are satisfied~\cite[Th\'eor\`eme 13.1.3]{EGA4}, so that for any $\bm{x} \in D_r^{(i)}$ we obtain
\begin{equation}
\label{eq.dim_of_fibres}
\dim \pi_{2,r,i}^{-1}(\bm{x}) = \dim ((D_r^{(i)} \times \mathbb{P}_\mathbb{C}^{n_2-1}) \cap V_2) - \dim D_r^{(i)},
\end{equation}
and
\begin{equation}
\label{eq.dim_of_fibres__V3}
\dim \pi_{3,r,i}^{-1}(\bm{x}) = \dim ((D_r^{(i)} \times \mathbb{P}_\mathbb{C}^{n_1-1}) \cap V_3) - \dim D_r^{(i)}.
\end{equation}
Hence~\eqref{eq.dim_of_fibres} and~\eqref{eq.dim_of_fibres__V3} together with~\eqref{eq.dim_fibres_rank_null} yield
\[ \dim ((D_r^{(i)} \times \mathbb{P}_\mathbb{C}^{n_2-1}) \cap V_2) = \dim ((D_r^{(i)} \times \mathbb{P}_\mathbb{C}^{n_1-1}) \cap V_3) + n_2-n_1. \]
Choosing $r$ and $i$ such that $\dim V_2 = \dim ((D_r^{(i)} \times \mathbb{P}_\mathbb{C}^{n_2-1}) \cap V_2)$, the claim \eqref{eq.dim_v_2_v_3} now follows.

Thus it is enough to find an upper bound for $\dim V_3$. To this end, consider the affine cones $\widetilde{V_1} = \mathbb{V}(\bm{u}^T A_i \bm{u})_{i = 1, \dots, n_2} \subset \mathbb{A}_{\mathbb{C}}^{n_1}$ and $\widetilde{V_3} = \mathbb{V}(\bm{x}^T A(\bm{z})) \subset \mathbb{A}_{\mathbb{C}}^{n_1} \times \mathbb{A}_{\mathbb{C}}^{n_1}$. Note in particular that $\widetilde{V}_1 \neq \emptyset$ even if $V_1 = \emptyset$. Write $\widetilde{\Delta} \subset \mathbb{A}_{\mathbb{C}}^{n_1} \times \mathbb{A}_{\mathbb{C}}^{n_1}$ for the diagonal given by $\mathbb{V}(x_i - z_i)_i$. Then $\widetilde{V_3} \cap \widetilde{\Delta} \cong \widetilde{V_1} \neq \emptyset$. Thus, the affine dimension theorem~\cite[Proposition 7.1]{hartshorne2013algebraic} yields
\[ \dim \widetilde{V_1} \geq \dim \widetilde{V_3} -n_1. \]
Noting $\dim V_1 +1 \geq \dim \widetilde{V_1}$ and $\dim \widetilde{V_3} \geq \dim V_3 +2$ now gives the desired result. We remind the reader at this point that this is compatible with the convention $\dim \emptyset = -1$.
\end{proof}
\section{The auxiliary inequality}
\label{sec.weyl_differencing_etc}
We remind the reader of the notation $e(x) = e^{2 \pi i x}$. For $\bm{\alpha} \in [0,1]^R$ define
\begin{equation*}
S(\bm{\alpha},P_1,P_2) = S(\bm{\alpha}) \coloneqq \sum_{\bm{x} \in P_1 \mathcal{B}_1} \sum_{\bm{y} \in P_2 \mathcal{B}_2} e \left( \bm{\alpha} \cdot \bm{F}\left(\bm{x},\bm{y} \right) \right),
\end{equation*}
where the sum ranges over $\bm{x} \in \mathbb{Z}^{n_1}$ such that $\bm{x}/P_1 \in \mathcal{B}_1$ and similarly for $\bm{y}$. Throughout this section we will assume $P_1 \geq P_2$.
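We will also use the standard orthogonality relation underlying the circle method: for an integer $m$,
\begin{equation*}
\int_0^1 e(\alpha m) \, d\alpha =
\begin{cases}
1 & \text{if } m = 0, \\
0 & \text{otherwise}.
\end{cases}
\end{equation*}
Applying this in each of the $R$ coordinates of $\bm{\alpha}$ gives the integral representation recorded next.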
Note crucially that we have
\begin{equation*}
N(P_1,P_2) = \int_{[0,1]^R} S(\bm{\alpha}) \, d \bm{\alpha}.
\end{equation*}
As noted in the introduction we can rewrite the forms as
\begin{equation*}
F_i(\bm{x},\bm{y}) = \sum_{\bm{j}} \sum_{\bm{k}} F^{(i)}_{\bm{j},\bm{k}} x_{j_1} \cdots x_{j_{d_1}} y_{k_1} \cdots y_{k_{d_2}},
\end{equation*}
and given $\bm{\alpha} \in \mathbb{R}^R$, as in \cite{schindler_bihomogeneous}, we consider the multilinear forms
\begin{equation*}
\Gamma_{\bm{\alpha} \cdot \bm{F}}(\widetilde{\bm{x}}, \widetilde{\bm{y}}) \coloneqq d_1! d_2! \sum_{i} \alpha_i \sum_{\bm{j}} \sum_{\bm{k}} F^{(i)}_{\bm{j},\bm{k}} x^{(1)}_{j_1} \cdots x^{(d_1)}_{j_{d_1}} y^{(1)}_{k_1} \cdots y^{(d_2)}_{k_{d_2}}.
\end{equation*}
Further we write $\widehat{\bm{x}} = (\bm{x}^{(1)}, \dots, \bm{x}^{(d_1-1)})$ and similarly for $\widehat{\bm{y}}$. For any real number $\lambda$ we write $\lVert \lambda \rVert = \min_{k \in \mathbb{Z}} \lvert \lambda-k \rvert$ for the distance of $\lambda$ to the nearest integer. We now define $M_1(\bm{\alpha} \cdot \bm{F}; P_1,P_2,P^{-1})$ to be the number of integral $\widehat{\bm{x}} \in (-P_1,P_1)^{(d_1-1)n_1}$ and $\widetilde{\bm{y}} \in (-P_2,P_2)^{d_2n_2}$ such that for all $\ell = 1, \dots, n_1$ we have
\begin{equation*}
\lVert \Gamma_{\bm{\alpha} \cdot \bm{F}}(\widehat{\bm{x}}, \bm{e}_\ell, \widetilde{\bm{y}}) \rVert < P^{-1}.
\end{equation*}
Similarly, we define $M_2(\bm{\alpha} \cdot \bm{F}; P_1,P_2,P^{-1})$ to be the number of integral $\widetilde{\bm{x}} \in (-P_1,P_1)^{d_1n_1}$ and $\widehat{\bm{y}} \in (-P_2,P_2)^{(d_2-1)n_2}$ such that for all $\ell = 1, \dots, n_2$ we have
\begin{equation*}
\lVert \Gamma_{\bm{\alpha} \cdot \bm{F}}(\widetilde{\bm{x}}, \widehat{\bm{y}}, \bm{e}_\ell) \rVert < P^{-1}.
\end{equation*}
For our purposes we will need a slight generalization of Lemma 2.1 in \cite{schindler_bihomogeneous} that deals with a polynomial $G(\bm{x},\bm{y})$, which is not necessarily bihomogeneous.
If $G(\bm{x},\bm{y})$ has bidegree $(d_1,d_2)$, write
\begin{equation*}
G(\bm{x},\bm{y}) = \sum_{\substack{0 \leq r \leq d_1 \\ 0 \leq l \leq d_2}} G^{(r,l)}(\bm{x},\bm{y}),
\end{equation*}
where $G^{(r,l)}(\bm{x},\bm{y})$ is bihomogeneous of bidegree $(r,l)$. Using the notation above, we first show the following preliminary lemma, which is a version of Weyl's inequality for our context. From now on we will often use the notation $\tilde{d} = d_1+d_2-2$.
\begin{lemma}
\label{lem.weyl_differencing_general_poly}
Let $\varepsilon > 0$. Let $G(\bm{x},\bm{y}) \in \mathbb{R}[x_1, \dots, x_{n_1},y_1, \dots, y_{n_2}]$ be a polynomial of bidegree $(d_1,d_2)$ with $d_1,d_2 \geq 1$. For the exponential sum
\begin{equation*}
S_G(P_1,P_2) = \sum_{\bm{x} \in P_1 \mathcal{B}_1} \sum_{\bm{y} \in P_2 \mathcal{B}_2} e\left( G(\bm{x},\bm{y})\right)
\end{equation*}
we have the following bound
\begin{equation*}
\lvert S_G(P_1,P_2) \rvert^{2^{\tilde{d}}} \ll P_1^{n_1(2^{\tilde{d}}-d_1+1) + \varepsilon} P_2^{n_2(2^{\tilde{d}}-d_2)} M_1 \left( G^{(d_1,d_2)}, P_1, P_2, P_1^{-1} \right).
\end{equation*}
\end{lemma}
\begin{proof}
The proof is quite involved but follows closely the proof of Lemma 2.1 in \cite{schindler_bihomogeneous}, which in turn is based on ideas of Schmidt \cite[Section 11]{schmidt85} and Davenport \cite[Section 3]{davenport_32_variables}. Our first goal is to apply a Weyl differencing process $(d_2-1)$ times to the $\bm{y}$ part of $G$ and then $(d_1-1)$ times to the $\bm{x}$ part of the resulting polynomial. Clearly this is trivial if $d_2=1$ or $d_1=1$, respectively. Therefore assume for now that $d_2 \geq 2$. We start by applying the Cauchy-Schwarz inequality and the triangle inequality to find
\begin{equation}
\label{eq.S_G_cauchy_schwarz}
\lvert S_G(P_1,P_2) \rvert^{2^{d_2-1}} \ll P_1^{n_1(2^{d_2-1}-1)} \sum_{\bm{x} \in P_1 \mathcal{B}_1} \lvert S_{\bm{x}}(P_1,P_2) \rvert^{2^{d_2-1}},
\end{equation}
where we define
\begin{equation*}
S_{\bm{x}}(P_1,P_2) = \sum_{\bm{y} \in P_2 \mathcal{B}_2} e(G(\bm{x},\bm{y})).
\end{equation*}
Now write $\mathcal{U} = P_2 \mathcal{B}_2$, write $\mathcal{U}^D = \mathcal{U}-\mathcal{U}$ for the difference set, and define
\begin{equation*}
\mathcal{U}(\bm{y}^{(1)}, \dots, \bm{y}^{(t)}) = \bigcap_{\varepsilon_1 = 0,1} \cdots \bigcap_{\varepsilon_t = 0,1}\left( \mathcal{U}- \varepsilon_1 \bm{y}^{(1)} - \dots - \varepsilon_t \bm{y}^{(t)} \right).
\end{equation*}
Write $\mathcal{F}(\bm{y}) = G(\bm{x},\bm{y})$ and set
\begin{equation*}
\mathcal{F}_d(\bm{y}^{(1)}, \hdots, \bm{y}^{(d)}) = \sum_{\varepsilon_1=0,1} \cdots \sum_{\varepsilon_d = 0,1} (-1)^{\varepsilon_1 + \hdots + \varepsilon_d} \mathcal{F}(\varepsilon_1 \bm{y}^{(1)} + \hdots + \varepsilon_d \bm{y}^{(d)}).
\end{equation*}
Equation (11.2) in \cite{schmidt85} applied to our situation gives
\begin{multline*}
\norm{S_{\bm{x}}(P_1,P_2)}^{2^{d_2-1}} \ll \norm{\mathcal{U}^D}^{2^{d_2-1}-d_2} \sum_{\bm{y}^{(1)} \in \mathcal{U}^D} \cdots \\
\sum_{\bm{y}^{(d_2-2)} \in \mathcal{U}^D} \norm{\sum_{\bm{y}^{(d_2-1)} \in \mathcal{U}(\bm{y}^{(1)}, \hdots, \bm{y}^{(d_2-2)})} e \left( \mathcal{F}_{d_2-1} \left(\bm{y}^{(1)}, \hdots, \bm{y}^{(d_2-1)}\right) \right) }^2,
\end{multline*}
and we note that this did not require $\mathcal{F}(\bm{y})$ to be homogeneous in Schmidt's work. It is not hard to see that for $\bm{z}, \bm{z}' \in \mathcal{U}(\bm{y}^{(1)}, \hdots, \bm{y}^{(d_2-2)})$ we have
\begin{multline*}
\mathcal{F}_{d_2-1} (\bm{y}^{(1)}, \cdots, \bm{z}) - \mathcal{F}_{d_2-1} (\bm{y}^{(1)}, \cdots, \bm{z}') = \\
\mathcal{F}_{d_2} (\bm{y}^{(1)}, \cdots, \bm{y}^{(d_2-1)}, \bm{y}^{(d_2)} ) - \mathcal{F}_{d_2-1} (\bm{y}^{(1)}, \cdots, \bm{y}^{(d_2-1)}),
\end{multline*}
for some $\bm{y}^{(d_2-1)} \in \mathcal{U}(\bm{y}^{(1)}, \hdots, \bm{y}^{(d_2-2)})^D$ and $\bm{y}^{(d_2)} \in \mathcal{U}(\bm{y}^{(1)}, \hdots, \bm{y}^{(d_2-1)})$.
Thus we find
\begin{multline}
\label{eq.S_x_bound_long}
\norm{S_{\bm{x}}(P_1,P_2)}^{2^{d_2-1}} \ll \norm{\mathcal{U}^D}^{2^{d_2-1}-d_2} \sum_{\bm{y}^{(1)} \in \mathcal{U}^D} \cdots \sum_{\bm{y}^{(d_2-2)} \in \mathcal{U}^D} \sum_{\bm{y}^{(d_2-1)} \in \mathcal{U}(\bm{y}^{(1)}, \hdots, \bm{y}^{(d_2-2)})^D} \\
\sum_{\bm{y}^{(d_2)} \in \mathcal{U}(\bm{y}^{(1)}, \hdots, \bm{y}^{(d_2-1)}) } e \left( \mathcal{F}_{d_2} \left(\bm{y}^{(1)}, \hdots, \bm{y}^{(d_2)} \right) - \mathcal{F}_{d_2-1} \left(\bm{y}^{(1)}, \hdots, \bm{y}^{(d_2-1)} \right)\right).
\end{multline}
We may write the polynomial $G(\bm{x},\bm{y})$ as
\begin{equation*}
G(\bm{x}, \bm{y}) = \sum_{\substack{0 \leq r \leq d_1 \\ 0 \leq l \leq d_2}} \sum_{\bm{j}_r, \bm{k}_l} G_{\bm{j}_r,\bm{k}_l}^{(r,l)} \bm{x}_{\bm{j}_r} \bm{y}_{\bm{k}_l},
\end{equation*}
for some real coefficients $G_{\bm{j}_r,\bm{k}_l}^{(r,l)}$. Further write $\mathcal{F}(\bm{y}) = \mathcal{F}^{(0)}(\bm{y}) + \hdots + \mathcal{F}^{(d_2)}(\bm{y})$, where $\mathcal{F}^{(d)}(\bm{y})$ denotes the degree $d$ homogeneous part of $\mathcal{F}(\bm{y})$. By Lemma 11.4 (A) in \cite{schmidt85}, $\mathcal{F}_{d_2}$ is precisely the multilinear form associated with $\mathcal{F}^{(d_2)}(\bm{y})$. From this we see
\begin{equation}
\label{eq.F_squiggle_difference}
\mathcal{F}_{d_2} - \mathcal{F}_{d_2-1} = \sum_{\substack{0 \leq r \leq d_1 \\ 0 \leq l \leq d_2}} \sum_{\bm{j}_r, \bm{k}_l} G_{\bm{j}_r,\bm{k}_l}^{(r,l)} x_{j_r(1)} \cdots x_{j_r(r)} h_{\bm{k}_l} \left(\bm{y}^{(1)}, \hdots, \bm{y}^{(d_2)} \right),
\end{equation}
where
\begin{equation*}
h_{\bm{k}_{d_2}} \left(\bm{y}^{(1)}, \hdots, \bm{y}^{(d_2)} \right) = d_2! y_{k_{d_2}(1)}^{(1)} \cdots y_{k_{d_2}(d_2)}^{(d_2)} + \tilde{h}_{\bm{k}_{d_2}} \left(\bm{y}^{(1)}, \hdots, \bm{y}^{(d_2-1)} \right),
\end{equation*}
for some polynomials $\tilde{h}_{\bm{k}_{d_2}}$ of degree $d_2$ that are independent of $\bm{y}^{(d_2)}$; moreover, for $l \leq d_2-1$ the $h_{\bm{k}_{l}}$ are polynomials of degree $l$ that are likewise independent of $\bm{y}^{(d_2)}$.
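To see concretely what the differencing operator $\mathcal{F}_d$ produces (a standard computation, included only for orientation), take $d = 2$: then
\begin{equation*}
\mathcal{F}_2(\bm{y}^{(1)}, \bm{y}^{(2)}) = \mathcal{F}(\bm{0}) - \mathcal{F}(\bm{y}^{(1)}) - \mathcal{F}(\bm{y}^{(2)}) + \mathcal{F}(\bm{y}^{(1)} + \bm{y}^{(2)}),
\end{equation*}
so if $\mathcal{F}(\bm{y}) = \sum_{k_1,k_2} c_{k_1 k_2} y_{k_1} y_{k_2} + (\text{terms of degree at most } 1)$, the constant and linear parts cancel and
\begin{equation*}
\mathcal{F}_2(\bm{y}^{(1)}, \bm{y}^{(2)}) = \sum_{k_1,k_2} c_{k_1 k_2} \left( y^{(1)}_{k_1} y^{(2)}_{k_2} + y^{(2)}_{k_1} y^{(1)}_{k_2} \right),
\end{equation*}
a bilinear form depending only on the degree two part of $\mathcal{F}$, in line with the statement of Lemma 11.4 (A) quoted above.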
Write $\widetilde{\bm{y}} = (\bm{y}^{(1)}, \hdots, \bm{y}^{(d_2)})$. Now set
\begin{equation*}
S_{\widetilde{\bm{y}}} = \sum_{\bm{x} \in P_1 \mathcal{B}_1} e \left( \sum_{\substack{0 \leq r \leq d_1 \\ 0 \leq l \leq d_2}} \sum_{\bm{j}_r, \bm{k}_l} G^{(r,l)}_{\bm{j}_r, \bm{k}_l} x_{j_r(1)} \cdots x_{j_r(r)} h_{\bm{k}_l} (\widetilde{\bm{y}}) \right).
\end{equation*}
Now we swap the order of summation of $\sum_{\bm{x}}$ in \eqref{eq.S_G_cauchy_schwarz} with the sums over $\bm{y}^{(i)}$ in \eqref{eq.S_x_bound_long}. Using the Cauchy-Schwarz inequality and \eqref{eq.F_squiggle_difference} we thus obtain
\begin{equation*}
\norm{S_G(P_1,P_2)}^{2^{\tilde{d}}} \ll P_1^{n_1(2^{\tilde{d}}-2^{d_1-1})} P_2^{n_2(2^{\tilde{d}}-d_2)} \sum_{\bm{y}^{(1)}} \cdots \sum_{\bm{y}^{(d_2)}} \norm{S_{\widetilde{\bm{y}}}}^{2^{d_1-1}}.
\end{equation*}
The above still holds if $d_2=1$, as can be seen directly. Applying the same differencing process to $S_{\widetilde{\bm{y}}}$ gives
\begin{equation}
\label{eq.S_G_weyl_differencing}
\norm{S_G(P_1,P_2)}^{2^{\tilde{d}}} \ll P_1^{n_1(2^{\tilde{d}}-d_1)} P_2^{n_2(2^{\tilde{d}}-d_2)} \sum_{\bm{y}^{(1)}} \cdots \sum_{\bm{y}^{(d_2)}} \sum_{\bm{x}^{(1)}} \cdots \norm{\sum_{\bm{x}^{(d_1)}} e \left( \gamma(\widetilde{\bm{x}}, \widetilde{\bm{y}}) \right)},
\end{equation}
where
\begin{equation*}
\gamma(\widetilde{\bm{x}}, \widetilde{\bm{y}}) = \sum_{\substack{0 \leq r \leq d_1 \\ 0 \leq l \leq d_2}} \sum_{\bm{j}_r, \bm{k}_l} G^{(r,l)}_{\bm{j}_r,\bm{k}_l} g_{\bm{j}_r} (\widetilde{\bm{x}}) h_{\bm{k}_l}(\widetilde{\bm{y}}),
\end{equation*}
and where, similarly to before, we have
\begin{equation*}
g_{\bm{j}_{d_1}} (\widetilde{\bm{x}}) = d_1! x_{j_{d_1}(1)}^{(1)} \cdots x_{j_{d_1}(d_1)}^{(d_1)} + \tilde{g}_{\bm{j}_{d_1}} (\bm{x}^{(1)}, \hdots, \bm{x}^{(d_1-1)}),
\end{equation*}
with $\tilde{g}_{\bm{j}_{d_1}}$ and $g_{\bm{j}_{r}}$ for $r < d_1$ not depending on $\bm{x}^{(d_1)}$. We note that \eqref{eq.S_G_weyl_differencing} holds for all $d_1,d_2 \geq 1$ and that all the summations $\sum_{\bm{x}^{(i)}}$ and $\sum_{\bm{y}^{(j)}}$ in \eqref{eq.S_G_weyl_differencing} are over boxes contained in $[-P_1,P_1]^{n_1}$ and $[-P_2,P_2]^{n_2}$, respectively.
Write $\widehat{\bm{x}} = (\bm{x}^{(1)}, \hdots, \bm{x}^{(d_1-1)})$ and $\widehat{\bm{y}} = (\bm{y}^{(1)}, \hdots, \bm{y}^{(d_2-1)})$. We now wish to estimate the quantity
\begin{equation}
\label{eq.sum_of_hats}
\sum(\widehat{\bm{x}}, \widehat{\bm{y}}) \coloneqq \sum_{\bm{y}^{(d_2)}} \norm{ \sum_{\bm{x}^{(d_1)}} e \left( \gamma(\widetilde{\bm{x}}, \widetilde{\bm{y}}) \right)}.
\end{equation}
Viewing $\sum_{a < x \leq b} e(\beta x)$ for $b-a \geq 1$ as a geometric series we recall the following elementary estimate
\begin{equation*}
\norm{\sum_{a < x \leq b} e(\beta x)} \ll \min \{ b-a, \nnorm{\beta}^{-1} \}.
\end{equation*}
This yields
\begin{equation*}
\norm{\sum_{\bm{x}^{(d_1)}} e \left( \gamma(\widetilde{\bm{x}}, \widetilde{\bm{y}}) \right)} \ll \prod_{\ell = 1}^{n_1} \min \left\{ P_1, \nnorm{\widetilde{\gamma}(\widehat{\bm{x}}, \bm{e}_\ell, \widetilde{\bm{y}})}^{-1} \right\},
\end{equation*}
where $\bm{e}_\ell$ denotes the $\ell$-th unit vector and where
\begin{equation*}
\widetilde{\gamma}(\widetilde{\bm{x}}, \widetilde{\bm{y}}) = d_1! \sum_{0 \leq l \leq d_2} \sum_{\bm{j}_{d_1}, \bm{k}_l} G^{(d_1,l)}_{\bm{j}_{d_1}, \bm{k}_l} x_{j_{d_1}(1)}^{(1)} \cdots x_{j_{d_1}(d_1)}^{(d_1)} h_{\bm{k}_l}(\widetilde{\bm{y}}).
\end{equation*}
We now apply a standard argument in order to estimate this product, as in Davenport~\cite[Chapter 13]{davenport_book}. For a real number $z$ write $\{z \}$ for its fractional part. Let $\bm{r} = (r_1, \hdots, r_{n_1}) \in \mathbb{Z}^{n_1}$ be such that $0 \leq r_\ell < P_1$ holds for $\ell = 1, \hdots, n_1$. Define $\mathcal{A}(\widehat{\bm{x}}, \widehat{\bm{y}}, \bm{r})$ to be the set of $\bm{y}^{(d_2)}$ in the sum in \eqref{eq.sum_of_hats} such that
\begin{equation*}
r_\ell P_1^{-1} \leq \left\{ \widetilde{\gamma} \left(\widehat{\bm{x}}, \bm{e}_\ell, \widehat{\bm{y}}, \bm{y}^{(d_2)}\right) \right\} < (r_\ell+1)P_1^{-1}
\end{equation*}
holds for all $\ell = 1, \hdots, n_1$, and write $A(\widehat{\bm{x}}, \widehat{\bm{y}}, \bm{r})$ for its cardinality.
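The role of these boxes is the following routine observation, spelled out only for the reader's convenience: if the fractional part of a real number $z$ lies in $[r P_1^{-1}, (r+1)P_1^{-1})$ for an integer $0 \leq r < P_1$, then $\nnorm{z} \geq \min\{r, P_1-r-1\} P_1^{-1}$, and hence
\begin{equation*}
\min\left\{ P_1, \nnorm{z}^{-1} \right\} \leq \min \left\{ P_1, \max \left\{ \frac{P_1}{r}, \frac{P_1}{P_1-r-1} \right\} \right\},
\end{equation*}
with the usual convention that the right hand side equals $P_1$ when $r = 0$ or $P_1 - r - 1 \leq 0$. This is exactly the factor appearing in the next display.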
We obtain the estimate
\begin{equation*}
\sum(\widehat{\bm{x}}, \widehat{\bm{y}}) \ll \sum_{\bm{r}} A(\widehat{\bm{x}}, \widehat{\bm{y}}, \bm{r}) \prod_{\ell = 1}^{n_1} \min \left\{ P_1, \max \left\{ \frac{P_1}{r_\ell}, \frac{P_1}{P_1-r_{\ell}-1} \right\} \right\},
\end{equation*}
where the sum $\sum_{\bm{r}}$ is over integral $\bm{r}$ with $0 \leq r_\ell < P_1$ for all $\ell = 1, \hdots, n_1$. Our next aim is to find a bound for $A(\widehat{\bm{x}}, \widehat{\bm{y}}, \bm{r})$ that is independent of $\bm{r}$. If $\bm{u}, \bm{v} \in \mathcal{A}(\widehat{\bm{x}}, \widehat{\bm{y}}, \bm{r})$, then
\begin{equation*}
\nnorm{\widetilde{\gamma} \left(\widehat{\bm{x}}, \bm{e}_\ell, \widehat{\bm{y}}, \bm{u}\right)- \widetilde{\gamma} \left(\widehat{\bm{x}}, \bm{e}_\ell, \widehat{\bm{y}}, \bm{v}\right)} < P_1^{-1},
\end{equation*}
for $\ell = 1, \hdots, n_1$. Similarly to before we now define the multilinear forms
\begin{equation*}
\Gamma_G(\widetilde{\bm{x}}, \widetilde{\bm{y}}) \coloneqq d_1! d_2! \sum_{\bm{j}_{d_1}, \bm{k}_{d_2}} G^{(d_1,d_2)}_{\bm{j}_{d_1}, \bm{k}_{d_2}} x_{j_{d_1}(1)}^{(1)} \cdots x_{j_{d_1}(d_1)}^{(d_1)} y_{k_{d_2}(1)}^{(1)} \cdots y_{k_{d_2}(d_2)}^{(d_2)},
\end{equation*}
which only depend on the $(d_1,d_2)$-degree part of $G$. For fixed $\widehat{\bm{x}}, \widehat{\bm{y}}$ let $N(\widehat{\bm{x}}, \widehat{\bm{y}})$ be the number of $\bm{y} \in (-P_2,P_2)^{n_2}$ such that
\begin{equation*}
\nnorm{\Gamma_G( \widehat{\bm{x}}, \bm{e}_\ell, \widehat{\bm{y}}, \bm{y})} < P_1^{-1},
\end{equation*}
for all $\ell = 1, \hdots, n_1$. Observe now, crucially, that
\begin{equation*}
\widetilde{\gamma} \left(\widehat{\bm{x}}, \bm{e}_\ell, \widehat{\bm{y}}, \bm{u}\right)- \widetilde{\gamma} \left(\widehat{\bm{x}}, \bm{e}_\ell, \widehat{\bm{y}}, \bm{v}\right) = \Gamma_G( \widehat{\bm{x}}, \bm{e}_\ell, \widehat{\bm{y}}, \bm{u}- \bm{v}).
\end{equation*}
Thus we find $A(\widehat{\bm{x}}, \widehat{\bm{y}}, \bm{r}) \leq N(\widehat{\bm{x}}, \widehat{\bm{y}})$ for all $\bm{r}$ as specified above.
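To see where the factor $(P_1 \log P_1)^{n_1}$ in the next display comes from (a routine computation, recorded only for convenience): bounding $A(\widehat{\bm{x}}, \widehat{\bm{y}}, \bm{r})$ by $N(\widehat{\bm{x}}, \widehat{\bm{y}})$, the remaining sum over $\bm{r}$ factorises over the coordinates $\ell = 1, \hdots, n_1$, and each coordinate contributes at most
\begin{equation*}
\sum_{0 \leq r < P_1} \min \left\{ P_1, \max \left\{ \frac{P_1}{r}, \frac{P_1}{P_1-r-1} \right\} \right\} \ll P_1 \log P_1,
\end{equation*}
since every summand is at most $P_1$, and away from the $O(1)$ boundary values of $r$ it is bounded by $P_1/r + P_1/(P_1-r-1)$, which sums to $O(P_1 \log P_1)$ by comparison with harmonic sums.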
Using this we get
\begin{equation*}
\sum_{\bm{y}^{(d_2)}} \norm{ \sum_{\bm{x}^{(d_1)}} e \left( \gamma(\widetilde{\bm{x}}, \widetilde{\bm{y}}) \right)} \ll N(\widehat{\bm{x}}, \widehat{\bm{y}}) (P_1 \log P_1)^{n_1}.
\end{equation*}
Finally, summing over $\widehat{\bm{x}}$ and $\widehat{\bm{y}}$ we obtain
\begin{equation*}
\norm{S_G(P_1,P_2)}^{2^{\tilde{d}}} \ll P_1^{n_1(2^{\tilde{d}}-d_1+1) + \varepsilon} P_2^{n_2(2^{\tilde{d}}-d_2)} M_1 \left( G^{(d_1,d_2)}, P_1, P_2, P_1^{-1} \right). \qedhere
\end{equation*}
\end{proof}
Inspecting the proof of Lemma 4.1 in \cite{schindler_bihomogeneous} we find that for a polynomial $G(\bm{x}, \bm{y})$ as above and given $\theta \in (0,1]$ the following holds:
\begin{multline*}
M_1(G^{(d_1,d_2)},P_1,P_2,P_1^{-1}) \ll P_1^{n_1(d_1-1)} P_2^{n_2d_2} P_2^{-\theta(n_1d_1 + n_2d_2)} \\
\times \max_{i=1,2} \left\{ P_2^{n_i \theta} M_i\left(G^{(d_1,d_2)}; P_2^\theta, P_2^\theta, P_1^{-d_1} P_2^{-d_2} P_2^{\theta(\tilde{d}+1)}\right) \right\}.
\end{multline*}
Using this and Lemma \ref{lem.weyl_differencing_general_poly} we deduce the next Lemma.
\begin{lemma}
\label{lem.schindler_counting_linearised}
Let $P_1,P_2 > 1$, $\theta \in (0,1]$ and $\bm{\alpha} \in \mathbb{R}^R$. Write $S_G = S_G(P_1,P_2)$. Using the same notation as above, for $i=1$ or $i=2$ we have
\begin{equation*}
\norm{S_G}^{2^{\tilde{d}}} \ll_{d_i,n_i,\varepsilon} P_1^{n_12^{\tilde{d}} + \varepsilon} P_2^{n_22^{\tilde{d}}} P_2^{\theta n_i- \theta(n_1d_1+n_2d_2) } \times M_i\left(G^{(d_1,d_2)}; P_2^\theta, P_2^\theta, P_1^{-d_1} P_2^{-d_2}P_2^{\theta(\tilde{d}+1)}\right).
\end{equation*}
\end{lemma}
Using the preceding Lemma and adapting the proof of \cite[Lemma 3.1]{myerson_quadratic} to our setting we can now show the following.
\begin{lemma}
\label{lem.auxiliary_ineq_lemma}
Let $\varepsilon > 0$, $\theta \in (0,1]$ and $\bm{\alpha}, \bm{\beta} \in \mathbb{R}^R$. Then for $i = 1$ or $i=2$ we have
\begin{multline}
\label{eq.almost_auxiliary}
\min \left\{ \norm{\frac{S(\bm{\alpha})}{P_1^{n_1+\varepsilon} P_2^{n_2}}}, \norm{\frac{S(\bm{\alpha} + \bm{\beta})}{P_1^{n_1+\varepsilon} P_2^{n_2}}} \right\}^{2^{\tilde{d}+1}} \\
\ll_{d_i,n_i,\varepsilon} \frac{M_i\left(\bm{\beta} \cdot \bm{F}; P_2^\theta, P_2^\theta, P_1^{-d_1} P_2^{-d_2}P_2^{\theta(\tilde{d}+1)}\right)}{P_2^{\theta(n_1 d_1 + n_2d_2)-\theta n_i}}.
\end{multline}
\end{lemma}
\begin{proof}
Note first that for two real numbers $\lambda, \mu >0$ we have
\begin{equation*}
\min\{\lambda, \mu\} \leq \sqrt{\lambda\mu}.
\end{equation*}
Therefore it suffices to show that
\begin{equation*}
\norm{\frac{S(\bm{\alpha})S(\bm{\alpha} + \bm{\beta})}{P_1^{2n_1+2\varepsilon} P_2^{2n_2}} }^{2^{\tilde{d}}} \ll_{d_i,n_i,\varepsilon} \frac{M_i\left(\bm{\beta} \cdot \bm{F}; P_2^\theta, P_2^\theta, P_1^{-d_1} P_2^{-d_2}P_2^{\theta(\tilde{d}+1)}\right)}{P_2^{\theta(n_1 d_1 + n_2d_2)-\theta n_i}}
\end{equation*}
holds for $i=1$ or $i=2$. Note first that
\begin{equation*}
\norm{S(\bm{\alpha} + \bm{\beta}) \overbar{S}(\bm{\alpha})} = \norm{\sum_{\substack{\bm{x} \in P_1 \mathcal{B}_1 \\ \bm{y} \in P_2 \mathcal{B}_2}} \sum_{\substack{\bm{x}+\bm{z} \in P_1 \mathcal{B}_1 \\ \bm{y}+\bm{w} \in P_2 \mathcal{B}_2}} e\left( (\bm{\alpha} + \bm{\beta}) \cdot \bm{F}(\bm{x},\bm{y}) - \bm{\alpha} \cdot \bm{F} (\bm{x}+\bm{z}, \bm{y}+\bm{w}) \right)},
\end{equation*}
so by the triangle inequality we get
\begin{equation*}
\norm{S(\bm{\alpha} + \bm{\beta}) \overbar{S}(\bm{\alpha})} \leq \sum_{\substack{\nnorm{\bm{z}}_\infty \leq P_1 \\ \nnorm{\bm{w}}_\infty \leq P_2}} \norm{\sum_{\substack{\bm{x} \in P_1\mathcal{B}_{\bm{z}} \\ \bm{y} \in P_2\mathcal{B}_{\bm{w}}}} e\left( \bm{\beta} \cdot \bm{F}(\bm{x},\bm{y}) - g_{\bm{\alpha},\bm{\beta},\bm{z},\bm{w}}(\bm{x},\bm{y}) \right)},
\end{equation*}
where $g_{\bm{\alpha},\bm{\beta},\bm{z},\bm{w}}(\bm{x},\bm{y})$ is a polynomial of degree at most $d_1+d_2-1$ in $(\bm{x},\bm{y})$ and where $\mathcal{B}_{\bm{z}} \subset \mathcal{B}_1$ and $\mathcal{B}_{\bm{w}} \subset \mathcal{B}_2$ are certain boxes.
Applying Cauchy's inequality $\tilde{d}$ times we deduce
\begin{equation*}
\norm{S(\bm{\alpha} + \bm{\beta}) \overbar{S}(\bm{\alpha})}^{2^{\tilde{d}}} \leq P_1^{n_1(2^{\tilde{d}}-1)}P_2^{n_2(2^{\tilde{d}}-1)} \sum_{\substack{\nnorm{\bm{z}}_\infty \leq P_1 \\ \nnorm{\bm{w}}_\infty \leq P_2}} \norm{\sum_{\substack{\bm{x} \in P_1\mathcal{B}_{\bm{z}} \\ \bm{y} \in P_2\mathcal{B}_{\bm{w}}}} e\left( \bm{\beta} \cdot \bm{F}(\bm{x},\bm{y}) - g_{\bm{\alpha},\bm{\beta},\bm{z},\bm{w}}(\bm{x},\bm{y}) \right)}^{2^{\tilde{d}}}.
\end{equation*}
If we write $G(\bm{x}, \bm{y}) = \bm{\beta} \cdot \bm{F}(\bm{x},\bm{y}) - g_{\bm{\alpha},\bm{\beta},\bm{z},\bm{w}}(\bm{x},\bm{y})$ then note that $G^{(d_1,d_2)} = \bm{\beta} \cdot \bm{F}$. Using Lemma \ref{lem.schindler_counting_linearised} we therefore obtain
\begin{multline*}
\norm{S(\bm{\alpha} + \bm{\beta}) \overbar{S}(\bm{\alpha})}^{2^{\tilde{d}}} \ll P_1^{2^{\tilde{d}+1}n_1+\varepsilon}P_2^{2^{\tilde{d}+1}n_2} P_2^{-\theta (n_1d_1 + n_2d_2) + \theta n_i}\\
\times M_i(\bm{\beta} \cdot \bm{F}, P_2^\theta, P_2^\theta, P_1^{-d_1}P_2^{-d_2}P_2^{\theta(\tilde{d}+1)}),
\end{multline*}
for $i=1$ or $i=2$, which readily delivers the result.
\end{proof}
As in the introduction, for $\bm{\beta} \in \mathbb{R}^R$ we define the auxiliary counting function $N_1^{\mathrm{aux}}(\bm{\beta}; B)$ to be the number of integer vectors $\widehat{\bm{x}} \in (-B,B)^{(d_1-1)n_1}$ and $\widetilde{\bm{y}} \in (-B,B)^{d_2n_2}$ such that
\begin{equation*}
\norm{\Gamma_{\bm{\beta} \cdot \bm{F}}(\widehat{\bm{x}}, \bm{e}_\ell, \widetilde{\bm{y}})} < \nnorm{\bm{\beta} \cdot \bm{F}}_\infty B^{\tilde{d}},
\end{equation*}
for $\ell = 1, \hdots, n_1$, where $\nnorm{f}_\infty \coloneqq \frac{1}{d_1!d_2!} \max_{\bm{j}, \bm{k}} \norm{\frac{\partial^{d_1+d_2}f}{\partial x_{j_1} \cdots \partial x_{j_{d_1}} \partial y_{k_1} \cdots \partial y_{k_{d_2}} }}$. We define $N_2^{\mathrm{aux}}(\bm{\beta}; B)$ analogously. We now formulate an analogue of \cite[Proposition 3.1]{myerson_quadratic}.
\begin{proposition}
\label{prop.auxiliary_ineq_from_counting_function}
Let $C_0 \geq 1$ and $\mathscr{C} > 0$ be such that for all $\bm{\beta} \in \mathbb{R}^R$ and all $B>0$ we have, for $i=1,2$,
\begin{equation}
\label{eq.auxiliary_ineq_assumption}
N_i^{\mathrm{aux}}(\bm{\beta}; B) \leq C_0 B^{d_1n_1+d_2n_2 - n_i - 2^{\tilde{d}+1} \mathscr{C}}.
\end{equation}
Assume further that the forms $F_i$ are linearly independent, so that there exist $M>\mu>0$ such that
\begin{equation}
\label{eq.assumption_lin_ind}
\mu \nnorm{\bm{\beta}}_\infty \leq \nnorm{\bm{\beta} \cdot \bm{F}}_\infty \leq M \nnorm{\bm{\beta}}_\infty.
\end{equation}
Then there exists a constant $C>0$ depending on $C_0,d_i,n_i,\mu$ and $M$ such that the following \emph{auxiliary inequality}
\begin{equation*}
\label{eq.aux_ineq_section_2}
\min \left\{ \norm{\frac{S(\bm{\alpha})}{P_1^{n_1+\varepsilon} P_2^{n_2}}}, \norm{\frac{S(\bm{\alpha} + \bm{\beta})}{P_1^{n_1+\varepsilon} P_2^{n_2}}} \right\} \leq C\max\left\{P_2^{-1}, P_1^{-d_1}P_2^{-d_2} \nnorm{\bm{\beta}}_\infty^{-1}, \nnorm{\bm{\beta}}_\infty^{\frac{1}{\tilde{d}+1}} \right\}^{\mathscr{C}}
\end{equation*}
holds for all real numbers $P_1, P_2 > 1$.
\end{proposition}
\begin{proof}
The strategy of the proof closely follows that of \cite[Proposition 3.1]{myerson_quadratic}.
By Lemma \ref{lem.auxiliary_ineq_lemma} we know that \eqref{eq.almost_auxiliary} holds for $i=1$ or $i=2$. Assume that there is some $\theta \in (0,1]$ such that for the same $i$ we have
\begin{equation}
\label{eq.counting_inequality}
N_i^{\mathrm{aux}} (\bm{\beta}; P_2^\theta) < M_i(\bm{\beta} \cdot \bm{F}, P_2^\theta, P_2^\theta, P_1^{-d_1}P_2^{-d_2}P_2^{\theta(\tilde{d}+1)}).
\end{equation}
We treat the case $i=1$; the case $i=2$ can be proven completely analogously. The inequality \eqref{eq.counting_inequality} means that there exist a $(d_1-1)$-tuple $\widehat{\bm{x}}$ and a $d_2$-tuple $\widetilde{\bm{y}}$ which are counted by $M_1(\bm{\beta} \cdot \bm{F}, P_2^\theta, P_2^\theta, P_1^{-d_1}P_2^{-d_2}P_2^{\theta(\tilde{d}+1)})$ but not by $N_1^{\mathrm{aux}} (\bm{\beta}; P_2^\theta)$. Therefore this pair of tuples satisfies
\begin{equation}
\label{eq.tuples_in_box}
\nnorm{\widehat{\bm{x}}^{(i)}}_\infty, \nnorm{\widetilde{\bm{y}}^{(j)}}_\infty \leq P_2^\theta, \; \text{for} \; i = 1,\hdots, d_1-1 \; \text{and} \; j = 1, \hdots, d_2,
\end{equation}
and
\begin{equation}
\label{eq.gamma_upper_bound}
\nnorm{\Gamma_{\bm{\beta} \cdot \bm{F}}(\widehat{\bm{x}}, \bm{e}_\ell, \widetilde{\bm{y}})} < P_1^{-d_1}P_2^{-d_2} P_2^{\theta(\tilde{d}+1)}, \; \text{for} \; \ell = 1, \hdots, n_1,
\end{equation}
since it is counted by $M_1(\bm{\beta} \cdot \bm{F}, P_2^\theta, P_2^\theta, P_1^{-d_1}P_2^{-d_2}P_2^{\theta(\tilde{d}+1)})$. On the other hand, since it is not counted by $N_1^{\mathrm{aux}} (\bm{\beta}; P_2^\theta)$, there exists $\ell_0 \in \{1, \hdots, n_1 \}$ such that
\begin{equation}
\label{eq.gamma_lower_bound}
\norm{\Gamma_{\bm{\beta} \cdot \bm{F}} (\widehat{\bm{x}}, \bm{e}_{\ell_0}, \widetilde{\bm{y}})} \geq \nnorm{\bm{\beta} \cdot \bm{F}}_\infty P_2^{\tilde{d} \theta}.
\end{equation}
From \eqref{eq.gamma_upper_bound} we get that for $\ell_0$ we must have either
\begin{equation}
\label{eq.gamma_really_small}
\norm{\Gamma_{\bm{\beta} \cdot \bm{F}}(\widehat{\bm{x}}, \bm{e}_{\ell_0}, \widetilde{\bm{y}})} < P_1^{-d_1} P_2^{-d_2} P_2^{\theta(\tilde{d}+1)}
\end{equation}
or
\begin{equation}
\label{eq.gamma_bigger_but_int_close}
\norm{\Gamma_{\bm{\beta} \cdot \bm{F}}(\widehat{\bm{x}}, \bm{e}_{\ell_0}, \widetilde{\bm{y}})} \geq \frac{1}{2}.
\end{equation}
If \eqref{eq.gamma_really_small} holds then \eqref{eq.gamma_lower_bound} implies
\begin{equation}
\label{eq.beta_F_bound_1}
\nnorm{\bm{\beta} \cdot \bm{F}}_\infty < \frac{P_1^{-d_1}P_2^{-d_2}P_2^{(\tilde{d}+1)\theta}}{P_2^{\tilde{d}\theta}} = P_2^\theta P_1^{-d_1} P_2^{-d_2}.
\end{equation}
If on the other hand \eqref{eq.gamma_bigger_but_int_close} holds then \eqref{eq.tuples_in_box} gives
\begin{equation}
\label{eq.beta_F_bound_2}
\frac{1}{2} \leq \norm{\Gamma_{\bm{\beta} \cdot \bm{F}}(\widehat{\bm{x}}, \bm{e}_{\ell_0}, \widetilde{\bm{y}})} \ll \nnorm{\bm{\beta} \cdot \bm{F}}_\infty P_2^{(\tilde{d}+1)\theta}.
\end{equation}
Since either \eqref{eq.beta_F_bound_1} or \eqref{eq.beta_F_bound_2} holds, via \eqref{eq.assumption_lin_ind} we deduce
\begin{equation}
\label{eq.P_theta_max}
P_2^{-\theta} \ll_{\mu,M} \max\left\{ P_1^{-d_1}P_2^{-d_2} \nnorm{\bm{\beta}}_\infty^{-1}, \nnorm{\bm{\beta}}_\infty^{\frac{1}{\tilde{d}+1}} \right\}.
\end{equation}
Since \eqref{eq.almost_auxiliary} holds for $i=1$, and due to the assumption \eqref{eq.auxiliary_ineq_assumption}, we see that \eqref{eq.counting_inequality} holds if there exists some $C_1 > 0$ such that
\begin{equation}
\label{eq.aux_ineq_predecessor}
P_2^{-\theta 2^{\tilde{d}+1}\mathscr{C}} \leq C_1 \min \left\{ \norm{\frac{S(\bm{\alpha})}{P_1^{n_1+\varepsilon} P_2^{n_2}}}, \norm{\frac{S(\bm{\alpha} + \bm{\beta})}{P_1^{n_1+\varepsilon} P_2^{n_2}}} \right\}^{2^{\tilde{d}+1}}.
\end{equation}
Now \emph{define} $\theta$ so that we have equality in the inequality above, that is,
\begin{equation}
\label{eq.theta_definition}
P_2^\theta = C_1^{\frac{1}{2^{\tilde{d}+1} \mathscr{C}}} \min \left\{ \norm{\frac{S(\bm{\alpha})}{P_1^{n_1+\varepsilon} P_2^{n_2}}}, \norm{\frac{S(\bm{\alpha} + \bm{\beta})}{P_1^{n_1+\varepsilon} P_2^{n_2}}} \right\}^{-\frac{1}{\mathscr{C}}}.
\end{equation}
If $\theta \in (0,1]$ then~\eqref{eq.aux_ineq_predecessor} holds, and so, together with the assumption~\eqref{eq.auxiliary_ineq_assumption} and arguing as above, this implies that~\eqref{eq.P_theta_max} holds, which gives the result in this case. But $\theta$ will always be positive; for if $\theta \leq 0$ then~\eqref{eq.theta_definition} implies
\begin{equation*}
\min \left\{ \norm{\frac{S(\bm{\alpha})}{P_1^{n_1+\varepsilon} P_2^{n_2}}}, \norm{\frac{S(\bm{\alpha} + \bm{\beta})}{P_1^{n_1+\varepsilon} P_2^{n_2}}} \right\} \geq C_1^{-\frac{1}{2^{\tilde{d}+1}}}.
\end{equation*}
However, note that clearly $\norm{S(\bm{\alpha})} \leq (P_1+1)^{n_1} (P_2+1)^{n_2}$. Without loss of generality we may take $P_i$ large enough, depending on $\varepsilon$, so that this leads to a contradiction. Finally, if $\theta \geq 1$ then we find $P_2^{- \mathscr{C} \theta} \leq P_2^{-\mathscr{C}}$, and so from~\eqref{eq.theta_definition} we obtain
\[
\min \left\{ \norm{\frac{S(\bm{\alpha})}{P_1^{n_1+\varepsilon} P_2^{n_2}}}, \norm{\frac{S(\bm{\alpha} + \bm{\beta})}{P_1^{n_1+\varepsilon} P_2^{n_2}}} \right\} \ll P_2^{-\mathscr{C}}.
\]
This gives the result.
\end{proof}

\section{The circle method}
\label{sec.circle_method}
The aim of this section is to use the auxiliary inequality
\begin{multline}
\label{eq.aux_ineq}
P_1^{-\varepsilon} \min \left\{ \norm{\frac{S(\bm{\alpha})}{P_1^{n_1} P_2^{n_2}}}, \norm{\frac{S(\bm{\alpha} + \bm{\beta})}{P_1^{n_1} P_2^{n_2}}} \right\} \leq \\
C\max\left\{P_2^{-1}, P_1^{-d_1}P_2^{-d_2} \nnorm{\bm{\beta}}_\infty^{-1}, \nnorm{\bm{\beta}}_\infty^{\frac{1}{\tilde{d}+1}} \right\}^{\mathscr{C}},
\end{multline}
where $C \geq 1$, and to apply the circle method in order to deduce an estimate for $N(P_1,P_2)$. In this section we will use the notation $P = P_1^{d_1}P_2^{d_2}$. Write $b = \max \left\{1, \log P_1 / \log P_2\right\}$ and $u = \max \left\{1, \log P_2/\log P_1 \right\}$. If $P_1 \geq P_2$ then $b = \log P_1 / \log P_2$ and thus $P_2^{bd_1+d_2} = P$ holds. The main result will be the following.
\begin{proposition}
\label{prop.main_prop}
Let $\mathscr{C} > (bd_1+ud_2)R$, $C\geq 1$ and $\varepsilon >0$ be such that the auxiliary inequality \eqref{eq.aux_ineq} holds for all $\bm{\alpha}, \bm{\beta} \in \mathbb{R}^R$, all $P_1, P_2 > 1$ and all boxes $\mathcal{B}_i \subset [-1,1]^{n_i}$ with side lengths at most $1$ and edges parallel to the coordinate axes. Then there exists some $\delta > 0$ depending on $b$, $u$, $R$, $d_i$ and $n_i$ such that
\begin{equation*}
N(P_1,P_2) = \sigma P_1^{n_1-d_1R}P_2^{n_2-d_2R} + O \left(P_1^{n_1-d_1R}P_2^{n_2-d_2R} P^{-\delta} \right).
\end{equation*}
The factor $\sigma = \mathfrak{I} \mathfrak{S}$ is the product of the singular integral $\mathfrak{I}$ and the singular series $\mathfrak{S}$, as defined in~\eqref{eq.def_singular_integral} and~\eqref{eq.def_singular_series}, respectively.
\end{proposition}
Note that this result holds for general bidegree, and therefore in the proof one may assume $P_1 \geq P_2$ throughout: for instance, to obtain the proposition for bidegree $(2,1)$ without this restriction, one combines the asymmetric results for bidegree $(2,1)$ and bidegree $(1,2)$.
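To illustrate the notation $b$, $u$ and $P$ (a purely numerical example, not needed in what follows), suppose that $d_1 = 2$, $d_2 = 1$ and $P_1 = P_2^2$. Then
\begin{equation*}
b = \frac{\log P_1}{\log P_2} = 2, \qquad u = \max\left\{1, \tfrac{1}{2}\right\} = 1, \qquad P = P_1^{2} P_2 = P_2^{5} = P_2^{bd_1 + d_2},
\end{equation*}
so the hypothesis $\mathscr{C} > (bd_1 + ud_2)R$ of Proposition~\ref{prop.main_prop} becomes $\mathscr{C} > 5R$ in this case.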
\subsection{The minor arcs}
First we will show that the contributions from the minor arcs do not affect the main term. For this we will prove a Lemma similar to Lemma 2.1 in~\cite{myerson_quadratic}.
\begin{lemma}
\label{lem.general_integral_bound}
Let $r_1, r_2 \colon (0, \infty) \rightarrow (0, \infty)$ be strictly decreasing and increasing bijections, respectively, and let $A >0$ be a real number. For any $\nu >0$ let $E_0 \subset \mathbb{R}^R$ be a hypercube of side length $\nu$ whose edges are parallel to the coordinate axes. Let $E \subseteq E_0$ be a measurable set and let $\varphi \colon E \rightarrow [0, \infty)$ be a measurable function. Assume that for all $\bm{\alpha}, \bm{\beta} \in \mathbb{R}^R$ such that $\bm{\alpha}, \bm{\alpha} + \bm{\beta} \in E$ we have
\begin{equation}
\label{eq.general_aux_ineq}
\min \left\{ \varphi(\bm{\alpha}), \varphi(\bm{\alpha}+ \bm{\beta}) \right\} \leq \max\left\{A, r_1^{-1}\left( \nnorm{\bm{\beta}}_\infty \right),r_2^{-1}\left( \nnorm{\bm{\beta}}_\infty \right) \right\}.
\end{equation}
Then for all integers $k \leq \ell$ such that $A < 2^k$ we get
\begin{multline}
\label{eq.general_integral_estimate}
\int_E \varphi(\bm{\alpha}) d\bm{\alpha} \ll_R \\
\nu^R 2^k + \sum_{i = k}^{\ell-1} 2^i \left( \frac{\nu r_1 (2^i)}{\min\{r_2(2^i),\nu \}} \right)^R + \left( \frac{\nu r_1(2^\ell)}{\min\{ r_2(2^\ell), \nu \}} \right)^R \sup_{\bm{\alpha} \in E} \varphi(\bm{\alpha}).
\end{multline}
\end{lemma}
Note that if we take
\begin{equation*}
\varphi(\bm{\alpha}) = C^{-1} P_1^{-n_1-\varepsilon} P_2^{-n_2} \norm{S(\bm{\alpha})}, \quad r_1(t) = P_1^{-d_1}P_2^{-d_2}t^{-\frac{1}{\mathscr{C}}}, \quad r_2(t) = t^{\frac{\tilde{d}+1}{\mathscr{C}}}, \quad A = P_2^{-\mathscr{C}},
\end{equation*}
where $C$ is the constant in \eqref{eq.aux_ineq}, then the assumption~\eqref{eq.general_aux_ineq} is just the auxiliary inequality \eqref{eq.aux_ineq}.
\begin{proof}
Given $t \geq 0$ define the set
\[
D(t) = \left\{\bm{\alpha} \in E \colon \varphi(\bm{\alpha}) \geq t \right\}.
\]
If $\bm{\alpha}$ and $\bm{\alpha}+ \bm{\beta}$ are both contained in $D(t)$ then by~\eqref{eq.general_aux_ineq} one of the following must hold:
\[
A \geq t, \quad \nnorm{\bm{\beta}}_\infty \leq r_1(t), \quad \text{or} \quad \nnorm{\bm{\beta}}_\infty \geq r_2(t).
\]
In particular, if $t > A$ then either $\nnorm{\bm{\beta}}_\infty \leq r_1(t)$ or $\nnorm{\bm{\beta}}_\infty \geq r_2(t)$. Assuming that $t > A$ is satisfied, consider a box $\mathfrak{b} \subset \mathbb{R}^R$ with side lengths $r_2(t)/2$ whose edges are parallel to the coordinate axes. Given $\bm{\alpha} \in \mathfrak{b} \cap D(t)$ set
\[
\mathfrak{B}(\bm{\alpha}) = \left\{ \bm{\alpha} + \bm{\beta} \colon \bm{\beta} \in \mathbb{R}^R, \nnorm{\bm{\beta}}_\infty \leq r_1(t) \right\}.
\]
If $\bm{\alpha} + \bm{\beta} \in \mathfrak{b} \cap D(t)$ then by construction $\nnorm{\bm{\beta}}_\infty \leq r_2(t) /2 < r_2(t)$, whence $\nnorm{\bm{\beta}}_\infty \leq r_1(t)$. Therefore we have $\mathfrak{b} \cap D(t) \subset \mathfrak{B}(\bm{\alpha})$, which in turn implies that the measure of $\mathfrak{b} \cap D(t)$ is bounded by $(2r_1(t))^R$. Since $D(t)$ is contained in $E_0$ one can cover $D(t)$ with at most
\[
\ll_R \frac{\nu^R}{\min\{r_2(t),\nu\}^R}
\]
boxes $\mathfrak{b}$ whose side lengths are $r_2(t)/2$. Therefore we find
\[
\mu(D(t)) \ll_R \left( \frac{\nu r_1(t)}{\min \{r_2(t), \nu \}}\right)^R,
\]
where we write $\mu(D(t))$ for the Lebesgue measure of $D(t)$. If $k < \ell$ are two integers then
\begin{equation*}
\int_E \varphi(\bm{\alpha}) d \bm{\alpha} = \int_{E \setminus D(2^k)} \varphi(\bm{\alpha}) d \bm{\alpha} + \sum_{i = k}^{\ell-1} \int_{D(2^i) \setminus D(2^{i+1})} \varphi(\bm{\alpha}) d \bm{\alpha} + \int_{D(2^\ell)} \varphi(\bm{\alpha}) d \bm{\alpha}.
\end{equation*}
We can trivially bound $\int_{E \setminus D(2^k)}\varphi(\bm{\alpha}) d\bm{\alpha} \leq \nu^R 2^k$, and further we can bound
\[
\int_{D(2^i) \setminus D(2^{i+1})} \varphi(\bm{\alpha}) d\bm{\alpha} \leq 2^{i+1} \mu(D(2^i)), \quad \text{and} \quad \int_{D(2^\ell)} \varphi(\bm{\alpha}) d \bm{\alpha} \leq \mu(D(2^\ell)) \sup_{\bm{\alpha} \in E} \varphi(\bm{\alpha}).
\]
If $2^k > A$ then for any $i \geq k$ by our discussion above we find
\[
\mu(D(2^i)) \ll_R \left( \frac{\nu r_1(2^i)}{\min \{r_2(2^i), \nu \}}\right)^R.
\]
Therefore the result follows.
\end{proof}
Recall the notation $P=P_1^{d_1}P_2^{d_2}$. From now on we will assume $P_1 \geq P_2$. Note that the assumption $\mathscr{C} > R(bd_1 + ud_2)$ in Proposition~\ref{prop.main_prop} is equivalent to $\mathscr{C} > R(bd_1 + d_2)$ when $P_1 \geq P_2$.
\begin{lemma}
\label{lem.good_int_bound}
Let $T \colon \mathbb{R}^R \rightarrow \mathbb{C}$ be a measurable function. With notation as in Lemma \ref{lem.general_integral_bound}, assume that for all $\bm{\alpha}, \bm{\beta} \in \mathbb{R}^R$, all $P_1 \geq P_2 > 1$ and all $\mathscr{C} >0$ we have
\begin{equation}
\label{eq.less_general_aux_ineq}
\min \left\{ \norm{\frac{T(\bm{\alpha})}{P_1^{n_1}P_2^{n_2}}},\norm{\frac{T(\bm{\alpha}+\bm{\beta})}{P_1^{n_1}P_2^{n_2}}} \right\} \leq \max \left\{P_2^{-1}, P_1^{-d_1}P_2^{-d_2} \nnorm{\bm{\beta}}_\infty^{-1}, \nnorm{\bm{\beta}}_\infty^{\frac{1}{\tilde{d}+1}} \right\}^\mathscr{C}.
\end{equation}
Write $P = P_1^{d_1} P_2^{d_2}$ and assume that we have
\begin{equation}
\label{eq.general_sup_bound}
\sup_{\bm{\alpha} \in E} \norm{T(\bm{\alpha})} \leq P_1^{n_1}P_2^{n_2} P^{-\delta},
\end{equation}
for some $\delta > 0$.
Then we have
\begin{multline}
\label{eq.many_cases_estimate}
\int_E \frac{T(\bm{\alpha})}{P_1^{n_1} P_2^{n_2}}d\bm{\alpha} \ll_{\mathscr{C},d_i,R} \\
\begin{cases}
\nu^R P^{-R} P_2^{(\tilde{d}+2)R-\mathscr{C}} + P_2^{-\mathscr{C}} \quad &\text{if } \mathscr{C} < R \\
\nu^R P^{-R} P_2^{(\tilde{d}+2)R-\mathscr{C}} + P^{-R} \log P_2 + P_2^{-\mathscr{C}} \quad &\text{if } \mathscr{C} = R \\
\nu^R P^{-R} P_2^{(\tilde{d}+2)R-\mathscr{C}} + P^{-R-\delta(1-R/\mathscr{C})} + P_2^{-\mathscr{C}}\quad &\text{if } R < \mathscr{C} < (d_1+d_2)R \\
\nu^R P^{-R} \log P_2 + P^{-R-\delta(1-R/\mathscr{C})}+P_2^{-\mathscr{C}} \quad &\text{if } \mathscr{C} = (d_1+d_2)R \\
\nu^RP^{-R-\delta(1-(d_1+d_2)R/\mathscr{C})} + P^{-R-\delta(1-R/\mathscr{C})} +P_2^{-\mathscr{C}} \quad &\text{if } \mathscr{C} > (d_1+d_2)R.
\end{cases}
\end{multline}
\end{lemma}
We expect the main term of $N(P_1,P_2)$ to be of order $P_1^{n_1-Rd_1}P_2^{n_2-Rd_2} = P_1^{n_1}P_2^{n_2} P^{-R}$. Thus the Lemma indicates why it is necessary for us to assume $\mathscr{C} > R(bd_1+d_2)$, at least with this method of proof.
\begin{proof}
We apply Lemma \ref{lem.general_integral_bound} by taking
\begin{equation}
\label{eq.choices}
\varphi(\bm{\alpha}) = \frac{\norm{T(\bm{\alpha})}}{P_1^{n_1}P_2^{n_2}}, \quad r_1(t) = P_1^{-d_1}P_2^{-d_2}t^{-\frac{1}{\mathscr{C}}}, \quad r_2(t) = t^{\frac{\tilde{d}+1}{\mathscr{C}}}, \text{ and } A = P_2^{-\mathscr{C}}.
\end{equation}
Then our assumption \eqref{eq.less_general_aux_ineq} is just \eqref{eq.general_aux_ineq}. We will choose our parameters $k$ and $\ell$ such that the $\sum_{i=k}^{\ell-1}$ term dominates the right hand side of \eqref{eq.general_integral_estimate}. Let
\begin{equation}
\label{eq.choosing_k_l}
k = \left\lceil \log_2P_2^{-\mathscr{C}} \right\rceil, \quad \text{and} \quad \ell = \left\lceil \log_2P^{-\delta} \right\rceil,
\end{equation}
so that we have
\begin{equation*}
\label{eq.k_ell_inequality}
P_2^{-\mathscr{C}} < 2^k \leq 2 P_2^{-\mathscr{C}}, \quad \text{and} \quad P^{-\delta} \leq 2^\ell < 2P^{-\delta}.
\end{equation*}
Without loss of generality we assume $k< \ell$, since otherwise the bound in the assumption \eqref{eq.general_sup_bound} would be sharper than any of those listed in \eqref{eq.many_cases_estimate}.
Substituting our choices \eqref{eq.choices} into \eqref{eq.general_integral_estimate} we get
\begin{multline}
\label{eq.integral_bound_with_choices}
\int_E \frac{\norm{T(\bm{\alpha})}}{P_1^{n_1}P_2^{n_2}} d\bm{\alpha} \ll_R \nu^R2^k + \sum_{i=k}^{\ell-1} 2^i \left( \frac{\nu P_1^{-d_1}P_2^{-d_2}2^{-i/\mathscr{C}}}{\min\left\{ \nu, 2^{i(\tilde{d}+1)/\mathscr{C}} \right\}} \right)^R + \\
\left( \frac{\nu P_1^{-d_1}P_2^{-d_2} 2^{-\ell/\mathscr{C}}}{\min\left\{ \nu, 2^{\ell(\tilde{d}+1)/\mathscr{C}} \right\}} \right)^R \sup_{\bm{\alpha}\in E} \frac{\norm{T(\bm{\alpha})}}{P_1^{n_1}P_2^{n_2}}.
\end{multline}
From \eqref{eq.general_sup_bound} and \eqref{eq.choosing_k_l} we see that
\begin{equation}
\label{eq.estimate_1}
\sup_{\bm{\alpha}\in E} \frac{\norm{T(\bm{\alpha})}}{P_1^{n_1}P_2^{n_2}} \leq P^{-\delta} \leq 2^\ell.
\end{equation}
Further, we clearly have
\begin{equation}
\label{eq.estimate_2}
\frac{ P_1^{-d_1}P_2^{-d_2}2^{-i/\mathscr{C}}}{\min\left\{ \nu, 2^{i(\tilde{d}+1)/\mathscr{C}} \right\}} \leq \nu^{-1}P_1^{-d_1}P_2^{-d_2}2^{-i/\mathscr{C}} + 2^{-i(\tilde{d}+2)/\mathscr{C}}P_1^{-d_1}P_2^{-d_2}.
\end{equation}
Substituting the estimates \eqref{eq.estimate_1} and \eqref{eq.estimate_2} into \eqref{eq.integral_bound_with_choices} we obtain
\begin{equation}
\label{eq.another_estimate}
\int_E \frac{\norm{T(\bm{\alpha})}}{P_1^{n_1}P_2^{n_2}} d\bm{\alpha} \ll_R \nu^R2^k + \sum_{i=k}^\ell \nu^R P_1^{-d_1R}P_2^{-d_2R} 2^{i(1-(\tilde{d}+2)R/\mathscr{C})} + \sum_{i=k}^\ell P_1^{-d_1R}P_2^{-d_2R} 2^{i(1-R/\mathscr{C})}.
\end{equation}
Note now that
\begin{equation}
\label{eq.three_cases_estimate_1}
\sum_{i=k}^\ell 2^{i(1-R(\tilde{d}+2)/\mathscr{C})} \ll_{\mathscr{C},d_i,R}
\begin{cases}
2^{k(1-R(\tilde{d}+2)/\mathscr{C})} \quad &\text{if $\mathscr{C} < (\tilde{d}+2)R$} \\
\ell-k &\text{if $\mathscr{C} = (\tilde{d}+2)R$} \\
2^{\ell(1-R(\tilde{d}+2)/\mathscr{C})} &\text{if $\mathscr{C} > (\tilde{d}+2)R$},
\end{cases}
\end{equation}
where we used $k < \ell$ for the second alternative.
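The case distinction above is just the standard bound for a finite geometric progression; we record it here for convenience since it is used again for the second sum:
\begin{equation*}
\sum_{i=k}^{\ell} 2^{i\kappa} \ll_{\kappa}
\begin{cases}
2^{k\kappa} & \text{if } \kappa < 0, \\
\ell - k + 1 & \text{if } \kappa = 0, \\
2^{\ell\kappa} & \text{if } \kappa > 0,
\end{cases}
\end{equation*}
applied with $\kappa = 1 - R(\tilde{d}+2)/\mathscr{C}$ here and with $\kappa = 1 - R/\mathscr{C}$ below; in the middle case $\ell - k + 1 \leq 2(\ell - k)$ since $k < \ell$.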
Recall from \eqref{eq.choosing_k_l} that we have
\begin{equation*}
2^k \geq P_2^{-\mathscr{C}} \quad \text{and} \quad 2^\ell \leq 2P^{-\delta},
\end{equation*}
so using this in \eqref{eq.three_cases_estimate_1} we get
\begin{equation}
\label{eq.three_cases_estimate_2}
\sum_{i=k}^\ell 2^{i(1-(\tilde{d}+2)R/\mathscr{C})} \ll_{\mathscr{C},d_i,R}
\begin{cases}
P_2^{(\tilde{d}+2)R-\mathscr{C}} \quad &\text{if $\mathscr{C} < (\tilde{d}+2)R$} \\
\log P_2 &\text{if $\mathscr{C} = (\tilde{d}+2)R$} \\
P^{-\delta(1-(\tilde{d}+2)R/\mathscr{C})} &\text{if $\mathscr{C} > (\tilde{d}+2)R$}.
\end{cases}
\end{equation}
Arguing similarly for $\sum_{i=k}^\ell 2^{i(1-R/\mathscr{C})}$ we find
\begin{equation}
\label{eq.three_cases_estimate_3}
\sum_{i=k}^\ell 2^{i(1-R/\mathscr{C})} \ll_{\mathscr{C},d_i,R}
\begin{cases}
P_2^{R-\mathscr{C}} \quad &\text{if $\mathscr{C} < R$} \\
\log P_2 &\text{if $\mathscr{C} = R$} \\
P^{-\delta(1-R/\mathscr{C})} &\text{if $\mathscr{C} > R$}.
\end{cases}
\end{equation}
Finally we note that by our choice of $k$ we have $2^k \leq 2P_2^{-\mathscr{C}}$, and we recall that $\tilde{d}+2 = d_1 + d_2$. Using this, as well as \eqref{eq.three_cases_estimate_2} and \eqref{eq.three_cases_estimate_3}, in \eqref{eq.another_estimate} we deduce the result.
\end{proof}
We will finish this section by defining the major and minor arcs and showing that the minor arcs do not contribute to the main term. For $\Delta > 0$ we define the \emph{major arcs} to be the set given by
\begin{equation*}
\mathfrak{M}(\Delta) \coloneqq \bigcup_{\substack{q \in \mathbb{N} \\ q \leq P^\Delta}} \bigcup_{\substack{0 \leq a_i \leq q \\ (a_1,\hdots, a_R,q) = 1}} \left\{ \bm{\alpha} \in [0,1]^R \colon 2 \nnorm{q\bm{\alpha}- \bm{a}}_\infty < P_1^{-d_1}P_2^{-d_2} P^\Delta \right\},
\end{equation*}
and the \emph{minor arcs} to be given by
\begin{equation*}
\mathfrak{m}(\Delta) \coloneqq [0,1]^R \setminus \mathfrak{M}(\Delta).
\end{equation*}
Write further
\begin{equation}
\label{eq.delta_0_definition}
\delta_0 = \frac{\min_{i=1,2}\left\{n_1+n_2-\dim V_i^* \right\}}{(\tilde{d}+1)2^{\tilde{d}}R}.
\end{equation}
Note that if the forms $F_i$ are linearly independent, then the $V_i^*$ are proper subvarieties of $\mathbb{A}_{\mathbb{C}}^{n_1+n_2}$, so that $\dim V_i^* \leq n_1+n_2-1$, whence $\delta_0 \geq \frac{1}{(\tilde{d}+1)2^{\tilde{d}}R}$. To see this for $V_1^*$, note that requiring
\begin{equation*}
\mathrm{rank}\left( \frac{\partial F_i}{\partial x_j} \right)_{i,j} < R
\end{equation*}
is equivalent to requiring that all the $R \times R$ minors of $\left( \frac{\partial F_i}{\partial x_j} \right)_{i,j}$ vanish.
This defines a system of polynomials of degree $R(d_1+d_2-1)$ in $(\mathfrak{m}athfrak{b}m{x},\mathfrak{m}athfrak{b}m{y})$, which are not all zero unless there exists $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \in \mathfrak{m}athbb{R}^R \setminus \{ \mathfrak{m}athfrak{b}m{0} \}$ such that \mathfrak{m}athfrak{b}egin{equation*} \sum_{i = 1}^R \mathfrak{m}athfrak{b}eta_i \left( \frac{\mathfrak{m}athfrak{p}artial F_i}{\mathfrak{m}athfrak{p}artial x_j} \mathbb{R}ight) = 0 \mathfrak{m}athbb{Q}uad \text{for } j = 1, \mathfrak{m}athbb{H}dots, n_1 \end{equation*} holds identically in $(\mathfrak{m}athfrak{b}m{x},\mathfrak{m}athfrak{b}m{y})$. This is the same as saying that \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athbb{N}abla_{\mathfrak{m}athfrak{b}m{x}} \left( \sum_{i= 1}^R \mathfrak{m}athfrak{b}eta_i F_i \mathbb{R}ight) = 0 \end{equation*} holds identically. From this we find that $\sum_{i= 1}^R \mathfrak{m}athfrak{b}eta_i F_i$ must be a form entirely in the $\mathfrak{m}athfrak{b}m{y}$ variables. But this is a linear combination of homogeneous bidegree $(d_1,d_2)$ forms with $d_1 \geq 1$ and thus we must in fact have $\sum_{i= 1}^R \mathfrak{m}athfrak{b}eta_i F_i= 0$ identically, contradicting linear independence. The argument works analogously for $V_2^*$. The next Lemma shows that the assumption \eqref{eq.general_sup_bound} holds with $E = \mathfrak{m}athfrak{m}(\Delta)$ and $T(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha}) = C^{-1}P_1^{-\varepsilon}S(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha})$. \mathfrak{m}athfrak{b}egin{lemma} \label{lem.sup_bound} Let $0 < \Delta \leq R(\tilde{d}+1)(bd_1+d_2)^{-1}$ and let $\varepsilon > 0$. Then we have the upper bound \mathfrak{m}athfrak{b}egin{equation} \label{eq.sup_bound} \sup_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha} \in \mathfrak{m}athfrak{m}(\Delta)}\mathfrak{m}athbb{N}orm{S(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha})} \ll P_1^{n_1}P_2^{n_2} P^{-\Delta \delta_0 + \varepsilon}. \end{equation} \end{lemma} \mathfrak{m}athfrak{b}egin{proof} The result follows straightforward from \cite[Lemma 4.3]{schindler_bihomogeneous} by setting the parameter $\theta$ to be \mathfrak{m}athfrak{b}egin{equation*} \theta = \frac{\Delta}{(\tilde{d}+1)R}. \end{equation*} If we have $0 < \Delta \leq R(\tilde{d}+1)(bd_1+d_2)^{-1}$ this ensures that the assumption $0 < \theta \leq (bd_1+d_2)^{-1}$ in \cite[Lemma 4.3]{schindler_bihomogeneous} is satisfied. \end{proof} Before we state the next proposition, recall that we assume $P_1 \geq P_2$ throughout, as was mentioned at the beginning of this section. \mathfrak{m}athfrak{b}egin{proposition} \label{prop.minor_arcs_estimate} Let $\varepsilon > 0$ and let $0 < \Delta \leq R(\tilde{d}+1)(bd_1+d_2)^{-1}$. Under the assumptions of Proposition \mathbb{R}ef{prop.main_prop} we have \mathfrak{m}athfrak{b}egin{equation*} \int_{\mathfrak{m}athfrak{m}(\Delta)}S(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha}) d \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha} \ll P_1^{n_1-d_1R}P_2^{n_2-d_2R} P^{-\Delta \delta_0(1-(d_1+d_2)R/\mathfrak{m}athscr{C})+\varepsilon}. 
\end{equation*} \end{proposition} \mathfrak{m}athfrak{b}egin{proof} We apply Lemma \mathbb{R}ef{lem.general_integral_bound} with \mathfrak{m}athfrak{b}egin{equation*} T(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha}) = C^{-1}P^{-\varepsilon} S(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha}), \mathfrak{m}athbb{Q}uad E_0 = [0,1]^R, \mathfrak{m}athbb{Q}uad E = \mathfrak{m}athfrak{m}(\Delta), \mathfrak{m}athbb{Q}uad \text{and} \mathfrak{m}athbb{Q}uad \delta = \Delta \delta_0, \end{equation*} where $C > 0$ is some real number. With these choices \eqref{eq.less_general_aux_ineq} follows from the auxiliary inequality \eqref{eq.aux_ineq} since for any $\varepsilon > 0$ we have $P^{-\varepsilon} \leq P_1^{-\varepsilon}$. From Lemma \mathbb{R}ef{lem.sup_bound} we have the bound \mathfrak{m}athfrak{b}egin{equation*} \sup_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha} \in E}CT(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha}) \ll P_1^{n_1}P_2^{n_2}P^{-\delta}. \end{equation*} We may increase $C$ if necessary so that we recover \eqref{eq.general_sup_bound}. Therefore the hypotheses of Lemma \mathbb{R}ef{lem.good_int_bound}. Since we assume $\mathfrak{m}athscr{C} > (bd_1+d_2)R$, we also note \[ P_2^{-\mathfrak{m}athscr{C}} = P^{-R} P^{R-\mathfrak{m}athscr{C} (bd_1+d_2)^{-1}} \ll_{\mathfrak{m}athscr{C}} P^{-R-\tilde{\delta}}, \] for some $\tilde{\delta} >0$. Therefore if we assume $\mathfrak{m}athscr{C} > (bd_1+d_2)R$ then Lemma~\mathbb{R}ef{lem.good_int_bound} gives \mathfrak{m}athfrak{b}egin{equation*} \int_{\mathfrak{m}athfrak{m}(\Delta)}S(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha}) d \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha} \ll P_1^{n_1-d_1R}P_2^{n_2-d_2R} P^{-\Delta \delta_0(1-(d_1+d_2)R/\mathfrak{m}athscr{C})+\varepsilon}, \end{equation*} as desired. \end{proof} \subsection{The major arcs} The aim of this section is to identify the main term via integrating the exponential sum $S(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha})$ over the major arcs, and analyse the singular integral and singular series appropriately. For $\mathfrak{m}athfrak{b}m{a} \in \mathfrak{m}athbb{Z}^R$ and $q \in \mathfrak{m}athbb{N}$ consider the complete exponential sum \mathfrak{m}athfrak{b}egin{equation*} S_{\mathfrak{m}athfrak{b}m{a},q} \coloneqq q^{-n_1-n_2} \sum_{\mathfrak{m}athfrak{b}m{x},\mathfrak{m}athfrak{b}m{y}} e\left( \frac{\mathfrak{m}athfrak{b}m{a}}{q} \cdot \mathfrak{m}athfrak{b}m{F}(\mathfrak{m}athfrak{b}m{x},\mathfrak{m}athfrak{b}m{y}) \mathbb{R}ight), \end{equation*} where the sum $\sum_{\mathfrak{m}athfrak{b}m{x},\mathfrak{m}athfrak{b}m{y}}$ runs through a complete set of residues modulo $q$. Further, for $P\geq 1$ and $\Delta > 0$ we define the truncated singular series \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athfrak{S}(P) \coloneqq \sum_{q \leq P^\Delta} \sum_{\mathfrak{m}athfrak{b}m{a}} S_{\mathfrak{m}athfrak{b}m{a},q}, \end{equation*} where the sum $\sum_{\mathfrak{m}athfrak{b}m{a}}$ runs over $\mathfrak{m}athfrak{b}m{a} \in \mathfrak{m}athbb{Z}^R$ such that $0 \leq a_i < q$ for $i = 1, \mathfrak{m}athbb{H}dots, R$ and $(a_1, \mathfrak{m}athbb{H}dots, a_R, q) = 1$. 
For $\mathfrak{m}athfrak{b}m{\gamma} \in \mathfrak{m}athbb{R}^R$ we further define \mathfrak{m}athfrak{b}egin{equation*} S_\infty(\mathfrak{m}athfrak{b}m{\gamma}) \coloneqq \int_{\mathfrak{m}athcal{B}_1 \times \mathfrak{m}athcal{B}_2}e \left( \mathfrak{m}athfrak{b}m{\gamma} \cdot \mathfrak{m}athfrak{b}m{F}(\mathfrak{m}athfrak{b}m{u},\mathfrak{m}athfrak{b}m{v}) \mathbb{R}ight) d\mathfrak{m}athfrak{b}m{u} d\mathfrak{m}athfrak{b}m{v}, \end{equation*} and we define the truncated singular integral for $P \geq 1$, $\Delta >0$ as follows \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athfrak{I}(P) \coloneqq \int_{\mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\gamma}}_\infty \leq P^\Delta} S_\infty(\mathfrak{m}athfrak{b}m{\gamma}) d \mathfrak{m}athfrak{b}m{\gamma}. \end{equation*} From now on we assume that our parameter $\Delta>0$ satisfies \mathfrak{m}athfrak{b}egin{equation} \label{eq.delta_assumption} (bd_1+d_2)^{-1}>\Delta(2R+3)+\delta \end{equation} for some $\delta > 0$. Since $\mathfrak{m}athscr{C} > R(bd_1+d_2)$ we are always able to choose such $\Delta$ in terms of $\mathfrak{m}athscr{C}$. Further as in \cite{schindler_bihomogeneous} we now define some slightly modified major arcs $\mathfrak{m}athfrak{M}'(\Delta)$ as follows \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athfrak{M}'(\Delta) \coloneqq \mathfrak{m}athfrak{b}igcup_{1 \leq q \leq P^\Delta} \mathfrak{m}athfrak{b}igcup_{\substack{0 \leq a_i < q \\ (a_1, \mathfrak{m}athbb{H}dots, a_R,q) = 1}}\mathfrak{m}athfrak{M}_{\mathfrak{m}athfrak{b}m{a},q}'(\Delta) , \end{equation*} where $\mathfrak{m}athfrak{M}_{\mathfrak{m}athfrak{b}m{a},q}'(\Delta) = \left\{ \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha} \in [0,1]^R \colon \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha}- \frac{\mathfrak{m}athfrak{b}m{a}}{q}}_\infty < P_1^{-d_1}P_2^{-d_2} P^\Delta \mathbb{R}ight\}$. The sets $\mathfrak{m}athfrak{M}_{\mathfrak{m}athfrak{b}m{a},q}'$ are disjoint for our choice of $\Delta$; for if there is some \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha} \in \mathfrak{m}athfrak{M}_{\mathfrak{m}athfrak{b}m{a},q}'(\Delta) \cap \mathfrak{m}athfrak{M}_{\mathfrak{m}athfrak{b}m{\tilde{a}},\tilde{q}}'(\Delta), \end{equation*} where $\mathfrak{m}athfrak{M}_{\mathfrak{m}athfrak{b}m{\tilde{a}},\tilde{q}}'(\Delta) \mathfrak{m}athbb{N}eq \mathfrak{m}athfrak{M}_{\mathfrak{m}athfrak{b}m{a},q}'(\Delta)$ then there is some $i \in \{1, \mathfrak{m}athbb{H}dots, R \}$ such that \mathfrak{m}athfrak{b}egin{equation*} P^{-2\Delta} \leq \frac{1}{q \tilde{q}} \leq \mathfrak{m}athbb{N}orm{\frac{a_i}{q}- \frac{\tilde{a}_i}{\tilde{q}}} \leq 2P^{\Delta-1}, \end{equation*} which is impossible for large $P$, since by our assumption \eqref{eq.delta_assumption} we have $3 \Delta -1 < 0$. Further we note that clearly $\mathfrak{m}athfrak{M}'(\Delta) \supseteq \mathfrak{m}athfrak{M}(\Delta)$ whence $\mathfrak{m}athfrak{m}'(\Delta) \subseteq \mathfrak{m}athfrak{m}(\Delta)$ and so the conclusions of Proposition \mathbb{R}ef{prop.minor_arcs_estimate} hold with $\mathfrak{m}athfrak{m}(\Delta)$ replaced by $\mathfrak{m}athfrak{m}'(\Delta)$. The next result expands the exponential sum $S(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha})$ when $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha}$ can be well-approximated by a rational number. 
In particular for our applications it is important to obtain an error term in which the constant does not depend on $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}$, whence we cannot just use Lemma 5.3 in~\cite{schindler_bihomogeneous} as it is stated there. \mathfrak{m}athfrak{b}egin{lemma} \label{lem.approx_exp_sum_major_arc} Let $\Delta>0$ satisfy \eqref{eq.delta_assumption}, let $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha} \in \mathfrak{m}athfrak{M}'_{\mathfrak{m}athfrak{b}m{a},q}(\Delta)$ where $q \leq P^\Delta$, and write $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha} = \mathfrak{m}athfrak{b}m{a}/q + \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}$ such that $1 \leq a_i < q$ and $(a_1, \mathfrak{m}athbb{H}dots, a_R,q) = 1$. If $P_1 \geq P_2 > 1$ then \mathfrak{m}athfrak{b}egin{equation} \label{eq.approx_S(alpha)_major_arcs} S(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha}) = P_1^{n_1}P_2^{n_2} S_{\mathfrak{m}athfrak{b}m{a},q} S_\infty(P \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}) + O\left(qP_1^{n_1}P_2^{n_2-1}\left(1+ P\mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}_\infty \mathbb{R}ight) \mathbb{R}ight), \end{equation} where the implied constant in the error term does not depend on $q$ or on $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}$. \end{lemma} \mathfrak{m}athfrak{b}egin{proof} In the sum for $S(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha})$ we begin by writing $\mathfrak{m}athfrak{b}m{x} = \mathfrak{m}athfrak{b}m{z}^{(1)} + q \mathfrak{m}athfrak{b}m{x}'$ and $\mathfrak{m}athfrak{b}m{y} = \mathfrak{m}athfrak{b}m{z}^{(2)} + q \mathfrak{m}athfrak{b}m{y}'$ where $0 \leq z_i^{(1)} < q$ and $0 \leq z_j^{(2)} < q$ for all $1 \leq i \leq n_1$ and for all $1 \leq j \leq n_2$. A simple calculation now shows \mathfrak{m}athfrak{b}egin{align} S(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha}) &= \sum_{\mathfrak{m}athfrak{b}m{x} \in P_1 \mathfrak{m}athcal{B}_1} \sum_{\mathfrak{m}athfrak{b}m{y} \in P_2 \mathfrak{m}athcal{B}_2} e \left( \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha} \cdot \mathfrak{m}athfrak{b}m{F} (\mathfrak{m}athfrak{b}m{x}, \mathfrak{m}athfrak{b}m{y}) \mathbb{R}ight) \mathfrak{m}athbb{N}onumber \\ &= \sum_{\mathfrak{m}athfrak{b}m{z}^{(1)}, \mathfrak{m}athfrak{b}m{z}^{(2)} \, \mathfrak{m}athrm{mod} \, q} e \left( \frac{\mathfrak{m}athfrak{b}m{a}}{q} \cdot \mathfrak{m}athfrak{b}m{F} (\mathfrak{m}athfrak{b}m{z}^{(1)}, \mathfrak{m}athfrak{b}m{z}^{(2)}) \mathbb{R}ight) \tilde{S}(\mathfrak{m}athfrak{b}m{z}^{(1)}, \mathfrak{m}athfrak{b}m{z}^{(2)}) \label{eq.summing_things} \end{align} where \mathfrak{m}athfrak{b}egin{equation*} \tilde{S}(\mathfrak{m}athfrak{b}m{z}^{(1)}, \mathfrak{m}athfrak{b}m{z}^{(2)}) = \sum_{\mathfrak{m}athfrak{b}m{x}', \mathfrak{m}athfrak{b}m{y}'} e \left( \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \cdot \mathfrak{m}athfrak{b}m{F}(q\mathfrak{m}athfrak{b}m{x}'+\mathfrak{m}athfrak{b}m{z}^{(1)},q\mathfrak{m}athfrak{b}m{y}'+\mathfrak{m}athfrak{b}m{z}^{(2)} ) \mathbb{R}ight), \end{equation*} where $\mathfrak{m}athfrak{b}m{x}', \mathfrak{m}athfrak{b}m{y}'$ in the sum runs through integer tuples such that $q\mathfrak{m}athfrak{b}m{x}'+\mathfrak{m}athfrak{b}m{z}^{(1)} \in P_1 \mathfrak{m}athcal{B}_1$ and $q\mathfrak{m}athfrak{b}m{y}'+\mathfrak{m}athfrak{b}m{z}^{(2)} \in P_2 \mathfrak{m}athcal{B}_2$ is satisfied. 
Now consider $\mathfrak{m}athfrak{b}m{x}', \mathfrak{m}athfrak{b}m{x}''$ and $\mathfrak{m}athfrak{b}m{y}', \mathfrak{m}athfrak{b}m{y}''$ such that \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{x}'-\mathfrak{m}athfrak{b}m{x}''}_\infty, \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{y}'-\mathfrak{m}athfrak{b}m{y}''}_\infty \leq 2. \end{equation*} Then for all $i = 1, \mathfrak{m}athbb{H}dots, R$ we have \mathfrak{m}athfrak{b}egin{multline*} \mathfrak{m}athbb{N}orm{F_i(q \mathfrak{m}athfrak{b}m{x}' + \mathfrak{m}athfrak{b}m{z}^{(1)}, q\mathfrak{m}athfrak{b}m{y}' + \mathfrak{m}athfrak{b}m{z}^{(2)}) - F_i(q \mathfrak{m}athfrak{b}m{x}'' + \mathfrak{m}athfrak{b}m{z}^{(1)}, q\mathfrak{m}athfrak{b}m{y}'' + \mathfrak{m}athfrak{b}m{z}^{(2)}) } \\ \ll qP_1^{d_1-1} P_2^{d_2} + qP_1^{d_1} P_2^{d_2-1} \ll qP_1^{d_1} P_2^{d_2-1}, \end{multline*} where we used $P_1 \geq P_2 > 1$ for the last estimate. We note that the implied constant here does not depend on $q$. We now use this to replace the sum in $\tilde{S}$ by an integral to obtain \mathfrak{m}athfrak{b}egin{multline*} \tilde{S}(\mathfrak{m}athfrak{b}m{z}^{(1)}, \mathfrak{m}athfrak{b}m{z}^{(2)}) = \int_{q \tilde{\mathfrak{m}athfrak{b}m{v}} \in P_1 \mathfrak{m}athcal{B}_1} \int_{q \tilde{\mathfrak{m}athfrak{b}m{w}} \in P_2 \mathfrak{m}athcal{B}_2} e \left( \sum_{i=1}^R \mathfrak{m}athfrak{b}eta_i F_i(q \tilde{\mathfrak{m}athfrak{b}m{v}}, q\tilde{\mathfrak{m}athfrak{b}m{w}}) \mathbb{R}ight) d \tilde{\mathfrak{m}athfrak{b}m{v}} d\tilde{\mathfrak{m}athfrak{b}m{w}} \\ + O \left( \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}_\infty qP_1^{d_1} P_2^{d_2-1} \left( \frac{P_1}{q} \mathbb{R}ight)^{n_1} \left( \frac{P_2}{q} \mathbb{R}ight)^{n_2} + \left( \frac{P_1}{q} \mathbb{R}ight)^{n_1} \left( \frac{P_2}{q} \mathbb{R}ight)^{n_2-1} \mathbb{R}ight), \end{multline*} where we used that $q \leq P_2$, which is implied by our assumptions, but we mention here for the convenience of the reader. In the integral above we perform a substitution $\mathfrak{m}athfrak{b}m{v} = qP_1^{-1}\tilde{\mathfrak{m}athfrak{b}m{v}}$ and $\mathfrak{m}athfrak{b}m{w} = qP_2^{-1}\tilde{\mathfrak{m}athfrak{b}m{w}}$ to get \mathfrak{m}athfrak{b}egin{equation*} \tilde{S}(\mathfrak{m}athfrak{b}m{z}^{(1)}, \mathfrak{m}athfrak{b}m{z}^{(2)}) = P_1^{n_1} P_2^{n_2} q^{-n_1-n_2} \mathfrak{m}athfrak{I} (P \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}) + q^{-n_1-n_2} O \left( qP_1^{n_1}P_2^{n_2-1}\left(1+ P\mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}_\infty \mathbb{R}ight) \mathbb{R}ight), \end{equation*} where the implied constant does not depend on $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}$ or $q$. Substituting this into \eqref{eq.summing_things} gives the result. 
\end{proof} From the Lemma und using that the sets $\mathfrak{m}athfrak{M}_{\mathfrak{m}athfrak{b}m{a},q}'$ are disjoint we deduce \mathfrak{m}athfrak{b}egin{multline} \label{eq.needed_once} \int_{\mathfrak{m}athfrak{M}'(\Delta)} S(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha})d \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha} = P_1^{n_1}P_2^{n_2} \sum_{1 \leq q \leq P^\Delta} \sum_{\mathfrak{m}athfrak{b}m{a}} S_{\mathfrak{m}athfrak{b}m{a},q} \int_{\mathfrak{m}athbb{N}orm{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}} S_\infty (P \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}) d \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \\ + O\left(P_1^{n_1}P_2^{n_2} P^{2 \Delta} P_2^{-1} \mathfrak{m}athrm{meas}\left(\mathfrak{m}athfrak{M}'(\Delta)\mathbb{R}ight) \mathbb{R}ight), \end{multline} where we used $q \leq P^\Delta$ and $P \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}_\infty \leq P^\Delta$ for the error term. Now we can bound the measure of the major arcs by \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athrm{meas}(\mathfrak{m}athfrak{M}'(\Delta)) \ll \sum_{q \leq P^\Delta} q^R P^{-R+\Delta R} \ll P^{-R+ \Delta(2R+1)}. \end{equation*} Using this and making the substitution $\mathfrak{m}athfrak{b}m{\gamma} = P \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}$ in the integral in \eqref{eq.needed_once} we find \mathfrak{m}athfrak{b}egin{equation} \label{eq.integral_major_arcs} \int_{\mathfrak{m}athfrak{M}'(\Delta)} S(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha})d \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha} = P_1^{n_1}P_2^{n_2} P^{-R} \mathfrak{m}athfrak{S}(P) \mathfrak{m}athfrak{I}(P) \\ + O \left( P_1^{n_1} P_2^{n_2} P^{-R+\Delta(2R+3)-1/(bd_1+d_2)} \mathbb{R}ight). \end{equation} It becomes transparent why the assumption~\eqref{eq.delta_assumption} is in place, because then the error term in~\eqref{eq.integral_major_arcs} is bounded by $O(P_1^{n_1}P_2^{n_2}P^{-R-\delta})$ and thus is of smaller order than the main term. We now focus on the singular series $\mathfrak{m}athfrak{S}(P)$ and the singular integral $\mathfrak{m}athfrak{I}(P)$ in the next two Lemmas. \mathfrak{m}athfrak{b}egin{lemma} \label{lem.singular_series} Let $\varepsilon > 0$ and assume that the bound \eqref{eq.aux_ineq} holds for some $C \geq 1$, $\mathfrak{m}athscr{C} > 1+b\varepsilon$, for all $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha}, \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \in \mathfrak{m}athbb{R}^R$ and all real $P_1 \geq P_2 > 1$. Then we have the following: \mathfrak{m}athfrak{b}egin{itemize} \item[(i)] For all $\varepsilon' > 0$ such that $\varepsilon' = O_{\mathfrak{m}athscr{C}}(\varepsilon)$ we have \mathfrak{m}athfrak{b}egin{equation*} \label{eq.aux_ineq_consequence} \mathfrak{m}in\left\{ \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}}, \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a}',q'}} \mathbb{R}ight\} \ll_C (q'+q)^\varepsilon \mathfrak{m}athbb{N}norm{\frac{\mathfrak{m}athfrak{b}m{a}}{q} - \frac{\mathfrak{m}athfrak{b}m{a}'}{q'}}_\infty^{\frac{\mathfrak{m}athscr{C}-\varepsilon'}{\tilde{d}+1}} \end{equation*} for all $q,q' \in \mathfrak{m}athbb{N}$ and all $\mathfrak{m}athfrak{b}m{a} \in \{1, \mathfrak{m}athbb{H}dots, q \}^R$ and $\mathfrak{m}athfrak{b}m{a}' \in \{1, \mathfrak{m}athbb{H}dots, q' \}^R$ with $\frac{\mathfrak{m}athfrak{b}m{a}}{q} \mathfrak{m}athbb{N}eq \frac{\mathfrak{m}athfrak{b}m{a}'}{q'}$. 
\item[(ii)] If $\mathfrak{m}athscr{C} > \varepsilon'$ then for all $t \in \mathfrak{m}athbb{R}_{>0}$ and $q_0 \in \mathfrak{m}athbb{N}$ we have \mathfrak{m}athfrak{b}egin{equation*} \# \left\{ \frac{\mathfrak{m}athfrak{b}m{a}}{q} \in [0,1]^R \cap \mathfrak{m}athbb{Q}^R \colon q \leq q_0, \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}} \geq t \mathbb{R}ight\} \ll_C (q_0^{-\varepsilon} t)^{-\frac{(\tilde{d}+1)R}{\mathfrak{m}athscr{C}-\varepsilon'}}, \end{equation*} where the fractions in the set above are in lowest terms. \item[(iii)] Assume that the forms $F_i(\mathfrak{m}athfrak{b}m{x},\mathfrak{m}athfrak{b}m{y})$ are linearly independent. Then for all $q \in \mathfrak{m}athbb{N}$ and $\mathfrak{m}athfrak{b}m{a} \in \mathfrak{m}athbb{Z}^R$ with $(a_1, \mathfrak{m}athbb{H}dots, a_R, q) = 1$ there exists some $\mathfrak{m}athbb{N}u >0$ depending at most on $d_i$ and $R$ such that \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}} \ll q^{-\mathfrak{m}athbb{N}u}. \end{equation*} \item[(iv)] Assume $\mathfrak{m}athscr{C} >(\tilde{d}+1)R$ and assume the forms $F_i(\mathfrak{m}athfrak{b}m{x},\mathfrak{m}athfrak{b}m{y})$ are linearly independent. Then the \emph{singular series} \mathfrak{m}athfrak{b}egin{equation} \label{eq.def_singular_series} \mathfrak{m}athfrak{S} = \sum_{q=1}^\infty \sum_{\mathfrak{m}athfrak{b}m{a} \, \mathfrak{m}athrm{ mod } \, q} S_{\mathfrak{m}athfrak{b}m{a},q}\end{equation} exists and converges absolutely, with \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athbb{N}orm{\mathfrak{m}athfrak{S}(P) - \mathfrak{m}athfrak{S}} \ll_{C, \mathfrak{m}athscr{C}} P^{-\Delta \delta_1}, \end{equation*} for some $\delta_1 > 0$ depending only on $\mathfrak{m}athscr{C}, d_i$ and $R$. \end{itemize} \end{lemma} \mathfrak{m}athfrak{b}egin{proof}[Proof of \emph{(i)}] Take $\mathfrak{m}athcal{B}_i = [0,1]^{n_i}$ so that $S_\infty(\mathfrak{m}athfrak{b}m{0}) = 1$. Therefore \eqref{eq.approx_S(alpha)_major_arcs} implies that \mathfrak{m}athfrak{b}egin{equation*} \frac{S\left( \frac{\mathfrak{m}athfrak{b}m{a}}{q} \mathbb{R}ight)}{P_1^{n_1}P_2^{n_2}} = S_{\mathfrak{m}athfrak{b}m{a},q} + O \left( q P_2^{-1} \mathbb{R}ight) \mathfrak{m}athbb{Q}uad \text{and} \mathfrak{m}athbb{Q}uad \frac{S\left( \frac{\mathfrak{m}athfrak{b}m{a}'}{q'} \mathbb{R}ight)}{P_1^{n_1}P_2^{n_2}} = S_{\mathfrak{m}athfrak{b}m{a}',q'} + O \left( q' P_2^{-1} \mathbb{R}ight). \end{equation*} Using this and the bound \eqref{eq.aux_ineq} we obtain \mathfrak{m}athfrak{b}egin{multline} \label{eq.inequality_1} \mathfrak{m}in \left\{ \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}}, \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a}',q'}} \mathbb{R}ight\} \leq CP_1^\varepsilon P^{-\mathfrak{m}athscr{C}} \mathfrak{m}athbb{N}norm{\frac{\mathfrak{m}athfrak{b}m{a}}{q} - \frac{\mathfrak{m}athfrak{b}m{a}'}{q'}}_\infty^{-\mathfrak{m}athscr{C}} + \\ CP_1^\varepsilon \mathfrak{m}athbb{N}norm{\frac{\mathfrak{m}athfrak{b}m{a}}{q} - \frac{\mathfrak{m}athfrak{b}m{a}'}{q'}}_\infty^{\frac{\mathfrak{m}athscr{C}}{\tilde{d}+1}} + O \left( (q'+q)P_2^{-1} \mathbb{R}ight), \end{multline} where we note that $P_1^\varepsilon P_2^{-\mathfrak{m}athscr{C}} = O(P_2^{-1})$ due to our assumptions on $\mathfrak{m}athscr{C}$. Now set \mathfrak{m}athfrak{b}egin{equation*} P_1 = P_2 = (q+q') \mathfrak{m}athbb{N}norm{\frac{\mathfrak{m}athfrak{b}m{a}}{q} - \frac{\mathfrak{m}athfrak{b}m{a}'}{q'}}_\infty^{-\frac{1+\mathfrak{m}athscr{C}}{\tilde{d}+1}}. 
\end{equation*} Note $(q+q') \geq 1$ and $ \mathfrak{m}athbb{N}norm{\frac{\mathfrak{m}athfrak{b}m{a}}{q} - \frac{\mathfrak{m}athfrak{b}m{a}'}{q'}}_\infty \leq 1$ so that this gives $P_i \geq 1$. Substituting these choices into \eqref{eq.inequality_1} we get \mathfrak{m}athfrak{b}egin{multline*} \mathfrak{m}in \left\{ \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}}, \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a}',q'}} \mathbb{R}ight\} \leq P_1^{ \varepsilon} (q+q')^{-\mathfrak{m}athscr{C}(d_1+d_2)} \mathfrak{m}athbb{N}norm{\frac{\mathfrak{m}athfrak{b}m{a}}{q} - \frac{\mathfrak{m}athfrak{b}m{a}'}{q'}}_\infty^{\frac{\mathfrak{m}athscr{C}^2+\mathfrak{m}athscr{C}}{\tilde{d}+1}(d_1+d_2)-\mathfrak{m}athscr{C}} + \\ CP_1^{\varepsilon} \mathfrak{m}athbb{N}norm{\frac{\mathfrak{m}athfrak{b}m{a}}{q} - \frac{\mathfrak{m}athfrak{b}m{a}'}{q'}}_\infty^{\frac{\mathfrak{m}athscr{C}}{\tilde{d}+1}} + O \left(\mathfrak{m}athbb{N}norm{\frac{\mathfrak{m}athfrak{b}m{a}}{q} - \frac{\mathfrak{m}athfrak{b}m{a}'}{q'}}_\infty^{\frac{1+\mathfrak{m}athscr{C}}{(\tilde{d}+1)}} \mathbb{R}ight). \end{multline*} Noting again that $(q+q') \geq 1$, $ \mathfrak{m}athbb{N}norm{\frac{\mathfrak{m}athfrak{b}m{a}}{q} - \frac{\mathfrak{m}athfrak{b}m{a}'}{q'}}_\infty \leq 1$ and also that $\frac{\mathfrak{m}athscr{C}^2+\mathfrak{m}athscr{C}}{\tilde{d}+1}(d_1+d_2)-\mathfrak{m}athscr{C} \geq \frac{\mathfrak{m}athscr{C}}{\tilde{d}+1}$ we see that the second term on the right hand side above dominates the expression. Hence we finally obtain \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}in \left\{ \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}}, \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a}',q'}} \mathbb{R}ight\} \ll_C P_1^{ \varepsilon} \mathfrak{m}athbb{N}norm{\frac{\mathfrak{m}athfrak{b}m{a}}{q} - \frac{\mathfrak{m}athfrak{b}m{a}'}{q'}}_\infty^{\frac{\mathfrak{m}athscr{C}}{\tilde{d}+1}} = (q'+q)^\varepsilon \mathfrak{m}athbb{N}norm{\frac{\mathfrak{m}athfrak{b}m{a}}{q} - \frac{\mathfrak{m}athfrak{b}m{a}'}{q'}}_\infty^{\frac{\mathfrak{m}athscr{C}-\varepsilon'}{\tilde{d}+1}}, \end{equation*} for some $\varepsilon' = O_{\mathfrak{m}athscr{C}}(\varepsilon)$. \end{proof} \mathfrak{m}athfrak{b}egin{proof}[Proof of \emph{(ii)}] This now follows almost directly from (i). The points in the set \mathfrak{m}athfrak{b}egin{equation*} \left\{ \frac{\mathfrak{m}athfrak{b}m{a}}{q} \in [0,1]^R \cap \mathfrak{m}athbb{Q}^R \colon q \leq q_0, \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}} \geq t \mathbb{R}ight\} \end{equation*} are separated by gaps of size $\gg_C (q_0^{-\varepsilon} t )^{\frac{\tilde{d}+1}{\mathfrak{m}athscr{C}-\varepsilon'}}$. Hence at most $O_{C}((q_0^{-\varepsilon} t )^{-\frac{\tilde{d}+1}{\mathfrak{m}athscr{C}-\varepsilon'}})$ fit in the box $[0,1]^R$ so the result follows. \end{proof} \mathfrak{m}athfrak{b}egin{proof}[Proof of \emph{(iii)}] Setting $P_1=P_2=q$ and $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha} = \mathfrak{m}athfrak{b}m{a}/ q$ we find $S_{\mathfrak{m}athfrak{b}m{a},q} = q^{-n_1-n_2} S(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha})$. Let $\delta_0$ be defined as in \eqref{eq.delta_0_definition}. We can define $\Delta$ by $(d_1+d_2) \Delta = 1- \varepsilon''$ for some $\varepsilon'' \in (0,1)$. We claim that $\mathfrak{m}athfrak{b}m{a}/q$ does not lie in the major arcs $\mathfrak{m}athfrak{M}(\Delta)$ if $(a_1, \mathfrak{m}athbb{H}dots, a_r, q) = 1$. 
For if, then there exist $q', \mathfrak{m}athfrak{b}m{a}'$ such that \mathfrak{m}athfrak{b}egin{equation*} 1 \leq q' \leq q^{(d_1+d_2) \Delta}, \end{equation*} and \mathfrak{m}athfrak{b}egin{equation*} 2 \mathfrak{m}athbb{N}orm{q'a_i - q a_i'} \leq q^{1-d_1-d_2}q^{(d_1+d_2) \Delta} < 1, \end{equation*} which is clearly impossible. The bound \eqref{eq.sup_bound} applied to our situation gives \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}} \ll q^{-R \delta_0(1-\varepsilon'')+\varepsilon}. \end{equation*} As the forms $F_i$ are linearly independent we know that $\delta_0 \geq \frac{1}{(\tilde{d}+1)2^{\tilde{d}}R}$. Thus, choosing some small enough $\varepsilon$ delivers the result. \end{proof} \mathfrak{m}athfrak{b}egin{proof}[Proof of \emph{(iv)}] For $Q > 0$ let \mathfrak{m}athfrak{b}egin{equation*} s(Q) = \sum_{\substack{\mathfrak{m}athfrak{b}m{a}/q \in [0,1)^R \\ Q < q \leq 2Q}} \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}}, \end{equation*} where $\sum_{\mathfrak{m}athfrak{b}m{a}/q \in [0,1)^R}$ is shorthand for the sum $\sum_{q=1}^\infty \sum_{\substack{\mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{a}}_\infty \leq q}}$ such that $(a_1, \mathfrak{m}athbb{H}dots, a_R,q) = 1$. We claim that $s(Q) \ll_{C, \mathfrak{m}athscr{C}} Q^{-\delta_1}$ for some $\delta_1 > 0$. To see this, let $\ell \in \mathfrak{m}athbb{Z}$. Then \mathfrak{m}athfrak{b}egin{multline} \label{eq.s(Q)_first_est} s(Q) = \sum_{\substack{\mathfrak{m}athfrak{b}m{a}/q \in [0,1)^R \\ Q < q \leq 2Q \\ \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}} \geq 2^{-\ell}}} \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}} + \sum_{i = \ell}^\infty \sum_{\substack{\mathfrak{m}athfrak{b}m{a}/q \in [0,1)^R \\ Q < q \leq 2Q \\ 2^{-i} > \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}} \geq 2^{-i-1}}} \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}} \\ \leq \# \left\{ \frac{\mathfrak{m}athfrak{b}m{a}}{q} \in [0,1)^R \cap \mathfrak{m}athbb{Q}^R \colon q \leq 2Q, \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}} \geq 2^{-\ell} \mathbb{R}ight\} \cdot \sup_{q > Q} \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}} \\ + \sum_{i = \ell}^\infty \# \left\{ \frac{\mathfrak{m}athfrak{b}m{a}}{q} \in [0,1)^R \cap \mathfrak{m}athbb{Q}^R \colon q \leq 2Q, \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}} \geq 2^{-i-1} \mathbb{R}ight\} \cdot 2^{-i}. \end{multline} Now from (ii) we know \mathfrak{m}athfrak{b}egin{equation*} \# \left\{ \frac{\mathfrak{m}athfrak{b}m{a}}{q} \in [0,1)^R \cap \mathfrak{m}athbb{Q}^R \colon q \leq 2Q, \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}} \geq t \mathbb{R}ight\} \ll_C (Q^{-\varepsilon}t)^{-\frac{(\tilde{d}+1)R}{\mathfrak{m}athscr{C}-\varepsilon'}}, \end{equation*} and from (iii) we know, since $F_i$ are linearly independent there is some $\mathfrak{m}athbb{N}u >0$ such that \mathfrak{m}athfrak{b}egin{equation*} \sup_{q > Q} \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}} \ll Q^{-\mathfrak{m}athbb{N}u}. \end{equation*} Using these estimates in \eqref{eq.s(Q)_first_est} we get \mathfrak{m}athfrak{b}egin{equation*} s(Q) \ll_C Q^{O_{\mathfrak{m}athscr{C}}(\varepsilon) - \mathfrak{m}athbb{N}u} 2^{\ell \frac{(\tilde{d}+1)R}{\mathfrak{m}athscr{C}-\varepsilon'}} + Q^{O_{\mathfrak{m}athscr{C}}(\varepsilon)} \sum_{i = \ell}^\infty 2^{(i+1) \left( \frac{(\tilde{d}+1)R}{\mathfrak{m}athscr{C}-\varepsilon''} \mathbb{R}ight) - i}. 
\end{equation*} Since we assumed $\mathfrak{m}athscr{C} > (\tilde{d}+1)R$ and since $\varepsilon'$ is small in terms of $\mathfrak{m}athscr{C}$ we may also assume $\mathfrak{m}athscr{C} > (\tilde{d}+1)R + \varepsilon'$. Therefore, summing the geometric expression gives \mathfrak{m}athfrak{b}egin{equation*} s(Q) \ll_{C,\mathfrak{m}athscr{C}} Q^{O_{\mathfrak{m}athscr{C}}(\varepsilon)} 2^{\ell \frac{(\tilde{d}+1)R}{\mathfrak{m}athscr{C}-\varepsilon'}} \left( Q^{-\mathfrak{m}athbb{N}u}+ 2^{-\ell} \mathbb{R}ight). \end{equation*} Now choose $\ell = \lfloor \log_2 Q^{\mathfrak{m}athbb{N}u} \mathbb{R}floor$ to get \mathfrak{m}athfrak{b}egin{equation*} s(Q) \ll Q^{\mathfrak{m}athbb{N}u \frac{(\tilde{d}+1)R-\mathfrak{m}athscr{C}}{ \mathfrak{m}athscr{C}} + O_{\mathfrak{m}athscr{C}}(\varepsilon)}. \end{equation*} Letting $\varepsilon$ be small enough in terms of $\mathfrak{m}athscr{C}, d_i$, $R$ we get some $\delta_1 > 0$ depending on $\mathfrak{m}athscr{C}, d_i$ and $R$ such that \mathfrak{m}athfrak{b}egin{equation*} s(Q) \ll Q^{-\delta_1}, \end{equation*} which proves the claim. Finally using this and splitting the sum into dyadic intervals we find \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athbb{N}orm{\mathfrak{m}athfrak{S}(P)-\mathfrak{m}athfrak{S}} \leq \sum_{\substack{\mathfrak{m}athfrak{b}m{a}/q \in [0,1)^R \\ q > P^\Delta}} \mathfrak{m}athbb{N}orm{S_{\mathfrak{m}athfrak{b}m{a},q}} = \sum_{k=0}^\infty\sum_{\substack{Q = 2^k P^\Delta}} s(Q) \ll \sum_{k=0}^\infty \left( 2^k P^\Delta \mathbb{R}ight)^{-\delta_1}, \end{equation*} which proves (iv). \end{proof} The next Lemma handles the singular integral. \mathfrak{m}athfrak{b}egin{lemma} \label{lem.singular_integral} Let $\varepsilon > 0$ and assume that the bound \eqref{eq.aux_ineq} holds for some $C \geq 1$, $\mathfrak{m}athscr{C} > 1+b\varepsilon$ and for all $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha}, \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \in \mathfrak{m}athbb{R}^R$ and all real $P_1 \geq P_2 > 1$. Then: \mathfrak{m}athfrak{b}egin{itemize} \item[(i)] For all $\mathfrak{m}athfrak{b}m{\gamma} \in \mathfrak{m}athbb{R}^R$ we have \mathfrak{m}athfrak{b}egin{equation*} S_\infty(\mathfrak{m}athfrak{b}m{\gamma}) \ll_C \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\gamma}}_\infty^{-\mathfrak{m}athscr{C} + \varepsilon'}, \end{equation*} for some $\varepsilon >0$ such that $\varepsilon' = O_{\mathfrak{m}athscr{C}}(\varepsilon)$. \item[(ii)] Assume that $\mathfrak{m}athscr{C}-\varepsilon' > R$. Then for all $P_1, P_2 > 1$ we have \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athbb{N}orm{\mathfrak{m}athfrak{I}(P) - \mathfrak{m}athfrak{I}} \ll_{\mathfrak{m}athscr{C},C,\varepsilon'} P^{- \Delta (\mathfrak{m}athscr{C}-\varepsilon'-R)}, \end{equation*} where $\mathfrak{m}athfrak{I}$ is the \emph{singular integral} \mathfrak{m}athfrak{b}egin{equation} \label{eq.def_singular_integral} \mathfrak{m}athfrak{I} = \int_{\mathfrak{m}athfrak{b}m{\gamma} \in \mathfrak{m}athbb{R}^R} S_\infty(\mathfrak{m}athfrak{b}m{\gamma}) d \mathfrak{m}athfrak{b}m{\gamma}. \end{equation} In particular we see that $\mathfrak{m}athfrak{I}$ exists and converges absolutely. \end{itemize} \end{lemma} \mathfrak{m}athfrak{b}egin{proof}[Proof of \emph{(i)}] It is easy to see that for all $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \in \mathfrak{m}athbb{R}^R$ we have $\mathfrak{m}athbb{N}orm{S(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta})} \leq \mathfrak{m}athbb{N}orm{S(\mathfrak{m}athfrak{b}m{0})}$. 
Thus applying \eqref{eq.aux_ineq} with $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{a}lpha} = \mathfrak{m}athfrak{b}m{0}$ and $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} = P^{-1}\mathfrak{m}athfrak{b}m{\gamma}$ we get \mathfrak{m}athfrak{b}egin{equation} \label{eq.used_1} \mathfrak{m}athbb{N}orm{S(P^{-1} \mathfrak{m}athfrak{b}m{\gamma})} \leq CP_1^{n_1}P_2^{n_2} P_1^\varepsilon \mathfrak{m}ax \left\{P_2^{-1} \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\gamma}}_\infty^{-1}, P^{-\frac{1}{\tilde{d}+1}} \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\gamma}}^{\frac{1}{\tilde{d}+1}} \mathbb{R}ight\}^{\mathfrak{m}athscr{C}}. \end{equation} Now from \eqref{eq.approx_S(alpha)_major_arcs} with $\mathfrak{m}athfrak{b}m{a} = \mathfrak{m}athfrak{b}m{0}$ and $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} = P^{-1} \mathfrak{m}athfrak{b}m{\gamma}$ we have \mathfrak{m}athfrak{b}egin{equation} \label{eq.used_2} S(P^{-1} \mathfrak{m}athfrak{b}m{\gamma}) = P_1^{n_1}P_2^{n_2} S_\infty(\mathfrak{m}athfrak{b}m{\gamma}) + O \left( P_1^{n_1}P_2^{n_2-1}(1+\mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\gamma}}_\infty) \mathbb{R}ight), \end{equation} where we used as in the proof of part (i) Lemma~\mathbb{R}ef{lem.singular_series} that $P_1^\varepsilon P_2^{-\mathfrak{m}athscr{C}} \leq P_2^{-1}$ due to our assumptions on $\mathfrak{m}athscr{C}$. Combining \eqref{eq.used_1} and \eqref{eq.used_2} we obtain \mathfrak{m}athfrak{b}egin{equation*} S_\infty(\mathfrak{m}athfrak{b}m{\gamma}) \ll_C P_1^\varepsilon \mathfrak{m}ax \left\{ \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\gamma}}_\infty^{-1}, P^{-\frac{1}{\tilde{d}+1}} \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\gamma}}_\infty^{\frac{1}{\tilde{d}+1}} \mathbb{R}ight\}^{\mathfrak{m}athscr{C}} + P_2^{-1} + \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\gamma}}_\infty P_2^{-1}. \end{equation*} Taking $P_1=P_2 = \mathfrak{m}ax\{1, \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\gamma}}_\infty^{1+\mathfrak{m}athscr{C}}\}$ gives the result. \end{proof} \mathfrak{m}athfrak{b}egin{proof}[Proof of \emph{(ii)}] For this simply note that by part (i) we get \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athbb{N}orm{\mathfrak{m}athfrak{I}(P) - \mathfrak{m}athfrak{I}} = \int_{\mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\gamma}}_\infty \geq P^\Delta} S_\infty(\mathfrak{m}athfrak{b}m{\gamma}) d \mathfrak{m}athfrak{b}m{\gamma} \ll_{\mathfrak{m}athscr{C},C,\varepsilon'} \int_{\mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\gamma}}_\infty \geq P^\Delta} \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\gamma}}_\infty^{-\mathfrak{m}athscr{C}-\varepsilon'} d\mathfrak{m}athfrak{b}m{\gamma} \ll P^{-\Delta(\mathfrak{m}athscr{C}-\varepsilon'-R)}, \end{equation*} where the last estimate follows since we assumed $\mathfrak{m}athscr{C}-\varepsilon' > R$. \end{proof} Before we finish the proof of the main result we state two different expressions for the singular series and the singular integral that will be useful later on. If $\mathfrak{m}athscr{C} > R(d_1+d_2)$ then $\mathfrak{m}athfrak{I}$ and $\mathfrak{m}athfrak{S}$ converge absolutely, as was shown in the previous two Lemmas. 
Therefore, as in \S7 of~\cite{birch_forms}, by regarding the bihomogeneous forms under investigation simply as homogeneous forms we may express the singular series as an absolutely convergent product \mathfrak{m}athfrak{b}egin{equation} \label{eq.sing_series_p_adic_expression} \mathfrak{m}athfrak{S} = \mathfrak{m}athfrak{p}rod_p \mathfrak{m}athfrak{S}_p, \end{equation} where \[ \mathfrak{m}athfrak{S}_p = \lim_{k \mathbb{R}ightarrow \infty} \frac{1}{p^{k(n_1+n_2-R)}} \# \left\{ (\mathfrak{m}athfrak{b}m{u}, \mathfrak{m}athfrak{b}m{v}) \in \{1, \mathfrak{m}athbb{H}dots, p^k \}^{n_1+n_2} \colon F_i(\mathfrak{m}athfrak{b}m{u}, \mathfrak{m}athfrak{b}m{v}) \equiv 0 \; (\mathfrak{m}athrm{mod} \, p), i = 1, \mathfrak{m}athbb{H}dots, R \mathbb{R}ight\}. \] Lemma 2.6 in~\cite{myerson_quadratic} further shows that we can write the singular integral as \mathfrak{m}athfrak{b}egin{equation} \label{eq.sing_series_real_density} \mathfrak{m}athfrak{I} = \lim_{P \mathbb{R}ightarrow \infty} \frac{1}{P^{n_1+n_2-(d_1+d_2)R}} \mathfrak{m}u \mathfrak{m}athfrak{b}ig\{ (\mathfrak{m}athfrak{b}m{t}_1, \mathfrak{m}athfrak{b}m{t}_2)/P \in \mathfrak{m}athcal{B}_1 \times \mathfrak{m}athcal{B}_2 \colon \\ \mathfrak{m}athbb{N}orm{F_i(\mathfrak{m}athfrak{b}m{t}_1, \mathfrak{m}athfrak{b}m{t}_2)} \leq 1/2, \; i = 1, \mathfrak{m}athbb{H}dots, R \mathfrak{m}athfrak{b}ig\}, \end{equation} where $\mathfrak{m}u( \cdot )$ denotes the Lebesgue measure. We may therefore interpret the quantities $\mathfrak{m}athfrak{I}$ and $\mathfrak{m}athfrak{S}_p$ as the real and $p$-adic \emph{densities}, respectively, of the system of equations $F_1(\mathfrak{m}athfrak{b}m{x}, \mathfrak{m}athfrak{b}m{y}) = \cdots = F_R(\mathfrak{m}athfrak{b}m{x}, \mathfrak{m}athfrak{b}m{y}) = 0$. \subsection{Proofs of Proposition~\mathbb{R}ef{prop.main_prop} and Theorem~\mathbb{R}ef{thm.n_aux_imply_result}} \mathfrak{m}athfrak{b}egin{proof}[Proof of Proposition \mathbb{R}ef{prop.main_prop}] From Proposition \mathbb{R}ef{prop.minor_arcs_estimate}, the estimate \eqref{eq.integral_major_arcs}, Lemma \mathbb{R}ef{lem.singular_series} and Lemma \mathbb{R}ef{lem.singular_integral}, for any $\varepsilon > 0$ we find \mathfrak{m}athfrak{b}egin{equation*} \frac{N(P_1,P_2)}{P_1^{n_1}P_2^{n_2}P^{-R}} - \mathfrak{m}athfrak{S} \mathfrak{m}athfrak{I} \ll P^{-\Delta \delta_1} + P^{-\Delta \delta_0(1-(d_1+d_2)R/\mathfrak{m}athscr{C} ) + \varepsilon } + P^{(2R+3) \Delta - 1/(bd_1+d_2)} + P^{-\Delta(\mathfrak{m}athscr{C}-\varepsilon'-R)}. \end{equation*} for some $\delta_1 > 0$ and some $1> \varepsilon' >0$. Recall we assumed $\mathfrak{m}athscr{C} > (bd_1+d_2)R$ and assuming the forms $F_i$ are linearly independent we also have $\delta_0 \geq \frac{1}{(\tilde{d}+1)2^{\tilde{d}}R}$. Therefore choosing suitably small $\Delta > 0$ there exists some $\delta > 0$ such that \mathfrak{m}athfrak{b}egin{equation*} \frac{N(P_1,P_2)}{P_1^{n_1}P_2^{n_2}P^{-R}} - \mathfrak{m}athfrak{S} \mathfrak{m}athfrak{I} \ll P^{-\delta} \end{equation*} as desired. Finally, since we assume that the equations $F_i$ define a complete intersection, it is a standard fact to see that $\mathfrak{m}athfrak{S}$ is positive if there exists a non-singular $p$-adic zero for all primes $P$, and similarly $\mathfrak{m}athfrak{I}$ is positive if there exists a non-singular real zero within $\mathfrak{m}athcal{B}_1 \times \mathfrak{m}athcal{B}_2$. 
A detailed argument of this fact using a version of Hensel's Lemma for $\mathfrak{m}athfrak{S}$ and the implicit function theorem for $\mathfrak{m}athfrak{I}$ can be found for example in \S4 of~\cite{myerson_quadratic}. \end{proof} We finish this section by deducing the technical main theorem, namely Theorem~\mathbb{R}ef{thm.n_aux_imply_result}. \mathfrak{m}athfrak{b}egin{proof}[Proof of Theorem~\mathbb{R}ef{thm.n_aux_imply_result}] Assume the estimate in~\eqref{eq.N_aux_general_condition} holds for some constant $C_0>0$. From Proposition~\mathbb{R}ef{prop.auxiliary_ineq_from_counting_function} it thus follows that the auxiliary inequality~\eqref{eq.aux_ineq} holds with a constant $C > 0$ depending on $C_0$, $d_i$, $n_i$, $\mathfrak{m}u$ and $M$, where all of these quantities follow the same notation as in Section~\mathbb{R}ef{sec.weyl_differencing_etc}. Therefore the assumptions of Proposition~\mathbb{R}ef{prop.main_prop} so we can apply it to obtain the desired conclusions. \end{proof} \section{Systems of bilinear forms} \label{sec.bilinear_proof} In this section we assume $d_1=d_2=1$. Then we can write our system as \mathfrak{m}athfrak{b}egin{equation*} F_i(\mathfrak{m}athfrak{b}m{x},\mathfrak{m}athfrak{b}m{y}) = \mathfrak{m}athfrak{b}m{y}^T A_i \mathfrak{m}athfrak{b}m{x}, \end{equation*} where $A_i$ are $n_2 \times n_1$-dimensional matrices with integer entries. For $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \in \mathfrak{m}athbb{R}^R$ we now have \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \cdot \mathfrak{m}athfrak{b}m{F} = \mathfrak{m}athfrak{b}m{y}^T A_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}} \mathfrak{m}athfrak{b}m{x}, \end{equation*} where $A_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}} = \sum_i \mathfrak{m}athfrak{b}eta_i A_i$. Recall that we put \mathfrak{m}athfrak{b}egin{equation*} \sigma_{\mathfrak{m}athbb{R}}^{(1)} = \mathfrak{m}ax_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \in \mathfrak{m}athbb{R}^R \setminus \{0 \} } \dim \ker ( A_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}) \mathfrak{m}athbb{Q}uad \text{and} \mathfrak{m}athbb{Q}uad \sigma_{\mathfrak{m}athbb{R}}^{(2)} = \mathfrak{m}ax_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \in \mathfrak{m}athbb{R}^R \setminus \{0 \} } \dim \ker ( A_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}^T). \end{equation*} Since the row rank of a matrix is equal to its column rank we can also define \mathfrak{m}athfrak{b}egin{equation*} \mathbb{R}ho_{\mathfrak{m}athbb{R}} \coloneqq \mathfrak{m}in_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \in \mathfrak{m}athbb{R}^R \setminus \{ 0\}} \mathfrak{m}athrm{rank} (A_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}) = \mathfrak{m}in_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \in \mathfrak{m}athbb{R}^R \setminus \{ 0\}} \mathfrak{m}athrm{rank} (A_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}^T). \end{equation*} Due to the rank-nullity theorem the conditions \[ n_i-\sigma_\mathfrak{m}athbb{R}^{(i)} > (2b+2)R \] for $i=1,2$ are equivalent to \mathfrak{m}athfrak{b}egin{equation*} \label{eq.condition_rho} \mathbb{R}ho_{\mathfrak{m}athbb{R}} > (2b+2)R. \end{equation*} \mathfrak{m}athfrak{b}egin{lemma} \label{lem.bilinear_smooth_complete_assumptions} Assume that $\mathfrak{m}athbb{V}(F_1, \mathfrak{m}athbb{H}dots, F_R) \subset \mathfrak{m}athbb{P}_{\mathfrak{m}athbb{C}}^{n_1-1} \times \mathfrak{m}athbb{P}_{\mathfrak{m}athbb{C}}^{n_2-1}$ is a smooth complete intersection. 
Let $b \geq 1$ be a real number. Assume further \mathfrak{m}athfrak{b}egin{equation} \label{eq.n_i_assumption_bilinear_lemma} \mathfrak{m}in\{n_1,n_2\} > (2b+2)R, \mathfrak{m}athbb{Q}uad \text{and} \mathfrak{m}athbb{Q}uad n_1+n_2 > (4b+5)R. \end{equation} Then we have \mathfrak{m}athfrak{b}egin{equation} \label{eq.n_i_assumption_lemma} n_i - \sigma_{\mathfrak{m}athbb{R}}^{(i)} > (2b+2)R \end{equation} for $i = 1,2$. \end{lemma} \mathfrak{m}athfrak{b}egin{proof} Without loss of generality assume $n_1 \geq n_2$. Pick $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \in \mathfrak{m}athbb{R}^R\setminus \{\mathfrak{m}athfrak{b}m{0}\}$ such that $\mathfrak{m}athrm{rank} (A_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}) = \mathbb{R}ho_{\mathfrak{m}athbb{R}}$. In particular then \mathfrak{m}athfrak{b}egin{equation*} \dim \ker (A_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}) = \sigma_{\mathfrak{m}athbb{R}}^{(1)}, \mathfrak{m}athbb{Q}uad \text{and} \mathfrak{m}athbb{Q}uad \dim \ker (A_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}^T) = \sigma_{\mathfrak{m}athbb{R}}^{(2)}. \end{equation*} We proceed in distinguishing two cases. Firstly, if $\sigma_{\mathfrak{m}athbb{R}}^{(2)} = 0$ then~\eqref{eq.n_i_assumption_lemma} follows for $i = 2$ by the assumption~\eqref{eq.n_i_assumption_bilinear_lemma}. Further by comparing row rank and column rank of $A_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}$ in this case we must then have $\sigma_{\mathfrak{m}athbb{R}}^{(1)} \leq n_1-n_2$, and therefore \mathfrak{m}athfrak{b}egin{equation*} n_1 - \sigma_{\mathfrak{m}athbb{R}}^{(1)} \geq n_2 > (2b+2)R, \end{equation*} so~\eqref{eq.n_i_assumption_lemma} follows for $i=1$. Now we turn to the case $\sigma_{\mathfrak{m}athbb{R}}^{(2)} > 0$. Then also $\sigma_{\mathfrak{m}athbb{R}}^{(1)} > 0$. The singular locus of the variety $ \mathfrak{m}athbb{V}(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \cdot \mathfrak{m}athfrak{b}m{F}) \subset \mathfrak{m}athbb{P}_{\mathfrak{m}athbb{C}}^{n_1-1} \times \mathfrak{m}athbb{P}_{\mathfrak{m}athbb{C}}^{n_2-1}$ is given by \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athrm{Sing} \mathfrak{m}athbb{V}(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \cdot \mathfrak{m}athfrak{b}m{F}) = \mathfrak{m}athbb{V}(\mathfrak{m}athfrak{b}m{y}^T A_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}) \cap \mathfrak{m}athbb{V} (A_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}} \mathfrak{m}athfrak{b}m{x}). \end{equation*} Therefore we have \mathfrak{m}athfrak{b}egin{equation*} \dim \mathfrak{m}athrm{Sing} \mathfrak{m}athbb{V}(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \cdot \mathfrak{m}athfrak{b}m{F}) = \sigma_{\mathfrak{m}athbb{R}}^{(1)} + \sigma_{\mathfrak{m}athbb{R}}^{(2)}-2. \end{equation*} Since we assumed $\mathfrak{m}athbb{V}(\mathfrak{m}athfrak{b}m{F})$ to be a smooth complete intersection we can apply Lemma~\mathbb{R}ef{lem.sing_bf_small} to get $\dim \mathfrak{m}athrm{Sing} \mathfrak{m}athbb{V}(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \cdot \mathfrak{m}athfrak{b}m{F}) \leq R-2$. Therefore we find \mathfrak{m}athfrak{b}egin{equation*} \sigma_{\mathfrak{m}athbb{R}}^{(1)} + \sigma_{\mathfrak{m}athbb{R}}^{(2)} \leq R. \end{equation*} From our previous remarks we know that showing~\eqref{eq.n_i_assumption_lemma} is equivalent to showing $\mathbb{R}ho_{\mathfrak{m}athbb{R}} > (2b+2)R$. 
But now \mathfrak{m}athfrak{b}egin{equation*} \mathbb{R}ho_{\mathfrak{m}athbb{R}} = \frac{1}{2} \left(n_1+n_2 - \sigma_{\mathfrak{m}athbb{R}}^{(1)} - \sigma_{\mathfrak{m}athbb{R}}^{(2)} \mathbb{R}ight) \geq \frac{1}{2} (n_1+n_2 - R) > (2b+2)R, \end{equation*} where the last inequality followed from the assumption~\eqref{eq.n_i_assumption_bilinear_lemma}. Therefore~\eqref{eq.n_i_assumption_lemma} follows as desired. \end{proof} \mathfrak{m}athfrak{b}egin{proof}[Proof of Theorem~\mathbb{R}ef{thm.bilinear}] Recall the notation $b = \frac{\log P_1}{\log P_2}$. By virtue of Theorem~\mathbb{R}ef{thm.n_aux_imply_result} it suffices to show that assuming \mathfrak{m}athfrak{b}egin{equation*} n_i - \sigma_{\mathfrak{m}athbb{R}}^{(i)} > (2b+2)R \end{equation*} for $i=1,2$ implies~\eqref{eq.N_aux_general_condition}. We will show~\eqref{eq.N_aux_general_condition} for $i=1$, the other case follows analogously. Let $\mathfrak{m}athscr{C} = \frac{n_2-\sigma_{\mathfrak{m}athbb{R}}^{(2)}}{2}$ and we note that we have $\mathfrak{m}athscr{C} > (bd_1+d_2)R = (b+1)R$ precisely when $n_2 - \sigma_{\mathfrak{m}athbb{R}}^{(2)} > (2b+2)R$ holds. Therefore it suffices to show that \mathfrak{m}athfrak{b}egin{equation} \label{eq.N_1_small_sigma_2} N_1^{\mathfrak{m}athrm{aux}}(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta},B) \ll B^{\sigma_{\mathfrak{m}athbb{R}}^{(2)}}. \end{equation} for all $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \in \mathfrak{m}athbb{R}^R \setminus \{ \mathfrak{m}athfrak{b}m{0}\}$ with the implied constant not depending on $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}$. In our case we have \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athfrak{b}m{\Gamma}(\mathfrak{m}athfrak{b}m{u}) = \mathfrak{m}athfrak{b}m{u}^T A(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}), \end{equation*} where $\mathfrak{m}athfrak{b}m{u} \in \mathfrak{m}athbb{Z}^{n_2}$. Therefore $N_1^{\mathfrak{m}athrm{aux}}(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta},B)$ counts vectors $\mathfrak{m}athfrak{b}m{u} \in \mathfrak{m}athbb{Z}^{n_2}$ such that \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{u}}_\infty \leq B \mathfrak{m}athbb{Q}uad \text{and} \mathfrak{m}athbb{Q}uad \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{u}^T A(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta})}_\infty \leq \mathfrak{m}athbb{N}norm{A(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta})}_\infty = \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \cdot \mathfrak{m}athfrak{b}m{F}}_\infty. \end{equation*} In particular, all of the vectors $\mathfrak{m}athfrak{b}m{u} \in \mathfrak{m}athbb{Z}^{n_2}$, which are counted by $N_1^{\mathfrak{m}athrm{aux}}(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta},B)$ are contained in the ellipsoid \mathfrak{m}athfrak{b}egin{equation*} E_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}} \coloneqq \left\{ \mathfrak{m}athfrak{b}m{t} \in \mathfrak{m}athbb{R}^{n_2} \colon \mathfrak{m}athfrak{b}m{t}^T A_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}} A_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}^T \mathfrak{m}athfrak{b}m{t} < n_2\mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \cdot \mathfrak{m}athfrak{b}m{F}}_\infty^2 \mathbb{R}ight\}. 
\end{equation*} The principal radii of $E_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}$ are given by $\mathfrak{m}athbb{N}orm{\lambda_i}^{-1} n_2^{1/2} \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \cdot \mathfrak{m}athfrak{b}m{F}}_\infty$ for $i = 1, \mathfrak{m}athbb{H}dots, n_2$, where $\lambda_i$ run through the $n_2$ singular values of $A_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}$ and are listed in increasing order of absolute value. Thus we find \mathfrak{m}athfrak{b}egin{equation*} N_1^{\mathfrak{m}athrm{aux}}(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta},B) \ll \mathfrak{m}athfrak{p}rod_{i = 1}^{n_2} \mathfrak{m}in \left\{ \mathfrak{m}athbb{N}orm{\lambda_i}^{-1} \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \cdot \mathfrak{m}athfrak{b}m{F}}_\infty +1,B \mathbb{R}ight\}. \end{equation*} If $\left\vert\lambda_{\sigma_{\mathfrak{m}athbb{R}}^{(2)}+1} \mathbb{R}ight\vert \gg \mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \cdot \mathfrak{m}athfrak{b}m{F}}_\infty$ holds then~\eqref{eq.N_1_small_sigma_2} would follow. So suppose for a contradiction that there exists a sequence $(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}^{(i)})$ such that $\mathfrak{m}athbb{N}orm{\lambda_{\sigma_{\mathfrak{m}athbb{R}}^{(2)}+1}} = o\left(\mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}^{(i)} \cdot \mathfrak{m}athfrak{b}m{F}}_\infty\mathbb{R}ight)$. Let $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}$ be the limit of a subsequence of $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}^{(i)}/\mathfrak{m}athbb{N}norm{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}^{(i)}}$, which must exist by the Bolzano--Weierstrass theorem. For this $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}$ we must then have $\lambda_{\sigma_{\mathfrak{m}athbb{R}}^{(2)}+1} = 0$. Since the singular values were listed in order of increasing absolute value it follows that \mathfrak{m}athfrak{b}egin{equation*} \lambda_1 = \cdots = \lambda_{\sigma_{\mathfrak{m}athbb{R}}^{(2)}+1} = 0, \end{equation*} and so $\dim \ker A_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}^T = \sigma_{\mathfrak{m}athbb{R}}^{(2)}+1$. This contradicts the maximality of $\sigma_{\mathfrak{m}athbb{R}}^{(2)}+1$. The second part of the theorem is now a direct consequence of Lemma~\mathbb{R}ef{lem.bilinear_smooth_complete_assumptions}. \end{proof} \section{Systems of forms of bidegree $(2,1)$} \label{sec.proof.2-1} We consider a system $\mathfrak{m}athfrak{b}m{F}(\mathfrak{m}athfrak{b}m{x},\mathfrak{m}athfrak{b}m{y})$ of homogeneous equations of bidegree $(2,1)$, where $\mathfrak{m}athfrak{b}m{x} = (x_1, \mathfrak{m}athbb{H}dots, x_{n_1})$ and $\mathfrak{m}athfrak{b}m{y} = (y_1, \mathfrak{m}athbb{H}dots, y_{n_2})$. We will first assume $n_1=n_2 = n$, say, and then deduce Theorem~\mathbb{R}ef{thm.2,1_different_dimensions} afterwards. Therefore the initial main goal is to establish the following. \mathfrak{m}athfrak{b}egin{proposition} \label{thm.2,1} Let $F_1(\mathfrak{m}athfrak{b}m{x},\mathfrak{m}athfrak{b}m{y}), \mathfrak{m}athbb{H}dots, F_R(\mathfrak{m}athfrak{b}m{x},\mathfrak{m}athfrak{b}m{y})$ be bihomogeneous forms of bidegree $(2,1)$ such that the biprojective variety $\mathfrak{m}athbb{V}(F_1, \mathfrak{m}athbb{H}dots, F_R) \subset \mathfrak{m}athbb{P}^{n-1}_{\mathfrak{m}athbb{Q}} \times \mathfrak{m}athbb{P}^{n-1}_{\mathfrak{m}athbb{Q}}$ is a complete intersection. 
Write $b = \mathfrak{m}ax \{\log P_1/\log P_2, 1 \}$ and $u = \mathfrak{m}ax \{\log P_2/\log P_1, 1 \}$ Assume that \mathfrak{m}athfrak{b}egin{equation} \label{eq.assumption_n_i_(2,1)} n - s_{\mathfrak{m}athbb{R}}^{(i)} > (8b+4u)R \end{equation} holds for $i=1,2$, where $s_{\mathfrak{m}athbb{R}}^{(i)}$ are as defined in~\eqref{eq.defn_s_1} and~\eqref{eq.defn_s_2}. Then there exists some $\delta > 0$ depending at most on $\mathfrak{m}athfrak{b}m{F}$, $R$, $n$, $b$ and $u$ such that we have \mathfrak{m}athfrak{b}egin{equation*} N(P_1,P_2) = \sigma P_1^{n-2R} P_2^{n-R} + O(P_1^{n-2R} P_2^{n-R} \mathfrak{m}in\{ P_1, P_2\}^{-\delta}) \end{equation*} where $\sigma > 0$ if the system $\mathfrak{m}athfrak{b}m{F}(\mathfrak{m}athfrak{b}m{x},\mathfrak{m}athfrak{b}m{y}) = \mathfrak{m}athfrak{b}m{0}$ has a smooth $p$-adic zero for all primes $p$ and a smooth real zero in $\mathfrak{m}athcal{B}_1 \times \mathfrak{m}athcal{B}_2$. If we assume that $\mathfrak{m}athbb{V}(F_1, \mathfrak{m}athbb{H}dots, F_R) \subset \mathfrak{m}athbb{P}^{n-1}_{\mathfrak{m}athbb{Q}} \times \mathfrak{m}athbb{P}_{\mathfrak{m}athbb{Q}}^{n-1}$ is smooth, then the same conclusions hold if we assume \mathfrak{m}athfrak{b}egin{equation*} n > (16b+8u+1)R \end{equation*} instead of \eqref{eq.assumption_n_i_(2,1)}. \end{proposition} For $r = 1, \mathfrak{m}athbb{H}dots, R$ we can write each form $F_r(\mathfrak{m}athfrak{b}m{x}, \mathfrak{m}athfrak{b}m{y})$ as \mathfrak{m}athfrak{b}egin{equation*} F_r(\mathfrak{m}athfrak{b}m{x},\mathfrak{m}athfrak{b}m{y}) = \sum_{i,j,k} F^{(r)}_{ijk} x_i x_j y_k, \end{equation*} where the coefficients $F^{(r)}_{ijk}$ are symmetric in $i$ and $j$. In particular, for any $r = 1, \mathfrak{m}athbb{H}dots, R$ we have an $n \times n$ matrix given by $H_r(\mathfrak{m}athfrak{b}m{y}) = (\sum_k F^{(r)}_{ijk} y_k)_{ij}$ whose entries are linear homogeneous polynomials in $\mathfrak{m}athfrak{b}m{y}$. We may thus also write each equation in the form \[ F_r(\mathfrak{m}athfrak{b}m{x},\mathfrak{m}athfrak{b}m{y}) = \mathfrak{m}athfrak{b}m{x}^T H_r(\mathfrak{m}athfrak{b}m{y}) \mathfrak{m}athfrak{b}m{x}. \] The strategy of the proof of Proposition~\mathbb{R}ef{thm.2,1} is the same as in the bilinear case, however this time more techincal arguments are required. We need to obtain a good upper bound for the counting functions $N_i^{\mathfrak{m}athrm{aux}}(\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}; B)$ so that we can apply Theorem~\mathbb{R}ef{thm.n_aux_imply_result}. For $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \in \mathfrak{m}athbb{R}^R$ we consider $\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \cdot \mathfrak{m}athfrak{b}m{F}$, which we can rewrite in our case as \mathfrak{m}athfrak{b}egin{equation*} \mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta} \cdot \mathfrak{m}athfrak{b}m{F}(\mathfrak{m}athfrak{b}m{x}, \mathfrak{m}athfrak{b}m{y}) = \mathfrak{m}athfrak{b}m{x}^T H_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}(\mathfrak{m}athfrak{b}m{y}) \mathfrak{m}athfrak{b}m{x} \end{equation*} where $H_{\mathfrak{m}athfrak{b}m{\mathfrak{m}athfrak{b}eta}}(\mathfrak{m}athfrak{b}m{y}) = \sum_{i=1}^R \mathfrak{m}athfrak{b}eta_i H_i(\mathfrak{m}athfrak{b}m{y})$ is a symmetric $n \times n$ matrix whose entries are linear and homogeneous in $\mathfrak{m}athfrak{b}m{y}$. 
The associated multilinear form $\Gamma_{\bm{\beta} \cdot \bm{F}}(\bm{x}^{(1)}, \bm{x}^{(2)}, \bm{y})$ is thus given by
\begin{equation*}
\Gamma_{\bm{\beta} \cdot \bm{F}}(\bm{x}^{(1)}, \bm{x}^{(2)}, \bm{y}) = 2 \left(\bm{x}^{(1)}\right)^T H_{\bm{\beta}}(\bm{y}) \bm{x}^{(2)}.
\end{equation*}
Recall that $N_1^{\mathrm{aux}}(\bm{\beta}, B)$ counts integral tuples $\bm{x}, \bm{y} \in \mathbb{Z}^n$ satisfying $\|\bm{x}\|_\infty, \|\bm{y}\|_\infty \leq B$ and
\begin{equation*}
\left\| \left(\Gamma_{\bm{\beta} \cdot \bm{F}}(\bm{x}, \bm{e}_1, \bm{y}), \ldots, \Gamma_{\bm{\beta} \cdot \bm{F}}(\bm{x}, \bm{e}_n, \bm{y}) \right)^T \right\|_\infty = 2\left\| H_{\bm{\beta}}( \bm{y}) \bm{x}\right\|_\infty \leq \|\bm{\beta} \cdot \bm{F}\|_\infty B.
\end{equation*}
Now $N_2^{\mathrm{aux}}(\bm{\beta}, B)$ counts integral tuples $\bm{x}^{(1)}$, $\bm{x}^{(2)}$ with $\|\bm{x}^{(1)}\|_\infty, \|\bm{x}^{(2)}\|_\infty \leq B$ and
\begin{align*}
\left\|\left(\Gamma_{\bm{\beta} \cdot \bm{F}}(\bm{x}^{(1)}, \bm{x}^{(2)}, \bm{e}_1), \ldots, \Gamma_{\bm{\beta} \cdot \bm{F}}(\bm{x}^{(1)}, \bm{x}^{(2)}, \bm{e}_n) \right)^T\right\|_\infty \leq \|\bm{\beta} \cdot \bm{F}\|_\infty B.
\end{align*}
Dropping the harmless factor $2$, we may rewrite this as saying that
\[
\left| (\bm{x}^{(1)})^T H_{\bm{\beta}}(\bm{e}_\ell) \bm{x}^{(2)} \right| \leq \|\bm{\beta} \cdot \bm{F}\|_\infty B
\]
is satisfied for $\ell = 1, \ldots, n$.
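For the toy form $F(\bm{x},\bm{y}) = x_1^2 y_1 + 2 x_1 x_2 y_2 + x_2^2 y_1$ introduced above (again purely for orientation), these objects become completely explicit:
\[
\Gamma_{F}(\bm{x}^{(1)}, \bm{x}^{(2)}, \bm{y}) = 2\bigl( y_1 x^{(1)}_1 x^{(2)}_1 + y_2 x^{(1)}_1 x^{(2)}_2 + y_2 x^{(1)}_2 x^{(2)}_1 + y_1 x^{(1)}_2 x^{(2)}_2 \bigr),
\]
so that $\Gamma_{F}(\bm{x}, \bm{e}_\ell, \bm{y})$ is twice the $\ell$-th entry of $H_1(\bm{y})\bm{x}$, while $\Gamma_{F}(\bm{x}^{(1)}, \bm{x}^{(2)}, \bm{e}_\ell)$ is twice the bilinear form attached to $H_1(\bm{e}_\ell)$. The conditions counted by $N_1^{\mathrm{aux}}$ and $N_2^{\mathrm{aux}}$ are then explicit systems of bilinear inequalities in these quantities.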
As in the proof of Theorem~\ref{thm.bilinear}, using Proposition~\ref{prop.auxiliary_ineq_from_counting_function} and Proposition~\ref{prop.main_prop} we find that for the proof of Proposition~\ref{thm.2,1} it is enough to show that there exists a positive constant $C_0$ such that for all $B \geq 1$ and all $\bm{\beta} \in \mathbb{R}^R \setminus \{ 0 \}$ we have
\begin{equation*}
\label{eq.aux_condition_2-1}
N_i^{\mathrm{aux}}(\bm{\beta}; B) \leq C_0 B^{2n - 4 \mathscr{C}}
\end{equation*}
for $i=1,2$, where $\mathscr{C} > (2b+u)R$. The remainder of this section establishes these upper bounds.

\subsection{The first auxiliary counting function}
This is the easier case, and the problem of finding a suitable upper bound for $N_1^{\mathrm{aux}}(\bm{\beta}; B)$ is essentially handled in~\cite{myerson_cubic}.
\begin{lemma}[Corollary 5.2 of \cite{myerson_cubic}]
\label{lem.aux_small_or_bilinear_big}
Let $H_{\bm{\beta}}(\bm{y})$ and $N_1^{\mathrm{aux}}(\bm{\beta}; B)$ be as above. Let $B,C \geq 1$, let $\bm{\beta} \in \mathbb{R}^R \setminus \{ 0 \}$ and let $\sigma \in \{0, \ldots, n-1 \}$. Then we either obtain the bound
\begin{equation*}
\label{eq.aux_counting_2-1_bounded}
N_1^{\mathrm{aux}}(\bm{\beta}; B) \ll_{C,n} B^{n+\sigma} (\log B)^n,
\end{equation*}
or there exist non-trivial linear subspaces $U,V \subseteq \mathbb{R}^n$ with $\dim U + \dim V = n+\sigma+1$ such that for all $\bm{v} \in V$ and $\bm{u}_1, \bm{u}_2 \in U$ we have
\begin{equation*}
\label{eq.bilinear_form_very_singular}
\frac{\left|\bm{u}_1^T H_{\bm{\beta}}(\bm{v}) \bm{u}_2 \right|}{\|\bm{\beta} \cdot \bm{F}\|_\infty} \ll_n C^{-1} \|\bm{u}_1\|_\infty \|\bm{v}\|_\infty \|\bm{u}_2\|_\infty.
\end{equation*}
\end{lemma}
Recall the quantity
\begin{equation*}
s^{(1)}_{\mathbb{R}} \coloneqq 1 + \max_{\bm{\beta} \in \mathbb{R}^R \setminus \{ 0 \} } \dim \mathbb{V}(\bm{x}^T H_{\bm{\beta}} (\bm{e}_\ell) \bm{x})_{\ell = 1, \ldots, n_2},
\end{equation*}
where we regard $\mathbb{V}(\bm{x}^T H_{\bm{\beta}} (\bm{e}_\ell) \bm{x})_{\ell = 1, \ldots, n_2} \subset \mathbb{P}_{\mathbb{C}}^{n_1-1}$ as a projective variety. Note that for this definition we do not necessarily require $n_1 = n_2$.
\begin{proposition}
\label{prop.aux_counting_2-1_small}
Let $\varepsilon > 0$.
For all $B \geq 1$ and all $\bm{\beta} \in \mathbb{R}^R \setminus \{ 0 \}$ we have
\begin{equation}
\label{eq.estimate_for_naux}
N_1^{\mathrm{aux}}(\bm{\beta}; B) \ll_\varepsilon B^{n+s^{(1)}_{\mathbb{R}} + \varepsilon}.
\end{equation}
\end{proposition}
\begin{proof}
Assume for a contradiction that the estimate in \eqref{eq.estimate_for_naux} does not hold. In this case Lemma~\ref{lem.aux_small_or_bilinear_big} gives that for each $N \in \mathbb{N}$ there exist $\bm{\beta}_N \in \mathbb{R}^R$ and non-trivial linear subspaces $U_N, V_N \subseteq \mathbb{R}^n$ with $\dim U_N + \dim V_N = n+s^{(1)}_{\mathbb{R}}+1$ such that for all $\bm{v} \in V_N$ and $\bm{u}_1, \bm{u}_2 \in U_N$ we have
\begin{equation*}
\frac{\left|\bm{u}_1^T H_{\bm{\beta}_N}(\bm{v}) \bm{u}_2 \right|}{\|\bm{\beta}_N \cdot \bm{F}\|_\infty} \ll_n N^{-1} \|\bm{u}_1\|_\infty \|\bm{v}\|_\infty \|\bm{u}_2\|_\infty.
\end{equation*}
If we multiply $\bm{\beta}_N$ by a positive scalar, then $H_{\bm{\beta}_N}(\bm{y})/\|\bm{\beta}_N \cdot \bm{F}\|_\infty$ remains unchanged for any $\bm{y} \in \mathbb{R}^n$. Therefore we may assume without loss of generality that $\|\bm{\beta}_N\|_\infty = 1$. Thus there exists a convergent subsequence of $(\bm{\beta}_N)$, whose limit we denote by $\bm{\beta}$. Hence we find subspaces $U,V \subseteq \mathbb{R}^n$ with $\dim U+ \dim V = n+s^{(1)}_{\mathbb{R}}+1$ such that for all $\bm{v} \in V$ and $\bm{u}_1, \bm{u}_2 \in U$ we have
\begin{equation*}
\bm{u}_1^T H_{\bm{\beta}}(\bm{v}) \bm{u}_2 =0.
\end{equation*}
Let $k$ denote the nonnegative integer such that
\begin{equation*}
\dim V = n-k \quad \text{and} \quad \dim U = s^{(1)}_{\mathbb{R}} + k +1
\end{equation*}
holds. Consider now a basis $\bm{v}_{k+1}, \ldots, \bm{v}_n$ of $V$, which we extend to a basis $\bm{v}_{1}, \ldots, \bm{v}_n$ of $\mathbb{R}^n$. Write also $[U] \subseteq \mathbb{P}_{\mathbb{C}}^{n-1}$ for the projectivisation of $U$.
Define $W \subseteq [U]$ to be the projective variety given by the equations
\begin{equation*}
\label{eq.proj_var_equation}
\bm{u}^T H_{\bm{\beta}}(\bm{v}_i) \bm{u} = 0, \quad \text{for $i = 1, \ldots, k$.}
\end{equation*}
We find $\dim W \geq \dim [U] - k = s^{(1)}_{\mathbb{R}}$. Since $W \subseteq [U]$, every $\bm{u} \in W$ also satisfies $\bm{u}^T H_{\bm{\beta}}(\bm{v}_i) \bm{u} = 0$ for $i = k+1, \ldots, n$, because $\bm{v}_i \in V$ in this range. As the entries of $H_{\bm{\beta}}(\bm{y})$ are linear in $\bm{y}$ and the $\bm{v}_i$ form a basis of $\mathbb{R}^n$, we conclude that if $\bm{u} \in W$ then
\begin{equation*}
\bm{u}^T H_{\bm{\beta}}(\bm{y}) \bm{u} = 0 \quad \text{for all $\bm{y} \in \mathbb{R}^n$}.
\end{equation*}
In particular it follows that $W \subseteq \mathbb{V}(\bm{x}^T H_{\bm{\beta}} (\bm{e}_\ell) \bm{x})_{\ell = 1, \ldots, n} \subset \mathbb{P}_{\mathbb{C}}^{n-1}$, and thus
\begin{equation*}
s^{(1)}_{\mathbb{R}} -1 \geq \dim W \geq s^{(1)}_{\mathbb{R}},
\end{equation*}
which is clearly a contradiction.
\end{proof}
Now that we have found an upper bound in terms of the geometry of $\mathbb{V}(\bm{F})$, the next lemma shows that if $\bm{F}$ defines a non-singular variety then $s_{\mathbb{R}}^{(1)}$ is not too large. For the next lemma we do not assume $n_1 = n_2$, since we will need it later in a slightly more general context where this assumption is not necessarily satisfied.
\begin{lemma}
\label{lem.sigma_2-1_small}
Let $s^{(1)}_{\mathbb{R}}$ be defined as above and assume that $\bm{F}$ is a system of bihomogeneous equations of bidegree $(2,1)$ that defines a smooth complete intersection $\mathbb{V}(\bm{F}) \subset \mathbb{P}^{n_1-1}_{\mathbb{C}} \times \mathbb{P}^{n_2-1}_{\mathbb{C}}$. Then
\begin{equation*}
s^{(1)}_{\mathbb{R}} \leq \max\{ 0,\, R + n_1-n_2\}.
\end{equation*}
\end{lemma}
\begin{proof}
Consider $\bm{\beta} \in \mathbb{R}^R \setminus \{ 0 \}$ such that $\dim \mathbb{V}(\bm{x}^T H_{\bm{\beta}} (\bm{e}_\ell) \bm{x})_{\ell = 1, \ldots, n_2} = s^{(1)}_{\mathbb{R}} -1$. If $\mathbb{V}(\bm{x}^T H_{\bm{\beta}} (\bm{e}_\ell) \bm{x})_{\ell = 1, \ldots, n_2} = \emptyset$, then the statement of the lemma is trivially true, so we may assume that this is not the case.
The singular locus of $\mathbb{V}(\bm{\beta} \cdot \bm{F}) \subseteq \mathbb{P}_{\mathbb{C}}^{n_1-1} \times \mathbb{P}_{\mathbb{C}}^{n_2-1}$ is given by
\begin{equation*}
\mathrm{Sing}\, \mathbb{V} (\bm{\beta} \cdot \bm{F} ) = \left( \mathbb{V}(\bm{x}^T H_{\bm{\beta}} (\bm{e}_\ell) \bm{x})_{\ell = 1, \ldots, n_2} \times \mathbb{P}_{\mathbb{C}}^{n_2-1} \right) \cap \mathbb{V}( H_{\bm{\beta}} (\bm{y}) \bm{x}).
\end{equation*}
From Lemma~\ref{lem.sing_bf_small} we obtain
\begin{equation*}
\dim \mathrm{Sing}\, \mathbb{V}(\bm{\beta} \cdot \bm{F} ) \leq R-2.
\end{equation*}
Further, since $\mathbb{V}( H_{\bm{\beta}} (\bm{y}) \bm{x})$ is a system of $n_1$ bilinear equations, Lemma~\ref{lem.geometry_intersections} gives
\[
\dim \mathrm{Sing}\, \mathbb{V}(\bm{\beta} \cdot \bm{F} ) \geq s_{\mathbb{R}}^{(1)} - 1 + n_2-1 -n_1.
\]
Combining the previous two inequalities yields
\[
s_{\mathbb{R}}^{(1)} \leq R + n_1-n_2,
\]
as desired.
\end{proof}
We remark here that the proof of Lemma~\ref{lem.sigma_2-1_small} shows that if $\mathbb{V}(\bm{F})$ defines a smooth complete intersection and if $s^{(1)}_{\mathbb{R}} >0$, then $n_2 < n_1+R$.

\subsection{The second auxiliary counting function}
Define $\widetilde{H}_{\bm{\beta}}(\bm{x}^{(1)})$ to be the $n \times n$ matrix whose rows are given by $(\bm{x}^{(1)})^T H_{\bm{\beta}}(\bm{e}_\ell)/\|\bm{\beta} \cdot \bm{F}\|_\infty$ for $\ell = 1, \ldots, n$. Using this notation, $N_2^{\mathrm{aux}}(\bm{\beta}, B)$ counts the number of integer tuples $\bm{x}^{(1)}$, $\bm{x}^{(2)}$ satisfying $\|\bm{x}^{(1)}\|_\infty, \|\bm{x}^{(2)}\|_\infty \leq B$ and
\begin{equation*}
\left\|\widetilde{H}_{\bm{\beta}}(\bm{x}^{(1)}) \bm{x}^{(2)}\right\|_\infty \leq B.
\end{equation*}
The entries of $\widetilde{H}_{\bm{\beta}}(\bm{x}^{(1)})$ are homogeneous linear polynomials in $\bm{x}^{(1)}$ whose coefficients are at most $1$ in absolute value.

Let $A$ be a real $m \times n$ matrix. Then $A^TA$ is a symmetric and positive semidefinite $n \times n$ matrix, with eigenvalues $\lambda_1^2, \ldots, \lambda_n^2$. The nonnegative real numbers $\lambda_1, \ldots, \lambda_n$ are the \emph{singular values} of $A$.
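As a quick numerical sanity check (this toy matrix plays no role in what follows): if
\[
A = \begin{pmatrix} 3 & 0 \\ 4 & 0 \end{pmatrix}, \qquad \text{then} \qquad A^T A = \begin{pmatrix} 25 & 0 \\ 0 & 0 \end{pmatrix},
\]
so the eigenvalues of $A^TA$ are $25$ and $0$, and the singular values of $A$ are $\lambda_1 = 5$ and $\lambda_2 = 0$. In particular singular values may vanish, which is why $A^TA$ is in general only positive semidefinite.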
\noindent \textbf{Notation.} Given a matrix $M = (m_{ij})$ we define $\|M\|_\infty \coloneqq \max_{i,j} |m_{ij}|$.

For simplicity we will from now on write $\bm{x}$ instead of $\bm{x}^{(1)}$ and $\bm{y}$ instead of $\bm{x}^{(2)}$. For $\bm{x} \in \mathbb{R}^n$ let $\lambda_{\bm{\beta}, 1}(\bm{x}), \ldots, \lambda_{\bm{\beta},n}(\bm{x})$ denote the singular values of the real $n \times n$ matrix $\widetilde{H}_{\bm{\beta}}(\bm{x})$ in descending order, counted with multiplicity. Note that the $\lambda_{\bm{\beta}, i}(\bm{x})$ are real and nonnegative. Also note that
\begin{equation*}
\lambda_{\bm{\beta},1}^2(\bm{x}) \leq n \left\|\widetilde{H}_{\bm{\beta}}(\bm{x})^T \widetilde{H}_{\bm{\beta}}(\bm{x})\right\|_\infty \leq n^2 \left\|\widetilde{H}_{\bm{\beta}}(\bm{x})\right\|_\infty^2 \leq n^4 \|\bm{x}\|_\infty^2.
\end{equation*}
Taking square roots we find the useful estimates
\begin{equation}
\label{eq.useful_estimate_sing_values}
\lambda_{\bm{\beta},1} (\bm{x}) \leq n \left\|\widetilde{H}_{\bm{\beta}}(\bm{x})\right\|_\infty \leq n^2 \|\bm{x}\|_\infty.
\end{equation}
Let $i \in \{1, \ldots, n\}$ and write $\bm{D}^{(\bm{\beta}, i)}(\bm{x})$ for the vector whose $\binom{n}{i}^2$ entries are the $i \times i$ minors of $\widetilde{H}_{\bm{\beta}}(\bm{x})$. Note that these entries are homogeneous polynomials in $\bm{x}$ of degree $i$. Finally, write $J_{\bm{D}^{(\bm{\beta}, i)}}(\bm{x})$ for the Jacobian matrix of $\bm{D}^{(\bm{\beta}, i)}(\bm{x})$. That is, $J_{\bm{D}^{(\bm{\beta}, i)}}(\bm{x})$ is the $\binom{n}{i}^2 \times n$ matrix given by
\begin{equation*}
(J_{\bm{D}^{(\bm{\beta}, i)}}(\bm{x}))_{jk} = \frac{\partial D_j^{(\bm{\beta}, i)}}{\partial x_k}.
\end{equation*}
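As an illustration of this notation, consider once more the toy form from the beginning of this section, for which (suppressing the normalising factor $\|\bm{\beta} \cdot \bm{F}\|_\infty$) one has
\[
\widetilde{H}_{\bm{\beta}}(\bm{x}) = \begin{pmatrix} x_1 & x_2 \\ x_2 & x_1 \end{pmatrix}.
\]
Then $\bm{D}^{(\bm{\beta}, 1)}(\bm{x}) = (x_1, x_2, x_2, x_1)^T$ has $\binom{2}{1}^2 = 4$ entries given by the $1 \times 1$ minors, while $\bm{D}^{(\bm{\beta}, 2)}(\bm{x}) = (x_1^2 - x_2^2)$ consists of the single $2 \times 2$ minor, and
\[
J_{\bm{D}^{(\bm{\beta}, 2)}}(\bm{x}) = \begin{pmatrix} 2x_1 & -2x_2 \end{pmatrix}.
\]
The singular values in this case are $|x_1+x_2|$ and $|x_1-x_2|$, whose product equals $|x_1^2-x_2^2| = \|\bm{D}^{(\bm{\beta},2)}(\bm{x})\|_\infty$, in agreement with Lemma~\ref{lem.minors_and_sing_values} below. This computation is included only for orientation.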
\begin{definition}
\label{def.k_k}
Let $k \in \{0, \ldots, n\}$ and let $E_1, \ldots, E_{k+1} \in \mathbb{R}$ be such that $E_1 \geq \cdots \geq E_{k+1} \geq 1$ holds. We define $K_k(E_1, \ldots, E_{k+1}) \subseteq \mathbb{R}^n$ to be the set containing those $\bm{x} \in \mathbb{R}^n$ for which the following three conditions are satisfied:
\begin{enumerate}[(i)]
\item \label{cond.K_k_1} $\|\bm{x}\|_\infty \leq B$,
\item \label{cond.K_k_2} $\frac{1}{2}E_i < \lambda_{\bm{\beta},i}(\bm{x}) \leq E_i$ if $1 \leq i \leq k$, and
\item \label{cond.K_k_3} $\lambda_{\bm{\beta},i}(\bm{x}) \leq E_{k+1}$ if $k +1 \leq i \leq n$.
\end{enumerate}
\end{definition}
\begin{lemma}
\label{lem.fixed_matrix_eigenvalues_counting}
Let $\widetilde{H}$ be an $n \times n $ matrix with real entries, and denote its singular values in descending order by $\lambda_1, \ldots, \lambda_n$. Let $C,B \geq 1$ and assume $\lambda_1 \leq CB$. Write $N_{\widetilde{H}}(B)$ for the number of integral vectors $\bm{y} \in \mathbb{Z}^n$ such that
\begin{equation*}
\|\bm{y}\|_\infty \leq B \quad \text{and} \quad \left\|\widetilde{H}\bm{y}\right\|_\infty \leq B
\end{equation*}
hold. Then
\begin{equation*}
N_{\widetilde{H}}(B) \ll_{C,n} \min_{1 \leq i \leq n} \frac{B^n}{1+ \lambda_1 \cdots \lambda_i}.
\end{equation*}
\end{lemma}
\begin{proof}
Consider the ellipsoid
\begin{equation*}
\mathcal{E} \coloneqq \{ \bm{t} \in \mathbb{R}^n \colon \bm{t}^T \widetilde{H}^T \widetilde{H} \bm{t} \leq nB^2 \}.
\end{equation*}
Note that any $\bm{y} \in \mathbb{Z}^n$ counted by $N_{\widetilde{H}}(B)$ is contained in $\mathcal{E}\cap [-B,B]^n$. Now recall that $\widetilde{H}^T \widetilde{H}$ is a symmetric matrix with eigenvalues $\lambda_1^2, \ldots, \lambda_n^2$. Therefore the principal radii of the ellipsoid $\mathcal{E}$ are given by $\lambda_i^{-1} \sqrt{n} B$. Hence we find
\begin{equation}
\label{eq.N_H(B)_first_estimate}
N_{\widetilde{H}}(B) \ll_n \prod_{i=1}^n \min \{1+ \lambda_i^{-1} \sqrt{n}B,\, B\}.
\end{equation}
By assumption we have $\lambda_i \leq CB$, and so the quantity on the right hand side of \eqref{eq.N_H(B)_first_estimate} is bounded above by
\begin{equation*}
\prod_{i=1}^n \min \{2C \lambda_i^{-1} \sqrt{n} B,\, B\},
\end{equation*}
and thus
\begin{equation*}
N_{\widetilde{H}}(B) \ll_{C,n} B^n \prod_{i=1}^n \min \{ \lambda_i^{-1}, 1\}.
\end{equation*}
Since $\lambda_1 \geq \cdots \geq \lambda_n$, the result now follows.
\end{proof}
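To see what Lemma~\ref{lem.fixed_matrix_eigenvalues_counting} gives in the simplest situation (a purely illustrative toy case), take $n = 2$ and
\[
\widetilde{H} = \begin{pmatrix} T & 0 \\ 0 & 0 \end{pmatrix}, \qquad 1 \leq T \leq B,
\]
so that $\lambda_1 = T$ and $\lambda_2 = 0$. The conditions $\|\bm{y}\|_\infty \leq B$ and $\|\widetilde{H}\bm{y}\|_\infty \leq B$ amount to $|y_1| \leq B/T$ and $|y_2| \leq B$, whence $N_{\widetilde{H}}(B) \asymp (1+B/T)(1+B) \asymp B^2/T$. The bound of the lemma reads
\[
N_{\widetilde{H}}(B) \ll \min \left\{ \frac{B^2}{1+T}, \frac{B^2}{1 + T \cdot 0} \right\} = \frac{B^2}{1+T},
\]
which is indeed of the correct order of magnitude.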
\begin{lemma}
\label{lem.N_2_aux_in_terms_K_k}
Given $B \geq 1$, one of the following three possibilities must hold. Either we have
\begin{equation}
\label{eq.alt_1}
\frac{N_2^{\mathrm{aux}}(\bm{\beta}, B)}{B^n (\log B)^n} \ll_n \# (\mathbb{Z}^n \cap K_0(1)),
\end{equation}
or there exist nonnegative integers $e_1, \ldots, e_k$ for some $k \in \{1, \ldots, n-1\}$ such that $\log B \gg_n e_1 \geq \cdots \geq e_k$ and
\begin{equation}
\label{eq.alt_2}
\frac{2^{e_1 + \cdots + e_k} N_2^{\mathrm{aux}}(\bm{\beta}, B)}{B^n (\log B)^n} \ll_n \# (\mathbb{Z}^n \cap K_k (2^{e_1}, \ldots, 2^{e_k}, 1) ),
\end{equation}
or there exist nonnegative integers $e_1, \ldots, e_n$ such that $\log B \gg_n e_1 \geq \cdots \geq e_n$ and
\begin{equation}
\label{eq.alt_3}
\frac{2^{e_1 + \cdots + e_n} N_2^{\mathrm{aux}}(\bm{\beta}, B)}{B^n (\log B)^n} \ll_n \# (\mathbb{Z}^n \cap K_{n-1} (2^{e_1}, \ldots, 2^{e_n}) ).
\end{equation}
\end{lemma}
\begin{proof}
If $k=n$ then condition \eqref{cond.K_k_3} in Definition~\ref{def.k_k} is trivially satisfied, and thus
\begin{equation*}
K_n(2^{e_1}, \ldots, 2^{e_n}, 1) \subseteq K_{n-1} (2^{e_1}, \ldots, 2^{e_n}).
\end{equation*}
In particular, \eqref{eq.alt_3} follows from \eqref{eq.alt_2} with $k=n$. It therefore remains to show that either \eqref{eq.alt_1} holds, or there exist nonnegative integers $e_1, \ldots, e_k$ for some $k \in \{1, \ldots, n\}$ such that $\log B \gg_n e_1 \geq \cdots \geq e_k$ and \eqref{eq.alt_2} holds.

Note that the box $[-B,B]^n$ is the disjoint union of $K_0(1)$ and the sets $K_k(2^{e_1}, \ldots, 2^{e_k},1)$, where $k$ runs over $1, \ldots, n$ and the $e_i$ run over integers satisfying $\log B \gg_n e_1 \geq \cdots \geq e_k \geq 1$. Given $\bm{x} \in \mathbb{Z}^n$ write
\begin{equation*}
N_{\bm{x}}(B) = \# \left\{ \bm{y} \in \mathbb{Z}^n \colon \|\bm{y}\|_\infty \leq B, \; \left\|\widetilde{H}_{\bm{\beta}}(\bm{x}) \bm{y} \right\|_\infty \leq B \right\}.
\end{equation*}
We thus obtain
\begin{equation}
\label{eq.N2aux_long_sum}
N_2^{\mathrm{aux}}(\bm{\beta},B) = \sum_{\substack{\bm{x} \in \mathbb{Z}^n \\ \bm{x} \in K_0(1)}}N_{\bm{x}}(B) + \sum_{\substack{1 \leq k \leq n \\1 \leq e_k \leq \cdots \leq e_1 \\ e_1 \ll_n \log B}} \; \sum_{\substack{\bm{x} \in \mathbb{Z}^n \\ \bm{x} \in K_k(2^{e_1}, \ldots, 2^{e_k},1 )}} N_{\bm{x}}(B).
\end{equation}
Note that the number of terms in the outer sum of the second term on the right hand side of \eqref{eq.N2aux_long_sum} is bounded by $\ll_n (\log B)^n$.
From this it follows that either we have
\begin{equation}
\label{eq.Nx_bound_alt_1}
\sum_{\substack{\bm{x} \in \mathbb{Z}^n \\ \bm{x} \in K_0(1)}}N_{\bm{x}}(B) \gg_n \frac{N_2^{\mathrm{aux}}(\bm{\beta}, B) }{(\log B)^n},
\end{equation}
or there exist an integer $k \in \{1, \ldots, n \}$ and integers $e_1 \geq \cdots \geq e_k \geq 1$ such that
\begin{equation}
\label{eq.Nx_bound_alt_2}
\sum_{\substack{\bm{x} \in \mathbb{Z}^n \\ \bm{x} \in K_k(2^{e_1}, \ldots, 2^{e_k},1 )}} N_{\bm{x}}(B) \gg_n \frac{N_2^{\mathrm{aux}}(\bm{\beta}, B) }{(\log B)^n}.
\end{equation}
If \eqref{eq.Nx_bound_alt_1} holds, then \eqref{eq.alt_1} follows from the trivial bound $N_{\bm{x}}(B) \ll_n B^n$. Assume now that \eqref{eq.Nx_bound_alt_2} holds. From \eqref{eq.useful_estimate_sing_values}, for each $\bm{x}$ appearing in the sum in \eqref{eq.Nx_bound_alt_2} we have the bound
\begin{equation*}
\lambda_{\bm{\beta},1}(\bm{x}) \leq n^2B.
\end{equation*}
Applying Lemma~\ref{lem.fixed_matrix_eigenvalues_counting} with $C = n^2$ and $\widetilde{H} = \widetilde{H}_{\bm{\beta}}(\bm{x})$ we find
\begin{equation}
\label{eq.estimating_Nx_last_time}
N_{\bm{x}}(B) \ll_n \frac{B^n}{2^{e_1 + \cdots + e_k}}.
\end{equation}
Substituting \eqref{eq.estimating_Nx_last_time} into \eqref{eq.Nx_bound_alt_2} delivers \eqref{eq.alt_2}.
\end{proof}
We now recall two lemmas from~\cite{myerson_cubic}, which are stated there in a form that applies directly to our setting.
\begin{lemma}[Lemma 3.2 in \cite{myerson_cubic}]
\label{lem.minors_and_sing_values}
Let $M$ be a real $m \times n$ matrix with singular values $\lambda_1, \ldots, \lambda_n$ listed with multiplicity in descending order. For $k \leq \min \{m,n \}$ denote by $\bm{D}^{(k)}$ the vector of $k \times k$ minors of $M$. Given such $k$, the following statements are true:
\begin{enumerate}[(i)]
\item \label{lem.singular_1} We have
\begin{equation*}
\label{eq.minors_asymp_sing_values}
\left\|\bm{D}^{(k)}\right\|_\infty \asymp \lambda_1 \cdots \lambda_k.
\end{equation*}
\item There is a $k$-dimensional subspace $V \subset \mathbb{R}^n$, which can be taken to be a span of standard basis vectors $\bm{e}_i$, such that for all $\bm{v} \in V$ we have
\begin{equation*}
\label{eq.lower_bound_matrix_mult_sing_values}
\|M \bm{v}\|_\infty \gg_{m,n} \|\bm{v}\|_\infty \lambda_k.
\end{equation*}
\item Given $C \geq 1$, one of the following alternatives holds.
Either there exists an $(n-k+1)$-dimensional subspace $X \subset \mathbb{R}^n$ such that
\begin{equation*}
\|M \bm{X}\|_\infty \leq C^{-1} \|\bm{X}\|_\infty \quad \text{for all $\bm{X} \in X$,}
\end{equation*}
or there is a $k$-dimensional subspace $V \subset \mathbb{R}^n$ spanned by standard basis vectors such that
\begin{equation*}
\|M \bm{v}\|_\infty \gg_{m,n} C^{-1} \|\bm{v}\|_\infty \quad \text{for all $\bm{v} \in V$.}
\end{equation*}
\end{enumerate}
\end{lemma}
Next, we are interested in counting the number of integer points contained in the sets $K_k(E_1, \ldots, E_{k+1})$. The next lemma is taken from \cite{myerson_cubic}.
\begin{lemma}[Lemma 4.1 in \cite{myerson_cubic}]
\label{lem.counting_K_k}
Let $B,C \geq 1$, $\sigma \in \{0, \ldots, n-1 \}$ and $k \in \{0, \ldots, n-\sigma-1 \}$. Assume further that $CB \geq E_1 \geq \cdots \geq E_{k+1} \geq 1$. Then one of the following alternatives must hold.
\begin{enumerate}
\item[(I)\textsubscript{$k$}] \label{alt_I_k} We have the estimate
\begin{equation*}
\# (\mathbb{Z}^n \cap K_k(E_1, \ldots, E_{k+1})) \ll_{C,n} B^\sigma (E_1 \cdots E_{k+1}) E_{k+1}^{n-\sigma-k-1}.
\end{equation*}
\item[(II)\textsubscript{$k$}] \label{alt_II_k} For some integer $b \in \{1, \ldots, k\}$ there exist a $(\sigma + b +1)$-dimensional subspace $X \subset \mathbb{R}^n$ and some $\bm{x}^{(0)} \in K_b(E_1, \ldots, E_{b+1})$ such that $E_{b+1} < C^{-1} E_b$ and
\begin{equation*}
\left\|J_{\bm{D}^{(\bm{\beta}, b+1)}}(\bm{x}^{(0)}) \bm{X} \right\|_\infty \leq C^{-1}\left\| \bm{D}^{(\bm{\beta}, b)}(\bm{x}^{(0)})\right\|_\infty \|\bm{X}\|_\infty \quad \text{for all $\bm{X} \in X$.}
\end{equation*}
\item[(III)] \label{alt_III} There exists a $(\sigma + 1)$-dimensional subspace $X \subset \mathbb{R}^n$ such that
\begin{equation}
\label{eq.H_tilde_small_on_subspace}
\left\|\widetilde{H}_{\bm{\beta}}(\bm{X})\right\|_\infty \leq C^{-1} \|\bm{X}\|_\infty \quad \text{for all $\bm{X} \in X$.}
\end{equation}
\end{enumerate}
\end{lemma}
\begin{remark}
In~\cite{myerson_cubic}, Lemma~\ref{lem.counting_K_k} was stated for $\widetilde{H}_{\bm{\beta}}(\bm{x})$ a symmetric matrix, and the $\lambda_{\bm{\beta},i}(\bm{x})$ were taken to be the eigenvalues of $\widetilde{H}_{\bm{\beta}}(\bm{x})$, whose absolute values coincide with its singular values.
However, an inspection of the proof shows that only the estimates in Lemma~\ref{lem.minors_and_sing_values} as well as \eqref{eq.useful_estimate_sing_values} were used, and these are valid for the singular values just as for the absolute values of the eigenvalues. Therefore the proof remains valid in our setting.
\end{remark}
The next lemma is similar to Lemma 5.1 in \cite{myerson_cubic}; however, we need to account for the fact that $\widetilde{H}_{\bm{\beta}}(\bm{x})$ is not necessarily a symmetric matrix.
\begin{lemma}
\label{lem.bilinear_small}
Let $b \in \{1, \ldots, n-1\}$ and let $\bm{x}^{(0)} \in \mathbb{R}^n$ be such that $\bm{D}^{(\bm{\beta},b)}(\bm{x}^{(0)}) \neq 0$. Then there exist subspaces $Y_1, Y_2 \subseteq \mathbb{R}^n$ with $\dim Y_1 = \dim Y_2 = n-b$ such that for all $\bm{Y}_1 \in Y_1$, $\bm{Y}_2 \in Y_2$ and $\bm{t} \in \mathbb{R}^n$ we have
\begin{equation}
\label{eq.estimate_bilinear_form}
\bm{Y}_1^T \widetilde{H}_{\bm{\beta}}(\bm{t}) \bm{Y}_2 \ll_n \left( \frac{\left\|J_{\bm{D}^{(\bm{\beta},b+1)}}(\bm{x}^{(0)}) \bm{t}\right\|_\infty}{\left\|\bm{D}^{(\bm{\beta},b)}(\bm{x}^{(0)})\right\|_\infty} + \frac{\lambda_{\bm{\beta}, b+1}(\bm{x}^{(0)}) \cdot \|\bm{t}\|_\infty }{\lambda_{\bm{\beta},b} (\bm{x}^{(0)})} \right) \|\bm{Y}_1\|_\infty \|\bm{Y}_2\|_\infty,
\end{equation}
where the implied constant depends only on $n$ and is otherwise independent of $\widetilde{H}_{\bm{\beta}}(\bm{t})$.
\end{lemma}
\begin{proof}
Given $\bm{x} \in \mathbb{R}^n$ we define vectors $\bm{y}^{(1)}_1(\bm{x}), \ldots, \bm{y}^{(n-b)}_1(\bm{x})$ in the following way.
The $j$-th entries are given by
\begin{equation}
\label{eq.def_y_1}
(y_1^{(i)}(\bm{x}))_j=
\begin{cases}
(-1)^{n-b} \det \left( (\widetilde{H}_{\bm{\beta}}(\bm{x})_{k \ell} )_{\substack{k = n-b+1, \ldots, n \\ \ell = n-b+1, \ldots, n }} \right) \quad &\text{if $j=i$,} \\
(-1)^j \det \left( (\widetilde{H}_{\bm{\beta}}(\bm{x})_{k \ell} )_{\substack{k = i, n-b+1, \ldots, n; \; k \neq j \\ \ell = n-b+1, \ldots, n }} \right) &\text{if $j > n-b$,} \\
0 &\text{otherwise},
\end{cases}
\end{equation}
where $k = i, n-b+1, \ldots, n; \; k \neq j$ denotes that the index $k$ runs over the values $i,n-b+1, \ldots, n$ with $k = j$ omitted. Similarly we define $\bm{y}^{(1)}_2(\bm{x}), \ldots, \bm{y}^{(n-b)}_2(\bm{x})$ by
\begin{equation*}
\label{eq.def_y_2}
(y_2^{(i)}(\bm{x}))_j =
\begin{cases}
(-1)^{n-b} \det \left( (\widetilde{H}_{\bm{\beta}}(\bm{x})_{k \ell} )_{\substack{k = n-b+1, \ldots, n \\ \ell = n-b+1, \ldots, n }} \right) \quad &\text{if $j=i$,} \\
(-1)^j \det \left( (\widetilde{H}_{\bm{\beta}}(\bm{x})_{k \ell} )_{\substack{k = n-b+1, \ldots, n \\ \ell = i, n-b+1, \ldots, n; \; \ell \neq j }} \right) &\text{if $j > n-b$,} \\
0 &\text{otherwise}.
\end{cases}
\end{equation*}
Using the Laplace expansion of a determinant along columns and rows we thus obtain
\begin{equation}
\label{eq.determinants_laplace_y_1}
(\bm{y}_1^{(i)}(\bm{x})^T \widetilde{H}_{\bm{\beta}}(\bm{x}))_j =
\begin{cases}
(-1)^{n-b} \det \left( (\widetilde{H}_{\bm{\beta}}(\bm{x})_{k \ell} )_{\substack{k = i,n-b+1, \ldots, n \\ \ell =j, n-b+1, \ldots, n }} \right) \quad &\text{if $j \leq n-b$,} \\
0 &\text{otherwise,}
\end{cases}
\end{equation}
and
\begin{equation}
\label{eq.determinants_laplace_y_2}
( \widetilde{H}_{\bm{\beta}}(\bm{x})\bm{y}_2^{(i)}(\bm{x}))_j =
\begin{cases}
(-1)^{n-b} \det \left( (\widetilde{H}_{\bm{\beta}}(\bm{x})_{k \ell} )_{\substack{k = j,n-b+1, \ldots, n \\ \ell =i, n-b+1, \ldots, n }} \right) \quad &\text{if $j \leq n-b$,} \\
0 &\text{otherwise,}
\end{cases}
\end{equation}
respectively.
It follows from \eqref{eq.def_y_1}--\eqref{eq.determinants_laplace_y_2} that there exist matrices $L_1^{(i)}$, $L_2^{(i)}$, $M_1^{(i)}$ and $M_2^{(i)}$ for $i = 1, \ldots, n-b$, with entries in $\{0, \pm 1\}$, such that
\begin{align}
\bm{y}_1^{(i)}(\bm{x}) &= L_1^{(i)} \bm{D}^{(\bm{\beta}, b)}(\bm{x}), \\
\bm{y}_2^{(i)}(\bm{x}) &= L_2^{(i)} \bm{D}^{(\bm{\beta}, b)}(\bm{x}), \label{eq.y_2_in_minors} \\
(\bm{y}_1^{(i)}(\bm{x}))^T \widetilde{H}_{\bm{\beta}}(\bm{x}) &= [ M_1^{(i)} \bm{D}^{(\bm{\beta}, b+1)}(\bm{x}) ]^T, \quad \text{and} \label{eq.transpose_y_1_H_tilde} \\
\widetilde{H}_{\bm{\beta}}(\bm{x}) \bm{y}_2^{(i)}(\bm{x}) &= M_2^{(i)} \bm{D}^{(\bm{\beta}, b+1)}(\bm{x}). \label{eq.H_y_linear_span_minors}
\end{align}
Given $\bm{t} \in \mathbb{R}^n$ we write $\partial_{\bm{t}}$ for the directional derivative $\sum_i t_i \frac{\partial}{\partial x_i}$. Applying $\partial_{\bm{t}}$ to both sides of \eqref{eq.H_y_linear_span_minors} we obtain
\begin{equation}
\label{eq.taking_derivative}
[\partial_{\bm{t}} \widetilde{H}_{\bm{\beta}}(\bm{x}) ] \bm{y}_2^{(i)}(\bm{x}) + \widetilde{H}_{\bm{\beta}}(\bm{x}) [\partial_{\bm{t}} \bm{y}_2^{(i)}(\bm{x})] = M_2^{(i)} [\partial_{\bm{t}} \bm{D}^{(\bm{\beta}, b+1)}(\bm{x})].
\end{equation}
Now note that
\begin{equation}
\label{eq.directional_deriv_easy_identity}
\partial_{\bm{t}} \bm{D}^{(\bm{\beta}, b+1)}(\bm{x}) = J_{\bm{D}^{(\bm{\beta},b+1 )}}(\bm{x}) \bm{t} \quad \text{and} \quad \partial_{\bm{t}} \widetilde{H}_{\bm{\beta}}(\bm{x}) = \widetilde{H}_{\bm{\beta}}(\bm{t}).
\end{equation}
Substituting \eqref{eq.directional_deriv_easy_identity} and \eqref{eq.y_2_in_minors} into \eqref{eq.taking_derivative} yields
\begin{equation*}
\widetilde{H}_{\bm{\beta}}(\bm{t}) \bm{y}_2^{(i)}(\bm{x}) = M_2^{(i)} J_{\bm{D}^{(\bm{\beta},b+1 )}}(\bm{x}) \bm{t} - \widetilde{H}_{\bm{\beta}}(\bm{x}) L_2^{(i)} \partial_{\bm{t}} \bm{D}^{(\bm{\beta}, b)}(\bm{x}).
\end{equation*}
If we premultiply this by $\bm{y}_1^{(j)}(\bm{x})^T$ and use \eqref{eq.transpose_y_1_H_tilde}, then we obtain
\begin{multline}
\label{eq.bilinear_general_x}
\bm{y}_1^{(j)}(\bm{x})^T \widetilde{H}_{\bm{\beta}}(\bm{t}) \bm{y}_2^{(i)}(\bm{x}) = \bm{y}_1^{(j)}(\bm{x})^T M_2^{(i)} J_{\bm{D}^{(\bm{\beta},b+1 )}}(\bm{x}) \bm{t} \\
- [ M_1^{(j)} \bm{D}^{(\bm{\beta}, b+1)}(\bm{x}) ]^T [L_2^{(i)} \partial_{\bm{t}} \bm{D}^{(\bm{\beta}, b)}(\bm{x})].
\end{multline}
Lemma~\ref{lem.minors_and_sing_values}~\eqref{lem.singular_1} yields the bounds
\begin{equation}
\label{eq.bounds_minors_sing_values_1}
\frac{\left\|\bm{D}^{(\bm{\beta},b+1)}(\bm{x})\right\|_\infty}{\left\|\bm{D}^{(\bm{\beta},b)}(\bm{x})\right\|_\infty} \ll_n \lambda_{\bm{\beta},b+1}(\bm{x})
\end{equation}
and
\begin{equation}
\label{eq.bounds_minors_sing_values_2}
\frac{\left\|\partial_{\bm{t}} \bm{D}^{(\bm{\beta},b)}(\bm{x})\right\|_\infty}{\left\|\bm{D}^{(\bm{\beta},b)}(\bm{x})\right\|_\infty} \ll_n \frac{\|\bm{t}\|_\infty}{\lambda_{\bm{\beta},b}(\bm{x})}.
\end{equation}
We now specialise $\bm{x} = \bm{x}^{(0)}$, so that by assumption we have $\left\|\bm{D}^{(\bm{\beta},b)}(\bm{x}^{(0)})\right\|_\infty > 0$.
Thus we may define
\begin{equation}
\label{eq.Y_k_def}
\bm{Y}_k^{(i)} = \frac{\bm{y}_k^{(i)}(\bm{x}^{(0)})}{\left\|\bm{D}^{(\bm{\beta},b)}(\bm{x}^{(0)})\right\|_\infty}, \quad \text{for $i = 1, \ldots, n-b$ and $k = 1,2$.}
\end{equation}
Dividing~\eqref{eq.bilinear_general_x} by $\left\|\bm{D}^{(\bm{\beta},b)}(\bm{x}^{(0)})\right\|_\infty^2$ and using~\eqref{eq.Y_k_def} as well as the bounds~\eqref{eq.bounds_minors_sing_values_1} and~\eqref{eq.bounds_minors_sing_values_2} gives
\begin{equation*}
\label{eq.blinear_specified_x_0}
\left|(\bm{Y}_1^{(j)})^T \widetilde{H}_{\bm{\beta}}(\bm{t}) \bm{Y}_2^{(i)}\right| \ll_n \frac{\left\|J_{\bm{D}^{(\bm{\beta},b+1 )}}(\bm{x}^{(0)}) \bm{t}\right\|_\infty}{\left\|\bm{D}^{(\bm{\beta},b)}(\bm{x}^{(0)})\right\|_\infty} + \frac{\lambda_{\bm{\beta},b+1}(\bm{x}^{(0)}) \|\bm{t}\|_\infty}{\lambda_{\bm{\beta},b}(\bm{x}^{(0)})}.
\end{equation*}
We now claim that we can take the subspaces $Y_k \subseteq \mathbb{R}^n$ in the statement of the lemma to be the span of $\bm{Y}_k^{(1)}, \ldots, \bm{Y}_k^{(n-b)}$ for $k = 1,2$, respectively. For this we need to show that~\eqref{eq.estimate_bilinear_form} holds and that $\dim Y_1 = \dim Y_2 = n-b$. It therefore suffices to prove the following claim: given $\bm{\gamma} \in \mathbb{R}^{n-b}$, if we set $\bm{Y}_k = \sum_i \gamma_i \bm{Y}_k^{(i)}$, then $\|\bm{\gamma}\|_\infty \ll_n \|\bm{Y}_k\|_\infty$ for $k = 1,2$.

Assume that the $b \times b$ minor of $\widetilde{H}_{\bm{\beta}}(\bm{x}^{(0)})$ of largest absolute value lies in the bottom right corner of $\widetilde{H}_{\bm{\beta}}(\bm{x}^{(0)})$. In other words, we assume
\begin{equation}
\label{eq.assumption_that_is_wlog}
\left\|\bm{D}^{(\bm{\beta},b)}(\bm{x}^{(0)})\right\|_\infty = \left|\det \left( (\widetilde{H}_{\bm{\beta}}(\bm{x}^{(0)})_{k \ell} )_{\substack{k = n-b+1, \ldots, n \\ \ell = n-b+1, \ldots, n }} \right)\right|.
\end{equation}
After a suitable permutation of the rows and columns of $\widetilde{H}_{\bm{\beta}}(\bm{x}^{(0)})$, the identity \eqref{eq.assumption_that_is_wlog} can always be achieved.
The vectors $\bm{Y}_k^{(i)}$ are built from minors of $\widetilde{H}_{\bm{\beta}}(\bm{x}^{(0)})$, so we may apply to their definition the same permutations of rows and columns that ensure that \eqref{eq.assumption_that_is_wlog} holds. From this we see that we can always reduce the general case to the case where \eqref{eq.assumption_that_is_wlog} holds. Now for $k= 1,2$ we define the matrices
\begin{equation*}
Q_k = \left( \bm{Y}_k^{(1)} \Big\vert \cdots \Big\vert \bm{Y}_k^{(n-b)} \Big\vert \bm{e}_{n-b+1} \Big\vert \cdots \Big\vert \bm{e}_{n} \right).
\end{equation*}
By the definition of the $\bm{Y}_k^{(i)}$ we see that $Q_k$ must be of the form
\begin{equation*}
Q_k = \begin{pmatrix} I_{n-b} & 0 \\ \widetilde{Q}_k & I_{b} \end{pmatrix}
\end{equation*}
for some matrix $\widetilde{Q}_k$. In particular we find $\det Q_k = 1$; moreover, by \eqref{eq.Y_k_def} every entry of $\bm{Y}_k^{(i)}$ is, up to sign, a $b \times b$ minor of $\widetilde{H}_{\bm{\beta}}(\bm{x}^{(0)})$ divided by $\|\bm{D}^{(\bm{\beta},b)}(\bm{x}^{(0)})\|_\infty$ (or zero), so the entries of $\widetilde{Q}_k$ are at most $1$ in absolute value and hence $\|Q_k^{-1}\|_\infty \ll_n 1$. Given $\bm{Y}_k = \sum_i \gamma_i \bm{Y}_k^{(i)}$ we thus find
\begin{equation*}
\|\bm{\gamma}\|_\infty = \left\|Q_k^{-1} \bm{Y}_k\right\|_\infty \ll_n \|\bm{Y}_k\|_\infty,
\end{equation*}
and so the lemma follows.
\end{proof}
The next corollary is the main technical result of this section; it will allow us to deduce that either $N_2^{\mathrm{aux}}(\bm{\beta},B)$ is small or a suitable singular locus is large.
\begin{corollary}
\label{cor.N_2_small_or_bilinear_vanishes}
Let $B, C \geq 1$ and let $\sigma \in \{ 0, \ldots, n-1\}$. Then one of the following alternatives is true. Either we have the bound
\begin{equation}
\label{eq.corollary_alt_1}
N_2^{\mathrm{aux}}(\bm{\beta},B) \ll_{C,n} B^{n+\sigma} (\log B)^n,
\end{equation}
or there exist subspaces $X, Y_1, Y_2 \subseteq \mathbb{R}^n$ with $\dim X + \dim Y_1 = \dim X + \dim Y_2 = n+\sigma +1$ such that
\begin{equation}
\label{eq.corollary_alt_2}
\left|\bm{Y}_1^T \widetilde{H}_{\bm{\beta}}(\bm{X}) \bm{Y}_2\right| \ll_n C^{-1} \|\bm{Y}_1\|_\infty \|\bm{X}\|_\infty \|\bm{Y}_2\|_\infty
\end{equation}
holds for all $\bm{X} \in X$, $\bm{Y}_1 \in Y_1$ and $\bm{Y}_2 \in Y_2$.
\end{corollary}
\begin{proof}
Let $k \in \{0, \ldots, n-\sigma-1\}$ and let $E_1, \ldots, E_{k+1} \in \mathbb{R}$ be such that
\begin{equation*}
CB \geq E_1 \geq \cdots \geq E_{k+1} \geq 1.
\end{equation*}
We know that one of the alternatives (I)\textsubscript{$k$}, (II)\textsubscript{$k$} or (III) in Lemma~\ref{lem.counting_K_k} holds.
Assume first that~(I)\textsubscript{$k$} always holds, so that the estimate
\begin{equation}
\label{eq.K_k_bound}
\# (\mathbb{Z}^n \cap K_k(E_1, \ldots, E_{k+1})) \ll_{C,n} B^\sigma (E_1 \cdots E_{k+1}) E_{k+1}^{n-\sigma-k-1}
\end{equation}
holds for every $k \in \{0, \ldots, n-\sigma-1\}$ and all $E_1, \ldots, E_{k+1} \in \mathbb{R}$ with $CB \geq E_1 \geq \cdots \geq E_{k+1} \geq 1$. From Lemma~\ref{lem.N_2_aux_in_terms_K_k} we find that either we have
\begin{equation}
\label{eq.alt_1_in_proof}
\frac{N_2^{\mathrm{aux}}(\bm{\beta}, B)}{B^n (\log B)^n} \ll_n \# (\mathbb{Z}^n \cap K_0(1)),
\end{equation}
or there exist nonnegative integers $e_1, \ldots, e_k$ for some $k \in \{1, \ldots, n-1\}$ such that $\log B \gg_n e_1 \geq \cdots \geq e_k$ and
\begin{equation}
\label{eq.alt_2_in_proof}
\frac{2^{e_1 + \cdots + e_k} N_2^{\mathrm{aux}}(\bm{\beta}, B)}{B^n (\log B)^n} \ll_n \# (\mathbb{Z}^n \cap K_k (2^{e_1}, \ldots, 2^{e_k}, 1) ),
\end{equation}
or there exist nonnegative integers $e_1, \ldots, e_n$ such that $\log B \gg_n e_1 \geq \cdots \geq e_n$ and
\begin{equation}
\label{eq.alt_3_in_proof}
\frac{2^{e_1 + \cdots + e_n} N_2^{\mathrm{aux}}(\bm{\beta}, B)}{B^n (\log B)^n} \ll_n \# (\mathbb{Z}^n \cap K_{n-1} (2^{e_1}, \ldots, 2^{e_n}) ).
\end{equation}
We may take $C$ large enough, depending on $n$, so that $CB \geq 2^{e_1}$ is satisfied. Substituting the bound~\eqref{eq.K_k_bound} into any of~\eqref{eq.alt_1_in_proof}, \eqref{eq.alt_2_in_proof} or~\eqref{eq.alt_3_in_proof} then gives~\eqref{eq.corollary_alt_1}.

If~(III) holds in Lemma~\ref{lem.counting_K_k}, we can take $Y_1 = Y_2 = \mathbb{R}^n$, so that~\eqref{eq.corollary_alt_2} follows from~\eqref{eq.H_tilde_small_on_subspace}. Finally, assume that there exist $k \in \{0, \ldots, n-\sigma-1\}$ and $E_1, \ldots, E_{k+1} \in \mathbb{R}$ with $CB \geq E_1 \geq \cdots \geq E_{k+1} \geq 1$ such that~(II)\textsubscript{$k$} in Lemma~\ref{lem.counting_K_k} holds.
Recall that this means there exist some integer $b \in \{1, \ldots, k\}$, a $(\sigma + b +1)$-dimensional subspace $X \subset \mathbb{R}^n$ and some $\bm{x}^{(0)} \in K_b(E_1, \ldots, E_{b+1})$ such that $E_{b+1} < C^{-1} E_b$ and
\begin{equation}
\label{eq.jacobian_bound}
\left\|J_{\bm{D}^{(\bm{\beta}, b+1)}}(\bm{x}^{(0)}) \bm{X} \right\|_\infty \leq C^{-1}\left\| \bm{D}^{(\bm{\beta}, b)}(\bm{x}^{(0)})\right\|_\infty \|\bm{X}\|_\infty \quad \text{for all $\bm{X} \in X$.}
\end{equation}
As $\bm{x}^{(0)} \in K_b(E_1, \ldots, E_{b+1})$ we have $E_i/2 < \lambda_{\bm{\beta},i}(\bm{x}^{(0)}) \leq E_i$ for $i = 1, \ldots, b$ and $\lambda_{\bm{\beta},b+1}(\bm{x}^{(0)}) \leq E_{b+1}$. This, together with the fact that $E_{b+1} < C^{-1} E_b$, implies
\begin{equation}
\label{eq.sing_value_bound}
\lambda_{\bm{\beta},b+1}(\bm{x}^{(0)}) < 2C^{-1}\lambda_{\bm{\beta},b}(\bm{x}^{(0)}).
\end{equation}
Moreover, $\lambda_{\bm{\beta},b}(\bm{x}^{(0)}) > E_b/2 > 0$, and so Lemma~\ref{lem.minors_and_sing_values}~\eqref{lem.singular_1} gives $\bm{D}^{(\bm{\beta},b)}(\bm{x}^{(0)}) \neq 0$. Thus we may apply Lemma~\ref{lem.bilinear_small} to obtain subspaces $Y_1,Y_2 \subseteq \mathbb{R}^n$ with $\dim Y_1 = \dim Y_2 = n-b$ such that the estimate \eqref{eq.estimate_bilinear_form} holds. Taking $\bm{t} = \bm{X}$ in~\eqref{eq.estimate_bilinear_form} and using~\eqref{eq.jacobian_bound} and~\eqref{eq.sing_value_bound}, we deduce~\eqref{eq.corollary_alt_2}. Since $\dim X = \sigma +b+1$ we also have $\dim X + \dim Y_1 = \dim X + \dim Y_2 = n+\sigma +1$, as desired.
\end{proof}
Recall the definition of the quantity
\begin{equation*}
s_{\mathbb{R}}^{(2)} \coloneqq \left\lfloor \frac{\max_{\bm{\beta} \in \mathbb{R}^R \setminus \{ 0\}} \dim \mathbb{V}(H_{\bm{\beta}}(\bm{y}) \bm{x})}{2} \right\rfloor +1,
\end{equation*}
where $\lfloor x \rfloor$ denotes the largest integer $m$ such that $m \leq x$. Although we have been assuming $n_1=n_2$ throughout, the definition of this quantity remains valid if $n_1 \neq n_2$.
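For orientation, consider once more the toy case $n_1 = n_2 = 2$, $R = 1$ with
\[
H_{\bm{\beta}}(\bm{y}) = \beta_1 \begin{pmatrix} y_1 & y_2 \\ y_2 & y_1 \end{pmatrix};
\]
this computation is purely illustrative and is not used in the proofs. For $\beta_1 \neq 0$ the system $H_{\bm{\beta}}(\bm{y})\bm{x} = \bm{0}$ has a non-trivial solution in $\bm{x}$ precisely when $\det H_{\bm{\beta}}(\bm{y}) = \beta_1^2(y_1^2 - y_2^2) = 0$, and one finds that $\mathbb{V}(H_{\bm{\beta}}(\bm{y})\bm{x}) \subset \mathbb{P}^1_{\mathbb{C}} \times \mathbb{P}^1_{\mathbb{C}}$ consists of the two points $(\bm{x},\bm{y}) = ([1:-1],[1:1])$ and $([1:1],[1:-1])$. Hence the maximum in the definition equals $0$, so $s_{\mathbb{R}}^{(2)} = \lfloor 0/2 \rfloor + 1 = 1$, which is consistent with the bound $s_{\mathbb{R}}^{(2)} \leq \frac{n_1+n_2}{2}-1 = 1$ recorded just below.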
For if not, then the matrix $H_{\bm{\beta}}(\bm{y})$ is identically zero for some $\bm{\beta} \in \mathbb{R}^R \setminus \{0\}$, contradicting the fact that $\mathbb{V}(\bm{F})$ is a complete intersection. In particular this yields $s_{\mathbb{R}}^{(2)} \leq \frac{n_1+n_2}{2}-1$. Before we prove the main result of this section we require another small lemma.
\begin{lemma} \label{lem.H_tilde_or_normal_H}
Let $\bm{\beta} \in \mathbb{R}^R \setminus \{0\}$. The system of equations
\begin{equation*}
\bm{y}^T \widetilde{H}_{\bm{\beta}}(\bm{e}_\ell) \bm{x} = 0, \; \text{for $\ell = 1, \ldots, n$} \quad \text{and} \quad H_{\bm{\beta}}(\bm{y})\bm{x} = \bm{0}
\end{equation*}
define the same variety in $\mathbb{P}_{\mathbb{C}}^{n-1} \times \mathbb{P}_{\mathbb{C}}^{n-1}$.
\end{lemma}
\begin{proof}
Recall that by definition we have
\begin{equation*}
\widetilde{H}_{\bm{\beta}}(\bm{z}) = \begin{pmatrix} \bm{z}^T H_{\bm{\beta}}(\bm{e}_1) \\ \vdots \\ \bm{z}^T H_{\bm{\beta}}(\bm{e}_n) \end{pmatrix}.
\end{equation*}
For $\ell \in \{1, \ldots, n\}$ we get
\begin{equation*}
\bm{y}^T \widetilde{H}_{\bm{\beta}}(\bm{e}_\ell) \bm{x} = \bm{y}^T \begin{pmatrix} \bm{e}_\ell^T H_{\bm{\beta}}(\bm{e}_1) \bm{x} \\ \vdots \\ \bm{e}_\ell^T H_{\bm{\beta}}(\bm{e}_n) \bm{x} \end{pmatrix} = \sum_{i=1}^n y_i \bm{e}_\ell^T H_{\bm{\beta}}(\bm{e}_i) \bm{x} = \bm{e}_\ell^T H_{\bm{\beta}}(\bm{y}) \bm{x},
\end{equation*}
where the last equality follows since the entries of $H_{\bm{\beta}}(\bm{y})$ are linear homogeneous in $\bm{y}$. The result is now immediate.
\end{proof}
\begin{proposition} \label{prop.N_2aux_small}
Let $s_{\mathbb{R}}^{(2)}$ be defined as above and let $B \geq 1$. Then for all $\bm{\beta} \in \mathbb{R}^{R} \setminus \{0\}$ the following holds
\begin{equation*} \label{eq.N_2_estimate_that_we_want}
N_2^{\mathrm{aux}}(\bm{\beta},B) \ll_{n} B^{n+s_{\mathbb{R}}^{(2)}} (\log B)^n.
\end{equation*}
\end{proposition}
\begin{proof}
Suppose for a contradiction that the result is false. Then for each positive integer $N$ there exists some $\bm{\beta}_N$ such that
\begin{equation*}
N_2^{\mathrm{aux}}(\bm{\beta}_N,B) \geq N B^{n+s_{\mathbb{R}}^{(2)}} (\log B)^n.
\end{equation*}
From Corollary~\ref{cor.N_2_small_or_bilinear_vanishes} it follows that there are linear subspaces $X^{(N)}, Y_1^{(N)}, Y_2^{(N)} \subset \mathbb{R}^n$ with
\begin{equation*}
\dim X^{(N)} + \dim Y_i^{(N)} = n+ s_{\mathbb{R}}^{(2)} + 1, \quad i=1,2,
\end{equation*}
such that for all $\bm{X} \in X^{(N)}$, $\bm{Y}_i \in Y_i^{(N)}$ we get
\begin{equation*}
\bigl| \bm{Y}_1^T \widetilde{H}_{\bm{\beta}_N}(\bm{X}) \bm{Y}_2 \bigr| \leq N^{-1} \|\bm{Y}_1\|_\infty \|\bm{X}\|_\infty \|\bm{Y}_2\|_\infty.
\end{equation*}
Note that $\widetilde{H}_{\bm{\beta}_N}(\bm{\beta})$ is unchanged when $\bm{\beta}_N$ is multiplied by a constant. Thus we may assume $\|\bm{\beta}_N\|_\infty = 1$ and consider a subsequence $\bm{\beta}_{N_r}$ converging to some $\bm{\beta}$, say, as $r \rightarrow \infty$. This delivers subspaces $X,Y_1,Y_2 \subset \mathbb{R}^n$ with $\dim X + \dim Y_i = n + s_{\mathbb{R}}^{(2)} + 1$ for $i = 1,2$ such that
\begin{equation*}
\bm{Y}_1^T \widetilde{H}_{\bm{\beta}}(\bm{X}) \bm{Y}_2 = 0 \quad \text{for all $\bm{X} \in X, \bm{Y}_1 \in Y_1, \bm{Y}_2 \in Y_2$.}
\end{equation*}
There exists some $b \in \{0, \ldots, n- s_{\mathbb{R}}^{(2)}-1 \}$ such that $\dim X = n-b$ and $\dim Y_i = s_{\mathbb{R}}^{(2)} + b+1$. Now let $\bm{x}^{(1)}, \ldots, \bm{x}^{(n)}$ be a basis for $\mathbb{R}^n$ such that $\bm{x}^{(b+1)}, \ldots, \bm{x}^{(n)}$ is a basis for $X$. Write $[Y_i] \subset \mathbb{P}_{\mathbb{C}}^{n-1}$ for the linear subspace of $\mathbb{P}_{\mathbb{C}}^{n-1}$ associated to $Y_i$ for $i = 1,2$. Define the biprojective variety $W \subset [Y_1] \times [Y_2]$ in the variables $(\bm{y}_1, \bm{y}_2)$ by
\begin{equation*}
W = \mathbb{V}(\bm{y}_1 \widetilde{H}_{\bm{\beta}}(\bm{x}^{(i)}) \bm{y}_2 )_{i = 1, \ldots, b}.
\end{equation*}
Since the non-trivial equations defining $W$ have bidegree $(1,1)$ we can apply Corollary~\ref{cor.geometry_intersections} to find
\begin{equation} \label{eq.dim_w_lower_bound}
\dim W \geq \dim [Y_1] \times [Y_2] - b = 2 s_{\mathbb{R}}^{(2)} + b.
\end{equation}
Given $(\bm{y}_1, \bm{y}_2) \in W$ we have in particular $(\bm{y}_1, \bm{y}_2) \in [Y_1] \times [Y_2]$ and so
\begin{equation*}
\bm{y}_1 \widetilde{H}_{\bm{\beta}}(\bm{x}^{(i)}) \bm{y}_2 = 0, \quad \text{for $i = b+1, \ldots, n$,}
\end{equation*}
and hence $\bm{y}_1 \widetilde{H}_{\bm{\beta}}(\bm{z}) \bm{y}_2 = 0$ for all $\bm{z} \in \mathbb{R}^n$. From Lemma~\ref{lem.H_tilde_or_normal_H} we thus see $H_{\bm{\beta}}(\bm{y}_1) \bm{y}_2 = 0$ for all $(\bm{y}_1, \bm{y}_2) \in W$. Hence in particular
\begin{equation*} \label{eq.dim_W_upper_bound}
\dim W \leq \dim \mathbb{V}(H_{\bm{\beta}}(\bm{y})\bm{x}) \leq 2 s_{\mathbb{R}}^{(2)} -1,
\end{equation*}
where we regard $\mathbb{V}(H_{\bm{\beta}}(\bm{y})\bm{x})$ as a variety in $\mathbb{P}_{\mathbb{C}}^{n-1} \times \mathbb{P}_{\mathbb{C}}^{n-1}$ in the variables $(\bm{x}, \bm{y})$. This together with \eqref{eq.dim_w_lower_bound} implies $b \leq -1$, which is clearly a contradiction.
\end{proof}
In the next lemma we show that $s_{\mathbb{R}}^{(2)}$ is small if $\mathbb{V}(\bm{F})$ defines a smooth complete intersection. For this we no longer assume $n_1 = n_2$.
\begin{lemma} \label{lem.sigma_2_bounds}
Let $s_{\mathbb{R}}^{(2)}$ be defined as above. If $\mathbb{V}(\bm{F})$ is a smooth complete intersection in $\mathbb{P}_{\mathbb{C}}^{n_1-1} \times \mathbb{P}_{\mathbb{C}}^{n_2-1}$ then we have the bound
\begin{equation} \label{eq.sigma_2_bounds}
\frac{n_2-1}{2} \leq s_{\mathbb{R}}^{(2)} \leq \frac{n_2+R}{2}.
\end{equation}
\end{lemma}
\begin{proof}
Let $\bm{\beta} \in \mathbb{R}^R \setminus \{ \bm{0} \}$ be such that
\begin{equation*}
s_{\mathbb{R}}^{(2)} = \left\lfloor \frac{\dim \mathbb{V}(H_{\bm{\beta}}(\bm{y} )\bm{x}) }{2} \right\rfloor +1.
\end{equation*}
Note that then
\begin{equation} \label{eq.sigma_dimensions_bounds}
2s_{\mathbb{R}}^{(2)} -2 \leq \dim \mathbb{V}(H_{\bm{\beta}}(\bm{y} )\bm{x}) \leq 2 s_{\mathbb{R}}^{(2)} -1.
\end{equation}
The variety $\mathbb{V}(H_{\bm{\beta}}(\bm{y} )\bm{x}) \subset \mathbb{P}_{\mathbb{C}}^{n_1-1} \times \mathbb{P}_{\mathbb{C}}^{n_2-1}$ is defined by $n_1$ bilinear polynomials. Using Corollary~\ref{cor.geometry_intersections} we thus find
\begin{equation*}
\dim \mathbb{V}(H_{\bm{\beta}}(\bm{y} )\bm{x}) \geq n_2-2
\end{equation*}
so the lower bound in~\eqref{eq.sigma_2_bounds} follows. We proceed by considering two cases.

\noindent \textbf{Case 1: $\mathbb{V}(\bm{x}^T H_{\bm{\beta}}(\bm{e}_\ell) \bm{x})_{\ell = 1, \ldots, n_2} = \emptyset$.} Note that this can only happen if $n_2 \geq n_1$. We can therefore apply Lemma~\ref{lem.dimensions_varieties_not_too_big} with $V_1 = \mathbb{V}(\bm{x}^T H_{\bm{\beta}}(\bm{e}_\ell) \bm{x})_{\ell = 1, \ldots, n_2}$, $V_2 = \mathbb{V}(H_{\bm{\beta}}(\bm{y} )\bm{x})$ and $A_i = H_{\bm{\beta}}(\bm{e}_i)$ to find
\[
\dim \mathbb{V}(H_{\bm{\beta}}(\bm{y} )\bm{x}) \leq n_2-1 + \dim \mathbb{V}(\bm{x}^T H_{\bm{\beta}}(\bm{e}_\ell) \bm{x})_{\ell = 1, \ldots, n_2} = n_2-2.
\]
From this and~\eqref{eq.sigma_dimensions_bounds} the upper bound in~\eqref{eq.sigma_2_bounds} follows for this case.

\noindent \textbf{Case 2: $\mathbb{V}(\bm{x}^T H_{\bm{\beta}}(\bm{e}_\ell) \bm{x})_{\ell = 1, \ldots, n_2} \neq \emptyset$:} By assumption there exists $\bm{x} \in \mathbb{C}^{n_1} \setminus \{ \bm{0}\}$ such that
\begin{equation*}
\bm{x}^T H_{\bm{\beta}}(\bm{e}_\ell) \bm{x} = 0, \quad \text{for all $\ell = 1, \ldots, n_2$}.
\end{equation*}
We claim that there exists $\bm{y} \in \mathbb{C}^{n_2} \setminus \{ \bm{0}\}$ such that $H_{\bm{\beta}}(\bm{y})\bm{x} = \bm{0}$. For this define the vectors
\begin{equation*}
\bm{u}_\ell = H_{\bm{\beta}}(\bm{e}_\ell) \bm{x}, \quad \ell = 1, \ldots, n_2.
\end{equation*}
Note that $\bm{x} \in \langle \bm{u}_1, \ldots, \bm{u}_{n_2} \rangle^\perp$ so these vectors must be linearly dependent. Thus there exist $y_1, \ldots, y_{n_2} \in \mathbb{C}$ not all zero, such that
\begin{equation*}
H_{\bm{\beta}}(\bm{y})\bm{x} = \sum_{\ell = 1}^{n_2} y_\ell H_{\bm{\beta}}(\bm{e}_\ell)\bm{x} = \bm{0},
\end{equation*}
where the first equality followed since the entries of $H_{\bm{\beta}}(\bm{y})$ are linear homogeneous in $\bm{y}$. The claim follows. In particular it follows from this that
\begin{equation*}
\left( \mathbb{V}(\bm{x}^T H_{\bm{\beta}}(\bm{e}_\ell) \bm{x})_{\ell = 1, \ldots, n_2} \times \mathbb{P}^{n_2-1} \right)\cap \mathbb{V}(H_{\bm{\beta}}(\bm{y})\bm{x}) \neq \emptyset.
\end{equation*}
Using Lemma~\ref{lem.geometry_intersections} and~\eqref{eq.sigma_dimensions_bounds} we therefore find
\begin{multline} \label{eq.dimensions_sigma_small}
\dim \left[\left( \mathbb{V}(\bm{x}^T H_{\bm{\beta}}(\bm{e}_\ell) \bm{x})_{\ell = 1, \ldots, n_2} \times \mathbb{P}^{n_2-1} \right)\cap \mathbb{V}(H_{\bm{\beta}}(\bm{y})\bm{x})\right] \geq \\
\dim \mathbb{V}(H_{\bm{\beta}}(\bm{y})\bm{x}) - n_2 \geq 2 s_{\mathbb{R}}^{(2)} -n_2-2.
\end{multline}
Recall $\bm{\beta} \cdot \bm{F} = \bm{x}^T H_{\bm{\beta}}(\bm{y}) \bm{x}$ so that
\begin{equation*}
\mathrm{Sing} \mathbb{V} (\bm{\beta} \cdot \bm{F}) = \left( \mathbb{V}(\bm{x}^T H_{\bm{\beta}}(\bm{e}_\ell) \bm{x})_{\ell = 1, \ldots, n_2} \times \mathbb{P}^{n_2-1} \right) \cap \mathbb{V}(H_{\bm{\beta}}(\bm{y})\bm{x}).
\end{equation*}
Under our assumptions we can apply Lemma~\ref{lem.sing_bf_small} to find $\dim \mathrm{Sing} \mathbb{V} (\bm{\beta} \cdot \bm{F}) \leq R-2$. The result follows from this and~\eqref{eq.dimensions_sigma_small}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm.2,1}]
Applying Theorem~\ref{thm.n_aux_imply_result} it suffices to show that
\begin{equation} \label{eq.aux_suff}
N_i^{\mathrm{aux}}(\bm{\beta}; B) \leq C_0 B^{2n - 4 \mathscr{C}}
\end{equation}
holds for all $\bm{\beta} \in \mathbb{R}^R \setminus \{ 0 \}$ and $i = 1,2$, where $\mathscr{C} > (2b+u)R$. Let
\begin{equation*}
s = \max \{s_{\mathbb{R}}^{(1)}, s_{\mathbb{R}}^{(2)} \},
\end{equation*}
where $s_{\mathbb{R}}^{(1)}$ and $s_{\mathbb{R}}^{(2)}$ are as defined in~\eqref{eq.defn_s_1} and~\eqref{eq.defn_s_2}, respectively. From Proposition~\ref{prop.aux_counting_2-1_small} and Proposition~\ref{prop.N_2aux_small}, for any $\varepsilon >0$ we get
\begin{equation*}
N_i^{\mathrm{aux}}(\bm{\beta}; B) \ll_\varepsilon B^{n+s+\varepsilon},
\end{equation*}
with the implied constant not depending on $\bm{\beta}$. Choose $\varepsilon = \frac{n-s-(8b+4u)R}{2}$, which is a positive real number by our assumption~\eqref{eq.assumption_n_i_(2,1)}. Taking
\begin{equation*}
\mathscr{C} = \frac{n-s-\varepsilon}{4},
\end{equation*}
we see from the assumption $n-s_{\mathbb{R}}^{(i)} > (8b+4u)R$ for $i = 1,2$ that we must have $\mathscr{C} > (2b+u)R$ for this choice. Therefore~\eqref{eq.aux_suff} holds and the first part of the theorem follows upon applying Theorem~\ref{thm.n_aux_imply_result}. For the second part recall we assume $n >(16b+8u+1)R$ and that the forms $F_i(\bm{x},\bm{y})$ define a smooth complete intersection in $\mathbb{P}_{\mathbb{C}}^{n-1} \times \mathbb{P}_{\mathbb{C}}^{n-1}$. By Lemma~\ref{lem.sigma_2-1_small} in this case we obtain
\[
s_{\mathbb{R}}^{(1)} \leq R,
\]
and from Lemma~\ref{lem.sigma_2_bounds} we find
\[
s_{\mathbb{R}}^{(2)} \leq \frac{n+R}{2}.
\]
Therefore it is easily seen that assuming $n > (16b+8u+1)R$ implies that
\[
n-s_{\mathbb{R}}^{(i)} > (8b+4u)R
\]
holds for $i=1,2$, which is what we wanted to show.
\end{proof}
\subsection{Proof of Theorem~\ref{thm.2,1_different_dimensions}}
\begin{proof}[Proof of Theorem~\ref{thm.2,1_different_dimensions}]
If $n_1 = n_2$ then the result follows immediately from Proposition~\ref{thm.2,1}. We have two cases to consider, and although their strategies are very similar they are not entirely symmetric. Therefore it is necessary to consider them individually.

\noindent \textbf{Case 1:} $n_1 > n_2$.
We consider a new system of equations $\widetilde{F}_i({\bm{x}}, \tilde{\bm{y}})$ in the variables ${\bm{x}} = (x_1, \ldots, x_n)$ and $\tilde{\bm{y}} = (y_1, \ldots, y_{n_2}, y_{n_2+1}, \ldots, y_{n_1})$ where the forms $\widetilde{F}_i({\bm{x}}, \tilde{\bm{y}})$ satisfy
\[
\widetilde{F}_i({\bm{x}}, \tilde{\bm{y}}) = F_i(\bm{x},\bm{y}),
\]
where $\bm{y} = (y_1, \ldots, y_{n_2})$. Write $\widetilde{N}(P_1,P_2)$ for the counting function associated to the system $\widetilde{\bm{F}} = \bm{0}$ and the boxes $\mathcal{B}_1 \times (\mathcal{B}_2 \times [0,1]^{n_1-n_2})$. Note in particular that if we replace $\bm{F}$ by $\widetilde{\bm{F}}$ in~\eqref{eq.sing_series_real_density} and~\eqref{eq.sing_series_p_adic_expression} then the expressions for the singular series and the singular integral remain unchanged. Further denote by $\tilde{s}_{\mathbb{R}}^{(i)}$ the quantities defined in~\eqref{eq.defn_s_1} and~\eqref{eq.defn_s_2} but with $\bm{F}$ replaced by $\widetilde{\bm{F}}$. Note that we have $\tilde{s}_{\mathbb{R}}^{(1)} = s_{\mathbb{R}}^{(1)}$ and $\tilde{s}_{\mathbb{R}}^{(2)} \leq s_{\mathbb{R}}^{(2)} + \frac{n_1-n_2}{2}$. Therefore the assumptions~\eqref{eq.assumption_n_i_(2,1)_introduction} imply
\[
n_1 - \tilde{s}_{\mathbb{R}}^{(i)} > (8b+4u)R
\]
for $i=1,2$. Hence we may apply Proposition~\ref{thm.2,1} in order to obtain
\[
\widetilde{N}(P_1,P_2) = {\mathfrak{I}} {\mathfrak{S}} P_1^{n_1-2R} P_2^{n_1-R} + O(P_1^{n_1-2R} P_2^{n_1-R} \min\{P_1,P_2\}^{-\delta}),
\]
for some $\delta >0$. Finally it is easy to see that
\begin{align*}
\widetilde{N}(P_1,P_2) &= N(P_1,P_2) \, \# \left\{ \bm{t} \in \mathbb{Z}^{n_1-n_2} \cap [0,P_2]^{n_1-n_2} \right\} \\
&= N(P_1,P_2) (P_2^{n_1-n_2} + O(P_2^{n_1-n_2-1})),
\end{align*}
and so~\eqref{eq.asymptotic_bidegree_2_1} follows.

\noindent \textbf{Case 2:} $n_2 > n_1$.
We deal with this very similarly to the first case; we define a new system of forms $\widetilde{F}_i(\tilde{\bm{x}}, \bm{y})$ in the variables $\tilde{\bm{x}} = (x_1, \ldots, x_{n_2})$ and $\bm{y} = (y_1, \ldots, y_{n_2})$ such that
\[
\widetilde{F}_i(\tilde{\bm{x}}, \bm{y}) = F_i(\bm{x},\bm{y})
\]
holds. As before we define a new counting function $\widetilde{N}(P_1, P_2)$ with respect to the new product of boxes $\left(\mathcal{B}_1 \times [0,1]^{n_2-n_1}\right) \times \mathcal{B}_2$, and we define $\tilde{s}_{\mathbb{R}}^{(i)}$ similarly to the previous case. Note that $\tilde{s}_{\mathbb{R}}^{(1)} = s_{\mathbb{R}}^{(1)}+n_2-n_1$ and $\tilde{s}_{\mathbb{R}}^{(2)} \leq s_{\mathbb{R}}^{(2)} + \frac{n_2-n_1}{2}$ so that~\eqref{eq.assumption_n_i_(2,1)_introduction} gives
\[
n_2 - \tilde{s}_{\mathbb{R}}^{(i)} > (8b+4u)R,
\]
for $i=1,2$.
Therefore Proposition~\ref{thm.2,1} applies and we deduce again that~\eqref{eq.asymptotic_bidegree_2_1} holds as desired. Finally we turn to the case when $\mathbb{V}(\bm{F})$ defines a smooth complete intersection. Note first that by Lemma~\ref{lem.sigma_2_bounds} we have
\[
s_{\mathbb{R}}^{(2)} \leq \frac{n_2+R}{2},
\]
and therefore the condition
\[
\frac{n_1+n_2}{2} - s_{\mathbb{R}}^{(2)} > (8b+4u)R
\]
is satisfied if we assume $n_1 > (16b+8u+1)R$. Further, by Lemma~\ref{lem.sigma_2-1_small} we have
\[
s_{\mathbb{R}}^{(1)} \leq \max\{0, n_1+R-n_2 \},
\]
and so we may replace the condition $n_1 - s_{\mathbb{R}}^{(1)} > (8b+4u)R$ by
\[
n_1- \max\{0, n_1+R-n_2 \} > (8b+4u)R.
\]
If $n_2 \geq n_1+R$ then this reduces to assuming $n_1 > (8b+4u+1)R$, which follows immediately since we assumed $n_1 > (16b+8u+1)R$. If $n_2 \leq n_1+R$ on the other hand, then this is equivalent to assuming
\[
n_2 > (8b+4u+1)R.
\]
In any case, the assumptions~\eqref{eq.assumptions_n_i_(2,1)_smooth_case} imply the assumptions~\eqref{eq.assumption_n_i_(2,1)_introduction} as desired.
\end{proof}
\bibliography{refs.bib}
\bibliographystyle{plain}
\end{document}
\begin{document}
\title{A note on mean equicontinuity}
\author[J.~Qiu]{Jiahao Qiu}
\address[J.~Qiu]{Wu Wen-Tsun Key Laboratory of Mathematics, USTC, Chinese Academy of Sciences and School of Mathematics, University of Science and Technology of China, Hefei, Anhui, 230026, P.R. China}
\email{[email protected]}
\author[J.~Zhao]{Jianjie Zhao}
\address[J.~Zhao]{Wu Wen-Tsun Key Laboratory of Mathematics, USTC, Chinese Academy of Sciences and School of Mathematics, University of Science and Technology of China, Hefei, Anhui, 230026, P.R. China}
\email{[email protected]}
\date{\today}
\begin{abstract}
In this note, it is shown that several results concerning mean equicontinuity which were previously proved for minimal systems actually hold for general topological dynamical systems. In particular, it turns out that a dynamical system is mean equicontinuous if and only if it is equicontinuous in the mean if and only if it is Banach (or Weyl) mean equicontinuous if and only if its regionally proximal relation is equal to the Banach proximal relation. Meanwhile, a relation is introduced such that the smallest closed invariant equivalence relation containing this relation induces the maximal mean equicontinuous factor for any system.
\end{abstract}
\keywords{Mean equicontinuity, equicontinuity in the mean}
\subjclass[2010]{54H20, 37A25}
\maketitle
\section{Introduction}
Throughout this paper, \emph{a topological dynamical system} is a pair $(X,T)$, where $X$ is a non-empty compact metric space with a metric $d$ and $T$ is a continuous map from $X$ to itself.

It is well known that equicontinuous systems have simple dynamical behavior. By the well-known Halmos-von Neumann theorem, a transitive equicontinuous system is conjugate to a minimal rotation on a compact abelian metric group, and $(X,T,\mu)$ has discrete spectrum, where $\mu$ is the unique Haar measure on $X$. In this note, we discuss systems which are equicontinuous in a mean sense.

Recall that a dynamical system $(X,T)$ is called \emph{mean equicontinuous} if for every $\varepsilon>0$, there exists a $\delta>0$ such that whenever $x,y\in X$ with $d(x,y)<\delta$, we have \[\limsup_{n\to\infty} \bar d_n(x,y)<\varepsilon,\ \text{where}\ \bar d_n(x,y)=\frac{1}{n}\sum_{i=0}^{n-1}d(T^ix,T^iy).\] A notion called \emph{stable in the mean in the sense of Lyapunov} or simply \emph{mean-L-stable} was introduced by Fomin~\cite{F51}. We call a dynamical system $(X,T)$ \emph{mean-L-stable} if for every $\varepsilon>0$, there is a $\delta>0$ such that $d(x,y)<\delta$ implies $d(T^nx,T^ny)<\varepsilon$ for all $n\in{\mathbb{Z}_+}$ outside a set of upper density less than $\varepsilon$. Oxtoby \cite{O52}, Auslander \cite{A59} and Scarpellini \cite{S82} also studied mean-L-stable systems. It is easy to see that a dynamical system is mean-L-stable if and only if it is mean equicontinuous. Answering an open question in \cite{S82}, it was proved by Li, Tu and Ye in \cite{YL} that a minimal mean equicontinuous system has discrete spectrum. We refer to \cite{Felipe1, Felipe2, Felipe3, GLZ17, HL, Li} for further study on mean equicontinuity and related subjects.

In the study of dynamical systems with bounded complexity (defined by using the mean metrics), Huang, Li, Thouvenot, Xu and Ye \cite{HL} recently introduced a notion called \emph{equicontinuity in the mean}.
We say that a dynamical system $(X,T)$ is \emph{equicontinuous in the mean} if for every $\varepsilon>0$, there exists a $\delta>0$ such that $\frac{1}{n}\sum_{i=0}^{n-1}d(T^ix,T^iy)<\varepsilon$ for all $n\in {\mathbb{Z}_+}$ and all $x,y\in X$ with $d(x,y)<\delta$. It was proved in \cite{HL} that for a minimal system the notions of mean equicontinuity and equicontinuity in the mean are equivalent. In this note we will show that a dynamical system is mean equicontinuous if and only if it is equicontinuous in the mean (Theorem \ref{general}). In \cite{YL} the notion of Banach (or Weyl) mean equicontinuity was introduced, and the authors asked if for a minimal system Banach mean equicontinuity is equal to mean equicontinuity. This question was answered positively in \cite{DG16}. In this note we will show that in fact for any system the two notions are equivalent (Theorem \ref{thm:Weyl-mean-equ}). Moreover, in \cite{YL} the authors showed that if $(X,T)$ is mean equicontinuous, then its regionally proximal relation is equal to the Banach proximal relation. In this note we will prove that the converse statement is also valid (Theorem \ref{thm:transitive-mean-eq}). Moreover, we define a notion called \emph{regionally proximal relation in the mean} and we show that the mean equicontinuous structure relation is the smallest closed invariant equivalence relation that contains regionally proximal relation in the mean (Theorem \ref{MAX ME}). The note is organized as follows. In Section 2, the basic notions used in the note are introduced. In Section 3, among other things we show that mean equicontinuity is equal to equicontinuity in the mean. In Section 4, we prove that if the regionally proximal relation is equal to Banach proximal relation then the system is mean equicontinuous. In Section 5, we prove the equivalence of mean equicontinuity and Weyl mean equicontinuity. In the final section, we discuss the question which relation induces the maximal mean equicontinuous factor. \section{Preliminaries} In this section we recall some notions and aspects of the theory of topological dynamical systems. \subsection{Subsets of non-negative integers} Let ${\mathbb{Z}_+}$ ($\mathbb{N}$, $\mathbb Z$, respectively) be the set of all non-negative integers (positive integers, integers, respectively). Let $F$ be a subset of $\mathbb{Z}_+$ ($\mathbb{N}$, $\mathbb Z$, respectively). Denote by $\#(F)$ the number of elements of $F$. We say that $F$ has \emph{density} $D(F)$ if the \emph{lower density} of $F$ ($\underline{D}(F)$) is equal to the \emph{upper density} of $F$ ($\overline{D}(F)$), that is, $D(F)=\overline{D}(F)=\underline{D}(F)$, where \[ \underline{D}(F)=\liminf_{n\to\infty} \frac{\#(F\cap[0,n-1])}{n}\] and \[ \overline{D}(F)=\limsup_{n\to\infty} \frac{\#(F\cap[0,n-1])}{n}.\] Similarly, we say that $F$ has \emph{Banach density} if the \emph{lower Banach density} of $F$ ($BD_*(F)$) is equal to the \emph{upper Banach density} of $F$ ($BD^*(F)$), that is, $BD(F)=BD_{*}(F)=BD^{*}(F)$, where, $$ BD_{*}(F)=\liminf_{N-M \to \infty} \frac{\#(F\cap [M,N])}{N-M+1} $$ and $$ BD^{*}(F)=\limsup_{N-M \to \infty} \frac{\#(F\cap [M,N])}{N-M+1}. $$ \subsection{Compact metric spaces} Denote by $(X,d)$ a compact metric space. For $x\in X$ and $\varepsilon>0$, denote $B(x,\varepsilon)=\{y\in X\colon d(x,y)<\varepsilon\}$. We denote by $\diam(X)$ the diameter of $X$ given by $\diam(X)=\sup_{x, y\in X}d(x,y)$, the product space $X\times X=\{(x,y)\colon x,y\in X\}$ and the diagonal $\Delta_X=\{(x,x)\colon x\in X\}$. 
Let $C(X)$ be the set of continuous complex-valued functions on $X$ with the supremum norm $\Vert f\Vert=\sup_{x\in X}|f(x)|$. We denote by $C(X)^*$ the dual space of $C(X)$.

\subsection{Topological dynamics}
Let $(X,T)$ be a dynamical system. A non-empty closed invariant subset $Y \subset X$ (i.e., $TY \subset Y$) naturally defines a subsystem $(Y,T)$ of $(X,T)$. A system $(X,T)$ is called minimal if it contains no proper subsystem. Each point belonging to some minimal subsystem of $(X,T)$ is called a minimal point. The orbit of a point $x\in X$ is the set $Orb(x,T)=\{x,Tx,T^2x,\ldots\}$. The set of limit points of the orbit $Orb(x,T)$ is called the $\omega$-limit set of $x$, and is denoted by $\omega(x,T)$. For $x \in X$ and $U,V \subset X$, put $N(x,U) = \{n \in{\mathbb{Z}_+}: T^nx \in U\}$ and $N(U,V) = \{n\in {\mathbb{Z}_+}: U\cap T^{-n}V\neq \emptyset\}$. Recall that a dynamical system $(X,T)$ is called topologically transitive (or just transitive) if for every two non-empty open subsets $U,V$ of $X$ the set $N(U,V)$ is infinite. Any point with dense orbit is called a transitive point. Denote the set of all transitive points by $Trans(X,T)$. It is well known that for a transitive system, $Trans(X,T)$ is a dense $G_\delta$ subset of $X$.

Let $(X,T)$ and $(Y,S)$ be two dynamical systems and let $\pi\colon X\to Y$ be a continuous map. If $\pi$ is surjective with $\pi\circ T=S\circ \pi$, then we say that $\pi$ is a \emph{factor map}, the system $(Y,S)$ is a \emph{factor} of $(X,T)$, or that $(X,T)$ is an \emph{extension} of $(Y,S)$. If $\pi$ is a homeomorphism, then we say that $\pi$ is a \emph{conjugacy} and that the dynamical systems $(X,T)$ and $(Y,S)$ are \emph{conjugate}. By the Halmos and von Neumann theorem (see \cite[Theorem 5.18]{W82}), a minimal system is equicontinuous if and only if it is conjugate to a minimal rotation on a compact abelian metric group.

A pair $(x,y)\in X\times X$ is said to be \emph{proximal} if for any $\varepsilon>0$, there exists a positive integer $n$ such that $d(T^nx,T^ny)<\varepsilon$. Let $P(X,T)$ denote the collection of all proximal pairs in $(X,T)$. If any pair of two points in $X$ is proximal, then we say that the dynamical system $(X,T)$ is \emph{proximal}. A pair $(x,y)\in X\times X$ is said to be \emph{Banach proximal} if for any $\varepsilon>0$, $d(T^nx,T^ny)<\varepsilon$ for all $n\in {\mathbb{Z}_+}$ except a set of zero Banach density. Let $BP(X,T)$ denote the collection of all Banach proximal pairs in $(X,T)$. See \cite{LT14} for a detailed study on Banach proximality. A pair $(x,y)$ is called \emph{regionally proximal} if for every $\varepsilon>0$, there exist two points $x',y'\in X$ with $d(x,x')<\varepsilon$ and $d(y,y')<\varepsilon$, and a positive integer $n$ such that $d(T^nx',T^ny')<\varepsilon$. Let $Q(X,T)$ be the set of all regionally proximal pairs in $(X,T)$. Clearly, $Q(X,T)\supset P(X,T)\supset BP(X,T)$. A factor map $\pi\colon (X,T)\to (Y,S)$ is called \emph{proximal} (\emph{Banach proximal}, respectively) if whenever $\pi(x)=\pi(y)$ the pair $(x,y)$ is proximal (Banach proximal, respectively). The factor $\pi\colon(X,T)\to (Y,S)$ is the maximal equicontinuous factor if the system $(Y,S)$ is equicontinuous and for any other factor map $\phi\colon(X,T)\to (Z,U)$, where $(Z,U)$ is equicontinuous, there exists a factor map $\psi\colon(Y,S)\to (Z,U)$ such that $\phi=\psi \circ \pi$. It is thus unique up to conjugacy and is therefore referred to as the \emph{maximal equicontinuous factor}.
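As a standard illustration of the difference between the proximal and the Banach proximal relation (the example is only for illustration and is not needed in what follows), consider the full shift $(\Sigma,\sigma)$, where $\Sigma=\{0,1\}^{\mathbb{Z}_+}$ carries the metric $d(x,y)=2^{-\min\{i\colon x_i\neq y_i\}}$ and $\sigma$ is the shift map. For $S\subset\mathbb{Z}_+$ let $\mathbf{1}_S\in\Sigma$ denote its indicator sequence. Then the pair $(0^\infty,\mathbf{1}_S)$ is proximal if and only if $S$ has arbitrarily long gaps (that is, for every $k$ there is an $n$ with $S\cap[n,n+k]=\emptyset$), while it is Banach proximal if and only if $BD^{*}(S)=0$. Taking, for instance,
\[
S=\bigcup_{k\geq 1}\bigl[(2k)!,(2k+1)!\bigr]\cap\mathbb{Z}_+,
\]
which contains arbitrarily long intervals and also leaves arbitrarily long gaps, we obtain a pair that is proximal but not Banach proximal; so in general $BP(X,T)\subsetneq P(X,T)$.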
Let $\pi\colon(X,T)\to (Y,S)$ be the factor map to the maximal equicontinuous factor. The equivalence relation $R_\pi=\{(x,y)\in X\times X: \pi(x)=\pi(y) \}$ is called the \emph{equicontinuous structure relation}. It is shown in~\cite{EG60} that the equicontinuous structure relation is the smallest closed invariant equivalence relation containing the regionally proximal relation.

\subsection{Invariant measures}
For a dynamical system $(X,T)$, we denote by $M(X,T)$ the set of $T$-invariant regular Borel probability measures on $X$. It is well known that $M(X, T)$ is always nonempty. We say that $(X,T)$ is \emph{uniquely ergodic} if $M(X,T)$ consists of a single measure. We regard $M(X)$ as a closed convex subset of $C(X)^*$, equipped with the weak$^*$ topology. Then $M(X)$ is a compact metric space. An invariant measure is ergodic if and only if it is an extreme point of $M(X,T)$. Let $\mu \in M(X, T)$. We define the \emph{support} of $\mu$ by $\supp(\mu)=\{x\in X\colon \mu(U)>0 \text{ for any neighborhood } U \text{ of } x\}$. The \emph{support} of a dynamical system $(X,T)$, denoted by $\supp(X,T)$, is the smallest closed subset $C$ of $X$ such that $\mu(C)=1$ for all $\mu\in M(X,T)$. The action of $T$ on $X$ induces an action on $M(X)$ in the following way: for $\mu\in M(X)$ we define $T\mu$ by \[\int_X f(x)\ \textrm{d}T\mu(x)= \int_X f(Tx)\textrm{d}\mu(x), \quad \forall f\in C(X).\] Hence $(M(X),T)$ is also a topological dynamical system.

Fix a measure space $(X, \mathcal{B}, \mu)$. If $f$ and $g$ are functions on $X$, we denote by $f\otimes g$ the function on $X\times X$ given by $f\otimes g(x,x')=f(x)g(x')$ and by $L^{\infty}(X)\otimes L^{\infty}(X)$ we denote the algebra of functions on $X\times X$ that are finite sums of functions $f\otimes g$ with $f, g \in L^{\infty}(X, \mathcal{B}, \mu)$. We denote by $\mu_{\Delta}$ the diagonal measure on $X\times X$ given by $\int f(x,x')\textrm{d}\mu_{\Delta}(x,x')=\int f(x,x)\textrm{d}\mu(x)$. We notice that $\mu_\Delta (A\times B)=\mu(A\cap B)$ for any $A, B \in \mathcal{B}$. For a dynamical system $(X,T)$, $f\in C(X)$ and $n\in\mathbb{N}$, let $f_n(x)=\frac{1}{n}\sum_{i=0}^{n-1} f(T^ix)$. The following theorem is well known.
\begin{thm}\cite{O52} \label{thm:unique-ergodic}
Let $(X,T)$ be a dynamical system. Then the following conditions are equivalent:
\begin{enumerate}
\item $(X,T)$ is uniquely ergodic;
\item for each $f\in C(X)$, $\{f_n\}_{n=1}^\infty$ converges uniformly on $X$ to a constant;
\item for each $f\in C(X)$, there is a subsequence $\{f_{n_k}\}_{k=1}^\infty$ which converges pointwise on $X$ to a constant;
\item $(X,T)$ contains only one minimal set, and for each $f\in C(X)$, $\{f_n\}_{n=1}^\infty$ converges uniformly on $X$.
\end{enumerate}
\end{thm}
\section{Mean equicontinuity and equicontinuity in the mean}
In this section we will show that mean equicontinuity is equal to equicontinuity in the mean. Moreover, we will discuss what kinds of minimal sets can appear in a transitive mean equicontinuous system.
\subsection{Mean equicontinuity and equicontinuity in the mean}
We start with the following characterizations of systems which are equicontinuous in the mean. To do this, we need a simple lemma.
\begin{lem}\cite[Lemma 3.2]{YL} \label{same}
Let $(X,T)$ and $(Y,S)$ be two dynamical systems. Then $(X\times Y,T\times S)$ is mean equicontinuous if and only if both $(X,T)$ and $(Y,S)$ are mean equicontinuous.
\end{lem}
\begin{thm}\label{thm:unifon-conv-equi}
Let $(X,T)$ be a dynamical system.
Then the following conditions are equivalent:
\begin{enumerate}
\item $(X,T)$ is equicontinuous in the mean;
\item for each $f\in C(X\times X)$, the sequence $\{\frac{1}{n}\sum_{i=0}^{n-1} f\circ(T^i\times T^i)\}_{n=1}^\infty$ is uniformly equicontinuous;
\item for each $f\in C(X\times X)$, the sequence $\{\frac{1}{n}\sum_{i=0}^{n-1} f\circ(T^i\times T^i)\}_{n=1}^\infty$ is uniformly convergent to a $T\times T$-invariant continuous function $f^*\in C(X\times X)$.
\end{enumerate}
\end{thm}
\begin{proof}
We only present the proof that (1) implies (2); the rest is similar to the proof of \cite[Theorem 3.3]{YL}. To make the idea of the proof clearer, when proving (1)$\Rightarrow$(2), we assume $f\in C(X)$ instead of $f\in C(X\times X)$, because $(X,T)$ is equicontinuous in the mean if and only if so is $(X\times X,T\times T)$.

(1)$\Rightarrow$(2) Fix $f\in C(X)$ and $\varepsilon>0$. By continuity of $f$, there exists $\eta\in(0,\frac{\varepsilon}{4\Vert f\Vert})$ such that if $x$, $y\in X$ with $d(x,y)<\eta$ then $|f(x)-f(y)|<\frac{\varepsilon}{2}$. As $(X,T)$ is equicontinuous in the mean, there is $\delta\in(0,\eta)$ such that if $x,y\in X$ with $d(x,y)<\delta$, one has \[\frac{1}{n}\sum_{i=0}^{n-1}d(T^ix,T^iy)<\eta^2,\ n=1,2,\dotsc.\] For every $n\in \mathbb{N}$ and $x,y\in X$ with $d(x,y)<\delta$, let \[I_n(x,y)=\{i\in [0,n-1]\colon d(T^ix,T^iy)\geq \eta\}.\] Then $\#(I_n(x,y))\leq \eta n$. So for every $n\in \mathbb{N}$ and $x,y\in X$ with $d(x,y)<\delta$, we have
\begin{align*}
\Bigl\vert\frac{1}{n}\sum_{i=0}^{n-1} f(T^ix)-\frac{1}{n}\sum_{i=0}^{n-1} f(T^iy)\Bigr\vert &\leq \frac{1}{n} \sum_{i=0}^{n-1}\bigl\vert f(T^ix)-f(T^iy)\bigr\vert\\
&\leq \frac{1}{n}\Bigl(\sum_{i\in I_n(x,y)} 2 \Vert f\Vert+ \sum_{i\in [0,n-1]\setminus I_n(x,y)}\bigl\vert f(T^ix)-f(T^iy)\bigr\vert\Bigr)\\
&\leq 2\eta \Vert f\Vert +\frac{\varepsilon}{2}<\varepsilon.
\end{align*}
This shows that $\{\frac{1}{n}\sum_{i=0}^{n-1} f\circ T^i\}_{n=1}^\infty$ is uniformly equicontinuous.
\end{proof}
Before proving the main result of this section we give a proof of a result in \cite{O52} which is outlined there. We need the following lemmas.
\begin{lem}\label{mean-eq-minimal}\cite{HL}
Let $(X,T)$ be a minimal dynamical system. Then $(X,T)$ is mean equicontinuous if and only if it is equicontinuous in the mean.
\end{lem}
\begin{lem}\label{lem:mean-eq-BP}\cite[Theorem 3.5]{YL}
Let $(X,T)$ be a dynamical system. If $(X,T)$ is mean equicontinuous, then $Q(X,T)=P(X,T)=BP(X,T)$ and it is a closed invariant equivalence relation.
\end{lem}
\begin{lem} \cite{LT14} \label{PRBD}
Let $(X,T)$ be a dynamical system. Then the support of $(X,T)$ is the smallest closed subset $K$ of $X$ such that for every $x\in X$ and every open neighborhood $U$ of $K$, $N(x,U)$ has Banach density one.
\end{lem}
Now we are ready to show
\begin{thm}\label{transitive}
Let $(X,T)$ be a dynamical system. If $(X,T)$ is mean equicontinuous, then for every $x\in X$, $(\overline{\text{Orb}(x,T)},T)$ is uniquely ergodic. In particular, if $(X,T)$ is also transitive, then $(X,T)$ is uniquely ergodic.
\end{thm}
\begin{proof}
Without loss of generality, we can assume that $X=\overline{Orb(x,T)}$. Suppose $M_1,M_2$ are two minimal sets in $(X,T)$. By the Auslander-Ellis theorem, there exist $y_1\in M_1$ and $y_2\in M_2$ such that $(x,y_1)$ and $(x,y_2)$ are both proximal. For a given $\varepsilon>0$, set $$ A_1=\{n\in{\mathbb{Z}_+}: d(T^nx,T^ny_1)<\varepsilon/2\}\ \text{and}\ A_2=\{n\in{\mathbb{Z}_+}: d(T^nx,T^ny_2)<\varepsilon/2\}.
$$ By Lemma~\ref{lem:mean-eq-BP}, $A_1\cap A_2\not=\emptyset$, which implies that $(y_1,y_2)$ is proximal. As $y_1$ and $y_2$ are minimal points, their orbit closures are equal, which implies that $M_1=M_2$. So there is only one minimal set in $(X,T)$, denoted by $M$. It is clear that $(M,T)$ is also mean equicontinuous. By Lemma \ref{mean-eq-minimal}, $(M,T)$ is equicontinuous in the mean. Then $(M,T)$ is uniquely ergodic by Theorem \ref{thm:unique-ergodic} and Theorem \ref{thm:unifon-conv-equi}. For every $z\in X$, by the Auslander-Ellis theorem again, there exists a point $y\in M$ such that $(z,y)$ is proximal. By Lemma~\ref{lem:mean-eq-BP}, $(z,y)$ is a Banach proximal point. So for every open neighborhood $U$ of $M$ and any $z\in X$, $N(z,U)$ has Banach density one. Then by Lemma \ref{PRBD} we have $\supp(X,T)\subset M$. As $(M,T)$ is uniquely ergodic, so is $(X,T)$.
\end{proof}
Now we begin to prove the main result of this section. We need the following lemma.
\begin{lem}\label{minimal-point}
Let $(X,T)$ be a mean equicontinuous system and let $\nu$ be an ergodic measure on $X$. Then every point of $\text{supp}(\nu)$ is minimal.
\end{lem}
\begin{proof}
$(\text{supp}(\nu),T)$ is a transitive system since $\nu$ is an ergodic measure on $X$. By Theorem \ref{transitive}, $(\text{supp}(\nu),T)$ is uniquely ergodic, and so, it is minimal.
\end{proof}
Now we are ready to show the main result. Note that our method is different from the proof for the minimal case.
\begin{thm}\label{general}
$(X,T)$ is mean equicontinuous if and only if it is equicontinuous in the mean.
\end{thm}
\begin{proof}
If $(X,T)$ is equicontinuous in the mean, it is clear that $(X,T)$ is mean equicontinuous. Now assume that $(X,T)$ is mean equicontinuous. If $(X,T)$ is not equicontinuous in the mean, then there are $x_k,y_k,z\in X$, $n_k\in {\mathbb{Z}_+}$, $k=1,2,\dots$, and $\varepsilon_0>0$ such that $\lim_{k\rightarrow\infty }x_k=z=\lim_{k\rightarrow\infty }y_k$ and for every $k\in {\mathbb{Z}_+}$ $$ \frac{1}{n_k}\sum_{i=0}^{n_k-1}d(T^ix_k,T^iy_k)\geq \varepsilon_0. $$ Let $\mu_k=\frac{1}{n_k}\sum_{i=0}^{n_k-1}\delta_{(T^ix_k,T^iy_k)}$; then $\mu_k\in M(X\times X)$. We may assume $\mu_k\rightarrow \mu$ (otherwise we pass to a subsequence), where $\mu\in M(X\times X,T\times T)$. We claim that $\mu(\text{supp}(\mu )\setminus \Delta_X)>0$. Indeed, $d(\cdot,\cdot)$ is a continuous function on $X\times X$, so we have $$ \int_{X\times X}d(x,y)\textrm{d}\mu_k\longrightarrow \int_{X\times X}d(x,y)\textrm{d}\mu $$ and $$ \int_{X\times X}d(x,y)\textrm{d}\mu_k=\frac{1}{n_k}\sum_{i=0}^{n_k-1}d(T^ix_k,T^iy_k)\geq \varepsilon_0 $$ which implies $$\mu(\text{supp}(\mu)\setminus \Delta_X)>0.$$ By the ergodic decomposition, we have $\nu(\text{supp}(\mu)\setminus \Delta_X)>0$ for some ergodic measure $\nu$ on $X\times X$. Thus, there exists a minimal point in $\text{supp}(\mu)\setminus \Delta_X$, since $\text{supp}(\nu)$ is a minimal set by Lemma \ref{minimal-point}. Denote this minimal point by $(u,v)$. For $l\in\mathbb{N}$, let $B_l=\{(x,y)\in X\times X:d((x,y),(u,v))<\frac{1}{l} \}$; then $$ \mu(B_l)>0 \ \text{and}\ \mu_k(B_l)=\frac{1}{n_k}\#(\{0\leq i \leq n_k-1:(T^ix_k,T^iy_k)\in B_l\}). $$ There are infinitely many $k\in {\mathbb{Z}_+}$ with $0\leq m_k \leq n_k-1$ such that $(T^{m_k}x_k,T^{m_k}y_k)\in B_l$, since $0<\mu(B_l)\leq \liminf_{k\rightarrow\infty}\mu_k(B_l)$ for every $l\in \mathbb{N}$.
Put $$\delta=d(\overline{Orb((z,z),T\times T)},\overline{Orb((u,v),T\times T)}).$$ Then $\delta>0$, since $\overline{Orb((u,v),T\times T)}\cap \Delta_X=\emptyset$. As $(X,T)$ is mean equicontinuous, so is $(X\times X,T\times T)$ by Lemma \ref{same}. Then, for $\frac{1}{4}\delta$, there is $\eta>0$ such that if $d((x,y),(x',y'))<\eta$, then \[\limsup_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}d((T\times T)^i(x,y),(T\times T)^i(x',y'))<\frac{\delta}{4}.\] We can choose $k\in {\mathbb{Z}_+}$ with $d((x_k,y_k),(z,z))<\eta$ and $d((T^{m_k}x_k,T^{m_k}y_k),(u,v))<\eta$; then \[\limsup_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}d((T\times T)^i(x_k,y_k),(T\times T)^i(z,z))<\frac{\delta}{4}\] and \[\limsup_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}d((T\times T)^i(T^{m_k}x_{k},T^{m_k}y_{k}),(T\times T)^i(u,v))<\frac{\delta}{4}\] which implies \[\limsup_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}d((T\times T)^i(T^{m_k}z,T^{m_k}z),(T\times T)^i(u,v))<\frac{\delta}{2}.\] This is a contradiction; thus $(X,T)$ is equicontinuous in the mean.
\end{proof}
\subsection{Minimal sets in a transitive mean equicontinuous system}
In Theorem \ref{transitive} we have shown that a transitive mean equicontinuous system is uniquely ergodic, and thus it contains a unique minimal subset. Here we will discuss what kinds of minimal sets can appear in a transitive mean equicontinuous system.
\begin{thm}
We have the following observations.
\begin{enumerate}
\item If $(X,T)$ is weakly mixing and mean equicontinuous, then the unique minimal set is a fixed point.
\item If $(X,T)$ is totally transitive and mean equicontinuous, then the unique minimal set is totally minimal and mean equicontinuous. Moreover, any totally transitive, minimal, mean equicontinuous system can be realized in a totally transitive non-minimal mean equicontinuous system.
\item If $(X,T)$ is transitive and mean equicontinuous, then the unique minimal set is mean equicontinuous. Moreover, any minimal, mean equicontinuous system can be realized in a transitive non-minimal mean equicontinuous system.
\end{enumerate}
\end{thm}
\begin{proof}
(1). If $(X,T)$ is mean equicontinuous and weakly mixing, then $(X\times X,T\times T)$ is transitive and, by Lemma \ref{same}, mean equicontinuous; thus it is uniquely ergodic by Theorem \ref{transitive}. Since $\mu\times \mu$ and $\mu_\Delta$ are invariant measures on $X\times X$ for any invariant measure $\mu$ on $X$, we get $\mu\times \mu=\mu_\Delta$. As $\mu_\Delta(\Delta_X)=1$, the measure $\mu$ must be $\delta_x$ for some $x\in X$. Hence $(X,T)$ is also uniquely ergodic and its unique minimal set is the fixed point $x$.

(2). If $(X,T)$ is mean equicontinuous and totally transitive, it has only one minimal set by Theorem \ref{transitive}, denoted by $M$. Then $(M,T)$ is totally minimal. Indeed, $(X,T^n)$ is also a transitive mean equicontinuous system for every $n\in\mathbb{N}$; again by Theorem \ref{transitive}, there is only one $T^n$-invariant measure on $X$, denoted by $\mu_n$. Let $\mu$ be the unique invariant measure of $(X,T)$; it is also $T^n$-invariant, hence $\mu=\mu_n$ and $M=\text{supp}(\mu)=\text{supp}(\mu_n)$, which implies $(M,T^n)$ is minimal. It is clear that $(M,T)$ is mean equicontinuous. Now let $(X_1,T_1)$ be a totally minimal mean equicontinuous system and $(X_2,T_2)$ be a weakly mixing system which is uniquely ergodic with its invariant measure supported on a fixed point $p$. Then $(X_2,T_2)$ is mean equicontinuous and $(X_1\times X_2, T_1\times T_2)$ is the system we want.

(3). The first statement follows again by Theorem \ref{transitive}.
Let $(X_1,T_1)$ be a minimal mean equicontinuous system and $(X_2,T_2)$ be a weakly mixing system which is uniquely ergodic with its invariant measure supported on a fixed point $p$. Then $(X_2,T_2)$ is mean equicontinuous and $(X_1\times X_2, T_1\times T_2)$ is the system we want.
\end{proof}
\section{Regionally proximal and Banach proximal relations}
Lemma \ref{lem:mean-eq-BP} shows that for a mean equicontinuous system $(X,T)$, we have $BP(X,T)=P(X,T)=Q(X,T)$. We will show the converse is also valid, i.e. for a dynamical system $(X,T)$, if $BP(X,T)=P(X,T)=Q(X,T)$ then it is equicontinuous in the mean. In fact we will prove more by providing a series of equivalent statements; see Theorem \ref{thm:transitive-mean-eq} for details. We start with some preparations. The following lemma is just a simple observation.
\begin{lem}
Let $(X,T)$ be a dynamical system. If $(x,y)\in BP(X,T)$, then for any neighborhood $U$ of $\Delta_{X}$, we have $BD(N((x,y),U))=1$.
\end{lem}
\begin{lem}\label{measure}
Let $(X,T)$ be a dynamical system. If there exists $\mu\in M(X\times X, T\times T)$ such that $\mu(BP(X,T))=1$, then $\mu(\Delta_X)=1$.
\end{lem}
\begin{proof}
Assume that $\mu(\Delta_{X})<1$, i.e.\ $\mu(BP(X,T)\setminus \Delta_{X})>0$. As $X$ is a compact metric space, there exists a closed set $F\subset \text{supp}(\mu)\cap BP(X,T)\setminus \Delta_{X}$ with $\mu(F)>0$. By the ergodic decomposition theorem, there exists an ergodic measure $\nu$ with $\nu(F)>0$. By the Birkhoff ergodic theorem, there exists $z\in F$ such that $$ \frac{1}{n} \#(N(z,F)\cap[0,n-1])=\frac{1}{n}\sum_{i=0}^{n-1} \chi_{F}(T^{i}z)\rightarrow \int \chi_F \textrm{d}\nu=\nu(F)>0,$$ so that $D(N(z,F))>0$. Choose neighborhoods $U$ and $V$ of $F$ and $\Delta_X$ respectively with $U\cap V=\emptyset$; then $BD_{*}(N(z,U))\geq D(N(z,F))>0$. On the other hand, we have $BD(N(z,V))=1$, since $z\in BP(X,T)$. This contradiction proves the lemma.
\end{proof}
For a minimal system the following result is known; see \cite{DG16} and \cite{YL}. We now show it holds for a general system.
\begin{thm}\label{thm:transitive-mean-eq}
Let $(X,T)$ be a dynamical system. Let $(Z,S)$ be the maximal equicontinuous factor of $(X,T)$ and $\pi\colon (X,T)\to (Z,S)$ be the factor map. Then the following conditions are equivalent:
\begin{enumerate}
\item $(X,T)$ is mean equicontinuous;
\item $\pi$ is Banach proximal;
\item $BP(X,T)=P(X,T)=Q(X,T)$;
\item $\pi\colon (X,\mu,T) \to (Z,\nu,S)$ is a measure-theoretic isomorphism, where $\mu$ and $\nu$ are any invariant measures on $X$ and $Z$ respectively with $\pi(\mu)=\nu$;
\item $(X,T)$ is equicontinuous in the mean.
\end{enumerate}
\end{thm}
\begin{proof}
(1) $\Rightarrow$ (2) by Lemma \ref{lem:mean-eq-BP}. (1) $\Leftrightarrow$ (5) by Theorem \ref{general}. Moreover, it is clear that (2) $\Leftrightarrow$ (3).

(3) $\Rightarrow$ (4): This is essentially proved in \cite[Theorem 3.8]{YL}. Here we provide a proof for completeness. Let $\mu$ be an invariant measure on $(X,T)$ and $\nu$ be the invariant measure on $(Z,S)$ with $\pi(\mu)=\nu$. We consider the disintegration of $\mu$ over $\nu$. That is, for a.e.\ $y\in Z$ we have a measure $\mu_y$ on $X$ such that $\supp(\mu_y)\subset \pi^{-1}(y)$ and $\mu=\int_{y\in Z}\mu_y \textrm{d}\nu$. Let $W=\{(u,v)\in X\times X: \pi(u)=\pi(v)\}$. As $\supp(\mu_y)\subset \pi^{-1}(y)$, we have $\supp(\mu_y\times\mu_y)\subset \pi^{-1}(y)\times \pi^{-1}(y)\subset W$ for a.e.\ $y\in Z$. Let $\mu\times_Z\mu=\int_{y\in Z}\mu_y\times\mu_y\textrm{d}\nu$.
Then $\mu\times_Z\mu$ is an invariant measure on $(X\times X,T\times T)$. Moreover, \[\mu\times_Z\mu(W)=\int_{y\in Z} \mu_y\times\mu_y(W)\textrm{d}\nu=1,\] so $\supp(\mu\times_Z\mu)\subset W$. By Lemma \ref{measure} we have $\mu\times_Z\mu(\Delta_X)=1$. Since \[\mu\times_Z\mu(\Delta_X)=\int_{y\in Z}\mu_y\times \mu_y(\Delta_X)\textrm{d}\nu=1,\] we have $\mu_y\times\mu_y(\Delta_X)=1$ for a.e.\ $y\in Z$. Then for a.e.\ $y\in Z$, there exists a point $c_y\in \pi^{-1}(y)$ such that $\mu_y=\delta_{c_y}$. Let $Y_0$ be the collection of $y\in Z$ such that $\mu_y$ is not equal to $\delta_x$ for any $x\in X$. Then $\nu(\cup_{i\in {\mathbb{Z}_+}}S^{-i}Y_0)=0$. Let $Z_0=Z\setminus \cup_{i\in {\mathbb{Z}_+}}S^{-i}Y_0$ and $X_0=\{c_y:y\in Z_0\}$. Then $\nu(Z_0)=1$. Now we show that $X_0$ is a measurable set. In fact, the map $y\mapsto\mu_y$ from $Z_0$ to $M(X)$ is measurable and $x\mapsto \delta_x$ is an embedding. Since $Z_0$ is a measurable set and these maps are injective, it follows from Souslin's theorem that $X_0$ is a measurable set, and it is clear that $\mu(X_0)=\mu(\pi^{-1}Z_0)=\nu(Z_0)=1$. By the same argument, $\pi: (X_0,\mu,T)\to (Z_0,\nu,S)$ is a measure-theoretic isomorphism.

(4) $\Rightarrow$ (5): If $(X,T)$ is not equicontinuous in the mean, then there are $x_k,x\in X$, $n_k\in {\mathbb{Z}_+}$, $k=1,2,\dots$, and $\varepsilon_0>0$ such that $\lim_{k\rightarrow\infty }x_k=x$ and for every $k\in {\mathbb{Z}_+}$ $$ \frac{1}{n_k}\sum_{i=0}^{n_k-1}d(T^ix_k,T^ix)\geq \varepsilon_0. $$ Let $\pi(x_k)=z_k$ and $\pi(x)=z$. We define $$\mu_k=\frac{1}{n_k}\sum_{i=0}^{n_k-1}\delta_{(T^ix_k,T^ix)}$$ and $$\nu_k=\frac{1}{n_k}\sum_{i=0}^{n_k-1}\delta_{(S^iz_k,S^iz)},$$ so that $$(\pi \times \pi)(\mu_k)=\nu_k.$$ By passing to a subsequence, there exist $\mu\in M(X\times X,T\times T)$ and $\nu\in M(Z\times Z,S\times S)$ with $\lim_{k\rightarrow\infty}\mu_k=\mu$, $\lim_{k\rightarrow\infty}\nu_k=\nu$ and $(\pi \times \pi)(\mu)=\nu$. We claim that $\mu(\text{supp}(\mu )\setminus \Delta_X)>0$. Indeed, $d(\cdot,\cdot)$ is a continuous function on $X\times X$, so we have $$ \int_{X\times X}d(x,y)\textrm{d}\mu_k\longrightarrow \int_{X\times X}d(x,y)\textrm{d}\mu $$ and $\int_{X\times X}d(x,y)\textrm{d}\mu_k=\frac{1}{n_k}\sum_{i=0}^{n_k-1}d(T^ix_k,T^ix)\geq \varepsilon_0$, which implies $\mu(\text{supp}(\mu)\setminus \Delta_X)>0$. There are open subsets $U$ and $V$ of $X$ with $U\cap V=\emptyset$ and $\mu(U\times V)>0$. Let $\mu'$ and $\nu'$ be the projection of $\mu$ and $\nu$ onto the first component of $X$ and $Z$ respectively. It is clear that $\mu'\in M(X,T)$ and $\nu'\in M(Z,S)$. It is easy to see $\text{supp}(\nu)\subset \overline{Orb((z,z),S\times S)}$. Then $\nu'(\overline{Orb(z,S)})=\nu(\overline{Orb(z,S)}\times Z)=1$, which implies $\text{supp}(\nu')\subset \overline{Orb(z,S)}$. Furthermore, $\nu'$ is the only invariant measure on $\overline{Orb(z,S)}$, since $\overline{Orb(z,S)}$ is uniquely ergodic. Since for every $f,g\in C(Z)$ we have $$\int_{Z\times Z}f(z_1)g(z_2)\textrm{d}\nu_k(z_1,z_2)\longrightarrow \int_{Z\times Z}f(z_1)g(z_2)\textrm{d}\nu(z_1,z_2)$$ and $$\int_{Z\times Z}f(z_1)g(z_2)\textrm{d}\nu_k(z_1,z_2)=\frac{1}{n_k}\sum_{i=0}^{n_k-1}f(S^iz_{k})g(S^iz)\longrightarrow \int_Z f(z)g(z)\textrm{d}\nu'(z),$$ it follows that $\nu(A\times B)=\nu'(A\cap B)$ for all Borel sets $A,B\subset Z$. So $\nu'(\pi(U)\cap \pi(V))=\nu(\pi(U)\times \pi(V))\geq \mu(U\times V)>0$. Obviously, $\pi(\mu')$ is an invariant measure on $\overline{Orb(z,S)}$, thus we have $\pi(\mu')=\nu'$.
As $\pi\colon (X,\mu',T) \to (Z,\nu',S)$ is a measure-theoretic isomorphism, we have $\nu'(\pi(U)\cap \pi(V))=\mu'(U\cap V)=0$, which is a contradiction. This shows that $(X,T)$ is equicontinuous in the mean.
\end{proof}
\section{Mean equicontinuity and Weyl mean equicontinuity}
Following \cite{DG16} and \cite{YL}, a dynamical system $(X,T)$ is called \emph{Banach mean equicontinuous or Weyl mean equicontinuous} if for every $\varepsilon>0$, there exists a $\delta>0$ such that \[ \limsup_{n-m\to\infty}\frac{1}{n-m}\sum_{i=m}^{n-1}d(T^ix,T^iy)<\varepsilon \] for all $x,y\in X$ with $d(x,y)<\delta$. It is clear that each Weyl mean equicontinuous system is mean equicontinuous. It is shown in~\cite{DG16} that if a minimal system is mean equicontinuous then it is Weyl mean equicontinuous. In this section we show that for a general dynamical system mean equicontinuity is equivalent to Weyl mean equicontinuity. That is, we have
\begin{thm}\label{thm:Weyl-mean-equ}
A dynamical system $(X,T)$ is mean equicontinuous if and only if it is Weyl mean equicontinuous.
\end{thm}
Before proving the theorem, we need the following lemma.
\begin{lem}
If a dynamical system $(X,T)$ is uniquely ergodic, then for any $f\in C(X, \mathbb{R})$ and $x\in X$, \[\lim_{n-m\to\infty}\frac{1}{n-m}\sum_{i=m}^{n-1} f(T^ix)=\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1} f(T^ix).\]
\end{lem}
\begin{proof}
Let $\mu$ be the unique invariant measure on $(X,T)$. Then for any $f\in C(X, \mathbb{R})$ and $x\in X$, \[ \lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1} f(T^ix)=\int f \textrm{d}\mu.\] If the conclusion does not hold, then there exist $f\in C(X, \mathbb{R})$, $x\in X$ and two sequences $\{n_k\}$ and $\{m_k\}$ with $n_k- m_k\to\infty$ such that \[\lim_{n_k-m_k\to\infty}\frac{1}{n_k-m_k}\sum_{i=m_k}^{n_k-1} f(T^ix) =c \neq \int f \textrm{d}\mu.\] As $M(X)$ is compact, passing to a subsequence if necessary we may assume that \[ \lim_{k\to\infty} \frac{1}{n_k-m_k}\sum_{i=m_k}^{n_k-1}\delta_{T^ix} =\nu.\] It is easy to see that $\nu$ is an invariant measure. As $(X,T)$ is uniquely ergodic, $\nu=\mu$. So \[\lim_{n_k-m_k\to\infty}\frac{1}{n_k-m_k}\sum_{i=m_k}^{n_k-1} f(T^ix) = \int f\textrm{d}\mu.\] This is a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:Weyl-mean-equ}]
Since $(X,T)$ is mean equicontinuous, so is $(X\times X,T\times T)$ by Lemma \ref{same}. Fix $(x,y)\in X\times X$. By Theorem \ref{transitive}, $(\overline{Orb((x,y),T\times T)},T\times T)$ is uniquely ergodic. Now applying the above lemma to the distance function $d(\cdot,\cdot)$ and $(x,y)$, we get \[ \lim_{n-m\to\infty}\frac{1}{n-m}\sum_{i=m}^{n-1}d(T^ix,T^iy)= \lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}d(T^ix,T^iy). \] Then the result follows from the definition.
\end{proof}
We end this section with the following conclusion.
\begin{thm}
Let $(X,T)$ be a mean equicontinuous system. Then for every $\varepsilon >0$ there are $\delta>0$ and $N>0$ such that whenever $d(x,y)<\delta$, we have $$\frac{1}{n}\sum_{i=j}^{n+j-1} d(T^ix,T^iy)<\varepsilon$$ for all $n\geq N$ and $j\geq 0$.
\end{thm}
\begin{proof}
Assume that there are $x_k,y_k,z\in X$, $n_k,j_k\in {\mathbb{Z}_+}$, $k=1,2,\dots$, and $\varepsilon_0>0$ such that $\lim_{k\rightarrow\infty }x_k=z=\lim_{k\rightarrow\infty }y_k$ and for every $k\in {\mathbb{Z}_+}$ $$ \frac{1}{n_k}\sum_{i=j_k}^{n_k+j_k-1}d(T^ix_k,T^iy_k)\geq \varepsilon_0. $$ Let $\mu_k=\frac{1}{n_k}\sum_{i=j_k}^{n_k+j_k-1}\delta_{(T^ix_k,T^iy_k)}$; then $\mu_k\in M(X\times X)$.
We may assume $\mu_k\rightarrow \mu$ (otherwise we pass to a subsequence), where $\mu\in M(X\times X,T\times T)$. We claim that $\mu(\text{supp}(\mu )\setminus \Delta_X)>0$. In fact, since $d(\cdot,\cdot)$ is a continuous function on $X\times X$, we have $$ \int_{X\times X}d(x,y)\textrm{d}\mu_k\longrightarrow \int_{X\times X}d(x,y)\textrm{d}\mu $$ and $$ \int_{X\times X}d(x,y)\textrm{d}\mu_k=\frac{1}{n_k}\sum_{i=j_k}^{n_k+j_k-1}d(T^ix_k,T^iy_k)\geq \varepsilon_0, $$ which implies $$\mu(\text{supp}(\mu)\setminus \Delta_X)>0.$$ By the ergodic decomposition theorem, we have $\nu(\text{supp}(\mu)\setminus \Delta_X)>0$ for some ergodic measure $\nu$ on $X\times X $, thus there exists a minimal point $(u,v)$ in $\text{supp}(\mu)\setminus \Delta_X$ since $\text{supp}(\nu)$ is a minimal set by Lemma \ref{minimal-point}. Let $B_l=\{(x,y)\in X\times X:d((x,y),(u,v))<\frac{1}{l} \}$. Then we have $$\mu(B_l)>0 \ \text{and}\ \mu_k(B_l)=\frac{1}{n_k}\#(\{j_k\leq i \leq n_k+j_k-1:(T^ix_k,T^iy_k)\in B_l\}).$$ Thus for any $l\in {\mathbb{Z}_+}$ there exist infinitely many $k\in {\mathbb{Z}_+}$ with $j_k\leq m_k \leq n_k+j_k-1$ such that $(T^{m_k}x_k,T^{m_k}y_k)\in B_l$, since $0<\mu(B_l)\leq \liminf_{k\rightarrow\infty}\mu_k(B_l)$. Put $$\delta=d(\overline{Orb((z,z),T\times T)},\overline{Orb((u,v),T\times T)});$$ then $\delta>0$, since $\overline{Orb((u,v),T\times T)}\cap \Delta_X=\emptyset$. As $(X,T)$ is mean equicontinuous, so is $(X\times X,T\times T)$ by Lemma \ref{same}. Thus, for $\frac{1}{4}\delta$, there is $\eta>0$ such that if $d((x,y),(x',y'))<\eta$ then \[\limsup_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}d((T\times T)^i(x,y),(T\times T)^i(x',y'))<\frac{\delta}{4}.\] There are infinitely many $k\in {\mathbb{Z}_+}$ with $d((x_k,y_k),(z,z))<\eta$ and $d((T^{m_k}x_k,T^{m_k}y_k),(u,v))<\eta$, and then \[\limsup_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}d((T\times T)^i(x_k,y_k),(T\times T)^i(z,z))<\frac{\delta}{4}\] and \[\limsup_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}d((T\times T)^i(T^{m_k}x_{k},T^{m_k}y_{k}),(T\times T)^i(u,v))<\frac{\delta}{4},\] which implies \[\limsup_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}d((T\times T)^i(T^{m_k}z,T^{m_k}z),(T\times T)^i(u,v))<\frac{\delta}{2}.\] This is a contradiction, which proves the theorem. \end{proof} \section{Mean equicontinuous relation} It is well known that the equicontinuous structure relation is the smallest closed invariant equivalence relation containing the regionally proximal relation. In \cite{YL} the authors showed that each topological dynamical system admits a maximal mean equicontinuous factor. Inspired by the above ideas, we now define a new notion called \emph{a pair sensitive in the mean} and introduce the mean equicontinuous structure relation. We show that the maximal mean equicontinuous factor is induced by the smallest invariant closed equivalence relation containing the relation of sensitivity in the mean. \begin{defn} Let $(X,T)$ be a dynamical system. We say that $(x,y)$ is \emph{a pair sensitive in the mean} if either $x=y$ or for any $\tau>0$, there exists $c=c(\tau)>0$ such that for every $\varepsilon>0$, there exist $x',y'\in X$ and $n\in \mathbb{N}$ such that $d(x',y')<\varepsilon$ and $$ \frac{1}{n}\#(\{0\leq i\leq n-1:d(T^ix',x)<\tau,d(T^iy',y)<\tau\})>c.$$ Let $Q_{me}(X,T)$ be the set of all pairs sensitive in the mean in $(X,T)$, and we call it \emph{the relation of sensitivity in the mean}. \end{defn} Clearly, if $T$ is a homeomorphism, then $Q_{me}(X,T)\subset Q(X,T)$. 
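To illustrate the definition with a simple example (this is only a sanity check and is not used in what follows), take $T=\mathrm{id}_X$ on a compact metric space $X$ with more than one point. If $x\neq y$ and $\tau<\frac{1}{3}d(x,y)$, then for any $x',y'\in X$ and any $n\in\mathbb{N}$ the condition
$$
\frac{1}{n}\#(\{0\leq i\leq n-1:d(T^ix',x)<\tau,\ d(T^iy',y)<\tau\})>0
$$
forces $d(x',x)<\tau$ and $d(y',y)<\tau$, hence $d(x',y')\geq d(x,y)-2\tau>\tau$. Thus no pair $x',y'$ with $d(x',y')<\varepsilon\leq\tau$ can witness the definition, so $Q_{me}(X,\mathrm{id}_X)=\Delta_X$; this is consistent with Lemma~\ref{ME-Q} below, as the identity map is clearly mean equicontinuous. 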
Let $S_{me}(X, T)$ be the smallest closed $T\times T$ invariant equivalence relation such that $X/S_{me}(X, T)$ is a mean equicontinuous system. We will show that $S_{me}(X, T)$ is the smallest closed $T\times T$ invariant equivalence relation that contains the relation of sensitivity in the mean. This will be done through the following lemmas. First we observe that \begin{lem} \label{NOT ME} Let $(X,T)$ be a dynamical system. Then $(X,T)$ is not mean equicontinuous if and only if there exist $x, x_{k}\in X$, $n_{k}\in \mathbb{N}$ and $\varepsilon_{0}>0$ such that $x_{k}\rightarrow x$ and $\frac{1}{n_{k}}\sum_{i=0}^{n_{k}-1}d(T^{i}x,T^{i}x_{k})\geq \varepsilon_{0}$ for every $k\in\mathbb{N}$. \end{lem} It is easy to check: \begin{lem} Let $\pi:(X,T)\rightarrow (Y,S)$ be a factor map. If $(x,y)\in Q_{me}(X, T)$, then we have $(\pi(x),\pi(y))\in Q_{me}(Y,S)$. \end{lem} \begin{lem} \label{ME-Q} Let $(X,T)$ be a dynamical system. Then $(X,T)$ is mean equicontinuous if and only if $Q_{me}(X, T)=\Delta_X.$ \end{lem} \begin{proof} If $(X,T)$ is mean equicontinuous, it is clear that $Q_{me}(X, T)=\Delta_X$. Conversely, assume that $Q_{me}(X, T)=\Delta_X$. Suppose, for contradiction, that $(X,T)$ is not mean equicontinuous. By Lemma~\ref{NOT ME}, there are $x_k,x \in X,n_k\in{\mathbb{Z}_+}$ and $\varepsilon_0>0$ such that $\lim_{k\rightarrow\infty}d(x_k,x)=0$ and for every $k\in \mathbb{N}$, we have $$ \frac{1}{n_k}\sum_{i=0}^{n_k-1}d(T^ix_k,T^ix)\geq\varepsilon_0. $$ Let $\mu_k=\frac{1}{n_k}\sum_{i=0}^{n_k-1}\delta_{(T^ix_k,T^ix)}$; then $\mu_k\in M(X\times X)$. We may assume $\mu_k\rightarrow \mu$ (otherwise we pass to a subsequence), where $\mu\in M(X\times X,T\times T)$. From the proof of Theorem~\ref{general}, it follows that $\mu(\text{supp}(\mu)\setminus \Delta_X)>0$. Let $(y,z)\in \text{supp}(\mu)\setminus \Delta_X$. Fix $\tau>0$ and choose $l\in\mathbb{N}$ such that $\frac{1}{l}<\tau$. Let $B_l=\{(u,v)\in X\times X:d((y,z),(u,v))<\frac{1}{l} \}$, where $d((y,z),(u,v))=d(y,u)+d(z,v)$. Then $$ \mu(B_l)>0 \ \text{and}\ \mu_k(B_l)=\frac{1}{n_k}\#(\{0\leq i \leq n_k-1:d(T^ix_k,y)+d(T^ix,z)<\frac{1}{l}\}). $$ There exist infinitely many $k\in {\mathbb{Z}_+}$ such that $\mu_k(B_l)>\frac{\mu(B_l)}{2}>0$ since $0<\mu(B_l)\leq \liminf_{k\rightarrow\infty}\mu_k(B_l)$. For $\varepsilon>0$, we can choose $k\in {\mathbb{Z}_+}$ satisfying the above inequality with $d(x_k,x)<\varepsilon$, hence $$ \frac{1}{n_k}\#(\{0\leq i \leq n_k-1:d(T^ix_k,y)<\tau,d(T^ix,z)<\tau\})>\frac{\mu(B_l)}{2}. $$ It follows that $(y,z)\in Q_{me}(X, T)$. This is a contradiction, which proves the lemma. \end{proof} \begin{lem}\label{mean equicontinuous factor} Let $(X,T)$ be a dynamical system and $\mathcal{A}(Q_{me}(X, T))$ be the smallest closed $T\times T$ invariant equivalence relation containing $Q_{me}(X, T)$. Then $X/\mathcal{A}(Q_{me}(X, T))$ is the maximal mean equicontinuous factor. \end{lem} \begin{proof} Let $Y=X/\mathcal{A}(Q_{me}(X, T))$ and $\pi:X\rightarrow Y$ be the factor map. As $\pi\colon X\to Y$ is continuous and surjective, we can choose a metric on $X$ (also denoted by $d$) such that $d(x,y)\geq d(\pi(x),\pi(y))$ for all $x,y\in X$. Assume that $(Y,T)$ is not mean equicontinuous. By Lemma~\ref{ME-Q}, there exist $x,y\in Y$ with $x\neq y$ and $(x,y)\in Q_{me}(Y,T)$. Let $\tau<\frac{1}{4}d(x,y)$. For $k\in \mathbb{N}$, there are $x_k,y_k\in Y$ and $n_k\in {\mathbb{Z}_+}$ with $d(x_k,y_k)<\frac{1}{k}$ such that $$ \frac{1}{n_k}\#(\{0\leq i \leq n_k-1:d(T^ix_k,x)<\tau,d(T^iy_k,y)<\tau\})>c $$ for some $c>0$ independent of $k$. 
For every $k\in \mathbb{N}$, choose $u_k,v_k\in X$ such that $\pi(u_k)=x_k,\pi(v_k)=y_k$. Without loss of generality, we can assume that $\lim_{k\rightarrow\infty}x_k=z=\lim_{k\rightarrow\infty}y_k$ and $\lim_{k\rightarrow\infty}u_k=w=\lim_{k\rightarrow\infty}v_k$; then $\pi(w)=z$ and $$\frac{1}{n_k}\sum_{i=0}^{n_k-1}d(T^iu_k,T^iv_k)\geq \frac{1}{n_k}\sum_{i=0}^{n_k-1}d(T^ix_k,T^iy_k)\geq c\cdot(d(x,y)-2\tau)>\frac{c}{2}\cdot d(x,y)>0.$$ Let $\mu_k=\frac{1}{n_k}\sum_{i=0}^{n_k-1}\delta_{(T^iu_k,T^iv_k)}$ and $\nu_k=\frac{1}{n_k}\sum_{i=0}^{n_k-1}\delta_{(T^ix_k,T^iy_k)}$. Without loss of generality, assume that $\mu_k\rightarrow \mu$ and $\nu_k\rightarrow \nu$, where $\mu\in M(X\times X,T\times T)$ and $\nu\in M(Y\times Y,T\times T)$. From the proof of Theorem~\ref{general}, we have $\mu(\text{supp}(\mu)\setminus \Delta_X)>0$ and $\nu(\text{supp}(\nu)\setminus \Delta_Y)>0$. Moreover $\pi\times \pi(\mu)=\nu$ since $\pi\times \pi(\mu_k)=\nu_k$. If $(a,b)\in \text{supp}(\mu) $, then, arguing as in the proof of Lemma~\ref{ME-Q}, we have $(a,b)\in Q_{me}(X,T)$, which implies $\pi(a)=\pi(b)$; hence $\pi\times \pi (\text{supp}(\mu))\subset\Delta_Y$. Therefore $\nu(\Delta_Y)=\pi\times \pi(\mu)(\Delta_Y)=\mu((\pi\times \pi)^{-1}\Delta_Y)\geq \mu(\text{supp}(\mu))=1$, so $\nu(\Delta_Y)=1$. This contradicts $\nu(\text{supp}(\nu)\setminus \Delta_Y)>0$ and proves the lemma. \end{proof} By Lemma \ref{ME-Q} and Lemma \ref{mean equicontinuous factor}, we have the main result in this section. \begin{thm} \label{MAX ME} Let $(X,T)$ be a dynamical system. Then $S_{me}(X, T)$ is the smallest closed $T\times T$ invariant equivalence relation containing $Q_{me}(X, T)$. \end{thm} \noindent {\bf Acknowledgments.} The authors would like to thank Jian Li and Xiangdong Ye for bringing the questions to our attention and for useful discussions during this research. We also thank Jie Li for a careful reading which helped improve the writing of the paper. Finally, the authors thank the referee for his/her careful reading. The authors were supported by NNSF of China (11431012). \end{document}
\begin{document} \title{The deformation space of non-orientable hyperbolic 3-manifolds} \begin{abstract} We consider non-orientable hyperbolic 3-manifolds $M^3$ of finite volume. When $M^3$ has an ideal triangulation $\Delta$, we compute the deformation space of the pair $(M^3, \Delta)$ (its Neumann--Zagier parameter space). We also determine the variety of representations of $\pi_1(M^3)$ in $\mathrm{Isom}(\mathbb{H}^3)$ in a neighborhood of the holonomy. As a consequence, when some ends are non-orientable, there are deformations in the variety of representations that cannot be realized as deformations of the pair $(M^3, \Delta)$. We also discuss the metric completion of these structures and we illustrate the results on the Gieseking manifold. \end{abstract} \section{Introduction} Let $M^3$ be a complete, non-compact, hyperbolic three-manifold of finite volume. Assume first that $M^3$ is orientable. Assume also that $M^3$ has a geometric ideal triangulation $\Delta$ (see \cite{NeumannZagier} for the definition). Following Thurston's construction for the figure eight knot exterior in \cite{ThurstonNotes}, Neumann and Zagier defined in \cite{NeumannZagier} a deformation space of the pair $(M^3, \Delta)$, by considering the set of parameters of the ideal simplices of $\Delta$ subject to compatibility equations. We denote the Neumann--Zagier parameter space by $\mathrm{Def}(M^3, \Delta)$. It is proved in \cite{NeumannZagier} that it is homeomorphic to an open subset of $\mathbb{C}^l$, where $l$ is the number of ends of $M^3$. Another approach to deformations is based on $\mathcal{R}(\pi_1(M^3) , \mathrm{Isom}(\mathbb{H}^3))$, the variety of conjugacy classes of representations of $\pi_1(M^3)$ in $\mathrm{Isom}(\mathbb{H}^3)$. It is proved for instance by Kapovich in \cite{KapovichBook} that a neighborhood of the holonomy of $M^3$ is bi-analytic to an open subset of $\mathbb{C}^l$. Both approaches to deformations can be used to prove the hyperbolic Dehn filling theorem (even though it is still an open question whether every orientable $M^3$ admits a geometric ideal triangulation). Among other things, one has to take into account that $\mathrm{Def}(M^3, \Delta)$ is a $2^l$-to-$1$ branched covering of the neighborhood in $\mathcal{R}(\pi_1(M^3) , \mathrm{Isom}(\mathbb{H}^3))$. When $M^3$ is orientable, both approaches yield the same deformation space. In this paper we investigate the non-orientable setting, that is, $M^3$ is a connected, non-orientable, hyperbolic $3$-manifold of finite volume. When it has an ideal triangulation $\Delta$, we define a deformation space $\mathrm{Def}(M^3, \Delta)$ of the pair $(M^3,\Delta)$ \`a la Neumann--Zagier. Here is the main result of the paper (for simplicity, we assume that $M^3$ has a single end, which is non-orientable): \begin{theorem} \label{Thm:main_thm_one_cusp} Let $M^3$ be a complete non-orientable hyperbolic $3$-manifold of finite volume with a single end, which is non-orientable. \begin{enumerate}[(a)] \item If $M^3$ admits a geometric ideal triangulation $\Delta$, then $\mathrm{Def}(M^3, \Delta)\cong (-1,1)$, where the parameters $\pm t\in (-1,1)$ correspond to the same structure. \item A neighborhood of the holonomy in $\mathcal{R}(\pi_1(M^3),\mathrm{Isom}(\mathbb{H}^3))$ is homeomorphic to an interval $(-1,1)$. 
\end{enumerate} Furthermore, the holonomy map $\mathrm{Def}(M^3, \Delta)\to \mathcal{R}(\pi_1(M^3),\mathrm{Isom}(\mathbb{H}^3))$ folds the interval $(-1,1)$ at 0 and its image is the half-open interval $[0,1)$, where $0$ corresponds to the complete structure. \end{theorem} The version with \emph{several cusps} of this theorem is stated in Theorem~\ref{Thm:main_thm_several_cusps}. For $M^3$ as in the theorem, structures in the subinterval $[0,1)\subset (-1,1)$ in the variety of representations are realized by $\mathrm{Def}(M^3, \Delta)$, but structures in $(-1,0)$ are not. This corresponds to two different kinds of representations of the Klein bottle, that we denote of type I when realized, and II when not. Those are described in Section~\ref{S:RepKl}. Deformations of the complete structure are non-complete, and therefore for a deformation of the holonomy the hyperbolic structure is not unique. Deformations of type I can be realized by ideal triangulations, hence there is a natural choice of structure and we prove in Theorem~\ref{Theorem:completion} that its metric completion consist in adding a singular geodesic, so that it is the core of a solid Klein bottle and it has a singularity of cone angle in a neighborhood of zero. For deformations of type II, we prove that there is a natural choice of structure (radial), and the metric completion consists in adding a singular interval, also in Theorem~\ref{Theorem:completion}. This singular interval is the soul of a twisted disc orbi-bundle over an interval with mirror boundary. Topologically, a neighborhood of this interval is the disc sum of two cones on a projective plane. Metrically, it is a conifold, with cone angle at the interior of the interval in a neighborhood of zero. The paper is organized as follows. In Section~\ref{S:CombinatorialDef} we describe $\mathrm{Def}(M^3, \Delta)$ and in Section~\ref{S:VarReps}, $\mathcal{R}(\pi_1(M^3),\mathrm{Isom}(\mathbb{H}^3))$. Section~\ref{S:RepKl} is devoted to representations of the Klein bottle. Metric completions are described in Section~\ref{S:Completion}, and finally in Section~\ref{S:Gieseking} we describe in detail the deformation space(s) of the Gieseking manifold. \paragraph{Acknowledgement} We thank the anonymous referee for useful suggestions. The first author is supported by doctoral grant BES-2016-079278. Both authors are partially supported by the Spanish State Research Agency through grant PGC2018-095998-B-I00, as well as the Severo Ochoa and María de Maeztu Program for Centers and Units of Excellence in R\&D (CEX2020-001084-M). \section{Deformation space from ideal triangulations} \label{S:CombinatorialDef} Before discussing non-orientable manifolds, we recall first the orientable case. The first example was constructed by Thurston in his notes \cite[Chapter 4]{ThurstonNotes} for the figure eight knot exterior, and the general case was constructed by Neumann and Zagier in \cite{NeumannZagier}. We refer the reader to these references for the upcoming exposition. From the point of view of a triangulation, the deformation of the hyperbolic structure on a manifold with a given \emph{geometric ideal} triangulation is the space of parameters of ideal tetrahedra, subject to compatibility equations. A \emph{geometric ideal} tetrahedron is a geodesic tetrahedron of $\mathbb{H}^3$ with all of its vertices in the ideal sphere $\partial_\infty\mathbb{H}^3$. 
We say that a hyperbolic $3$-manifold \emph{admits a geometric ideal triangulation} if it is the union of such tetrahedra, along the geodesic faces. Though it has been established in many cases, it is still an open problem to decide whether every orientable hyperbolic three-manifold of finite volume admits a geometric ideal triangulation. Given an ideal tetrahedron in $\mathbb{H}^3$, up to isometry we may assume that its ideal vertices in $\partial_\infty\mathbb H^3\cong\mathbb C\cup\{\infty \}$ are $0$, $1$, $\infty$ and $z\in \mathbb{C}$. The idea of Thurston is to equip the (unoriented) edge between $0$ and $\infty$ with the complex number $z$, called \textit{edge invariant}. The edge invariant determines the isometry class of the tetrahedron, and for different edges the corresponding invariants satisfy some relations, called \textit{tetrahedron relations}: \begin{itemize} \item Opposite edges have the same invariant. \item Given $3$ edges with a common end-point and invariants $z_1, z_2, z_3$, indexed following the right hand rule towards the common ideal vertex, they are related to $z_1$ by $z_2=\frac{1}{1-z_1}$ and $z_3=\frac{z_1-1}{z_1}$. \end{itemize} Let $M^3$ be a possibly non-orientable complete hyperbolic $3$-manifold of finite volume, which admits a geometric ideal triangulation, $\Delta= \{A_1, \cdots, A_n\}$ . As we have stated before, up to (oriented) isometry, the hyperbolic structure of each tetrahedron can be determined by a single edge invariant, thus the usual parameterization of the triangulation goes as follows: we fix an edge $e_{i}$ in each tetrahedron $A_i$, and consider its edge invariant, $z_i$. Hence, the hyperbolic structure of $M^3$ can be parametrized by $n$ parameters (one for each tetrahedron) and we will denote the parameters of the complete triangulation $\{z_1^0,\cdots, z_n^0\}$. The \textit{deformation space of $M^3$ with respect to $\Delta$}, $\mathrm{Def}(M^3, \Delta)$, is defined as the set of parameters $\{z_1,\cdots, z_n\}$ in a small enough neighborhood of the complete structure for which the gluing bestows a hyperbolic structure on $M^3$. However, we find that the equations defining the deformation space are easier to work with if we use $3n$ parameters (one for each edge after taking into account the duplicity in opposite edges) and ask them to satisfy the second tetrahedron relation too. When $M^3$ is orientable, in order for the gluing to be geometric it is necessary and sufficient that around each edge cycle $[e]=\{e_{i_1,j_1}, \cdots, e_{i_n, j_n}\}$ the following two compatibility conditions are satisfied: \begin{gather} \label{equation_edge_orientable} \prod_{l=1}^n z(e_{i_l,j_l})=1, \\ \label{equation_edge_orientable2} \sum_{l=1}^n \arg(z(e_{i_l,j_l}))=2 \pi. \end{gather} The geometric meaning of these equations can be seen as follows: if we try to realize in $\mathbb{H}^3$ the tetrahedra around the edge cycle $[e]$, Equation~\eqref{equation_edge_orientable} means that the triangulation must `close up' and Equation~\eqref{equation_edge_orientable2}, that the angle around $[e]$ must be precisely $2\pi$ (instead of a multiple). The parameters of the complete hyperbolic structure are denoted by $\{z^0(e_{1,1}), \cdots, z^0(e_{n,3})\}$. In a small enough neighborhood of $\{z^0(e_{1,1}), \cdots, z^0(e_{n,3})\}$, fulfillment of \eqref{equation_edge_orientable} implies \eqref{equation_edge_orientable2}. 
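As a quick consistency check of the tetrahedron relations recalled above (this computation is classical and is only included for the reader's convenience), if one edge invariant is $z_1=z$ with $\operatorname{Im}(z)>0$, then the other two edges meeting it at an ideal vertex have invariants
$$
z_2=\frac{1}{1-z},\qquad z_3=\frac{z-1}{z},\qquad\text{so that}\qquad z_1z_2z_3=\frac{z}{1-z}\cdot\frac{z-1}{z}=-1
$$
and $\arg z_1+\arg z_2+\arg z_3=\pi$, the sum of the three dihedral angles at an ideal vertex. Moreover $z\mapsto\frac{1}{1-z}$ has order three, so the labelling does not depend on which of the three edges is chosen first. For instance, the regular ideal tetrahedron corresponds to $z=\frac{1}{1-z}$, that is $z^2-z+1=0$ and $z=e^{i\pi/3}$; classically, this is the shape of the tetrahedra of the complete structure on the figure eight knot exterior and on the Gieseking manifold of Section~\ref{S:Gieseking}. 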
We end the overview of the orientable case with the theorem we want to extend to the non-orientable case: \begin{theorem}[Neumann--Zagier \cite{NeumannZagier}] \label{thm:neumannzagier-thurston} Let $M^3$ be connected, oriented, hyperbolic, of finite volume with $l$ cusps. Then $\mathrm{Def}(M^3, \Delta)$ is bi-holomorphic to an open set of $\mathbb{C}^l$. \end{theorem} When we deal with non-orientable manifolds, again the problem of the gluing being geometric lives within a neighborhood of the edges. The compatibility equations in this case carry the same geometric meaning as in Equations~\eqref{equation_edge_orientable} and \eqref{equation_edge_orientable2}, but accounting for the possible change of orientation of the tetrahedra. \begin{proposition} Let $M^3$ be a non-orientable manifold triangulated by a finite number of ideal tetrahedra $A_i$. The triangulation bestows a hyperbolic structure around the edge cycle $[e]=\{e_{i_1,j_1}, \cdots, e_{i_n,j_n}\}$ if and only if the following compatibility equations are satisfied: \begin{gather} \label{compeq1} \prod_{l=1}^n \frac{z(e_{i_l,j_l})^{\epsilon_l}}{\overline{z(e_{i_l,j_l})}^{1-\epsilon_l}}=1, \\ \label{compeq2} \sum_{l=1}^n \arg(z(e_{i_l,j_l}))=2 \pi, \end{gather} where $z(e_{i_l,j_l})$ is the edge invariant of $e_{i_l,j_l}$, and $\epsilon_l=0,1$ in such a way that, in the gluing around the edge cycle $[e]$, a coherent orientation of the tetrahedra is obtained by gluing a copy of $A_{i_l}$ with its orientation reversed if $\epsilon_l=0$ (or kept as given if $\epsilon_l=1$), and with the initial condition that the orientation of the tetrahedron $A_{i_1}$ is kept as given. \end{proposition} \begin{proof} When we follow a cycle of side identifications around an edge, we can always reorient the tetrahedra (maybe more than once) so that the gluing is done by orientation-preserving isometries. The compatibility equations for the orientable case can then be applied and, hence, for the neighborhood of the edge cycle to inherit a hyperbolic structure, \eqref{equation_edge_orientable} must be satisfied, with the corresponding edge invariants. Now, let us consider an edge $e_{i,j}\in A_i$ with parameter $z(e_{i,j})$. To see how the edge invariant changes under an orientation-reversing isometry, we can assume that $A_{i}$ has vertices $0, 1, z(e_{i,j})$ and $\infty$ in the upper half-space model, and consider the isometry $c$, the Poincar\'e extension of the complex conjugation in $\partial_{\infty}\mathbb H^3\cong\mathbb{C}\cup\{\infty\}$. Then, the edge invariant of $c(e_{i,j})\in c(A_i)$ is ${1}/{\overline{z(e_{i,j})}}$. Thus, the proposition follows with ease after changing the orientation of some tetrahedra. \end{proof} \begin{Definition} \label{dfn:DefSpace} Let $M^3$ be a connected, complete, non-orientable, hyperbolic 3-manifold of finite volume. Let $\Delta$ be an ideal triangulation of $M^3$. The \emph{deformation space of $M^3$ related to the triangulation $\Delta$} is \[ \begin{array}{rl} \mathrm{Def}(M^3, \Delta)= & \{(z_{1,1}, \cdots, z_{n,3})\in U \textrm{ satisfying the compatibility} \\ & \textrm{ equations \eqref{compeq1} and \eqref{compeq2} and the tetrahedron relations} \}, \end{array} \] where $U\subset\mathbb{C}^{3n}$ is a small enough neighborhood of the parameters $(z^0_{i,j})$ of the complete structure. \end{Definition} Let $M^3_+$ be the orientation covering of $M^3$. The ideal triangulation on $M^3$, $\Delta$, can be lifted to an ideal triangulation $\Delta_+$ on $M^3_+$. 
There is an orientation reversing homeomorphism, $\iota$, acting on $M^3_+$ such that $M^3=M^3_+/\iota$ and $\iota^2=\mathrm{Id}$. The triangulation on $M^3_+$ is constructed in the usual way: for every tetrahedron $A_i$ we take another tetrahedron with the opposite orientation, $\iota(A_i)$, and glue them so that the orientation is coherent. For every edge, $e_{i,j} \in A_i$, let $z(e_{i,j})$ or $z_{i,j}$ denote its edge invariant. Analogously, $w(\iota(e_{i,j}))$ or $w_{i,j}$ will denote the edge invariant of $\iota(e_{i,j}) \in \iota(A_i)$. \begin{remark} \label{rmk_non-orientable_equation}The compatibility equations \eqref{compeq1} and \eqref{compeq2} around $[e]\in M^3$ are precisely the (orientable) compatibility equations in any lift of $[e]$ to the orientation covering. \end{remark} The orientation reversing homeomorphism acts on $\mathrm{Def}(M^3_+, \Delta_+)$ by pulling-back (equivalently, pushing-forward) the associated hyperbolic metric on each tetrahedron. Combinatorially, the action is described in the following lemma: \begin{lemma} \label{lemma_iotaacts} Let $M^3=M^3_+/\iota$, where $\iota$ is an orientation reversing homeomorphism. Let $M^3$ admit an ideal triangulation $\Delta$. Then, $\iota$ acts on $\mathrm{Def}(M^3_+, \Delta_+)$ as \begin{equation} \label{iotaacts} \iota_*((z_{i,j},w_{i,j}))=(\frac{1}{\overline{w_{i,j}}}, \frac{1}{\overline{z_{i,j}}}). \end{equation} \end{lemma} \begin{proof} The proof follows easily from the fact that $\iota$ permutes the edges and, for $e_{i,j}\in A_i$ with invariant $z(e_{i,j})$, the edge invariant of $c(e_{i,j}) \in c(A_i)$ is $\frac{1}{\overline{z(e_{i,j})}}$, where $c$ is the Poincar\'e extension of the complex conjugation. \end{proof} \begin{remark} Metrics on tetrahedra are considered up to isotopy. \end{remark} \begin{corollary} \label{coro:isomorphism_def_orientation_cover} The map $(z_{i,j}) \in \mathrm{Def}(M^3,\Delta) \longmapsto (z_{i,j}, 1/\overline{z_{i,j}})\in \mathrm{Def}(M^3_+, \Delta_+)^\iota$ is a real analytic isomorphism. \end{corollary} \begin{proof} It follows from Remark~\ref{rmk_non-orientable_equation} and Lemma~\ref{lemma_iotaacts}. \end{proof} Our goal is to use Corollary~\ref{coro:isomorphism_def_orientation_cover} and Theorem~\ref{thm:neumannzagier-thurston} in order to identify the deformation space of $M^3$ with the fixed points under an action on $\mathbb{C}^k$. Let us suppose for the time being that $M^3$ has only one cusp which is non-orientable. The section of this cusp must be a Klein bottle. In order to define the bi-holomorphism through generalized Dehn filling coefficients we must first fix a longitude-meridian pair in the peripheral torus in the orientation covering $M^3_+$. As we will see, there is a canonical choice. Afterwards, following Thurston, we will compute the derivative of the holonomy, $\mathrm{hol'}$, and translate the action of $\iota$ over there and finally, to the generalized Dehn filling coefficients. \noindent \textbf{Fixing a longitude-meridian pair.} Let $K^2$ be Klein bottle, its fundamental group admits a presentation $$\pi_1(K^2)=\langle a, b | aba^{-1}=b^{-1}\rangle $$ The elements $a^2, b$ in the orientation covering $T^2$ are generators of $\pi_1(T^2)$ and are represented by the unique homotopy classes of loops in the orientation covering that are invariant by the deck transformation (as unoriented curves). From now on, we will choose as longitude-meridian pair the elements: \begin{align*} l&:=a^2, \\ m&:=b. 
\end{align*} \begin{Definition} \label{def:distiguished} The previous generators of $\pi_1(T^2)$ are called \emph{distinguished} elements. \end{Definition} \begin{lemma} Let $[\alpha] \in \pi_1(T)$, let $\iota$ be the involution in the orientation covering $M^3_+$, that is, $M^3\cong M^3_+/\iota$. We also denote by $\iota$ the restriction of $\iota$ to the peripheral torus $T$. If $$\mathrm{hol}'(\alpha)=\prod_{r\in I}z(e_{i_r,j_r})^{\epsilon_r} \prod_{s\in J} w(\iota(e_{i_s,j_s}))^{\epsilon_s},$$ where $\epsilon_r, \epsilon_s \in \{\pm 1\} $, then $$\mathrm{hol'}(\iota(\alpha))=\prod_{r \in I}w(\iota(e_{i_r,j_r}))^{-\epsilon_r} \prod_{s\in J} z(e_{i_s,j_s})^{-\epsilon_s}.$$ \end{lemma} When we compute the derivative of the holonomy of an element, $\mathrm{hol}'(\gamma)$, we assume that $\mathrm{hol}(\gamma)$ fixes $\infty$. \begin{proof} We use Thurston's method for computing the holonomy through the developing of triangles in $\mathbb{C}$ (see \cite{ThurstonNotes}). Thus, the factor that each piece of path adds to the derivative of the holonomy changes as in Figure~\ref{fig:action} under the action of $\iota$. \end{proof} \begin{figure} \caption{Change under the action of $\iota$.} \label{fig:action} \end{figure} \begin{proposition} \label{propiotaaccts2} For the chosen longitude-meridian pair, the action of $\iota$ on $\mathrm{Im}(\mathrm{hol'})\subset \mathbb{C}^2$ is \begin{equation} \label{iotaacts2} \iota_*(L,M)=(\overline{L}, \overline{M}^{-1}), \end{equation} where $L=\mathrm{hol'}(l)$, $M=\mathrm{hol'}(m)$. \end{proposition} \begin{proof} The action of $\iota$ on the longitude-meridian pair is $\iota_*(l)=l$, $\iota_*(m)=m^{-1}$. Hence, the previous lemma implies that the derivative holonomy of the longitude and the meridian has the following features: \begin{gather} \mathrm{hol'}(m)=\prod_{r\in I}z(e_{i_r,j_r})^{\epsilon_r} \prod_{s\in J} w(\iota(e_{i_s,j_s}))^{\epsilon_s}=\prod_{r \in I}w(\iota(e_{i_r,j_r}))^{\epsilon_r} \prod_{s\in J} z(e_{i_s,j_s})^{\epsilon_s}, \\ \mathrm{hol'}(l)=\prod_{r\in I } (z(e_{i_r,j_r}) w(\iota(e_{i_r,j_r}))^{-1})^{\epsilon_r}. \end{gather} \end{proof} \begin{remark} \label{rmk:hol_lm} Following the notation of Proposition~\ref{propiotaaccts2}, $(L,M) \in (\mathbb{C}^2)^\iota$ if and only if $L\in \mathbb{R}$, $|M|=1$. \end{remark} Let us denote by $u:=\log \mathrm{hol'}(l)$, $v:=\log \mathrm{hol'}(m)$, the \emph{generalized Dehn coefficients} are the solutions in $\mathbb{R}^2 \cup \{\infty \}$ to \emph{Thurston's equation} \begin{equation} \label{dehn_equation} pu+qv=2\pi i. \end{equation} Indeed, Neumann-Zagier Theorem~\ref{thm:neumannzagier-thurston} (see also~\cite{ThurstonNotes}) states that, for $M^3$ orientable, the map $(z_{i,j}) \in \mathrm{Def}(M^3, \Delta) \mapsto (p_k,q_k)$ is a bi-holomorphism, and the image is a neighborhood of $(\infty, \cdots, \infty)\in \overline{\mathbb{C}}^l$, where $l$ is the number of cusps of $M^3$. \begin{proposition} The action of $\iota$ on $(p,q) \in U\cap \mathbb{R}^2\cup \{\infty\}$, where $(p,q)$ are the generalized Dehn coefficients, is \begin{equation} \label{iotaacts4} \iota_*(p,q)=(-p,q). \end{equation} \end{proposition} \begin{proof} The action of $\iota$ can be translated through the logarithm to $(u,v)$ from the action on the holonomy \eqref{iotaacts2} as $\iota_*(u,v)=(\overline{u},-\overline{v})$. 
Then, to find the action on generalized Dehn coefficients, we have to solve Thurston's equation \eqref{dehn_equation} with $\overline{u}$ and $-\overline{v}$, that is, \begin{equation} p'\overline{u}-q'\overline{v}=2\pi i, \end{equation} where $\iota_*(p,q)=(p',q')\in \mathbb{R}^2\cup \{\infty \}$. It is straightforward to check that $(p',q')=(-p,q)$ is the solution. \end{proof} \begin{corollary} The fixed points under $\iota$, which are in correspondence with $\mathrm{Def}(M^3, \Delta)$, are those whose generalized Dehn filling coefficients are of type $(0,q)$. \end{corollary} \begin{theorem} \label{deformation} Let $M^3$ be a connected, complete, non-orientable, hyperbolic 3-manifold of finite volume. Let $M^3$ have $k$ non-orientable cusps and $l$ orientable ones and let it admit an ideal triangulation $\Delta$. Then $\mathrm{Def}(M^3,\Delta)$ is real bi-analytic to an open set of $\mathbb{R}^{k+2l}$. \end{theorem} \begin{proof} We have already proved the theorem for $k=1, l=0$. Let $k=0, l=1$. The peripheral torus of $M^3$ lifts to two peripheral tori, $T_1, T_2$, on $M^3_+$. The action of $\iota$ here is by permutation. More precisely, we can fix any longitude-meridian pair in one of them, $l_1, m_1 \in \pi_1(T_1)$, and choose the longitude-meridian pair in the second torus as $l_2:=\iota_*(l_1), m_2:=\iota_*(m_1) \in \pi_1(T_2)$. The same arguments as in Proposition~\ref{propiotaaccts2} show that $\iota_*(p_1,q_1, p_2, q_2) =-(p_2,q_2,p_1,q_1)$ and, hence, the fixed points have generalized coefficients $(p,q,-p,-q)$, $p,q \in \mathbb{R}$. Finally, in general the action of $\iota$ on $\mathrm{Im}(\mathrm{hol'})\subset \mathbb{C}^{k+2l}$ can be understood as a product of $k+l$ actions, $\iota_1\times \cdots \times \iota_{k+l}$, the first $k$, $\iota_i, i=1, \cdots , k$ acting on $\mathbb{C}$ as in the case for a Klein bottle cusp, and the subsequent $l$, $\iota_j, j=k+1, \cdots , k+l$, acting on $\mathbb{C}^2$ as in the case for a peripheral torus. \end{proof} \section{Varieties of representations} \label{S:VarReps} The group of isometries of hyperbolic space is denoted by $G$, and we have the following well known isomorphisms: $$ G=\mathrm{Isom}(\mathbb{H}^3)\cong\mathrm{PO}(3,1)\cong\mathrm{PSL}(2,\mathbb{C})\rtimes \mathbb{Z}_2, $$ which we will often consider in order to identify elements of $G$ with elements of $\mathrm{PSL}(2,\mathbb{C})\rtimes \mathbb{Z}_2$. The group $G$ has two connected components, according to whether the isometries preserve or reverse the orientation. For a finitely generated group $\Gamma$, the variety of representations of $\Gamma$ in $G$ is denoted by $$ \hom(\Gamma, G). $$ As $G$ is algebraic, it has a natural structure of algebraic set, cf.\ Johnson--Millson \cite{JohnsonMillson}, but we consider only its topological structure. We are interested in the set of conjugacy classes of representations: $$ \mathcal{R}(\Gamma,G)=\hom(\Gamma, G)/G. $$ When $M^3$ is hyperbolic, we write $\Gamma=\pi_1(M^3)$. The holonomy of $M^3$ $$\mathrm{hol}\colon\Gamma\to G$$ is well defined up to conjugacy, hence $[\mathrm{hol}]\in \mathcal{R}(\Gamma,G)$. To understand deformations, we analyze a neighborhood of the holonomy in $\mathcal{R}(\Gamma,G)$. The main result of this section is: \begin{Theorem} \label{Thm:dimdef} Let $M^3$ be a hyperbolic manifold of finite volume. Assume that it has $k$ non-orientable cusps and $l$ orientable cusps. Then there exists a neighborhood of $[\mathrm{hol}]$ in $\mathcal{R}(\Gamma,G)$ homeomorphic to $\mathbb{R}^{k+2l}$. 
\end{Theorem} When $M^3$ is orientable this result is well known, see for instance Boileau--Porti or Kapovich \cite{BoileauPorti,KapovichBook}, hence we assume that $M^3$ is non-orientable. We will prove a more precise result in Theorem~\ref{Thm:coordinatesNO}, as for our purposes it is relevant to describe local coordinates in terms of the geometry of the structures at the ends. Before starting the proof, we need a lemma on varieties of representations. The projection to the quotient $\pi\colon\hom(\Gamma,G)\to \mathcal{R}(\Gamma,G)$ can have quite bad properties: for instance, even if $\hom(\Gamma,G)$ is Hausdorff, in general $\mathcal{R}(\Gamma,G)$ is not. But in a neighborhood of the holonomy we have: \begin{Lemma} \label{Lemma:GoodNbhd} There exists a neighborhood $V\subset \mathcal{R}(\Gamma,G)$ of $[\mathrm{hol}]$ such that: \begin{enumerate}[(a)] \item If $[\rho]=[\rho']\in V$, then the matrix $A\in G$ satisfying $A\rho(\gamma) A^{-1}=\rho'(\gamma)$, $\forall\gamma\in\Gamma$, is \emph{unique}. \item $V$ is Hausdorff and the projection $\pi\colon \pi^{-1}(V)\to V$ is open. \item If $[\rho]\in V$, then, for all $\gamma\in \Gamma$, $\rho(\gamma)$ preserves the orientation of $\mathbb{H}^3$ if and only if $\gamma$ is represented by a loop that preserves the orientation of $M^3$. \end{enumerate} \end{Lemma} Assertions (a) and (b) are proved for instance by Johnson and Millson in \cite{JohnsonMillson}: they define the property of being a \emph{good} representation, which is open in $\mathcal{R}(\Gamma,G)$, implies Assertions (a) and (b), and is satisfied by the conjugacy class of the holonomy. Assertion (c) is clear by continuity and the decomposition of $G$ into two components, according to the orientation. To describe the neighborhood of the holonomy in $\mathcal{R}(\Gamma, G)$ we use the orientation covering. \subsection{Orientation covering and the involution on representations} As mentioned, we assume $M^3$ non-orientable. Let $$M_+^3\to M^3$$ denote the orientation covering, with fundamental group $\Gamma_+=\pi_1(M^3_+)$. In particular we have a short exact sequence: $$ 1\to\Gamma_+\to\Gamma\to\mathbb Z_2\to 1. $$ \begin{Definition} \label{dfn:sigma} For $\zeta\in \Gamma\setminus \Gamma_+$, define the group automorphism $$ \begin{array}{rcl} \sigma_*\colon \Gamma_+ & \to & \Gamma_+ \\ \gamma & \mapsto & \zeta\gamma\zeta^{-1} \end{array} $$ \end{Definition} The automorphism $\sigma_*$ depends on the choice of $\zeta\in \Gamma\setminus \Gamma_+$: automorphisms corresponding to different choices of $\zeta$ differ by composition (or pre-composition) with an inner automorphism of $\Gamma_+$; furthermore $\sigma_*^2$ is an inner automorphism because $\zeta^2\in\Gamma_+$. This automorphism $\sigma_*$ is the map induced by the deck transformation of the orientation covering $M^3_+\to M^3$. The map induced by $\sigma_*$ in the variety of representations is denoted by $$ \begin{array}{rcl} \sigma^*\colon \mathcal{R}(\Gamma_+, G) & \to & \mathcal{R}(\Gamma_+, G)\\ {}[\rho] & \mapsto & [\rho\circ\sigma_*] \end{array} $$ and $\sigma^*$ does not depend on the choice of $\zeta$, because $\sigma_*$ is well defined up to inner automorphism. Furthermore $\sigma^*$ is an involution, $(\sigma^*)^2=\mathrm{Id}$. Consider the restriction map: $$ \mathrm{res}\colon \mathcal{R}(\Gamma,G) \to \mathcal{R}(\Gamma_+,G) $$ that maps the conjugacy class of a representation of $\Gamma$ to the conjugacy class of its restriction to $\Gamma_+$. 
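As a concrete illustration of Definition~\ref{dfn:sigma} (added only as an example, it is not needed in the proofs), consider a non-orientable cusp and the corresponding peripheral Klein bottle subgroup $\langle a,b\mid aba^{-1}=b^{-1}\rangle\subset\Gamma$, whose orientation-preserving elements form the subgroup $\langle a^2,b\rangle\cong\mathbb{Z}^2$ inside $\Gamma_+$. Choosing $\zeta=a\in\Gamma\setminus\Gamma_+$, the automorphism $\sigma_*$ restricts to this $\mathbb{Z}^2$ as
$$
\sigma_*(a^2)=a\,a^2\,a^{-1}=a^2,\qquad \sigma_*(b)=a\,b\,a^{-1}=b^{-1},
$$
which is exactly the action on the distinguished longitude-meridian pair of Definition~\ref{def:distiguished} used in Section~\ref{S:CombinatorialDef}. 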
\begin{Lemma} \label{Lemma:UV} There exist $U\subset \mathcal{R}(\Gamma, G)$ neighborhood of $[\mathrm{hol}]$ and $V\subset \mathcal{R}(\Gamma_+, G)$ neighborhood of $\mathrm{res}([\mathrm{hol}])$ such that $$ \mathrm{res}\colon U\overset{\cong}\longrightarrow \{[\rho]\in V\mid \sigma^*([\rho])=[\rho]\} $$ is a homeomorphism. \end{Lemma} \begin{proof} We show first that $\mathrm{res}(\mathcal{R}(\Gamma, G))\subset \{[\rho]\in \mathcal{R}(\Gamma_+, G) \mid \sigma^*([\rho])=[\rho]\} $: if $\rho_+=\mathrm{res}(\rho)$, then $\forall\gamma\in \Gamma_+$ $$ \sigma^*(\rho_+)(\gamma)=\rho_+(\sigma_*(\gamma))=\rho_+(\zeta\gamma\zeta^{-1})= \rho(\zeta)\rho_+(\gamma)\rho(\zeta)^{-1}. $$ Hence $\sigma^*([\mathrm{res}(\rho)])= [\mathrm{res}(\rho)]$. Next, given $[\rho_+]\in \mathcal{R}(\Gamma_+, G) $ satisfying $\sigma^*([\rho_+])=[\rho_+]$, by construction there exists $A\in G$ that conjugates $\rho_+$ and $\rho_+\circ\sigma_*$. We chose the neighborhood $V$ so that Lemma~\ref{Lemma:GoodNbhd} applies, hence such an $A\in G$ is unique. From uniqueness (of $A$ and $A^2$), it follows easily that, if $\zeta\in\Gamma\setminus\Gamma_+$ is the element such that $\sigma_*$ is conjugation by $\zeta$ then, by choosing $\rho(\zeta)=A$, $\rho_+$ extends to $\rho\colon\Gamma\to G$. Hence: $$ \mathrm{res}(\mathcal{R}(\Gamma, G))= \{[\rho]\in \mathcal{R}(\Gamma_+, G) \mid \sigma^*([\rho])=[\rho]\} . $$ Let $U=\mathrm{res}^{-1}(V)$. With this choice of $U$ and~$V$, $$ \mathrm{res}\colon U\to \{[\rho]\in V\mid \sigma^*([\rho])=[\rho]\}$$ is a continuous bijection. Finally we establish continuity of $\mathrm{res}^{-1}$ using a slice. The existence of a slice $S\subset \mathcal{R}(\Gamma_+, G) $ at $\mathrm{res}(\mathrm{hol})$ is proved by Johnson and Millson in \cite[Theorem~1.2]{JohnsonMillson}, who point to Borel-Wallach \cite[IX.5.3]{BorelWallach} for a definition of slice. From the properties of the slice, and as the stabilizer of $\mathrm{hol}\vert_{\Gamma_+}$ is trivial, the natural map $G\times S\to \mathcal{R}(\Gamma_+, G)$, that maps $(g, s)\in G \times S$ to $gsg^{-1}$, yields a homeomorphism between $G\times S$ and a neighborhood of the orbit of $\mathrm{res}(\mathrm{hol})$, and the projection induces a homeomorphism $S\cong V$. It follows from the product structure that the $A\in G$ that conjugates $\rho_+$ and $\rho_+\circ \sigma_*$ is continuous on $\rho_+$, so the extension of $\rho_+$ to a representation of the whole $\Gamma$ is continuous on $\rho_+$. Then continuity of $\mathrm{res}^{-1}$ follows by composing the homeomorphism $V\cong S$ (restricted to the fixed point set of $\sigma^*$) with the extension from $\Gamma_+$ to $\Gamma$, and projecting to $U\subset \mathcal{R}(\Gamma, G)$. \end{proof} As $\Gamma_+$ preserves the orientation, next we use the complex structure of the identity component $G_0= \operatorname{Isom}^+(\mathbb{H}^3)\cong \mathrm{PSL}(2,\mathbb C)$. \subsection{Representations in $ \mathrm{PSL}(2,\mathbb C)$} The holonomy of the orientation covering $M_+^3$ is contained in $ \mathrm{PSL}(2,\mathbb C)$, and it is well defined up to the action of $G=\mathrm{PSL}(2,\mathbb C)\rtimes\mathbb Z_ 2$ by conjugation. If we furthermore choose an orientation on $M_+^3$, then the holonomy is unique up to the action by conjugacy of $G_0= \operatorname{Isom}^+(\mathbb{H}^3)\cong \mathrm{PSL}(2,\mathbb C)$, and complex conjugation corresponds to changing the orientation. We call the conjugacy class in $\mathrm{PSL}(2,\mathbb C)$ of the holonomy of $M^3_+$ the \emph{oriented holonomy}. 
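To make the effect of complex conjugation explicit (a standard observation, recorded here only as an illustration), consider a loxodromic element represented by
$$
A=\pm\begin{pmatrix}\lambda & 0\\ 0 & \lambda^{-1}\end{pmatrix}\in\mathrm{PSL}(2,\mathbb{C}),\qquad A(z)=\lambda^2 z,\qquad |\lambda|>1 .
$$
It translates a distance $2\log|\lambda|$ along the geodesic with ideal endpoints $0$ and $\infty$ and rotates by the angle $2\arg\lambda$ around it; its complex conjugate $\overline A$ has the same translation length and opposite rotation angle. Thus conjugating all the matrices of the oriented holonomy yields the holonomy of the same hyperbolic structure with the reversed orientation, as asserted above. 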
We consider $$ \mathcal{R}(\Gamma_+, \mathrm{PSL}(2,\mathbb C))= \hom( \Gamma_+, \mathrm{PSL}(2,\mathbb C))/ \mathrm{PSL}(2,\mathbb C). $$ Its local structure is well known: \begin{Theorem} \label{Thm:smooth} A neighborhood of the oriented holonomy of $M_+^3$ in $\mathcal{R}(\Gamma_+, \mathrm{PSL}(2,\mathbb C))$ has a natural structure of $\mathbb{C}$-analytic variety defined over $\mathbb{R}$. \end{Theorem} The fact that it is $\mathbb C$-analytic follows for instance from~\cite{JohnsonMillson} or~\cite{KapovichBook}. In Theorem~\ref{Thm:coordinates} we give precise $\mathbb C$-analytic coordinates; for the moment this is sufficient for our purposes. \begin{Lemma} \label{Lemma:notfixed} Let $\mathrm{hol}_+$ be the oriented holonomy of $M_+^3$. Then $$ [\mathrm{hol}_+]\neq [\overline{\mathrm{hol}_+}]\in \mathcal{R}(\Gamma_+, \mathrm{PSL}(2,\mathbb C)). $$ Namely, the oriented holonomy and its complex conjugate are not conjugate by a matrix in $\mathrm{PSL}(2,\mathbb C)$. \end{Lemma} \begin{proof} By contradiction, assume that $\mathrm{hol}_+$ and $\overline{\mathrm{hol}_+}$ are conjugate by a matrix in $\mathrm{PSL}(2,\mathbb C)$: there exists an orientation-preserving isometry $A\in \mathrm{PSL}(2,\mathbb C)$ such that $$ A\,\mathrm{hol}_+(\gamma)\, A^{-1}= \overline{\mathrm{hol}_+(\gamma)},\qquad\forall\gamma\in \Gamma_+. $$ Consider the orientation reversing isometry $B=c\circ A$, where $c$ is the isometry whose M\"obius transformation is the complex conjugation $z \mapsto \overline{z}$. The previous equation is equivalent to \begin{equation} \label{eqn:commute} B\,\mathrm{hol}_+(\gamma)\, B^{-1}={\mathrm{hol}_+(\gamma)},\qquad\forall\gamma\in \Gamma_+. \end{equation} Brouwer's fixed point theorem yields that the fixed point set of $B$ in the ball compactification $\mathbb{H}^3\cup\partial_{\infty} \mathbb{H}^3$ is non-empty: $$ \mathrm{Fix}(B)=\{x\in \mathbb{H}^3\cup\partial_{\infty} \mathbb{H}^3\mid B(x)=x\}\neq\emptyset. $$ By \eqref{eqn:commute}, $\mathrm{hol}_+(\Gamma_+)$ preserves $\mathrm{Fix}(B)$. Thus, by minimality of the limit set of a Kleinian group, since $\mathrm{Fix}(B)\neq\emptyset$ is closed and $\mathrm{hol}_+(\Gamma_+)$-invariant, it contains the whole ideal boundary: $\partial_{\infty} \mathbb{H}^3\subset \mathrm{Fix}(B)$. Hence $B$ is the identity, contradicting that $B$ reverses the orientation. \end{proof} From Lemma~\ref{Lemma:notfixed} and Theorem~\ref{Thm:smooth}, we have: \begin{Corollary} There exists a neighborhood $W\subset \mathcal{R}(\Gamma_+, \mathrm{PSL}(2,\mathbb C))$ of the conjugacy class of the oriented holonomy of $M^3_+$ that is disjoint from its complex conjugate: $$ \overline{W}\cap W=\emptyset. $$ \end{Corollary} By choosing the neighborhood $W\subset \mathcal{R}(\Gamma_+, \mathrm{PSL}(2,\mathbb C))$ sufficiently small, we may assume that its projection to $\mathcal{R}(\Gamma_+, G)$ is contained in $V$ as in Lemma~\ref{Lemma:UV}. The neighborhood $V$ can also be chosen smaller, to be equal to the projection of $W$, as this map is open. Namely, the neighborhoods can be chosen so that $ \mathcal{R}(\Gamma_+, \mathrm{PSL}(2,\mathbb C))\to \mathcal{R}(\Gamma_+, G)$ restricts to a homeomorphism between $W$ (or $\overline{W}$) and $V$. 
In particular we can lift to $W$ the restriction map from $U$ to $V$: $$ \xymatrix{ & & W \ar[d]^{\cong}\\ U \ar[rr]^{\mathrm{res}} \ar@{.>}[urr]^{\widetilde{\mathrm{res}}} & & V } $$ \begin{Lemma} \label{Lemma:liftres} For $U\subset \mathcal{R}(\Gamma, G)$ and $W \subset \mathcal{R}(\Gamma_+, \mathrm{PSL}(2,\mathbb C))$ as above, the lift of the restriction map yields a homeomorphism: $$ {\widetilde{\mathrm{res}}}\colon U\overset{\cong}\longrightarrow \{[\rho]\in W\mid [\rho\circ\sigma_*]=[\overline{\rho}] \}. $$ \end{Lemma} This lemma has the same proof as Lemma~\ref{Lemma:UV}, just taking into account that $\rho(\zeta)\in G$ reverses the orientation, for $[\rho]\in U$ and $\zeta\in \Gamma\setminus\Gamma_+$. \subsection{Local coordinates} Here we give the local coordinates of Theorem~\ref{Thm:smooth} and we prove a stronger version of Theorem~\ref{Thm:dimdef}. For $\gamma\in\Gamma_+$ and $[\rho]\in\mathcal{R}(\Gamma_+,\mathrm{PSL}(2,\mathbb{C}))$, following Culler and Shalen \cite{CullerShalen}, define \begin{equation} \label{eqn:Igamma} I_{\gamma}([\rho])=(\mathrm{trace}(\rho(\gamma)))^2-4. \end{equation} Thus $I_\gamma$ is a function from $\mathcal{R}(\Gamma_+,\mathrm{PSL}(2,\mathbb{C}))$ to $\mathbb{C}$. This function plays a role in the generalization of Theorem~\ref{Thm:dimdef}. \begin{Theorem} \label{Thm:coordinates} Let $M_+^3$ be as above and assume that it has $n$ cusps. Choose $\gamma_1,\ldots,\gamma_n\in\Gamma_+$, one non-trivial element in each peripheral subgroup. Then, for a neighborhood $W\subset \mathcal{R}(\Gamma_+,\mathrm{PSL}(2,\mathbb{C}))$ of the oriented holonomy, $$ (I_{\gamma_1},\dots, I_{\gamma_n})\colon W\to\mathbb{C}^n $$ defines a bi-analytic map between $W$ and a neighborhood of the origin. \end{Theorem} This theorem holds for any orientable hyperbolic manifold of finite volume, though we only use it for the orientation covering. Again, see \cite{BoileauPorti,KapovichBook} for a proof. As explained in these references, this is the algebraic part of the proof of Thurston's hyperbolic Dehn filling theorem using varieties of representations. For a Klein bottle $K^2$, in Definition~\ref{def:distiguished} we considered the presentation of its fundamental group: $$ \pi_1(K^2)=\langle a,b \mid aba^{-1}=b^{-1}\rangle .$$ The elements $a^2$ and $b$ are called \emph{distinguished} elements. Recall that, in terms of paths, those are represented by the unique homotopy classes of loops in the orientation covering that are invariant by the deck transformation (as unoriented curves). Here we prove the following generalization of Theorem~\ref{Thm:dimdef}: \begin{Theorem} \label{Thm:coordinatesNO} Let $M^3$ be a non-orientable manifold of finite volume with $k$ non-orientable cusps and $l$ orientable cusps. For each horospherical Klein bottle, $K^2_i$, choose $ \gamma_i\in\pi_1(K_i^2)$ distinguished, $i=1,\ldots, k$. For each horospherical torus, $T^2_j$, choose a nontrivial $\mu_j\in\pi_1(T_j^2)$, $j=1,\ldots, l$. There exists a neighborhood $U\subset \mathcal{R}(\Gamma, G)$ of the holonomy of $M^3$ such that the map $$ ( I_{\gamma_1},\ldots, I_{\gamma_k},I_{\mu_1},\ldots,I_{\mu_l} )\circ \widetilde{\mathrm{res}}: U \to \mathbb{R}^k\times \mathbb{C}^{l} $$ defines a homeomorphism between $U$ and a neighborhood of the origin in $\mathbb{R}^k\times \mathbb{C}^{l}$. \end{Theorem} \begin{proof} Let $M^3_+\to M^3$ be the orientation covering. By the choice of distinguished elements in the peripheral Klein bottles, $\gamma_i\in \Gamma_+$. 
Furthermore, as the peripheral tori are orientable, $\mu_j\in \Gamma_+$. Hence $$ \{\gamma_1,\ldots,\gamma_k,\mu_1,\ldots,\mu_l,\sigma_*(\mu_1), \ldots, \sigma_*(\mu_l)\} $$ gives a nontrivial element for each peripheral subgroup of $\Gamma_+$, where $\sigma_*$ is the group automorphism from Definition~\ref{dfn:sigma}. We apply Theorem~\ref{Thm:coordinates}: $$ I=(I_{\gamma_1}, \ldots, I_{\gamma_k},I_{\mu_1},\ldots,I_{\mu_l}, I_{\sigma_*(\mu_1)}, \ldots, I_{\sigma_*(\mu_l)})\colon W\to \mathbb C^{k+2l} $$ is a bi-analytic map onto a neighborhood of the origin. Furthermore, as $\sigma_*(\gamma_i)=\gamma_i^{\pm 1}$ and $(\sigma^*)^2=\operatorname{Id}$, $$ I\circ\sigma^*\circ I^{-1}(x_1,\ldots,x_k,y_1,\ldots,y_l,z_1,\ldots, z_l)= (x_1,\ldots,x_k,z_1,\ldots,z_l,y_1,\ldots, y_l). $$ In addition, by construction $I$ commutes with complex conjugation. Hence, by Lemma~\ref{Lemma:liftres} the image $ (I\circ\widetilde{\mathrm{res}})(U) $ is the subset of a neighborhood of the origin in $\mathbb C^{k+2l}$ defined by $$ \begin{cases} x_i=\overline{x_i},& \forall i=1,\ldots,k,\textrm{ and } \\ z_j=\overline{y_j},& \forall j=1,\ldots,l. \end{cases} $$ Finally, by combining Theorem~\ref{Thm:coordinates} and Lemma~\ref{Lemma:liftres}, the map $I\circ\widetilde{\mathrm{res}}$ is a homeomorphism between $U$ and its image. \end{proof} We can now state the generalization of Theorem~\ref{Thm:main_thm_one_cusp} to several cusps. Here $D(1)\subset\mathbb C$ denotes a disk of radius~$1$. \begin{theorem} \label{Thm:main_thm_several_cusps} Let $M^3$ be a complete non-orientable hyperbolic $3$-manifold of finite volume with $k$ non-orientable cusps and $l$ orientable cusps. \begin{enumerate}[(a)] \item If $M^3$ admits a geometric ideal triangulation $\Delta$, then $\mathrm{Def}(M^3, \Delta)\cong (-1,1)^k\times D(1)^l$. The parameters $(\pm t_1,\ldots, \pm t_k,\pm v_1,\ldots,\pm v_l) \in (-1,1)^k\times D(1)^l$ correspond to the same structure. \item A neighborhood of the holonomy in $\mathcal{R}(\pi_1(M^3),\mathrm{Isom}(\mathbb{H}^3))$ is homeomorphic to $(-1,1)^k\times D(1)^l$. \end{enumerate} Furthermore, the holonomy map $\mathrm{Def}(M^3, \Delta) \to \mathcal{R}(\pi_1(M^3),\mathrm{Isom}(\mathbb{H}^3))$ is written in coordinates as: $$ \begin{array}{rcl} (-1,1)^k\times D(1)^l & \to & (-1,1)^k\times D(1)^l \\ ( t_1,\ldots, t_k, v_1,\ldots, v_l) & \mapsto & ( t_1^2,\ldots, {t_k}^2, {v_1}^2,\ldots, {v_l}^2) \end{array} $$ Namely, each interval $ (-1,1)$ is folded along $0$ and has image $[0,1)$, and disks $D(1)$ are mapped to disks by a 2:1 branched covering. \end{theorem} \begin{proof} Assertion (a) is Theorem~\ref{deformation}, and assertion (b) is Theorem~\ref{Thm:coordinatesNO}. To describe the holonomy map in coordinates, for each cusp (orientable or not) choose an orientation preserving peripheral element $m$ and let $v$ be the logarithm of the holonomy of $m$ defined as in \eqref{dehn_equation} in a neighborhood of the origin in $\mathbb C$ (with $v\in i\mathbb{R}$ in the non-orientable case). In particular $v$ is a component of the local coordinates of $\mathrm{Def}(M^3, \Delta)$. Furthermore, the holonomy of $m$ is conjugate to $$ \pm \begin{pmatrix} e^{v/2} & 1 \\ 0 & e^{-v/2} \end{pmatrix} $$ therefore it has trace $\pm 2\cosh\frac{v}{2}$, so that $I_m=4\sinh^2\frac{v}{2}$ is a component of the local coordinates of $\mathcal{R}(\pi_1(M^3),\mathrm{Isom}(\mathbb{H}^3))$. Then the assertion follows from applying a suitable coordinate change. 
\end{proof} \section{Representations of the Klein bottle} \label{S:RepKl} Let $\pi_1(K^2)=\langle a,b | aba^{-1}=b^{-1} \rangle$ be a presentation of the fundamental group of the Klein bottle, and $G=\mathrm{Isom}(\mathbb{H}^3)\cong \mathrm{PSL}(2,\mathbb{C})\rtimes \mathbb{Z}_2$. The variety of representations $\mathrm{hom}(\pi_1(K^2),G)$ is identified with $$ \mathrm{hom}(\pi_1(K^2),G)\cong \{A,B \in G | ABA^{-1}=B^{-1} \}. $$ Topologically, we can expect to have at least $4$ (possibly empty) connected components, according to whether $A$ and $B$ preserve or reverse the orientation. We are interested in studying one of them. \begin{Definition} A representation $\rho \in \mathrm{hom}(\pi_1(K^2),G)$ is said to \emph{preserve the orientation type} if, for every $\gamma\in \pi_1(K^2)$, $\rho(\gamma)$ is an orientation-preserving isometry if and only if $\gamma$ is represented by an orientation-preserving loop of $K^2$. We denote this subspace of representations by $$ \hom_+(\pi_1(K^2), G). $$ \end{Definition} Let $T^2 \rightarrow K^2$ be the orientation covering. The restriction map on the varieties of representations (without quotienting by conjugation) is: $$ \mathrm{res}\colon \hom(\pi_1(K^2), G) \to \hom(\pi_1(T^2), \mathrm{PSL}(2,\mathbb{C}) ). $$ \begin{theorem} \label{th:repklein} Let $\rho\in \hom_+(\pi_1(K^2),G)$ preserve the orientation type and let $\rho(b)\neq \mathrm{Id}$. By writing $A=\rho(a)$, $B=\rho(b)$ as Möbius transformations, up to conjugation one of the following holds: \begin{itemize} \item[a)] $A(z)=\overline z+1$, $B(z)=z+\tau i$, with $\tau\in \mathbb{R}_{> 0}$. \item[a')] $A(z)=\overline z$, $B(z)=z+\tau i$, with $\tau\in \mathbb{R}_{> 0}$. \item[b)] $A(z)=e^l\overline{z}$, $B(z)=e^{\alpha i} z$, with $l\in \mathbb{R}_{\geq 0}$, $\alpha \in (0,\pi]$. \item[c)] $A(z)=e^{\alpha i}/\overline{z}$, $B(z)=e^l z$, with $l\in \mathbb{R}_{> 0}$, $\alpha \in [0,\pi]$. \end{itemize} \end{theorem} \begin{proof} Let $G^0=\mathrm{PSL}(2, \mathbb{C})\vartriangleleft G$ be the connected component of the identity. The variety of representations $\hom(\pi_1(T^2), G^0)/G^0$ is well known. A representation $[\rho_0]$ in this variety is the class of either a parabolic representation, $\rho_0(l)(z)=z+1$, $\rho_0(m)(z)=z+\tau$, $\tau \in \mathbb{C}$, a degenerate parabolic one, $\rho_0(l)(z)=z$, $\rho_0(m)(z)=z+\tau$, $\tau \in \mathbb{C}$, or a hyperbolic one $\rho_0(l)(z)=\lambda z$, $\rho_0(m)(z)=\mu z$, $\lambda, \mu \in \mathbb{C}$, where $\pi_1(T^2)=\langle l, m | lm=ml \rangle$. For $\rho_0 =\mathrm{res}(\rho)$, let $A= \rho(a)$, $B= \rho(b)$ where $a,b$ are generators of $\pi_1(K^2)$, and $L=\rho(l)$, $M=\rho(m)$. The following is satisfied: \begin{align} (A^2,B)= & (L,M), \tag{Restriction of a representation to the torus} \\ ABA^{-1}= & B^{-1}. \tag{Klein bottle relation} \end{align} In fact, in order for $\rho_0$ to be a restriction, there must be $A$ and $B$ satisfying the previous conditions. We prove the theorem using these equations. If $[\rho]$ is in the parabolic case, by hypothesis $\tau\neq 0$. Then the solution is unique: $A(z)=\overline{z}+1$, $B(z)=z+\tau i$, $\tau \in \mathbb{R}\setminus \{0\}$, hence $L(z)=z+2$, $M(z)=z+\tau i$. Similarly, for the degenerate parabolic case, $A(z)=\overline{z}$, $B(z)=z+\tau i$, $\tau \in \mathbb{R}\setminus \{0\}$. On the other hand, for $[\rho]$ hyperbolic, either $L$ corresponds to a real dilation and $M$ to a rotation, or the other way around. 
In the case $L(z)=e^{2l}z$, $M(z)=e^{\alpha i}z$, $l\in \mathbb{R}$, $\alpha \in (-\pi, \pi]$ the representation can be written as the restriction of several representations of the Klein bottle, but all of them are conjugate to $A(z)=e^{l} \overline{z}$, $B(z)=e^{\alpha i}z$. A similar situation happens when $L(z)=e^{2\alpha i}z$, $M(z)=e^{l}z$, $ l \in \mathbb{R}$, $\alpha \in (-\pi, \pi]$, yielding $A(z)=e^{\alpha i}/\overline{z}$, $B(z)=e^lz$. However, in the last case, we note that for every such representation $[\rho_0]$, we get two non-conjugate representations $[\rho_1], [\rho_2]$ such that $[\rho_0]=\mathrm{res}([\rho_1])=\mathrm{res}([\rho_2])$, which differ in $A_1(z)=e^{\alpha i}/\overline{z}$, $A_2(z)=e^{(\alpha+\pi)i}/\overline{z}=-e^{\alpha i}/\overline{z}$. Thus, we obtain a classification of representations in $\hom(\pi_1(K^2),G)/G^0$. To get the classification quotienting by the whole group, $\hom(\pi_1(K^2),G)/G$, we only have to see how the complex conjugation $c$ acts by conjugation on each representation: in $a)$ and $a')$, $c$ maps $z+\tau i \mapsto z-\tau i$; in $b)$, $e^{\alpha i}z \mapsto e^{-\alpha i}z$; and in $c)$, $e^{\alpha i}/\overline{z} \mapsto e^{-\alpha i}/\overline{z}$. The choice $\alpha >0, l>0$ in $b), c)$ is obtained by taking into account that $[\rho]=[\rho^{-1}]$. \end{proof} \begin{Definition} \label{def:namereps} According to the different cases in Theorem~\ref{th:repklein}, a representation $\rho\in \hom_+(\pi_1(K^2),G)$ is called: \begin{itemize} \item \emph{parabolic non-degenerate} in case a) and \emph{parabolic degenerate} in case a'), \item \emph{type I} in case b), and \item \emph{type II} in case c). \end{itemize} Furthermore, type I or II are called non-degenerate if $l\neq 0$ or $\alpha\neq 0$ respectively, and degenerate otherwise. \end{Definition} \begin{remark} \label{rmk:non-deg} The holonomy of a non-orientable cusp restricts to a representation of the Klein bottle that preserves the orientation type and is parabolic non-degenerate. Furthermore, deformations of this representation still preserve the orientation type and are non-degenerate (possibly of type I or II), by continuity. \end{remark} For $\gamma\in \pi_1(T^2) \vartriangleleft \pi_1(K^2)$, recall from \eqref{eqn:Igamma} that $$ \begin{array}{rcl} I_{\gamma}\colon \hom(\pi_1(K^2), G) & \to & \mathbb{C} \\ \rho& \mapsto & (\operatorname{trace}_{\mathrm{PSL}(2,\mathbb{C})}(\rho(\gamma)))^2-4, \end{array} $$ where $\operatorname{trace}_{\mathrm{PSL}(2,\mathbb{C})}$ means trace as matrix in $\mathrm{PSL}(2,\mathbb{C})$. \begin{lemma} \label{Lemma:typeofrep} Let $\rho\in\hom(\pi_1(K^2), G)$ preserve the orientation type and $\rho(b)\neq \mathrm{Id}$. Then: \begin{itemize} \item If $\rho$ is parabolic, then $I_{\gamma}(\rho)=0$, $\forall \gamma\in \pi_1(T^2) $. \item If $\rho$ is of type I, then $I_{ a^2}(\rho)\geq 0$ and $I_{b}(\rho)< 0$. \item If $\rho$ is of type II, then $I_{ a^2}(\rho)\leq 0$ and $I_{b}(\rho)> 0$. \end{itemize} \end{lemma} \begin{proof} It is a straightforward computation from Theorem~\ref{th:repklein}. \end{proof} \begin{corollary}\label{Coro:notrealized} \begin{enumerate}[a)] \item The restriction to a peripheral Klein bottle of the holonomy of a structure in $\mathrm{Def}(M^3,\Delta)$ is of type I. \item A neighborhood of $[\mathrm{hol}]$ in $\mathcal R(M^3,G)$ contains representations of both types, I and II. \item In particular, the holonomy map $\mathrm{Def}(M^3,\Delta)\to \mathcal R(M^3,G)$ is not surjective in a neighborhood of the holonomy. 
\end{enumerate} \end{corollary} \begin{proof} Assertion a) follows from Remark~\ref{rmk:hol_lm} and Assertion b) from Theorem~\ref{Thm:coordinatesNO}, in both cases using Lemma~\ref{Lemma:typeofrep}. \end{proof} \section{Metric completion} \label{S:Completion} As we deform non-compact manifolds, the deformations into non-complete manifolds are not unique (eg~one can consider a proper open subset of a non-complete manifold). We are not discussing the different issues related to this non-uniqueness, just the existence of a deformation into a metric that can be complete as a conifold (see below). The main result of this section is Theorem~\ref{Theorem:completion}. In the orientable case, the metric completion after deforming an orientable cusp is a singular space with a singularity of so-called \emph{Dehn type} (a class that includes non-singular manifolds), see Hodgson's thesis \cite{Hodgson} and Boileau--Porti \cite[Appendix~B]{BoileauPorti}. In the non-orientable case, the singularity is more specific, a so-called conifold. \subsection{Conifolds and cylindrical coordinates} \label{Subsection:conifolds} A \emph{conifold} is a metric length space locally isometric to the metric cone of constant curvature on a spherical conifold of dimension one less, see for instance \cite{BLP}. When, as a topological space, a conifold is homeomorphic to a manifold, it is called a \emph{cone manifold}, but in general it is only a pseudo-manifold. In dimension 2 conifolds are also cone manifolds, but in dimension three there may be points with a neighborhood homeomorphic to the cone on a projective plane $P^2$. We are interested in three local models of singular spaces, which as conifolds are: \begin{itemize} \item The hyperbolic cone over a round sphere $S^2$. This corresponds to a point with a non-singular hyperbolic metric. \item The hyperbolic cone over $S^2(\alpha,\alpha)$, the sphere with two cone points of angle $\alpha$, that is the spherical suspension of a circle of perimeter $\alpha$. It corresponds to a singular axis of angle~$\alpha$. \item The hyperbolic cone over $P^2(\alpha)$, the projective plane with a cone point of angle $\alpha$. This is the quotient of the previous one by a metric involution, which is the antipodal map on each concentric sphere. \end{itemize} Next we describe metrically those local models, by using cylindrical coordinates in the hyperbolic space. These coordinates are defined from a geodesic line $g$ in $\mathbb H^3$, and we fix a point in the unit normal bundle to $g$, ie~a vector $\vec u$ of norm $1$ and perpendicular to $g$. Cylindrical coordinates give a diffeomorphism: $$ \begin{array}{rcl} \mathbb{H}^3\setminus g & \overset\cong\longrightarrow & (0,+\infty)\times \mathbb{R}/2\pi\mathbb{Z}\times \mathbb{R} \\ p & \longmapsto & (r,\theta, h) \end{array} $$ where $r$ is the distance between $g$ and $p$, $\theta$ is the angle parameter (the angle between the parallel transport of $\vec u$ and the tangent vector to the orthogonal geodesic from $g$ to $p$) and $h$ is the arc parameter of $g$, the signed distance between the base point of $\vec u$ and the orthogonal projection from $p$ to $g$, Figure~\ref{Figure:Cylindrical}.
\begin{figure} \caption{Cylindrical coordinates.} \label{Figure:Cylindrical} \end{figure} In the upper-half space model of $\mathbb H^3$, if $g$ is the geodesic from $0$ and $\infty$, then there exists a choice of coordinates (a choice of $\vec u$) so that the projection from $g$ to the ideal boundary $\partial_\infty \mathbb{H}^3$ maps a point with cylindrical coordinates $(r,\theta, h)$ to $e^{h+i\theta}\in\mathbb C$, Figure~\ref{Figure:Projection}. A different choice of $\vec u$ would yield instead $\lambda e^{h+i\theta}\in\mathbb C$, for some $\lambda\in\mathbb C\setminus\{0\}$. \begin{figure} \caption{Orthogonal projection to $\partial_\infty\mathbb H^3$ with $g$ the geodesic with ideal end-points $0$ and $\infty$.} \label{Figure:Projection} \end{figure} The hyperbolic metric on $\mathbb H^3$ in these coordinates is $$ d r^2+\sinh^2 (r) d\theta^2+\cosh^2 (r) d h^2 $$ More precisely, $\mathbb H^3$ is the metric completion of $(0,+\infty)\times \mathbb{R}/2\pi\mathbb{Z}\times \mathbb{R}$ with this metric. \begin{Definition} For $\alpha\in (0,2\pi)$, $\mathbb{H}^3(\alpha)$ is the metric completion of $(0,+\infty)\times \mathbb{R}/2\pi\mathbb{Z}\times \mathbb{R}$ for the metric $$ d s^2= d r^2+\left(\frac{\alpha}{2\pi }\right)^2 \sinh^2 (r) d\theta^2+\cosh^2 (r) d h^2 $$ \end{Definition} The metric space $\mathbb{H}^3(\alpha)$ may be visualized by taking a sector in $\mathbb{H}^3$ of angle $\alpha$ and identifying its sides by a rotation. Alternatively, with the change of coordinates $\widetilde \theta = \frac{\alpha}{2\pi } \theta$, $\mathbb{H}^3(\alpha)$ is the metric completion of $(0,+\infty)\times \mathbb{R}/\alpha \mathbb{Z}\times \mathbb{R}$ for the metric $ d r^2+\sinh^2 (r) d{\widetilde\theta}^2+\cosh^2 (r) d h^2 $. \begin{Remark} The metric models are: \begin{itemize} \item For the non-singular case (the cone on the round sphere) it is $\mathbb{H}^3$. \item For the singular axis (the cone on $S^2(\alpha,\alpha)$) it is $\mathbb{H}^3(\alpha)$. \item For the cone on $P^2(\alpha)$, it is the quotient $$ \mathbb{H}^3(\alpha)/(r,\theta,h)\sim (r,-\theta,-h). $$ \end{itemize} \end{Remark} \subsection{Conifolds bounded by a Klein bottle} We keep the notation of Subsection~\ref{Subsection:conifolds}, with cylindrical coordinates. Before discussing conifolds bounded by a Klein bottle, we describe a cone manifold bounded by a torus. \begin{Definition} A \emph{solid torus with singular soul} is $ \mathbb{H}^3(\alpha)/\!\!\sim$, where $\sim$ is the relation induced by the isometric action of $\mathbb{Z}$ generated by $$ (r,\theta,h)\mapsto (r, \theta+\tau, h+L) $$ for $\tau\in \mathbb{R}/2\pi\mathbb{Z}$ and $L>0$. \end{Definition} The space $ \mathbb{H}^3(\alpha)/\!\!\sim$ is a solid torus of infinite radius with singular soul of cone angle $\alpha$, length of the singularity $L>0$ and torsion parameter $\tau\in \mathbb{R}/2\pi\mathbb Z$ (the rotation angle induced by parallel transport along the singular geodesic is $\frac{\alpha}{2\pi}\tau\in \mathbb{R}/\alpha\mathbb Z$). By considering the metric neighborhood of radius $r_0>0$ on the singular soul, we get a compact solid torus, bounded by a $2$-torus. This compact solid torus depicts a tubular neighborhood of a component of the singular locus of a cone manifold (compare Hodgson--Kerckhoff \cite{HodgsonKerckhoff} and Hodgson's thesis \cite{Hodgson}). We describe two conifolds bounded by a Klein bottle, that are a quotient of this solid torus by an involution. 
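The formula for the metric in cylindrical coordinates can be double checked symbolically. The following sketch (using sympy) pulls back the upper half-space metric $(dx^2+dy^2+dz^2)/z^2$ under the explicit parametrization $p(r,\theta,h)=e^h(\tanh r\cos\theta,\tanh r\sin\theta,1/\cosh r)$, adapted to the geodesic $g$ with ideal endpoints $0$ and $\infty$; this particular parametrization is our own choice of coordinates, consistent with the boundary projection $e^{h+i\theta}$ described above.
\begin{verbatim}
# Symbolic check that the pullback of (dx^2+dy^2+dz^2)/z^2 under the parametrization
# above equals dr^2 + sinh^2(r) dtheta^2 + cosh^2(r) dh^2.
import sympy as sp

r, th, h = sp.symbols('r theta h', real=True)
x = sp.exp(h) * sp.tanh(r) * sp.cos(th)
y = sp.exp(h) * sp.tanh(r) * sp.sin(th)
z = sp.exp(h) / sp.cosh(r)

J = sp.Matrix([[sp.diff(f, c) for c in (r, th, h)] for f in (x, y, z)])
g_metric = (J.T * J / z**2).applyfunc(sp.simplify)      # pullback of the hyperbolic metric

expected = sp.diag(1, sp.sinh(r)**2, sp.cosh(r)**2)
assert (g_metric - expected).applyfunc(sp.simplify) == sp.zeros(3, 3)
print(g_metric)   # diag(1, sinh(r)^2, cosh(r)^2)
\end{verbatim}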
\begin{Definition} A \emph{solid Klein bottle with singular soul} is $ \mathbb{H}^3(\alpha)/\!\!\sim$, where $\sim$ is the relation induced by the isometric action of $\mathbb{Z}$ generated by $$ (r,\theta,h)\mapsto (r, -\theta, h+L) $$ for $L>0$. \end{Definition} The space $ \mathbb{H}^3(\alpha)/\!\!\sim$ is a solid Klein bottle of infinite radius with singular soul of cone angle~$\alpha$, and length of the singularity $L>0$. We may consider a metric tubular neighborhood of radius $r_0$, bounded by a Klein bottle. Its orientation cover is a solid torus with singular soul, cone angle $\alpha$, length of the singularity $2L$ and torsion parameter $\tau=0$. \begin{Definition} The \emph{disc orbi-bundle with singular soul} is $ \mathbb{H}^3(\alpha)/\!\!\sim$, where $\sim$ is the relation induced by two isometric involutions: $$ \begin{array}{rcl} (r,\theta,h)&\mapsto &(r, \theta+\pi, -h)\\ (r,\theta,h)&\mapsto &(r, \theta+\pi, 2L-h) \end{array} $$ for $L>0$. \end{Definition} To describe this space, it is useful first to look at the action on the preserved geodesic, corresponding to $r=0$. These involutions map $h\in\mathbb{R}$ to $-h$ and to $2L-h$ respectively. Thus it is the action of the infinite dihedral group $\mathbb Z_2*\mathbb Z_2$ on a line generated by two reflections. Its orientation preserving subgroup is $\mathbb{Z}$ acting by translations on $\mathbb{R}$. Thus $\mathbb{R}/\mathbb{Z}$ is a circle, and $\mathbb{R}/(\mathbb Z_2*\mathbb Z_2)$ is an orbifold. The solid torus is a disc bundle over the circle, and our space is an orbifold-bundle over $\mathbb{R}/(\mathbb Z_2*\mathbb Z_2)$ with fibre a disc. This space is the quotient of the solid torus by an involution. View the solid torus as two $3$-balls joined by two 1-handles, Figure~\ref{Figure:SolidTorus}. On each 3-ball apply the antipodal involution (on each concentric sphere of given radius), and extend this involution by permuting the 1-handles. The quotient of each ball is the (topological) cone on $P^2$, hence our space is the result of joining two cones on $P^2$ by a $1$-handle. Its boundary is the connected sum $P^2\# P^2\cong K^2$. \begin{figure} \caption{A solid torus as two $3$-balls joined by two $1$-handles.} \label{Figure:SolidTorus} \end{figure} The singular locus of the disc orbi-bundle $ \mathbb{H}^3(\alpha)/\!\!\sim$ is an interval (the underlying space of the orbifold bundle) of length $L$. The interior points of the singular locus have cone angle $\alpha$, and the boundary points of the interval are precisely the points where it is not a topological manifold. Again $ \mathbb{H}^3(\alpha)/\!\!\sim$ has infinite radius, and the metric tubular neighborhood of radius $r$ of the singularity is bounded by a Klein bottle. It is the quotient of a solid torus of length $2L$ and torsion parameter $\tau=0$ by an isometric involution with two fixed points (thus, as an orbifold, its orientation orbi-covering is a solid torus). \begin{remark} The boundary of both the solid Klein bottle and the disc orbi-bundle is a Klein bottle. In both cases the holonomy preserves the orientation type, but the type of the representation as in Definition~\ref{def:namereps} is different: \begin{enumerate}[a)] \item The holonomy of the boundary of a solid Klein bottle with singular soul is a representation of type I. \item The holonomy of the boundary of a disc orbi-bundle over a singular interval is of type II.
\end{enumerate} \end{remark} For a non-orientable cusp, the holonomy of the peripheral torus is either parabolic non-degenerate, of type I or of type II, also non-degenerate (Remark~\ref{rmk:non-deg}). The aim of the next section is to prove that the deformations can be defined so that the metric completion is either a solid Klein bottle with singular soul or a disc orbi-bundle with singular soul, according to the type. This is the content of Theorem~\ref{Theorem:completion}, which we prove at the end of the section. \subsection{The radial structure} Let $M^3$ be a non-compact hyperbolic 3-manifold of finite volume. We deform its holonomy representation and accordingly we deform its hyperbolic metric. Nonetheless, incomplete metrics are not unique, so here we give a statement about the existence of a maximal structure, which corresponds to the one completed in Theorem~\ref{Theorem:completion}. Let $[\rho] \in \mathcal{R}(\pi_1(M^3), G)$ be a deformation of its complete structure. There is some nuance in associating to $[\rho]$ a hyperbolic structure which is made explicit by Canary--Epstein--Green in \cite{CEG}. Here, the authors conclude that any two deformations with a given holonomy representation are related by an isotopy of the inclusion of $M^3$ in some fixed thickening $(M^{3})^*$, where a thickening is just another hyperbolic $3$-manifold containing ours. We will start by making clear what we mean by a maximal structure. \begin{Definition} Let $M$ be a manifold with an analytic $(G,X)$-structure. We say that $M^*$ is an \emph{isotopic thickening of $M$} if it is a thickening and there is an isotopy $i'$ of the inclusion $i\colon M \hookrightarrow M^*$ such that $i'(M)=M^*$. \end{Definition} Given two isotopic thickenings of $M$ we say that $M_1^*\leq M_2^*$ if there is a $(G,X)$ isomorphism from $M_1^*$ to some subset of $M_2^*$ extending the identity on $M$. Hence, we say that an isotopic thickening is maximal if it is maximal with respect to the partial order relation we have just defined. In general, it is not clear whether maximal isotopic thickenings exist, nor under which circumstances they do exist. However, we will construct in our situation an explicit maximal thickening. \begin{lemma} \label{lm:inj_rad} Let $\mathrm{inj}_{M^3}(x)$ denote the injectivity radius at a point $x\in M^3$. Then a necessary condition for a non-trivial thickening of $M^3$ to exist is the existence of a sequence $\{x_n\}\subset M^3$ with $\mathrm{inj}_{M^3}(x_n)\rightarrow 0$. \end{lemma} \begin{proof} Let us suppose a non-trivial thickening $(M^3)^*$ exists. Then, take a point $x\in \partial((M^3)^*\setminus M^3)$. Any sequence $\{x_n\}\subset M^3$ such that $x_n\rightarrow x$ satisfies $\mathrm{inj}_{M^3}(x_n)\rightarrow 0$. \end{proof} The purpose of Lemma~\ref{lm:inj_rad} is two-fold: first, it gives a condition for a thickening to be maximal (in the sense of the partial order relation we just defined), and second, it shows where a manifold could possibly be thickened. Taking into account a thick-thin decomposition of the manifold, the thickening can only be done in the deformed cusps. Each cusp of $M^3$ is diffeomorphic to either $T^2\times [0,\infty)$ or $K^2\times [0,\infty)$. Let us consider a proper compact product subset $K^2\times[0,\lambda]$ or $T^2\times [0,\lambda]$ of an end, for some $\lambda>0$, and let us denote by $D_\rho$ the developing map of a structure with holonomy $\rho$ in the equivalence class $[\rho]\in \mathcal{R}(\Gamma,G)$.
\begin{lemma} \label{lm:section_developing} The image of the proper product subset under the developing map, $D_{\rho}(\widetilde{K^2}\times[0,\lambda])$ or $D_{\rho}(\widetilde{T^2}\times[0,\lambda])$, lies in the region between two tubular neighborhoods of a geodesic $\gamma$, that is, in $N_{\epsilon_2}(\gamma)\setminus N_{\epsilon_1}(\gamma)$, where $N_\epsilon(\gamma)=\{x\in \mathbb{H}^3\mid d(x,\gamma)<\epsilon \}$. Moreover, for every geodesic ray exiting orthogonally from $\gamma$, the intersection of the ray with $D_{\rho}(\widetilde{K^2}\times[0,\lambda])$ is non-empty and transverse to any section $D_{\rho}(\widetilde{K^2}\times\{\mu\})$, $\mu\in[0,\lambda]$, and analogously for an orientable end. \end{lemma} \begin{proof} We use a modified argument of Thurston (see his notes \cite[Chapters 4 and 5]{ThurstonNotes}) to prove the lemma for a non-orientable end (the same idea works for an orientable one). The original argument of Thurston shows that in an ideally triangulated manifold, the image of the universal cover of the end under the developing map is the whole tubular neighborhood minus the geodesic. Let $[\rho_0]$ be the parabolic representation corresponding to the complete structure; then $D_{\rho_0}(\widetilde{K^2}\times[0,\lambda])$ is the region between two horospheres centered at an ideal point $p_\infty \in \partial_{\infty}\mathbb{H}^3$. Let $K\subset \widetilde{K^2}\times[0,\lambda]$ denote a fundamental domain of $K^2\times[0,\lambda]$. The domain $K$ can be taken so that $D_{\rho_0}(\overline{K})$ is a rectangular prism between two horospheres. We want to deform $D_{\rho_0}(\overline{K})$ as we deform $\rho_0$ to $\rho$. We do that by deforming the horosphere centered at $p_\infty$ to surfaces equidistant to the geodesic $\gamma_\rho$ invariant by the holonomy of the peripheral subgroup $\rho(\pi_1(K^2))$. The deformation of the horosphere to equidistant surfaces is described in \cite[\S4.4]{ThurstonNotes} in the half-space model of $\mathbb H^3$, see also Benedetti--Petronio \cite[\S E.6.iv]{BenedettiPetronio}. Alternatively, we can view the deformation of the horosphere to the equidistant surfaces as follows. Consider $\mathbb Z^2<\pi_1(K^2)$, the orientation preserving subgroup of index $2$; then $\rho(\mathbb Z^2)$ is contained in a unique one-complex parameter subgroup $U_\rho\subset \mathrm{PSL}(2,\mathbb C)$ (ie~$U_\rho$ is the exponential image of a $\mathbb C$-line in the Lie algebra $\mathfrak{sl}(2,\mathbb C)$). This $U_\rho$ depends continuously on $\rho$, and given $x\in\mathbb H^3$ the orbit $U_\rho (x)=\{g(x)\mid g\in U_\rho \}$ is a surface containing $x$ such that: when $\rho=\rho_0$ then $U_\rho (x)$ is a horosphere centered at $p_\infty$, and when $\rho\neq \rho_0$, then $U_\rho (x)$ is a surface equidistant to the geodesic $\gamma_\rho$. Using this construction, the domain $D_{\rho_0}(\overline{K})$ deforms to $D_{\rho}(\overline{K})$ with the required properties, by following the equidistant surfaces for the factor $\widetilde{K^2}$ and the geodesics orthogonal to these surfaces for the factor $[0,\lambda]$. \end{proof} \begin{Definition} The geodesic of Lemma~\ref{lm:section_developing} is called the \emph{soul} of the end. \end{Definition} \begin{remark} The face of the proper product subset $K^2\times[0,\lambda]$ or $T^2\times [0,\lambda]$ of the cusp that is glued to the thick part of the manifold is the section of the cusp which is further away from the geodesic.
Hence, we will only consider thickenings \textquotedblleft towards" the soul. \end{remark} Let $x$ be a point in a cusp of the manifold and consider its image under the developing $y=D_{\rho}(\tilde{x})$ of any lift $\tilde{x}$. There is only one geodesic segment in $\mathbb{H}^3$ such that $\gamma(0)=y$ and goes towards the soul orthogonally. In cylindrical coordinates, if $y=(r,\theta, h)$, the image of the geodesic consists of $\{(t,\theta, h)\mid t\in [0,r]\}$. Let us denote by $\gamma_x$ the corresponding geodesic in $M^3$. \begin{theorem} There exists a maximal thickening $M^*$ of a half-open product $M=K^2\times[0,\lambda)$ or $T^2\times [0,\mu)$. It is characterized by the following property: for every point $x\in M$, the geodesic $\gamma_x$ can be extended in $M^*$ so that $D_{\rho}(\tilde{\gamma_x})$ is the geodesic whose cylindrical coordinates with respect to the soul are $\{(t,\theta, h)\mid t\in (0,r]\}$. \end{theorem} \begin{proof} Given a cusp section $S:=K^2$ or $T^2$, a product subset of the end, $K:=S\times[0,\lambda]$, a fixed fundamental domain $K_0$ of $K$ and a small neighborhood of $K_0$, $N(K_0)$, the set $T:=\{t\in \mathrm{Deck}(\tilde{K}/K)\mid tN(K_0)\cap N(K_0) \neq \emptyset \}$ is finite, where $\mathrm{Deck}(\tilde{K}/K)$ denotes the group of covering transformations of the universal cover. Hence, we can suppose that $D_{\rho\mid(T\overline{K_0})}$ is an embedding. Let $\mathcal{U}$ be an open cover of $K$ by simply connected charts. For each $U$, take a lift $U_0\in \tilde{\mathcal{U}}$ such that $U_0\cap K_0 \neq \emptyset$ and consider $D_{\rho}(U_0)$. Given such a lift $U_0$, the other possible lift that could have non-empty intersection with $K_0$ are $tU$, for $t\in T$. Furthermore, we can always assume that the chart $U$ coincides with the image of $U_0$ under the developing map, $D_{\rho}(U_0)$. Thus, we can identify $$K\cong (\bigcup_{U\in \mathcal{U}} D_{\rho}(U_0))/\sim,$$ where the equivalence relation is by the action of $\mathrm{hol}(t)$, for $t \in T$. Each $U\in \mathcal{U}$ can be thickened by identifying $U$ with $D_{\rho}(U_0)$ and considering, in cylindrical coordinates, the set of rays $R(U)=\{(t,\theta,h)\in \mathbb{H}^3\setminus \{soul\}\mid \exists (t_0, \theta, h)\in U, t<t_0\}$. Given two lifts of two thickened charts $R(U_1)$ and $R(U_2)$ with non-empty intersection with $K_0$, we glue them together in the points corresponding to $\mathrm{hol}(t)(R(U_1))\cap R(U_2)$, where $t\in T$. This defines a thickening of the cusp $K^*$. We have yet to show that it is isotopic to the original (half-open) product subset. Let us consider the section $S\times\{0\}$ of the cusp, the radial geodesics $\gamma_x$ for $x\in S\times\{0\}$ define a foliation of $K^*$ of finite length. Moreover, due to Lemma~$\ref{lm:section_developing}$, the foliation is transversal to $S\times\{0\}$. By re-parameterizing the foliation and considering its flow, we obtain a trivialization of the cusp, $K^*\cong S\times[0,\mu)$. Similarly, $K^*\setminus K$ is also a product. This let us construct an isotopy from $K^*$ to $K$. This thickening clearly satisfies the property that $\gamma_x\subset{K^*}$ can be extended so that $D_{\rho}(\tilde{\gamma_x})=\{(t,\theta, h)\mid t\in (0,r] \}$. By taking geodesics $\gamma_x$ to geodesics through the developing map, it is clear our thickening can be mapped into every other thickening satisfying this property. Furthermore, if we consider the thickenings to be isotopic, we obtain an embedding. 
Regarding the maximality, we will differentiate between an orientable end and a non-orientable one. The general idea is the same in both cases: for another isotopic thickening $(K)^{**}$ to properly contain ours, the developing map would have to map some open set $V$ into a ball $W$ around a point $y_0$ on the soul, which will lead to a contradiction. If $K$ is non-orientable, let us denote by $a,b$ the distinguished generators of $\pi_1(K^2)$, with $aba^{-1}=b^{-1}$. If $[\rho]$ is of type I, $y_0$ is fixed by $\rho(b)$. Let $y\in W\setminus\{\mathrm{soul}\}$ and $x\in V$ be its preimage. $W$ is invariant by $\rho(b)$ and, in addition, both $x$ and $b\cdot x$ belong to $V$. Now take the geodesic $\gamma\colon I \to \widetilde{(K)^{**}}$ from $x$ to $x_0$ which corresponds to the geodesic from $y$ to $y_0$. By equivariance and continuity, $x_0=\lim \gamma(t)=\lim b\gamma(t)=bx_0$. This contradicts $b$ being a covering transformation. If $[\rho]$ is of type II, the previous argument applies with $a^2$ in place of $b$. If $K$ is orientable, we will follow the same arguments leading to the completion of the cusp (for more details see, for instance, \cite{BenedettiPetronio}). The deformation $[\rho]$ is characterized in terms of its generalized Dehn filling coefficients $\pm(p,q)$. The cases $p=0$ or $q=0$ are solved as in the non-orientable cusp, so we are left with the two usual cases, $p/q\in \mathbb{Q}$ or $p/q \in \mathbb{I}$. For $p/q\in \mathbb{Q}$, there exists $k>0$ such that $k(p,q)\in \mathbb{Z}^2$ and $(kp)a+(kq)b$ is a trivial loop in the new thickening. If $p/q \in \mathbb{I}$, then the orbit of $y_0$ under the peripheral holonomy is dense in $\{\mathrm{soul}\}\cap V$, which is a contradiction. \end{proof} \begin{Definition} We call the previous thickening the \emph{radial thickening} of the cusp. \end{Definition} \begin{remark} If the manifold $M^3$ admits an ideal triangulation, the canonical structure coming from the triangulation is precisely the radial thickening of the cusp. \end{remark} \begin{figure} \caption{The radial thickening.} \end{figure} \begin{Theorem} \label{Theorem:completion} For a deformation of the holonomy of $M^3$, the corresponding deformation of the metric can be chosen so that on a non-orientable end: \begin{itemize} \item It is a cusp, a metrically complete end, if the peripheral holonomy is parabolic. \item The metric completion is a solid Klein bottle with singular soul if the peripheral holonomy is of type I. \item The metric completion is a disc orbi-bundle with singular soul if it is of type II. \end{itemize} Furthermore, the cone angle $\alpha$ and the length $L$ of the singular locus are determined by the holonomy of the peripheral boundary, so that these parameters start from $\alpha=L=0$ for the complete structure and grow continuously when deforming in either direction. \end{Theorem} \begin{proof}[Proof of Theorem~\ref{Theorem:completion}] The proof uses the orientation covering and equivariance. More precisely, the deformation is constructed in the complete case for the orientation covering and it can be made equivariant. The holonomy of a torus restricted from a Klein bottle is either parabolic or the holonomy of a solid torus with singular soul (and $\tau=0$). In particular the holonomy of a Klein bottle is parabolic iff its restriction to the orientable covering is parabolic. Furthermore, by using the description in cylindrical coordinates (and using Figure~\ref{Figure:Projection}) and as $\tau=0$, the construction on the solid torus is equivariant under the action of $\pi_1(K^2)/\pi_1(T^2) \cong \mathbb{Z}_2$.
\end{proof} \section{Example: The Gieseking manifold} \label{S:Gieseking} We use the Gieseking manifold to illustrate the results of this paper, in particular the difference between the deformation spaces obtained from ideal triangulations and from the variety of representations. The Gieseking manifold $M$ is a non-orientable hyperbolic 3-manifold with finite volume and one cusp, with horospherical section a Klein bottle. It has an ideal triangulation with a single tetrahedron. The orientation cover of the Gieseking manifold is the figure eight knot exterior, and the ideal triangulation with one simplex lifts to Thurston's ideal triangulation with two ideal simplices from his notes \cite{ThurstonNotes}. This manifold $M$ was constructed by Gieseking in his thesis in 1912; here we follow the description of Magnus in \cite{Magnus}, using the notation of Alperin--Dicks--Porti \cite{ADP}. Start with the regular ideal tetrahedron $\Delta$ in $\mathbb H^3$, with vertices $\{0, 1, \infty, \frac{1-i\sqrt{3}}2\}$, Figure~\ref{fig:gieseking_labelled}. The side identifications are the orientation-reversing isometries defined by the M\"obius transformations $$ U(z)= \frac{ 1}{1+\tfrac{1+i\sqrt{3}}{2}\overline z} \qquad\textrm{ and }\qquad V(z)= -\tfrac{1+i\sqrt{3}}{2}\overline z+1 . $$ The identifications of the faces are defined by their action on vertices: $$ U\colon(\tfrac{1-i\sqrt{3}}2, 0 ,\infty)\mapsto (\tfrac{1-i\sqrt{3}}2,1,0) \qquad\textrm{ and }\qquad V\colon(1, 0 ,\infty)\mapsto (\tfrac{1-i\sqrt{3}}2,1,\infty). $$ \begin{figure} \caption{Gieseking Manifold with labeled edges.} \label{fig:gieseking_labelled} \end{figure} By applying Poincar\'e's fundamental polyhedron theorem we obtain the presentation \begin{equation} \label{eqn:presentationG} \pi_1(M)\cong\langle U, V\mid VU=U^2V^2\rangle \end{equation} The relation $VU=U^2V^2$ corresponds to a cycle of length 6 around the edge. \subsection{The deformation space $\mathrm{Def}(M, \Delta) $} We compute the deformation space of the triangulation with a single tetrahedron, as in Section~\ref{S:CombinatorialDef}. For any ideal tetrahedron in $\mathbb{H}^3$, we set its ideal vertices at $0, \, 1, \, \infty$ and $-\omega$, where $\omega$ is in $\mathbb{C}_+$, the upper half-plane of $\mathbb{C}$. The role played by $-\omega$ is the one of $\frac{1-i\sqrt{3}}{2}$ in the complete structure. For any such $\omega$ it is possible to glue the faces of the tetrahedron in the same pattern as in the Gieseking manifold via two orientation-reversing hyperbolic isometries, which we will again call $U$ and $V$. For the gluing to follow the same pattern, these must map $U\colon (-\omega,\, 0, \infty ) \rightarrow (-\omega, \, 1, \, 0)$ and $V\colon (1, \,0 , \, \infty) \rightarrow (-\omega, \, 1, \infty)$. The orientation-reversing isometries $U$ and $V$ satisfying this are: $$ U(z)=\frac{1}{\frac{1+\omega}{|\omega|^2}\overline{z}+1} \qquad\textrm{ and }\qquad V(z)=-(1+\omega)\overline{z}+1.$$ Although it is always possible to glue the faces in the same pattern as in the Gieseking manifold, not for all of them does the gluing carry a hyperbolic structure. Let us label the edges as in Figure~\ref{fig:gieseking_labelled}. For the topological manifold to be geometric, we only have to check that the pairing is proper (see \cite{Ratcliffe}). In this case, the only condition we need to satisfy is that the isometry obtained by going around the only edge cycle is the identity.
This is given by: $$ a \overset{V}{\longrightarrow} c \overset{V}{\longrightarrow} b \overset{U}{\longrightarrow} d \overset{U}{\longrightarrow} e \overset{V^{-1}}{\longrightarrow} f \overset{U^{-1}}{\longrightarrow} a, $$ and, therefore, we will have a hyperbolic structure if and only if $U^{-1}V^{-1}U^2V^2=\textrm{Id}$. Doing this computation, we obtain the equation \begin{equation} \label{eqn:proper} |\omega(1+\omega)|=1. \end{equation} Let us show that this equation matches the one obtained from Definition~\ref{dfn:DefSpace}. If we denote by $z(a)$ the edge invariant of $a$ and analogously for the rest of the edges, we have that the equation describing the deformation space of the manifold in terms of this triangulation is $$\frac{z(a)z(b)z(e)}{\overline{z(c)z(d)z(f)}}=1.$$ Writing down all of the edge invariants in terms of $z(a)$ by means of the tetrahedron relations results in the equation \begin{gather} \label{equation_gieseking} \frac{z(a)^2\overline{z(a)}^2}{(1-z(a))(1-\overline{z(a)})}=\frac{|z(a)|^4}{|1-z(a)|^2}=1. \end{gather} If we substitute $z(a)=-1/\omega$, we obtain $$ \frac{1}{\omega \overline{\omega}(\omega+1)(\overline{\omega}+1)}=1, $$ which is equivalent to Equation~\eqref{eqn:proper}. \begin{Remark} The set $\{w\in\mathbb C\mid \vert w(1+w)\vert=1\}$ is homeomorphic to $S^1$, and the deformation space $\{w\in\mathbb C\mid \vert w(1+w)\vert=1 \textrm{ and } \mathrm{Im}(w)>0\}$ is homeomorphic to an open interval, see Figure~\ref{Figure:AlgebraicDefSpace}. \end{Remark} We justify the remark and Figure~\ref{Figure:AlgebraicDefSpace}. Firstly, to prove that the set of algebraic solutions is homeomorphic to a circle, we write the defining equation $ \vert w(1+w)\vert=1$ as $$ \Big\vert \Big(w+\frac{1}{2}\Big)^2-\frac{1}{4}\Big\vert= 1 $$ Thus $\big(w+\frac{1}{2}\big)^2$ lies on the circle of center $\frac{1}{4}$ and radius $1$. As this circle separates $0$ from $\infty$, the equation defines a connected covering of degree two of the circle. Secondly, the set of algebraic solutions is invariant by the involutions $w\mapsto \overline{w}$ and $w\mapsto -1-w$ (hence symmetric with respect to the real line and the line defined by real part equal to $-\frac{1}{2}$). Furthermore it intersects the real line at $w=\frac{-1\pm \sqrt{5}}{2}$ and the line with real part $-\frac{1}{2}$ at $\frac{-1\pm i\sqrt{3}}{2}$. \begin{figure} \caption{The set of solutions of the compatibility equations and $\mathrm{Def}(M,\Delta)$.} \label{Figure:AlgebraicDefSpace} \end{figure} Let us construct the link of the cusp. We denote the link of each cusp point as in Figure~\ref{fig:gieseking_manifold_link} and glue them to obtain the link as in Figure~\ref{fig:gieseking_link}, which is a Klein bottle. \begin{figure} \caption{Gieseking manifold with link.} \label{fig:gieseking_manifold_link} \caption{Link of the cusp point.} \label{fig:gieseking_link} \end{figure} Now we take two tetrahedra and construct the orientation covering of $M$ (the figure eight knot exterior). For the first tetrahedron, we denote $z_1:=z(a)$, and by $z_2, z_3$ the remaining edge invariants, so that they follow the cyclic order described in the tetrahedron relations. Afterwards, in the second tetrahedron, we denote by $w_i$ the edge invariant of the corresponding edge after applying an orientation reversing isometry to the tetrahedron, that is, $w_i=\frac{1}{\overline{z_i}}$. We consider the link of the orientation covering.
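Before analysing the link of the orientation covering, here is a small numerical sanity check (a sketch; the bisection routine and the sample values are ours) of the compatibility equation \eqref{eqn:proper}: for $\omega$ on the curve $|\omega(1+\omega)|=1$ the side pairings $U$, $V$ above satisfy the edge-cycle relation $VU=U^2V^2$, while for a generic $\omega$ off the curve they do not.
\begin{verbatim}
import cmath, random

def pairings(w):
    """Anti-Moebius side pairings of the tetrahedron with vertices 0, 1, oo, -w."""
    U = lambda z: 1 / ((1 + w) / abs(w)**2 * z.conjugate() + 1)
    V = lambda z: -(1 + w) * z.conjugate() + 1
    return U, V

def edge_cycle_defect(w, samples=5):
    """max |VU(z) - U^2V^2(z)| over a few random points z."""
    U, V = pairings(w)
    pts = [complex(random.uniform(-2, 2), random.uniform(0.1, 2)) for _ in range(samples)]
    return max(abs(V(U(z)) - U(U(V(V(z))))) for z in pts)

def on_curve(theta):
    """Find t > 0 with |t e^{i theta}(1 + t e^{i theta})| = 1 by bisection."""
    lo, hi = 0.0, 2.0
    for _ in range(80):
        mid = (lo + hi) / 2
        w = mid * cmath.exp(1j * theta)
        lo, hi = (mid, hi) if abs(w * (1 + w)) < 1 else (lo, mid)
    return (lo + hi) / 2 * cmath.exp(1j * theta)

for theta in (0.4, 1.0, 2 * cmath.pi / 3, 2.5):
    assert edge_cycle_defect(on_curve(theta)) < 1e-8     # relation holds on the curve
print("off the curve, defect =", edge_cycle_defect(0.5 + 0.5j))   # strictly positive
\end{verbatim}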
The derivatives of the holonomy of the two loops $l_1$, $l_2$ in the link of the orientation covering, depicted in Figure~\ref{fig:gieseking_longitude} (which are freely homotopic), are $\frac{w_1}{z_1}=\frac{1}{|w_1|^2}$ and $\frac{w_3}{z_3}=\frac{1}{|w_3|^2}$. For the manifold to be complete, $\mathrm{hol'}(l_i)=1$ for $i=1,2$, which happens if and only if $z_1=\frac{1}{2}+\frac{\sqrt{3}}{2}\textrm{i}$. This corresponds to the regular ideal tetrahedron, which, as expected, is the manifold originally given by Gieseking. Notice that the upper loop (the one going through the side $\epsilon$) can be taken as a \emph{distinguished} longitude. A suitable meridian is drawn in Figure~\ref{fig:gieseking_meridian}. \begin{figure} \caption{Two freely homotopic loops.} \label{fig:gieseking_longitude} \caption{Meridian in the link of the cover.} \label{fig:gieseking_meridian} \end{figure} Let us check that both the longitude and the meridian satisfy the conditions we stated for their holonomy in Remark~\ref{rmk:hol_lm}, that is, $\mathrm{hol'}(l)\in \mathbb{R}$ and $|\mathrm{hol'}(m)|=1$. We have already shown it for the longitude. Regarding the meridian, $$ \mathrm{hol'}(m)=\frac{z_2z_3w_2w_3}{w_2z_1z_2w_1}=\frac{z_1}{z_3}\frac{w_1}{w_3}=\frac{z_1}{\overline{z_1}}\frac{\overline{z_3}}{z_3}, $$ therefore $|\mathrm{hol}'(m)|=1$. This leads to the result that the generalized Dehn filling coefficients of a lifted structure have the form $(0,q)$, after an appropriate choice of longitude-meridian pair. The last result could also have been obtained from Thurston's triangulation. By rotating the tetrahedra, our triangulation can be related to his, and the parameters identified. We can then check that in his choice of longitude and meridian, the holonomy has the same features if the structure is a lift from the Gieseking manifold. \subsection{The Gieseking manifold as a punctured torus bundle} The Gieseking manifold $M$ is fibered over the circle with fibre a punctured torus $T^2\setminus\{*\}$. We use this structure to compute the variety of representations. The monodromy of the fibration is an automorphism $$\phi\colon T^2\setminus\{*\}\to T^2\setminus\{*\}.$$ The map $\phi$ is the restriction of a map of the compact torus $T^2 \cong\mathbb R^2/\mathbb Z^2 $ that lifts to the linear map of $\mathbb R^2$ with matrix $$\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} .$$ This matrix also describes the action on the first homology group $H_1( T^2\setminus\{*\} ,\mathbb Z )\cong\mathbb Z^2$. The map $ \phi$ is orientation reversing (the matrix has determinant $-1$) and $\phi^2$ is the monodromy of the orientation covering of $M$, the figure eight knot exterior. The fibration induces a presentation of the fundamental group of $M$: $$ \pi_1(M)\cong \langle r, s, t\mid t r t^{-1}=\phi(r),\ tst^{-1}=\phi(s)\rangle $$ where $ \langle r, s\mid \rangle =\pi_1(T^2\setminus\{*\})\cong F_2$, and $$ \begin{array}{rcl} \phi_*\colon F_2 & \to & F_2\\ r & \mapsto & s \\ s & \mapsto & rs \end{array} $$ is the algebraic monodromy, the map induced by $\phi$ on the fundamental group. The relationship with the presentation \eqref{eqn:presentationG} of $\pi_1(M)$ from the triangulation is given by $$ r=UV, \qquad s=VU,\qquad t= U^{-1}. $$ Furthermore, a peripheral group is given by $\langle rsr^{-1} s^{-1}, t\rangle$, which is the group of the Klein bottle. We use this fibered structure to compute the variety of conjugacy classes of representations.
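As a small check (a sketch) of the fibration data just described, one can verify that the monodromy matrix has determinant $-1$, that its square is the monodromy of the figure eight knot exterior, and that the abelianization of $\phi_*$ (given by $r\mapsto s$, $s\mapsto rs$) is exactly this matrix:
\begin{verbatim}
import numpy as np

phi = np.array([[0, 1],
                [1, 1]])                      # action on H_1(T^2 \ {*}) = Z^2
assert round(np.linalg.det(phi)) == -1        # phi is orientation reversing
print(phi @ phi)                              # [[1,1],[1,2]]: monodromy of the orientation cover

# abelianization of phi_*: r -> s, s -> rs, written in the basis ([r],[s])
e_r, e_s = np.array([1, 0]), np.array([0, 1])
assert np.array_equal(phi @ e_r, e_s)         # [phi_*(r)] = [s]
assert np.array_equal(phi @ e_s, e_r + e_s)   # [phi_*(s)] = [r] + [s]
\end{verbatim}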
Set $$G=\operatorname{Isom}(\mathbb H^3)\cong \mathrm{PO}(3,1) \cong \mathrm{PSL}(2,\mathbb C)\rtimes \mathbb Z_2 ,$$ and let $$\hom^{\mathrm{irr}}(\pi_1(M),G )$$ denote the space of \emph{irreducible} representations (ie~those with no invariant line in $\mathbb{C}^2$). As we are interested in deformations, we restrict to representations $\rho$ that preserve the orientation type: $\rho(\gamma)$ is an orientation preserving isometry iff $\gamma \in \pi_1(M)$ is represented by a loop that preserves the orientation of $M$, $\forall\gamma\in \pi_1(M)$. We denote the subspace of representations that preserve the orientation type by $$\hom^{\mathrm{irr}}_+(\pi_1(M),G ).$$ Let $$\hom^{\mathrm{irr}}_+(\pi_1(M),G )/G$$ be the space of their conjugacy classes. \begin{Proposition} \label{Prop:traces} We have a homeomorphism, given by the trace of $\rho(s)$: $$ \begin{array}{rcl} \hom^{\mathrm{irr}}_+(\pi_1(M),G )/G & \to & \big(\{x\in \mathbb{C} \mid \vert x-1\vert =1\textrm{ and } x\neq 2\} \big)/\!\!\sim \\ { [\rho]} & \mapsto & \operatorname{trace}( \rho(s)) \end{array} $$ where $\sim$ is the relation given by complex conjugation. In particular, $\hom^{\mathrm{irr}}_+(\pi_1(M),G )/G$ is homeomorphic to a half-open interval. \end{Proposition} \begin{proof} Let $\rho\colon \pi_1(M)\to G$ be an irreducible representation. The fibre $T^2\setminus\{*\}$ is orientable, so the restriction of $\rho$ to the free group $\langle r, s\mid\rangle\cong F_2$ is contained in $\mathrm{PSL}(2,\mathbb C)$. Furthermore, as $\langle r, s\mid\rangle$ is the commutator subgroup, we may assume that the image $\rho( \langle r, s\mid\rangle)\subset \mathrm{SL}(2,\mathbb C)$, cf Heusener--Porti \cite{HeusenerPorti04}. We consider the variety of characters $X(F_2,\mathrm{SL}(2,\mathbb C))$ and the action of the algebraic monodromy $\phi_*$ on the variety of characters: $$ \begin{array}{rcl} \phi^*\colon X( F_2, \mathrm{SL}(2,\mathbb C)) & \to & X( F_2, \mathrm{SL}(2,\mathbb C)) \\ \chi& \mapsto & \chi\circ \phi_* \end{array} $$ \begin{Lemma} \label{lemma:inv} The restriction to $F_2$ of $ \hom^{\mathrm{irr}}_+(\pi_1(M),G )/G$ is contained in $$ \{\chi\in X(F_2, \mathrm{SL}(2,\mathbb C) ) \mid \phi^*(\chi)=\overline\chi \} $$ \end{Lemma} \begin{proof}[Proof of Lemma~\ref{lemma:inv}] Let $\rho\in \hom^{\mathrm{irr}}(\pi_1(M),G )$. If we write $\rho(t)= A\circ c$ for $A\in\mathrm{PSL}(2,\mathbb{C})$ and $c$ complex conjugation, from the relation $$ t \gamma t^{-1} =\phi_*(\gamma)\qquad \forall \gamma\in F_2, $$ we get $$ A \overline{ \rho(\gamma) } A^{-1} =\rho(\phi_*(\gamma)) \qquad \forall \gamma\in F_2. $$ Hence, if $\rho_0$ denotes the restriction of $\rho$ to $F_2$, then $\overline{\rho_0}$ and $\rho_0\circ\phi_*$ are conjugate; therefore they have the same character and the lemma follows. \end{proof} Lemma~\ref{lemma:inv} motivates the following computation: \begin{Lemma} \label{lemma:fixed} We have a homeomorphism: $$ \{\chi_\rho\in X(F_2, \mathrm{SL}(2,\mathbb C) ) \mid \phi^*(\chi_\rho)=\overline{\chi_\rho} \}\cong\{x\in \mathbb{C} \mid \vert x-1\vert =1\} $$ by setting $x= \operatorname{trace}( \rho(s))=\chi_\rho(s)$. \end{Lemma} \begin{proof}[Proof of Lemma~\ref{lemma:fixed}] First of all, we describe coordinates for $ X(F_2, \mathrm{SL}(2,\mathbb C))$. Let $\tau_r$, $\tau_s$ and $\tau_{rs}$ denote the trace functions, ie~$ \tau_r(\chi_\rho)=\chi_\rho(r)=\mathrm{trace}(\rho(r)) $, and similarly for $s$ and $rs$.
Fricke-Klein's theorem yields an isomorphism $$ (\tau_{r},\tau_{s}, \tau_{rs})\colon X(F_2, \mathrm{SL}(2,\mathbb C) )\cong \mathbb C^3 $$ (see Goldman \cite{Goldman09} for a proof). From the relations $$ \phi_*(r)=s, \qquad \phi_*(s)=rs,\qquad \phi_*(rs)=srs, $$ the equality $\phi^*(\chi_\rho)=\overline{\chi_\rho}$ is equivalent to: $$ \overline{\tau_r}=\tau_s,\qquad \overline{\tau_s}=\tau_{rs}, \qquad \overline{\tau_{rs}}=\tau_{srs}=\tau_s\tau_{rs}-\tau_r. $$ In the expression for $ \tau_{srs} $ we have used the relation $\mathrm{tr}(AB)=\mathrm{tr}(A)\mathrm{tr}(B)-\mathrm{tr}(AB^{-1})$ for $A,B\in \mathrm{SL}(2,\mathbb C) $. Taking $x=\tau_r=\tau_{rs}$ and $\tau_s=\overline x$, the defining equation is $x+\overline{x}=x \overline{x}$. Namely, the circle $ \vert x-1\vert =1 $. \end{proof} To prove Proposition~\ref{Prop:traces}, we need to know which conjugacy classes of representations of $F_2$ are irreducible. By Culler--Shalen \cite{CullerShalen}, a character $\chi_\rho$ in $ X(F_2, \mathrm{SL}(2,\mathbb C))$ is reducible iff $\chi_\rho([r,s])= \operatorname{tr}( \rho([r,s])) =2$, and a straightforward computation shows that this happens on the circle $\vert x-1\vert = 1$ precisely when $x=2$. Now, let $\rho$ be a representation of $F_2$ in $\mathrm{SL}(2,\mathbb C)$ whose character $\chi_\rho$ satisfies $\phi^*(\chi_\rho)=\overline{\chi_\rho} $. Assume $\rho$ is irreducible; then $\rho\circ\phi_* $ and $\overline{\rho} $ are conjugate by a unique matrix $A\in \mathrm{PSL}(2,\mathbb C)$: $$ A c\rho(\gamma)c A^{-1}=A \overline{\rho(\gamma)} A^{-1} = \rho( \phi_*(\gamma) ),\qquad \forall \gamma\in F_2, $$ where $c$ means complex conjugation. Thus, defining $\rho(t)= A\circ c$ gives a unique way to extend $\rho$ to $\pi_1(M)$. When $\chi_\rho$ is reducible, then $x=2$ and the character $\chi_\rho$ is trivial. Then either $\rho$ is trivial or parabolic. In any case, it is easy to check that all possible extensions to $\pi_1(M)$ yield reducible representations. \end{proof} \subsection{Comparing both ways of computing deformation spaces} We relate both ways of computing deformation spaces, via the ideal simplex and via the fibration: \begin{Lemma} Given a triangulated structure with parameter $w$ as in \eqref{eqn:proper}, the parameter $x$ of its holonomy as in Proposition~\ref{Prop:traces} is $$ x=1+w+|w|^2 $$ (or $x=1+\overline w+|w|^2$, because $x$ is only defined up to complex conjugation). \end{Lemma} \begin{proof} As $r=UV$, a straightforward computation yields $$ \rho(r)=\begin{pmatrix} 0 & |w|^2 \\ -\frac{1}{\vert w\vert ^2} & 1+w+|w|^2 \end{pmatrix} \in \mathrm{SL}(2,\mathbb C). $$ Then the lemma follows from $x=\mathrm{trace}(\rho(r))$. \end{proof} The fact that not all deformations are obtained from triangulations (Corollary~\ref{Coro:notrealized}) is illustrated in the following remark, whose proof is an elementary computation. \begin{Remark} The image of the map $$ \begin{array}{rcl} \{w\in\mathbb C\mid \vert w(1+w)\vert=1\}& \to & \{x\in \mathbb{C} \mid \vert x-1\vert =1\} \\ w & \mapsto & x=1+w+|w|^2 \end{array} $$ is $ \{\vert x-1\vert =1\}\cap \{\mathrm{Re}(x)\geq \frac{3}{2}\} $, ie~the arc of the circle bounded by the image of the complete structure (and its complex conjugate). See Figure~\ref{Figure:xandw}. \end{Remark} \begin{figure} \caption{ The image of $ x=1+w+|w|^2$ in the circle $\vert x-1\vert =1$.
} \label{Figure:xandw} \end{figure} To be precise on the type of structures at the peripheral Klein bottle, we compute the trace of the peripheral element $[r,s]$ for each method and apply Lemma~\ref{Lemma:typeofrep}: \begin{itemize} \item We compute it from the variety of representations, ie~from $x$. Using the notation of the proof of Proposition~\ref{Prop:traces}: $$ \tau_{[r,s]}= x_1^2+x_2^2+x_3^2 - x_1x_2x_3-2= (x+\overline{x})^2-3 (x+\overline{x})-2= (x+\overline{x})((x+\overline{x})-3)- 2. $$ The complete hyperbolic structure corresponds to $ (x+\overline{x})=3$, hence, by deforming $x$ we may have either $\tau_{[r,s]}> -2$ or $\tau_{[r,s]}< -2$. \item Next we compute it from the ideal triangulation, ie~from $w$. As $x=1+w+|w|^2$, we get $$ \tau_{[r,s]}= 2 \operatorname{Re} (w+ w^2)\geq -2 $$ because $ \vert w+ w^2 \vert =1$. \end{itemize} \begin{Remark} As a final remark, we notice that the path of deformations of the Gieseking manifold lifts to a path of deformations of the figure-eight knot exterior that is the same as the one considered by Hilden, Lozano and Montesinos in \cite{HLMRemarkable} by deforming polyhedra. The transition from type I to type II of the Gieseking manifold corresponds to the \emph{spontaneous surgery} in \cite{HLMRemarkable}. \end{Remark} \noindent \textsc{Departament de Matem\`atiques, Universitat Aut\`onoma de Barcelona, 08193 Cerdanyola del Vall\`es, and Centre de Recerca Matem\`atica} \noindent \textsf{[email protected]}, \textsf{[email protected]} \end{document}
\begin{document} \title{\textbf{ Feshbach projection formalism for open quantum systems }} \author{Dariusz Chru\'sci\'nski and Andrzej Kossakowski } \affiliation{ Institute of Physics \\ Nicolaus Copernicus University \\ Grudzi\k{a}dzka 5, 87--100 Toru\'n, Poland\\ } \begin{abstract} We provide a new approach to open quantum systems which is based on the Feshbach projection method. Instead of looking for a master equation for the dynamical map acting in the space of density operators we provide the corresponding equation for the evolution in the Hilbert space of the amplitude operators. Its solution enables one to construct a legitimate quantum evolution (completely positive and trace preserving). Our approach, contrary to the standard Nakajima-Zwanzig method, allows for a series of consistent approximations resulting in a legitimate quantum evolution. The new scheme is illustrated by the well known spin-boson model beyond the rotating wave approximation. It is shown that the presence of counter-rotating terms dramatically changes the asymptotic evolution of the system. \end{abstract} \pacs{03.65.Yz, 03.65.Ta, 42.50.Lc} \maketitle {\em Introduction. --} The description of a quantum system interacting with its environment is of fundamental importance for quantum physics and defines the central objective of the theory of open quantum systems \cite{Breuer,Weiss}. During the last few years there has been increasing interest in open quantum systems, in connection with the control of quantum systems and with applications in modern quantum technologies such as quantum communication, cryptography and computation. In practice, this theory is usually applied in the so-called Markovian or memoryless approximation. However, when strong coupling or long environmental relaxation times make memory effects important for a realistic description of the dynamics, one needs a more refined approach, and hence the general structure of non-Markovian quantum evolution is a crucial issue \cite{Wolf,RHP,BLP,PRL}. For recent papers devoted to both theoretical and experimental aspects of quantum evolution with memory see e.g. the collection of papers in \cite{REV} and references therein. The standard approach to the dynamics of open systems uses the Nakajima-Zwanzig projection operator technique \cite{NZ} which shows that under fairly general conditions, the master equation for the reduced density matrix takes the form of the following non-local equation \begin{equation}\label{NZ} \frac{d}{dt}\rho_t = \int_{0}^t \mathcal{K}_{t-u}\rho_u\, du\ , \end{equation} in which quantum memory effects are taken into account through the introduction of the memory kernel $\mathcal{K}_t$: this simply means that the rate of change of the state $\rho_t$ at time $t$ depends on its history. An alternative and technically much simpler scheme is provided by the time-convolutionless projection operator technique \cite{TCL,BKP,Breuer} in which one obtains a first-order differential equation for the reduced density matrix \begin{equation}\label{local} \frac{d}{dt}\rho_t = L_t \rho_t\ . \end{equation} The advantage of the local approach consists in the fact that it yields an equation of motion for the relevant degrees of freedom which is local in time and which is therefore often much easier to deal with than the Nakajima-Zwanzig non-local master equation (\ref{NZ}).
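To make the contrast between (\ref{NZ}) and (\ref{local}) concrete, the following toy numerical sketch (our own minimal example, not a model taken from this Letter) integrates both types of equation for pure dephasing of a qubit, with an assumed exponential memory kernel in the non-local case and a constant rate in the time-local case:
\begin{verbatim}
# Toy comparison: memory-kernel evolution (1) vs time-local evolution (2) for qubit dephasing.
import numpy as np

sz = np.diag([1.0, -1.0])
rho0 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)    # initial state |+><+|
g, lam, dt, steps = 1.0, 2.0, 0.01, 500

def K(r):                                   # dephasing generator K[rho] = sz rho sz - rho
    return sz @ r @ sz - r

# non-local equation (1): d rho_t/dt = int_0^t k(t-u) K[rho_u] du, k(s) = g*lam*exp(-lam*s)
hist = [rho0.copy()]
for n in range(steps):
    t = n * dt
    integral = sum(g * lam * np.exp(-lam * (t - m * dt)) * K(hist[m])
                   for m in range(n + 1)) * dt
    hist.append(hist[-1] + dt * integral)    # explicit Euler step

# time-local equation (2): d rho_t/dt = gamma K[rho_t], here with a constant rate gamma = g
rho_loc = rho0.copy()
for n in range(steps):
    rho_loc = rho_loc + dt * g * K(rho_loc)

print("coherence with memory kernel:", abs(hist[-1][0, 1]))
print("coherence, time-local       :", abs(rho_loc[0, 1]))
\end{verbatim}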
It should be stressed that the structure of the memory kernel $\mathcal{K}_t$ is highly nontrivial and, therefore, the non-local master equation (\ref{NZ}) is rather intractable. Note that this equation is exact, i.e., in deriving (\ref{NZ}) one does not use any specific approximation. Approximating (\ref{NZ}) is a delicate issue. One often applies the second order Born approximation which considerably simplifies the structure of $\mathcal{K}_t$. However, this approximation in general violates basic properties of the master equation, such as complete positivity or even positivity of $\rho_t$ \cite{Fabio-rev}. A further simplification of (\ref{NZ}) consists in various Markov approximations which allow one to avoid memory effects. These approximations may also break the physics of the problem. For example, the well known local Redfield equation \cite{Redfield} again violates complete positivity \cite{Breuer,Fabio-rev}. The problem of a consistent Markov approximation was studied in \cite{Dumcke}. One often tries to use phenomenological memory kernels. However, as was already observed in \cite{Stenholm}, there is no simple recipe for constructing $\mathcal{K}_t$ in a way that preserves the basic properties of quantum evolution \cite{Lidar,Sabrina}. The local approach based on (\ref{local}) is much more popular and provides a straightforward generalization of the celebrated Markovian semigroup \cite{GKS,Lindblad}. It should also be stressed that the Markovian semigroup, being a special case of (\ref{local}), is derived from (\ref{NZ}) by applying quite sophisticated Markovian approximations, such as the weak coupling or singular coupling limits \cite{Dumcke,Alicki}. In this Letter we provide a new approach to the reduced dynamics of open quantum systems. Instead of applying the Nakajima-Zwanzig projection we apply the Feshbach projection formalism \cite{Feshbach} to the Schr\"odinger equation of the total system. This formalism was recently applied in the context of open quantum systems in \cite{Gaspard,Wu,Wu1}. In this Letter we use the Feshbach projection technique to derive a closed formula for the reduced dynamics (see formula (\ref{Lambda-S})). We stress that although the Feshbach projection technique is well known, the resulting formula for the dynamical map $\Lambda_t$ is completely new. We illustrate the power of this method by analyzing the spin-boson model beyond the rotating wave approximation (RWA). The big advantage of this approach is the possibility of performing consistent approximations, which is notoriously problematic in the standard Nakajima-Zwanzig approach. However, the essential limitation of this method is that the initial state of the environment has to be pure (for example in the standard spin-boson model one starts with the vacuum state of the boson field \cite{Breuer}). To get rid of this constraint we propose a {\em generalized Feshbach projection method} which enables one to start with an arbitrary mixed state of the environment. This generalized technique allows one to analyze the spin-boson model beyond RWA with an arbitrary mixed state of the field. As a byproduct we derive a new description of quantum systems based not on the density matrix $\rho$ but on the amplitude operator $\kappa$ satisfying $\rho =\kappa \kappa^\dagger$. Clearly, $\kappa$ is not uniquely defined (it is gauge dependent) but the whole theory is perfectly gauge invariant. {\em Feshbach projection technique.
--} Consider a quantum system coupled to its environment, living in $\mathcal{H}_S{\,\otimes\,} \mathcal{H}_E$, and let $H$ denote the total Hamiltonian of the composed system \begin{equation}\label{} H = H_0 + V = H_S {\,\otimes\,} \mathbb{I}_E + \mathbb{I}_S {\,\otimes\,} H_E + V \ . \end{equation} Passing to the interaction picture $V(t) = e^{iH_0 t} V e^{-i H_0 t}$ one considers \begin{equation}\label{H-Psi} i \partial_t \Psi_t = V(t) \Psi_t\ . \end{equation} Now, let $\psi_E$ be a fixed vector state of the environment and let us introduce an orthogonal projector $P_0 : \mathcal{H}_S{\,\otimes\,} \mathcal{H}_E \rightarrow \mathcal{H}_S{\,\otimes\,} \mathcal{H}_E$ defined by $$P_0 \psi {\,\otimes\,} \phi = \psi {\,\otimes\,} \psi_E \langle \psi_E|\phi\rangle\ , \ \ \ $$ and by linearity one defines $P_0 \Psi$ for an arbitrary vector $\Psi$. Moreover, let $P_1 = \mathbb{I}_S {\,\otimes\,} \mathbb{I}_E - P_0$ denote the complementary projector. The standard projection technique gives \begin{eqnarray} \label{Rel} \partial_t P_0 \Psi_t &=& -i V_{00}(t) P_0 \Psi_t - i V_{01}(t) P_1 \Psi_t\ , \\ \label{Irr} \partial_t P_1 \Psi_t &=& -i V_{10}(t) P_0 \Psi_t - i V_{11}(t) P_1 \Psi_t\ , \end{eqnarray} where we introduced the convenient notation $V_{ij}(t) = P_i V(t) P_j$. Assuming a separable initial state $\Psi_0 = \psi {\,\otimes\,} \psi_E$ and solving (\ref{Irr}) for the irrelevant part $P_1\Psi_t$ \begin{equation} \label{P1} P_1\Psi_t = - i \int_0^t ds\, W_{t,s} V_{10}(s) P_0 \Psi_s\ , \end{equation} one ends up with the following non-local equation for the relevant (system) part $P_0\Psi_t$: \begin{equation}\label{R-P} \partial_t P_0\Psi_t = -i V_{00}(t) P_0\Psi_t - \int_0^t {K}_{t,s} P_0 \Psi_s\, ds\ , \end{equation} with ${K}_{t,s} = V_{01}(t) W_{t,s} V_{10}(s)$, and \begin{equation}\label{W-ts} W_{t,s} = \mathcal{T}\, \exp\left( -i \int_s^t V_{11}(u) du \right)\ , \end{equation} where $\mathcal{T}$ denotes the chronological product. Let $Z_t : \mathcal{H}_S \rightarrow \mathcal{H}_S$ be defined by \begin{equation}\label{} (Z_t \psi) {\,\otimes\,} \psi_E = P_0 \Psi_t = P_0 U_t (\psi {\,\otimes\,} \psi_E) \ , \end{equation} where $U_t$ provides a solution to the original Schr\"odinger equation (\ref{H-Psi}), that is, $i\partial_t U_t = V(t)U_t$. Equation (\ref{R-P}) may be rewritten as the following equation for $Z_t$ \begin{equation}\label{R} \partial_t Z_t = -i V_{\rm eff}(t) Z_t - \int_0^t M_{t,s} Z_s\, ds\ , \end{equation} where the effective time-dependent system Hamiltonian is defined by $V_{\rm eff}(t) = {\rm tr}_E(V(t) \, \mathbb{I}_S{\,\otimes\,} |\psi_E\rangle\langle\psi_E|)$ and $M_{t,s} = {\rm tr}_E({K}_{t,s} \, \mathbb{I}_S {\,\otimes\,} |\psi_E\rangle\langle\psi_E|)$. Solving (\ref{R}) one finds the reduced evolution of the initial state vector $\psi \in \mathcal{H}_S$: $\psi \rightarrow \psi_t = Z_t \psi$. Let us observe that $\langle Z_t\psi|Z_t\psi\rangle \leq \langle\psi|\psi\rangle$ which shows that $Z_t \psi$ is no longer a legitimate vector state for $t > 0$. This is clear, since $Z_t$ describes the decay of $\psi$ and hence $||Z_t \psi||$ is not conserved -- the norm leaks out to the irrelevant part $P_1U_t(\psi{\,\otimes\,} \psi_E)$. On the other hand there is a standard formula for the dynamical map \begin{equation}\label{Lambda} \Lambda_t(|\psi\rangle\langle\psi|) = {\rm tr}_E [ U_t (|\psi\rangle\langle\psi| {\,\otimes\,} |\psi_E\rangle\langle\psi_E|) U_t^\dagger]\ .
\end{equation} A simple calculation shows that inserting the identity $\mathbb{I}_S {\,\otimes\,} \mathbb{I}_E=P_0 + P_1$ under the partial trace $[(P_0+P_1) U_t (|\psi\rangle\langle\psi| {\,\otimes\,} |\psi_E\rangle\langle\psi_E|) U_t^\dagger(P_0+P_1)]$ and using the definition of $Z_t$ together with formula (\ref{P1}), one obtains \begin{equation}\label{Lambda-S} \Lambda_t(|\psi\rangle\langle\psi|) = Z_t \rho Z_t^\dagger + {\rm Tr}_E ( Y_t [\rho {\,\otimes\,} |\psi_E\rangle\langle\psi_E|] Y_t^\dagger)\ , \end{equation} where the operator $Y_t$ is defined by \begin{equation}\label{} Y_t = \int_0^t ds \ W_{t,s} V_{10}(s) (Z_s {\,\otimes\,} \mathbb{I}_E) \ . \end{equation} By linearity one defines the action of $\Lambda_t$ on an arbitrary density operator: if $\rho = \sum_k p_k |\psi_k\rangle\langle\psi_k|$, then $\Lambda_t(\rho) = \sum_k p_k \Lambda_t(|\psi_k\rangle\langle\psi_k|)$. It should be stressed that although the Feshbach projection technique is well known \cite{Gaspard,Wu,Wu1}, the above formula for the reduced dynamics is completely new. Note that the presented method requires the initial state of the environment to be a pure vector state $\psi_E$. Hence the standard Feshbach projection method is much more restrictive than the corresponding Nakajima-Zwanzig method. However, the advantage of the Feshbach technique consists in the fact that (contrary to the Nakajima-Zwanzig method) it allows for consistent approximations. By consistent we mean an approximation which results in a completely positive and trace preserving evolution of the density matrix. {\em Born-like approximation. --} Note that the original memory kernel $M_{t,s}$ contains an infinite number of multi-time correlation functions, which makes the full problem rather intractable. The simplest approximation consists in neglecting $V_{11}(t)$. Roughly speaking, $V_{11}(t)$ is responsible for transitions within the irrelevant part of the Hilbert space $P_1(\mathcal{H}_S {\,\otimes\,} \mathcal{H}_E)$. It means that one approximates the evolution operator $W_{t,s}$ by $\mathbb{I}_S {\,\otimes\,} \mathbb{I}_E$. This leads to the second order approximation for the memory kernel ${M}_{t,s} \simeq {\rm Tr}_E [\, \mathbb{I}_S{\,\otimes\,} |\psi_E\rangle\langle\psi_E| V_{01}(t)V_{10}(s)\, ]$ and hence it is natural to call it a Born-like approximation. The big advantage of our approach consists in the fact that the above approximation leads to a legitimate completely positive and trace preserving quantum evolution. {\em Example: spin-boson model. --} To illustrate our approach let us consider the well known spin-boson model \cite{Breuer} beyond RWA, defined by \begin{equation}\label{} H_S = \omega_0 \sigma^+\sigma^- \ , \ \ \ H_E = \int dk\, \omega(k) a^\dagger(k) a(k)\ , \end{equation} and the interaction term \begin{equation}\label{} V = \sigma^+ {\,\otimes\,} X + \sigma^- {\,\otimes\,} X^\dagger\ , \end{equation} where $X = a(f) + a^\dagger(h) $, and $a^\dagger(f) = \int dk\, f(k) a^\dagger(k)$. As usual, $\sigma^\pm$ are the standard raising and lowering qubit operators. Note that the form-factor `$h$' introduces counter-rotating terms. One easily computes \begin{equation}\label{} V(t) = \sigma^+ {\,\otimes\,} X(t) + \sigma^- {\,\otimes\,} X^\dagger(t)\ , \end{equation} where $X(t) = e^{-i \omega_0 t} [a(f_t) + a^\dagger(h_t)]$, and the time-dependent form-factors read $f_t(k) = e^{-i \omega(k)t} f(k)$, with a similar formula for $h_t$.
Recall that in the standard spin-boson model $h=0$ (no counter-rotating terms) and $\psi_E = |{\rm vac}\rangle$ is the vacuum state of the boson field \cite{Breuer}. Let $\mathcal{H}_0$ be the 2-dimensional subspace of $\mathcal{H}_S {\,\otimes\,} \mathcal{H}_E$ spanned by $|i\rangle {\,\otimes\,} |{\rm vac}\rangle$ for $i =1,2$, and $\mathcal{H}_1$ the subspace spanned by the vectors $|1\rangle {\,\otimes\,} a^\dagger(k)|{\rm vac}\rangle$. One takes the initial state $\Psi_0 = |\psi\rangle {\,\otimes\,} |{\rm vac}\rangle \in \mathcal{H}_0$. The structure of the interaction Hamiltonian $V(t)$ within RWA guarantees that $\Psi_t \in \mathcal{H}_0 \oplus \mathcal{H}_1$. One easily finds $ P_1 V(t) P_1 \Big|_{\mathcal{H}_0 \oplus \mathcal{H}_1} = 0$, which shows that the Born-like approximation within the standard spin-boson model is exact. Actually, due to this fact the standard model is exactly solvable. Consider now the spin-boson model beyond RWA and let $\psi_E$ be a fixed pure state of the environment. If $B_1,\ldots,B_n$ are field operators, then denote by $\langle B_1\ldots B_n\rangle = \langle\psi_E|B_1 \ldots B_n|\psi_E\rangle$ the corresponding correlation function. To simplify our presentation let us assume that $\psi_E$ satisfies $ \langle a(f) \rangle = \langle a^\dagger(f)\rangle = 0$. The above condition is satisfied for the vacuum state. It is clear that this condition implies $\langle X(t)\rangle=0$, and hence $V_{\rm eff}(t)=0$. In the Born-like approximation the formula for ${M}_{t,s}$ reduces to \begin{eqnarray}\label{} {M}_{t,s} = m_1(t,s)\, |1\rangle\langle1| + m_2(t,s)\, |2\rangle\langle2|\ , \end{eqnarray} with $m_1(t,s) = \langle X(t) X^\dagger(s) \rangle$ and $m_2(t,s) = \langle X^\dagger(t) X(s) \rangle$, which proves that within this approximation the dynamics of ${Z}_t$ is fully controlled by 2-point correlation functions. It is, therefore, clear that $Z_t$ has the following form \begin{equation}\label{} {Z}_t = {z}_1(t) |1\rangle\langle1| + {z}_2(t) |2\rangle\langle2|\ , \end{equation} where the complex functions ${z}_k(t)$ satisfy \begin{eqnarray} \partial_t {z}_k(t) = - \int_0^t m_k(t,s)\, \, {z}_k(s)\, ds\ , \end{eqnarray} with ${z}_k(0)=1$. Interestingly, we have two decoupled equations for $z_k$. Having solved for ${Z}_t$ one computes the second part of the dynamical map (\ref{Lambda-S}), namely ${\rm Tr}_E ( {Y}_t [\,\rho {\,\otimes\,} |\psi_E\rangle\langle\psi_E|] {Y}_t^\dagger)\,$, where in the Born-like approximation ${Y}_t = \int_0^t ds {V}_{10}(s) {Z}_s {\,\otimes\,} \mathbb{I}_E$. Observing that ${V}_{10}(s) = {P}_1 {V}(s) {P}_0 = {V}(s){P}_0$ due to ${P}_0V(t){P}_0=0$, one finds the following Kraus representation for the dynamical map \begin{eqnarray*} \Lambda_t(\rho) &=& {Z}_t \rho {Z}_t^\dagger + d_1(t) \sigma^+\rho \sigma^- + d_2(t) \sigma^-\rho \sigma^+ \\ &+& \alpha(t) \sigma^+\rho \sigma^+ + \alpha^*(t) \sigma^-\rho \sigma^-\ , \end{eqnarray*} where \begin{eqnarray*} d_1(t) &=& \int_0^t ds \int_0^t du \, \langle X^\dagger(u) X(s) \rangle \, z_1(s) z_1^*(u) \ , \\ d_2(t) &=& \int_0^t ds \int_0^t du \, \langle X(u) X^\dagger(s) \rangle\, z_2(s) z_2^*(u) \ , \\ \alpha(t) &=& \int_0^t ds \int_0^t du \, \langle X(u) X(s) \rangle\, z_1(s) z_2^*(u) \ . \end{eqnarray*} Interestingly, the preservation of trace implies that $ d_k(t) + |z_k(t)|^2 = 1$, for $k=1,2$. Actually, this simple condition is hardly visible from the definition of $d_k(t)$ and the corresponding non-local equations for $z_k(t)$.
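The non-local equations for $z_k(t)$ are easy to integrate numerically once the two-point functions are specified. The following sketch uses placeholder exponentially decaying kernels standing in for $\langle X(t)X^\dagger(s)\rangle$ and $\langle X^\dagger(t)X(s)\rangle$ (they are illustrative assumptions, not derived from a concrete spectral density), and shows the decay of the amplitudes $z_1$, $z_2$:
\begin{verbatim}
# Integrate  dz/dt = - int_0^t m(t-s) z(s) ds,  z(0) = 1,  by the explicit Euler method.
import numpy as np

def solve_z(kernel, T=10.0, dt=0.01):
    n = int(T / dt)
    z = np.empty(n + 1, dtype=complex)
    z[0] = 1.0
    for i in range(n):
        t = i * dt
        s = np.arange(i + 1) * dt
        z[i + 1] = z[i] - dt * np.sum(kernel(t - s) * z[:i + 1]) * dt
    return z

m1 = lambda tau: 0.5 * np.exp(-(1.0 + 2.0j) * tau)   # placeholder for <X(t)X^dag(s)>, tau = t-s
m2 = lambda tau: 0.2 * np.exp(-(1.5 - 1.0j) * tau)   # placeholder for <X^dag(t)X(s)>

z1, z2 = solve_z(m1), solve_z(m2)
print("|z_1(T)| =", abs(z1[-1]), "  |z_2(T)| =", abs(z2[-1]))   # both amplitudes decay
\end{verbatim}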
The evolution of the density matrix $\rho_{ij}(t)$ reads: \begin{eqnarray} \rho_{11}(t) &=& |z_1(t)|^2 \rho_{11} + d_2(t) \rho_{22} \ ,\nonumber \\ \rho_{22}(t) &=& d_1(t) \rho_{11} + |z_2(t)|^2 \rho_{22}\ , \\ \rho_{12}(t) &=& z_1(t)z_2^*(t) \rho_{12} + \alpha(t) \rho_{21} \ . \nonumber \end{eqnarray} Interestingly, if $|z_1(t)|^2$ and $|z_2(t)|^2$ vanish at infinity then asymptotically one has \begin{equation}\label{} \rho_{11}(t) \rightarrow \rho_{22} \ , \ \ \ \rho_{22}(t) \rightarrow \rho_{11} \ , \end{equation} which means that the occupations $\rho_{11}$ and $\rho_{22}$ simply swap. Hence, such an evolution does not have a proper equilibrium state (its asymptotic state depends strongly upon the initial one). Such asymptotic behavior is excluded in the standard spin-boson model without counter-rotating terms, which corresponds to $|z_1(t)|=1$ and $\alpha(t)=0$. In that case, if $|z_2(t)|^2$ vanishes at infinity, then asymptotically $ \rho_{11}(t) \rightarrow 1$ and $\rho_{22}(t) \rightarrow 0$, which means that the ground state $|1\rangle\langle1|$ defines the unique equilibrium state. Our analysis shows that the presence of anti-resonant terms dramatically changes the asymptotic evolution of the system, since the system no longer possesses an equilibrium state. {\em Density operators vs. amplitudes. --} It should be stressed that the method presented above requires the initial state of the environment to be pure. There is no natural way, within this approach, to generalize formula (\ref{Lambda-S}) to an arbitrary mixed state of the environment. To get rid of this limitation we provide a new approach to the dynamics of quantum systems. Our approach is based not on the Schr\"odinger equation for the vector state $|\Psi_t\rangle$ but on a Schr\"odinger-like equation for the ``amplitude'' operator. Let us recall that if $\rho$ is a density operator in $\mathcal{H}$ then a Hilbert-Schmidt operator $\kappa \in \mathcal{L}^2(\mathcal{H})$ is called an amplitude of $\rho$ if $\rho = \kappa\kappa^\dagger$. Recall that $\mathcal{L}^2(\mathcal{H})$ is equipped with the scalar product $(\kappa,\eta) = {\rm tr}(\kappa^\dagger \eta)$ and $\kappa \in \mathcal{L}^2(\mathcal{H})$ if the Hilbert-Schmidt norm $||\kappa||^2 = (\kappa,\kappa)$ is finite. Note that ${\rm tr}\, \rho=1$ implies $(\kappa,\kappa)=1$. If $a^\dagger = a$ is an observable, then \begin{equation}\label{} (\kappa,a\, \kappa) = {\rm tr}\, (\kappa^\dagger a \kappa) = {\rm tr}\, ( a \kappa\kappa^\dagger)={\rm tr}\,(a \rho)\ , \end{equation} reproduces the standard formula for the expectation value of $a$ in the state $\rho$. Amplitudes display a natural gauge symmetry: a gauge transformation $\kappa \rightarrow \kappa \, U$ leaves $\rho=\kappa \kappa^\dagger$ invariant for any unitary operator $U$ in $\mathcal{H}$. The main idea of this paper is to analyze the dynamics of the gauge-invariant $\rho$ in terms of its gauge-dependent amplitudes $\kappa$. This is a well-known trick in physics. Recall, for example, that in Maxwell theory it is often easier to analyze the Maxwell equations not in terms of the gauge-invariant $\mathbf{E}$ and $\mathbf{B}$ fields but in terms of the gauge-dependent four-potential $A_\mu$. Suppose that $\rho_t$ satisfies the von Neumann equation $ i\partial_t {\rho}_t = [H_t,\rho_t]$, with a time-dependent Hamiltonian $H_t$ (it might be, for example, the interaction Hamiltonian in the interaction picture).
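The gauge freedom $\kappa \rightarrow \kappa\, U$ is easy to verify numerically; the following small Python check (ours, purely illustrative, in finite dimension $d=4$) confirms that $\rho=\kappa\kappa^\dagger$ and the normalization $(\kappa,\kappa)={\rm tr}\,\rho$ are unchanged under a random unitary gauge transformation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Random amplitude kappa, normalised so that rho = kappa kappa^dagger has unit trace.
kappa = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
kappa /= np.sqrt(np.trace(kappa @ kappa.conj().T).real)
rho = kappa @ kappa.conj().T

# Random unitary U (QR decomposition of a complex Gaussian matrix, phases fixed).
Q, R = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
U = Q * (np.diag(R) / np.abs(np.diag(R)))

kappa_gauge = kappa @ U                      # gauge transformation kappa -> kappa U
rho_gauge = kappa_gauge @ kappa_gauge.conj().T

print(np.allclose(rho, rho_gauge))           # True: rho is gauge invariant
print(np.trace(rho).real)                    # 1.0: (kappa, kappa) = tr(rho) = 1
\end{verbatim}
We now return to the von Neumann equation $i\partial_t\rho_t=[H_t,\rho_t]$ introduced above.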
Its solution is given by $\rho_t = v_t \rho v_t^\dagger$, where the unitary operator $v_t$ solves the Schr\"odinger equation $i \partial_t v_t = H_t v_t$ with the initial condition $v_0 = \mathbb{I}$. The corresponding equation for the amplitude $\kappa_t$ is highly non unique. Any equation $ i\partial_t {\kappa}_t = H_t\kappa_t - \kappa_t G_t$, where $G_t$ is an arbitrary time-dependent Hermitian operator, does the job. The quantity $G_t$ plays a role of a gauge field. It is clear that the solution $\kappa_t$ does depend upon $G_t$ but $\rho_t = \kappa_t \kappa_t^\dagger$ is perfectly gauge-invariant. If $G_t=H_t$ then $\kappa_t$ satisfies the same von-Neumann equation as $\rho_t$ (such choice is used e.g. in \cite{Froehlich}). Taking $G_t=0$ one arrives at the following Schr\"odinger-like equation for the amplitude. \begin{equation}\label{vN-kappa-} i\partial_t {\kappa}_t = H_t\kappa_t \ . \end{equation} As we shall see this simple choice leads to considerable simplification of the underlying structure of the gauge theory. Note that the Schr\"odinger-like equation (\ref{vN-kappa-}) still allows for global (i.e. time independent) gauge transformations $\kappa_t \rightarrow \kappa_t u$. Equation (\ref{vN-kappa-}) provides a starting point for the generalized Feshbach method. We show that one may replace in (\ref{Lambda-S}) a pure state $|\psi_E\rangle\langle\psi_E|$ by an arbitrary mixed state of the environment. To justify this statement we shall work with amplitudes instead of density operators. {\em Generalized Feshbach projection technique. --} We generalize the Feshbach projection method to the Hilbert space of amplitude operators $\mathcal{L}^2(\mathcal{H}_S {\,\otimes\,} \mathcal{H}_E)$. Let $\Omega$ be a fixed state of the environment and let $\kappa_E$ be its amplitude. Let us introduce orthogonal projectors \begin{equation}\label{} \mathcal{P}_0 \mu_S{\,\otimes\,} \mu_E = \mu_S {\,\otimes\,} \kappa_E(\kappa_E,\mu_E)\ ,\ \ \ \mathcal{P}_1 = \oper - \mathcal{P}_0\ . \end{equation} where $\mu_S \in \mathcal{L}^2(\mathcal{H}_S)$ and $\mu_E \in \mathcal{L}^2(\mathcal{H}_E)$. Again by linearity we extend the above definition for an arbitrary element from $\mathcal{L}^2(\mathcal{H}_S {\,\otimes\,} \mathcal{H}_E)$. It should be stressed that $\mathcal{P}_\alpha$ define projectors in $\mathcal{L}^2(\mathcal{H}_S {\,\otimes\,} \mathcal{H}_E)$ but not in $\mathcal{H}_S {\,\otimes\,} \mathcal{H}_E$. Now we apply Feshbach technique replacing $\mathcal{H}_S {\,\otimes\,} \mathcal{H}_E$ by $\mathcal{L}^2(\mathcal{H}_S {\,\otimes\,} \mathcal{H}_E)$: let $\zeta = \kappa_S {\,\otimes\,} \kappa_E$ be an initial amplitude of the composed system. It is therefore clear that the initial state $\zeta\zeta^\dagger = \rho {\,\otimes\,} \Omega$ is a product state, where $\rho = \kappa_S\kappa_S^\dagger$ and $\Omega = \kappa_E\kappa_E^\dagger$. Let $\zeta_t \in \mathcal{L}^2(\mathcal{H}_S {\,\otimes\,} \mathcal{H}_E)$ satisfy Schr\"odinger-like equation \begin{equation}\label{} i \partial_t{\zeta}_t = V(t)\zeta_t\ . 
\end{equation} Performing the same steps leading to formula (\ref{Lambda-S}) one arrives at \begin{equation}\label{Lambda-G} \Lambda_t(\rho) = \mathcal{Z}_t \rho \mathcal{Z}_t^\dagger + {\rm Tr}_E ( \mathcal{Y}_t [\,\rho {\,\otimes\,} \Omega] \mathcal{Y}_t^\dagger)\ , \end{equation} where $\mathcal{Z}_t$ satisfies \begin{equation}\label{RG} \partial_t \mathcal{Z}_t = -i {V}_{\rm eff}(t) \mathcal{Z}_t - \int_0^t \mathcal{M}_{t,s} \mathcal{Z}_s\, ds\ , \end{equation} with the effective time-dependent system Hamiltonian defined by ${V}_{\rm eff}(t) = {\rm tr}_E(V(t) \, \mathbb{I}_S{\,\otimes\,} \Omega)$, $\mathcal{M}_{t,s} = {\rm tr}_E(\mathcal{K}_{t,s} \, \mathbb{I}_S {\,\otimes\,} \Omega)$ and $\mathcal{K}_{t,s} = \mathcal{V}_{01}(t) \mathcal{W}_{t,s} \mathcal{V}_{10}(s)$. Moreover, we introduced $\mathcal{V}_{ij}(t) = \mathcal{P}_i V(t) \mathcal{P}_j$. The propagator $\mathcal{W}_{t,s}$ reads \begin{equation}\label{} \mathcal{W}_{t,s} = \mathcal{T}\, \exp\left( -i \int_s^t \mathcal{V}_{11}(u) du \right)\ , \end{equation} and finally the operator $\mathcal{Y}_t$ is defined by \begin{equation}\label{} \mathcal{Y}_t = \int_0^t ds \ \mathcal{W}_{t,s} \mathcal{V}_{10}(s) (\mathcal{Z}_s {\,\otimes\,} \mathbb{I}_E) \ . \end{equation} Interestingly, the dynamical map defined in (\ref{Lambda-G}) has exactly the same form as in (\ref{Lambda-S}) with $Z_t$ replaced by $\mathcal{Z}_t$, $Y_t$ by $\mathcal{Y}_t$ and pure state $|\psi_E\rangle\langle\psi_E|$ is replaced by an arbitrary mixed state $\Omega$ of the environment. Finally, the Hilbert space projectors $P_\alpha$ are replaced by the projectors $\mathcal{P}_\alpha$ acting in the space of amplitudes. It should be stressed that although the dynamical map (\ref{Lambda-G}) is defined by the same formula as (\ref{Lambda-S}) the derivation of $\Lambda_t$ with arbitrary mixed state of the environment was possible only after passing to the space of amplitudes. Finally, let us observe that projectors $\mathcal{P}_\alpha$ acting in the space of amplitudes are gauge-dependent. However, one proves that the corresponding dynamical map $\Lambda_t$ depends only upon the multi-time correlation functions of the environmental operators, and hence it is perfectly gauge-invariant. {\em Conclusions. --} Contrary to the standard Nakajima-Zwanzig projection technique in the Banach space of density operators our general method is based on the Feshbach projection technique in the Hilbert space of amplitudes. The main advantages of presented approach are $i)$ it is based on a much simpler dynamical equation not for the dynamical map itself but for the linear operator acting in the system Hilbert space. Solving this equation one constructs the exact dynamical map. $ii)$ This approach enables one to work with mixed states of the environment. $iii)$ Its crucial property is the ability for coherent approximations leading to legitimate (completely positive and trace preserving) approximated dynamics. This is a big advantage with respect to the Nakajima-Zwanzig approach. This is due to the fact that Feshbach technique works within the Hilbert space of vector states or amplitude operators. This is physically more intuitive and mathematically much simpler than the Nakajima-Zwanzig technique in the abstract Banach space of density operators. We illustrated our approach by the spin-boson model beyond RWA. It provides a key model in the theory of open quantum systems \cite{Breuer} due to the fact that it is exactly solvable (within RWA) if $\psi_E = |{\rm vac\rangle}$. 
Our approach shows that this feature corresponds to the fact that the standard spin-boson model is exact in the Born-like approximation. Adding counter-rotating terms and/or replacing $|{\rm vac}\rangle$ by another state (pure or mixed) makes this model intractable within the standard approach \cite{Breuer}. The generalized Feshbach projection technique enables one to deal with both counter-rotating terms and an arbitrary mixed state of the environment. The power of this method lies in the fact that an arbitrary mixed state of the environment may be handled simply by inserting the corresponding correlation functions. {\em Acknowledgements --} This work was partially supported by the National Science Centre project DEC-2011/03/B/ST2/00136. The authors thank the anonymous referees for many valuable comments. \end{document}
\begin{document} \date{ } \author{H. Leahu, M. Mandjes (Univ.\ of Amsterdam) \& AM. Oprescu (Vrije Univ.\ Amsterdam)} \title{A Numerical Approach to Stability of Multi-class Queueing Networks} \begin{abstract}\noindent The Multi-class Queueing Network (McQN) arises as a natural multi-class extension of the traditional (single-class) Jackson network. In a single-class network subcriticality (i.e.\ subunitary nominal workload at every station) entails stability, but this is no longer sufficient when jobs/customers of different classes (i.e.\ with different service requirements and/or routing scheme) visit the same server; therefore, analytical conditions for stability of McQNs are lacking, in general. \noindent In this note we design a numerical (simulation-based) method for determining the stability region of a McQN, in terms of arrival rate(s). Our method exploits certain (stochastic) monotonicity properties enjoyed by the associated Markovian queue-configuration process. Stochastic monotonicity is a quite common feature of queueing models and can be easily established in the single-class framework (Jackson networks); recently, also for a wide class of McQNs, including first-come-first-serve (FCFS) networks, monotonicity properties have been established. Here, we provide a minimal set of conditions under which the method performs correctly. \noindent Eventually, we illustrate the use of our numerical method by presenting a set of numerical experiments, covering both single and multi-class networks. \end{abstract} \section{Introduction} Multi-class queueing networks (McQNs) provide the mathematical framework for modeling a wide range of stochastic systems, e.g., manufacturing lines, computer grids and telecommunication systems. They differ from the classical Jacksonian network model in that the same (physical) item entering the system may require multiple service stages at the same station, with different service and routing characteristics, thus giving rise to a different class of jobs. As such, (some) stations behave as multi-class (rather than single-class) queues. This distinguishing feature has a rather significant impact on the assessment of stability of such networks; more specifically, while for Jackson networks stability is equivalent to sub-criticality, for some McQNs such an equivalence does not hold anymore, as demonstrated by a plethora of examples in the literature; see, e.g. \cite{Bramson:08} for a significant list of examples of subcritical networks which are \emph{not} stable. It remains true, however, that stability implies subcriticality \cite{Bramson:08}, hence subcriticality is a necessary, but not sufficient condition for stability. In this note we consider McQNs in which inter-arrival and service times are exponentially distributed; under this assumption, the queue-configuration process defines a continuous-time Markov chain (Markov process on a discrete state space) which enables one to employ a more powerful mathematical apparatus. We address the following problem: \emph{given a certain network, with specified service rates and routing scheme, what is the set of arrival rates which makes the network stable}? In this context, stability refers to the associated Markovian model, hence positive Harris recurrence. While in the Jacksonian framework the answer to the above question is straightforward, under the multi-class paradigm, in the absence of analytical conditions for stability, one needs to resort to numerical methods. 
We design a numerical (simulation-based) method for solving this problem. Our method, which is among the first schemes of this kind, assumes some (weak) monotonicity conditions on the associated Markov process, which ensure that the stability region (the set of arrival-rate vectors which make the network stable) defines a star-shaped domain in the parameter space. In addition, the stability region can be recovered by interpolating the boundary points (stability thresholds) in various directions which, in turn, can be approximated by numerical root-finding methods. Importantly, the required monotonicity conditions hold for McQNs in which jobs are executed one at a time; see \cite{LM:2016}. To test the approach, we performed an extensive set of numerical experiments. We include here a number of illustrative examples. We show first that the method correctly identifies the predicted stability thresholds when they are available in analytical form, e.g. for Jackson and Kelly type networks. Furthermore, we apply our numerical method to two instances of multi-class networks (reentrant lines) where stability conditions are not available, obtaining approximations for the (unknown) stability thresholds. In Section \ref{mcqn:sec} we introduce the mathematical model, the relevant notation and terminology. Furthermore, in Section \ref{main:sec} we introduce our method, the necessary assumptions and the (main) convergence result. Finally, in Section \ref{num:sec} the numerical experiments are presented. \section{The Mathematical Model}\label{mcqn:sec} In this section we describe our mathematical model and introduce the notation and terminology which will be used throughout this note. \subsection{Multi-class Queueing Networks: The Model}\label{model:sec} We consider a general McQN model consisting of $\aleph$ stations (each having its own service/queueing policy) executing $d$ classes of jobs. Each class $k$ is assigned to a specified station $\mathcal{S}(k)$. We further assume that the mapping $k\longmapsto\mathcal{S}(k)$ is surjective, i.e. each station serves (at least) one class, hence $1\leq\aleph\leq d$. When the mapping $\mathcal{S}$ is bijective one recovers the standard Jackson Network model. The set $\{k:\mathcal{S}(k)=i\}$, of all classes assigned to station $i$ will be denoted by $\mathcal{K}_i$. We now describe the dynamics of the McQN. Jobs of class $k$ enter the network according to a Poisson process with rate $\theta_k\geq 0$; the case $\theta_k=0$ corresponds to a void arrival process, meaning that class $k$ does not have external input. Upon arrival, a job of class $k$ is assigned to station $\mathcal{S}(k)$; depending on the underlying service/queue policy, it either starts receiving service immediately, or it is enqueued in a waiting line. We assume that jobs of class $k$ require an exponentially distributed service time, with rate $\beta_k>0$, independent of everything else. After finishing service at station $\mathcal{S}(k)$, a job of class $k$ turns into a job of class $l$, with probability $R_{kl}$ and moves to station $\mathcal{S}(l)$ (where it follows the corresponding queueing routine) or leaves the network with probability $R_{k0}:=1-\sum_{l=1}^dR_{kl}$. To ensure that the network is open, we assume that the matrix $R:=\{R_{kl}\}_{k,l=1,\ldots,d}$ is sub-stochastic, i.e. $$(I-R)^{-1}=I+R+R^2+\ldots;$$ this condition guarantees that any job will eventually leave the network (in finite time) with probability one. 
An McQN with $R_{k\:k+1}=1$, for $k=1,\ldots,d-1$ and $R_{d0}=1$, such that only class $1$ has non-trivial external input, i.e., $\theta_2=\ldots=\theta_d=0$, is called a \emph{reentrant line}. Reentrant lines are the most popular instances of McQNs, as they provide mathematical models for manufacturing systems (assembly lines). We define the vector of \emph{effective arrival rates} by $$\lambda:=(I-R')^{-1}\theta.$$ Furthermore, the \emph{traffic rate} (or \emph{nominal workload}) of station $i$ is defined as \begin{equation}\label{traffic:eq} \rho_i:=\sum_{k\in\mathcal{K}_i}\frac{\lambda_k}{\beta_k}. \end{equation} Station $i$ is called \emph{sub-critical} if $\rho_i<1$ and the network is called sub-critical if every node is. \subsection{The Stability and the Subcriticality Regions}\label{regio:sec} Under the assumptions in Section \ref{model:sec}, the queue-configuration process defines a Markov process \cite{Dai:95}, $\mathcal{X}:=\{X_t:t\geq 0\}$ (on some suitable state-space $\mathbb{X}$), which depends on the parameter $\theta=(\theta_1,\ldots,\theta_d)\in\Theta$, where $\Theta\subseteq\mathbb{R}^d_+:=\{\theta\in\mathbb{R}^d:\theta\geq\mathbf{0}\:\text{(componentwise)}\}$ is a pre-specified set; the underlying probability, resp. expectation operator, will be denoted by $\mathbb{P}_\theta$, resp. $\mathbb{E}_\theta$. For a given McQN, we define the $\Theta$-\emph{stability region} via the associated Markov process $\mathcal{X}$, as follows: $$\Theta_{\rm s}:=\{\theta\in\Theta:\:\mathcal{X}\:\text{is stable under}\:\mathbb{P}_\theta\};$$ here, by \emph{stability} we mean positive (Harris) recurrence. Stability of McQNs has been thoroughly investigated in \cite{Dai:95,BGT:96,H:97,GH:05,Bramson:08}. \begin{remk} {\em Note that the concept ``stability region'' is slightly different from the one introduced in \cite{Dai}, which refers to the stability of the associated fluid model; the latter is, in general, a subset of the former \cite{Dai:95}.} \end{remk} In the same vein, define the $\Theta$-\emph{subcriticality region} $$\Theta_{\rm c}:=\left\{\theta\in\Theta:\:\max_{i}\sum_{k\in\mathcal{K}_i}\frac{[(I-R')^{-1}\theta]_k}{\beta_k}<1\right\}.$$ If $\Theta=\mathbb{R}^d_+$, we shall use the terminology \emph{full stability (subcriticality) region} and we shall omit specifying $\Theta$ when not relevant, or no confusion occurs. In many cases (e.g. Jackson and Kelly networks) stability is equivalent to subcriticality, hence $\Theta_{\rm s}=\Theta_{\rm c}$. Nevertheless, this is not always the case, as illustrated by numerous (counter) examples in the literature (see also Example \ref{Dai:exmp} below) and, in general, stability only implies subcriticality, hence $\Theta_{\rm s}\subseteq\Theta_{\rm c}$; see \cite{Bramson:08}. \begin{exmp}\label{Dai:exmp} {\em Consider a re-entrant line with two servers and six classes, with the routing indicated in Figure \ref{BD:fig}. Both stations employ the usual first-come-first-serve discipline. We let $\theta\in\Theta=\{(r,0,\ldots,0):r\geq 0\}$, i.e.\ $r$ denotes the (Poisson) arrival rate, and denote by $\mu_1,\ldots,\mu_6$ the expected service times of the respective classes. 
Then, we have $$\rho_1=r(\mu_1+\mu_6),\:\rho_2=r(\mu_2+\mu_3+\mu_4+\mu_5).$$ However, if $\mu_1=\mu_3=\mu_4=\mu_5=0.001$, $\mu_2=0.897$ and $\mu_6=0.899$, then $(1,0,\ldots,0)\in\Theta_{\rm c}\setminus\Theta_{\rm s}$, cf.~\cite{Dai:95}.} \end{exmp} We conclude that, except from the situations when $\Theta_{\rm s}=\Theta_{\rm c}$, no analytical representations are available, in general, for stability regions. Therefore, numerical methods are sought instead. It is also worth noting that the full subcriticality region is an open, bounded, star-shaped domain in $\mathbb{R}^d$, around the origin (vantage point); it is not clear, however, whether the full stability region enjoys similar properties. \subsection{Stability and Subcriticality Thresholds} A vector $\vec{v}:=(v_1,\ldots,v_d)\in\mathbb{R}^d_+$ satisfying $\|\vec{v}\|=1$ will be called a (positive) \emph{direction} in $\mathbb{R}^d$; for a given direction $\vec{v}$, we define the $\vec{v}$-\emph{ray} $$\langle\vec{v}\,\rangle:=\{r\cdot\vec{v}:r\geq 0\};$$ the $\vec{v}$-ray is a one-dimensional manifold isomorphic to $[0,\infty)$, hence one can endow it with the usual ordering and topology on the real non-negative half-line. In the sequel, we shall restrict our analysis to the case $\Theta=\langle\vec{v}\,\rangle$; there are at least two reasons for that: \begin{itemize} \item The family of all rays $\langle\vec{v}\,\rangle$ sweeps the whole non-negative quadrant, hence any given set $\mathfrak{D}\subseteq\mathbb{R}^d_+$ is characterized by the family of traces it leaves on the positive rays. \item Many of the practical applications of McQNs concern reentrant lines, where $\Theta=\langle(1,0,\ldots,0)\rangle$. \end{itemize} For an arbitrary positive direction $\vec{v}$, we define the \emph{stability threshold} in direction $\vec{v}$ as $\theta_*(\vec{v}\,):=\sup\langle\vec{v}\,\rangle_{\rm s}$; in the same vein, we define the \emph{critical threshold} in in direction $\vec{v}$ as $\bar{\theta}(\vec{v}\,):=\sup\langle\vec{v}\,\rangle_{\rm c}$. When the direction $\vec{v}$ is not relevant (or clear from the context), we shall use the simplified notations $\theta_*$, resp. $\bar{\theta}$; we stress however that both thresholds depend on $\vec{v}$. Note that, in the light of the properties put forward in Section \ref{regio:sec}, it holds that $\mathbf{0}<\theta_*\leq\bar{\theta}<\infty$; the leftmost inequality follows from the fact that the full stability region includes the open set $$\left\{\theta\in\mathbb{R}^d_+:\:\sum_{k=1}^d\frac{[(I-R')^{-1}\theta]_k}{\beta_k}<1\right\},$$ corresponding to the sufficient (global) stability condition $\rho_1+\ldots+\rho_\aleph<1$; see \cite{Bramson:08}. Furthermore, we have $\langle\vec{v}\,\rangle_{\rm c}=[\mathbf{0},\bar{\theta})$, but a similar representation does not necessarily hold for $\langle\vec{v}\,\rangle_{\rm s}$, unless the full stability region has similar geometric properties as the subcriticality region, i.e., it is an open, star-shaped domain; it holds, however, that $\langle\vec{v}\,\rangle_{\rm s}\subseteq[\mathbf{0},\theta_*)$. 
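To make the quantities just introduced concrete, the following Python sketch (ours) computes the effective arrival rates $\lambda=(I-R')^{-1}\theta$ and the traffic rates (\ref{traffic:eq}) for the re-entrant line of Example \ref{Dai:exmp}, with the service times quoted there; it confirms that the network is subcritical at $r=1$ even though, by \cite{Dai:95}, it is not stable.
\begin{verbatim}
import numpy as np

# Re-entrant line of Example 1: six classes, class k feeds class k+1, class 6 exits.
d = 6
R = np.zeros((d, d))
for k in range(d - 1):
    R[k, k + 1] = 1.0

theta = np.array([1.0, 0, 0, 0, 0, 0])      # only class 1 has external input (r = 1)
mu = np.array([0.001, 0.897, 0.001, 0.001, 0.001, 0.899])  # mean service times
beta = 1.0 / mu                              # service rates

lam = np.linalg.solve(np.eye(d) - R.T, theta)   # lambda = (I - R')^{-1} theta
station = [1, 2, 2, 2, 2, 1]                 # classes 1, 6 at station 1; classes 2-5 at station 2
rho = [sum(lam[k] / beta[k] for k in range(d) if station[k] == i) for i in (1, 2)]
print(rho)                                   # [0.9, 0.9]: subcritical, yet unstable (Dai)
\end{verbatim}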
Finally, we note that for any direction $\vec{v}$ there exist finite (positive) constants $r_*$ and $\bar{r}$ (both depending on $\vec{v}$), such that $\theta_*(\vec{v}\,)=r_*(\vec{v}\,)\cdot\vec{v}$, resp.\ $\bar{\theta}(\vec{v}\,)=\bar{r}(\vec{v}\,)\cdot\vec{v}$; in addition, $\bar{r}$ can always be calculated analytically, as follows: $\bar{r}(\vec{v}\,)=\min_i\bar{r}_i(\vec{v}\,)$, where \begin{equation}\label{critic:eq} \bar{r}_i(\vec{v}\,):=\left[\sum_{k\in\mathcal{K}_i}\frac{\delta_k}{\beta_k}\right]^{-1} \end{equation} denotes the critical threshold for station $i$; in the last display, we used the short-hand notation $$\delta:=(I-R')^{-1}\vec{v}=(I+R+R^2+\ldots)'\vec{v};$$ in particular, $\theta=r\cdot\vec{v}$ entails $\lambda=r\delta$ on $\langle\vec{v}\,\rangle$. \begin{figure} \caption{A first-come-first-serve reentrant line (Bramson -- Dai).} \label{BD:fig} \end{figure} \section{A Numerical Method for Determining Stability Regions}\label{main:sec} Throughout this section, $\mathcal{X}$ will denote the (Markov) queue-configuration process associated with an McQN with $\aleph$ stations, $d$ classes, arrival-rate vector $\theta$, service-rate vector $\beta$ and routing matrix $R$, while $\vec{v}$ will denote a fixed positive direction in $\mathbb{R}^d$; in particular, $\theta=r\cdot\vec{v}$. The aim is to design a numerical method for evaluating the stability threshold $\theta_*$ along the positive direction $\vec{v}$. Our analysis will reveal that, under some (rather weak) monotonicity conditions, the $\langle\vec{v}\,\rangle$-stability region satisfies $\langle\vec{v}\,\rangle_{\rm s}=[\mathbf{0},\theta_*)=[0,r_*)\cdot\vec{v}$ and that the stability threshold $\theta_*$ (in fact, $r_*$) can be evaluated via Robbins-Monro schemes; finally, we extend this method to more general, star-convex parameter sets. \subsection{Stability Thresholds for Jackson Networks}\label{Jackson:sec} Assume that $\mathcal{X}$ corresponds to a Jackson network with $d$ stations/classes. In this case, $\Theta_{\rm s}=\Theta_{\rm c}$, for any $\Theta$, hence $\theta_*=\bar{\theta}$, resp. $r_*=\bar{r}=\min_k(\beta_k/\delta_k)$, cf.~(\ref{critic:eq}). Furthermore, consider $\phi:\mathbb{X}=\mathbb{N}^d\longrightarrow(0,1]$ defined as $\phi(\mathbf{x}):=\exp(-\alpha\|\mathbf{x}\|)$, for some (fixed) $\alpha>0$; then for any $r<\bar{r}$ (stability) it holds (cf.\ \cite{Jackson}) that (recall that $\theta=r\cdot\vec{v}$) \begin{equation}\label{equilibrium:eq} \tilde{\varphi}(r):=\lim_{t\rightarrow\infty}\mathbb{E}_\theta[\phi(X_t)]=\prod_{k=1}^d\frac{\bar{r}_k-r}{\bar{r}_k-re^{-\alpha}}. \end{equation} The function $\tilde{\varphi}$ in the above display is continuous and strictly decreasing on $[0,\bar{r})$, with $\tilde{\varphi}(0)=1$, $\tilde{\varphi}(\bar{r})=0$.\\ Therefore, denoting by $r_\varepsilon\in(0,\bar{r})$ the (unique) root of the equation $\tilde{\varphi}(r)=\varepsilon\in(0,1)$, we note that $r_\varepsilon$ increases as $\varepsilon$ decreases and, as can be verified, approaches $\bar{r}$ as $\varepsilon$ decreases to $0$; the same holds true if we replace $\phi$ by any bounded function vanishing at infinity.
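For a concrete two-node Jackson network (the one used later in Section \ref{num:sec}, with $\beta=(2,1.6)$ and routing probability $0.2$ from node $1$ to node $2$; this is our own sketch), the root $r_\varepsilon$ of $\tilde{\varphi}(r)=\varepsilon$ can be computed directly from (\ref{equilibrium:eq}) by bisection, and one observes numerically that $r_\varepsilon$ approaches $\bar{r}$ as $\varepsilon$ decreases:
\begin{verbatim}
import numpy as np

beta = np.array([2.0, 1.6])                  # service rates
R = np.array([[0.0, 0.2],
              [0.0, 0.0]])                   # routing matrix
v = np.array([1.0, 0.0])                     # positive direction (normalised)

delta = np.linalg.solve(np.eye(2) - R.T, v)  # delta = (I - R')^{-1} v
r_bar_k = beta / delta                       # per-station critical thresholds
r_bar = r_bar_k.min()                        # critical threshold along v

alpha = 1.0
def phi_tilde(r):                            # equilibrium formula for Jackson networks
    return np.prod((r_bar_k - r) / (r_bar_k - r * np.exp(-alpha)))

def root(eps, lo=0.0, hi=r_bar, iters=100):  # bisection for phi_tilde(r) = eps
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi_tilde(mid) > eps else (lo, mid)
    return 0.5 * (lo + hi)

for eps in (1e-2, 1e-4, 1e-6):
    print(eps, root(eps), r_bar)             # r_eps -> r_bar as eps -> 0
\end{verbatim}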
One concludes that, for Jackson networks, stability thresholds can be approximated by roots of equations of the type $\tilde{\varphi}(r)=\varepsilon$ (for $\varepsilon$ close to $0$), where $\tilde{\varphi}$ is a stationary performance measure of the network under consideration; more specifically, $\tilde{\varphi}$ appears as the expectation under the equilibrium distribution of some bounded function vanishing at infinity. \subsection{Stability Thresholds in the Multi-class Setup} In this section we extend the approximation scheme described in Section \ref{Jackson:sec} beyond the Jackson network setup, in order to approximate stability thresholds in cases where they are not available in closed form. In doing so, the following questions/challenges arise: \begin{enumerate} \item [(I)] Is the $\langle\vec{v}\,\rangle$-stability region still a half-open interval of the form $[\mathbf{0},\theta_*)=[0,r_*)\cdot\vec{v}$, so that $r_*$ determines the stability region? \item [(II)] Provided the answer in (I) is affirmative, does there exist a (stationary) performance measure $\tilde{\varphi}:[0,\infty)\longrightarrow[0,1]$ such that the root $r_\varepsilon$ of the equation $\tilde{\varphi}(r)=\varepsilon$ approaches the threshold $r_*$, for $\varepsilon$ close to $0$? \item [(III)] Provided the answers in (I)--(II) are affirmative, how to evaluate $r_\varepsilon$, since analytical expressions for stationary performance measures, such as the one in (\ref{equilibrium:eq}), are not available in general? \end{enumerate} In what follows, we shall provide a set of conditions guaranteeing positive answers to questions (I) and (II) above and discuss possible approaches to (III). To start with, note that (I) assumes a certain type of monotonic behavior. More specifically, it requires that the stability region is a monotone set, in the sense that stability for a certain parameter entails stability for all ``smaller'' parameters. In addition, if the answer is affirmative for any direction $\vec{v}$, then the full stability region defines a star-shaped domain around the origin. Assume now that there exists some $\phi:\mathbb{X}\longrightarrow(0,1]$, vanishing at infinity, satisfying the following condition: \begin{enumerate} \item [(\textbf{M}1)] the mapping $$(t,\theta)\longmapsto\varphi_t(\theta):=\mathbb{E}_\theta[\phi(X_t)|X_0=\emptyset],$$ is (jointly) non-increasing on $[0,\infty)\times\Theta$; \end{enumerate} that is, we assume the existence of some functional $\phi$ of the process $\mathcal{X}$ (started in the empty configuration) which is monotone (in expectation) w.r.t.~both time and arrival-rates (componentwise ordering). Provided that (\textbf{M}1) above holds true, the limit \begin{equation}\label{limit:eq} \varphi(\theta):=\lim_{t\rightarrow\infty}\varphi_t(\theta)=\inf_{t\geq 0}\varphi_t(\theta)\in[0,1], \end{equation} exists and defines a non-decreasing function on $\Theta$. For any such $\varphi$ it holds that \begin{equation}\label{regio:eq} \Theta_{\rm s}=\{\theta\in\Theta:\varphi(\theta)>0\}; \end{equation} in particular, given the positive direction $\vec{v}$, we define $\tilde{\varphi}_t,\tilde{\varphi}:[0,\infty)\longrightarrow[0,1]$ as the push-forwards of $\varphi_t$, resp. $\varphi$, on the ray $\vec{v}$; that is, $\tilde{\varphi}(r)=\varphi(r\cdot\vec{v})$. Then, $$\langle\vec{v}\rangle_{\rm s}=\tilde{\varphi}^{-1}((0,1])\cdot\vec{v}=[0,r_*)\cdot\vec{v},$$ which solves question (I). A complete proof of the above facts is provided in \cite{LM:2016}. 
Furthermore, to guarantee (II), it suffices that \begin{enumerate} \item [(\textbf{M}2)] the mapping $\varphi:\Theta\longrightarrow[0,1]$ defined by (\ref{limit:eq}) is continuous and strictly decreasing on $\Theta_{\rm s}$. \end{enumerate} Indeed, assuming that (\textbf{M}2) holds true, the function $\tilde{\varphi}:[0,r_*)\longrightarrow(0,1]$ is homeomorphic, hence the root $r_\varepsilon=\tilde{\varphi}^{-1}(\varepsilon)$ is correctly defined and approximates the threshold $r_*$, for $\varepsilon\rightarrow 0$. Finally, for estimating the root $r_\varepsilon$ one can employ a stochastic approximation scheme of Robbins-Monro (RM) type \cite{RM}, which requires that the values of $\varphi$ (for various parameters) are evaluated by simulation. More specifically, an RM approximation scheme is an iterative method which constructs a sequence of parameter updates such that at every update an unbiased estimate of $\varphi$ is used to generate a new parameter. The main difficulty when applying an RM scheme in this setting arises from the fact that one needs to sample from $\varphi$, which appears as a stationary (limiting) measure of the process $\mathcal{X}$. There are two possible approaches: \begin{enumerate} \item[(1)] direct simulation via regenerative ratios, which in turn requires simulating the queue-configuration process along a regenerative cycle; see e.g.\ \cite{Asm-Glynn}. \item[(2)] simulating instead $\varphi_t$, for an increasing sequence of time-horizons $t\rightarrow\infty$ and invoking an approximation argument; see e.g.\ \cite {Burk:56}. \end{enumerate} Method (1) seems more forthright. Note however that recurrence times are random and may become arbitrarily large as the input parameter approaches the boundary of the stability region. Since one expects that the approximation scheme will stabilize somewhere in the neighborhood of the stability threshold, i.e. at the boundary of the stability region, such a method seems rather unpredictable in terms of computational effort. Method (2) avoids this inconvenience by setting fixed simulation horizons, hence allows for a better control over the computational complexity. We conclude this section with several considerations on the two conditions formulated above: \begin{itemize} \item Conditions (\textbf{M}1) and (\textbf{M}2) are deliberately stated for general $\Theta$ (rather than $\langle\vec{v}\,\rangle$), since in many situations, the conditions hold for $\Theta=\mathbb{R}^d_+$, which entails their validity (for the same $\phi$) for any ray. \item Conditions (\textbf{M}1) and (\textbf{M}2) are quite common for Jackson networks; (\textbf{M}1) follows by standard stochastic monotonicity theory for Markov chains, whereas (\textbf{M}2) follows directly by (\ref{equilibrium:eq}). \item Thm.\ 1 in \cite{LM:2016} establishes the validity of (\textbf{M}1), provided that the queue-configuration process $\mathcal{X}$ fulfils a certain stochastic monotonicity condition. \item Prop.\ 1 and 2 in \cite{LM:2016} show that, for a wide class of McQNs (including the examples treated in this paper), conditions (\textbf{M}1) and (\textbf{M}2) hold for certain $\phi$'s (hence, $\varphi$'s) and for $\Theta=\mathbb{R}^d_+$. 
\end{itemize} \subsection{Numerical Evaluation of Stability Thresholds}\label{method:sec} In this section we assume that conditions (\textbf{M}1) and (\textbf{M}2) hold for $\Theta=\langle\vec{v}\rangle$, for a certain $\phi:\mathbb{X}\longrightarrow(0,1]$, vanishing at infinity and we design a numerical method for approximating the stability threshold $\theta_*=r_*\cdot\vec{v}$. Fix some arbitrary increasing sequence $\{t_n\}_{n\geq 0}$ of non-negative numbers satisfying $t_0=0$, $t_n\rightarrow\infty$ and let $\mathfrak{D}_n(r)$ denote the distribution of $\phi(X_{t_n})$ under $\mathbb{P}_\theta$, for $\theta=r\cdot\vec{v}$ and $n\geq 0$; we further set $\mathfrak{D}_n(r)=\mathfrak{D}_n(0)$, for $r<0$. Furthermore, fix some sequence $\{a_n\}_{n\geq 1}$ of decreasing positive numbers satisfying $a_n\rightarrow 0$ and $\varepsilon>0$ and define the sequence of iterates \begin{equation}\label{st-approx:eq} \forall n\geq 1:\: x_n = x_{n-1} + a_n\left(z_n-\varepsilon\right), \end{equation} where $x_0\in(0,\bar{r})$ is arbitrarily chosen and for each $n\geq 0$ the r.v.\ $z_n$ follows the conditional distribution $\mathfrak{L}[z_n|x_{n-1}]=\mathfrak{D}_n(x_{n-1})$, given $x_{n-1}$. Our next result establishes the convergence of the iterates $x_n$ in (\ref{st-approx:eq}) towards the root $r_\varepsilon$ of the equation $\tilde{\varphi}(r)=\varphi(r\cdot\vec{v})=\varepsilon$, for $n\rightarrow\infty$ and, under slightly more restrictive conditions, provides the magnitude of the approximation error; see the Appendix for a proof. \begin{them}\label{conv:them} For any $\varepsilon>0$, $x_0\in(0,\bar{r})$ and positive sequences $\{t_n\}_{n\geq 0}$ and $\{a_n\}_{n\geq 1}$, satisfying $$\lim_{n\rightarrow\infty}t_n=\infty,\:\sum_{n\geq 1}a_n=\infty,\:\sum_{n\geq 1}a_n^2<\infty,$$ the iterates $\{x_n\}_{n\geq 0}$ in (\ref{st-approx:eq}) satisfy $x_n\longrightarrow r_\varepsilon$, a.s. Furthermore, assume that the family of derivatives $\tilde{\varphi}_t'$ converges uniformly on $(0,r_*)$, for $t\rightarrow\infty$, and that $\inf|\tilde{\varphi}'(r)|>0$. If $a_n=a\cdot n^{-\omega}$, for $\omega\in(1/2,1]$ and $a>(\omega-1/2)/(\inf|\tilde{\varphi}_t'(r)|)$ and \begin{equation}\label{conv:eq} \sup_{r\in[0,r_*]}|\tilde{\varphi}_{t_n}(r)-\tilde{\varphi}(r)|=o(n^{-\kappa}), \end{equation} for some $\kappa>\omega-1/2$, then (in probability) $$(x_n-r_\varepsilon)=O(n^{-(\omega-1/2)}).$$ \end{them} In practice, we fix some large $n\geq 1$ and use the estimate $x_n$ to approximate $r_*$. The approximation error consists of a random and a deterministic component: \begin{equation}\label{error:eq} \Delta_n^{\varepsilon}:=|x_n-r_*|\leq|x_n-r_\varepsilon|+(r_*-r_\varepsilon). \end{equation} While the behavior of the random component $|x_n-r_\varepsilon|$ is established by Theorem \ref{conv:them}, for the deterministic part in (\ref{error:eq}) we note that (for small $\varepsilon$) $$\varepsilon=\tilde{\varphi}(r_\varepsilon)-\tilde{\varphi}(r_*)\approx -\tilde{\varphi}'(r_\varepsilon)(r_*-r_\varepsilon);$$ in particular, if $\tilde{\varphi}'$ is bounded away from $0$ (close to $r_*$) then one obtains $r_\varepsilon\rightarrow r_*$ at a linear rate. Nevertheless, if $\lim_{r\rightarrow r_*}\tilde{\varphi}'(r)=0$ then convergence is slower, as we shall note in our numerical experiments in Section \ref{num:sec}. 
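To give a feeling for how iteration (\ref{st-approx:eq}) behaves in practice, the following Python sketch (ours, with deliberately small, purely illustrative parameters, far milder than those used in Section \ref{num:sec}) runs the scheme for a single M/M/1 queue with service rate $\beta=1$, for which $r_*=\bar{r}=\beta$; each iterate simulates the queue from the empty state up to the horizon $t_n$ and evaluates $z_n=\exp(-\alpha\|X_{t_n}\|)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
beta, alpha, eps = 1.0, 1.0, 0.05        # service rate, phi(x) = exp(-alpha*x), level eps
N, t0, b, a = 2000, 50.0, 1.0, 1.0       # iterations, horizons t_n = t0 + b*n, gains a_n = a/n

def queue_length_at(t_end, lam):
    """Queue length of an M/M/1 queue (arrival rate lam, service rate beta)
    started empty, observed at time t_end (Gillespie-type simulation)."""
    t, q = 0.0, 0
    while True:
        rate = lam + (beta if q > 0 else 0.0)
        t += rng.exponential(1.0 / rate) if rate > 0 else np.inf
        if t >= t_end:
            return q
        q += 1 if rng.random() < lam / rate else -1

x = 0.5                                  # x_0 in (0, r_bar)
for n in range(1, N + 1):
    lam = max(x, 0.0)                    # convention D_n(r) := D_n(0) for r < 0
    z = np.exp(-alpha * queue_length_at(t0 + b * n, lam))
    x += (a / n) * (z - eps)             # Robbins-Monro update

print(x)   # hovers near the root r_eps of phi(r) = eps, itself close to r_* = beta = 1
\end{verbatim}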
We conclude that the approximation error of the method depends essentially on the behavior of the derivative $\tilde{\varphi}'$ close to $r_*$; more specifically, denoting $c:=\lim_{r\rightarrow r_*}|\tilde{\varphi}'(r)|$, we note that the larger $c$ is, the better the accuracy. \subsection{Approximating Star-shaped Stability Regions} In this section we assume that conditions (\textbf{M}1) and (\textbf{M}2) hold true for a certain $\phi:\mathbb{X}\longrightarrow(0,1]$ and for some star-shaped (around the origin) parameter set $\Theta\subseteq\mathbb{R}^d_+$. Then the $\Theta$-stability region $\Theta_{\rm s}$ is itself a star-shaped domain around the origin; a similar fact holds for the stability region of the fluid model \cite{Chen}. Assuming w.l.o.g.\ that $\Theta=\mathbb{R}_+^{d'}$, for some $d'\leq d$, such a domain can be approximated as follows: one constructs a grid of points on the positive orthant of the unit sphere (each point corresponding to a given direction) and determines the stability threshold along each direction, cf.~Section \ref{method:sec}. Finally, one connects thresholds corresponding to neighboring points (directions), obtaining in this way a polytope which approximates the $\Theta$-stability region (for a large number of points); such a procedure, for $d'=2$, is graphically illustrated in Figure \ref{star:fig}. \begin{remk} {\em Note that the boundary point of the $\Theta$-stability region in some given direction is obtained as the minimum between the boundary point of $\Theta$ and the stability threshold in that direction; for a more efficient numerical procedure, one can replace $\bar{r}$ in (\ref{st-approx:eq}) by the corresponding boundary point of $\Theta$, thus avoiding the simulation of overly congested networks.} \end{remk} \begin{figure} \caption{Numerical approximation of a star-shaped domain. Solid line = true boundary; dotted lines = rays; bullets = approximations of the boundary points (thresholds); dashed line = approximated boundary.} \label{star:fig} \end{figure} \begin{figure} \caption{The Lu-Kumar network.} \label{LK:fig} \end{figure} \section{Numerical Results}\label{num:sec} In this section we illustrate the use of the method developed in Section \ref{method:sec}. We include experimental results both for examples in which the stability thresholds are known and for examples in which they are not. For a given $\varepsilon>0$ we average $N=10000$ RM iterates (\ref{st-approx:eq}) in order to construct an estimator \begin{equation}\label{estimator:eq} \hat{r}_\varepsilon:=\frac{1}{N}\sum_{n=1}^N x_n, \end{equation} for the solution $r_\varepsilon$ of $\varphi(r)=\varepsilon$, where $\varphi$ is defined by (\ref{limit:eq}), for $\phi(\xi)=\exp(-\alpha\|\xi\|)$, with $\|\xi\|$ denoting the total number of jobs in the network configuration $\xi$; the average in (\ref{estimator:eq}) has the advantage that it is less sensitive to initial jumps/outliers \cite{PJ:92}. For illustrative purposes, we analyze the effect of varying the value of $\varepsilon$; more specifically, we let $\varepsilon=10^{-c}$, with $c=7,8,9,10$. For the numerical experiments below, $a_n=a/n^\omega$ and $t_n=t_0+bn$, with $\omega=1$, $a=1/\varepsilon$, $t_0=2\cdot 10^6$, $b=200$; also, we let $x_0=0$ and $\alpha=1$. These parameters are set such that they provide (approximately) correct values when the stability region is known; note that otherwise firm conclusions can only be drawn under the proviso that condition (\ref{conv:eq}) (Thm.\ \ref{conv:them}) holds true.
\input JacksonTab \subsection{Jackson Networks} Consider an open Jackson network consisting of two servers/classes $k=1,2$, having input rates $\theta_1$, resp.~$\theta_2$, and service rates $\beta_1$, resp.~$\beta_2$. We further assume that any job finishing service at server $1$ moves to server $2$ with probability $\wp\in[0,1]$, or leaves the network; that is, $R_{12}=\wp$ and $R_{11}=R_{21}=R_{22}=0$. Pick now some $\vec{v}=(1,v)$, for some arbitrary $v\geq 0$ and recall that $r_*=\bar{r}=\bar{r}_1\wedge\bar{r}_2$, where, cf.~(\ref{critic:eq}), \begin{equation}\label{threshold:eq} \bar{r}_1=\beta_1\cdot\|\vec{v}\|,\:\bar{r}_2=(\wp+v)^{-1}\beta_2\cdot\|\vec{v}\|. \end{equation} Let $\beta_1=2$, $\beta_2=1.6$, $\wp=0.2$. A summary of the corresponding results compared to the true values, calculated using (\ref{threshold:eq}), is provided in Table \ref{Jackson:tab}. \subsection{Multi-class Reentrant Lines} For reentrant lines, $\Theta=\langle\vec{v}\rangle$, with $\vec{v}=(1,0,\ldots,0)$, so that under the monotonicity condition (\textbf{M}1) the $\Theta$-stability region is determined (only) by the stability threshold $r_*$. For the networks considered below, it has been demonstrated in \cite{LM:2016} that monotonicity conditions (\textbf{M}1) and (\textbf{M}2) hold; however, for illustrative purposes, here we test condition (\textbf{M}1) numerically; see Table \ref{allTheta}. Our first example is the network in Example \ref{Dai:exmp}, for which $\bar{r}=\bar{r}_1=\bar{r}_2=1/0.9\simeq 1.111$, cf.~(\ref{critic:eq}). Table \ref{allR} (A) displays estimates $\hat{r}_\varepsilon$ for the above specified $\varepsilon$'s. Secondly, consider the Lu-Kumar network \cite{LK:91}, in which both stations employ a (preemptive) priority policy, as illustrated in Figure \ref{LK:fig}. Stability holds iff the network is subcritical and $\theta(\beta_2+\beta_4)<\beta_2\beta_4$ \cite{Dai:96}. For our numerical experiments, we let $\beta_1=1.2$, $\beta_3=2$ and $\beta_2=\beta_4=1$, hence $\bar{r}=0.545$ and $r_*=0.5$; this is illustrated in Table \ref{allR} (B). Finally, consider the FCFS version of the Lu-Kumar network, with the same service rates; in this case, determining the stability region is an open problem, cf.~\cite{Dai:96}. Our numerical results, provided in Table \ref{allR} (C), suggest that stability and subcriticality are equivalent. The estimates $\hat{r}_\varepsilon$ in Table \ref{allR} provide approximations for $r_\varepsilon(t)$, the root of the equation $\varphi_t(r)=\varepsilon$, where $t=t_N=t_0+bN$, which in turn approximate $r_\varepsilon$ (for large $t$). Furthermore, it holds that $$\lim_{\varepsilon\rightarrow 0} r_\varepsilon(t)=\bar{r},\quad \lim_{\varepsilon\rightarrow 0}\lim_{t\rightarrow\infty}r_\varepsilon(t)=r_*.$$ In particular, the limits above are not interchangeable when $r_*\neq\bar{r}$ and the iterates $\hat{r}_\varepsilon$ do not converge to $r_*$ in these cases, as suggested by Table \ref{allR} (A) and (B). \section{Concluding Remarks}\label{remk:sec} In this paper we have developed a simulation-based numerical method for determining the stability region (w.r.t.~arrival rates) associated with Markovian McQNs. Our method identifies thresholds at which the queue sizes `explode'. In particular, stability regions for networks for which no analytical stability conditions are known can be approximated numerically. 
The method does not extend in a straightforward way to the non-Markovian McQNs (non-exponential distributions), as the required (stochastic) monotonicity properties for such networks have not been not established yet. The complexity of a given network is reflected by the number of stations, classes and positive entries in the routing matrix. The computation time for generating one iterate $x_n$ increases linearly w.r.t.\ the time-horizon $t_n$. The trade-off between method complexity and accuracy is governed by the growth rate of the sequence $\{t_n\}_n$, hence gaining insight into the impact of the choice of $\{t_n\}_n$ deserves future research efforts. \appendix \noindent\textbf{Proof} of Thm.\ \ref{conv:them}: The proof is based on Thms.\ 1 and 2 in \cite{Burk:56}. Namely, for every $n\geq 0$ and $r\geq 0$, let us define the mean, resp. the variance: $$\omega_n(r):=\mathbb{E}\left[\left(\varepsilon-z_n\right)|r\right],\:\sigma_n(r):=\mathrm{Var}\left[\left(\varepsilon-z_n\right)|r\right].$$ By the monotonicity assumption (\textbf{M}1), $\omega_n$ is continuous and increasing w.r.t.~$r\in(0,\bar{r})$ and $n$. In addition, $\sigma_n(r)\leq 1$, for any $n$ and $r\geq 0$. For the convergence part we apply Thm.\ 1 in \cite{Burk:56}; to this end, we verify the following set of conditions: \begin{enumerate} \item [(i)] $\omega_n,\sigma_n:[0,\infty)\longrightarrow\mathbb{R}$ are measurable, s.t. $$\sup_{(n,r)}\frac{|\omega_n(r)|}{1+r}<\infty,\:\sup_{(n,r)}\sigma_n(r)<\infty;$$ \item [(ii)] for any $\epsilon>0$ there exists $n_\epsilon\geq 1$ s.t.~$|r-r_\varepsilon|>\epsilon$ entails $(r-r_\varepsilon)\omega_n(r)>0$, for $n\geq n_\epsilon$; \item [(iii)] $\sum_n a_n^2<\infty$ and for $0<\epsilon_1<\epsilon_2$ it holds that $$\sum_{n\geq 1}a_n\left(\inf_{\epsilon_1<|r-r_\varepsilon|<\epsilon_2}|\omega_n(r)|\right)=\infty.$$ \end{enumerate} Condition (i) is immediate since $\omega_n(r)\in(-1,1)$ and $\sigma_n(r)\in[0,1]$, for any $(n,r)$. Set $\omega(r):=\varepsilon-\tilde{\varphi}(r)$ and note that $r_\varepsilon$ appears as the (unique) root of the equation $\omega(r)=0$, with $\omega$ being (strictly) increasing in $r_\varepsilon$, cf. (\textbf{M}2). Let $\epsilon>0$; since $\omega(r_\varepsilon+\epsilon)>0$ and $\omega_n(r)\uparrow\omega(r)$, for $n\rightarrow\infty$, it follows that there exists some $n_\epsilon\geq 1$ such that $n\geq n_\epsilon$ entails $\omega_n(r_\varepsilon+\epsilon)>0$, hence for any $r>r_\varepsilon+\epsilon$ it holds that $$(r-r_\varepsilon)\omega_n(r)\geq(r-r_\varepsilon)\omega_n(r_\varepsilon+\epsilon)>0.$$ On the other hand, $r<r_\varepsilon-\epsilon$ entails $$\omega_n(r)\leq\omega(r)<\omega(r_\varepsilon)=0,$$ for any $n$, hence (ii) follows true, as well. Finally, to verify (iii) we let $\epsilon_1<\epsilon_2$ and (as before) we choose $n_1\geq 1$ (depending only on $\epsilon_1$), such that $\omega_n(r)>0$ for $r>r_\varepsilon+\epsilon_1$ and every $n\geq n_1$. 
Since $\omega_n(r)<0$ for $r\leq r_\varepsilon-\epsilon_1$ and $n\geq 1$, one obtains for $n\geq n_1$ and $\epsilon_1<|r-r_\varepsilon|<\epsilon_2$ \begin{eqnarray*} |\omega_n(r)| & = & \min\left\{\omega_n(r_\varepsilon+\epsilon_1),-\omega_n(r_\varepsilon-\epsilon_2)\right\} \\ & \geq & \min\{\varepsilon-\tilde{\varphi}_{t_n}(r_\varepsilon+\epsilon_1),\tilde{\varphi}(r_\varepsilon-\epsilon_2)-\varepsilon\}; \end{eqnarray*} using $\lim_n\min\{u_n,v\}=\min\{\lim_n u_n,v\}$ yields $$\inf_r|\omega_n(r)|\geq \min\{\varepsilon-\tilde{\varphi}(r_\varepsilon+\epsilon_1),\tilde{\varphi}(r_\varepsilon-\epsilon_2)-\varepsilon\}>0,$$ where the infimum is taken w.r.t.~$\epsilon_1<|r-r_\varepsilon|<\epsilon_2$. Hence, (iii) holds true, provided that $$\sum_{n\geq 1} a_n=\infty,\quad\sum_{n\geq 1} a_n^2<\infty;$$ this proves the first claim. For the second part, we invoke Thm.\ 2 in \cite{Burk:56}; to this end, we verify the following set of conditions: \begin{enumerate} \item [(i)] For any $n$, $\omega_n(r)$ is strictly increasing in $r$; in particular, there exists the root $r_{\varepsilon}^n$ of $\omega_n(r)=0$. \item [(ii)] The function sequence $\{\tau_n\}_{n\geq 0}$, defined as $$\tau_n(r):=\left\{ \begin{array}{ll} (r-r_{n,\varepsilon})^{-1}\omega_n(r), & \hbox{$r\neq r_{n,\varepsilon}$;} \\ -\varphi'(r_\varepsilon), & \hbox{$r=r_{n,\varepsilon}$,} \end{array} \right.$$ satisfies $\tau_n(r)\in[M_1,M_2]$, for all $n,r$, with $M_1>0$ and $\tau_n(x_n)\rightarrow-\tilde{\varphi}'(r_\varepsilon)$ for $x_n\rightarrow r_\varepsilon$. \item [(iii)] There exists constants $0\leq M_3<M_4$ such that $M_3\leq\sigma_n(r)=\mathrm{Var}[z_n|r]\leq M_4$, for all $n,r$, and $x_n\rightarrow r_\varepsilon$ entails $\sigma_n(x_n)\rightarrow\sigma>0$. \item [(iv)] there exist $\kappa,\omega$, s.t.\ $(\omega-1/2)\in(0,\kappa)$ and $$(r_{\varepsilon}^n-r_\varepsilon)=o(n^{-\kappa}),\:n^\omega a_n\rightarrow a>(\omega-1/2)/M_1.$$ \end{enumerate} Condition (i) is immediate since $\omega_n(r)=\varepsilon-\tilde{\varphi}_{t_n}(r)$ and $\tilde{\varphi}_t$ decreases, with $\tilde{\varphi}(0)=1$, vanishing at infinity. To verify (ii), we note that since $\tilde{\varphi}_t'$ is continuous and non-vanishing on $[0,r_*]$, hence it is bounded away from both infinity and $0$, for any $t>0$; moreover, since $\tilde{\varphi}_{t_n}'$ converges uniformly to $\tilde{\varphi}_t'$, which is continuous, non-vanishing on $(0,r_*)$, it follows that $\tilde{\varphi}_{t_n}'$ is uniformly bounded away from both $0$ and infinity. Furthermore, if $x_n\rightarrow r_\varepsilon$, such that $x_n\neq r_{\varepsilon}^n$, for all $n$, we obtain (mean value) $\tau_n(x_n)=-\tilde{\varphi}_{t_n}'(u_n)$, for some $u_n$ satisfying $|u_n-r_\varepsilon|<\epsilon$, for some (small) $\epsilon>0$. The convergence $\tilde{\varphi}_{t_n}'(u_n)\rightarrow\tilde{\varphi}'(r_\varepsilon)$ follows from the uniform convergence of the derivatives; the convergence is not affected if $x_n=r_{\varepsilon}^n$, for some $n$'s. Furthermore, the variance converges uniformly, viz. $$\sigma_n(r)\rightarrow\sigma(r):=\left\{ \begin{array}{ll} \mathrm{Var}_{\pi_\theta}[\phi(X)], & \hbox{$r<r_*$;} \\ 0, & \hbox{$r\geq r_*$,} \end{array} \right.$$ where (recall) $\theta=r\cdot\vec{v}$ and $\pi_\theta$ denotes the equilibrium distribution under $\mathbb{P}_\theta$, for $\theta\in\langle\vec{v}\,\rangle_{\rm s}$. We conclude that $x_n\rightarrow r_\varepsilon$ entails $\sigma_n(x_n)\longrightarrow\sigma(r_\varepsilon)>0$, as required. 
Finally, let $\gamma_\epsilon:=\inf_{|r-r_\varepsilon|<\epsilon}|\tilde{\varphi}'(r)|$, for $\epsilon>0$; since $\tilde{\varphi}'(r_\varepsilon)<0$, for small $\epsilon$ we have $\gamma_\epsilon>0$. On the other hand, for every $n\geq 0$ it holds that $$\tilde{\varphi}(r_{\varepsilon}^n)-\tilde{\varphi}_{t_n}(r_{\varepsilon}^n)= \tilde{\varphi}(r_{\varepsilon}^n)-\tilde{\varphi}(r_\varepsilon)=-\tilde{\varphi}'(u_n)(r_{\varepsilon}^n-r_\varepsilon),$$ for some $u_n\in(r_{\varepsilon},r_{\varepsilon}^n)$; for the first equality we used the fact that $\tilde{\varphi}(r_\varepsilon)=\varepsilon=\tilde{\varphi}_{t_n}(r_{\varepsilon}^n)$, while the second one follows by the mean value theorem. Consequently, for large $n$, satisfying $|r_{\varepsilon}^n-r_\varepsilon|<\epsilon$, we have $$(r_{\varepsilon}^n-r_\varepsilon)\leq\gamma_\epsilon^{-1}\sup_{0\leq r\leq r_*}|\tilde{\varphi}_{t_n}(r)-\tilde{\varphi}(r)|= o(n^{-\kappa});$$ this proves the claim and concludes the proof. $ \square$ \input allTables \renewcommand{0.98}{0.98} \iffalse \begin{IEEEbiography}[{\includegraphics[width=1in, height=1.25in, keepaspectratio]{hphoto1}}] {Haralambie Leahu} (1978) is affiliated to the Korteweg-de Vries Institute for Mathematics of the University of Amsterdam. He received a MSc (2002) degree from University of Bucharest and a PhD degree from VU University, Amsterdam (2008). His current research interest lies in the area of numerical simulation-based methods for control and optimization of stochastic models, covering applications in manufacturing and engineering systems. \end{IEEEbiography} \begin{IEEEbiography}[{\includegraphics[width=1in, height=1.25in, keepaspectratio]{mphoto1}}] {Michel Mandjes} (1970) is a full professor of applied probability at the Korteweg-de Vries Institute for Mathematics of the University of Amsterdam. He received MSc degrees in Mathematics and Econometrics from the VU University, Amsterdam (1993), and a PhD in Operations Research from the same university (1996). He has worked as a member of technical staff subsequently at KPN Research, Leidschendam, the Netherlands, and at Lucent Technologies/Bell Laboratories, Murray Hill NJ, United States. From 2000 has been appointed as a full professor, between 2000 and 2004 at the University of Twente, and as of 2004 at the University of Amsterdam (since 2006 full-time). Between 2000 and 2006 he worked at CWI, Amsterdam, where he was department head. In 2008 he was visiting professor at Stanford University, and in 2013-2014 at New York University. Mandjes' research focuses on various aspects of stochastic processes, with a broad application range. Key topics include queueing theory, asymptotic techniques, large deviations, weak convergence and fluctuation theory. \end{IEEEbiography} \begin{IEEEbiography}[{\includegraphics[width=1in, height=1.25in, keepaspectratio]{aphoto1}}] {Ana-Maria Oprescu} is a researcher in the Computer Systems group at the VU University, Amsterdam. She has received a MSc degree from the VU University, Amsterdam (2006), and a PhD degree from the same university (2013). Her current research focuses on developing fast and accurate methods for multi-objective optimization problems arising from scheduling various workloads across multiple cloud providers. \end{IEEEbiography}\fi \end{document}
\begin{document} \title{The Lie module structure on the Hochschild cohomology groups of monomial algebras with radical square zero} \begin{abstract} We study the Lie module structure given by the Gerstenhaber bracket on the Hochschild cohomology groups of a monomial algebra with radical square zero. The description of this Lie module structure is given in terms of the combinatorics of the quiver. The Lie module structure is related to the classification of finite dimensional modules over simple Lie algebras when the quiver is given by two loops and the ground field is the complex numbers. \end{abstract} \section*{Introduction.} Let $A$ be an associative unital $k$-algebra where $k$ is a field. The {\it{$n^{th}$ Hochschild cohomology group of $A$ }}, denoted by $HH^n(A)$, is defined as \[ HH^n(A):= HH^n(A,A)=\, Ext_{A^e}^n(A,A) \] where $A^e$ is the enveloping algebra $A^{op} \otimes_k A$ of $A$. Thus, for example, $HH^0(A)$ is the center of $A$ and the first Hochschild cohomology group $HH^1(A)$ is the vector space of outer derivations. Note that the first Hochschild cohomology group has a Lie algebra structure given by the commutator bracket. In \cite{gersten}, Gerstenhaber introduced two operations on the Hochschild cohomology groups: the cup product and the bracket \[ [\, - \, , \, - \,]: HH^n(A) \times HH^m(A) \longrightarrow HH^{n+m-1}(A) . \] He proved that the {\it{Hochschild cohomology of $A$ }}, $$ HH^{*}(A) := \displaystyle{\bigoplus_{n=0}^\infty \, HH^{n}(A)} \, , $$ provided with the cup product is a graded commutative algebra. Furthermore, he demonstrated that $HH^{*+1}(A)$ endowed with the Gerstenhaber bracket has a graded Lie algebra structure. Consequently, $HH^1(A)$ is a Lie algebra and $HH^n(A)$ is a Lie module over $HH^1(A)$. As a matter of fact, the Gerstenhaber bracket restricted to $HH^1(A)$ is the commutator Lie bracket of the outer derivations. Moreover, the cup product and the Gerstenhaber bracket endow $HH^*(A)$ with the so-called Gerstenhaber algebra structure. Moreover, it was shown that the algebra structure on $HH^*(A)$ is invariant under derived equivalence \cite{happel, rickard}. In addition, in \cite{keller}, Keller proved that the Gerstenhaber bracket on $HH^{*+1}(A)$ is preserved under derived equivalence. Therefore, the Lie module structure on $HH^n(A)$ over $HH^1(A)$ is also an invariant under derived equivalence. Understanding both the graded commutative algebra and the graded Lie algebra structures on the Hochschild cohomology of an algebra is a difficult task. Different techniques have been used in order to: $(1)$ describe the Hochschild cohomology algebra (or ring) for some algebras, \cite{holm, cibilssolotar,cibils, erdmannsnashall, erdmannholm, siegelwitherspoon, mariano, erdmannholmsnashall,cagliero,eu,xu}; $(2)$ study the Hochschild cohomology ring modulo nilpotence, \cite{greensnashallsolberg,greensnashallsolberg2,greensnashall} and $(3)$ compute the Gerstenhaber bracket \cite{bustamante,eu2}. On the other hand, C. Strametz studied, in \cite{strametz}, the Lie algebra structure on the first Hochschild cohomology group of monomial algebras. She manages to describe this Lie algebra structure in terms of the combinatorics of the monomial algebras. Moreover, she relates this description to the algebraic groups which appear in Guil-Asensio and Saor{\'\i}n's study of the outer automorphisms \cite{saorin}. In the same paper, Strametz also gave criteria for the simplicity of the first Hochschild cohomology group.
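For the reader's convenience, let us recall the explicit formula defining this bracket at the level of Hochschild cochains (we state it with the sign conventions of \cite{gersten}): for $f\in C^n(A,A)$ and $g\in C^m(A,A)$ one sets
\[
(f\circ g)(a_1,\ldots,a_{n+m-1}) \,=\, \sum_{i=1}^{n}\,(-1)^{(i-1)(m-1)}\,
f(a_1,\ldots,a_{i-1},\,g(a_i,\ldots,a_{i+m-1}),\,a_{i+m},\ldots,a_{n+m-1}),
\]
and the Gerstenhaber bracket is
\[
[\,f\,,\,g\,] \,=\, f\circ g \,-\,(-1)^{(n-1)(m-1)}\, g\circ f\,,
\]
which induces the bracket on cohomology recalled above.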
In this paper we are interested in the Lie module structure on the Hochschild cohomology groups induced by the Gerstenhaber bracket. This approach was suggested by C. Kassel and motivated by the work of C. Strametz. The aim of this paper is to describe the Lie module structure on the Hochschild cohomology groups for monomial algebras of (Jacobson) radical square zero. Recall that a {\it{monomial algebra of radical square zero}} is the quotient of the path algebra of a quiver $Q$ by the two-sided ideal generated by the set of paths of length two. We will use the combinatorics of the quiver in order to describe the Lie module structure. Moreover, for the case of the two loops quiver, we relate the Lie module structure of $HH^n(A)$ to the classification of the (finite-dimensional) irreducible Lie modules over $sl_2$ when the ground field is the field of complex numbers. The Hochschild cohomology groups of those algebras have been described in \cite{cibils} using the combinatorics of the quiver. This description makes it possible to prove that the cup product of elements of positive degree is zero when $Q$ is not an oriented cycle. In this paper, we use Cibils' description of $HH^n(A)$ in order to study the Lie module structure on the Hochschild cohomology groups. First, we reformulate the Gerstenhaber bracket for the realization of the Hochschild cohomology groups obtained through the computations in \cite{cibils}. In the first section we construct two quasi-isomorphisms between the Hochschild cochain complex and the complex induced by the reduced projective resolution. Then in the second section, using these quasi-isomorphisms, we introduce a new bracket, which coincides with the Gerstenhaber bracket. In the third section, we use the combinatorics of the quiver to describe the Gerstenhaber bracket. In the last section, we study a particular case: the monomial algebra of radical square zero given by the two loops quiver. For this algebra, we prove that $HH^1(A)$ is isomorphic as a Lie algebra to $gl_2 \mathbb{C}$ and then we identify a copy of $sl_2 \mathbb{C}$ in $HH^1(A)$. In order to describe $HH^n(A)$ as a Lie module over $HH^1(A)$, we start by studying the Lie module structure of $HH^n(A)$ over $sl_2 \mathbb{C}$. In this article, we determine the decomposition of $HH^n(A)$ into a direct sum of irreducible modules over $sl_2 \mathbb{C}$. Moreover, we show that this decomposition can be obtained by an algorithm. In the following table we illustrate the decomposition for the Hochschild cohomology groups of degrees between 2 and 7. We denote by $V(i)$ the unique irreducible Lie module of dimension $i+1$ over $sl_{2} \mathbb{C}$. \[ \scalebox{0.9} {$ \begin{array}{c||ccccccccc} n & V(0)&V(1)&V(2)&V(3)&V(4)&V(5)&V(6)&V(7)&V(8) \\ \hline \hline \, & \, & \, & \, & \, & \, & \, & \, & \, & \, \\ HH^2(A) & \, & 1 & \, & 1 & \, & \, & \, & \, & \, \\ \, & \, & \, & \, & \, & \, & \, & \, & \, & \, \\ HH^3(A) & 1 & \, & 2 & \, & 1 & \, & \, & \, & \, \\ \, & \, & \, & \, & \, & \, & \, & \, & \, & \,\\ HH^4(A) & \, & 3 & \, & 3 & \, & 1 & \, & \, & \, \\ \, & \, & \, & \, & \, & \, & \, & \, & \,& \, \\ HH^5(A) & 3 & \, & 6 & \, & 4 & \, & 1 & \, & \, \\ \, & \, & \, & \, & \, & \, & \, & \, & \, & \, \\ HH^6(A) & \, & 9 & \, & 10 & \, & 5 & \, & 1 & \, \\ \, & \, & \, & \, & \, & \, & \, & \, & \,& \, \\ HH^7(A) & 9 & \, & 19 & \, & 15 & \, & 6 & \, & 1 \\ \end{array} $} \] $\quad$ \\ In the above table, let us remark that the last three diagonals form a portion of Pascal's triangle. 
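Concretely, reading the table along its diagonals from the upper right, the three outermost diagonals are \[ 1,1,1,1,1,1, \qquad 1,2,3,4,5,6, \qquad 1,3,6,10,15, \] that is, in the row of $HH^n(A)$ these entries are the binomial coefficients $\left( \begin{array}{c} n-1 \\ 0 \end{array}\right)$, $\left( \begin{array}{c} n-1 \\ 1 \end{array}\right)$ and $\left( \begin{array}{c} n-1 \\ 2 \end{array}\right)$. 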
Note also that the integer sequences given by the first and second columns are the same. We will prove that these two remarks are in general true. This will enable us to show the validity of the algorithm and, as a consequence, to obtain the other diagonals of the table. Moreover, we have introduced this sequence of numbers into the Encyclopedia of Integer Sequences [http://www.research.att.com/~njas/sequences/index.html]; it appears to be related to two sequences. Among these sequences, there is one that represents the expected saturation of a binary search tree (or BST) on $n$ nodes times the number of binary search trees on $n$ nodes, or alternatively, the sum of the saturation of all binary search trees on $n$ nodes. Another sequence gives the number of standard tableaux of shape $(n+1,n-1)$. The two sequences are given by explicit formulas. In a future paper, we will apply the same techniques as those used in this article to prove that the first Hochschild cohomology group of the monomial algebra of radical square zero is the Lie algebra $gl_{n} \mathbb{C}$ when the quiver is given by $n$ loops. Moreover, we will determine, as we did for the two loops case, the decomposition into a direct sum of irreducible modules over $sl_{n} \mathbb{C}$, but only for the second Hochschild cohomology group. We will also be dealing with the case when the quiver has no loops and no cycles. \thanks{{\bf{Acknowledgment.}} This work will be part of my PhD thesis at the University of Montpellier 2. I am indebted to my advisor, Professor Claude Cibils, not only for valuable discussions about the subject and his helpful remarks on this paper, but also for his encouragement. I would like to thank the referee for helpful suggestions in improving this paper. } \section{A comparison map between the bar projective resolution and the reduced bar projective resolution.} In this section, we deal with finite dimensional $k$-algebras whose semisimple part (i.e., the quotient by its radical) is isomorphic to a finite number of copies of the field. Monomial algebras of radical square zero are a particular case of these algebras. \subsection*{Two projective resolutions.} The usual $A^e$-projective resolution of $A$ used to calculate the Hochschild cohomology groups is the standard bar resolution. The {\it{standard bar resolution}}, which we will denote by $\bold{S}$, is given by the following exact sequence: \[ \bold{S}:= \qquad \quad \cdots \rightarrow A^{\otimes^{n+1}_k} \stackrel{\delta}{\rightarrow} \, A^{\otimes^{n}_k} \stackrel{\delta}{\rightarrow} \, \cdots \stackrel{\delta}{\rightarrow} \, A^{\otimes^3_k} \stackrel{\delta}{\rightarrow} \, A \otimes_k A \stackrel{\mu}{\rightarrow} \, A \rightarrow \, 0 \] where $\mu$ is the multiplication and the $A^e$-morphisms $\delta$ are given by \[ \delta (x_1 \otimes \dots \otimes x_{n+1})= \sum_{i=1}^{n} \, (-1)^{i+1} x_1 \otimes \dots \otimes x_ix_{i+1} \otimes \dots \otimes x_{n+1} \] where $x_i \in A$ and $\otimes$ means $\otimes_k$. Now, the $A^e$-projective resolution of $A$ used in \cite{cibils} to compute the Hochschild cohomology groups of a monomial algebra of radical square zero is the {\it{reduced bar resolution}}. It is defined for a finite dimensional $k$-algebra $A$ whose Wedderburn-Malcev decomposition is given by the direct sum $A=E \oplus r$, where $r$ is the Jacobson radical of $A$ and $E \cong A/r \cong k \times k \cdots \times k$. In the sequel $A$ denotes an algebra satisfying these conditions. Let us denote by $\bold{R}$ the reduced bar resolution. 
It is given by the following exact sequence: \[ \bold{R}:= \cdots \rightarrow A {\mathcal{T}}nsorE r^{\otimes^{n+1}_E} {\mathcal{T}}nsorE A \stackrel{\delta}{\rightarrow} \, A {\mathcal{T}}nsorE r^{\otimes^{n}_E} {\mathcal{T}}nsorE A \stackrel{\delta}{\rightarrow} \, \cdots \stackrel{\delta}{\rightarrow} \, A {\mathcal{T}}nsorE r {\mathcal{T}}nsorE A \stackrel{\delta}{\rightarrow} \, A {\mathcal{T}}nsorE A \stackrel{\mu}{\rightarrow} \, A \rightarrow \, 0 \] where $\mu$ is the multplication and the $A^e$-morphisms $\delta$ are given by \[ \begin{array}{cl} \delta (a \otimes x_1 \otimes \dots \otimes x_{n+1} \otimes b) & = ax_1 \otimes x_2 \otimes \dots \otimes x_{n+1} \otimes b \\ \, & + \, \sum_{i=1}^{n} \, (-1)^i a \otimes x_1 \otimes \dots \otimes x_ix_{i+1} \otimes \dots \otimes b \\ \, & + \, (-1)^{n+1} \, a \otimes x_1 \otimes \dots \otimes x_{n} \otimes x_{n+1}b \end{array} \] where $a,b \in A$, $x_i \in r$ and $\otimes$ means ${\mathcal{T}}nsorE$. The proof that this sequence is a projective resolution can be found in \cite{cibils2}. \subsection*{Comparison maps.} Theorically, a comparison map exists between these two projective resolutions. The objective of this section is to give an explicit comparison map between the projective resolutions $\bold{S}$ and $\bold{R}$ in both directions. Such comparison map will induce some quasi-isomorphisms between the Hochschild cochain complex and the complex induced by the reduced bar resolution. The explicit calculations of these quasi-isomorphisms, enables to reformulate the Gerstenhaber bracket. In this paragraph, we are going to give two maps of complexes: $$ p:\bold{S} \rightarrow \bold{R} {\mathcal{T}}xt{ and } s:\bold{R} \rightarrow \bold{S}.$$ This means we will define maps $(p_n)$ and $(s_n)$ such that the next diagram \begin{equation}\label{Diagrama} \xymatrix{ \: \cdots \: A {\mathcal{T}}nsork A^{\otimes^{n+1}_k} {\mathcal{T}}nsork A \ar[d]_{p_{n+1}} \ar[r]^\delta & A {\mathcal{T}}nsork A^{\otimes^n_k} {\mathcal{T}}nsork A \ar[d]_{p_n} \: \cdots & A {\mathcal{T}}nsork A \ar[d]_{p_0} \ar[r]^{\mu} & A \ar[d]_{id} \ar[r] & 0 \\ \: \cdots \: A {\mathcal{T}}nsorE r^{\otimes^{n+1}_E} {\mathcal{T}}nsorE A \ar[r]^\delta \ar[d]_{s_{n+1}} & A {\mathcal{T}}nsorE r^{\otimes^n_E} {\mathcal{T}}nsorE A \ar[d]_{s_n} \: \cdots & A {\mathcal{T}}nsorE A \ar[d]_{s_0} \ar[r]^{\mu} & A \ar[d]_{id} \ar[r] & 0 \\ \: \cdots \: A {\mathcal{T}}nsork A^{\otimes^{n+1}_k} {\mathcal{T}}nsork A \ar[r]^{\delta} & A {\mathcal{T}}nsork A^{\otimes^n_k} {\mathcal{T}}nsork A \: \cdots & A {\mathcal{T}}nsork A \ar[r]^\mu & A \ar[r] & 0 } \end{equation} commutes. \begin{map}[$p_n$] We define $p_0$ as the linear map given by \[ \begin{array}{rlllc} p_0: & A {\mathcal{T}}nsork A & \rightarrow & A {\mathcal{T}}nsorE A & \, \\ \, & a {\mathcal{T}}nsork b & \mapsto & a {\mathcal{T}}nsorE b & . \end{array} \] Now, let $n {\mathfrak{g}}eq 1$. Define \[ p_n: A {\mathcal{T}}nsork A^{\otimes^n_k} {\mathcal{T}}nsork A \rightarrow A {\mathcal{T}}nsorE r^{\otimes^n_E} {\mathcal{T}}nsorE A \] as the linear map given by \begin{center} \scalebox{0.9}{$ a {\mathcal{T}}nsork x_1 {\mathcal{T}}nsork \cdots {\mathcal{T}}nsork x_i {\mathcal{T}}nsork \cdots {\mathcal{T}}nsork x_{n+1} {\mathcal{T}}nsork b \mapsto a {\mathcal{T}}nsorE \pi(x_1){\mathcal{T}}nsorE \cdots {\mathcal{T}}nsorE \pi(x_i) {\mathcal{T}}nsorE \cdots {\mathcal{T}}nsorE \pi(x_{n+1}) {\mathcal{T}}nsorE b $.} \end{center} where $\pi$ denotes the projection map from $A$ to the Jacobson radical square zero. 
Notice that $p_n$ is an $A^e$-morphism for all $n$. \end{map} In order to define the maps $(s_n)$ we introduce some notation. In the sequel, let $E_0$ denote a complete system of idempotents and orthogonal elements of $E$. Note that the set $E_0$ is finite. \begin{remark} Now, consider elements of $A {\mathcal{T}}nsorE r^{\otimes^n_E} {\mathcal{T}}nsorE A$ of the form $$ ae_{j_1} {\mathcal{T}}nsorE \cdots {\mathcal{T}}nsorE e_{j_{i-1}} x_{i-1} e_{j_i} {\mathcal{T}}nsorE e_{j_i}x_ie_{j_{i+1}} {\mathcal{T}}nsorE e_{j_{i+1}} x_{i+1} e_{j_{i+2}} {\mathcal{T}}nsorE \cdots {\mathcal{T}}nsorE e_{j_{n+1}}b $$ where each $e_{j_i}$ is in $E_0$, $a,b$ are in $A$ and $x_i$ in $r$. It is not difficult to see that those elements generate the vector space $A {\mathcal{T}}nsorE r^{\otimes^n_E} {\mathcal{T}}nsorE A$. Indeed, we have that \[ \begin{array}{l} a {\mathcal{T}}nsorE x_1 {\mathcal{T}}nsorE \cdots {\mathcal{T}}nsorE x_i {\mathcal{T}}nsorE \cdots {\mathcal{T}}nsorE x_{n} {\mathcal{T}}nsorE b = \\ \, \\ \displaystyle{\sum_{ j_{1},\dots,j_{n+1} }} ae_{j_1} {\mathcal{T}}nsorE \cdots {\mathcal{T}}nsorE e_{j_{i-1}} x_{i-1} e_{j_i} {\mathcal{T}}nsorE e_{j_i}x_ie_{j_{i+1}} {\mathcal{T}}nsorE e_{j_{i+1}} x_{i+1} e_{j_{i+2}} {\mathcal{T}}nsorE \cdots {\mathcal{T}}nsorE e_{j_{n+1}}b \end{array} \] where the sum is over all $(n+1)$-tuples $(e_{j_1},\dots,e_{j_i},\dots,e_{j_{n+1}})$ of elements of $E_0$. \end{remark} \begin{map}[$s_n$] Define $s_0$ as the linear map given by \[ \begin{array}{rlllc} s_0: & A {\mathcal{T}}nsorE A & \rightarrow & A {\mathcal{T}}nsork A & \, \\ \, & ae {\mathcal{T}}nsorE eb & \mapsto & ae {\mathcal{T}}nsork eb & . \end{array} \] So we have that \[ s_0(a {\mathcal{T}}nsorE b)= \displaystyle{\sum_{e \in E_0} ae {\mathcal{T}}nsork eb} \, . \] It is well defined because $s_0(ae {\mathcal{T}}nsorE b)= ae {\mathcal{T}}nsork eb = s_0(a{\mathcal{T}}nsorE eb)$ for all $e \in E$. Now, let $n {\mathfrak{g}}eq 1$. Define \[ s_n: A {\mathcal{T}}nsorE r^{\otimes^n_E} {\mathcal{T}}nsorE A \rightarrow A {\mathcal{T}}nsork A^{\otimes^n_k} {\mathcal{T}}nsork A \] as the linear map given by \[ \begin{array}{l} ae_{j_1} {\mathcal{T}}nsorE \cdots {\mathcal{T}}nsorE e_{j_{i-1}} x_{i-1} e_{j_i} {\mathcal{T}}nsorE e_{j_i}x_ie_{j_{i+1}} {\mathcal{T}}nsorE e_{j_{i+1}} x_{i+1} e_{j_{i+2}} {\mathcal{T}}nsorE \cdots {\mathcal{T}}nsorE e_{j_{n+1}}b \mapsto \\ ae_{j_1} {\mathcal{T}}nsork \cdots {\mathcal{T}}nsork e_{j_{i-1}} x_{i-1} e_{j_i} {\mathcal{T}}nsork e_{j_i}x_ie_{j_{i+1}} {\mathcal{T}}nsork e_{j_{i+1}} x_{i+1} e_{j_{i+2}} {\mathcal{T}}nsork \cdots {\mathcal{T}}nsork e_{j_{n+1}}b \end{array} \] where each $e_{j_i}$ is in $E_0$. So we have that \[ \begin{array}{l} s_{n} (a {\mathcal{T}}nsorE x_1 {\mathcal{T}}nsorE \cdots {\mathcal{T}}nsorE x_i {\mathcal{T}}nsorE \cdots {\mathcal{T}}nsorE x_{n} {\mathcal{T}}nsorE b )= \\ \, \\ \displaystyle{\sum_{j_{1},\dots,j_{n+1} }} ae_{j_1} {\mathcal{T}}nsork \cdots {\mathcal{T}}nsork e_{j_{i-1}} x_{i-1} e_{j_i} {\mathcal{T}}nsork e_{j_i}x_ie_{j_{i+1}} {\mathcal{T}}nsork e_{j_{i+1}} x_{i+1} e_{j_{i+2}} {\mathcal{T}}nsork \cdots {\mathcal{T}}nsork e_{j_{n+1}}b \end{array} \] where the sum is over all $(n+1)$-tuples $(e_{j_1},\dots,e_{j_i},\dots,e_{j_{n+1}})$ of elements of $E_0$. Notice that $s_n$ is an $A^e$-morphism. 
\end{map} \begin{remark} It is clear that $p_ns_n=id_{A {\mathcal{T}}nsorE r^{\otimes_E^n} {\mathcal{T}}nsorE A}.$ \end{remark} \begin{lemma}\label{map} The maps $$ p:\bold{S} \rightarrow \bold{R} {\mathcal{T}}xt{ and } s:\bold{R} \rightarrow \bold{S}$$ defined above are maps of complexes. \end{lemma} \begin{proof} A straightforward verification shows that the diagram (\ref{Diagrama}) is commutative. \end{proof} \subsection*{Two complexes.} We will denote the {\it{Hochschild cochain complex}} by $\bold{C^\bullet(A,A)}$. Recall that it is defined by the complex, \[ \begin{array}{rll} 0 \rightarrow A \stackrel{\delta}{\rightarrow} Hom_{k}(A,A) \stackrel{\delta}{\longrightarrow} & \cdots &\, \\ \cdots \: \longrightarrow & Hom_{k}(A^{\otimes^n_k},A) \stackrel{\delta}{\longrightarrow} Hom_{k}(A^{\otimes^{n+1}_k},A) & \cdots \end{array} \] where $ \delta(a)(x)=xa-ax $ for $a$ in $A$ and \[ \begin{array}{cl} \delta f(x_1 \otimes \cdots \otimes x_n \otimes x_{n+1}) = & x_1f(x_2 \otimes \cdots \otimes x_{n+1}) \, + \\ \, & \sum_{i=1}^{n} (-1)^i f(x_1 \otimes \cdots \otimes x_ix_{i+1} \otimes \cdots \otimes x_{n+1})+ \\ \, & (-1)^{n+1} f(x_1 \otimes \cdots \otimes x_n)x_{n+1} \end{array} \] for $f$ in $Hom_{k}(A^{\otimes^{n}_k},A)$. Notice that after applying the functor $Hom_{A^e}(-,A)$ to the standard bar resolution, the Hochschild cochain complex is obtained by identifying $Hom_{A^e}(A \otimes_k A^{\otimes^n_k} \otimes_k A,A)$ to $Hom_{k}(A^{\otimes^n_k},A)$. The {\it{reduced complex}} is obtained from the reduced bar resolution in a similar way. First we apply $Hom_{A^e}(-,A)$ to the reduced bar resolution, then we identify the vector space $Hom_{A^e}(A \otimes_E r^{\otimes^n_E} \otimes_E A,A)$ to $Hom_{E^e}(r^{\otimes^n_E},A)$. Therefore, the reduced bar complex that we denote by $\bold{R^\bullet(A,A)}$ is given by \[ \begin{array}{rll} 0 \rightarrow A^E \stackrel{\delta}{\rightarrow} Hom_{E^e}(r,A) \stackrel{\delta}{\longrightarrow} & \cdots &\, \\ \cdots \: \longrightarrow & Hom_{E^e}(r^{\otimes^n_E},A) \stackrel{\delta}{\longrightarrow} Hom_{E^e}(r^{\otimes^{n+1}_E},A) & \cdots \end{array} \] where $A^E$ is the subalgebra of $A$ defined as follows: \[ A^E:= \{ a \in A \, | \, ae=ea {\mathcal{T}}xt{ for all } e \in E\}. \] The differentials in the reduced complex are given as the above formulas. \subsection*{Induced quasi-isomorphism.} In this paragraph, we will compute the quasi-ismorphisms between the Hochschild cochain complex and the reduced complex, induced by the comparison maps $p$ and $s$. We will denote them by $$ p^\bullet: \bold{R^\bullet(A,A)} \rightarrow \bold{C^\bullet(A,A)} {\mathcal{T}}xt{ \, and \, } s^\bullet: \bold{C^\bullet(A,A)} \rightarrow \bold{R^\bullet(A,A)}.$$ \begin{map}[$p^\bullet$] In degree zero, we have that $ p_0:A^E \rightarrow A $ is the inclusion map. For $n {\mathfrak{g}}eq 1$, \[ p^n: Hom_{E^e}(r^{\otimes^n_E},A) \longrightarrow Hom_k(A^{\otimes^n_k},A) \] is given by \[ p^nf (x_1 {\mathcal{T}}nsork \cdots {\mathcal{T}}nsork x_n)= f(\pi(x_1) {\mathcal{T}}nsorE \cdots {\mathcal{T}}nsorE \pi(x_n)) \] where $f$ is in $Hom_{E^e}(r^{\otimes^n_E},A)$ and $x_i \in r$. \end{map} \begin{map}[$s^\bullet$] In degree zero, we have that $ s^0: A \rightarrow A^E $ is given by \[ s^0(x)=\sum_{e \in E_0} exe \] where $x \in A$. 
For $n {\mathfrak{g}}eq 1$, we have that \[ s^n: Hom_k(A^{\otimes^n_k},A) \longrightarrow Hom_{E^e}(r^{\otimes^n_E},A) \] is given by \[ s^nf(x_1 {\mathcal{T}}nsorE \cdots {\mathcal{T}}nsorE x_n)= \sum_{ j_0, \dots, j_{n}} e_{j_0} f(e_{j_0} x_1e_{j_1} {\mathcal{T}}nsork \cdots {\mathcal{T}}nsork e_{j_{i-1}}x_{i}e_{j_i} {\mathcal{T}}nsork \cdots {\mathcal{T}}nsork e_{j_{n-1}}x_n e_{j_n}) e_{j_n} \] where the sum is over all $(n+1)$-tuples $(e_{j_0},\dots,e_{j_i},\dots,e_{j_{n}})$ of elements of $E_0$, $f$ is in $Hom_{k}(A^{\otimes^n_k},A)$ and $x_i$ is in $r$. \end{map} \begin{remark} Let us remark that $s^{\bullet} \, p^{\bullet}= id_{\bold{R^\bullet(A,A)}}$. \end{remark} \section{Gerstenhaber bracket and reduced bracket.} The Gerstenhaber bracket is defined on the Hochschild cohomology groups using the Hochschild complex. In this section we will define the reduced bracket using the reduced complex. We show that the Gerstenhaber bracket and the reduced bracket provides the same graded Lie algebra structure on $HH^{*+1}(A)$. We begin by recalling the Gerstenhaber bracket in order to fix notation. \subsection*{Gerstenhaber bracket.} Set $C^0(A,A):=A$ and for $n {\mathfrak{g}}eq 1$, we will denote the space of Hochschild cochains by \[ C^n(A,A):= Hom_{k}(A^{\otimes^n_k},A). \] In \cite{gersten}, Gerstenhaber defined a right pre-Lie system $\{ C^n(A,A), \circ_i\}$ where elements of $C^n(A,A)$ are declared to have degree $n-1$. The operation $\circ_i$ is given as follows. Given $n {\mathfrak{g}}eq 1$, let us fix $i = 1,\dots, n$. The bilinear map \[ \circ_i: C^n(A,A) \times C^m(A,A) \longrightarrow C^{n+m-1}(A,A) \] is given by the following formula: \begin{center} \scalebox{0.97}{$ f^n \circ_i g^m (x_1 \otimes \cdots \otimes x_{n+m-1}):= f^n(x_1 \otimes \cdots \otimes g^m(x_i \otimes \cdots \otimes x_{i+m-1}) \otimes \cdots \otimes x_{n+m-1})$} \end{center} where $f^n$ is in $C^n(A,A)$ and $g^m$ is in $C^m(A,A)$. Then he proved that such pre-Lie system induces a graded pre-Lie algebra structure on \[ C^{*+1}(A,A):=\bigoplus_{n=1}^\infty C^n(A,A) \] by defining an operation $\circ$ as follows: \[ f^n \circ g^m:= \sum_{i=1}^n (-1)^{(i-1)(m-1)} f^n \circ_i g^m. \] Finally, $C^{*+1}(A,A)$ becomes a graded Lie algebra by defining the bracket as the graded commutator of $\circ$. So we have that \[ [f^n \, , \, g^m]:=f^n \circ g^m - (-1)^{(n-1)(m-1)} g^m \circ f^n. \] \begin{remark} The Gerstenhaber restricted to $C^1(A,A)$ is the usual Lie commutator bracket. \end{remark} Moreover, Gerstenhaber proved that \[ \delta[f^n \, , \, g^m]= [f^n \, , \, \delta g^m] + (-1)^{m-1}[\delta f^n \, , \, g^m] \] where $\delta$ is the differential of Hochschild cochain complex. This formula implies that the following bilinear map: \[ [\, - \, , \, - \,]: HH^n(A) \times HH^m(A) \longrightarrow HH^{n+m-1}(A) \] is well defined. Therefore, $HH^{*+1}(A)$ endowed with the induced Gerstenhaber bracket is also a graded Lie algebra. \subsection*{Reduced Bracket.} In order to define the reduced bracket, we proceed in the same way as Gerstenhaber did. We will define the reduced bracket as the graded commutator of an operation $\circR$. Such operation will be given by $\circulito$. Denote by $C^n_E(r,A)$ the cochain space of the reduced complex, this is \[ C^n_E(r,A):= Hom_{E^e}(r^{\otimes^n_E},A). \] \begin{definition} Let $n{\mathfrak{g}}eq 1$ and fix $i=1, \dots , n$. 
The bilinear map \[ \circulito: C^n_E(r,A) \times C^m_E(r,A) \rightarrow C^{n+m-1}_E(r,A) \] is given by the following formula: \[ f^n \circulito g^m(x_1 \otimes_E \cdots \otimes_E x_{n+m-1}):= f^n(x_1 \otimes_E \cdots \otimes_E \pi g^m(x_i \otimes_E \cdots \otimes_E x_{i+m-1}) \otimes_E \cdots \otimes_E x_{n+m-1}) \] where $f^n$ is in $C^n_E(r,A)$, $g^m$ is in $C^m_E(r,A)$ and $x_1,\dots, x_{n+m-1}$ are in $r$. Let us remark that the image of $g^m$ does not necessarily belong to the radical but the image of $\pi g^m$ clearly does. Therefore $f^n \circulito g^m$ is well defined. \end{definition} Then we can define $\circR$ on \[ C^{*+1}_E(r,A):= \bigoplus_{n=1}^\infty C^n_E(r,A) \] as above, but with $\circ_i$ replaced by $\circulito$. This means that \[ f^n \circR g^m := \sum_{i=1}^n (-1)^{(i-1)(m-1)}f^n \circulito g^m . \] Let us remark that $\circR$ is a graded operation on $C^{*+1}_E(r,A)$ by declaring elements of $C^n_E(r,A)$ to have degree $n-1$. \begin{definition} We call the graded commutator bracket of $\circR$ the {\it{reduced bracket}} and denote it by $[\, - \, , \, - \,]_R$. That is, \[ [\, - \, , \, - \,]_R: C^n_E(r,A) \times C^m_E(r,A) \longrightarrow C^{n+m-1}_E(r,A) \] is given by \[ [f^n \, , \, g^m]_R := f^n \circR g^m - (-1)^{(n-1)(m-1)} g^m \circR f^n. \] \end{definition} The following lemmas relate the Gerstenhaber bracket and the reduced bracket. \begin{lemma}\label{bracketuno} We have the following formula: \[ [f^n \, , \, g^m]_R = s^{n+m-1}[\, p^n f^n \, , \, p^m g^m \,]. \] \end{lemma} \begin{proof} A straightforward verification shows that \[ f^n \circulito g^m = s^{n+m-1}(\,p^n f^n \circ_i p^m g^m\,). \] Since $s^{n+m-1}$ is a linear map, the desired formula follows. \end{proof} \begin{lemma} \label{bracketdos} We have the following formula: \[ p^{n+m-1}[\, f^n \, , \, g^m \,]_R= [\, p^n f^n \, , \, p^m g^m\, ] \] \end{lemma} \begin{proof} Since $p^{n+m-1}$ is a morphism of complexes, we prove that \[ p^{n+m-1} (f^n \circulito g^m) = p^nf^n \circ_i p^m g^m \] by a direct computation. \end{proof} We will write $p^*$ for the morphism \[ p^* : C^{*+1}_E(r,A) \longrightarrow C^{*+1}(A,A) \] induced by $p^\bullet$. The above lemmas, which relate both brackets, yield the following proposition. \begin{proposition}\label{gradedLiealgebra} The graded product $[\, - \, , \, - \,]_R$ endows $C^{*+1}_E(r,A)$ with the structure of a graded Lie algebra. We also have that $p^*$ is a morphism of graded Lie algebras. \end{proposition} \begin{proof} Using Lemma \ref{bracketuno}, it is easy to see that the reduced bracket satisfies the graded antisymmetry property as a consequence of the fact that the Gerstenhaber bracket satisfies the same condition. For the graded Jacobi identity, we proceed in the same way. First, let us write a formula that relates both brackets: using Lemmas \ref{bracketuno} and \ref{bracketdos}, we have that \[ \begin{array}{rcl} [\,[\, f^n \, , \, g^m \,]_R \, , \, h^l \,]_R & = & s^{n+m+l-2}[\, p^{n+m-1}[\, f^n \, , \, g^m \,]_R \, , \, p^lh^l \,] \\ \, & = & s^{n+m+l-2}[\, [ \, p^nf^n \, , \, p^mg^m \, ] \, , \, p^lh^l \, ] \end{array} \] Then, using the linearity of $s^{n+m+l-2}$ and the fact that the Gerstenhaber bracket satisfies the graded Jacobi identity, we have proved that $[\, - \, , \, - \, ]_R$ satisfies the two conditions of the definition of a graded Lie algebra. 
Finally, $p^*$ becomes a Lie graded morphism because of lemma \ref{bracketdos}. \end{proof} Now, the reduced bracket induce a bracket in Hochschild cohomology groups because of the following lemma. \begin{lemma}\label{diferencial} Let $\delta$ be the differential of the Hochschild cocomplex then we have \[ \delta [\, f^n \, , \, g^m \, ]_R= [\, f^n \, , \, \delta g^m \,]_R+(-1)^{m-1}[\,\delta f^n \, , \, g^m \,]_R . \] Hence we have a well defined bracket in the Hochschild cohomology groups: \[ [\, - \, , \, - \,]_R: HH^n(A) \times HH^m(A) \longrightarrow HH^{n+m-1}(A) \, . \] \end{lemma} \begin{proof} We have that \[ \begin{array}{rl} \delta[\, f^n \, , \, g^m \, ]_R & = \delta s^{n+m-1}[\, p^n f^n \, , \, p^m g^m \,] \\ \, & = s^{n+m-1} \delta [\, p^n f^n \, , \, p^m g^m \, ] \\ \, & = s^{n+m-1} [\, p^n f^n \, , \, \delta p^m g^m \,] + (-1)^{m-1}s^{n+m-1} [\, \delta p^n f^n \, , \, p^m g^m \, ] \\ \, & = s^{n+m-1} [\, p^n f^n \, , \, p^m \delta g^m \, ] + (-1)^{m-1}s^{n+m-1} [\, p^n \delta f^n \, , \, p^m g^m \, ] \\ \, &= [\, f^n \, , \, \delta g^m \, ]_R + (-1)^{m-1}[\, \delta f^n \, , \, g^m \, ]_R \end{array} \]\end{proof} We have equipped $HH^{*+1}(A)$ with a graded Lie algebra structure induced by the reduced bracket. We know that $HH^{*+1}(A)$ is already a graded Lie algebra and this structure is given by the Gerstenhaber bracket. We have then the following proposition. \begin{proposition} The graded Lie algebra $HH^{*+1}(A)$ endowed with the Gerstenhaber bracket is isomorphic to $HH^{*+1}(A)$ endowed with the reduced bracket. \end{proposition} \begin{proof} By abuse of notation we continue to write $\overline{p^*}$ for the automorphism of $HH^{*+1}(A)$ given by the family of morphisms $(\overline{p^n})$. Thus, a direct consequence of the above proposition is that $\overline{p^*}$ becomes an isomorphism of graded Lie algebras. \end{proof} \section{Reduced bracket for monomial algebras \\ with radical square zero.} Let $Q$ be a quiver. The path algebra $kQ$ is the $k$-linear span of the set of paths of $Q$ where multiplication is provided by concatenation or zero. We denote by $Q_0$ the set of vertices and $Q_1$ the set of arrows. The trivial paths are denoted by $e_i$ where $i$ is a vertex. The set of all paths of length $n$ is denoted by $Q_n$. In the sequel, let $A$ be a monomial algebra with radical square zero, this is \[ A:=\frac{kQ}{ < Q_2 >}. \] The Jacobson radical of $A$ is given by $r=kQ_1$. Moreover, the Wedderburn-Malcev decomposition of these algebras is $A= kQ_0 \oplus kQ_1$ where $E=kQ_0$. In this section we are going to describe the reduced bracket on $HH^{*+1}(A)$. Such bracket is given in terms of the combinatorics of the quiver. We will use computations of the Hochschild cohomology groups of these algebras given by Cibils in \cite{cibils}. \subsection*{The reduced complex.} Notice that in the case of monomial algebras with radical square zero, the middle-sum terms of the coboundary morphism of the reduced projective resolution $\bold{R}$ vanishes because the multiplication of two arrows is always zero. Therefore, we have that the coboundary morphism is given by the following formula: \[ \begin{array}{cl} \delta (a \otimes x_1 \otimes \dots \otimes x_{n+1} \otimes b) & = ax_1 \otimes x_2 \otimes \dots \otimes x_{n+1} \otimes b \\ \, & + \, (-1) ^{n+1} \, a \otimes x_1 \otimes \dots \otimes x_{n} \otimes x_{n+1}b. \end{array} \] In \cite{cibils} an isomorphic complex to $\bold{R^\bullet(A,A)}$ is given. 
This new complex is obtained in terms of the combinatorics of the quiver. To describe it we will need to introduce some notation. We say that two paths $\alpha$ and $\beta$ are {\it{parallels}} if and only if they have the same source and the same end. If $\alpha$ and $\beta$ are parallel paths we write $\alpha \parallel \beta$. Let $X$ and $Y$ be sets consisting of paths of $Q$, the set of parallel paths $X \parallel Y$ is given by : \[ X \parallel Y:\, =\{ \, ({\mathfrak{g}}amma, {\mathfrak{g}}amma ') \, \in \, X \times Y \mid \, {\mathfrak{g}}amma \parallel {\mathfrak{g}}amma' \, \}. \] For example: \begin{itemize} \item $Q_n \parallel Q_0$ is the set of {\it{pointed oriented cycles}}, this is the set of pairs $({\mathfrak{g}}amma^n,e)$ where ${\mathfrak{g}}amma^n$ is an oriented cycle of length $n$. \item $Q_n \parallel Q_1$ is the set of pairs $({\mathfrak{g}}amma^n,a)$ where the arrow $a$ is a {\it{shortcut}} of the path ${\mathfrak{g}}amma^n$ of length $n$. \end{itemize} We denote by $k(X \parallel Y)$ the $k$-vector space generated by the set $X \parallel Y$. For each natural number $n$, Cibils defines \[ D_n:k(Q_n \parallel Q_0) \rightarrow k(Q_{n+1} \parallel Q_1) \] as follows: \begin{equation}\label{Dene} D_n ({\mathfrak{g}}amma^n,e)= \sum_{a \in Q_1e} (a{\mathfrak{g}}amma^n, a) + (-1)^{n+1}\sum_{a \in eQ_1} ({\mathfrak{g}}amma^na, a) \end{equation} where the path ${\mathfrak{g}}amma^n$ is parallel to the vertex $e$. In \cite{cibils}, the Hochschild cohomology groups of a radical square zero algebra are obtained from the following complex, denoted by $C^\bullet(Q)$ : \[ 0 \rightarrow k(Q_0 \parallel Q_0) \oplus k(Q_0 \parallel Q_1) \stackrel{\matrizDcero}{\longrightarrow} k(Q_1 \parallel Q_0) \oplus k(Q_1 \parallel Q_1) \stackrel{\matrizDuno}{\longrightarrow} \cdots \quad \] \[ \cdots \: k(Q_n \parallel Q_0) \oplus k(Q_n \parallel Q_1) \stackrel{\matrizDn}{\longrightarrow} k(Q_{n+1} \parallel Q_0) \oplus k(Q_{n+1} \parallel Q_1). \] Cibils proved that $C^\bullet(Q)$ is isomorphic to the reduced complex $\bold{R^\bullet(A,A)}$ using the following lemma. \begin{lemma}[\cite{cibils}]\label{cnera} Let $A:=kQ / <Q_2>$ where $Q$ is a finite quiver. The vector space $C^n_E(r,A)=Hom_{E^e}(r^{\otimes^n_E},A)$ is isomorphic to \[ k(Q_n \parallel Q_0 \cup Q_1)= k(Q_n \parallel Q_0) \oplus k(Q_n \parallel Q_1). \] \end{lemma} \subsection*{The reduced bracket.} Once we have the combinatorial description of $C^n_E(r,A)$, we are going to compute the reduced bracket in the same terms. To do so we use the above lemma. We begin by introducing some notation. \begin{notation} Given two paths: $\alpha^n$ in $Q_n$ and $\beta^m$ in $Q_m$, we will suppose that \[ \begin{array}{rcl} \alpha^n & = & a_1 a_2 \dots a_n \\ \beta^m & = & b_1 b_2 \dots b_m \end{array} \] where $a_i$ and $b_j$ are in $Q_1$. Under this assumption, we say that $a_i$ and $b_j$ are {\it{arrows in the decomposition}} of $\alpha^n$ and $\beta^m$, respectively. Let $i=1, \dots , n$, if $a_i \parallel \beta^m$, we denote by $\alpha^n \diamantei \beta^m$ the path in $Q_{n+m-1}$ obtained by replacing the arrow $a_i$ with the path $\beta^m$. This means \[ \alpha^n \diamantei \beta^m := a_1 \cdots a_{i-1} b_1 \cdots b_m a_{i+1} \cdots a_n \] If $a_i$ is not parallel to $\beta^m$ then $\alpha^n \diamantei \beta^m$ has no sense. Clearly, $\diamantei$ is not commutative. For example, let $a$ in $Q_1$. 
If $a \parallel \beta^m$ then we have that \[ a \diamanteuno \beta^m = \beta^m . \] Now, if $b_i \parallel a$, we have that \[ \beta^m \diamantei a = b_1 \dots b_{i-1} a b_{i+1} \dots b_{m} . \] \end{notation} \begin{definition} Let $Q$ be a finite quiver and $n \geq 1$. Fix $i=1, \dots , n$. The bilinear map \[ \circulito: k(Q_n \parallel Q_0 \cup Q_1) \times k(Q_m \parallel Q_0 \cup Q_1) \longrightarrow k(Q_{n+m-1} \parallel Q_0 \cup Q_1) \] is given by \[ (\alpha^n,x) \circulito (\beta^m,y)= \delta_{a_i,y} \, \cdot \, (\alpha^n \diamantei \beta^m,x ) \] where \[ \delta_{a_i,y}=\begin{cases} 1 & \text{ if } a_i=y \\ 0 & \text{ otherwise } \end{cases} \] and $\alpha^n=a_1 \cdots a_i \cdots a_n.$ \end{definition} Denote by $C^{*+1}(Q)$ the following vector space: \[ C^{*+1}(Q):= \bigoplus_{n=1}^\infty k(Q_n \parallel Q_0) \oplus k(Q_n \parallel Q_1) . \] \begin{definition} Let $Q$ be a finite quiver. The bilinear map \[ [\,-\,,\,-\,]_Q: k(Q_n \parallel Q_0 \cup Q_1 ) \times k(Q_m \parallel Q_0 \cup Q_1) \longrightarrow k(Q_{n+m-1} \parallel Q_0 \cup Q_1) \] is defined as follows: \[ \begin{array}{rl} [\, (\alpha^n,x) \, , \, (\beta^m,y)\,]_Q & = \displaystyle{\sum_{i=1}^n (-1)^{(i-1)(m-1)}} (\alpha^n,x) \circulito (\beta^m,y) \\ \, & -(-1)^{(n-1)(m-1)} \displaystyle{\sum_{i=1}^m (-1)^{(i-1)(n-1)}} (\beta^m,y)\circulito (\alpha^n,x). \end{array} \] \end{definition} \begin{theorem} Let $Q$ be a finite quiver. The vector space $C^{*+1}(Q)$ together with the bracket $[\,-\,,\,-\,]_Q$ is a graded Lie algebra. Moreover, if $A:=kQ / <Q_2>$ then the graded Lie algebra $C^{*+1}_E(r,A)$ endowed with the reduced bracket is isomorphic to $C^{*+1}(Q)$ endowed with the bracket $[\,-\,,\,-\,]_Q$. \end{theorem} \begin{proof} Let $Q$ be a finite quiver and $A:=kQ / <Q_2>$. Let us remark that $C^{*+1}(Q)$ is isomorphic as a vector space to $C^{*+1}_E(r,A)$ because of Lemma \ref{cnera}. Using the same isomorphism defined by Cibils to prove Lemma \ref{cnera}, a straightforward verification shows that the bracket $[\,-\,,\,-\,]_Q$ is the combinatorial translation of the reduced bracket. \end{proof} \begin{corollary}\label{resultado} Let $A:=kQ / <Q_2>$ where $Q$ is a finite quiver. The graded Lie algebra structure on $HH^{*+1}(A)$ given by the Gerstenhaber bracket is induced by the graded Lie algebra structure on $C^{*+1}(Q)$ given by $[\,-\,,\,-\,]_Q$. \end{corollary} \section{Lie module structure of $HH^n(A)$ over $HH^1(A)$.} In this section, we are going to study the Lie module structure of $HH^n(A)$ over $HH^1(A)$ when $A:=kQ/<Q_2>$ in two cases. The first case is when $Q$ is the one loop quiver and the second case is when $Q$ is the two loops quiver. \subsection*{The one loop case.} It is shown in \cite{cibils} that if $char \, k =0$ and $Q$ is the one loop quiver then the function $D_n$, given by equation (\ref{Dene}), is zero when $n$ is even and $D_{n}$ is injective when $n$ is odd. In fact, we have the following proposition: \begin{propo}[\cite{cibils}] Assume that $Q$ is the one loop quiver. Let $k$ be a field of characteristic zero and $A:=kQ/<Q_2>$. Then we have that $HH^{0}(A) \cong A$ and for $n>0$ we have that \[ HH^n(A) \cong \begin{cases} \displaystyle k (Q_n \parallel Q_0) & \text{if $n$ is even} \\ \, & \,\\ \displaystyle k (Q_n \parallel Q_1) & \text{if $n$ is odd} \\ \end{cases} \] Therefore, for $n \geq 0$ the Hochschild cohomology group $HH^n(A)$ is one dimensional. 
\end{propo} \begin{proposition} Assume that $Q$ is the one loop quiver, where $e$ is the vertex and $a$ is the loop. Let $k$ be a field of characteritic zero and $A:=kQ / <Q_2>$. Then $HH^1(A)$ is the one dimensional (abelian) Lie algebra and the Lie module structure on the Hochschild cohomology groups given by the Gerstenhaber bracket \[ HH^1(A) \times HH^n(A) \longrightarrow HH^n(A) \] is induced by the following morphisms: If $n$ is even, we have that \[ \scalebox{0.97}{$ k(Q_1 \parallel Q_1) \times k(Q_n \parallel Q_0) \longrightarrow k(Q_n\parallel Q_0) $} \] is given as follows \[ (a,a).(a^n,e) = - \, n \, (a^n,e). \] If $n$ is odd, we have that \[ \scalebox{0.97}{$ k(Q_1 \parallel Q_1) \times k(Q_n \parallel Q_1) \longrightarrow k(Q_n\parallel Q_1) $} \] is given as follows \[ (a,a).(a^n,a) = - \, (n-1) \, (a^n,a). \] So, the Lie module $HH^n(A)$ over $HH^1(A)$ corresponds to the one dimensional standard module over $k$. \end{proposition} \begin{proof} It is an immediate consequence of the definition of the bracket $[\, - \, , \, - \, ]_Q$ and the corollary \ref{resultado}. \end{proof} Moreover, we have that \begin{proposition} Let $k$ be a field of characteritic zero, $Q$ the one loop quiver and $A:=kQ/<Q_2>$. The Lie algebra $HH^{odd}$ is the infinite dimensional Witt algebra. \end{proposition} \begin{proof} If $n$ and $m$ are odd then, using the formula for the bracket, we have $$[\, (a^n,a) \, , \, (a^m,a) \, ]_Q= \, (n-m) \, (a^{n+m-1},a) \, .$$ \end{proof} \subsection*{The two loops case.} In \cite{cibils}, Cibils proved that the function $D_n$, given by the equation (\ref{Dene}), is injective for $n {\mathfrak{g}}eq 1$ when $Q$ is neither a loop nor an oriented cycle. Hence we have the following result: \begin{teo}[\cite{cibils}] Let $A:=kQ/<Q_2>$ where $Q$ is the two loops quiver. Then, $HH^0(A)=A$ and for $n {\mathfrak{g}}eq 1$ $$HH^n(A) \cong \frac{k(Q_n \parallel Q_1)}{Im\, D_{n-1}}$$ where \[ D_{n-1}:k(Q_{n-1} \parallel Q_0) \longrightarrow k(Q_n \parallel Q_1) \] is given by the formula (\ref{Dene}). Moreover, we have that for $n > 1$, $$ dim_k HH^n(A)= 2^{n+1}-2^{n-1} \, . $$ \end{teo} \begin{theorem}\label{formula} Let $A:=kQ / <Q_2>$ where $Q$ is a finite quiver. If $Q$ is not an oriented cycle then the Lie module structure on the Hochschild cohomology groups given by the Gerstenhaber bracket \[ HH^1(A) \times HH^n(A) \longrightarrow HH^n(A) \] is induced by the following bilinear map: \[ \scalebox{0.97}{$ k(Q_1 \parallel Q_1) \times k(Q_n \parallel Q_1) \longrightarrow k(Q_n\parallel Q_1) $} \] given as follows \[ \displaystyle{ (a,x).(\alpha^n,y) = \delta_{y,a} \cdot (\alpha^n,a) - \sum_{i=1}^n \delta_{x,a_i} \cdot (\alpha^n \underset{i}{\diamond} x,y) } \] where $a \parallel x$ and $y$ is a shortcut of the path $\alpha^n$ whose decomposition into arrows is given by $\alpha^n=a_1 \cdots a_i \cdots a_n$. The path $\alpha^n \underset{i}{\diamond} x$ is obtained by replacing $a_i$ with $x$ if $a_i = y$ \end{theorem} \begin{proof} It is an immediate consequence of the definition of the bracket $[\, - \, , \, - \, ]_Q$ and the corollary \ref{resultado}. \end{proof} In \cite{strametz}, Strametz studies the Lie algebra structure on the first Hochschild cohomology group for monomial algebras. She formulates the Lie bracket on $HH^1(A)$ using the combinatorics of the quiver. Let us remark that the formula given by the above theorem gives the Lie bracket on $HH^1(A)$ when we set $n=1$. Such formula coincides with the one given in \cite{strametz}. 
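As an illustration of the bracket $[\,-\,,\,-\,]_Q$, consider the two loops quiver with vertex $e$ and loops $a$ and $b$. For two elements of $k(Q_1 \parallel Q_1)$ all the signs in the definition are equal to $1$, and the definition gives $(a,b) \circulito (b,a) = (b,b)$ and $(b,a) \circulito (a,b) = (a,a)$, so that \[ [\, (a,b) \, , \, (b,a) \,]_Q = (b,b) - (a,a) . \] In the notation of the proposition below, this is precisely the relation $[\, E \, , \, F \,]_Q = H$. 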
Let us describe the Lie algebra $HH^1(A)$. \begin{proposition} Assume that $Q$ is the two loops quiver where $e$ is the vertex and the loops are denoted by $a$ and $b$. Let $A:=\mathbb{C}Q/<Q_2>$ where $\mathbb{C}$ is the complex number field. Then the elements \[ \begin{array}{rcl} H&:=&(b,b)-(a,a) \\ E&:=&(a,b) \\ F&:=&(b,a) \end{array} \] generate a copy of the Lie algebra $sl_2(\mathbb{C})$ in $HH^1(A)$. Moreover, the Lie algebra $HH^1(A)$ is isomorphic to $sl_2(\mathbb{C}) \times \mathbb{C}$. \end{proposition} \begin{proof} First notice that $HH^1(A) \cong k(Q_1 \parallel Q_1)$ and that the elements $H$, $E$, $F$ and $I:=(a,a)+(b,b)$ form a basis of $HH^1(A)$. A straightforward verification of the following relations: \[ [\, H \, , \, E \,]_Q = 2E , \quad [\, H \, , \, F \,]_Q = -2F , \quad [\, E \, , \, F \,]_Q = H \] proves that $HH^1(A)$ contains a copy of $sl_2\mathbb{C}$. Finally, it is easy to see that \[ [\, I \, , \, H \,]_Q = 0 , \quad [\, I \, , \, E \,]_Q = 0 , \quad [\, I \, , \, F \,]_Q = 0 , \] \end{proof} In order to study the Lie module $HH^n(A)$ over $HH^1(A)$, we will study $HH^n(A)$ as a $sl_2(\mathbb{C})$-module. Now, let us recall two classical Lie theory results, see \cite{erdmann,fulton} for more detail. \begin{enumerate} \item Every (finite dimensional) $sl_2 \mathbb{C}$-module has a decomposition into direct sum of irreducible modules \item Classification of irreducible $sl_2 \mathbb{C}$-modules: there exists an unique irreducible module for each dimension. We denote by $V(t)$ the irreducible $sl_2 \mathbb{C}$ module of dimension $t+1$. \end{enumerate} Using the above notation, this means that $HH^n(A)$ has a decomposition into direct sum of irreducible modules over $sl_2 \mathbb{C}$ as follows: \[ HH^n(A)= \bigoplus_{t=0}^\infty V(t)^{q_t} \] We will determine each $q_t$ and to do so we will use the usual tools of the classical Lie theory. We begin by calculating the eigenvector spaces of $H$ as endomorphism of $k(Q_n \parallel Q_0)$ and $Im \, D_{n-1}$. Given a path ${\mathfrak{g}}amma^n$ in $Q_n$ we denote by $a({\mathfrak{g}}amma^n)$ the number of times that the arrow "$a$" appears in the decomposition of ${\mathfrak{g}}amma^n$. We also denote by $b({\mathfrak{g}}amma^n)$ the number of times that the arrow "$b$" appears in the decomposition of ${\mathfrak{g}}amma^n$. \begin{map}[$v$] Define $v$ as the function given by: \[ \begin{array}{rrll} v_n: & Q_n & \rightarrow & \mathbb{Z} \\ \, & {\mathfrak{g}}amma^n & \mapsto & a({\mathfrak{g}}amma^n)-b({\mathfrak{g}}amma^n) \end{array} \] \end{map} \begin{lemma} For all ${\mathfrak{g}}amma^n$ in $Q_n$ we have that \[ \begin{array}{ccc} H.({\mathfrak{g}}amma^n,a)&=&(v_n({\mathfrak{g}}amma^n) - 1) \, ({\mathfrak{g}}amma^n,a) \\ H.({\mathfrak{g}}amma^n,b)&=&(v_n({\mathfrak{g}}amma^n) + 1) \, ({\mathfrak{g}}amma^n,b) \end{array} \] and for all ${\mathfrak{g}}amma^{n-1}$ in $Q_{n-1}$ we have that \[ \begin{array}{c} H.D_{n-1}({\mathfrak{g}}amma^{n-1},e)=v_{n-1}({\mathfrak{g}}amma^{n-1}) \, D_{n-1}({\mathfrak{g}}amma^{n-1},e) \, . \end{array} \] \end{lemma} \begin{proof} Use the formula given in proposition (\ref{formula}). \end{proof} \begin{proposition} Assume that $char \, k =0$. \begin{enumerate} \item Consider $H$ as an endomorphism of $k(Q_n \parallel Q_1)$. The eigenvalues of $H$ are $n+1-2l$ where $l=0,\dots n+1 $. Denote by $W(\lambda)$ the eigenspace of $H$ of the eigenvalue $\lambda$. We have that $$dim_k W(n+1-2l)=\cnk \, . 
$$ \item Consider $H$ as an endomorphism of $Im \, D_{n-1}$. The eigenvalues of $H$ restricted to $Im \, D_{n-1}$ are $n-1-2l$ where $l=0,\dots, n-1 $. As above, denote by $W(\lambda)$ the eigenspace of $H$ of eigenvalue $\lambda$. We have that $$dim_k W(n-1-2l)=\cnnk \, .$$ \end{enumerate} \end{proposition} \begin{proof} $(i)$ From the above lemma, it is clear that the set \[ \{ (\gamma^n,a) \, \mid \, \gamma^n \in Q_n \} \cup \{ (\gamma^n,b) \, \mid \, \gamma^n \in Q_n \} \] is a basis of $k(Q_n \parallel Q_1)$ consisting of eigenvectors. We also have that $(\gamma^n,a)$ and $(\gamma^n,b)$ are eigenvectors of eigenvalue $v(\gamma^n) - 1$ and $v(\gamma^n) + 1$ respectively. Since $a(\gamma^n)+b(\gamma^n)=n$ for all paths $\gamma^n$, we have that $v(\gamma^n)=n-2b(\gamma^n)$ where $b(\gamma^n)$ varies from $0$ to $n$. Then we have that $v(\gamma^n) \pm 1$ is of the form $n+1-2l$ where $l= 0,\dots, n+1 $. Let us remark the following: \begin{enumerate} \item[$-$] $(a^n,b)$ is the only eigenvector of eigenvalue $n+1$, \item[$-$] $(b^n,a)$ is the only eigenvector of eigenvalue $-(n+1)$, \item[$-$] if $0 < l < n+1 $, we have that \begin{itemize} \item $(\gamma^n,a)$ is an eigenvector of eigenvalue $n+1-2l$ iff $l-1=b(\gamma^n)$, \item $(\gamma^n,b)$ is an eigenvector of eigenvalue $n+1-2l$ iff $l=b(\gamma^n)$. \end{itemize} \end{enumerate} On the other hand, if $0 < l < n+1$, we know that there are \scalebox{0.7}{$\left( \begin{array}{c} n \\ l \end{array}\right)$} paths $\gamma^n$ such that $b(\gamma^n)=l$ and \scalebox{0.7}{$\left( \begin{array}{c} n \\ l-1 \end{array}\right)$} paths $\gamma^n$ such that $b(\gamma^n)=l-1$. Therefore, there are \[ \scalebox{0.8}{$ \left( \begin{array}{c} n \\ l \end{array}\right) + \left( \begin{array}{c} n \\ l-1 \end{array}\right) = \left( \begin{array}{c} n+1 \\ l \end{array}\right)$} \] eigenvectors $(\gamma^n,x)$ of eigenvalue $n+1-2l$.\\ $(ii)$ From the above lemma, it is clear that the set \[ \{ D_{n-1}(\gamma^{n-1},e) \, \mid \, \gamma^{n-1} \in Q_{n-1} \} \] is a basis of $Im\, D_{n-1}$ consisting of eigenvectors. We also have that $D_{n-1}(\gamma^{n-1},e)$ is an eigenvector of eigenvalue $v(\gamma^{n-1})$. Since $a(\gamma^{n-1}) + b(\gamma^{n-1}) = n-1$ for all paths $\gamma^{n-1}$, we have that $v(\gamma^{n-1})=n-1-2b(\gamma^{n-1})$ where $b(\gamma^{n-1})$ varies from $0$ to $n-1$. Therefore the eigenvalues are of the form $n-1-2l$ where $l$ varies from $0$ to $n-1$, and there are \scalebox{0.7}{$\left( \begin{array}{c} n-1 \\ l \end{array}\right)$} eigenvectors of eigenvalue $n-1-2l$. \end{proof} Recall the following result from Lie theory: \begin{lemma}[General Multiplicity Formula \cite{bremner}] Let $V$ be a finite dimensional $sl_2\mathbb{C}$-module. For every integer $t$, let $V_t$ be the eigenspace of $H$ of eigenvalue $t$. Then, for any nonnegative integer $t$, the number of copies of $V(t)$ that appear in the decomposition of $V$ into a direct sum of irreducible modules is $dim \, V_t \, - \, dim \, V_{t+2}$. 
\end{lemma} A consequence of the above lemma is the following result: \begin{lemma} Let $\mathbb{C}$ be the field of complex numbers, $Q$ the quiver given by two loops and $A:=\mathbb{C}Q/<Q_2>$. For $n \geq 1$, we denote by $h(n)$ the following: \[ h(n):= \max \, \{ \, l \,\mid \, n+1-2l \geq 0 \, \} \] and for $l=0, \dots, h(n)$ we denote by $p(n,l)$ the following: \[ \scalebox{0.9}{$ p(n,l):=\begin{cases} \cck & \text{ if $l = 0$} \\ \, & \, \\ \pnl & \text{ if $l \geq 1$} \end{cases} $} \] Then we have that \begin{enumerate} \item the decomposition into a direct sum of irreducibles of $\mathbb{C}(Q_n \parallel Q_1)$ as an $sl_2(\mathbb{C})$ Lie module is given by \[ \displaystyle{\mathbb{C}(Q_n \parallel Q_1) \cong \bigoplus_{l=0}^{h(n)} V(n+1-2l)^{p(n+1,l)}\, , } \] \item the decomposition into a direct sum of irreducibles of $Im \, D_{n-1}$ as an $sl_2(\mathbb{C})$ Lie module is given by \[ \displaystyle{ Im \, D_{n-1} \cong \bigoplus_{l=0}^{h(n)-1} V(n-1-2l)^{p(n-1,l)} \, .} \] \end{enumerate} \end{lemma} \begin{proposition} Let $\mathbb{C}$ be the field of complex numbers, $Q$ the quiver given by two loops and $A:=\mathbb{C}Q/<Q_2>$. For $n \geq 1$ and $l=0, \dots, h(n)$ we denote by $q(n,l)$ the following: \[ \scalebox{0.9}{$ q(n,l):=\begin{cases} \mnz & \text{ if $l = 0,1$} \\ \, & \, \\ \qnk & \text{ if $l \geq 2$} \end{cases} $} \] Then, the decomposition of $HH^n(A)$ into a direct sum of irreducible Lie modules over $sl_2(\mathbb{C})$ is given by \[ \displaystyle{HH^n(A) \cong \bigoplus_{l=0}^{h(n)} V(n+1-2l)^{q(n,l)} }. \] \end{proposition} \begin{algorithm} There is an algorithm that gives us the decomposition of $HH^n(A)$ described in the above proposition. We will explain it in the next paragraph. We use the following table to write down this decomposition: \[ \scalebox{0.9}{$ \begin{array}{c||ccccccccc} n & V(0)&V(1)&V(2)&V(3)&V(4)&V(5)&V(6)&V(7)&\cdots \\ \hline \hline \, & \, & \, & \, & \, & \, & \, & \, & \, & \, \\ HH^2(A) & \, & 1 & \, & 1 & \, & \, & \, & \, & \, \\ \rotatebox{90}{$\cdots$} & \, & \, & \, & \, & \, & \, & \, & \,& \, \\ HH^n(A) & q_0 & q_1 & q_2 & q_3 & q_4 & q_5 & q_6 & q_7 & \cdots \\ \, & \, & \, & \, & \, & \, & \, & \, & \,& \, \\ \end{array} $} \] In the above table, at the row $HH^n(A)$, the number that appears in the column $V(t)$ gives the number of copies of the irreducible module $V(t)$ that appear in the decomposition of $HH^n(A)$. We leave a blank space if no $V(t)$ appears in the decomposition of $HH^n(A)$. We fill in the first row of the table with the decomposition of $HH^2(A)$. Now, given the entries of the row $HH^n(A)$, we can fill out the coefficients of the next row, that is, for $HH^{n+1}(A)$, in the following manner: \begin{enumerate} \item Add an imaginary column $(-)$ just before the column $V(0)$, consisting of zeros. \item Write down the coefficients of the next row by using the rule from Pascal's triangle: add the number directly above and to the left to the number directly above and to the right. 
\end{enumerate} \[ \scalebox{0.7}{ $ \xymatrix{ \, \ar@{-}@<+8ex>[ddd]\ar@{-}@<+8.5ex>[ddd] \ar@{-}@<-3.5ex>[rrrrrrrr] \ar@{-}@<-3ex>[rrrrrrrr] & (-) & V(0)& V(1)&\cdots & V(t-1) & V(t) & V(t+1) & \cdots \\ HH^n(A) & 0 \ar@{-}[rd] & q_0 & q_1 \ar@{-}[ld] & \cdots & q_{t-1} \ar@{-}[rd] & q_t & q_{t+1} \ar@{-}[ld] & \cdots \\ HH^{n+1}(A)& 0 & q_1 & \cdots & \cdots & \cdots & q_{t-1}+q_{t+1} & \cdots & \cdots \\ \, & \, & \, & \, &\, & \, & \, & \, & } $} \] Let us remark that the number of copies of $V(1)$ that appear in the decomposition of $HH^{n}(A)$ is equal to the number of copies of $V(0)$ that appear in the decomposition of $HH^{n+1}(A)$. \end{algorithm} \begin{lemma}We have that \begin{enumerate} \item If $n$ is even then $q(n,h(n))= q(n+1,h(n+1))$. \item If $n {\mathfrak{g}}eq 2$ then $q(n,l)+q(n,l+1)=q(n+1,l+1)$. \end{enumerate} \end{lemma} \begin{proof} For the first equality, we verify by a direct computation for $n=2$ and $n=4$. For $n {\mathfrak{g}}eq 6$, we use that if $n$ is even then we have that \[ \left( \begin{array}{c} n+1 \\ n/2 \end{array} \right) = \left( \begin{array}{c} n+1 \\ n/2 + 1 \end{array} \right) . \] For the second equality, we verify by a direct computation for $l=0$ and $l=1$. For $l {\mathfrak{g}}eq 2$, we use the Pascal triangle's rule: \[ \left( \begin{array}{c} n \\ l \end{array} \right) + \left( \begin{array}{c} n \\ l + 1 \end{array} \right) = \left( \begin{array}{c} n+1 \\ l + 1 \end{array} \right) . \] \end{proof} \begin{remark} The algorithm is justify by the above lemma. Moreover, we have that \[ \scalebox{0.9} {$q(n,2)=\left( \begin{array}{c} n-1 \\ 2 \end{array}\right)$}. \] This is the reason why we have a section of the Pascal triangle in the above table. \end{remark} Finally, once we have the decompostion of $HH^n(A)$ into direct sum of irreducible modules over $sl_2 \mathbb{C}$, we return to study $HH^n(A)$ as a $HH^1(A)$-module. \begin{corollary} We have that \[ \displaystyle{HH^n(A) \cong \bigoplus_{l=0}^{h(n)} V(n+1-2l)^{q(n,l)} \otimes \mathbb{C}} \] as Lie modules over $HH^1(A)$. \end{corollary} \begin{proof} Notice that $$I.({\mathfrak{g}}amma^n,x)=(1-a({\mathfrak{g}}amma^n)-b({\mathfrak{g}}amma^n))({\mathfrak{g}}amma^n,x)=(1-n)({\mathfrak{g}}amma^n,x).$$ \end{proof} \end{document}
\begin{document} \begin{frontmatter} \title{A functional non-central limit theorem for multiple-stable processes with long-range dependence} \author[label1]{Shuyang Bai} \author[label2]{Takashi Owada} \author[label3]{Yizao Wang} \address[label1]{Department of Statistics, University of Georgia, 310 Herty Drive, Athens, GA, 30602, USA. {[email protected]}} \address[label2]{Department of Statistics, Purdue University, 250 N.~University Street, West Lafayette, IN, 47907, USA. {[email protected]}} \address[label3]{Department of Mathematical Sciences, University of Cincinnati, 2815 Commons Way, Cincinnati, OH, 45221-0025, USA. {[email protected]}} \begin{abstract} A functional limit theorem is established for the partial-sum process of a class of stationary sequences which exhibit both heavy tails and long-range dependence. The stationary sequence is constructed using multiple stochastic integrals with heavy-tailed marginal distribution. Furthermore, the multiple stochastic integrals are built upon a large family of dynamical systems that are ergodic and conservative, leading to the long-range dependence phenomenon of the model. The limits constitute a new class of self-similar processes with stationary increments. They are represented by multiple stable integrals, where the integrands involve the local times of intersections of independent stationary stable regenerative sets. \end{abstract} \begin{keyword} multiple integral \sep stable regenerative set \sep local time \sep heavy-tailed distribution \sep functional limit theorem \sep long-range dependence \sep infinite ergodic theory \MSC 60F17 \sep 60G18 \sep 60H05 \end{keyword} \end{frontmatter} \section{Introduction} \subsection{Background} The seminal work of Rosi\'nski \citep{rosinski95structure} revealed an intriguing connection between stationary stable processes and ergodic theory. Consider a stationary process in the form of \begin{equation}\lefteftarrowbel{eq:1} X_k = \int_E f(T^k x)M(dx), \ \ k\in{\mathbb N}, \end{equation} where $M$ is symmetric $\alpha$-stable random measure on a measure space $(E,\mathcal E,\mu)$, $f:E\to {\mathbb R}$ is a measurable function and $T$ is a measure-preserving transform from $E$ to $E$. Then, many properties of the process $X$ can be derived from the underlying dynamical system $(E,\mathcal E,\mu,T)$. Because of this connection, the process $X$ is also referred to as {\em driven by the flow $T$}, and many developments on structures, representations, and ergodic properties of such processes have stemmed from this connection (see e.g., \citep{samorodnitsky16stochastic,samorodnitsky05null,kabluchko16stochastic,pipiras02structure,pipiras02decomposition,pipiras17stable,sarkar18stable,roy08stationary,roy10nonsingular,wang13ergodic}; background to be reviewed in Section \rightef{sec:ergodic}). In particular, it was argued by Samorodnitsky \citep[Remark 2.5]{samorodnitsky05null} that the case where $T$ is conservative and ergodic is the most challenging to develop a satisfactory characterization of the ergodic properties of the process in terms of the underlying dynamical system. 
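Let us also recall why the process $X$ in \eqref{eq:1} is stationary; assuming, as usual, that $f\in L^\alpha(\mu)$ so that the stochastic integrals are well defined, the joint characteristic functions are given by the standard formula for integrals with respect to the symmetric $\alpha$-stable random measure $M$, \[ {\mathbb E}\exp\Big(i\sum_{j=1}^r\theta_j X_{k_j}\Big)=\exp\Big(-\int_E\Big|\sum_{j=1}^r\theta_j \, f\circ T^{k_j}\Big|^\alpha d\mu\Big), \qquad \theta_1,\dots,\theta_r\in{\mathbb R},\ k_1,\dots,k_r\in{\mathbb N}, \] and the right-hand side is unchanged when every $k_j$ is replaced by $k_j+1$, since $T$ preserves the measure $\mu$. 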
While examples of stable processes driven by conservative and ergodic flows have been known for more than 20 years since \citep{rosinski96classes}, limit theorems for such processes have not been established until in very recent breakthroughs in a series of papers by Samorodnitsky and coauthors \citep{owada15functional,owada15maxima,lacaux16time,samorodnitsky17extremal}, all exhibiting phenomena of long-range dependence with new limit objects. Here, by long-range dependence, we mean generally that the partial-sum process $(S_{\floor {nt}})_{t\in[0,1]}$, with $S_n:=X_1+\cdots+X_n$, scales to a non-degenerate stochastic process with a normalization that is different from the case when $(X_k)_{k\in{\mathbb N}}$ are i.i.d. We follow this point of view as in Samorodnitsky \citep{samorodnitsky16stochastic}, and one could also consider limit theorems for other statistics; the key is always the abnormal normalization compared to the i.i.d.~case. The functional central limit theorem for stationary stable processes driven by a conservative and ergodic flow, established in \citep{owada15functional}, serves as our starting point and takes the following form. With $f$ in \eqref{eq:1} such that the support has finite $\mu$-measure and $\mu(f):=\int_Efd\mu$ is finite and nonzero, it was shown that \begin{equation}\lefteftarrowbel{eq:OS} \frac1{d_n}\pp{S_{\floor{nt}}}_{t\in[0,1]} {\mathbb R}ightarrow \mu(f)\pp{\int_{\Omega'\times [0,\infty)}\mathcal M_{\beta}((t-v)_+,\omega')S_{\alpha,\beta}(d\omega',dv)}_{t\in[0,1]} \end{equation} in $D([0,1])$, where $\alpha\in(0,2)$, $\beta \in (0,1)$, and $d_n$ is a regularly varying sequence with exponent ${\beta+(1-\beta)/\alpha}$. (This was actually established in a slightly more general framework with $M$ replaced by an infinitely-divisible random measure with heavy-tail index $\alpha$.) Here, $(\Omega',\mathcal F',P')$ is a probability space separate from the one that carries the randomness of the stochastic integral itself, $S_{\alpha,\beta}$ is a symmetric $\alpha$-stable (S$\alpha$S) random measure on $\Omega' \times [0,\infty)$ with control measure $P' \times (1-\beta)v^{-\beta}dv$, and $\mathcal M_\beta$ is the Mittag--Leffler process with index $\beta$, the inverse process of a $\beta$-stable subordinator, defined on $(\Omega',\mathcal F',P')$. Here, $\beta\in(0,1)$ is the memory parameter of an underlying dynamical system (see Section \rightef{sec:nCLT} and in particular how $\beta$ characterizes the memory of $T$ in terms of Assumption \rightef{assump}), and as $\beta\downarrow0$ the limit process in \eqref{eq:OS} becomes an S$\alpha$S L\'evy process. At the core of this result, the appearance of the Mittag--Leffler process is established as a functional generalization of the one-dimensional Darling--Kac limit theorem in \citep{aaronson81asymptotic,bingham71limit} for the underlying dynamical system, which is of independent interest in ergodic theory. Later developments \citep{lacaux16time,samorodnitsky17extremal} revealed that more essentially, stable regenerative sets \citep{bertoin99subordinators} and their intersections play a fundamental role in describing the limit objects for a large family of processes driven by conservative and ergodic flows. 
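For orientation, note that the normalization exponent in \eqref{eq:OS} can be rewritten as \[ \beta+\frac{1-\beta}{\alpha}=\frac{1}{\alpha}+\beta\Big(1-\frac{1}{\alpha}\Big), \] so that it interpolates between $1/\alpha$, the exponent appearing in the i.i.d.\ symmetric $\alpha$-stable case (recovered as $\beta\downarrow 0$), and $1$ (as $\beta\uparrow 1$); this is consistent with the fact, recalled above, that the limit in \eqref{eq:OS} becomes an S$\alpha$S L\'evy process as $\beta\downarrow 0$. 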
In this paper, as a generalization of \eqref{eq:1} we consider the process defined in terms of multiple stochastic integrals in the form of \begin{equation}\lefteftarrowbel{eq:2} X_k = \int_{E^p}'f(T^kx_1,\dots,T^kx_p)M(dx_1) \cdots M(dx_p), \ \ k\in{\mathbb N}, \ \ p\in{\mathbb N}, \end{equation} where the prime mark $'$ indicates that the multiple integral is defined to {\em exclude the diagonals}, and this time $f$ is a measurable function from $E^p$ to ${\mathbb R}$. The definition of multiple stochastic integrals will be recalled in Section \rightef{sec:mult int} below. We restrict to the case of multiple integrals without the diagonals, in order to obtain limit processes in the form of {\em multiple stable integrals}, which we refer to as {\em multiple-stable processes}. Since the seminal works of Dobrushin and Major \citep{dobrushin79noncentral} and Taqqu \citep{taqqu79convergence}, the processes in the form of multiple Gaussian integrals have frequently appeared in limit theorems under long-range dependence. For example, they were obtained as limits for partial sums \citep{dobrushin79noncentral,taqqu79convergence, surgailis82domains,arcones1994limit,ho97limit,bai14generalized}), for empirical processes \citep{dehling1989empirical,ho1996asymptotic,wu2003empirical} as well as for quadratic forms \citep{fox1985noncentral,terrin1990noncentral}. Such limit theorems are often referred to as {\em non-central limit theorems} and have found numerous applications to statistical theories for long-range dependent data (see, e.g., \cite{beran13long} and the references therein). Limit theorems with (non-Gaussian) multiple-stable processes as limits, to the best of our knowledge however, have been rarely considered so far in the literature of long-range dependence. Note that the exclusion of the diagonals is necessary to obtain multiple-stable processes with multiplicity $p\ge 2$: with the terms on the diagonal included, the case $p=2$ has been partly considered in \citep{owada16limit}, and the limit is again a stable process. \subsection{Overview of main results} Our ultimate goal (Theorem \rightef{Thm:CLT}) is to establish formally that \[ \frac1{d_n}\pp{\summ k1{\floor{nt}}X_k}_{t\in[0,1]}{\mathbb R}ightarrow \pp{{\mathbb Z}ab(t)}_{t\in[0,1]} \] for a large family of $(X_k)$ in \eqref{eq:2}, and the limit process has the representation \begin{multline}\lefteftarrowbel{eq:Z} \pp{{\mathbb Z}ab(t)}_{t\ge 0}\\ \eqd \pp{\int_{({\mathbf F}\times [0,\infty) )^p}'L_t\pp{{\bf i}gcap_{i=1}^p(R_i+v_i)}S_{\alpha,\beta}(d R_1,dv_1)\cdots S_{\alpha,\beta}(dR_p,dv_p)}_{t\ge 0}, \end{multline} where $S_{\alpha,\beta}$ is an S$\alpha$S random measure on ${\mathbf F}\times [0,\infty)$, with control measure $P_\beta\times (1-\beta)v^{-\beta}dv$, with $P_\beta$ the probability measure on ${\mathbf F}\equiv {\mathbf F}([0,\infty))$, the space of closed subsets of $[0,\infty)$, induced by the law of a $\beta$-stable regenerative set, and $L_t$ is the local-time functional for a $(p\beta-p+1)$-stable regenerative set \citep{kingman73intrinsic}. An immediate observation is that for the right-hand side of \eqref{eq:Z} to be non-degenerate, we need ${\bf i}gcap_{i=1}^p(R_i+v_i)$ to be non-empty, with $(R_i)_{i=1,\dots,p}$ being i.i.d.~$\beta$-stable regenerative sets. 
The key relation between the memory parameter $\beta$ and the multiplicity $p$ assumed throughout this paper is that \begin{equation}\lefteftarrowbel{eq:beta} \beta\in(0,1), \quad p\in{\mathbb N} \quad \mbox{ such that }\quad \beta_p:=p\beta-p+1 \in(0,1), \end{equation} or equivalently $\beta\in(1-1/p,1)$. It is known (e.g., \citep{samorodnitsky17extremal}) that this is exactly the case when ${\bf i}gcap_{i=1}^p(R_i+v_i)$ is a $\beta_p$-stable regenerative set with a random shift with probability one. When \eqref{eq:beta} is violated and $v_i$ are all different, the intersection becomes an empty set with probability one and hence ${\mathbb Z}ab$ becomes degenerate. The limit theorem in such a case will be of a different nature and addressed in a separate paper. Our theorem applies to a large family of dynamical systems, including in particular the shift transforms of certain null-recurrent Markov chains, and a class of transforms on the real line called the AFN-systems \citep{zweimuller98ergodic,zweimuller00ergodic} often considered in the literature of infinite ergodic theory. Establishing the aforementioned convergence, however, turns out to be a completely different task from the one in \citep{owada15functional}, and the proof consists of two parts. The first part is devoted to the investigation of the integrand of the right-hand side of \eqref{eq:Z}, which are local-time processes of intersections of stable regenerative sets (Section \rightef{sec:local times}). Let $(R_i)_{i\in{\mathbb N}}$ be i.i.d.~$\beta$-stable regenerative sets. To exploit a series representation of the multiple integral \eqref{eq:Z} (see \eqref{eq:Z_t [0,1]} below), we need to characterize the law of \[ L_{I,t} \equiv L_t\pp{{\bf i}gcap_{i\in I}(R_i+v_i)} \mbox{ for all } I\subset {\mathbb N},~ |I| = p,~t\ge 0, \] jointly in $I$ and $t$, governed by certain law on the shifts $(v_i)_{i\in I}$ independent from the regenerative sets. Marginally, for each $I$, $(L_{I,t})_{t\ge 0}$ has the law of a Mittag--Leffler process shifted in time with parameter $\beta_{p}$, up to a multiplicative constant \citep{samorodnitsky17extremal}. In particular when $p=1$ we have \begin{equation}\lefteftarrowbel{eq:p=1} \pp{L_t(R_1+v_1)}_{t\ge 0} \eqd c_\beta\pp{\mathcal M_{\beta}((t-v_1)_+)}_{t\ge 0} \end{equation} for some constant $c_\beta$. It is then a matter of convenience to work with either of the two representations in \eqref{eq:p=1}, and the right-hand side was used in \citep{owada15functional}. However when $p\ge 2$, the information from the Mittag--Leffler process is only marginal, whereas we need to work with $L_{I,t}$ jointly in $I,t$. More precisely, we shall compute all their joint moments with appropriately randomized shifts. For this key calculation, we adapt the {\em random covering scheme} for constructing regenerative sets \citep{fitzsimmons85intersections}, to develop approximations of joint law of $L_{I,t}$ in Theorem \rightef{thm:1}. The second part of the proof is devoted to the convergence of the partial-sum process to ${\mathbb Z}ab$. To illustrate the idea, assume for simplicity that $f(x_1,\leftdots,x_p)=1_A(x_1)\leftdots 1_A(x_p)$, where $A$ is a suitable finite-measure subset of $E$. 
To work with a series representation of the multiple integral \eqref{eq:2} (see \eqref{eq:X_k series} below), the key ingredient is to show the joint convergence, after proper normalization and jointly in $I$ and $t$, of counting processes of simultaneous returns of i.i.d.~dynamical systems, indexed by $i\in I$, of the form
\begin{equation}\label{eq:key}
\sum_{k=1}^{\floor{nt}}\prod_{i\in I}\mathbf 1\{T^kx_i\in A\},
\end{equation}
where the starting points $x_i\in E$ are governed, independently across $i$, by the infinite stationary distribution. For any individual $I$, our assumptions essentially entail that the simultaneous-return times behave like the renewal times of a heavy-tailed renewal process, and then the above is known to converge to the local-time process $L_{I,t}(R^*+V^*)$ for a $\beta_p$-stable regenerative set $R^*$ with a random shift $V^*$. This certainly includes $p=1$ as a special case; see \citep{bingham71limit} and \citep[Theorem 6.1]{owada15functional}. The challenge lies in characterizing the joint limits for, say, $(I_j,t_j)_{j=1,\dots,r}$. Theorem \ref{Thm:local time} is devoted to this task, showing that the limit of the above is $(L_{I_j,t_j})_{j=1,\dots,r}$ (with respect to random shifts $v_j$). The proof is combinatorial in nature and proceeds by computing the asymptotic moments of \eqref{eq:key}. A delicate approximation scheme similar to that of Krickeberg \citep{krickeberg67strong} is then developed so that the asymptotic moment formula extends to the case where the product in \eqref{eq:key} is replaced by $f(T^kx_1,\dots,T^k x_{p})$ for a general class of functions $f$. We also mention that a concurrent work \citep{bai20limit} considers the case where the random measure $M$ in \eqref{eq:2} is replaced by a Gaussian one, so that $X_k$ has finite variance marginally. In that case, a functional non-central limit theorem is established with Hermite processes (e.g., \cite{taqqu79convergence}), a well-known class of processes represented by multiple Gaussian integrals, arising as limits. We remark that the proof techniques of \citep{bai20limit} exploit special properties of multiple Gaussian integrals; in particular, the local-time processes and their approximations dealt with here are not needed in \citep{bai20limit}. On the other hand, the joint local-time processes are still intrinsically connected to the limit Hermite processes: as shown in the subsequent manuscript \cite{bai19representations}, if the multiple-stable integrals in \eqref{eq:Z} are extended to the Gaussian case $\alpha=2$, they yield new representations of the Hermite processes. The paper is organized as follows. Section \ref{sec:local times} introduces the joint local-time processes and establishes a formula for their joint moments via the random covering scheme. Section \ref{sec:limit process} reviews certain series representations of multiple integrals and formally defines the limit process $Z_{\alpha,\beta,p}$. Section \ref{sec:nCLT} introduces our model of stationary processes in terms of multiple integrals with long-range dependence, and states the main non-central limit theorem. Section \ref{sec:proof} is devoted to the proof of the main theorem. Throughout the paper, $C$ and $C_i$ denote generic positive constants which are independent of $n$ and may change from line to line.
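For orientation, it may also help to keep a concrete instance of \eqref{eq:beta} in mind throughout: for $p=2$ the condition requires $\beta\in(1/2,1)$ and, e.g., $\beta=3/4$ gives $\beta_2=1/2$, while for $p=3$ it requires $\beta\in(2/3,1)$ and $\beta=3/4$ gives $\beta_3=1/4$. Since $\beta_p=1-p(1-\beta)$ decreases in $p$, for a fixed $\beta$ only the multiplicities $p<1/(1-\beta)$ are admissible.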
\section{Local-time processes}\lefteftarrowbel{sec:local times} \subsection{Definitions and results} We start by recalling some facts about random closed sets on $[0,\infty)$, and in particular, stable regenerative sets. We refer the reader to \citep{molchanov17theory} for more details. Let ${\mathbf F} \equiv \mathbf{F}([0,\infty))$ denote the collection of all closed subsets of $[0,\infty)$. We equip ${\mathbf F}$ with the Fell topology which is generated by the sets $\{F\in {\mathbf F}: F\cap G \neq \emptyset\}$ and $\{F\in {\mathbf F}: \ F\cap K=\emptyset\}$ for arbitrary open $G\subset [0,\infty)$ and compact $K\subset [0,\infty)$. A random closed set on $[0,\infty)$ is a Borel measurable random element taking values in ${\mathbf F}$. If the law of a random closed set $R$ on $[0,\infty)$ is identical to that of the closed range of a subordinator \citep{bertoin99subordinators}, then $R$ is said to be a regenerative set. The random set $R$ is, in addition, said to be $\beta$-stable, $\beta\in(0,1)$, if the corresponding subordinator, say $(\sigma_t)_{t\ge0}$, is $\beta$-stable; that is, $(\sigma_t)_{t\ge0}$ is a non-decreasing L\'evy process determined by \begin{equation}\lefteftarrowbel{eq:laplace} \mathbb{E} e^{-\lefteftarrowmbda \sigma_t}=\exp(- t\lefteftarrowmbda^\beta), \lefteftarrowmbda\ge0. \end{equation} In this case, the associated L\'evy measure of the regenerative set $R$ is \begin{equation}\lefteftarrowbel{eq:Pi_beta} \mathbb{P}i_\beta(dx)=\frac{ \beta}{\Gamma(1-\beta)}x^{-1-\beta} 1_{(0,\infty)}(x) dx, \end{equation} which characterizes the law of $R$. For our purposes, we shall work with a family of countably many independent stable regenerative sets with independent shifts, and we need in particular to describe their intersections. Let $(R_i)_{i\in{\mathbb N}}$ be i.i.d.~$\beta$-stable regenerative sets and $(V_i)_{i\in{\mathbb N}}$ be independent random shifts with arbitrary laws, and the two sequences are independent. Under our assumption on $\beta$ and $p$ in \eqref{eq:beta}, for every \begin{equation}\lefteftarrowbel{eq:D_p} I\in \mathcal D_p:=\ccbb{I=(i_1,\leftdots,i_p)\in{\mathbb N}^p:~i_1< \cdots<i_p}, \end{equation} we have \begin{equation}\lefteftarrowbel{eq:decomp intersect} {\bf i}gcap_{i\in I}(R_i+V_i) \eqd R_I+V_I , \end{equation} where $R_I$ is a $\beta_p$-stable regenerative set and $V_I$ is an independent random variable. In words, the intersection of $p$ independent randomly shifted $\beta$-stable regenerative set is $\beta_p$-stable regenerative with an independent random shift. This follows for example from the strong Markov property of the regenerative sets. See also \citep[Appendix B]{samorodnitsky17extremal}. There are multiple ways to construct the local time associated to a regenerative set (\citep[Chapter 12]{kallenberg17random}). For the series representation of multiple integrals needed later, we use a construction due to Kingman \citep{kingman73intrinsic} which treats the local time as a functional defined on $\mathbf{F}$. In particular, set \[ L=L^{(\beta_p)}: \mathbf{F}\rightightarrow[0,\infty], ~ L(F) :=\leftimsup_{n\to\infty}\frac1{l_{\beta_p}(n)}\lefteftarrowmbda\lefteft(F+\bb{- \frac{1}{2n},\frac{1}{2n}}\rightight), \] where $\lefteftarrowmbda$ is the Lebesgue measure, $F+\sbb{- 1/{2n},1/{2n}}\equiv\cup_{x\in F}[x-1/{2n},x+1/{2n}]$, and the normalization sequence \[ l_{\beta_p}(n)=\int_{0}^{1/n} \mathbb{P}i_{\beta_p}((x,\infty))dx= \frac{ n^{\beta_p-1}}{\Gamma(2-\beta_p)}, \] where $\mathbb{P}i_\beta$ is as in \eqref{eq:Pi_beta}. 
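As a quick verification of the last display, note that the tail of the L\'evy measure in \eqref{eq:Pi_beta} is $x^{-\beta_p}/\Gamma(1-\beta_p)$, so that
\[
l_{\beta_p}(n)=\int_0^{1/n}\frac{x^{-\beta_p}}{\Gamma(1-\beta_p)}\,dx
=\frac{(1/n)^{1-\beta_p}}{(1-\beta_p)\Gamma(1-\beta_p)}
=\frac{n^{\beta_p-1}}{\Gamma(2-\beta_p)},
\]
using the identity $\Gamma(2-\beta_p)=(1-\beta_p)\Gamma(1-\beta_p)$.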
The exclusive choice of $\beta_p$ as in \eqref{eq:beta} is due to the fact that we shall only deal with local times of shifted $\beta_p$-stable regenerative sets, obtained as the intersection of $p$ independent stable regenerative sets. We then define \begin{equation}\lefteftarrowbel{eq:L_t} L_t(F):=L(F\cap [0,t]), \quad t\ge 0. \end{equation} \begin{Lem} The functionals $L$ and $L_t$ are $\mathcal B({\mathbf F})/\mathcal{B}( [0,\infty])$-measurable, where $\mathcal B({\mathbf F})$ and $\mathcal{B}( [0,\infty])$ denote the Borel $\sigma$-fields on ${\mathbf F}$ and $ [0,\infty]$ respectively. \end{Lem} \begin{proof} Direct sum and intersection are measurable operations for closed sets \citep[Theorem 1.3.25]{molchanov17theory}. The Lebesgue measure $\lefteftarrowmbda$ is also a measurable functional from $\mathbf{F}$ to $[0,\infty]$. Indeed, write $[0,\infty)=\cup_{n=0}^\infty K_n$ where $K_n=[n,n+1]$. Then $F\mapsto \lefteftarrowmbda (F\cap K_n)$ is a measurable mapping from $\mathbf{F}$ to $[0,\infty]$ since it is upper semi-continuous \citep[Proposition E.13]{molchanov17theory}. Hence $F\rightightarrow \lefteftarrowmbda({F})= \sif n0 \lefteftarrowmbda(F\cap K_n)$ is measurable as well. \end{proof} From now on, we denote the local-time processes using the notation \begin{equation}\lefteftarrowbel{eq:LIt} L_{I,t} \equiv L_t\pp{{\bf i}gcap_{i\in I}(R_i+V_i)},~ t \in[0,\infty), ~I\in \mathcal D_p. \end{equation} In view of \eqref{eq:decomp intersect} and \cite[Theorem 3]{kingman73intrinsic} (conditioning on $V_I$ in \eqref{eq:decomp intersect}), for each $I\in\mathcal D_p$, the finite-dimensional distributions of $(L_{I,t})_{t\ge 0}$ coincide with those of a randomly shifted $\beta_p$-Mittag--Leffler process, $(\mathcal M_{\beta_p}(t-V_I)_+)_{t\ge 0}$, where $V_I$ is independent of $\mathcal M_{\beta_p}$. In particular, $(L_{I,t})_{t\ge 0}$ admits a version which has a non-decreasing and continuous path a.s.. The advantage of the above construction is that now for different $I,t$, the corresponding local times are constructed on a common probability space as measurable functions evaluated at intersections of independent shifted random regenerative sets. We shall develop the formula for their joint moments. We work with a specific choice of the random shifts: most of the time we assume in addition that $(V_i)_{i\in{\mathbb N}}$ are i.i.d.~with the law \begin{equation}\lefteftarrowbel{eq:V} P(V_i\lefte v) = v^{1-\beta}, v\in[0,1]. \end{equation} \begin{Rem}\lefteftarrowbel{rem:stationary} The law of the shift \eqref{eq:V} will show up naturally in our limit theorem later. To understand the origin of \eqref{eq:V}, recall that a random closed set $F$ on $[0,\infty)$ is said to be stationary, if its law is unchanged under the map $F\rightightarrow (F\cap [x,\infty))-x$ for any $x>0$. While a $\beta$-stable regenerative set $R_i$ itself is not stationary, it is known that with an independent shift $V_i$ following an infinite law proportional to $v^{-\beta}dv$ on ${\mathbb R}_+$, the shifted random (with respect to an infinite measure) set $R_i+V_i$ is stationary (\citep[Proposition 4.1]{lacaux16time}, see also \citep{fitzsimmons88stationary}). The law \eqref{eq:V} is nothing but the normalized restriction to $[0,1]$ of this infinite law. As a consequence, one could derive that ${\bf i}gcap_{i\in I}(R_i+V_i) \equiv R_I+V_I$ is also stationary with respect to an infinite measure \citep[Corollary B.3]{samorodnitsky17extremal}. 
This is in accordance with the stationarity of the increments of the process ${\mathbb Z}ab$ in \eqref{eq:Z} (see Section \rightef{sec:limit process def}). \end{Rem} From now on we fix $\beta\in(0,1)$, $p\in{\mathbb N}$, such that \eqref{eq:beta} holds. Introduce for $q\ge 2$, a symmetric function $h_q\topp\beta$ on the off-diagonal subset of $(0,1)^q$ determined by \begin{equation}\lefteftarrowbel{eq:hq} h_q\topp\beta(x_1,\dots, x_q) = \Gamma(\beta)\Gamma(2-\beta)\prodd j2q (x_j-x_{j-1})^{\beta-1},\ 0<x_1<\cdots<x_q<1. \end{equation} Here and below, for any $q\in{\mathbb N}$, a $q$-variate function $f$ is said to be symmetric, if $f(x_1,\dots,x_q) = f(x_{\sigma(1)},\dots,x_{\sigma(q)})$ for any permutation $\sigma$ of $\{1,\dots,q\}$. For a symmetric function on the off-diagonal set, we do not specify the values on the diagonal set $\{(x_1,\dots,x_q)\in(0,1)^q: x_i = x_j \mbox{ for some $i\ne j$}\}$, which has zero Lebesgue measure and hence does not have any impact in our derivation. Introduce also $h_0\topp\beta:=1$ and $h_1\topp\beta(x):=\Gamma(\beta)\Gamma(2-\beta)$. The main result of this section is the following. \begin{Thm}\lefteftarrowbel{thm:1} Let $(R_i)_{i\in{\mathbb N}}$ be i.i.d.~$\beta$-stable regenerative sets and $(V_i)_{i\in{\mathbb N}}$ be i.i.d.~with law \eqref{eq:V}, the two sequences being independent. Given a collection of $I_\ell \in \mathcal{D}_p $, $\ell=1,\leftdots,r$, set $ K = \max\pp{{\bf i}gcup_{\ell=1}^r I_\ell}. $ Then, for all $\vv{t}=(t_1,\leftdots,t_r)\in [0,1]^r$, \begin{equation}\lefteftarrowbel{eq:moment limit} \mathbb{E} \pp{\prod_{\ell =1}^r L_{I_\ell,t_\ell}}= \frac{1}{\Gamma(\beta_p)^r} \int_{\vv 0 < \vv x <\vv t} \prod_{i=1}^K h_{|\mathcal I (i)|}^{(\beta)} (\vv x_{\mathcal I(i)}) \, d\vv x \end{equation} with \begin{equation}\lefteftarrowbel{eq:calI} \mathcal I(i) := \ccbb{\ell\in\{1,\dots,r\}:i\in I_\ell},~ i=1,\dots,K. \end{equation} \end{Thm} Above and below, we write $\vv x = (x_1,\dots,x_r), d\vv x = dx_1\dots dx_r$, $\vv 0 = (0,\dots,0), \vv 1 = (1,\dots,1)$, and $\vv x<\vv y$ is understood in the coordinate-wise sense. Also, write \[ \vv x_{\mathcal I(i)} = (x_\ell)_{\ell\in \mathcal I(i)}, \] understood as the vector in ${\mathbb R}_+^{|\mathcal I(i)|}$. (Since each $h_{|\mathcal I(i)|}\topp\beta$ is a symmetric function, the order of coordinates of $\vv x_{\mathcal I(i)}$ is irrelevant here.) Write $\vv V_I = (V_i)_{i\in I}$ and $\vv R_I = (R_i)_{i\in I}$. In view of \eqref{eq:LIt}, from now on we write explicitly $L_{I,t} \equiv L_{I,t}(\vv R_I,\vv V_I)$. We have, by Fubini's theorem, \[ \mathbb{E} \pp{\prod_{\ell =1}^r L_{I_\ell,t_\ell}(\vv R_{I_\ell},\vv V_{I_\ell})}= \int_{(0,1)^K}{\mathbb E}\pp{\prodd\ell1rL_{I_\ell,t_\ell}(\vv R_{I_\ell},\vv v_{I_\ell})}(1-\beta)^K\prodd i1Kv_i^{-\beta}d\vv v. \] We shall establish a formula for \[ \mathbb{P}si(\vv v):= {\mathbb E}\pp{\prodd\ell1rL_{I_\ell,t_\ell}(\vv R_{I_\ell},\vv v_{I_\ell})}, \mbox{ for all } \vv v\in (0,1)^K, \] where the expectation is with respect to the randomness coming from $\vv R_{I_\ell}$, $\ell=1,\leftdots,r$. At the core of our argument is the following proposition. Let $g_q$, $q\in {\mathbb N}$ be symmetric functions on the off-diagonal subset of $(0,1)^q$ such that \begin{equation}\lefteftarrowbel{eq:g_q} g_q^{(\beta)}(x_1,\leftdots,x_q)= \prod_{j=1}^q (x_j - x_{j-1})^{\beta-1}, \ \ x_0:=0 < x_1 < \cdots < x_q <1, \end{equation} and $g_0^{(\beta)} := 1$. We write $\max(\vv v_I) = \max_{i\in I}v_i$, and similarly for $\min(\vv v_I)$. 
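For orientation, when $q=2$ and $0<x_1<x_2<1$ these definitions read
\[
g_2^{(\beta)}(x_1,x_2)=x_1^{\beta-1}(x_2-x_1)^{\beta-1}
\quad\mbox{and}\quad
h_2^{(\beta)}(x_1,x_2)=\Gamma(\beta)\Gamma(2-\beta)(x_2-x_1)^{\beta-1};
\]
in general, $g_q^{(\beta)}$ and $h_q^{(\beta)}$ differ only through the factor attached to the smallest coordinate (anchored at $x_0=0$ for $g_q^{(\beta)}$), a relation that will be exploited in the proof of Theorem \ref{thm:1} below.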
\begin{Pro}\lefteftarrowbel{prop:psy} Under the assumption of Theorem \rightef{thm:1}, \begin{equation}\lefteftarrowbel{eq:psy} \mathbb{P}si(\vv v) = \frac{1}{\Gamma(\beta_p)^r} \int_{\max(\vv v_{I_\ell}) < x_\ell < t_\ell, \, \ell=1,\dots,r} \prod_{i=1}^K g^{(\beta)}_{|\mathcal{I}(i)|}(\vv x_{\mathcal I(i)}-v_{i}\vv 1)d\vv x. \end{equation} In particular, $\mathbb{P}si(\vv v) = 0$ if $\max(\vv v_{I_\ell})\ge t_\ell$ for some $\ell=1,\dots,r$. \end{Pro} The proof of the proposition is postponed to Section \rightef{sec:covering} below. \begin{proof}[Proof of Theorem \rightef{thm:1}] We shall compute \begin{equation}\lefteftarrowbel{eq:convolution0} (1-\beta)^{K}\int_{(0,1)^K}\mathbb{P}si(\vv v) \prod_{i=1}^K v_i^{-\beta}d \vv v. \end{equation} We express the constraint $\max(\vv v_{I_\ell}) < x_\ell$, $\ell=1,\leftdots,r$ in \eqref{eq:psy} as \[ v_i < \min(\vv x_{\mathcal I(i)})=:m_i,\quad i=1,\dots,K. \] Then by Proposition \rightef{prop:psy}, the expression in \eqref{eq:convolution0} becomes \begin{equation}\lefteftarrowbel{eq:convolution} \frac{(1-\beta)^{K}}{\Gamma(\beta_p)^r}\int_{\vv 0< \vv x< \vv t}\int_{\vv 0<\vv v<\vv m}\prodd i1K\pp{g_{|\mathcal I(i)|}^{(\beta)}(\vv x_{\mathcal I(i)}-v_i\vv 1)v_i^{-\beta}}d\vv v d\vv x. \end{equation} A careful examination shows that \[ g_{|\mathcal I(i)|}^{(\beta)}(\vv x_{\mathcal I(i)}-v_i\vv 1) =\frac1{\Gamma(\beta)\Gamma(2-\beta)} (m_i-v_i)^{\beta-1}h_{|\mathcal I(i)|}^{(\beta)}(\vv x_{\mathcal I(i)}). \] \comment{ Observe that $g_{|\mathcal I(i)|}(\vv x_{\mathcal I(i)}-v_i\vv 1) = g_{|\mathcal I(i)|}^{(\beta)}(\vv x_{\mathcal I(i)}^*)$ with $\vv x_{\mathcal I(i)}^* = (x^*_\ell)_{\ell\in \mathcal I(i)}$ given by \[ x^*_\ell := \begin{cases} x_\ell & x_\ell \ne m_i\\ m_i-v_i & x_\ell = m_i, \end{cases} \ell\in\mathcal I(i). \] (We only need consider the case $m_i = \min(\vv x_{\mathcal I(i)})$ is uniquely achieved, since $g_{|\mathcal I(i)|}$ is only non-zero off-diagonal.) }Then, \eqref{eq:convolution} becomes \begin{multline*} \frac1{\Gamma(\beta_p)^r}\pp{\frac1{\Gamma(\beta)\Gamma(1-\beta)}}^K\\ \times \int_{\vv 0<\vv x< \vv t}\int_{\vv 0<\vv v<\vv m}\prodd i1K\pp{(m_i-v_i)^{\beta-1}v_i^{-\beta}h_{|\mathcal I(i)|}^{(\beta)}(\vv x_{\mathcal I(i)})}d\vv vd \vv x\\ = \frac1{\Gamma(\beta_p)^r}\int_{\vv 0<\vv x< \vv t}\prodd i1Kh_{|\mathcal I(i)|}^{(\beta)}(\vv x_{\mathcal I(i)})d\vv x, \end{multline*} by integrating with respect to each $v_i$ separately and applying the relation between beta and gamma functions. Then the desired result follows. \end{proof} In particular, we have the following. \begin{Cor}\lefteftarrowbel{Cor:incre moment} Let $L_{I,t}$ be as in \eqref{eq:LIt}. Then for $0\lefte s<t\lefte 1$, \begin{equation}\lefteftarrowbel{eq:4.5} \mathbb{E} \lefteft(L_{I,t}-L_{I,s}\rightight)^r = \mathbb{E} L_{I,t-s}^r = \frac{\Gamma(\beta)^p\Gamma(2-\beta)^pr!}{ \Gamma(\beta_p) \Gamma((r-1)\beta_p+2)} \cdot (t-s)^{(r-1)\beta_p+1}. \end{equation} \end{Cor} \begin{proof} The second equality follows from \eqref{eq:moment limit} with $I_1= \cdots =I_r=I$ and the following identity: \[ \int_{0<x_1<\cdots<x_r<1}\prodd i2r(x_{i}-x_{i-1})^{\gamma}d\vv x= \frac{\Gamma(\gamma+1)^{r-1} }{\Gamma(r(\gamma+1)-\gamma+1)} \mbox{ for all } \gamma>-1, r\ge 2, \] which can be obtained by changes of variables and the relation between beta and gamma functions. 
The first equality can be either derived from \eqref{eq:moment limit} through an expansion, or from the fact that each underlying shifted $\beta$-stable regenerative set $R_i+V_i$ is stationary when restricted to the interval $[0,1]$ (Remark \rightef{rem:stationary}). \end{proof} \begin{Rem} As mentioned before Remark \rightef{rem:stationary}, when restricted to $[0,1]$, $L_{I,t} \stackrel{d}{=} M_{\beta_p}((t-V_I)_+)$ where $V_I$ is a sub-random variable with density function $c_{\beta,p}(1-\beta_p)v^{-\beta_p}$ with $c_{\beta,p} = (\Gamma(\beta)\Gamma(2-\beta))^p/(\Gamma(\beta_p)\Gamma(2-\beta_p))$ \citep[Eq.(B.9)]{samorodnitsky17extremal}. Therefore, all the properties of $(L_{I,t})_{t\in[0,1]}$, for a single fixed $I$, can also be derived from the corresponding $(M_{\beta_p}((t-V_I)_+)_{t\in[0,1]}$, where $\mathbb P(V_{I}\lefte v) = v^{1-\beta_p}$ and $V_{I}$ is independent from $M_{\beta_p}$. For example, the $r$-th moments of the latter have been known \citep[bottom of page 77]{owada16limit}, and they entail \eqref{eq:4.5} as an alternative proof. \end{Rem} \subsection{Random covering scheme}\lefteftarrowbel{sec:covering} To establish Proposition \rightef{prop:psy}, we shall use a construction of local times motivated from the so-called random covering scheme, by first constructing a stable regenerative set as the set left uncovered by a family of random open intervals based on a Poisson point process (e.g.~\citep{bertoin00two,fitzsimmons85set} and \citep[Chapter 7]{bertoin99subordinators}). We shall work with a specific construction of $(R_i)_{i\in{\mathbb N}}$ as follows. Let $\mathcal{N}=\sum_{\ell\in{\mathbb N}} \delta_{(a_\ell,y_\ell,z_\ell)}$ be a Poisson point process on $[0,K) \times {\mathbb R}_+\times{\mathbb R}_+$, $K\in {\mathbb N}$, with intensity measure $dadyz^{-2}dz$, where $\delta$ denotes the Dirac measure. Define \[ O_i:={\bf i}gcup_{\ell:a_\ell\in J_i}(y_\ell,y_\ell+z_\ell),\quad R_i:= [0,\infty)\setminus O_i,\quad i=1,\leftdots,K, \] where $J_i=[i-1,i-\beta)$. It is known that $(R_i)_{i=1,\dots,K}$ constructed above are i.i.d.~$\beta$-stable regenerative sets starting at the origin \citep[Example 1]{fitzsimmons85set}. In this section we shall work with deterministic shifts \[ \vv v = (v_1,\dots,v_K)\in (0,1)^K. \] Let \begin{equation}\lefteftarrowbel{eq:D_p(m)} \mathcal{D}_p(m):=\{I\in \mathcal{D}_p: ~\max I\lefte m\}, ~m\in{\mathbb N}. \end{equation} where $\mathcal{D}_p$ is as in \eqref{eq:D_p}. With the functional $L_t$ in \eqref{eq:L_t}, consider \begin{equation}\lefteftarrowbel{eq:LItv} L_{I,t} \equiv L_t\pp{{\bf i}gcap_{i\in I}(R_i+v_i)},\ I\in\mathcal D_p(K),\ t\ge 0, \end{equation} where $(R_i)_{i\in{\mathbb N}}$ are as above. We emphasize that the notation in \eqref{eq:LItv} is {\em strictly restricted to this section}, and in particular is different from our notation of $L_{I,t}$ in the other sections, where $v_i$ will be replaced by random $V_i$. Next, we consider the following approximations of $(R_i)_{i=1,\dots,K}$. For any $\epsilon>0$, we set \[ O_i^{(\epsilon)}:={\bf i}gcup_{\ell:a_\ell\in J_i, z_\ell \ge \epsilon}(y_\ell,y_\ell+z_\ell),\quad R_i^{(\epsilon)}:= [0,\infty)\setminus O_i^{(\epsilon)},\quad i=1,\leftdots,K. \] Define \[ \wt R_i\topp\epsilon := R_i\topp\epsilon+v_i \quad\mbox{ and }\quad \wt R_I\topp\epsilon := {\bf i}gcap_{i\in I}\wt R_i\topp\epsilon, \quad I\in\mathcal D_p(K). 
\] Introduce then \begin{equation}\lefteftarrowbel{eq:Delta_st} L_{I,t}\topp\epsilon:= \frac{1}{\Gamma(\beta_p)} \lefteft(\frac{\epsilon}{e}\rightight)^{{\beta_p-1}}\int_0^t {\bf 1}dd{x\in \wt{R}_I^{(\epsilon)} } dx \quad\mbox{ and }\quad {\mathbb D}elta_{s,t}^{(\epsilon)}(I):=L_{I,t}\topp\epsilon-L_{I,s}^{(\epsilon)}, \end{equation} for $0<s<t$. Set also \[\mathcal{N}_\epsilon:= \sum_{\ell:\, z_\ell \ge \epsilon} \delta_{(a_\ell,y_\ell,z_\ell)}. \] Below we begin with calculating certain asymptotic moments involving \eqref{eq:Delta_st}. \begin{Lem}\lefteftarrowbel{lem:1} For any $I_\ell\in \mathcal{D}_p(K)$, $\vv v \in (0,1)^K$, and $s_\ell, t_\ell$ satisfying $\max(\vv v_{I_\ell}) < s_\ell < t_\ell \lefte 1$, $\ell=1,\dots,r$, we have \begin{align} &\leftim_{\substack{s_\ell \downarrow \max (\vv v_{I_\ell}), \\ \ell=1,\dots,r}} \leftim_{\epsilon \downarrow 0} {\mathbb E}\pp{\prodd\ell1r {\mathbb D}elta_{s_\ell, t_\ell}^{(\epsilon)}(I_\ell) } \lefteftarrowbel{eq:psy1} \\ &\quad = \Gamma(\beta_p)^{-r} \int_{\max(\vv v_{I_\ell}) < x_\ell < t_\ell, \, \ell=1,\dots,r} \prod_{i=1}^K g_{|\mathcal{I}(i)|}^{(\beta)}(\vv x_{\mathcal I(i)}-v_{i}\vv1)d\vv x. \notag \end{align} \end{Lem} We start with a preparation. Define $g_{q,\epsilon}^{(\beta)}$ similarly as $g_q^{(\beta)}$ in \eqref{eq:g_q} as the symmetric function determined by \[ g_{q,\epsilon}^{(\beta)}(x_1,\leftdots,x_q) = \prod_{j=1}^q f_\epsilon(x_j - x_{j-1}), \ \ x_0:=0 < x_1 < \cdots < x_q <1, \] where \begin{equation}\lefteftarrowbel{eq:h_epsilon} f_\epsilon(y):= {\bf i}g(e^{y/\epsilon-1}\epsilon {\bf i}g)^{\beta-1}1_{\{y\lefte \epsilon\}} + y^{\beta-1}1_{\{y> \epsilon\}},\quad y>0. \end{equation} We set also $g_{0,\epsilon}^{(\beta)}:=1$. \begin{proof} [Proof of Lemma \rightef{lem:1}] First, we claim that if \begin{equation*} (x_1,\leftdots,x_q)\in D_q:=\{(x_1,\leftdots,x_q) \in (0,1)^q:\ x_i\neq x_j \text{ for }i\neq j \}, \ q\in {\mathbb N}, \end{equation*} then for $\epsilon\in (0,1)$, \begin{equation} \lefteftarrowbel{e:multi.pts} P\pp{x_i \in {R}^{(\epsilon)}_1, ~i=1,\leftdots,q} = \lefteft(\frac{e}{\epsilon}\rightight)^{q(\beta-1)} g_{q,\epsilon}^{(\beta)}(x_1,\leftdots,x_q). \end{equation} For the proof, assume without loss of generality that $x_0=0<x_1<\cdots<x_q< 1$. Observe that the event in the probability sign in \eqref{e:multi.pts} occurs exactly when the Poisson point process $\mathcal N$ has no points in the following regions \[ \ccbb{(a,y,z)\in[0,1-\beta)\times[x_{i-1},x_i)\times{\mathbb R}_+:\, y+z>x_i,\, z>\epsilon }, \ \ i=1,\dots, q. \] Therefore, \[ P\pp{x_i \in {R}_1^{(\epsilon)}, ~i=1,\leftdots,q} =\prod_{i=1}^q \exp\pp{ -(1-\beta) \int_{x_{i-1}}^{x_i}\int_{\max\{x_i-y, \epsilon\}}^\infty \frac{1}{z^2} dz dy }. \] By elementary calculations, \[ \int_{x_{i-1}}^{x_i}\int_{\max\{x_i-y, \epsilon\}}^\infty \frac{1}{z^2} dz dy = \begin{cases} \displaystyle\frac{x_i-x_{i-1}}\epsilon & \text{ if } x_i-x_{i-1}\lefte \epsilon;\\ \\ \displaystyle\leftog\pp{\frac e\epsilon(x_i-x_{i-1})} & \text{ if } x_i-x_{i-1}>\epsilon. \end{cases} \] Putting these together yields the desired result. Now let us turn our attention to proving \eqref{eq:psy1}. 
We have, by \eqref{eq:Delta_st} and Fubini, \begin{align} \mathbb{E} \pp{ \prod_{\ell =1}^r {\mathbb D}elta_{s_\ell,t_\ell}^{(\epsilon)}(I_\ell)} & = \frac1{\Gamma(\beta_p)^r} \lefteft(\frac{\epsilon}{e}\rightight)^{rp(\beta-1)}\mathbb{E}\lefteft( \prod_{\ell=1}^r\int_{s_\ell}^{t_{\ell}} {\bf 1}dd{x\in \wt{R}_{I_\ell}^{(\epsilon)}} dx\rightight)\nonumber\\ &= \frac1{\Gamma(\beta_p)^r} \lefteft(\frac{\epsilon}{e}\rightight)^{rp(\beta-1)} \int_{\vv s < \vv x < \vv t} P\pp{x_\ell\in \wt{R}_{I_\ell}^{(\epsilon)},~\ell=1,\dots,r }d\vv x. \lefteftarrowbel{eq:Delta_epsilon} \end{align} Notice that $\sccbb{x_\ell\in\wt R_{I_\ell}\topp\epsilon} = {\bf i}gcap_{i:\ell\in\mathcal I(i)}\sccbb{x_\ell-v_i\in R_i\topp\epsilon}$. Therefore by independence, we get \[ P\pp{x_\ell\in \wt{R}_{I_\ell}^{(\epsilon)},~\ell=1,\leftdots,r } = \prod_{i=1}^K P\pp{x_\ell-v_i\in {R}_i^{(\epsilon)},~ \ell\in \mathcal{I}(i)}. \] Note that the probability above is zero if one of $x_\ell-v_i$ is negative, $i=1,\leftdots,K$. Hence by \eqref{e:multi.pts} and the fact $\sum_{i=1}^K |\mathcal{I}(i)|=rp$, we have \[ \prod_{i=1}^K P\pp{x_\ell-v_i\in {R}_i^{(\epsilon)},~ \ell\in \mathcal{I}(i)}= \lefteft(\frac e{\epsilon}\rightight)^{rp(\beta-1)}\prod_{i=1}^K g_{|\mathcal{I}(i)|,\epsilon}^{(\beta)}(\vv x_{\mathcal I(i)}-v_{i}\vv1) 1_{\{\vv x_{\mathcal I(i)}\ge v_{i}\vv1\}}. \] Summing up, in view of \eqref{eq:Delta_epsilon}, we claim that \begin{align} &\leftim_{\substack{s_\ell \downarrow \max (\vv v_{I_\ell}), \\ \ell=1,\dots,r}} \leftim_{\epsilon \downarrow 0} {\mathbb E}\pp{\prodd\ell1r {\mathbb D}elta_{s_\ell, t_\ell}^{(\epsilon)}(I_\ell) } \lefteftarrowbel{eq:moment epsilon zero} \\ &= \Gamma(\beta_p)^{-r} \int_{\max(\vv v_{I_\ell}) < x_\ell < t_\ell, \, \ell=1,\dots,r}\prod_{i=1}^K g_{|\mathcal{I}(i)|}^{(\beta)}(\vv x_{\mathcal I(i)}-v_{i}\vv1)d\vv x \notag \end{align} where $g_q^{(\beta)}$ is as in \eqref{eq:g_q}. Indeed it is elementary to verify from \eqref{eq:h_epsilon} that as $\epsilon\downarrow 0$, we have $f_\epsilon(y) \uparrow y^{\beta-1}$ for any $y>0$, and hence $g_{q,\epsilon}^{(\beta)}\uparrow g_q^{(\beta)}$ a.e.. So \eqref{eq:moment epsilon zero} follows from the monotone convergence theorem. \end{proof} Next in order to establish Proposition \rightef{prop:psy}, we need to identify an a.s.~limit of \\ $\leftim_{{\mathbb Q}\ni s_\ell \downarrow \max (\vv v_{I_\ell})} \leftim_{\epsilon \downarrow 0} \prodd\ell1r {\mathbb D}elta_{s_\ell, t_\ell}^{(\epsilon)}(I_\ell)$, together with an interchangeability between the limits and an expectation. To this aim we shall provide the following two lemmas. In the first lemma below, if $p=1$, this is the same result as that in \citep{bertoin00two}. For general $I$ the proof follows the same strategy. \begin{Lem} For every $I\in \mathcal{D}_p(K)$ and $s,t$ satisfying $\max(\vv v_I)<s<t\lefte 1$, \begin{equation}\lefteftarrowbel{eq:rev mart} \mathbb{E}\pp{{\mathbb D}elta_{s,t}^{(\eta)}(I) \;\middle\vert\;\mathcal{N}_\epsilon}= {\mathbb D}elta_{s,t}^{(\epsilon)}(I) ~a.s. \mbox{ for all } \ 0<\eta<\epsilon< s -\max(\vv v_I). 
\end{equation} \end{Lem} \begin{proof} For $\eta\in (0,\epsilon)$, define \[ O_i^{(\eta,\epsilon)}={\bf i}gcup_{\ell:a_\ell\in J_i, z_\ell \in [\eta, \epsilon)}(y_\ell,y_\ell+z_\ell),\quad R_i^{(\eta,\epsilon)}= [0,\infty)\setminus O_i^{(\eta,\epsilon)},\quad i=1,\leftdots,K, \] and define \[ \wt R_i\topp{\eta,\epsilon}:= R_i\topp{\eta,\epsilon}+v_i \quad\mbox{ and }\quad \wt R_I\topp{\eta,\epsilon}:={\bf i}gcap_{i\in I}\wt R_i\topp{\eta,\epsilon}, \ I \in \mathcal D_p(K). \] Then for $0<\eta<\epsilon< s -\max(\vv v_I)$, by Fubini's theorem and the independence property of the Poisson point process, we have \begin{align}\lefteftarrowbel{eq:check mart} \mathbb{E}\pp{\int_s^t {\bf 1}dd{x\in \wt{R}_I^{(\eta)}} dx \;\middle\vert\;\mathcal{N}_\epsilon} & =\int_s^t \mathbb{E}\pp{ {\bf 1}dd{x\in \wt{R}_I^{(\epsilon)} } {\bf 1}dd{x\in \wt{R}_I^{(\eta,\epsilon)}} \;\middle\vert\; \mathcal{N}_\epsilon}dx\notag\\ &=\int_s^t P\pp{x\in \wt{R}_I^{(\eta,\epsilon)} } {\bf 1}dd{x\in \wt{R}_I^{(\epsilon)} } dx. \end{align} By a calculation similar to that in the proof of Lemma \rightef{lem:1} (see also \citep[page 10]{bertoin00two}), we have, for $w>\epsilon$, \begin{align*} P\pp{w\in {R}_i^{(\eta,\epsilon)} } = \exp\pp{ -(1-\beta) \iint 1_{\{ y <w <y+z,\ z\in [\eta,\epsilon)\}} \frac{1}{z^2} dzdy }=\lefteft(\frac{\eta}{\epsilon}\rightight)^{1-\beta}. \end{align*} Hence \[ P\pp{x\in \wt{R}_I^{(\eta,\epsilon)} }=\prod_{i\in I} P\pp{x-v_i\in {R}_i^{(\eta,\epsilon)} }=\lefteft(\frac{\eta}{\epsilon}\rightight)^{p(1-\beta )}=\lefteft(\frac{\eta}{\epsilon}\rightight)^{1-\beta_p}. \] Plugging this back into (\rightef{eq:check mart}), we obtain (\rightef{eq:rev mart}). \end{proof} This lemma says that $({\mathbb D}elta_{s,t}\topp\epsilon (I))_{\epsilon\in(0,s-\max(\vv v_I))}$ is a martingale as $\epsilon\downarrow0$ with respect to the filtration $(\sigma(\mathcal N_\epsilon))_{\epsilon>0}$. Since the convergence of the moments of ${\mathbb D}elta_{s,t}^{(\epsilon)}(I)$ as $\epsilon\downarrow 0$, was established in the proof of Lemma \rightef{lem:1}, by the martingale convergence theorem, we have for every $0 < s < t \lefte1$, \begin{equation}\lefteftarrowbel{eq:Delta conv} \leftim_{\epsilon\downarrow 0} {\mathbb D}elta_{s,t}^{(\epsilon)}(I) =: {\mathbb D}elta^*_{s,t}(I) \mbox{ a.s. and in $L^m$ for all $m\in {\mathbb N}$}. \end{equation} Then there exists a probability-one set, on which the convergence in \eqref{eq:Delta conv} holds for all $s\in \mathbb{Q}\cap (0,t)$. Since ${\mathbb D}elta^*_{s,t}(I)$ is non-increasing in $s\in \mathbb{Q}\cap (0,t)$, one can a.s.\ define \begin{equation}\lefteftarrowbel{eq:L_t(I) cover}L_{I,t}^*: =\begin{cases} \leftim\leftimits_{{\mathbb Q}\ni s\downarrow \max(\vv v_I)} {\mathbb D}elta^*_{s,t}(I), &\text{ if } \max(\vv v_I)<t,\\ 0 & \text{ if }\max(\vv v_I)\ge t. \end{cases} \end{equation} \begin{Lem}\lefteftarrowbel{lem:local time identical} For any $0 < t\lefte 1$, $\vv {v}\in (0,1)^K$, and any $I\in\mathcal D_p(K)$, we have $L_{I,t} = L^*_{I,t}$ almost surely. \end{Lem} \begin{proof} First we write \[ \wt{R}_I={\bf i}gcap_{i\in I} (R_i+v_i)=R_I+ V_I \] where $ V_I:=\inf \wt{R}_I$ and $R_I:= (\wt{R}_I\cap [V_I,\infty))-V_I$. (Note that even with all $v_i$ fixed, $V_I$ is still a non-degenerate random variable with probability one, unless $v_i = v$ for all $i\in I$.) In view of \citep[Lemma 3.1]{samorodnitsky17extremal}, $R_I$ is a $\beta_p$-stable regenerative set and $V_I\ge 0$ is a random shift independent of $R_I$. 
Observe that $L_{I,t}=L_{I,t}^*=0$ for $t\in [0,V_{I})$, so it suffices to show $L_{I,t+V_I}= L_{I,t+V_I}^*$ for any $t\ge 0$ a.s. By \citep[Theorem 3]{kingman73intrinsic}, $L_{I,t+V_I}=L_t(R_I)$ is a version of the standard local time of $R_I$ (or a standard $\beta_p$-Mittag--Leffler process). Here by ``standard'', we mean that $L_t(R_I)$ has the same law as the inverse of a standard $\beta_p$-stable subordinator satisfying \eqref{eq:laplace} but with $\beta$ there replaced by $\beta_p$. On the other hand, using Kolmogorov's criterion \citep[Theorem 3.23]{kallenberg02foundations} and the formula of moments in Lemma \rightef{lem:1} above, one can verify that $\{L_{I,t}^*\}_{t\ge 0}$ admits a version which is continuous in $t$. It also follows from the construction that $L_{I,t+V_I}^*$ is additive and increases only over $t\in R _I$. Then by Maisonneuve \citep[Theorem 3.1]{maisonneuve87subordinators}, for some constant $c>0$, $L_{I,t+V_I}^*= cL_{I,t+V_I}$ almost surely for each $t\ge 0$. We shall show that $c=1$. Taking $t=1$, ${\mathbb E} L_{I,1+V_I} = 1/\Gamma(\beta_p+1)$ by our knowledge of Mittag--Leffler process (e.g.~\citep[Proposition 1(a)]{bingham71limit}). Now to show $c=1$, it suffices to show that $\mathbb{E} L_{I,1+V_I}^* = 1/\Gamma(\beta_p+1)$. Let $(L_{I,t}^{o})_{t\ge 0}$ be $(L_{I,t}^*)_{t\ge 0}$ in \eqref{eq:L_t(I) cover} but with $\vv{v}_I=\vv 0$. From \eqref{eq:psy}, one may verify that $\mathbb{E} L_{I,1}^o = 1/\Gamma(\beta_p+1)$ (in fact, comparing all the moments leads to $L_{I,1}^o \eqd L_{I,1+V_I}$). The proof is concluded by showing that \begin{equation}\lefteftarrowbel{eq:strongMarkov} (L_{I,t+V_I}^*)_{t\ge 0} \eqd (L_{I,t}^o)_{t\ge 0}. \end{equation} This essentially follows from a strong regenerative property. Indeed, for fixed $\epsilon>0$, let $\mathcal{G}\topp\epsilon_t$, $t\ge 0$, be the augmented filtration generated by the $p$-dimensional process $(D_{i,t}^{(\epsilon)},\ i\in I)_{t\ge 0}$, where $D_{i,t}\topp\epsilon=\inf ( \wt{R}_i^{(\epsilon)} \cap (t,\infty))$. Note that for each $i\in I$, $\wt R_i\topp\epsilon = R_i\topp\epsilon+v_i$ is regenerative with respect to $(\mathcal G\topp\epsilon_t)_{t\ge 0}$ in the sense of \citep[Definition 1.1]{fitzsimmons85intersections}: this can be seen from the fact that $R\topp\epsilon_i$ is regenerative with respect to $(\mathcal G\topp\epsilon_{t+v_i})_{t\ge 0}$ (see e.g.~\citep[Eq.(6)]{fitzsimmons85set}). Next, consider the shift operator $\theta_t$ on $\mathbf{F}$ as $ \theta_t F= (F\cap [t,\infty))-t, $ for $t\ge 0$. Write $ V_I^{(\epsilon)}:=\inf \wt{R}_I^{(\epsilon)}$, which is finite almost surely. Observe that $V_I\topp\epsilon = \inf\{t>0:D_{i,t-}\topp\epsilon = t, \mbox{ for all } i\in I\}$, and hence it is an optional time with respect to $(\mathcal G_t\topp\epsilon)_{t\ge0}$. Note in addition that $V_I^{(\epsilon)}\in \wt{R}_i^{(\epsilon)}$ for all $i\in I$, and that $\theta_{V_I^{(\epsilon)}}\wt{R}_i^{(\epsilon)}$'s are conditionally independent given $\mathcal{G}\topp\epsilon_{V_I^{(\epsilon)}}$ So it follows from the strong regenerative property (\citep[Proposition (1.4)]{fitzsimmons85intersections}) that $ \lefteft(\theta_{V_I^{(\epsilon)}} \wt{R}^{(\epsilon)}_i\rightight)_{i\in I}\overset{d}{=} \lefteft( R_i^{(\epsilon)}\rightight)_{i\in I}. $ Therefore, \[ \lefteft(\int_s^t {\bf 1}dd{x\in \theta_{V_I^{(\epsilon)}} \wt{R}_I^{(\epsilon)}}dx\rightight)_{0<s<t} \mathbb{E}qD \lefteft(\int_s^t {\bf 1}dd{x\in R _I^{(\epsilon)}}dx\rightight)_{0<s<t}. 
\]
Now, examining the construction starting from \eqref{eq:Delta_st}, we see that the relation above leads to \eqref{eq:strongMarkov}. This completes the proof.
\end{proof}
By combining all the lemmas above, it is now straightforward to complete the proof of Proposition \ref{prop:psy}.
\begin{proof}[Proof of Proposition \ref{prop:psy}]
In view of Lemmas \ref{lem:1} and \ref{lem:local time identical}, it suffices to show that
$$
\lim_{\substack{{\mathbb Q}\ni s_\ell\downarrow \max (\vv v_{I_\ell})\\\ell=1,\dots,r}} \lim_{\epsilon \downarrow 0} {\mathbb E} \pp{ \prod_{\ell=1}^r \Delta_{s_\ell, t_\ell}^{(\epsilon)} (I_\ell) } = {\mathbb E} \pp{\prodd \ell 1rL_{I_\ell,t_\ell}^*}.
$$
By the $L^m$ convergence in \eqref{eq:Delta conv},
$$
\lim_{\epsilon \downarrow 0} {\mathbb E} \pp{ \prod_{\ell=1}^r \Delta_{s_\ell, t_\ell}^{(\epsilon)} (I_\ell) } = {\mathbb E} \pp{ \prod_{\ell=1}^r \Delta_{s_\ell, t_\ell}^*(I_\ell) }.
$$
It then remains to show that
\[
\lim_{\substack{{\mathbb Q}\ni s_\ell\downarrow \max (\vv v_{I_\ell})\\\ell=1,\dots,r}}{\mathbb E} \pp{\prodd \ell 1r\Delta^*_{s_\ell,t_\ell}(I_\ell)} = {\mathbb E} \pp{\prodd \ell 1rL_{I_\ell,t_\ell}^*},
\]
for which we have established the pointwise convergence in \eqref{eq:L_t(I) cover}. To upgrade this to convergence in expectation via uniform integrability, we need a uniform upper bound on ${\mathbb E} (\prodd\ell1r \Delta_{s_\ell,t_\ell}^*(I_\ell)^2)$ in terms of $\vv s$, which follows from a reexamination of \eqref{eq:moment epsilon zero}. This completes the proof.
\end{proof}
\section{Stable-regenerative multiple-stable processes}
\label{sec:limit process}
\subsection{Series representations for multiple integrals}\label{sec:mult int}
We review the multilinear series representation of off-diagonal multiple integrals with respect to an infinitely divisible random measure without a Gaussian component. Our main references are Szulga \citep{szulga91multiple} and Samorodnitsky \citep[Chapter 3]{samorodnitsky16stochastic}. Let $(E,\mathcal{E},\mu)$ be a measure space where $\mu$ is $\sigma$-finite and atomless. First we recall the notion of an infinitely divisible random measure without a Gaussian component. Let $M(\cdot)$ be such a random measure with control measure $\mu$. Then its law is determined by
\[
\mathbb{E} e^{i\theta M(A)}=\exp\left(-\mu(A) \int_{{\mathbb R} } (1-\cos(\theta y)) \rho(dy)\right),~ A\in \mathcal{E}, ~\mu(A)<\infty,~ \theta\in {\mathbb R},
\]
where $\rho$ is a \emph{symmetric} L\'evy measure satisfying $\int_{{\mathbb R}} (1\wedge y^2) \rho(dy)\in (0,\infty)$ \citep[Section 3.2]{samorodnitsky16stochastic}. We shall later need a generalized inverse of the tail L\'evy measure, defined as
\[
\rho^{\leftarrow}(y):=\inf\{x> 0: \rho(x,\infty)\le y/2\},\quad y>0.
\]
A special case of particular interest is the symmetric $\alpha$-stable (S$\alpha$S) random measure on $(E,\mathcal E)$, denoted by $S_\alpha$ ($\alpha\in(0,2)$), determined by $\mathbb{E} e^{iu S_\alpha(A)}= \exp(-|u|^\alpha \mu(A))$ for all $A\in \mathcal{E}$, $\mu(A)<\infty$. In this case, the L\'evy measure is
\begin{equation}\label{eq:C_alpha}
\rho(dy)= \frac{\alpha C_\alpha}{2} |y|^{-\alpha-1}1_{\{y\neq 0\}}dy \quad\mbox{ with }\quad C_\alpha=\left(\int_0^\infty \sin(y) y^{-\alpha} dy\right)^{-1},
\end{equation}
and $\rho^{\leftarrow}(y)=C_\alpha^{1/\alpha} y^{-1/\alpha}$, $y>0$.
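The last claim is a quick check from \eqref{eq:C_alpha}: the tail of the L\'evy measure is $\rho((x,\infty))=\frac{C_\alpha}{2}x^{-\alpha}$, so
\[
\rho^{\leftarrow}(y)=\inf\left\{x>0:\ \frac{C_\alpha}{2}x^{-\alpha}\le \frac y2\right\}
=\left(\frac{C_\alpha}{y}\right)^{1/\alpha}
=C_\alpha^{1/\alpha}y^{-1/\alpha}.
\]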
Throughout we shall work with the following assumption for $\rightho$: \begin{equation}\lefteftarrowbel{eq:rho} \rightho((x,\infty))\in \mathrm{RV}_\infty(-\alpha), \alpha\in(0,2) \text{ and } \rightho((x,\infty)) = O(x^{-\alpha_0}),\ \alpha_0<2 \mbox{ as } x\downarrow0, \end{equation} where $\mathrm{RV}_\infty(-\alpha)$ denotes the class of functions regularly varying with index $-\alpha$ at infinity (\cite{bingham87regular}). Now we introduce the series representations for multiple integrals with respect to $M$. When working with series representations, we shall always treat integrands supported within a finite-measure subspace of $E^p$. In particular, fix an index set ${\mathbb T}$ and suppose $(f_t)_{t\in {\mathbb T}}$ is a family of product measurable symmetric functions from $E^p$ to ${\mathbb R}$, such that $\cup_{t\in {\mathbb T}} \mathrm{supp}(f_t) \subset B^p$ for some $B\in\mathcal E$ with $\mu(B)\in(0,\infty)$, where $\mathrm{supp}(f_t):=\{x\in E^p:~f_t(x)\neq 0\}$. Now let $({\rightm{Var}}epsilon_i)_{i\in{\mathbb N}}$ be i.i.d.~Rademacher random variables, $(\Gamma_i)_{i\in{\mathbb N}}$ be consecutive arrival times of a standard Poisson process, and $( U_i)_{i\in{\mathbb N}}$ be i.i.d.~random elements taking values in $E$ with distribution $\mu(\cdot \cap B)/\mu(B)$, all assumed to be independent. Then for every $A\in\mathcal{E}$ with $A\subset B$, the series $ M_0(A):=\sum_{i=1}^\infty {\rightm{Var}}epsilon_i \rightho^{\lefteftarrow}(\Gamma_i/\mu(B)) \delta_{U_i}(A) $ converges a.s.\ and $M_0\mathbb{E}qD M$ (\citep[Theorem 3.4.3]{samorodnitsky16stochastic}, see also \citep{rosinski99product}). Without loss of generality we shall make the identification $M=M_0$. Then the (off-diagonal) multiple integral of $f_t$ with respect to $M$ can be defined as \begin{align} \lefteftarrowbel{eq:series rep} &\lefteft(\int_{B^p}' f_t(x_1,\leftdots,x_p) M(dx_1)\cdots M(dx_p)\rightight)_{t\in {\mathbb T}}\\ \notag &=\lefteft( p!\sum_{ I \in \mathcal{D}_p} \lefteft(\prod_{i\in I}{\rightm{Var}}epsilon_i \rightho^{\lefteftarrow}(\Gamma_i/\mu(B))\rightight) f_t(\vv U_I)\rightight)_{t\in {\mathbb T}}, \end{align} where \[ \vv U_{I}\equiv(U_{i_1},\leftdots,U_{i_p}) \mbox{ for } I =(i_1,\dots,i_p)\in \mathcal D_p, \] as long as the multilinear series in \eqref{eq:series rep} converges a.s. It is known that the convergence holds if and only if \begin{equation}\lefteftarrowbel{eq:series finite} \sum_{ I\in\mathcal{D}_p } \prod_{i\in I} \rightho^{\lefteftarrow}(\Gamma_i/\mu(B)))^2 f_t(\vv U_I)^2<\infty \quad \mbox{ a.s.,} \end{equation} and in this case the convergence also holds unconditionally, namely, regardless of any deterministic permutation of its entries (\citep{kwapien92random} and \citep[Remark 1.5]{samorodnitsky89asymptotic}). On the other hand, a non-symmetric integrand, say $g$, can always be symmetrized without affecting the resulting multiple stochastic integral, by considering $ (p!)^{-1}\sum_\sigma g(x_{\sigma(1)},\dots,x_{\sigma(p)})$, summing over all permutations of $\{1,\dots,p\}$. The following lemma provides a condition to verify the convergence under \eqref{eq:rho}. \begin{Lem}\lefteftarrowbel{lem:A1} Let $({\rightm{Var}}epsilon_i)_{i\in{\mathbb N}}$ and $(\Gamma_i)_{i\in{\mathbb N}}$ be as above and let $f:E^p\rightightarrow {\mathbb R}$ be a measurable symmetric function. 
For every $p\in{\mathbb N}, c>0$, \[ \sum_{ I\in\mathcal{D}_p } \lefteft(\prod_{i\in I}{\rightm{Var}}epsilon_i \rightho^\lefteftarrow(\Gamma_i/c)\rightight) f(\vv U_I) \] converges almost surely and unconditionally, if ${\mathbb E} f(\vv U_I)^2<\infty$. \end{Lem} \begin{proof} It suffices to prove for $c=1$, and in this case the convergence criterion \eqref{eq:series finite} becomes \begin{equation}\lefteftarrowbel{eq:series finite lemma} \sum_{ I\in\mathcal{D}_p } \prod_{i\in I} \rightho^\lefteftarrow(\Gamma_i)^2 f(\vv U_I)^2<\infty \quad \mbox{ a.s.} \end{equation} Define \begin{align}\lefteftarrowbel{eq:D le p M} \mathcal{D}_{\lefte p}(M) & :=\{I\in \mathcal D_k: \ 0\lefte k\lefte p,\ \max I\lefte M\},\\ \lefteftarrowbel{eq:H(k,M)} \mathcal{H}(k,M) & :=\{I\in \mathcal{D}_k:\ \min I> M\},~ k=0,\dots,p, \end{align} for $M\in{\mathbb N}$, to be chosen later, where $\mathcal D_k$ is as in \eqref{eq:D_p} with $\mathcal D_0=\emptyset$. Then the series in (\rightef{eq:series finite lemma}) is equal to \begin{align}\lefteftarrowbel{eq:series low+high} \sum_{I_1\in \mathcal{D}_{\lefte p}(M)} \lefteft( \prod_{i\in I_1} \rightho^\lefteftarrow(\Gamma_i)^2\rightight) \lefteft[ \sum_{I_2\in \mathcal{H}(p-|I_1|,M)}\lefteft( \prod_{i\in I_2} \rightho^\lefteftarrow(\Gamma_i)^2 \rightight)f(\vv U_{I_1\cup I_2})^2\rightight]. \end{align} Note that $\mathcal{D}_{\lefte p}(M)$ is finite. Hence to prove the almost-sure convergence of the non-negative series, it suffices to show that for each $I_1\in \mathcal{D}_{\lefte p}(M)$, the term in the bracket of (\rightef{eq:series low+high}) is finite almost surely. This follows, in view of \eqref{eq:series finite}, if we can show that \[ \sum_{I_2\in \mathcal{H}(k,M)} \mathbb{E}\lefteft(\prod_{i\in I_2} \rightho^\lefteftarrow(\Gamma_i)^2\rightight) \mathbb{E} f(\vv U_{I_1\cup I_2})^2 <\infty,~k=1,\leftdots,p. \] From assumption \eqref{eq:rho}, it follows that $\rightho^\lefteftarrow(x)\in\mathrm{RV}_0(-1/\alpha)$, where the latter denotes the class of functions regularly varying at zero, and $\rightho^\lefteftarrow(x)= O(x^{-1/\alpha_0})$ as $x\to\infty$. By Potter's bound and the fact that $\rightho^\lefteftarrow$ is monotone, it then follows that there exists $C>0$ and $\epsilon>0$ such that \[ \rightho^\lefteftarrow(x)\lefte C\pp{x^{-1/\alpha_0}+x^{-(1/\alpha)-\epsilon}}, \mbox{ for all } x>0. \] The following estimate can be obtained via H\"older's inequality as in \citep[Eq.(3.2)]{samorodnitsky89asymptotic}: given $\delta>0$, there exists a constant $C>0$, such that \begin{equation}\lefteftarrowbel{eq:gamma estimate} \mathbb{E} \pp{\prod_{i\in I_2}\Gamma_i^{-\delta} }\lefte C \prod_{i\in I_2}i^{-\delta} \quad \mbox{ for all } I_2 = (i_1,\dots,i_k)\in \mathcal D_k \mbox{ with } i_1>\delta k. \end{equation} It then follows that for all $\delta_1,\delta_2>0$, \[ {\mathbb E}\pp{\prod_{i\in I_2}(\Gamma_i^{-\delta_1}+\Gamma_i^{-\delta_2})}\lefte C\prod_{i\in I_2} i^{-(\delta_1\wedge \delta_2)} ~ \mbox{ for all $I_2\in \mathcal D_k$ s.t.} \min I_2>(\delta_1\vee \delta_2)k. 
\] Therefore, taking $M>2p\max\{1/\alpha_0,(1/\alpha +\epsilon)\}$ and $\alpha^*:=((1/\alpha)+\epsilon)\wedge1/\alpha_0>1/2$ we have, \begin{align*} \sum_{I\in \mathcal{H}(k,M)} \mathbb{E}\pp{ \prod_{i\in I} \rightho^\lefteftarrow(\Gamma_i)^2} &\lefte C\sum_{I\in \mathcal H(k,M)}\prod_{i\in I } i^{-2\alpha^*} \\ & \lefte C\sum_{I\in \mathcal D_k}\prod_{i\in I}i^{-2\alpha^*} \lefte C\lefteft(\sum_{i=1}^\infty i^{-2\alpha^*} \rightight)^k<\infty.\nonumber \end{align*} \end{proof} \subsection{Stable-regenerative multiple-stable process}\lefteftarrowbel{sec:limit process def} Recall our assumption on $p,\beta$ and $\beta_p$ in \eqref{eq:beta}, and the local-time functional $L_t$ in \eqref{eq:L_t}. We introduce the {\em stable-regenerative multiple-stable process of multiplicity $p$}, denoted throughout by ${\mathbb Z}ab \equiv ({\mathbb Z}ab(t))_{t\ge 0}$, $\alpha\in(0,2)$, via the multiple integrals: \begin{equation}\lefteftarrowbel{eq:Zt} Z_{\alpha,\beta,p}(t):=\int_{({\mathbf F} \times [0,\infty))^p }' L_t\pp{{\bf i}gcap_{i=1}^p (R_i+v_i)} S_{\alpha,\beta}(d R_1,dv_1)\cdots S_{\alpha,\beta}(dR_p,dv_p),\ t\ge 0. \end{equation} where $S_{\alpha,\beta}(\cdot)$ is a S$\alpha$S random measure on $ {\mathbf F} \times [0,\infty) $ with control measure $P_\beta\times (1-\beta) v^{-\beta} dv$. {Note that when $p=1$, the process ${\mathbb Z}ab$ is represented as a stable integral, and in particular, is the same process known as the {\em $\beta$-Mittag--Leffler fractional S$\alpha$S motion} introduced in \citep{owada15functional}. The well-definedness of the multiple integral above when $t\in [0,1]$ directly follows from Lemma \rightef{lem:A1} and Theorem \rightef{thm:1}, and can be similarly verified for $t>1$ by a proper scaling. More specifically, if $t\in[0,1]$, using the fact that $L_t$ vanishes when any $v_i>1$ in \eqref{eq:Zt}, the process $ {\mathbb Z}ab(t) $ can be represented in the form of \eqref{eq:Zt}, with ${\mathbf F}\times[0,\infty)$ replaced by ${\mathbf F}\times[0,1]$, and the control measure replaced by a probability measure $P_\beta\times (1-\beta) v^{-\beta}{\bf 1}dd{v\in[0,1]}dv$. Then, as in \eqref{eq:series rep}, one can obtain the series representation \begin{equation}\lefteftarrowbel{eq:Z_t [0,1]} \lefteft({\mathbb Z}ab(t)\rightight)_{t\in [0,1]} \overset{f.d.d.}{=}\lefteft(p! C_\alpha^{p/\alpha} \sum_{I\in \mathcal{D}_p}\lefteft(\prod_{i\in I} {\rightm{Var}}epsilon_i \Gamma_i^{-1/\alpha } \rightight) L_t\lefteft({\bf i}gcap_{i\in I} (R_i+V_i)\rightight)\rightight)_{t\in [0,1]}, \end{equation} where $f.d.d.$ stands for finite-dimensional distributions, $C_\alpha$ is as in \eqref{eq:C_alpha}, $({\rightm{Var}}epsilon_i)_{i\in{\mathbb N}}, (\Gamma_i)_{i\in{\mathbb N}}$ are as in Section \rightef{sec:mult int}, $(R_i)_{i\in{\mathbb N}}$ are i.i.d.~$\beta$-stable regenerative sets, $(V_i)_{i\in{\mathbb N}}$ are i.i.d.~random variables with law \eqref{eq:V}, and the four sequences are independent from each other. As a direct consequence of the functional limit theorem proved in Theorem \rightef{Thm:CLT} below and Lamperti's theorem \citep{lamperti62semi}, the process ${\mathbb Z}ab$ turns out to be self-similar with Hurst index \[ H= \beta_p+\frac{1-\beta_p}\alpha =p\pp{\frac1\alpha-1}(1-\beta)+1 \in (1/2,\infty), \] that is, \[ ({\mathbb Z}ab(ct))_{t\ge 0} \eqd c^H( {\mathbb Z}ab(t))_{t\ge 0} \mbox{ for all } c>0, \] and have stationary increments. In view of self-similarity, we shall only work with $({\mathbb Z}ab(t))_{t\in[0,1]}$ onward. 
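We note in passing that for $p=1$ the Hurst index reduces to $H=\beta+(1-\beta)/\alpha$, matching the exponent of the normalizing sequence $d_n$ in \eqref{eq:OS}, and that for general $p$ it coincides with the regular-variation exponent of the normalization $c_n$ in \eqref{eq:c_n} below, in agreement with Lamperti's theorem.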
We conclude this section with a result on the path regularity of $Z_{\alpha,\beta,p}$.
\begin{Pro}
The process $Z_{\alpha,\beta,p}$ admits a continuous version whose path is locally $\delta$-H\"older continuous a.s.\ for any $\delta\in (0,\beta_p)$.
\end{Pro}
\begin{proof}
We restrict $t\in [0,1]$ without loss of generality and work with the series representation \eqref{eq:Z_t [0,1]}. In view of independence, assume for convenience that the underlying probability space is the product space of $(\Omega_i,\mathcal F_i, P_i)$, $i=1,2$, where $(\varepsilon_i)_{i\in{\mathbb N}}$ depends only on $\omega_1\in \Omega_1$ and $(\Gamma_i, R_i, V_i)_{i\in{\mathbb N}}$ depends only on $\omega_2\in \Omega_2$. The probability measures $P_1$ and $P_2$ are such that these random variables have the desired laws, and $P$ is the product measure of $P_1$ and $P_2$ on the product space. We also write $\mathbb{E}_i$ for the integration with respect to $P_i$ over $\Omega_i$, $i=1,2$. We shall work with the series representation in \eqref{eq:Z_t [0,1]}, where without loss of generality we replace $\overset{f.d.d.}{=}$ with $=$. Then as before, write $L_{I,t} = L_t(\bigcap_{i\in I} (R_i+V_i))$. Since $L_{I,t}(\omega_1,\omega_2)$ is a constant function of $\omega_1$ with $\omega_2,I,t$ fixed, we write $L_{I,t}(\omega_1,\omega_2) = L_{I,t}(\omega_2)$ for the sake of simplicity. In addition, we shall identify $L_{I,t}$ with its continuous version, which exists in view of Corollary \ref{Cor:incre moment} and Kolmogorov's criterion. Using a generalized Khinchine inequality for multilinear forms in Rademacher random variables (\citep{krakowiak86random}, see also \citep[Theorem 1.3 (ii)]{samorodnitsky89asymptotic}), for any $r>1$ and some constant $C>0$, we have for $0\le s<t\le 1$ that, writing $\vv\omega = (\omega_1,\omega_2)$,
\[
\mathbb{E}_1 |Z_{\alpha,\beta,p}(t)(\vv\omega)-Z_{\alpha,\beta,p}(s)(\vv\omega)|^r \le CY_{s,t}(\omega_2)
\]
with
\[
Y_{s,t}(\omega_2) := \pp{\sum_{I\in\mathcal D_p} \left(\prod_{i\in I} \Gamma_i(\omega_2)^{-2/\alpha } \right) \left|L_{I,t}(\omega_2)-L_{I,s}(\omega_2)\right|^2}^{r/2}, \quad 0\le s<t\le 1.
\]
The two-parameter process $(Y_{s,t})_{0\le s<t\le 1}$ is finite $P_2$-almost surely in view of Lemma \ref{lem:A1} and Corollary \ref{Cor:incre moment} (note that \eqref{eq:rho} is satisfied with $\alpha =\alpha_0$ in this case). Since $L_{I,t}$ is a shifted $\beta_p$-Mittag--Leffler process, in view of \citep[Lemma 3.4]{owada15functional}, the random variable
\[
K_I(\omega_2):=\sup_{(s,t)\in D} \frac{|L_{I,t}(\omega_2)-L_{I,s}(\omega_2)|}{(t-s)^{\beta_p}|\log(t-s)|^{1-\beta_p}}
\]
is $P_2$-a.s.\ finite and has finite moments of all orders, where $D=\{(s,t): \ 0\le s<t\le 1,\ t-s<1/2 \}$. Hence for all $(s,t)\in D$, we have
\[
\mathbb{E}_1 |Z_{\alpha,\beta,p}(t)(\vv\omega)-Z_{\alpha,\beta,p}(s)(\vv\omega)|^r\le C(t-s)^{r\beta_p}|\log(t-s)|^{r(1-\beta_p)} M(\omega_2),
\]
where
\[
M(\omega_2) = \pp{\sum_{I\in\mathcal D_p} \left(\prod_{i\in I} \Gamma_i(\omega_2)^{-2/\alpha } \right) K_I(\omega_2)^2}^{r/2},
\]
which is finite $P_2$-a.s.: this is a special case of \eqref{eq:series finite lemma}, addressed in the proof of Lemma \ref{lem:A1}. Take $r$ large enough so that $r\beta_p>1$. Then by Kolmogorov's criterion, for any $\delta\in(0,\beta_p)$ and $P_2$-a.e.\ $\omega_2\in \Omega_2$, $Z_{\alpha,\beta,p}(t)(\cdot,\omega_2)$ admits a version $Z_{\alpha,\beta,p}^*(t)(\cdot,\omega_2)$ under $P_1$ whose path is locally $\delta$-H\"older continuous $P_1$-a.s.
By Fubini, $Z_{\alpha,\beta,p}^*(t)(\vv \omega)$ is also a version of $Z_{\alpha,\beta,p}(t)(\vv\omega)$ under $P_1\times P_2$ which has a locally $\delta$-H\"older continuous path $(P_1\times P_2)$-a.s.
\end{proof}
\section{A functional non-central limit theorem}\label{sec:nCLT}
\subsection{Infinite ergodic theory and Krickeberg's setup}\label{sec:ergodic}
We shall introduce some concepts from infinite ergodic theory that are necessary for the formulation of our results. Our main reference is Aaronson \citep{aaronson97introduction}. Let $(E,\mathcal{E},\mu)$ be a measure space where $\mu$ is a $\sigma$-finite measure satisfying $\mu(E)=\infty$. Suppose that $T: E\rightarrow E$ is a measure-preserving transform, namely, $T$ is measurable and $\mu(T^{-1}B)=\mu(B)$ for all $B\in \mathcal{E}$. Let $\widehat T$ denote the {\em dual} (a.k.a.~Perron--Frobenius, or transfer) operator of $T$, defined by
\[
\widehat{T}: L^1(\mu)\rightarrow L^1(\mu),\quad \widehat{T} g := \frac{d\mu_g \circ T^{-1} }{d\mu},
\]
where $\mu_g(B)=\int_B g\, d\mu$, $B\in \mathcal{E}$. It is also characterized by the relation
\begin{equation}\label{eq:dual}
\int_E (\widehat{T} g) \cdot h\, d\mu = \int_E g \cdot (h\circ T)\, d\mu, \quad \mbox{ for all } g\in L^1(\mu),~h\in L^\infty(\mu).
\end{equation}
We always assume that $T$ is \emph{ergodic}, namely, $T^{-1} B= B$ mod $\mu$ implies either $\mu(B)=0$ or $\mu(B^c)=0$, and that $T$ is \emph{conservative}, namely, for any $B\in \mathcal{E}$ with $\mu(B)>0$, we have $ \sum_{k=1}^\infty 1_B(T^k x)=\infty$ for a.e.~$x\in B$. It is known that $T$ is ergodic and conservative if and only if for any $B\in \mathcal{E}$ with $\mu(B)>0$, we have
\[
\sum_{k=1}^\infty 1_{B}(T^k x)=\infty \quad \text{ for a.e. }x\in E,
\]
or equivalently
\begin{equation}\label{eq:erg and cons dual}
\sum_{k=1}^\infty \widehat{T}^k g =\infty \quad \mbox{ a.e. for all } g\in L^1(\mu),~ g\ge 0 \mbox{ a.e.~and } \mu(g)>0.
\end{equation}
We shall, however, need a more quantitative description of the ergodic property of $T$, which provides information about the rate of divergence in \eqref{eq:erg and cons dual}. The following assumption is formulated in the spirit of Krickeberg \citep{krickeberg67strong} and Kesseb\"ohmer and Slassi \citep{kessebohmer07limit}. We shall use the following convention throughout: \emph{any function defined on a subspace (e.g.~$A$) will be extended to the full space (e.g.~$E$) by assigning it the value zero outside the subspace, whenever necessary}.
\begin{assump}\label{assump}
There exists $A\in\mathcal{E}$ with $\mu(A)\in (0,\infty)$ such that $A$ is a Polish space with $\mathcal{E}_A:=\mathcal{E}\cap A$ being its Borel $\sigma$-field. In addition, there exists a positive rate sequence $ (b_n)_{n\in{\mathbb N}}$ satisfying
\begin{equation}\label{eq:bn_RV}
(b_n)\in \mathrm{RV}_{\infty}(1-\beta),\quad \beta\in (0,1),
\end{equation}
where $\mathrm{RV}_\infty(1-\beta)$ denotes the class of sequences regularly varying with index $1-\beta$ at infinity (\cite{bingham87regular}), so that
\begin{equation}\label{eq:uniform ret}
\lim_{n\to\infty} b_n \widehat{T}^n g(x)= \mu(g) \quad \text{uniformly for a.e. }x\in A
\end{equation}
for all bounded and $\mu$-a.e.\ continuous $g$ on $A$.
\end{assump}
\begin{Rem}
The relation \eqref{eq:uniform ret} was first explicitly formulated in \cite{kessebohmer07limit} and termed the \emph{uniform return} condition.
Due to the existence of weakly wandering sets (\cite{hajian64weakly}), the relation \eqref{eq:uniform ret} can fail even for a bounded integrable function $g$ supported within $A$. To be able to treat a large family of integrands $f$ in Theorem \rightef{Thm:CLT} below, we adopt an idea of \cite{krickeberg67strong}: we impose a topological structure on the subspace $A$, and retrain our attention to bounded and a.e.\ continuous functions supported within $A$. It is worth noting the resemblance of this approach to the theory of weak convergence of measures. See Section \rightef{sec:Eg} below for examples satisfying Assumption \rightef{assump}. \end{Rem} \begin{Rem} Assumption \rightef{assump} has an alternative characterization in Proposition \rightef{Pro:unif ret} below. Typically, the whole space $E$ is Polish as well. Nevertheless, we stress that when a topological concept such as continuity, interior or boundary is mentioned, we solely refer to the Polish topology on the subspace $A$ (or $A^p$ in the context of product space). \end{Rem} Additionally, for $A$ in Assumption \rightef{assump}, and $x\in E$, we define the \emph{first entrance time} \begin{equation}\lefteftarrowbel{eq:phi} {\rightm{Var}}phi(x)={\rightm{Var}}phi_A(x)=\inf\{k\ge 1:\ T^k x\in A \}, \end{equation} and the \emph{wandering rate} sequence \begin{equation}\lefteftarrowbel{eq:w_n} w_n=\mu({\rightm{Var}}phi \lefte n)=\mu\lefteft({\bf i}gcup_{k=1}^n T^{-k} A\rightight), \ \ n\in{\mathbb N}, \end{equation} which measures the amount of $E$ which visits $A$ up to time $n$. Kesseb\"ohmer and Slassi \citep[Proposition 3.1]{kessebohmer07limit} proved that under Assumption \rightef{assump}, \begin{equation}\lefteftarrowbel{eq:b_n w_n} b_n\sim \Gamma(\beta)\Gamma(2-\beta) w_n \end{equation} as $n\rightightarrow\infty$. In particular, $w_n\in \mathrm{RV}_{\infty}(1-\beta)$ (note that their $\beta$ corresponds to our $1-\beta$, and their $w_n$ corresponds to our $w_{n+1}$). \subsection{A non-central limit theorem} Let $(E,\mathcal E,\mu)$ be $\sigma$-finite infinite measure space and $T$ a measure-preserving ergodic and conservative transform. We recall our model, a stationary sequence $(X_k)_{n\in{\mathbb N}}$ in \eqref{eq:2}, where $M$ is the infinitely divisible random measure on $(E,\mathcal E)$ with symmetric L\'evy measure $\rightho$ and control measure $\mu$ as in Section \rightef{sec:mult int}. We are now ready to state the main result of the paper. Below $\mu^{\otimes p}$ denotes the $p$-product measure of $\mu$ on the product $\sigma$-field $\mathcal{E}^p$. \begin{Thm}\lefteftarrowbel{Thm:CLT} Assume $\beta, p$ and $\beta_p$ are as in \eqref{eq:beta}. For $(X_k)_{k\in{\mathbb N}}$ introduced in \eqref{eq:2}, suppose the following assumptions hold: \begin{enumerate}[(a)] \item The L\'evy measure $\rightho$ satisfies \eqref{eq:rho}. \item There exists $A\in \mathcal{E}$ satisfying Assumption \rightef{assump}, and $f$ is a bounded $\mu^{\otimes p}$-a.e.\ continuous function on $A^p$. \end{enumerate} Then the stationary process $(X_k)_{k\in{\mathbb N}}$ in \eqref{eq:2} is well-defined. Furthermore, \begin{equation}\lefteftarrowbel{eq:nclt} \pp{ \frac{1}{ c_n} \sum_{k=1}^{\leftfloor nt \rightfloor} X_k}_{t\in [0,1]}{\mathbb R}ightarrow \Gamma(\beta_p) C_\alpha^{-p/\alpha} \mu^{\otimes p}(f)\cdot \pp{ {\mathbb Z}ab(t)}_{t\in[0,1]}, \end{equation} in $D([0,1])$ with respect to the uniform metric as $n\to\infty$, where ${\mathbb Z}ab(t)$ is the stable-regenerative multiple-stable process defined in \eqref{eq:Zt}. 
Moreover, \begin{equation} c_n = n \cdot \pp{\frac{\rightho^{ \lefteftarrow}(1/w_n)}{ b_n}}^p \in \mathrm{RV}_\infty\pp{\beta_p+\frac{1-\beta_p}\alpha},\quad \lefteftarrowbel{eq:c_n} \end{equation} where $(w_n)$ is the wandering rate associated to $A$ in \eqref{eq:w_n} and $C_\alpha$ is as in \eqref{eq:C_alpha}. \end{Thm} The proof of Theorem \rightef{Thm:CLT} is carried out in Section \rightef{sec:proof}. \begin{Rem}\lefteftarrowbel{Rem:diff} Compared to the result for $p=1$ established in \citep{owada15functional}, we assume the same assumption on $\rightho$, but strictly stronger assumptions on the dynamical system and $f$. Indeed, weaker notions \emph{Darling--Kac set} and \emph{uniform set} were adopted in \citep{owada15functional} instead of \eqref{eq:uniform ret}. For example, a set $A$ is a {\em Darling--Kac set} if for some positive sequence $ (a_n)_{n\in{\mathbb N}}$ tending to $\infty$, \begin{equation}\lefteftarrowbel{eq:uniform} \frac{1}{a_n}\sum_{k=1}^n\widehat{T}^k 1_A \to \mu(A)\quad \mbox{ uniformly a.e.~on $A$}, \end{equation} which is a Ces\'aro average version of \eqref{eq:uniform ret} when $g=1_A$. See \citep{kessebohmer07limit} for more discussions on the difference between uniform sets and uniformly returning sets. Also if $p = 1$, topologizing $A$ as a Polish space is unnecessary since one can apply the powerful Hopf's ratio ergodic theorem in order to treat a general $f$ (see the proof of Theorem 6.1 of \cite{owada15functional}). The reason that we enforce a stronger assumption here is that for multiple integrals with $p\ge 2$, it is no longer clear how to write the statistic of interest in terms of a partial sum to which we can apply \eqref{eq:uniform} (compare e.g.~\eqref{eq:DCT1} below with \citep[Eq.\ (6.10)]{owada15functional}). It is unclear to us whether Theorem \rightef{Thm:CLT} continues to hold if Assumption \rightef{assump} is relaxed to the Ces\'aro average version as in \eqref{eq:uniform} or even to those in \cite{owada15functional}. Nevertheless, Assumption \rightef{assump} allows us to treat a sufficiently rich class of dynamical systems and functions $f$ as exemplified in Section \rightef{sec:Eg} below. \end{Rem} \subsection{Examples}\lefteftarrowbel{sec:Eg} We shall provide two classes of examples regarding the assumptions involved in the main result Theorem \rightef{Thm:CLT}, one about transforms on the interval $[0,1]$, and the other about Markov chains. \begin{Eg} The following example can be found in Thaler \citep{thaler00asymptotics}. Let $(E,\mathcal{E})=([0,1],\mathcal{B}[0,1])$. Define a measure by \begin{equation*} \mu_q(dx)= \lefteft(\frac{1}{x^q}+\frac{1}{(1+x)^q}\rightight) 1_{(0,1]}(x)dx, \quad q>1. \end{equation*} Define the transformation $T=T_q: E\rightightarrow E$ by \[ T_q(x):=x\lefteft(1+\lefteft(\frac{x}{1+x}\rightight)^{q-1}-x^{q-1}\rightight)^{1/(1-q)} ~~(\text{mod }1). \] The transform $T_q$ has an indifferent fixed point at $x=0$, namely, $T_q(0)=0$ and $T_q'(0+)=1$, and the measure $\mu_q$ is infinite on any neighborhood of $x=0$. Furthermore, $T_q$ can be verified to be $\mu_q$-preserving, conservative and ergodic. If we choose $A=[\epsilon,1]$, $\epsilon\in (0,1)$, then according to Thaler \citep{thaler00asymptotics}, any Riemann integrable function on $A$ satisfies \eqref{eq:uniform ret} and \eqref{eq:bn_RV} with $\beta=1/q$. In Theorem \rightef{Thm:CLT}, we can take the $p$-variate function $f$ to be any Riemman integrable function with support in $A^p$. 
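For illustration purposes only, this map is easy to simulate: the following minimal Python sketch (the choices $q=2$, the reference set $[1/2,1]$ and the number of iterates are arbitrary, and the computation is not used in any proof) exhibits the long excursions of a typical orbit near the indifferent fixed point $x=0$, reflecting the infinite mass of $\mu_q$ near the origin.
\begin{verbatim}
import numpy as np

q = 2.0   # any q > 1; arbitrary choice for this illustration

def T(x):
    # Thaler's map T_q on (0,1]; indifferent fixed point at x = 0
    return (x * (1.0 + (x / (1.0 + x))**(q - 1.0)
                 - x**(q - 1.0))**(1.0 / (1.0 - q))) % 1.0

def mu_density(x):
    # density of the infinite invariant measure mu_q on (0,1]
    return x**(-q) + (1.0 + x)**(-q)

x, hits, n_iter = 0.7, 0, 10**5
for _ in range(n_iter):
    x = T(x)
    hits += (x >= 0.5)   # visits to [1/2, 1]
print(hits / n_iter)     # this fraction decays (slowly) to 0 as n_iter grows
\end{verbatim}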
In fact, the example above belongs to the so-called AFN-systems, a well-known class of interval maps possessing indifferent fixed points and an infinite invariant measure. See Zweim\"uler \citep{zweimuller98ergodic,zweimuller00ergodic} for the definitions. Recently for a large class of AFN-systems, Melbourne and Terhesiu \citep[Theorem 1.1]{melbourne12operator} and Gou\"ezel \citep{gouezel11correlation} established the uniform return relation (\rightef{eq:uniform ret}) with \eqref{eq:bn_RV} for Riemann integrable $g$ on $A\subset [0,1]$ where $A$ is a union of closed intervals which are away from the indifferent fixed points of $T$. \end{Eg} We state a primitive characterization of Assumption \rightef{assump} which facilitates the discussion of the next example. \begin{Pro}\lefteftarrowbel{Pro:unif ret} Let $(A,\mathcal{E}_A)$ be as in Assumption \rightef{assump}. Assumption \rightef{assump} holds if and only if there exists a collection $\mathcal{C}\subset\mathcal{E}_A$ with the following properties: \begin{enumerate}[(a)] \item \lefteftarrowbel{a}$\mathcal{C}$ is a $\pi$-system containing $A$; \item $\mathcal{C}$ generates the Polish topology of $A$ in the sense that for any open $G\subset A$ and any $x\in G$, there exists $ U\in \mathcal{C}$ such that $x\in \mathring{U} \subset U\subset G$; \item \lefteftarrowbel{c} Any set in $\mathcal{C}$ is $\mu$-continuous; \item \lefteftarrowbel{d} There exists a positive sequence $b_n \in \text{RV}_\infty(1-\beta)$, $0 < \beta < 1$, such that for any $B\in \mathcal{C}$, \begin{equation}\lefteftarrowbel{eq:unif ret C} b_n \widehat{T}^n 1_B(x) \rightightarrow \mu(B) \quad \text{uniformly for a.e. }x\in A. \end{equation} \end{enumerate} \end{Pro} The proof of the proposition can be found in Section \rightef{sec:approx} below. \begin{Eg} Let $S$ be a countably infinite state space. Consider an aperiodic irreducible and null-recurrent Markov chain $(Y_k)_{k\ge0}$ on $S$, which has $n$-step transition probabilities $(p^{(n)}(i,j))_{i,j\in S}$ and an invariant measure $\pi$ on $S$ which satisfies $\pi_i>0$ for any $i\in S$. Fix a state $o\in S$ and assume without loss of generality a normalization condition: \[ \pi_o=1. \] Consider the path space $E=\{x=(x(0),x(1),x(2),\leftdots):~ x(k)\in S \}$ and let $\mathcal{E}$ be the cylindrical $\sigma$-field. Then one can define a $\sigma$-finite infinite measure $\mu$ on $(E,\mathcal{E})$ as \[ \mu(\cdot )=\sum_{i\in S}\pi_i P^i(\cdot), \] where $P^i(\cdot)$ denotes the law $(Y_k)_{k\ge0}$ starting at state $i\in S$ at time $k=0$. Consider the measure preserving map of the left-shift \[ T:E\rightightarrow E , ~T(x(0),x(1),x(2),\leftdots)=(x(1),x(2),\leftdots).\] Due to the assumptions on the chain, the map $T$ is ergodic and conservative \citep{harris53ergodic}, and each $P^i$ can be verified to be atomless and thus so is $\mu$. Now let $A=\{x=(x(0),x(1),\leftdots)\in E: x(0)=o\}$. Consider the discrete topology on $S$ induced by the metric $d(i,j)=1_{\{i\neq j\}}$, $i,j\in S$. Then the product space $A$ is known to be Polish with Borel $\sigma$-field $\mathcal{E}_A:=\mathcal{E}\cap A$, and a topological basis of $A$ is formed by \[ \mathcal{C}={\bf i}g\{\{x\in E: ~x(0)=o,\ x(1)=s_1, \leftdots,\ x(m)=s_m\}, \ m\in {\mathbb N},\ s_i\in S{\bf i}g\}\cup \{\emptyset,\ A\}. \] See e.g. \cite{moschovakis09descriptive}, Section 1A. Note that every set in $\mathcal{C}$ is both open and closed, so the boundary of each is empty. Therefore conditions \eqref{a}--\eqref{c} in Proposition \rightef{Pro:unif ret} hold. 
By \citep[the last line of page 156]{aaronson97introduction}, if $B=\{x\in A: ~x(1)=s_1,\leftdots,\ x(m)=s_m\}\in \mathcal{C}$, we have for $x=(o,x(1),x(2),\leftdots)\in A$ and $n>m$ that \begin{align*} (\widehat{T}^n 1_B)(x) = p(o ,s_1)\cdots p(s_{m-1},s_m) p^{(n-m)}(s_m,o)=\mu(B) p^{(n-m)}(s_m,o). \end{align*} We claim that if we assume \begin{equation}\lefteftarrowbel{eq:RV ret prob} p^{(n)}(o,o) \in \mathrm{RV}_\infty(\beta-1), \end{equation} then condition \eqref{d} of Proposition \rightef{Pro:unif ret} holds with $b_n\sim 1/p^{(n)}(o,o)$ as $n\rightightarrow\infty$. Indeed, this is the case if for any $m\in {\mathbb N}$ and $s\in S$, we have \begin{equation}\lefteftarrowbel{eq:strong ratio} \leftim_n \frac{p^{(n-m)}(s,o)}{p^{(n)}(o,o)}=1. \end{equation} Condition \eqref{eq:strong ratio} is essentially the \emph{strong ratio limit property} in \cite{orey61strong}, and as shown there, it is equivalent to \begin{equation*} \leftim_n \frac{p^{(n+1)}(o,o)}{p^{(n)}(o,o)} =1. \end{equation*} The last line follows from \eqref{eq:RV ret prob} and \cite[Theorem 1.9.8]{bingham87regular}. In view of the topological basis $\mathcal{C}$, any function $f$ on $A^p$ which depends only on a finite number of coordinates of $(x_1,\leftdots,x_p)\in A^p$ can be verified to be continuous. On the other hand, a bounded continuous function on $A^p$ depending on infinitely many coordinates can be constructed, for example, as $ f(x_1,\leftdots,x_p)= \sum_{n=1}^\infty 2^{-n} \sum_{j=1}^p 1_{\{x_j(n)=o\}} . $ \end{Eg} \section{Proof of the non-central limit theorem} \lefteftarrowbel{sec:proof} We first provide a summary of the proof. We prove our main Theorem \rightef{Thm:CLT} here by establishing the convergence of finite-dimensional distributions and the tightness in $D([0,1])$ separately. We shall work with our series representation established in Section \rightef{sec:mult int}, and proceed by decomposing it into a leading term and a remainder term. Most of the effort is devoted to the convergence of the finite-dimensional distributions of the leading term. For this purpose, the key is Theorem \rightef{Thm:local time} which concerns a convergence to the joint local-time processes introduced in Section \rightef{sec:local times}. To prove Theorem \rightef{Thm:local time}, we shall apply the method of moments and make use of the moment formulas established in Theorem \rightef{thm:1} for the joint local-time processes. To facilitate the moment computation, a delicate approximation scheme is developed in Section \rightef{sec:approx}. The tightness in $D([0,1])$ is also established with the aid of the aforementioned decomposition. Finally we note that our proof techniques are essentially different from those in the case $p=1$ considered in \cite{owada15functional}. In the case $p=1$, the proof in \cite{owada15functional} relied heavily on the infinitely divisibility of the single stochastic integral and Hopf's ratio ergodic theorem. These ingredients are non-applicable for $p\ge 2$, and our proof strategy, instead, exploits the series representation of multiple stochastic integrals. We now start by a series representation of the joint distribution of $(X_k)_{k=1,\dots,n}$. 
For each fixed $n\in {\mathbb N}$, let $(U_i^{(n)})_{i\in{\mathbb N}}$ be i.i.d.~taking values in $E$ following the law \begin{equation}\lefteftarrowbel{eq:mu_n} \mu_n(\cdot):= \frac{\mu( \cdot \cap \{{\rightm{Var}}phi\lefte n\} )}{\mu({\rightm{Var}}phi\lefte n)}=\frac{\mu( \cdot \cap \{{\rightm{Var}}phi\lefte n\} )}{w_n}, \end{equation} where ${\rightm{Var}}phi$ is the first entrance time to $A$ as in (\rightef{eq:phi}). Let \[ T_p:=T\times\cdots \times T: E^p\to E^p \] be the product transform. For each fixed $n\in \mathbb{N}$, we apply the series representation (\rightef{eq:series rep}) with $B=\{{\rightm{Var}}phi\lefte n\}$, and obtain \begin{equation}\lefteftarrowbel{eq:X_k series} (X_k)_{k=1,\dots,n}\mathbb{E}qD \lefteft(p!\sum_{ I \in \mathcal{D}_p} \lefteft(\prod_{i\in I}{\rightm{Var}}epsilon_i \rightho^{\lefteftarrow}(\Gamma_i/w_n)\rightight) f\circ T_p^k(\vv U^{(n)}_I)\rightight)_{k=1,\dots,n}, n\in{\mathbb N}, \end{equation} where $w_n=\mu({\rightm{Var}}phi \lefte n)$ is the wandering rate sequence as in (\rightef{eq:w_n}), and $({\rightm{Var}}epsilon_i)_{i\in{\mathbb N}}, (\Gamma_i)_{i\in{\mathbb N}}$ are as in Section \rightef{sec:mult int} and are independent from $(U_i\topp n)_{i\in{\mathbb N}}$. Recall the notation $\vv U_I\topp n = (U_{i_1}\topp n,\dots,U_{i_p}\topp n)$ with $I = (i_1,\dots,i_p)\in\mathcal D_p$. For every $n$, the series representation converges almost surely by Lemma \rightef{lem:A1} since $f$ is bounded. Let \begin{equation}\lefteftarrowbel{eq:S_n(t)} S_n(t):= \frac{1}{c_n}\summ k1{\floor {nt}} X_k \end{equation} be the normalized partial sum of interest, with $c_n = n (\rightho^{ \lefteftarrow}(1/w_n)/ b_n)^p$ as in \eqref{eq:c_n}. The proof consists of proving the convergence of finite-dimensional distributions and tightness. \subsection{An approximation scheme}\lefteftarrowbel{sec:approx} Under the setup of Assumption \rightef{assump}, we introduce a class of functions useful for approximation purposes. Note that the product space $A^p$ is also Polish with Borel $\sigma$-field $\mathcal{E}_A^p$. \begin{Def}\lefteftarrowbel{Def:eleme ntary} A function $g:A^p\rightightarrow \mathbb{R}$ is said to be an \emph{elementary function}, if it is a finite linear combination of indicators of $p$-products of $\mu$-continuity sets in $\mathcal{E}_A$, that is, \[ g(x_1,\leftdots,x_p)=\sum_{m=1}^M b_m 1_{B_{1,m}\times \cdots\times B_{p,m}}(x_1,\leftdots,x_p) \] where $M\in \mathbb{N}$, $b_m$'s are some real constants and $B_{j,m}\in \mathcal{E}_A$ with $\mu(\partial B_{j,m} )=0$. A set $B\in\mathcal{E}_A^p$ is said to be an \emph{elementary set}, if $1_B$ is an elementary function. \end{Def} \begin{Lem}\lefteftarrowbel{Lem:riemann approx} Let $f$ be a bounded $\mu^{\otimes p}$-a.e.\ continuous function on $A^p$. Then for any $\epsilon>0 $, there exist elementary functions $g_1,g_2$ on $A^p$, such that $L(f)\lefte g_1\lefte f\lefte g_2\lefte U(f)$ and $|\mu^{\otimes p}(f)-\mu^{\otimes p}(g_i)|<\epsilon$, $i=1,2$, where $L(f)=\inf\{f(\vv x):\vv x \in A^p\}$ and $U(f)=\sup \{f(\vv x):\ \vv x \in A^p\}$. \end{Lem} \begin{proof} Suppose the Polish topology of $A$ is induced by a metric $d$ and let $N( x,\delta)=\{ y\in A:~d( x, y)<\delta\}$, $\delta>0$. For any $\vv x=(x_1,\leftdots,x_p)\in A^p$ and $\delta>0$, define the product neighborhood (corresponding to the uniform metric on $A^p$ induced from $d$) \[ N_p(\vv x,\delta)=N(x_1,\delta)\times \cdots \times N(x_p,\delta). \] Let $C\subset A^p$ be the set of continuity points of $f$, and fix $\epsilon>0$. 
For every $\vv x\in C$, when $\delta>0$ is small enough and avoids a countable set of values, the set $N_p(\vv x,\delta)$ can be made elementary (i.e., each $N(x_i,\delta)$ is $\mu$-continuous, $i=1,\ldots,p$) and \[\omega(\vv x,\delta):=\sup\{ |f(\vv x)-f(\vv y)|:~ \vv y\in N_p(\vv x,\delta)\}<\epsilon. \] Next, note that the separable metric space $A^p$ is second-countable and thus Lindel\"of (every open cover has a countable subcover). Hence there exist $\delta_n>0$ and $\vv x_n\in C$, such that $\cup_{n=1}^\infty N_p(\vv x_n,\delta_n) \supset C$, where each $N_p(\vv x_n,\delta_n)$ is elementary and $\omega(\vv x_n,\delta_n)<\epsilon$. For each $m\in{\mathbb N}$, set $C_m:=\cup_{n=1}^m N_p(\vv x_n,\delta_n)$. This is an elementary set, and one can further choose $m$ large enough so that $\mu^{\otimes p}(A^p\setminus C_m)=\mu^{\otimes p}(C\setminus C_m)<\epsilon$. One can further express $C_m$ as a union of disjoint elementary sets $C_m = \cup_{n=1}^mD_n$ with $D_n:=N_p(\vv x_n,\delta_n)\setminus (\cup_{i=1}^{n-1}N_p(\vv x_i,\delta_i))$. Then define \[ g_1(\vv x) := \sum_{n=1}^m \inf\{f(\vv y):~\vv y\in D_n \} 1_{D_n}(\vv x) +\inf\{f(\vv y): ~ \vv y\in A^p\}1_{ A^p\setminus C_m }(\vv x) \] and define $g_2$ with the $\inf$'s above replaced by $\sup$'s. Then $g_1$ and $g_2$ are elementary functions satisfying $g_1\le f \le g_2$, and \[ \mu^{\otimes p}(f-g_1) \wedge \mu^{\otimes p} (g_2-f)\ge 0, \quad \mu^{\otimes p}(f-g_1) \vee \mu^{\otimes p}(g_2-f)\le \epsilon (\mu^{\otimes p}(A^p) + 2\|f\|_\infty). \] \end{proof} \begin{proof}[Proof of Proposition \ref{Pro:unif ret}] The ``only if'' part is immediate by taking $\mathcal{C}$ to consist of all $\mu$-continuity sets in $\mathcal{E}_A$. We only need to show the ``if'' part. Let $\mathcal{D}$ be the smallest class of subsets of $A$ containing $\mathcal{C}$, which is also closed under (i) finite unions of disjoint sets and (ii) proper set differences. Then we apply a variant of Dynkin's $\pi$-$\lambda$ theorem, where the $\sigma$-field is replaced by a field, and in the definition of a $\lambda$-system, the ``countable disjoint union'' is replaced by ``finite disjoint union''. This variant can be established using similar arguments as those in \cite[Section 2.2.2]{resnick99probability}. Applying this we conclude that $\mathcal{D}$ is the smallest field containing $\mathcal{C}$. On the other hand, the class of $\mu$-continuity subsets of $A$ also forms a field, and so does $\mathcal{E}_A$. Hence any set in $\mathcal{D}$ is $\mu$-continuous and $\mathcal{D}\subset \mathcal{E}_A$. Next, one can verify directly that the set operations (i) and (ii) mentioned above preserve \eqref{eq:unif ret C}, and hence the relation \eqref{eq:unif ret C} holds for $B\in \mathcal{D}$. Now note that $\mu$ restricted to the Polish space $A$ is tight (see e.g. \cite[Theorem 1.3]{billingsley99convergence}). Hence for any $\mu$-continuity set $B\in \mathcal{E}_A$ and any $\epsilon>0$, there exists a compact $K\subset \mathring{B}$, such that $\mu(B\setminus K)=\mu(\mathring{B}\setminus K)<\epsilon/2$. Due to the compactness and condition (b) of Proposition \ref{Pro:unif ret}, there exists $D_1\in \mathcal{D}$ which is a finite union of sets in $\mathcal{C}$, so that $ K \subset D_1\subset \mathring{B}$. This, together with a similar argument with $B$ replaced by $A \setminus B$, entails the existence of $D_1,D_2\in \mathcal{D}$ satisfying $D_1\subset B \subset D_2$ and $\mu(D_2)-\mu(D_1)<\epsilon$.
Taking $n\rightightarrow\infty$ in\begin{equation}\lefteftarrowbel{eq:approx two sides} b_n\widehat{T}^n 1_{D_1} \lefte b_n \widehat{T}^n 1_B\lefte b_n\widehat{T}^n 1_{D_2} \quad \text{a.e.}, \end{equation} we see that \eqref{eq:uniform ret} holds for $g=1_B$. To obtain \eqref{eq:uniform ret} in full generality, first observe that by linearity of $\widehat{T}$, the relation extends to $g$ which is a finite linear combination of indicators of $\mu$-continuity sets in $\mathcal{E}_A$. Then it extends to general bounded $\mu$-a.e.\ continuous $g$ by an approximation similar to \eqref{eq:approx two sides} via Lemma \rightef{Lem:riemann approx} with $p=1$. \end{proof} \subsection{Proof of convergence of finite-dimensional distributions}\lefteftarrowbel{sec:pf fdd} We proceed by first writing \begin{equation}\lefteftarrowbel{eq:SR} \ccbb{S_n(t)}_{t\in[0,1]} \eqd \ccbb{S_{n,m}(t)+R_{n,m}(t)}_{t\in[0,1]}, \end{equation} for $m\in{\mathbb N}$ with \[ S_{n,m}(t) :=\frac{1}{c_n}\summ k1{\floor {nt}} p!\sum_{ I \in \mathcal{D}_p(m)} \lefteft(\prod_{i\in I}{\rightm{Var}}epsilon_i \rightho^{\lefteftarrow}(\Gamma_i/w_n)\rightight) f\circ T_p^k(\vv U^{(n)}_I), \] where $\mathcal D_p(m)$ is as in \eqref{eq:D_p(m)}. To show the convergence of finite-dimensional distributions, we shall show \begin{equation} S_{n,m}(t)\ConvFDD \Gamma(\beta_p) \cdot p!\cdot\mu^{\otimes p}(f) \sum_{ I \in \mathcal{D}_p(m)} \lefteft(\prod_{i\in I}{\rightm{Var}}epsilon_i \Gamma_i^{-1/\alpha} \rightight) L_t\pp{{\bf i}gcap_{i\in I} (R_i+V_i)},\lefteftarrowbel{eq:Snm1} \end{equation} for all $m\in{\mathbb N}$ (compare it with \eqref{eq:Z_t [0,1]}) and \begin{equation}\lefteftarrowbel{eq:Rn} \leftim_{m\to\infty} \leftimsup_{n\to\infty} P(|R_{n,m}(t)|>\epsilon)=0, \mbox{ for all } t\in[0,1], \epsilon>0. \end{equation} We prove the two claims separately. \subsubsection*{Proof of \eqref{eq:Snm1}.} Introduce \[ G_n(y):=\frac{\rightho^{\lefteftarrow}(y/w_n)}{\rightho^{\lefteftarrow}(1/w_n)} \] and \begin{equation}\lefteftarrowbel{eq:L_t(I,n)} L_{n,I,t}:=\frac{ b_n^p}{n} \sum_{k=1}^{\leftfloor nt \rightfloor} f\circ T_p^k(\vv U_I^{(n)}),\quad I\in \mathcal{D}_p, t\ge0, n\in{\mathbb N}, \end{equation} and write \begin{equation} S_{n,m}(t)=p! \sum_{ I \in \mathcal{D}_p(m)} \lefteft(\prod_{i\in I}{\rightm{Var}}epsilon_i G_n(\Gamma_i) \rightight)L_{n,I,t} \lefteftarrowbel{eq:Snm0}. \end{equation} By the assumption $\rightho((x,\infty))\in\mathrm{RV}_\infty(-\alpha)$ we have that \[ \leftim_{n\to\infty} G_n(y) = y^{-1/\alpha},\quad y>0. \] Therefore, \eqref{eq:Snm1} follows from the following result. \begin{Thm}\lefteftarrowbel{Thm:local time}With the notation above, \[ \pp{L_{n,I,t}}_{I\in \mathcal D_p, t\in[0,1]} \stackrel{f.d.d.}\to \mu^{\otimes p} (f)\Gamma(\beta_p)\pp{L_t\pp{{\bf i}gcap_{i\in I} (R_i+V_i)}}_{I\in \mathcal D_p, t\in[0,1]}. \] \end{Thm} Theorem \rightef{Thm:local time} can be proved by a method of moments. \begin{Pro}\lefteftarrowbel{Pro:moments} Let $f$ be as in Theorem \rightef{Thm:CLT}. Then for any $I_1,\leftdots,I_r \in \mathcal{D}_p$, $t_1,\leftdots,t_r\in [0,1]$, we have \begin{align}\lefteftarrowbel{eq:moment sum} \leftim_{n\to\infty}\mathbb{E}\pp{ \prod_{\ell =1}^r L_{n,I_\ell,t_{\ell}}} = \mu^{\otimes p}(f)^{r}\int_{(\vv0,\vv t)} \prod_{i=1}^K h_{|\mathcal I (i)|}^{(\beta)} (\vv x_{\mathcal I(i)}) \, d\vv x, \end{align} where $h_{q}^{(\beta)}$ is as in \eqref{eq:hq} and $K = \max({\bf i}gcup_{\ell=1}^r I_\ell)$. 
\end{Pro} \begin{proof} We may assume that $t_\ell >0$ for all $\ell=1,\leftdots,r$, otherwise (\rightef{eq:moment sum}) trivially holds with both-hand sides being zeros. \comment{Let \begin{equation}\lefteftarrowbel{eq:a_n^r} a_n^{(r)}= \frac{n}{{\bf i}gl( \Gamma (\beta) \Gamma (2-\beta) w_n {\bf i}gr)^r} \sim \frac{n}{b_n^r} ,\quad r=1,\leftdots,p, \end{equation} where the asymptotic equivalence is due to (\rightef{eq:b_n w_n}).} We then proceed as follows: \begin{align} {\mathbb E}\pp{\prodd \ell1r L_{n,I_\ell,t_\ell}} & = \pp{\frac{b_n^p}n}^r{\mathbb E}\pp{\prodd\ell1r\summ k1{\floor{nt_\ell}}f\circ T_p^k(\vv U_{I_\ell}\topp n)}\notag\\ & = \pp{\frac{b_n^p}n}^r \sum_{\vv 1\lefte \vv k\lefte \floor{n\vv t}} \mathbb{E} \pp{ \prod_{\ell=1}^r f\circ T_p^{k_\ell}(\vv U_{I_\ell}^{(n)}) }. \lefteftarrowbel{eq:moment start} \end{align} We claim that it is enough to prove \eqref{eq:moment sum} for function $f$ of the form \begin{equation}\lefteftarrowbel{eq:f single term} f(\vv x) = \prod_{j=1}^p f_{j}(x_j), \quad\mbox{ with }\quad f_j(x) = {\bf 1}_{A_j}(x), \end{equation} where each $f_{j}$ is an indicator of a $\mu$-continuity set $A_j\in\mathcal E_A$ satisfying the uniform return relation \eqref{eq:uniform ret} and \eqref{eq:bn_RV}. Indeed, since $f$ can always be written as a difference of two non-negative bounded $\mu^{\otimes p}$-a.e.\ continuous functions (e.g., $f= (f + \| f \|_\infty 1_{A^p}) - \|f \|_\infty 1_{A^p}$), so by an expansion of the product in \eqref{eq:moment start}, one may assume that $f\ge 0$. Next, in view of Lemma \rightef{Lem:riemann approx}, Assumption \rightef{assump} and an approximation argument exploiting monotonicity, it suffices to consider $f$ which is elementary in the sense of Definition \rightef{Def:eleme ntary}. By a further expansion of the product in \eqref{eq:moment start}, it suffices to focus on $f$ with simple form \eqref{eq:f single term}. From (\rightef{eq:f single term}), we can rewrite using $I_\ell = (I_\ell(1),\dots,I_\ell(p))$ with $I_\ell(1)<\cdots<I_\ell(p)$: \begin{align*} \prod_{\ell=1}^r f\circ T_p^{k_\ell}(U_{I_\ell}^{(n)})&= \prod_{\ell=1}^r \prod_{j=1}^p f_{j}\circ T^{k_\ell}(U_{I_\ell(j)}^{(n)})\notag\\ &= \prod_{i=1}^K \prod_{\ell \in \mathcal I(i)} f_{\mathcal K(i,\ell)} \circ T^{k_\ell} (U_i^{(n)}), \end{align*} where, for every $\ell\in\mathcal I(i) = \{\ell' \in\{1,\dots,r\}, i\in I_{\ell'}\}$, $\mathcal K(i,\ell) \in \{ 1,\dots, p \}$ is defined by the relation $I_\ell{\bf i}g(\mathcal K(i,\ell){\bf i}g)=i$. Here and below, we follow the convention $\prod_{\ell \in \emptyset}(\cdot) \equiv 1$. Since $U_1^{(n)}, \dots, U_K^{(n)}$ are i.i.d.\ following $\mu_n$ in (\rightef{eq:mu_n}), we have $$ \mathbb{E}\pp{ \prod_{i=1}^K \prod_{\ell \in \mathcal I(i)} f_{\mathcal K(i,\ell)} \circ T^{k_\ell} (U_i^{(n)}) } = \prod_{i=1}^K \mu_n \pp{\prod_{\ell \in \mathcal I(i)} f_{\mathcal K(i,\ell)} \circ T^{k_\ell} }. 
$$ Then, \begin{align} {\mathbb E}\pp{\prodd \ell1r L_{n,I_\ell,t_\ell}} &= \pp{\frac{b_n^p}n}^r\sum_{\vv1\lefte\vv k\lefte \floor{n\vv t}}\prodd i1K\mu_n\pp{\prod_{\ell\in\mathcal I(i)}f_{\mathcal K(i,\ell)}\circ T^{k_\ell}}.\lefteftarrowbel{eq:Psy_n} \end{align} Expressing the $r$-tuple sum over $\vv k$ above by an integral, we claim that \begin{align} &{\mathbb E}\pp{\prodd \ell1r L_{n,I_\ell,t_\ell}} = b_n^{pr}\int_{(\mathbf{0}, \leftfloorloor n\vv t\rightfloorloor /n)}\prodd i1K\mu_n\pp{\prod_{\ell\in\mathcal I(i)}f_{\mathcal K(i,\ell)}\circ T^{\floor{nx_\ell}+1}}d\vv x\nonumber\\ &\sim \pp{ \Gamma (\beta) \Gamma (2-\beta) }^{pr}\int_{(\mathbf{0}, \leftfloorloor n\vv t\rightfloorloor /n)}\prodd i1Kw_n^{|\mathcal I(i)|-1}\mu\pp{\prod_{\ell\in\mathcal I(i)}f_{\mathcal K(i,\ell)}\circ T^{\floor{nx_\ell}}}d\vv x. \lefteftarrowbel{eq:DCT} \end{align} Indeed, in \eqref{eq:DCT}, we have used $\mu_n(\cdot)=\mu(\cdot \cap \{{\rightm{Var}}phi\lefte n\})/w_n$, the relation \eqref{eq:b_n w_n}, and the fact that the functions $f_j\circ T^k$, $1\lefte k\lefte n$, are supported within $ \{{\rightm{Var}}phi\lefte n\}$ and $\sum_{i=1}^K |\mathcal I(i)|=|I_1|+\cdots+ |I_r|= pr$; we also drop the `$+1$' in the power of $T$, since $T$ is measure-preserving with respect to $\mu$. To complete the proof, it remains to establish \begin{multline}\lefteftarrowbel{eq:DCT1} \leftim_{n\to\infty}\int_{(\mathbf{0}, \leftfloorloor n\vv t\rightfloorloor /n)}\prodd i1Kw_n^{|\mathcal I(i)|-1}\mu\pp{\prod_{\ell\in\mathcal I(i)}f_{\mathcal K(i,\ell)}\circ T^{\floor{nx_\ell}}}d\vv x \\ = {\bf i}gl( \Gamma (\beta) \Gamma (2-\beta) {\bf i}gr)^{-pr}\lefteft( \prod_{i=1}^K \prod_{\ell \in \mathcal I(i)} \mu(f_{\mathcal K(i,\ell)}) \rightight) \int_{(\vv0,\vv t)} \prod_{i=1}^K h_{|\mathcal I (i)|}^{(\beta)} (\vv x_{\mathcal I(i)}) \, d\vv x. \end{multline} Indeed, the desired convergence of moments \eqref{eq:moment sum} now follows from \eqref{eq:Psy_n}, \eqref{eq:DCT}, \eqref{eq:DCT1} and that $$ \prod_{i=1}^K \prod_{\ell \in \mathcal I(i)} \mu\lefteft(f_{\mathcal K(i,\ell)}\rightight) = \lefteft( \prod_{j=1}^p \mu(f_{j}) \rightight)^r = \mu^{\otimes p}(f)^{r}. $$ In order to show \eqref{eq:DCT1}, we apply the dominated convergence theorem. To simplify the notation, we consider $q \in\{ 1,\dots,p\}$ and $f_1,\dots,f_q$ as in \eqref{eq:f single term}, and introduce \[ H_{n,q}(\vv x):= w_n^{q-1} \mu\lefteft( \prod_{j=1}^q f_j\circ T^{\leftfloor n x_j \rightfloor}\rightight), \quad \vv x\in (0,1)^q. \] A careful examination shows that \eqref{eq:DCT1} follows from the following two results: \begin{equation}\lefteftarrowbel{eq:pointwise} \leftim_{n\to\infty} H_{n,q}(\vv x)= \pp{ \Gamma (\beta) \Gamma (2-\beta) }^{-q} \lefteft(\prod_{j=1}^q \mu(f_j) \rightight) h_q^{(\beta)} (\vv x), \mbox{ for all } \vv x\in(\vv0,\vv1)_{\neq}, \end{equation} and, for some $\eta\in(0,\beta)$, \begin{equation}\lefteftarrowbel{eq:upper_bound} H_{n,q}(\vv x)\lefte C h_q^{(\beta-\eta)} (\vv x), \mbox{ for all } \vv x\in(\vv0,\vv1)_{\neq}. \end{equation} (Recall $h_q\topp\beta$ in \eqref{eq:hq}.) Note that we only need to consider the limit for $\vv x\in(\vv 0,\vv 1)_{\neq}:=\{\vv y\in(\vv0,\vv1):y_\ell\ne y_{\ell'}, \forall \ell\ne\ell'\}$. 
The product $\prod_{i=1}^K h_{|\mathcal{I}(i)|}^{(\beta-\eta)} (\vv x_{\mathcal I(i)})$ is integrable on $(\vv 0,\vv 1)_{\neq}$ since it is up to a multiplicative constant \[ {\mathbb E}\pp{\prodd \ell1r\wt L_{I_\ell,t_\ell}} \lefte \frac1r \summ \ell1r {\mathbb E} \wt L_{I_\ell,t_\ell}^r, \] where $\wt L_{I,t}$ is defined similarly as $L_{I,t}$, with the underlying $\beta$-stable regenerative sets replaced by $(\beta-\eta)$-stable regenerative sets (see \eqref{eq:LIt}). Setting $\eta>0$ small enough so that $p(\beta-\eta)-p+1\in(0,1)$, the finiteness of the integration now follows from \eqref{eq:4.5}. We now prove \eqref{eq:pointwise} and \eqref{eq:upper_bound}. Assume $q\ge 2$ below. The case $q=1$ is similar and simpler and hence omitted. To show \eqref{eq:pointwise}, it suffices to focus on the tetrahedron $(\vv 0,\vv1)_\uparrow:=\{\vv x\in(0,1)^q: 0 < x_1 < \dots < x_q < 1\}$. First write \begin{align*} \prodd j1q f_j\circ T^{\floor{nx_j}} & = f_{1}\circ T^{\floor{nx_1}} \times \pp{ \prod_{j=2}^q f_{j} \circ T^{\leftfloor nx_{j} \rightfloor - \leftfloor nx_{1} \rightfloor}} \circ T^{\leftfloor nx_{1} \rightfloor}\\ & = f_{1}\circ T^{\floor{nx_1}} \times \pp{ \prod_{j=2}^q f_{j} \circ T^{\leftfloor nx_{j} \rightfloor - \leftfloor nx_{2} \rightfloor}}\circ T^{\floor{nx_2}-\floor{nx_1}}\circ T^{\leftfloor nx_{1} \rightfloor}. \end{align*} Then, by the measure-preserving property, \begin{equation}\lefteftarrowbel{eq:H_q,n} H_{n,q}(\vv x) = w_n^{q-1} \int_E f_{1}\times \pp{ \prod_{j=2}^q f_{j} \circ T^{\leftfloor nx_{j} \rightfloor - \leftfloor nx_{2} \rightfloor}}\circ T^{\leftfloor nx_{2} \rightfloor - \leftfloor nx_{1} \rightfloor} d\mu, \end{equation} which, by duality \eqref{eq:dual}, equals \[ w_n^{q-2} \frac{w_n}{w_{\leftfloor nx_{2} \rightfloor - \leftfloor nx_{1} \rightfloor}}\, \int_A w_{\leftfloor nx_{2} \rightfloor - \leftfloor nx_{1} \rightfloor} \lefteft(\widehat T^{\leftfloor nx_{2} \rightfloor - \leftfloor nx_{1} \rightfloor} f_{1}\rightight) \prod_{j=2}^q f_{j}\circ T^{\leftfloor nx_{j} \rightfloor - \leftfloor nx_{2} \rightfloor} d\mu. \] Due to the uniform convergence of a regularly varying sequence of positive index \citep[Proposition 2.4]{resnick07heavy}, we have $ \leftim_{n\to\infty} w_{\leftfloor nx_{2} \rightfloor - \leftfloor nx_{1} \rightfloor}/w_n = (x_{2} - x_{1})^{1-\beta}$. In addition, using the uniform convergence in (\rightef{eq:uniform ret}) and the relation (\rightef{eq:b_n w_n}), as $n\to\infty$, \[ H_{n,q}(\vv x) \sim\frac{ \mu(f_{1})}{\Gamma (\beta) \Gamma (2-\beta) } (x_{2} - x_{1})^{\beta-1} w_n^{q-2} \int_E \prod_{j=2}^q f_{j} \circ T^{ \leftfloor nx_{j} \rightfloor - \leftfloor nx_{2} \rightfloor}d\mu. \] Repeating the arguments above yields \eqref{eq:pointwise}. We now prove \eqref{eq:upper_bound}. The situation is more delicate, and we shall introduce \[ D_{n,q}:=\ccbb{\vv x \in (\vv0,\vv1)_{\uparrow}:\leftfloor n x_i \rightfloor \neq \leftfloor nx_j \rightfloor \text{ for all }i\neq j }. \] First assume that $\vv x\in D_{n,q}$, which implies $\leftfloor n x_{1} \rightfloor < \leftfloor n x_{2} \rightfloor$. 
By Potter's bound \citep[Theorem 1.5.6]{bingham87regular} and an elementary bound \citep[Eq.(40)]{bai14generalized}, \begin{equation} \label{e:bound1 lem} \frac{w_n}{w_{\lfloor nx_{2} \rfloor - \lfloor nx_{1} \rfloor}} \leq C_1 \left(\frac{\lfloor n x_{2} \rfloor - \lfloor n x_{1} \rfloor}{n} \right)^{\beta-1-\eta}\le C_2 (x_{2} - x_{1})^{\beta-1-\eta}, \end{equation} for all $n\in{\mathbb N},\vv x\in D_{n,q}$, where recall that $\eta>0$ is sufficiently small such that $\beta-\eta > 1-1/p$. In addition, the relations (\ref{eq:uniform ret}) and (\ref{eq:b_n w_n}) imply \begin{equation} \label{e:bound2 lem} \sup_{\substack{0 < x_1<x_2 < 1, y\in A\\ n: \lfloor n x_{1} \rfloor<\lfloor n x_{2}\rfloor}} w_{\lfloor nx_{2} \rfloor - \lfloor nx_{1} \rfloor} \left(\widehat T^{\lfloor nx_{2} \rfloor - \lfloor nx_{1} \rfloor} 1_A\right) (y) <\infty. \end{equation} Applying these observations to (\ref{eq:H_q,n}), and bounding the $|f_j|$'s by $1_A$ up to a constant almost everywhere, we get \[ H_{n,q}(\vv x) \leq C (x_{2}-x_{1})^{\beta-1-\eta} w_n^{q-2} \ \int_E 1_A \prodd j3q 1_A\circ T^{\lfloor nx_{j} \rfloor - \lfloor nx_{2} \rfloor} d\mu. \] Applying the bounds of the form \eqref{e:bound1 lem} and \eqref{e:bound2 lem} iteratively, we eventually get \eqref{eq:upper_bound} for $\vv x\in D_{n,q}$. Now we assume that $\vv x\in (\vv0,\vv1)_\uparrow\setminus D_{n,q}$. Again in (\ref{eq:H_q,n}), we shall bound each $|f_{j}|$ by $1_A$ up to a constant almost everywhere. Assume first that only two of the $\floor{nx_i}$'s are the same, and without loss of generality we consider $\lfloor nx_{2} \rfloor = \lfloor n x_{1} \rfloor$ and $\lfloor n x_{j}\rfloor \neq \lfloor n x_{j-1} \rfloor $ for $j=3,\dots,q$. Then \begin{align*} H_{n,q}(\vv x) &\le C w_n^{q-1} \int_E \prod_{j=2}^q 1_A \circ T^{\lfloor nx_{j} \rfloor} d\mu \\ & = C w_n \cdot w_n^{q-2} \int_E 1_{A} \prod_{j=3}^q 1_{A} \circ T^{\lfloor nx_{j} \rfloor - \lfloor nx_{2} \rfloor} d\mu. \end{align*} Handling the integral factor as in \eqref{e:bound1 lem} and \eqref{e:bound2 lem}, we obtain \begin{equation}\label{eq:H_nq bound} H_{n,q}(\vv x)\leq C w_n \prod_{j=3}^q (x_{j} - x_{j-1})^{\beta-1-\eta}.\end{equation} Furthermore, $\lfloor nx_{2} \rfloor = \lfloor n x_{1}\rfloor $ implies $x_{2} - x_{1} < 1/n$, in which case $ n^{\beta-1-\eta} (x_{2} - x_{1})^{\beta-1-\eta}>1$. Inserting this into \eqref{eq:H_nq bound}, it then follows that \[ H_{n,q}(\vv x) \leq C w_n n^{\beta-1-\eta} h_q^{(\beta-\eta)} (\vv x). \] Note that $w_n n^{\beta-1-\eta}\in \mathrm{RV}_\infty(-\eta)$ and thus converges to zero as $n\rightarrow\infty$. So the above bound satisfies what we need in \eqref{eq:upper_bound}. The case where $\vv x\in (\vv0,\vv1)_{\uparrow}\setminus D_{n,q}$ with $\floor{nx_i} = \floor{nx_{i+1}}$ for more than one value of $i=1,\dots,q-1$ can be treated similarly. The proof is thus completed. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm:local time}] We have computed the joint moments of $(L_{I_\ell,t_\ell})_{\ell=1,\dots,r}$ in Theorem \ref{thm:1}. On the other hand, we have established the convergence of the joint moments of $(L_{n,I_\ell,t_\ell})_{\ell=1,\dots,r}$ in Proposition \ref{Pro:moments}.
It remains to show that the law of $(L_{I_\ell,t_\ell})_{\ell=1,\dots,r}$ is uniquely determined by the joint moments, for every choice of $I_1,\dots,I_r, t_1,\dots,t_r$. For this, it suffices to check the multivariate Carleman condition \citep[Theorem 1.12]{shohat43problem} \begin{equation}\label{eq:carleman} \sum_{k=1}^\infty \eta_{2k}^{-1/(2k)}=\infty, \quad\mbox{ with }\quad \eta_{2k} := \summ \ell1r {\mathbb E} L_{I_\ell,t_\ell}^{2k}. \end{equation} In view of Corollary \ref{Cor:incre moment}, we have $\eta_{2k} \le C^{2k} (2k)!/\Gamma(2k \beta_p -\beta_p +2)$. By Stirling's approximation, one obtains the inequality $\eta_{2k}^{-1/(2k)}\ge C k^{\beta_p-1}$. So (\ref{eq:carleman}) holds because $\beta_p>0$. \end{proof} \subsubsection*{Proof of \eqref{eq:Rn}} We shall need the following uniform control: \begin{equation}\label{eq:Gn bound} G_n(y)\equiv\frac{\rho^{\leftarrow}(y/w_n)}{\rho^{\leftarrow}(1/w_n)}\le C \pp{ y^{-1/\alpha_0} + y^{-(1/\alpha)-\epsilon}},~ \mbox{ for all } y>0 \text{ and } n\in{\mathbb N}. \end{equation} To see this, we first note that the assumptions on $\rho$ in \eqref{eq:rho} imply that $\rho^\leftarrow\in\mathrm{RV}_0(-1/\alpha)$ and $\rho^\leftarrow(y) = O(y^{-1/\alpha_0})$ as $y\to\infty$. By Potter's bound \citep[Theorem 1.5.6]{bingham87regular}, for every $\epsilon>0$ there exists a constant $A_\epsilon>0$ such that if $y\le A_\epsilon w_n$, then $G_n(y)\le 2y^{-(1/\alpha) - \epsilon}$. On the other hand, for $y>A_\epsilon w_n$, we have $\rho^\leftarrow(y/w_n)\le C(y/w_n)^{-1/\alpha_0}$ and $\rho^\leftarrow(1/w_n)\ge C(1/w_n)^{-(1/\alpha)+\epsilon}$, whence \[ G_n(y)\le C y^{-1/\alpha_0}w_n^{1/\alpha_0-(1/\alpha)+\epsilon}, \mbox{ for all } y>A_\epsilon w_n, n\in{\mathbb N}. \] (The constants $C$ here and below depend on $\epsilon$.) Now, note that in view of the second assumption on $\alpha_0$ in \eqref{eq:rho}, one could take $\alpha_0$ arbitrarily close to and smaller than $2$. Set also $\epsilon$ small so that $1/\alpha_0-(1/\alpha)+\epsilon<0$, so that the upper bound above becomes $G_n(y)\le Cy^{-1/\alpha_0}$ for all $y>A_\epsilon w_n$. We have thus proved \eqref{eq:Gn bound}. Fix a large $M$ which will be specified later. In view of \eqref{eq:SR} and \eqref{eq:Snm0}, we express \[ R_{n,m}(t)=\sum_{I_1\in \mathcal{D}_{\le p-1}(M)} \left( \prod_{i\in I_1} \varepsilon_i G_{n}(\Gamma_i)\right) F(I_1,n,M,m) \] where $\mathcal{D}_{\le p-1}(M)$ is as in (\ref{eq:D le p M}), and \[ F(I_1,n,M,m) := \sum_{I_2\in \mathcal{H}(p-|I_1|,M,m)}\left(\prod_{i\in I_2} \varepsilon_i G_{n}(\Gamma_i)\right) L_{n,I_1\cup I_2,t}, \] with \[ \mathcal{H}(k,M,m):=\{I\in \mathcal{D}_k: \ \min I> M,\ \max I>m\}. \] (Compare it with $\mathcal H(k,M)$ in \eqref{eq:H(k,M)}.) Observe that $\mathcal{D}_{\le p-1}(M)$ is finite and $\mathbb{E} |\prod_{i\in I_1}G_{n}(\Gamma_i)|^q<\infty$ for all $I_1 \in \mathcal{D}_{\le p-1}(M)$ when $q>0$ is sufficiently small, in view of \eqref{eq:Gn bound} and \eqref{eq:gamma estimate}. Hence by H\"older's inequality, it suffices to show, for each $I_1\in \mathcal{D}_{\le p-1}(M)$, that \begin{equation}\label{eq:I_1 term q norm b} \lim_{m\to\infty} \sup_{n\in{\mathbb N}} \mathbb{E} F(I_1,n,M,m)^2=0. \end{equation} For the above to hold we shall actually need $M$ to be large enough, which will be determined at the end. Introduce \[ k:= p-|I_1|.
\] We start by using the orthogonality $\mathbb{E}[(\prod_{i\in I}{\rightm{Var}}epsilon_i)( \prod_{i\in I'} {\rightm{Var}}epsilon_i)] =1_{\{I=I'\}}$, $I,I'\in \mathcal D_k$ to obtain \[ \mathbb{E} F(I_1,n,M,m)^2=\sum_{I_2\in \mathcal{H}(k,M,m)} \mathbb{E}\lefteft( \prod_{i\in I_2} G_{n}(\Gamma_i)^2\rightight) \mathbb{E} L_{n,I_1\cup I_2,t}^2. \] Note that ${\mathbb E} L_{n,I_1\cup I_2,t}^2 = {\mathbb E} L_{n,I,t}^2$ for all $I\in \mathcal D_p$, which is convergent as $n\to\infty$ by Proposition \rightef{Pro:moments} and hence uniformly bounded in $I$ and $n$. Note also that $\mathcal{H}(k,M,m)\downarrow \emptyset$ as $m\rightightarrow\infty$. Therefore, to show \eqref{eq:I_1 term q norm b}, by the dominated convergence theorem it suffices to find $g^*: \mathcal H(k,M)\to{\mathbb R}_+$ such that \[ g_n^*(I_2):={\mathbb E}\pp{\prod_{i\in I_2}G_n(\Gamma_i)^2}\lefte g^*(I_2), \mbox{ for all } I_2\in \mathcal H(k,M), n\in{\mathbb N} \] and $\sum_{I_2\in\mathcal H( k ,M)} g^*(I_2)<\infty$. Setting $\gamma:=\min\{1/\alpha_0, 1/\alpha +\epsilon\}$ and taking $M>2\gamma k$, we have \begin{equation} {\mathbb E}\pp{\prod_{i\in I_2}G_n(\Gamma_i)^2} \lefte C{\mathbb E} \pp{\prod_{i\in I_2}\pp{\Gamma_i^{-1/\alpha_0}+\Gamma_i^{-(1/\alpha)-\epsilon}}^2} \lefte C{\prod_{i\in I_2} i^{-2\gamma}}=:g^*(I_2), \lefteftarrowbel{eq:Gn_Gamma} \end{equation} where the first inequality follows from \eqref{eq:Gn bound}, and the second from \eqref{eq:gamma estimate}. The bound $g^*$ is summable over $\mathcal H(k,M)$ as \[ \sum_{I_2\in\mathcal H(k,M)}g^*(I_2)\lefte C\pp{\sum_{i=1}^\infty i^{-2\gamma}}^{k}, \] and that $2\gamma>1$. This completes the proof of \eqref{eq:I_1 term q norm b} and hence \eqref{eq:Rn}. \subsection{Proof of tightness}\lefteftarrowbel{sec:pf tight} \begin{Pro} Under the assumptions of Theorem \rightef{Thm:CLT}, the laws of processes $(S_n(t))_{t\in[0,1]}, n\in{\mathbb N}$ are tight in the Skorokhod space $D([0,1])$ with respect to the uniform topology. \end{Pro} \begin{proof} Fix $m\in {\mathbb N}$ large enough specified later. Assume without loss of generality that $f\ge 0$, since a general $f$ can be written as a difference of two non-negative bounded $\mu^{\otimes p}$-a.e.~continuous functions on $A^p$. Recall the decomposition $S_n(t)=S_{n,m}(t)+R_{n,m}(t)$ as in (\rightef{eq:SR}). It suffices to check the tightness of $(S_{n,m})_{n\in{\mathbb N}}$ and $(R_{n,m})_{n\in{\mathbb N}}$ respectively. We start with $(S_{n,m})_{n\in{\mathbb N}}$. Let $L_{n,I,t}$ be as in (\rightef{eq:L_t(I,n)}). Recall that \[ S_{n,m}(t) = p!\sum_{ I \in \mathcal{D}_p(m)}\pp{\prod_{i\in I}{\rightm{Var}}epsilon_iG_n(\Gamma_i)}L_{n,I,t}. \] By Theorem \rightef{Thm:local time}, the limit of each $L_{n,I,t}$ in finite-dimensional distribution is, up to a constant, the local time $L_t(\cap_{i\in I}(R_i+V_i))$ of the shifted $\beta_p$-stable regenerative set $\cap_{i\in I}(R_i+V_i)$, for which we shall work with its continuous version. Then for each fixed $I\in \mathcal{D}_p(m)$, the laws of the a.s.\ non-decreasing processes $(L_{n,I,t})_{t\in[0,1]}, n\in{\mathbb N}$ are tight \citep[Theorem 3]{bingham71limit}. Furthermore, we have seen that $\prod_{i\in I}G_{n}(\Gamma_i)\to \prod_{i\in I}\Gamma_i^{-1/ \alpha}$ as $n\rightightarrow\infty$, and hence \[ \wt G_{n,I}:=\prod_{i\in I}{\rightm{Var}}epsilon_iG_n(\Gamma_i), n\in{\mathbb N} \] is a tight sequence of random variables for every $I\in\mathcal D_p(m)$. For every fixed $m\in{\mathbb N}$, the tightness of $\{(S_{n,m}(t))_{t\in[0,1]}, n\in{\mathbb N}\}$ then follows. 
Next, we show the tightness of $(R_{n,m}(t))_{t\in[0,1]}, n\in{\mathbb N}$, for a fixed and sufficiently large $m$. Write \[ R_{n,m}(t) =\sum_{I_1\in \mathcal{D}_{\le p-1}(m)} \wt G_{n,I_1} {\sum_{I_2\in \mathcal{H}(p-|I_1|,m)} \wt G_{n,I_2} L_{n,I_1\cup I_2,t}}. \] Since $\mathcal{D}_{\le p-1}(m)$ is finite, it suffices to prove, for fixed $I_1\in \mathcal{D}_{\le p-1}(m)$ and $k=p-|I_1|\ge 1$, the tightness of \[ A_n(t):=\sum_{I_2\in \mathcal{H}(k,m)}\wt G_{n,I_2} L_{n,I_1\cup I_2,t}, \quad t\in[0,1], n\in{\mathbb N}. \] For this purpose, it is standard (e.g.~\citep[Theorem 13.5]{billingsley99convergence}) to show that there exist constants $C>0$, $a>0$ and $b>1$, such that \begin{equation}\label{eq:goal tight} \mathbb{E} |A_n(t)-A_n(s)|^{a} \le C \left( t-s \right)^{b}, \mbox{ for all } 0\le s<t\le 1, n\in{\mathbb N}. \end{equation} To this end, we compute \begin{align*} \mathbb{E} (A_n(t)-A_n(s))^{2} = \sum_{I_2 \in \mathcal{H}(k,m)} \mathbb{E} \pp{\prod_{i\in I_2}G_n(\Gamma_i)^{2}} \mathbb{E} (L_{n,I_1\cup I_2,t}-L_{n,I_1\cup I_2,s})^2. \end{align*} The first expectation is uniformly bounded by $g^*(I_2)$ as in \eqref{eq:Gn_Gamma} (assuming $m>2\gamma k$ in place of $M>2\gamma k$), which is summable over $\mathcal H(k,m)$. For the second, by first bounding $f$ by $1_{A^p}$ up to a constant and then applying an argument similar to the proof of Proposition \ref{Pro:moments}, in particular, using the bound \eqref{eq:upper_bound}, we have \begin{align*} \mathbb{E} (L_{n,I_1\cup I_2,t}-L_{n,I_1\cup I_2,s})^2 &\le C \int_{ \frac{\lfloor ns \rfloor}{n}<x_1<x_2< \frac{\lfloor nt \rfloor}{n} } (x_2-x_1)^{p(\beta-1-\eta)} dx_1 dx_2 \\ & \le C \left(t-s\right)^{\beta_p +1 -p\eta}, \end{align*} where $\eta>0$ is chosen sufficiently small so that $p\eta<\beta_p\in (0,1)$. The proof of (\ref{eq:goal tight}) is then completed. \end{proof} \end{document}
\begin{document} \title{{\TheTitle} \begin{abstract} Any model order reduced dynamical system that evolves a modal decomposition to approximate the discretized solution of a stochastic PDE can be related to a vector field tangent to the manifold of fixed rank matrices. The Dynamically Orthogonal (DO) approximation is the canonical reduced order model for which the corresponding vector field is the orthogonal projection of the original system dynamics onto the tangent spaces of this manifold. The embedded geometry of the fixed rank matrix manifold is thoroughly analyzed. The curvature of the manifold is characterized and related to the smallest singular value through the study of the Weingarten map. Differentiability results for the orthogonal projection onto embedded manifolds are reviewed and used to derive an explicit dynamical system for tracking the truncated Singular Value Decomposition (SVD) of a time-dependent matrix. It is demonstrated that the error made by the DO approximation remains controlled under the minimal condition that the original solution stays close to the low rank manifold, which translates into an explicit dependence of this error on the gap between singular values. The DO approximation is also justified as the dynamical system that applies instantaneously the SVD truncation to optimally constrain the rank of the reduced solution. Riemannian matrix optimization is investigated in this extrinsic framework to provide algorithms that adaptively update the best low rank approximation of a smoothly varying matrix. The related gradient flow provides a dynamical system that converges to the truncated SVD of an input matrix for almost every initial data. \end{abstract} \begin{keywords} Model order reduction, fixed rank matrix manifold, low rank approximation, Singular Value Decomposition, orthogonal projection, curvature, Weingarten map, Dynamically Orthogonal approximation, Riemannian matrix optimization. \end{keywords} \begin{AMS} 65C20, 53B21, 65F30, 15A23, 53A07, 35R60, 65M15 \end{AMS} \section{Introduction} Finding efficient model order reduction methods is an issue commonly encountered in a wide variety of domains involving intensive computations and expensive high-fidelity simulations \cite{schilders2008model,Quarteroni_Rozza_ROM_Springer2013,Kutz_2013,constantine2015_activesubspaces}. Such domains include uncertainty quantification \cite{ghanem_spanos_2003,lermusiaux_et_al_2006b,smith_UQ_SIAM2013,sullivan_UQ_Springer2015}, dynamical systems analysis \cite{holmes_etal_PODturbulence_Cambridge1998, benner_etal_SIAMreview2015,williams_etal_JNS2015}, electrical engineering \cite{Fortuna_etal_Springer2013,Bartel_etal_Springer2014}, mechanical engineering \cite{Noack_etal_ROM_Springer2011}, ocean and weather predictions \cite{lermusiaux2001evolving,majda_timofeyev_JAS2003,cao_etal_IJNMF2007,rozier_etal_SIAMreview2007}, chemistry \cite{okino_mavrovouniotis_CR1998}, and biology \cite{lee_etal_JPC2002}, to name a few. The computational costs and challenges arise from the complexity of the mathematical models as well as from the needs of representing variations of parameter values and the dominant uncertainties involved. 
For example, to quantify uncertainties of dynamical system fields, one often needs to solve stochastic partial differential equations (PDEs), \begin{equation} \label{eqn:SPDE}\partial_t \bm u= \mathscr{L}(t,\bm u;\omega)\, , \end{equation} where $t$ is time, $\bm u$ the uncertain dynamical fields, $\mathscr{L}$ a differential operator, and $\omega$ a random event. For deterministic but parametric dynamical systems, $\omega$ may represent a large set of possible parameter values that need to be accounted for by the model-order reduction. Generally, after both spatial and stochastic/parametric event discretization of the PDE \cref{eqn:SPDE}, or more directly if the focus is on solving a complex system of ordinary differential equations (ODEs), one is interested in the numerical solution of a large system of ODEs of the form \begin{equation} \label{eqn:dRstar}\dot{\mathfrak{R}}=\mathcal{L} (t,\mathfrak{R}),\end{equation} where $\mathcal{L}$ is an operator acting on the space of $l$-by-$m$ matrices $\mathfrak{R}$. In the case of a direct Monte-Carlo approach for the resolution of the stochastic PDE \cref{eqn:SPDE}, $\mathcal{L}$ is thought of as the discretization of the differential operator $\mathscr{L}$ using $l$ spatial nodes and the $m$ Monte-Carlo realizations or parameter values being considered. Accurate quantification of the statistical/parametric properties of the original solution $\bm u$ often requires solving such a system \cref{eqn:dRstar} with both a high spatial resolution, $l$, and a high number of realizations, $m$. Hence, solving \cref{eqn:dRstar} directly with a Monte-Carlo approach becomes quickly intractable for realistic, real-time applications such as ocean and weather predictions \cite{palmer_Rep_Prog_phys2000,lermusiaux_JCP2006} or real-time control \cite{Lall_etal_IJRNC2002,Lin_McLaughlin_SIAMJSC2014}. A method to address this challenge is to assume the existence of an approximation $\bm u_{\textrm{DO}}$ of the solution $\bm u$ onto a finite number $r$ of spatial modes, $\bm u_i(t,\bm x)$, and stochastic coefficients, $\zeta_i(t,\omega)$ (here assumed to be both time-dependent \cite{lermusiaux_JCP2006,sapsis2009dynamically}), \begin{equation} \label{eqn:KLdecomposition} \bm u(t,\bm x;\omega)\simeq {\bm u}_\textrm{DO}=\sum_{i=1}^r \zeta_i(t,\omega)\, \bm u_i(t,\bm x),\end{equation} and to look for a dynamical system that would most accurately govern the evolution of these dominant modes and coefficients. The optimal approximation (in the sense that the $L^2$ error $\mathbb{E}[||\bm u-\bm u_{\textrm{DO}}||^2]^{1/2}$ is minimized) is achieved by the Karhunen-Lo\`{e}ve (KL) decomposition \cite{Loeve1978,holmes_etal_PODturbulence_Cambridge1998}, whose first $r$ modes yield an optimal orthonormal basis $(\bm u_i)$. Many methods, such as polynomial chaos expansions \cite{xiu_karniadakis_PCE_SIAM2002}, Fourier decomposition \cite{WillcoxMegretski2005}, or Proper Orthogonal Decomposition \cite{holmes_etal_PODturbulence_Cambridge1998}, rely on the choice of a predefined, time-independent orthonormal basis either for the modes, $(\bm u_i)$, or the coefficients, $(\zeta_i)$, and obtain equations for the respective unknown coefficients or modes by Galerkin projection \cite{PetterssonIaccarinoNordstroem2015}. However, the use of modes and coefficients that are simultaneously dynamic has been shown to be efficient \cite{lermusiaux_JCP2006,lermusiaux_PhysD2007}.
Dynamically Orthogonal (DO) field equations \cite{sapsis2009dynamically,sapsis2012dynamical} were thus derived to evolve this decomposition adaptively for a general differential operator $\mathscr{L}$, and made it possible to obtain efficient simulations of stochastic Navier-Stokes equations \cite{ueckermann_et_al_JCP2013}. At the discrete level, the decomposition \cref{eqn:KLdecomposition} is written $\mathfrak{R}\simeq R=UZ^T$, where $R$ is a rank $r$ approximation of the full rank matrix $\mathfrak{R}$, decomposed as the product of an $l$-by-$r$ matrix $U$ containing the discretization of the basis functions, $(\bm u_i)$, and of an $m$-by-$r$ matrix $Z$ containing the realizations of the stochastic coefficients, $(\zeta_i)$. It is well known that such an approximation is optimal (in the Frobenius norm) when $R=UZ^T$ is obtained by truncating the Singular Value Decomposition (SVD), \emph{i.e.}\;by selecting $U$ to be the singular vectors associated with the $r$ largest singular values of $\mathfrak{R}$ and setting $Z=\mathfrak{R}^TU$ \cite{horn_johnson_1985,horn_johnson_1991}. In 2007, Koch and Lubich \cite{koch2007dynamical} proposed a method, inspired by the Dirac--Frenkel variational principle in quantum physics, to dynamically evolve a rank $r$ matrix $R=UZ^T$ that approximates the full dynamical system \cref{eqn:dRstar}. The main principle of the method lies in the intuition that one can optimally update the low-rank approximation $R$ by projecting $\mathcal{L}(t,R)$ onto the tangent space of the manifold constituted by low rank matrices. Recently, Musharbash \cite{musharbash2015error} noticed the parallel with the DO method, and applied the results obtained in \cite{koch2007dynamical} to analyze the error committed by the DO approximation for a stochastic heat equation. In fact, in the same way the KL expansion is the continuous analogue of the SVD, the discretization of the DO decomposition \cite{sapsis2009dynamically} is strictly equivalent to the dynamical low rank approximation of Koch and Lubich \cite{koch2007dynamical} when the discretization reduces to simulating the matrix dynamical system \cref{eqn:dRstar} of $m$ realizations spatially resolved with $l$ nodes. Simultaneously, new approaches have emerged since the 1990s in optimization on matrix sets \cite{edelman1998,absil2009optimization}. The application of Riemannian geometry to manifolds of matrices has allowed the development of new optimization algorithms that evolve orthogonality constraints geometrically, rather than using more classical techniques such as Lagrange multiplier methods \cite{edelman1998}. Matrix dynamical systems that continuously perform matrix operations, such as inversion, eigen- or SVD-decompositions, steepest descents, and gradient optimization, have thus been proposed \cite{brockett1988,dehaene1995continuous,Smith1991}. These continuous-time systems were extended and applied to adaptive uncertainty predictions, learning of dominant subspaces, and data assimilation \cite{Lermusiaux1997,lermusiaux_DAO1999}. The purpose of this article is to extend the analysis and the understanding of the DO method in the matrix framework, as initiated by \cite{koch2007dynamical} and in the above works, by furthering its relation to the Singular Value Decomposition and its geometric interpretation as a constrained dynamics on the manifold $\mathscr{M}$ of fixed rank matrices. In the vein of \cite{edelman1998,absil2009optimization,mishra2014}, this article utilizes the point of view of differential geometry.
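As a concrete illustration of this truncation step, the following minimal Python sketch (the matrix sizes, the rank $r=5$ and the random test matrix are arbitrary choices made only for this example) forms $U$ from the $r$ leading left singular vectors, sets $Z=\mathfrak{R}^TU$, and checks that the Frobenius error of $UZ^T$ equals the optimal value given by the discarded singular values.
\begin{verbatim}
import numpy as np

def truncated_svd(Rfull, r):
    # Best rank-r approximation of Rfull in the Frobenius norm:
    # U = r leading left singular vectors, Z = Rfull^T U, R = U Z^T.
    P, s, Qt = np.linalg.svd(Rfull, full_matrices=False)
    U = P[:, :r]
    Z = Rfull.T @ U
    return U, Z

rng = np.random.default_rng(0)
Rfull = rng.standard_normal((200, 50))   # arbitrary l = 200, m = 50
U, Z = truncated_svd(Rfull, r=5)
err = np.linalg.norm(Rfull - U @ Z.T)
s = np.linalg.svd(Rfull, compute_uv=False)
print(err, np.sqrt(np.sum(s[5:]**2)))    # the two values coincide
\end{verbatim}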
To provide a visual intuition, a 3D projection of two 2-dimensional subsurfaces of the manifold $\mathscr{M}$ of rank one 2-by-2 matrices is visible on \cref{fig:plotManifold}. This figure has been obtained by using the parameterization \[ R(\rho,\theta,\phi)=\rho\begin{pmatrix} \sin(\theta)\sin(\phi) & \sin(\theta)\cos(\phi)\\ \cos(\theta)\sin(\phi) & \cos(\theta)\cos(\phi)\end{pmatrix}, \; \rho>0, \theta\in [0,2\pi],\,\phi\in[0,2\pi],\] on $\mathscr{M}$ and projecting orthogonally two subsurfaces by plotting the first three elements $R_{11}, R_{12}$ and $R_{21}$. Since the multiplication of singular values by a non-zero constant does not affect the rank of a matrix, $\mathscr{M}\subset \mathcal{M}_{2,2}$ is a cone, which is consistent with the increasing of curvature visible on \cref{fig:plotManifoldSpiral} near the origin. More generally, $\mathscr{M}$ is the union of $r$-dimensional affine subspaces of $\mathcal{M}_{l,m}$ supported by the manifold of strictly lower rank matrices. It will actually be proven in \cref{sec:curvatureMatrix} that the curvature of $\mathscr{M}$ is inversely proportional to the lowest singular value, which diverges as matrices approach a rank strictly less than $r$. Hence $\mathscr{M}$ can be understood either as a collection of cones (\cref{fig:plotManifoldCone}) or as a multidimensional spiral (\cref{fig:plotManifoldSpiral}). \begin{figure} \caption{$R(\rho,\pi\rho,\phi)$} \label{fig:plotManifoldSpiral} \caption{$R(\rho,\phi,\rho-\phi)$} \label{fig:plotManifoldCone} \caption{\small Two subsurfaces of the rank-1 manifold $\mathscr{M} \label{fig:plotManifold} \end{figure} Geometrically, a dynamical system \cref{eqn:dRstar} can be seen as a time dependent vector field $\mathcal{L}$ that assigns the velocity $\mathcal{L}(t,\mathfrak{R})$ at time $t$ at each point $\mathfrak{R}$ of the ambient space $\mathcal{M}_{l,m}$ of $l$-by-$m$ matrices (\cref{fig:vectorField}). Similarly, any rank $r$ model order reduction can be viewed as a vector field $L$ that must be everywhere tangent to the manifold $\mathscr{M}$ of rank $r$ matrices. The corresponding dynamical system is \begin{equation} \label{eqn:dRmanifold} \dot{R}=L(t,R)\,\in \TT{R}, \end{equation} where $\TT{R}$ denotes the tangent space of $\mathscr{M}$ at $R$. \begin{figure} \caption{\small Dynamical systems as vector fields $\mathcal{L} \label{fig:vectorField} \caption{\small Geometric concepts of interest: orthogonal projection $X=\Pi_\TT{R} \label{fig:tangentProj} \caption{\small Vector fields on the fixed rank manifold $\mathscr{M} \label{fig:ManifoldFields} \end{figure} From this point of view, ``combing the hair'' formed by the original vector field $\mathcal{L}$ on the manifold $\mathscr{M}$, by setting $L(t,R)$ to the time-dependent orthogonal projection of each vector $\mathfrak{X}=\mathcal{L}(t,R)$ onto each tangent space $\TT{R}$ is nothing less than the DO approximation (\cref{fig:tangentProj}). As such, the DO-reduced dynamical system is optimal in the sense that the resulting vector field $L$ is the best dynamic tangent approximation of $\mathcal{L}$ at every point $R\in\mathscr{M}$. Analyzing the error committed by the DO approximation can be done by understanding how the best rank $r$ approximation of the solution $\mathfrak{R}$ evolves \cite{koch2007dynamical,musharbash2015error}. This requires the time derivative of the truncated SVD as a function of $\dot{\mathfrak{R}}$. 
Nevertheless, to the best of our knowledge, no explicit expression of the dynamical system satisfied by the best low rank approximation has been obtained in the literature. To address this gap, this article brings forward the following novelties. First, a more exhaustive study of the extrinsic geometry of the fixed rank manifold $\mathscr{M}$ is provided. This includes the characterization and derivation of principal curvatures and of their direct relation to singular values. Second, the geometric interpretation of the truncated SVD as an orthogonal projection onto $\mathscr{M}$ is utilized, so as to apply existing results relating the differential of this projection to the curvature of the manifold. It will be demonstrated in particular (\cref{thm:diffProjection}) that the truncated SVD is differentiable so long as the singular values of order $r$ and $r+1$ remain distinct, even if multiple singular values of lower order occur. As a result, an explicit dynamical system is obtained for the evolution of the best low rank approximation of the solution $\mathfrak{R}(t)$ of \cref{eqn:dRstar}. This derivation finally also allows a sharpening of the initial error analysis of \cite{koch2007dynamical}. The article is organized as follows: the Riemannian geometric setting is specified in \cref{sec:riemanniansetup}. Parameterizations of $\mathscr{M}$ and of its tangent spaces are first recalled. Novel geometric characteristics, such as the covariant derivative and the geodesic equations, are then derived. In \cref{sec:curvature}, classical results on the differentiability of the orthogonal projection onto smooth embedded sub-manifolds \cite{gilbarg2015elliptic} are reviewed and reformulated in a framework that avoids the use of tensor notations. Curvatures with respect to a normal direction are defined, and their relation to the differential of the projection map is stated in \cref{thm:distanceDiff}. These results are applied in \cref{sec:curvatureMatrix}, where the curvature of the fixed rank manifold $\mathscr{M}$ is characterized, and the new formula for the differential of the truncated SVD is provided. The Dynamically Orthogonal approximation (DO) is studied in \cref{sec:DO}. Two justifications of the ``reasonable'' character of this approximation are given. First, it is shown that this reduced order model corresponds to the dynamical system that applies the SVD truncation at all instants. The error analysis performed by \cite{koch2007dynamical} is then extended and improved using the knowledge of the differential of the truncated SVD. The error committed by the DO approximation is shown to be controlled over large integration times provided the original solution remains close to the low rank manifold $\mathscr{M}$, in the sense that it remains far from the skeleton of $\mathscr{M}$. This geometric condition can be expressed as an explicit dependence of the error on the gap between the singular values of order $r$ and $r+1$. Lastly, Riemannian matrix optimization on the fixed rank manifold equipped with the extrinsic geometry is considered in \cref{sec:numerical} as an alternative approach for tracking the truncated SVD. A novel dynamical system is proposed to compute the best low-rank approximation, which is shown to be convergent for almost any initial data.
\subsection*{Notations} Important notations used in this paper are summed up below : \begin{table}[H] \centering \scalebox{0.8}{ \begin{tabular}{ll} $\mathcal{M}_{l,m}$ & Space of $l$-by-$m$ real matrices \\ $\mathcal{M}_{m,r}^*$ & Space of $m$-by-$r$ matrices that have full rank\\ $\textrm{rank}(R)$ & Rank of a matrix $R\in\mathcal{M}_{l,m}$\\ $\mathscr{M}=\{R\in \mathcal{M}_{l,m}|\mathrm{rank}(R)=r\}$ & Fixed rank matrix manifold\\ $\mathcal{O}_r=\{P\in \mathcal{M}_{r,r}\;|\;P^TP=I\}$ & Group of $r$-by-$r$ orthogonal matrices \\ $\mathrm{St}_{l,r}=\{U\in\mathcal{M}_{l,r}\;|\;U^TU=I\}$ &Stiefel Manifold\\ $R=UZ^T$ & Point $R\in\mathscr{M}$ with $U\in \mathrm{St}_{l,r}$ and $Z\in\mathcal{M}_{m,r}^*$\\ $\TT{R}$ & Tangent space at $R\in\mathscr{M}$\\ $X\in\TT{R}$ & Tangent vector $X$ at $R=UZ^T$\\ $\mathcal{H}_{(U,Z)}$ & Horizontal space at $R=UZ^T$\\ $(X_U,X_Z)\in \mathcal{H}_{(U,Z)}$ & $X=X_UZ^T+UX_Z^T\in \TT{R}$ with \\ & $X_U\in\mathcal{M}_{l,r},\,X_Z\in\mathcal{M}_{m,r}$ and $U^TX_U=0$\\ $\Pi_{\TT{R}}$ & Orthogonal projection onto the plane $\TT{R}$\\ $\textrm{Sk}(\mathscr{M})$ & Skeleton of $\mathscr{M}$ \\ $\Pi_\mathscr{M}$ & Orthogonal projection onto $\mathscr{M}$ (defined on $\mathcal{M}_{l,m}\backslash \textrm{Sk}(\mathscr{M})$) \\ $I$ & Identity mapping \\ $A^T$ & Transpose of a square matrix $A$ \\ $\langle A,B \rangle=\Tr(A^TB)$ & Frobenius matrix scalar product\\ $||A||=\Tr(A^TA)^{1/2}$ & Frobenius norm\\ $\sigma_1(A)\ensuremath \geq\dots\ensuremath \geq\sigma_{\textrm{rank}(A)}(A)$ & Non zeros singular values of $A\in\mathcal{M}_{l,m}$\\ $\dot{R}=\ensuremath \mathrm{d} R/\ensuremath \mathrm{d} t$ & Time derivative of a trajectory $R(t)$\\ $\ensuremath \mathrm{d}D_X f(R)$ & Differential of a function $f$ in direction $X$\\ $\ensuremath \mathrm{d}D \Pi_{\TT{R}}(X)\cdot Y$ & Differential of the projection operator $\Pi_{\TT{R}}$ applied to $Y$\\ \end{tabular}} \end{table} The differential of a smooth function $f$ at the point $R\in \mathcal{M}_{l,m}$ (respectively $R\in\mathscr{M}$) in the direction $X\in\mathcal{M}_{l,m}$ (respectively $X\in \TT{R}$) is denoted $\ensuremath \mathrm{d}D_X f(R)$: \begin{equation} \label{eqn:Diff_X_f} \ensuremath \mathrm{d}D_X f(R)=\left.\frac{\ensuremath \mathrm{d} }{\ensuremath \mathrm{d} t}f(R(t))\right|_{t=0}=\lim\limits_{\ensuremath \mathrm{d}elta t\rightarrow 0}\frac{f(R(t+\ensuremath \mathrm{d}elta t))-f(R(t))}{\ensuremath \mathrm{d}elta t},\end{equation} where $R(t)$ is a curve of $\mathcal{M}_{l,m}$ (respectively $\mathscr{M}$) such that $R(0)=R$ and $\dot{R}(0)=X$. The differential of the orthogonal projection operator $R\mapsto \Pi_\TT{R}$ at $R\in\mathscr{M}$, in the direction $X\in \TT{R}$ and applied to $Y\in \mathcal{M}_{l,m}$ is denoted $\ensuremath \mathrm{d}D \Pi_{\TT{R}}(X)\cdot Y$: \begin{equation} \label{eqn:Diff_PiT_XY} \ensuremath \mathrm{d}D \Pi_{\TT{R}}(X)\cdot Y = \left[\left.\frac{\ensuremath \mathrm{d} }{\ensuremath \mathrm{d} t}\Pi_{\TT{R(t)}}\right|_{t=0}\right](Y)=\left[\lim\limits_{\ensuremath \mathrm{d}elta t\rightarrow 0}\frac{\Pi_\TT{R(t+\ensuremath \mathrm{d}elta t)}-\Pi_\TT{R(t)}}{\ensuremath \mathrm{d}elta t}\right](Y),\end{equation} where $R(t)$ is a curve drawn on $\mathscr{M}$ such that $R(0)=R$ and $\dot{R}(0)=X$. 
\section{Riemannian set up: parameterizations, tangent-space, geodesics} \label{sec:riemanniansetup} This section establishes the geometric framework of low-rank approximation, by reviewing and unifying results sparsely available in \cite{koch2007dynamical,sapsis2009dynamically,musharbash2015error}, and by providing new expressions for classical geometric characteristics, namely geodesics and covariant derivative. It is not assumed that the reader is accustomed to differential geometry: necessary definitions and properties are recalled. Several concepts of this section are illustrated on \cref{fig:tangentProj}. \begin{definition} \label{def:manifold} The manifold of $l$-by-$m$ matrices of rank $r$ is denoted by $\mathscr{M}$: \[ \mathscr{M}=\{R\in \mathcal{M}_{l,m}|\mathrm{rank}(R)=r\}.\] \end{definition} \begin{remark} The fact that $\mathscr{M}$ is a manifold is a consequence of the constant rank theorem (\cite{spivak1973comprehensive}, Th.10, chap.2, vol. 1) whose assumptions (the map $(U,Z)\mapsto UZ^T$ from $\mathrm{St}_{l,r}\times\mathcal{M}_{m,r}^*$ to $\mathscr{M}$ is a submersion with differential of constant rank) translate in the requirement that the candidate tangent spaces have constant dimension, as found later in \cref{prop:tangentSpace}. Detailed proofs are available in \cite{spivak1973comprehensive} (exercise 34, chap. 2, vol. 1) or \cite{vandereycken2013low} (Prop. 2.1). \end{remark} The following lemma \cite{Piziak1999} fixes the parametrization of $\mathscr{M}$ by conveniently representing its elements $R$ in terms of mode and coefficient matrices, $U$ and $Z$, respectively. \begin{lemma} \label{lemma:rank} Any matrix $R\in\mathscr{M}$ can be decomposed as $R=UZ^T$ where $U\in \mathrm{St}_{l,r}$ and $Z\in \mathcal{M}_{m,r}^*$,\, \emph{i.e.}\;$U^TU=I \textrm{ and }\mathrm{rank}(Z)=r$, respectively. Furthermore, this decomposition is unique modulo a rotation matrix $P\in O_r$, namely if $U_1,U_2\in \mathcal{M}_{l,r}$, $Z_1,Z_2\in\mathcal{M}_{m,r}$, and $U_1^TU_1=U_2^TU_2=I$, then \begin{equation} \label{eqn:invariance} U_1Z_1^T=U_2Z_2^T\Leftrightarrow \exists P\in \mathrm{O}_r, U _1=U_2P\textrm{ and }Z_1=Z_2P.\end{equation} \end{lemma} In the following, the statement ``let $UZ^T\in\mathscr{M}$'' always implicitly assumes $U\in\mathcal{M}_{l,r}$, $Z\in\mathcal{M}_{m,r}$, $U^TU=I$, and $\mathrm{rank}(Z)=r$. Other parameterizations of $\mathscr{M}$ are possible and give equivalent results \cite{mishra2014}. The tangent space $\TT{UZ^T}$ at a point $R=UZ^T$ is the set of all possible vectors tangent to smooth curves $R(t)=U(t)Z(t)^T$ drawn on the manifold $\mathscr{M}$. Therefore, such tangent vector at $R(0)=UZ^T$ is of the form $\dot{R}=\dot{U}Z^T+U\dot{Z}^T$, where $\dot{U}$ and $\dot{Z}$ are the time derivatives of the matrices $U(t)$ and $Z(t)$ at time $t=0$. In the following, the notations $X_U$, $X_Z$, and $X=X_UZ^T+UX_Z^T$ will be used to denote the tangent directions $\dot{U}$, $\dot{Z}$, and $\dot{R}$ for the respective matrices $U$, $Z$ and $R$. The orthogonality condition that $U^TU=I$ must hold for all times implies that $X_U$ must satisfy $\dot{U}^TU+U^T\dot{U}=X_U^TU+U^TX_U=0$. Nevertheless, this is not sufficient to parameterize uniquely tangent vectors $X$ from the displacements $X_U$ and $X_Z$ for $U$ and $Z$: two different couples $(X_U,X_Z)\neq (X_U',X_Z')$ satisfying $X_U^TU+U^TX_U=X_U^{'T}U+U^TX_U'=0$ may exist for a single tangent vector $X=X_UZ^T+UX_Z^T=X_U'Z^T+UX_Z^{'T}$. 
Indeed, rotations $U\gets UP$ of the columns of the mode matrix $U$ do not change the subspace $\textrm{span}(\bm u_i)$ supporting the modal decomposition \cref{eqn:KLdecomposition}, and hence can be captured by updating the values of the coefficients $(\zeta_i)$ contained in the matrix $Z$ with the same rotation $Z\gets ZP$. This translates infinitesimally, in the tangent space, into the invariance of tangent vectors $X=X_UZ^T+UX_Z^T$ under the transformations $X_U\gets X_U+U\Omega$ and $X_Z\gets X_Z+Z\Omega$ for any skew-symmetric matrix $\Omega=-\Omega^T$. This can easily be seen by inserting the transformations into the expression for $X$, or by differentiating the relation $UZ^T=(UP)(ZP)^T$ with $\dot{P}=\Omega P$. A unique parameterization of the tangent space can be obtained by fixing this infinitesimal rotation $\Omega$, for example by adding the condition that the reduced subspace spanned by the columns of $U$ must dynamically evolve orthogonally to itself, in other words by requiring $U^TX_U=0$. This gauge condition has thus been called the ``Dynamically Orthogonal'' condition by \cite{sapsis2009dynamically} and is at the origin of the name ``Dynamically Orthogonal approximation'', as further investigated in \cref{sec:DO}. \begin{proposition} \label{prop:tangentSpace} The tangent space of $\mathscr{M}$ at $R=UZ^T\in\mathscr{M}$ is the set \begin{equation} \label{eqn:tangentSpace} \TT{UZ^T}=\{X_UZ^T+UX_Z^T\;|\; X_U\in\mathcal{M}_{l,r}, \, X_Z\in\mathcal{M}_{m,r}, \, U^TX_U+X_U^TU=0\}.\end{equation} $\TT{UZ^T}$ is uniquely parameterized by the \emph{horizontal space} \begin{equation} \label{eqn:horizSpace} \mathcal{H}_{(U,Z)}=\{ (X_U,X_Z)\in\mathcal{M}_{l,r}\times \mathcal{M}_{m,r}\;|\;U^TX_U=0\}, \end{equation} that is, for any tangent vector $X\in \TT{UZ^T}$, there exists a unique $(X_U,X_Z)\in \mathcal{H}_{(U,Z)}$ such that $X=X_UZ^T+UX_Z^T$. As a consequence, $\mathscr{M}$ is a smooth manifold of dimension $\mathrm{dim}(\mathcal{H}_{(U,Z)})=(l+m)r-r^2$. \end{proposition} \begin{proof} (see also \cite{koch2007dynamical,absil2009optimization}) One can always write a tangent vector $X$ as \[\begin{aligned}X &=U\dot{Z}^T+\dot{U}Z^T\\ &=U(\dot{Z}^T+U^T\dot{U}Z^T) +((I-UU^T)\dot{U})Z^T=X_UZ^T+UX_Z^T,\end{aligned}\] for some $\dot{U}\in\mathcal{M}_{l,r}$ satisfying $\dot{U}^TU+U^T\dot{U}=0$ and $\dot{Z}\in\mathcal{M}_{m,r}$, with $X_U=(I-UU^T)\dot{U}$ satisfying $X_U^TU=0$ and $X_Z^T=\dot{Z}^T+U^T\dot{U}Z^T$. This implies $\TT{UZ^T}=\{X_UZ^T+UX_Z^T|(X_U,X_Z)\in\mathcal{H}_{(U,Z)}\}$. Furthermore, if $X=UX_Z^T+X_UZ^T$ with $U^TX_U=0$, then the relations $X_Z=X^TU$ and $X_U=(I-UU^T)XZ(Z^TZ)^{-1}$ show that $(X_U,X_Z)\in\mathcal{H}_{(U,Z)}$ is defined uniquely from $X$. \end{proof} \begin{remark} The denomination ``\emph{horizontal space}'' for the set $\mathcal{H}_{(U,Z)}$ \cref{eqn:horizSpace} refers to the definition of a non-ambiguous representation of the tangent space $\TT{UZ^T}$ \cref{eqn:tangentSpace}. This notion is developed rigorously in the theory of quotient manifolds, e.g.\;\cite{mishra2014,edelman1998}. \end{remark} In the following, the notation $X=(X_U,X_Z)$ is used equivalently to denote a tangent vector $X=X_UZ^T+UX_Z^T\in \TT{UZ^T}$, where $U^TX_U=0$ is implicitly assumed. A metric is needed to define how distances are measured on the manifold, by prescribing a smoothly varying scalar product on each tangent space.
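Before introducing this metric, the unique horizontal representation of \cref{prop:tangentSpace} can be illustrated numerically. The short \texttt{numpy} sketch below (dimensions and data are arbitrary assumptions, for illustration only) builds a tangent vector $X=\dot{U}Z^T+U\dot{Z}^T$ and recovers its horizontal coordinates through the relations $X_Z=X^TU$ and $X_U=(I-UU^T)XZ(Z^TZ)^{-1}$ used in the proof.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
l, m, r = 7, 5, 2
U, _ = np.linalg.qr(rng.standard_normal((l, r)))   # U^T U = I
Z = rng.standard_normal((m, r))                    # rank(Z) = r (generically)

# An arbitrary tangent vector X = Udot Z^T + U Zdot^T.
Udot = rng.standard_normal((l, r))
Zdot = rng.standard_normal((m, r))
X = Udot @ Z.T + U @ Zdot.T

# Unique horizontal representation (X_U, X_Z) with U^T X_U = 0.
X_Z = X.T @ U
X_U = (np.eye(l) - U @ U.T) @ X @ Z @ np.linalg.inv(Z.T @ Z)

print(np.linalg.norm(U.T @ X_U))                   # ~ 0 (gauge condition)
print(np.linalg.norm(X_U @ Z.T + U @ X_Z.T - X))   # ~ 0 (same tangent vector)
\end{verbatim}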
In \cite{mishra2014} and others in matrix optimization e.g.\;\cite{absil2012projection,vandereycken2013low,chechik2011online}, one uses the metric induced by the parametrization of the manifold $\mathscr{M}$: the norm of a tangent vector $(X_U,X_Z)\in\mathcal{H}_{(U,Z)}$ is defined to be $||(X_U,X_Z)||^2=||X_U||_{\mathrm{St}_{l,r}}^2+||X_Z||_{\mathcal{M}_{m,r}}^2$ where $||\;||_{\mathrm{St}_{l,r}}$ is a canonical norm on the Stiefel Manifold (see \cite{edelman1998}) and $||\; ||_{\mathcal{M}_{m,r}}$ is the Frobenius norm on $\mathcal{M}_{m,r}$. In this work, one is rather interested in the metric inherited from the ambient full space $\mathcal{M}_{l,m}$, since it is the metric used to estimate the distance from a matrix $\mathfrak{R}\in\mathcal{M}_{l,m}$ to its best $r$-rank approximation, namely the error committed by the truncated SVD. \begin{definition} At each point $UZ^T\in\mathscr{M}$, the metric $g$ on $\mathscr{M}$ is the scalar product acting on the tangent space $\TT{UZ^T}$ that is inherited from the scalar product of $\mathcal{M}_{l,m}$~: \begin{equation} \label{eqn:metric} \begin{aligned}g((X_{U},X_{Z}),(Y_{U},Y_{Z})) & =\Tr((X_{U}Z^T+UX_Z^{T})^T(Y_{U}Z^T+UY_Z^{T})) \\ &=\Tr(Z^TZX_U^{T}Y_U+X_Z^{T}Y_Z).\end{aligned}\end{equation} \end{definition} A main object of this paper is the orthogonal projection $\Pi_\TT{R}$ onto the tangent space $\TT{R}$ at a point $R$ on $\mathscr{M}$. This map projects displacements $\mathfrak{X}=\dot{\mathfrak{R}}\in\mathcal{M}_{l,m}$ of a matrix $\mathfrak{R}$ of the ambient space $\mathcal{M}_{l,m}$ to the tangent directions $X=\Pi_\TT{R}\mathfrak{X}\in \TT{R}$. \begin{proposition} At every point $UZ^T\in\mathscr{M}$, the orthogonal projection $\Pi_{\TT{UZ^T}}$ onto the tangent space $\TT{UZ^T}$ is the application \begingroup \renewcommand*{\arraystretch}{1.2} \begin{equation} \label{eqn:projectionMap}\begin{array}{ccccc} \Pi_\TT{UZ^T} &: & \mathcal{M}_{l,m} & \rightarrow & \mathcal{H}_{(U,Z)} \\ & & \mathfrak{X} & \mapsto & ((I-UU^T)\mathfrak{X} Z(Z^TZ)^{-1},\mathfrak{X}^TU).\end{array}\end{equation} \endgroup \end{proposition} \begin{proof} (see also \cite{koch2007dynamical}) $\Pi_\TT{R} \mathfrak{X}$ is obtained as the unique minimizer of the convex functional $J(X_U,X_Z)=\frac{1}{2}||\mathfrak{X}-X_UZ^T-UX_Z^T||^2$ on the space $\mathcal{H}_{(U,Z)}$. The minimizer $(X_U,X_Z)$ is characterized by the vanishing of the gradient of $J$: \[ \forall \ensuremath \mathrm{d}elta\in\mathcal{M}_{l,r},\; \ensuremath \mathrm{d}elta^TU=0\mathbb{R}ightarrow \frac{\partial J}{\partial X_U}\cdot \ensuremath \mathrm{d}elta=-\langle \mathfrak{X}-X_UZ^T-UX_Z^T,\ensuremath \mathrm{d}elta Z^T \rangle=0,\] \[ \forall \ensuremath \mathrm{d}elta\in\mathcal{M}_{m,r}, \;\frac{\partial J}{\partial X_Z}\cdot \ensuremath \mathrm{d}elta=-\langle \mathfrak{X}-X_UZ^T-UX_Z^T,U \ensuremath \mathrm{d}elta^T \rangle=0, \] yielding respectively $X_U=(I-UU^T)\mathfrak{X} Z(Z^TZ)^{-1}$ and $X_Z=\mathfrak{X}^TU$. \end{proof} The orthogonal complement of the tangent space $\TT{R}$ is obtained from the identity $(I-\Pi_\TT{UZ^T})\cdot \mathfrak{X}=(I-UU^T)\mathfrak{X}(I-Z(Z^TZ)^{-1}Z^T)$: \begin{definition} The normal space $\mathbb{N}N{R}$ of $\mathscr{M}$ at $R=UZ^T$ is defined as the orthogonal complement to the tangent space $\TT{R}$. 
For the fixed rank manifold $\mathscr{M}$: \begin{equation} \label{eqn:normalSpaceDef}\begin{aligned}\mathbb{N}N{R}& =\{ N\in\mathcal{M}_{l,m}| (I-UU^T)N(I-Z(Z^TZ)^{-1}Z^T)=N\}\\ &=\{ N\in\mathcal{M}_{l,m}\;|\;U^TN=0\textrm{ and }NZ=0\}.\end{aligned}\end{equation} \end{definition} In model order reduction, a matrix $R=UZ^T\in\mathscr{M}$ is usually a low rank-$r$ approximation of a full rank matrix $\mathfrak{R}\in\mathcal{M}_{l,m}$. The following proposition shows that the normal space at $R$, $\mathbb{N}N{R}$, can be understood as the set of all possible completions of the approximation \cref{eqn:KLdecomposition}: \begin{proposition} \label{prop:normalSpace} Let $N$ be a given normal vector $N\in\mathbb{N}N{R}$ at $R=UZ^T\in\mathscr{M}$ and denote $k=\mathrm{rank}(N)$. Then there exists an orthonormal basis of vectors $(u_i)_{1\ensuremath \leqi\ensuremath \leql}$ in $\mathbb{R}^l$, an orthonormal basis $(v_i)_{1\ensuremath \leqi\ensuremath \leqm}$ of $\mathbb{R}^m$, and $r+k$ non zero singular values $(\sigma_i)_{1\ensuremath \leqi\ensuremath \leqr+k}$ such that \begin{equation} \label{eqn:singularVectors} UZ^T=\sum_{i=1}^r \sigma_i u_ iv_i^T \textrm{ and } N=\sum_{i=1}^k \sigma_{r+i}u_{r+i}v_{r+i}^T.\end{equation} \end{proposition} \begin{proof} Consider $N=U_N\Theta V_N^T$ the SVD decomposition of $N$ \cite{horn_johnson_1991}. Since $U^TN=0$, $r$ columns of $U_N$ are spanned by $U$ and associated with zero singular values of $N$, therefore $u_i$ is obtained from the columns of $U$ for $1\ensuremath \leqi\ensuremath \leqr$ and from the left singular vectors of $N$ associated with non zero singular values for $r+1\ensuremath \leqi\ensuremath \leqr+k$, $k \ge 0$. The vectors $v_i$ and $v_{r+j}$ are obtained similarly. The singular values $\sigma_i$ are obtained by reunion of the respective $r$ and $k$ non-zeros singular values of $Z$ and $N$. \end{proof} In differential geometry, one distinguishes the geometric properties that are \emph{intrinsic}, \emph{i.e.}\;that depend only on the metric $g$ defined on the manifold, from the ones that are \emph{extrinsic}, \emph{i.e.}\;that depend on the ambient space in which the manifold $\mathscr{M}$ is defined. The following proposition recalls the link between the extrinsic projection $\Pi_\TT{R}$ and the intrinsic notion of derivation onto a manifold. For embedded manifolds, \emph{i.e.}\;defined as subsets of an ambient space, the covariant derivative at $R\in\mathscr{M}$ is obtained by projecting the usual derivative onto the tangent space $\TT{R}$, and the Christoffel symbol corresponds to the normal component that has been removed \cite{edelman1998}. \begin{proposition} \label{prop:connection}Let $X$ and $Y$ be two tangent vector fields defined on a neighborhood of $R\in \mathscr{M}$. The covariant derivative $\nabla_X Y$ with respect to the metric inherited from the ambient space is the projection of $\ensuremath \mathrm{d}D_X Y$ onto the tangent space $\TT{R}$: \[ \nabla_X Y=\Pi_\TT{R} (\ensuremath \mathrm{d}D_X Y).\] The Christoffel symbol $\Gamma(X,Y)$ is defined by the relationship $\nabla_X Y=\ensuremath \mathrm{d}D_X Y+\Gamma(X,Y)$ and is characterized by the formula \[ \Gamma(X,Y)=-(I-\Pi_\TT{R})\ensuremath \mathrm{d}D_X Y=-\ensuremath \mathrm{d}D \Pi_\TT{R}(X)\cdot Y.\] The Christoffel symbol is symmetric: $\Gamma(X,Y)=\Gamma(Y,X)$. \end{proposition} \begin{proof} See \cite{spivak1973comprehensive}, Vol.3, Ch.1. 
\end{proof} \begin{remark} An important feature of this definition is that the Christoffel symbol $\Gamma(X,Y)=-\ensuremath \mathrm{d}D\Pi_\TT{R}(X)\cdot Y$ depends only on the projection map $\Pi_\TT{R}$ at the point $R$ and not on neighboring values of the tangent vectors $X,Y$, which is \emph{a priori} not clear from the equality $\Gamma(X,Y)=-(I-\Pi_\TT{R})\ensuremath \mathrm{d}D_X Y$. The Christoffel symbol $\Gamma(X,Y)$ is computed explicitly for the matrix manifold $\mathscr{M}$ in \cref{rmk:christoffel}. \end{remark} The covariant derivative allows one to obtain the geodesic equations of the manifold $\mathscr{M}$. These geodesics (\cref{fig:tangentProj}) are the shortest paths among all possible smooth curves drawn on $\mathscr{M}$ joining two sufficiently close points. Mathematically, they are curves $R(t)=U(t)Z(t)^T$ characterized by a velocity $\dot{R}=\dot{U}Z^T+U\dot{Z}^T$ that is stationary under the covariant derivative \cite{spivak1973comprehensive}, \emph{i.e.}\;$\nabla_{\dot{R}} \dot{R}=0$. Since $\ensuremath \mathrm{d}D_{\dot{R}}\dot{R}=\ddot{R}$, this leads to \begin{equation} \label{eqn:geodesicDef}\nabla_{\dot{R}} \dot{R}=\ddot{R}-\ensuremath \mathrm{d}D \Pi_\TT{R}(\dot{R})\cdot \dot{R}=0. \end{equation} \begin{theorem} Consider $X=(X_U,X_Z)\in \mathcal{H}_{(U,Z)}$ and $Y=(Y_U,Y_Z)\in \mathcal{H}_{(U,Z)}$ two tangent vector fields. The covariant derivative $\nabla_{X}Y$ on $\mathscr{M}$ is given by \begin{equation}\label{eqn:connection} \nabla_X Y=(D_X Y_U+UX_U^TY_U+(X_UY_Z^T+Y_UX_Z^T)Z(Z^TZ)^{-1},\, D_XY_Z-ZY_U^TX_U).\end{equation} Therefore, the geodesic equations on $\mathscr{M}$ are given by \begin{equation} \label{eqn:geodesics} \left\{\begin{array}{rl}\ddot{U}+U\dot{U}^T\dot{U}+2\dot{U}\dot{Z}^TZ(Z^TZ)^{-1}= & 0\\ \ddot{Z}-Z\dot{U}^T\dot{U}= & 0.\end{array}\right.\end{equation} \end{theorem} \begin{proof} Writing $X=X_UZ^T+UX_Z^{T}$ and $Y=Y_UZ^T+UY_Z^{T}$, one obtains: \begin{align*} D_XY &=D_XY_UZ^T+Y_UX_Z^{T}+X_UY_Z^{T}+UD_XY_Z^T\\ &=D_XY_UZ^T+UD_XY_Z^T+X_UY_Z^{T}+Y_UX_Z^{T}. \end{align*} Applying the projection $\Pi_{\TT{UZ^T}}$ of eqn.\,\cref{eqn:projectionMap}, \emph{i.e.} \[ \nabla_X Y=\Pi_{\TT{UZ^T}}(D_X Y)=((I-UU^T)D_X YZ(Z^TZ)^{-1},D_XY^T U),\] yields, in the coordinates of the horizontal space, \[ \nabla_{X}Y =((I-UU^T)D_X (Y_U)+(X_UY_Z^T+Y_UX_Z^T)Z(Z^TZ)^{-1},D_X (Y_Z)+ZD_X(Y_U^T)U).\] \cref{eqn:connection} is obtained by differentiating the constraint $U^TY_U=0$ along the direction $X$, \emph{i.e.}\;$X_U^TY_U+U^TD_XY_U=0$, and substituting $U^TD_XY_U$ accordingly into the above expression. Since $\ensuremath \mathrm{d}D_{(\dot{U},\dot{Z})}(\dot{U})=\ddot{U}$ and $\ensuremath \mathrm{d}D_{(\dot{U},\dot{Z})}(\dot{Z})=\ddot{Z}$, $\nabla_{(\dot{U},\dot{Z})}(\dot{U},\dot{Z})=0$ yields eqs.\;\cref{eqn:geodesics}. \end{proof} \begin{remark} Physically, a curve $R(t)=U(t)Z(t)^T$ describes a geodesic on $\mathscr{M}$ if and only if its acceleration lies in the normal space at all instants (eqn.\;\cref{eqn:geodesicDef}) \cite{edelman1998,spivak1973comprehensive}. \end{remark} Geodesics allow one to define the exponential map \cite{spivak1973comprehensive}, which indicates how to walk on the manifold from a point $R\in \mathscr{M}$ along a straight direction $X\in \TT{R}$.
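As a quick numerical illustration (not needed in the sequel), the geodesic equations \cref{eqn:geodesics} can be integrated by an explicit scheme. The sketch below (arbitrary dimensions and initial data; a midpoint rule is chosen for simplicity) marches the second-order system and checks that the orthonormality of $U$ and the gauge condition $U^T\dot{U}=0$, which are invariants of the exact flow, are approximately preserved. The exponential map of the next definition corresponds to following such a geodesic up to time $1$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
l, m, r = 6, 5, 2
U, _ = np.linalg.qr(rng.standard_normal((l, r)))            # U^T U = I
Z = rng.standard_normal((m, r))                             # full column rank (generically)
dU = (np.eye(l) - U @ U.T) @ rng.standard_normal((l, r))    # horizontal velocity: U^T dU = 0
dZ = rng.standard_normal((m, r))

def acc(U, Z, dU, dZ):
    # accelerations given by the geodesic equations of the text
    ddU = -U @ (dU.T @ dU) - 2.0 * dU @ (dZ.T @ Z) @ np.linalg.inv(Z.T @ Z)
    ddZ = Z @ (dU.T @ dU)
    return ddU, ddZ

dt, nsteps = 1e-3, 1000
for _ in range(nsteps):                                     # midpoint (RK2) integration
    aU, aZ = acc(U, Z, dU, dZ)
    Um, Zm = U + 0.5 * dt * dU, Z + 0.5 * dt * dZ
    dUm, dZm = dU + 0.5 * dt * aU, dZ + 0.5 * dt * aZ
    aUm, aZm = acc(Um, Zm, dUm, dZm)
    U, Z = U + dt * dUm, Z + dt * dZm
    dU, dZ = dU + dt * aUm, dZ + dt * aZm

print(np.linalg.norm(U.T @ U - np.eye(r)))   # stays small: orthonormality preserved
print(np.linalg.norm(U.T @ dU))              # stays small: gauge U^T Udot = 0 preserved
\end{verbatim}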
\begin{definition} The exponential map $\exp_{UZ^T}$ at $R=UZ^T\in \mathscr{M}$ is the function \begin{equation}\label{eqn:expmap}\begin{array}{cccc}\exp_{UZ^T}:\; & \TT{UZ^T} & \rightarrow & \mathscr{M}\\ & X & \mapsto & R(1),\end{array}\end{equation} where $R(1)=U(1)Z(1)^T$ is the value at time 1 of the solution of the geodesic equation \cref{eqn:geodesics} with initial conditions $U(0)Z(0)^T=R$ and $(\dot{U}(0),\dot{Z}(0))=X$. The value of the velocity of the point $R(1)=\exp_{UZ^T}(X)$, \begin{equation}\label{eqn:parallelTransport}\tau_{RR(1)} X=\dot{U}(1)Z(1)^T+U(1)\dot{Z}(1)^T,\end{equation} is called the parallel transport of $X$ from $R$ to $R(1)$. \end{definition} \section{Curvature and differentiability of the orthogonal projection onto smooth embedded manifolds} \label{sec:curvature} Differentiability results for the orthogonal projection onto smooth embedded manifolds, as presented with tensor notations in \cite{Ambrosio2000}, are now centralized and adapted to the present study. The main motivation is that the SVD truncation (\cref{sec:curvatureMatrix}) is an example of such orthogonal projection in the particular case of the fixed-rank manifold. Hence, general geometric differentiability results for the projections will transpose directly into a formula for the differential of the application mapping a matrix to its best low rank approximation. The same analysis can be applied to other matrix manifolds to obtain the differential of other algebraic operations, and even generalized to non-Euclidean ambient spaces, which is the object of \cite{Feppon2016b}. In this section, the space of $l$-by-$m$ matrices $\mathcal{M}_{l,m}$ is replaced with a general finite dimensional Euclidean space $E$, and the fixed rank manifold with any given smooth embedded manifold $\mathscr{M}\subset E$. \begin{definition} \label{def:projectionMap} Let $\mathscr{M}$ be a smooth manifold embedded in an Euclidian space $E$. The orthogonal projection of a point $\mathfrak{R}$ onto $\mathscr{M}$ is defined whenever there is a unique point $\Pi_\mathscr{M}(\mathfrak{R})\in\mathscr{M}$ minimizing the Euclidean distance from $\mathfrak{R}$ to $\mathscr{M}$, \emph{i.e.} \[ ||\mathfrak{R}-\Pi_\mathscr{M}(\mathfrak{R})||=\inf_{R\in\mathscr{M}} ||\mathfrak{R}-R||.\] \end{definition} A fundamental property of the orthogonal projection is that the vector $\mathfrak{R}-R$ is normal to $\mathscr{M}$ for the point $R=\Pi_\mathscr{M}(\mathfrak{R})$, as geometrically illustrated on \cref{fig:tangentProj}: \begin{proposition} \label{prop:normal} Whenever $\Pi_\mathscr{M}(\mathfrak{R})$ is defined, the residual $\mathfrak{R}-\Pi_\mathscr{M}(\mathfrak{R})\in\mathbb{N}N{R}$ must be normal to $\mathscr{M}$ at $R$, namely \begin{equation} \label{eqn:SVDorthogonality} \Pi_\TT{\Pi_\mathscr{M}(\mathfrak{R})}(\mathfrak{R}-\Pi_\mathscr{M}(\mathfrak{R}))=0.\end{equation} \end{proposition} \begin{proof} For any tangent vector $X\in \TT{R}$, consider a curve $R(t)$ drawn on $\mathscr{M}$ such that $R(0)=R$ and $\dot{R}(0)=X$ where $R$ is minimizing $J(R)=\frac{1}{2}||\mathfrak{R}-R||^2$. Then the stationarity condition $\left.\frac{\ensuremath \mathrm{d}}{\ensuremath \mathrm{d} t}\right|_{t=0}J(R(t))=-\langle \mathfrak{R}-R,X \rangle=0$ states precisely \cref{eqn:SVDorthogonality}. \end{proof} The following proposition, also used in the proofs of \cite{koch2007dynamical}, provides an equation for the differential of $\Pi_{\mathscr{M}}$, that will be solved by the study of the curvature of $\mathscr{M}$. 
\begin{proposition} Suppose the projection $\Pi_\mathscr{M}$ is defined and differentiable at $\mathfrak{R}$. Then the differential $\ensuremath \mathrm{d}D_\mathfrak{X}\Pi_{\mathscr{M}}(\mathfrak{R})$ of $\Pi_\mathscr{M}$ at the point $\mathfrak{R}$ in the direction $\mathfrak{X}\in E$ satisfies~: \begin{equation}\label{eqn:SVDdifferential1} \ensuremath \mathrm{d}D_\mathfrak{X} \Pi_\mathscr{M}(\mathfrak{R})=\Pi_\TT{\Pi_\mathscr{M}(\mathfrak{R})}(\mathfrak{X})+\ensuremath \mathrm{d}D \Pi_\TT{\Pi_\mathscr{M}(\mathfrak{R})}(\ensuremath \mathrm{d}D_\mathfrak{X} \Pi_\mathscr{M}(\mathfrak{R}))\cdot (\mathfrak{R}-\Pi_\mathscr{M}(\mathfrak{R})).\end{equation} \end{proposition} \begin{proof} Differentiating equation \cref{eqn:SVDorthogonality} along the direction $\mathfrak{X}$ yields \[ \ensuremath \mathrm{d}D \Pi_\TT{\Pi_\mathscr{M}(\mathfrak{R})}(\ensuremath \mathrm{d}D\Pi_\mathscr{M}(\mathfrak{R})(\mathfrak{X}))\cdot (\mathfrak{R}-\Pi_\mathscr{M}(\mathfrak{R}))+\Pi_\TT{\Pi_\mathscr{M}(\mathfrak{R})}(\mathfrak{X}-\ensuremath \mathrm{d}D_\mathfrak{X}\Pi_\mathscr{M}(\mathfrak{R}))=0.\] Since $\Pi_\mathscr{M}(\mathfrak{R})\in\mathscr{M}$ for any $\mathfrak{R}$, the differential $\ensuremath \mathrm{d}D_\mathfrak{X}\Pi_\mathscr{M}(\mathfrak{R})$ is a tangent vector, and the results follows from the relation $\Pi_\TT{\Pi_\mathscr{M}(\mathfrak{R})}(\ensuremath \mathrm{d}D_\mathfrak{X}\Pi_\mathscr{M}(\mathfrak{R}))=\ensuremath \mathrm{d}D_\mathfrak{X}\Pi_\mathscr{M}(\mathfrak{R})$. \end{proof} Let $R=\Pi_\mathscr{M}(\mathfrak{R})$ be the projection of the point $\mathfrak{R}$ on $\mathscr{M}$ and $N=\mathfrak{R}-R$ the corresponding normal residual vector. Solving \cref{eqn:SVDdifferential1} for the differential $X=\ensuremath \mathrm{d}D_\mathfrak{X} \Pi_\mathscr{M}(\mathfrak{R})$ requires to invert the linear operator $I-L_R(N)$ where $L_R(N)$ is the map $X\mapsto\ensuremath \mathrm{d}D \Pi_\TT{R}(X)\cdot N$. $L_R(N)$ would be zero if $\mathscr{M}$ were to be a ``flat'' vector subspace and can be interpreted as a curvature correction. In fact, $L_R(N)$ is nothing else than the Weingarten map, at the origin of the definition of principal curvatures. For embedded hypersurface, this application maps tangent vectors $X$ to the tangent variations $-\ensuremath \mathrm{d}D_X N$ of the unit normal vector field $N$, and the eigenvalues and eigenvectors of this symmetric endomorphism define the principal curvatures and directions of the hypersurface (\cite{spivak1973comprehensive}, Vol. 2). For general smooth embedded sub-manifolds, a Weingarten map is defined for every possible normal direction \cite{Simon1983,Ambrosio2000,absil2013extrinsic,absil2009all}. 
\begin{definition}[Weingarten map] \label{def:weingartenMap} For any point $R\in\mathscr{M}$, tangent and normal vector fields $X,Y\in \TT{R}$ and $N\in\mathbb{N}N{R}$ defined on a neighborhood of $R$, the following relation, called \emph{Weingarten identity} holds: \begin{equation} \label{eqn:weingartenIdentity}\langle \Pi_\TT{R}(\ensuremath \mathrm{d}D_X N),Y \rangle=\langle N,\Gamma(X,Y) \rangle.\end{equation} Also, the tangent variations $\Pi_\TT{R}(\ensuremath \mathrm{d}D_X N)$ depend only on the value of the normal vector field $N$ at $R$ as it can be seen from the identity \begin{equation} \label{eqn:weingartenApplication}\ensuremath \mathrm{d}D\Pi_\TT{R}(X)\cdot N=-\Pi_\TT{R}(\ensuremath \mathrm{d}D_X N).\end{equation} The application \[\begin{array}{ccccc}L_R(N) &: & \TT{R} & \rightarrow & \TT{R} \\ & & X & \mapsto & \ensuremath \mathrm{d}D\Pi_\TT{R}(X)\cdot N,\end{array}\] is therefore a symmetric map of the tangent space into itself and is called the Weingarten map in the normal direction $N$. The corresponding eigenvectors and eigenvalues are respectively called the \emph{principal directions} and \emph{principal curvatures} of $\mathscr{M}$ in the normal direction~$N$. The induced symmetric bilinear form on the tangent space, \begin{equation}\label{eqn:secondFundamentalFormAbstract}\mathbb{R}N{2}(N):\;(X,Y)\mapsto -\langle N,\Gamma(X,Y) \rangle,\end{equation} is called the second fundamental form in the direction $N$. \end{definition} \begin{proof} See \cite{Simon1983} or the proof Theorem 5 of \cite{spivak1973comprehensive}, vol.3, ch.1. \end{proof} The differentiability of the projection map for arbitrary sets has been studied in \cite{wulbert1968continuity,abatzoglou1979metric} and more recently in the context of smooth manifolds in \cite{Ambrosio2000,gilbarg2015elliptic,cannarsa2004representation} with recent applications in shape optimization \cite{allaire2014multi}. The following theorem reformulates these results in the framework of this article. The proof given in \cref{app:proofDiff} is essentially a justification that one can indeed invert the operator $I-L_R(N)$ by using its eigendecomposition. Recall that the adherence $\overline{\mathscr{M}}$ is the set of limit points of $\mathscr{M}$. In this paper, the boundary of a manifold is defined as the set $\partial \mathscr{M}=\overline{\mathscr{M}}\backslash\mathscr{M}$ . \begin{theorem} \label{thm:distanceDiff} Let ${\mathcal{O}_n}mega\subset E$ be an open set of $E$ and assume that for any $\mathfrak{R}\in{\mathcal{O}_n}mega$, there exists a unique projection $\Pi_\mathscr{M}(\mathfrak{R})\in \mathscr{M}$ such that \begin{equation}\label{eqn:CondunicityProj}||\mathfrak{R}-\Pi_\mathscr{M}(\mathfrak{R})||=\inf_{R\in\mathscr{M}}||\mathfrak{R}-R||,\end{equation} and that in addition, there is no other projection on the boundary $\partial \mathscr{M}$ of $\mathscr{M}$: \begin{equation}\label{eqn:Condfrontier} \forall R\in \overline{\mathscr{M}}\backslash \mathscr{M}, ||\mathfrak{R}-R||>||\mathfrak{R}-\Pi_\mathscr{M}(\mathfrak{R})||.\end{equation} For $\mathfrak{R}\in{\mathcal{O}_n}mega$, denote $\kappa_i(N)$ and $\Phi_i$ the respective eigenvalues and eigenvectors of the Weingarten map $L_R(N)$ at $R=\Pi_\mathscr{M}(\mathfrak{R})$ with the normal direction $N=\mathfrak{R}-\Pi_\mathscr{M}(\mathfrak{R})$. Then all the principal curvatures satisfy $\kappa_i(N)<1$ and the projection $\Pi_\mathscr{M}$ is differentiable at $\mathfrak{R}$. 
The differential $\ensuremath \mathrm{d}D_\mathfrak{X}\Pi_\mathscr{M}(\mathfrak{R})$ at $\mathfrak{R}$ in the direction $\mathfrak{X}$ satisfies \begin{equation}\label{eqn:diffProjection} \begin{aligned} \ensuremath \mathrm{d}D_\mathfrak{X} \Pi_\mathscr{M}(\mathfrak{R}) & =\sum\limits_{i} \frac{1}{1-\kappa_i(N)}\langle \Phi_i,\mathfrak{X} \rangle\Phi_i \\ &=\Pi_{\TT{\Pi_\mathscr{M}(\mathfrak{R})}}(\mathfrak{X})+\sum\limits_{\kappa_i(N)\neq 0} \frac{\kappa_i(N)}{1-\kappa_i(N)}\langle \Phi_i,\mathfrak{X} \rangle\Phi_i .\end{aligned} \end{equation} \end{theorem} \begin{proof} See \cref{app:proofDiff} or \cite{Ambrosio2000}. \end{proof} The set $\textrm{Sk}(\mathscr{M})\subset E$ of points that admit more than one possible projection is called the \emph{skeleton} of $\mathscr{M}$ (see \cite{delfour2011shapes}). One cannot expect the projection map to be differentiable at points that are in the closure $\overline{\textrm{Sk}(\mathscr{M})}$, as there is a ``jump'' of the projected values across $\textrm{Sk}(\mathscr{M})$ (\cref{fig:parabola}). Equation \cref{eqn:diffProjection} is analogous to the formula presented in \cite{gilbarg2015elliptic} for hypersurfaces (Lemma 14.17). In this framework, one retrieves the usual notion of principal curvature by considering the eigenvalues $\kappa_i(N)$ for a normalized normal vector $N$. Curvature radii being defined as the inverses of the curvatures, $\rho_i=\kappa_i\left(\frac{N}{||N||}\right)^{-1}$, the condition $\kappa_i(N)=||\mathfrak{R}-\Pi_\mathscr{M}(\mathfrak{R})||/\rho_i\neq 1$ states that the projection $\Pi_\mathscr{M}$ is differentiable at points $\mathfrak{R}$ that are not centers of curvature. Note that assumption (\ref{eqn:Condfrontier}) is required to deal with non-closed manifolds (boundary points are not considered part of the manifold), which is the case for the fixed rank matrix manifold. \begin{figure} \caption{\small A parabola $\mathscr{M}$, its skeleton $\textrm{Sk}(\mathscr{M})$, and the jump of the projection $\Pi_\mathscr{M}$ across the skeleton.} \label{fig:parabola} \end{figure} \section{Curvature of the fixed rank matrix manifold and the differentiability of the SVD truncation} \label{sec:curvatureMatrix} In the following, $\mathscr{M}\subset \mathcal{M}_{l,m}$ denotes again the fixed rank matrix manifold of \cref{def:manifold} and $E=\mathcal{M}_{l,m}$ is the space of $l$-by-$m$ matrices. It is well known \cite{Golub2012,horn_johnson_1985} that the truncated SVD, \emph{i.e.}\;the map that sets all singular values of a matrix $\mathfrak{R}$ to zero except the $r$ largest, yields the best rank $r$ approximation. \begin{definition} \label{def:projectionM} Let $\mathfrak{R}\in\mathcal{M}_{l,m}$ be a matrix of rank $r+k$ with $k \ge 0$, and denote by $\mathfrak{R}=\sum_{i=1}^{r+k}\sigma_i(\mathfrak{R})u_iv_i^T$ its singular value decomposition. If $\sigma_r(\mathfrak{R})>\sigma_{r+1}(\mathfrak{R})$, then the rank $r$ truncated SVD \[ \Pi_{\mathscr{M}}(\mathfrak{R})=\sum_{i=1}^r \sigma_i(\mathfrak{R}) u_iv_i^T\in\mathscr{M},\] is the unique matrix $R\in\mathscr{M}$ minimizing the Euclidean distance $R\mapsto ||\mathfrak{R}-R||$. \end{definition} \begin{remark} The skeleton of $\mathscr{M}$ (Fig.\;\ref{fig:parabola}) is therefore the set \[\textrm{Sk}(\mathscr{M})=\{\mathfrak{R}\in\mathcal{M}_{l,m}\;|\; \sigma_r(\mathfrak{R})=\sigma_{r+1}(\mathfrak{R})\},\] characterized by the crossing of the singular values of order $r$ and $r+1$. \end{remark} In the following, the Weingarten map for the fixed rank manifold is derived. Note that its expression has been previously found by \cite{absil2013extrinsic} in the form of equation \cref{eqn:weingartenAbsil} below.
\begin{proposition} \label{prop:matrixWeingarten} The Weingarten map $L_R(N)$ of the fixed rank manifold $\mathscr{M}$ in the normal direction $N\in \mathbb{N}N{R}$ is the application: \begin{equation}\label{eqn:matrixweingartenMap}\begin{array}{crcc}L_R(N)\;:\;& \mathcal{H}_{(U,Z)} & \longrightarrow & \mathcal{H}_{(U,Z)} \\ & (X_U,X_Z) & \longmapsto & (NX_Z(Z^TZ)^{-1},N^TX_U). \end{array}\end{equation} Or, denoting $R=\sum_{i=1}^r \sigma_i u_i v_i^T$ and $N=\sum_{j=1}^k \sigma_{r+j}u_{r+j}v_{r+j}^T$ as in \cref{prop:normalSpace}, this can be rewritten more explicitly as \begin{equation} \label{eqn:weingartenClear} \forall X\in\TT{R},\, L_R(N)X=\sum_{\substack{1\ensuremath \leq i\ensuremath \leqr\\1\ensuremath \leqj\ensuremath \leqk}} \frac{\sigma_{r+j}}{\sigma_i}\left[u_iv_i^TX^Tu_{r+j}v_{r+j}^T+u_{r+j}v_{r+j}^TX^Tu_iv_i^T\right].\end{equation} The second fundamental form is given by: \begin{equation} \label{eqn:secondFundamentalForm} \mathbb{R}N{2}\;:\;(X,Y)\mapsto \langle X,L_R(N)(Y) \rangle=\Tr((X_UY_Z^T+Y_UX_Z^T)^TN).\end{equation} \end{proposition} \begin{proof} Differentiating \cref{eqn:projectionMap} along the tangent direction $X=(X_U,X_Z)\in\mathcal{H}_{(U,Z)}$, and using the relations $U^TN=0$ and $NZ=0$, yields \begin{equation}\label{eqn:weingartenMatrix}L_R(N)X= UX_U^TN+NX_Z(Z^TZ)^{-1}Z^T.\end{equation} The normality of $N$ implies that $(NX_Z(Z^TZ)^{-1},N^TX_U)$ is a vector of the horizontal space and therefore equation \cref{eqn:matrixweingartenMap} follows. Eqn. \cref{eqn:weingartenMatrix} can be rewritten as \begin{equation} \label{eqn:weingartenAbsil} L_R(N)X= U(Z^TZ)^{-1}Z^TX^TN+NX^TU(Z^TZ)^{-1}Z^T, \end{equation} by expressing $X_U=(I-UU^T)XZ(Z^TZ)^{-1}$ and $X_Z=X^TU$ in terms of $X$ (eqn. \cref{eqn:projectionMap}), from which is derived eqn. \cref{eqn:weingartenClear} by introducing singular vectors $(u_i)$, $(v_i)$ and singular values $(\sigma_i)$. One obtains\cref{eqn:secondFundamentalForm} by evaluating the scalar product $\langle X,L_R(N)(Y) \rangle$ with the metric $g$ (equation \cref{eqn:metric}). \end{proof} \begin{remark} \label{rmk:christoffel} The Christoffel symbol is deduced from equations \cref{eqn:secondFundamentalForm} and \cref{eqn:secondFundamentalFormAbstract}: \begin{equation} \label{eqn:christoffel} \Gamma(X,Y)=-(I-\Pi_\TT{R})(X_UY_Z^T+Y_UX_Z^T). \end{equation} \end{remark} \begin{theorem} \label{thm:curvature} Consider a point $R=UZ^T=\sum_{i=1}^r \sigma_i u_iv_i^T\in\mathscr{M}$ and a normal vector $N=\sum_{j=1}^k \sigma_{r+j}u_{r+j}v_{r+j}^T\in \mathbb{N}N{R}$ (no ordering of the singular values is assumed). At $R$ and in the direction $N$, there are $2kr$ non-zero principal curvatures \[\kappa_{i,j}^\pm(N)=\pm\frac{\sigma_{r+j}}{\sigma_i},\] for all possible combinations of non-zero singular values $\sigma_{r+j}, \sigma_i$ for $1\ensuremath \leqi\ensuremath \leqr$ and $1\ensuremath \leqj\ensuremath \leqk$. The normalized corresponding principal directions are the tangent vectors \begin{equation} \label{eqn:principalDirectionMatrix} \Phi_{i,r+j}^\pm=\frac{1}{\sqrt{2}}(u_{r+j}v_i^T\pm u_iv_{r+j}^T).\end{equation} The other principal curvatures are null and associated with the principal subspace \[\mathrm{Ker}(L_R(N))=\mathrm{span}\{(u_iv^T)_{1\ensuremath \leqi\ensuremath \leqr} | Nv=0\}\oplus\mathrm{span}\{(uv_i^T)_{1\ensuremath \leqi\ensuremath \leqr} | u^TN=u^TU=0\}.\] \end{theorem} \begin{proof} From \cref{eqn:weingartenClear}, it is clear that $L_R(N)\Phi_{i,r+j}^\pm=\kappa_{i,r+j}^\pm(N) \Phi_{i,r+j}^\pm$. 
In addition, $\Phi_{i,r+j}^\pm$ is indeed a tangent vector as one can write $\Phi_{i,r+j}^\pm=X_UZ^T\pm UX_Z^T$ with \[ (X_U,X_Z)=\frac{1}{\sqrt{2}\sigma_{r+j}\sigma_i}(Nv_{r+j}u_i^TU, N^Tu_{r+j}v_i^TZ).\] Therefore $(\Phi_{i,r+j}^\pm)$ is a family of $2kr$ independent eigenvectors. Then it is easy to check that $\mathrm{span}\{(u_iv^T)_{1\ensuremath \leqi\ensuremath \leqr} | Nv=0\}$ and $\mathrm{span}\{(uv_i^T)_{1\ensuremath \leqi\ensuremath \leqr} | u^TN=u^TU=0\}$ are null eigenspaces of respective dimensions $(m-k)r$ and $(l-k-r)r$. The total dimension obtained is $(m-k)r+(l-k-r)r+2kr=mr+lr-r^2$, implying that the full spectral decomposition has been characterized. \end{proof} This theorem shows that the maximal curvature of $\mathscr{M}$ (for normalized normal directions $||N||=1$) is $\sigma_r(R)^{-1}$ and hence diverges as the smallest singular value goes to 0. This fact confirms what is visible in \cref{fig:plotManifold}: the manifold $\mathscr{M}$ can be seen as a collection of cones or as a multidimensional spiral, whose axes are the lower dimensional manifolds of matrices of rank strictly less than $r$. Applying directly the formula \cref{eqn:diffProjection} of \cref{thm:distanceDiff}, one obtains an explicit expression for the differential of the truncated SVD: \begin{theorem} \label{thm:diffProjection} Consider $\mathfrak{R}\in\mathcal{M}_{l,m}$ with rank greater than $r$ and denote by $\mathfrak{R}=\sum_{i=1}^{r+k}\sigma_i u_iv_i^T$ its SVD decomposition, where the singular values are ordered decreasingly: $\sigma_1\ensuremath \geq\sigma_2\ensuremath \geq\dots\ensuremath \geq\sigma_{r+k}$. Suppose that the orthogonal projection $\Pi_{\mathscr{M}}(\mathfrak{R})=UZ^T$ of $\mathfrak{R}$ onto $\mathscr{M}$ is uniquely defined, that is $\sigma_r>\sigma_{r+1}$. Then $\Pi_\mathscr{M}$, the truncated SVD of order $r$, is differentiable at $\mathfrak{R}$ and the differential $\ensuremath \mathrm{d}D_\mathfrak{X}\Pi_\mathscr{M}(\mathfrak{R})$ in a direction $\mathfrak{X}\in\mathcal{M}_{l,m}$ is given by the formula \begin{multline}\label{eqn:diffSVD} \ensuremath \mathrm{d}D_\mathfrak{X}\Pi_{\mathscr{M}}(\mathfrak{R})=\Pi_{\TT{\Pi_{\mathscr{M}}(\mathfrak{R})}}(\mathfrak{X})\\+\sum_{\substack{1\ensuremath \leqi\ensuremath \leqr\\1\ensuremath \leqj\ensuremath \leqk}} \left[\frac{\sigma_{r+j}}{\sigma_i-\sigma_{r+j}} \langle\mathfrak{X},\Phi_{i,r+j}^+ \rangle\Phi_{i,r+j}^+-\frac{\sigma_{r+j}}{\sigma_i+\sigma_{r+j}}\langle \mathfrak{X},\Phi_{i,r+j}^- \rangle\Phi_{i,r+j}^-\right],\end{multline} where $\Phi_{i,r+j}^\pm$ are the principal directions of equation \cref{eqn:principalDirectionMatrix}. More explicitly, \begin{multline} \label{eqn:diffSVDexplicit} \ensuremath \mathrm{d}D_\mathfrak{X} \Pi_\mathscr{M}(\mathfrak{R})=(I-UU^T)\mathfrak{X} Z(Z^TZ)^{-1}Z^T+UU^T\mathfrak{X} \\ +\sum_{\substack{1\ensuremath \leqi\ensuremath \leqr\\1\ensuremath \leqj\ensuremath \leqk}} \frac{\sigma_{r+j}}{\sigma_i^2-\sigma_{r+j}^2} [ (\sigma_{r+j} u_{r+j}^T\mathfrak{X} v_i+\sigma_i u_i^T\mathfrak{X} v_{r+j})u_{r+j}v_i^T\\+(\sigma_{i}u_{r+j}^T\mathfrak{X} v_i+\sigma_{r+j}u_i^T\mathfrak{X} v_{r+j})u_iv_{r+j}^T].\end{multline} \end{theorem} \begin{proof} The set $\{\mathfrak{R}\in\mathcal{M}_{l,m}\;|\; \sigma_{r}(\mathfrak{R})>\sigma_{r+1}(\mathfrak{R})\}$ is open by continuity of the singular values; therefore condition (\ref{eqn:CondunicityProj}) of \cref{thm:distanceDiff} is fulfilled.
The boundary $\overline{\mathscr{M}}\backslash \mathscr{M}$ is the set of matrices of rank strictly lower than $r$, hence condition (\ref{eqn:Condfrontier}) is also fulfilled. Equation (\ref{eqn:diffSVD}) follows by replacing $\kappa_i(N)$ and $\Phi_i$ in \cref{eqn:diffProjection} by the corresponding curvature eigenvalues $\pm\frac{\sigma_{r+j}}{\sigma_i}$ and eigenvectors $\Phi_{i,r+j}^\pm$ of \cref{thm:curvature}. \end{proof} \begin{remark} Dehaene \cite{dehaene1995continuous} and Dieci and Eirola \cite{dieci1999smooth} have previously derived formulas for the time derivative of singular values and singular vectors of a smoothly varying matrix. One can also certainly use these results to find formula \cref{eqn:diffSVDexplicit} by differentiating singular values $(\sigma_i)$ and singular vectors $(u_i), (v_i)$ separately in $\sum_{i=1}^r \sigma_i u_i v_i^T$. In the present work, the proof of \cref{thm:diffProjection} does not require singular values to remain simple, and formula \cref{eqn:diffSVD} is obtained directly from its geometric interpretation. \end{remark} \section{The Dynamically Orthogonal Approximation} \label{sec:DO} The above results are now utilized for model order reduction. Following the introduction, the DO approximation is defined to be the dynamical system obtained by replacing the vector field $\mathcal{L}(t,\cdot)$ with its tangent projection on the manifold. (\cref{fig:tangentProj}). \begin{definition} The maximal solution in time of the \emph{reduced} dynamical system on $\mathscr{M}$, \begin{equation} \label{eqn:DOsystemAbstract} \left\{\begin{array}{cl}\dot{R}&=\Pi_\TT{R}(\mathcal{L}(t,R))\\ R(0) &= \Pi_\mathscr{M}(\mathfrak{R}(0)),\end{array}\right. \end{equation} is called the \emph{Dynamically Orthogonal} (DO) approximation of \cref{eqn:dRstar}. The solution $R(t)=U(t)Z^T(t)$ is governed by a dynamical system for the mode matrix $U$ and the coefficient matrix $Z$ such that $(\dot{U},\dot{Z})\in \mathcal{H}_{(U,Z)}$ satisfies the \emph{dynamically orthogonal condition} $U^T\dot{U}=0$ at every instant: \begin{equation}\label{eqn:DOsystem} \left\{\begin{array}{rl} \dot{U}= & (I-UU^T)\mathcal{L}(t,UZ^T)Z(Z^TZ)^{-1} \\ \dot{Z}= & \mathcal{L}(t,UZ^T)^T U\\ U(0)Z(0)^T=& \Pi_\mathscr{M}(\mathfrak{R}(0)).\end{array}\right. \end{equation} \end{definition} \begin{remark} Equations \cref{eqn:DOsystem} are exactly those presented as DO equations in \cite{sapsis2009dynamically,sapsis2011dynamically}. With the notation of \cref{eqn:SPDE,eqn:KLdecomposition}, using $\langle \cdotp,\cdotp \rangle$ to denote the continuous dot product operator (an integral over the spatial domain) and $\mathbb{E}$ the expectation, they were written as the following set of coupled stochastic PDEs: \begin{equation} \label{eqn:DOsapsis} \left\{\begin{aligned} \partial_t \zeta_i & = \langle {\mathscr{L}(t,{\bm u}_\textrm{DO}^{\textrm{}};\omega)},{\bm u}_i \rangle\\ \sum_{j=1}^r \mathbb{E}[\zeta_i \zeta_j]\partial_t {\bm u}_j & =\mathbb{E}\left[\zeta_i\left({\mathscr{L}(t,{\bm u}_\textrm{DO}^{\textrm{}};\omega)}-\sum_{j=1}^r \langle {\mathscr{L}(t,{\bm u}_\textrm{DO}^{\textrm{}};\omega)},{\bm u}_j \rangle{\bm u}_j\right)\right]\;. \end{aligned}\right. \end{equation} However, when dealing with infinite dimensional Hilbert spaces, the vector space of solutions of \cref{eqn:SPDE} depends on the PDEs, which complicates the derivation of a general theory for \cref{eqn:DOsapsis}. 
Considering the DO approximation as a computational method for evolving low rank matrices relaxes these issues through the finite-dimensional setting. \end{remark} \begin{remark} One can relate \cref{eqn:DOsystemAbstract} to projected dynamical systems encountered in optimization \cite{Nagurney2012}, where the manifold $\mathscr{M}$ is replaced with a compact convex set. \end{remark} In the following, two justifications of the accuracy of this approximation are given. First, the DO approximation is shown to be the continuous limit of a scheme that would truncate the SVD of the full matrix solution after each time step, and hence is instantaneously optimal among any other possible model order reduced system. Then, its dynamics is compared to that of the best low rank approximation, yielding error bounds on global integration times. The efficiency of the DO approach in the context of the discretization of a stochastic PDE is not discussed here. These points are examined in \cite{Feppon2016a} and in references cited therein. \subsection{The DO system applies instantaneously the truncated SVD} This paragraph details first a ``computational'' interpretation of the DO approximation. Consider the temporal integration of the dynamical system \cref{eqn:dRstar} over $(t^n, t^{n+1})$, \begin{equation} \label{eqn:fullEulerScheme} \mathfrak{R}^{n+1}=\mathfrak{R}^n+\ensuremath \mathrm{d}elta t \overline{\mathcal{L}}(t^n,\mathfrak{R}^n,\ensuremath \mathrm{d}elta t),\end{equation} where $\overline{\mathcal{L}}(t,\mathfrak{R},\ensuremath \mathrm{d}elta t)$ denotes the full-space integral $\overline{\mathcal{L}}(t,\mathfrak{R},\ensuremath \mathrm{d}elta t)=\frac{1}{\ensuremath \mathrm{d}elta t}\int_t^{t+\ensuremath \mathrm{d}elta t} \mathcal{L}(s,\mathfrak{R}(s))\ensuremath \mathrm{d} s$ for the exact integration or the increment function \cite{Haier2000} for a numerical integration. Examples of the latter include $\overline{\mathcal{L}}(t,\mathfrak{R},\ensuremath \mathrm{d}elta t)=\mathcal{L}(t,\mathfrak{R})$ for forward Euler and $\overline{\mathcal{L}}(t,\mathfrak{R},\ensuremath \mathrm{d}elta t)=\mathcal{L}(t+\ensuremath \mathrm{d}elta t/2,\mathfrak{R}+\ensuremath \mathrm{d}elta t/2\, \mathcal{L}(t,\mathfrak{R}))$ for a second-order Runge-Kutta scheme. Assume that the solution $\mathfrak{R}^n$ at time $t^n$ is well approximated by a rank $r$ matrix $R^n$. A natural way to estimate the best rank $r$ approximation $\Pi_\mathscr{M}(\mathfrak{R}^{n+1})$ at the next time step is then to set \begin{equation} \label{eqn:fullSVDscheme} \left\{\begin{aligned} R^{n+1} & =\Pi_\mathscr{M}(R^n+\ensuremath \mathrm{d}elta t\overline{\mathcal{L}}(t,R^n,\ensuremath \mathrm{d}elta t))\\ R^0 & = \Pi_\mathscr{M}(\mathfrak{R}(0)). \end{aligned}\right.\end{equation} Such a numerical scheme uses the truncated SVD, $\Pi_\mathscr{M}$, to remove after each time step of the initial time-integration \cref{eqn:fullEulerScheme} the optimal amount of information required to constrain the rank of the solution. A data-driven adaptive version of this approach was for example used in \cite{Lermusiaux1997,lermusiaux_DAO1999}. One can then look for a dynamical system for which \cref{eqn:fullSVDscheme} would be a temporal discretization. 
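For concreteness, one possible instance of the scheme \cref{eqn:fullSVDscheme}, with the forward-Euler increment function $\overline{\mathcal{L}}=\mathcal{L}$, can be sketched in \texttt{numpy} as follows (illustrative only; $\mathcal{L}$ is a user-supplied vector field):
\begin{verbatim}
import numpy as np

def svd_trunc(R, r):
    # rank-r SVD truncation, playing the role of the projection Pi_M
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

def step_and_truncate(L, t, R, dt, r):
    # march one step in the full space, then truncate back to rank r
    return svd_trunc(R + dt * L(t, R), r)

# usage: R_next = step_and_truncate(L, t_n, R_n, dt, r)
\end{verbatim}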
One then finds that, for any rank $r$ matrix $R\in\mathscr{M}$, \begin{equation}\label{eqn:consistency}\frac{\Pi_{\mathscr{M}}(R+\ensuremath \mathrm{d}elta t\overline{\mathcal{L}}(t,R,\ensuremath \mathrm{d}elta t))-R}{\ensuremath \mathrm{d}elta t}\underset{\ensuremath \mathrm{d}elta t\rightarrow 0}\longrightarrow \ensuremath \mathrm{d}D_{\overline{\mathcal{L}}(t,R,0)}\Pi_\mathscr{M}(R)= \Pi_\TT{R}(\mathcal{L}(t,R))\end{equation} holds true, since the curvature term, which depends on $N=R-\Pi_\mathscr{M}(R)=0$, vanishes in \cref{eqn:diffProjection}, and $\overline{\mathcal{L}}(t,R,0)=\mathcal{L}(t,R)$ by consistency of the time marching with the exact integration \cref{eqn:fullEulerScheme} \cite{Haier2000}. This implies, under sufficient regularity conditions on $\mathcal{L}$, that the continuous limit of the scheme \cref{eqn:fullSVDscheme} is the DO dynamical system \cref{eqn:DOsystemAbstract}. \begin{theorem} \label{prop:convergenceScheme} Assume that the DO solution \cref{eqn:DOsystemAbstract} is defined on a time interval $[0,T]$ discretized with $N_T$ time steps $\ensuremath \mathrm{d}elta t=T/N_T$ and denote $t^n=n\ensuremath \mathrm{d}elta t$. Let $R^n$ be the sequence obtained from the class of schemes \cref{eqn:fullSVDscheme}. Assume that $\mathcal{L}$ is Lipschitz continuous, that is, there exists a constant $K$ such that \begin{equation}\label{eqn:lipschitz} \forall t\in [0,T], ~\forall A,B\in \mathcal{M}_{l,m}, ~ ||\mathcal{L}(t,A)-\mathcal{L}(t,B)||\ensuremath \leq K||A-B||.\end{equation} Then the sequence $R^{n}$ converges uniformly to the DO solution $R(t)$ in the following sense: \[ \sup_{0\ensuremath \leqn\ensuremath \leqN_T} ||R^n-R(t^n)||\underset{ \ensuremath \mathrm{d}elta t\rightarrow 0}{\longrightarrow} 0.\] \end{theorem} \begin{proof} It is sufficient to check that the scheme \cref{eqn:fullSVDscheme} is both consistent and stable (see \cite{Haier2000}). Denote by $\Phi$ the increment function of the scheme \cref{eqn:fullSVDscheme}: \begin{equation}\label{eqn:incrFunction} \Phi(t,R,\ensuremath \mathrm{d}elta t)=\frac{\Pi_\mathscr{M}(R+\ensuremath \mathrm{d}elta t\overline{\mathcal{L}}(t,R,\ensuremath \mathrm{d}elta t))-R}{\ensuremath \mathrm{d}elta t}=\frac{1}{\ensuremath \mathrm{d}elta t}\int_0^1 \frac{\ensuremath \mathrm{d}}{\ensuremath \mathrm{d} \tau}\Pi_\mathscr{M}(g(R,t,\tau,\ensuremath \mathrm{d}elta t))\ensuremath \mathrm{d} \tau\end{equation} with $g(R,t,\tau,\ensuremath \mathrm{d}elta t)=R+\tau \ensuremath \mathrm{d}elta t\overline{\mathcal{L}}(t,R,\ensuremath \mathrm{d}elta t)$. Consider a compact neighborhood $\mathcal{U}\subset\mathcal{M}_{l,m}$ of the trajectory $R(t)$ over the interval $[0,T]$, sufficiently thin that $\mathcal{U}$ does not intersect the skeleton of $\mathscr{M}$. In particular, $\Pi_\mathscr{M}$ is differentiable with respect to $R$ on the compact neighborhood $\mathcal{U}$, hence Lipschitz continuous. The consistency of \cref{eqn:fullSVDscheme} and the continuity of $\Phi$ on $[0,T]\times \mathcal{U}\times \mathbb{R}$ follow from \cref{eqn:consistency}. For usual time marching schemes (e.g.\;Runge--Kutta), the Lipschitz condition \cref{eqn:lipschitz} also holds for the map $R\mapsto \overline{\mathcal{L}}(t,R,\ensuremath \mathrm{d}elta t)$. Therefore $\Phi$ is also Lipschitz continuous with respect to $R$ on $\mathcal{U}$ by composition. This is a sufficient stability condition.
\end{proof} As such, the DO approximation can be interpreted as the dynamical system that instantaneously applies the truncated SVD to constrain the rank of the solution. Therefore, other reduced order models of the form \cref{eqn:dRmanifold} are characterized by larger errors on short integration times for solutions whose initial value lies on $\mathscr{M}$. \begin{remark} Other dynamical systems that perform instantaneous matrix operations have been derived in \cite{brockett1988,Smith1991}, and in \cite{dehaene1995continuous} (e.g.\;Lemma\;3.4 and Corollary\;3.5) or \cite{dieci1999smooth} (sections 2.1 and 2.3) for tracking the full SVD or QR decomposition. Continuous SVD has been combined with adaptive Kalman filtering in uncertainty quantification to continuously adapt the dominant subspace supporting the stochastic solution \cite{Lermusiaux1997,lermusiaux_DAO1999,lermusiaux2001evolving}. All of these results utilized the instantaneous truncated SVD concept and formed the computational basis of the continuous DO dynamical system. In fact, the dominant singular vectors of state transition matrices and other operators have found varied applications in atmospheric and ocean sciences for some time \cite{farrell1996generalized_partI,farrell1996generalized_partII,palmer1998singular,hoskins2000nature,lermusiaux_PhysD2007,moore2004comprehensive,kalnay2003atmospheric,Diaconescu_laprise_SV_review_ESR2012}. \end{remark} \subsection{The DO approximation is close to the dynamics of the best low rank approximation of the original solution} Ideally, a model order reduced solution $R(t)$ would coincide at all times with the best rank $r$ approximation $\Pi_\mathscr{M}(\mathfrak{R}(t))$, so as to keep the error $||\mathfrak{R}(t)-R(t)||$ minimal. However, $\Pi_\mathscr{M}(\mathfrak{R}(t))$ is not the solution of a reduced system of the form \cref{eqn:dRmanifold}, as its time derivative depends on the knowledge of the true solution $\mathfrak{R}$ in the full space $\mathcal{M}_{l,m}$. Indeed, formula \cref{eqn:diffSVDexplicit} for the differential of the SVD yields the following system of ODEs for the evolution of the modes and coefficients of the best rank$-r$ approximation $\Pi_\mathscr{M}(\mathfrak{R}(t))$: \begin{equation} \label{eqn:SVDtracking} \left\{\begin{aligned} \dot{U} &=(I-UU^T)\dot{\mathfrak{R}}Z(Z^TZ)^{-1} \\ & \qquad +\left[\sum_{\substack{1\ensuremath \leqi\ensuremath \leqr\\1\ensuremath \leqj\ensuremath \leqk}} \frac{\sigma_{r+j}}{\sigma_i^2-\sigma_{r+j}^2} (\sigma_{r+j} u_{r+j}^T\dot{\mathfrak{R}} v_i+\sigma_{i} u_i^T\dot{\mathfrak{R}} v_{r+j})u_{r+j}v_i^T\right]Z(Z^TZ)^{-1}\\ \dot{Z} & =\dot{\mathfrak{R}}^TU+\left[\sum_{\substack{1\ensuremath \leqi\ensuremath \leqr\\1\ensuremath \leqj\ensuremath \leqk}} \frac{\sigma_{r+j}}{\sigma_i^2-\sigma_{r+j}^2} (\sigma_{i} u_{r+j}^T\dot{\mathfrak{R}} v_i+\sigma_{r+j} u_i^T\dot{\mathfrak{R}} v_{r+j})v_{r+j}u_i^T\right]U, \end{aligned} \right. \end{equation} where the (time-dependent) SVD of $\mathfrak{R}(t)$ at time $t$ is $\sum_{i=1}^{r+k}\sigma_i u_iv_i^T$ with $k=\min(l,m)-r$ (allowing possibly $\sigma_{r+j}=0$ for $1\ensuremath \leqj\ensuremath \leqk$).
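The system \cref{eqn:SVDtracking} can be checked numerically against a finite difference of the truncated SVD. The following \texttt{numpy} sketch (random data and sizes are arbitrary assumptions; illustrative only) evaluates the right-hand sides above for a given $\dot{\mathfrak{R}}$ and compares $\dot{U}Z^T+U\dot{Z}^T$ with a centered difference of $\Pi_\mathscr{M}(\mathfrak{R}+t\dot{\mathfrak{R}})$ at $t=0$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
l, m, r = 8, 6, 3

def svd_trunc(A, r):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

A  = rng.standard_normal((l, m))       # current full matrix R(t)
Ad = rng.standard_normal((l, m))       # its time derivative dR/dt (arbitrary direction)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
V = Vt.T
k = min(l, m) - r
Um, Z = U[:, :r], V[:, :r] * s[:r]     # Pi_M(A) = Um @ Z.T
ZtZinv = np.linalg.inv(Z.T @ Z)

# Right-hand sides of the tracking system above.
Udot = (np.eye(l) - Um @ Um.T) @ Ad @ Z @ ZtZinv
Zdot = Ad.T @ Um
for i in range(r):
    for j in range(k):
        si, sj = s[i], s[r + j]
        a = U[:, r + j] @ Ad @ V[:, i]          # u_{r+j}^T Rdot v_i
        b = U[:, i] @ Ad @ V[:, r + j]          # u_i^T Rdot v_{r+j}
        c = sj / (si**2 - sj**2)
        Udot += c * (sj * a + si * b) * np.outer(U[:, r + j], V[:, i]) @ Z @ ZtZinv
        Zdot += c * (si * a + sj * b) * np.outer(V[:, r + j], U[:, i]) @ Um

dPi_model = Udot @ Z.T + Um @ Zdot.T

# Centered finite difference of the truncated SVD along Ad.
eps = 1e-6
dPi_fd = (svd_trunc(A + eps * Ad, r) - svd_trunc(A - eps * Ad, r)) / (2 * eps)
print(np.linalg.norm(dPi_model - dPi_fd) / np.linalg.norm(dPi_fd))   # small (FD accuracy)
\end{verbatim}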
One therefore sees from this best rank$-r$ governing differential \cref{eqn:SVDtracking} that its reduced DO system \cref{eqn:DOsystemAbstract} is obtained by (i) replacing the derivative $\dot{\mathfrak{R}}=\mathcal{L}(t,\mathfrak{R})$ with the approximation $\mathcal{L}(t,R)$ (first terms in each of the right-hand sides of \cref{eqn:SVDtracking}), and (ii) neglecting the dynamics corresponding to the interactions between the low-rank$-r$ approximation (singular values and vectors of order $1\ensuremath \leqi\ensuremath \leqr$) and the neglected normal component (singular values and vectors of order $r+j$ for $1\ensuremath \leqj\ensuremath \leqk$). These interactions are the last summation terms in each right-hand sides of \cref{eqn:SVDtracking}. Estimating these interactions in all generality would require, in addition to the knowledge of a rank $r$ approximation $R\simeq \Pi_\mathscr{M}(\mathfrak{R})$, either external observations \cite{lermusiaux_DAO1999} or closure models \cite{Wang_closurePOD_CMAME2012}, so as to estimate the otherwise neglected normal component $\mathfrak{R}-\Pi_\mathscr{M}(\mathfrak{R})=\sum_{j=1}^k \sigma_{r+j}u_{r+j}v_{r+j}^T$. Comparing the dynamics \cref{eqn:DOsystem} of the DO approximation to that of the governing differential \cref{eqn:SVDtracking} of the best low rank$-r$ approximation, a bound for the growth of the DO error is now obtained. \begin{theorem} \label{thm:DOError} Assume that both the original solution $\mathfrak{R}(t)\in\mathcal{M}_{l,m}$ (eqn.\;$\cref{eqn:dRstar}$) and its DO approximation $R(t)$ (eqn.\;$\cref{eqn:DOsystemAbstract}$) are defined on a time interval $[0,T]$ and that the following conditions hold: \begin{enumerate} \item \label{eqn:cond1} $\mathcal{L}$ is Lipschitz continuous, \emph{i.e.}\;equation \cref{eqn:lipschitz} holds. \item \label{eqn:cond2} The original (true) solution $\mathfrak{R}(t)$ remains close to the low rank manifold $\mathscr{M}$, in the sense that $\mathfrak{R}(t)$ does not cross the skeleton of $\mathscr{M}$ on $[0,T]$, \emph{i.e.}\;there is no crossing of the singular value of order $r$: \[\forall t\in[0,T],~\sigma_r(\mathfrak{R}(t))>\sigma_{r+1}(\mathfrak{R}(t)).\] \end{enumerate} Then, the error of the DO approximation $R(t)$ (eqn.\;\cref{eqn:DOsystemAbstract}) remains controlled by the best approximation error $||\mathfrak{R}-\Pi_{\mathscr{M}}(\mathfrak{R}(t))||$ on $[0,T]$: \begin{multline}\label{eqn:DOBound}\forall t\in [0,T], ~ ||R(t)-\Pi_\mathscr{M}(\mathfrak{R}(t))|| \ensuremath \leq \\ \int_0^t ||\mathfrak{R}(s)-\Pi_\mathscr{M}(\mathfrak{R}(s))||\left( K+\frac{||\mathcal{L}(s,\mathfrak{R}(s))||}{\sigma_{r}(\mathfrak{R}(s))-\sigma_{r+1}(\mathfrak{R}(s))}\right)e^{\eta (t-s)}\ensuremath \mathrm{d} s ,\end{multline} where $\eta$ is the constant \begin{equation}\label{eqn:growthRate} \eta= K+\sup_{t\in[0,T]}\frac{2}{\sigma_r(\mathfrak{R}(t))}||\mathcal{L}(t,\mathfrak{R}(t))||.\end{equation} \end{theorem} \begin{proof} A proof is given in \cref{app:proofError}. \end{proof} This statement improves the result expressed in \cite{koch2007dynamical} (Theorem 5.1), since no assumption is made on the smallness of the best approximation error $||\mathfrak{R}-\Pi_\mathscr{M}(\mathfrak{R})||$, nor on the boundedness of $||R-\Pi_\mathscr{M}(\mathfrak{R})||$. 
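As a qualitative illustration of \cref{thm:DOError} (a toy experiment under arbitrary assumptions: a small random linear operator and a crude forward-Euler time marching with re-orthonormalization, none of which is taken from the cited references), the sketch below integrates a full matrix dynamical system together with its DO approximation \cref{eqn:DOsystem}, and monitors the DO error against the best approximation error and the singular value gap entering the bound.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
l, m, r = 12, 10, 3
A = -0.5 * np.eye(l) + 0.1 * rng.standard_normal((l, l))   # hypothetical linear dynamics
B = 0.1 * rng.standard_normal((m, m))
L = lambda R: A @ R + R @ B                                 # Lipschitz vector field L(t,R)

def svd_trunc(R, r):
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

Rfull = rng.standard_normal((l, m))
U0, s0, Vt0 = np.linalg.svd(Rfull, full_matrices=False)
U, Z = U0[:, :r], Vt0[:r, :].T * s0[:r]                     # R(0) = Pi_M(Rfull(0))

dt, T = 1e-3, 2.0
for _ in range(int(T / dt)):
    Rfull = Rfull + dt * L(Rfull)                           # full system (forward Euler)
    Ldo = L(U @ Z.T)                                        # DO system, same crude scheme
    Udot = (np.eye(l) - U @ U.T) @ Ldo @ Z @ np.linalg.inv(Z.T @ Z)
    Zdot = Ldo.T @ U
    Unew, T_qr = np.linalg.qr(U + dt * Udot)                # re-orthonormalize U ...
    Z = (Z + dt * Zdot) @ T_qr.T                            # ... without changing U Z^T
    U = Unew

best = svd_trunc(Rfull, r)
s = np.linalg.svd(Rfull, compute_uv=False)
print(np.linalg.norm(U @ Z.T - best))   # DO error w.r.t. the best rank-r approximation
print(np.linalg.norm(Rfull - best))     # best approximation error (lower bound)
print(s[r - 1] - s[r])                  # singular value gap entering the bound
\end{verbatim}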
\cref{thm:DOError} also highlights two sufficient conditions for the error committed by the DO approximation to remain small: \subparagraph{Condition \ref{eqn:cond1}} The discrete operator $\mathcal{L}$ must not be too sensitive to the error $\mathfrak{R}(t)-R(t)$, namely the Lipschitz constant $K$ must be small. This type of error is incurred by any approximation made when evaluating the operator of a dynamical system (as a consequence of Gronwall's lemma \cite{hartman2002}). The Lipschitz constant $K$ also quantifies how fast the vector field $\mathcal{L}$ may deviate from its values when getting away from the low rank manifold $\mathscr{M}$. \subparagraph{Condition \ref{eqn:cond2}} Independently of the choice of the reduced order model, the solution of the initial system \cref{eqn:dRstar}, $\mathfrak{R}(t)$, must remain close to the manifold $\mathscr{M}$, or in other words, must remain far from the skeleton $\textrm{Sk}(\mathscr{M})$ of $\mathscr{M}$. As visible in \cref{fig:parabola}, the best rank $r$ approximation $\Pi_\mathscr{M}(\mathfrak{R})$ of $\mathfrak{R}$ exhibits a jump when $\mathfrak{R}$ crosses the skeleton, \emph{i.e.}\;when $\sigma_r(\mathfrak{R})=\sigma_{r+1}(\mathfrak{R})$ occurs. At that point, the discontinuity of $\Pi_\mathscr{M}(\mathfrak{R}(t))$ cannot be tracked by the DO or any other smooth dynamical approximation. Condition \ref{eqn:cond2} in some sense supersedes the stronger condition of ``smallness of the initial truncation error'' of the error analysis of \cite{koch2007dynamical}. Indeed, when $\sigma_r(\mathfrak{R})\simeq\sigma_{r+1}(\mathfrak{R})$ occurs, as observed numerically in \cite{musharbash2015error}, the DO solution may then diverge sharply from the SVD truncation. From the point of view of model order reduction, the resulting error can be related to the evolution of the residual $\mathfrak{R}-\Pi_\mathscr{M}(\mathfrak{R})$ that is not accounted for by the reduced order model. When the crossing of singular values occurs, neglected modes in the approximation \cref{eqn:KLdecomposition} become ``dominant'', but cannot be captured by a reduced order model that has evolved only the modes that were initially dominant. In such cases, one either has to restart the simulation from the initial conditions with a larger subspace size, or the size of the DO subspace has to be increased during the run and corrections applied using external information. The latter learning of the subspace can be done from measurements or from additional Monte-Carlo simulations and breeding of the best low rank-$r$ approximation \cite{lermusiaux_DAO1999,kalnay2003atmospheric,sapsis2012dynamical}. Last, it should be noted that the growth rate $\eta$ (equation \cref{eqn:growthRate}) of the error increases as the evolved trajectory becomes close to being singular, \emph{i.e.}\;when $\sigma_r(\mathfrak{R}(t))$ goes to zero. This growth rate comes mathematically from the Gronwall estimates of the proofs, and is intuitively related to the fact that the tangent projection $\Pi_\mathcal{T}$ in \cref{eqn:DOsystemAbstract} is applied at the location of the DO solution $R(t)$ instead of that of the best approximation $\Pi_\mathscr{M}(\mathfrak{R}(t))$. If the evolved trajectory is close to being singular, the local curvature of $\mathscr{M}$ experienced by the DO solution $R(t)$ and the best approximation $\Pi_\mathscr{M}(\mathfrak{R}(t))$ is high.
Therefore the tangent spaces $\TT{R(t)}$ and $\TT{\Pi_\mathscr{M}(\mathfrak{R}(t))}$ may be oriented very differently because of this curvature, resulting in increased error when approximating the tangent projection operator $\Pi_\TT{\Pi_\mathscr{M}(\mathfrak{R}(t))}$ by $\Pi_\TT{R(t)}$ in the DO system \cref{eqn:DOsystemAbstract}. \begin{remark} \Cref{prop:convergenceScheme,thm:DOError} may be generalized in a straightforward manner to the case of any smooth embedded manifold $\mathscr{M}\subset E$ (Theorems 2.5 and 2.6 in \cite{FepponThesis}). \end{remark} \section{Optimization on the fixed rank matrix manifold for tracking the best low rank approximation} \label{sec:numerical} This section applies the framework of Riemannian matrix optimization \cite{edelman1998,absil2009all} as an alternative approach to the direct tracking of the truncated SVD of a time-dependent matrix $\mathfrak{R}(t)\in\mathcal{M}_{l,m}$. At the end, we provide a remark (\cref{remark:DO-optimization}) linking the two approaches within the context of the DO system. Consider a given (full-rank) matrix $\mathfrak{R}\in\mathcal{M}_{l,m}$ and recall that $\Pi_\mathscr{M}(\mathfrak{R})$, when it is unambiguously defined, is the unique minimizer of the distance functional \begin{equation} \label{eqn:J}\begin{array}{ccccc}J &: & \mathscr{M} & \longrightarrow & \mathbb{R}\\ & & R & \longmapsto & \frac{1}{2}||R-\mathfrak{R}||^2 . \end{array}\end{equation} Riemannian optimization algorithms, namely gradient descents and Newton methods on the fixed rank manifold $\mathscr{M}$, are now used to provide alternatives to the more standard direct algebraic algorithms \cite{Golub2012} for evaluating the truncated SVD $\Pi_\mathscr{M}(\mathfrak{R})$. Such optimizations can be useful to dynamically update the best low rank approximation of a time dependent matrix $\mathfrak{R}(t)$: this is because for a sufficiently small time step $\delta t$, $R(t)=\Pi_\mathscr{M}(\mathfrak{R}(t))$ is expected to be close to $R(t+\delta t)=\Pi_\mathscr{M}(\mathfrak{R}(t+\delta t))$, hence $\Pi_\mathscr{M}(\mathfrak{R}(t))$ provides a good initial guess for the minimization of $R\mapsto ||\mathfrak{R}(t+\delta t)-R||$. The minimization of the distance functional $J$ has already been considered in the matrix optimization community \cite{absil2009optimization,vandereycken2013low,mishra2014}, where gradient descent and Newton methods on the fixed rank manifold were derived, but not for the metric inherited from the ambient space $\mathcal{M}_{l,m}$ (eqn.\;\cref{eqn:metric}), which is done in what follows. As a benefit of this ``extrinsic'' approach, already noticed in \cite{absil2013extrinsic}, the covariant Hessian of $J$ relates directly to the Weingarten map at critical points: this allows us to obtain the convergence of the gradient descent for almost all initial data (\cref{prop:localMinima}). Ingredients required for the minimization of $J$ on the manifold $\mathscr{M}$ are first derived, namely the covariant gradient and Hessian. As reviewed in \cite{edelman1998}, usual optimization algorithms such as gradient and Newton methods can be straightforwardly adapted to matrix manifolds.
The differences with their Euclidean counterparts are that: (i) usual gradients and Hessians must be replaced by their covariant equivalents; (ii) one needs to follow geodesics instead of straight lines to move on the manifold; and, (iii) directions followed at previous iterations, needed for example in the conjugate gradient method, must be transported to the current location (equation \cref{eqn:parallelTransport}). The covariant gradient and Hessian are recalled in the following definition (for details, see \cite{absil2009optimization}, chapter 5). \begin{definition} \label{def:gradienthessian} Let $J$ be a smooth function defined on $\mathscr{M}$ and $R\in\mathscr{M}$. The covariant gradient of $J$ at $R$ is the unique vector $\nabla J\in \TT{R}$ such that \[ \forall X\in \TT{R},\, J(\exp_R(tX))=J(R)+t\langle \nabla J,X \rangle+o(t).\] The covariant Hessian $\mathcal{H}J$ of $J$ at $R$ is the linear map on $\TT{R}$ defined by \[ \mathcal{H}J(X)=\nabla_X \nabla J, \] and the following second order Taylor approximation of $J$ holds: \[ J(\exp_R(tX))=J(R)+t\langle \nabla J,X \rangle+\frac{t^2}{2} \langle X,\mathcal{H}J(X) \rangle+o(t^2).\] \end{definition} The following proposition (see \cite{absil2013extrinsic}) explains how these quantities are related to the usual gradient and Hessian, so that they become accessible for computations. \begin{proposition} \label{prop:gradientHessian} Let $J$ be a smooth function defined in the ambient space $\mathcal{M}_{l,m}$ and denote by $\mathrm{D} J$ and $\mathrm{D}^2 J$ its Euclidean gradient and Hessian, respectively. Then the covariant gradient and Hessian are given by \begin{equation} \label{eqn:covgradient} \nabla J=\Pi_\TT{R}(\mathrm{D} J),\end{equation} \begin{equation}\label{eqn:covhessian} \mathcal{H}J(X)=\Pi_\TT{R}(\mathrm{D}^2 J(X))+\mathrm{D}\Pi_\TT{R}(X)\cdot \left[(I-\Pi_\TT{R})(\mathrm{D} J)\right]. \end{equation} \end{proposition} Applying \cref{prop:gradientHessian} directly, the gradient and the Hessian of $J$ at $R=UZ^T\in \mathscr{M}$ are given by: \begin{equation}\label{eqn:covgradientJ}\nabla J =((I-UU^T)(UZ^T-\mathfrak{R})Z(Z^TZ)^{-1},(UZ^T-\mathfrak{R})^TU), \end{equation} \begin{equation}\label{eqn:covhessianJ} \begin{array}{ccccc} \mathcal{H}J &: & \mathcal{H}_{(U,Z)} & \rightarrow & \mathcal{H}_{(U , Z)} \\ & & \begin{pmatrix}X_U\\X_Z\end{pmatrix} & \mapsto & \begin{pmatrix}X_U-N_{UZ^T}(\mathfrak{R})X_Z(Z^TZ)^{-1}\\X_Z-N_{UZ^T}(\mathfrak{R})^TX_U\end{pmatrix},\end{array} \end{equation} where $N_{UZ^T}(\mathfrak{R})=(I-\Pi_\TT{UZ^T})(\mathfrak{R}-UZ^T)=(I-UU^T)\mathfrak{R}(I-Z(Z^TZ)^{-1}Z^T)$ is the orthogonal projection of $\mathfrak{R}-R$ onto the normal space. The Newton direction $X$ is found by solving the linear system $\mathcal{H}J(X)=-\nabla J(R)$, which reduces to \[ \left\{\begin{array}{r} X_UA +BX_Z =E\\ B^TX_U+X_Z = F,\end{array}\right.\] with $A=(Z^TZ)$, $B=-N_{UZ^T}(\mathfrak{R})$, $E=(I-UU^T)\mathfrak{R} Z$ and $F=-Z+\mathfrak{R}^TU$. This requires solving the Sylvester equation $X_UA-BB^TX_U=E-BF$ for $X_U$, which can be done using standard techniques \cite{kirrinnis2001fast}, before computing $X_Z$ from $X_Z=F-B^TX_U$.
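For concreteness, the following Python/NumPy sketch evaluates the covariant gradient \cref{eqn:covgradientJ} and the Newton direction via the Sylvester equation above on a small random example, and checks that the computed direction satisfies $\mathcal{H}J(X)=-\nabla J(R)$. This is only an illustrative sanity check of the formulas, not part of the original development; the dimensions, variable names, and the use of \texttt{scipy.linalg.solve\_sylvester} are choices made for this sketch.

\begin{verbatim}
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
l, m, r = 8, 6, 2
Rf = rng.standard_normal((l, m))                  # ambient matrix ("R frak")
U, _ = np.linalg.qr(rng.standard_normal((l, r)))  # orthonormal modes
Z = rng.standard_normal((m, r))                   # coefficients
R = U @ Z.T                                       # current rank-r point

ZtZinv = np.linalg.inv(Z.T @ Z)
PUperp = np.eye(l) - U @ U.T                      # projector onto span(U)^perp
PZperp = np.eye(m) - Z @ ZtZinv @ Z.T

# Covariant gradient of J(R) = 0.5*||R - Rf||^2  (eqn. covgradientJ)
GU = PUperp @ (R - Rf) @ Z @ ZtZinv
GZ = (R - Rf).T @ U

# Data of the Newton system H J(X) = -grad J
N = PUperp @ Rf @ PZperp                          # normal component N_{UZ^T}(Rf)
A, B = Z.T @ Z, -N
E, F = PUperp @ Rf @ Z, -Z + Rf.T @ U

# Sylvester equation  X_U A - B B^T X_U = E - B F, then  X_Z = F - B^T X_U
XU = solve_sylvester(-B @ B.T, A, E - B @ F)
XZ = F - B.T @ XU

# Sanity check against eqn. covhessianJ:  H J(X) = -grad J
print(np.allclose(XU - N @ XZ @ ZtZinv, -GU),
      np.allclose(XZ - N.T @ XU, -GZ))
\end{verbatim}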
It is now proven that the distance function $J$ may admit several critical points, but a unique local, hence global, minimum on $\mathscr{M}$. As a consequence, saddle points of $J$ are unstable equilibrium solutions of the gradient flow $\dot{R}=-\nabla J(R)$ and hence are expected to be avoided by gradient descent, which will converge in practice to the global minimum $\Pi_\mathscr{M}(\mathfrak{R})$. This ``almost sure'' convergence guarantee for the gradient descent may be compared to probabilistic analyses investigated in more general contexts \cite{Pitaval2015,WeiCaiChanEtAl2016}. Our result also shows that one cannot expect the Newton method to converge for initial guesses that are far from the optimum. Indeed, this method seeks a zero of the gradient $\nabla J$ rather than a true minimum, and hence may converge to or oscillate around several of the saddle points of the objective function. \begin{proposition} \label{prop:localMinima} Consider $\mathfrak{R}\in\mathcal{M}_{l,m}$ such that its projection onto $\mathscr{M}$ is well defined, that is $\sigma_r(\mathfrak{R})>\sigma_{r+1}(\mathfrak{R})$. Then the distance function $J$ to $\mathfrak{R}$ (eqn.\;\cref{eqn:J}) admits no other local minima than $\Pi_{\mathscr{M}}(\mathfrak{R})$. In other words, for almost any initial rank $r$ matrix $U(0)Z(0)^T$, the solution $U(t)Z(t)^T$ of the gradient flow \begin{equation} \left\{\begin{aligned} \dot{U} & =(I-UU^T)\mathfrak{R} Z(Z^TZ)^{-1}\\ \dot{Z} & =\mathfrak{R}^TU-Z \end{aligned}\right. \end{equation} converges to $\Pi_\mathscr{M}(\mathfrak{R})$, the rank $r$ truncated SVD of $\mathfrak{R}$. \end{proposition} \begin{proof} It is known from \cref{prop:normal} that the points $R$ for which $\nabla J$ vanishes are such that $\mathrm{D} J=R-\mathfrak{R}\in\NN{R}$ is a normal vector. Since in addition $\mathrm{D}^2 J=I$, \cref{prop:gradientHessian} yields the identity \[\begin{aligned} \forall X\in\TT{R}, \quad \langle \mathcal{H} J(X),X \rangle & = \langle X, X\rangle -\langle\mathrm{D}\Pi_\TT{R}(X)\cdot N,X\rangle \\ & = ||X||^2-\langle L_R(N)(X),X \rangle, \end{aligned}\] where $N=-(I-\Pi_\TT{R})(\mathrm{D} J)=-\mathrm{D} J=\mathfrak{R}-R\in \NN{R}$, since $\nabla J=\Pi_\TT{R}(\mathrm{D} J)$ vanishes at $R$. Let $\mathfrak{R}=\sum_{i=1}^{r+k} \sigma_i(\mathfrak{R}) u_iv_i^T$ be the SVD of $\mathfrak{R}$. For $\mathfrak{R}-R$ to be a normal vector, $R$ must necessarily be of the form $R=\sum_{i\in A} \sigma_i u_iv_i^T$, where $A$ is a subset of $r$ indices $1\leq i\leq r+k$. Then the minimum eigenvalue of the Hessian $\mathcal{H}J$ is $1-\frac{\sigma_1(N)}{\sigma_r(R)}$, which is positive if and only if $\sigma_r(R)>\sigma_1(N)$. This happens only for $R=\Pi_\mathscr{M}(\mathfrak{R})$. \end{proof} \begin{remark} The reader is referred to \cite{jost2008riemannian} for details regarding the almost sure convergence of sufficiently smooth gradient flows towards the unique minimizer of a function (Morse theory). \end{remark} In \cref{fig:optimization}, a matrix $\mathfrak{R}\in\mathcal{M}_{l,m}$ with $m=100$ and $l=150$ is considered, with singular values chosen to be equally spaced in the interval $[1,10]$. Three optimization algorithms detailed in \cite{edelman1998} (gradient descent with fixed step, conjugate gradient descent, and Newton method) are implemented to find the best rank $r=5$ approximation of $\mathfrak{R}$, with a random initialization. Convergence curves are plotted in \cref{fig:optimization}: the linear and quadratic rates characteristic of gradient and Newton methods, respectively, are obtained.
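As a small numerical illustration of \cref{prop:localMinima}, the following Python/NumPy sketch integrates the gradient flow above with explicit Euler steps on a random matrix having a clear singular value gap, and compares the limit with the rank-$r$ truncated SVD. The step size, number of iterations, test matrix, and the QR-based re-orthonormalization of $U$ (compensated in $Z$ so that the product $UZ^T$ is unchanged) are implementation choices of this sketch and differ from the experiment of \cref{fig:optimization}.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
l, m, r = 30, 20, 3
# Test matrix with a clear gap between the r-th and (r+1)-th singular values
Uo, _ = np.linalg.qr(rng.standard_normal((l, m)))
Vo, _ = np.linalg.qr(rng.standard_normal((m, m)))
svals = np.concatenate([[10.0, 9.0, 8.0], np.linspace(0.5, 3.0, m - 3)])
Rf = Uo @ np.diag(svals) @ Vo.T

# Random rank-r initial point R = U Z^T
U, _ = np.linalg.qr(rng.standard_normal((l, r)))
Z = rng.standard_normal((m, r))

dt = 0.1
for _ in range(600):
    dU = (Rf @ Z - U @ (U.T @ (Rf @ Z))) @ np.linalg.inv(Z.T @ Z)
    dZ = Rf.T @ U - Z
    U, Z = U + dt * dU, Z + dt * dZ
    # Re-orthonormalize U, compensating in Z so that U Z^T is unchanged
    U, T = np.linalg.qr(U)
    Z = Z @ T.T

u, s, vt = np.linalg.svd(Rf, full_matrices=False)
best = u[:, :r] @ np.diag(s[:r]) @ vt[:r, :]      # rank-r truncated SVD
print(np.linalg.norm(U @ Z.T - best) / np.linalg.norm(best))  # should be tiny
\end{verbatim}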
As expected from \cref{prop:localMinima}, gradient descent globally converges to the truncated SVD, while Newton iterations may be attracted to any of the saddle points. \begin{figure} \caption{\small Convergence curves of optimization algorithms for minimizing the distance function $J$ (equation \cref{eqn:J}).} \label{fig:optimization} \end{figure} \begin{remark} \label{remark:DO-optimization} The above gradient descent and Newton methods can be combined with previously-derived numerical schemes for the time-integrated DO eqs.\;\cref{eqn:fullSVDscheme}. One class of schemes consists of discretizing the ODEs \cref{eqn:DOsystem} in time, as in \cite{sapsis2009dynamically,ueckermann_et_al_JCP2013,musharbash2015error,koch2007dynamical}. Another follows \cref{eqn:fullSVDscheme} directly and aims to compute the SVD truncation $\Pi_\mathscr{M}(\mathfrak{R})$ of $\mathfrak{R}=UZ^T+\delta t\,\overline{\mathcal{L}}(t,UZ^T,\delta t)$, where the increment function can be that of Euler or of higher-order explicit time marching (of course, the total rank of this $\mathfrak{R}$ depends on the dynamics and numerical scheme, and can be greater than $r$). Examining the expression of the gradient of $J$ (eqn.\;\cref{eqn:covgradientJ}), one time-step of the above schemes can be interpreted as one gradient descent step for minimizing the functional $J$. Therefore, optimization algorithms on the Riemannian manifold $\mathscr{M}$ can be combined with such DO time-stepping schemes, as further investigated in \cite{Feppon2016a}. A key advantage of such optimization is the capability of altering the rank $r$ of the dynamical approximation over a time step or stage (e.g.\;a rank $p>r$ approximation can be used in the target cost functional $J$). These strategies may also be utilized for the computation of nonlinear singular vectors \cite{VaidyaNagarajothers2010} or for continuous dominant subspace estimation \cite{lermusiaux2001evolving}. Finally, it can also be combined with adaptive learning schemes \cite{lermusiaux_DAO1999,lermusiaux_PhysD2007,sapsis2012dynamical} which use system measurements and/or Monte-Carlo breeding nonlinear simulations to estimate the missing fastest growing modes. Such additional information can then correct the predictor of the SVD of $\mathfrak{R}(t+\delta t)$ in directions orthogonal to the discrete DO increments and essentially increase the subspace size, e.g.\;when the estimates of $\sigma_{r+1}(\mathfrak{R}(t))$ become close to those of $\sigma_r(\mathfrak{R}(t))$. \end{remark} \section{Conclusion} A geometric approach was developed for dynamical model-order reduction, through the analysis of the embedded geometry of the fixed rank manifold $\mathscr{M}$. The extrinsic curvatures of matrix manifolds were studied and geodesic equations obtained. The relationships among these notions and the differential of the orthogonal projection of the original system dynamics onto the tangent spaces of the manifold were derived and linked to the DO approximation. These geometric results allowed us to derive the differential of the truncated SVD, interpreted as an orthogonal projection onto the fixed rank matrix manifold.
The DO approximation, with its instantaneous application of the SVD truncation of the stochastic/parametric dynamics, was shown to be the natural dynamical reduced-order model that is optimal on short integration times among all other reduced-order models that evaluate the operator of the full-space dynamics only at low rank approximations. Additionally, the explicit dynamical system satisfied by the best low rank approximation was derived and used to sharpen the error analysis of the DO approximation. The DO method was related to Riemannian matrix optimization, for which gradient descent methods were applied and shown capable of adaptively tracking the best low rank approximation of dynamic matrices. This may prove beneficial for the time integration of the DO approximation. Such approaches, in contrast with classic numerical integrations of the governing differential equations for the DO modes and their coefficients, open new avenues for efficient DO numerical schemes. In general, there are now many promising directions for developing new, efficient, dynamic reduced-order methods, based on the geometry and shape of the full-space dynamics. Opportunities abound over a wide range of needs and applications of uncertainty quantification and dynamical system analyses and optimization in oceanic and atmospheric sciences, thermal-fluid sciences and engineering, electrical engineering, and chemical and biological sciences and engineering. \section*{Acknowledgments} We thank the members of the MSEAS group at MIT as well as Camille Gillot, Christophe Zhang, and Saviz Mowlavi for insightful discussions related to this topic. We are grateful to the Office of Naval Research for support under grants N00014-14-1-0725 (Bays-DA) and N00014-14-1-0476 (Science of Autonomy -- LEARNS) to the Massachusetts Institute of Technology. \appendix \section{Proof of \cref{thm:distanceDiff}} \label{app:proofDiff} \begin{lemma} \label{lem:continuityProj} Let $\Omega$ be an open set over which the projection $\Pi_\mathscr{M}$ is uniquely defined by eqn.\;\cref{eqn:CondunicityProj} and such that condition \cref{eqn:Condfrontier} holds. Then $\Pi_\mathscr{M}$ is continuous on $\Omega$. \end{lemma} \begin{proof} Consider a sequence $\mathfrak{R}_n$ converging in $E$ to $\mathfrak{R}$ and denote by $\Pi_\mathscr{M}(\mathfrak{R}_n)$ the corresponding projections. Let $\epsilon>0$ be a real number such that $\forall n\geq 0,\ ||\mathfrak{R}_n-\mathfrak{R}||<\epsilon$. Since \[\begin{aligned}||\Pi_\mathscr{M}(\mathfrak{R}_n)-\mathfrak{R}|| & \leq||\Pi_\mathscr{M}(\mathfrak{R}_n)-\mathfrak{R}_n||+||\mathfrak{R}_n-\mathfrak{R}|| \\ & \leq||\mathfrak{R}_n-\Pi_\mathscr{M}(\mathfrak{R})||+||\mathfrak{R}_n-\mathfrak{R}||\\ & \leq 2\epsilon+||\mathfrak{R}-\Pi_\mathscr{M}(\mathfrak{R})||,\end{aligned}\] the sequence $\Pi_\mathscr{M}(\mathfrak{R}_n)$ is bounded. Denote by $R\in\overline{\mathscr{M}}$ a limit point of this sequence. Passing to the limit in the inequality $||\mathfrak{R}_n-\Pi_\mathscr{M}(\mathfrak{R}_n)||\leq||\mathfrak{R}_n-\Pi_\mathscr{M}(\mathfrak{R})||$, one obtains $||\mathfrak{R}-R||\leq||\mathfrak{R}-\Pi_\mathscr{M}(\mathfrak{R})||$. The uniqueness of the projection, and the fact that there is no $R\in\overline{\mathscr{M}}\backslash \mathscr{M}$ satisfying this inequality, show that $R=\Pi_\mathscr{M}(\mathfrak{R})$.
Since the bounded sequence $(\Pi_\mathscr{M}(\mathfrak{R}_n))$ has a unique limit point, one deduces the convergence $\Pi_\mathscr{M}(\mathfrak{R}_n)\rightarrow \Pi_\mathscr{M}(\mathfrak{R})$ and hence the continuity of the projection map at $\mathfrak{R}$. \end{proof} \begin{lemma} \label{lem:kappaicond} At any point $\mathfrak{R}\in\Omega$, any principal curvature $\kappa_i(N)$ in the direction $N$ at $\Pi_\mathscr{M}(\mathfrak{R})$ must satisfy $\kappa_i(N)<1$. \end{lemma} \begin{proof} It is shown in \cref{prop:gradientHessian} that the covariant Hessian of the distance function $J(R)=\frac{1}{2}||\mathfrak{R}-R||^2$ at $R=\Pi_\mathscr{M}(\mathfrak{R})$ is given by \begin{equation} \label{eqn:HessianJ}\begin{array}{cccccc}\mathcal{H}J &: & \TT{R} & \rightarrow & \TT{R} \\ & & X & \mapsto & X-L_R(N)(X),\end{array}\end{equation} where $N$ is the normal direction $N=\mathfrak{R}-\Pi_\mathscr{M}(\mathfrak{R})$. Since $R=\Pi_\mathscr{M}(\mathfrak{R})$ must be a local minimum of $J$, this Hessian must be positive semi-definite, namely any eigenvalue $\kappa_i(N)$ of the Weingarten map $L_R(N)$ must satisfy $1-\kappa_i(N)\geq 0$. Now, consider $s>1$ such that $R+sN\in\Omega$ and notice that $||R+sN-\Pi_\mathscr{M}(\mathfrak{R})||=s||N||$. Since \[||R+sN-\Pi_\mathscr{M}(R+sN)||\leq||R+sN-\Pi_\mathscr{M}(\mathfrak{R})||=s||N||,\] the uniqueness of the projection in $\Omega$ implies that $\Pi_\mathscr{M}(R+sN)=R$ (\emph{i.e.}\;the projection is invariant along orthogonal rays). The linearity of the Weingarten map in $N$ implies $\kappa_i(sN)=s\kappa_i(N)$, hence $\kappa_i(N)\leq\frac{1}{s}<1$, which concludes the proof. \end{proof} \begin{proof}[Proof of \cref{thm:distanceDiff}] Consider the function $f(\mathfrak{R},R)=\Pi_\TT{R}(R-\mathfrak{R})$ defined on $\mathscr{M}\times E$. The differential of $f$ with respect to the variable $R$ in a direction $X\in \TT{R}$ at $R=\Pi_\mathscr{M}(\mathfrak{R})$ is the map \[ X\mapsto \Pi_\TT{R}X-\mathrm{D}_X \Pi_\TT{R}(\mathfrak{R}-R)=(I-L_R(N))(X).\] \cref{lem:kappaicond} implies that the Jacobian $\partial_{R,X} f$ has no zero eigenvalue and hence is invertible. The implicit function theorem ensures the existence of a diffeomorphism $\phi$ mapping an open neighborhood $\Omega_E\subset E$ of $\mathfrak{R}$ to an open neighborhood $\Omega_\mathscr{M}\subset \mathscr{M}$ of $R$, such that for any $\mathfrak{R}'\in\Omega_E$, $\phi(\mathfrak{R}')$ is the unique element of $\Omega_\mathscr{M}$ satisfying $f(\mathfrak{R}',\phi(\mathfrak{R}'))=0$. By continuity of the projection (\cref{lem:continuityProj}), one can assume, by replacing $\Omega_E$ with the open subset $\Omega_E\cap \Pi_\mathscr{M}^{-1}(\Omega_\mathscr{M})$, that $\Pi_\mathscr{M}(\Omega_E)\subset \Omega_\mathscr{M}$. Then, the equality $f(\mathfrak{R}',\Pi_\mathscr{M}(\mathfrak{R}'))=0$ implies by uniqueness: $\phi(\mathfrak{R}')=\Pi_\mathscr{M}(\mathfrak{R}')$. Hence $\Pi_\mathscr{M}=\phi$ on $\Omega_E$, and, in particular, $\Pi_\mathscr{M}$ is differentiable.
Finally, for a given $X\in E$, one can now solve \cref{eqn:SVDdifferential1} by projection onto the eigenvectors of $L_R(N)$ and obtain \cref{eqn:diffProjection}.\, \end{proof} \section{Proof of \cref{thm:DOError}} \label{app:proofError} \begin{lemma} \label{corol:boundDXPiM} For any $\mathfrak{R}\in\mathcal{M}_{l,m}$ satisfying $\sigma_r(\mathfrak{R})>\sigma_{r+1}(\mathfrak{R})$ and $\mathfrak{X}\in\mathcal{M}_{l,m}$: \[ ||\mathrm{D}_\mathfrak{X}\Pi_\mathscr{M}(\mathfrak{R})-\Pi_\TT{\Pi_\mathscr{M}(\mathfrak{R})}\mathfrak{X}||\leq\frac{\sigma_{r+1}(\mathfrak{R})}{\sigma_{r}(\mathfrak{R})-\sigma_{r+1}(\mathfrak{R})}||\mathfrak{X}||.\] \end{lemma} \begin{proof} This is a consequence of the fact that the maximum eigenvalue in the decomposition \cref{eqn:diffProjection} is \[ \max_{i,j}\frac{\frac{\sigma_{r+j}(\mathfrak{R})}{\sigma_i(\mathfrak{R})}}{1-\frac{\sigma_{r+j}(\mathfrak{R})}{\sigma_i(\mathfrak{R})}}=\frac{\sigma_{r+1}(\mathfrak{R})}{\sigma_r(\mathfrak{R})-\sigma_{r+1}(\mathfrak{R})}.\] \end{proof} The following lemma can be found in \cite{WeiCaiChanEtAl2016} and Theorem 2.6.1 in \cite{Golub2012}. \begin{lemma} \label{lemma:curvatureLemma2} For any points $R^1, R^2\in\mathscr{M}$ the following estimate holds: \begin{equation} \label{eqn:curvatureBound} ||\Pi_\TT{R^1}-\Pi_\TT{R^2}||\leq\min\left(1,\frac{2}{\sigma_r(R^1)}||R^1-R^2||\right ),\end{equation} where the norm of the left-hand side is the operator norm.\end{lemma} \begin{remark} This result from \cite{WeiCaiChanEtAl2016} enhances the ``curvature estimates'' of Lemma 4.2 of \cite{koch2007dynamical}, as it provides a global bound and hence avoids the smallness assumption on the initial truncation error. Note that such a bound always exists at every point of a smooth manifold (Definition 2.17 of \cite{FepponThesis}). A purely geometric analysis (Lemma 3.1 in \cite{FepponThesis}) may also be used to yield locally a sharper bound than \cref{eqn:curvatureBound}, but with a larger constant $5/2$ instead of $2$ as a global estimate. \end{remark} \begin{proof}[Proof of \cref{thm:DOError}] Denote $R^*(t)=\Pi_\mathscr{M}(\mathfrak{R}(t))$. Since $\dot{R}^*(t)=\mathrm{D}_{\dot{\mathfrak{R}}}\Pi_\mathscr{M}(\mathfrak{R}(t))$, bounding \cref{eqn:SVDdifferential1} and using \cref{eqn:dRstar} and \cref{corol:boundDXPiM} yields: \[ ||\dot{R}-\dot{R}^*||\leq||\Pi_\TT{R^*}(\mathcal{L}(t,\mathfrak{R}))-\Pi_\TT{R}(\mathcal{L}(t,R))||+\frac{\sigma_{r+1}(\mathfrak{R})}{\sigma_r(\mathfrak{R})-\sigma_{r+1}(\mathfrak{R})}||\mathcal{L}(t,\mathfrak{R})||.\] Furthermore, by the triangle inequality, \[ \begin{aligned} ||\Pi_\TT{R^*}(\mathcal{L}(t,\mathfrak{R}))-\Pi_\TT{R}(\mathcal{L}(t,R))|| & \leq ||\Pi_\TT{R^*}(\mathcal{L}(t,\mathfrak{R}))-\Pi_\TT{R}(\mathcal{L}(t,\mathfrak{R}))||\\ &+||\Pi_\TT{R}(\mathcal{L}(t,\mathfrak{R}))-\Pi_\TT{R}(\mathcal{L}(t,R^*))|| \\ &+||\Pi_\TT{R}(\mathcal{L}(t,R^*))-\Pi_\TT{R}(\mathcal{L}(t,R))||.\end{aligned}\] \cref{lemma:curvatureLemma2} (first eqn.) and the Lipschitz continuity of $\mathcal{L}$ (last two eqns.) then imply \begin{gather*} ||\Pi_\TT{R^*}(\mathcal{L}(t,\mathfrak{R}))-\Pi_\TT{R}(\mathcal{L}(t,\mathfrak{R}))||\leq \frac{2}{\sigma_r(R^*)}||R-R^*||\,||\mathcal{L}(t,\mathfrak{R})||, \\ ||\Pi_\TT{R}(\mathcal{L}(t,\mathfrak{R}))-\Pi_\TT{R}(\mathcal{L}(t,R^*))||\leq K||\mathfrak{R}-R^*||,\\ ||\Pi_\TT{R}(\mathcal{L}(t,R^*))-\Pi_\TT{R}(\mathcal{L}(t,R))||\leq K||R-R^*||.
\end{gather*} Finally, the following inequality is derived, combining all of the above equations: \begin{multline}||\dot{R}-\dot{R}^*||\\ \leq \left(K+\frac{2||\mathcal{L}(t,\mathfrak{R})||}{\sigma_r(R^*)}\right)||R-R^*||+\left(K+\frac{||\mathcal{L}(t,\mathfrak{R})||}{\sigma_r(\mathfrak{R})-\sigma_{r+1}(\mathfrak{R})}\right)||\mathfrak{R}-R^*||.\end{multline} An application of Gronwall's Lemma (see Corollary 4.3 in \cite{hartman2002}) yields \cref{eqn:DOBound}.\, \end{proof} \end{document}
\begin{document} \title{MCMC for non-linear state space models \\ using ensembles of latent sequences} \begin{abstract} Non-linear state space models are a widely-used class of models for biological, economic, and physical processes. Fitting these models to observed data is a difficult inference problem that has no straightforward solution. We take a Bayesian approach to the inference of unknown parameters of a non-linear state space model; this, in turn, requires the availability of efficient Markov Chain Monte Carlo (MCMC) sampling methods for the latent (hidden) variables and model parameters. Using the ensemble technique of Neal (2010) and the embedded HMM technique of Neal (2003), we introduce a new Markov Chain Monte Carlo method for non-linear state space models. The key idea is to perform parameter updates conditional on an enormously large ensemble of latent sequences, as opposed to a single sequence, as with existing methods. We look at the performance of this ensemble method when doing Bayesian inference in the Ricker model of population dynamics. We show that for this problem, the ensemble method is vastly more efficient than a simple Metropolis method, as well as $1.9$ to $12.0$ times more efficient than a single-sequence embedded HMM method, when all methods are tuned appropriately. We also introduce a way of speeding up the ensemble method by performing partial backward passes to discard poor proposals at low computational cost, resulting in a final efficiency gain of $3.4$ to $20.4$ times over the single-sequence method. \end{abstract} \section{Introduction} Consider an observed sequence $z_{1}, \ldots, z_{N}$. In a state space model for $Z_{1}, \ldots, Z_{N}$, the distribution of the $Z_{i}$'s is defined using a latent (hidden) Markov process $X_{1}, \ldots, X_{N}$. We can describe such a model in terms of a distribution for the first hidden state, $p(x_{1})$, transition probabilities between hidden states, $p(x_{i} | x_{i-1})$, and emission probabilities, $p(z_{i} | x_{i})$, with these distributions dependent on some unknown parameters $\theta$. While the state space model framework is very general, only two classes of state space models, Hidden Markov Models (HMM's) and linear Gaussian models, have efficient, exact inference algorithms. The forward-backward algorithm for HMM's and the Kalman filter for linear Gaussian models allow us to perform efficient inference of the latent process, which in turn allows us to perform efficient parameter inference, using an algorithm such as Expectation-Maximization for maximum likelihood inference, or various MCMC methods for Bayesian inference. No such exact and efficient algorithms exist for models with a continuous state space and non-linear state dynamics, non-Gaussian transition distributions, or non-Gaussian emission distributions, such as the Ricker model we consider later in this paper. In cases where we can write down a likelihood function for the model parameters conditional on latent and observed variables, it is possible to perform Bayesian inference for the parameters and the latent variables by making use of sampling methods such as MCMC. For example, one can perform inference for the latent variables and the parameters by alternately updating them according to their joint posterior. Sampling of the latent state sequence $x = (x_{1}, \ldots, x_{N})$ is difficult for state space models when the state space process has strong dependencies --- for example, when the transitions between states are nearly deterministic.
To see why, suppose we sample from $\pi(x_{1}, \ldots, x_{N} | z_{1}, \ldots, z_{N})$ using Gibbs sampling, which samples the latent variables one at a time, conditional on values of other latent variables, the observed data, and the model parameters. In the presence of strong dependencies within the state sequence, the conditional distribution of a latent variable will be highly concentrated, and we will only be able to change it slightly at each variable update, even when the marginal distribution of this latent variable, given $z_{1}, \ldots, z_{N}$, is relatively diffuse. Consequently, exploration of the space of latent sequences will be slow. The embedded HMM method of Neal (2003) and Neal et al. (2004) addresses this problem by updating the entire latent sequence at once. The idea is to temporarily reduce the state space of the model, which may be countably infinite or continuous, to a finite collection of randomly generated ``pool'' states at each time point. Since the transitions between hidden states are Markov, this reduced model is an HMM, for which we can use the forward-backward algorithm to efficiently sample a sequence with values in the pools at each time point. Pool states are chosen from a distribution that assigns positive probability to all possible state values, allowing us to explore the entire space of latent sequences in accordance with their exact distribution. Neal (2003) showed that when there are strong dependencies in the state sequence, the embedded HMM method performs better than conventional Metropolis methods at sampling latent state sequences. In our paper, we first look at an MCMC method which combines embedded HMM updates of the hidden state sequence with random-walk Metropolis updates of the parameters. We call this method the `single-sequence' method. We next reformulate the embedded HMM method as an ensemble MCMC method. Ensemble MCMC allows multiple candidate points to be considered simultaneously when a proposal is made. This allows us to consider an extension of the embedded HMM method for inference of the model parameters when they are unknown. We refer to this extension as the ensemble embedded HMM method. We then introduce and describe a ``staged'' method, which makes ensemble MCMC more efficient by rejecting poor proposals at low computational cost after looking at only a part of the observed sequence. We use the single-sequence, ensemble, and staged ensemble methods to perform Bayesian inference in the Ricker model of population dynamics, comparing the performance of these new methods to each other, and to a simple Metropolis sampler. \section{Ensemble MCMC} We first describe ensemble MCMC, introduced by Neal (2010) as a general MCMC method, before describing its application to inference in state space models. Ensemble MCMC is based on the framework of MCMC using a temporary mapping. Suppose we want to sample from a distribution $\pi$ on $\mathcal{X}$. This can be done using a Markov chain with transition probability from $x$ to $x'$ given by $T(x' | x)$, for which $\pi$ is an invariant distribution --- that is, $T$ must satisfy $\int \pi(x) T(x' | x) dx = \pi(x')$. The temporary mapping strategy defines $T$ as a composition of three stochastic mappings. The current state $x$ is stochastically mapped to a state $y \in \mathcal{Y}$ using the mapping $\hat{T}(y | x)$. Here, the space $\mathcal{Y}$ need not be the same as the space $\mathcal{X}$.
The state $y$ is then updated to $y'$ using the mapping $\bar{T}(y' | y)$, and finally a state $x'$ is obtained using the mapping $\check{T}(x' | y')$. In this approach, we may choose whatever mappings we want, so long as the overall transition $T$ leaves $\pi$ invariant. In particular, if $\rho$ is a density for $y$, $T$ will leave $\pi$ invariant if the following conditions hold. \begin{align} &\int \pi(x) \hat{T}(y | x) dx = \rho(y) \\ &\int \rho(y) \bar{T}(y' | y) dy = \rho(y') \\ &\int \rho(y') \check{T}(x' | y') dy' = \pi(x') \end{align} In the ensemble method, we take $\mathcal{Y} = \mathcal{X}^{K}$, with $y = (x^{(1)}, \ldots, x^{(K)})$ referred to as an ``ensemble'', where $K$ is the number of ensemble elements. The three mappings are then constructed as follows. Consider an ``ensemble base measure'' over ensembles $(x^{(1)}, \ldots, x^{(K)})$ with density $\zeta(x^{(1)}, \ldots, x^{(K)})$, and with marginal densities $\zeta_{k}(x^{(k)})$ for each of the $k = 1, \ldots, K$ ensemble elements. We define $\hat{T}$ as \begin{equation} \hat{T}(x^{(1)}, \ldots, x^{(K)} | x) = \frac{1}{K} \sum_{k = 1}^{K} \zeta_{-k | k}(x^{(-k)} | x)\delta_{x}(x^{(k)}) \end{equation} Here, $\delta_{x}$ is a distribution that places a point mass at $x$, $x^{(-k)}$ is all of $x^{(1)}, \ldots, x^{(K)}$ except $x^{(k)}$, and $\zeta_{-k | k}(x^{(-k)} | x^{(k)}) = \zeta(x^{(1)}, \ldots, x^{(K)}) / \zeta_{k}(x^{(k)})$ is the conditional density of all ensemble elements except the $k$-th, given the value $x^{(k)}$ for the $k$-th. This mapping can be interpreted as follows. First, we select an integer $k$ from a uniform distribution on $\{1, \ldots, K\}$. Then, we set the ensemble element $x^{(k)}$ to $x$, the current state. Finally, we generate the remaining elements of the ensemble using the conditional density $\zeta_{-k | k}$. The ensemble density $\rho$ is determined by $\pi$ and $\hat{T}$, and is given explicitly as \begin{align} \rho(x^{(1)}, \ldots, x^{(K)}) &= \int \pi(x) \hat{T}(x^{(1)}, \ldots, x^{(K)} | x) dx \notag \\ &=\zeta(x^{(1)}, \ldots, x^{(K)}) \frac{1}{K}\sum_{k = 1}^{K}\frac{\pi(x^{(k)})}{\zeta_{k}(x^{(k)})} \end{align} $\bar{T}$ can be any update (or sequence of updates) that leaves $\rho$ invariant. For example, $\bar{T}$ could be a Metropolis update for $y$, with a proposal drawn from some symmetrical proposal density. Finally, $\check{T}$ maps from $y'$ to $x'$ by randomly setting $x'$ to $x^{(k)}$, with $k$ chosen from $\{1, \ldots, K\}$ with probabilities proportional to $\pi(x^{(k)}) / \zeta_{k}(x^{(k)})$. The mappings described above satisfy the necessary properties to make them a valid update, in the sense of preserving the stationary distribution $\pi$. The proof can be found in Neal (2010).
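As a purely illustrative toy example of these three mappings, the following Python sketch samples from a one-dimensional target using an independent base measure $\zeta$ (a product of identical densities $\zeta_{k}$), the mapping $\hat{T}$ above, a null $\bar{T}$, and the selection step $\check{T}$. The target density, base density, ensemble size $K$, and all names are choices made for this sketch only and are not taken from the paper.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Unnormalized log target pi and log base density zeta_k (all identical here)
log_pi = lambda x: -0.5 * (x - 3.0) ** 2            # N(3, 1) target
log_zeta = lambda x: -0.5 * x ** 2 / 25.0           # N(0, 5^2) base density
sample_zeta = lambda size: 5.0 * rng.standard_normal(size)

def ensemble_update(x, K):
    # T-hat: place the current state in a uniformly chosen slot,
    # fill the remaining K-1 slots with independent draws from zeta
    ens = sample_zeta(K)
    ens[rng.integers(K)] = x
    # T-bar: null update (the ensemble is left unchanged)
    # T-check: select an element with probability proportional to pi / zeta_k
    logw = log_pi(ens) - log_zeta(ens)
    w = np.exp(logw - logw.max())
    return rng.choice(ens, p=w / w.sum())

x, draws = 0.0, []
for _ in range(5000):
    x = ensemble_update(x, K=20)
    draws.append(x)
print(np.mean(draws), np.std(draws))   # should be close to 3 and 1
\end{verbatim}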
\section{Embedded HMM MCMC as an \\ Ensemble MCMC method} The embedded HMM method briefly described in the introduction was not initially introduced as an ensemble MCMC method, but it can be reformulated as one. We assume here that we are interested in sampling from the posterior distribution of the state sequences, $\pi(x_{1}, \ldots, x_{N} | z_{1}, \ldots, z_{N})$, when the parameters of the model are known. Suppose the current state sequence is $x = (x_{1}, \ldots, x_{N})$. We want to update this state sequence in a way that leaves $\pi$ invariant. The first step of the embedded HMM method is to temporarily reduce the state space to a finite number of possible states at each time, turning our model into an HMM. This is done by, for each time $i$, generating a set of $L$ ``pool'' states, $P_{i} = \{x_{i}^{[1]}, \ldots, x_{i}^{[L]}\}$, as follows. We first set the pool state $x_{i}^{[1]}$ to $x_{i}$, the value of the current state sequence at time $i$. The remaining $L - 1$ pool states $x_{i}^{[l]}$, for $l > 1$, are generated by sampling independently from some pool distribution with density $\kappa_{i}$. The collections of pool states at different times are selected independently of each other. The total number of sequences we can then construct using these pool states, by choosing one state from the pool at each time, is $K = L^{N}$. The second step of the embedded HMM method is to choose a state sequence composed of pool states, with the probability of such a state sequence, $x$, being proportional to \begin{equation} q(x | z_{1}, \ldots, z_{N}) \textnormal{ } \propto \textnormal{ } p(x_{1})\prod_{i = 2}^{N}p(x_{i} | x_{i-1}) \prod_{i = 1}^{N}\biggl[\frac{p(z_{i} | x_{i})}{\kappa_{i}(x_{i})}\biggr] \end{equation} We can define $\gamma(z_{i} | x_{i}) = p(z_{i} | x_{i})/\kappa_{i}(x_{i})$, and rewrite $(6)$ as \begin{equation} q(x | z_{1}, \ldots, z_{N}) \textnormal{ } \propto \textnormal{ } p(x_{1})\prod_{i = 2}^{N}p(x_{i} | x_{i-1}) \prod_{i = 1}^{N}\gamma(z_{i} | x_{i}) \end{equation} We now note that the distribution $(7)$ takes the same form as the distribution over hidden state sequences for an HMM in which each $x_{i} \in P_{i}$ --- the initial state distribution is proportional to $p(x_{1})$, the transition probabilities are proportional to $p(x_{i} | x_{i-1})$, and the $\gamma(z_{i} | x_{i})$ have the same role as emission probabilities. This allows us to use the well-known forward-backward algorithms for HMM's (reviewed by Scott (2002)) to efficiently sample hidden state sequences composed of pool states. To sample a sequence with the embedded HMM method, we first compute the ``forward'' probabilities. Then, using a stochastic ``backwards'' pass, we select a state sequence composed of pool states. (We can alternatively compute backward probabilities and then do a stochastic forward pass.) We emphasize that having an efficient algorithm to sample state sequences is crucial for the embedded HMM method. The number of possible sequences we can compose from the pool states, $L^{N}$, can be very large, and so naive sampling methods would be impractical. In detail, for $x_{i} \in P_{i}$, the forward probabilities $\alpha_{i}(x_{i})$ are computed using a recursion that goes forward in time, starting from $i = 1$. We start by computing $\alpha_{1}(x_{1}) = p(x_{1})\gamma(z_{1} | x_{1})$. Then, for $1 < i \leq N$, the forward probabilities $\alpha_{i}(x_{i})$ are given by the recursion \begin{equation} \alpha_{i}(x_{i}) = \gamma(z_{i} | x_{i})\sum_{l = 1}^{L} p(x_{i}| x^{[l]}_{i-1})\alpha_{i-1}(x^{[l]}_{i-1}), \quad \textnormal{for } x_{i} \in P_{i} \end{equation} The stochastic backwards recursion samples a state sequence, one state at a time, beginning with the state at time $N$. First, we sample $x_{N}$ from the pool $P_{N}$ with probabilities proportional to $\alpha_{N}(x_{N})$. Then, going backwards in time for $i$ from $N - 1$ to $1$, we sample $x_{i}$ from the pool $P_{i}$ with probabilities proportional to $p(x_{i+1} | x_{i})\alpha_{i}(x_{i})$, where $x_{i+1}$ is the variable just sampled at time $i + 1$. Both of these recursions are commonly implemented using logarithms of probabilities to avoid numerical underflow.
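The following Python sketch shows one way to implement this forward pass and stochastic backward pass over pool states in log space. It is a minimal illustration only: the pool arrays and the vectorized functions returning $\log p(x_{1})$, $\log p(x_{i} | x_{i-1})$, and $\log \gamma(z_{i} | x_{i})$ are assumed to be supplied by the user, and the helper also returns the log of the sum of the final forward probabilities, which will be convenient below.

\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def sample_sequence(pools, log_init, log_trans, log_gamma, rng):
    """Forward pass and stochastic backward pass over pool states.

    pools[i]                : array of the L pool states at time i
    log_init(x)             : log p(x_1), vectorized over a pool array
    log_trans(x_new, x_old) : log p(x_i | x_{i-1}), vectorized
    log_gamma(i, x)         : log gamma(z_i | x_i), vectorized
    """
    N, L = len(pools), len(pools[0])
    log_alpha = np.empty((N, L))
    log_alpha[0] = log_init(pools[0]) + log_gamma(0, pools[0])
    for i in range(1, N):
        # log of gamma(z_i|x_i) * sum_l p(x_i | x_{i-1}^[l]) alpha_{i-1}(x_{i-1}^[l])
        lt = log_trans(pools[i][:, None], pools[i - 1][None, :])       # L x L
        log_alpha[i] = log_gamma(i, pools[i]) + logsumexp(lt + log_alpha[i - 1], axis=1)

    # Stochastic backward pass, sampling x_N, x_{N-1}, ..., x_1
    x = np.empty(N)
    w = np.exp(log_alpha[N - 1] - logsumexp(log_alpha[N - 1]))
    x[N - 1] = pools[N - 1][rng.choice(L, p=w)]
    for i in range(N - 2, -1, -1):
        lw = log_trans(x[i + 1], pools[i]) + log_alpha[i]
        w = np.exp(lw - logsumexp(lw))
        x[i] = pools[i][rng.choice(L, p=w)]

    # logsumexp(log_alpha[N-1]) is the log of the sum in (10), up to the prior on theta
    return x, logsumexp(log_alpha[N - 1])
\end{verbatim}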
Let us now reformulate the embedded HMM method as an ensemble MCMC method. The step of choosing the pool states can be thought of as performing a mapping $\hat{T}$ which takes a single hidden state sequence $x$ and maps it to an ensemble of $K$ state sequences $y = (x^{(1)}, \ldots, x^{(K)})$, with $x = x^{(k)}$ for some $k$ chosen uniformly from $\{1, \ldots, K\}$. (However, we note that in this context, the order in the ensemble does not matter.) Since the randomly chosen pool states are independent under $\kappa_{i}$ at each time, and across time as well, the density of an ensemble of hidden state sequences in the ensemble base measure, $\zeta$, is defined through a product of $\kappa_{i}(x_{i}^{[l]})$'s over the pool states and over time, and is non-zero for ensembles consisting of all sequences composed from the chosen set of pool states. The corresponding marginal density of a hidden state sequence $x^{(k)}$ in the ensemble base measure is \begin{equation} \zeta_{k}(x^{(k)}) = \prod_{i = 1}^{N} \kappa_{i}(x_{i}^{(k)}) \end{equation} Together, $\zeta$ and $\zeta_{k}$ define the conditional distribution $\zeta_{-k | k}$, which is used to define $\hat{T}$. The mapping $\bar{T}$ is taken to be a null mapping that keeps the ensemble fixed at its current value, and the mapping $\check{T}$ to a single state sequence is performed by selecting a sequence $x^{(k)}$ from the ensemble with probabilities given by $(7)$, in the same way as in the embedded HMM method. \section{The single-sequence embedded HMM MCMC method} Let us assume that the parameters $\theta$ of our model are unknown, and that we want to sample from the joint posterior distribution of state sequences $x = (x_{1}, \ldots, x_{N})$ and parameters $\theta = (\theta_{1}, \ldots, \theta_{P})$, with density $\pi(x, \theta | z_{1}, \ldots, z_{N})$. One way of doing this is by alternating embedded HMM updates of the state sequence with Metropolis updates of the parameters. Doing updates in this manner makes use of an ensemble to sample state sequences more efficiently in the presence of strong dependencies. However, this method only takes into account a single hidden state sequence when updating the parameters. The update for the sequence is identical to that in the embedded HMM method, with initial, transition and emission densities dependent on the current value of $\theta$. In our case, we only consider simple random-walk Metropolis updates for $\theta$, updating all of the variables simultaneously. Evaluating the likelihood conditional on $x$ and $z$, as needed for Metropolis parameter updates, is computationally inexpensive relative to updates of the state sequence, which take time proportional to $L^{2}$. It may be beneficial to perform several Metropolis parameter updates for every update of the state sequence, since this will not greatly increase the overall computational cost, and allows us to obtain samples with lower autocorrelation time. \section{An ensemble extension of the embedded HMM method} When performing parameter updates, we can look at all possible state sequences composed of pool states by using an ensemble $((x^{(1)}, \theta), \ldots, (x^{(K)}, \theta))$ that includes a parameter value $\theta$, the same for each element of the ensemble. The update $\bar{T}$ could change both $\theta$ and $x^{(k)}$, but in the method we will consider here, we only change $\theta$.
To see why updating $\theta$ with an ensemble of sequences might be more efficient than updating $\theta$ given a single sequence, consider a Metropolis proposal in ensemble space, which proposes to change $\theta$ for all of the ensemble elements, from $\theta$ to $\theta^{*}$. Such an update can be accepted whenever there are \textit{some} elements $(x^{(k)}, \theta^{*})$ in the proposed ensemble that make the ensemble density $\rho((x^{(1)}, \theta^{*}), \ldots, (x^{(K)}, \theta^{*}))$ relatively large. That is, it is possible to accept a move in ensemble space with a high probability as long as \textit{some} elements of the proposed ensemble, with the new $\theta^{*}$, lie in a region of high posterior density. This is at least as likely to happen as having a proposed $\theta^{*}$ together with a single state sequence $x$ lie in a region of high posterior density. Using ensembles makes sense when the ensemble density $\rho$ can be computed in an efficient manner, in less than $K$ times as long as it takes to compute $p(x, \theta | z_{1}, \ldots, z_{N})$ for a single hidden state sequence $x$. Otherwise, one could make $K$ independent proposals to change $x$, which would have approximately the same computational cost as a single ensemble update, and likely be a more efficient way to sample $x$. In the application here, $K = L^{N}$ is enormous for typical values of $N$ when $L \geq 2$, while computation of $\rho$ takes time proportional only to $NL^{2}$. In detail, to compute the ensemble density, we need to sum $q(x^{(k)}, \theta | z_{1}, \ldots, z_{N})$ over all ensemble elements $(x^{(k)}, \theta)$, that is, over all hidden sequences which are composed of the pool states at each time. The forward algorithm described above makes it possible to compute the ensemble density efficiently by summing the probabilities at the end of the forward recursion $\alpha_{N}(x_{N})$ over all $x_{N} \in P_{N}$. That is \begin{equation} \rho((x^{(1)}, \theta), \ldots, (x^{(K)}, \theta)) \textnormal{ } \propto \textnormal{ } \pi(\theta)\sum_{k=1}^{K} q(x^{(k)}, \theta | z_{1}, \ldots, z_{N}) = \pi(\theta)\sum_{l = 1}^{L}\alpha_{N}(x_{N}^{[l]}) \end{equation} where $\pi(\theta)$ is the prior density of $\theta$. The ensemble extension of the embedded HMM method can be thought of as using an approximation to the marginal posterior of $\theta$ when updating $\theta$, since summing the posterior density over a large collection of hidden sequences approximates integrating over all such sequences. Larger updates of $\theta$ may be possible with an ensemble because the marginal posterior of $\theta$, given the data, is more diffuse than the conditional distribution of $\theta$ given a fixed state sequence and the data. Note that since the ensemble of sequences is periodically changed, when new pool states are chosen, the ensemble method is a proper MCMC method that converges to the exact joint posterior, even though the set of sequences using pool states is restricted at each MCMC iteration. The ensemble method is more computationally expensive than the single-sequence method --- it requires two forward passes to evaluate the ensemble density for two values of $\theta$ and one backward pass to sample a hidden state sequence, whereas the single-sequence method requires only a single forward pass to compute the probabilities for every sequence in our ensemble, and a single backward pass to select a sequence. 
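A sketch of one such ensemble Metropolis update of $\theta$ is given below, assuming a helper (for instance the forward-pass sketch above, or any equivalent routine) that returns the logarithm of the sum in $(10)$ for a given $\theta$ and a fixed collection of pool states. The prior, proposal scale, and all names here are illustrative assumptions, and the cached log ensemble density is passed around explicitly with reuse of the pool states (discussed next) in mind.

\begin{verbatim}
import numpy as np

def ensemble_metropolis_theta(theta, log_ens_cur, pools, log_ens_fn,
                              log_prior, prop_sd, rng):
    """One random-walk Metropolis update of theta for a fixed set of pool states.

    log_ens_fn(theta, pools) returns log sum_k q(x^(k), theta | z), e.g. the
    logsumexp of the final forward log-probabilities (an assumed helper);
    log_ens_cur is its cached value at the current theta.
    """
    theta_star = theta + prop_sd * rng.standard_normal(theta.shape)
    log_ens_star = log_ens_fn(theta_star, pools)
    log_acc = (log_prior(theta_star) + log_ens_star
               - log_prior(theta) - log_ens_cur)
    if np.log(rng.uniform()) < log_acc:
        return theta_star, log_ens_star, True      # accepted; cache new density
    return theta, log_ens_cur, False               # rejected; reuse cached density
\end{verbatim}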
Some of this additional computational cost can be offset by reusing the same pool states to do multiple updates of $\theta$. Once we have chosen a collection of pool states, and performed a forward pass to compute the ensemble density at the current value of $\theta$, we can remember it. Proposing an update of $\theta$ to $\theta^{*}$ requires us to compute the ensemble density at $\theta^{*}$ using a forward pass. Now, if this proposal is rejected, we can reuse the stored value of the ensemble density at $\theta$ when we make another proposal using the same collection of pool states. If this proposal is accepted, we can remember the ensemble density at the accepted value. Keeping the pool fixed, and saving the current value of the ensemble density, therefore allows us to perform $M$ ensemble updates with $M+1$ forward passes, as opposed to $2M$ if we used a new pool for each update. With a large number of pool states, reusing the pool states for two or more updates has only a small impact on performance, since with any large collection of pool states we essentially integrate over the state space. However, pool states must still be updated occasionally, to ensure that the method samples from the exact joint posterior. \section{Staged ensemble MCMC sampling} Having to compute the ensemble density given the entire observed sequence for every proposal, even those that are obviously poor, is a source of inefficiency in the ensemble method. If poor proposals can be eliminated at a low computational cost, the ensemble method could be made more efficient. We could then afford to make our proposals larger, accepting occasional large proposals while rejecting others at little cost. We propose to do this by performing ``staged'' updates. First, we choose a part of the observed sequence that we believe is representative of the whole sequence. Then, we propose to update $\theta$ to a $\theta^{*}$ found using an ensemble update that only uses the part of the sequence we have chosen. If the proposal found by this ``first stage'' update is accepted, we perform a ``second stage'' ensemble update given the entire sequence, with $\theta^{*}$ as the proposal. If the proposal at the first stage is rejected, we do not perform a second stage update, and add the current value of $\theta$ to our sequence of sampled values. This can be viewed as a second stage update where the proposal is the current state --- to do such an update, no computations need be performed. Suppose that $\rho_{1}$ is the ensemble density given the chosen part of the observed sequence, and $q(\theta^{*}|\theta)$ is the proposal density for constructing the first stage update.
Then the acceptance probability for the first stage update is given by \begin{equation} \min\biggl(1, \frac{\rho_{1}((x^{(1)}, \theta^{*}), \ldots, (x^{(K)}, \theta^{*}))q(\theta|\theta^{*})}{\rho_{1}((x^{(1)}, \theta), \ldots, (x^{(K)}, \theta))q(\theta^{*}|\theta)}\biggr) \end{equation} If $\rho$ is the ensemble density given the entire sequence, the acceptance probability for the second stage update is given by \begin{equation} \min\Biggl(1, \frac{\rho((x^{(1)}, \theta^{*}), \ldots, (x^{(K)}, \theta^{*}))\,q(\theta|\theta^{*})\min\biggl(1, \frac{\rho_{1}((x^{(1)}, \theta), \ldots, (x^{(K)}, \theta))q(\theta^{*}|\theta)}{\rho_{1}((x^{(1)}, \theta^{*}), \ldots, (x^{(K)}, \theta^{*}))q(\theta|\theta^{*})}\biggr)}{\rho((x^{(1)}, \theta), \ldots, (x^{(K)}, \theta))\,q(\theta^{*}|\theta)\min\biggl(1, \frac{\rho_{1}((x^{(1)}, \theta^{*}), \ldots, (x^{(K)}, \theta^{*}))q(\theta|\theta^{*})}{\rho_{1}((x^{(1)}, \theta), \ldots, (x^{(K)}, \theta))q(\theta^{*}|\theta)}\biggr)}\Biggr) \end{equation} which accounts for the proposal distribution that the first stage update effectively generates. Regardless of whether $\rho_{1}((x^{(1)}, \theta^{*}), \ldots, (x^{(K)}, \theta^{*}))q(\theta|\theta^{*}) < \rho_{1}((x^{(1)}, \theta), \ldots, (x^{(K)}, \theta))q(\theta^{*}|\theta)$ or vice versa, the above ratio simplifies to \begin{equation} \min\Biggl(1, \frac{\rho((x^{(1)}, \theta^{*}), \ldots, (x^{(K)}, \theta^{*}))\,\rho_{1}((x^{(1)}, \theta), \ldots, (x^{(K)}, \theta))}{\rho((x^{(1)}, \theta), \ldots, (x^{(K)}, \theta))\,\rho_{1}((x^{(1)}, \theta^{*}), \ldots, (x^{(K)}, \theta^{*}))}\Biggr) \end{equation} Choosing a part of the observed sequence for the first stage update can be aided by looking at the acceptance rates at the first and second stages. We need the moves accepted at the first stage to also be accepted at a sufficiently high rate at the second stage, but we want the acceptance rate at the first stage to be low, so that the method has an advantage over the ensemble method in terms of computational efficiency. We can also look at the `false negatives' for diagnostic purposes, that is, how many proposals rejected at the first step would have been accepted had we looked at the entire sequence when deciding whether to accept. We are free to select any portion of the observed sequence to use for the first stage. We will look here at using a partial sequence for the first stage consisting of observations starting at $n_{1}$ and going until the end, at time $N$. This is appropriate for our example later in the paper, where we only have observations past a certain point in time. For this scenario, to perform a first stage update, we need to perform a backward pass. As we perform a backward pass to do the first stage proposal, we save the vector of ``backward'' probabilities. Then, if the first stage update is accepted, we start the recursion for the full sequence using these saved backward probabilities, and compute the ensemble density given the entire sequence, avoiding recomputation of backward probabilities for the portion of the sequence used for the first stage. To compute the backward probabilities $\beta_{i}(x_{i})$, we perform a backward recursion, starting at time $N$. We first set $\beta_{N}(x_{N})$ to $1$ for all $x_{N} \in P_{N}$.
We then compute, for $n_{1} \leq i < N$, \begin{equation} \beta_{i}(x_{i}) = \sum_{l = 1}^{L}p(x_{i + 1}^{[l]}|x_{i})\beta_{i+1}(x_{i+1}^{[l]})\gamma(z_{i+1}|x_{i+1}^{[l]}) \end{equation} We compute the first stage ensemble density $\rho_{1}$ as follows \begin{equation} \rho_{1}((x^{(1)}, \theta), \ldots, (x^{(K)}, \theta)) = \pi(\theta)\sum_{l = 1}^{L}p(x_{n_{1}}^{[l]})p(z_{n_{1}} | x_{n_{1}}^{[l]})\beta_{n_{1}}(x_{n_{1}}^{[l]}) \end{equation} We do not know $p(x_{n_{1}})$, but we can choose a substitute for it (which affects only performance, not correctness). One possibility is a uniform distribution over the pool states at $n_{1}$, which is what we will use in our example below. The ensemble density for the full sequence can be obtained by performing the backward recursion up to the beginning of the sequence, and then computing \begin{equation} \rho((x^{(1)}, \theta), \ldots, (x^{(K)}, \theta)) = \pi(\theta)\sum_{l = 1}^{L}p(x_{1}^{[l]})p(z_{1} | x_{1}^{[l]})\beta_{1}(x_{1}^{[l]}) \end{equation} To see how much computation time is saved with staged updates, we measure computation time in terms of the time it takes to do a backward (or equivalently, forward) pass --- generally, the most computationally expensive operation in ensemble MCMC --- counting a backward pass to time $n_{1}$ as a partial backward pass. Let us suppose that the acceptance rate for stage 1 updates is $a_{1}$. An ensemble update that uses the full sequence requires us to perform two backwards passes if we update the pool at every iteration, whereas a staged ensemble update will require us to perform \begin{equation} 1 + \frac{N - n_{1}}{N - 1} + a_{1}\frac{n_{1} - 1}{N - 1} \end{equation} backward passes on average (counting a pass back only to $n_{1}$ as $(N - n_{1})/(N - 1)$ passes). The first term in the above formula represents the full backward pass for the initial value of $\theta$ that is always needed --- either for a second stage update, or for mapping to a new set of pool states. The second term represents the partial backward pass we need to perform to complete the first stage proposal. The third term accounts for having to compute the remainder of a backwards pass if we accept the first stage proposal --- hence it is weighted by the first stage acceptance rate. We can again save computation time by updating the pool less frequently. Suppose we decide to update the pool every $M$ iterations. Without staging, the ensemble method would require a total of $M + 1$ forward (or equivalently, backward) passes. With staged updates, the expected number of backward passes we would need to perform is \begin{equation} 1 + M\frac{N - n_{1}}{N - 1} + Ma_{1}\frac{n_{1} - 1}{N - 1} \end{equation} as can be seen by generalizing the expression above to $M > 1$.
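The staged accept/reject decision itself can be sketched as follows in Python, using log densities and the simplified second-stage ratio derived above, under the additional assumption of a symmetric proposal for $\theta$ (as used later in the paper). The helpers returning the partial-sequence and full-sequence log ensemble densities (e.g.\ via partial and full backward passes) are assumed, and all names are illustrative.

\begin{verbatim}
import numpy as np

def staged_update(theta, log_rho1_cur, log_rho_cur, log_rho1_fn, log_rho_fn,
                  prop_sd, rng):
    """Two-stage (staged) Metropolis update of theta with a symmetric proposal.

    log_rho1_fn(theta): log ensemble density using only the chosen part of the
    sequence (partial backward pass); log_rho_fn(theta): log ensemble density
    for the full sequence.  Both are assumed helpers; cached values at the
    current theta are passed in so that rejections cost only a partial pass.
    """
    theta_star = theta + prop_sd * rng.standard_normal(theta.shape)

    # First stage: cheap screen based on the partial-sequence density rho_1
    log_rho1_star = log_rho1_fn(theta_star)
    if np.log(rng.uniform()) >= log_rho1_star - log_rho1_cur:
        return theta, log_rho1_cur, log_rho_cur        # rejected at low cost

    # Second stage: full-sequence check with the simplified ratio
    #   rho(theta*) rho_1(theta) / ( rho(theta) rho_1(theta*) )
    log_rho_star = log_rho_fn(theta_star)
    log_acc = (log_rho_star - log_rho_cur) - (log_rho1_star - log_rho1_cur)
    if np.log(rng.uniform()) < log_acc:
        return theta_star, log_rho1_star, log_rho_star
    return theta, log_rho1_cur, log_rho_cur
\end{verbatim}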
We hope that such a pool distribution will produce sequences in our ensemble which have a high posterior density, and as a result make sampling more efficient. To motivate how we might go about choosing $\eta$, consider the following. Suppose we sample values of $\theta$ from the model prior, and then sample hidden state sequences, given the sampled values of $\theta$. We can then think of $\eta$ as a distribution that is consistent with this distribution of hidden state sequences, which is in turn consistent with the model prior. In this paper, we choose $\eta$ heuristically, setting it to what we believe to be reasonable, and not in violation of the model prior. Note that the choice of $\eta$ affects only sampling efficiency, not correctness (as long as it is non-zero for all possible $x_{i}$). However, when we choose pool states in the ensemble method, we cannot use current values of $\theta$, since this would make an ensemble update non-reversible. This restriction does not apply to the single-sequence method since in this case $\bar{T}$ is null. \section{Performance comparisons on a \\ population dynamics model} We test the performance of the proposed methods on the Ricker population dynamics model, described by Wood (2010). This model assumes that a population of size $N_{i}$ (modeled as a real number) evolves as $N_{i+1} = r N_{i}\exp(-N_{i} + e_{i})$, where the $e_{i}$ are independent Normal$(0,\sigma^{2})$ random variables and $N_{0} = 1$. We do not observe this process directly, but rather we observe $Y_{i}$'s whose distribution is Poisson$(\phi N_{i})$. The goal is to infer $\theta = (r, \sigma, \phi)$. This is considered to be a fairly complex inference scenario, as evidenced by the application of recently developed inference methods such as Approximate Bayesian Computation (ABC) to this model. (See Fearnhead and Prangle (2012) for more on the ABC approach.) This model can be viewed as a non-linear state space model, with $N_{i}$ as our state variable. MCMC inference in this model can be inefficient for two reasons. First, when the value of $\sigma^{2}$ in the current MCMC iteration is small, consecutive $N_{i}$'s are highly dependent, so the distribution of each $N_{i}$, conditional on the remaining $N_{i}$'s and the data, is highly concentrated, making it hard to efficiently sample state sequences one state at a time. An MCMC method based on embedding an HMM into the state space, either the single-sequence method or the ensemble method, can potentially make state sequence sampling more efficient, by sampling whole sequences at once. The second reason is that the distribution of $\theta$, given a single sequence of $N_{i}$'s and the data, can be concentrated as well, so we may not be able to efficiently explore the posterior distribution of $\theta$ by alternately sampling $\theta$ and the $N_{i}$'s. By considering an ensemble of sequences, we may be able to propose and accept larger changes to $\theta$. This is because the posterior distribution of $\theta$ summed over an ensemble of state sequences is less concentrated than it is given a single sequence. To test our MCMC methods, we consider a scenario similar to those considered by Wood (2010) and Fearnhead and Prangle (2012). The parameters of the Ricker model we use are $r = \exp(3.8), \sigma = 0.15, \phi = 2$. We generated $100$ points from the Ricker model, with $y_{i}$ only observed from time $51$ on, mimicking a situation where we do not observe a population right from its inception.
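To make this data-generating process concrete, the following short Python sketch (with function and variable names of our own choosing; the implementation used for the experiments reported below is in MATLAB) simulates a data set of the kind just described:
\begin{verbatim}
import numpy as np

def simulate_ricker(T=100, obs_start=51, r=np.exp(3.8),
                    sigma=0.15, phi=2.0, seed=1):
    """Simulate N[i+1] = r * N[i] * exp(-N[i] + e[i]) with N[0] = 1 and
    e[i] ~ Normal(0, sigma^2); Y[i] ~ Poisson(phi * N[i]) is recorded
    only from time obs_start onwards."""
    rng = np.random.default_rng(seed)
    N = np.empty(T + 1)
    N[0] = 1.0
    for i in range(T):
        e = rng.normal(0.0, sigma)
        N[i + 1] = r * N[i] * np.exp(-N[i] + e)
    y = np.full(T + 1, np.nan)          # NaN marks unobserved times
    y[obs_start:] = rng.poisson(phi * N[obs_start:])
    return N, y

N, y = simulate_ricker()
\end{verbatim}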
We put a Uniform$(0, 10)$ prior on $\log(r)$, a Uniform$(0, 100)$ prior on $\phi$, and a Uniform$[\log(0.1), 0]$ prior on $\log(\sigma)$. Instead of $N_{i}$, we use $M_{i} = \log(\phi N_{i})$ as our state variables, since we found that doing this makes MCMC sampling more efficient. With these state variables, our model can be written as \begin{align} M_{1} &\sim N(\log(r) + \log(\phi) - 1, \sigma^{2}) \\ M_{i} | M_{i-1} &\sim N(\log(r) + M_{i-1} - \exp(M_{i-1})/\phi, \sigma^{2}), \quad i = 2, \ldots, 100 \\ Y_{i} | M_{i} &\sim \textnormal{Poisson}(\exp(M_{i})), \quad i = 51, \ldots, 100 \end{align} Furthermore, the MCMC state uses the logarithms of the parameters, $(\log(r), \log(\sigma), \log(\phi))$. For parameter updates in the MCMC methods compared below, we used independent normal proposals for each parameter, centered at the current parameter values, proposing updates to all parameters at once. To choose appropriate proposal standard deviations, we did a number of runs of the single sequence and the ensemble methods, and used these trial runs to estimate the marginal standard deviations of the logarithm of each parameter. We got standard deviation estimates of $0.14$ for $\log(r)$, $0.36$ for $\log(\sigma)$, and $0.065$ for $\log(\phi)$. We then scaled each estimated marginal standard deviation by the same factor, and used the scaled estimates as the proposal standard deviations for the corresponding parameters. The maximum scaling we used was $2$, since beyond this our proposals would often lie outside the high probability region of the marginal posterior density. We first tried a simple Metropolis sampler on this problem. This sampler updates the latent states one at a time, using Metropolis updates to sample from the full conditional density of each state $M_{t}$, given by \begin{align} p(m_{1} | y_{51}, \ldots, y_{100}, m_{-1}) &\propto p(m_{1})p(m_{2}|m_{1}) \\ p(m_{i} | y_{51}, \ldots, y_{100}, m_{-i}) &\propto p(m_{i}|m_{i-1})p(m_{i+1}|m_{i}), \quad 2 \leq i \leq 50 \\ p(m_{i} | y_{51}, \ldots, y_{100}, m_{-i}) &\propto p(m_{i}|m_{i-1})p(y_{i}|m_{i})p(m_{i+1}|m_{i}), \quad 51 \leq i \leq 99 \\ p(m_{100} | y_{51}, \ldots, y_{100}, m_{-100}) &\propto p(m_{100}|m_{99})p(y_{100}|m_{100}) \end{align} We started the Metropolis sampler with parameters set to prior means, and the hidden states to randomly chosen values from the pool distributions we used for the embedded HMM methods below. After we perform a pass over the latent variables, and update each one in turn, we perform a Metropolis update of the parameters, using a scaling of $0.25$ for the proposal density. The latent states are updated sequentially, starting from $1$ and going up to $100$. When updating each latent variable $M_{i}$, we use a Normal proposal distribution centered at the current value of the latent variable, with the following proposal standard deviations. When we do not observe $y_{i}$, or $y_{i} = 0$, we use the current value of $\sigma$ from the state times $0.5$. When we observe $y_{i} > 0$, we use a proposal standard deviation of $1/\sqrt{1/\sigma^{2} + y_{i}}$ (with $\sigma$ from the state). This choice can be motivated as follows. An estimate of precision for $M_{i}$ given $M_{i-1}$ is $1/\sigma^{2}$. Furthermore, $Y_{i}$ is Poisson$(\phi N_{i})$, so that Var$(\phi N_{i}) \approx y_{i}$ and Var$(\log(\phi N_{i})) = \textnormal{Var}(M_{i}) \approx 1/y_{i}$. So an estimate for the precision of $M_{i}$ given $y_{i}$ is $y_{i}$. 
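The single-site updates just described can be sketched in Python as follows; the names are ours, and the sketch only shows the structure of one Metropolis update for a latent variable $M_{i}$ at an observed time $51 \leq i \leq 99$, with the precision-based proposal standard deviation given above:
\begin{verbatim}
import numpy as np

def log_cond_mid(m_i, m_prev, m_next, y_i, log_r, sigma, phi):
    """Log full conditional of M_i for 51 <= i <= 99, up to a constant:
    log p(m_i | m_{i-1}) + log p(y_i | m_i) + log p(m_{i+1} | m_i)."""
    mean_i = log_r + m_prev - np.exp(m_prev) / phi
    mean_next = log_r + m_i - np.exp(m_i) / phi
    lp = -0.5 * ((m_i - mean_i) / sigma) ** 2         # p(m_i | m_{i-1})
    lp += y_i * m_i - np.exp(m_i)                     # Poisson(exp(m_i))
    lp += -0.5 * ((m_next - mean_next) / sigma) ** 2  # p(m_{i+1} | m_i)
    return lp

def update_mid(m_i, m_prev, m_next, y_i, log_r, sigma, phi, rng):
    # Proposal s.d.: 1/sqrt(1/sigma^2 + y_i) if y_i > 0, else 0.5 * sigma.
    sd = 1.0 / np.sqrt(1.0 / sigma**2 + y_i) if y_i > 0 else 0.5 * sigma
    prop = m_i + rng.normal(0.0, sd)
    log_acc = (log_cond_mid(prop, m_prev, m_next, y_i, log_r, sigma, phi)
               - log_cond_mid(m_i, m_prev, m_next, y_i, log_r, sigma, phi))
    return prop if np.log(rng.uniform()) < log_acc else m_i
\end{verbatim}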
We combine these estimates of precisions to get a proposal standard deviation for the latent variables in the case when $y_{i} > 0$. We ran the Metropolis sampler for $6,000,000$ iterations from five different starting points. The acceptance rate for parameter updates was between about $10\%$ and $20\%$, depending on the initial starting value. The acceptance rate for latent variable updates was between $11\%$ and $84\%$, depending on the run and the particular latent variable being sampled. We found that the simple Metropolis sampler does not perform well on this problem. This is most evident for sampling $\sigma$. The Metropolis sampler appears to explore different regions of the posterior for $\sigma$ when it is started from different initial hidden state sequences. That is, the Metropolis sampler can get stuck in various regions of the posterior for extended periods of time. An example of the behaviour of the Metropolis sampler can be seen in Figure \ref{fig:ex-met}. The autocorrelations for the parameters are so high that accurately estimating them would require a much longer run. \begin{figure} \caption{An example run of the simple Metropolis method.} \label{fig:ex-met} \end{figure} \begin{figure} \caption{An example ensemble method run, with $120$ pool states and proposal scaling $1.4$.} \label{fig:ex-ens} \end{figure} This suggests that more sophisticated MCMC methods are necessary for this problem. We next looked at the single-sequence method, the ensemble method, and the staged ensemble method. All of the embedded HMM-based samplers require us to choose pool states for each time $i$. For the $i$'s where no $y_{i}$'s are observed, we choose our pool states by sampling values of $\exp(M_{i})$ from a pseudo-prior $\eta$ for the Poisson mean $\exp(M_{i})$ --- a Gamma$(k, \theta)$ distribution with $k = 0.15$ and $\theta = 50$ --- and then taking logarithms of the sampled values. For the $i$'s where we observe $y_{i}$'s, we choose our pool states by sampling values of $\exp(M_{i})$ from the conditional density of $\exp(M_{i}) | y_{i}$ with $\eta$ as the pseudo-prior. Since $Y_{i} | M_{i}$ is Poisson$(\exp(M_{i}))$, the conditional distribution of $\exp(M_{i})$ given $y_{i}$ is Gamma$(k + y_{i}, \theta/(1 + \theta))$. It should be noted that since we choose our pool states by sampling $\exp(M_{i})$'s, but our model is written in terms of $M_{i}$, the pool density $\kappa_{i}$ must take this into account. In particular, when $y_{i}$ is unobserved, we have \begin{equation} \kappa_{i}(m_{i}) = \frac{1}{\Gamma(k)\theta^{k}}\exp(km_{i} - \exp(m_{i})/\theta), \quad -\infty < m_{i} < \infty \end{equation} and when $y_{i}$ is observed we replace $k$ with $k + y_{i}$ and $\theta$ with $\theta/(1 + \theta)$. We choose the pool states in the same way for all three methods we consider. The staged ensemble MCMC method also requires us to choose a portion of the sequence to use for the first stage update. On the basis of several trial runs, we chose to use the last $20$ points, i.e. $n_{1} = 81$. As we mentioned earlier, when likelihood evaluations are computationally inexpensive relative to sequence updates, we can do multiple parameter updates for each update of the hidden state sequence, without incurring a large increase in computation time. For the single-sequence method, we do ten Metropolis updates of the parameters for each update of the hidden state sequence. For the ensemble method, we do five updates of the parameters for each update of the ensemble.
For staged ensemble MCMC, we do ten parameter updates for each update of the ensemble. These choices were based on numerous trial runs. We thin each of the single-sequence runs by a factor of ten when computing autocorrelation times --- hence the time per iteration for the single-sequence method is the time it takes to do a single update of the hidden sequence and ten Metropolis updates. We do not thin the ensemble and staged ensemble runs. To compare the performance of the embedded HMM methods, and to tune each method, we looked at the autocorrelation time, $\tau$, of the sequence of parameter samples, for each parameter. The autocorrelation time can be thought of as the number of samples we need to draw using our Markov chain to get the equivalent of one independent point (Neal (1993)). It is defined as \begin{equation} \tau = 1 + 2 \sum_{k=1}^{\infty}\rho_{k} \end{equation} where $\rho_{k}$ is the autocorrelation at lag $k$ for a function of state that is of interest. Here, $\hat{\rho}_{k} = \hat{\gamma}_{k} / \hat{\gamma}_{0}$, where $\hat{\gamma}_{k}$ is the estimated autocovariance at lag $k$. We estimate each $\hat{\gamma}_{k}$ by estimating autocovariances using each of the five runs, using the overall mean from the five runs, and then averaging the resulting autocovariance estimates. We then estimate $\tau$ by \begin{equation} \hat{\tau} = 1 + 2 \sum_{k=1}^{K}\hat{\rho}_{k} \end{equation} Here, the truncation point $K$ is where the remaining $\hat{\rho}_{k}$'s are nearly zero. For our comparisons to have practical validity, we need to multiply each autocorrelation time estimate by the time it takes to perform a single iteration. A method that produces samples with a lower autocorrelation time is often more computationally expensive than a method that produces samples with a higher autocorrelation time, and if the difference in computation time is sufficiently large, the computationally cheaper method might be more efficient. Computation times per iteration were obtained with a program written in MATLAB on a Linux system with an Intel Xeon X5680 3.33 GHz CPU. For each number of pool states considered, we started the samplers from five different initial states. As with the Metropolis method, the parameters were initialized to prior means, and the hidden sequence was initialized to states randomly chosen from the pool distribution at each time. When estimating autocorrelations, we discarded the first $10\%$ of samples of each run as burn-in. An example ensemble run is shown in Figure \ref{fig:ex-ens}. Comparing with Figure \ref{fig:ex-met}, one can see that the ensemble run has a much lower autocorrelation time. Autocorrelation time estimates for the various methods, along with computation times, proposal scaling, and acceptance rates, are presented in Table \ref{table:results}. For the staged ensemble method, the acceptance rates are shown for the first stage and second stage. We also estimated the parameters of the model by averaging estimates of the posterior means from each of the five runs for the single-sequence method with $40$ pool states, the ensemble method with $120$ pool states, and the staged ensemble method with $120$ pool states. The results are presented in Table \ref{table:est}. The estimates using the three different methods do not differ significantly. \begin{table}[t] \small \centering \begin{tabular}{c|ccccc|ccc|ccc} \hline & Pool & & Acc.
& Iter- & Time / & & ACT & & ACT & $\times$ & time\\ [0.5ex] Method & states & Scaling & Rate &ations & iteration & $r$ & $\sigma$ & $\phi$ & $r$ & $\sigma$ & $\phi$ \\ [0.5ex] \hline & 10 & & & 400,000 & 0.09 & 4345 & 2362 & 7272 & 391 & 213 & 654 \\ [1ex] & 20 & & & 400,000 & 0.17 & \phantom{0}779 & 1875 & 1849 & 132 & 319 & 314 \\ [1ex] Single- & 40 & 0.25 & 12\% & 200,000 & 0.31 & \phantom{0}427 & \phantom{0}187 & 1317 & 132 & \phantom{0}58 & 408 \\ [1ex] sequence & 60 & & & 200,000 & 0.47 & \phantom{0}329 & \phantom{0}155 & \phantom{0}879 & 155 & \phantom{0}73 & 413 \\ [1ex] & 80 & & & 200,000 & 0.61 & \phantom{0}294 & \phantom{0}134 & \phantom{0}869 & 179 & \phantom{0}82 & 530 \\ \hline & 40 & 0.6 & 13\% & 100,000 & 0.23 & \phantom{0}496 & \phantom{0}335 & \phantom{0}897 & 114 & \phantom{0}77 & 206 \\ [1ex] & 60 & 1 & 11\% & 100,000 & 0.34 & \phantom{0}187 & \phantom{0}115 & \phantom{0}167 & \phantom{0}64 & \phantom{0}39 & \phantom{0}57 \\ [1ex] Ensemble & 80 & 1 & 16\% & 100,000 & 0.47 & \phantom{0}107 & \phantom{00}55 & \phantom{00}90 & \phantom{0}50 & \phantom{0}26 & \phantom{0}42 \\ [1ex] & 120 & 1.4 & 14\% & 100,000 & 0.76 & \phantom{00}52 & \phantom{00}41 & \phantom{00}45 & \phantom{0}40 & \phantom{0}31 & \phantom{0}34 \\ [1ex] & 180 & 1.8 & 12\% & 100,000 & 1.26 & \phantom{00}35 & \phantom{00}27 & \phantom{00}29 & \phantom{0}44 & \phantom{0}34 & \phantom{0}37 \\ \hline & 40 & 1 & 30, 15\% & 100,000 & 0.10 & \phantom{0}692 & \phantom{0}689 & 1201 & \phantom{0}69 & \phantom{0}69 & 120 \\ [1ex] & 60 & 1.4 & 27, 18\% & 100,000 & 0.14 & \phantom{0}291 & \phantom{0}373 & \phantom{0}303 & \phantom{0}41 & \phantom{0}52 & \phantom{0}42 \\ [1ex] Staged & 80 & 1.4 & 30, 25\% & 100,000 & 0.21 & \phantom{0}187 & \phantom{0}104 & \phantom{0}195 & \phantom{0}39 & \phantom{0}22 & \phantom{0}41 \\ [1ex] Ensemble & 120 & 1.8 & 25, 29\% & 100,000 & 0.29 & \phantom{00}75 & \phantom{00}59 & \phantom{00}70 & \phantom{0}22 & \phantom{0}17 & \phantom{0}20 \\ [1ex] & 180 & 2 & 23, 32\% & 100,000 & 0.45 & \phantom{00}48 & \phantom{00}44 & \phantom{00}52 & \phantom{0}22 & \phantom{0}20 & \phantom{0}23 \\ \hline \end{tabular} \caption{Comparison of autocorrelation times.}\label{table:results} \end{table} \begin{table}[b] \small \centering \begin{tabular}{c c c c} \hline Method & $r$ & $\sigma$ & $\phi$ \\ [0.5ex] \hline Single-Sequence & 44.46 ($\pm$ 0.09) & 0.2074 ($\pm$ 0.0012) & 1.9921 ($\pm$ 0.0032) \\ Ensemble & 44.65 ($\pm$ 0.09) & 0.2089 ($\pm$ 0.0009) & 1.9853 ($\pm$ 0.0013) \\ Staged Ensemble & 44.57 ($\pm$ 0.04) & 0.2089 ($\pm$ 0.0010) & 1.9878 ($\pm$ 0.0015) \\ \hline \end{tabular} \caption{Estimates of posterior means, with standard errors of posterior means shown in brackets.} \label{table:est} \end{table} From these results, one can see that for our Ricker model example, the single-sequence method is less efficient than the ensemble method, when both methods are well-tuned and computation time is taken into account. We also found that the staged ensemble method allows us to further improve the performance of the ensemble method. In detail, depending on the parameter one looks at, the best tuned ensemble method without staging (with 120 pool states) is between 1.9 and 12.0 times better than the best tuned single-sequence method (with 40 pool states). The best tuned ensemble method with staging (120 pool states) is between 3.4 and 20.4 times better than the best single-sequence method.
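For reference, the autocorrelation time estimator described above can be written in a few lines of Python; this is a sketch with a truncation rule and names of our own choosing (and it assumes burn-in has already been discarded), not the code used to produce Table~\ref{table:results}:
\begin{verbatim}
import numpy as np

def act(chains, K=None):
    """Estimate 1 + 2 * sum_k rho_k for one parameter from several runs.
    Autocovariances are computed for each run about the overall mean and
    then averaged; the sum is truncated at lag K."""
    chains = [np.asarray(c, dtype=float) for c in chains]
    grand_mean = np.mean(np.concatenate(chains))
    n = min(len(c) for c in chains)
    if K is None:
        K = n // 10            # stand-in for "where rho_k is nearly zero"
    gamma = np.zeros(K + 1)
    for c in chains:
        d = c[:n] - grand_mean
        for k in range(K + 1):
            gamma[k] += np.mean(d[:n - k] * d[k:]) / len(chains)
    rho = gamma / gamma[0]
    return 1.0 + 2.0 * np.sum(rho[1:])
\end{verbatim}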
The large drop in autocorrelation time for $\sigma$ for the single-sequence method between $20$ and $40$ pool states is due to poor mixing in one of the five runs. To check whether this is systematic, we did five more runs of the single-sequence method, from another set of starting values, and found that the same problem is again present in one of the five runs. This is indicative of a risk of poor mixing when using the single-sequence sampler with a small number of pool states. We did not observe similar problems for larger numbers of pool states. The results in Table \ref{table:results} show that the performance improvement is greatest for the parameter $\phi$. One reason for this may be that the posterior distribution of $\phi$ given a sequence is significantly more concentrated than the marginal posterior distribution of $\phi$. Since for a sufficiently large number of pool states, the posterior distribution given an ensemble approximates the marginal posterior distribution, the posterior distribution of $\phi$ given an ensemble becomes relatively more diffuse, compared to its posterior given a single sequence, than do the posterior distributions of $r$ and $\sigma$. This leads to a larger relative performance improvement when sampling values of $\phi$. Evidence of this can be seen in Figure \ref{fig:ill}. To produce it, we took the hidden state sequence and parameter values from the end of one of our ensemble runs, and performed Metropolis updates for the parameters, while keeping the hidden state sequence fixed. We also took the same hidden state sequence and parameter values, mapped the hidden sequence to an ensemble of sequences by generating a collection of pool states (we used $120$) and performed Metropolis updates of the parameter part of the ensemble, keeping the pool states fixed. We drew $50,000$ samples given a single fixed sequence and $2,000$ samples given an ensemble. \begin{figure} \caption{An illustration to aid explanation of relative performance. Note that the fixed sequence dots are smaller, to better illustrate the difference in the spread between the fixed sequence and the two other distributions.} \label{fig:ill} \end{figure} Visually, we can see that the posterior of $\theta$ given a single sequence is significantly more concentrated than both the marginal posterior and the posterior given a fixed ensemble of sequences. Comparing the standard deviations of the posterior of $\theta$ given a fixed sequence, and the marginal posterior of $\theta$, we find that the marginal posterior of $\phi$ has a standard deviation about $21$ times larger than the posterior of $\phi$ given a single sequence. The marginal posteriors for $r$ and $\sigma$ have standard deviations larger by factors of $5.2$ and $6.0$, respectively. The standard deviation of the posterior given our fixed ensemble of sequences is greater for $\phi$ by a factor of $11$, and by factors of $4.3$ and $3.2$ for $r$ and $\sigma$. This is consistent with our explanation above. We note that the actual timings of the runs are different from what one may expect. In theory, the computation time should scale as $nL^{2}$, where $L$ is the number of pool states and $n$ is the number of observations. However, our implementation, written in MATLAB, carries out the forward pass as a nested loop over $n$ and over $L$, with another inner loop over $L$ vectorized. MATLAB is an interpreted language with slow loops, and vectorizing loops generally leads to vast performance improvements.
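To illustrate this point, a backward pass with the loop structure just described (an explicit loop over time and over the current pool state, with the innermost sum over the pool vectorized) might look as follows in Python/NumPy; this is a rough analogue only, and the names and the generic transition and observation factors are ours:
\begin{verbatim}
import numpy as np

def backward_pass(trans, w, n1=0):
    """Backward recursion over pool states from time N-1 down to n1
    (0-based).  trans[i][j, l] plays the role of p(x_{i+1}^[l] | x_i^[j])
    and w[i][l] the observation factor for pool state l at time i."""
    N, L = len(w), len(w[0])
    beta = np.ones((N, L))                 # beta at the last time is 1
    for i in range(N - 2, n1 - 1, -1):     # outer loop over time
        for j in range(L):                 # explicit loop over pool states
            # innermost sum over pool states, as one vectorized operation
            beta[i, j] = np.sum(trans[i][j, :] * beta[i + 1] * w[i + 1])
    return beta
\end{verbatim}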
For the numbers of pool states we use, the computational cost of the vector operation corresponding to the inner loop over $L$ is very low compared to that of the outer loop over $L$. As a result, the total computational cost scales approximately linearly with $L$, in the range of values of $L$ we considered. An implementation in a different language might lead to different optimal pool state settings. The original examples of Wood and Fearnhead and Prangle used $\phi = 10$ instead of $\phi = 2$. We found that for $\phi = 10$, the ensemble method still performs better than the single sequence method, when computation time is taken into account, though the difference in performance is not as large. When $\phi$ is larger, the observations $y_{i}$ are larger on average as well. As a result, the data is more informative about the values of the hidden state variables, and the marginal posteriors of the model parameters are more concentrated. As a result of this, though we would still expect the ensemble method to improve performance, we would not expect the improvement to be as large as when $\phi = 2$. Finally, we would like to note that if the single-sequence method is implemented already, implementing the ensemble method is very simple, since all that is needed is an additional call of the forward pass function and summing of the final forward probabilities. So, the performance gains from using an ensemble for parameter updates can be obtained with little additional programming effort. \section{Conclusion} We found that both the embedded HMM MCMC method and its ensemble extension perform significantly better than the ordinary Metropolis method for doing Bayesian inference in the Ricker model. This suggests that it would be promising to investigate other state space models with non-linear state dynamics, and see if it is possible to use the embedded HMM methods we described to perform inference in these models. Our results also show that using staged proposals further improves the performance of the ensemble method. It would be worthwhile to look at other scenarios where this technique might be applied. Most importantly, however, our results suggest that looking at multiple hidden state sequences at once can make parameter sampling in state space models noticeably more efficient, and so indicate a direction for further research in the development of MCMC methods for non-linear, non-Gaussian state space models. \section*{References} \leftmargini 0.2in \labelsep 0in \begin{description} \item Fearnhead, P., Prangle, D. (2012) ``Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation'', {\em Journal of the Royal Statistical Society, Series B}, vol.~74, pp.~1-28. \item Neal, R. M. (2010) ``MCMC Using Ensembles of States for Problems with Fast and Slow Variables such as Gaussian Process Regression'', Technical Report No. 1011, Department of Statistics, University of Toronto. \item Neal, R. M. (2003) ``Markov Chain Sampling for Non-linear State Space Models using Embedded Hidden Markov Models'', Technical Report No. 0304, Department of Statistics, University of Toronto. \item Neal, R. M., Beal, M. J., and Roweis, S. T. (2004) ``Inferring state sequences for non-linear systems with embedded hidden Markov models'', in S. Thrun, et al (editors), {\em Advances in Neural Information Processing Systems 16}, MIT Press. \item Neal, R. M. (1993) ``Probabilistic Inference Using Markov Chain Monte Carlo Methods'', Technical Report CRG-TR-93-1, Dept. 
of Computer Science, University of Toronto. \item Scott, S. L. (2002) ``Bayesian Methods for Hidden Markov Models: Recursive Computing in the 21st Century'', {\em Journal of the American Statistical Association}, vol.~97, no.~457, pp.~337-351. \item Wood, S. (2010) ``Statistical inference for noisy nonlinear ecological dynamic systems'', {\em Nature}, vol.~466, pp.~1102-1104. \end{description} \end{document}
\begin{document} \large \title[The Square Root of the Inverse Different]{Galois Module Structure of the Square Root of the Inverse Different over Maximal Orders} \author{Cindy (Sin Yi) Tsang} \address{Department of Mathematics, University of California, Santa Barbara} \email{[email protected]} \urladdr{http://sites.google.com/site/cindysinyitsang/} \date{\today} \begin{abstract}Let $K$ be a number field with ring of integers $\mathcal{O}_K$ and let $G$ be a \mbox{finite group of} odd order. Given a $G$-Galois $K$-algebra $K_h$, let $A_h$ be the square root of the inverse different of $K_h/K$, which exists by Hilbert's formula. If $K_h/K$ is weakly ramified, then $A_h$ is locally free over $\mathcal{O}_{K}G$ by a result of B. Erez, in which case it determines a class in the locally free class group $\mbox{Cl}(\mathcal{O}_KG)$ of $\mathcal{O}_KG$. Such a class in $\mbox{Cl}(\mathcal{O}_KG)$ is said to be $A$-realizable, and tame $A$-realizable if $K_h/K$ is tame. Let $\mathcal{A}(\mathcal{O}_KG)$ and $\mathcal{A}^t(\mathcal{O}_KG)$ denote the sets of all $A$-realizable classes and tame $A$-realizable classes, respectively. For $G$ abelian, we will show that the two sets $\mathcal{A}(\mathcal{O}_KG)$ and $\mathcal{A}^t(\mathcal{O}_KG)$ become equal after extending scalars to the maximal order in $KG$. \end{abstract} \maketitle \tableofcontents \section{Introduction}\label{intro} Let $K$ be a number field with ring of integers $\mathcal{O}_K$ and let $G$ be a \mbox{finite group.} Let $\Omega_K$ denote the absolute Galois group of $K$ and let $\Omega_K$ act \mbox{trivially on $G$} (on the left). Then, the set of all isomorphism classes of $G$-Galois $K$-algebras (see Section~\ref{Galgebra} for a brief review) is in bijection with the pointed set \[ H^1(\Omega_K,G)=\mbox{Hom}(\Omega_K,G)/\mbox{Inn}(G). \] Given $h\in H^1(\Omega_K,G)$, we will write $K_h$ for a Galois algebra \mbox{representative. If} the inverse different of $K_h/K$ has a square root, then we will denote it by $A_h$. We will study the Galois module structure of $A_h$ in this paper. In what follows, assume that $G$ has odd order. Then, for any $h\in H^1(\Omega_K,G)$, the inverse different of $K_h/K$ has a square root by Proposition~\ref{Hformula} below (every ramification group involved then has odd order, so each summand in (\ref{hilbert}) is even and hence so is the valuation of the different at every prime). \begin{prop}\label{Hformula}Let $p$ be a prime number and let $F/\mathbb{Q}_p$ be a finite extension. Let $N/F$ be a finite Galois extension with different ideal $\mathfrak{D}_{N/F}$ and let $\pi_N$ \mbox{be a} uniformizer in $N$. Then, we have $\mathfrak{D}_{N/F}=(\pi_N)^{v_N(\mathfrak{D}_{N/F})}$ for \begin{equation}\label{hilbert} v_N(\mathfrak{D}_{N/F})=\sum_{n=0}^{\infty}(|\mbox{Gal}(N/F)_n|-1), \end{equation} where $\mbox{Gal}(N/F)_n$ is the $n$-th ramification group of $N/F$ in lower numbering. \end{prop} \begin{proof} See \cite[Chapter IV, Proposition 4]{S}, for example. We remark that (\ref{hilbert}) is also known as Hilbert's formula. \end{proof} If $h\in H^1(\Omega_K,G)$ is such that $K_h/K$ is \emph{weakly ramified} (see Definition~\ref{ramification}), then $A_h$ is locally free over $\mathcal{O}_KG$ by \cite[Theorem 1 in Section 2]{E} and it determines a class $\mbox{cl}(A_h)$ in the locally free class group $\mbox{Cl}(\mathcal{O}_KG)$ of $\mathcal{O}_KG$. \mbox{Such a} class in $\mbox{Cl}(\mathcal{O}_KG)$ is said to be \emph{$A$-realizable}, and \emph{tame $A$-realizable} \mbox{if $K_h/K$ is} tame.
We will write \[ \mathcal{A}(\mathcal{O}_KG):=\{\mbox{cl}(A_h):h\in H^1(\Omega_K,G)\mbox{ with $K_h/K$ weakly ramified}\} \] for the set of all $A$-realizable classes in $\mbox{Cl}(\mathcal{O}_KG)$, and \[ \mathcal{A}^t(\mathcal{O}_KG):=\{\mbox{cl}(A_h):h\in H^1(\Omega_K,G)\mbox{ with $K_h/K$ tame}\} \] for the subset of $\mathcal{A}(\mathcal{O}_KG)$ consisting of the tame $A$-realizable classes. In what follows, assume that $G$ is abelian in addition to having odd order. In \cite[(12.1) and Theorem 1.3]{T}, the author gave a complete characterization of the set $\mathcal{A}^t(\mathcal{O}_KG)$ and showed that $\mathcal{A}^t(\mathcal{O}_KG)$ is a subgroup of $\mbox{Cl}(\mathcal{O}_KG)$. In \cite[Theorem 1.6]{T}, the author further showed that for \mbox{any $h\in H^1(\Omega_K,G)$ with} $K_h/K$ weakly ramified, if the wildly ramified primes of $K_h/K$ satisfy certain extra hypotheses, then we have $\mbox{cl}(A_h)\in\mathcal{A}^t(\mathcal{O}_KG)$. It is then natural to ask whether the two sets $\mathcal{A}(\mathcal{O}_KG)$ and $\mathcal{A}^t(\mathcal{O}_KG)$ are in fact equal. In this paper, we will prove that $\mathcal{A}(\mathcal{O}_KG)$ and $\mathcal{A}^t(\mathcal{O}_KG)$ become equal once we extend scalars to the maximal $\mathcal{O}_K$-order $\mathcal{M}(KG)$ in $KG$. More precisely, let $\mbox{Cl}(\mathcal{M}(KG))$ denote the locally free class group of $\mathcal{M}(KG)$ and let \[ \Psi:\mbox{Cl}(\mathcal{O}_KG)\longrightarrow\mbox{Cl}(\mathcal{M}(KG)) \] be the natural homomorphism afforded by extension of scalars. From another result \cite[Theorem 1.7]{T} of the author, we have $\Psi(\mathcal{A}(\mathcal{O}_KG))=\Psi(\mathcal{A}^t(\mathcal{O}_KG))$, provided that every prime divisor of $|G|$ is unramified in $K/\mathbb{Q}$. We will show that this additional hypothesis is unnecessary. \begin{thm}\label{equal} Let $K$ be a number field and let $G$ be a finite abelian \mbox{group of} odd order. Then, we have $\Psi(\mathcal{A}(\mathcal{O}_KG))=\Psi(\mathcal{A}^t(\mathcal{O}_KG))$. \end{thm} \begin{proof}See Remark~\ref{pfequal} below. \end{proof} Using the characterization of the set $\mathcal{A}^t(\mathcal{O}_KG)$ given in \cite[(12.1)]{T}, the proof of Theorem~\ref{equal} reduces to computing the local generators \mbox{of $A_h$ over $\mathcal{O}_KG$ for} each $h\in H^1(\Omega_K,G)$ with $K_h/K$ wildly and weakly ramified. Such generators at the tame primes of $K_h/K$ have already been characterized in \cite[Theorems 10.3 and 10.4]{T}. We will compute these generators at the wild primes of $K_h/K$ in this paper. We will prove (see \mbox{Sections~\ref{notation} and \ref{prereq}} for the notation): \begin{thm}\label{decomp}Let $p$ be a prime number and let $F/\mathbb{Q}_p$ be a finite extension. Let $G$ be a finite abelian group of odd order and let $h\in H^1(\Omega_F,G)$ be such that $F_h/F$ is wildly and weakly ramified. If $A_h=\mathcal{O}_FG\cdot a$, then \[ r_G(a)=rag(\gamma)u\Theta^t_*(g) \] for some $\gamma\in\mathcal{M}(FG)^\times$, $u\in\mathcal{H}(\mathcal{O}_FG)$, and $g\in \Lambda(FG)^\times$. \end{thm} \begin{remark}\label{pfequal}The proof of Theorem~\ref{equal} is that of \cite[Theorem 1.7]{T} verbatim, except we will need to use Theorem~\ref{decomp} above in place of \cite[Theorem 16.1]{T}. \mbox{To avoid} repetition, we will only prove Theorem~\ref{decomp} in this paper. 
\end{remark} \section{Notation and Conventions}\label{notation} Throughout this paper, unless otherwise specified, the symbol $F$ will denote a number field or a finite extension of $\mathbb{Q}_p$ for some prime number $p$. We will \mbox{also fix a} finite abelian group $G$ and we will use the convention that the homomorphisms in the cohomology groups considered are continuous. For such a field $F$, fix an algebraic closure $F^c$ of $F$ and let $\Omega_F$ denote the Galois group of $F^c/F$. Let $\mathcal{O}_F$ denote the ring of integers in $F$ and write $\mathcal{O}_{F^c}$ for its integral closure in $F^c$. We will let $\Omega_F$ act trivially on $G$ (on the left) and choose a compatible set $\{\zeta_n:n\in\mathbb{Z}^+\}$ of primitive roots of unity in $F^c$. We will also write $\widehat{G}$ for the group of irreducible $F^c$-valued characters on $G$, and $\mathcal{M}(FG)$ for the maximal $\mathcal{O}_F$-order in $FG$. When $F$ is a finite extension of $\mathbb{Q}_p$, given a finite extension $N/F$, say with uniformizer $\pi_N$ in $N$, let $v_N:N\longrightarrow\mathbb{Z}\cup\{\infty\}$ denote the additive valuation on $N$ for which $v_N(\pi_N)=1$. Given a fractional $\mathcal{O}_N$-ideal $\mathfrak{A}$ in $N$, \mbox{we will also} write $v_N(\mathfrak{A})$ for the unique integer for which $\mathfrak{A}=(\pi_N)^{v_N(\mathfrak{A})}$. Finally, if $N/F$ is Galois, then for each $n\in\mathbb{Z}_{\geq 0}$, let $\mbox{Gal}(N/F)_n$ denote the $n$-th ramification group of $N/F$ in lower numbering. \section{Prerequisites}\label{prereq} In this section, we will define the notation used in the statement of Theorem~\ref{decomp}, in particular \emph{reduced resolvends} and the \emph{modified Stickelberger transpose}. The former was introduced by L. McCulloh in \cite[Section 2]{M}, where he studied the Galois module structure of rings of integers. The latter was introduced by the author in \cite[Section 8]{T} by modifying the definition of the \emph{Stickelberger transpose} defined by McCulloh in \cite[Section 4]{M}. \subsection{Galois Algebras and Resolvends}\label{Galgebra} We will give a brief review of Galois algebras and resolvends (see \cite[Section 1]{M} for a more detailed discussion). We note that their definitions still make sense even when $G$ is not abelian. \begin{definition}\label{GaloisAlg}A \emph{Galois algebra over $F$ with group $G$} or \emph{$G$-Galois $F$-algebra} is a commutative semi-simple $F$-algebra $N$ on which $G$ acts (on the left) as a group of automorphisms satisfying $N^{G}=F$ and $[N:F]=|G|$. Two $G$-Galois $F$-algebras are said to be \emph{isomorphic} if there exists an \mbox{$F$-algebra isomorphism} between them which preserves the action of $G$. \end{definition} The set of all isomorphism classes of $G$-Galois $F$-algebras may be shown to be in bijective correspondence with the pointed set \[ H^1(\Omega_F,G):=\mbox{Hom}(\Omega_F,G)/\mbox{Inn}(G). \] Since the fixed finite group $G$ is abelian, the isomorphism classes of $G$-Galois $F$-algebras may be identified with the homomorphisms in $\mbox{Hom}(\Omega_F,G)$. More specifically, each $h\in\mbox{Hom}(\Omega_F,G)$ is associated to the $F$-algebra \[ F_{h}:=\mbox{Map}_{\Omega_F}(^{h}G,F^{c}), \] where $^{h}G$ is the group $G$ endowed with the $\Omega_F$-action given by \[ (\omega\cdot s):=h(\omega)s\hspace{1cm}\mbox{for $s\in G$ and $\omega\in\Omega_F$}. \] The $G$-action on $F_h$ is given by \[ (s\cdot a)(t):=a(ts)\hspace{1cm}\mbox{for $a\in F_h$ and $s,t\in G$}.
\] Given a set $\{s_i\}$ of coset representatives for $h(\Omega_F)\backslash G$, note that each $a\in F_h$ is uniquely determined by the values $a(s_i)$, and these $a(s_i)$ may be arbitrarily chosen provided that they are fixed by all $\omega\in\ker(h)$. Setting $F^{h}:=(F^{c})^{\ker(h)}$, we see that evaluation at the $s_i$ induces an isomorphism \[ F_{h}\simeq \prod_{h(\Omega_F)\backslash G}F^{h} \] of $F$-algebras. The above isomorphism depends on the choice of the set $\{s_i\}$. \begin{definition}\label{ramification} Given $h\in\mbox{Hom}(\Omega_F,G)$, we say that $F_h/F$ or $h$ is \emph{unramified} if $F^h/F$ is unramified. Similarly for \emph{tame}, \emph{wild}, and \emph{weakly ramified}. Recall that a Galois extension over $F$ is said to be weakly ramified if all of its second ramification groups (in lower numbering) are trivial. \end{definition} \begin{definition}Given $h\in\mbox{Hom}(\Omega_F,G)$, let $\mathcal{O}^h:=\mathcal{O}_{F^h}$ and define \[ \mathcal{O}_h:=\mbox{Map}_{\Omega_F}(^hG,\mathcal{O}^h). \] If the inverse different of $F^h/F$ is a square, let $A^h$ be its square root and set \[ A_h:=\mbox{Map}_{\Omega_F}(^hG,A^h). \] In the sequel, whenever we write $A_h$ for some $h\in\mbox{Hom}(\Omega_F,G$), we implicitly assume that $A^h$ exists (by Proposition~\ref{Hformula}, this is so when $G$ has odd order). \end{definition} \begin{prop}\label{Aexists}Let $F$ be a finite extension of $\mathbb{Q}_p$ and let $h\in\mbox{Hom}(\Omega_F,G)$ be wildly and weakly ramified. Then, the inverse different of $F^h/F$ is a square, and we have $v_{F^h}(A^h)=1-|\mbox{Gal}(F^h/F)_0|$. Moreover, the group $\mbox{Gal}(F^h/F)_0$ is equal to $\mbox{Gal}(F^h/F)_1$ and is elementary $p$-abelian. \end{prop} \begin{proof}Let $\mathfrak{D}^h$ denote the different ideal of $F^h/F$. Since $h$ is weakly ramified, by Proposition~\ref{Hformula}, we know that \[ v_{F^h}(\mathfrak{D}^h)=|\mbox{Gal}(F^h/F)_0|+|\mbox{Gal}(F^h/F)_1|-2. \] Now, since $G$ is abelian, by \cite[Chapter IV, Proposition 9, Corollary 2]{S}, we have $\mbox{Gal}(F^h/F)_n=\mbox{Gal}(F^h/F)_{n+1}$ for all $n\in\mathbb{Z}_{\geq 0}$ that is not divisible by \[ e_0:=[\mbox{Gal}(F^h/F)_0:\mbox{Gal}(F^h/F)_1]. \] If $e_0\neq 1$, then $\mbox{Gal}(F^h/F)_1=\mbox{Gal}(F^h/F)_2$ and this is impossible because $h$ is wildly and weakly ramified. Hence, we must have $e_0=1$ and so \[ \mbox{Gal}(F^h/F)_0=\mbox{Gal}(F^h/F)_1. \] We then deduce that $\mathfrak{D}^h$ is a square and that $v_{F^h}(A^h)=1-|\mbox{Gal}(F^h/F)_0|$. Because $\mbox{Gal}(F^h/F)_1/\mbox{Gal}(F^h/F)_2$ is elementary $p$-abelian by \cite[Chapter IV, Proposition 7, Corollary 3]{S} and $\mbox{Gal}(F^h/F)_2=1$ by hypothesis, we then see that the group $\mbox{Gal}(F^h/F)_0$ is elementary $p$-abelian as well. \end{proof} Next, consider the $F^c$-algebra $\mbox{Map}(G,F^c)$ on which we let $G$ act via \[ (s\cdot a)(t):=a(ts)\hspace{1cm}\mbox{for $a\in\mbox{Map}(G,F^c)$ and $s,t\in G$}. \] Note that $F_h$ is an $FG$-submodule of $\mbox{Map}(G,F^c)$ for all $h\in\mbox{Hom}(\Omega_F,G)$. \begin{definition}\label{resolvend} The \emph{resolvend map} $\mathbf{r}_{G}:\mbox{Map}(G,F^{c})\longrightarrow F^{c}G$ is defined by \[ \mathbf{r}_{G}(a):=\sum\limits _{s\in G}a(s)s^{-1}. \] \end{definition} The map $\mathbf{r}_{G}$ is clearly an isomorphism of $F^cG$-modules, but not an isomorphism of $F^cG$-algebras because it does not preserve multiplication. 
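As a simple illustration of Definition~\ref{resolvend}, if $G=\langle s\rangle$ is cyclic of order $3$ and $a\in\mbox{Map}(G,F^c)$, then \[ \mathbf{r}_{G}(a)=a(1)\cdot 1+a(s)s^{2}+a(s^{2})s, \] since $s^{-1}=s^{2}$ and $(s^{2})^{-1}=s$; evaluating such an element at a character of $G$, as in (\ref{resolvent}) below, yields expressions of the same shape as the classical Lagrange resolvents.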
Moreover, given $a\in\mbox{Map}(G,F^c)$, we have that $a\in F_h$ if and only if \begin{equation}\label{resol1} \omega\cdot\mathbf{r}_{G}(a)=\mathbf{r}_{G}(a)h(\omega) \hspace{1cm}\mbox{for all }\omega\in\Omega_F. \end{equation} The next proposition shows that resolvends may be used to identify elements $a\in F_h$ for which $F_h=FG\cdot a$ or $\mathcal{O}_h=\mathcal{O}_FG\cdot a$ or $A_h=\mathcal{O}_FG\cdot a$. Here $[-1]$ denotes the involution on $F^cG$ induced by the involution $s\mapsto s^{-1}$ on $G$. \begin{prop}\label{NBG} Let $h\in\mbox{Hom}(\Omega_F,G)$ and let $a\in F_h$ be given. We have \begin{enumerate}[(a)] \item $F_h=FG\cdot a$ if and only if $\mathbf{r}_{G}(a)\in (F^{c}G)^{\times}$; \item $\mathcal{O}_h=\mathcal{O}_FG\cdot a$ with $h$ unramified if and only if $\mathbf{r}_G(a)\in(\mathcal{O}_{F^c}G)^{\times}$; \item $A_h=\mathcal{O}_FG\cdot a$ if and only if $a\in A_h$ and $\mathbf{r}_G(a)\mathbf{r}_G(a)^{[-1]}\in(\mathcal{O}_FG)^\times$. \end{enumerate} \end{prop} \begin{proof}See \cite[Proposition 1.8 and (2.11)]{M} for (a) and (b), and \cite[Proposition 3.10]{T} for (c). \end{proof} We are interested in giving a description of the resolvends $\mathbf{r}_G(a)$ for which $A_h=\mathcal{O}_FG\cdot a$ for a wildly and weakly ramified $h\in\mbox{Hom}(\Omega_F,G)$ when $F$ is a finite extension of $\mathbb{Q}_p$. The next proposition will be a crucial tool; \mbox{it will allow} us to reduce to the case when $F^{h}/F$ is totally ramified. \begin{prop}\label{factor} Let $F$ be a finite extension of $\mathbb{Q}_p$ and let $h\in\mbox{Hom}(\Omega_F,G)$. \begin{enumerate}[(a)] \item There exists a factorization $h=h^{nr}h^{tot}$ of $h$, with $h^{nr},h^{tot}\in\mbox{Hom}(\Omega_F,G)$, such that $h^{nr}$ is unramified and $F^{h^{tot}}/F$ is totally ramified. Furthermore, if $h$ is wildly and weakly ramified, then so is $h^{tot}$. \item Assume that $h$ is weakly ramified and let $h=h^{nr}h^{tot}$ be given as in (a). If $\mathcal{O}_{h^{nr}}=\mathcal{O}_FG\cdot a_{nr}$ and $A_{h^{tot}}=\mathcal{O}_FG\cdot a_{tot}$, then there exists $a\in A_h$ such that $A_h=\mathcal{O}_{F}G\cdot a$ and $\mathbf{r}_G(a)=\mathbf{r}_G(a_{nr})\mathbf{r}_G(a_{tot})$. \end{enumerate} \end{prop} \begin{proof}See \cite[Propositions 9.2 and 5.3]{T}. \end{proof} \subsection{Cohomology and Reduced Resolvends}\label{reduced}We will define reduced resolvends and explain how to interpret them as \mbox{functions on characters of $G$.} Recall that $\Omega_F$ acts trivially on $G$ on the left. Define \[ \mathcal{H}(FG):=((F^{c}G)^{\times}/G)^{\Omega_F}. \] Given a coset $\mathbf{r}_G(a)G\in\mathcal{H}(FG)$, we will denote it by $r_G(a)$, called the \emph{reduced resolvend of $a$}. Now, taking $\Omega_F$-cohomology of the short exact sequence \begin{equation}\label{es1} \begin{tikzcd}[column sep=1cm, row sep=1.5cm] 1 \arrow{r} & G \arrow{r} & (F^{c}G)^{\times} \arrow{r} & (F^{c}G)^{\times}/G \arrow{r}& 1 \end{tikzcd} \end{equation} yields the exact sequence \begin{equation}\label{es2} \begin{tikzcd}[column sep=1cm, row sep=1.5cm] 1 \arrow{r} & G \arrow{r} & (FG)^{\times} \arrow{r}{rag} & \mathcal{H}(FG) \arrow{r}{\delta}& \mbox{Hom}(\Omega_F,G) \arrow{r}& 1, \end{tikzcd} \end{equation} where exactness on the right follows from the fact that $H^1(\Omega_F,(F^cG)^\times)=1$, which is Hilbert's Theorem 90. 
Alternatively, a coset $\mathbf{r}_{G}(a)G\in\mathcal{H}(FG)$ is in the preimage of $h\in\mbox{Hom}(\Omega_F,G)$ under $\delta$ if and only if \[ h(\omega)=\mathbf{r}_G(a)^{-1}(\omega\cdot\mathbf{r}_{G}(a)) \hspace{1cm}\mbox{for all }\omega\in\Omega_F, \] which is equivalent to $F_{h}=FG\cdot a$ by (\ref{resol1}) and Proposition~\ref{NBG} (a). Because there always exists an element $a\in F_h$ for which $F_h=FG\cdot a$ by the Normal Basis Theorem, the map $\delta$ is indeed surjective. The argument above also shows that \[ \mathcal{H}(FG)=\{r_{G}(a)\mid F_{h}=FG\cdot a\mbox{ for some }h\in\mbox{Hom}(\Omega_F,G)\}. \] Similarly, we may define \[ \mathcal{H}(\mathcal{O}_FG):=((\mathcal{O}_{F^c}G)^\times/G)^{\Omega_F}. \] Then, the argument above together with Proposition~\ref{NBG} (b) implies that \begin{equation}\label{rrunram} \mathcal{H}(\mathcal{O}_FG)=\left\lbrace r_G(a)\,\middle\vert\, \begin{array}{@{}c@{}c} \mathcal{O}_h=\mathcal{O}_FG\cdot a\mbox{ for some}\\ \mbox{unramified $h\in\mbox{Hom}(\Omega_F,G)$} \end{array} \right\rbrace. \end{equation} To view reduced resolvends as functions on characters of $G$, define \[ \det:\mathbb{Z}\widehat{G}\longrightarrow\widehat{G};\hspace{1em}\det\Bigg(\sum_{\chi} n_{\chi}\chi\Bigg):=\prod_{\chi}\chi^{n_{\chi}} \] and set $S_{\widehat{G}}:=\ker(\det)$. By applying the functor $\mbox{Hom}(-,(F^{c})^{\times})$ to the short exact sequence \[ \begin{tikzcd}[column sep=1cm, row sep=1.5cm] 0 \arrow{r} & S_{\widehat{G}} \arrow{r} & \mathbb{Z}\widehat{G}\arrow{r}[font=\normalsize, auto]{\det} & \widehat{G} \arrow{r} & 1, \end{tikzcd} \] we obtain the short exact sequence \begin{equation}\label{es3} \begin{tikzcd}[column sep=0.45cm, row sep=1.5cm] 1 \arrow{r} & \mbox{Hom}(\widehat{G},(F^{c})^{\times}) \arrow{r} & \mbox{Hom}(\mathbb{Z}\widehat{G},(F^{c})^{\times}) \arrow{r}& \mbox{Hom}(S_{\widehat{G}},(F^{c})^{\times}) \arrow{r}& 1, \end{tikzcd} \end{equation} where exactness on the right follows from the fact that $(F^{c})^{\times}$ is divisible and thus injective. We will identify (\ref{es3}) with (\ref{es1}) as follows. First, observe that we have canonical identifications \begin{equation}\label{iden1} (F^{c}G)^{\times}=\mbox{Map}(\widehat{G},(F^{c})^{\times}) =\mbox{Hom}(\mathbb{Z}\widehat{G},(F^c)^\times). \end{equation} The second identification is given by extending the maps $\widehat{G}\longrightarrow(F^c)^\times$ via $\mathbb{Z}$-linearity, and the first identification is induced by characters on $G$ as follows. Each resolvend $\mathbf{r}_{G}(a)\in (F^cG)^\times$ gives rise to a map in $\mbox{Map}(\widehat{G},(F^c)^\times)$ given by \begin{equation}\label{resolvent} \mathbf{r}_G(a)(\chi):=\sum_{s\in G}a(s)\chi(s)^{-1}\hspace{1cm}\mbox{for }\chi\in\widehat{G}. \end{equation} Conversely, given $\varphi\in\mbox{Map}(\widehat{G},(F^c)^\times)$, one recovers $\mathbf{r}_{G}(a)$ by the formula \[ a(s):=\frac{1}{|G|}\sum_{\chi}\varphi(\chi)\chi(s)\hspace{1cm}\mbox{for }s\in G. \] Since $G=\mbox{Hom}(\widehat{G},(F^{c})^{\times})$ canonically, the third terms in (\ref{es1}) and (\ref{es3}) are naturally identified as well. Taking $\Omega_F$-invariants, we then obtain \begin{equation}\label{idenH} \mathcal{H}(FG)=\mbox{Hom}_{\Omega_F}(S_{\widehat{G}},(F^c)^\times). \end{equation} Finally, given $c\in (FG)^\times$, we will write $rag(c)$ for its image in $\mathcal{H}(FG)$ under the map $rag$ in (\ref{es2}). \subsection{The Modified Stickelberger Transpose}\label{StickelTranspose}\label{StickelS} In this subsection, assume further that $G$ has odd order.
Recall from Section~\ref{notation} that we chose a compatible set $\{\zeta_n:n\in\mathbb{Z}^+\}$ of primitive roots of unity in $F^c$. \begin{definition}\label{Stickel}For each $\chi\in\widehat{G}$ and $s\in G$, let $ \upsilon(\chi,s)\in \left[\frac{1-|s|}{2},\frac{|s|-1}{2}\right]$ denote the unique integer (note that $|s|$ is odd because $G$ has odd order) such that $\chi(s)=(\zeta_{|s|})^{\upsilon(\chi,s)}$, and define $\langle\chi,s\rangle_{*}:=\upsilon(\chi,s)/|s|$. Extending this definition by $\mathbb{Q}$-linearity, we obtain a pairing $\langle\hspace{1mm},\hspace{1mm}\rangle_*:\mathbb{Q}\widehat{G}\times\mathbb{Q}G\longrightarrow\mathbb{Q}$. The map \[ \Theta_{*}:\mathbb{Q}\widehat{G}\longrightarrow\mathbb{Q}G; \hspace{1em} \Theta_{*}(\psi):=\sum_{s\in G}\langle\psi,s\rangle_{*}s \] is called the \emph{modified Stickelberger map}. \end{definition} \begin{prop}\label{A-ZG} Given $\psi\in\mathbb{Z}\widehat{G}$, we have $\Theta_{*}(\psi)\in\mathbb{Z}G$ if and only if $\psi\in S_{\widehat{G}}$. \end{prop} \begin{proof}See \cite[Proposition 8.2]{T}. \end{proof} Note that $\Omega_F$ acts on $\widehat{G}$ (on the left) canonically via its action \mbox{on the roots} of unity in $F^c$, and recall that $\Omega_F$ acts trivially on $G$ by definition. \mbox{Below, we} define other $\Omega_F$-actions on $G$, one of which will make the $\mathbb{Q}$-linear map $\Theta_*$ preserve the $\Omega_F$-action. \begin{definition}\label{cyclotomic} Let $m:=\exp(G)$ and let $\mu_m$ be the group of $m$-th roots of unity in $F^c$. The \emph{$m$-th cyclotomic character of $\Omega_F$} is the homomorphism \[ \kappa:\Omega_F\longrightarrow(\mathbb{Z}/m\mathbb{Z})^{\times} \] defined by the equations \[ \omega(\zeta)=\zeta^{\kappa(\omega)}\hspace{1cm}\mbox{for $\omega\in\Omega_F$ and }\zeta\in\mu_m. \] For $n\in\mathbb{Z}$, let $G(n)$ be the group $G$ equipped with the $\Omega_F$-action given by \[ \omega\cdot s:=s^{\kappa(\omega^{n})}\hspace{1cm}\mbox{for $s\in G$ and $\omega\in\Omega_F$}. \] \end{definition} \begin{prop}\label{eqvariant} The map $\Theta_{*}:\mathbb{Q}\widehat{G}\longrightarrow\mathbb{Q}G(-1)$ preserves $\Omega_F$-action. \end{prop} \begin{proof}See \cite[Proposition 8.4]{T}. \end{proof} From Propositions~\ref{A-ZG} and \ref{eqvariant}, the map $\Theta_*$ restricts to an $\Omega_F$-equivariant map $S_{\widehat{G}}\longrightarrow\mathbb{Z}G(-1)$. Applying the functor $\mbox{Hom}(-,(F^c)^\times)$, we then obtain an $\Omega_F$-equivariant homomorphism \[ \Theta_{*}^{t}:\mbox{Hom}(\mathbb{Z}G(-1),(F^{c})^{\times})\longrightarrow\mbox{Hom}(S_{\widehat{G}},(F^{c})^{\times});\hspace{1em}f\mapsto f\circ\Theta_*. \] Taking $\Omega_F$-invariants, this yields a homomorphism \[ \Theta^t_{*}:\mbox{Hom}_{\Omega_F}(\mathbb{Z}G(-1),(F^{c})^{\times})\longrightarrow\mbox{Hom}_{\Omega_F}(S_{\widehat{G}},(F^{c})^{\times}), \] called the \emph{modified Stickelberger transpose}. To simplify notation, define \begin{equation}\label{lambda} \Lambda(FG):=\mbox{Map}_{\Omega_F}(G(-1),F^c). \end{equation} Since there is a natural identification $\Lambda(FG)^\times= \mbox{Hom}_{\Omega_F}(\mathbb{Z}G(-1),(F^c)^\times)$, we may regard $\Theta_{*}^{t}$ as a homomorphism $\Lambda(FG)^{\times}\longrightarrow\mathcal{H}(FG)$ (recall (\ref{idenH})). \section{Valuations of Local Wild Resolvents}\label{computeval} In order to prove Theorem~\ref{decomp}, we will first prove the following. \begin{thm}\label{units}Let $F$ be a finite extension of $\mathbb{Q}_p$ and let $h\in\mbox{Hom}(\Omega_F,G)$ be wildly and weakly ramified. 
If $A_h=\mathcal{O}_FG\cdot a$ (cf. Proposition~\ref{Aexists}), then \begin{equation}\label{resolventdef} (a\mid\chi):=\sum_{s\in G}a(s)\chi(s)^{-1} \end{equation} (cf. (\ref{resolvent})) is an element of $\mathcal{O}_{F^c}^\times$ for all $\chi\in\widehat{G}$. \end{thm} We note that (\ref{resolventdef}) is called the \emph{resolvent of $a$ at $\chi$}. In the rest of this section, let $F$ be a finite extension of $\mathbb{Q}_p$ and let $\zeta=\zeta_p$ be the chosen primitive $p$-th root of unity in $F^c$. We will also use the following notation and conventions. \begin{definition}\label{defch5}Let $\mathbb{F}_p:=\mathbb{Z}/p\mathbb{Z}$. For each $i\in\mathbb{F}_p$, if $z$ is an element of order $1$ or $p$ in a group, we will write $z^i$ for $z^{n_i}$, where $n_i\in\mathbb{Z}$ is any \mbox{representative of $i$.} If $i\in\mathbb{F}_p^\times$, we will write $i^{-1}$ for its multiplicative inverse in $\mathbb{F}_p^\times$. If $p$ is odd, we will further define $c(i)\in\left[\frac{1-p}{2},\frac{p-1}{2}\right]$ to be the unique \mbox{integer that represents $i$.} \end{definition} \begin{definition}\label{Rn} Notice that $\mathbb{Q}_p$ contains all $(p-1)$-st roots of unity. We will write $\widehat{\mathbb{F}_p^\times}$ for the group of $\mathbb{Q}_p^\times$-valued characters on $\mathbb{F}_p^\times$. Given $\varphi\in\widehat{\mathbb{F}_p^\times}$, we will extend it to a map on $\mathbb{F}_p$ by setting $\varphi(0)=0$. For each $n\in\mathbb{N}$ which divides $p-1$, let $R_n:=(\mathbb{F}_p^\times)^n$ be the subgroup of $\mathbb{F}_p^\times$ consisting of the non-zero $n$-th powers in $\mathbb{F}_p$. \end{definition} \subsection{Valuations of Gauss Sums over $\mathbb{Q}_p$} We will prove Theorem~\ref{units} by first computing the valuations of the following Gauss sums. \begin{definition}\label{Gauss}For each $\varphi\in\widehat{\mathbb{F}_p^\times}$ and $j\in\mathbb{F}_p$, define \[ G(\varphi,j):=\sum_{k\in\mathbb{F}_p}\varphi(k)\zeta^{jk}. \] \end{definition} \begin{lem}\label{GaussBasic} For all $\varphi\in\widehat{\mathbb{F}_p^\times}$ and $j\in\mathbb{F}_p^\times$, we have \begin{enumerate}[(a)] \item $G(1,0)=p-1$ and $G(\varphi,0)=0$ if $\varphi\neq 1$; \item $G(\varphi,j)=\varphi(j)^{-1}G(\varphi,1)$ and $G(1,j)=-1$. \end{enumerate} \end{lem} \begin{proof}The claims in (a) follow from the orthogonality of characters, and both equalities in (b) follow from a simple calculation. \end{proof} In view of Lemma~\ref{GaussBasic}, it remains to consider $G(\varphi,1)$ for $\varphi\neq 1$. \begin{prop}\label{GaussVal}Let $\varphi\in\widehat{\mathbb{F}_p^\times}$ be of order $n\neq 1$. For all $j\in\mathbb{F}_p^\times$, we have \[ v_{\mathbb{Q}_p(\zeta)}(G(\varphi,j))\geq (p-1)/n. \] \end{prop} \begin{proof}By Lemma~\ref{GaussBasic} (b), it is enough to prove the claim for $j=1$. We will do so by computing the valuation of the sum \[ S:=\sum_{j\in\mathbb{F}_p}G(\varphi,j)^n. \] On the one hand, using Definition~\ref{Gauss}, we have \begin{align*} S &=\sum_{j\in\mathbb{F}_p} \sum_{\substack{k_i\in\mathbb{F}_p\\1\leq i\leq n}}\varphi(k_1\cdots k_n)\zeta^{j(k_1+\cdots+k_n)}\\ &=\sum_{\substack{k_i\in\mathbb{F}_p\\1\leq i\leq n}}\varphi(k_1\cdots k_n)\sum_{j\in\mathbb{F}_p}\zeta^{j(k_1+\cdots+k_n)}.
\end{align*} Since each $\varphi(k_1\cdots k_n)$ is integral and \[ \sum_{j\in\mathbb{F}_p}\zeta^{j(k_1+\cdots+k_n)}=\begin{cases} p&\mbox{ if }k_1+\cdots+k_n=0\\ 0&\mbox{ otherwise}, \end{cases} \] the sum $S$ is the product of $p$ and an element of non-negative valuation, so \begin{equation}\label{S1} v_{\mathbb{Q}_p(\zeta)}(S)\geq v_{\mathbb{Q}_p(\zeta)}(p)=p-1. \end{equation} On the other hand, notice that $G(\varphi,0)=0$ by Lemma~\ref{GaussBasic} (a) because $\varphi\neq 1$. Using Lemma~\ref{GaussBasic} (b) and the fact that $\varphi$ has order $n$, we then see that \[ S=\sum_{j\in\mathbb{F}_p^\times}\varphi(j)^{-n}G(\varphi,1)^n=(p-1)G(\varphi,1)^n. \] Since $p-1$ has valuation zero, this shows that \begin{equation}\label{S2} v_{\mathbb{Q}_p(\zeta)}(S)=n\cdot v_{\mathbb{Q}_p(\zeta)}(G(\varphi,1)). \end{equation} The desired inequality now follows from (\ref{S1}) and (\ref{S2}). \end{proof} We will also need the following proposition. \begin{prop}\label{gG}Let $\varphi\in\widehat{\mathbb{F}_p^\times}$ be of order $n\neq 1$. For all $j\in\mathbb{F}_p^\times$, we have \[ \sum_{l=1}^{n-1}G(\varphi^l,j)=1+n\sum_{k\in R_n}\zeta^{jk}. \] \end{prop} \begin{proof}First of all, we have \[ \sum_{l=0}^{n-1}G(\varphi^l,j) =\sum_{l=0}^{n-1}\sum_{k\in\mathbb{F}_p}\varphi^l(k)\zeta^{jk} =\sum_{k\in\mathbb{F}_p}\zeta^{jk}\sum_{l=0}^{n-1}\varphi^l(k). \] Observe that $\ker(\varphi)=R_n$ because $\varphi$ has order $n$. In particular, we may regard $1,\varphi,\cdots,\varphi^{n-1}$ as the distinct characters on $\mathbb{F}_p^\times/R_n$. By the orthogonality of characters, we see that \[ \sum_{l=0}^{n-1}\varphi^l(k)=\begin{cases} n&\mbox{if }k\in R_n\\ 0&\mbox{otherwise}. \end{cases} \] It follows that \[ \sum_{l=0}^{n-1}G(\varphi^l,j)=n\sum_{k\in R_n}\zeta^{jk}. \] Since $G(1,j)=-1$ by Lemma~\ref{GaussBasic} (b), the claim now follows. \end{proof} \subsection{Proof of Theorem~\ref{units}} First, we make the following observation, and we will consider the special case when $h\in\mbox{Hom}(\Omega_F,G)$ is wildly and weakly ramified with $F^h/F$ totally ramified (cf. Proposition~\ref{factor}). \begin{lem}\label{prelimval}Let $h\in\mbox{Hom}(\Omega_F,G)$ be given. If $A_h=\mathcal{O}_FG\cdot a$, then \[ v_{N}((a\mid\chi^{-1}))=-v_{N}((a\mid\chi)) \] for all $\chi\in\widehat{G}$. In particular, we have $v_F((a\mid 1))=0$. Here $N/F$ is any finite extension that contains $(a\mid\chi)$ for all $\chi\in\widehat{G}$. \end{lem} \begin{proof}This follows from the observation that \[ (a\mid\chi)(a\mid\chi^{-1})=\mathbf{r}_G(a)\mathbf{r}_G(a)^{[-1]}(\chi) \] (cf. (\ref{resolvent}) and recall that $[-1]$ denotes the involution on $F^cG$ induced by the involution $s\mapsto s^{-1}$ on $G$), which lies in $\mathcal{O}_N^\times$ by Proposition~\ref{NBG} (c). \end{proof} We note that the next proposition is a generalization of \cite[Theorem 15.4]{T}. \begin{prop}\label{valtotram}Let $h\in\mbox{Hom}(\Omega_F,G)$ be wildly and weakly ramified and be such that $F^h/F$ is totally ramified. Then, there exists $a\in A_h$ \mbox{(cf. Proposition} \ref{Aexists}) such that $A_h=\mathcal{O}_FG\cdot a$ and $(a\mid\chi)\in\mathcal{O}_{F^c}^\times$ for all $\chi\in\widehat{G}$. \end{prop} \begin{proof}From Proposition~\ref{Aexists}, we know that \[ v_{F^h}(A^h)\equiv 1\hspace{1cm}\mbox{(mod $|\mbox{Gal}(F^h/F)_1|)$}. \] Then, by \cite[Theorem 1.1]{HJ}, we have $A^h=\mathcal{O}_F\mbox{Gal}(F^h/F)\cdot\alpha$ for some $\alpha\in A^h$. 
Define $a\in\mbox{Map}(G,F^c)$ by setting \[ a(s):= \begin{cases} \omega(\alpha) &\mbox{if $s=h(\omega)$ for $\omega\in\Omega_F$}\\ 0 & \mbox{otherwise}. \end{cases} \] It is not hard to see that $a$ is well-defined and that $A_h=\mathcal{O}_FG\cdot a$. It remains to show that $(a\mid\chi)\in\mathcal{O}_{F^c}^\times$ for all $\chi\in\widehat{G}$. To that end, first observe that for each $\chi\in\widehat{G}$, we have \begin{equation}\label{achi} (a\mid\chi)=\sum_{s\in h(\Omega_F)}a(s)\chi(s)^{-1}. \end{equation} Note that $h(\Omega_F)\simeq\mbox{Gal}(F^h/F)$ and $\mbox{Gal}(F^h/F)=\mbox{Gal}(F^h/F)_0$ since $F^h/F$ is totally ramified. Because $\mbox{Gal}(F^h/F)_0$ has exponent $p$ by Proposition~\ref{Aexists}, so does $h(\Omega_F)$. In particular, the resolvent $(a\mid\chi)$ lies in $F^h(\zeta)$. If $p=2$, then $(a\mid\chi)=(a\mid\chi^{-1})$ and hence $(a\mid\chi)\in\mathcal{O}_{F^h(\zeta)}^\times$ by Lemma~\ref{prelimval}. If $p$ is odd and $[F(\zeta):F]$ is even, then $(a\mid\chi)\in\mathcal{O}_{F^h(\zeta)}^\times$ by \cite[Theorem 15.4]{T}. If $p$ and $[F(\zeta):F]$ are both odd, then $\mbox{Gal}(F(\zeta)/F)\simeq R_n$ for some $n\in\mathbb{N}$ dividing $p-1$ with $n\neq 1$. Now, suppose on the contrary that $(a\mid\chi)\notin\mathcal{O}_{F^h(\zeta)}^\times$ for some $\chi\in\widehat{G}$. By Lemma~\ref{prelimval}, we know that $\chi\neq 1$ and we may assume that $v_{F^h(\zeta)}((a\mid\chi))>0$. For each $s\in h(\Omega_F)$, let $j_s\in\mathbb{F}_p$ be such that $\chi^{-1}(s)=\zeta^{j_s}$. Let $\varphi\in\widehat{\mathbb{F}_p^\times}$ be any character of order $n$ and consider the sum \[ S:=\sum_{s\in h(\Omega_F)}a(s)\sum_{l=1}^{n-1}G(\varphi^l,j_s). \] We will obtain a contradiction by computing the valuation of $S$. First, let $e,d,r\in\mathbb{N}$ be such that the numbers in the diagram below represent the ramification indices of the extensions. \[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node at (0,0) [name=F] {$F$}; \node at (5,1) [name=F'] {$F(\zeta)$}; \node at (0,2.5) [name=L] {$F^h$}; \node at (5,3.5) [name=L'] {$F^h(\zeta)$}; \node at (0,-2.5) [name=Q] {$\mathbb{Q}_p$}; \node at (5,-1.5) [name=Q'] {$\mathbb{Q}_p(\zeta)$}; \node at (9.25,2) {$v_{F^h}(A^h)=1-p^r$}; \path[font=\small] (F') edge node[auto] {$\frac{ed}{p-1}$} (Q') (Q) edge node[auto] {$e$} (F) (Q') edge node[auto] {$p-1$} (Q) (F') edge node[auto] {$d$} (F) (F) edge node[auto] {$p^r$} (L) (L') edge node[above] {$d$} (L) (L') edge node[auto] {$p^r$} (F'); \draw[bend left=25] (F) edge node[pos=0.6,auto] {$R_n$} (F'); \end{tikzpicture} \] Note that $v_{F^h}(A^h)=1-p^r$ because $v_{F^h}(A^h)=1-|\mbox{Gal}(F^h/F)_0|$ by Proposition~\ref{Aexists}. For each $s\in h(\Omega_F)$ and $l=1,\dots,n-1$, if $j_s=0$, then $G(\varphi^l,j_s)=0$ by Lemma~\ref{GaussBasic} (a). If $j_s\neq 0$, then using Proposition~\ref{GaussVal}, we obtain \begin{align}\label{dval} v_{F^h(\zeta)}(a(s)G(\varphi^l,j_s)) &\geq d(1-p^r)+\frac{ed}{p-1}\cdot p^r\cdot\frac{p-1}{n}\\ &=dp^r\Big(\frac{e}{n}-1\Big)+d.\notag \end{align} Note that $nd\leq n|R_n|=p-1$ and that $p-1\leq ed$ by the multiplicativity of ramification indices. It follows that $n\leq e$ and so (\ref{dval}) is positive. We then deduce that $S$ has positive valuation. Next, let $H_0$ denote the subgroup of $h(\Omega_F)$ consisting of the elements $s$ for \noindent which $j_s=0$. Then, for all $s\in H_0$, we have $G(\varphi^l,j_s)=0$ for $l=1,\dots,n-1$ by Lemma~\ref{GaussBasic} (a). 
Using Proposition~\ref{gG}, we may then rewrite \[ S=\sum_{s\in h(\Omega_F)}a(s)\left(1+n\sum_{k\in R_n}\zeta^{j_sk}\right)-\sum_{s\in H_0}a(s)\left(1+n\sum_{k\in R_n}\zeta^{(0)k}\right). \] Recall that $\chi^{-1}(s)=\zeta^{j_s}$ by definition. By (\ref{achi}), the above simplifies to \begin{align*} S&=\sum_{s\in h(\Omega_F)}a(s)+n\sum_{k\in R_n}\sum_{s\in h(\Omega_F)}a(s)\chi^k(s)^{-1}-p\sum_{s\in H_0}a(s)\\ &=(a\mid 1)+n\sum_{k\in R_n}(a\mid\chi^k)-p\sum_{s\in H_0}a(s). \end{align*} Since $[F(\zeta):F]$ and $[F^h:F]$ are coprime, there is a canonical isomorphism \[ \mbox{Gal}(F^h(\zeta)/F)\simeq\mbox{Gal}(F(\zeta)/F)\times\mbox{Gal}(F^h/F). \] For each $k\in R_n$, let $\omega_k$ be the element in $\mbox{Gal}(F(\zeta)/F)$ such that $\omega_k(\zeta)=\zeta^k$ and set $\widetilde{\omega_k}:=\omega_k\times\mbox{id}_{F^h}$. Then, clearly $(a\mid\chi^{k})=\widetilde{\omega_k}((a\mid\chi))$ and so \[ v_{F^h(\zeta)}((a\mid\chi^k))=v_{F^h(\zeta)}((a\mid\chi)), \] which is positive by assumption. We have already shown that $S$ has positive valuation. Since $(a\mid 1)$ has valuation zero by Lemma~\ref{prelimval}, we deduce that \[ v_{F^h}\left(p\sum_{s\in H_0}a(s)\right)=0. \] Since $a\in A_h$, this in turn implies that \begin{align*} 0&\geq v_{F^h}(p)+v_{F^h}(A^h)\\ &=ep^r+(1-p^r)\\ &=p^r(e-1)+1, \end{align*} which is impossible. Hence, we must have $(a\mid\chi)\in\mathcal{O}_{F^h(\zeta)}^\times$ for all $\chi\in\widehat{G}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{units}]Let $h=h^{nr}h^{tot}$ be a factorization of $h$ as in Proposition~\ref{factor} (a). Since $F^{h^{tot}}/F$ is wildly, weakly, and totally ramified, by Proposition~\ref{valtotram}, there exists $a_{tot}\in A_{h^{tot}}$ such that $A_{h^{tot}}=\mathcal{O}_FG\cdot a_{tot}$ and \begin{equation}\label{val2} (a_{tot}\mid\chi)\in\mathcal{O}_{F^c}^\times\hspace{1cm}\mbox{for all }\chi\in\widehat{G}. \end{equation} On the other hand, by a classical theorem of E. Noether (alternatively, by \cite[Proposition 5.5]{M}), there exists $a_{nr}\in\mathcal{O}_{h^{nr}}$ such that $\mathcal{O}_{h^{nr}}=\mathcal{O}_FG\cdot a_{nr}$, and \begin{equation}\label{val1} (a_{nr}\mid\chi)\in\mathcal{O}_{F^c}^\times\hspace{1cm}\mbox{for all }\chi\in\widehat{G} \end{equation} as a consequence of Proposition~\ref{NBG} (b). From Proposition~\ref{factor} (b), we then obtain an element $a'\in A_h$ such that $A_h=\mathcal{O}_FG\cdot a'$ and $\mathbf{r}_{G}(a')=\mathbf{r}_{G}(a_{nr})\mathbf{r}_{G}(a_{tot})$. Since $A_h=\mathcal{O}_FG\cdot a$ also, we have $a=\gamma\cdot a'$ for some $\gamma\in(\mathcal{O}_FG)^\times$, so \[ (a\mid\chi)=\gamma(\chi)(a_{nr}\mid\chi)(a_{tot}\mid\chi)\hspace{1cm} \mbox{for all }\chi\in\widehat{G}. \] Clearly $\gamma(\chi)\in\mathcal{O}_{F^c}^\times$ for all $\chi\in\widehat{G}$. The above, together with (\ref{val2}) and (\ref{val1}), then implies that $(a\mid\chi)\in\mathcal{O}_{F^c}^\times$ for all $\chi\in\widehat{G}$ as well. \end{proof} \section{Decomposition of Local Wild Resolvends}\label{wildresol} In this section, let $F$ be a finite extension of $\mathbb{Q}_p$ and assume further that $G$ has odd order. As in Section~\ref{computeval}, let $\zeta=\zeta_p$ be the chosen primitive $p$-th root of unity in $F^c$. We will also use the same notation and conventions set up in Definitions~\ref{defch5} and~\ref{Rn}. \subsection{Construction of Local Normal Basis Generators} The major ingredient of the proof of Theorem~\ref{decomp} is the following proposition (recall the notation from Section~\ref{StickelS} and cf.
Theorem~\ref{units}). We remark that its proof is very similar \mbox{to that of} \cite[Proposition 13.2]{T}. \begin{prop}\label{rrwildmax}Let $h\in\mbox{Hom}(\Omega_F,G)$ be wildly and weakly \mbox{ramified, and be} such that $[F^h:F]=p$. Then, there exists $a\in F_h$ such that $F_h=FG\cdot a$ and \begin{enumerate}[(1)] \item $r_G(a)=\Theta^t_*(g)$ for some $g\in\Lambda(FG)^\times$; \item $(a\mid\chi)\in\mathcal{O}_{F^c}^\times$ for all $\chi\in\widehat{G}$. \end{enumerate} \end{prop} In what follows, let $h\in\mbox{Hom}(\Omega_F,G)$ be as in Proposition~\ref{rrwildmax}. Note that $p$ must be odd since $G$ has odd order and $F^h/F$ is wildly ramified of degree $p$. First of all, we will introduce the basic set up and some essential notation. Set $L:=F^h$. Note that there is a canonical isomorphism \[ \mbox{Gal}(L(\zeta)/F)\simeq\mbox{Gal}(F(\zeta)/F)\times\mbox{Gal}(L/F) \] because $[L:F]$ and $[F(\zeta):F]$ are coprime. Let $n\in\mathbb{N}$ be the unique integer dividing $p-1$ such that $\mbox{Gal}(F(\zeta)/F)\simeq R_n$ and let $d\in\mathbb{F}_p$ denote the class represented by $(p-1)/n$. We will also fix a generator $\tau$ of $\mbox{Gal}(L/F)$ and let $\widetilde{\tau}$ be the element in $\mbox{Gal}(L(\zeta)/F(\zeta))$ which is identified with $\tau$ via the above isomorphism. We summarize the set-up in the diagram below, where the numbers indicate the degrees of the extensions. \[ \begin{tikzpicture}[baseline=(current bounding box.center)] \node at (0,0) [name=F] {$F$}; \node at (5,1) [name=F'] {$F(\zeta)$}; \node at (0,2.5) [name=L] {$L$}; \node at (5,3.5) [name=L'] {$L(\zeta)$}; \node at (0,-2.5) [name=Qp] {$\mathbb{Q}_p$}; \node at (1.5,-2.5) {($p$ is odd)}; \node at (10.25,2) {$d:=(p-1)/n\mbox{ (mod $p$)}$}; \path[font=\small] (Qp) edge node[auto] {} (F) (F') edge node[auto] {$\frac{p-1}{n}$} (F) (F) edge node[auto] {$p$} (L) (L') edge node[above] {$\frac{p-1}{n}$} (L) (L') edge node[auto] {$p$} (F'); \draw[bend left=25] (F) edge node[pos=0.6,auto] {$R_n$} (F'); \draw[bend left=45] (F) edge node[auto] {$\langle\tau\rangle$} (L) (L') edge node[auto] {$\langle\widetilde{\tau}\rangle$} (F'); \end{tikzpicture} \] Next, as in the proof of Proposition~\ref{valtotram}, we may deduce from Proposition~\ref{Aexists} and \cite[Theorem 1.1]{HJ} that $A^h=\mathcal{O}_F\mbox{Gal}(L/F)\cdot\alpha'$ for some $\alpha'\in A^h$. Moreover, the map $a'\in\mbox{Map}(G,F^c)$ given by \[ a'(s):= \begin{cases} \omega(\alpha') &\mbox{if $s=h(\omega)$ for $\omega\in\Omega_F$}\\ 0 & \mbox{otherwise} \end{cases} \] is well-defined and we have $A_h=\mathcal{O}_FG\cdot a'$. We will define the desired generator $a\in F_h$ using the resolvents (\ref{resolventdef}) of $a'\in A_h$ (cf. the proof of Lemma~\ref{yiunit}). \begin{definition}\label{yidef}For each $i\in\mathbb{F}_p$, define \[ y_i:=\sum_{k\in\mathbb{F}_p}\tau^k(\alpha')\zeta^{-ik}. \] \end{definition} \begin{lem}\label{yiunit}For all $i\in\mathbb{F}_p$, we have $y_i\in\mathcal{O}_{L(\zeta)}^\times$. \end{lem} \begin{proof}Let $\omega_\tau\in\Omega_F$ be a lift of $\tau$. Then, the isomorphism $\mbox{Gal}(L/F)\simeq h(\Omega_F)$ induced by $h$ identifies $\tau$ with $h(\omega_\tau)$. Define $t:=h(\omega_\tau)$ and notice that $t$ has order $p$ because $[L:F]=p$ by hypothesis. Now, given $i\in\mathbb{F}_p$, let $\chi\in\widehat{G}$ be any character such that $\chi(t)=\zeta^i$. Then, we have \[ (a'\mid\chi)=\sum_{s\in h(\Omega_F)}a'(s)\chi(s)^{-1}=\sum_{k\in\mathbb{F}_p}\tau^k(\alpha')\chi(t)^{-k}, \] which is equal to $y_i$. 
It then follows from Theorem~\ref{units} that $y_i\in\mathcal{O}_{L(\zeta)}^\times$. \end{proof} \begin{lem}\label{yiprop1}For all $i\in\mathbb{F}_p$, we have $\widetilde{\tau}(y_i)=\zeta^iy_i$ and $y_i^p\in F(\zeta)^\times$. \end{lem} \begin{proof}Given $i\in\mathbb{F}_p$, the claim that $\widetilde{\tau}(y_i)=\zeta^iy_i$ follows from a simple calculation. Using this, we further deduce that \[ N_{L(\zeta)/F(\zeta)}(y_i)=\prod_{k\in\mathbb{F}_p}\widetilde{\tau}^k(y_i)=\prod_{k\in\mathbb{F}_p}\zeta^{ik}y_i=y_i^p. \] Thus, indeed $y_i^p\in F(\zeta)^\times$, and this proves the lemma. \end{proof} Recall Definitions~\ref{defch5} and~\ref{Rn}. Consider the element \[ \alpha:=\frac{1}{p}\Bigg(\sum_{k\in\mathbb{F}_p}\prod_{i\in R_n}y_i^{c(i^{-1}k)}\Bigg)\\ =\frac{1}{p}\Bigg(1+\prod_{i\in R_n}y_i^{c(i^{-1})}+\cdots+\prod_{i\in R_n}y_{i}^{c(i^{-1}(p-1))}\Bigg). \] We will show that the map $a\in\mbox{Map}(G,F^c)$ defined by \begin{equation}\label{awildmax} a(s):=\begin{cases} \omega(\alpha) & \mbox{if }s=h(\omega)\mbox{ for }\omega\in\Omega_F\\ 0 & \mbox{otherwise} \end{cases} \end{equation} is well-defined and that it has the desired properties. \begin{definition}\label{omegaimax}For each $i\in R_n$, define \[ \omega_i\in\mbox{Gal}(L(\zeta)/L);\hspace{1em}\omega_i(\zeta):=\zeta^{i} \] (note that our notation here is slightly different from that used in \cite[Definition 13.4]{T}). Clearly, we have \begin{equation}\label{omegaiyj} \omega_i(y_j)=y_{ij}\hspace{1cm}\mbox{for all $j\in\mathbb{F}_p$}. \end{equation} \end{definition} First, we show that $\alpha\in L$, which will then imply that $a$ is well-defined. \begin{lem}\label{inLmax}We have $\alpha\in L$. \end{lem} \begin{proof}Clearly $\alpha\in L(\zeta)$ because $y_i\in L(\zeta)$ for all $i\in\mathbb{F}_p$ by definition. Hence, we have $\alpha\in L$ if and only if $\alpha$ is fixed by the action of $\mbox{Gal}(L(\zeta)/L)$. \mbox{Now, an} element of $\mbox{Gal}(L(\zeta)/L)$ is equal to $\omega_j$ for some $j\in R_n$. Then, \mbox{using (\ref{omegaiyj}), we} deduce that for each $k\in\mathbb{F}_p$, we have \[ \omega_j\left(\prod_{i\in R_n}y_i^{c(i^{-1}k)}\right) =\prod_{i\in R_n}y_{ij}^{c(i^{-1}k)} =\prod_{i\in R_n}y_{i}^{c(i^{-1}jk)}. \] This implies that $\omega_j$ permutes the summands \[ 1,\prod_{i\in R_n}y_{i}^{c(i^{-1})},\dots,\prod_{i\in R_n}y_{i}^{c(i^{-1}(p-1))} \] in the definition of $\alpha$ and hence fixes $\alpha$. Thus, indeed $\alpha\in L$. \end{proof} Next, we compute the Galois conjugates of $\alpha$ in $L/F$. \begin{prop}\label{conjugatesmax}For all $j,k\in\mathbb{F}_p$, we have \[ \widetilde{\tau}^{j}\left(\prod_{i\in R_n}y_i^{c(i^{-1}k)}\right)=\zeta^{jkd}\cdot\prod_{i\in R_n}y_i^{c(i^{-1}k)}. \] In particular, this implies that for all $j\in\mathbb{F}_p$, we have \[ \tau^j(\alpha)=\frac{1}{p}\sum_{k\in\mathbb{F}_p}\left(\zeta^{jkd}\prod_{i\in R_n}y_i^{c(i^{-1}k)}\right). \] \end{prop} \begin{proof}Let $j,k\in\mathbb{F}_p$ be given. Notice that $\widetilde{\tau}^j(y_1)=\zeta^j y_1$ by Lemma~\ref{yiprop1}, and that $y_i=\omega_i(y_1)$ for all $i\in R_n$ by (\ref{omegaiyj}). Because $\mbox{Gal}(L(\zeta)/F)$ is abelian, for each $i\in R_n$, we have that \begin{align*} \widetilde{\tau}^{j}(y_i) &=(\widetilde{\tau}^j\circ\omega_i)(y_1)\\ &=(\omega_i\circ\widetilde{\tau}^j)(y_1)\\ &=\omega_i(\zeta^{j}y_1)\\ &=\zeta^{ij}y_i. 
\end{align*} It then follows that \begin{align*} \widetilde{\tau}^j\left(\prod_{i\in R_n}y_i^{c(i^{-1}k)}\right) &=\prod_{i\in R_n}\zeta^{ijc(i^{-1}k)}y_i^{c(i^{-1}k)}\\ &=\prod_{i\in R_n}\zeta^{jk}y_i^{c(i^{-1}k)}\\ &=\zeta^{jk(p-1)/n}\cdot\prod_{i\in R_n}y_i^{c(i^{-1}k)}. \end{align*} Since $(p-1)/n$ represents $d\in\mathbb{F}_p$ by definition, the claim now follows. \end{proof} Finally, we define the desired element $g\in\Lambda(FG)^\times$ (recall (\ref{lambda})). As in the \noindent proof of Lemma~\ref{yiunit}, let $\omega_\tau\in\Omega_F$ be a lift of $\tau$ and set $t:=h(\omega_\tau)$, which has order $p$. It will also be helpful to recall Definition~\ref{cyclotomic}. \begin{lem}\label{gmax} The map $g\in\mbox{Map}(G(-1),(F^c)^\times)$ given by \[ g(s):= \begin{cases} y_i^p&\mbox{if }s=t^{d^{-1}i^{-1}}\mbox{ for }i\in R_n\\ 1&\mbox{otherwise} \end{cases} \] is well-defined and preserves the $\Omega_F$-action. Thus, we have $g\in\Lambda(FG)^\times$. \end{lem} \begin{proof} It is clear that $g$ is well-defined because $t$ has order $p$. To show that $g$ preserves the $\Omega_F$-action, let $\omega\in\Omega_F$ and $s\in G(-1)$ be given. If $s=t^{d^{-1}i^{-1}}$ for some $i\in R_n$, then $s$ has order $p$ and so $\omega\cdot s$ is determined by the action of $\omega$ on $\zeta$. Let $j\in R_n$ be such that $\omega|_{F(\zeta)}=\omega_j|_{F(\zeta)}$. Then, we have $\omega^{-1}(\zeta)=\zeta^{j^{-1}}$ and so $\omega\cdot s=s^{j^{-1}}=t^{d^{-1}i^{-1}j^{-1}}$. Recall that $y_i^p\in F(\zeta)$ by Lemma~\ref{yiprop1} and that $y_{ij}=\omega_j(y_i)$ by (\ref{omegaiyj}). We then deduce that \[ g(\omega\cdot s)=y_{ij}^p=\omega_j(y_i^p)=\omega(g(s)). \] Now, if $\omega\cdot s= t^{d^{-1}i^{-1}}$ for some $i\in R_n$, then the same argument above shows that $s=\omega^{-1}\cdot(\omega\cdot s)=t^{d^{-1}i^{-1}j^{-1}}$ for some $j\in R_n$ as well. Hence, if $s\neq t^{d^{-1}i^{-1}}$ for all $i\in R_n$, then the same is true for $\omega\cdot s$. In this case, we have \[ g(\omega\cdot s)=1=\omega(1)=\omega(g(s)). \] Hence, indeed $g$ preserves the $\Omega_F$-action, and $g\in\Lambda(FG)^\times$ by definition. \end{proof} \begin{proof}[Proof of Proposition~\ref{rrwildmax}]Let $a\in\mbox{Map}(G,F^c)$ and $g\in\Lambda(FG)^\times$ be as in (\ref{awildmax}) and Lemma~\ref{gmax}, respectively. Since $\alpha\in L$ by Lemma~\ref{inLmax} and $L=F^h$, it is clear that $a$ is well-defined and that $a\in F_h$. First, we will use the identification $\mathcal{H}(FG)=\mbox{Hom}_{\Omega_F}(S_{\widehat{G}},(F^c)^\times)$ in (\ref{idenH}) to show that $r_G(a)=\Theta^t_*(g)$. To that end, let $\chi\in\widehat{G}$ be given and let $k\in\mathbb{F}_p$ be such that $\chi(t)=\zeta^{k}$. Observe that $\langle\chi,t^{d^{-1}i^{-1}}\rangle_*=c(d^{-1}i^{-1}k)/p$ for all $i\in R_n$ by Definitions~\ref{Stickel} and~\ref{defch5}, and so \[ \Theta_{*}^{t}(g)(\chi) =\prod_{s\in G}g(s)^{\langle\chi,s\rangle_*} =\prod_{i\in R_n}y_i^{c(d^{-1}i^{-1}k)}. \] On the other hand, because $\tau$ is identified with $t:=h(\omega_\tau)$ via the isomorphism $\mbox{Gal}(L/F)\simeq h(\Omega_F)$ induced by $h$, we have that \[ (a\mid\chi)=\sum_{s\in h(\Omega_F)}a(s)\chi(s)^{-1} =\sum_{j\in\mathbb{F}_p}\tau^{j}(\alpha)\zeta^{-jk}. 
\] Then, using Proposition~\ref{conjugatesmax}, we obtain \begin{align*} (a\mid\chi) &=\frac{1}{p}\sum_{j\in\mathbb{F}_p}\sum_{l\in\mathbb{F}_p}\Bigg(\zeta^{jld}\prod_{i\in R_n} y_i^{c(i^{-1}l)}\Bigg)\zeta^{-jk}\\ &=\frac{1}{p}\sum_{l\in\mathbb{F}_p}\Bigg(\prod_{i\in R_n} y_i^{c(i^{-1}l)}\sum_{j\in\mathbb{F}_p}\zeta^{j(ld-k)}\Bigg)\\ &=\prod_{i\in R_n}y_i^{c(i^{-1}d^{-1}k)}. \end{align*} Hence, indeed $r_{G}(a)=\Theta_{*}^{t}(g)$. Since $y_i\in\mathcal{O}_{L(\zeta)}^\times$ for each $i\in R_n$ by Lemma~\ref{yiunit}, the above also shows that $(a\mid\chi)\in\mathcal{O}_{L(\zeta)}^\times$ for all $\chi\in\widehat{G}$. Via the identification in (\ref{iden1}), this implies that $\mathbf{r}_G(a)\in (F^cG)^\times$ as well, and thus $F_h=FG\cdot a$ by Proposition~\ref{NBG} (a). This proves that the element $a$ in (\ref{awildmax}) satisfies all of the properties claimed in Proposition~\ref{rrwildmax}. \end{proof} \subsection{Proof of Theorem~\ref{decomp}} \begin{proof}[Proof of Theorem~\ref{decomp}] Let $h=h^{nr}h^{tot}$ be a factorization of $h$ as in Proposition~\ref{factor} (a). Since $F^{h^{tot}}/F$ is wildly, weakly, and totally ramified, the Galois group $\mbox{Gal}(F^{h^{tot}}/F)$ has exponent $p$ by Proposition~\ref{Aexists}. Since $h^{tot}(\Omega_F)\simeq\mbox{Gal}(F^{h^{tot}}/F)$, we have \begin{equation}\label{directprod} h^{tot}(\Omega_F)=H_1\times \cdots\times H_r \end{equation} for subgroups $H_1,\dots,H_r$ each of order $p$. For each $i=1,\dots,r$, define \[ h_i\in\mbox{Hom}(\Omega_F,G);\hspace{1em}h_i(\omega):=\uppi_i(h^{tot}(\omega)), \] where $\uppi_i:h^{tot}(\Omega_F)\longrightarrow H_i$ is the canonical projection map given by (\ref{directprod}). It is clear that $h^{tot}=h_1\cdots h_r$. For each $i=1,\dots,r$, observe that $F^{h_i}\subset F^{h^{tot}}$ and $[F^{h_i}:F]=p$. Since a Galois subextension of a weakly ramified extension is still weakly ramified (see \cite[Proposition 2.2]{V}, for example), each $h_i$ is wildly and weakly ramified. Hence, Proposition~\ref{rrwildmax} applies and there exists $a_i\in F_{h_i}$ with $F_{h_i}=FG\cdot a_i$ such that \[ r_G(a_i)=\Theta^t_*(g_i)\hspace{1cm}\mbox{for some }g_i\in\Lambda(FG)^\times \] and that $(a_i\mid\chi)\in\mathcal{O}_{F^c}^\times$ for all $\chi\in\widehat{G}$. On the other hand, by a classical theorem of E. Noether (alternatively, by \cite[Proposition 5.5]{M}), there exists $a_{nr}\in\mathcal{O}_{h^{nr}}$ such that $\mathcal{O}_{h^{nr}}=\mathcal{O}_FG\cdot a_{nr}$. In addition, we know from (\ref{rrunram}) that \[ r_G(a_{nr})=u\hspace{1cm}\mbox{for some }u\in\mathcal{H}(\mathcal{O}_FG) \] and that $(a_{nr}\mid\chi)\in\mathcal{O}_{F^c}^\times$ for all $\chi\in\widehat{G}$. Now, since $\mathbf{r}_G$ is bijective, there exists $a'\in\mbox{Map}(G,F^c)$ such that \begin{equation}\label{a'} \mathbf{r}_G(a')=\mathbf{r}_G(a_{nr})\mathbf{r}_G(a_1)\cdots\mathbf{r}_G(a_r). \end{equation} From (\ref{resol1}) and Proposition~\ref{NBG} (a), in fact we have $a'\in F_h$ and $F_h=FG\cdot a'$. But $F_h=FG\cdot a$ also, so we have $a=\gamma\cdot a'$ for some $\gamma\in(FG)^\times$, and \[ r_G(a)=rag(\gamma)r_G(a')=rag(\gamma)u\Theta^t_*(g), \] where $g:=g_1\cdots g_r\in\Lambda(FG)^\times$. It remains to show that $\gamma\in\mathcal{M}(FG)^\times$. To that end, observe that \[ (FG)^\times=\mbox{Map}_{\Omega_F}(\widehat{G},(F^c)^\times) \] from the identification in (\ref{iden1}), and so \[ \mathcal{M}(FG)^\times=\mbox{Map}_{\Omega_F}(\widehat{G},\mathcal{O}_{F^c}^\times).
\] Now, for any $\chi\in\widehat{G}$, by definition we have \[ (a\mid\chi)=\gamma(\chi)(a_{nr}\mid\chi)(a_1\mid\chi)\cdots (a_r\mid\chi). \] We know that $(a_{nr}\mid\chi),(a_1\mid\chi),\dots,(a_r\mid\chi)\in\mathcal{O}_{F^c}^\times$. Since $(a\mid\chi)\in\mathcal{O}_{F^c}^\times$ by Theorem~\ref{units}, it follows that $\gamma(\chi)\in\mathcal{O}_{F^c}^\times$ as well. Thus, indeed $\gamma\in\mathcal{M}(FG)^\times$ and this proves the theorem. \end{proof} \end{document}
\begin{document} \title{Walsh spaces containing smooth functions and quasi-Monte Carlo rules of arbitrary high order} \author{Josef Dick\thanks{School of Mathematics and Statistics, University of New South Wales, Sydney 2052, Australia. ({\tt [email protected]})}} \date{} \maketitle \begin{abstract} We define a Walsh space which contains all functions whose partial mixed derivatives up to order $\delta \ge 1$ exist and have finite variation. In particular, for a suitable choice of parameters, this implies that certain Sobolev spaces are contained in these Walsh spaces. For this Walsh space we then show that quasi-Monte Carlo rules based on digital $(t,\alpha,s)$-sequences achieve the optimal rate of convergence of the worst-case error for numerical integration. This rate of convergence is also optimal for the subspace of smooth functions. Explicit constructions of digital $(t,\alpha,s)$-sequences are given, hence providing explicit quasi-Monte Carlo rules which achieve the optimal rate of convergence of the integration error for arbitrarily smooth functions. \end{abstract} \begin{keywords} Numerical integration, quasi-Monte Carlo, digital nets and sequences, Walsh functions \end{keywords} \begin{AMS} primary: 11K38, 11K45, 65C05; secondary: 42C10; \end{AMS} \pagestyle{myheadings} \thispagestyle{plain} \markboth{Josef DICK}{Explicit constructions of quasi-Monte Carlo rules achieving arbitrary high convergence} \section{Introduction} Quasi-Monte Carlo rules are quadrature rules which aim to approximate an integral $\int_{[0,1]^s} f(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}$ by the average of the $N$ function values $f(\boldsymbol{x}_n)$ at the quadrature points $\boldsymbol{x}_0,\ldots,\boldsymbol{x}_{N-1} \in [0,1]^s$ (and hence are equal weight quadrature rules). The dimension $s$ can be arbitrarily large. The task here is to find ways of choosing those quadrature points in order to obtain a fast convergence of the approximation to the integral. Explicit constructions of quadrature points in arbitrarily high dimensions have until now been available only for the following two cases: \begin{enumerate} \item for sufficiently smooth periodic functions arbitrarily high convergence can be achieved using Kronecker sequences \cite[Theorem~5.3]{nie78} or a modification of digital nets recently introduced in \cite{Dick05}; \item a convergence of $\mathcal{O}(N^{-1} (\log N)^{s-1})$ can be achieved for functions of bounded variation (in this case the functions are not required to be periodic). \end{enumerate} For non-periodic functions no explicit constructions have been established which can fully exploit the smoothness of the integrand. This paper provides a complete solution to this problem. Among other things we show that an explicit construction of suitable point sets and sequences can be obtained in the following way: let $d \ge 1$ be an integer and let $\boldsymbol{x}_0,\boldsymbol{x}_1,\ldots \in [0,1)^{ds}$ be the points of a digital $(t,m,ds)$-net or digital $(t,ds)$-sequence over a finite field $\mathbb{F}_q$ in dimension $d s$ (see \cite{niesiam} for the definition of digital nets and sequences and see for example \cite{faure,niesiam,nierev,NX,sob67} for explicit constructions of suitable digital nets and sequences). Let $\boldsymbol{x}_n = (x_{n,1},\ldots, x_{n,ds})$ with $x_{n,j} = x_{n,j,1} q^{-1} + x_{n,j,2} q^{-2} + \cdots$ and $x_{n,j,i} \in \{0,\ldots, q-1\}$ (i.e. $x_{n,j,i}$ are the digits in the base $q$ representation of $x_{n,j}$).
Then for $n \ge 0$ we define $\boldsymbol{y}_n = (y_{n,1},\ldots, y_{n,s})$ with $$y_{n,j} = \sum_{i=1}^\infty \sum_{k=1}^d x_{n,(j-1) d + k,i} q^{-k - (i-1) d} \quad\mbox{for } j = 1,\ldots, s.$$ (Note that the addition here is carried out in $\mathbb{R}$ and that the sum over $i$ above is often finite as $x_{n,j,i} = 0$ for $i$ large enough.) We point out here that the quality of the point set or sequence is directly related to the $t$-value of the underlying $(t,m,ds)$-net or $(t,ds)$-sequence, see Theorem~\ref{th_talphabeta} and Theorem~\ref{th_talphabetaseq}. Corollary~\ref{cor_errorbounddignet} now shows that quasi-Monte Carlo rules using the points $\boldsymbol{y}_0,\ldots, \boldsymbol{y}_{N-1}$ (with $N = q^m$ for some $m \ge 1$) achieve the optimal rate of convergence of the integration error of $\mathcal{O}(N^{-\vartheta} (\log N)^{\vartheta s})$ for functions which have partial mixed derivatives up to order $\vartheta$ which are square integrable as long as $1 \le \vartheta \le d$ (Corollary~\ref{cor_errorbounddignet} is actually more general). If $\vartheta > d$ no improvement of the convergence rate is obtained compared to functions with smoothness $\vartheta = d$, i.e. we obtain a convergence of $\mathcal{O}(N^{-d} (\log N)^{d s})$. Similar, but less general results for periodic functions compared to those in this paper have been shown in \cite{Dick05} by a different proof method. (The construction above is an example of a construction method which can be used. In Section~\ref{sectdignets} we outline the general algebraical properties required for the construction of suitable point sets.) The quasi-Monte Carlo algorithm based on digital nets and sequences proposed here has also some further useful properties. For example our results also hold if one randomizes the point set by, say, a random digital shift (see for example \cite{DP05,DP05b,matou}). (This follows easily because the worst-case error (see Section~\ref{sect_int}) is invariant with respect to digital shifts in the Walsh space and hence we obtain the same upper bounds for randomized digital nets and sequences.) In summary the quadrature rules have the following properties: \begin{itemize} \item The quadrature rules introduced in this paper are equal weight quadrature rules which achieve the optimal rate of convergence up to some $\log N$ factors and the result holds for deterministic and randomly digitally shifted quadrature rules. \item The construction of the underlying point set is explicit and suitable point sets are available in arbitrary high dimensions and arbitrary high number of points. \item The quadrature rules automatically adjust themselves to the optimal rate of convergence $\mathcal{O}(N^{-\vartheta} (\log N)^{s\vartheta})$ as long as $1 \le \vartheta \le d$. \item The underlying point set is extensible in the dimension as well as in the number of points, i.e., one can always add some coordinates or points to an existing point set such that the quality of the point set is preserved. \end{itemize} In the following we lay out some of the underlying principles used in this work which stem from the behaviour of the Walsh coefficients of smooth functions. Walsh functions are piecewise constant wavelets which form an orthonormal set of $\mathcal{L}_2([0,1]^s)$. 
In their simplest form, for a non-negative integer $k$ with base $2$ representation $k = \kappa_0 + \cdots + \kappa_{m-1} 2^{m-1}$ and an $x \in [0,1)$ with base $2$ representation $x = x_1 2^{-1} + x_2 2^{-2} + \cdots$, the $k$-th Walsh function in base $2$ is given by $${\rm wal}_k(x) = (-1)^{\kappa_0 x_1 + \cdots + \kappa_{m-1} x_m}.$$ (Later on we will use the more general definition of Walsh functions over groups.) The behaviour of the Fourier coefficients of smooth periodic functions is well known: the smoother the function, the faster the Fourier coefficients go to zero (see for example \cite{zyg}). An analogous result for Walsh functions has, to the best of the author's knowledge, not been known until now (see Fine~\cite{Fine} who, for example, shows that the only absolutely continuous functions whose $k$-th Walsh coefficients decay faster than $1/k$ are constant functions). This will be established here and subsequently be exploited to obtain quasi-Monte Carlo rules with arbitrarily high order of convergence. To give a glimpse of how the Walsh coefficients of smooth functions behave, consider for example the Walsh series for $1/2-x$: $$1/2-x = \sum_{k=0}^\infty c_k {\rm wal}_k(x) = \sum_{a=0}^\infty 2^{-a-2} {\rm wal}_{2^a}(x).$$ Although the function is infinitely smooth, in general the decay of the Walsh coefficients is only of order $1/k$. But note that most of the Walsh coefficients are actually $0$. For example, when we consider $(1/2-x)^2$, then typically we would have that the Walsh coefficient of $k = 2^a$ is of order $2^{-a}$, the Walsh coefficient of $k = 2^{a_1} + 2^{a_2}$ ($a_1 > a_2$) is of order $2^{-a_1-a_2}$ and for $k = 2^{a_1} + 2^{a_2} + \cdots + 2^{a_v}$ with $a_1 > \cdots > a_v$ and $v > 2$ the $k$-th Walsh coefficient would be $0$. By considering $(1/2-x)^3, (1/2-x)^4, \ldots$, or more generally polynomials, one can now realize that the speed of convergence of the Walsh coefficients depends on how many non-zero digits $k$ has. This is the basic feature which we will relate to the speed of convergence of the Walsh coefficients for smooth functions. Subsequently we will explicitly state and use the behaviour of the Walsh coefficients of smooth functions. In general the Walsh functions depend on the base $q$ digit expansion of the wavenumber $k$ and also of the point $x$ where the Walsh function is to be evaluated. Hence, maybe not surprisingly, the $k$-th Walsh coefficients of smooth functions also depend on the $q$-adic expansion of $k$. We show that the Walsh space~$\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$ introduced in Section~\ref{sectwalshspace} contains all functions whose partial mixed derivatives up to order $\delta < \vartheta$ exist and have finite variation, where $\vartheta$ is a parameter restricting the behaviour of the Walsh coefficients of the function space $\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$. (We use a similar, though much more general, technique to the one Fine~\cite{Fine} used for showing that the Walsh coefficients of a differentiable function cannot decay faster than $1/k$.)
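The behaviour just described is easy to check numerically. The following short sketch is an illustration only and not part of the analysis in this paper; it assumes base $q=2$, uses a midpoint discretization on $2^{16}$ dyadic cells (on which ${\rm wal}_k$ is constant whenever $k<2^{16}$, so that only the linear integrand is being approximated, and for a linear integrand the midpoint rule is exact on each cell), and the function names are chosen ad hoc. It reproduces the Walsh coefficients of $1/2-x$: the coefficient of $k=2^a$ equals $2^{-a-2}$, while every $k$ with no or with at least two non-zero binary digits gives $0$.
\begin{verbatim}
import numpy as np

def wal2(k, x):
    # k-th Walsh function in base 2: (-1)^(kappa_0*x_1 + kappa_1*x_2 + ...)
    s, i = 0, 0
    while (k >> i) > 0:
        x_digit = int(x * 2 ** (i + 1)) % 2   # binary digit x_{i+1} of x
        s += ((k >> i) & 1) * x_digit         # kappa_i * x_{i+1}
        i += 1
    return 1 - 2 * (s % 2)

def walsh_coeff(f, k, m=16):
    # approximates int_0^1 f(x) wal_k(x) dx by the midpoint rule on 2^m
    # dyadic cells; wal_k is constant on each cell as long as k < 2^m
    xs = (np.arange(2 ** m) + 0.5) / 2 ** m
    return np.mean([f(x) * wal2(k, x) for x in xs])

f = lambda x: 0.5 - x
for a in range(5):           # k = 2^a: expect 2^(-a-2)
    print(2 ** a, walsh_coeff(f, 2 ** a), 2.0 ** (-a - 2))
for k in [0, 3, 5, 6, 7]:    # zero or >= 2 non-zero digits: expect 0
    print(k, walsh_coeff(f, k))
\end{verbatim}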
The concept of digital $(t,\alpha,\beta,m,s)$-nets and digital $(t,\alpha,\beta,s)$-sequences (see Section~\ref{sectdignets} and also \cite{Dick05} for a similar concept) is now designed to yield point sets which work well for the Walsh space~$\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$, just in the same way as the digital nets and sequences from \cite{faure,nie86,niesiam,NX,sob67} are designed to work well for the spaces for example considered in \cite{DP05,HHY} (or as lattice rules are designed to work well for periodic Korobov spaces). Here the power of the result that the Walsh space~$\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$ contains smooth functions comes into play: it follows that we can fully exploit the smoothness of an integrand using digital $(t,\alpha,\alpha,m,s)$-nets or digital $(t,\alpha,\alpha,s)$-sequences. As the construction of the points $\boldsymbol{y}_0,\boldsymbol{y}_1,\ldots$ introduced at the beginning yields explicit examples of digital $(t,\alpha,\alpha,m,s)$-nets or digital $(t,\alpha,\alpha,s)$-sequences as shown in Section~\ref{sectdignets} we therefore obtain explicit constructions of quasi-Monte Carlo rules which can achieve the optimal order of convergence for arbitrary smooth functions. In the next section we introduce Walsh functions over groups and state some of their essential properties. \section{Walsh functions over groups} In this section we give the definition of Walsh functions over groups and present some essential properties. Walsh functions in base $2$ were first introduced by Walsh~\cite{walsh}, though a similar but non-complete set of functions has already been studied by Rademacher~\cite{rade}. Further important results were obtained in \cite{Fine}. We follow \cite{PDP} in our presentation. \subsection{Definition of Walsh functions over groups} An essential tool for the investigation of digital nets are Walsh functions. A very general definition, corresponding to the most general construction of digital nets over finite rings, was given in \cite{LNS}. There, Walsh functions over a finite abelian group $G$, using some bijection $\varphi$, were defined. Here we restrict ourselves to the additive groups of the finite fields $\mathbb{F}_{p^r}$, $p$ prime and $r \ge 1$. We restate the definitions for this special case here for the sake of convenience. In the following let $\mathbb{N}$ denote the set of positive integers and $\mathbb{N}_0$ the set of non-negative integers. \begin{definition}[Walsh functions]\label{defwalsh}\rm Let $q=p^r$, $p$ prime, $r\in \mathbb{N}$ and let $\mathbb{F}_q$ be the finite field with $q$ elements. Let $\mathbb{Z}_q=\{0,1,\ldots,q-1\}\subset\mathbb{Z}$ and let $\varphi:\mathbb{Z}_q \longrightarrow \mathbb{F}_q$ be a bijection such that $\varphi(0)=0$, the neutral element of addition in $\mathbb{F}_q$. Moreover denote by $\psi$ the canonical isomorphism (described below) of additive groups $\psi:\mathbb{F}_q \longrightarrow \mathbb{Z}_p^r$ and define $\eta:=\psi \circ \varphi$. For $1 \le i \le r$ denote by $\pi_i$ the projection $\pi_i:\mathbb{Z}_p^r\longrightarrow \mathbb{Z}_p$, $\pi_i(x_1,\ldots,x_r)=x_i$. 
\[ \xymatrix{ \mathbb{Z}_{q} \ar[r]^{\varphi} \ar[dr]_{\eta} & \mathbb{F}_{q} \ar[d]^{\psi} \\ & \mathbb{Z}_p^{r} \ar[r]^{\pi_i} & \mathbb{Z}_p } \] Let now $k \in \mathbb{N}_0$ with base $q$ representation $k=\kappa_0+\kappa_1 q+\cdots +\kappa_{m-1} q^{m-1}$ where $\kappa_l \in \mathbb{Z}_q$ and let $x \in [0,1)$ with base $q$ representation $x=x_1/q+x_2/q^2+\cdots$ (unique in the sense that infinitely many $x_l$ must be different from $q-1$). Then the $k$-th Walsh function over the additive group of the finite field $\mathbb{F}_q$ with respect to the bijection $\varphi$ is defined by $$_{\mathbb{F}_q,\varphi}{\rm wal}_k(x) = \exp\left(\frac{2\pi \mathtt{i}}{p} \sum_{l=0}^{m-1}\sum_{i=1}^r (\pi_i \circ \eta)(\kappa_l)(\pi_i \circ \eta)(x_{l+1})\right).$$ For convenience we will in the rest of the paper omit the subscript and simply write ${{\rm wal}}_k$ if there is no ambiguity. Multivariate Walsh functions are defined by multiplication of the univariate components, i.e., for $s >1$, $\boldsymbol{x}=(x_1,\ldots,x_s)\in [0,1)^s$ and $\boldsymbol{k}=(k_1,\ldots,k_s) \in\mathbb{N}^s_0$, we set \[ {\rm wal}_{\boldsymbol{k}}(\boldsymbol{x}) = \prod_{j=1}^s {\rm wal}_{k_j}(x_j) .\] \end{definition} We now briefly describe the canonical isomorphism. Let $\mathbb{F}_q=\mathbb{Z}_p[\theta]$, such that $\{1,\theta,\ldots,\theta^{r-1}\}$ is a basis of $\mathbb{F}_q$ over $\mathbb{Z}_p$ as a vector space. Then the isomorphism $\psi$ between $\mathbb{F}_q$ and $\mathbb{Z}_p^r$ shall be given by \[ \psi(x)=(x_1,\ldots,x_r)^\top,\text{ for }x=\sum_{i=1}^r x_i\theta^{i-1}, x_i \in \mathbb{Z}_p. \] For more information on the Walsh functions defined above see \cite{PDP}. We summarize some important properties of Walsh functions over the additive group of a finite field which will be used throughout the paper. The proofs of the subsequent results can be found e.g.\ in \cite{larpir,pirsic} (see also \cite{chrest}). In the following we call $x \in [0,1)$ a $q$-adic rational if $x$ can be represented by a finite base $q$ expansion. \begin{proposition} \label{walshprops} Let $p$, $q$, $\mathbb{F}_q$ and $\varphi$ be as in Definition \ref{defwalsh}. For $x,y$ with $q$-adic representations $x=\sum_{i=w}^{\infty}{x_i}{q^{-i}}$ and $y=\sum_{i=w}^{\infty}{y_i}{q^{-i}}$, $w\in\mathbb{Z}$ (taking $w$ negative, hence the following operations are also defined for integers), define $x \oplus_{\varphi} y:=\sum_{i=w}^{\infty}{z_i}{q^{-i}}$ where $z_i:=\varphi^{-1}(\varphi(x_i)+\varphi(y_i))$ and $\ominus_{\varphi}x:=\sum_{i=w}^{\infty}{v_i}{q^{-i}}$ where $v_i:=\varphi^{-1}(-\varphi(x_i))$. Further we set $x\ominus_\varphi y:= x\oplus_\varphi(\ominus_\varphi y)$. For vectors $\boldsymbol{x}, \boldsymbol{y}$ we define the operations component-wise. Then we have: \begin{enumerate} \item For all $k,l \in \mathbb{N}_0$ and all $x,y \in [0,1)$, with the restriction that if $x, y$ are not $q$-adic rationals then $x \oplus_{\varphi} y$ is not allowed to be a $q$-adic rational, we have $${\rm wal}_k(x) \cdot {\rm wal}_l(x) = {\rm wal}_{k \oplus_{\varphi} l}(x), \; \; \; {\rm wal}_k(x) \cdot {\rm wal}_k(y) = {\rm wal}_k(x \oplus_{\varphi} y)$$ and, with the restriction that if $x, y$ are not $q$-adic rationals then $x \ominus_{\varphi} y$ is not allowed to be a $q$-adic rational, $${\rm wal}_k(x) \cdot \overline{{\rm wal}_l(x)} = {\rm wal}_{k \ominus_{\varphi} l}(x), \; \; \; {\rm wal}_{k}(x) \cdot \overline{{\rm wal}_{k}(y)} = {\rm wal}_k(x\ominus_{\varphi} y). 
$$ \item We have $$\sum_{k=0}^{q-1} {\rm wal}_{l}(k/q) = \begin{cases}0&\text{if } l\neq 0,\\ q&\text{if } l=0. \end{cases} $$ \item We have $$\int_0^1{\rm wal}_0(x)\,\mathrm{d} x=1 \;\;\; \text{ and } \;\;\;\int_0^1{\rm wal}_k(x)\,\mathrm{d} x=0 \text{ if } k>0.$$ \item For all $\boldsymbol{k}, \boldsymbol{l} \in \mathbb{N}_0^s$ we have the following orthogonality properties: $$\int_{[0,1)^s} {\rm wal}_{\boldsymbol{k}}(\boldsymbol{x}) \overline{{\rm wal}_{\boldsymbol{l}}(\boldsymbol{x})}\,\mathrm{d} \boldsymbol{x} = \begin{cases} 1 & \text{ if } \boldsymbol{k}=\boldsymbol{l}, \\ 0 & \text{ otherwise}. \end{cases} $$ \item For any $f \in \mathcal{L}_2([0,1)^s)$ and any $\boldsymbol{\sigma} \in [0,1)^s$ we have $$\int_{[0,1)^s}f(\boldsymbol{x})\,\mathrm{d} \boldsymbol{x} = \int_{[0,1)^s}f(\boldsymbol{x} \oplus_{\varphi} \boldsymbol{\sigma})\,\mathrm{d} \boldsymbol{x}.$$ \item For any integer $s \ge 1$ the system $\{{\rm wal}_{\boldsymbol{k}}: \boldsymbol{k} \in \mathbb{N}_0^s\}$ is a complete orthonormal system in $\mathcal{L}_2([0,1)^s)$. \end{enumerate} \end{proposition} \begin{remark}\rm The restrictions in item $1.$ were added to exclude cases like: $x = (0.010101\ldots)_2$, $y = (0.0010101\ldots)_2$ and $x \oplus y = (0.1)_2$, for which the result is of course not true. On the other hand, the result holds for $x \oplus y = (0.0111111\ldots)_2$. \end{remark} Throughout the paper a fixed bijection $\varphi$ and a fixed finite field $\mathbb{F}_q$ are used for the Walsh functions and for $\oplus_{\varphi}$ and $\ominus_{\varphi}$. Hence we will often write $\oplus$ and $\ominus$ instead of $\oplus_{\varphi}$ and $\ominus_{\varphi}$. In the following section we will deal with Walsh series and Walsh coefficients, which we briefly describe here: functions $f \in \mathcal{L}_2([0,1)^s)$ have an associated Walsh series $$f(\boldsymbol{x}) \sim \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \hat{f}(\boldsymbol{k}) {\rm wal}_{\boldsymbol{k}}(\boldsymbol{x}),$$ where the Walsh coefficients $\hat{f}(\boldsymbol{k})$ are given by $$\hat{f}(\boldsymbol{k}) = \int_{[0,1)^s} f(\boldsymbol{x}) \overline{{\rm wal}_{\boldsymbol{k}}(\boldsymbol{x})} \,\mathrm{d} \boldsymbol{x}.$$ For smooth functions the Walsh series converges to the function, which is shown in Section~\ref{subsect_convergence}. \section{Walsh spaces containing smooth functions}\label{sectwalshspace} In the following we investigate how the Walsh coefficients of smooth functions decay and subsequently we use this to define function classes based on Walsh functions which contain smooth functions. But first we introduce a suitable variation. \subsection{A generalized weighted Hardy and Krause variation} In the following we generalize the Hardy and Krause variation in a way which suits our purposes later on. \subsubsection{H\"older condition} A function $f:[0,1) \rightarrow \mathbb{R}$ satisfies a H\"older condition with coefficient $0 < \lambda \le 1$ if there is a constant $C_f >0$ such that $$|f(x) - f(y)| \le C_f |x-y|^\lambda \quad \mbox{for all } x,y \in [0,1).$$ The right hand side of the above inequality forms a metric on $[0,1)$. When one considers the higher dimensional domain $[0,1)^s$ then $|x-y|$ is changed to some other metric on $[0,1)^s$. Here we consider tensor product spaces and we generalize the H\"older condition to higher dimensions in a way which is suitable for tensor product spaces in our context.
Consider for example the function $f(\boldsymbol{x}) = \prod_{j=1}^s f_j(x_j)$, where $\boldsymbol{x} = (x_1,\ldots, x_s)$ and each $f_j:[0,1)\rightarrow \mathbb{R}$ satisfies a H\"older condition with coefficient $0<\lambda \le 1$. Then it follows that for all $\emptyset \neq u \subseteq \mathcal{S} := \{1,\ldots, s\}$ we have \begin{equation}\label{eq_prodholder} \prod_{j \in u} |f_j(x_j) - f_j(y_j)| \le \prod_{j\in u} C_{f_j} \prod_{j \in u} |x_j - y_j|^\lambda \end{equation} for all $x_j,y_j \in [0,1)$ with $j \in u$. But here $\prod_{j=1}^s |x_j - y_j|$ is not a metric on $[0,1)^s$. Note that we have \begin{equation}\label{eq_sumprod} \prod_{j \in u} |f_j(x_j) - f_j(y_j)| = \left|\sum_{v \subseteq u} (-1)^{|v|-|u|} \prod_{j\in v} f_j(x_j) \prod_{j\in u\setminus v} f_j(y_j)\right|, \end{equation} which can be described in words in the following way: for given $\emptyset \neq u \subseteq \mathcal{S}$ let $x_j, y_j \in [0,1)$ with $x_j \neq y_j$ for all $j \in u$; consider the box $J$ with vertices $\{(a_j)_{j\in u}: a_j = x_j \mbox{ or } a_j = y_j \mbox{ for } j \in u\}$. Then (\ref{eq_sumprod}) is the alternating sum of the function $\prod_{j\in u} f_j$ at the vertices of $J$ where adjacent vertices have opposite signs. This sum can also be defined for functions on $[0,1)^s$ which are not of product form. Indeed, let for a subinterval $J = \prod_{j=1}^s [x_j, y_j)$ with $0 \le x_j < y_j \le 1$ and a function $f:[0,1)^s \rightarrow \mathbb{R}$ the function $\Delta(f,J)$ denote the alternating sum of $f$ at the vertices of $J$ where adjacent vertices have opposite signs. (Hence for $f = \prod_{j=1}^s f_j$ we have $\Delta(f,J) = \prod_{j=1}^s (f_j(x_j) - f_j(y_j))$.) \subsubsection{Generalized Vitali variation} Let $\mathfrak{p} \ge 1$. Then we define the generalized variation in the sense of Vitali with coefficient $0 < \lambda \le 1$ by \begin{equation}\label{fracVitalivar} V^{(s)}_{\lambda, \mathfrak{p}}(f) = \sup_{{\mathcal{P}}} \left(\sum_{J \in \mathcal{P}} {\rm Vol}(J) \left|\frac{\Delta(f,J)}{{\rm Vol}(J)^{\lambda}}\right|^{\mathfrak{p}}\right)^{1/\mathfrak{p}},\end{equation} where the supremum is extended over all partitions $\mathcal{P}$ of $[0,1]^s$ into subintervals and ${\rm Vol}(J)$ denotes the volume of the subinterval $J$. Note that for $\lambda = 1$ and $\mathfrak{p} = 1$ one obtains the usual definition of the Vitali variation, see for example \cite{niesiam}. If we take $\mathfrak{p} = \infty$, then we obtain a condition of the form (\ref{eq_prodholder}) where $u = \mathcal{S}$ and where we can take the constant $\prod_{j=1}^s C_{f_j} = V^{(s)}_{\lambda,\infty}(f)$. For $s = 1$ and $\mathfrak{p} = \infty$ we obtain a H\"older condition with coefficient $0 < \lambda \le 1$. In this sense we can view (\ref{fracVitalivar}) as a fractional Vitali variation of order $\lambda$. For $\lambda = 1$ and if the partial derivatives of $f$ are continuous on $[0,1]^s$ we also have the formula \begin{equation}\label{eq_formelV} V_{1,\mathfrak{p}}^{(s)}(f) = \left(\int_{[0,1]^s} \left|\frac{\partial^s f}{\partial x_1\cdots \partial x_s} \right|^{\mathfrak{p}} \,\mathrm{d} \boldsymbol{x}\right)^{1/\mathfrak{p}}, \end{equation} for all $\mathfrak{p} \ge 1$. 
Indeed we have $$|\Delta(f,J)| = \left|\int_J\frac{\partial^s f}{\partial x_1\cdots \partial x_s}(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x} \right| = {\rm Vol}(J) \left|\frac{\partial^s f}{\partial x_1\cdots \partial x_s}(\boldsymbol{\zeta}_J)\right|$$ for some $\boldsymbol{\zeta}_J \in \overline{J}$, which follows by applying the mean value theorem to the inequality \begin{equation*} \min_{\boldsymbol{x} \in \overline{J}} \left|\frac{\partial^s f}{\partial x_1\cdots \partial x_s}(\boldsymbol{x}) \right| \le {\rm Vol}(J)^{-1} \left|\int_J\frac{\partial^s f}{\partial x_1\cdots \partial x_s}(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x} \right| \le \max_{\boldsymbol{x}\in \overline{J}} \left|\frac{\partial^s f}{\partial x_1\cdots \partial x_s}(\boldsymbol{x})\right|. \end{equation*} Therefore we have $$ \sum_{J \in \mathcal{P}} {\rm Vol}(J) \left|\frac{\Delta(f,J)}{{\rm Vol}(J)}\right|^{\mathfrak{p}} = \sum_{J\in\mathcal{P}} {\rm Vol}(J) \left|\frac{\partial^s f}{\partial x_1\cdots \partial x_s}(\boldsymbol{\zeta}_J)\right|^\mathfrak{p},$$ which is just a Riemann sum for the integral $\int_{[0,1]^s} \left|\frac{\partial^s f}{\partial x_1\cdots \partial x_s} \right|^{\mathfrak{p}} \,\mathrm{d} \boldsymbol{x}$ and thus the equality follows. Using H\"older's inequality and the fact that $\left(\sum_{J \in \mathcal{P}} ({\rm Vol}(J)^{1-1/\mathfrak{p}})^{\mathfrak{p}/(\mathfrak{p}-1)}\right)^{1-1/\mathfrak{p}} = \left(\sum_{J\in \mathcal{P}} {\rm Vol}(J)\right)^{1-1/\mathfrak{p}} = 1$ it follows that $$V_{\lambda,1}^{(s)}(f) \le V_{\lambda,\mathfrak{p}}^{(s)}(f) \quad \mbox{for all } \mathfrak{p} \ge 1.$$ \subsubsection{Generalized Hardy and Krause variation} Until now we did not take projections to lower dimensional faces into account (in (\ref{eq_prodholder}) we did take projections into account as we considered all $\emptyset \neq u \subseteq\mathcal{S}$). For $\emptyset \neq u \subseteq\mathcal{S}$, let $V_{\lambda, \mathfrak{p}}^{(|u|)}(f_u;u)$ be the generalized Vitali variation with coefficient $0 < \lambda \le 1$ of the $|u|$-dimensional function $f_u(\boldsymbol{x}_u) = \int_{[0,1)^{s-|u|}} f(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x}_{\mathcal{S}\setminus u}$. For $u = \emptyset$ we have $f_\emptyset = \int_{[0,1)^{s}} f(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x}_{\mathcal{S}}$ and we define $V_{\lambda, \mathfrak{p}}^{(|\emptyset|)}(f_\emptyset;\emptyset) = |f_\emptyset|$. Let $\mathfrak{q} \ge 1$, then \begin{equation}\label{eq_varhk} V_{\lambda, \mathfrak{p}, \mathfrak{q}}(f) = \left(\sum_{u \subseteq\mathcal{S}} \left(V^{(|u|)}_{\lambda, \mathfrak{p}}(f_u;u) \right)^{\mathfrak{q}}\right)^{1/\mathfrak{q}} \end{equation} is called the generalized Hardy and Krause variation of $f$ on $[0,1]^s$. For $\lambda = \mathfrak{p} = \mathfrak{q} = 1$ one obtains an unanchored version of the usual definition of the Hardy and Krause variation, see \cite{niesiam}. A function $f$ for which $V_{\lambda, \mathfrak{p}, \mathfrak{q}}(f) < \infty$ is said to be of finite variation with coefficient $\lambda$. (We remark that in some cases it might be appropriate to leave out the term corresponding to $u = \emptyset$ in (\ref{eq_varhk}), but here this term will be needed later on and hence we include it already in the definition of the variation.) \subsubsection{Generalized weighted Hardy and Krause variation} As first suggested in \cite{SW98} (see also \cite{DSWW1}) different coordinates might have different importance, hence we can also define a weighted variation. 
In the spirit of the weighted Sobolev spaces in \cite{DSWW1}, let $\boldsymbol{\gamma} = (\gamma_u)_{u \subset \mathbb{N}}$ be an indexed set of non-negative real numbers. Then we define the weighted variation $V_{\lambda,\mathfrak{p},\mathfrak{q},\boldsymbol{\gamma}}(f)$ of $f$ with coefficient $0 < \lambda \le 1$ by \begin{equation*} V_{\lambda,\mathfrak{p}, \mathfrak{q},\boldsymbol{\gamma}}(f) = \left(\sum_{u \subseteq\mathcal{S}} \gamma_u^{-1} \left(V^{(|u|)}_{\lambda, \mathfrak{p}}(f_u;u) \right)^{\mathfrak{q}}\right)^{1/\mathfrak{q}}. \end{equation*} Note that for $\lambda = 1$ and $\mathfrak{p} = \mathfrak{q} = 2$ the weighted variation $V_{\lambda,\mathfrak{p},\mathfrak{q},\boldsymbol{\gamma}}(f)$ coincides with the norm in a weighted unanchored Sobolev space for any function in this Sobolev space, i.e, we have the identity $V_{1,2,2,\boldsymbol{\gamma}}(f) =\|f\|_{{\rm sob}}$, where \begin{equation*} \|f\|_{{\rm sob}} = \left(\sum_{u\subseteq \mathcal{S}} \gamma_u^{-1} \int_{[0,1)^{|u|}} \left|\int_{[0,1)^{s-|u|}} \frac{\partial^{|u|} f(\boldsymbol{x})}{\partial \boldsymbol{x}_u} \,\mathrm{d}\boldsymbol{x}_{\mathcal{S}\setminus u} \right|^2 \,\mathrm{d} \boldsymbol{x}_u\right)^{1/2} \end{equation*} denotes the norm in the weighted Sobolev space (see \cite{DSWW1} for more information on this Sobolev space). \subsection{The decay of the Walsh coefficients of smooth functions} We are now ready to show how the Walsh coefficients of smooth functions decay. This behaviour is essentially captured in Definition~\ref{def_qadicnonincreasing} below. But before we get there we need several lemmas to prove the result. The following lemma is needed to show how the Walsh coefficients of functions with bounded variation decay. A simpler version of it was shown in \cite[Lemma~4]{pirsic}. \begin{lemma}\label{lem_pirs} Let $f \in \mathcal{L}_1([0,1)^s)$ and let $\boldsymbol{k} = (k_1,\ldots, k_s) \in \mathbb{N}^s$ with $k_j = \kappa_j q^{a_j-1} + k'_j$ where $a_j \in \mathbb{N}$, $\kappa_j \in \{1,\ldots, q-1\}$, $0 \le k'_j < q^{a_j-1}$ and let $0 \le c_j < q^{a_j-1}$ for $j = 1,\ldots, s$. Then \begin{eqnarray*} \left|\int_{\prod_{j=1}^s [c_j q^{-a_j+1}, (c_j+1)q^{-a_j+1})} f(\boldsymbol{x}) \; \overline{{\rm wal}_{\boldsymbol{k}}(\boldsymbol{x})} \,\mathrm{d} \boldsymbol{x} \right| & \le & q^{-\sum_{j=1}^s (a_j -1)} \sup_{J} |\Delta(f,J)|, \end{eqnarray*} where the supremum is taken over all boxes of the form $$J = \prod_{j=1}^s [d_j, e_j) \subseteq \prod_{j=1}^s [c_j q^{-a_j+1}, (c_j+1) q^{-a_j+1})$$ with $q^{a_j}|e_j-d_j| \in \{1,\ldots, q-1\}$. \end{lemma} \begin{proof} We have $\overline{{\rm wal}_{k_j}} = \overline{{\rm wal}_{\kappa_j q^{a_j-1}}} \; \overline{{\rm wal}_{k'_j}}$ and the function $\overline{{\rm wal}_{k'_j}}$ is constant on each subinterval $[c_j q^{-a_j+1}, (c_j+1) q^{-a_j+1})$. Hence we have \begin{eqnarray*} \lefteqn{\left|\int_{\prod_{j=1}^s [c_j q^{-a_j+1}, (c_j+1)q^{-a_j+1})} f(\boldsymbol{x}) \; \overline{{\rm wal}_{\boldsymbol{k}}(\boldsymbol{x})} \,\mathrm{d} \boldsymbol{x} \right| } \\ & = & \left|\int_{\prod_{j=1}^s [c_j q^{-a_j+1}, (c_j+1)q^{-a_j+1})} f(\boldsymbol{x}) \; \prod_{j=1}^s \overline{{\rm wal}_{\kappa_j q^{a_j-1}}(x_j)} \,\mathrm{d} \boldsymbol{x} \right|. \end{eqnarray*} Note that the function $\overline{{\rm wal}_{\kappa_j q^{a_j-1}}}$ is constant on each of the subintervals $[r_j q^{-a_j}, (r_j +1) q^{-a_j})$ for $r_j = 0,\ldots, q^{a_j}-1$ for $j = 1,\ldots, s$. 
Without loss of generality we may assume that $c_j = 0$, for all other $c_j$ the result follows by the same arguments. Thus we have \begin{eqnarray*} \lefteqn{\int_{\prod_{j=1}^s [0, q^{-a_j+1})} f(\boldsymbol{x}) \; \prod_{j=1}^s \overline{{\rm wal}_{\kappa_j q^{a_j-1}}(x_j)} \,\mathrm{d} \boldsymbol{x} } \\ & = & \sum_{r_1,\ldots, r_s = 0}^{q-1} \prod_{j=1}^s \overline{{\rm wal}_{\kappa_j}(r_j/q)} \int_{\prod_{j=1}^s [r_j q^{-a_j}, (r_j +1) q^{-a_j})} f(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x}. \end{eqnarray*} Let now $a_{(r_1,\ldots, r_s)} = \int_{\prod_{j=1}^s [r_j q^{-a_j}, (r_j+1) q^{-a_j})} f(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x}$ and for a given $0 \le r_1,\ldots, r_s < q$ let \begin{equation}\label{def_A} A(r_1,\ldots, r_s) = q^{-s} \sum_{t_1,\ldots, t_s = 0}^{q-1} \sum_{\emptyset \neq u \subseteq\{1,\ldots, s\}} (-1)^{|u|} a_{(\boldsymbol{t}_u,\boldsymbol{r}_{\{1,\ldots, s\}\setminus u})},\end{equation} where $(\boldsymbol{t}_u,\boldsymbol{r}_{\{1,\ldots, s\}\setminus u})$ denotes the vector obtained by setting the $j$-th coordinate to $t_j$ if $j \in u$ and $r_j$ if $j \notin u$. Further let $$B(r_1,\ldots, r_s) = q^{-s} \sum_{t_1,\ldots, t_s = 0}^{q-1} \sum_{u \subseteq\{1,\ldots, s\}} (-1)^{|u|} a_{(\boldsymbol{t}_u,\boldsymbol{r}_{\{1,\ldots, s\}\setminus u})}.$$ Then we have \begin{eqnarray*} \lefteqn{\sum_{r_1,\ldots, r_s =0}^{q-1} \prod_{j=1}^s \overline{{\rm wal}_{\kappa_j}(r_j/q)} a_{(r_1,\ldots, r_s)} } \hspace{2cm} \\ & = & -\sum_{r_1,\ldots, r_s =0}^{q-1} \prod_{j=1}^s \overline{{\rm wal}_{\kappa_j}(r_j/q)} A(r_1,\ldots, r_s) \\ && + \sum_{r_1,\ldots, r_s =0}^{q-1} \prod_{j=1}^s \overline{{\rm wal}_{\kappa_j}(r_j/q)} (a_{(r_1,\ldots, r_s)} + A(r_1,\ldots, r_s)). \end{eqnarray*} Since $\sum_{r=0}^{q-1} \overline{{\rm wal}_{\kappa}(r/q)} = 0$ and $A(r_1,\ldots, r_s)$ is a sum where each summand does not depend on at least one $r_j$, i.e. the case $u = \emptyset$ is excluded in (\ref{def_A}), it follows that the first sum on the right hand side above is zero. Further we have $a_{(r_1,\ldots, r_s)} + A(r_1,\ldots, r_s) = B(r_1,\ldots, r_s)$ and thus \begin{equation*} \left|\sum_{r_1,\ldots, r_s =0}^{q-1} \prod_{j=1}^s \overline{{\rm wal}_{\kappa_j}(r_j/q)} a_{(r_1,\ldots, r_s)} \right| \le \sum_{r_1,\ldots, r_s =0}^{q-1} \left|B(r_1,\ldots, r_s) \right|. \end{equation*} We have $$|B(r_1,\ldots, r_s)| \le \max_{\boldsymbol{t} \in \{0,\ldots, q-1\}^s} \left|\sum_{u \subseteq\{1,\ldots, s\}} (-1)^{|u|} a_{(\boldsymbol{t}_u,\boldsymbol{r}_{\{1,\ldots, s\}\setminus u})} \right|.$$ Therefore we have \begin{eqnarray*} \lefteqn{ \left|\int_{\prod_{j=1}^s [0,q^{-a_j+1})} f(\boldsymbol{x}) \; \prod_{j=1}^s \overline{{\rm wal}_{\kappa_j q^{a_j-1}}(x_j)} \,\mathrm{d} \boldsymbol{x} \right| } \\ & \le & \sum_{r_1,\ldots, r_s = 0}^{q-1} \max_{\boldsymbol{t} \in \{0,\ldots, q-1\}^s} \left|\sum_{u \subseteq\{1,\ldots, s\}} (-1)^{|u|} a_{(\boldsymbol{t}_u,\boldsymbol{r}_{\{1,\ldots, s\}\setminus u})} \right| \\ & \le & q^s \max_{\boldsymbol{r}, \boldsymbol{t} \in \{0,\ldots, q-1\}^s} \left|\sum_{u \subseteq\{1,\ldots, s\}} (-1)^{|u|} a_{(\boldsymbol{t}_u,\boldsymbol{r}_{\{1,\ldots, s\}\setminus u})} \right|. 
\end{eqnarray*} Note that if in the above maximum there is a $j$ such that $r_j = t_j$ then it follows that $$\left|\sum_{u \subseteq\{1,\ldots, s\}} (-1)^{|u|} a_{(\boldsymbol{t}_u,\boldsymbol{r}_{\{1,\ldots, s\}\setminus u})} \right| = 0.$$ Hence we may in the following assume without loss of generality that the maximum in the last line of the inequality above is taken on for $\boldsymbol{r} = (r_1,\ldots, r_s)$ and $\boldsymbol{t} = (t_1,\ldots, t_s)$ which satisfy $r_j \neq t_j$ for $j = 1,\ldots, s$. We have \begin{eqnarray*} \lefteqn{ \left|\sum_{u \subseteq\{1,\ldots, s\}} (-1)^{|u|} a_{(\boldsymbol{t}_u,\boldsymbol{r}_{\{1,\ldots, s\}\setminus u})} \right| } \\ & = & \left|\int_{\prod_{j=1}^s [r_j q^{-a_j}, (r_j+1) q^{-a_j})} \sum_{u\subseteq\{1,\ldots, s\}} (-1)^{|u|} f(\boldsymbol{x} + \boldsymbol{y}_{u}) \,\mathrm{d} \boldsymbol{x} \right|, \end{eqnarray*} where $\boldsymbol{y}_u = (y_1,\ldots, y_s)$ with $y_j = 0$ for $j \notin u$ and $y_j = (t_j-r_j) q^{-a_j}$ for $j \in u$. We can write $$\sum_{u\subseteq\{1,\ldots, s\}} (-1)^{|u|} f(\boldsymbol{x} + \boldsymbol{y}_{u}) = \Delta(f,J_{\boldsymbol{x}}),$$ where $J_{\boldsymbol{x}} = \prod_{j=1}^s [\min(x_j, x_j + (t_j-r_j) q^{-a_j}), \max(x_j, x_j + (t_j-r_j) q^{-a_j}))$. Therefore it follows that \begin{eqnarray*} \left|\sum_{u \subseteq\{1,\ldots, s\}} (-1)^{|u|} a_{(\boldsymbol{t}_u,\boldsymbol{r}_{\{1,\ldots, s\}\setminus u})} \right| & \le & q^{-\sum_{j=1}^s a_j} \sup_{\boldsymbol{x} \in \prod_{j=1}^s [r_j q^{-a_j},(r_j+1)q^{-a_j})} |\Delta(f,J_{\boldsymbol{x}})|. \end{eqnarray*} The result follows. \end{proof} In the following lemma we now obtain a bound on the Walsh coefficients for functions of bounded variation. It is a generalization of \cite[Proposition~6]{pirsic}. \begin{lemma}\label{lem_proppirs} Let $0 < \lambda \le 1$ and let $f \in \mathcal{L}_2([0,1)^s)$ satisfy $V_{\lambda,1,1,\boldsymbol{\gamma}}(f) < \infty$. Then for any $\boldsymbol{k} \in \mathbb{N}_0^s\setminus\{\boldsymbol{z}ero\}$ the $\boldsymbol{k}$-th Walsh coefficient of $f$ satisfies $$|\hat{f}(\boldsymbol{k})| \le q^{|u|-\lambda\sum_{j\in u} (a_j-1)} V^{(|u|)}_{\lambda,1}(f_u;u),$$ where $\boldsymbol{k} = (k_1,\ldots, k_s)$, $u = \{1\le j \le s: k_j \neq 0\}$ and for $j \in u$ we have $k_j = \kappa_j q^{a_j-1} + k'_j$, where $\kappa_j \in \{1,\ldots, q-1\}$, $a_j \in \mathbb{N}$ and $0 \le k'_j < q^{a_j-1}$. \end{lemma} \begin{proof} Let $f \in \mathcal{L}_2([0,1)^s)$ with $\boldsymbol{k}$-th Walsh coefficient $\hat{f}(\boldsymbol{k})$. First note that it suffices to show the result for $\boldsymbol{k} \in \mathbb{N}^s$, as otherwise we only need to replace the function $f$ with the function $f_u(\boldsymbol{x}_u) = \int_{[0,1)^{s-|u|}} f(\boldsymbol{x}) \,\mathrm{d}\boldsymbol{x}_{\mathcal{S}\setminus u}$. Hence let now $\boldsymbol{k}\in\mathbb{N}^s$ be given and let $\boldsymbol{k}' = (k'_1,\ldots, k'_s)$. Then we have \begin{eqnarray*} \lefteqn{ \left|\int_{[0,1)^s} f(\boldsymbol{x}) \;\overline{{\rm wal}_{\boldsymbol{k}}(\boldsymbol{x})} \,\mathrm{d} \boldsymbol{x}\right| } \\ & \le & \sum_{0 \le c_j < q^{a_j-1} \atop 1 \le j \le s} \left|\int_{c_1 q^{-a_1+1}}^{(c_1+1) q^{-a_1+1}} \cdots \int_{c_s q^{-a_s+1}}^{(c_s+1) q^{-a_s+1}} f(\boldsymbol{x}) \; \overline{{\rm wal}_{\boldsymbol{k}}(\boldsymbol{x})}\,\mathrm{d}\boldsymbol{x}\right|. 
\end{eqnarray*} Now we use Lemma~\ref{lem_pirs} and thereby obtain that the above sum is bounded by $$\sum_{0 \le c_1 < q^{a_1-1}} \cdots \sum_{0 \le c_s < q^{a_s-1}} q^{-\sum_{j=1}^s (a_j-1)} \sup_{J} |\Delta(f,J)|,$$ where the supremum is taken over all boxes $J = \prod_{j=1}^s [d_j, e_j) \subseteq \prod_{j=1}^s [c_j q^{-a_j+1}, (c_j+1) q^{-a_j+1})$ with $q^{a_j}|e_j-d_j| \in \{1,\ldots, q-1\}$. Now we have $$q^{-\sum_{j=1}^s (a_j-1)} \sup_{J} |\Delta(f,J)| \le \sup_{\mathcal{P}_{\boldsymbol{c}}} \sum_{I\in \mathcal{P}_{\boldsymbol{c}}} {\rm Vol}(I)^{1-\lambda}|\Delta(f,I)| \frac{q^{-\sum_{j=1}^s (a_j-1)}}{{\rm Vol}(I)^{1-\lambda}},$$ where the supremum on the right hand side is taken over all partitions $\mathcal{P}_{\boldsymbol{c}}$ of the cube $\prod_{j=1}^s [c_j q^{-a_j+1}, (c_j+1) q^{-a_j+1})$ and where each $I \in \mathcal{P}_{\boldsymbol{c}}$ is of the form $I = \prod_{j=1}^s [x_j,y_j)$ with $q^{a_j}|y_j-x_j| \in \{1,\ldots, q-1\}$. We have $q^{-\sum_{j=1}^s a_j} \le {\rm Vol}(I) \le q^{-\sum_{j=1}^s (a_j-1)}$ and therefore $$\frac{q^{-\sum_{j=1}^s (a_j-1)}}{{\rm Vol}(I)^{1-\lambda}} \le {\rm Vol}(I)^\lambda q^s \le q^{s-\lambda\sum_{j=1}^s (a_j-1)}$$ and hence \begin{equation*} q^{-\sum_{j=1}^s (a_j-1)} \sup_{J} |\Delta(f,J)| \le q^{s-\lambda\sum_{j=1}^s (a_j-1)} \sup_{\mathcal{P}_{\boldsymbol{c}}} \sum_{I\in \mathcal{P}_{\boldsymbol{c}}} {\rm Vol}(I)^{1-\lambda}|\Delta(f,I)|. \end{equation*} Note that \begin{equation*} \sum_{0 \le c_j < q^{a_j-1} \atop 1 \le j \le s} \sup_{\mathcal{P}_{\boldsymbol{c}}} \sum_{I\in \mathcal{P}_{\boldsymbol{c}}} {\rm Vol}(I)^{1-\lambda}|\Delta(f,I)| \le \sup_{\mathcal{P}} \sum_{J\in \mathcal{P}} {\rm Vol}(J) \frac{|\Delta(f,J)|}{{\rm Vol}(J)^\lambda} \end{equation*} where the supremum on the left hand side is taken over all partitions $\mathcal{P}$ of the cube $\prod_{j=1}^s [c_j q^{-a_j+1}, (c_j+1) q^{-a_j+1})$ into subintervals and the supremum on the right hand side is taken over all partitions of $[0,1)^s$ into subintervals. Thus the result follows. \end{proof} For the next lemma we will need the following two functions. For $\kappa\in\{1,\ldots, q-1\}$ let now $$\upsilon_{\kappa} = \sum_{r=0}^{q-1} r {\rm wal}_{\kappa}(r/q).$$ If $q$ is chosen to be a prime number and the bijections $\varphi$ and $\eta$ are chosen to be the identity, then $\upsilon_{\kappa} = q (\mathrm{e}^{2\pi\mathtt{i} \kappa/q}-1)^{-1}$, see \cite[Appendix~A]{DP05}. Further for $l \in \{1,\ldots, q-1\}$ we define the function $\zeta_a(x) = \sum_{r = 0}^{x_a-1} \overline{{\rm wal}_{l}(r/q)}$, where $a \ge 1$ and $x = x_1 q^{-1} + x_2 q^{-2} + \cdots$ and where for $x_a = 0$ we set $\zeta_a(x) = 0$. The function $\zeta_a$ depends on $x$ only through $x_a$, thus it is a step-function which is constant on the intervals $[c q^{-a}, (c+1)q^{-a})$ for $c = 0,\ldots, q^a-1$. By \cite[Proposition~5]{pirsic} it follows that $\zeta_a$ can be represented by a finite Walsh series. Indeed, there are numbers $c_0,\ldots, c_{q-1}$ (which depend on $l$ but not on $a$) such that $$\zeta_a(x) = \sum_{z=0}^{q-1} c_z \overline{{\rm wal}_{zq^{a-1}}(x)}.$$ If $q$ is chosen to be a prime number and the bijections $\varphi$ and $\eta$ are chosen to be the identity, then $\zeta_a(x) = (1-\overline{{\rm wal}_{lq^{a-1}}(x)}) (1-\overline{{\rm wal}_l(1/q)})^{-1}$, i.e., $c_0 = (1-\overline{{\rm wal}_l(1/q)})^{-1}$, $c_l = (\overline{{\rm wal}_l(1/q)}-1)^{-1}$ and $c_z = 0$ for $z \neq 0,l$. The following lemma will be used in the induction step for differentiable functions. 
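The closed form $\upsilon_{\kappa} = q (\mathrm{e}^{2\pi\mathtt{i} \kappa/q}-1)^{-1}$ stated above for prime $q$ and identity bijections can be checked directly against the defining sum; the following minimal sketch is for illustration only (the choice $q=5$ is an arbitrary assumption, and any prime base could be used instead).
\begin{verbatim}
import numpy as np

q = 5                     # any prime q, with the identity bijections
for kappa in range(1, q):
    # in this special case wal_kappa(r/q) = exp(2*pi*i*kappa*r/q)
    upsilon = sum(r * np.exp(2j * np.pi * kappa * r / q) for r in range(q))
    closed_form = q / (np.exp(2j * np.pi * kappa / q) - 1)
    print(kappa, np.round(upsilon, 10), np.round(closed_form, 10))
\end{verbatim}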
For example, for a differentiable function $F:\mathbb{R} \rightarrow \mathbb{R}$ given by $F(x) = \int_0^x f(y) \,\mathrm{d} y$ we can calculate the Walsh coefficients using integration by parts in the following way: for $k > 0$ we have \begin{eqnarray}\label{eq_indstep} \hat{F}(k) & = & \int_0^1 F(x) \overline{{\rm wal}_k(x)} \,\mathrm{d} x = \left[ \int_0^x \overline{{\rm wal}_k(y)} \,\mathrm{d} y F(x) \right]_0^1 - \int_0^1 f(x) \int_0^x \overline{{\rm wal}_k(y)} \,\mathrm{d} y \,\mathrm{d} x \nonumber \\ & = & - \int_0^1 f(x) \int_0^x \overline{{\rm wal}_k(y)} \,\mathrm{d} y \,\mathrm{d} x, \end{eqnarray} where we used $\int_0^0 \overline{{\rm wal}_k(x)} \,\mathrm{d} x = \int_0^1 \overline{{\rm wal}_k(x)} \,\mathrm{d} x = 0$. For $k = 0$ on the other hand we obtain \begin{eqnarray}\label{eq_indstep0} \hat{F}(0) & = & \int_0^1 F(x) \overline{{\rm wal}_0(x)} \,\mathrm{d} x = \left[ \int_0^x \overline{{\rm wal}_0(y)} \,\mathrm{d} y F(x) \right]_0^1 - \int_0^1 f(x) \int_0^x \overline{{\rm wal}_0(y)} \,\mathrm{d} y \,\mathrm{d} x \nonumber \\ & = & \int_0^1 f(x) \,\mathrm{d} x - \int_0^1 f(x) \int_0^x \overline{{\rm wal}_0(y)} \,\mathrm{d} y \,\mathrm{d} x. \end{eqnarray} Thus if we know the Walsh series for $f$, then we can easily calculate the Walsh series for $F$, provided that we know the Walsh series for $\int_0^x \overline{{\rm wal}_k(y)} \,\mathrm{d} y$.
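To make the relations (\ref{eq_indstep}) and (\ref{eq_indstep0}) concrete, the following small Python sketch checks (\ref{eq_indstep}) numerically in the special case mentioned repeatedly in this paper, namely $q$ a prime number with the bijections $\varphi$ and $\eta$ equal to the identity, in which case ${\rm wal}_k(x) = \mathrm{e}^{2\pi\mathtt{i}(\kappa_1\xi_1+\kappa_2\xi_2+\cdots)/q}$ for $k = \kappa_1 + \kappa_2 q + \cdots$ and $x = \xi_1 q^{-1} + \xi_2 q^{-2} + \cdots$. The choices $f(x) = x$, $q = 3$, $k = 5$ and the grid resolution are arbitrary and serve only as an illustration; this is a sketch and not part of the theoretical development.
\begin{verbatim}
import numpy as np

q = 3   # a prime base, chosen only for this illustration

def wal(k, x):
    # wal_k(x) = exp(2*pi*i/q * (kappa_1*xi_1 + kappa_2*xi_2 + ...)), where
    # k = kappa_1 + kappa_2*q + ... and x = xi_1/q + xi_2/q^2 + ...
    # (q prime, bijections phi and eta equal to the identity)
    s, y = 0, x
    while k > 0:
        k, kappa = divmod(k, q)   # next base-q digit of k
        y *= q
        xi = int(y)               # next base-q digit of x
        y -= xi
        s += kappa * xi
    return np.exp(2j * np.pi * s / q)

# check \hat{F}(k) = -int_0^1 f(x) int_0^x conj(wal_k(y)) dy dx
# for f(x) = x, F(x) = x^2/2 and one fixed k > 0
k, N = 5, q**8
x = (np.arange(N) + 0.5) / N                          # midpoints of a fine grid
w = np.array([wal(k, t) for t in x])
lhs = np.mean((x**2 / 2) * np.conj(w))                # approximates \hat{F}(k)
Jk = (np.cumsum(np.conj(w)) - 0.5 * np.conj(w)) / N   # J_k at the midpoints
rhs = -np.mean(x * Jk)
print(abs(lhs - rhs))   # agrees up to the discretization error of the grid
\end{verbatim}
The same loop over the base-$q$ digits is all that is needed in higher dimensions, since the multivariate Walsh functions used here factorize over the coordinates.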
This will be calculated in the following lemma. It appeared in a simpler form in \cite{Fine}. \begin{lemma}\label{lem_intJ} For $k \in \mathbb{N}_0$ and $x \in [0,1)$ define $J_{k}(x) = \int_{0}^x \overline{{\rm wal}_{k}(y)} \,\mathrm{d} y$. For $k \ge 1$ let $k = l q^{a-1} + k'$ where $l \in \{1,\ldots, q-1\}$, $a \ge 1$ and $0 \le k' < q^{a-1}$. Then $J_{k}$ can be represented by a Walsh series which is given by \begin{equation*} J_{k}(x) = q^{-a}\Bigg( \sum_{z=0}^{q-1} c_z \overline{{\rm wal}_{z q^{a-1} + k'}(x)} + 2^{-1} \overline{{\rm wal}_{k}(x)} + \sum_{c=1}^\infty \sum_{\kappa=1}^{q-1} q^{-c-1} \upsilon_{\kappa} \overline{{\rm wal}_{\kappa q^{a+c-1} + k}(x)}\Bigg). \end{equation*} Further we have $$J_0(x) = 1/2 + \sum_{c=1}^\infty \sum_{\kappa=1}^{q-1} q^{-c-1} \upsilon_{\kappa} \; \overline{{\rm wal}_{\kappa q^{c-1}}(x)}.$$ \end{lemma} \begin{proof} Let $k = l q^{a-1} + k'$ with $a \ge 1$, $0 \le k' < q^{a-1}$ and $l \in \{1,\ldots, q-1\}$. The function $\overline{{\rm wal}_{l q^{a-1}}(y)}$ is constant on each interval $[r q^{-a}, (r+1) q^{-a})$ and $\overline{{\rm wal}_{k'}(y)}$ is constant on each interval $[c q^{-a+1}, (c+1) q^{-a+1})$. We have $\overline{{\rm wal}_k(y)} = \overline{{\rm wal}_{l q^{a-1}}(y)} \; \overline{{\rm wal}_{k'}(y)}$. For any $0 \le c < q^{a-1}$ we have \begin{eqnarray*} \int_{[c q^{-a+1}, (c+1) q^{-a+1})} \!\!\!\!\overline{{\rm wal}_k(y)}\,\mathrm{d} y & = & \overline{{\rm wal}_{k'}(cq^{-a+1})} \int_{[c q^{-a+1}, (c+1) q^{-a+1})} \!\!\!\!\overline{{\rm wal}_{l q^{a-1}}(y)}\,\mathrm{d} y \\ & = & \overline{{\rm wal}_{k'}(c q^{-a+1})} q^{-a} \sum_{r = 0}^{q-1} {\rm wal}_l(r/q) \\ & = & 0. \end{eqnarray*} Thus we have $$J_k(x) = \overline{{\rm wal}_{k'}(x)} J_{l q^{a-1}}(x).$$ Let $x = x_1 q^{-1} + x_2 q^{-2} + \cdots$ and $y = x_{a+1} q^{-1} + x_{a+2} q^{-2} + \cdots$, then we have $$J_{l q^{a-1}}(x) = q^{-a} \sum_{r = 0}^{x_a-1} \overline{{\rm wal}_{l}(r/q)} + q^{-a} \overline{{\rm wal}_{l}(x_a/q)} y.$$ We now investigate the Walsh series representation of the function $J_{l q^{a-1}}(x)$. First note that $\overline{{\rm wal}_l(x_a/q)} = \overline{{\rm wal}_{lq^{a-1}}(x)}$. Further, by a slight adaptation of \cite[eq. (30)]{DP05} we obtain \begin{equation}\label{eq_xwalsh} y = 1/2 + \sum_{c=1}^\infty \sum_{\kappa=1}^{q-1} q^{-c-1} \upsilon_{\kappa} \; \overline{{\rm wal}_{\kappa q^{c-1}}(y)}. \end{equation} As $\overline{{\rm wal}_{\kappa q^{c-1}}(y)} = \overline{{\rm wal}_{\kappa q^{a+c-1}}(x)}$ we obtain $$y = 1/2 + \sum_{c=1}^\infty \sum_{\kappa=1}^{q-1} q^{-c-1} \upsilon_{\kappa} \; \overline{{\rm wal}_{\kappa q^{a+c-1}}(x)}.$$ As noted above, the Walsh series of $\zeta_a(x) = \sum_{r = 0}^{x_a-1} \overline{{\rm wal}_{l}(r/q)}$, where for $x_a = 0$ we set $\zeta_a(x) = 0$, can be written as $$\zeta_a(x) = \sum_{z=0}^{q-1} c_z \overline{{\rm wal}_{zq^{a-1}}(x)}.$$ Altogether we obtain \begin{eqnarray*} q^a J_{lq^{a-1}}(x) & = & \sum_{z=0}^{q-1} c_z \overline{{\rm wal}_{z q^{a-1}}(x)} + 2^{-1} \overline{{\rm wal}_{l q^{a-1}}(x)} \\ && \qquad\quad + \sum_{c=1}^\infty \sum_{\kappa=1}^{q-1} q^{-c-1} \upsilon_{\kappa} \overline{{\rm wal}_{\kappa q^{a+c-1} + l q^{a-1}}(x)} \end{eqnarray*} and therefore \begin{equation*} q^a J_{k}(x) = \sum_{z=0}^{q-1} c_z \overline{{\rm wal}_{z q^{a-1} + k'}(x)} + 2^{-1} \overline{{\rm wal}_{k}(x)} + \sum_{c=1}^\infty \sum_{\kappa=1}^{q-1} q^{-c-1} \upsilon_{\kappa} \overline{{\rm wal}_{\kappa q^{a+c-1} + k}(x)}. \end{equation*} The result for $k = 0$ follows easily from (\ref{eq_xwalsh}). \end{proof} Note that Lemma~\ref{lem_intJ} can easily be generalized to arbitrary dimensions $s$, since for $\boldsymbol{k} = (k_1,\ldots, k_s) \in \mathbb{N}_0^s$ we have for any $\boldsymbol{x} = (x_1,\ldots, x_s) \in [0,1)^s$ that $$J_{\boldsymbol{k}}(\boldsymbol{x}) = \int_{[0,\boldsymbol{x})} \overline{{\rm wal}_{\boldsymbol{k}}(\boldsymbol{y})} \,\mathrm{d} \boldsymbol{y} = \prod_{j=1}^s J_{k_j}(x_j),$$ where $[0,\boldsymbol{x}) = \prod_{j=1}^s [0,x_j)$. The next lemma shows how the Walsh coefficients of a function $F = \int f$ can be obtained from the Walsh coefficients of $f$. \begin{lemma}\label{lem_sumfF} Let $f\in \mathcal{L}_2([0,1)^s)$ and let $F(\boldsymbol{x}) = \int_{[0,\boldsymbol{x})} f(\boldsymbol{y})\,\mathrm{d}\boldsymbol{y}$, where $[0,\boldsymbol{x}) = \prod_{j=1}^s [0,x_j)$ with $\boldsymbol{x} = (x_1,\ldots, x_s)$. Further let $\hat{F}(\boldsymbol{k})$ denote the $\boldsymbol{k}$-th Walsh coefficient of $F$. Let $\boldsymbol{k} = (k_1,\ldots, k_s) \in \mathbb{N}_0^s$ and let $U = \{1\le j \le s: k_j \neq 0\}$. For $j\in U$ let $k_j = l_j q^{a_j-1} + k_j'$, $0 < l_j < q$ and $0 \le k_j' < q^{a_j-1}$ and further let $\boldsymbol{k}' = (k'_1,\ldots, k'_s)$ where $k'_j =0$ for $j \notin U$.
Then we have \begin{eqnarray*} \hat{F}(\boldsymbol{k}) & = & q^{-\sum_{j\in U} a_j} \sum_{U \subseteq v \subseteq \mathcal{S}} (-1)^{|v|} \sum_{\boldsymbol{h}_v \in \mathbb{N}_0^{|v|}} \hat{f}(\boldsymbol{k}' + (\boldsymbol{h}_v,\boldsymbol{0})) \; \chi_{U,v,\boldsymbol{k}}(\boldsymbol{h}_v), \end{eqnarray*} where $(\boldsymbol{h}_v,\boldsymbol{0})$ denotes the $s$-dimensional vector whose $j$-th component is $h_j$ for $j \in v$ and $0$ otherwise and where for $\boldsymbol{h}_v = (h_j)_{j\in v} \in \mathbb{N}_0^{|v|}$ we set $$\chi_{U,v, \boldsymbol{k}}(\boldsymbol{h}_v) = \prod_{j \in U} \rho_{k_j}(h_j) \prod_{j \in v\setminus U} \phi(h_j).$$ Here $$\rho_{k_j}(h_j) = \left\{\begin{array}{ll} c_z + 2^{-1} 1_{z = l_j} & \mbox{for } h_j = z q^{a_j-1}, \\ \upsilon_z q^{-i-1} & \mbox{for } h_j = z q^{a_j-1+i} + l_j q^{a_j-1}, i > 0, 0 < z < q, \\ 0 & \mbox{otherwise,} \end{array} \right.$$ where $1_{z = l_j} = 1$ for $z = l_j$ and $0$ otherwise, and $$\phi(h_j) = \left\{\begin{array}{ll} 2^{-1} & \mbox{for } h_j = 0, \\ \upsilon_z q^{-i-1} & \mbox{for } h_j = z q^{a_j-1+i}, i > 0, 0 < z < q, \\ 0 & \mbox{otherwise.} \end{array} \right.$$ \end{lemma} \begin{proof} Using integration by parts in each coordinate, Fubini's theorem and $J_k(0) = J_k(1) = 0$ for any $k \in\mathbb{N}$ (see Equations~(\ref{eq_indstep}) and (\ref{eq_indstep0}) for one-dimensional examples) it follows that $$\hat{F}(\boldsymbol{k}) = \int_{[0,1)^s} F(\boldsymbol{x}) \overline{{\rm wal}_{\boldsymbol{k}}(\boldsymbol{x})} \,\mathrm{d} \boldsymbol{x} = \sum_{U \subseteq v \subseteq \mathcal{S}} (-1)^{|v|} \int_{[0,1)^s} J_{\boldsymbol{k}_v}(\boldsymbol{x}_v) f(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x},$$ where for $\boldsymbol{k} = (k_1,\ldots, k_s)$ and $\boldsymbol{x} = (x_1,\ldots, x_s)$ we have $J_{\boldsymbol{k}_v}(\boldsymbol{x}_v) = \prod_{j \in v} J_{k_j}(x_j)$. Using the Walsh series expansion of $J_{\boldsymbol{k}}$ given by Lemma~\ref{lem_intJ} we can now express the Walsh coefficient $\hat{F}(\boldsymbol{k})$ as a sum of the Walsh coefficients $\hat{f}(\boldsymbol{h})$, from which the result follows. \end{proof} The following definition now captures the essence of the decay of the Walsh coefficients of smooth functions and will be used in the statement of the subsequent lemmas, theorems and corollaries. \begin{definition}\label{def_qadicnonincreasing} Let $k = k(v;a_1,\ldots, a_v) = \kappa_1 q^{a_1-1} + \cdots + \kappa_v q^{a_v-1}$ with $v \ge 1$, $\kappa_1,\ldots, \kappa_v \in \{1,\ldots, q-1\}$ and $1 \le a_v < \cdots < a_1$ be a natural number. For $k = 0$ we set $v = 0$, i.e., $k(0) = 0$. A function $\mathcal{B}:\mathbb{N}_0 \rightarrow \mathbb{R}$ is called $q$-adically non-increasing if $\mathcal{B}(k) = \mathcal{B}(k(v;a_1,\ldots,a_v))$ is non-increasing in $v$ and each $a_i$ for $i = 1,\ldots, v$, that is, for any $v \ge 0$ we have $$\mathcal{B}(k(v;a_1,\ldots, a_v)) \ge \mathcal{B}(k(v+1;a'_1,\ldots, a'_{v+1}))$$ with $1\le a'_{v+1} < \cdots < a'_{1}$ and $a_1,\ldots, a_v \in \{a'_1,\ldots, a'_{v+1}\}$ and for an arbitrary $1 \le i \le v$ we have $$\mathcal{B}(k(v;a_1,\ldots, a_v)) \ge \mathcal{B}(k(v;a_1,\ldots, a_{i-1}, a_i+1, a_{i+1}, \ldots, a_v))$$ provided that $a_i+1 < a_{i-1}$ in case $1 < i \le v$. \end{definition} In the following lemma we give a bound on the Walsh coefficients of $F$ if $f$ satisfies some smoothness condition. \begin{lemma}\label{lem_iteration} Let $\mathcal{B}:\mathbb{N}_0^s \rightarrow [0,\infty)$ be a $q$-adically non-increasing function in each variable.
Let $f\in \mathcal{L}_2([0,1)^s)$ and let the Walsh coefficients of $f$ satisfy $$|\hat{f}(\boldsymbol{k})| \le \mathcal{B}(\boldsymbol{k}) \quad \mbox{for all } \boldsymbol{k} \in \mathbb{N}_0^s.$$ Let $F(\boldsymbol{x}) = \int_{[0,\boldsymbol{x})} f(\boldsymbol{y})\,\mathrm{d}\boldsymbol{y}$, where $[0,\boldsymbol{x}) = \prod_{j=1}^s [0,x_j)$ with $\boldsymbol{x} = (x_1,\ldots, x_s)$. Further let $\hat{F}(\boldsymbol{k})$ denote the $\boldsymbol{k}$-th Walsh coefficient of $F$. Let $\boldsymbol{k} = (k_1,\ldots, k_s) \in \mathbb{N}_0^s \setminus\{\boldsymbol{0}\}$ and let $U = \{1\le j \le s: k_j \neq 0\}$. For $j\in U$ let $k_j = l_j q^{a_j-1} + k_j'$, $0 < l_j < q$ and $0 \le k_j' < q^{a_j-1}$ and further let $\boldsymbol{k}' = (k'_1,\ldots, k'_s)$ where $k'_j =0$ for $j \notin U$. Then there is a constant $C_{s,U} > 0$ independent of $\boldsymbol{k}$ such that $$|\hat{F}(\boldsymbol{k})| \le C_{s,U} \; q^{-\sum_{j\in U} a_j} \mathcal{B}(\boldsymbol{k}').$$ \end{lemma} \begin{proof} Using Lemma~\ref{lem_sumfF} we obtain that $$|\hat{F}(\boldsymbol{k})| \le q^{-\sum_{j\in U} a_j} \mathcal{B}(\boldsymbol{k}') \sum_{U\subseteq v \subseteq \mathcal{S}} \sum_{\boldsymbol{h}_v \in \mathbb{N}_0^{|v|}} |\chi_{U,v,\boldsymbol{k}}(\boldsymbol{h}_v)|,$$ as $|\hat{f}(\boldsymbol{k}'+(\boldsymbol{h}_v,\boldsymbol{0}))| \le \mathcal{B}(\boldsymbol{k}'+(\boldsymbol{h}_v,\boldsymbol{0})) \le \mathcal{B}(\boldsymbol{k}')$ for all values of $\boldsymbol{h}_v \in \mathbb{N}_0^{|v|}$ for which $\chi_{U,v,\boldsymbol{k}}(\boldsymbol{h}_v) \neq 0$, since $\mathcal{B}$ is $q$-adically non-increasing in each variable. Thus it remains to bound $\sum_{U\subseteq v \subseteq \mathcal{S}} \sum_{\boldsymbol{h}_v \in \mathbb{N}_0^{|v|}} |\chi_{U,v,\boldsymbol{k}}(\boldsymbol{h}_v)|$ independently of $\boldsymbol{k}$. We only prove the case where $q$ is chosen to be a prime number and the bijections $\varphi$ and $\eta$ are chosen to be the identity, as in this case we can obtain an explicit constant $C_{s,U} > 0$. The general case can be obtained by similar arguments using the result from Lemma~\ref{lem_sumfF}. Using the notation from Lemma~\ref{lem_sumfF} we have \begin{eqnarray*} \sum_{h\in\mathbb{N}_0} |\rho_{k_j}(h)| & = & |1-\omega_q^{-l_j}|^{-1} + 2^{-1} |1+\omega_q^{-l_j}| |\omega_q^{-l_j}-1|^{-1} \\ && + \sum_{i=1}^\infty q^{-i} \sum_{z=1}^{q-1} |\mathrm{e}^{2\pi\mathtt{i} z/q}-1|^{-1} \\ & \le & 3 (2-2\cos(2\pi/q))^{-1/2} \end{eqnarray*} and $$\sum_{h=0}^\infty |\phi(h)| = 2^{-1} + \sum_{i=1}^\infty q^{-i} \sum_{z=1}^{q-1} |\mathrm{e}^{2\pi\mathtt{i} z/q}-1|^{-1} \le 2^{-1} + (2-2\cos(2\pi/q))^{-1/2}.$$ Therefore we have \begin{eqnarray*} \lefteqn{ \sum_{U\subseteq v \subseteq \mathcal{S}} \sum_{\boldsymbol{h}_v \in \mathbb{N}_0^{|v|}} |\chi_{U,v,\boldsymbol{k}}(\boldsymbol{h}_v)| } \\ & \le & 3^{|U|} (2-2\cos(2\pi/q))^{-|U|/2} (3/2+(2-2\cos(2\pi/q))^{-1/2})^{s-|U|} \end{eqnarray*} and hence we can choose \begin{equation}\label{eq_constcsu} C_{s,U} = 3^{|U|} (2-2\cos(2\pi/q))^{-|U|/2} (3/2+(2-2\cos(2\pi/q))^{-1/2})^{s-|U|} \end{equation} in Lemma~\ref{lem_iteration} for this case. \end{proof} We use the above results now to establish an upper bound on the Walsh coefficients of a polynomial. The proof will give a glimpse of how the argument works for more general function classes. \begin{lemma}\label{lem_walshpoly} Let $k = \kappa_1 q^{a_1-1} + \cdots + \kappa_v q^{a_v-1}$ with $v\ge 1$, $\kappa_1,\ldots, \kappa_v \in \{1,\ldots, q-1\}$ and $1 \le a_v < \cdots < a_1$. For $v = 0$ let $k = 0$.
Let $f:[0,1)\rightarrow \mathbb{R}$ be the polynomial $f(x) = f_0 + f_1 x + \cdots + f_i x^i$ with $f_i \neq 0$ and let $\hat{f}(k)$ denote the $k$-th Walsh coefficient of $f$. Then for $v \ge 0$ there are constants $0 \le C_{f,i,v} < \infty$ such that $$|\hat{f}(k)| \le C_{f,i,v} q^{-a_1 - \cdots - a_v},$$ where we can choose $C_{f,i,v} = 0$ for $v > i$. \end{lemma} \begin{proof} Let $f(x) = f_0 + f_1 x + \cdots + f_i x^i$, where $i = \deg(f)$ (that is, $f_i \neq 0$). Then we have $f^{(i)}(x) = i! f_i \neq 0$. As $f^{(i)}$ is a constant function, its Walsh series representation is simply given by $f^{(i)}(x) = i! f_i$. Now we use Lemma~\ref{lem_iteration}. The dimension $s$ in our case is $1$ and we can choose the function $\mathcal{B}_1$ by $\mathcal{B}_1(0) = i!|f_i|$ and for $k > 0$ we set $\mathcal{B}_1(k) = 0$. Note that the function $\mathcal{B}_1$ defined this way is a $q$-adically non-increasing function. Then it follows that there is a constant $C_1 > 0$ such that the Walsh coefficients of the function $\int_0^x f^{(i)}(t) \,\mathrm{d} t = f^{(i-1)}(x) - f^{(i-1)}(0)$ are bounded by $C_1 q^{-a_1} i!|f_i|$ for all $k$ where $v = 1$ and the Walsh coefficients are $0$ for $v > 1$. The Walsh coefficient for $k = 0$ is given by $f^{(i-2)}(1) - f^{(i-2)}(0) - f^{(i-1)}(0)$. Now consider the function $\int_0^x f^{(i-1)}(t)\,\mathrm{d} t = f^{(i-2)}(x) - f^{(i-2)}(0)$. It follows from the above and Lemma~\ref{lem_iteration} that the Walsh coefficients of $ f^{(i-2)}(x) - f^{(i-2)}(0)$ can be bounded by a $q$-adically non-increasing function $\mathcal{B}_2$. Indeed there are constants $C_2,C_3 > 0$ such that we can choose $\mathcal{B}_2(0) = |f^{(i-2)}(1) - f^{(i-2)}(0) - f^{(i-1)}(0)|$, $\mathcal{B}_2(k) = C_{2} q^{-a_1}$ for $v = 1$, $\mathcal{B}_2(k) = C_3 q^{-a_1-a_2} $ for $v = 2$ and $\mathcal{B}_2(k) = 0$ for $v > 2$. Again $\mathcal{B}_2$ is a $q$-adically non-increasing function and Lemma~\ref{lem_iteration} can again be used. By using the above argument iteratively we obtain that there is a constant $C > 0$ such that $|\hat{f}(k)| \le C q^{-a_1 - \cdots - a_v}$, for $k =\kappa_1 q^{a_1-1} + \cdots + \kappa_v q^{a_v-1}$ with $\kappa_1,\ldots, \kappa_v \in \{1,\ldots, q-1\}$ and $1 \le a_v < \cdots < a_1$. The result thus follows. \end{proof} For the case where $q$ is chosen to be a prime number and the bijections $\varphi$ and $\eta$ are chosen to be the identity and for $0 \le v \le i$ we can choose \begin{equation}\label{eq_constcfv} C_{f,i,v} = \bar{C}^v \sum_{l = v}^i C'^{i-l} l! |f_l|, \end{equation} where $\bar{C} = (2-2\cos(2\pi/q))^{-1/2}$ and $C' = 3/2 + (2-2\cos(2\pi/q))^{-1/2}$ in Lemma~\ref{lem_walshpoly}. Let $f:[0,1)^s \rightarrow \mathbb{R}$ be such that the partial mixed derivatives up to order $\delta\ge 1$ in each variable exist and are continuous. We need some further notation: let $\boldsymbol{\tau} = (\tau_1,\ldots, \tau_s)$ and $$f^{(\boldsymbol{\tau})}(\boldsymbol{x}) = \frac{\partial^{\tau_1+\cdots + \tau_s}}{\partial x_1^{\tau_1} \cdots \partial x_s^{\tau_s}} f(\boldsymbol{x}).$$ For $\boldsymbol{\tau} \in \{0,\ldots, \delta\}^s$ let $u(\boldsymbol{\tau}) = \{1\le j \le s: \tau_j = \delta\}$. Let $\boldsymbol{\gamma} = (\gamma_v)_{v\subset \mathbb{N}}$ be an indexed set of non-negative real numbers. Let $v(\boldsymbol{\tau}) = \{1\le j \le s: \tau_j > 0\}$.
Then for $0 < \lambda \le 1$ and $\mathfrak{p},\mathfrak{q},\mathfrak{r} \ge 1$ ($\mathfrak{p},\mathfrak{q},\mathfrak{r}$ do not appear in the subscript of $N$ as they have no influence on our subsequent bounds; we only assume that they are greater than or equal to $1$) we define \begin{equation}\label{def_N} N_{\delta, \lambda,\boldsymbol{\gamma}}(f) = \left(\sum_{\boldsymbol{\tau} \in \{0,\ldots, \delta\}^s} \gamma_{v(\boldsymbol{\tau})}^{-1} \left[V_{\lambda,\mathfrak{p},\mathfrak{q},\boldsymbol{1}}^{(|u(\boldsymbol{\tau})|)}(f^{(\boldsymbol{\tau})}(\cdot, \boldsymbol{0}_{\mathcal{S}\setminus u(\boldsymbol{\tau})}))\right]^\mathfrak{r} \right)^{1/\mathfrak{r}}, \end{equation} where, for clarity, we introduce the additional superscript $(|u(\boldsymbol{\tau})|)$ in the Hardy and Krause variation $V_{\lambda,\mathfrak{p},\mathfrak{q},\boldsymbol{1}}^{(|u(\boldsymbol{\tau})|)}$ which indicates the dimension of the function and where for $u(\boldsymbol{\tau}) = \emptyset$ we set $V_{\lambda,\mathfrak{p},\mathfrak{q},\boldsymbol{1}}^{(|u(\boldsymbol{\tau})|)}(f^{(\boldsymbol{\tau})}(\cdot, \boldsymbol{0}_{\mathcal{S}\setminus u(\boldsymbol{\tau})})) = |f^{(\boldsymbol{\tau})}(\boldsymbol{0})|$. The weights $\boldsymbol{\gamma}$ are introduced to modify the importance of various coordinate projections and were first introduced in \cite{SW98}, see also \cite{DSWW1,DSWW2}. If for some $v' \subseteq \mathcal{S}$ the weight $\gamma_{v'} = 0$, then we assume that the function $f$ satisfies $V_{\lambda,\mathfrak{p},\mathfrak{q},\boldsymbol{1}}^{(|v'|)}(f^{(\boldsymbol{\tau})}(\cdot, \boldsymbol{0}_{\mathcal{S}\setminus v'})) = 0$ for all $\boldsymbol{\tau} \in \{0,\ldots, \delta\}^s$ with $v(\boldsymbol{\tau}) = v'$ and in (\ref{def_N}) we formally set $0/0 = 0$. The parameters in the definition of $N_{\delta, \lambda,\boldsymbol{\gamma}}$ have the following meaning: \begin{itemize} \item $\delta$ denotes the order of partial derivatives of $f$ required in order for $N_{\delta, \lambda,\boldsymbol{\gamma}}(f)$ to make sense; \item $\lambda$ is a H\"older type parameter or fractional order type parameter of the generalized Hardy and Krause variation; roughly, $f$ needs to have partial derivatives up to order $\delta + \lambda$, where for $0 < \lambda < 1$ this means some type of fractional smoothness or in dimension one a H\"older condition of order $\lambda$; \item the Vitali variation is taken in the $\mathfrak{p}$-norm; \item $\mathfrak{q}$ is the norm in the summation of the generalized Hardy and Krause variation; \item $\mathfrak{r}$ is the norm in the summation over the $\boldsymbol{\tau}$; \item $\boldsymbol{\gamma}$ are the weights which regulate the importance of different coordinate projections. \end{itemize} Note that for $\lambda = 1$ and $\mathfrak{p} = \mathfrak{q} = \mathfrak{r} = 2$ the functional $N_{\delta, \lambda,\boldsymbol{\gamma}}$ is just the norm in a weighted reproducing kernel Sobolev space of functions with continuous partial mixed derivatives up to order $\delta + 1$ in each variable.
In one dimension the unweighted inner product in this reproducing kernel Sobolev space is given by \begin{eqnarray}\label{eq_ipsob1} \lefteqn{\langle f, g \rangle_{{\rm sob},\delta+1} } \\ &=& f(0) g(0) + \cdots + f^{(\delta-1)}(0) g^{(\delta-1)}(0) \nonumber \\ && + \int_0^1 f^{(\delta)}(x) \,\mathrm{d} x \int_0^1 g^{(\delta)}(x)\,\mathrm{d} x + \int_0^1 f^{(\delta+1)}(x) g^{(\delta+1)}(x) \,\mathrm{d} x \nonumber \end{eqnarray} and for higher dimensions one just takes the weighted tensor product of the one dimensional reproducing kernel Sobolev spaces (see \cite{DSWW2} for examples of weighted tensor product reproducing kernel Sobolev spaces). Let the $s$-dimensional weighted inner product be denoted by $\langle \cdot, \cdot \rangle_{{\rm sob},s,\delta+1, \boldsymbol{\gamma}}$ and the corresponding norm by $\|\cdot\|_{{\rm sob},s,\delta+1,\boldsymbol{\gamma}}$ (indeed if the partial mixed derivatives up to order $\delta+1$ of $f$ are continuous on $[0,1]^s$ then we have $\|f\|_{{\rm sob},s,\delta+1,\boldsymbol{\gamma}} = N_{\delta, 1,\boldsymbol{\gamma}}(f)$). In the following we define a function $\mu$ which will be used throughout the paper: let $\delta \ge 1$ be an integer and $0 < \lambda \le 1$ be a real number. Then for $\boldsymbol{k} = (k_1,\ldots, k_s)$ we set \begin{equation}\label{defmus} \mu_{q,\delta+\lambda}(\boldsymbol{k}) = \sum_{j=1}^s \mu_{q,\delta+\lambda}(k_j)\end{equation} with \begin{equation}\label{defmu} \mu_{q,\delta + \lambda}(k) = \left\{\begin{array}{ll} 0 & \mbox{for } k = 0, \\ a_1 + \cdots + a_v & \mbox{for } v \le \delta, \\ a_1 + \cdots + a_{\delta} + \lambda a_{\delta +1} & \mbox{for } v > \delta, \end{array} \right. \end{equation} where for $k \in \mathbb{N}$ we write $k = \kappa_1 q^{a_1 -1} + \cdots + \kappa_v q^{a_v-1}$ with $v \ge 1$, $\kappa_1,\ldots, \kappa_v \in \{1,\ldots, q-1\}$ and $1 \le a_v < \cdots < a_1$.
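Since $\mu_{q,\delta+\lambda}$ is used throughout the remainder of the paper, the following short sketch may help to fix the definition: it simply reads off the positions $a_1 > \cdots > a_v$ of the nonzero base-$q$ digits of $k$ and evaluates (\ref{defmu}). It is a plain transcription of the definition and involves no further assumptions.
\begin{verbatim}
def mu(k, q, delta, lam):
    # mu_{q, delta + lambda}(k) as in (defmu): a_1 > ... > a_v are the
    # positions of the nonzero base-q digits of k (a = 1 for the q^0 digit)
    if k == 0:
        return 0.0
    positions, a = [], 1
    while k > 0:
        k, kappa = divmod(k, q)
        if kappa != 0:
            positions.append(a)
        a += 1
    positions.sort(reverse=True)              # a_1 > a_2 > ... > a_v
    if len(positions) <= delta:               # the case v <= delta
        return float(sum(positions))
    return sum(positions[:delta]) + lam * positions[delta]   # v > delta

# e.g. for q = 2 and k = 13 = 2^3 + 2^2 + 2^0 we have (a_1,a_2,a_3) = (4,3,1),
# so mu(13, 2, 2, 1.0) returns 4 + 3 + 1*1 = 8.0
\end{verbatim}
The $s$-dimensional quantity (\ref{defmus}) is then simply the sum of these values over the coordinates.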
\begin{theorem}\label{th_boundwalshcoeff} Let $\delta \ge 1$ be an integer, $0 < \lambda \le 1$, $\mathfrak{p}, \mathfrak{q}, \mathfrak{r} \ge 1$ be real numbers and an indexed set $\boldsymbol{\gamma} = (\gamma_v)_{v\subset\mathbb{N}}$ of non-negative real numbers be given. Let $f:[0,1)^s \rightarrow \mathbb{R}$ be such that the partial mixed derivatives up to order $\delta$ in each variable exist and such that $N_{\delta,\lambda,\boldsymbol{\gamma}}(f) < \infty$. Then for any $\boldsymbol{k} \in \mathbb{N}_0^s\setminus\{\boldsymbol{0}\}$ it follows that there is a constant $C_{f,q,s,\boldsymbol{\gamma}} >0$ independent of $\boldsymbol{k}$ such that $$|\hat{f}(\boldsymbol{k})| \le C_{f,q,s,\boldsymbol{\gamma}} q^{-\mu_{q,\delta+\lambda}(\boldsymbol{k})}.$$ \end{theorem} \begin{proof} In order to prove the result we use the Taylor series expansion of the function $f$. We have \begin{eqnarray}\label{eq_taylor} f(\boldsymbol{x}) &=& \sum_{\boldsymbol{\tau} \in \{0,\ldots, \delta-1\}^s} \frac{\boldsymbol{x}^{\boldsymbol{\tau}}}{\boldsymbol{\tau}!} f^{(\boldsymbol{\tau})}(\boldsymbol{0}) + \sum_{\emptyset\neq u \subseteq\mathcal{S}}\sum_{\boldsymbol{\tau}_{\mathcal{S}\setminus u} \in \{0,\ldots,\delta-1\}^{s-|u|}} ((\delta-1)!)^{-|u|} \nonumber \\ && \frac{\prod_{j\in\mathcal{S}\setminus u} x_j^{\tau_j}}{\prod_{j\in\mathcal{S}\setminus u} \tau_j!} \int_{[\boldsymbol{0}_u,\boldsymbol{x}_u)} f^{(\boldsymbol{\delta}_u, \boldsymbol{\tau}_{\mathcal{S}\setminus u})}(\boldsymbol{y}_u, \boldsymbol{0}_{\mathcal{S}\setminus u}) \prod_{j\in u} (x_j-y_j)^{\delta-1} \,\mathrm{d} \boldsymbol{y}_u. \end{eqnarray} First note that the first sum in (\ref{eq_taylor}) is a polynomial in $\boldsymbol{x}$ and therefore the Walsh coefficients of this polynomial satisfy the desired bound by Lemma~\ref{lem_walshpoly}. Now we consider the second sum. Let $\emptyset \neq u \subseteq \mathcal{S}$ with $u = \{j_1,\ldots, j_{|u|}\}$ be given. Then for the coordinates $j \notin u$ the corresponding factor is a polynomial in $x_j$ and its Walsh coefficients satisfy the desired bound by Lemma~\ref{lem_walshpoly}. Hence it remains to consider the Walsh coefficients of \begin{eqnarray*} G_u(\boldsymbol{x}_u) &=& \int_{[\boldsymbol{0}_u,\boldsymbol{x}_u)} f^{(\boldsymbol{\delta}_u, \boldsymbol{\tau}_{\mathcal{S}\setminus u})}(\boldsymbol{y}_u, \boldsymbol{0}_{\mathcal{S}\setminus u}) \prod_{j\in u} (x_j-y_j)^{\delta-1} \,\mathrm{d} \boldsymbol{y}_u \\ & = & \int_{[\boldsymbol{0}_u,\boldsymbol{1}_u)} f^{(\boldsymbol{\delta}_u, \boldsymbol{\tau}_{\mathcal{S}\setminus u})}(\boldsymbol{y}_u, \boldsymbol{0}_{\mathcal{S}\setminus u}) \prod_{j\in u} (x_j-y_j)_+^{\delta-1} \,\mathrm{d} \boldsymbol{y}_u, \end{eqnarray*} where $(x-y)_+ = \max(0, x-y)$. By differentiating the function $G_u$ in each variable $0 \le k < \delta$ times we obtain (see \cite[pp. 153,154]{SB}) \begin{eqnarray*} \lefteqn{\frac{\partial^{k|u|}}{\partial \boldsymbol{x}_u^{k}} G_u(\boldsymbol{x}_u) } \\ & =& \left(\frac{(\delta-1)!}{(\delta-1-k)!}\right)^{|u|} \int_{[\boldsymbol{0}_u,\boldsymbol{x}_u)} f^{(\boldsymbol{\delta}_u, \boldsymbol{\tau}_{\mathcal{S}\setminus u})}(\boldsymbol{y}_u, \boldsymbol{0}_{\mathcal{S}\setminus u}) \prod_{j\in u} (x_j-y_j)^{\delta-1-k} \,\mathrm{d} \boldsymbol{y}_u. \end{eqnarray*} Hence $\frac{\partial^{k|u|}}{\partial \boldsymbol{x}_u^{k}} G_u(\boldsymbol{x}_u) = 0$ if there is at least one $j \in u$ such that $x_j = 0$. Further we have $$\frac{\partial^{\delta|u|}}{\partial \boldsymbol{x}_u^{\delta}} G_u(\boldsymbol{x}_u) = ((\delta-1)!)^{|u|} f^{(\boldsymbol{\delta}_u, \boldsymbol{\tau}_{\mathcal{S}\setminus u})}(\boldsymbol{x}_u, \boldsymbol{0}_{\mathcal{S}\setminus u}).$$ From $N_{\delta,\lambda,\boldsymbol{\gamma}}(f) < \infty$ it follows that the Walsh coefficients of $f^{(\boldsymbol{\delta}_u,\boldsymbol{\tau}_{\mathcal{S}\setminus u})}(\boldsymbol{x}_u,\boldsymbol{0}_{\mathcal{S}\setminus u})$ decay like $q^{-\mu_{q,0+\lambda}(k)}$ in each variable.
Further we have $$G_u(\boldsymbol{x}_u) = \int_{[\boldsymbol{0},\boldsymbol{x}_u)} \int_{[\boldsymbol{0},\boldsymbol{y}_1)} \cdots \int_{[\boldsymbol{0},\boldsymbol{y}_{\delta-1})} G_u^{(\boldsymbol{\delta}_u)}(\boldsymbol{y}_{\delta}) \,\mathrm{d}\boldsymbol{y}_{\delta} \cdots \,\mathrm{d} \boldsymbol{y}_1$$ as the function $G_u$ and its derivatives are $0$ if at least one $x_j = 0$ for $j \in u$, i.e., we have $$\int_{[\boldsymbol{0},\boldsymbol{x}_u)} G_u^{(\boldsymbol{\tau})}(\boldsymbol{y})\,\mathrm{d}\boldsymbol{y} = \sum_{v\subseteq u} (-1)^{|u\setminus v|} G_u^{(\boldsymbol{\tau}-\boldsymbol{1})}(\boldsymbol{x}_v,\boldsymbol{0}_{u\setminus v}) = G_u^{(\boldsymbol{\tau}-\boldsymbol{1})}(\boldsymbol{x}_u).$$ Hence it follows by repeated use of Lemma~\ref{lem_iteration} that the desired bound holds for $G_u$ and thus the result follows from (\ref{eq_taylor}). \end{proof} For the case where $q$ is chosen to be a prime number and the bijections $\varphi$ and $\eta$ are chosen to be the identity we can also obtain an explicit constant in Theorem~\ref{th_boundwalshcoeff}. Indeed, using Lemma~\ref{lem_proppirs}, Lemma~\ref{lem_iteration} together with the explicit constant (\ref{eq_constcsu}) and Lemma~\ref{lem_walshpoly} together with the explicit constant (\ref{eq_constcfv}) we obtain that the constant $C_{f,q,s,\boldsymbol{\gamma}}$ can be chosen as \begin{eqnarray*} C_{f,q,s,\boldsymbol{\gamma}} &=& \sum_{\boldsymbol{\tau} \in \{0,\ldots, \delta-1\}^s\atop \gamma_{v(\boldsymbol{\tau})} \neq 0} |f^{(\boldsymbol{\tau})}(\boldsymbol{0})| \hat{C}^{\tau_1 + \cdots + \tau_s} + \sum_{\emptyset \neq u \subseteq \mathcal{S}} q^{|u|} C_{s,u}^\delta \\ && \sum_{\boldsymbol{\tau}_{\mathcal{S}\setminus u} \in \{0,\ldots,\delta-1\}^{s-|u|} \atop \gamma_{u\cup v(\boldsymbol{\tau}_{\mathcal{S}\setminus u})} \neq 0} \!\!\! \hat{C}^{\sum_{j \in \mathcal{S}\setminus u} \tau_j} V^{(|u|)}_{\lambda,1}(f^{(\boldsymbol{\delta}_u,\boldsymbol{\tau}_{\mathcal{S}\setminus u})}(\cdot, \boldsymbol{0}_{\mathcal{S}\setminus u})), \nonumber \end{eqnarray*} where $\hat{C} = 1$ for $2 \le q < 6$ and $\hat{C} = (2-2\cos(2\pi/q))^{-1/2}$ for $q > 6$ (note that for $q > 6$ we have $\hat{C} > 1$) and $$C_{s,u} = 3^{|u|} (2-2\cos(2\pi/q))^{-|u|/2} (3/2+(2-2\cos(2\pi/q))^{-1/2})^{s-|u|}.$$ As noted above, under certain conditions we can write $V^{(|u|)}_{\lambda,1}$ also as an integral, see (\ref{eq_formelV}). We give a further useful estimate of the constant by separating the dependence on the function from the constants. This way we obtain \begin{equation*} C_{f,q,s,\boldsymbol{\gamma}} \le C_{\delta,q,s,\boldsymbol{\gamma}} N_{\delta,\lambda,\boldsymbol{\gamma}}(f), \end{equation*} where \begin{eqnarray}\label{eq_constcn} C_{\delta,q,s,\boldsymbol{\gamma}} &=& \sum_{\boldsymbol{\tau} \in \{0,\ldots, \delta-1\}^s} \gamma_{v(\boldsymbol{\tau})} \hat{C}^{\tau_1 + \cdots + \tau_s} \nonumber \\ && + \sum_{\emptyset \neq u \subseteq \mathcal{S}} q^{|u|} C_{s,u}^\delta \sum_{\boldsymbol{\tau}_{\mathcal{S}\setminus u} \in \{0,\ldots,\delta-1\}^{s-|u|}} \gamma_{u\cup v(\boldsymbol{\tau}_{\mathcal{S}\setminus u})} \hat{C}^{\sum_{j \in \mathcal{S}\setminus u} \tau_j}.
\end{eqnarray} Consider now the case where the weights are of product form (see \cite{DSWW2}), i.e., there is a sequence of positive real numbers $(\gamma_j)_{j\in\mathbb{N}}$ such that $\gamma_v = \prod_{j\in v} \gamma_j$ for all $v \subset \mathbb{N}$ and for $v = \emptyset$ we set $\gamma_v = 1$. If now $q$ is prime with $2 \le q < 6$, then \begin{eqnarray*} C_{\delta,q,s,\boldsymbol{\gamma}} & = & \prod_{j=1}^s (1 + \gamma_j (\delta-1)) + \prod_{j=1}^s \left[(1+\gamma_j(\delta-1)) (3/2+\hat{C}) + \gamma_j 3 q \hat{C}\right] \\ && - \prod_{j=1}^s \left[(1+\gamma_j(\delta-1))(3/2+\hat{C})\right]. \end{eqnarray*} For example, for $q = 2$, $\delta = 1$ and product weights we obtain $$C_{1,2,s,\boldsymbol{\gamma}} = 1 - 2^s + 2^s \prod_{j=1}^s (1 + 6 \gamma_j).$$ The approach used here for prime $q$ and $\varphi$ the identity map can also be used for arbitrary prime powers $q$ and arbitrary mappings $\varphi$ with $\varphi(0) = 0$. Hence we obtain the following corollary. \begin{corollary}\label{cor_boundwalshcoeff} Under the assumptions of Theorem~\ref{th_boundwalshcoeff} there exists a constant $C_{\delta,q,s,\boldsymbol{\gamma}} > 0$ independent of $\boldsymbol{k}$ and $f$ such that $$|\hat{f}(\boldsymbol{k})| \le C_{\delta, q,s,\boldsymbol{\gamma}} N_{\delta,\lambda,\boldsymbol{\gamma}}(f) q^{-\mu_{q,\delta+\lambda}(\boldsymbol{k})} \quad \mbox{ for all } \boldsymbol{k} \in \mathbb{N}_0^s,$$ where $\mu_{q,\delta+\lambda}$ is given by (\ref{defmus}) and (\ref{defmu}). \end{corollary} \begin{remark}\rm The results in this section also hold for the following generalization. In the definition of $N_{\delta,\lambda,\boldsymbol{\gamma}}$ we anchored the function and its derivatives at $0$, i.e., we used $V_{\lambda,\mathfrak{p}, \mathfrak{q}, \boldsymbol{1}}^{(|u(\boldsymbol{\tau})|)}(f(\cdot,\boldsymbol{0}_{\mathcal{S}\setminus u(\boldsymbol{\tau})}))$. This can be generalized by choosing an arbitrary $\boldsymbol{a} \in [0,1]^s$ and using $V_{\lambda,\mathfrak{p},\mathfrak{q},\boldsymbol{1}}^{(|u(\boldsymbol{\tau})|)}(f(\cdot,\boldsymbol{a}_{\mathcal{S}\setminus u(\boldsymbol{\tau})}))$ in the definition of $N_{\delta,\lambda,\boldsymbol{\gamma}}$. It can be shown that in this case we also have Theorem~\ref{th_boundwalshcoeff} and Corollary~\ref{cor_boundwalshcoeff}. \end{remark} \subsection{Convergence of the Walsh series}\label{subsect_convergence} For our purposes here we need strong assumptions on the convergence of the Walsh series $S(f)(\boldsymbol{x}) = \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \hat{f}(\boldsymbol{k}) {\rm wal}_{\boldsymbol{k}}(\boldsymbol{x})$ to the function $f$, i.e., we require that the partial series $S_{m}(f)(\boldsymbol{x}) = \sum_{\boldsymbol{k} \in \mathbb{N}_0^s \atop k_j < m} \hat{f}(\boldsymbol{k}) {\rm wal}_{\boldsymbol{k}}(\boldsymbol{x})$ converges to $f(\boldsymbol{x})$ at every point $\boldsymbol{x} \in [0,1)^s$ as $m \rightarrow \infty$. (Note that the Walsh series $S(f)$ for the functions considered in this paper is always absolutely convergent, i.e., $\sum_{\boldsymbol{k}\in\mathbb{N}_0^s} |\hat{f}(\boldsymbol{k})| < \infty$, hence the Walsh series $S(f)(\boldsymbol{x})$ is uniformly bounded by $\sum_{\boldsymbol{k}\in\mathbb{N}_0^s} |\hat{f}(\boldsymbol{k})|$ and therefore $S(f)(\boldsymbol{x})$ itself converges at every point $\boldsymbol{x} \in [0,1)^s$.)
This is necessary as we approximate the integral using function values at the quadrature points $\boldsymbol{x}_n$, while for our analysis we deal with the Walsh series rather than the function itself; hence it is paramount that the function and its Walsh series coincide at every point $\boldsymbol{x} \in [0,1)^s$. As the functions considered here are at least differentiable, they are continuous, and using the argument in \cite[p. 373]{Fine} it follows that the Walsh series indeed converges at every point $\boldsymbol{x} \in [0,1)^s$ to the function value $f(\boldsymbol{x})$. Indeed, for a given $\boldsymbol{x} \in [0,1)^s$ we have $$S_{q^m}(f)(\boldsymbol{x}) = \sum_{\boldsymbol{k} \in \{0,\ldots, q^m-1\}^s} \hat{f}(\boldsymbol{k}){\rm wal}_{\boldsymbol{k}}(\boldsymbol{x}) = {\rm Vol}(J_{\boldsymbol{x}})^{-1} \int_{J_{\boldsymbol{x}}} f(\boldsymbol{y}) \,\mathrm{d} \boldsymbol{y},$$ where $J_{\boldsymbol{x}} = \prod_{j=1}^s [q^{-m}\lfloor q^m x_j\rfloor, q^{-m}\lfloor q^m x_j\rfloor+q^{-m})$. The last equality follows from \begin{eqnarray*} \lefteqn{\sum_{\boldsymbol{k} \in \{0,\ldots, q^m-1\}^s} \hat{f}(\boldsymbol{k}){\rm wal}_{\boldsymbol{k}}(\boldsymbol{x}) } \qquad\qquad \\ & = & \int_{[0,1)^s} f(\boldsymbol{y}) \sum_{\boldsymbol{k}\in \{0,\ldots, q^m-1\}^s} {\rm wal}_{\boldsymbol{k}}(\boldsymbol{x}) \overline{{\rm wal}_{\boldsymbol{k}}(\boldsymbol{y})} \,\mathrm{d} \boldsymbol{y} \\ & = & {\rm Vol}(J_{\boldsymbol{x}})^{-1} \int_{J_{\boldsymbol{x}}} f(\boldsymbol{y}) \,\mathrm{d} \boldsymbol{y}. \end{eqnarray*} As the function $f$ is continuous it immediately follows that $S_{q^m}(f)(\boldsymbol{x})$ converges to $f(\boldsymbol{x})$ as $m$ goes to infinity and the result follows. \subsection{A function space based on Walsh functions containing smooth functions}\label{subsectwalshspaceE} In this section we use the above results to define a function space based on Walsh functions which contains smooth functions satisfying the smoothness conditions considered in the previous section. Let $\vartheta > 1$ be a real number and $q$ a prime power. Then for $\boldsymbol{k} \in \mathbb{N}_0^s$ we set $r_{q,\vartheta}(\boldsymbol{k}) = q^{-\mu_{q,\vartheta}(\boldsymbol{k})}$, where $\mu_{q,\vartheta}$ is given by (\ref{defmus}) and (\ref{defmu}) (if $\vartheta$ is an integer, then choose $\lambda = 1$ and $\delta = \vartheta-1$ and otherwise $\delta = \lfloor \vartheta \rfloor$ and $\lambda = \vartheta - \lfloor \vartheta \rfloor$). Now we define a function space $\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}} \subseteq \mathcal{L}_2([0,1)^s)$ with norm $\|\cdot\|_{\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}}$ given by $$\|f\|_{\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}} = \max_{u \subseteq \mathcal{S} \atop \gamma_u \neq 0} \gamma_u^{-1} \sup_{\boldsymbol{k}_u \in \mathbb{N}^{|u|}} \frac{|\hat{f}(\boldsymbol{k}_u,\boldsymbol{0}_{\mathcal{S}\setminus u})|}{r_{q,\vartheta}(\boldsymbol{k}_u)},$$ where again for $\gamma_u = 0$ we assume that $\hat{f}(\boldsymbol{k}_u,\boldsymbol{0}_{\mathcal{S}\setminus u}) = 0$ for all $\boldsymbol{k}_u \in \mathbb{N}^{|u|}$. The following result now follows directly from Corollary~\ref{cor_boundwalshcoeff}. \begin{corollary}\label{cor_smoothfwalsh} Let $\delta \ge 1$, $0 < \lambda \le 1$, $\mathfrak{p}, \mathfrak{q}, \mathfrak{r} \ge 1$ and an indexed set $\boldsymbol{\gamma} = (\gamma_v)_{v\subset\mathbb{N}}$ of non-negative real numbers be given.
Then there exists a constant $C_{\delta,q,s,\boldsymbol{\gamma}} > 0$ such that for every function $f:[0,1)^s \rightarrow \mathbb{R}$, whose partial mixed derivatives up to order $\delta$ exist, we have $$\|f\|_{\mathcal{E}_{s,q,\delta+\lambda,\boldsymbol{\gamma}}} \le C_{\delta, q,s,\boldsymbol{\gamma}} N_{\delta,\lambda,\boldsymbol{\gamma}}(f),$$ where $\mu_{q,\delta+\lambda}$ is given by (\ref{defmus}) and (\ref{defmu}). \end{corollary} Again, using (\ref{eq_constcn}) an explicit constant in Corollary~\ref{cor_smoothfwalsh} can be obtained for $q$ prime and $\varphi$ the identity map. For all other cases (i.e., arbitrary prime powers $q$ and mappings $\varphi$ with $\varphi(0) = 0$) explicit constants can be obtained as well, but in this case the constant may also depend on the particular choice of $q$ and $\varphi$. Further, as noted already above, for $\lambda = 1$ and $\mathfrak{p} = \mathfrak{q} = \mathfrak{r} = 2$ the functional $N_{\delta,\lambda,\boldsymbol{\gamma}}$ coincides with the norm in a certain Sobolev space (the one dimensional inner product for this Sobolev space is given by (\ref{eq_ipsob1}) and for higher dimensions one just considers tensor products of the one dimensional space) and hence it follows that $\mathcal{E}_{s,q,\delta + 1,\boldsymbol{\gamma}}$ contains certain Sobolev spaces. Hence Corollary~\ref{cor_smoothfwalsh} shows that if we want to prove results for smooth functions it is enough to consider $\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$ (in the following we design quasi-Monte Carlo rules which work well for $\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$ rather than directly for smooth functions, so the results for smooth functions come as a byproduct). A function $f \in \mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$ can be written as a sum of its ANOVA terms $f = \sum_{u \subseteq \mathcal{S}} f_u$ (see \cite{ES}). For a function $f\in \mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$ given by $f(\boldsymbol{x}) = \sum_{\boldsymbol{k} \in \mathbb{N}_0^s} \hat{f}(\boldsymbol{k}) {\rm wal}_{\boldsymbol{k}}(\boldsymbol{x})$ the ANOVA term $f_u$ corresponding to a subset $u \subseteq \mathcal{S}$ is simply given by $$f_u(\boldsymbol{x}_u) = \sum_{\boldsymbol{k}_u \in \mathbb{N}^{|u|}} \hat{f}(\boldsymbol{k}_u,\boldsymbol{0}_{\mathcal{S}\setminus u}) {\rm wal}_{\boldsymbol{k}_u}(\boldsymbol{x}_u).$$ If for some $u \subseteq \mathcal{S}$ we have $\gamma_u = 0$, then this implies that the ANOVA term corresponding to $u$ satisfies $f_u \equiv 0$. Hence the Walsh space $\mathcal{E}_{s,q,\vartheta, \boldsymbol{\gamma}}$ consists only of functions whose ANOVA term belonging to a subset $u$ is zero for all subsets $u$ with $\gamma_u = 0$ (see also \cite{DSWW2}). \section{Digital $(t,\alpha, \beta ,n\times m,s)$-nets and digital $(t,\alpha,\beta, \sigma ,s)$-sequences}\label{sectdignets} In this section we give the definition of digital $(t,\alpha, \beta,n\times m,s)$-nets and digital $(t,\alpha, \beta,\sigma, s)$-sequences. Similar point sets were introduced in \cite{Dick05}. \subsection{The digital construction scheme} The construction of the point set used here is a slight generalization of the digital construction scheme introduced by Niederreiter, see \cite{niesiam}, obtained by breaking with the tradition of having square generating matrices. \begin{definition}\label{def_digcons} \rm Let $q$ be a prime power and let $n,m,s \ge 1$ be integers.
Let $C_1,\ldots ,C_s$ be $n \times m$ matrices over the finite field $\mathbb{F}_q$ of order $q$. Now we construct $q^m$ points in $[0,1)^s$: for $0 \le h \le q^m -1$ let $h=h_0+h_1 q +\cdots +h_{m-1} q^{m-1}$ be the $q$-adic expansion of $h$. Consider an arbitrary but fixed bijection $\varphi:\{0,1,\ldots ,q-1\}\longrightarrow \mathbb{F}_q$. Identify $h$ with the vector $\vec{h}=(\varphi(h_0),\ldots ,\varphi(h_{m-1}))^{\top} \in \mathbb{F}_q^m$, where $\top$ means the transpose of the vector (note that we write $\vec{h}$ for vectors in $\mathbb{F}_q^m$ and $\boldsymbol{h}$ for vectors of integers or real numbers). For $1 \le j \le s$ multiply the matrix $C_j$ by $\vec{h}$, i.e., $$C_j \vec{h}=:(y_{j,1}(h),\ldots ,y_{j,n}(h))^{\top} \in \mathbb{F}_q^n,$$ and set $$x_{h,j}:=\frac{\varphi^{-1}(y_{j,1}(h))}{q}+\cdots +\frac{\varphi^{-1}(y_{j,n}(h))}{q^n}.$$ The point set $\{\boldsymbol{x}_0,\ldots, \boldsymbol{x}_{q^m-1}\}$ is called a digital net (over $\mathbb{F}_q$) (with generating matrices $C_1,\ldots, C_s$). For $n,m = \infty$ we obtain a sequence $\{\boldsymbol{x}_0,\boldsymbol{x}_1,\ldots\}$, which is called a digital sequence (over $\mathbb{F}_q$) (with generating matrices $C_1,\ldots, C_s$). \end{definition} Niederreiter's concept of a digital $(t,m,s)$-net and a digital $(t,s)$-sequence will appear as a special case in the subsequent section. Further, the digital nets considered below all satisfy $n \ge m$. For a digital net with generating matrices $C_1,\ldots, C_s$ let $\mathcal{D} = \mathcal{D}(C_1,\ldots, C_s)$ be the dual net given by $$\mathcal{D} = \{\boldsymbol{k} \in \mathbb{N}_0^s\setminus \{\boldsymbol{0}\}: C_1^\top \vec{k}_1 + \cdots + C_s^\top \vec{k}_s = \vec{0}\},$$ where for $\boldsymbol{k} = (k_1,\ldots, k_s)$ with $k_j = \kappa_{j,0} + \kappa_{j,1} q + \cdots$ and $\kappa_{j,i} \in \{0,\ldots, q-1\}$ let $\vec{k}_j = (\varphi(\kappa_{j,0}),\ldots, \varphi(\kappa_{j,n-1}))^\top$. Further, for $\emptyset \neq u \subseteq \mathcal{S}$ let $\mathcal{D}_u = \mathcal{D}((C_j)_{j\in u})$ and $\mathcal{D}_u^\ast = \mathcal{D}_u \cap \mathbb{N}^{|u|}$. Note that throughout the paper Walsh functions and digital nets are defined using the same finite field $\mathbb{F}_q$ and the same bijection $\varphi$. The following lemma is a slight generalization of \cite[Lemma~2.5]{PDP}. \begin{lemma} \label{fqwalsum} Let $\{\boldsymbol{x}_0,\ldots,\boldsymbol{x}_{q^m-1} \}$ be a digital net over $\mathbb{F}_{q}$ with bijection $\varphi$, where $\varphi(0)=0$, generated by the $n\times m$ matrices $C_1,\ldots, C_s$ over $\mathbb{F}_q$, $n,m\ge 1$. Then for any vector $\boldsymbol{k}=(k_1,\ldots ,k_s)$ of nonnegative integers $0\leq k_1,\ldots,k_s<q^n$ we have \[ \sum_{h=0}^{q^m-1}\, _{\mathbb{F}_q,\varphi}{\rm wal}_{\boldsymbol{k}}(\boldsymbol{x}_h) = \begin{cases} q^m & \text{ if } \boldsymbol{k} \in \mathcal{D} \cup \{\boldsymbol{0}\}, \\ 0 & \text{ else}, \end{cases} \] where $\boldsymbol{0}$ is the zero vector in $\mathbb{N}_0^s$. \end{lemma}
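To illustrate Definition~\ref{def_digcons} and the dual net $\mathcal{D}$, the following sketch constructs the $q^m$ points of a digital net from given generating matrices and tests membership in $\mathcal{D}$. It is written for the special case of a prime $q$ with $\varphi$ the identity, represents the matrices as integer arrays with entries in $\{0,\ldots,q-1\}$, and is meant only as an illustration; the toy matrices mentioned in the final comment are an arbitrary choice and no particular quality ($t$-value) is claimed for them.
\begin{verbatim}
import numpy as np

def digital_net(C, q):
    # points x_0,...,x_{q^m-1} of the digital net generated by the n x m
    # matrices C[0],...,C[s-1] over F_q (q prime, phi the identity)
    s = len(C)
    n, m = C[0].shape
    w = np.array([q ** -(i + 1) for i in range(n)], dtype=float)  # q^{-1..-n}
    pts = np.empty((q**m, s))
    for h in range(q**m):
        hvec = np.array([(h // q**i) % q for i in range(m)])  # digits of h
        for j in range(s):
            y = C[j].dot(hvec) % q      # C_j * vec(h) over F_q
            pts[h, j] = y.dot(w)        # map the digit vector to [0,1)
    return pts

def in_dual_net(C, q, k):
    # k = (k_1,...,k_s) lies in D iff C_1^T vec(k_1)+...+C_s^T vec(k_s) = 0,
    # where vec(k_j) collects the first n base-q digits of k_j
    n, m = C[0].shape
    acc = np.zeros(m, dtype=int)
    for Cj, kj in zip(C, k):
        kvec = np.array([(kj // q**i) % q for i in range(n)])
        acc = (acc + Cj.T.dot(kvec)) % q
    return any(k) and not acc.any()

# toy example over F_2: C = [np.eye(3, dtype=int),
#                            np.triu(np.ones((3, 3), dtype=int))]
\end{verbatim}
Combined with a routine for ${\rm wal}_{\boldsymbol{k}}$ as in the earlier sketch, Lemma~\ref{fqwalsum} can be checked numerically on such small examples by summing ${\rm wal}_{\boldsymbol{k}}(\boldsymbol{x}_h)$ over the $q^m$ points.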
\subsection{$(t,\alpha,\beta,n \times m,s)$-nets and $(t,\alpha,\beta,\sigma,s)$-sequences}\label{sec_talpha} Digital $(t,\alpha,\beta,m,s)$-nets and digital $(t,\alpha,\beta,s)$-sequences were first introduced in \cite{Dick05}. Those point sets were used for quasi-Monte Carlo rules which achieve the optimal rate of convergence of the worst-case error in Korobov spaces (which are reproducing kernel Hilbert spaces of smooth periodic functions). By a slight generalization of digital $(t,\alpha,\beta,m,s)$-nets we will show that those digital nets also achieve the optimal rate of convergence of the worst-case error in the space $\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$ for all $1 < \vartheta \le \alpha$. The $t$ value of a $(t,m,s)$-net is a quality parameter for the distribution properties of the net. A low $t$ value yields well distributed point sets and it has been shown, see for example \cite{DP05,niesiam}, that a small $t$ value also guarantees a small worst-case error for integration in Sobolev spaces for which the partial first derivatives are square integrable. In \cite{Dick05} it was shown how the definition of the $t$ value needs to be modified in order to obtain faster convergence rates for periodic Sobolev spaces for which the partial derivatives up to order $\delta \le \beta$ are square integrable. Here we extend those results in several ways. First we generalize the digital $(t,\alpha,\beta,m,s)$-nets used in \cite{Dick05} to digital $(t,\alpha,\beta,n\times m,s)$-nets and show that we can then remove the periodicity assumption necessary in \cite{Dick05}. Further, if the derivatives up to order $\delta$ also have bounded variation with coefficient $0 < \lambda \le 1$, then we have shown that such functions are in $\mathcal{E}_{s,q,\delta+\lambda,\boldsymbol{\gamma}}$. In the following we repeat some definitions and results from \cite{Dick05} and give the definition of digital $(t,\alpha,\beta,n\times m,s)$-nets and digital $(t,\alpha,\beta, \sigma, s)$-sequences. For a real number $\vartheta > 1$ the definition of the Walsh space $\mathcal{E}_{s,q,\vartheta, \boldsymbol{\gamma}}$ suggests defining the metric $\mu_{q,\vartheta}(\boldsymbol{k},\boldsymbol{l}) = \mu_{q,\vartheta}(\boldsymbol{k}\ominus \boldsymbol{l})$ on $\mathbb{N}_0^s$, where $\mu_{q,\vartheta}(\boldsymbol{k} \ominus\boldsymbol{l})$ is given by (\ref{defmus}) and (\ref{defmu}), which is an extension of the metric introduced in \cite{nie86}, see also \cite{rt} (the metric for $\vartheta = 1$ can be used for Walsh spaces considered for example in \cite{DP05}; for this case one basically obtains the metric in \cite{nie86,rt}). As we will see later, in order to obtain a small worst-case error in the Walsh space $\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$ we need digital nets for which $\min\{\mu_{q,\vartheta}(\boldsymbol{k}): \boldsymbol{k} \in \mathcal{D}\}$ is large. By translating this property into a linear independence property of the row vectors of the generating matrices $C_1,\ldots, C_s$ we arrive at the following definition. \begin{definition}\rm\label{def_net} Let $n, m,\alpha \ge 1$ be natural numbers, let $0 <\beta \le \alpha m/n$ be a real number and let $0 \le t \le \beta n$ be an integer. Let $\mathbb{F}_q$ be the finite field of prime power order $q$ and let $C_1,\ldots, C_s \in \mathbb{F}_q^{n \times m}$ with $C_j = (c_{j,1}, \ldots, c_{j,n})^\top$. If for all $1 \le i_{j,\nu_j} < \cdots < i_{j,1} \le n$, where $0 \le \nu_j \le m$ for all $j = 1,\ldots, s$, with $$\sum_{j = 1}^s \sum_{l=1}^{\min(\nu_j,\alpha)} i_{j,l} \le \beta n - t$$ the vectors $$c_{1,i_{1,\nu_1}}, \ldots, c_{1,i_{1,1}}, \ldots, c_{s,i_{s,\nu_s}}, \ldots, c_{s,i_{s,1}}$$ are linearly independent over $\mathbb{F}_q$ then the digital net with generating matrices $C_1,\ldots, C_s$ is called a digital $(t,\alpha,\beta,n\times m,s)$-net over $\mathbb{F}_q$.
Further we call a digital $(t,\alpha,\beta, n\times m,s)$-net over $\mathbb{F}_q$ with the largest possible value of $\beta$, i.e., $\beta = \alpha m/n$, a digital $(t,\alpha,n\times m,s)$-net over $\mathbb{F}_q$. If $t$ is the smallest non-negative integer such that the digital net generated by $C_1,\ldots, C_s$ is a digital $(t,\alpha,\beta,n\times m,s)$-net, then we call the digital net a strict digital $(t,\alpha,\beta, n \times m,s)$-net or a strict digital $(t,\alpha,n \times m,s)$-net if $\beta = \alpha m/n$. \end{definition} \begin{remark}\rm Using duality theory (see \cite{np}) it follows that for a digital $(t,\alpha,\beta,n \times m,s)$-net we have $\min_{\boldsymbol{k} \in \mathcal{D}} \mu_{q,\alpha}(\boldsymbol{k}) > \beta n - t$ and for a strict digital $(t,\alpha,\beta,n \times m,s)$-net we have $\min_{\boldsymbol{k} \in \mathcal{D}} \mu_{q,\alpha}(\boldsymbol{k}) = \beta n - t + 1$. Hence digital $(t,\alpha,\beta,n \times m,s)$-nets with high quality have a large value of $\beta n - t$. \end{remark} \begin{remark}\rm In summary the parameters $t, \alpha,\beta, n, m, s$ have the following meaning: \begin{itemize} \item $s$ denotes the dimension of the point set. \item $n$ and $m$ denote the size of the generating matrices for digital nets, i.e., the generating matrices are of size $n \times m$; in particular this means the point set has $q^m$ points. \item $t$ denotes the quality parameter of the point set; a low $t$ value means high quality. In the upper bound on the integration error, $t$ enters through the constant. \item $\beta$ is also a quality parameter. We will see later that the integration error is roughly $q^{-n}$. This is of course only true within limits, which is the reason for the parameter $\beta$; more precisely, the integration error is roughly $q^{-\beta n}$. Hence $\beta$ is a quality parameter related to the convergence rate. \item $\alpha$ is the smoothness parameter of the point set. \end{itemize} We can group the parameters also in the following way: \begin{itemize} \item $m,n,s$ are fixed parameters, i.e., they specify the number and size of the generating matrices. \item $\alpha$ is a variable parameter, i.e., given (fixed) generating matrices can, for example, generate a $(t_1, 1, \beta_1, 10 \times 5, 5)$-net, a $(t_2, 2, \beta_2, 10 \times 5, 5)$-net, and so on (note that the point set is always the same in each instance; the values $t_1,t_2, \ldots, \beta_1,\beta_2, \ldots$ may differ). This is necessary as in the upper bounds $\alpha$ will be the smoothness of the integrand, which may not be known explicitly. \item $t$ and $\beta$ are dependent parameters; they depend on the generating matrices and on $\alpha$. For given generating matrices, it is desirable to know the values of $\beta$ and $t$ for each value of $\alpha \in \mathbb{N}$. \end{itemize} \end{remark} Digital $(t,\alpha,\beta,n\times m,s)$-nets do not exist for arbitrary choices of the parameters $t,\alpha,\beta,n,m,s$, see \cite{Dick05}. The digital nets considered in \cite{Dick05} had the restriction that $n = m$ and special attention was paid to those digital nets with high quality, i.e., where $\alpha = \beta$. In this paper, a special role will be played by those digital nets for which $n = \alpha m$ and $\beta = 1$.
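On very small examples the duality remark above can be checked directly: the brute-force sketch below computes $\min_{\boldsymbol{k}\in\mathcal{D}}\mu_{q,\alpha}(\boldsymbol{k})$ straight from the generating matrices. It is feasible only for tiny $q$, $n$ and $s$, assumes as before that $q$ is prime with $\varphi$ the identity, and is intended purely as a cross-check of Definition~\ref{def_net}, not as a practical algorithm; here $\mu_{q,\alpha}$ is (\ref{defmu}) with $\delta = \alpha-1$ and $\lambda = 1$, i.e., the sum of the $\min(v,\alpha)$ largest digit positions of $k$.
\begin{verbatim}
import numpy as np
from itertools import product

def mu_int(k, q, alpha):
    # mu_{q,alpha}(k) for integer alpha: sum of the min(v, alpha) largest
    # positions of nonzero base-q digits of k
    pos, a = [], 1
    while k > 0:
        k, kappa = divmod(k, q)
        if kappa != 0:
            pos.append(a)
        a += 1
    pos.sort(reverse=True)
    return sum(pos[:alpha])

def min_mu_dual(C, q, alpha):
    # brute-force minimum of mu_{q,alpha} over the dual net D of the digital
    # net with n x m generating matrices C[j] (entries in {0,...,q-1})
    n, m = C[0].shape
    best = None
    for k in product(range(q**n), repeat=len(C)):
        if not any(k):
            continue
        acc = np.zeros(m, dtype=int)
        for Cj, kj in zip(C, k):
            kvec = np.array([(kj // q**i) % q for i in range(n)])
            acc = (acc + Cj.T.dot(kvec)) % q
        if not acc.any():                      # k lies in the dual net D
            val = sum(mu_int(kj, q, alpha) for kj in k)
            best = val if best is None else min(best, val)
    return best
\end{verbatim}
For a strict digital $(t,\alpha,\beta,n\times m,s)$-net the returned value should equal $\beta n - t + 1$, in accordance with the duality remark above.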
The restriction on the linear independence of the digital nets comprises now $n - t = \alpha m - t$ row vectors, which is the same as in \cite{Dick05}, with the only difference that the size of the generating matrices is now bigger, as each generating matrix has $n = \alpha m$ rows. As those digital nets play a special role in this work we have the following definition. \begin{definition}\rm\label{def_netn} A digital $(t,\alpha,1, \alpha m \times m, s)$-net over $\mathbb{F}_q$ is called a digital $(t,\alpha, \alpha m \times m,s)$-net over $\mathbb{F}_q$. A strict digital $(t,\alpha,1, \alpha m \times m, s)$-net over $\mathbb{F}_q$ is called a strict digital $(t,\alpha, \alpha m \times m,s)$-net over $\mathbb{F}_q$. \end{definition} \begin{remark}\rm For practical purposes we would like to explicitly know digital $(t,\alpha,\alpha m \times m, s)$-nets for all $\alpha,m,s \ge 1$ with $t$ as small as possible (as will be shown later, they achieve the optimal rate of convergence of the integration error of integrands for which all mixed partial derivatives of order $\alpha$ are, for example, square integrable, thus their usefulness). Further, for given $\alpha,m,s \ge 1$ and a given digital $(t,\alpha, \alpha m \times m,s)$-net $P$, we would then also like to know the $t'$ and $\beta'$ value of this point set $P$ when viewed as a digital $(t',\delta,\beta', \alpha m \times m, s)$-net for all values $\delta \in \mathbb{N}$, i.e., $t'$ and $\beta'$ are functions of $\delta$ (this is because we would also like to know how well such a digital net $P$ performs if the integrand has partial mixed derivatives of order up to $\delta$, because we might not know the smoothness of the integrand, but would still wish that $P$ performs as well as possible). \end{remark} We can also define sequences of points for which the first $q^m$ points form a digital $(t,\alpha,\beta,n\times m,s)$-net. In the classical case \cite{niesiam} one can just consider the left-upper $m \times m$ submatrices of the generating matrices of a digital sequence and determine the net properties of these for each $m \in \mathbb{N}$. Here, on the other hand, we are considering digital nets whose generating matrices are $n \times m$ matrices. So we would have to consider the left-upper $n_m \times m$ submatrices of the generating matrices of the digital sequence for each $m \in \mathbb{N}$, where $(n_m)_{m \in\mathbb{N}}$ is a sequence of natural numbers. For our purposes here it is enough to consider only $n_m$ of the form $\sigma m$, for some given $\sigma \in \mathbb{N}$. \begin{definition}\rm\label{def_seq} Let $\alpha, \sigma \ge 1$ and $t \ge 0$ be integers and let $0 <\beta \le \alpha/\sigma$ be a real number. Let $\mathbb{F}_q$ be the finite field of prime power order $q$ and let $C_1,\ldots, C_s \in \mathbb{F}_q^{\infty \times \infty}$ with $C_j = (c_{j,1}, c_{j,2}, \ldots)^\top$. Further let $C_{j,\sigma m \times m}$ denote the left upper $\sigma m \times m$ submatrix of $C_j$. If for all $m > t/(\beta\sigma)$ the matrices $C_{1,\sigma m \times m},\ldots, C_{s,\sigma m \times m}$ generate a digital $(t,\alpha,\beta,\sigma m \times m,s)$-net then the digital sequence with generating matrices $C_1,\ldots, C_s$ is called a digital $(t,\alpha,\beta, \sigma ,s)$-sequence over $\mathbb{F}_q$. Further we call a digital $(t,\alpha,1, \alpha,s)$-sequence over $\mathbb{F}_q$ a digital $(t,\alpha,s)$-sequence over $\mathbb{F}_q$.
If $t$ is the smallest non-negative integer such that the digital sequence generated by $C_1,\ldots, C_s$ is a digital $(t,\alpha,\beta, \sigma,s)$-sequence, then we call the digital sequence a strict digital $(t,\alpha,\beta, \sigma, s)$-sequence or a strict digital $(t,\alpha,s)$-sequence if $\alpha = \sigma$ and $\beta = 1$. \end{definition} For short we will often write $(t,\alpha,\beta, n\times m, s)$-net instead of digital $(t,\alpha,\beta, n\times m, s)$-net over $\mathbb{F}_q$. The same applies to the other notions defined above. \begin{remark}\rm Note that the definition of a digital $(t, 1,m\times m,s)$-net coincides with the definition of a digital $(t,m,s)$-net and the definition of a digital $(t, 1,s)$-sequence coincides with the definition of a digital $(t,s)$-sequence as defined by Niederreiter~\cite{niesiam}. Further note that the $t$-value depends on $\alpha, \beta$ and $\sigma$, i.e., $t = t(\alpha,\beta,\sigma)$ or $t = t(\alpha)$ if $\alpha = \sigma$ and $\beta = 1$. \end{remark} The definition of $(t,\alpha,s)$-sequences here differs slightly from the definition in \cite{Dick05}. Indeed the definition of a $(t,\alpha,s)$-sequence in \cite{Dick05} corresponds to a $(t,\alpha,\alpha,1,s)$-sequence in the terminology of this paper, whereas here we call a $(t,\alpha,1,\alpha,s)$-sequence a $(t,\alpha,s)$-sequence. On the other hand note that the condition of linear independence in Definition~\ref{def_net} is the same in both cases, i.e., the sum $i_{1,1} + \cdots + i_{1,\min(\nu_1,\alpha)} + \cdots + i_{s,1} + \cdots + i_{s,\min(\nu_s,\alpha)}$ needs to be bounded by $\alpha m -t$ for all $m$ for $(t,\alpha,1,\alpha,s)$-sequences and also for $(t,\alpha,\alpha,1,s)$-sequences. \subsection{Some properties of $(t,\alpha,\beta, n\times m, s)$-nets and $(t,\alpha,\beta,\sigma,s)$-sequences} The properties of such digital nets and sequences shown in \cite{Dick05} also hold here. For example it was shown there that a digital $(t,\alpha,m,s)$-net is also a digital $(\lceil t \alpha'/\alpha\rceil ,\alpha',m,s)$-net for all $1\le \alpha' \le \alpha$ and every digital $(t,\alpha,s)$-sequence is also a digital $(\lceil t\alpha'/\alpha \rceil,\alpha',s)$-sequence for all $1\le \alpha' \le \alpha$. In the same way we have the following theorem. \begin{theorem}\label{th_prop} Let $P$ be a digital $(t,\alpha,\beta,n\times m,s)$-net over $\mathbb{F}_q$ and let $S$ be a digital $(t,\alpha,\beta, \sigma,s)$-sequence over $\mathbb{F}_q$. Then we have: \begin{enumerate} \item[(i)] $P$ is a digital $(t',\alpha,\beta',n\times m,s)$-net for all $1\le \beta' \le \beta$ and all $t \le t' \le \beta' m$ and $S$ is a digital $(t',\alpha,\beta', \sigma ,s)$-sequence for all $1 \le \beta' \le \beta$ and all $t \le t'$. \item[(ii)] $P$ is a digital $(t',\alpha',\beta',n\times m,s)$-net for all $1\le \alpha' \le n$ where $\beta' = \beta\min(\alpha,\alpha')/\alpha$ and $t' = \lceil t \min(\alpha,\alpha')/\alpha\rceil$ and $S$ is a digital $(t',\alpha',\beta', \sigma,s)$-sequence for all $\alpha' \ge 1$ where $\beta' = \beta \min(\alpha,\alpha')/\alpha$ and where $t' = \lceil t \min(\alpha,\alpha')/\alpha\rceil$. \item[(iii)] Any digital $(t,\alpha,n \times m,s)$-net is a digital $(\lceil t \alpha'/\alpha\rceil ,\alpha',n\times m,s)$-net for all $1\le \alpha' \le \alpha$ and every digital $(t,\alpha,\sigma, s)$-sequence is a digital $(\lceil t\alpha'/\alpha \rceil,\alpha', \sigma,s)$-sequence for all $1\le \alpha' \le \alpha$. 
\item[(iv)] If $C_1,\ldots, C_s \in \mathbb{F}_q^{n\times m}$ are the generating matrices of a digital $(t,\alpha,\beta,n\times m,s)$-net then the matrices $C_1^{(n')}, \ldots, C_s^{(n')}$, where $C_j^{(n')}$ consists of the first $n'$ rows of $C_j$, generate a digital $(t,\alpha,\beta, n'\times m,s)$-net for all $1 \le n' \le n$. \item[(v)] Any digital $(t,\alpha,\beta,\sigma,s)$-sequence is a digital $(t,\alpha,\beta,\sigma',s)$-sequence for all $1 \le \sigma' \le \sigma$. \end{enumerate} \end{theorem} \subsection{Constructions of $(t,\alpha,\beta, n \times m,s)$-nets and $(t,\alpha, \sigma,s)$-sequences}\label{sec_talphacons} In this section we show how explicit examples of $(t,\alpha,\beta, n \times m,s)$-nets and $(t,\alpha,\beta,\sigma, s)$-sequences can be constructed. The idea for the construction is based on the construction method presented in \cite{Dick05}. Let $d \ge 1$ and let $C_1,\ldots, C_{sd}$ be the generating matrices of a digital $(t,m,sd)$-net. Note that many explicit examples of such generating matrices are known, see for example \cite{faure,niesiam,NX,sob67} and the references therein. For the construction of a $(t,\alpha,m,s)$-net any of the above mentioned explicit constructions can be used, but as will be shown below the quality of the $(t,\alpha,m,s)$-net obtained depends on the quality of the underlying digital $(t,m,sd)$-net on which our construction is based. Let $C_j = (c_{j,1},\ldots, c_{j,m})^\top$ for $j = 1,\ldots, sd$, i.e., $c_{j,l}$ are the row vectors of $C_j$. Now let the matrix $C^{(d)}_{j}$ be made of the first rows of the matrices $C_{(j-1)d + 1},\ldots, C_{jd}$, then the second rows of $C_{(j-1)d+1},\ldots, C_{jd}$ and so on. The matrix $C^{(d)}_{j}$ is then a $dm \times m$ matrix, i.e., $C^{(d)}_j = (c^{(d)}_{j,1},\ldots, c^{(d)}_{j,dm})^\top$ where $c^{(d)}_{j,l} = c_{u,v}$ with $l = (v-j)d + u$, $1\le v \le m$ and $(j-1)d < u \le jd$ for $l = 1,\ldots, d m$ and $j = 1,\ldots, s$. The following result is a slight generalization of \cite[Theorem~3]{Dick05} and can be obtained using the same proof technique. \begin{theorem}\label{th_talphabeta} Let $d \ge 1$ be a natural number and let $C_{1},\ldots, C_{sd}$ be the generating matrices of a digital $(t',m,sd)$-net over the finite field $\mathbb{F}_q$ of prime power order $q$. Let $C^{(d)}_{1},\ldots, C^{(d)}_{s}$ be defined as above. Then for any $\alpha \ge 1$ the matrices $C^{(d)}_{1},\ldots, C^{(d)}_{s}$ are generating matrices of a digital $(t,\alpha,\min(1,\alpha/d),dm \times m,s)$-net over $\mathbb{F}_q$ with $$t = \min(\alpha,d)\;t' + \left\lceil \frac{s(d-1) \min(\alpha,d)}{2}\right\rceil.$$ \end{theorem}
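As an illustration of the interleaving just described, the following sketch assembles the matrices $C^{(d)}_1,\ldots, C^{(d)}_s$ from the generating matrices $C_1,\ldots, C_{sd}$ of a classical digital net; it is a direct transcription of the row-by-row construction above, with the matrices represented as integer arrays with entries in $\{0,\ldots,q-1\}$, and makes no claim beyond what Theorem~\ref{th_talphabeta} states.
\begin{verbatim}
import numpy as np

def interleave(C_list, d):
    # from the m x m generating matrices C_1,...,C_{sd} build the
    # dm x m matrices C^{(d)}_1,...,C^{(d)}_s: take the first rows of
    # C_{(j-1)d+1},...,C_{jd}, then their second rows, and so on
    assert len(C_list) % d == 0
    s = len(C_list) // d
    m = C_list[0].shape[1]
    out = []
    for j in range(s):
        block = C_list[j * d:(j + 1) * d]      # C_{(j-1)d+1},...,C_{jd}
        rows = [block[u][v, :] for v in range(m) for u in range(d)]
        out.append(np.vstack(rows))            # a (d*m) x m matrix over F_q
    return out
\end{verbatim}
Feeding the resulting matrices into the point construction sketched after Lemma~\ref{fqwalsum} yields the $q^m$ points of the corresponding higher order digital net.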
The above construction and Theorem~\ref{th_talphabeta} can easily be extended to $(t,\alpha,\beta,\sigma,s)$-sequences. Indeed, let $d \ge 1$ and let $C_1,\ldots, C_{sd}$ be the generating matrices of a digital $(t,sd)$-sequence. Again many explicit generating matrices are known, see for example \cite{faure,niesiam,NX,sob67}. Let $C_j = (c_{j,1},c_{j,2},\ldots)^\top$ for $j = 1,\ldots,sd$, i.e., $c_{j,l}$ are the row vectors of $C_j$. Now let the matrix $C^{(d)}_{j}$ be made of the first rows of the matrices $C_{(j-1)d + 1},\ldots, C_{jd}$, then the second rows of $C_{(j-1)d+1},\ldots, C_{jd}$ and so on, i.e., $$C^{(d)}_{j} = (c_{(j-1)d+1,1},\ldots, c_{jd,1},c_{(j-1)d+1,2},\ldots,c_{jd,2},\ldots)^\top.$$ The following theorem states that the matrices $C^{(d)}_{1},\ldots, C^{(d)}_{s}$ are the generating matrices of a digital $(t,\alpha,\min(1,\alpha/d), d,s)$-sequence, compare with \cite[Theorem~4]{Dick05}. \begin{theorem}\label{th_talphabetaseq} Let $d \ge 1$ be a natural number and let $C_{1},\ldots, C_{sd}$ be the generating matrices of a digital $(t',sd)$-sequence over the finite field $\mathbb{F}_q$ of prime power order $q$. Let $C^{(d)}_{1},\ldots, C^{(d)}_{s}$ be defined as above. Then for any $\alpha \ge 1$ the matrices $C^{(d)}_{1},\ldots, C^{(d)}_{s}$ are generating matrices of a digital $(t,\alpha,\min(1,\alpha/d), d,s)$-sequence over $\mathbb{F}_q$ with $$t = \min(\alpha,d)\;t' + \left\lceil \frac{s(d-1) \min(\alpha,d)}{2}\right\rceil.$$ \end{theorem} The last result shows that $(t,\alpha,\beta, \sigma m \times m,s)$-nets indeed exist for $\beta = 1$ and any $0 < \sigma \le \alpha$ and for $m$ arbitrarily large. We have even shown that digital $(t,\alpha,\beta, \alpha m \times m,s)$-nets exist which are extensible in $m$ and $s$. This can be achieved by using an underlying $(t',sd)$-sequence which is itself extensible in $m$ and $s$. If the $t'$ value of the original $(t',m,sd)$-net or $(t',sd)$-sequence is known explicitly, then we also know the $t$ value of the digital $(t,\alpha,\beta,\alpha m \times m,s)$-net or $(t,\alpha,\beta, \sigma ,s)$-sequence. Furthermore it has also been shown how such digital nets can be constructed in practice. Further results on such sequences are established in \cite{Dick05}. \section{Numerical integration in the Walsh space $\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$}\label{sect_int} In this section we investigate numerical integration in the Walsh space $\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$ using quasi-Monte Carlo rules $$Q_{q^m,s}(f) = \frac{1}{q^m} \sum_{n=0}^{q^m-1} f(\boldsymbol{x}_n),$$ where $\boldsymbol{x}_0,\ldots, \boldsymbol{x}_{q^m-1}$ are the points of a digital $(t,\alpha,\beta,n\times m,s)$-net over $\mathbb{F}_q$. More precisely, we want to approximate the integral $$I_s(f) = \int_{[0,1]^s} f(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}$$ by the quasi-Monte Carlo rule $Q_{q^m,s}(f)$. As a quality measure for our rule we introduce the worst-case error in the next section. \subsection{The worst-case error in the Walsh space $\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$} The worst-case error for the Walsh space~$\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$ using the quasi-Monte Carlo rule $Q_{q^m,s}$ is given by $$e(Q_{q^m,s},\mathcal{E}_{s, q,\vartheta,\boldsymbol{\gamma}}) = \sup_{f \in \mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}} \atop \|f\|_{\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}} \le 1} \left|I_s(f) - Q_{q^m,s}(f)\right|.$$ The initial error is given by $$e(Q_{0,s},\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}) = \sup_{f\in\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}} \atop \|f\|_{\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}}\le 1} \left|I_s(f)\right|.$$
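Before analysing the error, the following small Python sketch (an illustration only; it assumes $q$ is prime and that the points are obtained by the standard digital construction, i.e., the base-$q$ digit vector of the $j$th coordinate of $\boldsymbol{x}_n$ is $C_j$ applied modulo $q$ to the digit vector of $n$) shows how the rule $Q_{q^m,s}(f)$ can be evaluated for given generating matrices $C_1,\ldots,C_s$ with $n$ rows and $m$ columns.

\begin{verbatim}
# Generate the q^m points of the digital net and evaluate the QMC average.
def digital_points(C, q, m):
    s, rows = len(C), len(C[0])
    pts = []
    for n in range(q ** m):
        digits = [(n // q ** i) % q for i in range(m)]   # base-q digits of n
        x = []
        for j in range(s):
            y = [sum(C[j][r][i] * digits[i] for i in range(m)) % q
                 for r in range(rows)]
            x.append(sum(y[r] * q ** (-(r + 1)) for r in range(rows)))
        pts.append(x)
    return pts

def qmc_rule(f, C, q, m):
    pts = digital_points(C, q, m)
    return sum(f(x) for x in pts) / q ** m               # Q_{q^m,s}(f)
\end{verbatim}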
In the following we use digital nets generated by the matrices $C_1,\ldots, C_s$ as quadrature points for the quadrature rule $Q_{q^m,s}$. Let $f \in \mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$. Using Lemma~\ref{fqwalsum} it follows that \begin{eqnarray*} |I_s(f) - Q_{q^m,s}(f)| & = & \left|\sum_{\boldsymbol{k} \in \mathcal{D}} \hat{f}(\boldsymbol{k}) \right| \\ &\le & \sum_{\boldsymbol{k} \in \mathcal{D}} |\hat{f}(\boldsymbol{k})| = \sum_{\emptyset \neq u \subseteq \mathcal{S}} \sum_{\boldsymbol{k}_u \in \mathcal{D}_u^\ast} |\hat{f}(\boldsymbol{k}_u,\boldsymbol{0}_{\mathcal{S}\setminus u})|. \end{eqnarray*} Now we have $|\hat{f}(\boldsymbol{k}_u,\boldsymbol{0}_{\mathcal{S}\setminus u})| \le \gamma_u r_{q,\vartheta}(\boldsymbol{k}_u) \|f\|_{\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}}$ and thus we obtain \begin{equation}\label{eq_bounderror} |I_s(f) - Q_{q^m,s}(f)| \le \|f\|_{\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}} \sum_{\emptyset \neq u \subseteq \mathcal{S}} \gamma_u \sum_{\boldsymbol{k}_u \in \mathcal{D}_u^\ast} r_{q,\vartheta}(\boldsymbol{k}_u). \end{equation} By choosing $\hat{f}(\boldsymbol{k}_u,\boldsymbol{0}_{\mathcal{S}\setminus u}) = \gamma_u r_{q,\vartheta}(\boldsymbol{k}_u)$ for all $u$ and $\boldsymbol{k}_u$ we can also obtain equality in (\ref{eq_bounderror}). Thus we have \begin{equation}\label{wce_eq} e(Q_{q^m,s},\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}) = \sum_{\emptyset \neq u \subseteq\mathcal{S}} \gamma_u \sum_{\boldsymbol{k}_u \in \mathcal{D}_u^\ast} r_{q,\vartheta}(\boldsymbol{k}_u). \end{equation} From the last formula we can now see that essentially a large value of $\min\{\mu_{q,\vartheta}(\boldsymbol{k}): \boldsymbol{k} \in \mathcal{D}\}$ guarantees a small worst-case error. Further it can be shown that \begin{equation}\label{initialerror} e(Q_{0,s},\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}) = \gamma_{\emptyset}.\end{equation} We have shown the following theorem. \begin{theorem} The initial error for multivariate integration in the Walsh space $\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$ is given by (\ref{initialerror}) and the worst-case error for multivariate integration in the Walsh space~$\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$ using a digital net as quadrature points is given by (\ref{wce_eq}). \end{theorem} In the following lemma we establish an upper bound on the sum $\sum_{\boldsymbol{k}_u \in \mathcal{D}_u^\ast} r_{q,\vartheta}(\boldsymbol{k}_u)$ for digital $(t,\alpha,\beta, n\times m,s)$-nets over $\mathbb{F}_q$. The proof is similar to \cite[Lemma~6]{Dick05}. \begin{lemma}\label{lem_boundsum} Let $\vartheta > 1$ be a real number, $q \ge 2$ be a prime power, $C_1,\ldots, C_s \in \mathbb{F}_q^{n\times m}$ be the generating matrices of a digital $(t,\lceil \vartheta \rceil,\beta,n \times m,s)$-net over $\mathbb{F}_q$ with $0 < \beta \le 1$ and let $\mathcal{D}_u^\ast = \mathcal{D}_u^\ast((C_j)_{j\in u})$.
For all $\emptyset \neq u \subseteq \mathcal{S}$ we have: if $\vartheta$ is not an integer it follows that $$\sum_{\boldsymbol{k}_u\in \mathcal{D}_u^\ast} r_{q,\vartheta}(\boldsymbol{k}_u) \le C_{|u|,q,\vartheta} (\beta n - t + \lceil\vartheta\rceil)^{|u| \lceil\vartheta\rceil - 1} q^{-\vartheta\lfloor (\beta n - t)/\lceil\vartheta \rceil \rfloor},$$ where $$C_{|u|,q,\vartheta} = q^{|u|\lceil\vartheta\rceil} ((q-q^{\vartheta-\lfloor\vartheta\rfloor })^{-1} + (1-q^{(1-\vartheta)/\lceil\vartheta\rceil})^{-|u|\lceil\vartheta\rceil})$$ and if $\vartheta$ is an integer it follows that $$\sum_{\boldsymbol{k}_u\in \mathcal{D}_u^\ast} r_{q,\vartheta}(\boldsymbol{k}_u) \le C'_{|u|,q,\vartheta} (\beta n - t + \vartheta)^{|u| \vartheta} q^{-(\beta n - t)},$$ where $$C'_{|u|,q,\vartheta} = q^{|u|\vartheta} (q^{-1} + (1-q^{1/\vartheta-1})^{-|u|\vartheta}).$$ \end{lemma} \begin{proof} To simplify the notation we prove the result only for $u = \mathcal{S}$. For all other subsets the result follows by the same arguments. We first consider the case where $\vartheta > 1$ is not an integer. We partition the set $\mathcal{D}^\ast_{\mathcal{S}}$ into parts where the highest digits of $k_j$ are prescribed and we count the number of solutions of $C_1^\top \vec{k}_1 + \cdots + C_s^\top \vec{k}_s = \vec{0}$. For $j = 1,\ldots, s$ let now $i_{j,\lceil \vartheta \rceil} < \cdots < i_{j,1}$ with $i_{j,1} \ge 1$. Note that we now allow $i_{j,l} < 1$, in which case the contributions of those $i_{j,l}$ are to be ignored. This notation is adopted in order to avoid considering many special cases. Further we write $\boldsymbol{i}_{s,\lceil \vartheta \rceil} = (i_{1,1},\ldots, i_{1,\lceil \vartheta\rceil},\ldots, i_{s,1},\ldots, i_{s,\lceil \vartheta \rceil})$ and define \begin{eqnarray*} \mathcal{D}^\ast_{\mathcal{S}}(\boldsymbol{i}_{s,\lceil\vartheta\rceil}) &=& \{\boldsymbol{k} \in \mathcal{D}^\ast_{\mathcal{S}}: k_j = \lfloor \kappa_{j,1} q^{i_{j,1}-1} + \cdots + \kappa_{j,\lceil\vartheta\rceil} q^{i_{j,\lceil\vartheta\rceil}-1} + l_j\rfloor \\ && \mbox{ with } 0 \le l_j < q^{i_{j,\lceil\vartheta\rceil}-1} \mbox{and } 1 \le \kappa_{j,l} < q \mbox{ for } j = 1,\ldots, s\}, \end{eqnarray*} where $\lfloor\cdot \rfloor$ just means that the contributions of $i_{j,l} < 1$ are to be ignored. Let $\mu(\boldsymbol{i}_{s,\lceil\vartheta\rceil}) = i_{1,1} + \cdots + i_{1,\lceil\vartheta\rceil-1} + (\vartheta - \lfloor\vartheta\rfloor) i_{1,\lceil\vartheta\rceil} + \cdots + i_{s,1} + \cdots + i_{s,\lceil\vartheta\rceil-1} + (\vartheta - \lfloor\vartheta\rfloor) i_{s,\lceil\vartheta\rceil}$. Then we have \begin{eqnarray}\label{sum_Qstar} \sum_{\boldsymbol{k}_\mathcal{S}\in \mathcal{D}_\mathcal{S}^\ast} r_{q,\vartheta}(\boldsymbol{k}_\mathcal{S}) & = & \sum_{i_{1,1}=1}^\infty \cdots \sum_{i_{1,\lceil\vartheta\rceil}= 1}^{i_{1,\lceil\vartheta\rceil-1}-1} \cdots \sum_{i_{s,1}=1}^\infty \cdots \sum_{i_{s,\lceil\vartheta\rceil}=1}^{i_{s,\lceil\vartheta\rceil-1}-1} \frac{| \mathcal{D}^\ast_{S}(\boldsymbol{i}_{s,\lceil\vartheta\rceil})|}{q^{\mu(\boldsymbol{i}_{s,\lceil\vartheta\rceil})}}. \end{eqnarray} Some of the sums above can be empty in which case we just set the corresponding summation index $i_{j,l}=0$. Note that by the $(t,\lceil\vartheta\rceil,\beta,n \times m,s)$-net property we have that $|\mathcal{D}^\ast_{\mathcal{S}}(\boldsymbol{i}_{s,\lceil\vartheta\rceil})| = 0$ as long as $i_{1,1} + \cdots + i_{1,\lceil\vartheta\rceil} + \cdots + i_{s,1} + \cdots + i_{s,\lceil\vartheta\rceil} \le \beta n - t$. 
Hence let now $0\le i_{1,1}, \ldots, i_{s,\lceil\vartheta\rceil}$ be given such that $i_{1,1},\ldots, i_{s,1}\ge 1$, $i_{j,\lceil\vartheta\rceil}< \cdots < i_{j,1}$ for $j = 1,\ldots, s$ and where if $i_{j,l} < 1$ we set $i_{j,l}=0$ (in which case we also have $i_{j,l+1}= i_{j,l+2} = \ldots = 0$ and the inequalities $i_{j,l} > \cdots > i_{j,\lceil\vartheta\rceil}$ are ignored) and $i_{1,1} + \cdots + i_{1,\lceil\vartheta\rceil} + \cdots + i_{s,1} + \cdots + i_{s,\lceil\vartheta\rceil} > \beta n - t$. We now need to estimate $|\mathcal{D}^\ast_{\mathcal{S}}(\boldsymbol{i}_{s,\lceil\vartheta\rceil})|$, that is we need to count the number of $\boldsymbol{k} \in \mathcal{D}^\ast_{\mathcal{S}}$ with $k_j = \lfloor \kappa_{j,1} q^{i_{j,1}-1} + \cdots + \kappa_{j,\lceil\vartheta\rceil}q^{i_{j,\lceil\vartheta\rceil}-1} + l_j\rfloor$. There are at most $(q-1)^{\lceil\vartheta\rceil s}$ choices for $\kappa_{1,1},\ldots, \kappa_{s,\lceil\vartheta\rceil}$ (we write at most because if $i_{j,l} < 1$ then the corresponding $\kappa_{j,l}$ does not have any effect and therefore need not be included). Let now $1\le \kappa_{1,1},\ldots, \kappa_{s,\lceil\vartheta\rceil} < q$ be given and define $$\vec{g} = \kappa_{1,1} c_{1,i_{1,1}}^\top + \cdots + \kappa_{1,\lceil\vartheta\rceil} c_{1,i_{1,\lceil\vartheta\rceil}}^\top + \cdots + \kappa_{s,1} c_{s,i_{s,1}}^\top + \cdots + \kappa_{s,\lceil\vartheta\rceil} c_{s,i_{s,\lceil\vartheta\rceil}}^\top,$$ where we set $c^\top_{j,l}=0$ if $l < 1$ or $l > n$. Further let $$B = (c_{1,1}^\top,\ldots, c_{1,i_{1,\lceil\vartheta\rceil}-1}^\top,\ldots, c_{s,1}^\top,\ldots, c_{s,i_{s,\lceil\vartheta\rceil}-1}^\top).$$ Now the task is to count the number of solutions $\vec{l}$ of $B \vec{l} = \vec{g}$. As long as the columns of $B$ are linearly independent the number of solutions can be at most $1$. By the $(t,\lceil\vartheta\rceil,\beta,n \times m,s)$-net property this is certainly the case if (we write $(x)_+ = \max(x,0)$) \begin{eqnarray*} (i_{1,\lceil\vartheta\rceil}-1)_+ + \cdots + (i_{1,\lceil\vartheta\rceil}-\lceil\vartheta\rceil)_+ + \cdots &&\\ + (i_{s,\lceil\vartheta\rceil}-1)_+ + \cdots + (i_{s,\lceil\vartheta\rceil}-\lceil\vartheta\rceil)_+ &\le & \lceil\vartheta\rceil (i_{1,\lceil\vartheta\rceil} + \cdots + i_{s,\lceil\vartheta\rceil}) \\ & \le & \beta n - t, \end{eqnarray*} that is, as long as $$i_{1,\lceil\vartheta\rceil} + \cdots + i_{s,\lceil\vartheta\rceil} \le \frac{\beta n - t}{\lceil\vartheta\rceil}.$$ Let now $i_{1,\lceil\vartheta\rceil} + \cdots + i_{s,\lceil\vartheta\rceil} > \frac{\beta n - t}{\lceil\vartheta\rceil}$. Then by considering the rank of the matrix $B$ and the dimension of the space of solutions of $B\vec{l} = \vec{0}$ it follows that the number of solutions of $B\vec{l} = \vec{g}$ is at most $q^{i_{1,\lceil\vartheta\rceil} + \cdots + i_{s,\lceil\vartheta\rceil}-\lfloor(\beta n - t)/\lceil\vartheta\rceil\rfloor}$.
Thus we have $$|\mathcal{D}^\ast_{\mathcal{S}}(\boldsymbol{i}_{s,\lceil\vartheta\rceil})| = 0$$ if $\sum_{j=1}^s \sum_{l = 1}^{\lceil\vartheta\rceil} i_{j,l} \le \beta n -t$, we have $$|\mathcal{D}^\ast_{\mathcal{S}}(\boldsymbol{i}_{s,\lceil\vartheta\rceil})| = (q-1)^{s\lceil\vartheta\rceil}$$ if $\sum_{j=1}^s \sum_{l=1}^{\lceil\vartheta\rceil} i_{j,l} > \beta n -t$ and $\sum_{j=1}^s i_{j,\lceil\vartheta\rceil} \le \frac{\beta n -t}{\lceil\vartheta\rceil}$ and finally we have $$ |\mathcal{D}^\ast_{\mathcal{S}}(\boldsymbol{i}_{s,\lceil\vartheta\rceil})| \le (q-1)^{s \lceil\vartheta\rceil} q^{i_{1,\lceil\vartheta\rceil} + \cdots + i_{s,\lceil\vartheta\rceil} - \lfloor (\beta n -t)/\lceil\vartheta\rceil\rfloor}$$ if $\sum_{j=1}^s\sum_{l=1}^{\lceil\vartheta\rceil} i_{j,l} > \beta n -t$ and $\sum_{j=1}^s i_{j,\lceil\vartheta\rceil} > \frac{\beta n -t}{\lceil\vartheta\rceil}$. We now estimate the sum (\ref{sum_Qstar}). Let $S_1$ be the sum in (\ref{sum_Qstar}) where $ i_{1,1} + \cdots + i_{s,\lceil\vartheta\rceil} > \beta n -t$ and $i_{1,\lceil\vartheta\rceil} + \cdots + i_{s,\lceil\vartheta\rceil} \le \frac{\beta n -t}{\lceil\vartheta\rceil}$. Let $l_1 = i_{1,1} + \cdots + i_{1,\lceil\vartheta\rceil-1} + \cdots + i_{s,1} + \cdots + i_{s,\lceil\vartheta\rceil-1}$ and let $l_2 = i_{1,\lceil\vartheta\rceil} + \cdots + i_{s,\lceil\vartheta\rceil}$. Let $A(l_1 + l_2)$ denote the number of admissible choices of $i_{1,1},\ldots,i_{s,\lceil\vartheta\rceil}$ such that $l_1 + l_2 = i_{1,1} + \cdots + i_{s,\lceil\vartheta\rceil}$. Then we have $$S_1 = (q-1)^{s \lceil\vartheta\rceil} \sum_{l_2 = 0}^{\lfloor \frac{\beta n -t}{\lceil\vartheta\rceil}\rfloor} \frac{1}{q^{(\vartheta - \lfloor \vartheta\rfloor) l_2}} \sum_{l_1 = \beta n -t +1 - l_2}^{\infty} \frac{A(l_1+l_2)}{q^{l_1}}.$$ We have $A(l_1 + l_2) \le {l_1+ l_2 +s\lceil\vartheta\rceil -1 \choose s\lceil\vartheta\rceil - 1}$ and hence we obtain $$S_1 \le (q-1)^{s\lceil\vartheta\rceil} \sum_{l_2 = 0}^{\lfloor \frac{\beta n -t}{\lceil\vartheta\rceil} \rfloor} \frac{1}{q^{(\vartheta - \lfloor \vartheta\rfloor) l_2}} \sum_{l_1 = \beta n -t +1 - l_2}^{\infty} \frac{1}{q^{l_1}} {l_1+l_2 +s\lceil\vartheta\rceil - 1 \choose s \lceil\vartheta\rceil -1}.$$ From a result by Matou\v{s}ek~\cite[Lemma~2.18]{matou}, see also \cite[Lemma~6]{DP05}, we have \begin{eqnarray*} \lefteqn{(q-1)^{s\lceil\vartheta\rceil} \sum_{l_1 = \beta n -t +1 - l_2}^{\infty} \frac{1}{q^{l_1}} {l_1+l_2 +s\lceil\vartheta\rceil - 1 \choose s \lceil\vartheta\rceil -1} } \qquad\qquad\qquad\qquad\qquad \\ & \le & q^{l_2 - \beta n + t - 1 + s\lceil\vartheta\rceil} {\beta n - t + s\lceil\vartheta\rceil \choose s \lceil\vartheta\rceil - 1} \end{eqnarray*} and further we have $$ \sum_{l_2 = 0}^{\lfloor \frac{\beta n -t}{\lceil\vartheta\rceil}\rfloor} \frac{q^{l_2}}{q^{(\vartheta - \lfloor \vartheta\rfloor) l_2}} = \sum_{l_2 = 0}^{\lfloor \frac{\beta n -t}{\lceil\vartheta\rceil}\rfloor} q^{l_2(\lceil\vartheta\rceil - \vartheta)} = \frac{q^{(\lceil\vartheta\rceil - \vartheta)(\lfloor (\beta n - t)/\lceil\vartheta\rceil \rfloor + 1)}-1}{q^{\lceil\vartheta\rceil - \vartheta}-1}.$$ Thus we obtain \begin{eqnarray*} S_1 &\le & \frac{q^{(\lceil\vartheta\rceil - \vartheta)(\lfloor (\beta n - t)/\lceil\vartheta\rceil \rfloor + 1)}-1}{q^{\lceil\vartheta\rceil - \vartheta}-1} q^{- \beta n + t - 1 + s\lceil\vartheta\rceil} {\beta n - t + s\lceil\vartheta\rceil \choose s \lceil\vartheta\rceil - 1} \\ & \le & \frac{q^{s\lceil\vartheta\rceil-1}}{1-q^{\vartheta-
\lceil\vartheta\rceil}} {\beta n - t + s\lceil\vartheta\rceil \choose s \lceil\vartheta\rceil - 1} q^{-\vartheta\lfloor (\beta n - t)/\lceil\vartheta \rceil \rfloor}. \end{eqnarray*} Let $S_2$ be the part of (\ref{sum_Qstar}) for which $ i_{1,1} + \cdots + i_{s,\lceil\vartheta\rceil} > \beta n -t$ and $i_{1,\lceil\vartheta\rceil} + \cdots + i_{s,\lceil\vartheta\rceil} > \frac{\beta n -t}{\lceil\vartheta\rceil}$, i.e., we have \begin{eqnarray*} S_2 & \le & (q-1)^{s\lceil\vartheta\rceil} \sum_{i_{1,1}=1}^\infty \cdots \sum_{i_{1,\lceil\vartheta\rceil}= 1}^{i_{1,\lceil\vartheta\rceil-1}-1} \cdots \\ && \sum_{i_{s,1}=1}^\infty \cdots \sum_{i_{s,\lceil\vartheta\rceil}=1}^{i_{s,\lceil\vartheta\rceil-1}-1} \frac{q^{- \lfloor (\beta n - t)/\lceil\vartheta\rceil \rfloor} q^{(i_{1,\lceil\vartheta\rceil} + \cdots + i_{s,\lceil\vartheta\rceil}) (\lceil\vartheta\rceil -\vartheta)}}{q^{i_{1,1} + \cdots + i_{1,\lceil\vartheta\rceil-1} + \cdots + i_{s,1} + \cdots + i_{s,\lceil\vartheta\rceil-1}}}, \end{eqnarray*} where we have the additional conditions $ i_{1,1} + \cdots + i_{s,\lceil\vartheta\rceil} > \beta n -t$ and $i_{1,\lceil\vartheta\rceil} + \cdots + i_{s,\lceil\vartheta\rceil} > \frac{\beta n -t}{\lceil\vartheta\rceil}$. As above let $l_1 = i_{1,1} + \cdots + i_{1,\lceil\vartheta\rceil-1} + \cdots + i_{s,1} + \cdots + i_{s,\lceil\vartheta\rceil-1}$ and let $l_2 = i_{1,\lceil\vartheta\rceil} + \cdots + i_{s,\lceil\vartheta\rceil}$. Let $A(l_1 + l_2)$ denote the number of admissible choices of $i_{1,1},\ldots,i_{s,\lceil\vartheta\rceil}$ such that $l_1 + l_2 = i_{1,1} + \cdots + i_{s,\lceil\vartheta\rceil}$. Note that $l_1 > \lfloor \vartheta \rfloor l_2$. Then we have $A(l_1 + l_2) \le {l_1+ l_2 +s\lceil\vartheta\rceil -1 \choose s\lceil\vartheta\rceil - 1}$ and hence we obtain \begin{eqnarray*} S_2 & \le & (q-1)^{s\lceil\vartheta\rceil} q^{- \lfloor (\beta n - t)/\lceil\vartheta\rceil \rfloor} \\ && \sum_{l_2 = \lfloor \frac{\beta n -t}{\lceil\vartheta\rceil} \rfloor +1}^\infty q^{(\lceil\vartheta\rceil - \vartheta) l_2} \sum_{l_1 = \lfloor\vartheta\rfloor l_2+1}^{\infty} \frac{1}{q^{l_1}} {l_1+l_2 +s\lceil\vartheta\rceil - 1 \choose s \lceil\vartheta\rceil -1} \\ & = & (q-1)^{s\lceil\vartheta\rceil} q^{- \lfloor (\beta n - t)/\lceil\vartheta\rceil \rfloor} \\ && \sum_{l_2 = \lfloor \frac{\beta n -t}{\lceil\vartheta\rceil} \rfloor +1}^\infty \sum_{l_1 = 0}^{\infty} q^{-l_1+l_2-1-l_2\vartheta} {l_1+l_2 + \lfloor \vartheta\rfloor l_2 -1 +s\lceil\vartheta\rceil - 1 \choose s \lceil\vartheta\rceil -1}. 
\end{eqnarray*} Using again Matou\v{s}ek~\cite[Lemma~2.18]{matou}, see also \cite[Lemma~6]{DP05}, we have \begin{eqnarray*} \lefteqn{ (q-1)^{s\lceil\vartheta\rceil} \sum_{l_1 = 0}^{\infty} q^{-l_1+l_2-1-l_2\vartheta} {l_1+l_2 + \lfloor \vartheta \rfloor l_2 -1 +s\lceil\vartheta\rceil - 1 \choose s \lceil\vartheta\rceil -1} } \qquad\qquad\qquad\qquad\qquad \\ & \le & q^{s\lceil\vartheta\rceil} q^{l_2(1-\vartheta)-1} {l_2 \lceil \vartheta\rceil -1 +s \lceil\vartheta\rceil - 1 \choose s \lceil\vartheta\rceil -1} \end{eqnarray*} and also \begin{eqnarray*} \lefteqn{ q^{s\lceil\vartheta\rceil - 1 - \lfloor (\beta n - t)/\lceil\vartheta\rceil \rfloor} \sum_{l_2 = \lfloor \frac{\beta n -t}{\lceil\vartheta\rceil} \rfloor +1}^\infty q^{l_2(1-\vartheta)} {l_2 \lceil \vartheta \rceil - 1 +s \lceil\vartheta\rceil - 1 \choose s \lceil\vartheta\rceil -1} } \\ & \le & q^{s\lceil\vartheta\rceil - 1 - \lfloor (\beta n - t)/\lceil\vartheta\rceil \rfloor} \sum_{l_2 = \beta n -t}^\infty q^{l_2(1-\vartheta)/\lceil\vartheta\rceil} {l_2 + \lceil\vartheta\rceil - 1 +s \lceil\vartheta\rceil - 1 \choose s \lceil\vartheta\rceil -1} \\ & \le & q^{s\lceil\vartheta\rceil} (1-q^{(1-\vartheta)/\lceil\vartheta\rceil})^{-s\lceil\vartheta\rceil} {\beta n - t + \lceil\vartheta\rceil-2 + s \lceil\vartheta\rceil \choose s \lceil\vartheta\rceil -1} q^{-\vartheta (\beta n -t)/\lceil\vartheta\rceil}. \end{eqnarray*} Hence we have \begin{equation*} S_{2} \le q^{s\lceil\vartheta\rceil} (1-q^{(1-\vartheta)/\lceil\vartheta\rceil})^{-s\lceil\vartheta\rceil} {\beta n - t + \lceil\vartheta\rceil- 2 + s \lceil\vartheta\rceil \choose s \lceil\vartheta\rceil -1} q^{-\vartheta (\beta n -t)/\lceil\vartheta\rceil}. \end{equation*} Note that we have $\sum_{\boldsymbol{k}_\mathcal{S}\in \mathcal{D}_\mathcal{S}^\ast} r_{q,\vartheta}(\boldsymbol{k}_\mathcal{S}) = S_1 + S_2$. Let $a \ge 1$ and $b\ge 0$ be integers; then we have $${a + b \choose b} = \prod_{i=1}^b \left(1 + \frac{a}{i}\right) \le (1+a)^b.$$ Therefore we obtain $$S_1 \le \frac{q^{s\lceil\vartheta\rceil-1}}{1-q^{\vartheta- \lceil\vartheta\rceil}} (\beta n - t + 2)^{s \lceil\vartheta\rceil - 1} q^{-\vartheta\lfloor (\beta n - t)/\lceil\vartheta \rceil \rfloor}$$ and $$S_{2} \le q^{s\lceil\vartheta\rceil} (1-q^{(1-\vartheta)/\lceil\vartheta\rceil})^{-s\lceil\vartheta\rceil} (\beta n - t + \lceil\vartheta\rceil)^{ s \lceil\vartheta\rceil -1} q^{-\vartheta (\beta n -t)/\lceil\vartheta\rceil}.$$ Thus we have $$\sum_{\boldsymbol{k}_\mathcal{S}\in \mathcal{D}_\mathcal{S}^\ast} r_{q,\vartheta}(\boldsymbol{k}_\mathcal{S}) \le C_{s,q,\vartheta} (\beta n - t + \lceil\vartheta\rceil)^{s \lceil\vartheta\rceil - 1} q^{-\vartheta\lfloor (\beta n - t)/\lceil\vartheta \rceil \rfloor},$$ where $$C_{s,q,\vartheta} = q^{s\lceil\vartheta\rceil} ((q-q^{\vartheta-\lfloor\vartheta\rfloor })^{-1} + (1-q^{(1-\vartheta)/\lceil\vartheta\rceil})^{-s\lceil\vartheta\rceil}).$$ The result follows for the case $0 < \vartheta - \lfloor \vartheta \rfloor < 1$. Let now $\vartheta > 1$ be an integer.
Then using the same arguments as above it can be shown that $$S_1 \le (\beta n - t + 2)^{s \vartheta} q^{-(\beta n - t)-1+s\vartheta}$$ and $$S_2 \le q^{s\vartheta} (1-q^{1/\vartheta-1})^{-s\vartheta} (\beta n - t + \vartheta)^{ s \vartheta -1} q^{-(\beta n -t)}.$$ Thus we have $$\sum_{\boldsymbol{k}_\mathcal{S}\in \mathcal{D}_\mathcal{S}^\ast} r_{q,\vartheta}(\boldsymbol{k}_\mathcal{S}) \le C'_{s,q,\vartheta} (\beta n - t + \vartheta)^{s \vartheta} q^{-(\beta n - t)},$$ where $$C'_{s,q,\vartheta} = q^{s\vartheta} (q^{-1} + (1-q^{1/\vartheta-1})^{-s\vartheta}).$$ The result now follows. \end{proof} \begin{remark}\rm We note that the above lemma does not hold for $\beta > 1$ in general. Indeed, take for example $u = \{1\}$, then $\boldsymbol{k}_u = (k_1)$ and choose $k_1 = q^{n}$. Then the digit vector of the first $n$ digits of $q^n$ is $(0,\ldots,0)^\top$, hence $C_1^\top \vec{k}_1 = \vec{0}$ and therefore $\boldsymbol{k}_{(1)} \in \mathcal{D}^\ast_{(1)}$. Thus $$\sum_{\boldsymbol{k}_{(1)} \in \mathcal{D}^\ast_{(1)}} r_{q,\vartheta}(\boldsymbol{k}_{(1)}) \ge q^{-n-1}$$ and hence a counterexample can be obtained for some choices of $n,\beta, \vartheta$. In \cite{Dick05} we did allow $\beta > 1$, but therein we had the additional assumption that the functions are periodic. In this case we were able to show that the Walsh coefficients $r_{q,\alpha}(\boldsymbol{k},\boldsymbol{l}) = \prod_{j=1}^s r_{q,\alpha}(k_j,l_j)$ of the reproducing kernel also satisfy the additional property that $r_{q,\alpha}(q^m k_j,q^m k_j) = r_{q,\alpha}(q^m k_j) = q^{-2\alpha m}r_{q,\alpha}(k_j,k_j)$ for all $k_j, m \in \mathbb{N}$, see \cite[Lemma~15]{Dick05}. Similarly, if we also assumed here that $r_{q,\vartheta}(q^n k) = q^{-\vartheta n} r_{q,\vartheta}(k)$ and $r_{q,\vartheta}(k)$ given as above if $q \nmid k$, then the above counterexample would fail as then $r_{q,\vartheta}(q^n) = r_{q,\vartheta}(1 \cdot q^n) = q^{-\vartheta(n+1)} r_{q,\vartheta}(1)$. \end{remark} Using the above lemma we can now obtain an upper bound on the worst-case error. \begin{theorem}\label{th_errorbounddignet} Let $\vartheta > 1$ be a real number and $q \ge 2$ be a prime power. The worst-case error for multivariate integration in the Walsh space~$\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$ using a digital $(t,\lceil\vartheta\rceil,\beta,n \times m, s)$-net over $\mathbb{F}_q$, with $0 < \beta \le 1$, as quadrature points is for non-integers $\vartheta$ bounded by $$e(Q_{q^m,s},\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}) \le q^{-\vartheta\lfloor (\beta n - t)/\lceil\vartheta \rceil \rfloor} \sum_{\emptyset \neq u \subseteq\mathcal{S}} \gamma_u C_{|u|,q,\vartheta} (\beta n - t + \lceil\vartheta\rceil)^{|u| \lceil\vartheta\rceil - 1},$$ where $$C_{|u|,q,\vartheta} = q^{|u|\lceil\vartheta\rceil} ((q-q^{\vartheta-\lfloor\vartheta\rfloor })^{-1} + (1-q^{(1-\vartheta)/\lceil\vartheta\rceil})^{-|u|\lceil\vartheta\rceil}),$$ and if $\vartheta$ is an integer, the worst-case error is bounded by $$e(Q_{q^m,s},\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}) \le q^{-(\beta n - t)} \sum_{\emptyset \neq u \subseteq\mathcal{S}} \gamma_u C'_{|u|,q,\vartheta} (\beta n - t + \vartheta)^{|u| \vartheta},$$ where $$C'_{|u|,q,\vartheta} = q^{|u|\vartheta} (q^{-1} + (1-q^{1/\vartheta-1})^{-|u|\vartheta}).$$ \end{theorem} As a direct consequence of Corollary~\ref{cor_smoothfwalsh} we obtain the following result. \begin{corollary}\label{cor_errorbounddignet} Let $\delta \ge 1$ be an integer, $0 < \lambda \le 1$, and $q \ge 2$ be a prime power.
Then for any function $f:[0,1)^s \rightarrow \mathbb{R}$ whose partial mixed derivatives up to order $\delta$ exist, it follows that the integration error using a digital $(t,\delta+1,\beta,n \times m, s)$-net over $\mathbb{F}_q$ with $0 < \beta \le 1$ as quadrature points is for $0 < \lambda < 1$ bounded by \begin{eqnarray*} |I_s(f) - Q_{q^m,s}(f)| &\le & q^{-(\delta+\lambda)\lfloor (\beta n - t)/(\delta+1) \rfloor} C_{\delta,s,q,\boldsymbol{\gamma}} N_{\delta,\lambda,\boldsymbol{\gamma}}(f) \\ && \sum_{\emptyset \neq u \subseteq\mathcal{S}} \gamma_u C_{|u|,q,\delta+\lambda} (\beta n - t + \delta+1)^{|u| (\delta+1) - 1} \end{eqnarray*} and for $\lambda = 1$ the integration error is bounded by \begin{eqnarray*} |I_s(f) - Q_{q^m,s}(f)| & \le & q^{-(\beta n - t)} C_{\delta,s,q,\boldsymbol{\gamma}} N_{\delta,\lambda,\boldsymbol{\gamma}}(f) \\ && \sum_{\emptyset \neq u \subseteq\mathcal{S}} \gamma_u C'_{|u|,q,\delta+1} (\beta n - t + \delta+1)^{|u| (\delta+1)}, \end{eqnarray*} where the constant $C_{\delta,s,q,\boldsymbol{\gamma}}$ is given in Corollary~\ref{cor_smoothfwalsh} and the constants $C_{|u|,q,\delta+\lambda}$ and $C'_{|u|,q,\delta+1}$ are given in Theorem~\ref{th_errorbounddignet}. \end{corollary} Explicit constructions of digital $(t,\alpha,\min(1, \alpha/d), d m \times m,s)$-nets over $\mathbb{F}_q$ for all prime powers $q$, integers $\alpha, d, m, s > 1$ are given in Section~\ref{sec_talphacons}. By choosing $d = \alpha = \lceil\vartheta\rceil = \delta + 1$, by Theorem~\ref{th_errorbounddignet} and Corollary~\ref{cor_errorbounddignet} we obtain a convergence of $\mathcal{O}(q^{-\vartheta m} m^{s \lceil\vartheta\rceil + 1})$, which is optimal even for the smooth functions contained in the Walsh space~$\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$, see \cite{shar} where a lower bound for smooth periodic functions was shown. \begin{remark}\rm In \cite[Remark~4]{Dick05} it was noted that if $m = n$ and $\beta > \alpha$ the $t$-value must grow with $m$ and hence the restriction $\beta \le \alpha$ was added. A similar argument shows that in our case the $t$-value must grow with $n$ if $\beta n > \alpha m$: Theorem~\ref{th_errorbounddignet} shows a convergence of $\mathcal{O}(q^{-\beta n + t})$, but the best possible convergence rate is $q^{-\alpha m}$; hence the restriction $\beta \le \alpha m/n$ was added. \end{remark} In case the smoothness of the function is not known, our constructions adjust themselves automatically, up to a certain degree, in the following way: for the construction of the digital net we choose some value of $d \ge 1$ and construct a digital $(t,\alpha,\min(1,\alpha/d), dm \times m, s)$-net or a digital $(t,\alpha,\min(1,\alpha/d),d,s)$-sequence for all $\alpha \ge 1$. The values $\delta \ge 1$ and $0 < \lambda \le 1$ determine the real smoothness of the function, which we now assume is not known. The value of $\alpha$ is the smoothness analog for the digital net, i.e., we need to choose $\alpha = \delta + 1$. First assume that $\delta + \lambda \le d$; then $\min(1,\alpha/d) = (\delta+ 1)/d$ and therefore we have $\beta = (\delta+1)/d$. As $n = d m$ it follows that $\beta n = (\delta+1) m$ and therefore Corollary~\ref{cor_errorbounddignet} shows that we achieve a convergence of $\mathcal{O}(q^{-(\delta+\lambda) m} m^{s(\delta+1)+1})$, which is optimal.
Now assume on the other hand that $\delta + \lambda > d$; then $\min(1,\alpha/d) = 1$ and therefore $\beta = 1$. Again we have $n = d m$ and hence $\beta n = d m$. In this case Corollary~\ref{cor_errorbounddignet} shows that our construction achieves a convergence of $\mathcal{O}(q^{-d m} m^{s(\delta+1)+1})$. Note that numerical integration of functions with less smoothness, i.e., for example functions with partial mixed derivatives up to degree 1 in $\mathcal{L}_{2}([0,1)^s)$ or functions with bounded variation, has been considered in many papers and monographs, see for example \cite{DKPS,DP05,DSWW1,DSWW2,KN,niesiam,SW98,sob67}. In the notation used above, those results are essentially concerned with the case where $\delta = 0$ and $\lambda = 1$; hence the results here are a direct continuation of what was previously known. For $d = 1$ the construction of digital nets proposed here obviously yields digital $(t,m,s)$-nets and $(t,s)$-sequences as defined, for example, in \cite{niesiam}. In view of Corollary~\ref{cor_errorbounddignet} and the explanation which followed, it is hence not surprising that the classical examples and theory (see for example \cite{DKPS,DP05,hlawka,koksma,KN,niesiam,PDP,sob67}) only yielded a convergence of $\mathcal{O}(q^{m(-1+\varepsilon)})$ for any $\varepsilon > 0$ (the $\varepsilon$ here is used to hide the powers of $m$). Note that the worst-case error in the Walsh space~$\mathcal{E}_{s,q,\vartheta,\boldsymbol{\gamma}}$ is invariant with respect to a digital shift (see \cite{DP05}), hence Corollary~\ref{cor_errorbounddignet} also holds for digitally shifted digital nets. Thus, if one wants to use randomized digital nets, one can also use randomly digitally shifted digital nets. The root mean square worst-case error in this case is of course bounded by the bound in Corollary~\ref{cor_errorbounddignet}, as that bound holds for any digital shift; in fact our result is even stronger, since it shows that the bound of Corollary~\ref{cor_errorbounddignet} holds even for the worst digital shift. In this sense there is no bad digital shift in our situation. Other, more sophisticated scrambling methods which do not destroy the essential properties of the point set can be used as well (for example a digital shift of depth $m$, see \cite{DP05b,matou}); see \cite{owenscr} for some ideas in this direction. \end{document}
\begin{document} \title{Conditional infimum and recovery of monotone processes} \author{Martin Larsson\thanks{Department of Mathematics, ETH Zurich, R\"amistrasse 101, CH-8092, Zurich, Switzerland, [email protected].}\ \thanks{ The author would like to thank Nicole El Karoui, Pietro Siorpaes, and Josef Teichmann for useful comments and fruitful discussions. Financial support by the Swiss National Science Foundation (SNF) under grant 205121\textunderscore163425 is gratefully acknowledged.}} \maketitle \begin{abstract} Monotone processes, just like martingales, can often be recovered from their final values. Examples include running maxima of supermartingales, as well as running maxima, local times, and various integral functionals of sticky processes such as fractional Brownian motion. An interesting corollary is that any positive local martingale can be reconstructed from its final value and its global maximum. These results rely on the notion of conditional infimum, which is developed for a large class of complete lattices. The framework is sufficiently general to handle also more exotic examples, such as the process of convex hulls of certain multidimensional processes, and the process of sites visited by a random walk. \\[2ex] \noindent{\textbf {Keywords:} Conditional infimum, complete lattice, sticky processes, max-martingale, maxingale.} \\[2ex] \noindent{\textbf {MSC2010 classifications:} 60G48, 60G20, 06B23.} \end{abstract} \section{Introduction} Let $M=(M_t)_{t=0,\ldots,T}$ be a discrete time martingale defined on a filtered probability space $(\Omega,{\mathcal F},\{{\mathcal F}_t\}_{t=0,\ldots,T},{\mathbb P})$, where we suppose $\Omega$ is finite. Let $\overline M_t=\max_{s\le t}M_s$ be the running maximum. Just as the martingale $M$ can be recovered from its final value $M_T$ via the formula $M_t={\mathbb E}[M_T\mid{\mathcal F}_t]$, the running maximum process $\overline M$ can be recovered from its final value $\overline M_T$. In fact, for any $t\in\{0,\ldots,T\}$ and non-null $\omega\in\Omega$, we claim that \begin{equation} \label{intro1} \overline M_t(\omega)=\min_{\omega'\in A}\overline M_T(\omega'), \end{equation} where $A$ is the atom of ${\mathcal F}_t$ containing $\omega$. To see this, note that $\overline M_t(\omega)\le \min_{\omega'\in A}\overline M_T(\omega')$ since $\overline M$ is nondecreasing and $\overline M_t$, being ${\mathcal F}_t$-measurable, is constant on $A$. If the inequality were strict, we would have $M_t\le \overline M_t<\overline M_T$ on the ${\mathcal F}_t$-measurable event $A$, contradicting the martingale property: on $A$, $M$ would be sure to experience a strict increase between $t$ and $T$. Thus \eqref{intro1} must hold. The right-hand side of the identity \eqref{intro1} is the {\em conditional infimum of $\overline M_T$ given ${\mathcal F}_t$}, evaluated at~$\omega$, and the identity itself expresses an ``inf-martingale'' property of $\overline M$. The goal of the present paper is to develop these ideas in some generality. For a large class of complete lattices $S$, we show that the conditional infimum of an $S$-valued random element $X$ given a sub-$\sigma$-algebra ${\mathcal E}$ is well-defined; we denote it by $\bigwedge[X\mid{\mathcal E}]$. In the presence of a filtration one is led to consider ``inf-martingales'' $\bigwedge[X\mid{\mathcal F}_t]$, $t\ge0$, and a key message of this paper is that many naturally occurring nondecreasing processes turn out to have this property. They can then be recovered from their final value.
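The following minimal Python sketch (an illustration only, not part of the paper's argument; the two-period $\pm1$ walk is a hypothetical example) verifies the identity \eqref{intro1} on a finite probability space by taking the minimum of $\overline M_T$ over the atoms of ${\mathcal F}_t$.

\begin{verbatim}
# Finite-Omega illustration of the recovery formula for the running maximum.
# M is a symmetric +-1 random walk (a martingale under the uniform measure,
# so every path is non-null); the atom of F_t containing a path consists of
# all paths agreeing with it up to time t.
from itertools import product

T = 2
Omega = list(product([-1, 1], repeat=T))
M = {w: [sum(w[:t]) for t in range(T + 1)] for w in Omega}
runmax = {w: [max(M[w][:t + 1]) for t in range(T + 1)] for w in Omega}

def atom(w, t):
    return [v for v in Omega if v[:t] == w[:t]]

for w in Omega:
    for t in range(T + 1):
        # conditional infimum of the final running maximum given F_t, at w
        assert min(runmax[v][T] for v in atom(w, t)) == runmax[w][t]
\end{verbatim}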
Examples include running maxima of supermartingales and, more generally, of processes that become supermartingales after an equivalent change of measure (Proposition~\ref{P:maxM}). Running maxima, local times, and various integral functionals of so-called sticky processes also have this property (Propositions~\ref{P:f(X) run max}, \ref{P:X U K sticky}, \ref{P:int f sticky}, and their corollaries). More exotic examples include the process of convex hulls of certain multidimensional processes, and the process of sites visited by a random walk (Propositions~\ref{P:convex hull sticky} and~\ref{P:RV sticky}). These results are derived from a simple ``no-arbitrage'' principle for nondecreasing lattice-valued processes (Theorem~\ref{T:NCR}). In the martingale context, an interesting corollary is that any positive local martingale can be recovered from its final value and its global maximum (Proposition~\ref{P:maxM X}). The general theory covers a rather broad class of measurable complete lattices $S$. One only needs measurability of the ``triangle'' $\{(x,y)\colon x\le y\}$ in the product space $S\times S$, measurability of the countable supremum and infimum maps, and existence of a strictly increasing measurable map $S\to{\mathbb R}$. These hypotheses are stated precisely in \ref{A1}--\ref{A3} below. Apart from the extended real line $[-\infty,\infty]$, we prove that this covers the family of closed convex subsets of ${\mathbb R}^d$, as well as the family $2^{{\mathcal X}}$ of subsets of a countable set ${\mathcal X}$ (Theorems~\ref{T:convex sets} and~\ref{T:countable set}, respectively). Conditional infima (and suprema) for real-valued random variables have appeared previously in the literature, along with real-valued ``inf-martingales'' (or ``sup-martingales'', also called max-martingales or maxingales); see for instance \cite{bar_car_jen_03,elk_mez_08}. We extend these constructions to general complete lattices with the additional structural properties mentioned above. A related but different notion of maxingale has been used by \cite{puh_97,puh_99,puh_01} and \cite{fle_04} in the context of idempotent probability with applications to large deviations theory and control theory. The notion of stickiness, which is closely related to the developments in the present paper, plays an important role in mathematical finance; see e.g.~\citet{GRS:08,BPS:15,RS:16}. Conditional infima in lattices of sets have also been useful in problems from multidimensional martingale optimal transport; see \cite{OS:17}, who make use of our Example~\ref{E:CI 2} below. The rest of the paper is organized as follows. After ending this introduction with some remarks on notation, we turn to Section~\ref{S:CI}, where the general theory of conditional infima in complete lattices is developed, including analogues of the martingale regularization and optional stopping theorems. Section~\ref{S:sticky} discusses sticky processes and their relations to conditional infima. Applications to martingale theory are given in Section~\ref{S:martingales}, including a general version of \eqref{intro1}. Examples involving processes of convex hulls and processes of subsets of a countable set are given in Section~\ref{S:further}. Section~\ref{S:closed subsets} develops several general results, mainly of a measure-theoretic nature, for lattices of closed sets. These results should be of independent interest. \subsection{Remarks on notation} Throughout this paper, $(\Omega,{\mathcal F},{\mathbb P})$ is a probability space.
Relations between random quantities are understood in the almost sure sense, unless stated otherwise. The probability space is endowed with a filtration ${\mathbb F}=\{{\mathcal F}_t\}_{t\ge 0}$ of sub-$\sigma$-algebras of ${\mathcal F}$, and we set ${\mathcal F}_\infty=\bigvee_{t\ge0}{\mathcal F}_t$. The filtration ${\mathbb F}$ need not be augmented with the ${\mathbb P}$-nullsets, but unless stated otherwise it is assumed throughout the paper that ${\mathbb F}$ is right-continuous. It is sometimes convenient to work with the order-theoretic indicator $\chi_A$ of a subset $A\subseteq\Omega$, defined by \[ \chi_A(\omega) = \begin{cases} -\infty, & \omega\in A\\ +\infty, & \omega\notin A. \end{cases} \] The meaning of the symbols $+\infty$ and $-\infty$ is discussed below. \section{Conditional infimum} \label{S:CI} Throughout this section, let $(S,\le)$ be a complete lattice. That is, $S$ is a partially ordered set such that any subset $A\subseteq S$ has a least upper bound, denoted by $\sup A$. This implies that the greatest lower bound $\inf A$ also exists, and that $S$ contains a greatest element $+\infty$ and smallest element $-\infty$. We write $x\vee y$ for $\sup\{x,y\}$ and $x\wedge y$ for $\inf\{x,y\}$, and use $x<y$ as shorthand for $x\le y$ and $x\ne y$. We assume that $S$ is equipped with a $\sigma$-algebra ${\mathcal S}$ that satisfies the following two properties: \begin{enumerate}[label={\rm(A\arabic*)}] \item\label{A1} The set $\{(x,y)\in S^2\colon x\le y\}$ lies in the product $\sigma$-algebra ${\mathcal S}^2={\mathcal S}\otimes{\mathcal S}$. \item\label{A2} The countable supremum and infimum maps \[ (x_1,x_2,\ldots) \mapsto \sup\{x_1,x_2,\ldots\} \qquad\text{and}\qquad (x_1,x_2,\ldots) \mapsto \inf\{x_1,x_2,\ldots\} \] are measurable, where the set of sequences $S^\infty=\{(x_1,x_2,\ldots)\colon x_n\in S \text{ for all $n$}\}$ is equipped with the product $\sigma$-algebra ${\mathcal S}^\infty=\bigotimes_{n=1}^\infty{\mathcal S}$. \end{enumerate} These properties ensure that random elements of~$S$ (i.e., measurable maps $\Omega\to S$) behave well. Indeed, let $X_n$, $n\in{\mathbb N}$, be random elements of $S$. Assumption~\ref{A1} implies that $\{X_1\le X_2\}\in{\mathcal F}$, and hence also $\{X_1<X_2\}\in{\mathcal F}$.\footnote{Indeed, $\{X_1<X_2\}$ equals $\{X_1\le X_2\}\setminus(\{X_1\le X_2\}\cap\{X_2\le X_1\})$ and is therefore measurable.} Assumption~\ref{A2} implies that $\sup_n X_n$ and $\inf_n X_n$ are again random elements of $S$. This will be used repeatedly in what follows. Finally, we make the following assumption, where, of course, strictly increasing means that $x<y$ implies $\phi(x)<\phi(y)$: \begin{enumerate}[label={\rm(A\arabic*)},resume] \item\label{A3} There exists a strictly increasing measurable map $\phi\colon S\to{\mathbb R}$. \end{enumerate} \begin{remark} In some cases, naturally appearing lattices are not complete, but only {\em Dedekind complete}: suprema (infima) are guaranteed to exist only for subsets $A\subseteq S$ that are bounded above (below). In such cases one can extend the given lattice to a complete lattice satisfying \ref{A1}--\ref{A3}, provided these properties hold in the given lattice; see Proposition~\ref{P:ext Dedekind}. \end{remark} There are plenty of examples of complete lattices which satisfy \ref{A1}--\ref{A3}, some of which are discussed below.
The first example below concerns the familiar (extended) real-valued case, while the subsequent examples involve more complicated complete lattices. \begin{example} \label{E:CI 1} $\overline{\mathbb R}=[-\infty,\infty]$ together with the usual order and the Borel $\sigma$-algebra is a complete lattice which clearly satisfies \ref{A1}--\ref{A3}. \end{example} \begin{example} \label{E:CI 2_new} Let ${\mathcal X}$ be a countable set, and let $S=2^{\mathcal X}$ be the collection of all subsets of~${\mathcal X}$ partially ordered by set inclusion. Supremum is set union, $-\infty=\emptyset$, and $+\infty={\mathcal X}$. With these operations $S$ is a complete lattice, and it admits a $\sigma$-algebra ${\mathcal S}$ such that \ref{A1}--\ref{A3} are satisfied; see Theorem~\ref{T:countable set}. \end{example} \begin{example} \label{E:CI 2} Let $S={\rm CO}({\mathbb R}^d)$ be the collection of all closed convex subsets $C\subseteq{\mathbb R}^d$ partially ordered by set inclusion. For a subset $A\subseteq S$ one has $\sup A=\overline{\mathrm{conv}} (\bigcup_{C\in A}C)$, the closed convex hull of the union of all $C\in A$, and $\inf A=\bigcap_{C\in A}C$. Moreover, $-\infty=\emptyset$ and $+\infty={\mathbb R}^d$. With these operations $S$ is a complete lattice, and it admits a $\sigma$-algebra ${\mathcal S}$ such that \ref{A1}--\ref{A3} are satisfied; see Theorem~\ref{T:convex sets}. \end{example} The following lemma is a consequence of the existence of a strictly increasing measurable real-valued map. We will use it to define the conditional infimum. \begin{lemma} \label{L:esssup} Let ${\mathcal L}$ be a set of random elements of $S$ closed under countable suprema. Then ${\mathcal L}$ contains a maximal element. That is, there exists $X^*\in{\mathcal L}$ such that $X\le X^*$ almost surely for every $X\in{\mathcal L}$. The maximal element $X^*$ is unique up to almost sure equivalence. \end{lemma} \begin{proof} The uniqueness statement is obvious since any other maximal element $X^{**}\in{\mathcal L}$ satisfies $X^*\le X^{**}\le X^*$ almost surely. To prove existence, let $\phi\colon S\to{\mathbb R}$ be a strictly increasing measurable map, without loss of generality taken to be bounded, and define \[ \alpha = \sup\{ {\mathbb E}[\phi(X)] \colon X\in{\mathcal L}\}. \] Let $(X_n)_{n\in{\mathbb N}}$ be a maximizing sequence and define $X^*=\sup_n X_n \in{\mathcal L}$. Then \[ \alpha\ge{\mathbb E}[\phi(X^*)]\ge{\mathbb E}[\phi(X_n)] \to \alpha, \] so ${\mathbb E}[\phi(X^*)]=\alpha$. Consider any $X\in{\mathcal L}$ and assume for contradiction that ${\mathbb P}(X\not\le X^*)>0$. Then the random element $Y=X^*\vee X \in {\mathcal L}$ satisfies $X^*\le Y$ and ${\mathbb P}(X^*<Y)>0$. Therefore, since $\phi$ is strictly increasing, \[ \alpha \ge {\mathbb E}[\phi(Y)] > {\mathbb E}[\phi(X^*)] = \alpha, \] a contradiction. Thus $X\le X^*$ almost surely, as desired. \end{proof} Although it will not be used in this paper, let us mention that Lemma~\ref{L:esssup} implies the existence of essential suprema. Given a set ${\mathcal L}$ of random elements of $S$, a random element $X^*$ is the {\em essential supremum of ${\mathcal L}$} (necessarily a.s.~unique) if $X^*$ a.s.~dominates ${\mathcal L}$ and satisfies $X^*\le Y$ a.s.~for any random element $Y$ that also a.s.~dominates ${\mathcal L}$. \begin{corollary} Let ${\mathcal L}$ be any set of random elements of $S$.
Then ${\mathcal L}$ admits an essential supremum $X^*$, which can be expressed as the supremum of countably many elements of ${\mathcal L}$. \end{corollary} \begin{proof} Let $\overline{\mathcal L}$ be the set of all countable suprema $\sup_nX_n$ of elements $X_n\in{\mathcal L}$. This set is closed under countable suprema, and thus admits a maximal element $X^*$ by Lemma~\ref{L:esssup}. Moreover, if $Y$ dominates ${\mathcal L}$, it also dominates $\overline{\mathcal L}$, whence $X^*\le Y$. Finally, being an element of $\overline{\mathcal L}$, $X^*$ is the supremum of countably many elements of ${\mathcal L}$. \end{proof} The following definition introduces the key object of interest in this paper, the conditional infimum. Lemma~\ref{L:esssup} implies that the conditional infimum always exists and is unique up to almost sure equivalence. \begin{definition} \label{D:cond inf} Let $X$ be a random element of $S$, and let ${\mathcal E}\subseteq{\mathcal F}$ be a sub-$\sigma$-algebra. The {\em conditional infimum of $X$ given ${\mathcal E}$}, denoted by $\bigwedge[X\mid{\mathcal E}]$, is the maximal element of \[ \{Z:\Omega\to S \text{ such that $Z$ is ${\mathcal E}$-measurable and $Z\le X$ almost surely} \}. \] That is, $\bigwedge[X\mid{\mathcal E}]$ is the greatest ${\mathcal E}$-measurable lower bound on $X$. \end{definition} The following lemma collects some basic properties of the conditional infimum, which are immediate consequences of the definition. These properties are well-known in the literature, at least in the case $S=\overline{\mathbb R}$; see \cite{bar_car_jen_03}. \begin{lemma}[Properties of the conditional infimum] \label{L:PCI} Let $X$ and $Y$ be random elements of~$S$, and let ${\mathcal E}$ and ${\mathcal G}$ be sub-$\sigma$-algebras of ${\mathcal F}$. Then the following properties hold: \begin{enumerate} \item\label{PCI:mon I}If ${\mathcal E}\subseteq{\mathcal G}$ then $\bigwedge[X\mid{\mathcal E}]\le\bigwedge[X\mid{\mathcal G}]$. \item\label{PCI:mon II}If $X\le Y$ then $\bigwedge[X\mid{\mathcal E}]\le\bigwedge[Y\mid{\mathcal E}]$. \item\label{PCI:tower}If ${\mathcal E}\subseteq{\mathcal G}$, then $\bigwedge[\,\bigwedge[X\mid{\mathcal G}]\mid{\mathcal E}] = \bigwedge[X\mid{\mathcal E}]$. \item\label{PCI:cont I}Let $\{{\mathcal E}_n\}_{n\in{\mathbb N}}$ be a non-increasing sequence of sub-$\sigma$-algebras and suppose ${\mathcal E} = \bigcap_{n\in{\mathbb N}}{\mathcal E}_n$. Then $\bigwedge[X\mid{\mathcal E}] = \inf_{n\in{\mathbb N}} \bigwedge[X\mid{\mathcal E}_n]$. \item\label{PCI:cont II} Let $\{X_n\}_{n\in{\mathbb N}}$ be a sequence of random elements of~$S$. Then $\bigwedge[\inf_{n\in{\mathbb N}}X_n\mid{\mathcal E}] = \inf_{n\in{\mathbb N}} \bigwedge[X_n\mid{\mathcal E}]$. \item\label{PCI:max linear} If $Y$ is ${\mathcal E}$-measurable and $\le$ is a total order, then $\bigwedge[X\vee Y\mid{\mathcal E}] = \bigwedge[X\mid{\mathcal E}] \vee Y$. \end{enumerate} \end{lemma} \begin{proof} \ref{PCI:mon I}: $\bigwedge[X\mid{\mathcal E}]$ is ${\mathcal G}$-measurable and dominated by $X$. \ref{PCI:mon II}: Any lower bound of $X$ is also a lower bound on $Y$. \ref{PCI:tower}: By monotonicity, $\bigwedge[\,\bigwedge[X\mid{\mathcal G}]\mid{\mathcal E}]\le\bigwedge[X\mid{\mathcal E}]\le X$. Moreover, if $Z\le X$ is ${\mathcal E}$-measurable, then it is also ${\mathcal G}$-measurable, whence $Z\le\bigwedge[X\mid{\mathcal G}]$.
Thus $Z=\bigwedge[Z\mid{\mathcal E}]\le \bigwedge[\,\bigwedge[X\mid{\mathcal G}]\mid{\mathcal E}]$. \ref{PCI:cont I}: Since ${\mathcal E}_n$ is non-increasing in $n$, \ref{PCI:mon I} yields $\inf_{n\in{\mathbb N}} \bigwedge[X\mid{\mathcal E}_n]=\inf_{n\ge m} \bigwedge[X\mid{\mathcal E}_n]$ for each $m$. Thus $\inf_{n\in{\mathbb N}} \bigwedge[X\mid{\mathcal E}_n]$ is ${\mathcal E}$-measurable. Moreover, it dominates any ${\mathcal E}$-measurable~$Z\le X$. \ref{PCI:cont II}: $\bigwedge[X_n\mid{\mathcal E}]$ is a lower bound on $X_n$, whence $\inf_{n\in{\mathbb N}} \bigwedge[X_n\mid{\mathcal E}]$ is a lower bound on $\inf_{n\in{\mathbb N}}X_n$. Thus $\inf_{n\in{\mathbb N}} \bigwedge[X_n\mid{\mathcal E}]\le \bigwedge[\inf_{n\in{\mathbb N}}X_n\mid{\mathcal E}]$. The reverse inequality follows from~\ref{PCI:mon II}. \ref{PCI:max linear}: Set $X'=\bigwedge[X\mid{\mathcal E}]$. Then $X'\le X$, hence $X'\vee Y\le X\vee Y$. It remains to pick an arbitrary ${\mathcal E}$-measurable $Z\le X\vee Y$ and show that $Z\le X'\vee Y$. On $\{Y<Z\}$ one has $Y<Z\le X\vee Y$ and hence $X\vee Y=X$. Thus \[ Z\wedge \chi_{\{Y<Z\}^c} \le (X\vee Y) \wedge \chi_{\{Y<Z\}^c} = X \wedge \chi_{\{Y<Z\}^c} \le X. \] The left-hand side is ${\mathcal E}$-measurable, so $Z\wedge \chi_{\{Y<Z\}^c}\le X'$ by definition of $X'$. It follows that $Z\le X'\vee Y$ on $\{Y<Z\}$. Since $\le$ is a total order, $\{Y<Z\}^c=\{Z\le Y\}$, so that $Z\le X'\vee Y$ also on this set. Thus $Z\le X'\vee Y$, as required. \end{proof} We now consider $S$-valued stochastic processes $V=(V_t)_{t\ge0}$ adapted to the right-continuous filtration~${\mathbb F}$. A process $V$ with nondecreasing paths is called {\em right-continuous} if it satisfies \[ V_t(\omega) = \inf_{s>t} V_s(\omega) \quad\text{for all}\quad (\omega,t)\in\Omega\times{\mathbb R}_+. \] This amounts to a slight abuse of terminology, since $S$ need not have any topological structure. Given a random element $X$, one can consider the family $V=(V_t)_{t\ge0}$ of random variables $V_t=\bigwedge[X\mid{\mathcal F}_t]$. In view of Lemma~\ref{L:PCI}\ref{PCI:mon I}, $V_t$ is non-decreasing in $t$; however, at this stage it is only defined up to a nullset that may depend on $t$. The following result confirms that one can choose a regular version. \begin{lemma}[Right-continuous version] \label{L:RC version} Let $X$ be a random element of $S$. Then there exists an adapted nondecreasing right-continuous $S$-valued process $V=(V_t)_{t\ge0}$ such that $V_t=\bigwedge[X\mid{\mathcal F}_t]$ for all $t\ge0$. The process $V$ is unique up to evanescence. \end{lemma} \begin{proof} Fix a dense countable subset $D\subset{\mathbb R}_+$, and let $V^0_t$ be a version of $\bigwedge[X\mid{\mathcal F}_t]$ for each $t\in D$. For each $t\in {\mathbb R}_+$, define \[ H_t = \bigcap_{u>t} H^0_u, \qquad H^0_t = \bigcap_{\substack{r,s\in D\\ r<s\le t}} \left\{ \omega\colon V^0_r(\omega) \le V^0_s(\omega) \right\}. \] Thus $H_t$ is the set of $\omega$ such that the map $s\mapsto V^0_s(\omega)$ is nondecreasing on $D\cap[0,t+\varepsilon]$ for some $\varepsilon>0$. One has ${\mathbb P}(H_t)=1$ by Lemma~\ref{L:PCI}\ref{PCI:mon I}, as well as $H_t\in{\mathcal F}_t$ by right-continuity of ${\mathbb F}$. Define $V=(V_t)_{t\ge 0}$ by \[ V_t(\omega) = \begin{cases} \inf_{s\ge t,\,s\in D} V^0_s(\omega) & \omega\in H_t, \\ +\infty & \omega\notin H_t.\end{cases} \] It follows that $V$ is adapted, nondecreasing, and right-continuous.
Furthermore, Lemma~\ref{L:PCI}\ref{PCI:cont I} and right-continuity of ${\mathbb F}$ yield $V_t=\inf_{s\ge t,\,s\in D} V^0_s = \bigwedge[X\mid {\mathcal F}_t]$. The uniqueness statement follows from the almost sure uniqueness of each $V_t$ together with right-continuity. \end{proof} \begin{lemma}[Optional stopping] \label{L:optional sampling} Let $X$ be a random element of $S$ and let $V=(V_t)_{t\ge0}$ be the regular version of $V_t=\bigwedge[X\mid{\mathcal F}_t]$. Then \[ V_\tau = \bigwedge[X\mid {\mathcal F}_\tau ] \] for every stopping time $\tau$. \end{lemma} \begin{proof} It suffices to prove the result for $\tau$ taking finitely many values. Indeed, in the general case one has $\lim_{m\to\infty}\tau_m=\tau$ for some non-increasing sequence of stopping times~$\tau_m$ taking finitely many values. Lemma~\ref{L:PCI}\ref{PCI:cont I} and right-continuity of ${\mathbb F}$ and $V$ then yield the result. We therefore suppose $\tau=\sum_n t_n\bm 1_{A_n}$ for finitely many distinct $t_n\in[0,\infty]$ and pairwise disjoint sets $A_n\in{\mathcal F}_{t_n}$ forming a partition of $\Omega$. Let $Y$ be any ${\mathcal F}_\tau$-measurable random element of $S$ with $Y\le X$. We must show that $Y\le V_\tau$. To this end, define the random elements \[ Y_n = \begin{cases} Y &\text{on $A_n$} \\ -\infty &\text{on $A_n^c$.} \end{cases} \] For any $B\in{\mathcal S}$ one has $\{Y_n\in B\} = \left(\{Y\in B\}\cap A_n\right) \cup \left(\{-\infty\in B\}\cap A_n^c\right)$. This event lies in ${\mathcal F}_{t_n}$ since $\{Y\in B\}\cap A_n=\{Y\in B\}\cap\{\tau=t_n\}\in{\mathcal F}_{t_n}$ by ${\mathcal F}_\tau$-measurability of $Y$, and since $A_n^c=\{\tau\ne t_n\}\in{\mathcal F}_{t_n}$ due to the fact that $\tau$ is a stopping time. Consequently $Y_n$ is ${\mathcal F}_{t_n}$-measurable and satisfies $Y_n\le Y\le X$, so by definition of the conditional infimum we have $Y_n\le\bigwedge[X\mid{\mathcal F}_{t_n}]=V_{t_n}$. Therefore, $Y = \inf_n (Y_n\vee\chi_{A_n}) \le \inf_n (V_{t_n}\vee\chi_{A_n}) = V_\tau$, as required. \end{proof} \begin{example} It is not true in general that $V_{t-}=\bigwedge[X\mid{\mathcal F}_{t-}]$. For example, suppose $S=\overline{\mathbb R}$. Let $W$ be standard Brownian motion and ${\mathbb F}$ the right-continuous filtration it generates. Set $X=|W_1|$ and let $V$ be the regular version of $V_t=\bigwedge[X\mid{\mathcal F}_t]$. Then $V_t=0$ for all $t<1$, but since ${\mathcal F}_{1-}={\mathcal F}_1$ one has $\bigwedge[X\mid{\mathcal F}_{1-}]=X>0$. \end{example} The following theorem is the main result of this section. It provides equivalent conditions for when a monotone process can be recovered from its final value by taking conditional infima. \begin{theorem}[Recovery of monotone processes] \label{T:NCR} Let $U=(U_t)_{t\ge0}$ be an adapted nondecreasing right-continuous $S$-valued process, and define $U_\infty=\sup_{t\ge0}U_t$. The following conditions are equivalent, where the regular version of $\bigwedge[U_\infty\mid{\mathcal F}_t]$ is understood: \begin{enumerate} \item\label{T:NCR:1} $U_t = \bigwedge[U_\infty\mid{\mathcal F}_t]$ for all $t\ge0$; \item\label{T:NCR:2} Any stopping time $\tau$ with $U_\tau<Y$ on $\{\tau<\infty\}$ for some ${\mathcal F}_\tau$-measurable $S$-valued random variable $Y\le U_\infty$ satisfies ${\mathbb P}(\tau<\infty)=0$.
\item\label{T:NCR:3} ${\mathbb P}(Y\le U_\infty \mid{\mathcal F}_\tau)<1$ holds on $\{U_\tau<+\infty\}$ for every stopping time $\tau$ and every ${\mathcal F}_\tau$-measurable $S$-valued random variable $Y$ with $U_\tau<Y$ on $\{U_\tau<+\infty\}$. \end{enumerate} \end{theorem} Condition~\ref{T:NCR:2} of Theorem~\ref{T:NCR} excludes sure improvements. Indeed, if the condition fails for some stopping time $\tau$, then on the positive probability event $\{\tau<\infty\}$, one has $U_\tau <Y\le U_\infty$, where $Y$ is ${\mathcal F}_\tau$-measurable. At time $\tau$ one is therefore guaranteed that $U$ will increase in the future by an amount that is ``bounded away from zero''. In economic terms, supposing $U$ is real-valued to fix ideas, one can think of a situation where exchanging the current value $U_\tau$ for the final outcome $U_\infty$ is guaranteed to result in an ${\mathcal F}_\tau$-measurable gain of at least $Y-U_\tau>0$. On the other hand, if condition~\ref{T:NCR:2} is satisfied, then there is no nontrivial ${\mathcal F}_\tau$-measurable lower bound on the gain. In this sense, \ref{T:NCR:2} is reminiscent of the no-arbitrage type conditions appearing in mathematical finance. This analogy is brought further by the equivalent characterization~\ref{T:NCR:1}, which can be thought of as a martingale condition. In contrast to the correspondence between no arbitrage and martingales, however, Theorem~\ref{T:NCR} does not involve any equivalent changes of probability measure. This is because the conditional infimum only depends on the probability measure through its nullsets, which carries over to the ``martingale'' condition~\ref{T:NCR:1}. Both~\ref{T:NCR:1} and~\ref{T:NCR:2} are thus invariant with respect to equivalent measure changes. The equivalent property \ref{T:NCR:3} is similar to~\ref{T:NCR:2}, but looks more convoluted. The reason for stating it is that it is closely related to the notion of {\em stickiness} for real-valued increasing processes. In fact, \ref{T:NCR:3} may be viewed as a natural generalization of the stickiness property to processes on $[0,\infty)$ with values in a lattice $S$ which satisfies the assumptions~\ref{A1}--\ref{A3}. Sticky processes are discussed in Section~\ref{S:sticky}. \begin{proof}[Proof of Theorem~\ref{T:NCR}] \ref{T:NCR:1} $\Rightarrow$ \ref{T:NCR:2}: Pick a stopping time $\tau$ and an ${\mathcal F}_\tau$-measurable random variable $Y\le U_\infty$ such that $U_\tau<Y$ on $\{\tau<\infty\}$. In particular, $Y\le\bigwedge[U_\infty\mid{\mathcal F}_\tau]$. Together with~\ref{T:NCR:1} and Lemma~\ref{L:optional sampling}, this yields \[ \bigwedge[U_\infty\mid{\mathcal F}_\tau] = U_\tau < Y \le \bigwedge[U_\infty\mid{\mathcal F}_\tau] \] on $\{\tau<\infty\}$. Thus ${\mathbb P}(\tau<\infty)=0$ as required, showing that~\ref{T:NCR:2} holds. \ref{T:NCR:2} $\Rightarrow$ \ref{T:NCR:3}: Pick a stopping time $\tau$ and an ${\mathcal F}_\tau$-measurable random variable $Y$ with $Y\le U_\infty$ and $U_\tau<Y$ on $\{U_\tau<\infty\}$. Define $A=\{{\mathbb P}(Y\le U_\infty\mid{\mathcal F}_\tau)=1\}\cap\{U_\tau<\infty\}$. We must show that ${\mathbb P}(A)=0$. To this end, define the stopping time \[ \sigma = \tau\bm 1_A + \infty \bm 1_{A^c} \] and the ${\mathcal F}_\sigma$-measurable random variable \[ Z = (Y\vee \chi_A)\wedge (U_\tau\vee \chi_{A^c}) = \begin{cases} Y & \text{on $A$}\\ U_\tau &\text{on $A^c$.}\end{cases} \] Since $Y\le U_\infty$ on $A$, we have $Z\le U_\infty$.
Moreover, since $\{\sigma<\infty\}\subseteq A$, we have $U_\tau<Z$ on $\{\sigma<\infty\}$. Thus~\ref{T:NCR:2} implies ${\mathbb P}(\sigma<\infty)=0$. Therefore, using also that $A\cap\{\tau=\infty\}$ is a nullset since $Y\le U_\infty=U_\tau<Y$ there, we obtain \[ {\mathbb P}(A) = {\mathbb P}(A\cap\{\tau<\infty\}) = {\mathbb P}(\sigma<\infty) = 0, \] as required. \ref{T:NCR:3} ${\mathbb R}ightarrow$ \ref{T:NCR:1}: Pick $t\ge0$ and let $A=\{U_t<\bigwedge[U_\infty\mid{\mathbb F}cal_t]\}$. Define $Y=\bigwedge[U_\infty\mid{\mathbb F}cal_t] \vee \chi_{A^c}$. Then $Y$ is ${\mathbb F}cal_t$-measurable, with $U_t<Y$ on $A$ and $Y=+\infty$ on $A^c$, hence $U_t<Y$ on $\{U_t<+\infty\}$. Moreover, $Y\le U_\infty$ on $A$, hence ${\mathbb P}(Y\le U_\infty\mid{\mathbb F}cal_t)=1$ on $A\subseteq\{U_t<+\infty\}$. By \ref{T:NCR:3}, this forces ${\mathbb P}(A)=0$. Thus~\ref{T:NCR:1} holds. {\rm e}nd{proof} \section{Sticky processes} \label{S:sticky} In this section we apply the theory of Section~\ref{S:CI} with the complete lattice $\overline{\mathbb R}=[-\infty,\infty]$ in Example~\ref{E:CI 1}. We are thus dealing with scalar (i.e., extended real-valued) non-decreasing processes and conditional infima of scalar random variables. In this context there is a close connection between condition~\ref{T:NCR:3} of Theorem~\ref{T:NCR} and the notion of {{\rm e}m stickiness}. Throughout this section, $X$ denotes a c\`adl\`ag adapted process with values in a given separable metric space $({\mathcal X},d)$, which is equipped with its Borel $\sigma$-algebra. \begin{definition} \label{D:sticky} We call $X$ {{\rm e}m globally sticky} if ${\mathbb P}(\sup_{t\in[\tau,\infty)}d(X_t,X_\tau) \le \varepsilon \mid {\mathbb F}cal_\tau) > 0$ for every stopping time $\tau$ and every strictly positive ${\mathbb F}cal_\tau$-measurable random variable $\varepsilon$. We call $X$ {{\rm e}m sticky} if the stopped process $X^T=(X_{t\wedge T})_{t\ge0}$ is globally sticky for every $T\in{\mathbb R}_+$. {\rm e}nd{definition} Note that on the event $\{\tau=\infty\}$, the supremum in this definition is taken over the empty set, and therefore equals $-\infty$ by convention. Thus the conditional probability is equal to one on this event. Furthermore, we never have to evaluate the possibly undefined quantity~$X_\infty$. \begin{remark} The terminology of Definition~\ref{D:sticky} is consistent with the existing literature, where stickiness is generally defined for process on a bounded time interval $[0,T]$. In our setting it is more natural to work with process on $[0,\infty)$, which makes the notion of global stickiness convenient. {\rm e}nd{remark} A wide variety of processes are sticky. For example, any one-dimensional regular strong Markov process is sticky, see \citet[Proposition~3.1]{G:06}. Moreover, any process with conditional full support is sticky; see \citet{GRS:08,BPS:15}. This includes most L\'evy processes, large classes of solutions of stochastic differential equations, processes like skew Brownian motion, as well as non-Markovian non-semimartingales like fractional Brownian motion. We will return to the conditional full support property in connection with Proposition~\ref{P:int f sticky} below. Continuous functions of sticky processes are sticky, and stickiness is preserved under bounded time changes. \citet{RS:16} provide further examples and references, and we develop some additional results in this direction below. 
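As a quick numerical illustration of Definition~\ref{D:sticky} (this aside is not used in the sequel; it is only a rough Monte Carlo sketch with arbitrarily chosen parameters), one can estimate the conditional probability appearing there for standard Brownian motion, the deterministic stopping time $\tau=0$, and a deterministic $\varepsilon$, on a bounded horizon. Stickiness predicts a strictly positive value however small $\varepsilon$ is; the discretized maximum below only approximates the true supremum, so the estimate is merely indicative.
\begin{verbatim}
import numpy as np

# Monte Carlo estimate of P( sup_{t <= T} |W_t - W_0| <= eps ) for standard
# Brownian motion W started at 0; stickiness predicts a strictly positive
# value for every eps > 0.  All parameter choices are arbitrary.
rng = np.random.default_rng(seed=1)
T, eps = 1.0, 0.5
n_paths, n_steps = 20_000, 500
dt = T / n_steps

increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)               # W on the time grid
stays_close = np.max(np.abs(paths), axis=1) <= eps  # grid proxy for the sup
print("estimated probability:", stays_close.mean())
\end{verbatim}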
For a non-decreasing ${\mathbb R}$-valued process $U$, global stickiness reduces to the condition
\begin{equation} \label{eq:U sticky}
{\mathbb P}(U_\infty \le U_\tau + \varepsilon \mid {\mathcal F}_\tau) > 0 \text{ for any stopping time $\tau$ and ${\mathcal F}_\tau$-measurable $\varepsilon>0$,}
\end{equation}
where $U_\infty=\lim_{t\to\infty}U_t$. This immediately yields the following corollary of Theorem~\ref{T:NCR}, which explains the relevance of stickiness in the present context.

\begin{corollary} \label{C:sticky}
An adapted non-decreasing right-continuous ${\mathbb R}$-valued process $U$ satisfies $U_t=\bigwedge[U_\infty\mid{\mathcal F}_t]$ for all $t\ge0$ if and only if it is globally sticky.
\end{corollary}

\begin{proof}
View $U$ as taking values in $S=\overline{\mathbb R}$. By Theorem~\ref{T:NCR}, the equality $U_t=\bigwedge[U_\infty\mid{\mathcal F}_t]$ holds for all $t\in{\mathbb R}_+$ if and only if ${\mathbb P}(U_\tau + \varepsilon \le U_\infty\mid{\mathcal F}_\tau)<1$ holds on $\{U_\tau<\infty\}$ for every stopping time $\tau$ and every ${\mathcal F}_\tau$-measurable $\varepsilon>0$. Applying this with~$\varepsilon/2$ in place of~$\varepsilon$, one sees that the weak inequality can be replaced by a strict inequality. Therefore the inequality ${\mathbb P}(U_\tau + \varepsilon \le U_\infty\mid{\mathcal F}_\tau)<1$ can be replaced by ${\mathbb P}(U_\infty \le U_\tau + \varepsilon \mid{\mathcal F}_\tau)>0$. Consequently, since $U$ is actually finite-valued, the above statement is equivalent to the stickiness property~\eqref{eq:U sticky}.
\end{proof}

\begin{remark}
Inspired by Corollary~\ref{C:sticky}, one may use condition \ref{T:NCR:3} of Theorem~\ref{T:NCR} to {\em define} global stickiness for nondecreasing processes valued in a complete lattice satisfying the assumptions~\ref{A1}--\ref{A3}.
\end{remark}

Corollary~\ref{C:sticky} is useful because non-decreasing functionals of sticky processes are often sticky, which means that there is an abundance of non-decreasing sticky processes. We now provide a number of results in this direction.

\begin{proposition} \label{P:f(X) run max}
Let $U_t=\sup_{s\le t}f(X_s)$, where $f:{\mathcal X}\to{\mathbb R}$ is a continuous map. If $X$ is (globally) sticky, then $U$ is also (globally) sticky.
\end{proposition}

\begin{proof}
Assume that $X$ is globally sticky. The result for $X$ sticky but not globally sticky follows by replacing $X$ by $X^T$ for any $T<\infty$ in the argument below. Fix any stopping time $\tau$ and ${\mathcal F}_\tau$-measurable random variable $\varepsilon>0$. On the event $\{\tau<\infty\}$, the random set $C=f^{-1}((-\infty,f(X_\tau)+\varepsilon))$ is open and contains $X_\tau$. One can find a strictly positive ${\mathcal F}_\tau$-measurable random variable $\varepsilon'$ such that, on $\{\tau<\infty\}$, $C$ contains the closed ball of radius $\varepsilon'$ centered at $X_\tau$. Consequently,
\begin{align*}
{\mathbb P}(U_\infty \le U_\tau+\varepsilon \mid {\mathcal F}_\tau) &\ge{\mathbb P}(f(X_t) \le f(X_\tau) + \varepsilon \text{ for all $t\in[\tau,\infty)$} \mid {\mathcal F}_\tau) \\
&\ge {\mathbb P}( d(X_t,X_\tau) \le \varepsilon' \text{ for all $t\in[\tau,\infty)$} \mid{\mathcal F}_\tau) \\
&>0,
\end{align*}
using that $X$ is globally sticky. Thus $U$ is also globally sticky.
\end{proof}

\begin{corollary}
If $X$ is real-valued and (globally) sticky, then $\overline X_t=\max_{0\le s\le t}X_s$ and $X^*_t=\max_{0\le s\le t}|X_s|$ are also (globally) sticky.
{\rm e}nd{corollary} The next result looks somewhat abstract, but has useful consequences. In particular, it implies that the local time of a sticky semimartingale is again sticky; see Corollary~\ref{C:sticky local time} below. We let $d(x,K)=\inf\{d(x,y)\colon y\in K\}$ denote the distance from a point $x\in {\mathcal X}$ to a subset $K\subseteq{\mathcal X}$. \begin{proposition} \label{P:X U K sticky} Let $K\subseteq {\mathcal X}$ be a closed subset, and let $U$ be a nondecreasing right-continuous adapted process that satisfies the following property for almost every~$\omega$: \begin{equation}\label{P:X U K sticky cond} \text{\parbox{0.8\textwidth}{If $[t_1,t_2]$ is an interval such that either $X(\omega)\in K$ on $[t_1,t_2)$, or\\ $d(X(\omega),K)\ge a$ on $[t_1,t_2]$ for some $a>0$, then $U$ is constant on $[t_1,t_2]$.}} {\rm e}nd{equation} If $X$ is (globally) sticky, then $U$ is also (globally) sticky. {\rm e}nd{proposition} \begin{proof} We prove the result for $X$ globally sticky. Fix any stopping time $\tau$ and any ${\mathbb F}cal_\tau$-measurable $\varepsilon>0$. For each $a>0$, define the stopping time $\sigma_a=\inf\{t\ge \tau\colon d(X_t,K)\ge a\}$. Since $U$ satisfies {\rm e}qref{P:X U K sticky cond}, the equality $U_\infty=U_{\sigma_a}$ holds on the event where $d(X_t,K) \ge a/2$ for all $t\in[\sigma_a,\infty)$. Consequently, for any $a>0$, \begin{align*} {\mathbb P}(U_\infty \le U_\tau + \varepsilon \mid{\mathbb F}cal_\tau) &\ge {\mathbb P}(U_{\sigma_a} \le U_\tau + \varepsilon \text{ and } d(X_t,K) \ge \frac{a}{2} \text{ for all $t\in[\sigma_a,\infty)$} \mid{\mathbb F}cal_\tau) \\ &= {\mathbb E}\left[ \bm 1_{\{U_{\sigma_a} \le U_\tau + \varepsilon\}}\, {\mathbb P}( d(X_t,K) \ge \frac{a}{2} \text{ for all $t\in[\sigma_a,\infty)$} \mid {\mathbb F}cal_{\sigma_a}) {\ \Big|\ } {\mathbb F}cal_\tau \right]. {\rm e}nd{align*} Consider now the event \[ A = \{{\mathbb P}(U_\infty \le U_\tau + \varepsilon \mid{\mathbb F}cal_\tau) = 0\} \in {\mathbb F}cal_\tau. \] Then, by the inequality above, \[ \bm 1_{\{U_{\sigma_a} \le U_\tau + \varepsilon\}}\, {\mathbb P}( d(X_t,K) \ge a/2 \text{ for all $t\in[\sigma_a,\infty)$} \mid {\mathbb F}cal_{\sigma_a})=0 \] holds on~$A$ for all $a>0$. The global stickiness property states that the conditional probability is strictly positive, whence $\bm 1_{\{U_{\sigma_a} \le U_\tau + \varepsilon\}}=0$ on $A$ for all $a>0$. Define the stopping time $\sigma_0=\inf\{t\ge \tau\colon X_t\notin K\}$. Sending $a$ to zero and using that $K$ is closed, we obtain $\sigma_a{\rm d}ownarrow \sigma_0$. Right-continuity and non-decrease of $U$ then yields $U_{\sigma_a}{\rm d}ownarrow U_{\sigma_0}$, and hence \begin{equation}\label{P:X U K sticky:eq2} \bm 1_{\{U_{\sigma_0} < U_\tau + \varepsilon\}} \le \lim_{a{\rm d}ownarrow0}\bm 1_{\{U_{\sigma_a} \le U_\tau + \varepsilon\}}=0 \quad\text{on $A$.} {\rm e}nd{equation} But since $X$ lies in $K$ on $[\tau,\sigma_0)$, the assumption~{\rm e}qref{P:X U K sticky cond} on $U$ implies that $U_{\sigma_0}=U_\tau$. Thus the left-hand side of~{\rm e}qref{P:X U K sticky:eq2} equals one, which forces ${\mathbb P}(A)=0$. Thus ${\mathbb P}(U_\infty \le U_\tau + \varepsilon \mid{\mathbb F}cal_\tau)>0$, that is, $U$ is sticky. {\rm e}nd{proof} \begin{corollary} \label{C:sticky local time} Suppose $X$ is a real semimartingale, and let $L^x$ be its local time at level $x$. If $X$ is (globally) sticky, and $x_1,\ldots,x_n\in{\mathbb R}$, $n\in{\mathbb N}$ are arbitrary, then $L^{x_1}+\cdots+L^{x_n}$ is also (globally) sticky. 
{\rm e}nd{corollary} \begin{proof} Apply Proposition~\ref{P:X U K sticky} with $K=\{x_1,\ldots,x_n\}$ and $U=L^{x_1}+\cdots+L^{x_n}$. {\rm e}nd{proof} \begin{example} Without the stickiness assumption on $X$, there is no guarantee that its local time is sticky. Indeed, let $W$ be Brownian motion, which is not globally sticky. Its local time $L^0_t(W)$ tends to infinity with $t$, so that $\bigwedge[L^0_\infty\mid{\mathbb F}cal_t]=\infty\ne L^0_t(W)$. This can be turned into an example on a finite time horizon as follows. Let $\tau=\inf\{t\ge0\colon L^0_t(W)\ge 1\}$, which is a finite stopping time. Define $X_t = W_{(t/(1-t))\wedge\tau}$ for $t\in[0,1]$, which for $t=1$ should be read $X_1=W_\tau$. Then $X$ is a semimartingale with respect to the time-changed filtration, and its local time is given by $L^0_t(X)=L^0_{(t/(1-t))\wedge\tau}(W)$. In particular, one has $\bigwedge[L^0_1(X)]=1\ne 0=L^0_0(X)$. {\rm e}nd{example} For the next result we assume that ${\mathcal X}$ is an open connected subset of ${\mathbb R}^n$ and that $X$ has continuous paths. For any deterministic times $\tau\le T<\infty$, the restriction $X|_{[\tau,T]}=(X_t)_{t\in[\tau,T]}$ is a random element of the space $C([\tau,T];{\mathcal X})$ of all continuous maps $[\tau,T]\to {\mathcal X}$, equipped with the uniform metric. The process $X$ is said to have {{\rm e}m conditional full support} if for any choice of deterministic times $0\le\tau<T$, the support of the regular conditional distribution of $X|_{[\tau,T]}$ given ${\mathbb F}cal_\tau$ is almost surely all of $C_{X_\tau}([\tau,T];{\mathcal X})$, the closed subset of $C([\tau,T];{\mathcal X})$ whose paths are equal to $X_\tau$ at time $\tau$. The notion of conditional full support plays an important role in mathematical finance, and implies the stickiness property; see e.g.~\citet{BPS:15}. \begin{proposition} \label{P:int f sticky} Let $f:{\mathcal X}\to{\mathbb R}_+$ be a nonnegative continuous function with $0\in f({\mathcal X})$. If $X$ has conditional full support, then the process $U$ given by \[ U_t = \int_0^t f(X_s) ds \] is also sticky. {\rm e}nd{proposition} \begin{proof} We must show that for any $T<\infty$, any stopping time $\tau\le T$, and any strictly positive ${\mathbb F}cal_\tau$-measurable $\varepsilon$, we have ${\mathbb P}(U_T\le U_\tau+\varepsilon\mid{\mathbb F}cal_\tau)>0$. By \citet[Lemma~3.1]{BPS:15}, it suffices to take $\tau<T$ deterministic; see also \citet[Lemma~2.1]{RS:16}. Consider the regular conditional distribution of $(\varepsilon,X|_{[\tau,T]})$ given ${\mathbb F}cal_\tau$, under which $X_\tau$ and $\varepsilon$ are both Dirac distributed almost surely and therefore can be treated as being deterministic. Let $\gamma\colon [0,1]\to {\mathcal X}$ be a continuous map with $\gamma(0)=X_\tau$ and $f(\gamma(1))=0$. Such a map exists since ${\mathcal X}$ is connected and $0\in f({\mathcal X})$. Let $m=\max_{s\in[0,1]}f(\gamma(s))$ be the largest value that $f$ attains along $\gamma$. Now define the set $G\subset C_{X_\tau}([\tau,T];{\mathcal X})$ to consist of all ${\mathcal X}$-valued continuous paths $x\colon [\tau,T]\to {\mathcal X}$ with $x(\tau)=X_\tau$ satisfying the following two properties: \begin{enumerate} \item $f(x(t)) < m+\varepsilon$ for all $t\in[\tau,T]$. \item $f(x(t)) < \varepsilon/(2T)$ for all $t\in[\sigma,T]$, where we define $\sigma\in(\tau,T)$ by \[ \sigma=\tau + \frac{\varepsilon/2}{m + \varepsilon} \wedge \frac{T-\tau}{2}. \] {\rm e}nd{enumerate} Then $G$ is open, being the intersection of two open sets. 
Moreover, $G$ is nonempty since it contains the path $(x(t))_{t\in[\tau,T]}$ given by
\[
x(t) = \gamma\left( 1\wedge\frac{t-\tau}{\sigma-\tau}\right).
\]
The conditional full support property therefore implies that the event $\{X|_{[\tau,T]}\in G\}$ has strictly positive regular conditional probability. On the other hand, whenever $X|_{[\tau,T]}$ remains in $G$ one has
\[
U_T - U_\tau = \int_\tau^\sigma f(X_t) dt + \int_\sigma^T f(X_t)dt \le \frac{\varepsilon/2}{m+\varepsilon}\times(m+\varepsilon) + T\times \frac{\varepsilon}{2T} = \varepsilon.
\]
In conclusion, one has ${\mathbb P}(U_T-U_\tau \le \varepsilon\mid{\mathcal F}_\tau)>0$ as required.
\end{proof}

\section{Martingales and supermartingales} \label{S:martingales}

Martingales are not always sticky: one example is the martingale $M_t={\mathbb P}(W_1>0\mid{\mathcal F}_t)$ where $W$ is Brownian motion. This martingale starts at $1/2$ and converges to zero or one at time $t=1$. Therefore it does not remain in any interval of the form $(1/2-\delta,\,1/2+\delta)$ with $\delta<1/2$. Nonetheless, certain functionals of martingales are necessarily sticky, and consequently satisfy the ``inf-martingale'' property~\ref{T:NCR:1} of Theorem~\ref{T:NCR}. This includes the running maximum process $\overline M_t=\sup_{s\le t}M_s$ as well as the local time processes. The following result is an immediate consequence of Theorem~\ref{T:NCR}. Recall that a c\`adl\`ag supermartingale $M$ is {\em closed on the right} if there exists an integrable random variable $X$ such that $M_t\ge {\mathbb E}[X\mid{\mathcal F}_t]$ for all $t\ge0$. For instance, this holds if $M$ is nonnegative or, more generally, if $\{M_t^-\colon t\ge0\}$ is a uniformly integrable family; see VI.8 in~\citet{DM:82}.

\begin{proposition} \label{P:maxM}
Let $M$ be a c\`adl\`ag supermartingale, possibly after an equivalent change of probability measure. Then $\overline M$ is sticky, that is,
\[
\overline M_t = \bigwedge[ \overline M_T \mid{\mathcal F}_t], \qquad t\le T<\infty.
\]
If additionally $M$ is closed on the right, then $\overline M$ is globally sticky, that is,
\[
\overline M_t = \bigwedge[ \overline M_\infty \mid{\mathcal F}_t], \qquad t\ge0.
\]
\end{proposition}

\begin{proof}
We apply the theory of Section~\ref{S:CI} with the complete lattice $\overline{\mathbb R}=[-\infty,\infty]$ in Example~\ref{E:CI 1}. Since the desired conclusion is invariant under equivalent changes of probability measure, we may suppose $M$ is already a supermartingale. We may also suppose it is closed on the right, since otherwise we may replace $M$ by $M^T$. The result now follows from Theorem~\ref{T:NCR} with $U=\overline M$, once condition~\ref{T:NCR:2} of the theorem is verified. Thus, consider any stopping time~$\tau$ such that $\overline M_\tau < Y$ on $\{\tau<\infty\}$ for some ${\mathcal F}_\tau$-measurable random variable $Y \le \overline M_\infty$. Define $\sigma=\inf\{t>\tau\colon M_t\ge Y\}$. Then
\[
0 \ge {\mathbb E}[ M_\sigma - M_\tau ] = {\mathbb E}[ (M_\sigma - M_\tau)\bm 1_{\{\tau<\infty\}} ] \ge {\mathbb E}[ (Y - \overline M_\tau)\bm 1_{\{\tau<\infty\}} ] \ge 0.
\]
Therefore ${\mathbb E}[ (Y - \overline M_\tau)\bm 1_{\{\tau<\infty\}} ]=0$, and we deduce ${\mathbb P}(\tau<\infty)=0$, as required.
\end{proof}

An interesting consequence of Proposition~\ref{P:maxM} is that it allows one to reconstruct any nonnegative local martingale $M$ from the pair $(M_\infty,\overline M_\infty)$.
For uniformly integrable martingales this is obvious, since $M_t={\mathbb E}[M_\infty\mid{\mathbb F}cal_t]$ for all $t\ge0$. For general nonnegative local martingales the result is less obvious and even counterintuitive (at least to the author); in particular, many such local martingales satisfy $M_\infty=0$, in which case the global maximum $\overline M_\infty$ alone contains the same information as the entire process. To reconstruct $M$ from $(M_\infty,\overline M_\infty)$, simply observe that a reducing sequence for $M$ is given by the crossing times $\tau_n=\inf\{t\ge0:\overline M_t\ge n\}$, so that \[ M_{t\wedge\tau_n} = {\mathbb E}[M_{\tau_n}\mid{\mathbb F}cal_t] = {\mathbb E}[\overline M_{\tau_n}\bm 1_{\{\tau_n<\infty\}} + M_\infty\bm 1_{\{\tau_n=\infty\}}\mid{\mathbb F}cal_t]. \] Thus $M_t=\lim_{n\to\infty}M_{t\wedge\tau_n}$ is determined by $(M_\infty,\overline M)$, which by Proposition~\ref{P:maxM} is determined by $(M_\infty,\overline M_\infty)$. In fact, a stronger statement is true: it is enough to know only the very largest values of $\overline M_\infty$, in the following sense. \begin{proposition} \label{P:maxM X} Let $M$ be a nonnegative local martingale and let $X$ be any bounded random variable. Then $M$ can be reconstructed from the pair $(M_\infty, X\vee \overline M_\infty)$. {\rm e}nd{proposition} \begin{proof} Define $V_t=\bigwedge[X\vee \overline M_\infty\mid{\mathbb F}cal_t]$ and $\tau_n=\inf\{t\ge0\colon V_t\ge n\}$. Let $c\ge X$ be a deterministic upper bound on $X$. We claim that $\overline M_t=V_t$ on $A=\{V_t\ge n\}$ for any $n>c$ and any $t\ge0$. To see this, note that $X<V_t\le X\vee\overline M_\infty$ on $A$ and hence $X<\overline M_\infty$ on $A$. Thus by Lemma~\ref{L:PCI}\ref{PCI:max linear} and Proposition~\ref{P:maxM}, \[ V_t \vee\chi_A = \bigwedge[X\vee \overline M_\infty \vee \chi_A \mid{\mathbb F}cal_t] = \bigwedge[\overline M_\infty \vee \chi_A \mid{\mathbb F}cal_t] = \bigwedge[\overline M_\infty \mid{\mathbb F}cal_t] \vee \chi_A = \overline M_t\vee\chi_A. \] This proves that $\overline M_t=V_t$ on $\{V_t\ge n\}$, as claimed. In conjunction with the inequality $\overline M_t\le V_t$, this implies that $\tau_n=\inf\{t\ge0:\overline M_t\ge n\}$ and $V_{\tau_n}=\overline M_{\tau_n}$ on $\{\tau_n<\infty\}$ for all $n>c$. The argument preceding the theorem now yields the desired result. {\rm e}nd{proof} The fact that a nonnegative local martingale $M$ can be reconstructed from the pair $(M_\infty, \overline M_\infty)$ can be deduced from results that already exist in the literature, under the additional assumption that $\overline M$ is continuous. For example, assuming without loss of generality that $M_\infty=0$, a conditional version of an argument by \citet{ELY:97} shows that \begin{equation} \label{eq:ELYeq} M_t=\lim_{n\to\infty}n\,{\mathbb P}(\overline M_\infty\ge n\mid{\mathbb F}cal_t). {\rm e}nd{equation} An alternative argument is based on the following identity due to \citet{NY:06}, where it is additionally assumed that $M_0=1$ and $M>0$: \begin{equation} \label{eq:NYeq} {\mathbb E}[f(\overline M_\infty)\mid{\mathbb F}cal_t] = f(\overline M_t)\left(1 - \frac{M_t}{\overline M_t}\right) + M_t \int_{\overline M_t}^\infty \frac{f(x)}{x^2}dx {\rm e}nd{equation} for any positive or bounded Borel function $f$. Choosing for $f$ functions $f_n$ such that $f_n=0$ on $(-\infty,n]$ and $\int_n^\infty x^{-2}f_n(x)dx=1$, the right-hand side of~{\rm e}qref{eq:NYeq} becomes equal to $M_t$ as soon as $n$ exceeds $\overline M_t$. 
This shows that \[ M_t = \lim_{n\to\infty} {\mathbb E}[f_n(\overline M_\infty)\mid{\mathbb F}cal_t], \] which shows that $M_t$ can be recovered from $\overline M_\infty$. Note that {\rm e}qref{eq:ELYeq} crucially relies on the assumption that $\overline M$ is continuous. Indeed, \citet[Example~3.2]{HR:15} construct a nonnegative martingale $M$, with very large but unlikely upward jumps, such that $M_0=1$, $M_\infty=0$, and \[ \lim_{n\to\infty} n\,{\mathbb P}(\overline M_\infty \ge n) = 0. \] This is inconsistent with~{\rm e}qref{eq:ELYeq}. The continuity of $\overline M$ is similarly crucial for {\rm e}qref{eq:NYeq}. Our next result shows that another interesting functional, namely the local time process of a local martingale, is always sticky. \begin{proposition} Let $M$ be a local martingale, and let $L^x$ denote its local time at level $x$. Then $L^x$ is sticky, that is, \[ L^x_t = \bigwedge[L^x_T\mid{\mathbb F}cal_t], \qquad t\le T<\infty. \] {\rm e}nd{proposition} \begin{proof} By localization we may assume that $M$ is a martingale. Pick any $T<\infty$, any stopping time $\tau\le T$, and any strictly positive ${\mathbb F}cal_\tau$-measurable random variable $\varepsilon$. To verify the stickiness property~{\rm e}qref{eq:U sticky}, we must show that ${\mathbb P}(L^x_T-L^x_\tau \le 2\varepsilon\mid{\mathbb F}cal_\tau)>0$. To this end, define stopping times \begin{align*} \rho_n&=\inf\{t\ge\tau\colon |M_t-x|\ge n^{-1}\}\wedge T, \\ \rho&=\inf\{t\ge\tau\colon M_t\ne x\}\wedge T. {\rm e}nd{align*} We first show that \begin{equation} \label{eq:M LT 1} {\mathbb P}(L^x_T - L^x_{\rho_n} \le \varepsilon \mid{\mathbb F}cal_{\rho_n}) >0. {\rm e}nd{equation} Let $B=\{{\mathbb P}(L^x_T - L^x_{\rho_n} \le \varepsilon \mid{\mathbb F}cal_{\rho_n}) = 0\}$ and define the stopping time \[ \upsilon = \inf\{t\ge\rho_n\colon M_t = x\} \wedge T. \] On $B$ we know that the local time process increases over the interval $[\rho_n,T]$ (in fact, it increases by more than $\varepsilon$). By \citet[Theorem~IV.7]{P:05}, the local time measure $dL_t$ is concentrated on those time points $t$ for which $M_{t-}=M_t=x$. Therefore $M_\upsilon=x$ on $B$. Moreover, $\rho_n$ occurs strictly before $T$ on $B$, so that $|M_{\rho_n}-x|\ge n^{-1}$ on $B$. Combining these observations yields \[ {\mathbb E}[M_\upsilon - M_{\rho_n} \mid {\mathbb F}cal_{\rho_n}] = \begin{cases} {\mathbb E}[x-M_{\rho_n}\mid{\mathbb F}cal_{\rho_n}] \le -{{\rm d}isplaystyle\frac{1}{n}} & \text{on $\{M_{\rho_n}\ge x+n^{-1}\}\cap B$}, \\[1ex] {\mathbb E}[x-M_{\rho_n}\mid{\mathbb F}cal_{\rho_n}] \ge {{\rm d}isplaystyle\frac{1}{n}} & \text{on $\{M_{\rho_n}\le x-n^{-1}\}\cap B$}. {\rm e}nd{cases} \] Thus $\left|{\mathbb E}[M_\upsilon - M_{\rho_n} \mid {\mathbb F}cal_{\rho_n}] \right| \ge n^{-1}$ on $B$. The martingale property then forces ${\mathbb P}(B)=0$, which proves~{\rm e}qref{eq:M LT 1}. Next, we prove that \begin{equation} \label{eq:M LT 2} {\mathbb P}(L^x_T - L^x_\rho \le 2\varepsilon \mid{\mathbb F}cal_\rho) >0. {\rm e}nd{equation} To this end, define the stopping time \[ \sigma = \inf\{t\ge \rho\colon L^x_t \ge L^x_\rho + \varepsilon\}. \] On the event $\{\sigma>T\}$, clearly $L^x_T-L^x_\rho\le\varepsilon$. On the event $\{\rho_n\le\sigma\le T\}$, one has $L^x_\sigma\ge L^x_{\rho_n}$ and $L^x_\sigma-L^x_\rho=\varepsilon$, hence $L^x_T-L^x_\rho\ge L^x_T-L^x_{\rho_n} + \varepsilon$. 
Consequently, for each $n$, \begin{align*} {\mathbb P}(L^x_T - L^x_\rho \le 2\varepsilon \mid{\mathbb F}cal_\rho) &\ge {\mathbb P}(L^x_T - L^x_{\rho_n} \le \varepsilon, \ \rho_n\le \sigma \mid{\mathbb F}cal_\rho) \\ &= {\mathbb E}[\bm 1_{\{\rho_n\le \sigma\}} {\mathbb P}(L^x_T - L^x_{\rho_n} \le \varepsilon \mid{\mathbb F}cal_{\rho_n}) \mid {\mathbb F}cal_\rho]. {\rm e}nd{align*} Let $A=\{{\mathbb P}(L^x_T - L^x_\rho \le 2\varepsilon \mid{\mathbb F}cal_\rho)=0\}$. The above inequality along with~{\rm e}qref{eq:M LT 1} yields \[ \bm 1_{\{\rho_n\le \sigma\}\cap A} = 0 \] for all $n$. Since $\rho_n{\rm d}ownarrow \rho$, and since $\sigma>\rho$, it follows that ${\mathbb P}(A)=0$. This proves~{\rm e}qref{eq:M LT 2}. Finally, just observe that $M$ is constant and equal to $x$ on $[\tau,\rho)$, so that $L^x_\tau=L^x_\rho$. Therefore \[ {\mathbb P}(L^x_T-L^x_\tau \le 2\varepsilon\mid{\mathbb F}cal_\tau) = {\mathbb E}[ {\mathbb P}(L^x_T-L^x_\rho \le 2\varepsilon\mid{\mathbb F}cal_\rho)\mid{\mathbb F}cal_\tau] > 0 \] due to~{\rm e}qref{eq:M LT 2}. This completes the proof. {\rm e}nd{proof} \section{Further examples of recovery of monotone processes} \label{S:further} We now consider two examples of set-valued nondecreasing processes that can be recovered from their final values. The first example deals with convex hulls, and we apply the theory of Section~\ref{S:CI} with the complete lattice $S={\rm CO}({\mathbb R}^d)$ in Example~\ref{E:CI 2}. The second example deals with the collection of sites visited by a random walk on a countable set ${\mathcal X}$, and uses the complete lattice $S=2^{\mathcal X}$ in Example~\ref{E:CI 2_new}. \subsection{Convex hulls} Let $X=(X_t)_{t\ge0}$ be a c\`adl\`ag adapted process with values in ${\mathbb R}^d$. By Lemma~\ref{L:conv is adapted}, the ${\rm CO}({\mathbb R}^d)$-valued process $U=(U_t)_{t\ge0}$ given by \[ U_t = \overline\conv(X_s\colon s\le t) \] is adapted. We have the following result. \begin{proposition} \label{P:convex hull sticky} If $X$ is sticky, then \[ U_t = \bigwedge [U_T \mid {\mathbb F}cal_t], \qquad t\le T<\infty. \] {\rm e}nd{proposition} \begin{proof} Relying on the implication $\ref{T:NCR:3}{\mathbb R}ightarrow\ref{T:NCR:1}$ of Theorem~\ref{T:NCR}, it suffices to consider any stopping time $\tau\le T$ and ${\mathbb F}cal_\tau$-measurable ${\rm CO}({\mathbb R}^d)$-valued random variable $Y$ such that $U_\tau\subsetneq Y$, and prove that ${\mathbb P}(Y\subseteq U_T\mid{\mathbb F}cal_\tau)<1$. Define the ${\mathbb F}cal_\tau$-measurable random variable \[ \varepsilon = 1\wedge \frac12 \sup_{y\in Y} \inf_{x\in U_\tau} |x-y| \] which is strictly positive since $U_\tau\subsetneq Y$. Furthermore, one has \[ Y \not\subseteq \overline\conv( U_\tau \cup B(X_\tau,\varepsilon)), \] where $B(X_\tau,\varepsilon)$ is the ball of radius $\varepsilon$ centered at $X_\tau$. Since $X$ is sticky, one therefore gets \begin{align*} 0 &< {\mathbb P}(\sup_{t\in[\tau,T]} |X_t-X_\tau| \le \varepsilon \mid{\mathbb F}cal_\tau) \\ &\le {\mathbb P}( U_T \subseteq \overline\conv(U_\tau \cup B(X_\tau,\varepsilon) \mid{\mathbb F}cal_\tau) \\ &\le {\mathbb P}( Y \not\subseteq U_T \mid{\mathbb F}cal_\tau). {\rm e}nd{align*} This yields ${\mathbb P}(Y\subseteq U_T\mid{\mathbb F}cal_\tau)<1$ as required. {\rm e}nd{proof} \subsection{Sites visited by a random walk} Let $X=(X_t)_{t\ge0}$ be a c\`adl\`ag process with values in a countable set ${\mathcal X}$. Define the $2^{\mathcal X}$-valued process $U=(U_t)_{t\ge0}$ by \[ U_t = \bigcup_{s\le t}\{X_s\}. 
\] This is the process whose value at time $t$ is the set of all sites $X$ has visited up to and including time $t$, and is adapted by Lemma~\ref{L:range of SP}. In this context, if we equip ${\mathcal X}$ with the discrete metric $d(x,y)=\bm1_{\{y\}}(x)$, stickiness of $X$ simply means that \[ {\mathbb P}(X_t = X_\tau \text{ for all $t\in[\tau,T]$} \mid{\mathbb F}cal_\tau) > 0 \] for every $T\ge0$ and every stopping time $\tau\le T$. That is, $X$ has conditionally unbounded holding times. \begin{proposition} \label{P:RV sticky} Assume $X$ has conditionally unbounded holding times in the above sense. Then \[ U_t = \bigwedge [U_T \mid {\mathbb F}cal_t], \qquad t\le T<\infty. \] {\rm e}nd{proposition} \begin{proof} The proof is similar to that of Proposition~\ref{P:convex hull sticky}, but simpler. {\rm e}nd{proof} \section{Spaces of closed sets} \label{S:closed subsets} Let $({\mathcal X},d)$ be a complete separable metric space, and let ${\rm CL}({\mathcal X})$ denote the collection of all nonempty closed subsets of ${\mathcal X}$. In our applications, ${\mathcal X}$ is either a countable set or ${\mathbb R}^d$, but we do not impose this yet. The distance between a point $x\in{\mathcal X}$ and a subset $A\subseteq{\mathcal X}$ is denoted by \[ d(x,A)=\inf\{d(x,y)\colon y\in A\}. \] The {{\rm e}m Wijsman topology} on ${\rm CL}({\mathcal X})$ is the smallest topology for which the maps $A\mapsto d(x,A)$, $x\in{\mathcal X}$, are all continuous; see~\cite{W:66}. It was proved by \citet[Theorem~4.3]{Beer:91} that with the Wijsman topology, ${\rm CL}({\mathcal X})$ becomes a Polish space. The space ${\rm CL}({\mathcal X})$ is partially ordered by set inclusion. It is however not a lattice under union and intersection since it does not include the empty set. The space \[ {\rm CL}_0({\mathcal X}) = {\rm CL}({\mathcal X})\cup\{{\rm e}mptyset\} \] on the other hand is a complete lattice with $\inf_\alpha A_\alpha=\bigcap_\alpha A_\alpha$ and $\sup_\alpha A_\alpha=\overline{\bigcup_\alpha A_\alpha}$ for arbitrary collections $\{A_\alpha\}\subseteq{\rm CL}_0({\mathcal X})$. The Wijsman topology is extended to ${\rm CL}_0({\mathcal X})$ by declaring a sequence of closed sets $A_n$ convergent to ${\rm e}mptyset$ if $d(x,A_n)\to\infty$ for all $x\in{\mathcal X}$. Equipped with the extended Wijsman topology, ${\rm CL}_0({\mathcal X})$ is again a Polish space; see \citet[Theorem~4.4]{Beer:91}. The spaces ${\rm CL}({\mathcal X})$ and ${\rm CL}_0({\mathcal X})$ are convenient from the point of view of stochastic analysis. The reason is a characterization due to~\citet{Hess:83,Hess:86} of the Borel $\sigma$-algebra on ${\rm CL}({\mathcal X})$. Namely, the Borel $\sigma$-algebra coincides with the {{\rm e}m Effros $\sigma$-algebra}, which is generated by the sets $\{A\in{\rm CL}({\mathcal X})\colon A\cap V\ne{\rm e}mptyset\}$, where $V$ ranges over the open subsets of ${\mathcal X}$. This identification leads to the following lemma. \begin{lemma} \label{L:range of SP} Let $X=(X_t)_{t\ge0}$ be an ${\mathcal X}$-valued c\`adl\`ag adapted process on a filtered measurable space $(\Omega,{\mathbb F}cal,{\mathbb F})$, whose filtration ${\mathbb F}$ is not necessarily right-continuous. Then the ${\rm CL}({\mathcal X})$-valued process $Y=(Y_t)_{t\ge0}$ given by \[ Y_t = \overline{\{X_s\colon s\le t\}} \] is adapted. The process is then also adapted when viewed as taking values in ${\rm CL}_0({\mathcal X})$. 
{\rm e}nd{lemma} \begin{proof} We need to argue that $\omega\mapsto Y_t(\omega)$ is ${\mathbb F}cal_t$-measurable for each $t$. Using Hess's characterization, it suffices to inspect inverse images of sets $\{A\in{\rm CL}({\mathcal X})\colon A\cap V\ne{\rm e}mptyset\}$ with $V$ open. That is, we must check that the event \[ F = \left\{\omega\in\Omega\colon \overline{\{X_s(\omega)\colon s\le t\}}\cap V \ne {\rm e}mptyset\right\} \] lies in ${\mathbb F}cal_t$. For a c\`adl\`ag process $X$, the set $\overline{\{X_s(\omega)\colon s\le t\}}\cap V$ is nonempty if and only if $X_s(\omega)\in V$ for some $s\le t$. Consequently, \[ F = \{\omega\in\Omega\colon \tau(\omega) < t \text{ or } X_t(\omega)\in V\}, \qquad \text{where }\tau=\inf\{s\ge0\colon X_{s-} \in V\}. \] Since $X_-$ is left-continuous, $\tau$ is predictable, and hence $F\in{\mathbb F}cal_t$; see \citet[Theorem~IV.73(b)]{DM:78}. The final assertion follows from the fact that $Y$ can never take the value ${\rm e}mptyset$. {\rm e}nd{proof} The following result will be used later. Its proof illustrates the use of the two alternative descriptions of the Borel $\sigma$-algebra on ${\rm CL}({\mathcal X})$. We use the notation \begin{equation} \label{eq:A_epsilon} A_\varepsilon=\{x\in {\mathcal X}\colon d(x,A)\le\varepsilon\} {\rm e}nd{equation} for any $A\in{\rm CL}({\mathcal X})$ and any $\varepsilon\ge0$. If $A={\rm e}mptyset$ then $A_\varepsilon={\rm e}mptyset$ by convention. \begin{lemma} \label{L:gen prop} \begin{enumerate} \item\label{L:gen prop:mu} The map $A\mapsto\mu(A)$ from ${\rm CL}_0({\mathcal X})$ to ${\mathbb R}$ is measurable, where $\mu$ is any finite measure on~${\mathcal X}$. \item\label{L:gen prop:eps} The map $A\mapsto A_\varepsilon$ from ${\rm CL}_0({\mathcal X})$ to itself is measurable for any $\varepsilon>0$. {\rm e}nd{enumerate} {\rm e}nd{lemma} \begin{proof} In both cases it suffices to show that the respective maps restricted to ${\rm CL}({\mathcal X})$ are measurable. \ref{L:gen prop:mu}: Using closedness of $A$ and the dominated convergence theorem, one obtains the equalities $\mu(A)=\int\bm 1_A(x)\mu(dx)=\int\bm 1_{\{0\}}(d(x,A))\mu(dx)=\lim_n \int(1-nd(x,A))^+\mu(dx)$, where $y^+$ denotes the positive part of~$y\in{\mathbb R}$. Each map $A\mapsto \int(1-nd(x,A))^+\mu(dx)$ is continuous, hence measurable, by definition of the Wijsman topology and the fact that ${\rm d}elta\mapsto\int(1-n{\rm d}elta)^+\mu(dx)$ is continuous due to the dominated convergence theorem. Thus the map $A\mapsto\mu(A)$ is the pointwise limit of real-valued measurable maps, and therefore itself measurable. \ref{L:gen prop:eps}: One readily verifies $A_\varepsilon\cap V = A\cap V_\varepsilon$ for any open set $V$, where we define the open set $V_\varepsilon=\{x\in{\mathcal X}: d(x,V)<\varepsilon\}$. Therefore $\{A\in{\rm CL}({\mathcal X})\colon A_\varepsilon \cap V\} = \{A\in{\rm CL}({\mathcal X})\colon A \cap V_\varepsilon\}$. The left-hand side is the inverse image of $\{A\colon A\cap V\ne{\rm e}mptyset\}$ under the map $A\mapsto A_\varepsilon$, and the right-hand side lies in the Effros $\sigma$-algebra on ${\rm CL}({\mathcal X})$. Measurability now follows from Hess's characterization. {\rm e}nd{proof} \subsection{Lattice operations} In the following lemma, measurability is always understood with respect to the Borel $\sigma$-algebra. Since ${\rm CL}_0({\mathcal X})$ is Polish, the Borel $\sigma$-algebra on ${\rm CL}_0({\mathcal X})^k$ for $k\in\{2,3,\ldots,\infty\}$ coincides with the corresponding product $\sigma$-algebra. 
\begin{lemma} \label{L:latop} \begin{enumerate} \item\label{L:latop:cup} The map $(A,B)\mapsto A\cup B$ from ${\rm CL}_0({\mathcal X})^2$ to ${\rm CL}_0({\mathcal X})$ is continuous. \item\label{L:latop:closed} The set $\{(A,B)\colon A\subseteq B\}$ is closed in ${\rm CL}_0({\mathcal X})^2$. \item\label{L:latop:incr} If $A_n$ is a nondecreasing sequence in ${\rm CL}_0({\mathcal X})$, meaning that $A_n\subseteq A_{n+1}$ for all~$n$, then $A_n$ converges to $\overline{\bigcup_n A_n}$ in ${\rm CL}_0({\mathcal X})$. \item\label{L:latop:cup2} The map $(A_n)\mapsto \overline{\bigcup_n A_n}$ from ${\rm CL}_0({\mathcal X})^\infty$ to ${\rm CL}_0({\mathcal X})$ is measurable. \item\label{L:latop:cap} If ${\mathcal X}$ is $\sigma$-compact, the map $(A_n)\mapsto \bigcap_n A_n$ from ${\rm CL}_0({\mathcal X})^\infty$ to ${\rm CL}_0({\mathcal X})$ is measurable. {\rm e}nd{enumerate} {\rm e}nd{lemma} \begin{proof} \ref{L:latop:cup}: Observe that $d(x,A\cup B) \le d(x,A)\wedge d(x,B)$ for all $x\in{\mathcal X}$, where we use the convention $d(x,{\rm e}mptyset)=\infty$. We claim that strict inequality is impossible. Indeed suppose $A\cup B\ne{\rm e}mptyset$ and let $x_n\in A\cup B$ achieve $d(x,x_n)\to d(x,A\cup B)$. Suppose $A$ contains infinitely many of the $x_n$ (otherwise $B$ does, and we work with $B$ instead). Then $x_n\in A$ along a subsequence, so that $d(x,A)\le d(x,x_n)\to d(x,A\cup B)$. Therefore strict inequality is impossible, and we have $d({\,\cdot\,},A\cup B) = d({\,\cdot\,},A)\wedge d({\,\cdot\,},B)$. The stated continuity property now follows from the definition of the extended Wijsman topology. \ref{L:latop:closed}: If $A_n\subseteq B_n$ and $(A_n,B_n)\to(A,B)$, then $B=\lim_n B_n=\lim_n A_n\cup B_n = A\cup B$ in view of~\ref{L:latop:cup}. Thus $A\subseteq B$, as required. \ref{L:latop:incr}: The statement is obvious if $A_n={\rm e}mptyset$ for all $n$, so we suppose $A_n\ne{\rm e}mptyset$ for some $n$, and then without loss of generality for all $n$. Define $A=\overline{\bigcup_n A_n}$ for ease of notation. Fix any $x\in{\mathcal X}$. Since $A_n\subseteq A$, we have $d(x,A_n)\ge d(x,A)$ and hence $\lim_n d(x,A_n)\ge d(x,A)$. For the reverse inequality, pick any $\varepsilon>0$ and $y\in A$ such that $d(x,y)\le d(x,A)+\varepsilon$. Since $A$ is the closure of $\bigcup_nA_n$, there exists some $m$ and some $z\in A_m$ with $d(y,z)\le\varepsilon$. Consequently, \[ d(x,A_m) \le d(x,z) \le d(x,y) + d(y,z) \le d(x,A) + 2\varepsilon. \] Since $d(x,A_n)$ is non-increasing, and since $\varepsilon$ was arbitrary, it follows that $\lim_n d(x,A_n)\le d(x,A)$. We deduce that $d(x,A_n)\to d(x,A)$ for all $x\in{\mathcal X}$, which means that $A_n\to A$. \ref{L:latop:cup2}: First note that the map $\varphi_k\colon{\rm CL}_0({\mathcal X})^\infty\to{\rm CL}_0({\mathcal X})$, $(A_n)\mapsto\bigcup_{n\le k}A_n$ is continuous, being a composition ${\rm CL}_0({\mathcal X})^\infty \to {\rm CL}_0({\mathcal X})^k \to {\rm CL}_0({\mathcal X})$, $(A_n)\mapsto(A_1,\ldots,A_k)\mapsto \bigcup_{n\le k}A_n$ of two maps that are continuous by definition of the product topology and due to repeated use of~\ref{L:latop:cup}. By~\ref{L:latop:incr}, the map $(A_n)\mapsto \overline{\bigcup_n A_n}$ is the pointwise limit of the maps $\varphi_k$, and therefore measurable by \citet[Lemma~4.29]{AB:06}. \ref{L:latop:cap}: Let $\varphi\colon(A_n)\mapsto \bigcap_n A_n$ denote the intersection map. 
We will prove that $\varphi^{-1}({\bf F})$ is a measurable subset of ${\rm CL}({\mathcal X})^\infty$, hence of ${\rm CL}_0({\mathcal X})^\infty$, for any measurable ${\bf F}\subseteq{\rm CL}({\mathcal X})$. The same then holds for any measurable ${\bf F}\subseteq{\rm CL}_0({\mathcal X})$, since $\varphi^{-1}(\{{\rm e}mptyset\})=(\varphi^{-1}({\rm CL}({\mathcal X})))^c$ is measurable. This readily implies the assertion. We must thus argue that $\varphi^{-1}({\bf F})$ is measurable for any measurable ${\bf F}\subseteq{\rm CL}({\mathcal X})$. In view of Hess's characterization of the Borel $\sigma$-algebra on ${\rm CL}({\mathcal X})$ it suffices to consider sets of the form ${\bf F}=\{A\in{\rm CL}({\mathcal X})\colon A\cap V\ne{\rm e}mptyset\}$ with $V$ open. For such sets we have \begin{equation} \label{eq:L:latop:phi-1} \varphi^{-1}({\bf F}) = \big\{(A_n)\colon V\cap\bigcap_n A_n \ne{\rm e}mptyset\big\} = \bigcup_m \big\{(A_n)\colon K_m\cap V\cap\bigcap_n A_n \ne{\rm e}mptyset\big\}, {\rm e}nd{equation} where $\{K_m\}_{m\in{\mathbb N}}$ is a compact cover of ${\mathcal X}$, which exists by $\sigma$-compactness. Thus it suffices to prove measurability of any set of the form $\{(A_n)\colon K\cap V\cap\bigcap_n A_n \ne{\rm e}mptyset\}$ with $V$ open and $K$ compact. Fix a countable dense subset $D\subseteq{\mathcal X}$. We claim that for any $(A_n)\in{\rm CL}_0({\mathcal X})^\infty$ we have \begin{equation} \label{KcapV} K\cap V\cap\bigcap_n A_n \ne{\rm e}mptyset \qquad\Longleftrightarrow \qquad \begin{minipage}[c][4.5em][c]{.40\textwidth} \begin{center} ${\rm e}xists\varepsilon>0$ rational, $\forall k\in{\mathbb N}$, ${\rm e}xists x_k\in D$, $d(x_k,K)\le k^{-1}$, $d(x_k,V^c)\ge\varepsilon$, and $\forall n\in{\mathbb N}$, $d(x_k,A_n)\le k^{-1}$. {\rm e}nd{center} {\rm e}nd{minipage} {\rm e}nd{equation} To prove ``${\mathbb R}ightarrow$'', let $x\in K\cap V\cap\bigcap_n A_n$. Since $V$ is open, there exists some rational $\varepsilon>0$ such that $d(x,V^c)\ge 2\varepsilon$. Since $D$ is dense, there exist points $x_k\in D$ such that $d(x_k,x)\le k^{-1}\wedge\varepsilon$. The triangle inequality then yields $d(x_k,V^c)\ge d(x,V^c)-d(x_k,x)\ge\varepsilon$, and we have $d(x_k,K)\le d(x_k,x)\le k^{-1}$ as well as $d(x_k,A_n)\le d(x_k,x)\le k^{-1}$ for all~$n$. This proves the forward implication. To prove ``$\Leftarrow$'', let $\varepsilon>0$ and $x_k$, $k\in{\mathbb N}$, have the stated properties. Since $d(x_k,K)\le k^{-1}$, there exist $y_k\in K$ with $d(x_k,y_k)\le 2k^{-1}$. By compactness of $K$, we may pass to a subsequence and assume that $y_k\to x$ for some $x\in K$. Then also $x_k\to x$, and continuity of the distance function implies $d(x,V^c)\ge\varepsilon$ and $d(x,A_n)=0$ for all $n$. We conclude that $x\in K\cap V\cap\bigcap_n A_n$, which is therefore nonempty. This completes the proof of {\rm e}qref{KcapV}. Now, observe that {\rm e}qref{KcapV} can be expressed as \begin{equation} \label{eq:L:latop:cube} \big\{(A_n)\colon K\cap V\cap\bigcap_n A_n \ne{\rm e}mptyset\big\} = \bigcup_{\substack{\varepsilon>0\\[.5ex]\varepsilon\in{\mathbb Q}}} \bigcap_{k\in{\mathbb N}} \bigcup_{\substack{x_k\in D \text{ with}\\[1ex]d(x_k,K)\le k^{-1}\\[1ex]d(x_k,V^c)\ge\varepsilon}} \big\{(A_n)\colon d(x_k,A_n)\le k^{-1} \ \forall n\big\}. {\rm e}nd{equation} The right-hand side is formed through countable unions and intersections of sets of the form $\{(A_n)\colon d(x_k,A_n)\le k^{-1} \ \forall n\}$. 
Such a set is actually a cube ${\bf G}_k\times {\bf G}_k\times\cdots\subseteq{\rm CL}({\mathcal X})^\infty$, where ${\bf G}_k=\{A\colon d(x_k,A)\le k^{-1}\}$ is the inverse image of $[0,k^{-1}]$ under the continuous map $A\mapsto d(x_k,A)$. We deduce that the right-hand side of {\rm e}qref{eq:L:latop:cube}, and hence the left-hand side, is measurable. Thus $\varphi^{-1}({\bf F})$ in {\rm e}qref{eq:L:latop:phi-1} is also measurable, as required. {\rm e}nd{proof} \begin{remark} It appears unlikely to the author that $\sigma$-compactness is really be needed for measurability of the intersection map; dropping this assumption would be desirable and natural. However, it is interesting to note that there are some striking differences between unions and intersections. For instance, $A\cap B$ may be empty even if $A$ and $B$ are not. Also, the map $(A,B)\mapsto A\cap B$ is not continuous, even if one restrict to compact convex sets. Indeed, let ${\mathcal X}={\mathbb R}^2$, and let $A_n=\{(x_1,x_2):0\le x_1\le 1/n,\, x_2=nx_1\}$ be the straight line from the origin to the point $(1/n,1)$. Then $A_n\to B$, where $B=\{0\}\times[0,1]$ is the line from the origin to $(0,1)$. Thus $A_n\cap B=\{(0,0)\}$ does not converge to $(\lim_n A_n)\cap B=B$. {\rm e}nd{remark} \subsection{Vector space operations} If $A$ and $B$ are subsets of a vector space, their sum is defined by $A+B=\{x+y\colon x\in A,\,y\in B\}$. This operation is associative and commutative, so the expression $A+B+C$ is unambiguous and equal to $A+C+B$, etc. Similarly, we define $\lambda A=\{\lambda x\colon x\in A\}$ for any scalar~$\lambda$. The dimension of an affine subspace $V$ is denoted ${\rm d}im(V)$, with the convention ${\rm d}im({\rm e}mptyset)=-1$. \begin{lemma} \label{L:vec op} Assume ${\mathcal X}$ is a locally convex topological vector space.\footnote{Of course, the topology is assumed to coincide with the one generated by the given metric $d$.} \begin{enumerate} \item\label{L:vec op:+} The map $(A_1,\ldots,A_n)\mapsto \overline{A_1+\cdots+A_n}$ from ${\rm CL}_0({\mathcal X})^n$ to ${\rm CL}_0({\mathcal X})$ is measurable for any $n\in{\mathbb N}$. \item\label{L:vec op:scal} The map $A\mapsto\lambda A$ from ${\rm CL}_0({\mathcal X})$ to itself is measurable, where $\lambda$ is any scalar. \item\label{L:vec op:conv} The map $A\mapsto\cconv(A)$ from ${\rm CL}_0({\mathcal X})$ to itself is measurable. \item\label{L:vec op:aff} The map $A\mapsto\caff(A)$ from ${\rm CL}_0({\mathcal X})$ to itself is measurable. \item\label{L:vec op:dim aff} The map $A\mapsto{\rm d}im(\aff(A))$ from ${\rm CL}_0({\mathcal X})$ to ${\mathbb R}\cup\{\infty\}$ is lower semicontinuous. {\rm e}nd{enumerate} {\rm e}nd{lemma} \begin{proof} In each case, we only need to consider inverse images of measurable subsets of ${\rm CL}({\mathcal X})$, since the inverse image of $\{{\rm e}mptyset\}$ is obviously measurable for each of the given maps. The proofs all use Hess's characterization in terms of the Effros $\sigma$-algebra. Thus we inspect inverse images of the set $\{A\in{\rm CL}({\mathcal X})\colon A\cap V\ne{\rm e}mptyset\}$, where $V$ is any nonempty open subset of ${\mathcal X}$. \ref{L:vec op:+}: It suffices to consider the case $n=2$, as the general case follows by induction together with the fact that $\overline{A_1+A_2+A_3}=\overline{\overline{A_1+A_2}+A_3}$. Define the maps \[ \varphi_\varepsilon\colon (A_1,A_2) \mapsto \overline{(A_1)_\varepsilon + (A_2)_\varepsilon} \] for any $\varepsilon\ge0$, where we use the notation~{\rm e}qref{eq:A_epsilon}. 
We may assume without loss of generality that the metric $d$ is translation invariant, see e.g.~\citet[Lemma~5.75]{AB:06}, in which case one readily verifies the inequalities \[ d(x,A_1+A_2)-4\varepsilon \le d(x,(A_1)_\varepsilon+(A_2)_\varepsilon)\le d(x,A_1+A_2) \] for any $x\in{\mathcal X}$ and $A_1,A_2\in {\rm CL}_0({\mathcal X})$. It follows that $\lim_{\varepsilon\to0}\varphi_\varepsilon = \varphi_0$ pointwise with respect to the Wijsman topology. Thus it suffices to prove measurability of $\varphi_\varepsilon$ for $\varepsilon>0$. To this end, let $D\subseteq{\mathcal X}$ be a countable dense subset. Observe that $\overline{(A_1)_\varepsilon+(A_2)_\varepsilon}$ intersects the open set $V$ if and only if $(A_1)_\varepsilon+(A_2)_\varepsilon$ does. Since each $(A_i)_\varepsilon$ has nonempty interior, this holds if and only if $x_1+x_2\in V$ for some points $x_i\in D\cap(A_i)_\varepsilon$. This can be expressed as follows: \[ \{(A_1,A_2)\colon \overline{(A_1)_\varepsilon+(A_2)_\varepsilon}\cap V\ne{\rm e}mptyset\} = \bigcup_{\substack{x_1,\,x_2\in D \\ x_1+x_2\in V}} \{(A_1,A_2)\colon d(x_i,A_i)\le\varepsilon,\, i=1,2\}. \] The right-hand side is a countable union of products of the sets $\{A_i\colon d(x_i,A_i)\le\varepsilon\}$, $i=1,2$, which are measurable since $d(x_i,{\,\cdot\,})$ is continuous. Hence $\varphi_\varepsilon$ is measurable, as required. \ref{L:vec op:scal}: If $\lambda=0$, the inverse image is either empty or all of ${\rm CL}({\mathcal X})$, so we may suppose that $\lambda$ is nonzero. But then $\{A\colon (\lambda A)\cap V\ne{\rm e}mptyset\}=\{A\colon A\cap (\lambda^{-1}V)\ne{\rm e}mptyset\}$ is measurable since $\lambda^{-1}V$ is open whenever $V$ is. \ref{L:vec op:conv}: Since $V$ is open, we have $V\cap\cconv(A)\ne{\rm e}mptyset$ if and only if $V\cap\conv(A)\ne{\rm e}mptyset$. This is equivalent to $\sum_i\lambda_i x_i \in V$ for some (finitely many) convex weights $\lambda_i$ and points $x_i\in A$. Again since $V$ is open, the $\lambda_i$ may be chosen rational. Therefore, \[ \{A\colon \cconv(A)\cap V\ne{\rm e}mptyset\} = \bigcup_n \bigcup_{\substack{\lambda_i\in{\mathbb Q}_+\text{ with}\\[0.5ex]\sum_{i=1}^n\lambda_i=1}} \{ A\colon (\lambda_1A + \cdots + \lambda_n A)\cap V\ne{\rm e}mptyset\}. \] The right-hand side is measurable in view of \ref{L:vec op:+} and \ref{L:vec op:scal}, so the left-hand side is measurable as well. \ref{L:vec op:aff}: The proof is identical to the one for the convex hull, except that the $\lambda_i$ are affine weights rather than convex weights, meaning that they sum to one but are not constrained to be nonnegative. \ref{L:vec op:dim aff}: Choose any convergent sequence $A_n\to A$ and set $k={\rm d}im(\aff(A))$. We need to show that $\liminf_n {\rm d}im(\aff(A_n)) \ge k$. For $k=-1$, i.e.~$A={\rm e}mptyset$, the statement is obvious. Suppose instead $0\le k<\infty$. Then there exist $k+1$ affinely independent points $x_0,\ldots,x_k\in A$. By definition of the extended Wijsman topology, $d(x_i,A_n)\to0$ for $i=0,\ldots,k$. Thus for all large $n$, $A_n$ also contains $k+1$ affinely independent points, whence ${\rm d}im(\aff(A_n))\ge k$. Finally, if $k=\infty$, the above argument replaced by an arbitrary $k'\in{\mathbb N}$ shows that ${\rm d}im(\aff(A_n))\ge k'$ for all large $n$, and thus $\lim_n {\rm d}im(\aff(A_n)) =\infty$. {\rm e}nd{proof} \subsection{The space of convex subsets of Euclidean space} In this subsection we assume that ${\mathcal X}={\mathbb R}^d$ and that the metric comes from the norm, $d(x,y)=\|x-y\|$. 
We consider the subspace ${\rm CO}({\mathcal X}) \subset {\rm CL}_0({\mathcal X})$ consisting of all closed convex subsets, equipped with the subspace topology and the associated Borel $\sigma$-algebra. The space ${\rm CO}({\mathcal X})$ is again partially ordered by set inclusion, and is a complete lattice with $\inf_\alpha A_\alpha = \bigcap_\alpha A_\alpha$ and $\sup_\alpha A_\alpha = \cconv(\bigcup_\alpha A_\alpha)$ for arbitrary collections $\{A_\alpha\}\subseteq{\rm CO}({\mathcal X})$. Note that ${\rm CO}({\mathcal X})$ is a closed subset of ${\rm CL}_0({\mathcal X})$. The following result shows that this complete lattice satisfies the assumptions imposed in Section~\ref{S:CI}. \begin{theorem} \label{T:convex sets} The complete lattice ${\rm CO}({\mathcal X})$ satisfies assumptions \ref{A1}--\ref{A3}. A strictly increasing measurable map $\phi\colon{\rm CO}({\mathcal X})\to{\mathbb R}$ is given by \[ \phi(A) = {\rm d}im(\aff(A)) + \mu(A \mid \aff(A)), \] where $\mu({\,\cdot\,}\mid V)$ is the distribution of an ${\mathbb R}^d$-valued standard Gaussian random variable conditioned to lie in the affine subspace $V$. We set $\mu({\rm e}mptyset\mid{\rm e}mptyset)=0$ by convention. {\rm e}nd{theorem} \begin{proof} Due to Lemma~\ref{L:latop}\ref{L:latop:closed} the set $\{(A,B)\in{\rm CO}({\mathcal X})^2\colon A\subseteq B\}$ is closed in ${\rm CO}({\mathcal X})^2$ and hence measurable. Thus Assumption~\ref{A1} holds. Lemma~\ref{L:latop}\ref{L:latop:cap} yields that the countable infimum map is measurable, and Lemma~\ref{L:latop}\ref{L:latop:cup2} together with Lemma~\ref{L:vec op}\ref{L:vec op:conv} yield that the countable supremum map is measurable. Thus Assumption~\ref{A2} holds. Next, we claim that the map $\phi$ is strictly increasing. To see this, first note that $\phi({\rm e}mptyset)=0$ and $\phi(A)\ge1$ if $A\ne{\rm e}mptyset$. Next, let $A\subsetneq B$ be two nonempty convex sets. If ${\rm d}im(\aff(A))<{\rm d}im(\aff(B))$ then $\phi(A) \le {\rm d}im(\aff(B)) - 1 + \mu(A\mid\aff(A)) \le {\rm d}im(\aff(B)) < \phi(B)$. On the other hand, if ${\rm d}im(\aff(A))={\rm d}im(\aff(B))$, then the two affine hulls coincide and we denote them both by~$V$. Since $A$ is strictly contained in $B$ and both sets are convex and closed, $B\setminus A$ contains a set which is open in $V$. Therefore $\phi(B)-\phi(A)=\mu(B\setminus A\mid V)>0$. Finally, to see that $\phi$ is measurable, first note that $A\mapsto{\rm d}im(\aff(A))$ is measurable since it is lower semicontinuous by Lemma~\ref{L:vec op}\ref{L:vec op:dim aff}. Next, observe that \[ \mu(A\mid \aff(A)) = \lim_{\varepsilon{\rm d}ownarrow0} \frac{\mu(A_\varepsilon)}{\mu(\aff(A)_\varepsilon)}, \qquad A\ne{\rm e}mptyset, \] where $\mu({\,\cdot\,})$ is the standard Gaussian distribution on ${\mathbb R}^d$. Therefore, by Lemma~\ref{L:gen prop}\ref{L:gen prop:mu}--\ref{L:gen prop:eps} and Lemma~\ref{L:vec op}\ref{L:vec op:aff}, the map $A\mapsto\mu(A\mid\aff(A))$ is a limit of measurable maps, and hence itself measurable. {\rm e}nd{proof} \begin{lemma} \label{L:conv is adapted} Let $X=(X_t)_{t\ge0}$ be an ${\mathbb R}^d$-valued c\`adl\`ag adapted process on a filtered measurable space $(\Omega,{\mathbb F}cal,{\mathbb F})$, whose filtration ${\mathbb F}$ is not necessarily right-continuous. Then the ${\rm CO}({\mathbb R}^d)$-valued process $Y=(Y_t)_{t\ge0}$ given by \[ Y_t = \cconv(X_s\colon s\le t) \] is adapted. {\rm e}nd{lemma} \begin{proof} This follows from Lemma~\ref{L:range of SP} and Lemma~\ref{L:vec op}\ref{L:vec op:conv}. 
{\rm e}nd{proof} \subsection{The space of subsets of a countable set} In this subsection we assume that ${\mathcal X}$ is countable set equipped with the discrete metric $d(x,y)=\bm 1_{\{y\}}(x)$. Then every subset of ${\mathcal X}$ is closed, so $2^{\mathcal X}={\rm CL}_0({\mathcal X})$. This space is partially ordered by set inclusion, and is a complete lattice under union and intersection. Furthermore, it satisfies the assumptions of Section~\ref{S:CI}. \begin{theorem} \label{T:countable set} The complete lattice $2^{\mathcal X}$ satisfies assumptions \ref{A1}--\ref{A3}. A strictly increasing measurable map $\phi:2^{\mathcal X}\to{\mathbb R}$ is given by \[ \phi(A) = \sum_{x\in A}w(x), \] where $\{w(x):x\in{\mathcal X}\}$ is a countable set of strictly positive numbers summing to one. {\rm e}nd{theorem} \begin{proof} Assumptions~\ref{A1} and~\ref{A2} follows directly from Lemma~\ref{L:latop}\ref{L:latop:closed}, \ref{L:latop:cup2}, and~\ref{L:latop:cap}. The map $\phi$ is clearly strictly increasing. To see that it is measurable, write $\phi(A)=\sum_{x\in{\mathcal X}}\bm 1_A(x)w(x)=\sum_{x\in{\mathcal X}}\bm (1-d(x,A))w(x)$ and observe that $A\mapsto d(x,A)$ is continuous and hence measurable. {\rm e}nd{proof} \appendix \section{Extension of Dedekind complete lattices} \begin{proposition} \label{P:ext Dedekind} Let $(S_0,\le)$ be a Dedekind complete lattice equipped with a $\sigma$-algebra~${\mathcal S}_0$, and assume that the following conditions hold: \begin{enumerate}[label={\rm(A\arabic*$_0$)}] \item\label{A1_0} The set $\{(x,y)\in S_0^2\colon x\le y\}$ lies in the product $\sigma$-algebra ${\mathcal S}_0^2$. \item\label{A2_0} For every measurable subset $A\in{\mathcal S}_0$, the sets \[ \{(x_1,x_2,\ldots)\colon \sup\{x_1,x_2,\ldots\} \in A\} \quad\text{and}\quad \{(x_1,x_2,\ldots)\colon \inf\{x_1,x_2,\ldots\} \in A\} \] both lie in ${\mathcal S}_0^\infty$.\footnote{Here $\sup$ and $\inf$ refer to the operations on $S_0$. If $(x_1,x_2,\ldots)$ is a sequence in $S_0$ for which $\sup_n x_n$ does not exist, then the condition $\sup_n x_n \in A$ is by convention not satisfied. Similarly for $\inf_n x_n$. In particular, by considering $A=S_0$, condition \ref{A2_0} implies that the set of sequences which admit a supremum and/or infimum is measurable, i.e.~lies in ${\mathcal S}_0^\infty$.} \item\label{A3_0} There exists a strictly increasing measurable map $\phi_0:S_0\to{\mathbb R}$. {\rm e}nd{enumerate} Define $S=S_0\cup\{-\infty,+\infty\}$, where $-\infty$ and $+\infty$ are not elements of $S_0$, and define ${\mathcal S}={\mathcal S}_0\vee\sigma(\{-\infty\},\{+\infty\})$. Extend the order $\le$ to $S$ by declaring $-\infty$ ($+\infty$) a lower (upper) bound on $S_0$. Then $(S,\le)$ with the $\sigma$-algebra ${\mathcal S}$ satisfies \ref{A1}--\ref{A3}. {\rm e}nd{proposition} \begin{proof} The set $\{(x,y)\in S^2\colon x\le y\}$ is the union of $\{(x,y)\in S_0^2\colon x\le y\}$, $\{-\infty\}\times S$, and $S\times\{+\infty\}$. It is therefore measurable, so~\ref{A1} holds. Next, let $A_{\rm sup}\subseteq S_0^\infty$ be the set of all sequences of elements in $S_0$ which admit a supremum in $S_0$. Condition~\ref{A2_0} implies that this set is measurable, $A_{\rm sup}\in{\mathcal S}_0^\infty$. It is easy to check that the countable supremum map $\varphi$ on $S^\infty$ is given by \[ \varphi((x_n)_{n\in{\mathbb N}}) = \begin{cases} \sup_n x_n, & (x_n)_{n\in{\mathbb N}}\in A_{\rm sup}\\ +\infty, & (x_n)_{n\in{\mathbb N}}\notin A_{\rm sup}. 
\end{cases}
\]
It then follows from \ref{A2_0} that $\varphi$ is $({\mathcal S}^\infty,{\mathcal S})$-measurable. The countable infimum map on $S^\infty$ is similarly shown to be measurable. This proves \ref{A2}. Finally, by replacing $\phi_0$ by $\frac{2}{\pi}\arctan(\phi_0)$ if necessary, we may assume that $\phi_0$ takes values in the interval $[-1,1]$. The map $\phi:S\to{\mathbb R}$ defined by $\phi(x)=\phi_0(x)$ for $x\in S_0$, $\phi(-\infty)=-2$, and $\phi(+\infty)=+2$, is then a strictly increasing measurable map. Thus~\ref{A3} holds.
\end{proof}
\end{document}
\begin{document} \title{Tight complexity lower bounds for integer linear programming with few constraints}
\begin{abstract}
We consider the standard {\textsc{ILP Feasibility}} problem: given an integer linear program of the form $\{A\ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}},\, \ensuremath{\mathbf{x}}\geqslant 0\}$, where $A$ is an integer matrix with $k$ rows and $\ell$ columns, $\ensuremath{\mathbf{x}}$ is a vector of $\ell$ variables, and $\ensuremath{\mathbf{b}}$ is a vector of $k$ integers, we ask whether there exists $\ensuremath{\mathbf{x}}\in {\ensuremath{\mathbb{N}}}^\ell$ that satisfies $A\ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}$. Each row of $A$ specifies one linear {\em{constraint}} on $\ensuremath{\mathbf{x}}$; our goal is to study the complexity of {\textsc{ILP Feasibility}} when both $k$, the number of constraints, and $\|A\|_\infty$, the largest absolute value of an entry in $A$, are small. Papadimitriou~\cite{Papadimitriou81} was the first to give a fixed-parameter algorithm for {\textsc{ILP Feasibility}} under parameterization by the number of constraints that runs in time $\left((\|A\|_\infty+\|\ensuremath{\mathbf{b}}\|_\infty) \cdot k\right)^{\ensuremath{\mathcal{O}}(k^2)}$. This was very recently improved by Eisenbrand and Weismantel~\cite{EisenbrandW18}, who used the Steinitz lemma to design an algorithm with running time $(k\|A\|_\infty)^{\ensuremath{\mathcal{O}}(k)}\cdot \|\ensuremath{\mathbf{b}}\|_\infty^2$, which was subsequently sharpened by Jansen and Rohwedder~\cite{JansenR18} to $\ensuremath{\mathcal{O}}(k\|A\|_\infty)^{k}\cdot \log \|\ensuremath{\mathbf{b}}\|_\infty$. We prove that for $\{0,1\}$-matrices $A$, the running time of the algorithm of Eisenbrand and Weismantel is probably optimal: an algorithm with running time $2^{o(k\log k)}\cdot (\ell+\|\ensuremath{\mathbf{b}}\|_\infty)^{o(k)}$ would contradict the Exponential Time Hypothesis (ETH). This improves previous non-tight lower bounds of Fomin et al.~\cite{FominPRS16}.

We then consider integer linear programs that may have many constraints, provided they are structured in a ``shallow'' way. Precisely, we consider the parameter {\em{dual treedepth}} of the matrix $A$, denoted $\mathrm{td}_D(A)$, which is the treedepth of the graph over the rows of $A$, where two rows are adjacent if in some column they simultaneously contain a non-zero entry. It was recently shown by Kouteck\'y et al.~\cite{KouteckyLO18} that {\textsc{ILP Feasibility}} can be solved in time $\|A\|_\infty^{2^{\ensuremath{\mathcal{O}}(\mathrm{td}_D(A))}}\cdot (k+\ell+\log \|\ensuremath{\mathbf{b}}\|_\infty)^{\ensuremath{\mathcal{O}}(1)}$. We present a streamlined proof of this fact and prove that, again, this running time is probably optimal: even assuming that all entries of $A$ and $\ensuremath{\mathbf{b}}$ are in $\{-1,0,1\}$, the existence of an algorithm with running time $2^{2^{o(\mathrm{td}_D(A))}}\cdot (k+\ell)^{\ensuremath{\mathcal{O}}(1)}$ would contradict the ETH.
\end{abstract}
\vskip -1cm
\begin{picture}(0,0)
\put(392,10) {\hbox{\includegraphics[width=40px]{logo-erc.jpg}}}
\put(382,-50) {\hbox{\includegraphics[width=60px]{logo-eu.pdf}}}
\end{picture}

\section{Introduction}

Integer linear programming (ILP) is a powerful technique used in countless algorithmic results of theoretical importance, and it is applied routinely to thousands of practical computational problems every day.
Despite the problem being {\textsc{NP}}-hard in general, practical ILP solvers excel in solving real-life instances with thousands of variables and constraints. This can be partly explained by applying a variety of subroutines, often based on heuristic approaches, that identify and exploit structure in the input in order to apply the best suited algorithmic strategies. A theoretical explanation of this phenomenon would of course be hard to formulate, but one approach is to use the paradigm of {\em{parameterized complexity}}. Namely, the idea is to design algorithms that perform efficiently when certain relevant structural parameters of the input have moderate values. In this direction, probably the most significant is the classic result of Lenstra~\cite{lenstra}, who proved that {\textsc{ILP Optimization}} is {\em{fixed-parameter tractable}} when parameterized by the number of variables $\ell$. That is, it can be solved in time $f(\ell)\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$, where $f$ is some function and $|I|$ is the total bitsize of the input; we shall use the previous notation throughout the whole manuscript. Subsequent work in this direction~\cite{frank-tardos,kannan} improved the dependence of the running time on $\ell$ to $f(\ell)\leqslantqslant 2^{\ensuremath{\mathcal{O}}(\ell \log \ell)}$. In this work we turn to a different structural aspect and study ILPs that have few {\em{constraints}}, as opposed to few {\em{variables}} as in the setting considered by Lenstra. Formally, we consider the parameterization by $k$, the number of {\em{constraints}} (rows of the input matrix $A$), and $\|A\|_\infty$, the maximum absolute value over all entries in $A$. The situation when the number of constraints is significantly smaller than the number of variables appears naturally in many relevant settings. For instance, to encode {\textsc{Subset Sum}} as an instance of {\textsc{ILP Feasibility}} it suffices to introduce a $\{0,1\}$-variable $x_i$ for every input number $s_i$, and then set only one constraint: $\sum_{i=1}^n s_i x_i = t$, where $t$ is the target value. Note that the fact that {\textsc{Subset Sum}} is {\textsc{NP}}-hard for the binary encoding of the input and polynomial-time solvable for the unary encoding, explains why $\|A\|_\infty$ is also a relevant parameter for the complexity of the problem. Integer linear programs with few constraints and many variables arise most often in the study of knapsack-like and scheduling problems via the concept of so-called {\em{configuration ILPs}}, in the context of approximation and parameterized algorithms. \subparagraph*{Parameterization by the number of constraints.} Probably the first to study the complexity of integer linear programming with few constraints was Papadimitriou~\cite{Papadimitriou81}, who already in 1981 observed the following. Consider an ILP of the standard form $\{A\ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}},\, \ensuremath{\mathbf{x}}\geqslantqslant 0\}$, where $A$ is an integer matrix with $k$ rows (constraints) and $\ell$ columns (variables), $\ensuremath{\mathbf{x}}$ is a vector of integer variables, and $\ensuremath{\mathbf{b}}$ is a vector of integers. Papadimitriou proved that assuming such an ILP is feasible, it admits a solution with all variables bounded by $B=\ell \cdot ((\|A\|_\infty+\|\ensuremath{\mathbf{b}}\|_\infty) \cdot k)^{2k+1}$, which in turn can be found in time $\ensuremath{\mathcal{O}}((\ell B)^{k+1}\cdot |I|)$ using simple dynamic programming. 
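For a concrete toy example of the encoding above (the numbers are chosen here only for illustration), the {\textsc{Subset Sum}} instance with input numbers $3,5,7$ and target $t=12$ becomes the single constraint $3x_1+5x_2+7x_3=12$ over $\{0,1\}$-variables, which is satisfied by $(x_1,x_2,x_3)=(0,1,1)$.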
Noting that by removing duplicate columns one can assume that $\ell\leqslantqslant (2\|A\|_\infty+1)^k$, this yields an algorithm with running time $((\|A\|_\infty + \|\ensuremath{\mathbf{b}}\|_\infty)\cdot k)^{\ensuremath{\mathcal{O}}(k^2)}$. The approach can be lifted to give an algorithm with a similar running time bound also for the {\textsc{ILP Optimization}} problem, where instead of finding any feasible solution $\ensuremath{\mathbf{x}}$, we look for one that maximizes the value $\ensuremath{\mathbf{w}}^{\intercal}\ensuremath{\mathbf{x}}$ for a given optimization goal vector $\ensuremath{\mathbf{w}}$. The result of Papadimitriou was recently improved by Eisenbrand and Weismantel~\cite{EisenbrandW18}, who used the Steinitz Lemma to give an amazingly elegant algorithm solving the {\textsc{ILP Optimization}} problem (and thus also the {\textsc{ILP Feasibility}} problem) for a given instance $\{\max \ensuremath{\mathbf{w}}^{\intercal} \ensuremath{\mathbf{x}}\colon A\ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}},\, \ensuremath{\mathbf{x}}\geqslantqslant 0\}$ with $k$ constraints in time $(k\|A\|_\infty)^{\ensuremath{\mathcal{O}}(k)}\cdot \|b\|^2_\infty$. This running time has been subsequently refined by Jansen and Rohwedder~\cite{JansenR18} to $\ensuremath{\mathcal{O}}(k\|A\|_\infty)^{2k}\cdot \log \|\ensuremath{\mathbf{b}}\|_\infty$ in the case of {\textsc{ILP Optimization}}, and to $\ensuremath{\mathcal{O}}(k\|A\|_\infty)^{k}\cdot \log \|\ensuremath{\mathbf{b}}\|_\infty$ in the case of {\textsc{ILP Feasibility}}.\footnote{Throughout, $\log$ denotes the binary logarithm.}\looseness=-1 From the point of view of fine-grained parameterized complexity, this raises the question of whether the parametric factor $\ensuremath{\mathcal{O}}(k\|A\|_\infty)^{k}$ is the best possible. Jansen and Rohwedder~\cite{JansenR18} studied this question under the assumption that $k$ is a fixed constant and $\|A\|_\infty$ is the relevant parameter. They proved that assuming the Strong Exponential Time Hypothesis (SETH), for every fixed $k$ there is no algorithm with running time $(k\cdot (\|A\|_\infty+\|\ensuremath{\mathbf{b}}\|_\infty))^{k-\delta}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$, for any $\delta>0$. Note that as $k$ is considered a fixed constant, this essentially shows that the degree of $\|A\|_\infty$ needs to be at least $k$, but does not exclude algorithms with running time of the form $\|A\|_\infty^{\ensuremath{\mathcal{O}}(k)}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$, or $2^{\ensuremath{\mathcal{O}}(k)}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$ when all entries in the input matrix $A$ are in $\{-1,0,1\}$. On the other hand, the algorithms of~\cite{EisenbrandW18,JansenR18} provide only an upper bound of $2^{\ensuremath{\mathcal{O}}(k\log k)}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$ in the latter setting. As observed by Fomin et al.~\cite{FominPRS16}, a trivial encoding of {\textsc{3SAT}} as an ILP shows a lower bound of $2^{o(k)}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$ for instances with $A$ having entries only in $\{0,1\}$, $\ensuremath{\mathbf{b}}$ having entries only in $\{0,1,2,3\}$, and $\ell =\ensuremath{\mathcal{O}}(k)$. This still leaves a significant gap between the $2^{o(k)}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$ lower bound and the $2^{\ensuremath{\mathcal{O}}(k\log k)}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$ upper bound. 
\subparagraph*{Parameterization by the dual treedepth.} A related, recent line of research concerns ILPs that may have many constraints, but these constraints need to be somehow organized in a structured, ``shallow'' way. It started with a result of Hemmecke et al.~\cite{HemmeckeOR13}, who gave a fixed-parameter tractable algorithm for solving the so-called {\em{$n$-fold ILPs}}. An $n$-fold ILP is an ILP where the constraint matrix is of the form $$A=\begin{pmatrix} B & B & \ldots & B\\ C & 0 & \cdots & 0 \\ 0 & C & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & C \end{pmatrix}$$ and the considered parameters are the dimensions of matrices $B$ and $C$, as well as $\|A\|_{\infty}$. The running time obtained by Hemmecke et al. is $\|A\|_{\infty}^{\ensuremath{\mathcal{O}}(k^3)}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$ when all these dimensions are bounded by $k$. See~\cite{HemmeckeOR13} and the recent improvements of Eisenbrand et al.~\cite{EisenbrandHKKLO19} for more refined running time bounds expressed in terms of particular dimensions. The result of Hemmecke et al.~\cite{HemmeckeOR13} quickly led to multiple improvements in the best known upper bounds for several parameterized problems, where the technique of configuration ILPs is applicable~\cite{KnopK18,KnopKM17b,KnopKM17a}. Recently, the technique was also applied to improve the running times of several approximation schemes for scheduling problems~\cite{jansen-ptas}. Chen and Marx~\cite{ChenM18} introduced a more general concept of {\em{tree-fold ILPs}}, where the ``star-like'' structure of an $n$-fold ILP is generalized to any bounded-depth rooted tree, and they showed that it retains relevant fixed-parameter tractability results. This idea was followed on by Eisenbrand et al.~\cite{EisenbrandHK18} and by Kouteck\'y et al.~\cite{KouteckyLO18}, whose further generalizations essentially boil down to considering a structural parameter called the {\em{dual treedepth}} of the input matrix $A$. This parameter, denoted $\mathrm{td}_D(A)$, is the smallest number $h$ such that the rows of $A$ can be organized into a rooted forest of height $h$ with the following property: whenever two rows have non-zero entries in the same column, one is the ancestor of the other in the forest. As shown explicitly by Kouteck\'y et al.~\cite{KouteckyLO18} and somewhat implicitly by Eisenbrand et al.~\cite{EisenbrandHK18}, {\textsc{ILP Optimization}} can be solved in fixed-parameter time when parameterized by $\|A\|_{\infty}$ and $\mathrm{td}_D(A)$. For more detailed discussion of algorithmic implications and theory of block-structured integer programs we refer the reader to a recent survey~\cite{Chen19}. \subparagraph*{Our results.} For the parameterization by the number of constraints $k$, we close the above mentioned complexity gap by proving the following optimality result. \begin{theorem}\label{thm:main} Assuming ETH, there is no algorithm that would solve any {\textsc{ILP feasibility}} instance $\{A \ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}, \ensuremath{\mathbf{x}}\geqslantqslant 0\}$ with $A \in \{0,1\}^{k \times \ell}$, $\ensuremath{\mathbf{b}} \in {\ensuremath{\mathbb{N}}}^k$, and $\ell,\|\ensuremath{\mathbf{b}}\|_\infty =\ensuremath{\mathcal{O}}(k \log k)$ in time~$2^{o(k\log k)}$. 
\end{theorem} This shows that the algorithms of~\cite{EisenbrandW18,JansenR18} have the essentially optimal running time of $2^{\ensuremath{\mathcal{O}}(k\log k)}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$ also in the regime where $\|A\|_\infty$ is a constant and the number of constraints $k$ is the relevant parameter. We can also reduce the coefficients in the target vector $\ensuremath{\mathbf{b}}$ to constant at the cost of adding negative entries to $A$: \begin{corollary}\label{cor:main} Assuming ETH, there is no algorithm that would solve any {\textsc{ILP feasibility}} instance $\{A \ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}, \ensuremath{\mathbf{x}}\geqslantqslant 0\}$ with $A \in \{-1,0,1\}^{k \times \ell}$, $\ensuremath{\mathbf{b}} \in \{0,1\}^k$ and $\ell=\ensuremath{\mathcal{O}}(k \log k)$ in time~$2^{o(k\log k)}$. \end{corollary} The same cannot be done for non-negative matrices $A$, since in this case Papadimitriou's algorithm is even simpler and works in time $\|\ensuremath{\mathbf{b}}\|_\infty^{\ensuremath{\mathcal{O}}(k)} \cdot |I|^{\ensuremath{\mathcal{O}}(1)}$. The reduction in Theorem~\ref{thm:main} is hence simultaneously tight against this algorithm (since for $\|\ensuremath{\mathbf{b}}\|_\infty =\ensuremath{\mathcal{O}}(k \log k)$ the bound $\|\ensuremath{\mathbf{b}}\|_\infty^{\ensuremath{\mathcal{O}}(k)}$ is $2^{\ensuremath{\mathcal{O}}(k \log k)}$). The main ingredient of the proof of Theorem~\ref{thm:main} is a certain quaint combinatorial construction --- {\em{detecting matrices}} introduced by Lindstr\"om~\cite{Lindstrom65} --- that provides a general way for compressing a system $A\ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}$ with $k$ equalities and bounded targets $\|\ensuremath{\mathbf{b}}\|_\infty \leqslantqslant d$ into $\ensuremath{\mathcal{O}}(k/\log_d k)$ equalities (with unbounded targets). Each new equality is a linear combination of the original ones; in fact, just taking $\ensuremath{\mathcal{O}}(k/\log_d k)$ sums of random subsets of the original equalities suffices, but we also provide a deterministic construction taking $\ensuremath{\mathcal{O}}(dk/\log_d k)$ such subsets. By composing such a compression procedure for $d=4$ with a standard reduction from {\textsc{(3,4)SAT}} --- a variant of {\textsc{3SAT}} where every variable occurs at most $4$ times --- to {\textsc{ILP Feasibility}}, we obtain a reduction that given an instance of {\textsc{(3,4)SAT}} with $n$ variables and $m$ clauses, produces an equivalent instance of {\textsc{ILP Feasibility}} with $k=\ensuremath{\mathcal{O}}((n+m)/\log (n+m))$ constraints. Since $2^{o(k\log k)}=2^{o(n+m)}$, we would obtain a $2^{o(n+m)}$-time algorithm for {\textsc{(3,4)SAT}}, which is known to contradict ETH. We note that detecting matrices were recently used by two of the authors in the context of different lower bounds based on ETH~\cite{BonamyKPSW17}. For the parameterization by the dual treedepth, we first streamline the presentation of the approach of Kouteck\'y et al.~\cite{KouteckyLO18} and clarify that the parametric factor in the running time is doubly-exponential in the treedepth. The key ingredient here is the upper bound on $\ell_1$-norms of the elements of the {\em{Graver basis}} of the input matrix $A$, expressed in terms of $\|A\|_\infty$ and $\mathrm{td}_D(A)$. Using standard textbook bounds for Graver bases and the recursive definition of treedepth, we prove that these $\ell_1$-norms can be bounded by $(2\|A\|_{\infty}+1)^{2^{\mathrm{td}_D(A)}-1}$. 
This, combined with the machinery developed by Kouteck\'y et al.~\cite{KouteckyLO18}, implies the following. \begin{theorem}\label{thm:dualtd-upper-bound} There is an algorithm that solves any given {\textsc{ILP Optimization}} instance $I=\{\max \ensuremath{\mathbf{w}}^\intercal \ensuremath{\mathbf{x}} \colon A\ensuremath{\mathbf{x}}=\ensuremath{\mathbf{b}}, \ensuremath{\mathbf{l}}\leqslant \ensuremath{\mathbf{x}}\leqslant \ensuremath{\mathbf{u}}\}$ in time $\|A\|_\infty^{2^{\ensuremath{\mathcal{O}}(\mathrm{td}_D(A))}}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$. \end{theorem} We remark that the running time as outlined above also follows from a fine analysis of the reasoning presented in~\cite{KouteckyLO18}, but the intermediate step of using tree-fold ILPs in~\cite{KouteckyLO18} makes tracking parametric dependencies harder to follow. We next show that, perhaps somewhat surprisingly, the running time provided by Theorem~\ref{thm:dualtd-upper-bound} is optimum. Namely, we have the following lower bound. \begin{theorem}\label{thm:dualtd-lower-bound} Assuming ETH, there is no algorithm that would solve any {\textsc{ILP Feasibility}} instance $I=\{A\ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}, \ensuremath{\mathbf{x}} \geqslant 0\}$, where all entries of $A$ and $\ensuremath{\mathbf{b}}$ are in $\{-1,0,1\}$, in time~$2^{2^{o(\mathrm{td}_D(A))}}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$. \end{theorem} To prove Theorem~\ref{thm:dualtd-lower-bound} we reduce from the {\textsc{Subset Sum}} problem. The key idea is that we are able to ``encode'' any positive integer $s$ using an ILP with dual treedepth $\ensuremath{\mathcal{O}}(\log \log s)$. This lower bound has been recently generalized by Eisenbrand et al.~\cite{EisenbrandHKKLO19} to include a parameter they call topological height.
\section{Parameterization by the number of constraints} \subsection{Detecting matrices} Our main tool is the use of so-called \emph{detecting matrices}, first studied by Lindstr\"{o}m~\cite{Lindstrom65}. They can be explained via the following coin-weighing puzzle: given $m$ coins with weights in $\{0,1,\dots,d-1\}$, we want to deduce the weight of each coin with as few weighings as possible. We have a spring scale, so in one weighing we can exactly determine the sum of weights of any subset of the coins. While the naive strategy---weigh coins one by one---yields $m$ weighings, it is actually possible to find a solution using $\ensuremath{\mathcal{O}}(m/\log_d m)$ weighings. This number is asymptotically optimal, as each weighing provides $\Theta(\log m)$ bits of information, so fewer weighings would not be enough to distinguish all $d^m$ possible weight functions. Probably the easiest way to construct such a strategy is using the probabilistic method. It turns out that querying $\ensuremath{\mathcal{O}}(m/\log_d m)$ random subsets of coins with high probability provides enough information to determine the weight of each coin. This is because a random subset distinguishes any of the $\ensuremath{\mathcal{O}}(d^m \cdot d^m)$ non-equal pairs of weight functions with probability at least $\frac{1}{2}$, but pairs of weight functions that are close to each other are few, while pairs of weight functions that are far from each other have a significantly better probability than $\frac{1}{2}$ of being distinguished. Note that thus we construct a {\em{non-adaptive}} strategy: the subsets of coins to be weighed can be determined and fixed at the very start. We refer the reader to e.g.~\cite[Corollary 2]{GrebinskiK00} for full details, and we remark that the last two authors recently used detecting matrices in the context of algorithmic lower bounds for the {\textsc{Multicoloring}} problem~\cite{BonamyKPSW17}. Viewing each tuple of coin weights as a vector $\ve{v}\in \{0,\dots,d-1\}^m$, each weighing returns the value $\ve{a}^\intercal \ve{v}$ for the characteristic vector $\ve{a} \in \{0,1\}^m$ of some subset of coins. Thus $k$ weighings give the vector of values $M \ve{v}$ for some $\{0,1\}$-matrix $M$ with $k$ rows and $m$ columns. An equivalent formulation is then to ask for a $\{0,1\}$-matrix $M$ with $m$ columns, such that knowing the vector $M \ve{v}$ uniquely determines any $\ve{v}\in\{0,\dots,d-1\}^m$. Such an $M$ is called a \emph{$d$-detecting matrix} and we seek to minimize its number of rows (weighings) $k$. Lindstr\"{o}m gave a deterministic construction and proved the bound on $k$ to be tight. See also Bshouty~\cite{Bshouty09} for a more direct and general construction using Fourier analysis. \begin{theorem}[\cite{Lindstrom65}]\label{thm:detectingBounded} For all $d,m\in{\ensuremath{\mathbb{N}}}$, there is a $\{0,1\}$-matrix $M$ with $m$ columns and $k\leqslant\frac{2m \log d}{\log m}(1+o(1))$ rows such that for any $\ve{u},\ve{v}\in \{0,\dots,d-1\}^m$, if $M\ve{u} = M\ve{v}$ then~$\ve{u}=\ve{v}$. Moreover, such matrix $M$ can be constructed in time polynomial in $dm$. \end{theorem} In other words, this allows us to check $m$ equalities between values in $\{0,\dots,d-1\}$ (i.e., corresponding coordinates of vectors $\ve{u}$ and $\ve{v}$) using only $\ensuremath{\mathcal{O}}(m/\log_d m)$ comparisons of sums of certain subsets of these values (i.e., coordinates of vectors $M\ve{u}$ and $M\ve{v}$).
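For intuition, the $d$-detecting property can be verified by exhaustive search on very small instances; the following sketch is purely illustrative (the function name and the brute-force approach are ours and are not part of the constructions that follow).
\begin{verbatim}
# Brute-force check of the d-detecting property: M*u = M*v with
# u, v in {0, ..., d-1}^m must imply u = v.  Only feasible for tiny d, m.
from itertools import product

def is_detecting(M, d):
    m = len(M[0])
    seen = {}
    for v in product(range(d), repeat=m):
        key = tuple(sum(row[j] * v[j] for j in range(m)) for row in M)
        if key in seen and seen[key] != v:
            return False  # two distinct weight vectors give the same weighings
        seen[key] = v
    return True

# The identity matrix is trivially d-detecting for every d, while a single
# all-ones row is not even 2-detecting for m >= 2:
print(is_detecting([[1, 0, 0], [0, 1, 0], [0, 0, 1]], d=4))  # True
print(is_detecting([[1, 1]], d=2))                           # False
\end{verbatim}
Theorem~\ref{thm:detectingBounded} asserts that, for large $m$, far fewer than $m$ rows suffice; the brute-force check above is of course only a sanity test, not a construction.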
For an ILP instance $A \ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}$ with $\|\ensuremath{\mathbf{b}}\|_\infty \leqslantqslant d$ and $m$ constraints, we may use this idea to check the equality on each of the $m$ coordinates of $A \ensuremath{\mathbf{x}}$ using only $\ensuremath{\mathcal{O}}(m/\log_d m)$ constraints. Indeed, the intuition is that if $M$ is a $d$-detecting matrix, then we can rewrite $A \ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}$ as $M A\ensuremath{\mathbf{x}} = M \ensuremath{\mathbf{b}}$ and check the latter --- which involves $\ensuremath{\mathcal{O}}(m/\log_d m)$ $\{0,1\}$-combinations of the original constraints. This is the core of our approach. However, there is one subtle caveat: in order to claim that the assertions $A \ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}$ and $M A\ensuremath{\mathbf{x}} = M \ensuremath{\mathbf{b}}$ are equivalent, we would need to ensure that $\|A\ve{x}\|_\infty\leqslantqslant d$ for an arbitrary vector $\ve{x} \in {\ensuremath{\mathbb{N}}}^n$. One solution is to use the fact that a uniformly random $\{0,1\}$-matrix has a stronger ``detecting'' property: it will, with high probability, distinguish all vectors of low $\ell_1$-norm, as shown by Grebinski and Kucherov~\cite{GrebinskiK00}. \begin{lemma}[\cite{GrebinskiK00}]\label{lem:detectingRandom} For all $d,m\in{\ensuremath{\mathbb{N}}}$, there exists a $\{0,1\}$-matrix $M$ with $m$ columns and $k\leqslantqslant\frac{4m \log (d+1)}{\log m}(1+o(1))$ rows such that for any $\ve{u},\ve{v}\in{\ensuremath{\mathbb{N}}}^m$ satisfying $\|\ve{u}\|_1,\|\ve{v}\|_1 \leqslantqslant dm$, if $M\ve{u} = M\ve{v}$ then $\ve{u}=\ve{v}$. Moreover, such matrix $M$ can be computed in randomized polynomial time (in $dm$). \end{lemma} Note that in Lemma~\ref{lem:detectingRandom}, we do not actually have to assume bounds on one of the two vectors: it suffices to assume $\ve{u} \in {\ensuremath{\mathbb{N}}}^m$ and $\|\ve{v}\|_1 \leqslantqslant dm$, because simply adding a single row full of ones to $M$ guarantees $\|\ve{u}\|_1=\|\ve{v}\|_1$. Therefore as long as $A$ is non-negative and $\|\ensuremath{\mathbf{b}}\|_\infty \leqslantqslant d$, it suffices to check $MA\ensuremath{\mathbf{x}} = M\ensuremath{\mathbf{b}}$. Unfortunately, to the best of our knowledge, no deterministic construction is known for Lemma~\ref{lem:detectingRandom}. We remark that Bshouty gave a deterministic, but adaptive detecting strategy~\cite{Bshouty09}; that is, in terms of coin weighing, consecutive queries on coins may depend on results of previous weighings. Instead, we show that a different, recursive construction by Cantor and Mills~\cite{Cantor66} for $2$-detecting matrices can be adapted so that no bounds (other than non-negativity) are assumed for one of the vectors, while the other must have all coefficients in $\{0,1,\dots,d-1\}$. \begin{lemma}\label{lem:detectingOne} For all $d,m\in{\ensuremath{\mathbb{N}}}$, there exists a $\{0,1\}$-matrix $M$ with $m$ columns and \mbox{$k\leqslantqslant\frac{md\log d}{\log m}(1+o(1))$} rows such that for any $\ve{u}\in{\ensuremath{\mathbb{N}}}^m$ and $\ve{v}\in\{0,1,\dots,d-1\}^m$, if $M\ve{u} = M\ve{v}$ then $\ve{u}=\ve{v}$. Moreover, such matrix $M$ can be computed in time polynomial in~$dm$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:detectingOne}] Fix $d\geqslantqslant 2$. 
We construct inductively for each $i\in{\ensuremath{\mathbb{N}}}$ a certain $\{0,1\}$-matrix $M_i$ with $k_i$ rows and $m_i$ columns such that for any $\ve{u}\in{\ensuremath{\mathbb{N}}}^{m_i}$ and $\ve{v}\in\{0,1,\dots,d-1\}^{m_i}$, if $M_i\ve{u} = M_i\ve{v}$ then $\ve{u}=\ve{v}$. For $i=1$ we use the $d\times d$ identity matrix, that is, $k_1=m_1=d$. \newcommand{\vx}[1]{{\ensuremath{\mathbf{x}}^{(#1)}}} \newcommand{\vxp}[1]{{\ensuremath{\mathbf{x}}'^{(#1)}}} Let $B=M_i$ be such a matrix for some $i\geqslant 1$. We claim that the following matrix $M=M_{i+1}$ with $k_{i+1}=d\cdot k_{i}+d$ rows and $m_{i+1} = d\cdot m_{i}+k_{i}$ columns satisfies the same condition: \[ M = \begin{pmatrix} B & B &\cdots& B & I \\ B & J-B & & & \\ \vdots& &\ddots& & \\ B & & & J-B & \\ \hline &1\dots1& & & \\ & &\ddots& & \\ & & &1\dots1& \\ & & & &1\dots1\\ \end{pmatrix} \] Here, $I$ is the $k_i\times k_i$ identity matrix, $J$ is the $k_i\times m_i$ matrix with all entries equal to $1$, and all empty blocks are 0. Take any $\ve{u}\in{\ensuremath{\mathbb{N}}}^{m_{i+1}}$ and $\ve{v}\in\{0,1,\dots,d-1\}^{m_{i+1}}$, and write \begin{align*} \ve{u}^\intercal = (\vx{1}^\intercal \,\mid \dots \mid \vx{d}^\intercal\, \mid {\ve{z}}^\intercal\,) & \textrm{ for }\vx{1}\,,\dots,\vx{d}\, \in{\ensuremath{\mathbb{N}}}^{m_{i}}, \ve{z} \in {\ensuremath{\mathbb{N}}}^{k_i};\\ \ve{v}^\intercal=(\vxp{1}^\intercal \mid \dots \mid \vxp{d}^\intercal \mid {\ve{z}'}^\intercal) & \textrm{ for }\vxp{1},\dots,\vxp{d} \in\{0,\dots,d-1\}^{m_{i}}, \ve{z}' \in \{0,\dots,d-1\}^{k_i}. \end{align*} Then $M\ve{u} = M\ve{v}$ is equivalent to: \begin{align} \sum_{i=1}^d B \vx{i} + I \ve{z} &=\sum_{i=1}^d B \vxp{i} + I \ve{z}' \\ B \vx{1} + (J-B) \vx{i} &= B \vxp{1} + (J-B) \vxp{i} &\quad\quad (i=2\dots d)\\ \sum_{j=1}^{m_i} x^{(i)}_j &= \sum_{j=1}^{m_i} x'^{(i)}_j &\quad\quad (i=2\dots d)\\ \sum_{j=1}^{k_i} z_j &= \sum_{j=1}^{k_i} z_j' \vspace*{-10pt}\end{align} Equation (3) is equivalent to $J \vx{i} = J \vxp{i}$ (for $i=2\dots d$), thus the sum of (1) with all (2) equations implies $d B \vx{1} + \ve{z} = dB\vxp{1} + \ve{z}'$ and hence $z_j \equiv z_j' \pmod{d}$ for all $j$. Since $z_j\geqslant 0$ and $z_j' \in \{0,\dots,d-1\}$, this implies $z_j\geqslant z_j'$. This together with (4) implies that in fact $z_j = z_j'$ for all $j$, and thus $\ve{z}=\ve{z}'$. Since $I\ve{z} = I \ve{z}'$ and $J\vx{i} = J\vxp{i}$ ($i=2\dots d$), linear combinations of equations (1) and (2) imply that $B \vx{i} = B \vxp{i}$ for each $i=1\dots d$. By inductive assumption on $B$, this implies that $\vx{i}=\vxp{i}$ and hence $\ve{u}=\ve{v}$. We thus obtain $\{0,1\}$-matrices $M_i$ with $k_i$ rows and $m_i$ columns such that \begin{equation*} k_1=m_1=d\quad \textrm{and}\quad k_{i+1}=dk_{i}+d\quad \textrm{and}\quad m_{i+1} = dm_{i}+k_{i}. \end{equation*} Using a straightforward induction we can check the following explicit formulas for $k_i$ and $m_i$: \begin{equation*} k_i = \frac{d^{i+1}-d}{d-1}\quad\textrm{and}\quad m_i = \frac{(i-1) \cdot d^i}{d-1} + \frac{C d^i+d}{(d-1)^2},\quad \textrm{where }C=(d-1)(d-2). \end{equation*} Hence $k_i \leqslant \frac{d \cdot d^{i}}{d-1}$, $m_i \geqslant \frac{(i-1) \cdot d^{i}}{d-1}$, and $\log_d(m_i)\leqslant(i-1)(1+o(1))$, implying $k_i \leqslant \frac{m_i \cdot d}{\log_d m_i} (1+o(1))$. To interpolate between $m_i$ and $m_{i+1}$, one can join several of the constructed matrices into a block-diagonal matrix.
Formally, if $f(m)$ denotes the least $k$ such that a $k \times m$ matrix as above exists, then $f(m+m') \leqslant f(m)+f(m')$ and $f(m_i) \leqslant \frac{m_i \cdot d}{\log_d m_i} (1+o(1))$. Standard methods then allow us to show that $f(m) \leqslant \frac{m \cdot d}{\log_d m} (1+o(1))$, see e.g.~\cite[Theorem 2]{Cantor66}; see also~\cite{MarKha89} for a slightly more explicit construction for all $m$. \end{proof} We remark that the bounds in Theorem~\ref{thm:detectingBounded} and Lemma~\ref{lem:detectingRandom} were also shown to be tight. The matrices given by Lemma~\ref{lem:detectingOne} are in particular also $d$-detecting, hence the bound there is tight for $d=2$ (and tight up to an $\ensuremath{\mathcal{O}}(d)$ factor in general). Note also that we can relax the non-negativity constraint to requiring that $\ve{u}\in{\ensuremath{\mathbb{Z}}}^m$ is any integer vector with all entries lower bounded by $-\lfloor\frac{d}{2}\rfloor$ and $\ve{v}\in\{-\lfloor\frac{d}{2}\rfloor,\dots,\lfloor\frac{d}{2}\rfloor\}^m$. This is because $M \ve{u}=M\ve{v}$ is equivalent to $M(\ve{u}+\ve{c})=M(\ve{v}+\ve{c})$ where $\ve{c}$ is the constant $\lfloor\frac{d}{2}\rfloor$ vector. This allows us to use the same detecting matrix for such pairs of vectors as well. However, note that some lower bound on the coefficients of $\ve{u}$ is necessary, since even if we fix $\ve{v}=0$, the matrix $M$ has a non-trivial kernel, giving many non-zero vectors $\ve{u} \in {\ensuremath{\mathbb{Z}}}^m$ satisfying $M\ve{u}=M\ve{v}$. \subsection{Coefficient reduction} In further constructions, we will need a way to reduce coefficients in a given {\textsc{ILP Feasibility}} instance with a nonnegative constraint matrix $A$ to $\{0,1\}$. We now prove that this can be done in a standard way by replacing each constraint with $\ensuremath{\mathcal{O}}(\log \|A\|_\infty)$ constraints that check the original equality bit by bit. Here and throughout this paper we use the convention that for a vector $\ensuremath{\mathbf{x}}$, by $x_i$ we denote the $i$-th entry of $\ensuremath{\mathbf{x}}$. \begin{lemma}[Coefficient Reduction]\label{lem:deltaReduction} Consider an instance $\{A \ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}, \ensuremath{\mathbf{x}}\geqslant 0\}$ of {\textsc{ILP Feasibility}}, where $\ensuremath{\mathbf{b}} \in {\ensuremath{\mathbb{N}}}^k$ and $A$ is a nonnegative integer matrix with $k$ rows and $\ell$ columns. In polynomial time, this instance can be reduced to an equivalent instance $\{A' \ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}', \ensuremath{\mathbf{x}}\geqslant 0\}$ of {\textsc{ILP Feasibility}} where $A'$ is a $\{0,1\}$-matrix with $k' = \ensuremath{\mathcal{O}}(k \log \|A\|_\infty)$ rows and $\ell' = \ell+\ensuremath{\mathcal{O}}(k \log \|A\|_\infty)$ columns, and $\ensuremath{\mathbf{b}}' \in {\ensuremath{\mathbb{N}}}^{k'}$ is a vector with $\|\ensuremath{\mathbf{b}}'\|_\infty = \ensuremath{\mathcal{O}}(\|\ensuremath{\mathbf{b}}\|_\infty)$. \end{lemma} \begin{proof} Denote $\delta = \lceil \log (1+\|A\|_{\infty})\rceil = \ensuremath{\mathcal{O}}(\log \|A\|_\infty)$. Consider a single constraint $\ensuremath{\mathbf{a}}^\intercal \ensuremath{\mathbf{x}} = b$, where $\ensuremath{\mathbf{a}} \in {\ensuremath{\mathbb{N}}}^\ell$ is a row of $A$ and $b\in{\ensuremath{\mathbb{N}}}$ is an entry of $\ensuremath{\mathbf{b}}$. Let $a_i[j]$ be the $j$-th bit of $a_i$, the $i$-th entry of vector $\ensuremath{\mathbf{a}}$; similarly for $b$.
By choice of $\delta$, $\|\ensuremath{\mathbf{a}}\|_\infty \leqslant 2^\delta - 1$, so each entry of $\ensuremath{\mathbf{a}}$ has up to $\delta$ binary digits. Now, for $\ensuremath{\mathbf{x}}\in {\ensuremath{\mathbb{Z}}}^\ell$, the constraint $\ensuremath{\mathbf{a}}^\intercal\ensuremath{\mathbf{x}} = b$ is equivalent to \[ \sum_{j = 0}^{\delta-1} 2^j \left( \sum_{i = 1}^\ell a_i[j] \cdot x_i \right) = b\,. \] We rewrite this equation into $\delta$ equations, each responsible for verifying one bit. For this, we introduce $\delta-1$ carry variables $y_0,y_1,\ldots,y_{\delta-2}$ and emulate the standard algorithm for adding binary numbers by writing equations \[ y_{j-1} + \sum_{i = 1}^\ell a_i[j] \cdot x_i = b[j] + 2y_j \qquad\qquad\textrm{for } j = 0, \ldots, \delta - 1, \] where $y_{-1}$ and $y_{\delta-1}$ are replaced with $0$ and $b[\delta-1]$ is replaced with the number whose binary digits are (from the least significant): $b[\delta-1],b[\delta],b[\delta+1],\dots$ (we do this because $b$ may have more than $\delta$ digits). To get rid of the variable $y_j$ on the right-hand side, we let $B=2^{\lceil \log b\rceil}$ and introduce two new variables $y_j', y_j''$ for each carry variable $y_j$, with constraints \[ y_j + y_j' = B \quad\textrm{and}\quad y_j + y_j'' = B\quad\textrm{for }j=0,\ldots,\delta-2, \] which is equivalent to $y_j'=y_j''=B - y_j$. Hence the previous equations can be replaced by \[ y_{j-1} + \sum_{i = 1}^\ell a_i[j] \cdot x_i + y_j'+y_j''= b[j] + 2B \qquad\qquad\textrm{for } j = 0, \ldots, \delta - 1. \] We thus replace each row of $A$ with $2(\delta-1)+\delta$ rows and $3(\delta-1)$ auxiliary variables. \end{proof} \subsection{Proof of Theorem~\ref{thm:main}} The Exponential Time Hypothesis states that for some $c>0$, {\textsc{3SAT}} with $n$ variables cannot be solved in time $\ensuremath{\mathcal{O}}^*(2^{cn})$ (the $\ensuremath{\mathcal{O}}^*$ notation hides polynomial factors). It was introduced by Impagliazzo, Paturi, and Zane~\cite{seth} and developed by Impagliazzo and Paturi~\cite{eth} to become a central conjecture for proving tight lower bounds for the complexity of various problems. While the original statement considers the parameterization by the number of variables, the \emph{Sparsification Lemma}~\cite{eth} allows us to assume that the number of clauses is linear in the number of variables, and hence we have the following. \begin{theorem}[{see e.g.~\cite[Theorem 14.4]{platypus}}] \label{thm:eth-main} Unless ETH fails, there is no algorithm for {\textsc{3SAT}} that runs in time $2^{o(n+m)}$, where $n$ and $m$ denote the numbers of variables and clauses. \end{theorem} We now proceed to the proof of Theorem~\ref{thm:main}. Our first step is to decrease the number of occurrences of each variable. {\textsc{(3,4)SAT}} is the variant of {\textsc{3SAT}} where each clause uses exactly 3 different variables and every variable occurs in at most 4~clauses. Tovey~\cite{Tovey84} gave a linear reduction from {\textsc{3SAT}} to {\textsc{(3,4)SAT}}, i.e., an algorithm that, given an instance of {\textsc{3SAT}} with $n$ variables and $m$ clauses, in linear time constructs an equivalent instance of {\textsc{(3,4)SAT}} with $\ensuremath{\mathcal{O}}(n+m)$ variables and clauses. In combination with Theorem~\ref{thm:eth-main} this yields: \begin{corollary}\label{cor:34sat} Unless ETH fails, there is no algorithm for {\textsc{(3,4)SAT}} that runs in time $2^{o(n+m)}$, where $n$ and $m$ denote the numbers of variables and clauses, respectively.
\end{corollary} We now reduce {\textsc{(3,4)SAT}} to {\textsc{ILP Feasibility}}. A {\textsc{$(3,4)$SAT}} instance $\varphi$ with $n$ variables and $m$ clauses can be encoded in a standard way as an {\textsc{ILP Feasibility}} instance with $\ensuremath{\mathcal{O}}(n+m)$ variables and constraints as follows. For each formula variable $v$ we introduce two ILP variables $x_v$ and $x_{\neg v}$ with a constraint $x_v+x_{\neg v}=1$ (hence exactly one of them should be 1, the other 0). For each clause $c$ we introduce two auxiliary slack variables $y_c,z_c$ and two constraints: $y_c+z_c=2$ and $x_{\ell_1}+x_{\ell_2}+x_{\ell_3}+y_c = 3$, where $\ell_1,\ell_2,\ell_3$ are the three literals in~$c$. Since $y_c,z_c$ will not appear in any other constraints, the first constraint is equivalent to ensuring that $y_c \leqslant 2$, so the second constraint is equivalent to $x_{\ell_1}+x_{\ell_2}+x_{\ell_3} \geqslant 1$. This way, one can reduce in polynomial time a {\textsc{$(3,4)$SAT}} instance $\varphi$ with $n$ variables and $m$ clauses into an equivalent instance $\{\ensuremath{\mathbf{x}} \in {\ensuremath{\mathbb{Z}}}^{\ell} \mid A\ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}, \ensuremath{\mathbf{x}} \geqslant 0\}$ of {\textsc{ILP Feasibility}} where: \begin{itemize} \item the constraint matrix $A$ has $k:=n+2m$ rows and $\ell:=2n+2m$ columns; \item each entry in $A$ is zero or one; \item each row and column of $A$ contains at most $4$ non-zero entries; and \item the target vector $\ensuremath{\mathbf{b}}$ has all entries equal to $1$, $2$, or $3$. \end{itemize} \newcommand\var{{\text{var}}} \newcommand\cl{{\text{cl}}} We now reduce the obtained instance to another {\textsc{ILP Feasibility}} instance containing only $\ensuremath{\mathcal{O}}((n+m) / \log (n+m))$ constraints. Let $M$ be the detecting matrix given by Lemma~\ref{lem:detectingOne} for $d=4$ and the required number of columns ($m$ in the notation of the statement of Lemma~\ref{lem:detectingOne}) equal to the number of rows (constraints) of $A$, which is $k$. Then for any $\ensuremath{\mathbf{x}} \in {\ensuremath{\mathbb{N}}}^{\ell}$, we have $A \ensuremath{\mathbf{x}} \in {\ensuremath{\mathbb{N}}}^{k}$ (since $A$ is non-negative) and $\ensuremath{\mathbf{b}} \in \{0,\dots,d-1\}^{k}$, hence by Lemma~\ref{lem:detectingOne} we have that $A\ensuremath{\mathbf{x}}=\ensuremath{\mathbf{b}}$ if and only if $MA\ensuremath{\mathbf{x}} = M \ensuremath{\mathbf{b}}$. We conclude that the {\textsc{ILP Feasibility}} instance $\{\ensuremath{\mathbf{x}} \in {\ensuremath{\mathbb{Z}}}^{\ell} \mid A' \ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}', \ensuremath{\mathbf{x}} \geqslant 0\}$ with $A'=MA$ and $\ensuremath{\mathbf{b}}'=M\ensuremath{\mathbf{b}}$ is equivalent to the previous instance $\{\ensuremath{\mathbf{x}} \in {\ensuremath{\mathbb{Z}}}^{\ell} \mid A\ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}, \ensuremath{\mathbf{x}} \geqslant 0\}$. The new instance has the same number $\ell'=\ell=2n+2m$ of variables, but only $k'=\ensuremath{\mathcal{O}}(k /\log k)=\ensuremath{\mathcal{O}}((n+m) / \log (n+m))$ constraints. The entries of $\ensuremath{\mathbf{b}}'=M\ensuremath{\mathbf{b}}$ are non-negative and bounded by $k \cdot \|\ensuremath{\mathbf{b}}\|_\infty = \ensuremath{\mathcal{O}}(n+m)$. Similarly, entries of $A'=MA$ are non-negative, and since every column of $A$ has at most 4 non-zero entries, we get $\|A'\|_\infty \leqslant 4$.
To further reduce $\|A'\|_\infty$, we apply Lemma~\ref{lem:deltaReduction}, replacing each row of $A'$ by a constant number of $\{0,1\}$-rows and auxiliary variables. This way, we reduced in polynomial time a {\textsc{$(3,4)$SAT}} instance $\varphi$ with $n$ variables and $m$ clauses into an equivalent {\textsc{ILP Feasibility}} instance $\{\ensuremath{\mathbf{x}} \in {\ensuremath{\mathbb{Z}}}^{\ell''}\mid A''\ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}'',\ensuremath{\mathbf{x}}\geqslant 0\}$, where $A''$ is a $\{0,1\}$-matrix with $\ell'' = \ell'+\ensuremath{\mathcal{O}}(k')=\ensuremath{\mathcal{O}}(n+m)$ columns and $k''=\Theta(k')=\Theta((n+m) / \log (n+m))$ rows, while $\|\ensuremath{\mathbf{b}}''\|_\infty = \ensuremath{\mathcal{O}}(n+m)$. Hence $\ell'',\|\ensuremath{\mathbf{b}}''\|_\infty = \ensuremath{\mathcal{O}}(k'' \log k'')$. We are now in a position to finish the proof of Theorem~\ref{thm:main}. Suppose there is an algorithm for {\textsc{ILP Feasibility}} that works in time $2^{o(k'' \log k'')}$ on instances with $A \in \{0,1\}^{k''\times \ell''}$ and $\ell'',\|\ensuremath{\mathbf{b}}''\|_\infty = \ensuremath{\mathcal{O}}(k'' \log k'')$. Then applying the above reduction would solve {\textsc{(3,4)SAT}} instances with $N=n+m$ variables and clauses in time $2^{o((N/\log N) \cdot \log (N/\log N))} = 2^{o(N)}$, which contradicts ETH by Corollary~\ref{cor:34sat}. This concludes the proof of Theorem~\ref{thm:main}. \subsection{Reducing coefficients in the target vector} We now prove Corollary~\ref{cor:main}. That is, we show that in Theorem~\ref{thm:main}, the coefficients in the target vector $\ensuremath{\mathbf{b}}$ can be reduced to constants, at the cost of introducing negative ($-1$) coefficients in the matrix $A$. \begin{proof}[Proof of Corollary~\ref{cor:main}] Let $\{A \ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}, \ensuremath{\mathbf{x}}\geqslant 0\}$ be an instance given by Theorem~\ref{thm:main}, with $\ell,\|\ensuremath{\mathbf{b}}\|_\infty = \ensuremath{\mathcal{O}}(k \log k)$. Let $s := \lceil \log(\|\ensuremath{\mathbf{b}}\|_\infty + 1)\rceil$. To the system of linear equalities we add $s+1$ new variables $z,y_0,\dots,y_{s-1}$, with constraints $z=1$ and \[ z + y_0 + \cdots + y_{i-1} = y_i \qquad\qquad\qquad \forall i = 0, \ldots, s-1 \,, \] which force $z=1$ and $y_i = 2^i$. Then each original constraint $\ensuremath{\mathbf{a}}^\intercal \ensuremath{\mathbf{x}} = b$, where $\ensuremath{\mathbf{a}} \in {\ensuremath{\mathbb{N}}}^\ell$ is a row of $A$ and $b\in{\ensuremath{\mathbb{N}}}$ is an entry of $\ensuremath{\mathbf{b}}$, can be replaced by the constraint $\ensuremath{\mathbf{a}}^\intercal \ensuremath{\mathbf{x}} - \ve{c}^\intercal\ve{y} = 0$, where $\ve{c}$ is chosen so that $\ve{c}^\intercal \ve{y} = b$; that is, the $i$-th entry of $\ve{c}$ is 0 or 1 depending on the $i$-th bit of $b$.
In matrix form, we thus created the following instance (where $B \in \{-1,0\}^{k \times (1+s)}$ is the matrix corresponding to the binary encoding of $\ensuremath{\mathbf{b}}$, with the first column zero, since it corresponds to the variable $z$): \begin{center} $$ \left(\begin{NiceArray}{C|C}[name=Ap] A\rule[-20pt]{0pt}{50pt} & B\\\hline \hspace*{5em} & \small\begin{matrix}1&&&\\1&-1&&\\1&1&-1&\\1&1&1&-1\end{matrix}\\ \end{NiceArray}\right) = \left(\begin{NiceArray}{C} \small\begin{matrix}0\\0\\0\\\end{matrix}\rule[-20pt]{0pt}{50pt}\\\hline \small\begin{matrix}1\\0\\0\\0\\\end{matrix}\!\\ \end{NiceArray}\right) $$ \tikz[remember picture,overlay] \node at (Ap-1-1.north) {$\ell$}; \tikz[remember picture,overlay] \node at (Ap-1-2.north |- Ap-1-1.north) {$1+s$}; \tikz[remember picture,overlay] \node at ($(Ap-2-1.west |- Ap-1-1.west)-(0.8,0)$) {$k$}; \tikz[remember picture,overlay] \node at ($(Ap-2-1.west)-(0.8,0)$) {$1+s$}; \end{center} Since $s = \ensuremath{\mathcal{O}}(\log k)$, the resulting matrix has $\{-1,0,1\}$ entries, $k+1+s = \Theta(k)$ rows, and $\ell+1+s=\ensuremath{\mathcal{O}}(k \log k)$ columns. The new target vector has only $\{0,1\}$ entries, as required. \end{proof} \section{Parameterization by the dual treedepth} \subsection{Preliminaries} \subparagraph*{Treedepth and dual treedepth.} For a graph $G$, the {\em{treedepth}} of $G$, denoted $\mathrm{td}(G)$, can be defined recursively as follows: \begin{equation}\label{eq:rec-td} \mathrm{td}(G)= \begin{cases} 1 & \quad \textrm{if $G$ has one vertex;}\\[0.2cm] \max(\mathrm{td}(G_1),\ldots,\mathrm{td}(G_p)) & \quad \textrm{if $G$ is disconnected and $G_1,\ldots,G_p$} \\ & \quad \textrm{are its connected components;} \\[0.2cm] 1+\min_{u\in V(G)}\mathrm{td}(G-u) & \quad \textrm{if $G$ has more than one vertex}\\ & \quad \textrm{and is connected.} \end{cases} \end{equation} See e.g.~\cite{treedepth}. Equivalently, treedepth is the smallest possible height of a rooted forest $F$ on the same vertex set as $G$ such that whenever $uv$ is an edge in $G$, then $u$ is an ancestor of $v$ in $F$ or vice versa. Since we focus on constraints, we consider, for a matrix $A$, the {\em{constraint graph}} or {\em{dual graph}} $G_D(A)$, defined as the graph with rows of $A$ as vertices where two rows are adjacent if and only if in some column they simultaneously contain a non-zero entry. The {\em{dual treedepth}} of $A$, denoted $\mathrm{td}_D(A)$, is the treedepth of $G_D(A)$. The recursive definition~\eqref{eq:rec-td} is elegantly reinterpreted in terms of row removals and partitioning into blocks as follows. A matrix $A$ is {\em{block-decomposable}} if after permuting its rows and columns it can be presented in block-diagonal form, i.e., rows and columns can be partitioned into intervals $R_1,\ldots,R_p$ and $C_1,\ldots,C_p$, for some $p\geqslant 2$, such that non-zero entries appear only in blocks $B_1,\ldots,B_p$, where $B_i$ is the block of entries at intersections of rows from $R_i$ with columns from $C_i$. It is easy to see that $A$ is block-decomposable if and only if $G_D(A)$ is disconnected, and the finest block decomposition of $A$ corresponds to the partition of $G_D(A)$ into connected components. The blocks $B_1,\ldots,B_p$ in this finest partition are called the {\em{block components}} of $A$---they are not block-decomposable.
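The recursion~\eqref{eq:rec-td} can be evaluated directly on small graphs. The following exponential-time sketch is ours and serves only as an illustration of the definitions of $\mathrm{td}(G)$ and $G_D(A)$; it is not meant as an efficient algorithm.
\begin{verbatim}
# Evaluate the treedepth recursion by brute force (illustration only).
from functools import lru_cache

def treedepth(vertex_list, edge_list):
    vertices = frozenset(vertex_list)
    edges = [frozenset(e) for e in edge_list]

    def components(vs):
        remaining, comps = set(vs), []
        while remaining:
            comp, stack = set(), [next(iter(remaining))]
            while stack:
                v = stack.pop()
                if v in comp:
                    continue
                comp.add(v)
                for e in edges:
                    if v in e:
                        stack.extend(u for u in e if u in vs and u not in comp)
            remaining -= comp
            comps.append(frozenset(comp))
        return comps

    @lru_cache(maxsize=None)
    def td(vs):
        if len(vs) == 1:
            return 1
        comps = components(vs)
        if len(comps) > 1:                        # disconnected: max over components
            return max(td(c) for c in comps)
        return 1 + min(td(vs - {v}) for v in vs)  # connected: best vertex deletion

    return td(vertices)

def dual_graph(A):
    # Rows of A are adjacent iff some column is non-zero in both rows.
    k, n = len(A), len(A[0])
    edges = [(i, j) for i in range(k) for j in range(i + 1, k)
             if any(A[i][c] != 0 and A[j][c] != 0 for c in range(n))]
    return range(k), edges

# The path on 7 vertices has treedepth 3.
print(treedepth(range(7), [(i, i + 1) for i in range(6)]))  # 3
# A 3-row "chain" matrix has a path as its dual graph, so its dual treedepth is 2.
A = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]]
print(treedepth(*dual_graph(A)))                            # 2
\end{verbatim}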
Then the recursive definition of treedepth provided in~\eqref{eq:rec-td} translates to the following definition of the dual treedepth of a matrix~$A$: \begin{equation}\label{eq:rec-dualtd} \mathrm{td}_D(A)= \begin{cases} 1 & \quad \textrm{if $A$ has one row;}\\[0.16cm] \max(\mathrm{td}_D(B_1),\ldots,\mathrm{td}_D(B_p)) & \quad \textrm{if $A$ is block-decomposable and}\vspace*{-3pt} \\ & \quad \textrm{$B_1,\ldots,B_p$ are its block components;} \\[0.16cm] 1+\min\limits_{\ensuremath{\mathbf{a}}^\intercal \colon \textrm{rows of }A}\,\mathrm{td}_D(A{\backslash \ensuremath{\mathbf{a}}^\intercal})\vspace*{-6pt} & \quad \textrm{if $A$ has more than one row and}\\ & \quad \textrm{is not block decomposable.} \end{cases} \end{equation} Here $A{\backslash\ensuremath{\mathbf{a}}^\intercal}$ is the matrix obtained from $A$ by removing the row $\ensuremath{\mathbf{a}}^\intercal$. Intuitively, dual treedepth formalizes the idea that a block-decomposable matrix is as hard as the hardest of its block components, and that adding a single row makes it a bit harder, but not uncontrollably so. \subparagraph*{Graver bases.} Two integer vectors $\ensuremath{\mathbf{a}},\ensuremath{\mathbf{b}}\in {\ensuremath{\mathbb{Z}}}^n$ are {\em{sign-compatible}} if $a_i\cdot b_i\geqslantqslant 0$ for all $i=1,\ldots,n$. For $\ensuremath{\mathbf{a}},\ensuremath{\mathbf{b}}\in {\ensuremath{\mathbb{Z}}}^n$ we write $\ensuremath{\mathbf{a}}\sqsubseteq \ensuremath{\mathbf{b}}$ if $\ensuremath{\mathbf{a}}$ and $\ensuremath{\mathbf{b}}$ are sign-compatible and $|a_i|\leqslantqslant |b_i|$ for all $i=1,\ldots,n$. Then $\sqsubseteq$ is a partial order on ${\ensuremath{\mathbb{Z}}}^n$; we call it the {\em{conformal order}}. Note that $\sqsubseteq$ has a unique minimum element, which is the zero vector $\ve{0}$. For a matrix $A$, the {\em{Graver basis}} of $A$, denoted $\mathcal{G}(A)$ is the set of conformally minimal vectors in $(\ker A\cap {\ensuremath{\mathbb{Z}}}^n)-\{\ve{0}\}$. It is easy to see by Dickson's lemma that $({\ensuremath{\mathbb{Z}}}^n,\sqsubseteq)$ is a well quasi-ordering, hence there are no infinite antichains with respect to the conformal order. It follows that the Graver basis of every matrix is finite, though it can be quite large. For a matrix $A$ and $p\in [1,\infty]$, we denote $g_p(A)=\max_{\ensuremath{\mathbf{u}}\in \mathcal{G}(A)} \|\ensuremath{\mathbf{u}}\|_p$. \subsection{Upper bound} We start with the upper bound for the dual treedepth parameterization, that is, Theorem~\ref{thm:dualtd-upper-bound}. As explained in the introduction, this result easily follows from the work of Kouteck\'y et al.~\cite{KouteckyLO18} and the following lemma bounding $g_1(A)$ in terms of $\mathrm{td}_D(A)$ and $\|A\|_\infty$, for any integer matrix $A$. \begin{lemma}\label{lem:g1-dualtd} For any matrix $A$ with integer entries, it holds that $$g_1(A)\leqslantqslant (2\|A\|_\infty+1)^{2^{\mathrm{td}_D(A)}-1}.$$ \end{lemma} Before we prove Lemma~\ref{lem:g1-dualtd}, let us sketch how using the reasoning from Kouteck\'y et al.~\cite{KouteckyLO18} one can derive Theorem~\ref{thm:dualtd-upper-bound}. Using the bound on the $\ell_1$-norm of vectors in the Graver basis of~$A$, we can construct a {\em{$\Lambda$-Graver-best oracle}} for the considered {\textsc{ILP Optimization}} instance. 
This is an oracle that, given any feasible solution $\ensuremath{\mathbf{x}}$, returns another feasible solution $\ensuremath{\mathbf{x}}'$ that differs from $\ensuremath{\mathbf{x}}$ only by an integer multiple not larger than $\Lambda$ of a vector from the Graver basis of~$A$, and among such solutions achieves the best goal value of $\ensuremath{\mathbf{w}}^\intercal\ensuremath{\mathbf{x}}'$. Such a $\Lambda$-Graver-best oracle runs in time $(\|A\|_\infty\cdot g_1(A))^{\ensuremath{\mathcal{O}}(\mathrm{tw}_D(A))}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$, where $\mathrm{tw}_D(A)$ is the treewidth of the constraint graph $G_D(A)$, which is always upper bounded by $\mathrm{td}_D(A)+1$. See the proof of Lemma~25 and the beginning of the proof of Theorem~3 in~\cite{KouteckyLO18}; the reasoning there is explained in the context of tree-fold ILPs, but it uses only boundedness of the dual treedepth of~$A$. Once a $\Lambda$-Graver-best oracle is implemented, we can use it to implement a {\em{Graver-best oracle}} (Lemma~14 in~\cite{KouteckyLO18}) within the same asymptotic running time, and finally use the main theorem---Theorem~1 in~\cite{KouteckyLO18}---to obtain the algorithm promised in Theorem~\ref{thm:dualtd-upper-bound}~above. We now proceed to the proof of Lemma~\ref{lem:g1-dualtd}. \begin{proof}[Proof of Lemma~\ref{lem:g1-dualtd}] We proceed by induction on the number of rows of $A$ using the recursive definition~\eqref{eq:rec-dualtd}. For the base case---when $A$ has one row---we may use the following well-known bound. \begin{claim}[Lemma 3.5.7 in~\cite{deLoeraBook}]\label{cl:base-case} If $A$ is an integer matrix with one row, then $$g_1(A)\leqslant 2\|A\|_{\infty}+1.$$ \end{claim} We note that the original bound of $2\|A\|_{\infty}-1$, stated in~\cite{deLoeraBook}, works only for non-zero $A$. We now move to the induction step, so suppose the considered matrix $A$ has more than one row. We consider two cases: either $A$ is block-decomposable, or it is not. First suppose that $A$ is block-decomposable. Let $B_1,\ldots,B_p$ be the block components of~$A$, and let $R_1,\ldots,R_p$ and $C_1,\ldots,C_p$ be the corresponding partitions of rows and columns of~$A$ into segments, respectively. Observe that integer vectors $\ensuremath{\mathbf{u}}$ from $\ker A$ are exactly vectors of the form $(\,\ensuremath{\mathbf{v}}^{(1)}\ |\ \ensuremath{\mathbf{v}}^{(2)}\ |\ \ldots\ |\ \ensuremath{\mathbf{v}}^{(p)}\,)$, where each $\ensuremath{\mathbf{v}}^{(i)}$ is an integer vector of length $|C_i|$ that belongs to $\ker B_i$. It follows that $\mathcal{G}(A)$ consists of vectors of the following form: for some $i\in \{1,\ldots,p\}$ put a vector from $\mathcal{G}(B_i)$ on coordinates corresponding to the columns of $C_i$, and fill all the other entries with zeroes. Consequently, we have \begin{equation}\label{eq:decomposable1} g_1(A)\leqslant \max_{i=1,\ldots,p} g_1(B_i). \end{equation} On the other hand, by~\eqref{eq:rec-dualtd} we have \begin{equation}\label{eq:decomposable2} \mathrm{td}_D(A)=\max_{i=1,\ldots,p} \mathrm{td}_D(B_i). \end{equation} Since each matrix $B_i$ has fewer rows than $A$, we may apply the induction assumption to matrices $B_1,\ldots,B_p$, thus inferring by~\eqref{eq:decomposable1} and~\eqref{eq:decomposable2} that $$g_1(A)\leqslant \max_{i=1,\ldots,p} g_1(B_i)\leqslant \max_{i=1,\ldots,p} (2\|B_i\|_\infty+1)^{2^{\mathrm{td}_D(B_i)}-1}\leqslant (2\|A\|_\infty+1)^{2^{\mathrm{td}_D(A)}-1}.$$ We are left with the case when $A$ is not block-decomposable.
For this, we use the following claim, which is essentially Lemma 3.7.6 and Corollary~3.7.7 in~\cite{deLoeraBook}. The statement there is slightly different, but the same proof, which we repeat for convenience in the appendix, in fact proves the following bound. \begin{claim}\label{cl:step-non-decomposable} Let $A$ be an integer matrix and let $\ensuremath{\mathbf{a}}^\intercal$ be a row of $A$. Then $$g_1(A)\leqslant (2\|\ensuremath{\mathbf{a}}^\intercal\|_\infty+1)\cdot g_1(A\backslash\ensuremath{\mathbf{a}}^\intercal)\cdot g_\infty(A\backslash\ensuremath{\mathbf{a}}^\intercal).$$ \end{claim} \begin{proof} Denote $B=A\backslash\ensuremath{\mathbf{a}}^\intercal$. Consider any vector $\ensuremath{\mathbf{u}}\in \mathcal{G}(A)$. Then $\ensuremath{\mathbf{u}}$ is also in $\ker B\cap {\ensuremath{\mathbb{Z}}}^n$, where $n$ is the number of columns of $A$, which means that we can write $\ensuremath{\mathbf{u}}$ as a {\em{sign-compatible sum}} of elements of the Graver basis of $B$, that is, $$\ensuremath{\mathbf{u}} = \sum_{i=1}^p \lambda_i \ensuremath{\mathbf{g}}_i,$$ for some $\lambda_1,\ldots,\lambda_p\in {\ensuremath{\mathbb{N}}}$ and distinct sign-compatible vectors $\ensuremath{\mathbf{g}}_1,\ldots,\ensuremath{\mathbf{g}}_p\in \mathcal{G}(B)$. Let $\boldsymbol{\lambda}$ be a vector of length $p$ with entries $\lambda_1,\ldots,\lambda_p$. Further, let $\ensuremath{\mathbf{b}}$ be also a vector of length $p$, where $b_i=\ensuremath{\mathbf{a}}^\intercal \ensuremath{\mathbf{g}}_i$. Considering $\ensuremath{\mathbf{b}}^\intercal$ as a matrix with one row, we have $\boldsymbol{\lambda}\in \ker \ensuremath{\mathbf{b}}^\intercal$. Indeed, we have $$\sum_{i=1}^p \lambda_i b_i = \sum_{i=1}^p \lambda_i (\ensuremath{\mathbf{a}}^\intercal \ensuremath{\mathbf{g}}_i)=\ensuremath{\mathbf{a}}^\intercal \sum_{i=1}^p \lambda_i \ensuremath{\mathbf{g}}_i = \ensuremath{\mathbf{a}}^\intercal \ensuremath{\mathbf{u}} = 0,$$ because $\ensuremath{\mathbf{u}}\in \ker \ensuremath{\mathbf{a}}^\intercal$ due to $\ensuremath{\mathbf{u}}\in \mathcal{G}(A)$. We now verify that in fact $\boldsymbol{\lambda}\in \mathcal{G}(\ensuremath{\mathbf{b}}^\intercal)$. Indeed, since $\ensuremath{\mathbf{u}}$ is non-zero, $\boldsymbol{\lambda}$ is non-zero as well. Also, if there existed some non-zero $\boldsymbol{\lambda}'\sqsubset \boldsymbol{\lambda}$ with $\boldsymbol{\lambda}'\in \ker \ensuremath{\mathbf{b}}^\intercal$, then the same computation as above would yield that $\ensuremath{\mathbf{u}}'=\sum_{i=1}^p \lambda_i'\ensuremath{\mathbf{g}}_i$ also belongs to $\ker A$. However, as vectors $\ensuremath{\mathbf{g}}_1,\ldots,\ensuremath{\mathbf{g}}_p$ are sign-compatible, $\ve{0}\sqsubset \boldsymbol{\lambda}'\sqsubset \boldsymbol{\lambda}$ would entail $\ve 0\sqsubset \ensuremath{\mathbf{u}}'\sqsubset \ensuremath{\mathbf{u}}$, a contradiction with the conformal minimality of $\ensuremath{\mathbf{u}}$ following from $\ensuremath{\mathbf{u}}\in \mathcal{G}(A)$.
Now that we know that $\ensuremath{\boldsymbol{\lambda}}\in \mathcal{G}(\ensuremath{\mathbf{b}}^\intercal)$, we may use Claim~\ref{cl:base-case} to infer that $$\|\ensuremath{\boldsymbol{\lambda}}\|_1\leqslant 2\|\ensuremath{\mathbf{b}}^\intercal\|_\infty+1=2\max_{i=1,\ldots,p} |\ensuremath{\mathbf{a}}^\intercal \ensuremath{\mathbf{g}}_i| +1 \leqslant 2\|\ensuremath{\mathbf{a}}\|_\infty\cdot g_1(B)+1\leqslant (2\|\ensuremath{\mathbf{a}}\|_\infty+1)\cdot g_1(B).$$ Hence, we have $$\|\ensuremath{\mathbf{u}}\|_1 = \left\|\sum_{i=1}^p \lambda_i \ensuremath{\mathbf{g}}_i\right\|_1\leqslant \|\ensuremath{\boldsymbol{\lambda}}\|_1\cdot g_{\infty}(B)\leqslant (2\|\ensuremath{\mathbf{a}}\|_\infty+1)\cdot g_1(B)\cdot g_{\infty}(B).$$ This concludes the proof. \cqed\end{proof} Suppose then that $A$ is not block-decomposable. By~\eqref{eq:rec-dualtd}, there exists a row $\ensuremath{\mathbf{a}}^\intercal$ of $A$ such that $\mathrm{td}_D(A\backslash\ensuremath{\mathbf{a}}^\intercal)=\mathrm{td}_D(A)-1$. Then, by Claim~\ref{cl:step-non-decomposable} and the inductive assumption, we have \begin{align*} g_1(A) & \leqslant (2\|\ensuremath{\mathbf{a}}^\intercal\|_\infty+1)\cdot g_1(A\backslash\ensuremath{\mathbf{a}}^\intercal)\cdot g_{\infty}(A\backslash\ensuremath{\mathbf{a}}^\intercal)\leqslant (2\|A\|_\infty+1)\cdot \left(g_1(A\backslash\ensuremath{\mathbf{a}}^\intercal)\right)^2\\ & \leqslant (2\|A\|_\infty+1)^{1+2\cdot (2^{\mathrm{td}_D(A)-1}-1)}=(2\|A\|_\infty+1)^{2^{\mathrm{td}_D(A)}-1}. \end{align*} This concludes the proof. \end{proof} \subsection{Lower bound} We now move to the proof of the lower bound, Theorem~\ref{thm:dualtd-lower-bound}. We will reduce from the {\textsc{Subset Sum}} problem: given non-negative integers $s_1,\ldots,s_k,t$, encoded in binary, decide whether there is a subset of the numbers $s_1,\ldots,s_k$ that sums up to~$t$. The standard {\textsc{NP}}-hardness reduction from {\textsc{3SAT}} to {\textsc{Subset Sum}} takes an instance of {\textsc{3SAT}} with $n$ variables and $m$ clauses, and produces an instance $(s_1,\ldots,s_k,t)$ of {\textsc{Subset Sum}} with a linear number of numbers, each of them of linear bit-length, that is, $k\leqslant \ensuremath{\mathcal{O}}(n+m)$ and $0\leqslant s_1,\ldots,s_k,t<2^\delta$, for some $\delta\leqslant \ensuremath{\mathcal{O}}(n+m)$. See e.g.~\cite{AbboudBHS17} for an even finer reduction, yielding lower bounds for {\textsc{Subset Sum}} under Strong ETH. By Theorem~\ref{thm:eth-main}, this immediately implies an ETH-based lower bound for {\textsc{Subset Sum}}. \begin{lemma}\label{lem:SS-ETH} Unless ETH fails, there is no algorithm for {\textsc{Subset Sum}} that would solve any input instance $(s_1,\ldots,s_k,t)$ in time $2^{o(k+\delta)}$, where $\delta$ is the smallest integer such that $s_1,\ldots,s_k,t<2^\delta$. \end{lemma} The idea for our reduction from {\textsc{Subset Sum}} to {\textsc{ILP Feasibility}} is as follows. Given an instance $(s_1,\ldots,s_k,t)$, we first {\em{construct}} numbers $s_1,\ldots,s_k$ using ILPs $P_1,\ldots,P_k$, where each $P_i$ uses only constant-size coefficients and has dual treedepth $\ensuremath{\mathcal{O}}(\log \delta)$. The ILP $P_i$ will have a designated variable $z_i$ and two feasible solutions: one that sets $z_i$ to $0$ and one that sets it to $s_i$. Similarly we can construct an ILP $Q$ that forces a designated variable $w$ to be set to $t$.
Having that, the whole input instance can be encoded using one additional constraint: $z_1+\ldots+z_k-w=0$. To construct each $P_i$, we first create $\delta$ variables $y_0,y_1,\ldots,y_{\delta-1}$ that are either all evaluated to $0$ or all evaluated to $2^0,2^1,\ldots,2^{\delta-1}$, respectively; this involves constraints of the form $y_{j+1}=2y_j$. Then the number $s_i$ (or $0$) can be obtained on a new variable $z_i$ using a single constraint that assembles the binary encoding of $s_i$. The crucial observation is that the constraint graph $G_D(P_i)$ consists of a path on $\delta$ vertices and one additional vertex, and thus has treedepth $\ensuremath{\mathcal{O}}(\log \delta)$. We start implementing this plan formally by giving the construction for a single number~$s$. \setcounter{MaxMatrixCols}{40} \begin{lemma}\label{lem:construct-single} For all positive integers $\delta$ and $s$ satisfying $0\leqslant s<2^\delta$, there exists an instance $P=\{A\ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}},\, \ensuremath{\mathbf{x}}\geqslant 0\}$ of {\textsc{ILP Feasibility}} with the following properties: \begin{itemize} \item $A$ has all entries in $\{-1,0,1,2\}$ and $\mathrm{td}_D(A)\leqslant \log \delta + \ensuremath{\mathcal{O}}(1)$; \item $\ensuremath{\mathbf{b}}$ is a vector with all entries in $\{0,1\}$; and \item $P$ has exactly two solutions $\ensuremath{\mathbf{x}}^{(1)}$ and $\ensuremath{\mathbf{x}}^{(2)}$, where $x^{(1)}_1=0$ and $x^{(2)}_1=s$. \end{itemize} Moreover, the instance $P$ can be constructed in time polynomial in $\delta+\log s$. \end{lemma} \begin{proof} We shall use $\delta+2$ variables, denoted for convenience by $y_0,y_1,\ldots,y_{\delta-1},z,u$; these are arranged into the variable vector $\ensuremath{\mathbf{x}}$ of length $\delta+2$ so that $x_1=z$. Letting $b_0,b_1,\ldots,b_{\delta-1}$ be the consecutive digits of the number $s$ in the binary encoding, the instance $P$ then looks as follows: $$\begin{matrix} u & + & y_0 & & & & & & & & & & & = & 1 \\ & & 2y_0 & - & y_1 & & & & & & & & & = & 0 \\ & & & & 2y_1 & - & y_2 & & & & & & & = & 0 \\ & & & & & & \ddots & & \ddots & & & & & & \vdots \\ & & & & & & & & 2y_{\delta-2} & - & y_{\delta-1} & & & = & 0 \\ & & b_0y_0 & + & b_1y_1 & + & \ldots & + & b_{\delta-2}y_{\delta-2} & + & b_{\delta-1}y_{\delta-1} & - & z & = & 0 \end{matrix}$$ Since $0\leqslant u \leqslant 1$, it is easy to see that $P$ has exactly two solutions in nonnegative integers: \begin{itemize} \item If one sets $u=1$, then all the other variables need to be set to $0$. \item If one sets $u=0$, then $y_i$ needs to be set to $2^i$ for all $i=0,1,\ldots,\delta-1$, and then $z$ needs to be set to $s$ by the last equation. \end{itemize} It remains to analyze the dual treedepth of $A$. Observe that the constraint graph $G_D(A)$ consists of a path of length $\delta$, plus one vertex corresponding to the last equation that may have an arbitrary neighborhood within the path. Since the path on $\delta$ vertices has treedepth $\lceil \log (\delta+1)\rceil$, it follows that $G_D(A)$ has treedepth at most $1+\lceil \log (\delta+1)\rceil\leqslant \log \delta + \ensuremath{\mathcal{O}}(1)$. \end{proof} We note that in the above construction one may remove the variable $u$ and replace the constraint $u+y_0=1$ with $y_0=1$, thus forcing only one solution: the one that sets the first variable to $s$. This will be used later. We are ready to show the core part of the reduction.
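Before doing so, we record a small illustrative prototype of the construction of Lemma~\ref{lem:construct-single}. The Python sketch below is not part of the formal argument; the function name and the chosen column ordering $(z,y_0,\ldots,y_{\delta-1},u)$ are ours. It assembles $A$ and $\ensuremath{\mathbf{b}}$ for given $\delta$ and $s$ and checks that the two solutions described in the proof indeed satisfy $A\ensuremath{\mathbf{x}}=\ensuremath{\mathbf{b}}$.
\begin{verbatim}
import numpy as np

def construct_single(delta: int, s: int):
    """Sketch of the gadget from Lemma construct-single (illustrative only).

    Columns are ordered (z, y_0, ..., y_{delta-1}, u), so that x_1 = z.
    Returns (A, b) with entries of A in {-1, 0, 1, 2} and entries of b in {0, 1}.
    """
    assert 0 <= s < 2 ** delta
    bits = [(s >> j) & 1 for j in range(delta)]      # binary digits b_0, ..., b_{delta-1}
    A = np.zeros((delta + 1, delta + 2), dtype=int)
    b = np.zeros(delta + 1, dtype=int)
    z, y0, u = 0, 1, delta + 1                       # column indices
    A[0, u], A[0, y0], b[0] = 1, 1, 1                # u + y_0 = 1
    for j in range(delta - 1):                       # 2 y_j - y_{j+1} = 0
        A[1 + j, y0 + j], A[1 + j, y0 + j + 1] = 2, -1
    A[delta, y0:y0 + delta] = bits                   # b_0 y_0 + ... + b_{delta-1} y_{delta-1} - z = 0
    A[delta, z] = -1
    return A, b

if __name__ == "__main__":
    delta, s = 4, 11                                 # s = 1011 in binary
    A, b = construct_single(delta, s)
    x_zero = np.zeros(delta + 2, dtype=int); x_zero[-1] = 1             # u = 1, all else 0
    x_s = np.array([s] + [2 ** j for j in range(delta)] + [0])          # z = s, y_j = 2^j, u = 0
    assert (A @ x_zero == b).all() and (A @ x_s == b).all()
    print(A)
\end{verbatim}
For instance, for $\delta=4$ and $s=11$ the gadget has $5$ rows and $6$ columns, and the two feasible solutions set $z$ to $0$ and to $11$, respectively, matching the two cases listed in the proof.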
\begin{lemma}\label{lem:SS-ILPFeas} An instance $(s_1,\ldots,s_k,t)$ of {\textsc{Subset Sum}} with $0\leqslant s_i,t<2^\delta$ for $i=1,\ldots,k$, can be reduced in polynomial time to an equivalent instance $\{A\ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}},\, \ensuremath{\mathbf{x}}\geqslant 0\}$ of {\textsc{ILP Feasibility}} where entries of $A$ are in $\{-1,0,1,2\}$, entries of $\ensuremath{\mathbf{b}}$ are in $\{0,1\}$, and $\mathrm{td}_D(A)\leqslant \log \delta + \ensuremath{\mathcal{O}}(1)$.\looseness=-1 \end{lemma} \begin{proof} For each $i\in \{1,\ldots,k\}$, apply Lemma~\ref{lem:construct-single} to construct a suitable instance $P_i=\{A_i\ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}_i,\, \ensuremath{\mathbf{x}}\geqslant 0\}$ of {\textsc{ILP Feasibility}} for $s=s_i$. Also, apply Lemma~\ref{lem:construct-single} to construct a suitable instance $Q=\{C\ensuremath{\mathbf{x}} = \ensuremath{\mathbf{d}},\, \ensuremath{\mathbf{x}}\geqslant 0\}$ of {\textsc{ILP Feasibility}} for $s=t$, and modify it as explained after the lemma's proof so that there is only one solution, setting the first variable to $t$. Let $$A=\begin{pmatrix} & & \ensuremath{\mathbf{c}}^\intercal & & \\ \hline A_1 & & & & \\ & A_2 & & & \\ & & \ddots & & \\ & & & A_k & \\ & & & & C \end{pmatrix}$$ where $$\ensuremath{\mathbf{c}}^\intercal=(\, 1\, 0\, \ldots\, 0\ |\ 1\, 0\, \ldots\, 0\ |\ \ldots\ |\ 1\, 0\, \ldots\, 0\ |\ (-1)\, 0\, \ldots\, 0\, )$$ with consecutive blocks of lengths equal to the numbers of columns of $A_1,\ldots,A_k$, and $C$, respectively. Observe that $$\mathrm{td}_D(A)\leqslant 1+\max(\mathrm{td}_D(A_1),\ldots,\mathrm{td}_D(A_k),\mathrm{td}_D(C))=\log \delta + \ensuremath{\mathcal{O}}(1).$$ Further, let $$\ensuremath{\mathbf{b}}^\intercal=(\, 0\ |\ \ensuremath{\mathbf{b}}^\intercal_1\ |\ \ldots\ |\ \ensuremath{\mathbf{b}}^\intercal_k\ |\ \ensuremath{\mathbf{d}}^\intercal\,).$$ We now claim that the ILP $\{A\ensuremath{\mathbf{x}} =\ensuremath{\mathbf{b}},\, \ensuremath{\mathbf{x}}\geqslant 0\}$ is feasible if and only if the input instance of {\textsc{Subset Sum}} has a solution. Indeed, if we denote by $z_1,\ldots,z_k,w$ the variables corresponding to the first columns of blocks $A_1,\ldots,A_k,C$, respectively, then by Lemma~\ref{lem:construct-single} within each block $A_i$ there are two ways of evaluating the variables corresponding to the columns of $A_i$: one setting $z_i=0$ and the other setting $z_i=s_i$. However, there is only one way of evaluating the variables corresponding to the columns of $C$, which sets $w=t$. The first row of $A$ then constitutes the constraint $z_1+\ldots+z_k-w=0$, which can be satisfied by setting the $z_i$'s and $w$ as above if and only if some subset of the numbers $s_1,\ldots,s_k$ sums up to $t$. \end{proof} It remains to reduce entries in $A$ equal to 2, simply by duplicating variables. \begin{lemma}\label{lem:stupid-deg-red} An instance $\{A\ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}},\, \ensuremath{\mathbf{x}}\geqslant 0\}$ of {\textsc{ILP Feasibility}} where entries of $A$ are in $\{-1,0,1,2\}$ and entries of $\ensuremath{\mathbf{b}}$ are in $\{0,1\}$ can be reduced in polynomial time to an equivalent instance $\{A'\ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}',\, \ensuremath{\mathbf{x}}\geqslant 0\}$ of {\textsc{ILP Feasibility}} with all entries in $\{-1,0,1\}$ and $\mathrm{td}_D(A')\leqslant \mathrm{td}_D(A)+1$.
\end{lemma} \begin{proof} It suffices to duplicate each variable $x$ by introducing a variable $x'$, adding a constraint $x-x'=0$, and replacing occurrences of $2x$ in constraints by $x+x'$. In the dual graph, this results in introducing a new vertex (for the constraint $x-x'=0$), adjacent only to those constraints that contained $x$, which form a clique in $G_D(A)$ (they are pairwise adjacent). The new vertices are non-adjacent to each other. We show that in total, this operation can only increase the dual treedepth by at most $1$. Let $F$ be a rooted forest of height $\mathrm{td}(G_D(A))$ with the same vertex set as $G_D(A)$ such that whenever $uv$ is an edge of $G_D(A)$, then $u$ is an ancestor of $v$ in $F$ or vice versa. Then in particular, for each original variable $x$, the constraints containing it form a clique in $G_D(A)$, so the constraint that is the lowest in $F$, say $\ensuremath{\mathbf{a}}^\intercal$, has all the others as ancestors. This means that the new vertex representing the constraint $x-x'=0$ can be added to $F$ as a pendant leaf below $\ensuremath{\mathbf{a}}^\intercal$. Doing this for each original variable $x$ can only add pendant leaves to original vertices of $F$, which increases its height by at most $1$. \end{proof} Theorem~\ref{thm:dualtd-lower-bound} now follows by observing that combining the reductions of Lemma~\ref{lem:SS-ILPFeas} and Lemma~\ref{lem:stupid-deg-red} with a hypothetical algorithm for {\textsc{ILP Feasibility}} on $\{-1,0,1\}$-input with running time $2^{2^{o(\mathrm{td}_D(A))}}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$, or just $2^{o(2^{\mathrm{td}_D(A)})}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$, would yield an algorithm for {\textsc{Subset Sum}} with running time $2^{o(k+\delta)}$, contradicting ETH by Lemma~\ref{lem:SS-ETH}. \begin{comment} Here, one needs to additionally observe the following features of the reduction of Lemma~\ref{lem:deltaReduction}: \begin{itemize} \item The reduction preserves the dual treedepth in the following sense: $\mathrm{td}_D(A')\leqslant \ensuremath{\mathcal{O}}(\delta)\cdot \mathrm{td}_D(A)$. Indeed, in the proof of Lemma~\ref{lem:deltaReduction}, every row is replaced with $\ensuremath{\mathcal{O}}(\delta)$ rows that may have non-zero entries only in the same columns as the original row, plus in $\ensuremath{\mathcal{O}}(\delta)$ new columns where no other row has a non-zero entry. Thus, $G_D(A')$ can be obtained from $G_D(A)$ by replacing every vertex with a group of $\ensuremath{\mathcal{O}}(\delta)$ vertices, where edges in $G_D(A')$ may only connect vertices within the same group or from groups originating in vertices that were adjacent in $G_D(A)$. It is easy to see that this increases the dual treedepth at most by a multiplicative factor of $\ensuremath{\mathcal{O}}(\delta)$. \item In case the input instance has a $\{0,1\}$-vector $\ensuremath{\mathbf{b}}$ on the right hand side, the output instance will also have a $\{0,1\}$-vector $\ensuremath{\mathbf{b}}'$ on the right hand side. This can be readily checked throughout the proof of Lemma~\ref{lem:deltaReduction}. \end{itemize} \end{comment} \section{Conclusions} We conclude this work by stating two concrete open problems on the topic.
First, apart from considering the standard form $\{A\ensuremath{\mathbf{x}} =\ensuremath{\mathbf{b}},\, \ensuremath{\mathbf{x}}\geqslant 0\}$, Eisenbrand and Weismantel~\cite{EisenbrandW18} also studied the more general setting of ILPs of the form $\{A\ensuremath{\mathbf{x}} =\ensuremath{\mathbf{b}},\, \ensuremath{\mathbf{l}}\leqslant \ensuremath{\mathbf{x}}\leqslant \ensuremath{\mathbf{u}}\}$, where $\ensuremath{\mathbf{l}}$ and $\ensuremath{\mathbf{u}}$ are integer vectors. That is, instead of only requiring that every variable is nonnegative, we put an arbitrary lower and upper bound on the values it can take. Note that such lower and upper bounds can be easily emulated in the standard formulation using slack variables, but this would require adding more constraints to the matrix $A$; the key here is that we {\em{do not}} count these lower and upper bounds in the total number of constraints $k$. For this more general setting, Eisenbrand and Weismantel~\cite{EisenbrandW18} gave an algorithm with running time $k^{\ensuremath{\mathcal{O}}(k^2)}\cdot \|A\|_{\infty}^{\ensuremath{\mathcal{O}}(k^2)}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$, which boils down to $2^{\ensuremath{\mathcal{O}}(k^2 \log k)}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$ when $\|A\|_{\infty}=\ensuremath{\mathcal{O}}(1)$. (A typo leading to a $2^{\ensuremath{\mathcal{O}}(k^2)}\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$ bound has been fixed in later versions of the paper). Is this running time optimal or could the $2^{\ensuremath{\mathcal{O}}(k^2 \log k)}$ factor be improved? Note that Theorem~\ref{thm:main} implies a $2^{o(k\log k)}$-time lower bound, unless ETH fails. Second, in this work we studied the parameter dual treedepth of the constraint matrix $A$, but of course one can also consider the {\em{primal treedepth}}. It can be defined as the treedepth of the graph over the columns (variables) of $A$, where two columns are adjacent if they have a non-zero entry in the same row (the variables appear simultaneously in some constraint). It is known that {\textsc{ILP Feasibility}} and {\textsc{ILP Optimization}} are fixed-parameter tractable when parameterized by $\|A\|_\infty$ and $\mathrm{td}_P(A)$, that is, there is an algorithm with running time $f(\|A\|_\infty,\mathrm{td}_P(A))\cdot |I|^{\ensuremath{\mathcal{O}}(1)}$, for some function $f$~\cite{KouteckyLO18} (see also~\cite{EisenbrandHKKLO19}). Again, the key ingredient here is an inequality on $\ell_\infty$-norms of the elements of the Graver basis of any integer matrix $A$: $g_\infty(A)\leqslant h(\|A\|_\infty,\mathrm{td}_P(A))$ for some function $h$. The first bound on $g_\infty(A)$ was given by Aschenbrenner and Hemmecke~\cite{AschenbrennerH07}. The work of Aschenbrenner and Hemmecke~\cite{AschenbrennerH07} considers the setting of {\em{multi-stage stochastic programming}} (MSSP), which is related to primal treedepth in the same way as tree-fold ILPs are related to dual treedepth. The translation between MSSP and primal treedepth was first formulated by Kouteck\'y et al.~\cite{KouteckyLO18}. However, to establish a bound on $g_\infty(A)$, Aschenbrenner and Hemmecke~\cite{AschenbrennerH07} use the theory of well quasi-orderings (in a highly non-trivial way) and consequently give no direct bounds on the function $h$. Recently, Klein~\cite{klein19-two-stage} gave the first constructive bound on $g_\infty(A)$ for MSSP. However, we conjecture that the function $h$ has to be non-elementary in $\mathrm{td}_P(A)$.
If this were the case, an example could likely be used to prove a non-elementary lower bound under ETH for {\textsc{ILP Feasibility}} under the $\mathrm{td}_P(A)$ parameterization (with $\|A\|_\infty=\ensuremath{\mathcal{O}}(1)$). Very recently, Kouteck{\'{y}} and Kr{\'{a}}{l'}~\cite{KouteckyK19} showed that algorithms parameterized by dual treedepth can be extended to the parameter ``branch-depth'' defined on the column matroid of the constraint matrix. This parameter has the advantage of being invariant under row operations. The transformation, however, incurs an exponential blow-up in the parameter. \pagebreak \end{document}
\begin{document} \title[$2\frac{1}{2}$ dimensional Hall equation]{On the existence and temporal asymptotics of solutions for the two and a half dimensional Hall MHD} \author{Hantaek Bae} \address{Department of Mathematical Sciences, Ulsan National Institute of Science and Technology, Korea} \email{[email protected]} \author{Kyungkeun Kang} \address{Department of Mathematics, Yonsei University, Korea} \email{[email protected]} \date{\today} \keywords{Hall MHD, Well-posedness, Decay rates, Asymptotic behaviors} \subjclass[2010]{35K55, 35Q85, 35Q86.} \begin{abstract} In this paper, we deal with the $2\frac{1}{2}$ dimensional Hall MHD by taking the velocity field $u$ and the magnetic field $B$ of the form $u(t,x,y)=\left(\nabla^{\perp}\phi(t,x,y), W(t,x,y)\right)$ and $B(t,x,y)=\left(\nabla^{\perp}\psi(t,x,y), Z(t,x,y)\right)$. We begin with the Hall equations (without the effect of the fluid part). We first show the long time behavior of weak solutions and weak-strong uniqueness. We then proceed to prove the existence of unique strong solutions locally in time and to derive a blow-up criterion. We also demonstrate that the strong solution exists globally in time and decays algebraically if some smallness conditions are imposed. We further improve the decay rates of $\psi$ using the structure of the equation of $\psi$. As a consequence of the decay rates of $(\psi,Z)$, we find the asymptotic profiles of $(\psi,Z)$. We finally show that a small perturbation of initial data near zero can be extended to small perturbations near harmonic functions. In the presence of the fluid field, the results, by comparison, fall short of the previous ones in the absence of the fluid part. We prove two results: the existence of unique strong solutions locally in time and a blow-up criterion, and the existence of unique strong solutions globally in time with some smallness condition on initial data. \end{abstract} \maketitle \section{Introduction} The Magnetohydrodynamics equations (MHD in short) provide a macroscopic description of a plasma that is relevant for fusion plasmas, the solar interior and its atmosphere, the Earth's magnetosphere and inner core, etc. The governing equations for the incompressible and resistive MHD are \begin{subequations} \label{MHD} \begin{align} \text{Momentum Equation:} \quad & u_{t} +u\cdot \nabla u - J\times B+\nabla p-\mu\Delta u=0, \label{MHD a}\\ \text{Incompressibility:} \quad & \dv u=0, \label{MHD b}\\ \text{Amp$\grave{\text{e}}$re's Law:} \quad & \curl B=\mu_{0}J, \label{MHD c}\\ \text{Faraday's Law:} \quad & \curl E=-B_{t},\label{MHD d}\\ \text{Ohm's Law for resistive MHD:} \quad &E+u\times B=\nu J, \label{MHD e}\\ \text{Incompressibility:} \quad& \dv B=0, \label{MHD f} \end{align} \end{subequations} where $u$ is the velocity field, $p$ is the pressure, and $B$ is the magnetic field. $\mu$ and $\nu$ are the viscosity and the resistivity constants, respectively. The right-hand side of (\ref{MHD e}) is called the collision term and $J\times B$ in (\ref{MHD a}) is called the Lorentz force. However, (\ref{MHD}) is deficient in many respects: for example, (\ref{MHD}) does not explain magnetic reconnection on the Sun, which plays a very important role in accelerating plasma by converting magnetic energy into bulk kinetic energy.
For this reason, the generalized Ohm's Law is required and we here take the following \begin{eqnarray} \label{Generalized Ohm} E+u\times B=\nu J+\frac{1}{en}\left(J\times B-\nabla p_{e}\right), \end{eqnarray} where $e$ is the elementary charge, $n$ is the number density, and $p_{e}$ is the electron pressure. The second term on the right-hand side of (\ref{Generalized Ohm}) is called the Hall term. In terms of $(u,\overline{p},B)$, we have the Hall MHD with $\mu_{0}=en=1$ for simplicity: \begin{subequations}\label{Hall MHD} \begin{align} &u_{t} +u\cdot \nabla u - B\cdot \nabla B+\nabla \overline{p}-\mu\Delta u=0, \label{Hall MHD a}\\ &B_{t} + u\cdot \nabla B -B\cdot \nabla u +\curl \left((\curl B)\times B\right)- \nu\Delta B=0, \\ &\dv u=0, \quad \dv B=0, \end{align} \end{subequations} where we use the following in (\ref{Hall MHD a}): \[ J\times B=B\cdot \nabla B-\frac{1}{2}\nabla |B|^{2}, \quad \overline{p}=p+\frac{1}{2}|B|^{2}. \] The Hall-MHD is important in describing many physical phenomena \cite{Balbus, Forbes, Homann, Lighthill, Mininni, Shalybkov, Wardle}. The Hall-MHD has recently been studied intensively. The Hall-MHD can be derived from either a two-fluid model or kinetic models in a mathematically rigorous way \cite{Acheritogaray}. Global weak solutions, local classical solutions, global solutions for small data, and decay rates are established in \cite{Chae-Degond-Liu, Chae-Lee, Chae Schonbek}. There have been many follow-up results of these papers; see \cite{Chae Wan Wu, Chae Weng, Dai, Fan, Han, Wan 1, Wan 2, Wan 3, Wan 4, Weng 1, Weng 2} and references therein. \subsection{$2\frac{1}{2}$ dimensional Hall MHD} The Hall term, $\curl \left(\left(\curl B\right)\times B\right)$, is dominant when analyzing (\ref{Hall MHD}). So, even if we deal with (\ref{Hall MHD}) in the $2 \frac{1}{2}$ dimensional case, the global regularity problem for (\ref{Hall MHD}) is still open. As a result, compared to MHD and the incompressible Navier-Stokes equations, there are only a few results dealing with the $2 \frac{1}{2}$ dimensional (\ref{Hall MHD}): partial regularity theory \cite{Chae Wolf 1}, global regularity with partial dissipations in (\ref{Hall MHD a}) \cite{Du}, and an irreducibility property \cite{Yamazaki}. In this paper, we take the $2 \frac{1}{2}$ dimensional form of (\ref{Hall MHD}) through \begin{subequations}\label{2D reduction} \begin{align} u(t,x,y)&=\left(\nabla^{\perp}\phi(t,x,y), W(t,x,y)\right)=\left(-\phi_{y}(t,x,y), \phi_{x}(t,x,y), W(t,x,y)\right), \label{2D reduction a}\\ B(t,x,y)&=\left(\nabla^{\perp}\psi(t,x,y), Z(t,x,y)\right)=\left(-\psi_{y}(t,x,y), \psi_{x}(t,x,y), Z(t,x,y)\right). \label{2D reduction b} \end{align} \end{subequations} Due to the presence of the pressure in (\ref{Hall MHD a}), we take the curl of (\ref{Hall MHD a}) and we rewrite (\ref{Hall MHD}) as \begin{subequations} \label{coupled two half} \begin{align} & \psi_{t}-\Delta \psi =[\psi,Z]-[\psi,\phi],\\ & Z_{t}-\Delta Z=[\Delta \psi,\psi]-[Z,\phi]+[W,\psi],\\ & W_{t}-\Delta W=-[W,\phi]-[\psi,Z],\\ & \Delta \phi_{t}-\Delta^{2} \phi=-[\Delta \phi,\phi]+[\Delta \psi,\psi], \end{align} \end{subequations} where we set $\mu=\nu=1$ for simplicity and $[f,g]=\nabla f\cdot \nabla^{\perp}g=f_{x}g_{y}-f_{y}g_{x}$. (\ref{coupled two half}) is used to investigate the influence of the Hall-term on the island width of a tearing instability \cite{Homann} and to show a finite-time collapse to a current sheet \cite{Brizard, Janda 1, Janda 2, Litvinenko}.
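As a quick sanity check on the reduction (\ref{2D reduction}) and on the bracket $[f,g]=f_{x}g_{y}-f_{y}g_{x}$ appearing in (\ref{coupled two half}), one can verify symbolically that the ansatz is automatically divergence-free and that the bracket satisfies the Laplacian identity recorded later in (\ref{commutator c}). The following sympy sketch is purely illustrative and not part of the analysis; the variable names are ours, and the symbol \verb|z| below denotes the suppressed third spatial coordinate, not the magnetic component $Z$.
\begin{verbatim}
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
phi = sp.Function('phi')(t, x, y)
W = sp.Function('W')(t, x, y)

# 2.5-dimensional ansatz: u = (-phi_y, phi_x, W); nothing depends on z,
# so the same computation applies to B = (-psi_y, psi_x, Z).
u = (-sp.diff(phi, y), sp.diff(phi, x), W)
div_u = sp.diff(u[0], x) + sp.diff(u[1], y) + sp.diff(u[2], z)
assert sp.simplify(div_u) == 0          # the ansatz is automatically divergence-free

# bracket [f, g] = f_x g_y - f_y g_x and the identity
# Delta[f, g] = [Delta f, g] + [f, Delta g] + 2[f_x, g_x] + 2[f_y, g_y]
f = sp.Function('f')(x, y)
g = sp.Function('g')(x, y)
br = lambda a, b: sp.diff(a, x) * sp.diff(b, y) - sp.diff(a, y) * sp.diff(b, x)
lap = lambda a: sp.diff(a, x, 2) + sp.diff(a, y, 2)
lhs = lap(br(f, g))
rhs = br(lap(f), g) + br(f, lap(g)) \
      + 2 * br(sp.diff(f, x), sp.diff(g, x)) + 2 * br(sp.diff(f, y), sp.diff(g, y))
assert sp.simplify(lhs - rhs) == 0
print("divergence-free ansatz and bracket identity verified")
\end{verbatim}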
(\ref{coupled two half}) is also used in \cite{Chae Wolf 2} to study regularity of stationary weak solutions. \subsection{\bf Hall equation} Since the Hall term is dominant when we deal with (\ref{coupled two half}), we will mainly concentrate on the Hall equations: (\ref{coupled two half}) without the effect of the fluid part. Then, (\ref{coupled two half}) is reduced to the following equations \begin{subequations}\label{Hall equation in 2D} \begin{align} &\psi_{t}-\Delta \psi=[\psi,Z], \label{Hall equation in 2D a}\\ &Z_{t} -\Delta Z=[\Delta \psi,\psi]. \label{Hall equation in 2D b} \end{align} \end{subequations} We will explain several results of (\ref{Hall equation in 2D}) from Section \ref{sec:1.2} to Section \ref{sec:1.5}, but before we do, we briefly describe them. \begin{enumerate}[] \item (1) The existence of weak solutions and decay rates of (\ref{Hall equation in 2D}) can be proved by following \cite{Chae-Degond-Liu, Chae Schonbek}. In Section \ref{sec:1.2}, we restate the decay rate of weak solutions in \cite{Chae Schonbek} for (\ref{Hall equation in 2D}) (Theorem \ref{weak solution}) and establish weak-strong uniqueness (Theorem \ref{weak strong uniqueness}). \item (2) In Section \ref{sec:1.3}, we deal with strong solutions of (\ref{Hall equation in 2D}). We first establish the existence of unique local-in-time solutions with large initial data and a blow-up criterion (Theorem \ref{LWP}). Having established the local in time results, we then proceed to extend the solution globally-in-time and to derive decay rates by imposing some smallness condition on initial data (Theorem \ref{GWP}). We also improve the decay rates of $\psi$ by using the structure of the equation of $\psi$ (Theorem \ref{decay of psi}). \item (3) It is reasonable to study the asymptotic stability of the temporally decaying solutions in Theorem \ref{GWP} and Theorem \ref{decay of psi}. In Section \ref{sec:1.4}, we intend to find asymptotic profiles of such solutions of (\ref{Hall equation in 2D}) from the observation that constant multiples of the two dimensional heat kernel $\Gamma$ are solutions of (\ref{Hall equation in 2D}) (Theorem \ref{Asymptotics}). \item (4) The aim of Section \ref{sec:1.5} is to analyze (\ref{Hall equation in 2D}) around harmonic functions. We take $\psi=\rho+\overline{\psi}$ or $Z=\omega+\overline{Z}$, where $\overline{\psi}$ and $\overline{Z}$ are harmonic functions. We show that there exist unique global-in-time solutions if $\rho_{0}$ or $\omega_{0}$ is sufficiently small (Theorem \ref{perturbation theorem 1} and Theorem \ref{perturbation theorem 2}). We emphasize that the sizes of $\overline{\psi}$ and $\overline{Z}$ are arbitrary. \end{enumerate} \subsection{\bf Weak solution of (\ref{Hall equation in 2D})}\label{sec:1.2} The existence and decay rate of a weak solution, even for (\ref{Hall MHD}), are already proved in \cite{Chae-Degond-Liu, Chae Schonbek} with $u_{0}\in L^{2}$ and $B_{0}\in L^{2}$. We here restate these results for (\ref{Hall equation in 2D}). We first note that we can derive the following: \[ \frac{1}{2}\frac{d}{dt} \left(\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|Z\right\|^{2}_{L^{2}}\right) +\left\|\Delta \psi\right\|^{2}_{L^{2}}+ \left\|\nabla Z\right\|^{2}_{L^{2}}=0. \] This is enough to show the existence of a weak solution of (\ref{Hall equation in 2D}) with $\left(\nabla\psi_{0}, Z_{0}\right)\in L^{2}$. Moreover, we have temporal decay rates of weak solutions, which are the two dimensional version of \cite{Chae Schonbek}.
\begin{theorem}\label{weak solution} Let $\left(\nabla\psi_{0}, Z_{0}\right)\in L^{2}$. Then, there is a weak solution of (\ref{Hall equation in 2D}) satisfying \[ \left\|\nabla \psi(t)\right\|^{2}_{L^{2}}+\left\|Z(t)\right\|^{2}_{L^{2}}+2\int^{t}_{0}\left(\left\|\Delta \psi(s)\right\|^{2}_{L^{2}}+\left\|\nabla Z(s)\right\|^{2}_{L^{2}}\right)ds \leq \left\|\nabla \psi_{0}\right\|^{2}_{L^{2}}+\left\|Z_{0}\right\|^{2}_{L^{2}} \] for all $t>0$. If $\left(\nabla\psi_{0}, Z_{0}\right)\in L^{2}\cap L^{1}$, $\nabla\psi$ and $Z$ decay in time as \begin{eqnarray}\label{Weak decay} \left\|\nabla \psi(t)\right\|_{L^{2}}+\left\|Z(t)\right\|_{L^{2}}\leq \frac{C_{0}}{\sqrt{1+t}}, \end{eqnarray} where $C_{0}$ depends on $\left\|\nabla \psi_{0}\right\|_{L^{2}\cap L^{1}}$ and $\left\|Z_{0}\right\|_{L^{2}\cap L^{1}}$. \end{theorem} As in the case of the incompressible Navier-Stokes equations, the uniqueness of weak solutions of (\ref{Hall equation in 2D}) is unknown. Weak-strong uniqueness amounts to finding a path space $\mathcal{P}$ for a strong solution $B\in \mathcal{P}$ such that all weak solutions which share the same initial condition $B_{0}$ coincide with $B$. In this paper, we do not aim to derive very general weak-strong uniqueness results as in the case of the incompressible Navier-Stokes equations \cite{Germain}, but focus on Serrin-type results. \begin{theorem}\label{weak strong uniqueness} Let $B_{1}=\left(\nabla^{\perp}\psi_{1}, Z_{1}\right)$ and $B_{2}=\left(\nabla^{\perp}\psi_{2}, Z_{2}\right)$ be weak solutions of (\ref{Hall equation in 2D}) with the same initial data $\left(\nabla\psi_{0}, Z_{0}\right)\in L^{2}$. Then, $B_{1}=B_{2}$ on $[0,T]$ if $B_{2}$ satisfies the condition \begin{eqnarray} \label{Serrin condition} \left(\Delta\psi_{2}, \nabla Z_{2}\right)\in L^{p}\left([0,T]: L^{q}\right), \quad \frac{1}{p}+\frac{1}{q}=\frac{1}{2}, \quad 2\leq q<\infty. \end{eqnarray} \end{theorem} \begin{remark} \upshape In \cite{Chae-Degond-Liu}, weak-strong uniqueness is established with $B_{2}\in L^{2}\left([0,T]: W^{1, \infty}(\mathbb{R}^{3})\right)$. By contrast, we derive weak-strong uniqueness with $B_{2}\in L^{p}\left([0,T]: L^{q}(\mathbb{R}^{2})\right)$. \end{remark} \subsection{\bf Strong solutions of (\ref{Hall equation in 2D})} \label{sec:1.3} In this paper, we establish the local in time existence of unique strong solutions of (\ref{Hall equation in 2D}) with initial data $\left(\nabla\psi_{0}, Z_{0}\right)\in H^{2}$. Let \begin{equation} \label{energy norm} \begin{split} &M(t)=\left\|\nabla \psi(t)\right\|^{2}_{H^{2}}+\left\|Z(t)\right\|^{2}_{H^{2}}, \quad M(0)=\left\|\nabla \psi_{0}\right\|^{2}_{H^{2}}+\left\|Z_{0}\right\|^{2}_{H^{2}},\\ &N(t)=\left\|\nabla^{2} \psi(t)\right\|^{2}_{H^{2}}+\left\|\nabla Z(t)\right\|^{2}_{H^{2}}, \quad \mathcal{E}(t)=M(t)+\int^{t}_{0}N(s)ds. \end{split} \end{equation} When the energy method is applied, we observe that the terms with the highest derivatives, which cannot be handled by the regularity gained from the Laplacian, disappear due to the properties of the commutator in (\ref{commutator}). For example, see (\ref{H2 bound 1}). So, we can derive the following inequality: \begin{equation*} \begin{split} &\frac{d}{dt}(1+M)+N \leq C\left(1+M\right)^{3} \end{split} \end{equation*} and this gives the first part of the following result. To derive a blow-up criterion, we re-estimate some terms in Section \ref{sec:3.1.1} from $L^{2}$ to $L^{r}$, $r\ne 2$.
From now on, constants that depend on $M(0)$ are not specified explicitly each time we state our results; we denote them all by $\mathcal{E}_{0}$. \begin{theorem} \label{LWP} Let $\left(\nabla\psi_{0}, Z_{0}\right)\in H^{2}$. There exists $T^{\ast}=T(\mathcal{E}_{0})>0$ such that there exists a unique solution of (\ref{Hall equation in 2D}) with $\mathcal{E}(T^{\ast})<\infty$. Moreover, the maximal existence time $T^{\ast}<\infty$ if and only if \begin{eqnarray} \label{blowup} \lim_{T\nearrow T^{\ast}}\int^{T}_{0}\left\|\nabla Z(t)\right\|^{q}_{L^{p}}dt=\infty, \quad \frac{1}{p}+\frac{1}{q}=\frac{1}{2}, \quad 2\leq q<\infty. \end{eqnarray} \end{theorem} \begin{remark} \upshape Compared to \cite{Chae-Degond-Liu}, the regularity of initial data is the borderline case: $B_{0}=\left(\nabla^{\perp}\psi_{0}, Z_{0}\right)\in H^{2}$ with $2=\frac{d}{2}+1=\frac{2}{2}+1$. Moreover, the blow-up criterion in (\ref{blowup}) is stated only in terms of the third component of $B$. A similar blow-up criterion can be derived in terms of $\Delta\psi$: \begin{eqnarray} \label{blowup 2222} T^{\ast}<\infty \iff \lim_{T\nearrow T^{\ast}}\int^{T}_{0}\left\|\Delta \psi\right\|^{q}_{L^{p}}dt=\infty, \quad \frac{1}{p}+\frac{1}{q}=\frac{1}{2}, \quad 2\leq q<\infty. \end{eqnarray} The condition (\ref{Serrin condition}) and the blow-up criteria (\ref{blowup}) and (\ref{blowup 2222}) are related to the scaling invariant property of (\ref{Hall equation in 2D}): if $(\psi(t,x,y),Z(t,x,y))$ is a solution of (\ref{Hall equation in 2D}) on $[0,T]$, so is \begin{eqnarray} \label{scaling invariance} \psi_{\lambda}(t,x,y)=\lambda^{-1}\psi\left(\lambda^{2}t,\lambda x, \lambda y\right), \quad Z_{\lambda}(t,x,y)=Z\left(\lambda^{2}t,\lambda x, \lambda y\right) \quad \text{on $[0,\lambda^{2}T]$}. \end{eqnarray} \end{remark} Since (\ref{Hall equation in 2D}) is dissipative, we typically expect the global well-posedness and temporal decay rates of solutions under some smallness conditions. In Section \ref{sec:3}, we will derive the following inequalities \begin{equation} \label{two inequalities Hall} \begin{split} &\frac{d}{dt}\left(\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}}\right)+\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\leq CS(t)\left(\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\right),\\ &\frac{d}{dt}\left(\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\right)+\left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}+\left\|\nabla \Delta Z\right\|^{2}_{L^{2}}\leq CS(t)\left(\left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}+\left\|\nabla \Delta Z\right\|^{2}_{L^{2}}\right), \end{split} \end{equation} where $S(t)=\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}}$. By imposing a smallness condition of the form \begin{eqnarray} \label{smallness condition 1} \epsilon_{1}=\left\|\Delta \psi_{0}\right\|^{2}_{L^{2}}+ \left\|\nabla Z_{0}\right\|^{2}_{L^{2}}, \quad C\epsilon_{1}<1, \end{eqnarray} we can obtain global-in-time solutions and can find decay rates of the solution in Theorem \ref{LWP}. \begin{theorem} \label{GWP} Let $(\nabla \psi_{0}, Z_{0})\in H^{2}$ satisfy (\ref{smallness condition 1}). Then, we can take $T^{\ast}=\infty$ in Theorem \ref{LWP}.
If $(\nabla \psi_{0}, Z_{0})\in L^{1}$ in addition, $(\Delta \psi, \nabla Z)$ decays in time as follows \begin{eqnarray} \label{GWP Decay 2} \left\|\Delta \psi(t)\right\|_{L^{2}}+\left\|\nabla Z(t)\right\|_{L^{2}} \leq \frac{\mathcal{E}_{0}}{1+t},\quad \left\|\nabla \Delta \psi(t)\right\|_{L^{2}}+\left\|\Delta Z(t)\right\|_{L^{2}}\leq \frac{\mathcal{E}_{0}}{(1+t)^{3/2}}. \end{eqnarray} \end{theorem} \begin{remark} \upshape \begin{enumerate}[] \item (1) The more detailed dependence of $\mathcal{E}_{0}$ on the initial condition is given in Section \ref{sec:3.3.2}. From the linear part of (\ref{Hall equation in 2D}), we expect decay rates of the form \[ \left\|\Delta \psi(t)\right\|_{L^{2}}+\left\|\nabla Z(t)\right\|_{L^{2}} \leq \frac{\mathcal{E}_{0}}{1+\sqrt{t}},\quad \left\|\nabla \Delta \psi(t)\right\|_{L^{2}}+\left\|\Delta Z(t)\right\|_{L^{2}}\leq \frac{\mathcal{E}_{0}}{1+t}, \] but these are improved to (\ref{GWP Decay 2}) by combining with (\ref{Weak decay}). \item (2) According to (\ref{scaling invariance}), $\dot{H}^{1}$ is a scaling-invariant space of $(\nabla\psi_{0}, Z_{0})$ and we use the smallness condition (\ref{smallness condition 1}) in Theorem \ref{GWP}. In \cite{Chae-Lee}, the smallness condition is presented in the Besov space $\dot{B}^{3/2}_{2,1}$. We here replace $\dot{B}^{3/2}_{2,1}$ with $\dot{H}^{1}$. This is possible for two reasons: (i) we reduce the dimension from 3 to 2 and (ii) we are able to avoid the Littlewood-Paley theory by exploiting some cancellation properties of the commutator (\ref{commutator}). \item (3) In the proof of Theorem \ref{LWP}, we obtain (\ref{H2 bound 2}): \[ \frac{d}{dt}\left(\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}}\right)+\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\leq C\left\|\Delta \psi\right\|^{2}_{L^{2}} \left\|\nabla\Delta \psi\right\|^{2}_{L^{2}}. \] Combined with the uniqueness part in Section \ref{sec:3.1.2}, we can show the existence of a unique solution globally-in-time with $(\nabla \psi_{0}, Z_{0})\in \dot{H}^{1}$ under the smallness condition (\ref{smallness condition 1}). But, we do not state this case separately because we are more interested in using Theorem \ref{GWP} to prove Theorem \ref{decay of psi} and Theorem \ref{Asymptotics}. \end{enumerate} \end{remark} The decay rates in Theorem \ref{GWP} are obtained by treating $\nabla \psi$ and $Z$ together. But, we observe that we can improve the decay rates of $\psi$ by using the structure of (\ref{Hall equation in 2D a}), which is a dissipative transport equation; this is also the reason why the same method cannot be applied to $Z$. \begin{theorem} \label{decay of psi} Let $(\nabla \psi_{0}, Z_{0})\in H^{2}$ satisfy (\ref{smallness condition 1}). If $\psi_{0}\in L^{1}\cap L^{2}$ in addition, $\psi$ decays in time as follows: \begin{eqnarray} \label{improved decay psi} \left\|\nabla \psi(t)\right\|_{L^{2}}\leq \frac{\mathcal{E}_{0}}{1+t}, \quad \left\|\Delta \psi(t)\right\|_{L^{2}}\leq \frac{\mathcal{E}_{0}}{(1+t)^{3/2}} \quad \text{for all $t>0$}. \end{eqnarray} \end{theorem} \subsection{Asymptotic behaviors} \label{sec:1.4} Theorem \ref{GWP} and Theorem \ref{decay of psi} provide upper bounds of decay rates. In particular, we have \[ \left\|\nabla \psi(t)\right\|_{L^{2}}\leq \frac{\mathcal{E}_{0}}{1+t}, \quad \left\|\nabla Z(t)\right\|_{L^{2}}\leq \frac{\mathcal{E}_{0}}{1+t}.
\] Although there is no embedding relationship between $\dot{H}^{1}$ and $L^{\infty}$, we can expect similar decay results in $L^{\infty}$ if we establish the asymptotic behavior of $(\psi,Z)$ as $t\rightarrow \infty$. One motivation for considering asymptotic behavior is that the asymptotic behavior of the vorticity of the incompressible Navier-Stokes equations in 2D is well-established: \cite{Carpio}, \cite{Gallay}, \cite[Page 44]{Giga}. Moreover, similar results can be obtained for more complicated models such as an aerotaxis model coupled to fluid equations \cite{Chae Kang Lee}. Along this direction, we also want to find an asymptotic profile of the solutions of (\ref{Hall equation in 2D}). To do so, we assume $\left(\nabla\psi_{0}, Z_{0}\right)\in H^{2}$ as before and we impose the following additional conditions \begin{subequations}\label{L1 average} \begin{align} &\psi_{0}\in L^{1}, \quad Z_{0}\in L^{1}, \label{L1 average a}\\ & \langle x\rangle \psi_{0}\in L^{1}, \quad \langle x\rangle Z_{0}\in L^{1}, \label{L1 average b}\\ & \int_{\mathbb{R}^{2}}\psi_{0}(x)dx=\gamma, \quad \int_{\mathbb{R}^{2}}Z_{0}(x)dx=\eta, \label{L1 average c} \end{align} \end{subequations} where $\langle x \rangle=\sqrt{1+|x|^{2}}$. Depending on which of the conditions in (\ref{L1 average}) we choose, we can describe the asymptotic behavior accordingly. \begin{theorem}\label{Asymptotics} Suppose $\left(\nabla\psi_{0}, Z_{0}\right)\in H^{2}$ and assume (\ref{L1 average a}). Then, we obtain \[ \psi(t,x)=\Gamma(t) \ast \psi_{0}+O(t^{-3/2}),\quad Z(t,x)=\Gamma(t) \ast Z_{0} +O(t^{-2}) \] as $t\rightarrow \infty$, where $\Gamma$ is the two dimensional heat kernel. If we assume (\ref{L1 average b}) and (\ref{L1 average c}), \[ \psi(t,x)=\gamma\Gamma(t,x) +O(t^{-3/2}),\quad Z(t,x)=\eta\Gamma(t,x) +O(t^{-3/2}). \] \end{theorem} We note that the asymptotic behavior of $\nabla \psi$ is the same as that of $Z$ because $\nabla\left(\nabla^{\perp}Z\cdot \nabla \psi \right) \simeq \dv \left(\nabla \psi \Delta \psi\right)$ in terms of regularity and decay rates. So, the asymptotic behavior of $B$ is \[ B(t,x)=\left(\gamma \nabla^{\perp}\Gamma(t,x), \eta\Gamma(t,x)\right)+O(t^{-3/2}). \] \subsection{\bf Perturbation around harmonic functions} \label{sec:1.5} Theorem \ref{GWP} is about the existence of a global-in-time solution when the initial data is small enough around zero. We now perturb (\ref{Hall equation in 2D}) around harmonic functions. Then, the newly generated terms are linear and so one may guess that a smallness condition on the harmonic functions is also necessary. But, we emphasize that this is not the case. As one can see from the statements of Theorem \ref{perturbation theorem 1} and Theorem \ref{perturbation theorem 2} or their proofs in Section \ref{sec:4}, we can absorb these terms into the left-hand side of the desired bounds by multiplying by a large constant depending on the harmonic function we choose. \subsubsection{\bf Case 1} Let $\overline{\psi}$ be a harmonic function such that $\left\|\nabla^{2}\overline{\psi}\right\|_{L^{\infty}}<\infty$. (For example, $\overline{\psi}(x,y)=x^{2}-y^{2}$ or $\overline{\psi}(x,y)=xy$.) Let $\psi=\rho+\overline{\psi}$. Then, we obtain the following equations for $(\rho, Z)$: \begin{subequations} \label{new perturbation equation 1} \begin{align} &\rho_{t}-\Delta \rho=[\rho,Z]+ [\overline{\psi},Z], \\ & Z_{t} -\Delta Z=[\Delta \rho,\rho]+ [\Delta \rho, \overline{\psi}].
\end{align} \end{subequations} Since the smallness condition is stated by combining $\overline{\psi}$ with various levels of regularity of the initial data, as shown just below, we take \begin{equation} \label{several norm 1} \begin{split} F_{1}=\left\|\nabla \rho\right\|^{2}_{L^{2}}+\left\|Z\right\|^{2}_{L^{2}}, \quad &F_{2}=\left\|\Delta \rho\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}},\\ F_{3}=\left\|\nabla \Delta \rho\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}, \quad &F_{4}=\left\|\Delta^{2} \rho\right\|^{2}_{L^{2}}+\left\|\nabla \Delta Z\right\|^{2}_{L^{2}} \end{split} \end{equation} and let $\epsilon_{2}=C_{1}^{2}F_{1}(0)+C_{1}F_{2}(0)+ F_{3}(0)$ and $C_{1}=k\left\|\nabla^{2}\overline{\psi}\right\|^{2}_{L^{\infty}}$ with $k$ fixed by (\ref{size k 1}). \begin{theorem} \label{perturbation theorem 1} There exists a constant $C$ such that if $C\epsilon_{2}<1$, there exists a unique solution $(\nabla \rho, Z)$ of (\ref{new perturbation equation 1}) satisfying \[ \begin{split} &C_{1}^{2}F_{1}(t)+C_{1}F_{2}(t)+ F_{3}(t)+ (1-C\epsilon_{2})\int^{t}_{0}\left(C_{1}F_{3}(s)+F_{4}(s)\right)ds \\ &\leq C_{1}^{2}F_{1}(0)+C_{1}F_{2}(0)+ F_{3}(0)\quad \text{for all $t>0$.} \end{split} \] \end{theorem} \subsubsection{\bf Case 2} Let $\overline{Z}$ be a harmonic function such that $\left\|\nabla \overline{Z}\right\|_{L^{\infty}}<\infty$. (For example, $\overline{Z}(x,y)=ax+by$.) Let $Z=\omega+\overline{Z}$. Then, we obtain the following equations for $(\psi,\omega)$: \begin{subequations} \label{new perturbation equation 2} \begin{align} &\psi_{t}-\Delta \psi=[\psi,\omega]+ [\psi, \overline{Z}], \\ & \omega_{t} -\Delta \omega=[\Delta \psi,\psi]. \end{align} \end{subequations} For the same reason as in Case 1, let \begin{equation} \label{several norm 2} \begin{split} K_{1}=\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|\omega\right\|^{2}_{L^{2}}, \quad &K_{2}=\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla \omega\right\|^{2}_{L^{2}}, \\ K_{3}=\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta \omega\right\|^{2}_{L^{2}}, \quad &K_{4}=\left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}+\left\|\nabla \Delta \omega\right\|^{2}_{L^{2}} \end{split} \end{equation} and let $\epsilon_{3}=C_{2}^{3}\left\|\psi_{0}\right\|^{2}_{L^{2}}+ C_{2}^{2}K_{1}(0)+ C_{2}K_{2}(0)+K_{3}(0)$ and $C_{2}=k\left\|\nabla^{2}\overline{Z}\right\|^{2}_{L^{\infty}}$ with $k$ fixed by (\ref{size k 2}). \begin{theorem} \label{perturbation theorem 2} There exists a constant $C$ such that if $C\epsilon_{3}<1$, there exists a unique solution $(\nabla \psi, \omega)$ of (\ref{new perturbation equation 2}) satisfying \begin{equation*} \label{final bound per 2} \begin{split} &C_{2}^{3}\left\|\psi(t)\right\|^{2}_{L^{2}}+ C_{2}^{2}K_{1}(t)+C_{2}K_{2}(t)+K_{3}(t)+(1-C\epsilon_{3})\int^{t}_{0}\left(C_{2}^{2}K_{2}(s)+ C_{2}K_{3}(s)+K_{4}(s)\right)ds\\ &\leq C_{2}^{3}\left\|\psi(0)\right\|^{2}_{L^{2}}+ C_{2}^{2}K_{1}(0)+C_{2}K_{2}(0)+ K_{3}(0) \quad \text{for all $t>0$.} \end{split} \end{equation*} \end{theorem} \begin{remark} \upshape \begin{enumerate}[] \item (1) Compared to Theorem \ref{GWP}, we are not able to derive decay rates in Theorem \ref{perturbation theorem 1} and Theorem \ref{perturbation theorem 2} due to the terms involving $\overline{\psi}$ and $\overline{Z}$ on the right-hand side of (\ref{perp 2}), (\ref{perp 3}), (\ref{new perp 2}), and (\ref{new perp 3}).
\item (2) If we take $\psi=\rho+\overline{\psi}$ and $Z=\omega+\overline{Z}$, we obtain the following equations: \begin{subequations} \label{both harmonic perturbation} \begin{align} &\rho_{t}-\Delta \rho=[\rho,\omega]+ [\rho, \overline{Z}]+[\overline{\psi},\omega]+ [\overline{\psi}, \overline{Z}], \\ & \omega_{t} -\Delta \omega=[\Delta \rho,\rho]+ [\Delta \rho, \overline{\psi}]. \end{align} \end{subequations} To obtain a result like Theorem \ref{perturbation theorem 1} and Theorem \ref{perturbation theorem 2}, we may start by choosing $\overline{\psi}$ and $\overline{Z}$ such that $[\overline{\psi}, \overline{Z}]=0$. One specific choice is $\overline{\psi}=\overline{Z}=ax+by$. In this case, $[\rho, \overline{Z}]=b\rho_{x}-a\rho_{y}$, $[\overline{\psi},\omega]=a\omega_{y}-b\omega_{x}$ and $[\Delta \rho, \overline{\psi}]=b\Delta \rho_{x}-a\Delta \rho_{y}$, and these will be cancelled out when we apply our method. So, (\ref{both harmonic perturbation}) is equivalent to (\ref{Hall equation in 2D}). But, we do not have results dealing with more general $\overline{\psi}$ and $\overline{Z}$ that would handle (\ref{both harmonic perturbation}) in the way (\ref{new perturbation equation 1}) or (\ref{new perturbation equation 2}) are handled. \end{enumerate} \end{remark} \subsection{\bf Hall MHD} After considering the Hall equations, we consider in this section the $2 \frac{1}{2}$ dimensional Hall MHD given by (\ref{coupled two half}). Due to the presence of the fluid part, results similar to Theorem \ref{decay of psi}, Theorem \ref{Asymptotics}, Theorem \ref{perturbation theorem 1}, and Theorem \ref{perturbation theorem 2} will not be presented in this paper. Instead, we begin with the existence and the decay rate of weak solutions of (\ref{coupled two half}), which are again the two dimensional version of \cite{Chae-Degond-Liu, Chae Schonbek}. \begin{theorem} Let $\left(\nabla\psi_{0}, Z_{0}, \nabla\phi_{0}, W_{0}\right)\in L^{2}$. Then, there is a weak solution of (\ref{coupled two half}) satisfying \[ \begin{split} &\left\|\nabla \psi(t)\right\|^{2}_{L^{2}}+\left\|Z(t)\right\|^{2}_{L^{2}}+\left\|\nabla \phi(t)\right\|^{2}_{L^{2}}+\left\|W(t)\right\|^{2}_{L^{2}}\\ &+2\int^{t}_{0}\left(\left\|\Delta \psi(s)\right\|^{2}_{L^{2}}+\left\|\nabla Z(s)\right\|^{2}_{L^{2}}+\left\|\Delta \phi(s)\right\|^{2}_{L^{2}}+\left\|\nabla W(s)\right\|^{2}_{L^{2}}\right)ds \\ &\leq \left\|\nabla \psi_{0}\right\|^{2}_{L^{2}}+\left\|Z_{0}\right\|^{2}_{L^{2}} +\left\|\nabla \phi_{0}\right\|^{2}_{L^{2}}+\left\|W_{0}\right\|^{2}_{L^{2}} \end{split} \] for all $t>0$. If $\left(\nabla\psi_{0}, Z_{0}, \nabla\phi_{0}, W_{0}\right)\in L^{2}\cap L^{1}$, $\left(\nabla\psi, Z, \nabla\phi, W\right)$ decay in time as \begin{eqnarray}\label{Weak decay Hall MHD} \left\|\nabla \psi(t)\right\|_{L^{2}}+\left\|Z(t)\right\|_{L^{2}}+\left\|\nabla \phi(t)\right\|_{L^{2}}+\left\|W(t)\right\|_{L^{2}}\leq \frac{C_{0}}{\sqrt{1+t}}, \end{eqnarray} where $C_{0}$ depends on $\left\|\nabla \psi_{0}\right\|_{L^{2}\cap L^{1}}$, $\left\|Z_{0}\right\|_{L^{2}\cap L^{1}}$, $\left\|\nabla \phi_{0}\right\|_{L^{2}\cap L^{1}}$, and $\left\|W_{0}\right\|_{L^{2}\cap L^{1}}$. \end{theorem} We now proceed, as in Section \ref{sec:1.3}, towards the strong solutions of (\ref{coupled two half}). We first show the existence of unique local-in-time solutions with large initial data and we derive a blow-up criterion.
The function spaces that we introduce are similar to those used for (\ref{Hall equation in 2D}): \begin{equation}\label{energy norm 2} \begin{split} P(t)&=\left\|\nabla \psi(t)\right\|^{2}_{H^{2}}+\left\|Z(t)\right\|^{2}_{H^{2}}+\left\|\nabla \phi(t)\right\|^{2}_{H^{2}}+ \left\|W(t)\right\|^{2}_{H^{2}}, \\ Q(t)&=\left\|\Delta \psi(t)\right\|^{2}_{H^{2}}+\left\|\nabla Z(t)\right\|^{2}_{H^{2}}+\left\|\Delta \phi(t)\right\|^{2}_{H^{2}}+ \left\|\nabla W(t)\right\|^{2}_{H^{2}},\\ \mathcal{E}(t)&=P(t)+\int^{t}_{0}Q(s)ds. \end{split} \end{equation} As for the Hall equations, constants that depend on $P(0)$ are not specified explicitly each time we state our results; we denote them all by $\mathcal{E}_{0}$. \begin{theorem} \label{LWP Hall MHD} Let $\left(\nabla\psi_{0}, Z_{0},\nabla \phi_{0}, W_{0}\right)\in H^{2}$. There exists $T^{\ast}=T(\mathcal{E}_{0})>0$ such that there exists a unique solution of (\ref{coupled two half}) with $\mathcal{E}(T^{\ast})<\infty$. Moreover, the maximal existence time $T^{\ast}<\infty$ if and only if \begin{eqnarray} \label{blowup Hall MHD} \lim_{T\nearrow T^{\ast}}\int^{T}_{0}\left\|\nabla Z(t)\right\|^{q}_{L^{p}}dt=\infty, \quad \frac{1}{p}+\frac{1}{q}=\frac{1}{2}, \quad 2\leq q<\infty. \end{eqnarray} \end{theorem} \begin{remark} \upshape We emphasize that the blow-up criterion is specified only in terms of $Z$ even when the fluid part enters. \end{remark} In Section \ref{sec:7}, we will derive inequalities similar to (\ref{two inequalities Hall}), but the smallness condition is expressed in a more complicated form by \begin{eqnarray} \label{smallness condition 11} \epsilon_{4}=\left\|\nabla \psi_{0}\right\|^{2}_{H^{1}}+\left\|Z_{0}\right\|^{2}_{H^{1}}+\left\|\nabla \phi_{0}\right\|^{2}_{H^{1}}+\left\|W_{0}\right\|^{2}_{H^{1}}, \quad C\epsilon_{4}<1. \end{eqnarray} Compared to Theorem \ref{GWP}, we need to modify the smallness condition to (\ref{smallness condition 11}) because (\ref{coupled two half}) does not have a scaling-invariant property. Suppose that $(u,B=0)$ and $(u=0,B)$ solve (\ref{Hall MHD}), respectively. Then, the same is true for the rescaled functions $u_{\lambda}(t,x)=\lambda u(\lambda^{2}t, \lambda x)$ and $B_{\lambda}(t,x)=B(\lambda^{2}t, \lambda x)$, respectively. So, $u$ and $B$ have different scalings. Since (\ref{Hall MHD}), and so (\ref{coupled two half}), includes both $u$ and $B$, the smallness condition is determined by a combination of the scaling-invariant quantities of $u$ and $B$. \begin{theorem}\label{GWP Hall MHD} Let $\left(\nabla\psi_{0}, Z_{0},\nabla \phi_{0}, W_{0}\right)\in H^{2}$ satisfy (\ref{smallness condition 11}). Then, we can take $T^{\ast}=\infty$ in Theorem \ref{LWP Hall MHD}. If $\left(\nabla\psi_{0}, Z_{0},\nabla \phi_{0}, W_{0}\right)\in L^{1}$ in addition, $(\Delta \psi, \nabla Z,\Delta \phi, \nabla W)$ decay in time as follows \begin{equation} \label{GWP Decay Hall MHD} \begin{split} &\left\|\Delta \psi(t)\right\|_{L^{2}}+\left\|\nabla Z(t)\right\|_{L^{2}} +\left\|\Delta \phi(t)\right\|_{L^{2}}+\left\|\nabla W(t)\right\|_{L^{2}}\leq \frac{\mathcal{E}_{0}}{1+t},\\ & \left\|\nabla \Delta \psi(t)\right\|_{L^{2}}+\left\|\Delta Z(t)\right\|_{L^{2}}+\left\|\nabla \Delta \phi(t)\right\|_{L^{2}}+\left\|\Delta W(t)\right\|_{L^{2}}\leq \frac{\mathcal{E}_{0}}{(1+t)^{3/2}}.
\end{split} \end{equation} \end{theorem} Since the decay rates (\ref{GWP Decay Hall MHD}) can easily be derived by using (\ref{Weak decay Hall MHD}) and the argument in Section \ref{sec:3.3}, we will skip the proof of (\ref{GWP Decay Hall MHD}). \section{Preliminaries} All constants will be denoted by $C$ and we follow the convention that such constants can vary from expression to expression and even between two occurrences within the same expression. We here provide some inequalities in 2D: \begin{subequations} \label{inequality 2} \begin{align} &\left\|f\right\|_{L^{p}}\leq C(p)\left\|f\right\|^{\frac{2}{p}}_{L^{2}}\left\|\nabla f\right\|^{1-\frac{2}{p}}_{L^{2}}, \quad 2<p<\infty,\label{inequality 2 a}\\ & \left\|f\right\|_{L^{\infty}} \leq C\left\|f\right\|^{\frac{1}{2}}_{L^{2}} \left\|\Delta f\right\|^{\frac{1}{2}}_{L^{2}}.\label{inequality 2 b} \end{align} \end{subequations} We also use the following inequalities, which hold in any dimension: \begin{eqnarray} \label{inequality 1} \left\|\nabla f\right\|_{L^{2}} \leq \left\|f\right\|^{\frac{1}{2}}_{L^{2}}\left\|\Delta f\right\|^{\frac{1}{2}}_{L^{2}}, \quad \left\|\nabla^{2}f\right\|_{L^{p}}\leq C(p)\left\|\Delta f\right\|_{L^{p}}, \quad 1<p<\infty. \end{eqnarray} We now recall the commutator $[f,g]=\nabla f\cdot \nabla^{\perp}g=f_{x}g_{y}-f_{y}g_{x}$. Then, the commutator has the following properties: \begin{subequations} \label{commutator} \begin{align} & [f,f]=0, \quad [f,g]=-[g,f],\\ & \Delta [f,g]= [\Delta f,g] +[f,\Delta g] + 2[f_{x},g_{x}] +2[f_{y},g_{y}],\label{commutator c}\\ & \int f[f,g]=0, \label{commutator d}\\ & \int f[g,h]=\int g [h,f]. \label{commutator e} \end{align} \end{subequations} We will use (\ref{inequality 2}) -- (\ref{commutator}) repeatedly when proving our results and we will not refer to them every time their use is obvious. \subsection*{How to prove our results} To prove our results, we could use a fixed point argument. But, in principle, the calculations used to derive a priori estimates carry over directly to the fixed point argument. So, we only provide a priori estimates for the existence part and prove the uniqueness. \section{Proofs of Theorem \ref{LWP}, Theorem \ref{weak strong uniqueness}, and Theorem \ref{GWP}} \label{sec:3} In this section we establish the local-in-time existence of unique strong solutions of (\ref{Hall equation in 2D}). The analysis given in this section will apply to (\ref{coupled two half}) in Section \ref{sec:4} and Section \ref{sec:7}. Since the computations used to prove Theorem \ref{LWP} can be used to prove Theorem \ref{weak strong uniqueness}, we begin with Theorem \ref{LWP}. \subsection{Proof of Theorem \ref{LWP}} We first recall (\ref{Hall equation in 2D}): \begin{subequations}\label{equation psi Z} \begin{align} &\psi_{t}-\Delta \psi=[\psi,Z], \label{equation of psi a}\\ &Z_{t} -\Delta Z=[\Delta \psi,\psi]. \label{equation of Z a} \end{align} \end{subequations} \subsubsection{\bf A priori estimates}\label{sec:3.1.1} We multiply (\ref{equation of psi a}) by $-\Delta \psi$, (\ref{equation of Z a}) by $Z$, and integrate over $\mathbb{R}^{2}$. By using (\ref{commutator e}), we have \begin{eqnarray} \label{L2 bound} \frac{1}{2}\frac{d}{dt}\left(\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|Z\right\|^{2}_{L^{2}}\right)+\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}}=-\int \Delta \psi[\psi,Z] +\int Z[\Delta \psi,\psi]=0.
\end{eqnarray} We next multiply (\ref{equation of psi a}) by $\Delta^{2} \psi$, (\ref{equation of Z a}) by $-\Delta Z$ and integrate over $\mathbb{R}^{2}$. Then, \[ \frac{1}{2}\frac{d}{dt}\left(\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}}\right)+\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}} = \int \Delta^{2} \psi [\psi,Z] -\int\Delta Z[\Delta \psi,\psi]. \] Since \begin{equation}\label{H2 bound 1} \begin{split} &\int \Delta^{2} \psi [\psi,Z] -\int\Delta Z[\Delta \psi,\psi]=2\int \Delta \psi \left([\psi_{x},Z_{x}]+[\psi_{y},Z_{y}] \right)\leq C\left\|\nabla^{2}Z\right\|_{L^{2}} \left\|\nabla^{2}\psi\right\|^{2}_{L^{4}}\\ & \leq C \left\|\Delta Z\right\|_{L^{2}} \left\|\nabla^{2}\psi\right\|_{L^{2}}\left\|\nabla^{3}\psi\right\|_{L^{2}} \leq C\left\|\Delta \psi\right\|^{2}_{L^{2}}\left\|\nabla\Delta \psi\right\|^{2}_{L^{2}} +\frac{1}{2}\left\|\Delta Z\right\|^{2}_{L^{2}}, \end{split} \end{equation} we obtain \begin{eqnarray} \label{H2 bound 2} \frac{d}{dt}\left(\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}}\right)+\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\leq C\left\|\Delta \psi\right\|^{2}_{L^{2}} \left\|\nabla\Delta \psi\right\|^{2}_{L^{2}}. \end{eqnarray} We finally multiply (\ref{equation of psi a}) by $-\Delta^{3} \psi$, (\ref{equation of Z a}) by $\Delta^{2} Z$ and integrate over $\mathbb{R}^{2}$. By noticing, as in (\ref{H2 bound 1}), the cancellation of the terms with the highest order derivatives in the first equality below, we have \begin{equation*} \begin{split} &\frac{1}{2}\frac{d}{dt}\left(\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\right) +\left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}+\left\|\nabla \Delta Z\right\|^{2}_{L^{2}} = -\int \Delta^{3}\psi [\psi,Z]+\int \Delta^{2} Z[\Delta \psi,\psi]\\ & = -\int \Delta^{2}\psi [\Delta \psi,Z] -2\int \Delta^{2}\psi \left([\psi_{x},Z_{x}]+[\psi_{y},Z_{y}]\right) - 2\int \Delta\psi \left([\psi_{x},\Delta Z_{x}]+[\psi_{y},\Delta Z_{y}]\right)\\ &=\text{(I)+(II)+(III)}. \end{split} \end{equation*} We bound each term on the right-hand side as follows. By using the definition of the commutator, \begin{equation} \label{first term} \begin{split} \text{(I)}&=-\int \left(\nabla^{\perp}Z\cdot \nabla \Delta \psi\right) \Delta^{2}\psi =\int \left(\nabla \nabla^{\perp}Z\cdot \nabla \Delta \psi\right)\cdot \nabla \Delta\psi +\int \left(\nabla^{\perp}Z\cdot \nabla \nabla \Delta \psi\right) \nabla \Delta\psi \\ &=\int \left(\nabla \nabla^{\perp}Z\cdot \nabla \Delta \psi\right)\cdot \nabla \Delta\psi -\frac{1}{2}\int \nabla \nabla^{\perp}Z \left|\nabla \Delta \psi\right|^{2} \leq C\int \left|\nabla^{2}Z\right| \left|\nabla^{3}\psi \right|^{2}\\ &\leq C \left\|\nabla^{2}Z\right\|_{L^{2}} \left\|\nabla^{3}\psi\right\|^{2}_{L^{4}} \leq C\left\|\nabla^{2}Z\right\|_{L^{2}} \left\|\nabla^{3}\psi\right\|_{L^{2}} \left\|\nabla^{4}\psi\right\|_{L^{2}} \\ &\leq C\left\|\Delta Z\right\|^{2}_{L^{2}} \left\|\nabla\Delta \psi\right\|^{2}_{L^{2}}+\frac{1}{4}\left\|\Delta^{2}\psi\right\|^{2}_{L^{2}}.
\end{split} \end{equation} By moving one derivative in $\Delta^{2}\psi$ to $\left([\psi_{x},Z_{x}]+[\psi_{y},Z_{y}]\right)$ and by using (\ref{commutator e}), \begin{equation} \label{H3 bound 1} \begin{split} \text{(II)}+\text{(III)}\leq C\int \left|\nabla^{2}Z\right| \left|\nabla^{3}\psi \right|^{2}+C\int \left|\nabla^{2}\psi \right| \left|\nabla^{3}\psi \right|\left|\nabla^{3}Z\right| \end{split} \end{equation} with the second term estimated by \begin{equation*} \begin{split} \int \left|\nabla^{2}\psi \right| \left|\nabla^{3}\psi \right|\left|\nabla^{3}Z\right| &\leq \left\|\nabla^{2}\psi\right\|_{L^{4}} \left\|\nabla^{3}\psi\right\|_{L^{4}} \left\|\nabla^{3}Z\right\|_{L^{2}} \\ & \leq C \left\|\Delta \psi\right\|^{2}_{L^{2}} \left\|\nabla \Delta \psi\right\|^{4}_{L^{2}}+\frac{1}{4}\left\|\Delta^{2}\psi\right\|^{2}_{L^{2}}+ \frac{1}{2}\left\|\nabla\Delta Z\right\|^{2}_{L^{2}}. \end{split} \end{equation*} With these estimates, we have \begin{equation}\label{H3 bound 2} \begin{split} &\frac{d}{dt}\left(\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\right) +\left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}+\left\|\nabla \Delta Z\right\|^{2}_{L^{2}} \\ &\leq C\left\|\Delta Z\right\|^{2}_{L^{2}} \left\|\nabla\Delta \psi\right\|^{2}_{L^{2}}+C \left\|\Delta \psi\right\|^{2}_{L^{2}} \left\|\nabla \Delta \psi\right\|^{4}_{L^{2}}. \end{split} \end{equation} By (\ref{L2 bound}), (\ref{H2 bound 2}), and (\ref{H3 bound 2}), we derive the following inequality: \begin{eqnarray} \label{full H3 ddd} \frac{d}{dt}(1+M)+N\leq CM^{2}+C M^{3}\leq C(1+M)^{3}, \end{eqnarray} where $M$ and $N$ are defined in (\ref{energy norm}). From this, dividing by $(1+M)^{3}$ and integrating in time, we deduce \begin{eqnarray} \label{full H3 dddd} M(t)\leq \sqrt{\frac{(1+M(0))^{2}}{1-2Ct(1+M(0))^{2}}}-1 \quad \text{for all} \ t\leq T^{\ast}<\frac{1}{2C (1+M(0))^{2}}. \end{eqnarray} Integrating (\ref{full H3 ddd}) and using (\ref{full H3 dddd}), we finally derive $\mathcal{E}(T^{\ast})<\infty$. \subsubsection{\bf Uniqueness} \label{sec:3.1.2} Suppose there are two solutions $(\psi_{1}, Z_{1})$ and $(\psi_{2}, Z_{2})$. Let $\psi=\psi_{1}-\psi_{2}$ and $Z=Z_{1}-Z_{2}$. By subtracting the equations for $(\psi_{1}, Z_{1})$ and $(\psi_{2}, Z_{2})$, we have \begin{subequations} \label{difference} \begin{align} &\psi_{t}-\Delta \psi=[\psi_{1}, Z]+[\psi,Z_{2}], \\ & Z_{t}-\Delta Z=[\Delta \psi, \psi_{1}]+[\Delta \psi_{2}, \psi].
\end{align} \end{subequations} From this, we see that \begin{equation} \label{difference uniqueness} \begin{split} &\frac{1}{2}\frac{d}{dt} \left(\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|Z\right\|^{2}_{L^{2}}\right) +\left\|\Delta \psi\right\|^{2}_{L^{2}}+ \left\|\nabla Z\right\|^{2}_{L^{2}}\\ & =-\int \Delta \psi[\psi_{1},Z] - \int \Delta \psi[\psi,Z_{2}] +\int Z[\Delta \psi,\psi_{1}] +\int Z[\Delta \psi_{2},\psi]\\ &= - \int \Delta \psi[\psi,Z_{2}] +\int Z[\Delta \psi_{2},\psi] \\ &\leq C\left(\left\|\nabla Z_{2}\right\|^{2}_{L^{2}} \left\|\Delta Z_{2}\right\|^{2}_{L^{2}} +\left\|\Delta \psi_{2}\right\|^{2}_{L^{2}} \left\|\nabla \Delta \psi_{2}\right\|^{2}_{L^{2}} \right)\left\|\nabla \psi\right\|^{2}_{L^{2}} +\left\|\Delta \psi\right\|^{2}_{L^{2}}+ \left\|\nabla Z\right\|^{2}_{L^{2}} \end{split} \end{equation} which gives \[ \frac{d}{dt} \left(\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|Z\right\|^{2}_{L^{2}}\right) \leq C\left(\left\|\nabla Z_{2}\right\|^{2}_{L^{2}} \left\|\Delta Z_{2}\right\|^{2}_{L^{2}} +\left\|\Delta \psi_{2}\right\|^{2}_{L^{2}} \left\|\nabla \Delta \psi_{2}\right\|^{2}_{L^{2}} \right)\left(\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|Z\right\|^{2}_{L^{2}}\right). \] Since $\left\|\nabla Z_{2}\right\|^{2}_{L^{2}} \left\|\Delta Z_{2}\right\|^{2}_{L^{2}} +\left\|\Delta \psi_{2}\right\|^{2}_{L^{2}} \left\|\nabla \Delta \psi_{2}\right\|^{2}_{L^{2}}$ is integrable on $[0,T^{\ast})$, the uniqueness follows using Gronwall's lemma. \subsubsection{\bf Blow-up criterion} \label{sec:3.1.3} To obtain (\ref{blowup}), we first bound the right-hand side of (\ref{H2 bound 1}) by \[ \left|\int \Delta \psi \left([\psi_{x},Z_{x}]+[\psi_{y},Z_{y}] \right)\right|\leq C\left\|\nabla Z\right\|_{L^{p}} \left\|\nabla^{2}\psi\right\|_{L^{q}} \left\|\nabla^{3}\psi\right\|_{L^{2}}, \quad \frac{1}{p}+\frac{1}{q}=\frac{1}{2}. \] By (\ref{inequality 2 a}), we have \begin{equation} \label{blowup first} \begin{split} \left\|\nabla Z\right\|_{L^{p}} \left\|\Delta\psi\right\|_{L^{q}} \left\|\nabla^{3}\psi\right\|_{L^{2}}&\leq C\left\|\nabla Z\right\|_{L^{p}} \left\|\nabla^{2}\psi\right\|^{\frac{2}{q}}_{L^{2}} \left\|\nabla\Delta\psi\right\|^{2-\frac{2}{q}}_{L^{2}}\\ &\leq C\left\|\nabla Z\right\|^{q}_{L^{p}} \left\|\Delta\psi\right\|^{2}_{L^{2}} +\left\|\nabla\Delta\psi\right\|^{2}_{L^{2}}. \end{split} \end{equation} So, (\ref{H2 bound 2}) can be rewritten as \[ \label{blowup 1} \frac{d}{dt}\left(\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}}\right)+\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\leq C\left\|\nabla Z\right\|^{q}_{L^{p}}\left(\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}}\right). \] Integrating this in time and using Gronwall's inequality, we have \begin{equation} \label{blowup 2} \begin{split} &\left\|\Delta \psi(t)\right\|^{2}_{L^{2}}+\left\|\nabla Z(t)\right\|^{2}_{L^{2}}+\int^{t}_{0} \left(\left\|\nabla \Delta \psi(s)\right\|^{2}_{L^{2}}+\left\|\Delta Z(s)\right\|^{2}_{L^{2}}\right)ds\\ &\leq \mathcal{E}_{0}\exp\left[C\int^{t}_{0}\left\|\nabla Z(s)\right\|^{q}_{L^{p}}ds\right].
\end{split} \end{equation} We then integrate (\ref{H3 bound 2}) in time, dropping the dissipative terms $\left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}+\left\|\nabla \Delta Z\right\|^{2}_{L^{2}}$, to obtain \[ \left\|\nabla \Delta \psi(t)\right\|^{2}_{L^{2}}+\left\|\Delta Z(t)\right\|^{2}_{L^{2}}\leq \mathcal{E}_{0}\exp\left[C\int^{t}_{0}\left(\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta \psi\right\|^{2}_{L^{2}} \left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}\right)ds\right]. \] By using (\ref{blowup 2}) for the second term in the integrand of the right-hand side, we obtain \begin{eqnarray} \label{Blowup Hall equation} \left\|\nabla\Delta \psi(t)\right\|^{2}_{L^{2}}+\left\|\Delta Z(t)\right\|^{2}_{L^{2}}\leq \mathcal{E}_{0}\exp \exp\left[CB(t)+CB^{2}(t)\right], \quad B(t)=\int^{t}_{0}\left\|\nabla Z(s)\right\|^{q}_{L^{p}}ds. \end{eqnarray} This completes the proof of Theorem \ref{LWP}. \subsection{Proof of Theorem \ref{weak strong uniqueness}} The proof of Theorem \ref{weak strong uniqueness} is very similar to the arguments in Section \ref{sec:3.1.2} and Section \ref{sec:3.1.3}. Let $B_{1}=\left(\nabla^{\perp}\psi_{1}, Z_{1}\right)$ and $B_{2}=\left(\nabla^{\perp}\psi_{2}, Z_{2}\right)$ be the two weak solutions of (\ref{Hall equation in 2D}). Let $\psi=\psi_{1}-\psi_{2}$ and $Z=Z_{1}-Z_{2}$. By (\ref{difference uniqueness}) and (\ref{commutator e}), and by using (\ref{blowup first}), we have \begin{equation*} \begin{split} &\frac{1}{2}\frac{d}{dt} \left(\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|Z\right\|^{2}_{L^{2}}\right) +\left\|\Delta \psi\right\|^{2}_{L^{2}}+ \left\|\nabla Z\right\|^{2}_{L^{2}}= - \int \Delta \psi[\psi,Z_{2}] +\int \Delta \psi_{2}[\psi,Z]\\ &\leq C\left\|\nabla Z_{2}\right\|_{L^{p}} \left\|\nabla\psi\right\|_{L^{q}} \left\|\Delta\psi\right\|_{L^{2}}+C\left\|\Delta \psi_{2}\right\|_{L^{p}} \left\|\nabla\psi\right\|_{L^{q}} \left\|\nabla Z\right\|_{L^{2}}\\ & \leq C\left(\left\|\nabla Z_{2}\right\|^{q}_{L^{p}}+\left\|\Delta \psi_{2}\right\|^{q}_{L^{p}} \right)\left(\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|Z\right\|^{2}_{L^{2}}\right)+\frac{1}{2}\left\|\Delta \psi\right\|^{2}_{L^{2}}+ \frac{1}{2}\left\|\nabla Z\right\|^{2}_{L^{2}} \end{split} \end{equation*} and so we obtain \[ \frac{d}{dt} \left(\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|Z\right\|^{2}_{L^{2}}\right) +\left\|\Delta \psi\right\|^{2}_{L^{2}}+ \left\|\nabla Z\right\|^{2}_{L^{2}}\leq C\left(\left\|\nabla Z_{2}\right\|^{q}_{L^{p}}+\left\|\Delta \psi_{2}\right\|^{q}_{L^{p}} \right)\left(\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|Z\right\|^{2}_{L^{2}}\right). \] By Gronwall's inequality, we complete the proof of Theorem \ref{weak strong uniqueness}. \subsection{Proof of Theorem \ref{GWP}}\label{sec:3.3} \subsubsection{\bf A priori estimates} We now show that the strong solutions provided by Theorem \ref{LWP} are in fact defined for all $t>0$ under the smallness condition (\ref{smallness condition 1}). We first rewrite (\ref{H2 bound 2}) as \[ \frac{d}{dt}\left(\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}}\right)+\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\leq CS(t)\left(\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\right), \] where $S(t)=\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}}$. Let $\epsilon_{1}=\left\|\Delta \psi_{0}\right\|^{2}_{L^{2}}+ \left\|\nabla Z_{0}\right\|^{2}_{L^{2}}$.
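For the reader's convenience we record the elementary observation behind the next bound (a brief sketch, with the same generic constant $C$ as in the preceding display): since $S(0)=\epsilon_{1}$, the inequality above can be rewritten as
\begin{equation*}
\frac{d}{dt}S(t)+\left(1-CS(t)\right)\left(\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\right)\leq 0,
\end{equation*}
so $S$ is non-increasing as long as $CS(t)<1$, and a standard continuity argument then propagates the smallness $S(t)\leq \epsilon_{1}$ for all times.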
If $C\epsilon_{1}<1$, we have \begin{eqnarray} \label{Hall decay 1} \left\|\Delta \psi(t)\right\|^{2}_{L^{2}}+\left\|\nabla Z(t)\right\|^{2}_{L^{2}}+(1-C\epsilon_{1})\int^{t}_{0}\left(\left\|\nabla \Delta \psi(s)\right\|^{2}_{L^{2}}+\left\|\Delta Z(s)\right\|^{2}_{L^{2}}\right)ds\leq \epsilon_{1} \end{eqnarray} for all $t>0$. We next proceed to bound (\ref{H3 bound 2}) by estimating the two terms on the right-hand side of (\ref{H3 bound 1}) in a different way. From the last expression of (\ref{first term}), we obtain \begin{equation} \label{third term} \begin{split} &C\left\|\nabla^{2}Z\right\|_{L^{2}} \left\|\nabla^{3}\psi\right\|^{2}_{L^{4}}\leq C\left\|\Delta Z\right\|^{2}_{L^{2}} \left\|\nabla\Delta \psi\right\|^{2}_{L^{2}}+\frac{1}{2}\left\|\Delta^{2}\psi\right\|^{2}_{L^{2}} \\ & \leq C\left\|\nabla Z\right\|^{2}_{L^{2}} \left\|\nabla \Delta Z\right\|^{2}_{L^{2}} +C\left\|\Delta \psi\right\|^{2}_{L^{2}} \left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}+\frac{1}{2}\left\|\Delta^{2}\psi\right\|^{2}_{L^{2}}. \end{split} \end{equation} By (\ref{inequality 2 b}), we bound the second term on the right-hand side of (\ref{H3 bound 1}) as \begin{equation}\label{fourth term} \begin{split} &\left\|\nabla^{2}\psi\right\|_{L^{\infty}} \left\|\nabla^{3}\psi\right\|_{L^{2}} \left\|\nabla^{3}Z\right\|_{L^{2}} \leq C\left\|\Delta\psi\right\|^{\frac{1}{2}}_{L^{2}} \left\|\Delta^{2}\psi\right\|^{\frac{1}{2}}_{L^{2}} \left\|\nabla^{3}\psi\right\|_{L^{2}} \left\|\nabla^{3}Z\right\|_{L^{2}}\\ & \leq C\left\|\Delta\psi\right\|_{L^{2}} \left\|\Delta^{2}\psi\right\|_{L^{2}} \left\|\nabla^{3}Z\right\|_{L^{2}} \leq C\left\|\Delta\psi\right\|^{2}_{L^{2}} \left\|\Delta^{2}\psi\right\|^{2}_{L^{2}}+ \frac{1}{2}\left\|\nabla^{3}Z\right\|^{2}_{L^{2}}. \end{split} \end{equation} So, (\ref{H3 bound 2}) is replaced with \begin{eqnarray} \label{H3 bound 3} \frac{d}{dt}\left(\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\right) +\left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}+\left\|\nabla \Delta Z\right\|^{2}_{L^{2}} \leq CS(t)\left(\left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}+\left\|\nabla \Delta Z\right\|^{2}_{L^{2}} \right). \end{eqnarray} Then, since $S(t)\leq \epsilon_{1}$ for all $t>0$ by (\ref{Hall decay 1}), (\ref{H3 bound 3}) gives \begin{equation*} \label{Hall decay 2} \begin{split} &\left\|\nabla\Delta \psi(t)\right\|^{2}_{L^{2}}+\left\|\Delta Z(t)\right\|^{2}_{L^{2}}+(1-C\epsilon_{1})\int^{t}_{0}\left(\left\|\Delta^{2} \psi(s)\right\|^{2}_{L^{2}}+\left\|\nabla \Delta Z(s)\right\|^{2}_{L^{2}}\right)ds\\ &\leq \left\|\nabla\Delta \psi_{0}\right\|^{2}_{L^{2}}+\left\|\Delta Z_{0}\right\|^{2}_{L^{2}} \end{split} \end{equation*} for all $t>0$. This completes the first part of Theorem \ref{GWP}. \subsubsection{\bf Decay rates} \label{sec:3.3.2} To conclude this section and the proof of Theorem \ref{GWP}, we now prove the decay rates (\ref{GWP Decay 2}) in Theorem \ref{GWP}. We first write (\ref{H2 bound 2}) as \begin{eqnarray} \label{H2 bound 3} \frac{d}{dt} \left(\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}} \right) +(1-C\epsilon_{1})\left\|\nabla\Delta \psi\right\|^{2}_{L^{2}} +(1-C\epsilon_{1})\left\|\Delta Z\right\|^{2}_{L^{2}} \leq 0.
\end{eqnarray} By (\ref{inequality 1}) and (\ref{Weak decay}), we have \begin{equation*} \begin{split} &\left\|\Delta \psi\right\|^{4}_{L^{2}}\leq C\left\|\nabla \psi\right\|^{2}_{L^{2}}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}} \leq \frac{\mathcal{E}_{0}}{1+t} \left\|\nabla \Delta \psi\right\|^{2}_{L^{2}},\\ & \left\|\nabla Z\right\|^{4}_{L^{2}}\leq C \left\|Z\right\|^{2}_{L^{2}}\left\|\Delta Z\right\|^{2}_{L^{2}}\leq \frac{\mathcal{E}_{0}}{1+t}\left\|\Delta Z\right\|^{2}_{L^{2}}. \end{split} \end{equation*} Then, (\ref{H2 bound 3}) becomes \[ \frac{d}{dt} \left(\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}} \right) +\mathcal{E}_{0}(1+t)\left(\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}} \right)^{2} \leq 0. \] By solving this ODE, we derive the following inequality for $t>0$: \[ \label{decay 1} \left\|\Delta \psi(t)\right\|^{2}_{L^{2}}+\left\|\nabla Z(t)\right\|^{2}_{L^{2}}\leq \frac{2\left\|\Delta \psi_{0}\right\|^{2}_{L^{2}}+ 2\left\|\nabla Z_{0}\right\|^{2}_{L^{2}}}{2+\mathcal{E}_{0}\big(\left\|\Delta \psi_{0}\right\|^{2}_{L^{2}}+ \left\|\nabla Z_{0}\right\|^{2}_{L^{2}}\big)(1+t)^{2}}. \] We next write (\ref{H3 bound 3}) as \[ \frac{d}{dt} \left(\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}} \right) +\left(1-C\epsilon_{1}\right)\left\|\Delta^{2} \psi\right\|^{2}_{L^{2}} +\left(1-C\epsilon_{1}\right)\left\|\nabla \Delta Z\right\|^{2}_{L^{2}} \leq 0. \] By (\ref{inequality 1}) and (\ref{Weak decay}), we have \begin{equation*} \begin{split} &\left\|\nabla \Delta \psi\right\|^{3}_{L^{2}} \leq C\left\|\nabla \psi\right\|_{L^{2}}\left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}\leq \frac{\mathcal{E}_{0}}{\sqrt{1+t}}\left\|\Delta^{2} \psi\right\|^{2}_{L^{2}},\\ & \left\|\Delta Z\right\|^{3}_{L^{2}}\leq C \left\|Z\right\|_{L^{2}}\left\|\nabla \Delta Z\right\|^{2}_{L^{2}} \leq \frac{\mathcal{E}_{0}}{\sqrt{1+t}}\left\|\nabla \Delta Z\right\|^{2}_{L^{2}}. \end{split} \end{equation*} So, we obtain \[ \frac{d}{dt} \left(\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}} \right) +\mathcal{E}_{0}\sqrt{1+t}\left(\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}} \right)^{\frac{3}{2}} \leq 0. \] From this, we derive the following inequality: \[ \label{decay 2} \left\|\nabla \Delta \psi(t)\right\|^{2}_{L^{2}}+\left\|\Delta Z(t)\right\|^{2}_{L^{2}}\leq \frac{36\left\|\nabla \Delta \psi_{0}\right\|^{2}_{L^{2}}+36\left\|\Delta Z_{0}\right\|^{2}_{L^{2}}}{\big(6+\mathcal{E}_{0}\sqrt{\left\|\nabla \Delta \psi_{0}\right\|^{2}_{L^{2}}+\left\|\Delta Z_{0}\right\|^{2}_{L^{2}}}(1+t)^{\frac{3}{2}}\big)^{2}}, \] which concludes the proof of Theorem \ref{GWP}. \section{Proof of Theorem \ref{decay of psi}} In this section, we want to improve the decay rate of $\psi$ by using Theorem \ref{GWP}. We first recall the equation of $\psi$: \begin{eqnarray} \label{eq of psi only} \psi_{t}+\nabla^{\perp}Z\cdot \nabla \psi-\Delta \psi=0, \end{eqnarray} which is a dissipative transport equation with a fast decaying coefficient $\nabla^{\perp}Z$. We begin with the $L^{1}$ bound of $\psi$: \begin{eqnarray} \label{L1 bound} \left\|\psi(t)\right\|_{L^1}\leq \left\|\psi_{0}\right\|_{L^1}. \end{eqnarray} By applying the Fourier splitting method in \cite{Chae Schonbek}, we also obtain the $L^{2}$ bound: \begin{eqnarray} \label{L2 bound psi} \left\|\psi(t)\right\|_{L^2}\leq \frac{\mathcal{E}_{0}}{\sqrt{1+t}}.
\end{eqnarray} We now test (\ref{eq of psi only}) against $\displaystyle \frac{\psi}{t}$. Then, we obtain \begin{equation}\label{psi t} \frac{1}{2}\frac{d}{dt}\frac{\left\|\psi\right\|^{2}_{L^{2}}}{t}+\frac{\left\|\psi\right\|^{2}_{L^{2}}}{2t^2}+\frac{\left\|\nabla \psi\right\|^{2}_{L^{2}}}{t}=0. \end{equation} By (\ref{inequality 2 a}) and (\ref{L1 bound}), we have \[ \left\|\psi\right\|^{4}_{L^{2}}\leq C \left\|\psi\right\|^{2}_{L^{1}}\left\|\nabla \psi\right\|^{2}_{L^{2}}\leq \mathcal{E}_{0} \left\|\nabla \psi\right\|^{2}_{L^{2}} \] and so (\ref{psi t}) can be replaced with \begin{eqnarray} \label{psi t 2} \frac{1}{2}\frac{d}{dt}\frac{\left\|\psi\right\|^{2}_{L^{2}}}{t}+\frac{t}{\mathcal{E}_{0}}\Big(\frac{\left\|\psi\right\|^{2}_{L^{2}}}{t}\Big)^{2}\leq 0. \end{eqnarray} On the other hand, testing (\ref{eq of psi only}) against $-\Delta \psi$, we have \[ \frac{1}{2}\frac{d}{dt}\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|\Delta \psi \right\|^{2}_{L^{2}}\leq \left\|\nabla Z\right\|_{L^4}\left\|\nabla \psi\right\|_{L^4}\left\|\Delta \psi\right\|_{L^{2}}\leq C\left\|\nabla Z\right\|^{4}_{L^4}\left\|\nabla \psi\right\|^{2}_{L^2}+\frac{1}{2}\left\|\Delta \psi\right\|^{2}_{L^{2}} \] and so we obtain \begin{eqnarray} \label{nabla psi t} \frac{1}{2}\frac{d}{dt}\left\|\nabla \psi\right\|^{2}_{L^{2}}+\frac{1}{2}\left\|\Delta \psi \right\|^{2}_{L^{2}}\leq C\left\|\nabla Z\right\|^{4}_{L^4}\left\|\nabla \psi\right\|^{2}_{L^2}. \end{eqnarray} By (\ref{inequality 2 a}) and (\ref{GWP Decay 2}), \[ \left\|\nabla Z(t)\right\|^{4}_{L^4}\leq C \left\|\nabla Z(t)\right\|^{2}_{L^2}\left\|\Delta Z(t)\right\|^{2}_{L^2}\leq \frac{\mathcal{E}_{0}}{(1+t)^{5}}. \] Taking $t$ sufficiently large, say $t>t_{0}$ for the rest of the proof of Theorem \ref{decay of psi}, and combining (\ref{psi t 2}) and (\ref{nabla psi t}), we obtain \[ \frac{1}{2}\frac{d}{dt}\Big(\frac{\left\|\psi\right\|^{2}_{L^{2}}}{t}+ \left\|\nabla \psi\right\|^{2}_{L^{2}}\Big)+\frac{1}{t}\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|\Delta \psi \right\|^{2}_{L^{2}} \leq 0. \] By (\ref{inequality 1}) and (\ref{L2 bound psi}), we have \[ \left\|\nabla \psi\right\|^{4}_{L^{2}}\leq C \left\|\psi\right\|^{2}_{L^{2}}\left\|\Delta \psi\right\|^{2}_{L^{2}}\leq \frac{\mathcal{E}_{0}}{1+t} \left\|\Delta \psi\right\|^{2}_{L^{2}}\leq \frac{\mathcal{E}_{0}}{t} \left\|\Delta \psi\right\|^{2}_{L^{2}} \] when $t>t_{0}$. So, we derive the following inequality: \begin{eqnarray} \label{t inequality} \frac{1}{2}\frac{d}{dt}\Big(\frac{\left\|\psi\right\|^{2}_{L^{2}}}{t}+ \left\|\nabla \psi\right\|^{2}_{L^{2}}\Big)+\frac{t}{\mathcal{E}_{0}} \Big(\frac{\left\|\psi\right\|^{2}_{L^{2}}}{t}+ \left\|\nabla \psi\right\|^{2}_{L^{2}}\Big)^{2}\leq 0. \end{eqnarray} We now solve this ODE to find \[ \frac{\left\|\psi(t)\right\|^{2}_{L^{2}}}{t}+ \left\|\nabla \psi(t)\right\|^{2}_{L^{2}}\leq \frac{\mathcal{E}_{0}\Big(\frac{\left\|\psi(t_{0})\right\|^{2}_{L^{2}}}{t_{0}}+ \left\|\nabla \psi(t_{0})\right\|^{2}_{L^{2}}\Big)}{\mathcal{E}_{0} + \Big(\frac{\left\|\psi(t_{0})\right\|^{2}_{L^{2}}}{t_{0}}+ \left\|\nabla \psi(t_{0})\right\|^{2}_{L^{2}}\Big)(t^{2}-t^{2}_{0})}. \] Since $ \left\|\psi(t)\right\|_{H^{1}}\leq \left\|\psi_{0}\right\|_{H^{1}}$ for all $t>0$ by Theorem \ref{GWP}, we obtain \begin{eqnarray} \label{decay 3} \left\|\nabla \psi(t)\right\|_{L^{2}}\leq \frac{\mathcal{E}_{0}}{1+t}.
\end{eqnarray} By modifying (\ref{t inequality}) with an extra factor of $t$ in the denominators, we have \begin{eqnarray} \label{t inequality 2} \frac{1}{2}\frac{d}{dt}\Big(\frac{\left\|\psi\right\|^{2}_{L^{2}}}{t^{2}}+ \frac{\left\|\nabla \psi\right\|^{2}_{L^{2}}}{t}\Big)+\frac{t^{2}}{\mathcal{E}_{0}} \Big(\frac{\left\|\psi\right\|^{2}_{L^{2}}}{t^{2}}+ \frac{\left\|\nabla \psi\right\|^{2}_{L^{2}}}{t}\Big)^{2}\leq 0. \end{eqnarray} We now test (\ref{eq of psi only}) against $\Delta^{2}\psi$. Then, we have \begin{equation*} \begin{split} \frac{1}{2}\frac{d}{dt}\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}&=\int \Delta \psi \Delta [\psi,Z]=\int \Delta \psi [\psi,\Delta Z]+2\int \Delta \psi \left([\psi_{x}, Z_{x}]+[\psi_{y}, Z_{y}]\right)\\ &=\text{(I)+(II)}. \end{split} \end{equation*} We first bound $\text{(I)}$ as follows \begin{equation*} \begin{split} \text{(I)}&\leq \left\|\Delta Z\right\|_{L^{2}}\left\|\nabla \psi\right\|_{L^{\infty}}\left\|\nabla \Delta \psi\right\|_{L^{2}}\leq \left\|\Delta Z\right\|^{2}_{L^{2}}\left\|\nabla \psi\right\|^{2}_{L^{\infty}}+\frac{1}{8}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}\\ &\leq C\left\|\Delta Z\right\|^{4}_{L^{2}}\left\|\nabla \psi\right\|^{2}_{L^{2}}+\frac{1}{4}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}\leq \frac{\mathcal{E}_{0}}{(1+t)^{4}}\left\|\nabla \psi\right\|^{2}_{L^{2}} +\frac{1}{4}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}. \end{split} \end{equation*} $\text{(II)}$ is bounded by \[ \begin{split} \text{(II)}&\leq C \left\|\Delta Z\right\|_{L^{2}}\left\|\Delta \psi\right\|^{2}_{L^{4}}\leq C\left\|\Delta Z\right\|^{2}_{L^{2}}\left\|\Delta \psi\right\|^{2}_{L^{2}}+\frac{1}{4}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}\\ &\leq \frac{\mathcal{E}_{0}}{(1+t)^{2}}\left\|\Delta \psi\right\|^{2}_{L^{2}} +\frac{1}{4}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}. \end{split} \] So, we obtain \begin{eqnarray} \label{Delta psi} \frac{d}{dt}\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}\leq \frac{\mathcal{E}_{0}}{t^{4}}\left\|\nabla \psi\right\|^{2}_{L^{2}}+\frac{\mathcal{E}_{0}}{t^{2}}\left\|\Delta \psi\right\|^{2}_{L^{2}} \end{eqnarray} when $t>t_{0}$. Then, (\ref{t inequality 2}) and (\ref{Delta psi}) give \[ \frac{d}{dt}\Big(\frac{\left\|\psi\right\|^{2}_{L^{2}}}{t^{2}}+ \frac{\left\|\nabla \psi\right\|^{2}_{L^{2}}}{t}\Big)+\mathcal{E}_{0}t^{2}\Big(\frac{\left\|\psi\right\|^{2}_{L^{2}}}{t^{2}}+ \frac{\left\|\nabla \psi\right\|^{2}_{L^{2}}}{t}\Big)^{2}+\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}\leq 0. \] By (\ref{inequality 1}) and (\ref{decay 3}), we have \[ \left\|\Delta \psi\right\|^{4}_{L^{2}}\leq C\left\|\nabla \psi\right\|^{2}_{L^{2}} \left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}\leq \frac{\mathcal{E}_{0}}{(1+t)^{2}}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}\leq \frac{\mathcal{E}_{0}}{t^{2}}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}} \] and we derive the following inequality for $t>t_{0}$: \begin{eqnarray} \label{after t0 2} \frac{d}{dt} \Big(\frac{\left\|\psi\right\|^{2}_{L^{2}}}{t^{2}}+ \frac{\left\|\nabla \psi\right\|^{2}_{L^{2}}}{t}+\left\|\Delta \psi\right\|^{2}_{L^{2}}\Big) +\mathcal{E}_{0}t^{2}\Big(\frac{\left\|\psi\right\|^{2}_{L^{2}}}{t^{2}}+ \frac{\left\|\nabla \psi\right\|^{2}_{L^{2}}}{t}+\left\|\Delta \psi\right\|^{2}_{L^{2}}\Big)^{2}\leq 0. \end{eqnarray} Let $\mathcal{I}_{0}=\frac{\left\|\psi(t_{0})\right\|^{2}_{L^{2}}}{t^{2}_{0}}+ \frac{\left\|\nabla \psi(t_{0})\right\|^{2}_{L^{2}}}{t_{0}}+\left\|\Delta \psi(t_{0})\right\|^{2}_{L^{2}}$.
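Before solving (\ref{after t0 2}), we record the elementary integration that will be used (a generic sketch: here $y$ stands for the nonnegative quantity inside the time derivative in (\ref{after t0 2}) and $a>0$ for the constant in front of $t^{2}$, with constants treated generically as elsewhere in the paper). If $y'(t)+a\,t^{2}y^{2}(t)\leq 0$ for $t>t_{0}$, then
\begin{equation*}
\frac{d}{dt}\Big(\frac{1}{y(t)}\Big)=-\frac{y'(t)}{y^{2}(t)}\geq a\,t^{2}, \qquad \text{so} \qquad \frac{1}{y(t)}\geq \frac{1}{y(t_{0})}+\frac{a}{3}\left(t^{3}-t^{3}_{0}\right), \qquad y(t)\leq \frac{y(t_{0})}{1+\frac{a}{3}\,y(t_{0})\left(t^{3}-t^{3}_{0}\right)}.
\end{equation*}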
By solving (\ref{after t0 2}), we have \begin{eqnarray} \label{after t0 3} \frac{\left\|\psi(t)\right\|^{2}_{L^{2}}}{t^{2}}+ \frac{\left\|\nabla \psi(t)\right\|^{2}_{L^{2}}}{t}+\left\|\Delta \psi(t)\right\|^{2}_{L^{2}}\leq \frac{3\mathcal{I}_{0}}{3+3(t^{3}-t^{3}_{0})\mathcal{I}_{0}}. \end{eqnarray} Since $ \left\|\psi(t)\right\|_{H^{2}}\leq \left\|\psi_{0}\right\|_{H^{2}}$ for all $t>0$ by Theorem \ref{GWP}, we obtain \[ \left\|\Delta \psi(t)\right\|_{L^{2}}\leq \frac{\mathcal{E}_{0}}{(1+t)^{3/2}}, \] which completes the proof of Theorem \ref{decay of psi}. \section{Proof of Theorem \ref{Asymptotics}} The purpose of this section is to establish the asymptotic behavior of $(\psi, Z)$ as $t\rightarrow \infty$. Let \[ \Gamma(t,x)=\frac{1}{4\pi t}e^{-\frac{|x|^{2}}{4t}} \] be the two dimensional heat kernel. We first notice that we have the $L^{p}$ estimates of $\Gamma$ in two dimensions: for $1\leq p\leq r\leq \infty$ \begin{equation} \label{decay rate of Gamma} \begin{split} &\left\|\Gamma(t) \ast f\right\|_{L^{r}}\leq C(p,r) t^{-\left(\frac{1}{p}-\frac{1}{r}\right)}\|f\|_{L^{p}},\\ &\left\|\nabla \Gamma(t) \ast f\right\|_{L^{r}}\leq C(p,r) t^{-\left(\frac{1}{p}-\frac{1}{r}\right)-\frac{1}{2}}\|f\|_{L^{p}},\\ & \left\|\nabla^{2} \Gamma(t) \ast f\right\|_{L^{r}}\leq C(p,r) t^{-\left(\frac{1}{p}-\frac{1}{r}\right)-1}\|f\|_{L^{p}}, \end{split} \end{equation} where $\ast$ is the convolution in the space variables. We also observe that constant multiples of $\Gamma$ are solutions of (\ref{Hall equation in 2D}) because $\Gamma$ and $\Gamma_{t}$ are radial functions and so \begin{eqnarray} \label{orthogonality} \nabla^{\perp}\Gamma\cdot \nabla \Gamma=0, \quad \nabla^{\perp}\Gamma\cdot \nabla \Delta\Gamma=\nabla^{\perp}\Gamma\cdot \nabla \Gamma_{t}=0. \end{eqnarray} We are now in a position to prove Theorem \ref{Asymptotics}. Let $\widetilde{\psi}=\psi-\gamma \Gamma$ and $\widetilde{Z}=Z-\eta\Gamma$, where $\gamma$ and $\eta$ are defined in (\ref{L1 average c}). By using (\ref{orthogonality}), we have \begin{eqnarray} \label{tilde equations} \widetilde{\psi}_{t}-\Delta \widetilde{\psi}=-\nabla^{\perp}Z\cdot \nabla \psi, \quad \widetilde{Z}_{t}-\Delta \widetilde{Z}=-\nabla^{\perp}\psi\cdot \nabla \Delta\psi. \end{eqnarray} So, there are two types of integral representations of $(\psi,Z)$, coming from (\ref{Hall equation in 2D}) and (\ref{tilde equations}): \begin{subequations} \label{no tilde integral equations} \begin{align} & \psi(t)=\Gamma(t) \ast \psi_{0}-\int^{t}_{0} \Gamma(t-s)\ast (\nabla^{\perp}Z\cdot \nabla \psi)(s)ds, \label{no tilde integral equations a}\\ & Z(t)=\Gamma(t) \ast Z_{0}-\int^{t}_{0} \Gamma(t-s)\ast (\nabla^{\perp}\psi\cdot \nabla \Delta\psi)(s)ds \label{no tilde integral equations b} \end{align} \end{subequations} and $\psi=\widetilde{\psi}+\gamma \Gamma$ and $Z=\widetilde{Z}+\eta\Gamma$ with \begin{subequations} \label{tilde integral equations} \begin{align} & \widetilde{\psi}(t)=\Gamma(t) \ast (\psi_{0}-\gamma\delta_{0})-\int^{t}_{0} \Gamma(t-s)\ast (\nabla^{\perp}Z\cdot \nabla \psi)(s)ds, \label{tilde integral equations a}\\ & \widetilde{Z}(t)=\Gamma(t) \ast (Z_{0}-\eta\delta_{0})-\int^{t}_{0} \Gamma(t-s)\ast (\nabla^{\perp}\psi\cdot \nabla \Delta\psi)(s)ds, \label{tilde integral equations b} \end{align} \end{subequations} where $\delta_{0}$ is the Dirac delta function supported at the origin.
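In the estimates below we repeatedly move one derivative of the nonlinear terms onto the heat kernel. This relies on the following divergence-form identities, recorded here for the reader's convenience (an elementary observation; it only uses that $\dv \nabla^{\perp}f=0$ for any smooth $f$, whichever orientation convention is used for $\nabla^{\perp}$):
\begin{equation*}
\dv \left(\nabla^{\perp}Z\, \psi\right)=\psi\, \dv \nabla^{\perp}Z+\nabla^{\perp}Z\cdot \nabla \psi=\nabla^{\perp}Z\cdot \nabla \psi, \qquad \dv \left(\nabla^{\perp}\psi\, \Delta\psi\right)=\nabla^{\perp}\psi\cdot \nabla \Delta\psi.
\end{equation*}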
Since the time integrals of (\ref{no tilde integral equations}) and (\ref{tilde integral equations}) are the same, the only differences in the asymptotic behaviors are given by the linear parts. In particular, we need (\ref{L1 average c}) to handle the linear part of (\ref{tilde integral equations}). Here we estimate $(\widetilde{\psi}, \widetilde{Z})$, which also gives the estimates for $(\psi,Z)$. We now estimate $\widetilde{\psi}$ in $L^{\infty}$ with $\nabla^{\perp}Z\cdot \nabla \psi=\dv \left(\nabla^{\perp}Z \psi\right)$: \begin{equation*} \begin{split} \left\|\widetilde{\psi}(t)\right\|_{L^{\infty}}& \leq \left\|\Gamma(t) \ast (\psi_{0}-\gamma\delta_{0})\right\|_{L^{\infty}}+\int^{\frac{t}{2}}_{0} \left\|\dv \Gamma(t-s)\ast (\nabla^{\perp}Z \psi)(s)\right\|_{L^{\infty}}ds\\ &+ \int^{t}_{\frac{t}{2}} \left\|\dv \Gamma(t-s)\ast (\nabla^{\perp}Z \psi)(s)\right\|_{L^{\infty}}ds=\text{(I)}+\text{(II)}+\text{(III)}. \end{split} \end{equation*} We begin with $\text{(I)}$: \begin{equation*} \begin{split} \text{(I)}&=\left\|\int_{\mathbb{R}^{2}}\left(\Gamma(t, x-y)-\Gamma(t,x)\right) \psi_{0}(y)dy \right\|_{L^{\infty}}\\ &=\left\|\int_{\mathbb{R}^{2}}\int^{1}_{0}\nabla \Gamma(t, x-\theta y)\cdot y \psi_{0}(y)d\theta dy \right\|_{L^{\infty}} \leq \left\|\nabla \Gamma(t)\right\|_{L^{\infty}} \left\|\langle y\rangle \psi_{0}\right\|_{L^{1}}\leq \frac{\mathcal{E}_{0}}{t^{\frac{3}{2}}}. \end{split} \end{equation*} To bound $\text{(II)}$, we use Theorem \ref{GWP}, (\ref{L2 bound psi}), and (\ref{decay rate of Gamma}): \begin{equation*} \begin{split} \text{(II)}&\leq C\int^{\frac{t}{2}}_{0}(t-s)^{-\frac{3}{2}} \left\|\nabla^{\perp}Z(s)\psi(s)\right\|_{L^{1}}ds \leq C \int^{\frac{t}{2}}_{0}(t-s)^{-\frac{3}{2}}\left\|\nabla Z(s)\right\|_{L^{2}}\left\|\psi(s)\right\|_{L^{2}}ds \\ &\leq \frac{\mathcal{E}_{0}}{t^{3/2}} \int^{\frac{t}{2}}_{0} \frac{1}{(s+1)\sqrt{s+1}}ds \leq \frac{\mathcal{E}_{0}}{t^{\frac{3}{2}}}. \end{split} \end{equation*} We also bound $\text{(III)}$ by using Theorem \ref{GWP}, Theorem \ref{decay of psi}, (\ref{L2 bound psi}), and (\ref{decay rate of Gamma}): \begin{equation*} \begin{split} \text{(III)}&\leq C\int^{t}_{\frac{t}{2}}(t-s)^{-\frac{5}{6}} \left\|\nabla^{\perp}Z(s)\psi(s)\right\|_{L^{3}}ds \leq C \int^{t}_{\frac{t}{2}}(t-s)^{-\frac{5}{6}}\left\|\nabla Z(s)\right\|_{L^{6}}\left\|\psi(s)\right\|_{L^{6}}ds \\ & \leq C\int^{t}_{\frac{t}{2}}(t-s)^{-\frac{5}{6}}\left\|\nabla Z(s)\right\|^{\frac{1}{3}}_{L^{2}} \left\|\Delta Z(s)\right\|^{\frac{2}{3}}_{L^{2}} \left\|\psi(s)\right\|^{\frac{1}{3}}_{L^{2}} \left\|\nabla \psi(s)\right\|^{\frac{2}{3}}_{L^{2}}ds \\ &\leq \mathcal{E}_{0} \int^{t}_{\frac{t}{2}}(t-s)^{-\frac{5}{6}} s^{-\frac{1}{6}}s^{-\frac{2}{3}}s^{-\frac{1}{3}}s^{-1}ds \leq \frac{\mathcal{E}_{0}}{t^{2}}. \end{split} \end{equation*} Taking all these bounds into account, we find two types of asymptotic behaviors of $\psi$: \begin{equation*} \begin{split} & \psi(t,x)=\gamma\Gamma(t,x)+\widetilde{\psi}(t,x)=\gamma\Gamma(t,x) +O(t^{-3/2}),\\ &\psi(t,x)=\gamma\Gamma(t,x)+\widetilde{\psi}(t,x)=\Gamma(t)\ast \psi_{0} +O(t^{-3/2}).
\end{split} \end{equation*} We now derive the same kind of estimates for $\widetilde{Z}$ in $L^{\infty}$ with $\nabla^{\perp}\psi\cdot \nabla \Delta \psi=\dv \left(\nabla^{\perp}\psi \Delta \psi\right)$: \begin{equation*} \begin{split} \left\|\widetilde{Z}(t)\right\|_{L^{\infty}}& \leq \left\|\Gamma(t) \ast (Z_{0}-\eta\delta_{0})\right\|_{L^{\infty}}+\int^{\frac{t}{2}}_{0} \left\|\dv \Gamma(t-s)\ast (\nabla^{\perp}\psi \Delta \psi)(s)\right\|_{L^{\infty}}ds\\ &+ \int^{t}_{\frac{t}{2}} \left\|\dv \Gamma(t-s)\ast (\nabla^{\perp}\psi \Delta \psi)(s)\right\|_{L^{\infty}}ds=\text{(IV)}+\text{(V)}+\text{(VI)}. \end{split} \end{equation*} $\text{(IV)}$ is bounded exactly as $\text{(I)}$: \[ \text{(IV)} \leq \left\|\nabla \Gamma(t)\right\|_{L^{\infty}} \left\|\langle y\rangle Z_{0}\right\|_{L^{1}}\leq \frac{\mathcal{E}_{0}}{t^{\frac{3}{2}}}. \] Before bounding $\text{(V)}$, we rewrite $\dv \Gamma(t-s)\ast (\nabla^{\perp}\psi \Delta \psi)$ as \begin{equation*} \begin{split} &\dv \Gamma(t-s)\ast (\nabla^{\perp}\psi \Delta \psi)(x)\\ &=-\int \partial_{1}\Gamma(y-x) \Delta \psi(y)\partial_{2}\psi(y)dy+ \int \partial_{2}\Gamma(y-x) \Delta \psi(y)\partial_{1}\psi(y)dy\\ & =\int \partial_{1}\partial_{k}\Gamma(y-x) \partial_{k}\psi(y)\partial_{2}\psi(y)dy-\int \partial_{2}\partial_{k}\Gamma(y-x) \partial_{k}\psi(y)\partial_{1}\psi(y)dy\\ &+ \int \partial_{1}\Gamma(y-x) \partial_{k} \psi(y)\partial_{2}\partial_{k}\psi(y)dy - \int \partial_{2}\Gamma(y-x) \partial_{k} \psi(y)\partial_{1}\partial_{k}\psi(y)dy. \end{split} \end{equation*} We then integrate the last two terms by parts \begin{equation*} \begin{split} &\int \partial_{1}\Gamma(y-x) \partial_{k} \psi(y)\partial_{2}\partial_{k}\psi(y)dy - \int \partial_{2}\Gamma(y-x) \partial_{k} \psi(y)\partial_{1}\partial_{k}\psi(y)dy\\ &=-\int \partial_{1}\partial_{2}\Gamma(y-x) \partial_{k} \psi(y)\partial_{k}\psi(y)dy-\int \partial_{1}\Gamma(y-x) \partial_{2}\partial_{k} \psi(y)\partial_{k}\psi(y)dy\\ &+\int \partial_{1}\partial_{2}\Gamma(y-x) \partial_{k} \psi(y)\partial_{k}\psi(y)dy+\int \partial_{2}\Gamma(y-x) \partial_{1}\partial_{k} \psi(y)\partial_{k}\psi(y)dy \end{split} \end{equation*} which gives \[ \int \partial_{1}\Gamma(y-x) \partial_{k} \psi(y)\partial_{2}\partial_{k}\psi(y)dy - \int \partial_{2}\Gamma(y-x) \partial_{k} \psi(y)\partial_{1}\partial_{k}\psi(y)dy=0 \] and so we obtain \[ \begin{split} \dv \Gamma(t-s)\ast (\nabla^{\perp}\psi \Delta \psi)(x) &=\int \partial_{1}\partial_{k}\Gamma(y-x) \partial_{k}\psi(y)\partial_{2}\psi(y)dy\\ &-\int \partial_{2}\partial_{k}\Gamma(y-x) \partial_{k}\psi(y)\partial_{1}\psi(y)dy. \end{split} \] We now bound $\text{(V)}$ using Theorem \ref{decay of psi} and (\ref{decay rate of Gamma}): \[ \begin{split} \text{(V)}&\leq C\int^{\frac{t}{2}}_{0}(t-s)^{-2} \left\|\nabla \psi(s)\nabla \psi(s)\right\|_{L^{1}}ds \leq C \int^{\frac{t}{2}}_{0}(t-s)^{-2}\left\|\nabla \psi(s)\right\|^{2}_{L^{2}}ds \\ &\leq \frac{\mathcal{E}_{0}}{t^{2}} \int^{\frac{t}{2}}_{0} \frac{1}{(s+1)^{2}}ds \leq \frac{\mathcal{E}_{0}}{t^{2}}.
\end{split} \] We finally bound $\text{(VI)}$ using Theorem \ref{GWP}, Theorem \ref{decay of psi}, and (\ref{decay rate of Gamma}): \begin{equation*} \begin{split} \text{(VI)}&\leq C\int^{t}_{\frac{t}{2}}(t-s)^{-\frac{5}{6}} \left\|\nabla^{\perp}\psi(s)\Delta\psi(s)\right\|_{L^{3}}ds \leq C \int^{t}_{\frac{t}{2}}(t-s)^{-\frac{5}{6}}\left\|\nabla \psi(s)\right\|_{L^{6}}\left\|\Delta \psi(s)\right\|_{L^{6}}ds \\ & \leq C \int^{t}_{\frac{t}{2}}(t-s)^{-\frac{5}{6}}\left\|\nabla \psi(s)\right\|^{\frac{1}{3}}_{L^{2}}\left\|\Delta \psi(s)\right\|_{L^{2}}\left\|\nabla \Delta \psi(s)\right\|^{\frac{2}{3}}_{L^{2}}ds\\ &\leq \mathcal{E}_{0}\int^{t}_{\frac{t}{2}}(t-s)^{-\frac{5}{6}} s^{-\frac{1}{3}}s^{-\frac{3}{2}} s^{-1}ds \leq \frac{\mathcal{E}_{0}}{t^{3}}. \end{split} \end{equation*} Taking all these bounds into account, we also obtain two types of asymptotic behaviors of $Z$: \begin{equation*} \begin{split} &Z(t,x)=\eta\Gamma(t,x)+\widetilde{Z}(t,x)=\eta\Gamma(t,x) +O(t^{-\frac{3}{2}}),\\ & Z(t,x)=\eta\Gamma(t,x)+\widetilde{Z}(t,x)=\Gamma(t)\ast Z_{0} +O(t^{-2}). \end{split} \end{equation*} \section{Proof of Theorem \ref{perturbation theorem 1} and Theorem \ref{perturbation theorem 2}} \label{sec:4} This section is devoted to proving the global existence and the uniqueness of solutions of (\ref{Hall equation in 2D}) around harmonic functions. The analysis here is very close to the one in Section \ref{sec:3.3}, but we will take a different kind of smallness condition, and the presence of the harmonic functions requires a bit more computation. \subsection{Proof of Theorem \ref{perturbation theorem 1}} \label{sec:4.1} We recall the equations of $\rho$ and $Z$: \begin{subequations}\label{new perturbation equation 11} \begin{align} &\rho_{t}-\Delta \rho=[\rho,Z]+ [\overline{\psi},Z], \label{new perturbation equation 11 a}\\ &Z_{t} -\Delta Z=[\Delta \rho,\rho]+ [\Delta \rho, \overline{\psi}]. \label{new perturbation equation 11 b} \end{align} \end{subequations} \subsubsection{\bf A priori estimates} \label{sec:5.1.1} Multiplying (\ref{new perturbation equation 11 a}) by $-\Delta \rho$ and (\ref{new perturbation equation 11 b}) by $Z$, integrating over $\mathbb{R}^{2}$, and using (\ref{commutator e}), we have \begin{eqnarray} \label{perp 1} \frac{d}{dt}\left(\left\|\nabla \rho\right\|^{2}_{L^{2}}+\left\|Z\right\|^{2}_{L^{2}}\right)+2\left\|\Delta \rho\right\|^{2}_{L^{2}}+2\left\|\nabla Z\right\|^{2}_{L^{2}}=0. \end{eqnarray} We next multiply (\ref{new perturbation equation 11 a}) by $\Delta^{2} \rho$, (\ref{new perturbation equation 11 b}) by $-\Delta Z$, and integrate over $\mathbb{R}^{2}$ to get \begin{equation*} \begin{split} &\frac{1}{2}\frac{d}{dt}\left(\left\|\Delta \rho\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}}\right) +\left\|\nabla \Delta \rho\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}} \\ &= \int \Delta^{2} \rho [\rho,Z] -\int\Delta Z[\Delta \rho,\rho]+ \int \Delta^{2} \rho [\overline{\psi},Z] -\int\Delta Z[\Delta \rho,\overline{\psi}]=\text{(I)+(II)+(III)+(IV)}. \end{split} \end{equation*} Treating $\text{(I)+(II)}$ as in (\ref{H2 bound 1}), with $1/4$ in place of $1/2$, we have \begin{eqnarray} \label{perp new dd} \text{(I)+(II)}\leq C\left\|\Delta \rho\right\|^{2}_{L^{2}}\left\|\nabla\Delta \rho\right\|^{2}_{L^{2}} +\frac{1}{4}\left\|\Delta Z\right\|^{2}_{L^{2}}.
\end{eqnarray} Since \begin{equation*} \begin{split} \text{(III)}+ \text{(IV)}&=2\int\Delta \rho\left([\overline{\psi}_{x},Z_{x}]+[\overline{\psi}_{y},Z_{y}]\right)\leq C\int\left|\nabla^{2}\overline{\psi}\right| \left|\nabla^{2}Z\right| \left|\nabla^{2}\rho\right| \\ &\leq C\left\|\nabla^{2}\overline{\psi}\right\|^{2}_{L^{\infty}}\left\|\Delta \rho\right\|^{2}_{L^{2}}+\frac{1}{4}\left\|\Delta Z\right\|^{2}_{L^{2}}, \end{split} \end{equation*} we obtain \begin{eqnarray} \label{perp 2} \frac{d}{dt}\left(\left\|\Delta \rho\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}}\right) +\left\|\nabla \Delta \rho\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}} \leq C\left(\left\|\nabla\Delta \rho\right\|^{2}_{L^{2}}+\left\|\nabla^{2}\overline{\psi}\right\|^{2}_{L^{\infty}}\right)\left\|\Delta \rho\right\|^{2}_{L^{2}}. \end{eqnarray} We finally multiply (\ref{new perturbation equation 11 a}) by $-\Delta^{3} \rho$, (\ref{new perturbation equation 11 b}) by $\Delta^{2} Z$, and integrate over $\mathbb{R}^{2}$: \begin{equation*} \begin{split} &\frac{1}{2}\frac{d}{dt} \left(\left\|\nabla \Delta \rho\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\right) + \left\|\Delta^{2} \rho\right\|^{2}_{L^{2}}+\left\|\nabla \Delta Z\right\|^{2}_{L^{2}} \\ &= -\int \Delta^{3} \rho [\rho,Z] +\int\Delta^{2} Z[\Delta \rho,\rho] -\int \Delta^{3} \rho [\overline{\psi},Z] +\int\Delta^{2} Z[\Delta \rho,\overline{\psi}]=\text{(V)+(VI)+(VII)+(VIII)}. \end{split} \end{equation*} By following (\ref{third term}) and (\ref{fourth term}) in the proof of Theorem \ref{GWP} with $\frac{1}{2}$ replaced by $\frac{1}{4}$, we have \begin{eqnarray} \label{perp new dddd} \text{(V)+(VI)}\leq C\left\|\nabla Z\right\|^{2}_{L^{2}} \left\|\nabla \Delta Z\right\|^{2}_{L^{2}} +C\left\|\Delta \rho\right\|^{2}_{L^{2}} \left\|\Delta^{2} \rho\right\|^{2}_{L^{2}}+\frac{1}{4}\left(\left\|\Delta^{2} \rho\right\|^{2}_{L^{2}}+\left\|\nabla \Delta Z\right\|^{2}_{L^{2}}\right). \end{eqnarray} The last two terms combine as follows: \begin{equation*} \begin{split} \text{(VII)}+\text{(VIII)}&=-2\int\Delta \rho\left([\overline{\psi}_{x},\Delta Z_{x}]+[\overline{\psi}_{y},\Delta Z_{y}]\right) -2\int\Delta^{2} \rho\left([\overline{\psi}_{x},Z_{x}]+[\overline{\psi}_{y},Z_{y}]\right)\\ & \leq C\int\left|\nabla^{2}\overline{\psi}\right| \left|\nabla^{3}Z\right| \left|\nabla^{3}\rho\right|+C\int\left|\nabla^{2}\overline{\psi}\right| \left|\nabla^{2}Z\right| \left|\nabla^{4}\rho\right| \\ &\leq C\left\|\nabla^{2}\overline{\psi}\right\|^{2}_{L^{\infty}}\left(\left\|\nabla \Delta \rho\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\right)+\frac{1}{4}\left(\left\|\Delta^{2} \rho\right\|^{2}_{L^{2}}+\left\|\nabla \Delta Z\right\|^{2}_{L^{2}}\right). \end{split} \end{equation*} So, we arrive at \begin{equation} \label{perp 3} \begin{split} &\frac{d}{dt} \left(\left\|\nabla \Delta \rho\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\right) + \left\|\Delta^{2} \rho\right\|^{2}_{L^{2}}+\left\|\nabla \Delta Z\right\|^{2}_{L^{2}} \\ &\leq C\left\|\nabla Z\right\|^{2}_{L^{2}} \left\|\nabla \Delta Z\right\|^{2}_{L^{2}} +C\left\|\Delta \rho\right\|^{2}_{L^{2}} \left\|\Delta^{2} \rho\right\|^{2}_{L^{2}}+C\left\|\nabla^{2}\overline{\psi}\right\|^{2}_{L^{\infty}}\left(\left\|\nabla \Delta \rho\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\right). \end{split} \end{equation} Let $C_{1}=k\left\|\nabla^{2}\overline{\psi}\right\|^{2}_{L^{\infty}}$ with $k$ large enough, to be determined below.
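The role of this choice of $C_{1}$ is the following elementary absorption mechanism (a brief sketch; the precise requirements on $k$ are fixed in (\ref{size k 1}) below). By the definition of $C_{1}$,
\begin{equation*}
\left\|\nabla^{2}\overline{\psi}\right\|^{2}_{L^{\infty}}\left\|\Delta \rho\right\|^{2}_{L^{2}}=\frac{1}{k}\,C_{1}\left\|\Delta \rho\right\|^{2}_{L^{2}}, \qquad \left\|\nabla^{2}\overline{\psi}\right\|^{2}_{L^{\infty}}\left(\left\|\nabla \Delta \rho\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\right)=\frac{1}{k}\,C_{1}\left(\left\|\nabla \Delta \rho\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\right),
\end{equation*}
so, after multiplying (\ref{perp 1}) by $C_{1}^{2}$ and (\ref{perp 2}) by $C_{1}$, the terms involving $\overline{\psi}$ on the right-hand sides of (\ref{perp 2}) and (\ref{perp 3}) are dominated, for $k$ large enough, by the dissipation $2C_{1}^{2}\left\|\Delta \rho\right\|^{2}_{L^{2}}$ coming from (\ref{perp 1}) and by $C_{1}\left(\left\|\nabla \Delta \rho\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\right)$ coming from (\ref{perp 2}), respectively.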
By multiplying (\ref{perp 1}) by $C_{1}^{2}$ and (\ref{perp 2}) by $C_{1}$ and adding the resulting equations to (\ref{perp 3}), we have \begin{equation} \label{sum of equations 1} \begin{split} &\frac{d}{dt}\left(C_{1}^{2}F_{1}+C_{1}F_{2}+ F_{3}\right)+ 2C^{2}_{1}\left(\left\|\Delta \rho\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}}\right)+ C_{1}\left(\left\|\nabla \Delta \rho\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\right)+F_{4} \\ &\leq \widehat{C}C_{1}\left\|\nabla^{2}\overline{\psi}\right\|^{2}_{L^{\infty}} \left\|\Delta \rho\right\|^{2}_{L^{2}}+\widehat{C}\left\|\nabla^{2}\overline{\psi}\right\|^{2}_{L^{\infty}}\left(\left\|\nabla \Delta \rho\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}\right)\\ & +CC_{1}\left\|\Delta \rho\right\|^{2}_{L^{2}}\left\|\nabla \Delta \rho\right\|^{2}_{L^{2}}+C\left\|\nabla Z\right\|^{2}_{L^{2}} \left\|\nabla \Delta Z\right\|^{2}_{L^{2}} +C\left\|\Delta \rho\right\|^{2}_{L^{2}} \left\|\Delta^{2} \rho\right\|^{2}_{L^{2}}, \end{split} \end{equation} where $\widehat{C}$ denotes the two fixed constants used to determine $k$, and $F_{1}, F_{2}, F_{3}, F_{4}$ are defined in (\ref{several norm 1}). We now choose $k$ such that \begin{eqnarray} \label{size k 1} k>2\widehat{C}, \quad C_{1}>1. \end{eqnarray} Then, one can easily check that (\ref{sum of equations 1}) can be reduced to \[ \frac{d}{dt}\left(C_{1}^{2}F_{1}+C_{1}F_{2}+ F_{3}\right)+ C_{1}F_{3}+F_{4} \leq C\left(C_{1}^{2}F_{1}+C_{1}F_{2}+ F_{3}\right) \left(C_{1}F_{3}+F_{4}\right). \] If $C\epsilon_{2}=C\left(C_{1}^{2}F_{1}(0)+C_{1}F_{2}(0)+ F_{3}(0)\right)<1$, we obtain the following for all $t>0$: \[ C_{1}^{2}F_{1}(t)+C_{1}F_{2}(t)+ F_{3}(t)+ (1-C\epsilon_{2})\int^{t}_{0}\left(C_{1}F_{3}(s)+F_{4}(s)\right)ds \leq C_{1}^{2}F_{1}(0)+C_{1}F_{2}(0)+ F_{3}(0). \] \subsubsection{\bf Uniqueness} Suppose there are two solutions $(\rho_{1}, Z_{1})$ and $(\rho_{2}, Z_{2})$. Let $\rho=\rho_{1}-\rho_{2}$ and $Z=Z_{1}-Z_{2}$. Then, $(\rho,Z)$ satisfies the following equations \[ \begin{split} &\rho_{t}-\Delta \rho=[\rho_{1}, Z]+[\rho,Z_{2}]+[\overline{\psi},Z], \\ &Z_{t}-\Delta Z=[\Delta \rho, \rho_{1}]+[\Delta \rho_{2}, \rho]+[\Delta \rho, \overline{\psi}]. \end{split} \] Since \[ -\int \Delta \rho [\overline{\psi},Z] +\int Z [\Delta \rho, \overline{\psi}]=0, \] the proof of the uniqueness is identical to the one in Section \ref{sec:3.1.2}. \subsection{Proof of Theorem \ref{perturbation theorem 2}}\label{sec:4.2} We recall the equations of $\psi$ and $\omega$: \begin{subequations}\label{new perturbation equation 21} \begin{align} &\psi_{t}-\Delta \psi=[\psi,\omega]+ [\psi, \overline{Z}], \label{new perturbation equation 21 a}\\ &\omega_{t} -\Delta \omega=[\Delta \psi,\psi]. \label{new perturbation equation 21 b} \end{align} \end{subequations} \subsubsection{\bf A priori estimates} Compared with the proof of Theorem \ref{perturbation theorem 1}, we also need the $L^{2}$ bound of $\psi$ to complete the proof of Theorem \ref{perturbation theorem 2}. So, we first have \[ \label{new perp 1} \frac{d}{dt} \left\|\psi\right\|^{2}_{L^{2}} +2\left\|\nabla \psi\right\|^{2}_{L^{2}}=0. \] We next multiply (\ref{new perturbation equation 21 a}) by $-\Delta \psi$, (\ref{new perturbation equation 21 b}) by $\omega$, and integrate over $\mathbb{R}^{2}$.
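As in (\ref{L2 bound}), the contributions of the quadratic terms cancel at this level; indeed, by (\ref{commutator e}),
\begin{equation*}
-\int \Delta \psi[\psi,\omega] +\int \omega[\Delta \psi,\psi]=0,
\end{equation*}
so only the term involving $\overline{Z}$ will survive on the right-hand side below (a small verification recorded here for the reader's convenience).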
Then, \begin{equation*} \begin{split} &\frac{1}{2}\frac{d}{dt}\left(\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|\omega\right\|^{2}_{L^{2}}\right)+\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla \omega\right\|^{2}_{L^{2}}\\ &=-\int \Delta \psi[\psi,\overline{Z}] \leq C\left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}}\left\|\nabla \psi\right\|^{2}_{L^{2}}+\frac{1}{2}\left\|\Delta \psi\right\|^{2}_{L^{2}} \end{split} \end{equation*} and so we obtain \begin{eqnarray} \label{new perp 2} \frac{d}{dt}\left(\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|\omega\right\|^{2}_{L^{2}}\right)+\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla \omega\right\|^{2}_{L^{2}}\leq C\left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}}\left\|\nabla \psi\right\|^{2}_{L^{2}}. \end{eqnarray} We now multiply (\ref{new perturbation equation 21 a}) by $\Delta^{2} \psi$, (\ref{new perturbation equation 21 b}) by $-\Delta \omega$, and integrate over $\mathbb{R}^{2}$. Then, \begin{equation*} \begin{split} &\frac{1}{2}\frac{d}{dt}\left(\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla \omega\right\|^{2}_{L^{2}}\right)+\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta \omega\right\|^{2}_{L^{2}}\\ &=\int \Delta^{2} \psi[\psi,\omega] -\int \Delta \omega[\Delta \psi,\psi]+\int \Delta^{2} \psi[\psi,\overline{Z}] =\text{(I)+(II)+(III)}. \end{split} \end{equation*} As in (\ref{perp new dd}), we bound $\text{(I)+(II)}$ by \[ \text{(I)+(II)}\leq C\left\|\Delta \psi\right\|^{2}_{L^{2}}\left\|\nabla\Delta \psi\right\|^{2}_{L^{2}} +\frac{1}{2}\left\|\Delta \omega\right\|^{2}_{L^{2}}. \] And we bound $\text{(III)}$ as \[ \text{(III)}=\int \Delta \psi \Delta [\psi,\overline{Z}]=2\int \Delta \psi \left([\psi_{x},\overline{Z}_{x}]+ [\psi_{y},\overline{Z}_{y}]\right) \leq C\left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}}\left\|\Delta \psi\right\|^{2}_{L^{2}}+\frac{1}{2} \left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}. \] So, we obtain \begin{eqnarray} \label{new perp 3} \frac{d}{dt}\left(\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla \omega\right\|^{2}_{L^{2}}\right)+\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta \omega\right\|^{2}_{L^{2}}\leq C\left(\left\|\nabla^{3}\psi\right\|^{2}_{L^{2}} +\left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}}\right)\left\|\Delta \psi\right\|^{2}_{L^{2}}. \end{eqnarray} We finally multiply (\ref{new perturbation equation 21 a}) by $-\Delta^{3} \psi$, (\ref{new perturbation equation 21 b}) by $\Delta^{2} \omega$, and integrate over $\mathbb{R}^{2}$. Then, \begin{equation*} \begin{split} &\frac{1}{2}\frac{d}{dt}\left(\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta \omega\right\|^{2}_{L^{2}}\right)+\left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}+\left\|\nabla \Delta \omega\right\|^{2}_{L^{2}}\\ &=-\int \Delta^{3} \psi[\psi,\omega] +\int \Delta^{2} \omega[\Delta \psi,\psi]-\int \Delta^{3} \psi[\psi,\overline{Z}] =\text{(IV)+(V)+(VI)}. \end{split} \end{equation*} Similarly to (\ref{perp new dddd}), we bound $\text{(IV)+(V)}$ by \[ \text{(IV)+(V)}\leq C\left\|\nabla \omega\right\|^{2}_{L^{2}} \left\|\nabla \Delta \omega\right\|^{2}_{L^{2}} +C\left\|\Delta\psi\right\|^{2}_{L^{2}} \left\|\Delta^{2}\psi\right\|^{2}_{L^{2}}+\frac{1}{2}\left\|\Delta^{2}\psi\right\|^{2}_{L^{2}}+ \frac{1}{2}\left\|\nabla\Delta \omega\right\|^{2}_{L^{2}}.
\] To estimate $\text{(VI)}$, we use \[ \text{(VI)}=- \int \Delta^{2} \psi [\Delta\psi,\overline{Z}]-2\int \Delta^{2} \psi [\psi_{x},\overline{Z}_{x}]- 2\int \Delta^{2} \psi [\psi_{y},\overline{Z}_{y}]=\text{(VI)}_{(1)}+\text{(VI)}_{(2)}+\text{(VI)}_{(3)}. \] The first term is bounded as above: \[ \text{(VI)}_{(1)}\leq C\left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\frac{1}{4} \left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}. \] We next estimate $\text{(VI)}_{(2)}$ as \begin{equation*} \begin{split} \text{(VI)}_{(2)}&=-2\int \Delta \psi \Delta [\psi_{x},\overline{Z}_{x}]=-2\int \Delta \psi [\Delta \psi_{x},\overline{Z}_{x}]-4\int \Delta \psi [\psi_{xx},\overline{Z}_{xx}]-4\int \Delta \psi [\psi_{xy},\overline{Z}_{xy}]\\ &=\text{(VI)}_{(2a)}+\text{(VI)}_{(2b)}+\text{(VI)}_{(2c)}, \end{split} \end{equation*} where we use the fact that $\overline{Z}_{x}$ is harmonic. $\text{(VI)}_{(2a)}$ is bounded as above: \[ \text{(VI)}_{(2a)}\leq C\left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\frac{1}{16} \left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}. \] By integration by parts, \[ \text{(VI)}_{(2b)}=-4\int \overline{Z}_{xx}[\Delta \psi, \psi_{xx}]=4\int \overline{Z}_{x}[\Delta \psi, \psi_{xx}]_{x} \leq C\left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\frac{1}{16} \left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}. \] Similarly, we obtain \[ \text{(VI)}_{(2c)}\leq C\left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\frac{1}{16} \left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}. \] Since $\text{(VI)}_{(2)}$ and $\text{(VI)}_{(3)}$ are of the same form, we obtain \begin{equation*} \label{new perp 4} \begin{split} &\frac{d}{dt}\left(\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta \omega\right\|^{2}_{L^{2}}\right)+\left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}+\left\|\nabla \Delta \omega\right\|^{2}_{L^{2}} \\ &\leq C\left\|\nabla \omega\right\|^{2}_{L^{2}} \left\|\nabla \Delta \omega\right\|^{2}_{L^{2}} +C\left\|\Delta\psi\right\|^{2}_{L^{2}} \left\|\Delta^{2}\psi\right\|^{2}_{L^{2}}+C\left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}. \end{split} \end{equation*} Let $C_{2}=k\left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}}$ with $k$ large enough, to be determined below.
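Here an additional power of $C_{2}$, compared with Section \ref{sec:5.1.1}, is needed because (\ref{new perp 2}) produces the lower-order term $\left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}}\left\|\nabla \psi\right\|^{2}_{L^{2}}$, which must be absorbed by the $L^{2}$-level dissipation. A brief sketch of the absorption (with the precise conditions on $k$ fixed in (\ref{size k 2}) below): by the definition of $C_{2}$,
\begin{equation*}
\left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}}\left\|\nabla \psi\right\|^{2}_{L^{2}}=\frac{1}{k}\,C_{2}\left\|\nabla \psi\right\|^{2}_{L^{2}}, \qquad \left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}}\left\|\Delta \psi\right\|^{2}_{L^{2}}=\frac{1}{k}\,C_{2}\left\|\Delta \psi\right\|^{2}_{L^{2}}, \qquad \left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}=\frac{1}{k}\,C_{2}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}},
\end{equation*}
so, after weighting the $L^{2}$ identity for $\psi$ by $C_{2}^{3}$, (\ref{new perp 2}) by $C_{2}^{2}$, and (\ref{new perp 3}) by $C_{2}$, each of these terms is dominated, for $k$ large enough, by the dissipation coming from the estimate one level below.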
By following the argument in Section \ref{sec:5.1.1}, we obtain \begin{equation}\label{sum of equations 2} \begin{split} &\frac{d}{dt}\left(C_{2}^{3}\left\|\psi\right\|^{2}_{L^{2}}+ C_{2}^{2}K_{1}+C_{2}K_{2}+ K_{3}\right)+2C^{3}_{2}\left\|\nabla \psi\right\|^{2}_{L^{2}}+C^{2}_{2} \left(\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla \omega\right\|^{2}_{L^{2}}\right)\\ &+C_{2}\left(\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta \omega\right\|^{2}_{L^{2}}\right)+K_{4} \\ &\leq \widehat{C}C^{2}_{2}\left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}} \left\|\nabla \psi\right\|^{2}_{L^{2}}+ \widehat{C}C_{2}\left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}} \left\|\Delta \psi\right\|^{2}_{L^{2}} +\widehat{C}\left\|\nabla \overline{Z}\right\|^{2}_{L^{\infty}} \left\|\nabla \Delta \psi\right\|^{2}_{L^{2}} \\ &+CC_{2}\left\|\Delta \psi\right\|^{2}_{L^{2}}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+C\left\|\Delta \psi\right\|^{2}_{L^{2}}\left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}+C\left\|\nabla \omega \right\|^{2}_{L^{2}}\left\|\nabla \Delta \omega\right\|^{2}_{L^{2}}, \end{split} \end{equation} where $\widehat{C}$ denotes the two fixed constants used to determine $k$, and $K_{1}, K_{2}, K_{3}, K_{4}$ are defined in (\ref{several norm 2}). We now choose $k$ such that \begin{eqnarray} \label{size k 2} k>2\widehat{C}, \quad C_{2}>\max\{2\widehat{C}, 1\}. \end{eqnarray} Then, (\ref{sum of equations 2}) can be reduced to \begin{equation*} \begin{split} &\frac{d}{dt}\left(C_{2}^{3}\left\|\psi\right\|^{2}_{L^{2}}+ C_{2}^{2}K_{1}+C_{2}K_{2}+ K_{3}\right)+ C_{2}^{2}K_{2}+ C_{2}K_{3}+K_{4} \\ &\leq C C_{2}K_{2}K_{3}+CK_{2}K_{4}\leq C\left(C_{2}^{3}\left\|\psi\right\|^{2}_{L^{2}}+ C_{2}^{2}K_{1}+C_{2}K_{2}+ K_{3}\right) \left(C_{2}^{2}K_{2}+ C_{2}K_{3}+K_{4}\right). \end{split} \end{equation*} If $C\epsilon_{3}=C\left(C_{2}^{3}\left\|\psi(0)\right\|^{2}_{L^{2}}+ C_{2}^{2}K_{1}(0)+C_{2}K_{2}(0)+K_{3}(0)\right)<1$, we obtain \begin{equation*} \label{final bound per 2} \begin{split} &C_{2}^{3}\left\|\psi(t)\right\|^{2}_{L^{2}}+ C_{2}^{2}K_{1}(t)+C_{2}K_{2}(t)+K_{3}(t)+(1-C\epsilon_{3})\int^{t}_{0}\left(C_{2}^{2}K_{2}(s)+ C_{2}K_{3}(s)+K_{4}(s)\right)ds\\ &\leq C_{2}^{3}\left\|\psi(0)\right\|^{2}_{L^{2}}+ C_{2}^{2}K_{1}(0)+C_{2}K_{2}(0)+ K_{3}(0) \end{split} \end{equation*} for all $t>0$. \subsubsection{\bf Uniqueness} Suppose there are two solutions $(\psi_{1}, \omega_{1})$ and $(\psi_{2}, \omega_{2})$. Let $\psi=\psi_{1}-\psi_{2}$ and $\omega=\omega_{1}-\omega_{2}$. Then, $(\psi,\omega)$ satisfies the following equations \begin{subequations}\label{difference perturbation 1} \begin{align} & \psi_{t}-\Delta \psi=[\psi_{1}, \omega]+[\psi,\omega_{2}]+[\psi,\overline{Z}], \label{difference perturbation 1 a}\\ & \omega_{t}-\Delta \omega=[\Delta \psi, \psi_{1}]+[\Delta \psi_{2}, \psi]. \label{difference perturbation 1 b} \end{align} \end{subequations} Compared to (\ref{difference}), there is one extra term $[\psi,\overline{Z}]$. When we multiply (\ref{difference perturbation 1 a}) by $-\Delta \psi$, (\ref{difference perturbation 1 b}) by $\omega$, and integrate over $\mathbb{R}^{2}$, this term can be bounded by \[ -\int \Delta \psi[\psi,\overline{Z}] \leq \left\|\nabla\overline{Z}\right\|_{L^{\infty}}\left\|\nabla \psi\right\|_{L^{2}}\left\|\Delta \psi\right\|_{L^{2}} \leq C\left\|\nabla\overline{Z}\right\|^{2}_{L^{\infty}}\left\|\nabla \psi\right\|^{2}_{L^{2}}+\frac{1}{2}\left\|\Delta \psi\right\|^{2}_{L^{2}}.
\] After changing the constant in front of $\left\|\Delta \psi\right\|^{2}_{L^{2}}$ from $1$ to $\frac{1}{2}$, we can follow Section \ref{sec:3.1.2} for the remaining part and complete the proof of the uniqueness. \section{Proof of Theorem \ref{LWP Hall MHD} and Theorem \ref{GWP Hall MHD}}\label{sec:7} In this section, we deal with the $2\frac{1}{2}$-dimensional Hall MHD system. We first recall (\ref{coupled two half}): \begin{subequations} \label{coupled two half ddddd} \begin{align} & \psi_{t}-\Delta \psi =[\psi,Z]-[\psi,\phi], \label{coupled two half ddddd a}\\ & Z_{t}-\Delta Z=[\Delta \psi,\psi]-[Z,\phi]+[W,\psi],\label{coupled two half ddddd b}\\ & W_{t}-\Delta W=-[W,\phi]-[\psi,Z],\label{coupled two half ddddd c}\\ & \Delta \phi_{t}-\Delta^{2} \phi=-[\Delta \phi,\phi]+[\Delta \psi,\psi].\label{coupled two half ddddd d} \end{align} \end{subequations} Proceeding as in the proof of Theorem \ref{LWP}, we define the following norms: \begin{equation} \label{coupled norms} \begin{split} & P(t)=P_{1}(t)+P_{2}(t)+P_{3}(t), \quad Q(t)=P_{2}(t)+P_{3}(t)+P_{4}(t),\\ &P_{1}=\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|Z\right\|^{2}_{L^{2}}+\left\|\nabla \phi\right\|^{2}_{L^{2}}+\left\|W\right\|^{2}_{L^{2}}, \\ & P_{2}=\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}}+\left\|\Delta \phi\right\|^{2}_{L^{2}}+\left\|\nabla W\right\|^{2}_{L^{2}}, \\ &P_{3}=\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\left\|\Delta Z\right\|^{2}_{L^{2}}+\left\|\nabla \Delta \phi\right\|^{2}_{L^{2}}+\left\|\Delta W\right\|^{2}_{L^{2}}, \\ & P_{4}=\left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}+\left\|\nabla \Delta Z\right\|^{2}_{L^{2}}+\left\|\Delta^{2} \phi\right\|^{2}_{L^{2}}+\left\|\nabla \Delta W\right\|^{2}_{L^{2}}. \end{split} \end{equation} Recall that the Hall term is the most difficult term to handle in the Hall MHD system. So, while (\ref{coupled two half ddddd}) and the corresponding spaces (\ref{coupled norms}) look very complicated, the terms resulting from the Hall effect have already been handled in Section \ref{sec:3}, and the other terms can be treated similarly. Thus, rather than presenting the proof in great detail, we will present a proof with a few calculations omitted. \subsection{Proof of Theorem \ref{LWP Hall MHD}} \subsubsection{\bf A priori estimates} By multiplying (\ref{coupled two half ddddd a}), (\ref{coupled two half ddddd b}), (\ref{coupled two half ddddd c}), (\ref{coupled two half ddddd d}) by $-\Delta \psi$, $Z$, $W$, $-\phi$, respectively, and integrating over $\mathbb{R}^{2}$, we first obtain \begin{eqnarray}\label{L2 bound Hall MHD} \frac{1}{2}\frac{d}{dt}P_{1}+P_{2}=0. \end{eqnarray} By multiplying (\ref{coupled two half ddddd a}), (\ref{coupled two half ddddd b}), (\ref{coupled two half ddddd c}), (\ref{coupled two half ddddd d}) by $\Delta^{2} \psi$, $-\Delta Z$, $-\Delta W$, $\Delta \phi$, respectively, and integrating over $\mathbb{R}^{2}$, we have \begin{equation} \label{P2 P3} \begin{split} \frac{1}{2}\frac{d}{dt} P_{2}+P_{3}&=\int \Delta^{2} \psi [\psi,Z] -\int\Delta Z[\Delta \psi,\psi]- \int \Delta^{2}\psi[\psi,\phi] +\int\Delta Z[Z,\phi]\\ &-\int \Delta Z[W,\psi] +\int\Delta W [W,\phi]+\int\Delta W[\psi,Z]+\int \Delta \phi[\Delta \psi,\psi]\\ &=\text{I(a)+I(b)+I(c)+I(d)+I(e)+I(f)+I(g)+I(h)}.
\end{split} \end{equation} Treating $\text{I(a)+I(b)}$ as in (\ref{H2 bound 1}), with $\frac{1}{2}$ replaced by $\frac{1}{8}$, we have \[ \text{I(a)+I(b)}\leq C \left\|\Delta Z\right\|_{L^{2}} \left\|\Delta \psi\right\|_{L^{2}}\left\|\nabla\Delta \psi\right\|_{L^{2}} \leq C\left\|\Delta \psi\right\|^{2}_{L^{2}}\left\|\nabla\Delta \psi\right\|^{2}_{L^{2}} +\frac{1}{8}\left\|\Delta Z\right\|^{2}_{L^{2}}. \] After some reduction, $\text{I(c)+I(h)}$ is estimated as \begin{equation*} \label{Ich} \begin{split} \text{I(c)+I(h)}= &-2\int \Delta \psi \left([\psi_{x},\phi_{x}]+[\psi_{y},\phi_{y}] \right)\leq C\left\|\Delta \phi\right\|_{L^{2}} \left\|\nabla^{2}\psi\right\|^{2}_{L^{4}} \\ &\leq C \left\|\Delta \phi\right\|_{L^{2}} \left\|\Delta\psi\right\|_{L^{2}}\left\|\nabla\Delta \psi\right\|_{L^{2}} \leq C\left\|\Delta \phi\right\|^{2}_{L^{2}}\left\|\Delta \psi\right\|^{2}_{L^{2}} +\frac{1}{8}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}. \end{split} \end{equation*} By the definition of the commutator and after integrating by parts, we have \begin{equation*} \label{Id} \begin{split} \text{I(d)}+\text{I(f)}&=\int \partial_{k} Z\partial_{k}\nabla^{\perp}\phi \cdot \nabla Z +\int \partial_{k} W\partial_{k}\nabla^{\perp}\phi \cdot \nabla W \\ & \leq C\left\|\nabla Z\right\|^{2}_{L^{2}}\left\|\Delta \phi\right\|^{2}_{L^{2}} +\frac{1}{8}\left\|\Delta Z\right\|^{2}_{L^{2}}+C\left\|\nabla W\right\|^{2}_{L^{2}}\left\|\Delta \phi\right\|^{2}_{L^{2}} +\frac{1}{8}\left\|\Delta W\right\|^{2}_{L^{2}}. \end{split} \end{equation*} We finally bound $\text{I(e)+I(g)}$ as \begin{equation*}\label{Ieg} \begin{split} \text{I(e)+I(g)}&=\int Z[\Delta \psi,W]+2\int Z[\psi_{x}, W_{x}]+2\int Z[\psi_{y},W_{y}] \leq C \left\|\nabla Z\right\|_{L^{4}} \left\|\nabla W\right\|_{L^{4}}\left\|\Delta \psi\right\|_{L^{2}}\\ & \leq C \left\|\nabla Z\right\|^{2}_{L^{2}} \left\|\Delta \psi\right\|^{2}_{L^{2}}+C \left\|\nabla W\right\|^{2}_{L^{2}} \left\|\Delta \psi\right\|^{2}_{L^{2}}+\frac{1}{8} \left\|\Delta Z\right\|^{2}_{L^{2}} +\frac{1}{8} \left\|\Delta W\right\|^{2}_{L^{2}}. \end{split} \end{equation*} So, we obtain \begin{equation}\label{H1 bound} \begin{split} \frac{d}{dt} P_{2}+P_{3}&\leq C \left\|\Delta \phi\right\|^{2}_{L^{2}} \left\|\Delta \psi\right\|^{2}_{L^{2}}+C \left\|\nabla Z\right\|^{2}_{L^{2}} \left\|\Delta \psi\right\|^{2}_{L^{2}}+C \left\|\nabla W\right\|^{2}_{L^{2}} \left\|\Delta \psi\right\|^{2}_{L^{2}}\\ &+C \left\|\Delta \phi\right\|^{2}_{L^{2}} \left\|\nabla W\right\|^{2}_{L^{2}}+C \left\|\Delta \phi\right\|^{2}_{L^{2}} \left\|\nabla Z\right\|^{2}_{L^{2}} +C\left\|\Delta \psi\right\|^{2}_{L^{2}}\left\|\nabla\Delta \psi\right\|^{2}_{L^{2}}. \end{split} \end{equation} By multiplying (\ref{coupled two half ddddd a}), (\ref{coupled two half ddddd b}), (\ref{coupled two half ddddd c}), (\ref{coupled two half ddddd d}) by $-\Delta^{3} \psi$, $\Delta^{2} Z$, $\Delta^{2} W$, $-\Delta^{2} \phi$, respectively, and integrating over $\mathbb{R}^{2}$, we have \begin{equation}\label{P3 P4} \begin{split} \frac{1}{2}\frac{d}{dt}P_{3}+P_{4} &=-\int \Delta^{3} \psi [\psi,Z] +\int\Delta^{2} Z[\Delta \psi,\psi]+ \int \Delta^{3}\psi[\psi,\phi] -\int\Delta^{2} Z[Z,\phi]\\ &+\int \Delta^{2} Z[W,\psi] -\int\Delta^{2} W [W,\phi]-\int\Delta^{2} W[\psi,Z]-\int \Delta^{2} \phi[\Delta \psi,\psi]\\ &=\text{II(a)+II(b)+II(c)+II(d)+II(e)+II(f)+II(g)+II(h)}.
\end{split} \end{equation} By following the computations used for (\ref{H3 bound 2}), we have \[ \text{II(a)+II(b)} \leq C\left\|\Delta Z\right\|^{2}_{L^{2}} \left\|\nabla\Delta \psi\right\|^{2}_{L^{2}} +C\left\|\Delta \psi\right\|^{2}_{L^{2}} \left\|\nabla\Delta \psi\right\|^{4}_{L^{2}} +\frac{1}{6}\left\|\nabla\Delta Z\right\|^{2}_{L^{2}}+\frac{1}{4} \left\|\Delta^{2}\psi\right\|^{2}_{L^{2}}. \] We bound $\text{II(c)+II(h)}$ as follows \begin{equation*} \label{IIch} \begin{split} \text{II(c)+II(h)}&=\int \Delta^{2}\psi[\Delta \psi, \phi]+2\int \Delta^{2}\psi[\psi_{x}, \phi_{x}] +2\int \Delta^{2}\psi[\psi_{y}, \phi_{y}]\\ &+ 2\int \Delta\psi[\psi_{x}, \Delta\phi_{x}] +2\int \Delta\psi[\psi_{y}, \Delta\phi_{y}]\\ &=\int \partial_{k}\nabla^{\perp}\phi\cdot \nabla \Delta \psi \partial_{k}\Delta \psi+2\int \Delta^{2}\psi[\psi_{x}, \phi_{x}] +2\int \Delta^{2}\psi[\psi_{y}, \phi_{y}]\\ &+ 2\int \Delta\psi[\psi_{x}, \Delta\phi_{x}] +2\int \Delta\psi[\psi_{y}, \Delta\phi_{y}]\\ &\leq C \left\|\Delta \phi\right\|_{L^{2}}\left\|\nabla \Delta \psi\right\|^{2}_{L^{4}}+C \left\|\Delta \phi\right\|_{L^{4}}\left\| \Delta \phi\right\|_{L^{4}}\left\|\Delta^{2} \psi\right\|_{L^{2}}+C\left\|\Delta \phi\right\|^{2}_{L^{4}} \left\|\Delta^{2} \phi\right\|^{2}_{L^{2}}\\ & \leq C\left\|\Delta \phi\right\|^{2}_{L^{2}} \left\|\nabla \Delta \psi\right\|^{2}_{L^{2}} +C\left\|\Delta \phi\right\|^{2}_{L^{2}}\left\|\nabla\Delta \phi\right\|^{2}_{L^{2}}+C\left\|\Delta \psi\right\|^{2}_{L^{2}} \left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}\\ &+\frac{1}{4} \left\|\Delta^{2} \psi\right\|^{2}_{L^{2}} +\frac{1}{2} \left\|\Delta^{2} \phi\right\|^{2}_{L^{2}}. \end{split} \end{equation*} We also bound $\text{II(d)}$ as \begin{equation*} \label{IId} \begin{split} \text{II(d)}&=\int \Delta Z[Z,\Delta \phi]+2\int \Delta Z[Z_{x},\phi_{x}]+2\int \Delta Z[Z_{y},\phi_{y}] \\ & \leq C \left\|\Delta \phi\right\|_{L^{4}}\left\|\nabla Z\right\|_{L^{4}} \left\|\nabla \Delta Z\right\|_{L^{2}}+C \left\|\Delta \phi\right\|_{L^{2}}\left\|\Delta Z\right\|^{2}_{L^{4}} \\ & \leq C \left\|\Delta \phi\right\|^{2}_{L^{2}}\left\|\nabla\Delta \phi\right\|^{2}_{L^{2}}+C\left\|\nabla Z\right\|^{2}_{L^{2}} \left\|\Delta Z\right\|^{2}_{L^{2}} +C\left\|\Delta \phi\right\|^{2}_{L^{2}} \left\|\Delta Z\right\|^{2}_{L^{2}} +\frac{1}{6}\left\|\nabla \Delta Z\right\|^{2}_{L^{2}}. \end{split} \end{equation*} Similarly, we bound $\text{II(f)}$ as \[ \text{II(f)} \leq C \left\|\Delta \phi\right\|^{2}_{L^{2}}\left\|\nabla\Delta \phi\right\|^{2}_{L^{2}}+C\left\|\nabla W\right\|^{2}_{L^{2}} \left\|\Delta W\right\|^{2}_{L^{2}} +C\left\|\Delta \phi\right\|^{2}_{L^{2}} \left\|\Delta W\right\|^{2}_{L^{2}} +\frac{1}{4}\left\|\nabla \Delta W\right\|^{2}_{L^{2}}.
\] We finally bound $\text{II(e)+II(g)}$ as \begin{equation*}\label{IIeg} \begin{split} \text{II(e)+II(g)}&=\int Z[\Delta W,\Delta \psi]+2\int Z[\Delta W_{x}, \psi_{x}]+2\int Z[\Delta W_{y},\psi_{y}]+\int \Delta Z[W,\Delta \psi]\\ &+2\int \Delta Z[W_{x}, \psi_{x}] +2\int \Delta Z[W_{y}, \psi_{y}]\\ & \leq C \left\|\nabla Z\right\|_{L^{4}}\left\|\Delta \psi\right\|_{L^{4}}\left\|\nabla \Delta W\right\|_{L^{2}} +C \left\|\nabla W\right\|_{L^{4}}\left\|\Delta \psi\right\|_{L^{4}}\left\|\nabla \Delta Z\right\|_{L^{2}}\\ &\leq C \left\|\nabla Z\right\|^{2}_{L^{2}}\left\|\Delta Z\right\|^{2}_{L^{2}}+C \left\|\nabla W\right\|^{2}_{L^{2}}\left\|\Delta W\right\|^{2}_{L^{2}}+ C\left\|\Delta \psi\right\|^{2}_{L^{2}} \left\|\nabla \Delta \psi\right\|^{2}_{L^{2}} \\ &+\frac{1}{4} \left\|\nabla \Delta W\right\|^{2}_{L^{2}} +\frac{1}{6} \left\|\nabla \Delta Z\right\|^{2}_{L^{2}}. \end{split} \end{equation*} Collecting all the bounds, we derive \begin{equation}\label{H2 bound} \begin{split} \frac{d}{dt} P_{3}+P_{4}&\leq C\left\|\Delta \phi\right\|^{2}_{L^{2}} \left\|\nabla \Delta \psi\right\|^{2}_{L^{2}} +C\left\|\Delta \phi\right\|^{2}_{L^{2}}\left\|\nabla \Delta \phi\right\|^{2}_{L^{2}}+C\left\|\Delta \psi\right\|^{2}_{L^{2}} \left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}\\ &+C\left\|\Delta Z\right\|^{2}_{L^{2}} \left\|\nabla\Delta \psi\right\|^{2}_{L^{2}} +C \left\|\nabla W\right\|^{2}_{L^{2}}\left\|\Delta W\right\|^{2}_{L^{2}} +C\left\|\Delta \phi\right\|^{2}_{L^{2}} \left\|\Delta W\right\|^{2}_{L^{2}}\\ &+C\left\|\nabla Z\right\|^{2}_{L^{2}} \left\|\Delta Z\right\|^{2}_{L^{2}} +C\left\|\Delta \phi\right\|^{2}_{L^{2}} \left\|\Delta Z\right\|^{2}_{L^{2}}+C\left\|\Delta \psi\right\|^{2}_{L^{2}} \left\|\nabla\Delta \psi\right\|^{4}_{L^{2}}. \end{split} \end{equation} By (\ref{L2 bound Hall MHD}), (\ref{H1 bound}), and (\ref{H2 bound}), \begin{eqnarray} \label{full H3 d} \left(1+P(t)\right)'+Q(t) \leq C P^{2}(t)+CP^{3}(t)\leq C \left(1+P(t)\right)^{3} \end{eqnarray} from which we deduce \begin{eqnarray} \label{full H3 dd} P(t)\leq \sqrt{\frac{(1+P(0))^{2}}{1-2Ct(1+P(0))^{2}}}-1 \quad \text{for all} \ t\leq T_{\ast}<\frac{1}{2C (1+P(0))^{2}}. \end{eqnarray} Integrating (\ref{full H3 d}) and using (\ref{full H3 dd}), we finally derive \begin{eqnarray} \label{last bound Hall MHD} P(t) +\int^{t}_{0}Q(s)ds<\infty, \quad 0<t<T_{\ast}. \end{eqnarray} \subsubsection{\bf Uniqueness} Suppose there are two solutions $(\psi_{1}, Z_{1}, \phi_{1}, W_{1})$ and $(\psi_{2}, Z_{2}, \phi_{2}, W_{2})$. Let $\psi=\psi_{1}-\psi_{2}$, $Z=Z_{1}-Z_{2}$, $\phi=\phi_{1}-\phi_{2}$ and $W=W_{1}-W_{2}$. Then, $(\psi,Z, \phi, W)$ satisfies the following equations \[ \begin{split} & \psi_{t}-\Delta \psi =[\psi_{1},Z]+[\psi,Z_{2}]-[\psi_{1},\phi] -[\psi,\phi_{2}], \\ & Z_{t}-\Delta Z=[\Delta \psi,\psi_{1}]+[\Delta \psi_{2},\psi]-[Z_{1},\phi]-[Z,\phi_{2}]+[W_{1},\psi] +[W,\psi_{2}],\\ & W_{t}-\Delta W=-[W_{1},\phi] -[W,\phi_{2}]-[\psi_{1},Z] -[\psi,Z_{2}],\\ & \Delta \phi_{t}-\Delta^{2} \phi=-[\Delta \phi_{1},\phi] -[\Delta \phi,\phi_{2}]+[\Delta \psi_{1},\psi] +[\Delta \psi,\psi_{2}].
\end{split} \] From this, we obtain \begin{equation*} \begin{split} &\frac{1}{2}\frac{d}{dt}\left(\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|Z\right\|^{2}_{L^{2}}+\left\|\nabla \phi\right\|^{2}_{L^{2}}+\left\|W\right\|^{2}_{L^{2}}\right)+\left\|\Delta \psi\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}}+\left\|\Delta \phi\right\|^{2}_{L^{2}}+\left\|\nabla W\right\|^{2}_{L^{2}}\\ &=-\int \Delta \psi[\psi_{1},Z]-\int \Delta \psi[\psi,Z_{2}]+\int Z[\Delta \psi,\psi_{1}]+\int Z[\Delta \psi_{2},\psi]\\ &-\int Z[Z_{1},\phi]+\int Z[W_{1},\psi] +\int Z[W,\psi_{2}]-\int W[W_{1},\phi] -\int W[\psi_{1},Z] -\int W[\psi,Z_{2}]\\ &+\int \Delta \psi[\psi_{1},\phi] +\int \Delta \psi[\psi,\phi_{2}]+\int \phi[\Delta \phi,\phi_{2}] -\int \phi[\Delta \psi,\psi_{2}]-\int \phi[\Delta \psi_{1},\psi]. \end{split} \end{equation*} The first line on the right-hand side is bounded as (\ref{difference uniqueness}) \[ C\left(\left\|\nabla Z_{2}\right\|^{2}_{L^{2}} \left\|\Delta Z_{2}\right\|^{2}_{L^{2}} +\left\|\Delta \psi_{2}\right\|^{2}_{L^{2}} \left\|\nabla \Delta \psi_{2}\right\|^{2}_{L^{2}} \right)\left\|\nabla \psi\right\|^{2}_{L^{2}} +\frac{1}{3}\left\|\Delta \psi\right\|^{2}_{L^{2}}+ \left\|\nabla Z\right\|^{2}_{L^{2}} \] and the second line is bounded by \[ \begin{split} &C\left\|\nabla Z_{1}\right\|_{L^{\infty}}\left\|Z\right\|_{L^{2}} \left\|\nabla \phi\right\|_{L^{2}} +C\left\|\nabla W_{1}\right\|_{L^{\infty}}\left\|Z\right\|_{L^{2}} \left\|\nabla \psi\right\|_{L^{2}} +C\left\|\nabla \psi_{2}\right\|_{L^{\infty}}\left\|Z\right\|_{L^{2}} \left\|\nabla W\right\|_{L^{2}}\\ & +C\left\|\nabla W_{1}\right\|_{L^{\infty}}\left\|W\right\|_{L^{2}} \left\|\nabla \phi\right\|_{L^{2}} +C\left\|\nabla \psi_{1}\right\|_{L^{\infty}}\left\|W\right\|_{L^{2}} \left\|\nabla Z\right\|_{L^{2}} +C\left\|\nabla Z_{2}\right\|_{L^{\infty}}\left\|W\right\|_{L^{2}} \left\|\nabla \psi\right\|_{L^{2}}. \end{split} \] The third line except for the last one is bounded by \[ \begin{split} &C\left\|\nabla \psi_{1}\right\|_{L^{\infty}}\left\|\nabla \phi\right\|_{L^{2}} \left\|\Delta \psi\right\|_{L^{2}} +C\left\|\nabla \psi_{2}\right\|_{L^{\infty}}\left\|\nabla \psi\right\|_{L^{2}} \left\|\Delta \psi\right\|_{L^{2}} +C\left\|\nabla \phi_{1}\right\|_{L^{\infty}}\left\|\nabla \phi\right\|_{L^{2}} \left\|\Delta \phi\right\|_{L^{2}} \\ &+C\left\|\nabla \psi_{2}\right\|_{L^{\infty}}\left\|\nabla \phi\right\|_{L^{2}} \left\|\Delta \phi\right\|_{L^{2}} +C\left\|\Delta \psi_{1}\right\|_{L^{2}}\left\|\nabla \phi\right\|_{L^{4}} \left\|\nabla \psi\right\|_{L^{4}}\\ &\leq C\left\|\nabla \psi_{1}\right\|^{2}_{L^{\infty}}\left\|\nabla \phi\right\|^{2}_{L^{2}} +C\left\|\nabla \psi_{2}\right\|^{2}_{L^{\infty}}\left\|\nabla \psi\right\|^{2}_{L^{2}}+C\left\|\nabla \phi_{1}\right\|^{2}_{L^{\infty}}\left\|\nabla \phi\right\|^{2}_{L^{2}}+C\left\|\nabla \psi_{2}\right\|^{2}_{L^{\infty}}\left\|\nabla \phi\right\|^{2}_{L^{2}} \\ &+C \left\|\Delta \psi_{1}\right\|^{2}_{L^{2}}\left\|\nabla \phi\right\|^{2}_{L^{2}} +C \left\|\Delta \psi_{1}\right\|^{2}_{L^{2}}\left\|\nabla \psi\right\|^{2}_{L^{2}}+\frac{2}{3}\left\|\Delta \psi\right\|^{2}_{L^{2}}\left\|\Delta \phi\right\|^{2}_{L^{2}}. 
\end{split} \] Thus, we arrive at the following inequality: \[ \frac{d}{dt}\left(\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|Z\right\|^{2}_{L^{2}}+\left\|\nabla \phi\right\|^{2}_{L^{2}}+\left\|W\right\|^{2}_{L^{2}}\right)\leq C\mathcal{I}\left(\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|Z\right\|^{2}_{L^{2}}+\left\|\nabla \phi\right\|^{2}_{L^{2}}+\left\|W\right\|^{2}_{L^{2}}\right), \] where \[ \begin{split} \mathcal{I}&=\left\|\nabla Z_{2}\right\|^{2}_{L^{2}} \left\|\Delta Z_{2}\right\|^{2}_{L^{2}} +\left\|\Delta \psi_{2}\right\|^{2}_{L^{2}} \left\|\nabla \Delta \psi_{2}\right\|^{2}_{L^{2}}+\left\|\nabla Z_{1}\right\|_{L^{\infty}}+\left\|\nabla W_{1}\right\|_{L^{\infty}}+\left\|\nabla \psi_{1}\right\|_{L^{\infty}}\\ &+\left\|\nabla \psi_{2}\right\|_{L^{\infty}}+\left\|\nabla Z_{2}\right\|_{L^{\infty}}+\left\|\nabla \psi_{1}\right\|^{2}_{L^{\infty}} +\left\|\nabla \psi_{2}\right\|^{2}_{L^{\infty}}+\left\|\nabla \phi_{1}\right\|^{2}_{L^{\infty}}+\left\|\Delta \psi_{1}\right\|^{2}_{L^{2}}. \end{split} \] Since $\mathcal{I}$ is integrable in time by (\ref{last bound Hall MHD}), we conclude the uniqueness of solutions. \subsubsection{\bf Blow-up criterion} To find a blow-up criterion, we first use (\ref{blowup first}) to bound $\text{I(a)+I(b)}$ as \[ \begin{split} &\left|\int \Delta \psi \left([\psi_{x},Z_{x}]+[\psi_{y},Z_{y}] \right)\right| \leq C\left\|\nabla Z\right\|^{q}_{L^{p}} \left\|\Delta\psi\right\|^{2}_{L^{2}} +\epsilon\left\|\nabla\Delta\psi\right\|^{2}_{L^{2}}, \quad \frac{1}{p}+\frac{1}{q}=\frac{1}{2}. \end{split} \] So, we can rewrite (\ref{H1 bound}) as \begin{equation*} \begin{split} \frac{d}{dt}P_{2} +P_{3}&\leq C \left\|\Delta \phi\right\|^{2}_{L^{2}} \left\|\Delta \psi\right\|^{2}_{L^{2}}+C \left\|\nabla Z\right\|^{2}_{L^{2}} \left\|\Delta \psi\right\|^{2}_{L^{2}}+C \left\|\nabla W\right\|^{2}_{L^{2}} \left\|\Delta \psi\right\|^{2}_{L^{2}}+C \left\|\Delta \phi\right\|^{2}_{L^{2}} \left\|\nabla W\right\|^{2}_{L^{2}}\\ &+C \left\|\Delta \phi\right\|^{2}_{L^{2}} \left\|\nabla Z\right\|^{2}_{L^{2}} +C\left\|\nabla Z\right\|^{q}_{L^{p}} \left\|\Delta\psi\right\|^{2}_{L^{2}}. \end{split} \end{equation*} Integrating this in time with the aid of (\ref{L2 bound Hall MHD}), we obtain \begin{equation}\label{H1 bound New} \begin{split} & \left\|\Delta \psi(t)\right\|^{2}_{L^{2}}+\left\|\nabla Z(t)\right\|^{2}_{L^{2}}+\left\|\Delta \phi(t)\right\|^{2}_{L^{2}}+\left\|\nabla W(t)\right\|^{2}_{L^{2}}\\ &+\int^{t}_{0}\left(\left\|\nabla \Delta \psi(s)\right\|^{2}_{L^{2}}+\left\|\Delta Z(s)\right\|^{2}_{L^{2}}+\left\|\nabla \Delta \phi(s)\right\|^{2}_{L^{2}}+\left\|\Delta W(s)\right\|^{2}_{L^{2}}\right) ds\\ &\leq C\mathcal{E}_{0}\exp\left[C\int^{t}_{0}\left\|\nabla Z(s)\right\|^{q}_{L^{p}}ds\right]. \end{split} \end{equation} Using the idea in Section \ref{sec:3.1.3} of bounding (\ref{H3 bound 2}) by means of (\ref{blowup 2}), we bound (\ref{H2 bound}) as follows \[ \left\|\nabla\Delta \psi(t)\right\|^{2}_{L^{2}}+\left\|\Delta Z(t)\right\|^{2}_{L^{2}}+\left\|\nabla\Delta \phi(t)\right\|^{2}_{L^{2}}+\left\|\Delta W(t)\right\|^{2}_{L^{2}}\leq \mathcal{E}_{0}\exp \exp\left[CB(t)+CB^{2}(t)\right], \] where $B(t)$ is defined in (\ref{Blowup Hall equation}). This completes the proof of Theorem \ref{LWP Hall MHD}. \subsection{Proof of Theorem \ref{GWP Hall MHD}} To prove Theorem \ref{GWP Hall MHD}, we need to bound the quantities used in the proof of Theorem \ref{LWP Hall MHD} in different ways.
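We note in passing how (\ref{full H3 dd}) follows from (\ref{full H3 d}): writing $y(t)=1+P(t)$ and dropping the nonnegative term $Q(t)$, (\ref{full H3 d}) gives $y'\leq Cy^{3}$, so that
\[
\frac{d}{dt}\left(y^{-2}\right)=-2y^{-3}y'\geq -2C, \qquad y(t)^{-2}\geq y(0)^{-2}-2Ct,
\]
and hence
\[
1+P(t)=y(t)\leq \sqrt{\frac{(1+P(0))^{2}}{1-2Ct\,(1+P(0))^{2}}} \quad \text{as long as} \quad 2Ct\,(1+P(0))^{2}<1,
\]
which is exactly (\ref{full H3 dd}) together with the restriction $T_{\ast}<\frac{1}{2C(1+P(0))^{2}}$. In contrast, the argument below relies on the smallness of the initial data rather than on this finite--time comparison.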
We first bound each term on the right-hand side of (\ref{P2 P3}) as follows \begin{equation*}\label{Ibc New} \begin{split} \text{I(a)+I(b)}&\leq C \left\|\Delta Z\right\|_{L^{2}} \left\|\Delta \psi\right\|_{L^{2}}\left\|\nabla\Delta \psi\right\|_{L^{2}} \leq C\left\|\Delta \psi\right\|^{2}_{L^{2}}\left\|\nabla\Delta \psi\right\|^{2}_{L^{2}} +\frac{1}{8}\left\|\Delta Z\right\|^{2}_{L^{2}},\\ \text{I(c)+I(h)}&\leq C \left\|\nabla \phi\right\|^{2}_{L^{2}} \left\|\nabla \Delta \phi\right\|^{2}_{L^{2}}+C\left\|\nabla \psi\right\|^{2}_{L^{2}} \left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}+\frac{1}{8}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}},\\ \text{I(e)+I(g)}& \leq C \left\|Z\right\|^{2}_{L^{2}}\left\|\Delta Z\right\|^{2}_{L^{2}}+C \left\|W\right\|^{2}_{L^{2}}\left\|\Delta W\right\|^{2}_{L^{2}}+C \left\|\nabla \psi\right\|^{2}_{L^{2}}\left\|\nabla \Delta \psi\right\|^{2}_{L^{2}}\\ &+\frac{1}{8} \left\|\Delta Z\right\|^{2}_{L^{2}} +\frac{1}{8} \left\|\Delta W\right\|^{2}_{L^{2}},\\ \text{I(d)+I(f)}& \leq C \left\|\nabla \phi\right\|^{2}_{L^{2}}\left\|\nabla \Delta \phi\right\|^{2}_{L^{2}} +C\left\|W\right\|^{2}_{L^{2}}\left\|\Delta W\right\|^{2}_{L^{2}}+C \left\|Z\right\|^{2}_{L^{2}}\left\|\Delta Z\right\|^{2}_{L^{2}}\\ &+\frac{1}{8} \left\|\Delta W\right\|^{2}_{L^{2}} +\frac{1}{8} \left\|\Delta Z\right\|^{2}_{L^{2}}. \end{split} \end{equation*} So, we can rewrite (\ref{H1 bound}) as \begin{eqnarray} \label{H1 bound New 2} \frac{d}{dt} P_{2}+P_{3}\leq C \left(\left\|\nabla \phi\right\|^{2}_{L^{2}} +\left\|\nabla \psi\right\|^{2}_{L^{2}}+\left\|Z\right\|^{2}_{L^{2}} +\left\|W\right\|^{2}_{L^{2}} +\left\|\Delta \psi\right\|^{2}_{L^{2}} \right)P_{3}. \end{eqnarray} By (\ref{L2 bound Hall MHD}) and (\ref{H1 bound New 2}), \begin{eqnarray} \label{P1 plus P2} \frac{d}{dt}(P_{1}+P_{2})+P_{2}+P_{3}\leq C \left(P_{1}+P_{2}\right)\left(P_{2}+P_{3}\right). \end{eqnarray} Let $\epsilon_{4}= \left\|\nabla \psi_{0}\right\|^{2}_{H^{1}}+\left\|Z_{0}\right\|^{2}_{H^{1}}+\left\|\nabla \phi_{0}\right\|^{2}_{H^{1}}+\left\|W_{0}\right\|^{2}_{H^{1}}$. If $\epsilon_{4}$ is sufficiently small that $C\epsilon_{4}<1$, (\ref{P1 plus P2}) implies the following inequality for all $t>0$: \begin{eqnarray} \label{P1 plus P2 integral} P_{1}(t)+P_{2}(t)+(1-C\epsilon_{4})\int^{t}_{0}\left(P_{2}(s)+P_{3}(s)\right)ds\leq \epsilon_{4}.
\end{eqnarray} We also bound the right-hand side of (\ref{P3 P4}) as follows \begin{equation*} \label{IIab New} \begin{split} \text{II(a)+II(b)} & \leq C\left\|\nabla Z\right\|^{2}_{L^{2}}\left\|\nabla \Delta Z\right\|^{2}_{L^{2}} +C\left\|\Delta \psi\right\|^{2}_{L^{2}}\left\|\Delta^{2} \psi\right\|^{2}_{L^{2}} +C\left\|\Delta \psi\right\|^{3}_{L^{2}} \left\|\Delta^{2} \psi\right\|^{2}_{L^{2}} \\ &+\frac{1}{6}\left\|\nabla\Delta Z\right\|^{2}_{L^{2}}+\frac{1}{4} \left\|\Delta^{2}\psi\right\|^{2}_{L^{2}},\\ \text{II(c)+II(h)}& \leq C\left\|\nabla \phi\right\|^{2}_{L^{2}} \left\|\Delta^{2} \psi\right\|^{2}_{L^{2}} +C \left\|\nabla \psi\right\|^{2}_{L^{2}} \left\|\Delta^{2}\phi\right\|^{2}_{L^{2}} +C\left\|\nabla \phi\right\|^{2}_{L^{2}}\left\|\Delta^{2}\phi\right\|^{2}_{L^{2}}\\ & +C\left\|\nabla \psi\right\|^{2}_{L^{2}} \left\|\Delta^{2} \psi\right\|^{2}_{L^{2}}+\frac{1}{4} \left\|\Delta^{2} \psi\right\|^{2}_{L^{2}},\\ \text{II(e)+II(g)}&\leq C \left\| Z\right\|^{2}_{L^{2}}\left\|\nabla \Delta Z\right\|^{2}_{L^{2}}+C \left\|W\right\|^{2}_{L^{2}}\left\|\nabla \Delta W\right\|^{2}_{L^{2}}+ C\left\|\nabla \psi\right\|^{2}_{L^{2}} \left\|\Delta^{2} \psi\right\|^{2}_{L^{2}} \\ &+\frac{1}{4} \left\|\nabla \Delta W\right\|^{2}_{L^{2}} +\frac{1}{6} \left\|\nabla \Delta Z\right\|^{2}_{L^{2}},\\ \text{II(d)+II(f)}& \leq C \left\|\nabla \phi\right\|^{2}_{L^{2}}\left\|\Delta^{2}\phi\right\|^{2}_{L^{2}}+C\left\| W\right\|^{2}_{L^{2}} \left\|\nabla \Delta W\right\|^{2}_{L^{2}} +C\left\|Z\right\|^{2}_{L^{2}} \left\|\nabla \Delta Z\right\|^{2}_{L^{2}}\\ &+C\left\|\nabla \phi\right\|^{2}_{L^{2}}\left\|\nabla \Delta W\right\|^{2}_{L^{2}}+C \left\|W\right\|^{2}_{L^{2}}\left\|\Delta^{2}\phi\right\|^{2}_{L^{2}} +C\left\|\nabla \phi\right\|^{2}_{L^{2}}\left\|\nabla \Delta Z\right\|^{2}_{L^{2}}\\ & +C \left\|Z\right\|^{2}_{L^{2}}\left\|\Delta^{2}\phi\right\|^{2}_{L^{2}} +\frac{1}{4}\left\|\nabla \Delta W\right\|^{2}_{L^{2}}+\frac{1}{6}\left\|\nabla \Delta Z\right\|^{2}_{L^{2}}. \end{split} \end{equation*} So, we rewrite (\ref{H2 bound}) as \[ \frac{d}{dt} P_{3}+P_{4} \leq C\left(\left\|Z\right\|^{2}_{L^{2}}+\left\|\nabla Z\right\|^{2}_{L^{2}}+\left\|W\right\|^{2}_{L^{2}}+\left\|\nabla \psi \right\|^{2}_{L^{2}}+\left\|\nabla \phi\right\|^{2}_{L^{2}}+\left\|\Delta \psi\right\|^{2}_{L^{2}} \right)P_{4}. \] By (\ref{P1 plus P2 integral}), we derive the following for all $t>0$: \begin{eqnarray} \label{P3 integral} P_{3}(t)+\left(1-C\epsilon_{4}\right)\int^{t}_{0}P_{4}(s)ds\leq P_{3}(0). \end{eqnarray} Combining (\ref{L2 bound Hall MHD}), (\ref{P1 plus P2 integral}), and (\ref{P3 integral}), we conclude that \[ P(t)+\left(1-C\epsilon_{4}\right)\int^{t}_{0}Q(s)ds\leq P(0) \] for all $t>0$. This completes the proof of Theorem \ref{GWP Hall MHD}. \section*{Acknowledgments} H.B. was supported by NRF-2018R1D1A1B07049015. K.K. was supported by NRF-2019R1A2C1084685 and NRF-20151009350. \end{document}
\begin{document} \title{Nullspace Vertex Partition in Graphs} \author{Irene Sciriha\thanks{[email protected] --Corresponding author} \and Xandru Mifsud\thanks{[email protected]} \and James Borg\thanks{[email protected]} \thanks{\textsc{Department of Mathematics, Faculty of Science, University of Malta, Msida, Malta}}} \maketitle {\bf Keywords} {Nullspace, core vertices, core--labelling, graph perturbations.} {\bf Mathematics Classification 2021} {05C50, 15A18} \begin{abstract} The core vertex set of a graph is an invariant of the graph. It consists of those vertices associated with the non-zero entries of the nullspace vectors of a $\{0,1\}$-adjacency matrix. The remaining vertices of the graph form the core--forbidden vertex set. For graphs with independent core vertices, such as bipartite minimal configurations and trees, the nullspace induces a well defined three part vertex partition. The parts of this partition are the core vertex set, their neighbours and the remote core--forbidden vertices. The remote core--forbidden vertices are those core--forbidden vertices not adjacent to any core vertex. We show that this set can be removed, leaving the nullity unchanged. For graphs with independent core vertices, we show that the submatrix of the adjacency matrix defining the edges incident to the core vertices determines the nullity of the adjacency matrix. For the efficient allocation of edges in a network graph without altering the nullity of its adjacency matrix, we determine which perturbations provide sufficient conditions for the core vertex set of the adjacency matrix of a graph to be preserved in the process. \end{abstract} \section{Introduction} \label{SecIntro} A graph $G = (V,E)$ has a finite vertex set $V=\{v_1,v_2,\ldots,v_n\}$ with vertex labelling $[n]:=\{1,2,...,n\}$ and an edge set $E$ of 2-element subsets of $V$. The graphs we consider are simple, that is without loops or multiple edges. A subset $U$ of $V$ is independent if no two of its vertices form an edge. The open--neighbourhood of a vertex $v \in V$, denoted by $N(v)$, is the set of all vertices adjacent to $v$. The degree $\rho (v)$ of a vertex $v$ is the number of edges incident to $v$. The induced subgraph $G[V\backslash S]$ of $G$, also written $G-S$, is obtained by deleting a vertex subset $S$ together with the edges incident to the vertices in $S$. For simplicity of notation, we write $G-u$ for the induced subgraph obtained from $G$ by deleting vertex $u$ and $G-u-w$ when both vertices $u$ and $w$ are deleted. The adjacency matrix of the labelled graph $G$ on $n$ vertices is the $n\times n$ matrix $\mathbf{A} = (a_{ij})$ such that $a_{ij} = 1$ if the vertices $v_i$ and $v_j$ are adjacent (that is $v_i\sim v_j$) and $a_{ij} = 0$ otherwise. The nullity $\eta(G)$ is the algebraic multiplicity of the eigenvalue $0$ of $\mathbf{A}$, obtained as a root of the characteristic polynomial $\det(\lambda {\bf I} -{\bf A})$. The geometric multiplicity of the eigenvalue $0$ of ${\bf A}$ is the dimension $\eta({\bf A})$ of the nullspace $\ker({\bf A})$ of ${\bf A}.$ Since ${\bf A}$ is real and symmetric, the geometric multiplicity is the same as the algebraic multiplicity, so the {\it nullity} $\eta(G)$ of $G$ is also the dimension of $\ker({\bf A})$. By the dimension theorem for linear transformations, for a graph $G$ on $n$ vertices, the rank of ${\bf A}$ is $\text{rank}(G) = n - \eta(G)$. Graphs for which 0 is an eigenvalue, that is $\eta(G)> 0,$ are said to be singular.
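The nullity is easily computed from the adjacency matrix. The following Python sketch is included as an illustration only (it is not part of the theory and assumes the standard numpy package); it computes $\eta(G)=n-\textup{rank}(\mathbf{A})$ for the path $P_4$ and the cycle $C_4$ discussed in Section \ref{SecPend}.
\begin{verbatim}
import numpy as np

def nullity(A):
    """eta(G) = n - rank(A) for a {0,1}-adjacency matrix A."""
    A = np.array(A)
    return A.shape[0] - np.linalg.matrix_rank(A)

# Path P_4: vertices 1-2-3-4
P4 = [[0, 1, 0, 0],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [0, 0, 1, 0]]

# Cycle C_4: vertices 1-2-3-4-1
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]

print(nullity(P4))   # 0  (P_4 is non-singular)
print(nullity(C4))   # 2  (C_4 is a core graph of nullity two)
\end{verbatim}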
In \cite{ SciCoefx97, SciConstrNullOne98,SciRanKalamazoo1999}, the terms core vertex, core--forbidden vertex and kernel vector for a singular graph $G$ are introduced. The \textit{kernel vector} refers to a non--zero vector $\mathbf{x}$ in the nullspace of $\mathbf{A},$ that is, it satisfies $\mathbf{A}\mathbf{x} = \mathbf{0}$, $\mathbf{x} \neq \mathbf{0}$. The support of a vector $\mathbf{x}$ is the set of indices of non--zero entries of ${\bf x}$. \begin{definition} \label{core} \cite{SciConstrNullOne98,SciCHznSingGr07} A vertex of a singular graph $G$ is a {\em core vertex} $(cv)$ of $G$ if it corresponds to a non-zero entry of {\it some} kernel vector of $G$. A vertex $u$ is a {\em core--forbidden vertex} $(cfv)$, if {\it every} kernel vector has a zero entry at position $u.$ \end{definition} It follows that the union of the elements of the support of all kernel vectors of ${\bf A}$ form the set of core vertices of $G.$ It is clear that a vertex of a singular graph $G$ is either a $cv$ or a $cfv$. The set of core vertices is denoted by $CV$, and the set ${\mathcal V}\backslash CV$ by $CFV$. Cauchy's Inequalities for real symmetric matrices, also referred to as the Interlacing Theorem in spectral graph theory \cite{SelectedSchwenkEigs}, are considered to be among the most powerful tool in studies related to the location of eigenvalues. The Interlacing Theorem refers to the interlacing of the eigenvalues of the adjacency matrix of a vertex deleted subgraph relative to those of the parent graph. As a consequence of the well-known Interlacing Theorem, the nullity of a graph can change by 1 at most, on deleting a vertex. On deleting a vertex, the nullity reduces by 1 if and only if the vertex is a core vertex \cite[Proposition~1.4]{SciMaxExtremSing2012}, \cite[Corollary 13]{SciCoalEmbNut08} and \cite[Theorem 2.3]{SciFarrBk2020}. It follows that the deletion of a core--forbidden vertex can leave the nullity of the adjacency matrix unchanged, or else the nullity increases by 1. \begin{definition} A vertex of a graph $G$ is $cfv_{mid}$ if its deletion leaves the nullity of the adjacency matrix, of the subgraph obtained, unchanged. A vertex of $G$ is $cfv_{upp}$ if when removed, the nullity increases by 1. The set $CFV$ is the disjoint union of the sets $\left\{cfv_{mid}\right\}$ and $\left\{cfv_{upp}\right\}$, denoted by $CFV_{mid}$ and $CFV_{upp},$ respectively. \end{definition} At this point it is worth mentioning that in 1994, the first author coined the phrases {\it core vertices, periphery and core-forbidden vertices}. The core vertices with respect to ${\bf x}$ of a graph $G$ with a singular adjacency matrix ${\bf A}$ correspond to the support of the vector ${\bf x}$ in the nullspace of ${\bf A}$. One must not confuse the core vertex set with the same term referring to independent sets introduced much later \cite{Levit2002Core}. The term core is also used in relation to graph homomorphisms. There were other researchers who used associated concepts in different contexts. In 1982, Neumaier used the terms {\it essential} and {\it non--essential} vertices corresponding to core vertices and core--forbidden vertices, respectively, but only for the class of trees \cite{NeumaierEssential82}. Back in 1960, S. Parter studied the upper core--forbidden vertices in the context of real symmetric matrices. In fact in the linear algebraic community these vertices are referred to as Parter vertices, the core vertices as downer vertices and the middle core--forbidden vertices as neutral vertices. 
Core--forbidden vertices are also referred to as Fiedler vertices in engineering. Graphs with no edges between pairs of vertices in CV have a well defined vertex partition, which facilitates the form of the adjacency matrix in block form as shown in (\ref{EqCoreL}) in Section \ref{SecInd}. \begin{figure} \caption{A vertex partition induced by a generalized kernel vector of $G$ in a graph of nullity 3: the label 0 indicates a vertex in $N(CV)$; the starred vertices are $cfv_R$.} \label{FigNull2e} \end{figure} \begin{definition} A graph is said to have \textit{independent core vertices} if no two core vertices are adjacent. \end{definition} If CV is an independent set, then the core-forbidden vertex set $CFV$ is partitioned into two subsets: $N(CV),$ the neighbours of the core vertices in $G$, and $CFV_{R},$ the remote core--forbidden vertices, as shown in Figure \ref{FigNull2e} for a graph of nullity 3. A similar concept is considered in \cite{SANDER2009133,JAUME2018836} for the case of trees. In this work, unless specifically stated, we consider all graphs. \begin{definition} A \textit{core-labelled} graph $G$ has an independent CV. The vertex set of $G$ is partitioned such that $V = CV \ \dot{\cup} \ N(CV) \ \dot{\cup} \ CFV_R$. The vertices of $CV$ are labelled first, followed by those of $N(CV)$ and then by those of $CFV_R.$\end{definition} In Section \ref{SecPend}, we show that removing a pendant edge from a graph not only preserves the nullity (which is well known) but also the type of vertices. In Section \ref{SecInd}, we determine the nullity of the submatrices of the adjacency matrix for a graph in the class of graphs with independent core vertices. The remote core--forbidden vertices do not contribute to the equations involving the nullspace vectors and can be removed to obtain a {\it slim graph}. In Section \ref{SecBip}, bipartite minimal configurations are shown to be slim graphs with independent core vertices. Moreover all vertices in $N(CV) $ of a bipartite minimal configuration are shown to be upper core--forbidden vertices. In Section \ref{SecTrees}, we obtain results on the nullity and the number of the different types of vertices of singular trees in the light of the results obtained in Section \ref{SecInd}. Section \ref{SecAddE} focuses on the types of non--adjacent vertex pairs that can be joined by edges in a graph under various constraints associated with the nullspace of ${\bf A}.$ \section{Graphs with Pendant Edges} \label{SecPend} By definition of $CV$ and $CFV,$ the nullspace of $\bf A$ induces a partition of the vertices of the associated graph $G$ into $CV$ and $CFV.$ The set $CV$ is empty if $G$ is non--singular and non--empty otherwise. It could happen that $CFV$ is empty in which case the graph is singular and it is a {\it core graph}. Consider two graphs on 4 vertices. The path $P_4$ is non--singular whereas the cycle $C_4$ is a core graph of nullity two. A quick method to obtain the nullity and kernel vectors of a graph is known as the {\it Zero Sum Rule}. The neighbours of a vertex are weighted so that their weights add up to zero. Repeating this for each vertex gives the minimum number $\eta (G)$ of independent parameters in which to express the entries of a generalized vector in the nullspace of ${\bf A}$. Figure \ref{FigNull2b} shows a graph of nullity two with the entry of a generalized kernel vector next to each vertex, in terms of the parameters $a$ and $b$. 
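To make the vertex types and the three--part partition concrete, the following Python sketch is offered as an illustration only; it assumes the sympy package for exact nullspace computations, and the tree it uses is a small constructed example rather than one of the figures. It computes $CV$ as the union of the supports of a nullspace basis (Definition \ref{core}), forms the partition $V= CV \,\dot{\cup}\, N(CV)\,\dot{\cup}\, CFV_R$ of a core--labelling, and classifies each core--forbidden vertex as $cfv_{mid}$ or $cfv_{upp}$ by comparing nullities after vertex deletion.
\begin{verbatim}
import sympy as sp

def nullity(A):
    M = sp.Matrix(A)
    return M.shape[0] - M.rank()

def core_vertices(A):
    # CV = union of the supports of a basis of the nullspace of A
    n = len(A)
    return {i for v in sp.Matrix(A).nullspace() for i in range(n) if v[i] != 0}

def delete_vertex(A, k):
    n = len(A)
    return [[A[i][j] for j in range(n) if j != k] for i in range(n) if i != k]

def core_labelling(A):
    # Partition V into CV, N(CV) and CFV_R (requires CV to be independent)
    n, CV = len(A), core_vertices(A)
    assert all(A[i][j] == 0 for i in CV for j in CV), "CV is not independent"
    NCV = {j for i in CV for j in range(n) if A[i][j]} - CV
    return sorted(CV), sorted(NCV), sorted(set(range(n)) - CV - NCV)

# A small constructed tree: the path 1-2-3-4-5-6-7 with the extra
# pendant path 4-8-9 attached at vertex 4 (vertices numbered 1..9).
edges = [(1,2),(2,3),(3,4),(4,5),(5,6),(6,7),(4,8),(8,9)]
n = 9
A = [[0]*n for _ in range(n)]
for u, w in edges:
    A[u-1][w-1] = A[w-1][u-1] = 1

eta = nullity(A)
CV, NCV, CFV_R = core_labelling(A)
print("nullity:", eta)                     # 1
print("CV:", [v+1 for v in CV])            # [1, 3, 5, 7]
print("N(CV):", [v+1 for v in NCV])        # [2, 4, 6]
print("CFV_R:", [v+1 for v in CFV_R])      # [8, 9]
for v in sorted(set(range(n)) - set(CV)):
    change = nullity(delete_vertex(A, v)) - eta
    print("vertex", v+1, "is", "cfv_mid" if change == 0 else "cfv_upp")
\end{verbatim}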
\begin{figure} \caption{A graph of nullity 2 and a generalized kernel vector ${\bf x}$} \label{FigNull2b} \end{figure} We are interested in the change in the type of vertices on the deletion of vertices and edges. Deleting a core vertex from an odd path $P_{2k+1}$ may transform some of the core vertices to $CFV_{upp}$. Similarly, deleting a $CFV_{upp}$ vertex from the cycle $C_6$ on six vertices transforms some of the core--forbidden vertices to core vertices. Removing a core vertex and a neighbouring $cfv$ may alter the nullity. Consider the 4--vertex graph obtained by identifying an edge of two 3--cycles. Removing the identified edge increases the nullity by 1, whereas removing any of the other edges decreases the nullity by 1. However, it is well known that removing an end vertex $v$, also known in the literature as a leaf, and its unique neighbour $u$ from a graph $G$ leaves the nullity unchanged in $G - u - v$ \cite{CvetGutMultZeroeig72}. Note that the vertex $v$ may be $cv$ or $cfv.$ We give a new proof of this known result that also leads to an unusual preservation of the type of the remaining vertices after removing two vertices. \begin{theorem} \label{TheoRemPend} Let $w$ be an end vertex and $u$ its unique neighbour in a singular graph $G$. The nullity of $G - u - w$ is the same as that of $G$. Moreover, the type of vertices in $G - u - w$ is preserved. \end{theorem} \begin{proof} Let $u, w$ be the $(n-1)^{\text{th}}$ and $n^{\text{th}}$ labelled vertices, respectively, of a graph on $n$ vertices. The adjacency matrix $\mathbf{A}(G)$ satisfies $$ \mathbf{A}(G)\left(\begin{array}{c} \mathbf{x}\\ \hline y\\ z \end{array}\right) = \left (\begin{array} {ccc|cc} &\mathbf{A}(G-u-w)&&{\star}&{\bf 0}\\ \hline &{\star}&&0&1\\ &{\bf 0}&&1&0 \end{array}\right ) \left(\begin{array}{c} \mathbf{x}\\ \hline y\\ z \end{array}\right)= \left(\begin{array}{c} \mathbf{0}\\ \hline 0\\ 0 \end{array}\right).$$ Hence $y$ is 0 and $\mathbf{A}(G-u-w)\mathbf{x}=\mathbf{0}.$ Also $z$ is determined by $\mathbf{x}$ and the neighbours of $u.$ The nullity of $G-u-w$ is equal to the nullity of $G$: the map $\mathbf{x}\mapsto(\mathbf{x},0,z)$, with $z$ determined by $\mathbf{x}$ as above, is a 1--1 correspondence between the kernel vectors of $G-u-w$ and the kernel vectors of $G$, whatever the value of $z$ turns out to be. So the number of linearly independent vectors in the nullspace of $G$ is equal to the number of linearly independent vectors in the nullspace of $G-u-w.$ Also, on removing the end vertex and its neighbour, the non--zero entries of $\mathbf{x}$ restricted to $G-u-w$ will be the same as for $G$. Hence, the core and core--forbidden vertices in $G -u-w$ are the same as those in $G$. \end{proof} In a tree, it is possible to remove end vertices and associated unique neighbours successively until no edges remain. Indeed, the graph obtained by removing all pendant edges in $T$ and in the subgraphs obtained in the process, is $\overline{K_{\eta}}, $ each vertex of which, as expected from Theorem \ref{TheoRemPend}, is a core vertex. This leads to a well known criterion to determine the nullity of a tree. \begin{corollary} \label{CorTreeRem2v} For a tree $T,$ the number of isolated vertices, obtained by the removal of end vertices and their unique neighbours in $T$ and in its successive subgraphs, is $\eta(T).$ \end{corollary} Since by Theorem \ref{TheoRemPend}, the vertices of $\overline{K_{\eta}}$ are in CV of $T,$ we can deduce the following result: \begin{proposition} \label{PropT2endCV} A singular tree $T$ has at least 2 core vertices which are end vertices.
\end{proposition} \begin{proof} Starting from any end--vertex in $T,$ if the order of pendant--edge removals, is chosen appropriately, then at least one vertex $u$ of $\overline{K_{\eta}},$ obtained as in Corollary \ref{CorTreeRem2v}, is an end--vertex of $T$ and its type in $T$ is a {\it cv}. Similarly, starting from the edge containing the end--vertex $u$ of $T,$ there is another end--vertex $w$ which is a {\it cv} of $T.$ \end{proof} Corollary \ref{CorTreeRem2v} describes a polynomial--time algorithm to determine the nullity of a tree. A {\it matching} in a bipartite graph is a set of edges, no two of which share a common vertex. The matching number $t$ is the number of edges in a maximal matching \cite{CvetGutMultZeroeig72}. Corollary \ref{CorTreeRem2v} and Proposition \ref{PropT2endCV} provide an immediate proof of the well known result $\eta(T)=n-2t$ \cite{CvetGutMultZeroeig72}. \section{Graphs with independent core vertices} \label{SecInd} In a singular graph, core vertices may be adjacent. Indeed, in a core graph (not $\overline{K_r}$), each edge joins two core vertices. The family of cycles $C_{4k}, k\in {\mathbb N}$ consists of core graphs of nullity 2. By definition, a singular graph has a non--empty $CV.$ If in a singular graph, $N(CV)$ is empty, then $CFV$ must be empty and the graph is a core graph. It is convenient to work with graphs for which $CFV_R$ is empty. Removal of $CFV_R$ from a graph leaves the type of vertices in the resulting subgraph unchanged. \begin{definition}\label{DefSlim} A connected singular graph $G$ is a {\it slim graph} if it has an independent $CV$ and $CFV$ is precisely $N(CV)$. \end{definition} From Definition \ref{DefSlim}, it follows that a singular graph is slim if and only if its $CV$ is an independent set and its $CFV_R$ is empty. For a core--labelled graph the adjacency matrix $\mathbf{A}$ is a block matrix of the form, \begin{equation} \mathbf{A} = \left[\begin{array}{c|c|c} \mathbf{0} & \mathbf{Q} & \mathbf{0} \\ \hline \mathbf{Q}^\intercal & \mathbf{N} & \mathbf{R} \\ \hline \mathbf{0} & \mathbf{R}^\intercal & \mathbf{M} \end{array}\right] \label{EqCoreL} \end{equation} where $\mathbf{Q}$ is $CV \times N(CV)$, $\mathbf{R}$ is $N(CV) \times CFV_R$, $\mathbf{N}$ is $N(CV) \times N(CV)$ and $\mathbf{M}$ is $CFV_R \times CFV_R$. The submatrix $\mathbf{Q}$ plays an important role to relate the linear independence of its columns to the nullity of $G$. \begin{lemma} \label{LemNullQ} Let $G$ be a singular core--labelled graph. Then $\eta(\mathbf{Q}^\intercal)=\eta(G)$. \end{lemma} \begin{proof} For a core--labelling of $G$, let $\mathbf{x}^{(i)}$ be one of the $\eta(G)$ kernel vectors of $\mathbf{A}$. The vector $\mathbf{x}^{(i)}$ is of the form $\left(\mathbf{x}_{CV}^{(i)}, {\bf 0}\right)$ and $\mathbf{ x}_{CV}^{(i)} = \left(\alpha_1,...,\alpha_{|CV|}\right) \not ={\bf 0}.$ Now, $\mathbf{Ax}^{(i)} = \mathbf{0}$ if and only if $\mathbf{Q}^\intercal \mathbf{x}_{CV}^{(i)} = \mathbf{0}$. Thus there are as many linearly independent kernel vectors of ${\bf A}$ as there are of $\mathbf{Q}^{\intercal}.$ It follows that $\textup{Dim}\left(\textup{Ker}(\mathbf{Q}^\intercal)\right) = \textup{Dim}\left(\textup{Ker}(\mathbf{A})\right)$. \end{proof} \begin{lemma} \label{LemQdep} Let $G$ be a singular core--labelled graph. For a core--labelling of $G$, the columns of $\mathbf{Q}^{\intercal}$ are linearly dependent and $\textup{rank}(\mathbf{Q}) < |CV|$. 
\end{lemma} \begin{proof} Since $\text{Dim}\left(\text{Ker}(\mathbf{A})\right) \geq 1$, then $\text{Ker}(\mathbf{Q}^\intercal)\not =\{{\bf 0}\}.$ Thus there is a non-zero linear combination of the columns of $\mathbf{Q}^\intercal$ that is equal to $\mathbf{0}$, that is $\mathbf{Q}^\intercal \mathbf{x}_{CV} = \mathbf{0}$. Hence the columns of $\mathbf{Q}^\intercal$ are linearly dependent. Since column rank is equal to row rank, it follows that $\textup{rank}(\mathbf{Q}) < |CV|$. \end{proof} \begin{figure} \caption{In graph $H,$\,the number of vertices in CV and in NCV are the same and in graph $K,$\,$|CV|<|N(CV)|$.} \label{FigNCV} \end{figure} The relative number of vertices in $CV$ and in $N(CV)$ may differ. For the graphs $H$ and $K$ of Figure \ref{FigNCV} $|CV|=|N(CV)|$ and $|CV|<|N(CV)|, $ respectively. In Section \ref{SecBip}, we see that graphs with $|CV|>|N(CV)| $ exist, a property satisfied by minimal configurations (defined in Definition \ref{Defmc}). \begin{theorem} \label{TheoNullCVrnkQ} Let $G$ be a singular core-labelled graph with independent core vertices. Then $\eta(G) = |CV| - \textup{rank}(\mathbf{Q})$. \end{theorem} \begin{proof} By the well known dimension theorem,\\ \hspace*{2cm} Dim(Domain$({\bf Q}{^\intercal}))=$ Dim(Ker$({\bf Q}{^\intercal}) ) +$ Dim(Im$({\bf Q}{^\intercal})).$ Now Dim(Domain$({\bf Q}{^\intercal}))= |CV|.$ By Lemma \ref{LemNullQ}, Dim(Ker$({\bf Q}{^\intercal}) )=\eta(G).$ Hence rank$({\bf Q})= $ rank$({\bf Q}^{\intercal})= |CV| - \eta (G).$ \end{proof} It is clear that for a singular core--labelled graph, if $|CV|< |N(CV)|,$ then the columns of the $|CV|\times |N(CV)|$ matrix ${\bf Q}$ are linearly dependent. For $|CV|= |N(CV)|,$ by Theorem \ref{TheoNullCVrnkQ}, $\textup{rank}(\mathbf{Q}) < |CV| $ and thus the $|N(CV)|$ columns of $\mathbf{Q}$ are linearly dependent. We shall now determine a necessary and sufficient condition for $\mathbf{Q}$ to have full column rank. \begin{theorem} \label{TheoQindep} Let $G$ be a singular core--labelled graph. The matrix $\mathbf{Q}$ has linearly independent columns if and only if $\eta(G) = |CV| - |N(CV)|$. \end{theorem} \begin{proof} The matrix $\mathbf{Q}$ has full rank if and only if rank$({\bf Q})= {\rm Dim}({\rm Im}({\bf Q}))=|N(CV)|.$ By Theorem \ref{TheoNullCVrnkQ}, the necessary and sufficient condition for the matrix $\mathbf{Q}$ to have linearly independent columns is that $\eta(G) = |CV| - |N(CV)|$. \end{proof} Recall that the vertex set $V$ of a core--labelled graph is partitioned into $CV$, $N(CV)$ and $CFV_R.$ On deleting $N(CV)$ and $CV$ from a graph, the subgraph induced by $CFV_R$ remains. \begin{theorem}\label{TheoCFVrInv} The subgraph induced by $CFV_R$ for a core--labelled graph is non--singular. \end{theorem} \begin{proof} Using an adjacency matrix ${\bf A}$ of the form (\ref{EqCoreL}), we need to show that $\mathbf {My=0}$ if and only if $\mathbf{y=0}.$ For a core--labelling, all kernel vectors of $\textbf{A}(G)$ are of the form ${\bf z}=\left(\begin{array}{c} {\bf x}\\ {\bf 0}\\ {\bf 0} \end{array}\right).$ But ${\bf My=0}$ for some ${\bf y}\not ={\bf 0} $ if and only if there exists ${\bf x} $ such that ${\bf A}\left(\begin{array}{c} {\bf x}\\ {\bf 0}\\ {\bf y} \end{array}\right)={\bf 0}.$ This contradicts the form of the kernel vector for a core--labelling. Hence no kernel vectors exist for ${\bf M}.$ \end{proof} Graphs with independent core vertices include the family of half cores. 
A \textit{half core} is a bipartite graph with one partite set being the set $CV$ and the other partite set being $CFV$. In Section \ref{SecTrees}, we shall see that trees also have independent core vertices. At this stage, the case for unicyclic graphs is worth mentioning. The coalescence of two graphs is obtained by identifying a vertex of one graph with a vertex of the other graph. If none of the two graphs is $K_1$, then this vertex becomes a cut vertex. Unicyclic graphs can be considered to be the coalescence of a cycle $C_r$ with $r$ trees (some or all of which may be the isolated vertex $P_1$), each tree $T_v$ coalesced with $C_r$ at a unique vertex $v$ of the cycle. If $r\not =4k,\ k\in {\mathbb Z}^+, $ then the unicyclic graph has independent core vertices. Since the nullity of $C_4$ is 2, using Theorem \ref{TheoRemPend}, the following result is immediate. \begin{theorem} \label{TheoUnicyc} Let $G$ be a unicyclic graph with cycle $C_r$ where $r=4k.$ \begin{enumerate}[{\rm (i)}] \item If the vertex $v$ of at least one tree $T_v$ which is coalesced with the cycle is a core--forbidden vertex, then the unicyclic graph also has independent core vertices. \item If the vertices $v$ of each tree $T_v$ which is coalesced with the cycle is a core vertex, then the unicyclic graph must have nullity at least 2. \end{enumerate} \end{theorem} \section{Bipartite Minimal Configurations} \label{SecBip} In \cite{SciCoefx97, SciConstrNullOne98,SciCHznSingGr07}, the concept of {\it minimal configurations} (MCs) as admissible subgraphs, that go to construct a singular graph, is introduced. It is shown that there are $\eta $ MCs as subgraphs of a singular graph $G$ of nullity $\eta >0.$ A MC is a graph of nullity 1 and its adjacency matrix ${\bf A} $ satisfies ${\bf Ax}={\bf 0}$ where ${\bf x}\not ={\bf 0}$ is the generator of the nullspace of the adjacency matrix ${\bf A}$ of $G$. The core vertices of a MC induce a subgraph termed the {\it core} $F$ with respect to ${\bf x}.$ Among singular graphs with core $F$ and kernel vector ${\bf x},$ a MC has the least number of vertices and there are no edges joining pairs of core--forbidden vertices. For instance, the path $P_7$ on 7 vertices is a MC with ${\bf x}=(1,0,-1,0,1,0,-1)^{\intercal}.$ \begin{definition} \label{Defmc} A \textit{minimal configuration} (MC) is a singular graph on a vertex set $V$ which is either $K_1$ or if $|V| \geq 3$, then it has a core $F = G\left[CV\right]$ and periphery $\mathcal{P} =V\backslash CV$ satisfying the following conditions, \begin{enumerate}[\rm (i)] \item $\eta(G) = 1$, \item $\mathcal{P} = \emptyset$ or $\mathcal{P}$ induces a graph consisting of isolated vertices, \item $\left|\mathcal{P}\right| + 1 = \eta\left(F\right)$. \end{enumerate} \end{definition} Note that a MC $\Gamma $ is connected. To see this, suppose $\Gamma $ is the disjoint union $G_1\dot \cup G_2$ of the graphs $G_1$ and $ G_2,$ labelled so that the core vertices of $G_1$ are labelled first followed by its {\it cfv}, then the cv of $G_2$ followed by its \textit{cfv}. There exists a nullspace vector $({\bf x}_1,{\bf 0},{\bf x}_2,{\bf 0}),$ of ${\bf A}$ with each entry of ${\bf x}_1$ and of ${\bf x}_2$ non-zero. Since $({\bf x}_1,{\bf 0},{\bf 0},{\bf 0}),$ and $({\bf 0},{\bf 0},{\bf x}_2,{\bf 0}),$ are conformal linearly independent vectors in the nullspace of ${\bf A},$ the nullity of $G$ is at least 2, a contradiction. For the nullity to be 1, it follows without loss of generality, that $\mathbf{x}_2={\bf 0}. 
$ But then all vertices in $G_2$ lie in the periphery and, by the definition of a MC, they form an independent set. Hence $G_2$ consists of isolated vertices that add $|G_2|\ (>0)$ to the nullity of $G_1$, a contradiction. Hence $G$ must consist of one component only. The $n$--vertex set of a bipartite graph $G(V_1,V_2,E)$ is partitioned into independent sets $V_1$ and $V_2$ and has edges in $E$ between vertices in $V_1$ and vertices in $V_2.$ If the vertices in $V_1$ are labelled first, then the adjacency matrix of $G$ is of the form \begin{equation} \mathbf{A} = \left(\begin{array}{c|c} \mathbf{0} & \mathbf{S} \\ \hline \mathbf{S}^\intercal & \mathbf{0} \end{array}\right), \label{EqBip}\end{equation} where the $|V_1| \times |V_2|$ matrix $\mathbf{S}$ describes the edges between $V_1$ and $V_2$. The nullity of ${\bf A}$ is $n-2\,{\rm rank}({\bf S}).$ We have proved the following result: \begin{proposition}\label{PropBipParity} The nullity of the adjacency matrix of an $n$--vertex bipartite graph and $n$ are of the same parity. \end{proposition} In \cite{BevisRank95}, the result in Proposition \ref{PropBipParity} is obtained for trees, a subclass of the bipartite graphs. In particular, a bipartite non-singular graph has an even number of vertices. To explore bipartite MCs it is convenient to consider first a singular bipartite graph of nullity 1. \begin{proposition} \label{PropBipCVInd} A singular bipartite graph of nullity 1 admits a core--labelling. \end{proposition} \begin{proof} Let $G({V}_1, {V}_2, {E})$ be a singular bipartite graph of nullity 1 with partite sets ${V}_1$ and ${V}_2.$ We show that $CV\subseteq {V}_1,$ without loss of generality. Suppose, for contradiction, that $CV$ meets both ${V}_1$ and ${V}_2.$ Then there exists a kernel vector ${\bf x}=\left(\alpha_1, ..., \alpha_{|{V}_1|}, \beta_1, ..., \beta_{|{V}_2|}\right)^\intercal, \ {\bf x}\not ={\bf 0},$ where not all the $\alpha_i$ are zero and not all the $\beta_j$ are zero. By the block form (\ref{EqBip}) of ${\bf A}$, it follows that ${\bf A}\left(\alpha_1, ..., \alpha_{|{V}_1|}, 0,...,0\right)^\intercal={\bf 0}$ and ${\bf A}\left( 0,...,0, \beta_1, ..., \beta_{|{V}_2|}\right)^\intercal={\bf 0},$ showing that ${\bf A}$ has two linearly independent nullspace vectors. This contradicts the assumption that the nullity of $G$ is 1. Hence, without loss of generality, $\beta _j=0,\ 1\leq j\leq |{V}_2|,$ showing that the core vertices lie in ${V}_1.$ Thus the $CV$ of a singular bipartite graph of nullity 1 is necessarily an independent set, which is the condition for the existence of a core--labelling. \end{proof} \begin{theorem} \label{TheoBipNull1} Let $G$ be a bipartite graph, of nullity 1, on $n$ vertices with partite vertex sets $V_1$ and $V_2$. Then, \begin{enumerate}[\rm (i)] \item $n$ is odd \item For $|V_1| > |V_2|$, $|V_2| = \dfrac{n-1}{2}$ and $|V_1| = |V_2| + 1$ \item $ CV \subseteq V_1.$ \end{enumerate} \end{theorem} \begin{proof} Let the adjacency matrix of $G$ be as in (\ref{EqBip}). \begin{enumerate}[(i)] \item Since $\text{rank}(\mathbf{A}) = 2\, \text{rank}(\mathbf{S})$ and $\eta(G) = 1$, then $n = 2\, \text{rank}(\mathbf{S}) + 1$, which is odd. \item Without loss of generality, let $|V_1| > |V_2|$. Then $\text{rank}(\mathbf{S})\leq |V_2|$. Hence $n - 1 = \text{rank}(\mathbf{A}) \leq 2|V_2|.$ Thus $|V_1| + |V_2| - 1 \leq 2|V_2|$, that is, $|V_1| \leq |V_2| + 1$; since $|V_1| > |V_2|$, it follows that $|V_1| = |V_2| + 1$. Since $n = |V_1| + |V_2|$, it follows that $|V_2| = \dfrac{n - 1}{2}$. \item The proof of Proposition \ref{PropBipCVInd} shows that $CV\subseteq {V}_1.$ \end{enumerate} \end{proof} A MC has nullity equal to 1.
For a bipartite MC, with partite sets $V_1$ and $V_2,$ and $|V_1|>|V_2|,$ we have $|V_1|=|V_2|+1.$ \begin{corollary} Let $G$ be a bipartite MC with vertex partite sets $V_1$ and $V_2,$ where $|V_1|>|V_2|.$ Then the set $CV$ of core vertices is $V_1$ and the set $CFV$ ( that is $\mathcal{P}$) is $V_2$. \end{corollary} \begin{proof} By Theorem \ref{TheoBipNull1}(iii), $CV\subseteq { V}_1.$ A minimal configuration is connected and $V_1$ is an independent set in a bipartite MC. Note that $\mathcal{P}$ is an independent set. Thus the only neighbouring vertices of a vertex in $\mathcal{P}$ are in $CV.$ Since $ \mathcal{P}= V\backslash CV,$ then $ \mathcal{P}\cap V_1=\emptyset. $ Thus $\mathcal{P}\subseteq V_2$. Moreover $CV=V_1$ and $\mathcal{P}=V_2.$ \end{proof} Another characterization of a bipartite MC focuses on the removal of extra vertices and edges, from a singular bipartite graph of nullity 1, producing a slim graph (Definition \ref{DefSlim}, page \pageref{DefSlim}). \begin{theorem} \label{TheoBipMCharzn} A graph $G(V_1,V_2,E),$ \ $|V_1|>|V_2|,$ is a bipartite MC if and only if it is a slim bipartite graph of nullity 1 with $CV=V_1$. \end{theorem} \begin{proof} Let $G(V_1,V_2,E)$ be a bipartite MC, $|V_1|>|V_2|$. Then it has nullity 1 and $|V_2|=|V_1|-1$. The set $V_1$ is $CV$ and $V_2$ is $CFV={\mathcal P}.$ Thus it has no $CFV_R$ and is therefore a slim graph of nullity 1. Conversely, let $G(V_1,V_2,E)$ be a slim bipartite graph of nullity 1, with $CV=V_1$. Then $V_2=CFV $ and by Theorem \ref{TheoBipNull1} (ii), $|V_2|=|V_1|-1.$ Removal of $V_2$ leaves the core $F,$ induced by $CV,$ with nullity $|CV| $ increasing the nullity from 1 to $|V_1|.$ But then the nullity increases by one with the removal of each vertex in $V_2.$ Thus $\mathcal{P}=V_2$ and is an independent set. Also $\eta (F)= |V_1|.$ Moreover $|\mathcal{P}|=|V_2|=\eta(F)-1.$ Hence $G$ is a bipartite MC. \end{proof} It is worth mentioning that stipulating that a MC is bipartite can do away with the third axiom of a general MC. \section{Nullspace Vertex Partition in Trees} \label{SecTrees} Trees are the most commonly studied class of graphs \cite{SciFioGutTreesMaxNull05}. In this section we explore MC trees and singular trees in general. First we need a result on the number of core vertices adjacent to any vertex of a singular graph on more than 1 vertex. \begin{lemma} \label{Lem2CV} A vertex of a singular graph cannot be adjacent to exactly one core vertex. \end{lemma} \begin{proof} A graph is singular if there exists ${\bf x}\in {\mathbb R}^n,\ {\bf x}\not = {\bf 0},$ such that ${\bf Ax}={\bf 0}.$ Let $v\in V(G).$ The $v$th row of ${\bf Ax}={\bf 0}$ can be written as $\sum_{i\sim v } x_i=0.$ The neighbours of $v$ may be all $cfv$. If not, then there exists $w\in CV$ such that $w\sim v$ and $x_w\not =0.$ But then there exists at least one other $cv\ w',\ w'\sim v$ with $x_{w'}\not =0$ to satisfy $\sum_{i\sim v } x_i=0.$ \end{proof} As a result of Lemma \ref{Lem2CV}, if 2 core vertices are adjacent then an infinite path is a subgraph of a finite tree, since a tree has no cycles. This contradiction proves the following result \begin{proposition} \cite{NeumaierEssential82, JAUME2018836} \label{PropTreeCVindep} Let $T$ be a singular tree. Then $T$ has independent core vertices. \end{proposition} For a tree, the combinatorial properties of the subgraph induced by $CFV_R$ will prove useful in Theorem \ref{TheoSubdiv}. 
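Corollary \ref{CorTreeRem2v} gives a simple polynomial--time procedure for the nullity of a tree. The following Python sketch is an illustration only (pure Python, with a numpy cross--check; the 9--vertex tree is the same constructed example used in the earlier sketch, not one of the figures): it repeatedly deletes an end vertex together with its unique neighbour and counts the isolated vertices that remain.
\begin{verbatim}
import numpy as np

def tree_nullity_by_pendant_removal(n, edges):
    """Repeatedly delete a leaf and its unique neighbour (Corollary
    CorTreeRem2v); the surviving isolated vertices are counted."""
    alive = set(range(1, n + 1))
    adj = {v: set() for v in alive}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    while True:
        leaf = next((v for v in alive if len(adj[v]) == 1), None)
        if leaf is None:            # in a forest, no leaf means no edges remain
            return len(alive)       # the survivors are isolated: their count is eta(T)
        nb = next(iter(adj[leaf]))  # the unique neighbour of the chosen leaf
        for x in (leaf, nb):        # delete the pendant edge's two endpoints
            for y in list(adj[x]):
                adj[y].discard(x)
            adj.pop(x)
            alive.discard(x)

# the constructed 9-vertex tree used in the previous sketch
edges = [(1,2),(2,3),(3,4),(4,5),(5,6),(6,7),(4,8),(8,9)]
print(tree_nullity_by_pendant_removal(9, edges))    # 1

# cross-check against eta(T) = n - rank(A)
A = np.zeros((9, 9), dtype=int)
for u, w in edges:
    A[u-1, w-1] = A[w-1, u-1] = 1
print(9 - np.linalg.matrix_rank(A))                 # 1
\end{verbatim}
By Theorem \ref{TheoRemPend}, the answer does not depend on the order in which the pendant edges are removed.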
\begin{theorem} \label{TheoPM} For a core--labelling of a singular tree $T,$ the subgraph induced by $CFV_R$ has a perfect matching. \end{theorem} \begin{proof} In Theorem \ref{TheoCFVrInv}, we show that ${\bf M}$ as in (\ref{EqCoreL}) is invertible, so the forest induced by $CFV_R$ is non--singular. Applying the relation $\eta=n-2t$ componentwise to this forest, its nullity is $0$, so its order is twice its matching number. Hence the subgraph induced by $CFV_R$ has a perfect matching (a one--factor). \end{proof} We shall now use the concept of subdivision for the proof of the characterization of a MC tree. \begin{definition} A {\it subdivision} $S$ of a connected graph $G$ on $n$ vertices and $m$ edges is obtained from $G$ by inserting a vertex of degree 2 in each edge. Thus $S$ has $n+m$ vertices and $2m$ edges. \end{definition} \begin{lemma} \label{LemSubdv} Let ${\bf B}$ be the vertex--edge incidence matrix of a connected graph $G$ on $n$ vertices and $m$ edges. The characteristic polynomial of the subdivision $S$ of $G$ is $\phi(S,\lambda)= \lambda ^{n-m}\det(\lambda ^2{\bf I}-{\bf B}^{\intercal}{\bf B}).$ \end{lemma} \begin{proof} The adjacency matrix of $S$ is \begin{equation} \label{MatrixSubDiv} {\bf A}(S) =\left( \begin{array}{c|c} {\bf 0}_n&{\bf B}\\ \hline {\bf B}^{\intercal}& {\bf 0}_m \end{array} \right). \end{equation} Expanding using Schur's complement, $\phi(S,\lambda)= \lambda ^{n}\det(\lambda {\bf I}-{\bf B}^{\intercal}( \lambda {\bf I})^{-1}{\bf B} ) = \lambda ^{n-m}\det(\lambda ^2{\bf I}-{\bf B}^{\intercal}{\bf B}).$ \end{proof} \begin{corollary} \label{CorTreeBrank} For a tree $T,$ the incidence matrix ${\bf B}$ has full rank. \end{corollary} \begin{proof} Consider ${\bf B}^{\intercal} \left(\begin{array}{c} \alpha _1\\ \alpha _2\\ \vdots\\ \alpha _n \end{array}\right)= \left(\begin{array}{c} 0\\ 0\\ \vdots\\ 0 \end{array}\right).$ Since there are only 2 non--zero entries in each column of ${\bf B},$ \ $\alpha _u=-\alpha _w $ for each edge $\{u,w\}.$ For a connected graph, it follows that the nullspace of ${\bf B}^{\intercal} $ has dimension 1 for a bipartite graph and 0 otherwise. The tree $T$ is bipartite and $m=n-1.$ Hence the rank of ${\bf B},$ which is the same as the rank of ${\bf B}^{\intercal},$ is $m.$ \end{proof} \begin{corollary} \label{CorSub} The subdivision of a tree is singular with nullity 1. \end{corollary} \begin{proof} This follows immediately from Lemma \ref{LemSubdv} since, from Corollary \ref{CorTreeBrank}, the nullspace of ${\bf B}^{\intercal}{\bf B}$ is $\{\bf 0\}$ for a tree with $m=n-1.$ \end{proof} In \cite{LAMAMinConfigTreesGutmanSciriha_2006}, a characterization of MC trees is presented. Here we give a different proof by using Corollary \ref{CorSub}. \begin{theorem} \label{TheoSubdiv} \cite{LAMAMinConfigTreesGutmanSciriha_2006} A tree is a minimal configuration if and only if it is a subdivision of another tree. \end{theorem} \begin{proof} Let $T'$ be a MC tree with $|CV|=n$ and $|{\mathcal P}|=|N(CV)|= m.$ Since the core of $T'$ is $\overline{K_{n}}$, Definition \ref{Defmc}(iii) gives $m+1=n$, that is, $n-m=1.$ Note that both $CV$ and $N(CV)$ are independent sets, the partite sets of $T'.$ Also the number of edges of $T'$ is $m+n-1=2m.$ Now a vertex of ${\mathcal P}(T')$ cannot be an end vertex: its unique neighbour would either lie in ${\mathcal P}(T')$, contradicting the independence of ${\mathcal P}(T')$ in a MC, or be the only core vertex adjacent to it, contradicting Lemma \ref{Lem2CV}. Hence each vertex of ${\mathcal P}(T')$ has degree at least 2; since every edge of $T'$ joins $CV$ to ${\mathcal P}(T')$ and there are $2m$ edges, each of the $m$ vertices of ${\mathcal P}(T')$ has degree exactly 2. Therefore $T'$ is the subdivision of a tree $T$ on $n$ vertices and $m$ edges. Conversely, let $T$ be a tree on $n$ vertices and $m$ edges and let $S$ be its subdivision. Then by Corollary \ref{CorSub}, $S$ has nullity 1. Since $S$ is a singular tree, then by Proposition \ref{PropTreeCVindep}, CV is an independent set. Hence $S$ has a core--labelling.
Let the partite sets $V_1$ and $V_2$ in $S$ be the original vertices of $T$ and the inserted vertices, respectively. Note $|V_1|=|V_2|+1.$ By Theorem \ref{TheoBipNull1} (iii), $CV\subseteq V_1.$ Since $S$ is bipartite, $N(CV)\subseteq V_2.$ Recall that $V_1$ in $S$ was the set of original vertices of $T.$ Let $w\in V_1.$ The subgraph $S-w$ of $S,$ obtained from $S$ after removing $w$ has a perfect matching with edges $\{u_i,w_j\}, \ u_i \in V_2 ,\ w_j\in V_1.$ Hence $S-w$ has nullity 0. This means that the nullity of $S$ decreases on deleting $w.$ Hence $w\in CV,$ that is $V_1\subseteq CV.$ The subset $V_1$ is therefore $CV$ in $S.$ We now consider $V_2,$ which is a partition of $N(CV)$ and $CFV_R.$ Since the $S$ is connected, then $V_2=N(CV). $ It follows that $S$ is a bipartite slim graph of nullity 1, with $V_1=CV$. By Theorem \ref{TheoBipMCharzn}, $S$ is a bipartite $MC$. \end{proof} Note that the subgraph of $S,$ obtained after removing $u\in V_2,$ is a subdivision of a forest of two trees and has nullity 2. Repeating the process until all the vertices in $V_2$ are removed, the nullity increases to $V_1.$ Hence the nullity increased by 1 with each vertex deletion. It follows that each vertex in $V_2$ is an upper $cfv,$ a condition required for a MC. It is also worth noting that the incidence matrix $\bf B$ appearing as a submatrix of the adjacency matrix of a subdivision of a tree in (\ref{MatrixSubDiv}) is precisely ${\bf Q}$ in (\ref{EqCoreL}). We now show that the size of the periphery of a MC tree is related to the matching number $t.$ For a general singular tree $T$, a maximal matching consists of the pendant edges removed, until $\overline{K_{\eta(T)}}$ is obtained, starting from any end--vertex in $T.$ One can start from a slim forest and extend to a general tree $T'$ of the same nullity with the CV preserved by adding pairs of adjacent vertices in $CFV_R(T').$ This can be done either by adding a pendant edge and joining it to a $cfv $ or by inserting two vertices of degree 2 in an edge with $cfv$s as end vertices. \begin{proposition} \label{ProptMC} If $T'$ is a minimal configuration tree, then $t = |N(CV)|$. \end{proposition} \begin{proof} For a MC tree, $\eta(T')=1=n(T')-2t.$ Also, by Theorem \ref{TheoSubdiv}, $T'$ is the subdivision of a tree $T$ on $n$ vertices and $m$ edges. So $n(T')=n+m $ and $2t= n+m-1= 2m.$ Since the vertices in $N(CV(T'))\ (= {\mathcal P}(T'))$ are the vertices inserted in the edges of $T$ to form the subdivision, $t=m=|N(CV)|$. \end{proof} The next result is on the rank of ${\bf Q}$ in the adjacency matrix of a core--labelled tree. \begin{theorem} \label{TheoTreeQ} If $T$ is a core--labelled tree, then the columns of $\mathbf{Q}$ are linearly independent. \end{theorem} \begin{proof} For a core--labelled graph $G$, by Theorems \ref{TheoNullCVrnkQ}, $\eta(G) = |CV| - \textup{rank}(\mathbf{Q})$. For a tree, $\eta(T)=n-2t.$ By Theorem \ref{TheoPM}, for a core--labelled tree, $2t=2|N(CV)|+|CFV_R|. $ Since $n= |CV|+ |N(CV)|+|CFV_R|$, by eliminating t, $\eta(T) = |CV| - |N(CV)|$. By Theorem \ref{TheoQindep}, $\mathbf{Q}$ has full rank and the $|N(CV)|$ columns of $\mathbf{Q}$ are linearly independent. \end{proof} \section{Nullspace Preserving Edge Additions} \label{SecAddE} In this last section, we explore which edges could be added (or removed) from a graph to preserve the nullity or the core vertex set. By Cauchy's Interlacing Theorem for real symmetric matrices, the nullity changes by at most 1, on adding or deleting a vertex. 
By definition, if the vertex is a $cfv_{mid},$ the nullity is preserved. We now explore which edge additions allow the nullity and the core vertex set to be both preserved in a graph with independent core--vertices. We use again the vertex partition into $CV$, $N(CV)$ and $CFV_R$ induced by a core--labelling. We consider adding an edge between two vertices within a part or between two distinct parts of the partition. \begin{theorem} \label{TheoEcvNcv} Let $G$ be a core-labelled graph. Let $u \in CV$ and $w \in N(CV)$, such that $u \nsim w$ in $G$. Let $G^\prime \coloneqq G + e$ be obtained from $G$ by adding an edge $e$ such that the core-labelling is preserved, where $e \coloneqq \{u,w\}$. Then $\eta(G^\prime) \geq \eta(G).$ Moreover, there is a vector $\mathbf{x}_{ _{CV}}$ which is in $\text{\rm Ker}\left(\mathbf{Q}^\intercal\right)$ but not in $\text{\rm Ker}\left(\left(\mathbf{Q^\prime}\right)^\intercal\right)$ and a vector $\mathbf{y}_{ _{CV}}$ which is in $\text{\rm Ker}\left(\left(\mathbf{Q^\prime}\right)^\intercal\right)$ but not in $\text{\rm Ker}\left(\mathbf{Q}^\intercal\right)$ . \end{theorem} \begin{proof} For a core labelling of a graph $G,$ with vertices $u \in CV$ and $w \in N(CV)$ labelled 1 and $|CV|+1,$ respectively, such that $u \nsim w$ in $G, $ we write $u = 1$ and $w = |CV| + 1.$ Let the adjacency matrix $\mathbf{A}$ be as in (\ref{EqCoreL}). On adding edge $\{u,w\},$ the adjacency matrix $\mathbf{A}^\prime$ of $G^\prime$ satisfies $\mathbf{A}^\prime = \mathbf{A} + \mathbf{E}$ where \vspace*{-.52mm} $$\mathbf{E} = \left(\begin{array}{c|c|c} 0 & 1 \ 0 \ . . . & 0 \\ \vspace*{-.35cm} &&\\ {\bf 0} & {\bf 0} \ & {\bf 0} \\ \hline 1 \ 0 \ . . . &\ 0 \ . . . & 0 \\ \vspace*{-.35cm} &&\\ {\bf 0} \ & {\bf 0} &{\bf 0} \\ \hline {\bf 0} &{\bf 0} & {\bf 0}\\ \end{array}\right).$$ Since $u$ is a cv, there exists $\mathbf{x}^{(1)}$ in the nullspace of $ \mathbf{A}$ with the first entry $\alpha $ non-zero. If $\eta(G)>1,$ let $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, ..., \mathbf{x}^{\left(\eta(G)\right)}$ be a basis for the nullspace of $G$, such that only $\mathbf{x}^{(1)}$ has the first entry non-zero. Denoting column $i$ of the identity matrix by $\mathbf{e}_i$ and writing $\mathbf{x}^{(1)}= \left(\begin{array}{c} \mathbf{x}_{ _{CV}}\\ \hline {\bf 0} \\ \hline {\bf 0} \end{array}\right),$ conformal \mbox{with (\ref{EqCoreL}),} row $w$ of $\mathbf{A}^\prime \mathbf{x}^{(1)}$ is $\mathbf{e}_w^{\intercal}(\mathbf{Q}^\prime)^{\intercal} \mathbf{x}_{ _{CV}} = \mathbf{e}_w^{\intercal}\mathbf{Q}^{\intercal} \mathbf{x}_{ _{CV}} +\alpha =\alpha \neq { 0}.$ Hence $\mathbf{Q}^\prime \mathbf{x}^{(1)}_{ _{CV}}\neq {\bf 0}.$ By the proof of Lemma \ref{LemNullQ}, $\mathbf{A}^\prime \mathbf{x}^{(1)}\neq {\bf 0}.$ Thus $\mathbf{x}^{(1)}$ is a vector in the nullspace of $\mathbf{A}$ but not in the nullspace of $\mathbf{A}^\prime.$ Moreover, $\left(\mathbf{Q^\prime}\right)^\intercal \mathbf{x}^{(i)}_{ _{CV}} = \mathbf{0}$, for $2 \leq i \leq \eta(G)$. Thus the $\eta(G) - 1$ vectors $\mathbf{x}^{(2)}, ..., \mathbf{x}^{\left(\eta(G)\right)}$ lie in the nullspace of $G^\prime$. Since $CV $ is preserved of adding edge $\{u,w\}$, $u$ is also a core vertex in $G^\prime$. Hence there is another vector $\mathbf{y}^{(1)}$ in the nullspace in $ \mathbf{A}^\prime$ with the first entry non-zero. Therefore $\eta(G^\prime) \geq \eta(G)$.\\ A similar argument as above yields $\eta(G^\prime) \leq \eta(G)$, so that the graphs $G$ and $G^\prime$ have the same nullity. 
Moreover, $\mathbf{x}^{(1)}$ is a vector in the nullspace of $\mathbf{A}$ but not in the nullspace of $\mathbf{A}^\prime,$ whereas $\mathbf{y}^{(1)}$ is a vector in the nullspace of $\mathbf{A}^\prime$ but not in the nullspace of $\mathbf{A}.$ \end{proof} As a consequence of Theorem \ref{TheoEcvNcv}, the addition of an edge from a vertex in $CV$ to a vertex in $N(CV)$ which preserves the core-labelling does not change the nullity but may change the nullspace. The addition of an edge between two vertices in $CV$ is not possible, as the core-labelling will not remain well defined. Furthermore, the addition of an edge between a $CV$ vertex and a $CFV_R$ vertex is not permissible either, as the core-labelling changes. Therefore, to preserve the core--labelling, only the following edge additions are left to be considered: \begin{enumerate}[(i)] \item $N(CV)$ -- $N(CV)$ edges, \item $N(CV)$ -- $CFV_R$ edges, \item $CFV_R$ -- $CFV_R$ edges. \end{enumerate} \vspace*{-.5cm} Before presenting results on the perturbations that satisfy constraints relating to the nullspace of ${\bf A},$ we give examples to show the possible effects on the vertex types and on the nullspace on adding an edge to graphs with independent core vertices. \begin{figure} \caption{Adding edge $e=\{5,8\}$.} \label{FigEdge85} \end{figure} Figure \ref{FigEdge85} shows the tree $T$ and the unicyclic graph $T+e_{\{5,8\}},$ with the core vertices $\{1,8\}$ replaced by $\{1,8,9\}.$ Figure \ref{FigEdge314} shows the half cores $H$ of nullity 2 and $H+e_{\{3,14\}},$ with the same core vertices but with different nullspace vectors of their adjacency matrices. The nullspace of ${\bf A}(H)$ is generated by $$\{\{0, -1, 0, 0, 0, 1, 0, -1, 0, 1, 0, -1, 0, 1\}, \{0, -1, 0, -1, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0\}\}$$ and on adding the edge $\{3,14\}$ the nullspace generators of ${\bf A}(H+e_{\{3,14\}})$ become $$\{\{0,-1,0,1,0,1,0,-2,0,2,0,-2,0,2\},\{0,-1,0,-1,0,1,2,0,0,0, 0,0,0,0\}\}.$$ \vspace*{-.32cm} \begin{figure} \caption{Adding edge $e=\{3,14\}$.} \label{FigEdge314} \end{figure} \pagebreak We give another example where the nullity changes from 0 to 2 on adding an edge. The perturbation to the tree $T'$ shown in Figure \ref{FigEdge12} is the addition of the edge $\{1,2\}$. The nullspace of ${\bf A}(T')$ is trivial, and on adding the edge $\{1,2\}$ the nullspace generators of ${\bf A}(T'+e_{\{1,2\}})$ become $\{\{0, 1, 0, -1, 0, 1\}, \{-1, 0, 1, 0, 0, 0\}\}.$ \begin{figure} \caption{Adding edge $e=\{1,2\}$.} \label{FigEdge12} \end{figure} \begin{proposition} \label{PropG+e} Let $G$ be a graph with independent core vertices. Let $u$ and $w$ be core-forbidden vertices, such that $u \nsim w$ in $G$. Let $G^\prime \coloneqq G + e$ be obtained from $G$ by adding the edge $e = \{u, w\}$. If the nullity is preserved, then $G + e$ has the same nullspace and core-labelling as $G$. \end{proposition} \begin{proof} Let $G$ be labelled so that $\mathbf{A}$ is a block matrix as in (\ref{EqCoreL}). We show that a kernel vector $\mathbf{x}$ of $\mathbf{A}(G)$ is a kernel vector for $\mathbf{A}(G^\prime)$. Let $\mathbf{x}_{ _{CV}}$ be the restriction $\left(\alpha_1,\ldots, \alpha_{|CV|}\right)^\intercal$ of $\mathbf{x}$ to the core vertices of $G$. Then $\mathbf{x} = (\mathbf{x}_{ _{CV}},{\bf 0}).$ By definition of a kernel vector, $\mathbf{A}(G)\mathbf{x} = \mathbf{0}$. Therefore $\mathbf{Q}^\intercal \mathbf{x}_{ _{CV}} = \mathbf{0}.$ Now, on adding edge $e,$ the change in $\mathbf{A}(G)$ is contained in the blocks associated with the core-forbidden vertices.
Therefore the only potentially non--zero block of $\mathbf{A}(G^\prime)\mathbf{x}$ is $\mathbf{Q}^\intercal \mathbf{x}_{ _{CV}} = \mathbf{0},$ so that $\mathbf{A}(G^\prime)\mathbf{x} = \mathbf{0}.$ Therefore the kernel vectors of $\mathbf{A}(G)$ are also kernel vectors of $\mathbf{A}\left(G^\prime\right)$. Thus $CV(G) \subseteq CV\left(G^\prime\right)$ and $\eta(G) \leq \eta\left(G^\prime\right)$. If a \textit{cfv} in $G$ becomes a \textit{cv} in $G^\prime$, then some kernel vector of $\mathbf{A}\left(G^\prime\right)$ is non--zero on that vertex and hence does not lie in the kernel of $\mathbf{A}(G)$, so the nullity increases. But the nullity is preserved. Hence $CV$ is preserved, and so is the nullspace. In turn, it follows that $N(CV)$ and the core-labelling of $G$ are unaltered by the perturbation. \end{proof} The necessary condition established in Proposition \ref{PropG+e} can be relaxed to a necessary and sufficient condition involving $CV$ only. \begin{theorem} \label{TheoAddE} Let $G$ be a graph with independent core vertices. Let $u$ and $w$ be core-forbidden vertices, such that $u \nsim w$ in $G$. Let $G + e$ be an edge addition to $G$, where $e = \{u, w\}$. Then, nullity is preserved if and only if $CV(G) = CV(G + e)$. \end{theorem} \begin{proof} Let the nullity be preserved. By Proposition \ref{PropG+e}, it follows that the core-labelling is preserved and hence $CV(G) = CV(G + e)$.\\ Conversely, let $CV(G) = CV(G + e)$. Since the added edge joins core-forbidden vertices of $G$, $\mathbf{Q}(G) = \mathbf{Q}(G + e)$. By Theorem \ref{TheoNullCVrnkQ}, \begin{align*} \eta(G) &= |CV(G)| - \text{rank}(\mathbf{Q}(G)) \\ &= |CV(G + e)| - \text{rank}(\mathbf{Q}(G + e)) \\ &= \eta(G + e) \end{align*} \vspace*{-.5cm} and hence the nullity is preserved. \end{proof} The study of perturbations to networks finds many applications, in information technology and social networks in particular \cite{WANG20122084,RowlinsonPerturb1990,SciBriffDups2019}. The results presented here are of interest in combinatorial optimization and the study of perturbations to singular networks with the goal of inserting or removing edges efficiently while maintaining the same core vertex set. In machine learning, when training a neural network, switches linked to edge detectors in the network stochastically disable specific detectors in accordance with a preconfigured probability; this dropout technique is used to reduce over--fitting on the training data \cite{GoogleDropOut2019}. The behaviour of graph invariants, when changes are applied to a graph under constraints associated with the nullspace of the adjacency matrix, leads to optimal architectures with a specified nullity that retain the independence of the core vertex set or the core--labelling. Many algorithms in predictive modelling depend on the processing of network graphs with underlying spanning trees. The combinatorial properties of trees that we discussed shed light on their inherent structure and help to devise efficient algorithms. In the search for optimal network graphs with a constraint related to the nullspace of the adjacency matrix, one may start with a slim graph and add an admissible edge joining non--adjacent vertices. The goal can be the preservation of one or more of the three properties associated with the nullspace of the adjacency matrix. These are the nullity, the core--vertex set and the entries of the normalized basis vectors of the nullspace of the adjacency matrix. Depending on the property to be preserved, edges can be added selectively to obtain optimal networks with a maximal number of edges for which the chosen property remains unchanged. We have shown that adding edges to a graph may alter the core vertex set, the nullity or the nullspace.
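As a concrete illustration of Theorem \ref{TheoAddE}, consider the path $P_5$, whose core vertices $\{1,3,5\}$ are independent and whose core--forbidden vertices $2$ and $4$ are non--adjacent, so that the edge $\{2,4\}$ is an admissible addition. The following sketch (added for illustration only; the graph and the helper functions are our own choices, not taken from the original text) checks that the nullity, the core vertex set and, in this case, the kernel itself are unchanged.
\begin{verbatim}
import numpy as np

def adjacency(n, edges):
    A = np.zeros((n, n))
    for i, j in edges:
        A[i - 1, j - 1] = A[j - 1, i - 1] = 1   # vertices labelled 1..n
    return A

def nullity(A, tol=1e-9):
    return A.shape[0] - np.linalg.matrix_rank(A, tol=tol)

def core_vertices(A, tol=1e-9):
    # Core vertices: vertices on which some kernel vector is non-zero.
    _, s, vt = np.linalg.svd(A)
    kernel = vt[s < tol]                    # orthonormal basis of the kernel
    return sorted(v + 1 for v in range(A.shape[0])
                  if np.any(np.abs(kernel[:, v]) > tol))

path = [(1, 2), (2, 3), (3, 4), (4, 5)]
A  = adjacency(5, path)                     # P5
Ap = adjacency(5, path + [(2, 4)])          # P5 + edge between the cfvs 2 and 4

print(nullity(A), nullity(Ap))              # 1 1 : nullity preserved
print(core_vertices(A), core_vertices(Ap))  # [1, 3, 5] twice : CV preserved

# The kernel itself is unchanged: both matrices annihilate (1, 0, -1, 0, 1).
x = np.array([1, 0, -1, 0, 1])
print(np.allclose(A @ x, 0), np.allclose(Ap @ x, 0))   # True True
\end{verbatim}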
Constraints may be imposed to keep one aspect unchanged. Theorem \ref{TheoEcvNcv} shows that adding an edge between the mixed vertex types $CV$ and $N(CV)$, while the core--labelling is unchanged, preserves the nullity but alters the nullspace. By Theorem \ref{TheoAddE}, adding an edge between core--forbidden vertices is a safe operation: the core vertex set is left intact precisely when the nullity is unaltered. \end{document}
\begin{document} \title{ Continuous optimal ensembles II. Reducing the separability condition to numerical equations} \begin{abstract} A density operator of a bipartite quantum system is called robustly separable if it possesses a neighborhood consisting of separable operators. Given a bipartite density matrix, the property of being robustly separable is reduced, using the continuous ensemble method, to a finite number of numerical equations. The solution of this system exists for any robustly separable density operator and provides its representation by a continuous mixture of pure product states. \end{abstract} \section{Introduction}\label{sintro} Entanglement is treated as a crucial resource for quantum computation. It plays a central r\^ole in quantum communication and quantum computation. A considerable effort is being put into quantifying quantum entanglement. Usually the efforts are focused on quantifying entanglement itself, that is, describing the \emph{impossibility} of preparing a state by means of LOCC (local operations and classical communications). One may, however, go the other way around and try to quantify \emph{separability} rather than entanglement: this turned out to be applicable to building combinatorial entanglement patterns for multipartite quantum systems \cite{myjmo}. Briefly, the content of this paper is the following. I dwell on the case of bipartite quantum systems. A state of such a system is called {\sc separable} if it can be prepared by LOCC, and {\sc robustly separable} if it possesses a neighborhood consisting of separable operators in the space of self-adjoint operators. In terms of density matrices, separability means that $\boldsymbol{\rho}$, the density matrix of the state, can be represented as a mixture of pure product states. I suggest replacing finite sums of projectors by continuous distributions on the set of unit vectors. The case of a single-particle system is considered in section \ref{scontensemb}; as an example, the explicit form of the equations for the parameters of the mixture is obtained. In section \ref{scontbi} the bipartite case is considered. The density operators are represented by distributions on the Cartesian product of the unit spheres in the subsystems' spaces. Given a density operator $\boldsymbol{\rho}$, we consider it as an element of the space $\mathcal{L}p$ of all self-adjoint operators in the product space $\mathcal{H}p=\mathcal{H}\otimes\mathcal{H}'$. Then the robust separability of $\boldsymbol{\rho}$ is equivalent (see also \cite{coei}) to the solvability of the following vector equation in $\mathcal{L}p$: \[ \nabla \mathbf{K} \;=\; \boldsymbol{\rho} \] \noindent for the trace functional $\mathbf{K}$ on $\mathcal{L}p$ whose form is obtained in section \ref{scontbi}. When we fix a product basis in $\mathcal{H}p$, this equation becomes a system of $n^4$ transcendental equations with respect to $n^4$ variables. \section{Smeared spectral decomposition and optimal ensembles}\label{scontensemb} The set of all self-adjoint operators in $\mathcal{H}=\mathbb{C}^n$ has the natural structure of a real space $\mathbb{R}^{n^2}$, in which the set of all density matrices lies in the hyperplane defined as the zero set of the affine functional $\trc{}X-1$. Generalizing the fact that any convex combination of density operators is again a density operator, we represent density operators as probability distributions on the unit sphere in the state space $\mathcal{H}$ of the system. \paragraph{Technical remark.} Pure states form a projective space rather than the unit sphere in $\mathcal{H}$.
On the other hand, one may integrate over any probability space. For technical reasons I prefer to represent ensembles of pure states by measures on unit vectors in $\mathcal{H}$. I use the Umegaki measure on $\cfield{B}_\lth$--- the uniform measure with respect to the action of $U(n)$, normalized so that \begin{equation}\label{einvarmes} \int_{\phii\in\cfield{B}_\lth}\limits\; \,\,d\mathbf{S}_{\lth} \;=\; 1 \end{equation} \noindent Similarly, in the bipartite case the integration will be carried out over the Cartesian product of the unit spheres in the appropriate state spaces. Let us now pass to a more detailed account of this issue, beginning with the case of a single quantum system. Let $\mathcal{H}=\mathbb{C}^n$ be an $n$-dimensional Hermitian space and let $\rho$ be a density matrix in $\mathcal{H}$. We would like to represent the state whose density operator is $\rho$ by an ensemble of pure states. Let this ensemble be continuous with the probability density expressed by a function $\mu(\phii)$ where $\phii$ ranges over all unit vectors in $\mathcal{H}$ (see the technical remark above). The density operator of a continuous ensemble associated with the measure $\mu(\phii)$ on the set $\cfield{B}_\lth$ of unit vectors in $\mathcal{H}$ is calculated as the following (matrix) integral \begin{equation}\label{e01integral} \rho \;=\; \int_{\phii\in\cfield{B}_\lth}\limits\; \mu(\phii)\, \raypr{\phii} \,\,d\mathbf{S}_{\lth} \end{equation} \noindent where $\raypr{\phii}$ is the projector onto the vector $\bra{\phii}$. In practice, the operator integral $\rho$ in \eqref{e01integral} can be calculated via its matrix elements. In any fixed basis $\{\ket{\mathbf{e}_i}\}$ in $\mathcal{H}$, each matrix element $\rho_{ij}=\bracket{\mathbf{e}_i}{\rho}{\mathbf{e}_j}$ is the following numerical integral: \begin{equation}\label{e01basis} \rho_{ij} \;=\; \bracket{\mathbf{e}_i}{\rho}{\mathbf{e}_j} \;=\; \int_{\phii\in\cfield{B}_\lth}\limits\; \mu(\phii)\, \braket{\mathbf{e}_i}{\phii} \braket{\phii}{\mathbf{e}_j} \,\,d\mathbf{S}_{\lth} \end{equation} \subsection{Smeared spectral decomposition}\label{ssmspec} The usual spectral decomposition of a density operator $\rho=\sum{}p_k\raypr{\mathbf{e}_k}$ can be treated as an atomic measure on $\cfield{B}_\lth$ whose density is the appropriate combination of delta functions: \[ \mu_{\mbox{\scriptsize spec}}(\phii) \;=\; \sum{}p_k\,\delta({\phii-\mathbf{e}_k}) \] For further purposes a `smeared' version of the spectral decomposition is needed; we begin with some technical setup. Denote by $p_0$ the smallest eigenvalue of the density matrix $\rho$ (recall that $p_0>0$ as we restrict ourselves to full-range density matrices). Let $K$ be an integer such that $K+1>1/(n{}p_0)$; then the density matrix $\rho$ is represented as a continuous ensemble with positive density: \[ \rho \;=\; \sum{}p_k\raypr{\mathbf{e}_k} \;=\; \int_{\phii\in\cfield{B}_\lth}\limits\; \mu(\phii) \, \raypr{\phii} \,\,d\mathbf{S}_{\lth} \] \noindent with \begin{equation}\label{esmsing} \mu(\phii) \;=\; \frac{((K+1)n)!}{K\cdot(Kn)! \,n!} \,\cdot \sum_k\limits \left(p_k \,- \frac{1}{(K+1)n} \right) \left| \braket{\mathbf{e}_k}{\phii} \right|^{\,2Kn} \end{equation} \noindent Furthermore, the distribution \eqref{esmsing} tends to the spectral decomposition of $\rho$ as $K$ tends to infinity. See appendix \ref{sderivsmsing} for the proof of formula \eqref{esmsing}. \subsection{Optimal entropy ensembles}\label{soptens} Let us begin with the single-particle case. We need to solve the following variational problem.
Given a functional $Q$ on $L^1(\cfield{B}_\lth)$ and a density matrix $\rho$ in $\mathcal{H}$, find the distribution $\mu$ on the set $\cfield{B}_\lth$ of unit vectors in $\mathcal{H}$ such that \begin{equation}\label{e03} \left\lbrace \begin{array}{l} \int_{\phii\in\cfield{B}_\lth}\limits\; \mu(\phii)\,\raypr{\phii}\,d\mathbf{S}_{\lth} \;=\;\rho \\ \qquad \\ Q(\mu)\;\to\; \mbox{extr} \end{array} \right. \end{equation} \noindent We choose the differential entropy of the distribution $\mu$ as the functional $Q$: \begin{equation}\label{e03q} Q(\mu) \;=\; -\int_{\phii\in\cfield{B}_\lth}\limits\; \mu(\phii)\,\ln\mu(\phii)\,\,d\mathbf{S}_{\lth} \end{equation} \noindent then, according to \eqref{e01basis}, the variational problem \eqref{e03} takes the form \[ \left\lbrace \begin{array}{l} \int_{\phii\in\cfield{B}_\lth}\limits\; \mu(\phii) \braket{\mathbf{e}_i}{\phii} \braket{\phii}{\mathbf{e}_j} \,d\mathbf{S}_{\lth}\;=\;\rho_{ij} \\ -\int_{\phii\in\cfield{B}_\lth}\limits\; \mu(\phii)\,\ln\mu(\phii)\,\,d\mathbf{S}_{\lth} \;\to\; \mbox{extr} \end{array}\right. \] \noindent Solving this variational problem by introducing Lagrange multipliers $X_{ij}$ we get \begin{equation}\label{e03a} -(1+\ln\mu(\phii)) \,-\, \sum_{ij} X_{ij} \braket{\mathbf{e}_i}{\phii} \braket{\phii}{\mathbf{e}_j} \;=\; 0 \end{equation} \noindent Combining the Lagrange multipliers into the operator $X=\sum_{ij} X_{ij}\ketbra{\mathbf{e}_j}{\mathbf{e}_i}$ we have \begin{equation}\label{e01a} \mu(\phii) \;=\; e^{-\bracket{\phii}{X}{\phii}-1} \end{equation} \noindent and the problem reduces to finding $X$ (and hence $\mu$) from the condition \begin{equation}\label{e04ini} \int_{\phii\in\cfield{B}_\lth}\limits\; \mu(\phii) \raypr{\phii}\,d\mathbf{S}_{\lth} \;=\;\rho \end{equation} \noindent which, according to \eqref{e01a} and \eqref{e01basis}, and after redefining $X:=-\,\mathbb{I}-X$, can be written as \begin{equation}\label{e04} \bracket{\mathbf{e}_i}{\rho}{\mathbf{e}_j} \;=\; \int_{\phii\in\cfield{B}_\lth}\limits \;e^{\bracket{\phii}{X}{\phii}} \braket{\mathbf{e}_i}{\phii} \braket{\phii}{\mathbf{e}_j}\,d\mathbf{S}_{\lth} \end{equation} \noindent It follows from \eqref{e03a} that the coefficients $X_{ik}$ can be chosen so that $X_{ik}=\bar{X}_{ki}$. That means that the problem of finding the optimal ensemble reduces to that of finding the coefficients of a self-adjoint operator, that is, to finding $n^2$ numbers from $n^2$ equations. \subsection{The explicit form for single particle case}\label{ssingexpl} In the case of a single-particle system the equation \eqref{e04} can be given an explicit form. First note that, given a self-adjoint operator $X=\sum\,x_k\raypr{\mathbf{e}_k}$, for any integrable function $f(x)$ the operator integral \[ \int_{\cfield{B}_\lth}\limits \,f\bigl(\bracket{\phii}{X}{\phii}\bigr) \raypr{\phii} \,d\mathbf{S}_{\lth} \] \noindent is diagonal in the eigenbasis of $X$ (see the appendix in \cite{mygibbs} for a proof). Therefore, in order to calculate the integral \eqref{e04}, we only need its diagonal elements.
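This diagonality claim is easy to check numerically. The following sketch (added here as an illustration; it is not part of the original text, and the choice of $X$, of $f$ and of the sample size are our own assumptions) estimates the operator integral by Monte Carlo sampling of unit vectors from the normalized uniform measure on $\cfield{B}_\lth$, for $f(x)=e^{x}$ and a small diagonal $X$, and checks that the off-diagonal matrix elements in the eigenbasis of $X$ vanish up to sampling error.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 3
x_eig = np.array([0.3, -0.2, 0.7])   # eigenvalues of X; its eigenbasis = standard basis
samples = 200_000

acc = np.zeros((n, n), dtype=complex)
for _ in range(samples):
    # uniform (Haar) unit vector in C^n: normalized complex Gaussian
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    v /= np.linalg.norm(v)
    w = np.exp(np.real(np.vdot(v, x_eig * v)))   # f(<phi|X|phi>) with f = exp
    acc += w * np.outer(v, v.conj())

est = acc / samples            # Monte Carlo estimate of the operator integral
off = est - np.diag(np.diag(est))
print(np.max(np.abs(off)))     # small, O(1/sqrt(samples)): the integral is diagonal
print(np.diag(est).real)       # only the diagonal elements survive
\end{verbatim}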
Calculate first the functional $\mathcal{K}$ \cite{coei}: \begin{equation}\label{edefhk} \mathcal{K}(X) \;=\; \int_{\cfield{B}_\lth}\limits \,e^{\bracket{\phii}{X}{\phii}} \,d\mathbf{S}_{\lth} \end{equation} \noindent for which the following formula holds \begin{equation}\label{ekviadet} \mathcal{K}(X) \;=\; (-1)^n\,(n-1)!\; \frac{W_1(x_1,\ldots,x_n)}{W(x_1,\ldots,x_n)} \end{equation} \noindent where $x_1,\ldots,x_n$ are the eigenvalues of the operator $X$, \[ W(x_1,\ldots,x_n) \;=\; \mbox{det } \left\lvert \begin{array}{cccc} 1 & 1 & \ldots & 1 \\ x_1 & x_2 & \ldots & x_n \\ \ldots & \ldots & \ldots & \ldots \\ x_{1}^{n-1} & x_{2}^{n-1} & \ldots & x_{n}^{n-1} \end{array} \right\rvert \] \noindent is the Vandermonde determinant, and $W_1$ is defined as \begin{equation}\label{e78det} W_1(x_1,\ldots,x_n) \;=\; \mbox{det } \left\lvert \begin{array}{cccc} e^{x_1} & e^{x_2} & \ldots & e^{x_n} \\ 1 & 1 & \ldots & 1 \\ x_1 & x_2 & \ldots & x_n \\ \ldots & \ldots & \ldots & \ldots \\ x_1^{n-2} & x_2^{n-2} & \ldots & x_n^{n-2} \end{array} \right\rvert \end{equation} We can then explicitly write down the expression for the functional $\mathcal{K}$: \begin{equation}\label{e77-90} \mathcal{K} \;=\; (n-1)! \, \sum_{k=1}^{n}\frac{e^{x_k}}{\prod_{\stackrel{i=1}{i\neq k}}^{n}\,(x_k-x_i)} \end{equation} \noindent For the operator \eqref{e04} we have $\bracket{\mathbf{e}_j}{\nabla\,\mathcal{K}}{\mathbf{e}_j}\;=$ \begin{equation}\label{e77-90a} \;=\; (n-1)! \,\left[ \sum_{\stackrel{k=1}{k\neq j}}^{n} \frac{e^{x_k}}{(x_k-x_j)\prod_{\stackrel{i=1}{i\neq k}}^{n}\limits \,(x_k-x_i)} \,+\, \frac{e^{x_j}}{\prod_{\stackrel{i=1}{i\neq j}}^{n}\limits \,(x_j-x_i)} \left( 1- \sum_{\stackrel{k=1}{k\neq j}}^{n}\limits \frac{1}{x_j-x_k} \right) \right] \end{equation} \noindent So, in order to obtain the optimal ensemble for the single particle density matrix $\rho=\sum\,p_j\,\raypr{\mathbf{e}_j}$, we solve the system of $n$ equations $\bracket{\mathbf{e}_j}{\nabla\,\mathcal{K}}{\mathbf{e}_j}=p_j$. \section{The bipartite case}\label{scontbi} Let $\boldsymbol{\rho}$ be a robustly separable density matrix in the product space $\mathcal{H}\otimes\mathcal{H}'$. Then it can be represented (in infinitely many ways) as a continuous ensemble of pure product states. Carrying out the same reasoning as in section \ref{soptens} we conclude (see section \ref{sexistence} for further details) that among those continuous ensembles there exists one having the greatest differential entropy; this will be the ensemble we are interested in. As in section \ref{soptens}, we formulate the variational problem. Let $\boldsymbol{\rho}$ be a density operator in a tensor product space $\mathcal{H}p=\mathcal{H}\otimes\mathcal{H}'$. The task is to find a probability density $\mu(\ppp\phii)$ defined on the Cartesian product $\mathfrak{T}=\cfield{B}_\lth\times\cfield{B}_\lth$ of the unit spheres in $\mathcal{H},\mathcal{H}'$, respectively. \begin{equation}\label{e03bi} \left\lbrace \begin{array}{l} \int_{\ppp\phii\in\mathfrak{T}}\limits\; \mu(\ppp\phii)\,\raypr{\ppp\phii}\ppp\,d\mathbf{S}_{\lth} \;=\;\boldsymbol{\rho} \\ \qquad \\ Q(\mu)\;\to\; \mbox{extr} \end{array} \right.
\end{equation} \noindent with \begin{equation}\label{e03qp} Q(\mu) \;=\; -\int_{\ppp\phii\in\mathfrak{T}}\limits\; \mu(\ppp\phii)\,\ln\mu(\ppp\phii)\,\ppp\,d\mathbf{S}_{\lth} \end{equation} \noindent Proceeding exactly in the same way as in the single-particle case, we get the following representation: \begin{equation}\label{erepbi} \boldsymbol{\rho} \;=\; \int_{\ppp\phii\in\mathfrak{T}}\limits\; e^{\bracket{\ppp\phii}{X}{\ppp\phii}} \,\raypr{\ppp\phii}\ppp\,d\mathbf{S}_{\lth} \end{equation} \noindent for some self-adjoint operator $X$ in $\mathcal{L}p$. \subsection{Bipartite separability problem}\label{sbiseparpro} From a formal point of view, the equation \eqref{erepbi} provides a solution of the bipartite separability problem. In fact, given a product basis $\{\bra{\mathbf{e}_{i}\mathbf{e}_{i'}}\}$ in the product space $\mathcal{H}p$, \eqref{erepbi} is a system of $n^4$ \textbf{numerical} equations with respect to $n^4$ variables---the matrix elements of the quadratic form $X$. \begin{equation}\label{erepbas} \boldsymbol{\rho}_{\ppp{i}\ppp{k}} \;=\; \int_{\ppp\phii\in\mathfrak{T}}\limits\; e^{\bracket{\ppp\phii}{X}{\ppp\phii}} \,\braket{\mathbf{e}_{i}\mathbf{e}_{i'}}{\ppp\phii} \,\braket{\ppp\phii}{\mathbf{e}_{k}\mathbf{e}_{k'}}\ppp\,d\mathbf{S}_{\lth} \end{equation} If a solution exists, \eqref{erepbas} explicitly provides a representation of $\boldsymbol{\rho}$ as a mixture of pure product states. These equations are, however, transcendental. Even in the simple case of a single particle in dimension 2, where the operator $X$ can be shown to commute with $\rho=p_1\raypr{\mathbf{e}_1}+p_2\raypr{\mathbf{e}_2}$, we have two variables $x_1,x_2$ which have to be found from the following system of equations, a special case of \eqref{e77-90a}: \begin{equation}\label{esinglex} \left\lbrace \begin{array}{ccc} \frac{e^{x_1}(x_1-x_2-1)+e^{x_2}}{(x_1-x_2)^2} & = & p_1\, \\ \\ \frac{e^{x_2}(x_2-x_1-1)+e^{x_1}}{(x_1-x_2)^2} & = & p_2\, \end{array} \right. \end{equation} \noindent But the essence of the separability problem is the \emph{existence} of a solution of \eqref{erepbas}. \subsection{The existence}\label{sexistence} In this section I show that the existence of a solution of the equations \eqref{erepbas} for any robustly separable state follows from the concavity of the functional $Q$ in \eqref{e03bi}. Let us begin with a two-dimensional geometrical analogy. Let $S^2=\{(x_1, x_2, x_3)\mid x_1+x_2+x_3=1;\;x_1, x_2, x_3\ge 0\}$ be the 2-dimensional simplex in $\mathbb{R}^3$ and let $\mathcal{K}$ be a functional on the plane $\boldsymbol{P}: x_1+x_2+x_3=1$, symmetric with respect to $x_1, x_2, x_3$. Suppose that $\mathcal{K}$ is \emph{concave}, that is, for any $a, b\in\boldsymbol{P}$ and any point $q$ lying between $a$ and $b$ \[ \mathcal{K}(q) \;\ge\; \mathcal{K}(a), \quad \mathcal{K}(q) \;\ge\; \mathcal{K}(b), \] \noindent and suppose that $\mathcal{K}$ is bounded in a domain containing the simplex $S^2$. Then, along any line $l$ intersecting the interior of $S^2$, the functional $\mathcal{K}$ attains a local maximum in the \emph{interior} of $S^2$. Now return to the case of bipartite density matrices. Any such matrix $\boldsymbol{\rho}$ can be represented (in an infinite number of ways) as a \emph{pseudomixture} of pure product states (see, e.g. \cite{vidaltarrach} for a discussion). Consider the affine space $\boldsymbol{P}_1$ of all normalized signed measures on the Cartesian product $\mathfrak{T}=\cfield{B}_\lth\times\cfield{B}_\lth$ of unit spheres (it plays the r\^ole of the plane $\boldsymbol{P}$ in the example above).
Given a density operator $\boldsymbol{\rho}$, the collection $l_1$ of all pseudomixtures representing $\boldsymbol{\rho}$ is an affine submanifold of $\boldsymbol{P}_1$. The set of all probability measures on $\mathfrak{T}$ is a simplex $S^\infty$ in $\boldsymbol{P}_1$. A density matrix $\boldsymbol{\rho}$ is robustly separable if and only if $l_1$ intersects the interior of $S^\infty$. This is because $\boldsymbol{\rho}$ can be represented as a mixture of products of full-range density matrices in $\mathcal{H},\mathcal{H}'$, each of which can, in turn, be represented by a smeared spectral decomposition \eqref{esmsing}; that means that there exists a probability measure representing $\boldsymbol{\rho}$ which does not vanish anywhere on $\mathfrak{T}$. Therefore the solution of the equations \eqref{erepbas} always exists for any robustly separable $\boldsymbol{\rho}$, as the functional $Q$ introduced in \eqref{e03qp} (playing the r\^ole of $\mathcal{K}$ above) is concave. \section{Concluding remarks}\label{sconclud} Let us first sum up the contents of the paper. Given a density operator $\boldsymbol{\rho}$ in the product of two Hilbert spaces, each of dimension $n$, to solve the separability problem is to tell whether it has \emph{a} representation as a mixture of pure product states. In this paper, \emph{the most smeared} distribution on pure product states which yields $\boldsymbol{\rho}$ is considered, rather than a finite mixture. \noindent The separability problem is reduced to $n^4$ numerical equations which are symbolically written as \[ \nabla\,\mathcal{K}=\boldsymbol{\rho} \] \noindent whose coordinate form in any product basis in $\mathcal{H}p=\mathcal{H}\otimes\mathcal{H}'$ is \eqref{erepbas} \[ \boldsymbol{\rho}_{\ppp{i}\ppp{k}} \;=\; \int_{\ppp\phii\in\mathfrak{T}}\limits\; e^{\bracket{\ppp\phii}{X}{\ppp\phii}} \,\braket{\mathbf{e}_{i}\mathbf{e}_{i'}}{\ppp\phii} \,\braket{\ppp\phii}{\mathbf{e}_{k}\mathbf{e}_{k'}}\ppp\,d\mathbf{S}_{\lth} \] \noindent These equations are transcendental even in the single-particle case \eqref{esinglex}, but their solution always exists for any robustly separable density matrix $\boldsymbol{\rho}$. That is why one can look for asymptotic methods of finding the solution of the separability problem along these lines. This issue will be addressed in the next paper on optimal ensembles. \paragraph{Acknowledgments.} Several crucial issues related to this research were intensively discussed during the meeting Glafka-2004 `Iconoclastic Approaches to Quantum Gravity' (15--18 June, 2004, Athens, Greece) supported by QUALCO Technologies (special thanks to its organizers---Ioannis Raptis and Orestis Tsakalotos). Much helpful advice from Serguei Krasnikov is highly appreciated. The financial support for this research was provided by the research grant No. 04-06-80215a from RFFI (Russian Basic Research Foundation). \appendix \section{Smeared spectral decomposition}\label{sderivsmsing} The formula \eqref{esmsing} is derived as follows. Let $\bra{\mathbf{e}}$ be a unit vector in $\mathcal{H}$; then for any integer $m$ the following formula holds \begin{equation}\label{ei07} \int_{\phii\in\cfield{B}_\lth}\limits\; \left| \braket{\mathbf{e}}{\phii} \right|^{\,2m} \raypr{\phii} \,\,d\mathbf{S}_{\lth} \;=\; \frac{m!\,(n-1)!}{\,(m+n)!} \,(\,m\raypr{\mathbf{e}}+\,\mathbb{I}) \end{equation} \noindent which is a direct consequence of a more general formula obtained in \cite{mygibbs}. Let $m=Kn$ for some integer $K$.
Then it follows directly from \eqref{ei07} that \[ Kn \raypr{\mathbf{e}}+ \,\mathbb{I} \;=\; \int_{\phii\in\cfield{B}_\lth}\limits\; \frac{(Kn+n)!}{(Kn)! \,(n-1)!} \left| \braket{\mathbf{e}}{\phii} \right|^{\,2Kn} \raypr{\phii} \,\,d\mathbf{S}_{\lth} \] \noindent Dividing this equation by $n(K+1)$ we obtain the following continuous representation of the projector $\raypr{\mathbf{e}}$ and the white noise matrix $\Lambda=\,\mathbb{I}/n$: \begin{equation}\label{e77-32} \frac{K}{K+1} \cdot \raypr{\mathbf{e}}+ \frac{1}{K+1} \cdot \frac{\,\mathbb{I}}{n} \;=\; \int_{\phii\in\cfield{B}_\lth}\limits\; \frac{((K+1)n)!}{(K+1) \cdot(Kn)! \,n!} \left| \braket{\mathbf{e}}{\phii} \right|^{\,2Kn} \raypr{\phii} \,\,d\mathbf{S}_{\lth} \end{equation} \noindent The formula \eqref{e77-32} is valid for any eigenvector $\bra{\mathbf{e}_k}$ of $\rho$. We form a convex combination of the lhs of \eqref{e77-32} with (yet unknown) coefficients $q_k$ and require it to be $\rho$: \[ \sum \,q_k \left( \frac{K}{K+1} \cdot \raypr{\mathbf{e}_k}+ \frac{1}{K+1} \cdot \frac{\,\mathbb{I}}{n} \right) \;=\; \sum{}p_k\raypr{\mathbf{e}_k} \] \noindent When $\,\mathbb{I}$ is replaced by $\sum\raypr{\mathbf{e}_k}$, the second summand in the lhs of the above formula takes the form $\frac{1}{K+1} \cdot \frac{1}{n} \cdot \sum\, \raypr{\mathbf{e}_k} $, from which we obtain \[ q_k\cdot \frac{K}{K+1} \,+ \frac{1}{K+1} \cdot \frac{1}{n} \;=\;p_k \] \noindent and thus we get the expression for $q_k$ \begin{equation}\label{epkqk} q_k \;= \; \frac{K+1}{K} \left(p_k \,- \frac{1}{(K+1)n} \right) \end{equation} \noindent So we can form the convex combination of the expressions \eqref{e77-32} with the coefficients $q_k$, which yields $\rho$: \[ \rho \;=\; \sum_k\limits \frac{K+1}{K} \left(p_k \,- \frac{1}{(K+1)n} \right) \cdot \int_{\phii\in\cfield{B}_\lth}\limits\; \frac{((K+1)n)!}{(K+1) \cdot(Kn)! \,n!} \left| \braket{\mathbf{e}_k}{\phii} \right|^{\,2Kn} \raypr{\phii} \,\,d\mathbf{S}_{\lth} \] \noindent Let $p_0$ be the smallest eigenvalue of $\rho$; then, if $K+1>1/(np_0)$, all the coefficients $q_k$ in \eqref{epkqk} are positive. \section{Proof of the formula \eqref{e77-90}}\label{s77-90} Given a self-adjoint operator $X$ in $\mathcal{H}$, consider its eigenbasis $\{\mathbf{e}_k\}$ and the integral \[ F^{n}(x_1,\ldots,x_n) \;=\; \int_{\cfield{B}_\lth}\limits \,f\bigl(\bracket{\phii}{X}{\phii}\bigr) \,d\mathbf{S}_{\lth} \] \noindent In order to calculate it, denote $\braket{\phii}{\mathbf{e}_k}=e^{i\theta_k}\,r_k$. Then, taking into account that the integrand does not depend on the phases, the above integral (with respect to the normalized measure) reads \begin{equation}\label{efnj} F^{n}(x_1,\ldots,x_n) \;=\; \frac{(n-1)!}{2\pi^{n}} \cdot (2\pi)^{n} \int_{\sum_{k=1}^{n-1}\limits r_{k}^2\le 1}\limits f\left(x_n+ \sum_{k=1}^{n-1}\limits{} (x_k-x_n)\,r_{k}^2 \right) \, \left( \prod_{k=1}^{n-1} r_k\,dr_k \right) \end{equation} \noindent then, introducing the variables $t_k=r_k^2$, we reduce the above integral to \begin{equation}\label{efn} F^{n}(x_1,\ldots,x_n) \;=\; (n-1)\,!\, \int_{\sum_{k=1}^{n-1}\limits t_{k}\le 1}\limits f\left(x_n+ \sum_{k=1}^{n-1}\limits{} (x_k-x_n)\,t_{k} \right) \left( \prod_{k=1}^{n-1} dt_k \right) \end{equation} \noindent For $f(z)=e^z$ the above equation reads $\frac{F^{n}(x_1,\ldots,x_n)}{(n-1)\,!} =\,e^{x_n}\!\! \int_{\sum_{k=1}^{n-1}\limits t_{k}\le 1}\limits \prod_{k=1}^{n-1}\limits{} e^{(x_k-x_n)\,t_{k}} dt_k \;=\; $ \[ \;=\; \,e^{x_n}\!\!
\int_{\sum_{k=1}^{n-2}\limits t_{k}\le 1}\limits \prod_{k=1}^{n-2}\limits{} e^{(x_k-x_n)\,t_{k}} dt_k \int_0^{1-\sum_{k=1}^{n-2}\limits t_{k}}\limits \,e^{(x_{n-1}-x_n)\,t_{n-1}} \,dt_{n-1} \;=\; \] \[ \;=\; \frac{e^{x_n}}{x_{n-1}-x_n}\!\! \int_{\sum_{k=1}^{n-2}\limits t_{k}\le 1}\limits \prod_{k=1}^{n-2}\limits{} e^{(x_k-x_n)\,t_{k}} dt_k \, \left(e^{(x_{n-1}-x_n)\,(1-\sum_{k=1}^{n-2}t_{k})} -1 \right) \;=\; \] \[ \;=\; \frac{e^{x_{n-1}}}{x_{n-1}-x_n}\!\! \int_{\sum_{k=1}^{n-2}\limits t_{k}\le 1}\limits \prod_{k=1}^{n-2}\limits{} e^{(x_k-x_{n-1})\,t_{k}} dt_k \;-\; \frac{e^{x_n}}{x_{n-1}-x_n}\!\! \int_{\sum_{k=1}^{n-2}\limits t_{k}\le 1}\limits \prod_{k=1}^{n-2}\limits{} e^{(x_k-x_n)\,t_{k}} dt_k \] \noindent and we get the following recurrence relation \begin{equation}\label{erecf} F^{n}(x_1,\ldots,x_n) \;=\; \frac{n-1}{x_{n-1}-x_n} \, \left( \vphantom{\int_0^1} F^{n-1}(x_1,\ldots,x_{n-1}) - F^{n-1}(x_1,\ldots,x_{n-2},x_{n}) \right) \end{equation} \noindent For $n=2$ this gives an explicit expression of the same form as \eqref{esinglex}; for higher $n$ one can directly verify that the expression \eqref{e77-90} satisfies the recurrence relation \eqref{erecf}. \end{document}
\begin{document} \title{Exponential Communication Complexity Advantage from Quantum Superposition of the Direction of Communication} \author{Philippe Allard Gu\'{e}rin} \affiliation{Faculty of Physics, University of Vienna, Boltzmanngasse 5, 1090 Vienna, Austria} \affiliation{Institute for Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria} \author{Adrien Feix} \affiliation{Faculty of Physics, University of Vienna, Boltzmanngasse 5, 1090 Vienna, Austria} \affiliation{Institute for Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria} \author{Mateus Ara\'{u}jo} \affiliation{Faculty of Physics, University of Vienna, Boltzmanngasse 5, 1090 Vienna, Austria} \affiliation{Institute for Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria} \author{\v{C}aslav Brukner} \affiliation{Faculty of Physics, University of Vienna, Boltzmanngasse 5, 1090 Vienna, Austria} \affiliation{Institute for Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria} \date{\today} \begin{abstract} In communication complexity, a number of distant parties have the task of calculating a distributed function of their inputs, while minimizing the amount of communication between them. It is known that with quantum resources, such as entanglement and quantum channels, one can obtain significant reductions in the communication complexity of some tasks. In this work, we study the role of the quantum superposition of the direction of communication as a resource for communication complexity. We present a tripartite communication task for which such a superposition allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when we allow for protocols with bounded error probability. \end{abstract} \maketitle Quantum resources make it possible to solve certain communication and computation problems more efficiently than what is classically possible. In communication complexity problems, a number of parties have to calculate a distributed function of their inputs while reducing the amount of communication between them~\cite{Yao1979, Kushilevitz1996}. The minimal amount of communication is called the \textit{complexity of the problem}. For some communication complexity tasks, the use of shared entanglement and quantum communication significantly reduces the complexity as compared to protocols exploiting shared classical randomness and classical communication~\cite{Yao1993, Buhrman2010}. Important early examples for which quantum communication yields an exponential reduction in communication complexity over classical communication are the distributed Deutsch-Jozsa problem~\cite{Buhrman1998} and Raz's problem~\cite{Raz1999}. Quantum computation and communication are typically assumed to happen on a definite causal structure, where the order of the operations carried on a quantum system is fixed in advance. However, the interplay between general relativity and quantum mechanics might force us to consider more general situations in which the metric, and hence the causal structure, is indefinite. Recently, a quantum framework has been developed with no assumption of a global causal order~\cite{Oreshkov2012, Araujo2015, Oreshkov2015}. 
This framework can also be used to study quantum computation beyond the circuit model, for instance using the ``quantum switch'' as a resource --- a qubit coherently controlling the order of the gates in a quantum circuit~\cite{Chiribella2013}. It has recently been realized experimentally~\cite{Procopio2015}. It was shown that this new resource provides a reduction in complexity to $n$ black-box queries in a problem for which the optimal quantum algorithm with fixed order between the gates requires a number of queries that scales as $n^2$~\cite{Araujo2014_PRL}. The quantum switch is also useful in communication complexity; a task has been found for which the quantum switch yields an increase in the success probability, yet no advantage in the asymptotic scaling of the communication complexity was found~\cite{Feix2015}. Most generally, no information processing task is known for which the quantum switch (or any other causally indefinite resource) would provide an exponential advantage over causal quantum (or classical) algorithms. Here we find a tripartite communication complexity task for which there is an exponential separation in communication complexity between the protocol using the quantum switch and any causally ordered quantum communication scheme. The task requires no promise on inputs and is inspired by the problem of deciding whether a pair of unitary gates commute or anticommute, which can be solved by the quantum switch with only one query of each unitary~\cite{Chiribella2012}. If the parties are causally ordered, the number of qubits that needs to be communicated to accomplish the task scales linearly with the number of input bits, whereas the protocol based on the quantum switch only requires logarithmically many communicated qubits. This shows that causally indefinite quantum resources can provide an exponential advantage over causally ordered quantum resources (i.e., entanglement and one-way quantum channels). The tripartite causally ordered communication scenario we consider in this paper is illustrated in Fig.~\ref{fig:causally_ordered}. Alice and Bob are respectively given inputs $x \in X$ and $y \in Y$, taken from finite sets $X$, $Y$. There is a third party, Charlie, whose goal is to calculate a binary function $f(x,y)$ of Alice's and Bob's inputs, while minimizing the amount of communication between all three parties. We shall first assume that communication is one-way only: from Alice to Bob and from Bob to Charlie. Furthermore, we grant the parties access to unrestricted local computational power and unrestricted shared entanglement. We will also consider bounded error communication, in which the protocol must succeed on all inputs with an error probability smaller than $\epsilon$. In quantum communication, the parties communicate with each other by sending quantum systems. Conditionally on their inputs, the parties may apply general quantum operations to the received system, and then send this system out. We require that the parties' local laboratories receive a system \textit{only once} from the outside environment. We impose this requirement to exclude sequential communication, in which the parties communicate back and forth by sending quantum systems to each other at different time steps. Alice's laboratory has an input and output quantum state, consisting of $N_{A_I}$ and $N_{A_O}$ qubits, respectively; similar notation is used for Bob's and Charlie's systems. 
We seek to succeed at the communication task on all inputs with error probability lower than $\epsilon$, while minimizing the number of communicated qubits $N := N_{A_O} + N_{B_O}$. The optimal causally ordered strategy is for Bob to calculate $f(x,y)$ and then communicate the result to Charlie using one bit of communication; in this case $N_{A_O}$ is a good lower bound for $N$. \begin{figure} \caption{Causally ordered quantum communication complexity scenario. Conditionally on their inputs $x$ and $y$, Alice sends a state $\rho_x$ to Bob, who then applies a CP map $\mathcal{B}$ to the received system and sends the resulting system on to Charlie.} \label{fig:causally_ordered} \end{figure} The communication complexity of any causally ordered tripartite communication complexity task can be bounded by considering the bipartite task obtained by identifying Bob and Charlie as a single party. Bearing this in mind, we prove a tight lower bound on the quantum communication complexity of an important family of one-way bipartite deterministic (error probability $\epsilon = 0$) communication tasks, which in turn implies a lower bound on the communication complexity of causally ordered tripartite tasks. This result appears in Theorem 5 of Ref.~\cite{Klauck2000}, but we present a different proof here. \begin{lemma} \label{lemma:rows} For deterministic one-way evaluation of any binary distributed function $f: X \times Y \to \{0,1\}$ such that $\forall x_1, x_2 \in X$, with $x_1 \neq x_2$, $\exists y \in Y$ for which $f(x_1, y) \neq f(x_2, y)$, the minimum Hilbert space dimension of the system sent between two parties sharing an arbitrary amount of entanglement is $ d = \bigl\lceil \sqrt{|X|} \, \bigr\rceil$. Equivalently, the minimum number of communicated qubits is $\lceil \log_2 d \rceil$. \end{lemma} \begin{proof} We recall a well-known result of quantum information~\cite{Hausladen1996}, establishing that if Alice and Bob share unlimited entanglement, the largest number of orthogonal (perfectly distinguishable) states that Alice can transmit to Bob by sending a $d$-dimensional system is $d^2$. Therefore, they can deterministically compute $f$ if Alice sends a system of Hilbert space dimension $\bigl\lceil \sqrt{|X|} \, \bigr\rceil$. Suppose by way of contradiction that the Hilbert space dimension of the communicated system is only $( \bigl\lceil \sqrt{|X|} \, \bigr\rceil - 1)$. The maximal number of orthogonal states that can be transmitted by Alice to Bob is $(\bigl\lceil \sqrt{|X|} \, \bigr\rceil - 1)^2 < |X|$. Therefore, there exist inputs $x_1, x_2 \in X$ such that the corresponding states $\rho_1$, $\rho_2$ transmitted to Bob are not orthogonal, and thus not perfectly distinguishable~\cite{NielsenChuang}. By our assumption about the function $f$, there exists an input $y \in Y$ such that $f(x_1, y) \neq f(x_2, y)$. Therefore, if Bob receives the input $y$, he will need to distinguish between $\rho_1$ and $\rho_2$ in order to output the function correctly, but this cannot be done with zero error probability. \qed \end{proof} The previous lemma establishes that for a very large class of deterministic communication complexity tasks, it is necessary for Alice to communicate all of her input to Bob. In these cases, the only advantage achieved by causal one-way quantum communication is a reduction by a constant factor of two due to dense coding~\cite{Bennett1992}. An important example of this form is the Inner Product game~\cite{Cleve1998, Nayak2002}.
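As a worked instance (added here for illustration; it is not part of the original text): in the Inner Product game one has $X = Y = \{0,1\}^n$ and $f(x,y) = x \cdot y \bmod 2$. For any $x_1 \neq x_2$, taking $y$ to be a standard basis vector supported on a coordinate where $x_1$ and $x_2$ differ gives $f(x_1, y) \neq f(x_2, y)$, so the hypothesis of Lemma~\ref{lemma:rows} is met. The lemma then gives a minimum Hilbert space dimension of $\bigl\lceil \sqrt{2^n} \, \bigr\rceil = 2^{\lceil n/2 \rceil}$, that is, $\lceil n/2 \rceil$ communicated qubits, which is exactly the factor-of-two saving over sending all $n$ input bits that dense coding permits.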
Note that Lemma~\ref{lemma:rows} does not apply to relational tasks such as the hidden matching problem \cite{Bar-Yossef2004}, for which there is an exponential separation between quantum and classical communication complexity. \begin{figure} \caption{Communication complexity setup using the quantum switch. A qubit in the state $\frac{1}{\sqrt{2}} (\ket{0}_c + \ket{1}_c)$ coherently controls the order in which Alice and Bob act on the target system.} \label{fig:switch} \end{figure} We now seek to establish a communication complexity task for which indefinite causal order can be used as a resource. In the following we assume that the parties have local laboratories, and that they receive a quantum system from the environment only once. They then perform a general quantum operation on their system, and send it out. An example of a noncausally ordered process is the quantum switch~\cite{Chiribella2013}, whose use in the context of communication complexity is shown in Fig.~\ref{fig:switch}. Charlie is in the causal future of both Alice and Bob, and an ancilla qubit coherently controls the causal ordering of Alice and Bob; both the target state and the control qubit are prepared externally. Assume that Alice and Bob apply unitary gates $U_A$ and $U_B$ to their respective input systems of $N$ qubits. The global unitary describing the evolution of the system from Charlie's point of view is \begin{equation} \label{eq:V_switch} V(U_A, U_B) = \ket{0}\bra{0}_c \otimes (U_B U_A)_t + \ket{1}\bra{1}_c \otimes (U_A U_B)_t, \end{equation} where the index $c$ denotes the control qubit, and the unitaries $U_A$ and $U_B$ act on the target Hilbert space of $N$ qubits. Using the quantum switch, one can determine whether two unitaries $U_A$, $U_B$ commute or anticommute with a single query of each unitary, while at least one unitary must be queried twice in the causally ordered case~\cite{Chiribella2012}. Explicitly, consider the quantum switch with the control qubit initially in state $\ket{+}_c = \frac{1}{\sqrt{2}} (\ket{0}_c + \ket{1}_c)$ and with initial target state $\ket{\psi}_t$. If $A$ and $B$ apply local unitaries $U_A$ and $U_B$, the resulting state after applying $V(U_A, U_B)$ is \begin{equation} \frac{1}{\sqrt{2}} \left( \ket{0}_c \otimes U_B U_A \ket{\psi}_t + \ket{1}_c \otimes U_A U_B \ket{\psi}_t \right). \end{equation} If Charlie subsequently applies a Hadamard gate to the control qubit, the resulting state is \begin{equation} \frac{1}{2} \left( \ket{0}_c \otimes \{ U_A, U_B \} \ket{\psi}_t - \ket{1}_c \otimes [U_A, U_B] \ket{\psi}_t \right). \label{eq:comm_anti_state} \end{equation} Suppose that Alice and Bob randomly choose unitaries from a set $\mathcal{U}$ and that there exists a state $\ket{\psi}_t$ such that $\forall U,V \in \mathcal{U}$, either $[U, V]\ket{\psi}_t = 0$ or $\{U, V\} \ket{\psi}_t = 0$. Then Eq.~\eqref{eq:comm_anti_state} shows that the quantum switch with initial target state $\ket{\psi}_t$ and control qubit $\ket{+}_c$ as inputs allows Charlie to discriminate between these two possibilities with certainty by measuring the control qubit in the computational basis. We now define a communication complexity task, the Exchange Evaluation game $EE_n$, for any integer $n$. In this game, Alice and Bob are respectively given inputs $(\mathbf{x},f), (\mathbf{y},g) \in \mathbb{Z}_2^n \times F_n$, where $F_n$ is the set of functions over $\mathbb{Z}_2^n$ that evaluate to zero on the zero vector \begin{equation} F_n = \left\{ f: \mathbb{Z}_2^n \to \mathbb{Z}_2 \, | \, f(\mathbf{0}) = 0 \right \}.
\end{equation} Charlie must output \begin{equation} EE_n(\mathbf{x}, f, \mathbf{y}, g) = f(\mathbf{y}) \oplus g(\mathbf{x}), \end{equation} where the symbol $\oplus$ denotes addition modulo 2. This game can be interpreted as the sum modulo 2 of two parallel random access codes~\cite{Ambainis1999}. We first construct an encoding of the inputs $(\mathbf{x}, f), (\mathbf{y}, g)$ in terms of local $n$-qubit unitaries that all commute or anticommute; we then use the previous observation to conclude that the switch succeeds deterministically at this task with $n$ qubits of communication. We start with some definitions. The group of Pauli $X$ operators on $n$ qubits is defined as \begin{equation} X(\mathbf{x}) = X_1^{x_1} \otimes X_2^{x_2} \otimes \dots \otimes X_n^{x_n}, \end{equation} where $x_i$ is the $i$th component of the binary vector $\mathbf{x} \in \mathbb{Z}_2^n$. Here, $X_i$ is the single qubit Pauli $X$-operator acting on the $i$th qubit, and $X_i^0 = \mathbb{I}_i$ is the single qubit identity matrix. We associate to every $f \in F_n$ a diagonal matrix \begin{equation} D(f) = \sum_{\mathbf{z} \in \mathbb{Z}_2^n} (-1)^{f(\mathbf{z})} \ket{\mathbf{z}}\bra{\mathbf{z}}, \end{equation} where $\ket{\mathbf{z}}$ is the state such that $Z_i \ket{\mathbf{z}} = (-1)^{z_i} \ket{\mathbf{z}}$, with $Z_i$ the single qubit Pauli $Z$ operator acting on qubit $i$. The set $\{ D(f) \}_{f \in F_n}$ consists of all diagonal matrices with entries $\pm 1$ in the computational basis, such that the first entry is $+1$. We define the set of unitaries \begin{equation} \mathcal{U}_n = \{ X(\mathbf{x})D(f) | (\mathbf{x} , f) \in \mathbb{Z}_2^n \times F_n \}, \label{eq:U_n} \end{equation} which has dimension \begin{equation} |\mathcal{U}_n| = 2^{2^n +n -1}. \label{eq:dim_U_n} \end{equation} This superexponential scaling of $| \mathcal{U}_n |$ is essential to establish a communication advantage with the quantum switch. Also note that \begin{equation} X(\mathbf{x})D(f)X(\mathbf{y})D(g) \ket{\mathbf{0}} = (-1)^{f(\mathbf{y})} \ket{ \mathbf{x \oplus y}}. \end{equation} Therefore, when acting on the $n$-qubit input state $\ket{\mathbf{0}}$, the elements of $\mathcal{U}_n$ all commute or anticommute with each other, and \begin{align*} [X(\mathbf{x})D(f),X(\mathbf{y})D(g)] \ket{\mathbf{0}} &= 0 \,, \mathrm{if} \, \,(-1)^{f(\mathbf{y})} = (-1)^{ g(\mathbf{x})} \nonumber \\ \{X(\mathbf{x})D(f),X(\mathbf{y})D(g)\} \ket{\mathbf{0}} &= 0 \,, \mathrm{if} \, \,(-1)^{f(\mathbf{y})} = (-1)^{ g(\mathbf{x})+1} . \end{align*} Therefore, the game is equivalent to determining whether the corresponding unitaries $X(\mathbf{x})D(f)$ and $X(\mathbf{y})D(g)$ commute or anticommute \textit{when applied to the state} $\ket{\mathbf{0}}$. By the discussion following Eq.~\eqref{eq:comm_anti_state}, this problem can be solved deterministically by Charlie using the quantum switch with $O(n)$ qubits of communication from Alice to Bob, with a strategy consisting of applying the unitary corresponding to their input according to Eq.~\ref{eq:U_n}. We now show that the Exchange Evaluation game satisfies the conditions of Lemma~\ref{lemma:rows}; this will allow us to conclude that for deterministic ($\epsilon = 0$) evaluation in the one-way causally ordered case, $EE_n$ requires an amount of communicated qubits that grows exponentially with $n$. 
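Before turning to that proof, the encoding just described can be verified numerically. The sketch below (added for illustration; it is not part of the original argument, and the helper functions are our own) builds $X(\mathbf{x})D(f)$ and $X(\mathbf{y})D(g)$ for random inputs with $n = 3$, forms the two branches of Eq.~\eqref{eq:comm_anti_state} for the target state $\ket{\mathbf{0}}$, and checks that exactly one branch survives and that the control-qubit outcome equals $f(\mathbf{y}) \oplus g(\mathbf{x})$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 3
dim = 2 ** n
I2 = np.eye(2)
PAULI_X = np.array([[0.0, 1.0], [1.0, 0.0]])

def X_op(x):
    # X(x) = X^{x_1} (x) ... (x) X^{x_n}; first bit = most significant factor
    op = np.array([[1.0]])
    for bit in x:
        op = np.kron(op, PAULI_X if bit else I2)
    return op

def D_op(f_vals):
    # D(f) = sum_z (-1)^{f(z)} |z><z|, basis states ordered as binary integers
    return np.diag([(-1.0) ** int(v) for v in f_vals])

def random_input():
    x = rng.integers(0, 2, n)
    f = rng.integers(0, 2, dim)
    f[0] = 0                    # f(0) = 0, so that X(x)D(f) belongs to U_n
    return x, f

def idx(bits):
    return int("".join(str(int(b)) for b in bits), 2)

psi = np.zeros(dim)
psi[0] = 1.0                    # target state |0...0>

for _ in range(20):
    (x, f), (y, g) = random_input(), random_input()
    UA, UB = X_op(x) @ D_op(f), X_op(y) @ D_op(g)
    anti = 0.5 * (UA @ UB + UB @ UA) @ psi      # |0>_c branch of Eq. (comm_anti_state)
    comm = 0.5 * (UA @ UB - UB @ UA) @ psi      # |1>_c branch
    na, nc = np.linalg.norm(anti), np.linalg.norm(comm)
    assert min(na, nc) < 1e-12 < max(na, nc)    # exactly one branch survives
    outcome = 0 if nc < 1e-12 else 1
    assert outcome == (f[idx(y)] + g[idx(x)]) % 2
print("control qubit outcome equals f(y) XOR g(x) on all sampled inputs")
\end{verbatim}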
\begin{proposition} For every $(\mathbf{x_1}, f_1), (\mathbf{x_2}, f_2) \in \mathbb{Z}_2^n \times F_n$, such that $(\mathbf{x_1},f_1) \neq (\mathbf{x_2}, f_2)$, there exists $(\mathbf{y}, g) \in \mathbb{Z}_2^n \times F_n$ such that $EE_n(\mathbf{x_1}, f_1, \mathbf{y}, g) \neq EE_n(\mathbf{x_2}, f_2, \mathbf{y},g)$. \label{prop:EE_condition} \end{proposition} \begin{proof} First note that $EE_n(\mathbf{x_1}, f_1, \mathbf{y}, g) \neq EE_n(\mathbf{x_2}, f_2, \mathbf{y}, g)$ if and only if \begin{equation} f_1(\mathbf{y}) \oplus f_2(\mathbf{y}) \oplus g(\mathbf{x_1}) \oplus g(\mathbf{x_2})= 1. \label{eq:rows_condition_game} \end{equation} Then, since $(\mathbf{x_1}, f_1) \neq (\mathbf{x_2}, f_2)$, either $\mathbf{x_1} \neq \mathbf{x_2}$ or $f_1 \neq f_2$ holds. We check that the conditions of the lemma are satisfied in both cases. \paragraph{(i) Case where $\mathbf{x_1} \neq \mathbf{x_2}$:\\} Suppose without loss of generality that $\mathbf{x_1} \neq \mathbf{0}$ and define $g$ as the function such that $g(\mathbf{x_1}) = 1$ and $g(\mathbf{z}) = 0, \, \forall \mathbf{z} \neq \mathbf{x_1}$. Also, because $f_1, f_2 \in F_n$, $f_1(\mathbf{0}) = f_2(\mathbf{0}) = 0$. Therefore, the function $g$ we just defined and $\mathbf{y} = \mathbf{0}$ satisfy Eq.~\eqref{eq:rows_condition_game}. \paragraph{(ii) Case where $f_1 \neq f_2$:\\} Let $\mathbf{y} \in \mathbb{Z}_2^n$ be a vector for which $f_1$ and $f_2$ differ, so that $f_1(\mathbf{y}) + f_2(\mathbf{y}) = 1$. Then this $\mathbf{y}$ and the zero function $g(\mathbf{x}) = 0 \, \forall \mathbf{x}$ satisfies Eq.~\eqref{eq:rows_condition_game}.\qed \end{proof} According to Eq.~\eqref{eq:dim_U_n}, the dimension of the set of inputs to $EE_n$ is $|\mathcal{U}_n| = 2^{2^n +n -1}$. Direct application of Proposition \ref{prop:EE_condition} with Lemma \ref{lemma:rows} establishes that the number of qubits of communication required for deterministic success in the causally ordered case is $\frac{1}{2} \log_2 |\mathcal{U}_n| = \frac{1}{2}(2^n + n -1) = \Omega(2^n)$, using dense coding. In comparison, we have seen that with the quantum switch as a resource, we need only $n$ qubits of communication between Alice and Bob to calculate this function. We thus conclude that for the Exchange Evaluation game, there is an exponential separation in the deterministic communication complexity of $EE_n$. Note that with two-way (classical) communication, it is possible to solve the Exchange Evaluation game with $2n + 2$ bits of communication, simply by having Alice and Bob send their vectors $\mathbf{x}$, $\mathbf{y}$ to the other party, followed by local evaluation of $f(\mathbf{y})$ and $g(\mathbf{x})$ by the parties and communication of the result to Charlie. We emphasize that once we allow two-way communication, the quantum advantage can also disappear in traditional quantum communication complexity (comparing causally ordered quantum communication with classical communication): this is the case for the distributed Deutsch-Jozsa problem~\cite{Buhrman1998}, but not for Raz's problem~\cite{Klartag2011}. For causally ordered communication complexity tasks, the exponential quantum-classical separation does not always continue to hold when allowing for protocols to have a small but nonzero error probability $\epsilon > 0$. Indeed, looking at early examples of tasks, the advantage disappears for the distributed Deutsch-Jozsa problem~\cite{Buhrman1998}, while it remains for Raz's problem~\cite{Raz1999}. 
We prove in the Appendix that the one-way quantum communication complexity with bounded error for $EE_n$ scales as $\Omega(2^n)$, and thus that the exponential separation in communication complexity due to superposition of causal ordering persists when allowing for a nonzero error probability. To show that it is possible to operationally distinguish quantum control of causal order from two-way communication one could introduce counters at the output ports of Alice's and Bob's laboratories, whose role is to count the number of uses of the channels. Such an argument has already been made in Ref.~\cite{Araujo2014_PRL} to justify a computational advantage. We can model a counter as a qutrit initially in the state $\ket{0}$, whose evolution when a system exits the laboratory is $\ket{i} \to \ket{i + 1 \, \mathrm{mod} \, 3}$, where $i \in \{0,1,2\}$. Then, for both one-way communication and the quantum switch, the counters of Alice and Bob will be in the state $\ket{1}$ at the end of the protocol; for genuine two-way communication, at least one of these counters will be in the final state $\ket{2}$. Therefore, the expectation value of the observables $N = \sum_{i=0}^2 \ket{i} \bra{i}$ for the counters allows us to distinguish realizations of the quantum switch, such as~\cite{Procopio2015}, from two-way quantum communication. In conclusion, we have found a communication complexity task, the Exchange Evaluation game, for which a quantum superposition of the direction of communication --- the quantum switch --- results in an exponential saving in communication when compared to causally ordered quantum communication. An interesting feature of this game is that it is not a promise game, as are most known tasks for which quantum resources have an exponential advantage~\cite{Buhrman2010}. In future work, it would be interesting to explore other information processing tasks for which the quantum switch -- or other causally indefinite processes -- may yield interesting advantages. For example, one could look at the uses of the quantum switch for secure distributed computation \cite{Yao1982, Lo1997, Buhrman2012, Liu2013}. Indeed, imagine that Alice and Bob both want to learn about the value of $EE_n$, in such a way that the other party does not learn about their inputs. They could achieve this goal by enlisting a third party and using the quantum switch with the $EE_n$ protocol. We thank Ashley Montanaro for pointing out Ref.~\cite{Klauck2000} to us, used to establish the bounded-error advantage. We acknowledge support from the European Commission project RAQUEL (No. 323970); the Austrian Science Fund (FWF) through the Special Research Programme FoQuS, the Doctoral Programme CoQuS and Individual Project (No. 2462), and the John Templeton Foundation. P. A. G. is also supported by FQRNT (Quebec). \providecommand{\href}[2]{#2}\begingroup\raggedright \endgroup \appendix \section{VC-dimension bounds on the bounded error one-way quantum communication complexity} In this section we show that if the protocol allows for some error probability, bounded by $\epsilon$ for all inputs, the one-way communication complexity of $EE_n$ still scales as $\Omega(2^n)$. As in Fig.~\ref{fig:causally_ordered}, we assume that Alice and Bob share unlimited prior entanglement, and that Alice sends a quantum state to Bob. 
We note that under the promise that Bob's input function is the zero function $g = 0$, the Exchange Evaluation game reduces to a random access code~\cite{Ambainis1999}, for which optimal bounds on the bounded error communication complexity are known~\cite{Nayak1998}. However, it is more straightforward to apply a bound that uses the concept of VC-dimension~\cite{Vapnik1971}. \begin{definition} \textit{VC-dimension.} Let $f: X \times Y \to \{0, 1\}$. A subset $S \subseteq Y$ is shattered if $\forall R \subseteq S, \exists x \in X$ such that \begin{equation} f(x,y) = \begin{cases} 1, & \text{if $\mathbf{y} \in R$}.\\ 0, & \text{if $\mathbf{y} \in S \setminus R$}. \end{cases} \end{equation} The VC-dimension $VC(f)$ is the size of the largest shattered subset of $Y$. \end{definition} Given a function $f(x,y)$, we denote by $Q_\epsilon^{1}(f)$ the one-way (from Alice to Bob) bounded error quantum communication complexity, where $\epsilon$ is the allowed worst-case error, and arbitrary prior shared entanglement is available. We make use of a theorem by Klauck (Theorem 3 of~\cite{Klauck2000}) that relates the bounded error quantum communication complexity of a function to its VC-dimension. \begin{theorem} For all functions $f: X \times Y \to \{0,1\}$, $Q_\epsilon^1 (f) \geq \frac{1}{2} (1 - H(\epsilon)) VC(f)$, where $H(\epsilon)$ is the binary entropy $H(\epsilon) = -\epsilon \log(\epsilon) - (1 - \epsilon) \log (1-\epsilon)$. \label{th:klauck} \end{theorem} Let us bound the VC-dimension of $EE_n: X \times Y \to \{0,1\}$, where $X = Y = \mathbb{Z}_2^n\times F_n$, by showing that $S = \{(\mathbf{y},g) | g = 0, \mathbf{y} \neq \mathbf{0} \} \subset Y$ is shattered. This is clear, since for any $R \subseteq S$, there exists the indicator function \begin{equation} f_R(\mathbf{y}) = \begin{cases} 1, & \text{if $(\mathbf{y},0) \in R$}.\\ 0, & \text{otherwise}, \end{cases} \end{equation} and, since $\mathbf{y} \neq \mathbf{0}$ on $S$, we have $f_R(\mathbf{0}) = 0$, so $f_R \in F_n$; the input $(\mathbf{0}, f_R) \in X$ then gives $EE_n(\mathbf{0}, f_R, \mathbf{y}, 0) = f_R(\mathbf{y})$, so that $S$ is shattered. Therefore $VC(EE_n) \geq |S| = 2^{n} - 1 \geq 2^{n - 1}$, and Theorem~\ref{th:klauck} implies that the one-way quantum communication complexity $Q_\epsilon^1 ( EE_n) \geq (1 - H(\epsilon)) 2^{n - 2}$. This establishes that the number of communicated qubits scales exponentially with $n$ even in the bounded error case, so that the exponential separation between the quantum switch and one-way quantum communication continues to hold. \end{document}
\begin{document} \title{\Large {\bf Tanaka formula and local time for a class of interacting branching measure-valued diffusions} \thanks{Partial funding in support of this work was provided by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Department of Mathematics at the University of Oregon.}} \author{\bf D. A. Dawson\thanks{D. A. Dawson, Carleton University, [email protected]} \and \bf J. Vaillancourt\thanks{J. Vaillancourt, HEC Montr\'eal, [email protected]. Corresponding Author : 3000, chemin de la C\^ote-Sainte-Catherine, Montr\'eal (Qu\'ebec), Canada H3T 2A7 orcid.org/0000-0002-9236-7728 } \and \bf H. Wang\thanks{H. Wang, University of Oregon, [email protected]}} \maketitle {\narrower{\narrower \centerline{\bf Abstract} We construct superprocesses with dependent spatial motion (SDSMs) in Euclidean spaces $\mathbb{R}^d$ with $d\ge1$ and show that, even when they start at some unbounded initial positive Radon measure such as Lebesgue measure on $\mathbb{R}^d$, their local times exist when $d\le3$. A Tanaka formula of the local time is also derived. \noindent{\bf 2010 Mathematics Subject Classification}: Primary 60J68, 60J80; Secondary 60H15, 60K35, 60K37 \noindent{\bf Keywords and Phrases}: Measure-valued diffusions, stochastic partial differential equations, superprocesses, branching processes, local time, Tanaka formula. \par}\par} \baselineskip=6.0mm \section{Introduction} \setcounter{equation}{0} In the present paper, we provide an explicit representation of the local time for a class of interacting superprocesses (measure-valued branching diffusions) on Euclidean space $\mathbb{R}^d$ when $d\le3$. Known as superprocesses with dependent spatial motion (SDSM), this class was introduced in Wang \cite{Wang97}, \cite{Wang98} and extended in Dawson et al. \cite{DLW01}. The subdivision of SDSM into subclasses, according to the portion of the state space that SDSM charges almost surely, its various representations by associated stochastic partial differential equations (SPDE) and some of its trajectorial properties have been exhibited and analyzed since then, notably in the special case of measure-valued processes on the real line ($d=1$). In that case, let us mention the following results: the dimension of its support in Wang \cite{Wang97}, depending on the degeneracy or not of the diffusion term; the explicit form of the SPDE for the motion of the resulting purely atomic measure-valued SDSM in various degenerate cases, in Dawson et al. \cite{DLW03} and Li et al. \cite{LWX04a}; an explicit representation of its density in Dawson et al. \cite{DVW2000} in the non degenerate case and the delicate matter of joint continuity in time and space of this density, in Li et al. \cite{LWXZ12} and Hu et al. \cite{HLN13} and \cite{HNX19}. Extending some of these results to the general case $d\ge1$ still offers many challenges, some of which are addressed here. To avoid repetitions, we make the following basic assumptions, called for as needed throughout this paper and related to the properties of the processes themselves, as well as the filtered probability spaces they are constructed on. Definitions and notations are gathered at the beginning of Section \ref{sec:main}; they are introduced in the text by the $:=$ symbol.
\begin{hyp}\label{hyp:basicassumpFilter} Let $(\Omega, {\cal F}, \{{\cal F}_t\}_{t \geq 0}, \mathbb{P})$ be a filtered probability space with a right continuous filtration $\{{\cal F}_t\}_{t \geq 0}$, satisfying the usual hypotheses and upon which all our processes are built, notably an $\mathbb{R}^1$-valued Brownian sheet $W$ on $\mathbb{R}^d$ (see below) and a countable family $\{B_{k}, k\geq 1\}$ of independent, $\mathbb{R}^d$-valued, standard Brownian motions written $B_{k}=(B_{k1}, \cdots, B_{kd})$. The family $\{B_{k}, k\geq 1\}$ is assumed independent of $W$. \end{hyp} Following Walsh \cite[Chapter 2]{Walsh86}, a random set function $W$ on ${\cal B}(\mathbb{R}^d \times[0,\infty))$ defined on $(\Omega, {\cal F}, \{{\cal F}_t\}_{t \geq 0}, \mathbb{P})$ is called an $\mathbb{R}^1$-valued Brownian sheet on $\mathbb{R}^d$ (or space-time white noise) if both of the following statements hold: for every $A \in {\cal B}(\mathbb{R}^d)$ having finite Lebesgue measure $\lambda_0(A)$, the process $M(A)_t:=W(A\times [0, t])$ is a square-integrable $\{ {\cal F}_t\}$-martingale; and for every pair $A_i\in {\cal B}(\mathbb{R}^d \times[0,\infty))$, $i=1, 2$, having finite Lebesgue measure with $A_1\cap A_2=\emptyset$, the random variables $W(A_1)$ and $W(A_2)$ are independent Gaussian random variables with mean zero and respective variance $\lambda_0(A_i)$, and $W(A_1\cup A_2) = W(A_1)+W(A_2)$ holds $\mathbb{P}$-almost surely (see Walsh \cite{Walsh86}, Dawson \cite[Section 7.1]{Dawson93} and Perkins \cite{Perkins02} for further details). The rest of our assumptions relate to the coefficients in the equations (bounded and Lipschitz continuous) and the restrictions imposed on the (often infinite) initial measure. \begin{hyp}\label{hyp:basicassumpElliptic} The vector $h=(h_1, \cdots,h_d)$ satisfies $h_p \in L^1(\mathbb{R}^d) \cap {\rm Lip}_b (\mathbb{R}^d)$ and the $d\times d$ matrix $c=(c_{pr})$ satisfies $c_{pr} \in {\rm Lip}_b(\mathbb{R}^d)$, for $p,r=1, \cdots,d$. For each $p,q=1,\ldots,d$, we write $a_{pq}(x):=\sum_{r=1}^{d}c_{pr}(x)c_{qr}(x)$ and $\rho_{pq}(x-y):=\int_{\mathbb{R}^d}h_{p}(u-x)h_{q}(u-y)du$. For every $m\ge1$, the $dm\times dm$ diffusion matrices $(\Gamma_{pq}^{ij})_{1 \leq i,j \leq m; 1 \leq p,q \leq d}$ of real-valued functions defined by \begin{eqnarray} \label{gammaij} \hspace*{8mm}\Gamma_{pq}^{ij}(x_{1}, \cdots, x_{m}) := \left\{ \begin{array}{ll} (a_{pq}(x_{i})+\rho_{pq}(0)) \quad & \mbox{if $i=j$,} \\ \rho_{pq}(x_{i}- x_{j}) \quad & \mbox{if $i\neq j$}, \end{array} \right. \end{eqnarray} are strictly positive definite everywhere on $(\mathbb{R}^d)^m$ with smallest and largest eigenvalues bounded away respectively from $0$ and $\infty$, uniformly in $(\mathbb{R}^d)^m$; that is, there are two positive constants $\lambda_m^*$ and $\Lambda_m^*$ such that for any $\xi = (\xi^{(1)}, \cdots, \xi^{(m)}) \in (\mathbb{R}^d)^m$ we have a positive definite form which satisfies \[ 0 < \lambda_m^* |\xi|^2\leq \sum_{k,l=1}^m \sum_{p,q=1}^d \Gamma_{pq}^{kl}(\cdot) \xi_p^{(k)} \xi_q^{(l)} \leq \Lambda_m^*|\xi|^2 < \infty. \] \end{hyp} \noindent {\bf Remark:} Hypothesis \ref{hyp:basicassumpElliptic} ensures that $\Gamma$ is uniformly elliptic (lower bound).
Since $\Gamma$ is a sum of two non-negative definite matrices, this occurs as soon as one of the two summands is strictly positive definite. This is the case if the individual motions are uniformly elliptic. Consult Section \ref{sec:SDSM} for more consequences of Hypothesis \ref{hyp:basicassumpElliptic}. Hypotheses \ref{hyp:basicassumpFilter} and \ref{hyp:basicassumpElliptic} together guarantee the existence of non-degenerate SDSM and their characterization by way of the martingale problem approach (Theorem \ref{MPforSDSM}). The rest of our assumptions relate to the restrictions imposed on the initial measure. Their statement requires some additional notation. Let $M(\mathbb{R}^d)$ be the set of all positive Radon measures on $\mathbb{R}^d$ and $M_0(\mathbb{R}^d)$ its subspace of finite positive Radon measures. For any $a \ge 0$, let $I_a(x):= (1 + |x|^2)^{-a/2}$ and \begin{eqnarray} \label{spacesMa} M_a(\mathbb{R}^d)= \{\mu \in M(\mathbb{R}^d): \langle I_a,\mu\rangle:= \int_{\mathbb{R}^d}I_a(x) \mu(dx) < \infty \}. \end{eqnarray} The topology $\tau_a$ of $M_a(\mathbb{R}^d)$ is defined in the following way: $\mu_n \in M_a(\mathbb{R}^d)$ converges to $\mu \in M_a(\mathbb{R}^d)$ as $n \rightarrow \infty$, if $\lim_{n \rightarrow \infty}\langle \phi, \mu_n\rangle = \langle\phi, \mu\rangle$ holds for every $\phi \in K_a(\mathbb{R}^d),$ where \begin{eqnarray} \label{spacesKa} K_a(\mathbb{R}^d)=\{\phi: \phi:= h + \beta I_a, \beta \in \mathbb{R}, h \in C_c(\mathbb{R}^d)\}. \end{eqnarray} Then, $(M_a(\mathbb{R}^d), \tau_a)$ is a Polish space (see Iscoe \cite{Iscoe86a} and Konno and Shiga \cite{KonnoShiga88}). For instance, the Lebesgue measure $\lambda_0$ on $\mathbb{R}^d$ belongs to $M_a(\mathbb{R}^d)$ for any $a >d$. Furthermore, $dx$ and $\lambda_0(dx)$ are used interchangeably when calculating Lebesgue integrals. \begin{hyp}\label{hyp:basicassumpGauss} For any $T > 0$, the initial measure $\mu_0\in\cup_{a\ge0}M_a(\mathbb{R}^d)$ verifies \begin{eqnarray} \label{nonBA} \sup_{x\in \mathbb{R}^d} \sup_{0 < t \leq T} \langle\varphi_t(x-\cdot), \mu_0\rangle < \infty, \end{eqnarray} where $\varphi_t$ is the transition density of the standard $d$-dimensional Brownian motion. \end{hyp} In order to fulfill the requirements of using Fubini's theorem in some of the proofs, we also require an additional uniform bound on the measure $\mu_0 \in M_a(\mathbb{R}^d)$. \begin{hyp}\label{hyp:basicassumpUniformInteg} The initial measure $\mu_0 \in M_a(\mathbb{R}^d)$ satisfies \[ \sup_{w\in \mathbb{R}^d} \langle I_a(\cdot+w), \mu_0 \rangle < \infty . \] \end{hyp} \noindent {\bf Remark:} Hypotheses \ref{hyp:basicassumpGauss} and \ref{hyp:basicassumpUniformInteg} are uniform boundedness conditions ensuring the existence of the local time for SDSM in general (Theorem \ref{lt_th1}) as well as the validity of the Tanaka formula (\ref{TanakaII}). Hypothesis \ref{hyp:basicassumpGauss} is akin to condition (2.9) in Sugitani \cite{Sugitani89}, instrumental in his proof of joint continuity of the local time for Super-Brownian motion. The treatment of the joint continuity of the local time for SDSM in general requires sharper inequalities than the ones used here and is treated in \cite{DVW2021}.
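As a quick illustration of these two conditions (an elementary sanity check added here, not needed for the sequel), Lebesgue measure satisfies $(\ref{nonBA})$ trivially while a point mass does not: for every $x\in\mathbb{R}^d$ and $0<t\le T$, \[ \langle\varphi_t(x-\cdot), \lambda_0\rangle = \int_{\mathbb{R}^d}\varphi_t(x-y)\,dy = 1, \qquad\mbox{whereas}\qquad \sup_{0 < t \leq T}\langle\varphi_t(0-\cdot), \delta_0\rangle = \sup_{0 < t \leq T}(2\pi t)^{-d/2} = \infty , \] so that $\mu_0=\delta_0$ violates Hypothesis \ref{hyp:basicassumpGauss}. The Example in Section \ref{sec:SDSM} shows why such initial atoms must indeed be ruled out when $d\ge2$.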
Additional insight into the need for Hypotheses \ref{hyp:basicassumpGauss} and \ref{hyp:basicassumpUniformInteg} is provided in Section \ref{sec:SDSM}. The class SDSM $\{\mu_t\}$ is the main object of this paper and is characterized through the second order linear differential operator ${\cal L}$, defined for bounded smooth real-valued functions $F$ on $M_a(\mathbb{R}^d)$ by \begin{eqnarray} \label{pregenerator} {\cal L}F(\mu) & := & { \displaystyle \frac{1}{2} \sum_{p,q=1}^{d} \int_{\mathbb{R}^d} (a_{pq}(x) + \rho_{pq}(0)) \left(\frac{\partial^{2}}{\partial x_p \partial x_q}\right) \frac{{\delta} F(\mu)}{{\delta} \mu(x)}\,\mu(dx)} \nonumber \\ & & + { \displaystyle \frac{1}{2}\sum_{p,q=1}^{d} \int_{{\mathbb{R}^d}} \int_ {{\mathbb{R}^d}} \rho_{pq}(x-y) \left(\frac{\partial}{\partial x_p }\right) \left(\frac{\partial}{\partial y_q }\right) \frac{{\delta}^{2}F(\mu)}{{\delta} \mu(x) {\delta} \mu(y)} \mu(dx) \mu(dy)} \nonumber \\ & & + { \displaystyle \frac{\gamma \sigma^2}{2} \int_{{\mathbb{R}^d}} \frac{{\delta}^{2} F(\mu)}{{\delta} \mu(x)^{2}} \mu (dx). } \end{eqnarray} The local (or individual) diffusion coefficients $a_{pq}$ and the global (or common) interactive diffusion coefficients $\rho_{pq}$ are defined within Hypothesis \ref{hyp:basicassumpElliptic}. Parameter $\gamma>0$ is related to the branching rate of the particle system and $\sigma^2 > 0$ is the variance of the limiting offspring distribution. The variational derivative is defined by \[ \frac{\delta F(\mu)}{\delta \mu(x) }:= \lim_{\varepsilon \downarrow 0} \frac{F(\mu+\varepsilon \delta_{x})-F(\mu)}{\varepsilon} \] where $\delta_{x}$ stands for the Dirac measure at $x$. The domain ${\cal D}({\cal L})$ of the operator ${\cal L}$ contains all functions of the form $F(\mu) = g(\langle\phi_1, \mu\rangle, \cdots, \langle\phi_k, \mu \rangle)$ with $g \in C_b^2(\mathbb{R}^k)$ for some $k\ge1$ and $\phi_i \in C_c^{\infty}(\mathbb{R}^d)$ for every $1 \leq i \leq k $ (although it is not restricted to just these functions). For any $\mu \in M(\mathbb{R}^d)$ and any $\mu$-integrable $\phi$ we write $\langle\phi,\mu\rangle= \int_{\mathbb{R}^d} \phi(x) \mu(dx)$ here and henceforth. Clearly the class of SDSM includes the critical branching Dawson-Watanabe superprocesses when $h\equiv0$. The literature on these is extensive and the reader may consult the lecture notes by Dawson \cite{Dawson92, Dawson93}, Le Gall \cite{LeGall99}, Etheridge \cite{Etheridge00} and Perkins \cite{Perkins02} for historical insights into the origin and the early evolution of the field, as well as the subsequent works by Li \cite{Li10} and Xiong \cite{Xiong13} for thorough updates on the subject. Amongst the many properties of SDSM the one of interest here is the existence of a local time when $d\le3$. A local time of SDSM $\{\mu_t\}$ is a density process of the occupation time process $\int_0^t \mu_s ds$ of SDSM, a time-averaging giving rise to a new measure-valued process with more regular paths and, in some cases, a density with respect to Lebesgue measure, even when SDSM itself does not have one. For instance, from Wang \cite{Wang97} we know that in the degenerate case, the SDSM is a purely atomic measure-valued process, so the density of the occupation time for this degenerate SDSM process may not exist.
However, Li and Xiong \cite{LiXiong07} introduced an interesting alternative way to define the local time for a class of purely atomic measure-valued processes along the path of each particle. The local time (in this sense) of the degenerate SDSM is constructed there and its joint H{\"o}lder continuity proved. We now proceed in our non-degenerate (uniformly elliptic) case with the following, more familiar definition. \begin{definition} \label{de1} A local time of the SDSM $\{\mu_t\}$ is any ${\cal B}(\mathbb{R}^d \times[0,\infty)) \times {\cal F}$-measurable function $\Lambda^{x}_t(\omega):(\mathbb{R}^d \times [0, \infty) \times \Omega) \rightarrow [0, \infty)$, satisfying \begin{eqnarray} \label{LT} \int_{\mathbb{R}^d}\phi(x) \Lambda_t^x dx = \int_0^t \langle\phi, \mu_s\rangle ds, \end{eqnarray} for every $t > 0$ and every $\phi \in C_c(\mathbb{R}^d)$. \end{definition} We will construct such a local time so that $(\ref{LT})$ holds simultaneously for every $t > 0$ and every $\phi \in C_c(\mathbb{R}^d)$ outside of a common $\mathbb{P}$-null set. In the degenerate case, Li et al. \cite{LWX04a} have shown that two SDSM with the same initial data can have distinct pathwise solutions, thus generating two distinct local times that satisfy equation $(\ref{LT})$; hence the need for the alternate definition of local time provided in Li and Xiong \cite{LiXiong07} in the degenerate case. Here instead, under Hypotheses \ref{hyp:basicassumpFilter} through \ref{hyp:basicassumpUniformInteg}, this situation is avoided, as will be seen in Section \ref{sec:main}. In the case of Super-Brownian motion (where $h\equiv0$ and $c$ is the identity matrix) the existence and the joint space-time continuity of paths for its local time when $d\le3$ go back to Iscoe \cite{Iscoe86b} and Sugitani \cite{Sugitani89}. These results, as well as further path properties, were generalized to superdiffusions (still $h\equiv0$) in Krone \cite{Krone93}. In these and many other papers where the finer aspects of the superprocesses are analyzed, the argument largely depends on a multiplicative property of branching processes and the availability of a manageable closed form for the log-Laplace functional, a powerful tool to estimate the higher moments of $\{\mu_t\}$; this method was applied notably by Adler and Lewin \cite{AdlerLewin92} in their proof of the Tanaka formula for the local time of Super-Brownian motion and super stable processes. Unfortunately, in our model the dependence between the spatial motions ($h$ is no longer the null function) destroys the multiplicative property in question and makes this approach largely intractable, as it relies intimately on the independence structure built into Dawson-Watanabe superprocesses. The same obstruction arises with the approach proposed in L\'opez-Mimbela and Villa \cite{Lopez-MimbelaVilla04} for Super-Brownian motion, where an alternative representation of the local time simplifies the proof of its joint continuity by taking advantage of sharp estimates for the Green function of Brownian motion and its associated Tanaka formula. However, the higher order singularity of the Green function and its derivative in our case raises some new technical difficulties in the moment estimation of the interacting term, as well as in the handling of a stochastic convolution integral term appearing in the corresponding Tanaka formula.
These issues are resolved using a perturbation argument and a Tanaka formula for SDSM emerges. The remainder of this paper is organized as follows. Section \mathbb{R}ef{sec:main} states the main results and assembles the remaining notation required for their formulation. Section \mathbb{R}ef{sec:SDSM} is devoted to the construction of SDSM $\{\mu_t\}$ started with an unbounded initial measure on $\mathbb{R}^d$ with $d \geq 1$. This construction proceeds as a limit of a sequence of branching particle systems in $\mathbb{R}^d$; it uses a tightness argument adapted from the one built by Ren et al. \cite{RenSongWang09} for SDSM on a bounded domain $D \subset \mathbb{R}^d$. The derivation of the associated SPDE is achieved there as well. In Section \mathbb{R}ef{sec:dualConst}, sharp bounds are obtained for the $k^{th}$-moments of SDSM by computing them through a duality argument, using the martingale problem approach. In Section \mathbb{R}ef{sec:Tanaka}, we derive a Tanaka formula for SDSM and use it to prove the existence of the local time $\Lambdambda_t^x$ for SDSM for some unbounded initial measures. To avoid forward referencing, Sections \mathbb{R}ef{sec:SDSM}, \mathbb{R}ef{sec:dualConst} and \mathbb{R}ef{sec:Tanaka} are in logical and chronological order. Some technical results have their proof postponed to Section \mathbb{R}ef{app:ProofsOfLemmas}; those proofs are self-contained. \setcounter{equation}{0}\sectionion{Main results and notation} \lambdabel{sec:main} \setcounter{equation}{0} For any Polish space $S$, that is, a topologically complete and separable metric space, ${\cal B}(S)$ denotes its Borel $\sigma$-field, $B(S)$ the Banach space of real-valued bounded Borel measurable functions on $S$ with the supremum norm $\|\cdot\|_{\infty}$ and $C(S)$ the space of real-valued continuous functions on $S$. Subscripts $b$ or $c$ on any space of functions will always refer to its subspace of bounded or compactly supported functions, respectively, as in $C_b(S)$ and $C_c(S)$ here. $S^m$ denotes the $m$-fold product of $S$. The spaces of continuous $C([0, \infty), S)$ and c\`adl\`ag $D([0, \infty), S)$ trajectories into Polish space $S$ are respectively equipped with the topology of uniform convergence on compact time sets and the usual Skorohod topology; they are themselves also Polish spaces (see Ethier and Kurtz \cite{EthierKurtz86}). Given any positive Radon measure $\mu\in M(\mathbb{R}^d)$ and any $p\in[1,\infty)$, we write $L^p(\mu)$ for the Banach space of real-valued Borel measurable functions on $\mathbb{R}^d$, with finite norm $\|\phi\|_{\mu,p}:= \{\int_{\mathbb{R}^d}|\phi(x)|^pd\mu(x)\}^{1/p} < \infty$ and $|x|^2 := \sum_{i=1}^dx_i^2$. When $\mu=\lambdambda_0$ is the Lebesgue measure we use the standard notation $L^p(\mathbb{R}^d)=L^p(\lambdambda_0)$ and $\|\phi\|_p:= \|\phi\|_{\lambdambda_0,p}$. 
We need various subspaces of continuous functions inside $C(\mathbb{R}^d)$, notably $C^k(\mathbb{R}^d)$ the space of continuous functions on $\mathbb{R}^d$ with continuous derivatives up to and including order $k\ge0$, with $C^\infty(\mathbb{R}^d)$ their common intersection (the smooth functions) and noticing that $C^0(\mathbb{R}^d)=C(\mathbb{R}^d)$; $C_b^k(\mathbb{R}^d)$ their respective subspace of bounded continuous functions with bounded derivatives up to and including order $k$, again with $C_b^\infty(\mathbb{R}^d)$ their common intersection and $C_b^0(\mathbb{R}^d)=C_b(\mathbb{R}^d)$; $C_0^k(\mathbb{R}^d)$ those bounded continuous functions vanishing at $\infty$ together with their derivatives up to and including order $k$, with $C_0^\infty(\mathbb{R}^d)$ their common intersection and $C_0^0(\mathbb{R}^d)=C_0(\mathbb{R}^d)$, this last a Banach space when equipped with finite supremum norm; $C_c^k(\mathbb{R}^d)$ the further subspace of those with compact support, again with $C_c^\infty(\mathbb{R}^d)$ their common intersection and $C_c^0(\mathbb{R}^d)=C_c(\mathbb{R}^d)$. We use ${\mathbb{R}m Lip}(\mathbb{R}^d)$ to denote the space of Lipschitz functions on $\mathbb{R}^d$, that is, $\phi \in {\mathbb{R}m Lip}(\mathbb{R}^d)$ if there is a constant $M>0$ such that $|\phi(x)-\phi(y)|\leq M |x-y|$ for every $x, y \in \mathbb{R}^d$. The class of bounded Lipschitz functions on $\mathbb{R}^d$ will be denoted by ${\mathbb{R}m Lip}_b(\mathbb{R}^d)$. Let ${\cal S}(\mathbb{R}^d)$ be the Schwartz space of rapidly decreasing test functions and ${\cal S}'(\mathbb{R}^d)$ the space of Schwartz tempered distributions, the dual space of ${\cal S}(\mathbb{R}^d)$ (see Al-Gwaiz \cite{Al-Gwaiz92}, Barros-Neto \cite{Barros-Neto73} or Schwartz \cite{Schwartz59}). In addition to ${\cal S}(\mathbb{R}^d)$, the main set of functions of interest here is $K_a(\mathbb{R}^d)$ defined in $(\mathbb{R}ef{spacesKa})$ for any real number $a\ge0$. Since $C_c^\infty(\mathbb{R}^d)$ is uniformly dense in $C_c(\mathbb{R}^d)$ (with $C_0(\mathbb{R}^d)$ as common closure), the uniform closure of $K_a(\mathbb{R}^d)$ remains unchanged if we replace $C_c(\mathbb{R}^d)$ by $C_c^\infty(\mathbb{R}^d)$. Both are also $\|\cdot\|_p$-dense in $L^p(\mathbb{R}^d)$ for every $p\in[1,\infty)$ (for instance, see Lemma 2.19 in Lieb and Loss \cite{LiebLoss01}), a fact that will come in handy later. Of course ${\cal S}(\mathbb{R}^d)$ is uniformly dense in $C_0(\mathbb{R}^d)$ and $\|\cdot\|_p$-dense in $L^p(\mathbb{R}^d)$ for every $p\in[1,\infty)$ as well. We will also need $C_b^{1,2}([0, t] \times (\mathbb{R}^d)^m)$, the space of bounded continuous functions with all derivatives bounded, up to and including order $1$ in the time variable up to time $t$ and order $2$ in the $md$ space variables, including mixed derivatives of that order. When no ambiguity is present we also write the partial derivatives (of functions and distributions) in abridged form \begin{equation*} \partial_p:=\frac{\partial}{\partial x_p} \quad \mbox{and}\quad \partial_p\partial_q:=\frac{\partial}{\partial x_p}\frac{\partial}{\partial x_q} \quad \mbox{and so on.} {\mbox{\rm e}}nd{equation*} Set \begin{equation}lb\lambdabel{eqn:Gn1} G_1 := \sum_{p,q=1}^{d}\tfrac{1}{2}(a_{pq}(x)+\mathbb{R}ho_{pq}(0)) \partial_p \partial_q. {\mbox{\rm e}}nd{equation}lb Let us now turn our attention to the characterization of SDSM through the formulation of a well-posed martingale problem (see Ethier and Kurtz \cite{EthierKurtz86}). 
A solution to the $({\cal L}, \delta_{{\mu}_{0}})$-martingale problem is a stochastic process $\mu$ with values in $M_a(\mathbb{R}^d)$ defined on $(\Omega, {\cal F}, \{{\cal F}_t\}_{t \geq 0}, \mathbb{P})$ with initial value ${\mu}_{0}\in M_a(\mathbb{R}^d)$ such that, for every $F\in {\cal D}({\cal L})$, the process $F(\mu_t)-\int_{0}^{t} {\cal L} F(\mu_s)ds$ is an ${\cal F}_t$-martingale. We say this martingale problem is well-posed (or has a unique solution) if such a solution exists and any two solutions have the same finite dimensional distributions. \begin{theorem} \label{MPforSDSM} Assume Hypotheses $\ref{hyp:basicassumpFilter}$ and $\ref{hyp:basicassumpElliptic}$. For any $a\ge0$ and initial value ${\mu}_{0}\in M_a(\mathbb{R}^d)$, the $({\cal L}, \delta_{{\mu}_{0}})$-martingale problem for the operator given by $(\ref{pregenerator})$ is well-posed and its unique solution $\mu_t$ is an $M_a(\mathbb{R}^d)$-valued diffusion process which satisfies the SPDE \begin{eqnarray} \label{SPDEc} \langle\phi,\mu_{t}\rangle - \langle\phi,\mu_{0}\rangle = {X}_{t}(\phi) + M_t(\phi) + \int_{0}^{t} \left\langle G_1\phi, \, {\mu}_{s} \right\rangle ds \end{eqnarray} $\mathbb{P}_{\mu_0}$-almost surely and the $\mathbb{P}_{\mu_0}$-null set in question is common to all $t>0$ and every $\phi \in K_a(\mathbb{R}^d) \cup {\cal S}(\mathbb{R}^d)$. Further, for every $t>0$ and every $\phi \in K_a(\mathbb{R}^d) \cup {\cal S}(\mathbb{R}^d)$, there also holds \begin{eqnarray} \label{SPDEb} \mathbb{E}_{\mu_0} \bigg(\langle\phi,\mu_{t}\rangle - \langle\phi,\mu_{0}\rangle - {X}_{t}(\phi) - M_t(\phi) - \int_{0}^{t} \left\langle G_1\phi, \, {\mu}_{s} \right\rangle ds\bigg)^2 = 0. \end{eqnarray} Both $$ {X}_{t}(\phi) := \sum_{p=1}^{d} \int_{0}^{t}\int_{\mathbb{R}^d} \left\langle h_p(y-\cdot) \partial_p \phi(\cdot), {\mu}_{s} \right\rangle {W}(dy,ds) $$ and $$ M_t(\phi) :=\int_0^t \int_{\mathbb{R}^d} \phi (y) M(dy, ds) $$ are continuous square-integrable $\{{\cal F}_t\}$-martingales, mutually orthogonal for every choice of $\phi \in K_a(\mathbb{R}^d) \cup {\cal S}(\mathbb{R}^d)$ and driven respectively by a Brownian sheet $W$ and a square-integrable martingale measure $M$ with \begin{eqnarray*} \langle{M}(\phi) \rangle_t = \gamma \sigma^2 \int_0^t \langle\phi^2, {\mu}_s \rangle ds \qquad \hbox{for every } t>0 \hbox{ and } \phi \in K_a(\mathbb{R}^d) \cup {\cal S}(\mathbb{R}^d). \end{eqnarray*} \end{theorem} Here the filtration of choice is ${\cal F}_t:=\sigma\{ \langle\phi,\mu_{s}\rangle , M_s(\phi), X_s(\phi): \phi \in K_a(\mathbb{R}^d), s \leq t\}$. The law for this solution $\{\mu_t\}$ started at ${\mu}_{0}\in M_a(\mathbb{R}^d)$ will henceforth be denoted by $\mathbb{P}_{\mu_0}$. The corresponding expectation is just $\mathbb{E}_{\mu_0}$. The proof of Theorem \ref{MPforSDSM} is in Section \ref{sec:dualConst}. This unique solution to the martingale problem for $({\cal L}, {\cal D}({\cal L}))$ is our SDSM. Note that Hypothesis \ref{hyp:basicassumpElliptic} ensures that operator $G_1$ is uniformly elliptic on $C^2(\mathbb{R}^d)$.
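To connect the statement with the classical theory (the following specialization is only a sketch added for orientation, not part of the proof), observe that when $h \equiv 0$ one has $\rho_{pq} \equiv 0$, so the noise term ${X}_{t}(\phi)$ vanishes, $G_1$ reduces to $\frac{1}{2}\sum_{p,q=1}^{d} a_{pq}(x)\partial_p\partial_q$, and $(\ref{SPDEc})$ becomes \[ \langle\phi,\mu_{t}\rangle - \langle\phi,\mu_{0}\rangle = M_t(\phi) + \int_{0}^{t} \Big\langle \tfrac{1}{2}\sum_{p,q=1}^{d} a_{pq}\,\partial_p\partial_q \phi, \; \mu_s\Big\rangle ds , \qquad \langle M(\phi)\rangle_t = \gamma\sigma^2\int_0^t \langle\phi^2,\mu_s\rangle ds , \] which is the familiar martingale problem for the critical branching Dawson-Watanabe superprocess mentioned in the Introduction; taking $c$ to be the identity matrix yields Super-Brownian motion, with $G_1 = \frac{1}{2}\Delta$.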
For the single particle transition density $q_t^{1}(0,x)$ (from 0) of the semigroup $P_t^1$ associated with generator $G_1$ from (\ref{eqn:Gn1}), its Laplace transform (in the time variable) is given by \begin{eqnarray}\label{eqn:green} Q^{\lambda}(x) := \int_0^{\infty}e^{-\lambda t} q_t^{1}(0,x)dt, \end{eqnarray} for any $\lambda > 0$. Formally $Q^{0}$ is known as Green's function for density $q_t^{1}$ and exhibits a potential singularity at $x=0$. For all $x \in \mathbb{R}^d \smallsetminus \{x=(x_1,x_2,\cdots,x_d):x_1x_2\cdots x_d=0\}$ we can also write --- the legitimacy of the interchange and the finiteness of both expressions will be explained in due course --- \begin{eqnarray}\label{eqn:interchange} \partial_{x_i}Q^{\lambda}(x)= \partial_{x_i}\int_0^{\infty}e^{-\lambda t}q_t^{1}(0,x)dt =\int_0^{\infty}e^{-\lambda t}\partial_{x_i}q_t^{1}(0,x)dt \end{eqnarray} for any $i \in \{1,2,\cdots,d\}$, with the derivative taken in the distributional sense --- see Lemma \ref{FellerProp} for clarifications. The singularity at $x=0$ requires the following perturbation. For every $\varepsilon>0$ and every $x \in \mathbb{R}^d$, define \[ Q^{\lambda}_{\varepsilon}(x - \cdot):= \int_{0}^{\infty} e^{- \lambda s}q_{s + \varepsilon}^{1}(x- \cdot) ds =e^{\lambda \varepsilon}\int_{\varepsilon}^{\infty} e^{- \lambda t}q_t^{1}(x- \cdot) dt \in C_b^{\infty}(\mathbb{R}^d). \] The local time for SDSM is constructed by way of its approximating sequence, as follows: for every $(t,x) \in [0, \infty) \times \mathbb{R}^d$, define \begin{eqnarray} \label{TanakaI} \Lambda^{x, \varepsilon}_{t} := \int_0^t \langle(-G_1 + \lambda)Q^{\lambda}_{\varepsilon}(x - \cdot), \mu_s\rangle ds. \end{eqnarray} \begin{theorem} \label{ClaimAk} Assume Hypotheses $\ref{hyp:basicassumpFilter}$, $\ref{hyp:basicassumpElliptic}$, $\ref{hyp:basicassumpGauss}$ and $\ref{hyp:basicassumpUniformInteg}$. For any real numbers $T>0$, $\varepsilon>0$ and any integer $k\ge1$, there holds \begin{eqnarray} \label{AME} \sup_{x \in \mathbb{R}^d} \sup_{0 \leq t \leq T}\mathbb{E}_{\mu_0} [|\Lambda^{x, \varepsilon}_t |^k] < \infty \end{eqnarray} and \begin{eqnarray} \label{BME} \lim_{\varepsilon, \varepsilon' \downarrow 0} \sup_{x \in \mathbb{R}^d} \sup_{0 \leq t \leq T} \mathbb{E}_{\mu_0} [|\Lambda^{x, \varepsilon}_t - \Lambda^{x, \varepsilon'}_t |^k] = 0.
\end{eqnarray} \end{theorem} Since each $\Lambda^{x, \varepsilon}_t(\omega) : [0, \infty) \times \mathbb{R}^d \times \Omega \rightarrow[0, \infty)$ is a jointly measurable function, there exists a common $\mathbb{P}_{\mu_0}$-null set $N$ for any $(t,x) \in [0, \infty) \times \mathbb{R}^d$, such that \begin{eqnarray} \label{DELT} \Lambda^x_t := \begin{cases} \lim_{\varepsilon \downarrow 0}\Lambda^{x, \varepsilon}_t , & \quad \mbox{ if $\omega \notin N $} \\ 0 , & \quad \mbox{ if $\omega \in N $} \end{cases} \end{eqnarray} is a well-defined limit (in the $L^k(\mathbb{P}_{\mu_0})$ sense for every integer $k\ge1$) satisfying both \begin{eqnarray} \label{AME1} \sup_{x \in \mathbb{R}^d} \sup_{0 \leq t \leq T} \mathbb{E}_{\mu_{0}} [|\Lambda^{x}_t|^k] < \infty \nonumber \end{eqnarray} and \begin{eqnarray} \label{BME1} \lim_{\varepsilon \downarrow 0} \sup_{x \in \mathbb{R}^d} \sup_{0 \leq t \leq T} \mathbb{E}_{\mu_0}[|\Lambda^x_t - \Lambda^{x, \varepsilon}_t|^k] = 0. \nonumber \end{eqnarray} The proof of Theorem \ref{ClaimAk} is achieved in Section \ref{sec:dualConst} by computing the $k^{th}$-moments of SDSM through a duality argument. We now state our main result, which confirms that $(\ref{DELT})$ defines the local time of SDSM and possesses a Tanaka representation, under some restriction on the family of initial measures. \begin{theorem} \label{lt_th1} Under Hypotheses $\ref{hyp:basicassumpFilter}$ and $\ref{hyp:basicassumpElliptic}$, with $d=1$, $2$ or $3$, select any $a\ge0$ and ${\mu}_{0}\in M_a(\mathbb{R}^d)$ satisfying both Hypotheses $\ref{hyp:basicassumpGauss}$ and $\ref{hyp:basicassumpUniformInteg}$, with the Brownian sheet $W$ provided by Hypothesis $\ref{hyp:basicassumpFilter}$ and martingale measure $M$ constructed from Theorem $\ref{MPforSDSM}$. Fix $\lambda>0$. Outside of a $\mathbb{P}_{\mu_0}$-null set which is common to all $(t,x) \in (0, \infty) \times \mathbb{R}^d$, $\Lambda^x_{t}$ defined by $(\ref{DELT})$ satisfies $\Lambda^x_{t} \geq 0$ and the following Tanaka formula holds: \begin{eqnarray} \label{TanakaII} \Lambda^{x}_{t} & = & \langle Q^{\lambda}(x - \cdot), \mu_0\rangle - \langle Q^{\lambda}(x- \cdot), \mu_{t}\rangle + \lambda \int_0^{t} \langle Q^{\lambda}(x - \cdot), \mu_s\rangle ds \nonumber \\ & & + \sum_{p=1}^{d}\int_0^{t} \int_{\mathbb{R}^d}\langle h_p(y- \cdot) \partial_p Q^{\lambda}(x - \cdot), \mu_s\rangle W(dy,ds) \nonumber \\ & & + \int_0^{t} \int_{\mathbb{R}^d}Q^{\lambda}(x-y)M(dy,ds). \end{eqnarray} Further, $\Lambda^x_{t}$ is the local time for SDSM $\{\mu_t\}$, in that $(\ref{LT})$ holds outside of a $\mathbb{P}_{\mu_0}$-null set which is common to all $(t,\phi) \in [0, \infty) \times C_c(\mathbb{R}^d)$. Finally, for every $t \ge 0$ there holds $ \sup_{x\in\mathbb{R}^d}\sup_{0 \leq s \leq t} \mathbb{E}_{\mu_0} [( \Lambda^x_{s} )^k] < \infty$ for every integer $k\ge1$. \end{theorem} \noindent {\bf Remark} Note that the $\mathbb{P}_{\mu_0}$-null set is independent of the test function $\phi \in {\cal S}(\mathbb{R}^d)$ in Theorem \ref{MPforSDSM} and of $x\in\mathbb{R}^d$ in Theorem \ref{lt_th1}, hence the same holds also in $(\ref{LT})$.
Note also that the value of the local time does not depend on parameter $\lambdambda > 0$ (although it does vary with the dimension $d$ of the space). Theorem \mathbb{R}ef{lt_th1} establishes the existence of the local time $\Lambdambda^x_t$ for SDSM directly through the characterization provided by (\mathbb{R}ef{TanakaI}) and (\mathbb{R}ef{DELT}), an explicit Tanaka formula (\mathbb{R}ef{TanakaII}) expressed through a Green function with a spatial singularity, in the spirit of the approach proposed in L\'opez-Mimbela and Villa \cite{Lopez-MimbelaVilla04} in their Theorem 3.1, of which our Theorem \mathbb{R}ef{lt_th1} is an extension. However, in order to make sense of it, we have to approximate this singular Green function and its derivatives by smooth functions to ensure that the various stochastic integrals in (\mathbb{R}ef{TanakaII}) are well-defined. Recently, using Malliavin Calculus, Hu et al. \cite{HNX19} proved H{\"o}lder continuity for a related class of processes, namely SDSM with the classical Dawson-Watanabe branching mechanism replaced by the more cohesive and therefore asymptotically smoother Mytnik-Sturm branching, under an initial measure $\tilde\mu_0$ with a bounded Radon-Nikodym derivative with respect to Lebesgue measure $\lambdambda_0$. More precisely they have shown that, in the case where $c$ is the identity matrix and $h$ is smooth enough (and matrix-valued), this regularized SDSM $\{\tilde\mu_t\}$ has a density $\tilde f_t=d\tilde\mu_t/d\lambdambda_0$ which is almost surely jointly H{\"o}lder continuous, with exponent $\beta_1\in(0,1)$ in space and $\beta_2\in(0,1/2)$ in time. In fact they showed that for every such $\beta_1$ and $\beta_2$, as well as for every choice of $p > 1$, there is a constant $c=c(T, d, h, p, \beta_1, \beta_2)> 0$ such that, for every $x,z\in\mathbb{R}^d$ and $0 < s < t \le T$ there holds \[ \left[ \mathbb{E} _{\tilde\mu_0} \left| \tilde f_t(z)-\tilde f_s(x)\mathbb{R}ight|^{2p} \mathbb{R}ight]^{1/2p} \leq cs^{-1/2} (|z - x|^{\beta_1}+|t - s|^{\beta_2}). \] Under these stronger initial conditions we can write the regularized local time as \[ \tilde\Lambdambda^x_{t} = \int_0^t \tilde f_s(x)ds \] which is thus almost surely jointly H{\"o}lder continuous, with (at least) the same exponents. Such a phenomenon is not likely to occur for the local time of our SDSM here, since there is such a density $f_t=d\mu_t/d\lambdambda_0$ when $d=1$ (Konno and Shiga \cite{KonnoShiga88} for Super-Brownian motion and Dawson et al \cite{DVW2000} for the general case) but not when $d\ge2$, not even in the Super-Brownian motion case (see Dawson and Hochberg \cite{DawsonHochberg79} and Perkins \cite{Perkins88}, \cite{Perkins02}). The sharpest estimates for the closed support of $\mu_t$ in the Super-Brownian motion case are found in Dawson and Perkins \cite{DawsonPerkins99} when $d\ge3$ and Le Gall and Perkins \cite{LeGallPerkins95} when $d=2$. See also Hong \cite{Hong18} for renormalization issues. The matter of the modulus of continuity of the local time $\Lambdambda^x_{t}$, including in the aforementioned special cases, is addressed in a separate paper \cite{DVW2021}. \setcounter{equation}{0}\sectionion{Branching model, SDSM and SPDE}\lambdabel{sec:SDSM} \setcounter{equation}{0} The construction of SDSM as a limit of a sequence of branching particle systems in $\mathbb{R}^d$ is presented here. 
First we consider the simplest situation in which there is a finite number $m$ of particles moving in $\mathbb{R}^d$ without branching, but submitted to a diffusive dynamic that includes an interacting property, which is produced by the random environment. For this we need some useful properties of finite dimensional diffusions. \subsection{Diffusive part of the finite systems and Hypothesis \ref{hyp:basicassumpGauss}} Consider a system of $m\ge1$ interacting particles $\{z_{k}: k=1,\ldots,m\}$, each moving continuously in $\mathbb{R}^d$ from some arbitrary starting point $z_{k}(0)\in \mathbb{R}^d$ and each driven by its own standard $d$-dimensional Brownian motion $B_{k}$ independently of one another, but all of them evolving coherently in a random medium prescribed by a common Brownian sheet $W$ on $\mathbb{R}^d$, which remains the same throughout the construction and is assumed to be independent of the set $\{B_{k}: k\ge1\}$. Under Hypotheses $\ref{hyp:basicassumpFilter}$ and $\ref{hyp:basicassumpElliptic}$ on diffusion coefficient $c$ (a $d\times d$ matrix of real-valued functions) and random medium intensity $h$ (a vector of $d$ real-valued functions), the system of stochastic integral equations \begin{eqnarray} \label{individual} z_k(t)=z_k(0) + \int_{0}^t c(z_{k}(s)) dB_{k}(s) + \int_{0}^{t } \int_{\mathbb{R}^d}h(y-z_{k}(s))W(dy,ds) \end{eqnarray} for $k=1,\ldots,m$, has a unique strong solution for any $m\ge1$ and every fixed starting point $(z_1(0), z_2(0), \ldots, z_m(0))\in (\mathbb{R}^d)^m$. It is a strong $\{{\cal F}_t\}_{t \geq 0}$-Markov process and there is no explosion in finite time. Existence and uniqueness of solution follow by Picard's method of successive approximations, as in Wang \cite{Wang97}. Bounded continuous coefficients preclude explosion. In other words (\ref{individual}) has a strong solution and pathwise uniqueness holds, in the sense that any two solutions with sample paths $\mathbb{P}$-almost surely in $C([0, \infty), (\mathbb{R}^d)^m)$ and identical starting points, must be equal with $\mathbb{P}$-probability $1$. The cloud of particles $\{z_{1}, z_{2}, \ldots , z_{m}\}$ as a whole clearly diffuses according to correlated dynamics; an important feature of the motion is that it generates new difficulties in the identification of the local time of SDSM, as will be seen shortly. For any integer $m \geq 1$, write $Z_m(t):= (z_{1}(t), \cdots, z_{m}(t))$ for the motion of the cloud of $m$-particles solving (\ref{individual}), $\mathbb{P}_x$ for the law of $Z_m$ with initial point $x\in (\mathbb{R}^d)^m$ and $\mathbb{E}_x$ for the expectation with respect to $\mathbb{P}_x$. Since $Z_m$ is a time-homogeneous $\{{\cal F}_t\}_{t \geq 0}$-Markov process, let $\{P^m_t: t\ge 0\}$ be the corresponding Markov semigroup on $B({(\mathbb{R}^d)^m})$ for $Z_m$, that is \begin{eqnarray}\label{eqn:Semigroup} P^{m}_t f(x):=\mathbb{E}_x \left[ f(Z_{m}(t) )\right] \qquad \hbox{for } t\geq 0 \hbox{ and } f \in B({(\mathbb{R}^d)^m}). \end{eqnarray} Note that $P^m_t$ is a Feller semigroup and maps each of $B({(\mathbb{R}^d)^m})$, $C_b({(\mathbb{R}^d)^m})$ and $C_0({(\mathbb{R}^d)^m})$ into itself.
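For readers who wish to visualize the correlated dynamics just described (the following Euler-type discretization, together with its particular choices of $c$ and $h$, is only an illustrative sketch and is not used anywhere in this paper), note that over a small time step $\Delta t$ the joint increments of the $m$ particles in $(\ref{individual})$ are approximately centred Gaussian with covariance matrix $\Gamma^{ij}_{pq}(Z_m(t))\,\Delta t$, where $\Gamma$ is the matrix of Hypothesis \ref{hyp:basicassumpElliptic}; this is precisely the structure that reappears in the generator $G_m$ below.
{\small\begin{verbatim}
import numpy as np

# Illustrative coefficients (not from the paper): c = sigma_c * Id, so that
# a_{pq}(x) = sigma_c^2 * delta_{pq}, and h_p(u) = exp(-|u|^2/2) for every p,
# for which rho_{pq}(z) = int h_p(u-z) h_q(u) du = pi^{d/2} exp(-|z|^2/4).

def a_matrix(x, sigma_c=1.0):
    d = x.shape[0]
    return sigma_c**2 * np.eye(d)

def rho_matrix(z):
    d = z.shape[0]
    return np.pi**(d / 2.0) * np.exp(-np.dot(z, z) / 4.0) * np.ones((d, d))

def gamma_matrix(X):
    # The (m d) x (m d) matrix Gamma^{ij}_{pq}(x_1,...,x_m) of Hypothesis 2.
    m, d = X.shape
    G = np.zeros((m * d, m * d))
    for i in range(m):
        for j in range(m):
            block = rho_matrix(X[i] - X[j])
            if i == j:
                block = block + a_matrix(X[i])
            G[i*d:(i+1)*d, j*d:(j+1)*d] = block
    return G

def euler_step(X, dt, rng):
    # Joint increments ~ N(0, Gamma(X) dt): the common noise W produces the
    # off-diagonal blocks rho(x_i - x_j) coupling distinct particles.
    m, d = X.shape
    L = np.linalg.cholesky(gamma_matrix(X))
    xi = rng.standard_normal(m * d)
    return X + np.sqrt(dt) * (L @ xi).reshape(m, d)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))   # m = 5 particles in dimension d = 2
for _ in range(1000):             # simulate up to time T = 1 with dt = 0.001
    X = euler_step(X, 0.001, rng)
print(X)
\end{verbatim}}
In this sketch the individual Brownian motions $B_k$ enter only through the diagonal blocks $a_{pq}(x_i)$, whereas the common sheet $W$ enters only through $\rho$, exactly as in $(\ref{gammaij})$.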
It\^{o}'s formula yields the following generator for $\{P^m_t: t\ge 0\}$ : for all $f\in C_b^{2}({(\mathbb{R}^d)^m})$, \begin{equation}lb\lambdabel{eqn:Gn} {G}_{m} f(x):= \frac{1}{2} \sum_{i,j=1}^{m} \hspace{1mm} \sum_{p, q=1}^{d} \mathbb{G} ammamma_{pq}^{ij}(x_{1}, \cdots, x_{m}) \frac{\partial^2}{\partial x_{ip} \partial x_{jq} } f(x_{1}, \cdots, x_{m}) {\mbox{\rm e}}nd{equation}lb where $x=(x_1, \cdots, x_m)\in (\mathbb{R}^d)^m$ has components $x_i = (x_{i1}, \cdots, x_{id}) \in \mathbb{R}^d$ for $1\leq i \leq m$ and $\mathbb{G} ammamma_{pq}^{ij}$ is defined by $(\mathbb{R}ef{gammaij})$. Hypothesis $\mathbb{R}ef{hyp:basicassumpElliptic}$ ensures that operator ${G}_{m}$ is uniformly elliptic --- see Subsection \mathbb{R}ef{app:pf_ellip1} for a proof. Following Stroock and Varadhan \cite{StroockVaradhan79}, it is useful to view process $\{Z_m(t):{t \geq 0}\}$ as a solution to the $({G}_{m}, \delta_{Z_m(0)})$-martingale problem on $(\Omega, {\cal F}, \{{\cal F}_t\}_{t \geq 0}, \mathbb{P} )$ for any fixed starting point $Z_m(0)\in (\mathbb{R}^d)^m$, meaning that, for every choice of $f\in C_c^{\infty}({(\mathbb{R}^d)^m})$, the process $f(Z_m(t))-\int_{0}^{t} {G}_{m} f(Z_m(s))ds$ is an ${\cal F}_t$-martingale. We say this martingale problem is well-posed (or has a unique solution) if there exists at least one solution and any two solutions have the same finite dimensional distributions. We also need the following summary of several known results from the literature. \begin{lemma} \lambdab{FellerProp} Under Hypotheses $\mathbb{R}ef{hyp:basicassumpFilter}$ and $\mathbb{R}ef{hyp:basicassumpElliptic}$, the following statements hold for every choice of $m\geq 1$. \begin{itemize} \item For any initial value $Z_m(0)\in (\mathbb{R}^d)^m$, the $({G}_{m}, \delta_{Z_m(0)})$-martingale problem is well-posed. The trajectories of $\{Z_m(t):{t \geq 0}\}$ are in $C([0, \infty), (\mathbb{R}^d)^m)$. \item $P_t^m f (x)$, as a function of $(t,x)$, belongs to $\cup_{t>0}C_b^{1,2}((0, t] \times (\mathbb{R}^d)^m)$ when $t>0$, for every choice of $f\in C_0({(\mathbb{R}^d)^m})$, and $\{P_t^m\}$ is a Feller semigroup mapping $C_0^2((\mathbb{R}^d)^m)$ into itself. \item $\{P_t^m: t\ge 0\}$ has a transition probability density when $t > 0$, i.e., there is a function $q_t^{m}(x,y) > 0$ which is jointly continuous in $(t,x,y) \in (0, \infty) \times (\mathbb{R}^d)^m \times (\mathbb{R}^d)^m$ everywhere and such that there holds $P_t^m f (\cdot)=\int_{(\mathbb{R}^d)^m} f(y) q_t^{m}(\cdot,y)dy$ when $t > 0$, for every $f\in C_0({(\mathbb{R}^d)^m})$. \item For each choice of $T > 0$, $d\ge1$ and $m\ge1$, there are positive constants $a_1$ and $a_2$ such that, for any choice of $1\le p,p' \le dm$ and nonnegative integers $r$, $s$ and $s'$ verifying $0\le 2r+s+s'\le2$, \begin{equation}lb \lambdab{LSU} \left|\frac{\partial^{r}}{\partial t}\frac{\partial^{s}}{\partial y_{p}}\frac{\partial^{s'}}{\partial y_{p'}} q_t^{m}(x,y)\mathbb{R}ight| \leq \frac{a_1}{t^{(dm+2r+s+s')/2}}{\mbox{\rm e}}xp{\left\{- a_2\left( \frac{|y-x|^{2}}{t} \mathbb{R}ight)\mathbb{R}ight\}} {\mbox{\rm e}}nd{equation}lb holds everywhere in $(t,x,y)\in(0,T) \times (\mathbb{R}^d)^m \times (\mathbb{R}^d)^m$ with $y=(y_1,\ldots,y_{dm})$. \item For each choice of $T > 0$, $d\ge1$ and initial data $(\xi, \tau) \in \mathbb{R}^d \times [0,T)$, there is a unique fundamental solution $\mathbb{G} ammamma(x,t; \xi, \tau)$ to ${G}_{1}\mathbb{G} ammamma-\partial_t\mathbb{G} ammamma = 0$. 
Moreover, there exist two constants $c > 0$ and $c_0 > 0$, such that, for all nonnegative integers $r,s_1,s_2,\ldots,s_d$ verifying $0\le l=2r + s_1+s_2+ \ldots + s_d \le 2$ and writing $\partial^{l}=\partial^{r}_t\partial^{s_1}_{x_1}\partial^{s_2}_{x_2}\cdots\partial^{s_d}_{x_d}$ with $\partial^{0}$ for the identity, there holds, for every choice of $\alphapha\in(0,1)$, \begin{equation}lb \lambdab{formulaI} & & |\partial^{l}\mathbb{G} ammamma(x, \xi; t, \tau) -\partial^{l}\mathbb{G} ammamma(x, \xi^{'}; t, \tau^{'})| \nonumber \\ \leq & & c( |\xi-\xi^{'}|^{\alphapha} + |\tau - \tau^{'}|^{\alphapha/2}) \bigg[ (t-\tau)^{- (d+l)/2} {\mbox{\rm e}}xp{ \{- c_0 \frac{|x - \xi|^2}{t - \tau} \} } \nonumber \\ & & \hspace{5cm} + (t-\tau^{'})^{- (d+l)/2} {\mbox{\rm e}}xp{\{-c_0 \frac{|x - \xi^{'}|^2}{t - \tau^{'}}\}} \bigg]. {\mbox{\rm e}}nd{equation}lb {\mbox{\rm e}}nd{itemize} {\mbox{\rm e}}le \noindent {\bf Remark} One important consequence of Lemma \mathbb{R}ef{FellerProp} is that $C_0^2((\mathbb{R}^d)^m)$ is a core for generator ${G}_{m}$ (see Propositions 1.3.3 and 8.1.6 in Ethier and Kurtz \cite{EthierKurtz86}). \noindent {\bf Proof: }\ Since $h_p \in L^1(\mathbb{R}^d) \cap {\mathbb{R}m Lip}_b (\mathbb{R}^d) \subset L^2(\mathbb{R}^d)$ holds for all $p\ge1$, so does $\mathbb{R}ho_{pq}\in {\mathbb{R}m Lip}_b(\mathbb{R}^d)$ and $\mathbb{G} ammamma_{p,q}^{i,j}\in {\mathbb{R}m Lip}_b((\mathbb{R}^d)^m)$ for all $p,q\ge1$. Hypothesis \mathbb{R}ef{hyp:basicassumpElliptic} is stronger than the conditions in each of Theorem 6.3.4 p.152, Theorem 3.2.1 p.71 or Corollary 3.2.2 p.72 in Stroock and Varadhan \cite{StroockVaradhan79}, hence the Feller property holds true for Markov semigroup $P^m_t$ in (\mathbb{R}ef{eqn:Semigroup}) and the first three statements follow, using the uniform density of $C_c^{\infty}({(\mathbb{R}^d)^m})$ in $C_0({(\mathbb{R}^d)^m})$. The fourth, the upper bound in (\mathbb{R}ef{LSU}) on the transition density $q_t^{m}(x,y)$ of semigroup $\{P_t^{m}: t\ge 0\}$ generated by $G_{m}$ from (\mathbb{R}ef{eqn:Gn}), is a consequence of Equation (13.1) of Lady\u{z}enskaja et al. (\cite{LSU68} p.376). The fifth ensues from Theorem 5.3.5 of Garroni and Menaldi \cite{GarroniMenaldi92}, as ${G}_{1}-\partial_t$ is uniformly parabolic whenever ${G}_{1}$ is uniformly elliptic. $\square$ \noindent {\bf Remark:} When combined with Hypothesis $\mathbb{R}ef{hyp:basicassumpElliptic}$, condition $(\mathbb{R}ef{nonBA})$ in Hypothesis \mathbb{R}ef{hyp:basicassumpGauss} is equivalent to \begin{equation}lb \lambdab{H4} \sup_{x\in \mathbb{R}^d} \sup_{0 < t \leq T} \langle} \def\>{\rangleq_t^{1}(\cdot,x), \mu_0\> < \infty, {\mbox{\rm e}}nd{equation}lb where $q_t^{1}$ is the one particle transition density with generator $G_1$ on $\mathbb{R}^d$ given by either $(\mathbb{R}ef{eqn:Gn1})$ or $(\mathbb{R}ef{eqn:Gn})$. This is a consequence of the uniform ellipticity, which provides us with both an upper bound $(\mathbb{R}ef{LSU})$ as well as the corresponding lower bound : there exist four positive constants $a^*$, $b$, $c$ and $A^*$ such that \begin{eqnarray} \lambdabel{Aronsonbounds} a^* \cdot \varphirphi_{bs} (y-x) \leq q_s^{1}(x,y) \leq A^* \cdot \varphirphi_{cs}(y-x) {\mbox{\rm e}}nd{eqnarray} holds for any $x, y \in \mathbb{R}^d$ and $s > 0$ (Aronson \cite[Theorem 10]{Aronson68}). 
Notice also that any measure $\mu_0$ with a finite Radon-Nikodym derivative with respect to Lebesgue measure $\lambdambda_0$ (finite as in finitely $\lambdambda_0$-integrable) satisfies $ \sup_{t > 0} \sup_{x\in \mathbb{R}^d} \langle} \def\>{\rangle \varphirphi_t(x-\cdot), \mu_0\> < \infty$ and therefore Hypothesis $\mathbb{R}ef{hyp:basicassumpGauss}$ as well. In particular this is the case for measures $I_a(x)dx$, for all choices of $a\ge0$. The need for Hypothesis $\mathbb{R}ef{hyp:basicassumpGauss}$ stems from the following illustration. \begin{example} Let $q_t^{1}$ is the one particle transition density with generator $G_1$ on $\mathbb{R}^d$, made explicit in (\mathbb{R}ef{eqn:Gn}) above. For initial measure $\mu_0=\delta_0$ and any $t >0$, there holds \begin{eqnarray} \lambdabel{ex} \hspace*{8mm}\int_0^t \int_{\mathbb{R}^d}q_s^{1}(y,x) \delta_0(dy) ds = \int_0^t q_s^{1}(0,x)ds \left\{ \begin{array}{lll} < \infty \quad & \mbox{if $x=0$}, & d=1 \\ < \infty \quad & \mbox{if $x \neq 0$} & d=1, \\ = \infty \quad & \mbox{if $x_1x_2=0$}, & d=2 \\ < \infty \quad & \mbox{if $x_1x_2 \neq 0$} & d=2. {\mbox{\rm e}}nd{array} \mathbb{R}ight. {\mbox{\rm e}}nd{eqnarray} {\mbox{\rm e}}nd{example} From this example, we see that, if the initial measure has an atom and $d \geq 2$, the existence of a continuous local time for SDSM is questionable. This motivates the constraint on the family of initial measures set forth in Hypothesis \mathbb{R}ef{hyp:basicassumpGauss}. \noindent {\bf Remark:} Lebesgue measure $\lambdambda_0$ on $\mathbb{R}^d$ satisfies Hypothesis \mathbb{R}ef{hyp:basicassumpGauss} for all $d\ge1$ but satisfies Hypothesis \mathbb{R}ef{hyp:basicassumpUniformInteg} when and only when $a>d$. In general, when $a>d$, any measure $\mu_0 \in M_a(\mathbb{R}^d)$ which satisfies Hypothesis \mathbb{R}ef{hyp:basicassumpGauss} will also satisfy Hypothesis \mathbb{R}ef{hyp:basicassumpUniformInteg}. Indeed, using $\langle} \def\>{\rangleI_a(\cdot+w), \mu_0 \> = \lim_{t\downarrow0}\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}I_a(x+w+y)\varphirphi_{t}(y)\lambdambda_0(dy)\mu_0(dx)$, the translation invariance of Lebesgue measure yields \begin{equation}lb \lambdab{Tonelli} \int_{\mathbb{R}^d}\int_{\mathbb{R}^d}I_a(x+w+y)\varphirphi_{t}(y)\lambdambda_0(dy)\mu_0(dx) &=&\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}I_a(w+y)\varphirphi_{t}(y-x)\lambdambda_0(dy)\mu_0(dx) \nonumber \\ &\le& \sup_{y\in \mathbb{R}^d} \sup_{0 < t \leq T} \langle} \def\>{\rangle\varphirphi_{t}(y-\cdot), \mu_0\> \cdot \sup_{w\in \mathbb{R}^d} \langle} \def\>{\rangleI_a(w+\cdot), \lambdambda_0\> \nonumber \\ && < \infty {\mbox{\rm e}}nd{equation}lb and $\sup_{w\in \mathbb{R}^d} \langle} \def\>{\rangleI_a(\cdot+w), \mu_0 \> < \infty$ ensues. \subsection{Branching Particle Systems} Ren et al. \cite{RenSongWang09} handled the construction of SDSM and the derivation of the associated SPDE, using a tightness argument for the laws on $D([0, \infty), M_0(D))$ of the trajectories of high-density particles, but only when these particles move in a bounded domain $D \subset \mathbb{R}^d$ with killing boundary and the initial data is a finite measure $\mu_{0} \in M_0(D)$. 
There are some significant differences between the construction of our model in $\mathbb{R}^d$ and theirs, notably in the topological structure of the spaces required, so, for the sake of completeness, some details are provided here about the relevant branching particle systems in $\mathbb{R}^d$ when $d\ge1$, as well as the construction of SDSM in $\mathbb{R}^d$ and the associated SPDE on $\mathbb{R}^d$. Additional details can be found in Ren et al. \cite{RenSongWang09}. A prior construction for SDSM on the whole real line ($d=1$) can be found in Dawson et al. \cite{DLW01}, where the initial data is bounded and use is made of the canonical space $C([0, \infty), M_0(\mathbb{R}))$ since the trajectories are less irregular than in the current case ($d\ge2$), thus affording sharper inequalities. Earlier still, Konno and Shiga \cite{KonnoShiga88} built super stable processes on $\mathbb{R}^d$ with unbounded initial data but no interacting term ($h{\mbox{\rm e}}quiv0$). In both cases the construction is on the canonical space $C([0, \infty), M_a(\mathbb{R}^d))$, an approach that is unavailable here since the interacting term depends on space-time stochastic integrals that control the evolution of the strong solution of the associated SDE. Therefore we must also begin with the full definition of the interacting branching particle systems. In summary, each particle has an exponentially distributed lifetime at the end of which it either splits into finitely many identical particles or dies, independently of one another and of the above diffusion mechanism. The newborn particles (if there are any) then start afresh, with their own independent exponential lifetimes, at the spatial position where the parent left off and the new (enlarged or diminished) cloud of particles continues its collective evolution in $\mathbb{R}^d$ according to (\mathbb{R}ef{individual}). For the resulting branching particle systems, particles undergo a finite-variance branching at independent exponential times and have interacting spatial motions powered by diffusions and a common white noise. Under critical conditions on the branching rate, the distribution of the number of offsprings (the mean number of offspring is asymptotically one) and the (common) mass of the particles ensuring a non trivial high density limit, the sequence of empirical measure processes representing the proportion of the mass carried by those particles alive at a given time and located within a given set, converges to a limiting measure-valued Markov process $\{\mu_t:t\ge0\}$. We first introduce an index set in order to identify each particle in the branching tree structure. Let $\mathbb{R}e$ be the set of all multi-indices, i.e., strings of the form $\xi = n_{1} \oplus n_{2} \oplus \cdots \oplus n_{k}$, where the $n_{i}$'s are non-negative integers. Let $| \xi |$ denote the length of $\xi$. We provide $\mathbb{R}e$ with the arboreal ordering: $m_{1} \oplus m_{2} \oplus \cdots \oplus m_{p} \prec n_{1} \oplus n_{2} \oplus \cdots \oplus n_{q}$ if and only if $p \leq q$ and $m_{1}=n_{1}, \cdots, m_{p}=n_{p}.$ If $| \xi | = p$, then $\xi$ has exactly $p-1$ predecessors, which we shall denote respectively by $\xi - 1$, $\xi - 2, \cdots, \xi - | \xi |+1 $. For example, with $\xi = 6 \oplus 18 \oplus 7 \oplus 9$, we get $ \xi - 1 = 6 \oplus 18 \oplus 7$, $\xi - 2 = 6 \oplus 18$ and $\xi - 3 = 6$. 
We also define an $\oplus$ operation on $\mathbb{R}e$ as follows: if ${\mbox{\rm e}}ta \in \mathbb{R}e$ and $|{\mbox{\rm e}}ta| = m$, for any given non-negative integer $k$, ${\mbox{\rm e}}ta \oplus k \in \mathbb{R}e$ and ${\mbox{\rm e}}ta \oplus k$ is an index for a particle in the $(m + 1)$-th generation. For example, when ${\mbox{\rm e}}ta = 3 \oplus 8 \oplus 17 \oplus 2$ and $k = 1$, we have ${\mbox{\rm e}}ta \oplus k = 3 \oplus 8 \oplus 17 \oplus 2 \oplus 1 $. Let $\{B_{\xi}=(B_{\xi 1}, \cdots, B_{\xi d})^T: \, \xi \in \mathbb{R}e\}$ be an independent family of standard $\mathbb{R}^d$-valued Brownian motions, where $B_{\xi k}$ is the $k$-th component of the $d$-dimensional Brownian motion $B_{\xi}$, and $W$ a Brownian sheet on $\mathbb{R}^d$. Assume that $W$ and $\{B_{\xi}: \xi \in \mathbb{R}e\}$ are defined on a common filtered probability space $(\Omega, {\cal F}, \{{\cal F}_t\}_{t\geq 0}, \mathbb{P} )$, and independent of each other. Under Hypotheses $\mathbb{R}ef{hyp:basicassumpFilter}$ and $\mathbb{R}ef{hyp:basicassumpElliptic}$, just as for equations (\mathbb{R}ef{individual}) (see Lemma 3.1 of Dawson et al. \cite{DLW01}), for every index $\xi\in \mathbb{R}e$ and initial data $z_{\xi}(0)$, by Picard's iteration method, there is a unique strong solution $z_{\xi}(t)$ to the equation \begin{equation}lb \lambdabel{3.1} z^{T}_{\xi}(t)=z^{T}_{\xi}(0) + \int_{0}^{t} c(z_{\xi}(s)) dB_{\xi}(s) + \int_{0}^{t}\int_{\mathbb{R}^d}h(y-z_{\xi}(s))W(dy,ds). {\mbox{\rm e}}nd{equation}lb Since the strong solution of (\mathbb{R}ef{3.1}) only depends on the initial state $z_{\xi}(0)$, the Brownian motion $B_{\xi}:=\{B_{\xi}(t):t\ge0\}$ and the common $W$, this solution can be written as $z_{\xi}(t)=\mathbb{P} hi (z_{\xi}(0),B_{\xi},t)$ for some measurable $\mathbb{R}^d$-valued map $\mathbb{P} hi$ (dropping $W$ for the sake of simplifying the formulas coming up). For every $\phi\in C_b^{2}(\mathbb{R}^d)$ and $t>0$, It\^{o}'s formula yields \begin{equation}lb \lambdab{3.4} && \hspace{-1cm} \phi(z_{\xi}(t)) - \phi(z_{\xi}(0)) \\ &=& \sum_{p=1}^{d}\left[\int_{0}^{t} \left( \partial_{p} \phi(z_{\xi}(s)) \mathbb{R}ight) \sum_{i=1}^{d} c_{pi}(z_{\xi}(s)) dB_{\xi i}(s) \mathbb{R}ight. \nonumber \\ & & \left.+ \int_{0}^{t} \int_{\mathbb{R}^d} \partial_{p} {\phi}(z_{\xi}(s)) h_p(y- z_{\xi}(s))W(dy,ds) \mathbb{R}ight] \nonumber \\ & & + \frac{1}{2} \sum_{p,q=1}^{d}\int^{t}_{0}\left( \partial_{p} \partial_{q} {\phi}(z_{\xi}(s)) \sum_{i=1}^{d}c_{pi}(z_{\xi}(s)) c_{qi}(z_{\xi}(s)) \mathbb{R}ight) \, ds \nonumber \\ & & + \frac{1}{2} \sum_{p,q=1}^{d} \int_{0}^{t}( \partial_{p} \partial_{q} {\phi}(z_{\xi}(s))) \int_{\mathbb{R}^d} h_p(y- z_{\xi}(s) ) h_q(y- z_{\xi}(s) )dy ds. \nonumber {\mbox{\rm e}}nd{equation}lb With the spatial motion of each particle modelled by (\mathbb{R}ef{3.1}), we next spell out the critical conditions governing the branching mechanism. For every positive integer $n\geq 1$, there is an initial system of $m_{0}^{(n)}$ particles. Each particle has mass $1/ {\theta}^{n}$ and branches independently at rate $\gamma \theta^{n}$. Let $q^{(n)}_k$ denote the probability of having $k$ offspring when a particle survives in $\mathbb{R}^d$. 
The sequence $\{q^{(n)}_{k}\}$ is assumed to satisfy the following conditions: $$ q^{(n)}_{k} = 0 \qquad \hbox{if } k=1 \hbox{ or } k \geq n+1, $$ and $$ \sum_{k=0}^{n} k q^{(n)}_{k} =1 \quad \hbox{ and } \quad \lim_{n \mathbb{R}ightarrow \infty}\sup_{k\geq 0} |q^{(n)}_{k}-p_{k}|=0 , $$ where $\{p_k: k=0, 1, 2, \cdots \}$ is the limiting offspring distribution which is assumed to satisfy following conditions: $$ p_{1}=0 , \hspace{1mm} \sum_{k=0}^{\infty} kp_{k} =1 \hspace{1mm} \hbox{ and } \hspace{1mm} m_2 := \sum_{k=0}^{\infty}k^{2}p_{k} < \infty. $$ Let $m_c^{(n)}:= \sum_{k=0}^n(k-1)^4q_k^{(n)}$. The sequence $\{m_c^{(n)}: n\geq 1\}$ may be unbounded, but we assume that $$ \lim_{n \mathbb{R}ightarrow \infty}\frac{m_c^{(n)}}{\theta^{2n}}=0 \hspace{1cm}\mbox{ for any $\theta > 1$. } $$ Assume that $m_{0}^{(n)} \leq \hbar \, {\theta}^{n} $, where $\hbar> 0$ and $\theta > 1$ are fixed constants. Finally, define $m^{(n)}_2 := \sum_{k=0}^{n} k^{2}q^{(n)}_{k}$, $\sigma^2_{n} :=m^{(n)}_2 -1$ and $\sigma^2 := m_2 -1$. Note that $\sigma^2_n$ and $\sigma^2$ are the variance of the $n$-th stage and the limiting offspring distribution, respectively. We have $\sigma^2_{n} < \infty$ and $\lim_{n \mathbb{R}ightarrow \infty} \sigma^2_n = \sigma^2$. For a fixed stage $n \ge 1$, particle $\xi \in\mathbb{R}e$ is located at $x_{\xi}(t)$ at time $t$; $\{O^{(n)}_{\xi}:\, \xi\in\mathbb{R}e\}$ is a family of i.i.d. random variables with $\mathbb{P} (O^{(n)}_{\xi} = k) = q^{(n)}_{k}$ and $k=0,1,2,\cdots$; and $\{C^{(n)}_{\xi}: \, \xi \in \mathbb{R}e\}$ be a family of i.i.d. real-valued exponential random variables with parameter $\gamma {\theta}^{n}$, which will serve as lifetimes of the particles. We assume $W$, $\{B_{\xi}: \, \xi \in{\mathbb{R}e}\}$, $\{C^{(n)}_{\xi}: \, \xi \in{\mathbb{R}e}\}$ and $\{O^{(n)}_{\xi}: \, \xi \in\mathbb{R}e\}$ are all independent. Once the particle $\xi$ dies, it is sent at once to a cemetery point noted by $\partial$. To simplify our notation further, we drop the superscript $(n)$ from the random variables. If the death location of the particle $\xi-1$ is in $\mathbb{R}^d$, then the birth time $\beta(\xi)$ of the particle $\xi$ is given by $$ \beta(\xi) := \left\{ \begin{array}{ll} \sum_{j=1}^{| \xi |-1}C_{\xi - j}, & \mbox{if $ O_{\xi -j} \geq 2$ for every $j = 1, \cdots , |\xi|-1$\,;} \\ \infty, & \mbox{otherwise}. {\mbox{\rm e}}nd{array} \mathbb{R}ight. $$ The death time of the particle $\xi$ is given by $\zeta(\xi) = \beta(\xi) + C_{\xi}$ and the indicator function of the lifespan of $\xi$ is denoted by ${\mbox{\rm e}}ll_{\xi}(t) := 1_{[\beta(\xi), \zeta(\xi) )}(t)$. Define $x_{\xi}(t) = \partial$ if either $ t < \beta(\xi) $ or $ t \geq \zeta(\xi)$. We make the convention that any function $f$ defined on $\mathbb{R}^d$ is automatically extended to $\mathbb{R}^d \cup \{ \partial \}$ by setting $f(\partial) = 0$. Given $\mu_{0}\in { M}_{a}( \mathbb{R}^d)$, let $\mu_0^{(n)}:=(1/\theta^n)\sum_{\xi=1}^{m_{0}^{(n)}}\delta_{x_{\xi}(0)}$ be constructed upon a collection of initial starting points $\{x_{\xi}(0) \}$ for each $n\geq1$, in order for $\mu_0^{(n)} \mathbb{R}ightarrow\mu_{0}$ to hold as $n \mathbb{R}ightarrow \infty$. Let ${\cal N}_{1}^{n} := \{1,2, \cdots, m_{0}^{(n)} \}$ be the set of indices for the first generation of particles. 
For any $\xi \in{\cal N}_{1}^{n} \cap \Re$, if $x_{\xi}(0) \in \mathbb{R}^d$, define
\begin{eqnarray} \label{3.5}
x_{\xi}(t):= \begin{cases} \Phi (x_{\xi}(0), B_{\xi}, t), &\qquad t \in [0, C_{\xi}) , \\ \partial , &\qquad t < 0 \mbox{ or } t \geq C_{\xi}, \end{cases}
\end{eqnarray}
and
$$
x_{\xi}(t) \equiv \partial \hspace{8mm}\mbox{for any $\xi \in (\mathbb{N} \setminus {\cal N}_{1}^{n}) \cap \Re$ and $t \geq 0$}.
$$
For the path of the second generation, let $\bar{\zeta}_1 = \min\{C_{\xi}: \xi \in {\cal N}_{1}^{n} \cap \Re\}$. By Ikeda et al. \cite{INW68} and \cite{INW69}, for each $\omega \in \Omega$, there exists a measurable selection $\xi_0= \xi_0(\omega) \in {\cal N}_{1}^{n} \cap \Re $ such that $\bar{\zeta}_1 = C_{\xi_0}$. If $\xi_0\in {\cal N}_1^n \cap \Re$ and $x_{\xi_{0}}(t_0) = \partial$ for some $t_0 > 0$, then $x_{\xi}(t) \equiv \partial$ for any $\xi \succ \xi_0 $ and any $t \geq \zeta({\xi}_{0})\vee t_0$. Otherwise, if $x_{{\xi}_{0}}(\zeta({\xi}_{0})-) \in \mathbb{R}^d$ and $O_{{\xi}_{0}}(\omega)=k \geq 2$, define for every ${\xi} \in \{{\xi}_{0} \oplus i: \ i=1,2, \cdots,k \}$,
\begin{eqnarray} \label{3.6}
x_{\xi}(t):=\begin{cases} \Phi (x_{{\xi}_{0}}(\zeta({\xi}_{0})-), B_{\xi}, t), & \quad t \in [\beta(\xi) , \, \zeta({\xi})), \\ \partial , &\quad t \ge \zeta({\xi}). \end{cases}
\end{eqnarray}
If $O_{{\xi}_{0}}(\omega)=0$, define $x_{\xi}(t) \equiv \partial$ for $0 \leq t < \infty$ and ${\xi} \in \{{\xi}_{0} \oplus i: i \geq 1 \}$. More generally, for any integer $m\geq 1$, let ${\cal N}_{m}^{n} \subset \Re$ be the set of all indices for the particles in the $m$-th generation. If $\xi_0\in {\cal N}_m^n$ and if $x_{{\xi}_{0}}(\zeta({\xi}_{0})-) = \partial$, then $x_{\xi}(t) \equiv \partial$ for any $\xi \succ \xi_0$ and any $t \geq \zeta({\xi}_{0}) $. Otherwise, if $x_{{\xi}_{0}}(\zeta({\xi}_{0})-) \in \mathbb{R}^d$ and $O_{\xi_{0}}(\omega)=k \geq 2$, define for ${\xi} \in \{{\xi}_{0} \oplus i: i=1,2, \cdots,k \}$
\begin{eqnarray} \label{3.7}
x_{\xi}(t):=\begin{cases} \Phi (x_{{\xi}_{0}}(\zeta({\xi}_{0})-), B_{\xi}, t), & \quad t \in [\beta(\xi) , \, \zeta({\xi})), \\ \partial , &\quad t \ge \zeta({\xi}). \end{cases}
\end{eqnarray}
If $O_{{\xi}_{0}}(\omega)=0$, define $x_{\xi}(t) \equiv \partial$ for $0 \leq t < \infty$ and for ${\xi} \in \{{\xi}_{0} \oplus i: i \geq 1 \}$. Continuing in this way, we obtain a branching tree of particles for any given $\omega$, with random initial state taking values in $\left\{x_{1}(0),x_{2}(0),\cdots,x_{m_{0}^{(n)}}(0)\right\}$.
\subsection{Tightness and SPDE for SDSM}
We proceed to show that the sequence of laws of the empirical processes
\begin{equation} \label{4.1}
\mu^{ (n)}_{t} := \frac{1}{{\theta}^{n}}\sum_{\xi \in{\Re}}\delta_{x_{\xi}(t)},
\end{equation}
associated with the branching particle system $\{x_{\xi}\}$ constructed in the last subsection, converges weakly as $n\to \infty$ on $D([0, \infty), {\cal S}^{\prime}(\mathbb{R}^d))$, that its weak limit actually lives on $C([0, \infty), {\cal S}^{\prime}(\mathbb{R}^d))$ and that this limit is our SDSM from (\ref{SPDEc}).
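Before turning to the convergence itself, and purely for orientation, here is a minimal one-dimensional Python sketch of a single stage $n$ of the particle system just constructed: an Euler scheme for the motion (\ref{3.1}), with the Brownian sheet approximated on a finite spatial grid, combined with rate-$\gamma\theta^n$ exponential killing and offspring sampling. The coefficient $c$, the kernel $h$, the initial positions and the binary offspring law are hypothetical placeholders chosen only to make the sketch runnable; no claim of numerical accuracy is intended.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# hypothetical ingredients (placeholders, not dictated by the paper)
c = lambda x: 1.0                      # diffusion coefficient c(x)
h = lambda r: np.exp(-r ** 2)          # interaction kernel h(y - x)
offspring = lambda size: 2 * rng.integers(0, 2, size)   # 0 or 2 children

n, theta, gamma = 3, 2.0, 1.0          # stage n, mass 1/theta^n, rate gamma*theta^n
dt, T = 1e-3, 0.1                      # Euler step and time horizon
grid = np.linspace(-10.0, 10.0, 201)   # spatial grid carrying the sheet W
dy = grid[1] - grid[0]

x = rng.normal(size=int(theta ** n))   # m_0^(n) = theta^n initial positions

for _ in range(int(T / dt)):
    # common white-noise increments of W, one per grid cell
    dW = rng.normal(scale=np.sqrt(dy * dt), size=grid.size)
    # Euler step for (3.1): individual Brownian term + common-sheet term
    x = (x + c(x) * np.sqrt(dt) * rng.normal(size=x.size)
           + (h(grid[None, :] - x[:, None]) * dW[None, :]).sum(axis=1))
    # branching: a particle dies in [t, t+dt) with probability gamma*theta^n*dt
    dies = rng.random(x.size) < gamma * theta ** n * dt
    kids = offspring(dies.sum())
    x = np.concatenate([x[~dies], np.repeat(x[dies], kids)])

# empirical measure (4.1): mass 1/theta^n at each surviving position
phi = lambda z: np.exp(-z ** 2)
print(len(x), np.sum(phi(x)) / theta ** n)   # <phi, mu_T^(n)>
\end{verbatim}
The same loop, run for increasing $n$ with a refined grid and time step, is the discrete picture behind the tightness argument below.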
First recall that every finite positive Radon measure on $\mathbb{R}^d$ defines a tempered distribution on $\mathbb{R}^d$; the injection of $M_0(\mathbb{R}^d)$ into ${\cal S}' (\mathbb{R}^d)$ is in fact continuous by Proposition 5.1 in Dawson and Vaillancourt \cite{DV95}. We can therefore extend the bracket notation $\langle \phi,\mu\rangle$ to the duality between ${\cal S} (\mathbb{R}^d)$ and ${\cal S}' (\mathbb{R}^d)$ without ambiguity. Sometimes we write $\mu(\phi)=\langle \phi,\mu\rangle$. Furthermore this induces continuous injections of $C([0, \infty), M_0(\mathbb{R}^d))$ into $C([0, \infty), {\cal S}^{\prime}(\mathbb{R}^d))$ and $D([0, \infty), M_0(\mathbb{R}^d))$ into $D([0, \infty), {\cal S}^{\prime}(\mathbb{R}^d))$. We now build the transfer function that maps the infinite measure-valued processes to finite ones. For each $a\ge0$, the map $T_{I_a} : M_a(\mathbb{R}^d) \rightarrow M_0(\mathbb{R}^d)$ defined by
\begin{equation} \label{transfer}
T_{I_a}(\mu)(A) := \int_{A} I_a(x) \mu(dx)=\int_{A} (1 + |x|^2)^{- a/2} \mu(dx),
\end{equation}
for any $A \in {\cal B}(\mathbb{R}^d)$, is clearly homeomorphic (continuous and bijective, with a continuous inverse), hence also Borel measurable. It also induces continuous mappings from $C([0, \infty), M_a(\mathbb{R}^d))$ into $C([0, \infty), {\cal S}^{\prime}(\mathbb{R}^d))$ and $D([0, \infty), M_a(\mathbb{R}^d))$ into $D([0, \infty), {\cal S}^{\prime}(\mathbb{R}^d))$. For any $t>0$ and $A \in {\cal B}({\mathbb{R}}^d)$, define
\begin{equation} \label{4.2}
M^{(n)}(A \times (0,t]) := \sum_{\xi \in {\Re}} \frac{[O^{(n)}_{\xi} -1]}{{\theta}^{n}} 1_{\{x_{\xi}({\zeta(\xi)-}) \in A, \, \zeta(\xi) \leq t\}}, \,
\end{equation}
which describes the space-time related branching in the set $A$ up to time $t$.
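Returning for a moment to the transfer map (\ref{transfer}): on an atomic measure $\mu=\sum_i w_i\delta_{x_i}$ it simply reweights each atom by $I_a(x_i)=(1+|x_i|^2)^{-a/2}$. The toy Python snippet below (illustrative only, with a hypothetical atomic measure standing in for an element of $M_a(\mathbb{R})$) shows how a measure of unbounded total mass is sent to a finite one once $a$ is taken large enough.
\begin{verbatim}
import numpy as np

def I_a(x, a):
    # weight I_a(x) = (1 + |x|^2)^(-a/2), points x given as rows of a 2-d array
    return (1.0 + np.sum(x ** 2, axis=-1)) ** (-a / 2.0)

def transfer(weights, atoms, a):
    # T_{I_a}(mu) for mu = sum_i weights[i] * delta_{atoms[i]}:
    # same atoms, each weight multiplied by I_a(atom)
    return weights * I_a(atoms, a), atoms

N = 100000                                        # unit atoms at 0, 1, ..., N-1
atoms = np.arange(N, dtype=float).reshape(-1, 1)
weights = np.ones(N)

for a in (0.0, 2.0):
    w, _ = transfer(weights, atoms, a)
    print(a, w.sum())   # ~N for a = 0; converges (to about 2.08) for a = 2
\end{verbatim}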
By (\mathbb{R}ef{3.4}) we have a finite dimensional SPDE \begin{equation}lb \lambdab{DSPDE} & &\hspace{-1cm} \langle} \def\>{\rangle\phi(\cdot) I_a^{-1}(\cdot), I_a(\cdot)\mu^{(n)}_{t}\> -\langle} \def\>{\rangle\phi(\cdot) I_a^{-1}(\cdot), I_a(\cdot) \mu^{(n)}_{0}\> \nonumber \\ & = & \frac{1}{\sqrt{{\theta}^{n}}}\langle} \def\>{\rangle\phi(\cdot) I_a^{-1}(\cdot), I_a(\cdot)U^{(n)}_{t}\> + \langle} \def\>{\rangle\phi(\cdot) I_a^{-1}(\cdot), I_a(\cdot)X^{(n)}_{t}\> \nonumber \\ & & + \langle} \def\>{\rangle\phi(\cdot) I_a^{-1}(\cdot), I_a(\cdot)Y^{(n)}_{t}\> + \langle} \def\>{\rangle\phi(\cdot) I_a^{-1}(\cdot), I_a(\cdot) M^{(n)}_{t}\>, {\mbox{\rm e}}nd{equation}lb for every $\phi \in {\cal S} (\mathbb{R}^d)$ and every $a\ge0$, where, recalling that ${\mbox{\rm e}}ll_\xi(s)=1_{ [\beta (\xi), \, \zeta (\xi))}(s)$, \\ \begin{equation}nn & & U^{(n)}_{t}(\phi) := \frac{1}{\sqrt{{\theta}^{n}}} \sum_{\xi \in {\mathbb{R}e}} \hspace{1mm}\sum_{p, i=1}^{d} \int_{0}^{t} {\mbox{\rm e}}ll_{\xi}(s)\, \partial_p \phi(x_{\xi}(s)) c_{pi}(x_{\xi}(s))dB_{\xi i}(s)\,, \\ && X^{(n)}_{t}(\phi) := \sum_{p=1}^{d} \int_{0}^{t}\int_{\mathbb{R}^d}\langle} \def\>{\rangleh_p(y-\cdot) \partial_p \phi(\cdot), \mu^{(n)}_{s}\>W(dy,ds)\,, \\ {\mbox{\rm e}}nd{equation}nn \begin{equation}nn && Y^{(n)}_{t}(\phi) := \sum_{p,q=1}^{d} \int_{0}^{t}\left \langle} \def\>{\rangle \tfrac{1}{2} \partial_{p} \partial_{q} \phi(\cdot)\left[ \sum_{i=1}^{d}c_{pi}(\cdot)c_{qi}(\cdot) + \int_{\mathbb{R}^d}h_p(y - \cdot)h_q(y - \cdot)dy \mathbb{R}ight], \mu^{(n)}_{s}\mathbb{R}ight\>ds, \\ & & M^{(n)}_{t}(\phi) := \int_{0}^{t} \int_{\mathbb{R}^d} \phi(x)M^{(n)}(dx,ds) = \sum_{\xi \in {\mathbb{R}e}} \frac{[O^{(n)}_{\xi}-1]}{{\theta}^{n}} \phi (x_{\xi}({\zeta(\xi)-})) \, 1_{\{\zeta(\xi) \leq t\}} \,. {\mbox{\rm e}}nd{equation}nn The four terms in (\mathbb{R}ef{DSPDE}) represent the respective contributions to the overall motion of the finite particle system $\langle} \def\>{\rangle\phi,\mu^{(n)}_{t}\>$ in $\mathbb{R}^d$ by the individual Brownian motions ($U^{(n)}_{t}(\phi)$), the random medium ($X^{(n)}_{t}(\phi)$), the mean effect of interactive and diffusive dynamics ($Y^{(n)}_{t}(\phi)$), and the branching mechanism ($M^{(n)}_{t}(\phi)$). Denote by $ \nu^{(n)}_t := T_{I_a}(\mu^{(n)}_{t})$ the transferred measure of interest and similarly define \begin{equation}lb \lambdab{transf} v^{(n)}_t := I_a(\cdot)U^{(n)}_{t} , x^{(n)}_t := I_a(\cdot)X^{(n)}_{t}, y^{(n)}_t := I_a(\cdot)Y^{(n)}_{t}, m^{(n)}_t := I_a(\cdot) M^{(n)}_{t}. {\mbox{\rm e}}nd{equation}lb We can now adapt Theorem 3.3 in \cite{RenSongWang09} to our present context. \\ \begin{theorem} \lambdabel{ApMain} Assume Hypotheses $\mathbb{R}ef{hyp:basicassumpFilter}$ and $\mathbb{R}ef{hyp:basicassumpElliptic}$ are satisfied. With the notation above, we have following conclusions: \begin{description} \item{{\mbox{\rm e}}m (i)} $(\nu^{(n)}, v^{(n)}, x^{(n)}, y^{(n)}, m^{(n)})$ is tight on $D([0, \infty), ({\cal S}^{\prime}(\mathbb{R}^d))^{5})$. \item{{\mbox{\rm e}}m (ii)} {{\mbox{\rm e}}m (A Skorohod representation)}: Suppose that the joint distribution of a subsequence \[ (\nu^{(n_m)}, v^{(n_m)}, x^{(n_m)}, y^{(n_m)}, m^{(n_m)}, W) \] converges weakly to the joint distribution of \[(\nu^{(0)}, v^{(0)}, x^{(0)}, y^{(0)}, m^{(0)}, W). 
\] Then, there exists a probability space $(\widetilde{\Omega}, \widetilde{\cal F}, \widetilde{\mathbb{P} })$ and $D([0, \infty),{\cal S}'(\mathbb{R}^d))$-valued sequences $\{\widetilde{\nu}^{(n_{m})} \}$, $\{\widetilde{v}^{(n_{m})} \}$, $\{\widetilde{x}^{(n_{m})} \}$, $\{\widetilde{y}^{(n_{m})} \}$, $\{\widetilde{m}^{(n_{m})} \}$ and a $D([0, \infty), {\cal S}' (\mathbb{R}^d))$-valued sequence $\{{\widetilde{W}}^{(n_{m})}\}$ defined on it, such that \begin{equation}nn && \mathbb{P} \circ(\nu^{(n_m)}, v^{(n_m)}, x^{(n_m)}, y^{(n_m)}, m^{(n_m)}, W)^{-1} \\ && \hspace{1cm} = \widetilde{\mathbb{P} } \circ ( \widetilde{\nu}^{(n_{m})} , \widetilde{v}^{(n_{m})} , \widetilde{x}^{(n_{m})} , \widetilde{y}^{(n_{m})} , \widetilde{m}^{(n_{m})} , {\widetilde{W}}^{(n_{m})})^{-1} {\mbox{\rm e}}nd{equation}nn holds and, $\widetilde{\mathbb{P} }$-almost surely on $D([0, \infty),({\cal S}'(\mathbb{R}^d))^{6})$, \begin{equation}nn \left( \widetilde{\nu}^{(n_{m})} , \widetilde{v}^{(n_{m})} , \widetilde{x}^{(n_{m})} , \widetilde{y}^{(n_{m})} , \widetilde{m}^{(n_{m})} , \widetilde{W}^{(n_{m})}\mathbb{R}ight) \mathbb{R}ightarrow \left( \widetilde{\nu}^{(0)} , \widetilde{v}^{(0)} , \widetilde{x}^{(0)} , \widetilde{y}^{(0)} , \widetilde{m}^{(0)} , \widetilde{W}^{(0)}\mathbb{R}ight) \quad {\mbox{\rm e}}nd{equation}nn as $ m \to \infty$. \item{{\mbox{\rm e}}m (iii)} There exists a dense subset $\mathbb{X} i \subset [0, \infty)$ such that $[0, \infty) \setminus \mathbb{X} i$ is at most countable and for each $t \in \mathbb{X} i$ and each $\phi \in {\cal S}(\mathbb{R}^d)$, as $\mathbb{R}^{6}$-valued processes \begin{equation}nn && ( \widetilde{\nu}^{(n_{m})}_t(\phi) , \widetilde{v}^{(n_{m})}_t(\phi), \widetilde{x}^{(n_{m})}_t(\phi), \widetilde{y}^{(n_{m})}_t(\phi), \widetilde{m}^{(n_{m})}_t(\phi), {\widetilde{W}}^{(n_{m})}_t(\phi)) \\ && \hspace{1cm} \mathbb{R}ightarrow ( \widetilde{\nu}^{(0)}_t(\phi), \widetilde{v}^{(0)}_t(\phi), \widetilde{x}^{(0)}_t(\phi), \widetilde{y}^{(0)}_t(\phi), \widetilde{m}^{(0)}_t(\phi), {\widetilde{W}}^{(0)}_t(\phi)) {\mbox{\rm e}}nd{equation}nn in $L^{2}(\widetilde{\Omega}, \widetilde{\cal F}, \widetilde{\mathbb{P} })$ as $m \mathbb{R}ightarrow \infty$. Furthermore, let $\widetilde{\cal F}^{(0)}_t$ be the $\sigma$-algebra generated by $\{\widetilde{\nu}^{(0)}_{s}(\phi)$, $\widetilde{v}^{(0)}_{s}(\phi)$, $\widetilde{x}^{(0)}_{s}(\phi)$, $\widetilde{y}^{(0)}_{s}(\phi)$, $\widetilde{m}^{(0)}_{s}(\phi)$, ${\widetilde{W}}^{(0)}_{s}(\phi)\}$ for all $\phi \in {\cal S}(\mathbb{R}^d)$ and $ s \leq t$. Then $\widetilde{m}^{(0)}_t(\phi)$ is a continuous, square-integrable $\widetilde{\cal F}^{(0)}_t$-martingale with quadratic variation \begin{equation}nn \langle} \def\>{\rangle\widetilde{m}^{(0)}_t(I^{-1}_a \phi) \> = \gamma \sigma^2 \int_0^t \langle} \def\>{\rangleI^{-1}_a \phi^2 , \widetilde{\nu}^{(0)}_u \>du. 
{\mbox{\rm e}}nd{equation}nn \item{{\mbox{\rm e}}m (iv)} $\widetilde{W}^{(0)}(dy,ds)$, ${\widetilde{W}}^{(n_{m})}(dy,ds)$ are Brownian sheets and, for any $\phi \in {\cal S}(\mathbb{R}^d)$, the continuous square integrable martingale \[ \displaystyle{ \widetilde{x}_{t}^{(n_{m})}(I^{-1}_a \phi):= \sum_{p=1}^{d} \int_{0}^{t}\int_{\mathbb{R}^d} \left\langle} \def\>{\rangleI^{-1}_a h_p(y-\cdot) \partial_{p} \phi(\cdot), \widetilde{\nu}^{(n_{m})}_{s}\mathbb{R}ight\> {\widetilde{W}}^{(n_{m})}(dy,ds) } \] converges to $$ \widetilde{x}_{t}^{(0)}(I^{-1}_a \phi) := \sum_{p=1}^{d} \int_{0}^{t}\int_{\mathbb{R}^d} \left\langle} \def\>{\rangle I^{-1}_a h_p(y-\cdot) \partial_p \phi(\cdot), \widetilde{\nu}^{(0)}_{s} \mathbb{R}ight\> \widetilde{W}^{(0)}(dy,ds) $$ in $ L^{2}(\widetilde{\Omega}, \widetilde{\cal F}, \widetilde{\mathbb{P} })$ . \item{{\mbox{\rm e}}m (v)} $\widetilde{\nu}^{(0)}=\{ \widetilde \nu^{(0)}_t: t\geq 0\}$ is a solution to the $({\cal L},\delta_{\nu_0})$-martingale problem and $\widetilde{\nu}^{(0)}$ is a continuous process and for every $t \ge 0$ and every $\phi \in {\cal S}(\mathbb{R}^d)$ we have \begin{equation}lb \lambdabel{SPDE} & & \widetilde{\nu}^{(0)}_{t}(I^{-1}_a \phi) - \widetilde{\nu}^{(0)}_{0}(I^{-1}_a \phi) = \widetilde{x}^{(0)}_{t}(I^{-1}_a \phi) \nonumber \\ & & + \int_{0}^{t} \left\langle} \def\>{\rangle \sum_{p,q=1}^{d} \tfrac{1}{2}I^{-1}_a (a_{pq}(\cdot)+\mathbb{R}ho_{pq}(\cdot, \cdot)) \partial_p \partial_q \phi(\cdot), \, \widetilde{\nu}^{(0)}_{s} \mathbb{R}ight\>ds \nonumber \\ & &\hspace{1cm} + \int_{0}^{t}\int_{\mathbb{R}^d}I^{-1}_a \phi(x) \widetilde{m}^{(0)}(dx,ds) \mbox{\hspace{1cm}$\widetilde{\mathbb{P} }$-a.s. and in $L^{2}(\widetilde{\Omega}, \widetilde{\cal F}, \widetilde{\mathbb{P} })$}. {\mbox{\rm e}}nd{equation}lb {\mbox{\rm e}}nd{description} {\mbox{\rm e}}th \noindent {\bf Proof: }\ While this result is not a direct consequence of Theorem 3.3 in \cite{RenSongWang09}, our conditions ensure that all the key inequalities used in their proof remain valid here as well. Some of the difficulties encountered there, due to the necessity of dealing with trajectories in a much larger space, are here avoided through the direct use of ${\cal S}' (\mathbb{R}^d)$, the dual of a nuclear Fr\'{e}chet space. We therefore only summarize those key ideas that fit our own context. (i) By Mitoma \cite{Mitoma83}, we only need to prove that, for any $\phi \in {\cal S}(\mathbb{R}^d)$, the sequence of laws of $(\nu^{(n)}(\phi), v^{(n)}(\phi), x^{(n)}(\phi), y^{(n)}(\phi), m^{(n)}(\phi))$ is tight in $D([0, \infty), \mathbb{R}^{5})$, after noting that $\phi\cdot I_a^{-1}\in{\cal S}(\mathbb{R}^d)$ holds. The arguments in the proof of Lemma 3.2 in \cite{RenSongWang09} still work and we can use the tightness criterion from Theorem 3.8.6 in Ethier-Kurtz \cite{EthierKurtz86} to obtain the tightness for each $\phi$. (ii) This is a direct application of the versions of Skorohod's Representation Theorem on $D([0, \infty), ({\cal S}^{\prime}(\mathbb{R}^d))^k)$ due to Jakubowski \cite{Jakubowski86,Jakubowski97}. See also \cite{CWX12} in the present context. (iii) The proof is the same as that of part (iii) of Theorem 3.3 in \cite{RenSongWang09}. (iv) Since $W$, ${\widetilde{W}}^{(0)}$ and ${\widetilde{W}}^{(n_{m})}$ have the same distribution, $\widetilde{W}^{(0)}$ and ${\widetilde{W}}^{(n_{m})}$ are Brownian sheets. The conclusion follows from (ii) just as in \cite{RenSongWang09}. 
(v) An application of It\^{o}'s formula, under conditions that remain unchanged from the corresponding case treated in \cite{RenSongWang09}, yields that $\{\widetilde{\mu}_{t}^{(0)}: t \geq 0\}$ is a solution to the martingale problem for $({\cal{L}},\delta_{\mu_{0}})$ with continuous trajectories. $\square$ This proves part of Theorem \ref{MPforSDSM}, namely the existence of a solution to the $({\cal L}, \delta_{{\mu}_{0}})$-martingale problem for the operator given by $(\ref{pregenerator})$, on $C([0, \infty), {\cal S}'(\mathbb{R}^d))$ for any starting measure $\mu_0\in M_a(\mathbb{R}^d)$. It also confirms the validity of both $(\ref{SPDEc})$ and $(\ref{SPDEb})$, as well as the fact that ${X}_{t}(\phi)$ and $M_t(\phi)$ are mutually orthogonal continuous square-integrable martingales. It remains to show that there is one such solution living on $C([0, \infty), M_a(\mathbb{R}^d))$ and that this solution is unique. This is achieved next.
\section{Duality and the martingale problem}\label{sec:dualConst} \setcounter{equation}{0}
In this section, we use the construction of a function-valued dual process, in the sense of Dawson and Kurtz \cite{DawsonKurtz82}, as a way to directly exhibit the transition probability of SDSM. This immediately gives an alternative construction of SDSM and also proves uniqueness in law on the space $C([0, \infty), M_a(\mathbb{R}^d))$, since duality yields the full characterization of the law of SDSM by way of the martingale problem formulation. The technique of duality was developed in order to identify the more complex (here, measure-valued) process uniquely and to compute some of its mathematical features, after first establishing its existence through some other means, often a tightness argument or some other limiting scheme. Part of the interest of this section lies in the uncommon use of the existence of a dual function-valued process to construct a transition function for SDSM and to show the existence of the associated measure-valued processes on richer spaces of trajectories, namely those resulting from the inclusion of infinite starting measures. The approach used here was developed by Barton et al. \cite{BartonEtheridgeVeber10} for the construction and characterization of a spatial version of the $\Lambda$-Fleming-Viot process, after the work of Evans \cite{Evans97} on systems of coalescing Borel right processes. See also Donnelly et al. \cite{DEFKZ2000} for another instance of this, in the context of the continuum-sites stepping-stone process. (Note that some technical aspects of the treatment in this section are required due to the topology on $M_a(\mathbb{R}^d)$. The reader can refer to the appendix in Konno and Shiga \cite{KonnoShiga88} for additional clarifications.) Let us begin with the construction of the function-valued process that will serve our purpose, namely an extension of the ones built in Ren et al. \cite{RenSongWang09} and Dawson et al. \cite{DLW01}.
In order to facilitate some of the calculations required henceforth, notably because infinite starting measures lying in $M_a(\mathbb{R}^d)$ impose restrictions on the set of functions needed for a full description of the dual process, the domain ${\cal D}({\cal L})$ of operator ${\cal L}$ in (\mathbb{R}ef{pregenerator}) --- the set of functions in $B(M_a(\mathbb{R}^d))$ upon which ${\cal L}$ is well-defined --- is enlarged to comprise all bounded continuous functions of the form \begin{equation}lb\lambdabel{eqn:fulldomain} F(\mu) = g(\langle} \def\>{\rangle f_1, \mu^{m_1} \>, \cdots, \langle} \def\>{\rangle f_k, \mu^{m_k} \>) {\mbox{\rm e}}nd{equation}lb with $g \in C^2(\mathbb{R}^k)$ for some $k\ge1$, any choice of positive integers $m_1,\ldots, m_k$ and, for every $1 \leq i \leq k$, $f_i \in {\cal D}_a(G_{m_i})$. We describe the space ${\cal D}_a(G_{m})$ next. For the generator $G_m$ from (\mathbb{R}ef{eqn:Gn}) of strongly continuous contraction semigroup $\{P^{m}_t\}$ on Banach space $C_0((\mathbb{R}^d)^{m})$, the domain ${\cal D}(G_m)$ --- the set of functions in $B((\mathbb{R}^d)^{m})$ upon which $G_m$ is well-defined --- is simply the set of those functions $f$ such that the limit \[ \lim_{t \mathbb{R}ightarrow 0+}\frac{1}{t}(P^{m}_t f - f) \] exists in $C_0((\mathbb{R}^d)^{m})$, so we write $f \in {\cal D}(G_{m})$ if and only if this limit exists and equals $G_{m}f$. \\ In order to ensure integrability with respect to some infinite measures, our statements about functions in this domain ${\cal D}(G_m)$ are restricted to its subspace defined by \begin{equation}lb \lambdabel{DaGm} {\cal D}_a(G_m):= \{ f \in {\cal D}(G_m) : \| {\cal I}_{a, m}^{-1} f \|_{\infty} < \infty \mbox{ and $\| {\cal I}_{a, m}^{-1} G_m f \|_{\infty} < \infty $} .\} {\mbox{\rm e}}nd{equation}lb The short form $\mu^{m}= \mu\otimes\ldots\otimes\mu$ denotes the $m$-fold product measure of $\mu\in M_a(\mathbb{R}^d)$ by itself and we write ${\cal I}_{a, m}$ for the product ${\cal I}_{a, m}(x)=I_a(x_1)\cdot\ldots\cdot I_a(x_m)$, keeping in mind that ${\cal I}_{a, m}^{-1} f(x)={\cal I}_{a, m}^{-1}(x)f(x)$ means the product, not the composition of functions. \\ Observe first that both \begin{equation}lb\lambdabel{iam} {\cal I}_{a, m} \in C_b^{\infty}((\mathbb{R}^d)^{m}) {\mbox{\rm e}}nd{equation}lb and \begin{equation}lb\lambdabel{iamgm} {\cal I}_{a, m}^{-1} G_m {\cal I}_{a, m} \in C_b((\mathbb{R}^d)^{m}) {\mbox{\rm e}}nd{equation}lb hold, under Hypothesis $\mathbb{R}ef{hyp:basicassumpElliptic}$, hence so do $\| {\cal I}_{a, m}^{-1} G_m {\cal I}_{a, m} \|_{\infty} < \infty$ and ${\cal I}_{a, m} \in {\cal D}_a(G_m)$. A quick sketch of proof of these facts is supplied in Subsection \mathbb{R}ef{app:pf_iam}. The useful inclusions $C_c^2((\mathbb{R}^d)^m) \subset{\cal D}_a(G_m)\subset {\cal D}(G_m)$ and ${\cal I}_{a, m}^{-1} G_m \{C_c^2((\mathbb{R}^d)^m)\} \subset C_c((\mathbb{R}^d)^m)$ are also clearly valid for every choice of $a\ge0$. It is important to note at this point that, for every positive value of $a > 0$ and $m\ge1$, while ${\cal I}_{a, m} \in C_0^{\infty}((\mathbb{R}^d)^{m})$ holds (this is false when $a=0$), we also have ${\cal I}_{a, m} \not\in {\cal D}_b(G_m)$ for any $b > a$. Therefore $C_0^{\infty}((\mathbb{R}^d)^m) \not\subset {\cal D}_a(G_m)$ for any $a > 0$, so the core $C_0^2((\mathbb{R}^d)^m)$ of $G_m$ does not lie inside ${\cal D}_a(G_m)$ even though $C_c^2((\mathbb{R}^d)^m)$ is uniformly dense in $C_0^2((\mathbb{R}^d)^m)$. 
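As a sanity check of (\ref{iam}) and (\ref{iamgm}) in the simplest situation (a special case only: $m=1$ and coefficients for which the single-particle generator reduces to $G_1=\frac{1}{2}\Delta$; this is an illustration, not the general setting), a direct computation gives
\begin{eqnarray*}
\partial_p I_a(x) &=& -a\, x_p\,(1+|x|^2)^{-a/2-1},\\
\Delta I_a(x) &=& -a\,d\,(1+|x|^2)^{-a/2-1} + a(a+2)\,|x|^2\,(1+|x|^2)^{-a/2-2},\\
I_a^{-1}(x)\,\tfrac{1}{2}\Delta I_a(x) &=& \frac{-a\,d}{2(1+|x|^2)} + \frac{a(a+2)\,|x|^2}{2(1+|x|^2)^{2}},
\end{eqnarray*}
so that $\| I_a^{-1}\,\tfrac{1}{2}\Delta I_a \|_\infty \le \tfrac{1}{2}\,a\,d + \tfrac{1}{2}\,a(a+2) < \infty$, with the right hand side bounded and continuous, in agreement with ${\cal I}_{a,1}=I_a\in{\cal D}_a(G_1)$ in this special case.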
More generally, we also get the following results pertaining to the preservation of the semigroup property under the rescaling induced by function ${\cal I}_{a, m}$, the proof of which may also be found in Subsection \mathbb{R}ef{app:pf_iam}. \begin{lemma} \lambdabel{lea} Assume that Hypothesis $\mathbb{R}ef{hyp:basicassumpElliptic}$ is satisfied. For every $a\ge0$, $f \in {\cal D}_a(G_m)$ and $T > 0$, there holds $P_T^{m} f \in {\cal D}_a(G_m)$, $\sup_{0 \leq t \leq T} \| {\cal I}_{a, m}^{-1} P_t^{m} f \|_{\infty} < \infty$ and \[ \sup_{0 \leq t \leq T} \| {\cal I}_{a, m}^{-1} \frac{\partial}{\partial t} P_t^{m} f \|_{\infty} =\sup_{0 \leq t \leq T} \| {\cal I}_{a, m}^{-1} G_m P_t^{m} f \|_{\infty} =\sup_{0 \leq t \leq T} \| {\cal I}_{a, m}^{-1} P_t^{m} G_m f \|_{\infty} < \infty. \] {\mbox{\rm e}}le The construction of the function-valued process can now proceed, as follows. Let $\{J_t: t\ge 0\}$ be a decreasing c\`{a}dl\`{a}g Markov jump process on the nonnegative integers $\{0,1,2,\ldots\}$, started at $J_0=m$ and decreasing by 1 at a time, with Poisson waiting times of intensity $\gammamma \sigma^2 l(l-1)/2$ when the process equals $l\ge2$. The process is frozen in place when it reaches value $1$ and never moves if it is started at either $m=0$ or $1$. Write $\{\tau_k: 0\le k\le J_0-1\}$ for the sequence of jump times of $\{J_t:t\ge 0\}$ with $\tau_0=0$ and $\tau_{J_0} = \infty$. At each such jump time a randomly chosen projection is effected on the function-valued process of interest, as follows. Let $\{S_k: 1\le k\le J_0\}$ be a sequence of random operators which are conditionally independent given $\{J_t: t\ge 0\}$ and satisfy $$ \mathbb{P} \{S_k = \mathbb{P} hi_{ij}^m | J_{\tau_k -} =m\} = \frac{1}{m(m-1)}, \qquad 1 \le i \neq j \le m, $$ as long as $m\ge2$. Here $\mathbb{P} hi_{ij}^mf$ is a mapping from ${\cal D}_a(G_m)$ into ${\cal D}_a(G_{m-1})$ defined by \begin{equation}lb \lambdabel{restriction} \mathbb{P} hi_{ij}^m f(y):= f(y_{1}, \cdots, y_{j-1},y_{i},y_{j+1},\cdots, y_{m}), {\mbox{\rm e}}nd{equation}lb for any $m\ge2$ and $y=(y_1, \cdots, y_{j-1}, y_{j+1}, \cdots, y_m)\in (\mathbb{R}^d)^{m-1}$. When $m=0$ we simply write ${\cal D}_a(G_0)=\mathbb{R}^d$ and $P_t^0$ acts as the identity mapping on constant functions. That $\mathbb{P} hi_{ij}^m$ is well-defined follows from the observation that the sets ${\cal D}_a(G_m)$ form an increasing sequence in $m$, in this last case when interpreting any function of $m\le n$ variables also as the restriction of a function of $n$ variables. Details are in Subsection \mathbb{R}ef{app:pf_iam}. Given $J_0=m$ for some $m\ge0$, define process $Y:=\{Y_t: t\ge0\}$, started at some point $Y_0\in {\cal D}_a(G_m)$ within the (disjoint) topological union $\mbox{\boldmath $B$}} \def\bM{\mbox{\boldmath $M$}:=\cup_{m=0}^\infty {\cal D}_a(G_m)$, by \begin{equation}lb \lambdabel{Yprocess} Y_t = P^{J_{\tau_k}}_{t-\tau_k} S_k P^{J_{\tau_{k-1}}} _{\tau_k -\tau_{k-1}}S_{k-1} \cdots P^{J_{\tau_1}}_{\tau_2 -\tau_1} S_1 P^{J_0}_{\tau_1}Y_0, \quad \tau_k \le t < \tau_{k+1}, 0\le k\le J_0-1. {\mbox{\rm e}}nd{equation}lb By Lemma \mathbb{R}ef{lea}, the process $Y$ is a well-defined $\mbox{\boldmath $B$}} \def\bM{\mbox{\boldmath $M$}$-valued strong Markov process for any starting point $Y_0\in \mbox{\boldmath $B$}} \def\bM{\mbox{\boldmath $M$}$. Clearly, $\{(J_t, Y_t): t\ge 0\}$ is also a strong Markov process. \begin{lemma} \lambdabel{lea2} Assume that Hypothesis $\mathbb{R}ef{hyp:basicassumpElliptic}$ is satisfied. 
Given any $a\ge0$, $J_0=m\ge1$ and $T > 0$, there exists a constant $c=c(a,d,m,T) > 0$ such that, for every $Y_0\in {\cal D}_a(G_m)$ we have $\mathbb{P} $-almost surely \[ \sup_{0 \leq t \leq T} \| {\cal I}_{a, J_t}^{-1} Y_t \|_\infty \le c \| {\cal I}_{a, m}^{-1} Y_0 \|_\infty. \] {\mbox{\rm e}}le The proof is found in Subsection \mathbb{R}ef{app:pf_iam}. Recall from (\mathbb{R}ef{transfer}) the homeomorphic transfer function $T_{I_a} : M_a(\mathbb{R}^d) \mathbb{R}ightarrow M_0(\mathbb{R}^d)$ denoted by $$ T_{I_a}(\mu)(A) := \int_{A} I_a(x) \mu(dx)=\int_{A} (1 + |x|^2)^{- a/2} \mu(dx), $$ for any $A \in {\cal B}(\mathbb{R}^d)$.\\ \begin{theorem} \lambdabel{tha} Assume that Hypotheses $\mathbb{R}ef{hyp:basicassumpFilter}$ and $\mathbb{R}ef{hyp:basicassumpElliptic}$ are satisfied. For any $a\ge0$, $m\ge1$, $ f\in {\cal D}_a(G_m)$, $\mu_0\in M_a(\mathbb{R}^d)$ and $t\in[0,\infty)$, there exists a time-homogeneous Markov transition function $\{{\mathbb{R}m Q}_t(\mu,\mathbb{G} ammamma): t\in[0,\infty),\mu\in M_a(\mathbb{R}^d),\mathbb{G} ammamma\in {\B}(M_a(\mathbb{R}^d))\}$, given by \begin{equation}lb \lambdabel{dual} & & \int_{M_a(\mathbb{R}^d)}\langle} \def\>{\ranglef, \nu^m\>{\mathbb{R}m Q}_t(\mu, d\nu) \nonumber \\ = & & \mathbb{E} \left[\langle} \def\>{\rangle{\cal I}_{a, J_t}^{-1} Y_t, (T_{I_a}(\mu))^{J_t}\> {\mbox{\rm e}}xp\left(\frac{\gammamma \sigma^2}{2}\ \int_0^t J_s(J_s-1)ds\mathbb{R}ight)\Bigl|(J_0,Y_0)=(m,f) \mathbb{R}ight]. {\mbox{\rm e}}nd{equation}lb The associated probability measure $\mathbb{P} _{\mu_0}$ on $C([0, \infty), M_a(\mathbb{R}^d))$ of the form: \begin{equation}lb \lambdab{GSDSM} && \mathbb{P} _{\mu_0} (\{ w \in C([0, \infty), M_a(\mathbb{R}^d)): w_{t_i} \in \mathbb{G} ammamma_i, i=0, \cdots, n\}) \nonumber \\ = && \int_{\mathbb{G} ammamma_0} \cdots \int_{\mathbb{G} ammamma_{n-1}} {\mathbb{R}m Q}_{t_n-t_{n-1}}(\mu_{n-1}, \mathbb{G} ammamma_n){\mathbb{R}m Q}_{t_{n-1}-t_{n-2}}(\mu_{n-2}, d \mu_{n-1}) \cdots {\mathbb{R}m Q}_{t_{1}}(\mu_{0}, d \mu_{1}), {\mbox{\rm e}}nd{equation}lb for any $0 \leq t_1 \leq t_2 \leq \cdots \leq t_n$ and $\mathbb{G} ammamma_i \in {\cal B}(M_a(\mathbb{R}^d)), i=0,1, \cdots, n$, is the unique probability measure on $C([0, \infty), M_a(\mathbb{R}^d))$ which satisfies the (well-posed) $({\cal L}, \delta_{{\mu}_{0}})$-martingale problem. {\mbox{\rm e}}th \noindent {\bf Remark:} We adapt the approach used for the case $a=0$ and $d=1$ in Dawson et al. \cite{DLW01} by exhibiting a transition function, built by using the law of function-valued process $Y$ and charging space $C([0, \infty), M_a(\mathbb{R}^d))$ with a probability measure fitting our needs. In this setting, we only give a quick sketch of the main ideas but provide details for overcoming the new difficulties arising from the larger space. \noindent {\bf Proof: }\ We need to prove that the time-homogeneous transition function given by (\mathbb{R}ef{dual}) is well-defined, so that ${\mathbb{R}m Q}_t(\mu, \cdot)$ is a probability measure on ${\B}(M_a(\mathbb{R}^d))$ with ${\mathbb{R}m Q}_0(\mu, \cdot)=\delta_{\mu}$ and ${\mathbb{R}m Q}_{\cdot}(\cdot,\mathbb{G} ammamma)$ is Borel measurable, for every choice of $t$ and $\mu$. We begin by quickly sketching that there exists a transition function ${\mathbb{R}m Q}_t(\mu, d \nu)$ on ${\B}(M_0(\mathbb{R}^d))$, for any $t \geq 0$ and $\mu \in M_0(\mathbb{R}^d)$, for which the duality identity (\mathbb{R}ef{dual}) and the extended Chapman-Kolmogorov equations (\mathbb{R}ef{GSDSM}) both hold. The argument for this first step is as follows. 
In the general case $a\ge0$, Lemma \ref{lea2} ensures that the right hand side of (\ref{dual}) is well-defined, as well as bounded for every $ f\in {\cal D}_a(G_m)$; therefore, it defines a strongly continuous positive bounded semigroup of linear operators on the subspace of $C_b(M_a({\mathbb{R}^d}))$ made up of functions of the form $F_{m,f} (\mu):=\langle f, \mu^m \rangle$ for any $f \in {\cal D}_a(G_m)$. (The argument runs along the same lines as in the proof of Theorem 5.1 in Dawson et al. \cite{DLW01} for the case $d=1$.) Unfortunately the constant functions, ${\cal D}_a(G_0)$, do not have finite integrals when $a > 0$ and must therefore be removed from the set $\mbox{\boldmath $B$}:=\cup_{m=0}^\infty {\cal D}_a(G_m)$ of values, so this amputated subspace of functions no longer suffices for a full characterization of measures. Since $M_a({\mathbb{R}^d})$ is not locally compact, one needs, at this stage, to first add a point $\infty$ at infinity to build the one-point compactification $\hat{\mathbb{R}}^d:=\mathbb{R}^d\cup\{\infty\}$ of $\mathbb{R}^d$ and then add an isolated point $\Delta$ to the space of finite Radon measures $M_0(\hat{\mathbb{R}}^d)$ on $\hat{\mathbb{R}}^d$, yielding the new compact metrizable space $E^{\Delta}:=M_0(\hat{\mathbb{R}}^d)\cup\{\Delta\}$. By the Stone-Weierstrass theorem, the algebra $\Pi(E^{\Delta})$ generated by the set of all functions of the form $F_{m,f} (\mu):=\langle f, \mu^m \rangle$, for any $m\ge1$ and $f \in C((\hat{\mathbb{R}}^d)^m)$, is dense in $C(E^{\Delta})$. The operator ${\cal T}_tF_{m,f}$ defined as the right hand side of (\ref{dual}) can be extended to $C(E^{\Delta})$ by simply putting ${\cal T}_tF:=F(\Delta)+{\cal T}_t(F-F(\Delta))$ for every $F\in\Pi(E^{\Delta})$. Theorem 4.2.7 in Ethier and Kurtz \cite{EthierKurtz86} implies the existence of a probability measure ${\rm Q}_t(\mu, \cdot)$ on ${\cal B}(E^{\Delta})$ such that (\ref{dual}) holds, for every $f \in C((\hat{\mathbb{R}}^d)^m)$, $\mu_0\in E^{\Delta}$, $\mu\in E^{\Delta}$, $\Gamma\in{\cal B}(E^{\Delta})$ and $t\in[0,\infty)$. The definition of the extension ensures that ${\rm Q}_t$ is normalized so that it is a probability. The existence of a probability measure $\mathbb{P} _{\mu_0}$ on $D([0, \infty), E^{\Delta})$ which satisfies (\ref{GSDSM}) also ensues from Theorem 4.2.7 in Ethier and Kurtz \cite{EthierKurtz86}. That $\mathbb{P} _{\mu_0}$ is supported by $C([0, \infty), E^{\Delta})$ follows from the results of Bakry and Emery \cite{BakryEmery85} on operators with the derivation property, just as in the special case $d=1$ treated in Wang \cite{Wang98}. The canonical process $\{w_{t}\}$ under $\mathbb{P} _{\mu_0}$ is our SDSM with initial distribution $\delta_{\mu_0}$ and $\Delta$ as a trap.
Just as in the original case $a=0$ and $d=1$ treated in Wang \cite{Wang98} (see his Theorem 4.1), uniqueness of the solution for (\ref{GSDSM}) on $C([0, \infty), E^{\Delta})$, started at $\mu_0\in E^{\Delta}$, follows from the well-posedness of the $({\cal L}, \delta_{{\mu}_{0}})$-martingale problem, by way of the duality identity (\ref{dual}) on $C([0, \infty), E^{\Delta})$ plus the fact that, for any $t > 0$, the moment power series
\[
\sum_{n=1}^{\infty}\frac{\theta^n}{n!} \mathbb{E} _{\mu_0} \langle 1, w_t\rangle^n
\]
has a positive radius of convergence, thus identifying the one dimensional laws uniquely. This works because we chose to build $E^{\Delta}$ with $a=0$ rather than $a > 0$. By an argument similar to Step 8 in the appendix of Konno-Shiga \cite{KonnoShiga88} or the proof of Theorem 4.1 of Dawson et al \cite{DLW01}, we get $\mathbb{P} _{\mu_0}(w_{t}(\{\infty\})=0 \mbox{ for all } t \geq 0)= 1$ provided ${\mu_0}(\{\infty\})=0$. It follows that whenever $\mu_0\in M_0(\mathbb{R}^d)$ holds instead of $\mu_0\in E^{\Delta}$, the solution $\mathbb{P} _{\mu_0}$ is actually supported by $C([0, \infty), M_0(\mathbb{R}^d))$. This ends the proof of both (\ref{dual}) and (\ref{GSDSM}) in the case $a=0$.\\
With $a > 0$, given $\{{\rm Q}_t(\mu,\Gamma): t\in[0,\infty),\mu\in M_0(\mathbb{R}^d),\Gamma\in {\cal B}(M_0(\mathbb{R}^d))\}$ just constructed, define $\{{\tilde {\rm Q}}_t(\mu,\Gamma): t\in[0,\infty),\mu\in M_a(\mathbb{R}^d),\Gamma\in {\cal B}(M_a(\mathbb{R}^d))\}$ by
\[
{\tilde {\rm Q}}_t(\mu,\Gamma):={\rm Q}_t(T_{I_a}(\mu),T_{I_a}(\Gamma))
\]
and ${\tilde \mathbb{P} }_{\mu_0}$ on $C([0, \infty), M_a(\mathbb{R}^d))$ by (\ref{GSDSM}) with ${\tilde {\rm Q}}_t$ instead of ${\rm Q}_t$ throughout and a typical trajectory written as $\{{\tilde w}_t\} \in C([0, \infty), M_a(\mathbb{R}^d))$. Both ${\tilde {\rm Q}}_t$ and ${\tilde \mathbb{P} }_{\mu_0}$ are well-defined solutions to (\ref{dual}) and (\ref{GSDSM}) respectively. Since $M_a(\mathbb{R}^d)$ is a Polish space, the uniqueness of the solution to the martingale problem is equivalent to that of $\{{\rm Q}_t:t\ge0\}$, by Theorem 4.1.1 of Ethier and Kurtz \cite{EthierKurtz86}. Both hold because the homeomorphism $T_{I_a}$ allows us to write, for any two solutions $\{{\tilde {\rm Q}}_{\xi,t}:t\ge0\}$ to (\ref{dual}), for $\xi=1,2$, with $\{{\rm Q}_t:t\ge0\}$ the unique solution in the case $a=0$:
\[
\int_{M_a(\mathbb{R}^d)}\langle f, \nu^m\rangle{\tilde {\rm Q}}_{\xi,t}(\mu_0, d\nu) = \int_{M_0(\mathbb{R}^d)}\langle {\cal I}_{a, m}^{-1}f, T_{I_a}(\nu)^m\rangle {\rm Q}_t(T_{I_a}(\mu_0), d[T_{I_a}(\nu)])
\]
for any $m\ge1$, $ f\in {\cal D}_a(G_m)$, $\mu_0\in M_a(\mathbb{R}^d)$ and $t\in[0,\infty)$. Since $K_a(\mathbb{R}^d)\subset {\cal D}_a(G_1)$ is a separating set for measures inside $M_a(\mathbb{R}^d)$, it follows that $\cup_{m=1}^\infty {\cal D}_a(G_m)$ is an algebra of functions that separates points in $M_a(\mathbb{R}^d)$ and hence is separating for the probability measures on ${\cal B}(M_a(\mathbb{R}^d))$, by Theorem 3.4.5 of Ethier and Kurtz \cite{EthierKurtz86}.
Finally, denoting by ${\cal L}^{*}$ the infinitesimal generator for the process $(J,Y)$, the differential form of the duality between its associated martingale problem and the $({\cal L}, \delta_{{\mu}_{0}})$-martingale problem of pregenerator (\ref{pregenerator}) for SDSM can be written in the following manner. We only need to consider functions of the form (\ref{eqn:fulldomain}) with $k=1$, since they form a separating set for measures in $M_a(\mathbb{R}^d)$, by Proposition 4.4.7 in Ethier and Kurtz \cite{EthierKurtz86}. Writing $F_{m,f} (\mu) := g(\langle f, \mu^m \rangle)$ for any such function with $g(s)=s$, $f \in {\cal D}_a(G_m)$ and $\mu \in M_a(\mathbb{R}^d)$, a quick calculation yields
\begin{equation} \label{dualityform}
{\cal L} F_{m,f} (\mu)=\langle {\cal L}^{*}f, \mu^m \rangle + \frac{1}{2} \gamma \sigma^2 m(m-1)\langle f, \mu^m \rangle
\end{equation}
with
\[
{\cal L}^{*}f={G}_{m} f + \frac{\gamma \sigma^2}{2} \sum_{i,j =1, \, i\not=j}^m (\Phi_{ij}^mf - f).
\]
An application of It\^{o}'s formula with a stopping argument allows this formal calculation to be made rigorous, so that (\ref{SPDEb}) ensues for every $\phi\in C_c^\infty(\mathbb{R}^d)$. See Ren et al. \cite{RenSongWang09} for additional details. We need only check (\ref{SPDEb}) for the remaining function $\phi=I_a$ to cover all of $K_a(\mathbb{R}^d)$. Since $I_a\in L^1(\mu_0)$ holds by definition, $I_a^{-1} G_1 I_a$ satisfies (\ref{iamgm}) and $\partial_p I_a\in C_0(\mathbb{R}^d)\cap L^1(\mu_0)$ by the calculations in Subsection \ref{app:pf_iam}, the same stopping argument can be applied, as the three integrals in (\ref{SPDEb}) can be controlled simultaneously. Therefore, the uniqueness follows by applying the now classical results from Dawson and Kurtz \cite{DawsonKurtz82} (or use Corollary 4.4.13 in Ethier and Kurtz \cite{EthierKurtz86}) to the generators appearing in the differential form (\ref{dualityform}) of the duality equation, the required integrability conditions holding for every $\phi\in K_a(\mathbb{R}^d)$, which is uniformly dense in ${\cal D}_a(G_1)$. $\square$
\break
\noindent {\bf Proof: }\ [Proof of Theorem \ref{MPforSDSM}] Theorem \ref{tha} yields the existence and uniqueness of the solution to the martingale problem for SDSM on the space $C([0, \infty), M_a(\mathbb{R}^d))$. A representation of this unique solution has already been obtained in Theorem \ref{ApMain}, in the form of the solution to an SPDE on $C([0, \infty), {\cal S}'(\mathbb{R}^d))$. This representation satisfies all of the other statements in Theorem \ref{MPforSDSM} through the continuity of the transfer function $T_{I_a}$ defined in (\ref{transfer}) and that of the injection of $M_0(\mathbb{R}^d)$ in ${\cal S}' (\mathbb{R}^d)$, by Proposition 5.1 in Dawson and Vaillancourt \cite{DV95}. $\square$ Another consequence of Theorem \ref{tha} is the following uniform bound on the growth of the moments associated with the single particle transition density $q_t^{1}(\cdot,x)$ with generator $G_1$ on $\mathbb{R}^d$ given by $(\ref{eqn:Gn1})$ and its spatial derivative $\partial_p q_t^{1}(\cdot,x)$.
\begin{corollary} \label{MCOM} Assume all the conditions of Theorem \ref{tha}. Let $\{\mu_t\}$ be the SDSM started at a $\mu_0$ which satisfies Hypothesis $\ref{hyp:basicassumpGauss}$, with dual process $\{Y_t\}$ started at $Y_0=f$.
The duality identity (\mathbb{R}ef{dual}) holds true for all functions $f\in L^1(\mu_0^m)$, with a finite common value on both sides as part of the conclusion; these include the two cases : a) $f=q_\varepsilonsilon^{1}(\cdot,x)$ and b) $f=\partial_p q_\varepsilonsilon^{1}(\cdot,x)$, for any fixed $x\in\mathbb{R}^d$ and $\varepsilonsilon>0$. The dual identity (\mathbb{R}ef{dual}) can also be rewritten as follows : for every integer $m>0$, \begin{equation}lb \lambdab{ME} \mathbb{E} _{\mu_0}[\langle} \def\>{\ranglef, \mu_t\>^m] = \mathbb{E} _{m,f^{\otimes m}} \bigg[\langle} \def\>{\rangleY_t, \mu_0^{J_t}\> {\mbox{\rm e}}xp\{\frac{1}{2} \int_0^t J_s(J_s-1)ds \} \bigg] = \sum_{k=0}^{m-1}M_k(t), {\mbox{\rm e}}nd{equation}lb with $\sup_{0\le t\le T}|M_k(t)|<\infty$ for every $T>0$, where process $Y$ is expressed as in (\mathbb{R}ef{Yprocess}) and \begin{equation}lb \lambdab{mMk0} M_k(t) & := & \frac{m! (m-1)!}{2^k (m-k)! (m-k-1)!} \int_{(0, t]}dr_1 \int_{(r_1, t]}dr_2 \cdots \int_{(r_{(k-1)}, t]} \nonumber \\ & & \cdot \mathbb{E} _{m,f^{\bigotimes m}}\bigg[ \langle} \def\>{\rangleP^{m-k}_{t-r_k} S_k \cdots P^{m-1}_{r_2-r_1} S_1 P^{m}_{r_1}f^{\bigotimes (m)}, \mu_0^{\bigotimes (m-k)}\>\bigg| \tau_j = r_j : 1 \leq j \leq k \bigg] dr_k \nonumber \\ {\mbox{\rm e}}nd{equation}lb {\mbox{\rm e}}cor \noindent {\bf Proof: }\ First, recall that all Radon measures are locally finite, hence all compact sets have finite measure and $C_c^\infty(\mathbb{R}^d)\subset L^p(\mu_0)$, for all $a\ge0$ and $p\ge1$. Since all Radon measures are inner regular and outer regular, for any set $N\in{\cal B}(\mathbb{R}^d)$ we can choose a decreasing sequence of open subsets $U_{\mbox{\rm e}}ll\in{\cal B}(\mathbb{R}^d)$ and an increasing sequence of compact subsets $V_{\mbox{\rm e}}ll\in{\cal B}(\mathbb{R}^d)$ such that $V_{\mbox{\rm e}}ll \subset N \subset U_{\mbox{\rm e}}ll$ and $\mu_0(U_{\mbox{\rm e}}ll\smallsetminus V_{\mbox{\rm e}}ll) < 1/{\mbox{\rm e}}ll$. In this context, Urysohn's Lemma (see Lemma 2.19 in Lieb and Loss \cite{LiebLoss01}) asserts the existence of a sequence of functions $\psi_{\mbox{\rm e}}ll\in C_c^\infty(\mathbb{R}^d)$ such that $1_{V_{{\mbox{\rm e}}ll}}\le\psi_{\mbox{\rm e}}ll\le 1_{U_{\mbox{\rm e}}ll}$ holds everywhere, with $1_N(x)=1$ if $x\in N$ and $0$ elsewhere. This proves not only that $C_c^\infty(\mathbb{R}^d)$ is dense in $L^p(\mu_0)$, for all $p\ge1$, but also that every $\phi\in L^p(\mu_0)$ is the $\mu_0$-almost sure limit as well as the $\|\cdot\|_{\mu_0,p}$-norm limit of some sequence $\phi_n\in C_c^\infty(\mathbb{R}^d)$. (By the way, note that we have just proved that $C_c^\infty(\mathbb{R}^d)$ is $\mathbb{P} _{\mu_0}$-almost surely dense in $L^p(\mu_t)$, for all $t > 0$, $a\ge0$ and $p\ge1$ as well, since Theorem \mathbb{R}ef{MPforSDSM} implicitly asserts that $\mu_t\in M(\mathbb{R}^d)$ holds $\mathbb{P} _{\mu_0}$-almost surely.) Meanwhile, $Y_t\in C_0^2((\mathbb{R}^d)^m)$ holds for all $t>0$ as soon as $Y_0\in C_c^\infty((\mathbb{R}^d)^m)$ does. Extending the duality identity (\mathbb{R}ef{dual}) through a density argument therefore makes sense. We do the proof only for $\mu_0\in M_0(\mathbb{R}^d)$ and leave it to the reader to extend it to $\mu_0\in M_a(\mathbb{R}^d)$ for $a>0$. 
The duality identity (\mathbb{R}ef{dual}) first extends to all $f\in B((\mathbb{R}^d)^m)\cap L^1(\mu_0^m)$ since $C_0^2((\mathbb{R}^d)^m)$ is uniformly dense in $C_0((\mathbb{R}^d)^m)$ and therefore dense in $B((\mathbb{R}^d)^m)$ in the bounded pointwise sense (see Proposition 3.4.2 in Ethier and Kurtz \cite{EthierKurtz86}). Moreover, (\mathbb{R}ef{dual}) remains true for unbounded $f\in L^1(\mu_0^m)$, by monotone convergence of the sequence $f\cdot 1_{\{| f | < n\}}$ applied to the positive and negative parts of $f$ on both sides of (\mathbb{R}ef{dual}), since operators $P_t^{m}$ and $\mathbb{P} hi_{ij}^m$ are both linear and positivity preserving. When $r>0$ and $x\in\mathbb{R}^d$, $f \in L^1(\mu_0)$ in both the cases considered here --- $f=q_\varepsilonsilon^{1}(\cdot,x)$ and $f=\partial_p q_\varepsilonsilon^{1}(\cdot,x)$ --- by Hypothesis $\mathbb{R}ef{hyp:basicassumpGauss}$ and $(\mathbb{R}ef{LSU})$, so the integrals on both sides of the first equality in $(\mathbb{R}ef{ME})$ make sense in those cases as well. Conditionning on the number of jumps for dual process $\{Y_t\}$ yields $(\mathbb{R}ef{mMk0})$. It remains to show $\sup_{0\le t\le T}|M_k(t)|<\infty$ and that the expectations involved in the above argumentation, are all finite. This is done on the right hand side of (\mathbb{R}ef{dual}) by reducing the proof for the SDSM case, to that of Super-Brownian motion. First note that all four constants $a^*$, $b$, $c$ and $A^*$ in the Aronson bounds $(\mathbb{R}ef{Aronsonbounds})$ depend on the dimension $d$ of the space. Without loss of generality we can select them so that $(\mathbb{R}ef{Aronsonbounds})$ holds jointly for all $d\le m$ with common values for $a^*$, $b$, $c$ and $A^*$ --- the proof is found in Subsection \mathbb{R}ef{app:pf_jointparametrization}. To keep notation to a minimum in this proof, let $K>0$ denote some generic constant, which will change throughout the proof but the value of which is irrelevant for the sake of this argumentation --- its value depends on $a^*$, $b$, $c$ and $A^*$. Writing $T^k_t$ for $P^k_t$ in the special case of the heat semigroup and using the exchangeability of the dual particles in this particular instance, $(\mathbb{R}ef{Aronsonbounds})$ implies, for any nonnegative $f \in L^1(\mu_0)$ and any choice of $0<r_1<r_2<\ldots<r_k<t\le T$, \begin{equation}lb \lambdab{mMk} && \langle} \def\>{\rangleP^{m-k}_{t-r_k} S_k \cdots P^{m-1}_{r_2-r_1} S_1 P^{m}_{r_1}f^{\bigotimes (m)}, \mu_0^{\bigotimes (m-k)}\> \nonumber \\ \le && K \langle} \def\>{\rangle ( T^1_{c t - c r_k } )^{\bigotimes (m-k)} S_k \cdots (T^1_{c r_2 - c r_1 })^{\bigotimes (m-1)} S_1 (T^{1}_{c r_1})^{\bigotimes (m)} f^{\bigotimes (m)}, \mu_0^{\bigotimes (m-k)} \> \nonumber \\ \le && K \left[\prod_{i=1}^k \sup_{x \in \mathbb{R}^d}T^1_{c r_i}f(x)\mathbb{R}ight] \langle} \def\>{\rangle(T^1_{c t})^{\bigotimes (m-k)}f^{\bigotimes (m-k)}, \mu_0^{\bigotimes (m-k)}\> \nonumber \\ = && K \left[\prod_{i=1}^k \sup_{x \in \mathbb{R}^d}T^1_{c r_i}f(x)\mathbb{R}ight] \sup_{0 < t \leq T} \langle} \def\>{\rangleT^1_{ct}f,\mu_0 \>^{(m-k)}. \nonumber {\mbox{\rm e}}nd{equation}lb By Hypothesis $\mathbb{R}ef{hyp:basicassumpGauss}$ and $(\mathbb{R}ef{LSU})$, we get $\sup_{0\le t\le T}|M_k(t)|<\infty$ in the case $f(w)=q_\varepsilonsilon^{1}(w,x)\ge0$. In the second case we apply the same argument to $f(w)=|\partial_p q_\varepsilonsilon^{1}(w,x)|$ and for simplicity we set $c=1$. Equation $(\mathbb{R}ef{LSU})$ immediately allows us to bound $T^1_{r_i}f$ by $r_i^{1/2}T^1_{r_i}\varphirphi$. 
Since operator $\partial_p$ commutes with operator $T_{t}^1$, without loss of generality we restrict ourselves to the exponential jump times $\tau_i \sim \frac{1}{2}(m-i+1)(m-i)e^{-\frac{1}{2}(m-i+1)(m-i) }$ of the dual process. Fixing $t^* = r_i$ for some arbitrary $i$, by the dominated convergence theorem, we have \begin{equation}lb \lambdab{5ES5} \sup_{x \in \mathbb{R}^d}T^1_{t^*}f(x) & \leq & \sup_{x \in \mathbb{R}^d} \int_0^{\infty} K \frac{e^{- \lambdambda r}}{\sqrt{r}} \langle} \def\>{\rangle\varphirphi_r(w- \xi), \varphirphi_{t^*}(x- \xi)d \xi \> dr \nonumber \\ & \leq & K \sup_{x \in \mathbb{R}^d} | T_{t^*}^1 \partial_p \tilde{Q}^{\lambdambda}(w-x)| \nonumber \\ & = & K \sup_{x \in \mathbb{R}^d} | \partial_p T_{t^*}^1 \tilde{Q}^{\lambdambda}(w-x)| \nonumber \\ & = & K \sup_{x \in \mathbb{R}^d}| \partial_p \tilde{Q}_{t^*}^{\lambdambda}(w-x)| \nonumber \\ & \leq & K \sup_{z \in \mathbb{R}^d}| \partial_p \tilde{Q}_{t^*}^{\lambdambda}(z)| < \infty, {\mbox{\rm e}}nd{equation}lb where $\tilde{Q}_{t^*}^{\lambdambda}(z)$ is defined by $\tilde{Q}_t^{\lambdambda}(z) := \int_0^{\infty} e^{- \lambdambda r} \varphirphi_{t+r}(z) dr$. Next \begin{equation}lb \lambdab{5ES6} & & \sup_{0 < t \leq T} \sup_{w \in \mathbb{R}^d} \langle} \def\>{\rangleT^1_{t}f, \mu_0\> \nonumber \\ & & \leq \sup_{0 < t \leq T} \sup_{w \in \mathbb{R}^d} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} \varphirphi_t(x-y) f(y,w) dy \mu_0(dx) \nonumber \\ & & \mbox{By Fubini's Theorem} \nonumber \\ & & \leq \sup_{0 < t \leq T} \sup_{w \in \mathbb{R}^d} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} \varphirphi_t(x-y) \mu_0(dx)f(y,w)dy \nonumber \\ & & \leq \sup_{0 < t \leq T} \sup_{y \in \mathbb{R}^d}\langle} \def\>{\rangle\varphirphi_t(x,y), \mu_0(dx)\> \sup_{w \in \mathbb{R}^d} \int_{\mathbb{R}^d}f(y,w)dy. {\mbox{\rm e}}nd{equation}lb By Hypothesis $\mathbb{R}ef{hyp:basicassumpGauss}$, the only term remaining to bound is \begin{equation}lb \lambdab{5ES8} & & \sup_{w \in \mathbb{R}^d} \langle} \def\>{\ranglef(y,w), \lambdambda_0\> \leq \sup_{w \in \mathbb{R}^d} \langle} \def\>{\rangle \int_{0}^{\infty}e^{- \lambdambda s} \frac{1}{s^{(d+1)/2}}{\mbox{\rm e}}xp{\{ - \frac{a_0 |w-y|^2}{s} \}} ds, \lambdambda_0(dy)\> \nonumber \\ & & \leq K \sup_{w \in \mathbb{R}^d}\langle} \def\>{\rangle \int_{0}^{\infty}e^{- \lambdambda s} \frac{1}{s^{1/2}} \varphirphi_s(w-y) ds, \lambdambda_0(dy)\> \nonumber \\ & & \leq K \int_{0}^{\infty}e^{- \lambdambda s} \frac{1}{s^{1/2}} ds < \infty. {\mbox{\rm e}}nd{equation}lb Hence $\sup_{0\le t\le T}|M_k(t)|<\infty$ ensues. $\square$ \noindent {\bf Proof: }\ [Proof of Theorem \mathbb{R}ef{ClaimAk}] We explicit the proof of both (\mathbb{R}ef{AME}) and (\mathbb{R}ef{BME}) only for the base case $k=1$, the general case ensuing from (\mathbb{R}ef{ME}) in Corollary \mathbb{R}ef{MCOM} in a similar fashion. 
More precisely, for any $T > 0$, $0 \leq t \leq T$, $\varepsilon > 0$ and $\varepsilon^{'} > 0$, we have
\begin{eqnarray*}
\sup_{x \in \mathbb{R}^d} \sup_{0 \leq t \leq T} \mathbb{E} _{\mu_{0}} |\Lambda^{x, \varepsilon}_t| & \leq & \sup_{x \in \mathbb{R}^d} \sup_{0 \leq s \leq T} \mathbb{E} _{\mu_0} \int_0^t \langle q_{\varepsilon}(x- \cdot), \mu_s\rangle ds \nonumber \\
& \leq & \sup_{x \in \mathbb{R}^d} \sup_{0 \leq s \leq T} \int_0^t \langle P_sq_{\varepsilon}(x- \cdot), \mu_0\rangle ds \nonumber \\
& \leq & \sup_{x \in \mathbb{R}^d} \sup_{0 \leq s \leq T} \langle q_{s + \varepsilon}(x- \cdot), \mu_0\rangle < \infty
\end{eqnarray*}
while, using also the sharp estimation (\ref{formulaI}) for the proof of (\ref{BME}),
\begin{eqnarray*}
&& \lim_{\varepsilon \downarrow 0}\lim_{\varepsilon^{'} \downarrow 0}\sup_{x \in \mathbb{R}^d}\sup_{0 \leq t \leq T} \mathbb{E} _{\mu_0}| \Lambda^{x, \varepsilon}_t - \Lambda^{x, \varepsilon^{'}}_t| \nonumber \\
&& \leq \lim_{\varepsilon \downarrow 0} \lim_{\varepsilon^{'} \downarrow 0} \sup_{x \in \mathbb{R}^d}\sup_{0 \leq t \leq T} \mathbb{E} _{\mu_0}| \int_0^t \langle (q_{\varepsilon}(x - \cdot) - q_{\varepsilon^{'}}(x - \cdot)), \mu_s\rangle ds| \nonumber \\
&& \leq \lim_{\varepsilon \downarrow 0} \lim_{\varepsilon^{'} \downarrow 0} \sup_{x \in \mathbb{R}^d}\sup_{0 \leq t \leq T} \mathbb{E} _{\mu_0}| \int_0^t \langle c|\varepsilon - \varepsilon^{'}|^{\alpha/2}(k_1 q_{\varepsilon}(x - \cdot) + k_2 q_{\varepsilon^{'}}(x - \cdot)), \mu_s\rangle ds| = 0
\end{eqnarray*}
holds for any $\alpha \in (0 , 1)$. $\square$
\section{Tanaka formula and local time of SDSM}\label{sec:Tanaka} \setcounter{equation}{0}
The stage is now set for the proof of our main result (Theorem \ref{lt_th1}). We need three lemmata. The first one concerns the Laplace transform $Q^{\lambda}$ defined in (\ref{eqn:green}).
\begin{lemma} \label{le_b} Assume Hypothesis $\ref{hyp:basicassumpElliptic}$ is satisfied. For any $\lambda > 0$ there holds:\\
{\em (i)} For all $d\ge1$, we have $Q^{\lambda} \in L^1(\mathbb{R}^d)$ and $\partial_{x_i}Q^{\lambda} \in L^1(\mathbb{R}^d)$ for any $i \in \{1,2,\cdots,d\}$. \\
{\em (ii)} For $d=1$, we also have $\partial_{x} Q^{\lambda} \in L^2(\mathbb{R})$. \\
{\em (iii)} For $d=1$, $2$ or $3$, we finally have $Q^{\lambda} \in L^2(\mathbb{R}^d)$. \\
If Hypotheses $\ref{hyp:basicassumpGauss}$ and $\ref{hyp:basicassumpUniformInteg}$ are also satisfied for $\mu_0 \in M_a(\mathbb{R}^d)$ with $a\ge0$, then there holds : \\
{\em (iv)} For all $d\ge1$, we have $Q^{\lambda} \in L^1(\mu_0)$ and $\partial_{x_i}Q^{\lambda} \in L^1(\mu_0)$ for any $i \in \{1,2,\cdots,d\}$. \\
{\em (v)} For $d=1$, $2$ or $3$, we finally have $Q^{\lambda} \in L^2(\mu_0)$.
\end{lemma}
The proof is technical and found in Subsection \ref{app:pf_le_b}.
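To make Lemma \ref{le_b} concrete in one special case (the standard heat semigroup in $d=1$, the same comparison case invoked via the Aronson bounds in the proof of Corollary \ref{MCOM}; this is an illustration only, not the general setting of the lemma), the Laplace transform can be computed in closed form:
\[
Q^{\lambda}(x)=\int_0^\infty e^{-\lambda t}\,\frac{1}{\sqrt{2\pi t}}\,e^{-x^2/(2t)}\,dt=\frac{1}{\sqrt{2\lambda}}\,e^{-\sqrt{2\lambda}\,|x|},
\]
which indeed lies in $L^1(\mathbb{R})\cap L^2(\mathbb{R})$, with $\|Q^\lambda\|_{L^1}=1/\lambda$. The short Python check below (illustrative only) confirms the closed form by quadrature.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

lam = 1.5                                  # any lambda > 0

def heat_kernel(t, x):
    # standard 1-d transition density for the generator (1/2) d^2/dx^2
    return np.exp(-x ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

def Q_lam(x):
    # Q^lambda(x) = int_0^infty exp(-lambda t) q_t(0, x) dt, by quadrature
    return quad(lambda t: np.exp(-lam * t) * heat_kernel(t, x), 0.0, np.inf)[0]

closed = lambda x: np.exp(-np.sqrt(2.0 * lam) * abs(x)) / np.sqrt(2.0 * lam)

for x in (0.1, 1.0, 3.0):
    print(x, Q_lam(x), closed(x))          # the two values agree

L1 = quad(closed, -np.inf, np.inf)[0]                     # = 1/lambda, cf. (i)
L2 = quad(lambda x: closed(x) ** 2, -np.inf, np.inf)[0]   # finite, cf. (iii)
print(L1, L2)
\end{verbatim}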
\\ Further, for each $\lambdambda > 0$ and $x \in \mathbb{R}^d$, $u(\cdot)=Q^{\lambdambda}(x-\cdot)$ solves equation $\left( - G_1 + \lambdambda \mathbb{R}ight) u = \delta_{x}$ in the distributional sense, so the Green operator $Q^{\lambdambda}*\phi(x)=\int_{\mathbb{R}^d}\phi(y)Q^{\lambdambda}(x-y)dy$ for Markov semigroup $P^1_t$, is a well-defined convolution for any $\phi\in C_b(\mathbb{R}^d)$ and solves \begin{equation}lb \lambdab{GPDE} (- G_1 + \lambdambda)u = \phi. {\mbox{\rm e}}nd{equation}lb \begin{lemma} \lambdab{le_new1} Assume Hypotheses $\mathbb{R}ef{hyp:basicassumpFilter}$, $\mathbb{R}ef{hyp:basicassumpElliptic}$, $\mathbb{R}ef{hyp:basicassumpGauss}$ and $\mathbb{R}ef{hyp:basicassumpUniformInteg}$ are satisfied. For either $d=1$, $2$ or $3$, the random field \begin{equation}lb \lambdab{DI0} \mathbb{X} i_t(x):= \int_0^{t} \int_{\mathbb{R}^d} \langle} \def\>{\rangleh_p(y-\cdot) \partial_pQ^{\lambdambda}(x-\cdot), \mu_s\> W(dy,ds) {\mbox{\rm e}}nd{equation}lb is a square-integrable ${\cal F}_t$-martingale, for every $\lambdambda > 0$ and $p\in\{1,2,\ldots,d\}$, with quadratic variation given by \begin{equation}nn \langle} \def\>{\rangle\mathbb{X} i(x)\>_t = \int_0^{t} ds \int_{\mathbb{R}^d} \langle} \def\>{\rangleh_p(y-\cdot)\partial_pQ^{\lambdambda}(x-\cdot), \mu_s\>^2 dy {\mbox{\rm e}}nd{equation}nn and satisfying $\sup_{x\in\mathbb{R}^d} \mathbb{E} _{\mu_0} \langle} \def\>{\rangle\mathbb{X} i(x)\>_t < \infty $ for every $t > 0$. {\mbox{\rm e}}le \noindent {\bf Proof: }\ Recalling the single particle transition density $q_t^{1}(0,\cdot)$ in the case where the starting position is the origin $0\in\mathbb{R}^d$, define the following perturbation of $Q^{\lambdambda}$ from (\mathbb{R}ef{eqn:green}), for every $\lambdambda > 0$ and $\varepsilonsilon > 0$: \begin{equation}lb \lambdab{perturbation} Q^{\lambdambda}_{\varepsilonsilon}(\cdot) := \int_{0}^{\infty}e^{- \lambdambda u} q_{u + \varepsilonsilon}^{1}(0,\cdot)du = e^{\lambdambda \varepsilonsilon}\int_{\varepsilonsilon}^{\infty}e^{- \lambdambda t} q_t^{1}(0,\cdot)dt {\mbox{\rm e}}nd{equation}lb and observe that $Q^{\lambdambda}_{\varepsilonsilon}\in C_b^{\infty}(\mathbb{R}^d)$ and \begin{equation}lb \lambdab{basic1} \lim_{\varepsilonsilon \downarrow 0}|Q^{\lambdambda}_{\varepsilonsilon}(x) - Q^{\lambdambda}(x)| \leq \lim_{\varepsilonsilon \downarrow 0} \int_0^{\infty} e^{- \lambdambda u}|P^1_{\varepsilonsilon}*q_u^{1}(x) - q_u^{1}(x)|du = 0, {\mbox{\rm e}}nd{equation}lb for every $x \in \mathbb{R}^d \setminus \{y= (y_1, y_2, \cdots,y_d): y_1 y_2 \cdots y_d = 0\}$ as $\varepsilonsilon \downarrow 0 $. By (iv) and (v) of Lemma \mathbb{R}ef{le_b} and the dominated convergence theorem, there holds, for every $a\ge0$ and $\mu_0 \in M_a(\mathbb{R}^d)$, \[ Q^{\lambdambda}_{\varepsilonsilon} \mathbb{R}ightarrow Q^{\lambdambda} \hspace{1cm} \mbox{ in $L^{k}(\mu_0); k=1,2; d=1,2,3;$} \] and \[ \partial_{x_i}Q^{\lambdambda}_{\varepsilonsilon} \mathbb{R}ightarrow \partial_{x_i}Q^{\lambdambda} \hspace{1cm} \mbox{in $L^1(\mu_0); d \geq 1; i=1, \cdots, d.$ } \] In fact, by (\mathbb{R}ef{LSU}) we know that ${\partial_{j}^{r}} {\partial_{k}^{s}} Q^{\lambdambda}_{\varepsilonsilon}\in L^p(\mu_0)\cap C_0(\mathbb{R}^d)$ also holds for any choice of $p\ge1$, $a\ge0$, $\mu_0\in M_a(\mathbb{R}^d)$, $1\le j, k \le d$ and nonnegative integers $r$ and $s$ such that $0\le r, s\le2$. 
By Corollary \mathbb{R}ef{MCOM}, we also know that ${\partial_{j}^{r}} {\partial_{k}^{t}} Q^{\lambdambda}_{\varepsilonsilon}\in L^p(\mu_t)$ holds $\mathbb{P} _{\mu_0}$-almost surely for all $t\ge0$. Hence, by Hypothesis $\mathbb{R}ef{hyp:basicassumpElliptic}$, the integrand in (\mathbb{R}ef{DI0}) is $\mathbb{P} _{\mu_0}$-almost surely finite for all $t\ge0$. We need a bit more, namely integrability with respect to $W$, which we prove next. With $\mathbb{X} i_t^{\varepsilonsilon}(x)$ defined like $\mathbb{X} i_t(x)$ in (\mathbb{R}ef{DI0}) with $Q^{\lambdambda}$ replaced by $Q^{\lambdambda}_{\varepsilonsilon}$, we begin by showing that $\{\mathbb{X} i_t^{\varepsilonsilon}(x):t\ge0\}$ is itself a square-integrable ${\cal F}_t$-martingale for every $x\in\mathbb{R}^d$. For any choice of $x,y\in\mathbb{R}^d$ let us denote by $\mathbb{P} si_{xy}^\varepsilonsilon\in C_b({(\mathbb{R}^d)^2})\cap L^1(\mathbb{R}^{2d})\cap L^1(\mu_0^2)$ the function \[ \mathbb{P} si_{xy}^\varepsilonsilon(w_1,w_2) := h_p(y-w_1) \partial_pQ^{\lambdambda}_{\varepsilonsilon}(x-w_1)\times h_p(y-w_2) \partial_pQ^{\lambdambda}_{\varepsilonsilon}(x-w_2). \] By Corollary \mathbb{R}ef{MCOM}, we can apply the duality identity (\mathbb{R}ef{dual}) to the second moment, to get \begin{equation}lb \lambdab{DI1} \mathbb{E} _{\mu_0} \left[ \{\mathbb{X} i_t^{\varepsilonsilon}(x)\}^2\mathbb{R}ight] & = & \mathbb{E} _{\mu_0} \int_0^t \int_{\mathbb{R}^d} \langle} \def\>{\rangle\mathbb{P} si_{xy}^\varepsilonsilon, \mu_s^2\> dy ds \\ & = & \int_0^t ds\int_{\mathbb{R}^d} dy\bigg[\langle} \def\>{\rangleP^2_s\mathbb{P} si_{xy}^\varepsilonsilon, \mu_0^2\> + \int_0^s \langle} \def\>{\rangleP^1_{s-r}\mathbb{P} hi_{12}^2P^2_r\mathbb{P} si_{xy}^\varepsilonsilon, \mu_0\> \gammamma \sigma^2 dr\bigg], \nonumber {\mbox{\rm e}}nd{equation}lb where $P^1_s$ and $P^2_s$ are the one and two particle transition semigroups from (\mathbb{R}ef{eqn:Semigroup}), with respective generators $G_1$ and $G_2$, and $\mathbb{P} hi_{12}^2$ is the diagonalisation mapping from (\mathbb{R}ef{restriction}). Rewrite (\mathbb{R}ef{LSU}) for every $(t,x,y)\in(0,T) \times (\mathbb{R}^d)^m \times (\mathbb{R}^d)^m$ with $T > 0$ as \begin{equation}lb \lambdab{LSU2} \left|\frac{\partial^{r}}{\partial t}\frac{\partial^{s}}{\partial y_{p}} q_t^{m}(x,y)\mathbb{R}ight| \leq \frac{a_1}{t^{(2r+s)/2}} \left(\frac{\pi}{a_2}\mathbb{R}ight)^{dm/2} \prod_{i=1}^m g_t(y_i-x_i), {\mbox{\rm e}}nd{equation}lb where $g_t$ is the centered gaussian density on $\mathbb{R}^d$ with $d$ independent coordinates and diagonal variance matrix with all entries equal to $t/2a_2$. Use Hypothesis $\mathbb{R}ef{hyp:basicassumpElliptic}$ to first take care of the term \begin{equation}lb \lambdab{DI10} & & \int_{\mathbb{R}^d} | h_p(y-w_1) | | h_p(y-w_2) | dy \leq \|h\|_\infty \|h\|_1 < \infty. {\mbox{\rm e}}nd{equation}lb We can now bound $\mathbb{P} si_{xy}^\varepsilonsilon$ by using the values $m=1$, $r=0$ and $s=1$ in (\mathbb{R}ef{LSU2}), to get, for some constant $c=c(T,d,h) > 0$ and all $\varepsilonsilon > 0$, \begin{equation}lb \lambdab{Psibound} \int_{\mathbb{R}^d} | \mathbb{P} si_{xy}^\varepsilonsilon(w_1,w_2) | dy \leq c \int_0^{\infty}\int_0^{\infty} e^{- \lambdambda (u+v)} \frac{1}{\sqrt{uv}} g_u(x-w_1)g_v(x-w_2) dudv. 
{\mbox{\rm e}}nd{equation}lb Therefore, using Fubini's theorem plus the values $m=2$ and $r=s=0$ in (\mathbb{R}ef{LSU2}) this time, we get, with a new constant $c=c(T,d,h) > 0$, \begin{equation}lb \lambdab{DI8} \int_{\mathbb{R}^d} | P^2_r \mathbb{P} si_{xy}^\varepsilonsilon(z_1,z_2) | dy & \leq & c \int_0^{\infty}\int_0^{\infty} e^{- \lambdambda (u+v)} \frac{1}{\sqrt{uv}} dudv \nonumber \\ & & \times \int_{\mathbb{R}^d}\int_{\mathbb{R}^d} g_u(x-w_1)g_v(x-w_2)g_r(w_1-z_1)g_r(w_2-z_2) dw_1dw_2 \nonumber \\ & \leq & c \int_0^{\infty}\int_0^{\infty} e^{- \lambdambda (u+v)} \frac{1}{\sqrt{uv}} g_{r+u}(x-z_1)g_{r+v}(x-z_2) dudv. {\mbox{\rm e}}nd{equation}lb By the same reasoning one finally obtains, with a further value for $c=c(T,d,h) > 0$, \begin{equation}lb \lambdab{DI9} \int_{\mathbb{R}^d} | P^1_{s-r}\mathbb{P} hi_{12}^2P^2_r \mathbb{P} si_{xy}^\varepsilonsilon(\zeta) | dy & \leq & c \int_0^{\infty}\int_0^{\infty} e^{- \lambdambda (u+v)} \frac{1}{\sqrt{uv}} dudv \nonumber \\ & & \times \int_{\mathbb{R}^d} g_{r+u}(x-z)g_{r+v}(x-z) q_{s-r}^{1}(\zeta,z) dz. {\mbox{\rm e}}nd{equation}lb Hypothesis $\mathbb{R}ef{hyp:basicassumpGauss}$ now yields a new $c=c(T,d,h,a,\mu_0)$ such that \begin{equation}lb \lambdab{DI12} & & \int_0^t ds \int_{\mathbb{R}^d} dy \int_0^s | \langle} \def\>{\rangle P^1_{s-r}\mathbb{P} hi_{12}^2P^2_r\mathbb{P} si_{xy}^\varepsilonsilon, \mu_0 \> | dr = \int_0^t dr \int_{\mathbb{R}^d} dy \int_r^t | \langle} \def\>{\rangle P^1_{s-r}\mathbb{P} hi_{12}^2P^2_r\mathbb{P} si_{xy}^\varepsilonsilon, \mu_0 \> | ds \nonumber \\ & & \leq c \int_0^t dr \int_0^{\infty}\int_0^{\infty} e^{- \lambdambda (u+v)} \frac{1}{\sqrt{uv}} dudv \left[ \int_{\mathbb{R}^d} g_{r+u}(x-z)g_{r+v}(x-z)dz \mathbb{R}ight] \nonumber \\ & & \leq ca_1\chi_{d} (t), {\mbox{\rm e}}nd{equation}lb which is finite when $d\le3$, since (see Subsection \mathbb{R}ef{app:pf_DI13} for the proof of this upper bound) \begin{equation}lb \lambdab{DI13} \chi_{d}(t) {\mbox{\rm e}}quiv \int_0^{\infty} \int_0^{\infty} e^{- \lambdambda (u+v)} \frac{1}{\sqrt{uv}} \int_0^t \frac{1}{(\sqrt{2r +u + v})^{d}} dr du dv \le (t+1)\pi \sqrt{\frac{\pi}{\lambdambda}}. {\mbox{\rm e}}nd{equation}lb This takes care of the second term in (\mathbb{R}ef{DI1}); the first one is done similarly by way of (\mathbb{R}ef{DI8}). This proves that $\{\mathbb{X} i_t^\varepsilonsilon(x):t\ge0\}$ is a square-integrable martingale for every $x\in\mathbb{R}^d$ and $\varepsilonsilon > 0$, as long as $d\le3$; moreover, for every $T > 0$ and $\lambdambda > 0$, there holds \begin{equation}lb \lambdab{SquareInteg} \sup_{\varepsilonsilon>0}\sup_{x\in\mathbb{R}^d}\sup_{0 \leq t \leq T} \mathbb{E} _{\mu_0} \left[ \mathbb{X} i_t^{\varepsilonsilon}(x)\mathbb{R}ight]^2 < \infty. {\mbox{\rm e}}nd{equation}lb Using similar ideas we get \begin{equation}lb \lambdab{DI1a} \sup_{x\in\mathbb{R}^d}\sup_{0 \leq t \leq T} \mathbb{E} _{\mu_0} \left[ \left| \mathbb{X} i_t^{\varepsilonsilon}(x)-\mathbb{X} i_t^{\delta}(x)\mathbb{R}ight|^2 \mathbb{R}ight] \leq c \varepsilonsilon, {\mbox{\rm e}}nd{equation}lb for some new constant $c=c(T,d,h,a,\mu_0,a_1,a_2,\gammamma,\sigma, \lambdambda) > 0$ (this one dependent on $\lambdambda$ but independent of $\varepsilonsilon$) and all $0 < \delta < \varepsilonsilon < 1$. Details are found in Subsection \mathbb{R}ef{app:pf_DI1a}. 
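(For the reader's convenience we record the Gaussian convolution identity behind steps such as the last inequality in (\ref{DI8}):
\[
\int_{\mathbb{R}^d} g_u(x-w)\,g_r(w-z)\,dw = g_{u+r}(x-z), \qquad u,r>0,\quad x,z\in\mathbb{R}^d,
\]
which holds because the coordinate variances $u/2a_2$ and $r/2a_2$ add under convolution.)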
Picking any sequence $\varepsilonsilon_n$ decreasing to $0$, we get that $\{\mathbb{X} i_t^0(x):=\lim_{\varepsilonsilon_n\mathbb{R}ightarrow0}\mathbb{X} i_t^{\varepsilonsilon_n}(x):t\ge0\}$ exists $\mathbb{P} _{\mu_0}$-almost surely and is a square-integrable martingale for every $x\in\mathbb{R}^d$. We define $\mathbb{X} i_t$ in (\mathbb{R}ef{DI0}) as this unique limit $\mathbb{X} i_t:=\mathbb{X} i_t^0$, in both of the $\mathbb{P} _{\mu_0}$-almost sure and $L^2(\mathbb{P} _{\mu_0})$ sense. The convergence in $L^2(\mathbb{P} _{\mu_0})$ of the sequence $\{\mathbb{X} i_t^{\varepsilonsilon_n}(x)\}$, for each fixed $t\ge0$ and $x\in\mathbb{R}^d$, together with Corollary \mathbb{R}ef{MCOM} plus Hypotheses $\mathbb{R}ef{hyp:basicassumpFilter}$, $\mathbb{R}ef{hyp:basicassumpElliptic}$ and $\mathbb{R}ef{hyp:basicassumpGauss}$, imply that (\mathbb{R}ef{DI1}) holds also in the limiting case $\varepsilonsilon=0$. Since the right hand side of (\mathbb{R}ef{DI8}), (\mathbb{R}ef{DI12}), and (\mathbb{R}ef{DI13}) do not dependent on $\varepsilonsilon$, by the dominated convergence theorem, we also have $\sup_{x\in\mathbb{R}^d}\sup_{0 \leq t \leq T} \mathbb{E} _{\mu_0} \left[ \mathbb{X} i_t(x)\mathbb{R}ight]^2 < \infty$. $\square$ \begin{lemma} \lambdab{SFT} Assume Hypotheses $\mathbb{R}ef{hyp:basicassumpFilter}$, $\mathbb{R}ef{hyp:basicassumpElliptic}$, $\mathbb{R}ef{hyp:basicassumpGauss}$ and $\mathbb{R}ef{hyp:basicassumpUniformInteg}$ are satisfied. For any $t>0$ and $\phi \in C^{\infty}_c(\mathbb{R}^d)$, we have $\mathbb{P} _{\mu_0}$-almost surely \begin{equation}lb \lambdab{ex1} & & \int_{\mathbb{R}^d} \phi(x) \int_0^{t} \int_{\mathbb{R}^d} \sum_{p=1}^{d} \langle} \def\>{\rangleh_p(y- \cdot) \partial_p Q^{\lambdambda}(x - \cdot), \mu_s\>W(dy,ds) dx \nonumber \\ & & = \int_0^{t} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} \phi(x) \sum_{p=1}^{d} \langle} \def\>{\rangleh_p(y- \cdot) \partial_p Q^{\lambdambda}(x - \cdot), \mu_s\>dx W(dy,ds) {\mbox{\rm e}}nd{equation}lb and \begin{equation}lb \lambdab{ex2} & & \int_{\mathbb{R}^d} \phi(x) \int_0^{t} \int_{\mathbb{R}^d}Q^{\lambdambda}(x-y)M(dy,ds)dx \nonumber \\ & & = \int_0^{t} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} \phi(x) Q^{\lambdambda}(x-y)dx M(dy,ds), {\mbox{\rm e}}nd{equation}lb where $W$ is a space-time white noise and $M$ is an orthogonal worthy martingale measure with \[ \langle} \def\>{\rangle \int_{\mathbb{R}^d} \phi(x-y) M(dyds)\> = \gammamma \sigma^2 \int_{\mathbb{R}^d} \phi(x)^2 dx ds. \] {\mbox{\rm e}}le \noindent {\bf Proof: }\ Both equalities follow from a stochastic version of Fubini's theorem due to Walsh \cite{Walsh86} (see his Theorem 2.6 on p. 296). Since the covariance measure of $M(dx,ds)$ is deterministic, (\mathbb{R}ef{ex1}) and (\mathbb{R}ef{ex2}) are handled similarly; we only prove the more difficult (\mathbb{R}ef{ex1}). By Corollary 2.8 of Walsh \cite{Walsh86}, we only need to check \begin{equation}lb \lambdab{cond1} & & \hspace{-1cm} \mathbb{E} _{\mu_0} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} \int_0^t \langle} \def\>{\rangle\sum_{p=1}^{d}|h_p(\xi - y) \partial_p Q^{\lambdambda}(x - y)|, \mu_s(dy)\> \nonumber \\ & & \cdot \langle} \def\>{\rangle \sum_{q=1}^{d} | h_q(\xi - y) \partial_q Q^{\lambdambda}(x - y)|, \mu_s(dy)\> d\xi ds \nu(dx) < \infty, {\mbox{\rm e}}nd{equation}lb for any finite measure on ${\cal B}(\mathbb{R}^d)$ of the form $\nu(dx) = \phi(x) dx$ with positive $\phi \in C_c(\mathbb{R}^d)$. 
Once again write $g_r$ for the centered gaussian density on $\mathbb{R}^d$ with $d$ independent coordinates and diagonal variance matrix with all entries equal to $r/2a_2$. Let \[ f(x-y) := \int_0^{\infty} \frac{e^{- \lambdambda r}}{\sqrt{r}} g_r(x - y)dr \] and observe \begin{equation}lb \lambdab{cond2} & & \hspace{-1cm} \mathbb{E} _{\mu_0} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} \int_0^t \langle} \def\>{\rangle\sum_{p=1}^{d}|h_p(\xi - y) \partial_p Q^{\lambdambda}(x - y)|, \mu_u(dy)\> \nonumber \\ & & \cdot \langle} \def\>{\rangle \sum_{q=1}^{d} | h_q(\xi - y) \partial_q Q^{\lambdambda}(x - y)|, \mu_u(dy)\> d\xi du \nu(dx) \nonumber \\ & & \leq k_1 \sum_{p,q=1}^{d} \mathbb{R}ho_{pq}(0) \mathbb{E} _{\mu_0} \int_{\mathbb{R}^d} \int_0^t \langle} \def\>{\rangle\int_0^{\infty} \frac{e^{- \lambdambda r}}{\sqrt{r}} g_r(x - y)dr, \mu_u(dy)\> \nonumber \\ & & \cdot \langle} \def\>{\rangle\int_0^{\infty} \frac{e^{- \lambdambda s}}{\sqrt{s}}g_s(x - y)ds, \mu_u(dy)\> du \nu(dx) \nonumber \\ & & \leq k_1 \sum_{p,q=1}^{d} \mathbb{R}ho_{pq}(0) \int_0^t \int_{\mathbb{R}^d} \mathbb{E} _{\mu_0} \langle} \def\>{\ranglef(x-y), \mu_u(dy)\>^2 du \nu(dx). {\mbox{\rm e}}nd{equation}lb We already proved the finiteness of \[ \sup_{0 < u \leq T}\sup_{x \in \mathbb{R}^d} \mathbb{E} _{\mu_0}\langle} \def\>{\ranglef(x-y), \mu_u(dy)\>^2 < \infty. \] Thus, (\mathbb{R}ef{cond1}) holds. This proves (\mathbb{R}ef{ex1}). $\square$ \\ \noindent {\bf Proof: }\ [Proof of Theorem \mathbb{R}ef{lt_th1}] First, since $u=Q^{\lambdambda}$ is the fundamental solution of \[ (\lambdambda - G_1)u = 0, \] we have \[ (\lambdambda - G_1 ) Q^{\lambdambda}_{\varepsilonsilon}(x)= (\lambdambda - G_1 ) Q^{\lambdambda}*q_{\varepsilonsilon}^{1}(x) = q_{\varepsilonsilon}^{1}(x). \] For every $ \phi \in C_c(\mathbb{R}^d)$ and any given $t > 0$, by definition of $\Lambdambda^x_t $ we have \begin{equation}lb \lambdab{App0} && \lim_{\varepsilonsilon \downarrow 0}\mathbb{E} _{\mu_0}|\int_{\mathbb{R}^d} \phi(x) \Lambdambda^x_tdx - \int_{\mathbb{R}^d}\phi(x) \Lambdambda^{x, \varepsilonsilon}_t dx| \nonumber \\ & & \leq \lim_{\varepsilonsilon \downarrow 0 }\sup_{x \in \mathbb{R}^d} \sup_{0 \leq t \leq T} \mathbb{E} _{\mu_0}|\Lambdambda^x_t - \Lambdambda^{x, \varepsilonsilon}_t|\int_{\mathbb{R}^d}\phi(x) dx = 0. {\mbox{\rm e}}nd{equation}lb Then, for every $0 \leq t \leq T$, $x \in \mathbb{R}^d$ and $\omega$ outside of the $\mathbb{P} _{\mu_0}$-null set $N$ specified by the statement of Lemma \mathbb{R}ef{SFT}, a stochastic version of Fubini's theorem, there holds \begin{equation}lb \lambdab{App1} & & \int_{\mathbb{R}^d} \phi(x) \Lambdambda^{x}_t dx = \lim_{\varepsilonsilon \downarrow 0}\int_{\mathbb{R}^d} \phi(x) \Lambdambda^{x, \varepsilonsilon}_t dx \nonumber \\ & & = \lim_{\varepsilonsilon \downarrow 0} \int_{0}^{t}\langle} \def\>{\rangle\int_{\mathbb{R}^d}(-G_1 + \lambdambda) Q^{\lambdambda}_{\varepsilonsilon}(x- \cdot) \phi(x)dx, \mu_s\> ds \nonumber \\ && = \lim_{\varepsilonsilon \downarrow 0} \int_{0}^{t}\langle} \def\>{\rangle\int_{\mathbb{R}^d}q_{\varepsilonsilon}^{1}(x- \cdot) \phi(x)dx, \mu_s\> ds \nonumber \\ && = \lim_{\varepsilonsilon \downarrow 0} \int_{0}^{t}\langle} \def\>{\rangleP_{\varepsilonsilon}*\phi(\cdot), \mu_s\> ds =\int_0^t\langle} \def\>{\rangle\phi, \mu_s\>ds. {\mbox{\rm e}}nd{equation}lb This proves that $\Lambdambda^x_t$ is the SDSM local time. 
Since $Q^{\lambdambda} \mb{\rm a.s.}t \phi(\cdot) \in C_b^{\infty}(\mathbb{R}^d)$, by Lemma \mathbb{R}ef{SFT}, we have \begin{equation}lb \lambdab{App1} & & \int_{\mathbb{R}^d} \phi(x) \Lambdambda^{x}_t dx = \int_{0}^{t}\langle} \def\>{\rangle(-G_1 + \lambdambda) \int_{\mathbb{R}^d}Q^{\lambdambda}(x- \cdot) \phi(x)dx, \mu_s\> ds \nonumber \\ & & = \int_{0}^{t}\langle} \def\>{\rangle(-G_1 + \lambdambda) Q^{\lambdambda}* \phi(\cdot), \mu_s\> ds \nonumber \\ & & = \langle} \def\>{\rangleQ^{\lambdambda}* \phi(\cdot), \mu_0\> - \langle} \def\>{\rangleQ^{\lambdambda}* \phi(\cdot), \mu_{t}\> + \lambdambda \int_0^{t} \langle} \def\>{\rangleQ^{\lambdambda}* \phi(\cdot), \mu_s\>ds \nonumber \\ & & + \sum_{p=1}^{d}\int_0^{t} \int_{\mathbb{R}^d}\langle} \def\>{\rangleh_p(y- \cdot) \partial_p Q^{\lambdambda}* \phi(\cdot), \mu_s\>W(dy,ds) \nonumber \\ & & + \int_0^{t} \int_{\mathbb{R}^d}Q^{\lambdambda}* \phi(y)M(dy,ds) \nonumber \\ & & = \int_{\mathbb{R}^d} \phi(x) \bigg\{\langle} \def\>{\rangleQ^{\lambdambda}(x - \cdot), \mu_0\> - \langle} \def\>{\rangleQ^{\lambdambda}(x- \cdot), \mu_{t}\> + \lambdambda \int_0^{t} \langle} \def\>{\rangleQ^{\lambdambda}(x - \cdot), \mu_s\>ds \nonumber \\ & & + \sum_{p=1}^{d}\int_0^{t} \int_{\mathbb{R}^d}\langle} \def\>{\rangleh_p(y- \cdot) \partial_p Q^{\lambdambda}(x - \cdot), \mu_s\>W(dy,ds) \nonumber \\ & & + \int_0^{t} \int_{\mathbb{R}^d}Q^{\lambdambda}(x-y)M(dy,ds)\bigg\}dx {\mbox{\rm e}}nd{equation}lb $\mathbb{P} _{\mu_0}$-almost surely, jointly for all $\phi \in C_c(\mathbb{R}^d)$. So (\mathbb{R}ef{TanakaII}) holds and we are done. $\square$ \setcounter{equation}{0}\sectionion{Proofs of technical results}\lambdabel{app:ProofsOfLemmas} \setcounter{equation}{0} \subsection{Proof of uniform ellipticity of operator ${G}_{m}$ in (\mathbb{R}ef{eqn:Gn})}\lambdabel{app:pf_ellip1} \noindent {\bf Proof: }\ To check the uniform ellipticity of $(\mathbb{G} ammamma^{ij}_{pq})$, let $\xi_i= (\xi_{i1}, \cdots, \xi_{id})^{T}$ denote an arbitrary column vector in $\mathbb{R}^d$ and $\mathbb{G} ammamma := (\mathbb{G} ammamma^{ij}_{pq}(x_1, \cdots, x_m))_{1 \leq i,j \leq m, 1 \leq p,q \leq d}$. 
Since
\begin{eqnarray} \label{ellip1}
& & (\xi_1^{T}, \cdots, \xi_m^{T} ) \Gamma \left( \begin{array}{c} \xi_1 \\ \vdots \\ \xi_{m} \end{array} \right) = \sum_{i,j=1}^{m} \sum_{p,q=1}^{d} \xi_{ip} \Gamma^{ij}_{pq}(x_1, \cdots, x_m) \xi_{jq} \nonumber \\
&& = \sum_{i=1}^{m} \sum_{p,q=1}^{d}\left[\xi_{ip}(a_{pq}(x_i) + \rho_{pq}(x_i,x_i)) \xi_{iq} \right] + \sum_{i,j=1, i \neq j}^{m} \sum_{p,q=1}^{d}\xi_{ip} \rho_{pq}(x_i,x_j) \xi_{jq} \nonumber \\
& & = \sum_{i=1}^{m} \bigg[\sum_{r=1}^{d} \big(\sum_{p=1}^d \xi_{ip} c_{pr}(x_i) \big)^2 + \int_{\mathbb{R}^d}\big(\sum_{p=1}^d \xi_{ip} h_{p}(u-x_i)\big)^2du \bigg] \nonumber \\
& & + \sum_{i,j=1, i \neq j}^{m} \int_{\mathbb{R}^d} \big( \sum_{p=1}^d \xi_{ip} h_{p}(u-x_i) \big)\big( \sum_{q=1}^d \xi_{jq} h_{q}(u-x_j) \big) du \nonumber \\
& & = \sum_{i=1}^{m} \sum_{r=1}^{d} \big(\sum_{p=1}^d \xi_{ip} c_{pr}(x_i) \big)^2 + \int_{\mathbb{R}^d}\bigg[ \sum_{i=1}^{m} \big(\sum_{p=1}^d \xi_{ip} h_{p}(u-x_i)\big)\bigg]^2du \geq 0 ,
\end{eqnarray}
by the uniform ellipticity assumption on $(a_{pq})_{1 \leq p,q \leq d}$ there exists a positive real number $\varepsilon > 0$ such that for each $1 \leq i \leq m$
\begin{equation} \label{u_e}
\sum_{r=1}^{d} \big(\sum_{p=1}^d \xi_{ip} c_{pr}(x_i) \big)^2=\sum_{p,q=1}^{d}\left[\xi_{ip}a_{pq}(x_i) \xi_{iq} \right] \geq \varepsilon | \xi_i|^2,
\end{equation}
where $| \xi_i|= \sqrt{\xi_{i1}^2 + \cdots + \xi_{id}^2}$. The uniform ellipticity of $\Gamma$ follows. In the left hand side of the last inequality of (\ref{ellip1}), the first term is associated with the coefficients of the individual motions and the second term is associated with the coefficients of the common random environment. $\square$

\subsection{Proof of uniformity of Aronson bounds in (\ref{Aronsonbounds})}\label{app:pf_jointparametrization}

\noindent {\bf Proof: }\ Since the mapping $c\mapsto c^{d/2}\varphi_{cs}(y-x)$ is increasing for any $x, y \in \mathbb{R}^d$ and $s > 0$, all upper and lower Aronson bounds are respectively bounded uniformly in $d\in\{1, 2, \ldots, n\}$ above and below, via
$$ \bigg(\frac{c_*}{c^*}\bigg)^{n/2} \alpha_* \cdot \varphi_{c_*s}(y-x) \leq A^*(d) \cdot \varphi_{c(d)s}(y-x) \leq \bigg(\frac{c^*}{c_*}\bigg)^{n/2} \alpha^* \cdot \varphi_{c^*s}(y-x) $$
where $c=c(d)$ expresses the dependency on the dimension, $c^*=\max\{c(1), c(2), \ldots, c(n)\}$ and $c_*=\min\{c(1), c(2), \ldots, c(n)\}$, while $\alpha^*=\max\{A^*(1), A^*(2), \ldots, A^*(n)\}$ and finally $\alpha_*=\min\{A^*(1), A^*(2), \ldots, A^*(n)\}$. $\square$

\subsection{Proof of rescaling properties in Section \ref{sec:dualConst}}\label{app:pf_iam}

\noindent {\bf Proof: }\ [Proof of statement (\ref{iam})] Clearly it suffices to prove it in the case $m=1$. For every $d\ge1$ we see at once that $I_a\in C_b(\mathbb{R}^d)$ holds. The symmetry of $|x|^2$ in its coordinates also allows for a reordering of the partial derivatives. Using repeatedly the formula $\partial_{x_i} I_a(x)=-ax_i I_{a+2}(x)$, by induction on $k\ge0$ we have
\begin{equation*}
\partial_{x_1}^k I_a(x)=\sum_{\gamma=0}^k H_{\gamma,k}(x_1)I_{a+k+\gamma}(x)
\end{equation*}
for every $d\ge1$, where $H_{\gamma,k}$ is a real polynomial in $x_1$ of order at most ${\gamma}$. Therefore all these partial derivatives are finite sums of bounded continuous functions.
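For instance, using only the displayed formula $\partial_{x_i}I_a(x)=-ax_iI_{a+2}(x)$, the first two iterations read
\[
\partial_{x_1} I_a(x) = -a\,x_1\,I_{a+2}(x),
\qquad
\partial_{x_1}^2 I_a(x) = -a\,I_{a+2}(x) + a(a+2)\,x_1^2\,I_{a+4}(x),
\]
which matches the claimed form with $H_{1,1}(x_1)=-ax_1$, $H_{0,2}=-a$ and $H_{2,2}(x_1)=a(a+2)x_1^2$, each of order at most $\gamma$.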
Note that $H_{\gammamma,k}$ depends on parameter $a\ge0$ but not on dimension $d\ge1$. This also finishes the proof in the base case $d=1$. The next stage yields \begin{equation*}\partial_{x_2}^{{\mbox{\rm e}}ll}\partial_{x_1}^k I_a(x) =\sum_{{\beta}=0}^{{\mbox{\rm e}}ll}\sum_{{\gammamma}=0}^k H_{\gammamma,k}(x_1)H_{\beta,{\mbox{\rm e}}ll}(x_2)I_{a+{\mbox{\rm e}}ll+k+\beta+\gammamma}(x) {\mbox{\rm e}}nd{equation*} for every $k\ge1$, ${\mbox{\rm e}}ll\ge1$ and $d\ge1$, again all bounded and continuous. Case $d=2$ is also finished. Another iteration finishes the proof, this time through a multiple induction on the integer vectors of the form $(d,k_1,\ldots,k_d)$ with $d\ge1$ and every $k_i\ge0$. $\square$ \noindent {\bf Proof: }\ [Proof of statement (\mathbb{R}ef{iamgm})] Using the above calculations, one can also show that ${\cal I}_{a, m}^{-1} G_m {\cal I}_{a, m} \in C_b((\mathbb{R}^d)^{m})$ holds for every $m\ge1$. Indeed, since all coefficients $\mathbb{G} ammamma_{pq}^{ij}$ of $G_m$ in (\mathbb{R}ef{eqn:Gn}) are bounded and continuous by Hypothesis \mathbb{R}ef{hyp:basicassumpElliptic}, with $G_m$ involving only partial derivatives of order $1$ or $2$, it suffices to prove this statement for $m\le2$. In the base case $m=1$, $I_a^{-1} G_1 I_a \in C_b(\mathbb{R}^d)$ holds because both $I_a^{-1}(x) \partial_{x_1}^2 I_a(x)$ and $I_a^{-1}(x) \partial_{x_2}\partial_{x_1} I_a(x)$ are bounded and continuous. For $m=2$ the only new terms take the form $I_a^{-1}(x)I_a^{-1}(y)\partial_{x_1} I_a(x)\partial_{y_1}I_a(y)$, a product of two terms already covered in case $m=1$. $\square$ \noindent {\bf Proof: }\ [Proof of Lemma \mathbb{R}ef{lea}] Proposition 1.1.5b of Ethier and Kurtz \cite{EthierKurtz86} ensures that both $P_t^{m}f\in{\cal D}(G_m)$ and $ \frac{\partial}{\partial t} P_t^{m} f = G_m P_t^{m} f = P_t^{m} G_m f$ hold for every choice of $f\in{\cal D}(G_m)$ and $t\ge0$. The result is therefore true in the case $a=0$ under Hypothesis \mathbb{R}ef{hyp:basicassumpElliptic}, so let us proceed with parameter values $a > 0$ in mind. All we need to show is that both the following integrability conditions hold, for any $f \in {\cal D}_a(G_m)$: $ \sup_{0 \leq t \leq T} \| {\cal I}_{a, m}^{-1} P^{m}_t f \|_{\infty} < \infty$ and $ \sup_{0 \leq t \leq T} \| {\cal I}_{a, m}^{-1} P_t^{m} G_m f \|_{\infty} < \infty$. It suffices to show their validity when $f={\cal I}_{a, m}$ because the linearity of $P_t^{m}$ implies both $ | {\cal I}_{a, m}^{-1} P^{m}_t f (x)| \le {\cal I}_{a, m}^{-1} P^{m}_t {\cal I}_{a, m}(x) \cdot \| {\cal I}_{a, m}^{-1} f \|_{\infty}$ and $ | {\cal I}_{a, m}^{-1} P^{m}_t G_m f (x)| \le {\cal I}_{a, m}^{-1} P^{m}_t {\cal I}_{a, m}(x) \cdot \| {\cal I}_{a, m}^{-1} G_m f \|_{\infty}$, for every $x\in(\mathbb{R}^d)^m$, since we already assume $\| {\cal I}_{a, m}^{-1} f \|_{\infty} < \infty$ and $\| {\cal I}_{a, m}^{-1} G_m f \|_{\infty} < \infty$ by choosing $f \in {\cal D}_a(G_m)$. We first prove that $ \sup_{0 \leq t \leq T} \| {\cal I}_{a, m}^{-1} P^{m}_t {\cal I}_{a, m} \|_{\infty} < \infty$ holds. 
Since the gaussian density integrates to $1$ for every mean $x\in(\mathbb{R}^d)^m$, as in $\langle} \def\>{\rangle g_t^{m}(x,\cdot), \lambdambda_0 \>=1$ with \begin{equation}lb \lambdab{Gauss} g_t^{m}(x,y)= \left(\frac{a_2}{\pi t} \mathbb{R}ight)^{dm/2}{\mbox{\rm e}}xp{\left\{- a_2 \left( \frac{|y-x|^{2}}{t} \mathbb{R}ight)\mathbb{R}ight\}}, {\mbox{\rm e}}nd{equation}lb the application of inequality (\mathbb{R}ef{LSU}) with $r=s=0$ yields upper bound \begin{equation}nn {\cal I}_{a, m}^{-1}(x) P^{m}_t {\cal I}_{a, m} (x) & = & {\cal I}_{a, m}^{-1}(x) \int_{(\mathbb{R}^d)^m} {\cal I}_{a, m} (y) q_t^{m}(x,y)dy \\ & \le & a_1\left(\frac{\pi}{a_2} \mathbb{R}ight)^{dm/2} {\cal I}_{a, m}^{-1}(x) \int_{(\mathbb{R}^d)^m} {\cal I}_{a, m} (y) g_t^{m}(x,y)dy \\ & = & a_1\left(\frac{\pi}{a_2} \mathbb{R}ight)^{dm/2} \prod_{i=1}^m\left(I_a^{-1}(x_i)\int_{\mathbb{R}^d}I_a (y_i) g_t^{1}(x_i,y_i)dy_i\mathbb{R}ight) {\mbox{\rm e}}nd{equation}nn for every $x\in(\mathbb{R}^d)^m$ and $t\in(0,T)$. Therefore this part of the proof will be complete once we show that there is a positive constant $C=C(a,d,T)$ such that (when $m=1$) \begin{equation}lb \lambdab{mainconstant} \sup_{0 \leq t \leq T} \sup_{x\in\mathbb{R}^d}\left(I_a^{-1}(x)\int_{\mathbb{R}^d}I_a (y) g_t^{1}(x,y)dy\mathbb{R}ight)\le C {\mbox{\rm e}}nd{equation}lb holds for all $a\ge0$. Any standard gaussian random variable $Z\sim N(0,1)$ satisfies the inequality $P(|Z|\ge z)\le e^{-z^2/2}$ for every $z\ge0$, so we have, for every $z\ge0$ and $t\ge0$, \begin{equation}nn I_a^{-1}(x)\int_{\{y:|y-x|\ge z|x|\}}I_a (y) g_t^{1}(x,y)dy & \le & I_a^{-1}(x){\mbox{\rm e}}xp{\left\{- \frac{a_2 z^{2}|x|^{2}}{t}\mathbb{R}ight\}} \\ & \le & \max\left\{2^{a/2},\left(\frac{at}{ea_2z^2}\mathbb{R}ight)^{a/2}\mathbb{R}ight\}, {\mbox{\rm e}}nd{equation}nn using $I_a \le1$ in the first inequality and then separating as to whether or not $|x|\le1$, to get the second one; restricting to $z\in(0,1/2)$ both ensures $|y|\ge (1-z)|x|$ on the complement of set $\{y:|y-x|\ge z|x|\}$ and makes $I_a^{-1}(x)I_a ((1-z)x)$ into an increasing function in $|x|$, so that \[ I_a^{-1}(x)\int_{\{y:|y-x| < z|x|\}}I_a (y) g_t^{1}(x,y)dy \le I_a^{-1}(x)I_a ((1-z)x)\le(1-z)^{-a}. \] These bounds together yield the existence of such a $C$, with $C\ge2$ for all $a\ge0$. Finally, for every $x\in(\mathbb{R}^d)^m$ and $t\in(0,T)$, we have \[ | {\cal I}_{a, m}^{-1} P_t^{m} G_m {\cal I}_{a, m} | (x) = | {\cal I}_{a, m}^{-1} P_t^{m} ({\cal I}_{a, m} {\cal I}_{a, m}^{-1} G_m {\cal I}_{a, m}) (x) | \le \| {\cal I}_{a, m}^{-1} G_m {\cal I}_{a, m} \|_{\infty} \cdot {\cal I}_{a, m}^{-1} P_t^{m} {\cal I}_{a, m} (x), \] so that $ \sup_{0 \leq t \leq T} \| {\cal I}_{a, m}^{-1} P_t^{m} G_m {\cal I}_{a, m} \|_{\infty} < \infty$ holds as well. $\square$ \noindent {\bf Proof: }\ [Proof that the mappings $\mathbb{P} hi_{ij}^m$ in (\mathbb{R}ef{restriction}) are well-defined] We need to prove that $\mathbb{P} hi_{ij}^m$ maps ${\cal D}_a(G_m)$ into ${\cal D}_a(G_{m-1})$ for every $m\ge2$. 
First notice the equalities \begin{equation}nn & & {\cal I}_{a, m-1}^{-1}(y_1, \cdots, y_{j-1}, y_{j+1}, \cdots, y_m) f(y_{1}, \cdots, y_{j-1},y_{i},y_{j+1},\cdots, y_{m}) \\ = & & I_a(y_{i})I_a^{-1}(y_{i}){\cal I}_{a, m-1}^{-1}(y_1, \cdots, y_{j-1}, y_{j+1}, \cdots, y_m) f(y_{1}, \cdots, y_{j-1},y_{i},y_{j+1},\cdots, y_{m}) \\ = & & I_a(y_{j})I_a^{-1}(y_{j}){\cal I}_{a, m-1}^{-1}(y_1, \cdots, y_{j-1}, y_{j+1}, \cdots, y_m) f(y_{1}, \cdots, y_{j-1},y_{j},y_{j+1},\cdots, y_{m}) \\ = & & I_a(y_{j}){\cal I}_{a, m}^{-1}(y_1, \cdots, y_{j-1}, y_{j}, y_{j+1}, \cdots, y_m) f(y_{1}, \cdots, y_{j-1},y_{j},y_{j+1},\cdots, y_{m}), {\mbox{\rm e}}nd{equation}nn the second one holding whenever variable $y_j$ is set to the value of $y_i$. Taking the supremum over all $(y_1, \cdots, y_{j-1}, y_{j}, y_{j+1}, \cdots, y_m)\in(\mathbb{R}^d)^m$ in the top and bottom lines yields \begin{equation}lb \lambdab{shrink} || {\cal I}_{a, m-1}^{-1} \mathbb{P} hi_{ij}^mf ||_{\infty}\le || I_a ||_{\infty} \cdot || {\cal I}_{a, m}^{-1} f ||_{\infty} \le || {\cal I}_{a, m}^{-1} f ||_{\infty}, {\mbox{\rm e}}nd{equation}lb so that $ || {\cal I}_{a, m}^{-1} f ||_{\infty} < \infty$ implies $ || {\cal I}_{a, m-1}^{-1} \mathbb{P} hi_{ij}^mf ||_{\infty} < \infty$, which is the first requirement. By the same reasoning, it follows that $ || {\cal I}_{a, m}^{-1} D_m f ||_{\infty} < \infty$ implies $ || {\cal I}_{a, m-1}^{-1} \mathbb{P} hi_{ij}^m D_mf ||_{\infty} < \infty$, for any differential operator $D_m$ of order at most $2$ (such as $G_m$, for example). The second requirement is $|| {\cal I}_{a, m-1}^{-1} G_{m-1} \mathbb{P} hi_{ij}^mf ||_{\infty} < \infty$, which follows at once upon noting that the chain rule yields $\partial/\partial_{x_{iq}}\mathbb{P} hi_{ij}^mf=\mathbb{P} hi_{ij}^m(\partial/\partial_{x_{iq}}f+\partial/\partial_{x_{jq}}f)$ while $\partial/\partial_{x_{jq}}\mathbb{P} hi_{ij}^mf=0$ and $\partial/\partial_{x_{kq}}\mathbb{P} hi_{ij}^mf=\mathbb{P} hi_{ij}^m(\partial/\partial_{x_{kq}}f)$ for every $k\not\in\{i,j\}$ and every $q$. Handling the second derivatives similarly one obtains an operator $D_m$ such that $G_{m-1} \mathbb{P} hi_{ij}^mf = \mathbb{P} hi_{ij}^m D_mf$. $\square$ \noindent {\bf Proof: }\ [Proof of Lemma \mathbb{R}ef{lea2}] Given $J_0=m\ge1$ and $Y_0\in{\cal D}_a(G_m)$, rewrite the expression for process $Y$ in (\mathbb{R}ef{Yprocess}) by replacing each $S_k$ with ${\cal I}_{a, k-1}{\cal I}_{a, k-1}^{-1}S_k{\cal I}_{a, k}{\cal I}_{a, k}^{-1}$ and reading the new expression from right to left through the natural triplets thus formed, to get \[ {\cal I}_{a, J_t}^{-1} Y_t = \left({\cal I}_{a, J_t}^{-1} P^{J_{\tau_k}}_{t-\tau_k} {\cal I}_{a, J_{\tau_{k-1}}}\mathbb{R}ight) \left({\cal I}_{a, J_{\tau_{k-1}}}^{-1} S_k {\cal I}_{a, J_{\tau_{k-2}}}\mathbb{R}ight) \cdots \left({\cal I}_{a, J_{\tau_{1}}}^{-1} S_1 {\cal I}_{a, J_{\tau_{0}}}\mathbb{R}ight) \left({\cal I}_{a, J_{0}}^{-1} P^{J_0}_{\tau_1}Y_0\mathbb{R}ight), \] for $\tau_k \le t < \tau_{k+1}$ and $0\le k\le J_0-1$. By inequality (\mathbb{R}ef{shrink}), all the triplets in $S$ are contractions and will not affect the overall uniform bound. 
By the calculations leading to and including (\ref{mainconstant}), there exists a positive constant $C=C(a,d,m,T)$ such that, for every $k\ge1$ and $f\in{\cal D}_a(G_k)$,
\[ \sup_{0 \le t < \tau_{k+1}-\tau_k} || {\cal I}_{a, k}^{-1} P^{k}_t f ||_{\infty} \le \sup_{0 \le t < \tau_{k+1}-\tau_k} || {\cal I}_{a, k}^{-1} P^{k}_t {\cal I}_{a, k} ||_{\infty} \cdot || {\cal I}_{a, k}^{-1} f ||_{\infty} \le C^k || {\cal I}_{a, k}^{-1} f ||_{\infty} < \infty. \]
This bound, when applied from right to left along any trajectory of rewritten process $Y$, to the resulting triplets of the form ${\cal I}_{a, k}^{-1}P^{k}_t{\cal I}_{a, k}$, yields the upper bound
\[ \sup_{0 \leq t \leq T} || {\cal I}_{a, J_t}^{-1} Y_t ||_\infty \le C^{1+2+\ldots+m} || {\cal I}_{a, m}^{-1} Y_0 ||_\infty, \]
for every $m\ge1$, since $C\ge2$ implies $C^{1+2+\ldots+\ell}\le C^{1+2+\ldots+m}$ for every $1\le \ell\le m$. $\square$

\subsection{Proof of Lemma \ref{le_b}}\label{app:pf_le_b}

\noindent {\bf Proof: }\ For any $d\ge1$ and $\lambda > 0$, $Q^{\lambda}(x)$ is integrable as a direct consequence of (\ref{LSU}) with $m=1$, $y=0$, $r=0$ and $s=0$: there are constants $a_1 > 0$ and $a_2 > 0$ such that
\begin{equation*}
\int_{\mathbb{R}^d} |Q^{\lambda}(x)| dx \leq \int_0^{\infty}e^{-\lambda t} a_1 \prod_{i=1}^{d} \left(\int_{\mathbb{R}} \frac{1}{\sqrt{t}} \exp\{ - a_2 \frac{x_i^2}{t}\}dx_i \right)dt = \left( \frac{\pi}{a_2} \right)^{d/2}\frac{a_1}{\lambda}< \infty.
\end{equation*}
Similarly for $\partial_{x_i}Q^{\lambda}(x)$, by first using (\ref{eqn:interchange}) and then setting instead $m=1$, $y=0$, $r=0$ and $s=1$ in (\ref{LSU}):
\begin{equation*}
\int_{\mathbb{R}^d}|\partial_{x_i}Q^{\lambda}(x)| dx \leq \int_0^{\infty}e^{-\lambda t} \frac{a_1}{\sqrt{t}} \prod_{i=1}^{d} \left(\int_{\mathbb{R}} \frac{1}{\sqrt{t}} \exp\{ - a_2 \frac{x_i^2}{t}\}dx_i \right)dt = \left( \frac{\pi}{a_2} \right)^{d/2}\frac{a_1\sqrt{\pi}}{\sqrt{\lambda}} < \infty.
\end{equation*}
In the case $d=1$, $\partial_{x}Q^{\lambda}(x)$ is also square-integrable: the use of (\ref{eqn:interchange}) and (\ref{LSU}) yields
\begin{eqnarray*}
\int_{\mathbb{R}}|\partial_{x}Q^{\lambda}(x)|^2 dx & \leq & \int_{\mathbb{R}} \left | \int_0^{\infty}e^{-\lambda t} \frac{a_1}{t} \exp{\{- a_2 \frac{x^2}{t}\}}dt \right |^2 dx \\
& = & \int_0^{\infty} \int_0^{\infty}e^{-\lambda s}e^{-\lambda t} \frac{a_1^2}{st} \sqrt{ \frac{\pi st}{a_2(s+t)} } ds dt \\
& \leq & a_1^2 \sqrt{ \frac{\pi}{2a_2} } \int_0^{\infty} \int_0^{\infty}e^{-\lambda s}e^{-\lambda t} s^{-3/4} t^{-3/4} ds dt = a_1^2 \Gamma^2(1/4) \sqrt{ \frac{\pi}{2a_2\lambda} } < \infty,
\end{eqnarray*}
by first expanding the square and changing the order of integration, then using elementary inequality $2\sqrt{ st } \le s+t$ and the classical Gamma function $\Gamma(x)$.
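Explicitly, the Gamma-function step is the classical identity
\[
\int_0^{\infty}e^{-\lambda s}\,s^{-3/4}\,ds=\Gamma(1/4)\,\lambda^{-1/4},
\]
applied to each of the two factors of the double integral, which therefore equals $\Gamma^2(1/4)\,\lambda^{-1/2}$.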
Finally $Q^{\lambdambda}(x)$ is square-integrable when $d\le3$: set $y=0$, $r=0$, $s=0$ in (\mathbb{R}ef{LSU}) to get \begin{equation}nn \int_{\mathbb{R}^d}|Q^{\lambdambda}(x)|^2 dx & \leq & \int_{\mathbb{R}^d} \left | \int_0^{\infty}e^{-\lambdambda t} \frac{a_1}{t^{d/2}} {\mbox{\rm e}}xp{\{- a_2 \frac{|x|^2}{t}\}}dt \mathbb{R}ight |^2 dx \\ & = & \int_0^{\infty} \int_0^{\infty}e^{-\lambdambda s}e^{-\lambdambda t} \frac{a_1^2}{s^{d/2} t^{d/2}} \left( \frac{\pi st}{a_2(s+t)} \mathbb{R}ight)^{d/2} ds dt \\ & \leq & a_1^2 \left( \frac{\pi}{2a_2} \mathbb{R}ight)^{d/2} \int_0^{\infty} \int_0^{\infty}e^{-\lambdambda s}e^{-\lambdambda t} s^{-d/4} t^{-d/4} ds dt \\ & = & \frac{a_1^2}{\lambdambda^2} \left( \frac{\pi\lambdambda}{2a_2} \mathbb{R}ight)^{d/2} \mathbb{G} ammamma^2(1-d/4)< \infty , {\mbox{\rm e}}nd{equation}nn by the same argumentation, provided $d\le3$ for the finiteness. For parts (iv) and (v), the proofs will benefit from a bit of simplification in the notation. By Lemma \mathbb{R}ef{FellerProp}, there is a constant $k>0$ such that \begin{equation}lb\lambdabel{eqn:interchange2} |\partial_{x_i}Q^{\lambdambda}(x)| &=& |\partial_{x_i}\int_0^{\infty}e^{-\lambdambda t}q_t^{1}(0,x)dt| =|\int_0^{\infty}e^{-\lambdambda t}\partial_{x_i}q_t^{1}(0,x)dt| \nonumber \\ &\leq& k \int_0^{\infty}e^{-\lambdambda t} \frac{1}{\sqrt{t}} \varphirphi_{t/2a_2}(x)dt = k' \int_0^{\infty}e^{-\lambdambda' t} \frac{1}{\sqrt{t}} \varphirphi_{t}(x)dt <\infty {\mbox{\rm e}}nd{equation}lb holds for all $x \in \mathbb{R}^d \smallsetminus \{0\}$ and any $i \in \{1,2,\cdots,d\}$, where $\varphirphi_t$ is the transition density of the standard $d$-dimensional Brownian motion, $k'=k\sqrt{2a_2}$ and $\lambdambda'=2a_2\lambdambda$. Since only finiteness is sought for every $\lambdambda>0$, we drop the prime by setting $a_2=1/2$. Let us consider (iv). Observe, with a different constant $k_1>0$, $$ \int_{\mathbb{R}^d}| \partial_{x_i}Q^{\lambdambda}(x-y) |\mu_0(dy) \leq k_T + k \sup_{x \in \mathbb{R}^d} \sup_{0 < t \leq T}\langle} \def\>{\rangle \varphirphi_t(x-y), \mu_0(dy)\>\int_0^{T} e^{- \lambdambda t} \frac{1}{\sqrt{t}} dt, $$ where \begin{equation}lb \lambdab{PL2} k_T & := & k \int_{T}^{\infty} e^{- \lambdambda t} \frac{1}{\sqrt{t}}\langle} \def\>{\rangle \varphirphi_t(x-y), \mu_0(dy) \>dt \nonumber \\ & \leq & k \int_{T}^{\infty} e^{- \lambdambda t} \frac{1}{\sqrt{t}} \frac{1}{(2 \pi T)^{d/2}}\langle} \def\>{\rangle{\mbox{\rm e}}xp\{ - \frac{|x-y|^2}{2t}\}, \mu_0(dy) \>dt \nonumber \\ & \leq &\frac{k}{(2 \pi T)^{d/2}} \sup_{x \in \mathbb{R}^d}\langle} \def\>{\rangleI_a(x-y), \mu_0(dy)\>[ \sup_{x-y \in \mathbb{R}^d} {\mbox{\rm e}}xp\{ - \frac{|x-y|^2}{2T}\}I^{-1}_a(x-y)] \int_T^{\infty}\frac{e^{- \lambdambda t}}{\sqrt{t}}dt \nonumber \\ & < & \infty. {\mbox{\rm e}}nd{equation}lb Thus, $ \partial_{x_i}Q^{\lambdambda}(x-\cdot) \in L^1(\mu_0)$ holds for every $\lambdambda>0$, using Hypotheses $\mathbb{R}ef{hyp:basicassumpGauss}$ and $\mathbb{R}ef{hyp:basicassumpUniformInteg}$. \\ Next, we consider (v). 
Using inequality (\mathbb{R}ef{LSU}), observe \begin{equation}lb \lambdab{QL2} \int_{\mathbb{R}^d}[Q^{\lambdambda}(x-y)]^2 \mu_0(dy) & = & \langle} \def\>{\rangle \int_{0}^{\infty} e^{- \lambdambda u} q_u^{1}(x-y)du \int_{0}^{\infty} e^{- \lambdambda v} q_v^{1}(x-y)dv, \mu_0(dy) \> \nonumber \\ & = & \int_{0}^{\infty} \int_{0}^{\infty} e^{- \lambdambda (u + v)} \langle} \def\>{\rangle q_u^{1}(x-y)q_v^{1}(x-y), \mu_0(dy)\> du dv \nonumber \\ & \leq & k_1 \int_{0}^{\infty} \int_{0}^{\infty} e^{- \lambdambda (u + v)} \langle} \def\>{\rangle \varphirphi_u(x-y) \varphirphi_v(x-y), \mu_0(dy)\> du dv \nonumber \\ & \leq & k_1 \int_{0}^{\infty} \int_{0}^{\infty} e^{- \lambdambda (u + v)} \langle} \def\>{\rangle\frac{1}{(2 \pi)^{d/2}} \frac{1}{(u+v)^{d/2}}\langle} \def\>{\rangle \varphirphi_{\tau}(x-y), \mu_0(dy)\> du dv \nonumber \\ & \leq & k_1(T_{(i)} + T_{(ii)} + T_{(iii)} + T_{(iv)}), {\mbox{\rm e}}nd{equation}lb where $\tau := \frac{uv}{u+v}$ and \begin{equation}lb \lambdab{Ti} T_{(i)} := \int_{0}^{T} \int_{0}^{T} e^{- \lambdambda (u + v)} \langle} \def\>{\rangle\frac{1}{(2 \pi)^{d/2}} \frac{1}{(u+v)^{d/2}}\langle} \def\>{\rangle \varphirphi_{\tau}(x-y), \mu_0(dy)\> du dv, {\mbox{\rm e}}nd{equation}lb \begin{equation}lb \lambdab{Ti} T_{(ii)} := \int_{T}^{\infty} \int_{0}^{T} e^{- \lambdambda (u + v)} \langle} \def\>{\rangle\frac{1}{(2 \pi)^{d/2}} \frac{1}{(u+v)^{d/2}}\langle} \def\>{\rangle \varphirphi_{\tau}(x-y), \mu_0(dy)\> du dv, {\mbox{\rm e}}nd{equation}lb \begin{equation}lb \lambdab{Ti} T_{(iii)} := \int_{0}^{T} \int_{T}^{\infty} e^{- \lambdambda (u + v)} \langle} \def\>{\rangle\frac{1}{(2 \pi)^{d/2}} \frac{1}{(u+v)^{d/2}}\langle} \def\>{\rangle \varphirphi_{\tau}(x-y), \mu_0(dy)\> du dv, {\mbox{\rm e}}nd{equation}lb \begin{equation}lb \lambdab{Ti} T_{(iv)} := \int_{T}^{\infty} \int_{T}^{\infty} e^{- \lambdambda (u + v)} \langle} \def\>{\rangle\frac{1}{(2 \pi)^{d/2}} \frac{1}{(u+v)^{d/2}}\langle} \def\>{\rangle \varphirphi_{\tau}(x-y), \mu_0(dy)\> du dv. {\mbox{\rm e}}nd{equation}lb Then, we have \begin{equation}lb \lambdab{Ti} T_{(i)} := & & \int_{0}^{T} \int_{0}^{T} e^{- \lambdambda (u + v)} \langle} \def\>{\rangle\frac{1}{(2 \pi)^{d/2}} \frac{1}{(u+v)^{d/2}}\langle} \def\>{\rangle \varphirphi_{\tau}(x-y), \mu_0(dy)\> du dv \nonumber \\ \leq & & k_2 \int_0^{\pi/2} \int_0^T e^{- \lambdambda r} \frac{1}{r^{d/2}} 2 r \sin \theta \cos \theta d \theta dr \nonumber \\ = && k_2 (\sin \theta)|_0^{\pi/2} \int_0^T e^{- \lambdambda r} r^{1 - d/2}dr = k_2 \int_0^T e^{- \lambdambda r} r^{1 - d/2}dr< \infty, {\mbox{\rm e}}nd{equation}lb if $d=1,2,3$, where \[ k_2 := \frac{1}{(2 \pi)^{d/2}} \sup_{x \in \mathbb{R}^d}\sup_{0 < \tau \leq T} \langle} \def\>{\rangle \varphirphi_{\tau}(x-y), \mu_0(dy) \>. 
\]
Similarly one gets successively
\begin{eqnarray} \label{Tii}
T_{(ii)} & := & \int_{T}^{\infty} \int_{0}^{T} e^{- \lambda (u + v)} \frac{1}{(2 \pi)^{d/2}} \frac{1}{(u+v)^{d/2}}\langle \varphi_{\tau}(x-y), \mu_0(dy)\rangle du dv \nonumber \\
& \leq & \sup_{x \in \mathbb{R}^d} \sup_{0 < \tau \leq T}\langle \varphi_{\tau}(x-y), \mu_0(dy)\rangle \int_{T}^{\infty} \int_{0}^{T} e^{- \lambda (u + v)} \frac{1}{(2 \pi)^{d/2}} \frac{1}{(u+v)^{d/2}} du dv \nonumber \\
& < & \infty;
\end{eqnarray}
\begin{eqnarray} \label{Tiii}
T_{(iii)} & := & \int_{0}^{T} \int_{T}^{\infty} e^{- \lambda (u + v)} \frac{1}{(2 \pi)^{d/2}} \frac{1}{(u+v)^{d/2}}\langle \varphi_{\tau}(x-y), \mu_0(dy)\rangle du dv \nonumber \\
& \leq & \sup_{x \in \mathbb{R}^d} \sup_{0 < \tau \leq T}\langle \varphi_{\tau}(x-y), \mu_0(dy)\rangle \int_{0}^{T} \int_{T}^{\infty} e^{- \lambda (u + v)} \frac{1}{(2 \pi)^{d/2}} \frac{1}{(u+v)^{d/2}} du dv \nonumber \\
& < & \infty;
\end{eqnarray}
\begin{eqnarray} \label{Tiv}
T_{(iv)} & := & \int_{T}^{\infty} \int_{T}^{\infty} e^{- \lambda (u + v)} \frac{1}{(2 \pi)^{d/2}} \frac{1}{(u+v)^{d/2}}\langle \varphi_{\tau}(x-y), \mu_0(dy)\rangle du dv \nonumber \\
& \leq & \sup_{x \in \mathbb{R}^d} \sup_{0 < \tau \leq T}\langle \varphi_{\tau}(x-y), \mu_0(dy)\rangle \int_{T}^{\infty} \int_{T}^{\infty} e^{- \lambda (u + v)} \frac{1}{(2 \pi)^{d/2}} \frac{1}{(u+v)^{d/2}} du dv \nonumber \\
& < & \infty.
\end{eqnarray}
So this proves that $Q^{\lambda} \in L^2(\mu_0 )$ for $\mu_0 \in M_a(\mathbb{R}^d)$, $a > 0$ and $d=1,2,3$.

\subsection{Proof of upper bound (\ref{DI13})}\label{app:pf_DI13}

\noindent {\bf Proof: }\ By the monotonicity in $d$ of the integrands in (\ref{DI13}), when $r$, $u$ and $v$ are arbitrarily fixed, the bound $\chi_{2}(s)\le \chi_{1}(s)+\chi_{3}(s)$ is valid for all values of $s\ge0$. (By the way, a simple integration by parts also yields $\chi_{4}(s)=\infty$ everywhere.) Next, because of the inequalities
\begin{equation*}
\int_0^{s} \frac{1}{(\sqrt{2r +u + v})^{3}} dr = - \frac{1}{\sqrt{2s +u +v}} + \frac{1}{\sqrt{u +v}} \le \frac{1}{\sqrt{u +v}}
\end{equation*}
and
\begin{equation*}
\int_0^{s} \frac{1}{\sqrt{2r +u + v}} dr \le \frac{s}{\sqrt{u +v}},
\end{equation*}
we can also see that the integrals $\chi_{1}(s)$, $\chi_{2}(s)$ and $\chi_{3}(s)$ are each bounded by $(s+1)$ times
\begin{equation*}
\int_0^{\infty} \int_0^{\infty} e^{- \lambda (u+v)} \frac{1}{\sqrt{uv}} \frac{1}{\sqrt{u + v}} du dv.
\end{equation*}
By first using the transformation $u= w^2$ and $v=z^2$, followed by a polar coordinate transformation, this last integral becomes successively
\begin{equation*}
\int_0^{\infty} \int_0^{\infty} e^{- \lambda (w^2 + z^2)} \frac{4}{\sqrt{w^2 + z^2}} dw dz = 4 \int_0^{\frac{\pi}{2}} \int_0^{\infty} e^{- \lambda r^2} dr d\theta = \pi \sqrt{\frac{\pi}{\lambda}},
\end{equation*}
yielding the common upper bound (\ref{DI13}) for $\chi_{d}(s)$, valid for each of $d=1$, 2 and 3.
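(The final evaluation uses the classical Gaussian integral
\[
\int_0^{\infty}e^{-\lambda r^2}\,dr=\frac{1}{2}\sqrt{\frac{\pi}{\lambda}},
\]
so that $4\int_0^{\pi/2}\int_0^{\infty}e^{-\lambda r^2}\,dr\,d\theta = 4\cdot\frac{\pi}{2}\cdot\frac{1}{2}\sqrt{\frac{\pi}{\lambda}}=\pi\sqrt{\frac{\pi}{\lambda}}$.)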
$\square$ \subsection{Proof of upper bound (\mathbb{R}ef{DI1a})}\lambdabel{app:pf_DI1a} \noindent {\bf Proof: }\ For any $x,y\in\mathbb{R}^d$ denote by $\mathbb{P} si_{xy}^{\varepsilonsilon,\delta}\in C_b({(\mathbb{R}^d)^2})\cap L^1(\mathbb{R}^{2d})\cap L^1(\mu_0^2)$ the function \[ \mathbb{P} si_{xy}^{\varepsilonsilon,\delta}(w_1,w_2):=\mathbb{P} hi_{xy}^{\varepsilonsilon,\delta}(w_1)\times\mathbb{P} hi_{xy}^{\varepsilonsilon,\delta}(w_2) \] with \[ \mathbb{P} hi_{xy}^{\varepsilonsilon,\delta}(w):=h_p(y-w) [\partial_pQ^{\lambdambda}_{\varepsilonsilon}(x-w) - \partial_pQ^{\lambdambda}_{\delta}(x-w)]. \] Using (\mathbb{R}ef{LSU2}), the coarse upper bound \[ |\partial_pQ^{\lambdambda}_{\varepsilonsilon}(x) - \partial_pQ^{\lambdambda}_{\delta}(x) | \leq c(e^{\lambdambda \varepsilonsilon} -1) \int_{\varepsilonsilon}^{\infty}e^{- \lambdambda u} \frac{1}{\sqrt{u}}g_{u}(x)du + ce^{\lambdambda\varepsilonsilon}\int_{0}^{\varepsilonsilon}e^{- \lambdambda u} \frac{1}{\sqrt{u}}g_{u}(x)du, \] valid for any $0 < \delta < \varepsilonsilon < \infty$, combined with (\mathbb{R}ef{DI10}) yields a constant $c > 0$ such that \begin{equation}nn \int_{\mathbb{R}^d} | \mathbb{P} si_{xy}^{\varepsilonsilon,\delta}(w_1,w_2) | dy & \le & c(e^{\lambdambda \varepsilonsilon} -1)^2 \int_{\varepsilonsilon}^{\infty}\int_{\varepsilonsilon}^{\infty} e^{- \lambdambda (u+v)} \frac{1}{\sqrt{uv}} g_u(x-w_1)g_v(x-w_2) dudv \\ & & + 2ce^{\lambdambda \varepsilonsilon}(e^{\lambdambda \varepsilonsilon} -1) \int_0^{\varepsilonsilon}\int_{\varepsilonsilon}^{\infty} e^{- \lambdambda (u+v)} \frac{1}{\sqrt{uv}} g_u(x-w_1)g_v(x-w_2) dudv \\ & & + c(e^{\lambdambda \varepsilonsilon})^2 \int_0^{\varepsilonsilon}\int_0^{\varepsilonsilon} e^{- \lambdambda (u+v)} \frac{1}{\sqrt{uv}} g_u(x-w_1)g_v(x-w_2) dudv {\mbox{\rm e}}nd{equation}nn for all $0 < \delta < \varepsilonsilon < \infty$, with the four terms merged into three. 
Proceeding as in the argument leading to (\mathbb{R}ef{DI12}), we get a new constant $c > 0$, independent of $\lambdambda$, such that \begin{equation}nn \int_0^t ds \int_{\mathbb{R}^d} dy \int_0^s \langle} \def\>{\rangle P^1_{s-r}\mathbb{P} hi_{12}^2P^2_r\mathbb{P} si_{xy}^{\varepsilonsilon,\delta}, \mu_0 \> dr & \leq & c (e^{\lambdambda \varepsilonsilon} -1)^2 \chi_{d}^{\infty,\infty}(t) \\ & & + 2c e^{\lambdambda \varepsilonsilon}(e^{\lambdambda \varepsilonsilon} -1) \chi_{d}^{\varepsilonsilon,\infty}(t) + c e^{2\lambdambda \varepsilonsilon} \chi_{d}^{\varepsilonsilon,\varepsilonsilon}(t) {\mbox{\rm e}}nd{equation}nn with $\chi_{d}^{\varepsilonsilon,{\mbox{\rm e}}ta}$ given by \begin{equation}lb \lambdab{chiepsdelta} \chi_{d}^{\varepsilonsilon,{\mbox{\rm e}}ta}(s) {\mbox{\rm e}}quiv \int_0^{\varepsilonsilon} \int_0^{{\mbox{\rm e}}ta} e^{- \lambdambda (u+v)} \frac{1}{\sqrt{uv}} \int_0^s \frac{1}{(\sqrt{2r +u + v})^{d}} dr du dv {\mbox{\rm e}}nd{equation}lb and going to $0$ with either $\varepsilonsilon$ or ${\mbox{\rm e}}ta$ (or both) when $d\le3$, using the finiteness in (\mathbb{R}ef{DI13}); and \begin{equation}nn \int_0^t ds \int_{\mathbb{R}^d} dy \langle} \def\>{\rangle P^2_s\mathbb{P} si_{xy}^{\varepsilonsilon,\delta}, \mu_0 \> & \leq & c(e^{\lambdambda \varepsilonsilon} -1)^2 \int_0^t ds \int_{\varepsilonsilon}^{\infty}\int_{\varepsilonsilon}^{\infty} e^{- \lambdambda (u+v)} \frac{1}{\sqrt{uv}} dudv \\ & & + 2ce^{\lambdambda \varepsilonsilon}(e^{\lambdambda \varepsilonsilon} -1) \int_0^t ds \int_0^{\varepsilonsilon}\int_{\varepsilonsilon}^{\infty} e^{- \lambdambda (u+v)} \frac{1}{\sqrt{uv}} dudv \\ & & + ce^{2\lambdambda \varepsilonsilon} \int_0^t ds \int_0^{\varepsilonsilon}\int_0^{\varepsilonsilon} e^{- \lambdambda (u+v)} \frac{1}{\sqrt{uv}} dudv, {\mbox{\rm e}}nd{equation}nn which goes to $0$ with $\varepsilonsilon$, at a rate of $\varepsilonsilon$ (because of the third and asymptotically largest term), since $\int_{0}^{\infty} e^{- \lambdambda u} \frac{1}{\sqrt{u}} du < \infty$ and $e^{\lambdambda \varepsilonsilon} -1\le (e^{\lambdambda} -1)\varepsilonsilon$ when $0\le\varepsilonsilon\le1$. The vanishing rate for the other term follows from the calculations in Subsection \mathbb{R}ef{app:pf_DI13}, which yield \begin{equation}lb \left| \chi_{d}^{\varepsilonsilon,{\mbox{\rm e}}ta}(s) \mathbb{R}ight| \le 2\pi (s+1) \int_0^{\sqrt{\varepsilonsilon^2+{\mbox{\rm e}}ta^2}} e^{- \lambdambda r^2} dr \le 2\pi (s+1) \sqrt{\varepsilonsilon^2+{\mbox{\rm e}}ta^2}, \nonumber {\mbox{\rm e}}nd{equation}lb hence $\chi_{d}^{\varepsilonsilon,\varepsilonsilon}(s)$ goes to $0$ at a rate of $\varepsilonsilon$. $\square$ \noindent {\bf Acknowledgements.} The third author would also like to thank Professor Daniel Dugger for assistance in accessing the University of Oregon research resources. \begin{thebibliography}{10} \bibitem{AdlerLewin92} {Adler, R. J. and Lewin, M. (1992)}, {\mbox{\rm e}}mph{Local time and \mbox{Tanaka} formulae for super brownian and super stable processes}, Stochastic Process. Appl. \textbf{{41}}, 45--67. \bibitem{Al-Gwaiz92} {AI-Gwaiz, M. A. (1992)}, {\mbox{\rm e}}mph{Theory of distributions}, Marcel Dekker, Inc. New York. \bibitem{Aronson68} {Aronson, D. G. (1968)}. Non-negative solutions of linear parabolic equations. {\mbox{\rm e}}mph{Annali della Scuola Normale Superiore di Pisa-Classe di Scienze} \textbf{{22(4)}} 607--694. \bibitem{BakryEmery85} {Bakry, D. and Emery, M. (1985)}, {\mbox{\rm e}}mph{Diffusions hypercontractives}, Lecture Notes in Math \textbf{{1123}}, 177--206. 
\bibitem{Barros-Neto73} {Barros-Neto, J. (1973)}, {\mbox{\rm e}}mph{An introduction to the theory of distributions}, Marcel Dekker, Inc. New York. \bibitem{BartonEtheridgeVeber10} {Barton, N. H., Etheridge, A. M. and Veber, A. (2010)}, {\mbox{\rm e}}mph{A new model for evolution in a spatial continuum}, Electronic Journal of Probability \textbf{{15}}, 162--216. \bibitem{CWX12} {Chen, Z.-Q., Wang, H. and Xiong, J. (2012)}, {\mbox{\rm e}}mph{Interacting superprocesses with discontinuous spatial motion}, Forum Mathematicum \textbf{{24(6)}}, 1183--1223. \bibitem{Dawson92} {Dawson, D. A. (1992)}, {\mbox{\rm e}}mph{Infinitely divisible random measures and superprocesses}, {{\mbox{\rm e}}m in} Proc. 1990 Workshop on Stochastic Analysis and Related Topics,, Silivri, Turkey. \bibitem{Dawson93} {Dawson, D. A. (1993)}, {\mbox{\rm e}}mph{Measure-valued \mbox{Markov} processes}, Lecture Notes in Math. \textbf{{1541}}, 1--260. Springer, Berlin. \bibitem{DawsonHochberg79} {Dawson, D. A. and Hochberg, K. L. (1979)}, {\mbox{\rm e}}mph{The carrying dimension of a stochastic measure diffusion}, Ann. Probab. \textbf{{7}}, 683--703. \bibitem{DawsonKurtz82} {Dawson, D. A. and Kurtz, T. G. (1982)}, {\mbox{\rm e}}mph{Application of duality to measure-valued diffusion processes}, Lecture Notes in Control and Inf. Sci. \textbf{{42}}, 177--191. \bibitem{DawsonPerkins99} {Dawson, D. A. and Perkins, E. A. (1999)}, {\mbox{\rm e}}mph{Measure-valued processes and renormalization of branching particle systems}, Stochastic Partial Differential Equations: Six Perspectives, Mathematical Surveys and Monographs,Amer. Math. Soc.,Providence \textbf{{64}}, 45--106. \bibitem{DLW01} {Dawson, D. A., Li, Z. and Wang, H. (2001)}, {\mbox{\rm e}}mph{Superprocesses with dependent spatial motion and general branching densities}, Electron. J. Probab. \textbf{{6(25)}}, 1--33. \bibitem{DLW03} {Dawson, D. A., Li, Z. and Wang, H. (2003)}, {\mbox{\rm e}}mph{A degenerate stochastic partial differential equation for the purely atomic superprocess with dependent spatial motion}, Infinite Dimensional Analysis, Quantum Probability and Related Topics \textbf{{6(4)}}, 597--607. \bibitem{DV95} {Dawson, D. A. and Vaillancourt, J. (1995)}, {\mbox{\rm e}}mph{Stochastic Mckean-Vlasov equations}, Nonlinear Differential Equations and Appl. \textbf{{2(2)}}, 199--229. \bibitem{DVW2000} {Dawson, D. A., Vaillancourt, J. and Wang, H. (2000)}, {\mbox{\rm e}}mph{Stochastic partial differential equations for a class of interacting measure-valued diffusions}, Ann. Inst. Henri Poincar\'{e}, Probabilit\'{e}s et Statistiques \textbf{{36(2)}}, 167--180. \bibitem{DVW2021} {Dawson, D. A., Vaillancourt, J. and Wang, H. (2021)}, {\mbox{\rm e}}mph{Joint H{\"o}lder continuity of local time for a class of interacting branching measure-valued diffusions}, Stochastic Process. Appl. \textbf{{138}}, 212--233. \bibitem{DEFKZ2000} {Donnelly, P., Evans, S. N., Fleischmann, K., Kurtz, T. G. and Zhou, X. (2000)}, {\mbox{\rm e}}mph{Continuum-sites stepping-stone models, coalescing exchangeable partitions and random trees}, Ann. Probab. \textbf{{28(3)}}, 1063--1110. \bibitem{Etheridge00} {Etheridge, A. (2000)}, {\mbox{\rm e}}mph{An introduction to superprocesses}, American Mathematical Society. \bibitem{EthierKurtz86} {Ethier, S. N. and Kurtz, T. G. (1986)}, {\mbox{\rm e}}mph{Markov processes{{\mbox{\rm e}}m :} characterization and convergence}, John Wiley and Sons, New York. \bibitem{Evans97} {Evans, S. N. 
(1997)}, {\mbox{\rm e}}mph{Coalescing Markov labelled partitions and a continuous sites genetics model with infinitely many types}, Ann. Inst. Henri Poincar\'{e}, Probabilit\'{e}s et Statistiques \textbf{{33(3)}}, 339--358. \bibitem{GarroniMenaldi92} {Garroni, M. G. and Menaldi, J. L. (1992)}. {\mbox{\rm e}}mph{Green functions for second order parabolic \mbox{integro-differential} problems}. Longman Scientific \& Technical, Essex. \bibitem{Hong18} {Hong J. (2018)}, {\mbox{\rm e}}mph{Renormalization of local times of super-Brownian motion}, Electronic Journal of Probability \textbf{{23}}, 1--45. \bibitem{HLN13} {Hu, Y., Lu, F. and Nualart, D. (2013)}, {\mbox{\rm e}}mph{H{\"o}lder continuity of the solutions for a class of nonlinear \mbox{SPDE's} arising from one dimensional superprocesses}, Probab.Th. Rel. Fields \textbf{{156(1-2)}}, 27--49. \bibitem{HNX19} {Hu, Y., Nualart, D. and Xia, P. (2019) }, {\mbox{\rm e}}mph{H{\"o}lder continuity of the solutions to a class of \mbox{SPDE's} arising from branching particle systems in a random environment}, Electron. J. Probab. \textbf{{24(105)}}, 0--52. \bibitem{INW68} {Ikeda, N., Nagasawa, M. and Watanabe, S. (1968)}, {\mbox{\rm e}}mph{Branching \mbox{Markov} processes}, J. Math. Kyoto Univ. \textbf{{8}}, I(233--278), II(365--410). \bibitem{INW69} {Ikeda, N., Nagasawa, M. and Watanabe, S. (1969)}, {\mbox{\rm e}}mph{Branching \mbox{Markov} processes}, J. Math. Kyoto Univ. \textbf{{9}}, III(9, p95--160). \bibitem{Iscoe86a} {Iscoe, I. (1986a)}, {\mbox{\rm e}}mph{A weighted occupation time for a class of measure-valued branching processes}, Probab. Th. Rel. Fields \textbf{{71}}, 85--116. \bibitem{Iscoe86b} {Iscoe, I. (1986b)}, {\mbox{\rm e}}mph{Ergodic theory and a local occupation time for measure-valued critical branching \mbox{Brownian} motion}, Stochastics \textbf{{18}}, 197--243. \bibitem{Jakubowski86} {Jakubowski, A. (1986)}, {\mbox{\rm e}}mph{On the \mbox{Skorohod} topology}, Ann. Inst. Henri. Poincare \textbf{{22(3)}}, 263--285. \bibitem{Jakubowski97} {Jakubowski, A. (1997)}, {\mbox{\rm e}}mph{The almost sure \mbox{Skorohod's} representation for subsequences in nonmetric spaces}, Theory Probab. Appl. \textbf{{42(2)}}, 167--174. \bibitem{KonnoShiga88} {Konno, N. and Shiga, T. (1988)}, {\mbox{\rm e}}mph{Stochastic partial differential equations for some measure-valued diffusions}, Probab. Th. Rel. Fields \textbf{{79}}, 201--225. \bibitem{Krone93} {Krone, S. M. (1993)}, {\mbox{\rm e}}mph{Local times for superdiffusions}, Ann. Probab. \textbf{{21}}, 1599--1623. \bibitem{LSU68} {Lady\u{z}enskaja, O. A., Solonnikov, V. A. and Ural'ceva, N. N. (1968)}, {\mbox{\rm e}}mph{Linear and quasi-linear equations of parabolic type}, American Mathematical Society, Providence. \bibitem{LeGall99} {Le Gall, J. F. (1999)}, {\mbox{\rm e}}mph{Spatial branching processes, random snakes and partial differential equations}, Springer Science and Business Media. \bibitem{LeGallPerkins95} {Le Gall, J. F. and Perkins, E. A. (1995)}, {\mbox{\rm e}}mph{Carrying dimension of a stochastic measure diffusion}, Ann. Probab. \textbf{{23(4)}}, 1719--1747. \bibitem{Li10} {Li, Z. (2010)}. {\mbox{\rm e}}mph{Measure-valued branching Markov processes}. Springer-Verlag, Berlin. \bibitem{LiXiong07} {Li, Z. H. and Xiong, J. (2007)}, {\mbox{\rm e}}mph{Continuous local time of a purely atomic immigration superprocess with dependent spatial motion}, Stochastic Anal. Appl. \textbf{{25(6)}}, 1273--1296. \bibitem{LWX04a} {Li, Z. H., Wang, H. and Xiong, J. 
(2004)}, {\mbox{\rm e}}mph{A degenerate stochastic partial differential equation for superprocesses with singular interaction}, Probab. Th. Rel. Fields \textbf{{130}}, 1--17. \bibitem{LWXZ12} {Li, Z., Wang, H., Xiong, J. and Zhou, X. (2012)}, {\mbox{\rm e}}mph{Joint continuity of the solutions to a class of nonlinear \mbox{SPDE}}, Probab. Th. Rel. Fields \textbf{153}, 441--469. \bibitem{LiebLoss01} {Lieb, E. H. and Loss, M. (2001)}, {\mbox{\rm e}}mph{Analysis}, American Mathematical Society, Providence, Rhode Island. \bibitem{Lopez-MimbelaVilla04} {L\'{o}pez-Mimbela, J. A. and Villa, J. (2004)}, {\mbox{\rm e}}mph{\mbox{Super-Brownian} local time: A representation and two applications}, Journal of Mathematical Sciences \textbf{{121(5)}}, 2653--2663. \bibitem{Mitoma83} {Mitoma, I. (1983)}, {\mbox{\rm e}}mph{Tightness of probabilities on \mbox{$C([0, 1],\cal{S}^{\prime})$} and \mbox{$D([0, 1], \cal{S}^{\prime})$}}, Ann. Prob. \textbf{{11}}, 989--999. \bibitem{Perkins88} {Perkins, E. (1988)}, {\mbox{\rm e}}mph{A space-time property of a class of measure-valued branching diffusions}, Trans. Amer. Math. Soc. \textbf{{305}}, 743--795. \bibitem{Perkins02} {Perkins, E. A. (2002)}, {\mbox{\rm e}}mph{\mbox{Dawson-Watanabe} superprocesses and measure-valued diffusions}, Lecture Notes in Math. \textbf{{1781}}, 125--329. Springer, Berlin. \bibitem{RenSongWang09} {Ren, Y.-X., Song, R. and Wang, H. (2009)}, {\mbox{\rm e}}mph{A class of stochastic partial differential equations for interacting superprocesses on a bounded domain.}, Osaka J. Math. \textbf{{42}}, 373--401. \bibitem{Schwartz59} {Schwartz, L. (1959)}, {\mbox{\rm e}}mph{Th\'{e}orie des distributions, {{\mbox{\rm e}}m vols i and ii (second edition)}}, Hermann, Paris. \bibitem{StroockVaradhan79} {Stroock, D. W. and Varadhan, S. R. S. (1979)}, {\mbox{\rm e}}mph{Multidimensional diffusion processes}, Springer-Verlag. \bibitem{Sugitani89} {Sugitani, S. (1989)}, {\mbox{\rm e}}mph{Some properties for the measure-valued branching diffusion processes}, J. Math. Soc. Japan \textbf{{41}}, 437--462. \bibitem{Walsh86} {Walsh, J. B. (1986)}, {\mbox{\rm e}}mph{An introduction to stochastic partial differential equations}, Lecture Notes in Math. \textbf{{1180}}, 265--439. \bibitem{Wang97} {Wang, H. (1997)}, {\mbox{\rm e}}mph{State classification for a class of measure-valued branching diffusions in a \mbox{ Brownian} medium}, Probab. Th. Rel. Fields \textbf{{109}}, 39--55. \bibitem{Wang98} {Wang, H. (1998)}, {\mbox{\rm e}}mph{A class of measure-valued branching diffusions in a random medium}, Stochastic Anal. Appl. \textbf{{16(4)}}, 753--786. \bibitem{Xiong13} {Xiong, J. (2013)}. {\mbox{\rm e}}mph{Three classes of nonlinear stochastic partial differential equations}. World Scientific Publishing, Singapore. {\mbox{\rm e}}nd{thebibliography} {\mbox{\rm e}}nd{document}
\begin{document} \title{Higher-Order Cone Programming} \author{Lijun Ding} \address{Department of Statistics, University of Chicago, Chicago, IL 60637-1514.} \email{[email protected]} \author{Lek-Heng Lim} \address{Computational and Applied Mathematics Initiative, Department of Statistics, University of Chicago, Chicago, IL 60637-1514.} \email{[email protected]} \begin{abstract} We introduce a conic embedding condition that gives a hierarchy of cones and cone programs. This condition is satisfied by a large number of convex cones including the cone of copositive matrices, the cone of completely positive matrices, and all symmetric cones. We discuss properties of the intermediate cones and conic programs in the hierarchy. In particular, we demonstrate how this embedding condition gives rise to a family of cone programs that interpolates between LP, SOCP, and SDP. This family of $k$th order cones may be realized either as cones of $n$-by-$n$ symmetric matrices or as cones of $n$-variate even degree polynomials. The cases $k = 1, 2, n$ then correspond to LP, SOCP, SDP; or, in the language of polynomial optimization, to DSOS, SDSOS, SOS. \end{abstract} \maketitle \section{Introduction} Given a convex proper cone we will show how to construct a hierarchy of cones with associated cone programs, provided that a certain embedding property (defined below) is satisfied. This generalizes the work of Ahmadi and Majumdar in \cite{ahmadi2017dsos} where they constructed a sequence of polynomial conic programs, particularly the DSOS and SDSOS conic programs, to approximate the SOS cone program. We will show how such a construction can be carried out for a large number of conic programming problems including: \begin{enumerate}[\upshape (i)] \item the nonnegative orthant; \item\label{it:soc} the second-order cone; \item\label{it:psd} the cone of symmetric positive semidefinite matrices; \item the cone of copositive matrices; \item the cone of completely positive matrices; \item all symmetric cones, i.e., any cone is constructed out of a direct sum of \eqref{it:soc}, \eqref{it:psd}, or the cones of Hermitian positive semidefinite matrices over $\mathbb{C}$, $\mathbb{H}$, and $\mathbb{O}$ (quaternions and octonions); \item any norm cones where the norm satisfies a consistency condition, which includes $l^p$-norms, Schatten and Ky Fan norms, operator $(p,q)$-norms, etc. \end{enumerate} For each of these cones, we can build a sequence of intermediate cones and conic programs in the hierarchy. In the case of \eqref{it:soc}, we obtain a family of cone programs that interpolates between LP, SOCP, and SDP. This family of $k$th order cones may be realized either as cones of $n$-by-$n$ symmetric matrices or as cones of $n$-variate even degree polynomials. The cases $k = 1, 2, n$ then correspond to LP, SOCP, SDP; or, in the language of polynomial optimization, to DSOS, SDSOS, SOS. \subsection*{Notations} Throughout this article, we write $\mathbb{N} \coloneqq \{1,2,3,\ldots\}$ for the set of positive integers. The skew field of quaternions will be denoted as $\mathbb{H}$ and the division ring of octonions as $\mathbb{O}$. We will slightly abuse terminologies and refer to $\mathbb{R}$, $\mathbb{C}$, $\mathbb{H}$, $\mathbb{O}$ as `fields.' We will write $\mathbb{S}^{d}_\mathbb{F}$ for the $\mathbb{F}$-vector space (or, strictly speaking, $\mathbb{F}$-module when $\mathbb{F}$ is not a field) of $d \times d$ Hermitian matrices over $\mathbb{F} = \mathbb{R},\mathbb{C}, \mathbb{H}, \mathbb{O}$. 
When the choice of $\mathbb{F}$ is implicit or immaterial, we will just write $\mathbb{S}^{d}$. For a vector $x\in \mathbb{F}^d$, the notation $x\geq 0$ means each component of $x$ is greater or equal to $0$. We write $[d] \coloneqq \{1,\dots,d\}$ for any $d\in \mathbb{N}$. We denote the set of all increasing sequences of length $k$ in $[d]$ as $\comb{[d]}{k}=\{(i_1,\dots,i_k)\mid 1\leq \,i_1<\dots <i_k\leq d\}$. For a matrix $A = [a_{ij}]_{ij}\in\mathbb{S}^d$, we write $\tr(A)= \sum_{i=1}^d a_{ii}$. The inner product $\langle \cdot ,\cdot \rangle :\mathbb{S}^d\times \mathbb{S}^d \rightarrow \mathbb{R}$ we use in this article is the standard trace inner product $\langle A, B\rangle = \tr(AB)$. The topology is then defined via the distance metric induced by the trace inner product. We write the interior of a set $S \subset \mathbb{F}^{d}$ as $\interior(S)$. \section{Conic embedding property} To standardize our terminologies, the cones in this article will all be represented as cones of symmetric matrices over some field $\mathbb{F}$; although we will see that this is hardly a limitation --- conic programs involving cones in other common $\mathbb{F}$-vector spaces, e.g., of vectors in $\mathbb{F}^n$ or polynomials in $\mathbb{F}[x]$ or $\mathbb{F}$-valued functions on some set, can often be transformed to a symmetric matrix setting. We start by defining two linear maps. Let $k \le d$ be positive integers. For $\{i_1,\dots,i_k\} \in \comb{[d]}{k}$, i.e., $1\leq i_1<\dots<i_k\leq d$, the \emph{truncation operator} is the projection $\Trn{i_1 \cdots i_k}^d: \mathbb{S}^{d} \rightarrow \mathbb{S}^{k}$ defined by \[ \Trn{i_1 \cdots i_k}^d(Z) \coloneqq\begin{bmatrix} z_{i_1i_1} & \dots & z_{i_1i_k}\\ z_{i_2i_1} & \dots & z_{i_2i_k}\\ & \ddots & \\ z_{i_k i_1} & \dots &z_{i_ki_k} \end{bmatrix} \] for any $Z\in \mathbb{S}^{d}$; the \emph{lift operator} is the injection $\Aug{i_1 \cdots i_k}^d : \mathbb{S}^{k}\rightarrow \mathbb{S}^{d}$ defined by \[ [\Aug{i_1 \cdots i_k}^d(X)]_{i_p i_q} = \begin{cases} x_{pq} & p,q\in \{1,\dots,k\} ,\\ 0 & \text{otherwise}. \end{cases} \] for any $X\in \mathbb{S}^{k}$. In other words, the truncation operator takes a $d \times d$ matrix to its $k \times k$ submatrix; whereas the lifting operator takes a $k \times k$ matrix and embed it as a $d \times d$ matrix by filling-in the extra entries as zeros. Clearly for a fixed index set $\{i_1,\dots,i_k\}$, $\Trn{i_1 \cdots i_k}^d$ is a left inverse of $\Aug{i_1 \cdots i_k}^d$, i.e., \[ \Trn{i_1 \cdots i_k}^d \circ \Aug{i_1 \cdots i_k}^d = \operatorname{id}_{\mathbb{S}^{k}}. \] We now state our \text{embedding property}\xspace. \begin{definition} Let $\mathbb{F} = \mathbb{R}$, $\mathbb{C}$, $\mathbb{H}$, or $\mathbb{O}$ and $\mathbb{S}^k =\mathbb{S}^k_\mathbb{F}$. Let $k_0 \in \mathbb{N}$ and $\{\cone{k} : k\in \mathbb{N},\; k \ge k_0\}$ be a sequence of convex proper cones where $\cone{k} \subseteq \mathbb{S}^k$ for each $k \ge k_0$. We say that the sequence $\{\cone{k} \}_{k=k_0}^\infty$ satisfies the \emph{\text{embedding property}\xspace} with \emph{index map} \begin{equation}\label{eq:index} I:\{(d,k)\in \mathbb{N}\times \mathbb{N}\mid d\geq k\} \rightarrow \bigcup_{k_0\le k \le d}\comb{[d]}{k} \end{equation} if for any $d\geq k\geq k_0$, $(i_1,\dots,i_k)\in I(d,k)$, we have \[ \Trn{i_1i_2 \cdots i_k}^d (Z) \in \cone{k} \qquad \text{and} \qquad \Aug{i_1i_2 \cdots i_k}^d (X) \in \cone{d} \] for all $Z\in \cone{d}$ and $ X\in \cone{k}$. 
\end{definition} We caution our reader that the ``higher-order cones'' in the title of this article do not refer to $\{\cone{k}\}_{k=k_0}^\infty$ but will be constructed out of these cones. In several instances, the index map is given simply by \[ I(d,k) = \comb{[d]}{k}, \] in which case we will drop any reference to the index map and just say that $\{\cone{k}\}_{k=k_0}^\infty$ satisfies the \text{embedding property}\xspace. If in addition $k_0=1$, we will say that $\{\cone{k}\}_{k=1}^\infty$ satisfies the \text{embedding property}\xspace \emph{thoroughly}. The embedding property simply says that for a $d \times d$ matrix $Z\in \cone{d}$, each of its $k \times k$ principal submatrices indexed by $I(d,k)$ belongs to the lower-dimensional cone $\cone{k}$; conversely, for a $k \times k$ matrix $X\in \cone{k}$, embedding it as a principal submatrix of a $d \times d$ matrix with all other entries set to zero gives a matrix in $\cone{d}$. A simple example is the cone of symmetric diagonally dominant matrices with nonnegative diagonals, \[ \DD{d} \coloneqq \Bigl\{ M \in \mathbb{S}^d : m_{ii}\geq \sum\nolimits_{j\ne i} |m_{ij}|, \; i=1,\dots,d \Bigr\}, \] where it is easy to see that $\{ \DD{k}\}_{k=1}^\infty$ satisfies the \text{embedding property}\xspace thoroughly. We will see many more examples of cones satisfying the \text{embedding property}\xspace over the next few sections. We may now define the higher order cones in the title of this article. They are obtained by lifting lower-dimensional cones to a higher-dimensional space. The benefit is that although the resulting cones live in high dimension, they are expressed in terms of cones in lower dimension, so properties of the lower-dimensional cones can be exploited. These higher order cones serve as inner approximations of cones in high dimension. As usual, in the following we let $\mathbb{F} = \mathbb{R}$, $\mathbb{C}$, $\mathbb{H}$, or $\mathbb{O}$ and write $\mathbb{S}^d = \mathbb{S}^d_\mathbb{F}$. \begin{definition} Let $\{\cone{k}\}_{k=k_0}^\infty$ be a sequence of cones that satisfies the \text{embedding property}\xspace with index map $I$. The \emph{$k$th order cone} with index set $J\subseteq I(d,k)$ induced by $\cone{k}$ is \[ \cone{d}_{k}(J) \coloneqq \Bigl\{ M \in \mathbb{S}^d : M=\sum\nolimits_{(i_1,\dots,i_k)\in J} \Aug{i_1 \cdots i_k}^d (M_{i_1 \cdots i_k}),\; M_{i_1 \cdots i_k}\in \cone{k} \Bigr\}. \] If $J = I(d,k)$, we will just write $\cone{d}_k$ for $\cone{d}_{k}(J)$. \end{definition} We will establish some basic properties of higher order cones. \begin{prop}\label{proposition: koc} Let $\{\cone{k}\}_{k=k_0}^\infty$ satisfy the \text{embedding property}\xspace with index map $I$. Then the following properties hold: \begin{enumerate}[\upshape (i)] \item \emph{Nested cones:} Suppose a sequence of index sets $\{J_k\}_{k=k_0}^d$, $J_k \subset \comb{[d]}{k}$, is such that for any $k\geq k_0$ and any $s\in J_k$, there is an $s' \in J_{k+1}$ such that all the components of $s$ appear in $s'$ (this property is satisfied by $J_k=\comb{[d]}{k}$).
Then we have \[ \cone{d}_{k_0}(J_{k_0}) \subseteq \cone{d}_{k_0+1}(J_{k_0+1})\subseteq \dots \subseteq \cone{d}_{d}(J_{d}).\] In particular, if $\{I(d,k)\}_{k=k_0}^d$ is such sequence of index sets, then for every $d\geq k_0$, we have \[ \cone{d}_{k_0}\subseteq \cone{d}_{k_0+1}\subseteq \dots \subseteq \cone{d}_{d}.\] \item \emph{Dual cones:} the dual cone of $\cone{d}_k(J)$ is \[(\cone{d}_k(J))^* = \{A \in \mathbb{S}^d : A\,\text{is Hermitian and}\,\text{for all}\, (i_1,\dots,i_k)\in J,\,\Trn{i_1 \cdots i_k} (A) \in (\cone{k})^*\} .\] \item \emph{Membership:} If $I(d,k) =\comb{d}{k}$ for every $d$, we have $X_1\in \cone{t}_k, X_2\in \cone{s}_k \iff \diag(X_1,X_2)\in \cone{t+s}_k$, ditto for the dual cones of $\cone{d}_k$. \item \emph{Inheritance}: It the \text{embedding property}\xspace is satisfied throughly by $\{\cone{k}\}_{k=1}^\infty$, then for each $k\geq 1$, the sequence of cones $\{\cone{l}_{k}\}_{l=k}^\infty$ satisfies the \text{embedding property}\xspace. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate}[\upshape (i)] \item Consider $k_0+i$ and $k_0+i+1$ where $0\leq i\leq d-k_0-1$. The cones $\cone{d}_{k_0+i}(J_{k_0+i})$ and $\cone{d}_{k_0+i+1}(J_{k_0+i+1})$ can be expressed as \begin{equation}\label{eq: Kk0decomposition} \begin{aligned} \cone{d}_{k_0+i}(J_{k_0+i}) = \sum_{(j_1,\dots,j_{k_0+i})\in J_{k_0+i}}\Aug{j_1\dots j_{k_0+i}}^d (\cone{k_0+i}) \end{aligned} \end{equation} and \begin{equation}\label{eq: Kk0+1decomposition} \begin{aligned} \cone{d}_{k_0+i+1}(J_{k_0+i+1}) = \sum_{(j_1,\dots,j_{k_0+i},j_{k_0+i+1})\in J_{k_0+i+1} }\Aug{j_1\dots j_{k_0+i} j_{k_0+i+1}}^d (\cone{k_0+i+1}) \end{aligned} \end{equation} By the assumption on $\{J_k\}_{k=k_0}^d$, we know for each $(j_1,\dots,j_{k_0+i})\in J_{k_0+i}$, there is some $j$ such that $\{j_1,\dots,j_{k_0+i},j\}$ after ordering is in $ J_{k_0+i+1}$. Without loss of generality, we may assume $j$ is the largest among $\{j_1,\dots,j_{k_0+i},j\}$. Thus \begin{align*}\Aug{j_1\dots j_{k_0+i}}^d (\cone{k_0+i}) & =\Aug{j_1\dots j_{k_0+i}j}^d (\begin{bmatrix} \cone{k_0+i} & \\ & 0 \end{bmatrix}) \\ & \marka{\subset} \Aug{j_1\dots j_{k_0+i}j}^d(\cone{k_0+i+1}), \end{align*} where (a) is because of $\begin{bmatrix} \cone{k_0+i} & \\ & 0 \end{bmatrix}\subseteq \cone{k_0+i+1}$ using the \text{embedding property}\xspace. Thus we see each summand in the decomposition \eqref{eq: Kk0decomposition} is a subset of a summand in the decomposition of \eqref{eq: Kk0+1decomposition}. Using the conic property that $a,b \in \cone{k_0+i+1} \implies a+b \in \cone{k_0+i+1}$, we see indeed \[ \cone{d}_{k_0+i} \subseteq \cone{d}_{k_0+i}.\] Since $i$ is arbitrary, we see we have the cones are nested. \item We use the following simple fact \cite[Corollary 16.3.2]{rockafellar1970convex} that for convex cone $K_1,K_2$, \[(K_1+K_2)^* = K_1^*\cap K_2^*.\] By definition, $\cone{d}_{k}(J)$ can be expressed as \[ \cone{d}_k(J) = \sum_{(i_1,\dots, i_k)\in J}\Aug{i_1 \cdots i_k}^d(\cone{k}),\] where each $\Aug{i_1 \cdots i_k}^d(\cone{k})$ is a convex cone in $\mathbb{S}^d$. The dual cone of $\Aug{i_1 \cdots i_k}^d(\cone{k})$ is \[( \Aug{i_1 \cdots i_k}^d(\cone{k}))^* = \{A\in \mathbb{S}^d: \Trn{i_1 \cdots i_k}(A)\in (\cone{k})^*\}.\] Applying previous fact, we get the characterization of the dual cone. \item We first show that $X_1\in \cone{t}_k, X_2\in \cone{s}_k \implies \diag(X_1,X_2)\in \cone{s+t}_k$. 
We know there are $M_{i_1 \cdots i_k}, Y_{j_1\dots j_k}\in \cone{k}$ such that \[ X_1 = \sum_{(i_1,\dots,i_k)\in\comb{[t]}{k}}\Aug{i_1 \cdots i_k}^t(M_{i_1 \cdots i_k}),\] and \[ X_2 = \sum_{(j_1,\dots,j_k)\in \comb{[s]}{k}}\Aug{j_1\dots j_k}^s(Y_{j_1\dots j_k}).\] Thus \begin{align*} \diag(X_1,X_2) =&\sum_{(i_1,\dots,i_k)\in\comb{[t]}{k}}\diag(\Aug{i_1 \cdots i_k}^t(M_{i_1 \cdots i_k}),0) \\&+ \sum_{(j_1,\dots,j_k)\in \comb{[s]}{k}}\diag(0,\Aug{j_1\dots j_k}^s(Y_{j_1\dots j_k}))\\ =& \sum_{(i_1,\dots,i_k)\in\comb{[t]}{k}}\Aug{i_1 \cdots i_k}^{s+t}(M_{i_1 \cdots i_k})\\ & + \sum_{(j_1,\dots,j_k)\in \comb{[s]}{k}}\Aug{(t+j_1)\dots (t+j_k)}^{s+t}(Y_{j_1\dots j_k}). \end{align*} Since every $(i_1,\dots,i_k)\in\comb{[t]}{k}$ and every shifted tuple $(t+j_1,\dots,t+j_k)$ belong to $\comb{[s+t]}{k}$, the above indeed gives a valid decomposition in the $k$th order cone $\cone{s+t}_k$. Now suppose $\diag(X_1,X_2)\in \cone{s+t}_k$. This gives \begin{align*} \diag(X_1,X_2) = \sum_{(i_1,\dots, i_k)\in \comb{[s+t]}{k} }\Aug{i_1 \cdots i_k}^{s+t}(Z_{i_1 \cdots i_k}) \end{align*} where $Z_{i_1 \cdots i_k} \in \cone{k}$. Applying $\Trn{1,2,\dots,t}$ and $\Trn{t+1,t+2,\dots,t+s}$ to both sides of the above equality gives valid decompositions showing $X_1 \in \cone{t}_k$ and $X_2 \in \cone{s}_k$, due to the \text{embedding property}\xspace. For the dual cones, note that if $X\in (\cone{k})^*$, then for each $l\leq k$, $\Trn{i_1\dots i_{l}}(X) \in (\cone{l})^* $ because of the \text{embedding property}\xspace. The rest of the proof is similar to the previous one. \item Fix $k\leq l<m$. Consider an increasing sequence $(i_1,\dots i_l)\in \comb{[m]}{l}$ and any $X \in \cone{l}_k, Z\in \cone{m}_k$. Then there are some $M_{j_1\dots j_k}\in \cone{k},Y_{n_1\dots n_k}\in \cone{k}$ such that \begin{align*} \Aug{i_1,\dots,i_l} ^m(X) & = \Aug{i_1,\dots, i_l}^m\biggl(\sum_{(j_1,\dots,j_k)\in \comb{[l]}{k}}\Aug{j_1\dots j_k }^l(M_{j_1\dots j_k})\biggr) \\ & =\sum_{(j_1,\dots,j_k)\in \comb{[l]}{k}}\Aug{i_1,\dots, i_l}^m\biggl(\Aug{j_1\dots j_k }^l(M_{j_1\dots j_k})\biggr), \end{align*} and \begin{align*} \Trn{i_1\dots i_l} ^m(Z) &= \Trn{i_1\dots i_l}^m\biggl(\sum_{(n_1,\dots,n_k)\in \comb{[m]}{k}}\Aug{n_1\dots n_k }^m(Y_{n_1\dots n_k})\biggr) \\ & = \sum_{(n_1,\dots,n_k)\in\comb{[m]}{k}}\Trn{i_1\dots i_l}^m\biggl(\Aug{n_1\dots n_k }^m(Y_{n_1\dots n_k})\biggr). \end{align*} Since $\Aug{i_1,\dots, i_l}^m\circ (\Aug{j_1\dots j_k }^l) = \Aug{i_{j_1}\dots i_{j_k}}^m$, we see $\Aug{i_1,\dots,i_l} ^m(X) $ is indeed a member of $\cone{m}_{k}$. Using the \text{embedding property}\xspace for each $\Trn{i_1\dots i_l}^m\bigl(\Aug{n_1\dots n_k }^m(Y_{n_1\dots n_k})\bigr)$, we see $\Trn{i_1\dots i_l} ^m(Z) \in \cone{l}_{k}$. \end{enumerate} \end{proof} Given the definition of the higher order cone and its dual cone, we can consider their corresponding conic programs. We assume the underlying field is real for simplicity.
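Before formulating these programs precisely, here is a small concrete illustration of the construction, using the diagonally dominant cones $\DD{k}$ introduced above. Take $d=3$, $k=2$, and $J=\comb{[3]}{2}$. The matrix \[ M=\begin{bmatrix} 3 & 1 & 1\\ 1 & 1 & 0\\ 1 & 0 & 2\end{bmatrix} =\Aug{12}^3\left(\begin{bmatrix} 2 & 1\\ 1 & 1\end{bmatrix}\right) +\Aug{13}^3\left(\begin{bmatrix} 1 & 1\\ 1 & 2\end{bmatrix}\right) \] lies in $(\DD{3})_2$ since each $2\times 2$ summand belongs to $\DD{2}$, and, consistently with the nested cones property, $M$ also lies in $(\DD{3})_3=\DD{3}$.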
More precisely, the $k$th order cone program (standard form) is \begin{equation}\label{program:kocpstandard} \begin{aligned} & \text{minimize}& & \tr(A_0X) \\ & \text{subject to}\;& & \tr(A_iX)=b_i ,\quad i =1,\dots,p \\ & & & X\in \cone{d}_k(J) \end{aligned} \end{equation} where $A_1,\dots,A_p \in\mathbb{R}^{n\times n}$, $ b_1,\dots,b_p\in \mathbb{R}$ Alternatively, $k$th order cone program (inequality form) is \begin{equation}\label{program:kocpinequality} \begin{aligned} &\text{minimize} & & q^{\scriptscriptstyle\mathsf{T}} x \\ & \text{subject to}& & x_1P_1+x_2P_2+\dots+x_kP_k+P_0\in \cone{d}_k(J) \end{aligned} \end{equation} where $P_0,\dots,P_k \in \mathbb{S}^d$ and $q\in \mathbb{R}^n$. The constraint here is called linear matrix inequality (LMI). We call these programs \emph{$k$OCP} induced by $\cone{k}$ with set $J$ and simply \emph{$k$OCP} if the underlying cone $\cone{k}$ is clear from the context and $J = I(k,d)$. We note that the ambient dimension $d$ might change from problem to problem as the case of semidefinite programming where the ambient dimension $d$ is not specified, i.e., we write $X\succeq 0$ meaning $X$ is positive semidefinite but did not specify the size of $X$. If the nested cones property is satisfied by the underlying cone $\cone{k}$ (which is true when $I(d,k)$ satisfies the condition of first item, Nested Cones, of Proposition \ref{proposition: koc}), the above program serves as inner approximation of $\cone{d}$ program. We state an equivalence theorem of the two form when $\cone{k}$ satisfies the \text{embedding property}\xspace thoroughly. \begin{theorem}\label{thm:equivalencesdfineq} If $\{\cone{k}\}_{k=1}^{\infty}$ satisfies the \text{embedding property}\xspace thoroughly, then the inequality form and the standard form are equivalent. \end{theorem} \begin{proof} Without loss of generality, we assume that $\cone{1} = \mathbb{R}_+$ (because one dimensional proper cone is either $\mathbb{R}_+$ or $\mathbb{R}_-$) and $p\geq k$ (we can repeat a few constraints if $p<k$). By Lemma \ref{lemma: geqzerokoc} proved in the Appendix, we find that for any $x\in \mathbb{F}^d$, \[\diag(x)\in \cone{k}_k\iff x\geq 0,\] where $x\geq 0$ means each component of $x$ is greater or equal to $0$. We first prove the direction from the standard form to inequality form, i.e., \eqref{program:kocpstandard} to \eqref{program:kocpinequality}: By treating $X$ as a long vector, the objective and the conic constraint $X\in \cone{d}_k$ can be transformed in a standard way. Indeed, the objective is just $\sum_{ij}(A_0)_{ij}x_{ij}$. For conic constraint, we have \begin{equation}\label{eq:conicconstrainttoLMI} \begin{aligned} X\in \cone{d}_k \iff \sum_{j>k}x_{jk}(E_{jk}+E_{kj})+\sum_{j=1}^n x_{jj}E_{jj}\in \cone{d}_k, \end{aligned} \end{equation} where $E_{jk}$ are the matrices with only non-zero entry $1$ at $(j,k)$th entry. The linear constraint $\tr(A_iX)=b_i$ can be encoded by \begin{equation}\label{eq:lineartoLMI} \begin{aligned} \diag([\tr(A_iX)-b_i]_{i=1}^p) \in \cone{k}_k,\quad \diag([b_i-\tr(A_iX)]_{i=1}^p) \in \cone{k}_k. \end{aligned} \end{equation} Finally, using membership property in Lemma \ref{proposition: koc}, the transformed linear constraints \eqref{eq:lineartoLMI} and the transformed conic constraint \eqref{eq:conicconstrainttoLMI} can be made into one big $k$th order cone linear matrix inequality. 
We now prove the direction from inequality form to standard form, i.e., \eqref{program:kocpinequality} to \eqref{program:kocpstandard}: First we can write $x = x^+-x^-$ as two non-negative vectors (element wise non-negative). Let $\bar{X} = x_1P_1+x_2P_2+\dots+x_kP_k+P_0$, then the inequality form \eqref{program:kocpinequality} can be transformed to \begin{equation*} \begin{aligned} & \underset{x^+,x^-,\bar{X}}{ \text{minimize}} & & q^{\scriptscriptstyle\mathsf{T}}x^+- q^{\scriptscriptstyle\mathsf{T}}x^- \\ & \text{subject to}& & \sum_{i=1}^{k}(x^+_iP_i-x^-_iP_i) -\bar{X}=-P_0\\ & & &\bar{X} \in \cone{d}_k,\quad x^+\geq 0,\quad x^-\geq 0. \end{aligned} \end{equation*} It can then be transformed to \eqref{program:kocpstandard}. We may let the $X$ in \eqref{program:kocpstandard} be \[ X= \begin{bmatrix} \diag(x^+) & & \\ & \diag(x^-) & \\ & & \bar{X}\\ \end{bmatrix}. \] The objective in \eqref{program:kocpstandard} then can be easiy formulated as $D_0 = \begin{bmatrix} \diag(q) & & \\ & -\diag(q) & \\ & & 0\\ \end{bmatrix}$. The equality constraints are just a re-statement of the elementwise version of $\sum_{i=1}^{k}(x^+_iP_i-x^-_iP_i) -\bar{X}=-P_0$. So $D_i,f_i$ are setted so that $\tr(D_iX)= [\sum_{t=1}^{k}(x^+_tP_t-x^-_tP_t) -\bar{X}]_{jk}=-P_{jk}=f_i$. A total of $\frac{n(n+1)}{2}$ constraints can be obtained from this method. To enforce the $0$ in $X$, we can put more $\tr(E_{ij}X)=0$ constraints on $X$ with position index $(i,j)$ of $0$ in $X$ where $E_{ij}$ is defined as the previous part. These linear constraints implies that for $n\geq k$, $X\in \cone{d+2n}_k$ if and only if $x^-,x^+\geq 0, \bar{X}\in \cone{d}_k$ because the membership property and Lemma \ref{lemma: geqzerokoc}. If $n<k$, we may simply repeat $x^-, x^+$ in $X$ and enforce the repetition by adding more linear constraints. \end{proof} The dual $k$OCP (standard form) is \[ \begin{aligned} & \text{minimize} & & \tr(A_0X) \\ & \text{subject to}\;& & \tr(A_iX)=b_i,\quad i=1,\dots,p \\ & & & X\in (\cone{d}_k(J))^* \end{aligned} \] where $A_1,\dots,A_p \in\mathbb{R}^{n\times n}$, $ b_1,\dots,b_p\in \mathbb{R}$ and the dual $k$OCP (inequality form) is \[ \begin{aligned} &\text{minimize} & & q^{\top}x \\ & \text{subject to}& & x_1P_1+x_2P_2+\dots+x_kP_k+P_0\in (\cone{d}_k(J))^* \end{aligned} \] where $P_0,\dots,P_k \in \mathbb{S}^n$. \section{Positive semidefinite cone} Our first example is the cone of positive semidefinite matrices with dimension $d$: \[ \psd{d} \coloneqq \{A\in \mathbb{S}^d : A = FF^{\top} \, \text{for some}\, F\in \mathbb{R}^{d\times r},\,r\in \mathbb{N}\}.\] Clearly, the sequence of cones $\{\psd{k}\}_{k=1}^d$ satisfy the \text{embedding property}\xspace thoroughly. The first two order cones are: \begin{enumerate}[\upshape (i)] \item $(\psd{d})_1 = \diag(\mathbb{R}_+^d) \cong \mathbb{R}_+^d$. Note that from the inheritance property, fourth item of Proposition \ref{proposition: koc}, the nonnegative orthant $\diag(\mathbb{R}_+^d) \cong\mathbb{R}_+^d$ satisfies the embedding property throughly as well, which can also be directly verified. \item $(\psd{d})_2 = \{A : A =\sum\nolimits_{i<j}\Aug{ij}(M^{ij}), \quad M^{ij}\in \mathbb{S}_+^2\}$. Note this series of cone also satisfied the embedding property throughly by attaching $\mathbb{R}_+$ to the series $\{(\psd{d})_2\}_{d=2}^\infty$. 
\end{enumerate} It turns out that the second order cone $(\psd{d})_2$ is actually the same as the set of symmetric scaled diagonally dominant matrices with nonnegative diagonals (SDD), $\SDD{d}$, \[ \SDD{d} \coloneqq \{ M \in \mathbb{S}^{d} : \text{there exists } x>0 \text{ such that } x_im_{ii}\geq \sum\nolimits_{j \ne i} x_j|m_{ij}| \quad \text{for all}\, i=1,\dots, d\}, \] as shown in the following lemma, which appeared in \cite[Theorems~8 and 9]{boman2005factor} and \cite[Lemma~9]{ahmadi2017dsos}. \begin{lemma}\label{lemma: 2oc=sdd} $(\psd{d})_2 = \SDD{d}$. \end{lemma} We provide a simple, different and self-contained proof of this lemma based on the following lemma, which can be found in the Appendix. \begin{lemma}\label{lemma: sdd results} Denote $M(A)=[\alpha_{ij}]$ where $\alpha_{ii}=a_{ii}$ for all $i$ and $\alpha_{ij}=-|a_{ij}|$ for all $i \ne j$, and $\rho(A)=\max\{|\lambda|: \lambda\; \text{is an eigenvalue of}\; A \}$. The following are equivalent when $A \in \mathbb{S}^d$. \begin{enumerate}[\upshape (i)] \item $A$ is SDD; \item $M(A)$ is positive semi-definite. \end{enumerate} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma: 2oc=sdd}] We let $M(A)=[\alpha_{ij}] \in \mathbb{S}^{d}$ where $\alpha_{ii}=a_{ii}$ for $i=1,\dots,d$, and $\alpha_{ij}=-|a_{ij}|$ for all $i \ne j$; this is often called the comparison matrix \cite{BP} of $A$. To show that $\SDD{d}\supset(\psd{d})_2$, suppose $A\in (\psd{d})_2$. Then $A= \sum_{i<j} M^{ij}$ with each $M^{ij}$ a lifted $2\times 2$ positive semidefinite block. Since $M(A)=\sum_{i<j} M(M^{ij})$ (each off-diagonal entry of $A$ receives a contribution from exactly one block) with $M(M^{ij})\in \psd{d}$, $M(A)$ belongs to both $(\psd{d})_2$ and $\psd{d}$. It follows from Lemma~\ref{lemma: sdd results} that $A\in \SDD{d}$. Now suppose $A\in \SDD{d}$. There exists $x=(x_1,\dots,x_d)>0$ such that $x_i a_{ii} \geq \sum_{j \ne i}x_j |a_{ij}|$ for each $i$, which allows us to define $M^{ij}$ by \[ m^{ij}_{ij} = m^{ij}_{ji} = a_{ij}, \quad m^{ij}_{ii}= \frac{x_j}{x_i}|a_{ij}|, \quad m^{ij}_{jj}=\frac{x_i}{x_j}|a_{ij}|. \] Each such $M^{ij}$ is positive semidefinite, and since $\sum_{j\ne i}\frac{x_j}{x_i}|a_{ij}|\leq a_{ii}$, we may then increase the values of $m^{ij}_{ii}$ and $m^{ij}_{jj}$ appropriately so that they sum up to the respective diagonal entries of $A$. This shows that $\SDD{d}\subset(\psd{d})_2$. \end{proof} For general $J\subseteq \comb{[d]}{2}$, the equality in Lemma \ref{lemma: 2oc=sdd} does not hold, i.e., $(\psd{d})_2(J) \ne \SDD{d}$ for general $J$. The $k$OCP in this case is actually very interesting. The $1$OCP is simply a linear program (LP) since $(\psd{d})_1 = \diag(\mathbb{R}_+^d) \cong \mathbb{R}_+^d$, and the $2$OCP in this case is the SDD program. We show in the following theorem that the SDD program is the same as the second order cone program (SOCP): \begin{equation}\label{socp} \begin{aligned} & \text{minimize} & & a^{\scriptscriptstyle\mathsf{T}}x \\ & \text{subject to} & &\|A_ix+b_i\|_2\leq c_i^{\scriptscriptstyle\mathsf{T}}x+d_i, \quad i =1,\dots, q, \\ & & & Bx = e. \end{aligned} \end{equation} \begin{theorem}\label{theorem: equivalence socp and SDD} The SDD program is equivalent to SOCP, i.e., SOCP can be cast as an SDD program and vice versa. \end{theorem} \begin{proof} The fact that the SDD program can be optimized using SOCP has been shown in \cite[Theorem 10]{ahmadi2017dsos}, which is just an easy consequence of Lemma \ref{lemma: 2oc=sdd}. It remains to show the other direction: one can transform an SOCP into the inequality form of the SDD program. The equivalence between the inequality form and the standard form of the SDD program follows from Theorem \ref{thm:equivalencesdfineq}. Our only difficulty is to transform an SOC constraint, \[\|A_ix+b_i\|_2\leq c_i^{\scriptscriptstyle\mathsf{T}}x+d_i,\] into an SDD constraint.
We know \[\|A_ix+b_i\|_2\leq c_i^{\scriptscriptstyle\mathsf{T}}x+d_i \iff \begin{bmatrix} (c_i^{\scriptscriptstyle\mathsf{T}}x + d_i)I & A_ix +b_i\\ (A_ix +b_i)^{\scriptscriptstyle\mathsf{T}} & c_i^{\scriptscriptstyle\mathsf{T}} x+d_i\\ \end{bmatrix} \in \mathbb{S}_+^n\iff \begin{bmatrix} (c_i^{\scriptscriptstyle\mathsf{T}}x + d_i)I & -|A_ix +b_i|\\ -|(A_ix +b_i)^{\scriptscriptstyle\mathsf{T}}| & c_i^{\scriptscriptstyle\mathsf{T}} x+d_i\\ \end{bmatrix} \in \mathbb{S}_+^n \] for appropriate $n$ by the Schur complement condition for positive semi-definiteness, i.e., for $A$ positive definite, \[ X= \begin{bmatrix} A & B \\ B^{\scriptscriptstyle\mathsf{T}} & C\\ \end{bmatrix} \in \mathbb{S}^n_+ \iff C-B^{\scriptscriptstyle\mathsf{T}}A^{-1}B\in \mathbb{S}^{h}_+, \] where $m,h$ are the numbers of rows of $A$ and $C$. Now using Lemma \ref{lemma: sdd results}, we see \[\begin{bmatrix} (c_i^{\scriptscriptstyle\mathsf{T}}x + d_i)I & -|A_ix +b_i|\\ -|(A_ix +b_i)^{\scriptscriptstyle\mathsf{T}}| & c_i^{\scriptscriptstyle\mathsf{T}} x+d_i\\ \end{bmatrix} \in \mathbb{S}_+^n \iff \begin{bmatrix} (c_i^{\scriptscriptstyle\mathsf{T}}x + d_i)I & A_ix +b_i\\ (A_ix +b_i)^{\scriptscriptstyle\mathsf{T}} & c_i^{\scriptscriptstyle\mathsf{T}} x+d_i\\ \end{bmatrix} \in (\psd{n})_2.\] The last condition is a linear $(\psd{n})_2$ constraint, so SOCP can be transformed into an SDD program and the two are equivalent. \end{proof} Thus we have shown that the intermediate programs between LP, SOCP, and SDP are the $k$OCPs: \begin{itemize} \item $1$OCP = LP, \item $2$OCP = SOCP, \item $d$OCP = SDP, \item $k$OCP for $k=3,\dots, d-1$ are intermediate programs: \[ \begin{aligned} & \text{minimize} & & \tr(A_0X) \\ & \text{subject to}\;& & \tr(A_iX)=b_i,\quad i=1,\dots,p \\ & & & X\in (\psd{d})_k, \end{aligned} \] where $ (\psd{d})_k = \{ M : M=\sum_{ (i_1,\dots, i_k)\in {\comb{[d]}{k}}} \Aug{i_1 \cdots i_k}^d (M_{i_1 \cdots i_k}),\; M_{i_1 \cdots i_k}\in \psd{k}\}.$ \end{itemize} The elements of the higher order cone $(\psd{d})_k$ with $k\geq 3$ turn out to be known as the factor-width-$k$ matrices \cite{boman2005factor}. The corresponding program has been introduced in \cite{PP} before. The dual cones are \[ ((\psd{d})_k)^* = \{A \in \mathbb{S}^{d} : \text{for all}\, (i_1,\dots,i_k)\in \comb{[d]}{k},\,\Trn{i_1 \cdots i_k} (A) \in\psd{k}\} .\] In the case of the semidefinite cone, the nested inclusions for the higher order cones $\{(\psd{d})_{k}\}_{k=1}^d$ and the dual cone series $\{((\psd{d})_k)^*\} _{k=1}^d$ are strict, as shown in the following lemma. \begin{lemma} We have \[(\psd{d})_1\subsetneq(\psd{d})_2\subsetneq \dots \subsetneq (\psd{d})_d=\psd{d}\] and \[((\psd{d})_1)^*\supsetneq((\psd{d})_2)^*\supsetneq \dots \supsetneq ((\psd{d})_d)^*=\psd{d}.\] \end{lemma} \begin{proof} Both chains of inclusions are easy consequences of the first and second items of Proposition \ref{proposition: koc}. We now prove that the inclusions are strict, starting with the dual cones. Denote by $\mathbf{1}_d = (\underbrace{1, \dots,1}_{d \text{ copies}})^{\scriptscriptstyle\mathsf{T}}$ the all-ones vector and by $I_d$ the identity matrix in $\mathbb{S}^d$. The matrix \[ \begin{bmatrix} \sqrt{k-1} & \mathbf{1}_{d-1}^{\scriptscriptstyle\mathsf{T}} \\ \mathbf{1}_{d-1} & \sqrt{k-1}I_{d-1}\\ \end{bmatrix} \] is always in $((\psd{d})_k)^*$ but not in $((\psd{d})_{k+1})^*$.
Since $((\psd{d})_k)^* = \cap_{(i_1,\dots,i_k) \in {\comb{[d]}{k}}}K_{i_1 \cdots i_k}$ where \[ K_{i_1 \cdots i_k} = \{A\in \mathbb{S}^{d} : \Trn{i_1 \cdots i_k} (A) \in\psd{k} \},\] $(K_{i_1 \cdots i_k})^* = \Aug{i_1 \cdots i_k} (\psd{k})$ and the identity matrix $I\in \interior(K_{i_1 \cdots i_k})$ for all $(i_1,\dots,i_k) \in {\comb{[d]}{k}}$, the Krein-Rutman Theorem \cite[Corollary 3.3.13]{borwein2010convex} implies that \[((\psd{d})_k)^{**} = \sum_{i_1 \cdots i_k} \Aug{i_1 \cdots i_k} (\psd{k}) = (\psd{d})_k .\] Thus strict inclusion in the dual cones implies the strict inclusion in the cones $(\psd{d})_k$. The equality $ (\psd{d})_d=\psd{d}= ((\psd{d})_d)^*$ is because $\psd{d}$ is self-dual. \end{proof} So far we have mostly dealing with index set $J_k = \comb{[d]}{k}$. By changing the index $J_k$ of the $k$th order cone, we obtain new cones and new conic program. In real problems, the choice of the subset $J_k$ of $\comb{[d]}{k}$ represents some prior knowledge of the problem. The corresponding higher order cone and dual higher order cone prorgam can enojoy less computational budget because of the smaller size of $J_k$. This has been explored in the literature of chordal structure of SDP \cite{waki2006sums,de2010exploiting}. \section{Sum-of-squares cone} A real coefficient polynomial $p(x)$ is a sum-of-square ($\mathsf{SOS}$) if it can be written as $p(x)= \sum_{i=1}^mq_i^2(x)$ for some polynomial $q_i$. It is clear that the set of sum of square polynomials form a convex cone. It is well-known that a polynomial with $n$ variable and degree $2d$ is a sum of square if and only if there exists a positive semidefinite symmetric $A$ such that \[ p(x)=m(x)^{\scriptscriptstyle\mathsf{T}}Am(x) \] where $m(x)$ is the vector of all monomials (so in total ${n+d\choose d}$ tuples) that have degree less than or equal to $d$ \cite{parrilo2000structured}. Due to this equivalence and our previous discussion on $k$OCP induced by $\psd{k}$, we define the following $k\mathsf{DDSOS}$. \begin{definition} Let $i_1,\dots,i_k \in \{ 1,2,\dots, {n+d\choose d}\}$. A polynomial $p$ is $k$th-diagonally-dominant-sum-of-squares ($k\mathsf{DDSOS}$) if it can be written as \[ p= \sum_{i_1 \cdots i_k} \sum_{j}\biggl (\sum_{l=1}^k \alpha^{i_1 \cdots i_k}_{ji_l}m_{i_l} \biggr)^2 \] for some monomials $m_{i_l}$ and some constants $ \alpha^{i_1 \cdots i_k}_{ji_l} \in \mathbb{R}$. \end{definition} It directly follows from the definition that a polynomial $p$ (with $n$ variable and $2d$ degree) is SOS if and only if it is ${n+d\choose d} \mathsf{DDSOS}$. The cases $k=1,2$ has been explored intensively in \cite{ahmadi2017dsos} under the name DSOS and SDSOS. In the definition, we did not require $i_1<\dots<i_k$ as we did in defining $k$OC. We show in the following lemma that this requirement is not necessary. \begin{lemma}\label{lm2} Suppose the monomials having $n$ variables with degree less than or equal to $d$ are indexed by $\{1,2,\dots,{n+d\choose d}\}$ according to some order. A polynomial $p$ with degree $2d$ , $n$ variables is $k\mathsf{DDSOS}$ if and only if it can be written as \[ p= \sum_{(i_1,\dots, i_k)\in{ \comb{[n+d]}{ d}}} \sum_{j=1}^k \biggl(\sum_{l=1}^k \alpha^{i_1 \dots i_k}_{ji_l}m_{i_l} \biggr)^2, \] where $(m_{i_l})_{l=1}^k$ are different for different $(i_l)_{l=1}^k$. \end{lemma} \begin{proof} It is easy to see a polynomial can be written in the above form is a $k\mathsf{DDSOS}$. 
Now suppose $p$ is a $k\mathsf{DDSOS}$, by rearrange the brackets and adding $0$ terms if necessary, we could write $p$ in the form \[ p= \sum_{(i_1,\dots, i_k)\in\comb{[n+d]}{ d}} \sum_{j} \biggl(\sum_{l=1}^k \alpha^{i_1 \cdots i_k}_{ji_l}m_{i_l}\biggr )^2, \] where $m_{i_l}$s are different when $i_l$s are not equal. So the thing left to do is to make sure there are $k$ brackets in the second sum, i.e., the sum over $j$. Since \[ (\sum_{l=1}^k \alpha^{i_1 \cdots i_k}_{ji_l}m_{i_l} )^2 =m_{i_1 \cdots i_k}^{\scriptscriptstyle\mathsf{T}} (\alpha^{i_1 \cdots i_k}_j)^{\scriptscriptstyle\mathsf{T}}\alpha^{i_1 \cdots i_k}_j m_{i_1 \cdots i_k}, \] where $m_{i_1 \cdots i_k} = (m_{i_1},\dots, m_{i_k}),\alpha^{i_1 \cdots i_k}_j=(\alpha^{i_1\dots i_k}_{ji_1},\dots, \alpha^{i_1 \cdots i_k}_{ji_k}).$ The sum $\sum_j (\alpha^{i_1 \cdots i_k}_j)^{\scriptscriptstyle\mathsf{T}}\alpha^{i_1 \cdots i_k}_j $ is still a non-negative definite matrix and thus has a Cholesky decomposition,i.e., $\sum_j (\alpha^{i_1 \cdots i_k}_j)^{\scriptscriptstyle\mathsf{T}}\alpha^{i_1 \cdots i_k}_j = D^{\scriptscriptstyle\mathsf{T}}D$. This means \[ p= \sum_{i_1 \cdots i_k}( m_{i_1 \cdots i_k}D)^{\scriptscriptstyle\mathsf{T}} D(m_{i_1 \cdots i_k}), \] which shows there can be exactly $k$ brakets in the second sum. \end{proof} The following theorem connects our $k\mathsf{DDSOS}$ polynomial with our $k$th order cone induced by $\psd{k}$. \begin{theorem}\label{theorem: sos} A polynomial $p$ of degree $2d$ with $n$ variables is $k\mathsf{DDSOS}$ if and only if it admits a representation as $p(x)=m^{\scriptscriptstyle\mathsf{T}}(x)Am(x)$, where $m(x)$ is the standard monomial vector of degree $d$ (so in total ${n+d\choose d}$ tuples with different entries), and $A \in (\psd{h})_k$ for some $h\leq {n+d\choose d}$. \end{theorem} \begin{proof} If $p$ admits a representation \[ p =m^{\scriptscriptstyle\mathsf{T}}Am, \] where $A\in (\psd{h})_k$ for some $h$ and $m$ is the vector of all monomials with degree less than $d$. Since $A\in (\psd{h})_k$, $A$ has the decomposition $A = \sum_{(i_1,\dots, i_k)\in \comb {[n+d]}{d}}M^{i_1 \cdots i_k}$. $M^{i_1 \cdots i_k}$ are zero except for those $(i,j), i,j\in \{i_1,\dots,i_k\}$ entries. $\Trn{i_1 \cdots i_k}^h(M ^{i_1 \cdots i_k})$ are positive semi-definite and thus has the Cholesky decomposition $\Trn{i_1 \cdots i_k}^h(M^{i_1 \cdots i_k}) = N_{i_1 \cdots i_k}^{\scriptscriptstyle\mathsf{T}} N_{i_1 \cdots i_k}$. Thus, we have \begin{align*} p & = m^{\scriptscriptstyle\mathsf{T}}Am \\ & = \sum_{i_1 \cdots i_k} m^{\scriptscriptstyle\mathsf{T}} M^{i_1 \cdots i_k}m \\ & = \sum_{i_1 \cdots i_k} \begin{bmatrix} m_{i_1} & \dots & m_{i_k} \end{bmatrix} \Trn{i_1 \cdots i_k}^d(M^{i_1 \cdots i_k})\begin{bmatrix} m_{i_1} \\ \vdots \\ m_{i_k} \end{bmatrix}\\ & =\sum_{i_1 \cdots i_k} (\begin{bmatrix} m_{i_1} & \dots & m_{i_k} \end{bmatrix} N_{i_1 \cdots i_k}^{\scriptscriptstyle\mathsf{T}}) \Biggl(N_{i_1 \cdots i_k}\begin{bmatrix} m_{i_1} \\ \vdots \\ m_{i_k}\end{bmatrix}\Biggr) \end{align*} The last expression shows that $p$ is a $k\mathsf{DDSOS}$. Now if $p$ is a $k\mathsf{DDSOS}$, as shown in lemma \ref{lm2}, we could write \[ p = \sum_{i_1 \cdots i_k} \sum_{j=1}^k \biggl(\sum_{l=1}^k \alpha^{i_1 \cdots i_k}_{ji_l}m_{i_l} \biggr)^2, \] where $m_{i_l}$ are different for different $i_l$. 
This gives our \[N_{i_1 \cdots i_k} = \begin{bmatrix} \alpha_{1i_1}^{i_1 \cdots i_k} & \dots & \alpha_{1i_k}^{i_1 \cdots i_k} \\ \alpha_{2i_1} ^{i_1 \cdots i_k}& \dots & \alpha_{2i_k}^{i_1 \cdots i_k}\\ \vdots & \dots & \vdots \\ \alpha_{ki_1}^{i_1 \cdots i_k} & \dots & \alpha_{ki_k}^{i_1 \cdots i_k} \end{bmatrix}. \] We then can construct $M^{i_1 \cdots i_k}$ and $A$. \end{proof} We define the corresponding $k\mathsf{DDSOS}$ program here. \begin{definition} Denote the cone of $k\mathsf{DDSOS}$ with degree $2d$ and $n$ variables as $k\mathsf{SOS}_{n,d}$. We call the following optimization $k\mathsf{DDSOS}$ programming. \begin{equation}\label{program: kDSOS} \begin{aligned} & \underset{u\in \mathbb{R}^l}{\text{minimize}} & & r^{\scriptscriptstyle\mathsf{T}}u \\ & \text{subject to} & &r_{0,t}+ r_{1,t}(x)u_1+\dots +r_{s_t,t}(x)u_{s_t} \in k\mathsf{SOS}_{n_t,d_t}, t = 1,2,\dots, N,\\ & & & \end{aligned} \end{equation} where $r(x)$s are given polynomials and $n_t,d_t$ depends on $r(x)$. $n_t$ is the total number of variables of $r(x)$ in the same inequality. $d_t$ is half the highest degree of $r(x)$ in the same inequality. \end{definition} To link to our previous discussion of $k$OCP induced by $\psd{k}$, we show that these two programs are equivalent. \begin{theorem} $k\mathsf{DDSOS}$ programming is equivalent to $(\psd{d})_k$ cone programming ($k$OCP induced by $\psd{k}$). \end{theorem} \begin{proof} We first show how to reduce $(\psd{d})_k$ cone program to $k\mathsf{DDSOS}$ program: We may suppose $(\psd{d})_k$ program is in its standard form, i.e., the form in \eqref{program:kocpstandard} (the equivalence between standard form and inequality form for $(\psd{d})_k$ can be proved via standard techniques). To avoid confusion, suppose $U$ is the variable matrix in $(\psd{d})_k$ cone program. Then our $r$ in $k\mathsf{DDSOS}$ program \eqref{program: kDSOS} is just $\text{vec}(A_0)$. The linear equality can be incorporated into a $k\mathsf{DDSOS}$ inequality by let $r$s in the inequality in \eqref{program: kDSOS} be constant and matches $(D_i)_{jk},-(D_i)_{jk}$ as the following \[ \tr(D_iU)=f_i \quad \iff \quad -f_i+\sum_{jk} (D_{i})_{jk}u_{ij} \in k\mathsf{SOS}_{1,1} \text{ and } f_i+\sum_{jk} -(D_{i})_{jk}u_{ij} \in k\mathsf{SOS}_{1,1}. \] The condition $U\in( \psd{d})_k$ is the same as \[ \sum_{1\leq i,j\leq d} x_i x_j u_{ij} \in k\mathsf{SOS}_{d,1} \] by Theorem~\ref{theorem: sos}. Next we show how to reduce $k\mathsf{DDSOS}$ program to $(\psd{d})_k$ cone program in its inequality form. The objective is the same for both program. The constraint $p_t(x)=r_{0,t}(x)+ r_{1,t}(x)u_1+\dots +r_{s_t,t}(x)u_k \in k\mathsf{SOS}_{n_t,d_t}$ is the same as there is one $A=[a_{ij}]_{ij}\in (\psd{h})_k$ for some $h$ such that $p_t(x)= m^{\scriptscriptstyle\mathsf{T}}Am$. Thus \[ p_t(x)=r_{0,t}(x)+ r_{1,t}(x)u_1+\dots r_{s_t,t}(x)u_t \in k\mathsf{SOS}_{n_t,d_t} \] if and only if there exists \[ A \in (\psd{n_t+d_t\choose d_t})_k,\quad \text{and linear constrants on}\,a_{ij},u_{i}, \] where the linear constrants come from matching coefficients of $p_t(x)=m^{\scriptscriptstyle\mathsf{T}}Am =r_{0,t}+ r_{1,t}(x)u_1+\dots+ r_{s_t,t}(x)u_t $. The condition $A\in (\psd{n_t+d_t\choose d_t})_k$ is a $k$OC constraint and we could add variable $a_{ij}$ to $k$OCP. This shows the other direction. 
\end{proof} \section{Completely positive cone and copositive cone} Recall the following definitions of completely positive matrices and copositive matrices: \begin{itemize} \item The set of copositive matrices with dimension $d$, $\cp{d}$: \[ \cp{d} \coloneqq\{M\in \mathbb{S}^d : x^{\scriptscriptstyle\mathsf{T}}Mx \geq 0 \text{ for all } x\in \mathbb{R}^d_+\}. \] \item The set of completely positive matrices with dimension $d$, $\cpp{d}$: \[ \cpp{d}\coloneqq\{ B^{\scriptscriptstyle\mathsf{T}}B : B\in \mathbb{R}^{m\times d}_+,\; m \text{ is an integer} \}. \] \end{itemize} Both families of cones satisfy the \text{embedding property}\xspace thoroughly, as can be verified directly from the definitions. The corresponding copositive programs and completely positive programs give a lot of modeling power in combinatorics and nonconvex problems \cite{dur2010copositive,burer2015gentle}. However, these programs are NP-hard to solve in general. Using the construction of the $k$OCP induced by $\cp{k}$ or $\cpp{k}$, for $k=1,2,3,4$, we are able to solve inner approximations of copositive programs and completely positive programs. The case $k=2$ of $\cpp{k}$ has been explored in \cite{bostanabad2018inner}. \begin{theorem}\label{thm: cpcpptohocp} The $2,3,4$-OCPs with index set $J$ induced by $\cp{k}$ or $\cpp{k}$ can be cast as $2,3,4$-OCPs induced by $\psd{k}$. \end{theorem} The theorem is mainly due to the following lemma: \begin{lemma}\label{lemma: cppcppsd} \cite{maxfield1962matrix} Denote $\nn{k} = (\mathbb{R}^{k\times k}_+)\cap \mathbb{S}^k$. For $k=1,2,3,4$ we have \[\cp{k} = \psd{k} + \nn{k} ,\quad \cpp{k} = \psd{k} \cap \nn{k}.\] \end{lemma} \begin{proof}[Proof of Theorem \ref{thm: cpcpptohocp}] We may suppose the $i$-OCP with index set $J$ induced by $\cp{k}$ or $\cpp{k}$ is in its standard form \eqref{program:kocpstandard}, where $i=2,3$, or $4$. The case of the inequality form is similar. The constraint $X\in \cp{d}_i$ is the same as \[X\in \cp{d}_i \iff X = \sum_{\{j_1,\dots,j_i\}\in J,j_1<\dots <j_i} \Aug{j_1\dots j_i}(M_{j_1\dots j_i}) \quad \text{and}\quad M_{j_1\dots j_i}\in \cp{i}.\] Since $M_{j_1\dots j_i}\in \cp{i}$ if and only if $M_{j_1\dots j_i} = S_{j_1\dots j_i} + N_{j_1\dots j_i}$ for some $S_{j_1\dots j_i} \in \psd{i}$ and $N_{j_1\dots j_i}\in \nn{i}$ by Lemma \ref{lemma: cppcppsd}, we see that the $i$-OCP with index set $J$ induced by $\cp{k}$ can be cast as an $i$OCP induced by $\psd{k}$ (the constraint $S_{j_1\dots j_i} \in \psd{i}$ can be cast as a $(\psd{d})_k$ constraint by setting $d = k$, and the nonnegativity constraint can be handled via $\diag(x)\in (\psd{d})_k\iff x \geq 0$). For the $i$-OCP with index set $J$ induced by $\cpp{k}$, we note that \[X\in \cpp{d}_i \iff X = \sum_{\{j_1,\dots,j_i\}\in J,j_1<\dots <j_i} \Aug{j_1\dots j_i}(M_{j_1\dots j_i}) \quad \text{and}\quad M_{j_1\dots j_i}\in \cpp{i}.\] Since $M_{j_1\dots j_i}\in \cpp{i}$ if and only if $M_{j_1\dots j_i} \in \psd{i}$ and $M_{j_1\dots j_i}\in \nn{i}$ by Lemma \ref{lemma: cppcppsd}, we see that the $i$-OCP with index set $J$ induced by $\cpp{k}$ can be cast as an $i$OCP induced by $\psd{k}$. \end{proof} By adjusting the set $J$ and choosing a matrix $U\in \mathbb{R}^{d \times d }$, we may consider solving \[ \begin{aligned} & \text{minimize} & & \tr(A_0X) \\ & \text{subject to}\;& & \tr(A_iX)=b_i,\quad i=1,\dots,p \\ & & & X\in U\cp{d}_k(J)U^{\scriptscriptstyle\mathsf{T}} \end{aligned} \] and \[ \begin{aligned} & \text{minimize} & & \tr(A_0X) \\ & \text{subject to}\;& & \tr(A_iX)=b_i,\quad i=1,\dots,p \\ & & & X\in U\cpp{d}_k(J)U^{\scriptscriptstyle\mathsf{T}}.
\end{aligned} \] This formulation gives us more modeling power and can also be cast as a $k$OCP induced by $\psd{k}$ for $k=2,3,4$. \section{Symmetric cones} \subsection{Positive semidefinite matrices in $\mathbb{R}^{d\times d},\mathbb{C}^{d\times d}, \mathbb{H}^{d \times d}$ and $\mathbb{O}^{d\times d}$} Let us first recall the five irreducible symmetric cones\footnote{A cone is symmetric if it is self-dual and its automorphism group acts transitively on its interior. A symmetric cone is irreducible if it cannot be written as a Cartesian product of other symmetric cones.}: \begin{enumerate}[\upshape (i)] \item Symmetric real positive semidefinite matrices in $\mathbb{S}^{d}$ \item Hermitian complex positive semidefinite matrices in $\mathbb{C}^{d \times d}$ \item Hermitian quaternion positive semidefinite matrices in $\mathbb{H}^{d \times d}$ \item Hermitian octonion positive semidefinite $\mathbb{O}^{3\times 3}$ matrices \item Second order cone in $\mathbb{R}^{d+1}$: $\mathsf{SOC}^{d+1} = \{ (t,x)\mid \twonorm{x}\leq t, x\in \mathbb{R}^{d},t\in \mathbb{R}\}$. \end{enumerate} The first three families of cones satisfy the \text{embedding property}\xspace thoroughly, as they are all of the form \[ \{A \in \mathbb{S}^d : x^*Ax \geq 0, \,\text{for all}\, x\in \mathbb{F}^{d}\},\] where $\mathbb{F} = \mathbb{R}, \mathbb{C}$ or $\mathbb{H}$. Let \[\Ho{d}_+ = \{A \in \mathbb{O}^{d \times d} : A= A^*, x^*Ax \geq 0, \,\text{for all}\, x\in \mathbb{O}^{d}\}.\] We may consider the series $\{\Ho{k}_+\}_{k=1}^\infty$ so that the cone of Hermitian octonion positive semidefinite $\mathbb{O}^{3\times 3}$ matrices is a member of it. This series satisfies the \text{embedding property}\xspace thoroughly. \subsection{Second Order Cone} We need to first transform the second order cone into the space of symmetric matrices. This can be done through \begin{equation} \begin{aligned}\label{def: sock} \{A : A = \diag(t,x), \, \text{for some}\,(t,x)\in \mathsf{SOC}^{k}\}. \end{aligned} \end{equation} We abuse notation and call the above set $\mathsf{SOC}^{k}$ as well. Moreover, we define $\mathsf{SOC}^1 = \mathbb{R}_+$. The index map for $\{\mathsf{SOC}^{k}\}_{k=1}^\infty$ is then \[I_{\mathsf{SOC}} (d,k)= \{s : s = (1,i_1,\dots, i_{k-1}), 2\leq i_1<\dots <i_{k-1}\leq d\}\] for $k\geq 2$ and is simply $\{1\}$ if $k=1$. It can be easily verified that $\{\mathsf{SOC}^{k}\}_{k=1}^\infty$ satisfies the \text{embedding property}\xspace with index map $I_{\mathsf{SOC}}$. We can avoid lifting the second order cone to $d \times d$ matrices. First, we define $\Trn{i_1 \cdots i_k} ^{\mathbb{R}^d}: \mathbb{R}^{d} \rightarrow \mathbb{R}^{k}$, $\Aug{i_1 \cdots i_k}^{\mathbb{R}^d}:\mathbb{R}^{k}\rightarrow \mathbb{R}^{d}$ for every $(i_1,\dots,i_k)\in{\comb{[d]}{k}}$ such that \[ \Trn{i_1 \cdots i_k} ^{\mathbb{R}^d}(x) = (x_{i_1},\dots,x_{i_k}), \quad [\Aug{i_1 \cdots i_k}^{\mathbb{R}^d}(y)]_{i}= \begin{cases} y_j & i = i_j \,\text{for some}\,j,\\ 0 & \text{otherwise}. \end{cases}\] The $k$th higher order cone of $\mathsf{SOC}^{d}$ is then \[ \mathsf{SOC}^{d}_{k} = \{ x\in \mathbb{R}^{d} : x = \sum_{(1,i_1,\dots ,i_{k-1})\in {I}_{\mathsf{SOC}}(d,k)} \Aug{1i_1 \cdots i_{k-1}}^{\mathbb{R}^{d}}(x_{1i_1\dots i_{k-1}}), \; x_{1i_1 \cdots i_{k-1}} \in \mathsf{SOC}^{k} \}. \] The nested inclusions of $\{\mathsf{SOC}^{d}_k\}_{k=1}^d$ turn out to be strict, as the lemma below shows.
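Before proving this, it is instructive to identify the first nontrivial cone in this hierarchy explicitly. For $k=2$, each tuple $(1,i)\in I_{\mathsf{SOC}}(d,2)$ contributes a lifted vector whose only nonzero entries are $s_i$ in position $1$ and $y_i$ in position $i$, with $|y_i|\leq s_i$. Summing over $i=2,\dots,d$ gives \[ \mathsf{SOC}^{d}_{2}=\Bigl\{(t,x)\in\mathbb{R}\times\mathbb{R}^{d-1} : t=\textstyle\sum_{i=2}^{d}s_i \text{ for some } s_i\geq |x_{i-1}|\Bigr\} =\bigl\{(t,x) : \|x\|_1\leq t\bigr\}, \] the $\ell_1$-norm cone, which indeed sits inside $\mathsf{SOC}^{d}=\{(t,x):\twonorm{x}\leq t\}$ since $\twonorm{x}\leq \|x\|_1$.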
\begin{lemma} We have \[ \mathsf{SOC}^{d}_1\subsetneq \mathsf{SOC}^{d} _2\subsetneq\mathsf{SOC}^{d}_3\subsetneq \dots \subsetneq \mathsf{SOC}^{d}_d = \mathsf{SOC}^{d}.\] \end{lemma} \begin{proof} The inclusions follow easily from the nested cones property in Proposition \ref{proposition: koc}. We now prove that the inclusions are actually strict. First, we consider the dual cones \[(\mathsf{SOC}^{d}_k)^*= \{x\in \mathbb{R}^{d} : \Trn{1i_1 \cdots i_{k-1}} ^{\mathbb{R}^d}(x) \in \mathsf{SOC}^{k} \,\text{for all}\,(1,i_1,\dots ,i_{k-1})\in {I}_{\mathsf{SOC}}(d,k)\}.\] An application of the first and second items of Proposition \ref{proposition: koc} tells us that $$(\mathsf{SOC}^{d}_1)^*\supseteq (\mathsf{SOC}^{d} _2)^*\supseteq(\mathsf{SOC}^{d}_3)^*\supseteq \dots \supseteq (\mathsf{SOC}^{d}_d )^*= (\mathsf{SOC}^{d})^*.$$ Consider $(\sqrt{k-1},\mathbf{1}_{d-1})$, where $\mathbf{1}_{d-1}$ is the all-ones vector of length $d-1$. This vector belongs to $(\mathsf{SOC}^{d}_k)^*$ but not to $(\mathsf{SOC}^{d}_{k+1})^*$: each truncation of length $k$ is $(\sqrt{k-1},\mathbf{1}_{k-1})$ with $\twonorm{\mathbf{1}_{k-1}}=\sqrt{k-1}$, whereas a truncation of length $k+1$ is $(\sqrt{k-1},\mathbf{1}_{k})$ with $\twonorm{\mathbf{1}_{k}}=\sqrt{k}>\sqrt{k-1}$. Thus the inclusions between the dual cones are strict. Since $(\mathsf{SOC}^{d}_k)^* = \cap_{(1,i_1,\dots ,i_{k-1})\in {I}_{\mathsf{SOC}}(d,k)} K_{i_1 \cdots i_{k-1}}$ where $K_{i_1 \cdots i_{k-1}} = \{x\in \mathbb{R}^{d} : \Trn{1i_1 \cdots i_{k-1}} ^{\mathbb{R}^d}(x) \in \mathsf{SOC}^{k} \}$, and $(\sqrt{d+1}, \mathbf{1}_{d-1})\in \interior(K_{i_1 \cdots i_{k-1}})$, by the Krein-Rutman Theorem \cite[Corollary 3.3.13]{borwein2010convex}, we have \[(\mathsf{SOC}^{d}_k)^{**}= \sum_{(1,i_1,\dots ,i_{k-1})\in {I}_{\mathsf{SOC}}(d,k)}\Aug{1i_1 \cdots i_{k-1}}^{\mathbb{R}^{d}}(\mathsf{SOC}^{k}) =\mathsf{SOC}^{d}_k .\] Thus the strict inclusions between the dual cones imply the strict inclusions in $\{\mathsf{SOC}^d_k\}_{k=1}^d$. \end{proof} \section{Norm Cones} The \text{embedding property}\xspace is also satisfied by a large class of norm cones. Specifically, the property we need is the following. \begin{definition}\label{def: consistenceMonotonic} Suppose a norm $\|\cdot\|$ is defined on $\mathbb{S}^d$ (or $\diag(\mathbb{R}^d)$) for all $d$. For any $1\leq k\leq d$, $X\in \mathbb{S}^d$ (or $\diag(\mathbb{R}^d)$), and any $(i_1,\dots, i_k)\in {\comb{[d]}{k}}$, the norm is said to be \begin{enumerate}[\upshape (i)] \item \emph{consistent} if $\|\Trn{i_1\cdots i_k}(X)\| = \| \Aug{i_1\cdots i_k}(\Trn{i_1\cdots i_k}(X))\|$; \item \emph{monotonic} if $\|\Trn{i_1\cdots i_k}(X)\| \leq \|X\|$. \end{enumerate} \end{definition} Norms satisfying consistency and monotonicity are abundant, for example, \begin{enumerate}[\upshape (a)] \item All $\ell_p$ norms on $\mathbb{R}^d$: $\|x\|_p = (\sum_{i=1}^d|x_i|^p)^{\frac{1}{p}}$ for any $x\in \mathbb{R}^d,p\geq 1$. \item All Schatten norms on $\mathbb{S}^d$ with underlying field being $\mathbb{R}$ or $\mathbb{C}$: $\|X \|_p = (\sum_{i=1}^d|\sigma_i(X)|^p)^{\frac{1}{p}}$ for all $X\in \mathbb{S}^d$ where $\sigma_i(X)$ is the $i$th largest singular value of $X$. The monotonicity is due to Cauchy's interlace theorem. \item All Ky-Fan $k$ norms on $\mathbb{S}^d$ with underlying field being $\mathbb{R}$ or $\mathbb{C}$: $\|X \|_{\text{KF}_k}= \sum_{i=1}^k\sigma_i(X)$ for all $X\in \mathbb{S}^d$ and $\sigma_i =0$ for $i>d$. The monotonicity is also due to Cauchy's interlace theorem. \item The operator norms of matrices induced by the $\ell_p$ and $\ell_q$ vector norms: $\|A\|_{p,q} = \sup_{\|x\|_p=1} \|Ax\|_{q}$ for any $1\leq p,q\leq \infty$.
\end{enumerate} In fact, these two properties turn out to characterize exactly those norms whose norm cones satisfy the \text{embedding property}\xspace in the same way that $\mathsf{SOC}$ does. This fact is shown by the following theorem. \begin{theorem}[Characterization of norm cones satisfying the \text{embedding property}\xspace as $\mathsf{SOC}$ does] For a norm $\|\cdot\|$ defined on $\mathbb{S}^d$ (or $\diag(\mathbb{R}^d)$) for all $d\geq 1$, let the norm cone in $\mathbb{S}^{d+1}$ (or $\diag(\mathbb{R}^{d+1})$) be \[ N_{\|\cdot\|}^{d+1} =\{ \diag(t,X)\mid \|X\|\leq t, X\in \mathbb{L}^d,t \in \mathbb{R}\}, \] and $N_{\|\cdot\|}^{1} = \mathbb{R}_+$, where $\mathbb{L}^d= \mathbb{S}^d$ or $\diag(\mathbb{R}^d)$. If the norm is consistent and monotonic, then the series of norm cones $\{N_{\|\cdot\|}^k\}_{k=1}^\infty$ satisfies the \text{embedding property}\xspace with index map $I_{\mathsf{SOC}}$. The converse is also true. \end{theorem} \begin{proof} We prove the case of $\mathbb{S}^d$. The case of $\diag(\mathbb{R}^d)$ follows along exactly the same lines. We first show that consistency together with monotonicity implies that $\{N_{\|\cdot\|}^{k}\}_{k=1}^\infty$ satisfies the \text{embedding property}\xspace with index map $I_{\mathsf{SOC}}$. For any $1\leq k\leq d$, $(1,i_1,\dots ,i_{k})\in {I}_{\mathsf{SOC}}(d+1,k+1)$, $\diag(t,X)\in N_{\|\cdot\|}^{d+1}$, and $\diag(s,Z) \in N_{\|\cdot \|}^{k+1}$, the consistency implies that \[\Aug{1i_1\cdots i_k}(\diag(s,Z)) \in N_{\|\cdot\|}^{d+1},\] since \[\Aug{1i_1\cdots i_k}(\diag(s,Z)) = \diag(s, \Aug{i_1\cdots i_k}(Z)), \quad \text{and} \quad \|\Aug{i_1\cdots i_k}(Z)\| = \|Z\|\leq s.\] The monotonicity implies that \[\Trn{1i_1\cdots i_k} (\diag(t,X)) \in N_{\|\cdot\|}^{k+1},\] since \[\Trn{1i_1\cdots i_k}(\diag(t,X)) = \diag(t, \Trn{i_1\cdots i_k}(X)), \quad \text{and} \quad \|\Trn{i_1\cdots i_k}(X)\| \leq \|X\|\leq t.\] The case $k=0$ is trivial. Next we show that the \text{embedding property}\xspace of $\{N_{\|\cdot\|}^k\}_{k=1}^\infty$ implies the consistency and monotonicity of the norm. Due to the \text{embedding property}\xspace of $\{N_{\|\cdot\|}^{k}\}_{k=2}^d$, we have for any $1\leq k\leq d$, $(1,i_1,\dots ,i_{k})\in {I}_{\mathsf{SOC}}(d+1,k+1)$, $Z\in \mathbb{S}^k$, \[ \diag(\|Z\|,Z) \in N_{\|\cdot\|}^{k+1} \implies \Aug{1i_1\cdots i_k}(\diag(\|Z\|,Z)) \in N_{\|\cdot\|}^{d+1} \implies \|\Aug{i_1\cdots i_k}(Z)\|\leq \|Z\|. \] Now consider \[ \diag(\|\Aug{i_1\cdots i_k}(Z)\|,\Aug{i_1\cdots i_k}(Z)) \in N_{\|\cdot\|}^{d+1} \implies \Trn{1i_1\cdots i_k}(\diag(\|\Aug{i_1\cdots i_k}(Z)\|,\Aug{i_1\cdots i_k}(Z)))\in N_{\|\cdot\|}^{k+1}. \] But $\Trn{1i_1\cdots i_k}(\diag(\|\Aug{i_1\cdots i_k}(Z)\|,\Aug{i_1\cdots i_k}(Z))) = \diag(\|\Aug{i_1\cdots i_k}(Z)\|,Z)$ and we have \[\|\Aug{i_1\cdots i_k}(Z)\|\geq \|Z\|.\] This shows the consistency by taking $Z = \Trn{i_1\cdots i_k}(X)$. To prove monotonicity, we have for any $X\in \mathbb{S}^d$, \[\Trn{1i_1\cdots i_k} (\diag(t,X)) \in N_{\|\cdot\|}^{k+1} \implies \|\Trn{i_1\cdots i_k}(X)\| \leq t\] and taking $t= \|X\|$ shows the monotonicity. \end{proof} Thus the norm cones of the four previously mentioned kinds of norms, namely (1) the $\ell_p$ norms on $\mathbb{R}^d$, (2) the Schatten norms on $\mathbb{S}^d$ with underlying field $\mathbb{R}$ or $\mathbb{C}$, (3) the Ky-Fan $k$ norms with underlying field $\mathbb{R}$ or $\mathbb{C}$, and (4) the operator norms induced by the $\ell_p,\ell_q$ norms, all satisfy the \text{embedding property}\xspace with index map $I_{\mathsf{SOC}}$.
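These two properties are also straightforward to check numerically. The following short Python sketch, a minimal illustration of our own using NumPy (the helper \texttt{nuclear} and the random test matrix are only for this example), verifies monotonicity and consistency of the Schatten $1$-norm on a random symmetric matrix.
\begin{verbatim}
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
d, k = 6, 3

A = rng.standard_normal((d, d))
X = (A + A.T) / 2                  # a random element of S^d

def nuclear(M):
    # Schatten 1-norm: sum of singular values
    return np.linalg.svd(M, compute_uv=False).sum()

# monotonicity: no k-by-k principal submatrix has larger norm
for S in combinations(range(d), k):
    assert nuclear(X[np.ix_(S, S)]) <= nuclear(X) + 1e-9

# consistency: zero-padding a principal submatrix preserves its norm
S = (0, 2, 4)
lifted = np.zeros((d, d))
lifted[np.ix_(S, S)] = X[np.ix_(S, S)]
assert abs(nuclear(lifted) - nuclear(X[np.ix_(S, S)])) <= 1e-9
\end{verbatim}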
This means our previous discussion on $\mathsf{SOC}$ is just a special case of norm cones with the \text{embedding property}\xspace. Here we give two more concrete examples of consistent and monotonic norms and study their $k$th order cones. Let us first consider the $\ell_1$ norm in $\mathbb{R}^d$. As in the case of the second order cone, we don't need to lift the space to matrices. The second order cone induced by $N_{\ell_1}^{d+1}$ is \[ (N_{\ell_1}^{d+1})_2 = \{ (t,x)\in\mathbb{R}^{d+1} \mid t= t_1+\cdots +t_d \,\text{for some}\, t_i\geq |x_i|, \; i=1,\dots,d\}, \] which is simply $N_{\ell_1}^{d+1}$! Thus, by the first item of Proposition \ref{proposition: koc}, we know the $k$th order cone induced by $N_{\ell_1}^{d+1}$ is just $N_{\ell_1}^{d+1}$ itself for every $k\geq 2$. We don't gain new cones from this construction except the trivial cone $(N_{\ell_1}^{d+1})_1 = \mathbb{R}_+\times \{0_{d}\}$ where $0_d$ is the zero vector of length $d$. Note that this is not the case for the second order cone. Next we consider the nuclear norm $\|\cdot\|_*$: \[ \|X\|_* = \sum_{i=1}^d \sigma_i(X), \quad \text{for all}\quad X\in \mathbb{S}^d\] with underlying field being real or complex. The $(k+1)$th order cone induced by $N_{\|\cdot\|_*}^{d+1}$ is \[ (N_{\|\cdot\|_*}^{d+1})_{k+1} = \{\sum_{(1,i_1,\dots ,i_{k})\in {I}_{\mathsf{SOC}}(d+1,k+1)} \Aug{1i_1\cdots i_k}(\diag(t_{i_1\cdots i_k},X_{i_1\cdots i_k})) \mid \diag(t_{i_1\cdots i_k},X_{i_1\cdots i_k}) \in N_{\|\cdot \|_*}^{k+1}\}.\] Since $(N_{\|\cdot\|_*}^{d+1})^* = N_{\|\cdot\|_{2}}^{d+1}$ where $\|\cdot\|_{2}$ is the operator two norm, we know from the second item of Proposition \ref{proposition: koc} that the dual cone of $(N_{\|\cdot\|_*}^{d+1})_{k+1}$ is \[ ((N_{\|\cdot\|_*}^{d+1})_{k+1})^* = \{\diag(t,X) \mid \Trn{1i_1\cdots i_k} (\diag(t,X))\in N_{\|\cdot \|_2}^{k+1}, \,\text{for all} \, (1,i_1,\dots ,i_{k})\in {I}_{\mathsf{SOC}}(d+1,k+1)\}.\] Moreover, by an application of the first and second items of Proposition \ref{proposition: koc}, we have \[ ((N_{\|\cdot\|_*}^{d+1})_{d+1})^*\subseteq ((N_{\|\cdot\|_*}^{d+1})_{d})^* \subseteq \cdots \subseteq ((N_{\|\cdot\|_*}^{d+1})_{2})^*\subseteq ((N_{\|\cdot\|_*}^{d+1})_{1})^*. \] By considering $\diag(k,I_d + \mathbf{1}\mathbf{1}^\top)$ with $k=1,2,\dots, d$ where $I_d$ is the identity matrix in $\mathbb{S}^d$ and $\mathbf{1}$ is the all-ones vector, we find that \[ ((N_{\|\cdot\|_*}^{d+1})_{d+1})^*\subsetneq ((N_{\|\cdot\|_*}^{d+1})_{d})^* \subsetneq \cdots \subsetneq ((N_{\|\cdot\|_*}^{d+1})_{2})^*\subsetneq ((N_{\|\cdot\|_*}^{d+1})_{1})^*. \] Since $\diag(d+1,I_d) \in \interior( K_{i_1 \cdots i_{k}} )= \interior(\{\diag(t,X): \Trn{1i_1 \cdots i_{k}} (\diag(t,X) )\in N_{\|\cdot\|_{2}}^{k+1}\})$ and $((N_{\|\cdot\|_*}^{d+1})_{k+1})^* = \cap_{(1,i_1,\dots ,i_{k})\in {I}_{\mathsf{SOC}}(d+1,k+1) }K_{i_1 \cdots i_{k}} $, by the Krein-Rutman Theorem \cite[Corollary 3.3.13]{borwein2010convex}, we find that \[ (N_{\|\cdot\|_*}^{d+1})_{d+1}\supsetneq (N_{\|\cdot\|_*}^{d+1})_{d}\supsetneq \cdots \supsetneq (N_{\|\cdot\|_*}^{d+1})_{2} \supsetneq (N_{\|\cdot\|_*}^{d+1})_{1}. \] Thus, unlike the case of the $\ell_1$ norm cone, we indeed obtain new cones here.
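To see why $\diag(k,I_d+\mathbf{1}\mathbf{1}^\top)$ separates consecutive dual cones in the chain above, note that the principal submatrix of $I_d+\mathbf{1}\mathbf{1}^\top$ on an index set of size $m$ is $I_m+\mathbf{1}_m\mathbf{1}_m^\top$, whose operator norm is $m+1$. Hence every block of size $k-1$ has operator norm $k\leq k$, so the matrix lies in $((N_{\|\cdot\|_*}^{d+1})_{k})^*$, whereas a block of size $k$ has operator norm $k+1>k$, so the matrix does not lie in $((N_{\|\cdot\|_*}^{d+1})_{k+1})^*$.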
Finally, the $k$th order cone program induced by $(N_{\|\cdot\|}^{d+1})_{k}$ for a monotonic and consistent norm $\|\cdot\|$ is \begin{equation*} \begin{aligned} & \text{minimize} & & \tr(A_0X) +a_0t \\ & \text{subject to}\;& & \tr(A_iX)+a_it=b_i,\quad i=1,\dots,p \\ & & & \diag(t,X)\in (N_{\|\cdot\|}^{d+1})_k, \end{aligned} \end{equation*} where $A_i\in \mathbb{S}^d$, $a_i,\,b_i\in \mathbb{R}$, for all $i=0,\dots, p$. \section{KKT Condition and Self-Concordance} \subsection{KKT Condition} Here we list the KKT conditions for our higher order cone program. The primal form of our $\cone{d}_k(J)$ program is \begin{equation} \begin{aligned} \label{Optprogram: primal higher order} & \text{minimize}& & \tr(A_0X) \\ & \text{subject to}\;& & \tr(A_iX)=b_i ,\quad i =1,\dots,p \\ & & & X\in \cone{d}_k(J). \end{aligned} \end{equation} The dual of the above program is \begin{equation} \begin{aligned} \label{Optprogram: dual higher order} & \text{maximize}& & b^{\scriptscriptstyle\mathsf{T}} y \\ & \text{subject to}\;& & A_0 - \sum_{i=1}^p y_iA_i \in (\cone{d}_k(J))^*.\\ \end{aligned} \end{equation} Let $X^{\star},y^\star$ be a primal and dual solution pair of the above programs. Also let $Z^{\star} = A_0 - \sum_{i=1}^pA_iy^\star_i$. If strong duality holds: \[ \tr(A_0 X^{\star}) = b^{\scriptscriptstyle\mathsf{T}} y^\star,\] we have the KKT conditions \begin{equation} \label{eq: kkt} \begin{aligned} X^{\star}& \in \cone{d}_k(J),\\ \tr(A_iX^{\star} )& =b_i, \quad i=1,\dots,p,\\ Z^{\star} &\in (\cone{d}_k(J))^*,\\ \tr (Z^{\star} X^{\star}) & =0,\\ A_0- \sum_{i=1}^p A_iy^\star_i& =Z^{\star} . \end{aligned} \end{equation} \subsection{Self-Concordance} We assume the original cone $\cone{k}$ and its $k$th order cone $\cone{d}_k$ are proper and the underlying field is $\mathbb{R}$. The index set is $I(d,k)$. The properness of the $k$th order cone holds for all previously mentioned examples in $\mathbb{S}^{d}$. Recall the definition of self-concordance and a few of its properties. \begin{definition} Let $K$ be a convex closed cone. A continuous function $f: K\to \mathbb{R}\cup\{+\infty\}$ is called a barrier function of $K$ if it satisfies \[f(x)<\infty \quad \text{for every } x \in \interior(K), \qquad f(x)=+\infty \quad \text{for every }x \in \partial K, \] where $\interior(K)$ denotes the interior of $K$ and $\partial K$ the boundary of $K$ in the usual topology of $\mathbb{R}^n$. A convex, three times differentiable function $f(x)$ on $K$ is self-concordant if for every $x\in \interior(K)$ and $h\in \mathbb{R}^n$ the univariate function $\phi (\alpha) = f(x+\alpha h)$ satisfies \[|\phi'''(0)|\leq 2 |\phi''(0)|^{\frac{3}{2}}. \] A barrier $f(x)$ of $K$ is logarithmically homogeneous of degree $\theta$ if \[f(tx)=f(x)-\theta \log(t) \quad \text{for all } t>0,\; x\in \interior(K). \] \end{definition} The following property is an easy consequence of the definition of self-concordance and can be found in Section 9.6 of \cite{Boyd}. \begin{prop}\label{prop: selfconcordanceprop} If $f_1,f_2$ are self-concordant functions on $K\subseteq \mathbb{R}^n$, then the following functions are also self-concordant: \begin{enumerate}[\upshape (i)] \item $af_1$, for all $a\geq 1$; \item $f_1+f_2$; \item $f_1(Ax+b)$ for all $A\in \mathbb{R}^{n\times m}$, $b \in \mathbb{R}^n$. \end{enumerate} \end{prop} The following theorem is adapted from Theorem~2.4.2, Theorem~2.4.4, and Proposition~2.4.1 in \cite{NN}. One can also find this in Section 11.6 of \cite{Boyd}.
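The canonical example behind these definitions is the log-determinant barrier for the positive semidefinite cone: $f(X)=-\log\det X$ is self-concordant on $\psd{d}$ and satisfies \[ f(tX)=-\log\det(tX)=-\log\bigl(t^{d}\det X\bigr)=f(X)-d\log t , \] so it is a $d$-logarithmically homogeneous self-concordant barrier. The barrier used below for $((\psd{d})_k)^*$ is a sum of such log-determinant terms.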
\begin{theorem}\label{sf} Let $K$ be a proper cone, i.e., $K$ is solid, convex, pointed and closed, in $\mathbb{R}^n$ and let $f$ be a $\theta$-logarithmically homogeneous self-concordant barrier for $K$. Then the Fenchel conjugate $f^*$ of $f$ is a $\theta$-logarithmically homogeneous self-concordant barrier for $-K^*$, i.e., the polar dual of $K$. Moreover, the interior of $-K^*$ is \[\interior(-K^*) =\{\nabla f(x): x\in \interior(K)\} ,\] and \[ f^*(x)+f(y)+ \theta\log (-x^{\scriptscriptstyle\mathsf{T}}y) \geq \theta \log (\theta )-\theta\] where equality holds if and only if $x= t\nabla f(y)$ for some $t>0$. \end{theorem} We prove the following theorem when a self-concordant barrier for the dual cone $(\cone{k})^*$ is available. \begin{theorem} Let $g$ be a $\theta$-logarithmically homogeneous self-concordant barrier of the dual cone $(\cone{k})^*$. Also let $f(Y)=\sum_{(i_1,\dots, i_k)\in {I}(d,k)}g(\Trn{i_1 \cdots i_k}^d(Y)), Y\in \interior((\cone{d}_k)^*)$. Assuming $\nabla f\coloneqq x \mapsto \nabla f(x)$ is invertible, and $(\cone{d}_k)^*$ is a proper cone, the function $F(X)=-\tr(X(\nabla f)^{-1}(-X))-f((\nabla f)^{-1}(-X))$ is a $\theta \card(I(d,k))$-logarithmically homogeneous self-concordant barrier for $\cone{d}_k$. \end{theorem} \begin{proof} We first show $f$ is a $\theta \card({I(d,k)})$-logarithmically homogeneous self-concordant barrier of the dual cone $(\cone{d}_k)^*$, where $\card({I(d,k)})$ is the cardinality of $I(d,k)$. The barrier property follows from the fact that the boundary of $(\cone{d}_k)^*$ consists of those $Y$ for which some $\Trn{i_1 \cdots i_k}^d(Y)$ lies on the boundary of $(\cone{k})^*$. To verify that $f$ is self-concordant, we only need to show that for all $X$ in the interior of $(\cone{d}_k)^*$ and $V\in \mathbb{S}^d$, $\phi(t)= f(X+tV) = \sum_{(i_1,\dots, i_k)\in {I}(d,k)}g(\Trn{i_1 \cdots i_k}^d(X+tV))$ is self-concordant. By Proposition \ref{prop: selfconcordanceprop}, it is enough to show $g(\Trn{i_1 \cdots i_k}^d(X+tV))$ is self-concordant. Since $X$ is in the interior, we know $\Trn{i_1 \cdots i_k}^d(X+tV)$ is indeed in $\interior((\cone{k})^*)$ for all small $t$ and so $g(\Trn{i_1 \cdots i_k}^d(X+tV))$ is self-concordant as $g$ is. This proves $f$ is self-concordant on $(\cone{d}_k)^*$. From the following computation, \begin{align*} f(tY) & =\sum_{(i_1,\dots, i_k)\in {I}(d,k)}g(\Trn{i_1 \cdots i_k}^d(tY))\\ & \marka{=} \sum_{(i_1,\dots, i_k)\in {I}(d,k)}\bigl(g(\Trn{i_1 \cdots i_k}^d(Y)) - \theta \log t\bigr) \\ & = \sum_{(i_1,\dots, i_k)\in {I}(d,k)}g(\Trn{i_1 \cdots i_k}^d(Y))- \card(I(d,k))\,\theta \log t, \end{align*} where (a) is because $g$ is $\theta$-logarithmically homogeneous, we see $f$ is indeed logarithmically homogeneous of degree $\card(I(d,k))\theta$. By Theorem~\ref{sf}, we know that $f^*(X)$ is indeed a $\theta\card(I(d,k))$-logarithmically homogeneous self-concordant barrier for $-\cone{d}_k$. The Fenchel--Young inequality asserts that \[f^*(X)+f(Y) \geq \tr(XY)\] and this becomes an equality if $X=\nabla f(Y)$. Since $\nabla f$ is invertible from $\interior((\cone{d}_k)^*)$ to its image, which from Theorem~\ref{sf} is just $\interior(-\cone{d}_k)= -\interior(\cone{d}_k)$, $\nabla f$ is bijective from the interior of the dual cone to the interior of $-\cone{d}_k$. Thus the notation $(\nabla f)^{-1}$ always makes sense. For $X\in \interior(-\cone{d}_k)$ we have \[f^*(X) = \tr(X(\nabla f)^{-1}(X))-f((\nabla f)^{-1}(X)),\] so $F(X)=f^*(-X)$ and $F(X)$ is indeed a $\theta \card(I(d,k))$-logarithmically homogeneous self-concordant barrier of $\cone{d}_k$.
\end{proof} The condition that $\nabla f\coloneqq x \mapsto \nabla f(x)$ is invertible is satisfied when $f$ has positive definite Hessian (see Lemma \ref{lemma: hessianinvertibelgrad} in the Appendix). This is the case for $(\psd{d})_k$. \begin{lemma} For $2\leq k\leq d$, the function $f(Y)=\sum_{(i_1,\dots, i_k)\in {\comb{[d]}{k}}}- \log (\det(\Trn{i_1 \cdots i_k}^d(Y)))$ is a $k{d\choose k}$-logarithmically homogeneous self-concordant, strictly convex barrier on $((\psd{d})_k)^*$ and has positive definite Hessian on the interior of $((\psd{d})_k)^*$. \end{lemma} \begin{proof} The cone $((\psd{d})_k)^*$ can be easily verified to be proper. We only need to show the Hessian is positive definite, as the other parts follow from the fact that $-\log (\det (S))$ is self-concordant and $k$-logarithmically homogeneous for $S\in \interior(\psd{k})$. The first order approximation of $f$ is \begin{align*} f(Y+\alpha H) & = \sum_{(i_1,\dots, i_k)\in \comb{[d]}{k}} -\log (\det(\Trn{i_1 \cdots i_k}^d(Y+\alpha H)))\\ & = \sum_{(i_1,\dots, i_k)\in \comb{[d]}{k}} -\log (\det(\Trn{i_1 \cdots i_k}^d(Y))) - \alpha \tr (\Trn{i_1 \cdots i_k}^d(Y)^{-1} \Trn{i_1 \cdots i_k}^d(H))+O(\alpha^2). \end{align*} The first order derivative is \[f'(Y)= - \sum_{(i_1,\dots, i_k)\in \comb{[d]}{k}} \Aug{i_1 \cdots i_k}^d (\Trn{i_1 \cdots i_k}^d(Y)^{-1}).\] Now if we approximate the derivative up to the first order, we have \begin{align*} f'(Y+\alpha H) = \sum_{(i_1,\dots, i_k)\in \comb{[d]}{k}}- \Aug{i_1 \cdots i_k}^d (\Trn{i_1 \cdots i_k}^d(Y)^{-1}) + \alpha \Aug{i_1 \cdots i_k}^d (\Trn{i_1 \cdots i_k}^d(Y)^{-1}\Trn{i_1 \cdots i_k}^d(H)\Trn{i_1 \cdots i_k}^d(Y)^{-1}) +O(\alpha^2). \end{align*} Thus we see \begin{align*} D^2f(Y)[H,H] & = \tr\Bigl(\sum_{(i_1,\dots, i_k)\in \comb{[d]}{k}}\Aug{i_1 \cdots i_k}^d (\Trn{i_1 \cdots i_k}^d(Y)^{-1}\Trn{i_1 \cdots i_k}^d(H)\Trn{i_1 \cdots i_k}^d(Y)^{-1})H\Bigr)\\ & = \sum_{(i_1,\dots, i_k)\in \comb{[d]}{k} }\tr( (\Trn{i_1 \cdots i_k}^d(Y)^{-1}\Trn{i_1 \cdots i_k}^d(H)\Trn{i_1 \cdots i_k}^d(Y)^{-1})\Trn{i_1 \cdots i_k}^d(H))\\ & = \sum_{(i_1,\dots, i_k)\in \comb{[d]}{k}}\tr( (\Trn{i_1 \cdots i_k}^d(Y)^{-\frac{1}{2}}\Trn{i_1 \cdots i_k}^d(H)\Trn{i_1 \cdots i_k}^d(Y)^{-\frac{1}{2}})^2), \end{align*} where $D^2f(Y)[H,H]$ denotes the value of the second differential of $f$ taken at $Y$ along the directions $H,H$. The last expression is strictly positive for nonzero $H$ (since $k\geq 2$, every entry of $H$ appears in some $k\times k$ principal submatrix). This means that $f$ is strictly convex and its Hessian is positive definite. \end{proof} \section{Appendix} Here we prove a few results in the main text. We first prove a simple lemma used in proving Theorem \ref{thm:equivalencesdfineq}, which states the equivalence between the standard form and the inequality form of the $k$OCP. \begin{lemma}\label{lemma: geqzerokoc} Suppose $\{\cone{k}\}_{k=1}^\infty$ satisfies the \text{embedding property}\xspace thoroughly. If $x\in \mathbb{F}^{d}$ and $d \geq k$ and $\cone{1}=\mathbb{R}_+$, then \[ \diag(x) \in \cone{d}_k \iff x\geq 0,\] where $x\geq 0$ means each component of $x$ is greater than or equal to $0$. \end{lemma} \begin{proof} If $\diag(x) \in \cone{d}_k$, then \[\diag(x) = \sum_{(i_1,\dots, i_k)\in \comb{[d]}{k}}\Aug{i_1 \cdots i_k}^d (M^{i_1 \cdots i_k}), \quad\text{and} \,M^{i_1 \cdots i_k} \in \cone{k}.\] The \text{embedding property}\xspace and our assumption on $\cone{1}$ imply that the diagonal entries of each $M^{i_1 \cdots i_k}$ are nonnegative. Thus we have $x\geq 0$. Conversely, if $x\geq 0$, we can write \[\diag(x) = \sum_{i=1}^d \diag( x_i e_i)\] where $e_i$ is the $i$th standard basis vector in $\mathbb{F}^d$.
Because of our assumption on $\cone{1}$ and the fact that $\{\cone{k}\}_{k=1}^\infty$ satisfies the \text{embedding property}\xspace thoroughly, each $\diag( x_i e_i) \in \cone{d}_k$, and this is a valid decomposition in $\cone{d}_k$. Thus $\diag(x)\in \cone{d}_k$. \end{proof} We note that the assumption $\cone{1} = \mathbb{R}_+$ entails no loss of generality, since any nonempty one-dimensional cone in $\mathbb{S}^1$ is either $\mathbb{R}_-$ or $\mathbb{R}_+$. The following theorem includes Lemma \ref{lemma: sdd results} as a special case; see items (i) and (vi) of the theorem. The same result can also be found in \cite[Theorems 8 and 9]{boman2005factor}, but we give a different proof. \begin{theorem}\label{thm5} For a matrix $A =[a_{ij}]_{ij}$, denote $M(A)=[\alpha_{ij}]$ where $\alpha_{ii}=a_{ii}$ for all $i$ and $\alpha_{ij}=-|a_{ij}|$ for all $i \ne j$, and $\rho(A)=\max\{|\lambda|: \lambda\; \text{is an eigenvalue of}\; A \}$. The following are all equivalent when $A \in \mathbb{S}^d$. \begin{enumerate}[\upshape (i)] \item $A\in \SDD{d}$; \item $M(A)\in \SDD{d}$; \item there exists $D=\diag(d)$, $d>0$, i.e., elementwise positive, such that $D^{\scriptscriptstyle\mathsf{T}}AD\in\DD{d}$; \item there exists a permutation matrix $P$ such that $P^{\scriptscriptstyle\mathsf{T}}AP\in \SDD{d}$; \item $M(A)= sI-B$ for some $s$ and $B$, where $B$ is a non-negative matrix and $s$ is greater than or equal to the largest absolute value of an eigenvalue of $B$, i.e., $s\geq \rho(B)$; \item $M(A)$ is positive semi-definite. \end{enumerate} \end{theorem} \begin{proof} We begin with the equivalence between (i)--(iv). It directly follows from the definition that (i) and (ii) are equivalent. By multiplying out $D^{\scriptscriptstyle\mathsf{T}} A D$ and examining row by row, one finds that the condition $D^{\scriptscriptstyle\mathsf{T}} A D\in \DD{d}$ is the same as $A\in \SDD{d}$. Thus (iii) is equivalent to (i). The equivalence between (i) and (iv) can also be easily verified from the definition. Next we show that (v) and (vi) are equivalent. First, (v) implies (vi): since $B$ is symmetric and non-negative, $\|B\|_2= \rho(B)$ and $\max_{\|v\|_2=1}v^{\scriptscriptstyle\mathsf{T}}Bv =\rho(B)$, so $\min_{\|v\|_2=1}v^{\scriptscriptstyle\mathsf{T}}M(A)v = s - \max_{\|v\|_2=1}v^{\scriptscriptstyle\mathsf{T}}Bv = s-\rho(B)\geq 0$. Also, (vi) implies (v): if $M(A)$ is positive semi-definite, then all its eigenvalues are non-negative and the largest eigenvalue is positive (the case where $M(A)$ is the zero matrix is trivially true for the implication). Denote the eigenvalues of $M(A)$ by $\lambda_1\geq \lambda_2\geq \dots \geq \lambda_d\geq 0$ (counting multiplicity); then $\lambda_1 I- M(A) \in \psd{d}$. Furthermore, for $B=\lambda_1 I- M(A)$, the diagonal entries $\lambda_1-a_{ii}$ are non-negative because $B$ is positive semi-definite, while the off-diagonal entries are $-\alpha_{ij}=|a_{ij}|\geq 0$, so $B$ is (entrywise) non-negative. We also have $\rho(B)=\lambda_1-\lambda_d\leq \lambda_1$. This shows that (vi) implies (v). Lastly we deduce that (v)--(vi) and (i)--(iv) are equivalent. Suppose $A\in \SDD{d}$, so that $M(A)\in\SDD{d}$ as well; then by characterization (iii) and the fact that diagonally dominant matrices are positive semi-definite, which follows from the Gershgorin circle theorem, we see that $M(A)$ is positive semi-definite. This shows that (i)--(iv) implies (v)--(vi). Conversely, suppose $M(A)=sI- B$ where $B$ is non-negative and $s\geq \rho(B)$.
Since $B$ is symmetric, there always exists a permutation matrix $P$ such that \[ P^{\scriptscriptstyle\mathsf{T}}BP= \begin{bmatrix} B_1 & & &\\ & B_2 & & \\ & & \ddots &\\ & & & B_k \end{bmatrix}, \] where the $B_i$ are irreducible square matrices and $\rho(B_i)\leq \rho(B)\leq s$ for all $i$. Now by the Perron--Frobenius theorem, we know that for each $B_i$ there is an elementwise positive vector $v_i$ such that $B_iv_i =\rho(B_i)v_i$. Then, multiplying $P^{\scriptscriptstyle\mathsf{T}}M(A)P$ on the right by the vector $v=(v_1,\dots, v_k)$, we have $P^{\scriptscriptstyle\mathsf{T}}M(A)Pv = sv - (\rho(B_1)v_1,\dots,\rho(B_k)v_k)\geq 0$. Since $v$ is elementwise positive, this shows (take $D=\diag(v)$ in characterization (iii)) that $P^{\scriptscriptstyle\mathsf{T}}M(A)P\in \SDD{d}$, and so are $M(A)$ and $A$. \end{proof} \begin{lemma}\label{lemma: hessianinvertibelgrad} Suppose $f$ is a real-valued, twice differentiable function defined on an open convex cone $K\subseteq \mathbb{R}^n$. If $f$ has a positive definite Hessian, then $\nabla f$ is an injection. \end{lemma} \begin{proof} For every $x\in K$ and $x+h\in K$, $h \ne 0$, we have \[ h^{\scriptscriptstyle\mathsf{T}}(\nabla f(x+h)-\nabla f(x))=\int_{0}^1 h^{\scriptscriptstyle\mathsf{T}} \nabla^2 f(x+th) h\, dt>0,\] as $\nabla ^2 f$, the Hessian, is positive definite. This means $\nabla f(x+h) \ne \nabla f(x)$, and so $\nabla f$ is injective. \end{proof} \end{document}
\begin{document} \title{On Pesin's entropy formula for dominated splittings without mixed behavior} \author{Dawei Yang\and Yongluo Cao\footnote{D. Yang and Y. Cao would like to thank the support of NSFC 11125103, NSFC 11271152 and A Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD). Y. Cao is the corresponding author.}} \date{} \maketitle \begin{abstract} For $C^1$ diffeomorphisms, we prove that Pesin's entropy formula holds for some invariant measure supported on any topological attractor that admits a dominated splitting without mixed behavior. We also prove Shub's entropy conjecture for diffeomorphisms having such a splitting. \end{abstract} \section{Introduction} Pesin's entropy formula characterizes the relationship between the metric entropy and the Lyapunov exponents: the metric entropy is the integral of the sum of the positive Lyapunov exponents. Sometimes, a measure that satisfies Pesin's entropy formula is called an \emph{SRB} measure when there is at least one positive Lyapunov exponent. We would like to know whether measures satisfying the entropy formula exist for a given system. Many results have been obtained for $C^2$ maps. Due to the absence of distortion bounds, some methods for obtaining SRB measures are not available for $C^1$ maps. However, there are results for $C^1$ maps; see \cite{CaQ01,CCE15,Qiu11,SuT12,Tah02} for instance. In this paper, we consider a topological attractor which admits a dominated splitting without mixed behavior. We show the existence of measures satisfying Pesin's entropy formula for this kind of system. Such a splitting occurs in some natural settings; for instance, if a non-periodic transitive set of a surface diffeomorphism has a non-trivial dominated splitting, then this dominated splitting has no mixed behavior. Let $f$ be a diffeomorphism on a manifold $M$ whose dimension is $d$. For a compact invariant set $\Lambda$, one says that $\Lambda$ admits a \emph{dominated splitting} if there is a continuous invariant splitting $T_\Lambda M=E\oplus F$, and constants $C>0$ and $\lambda\in(0,1)$ such that for any $x\in\Lambda$, any $n\in\NN$, any $u\in E(x)\setminus\{0\}$ and any $v\in F(x)\setminus \{0\}$, we have $$\frac{\|Df^n(u)\|}{\|u\|}\le C\lambda^n\frac{\|Df^n(v)\|}{\|v\|}.$$ We say a dominated splitting $T_\Lambda M=E\oplus F$ has \emph{no mixed behavior} if for any invariant measure $\mu$ supported on $\Lambda$, every Lyapunov exponent of $\mu$ along $E$ is non-positive and every Lyapunov exponent of $\mu$ along $F$ is non-negative. Equivalently, we have that $$\liminf_{n\to\infty}\frac{1}{n}\log\|Df^n|_{E(x)}\|\le 0,~~~\liminf_{n\to\infty}\frac{1}{n}\log\|Df^{-n}|_{F(x)}\|\le 0,~~~\forall x\in\Lambda.$$ \begin{theoremalph}\label{Thm:formula} For a $C^1$ diffeomorphism $f$, if an attractor $\Lambda$ admits a dominated splitting $T_\Lambda M=E\oplus F$ without mixed behavior, then there is a measure $\mu$ supported on $\Lambda$ satisfying Pesin's entropy formula. \end{theoremalph} In a recent paper by Liu and Lu \cite{LiL15}, for a $C^2$ map, they obtained measures satisfying Pesin's entropy formula for a topological attractor which admits a partially hyperbolic splitting without mixed behavior.
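A classical example satisfying the hypotheses of Theorem~\ref{Thm:formula} (recalled here only as an illustration; it is not used in the sequel) is a linear hyperbolic automorphism of the two-torus: let $f$ be the diffeomorphism of $\RR^2/\ZZ^2$ induced by the matrix $A=\left(\begin{array}{cc}2&1\\1&1\end{array}\right)$, and take $\Lambda=\RR^2/\ZZ^2$. If $E$ and $F$ denote the eigendirections of $A$ associated to the eigenvalues $\lambda_s=(3-\sqrt{5})/2<1$ and $\lambda_u=(3+\sqrt{5})/2>1$, then $T_\Lambda M=E\oplus F$ is a dominated splitting, and for every invariant measure the Lyapunov exponent along $E$ equals $\log\lambda_s<0$ while the Lyapunov exponent along $F$ equals $\log\lambda_u>0$, so the splitting has no mixed behavior.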
Cowieson and Young proved the existence of SRB measures \cite[Corollary 1]{CoY05} if $\Lambda$ is an attractor of a $C^\infty$ diffeomorphism $f$ and $\Lambda$ admits a dominated splitting $T_\Lambda M=E\oplus F$ without mixed behavior and $\limsup_{n\to\infty}(1/n)\log\|Df^n|_{F(x)}\|>0$ for any point $x\in\Lambda$. With some additional effort built on the proof of Theorem~\ref{Thm:formula}, we can show that the topological entropy varies upper semi-continuously w.r.t. the diffeomorphism. Thus, by a standard argument, the entropy conjecture is also true for dominated splittings without mixed behavior. The diffeomorphism $f$ naturally induces a map $f_{*,k}:H_k(M,\RR)\to H_k(M,\RR)$ for any $0\le k\le d$, where $H_k(M,\RR)$ is the $k$-th homology group of $M$. Shub conjectured in \cite{Shu74} that for every $C^1$ diffeomorphism $f$, $$\max_{0\le i\le d}{\rm sp}(f_{*,i})\le {\rm h}_{top}(f),$$ where ${\rm sp}(A)$ is the spectral radius of a linear map $A$. \begin{theoremalph}\label{Thm:conjecture} For a $C^1$ diffeomorphism $f$, if $M$ admits a dominated splitting without mixed behavior, then the entropy conjecture is true, i.e., $$\max_{0\le i\le d}{\rm sp}(f_{*,i})\le {\rm h}_{top}(f).$$ \end{theoremalph} Shub's entropy conjecture is still open. However, there are many interesting results about it. We give a partial list: \begin{itemize} \item \cite{Yom87} proved that Shub's conjecture holds for $C^\infty$ maps. \item \cite{RuS75,ShW75} proved the conjecture for Anosov systems and general Axiom A diffeomorphisms. \item \cite{Man75} proved the conjecture for the three-dimensional case. \item \cite{SaX10} proved the conjecture for partially hyperbolic systems with one-dimensional center bundle. \item \cite{LVY13} proved the conjecture for diffeomorphisms away from those with a homoclinic tangency. \item \cite{LiL15} proved the conjecture for diffeomorphisms admitting a partially hyperbolic splitting without mixed behavior. \end{itemize} We notice that the assumption of Theorem~\ref{Thm:conjecture} is not contained in any result listed above. We will also consider asymptotic entropy expansiveness and principal symbolic extensions in Section~\ref{Sec:esti-localentropy}. \section{Definitions and Properties of entropies} In this section, we give the definitions and properties of metric entropy, local entropy and topological entropy. \subsection{Metric entropies} Let $\mu$ be a probability measure. For a finite measurable partition ${\cal B}=\{B_1,B_2,\cdots,B_k\}$, we define $${\rm H}_\mu({\cal B})=\sum_{i=1}^k-\mu(B_i)\log\mu(B_i),$$ and $$\bigvee_{i=0}^{n-1}f^{-i}({\cal B})=\Big\{C:~C=\bigcap_{i=0}^{n-1}f^{-i}(B_{j_i}),~B_{j_i}\in{\cal B}\Big\}.$$ If $\mu$ is an invariant probability measure of a map $f$, the metric entropy of $\mu$ w.r.t. a partition ${\cal B}$ is $${\rm h}_\mu(f,{\cal B})=\lim_{n\to\infty}\frac{1}{n}{\rm H}_{\mu}(\bigvee_{i=0}^{n-1}f^{-i}({\cal B})),$$ and the metric entropy of $\mu$ is $${\rm h}_\mu(f)=\sup_{{\cal B}:~\textrm{partition}}{\rm h}_{\mu}(f,{\cal B}).$$ \begin{Definition} Given a finite partition ${\cal B}=\{B_1,B_2,\cdots,B_n\}$, the \emph{norm} of the partition is $\max_{1\le i\le n}{\rm Diam}(B_i)$. The norm of $\cal B$ is denoted by $\|{\cal B}\|$. Given a measure $\mu$, a partition ${\cal B}$ is called \emph{regular} if $\mu(\partial B)=0$ for any $B\in\cal B$; it is called \emph{$\alpha$-regular} if $\|{\cal B}\|<\alpha$ and it is regular.
{\varepsilon}nd{Definition} By the definition, we have the following lemma: \begin{Lemma}\lambdabel{Lem:finite-approximation} Given a regular partition ${\cal B}$ of a measure $\mu$ of a diffeomorphism $f$, and given $n\in\NN$, for any $\varepsilon>0$, there is $\deltalta>0$ such that for any $g$ which is $\deltalta$-$C^1$-close to $f$, for any invariant measure $\nu$ of $g$ which is $\deltalta$-close to $\mu$ in the weak-$*$ topology, then $$|\frac 1n {\rm H}_{\mu}(\bigvee_{i=0}^{n-1}f^{-i}({\cal B})) - \frac 1n {\rm H}_{\nu}(\bigvee_{i=0}^{n-1}g^{-i}({\cal B}))|<\varepsilon.$$ {\varepsilon}nd{Lemma} The following fundamental results are from \cite[Section 8.2]{Wal82}: \begin{Lemma}\lambdabel{Lem:H} Let $\mu_1,\mu_2,\cdots,\mu_n$ be probability measures and $s_1,s_2,\cdots,s_n$ be non-negative numbers such that $\sum s_i=1$. For any partition ${\cal B}$, we have $$\sum_{i=1}^n s_i{\rm H}_{\mu_i}({\cal B})\le {\rm H}_{\sum_{i=1}^n s_i\mu_i}({\cal B}).$$ {\varepsilon}nd{Lemma} \subsection{Local entropy} We need to define the Bowen balls or dynamical balls in the entropy theory. Given a point $x$ and $\alpha>0$, \begin{itemize} \item the closed ball of radius $\alpha$ at $x$: $B(x,\alpha)=\{y\in M:~d(x,y)\le\alpha\}$; \item $n$-th Bowen ball for $f$: $B_n(x,\alpha,f)=\bigcap_{0\le i\le n-1}f^{-i}(B(f^i(x),\alpha))$; for simplicity, we denote $B_n(x,\alpha)=B_n(x,\alpha,f)$ if there is no confusion; \item bi-$n$-th Bowen ball: $B_{\pm n}(x,\alpha)=\bigcap_{-n+1\le i\le n-1}f^{-i}(B(f^i(x),\alpha))$; \item infinite Bowen ball for $f$: $B_\infty(x,\alpha)=B_{+\infty}(x,\alpha)=\bigcap_{n\in\NN}f^{-n}(B(f^n(x),\alpha))$; for simplicity, we denote $B_\infty(x,\alpha)=B_\infty(x,\alpha,f)$ if there is no confusion; \item bi-infinite Bowen ball: $B_{\pm \infty}(x,\alpha)=\bigcap_{n\in\ZZ}f^{-n}(B(f^n(x),\alpha))$ {\varepsilon}nd{itemize} \begin{Definition}[Local entropy] For a compact set $\Gammamma$ (not necessarily invariant), for $n\in\NN$ and $\deltalta$, a finite set $P\subset \Gammamma$ is called an $(n,\deltalta)$-spanning set for $f$ (or $(n,\deltalta,f)$-spanning set) if $P\cap B_n(x,\deltalta)$ is not empty for any $x\in\Gammamma$. The minimal cardinality of all $(n,\deltalta)$-spanning set is denoted by $r_n(\Gammamma,\deltalta)$. Then one can define the entropy of $\Gammamma$ by $${\rm h}(f,\Gammamma)={\rm h}(f|_\Gammamma)=\lim_{\deltalta\to 0}\limsup_{n\to\infty} \frac{1}{n} \log r_n(\Gammamma,\deltalta).$$ When $\Gammamma$ is a compact invariant set, we also call ${\rm h}(f|_\Gammamma)$ the topological entropy of $f$ on $\Gammamma$. Sometimes, one denotes it by ${\rm h}_{top}(f|_\Gammamma)$. We then define the local entropy of the scale $\alpha$ for a compact set $\Gammamma$ by $${\rm h}_{\alpha}(f|_\Gammamma)=\sup_{x\in\Gammamma}{\rm h}(f,B_{\infty}(x,\alpha)).$$ {\varepsilon}nd{Definition} One has the following lemma for spanning sets from Bowen \cite[Lemma 2.1]{Bow72}. \begin{Lemma}\lambdabel{Lem:product-spanning} Assume that $\Gammamma$ is a compact set and $\varepsilon>0$. Let $0=t_0<t_1<t_2<\cdots<t_r=n$ be integers. 
If $P_i$ is a $(t_{i+1}-t_i,\varepsilon)$-spanning set of $f^{t_i}(\Gammamma)$ for any $0\le i\le r-1$, then $$r_n(\Gammamma,2\varepsilon)\le\prod_{i=0}^{r-1}\# P_i.$$ {\varepsilon}nd{Lemma} By using the definition, we have \begin{Lemma}\lambdabel{Lem:entropy-nStep} Given any $\alpha>0$, for any $x\in M$ and any $m\in\NN$, we have $${\rm h}(f,B_{\pm\infty}(x,\alpha))={\rm h}(f,B_{\pm\infty}(f^m(x),\alpha)).$$ {\varepsilon}nd{Lemma} \begin{proof} For any $\varepsilon>0$, let us fix an $\varepsilon/4$-dense set in $M$ whose cardinality is $N_\varepsilon$. Thus for any compact set $\Gammamma$, there is a $(1,\varepsilon)$-spanning set whose cardinality is at most $N_\varepsilon$. For any $n\in\NN$, by Lemma~\ref{Lem:product-spanning}, we have $$r_{m+n}(B_{\pm\infty}(x,\alpha),2\varepsilon)\le N_\varepsilon^m r_n(B_{\pm\infty}(f^m(x),\alpha),\varepsilon).$$ On the other hand, for any $\varepsilon>0$ and any $n\in\NN$, if $P_{m+n}$ is an $(n+m,\varepsilon)$-spanning set for $f$ of $B_{\pm\infty}(x,\alpha)$ satisfying $\# P_{m+n}=r_{m+n}(B_{\pm\infty}(x,\alpha),\varepsilon)$, then $f^m(P_{m+n})$ is an $(n,\varepsilon)$-spanning set for $f$ of $B_\infty(f^m(x),\alpha)$. Hence, we have $$r_{m+n}(B_{\pm\infty}(x,\alpha),\varepsilon)=\# f^m(P_{m+n})\ge r_n(B_{\pm\infty}(f^m(x),\alpha),\varepsilon).$$ By taking the limits, one can get the conclusion. {\varepsilon}nd{proof} \subsection{Local entropies for $f$ and $f^{-1}$} In this subsection, we need to prove the following proposition. We borrow some ideas from \cite[Proposition 2.5]{LVY13}. \begin{Proposition}\lambdabel{Pro:inverse-entropy} For any ergodic measure $\mu$, there is a full $\mu$-measure set $R$ such that $$\sup_{x\in R}{\rm h}(f,B_{\pm\infty}(x,\alpha))=\sup_{x\in R}{\rm h}(f^{-1},B_{\pm\infty}(x,\alpha)).$$ {\varepsilon}nd{Proposition} \begin{proof} In fact, since $\mu$ is ergodic, one can take $$R=\{x:~\frac{1}{n}\sum_{i=0}^{n-1}\deltalta_{f^i(x)}\to\mu,~\frac{1}{n}\sum_{i=0}^{n-1}\deltalta_{f^{-i}(x)}\to\mu\}.$$ In this proposition, the situation for $f$ and $f^{-1}$ is symmetric. Without loss of generality, one can assume that there is a point $x_0\in R$ such that $${\rm h}(f,B_{\pm\infty}(x_0,\alpha))>\sup_{x\in R}{\rm h}(f^{-1},B_{\pm\infty}(x,\alpha)).$$ Thus, one can find two numbers $a_1>a_2$ such that $${\rm h}(f,B_{\pm\infty}(x_0,\alpha))>a_1>a_2>\sup_{x\in R}{\rm h}(f^{-1},B_{\pm\infty}(x,\alpha)).$$ Recall the definition of the local entropy, there is $\varepsilon_0>0$ small such that for any $\varepsilon\in(0,\varepsilon_0)$, we have $$\limsup_{n\to\infty}\frac{1}{n}\log r_n(B_{\pm\infty}(x_0,\alpha),\varepsilon)>a_1.$$ In other words, there is a sequence of integers $\{n_i\}$ such that $$r_{n_i}(B_{\pm\infty}(x_0,\alpha),\varepsilon)>{\rm e}^{a_1 n_i}.$$ For this $\varepsilon_0>0$, we choose a finite set $P_0$ that $\varepsilon_0/8$-dense in $M$. Thus, for any compact subset $\Gammamma$ of $M$, there is a $\varepsilon_0/2$-dense set in $\Gammamma$, whose cardinality is at most $\# P_0$. Take $$\mu_{n_i}=\frac{1}{n}\sum_{j=0}^{n-1}\deltalta_{f^j(x_0)}.$$ Since $\mu$ is ergodic, we have that $\mu_{n_i}\to\mu$ as $i\to\infty$. For $\mu$, for each $n\in\NN$, one can find $\varepsilon_n\ll\varepsilon_0$ such that if we define the set $R_n$ as $$R_n=\{x\in R:~r_m(B_{\pm\infty}(x,\alpha),\varepsilon_n/4,f^{-1})<{\rm e}^{a_2 m},~~~\forall m\ge n\},$$ then we have \begin{itemize} \item$\mu(\cup_{n\in\NN}R_n)=1$. \item $\{R_n\}$ is an increasing sequence of measurable sets. 
{\varepsilon}nd{itemize} Then one can choose an increasing sequence of {\varepsilon}mph{compact} sets $\{\Lambdambda_n\}$ such that \begin{itemize} \item $\Lambdambda_n\subset R_n$ for each $n\in\NN$. \item $\mu(\cup_{n\in\NN}\Lambdambda_n)=1$. {\varepsilon}nd{itemize} Now we fix some $n$ that is probably large enough. For any $x\in\Lambdambda_n$, let $P_n(x)$ be an $(n,\varepsilon_n/4,f^{-1})$-spanning set of $B_{\pm\infty}(x,\alpha)$ such that $\# P_n(x)<{\rm e}^{a_2 n}$. Then $$U_n(x)=\bigcup_{z\in P_n(x)}B_n(z,\varepsilon_n/2,f^{-1})$$ is a neighborhood of $B_{\pm\infty}(x,\alpha)$. We have the following observations. \begin{itemize} \item $f^{-n}P_n(x)$ is a $(n,\varepsilon_n/4)$-spanning set of $B_{\pm\infty}(f^{-n}(x),\alpha)$ for $f$ for any $x\in R_n$. It is clear that $\# f^{-n}P_n(x)<{\rm e}^{a_2 n}$. \item $\mu(f^{-n}(\Lambdambda_n))=\mu(\Lambdambda_n)$ {\varepsilon}nd{itemize} Now we choose a smaller neighborhood $V_n(x)\subset U_n(x)$ of $x$ and an integer $N_n(x)$ such that for any $y\in V_n(x)$, we have $$B_{\pm N_n(x)}(y,\alpha)\subset U_n(x).$$ By the definition of $U_n(x)$, we have that for any $y\in V_n(x)$, $B_{\pm N_n(x)}(y,\alpha)$ is $(n,\varepsilon_n/2,f^{-1})$-spanned by $P_n(x)$. As a corollary, we have that $$\{V_n(x)\}_{x\in\Lambdambda_n}$$ is an open covering of $\Lambdambda_n$. Thus, there are finitely points $\{x_1,x_2,\cdots,x_k\}\subset\Lambdambda_n$ such that $$W_n=\bigcup_{1\le j\le k}V_n(x_j)\supset\Lambdambda_n.$$ Consequently, $$\lim_{n\to\infty}\mu(W_n)=1.$$ Take $$H(n)=\max\{n,N_n(x_1),\cdots,N_n(x_k)\}.$$ We have that $$\lim_{i\to\infty}\mu_{n_i}(f^{-n}(W_n))=\lim_{i\to\infty}\mu_{n_i}(f^{-n}(W_n))\ge\mu(f^{-n}(W_n))=\mu(W_n)\ge\mu(\Lambdambda_n).$$ Now we consider the positive iteration of $x_0$. For $n_i$ large, we find a sequence of times $0=\iota_0<\iota_1<\cdots<\iota_L=n_i$ by the following way inductively: \begin{itemize} \item if $f^{\iota_j}\in f^{-n}(W_n)$ and $\iota_j\in[H(n),n_i-H(n)]$, then $\iota_{j+1}=\iota_j+n$; \item otherwise, one takes $\iota_{j+1}=\iota_j+1$. {\varepsilon}nd{itemize} Let $$A_n=\{\iota_j:~f^{\iota_j}(x_0)\in f^{-n}(W_n),~\iota_j\in[H(n),n_i-H(n)]\},~~~B_n=\{\iota_i\}_{0\le j\le L}\setminus A_n.$$ We have the following properties: \begin{itemize} \item if $\iota_j\in A_n$, then there is some $k_j\in\{1,2,\cdots,k\}$ such that $f^{\iota_j+n}(x_0)\in V(x_{k_j})$, and $$f^{\iota_j+n}(B_{\pm n_i}(x_0,\alpha))\subset B_{\pm H(n)}(f^{\iota_j+n}(x_0),\alpha)\subset U_n(x_{k_j}).$$ {\varepsilon}nd{itemize} This implies that $f^{\iota_j}(B_{\pm n_i}(x_0,\alpha))$ is $(n,\varepsilon_n/4,f)$-spanned by $f^{-n}P_n(x_{k_j})$ By Lemma~\ref{Lem:product-spanning}, we have $$r_{n_i}(B_{\pm n_i}(x_0,\alpha),\varepsilon_0/2)\le\left(\prod_{\iota_j\in A_n}\# f^{-n}P_n(x_{k_j})\right)\cdot (\# P_0)^{\# B_n}\le {\rm e}^{a_2n\#A_n}\cdot (\# P_0)^{\#B_n},$$ By definitions, we have \begin{itemize} \item $n\#A_n\le n_i$, \item $\#B_n\le \#\{0\le j<n_i:~f^j(x_0)\notin f^{-n}(W_n)\}+2H(n)\le (1-\mu_{n_i}(W_n))n_i+2H(n)$. {\varepsilon}nd{itemize} This implies that \begin{eqnarray*} r_{n_i}(B_{\pm n_i}(x_0,\alpha),\varepsilon_0/2)&\le&{\rm e}^{a_2 n_i}\cdot (\# P_0)^{(1-\mu_{n_i}(W_n))n_i+2H(n)}. {\varepsilon}nd{eqnarray*} Thus, $$\frac{1}{n_i}\log r_{n_i}(B_{\pm n_i}(x_0,\alpha),\varepsilon_0/2)\le a_2 +[(1-\mu_{n_i}(W_n))+2\frac{H(n)}{n_i}]\log\# P_0.$$ For fixed $n$, we have that $H(n)$ is much smaller than $n_i$. 
By taking $n_i\to\infty$, we have $$\limsup_{n_i\to\infty}\frac{1}{n_i}r_{n_i}(B_{\pm \infty}(x_0,\alpha),\varepsilon_0/2)\le\limsup_{n_i\to\infty}\frac{1}{n_i}r_{n_i}(B_{\pm n_i}(x_0,\alpha),\varepsilon_0/2)\le a_2+(1-\mu(W_n))\log\# P_0.$$ Then by asking $n\to\infty$, $$\limsup_{n_i\to\infty}\frac{1}{n_i}r_{n_i}(B_{\pm \infty}(x_0,\alpha),\varepsilon_0/2)\le a_2+\lim_{n\to\infty}(1-\mu(W_n))\log\# P_0=a_2<a_1.$$ We get a contradiction. The proof is complete. {\varepsilon}nd{proof} In fact, we have the following more accurate characterization. \begin{Proposition}\lambdabel{Pro:almostevery} For any ergodic measure $\mu$, there is a constant $H$ such that for $\mu$-a.e. $x$, we have $$H={\rm h}(f,B_{\pm\infty}(x,\alpha))={\rm h}(f^{-1},B_{\pm\infty}(x,\alpha)).$$ {\varepsilon}nd{Proposition} For proving Proposition~\ref{Pro:almostevery}, we need to adapt the definition of the entropy. For a compact invariant set $\Gammamma$, given $n\in\NN$ and $\varepsilon>0$, a subset $P$ of $\Gammamma$ is called an $(n,\varepsilon)$-separated set of $\Gammamma$ if for any $x,y\in P$, $d_n(x,y)>\varepsilon$. Denote by $s_n(\Gammamma,\varepsilon)$ the largest cardinality for any $(n,\varepsilon)$-subset of $\Gammamma$. By summarizing \cite[Chapter 7.2]{Wal82}, we have \begin{itemize} \item $r_n(\Gammamma,\varepsilon)\le s_n(\Gammamma,\varepsilon)\le r_n(\Gammamma,\varepsilon_2)$ for any $n$ and any $\varepsilon$. \item ${\rm h}(f,\Gammamma)=\lim_{\varepsilon\to0}\limsup_{n\to\infty}(1/n)\log s_n(\Gammamma,\varepsilon)=\lim_{\varepsilon\to0}\limsup_{n\to\infty}(1/n)\log r_n(\Gammamma,\varepsilon)$. {\varepsilon}nd{itemize} We need to modify the definition of $s_n$ to $\bar s_n$ by the following way: for a compact invariant set $\Gammamma$, given $n\in\NN$ and $\varepsilon>0$, a subset $P$ of $\Gammamma$ is called an {\varepsilon}mph{closed $(n,\varepsilon)$-separated set} of $\Gammamma$ if for any $x,y\in P$, $d_n(x,y)\ge\varepsilon$. Denote by $\bar s_n(\Gammamma,\varepsilon)$ the largest cardinality for any closed $(n,\varepsilon)$-separated set of $\Gammamma$. By using the definitions, we have the following properties: \begin{itemize} \item Given $\varepsilon>0$ and $n\in\NN$, we have $$s_n(\Gammamma,\varepsilon)\le \bar s_n(\Gammamma,\varepsilon)\le s_n(\Gammamma,\varepsilon/2).$$ \item ${\rm h}(f,\Gammamma)=\lim_{\varepsilon\to0}\limsup_{n\to\infty}(1/n)\log \bar s_n(\Gammamma,\varepsilon)$. {\varepsilon}nd{itemize} Now we can give the proof of Proposition~\ref{Pro:almostevery}. \begin{proof}[Proof of Proposition~\ref{Pro:almostevery}] For fixed $\alpha>0$, we define the local entropy function $$H(x)={\rm h}(f,B_{\pm\infty}(x,\alpha)).$$ We need to verify that $H(x)$ is measurable. After that, by Lemma~\ref{Lem:entropy-nStep}, we have $H(x)=H(f(x))$, and then by the ergodicity of $\mu$, we have that $H(x)$ is constant for $\mu$-a.e. $x$. By the same reason, we have that ${\rm h}(f^{-1},B_{\pm\infty}(x,\alpha))$ is also a constant for $\mu$-a.e. $x$. Then by Proposition~\ref{Pro:inverse-entropy}, one can conclude this proposition. To verify $H$ is a $\mu$-measurable, it is enough to check that given $\varepsilon>0$, the set $$L_a=\{x:~\limsup_{n\to\infty}\frac{1}{n}\log \bar s_n(B_{\pm\infty}(x,\alpha),\varepsilon)>a\}$$ is $\mu$-measurable for any $a>0$. 
We have that $$L_a=\bigcup_{k\in\NN}\bigcap_{n\in\NN}\bigcup_{m\ge n}\{x:~\bar s_m(B_{\pm\infty}(x,\alpha),\varepsilon)\ge {\rm e}^{(a+1/k) m}\}.$$ Thus it is enough to show that the set $$L_{a,m}=\{x:~\bar s_m(B_{\pm\infty}(x,\alpha),\varepsilon)\ge{\rm e}^{a m}\}$$ is $\mu$-measurable for any $a>0$ and $m\in\NN$. This can be deduced the fact that $L_{a,m}$ is closed by the upper semi continuity of bi-infinity Bowen balls. In fact, assume that there is a sequence $\{x_n\}\subset L_{a,m}$ such that $\lim_{n\to\infty}x_n=x$, we need to show that $x\in L_{a,m}$. For each $x_n$, there is a closed $(m,\varepsilon)$-separated set $P_n=\{y_n^1,y_n^2,\cdots,y_n^{N_n}\}$ contained in $B_{\pm\infty}(x_n,\alpha)$ whose cardinality is $N_n=\# P_n\ge [{\rm e}^{a m}]$. By taking a subsequence if necessary, one can assume that for any $1\le j\le [{\rm e}^{a m}]$, $\lim_{n\to\infty}y_n^j=y^j\in B_{\pm\infty}(x,\alpha)$. Moreover, we have that for any $1\le i<j\le [{\rm e}^{a m}]$, $d_m(y^i,y^j)\ge \varepsilon$. This implies that there is a closed $(m,\varepsilon)$-separated set contained in $B_{\pm\infty}(x,\alpha)$ whose cardinality is at least $[{\rm e}^{am}]$, and hence ${\rm e}^{am}$. Consequently, $x\in L_{a,m}$. The proof is complete now. {\varepsilon}nd{proof} \section{Upper semi continuity of entropies} In this section, we will mainly prove the upper semi continuity of the metric entropy w.r.t. invariant measures. Actually, we prove the following stronger result. \begin{Theorem}\lambdabel{Thm:usc} Assume that a compact invariant set $\Lambdambda$ admits a dominated splitting $T_\Lambdambda M=E\oplus F$ without mixed behavior. If there is a sequence of diffeomorphisms $\{f_n\}$ and a sequence of invariant measures $\mu_n$ such that each $\mu_n$ is an invariant measure of $f_n$ and supported on a compact invariant set $\Lambdambda_n$ of $f_n$, and $$\lim_{n\to\infty}f_n=f,~~~\lim_{n\to\infty}\mu_n=\mu,~~~\limsup_{n\to\infty}\Lambdambda_n\subset\Lambdambda,$$ then $$\limsup_{n\to\infty}{\rm h}_{\mu_n}(f_n)\le {\rm h}_{\mu}(f).$$ {\varepsilon}nd{Theorem} We first give some consequences of Theorem~\ref{Thm:usc}, and then give its proof. \subsection{Consequences of Theorem~\ref{Thm:usc}} One says that the entropy function is {\varepsilon}mph{upper semi continuous w.r.t. the measures} if for any measure $\mu$ and any sequence of measures $\mu_n$ such that $\lim_{n\to\infty}\mu_n=\mu$, then $\limsup_{n\to\infty}{\rm h}_{\mu_n}(f)\le {\rm h}_{\mu}(f)$. \begin{Corollary}\lambdabel{Cor:usc} Assume that a compact invariant set $\Lambdambda$ admits a dominated splitting $T_\Lambdambda M=E\oplus F$ without mixed behavior. Then the metric entropy is upper semi continuous w.r.t. the measures. {\varepsilon}nd{Corollary} The corollary can be deduced from Theorem~\ref{Thm:usc} directly. The upper semi continuity of the entropy function can be applied in thermodynamical formalism. For any continuous function $\varphi$, the pressure of $\varphi$ is defined by $$P(\varphi)=\sup_{\mu~\textrm{invariant}}\{{\rm h}_\mu(f)+\int \varphi{\rm d}\mu\}.$$ A measure $\mu$ is called an {\varepsilon}mph{equilibrium state} of $\varphi$ if $P(\varphi)={\rm h}_\mu(f)+\int \varphi{\rm d}\mu$. By the upper semi continuity, we have the following corollary directly: \begin{Corollary}\lambdabel{Cor} Assume that a compact invariant set $\Lambdambda$ admits a dominated splitting $T_\Lambdambda M=E\oplus F$ without mixed behavior. Then every continuous function of $\Lambdambda$ has an equilibrium state on $\Lambdambda$. 
{\varepsilon}nd{Corollary} Another corollary is the upper semi continuity of topological entropy w.r.t. the diffeomorphisms. \begin{Corollary}\lambdabel{Cor:usc-top} Assume that a compact invariant set $\Lambdambda$ admits a dominated splitting $T_\Lambdambda M=E\oplus F$ without mixed behavior. If $f_n\to f$ as $n\to\infty$ in the $C^1$ topology and $\Lambdambda_{f_n}$ is a compact invariant set of $f_n$ satisfying $\limsup_{n\to\infty}\Lambdambda_{f_n}\subset\Lambdambda$, then $$\limsup_{n\to\infty}{\rm h}_{top}(f_n|_{\Lambdambda_{f_n}}) \le {\rm h}_{top}(f |_{\Lambdambda_f}).$$ {\varepsilon}nd{Corollary} \begin{proof} For each $n$, we take an ergodic measure $\mu_n$ supported on $\Lambdambda_{f_n}$ such that $${\rm h}_{\mu_n}(f_n|_{\Lambdambda_{f_n}})>{\rm h}_{top}(f_n|_{\Lambdambda_{f_n}})-\frac 1n.$$ By taking a subsequence if necessary, we assume that $\lim_{n\to\infty}\mu_n=\mu$ for some invariant measure $\mu$ of $f$. By Theorem~\ref{Thm:usc}, we have that $${\rm h}_{top}(f|_{\Lambdambda_f})\ge {\rm h}_{\mu}(f|_{\Lambdambda_f}) \ge \limsup_{n\to\infty}{\rm h}_{\mu_n}(f_n|_{\Lambdambda_{f_n}})=\limsup_{n\to\infty}\left({\rm h}_{\mu_n}(f_n|_{\Lambdambda_{f_n}})+\frac 1n\right)=\limsup_{n\to\infty}{\rm h}_{top}(f_n|_{\Lambdambda_{f_n}}).$$ {\varepsilon}nd{proof} \begin{proof}[Proof of Theorem~\ref{Thm:conjecture}] Now we consider a diffeomorphism $f$ such that $M$ admits a dominated splitting without mixed behavior. There is a neighborhood $\cal U$ of $f$ such that any $g\in\cal U$ is isotropic to $f$. Thus we have $$\max_{0\le i\le d}{\rm sp}(f_{*,i})= \max_{0\le i\le d}{\rm sp}(g_{*,i}),$$ For any $\varepsilon>0$, we choose a $C^\infty$ diffeomorphism $g\in\cal U$ such that by applying Yomdin's result \cite{Yom87}, we have $${\rm h}_{top}(f)>{\rm h}_{top}(g)-\varepsilon\ge \max_{0\le i\le d}{\rm sp}(g_{*,i})-\varepsilon=\max_{0\le i\le d}{\rm sp}(f_{*,i})-\varepsilon.$$ Then by the arbitrariness of $\varepsilon$, one can complete the proof. {\varepsilon}nd{proof} \subsection{Uniformity on dominated splittings without mixed behavior} \begin{Lemma}\lambdabel{Lem:uniformity} Assume that $\Lambdambda$ admits a dominated splitting without mixed behavior. Then for any $\beta>0$, there is $N=N(\beta)\in\NN$ and a neighborhood $\cal U$ of $f$ such that for any $g\in\cal U$ and a neighborhood $U$ of $\Lambdambda$, for any compact invariant set $\Lambdambda_g$ of $g$ that is contained in $U$, we have that $\Lambdambda_g$ admits a dominated splitting $$T_{\Lambdambda_g}M=E_g\oplus F_g,$$ and $$\|Dg^N|_{E_g(x)}\|\le (1+\beta)^N,~~~\|Dg^{-N}|_{F_g(x)}\|\le (1+\beta)^N.$$ {\varepsilon}nd{Lemma} \begin{proof} By the main techniques in \cite{Cao03}, for $\beta/2$, there is $N>0$ such that for any $x\in\Lambdambda$, we have $$\|Df^N|_{E(x)}\|\le (1+\beta/2)^N,~~~\|Df^{-N}|_{F(x)}\|\le (1+\beta/2)^N.$$ Thus there is a neighborhood $\cal U$ of $f$ such that for any $g\in\cal U$, if a compact invariant set $\Lambdambda_g$ of $g$ is contained in a small neighborhood of $\Lambdambda$, then $\Lambdambda_g$ has a dominated splitting $T_{\Lambdambda_g}M=E_g\oplus F_g$. By shrinking $\cal U$ and $U$ if necessary, we have that $E_g$ and $F_g$ are close to $E$ and $F$, respectively. 
Thus for any $x\in\Lambdambda_g$, we have $$\|Dg^N|_{E_g(x)}\|\le (1+\beta)^N,~~~\|Dg^{-N}|_{F_g(x)}\|\le (1+\beta)^N.$$ {\varepsilon}nd{proof} \subsection{The plaque family theorem and Pliss Lemma} For dominated splittings, we have local invariant center stable manifolds and local invariant center unstable manifolds \cite[Theorem 5.5]{HPS77}. For $i\in\NN$, denote by $D^i(1)$ be the unit ball in $\RR^i$ and ${\rm Emb}(D^i(1),M)$ is the space of $C^1$ embeddings from $D^i(1)$ to $M$. \begin{Lemma}\lambdabel{Lem:plaque} Let $\Lambdambda$ be a compact invariant set with a dominated splitting $T_\Lambdambda M=E\oplus F$, where $\operatorname{dim} E=i$. Then there is a neighborhood $\cal U$ of $f$ and a neighborhood $U$ of $\Lambdambda$ such that for any $g\in\cal U$, for any compact invariant set $\Lambdambda_g$ contained in $U$, denoting the dominated splitting of $\Lambdambda_g$ by $E_g\oplus F_g$, then there is a map $\Thetaeta_g:\Lambdambda_g\to {\rm Emb}(D^i(1),M)$ such that when one denotes $W^{E_g}_\varepsilon(x,g)=\Thetaeta_g(x)(D^i(\varepsilon))$, we have \begin{itemize} \item Invariance: for any $\varepsilon>0$, there is $\deltalta>0$ such that for any $x\in\Lambdambda_g$, we have $g(W^{E_g}_\deltalta(x,g))\subset W^{E_g}_\varepsilon(g(x),g)$. \item Tangency: for any $x\in\Lambdambda_g$, we have $T_x W^{E_g}_\varepsilon(x,g)=E_g(x)$. \item Continuity: when $g_n\to g$ as $n\to\infty$, $x_n\in\Lambdambda_{g_n}$ such that $x_n\to x\in\Lambdambda$, then $W^{E_{g_n}}(x_n,g_n)\to W^{E_g}(x,g)$. {\varepsilon}nd{itemize} We can also get the manifolds $\{W^F(x,g)\}_{x\in\Lambdambda_g}$ tangent to $F_g$. {\varepsilon}nd{Lemma} \begin{Remark} We notice that the continuity is not stated in the original version of the plaque family theorem. From the proof of the plaque family theorem, one can know this property. {\varepsilon}nd{Remark} We have the following version of Pliss lemma \cite{Pli72} that is useful to get uniform estimations in some non-uniform setting. Recall that ${\rm m}(A)$ is the mini-norm of a linear isomorphism $A$, i.e., ${\rm m}(A)=\inf_{\|v\|=1}\|Av\|$. \begin{Lemma}\lambdabel{Lem:pliss} Assume that $\Lambdambda$ is a compact invariant set with a dominated splitting $T_\Lambdambda M=E\oplus F$. Given $N\in\NN$ and $\lambdambda_1>\lambdambda_2>1$ such that for any $x\in\Lambdambda$, if $$\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\log {\rm m}(Df^N|_{F(f^{iN}(x))})\ge \lambdambda_1,$$ then there is a point $y$ in the positive orbit of $x$, we have for any $n\in\NN$ $$\frac{1}{n}\sum_{i=0}^{n-1}\log {\rm m}(Df^N|_{F(f^{iN}(y))})\ge \lambdambda_2.$$ {\varepsilon}nd{Lemma} We have the following estimations on centre-unstable manifolds. The proof is a simple application of the mean value theorem, hence omitted. \begin{Lemma}\lambdabel{Lem:expansion} Assume that $\Lambdambda$ is a compact invariant set with a dominated splitting $T_\Lambdambda M=E\oplus F$. 
Given $n\in\NN$, for $\lambdambda_1>\lambdambda_2>1$, there are $C=C(\lambdambda_1,\lambdambda_2)$ and $\alpha_0=\alpha_0(\lambdambda_1,\lambdambda_2)$, for any $x\in\Lambdambda$ satisfying $$\prod_{i=0}^{n-1}{\rm m}(Df^N|_{F(f^{iN}(y))})\ge \lambdambda_1^n,~~~\forall n\in\NN$$ for any $y$ and for any $n\in\NN$ such that $$f^{\varepsilon}ll(y)\in W^F_{\alpha_0}(f^{\varepsilon}ll(x)), ~~~\forall 0\le{\varepsilon}ll \le n,$$ then we have $$d(f^n(x),f^n(y))\ge C\lambdambda_2^n d(x,y).$$ {\varepsilon}nd{Lemma} \subsection{The entropy of a plaque} \begin{Lemma}\lambdabel{Lem:entropy-plaque} Let $\Lambdambda$ be a compact invariant set that admits a dominated splitting $T_\Lambdambda M=E\oplus F$ without mixed behavior. For any $\varepsilon>0$, there is a neighborhood $\cal U$ of $f$ and a neighborhood $U$ of $\Lambdambda$ and $\alpha>0$ such that for any $g\in\cal U$ and for any point $x\in\Lambdambda_g\subset U$, we have $${\rm h}(g|_{W^E_\alpha(x)})\le\varepsilon,~~~{\rm h}(g^{-1}|_{W^F_\alpha(x)})\le\varepsilon.$$ {\varepsilon}nd{Lemma} \begin{proof} We only prove the case for $W^E$. The result for $W^F$ will be symmetric. Given $\varepsilon>0$, we take $\beta>0$ such that $(\operatorname{dim} E)\log(1+2\beta)<\varepsilon.$ By Lemma~\ref{Lem:uniformity}, there is $N=N(\beta)\in\NN$ and a neighborhood $\cal U$ of $f$ and a neighborhood $U$ of $\Lambdambda$ such that for any $g\in\cal U$, for any compact invariant set $\Lambdambda_g$ of $g$ in $U$, we have that $\Lambdambda_g$ admits a dominated splitting $$T_{\Lambdambda_g}M=E_g\oplus F_g,$$ Now we have that for any $x\in\Lambdambda_g$, $$\|Dg^{N}|_{E_g(x)}\|\le (1+\beta)^N.$$ Thus one can choose $\alpha>0$ small such that for any $z\in W^{E_g}_{\alpha}(x)$, we have $$\|Dg^N|_{T_z W^{E_g}(x)}\|\le (1+2\beta)^N.$$ Thus, if we take $C=C(\beta)$ to be $$C=\max\{(1+2\beta)\|Df\|,(1+2\beta)^2\|Df^2\|,\cdots,(1+2\beta)^{N-1}\|Df^{N-1}\|\}+2.$$ Then we have for any $z\in W^{E_g}_{\alpha}(x)$ and any $n\in\NN$, if $g^{{\varepsilon}ll}(z)\subset W^{E_g}_\alpha(g^{\varepsilon}ll(x))$ for any $0\le {\varepsilon}ll\le n-1$, then we have that $$\|Dg^n|_{T_z W^{E_g}(x)}\|\le C(1+2\beta)^n.$$ Fix $\deltalta>0$. By using the Mean Value Theorem, for any $y,z\in W^{E_g}_{\alpha}(x)$ satisfying $d(y,z)<\deltalta$, when $f^{\varepsilon}ll(y),f^{{\varepsilon}ll}(z) \in W^{E_g}_{\alpha}(f^{{\varepsilon}ll}x)$ for any $1\le{\varepsilon}ll\le n-1$, there is $\xi_n\in W^{E_g}_{\alpha}(x)$ such that $$d(g^n(y),g^n(z))\le \|Dg^n|_{T_{\xi_n} W^{E_g}(x)}\|d(y,z)\le C(1+2\beta)^n d(y,z).$$ Thus, the $n$-th Bowen ball $B_n(y,\deltalta)$ contains a ball of radius $\deltalta/C(1+2\beta)^n$. We consider the volume of the ball $B_n(y,\deltalta)$, then we have $${\rm Volume}(B_n(y,\deltalta))\ge \frac{\deltalta^{\operatorname{dim} E}}{C^{\operatorname{dim} E}(1+2\beta)^{n \operatorname{dim} E}}.$$ Thus, there are at most $$\left[\frac{C^{\operatorname{dim} E}\alpha^{\operatorname{dim} E}(1+2\beta)^{n\operatorname{dim} E}}{\deltalta^{\operatorname{dim} E}}\right]+1$$ disjoint $n$-th Bowen balls contained in $W^{E_g}_{\alpha}(x)$. This implies the entropy is bounded by $$\lim_{\deltalta\to0}\limsup_{n\to\infty}\frac{1}{n}\log\frac{C^{\operatorname{dim} E}\alpha^{\operatorname{dim} E}(1+2\beta)^{n\operatorname{dim} E}}{\deltalta^{\operatorname{dim} E}}\le (\operatorname{dim} E)\log(1+2\beta)<\varepsilon.$$ {\varepsilon}nd{proof} \subsection{Estimation of the local entropy}\lambdabel{Sec:esti-localentropy} We need the following lemma for local entropy. 
\begin{Lemma}\lambdabel{Lem:local} Assume that a compact invariant set $\Lambdambda$ admits a dominated splitting $T_\Lambdambda M=E\oplus F$ without mixed behavior. Then for any $\varepsilon>0$, there is $\alpha>0$ and a neighborhood ${\cal U}$ of $f$ and a neighborhood $U$ of $\Lambdambda$ such that for any $g\in \cal U$ and any compact invariant set $\Lambdambda_g\subset U$ of $g$, we have $${\rm h}_{\alpha}(g|_{\Lambdambda_g})\le \varepsilon.$$ {\varepsilon}nd{Lemma} \begin{proof} We recall a result from \cite[Proposition 2.5]{LVY13}. For proving ${\rm h}_{\alpha}(g)\le\varepsilon$, it suffices to prove that for any ergodic invariant measure $\mu$ supported on $\Lambdambda_g$ of $g$, for $\mu$-a.e. $x$, for the bi-infinite Bowen ball, we have that $${\rm h}(g,B_{\pm\infty}(x,\alpha))\le\varepsilon.$$ In fact, by Proposition~\ref{Pro:inverse-entropy}, it suffices to prove that \begin{itemize} \item either, ${\rm h}(g,B_{\pm\infty}(x,\alpha))\le\varepsilon$ for $\mu$-a.e. $x$; \item or, ${\rm h}(g^{-1},B_{\pm\infty}(x,\alpha))\le\varepsilon$ for $\mu$-a.e. $x$ {\varepsilon}nd{itemize} For the constants of the dominated splitting, we assume that there are $N\in\NN$ and $\lambdambda\in(0,1)$ (independent of $g$) such that for any $x\in\Lambdambda_g$, $\|Dg^N|_{E_g(x)}\|\|Dg^{-N}|_{F_g(g^N(x))}\|\le\lambdambda.$ We define the functions $$\varphi^{E_g}(x)=\log\|Dg^N|_{E_g(x)}\|,~~~\psi^{F_g}(x)=\log{\rm m}(Dg^N|_{F_g(x)}).$$ $$S_n(\varphi^{E_g}(x))=\frac{1}{n}\sum_{i=0}^{n-1}\varphi^{E_g}(g^{iN}(x)),~~~S_n(\psi^{F_g}(x))=\frac{1}{n}\sum_{i=0}^{n-1}\psi^{F_g}(g^{iN}(x)).$$ By Birkhoff's ergodic theorem, the following two limits exist: $$\lim_{n\to\infty}S_n(\varphi^{E_g}(x))=\int \varphi^{E_g}(x){\rm d}\mu,~~~\lim_{n\to\infty}S_n(\psi^{F_g}(x))=\int \varphi^{F_g}(x){\rm d}\mu.$$ By domination, at most one of the above quantities is contained in $(\log\lambdambda/2,-\log\lambdambda/2)$ for $\mu$-a.e. $x$. Without loss of generality, one assume that $\lim_{n\to\infty}S_n(\psi^{F_g}(x))$ is not contained in this interval. Thus, we have that $\lim_{n\to\infty}S_n(\psi^{F_g}(x))\ge-\log\lambdambda/2$. In this case, we will prove that for $\mu$-a.e. $x$, $B_{\pm\infty}(x,\alpha)\subset W^E_\alpha(x)$, and by applying Lemma~\ref{Lem:entropy-plaque}, one can conclude. Notice that when $\lim_{n\to\infty}S_n(\psi^{E_g}(x))$ is not in this interval, then one can also prove that for $\mu$-a.e. $x$, $B_{\pm\infty}(x,\alpha)\subset W^F_\alpha(x)$. Then we need to apply Proposition~\ref{Pro:inverse-entropy} to prove that for $\mu$-a.e. $x$, we have that ${\rm h}(f^{-1},B_{\pm\infty}(x,\alpha))$ is small. Take $C=C(\lambdambda^{-1/4},\lambdambda^{-1/5})$ and $\alpha_0=\alpha_0(\lambdambda^{-1/4},\lambdambda^{-1/5})$ as in Lemma~\ref{Lem:expansion}. By reducing $\alpha_0$ if necessary, one can assume that for any $w_1,w_2$ in some locally maximal invariant set of some neighborhood of $\Lambdambda$, if $d(w_1,w_2)<\alpha_0$, then $$\lambdambda^{1/12}\le \frac{{\rm m}(Df^N|_{F(w_1)})}{{\rm m}(Df^N|_{F(w_2)})}\le \lambdambda^{-1/12}.$$ The above reduction implies $$\lim_{n\to\infty}\frac{1}{n}\log\left(\prod_{i=0}^{n-1}{\rm m}(Df^N|_{F(f^{iN}(x))})\right)\ge -\frac{\log\lambdambda}{2}.$$ By Lemma~\ref{Lem:entropy-nStep}, it is enough to estimate the entropy at any iterate of $x$. 
By Lemma~\ref{Lem:pliss}, without loss of generality after an iteration, one can assume that $$\prod_{i=0}^{n-1}{\rm m}(Df^N|_{F(f^{iN}(x))})\ge \lambdambda^{-n/3},~~~\forall n\in\NN.$$ By reducing $\alpha$ if necessary, since $y\in B_{\pm\infty}(x,\alpha)$, we have $$\prod_{i=0}^{n-1}{\rm m}(Df^N|_{F(f^{iN}(y))})\ge \lambdambda^{-n/4},~~~\forall n\in\NN.$$ If there is $y\in B_{\pm\infty}(x,\alpha)\setminus W^{E_g}_\alpha(x)$, then we consider $z\in W^{F_g}_{\alpha_0}(y)\cap W^{E_g}_{\alpha_0}(x)$. There is $n_0$ such that \begin{itemize} \item such that $d(g^{n_0}(y),g^{n_0}(z))$ is almost $\alpha_0$ by Lemma~\ref{Lem:expansion}. This means that $n_0$ is related to $\alpha$: when $\alpha$ is small we have $n_0$ is large. \item $d(g^{n_0}(x),g^{n_0}(z))/d(g^{n_0}(y),g^{n_0}(z))$ is small when $n_0$ is large by the domination. \item $d(g^{n_0}(x),g^{n_0}(y))$ is bounded by $\alpha$ since $y$ is contained in the Bowen ball of $x$ of size $\alpha$. {\varepsilon}nd{itemize} When $\alpha\ll\alpha_0$, we have that $n_0$ is large. Thus, $$d(g^{n_0}(y),g^{n_0}(z))>d(g^{n_0}(x),g^{n_0}(z))+d(g^{n_0}(x),g^{n_0}(y)).$$ Then one can get a contradiction by the triangle inequality. {\varepsilon}nd{proof} \begin{Definition} For a compact metric space $X$ and a homeomorphism $T:X\to X$, $T$ is {\varepsilon}mph{asymptotically entropy expansive} if for any $\varepsilon>0$, there is $\alpha>0$ such that for any $x\in X$, we have $${\rm h}(B_\infty(x,\alpha))<\varepsilon.$$ {\varepsilon}nd{Definition} We have the following corollary directly: \begin{Corollary}\lambdabel{Cor:asymptotic} Assume that a compact invariant set $\Lambdambda$ admits a dominated splitting $T_\Lambdambda M=E\oplus F$ without mixed behavior. Then $f|_\Lambdambda$ is asymptotically entropy expansive. {\varepsilon}nd{Corollary} Thus, we also have a ``principal symbolic extension''. \begin{Definition} We say a compact invariant set $\Lambdambda$ admits a {\varepsilon}mph{principal symbolic extension} if there is $n\in\NN$ and a compact invariant subset $\Sigmagma$ of the shift $(\{1,2,\cdots,n\}^\ZZ,\sigma)$, where $\sigma$ is the shift map, and a continuous surjective map $\pi:\Sigmagma\to\Lambdambda$ such that for any invariant measure $\mu$ of $(\Sigmagma,\sigma)$, the metric entropy of $\mu$ w.r.t. $\sigma$ is the same as the the metric entropy of $\pi_*(\mu)$ w.r.t. $f$. {\varepsilon}nd{Definition} It was proven by \cite{BFF02} that any asymptotically entropy expansive system admits a principal symbolic extension. Hence we have the following corollary directly: \begin{Corollary}\lambdabel{Cor:asymptotic} Assume that a compact invariant set $\Lambdambda$ admits a dominated splitting $T_\Lambdambda M=E\oplus F$ without mixed behavior. Then $\Lambdambda$ admits a principal symbolic extension. {\varepsilon}nd{Corollary} \subsection{Upper semi continuous of the metric entropy} Now we can give the proof of Theorem~\ref{Thm:usc}. \begin{proof}[Proof of Theorem~\ref{Thm:usc}] Given a regular partition $\cal B$ of $\mu$, for any $\varepsilon>0$, there is $n\in\NN$, for any $m\in\NN$ large enough, by using Lemma~\ref{Lem:finite-approximation}, we have \begin{eqnarray*} {\rm h}_{\mu}(f)&\ge &{\rm h}_{\mu}(f,{\cal B})\\ &\ge& \frac{1}{n}{\rm H}_{\mu}(\bigvee_{i=0}^{n-1}f^{-i}(\cal B))-\varepsilon\\ &\ge&-2\varepsilon +\frac{1}{n}{\rm H}_{\mu_m}(\bigvee_{i=0}^{n-1}f_m^{-i}(\cal B))\\ &\ge& -2\varepsilon+{\rm h}_{\mu_m}(f_m,{\cal B}). 
{\varepsilon}nd{eqnarray*} By \cite[Theorem 3.5]{Bow72}, we have that for any partition ${\cal B}$ whose norm is less than $\alpha$, we have $${\rm h}_{\mu_m}(f_m|_{\Lambdambda_{f_m}})\le {\rm h}_{\mu_m}(f,{\cal B})+{\rm h}_{\alpha}({f_m|_{\Lambdambda_{f_m}}}).$$ By applying Lemma~\ref{Lem:local}, one can choose $\alpha>0$ such that ${\rm h}_{\alpha}({f_m|_{\Lambdambda_{f_m}}})<\varepsilon$ for $m$ large enough. Hence, by taking an $\alpha$-regular partition ${\cal B}$, we have \begin{eqnarray*} {\rm h}_{\mu}(f)&\ge& -2\varepsilon+{\rm h}_{\mu_m}(f_m,{\cal B})\ge {\rm h}_{\mu_m}(f_m|_{\Lambdambda_{f_m}})-{\rm h}_{\alpha}(f_m|_{\Lambdambda_{f_m}})-2\varepsilon\\ &\ge&{\rm h}_{\mu_m}(f_m)-3\varepsilon {\varepsilon}nd{eqnarray*} for $m$ large enough. By taking a limit and by the arbitrariness of $\varepsilon$, one can get the conclusion. {\varepsilon}nd{proof} \section{The equilibrium state of $\psi(x)=-\log|\deltat Df|_{F(x)}|$} In this section, we will consider a $C^1$ diffeomorphism $f$ that has a topological attractor with a dominated splitting $T_\Lambdambda M=E\oplus F$. We can extend the bundles $E$ and $F$ into a small neighborhood $U$ of $\Lambdambda$ continuously. The extensions are still denoted by $E$ and $F$. We can also extend the function $\psi(x)=-\log|\deltat Df|_{F(x)}|$ in a small neighborhood of $\Lambdambda$. In $U$, one can define the cone field ${\cal C}_\theta^F$ associated to $F$ of width $\theta>0$ by the following way: $${{\cal C}_\theta^F}(x)=\{v=v^E+v^F\in T_x M:~|v^E|\le \theta|v^F|\}.$$ Since the splitting is dominated, the cone field ${\cal C}_\theta^F$ is positive invariant for some large iteration $Df^N$ and the width of $Df^n({\cal C}_\theta^F(x))$ tends to zero exponentially for some $x\in U$ by some uniform constants. For the continuous function $\psi=-\log|\deltat Df|_{F}|$ and $n\in\NN$, define $$S_n\psi(x)=\sum_{i=0}^{n-1}\psi(f^i(x)).$$ Some similar version of the following theorem has been already stated in \cite{LeY15}. The proof based on volume estimation used in \cite{Qiu11}, originally from \cite{BoR75}. \begin{Theorem}\lambdabel{Thm:es} Let $\Lambdambda$ be a topological attractor which admits a dominated splitting $T_\Lambdambda M=E\oplus F$. Assume that the entropy function is upper semi continuous, then there is $\deltalta_0>0$ and $\theta>0$ such that for any manifold $D$ tangent to the cone ${\cal C}_{\theta}^F$, whose diameter is less than $\deltalta_0$, then for Lebesgue almost every point $x\in D$, for any accumulation point $\mu$ of $$\{\frac{1}{n} \sum_{i=0}^{n-1}\deltalta_{f^i(x)}\},$$ we have $${\rm h}_{\mu}(f)+\int \psi{\rm d}\mu\ge 0.$$ {\varepsilon}nd{Theorem} By the properties of cone fields, there are $\theta>0$ and $r>0$ such that for any disc $D$ tangent to the cone field ${\cal C}_\theta^F$ and whose diameter is less than $r$, if the diameter of $f^n(D)$ is also less than $r$, then $f^n(D)$ tangent to the cone ${\cal C}_\theta^F$. For $\varepsilon>0$, we consider the set of invariant measures: $${\cal M}_\varepsilon=\{\mu:~{\rm h}_{\mu}(f)+\int \psi{\rm d}\mu\ge -\varepsilon.\}$$ Notice that we have $${\cal M}_0=\bigcap_{n\in\NN}{\cal M}_{1/n}.$$ By the upper semi continuity of the metric entropy, we have that ${\cal M}_\varepsilon$ is closed. Thus ${\cal M}\setminus{\cal M}_\varepsilon$ is open. 
Thus in the metric space of invariant measures, there are countably many open sets $\{{\cal O}_i\}_{i\in\NN}$ such that \begin{itemize} \item the union of all ${\cal O}_i$ is ${\cal M}\setminus{\cal M}_\varepsilon.$ \item Each ${{\cal O}}_i$ is convex and open. \item the closure of ${{\cal O}}_i$ is contained in ${\cal M}\setminus{\cal M}_\varepsilon$. {\varepsilon}nd{itemize} For each set ${\cal O}$, we define $$B_D({\cal O})=\{x\in D:~\{\frac{1}{n}\sum_{i=0}^{n-1}\deltalta_{f^i(x)}\}\textrm{~has an accumulation point in~}{\cal O}\}.$$ $$B_D({\cal O},n)=\{x\in D:~\frac{1}{n}\sum_{i=0}^{n-1}\deltalta_{f^i(x)}\in {\cal O}\}$$ From the definition, we have $$B_D({\cal O})\subset\limsup_{n\to\infty}B_n({\cal O},n)=\bigcap_{n\ge 1}\bigcup_{m\ge n}B_D({\cal O},m)$$ We have the following result to conclude Theorem~\ref{Thm:es}. \begin{Lemma}\lambdabel{Lem:zero-Leb} For each ${\cal O}\in\{{\cal O}_i\}$, we have that the Lebesgue measure of $B_D({\cal O})$ is zero. {\varepsilon}nd{Lemma} \begin{proof}[Proof of Theorem~\ref{Thm:es}] By Lemma~\ref{Lem:zero-Leb}, for any small $C^1$ sub manifold $D$ tangent to the cone field ${\cal C}_\theta^F$, we have that Lebesgue almost every point $x$ in $D$, any accumulation point $\mu$ of $\{\frac{1}{n} \sum_{i=0}^{n-1}\deltalta_{f^i(x)}\},$ we have that ${\rm h}_{\mu}(f)+\int \psi{\rm d}\mu\ge 0.$ A small neighborhood of $\Lambdambda$ can be foliated by such kind of sub manifolds. Thus, the proof can be complete. {\varepsilon}nd{proof} Now we give the proof of Lemma~\ref{Lem:zero-Leb}. \begin{proof}[Proof of Lemma~\ref{Lem:zero-Leb}] By using the Borel-Cantelli argument, for proving ${\rm Leb}(B_D({\cal O}))=0$, it suffices to prove that $$\sum_{n=1}^\infty {\rm Leb}(B_D({\cal O},n))<\infty.$$ Thus we need to estimate ${\rm Leb}(B_D({\cal O},n))$ for $n$ large enough. We consider $$B_D(\overline{\cal O},n)=\{x\in D:~\frac{1}{n}\sum_{i=0}^{n-1}\deltalta_{f^i(x)}\in \overline{\cal O}\}$$ We first cover $B_D(\overline{\cal O},n)$ by a maximal $(n,\deltalta)$-seperated set $\Deltalta_{n,\deltalta}$. Since it is maximal, we have $$B_D({\cal O},n)\subset B_D(\overline{\cal O},n)\subset \bigcup_{x\in\Deltalta_{n,\deltalta}}B_n(x,\deltalta).$$ We need to choose two constants. Notice that by positive iterations, the cone ${\cal C}^F_\theta$ will decrease exponentially. Thus, by considering a positive iteration of $D$ (saying $f^{N}(D)$) and then dividing the positive iteration into small pieces, one can assume that $D$ is tangent to a very thin cone field (since $f^N(D)$ is tangent to a very thin cone field). We can choose constants $C_{\deltalta}$ such that for any disc $W$ tangent to the cone field ${\cal C}^F_\theta$, for any points $x,y\in W$ satisfying $d_W(x,y)<\deltalta$, we have $|\psi(x)-\psi(y)|\le\log{C_{\deltalta}}$. By the uniform continuity of $\psi$, one can assume that $C_\deltalta\to 1$ as $\deltalta\to 0$. For any $\kappa>0$, there is $\theta_\kappa$ such that for any disc $W$ tangent to the cone fied ${\cal C}^F_{\theta_\kappa}$, we have for any $x\in W$, $$\left| \log |{\rm det}Df|_{T_x W}|-\log\psi(x)\right|<\kappa.$$ There is $N_\kappa\in\NN$ such that for any $n>N_\kappa$, for any sub-manifold $W$ tangent to ${\cal C}^F_\theta$, then $f^n(W)$ is tangent to ${\cal C}^F_{\theta_\kappa}$. 
Thus, there is $C_\kappa$ (large) such that \begin{eqnarray*} {\rm Leb}(B_D(\overline{\cal O},n)) &\le& \sum_{x\in\Deltalta_{n,\deltalta}}{\rm Leb}B_n(x,\deltalta)=\sum_{x\in\Deltalta_{n,\deltalta}}\int_{B_n(x,\deltalta)}{\rm d}{\rm Leb}_D(y)\\ &=& \sum_{x\in\Deltalta_{n,\deltalta}}\int_{f^n(B_n(x,\deltalta))} \prod_{i=0}^n |\deltat (Df|_{T_{f^{-n+i}(z)}f^i W})|^{-1} {\rm d}{\rm Leb}_{f^n D}(z)\\ &\le& C_\kappa {\rm e}^{n\kappa} \sum_{x\in\Deltalta_{n,\deltalta}}\int_{f^n(B_n(x,\deltalta))} {\rm e}^{S_n\psi(z)} {\rm d}{\rm Leb}_{f^n D}(z)\\ &\le& V_\deltalta C_\kappa {\rm e}^{n\kappa} C_\deltalta^n\sum_{x\in\Deltalta_{n,\deltalta}}{\rm e}^{S_n\psi(x)}, {\varepsilon}nd{eqnarray*} where $V_\deltalta$ is the maximal volume of a disc $D$ whose diameter is less than $\deltalta$, which is tangent to ${\cal C}^F_\theta$. Now we need to estimate $\sum_{x\in\Deltalta_{n,\varepsilon}}{\rm e}^{S_n\psi(x)}$. Take $\nu_n$ and $\mu_n$: $$\nu_n=\frac{\sum_{x\in \Deltalta_{n,\varepsilon}} {\rm e}^{S_n\psi(x)}\deltalta_x}{\sum_{x\in \Deltalta_{n,\varepsilon}} {\rm e}^{S_n\psi(x)}}$$ $$\mu_n=\frac{1}{n}\sum_{i=0}^{n-1}f^i_*\nu_n=\sum_{x\in\Deltalta_{n,\varepsilon}}\frac{{\rm e}^{S_n\psi(x)}}{\sum_{x\in \Deltalta_{n,\varepsilon}} {\rm e}^{S_n\psi(x)}}\frac{1}{n}\sum_{i=0}^{n-1}\deltalta_{f^i(x)}.$$ \begin{Claim} $\mu_n\in \overline{\cal O}$. {\varepsilon}nd{Claim} \begin{proof}[Proof of the Claim] Since $x\in\Deltalta_{n,\varepsilon}\subset B_D(\overline{\cal O},n)$, we have that $$\frac{1}{n}\sum_{i=0}^{n-1}\deltalta_{f^i(x)}\in \overline{\cal O}.$$ By the convexity of $\overline{\cal O}$, the claim is true. {\varepsilon}nd{proof} We have that any accumulation point $\mu$ of $\{\mu_n\}$ is invariant. And moreover $\mu\in\overline {\cal O}$. By the construction of $\overline{\cal O}$, we have ${\rm h}_\mu(f)+\int\psi{\rm d}\mu\le-\varepsilon$. Now we want to prove $${\rm h}_{\mu}(f)+\int \psi{\rm d}\mu\ge \limsup_{n\to\infty} \frac{1}{n} \log\sum_{x\in\Deltalta_{n,\deltalta}}{\rm e}^{S_n\psi(x)}.$$ Take a partition ${\cal B}=\{B_1,B_2,\cdots,B_k\}$ that is $\deltalta$-regular for $\mu$. Then we have that every element of $\bigvee_{i=0}^{n-1}f^{-i}{\cal B}$ contains at most one point in $\Deltalta_{n,\deltalta}$. By \cite[Chapter 9]{Wal82}, we have $${\rm H}_{\nu_n}(\bigvee_{i=0}^{n-1}f^{-i}{\cal B})+\int S_n\psi {\rm d}\nu_n=\log\sum_{x\in\Deltalta_{n,\deltalta}}{\rm e}^{S_n\psi(x)}.$$ Now we need to consider the relationship between ${\rm H}_{\mu_n}$ and ${\rm H}_{\nu_n}$. Given some integer $1\le j<q<n$, the partition $\bigvee_{i=0}^{n-1}f^{-i}{\cal B}$ can be written in the following way: $$\bigvee_{i=0}^{n-1}f^{-i}{\cal B}=\bigvee_{r=0}^{[(n-j)/q]-1}f^{-(rq+j)} \bigvee_{i=0}^{q-1}f^{-i}{\cal B} \vee \bigvee_{{\varepsilon}ll\in S_j}f^{-{\varepsilon}ll}{\cal B},$$ where $S_j=\{0,1,\cdots,j-1,j+[(n-j)/q]q,\cdots,n-1\}$. 
We have $|S_j|\le 2q$, and \begin{eqnarray*} \log\sum_{x\in\Deltalta_{n,\deltalta}}{\rm e}^{S_n\psi(x)} &=& {\rm H}_{\nu_n}(\bigvee_{j=0}^{n-1}f^{-j}{\cal B})+\int S_n\psi {\rm d}\nu_n\\ &\le& \sum_{r=0}^{[(n-j)/q]-1}{\rm H}_{\nu_n}(f^{-(rq+j)}\bigvee_{i=0}^{q-1}f^{-i}{\cal B})+{\rm H}_{\nu_n}(\bigvee_{k\in S_j}f^{-k}({\cal B}))+\int S_n\psi{\rm d}\nu_n\\ &\le& \sum_{r=0}^{[(n-j)/q]-1}{\rm H}_{f^{rq+j}_*\nu_n}(\bigvee_{i=0}^{q-1}f^{-i}{\cal B})+2q\log k+\int S_n\psi{\rm d}\nu_n {\varepsilon}nd{eqnarray*} By taking the sum for $j$ from 0 to $q-1$, and using Lemma~\ref{Lem:H}, we have \begin{eqnarray*} q\log\sum_{x\in\Deltalta_{n,\deltalta}}{\rm e}^{S_n\psi(x)} &\le& \sum_{p=j}^{j+[(n-j)/q]q}{\rm H}_{f^p_*\nu_n}(\bigvee_{i=0}^{q-1}f^{-i}{\cal B})+2q^2\log k+q\int S_n\psi{\rm d}\nu_n\\ &\le& \sum_{p=0}^{n-1}{\rm H}_{f^p_*\nu_n}(\bigvee_{i=0}^{q-1}f^{-i}{\cal B})+2q^2\log k+q\int S_n\psi{\rm d} \nu_n\\ &\le& n{\rm H}_{\mu_n}(\bigvee_{i=0}^{q-1}f^{-i}{\cal B})+2q^2\log k+q\int S_n\psi{\rm d}\nu_n. {\varepsilon}nd{eqnarray*} By dividing by $n$, we have $$\frac{q}{n} \log\sum_{x\in\Deltalta_{n,\varepsilon}}{\rm e}^{S_n\psi(x)} \le {\rm H}_{\mu_n}(\bigvee_{i=0}^{q-1}f^{-i}{\cal B})+\frac{2q^2\log k}{n}+q\int \psi{\rm d}\mu_n.$$ By taking the $\limsup$ for $n$ we have $$q\limsup_{n\to\infty}\frac{1}{n} \log\sum_{x\in\Deltalta_{n,\deltalta}}{\rm e}^{S_n\psi(x)}\le {\rm H}_{\mu}(\bigvee_{i=0}^{q-1}f^{-i}{\cal B})+q\int \psi{\rm d}\mu.$$ By dividing $q$ and letting $q\to\infty$, we have $$\limsup_{n\to\infty}\frac{1}{n} \log\sum_{x\in\Deltalta_{n,\deltalta}}{\rm e}^{S_n\psi(x)}\le {\rm h}_{\mu}({\cal B})+\int \psi {\rm d}\mu.$$ Recall that ${\rm h}_{\mu}(f)+\int \psi {\rm d}\mu\le-\varepsilon$, we have $$\limsup_{n\to\infty}\frac{1}{n} \log\sum_{x\in\Deltalta_{n,\deltalta}}{\rm e}^{S_n\psi(x)}\le -\varepsilon.$$ Thus, \begin{eqnarray*} \limsup_{n\to\infty}\frac{1}{n}\log {\rm Leb}(B_D({\cal O},n)) &\le& \limsup_{n\to\infty}\frac{1}{n}\log V_\deltalta C_\kappa+\kappa+\log C_\deltalta\\ &+& \lim_{n\to\infty}\frac{1}{n}\log\sum_{x\in\Deltalta_{n,\deltalta}}{\rm e}^{S_n\psi(x)}\\ &\le& \kappa+\log C_\deltalta-\varepsilon. {\varepsilon}nd{eqnarray*} By choosing $\kappa>0$ small and $C_\deltalta$ close to $1$, we have that $$\limsup_{n\to\infty}\frac{1}{n}\log {\rm Leb}(B_D({\cal O},n))<0.$$ Then by using the Borel-Cantelli argument, we can complete the proof. {\varepsilon}nd{proof} \begin{proof}[Proof of the main theorem] Now we assume that $\Lambdambda$ is a topological attractor that admits a dominated splitting $T_\Lambdambda M=E\oplus F$ without mixed behavior. Notice that the entropy function is upper semi continuous by Corollary~\ref{Cor:usc}. Then by Theorem~\ref{Thm:es} we have that there is a measure $\mu$ such that $${\rm h}_\mu(f)\ge \int \log |{\rm Det} Df|_F|{\rm d}\mu.$$ Since there is no mixed behavior, we have that $${\rm h}_\mu(f)\ge \int \sum\lambdambda_+{\rm d}\mu,$$ Where $\sum\lambdambda^+$ is the sum of positive Lyapunov exponents of $\mu$. On the other hand, by Ruelle's inequality, we have $${\rm h}_\mu(f)\le \int \sum\lambdambda_+{\rm d}\mu.$$ Thus $\mu$ satisfies the entropy formula. {\varepsilon}nd{proof} \begin{thebibliography}{1d} \bibitem{Bow72}R. Bowen, Entropy-expansive maps, {\it Tran. A.M.S.}, {\bf 164}(1972), 323-331. \bibitem{BoR75}R. Bowen and D. Ruelle, The ergodic theory of Axiom A flows, {\it Invent. Math.}, {\bf 29}(1975), 181-202. \bibitem{BFF02}M. Boyle, D. Fiebig and U. 
Fiebig, Residual entropy, conditional entropy and subshift covers, {\it Forum Math.}, {\bf 14}(2002), 713-757.
\bibitem{CaQ01}J. Campbell and A. Quas, A generic $C^1$ expanding map has a singular S-R-B measure, {\it Comm. Math. Phys.}, {\bf 221}(2001), 335-349.
\bibitem{Cao03}Y. Cao, Non-zero Lyapunov exponents and uniform hyperbolicity, {\it Nonlinearity}, {\bf 16}(2003), 1473-1479.
\bibitem{CCE15}E. Catsigeras, M. Cerminara and H. Enrich, The Pesin entropy formula for diffeomorphisms with dominated splitting, {\it Ergod. Th. Dynam. Sys.}, {\bf 35}(2015), 737-761.
\bibitem{CoY05}W. Cowieson and L.S. Young, SRB measures as zero-noise limits, {\it Ergod. Th. Dynam. Sys.}, {\bf 25}(2005), 1115-1138.
\bibitem{HPS77}M. W. Hirsch, C. C. Pugh and M. Shub, Invariant manifolds, {\it Lecture Notes in Mathematics}, {\bf 583}, Springer-Verlag, 1977.
\bibitem{LeY15}R. Leplaideur and D. Yang, On the existence of SRB measures for singular hyperbolic attractors, 2015, {\it in preparation}.
\bibitem{LiL15}P. Liu and K. Lu, A note on partially hyperbolic attractors: entropy conjecture and SRB measures, {\it Discrete Contin. Dyn. Syst.}, {\bf 35}(2015), 341-352.
\bibitem{LVY13}G. Liao, M. Viana and J. Yang, The entropy conjecture for diffeomorphisms away from tangencies, {\it J. Eur. Math. Soc.}, {\bf 15}(2013), 2043-2060.
\bibitem{Man75}A. Manning, Topological entropy and the first homology group, in {\it Dynamical Systems-Warwick 1974}, Lecture Notes in Math., 468, Springer-Verlag, 1975, 185-190.
\bibitem{Pli72}V. Pliss, A hypothesis due to Smale, {\it Diff. Eq.}, {\bf 8}(1972), 203-214.
\bibitem{Qiu11}H. Qiu, Existence and uniqueness of SRB measure on $C^1$ generic hyperbolic attractors, {\it Comm. Math. Phys.}, {\bf 302}(2011), 345-357.
\bibitem{RuS75}D. Ruelle and D. Sullivan, Currents, flows and diffeomorphisms, {\it Topology}, {\bf 14}(1975), 319-327.
\bibitem{SaX10}R. Saghin and Z. Xia, The entropy conjecture for partially hyperbolic diffeomorphisms with $1$-D center, {\it Topology Appl.}, {\bf 157}(2010), 29-34.
\bibitem{Shu74}M. Shub, Dynamical systems, filtrations and entropy, {\it Bull. Amer. Math. Soc.}, {\bf 80}(1974), 27-41.
\bibitem{ShW75}M. Shub and R. Williams, Entropy and stability, {\it Topology}, {\bf 14}(1975), 329-338.
\bibitem{SuT12}W. Sun and X. Tian, Dominated splitting and Pesin's entropy formula, {\it Discrete Contin. Dyn. Syst.}, {\bf 32}(2012), 1421-1434.
\bibitem{Tah02}A. Tahzibi, $C^1$-generic Pesin's entropy formula, {\it C. R. Acad. Sci. Paris, Ser. I}, {\bf 335}(2002), 1057-1062.
\bibitem{Wal82}P. Walters, An Introduction to Ergodic Theory, {\it Graduate Texts in Mathematics}, {\bf 79}, Springer-Verlag, 1982.
\bibitem{Yom87}Y. Yomdin, Volume growth and entropy, {\it Israel J. Math.}, {\bf 57}(1987), 285-300.
\end{thebibliography}
\vskip 5pt
\noindent Dawei Yang
\noindent School of Mathematical Sciences
\noindent Soochow University, Suzhou, 215006, P.R. China
\noindent [email protected], [email protected]
\vskip 5pt
\noindent Yongluo Cao
\noindent School of Mathematical Sciences
\noindent Soochow University, Suzhou, 215006, P.R. China
\noindent [email protected]
\end{document}
\begin{document}
\newcommand{2010.02768}{2010.02768}
\renewcommand{026}{026}
\FirstPageHeading
\ShortArticleName{Mixed vs Stable Anti-Yetter--Drinfeld Contramodules}
\ArticleName{Mixed vs Stable Anti-Yetter--Drinfeld Contramodules}
\Author{Ilya SHAPIRO}
\AuthorNameForHeading{I.~Shapiro}
\Address{Department of Mathematics and Statistics, University of Windsor,\\ 401 Sunset Avenue, Windsor, Ontario N9B 3P4, Canada}
\Email{\href{mailto:[email protected]}{[email protected]}}
\URLaddress{\url{http://web2.uwindsor.ca/math/ishapiro/}}
\ArticleDates{Received November 09, 2020, in final form March 04, 2021; Published online March 17, 2021}
\Abstract{We examine the cyclic homology of the monoidal category of modules over a finite dimensional Hopf algebra, motivated by the need to demonstrate that there is a difference between the recently introduced mixed anti-Yetter--Drinfeld contramodules and the usual stable anti-Yetter--Drinfeld contramodules. Namely, we show that Sweedler's Hopf algebra provides an example where mixed complexes in the category of stable anti-Yetter--Drinfeld contramodules (previously studied) are not equivalent, as differential graded categories, to~the category of mixed anti-Yetter--Drinfeld contramodules (recently introduced).}
\Keywords{Hopf algebras; homological algebra; Taft algebras}
\Classification{16E35; 16T05; 18G90; 19D55}
\section{Introduction}
Cyclic (co)homology was introduced independently by Boris Tsygan and Alain Connes in the 1980s. It has since been generalized, applied to many fields, and now reaches into many different settings. Our investigations in this paper focus on the equivariant flavour that began with Connes--Moscovici~\cite{conmos} and was generalized into Hopf-cyclic cohomology by Hajac--Khalkhali--Rangipour--Sommerh\"{a}user~\cite{HKRS1, HKRS2} and Jara--Stefan~\cite{JS} (independently). Roughly speaking, the original theory defines cohomology groups for an associative algebra that play the role of the de~Rham cohomology in the noncommutative setting. The equivariant version considers an~alge\-bra with an action of a Hopf alge\-bra. It turns out that just as in the de~Rham cohomology, one has coefficients in the Hopf setting; it is an interesting fact that, unlike the de~Rham setting, Hopf-cyclic cohomology requires coefficients, i.e., there are no canonical trivial coefficients. These coefficients are known as stable anti-Yetter--Drinfeld modules, due to their similarity to the usual Yetter--Drinfeld modules. It turns out that the more natural, from a conceptual point of view, version of coefficients are stable anti-Yetter--Drinfeld contramodules~\cite{contra}. It is the desire to~under\-stand the coefficients themselves that motivated a series of papers by the author of the present one. This paper is a natural next step.
This paper is a descendant of~\cite{chern}, where it is shown that the classic stable anti-Yetter--Drinfeld contramodules are simply objects in the naive cyclic homology category of ${_H\mathcal{M}}$, the monoidal category of modules over the Hopf algebra $H$. It is furthermore conjectured there, that the new coefficients introduced (mixed anti-Yetter--Drinfeld contramodules) are obtained via the true cyclic homology category; this makes exact the analogy between the de~Rham coefficients in the geometric setting and the Hopf-cyclic coefficients.
Namely, while the latter are obtained from the cyclic homology of ${_H\mathcal{M}}$, the former are shown in~\cite{it} to arise from the cyclic homology of quasi-coherent sheaves on the space $X$. More precisely, in~\cite{chern}, a category of mixed anti-Yetter--Drinfeld contramodules is defined by analogy with the derived algebraic geometry case of~\cite{it}. This new generalization is conceptual, and furthermore allows the expression of~the Hopf-cyclic cohomology of an algebra $A$ with coefficients in $M$ as an ${\rm Ext}$ (in this category) between $\mathop{\rm ch}(A)$, the Chern character object associated to $A$, and $M$ itself. Even if one takes $M$ to be a~stable anti-Yetter--Drinfeld contramodule, the object $\mathop{\rm ch}(A)$ is truly a mixed anti-Yetter--Drinfeld contramodule. It is conjectured that mixed anti-Yetter--Drinfeld contramodules are the objects in the cyclic homology category of ${_H\mathcal{M}}$. The comparison in~\cite{chern} between anti-Yetter--Drinfeld contramodules and the cyclic homology category of ${_H\mathcal{M}}$ involves a monad on ${_H\mathcal{M}}$ with a central element $\sigma$. When we talk about the $S^1$-action we mean the action of this central element on the category of modules over the monad. It is this description that allows us here to reduce the investigations into the differences between the previously studied and the new Hopf-cyclic cohomology to the analysis of categories of modules over two differential graded algebras (DGAs). Namely, in the notation of the paper, we have an algebra $\widehat{D}(H)$ whose modules are the anti-Yetter--Drinfeld contramodules, we have a DGA $\widehat{D}(H)[\theta]$ with $d\theta=\sigma-1$ that yields the new mixed anti-Yetter--Drinfeld contramodules, and we have a DGA $\widehat{D}(H)/(\sigma-1)[\theta]$ with $d\theta=0$ that yields the previously studied setting, i.e., the mixed complexes in stable anti-Yetter--Drinfeld contramodules. Thus, it suffices for our purposes to compare the DG categories of modules over these two DGAs. We~concentrate on~finite dimensional Hopf algebras $H$. We~show that if the square of the antipode is trivial, i.e., $S^2={\rm Id}$ then the DG categories coincide (Proposition~\ref{sscase}): \begin{prop} Let $H$ be a finite dimensional Hopf algebra such that the square of the antipode is equal to the identity, i.e., $S^2={\rm Id}$. Then the categories of mixed complexes in stable $aYD$-contramodules and mixed $aYD$-contramodules are $DG$-equivalent. \end{prop} On the other hand, if we consider Sweedler's Hopf algebra $T_2(-1)$ (the smallest case of $S^2\neq {\rm Id}$) then they do not (Proposition~\ref{notsscase}): \begin{prop} Let $H=T_2(-1)$, then the mixed complexes in the category of stable anti-Yetter--Drinfeld contramodules are not DG-equivalent to the category of mixed anti-Yetter--Drinfeld contramodules. \end{prop} \noindent \textbf{Conventions.} All algebras $A$ in monoidal categories are assumed to be unital associative. Our~$H$ is a Hopf algebra over some fixed algebraically closed field $k$, of characteristic $0$, and $\text{Vec}$ denotes the category of $k$-vector spaces. For the purposes of this paper we are only interested in finite dimensional Hopf algebras. We~use the following version of Sweedler's notation: For~$h\in H$ we denote the coproduct $\Delta(h)\in H\otimes H$ by $h^1\otimes h^2$. The letter $S$ denotes the antipode of~$H$. The number $p$ is prime. Finally, DG stands for differential graded. \section{Twisted Drinfeld double} Let $H$ be a Hopf algebra. 
From~\cite{chern} we see that the study of the Hochschild and cyclic homologies of ${_H\mathcal{M}}$, the monoidal category of $H$-modules, reduces to the study of modules over a certain monad on ${_H\mathcal{M}}$. Recall that the consideration of Hochschild and cyclic homologies of monoidal categories is motivated by their recently discovered role~\cite{chern} in the understanding of Hopf-cyclic theory coefficients. Briefly, we have the monad (see~\cite{chern} for more details):
\begin{gather}\label{themonad}
\mathop{\rm Hom}\nolimits_k(H,-)\colon \ {_H\mathcal{M}}\to{_H\mathcal{M}}
\end{gather}
with the $H$-module structure on $\mathop{\rm Hom}\nolimits_k(H,V)$:
\begin{gather}
x\cdot\varphi=x^2\varphi\big(S\big(x^3\big)(-)x^1\big),
\end{gather}
for $x\in H$ and $\varphi\in \mathop{\rm Hom}\nolimits_k(H,V)$. The unit $1_V\colon {\rm Id}(V)\to \mathop{\rm Hom}\nolimits_k(H,V)$ is
\begin{gather}
1_V(v)(h)=\epsilon(h)v
\end{gather}
and a crucial central element $\sigma_V\colon {\rm Id}(V)\to \mathop{\rm Hom}\nolimits_k(H,V)$ is
\begin{gather}
\sigma_V(v)(h)=hv.
\end{gather}
The anti-Yetter--Drinfeld contramodules then coincide with modules over this monad (shown in~\cite{chern}), while the stable ones consist of those for which the action of $\sigma$ agrees with that of $1$, and the mixed ones introduced in~\cite{chern} are the homotopic version of this on-the-nose requirement.
Recall the mixed complexes of~\cite{kassel}. These are complexes of vector spaces $(V^\bullet,d)$ with a~homo\-topy $h$ such that $dh+hd=0$. We~can replace vector spaces with $R$-modules for some ring~$R$. Note that as observed in~\cite{kassel}, the DG-category of mixed complexes in $R$-modules is isomorphic to the DG-category of DG-modules over $R[\theta]$, where $\theta$ is a freely and centrally adjoined degree~$-1$~graded commutative variable (naturally $R$ itself is placed in degree $0$, so that $R[\theta]=R\to R$ as a~comp\-lex) and $d=0$ on $R[\theta]$. The action of $\theta$ gives the homotopy $h$.
We can generalize the considerations of~\cite{kassel} so as to apply to our particular situation. Namely, let $z\in Z(R)$, i.e., $zr=rz$ for all $r\in R$. Define a DG-algebra $R[\theta]$ by placing $R$ in degree $0$ and $\theta$ in degree $-1$. Let $\theta$ commute with $R$ and itself, so in particular $\theta^2=0$. So far it is as above. Now define the differential to be $0$ on $R$ and $d\theta=z$. This is well defined and unique by the Leibniz rule. We~observe that the category of DG-modules over $R[\theta]$ consists of complexes of $R$-modules equipped with a homotopy $h$ such that $dh+hd=z$. Now recall from~\cite{chern}:
\begin{Definition}
We say that $(M^\bullet,d,h)$ is a mixed anti-Yetter--Drinfeld contramodule if $(M^\bullet,d)$ is a complex of contramodules, i.e., modules over the monad \eqref{themonad}, and $h$ is a homotopy annihilating $\sigma-1$. More precisely,
\begin{gather}
dh+hd=\sigma-1.
\end{gather}
\end{Definition}
In this section we will define an explicit DG-algebra that will yield the mixed anti-Yetter--Drinfeld (aYD) contramodules (for $H$ finite dimensional) as its DG-modules. The construction of the twisted convolution algebra below is analogous to the classical Drinfeld double $D(H)$ and its anti-version $D_a(H)$~\cite{HKRS2} (we review these in Appendix~\ref{appx}, where we expand upon this comparison).
\begin{Definition}\label{dhhat} Let $H$ be a Hopf algebra with an invertible antipode $S$, and define a twisted double~$\widehat{D}(H)$ as follows.
The multiplication on $\widehat{D}(H):={\rm End}(H)$ is
\begin{gather}\label{mult}
(f\star g)(h)= f\big(h^1\big)^2g\big(S\big(f\big(h^1\big)^3\big)h^2f\big(h^1\big)^1\big),
\end{gather}
thus the multiplicative identity, which we denote by $1$, is $\epsilon(-)1$, and the central element $\sigma(h)=h$ is invertible with inverse $S^{-1}$.
\end{Definition}
Definition~\ref{dhhat} is extracted from the monad~\eqref{themonad} with the sole purpose of making the following lemma a tautology.
\begin{Lemma}\label{bas} Let $H$ be a finite dimensional Hopf algebra.
\begin{itemize}\itemsep=0pt
\item The category of anti-Yetter--Drinfeld contramodules over $H$ is isomorphic to $\widehat{D}(H)$-mo\-dules.
\item The category of stable anti-Yetter--Drinfeld contramodules is isomorphic to modules over
\begin{gather}
A:=\widehat{D}(H)/(\sigma-1).
\end{gather}
\item The DG-category of mixed anti-Yetter--Drinfeld contramodules is isomorphic to DG-mo\-dules over the DG algebra
\begin{gather}
B:=\widehat{D}(H)[\theta],
\end{gather}
where $\theta$ is a freely adjoined degree $-1$ graded commutative variable and $d\theta=\sigma-1$, with $d|_{\widehat{D}(H)}=0$.
\end{itemize}
\end{Lemma}
\begin{proof}
In the finite dimensional case, as vector spaces, $\widehat{D}(H)={\rm End}(H)\simeq H^*\otimes H$. Furthermore, as an algebra, $\widehat{D}(H)$ is the quotient of the free product algebra, generated by $H^*$ and $H$, by the relation:
\begin{gather} \label{mult1}
h \chi= \chi\big(S\big(h^3\big)(-)h^1\big)h^2,
\end{gather}
where $h\in H$, $\chi\in H^*$. Thus, modules over the algebra are both $H$-modules and $H$-cont\-ra\-mo\-du\-les (same as $H^*$-modules for $H$ finite dimensional). The two actions satisfy the requisite compatibility condition for contramodules, as specified in~\cite{contra}, and ensured by \eqref{mult1}. Clearly, modules over $A$ form the full subcategory of objects on which $\sigma$ acts by the identity; these are exactly the stable contramodules. Finally, a DG-module over $B$ is just a complex of $\widehat{D}(H)$-modules with a homotopy given by the action of $\theta$. The condition $d\theta=\sigma-1$ ensures that $dh+hd=\sigma-1$ on~$M^\bullet$.
\end{proof}
Our main goal is to compare the category of mixed aYD contramodules to the category of~mixed complexes of stable aYD contramodules. By the preceding lemma this means determining when, and more interestingly when not, the category of DG-modules over $B$ is DG-equivalent to $A[\theta]$-modules (with $\theta$ of degree $-1$ and $d=0$). The study of Hopf-cyclic cohomology has thus far only concerned itself with the latter.
The following simple lemma takes care of a lot of cases.
\begin{Lemma}\label{diag} Let $H$ be a finite dimensional Hopf algebra and suppose that the action of $\sigma-1$ on~$\widehat{D}(H)$ is diagonalizable. Then the categories of mixed complexes in stable $aYD$-contramodules and mixed $aYD$-contramodules are $DG$-equivalent.
\end{Lemma}
\begin{proof}
Since the action of the central element $\sigma-1$ on $\widehat{D}(H)$ is diagonalizable we may decom\-pose $\widehat{D}(H)$ as a product of algebras $\widehat{D}(H)_0\oplus \widehat{D}(H)_+$ with $\widehat{D}(H)_0$ being the $0$-eigenspace and~$\widehat{D}(H)_+$ all the other eigenspaces. We~have an inclusion of DGAs: $\widehat{D}(H)_0[\theta]\to B$ that induces an isomorphism on cohomology.
Namely, as complexes $B$ is $\widehat{D}(H)\stackrel{\sigma-1}{\to}\widehat{D}(H)$ whereas $\widehat{D}(H)_0[\theta]$ is $\widehat{D}(H)_0\stackrel{0}{\to}\widehat{D}(H)_0$, and $\sigma-1$ is invertible on $\widehat{D}(H)_+$. Note that $\widehat{D}(H)_0\simeq A$ and we are done.
\end{proof}
\begin{Proposition}\label{sscase} Let $H$ be a finite dimensional Hopf algebra such that the square of the antipode is the identity, i.e., $S^2={\rm Id}$. Then the categories of mixed complexes in stable $aYD$-contramodules and mixed $aYD$-contramodules are $DG$-equivalent.
\end{Proposition}
\begin{proof}We need characteristic $0$ here. Since $S^2={\rm Id}$, $H$ is semi-simple~\cite{sqid2, sqid}, so $D(H)$ (its Drinfeld double) is semi-simple~\cite{dhss}. By Lemma~\ref{uhu}, it follows that $D_a(H)$ is semi-simple and thus by~\cite{s2} so is $\widehat{D}(H)$. Thus, by Schur's lemma, the action of the central element $\sigma-1$ is diagonalizable and we are done by Lemma~\ref{diag}.
\end{proof}
In light of the above we need to consider an example of $H$ with $S^2\neq {\rm Id}$. It turns out that the smallest, dimension-wise, such example suffices.
\section{Taft Hopf algebras}\label{sec:blah}
We fix a prime $p$ and a primitive $p$th root of unity $\xi\in k$ in the following. The Taft Hopf algebra $T_p(\xi)$~\cite{taft} is generated as a $k$-algebra by $g$ and $x$ with the relations
\begin{gather}
g^p=1,\qquad x^p=0, \\ gx=\xi xg.
\end{gather}
It is sometimes called the quantum ${\mathfrak{sl}}_2$ Borel algebra. It is $p^2$ dimensional over $k$. Furthermore, the coalgebra structure is
\begin{gather}
\Delta(g)=g\otimes g,\qquad \Delta(x)=x\otimes 1+g\otimes x
\end{gather}
with $\epsilon(g)=1$, $\epsilon(x)=0$, and thus $S(g)=g^{-1}$, while $S(x)=-g^{-1}x$. Note that
\begin{gather}
S^2(x)=\xi^{-1}x\neq x,
\end{gather}
making $T_2(-1)$ the smallest Hopf algebra with $S^2\neq {\rm Id}$. The Taft algebra $T_2(-1)$ is somewhat different from the other $T_p(\xi)$ and has its own name: Sweedler’s Hopf algebra.
\subsection{The identification with the dual}
The Taft algebra is isomorphic to its dual. We~need some explicit formulas establishing the isomorphism $T_p(\xi)\simeq T_p(\xi)^*$ and its inverse. Suppose that $\omega$ is a $p$th root of unity, let
\begin{gather}
(n)_\omega=1+\cdots +\omega^{n-1}
\end{gather}
and
\begin{gather}
(n)_\omega !=(n)_\omega\cdots(1)_\omega.
\end{gather}
The verification of the following is left to the reader; key details can be found in~\cite{selfdual}. The lemma itself can be obtained from~\cite{nen}.
\begin{Lemma}\label{nenlem} As Hopf algebras
\begin{gather}
T_p(\xi)^*\simeq T_p(\xi).
\end{gather}
\end{Lemma}
\begin{proof}
Consider a basis of $T_p(\xi)$: $\big\{g^ix^j\big\}_{ i,j=0}^{p-1}$ so that $\big\{\big(g^ix^j\big)^*\big\}$ denotes the dual basis of $T_p(\xi)^*$. Then the isomorphism of Hopf algebras and its inverse are given by
\begin{gather}
g^ix^j\mapsto(j)_{\xi^{-1}}!\sum_l\xi^{i(j+l)}\big(g^lx^j\big)^*
\end{gather}
and
\begin{gather}
(g^ix^j)^*\mapsto\dfrac{1}{p(j)_{\xi^{-1}}!}\sum_l\xi^{-l(i+j)}g^lx^j.
\end{gather}
\end{proof}
\begin{Corollary}\label{gens}The twisted double $\widehat{D}(T_p(\xi))$ is a quotient of $k\left\langle x,x',g,g'\right\rangle$.
\begin{itemize}\itemsep=0pt
\item The relations are
\begin{gather*}
x^p=x'^p=g^p-1=g'^p-1=0, \\ gg'=g'g,\qquad gx=\xi xg,\qquad g'x'=\xi x'g',\qquad gx'=\xi^{-1} x'g, \qquad g'x=\xi^{-1} xg', \\ xx'-\xi^{-1}x'x=1-\xi^{-1}g'^{-1}g.
\end{gather*}
\item The actions of $g'$ and $g$ on a $\widehat{D}(T_p(\xi))$-module $V$ yield a $(\mathbb{Z}/p)^2$-grading on $V$ by their eigenspaces, i.e., $g'$, $g$ act on $V_{ij}$ by $\xi^i$, $\xi^j$, respectively. Thus $x$ and $x'$ have degrees $(-1,1)$ and $(1,-1)$, respectively.
\item The $S^1$-action of $\sigma$ on $V_{ij}$ is
\begin{gather}\label{sigmaaction}
\sum_{l=0}^{p-1}\dfrac{\xi^{(i-l)(j+l)}}{(l)_{\xi^{-1}}!}x'^lx^l.
\end{gather}
\end{itemize}
\end{Corollary}
\begin{proof}
We use the identification of vector spaces $\widehat{D}(T_p(\xi))=(T_p(\xi))^*\otimes T_p(\xi)$ as in Lemma~\ref{bas} followed by $(T_p(\xi))^*\otimes T_p(\xi) \simeq T_p(\xi)\otimes T_p(\xi)$ from Lemma~\ref{nenlem}. We~let
\begin{gather}
x'=x\otimes 1, \qquad x=1\otimes x, \qquad g'=g\otimes 1, \qquad g=1\otimes g
\end{gather}
in the latter. To derive the rest of the relations we apply \eqref{mult1}. The action of $\sigma=\sum_{ij}\big(g^ix^j\big)^*\otimes g^ix^j$ is computed on the graded components directly.
\end{proof}
Observe that it follows from Corollary~\ref{gens} that $gg'\in\widehat{D}(T_p(\xi))$ is central. Since $(gg')^p=1$, its action on $\widehat{D}(T_p(\xi))$ is diagonalizable with eigenvalues $\xi^s$, $s\in\mathbb{Z}/p$. Thus as an~algebra
\begin{gather}\label{split}
\widehat{D}(T_p(\xi))=\bigoplus_s \widehat{D}(T_p(\xi))/(gg'-\xi^s)
\end{gather}
so that it suffices to understand $\widehat{D}(T_p(\xi))/(gg'-\xi^s)$.
There are two cases: $p=2$ and $p>2$. We~will begin by briefly discussing the latter (though without addressing the $S^1$-action), and then concentrate our attention on the former (examining the $S^1$-action) to achieve the goal set out in the abstract.
\subsection[The case of p>2]{The case of $\boldsymbol{p>2}$}
Let $p>2$; then there exists a primitive $p$th root of unity $q\in k$ such that
\begin{gather}
q^2=\xi^{-1}.
\end{gather}
We then have
\begin{gather}\label{sbys}
\widehat{D}(T_p(\xi))/(gg'-\xi^s)\simeq u_q({\mathfrak{sl}}_2).
\end{gather}
More precisely, let
\begin{gather}
E=\frac{q^{s+1}}{q-q^{-1}}x', \qquad F=xg',\qquad\text{and}\qquad K=q^{s+1}g,
\end{gather}
so that $\widehat{D}(T_p(\xi))/(gg'-\xi^s)$ is generated by $E$, $F$, $K$ subject to
\begin{gather*}
E^p=F^p=K^p-1=0, \\ [E,F]=\frac{K-K^{-1}}{q-q^{-1}},\qquad KEK^{-1}=q^2E,\qquad \text{and}\qquad KFK^{-1}=q^{-2}F.
\end{gather*}
This shows that:
\begin{gather}\label{justasindh}
\widehat{D}(T_p(\xi))\simeq u_q({\mathfrak{sl}}_2)\otimes\mathcal{O}_{\mathbb{Z}/p}\simeq u_q({\mathfrak{sl}}_2)\otimes k\mathbb{Z}/p.
\end{gather}
As we will see below, the case of $p=2$ is very different: in particular, as $s$ varies, the algebra changes significantly, whereas here it does not~\eqref{sbys}.
\begin{Remark}\label{DD}
See~\cite{uqsl2}, where the Taft algebra is called the quantum ${\mathfrak{sl}}_2$ Borel algebra, and its Drinfeld double is computed. The result obtained is identical to ours in~\eqref{justasindh}, though we compute the twisted double. This is not surprising as we see from Appendix~\ref{appx} that our analysis of $\widehat{D}(H)$ can be interpreted as that of~$D(H)$, with $\sigma$ being a new ingredient. Note that more generally, a comparison between Drinfeld doubles of Nichols algebras and quantized universal enveloping algebras can be found in~\cite{extra}.
\end{Remark}
\subsection[The case of p=2]{The case of $\boldsymbol{p=2}$}
We need to describe the algebra $\widehat{D}(T_2(-1))$ in greater detail, paying particular attention to the element $\sigma$. By \eqref{split} the category of DG-modules over $\widehat{D}(T_2(-1))[\theta]$ is a product of categories, $\mathcal{C}_0\times \mathcal{C}_1$, corresponding to the cases $s=0$ and $s=1$. We~will deal with both separately. More precisely, for a $\widehat{D}(T_2(-1))[\theta]$-module $V$, we decompose
\begin{gather}
V=(V_{00}\oplus V_{11})\oplus(V_{01}\oplus V_{10}).
\end{gather}
Observe that by Corollary~\ref{gens} we have that
\begin{gather}
xx'+x'x=1+(-1)^s,
\end{gather}
so that
\begin{gather}\label{dd}
(x'x)^2=(1+(-1)^s)x'x.
\end{gather}
We see from \eqref{dd} that the minimal polynomial of $x'x$ depends only on $s$; this is exclusive to $p=2$ and makes this case tractable. Note that by \eqref{sigmaaction}:
\begin{gather}\label{sig}
\sigma|_{V_{00}}=1-x'x,\qquad \sigma|_{V_{11}}=-1+x'x,\qquad\text{and}\qquad \sigma|_{V_{01}}=\sigma|_{V_{10}}=1+x'x.
\end{gather}
Let
\begin{gather}
D_s=\widehat{D}(T_2(-1))/(gg'-(-1)^s).
\end{gather}
We begin with $s=0$: The category of $D_0$-modules consists of $\mathbb{Z}/2$-graded vector spaces ($V=V_{00}\oplus V_{11}$) equipped with degree-changing operators $x$ and $x'$ subject to the relation $xx'+x'x=2$. The action of $\sigma-1$ on $V_{00}$ is $-x'x$ and on $V_{11}$ it is $x'x-2$.
\begin{Lemma} The category $\mathcal{C}_0$ consists of the mixed complexes of~{\rm \cite{kassel}}.
\end{Lemma}
\begin{proof}
Note that $D_0/(\sigma-1)$-modules are just vector spaces. Indeed, let $y=x'/2$. For improved clarity, denote by $x_i$ and $y_i$ the action of $x$ and $y$, respectively, that originates at $V_{ii}$. After modding out by $\sigma-1$ we have by \eqref{sig} that $y_1x_0=0\implies x_1y_0=1$ and $y_0x_1=1$. So
\begin{gather}
y_0:V_{00}\simeq V_{11}: x_1.
\end{gather}
Furthermore, $x^2=y^2=0$ so that both $x_0$ and $y_1$ are the $0$ maps. Thus
\begin{gather}
V=V_{00}\oplus V_{11}\mapsto V_{00}
\end{gather}
establishes an equivalence of categories between $D_0/(\sigma-1)$-modules and $\text{Vec}$.
The action of $\sigma-1$ on $D_0$ is diagonalizable by \eqref{dd} and so by the proof of Lemma~\ref{diag} the algebras $D_0[\theta]$ ($d\theta=\sigma-1$) and $D_0/(\sigma-1)[\theta]$ ($d\theta=0$) are quasi-isomorphic. Thus, $\mathcal{C}_0$, being DG-equivalent to $D_0/(\sigma-1)[\theta]$-modules, is, by the above discussion, equivalent to $k[\theta]$-modules. These are just the mixed complexes of~\cite{kassel}.
\end{proof}
\begin{Remark}
Note that not only does $\mathcal{C}_0$ consist of the usual mixed complexes but it also does not provide any evidence of the need for the mixed $aYD$-contramodules (see the proof above).
\end{Remark}
Moving on to $s=1$ we find that things change for the better. Recall that $D_1$ is generated by
\begin{gather}
x,\ x',\ g
\end{gather}
subject to
\begin{gather}
x^2=x'^2=g^2-1=0, \qquad xx'=-x'x, \qquad gx=-xg, \qquad gx'=-x'g.
\end{gather}
Furthermore, $\sigma-1$ acts as $x'x$ by \eqref{sig}.
\begin{Proposition}\label{notsscase} Let $H=T_2(-1)$, then the mixed complexes in the category of stable anti-Yetter--Drinfeld contramodules are not DG-equivalent to the category of mixed anti-Yetter--Drinfeld contramodules.
\end{Proposition}
\begin{proof}
By the preceding discussion, i.e., the decomposition of the category into a product, it~suf\-fices to show that the categories $D_1[\theta]\text{-mod}$ (where $d\theta=x'x$) and $D_1/(x'x)[\theta]\text{-mod}$ (where $d\theta =0$) are not DG-equivalent.
Recall that the Hochschild cohomology $HH^i(C^\bullet)$ of a DG algebra $C^\bullet$ is an invariant of its DG category of modules~\cite{dginvar}. We~will compute $HH^{-1}$. In our simple case of a DG algebra $C=C^{-1}\stackrel{d}{\to} C^0$ concentrated in two degrees, we have
\begin{gather}
HH^{-1}(C)={\rm ker} \big(C^{-1}\stackrel{\alpha}{\to} C^0\oplus {\rm Hom}\big(C^0, C^{-1}\big)\big),
\end{gather}
where $\alpha(x)=(dx, [x,-])$.
The key observation here is that the center of $D_1$ is spanned by $1$, $xx'$, $xx'g$, while that of~$D_1/(x'x)$ is spanned by $1$. Thus, in the first case we get that $HH^{-1}$ is spanned by $xx'$ and~$xx'g$. In the second case it is spanned by $1$.
\end{proof}
\appendix
\section{Appendix}\label{appx}
Our purpose in this section is to compare $\widehat{D}(H)$ of Definition~\ref{dhhat} to the more familiar Drinfeld double $D(H)$ in the case of a finite dimensional Hopf algebra $H$. Though not original (see~\cite{quantumgroups} for Definition~\ref{yd} and~\cite{HKRS2} for Definition~\ref{ayd}), since our conventions differ from the usual ones we spell out the definitions again below (for $H$ finite dimensional):
\begin{Definition}\label{yd} The algebra $D(H)$ is generated by $H$ and $H^*$ subject to the relations:
\begin{gather}
\chi h=h^2\chi\big(h^3(-)S^{-1}\big(h^1\big)\big),
\end{gather}
for $\chi\in H^*$ and $h\in H$. Thus $D(H)=H\otimes H^*$ as vector spaces.
\end{Definition}
\begin{Definition}\label{ayd} The algebra $D_a(H)$ is generated by $H$ and $H^*$ subject to the relations:
\begin{gather}
\chi h=h^2\chi\big(h^3(-)S\big(h^1\big)\big),
\end{gather}
for $\chi\in H^*$ and $h\in H$. Thus $D_a(H)=H\otimes H^*$ as vector spaces. The central element is $\sigma=e_i\otimes e^i$, where $e_i$ is any basis of $H$ and $e^i$ is its dual basis of $H^*$.
\end{Definition}
Note that modules over $D_a(H)$ as specified in Definition~\ref{ayd} can be identified with what is usually called left-right anti-Yetter--Drinfeld modules, i.e., left modules and right como\-du\-les~\cite{HKRS2}.
Recall that if $H$ is finite dimensional then we have an $S^1$-equivariant equivalence between $\widehat{D}(H)$-modules and $D_a(H)$-modules~\cite{s2}. We~will thus focus on the comparison between $D_a(H)$ and~$D(H)$. It is known that in general they give very different categories of~modules~\cite{gentaft}.
It is immediate that if $S^2={\rm Id}$ then the algebras in fact coincide, since the only difference bet\-ween them is $S$ in Definition~\ref{ayd} and $S^{-1}$ in Definition~\ref{yd}. Below we extend that observation slightly so as to cover our case of Taft algebras where we do not have $S^2={\rm Id}$, but instead we~get
\begin{gather}
S^2(h)=uhu^{-1}
\end{gather}
for some group-like element $u\in H$, i.e., $\Delta(u)=u\otimes u$. For $T_p(\xi)$ we have $S^2(a)=g^{-1}ag$ with $\Delta g^{-1}=g^{-1} \otimes g^{-1}$, so that $u=g^{-1}$. Recall that Hopf algebras possessing such an element $u$ as above are called pivotal, see~\cite{pivot, spherical} for example. Their categories of representations are thus pivotal as well, i.e., equipped with a monoidal isomorphism from the identity functor to the double dual functor.
This natural transformation is given by the action of $u\in H$; it is monoidal since $u$ is group-like and mapping to the double dual since $S^2(h)=uhu^{-1}$.
The following lemma is a straightforward computation but can be obtained from~\cite{gentaft}:
\begin{Lemma}\label{uhu} Let $H$ be a finite dimensional Hopf algebra and suppose that there exists a $u\in H$ with $\Delta(u)=u\otimes u$ such that $S^2(h)=uhu^{-1}$ for all $h\in H$. Then
\begin{align*}
D(H)&\to D_a(H), \\ h\otimes \chi&\mapsto h\otimes \chi((-)u)
\end{align*}
is an isomorphism of algebras.
\end{Lemma}
Thus for Taft algebras, Drinfeld doubles can play the role of $\widehat{D}(H)$, as long as we are careful to remember about the crucial central element $\sigma$.
\LastPageEnding
\end{document}
\begin{document}
\title{Hilbert--Schmidt volume of the set of mixed quantum states}
\author{Karol {\.Z}yczkowski$^{1,2}$ and Hans-J{\"u}rgen Sommers$^3$}
\affiliation {$^1$Instytut Fizyki im. Smoluchowskiego, Uniwersytet Jagiello{\'n}ski, ul. Reymonta 4, 30-059 Krak{\'o}w, Poland}
\affiliation{$^2$Centrum Fizyki Teoretycznej, Polska Akademia Nauk, Al. Lotnik{\'o}w 32/44, 02-668 Warszawa, Poland}
\affiliation{$^3$Fachbereich Physik, Universit\"{a}t Duisburg-Essen, Standort Essen, 45117 Essen, Germany}
\date{July 9, 2003}
\begin{abstract}
We compute the volume of the convex $N^2-1$ dimensional set ${\cal M}_{N}$ of density matrices of size $N$ with respect to the Hilbert-Schmidt measure. The hyper--area of the boundary of this set is also found, and its ratio to the volume provides information about the complex structure of ${\cal M}_{N}$. Similar investigations are also performed for the smaller set of all real, symmetric density matrices. As an intermediate step we analyze volumes of the unitary and orthogonal groups and of the flag manifolds.
\end{abstract}
\pacs{03.65.Ta}
\maketitle
\begin{center}
{\small e-mail: [email protected] \ \quad \ [email protected]}
\end{center}
\section{Introduction}
Although the notion of a density matrix is one of the fundamental concepts discussed in the elementary courses of quantum mechanics, the structure of the set ${\cal M}_{N}$ of all density matrices of size $N$ is not easy to characterize \cite{Bl76,Ha78,ACH93}. The only exception is the case $N=2$, for which ${\cal M}_{2}$ embedded in ${\mathbb R}^3$ has the appealing form of the {\sl Bloch ball}. Its boundary, $\partial {\cal M}_{2}$, consists of pure states and forms the {\sl Bloch sphere}. For a larger number of states the dimensionality of ${\cal M}_{N}$ grows quadratically with $N$, which makes its analysis involved. In particular, for $N>2$ the set of pure states forms a $2N-2$ dimensional manifold, of measure zero in the $N^2-2$ dimensional boundary $\partial {\cal M}_{N}$.
In this work we compute the volume of ${\cal M}_{N}$ with respect to the Hilbert--Schmidt (HS) measure. The HS measure is defined by the HS metric, which is distinguished by the fact that it induces the flat, Euclidean geometry on the set of mixed states. The (hyper)area of the boundary of the space of density matrices, $\partial {\cal M}_{N}$, is also computed, as well as the area of (hyper)edges of this set - the HS volume of the subspace of density matrices of an arbitrary rank $k <N$. In the special case of $k=1$ we obtain a well--known formula for the volume of the space of pure states, equivalent to the complex projective manifold ${\mathbb C}P^{N-1}$. A similar analysis is also performed for the set of real density matrices. To calculate the volume of the set of complex (real) mixed states we use the volume of the unitary (orthogonal) groups and the volume of the complex (real) flag manifolds - these results are described in the appendix.
A motivation for such a study is twofold. On one hand, the complex structure of the set of mixed quantum states is interesting in itself. It is well--known that for $N>2$ the $D=N^2-1$ dimensional set ${\cal M}_{N}$ is neither a $D$--ball nor a polytope; but what does it look like? More like a ball or more like a polytope?
Instead of using techniques of differential geometry and computing the average curvature on the boundary of the set ${\cal M}_{N}$, we compute the volume of its boundary and compare it with the volume of the $D-1$ sphere, which surrounds the ball of the same volume as ${\cal M}_{N}$. Such a comparison shows to what extent the shape of the body of mixed quantum states differs from the ball, in the sense that more (hyper)area of the surface is needed to cover the same volume. Complementary information characterizing the structure of a given set is obtained by calculating the ratio between the area of its boundary and its volume. Among all $D$-dimensional bodies of a fixed volume, such a ratio is smallest for the $D$--ball. Hence, computing such a ratio for the $D$--dimensional body of mixed quantum states, we may compare it with similar ratios obtained for $D$--balls, $D$--cubes and $D$--simplices.
On the other hand our investigations might be useful in characterizing the absolute volume of the subset of mixed states distinguished by a certain attribute. For instance, if $\varrho$ describes a composite system, one may ask what is the volume of the set of separable (entangled) mixed states \cite{ZHSL98,Zy99}. Furthermore, assume we are given a concrete mixed quantum state $\varrho$. It is natural to ask whether $\varrho$ is in some sense typical, e.g., whether its von Neumann entropy is close to the average taken over the entire set ${\cal M}_{N}$ with respect to the HS measure. To compute such averages (see e.g. \cite{ZS01}) it is useful to know the volume of ${\cal M}_{N}$ and to make use of integrals developed for such a calculation.
\section{Geometry of ${\cal M}_{N}$ with respect to the Hilbert-Schmidt metric}
The set of mixed quantum states ${\cal M}_{N}$ consists of Hermitian, positive matrices of size $N$, normalized by the trace condition
\begin{equation} {\cal M}_{N}:=\{\varrho: \varrho=\varrho^{\dagger}; \ \ \varrho\ge 0; \ \ {\rm tr }\varrho=1; \ \ {\rm dim}(\rho)=N \}. \label{setM} \end{equation}
It is a compact convex set of dimensionality $D=N^2-1$. Any density matrix may be diagonalized by a unitary rotation,
\begin{equation} \varrho=U \Lambda U^{-1}, \label{diagC} \end{equation}
where $\Lambda$ is a diagonal matrix of eigenvalues $\Lambda_i$. Due to the trace condition they satisfy $\sum_{i=1}^{N}\Lambda_{i}=1$, so the space of spectra is isomorphic with an $(N-1)$--dimensional simplex $\Delta_{N-1}$. Let $B$ be a diagonal unitary matrix. Since $\varrho =UB\Lambda B^{\dagger }U^{\dagger}$, in the generic case of a non-degenerate spectrum the unitary matrix $U$ is determined up to $N$ arbitrary phases entering $B$. To specify uniquely the unitary matrix of eigenvectors $U$ it is thus sufficient to select a point on the coset space $Fl^{(N)}_{\mathbb C}:=U(N)/[U(1)]^N$, called the complex {\sl flag manifold}. The generic density matrix is thus determined by $(N-1)$ parameters determining eigenvalues and $N^2-N$ parameters related to eigenvectors, which sum up to the dimensionality $D$ of ${\cal M}_{N}$. Although for degenerate spectra the dimension of the flag manifold decreases (see e.g. \cite{ACH93,ZSlo01}), these cases of measure zero do not influence the estimation of the volume of the entire set of density matrices.
Several different distances may be introduced into the set ${\cal M}_{N}$ (see for instance \cite{PS96,ZSlo01}). In this work we shall use the Hilbert-Schmidt metric, which induces the flat geometry.
The Hilbert-Schmidt distance between any two density operators is defined as the Hilbert-Schmidt (Frobenius) norm of their difference,
\begin{equation} D_{\rm HS}(\varrho_1,\varrho_2)= ||\varrho_1 -\varrho_2||_{\rm HS} = \sqrt{ {\rm Tr} [(\varrho_1 - \varrho_2)^2] }. \label{HS1} \end{equation}
The set of all mixed states of size two acquires under this metric the geometry of the Bloch ball ${\bf B}^3$ embedded in ${\mathbb R}^3$. Its boundary, $\partial {\bf B}^3={\bf S}^2$, contains all pure states and is called the {\sl Bloch sphere}. To show this let us use the Bloch representation of an $N=2$ density matrix
\begin{equation} \varrho = \frac {\mathbb I}{N} + {\vec \tau}\cdot {\vec \lambda} \ , \label{Pauli} \end{equation}
where $\vec{\lambda}$ denotes the vector of three rescaled traceless Pauli matrices $\{\sigma_x, \sigma_y, \sigma_z \}/{\sqrt{2}}$. They are normalized according to tr$\lambda_i^2=1$. The three dimensional Bloch vector $\vec \tau$ is real due to Hermiticity of $\varrho$. Positivity requires tr$\varrho^2\le 1$ and this implies $|\vec \tau|\le 1/\sqrt{2}=:R_2$. Demanding equality one distinguishes the set of all pure states, $\varrho^2=\varrho$, which forms the Bloch sphere of radius $R_2$.
Consider two arbitrary density matrices and express their difference $\varrho_1-\varrho_2$ in the representation (\ref{Pauli}). The entries of this difference consist of the differences between components of both Bloch vectors ${\vec \tau}_1$ and ${\vec \tau}_2$. Therefore
\begin{equation} D_{\rm HS}\bigl( \varrho_{{\vec \tau}_1}, \varrho_{{\vec \tau}_2}\bigr)= D_{E}({\vec \tau}_1, {\vec \tau}_2) \ , \label{densHS} \end{equation}
where $D_E$ is the Euclidean distance between both Bloch vectors in ${\mathbb R}^3$. This proves that with respect to the HS metric the set ${\cal M}_{2}$ possesses the geometry of a ball ${\bf B}^3$. The unitary rotations of a density matrix $\varrho \to U\varrho U^{\dagger}$ correspond to the rotations of $\vec \tau$ in ${\mathbb R}^3$. This is due to the fact that the adjoint representation of $SU(2)$ is isomorphic with $SO(3)$.
The Hilbert--Schmidt metric induces a flat geometry inside ${\cal M}_{N}$ for arbitrary $N$. Any state $\varrho$ may be represented by (\ref{Pauli}), but now $\vec \lambda$ represents an operator-valued vector which consists of $D=N^2-1$ traceless Hermitian generators of $SU(N)$, which fulfill tr$\lambda_i\lambda_j=\delta_{ij}$. This generalized Bloch representation of density matrices for arbitrary $N$ was introduced by Hioe and Eberly \cite{HE81}, and recently used in \cite{RMNDMC01}. The case $N=3$, related to the Gell-Mann matrices, is discussed in detail in the paper by Arvind et al. \cite{AMM97}. The generalized Bloch vector $\vec \tau$ (also called {\sl coherence vector}) is $D$ dimensional. In the general case of an arbitrary $N$ the right hand side of (\ref{densHS}) denotes the Euclidean distance between two Bloch vectors in ${\mathbb R}^{N^2-1}$. Positivity of $\varrho$ implies a bound on its length
\begin{equation} |\vec{\tau}| \le D_{\rm HS}\bigl( {\mathbb I}/N ,|\psi\rangle \langle {\psi}| \bigr) = \sqrt{\frac{N-1}{N}}=:R_N . \label{Rn} \end{equation}
In contrast to the Bloch sphere, the complex projective space ${\mathbb C}{\bf P}^{N-1}$, which contains all pure states, forms for $N>2$ only a measure-zero, simply connected $2(N-1)$-dimensional subset of the $N^2-2$ dimensional sphere of radius $R_N$ embedded in ${\mathbb R}^{N^2-1}$.
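The following short computation is our addition (it does not appear in the original text) and serves only to make the bound (\ref{Rn}) explicit; it uses nothing beyond the representation (\ref{Pauli}) and the normalization tr$\lambda_i\lambda_j=\delta_{ij}$ of the traceless generators quoted above:
$$ {\rm Tr}\varrho^2={\rm Tr}\Bigl(\frac{\mathbb I}{N}+{\vec \tau}\cdot {\vec \lambda}\Bigr)^2 =\frac{1}{N}+\frac{2}{N}\sum_i \tau_i\,{\rm tr}\lambda_i+\sum_{i,j}\tau_i\tau_j\,{\rm tr}\lambda_i\lambda_j =\frac{1}{N}+|{\vec \tau}|^2 . $$
A positive matrix with unit trace has eigenvalues in $[0,1]$, so ${\rm Tr}\varrho^2\le{\rm Tr}\varrho=1$, hence $|{\vec \tau}|^2\le 1-1/N$, i.e. $|{\vec \tau}|\le R_N$, with equality exactly for pure states, for which ${\rm Tr}\varrho^2=1$.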
Thus not every vector $\vec \tau$ of the maximal length $R_N$ represents a quantum state. This is related to the fact that for $N\ge 3$ the adjoint representation of $SU(N)$ forms only a subset of $SO(N^2-1)$ (see e.g. \cite{Ma68}). Sufficient and necessary conditions for a Bloch vector to represent a pure state were given in \cite{AMM97} for $N=3$, and in \cite{JS01} for an arbitrary $N$. Furthermore, by far not all vectors of length shorter than $R_N$ represent a quantum state, as not all the points inside a hyper-sphere belong to the simplex inscribed inside it. Necessary conditions for a Bloch vector to represent a mixed quantum state were recently provided by Kimura \cite{Ki03}. On the other hand, there exists a smaller sphere inscribed inside the set ${\cal M}_{N}$. Its radius reads \cite{Ha78}
\begin{equation} r_N = D_{\rm HS}\bigl({\mathbb I}/N ,\varrho_{N-1} \bigr) = \frac{1}{\sqrt{N(N-1)}}=\frac{R_N}{N-1}, \label{radius2} \end{equation}
where $\varrho_{N-1}$ denotes any state with the spectrum $(\frac{1}{N-1},...,\frac{1}{N-1},0)$.
\section{Hilbert-Schmidt measure}
Any metric in the space of mixed quantum states generates a measure, inasmuch as one can assume that drawing random density matrices from each ball of a fixed radius is equally likely. The balls are understood with respect to a given metric. In this work we investigate the measure induced by the Hilbert-Schmidt distance (\ref{HS1}). The infinitesimal distance takes a particularly simple form
\begin{equation} ({\rm d}s_{\rm HS})^2= {\rm Tr} [({\rm d} \varrho)^2] \label{HS2} \end{equation}
valid for any dimension $N$. Making use of the diagonal form $\varrho=U\Lambda U^{-1}$ we may write
\begin{equation} {\rm d}\varrho = U[ {\rm d}\Lambda +U^{-1}{\rm d U}\Lambda - \Lambda U^{-1}{\rm d U} ] U^{-1}. \label{drho1} \end{equation}
Thus (\ref{HS2}) can be rewritten as
\begin{equation} ({\rm d}s_{\rm HS})^2= \sum_{i=1}^N ({\rm d}\Lambda_i)^2 + 2 \sum_{i<j}^N (\Lambda_i-\Lambda_j)^2 |(U^{-1}{\rm d}U)_{ij}|^2 . \label{HS2b} \end{equation}
Since the density matrices are normalized, $\sum_{i=1}^N \Lambda_i=1$, we have $\sum_{i=1}^N {\rm d}\Lambda_i=0$. Hence one may consider the variation of the $N$-th eigenvalue as a dependent one, ${\rm d}\Lambda_N= -\sum_{i=1}^{N-1} {\rm d}\Lambda_i$, which implies
\begin{equation} \sum_{i=1}^N ({\rm d}\Lambda_i)^2 = \sum_{i=1}^{N-1} ({\rm d}\Lambda_i)^2 + \bigl( \sum_{i=1}^{N-1} {\rm d}\Lambda_i \bigr)^2 =\sum_{i,j=1}^{N-1}{\rm d}\Lambda_i g_{ij}{\rm d}\Lambda_j . \label{HS2c} \end{equation}
The corresponding volume element gains a factor $\sqrt{{\rm det}g}$, where $g$ is the metric in the $(N-1)$ dimensional simplex $\Delta_{N-1}$ of eigenvalues. From (\ref{HS2c}) one may read out the explicit form of the metric $g_{ij}$
\begin{equation} g=\left[ \begin{array} [c]{ccc} 1 & & 0\\ & \ddots & \\ 0 & & 1 \end{array} \right] + \left[ \begin{array} [c]{ccc} 1 & \cdots & 1\\ \vdots & \ddots & \vdots\\ 1 & \cdots & 1 \end{array} \right] \ . \label{metricg} \end{equation}
It is easy to check that the spectrum of the $N-1$ dimensional matrix $g$ consists of one eigenvalue equal to $N$ and the remaining $N-2$ eigenvalues equal to unity, so that det$g=N$. Thus the Hilbert-Schmidt volume element is given by
\begin{equation} {\rm d}V_{\rm HS} = \sqrt{N} \prod_{j=1}^{N-1} {\rm d}\Lambda_j \prod_{j<k}^{1\cdots N} (\Lambda_j-\Lambda_k)^2\ | \prod_{j<k}^{1\cdots N} 2 {\rm Re}(U^{-1}{\rm d}U)_{jk} {\rm Im}(U^{-1}{\rm d}U)_{jk} |.
\label{HS2d} \end{equation}
and has the following product form
\begin{equation} {\rm d} V = {\rm d} \mu (\Lambda_1,\Lambda_2,...,\Lambda_N) \times {\rm d} \nu_{\rm Haar} \ . \label{dVdmupr} \end{equation}
The first factor depends only on the eigenvalues $\Lambda_i$, while the latter depends only on the eigenvectors of $\varrho$, which compose the unitary matrix $U$.
Any unitary matrix may be considered as an element of the Hilbert-Schmidt space of operators with the scalar product $\langle A|B\rangle={\rm Tr}A^{\dagger}B$. This suggests the following definition of an invariant metric of the unitary group $U(N)$,
\begin{equation} ({\rm d}s)^2 := -{\rm Tr} (U^{-1} {\rm d}U)^2= \sum_{jk=1}^N|(U^{-1}{\rm d}U)_{jk}|^2 = \sum_{j=1}^N|(U^{-1}{\rm d}U)_{jj}|^2 + 2 \sum_{j<k=1}^N |(U^{-1}{\rm d}U)_{jk}|^2 \ . \label{ds2} \end{equation}
This metric induces the unique Haar measure $\nu_{\rm Haar}$ on $U(N)$, invariant with respect to unitary transformations, $\nu_{\rm Haar}(W)=\nu_{\rm Haar}(UW)$, where $W$ denotes an arbitrary measurable subset of $U(N)$. Integrating the volume element corresponding to (\ref{ds2}) over the unitary group we obtain the volume
\begin{equation} {\rm Vol} \bigl[ U(N)\bigr] = \frac{ (2\pi)^{N(N+1)/2}}{ 1! 2! \cdots (N-1)!} . \label{volUNb} \end{equation}
Integrating the volume element with the diagonal terms in (\ref{ds2}) omitted (in that case the diagonal elements of $U$ are fixed by $U_{ii}\ge 0$) we obtain the volume of the complex flag manifold, $Fl^{(N)}_{\mathbb C}:=U(N)/[U(1)^N]$,
\begin{equation} {\rm Vol} \bigl[ Fl^{(N)}_{\mathbb C}\bigr] = \frac{ {\rm Vol} \bigl[ U(N)\bigr]}{ (2\pi)^N} = \frac{ (2\pi)^{N(N-1)/2}}{ 1! 2! \cdots (N-1)!} . \label{volFlb} \end{equation}
Both results have been known in the literature for almost fifty years \cite{Hu63}. However, since many different conventions in defining the volume of the unitary group are in use \cite{Ma81,Tu85,Fu01,TBS02,BST02,TS02,Ca02}, we sketch a derivation of the above expressions in the appendix and provide a list of related results.
Comparing formulae (\ref{HS2d}) and (\ref{ds2}) we recognize that the measure $\nu$, responsible for the choice of eigenvectors of $\varrho$, is the natural measure on the complex flag manifold $Fl^{(N)}_{\mathbb C}=U(N)/[U(1)^N]$ induced by the Haar measure on $U(N)$. Since the trace is unitarily invariant, it follows directly from the definition (\ref{HS2}) that the volume element with respect to the HS measure is invariant with respect to the group of unitary rotations, ${\rm d}V_{\rm HS}(\varrho)={\rm d}V_{\rm HS}(U\varrho U^{\dagger})$. Such a property is characteristic of any {\sl product measure} of the form (\ref{dVdmupr}). Several product measures with different choices of $\mu$ were examined in \cite{Zy99,Sl99a,ZS01,Ca02}.
Integrating the volume element (\ref{HS2d}) with respect to the eigenvectors of $\varrho$ distributed according to the Haar measure one obtains the probability distribution in the simplex of eigenvalues
\begin{equation} P^{(2)}_{\rm HS}(\Lambda_1,\dots,\Lambda_N) = C^{\rm HS}_N \delta(1-\sum_{j=1}^N \Lambda_j) \prod_{j<k}^N (\Lambda_j-\Lambda_k)^2, \label{HS3} \end{equation}
where for future convenience we have decorated the symbol $P$ with the superscript $^{(2)}$, consistent with the exponent in the last factor.
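A quick illustration for $N=2$ (added here for concreteness; it is not part of the original derivation): writing $\Lambda_1=\Lambda$ and $\Lambda_2=1-\Lambda$, the distribution (\ref{HS3}) reduces to a density proportional to $(2\Lambda-1)^2$ on $[0,1]$, and the normalization
$$ \int_0^1 3\,(2\Lambda-1)^2\,{\rm d}\Lambda = 3\cdot\frac{1}{3}=1 $$
fixes the prefactor to $3$, in agreement with the general expression for $C^{\rm HS}_N$ quoted next.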
As discussed in the following section, the normalization constant $C^{\rm HS}_N$ may be expressed \cite{ZS01} in terms of the Euler Gamma function $\Gamma(x)$ \cite{SO87}
\begin{equation} C_{N}^{\rm HS} = \frac{\Gamma(N^2)} {\prod_{j=0}^{N-1} \Gamma(N-j) \Gamma(N-j+1) } . \label{constn} \end{equation}
The above joint probability distribution, derived by Hall \cite{Ha98}, defines the measure $\mu_{\rm HS}$ in the space of diagonal matrices, and the {\sl Hilbert-Schmidt} measure (\ref{HS2d}) in the space of density matrices ${\cal M}_{N}$.
Interestingly, the very same measure may be generated by drawing random pure states $|\phi\rangle\in {\cal H}_1 \otimes {\cal H}_2$ of a composite $N \times N$ system according to the Fubini-Study measure on ${\mathbb C}{\bf P}^{N^2-1}$. Then the density matrices of size $N$ obtained by partial trace, $\varrho={\rm tr}_2(|\phi\rangle\langle\phi|)$, are distributed according to the HS measure \cite{Br96,Ha98,ZS01}. Alternatively, one may generate a random matrix $A$ of the Ginibre ensemble (a non-Hermitian complex matrix with all entries independent Gaussian variables with zero mean and a fixed variance) and obtain an HS-distributed random density matrix by the projection $\varrho=A^{\dagger}A/{\rm tr}A^{\dagger}A$ \cite{ZS01}; a short numerical sketch of this construction is given below. A similar approach was recently advocated by Tucci \cite{Tu02}, who used the name 'uniform ensemble' just for the ensemble of density matrices generated according to the HS measure.
\section{Volume of the set of mixed states}
For later convenience let us introduce generalized normalization constants
\begin{equation} \frac{1}{C_{N}^{(\alpha,\beta)}}:= \int_0^{\infty} {\rm d}\Lambda_1 \cdots {\rm d}\Lambda_N \delta(\sum_{i=1}^N \Lambda_i -1) \prod_{i=1}^N \Lambda_i^{\alpha-1} \prod_{i<j} |\Lambda_i-\Lambda_j|^{\beta} \label{constab} \end{equation}
with $\alpha, \beta >0$. These constants may be calculated using the formula for the Laguerre ensemble, discussed in the book of Mehta \cite{Me91},
\begin{equation} \int_0^{\infty} {\rm d}\Lambda_1 \cdots {\rm d}\Lambda_N \exp\bigl(-\sum_{i=1}^N \Lambda_i\bigr) \prod_{i=1}^N \Lambda_i^{\alpha-1} \prod_{i<j} |\Lambda_i-\Lambda_j|^{\beta} = \prod_{j=1}^N \Bigl[ \frac{ \Gamma[1+j\beta/2] \Gamma[\alpha +(j-1)\beta/2]} { \Gamma[1+\beta/2]} \Bigr] . \label{constab1} \end{equation}
Substituting $x_i^2=\Lambda_i$ we may bring the latter integral to the Gaussian form. Expressing it in spherical coordinates we get the integral (\ref{constab}) and eventually obtain
\begin{equation} \frac{1}{C_N^{(\alpha,\beta)} }:= \frac{1}{\Gamma[\alpha N +\beta N(N-1)/2]} \prod_{j=1}^N \Bigl[ \frac{ \Gamma[1+j\beta/2] \Gamma[\alpha +(j-1)\beta/2]} { \Gamma[1+\beta/2]} \Bigr] . \label{constab2} \end{equation}
By definition $C_N^{\rm HS}=C_N^{(1,2)}$, and this special case of the above expression reduces to (\ref{constn}).
To obtain the Hilbert-Schmidt volume of the set of mixed states ${\cal M}_{N}$ one has to integrate the volume element (\ref{HS2d}) over eigenvalues and eigenvectors. By definition the first integral gives $1/C_N^{\rm HS}$, while the second is equal to the volume of the flag manifold. To make the diagonalization transformation (\ref{diagC}) unique one has to restrict to a certain order of eigenvalues, say, $\Lambda_1 < \Lambda_2 < \cdots < \Lambda_N$ (a generic density matrix is not degenerate), which corresponds to a choice of a certain Weyl chamber of the eigenvalue simplex $\Delta_{N-1}$.
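The following minimal sketch of the Ginibre construction described above is our own illustrative addition and is not part of the original text; the use of Python with NumPy, the function name and the sample size are arbitrary choices made only for the example.
\begin{verbatim}
import numpy as np

def random_density_matrix_hs(n, rng=None):
    """Sample a density matrix of size n from the Hilbert-Schmidt measure,
    via the Ginibre projection rho = A^dagger A / tr(A^dagger A)."""
    rng = np.random.default_rng() if rng is None else rng
    # Complex Ginibre matrix: independent Gaussian real and imaginary parts.
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = a.conj().T @ a
    return rho / np.trace(rho).real

# Example: estimate an HS average, here the mean purity <tr rho^2> for N = 3.
rng = np.random.default_rng(0)
samples = [random_density_matrix_hs(3, rng) for _ in range(5000)]
print(np.mean([np.trace(r @ r).real for r in samples]))
\end{verbatim}
Samples generated in this way can be used to estimate Hilbert--Schmidt averages of the kind mentioned in the introduction, such as the mean purity or the mean von Neumann entropy over ${\cal M}_{N}$.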
Returning to the Weyl chamber counting: different permutations of the vector of $N$ generically different eigenvalues $\Lambda_i$ belong to the same unitary orbit. The number of different permutations (Weyl chambers) equals $N!$, so the volume reads
\begin{equation} V^{(2)}_N:={\rm Vol}_{\rm HS} \bigl( {\cal M}_{N} \bigr) = \frac{\sqrt{N}}{N!} \ \frac{{\rm Vol}\bigl(Fl^{(N)}_{\mathbb C}\bigr)}{C_N^{\rm HS} }. \label{volmix2a} \end{equation}
The square root stems from the volume element (\ref{HS2d}), and the index $^{(2)}$ refers to the general case of complex density matrices. Making use of (\ref{constn}) and (\ref{volFlb}) we arrive at the final result \footnote{Apart from the first factor $\sqrt{N}$, the same formula already appeared in the work of Tucci \cite{Tu02}.}
\begin{equation} V^{(2)}_N =\sqrt{N} (2\pi)^{N(N-1)/2}\ \frac{\Gamma(1) \cdots \Gamma(N)} {\Gamma(N^2)} . \label{volmix2b} \end{equation}
Substituting $N=2$ we are pleased to receive $V^{(2)}_2=\pi \sqrt{2}/3$ - exactly the volume of the Bloch ball ${\bf B}^3$ of radius $R_2=1/\sqrt{2}$. This result may also be found in the notes by Caves \cite{Ca02}, who also derived an explicit integral for the volume of the set of mixed states for arbitrary $N$.
The next result $V^{(2)}_3=\pi^3 /(840\sqrt{3})$ allows us to characterize the difference between the set ${\cal M}_{3}\subset {\mathbb R}^8$ and the ball ${\bf B}^8$. The set of mixed states is inscribed into the sphere of radius $R_3=\sqrt{2/3}\approx 0.816$, while the maximal ball contained inside has the radius $r_3=R_3/2\approx 0.408$. Using Eq. (\ref{volSNBN}) we find that the radius of the $8$--ball of volume $V_3$ is $\rho_3\approx 0.519$. The distance from the center of ${\cal M}_{3}$ to its boundary varies with the direction in ${\mathbb R}^8$ from $r_3$ to $R_3$, in contrast to the $N=2$ case of the Bloch ball, for which $R_2=r_2=\rho_2=1/\sqrt{2}$. The average HS--distance from the center of ${\cal M}_{3}$ to its boundary is equal to $\rho_3$. Similar calculations performed for $N=4$ give the maximal radius $R_4=\sqrt{3/4}\approx 0.866$, the minimal radius $r_4=R_4/3\approx 0.289$ and the 'mean' radius $\rho_4\approx 0.428$ which generates the ball ${\bf B}^{15}$ of the same volume as $V_4$.
In general, let $\rho_N$ denote the radius of a ball ${\bf B}^{N^2-1}$ of the same volume as the set ${\cal M}_{N}$. The volume $V_N$ tends to zero if $N\to \infty$, but there is no reason to worry about it. The same is true for the volume of the $N$--ball, see (\ref{volSNBN}). This is just a consequence of the choice of units. We are comparing the volume of an object in ${\mathbb R}^N$ with the volume of a hypercube $C^N$ of side one, and it is easy to understand that the larger the dimension, the smaller the volume of the ball inscribed in it.
\section{Area of the boundary of the set of mixed states}
The boundary of the set of mixed states is far from being trivial. Formally it may be written as the set of solutions of the equation det$\varrho=0$, which contains all matrices of lower rank. The boundary $\partial {\cal M}_{N}$ contains orbits of different dimensionality generated by spectra of different rank and degeneracy (see e.g. \cite{ACH93,ZSlo01}). Fortunately all of them are of measure zero except for the generic orbits created by unitary rotations of diagonal matrices with all eigenvalues different and one of them equal to zero; $\Lambda=\{0, \Lambda_2 <\Lambda_3 <\cdots <\Lambda_N\}$.
Such spectra form the $N-2$ dimensional simplex $\Delta_{N-2}$, which contains $(N-1)!$ Weyl chambers - this is the number of possible permutations of elements of $\Lambda$ which all belong to the same unitary orbit. Hence the hyper-area of the boundary may be computed in a way analogous to (\ref{volmix2a}),
\begin{equation} S^{(2)}_{N}:={\rm Vol}_{\rm HS} \bigl( \partial {\cal M}_{N} \bigr) = \frac{\sqrt{N-1}}{(N-1)!} \ \frac{{\rm Vol}\bigl(Fl^{(N)}_{\mathbb C}\bigr)}{C_{N-1}^{(3,2)} }. \label{volare2a} \end{equation}
The change of the parameter $\alpha$ in (\ref{constab}) from $1$ to $3$ is due to the fact that setting one component of an $N$--dimensional vector to zero reduces the Vandermonde factor of size $N$ to the Vandermonde factor of size $N-1$ times $\prod_i \Lambda_i$; raised to the power $\beta$, this extra factor shifts $\alpha$ by $\beta$, i.e., to $2$ for $\beta = 1$ and to $3$ for $\beta = 2$. Applying (\ref{constab2}) and (\ref{volFlb}) we obtain an explicit result
\begin{equation} S^{(2)}_{N}= \sqrt{N-1} \ (2\pi)^{N(N-1)/2}\ \frac{\Gamma(1) \cdots \Gamma(N+1)} {\Gamma(N)\Gamma(N^2-1)} . \label{volare2b} \end{equation}
For $N=2$ we get $S_2^{(2)}=2\pi$ - just the area of the Bloch sphere ${\bf S}^2$ of radius $R_2=1/\sqrt{2}$. The area of the $7$-dim boundary of ${\cal M}_{3}$ reads $S_3^{(2)}=\sqrt{2} \pi^3/105$.
In an analogous way we may find the volume of edges, formed by the unitary orbits of the vector of eigenvalues with two zeros. More generally, states of rank $N-n$ are unitarily similar to diagonal matrices with $n$ eigenvalues vanishing, $\Lambda=\{0,\dots,0,\Lambda_{n+1} <\Lambda_{n+2} <\cdots <\Lambda_N\}$. These edges of order $n$ are $N^2-n^2-1$ dimensional, since the dimension of the set of such spectra is $N-n-1$, while the orbits have the structure of $U(N)/[U(n)\times (U(1))^{N-n}]$ and dimensionality $N^2-n^2-(N-n)$. Repeating the reasoning used to derive (\ref{volare2a}) we obtain the volume of the hyperedges
\begin{equation} S^{(2)}_{N,n}= \frac{\sqrt{N-n}}{(N-n)!} \ \frac{1}{ C_{N-n}^{(1+2n,2)}} \ \frac{{\rm Vol}\bigl(Fl^{(N)}_{\mathbb C}\bigr)} { {\rm Vol}\bigl(Fl^{(n)}_{\mathbb C}\bigr)}. \label{voledges} \end{equation}
Note that for $n=0$ this expression gives the volume $V_N^{(2)}$ of the set ${\cal M}_{N}$, for $n=1$ the hyperarea $S_N^{(2)}$ of its boundary $\partial {\cal M}_{N}$, for $n\ge 2$ the area of the edges of rank $N-n$. In the extreme case of $n=N-1$ the above formula correctly gives the volume of the set of pure states (the states of rank one), Vol$({\mathbb C}{\bf P}^{N-1})=(2\pi)^{N-1}/\Gamma(N)$; see the appendix.
\section{The ratio: area/volume}
Certain information about the structure of a convex body may be extracted from the ratio $\gamma$ of the (hyper)area of its boundary to its volume. The smaller the coefficient $\gamma$ (with the diameter of the body kept fixed), the better the body investigated may be approximated by a ball, for which such a ratio is minimal. Conversely, the larger $\gamma$, the less the body resembles a ball, since more (hyper)area is needed to bound a given volume.
To analyze simple examples let us recall the volume of the $N$-dimensional unit ball ${\bf B}^N\subset {\mathbb R}^{N}$ and the volume $S_N$ of the unit $N$--sphere ${\bf S}^N\subset {\mathbb R}^{N+1}$
\begin{equation} B_N:=\mbox{vol}({\bf B}^N) = \frac{\mbox{vol}({\bf S}^{N-1})}{N} = \frac{{\pi}^{\frac{N}{2}}}{{\Gamma}(\frac{N}{2} + 1)} \sim \frac{1}{\sqrt{2{\pi}}} \left( \frac{2{\pi}e}{N}\right)^{\frac{N}{2}} , \label{volSNBN} \end{equation}
where the Stirling expansion \cite{SO87} was used for large $N$.
For small $N$ we obtain the well known expressions, $S_1=2\pi$, $S_2=4\pi$, $S_3=2\pi^2$, $S_4=8\pi^2/3$ and $B_2=\pi$, $B_3=4\pi/3$, $B_4=\pi^2/2$. If the spheres and balls have radius $L$ then the scale factor $L^N$ has to be supplied. In odd dimensions the volume of the sphere simplifies, ${\mbox{vol}} ({\bf S}^{2k-1}) =2\pi^k/(k-1)!$. Since the boundary of a $N$--ball is formed by a $N-1$ sphere, $\partial {\bf B}^N= {\bf S}^{N-1}$, the ratio $\gamma$ for a ball of radius $L$ reads \begin{equation} \gamma( {\bf B}^N) := \frac{\mbox{vol}({\bf \partial B}^N)} {\mbox{vol}({\bf B}^N)} = \frac{N}{L} . \label{muball} \end{equation} Intuitively this ratio will be the smallest possible among all $N$-dimensional sets of the same volume. Hence let us compare it with an analogous result for a hypercube $\square_N$ of side $L$ and volume $L^N$. The cube has $2^N$ corners and $2N$ faces, of area $L^{N-1}$ each. We find the ratio \begin{equation} \gamma( { \square}_N) := \frac{\mbox{vol}({ \partial \square}_N)} {\mbox{vol}({\square}_N)} = 2 \ \frac{N}{L} \label{mucube} \end{equation} which grows twice as fast as for $N$-balls. Another comparison can be made with simplices $\triangle_N$, generated by $(N+1)$ equally distant points in ${\mathbb R}^N$. The simplex $\triangle_2$ is a equilateral triangle, while $\triangle_3$ is a regular tetrahedron. The volume of a simplex of side $L$ reads $\mbox{vol}( \triangle_N) = [L^N \sqrt{(N+1)/2^N}]/N!$. Since the boundary of $\triangle_N$ consists of $N+1$ simplices $\triangle_{N-1}$ we obtain \begin{equation} \gamma( {\bf \triangle}_N) := \frac{\mbox{vol}( \partial \triangle_N)} { {\mbox{vol}}(\triangle_N)} = \sqrt{\frac {2N}{N+1}} \frac {N(N+1)}{L} \ . \label{musimplex} \end{equation} In this case the ratio $\gamma$ grows quadratically with $N$, which reflects the fact that simplices do have much 'sharper' corners, in contrast to the cubes, so more (hyper)area of the boundary is required to cover a given volume. Furthermore, if one defines a hyper--diamond as two simplices glued along one face, its volume is twice the volume of $\triangle_N$ while its boundary consists of $2N$ simplices $\triangle_{N-1}$, so the coefficient $\gamma$ grows exactly as $N^2$. Interestingly, the ratio $\gamma$ of the $N$--cube is the same as for the $N$--ball inscribed in, which has much smaller volume. The same property is characteristic for the $N$--simplex. Hence another possibility to characterize the shape of any convex body $F$ is to compute the ratio $\chi_1:={\rm vol}[{\bf B}_1(F)]/{\rm vol}(F)$, and $\chi_2:={\rm vol}(F)/{\rm vol}[{\bf B}_2(F)]$, where ${\bf B}_1(F)$ is the largest ball inscribed in $F$ while ${\bf B}_2(F)$ is the smallest ball in which $F$ may be inscribed. As stated above for cubes and simplices one has $\gamma(F)=\gamma[{\bf B}_1(F)]$. Such quotients may be computed for the rather complicated convex body of mixed quantum states analyzed with respect to the Hilbert-Schmidt measure. Using expressions (\ref{volmix2a}) and (\ref{volare2a}) we find \begin{equation} \gamma_N:= \frac{\mbox{vol} \bigl( \partial {\cal M}_{N} \bigr)} { {\mbox{vol}} \bigl( {\cal M}_{N} \bigr)} = \frac{N! \sqrt{N-1}}{\sqrt{N} (N-1)!} \ \frac{C_N^{(1,2)} } {C_{N-1}^{(3,2)}}= \sqrt{N(N-1)}\ (N^2-1). \label{muMN} \end{equation} The first coefficients read $\gamma_2=3 \sqrt{2}$, $\gamma_3=8\sqrt{6}$, and $\gamma_4=15 \sqrt{12}$, so they grow with $N$ faster than $N^{2}$. 
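The coefficients quoted above follow directly from (\ref{muball})--(\ref{muMN}) and can be reproduced with a few lines of code. The following minimal Python sketch (not part of the original derivation; the function names are ours) evaluates the four ratios:
\begin{verbatim}
# Numerical cross-check of the area/volume ratios for balls, cubes,
# simplices and the set of mixed states; standard library only.
from math import sqrt

def gamma_ball(N, L=1.0):     # N/L
    return N / L

def gamma_cube(N, L=1.0):     # 2N/L
    return 2 * N / L

def gamma_simplex(N, L=1.0):  # sqrt(2N/(N+1)) N(N+1)/L
    return sqrt(2 * N / (N + 1)) * N * (N + 1) / L

def gamma_mixed(N):           # sqrt(N(N-1)) (N^2-1)
    return sqrt(N * (N - 1)) * (N**2 - 1)

for N in (2, 3, 4):
    print(N, gamma_mixed(N))  # 3*sqrt(2), 8*sqrt(6), 15*sqrt(12)
print(gamma_ball(8), gamma_cube(8), gamma_simplex(8))  # growth at fixed D = 8
\end{verbatim}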
A direct comparison with the results received for balls or cubes would be unfair, since here $N$ does not denote the dimension of the set ${\cal M}_{N}\subset {\mathbb R}^D$. Substituting the right dimension, $D=N^2-1$, we see that the area/volume ratio for the mixed states increases with the dimensionality as $\gamma \sim D^{3/2}$. The linear scaling factor $L$, equal to the radius $R_N$ tends asymptotically to unity and does not influence this behaviour. Note that the set of mixed states is convex and is inscribed into the sphere of radius $R_N$, so for each finite $N$ the ratio $\gamma_N$ remains finite. On the other hand, the fact that this coefficient increases with the dimension $D$ much faster than for balls or cubes, sheds some light into the intricate structure of the set ${\cal M}_{N}$. It touches the hypersphere ${\bf S}^{N^2-2}$ of radius $R_N$ along the $2N-2$ dimensional manifold of pure states. However, to be characterized by such a value of the coefficient $\gamma$ it is a rather 'thin' set, and a lot of hyper--area of the boundary is used to encompass its volume. In fact, for any mixed state $\varrho \in {\cal M}_{N}$ its distance to the boundary $\partial {\cal M}_{N}$ does not exceed the radius $r_N\sim 1/N$. Another comparison can be made with the $D$-ball of radius $L=r_N=[N(N-1)]^{-1/2}$, inscribed into ${\cal M}_{N}$. Although its volume is much smaller than this of the larger set of mixed states, its area to volume ratio, $\gamma=D/L$ is exactly equal to (\ref{muMN}) characterizing ${\cal M}_{N}$. In other words, for any dimensionality $N$ the set of mixed quantum states belongs to the class of bodies for which $\gamma(F)=\gamma({\bf B}_1(F))$ holds. Using the notion of the effective radius $\rho_N$, introduced in section IV, we may express the coefficients $\chi_i$ for the set ${\cal M}_{N}$ as a ratio between radii raised to the power equal to the dimensionality, $D=N^2-1$. The exact values of $\chi_1=(r_N/\rho_N)^D$ and $\chi_2=(\rho_N/R_N)^D$, as well as their product $\chi=\chi_1\chi_2$, may be readily obtained from (\ref{volmix2b}). Let us only note the large $N$ behaviour, $\chi( {\cal M}_{N}) =(N-1)^{-N^2+1}$ so it grows with the dimensionality $D$ as $D^{-D/2}$ while $\chi({\bf B}^N)=1$, $\chi( {\square}_N)=N^{-N/2}$, and $\chi(\triangle_N)\approx N^{-N}$. \section{Rebits: real density matrices} Even though from physical point of view one should in general consider the entire set ${\cal M}_{N}$ of complex density matrices, we propose now to discuss its proper subset: the set of real density matrices. This set, denoted by ${\cal M}^{\mathbb R}_{N}$, is of smaller dimension $D_1=N(N+1)/2-1<D=N^2-1$, and any reduction of dimensionality simplifies the investigations. While complex density matrices of size two are known as qubits, the real density matrices are sometimes called {\sl rebits} \cite{CFR02}. In the sense of the HS metric the space of rebits forms the full circle ${\bf B}^2$, which may be obtained as a slice of the Bloch ball ${\bf B}^3$ along a plane containing ${\mathbb I}/2$. To find the volume of the set ${\cal M}^{\mathbb R}_{N}$ we will repeat the steps (\ref{drho1})--(\ref{volmix2b}) for real symmetric density matrices which may be diagonalized by an orthogonal rotation, $\varrho=O\Lambda O^T$. 
The expressions \begin{equation} {\rm d}\varrho = O[ {\rm d}\Lambda +O^{-1}{\rm d O}\Lambda - \Lambda O^{-1}{\rm d O} ] O^{-1} \label{drhoRe1} \end{equation} and \begin{equation} ({\rm d}s_{\rm HS})^2= \sum_{i=1}^N ({\rm d}\Lambda_i)^2 + 2 \sum_{i<j}^N (\Lambda_i-\Lambda_j)^2 |(O^{-1}{\rm d}O)_{ij}|^2 \label{HSRe2b} \end{equation} allow us to obtain the HS volume element, analogous to (\ref{HS2d}), \begin{equation} {\rm d}V_{\rm HS}^{(1)} = \sqrt{N} \prod_{j=1}^{N-1} {\rm d}\Lambda_j \prod_{j<k}^{1\dots N} |\Lambda_j-\Lambda_k|\ \cdot \ |\prod_{j<k}^{1\cdots N} \sqrt{2}\bigl( O^{-1} {\rm d }O)_{jk}| \label{HS2Red} \end{equation} As in the complex case the measure has the product form, and the last factor is the volume element of the orthogonal group (see appendix). Orthogonal orbits of a nondegenerate diagonal matrix form real flag manifolds $Fl^{(N)}_{\mathbb R}=O(N)/[O(1)]^N$ of the volume \begin{equation} {\rm Vol} \bigl[Fl^{(N)}_{\mathbb R}\bigr] = \frac{ {\rm Vol} \bigl[ O(N)\bigr]}{ 2^N} = \frac{ (2\pi)^{N(N-1)/4} \pi^{N/2}}{\Gamma[1/2]\cdots \Gamma[N/2]} \ . \label{volFlReb} \end{equation} Here $O(1)$ is the reflection group ${\mathbb Z}_2$ with volume $2$. The volume element (\ref{HS2Red}) leads to the following probability measure in the simplex of eigenvalues \begin{equation} P^{(1)}_{\rm HS}(\Lambda_1,\dots,\Lambda_N) = C_N^{(1,1)} \delta(1-\sum_{j=1}^N \Lambda_j) \prod_{j<k}^N |\Lambda_j-\Lambda_k|, \label{HSRe3} \end{equation} with the normalization constant given in (\ref{constab2}). Note the linear dependence on the differences of eigenvalues, in contrast to the quadratic form present in (\ref{HS3}). Taking into account the number $N!$ of different permutations of the elements of the spectrum $\Lambda$ we obtain the expression for the volume of the set of ${\cal M}^{\mathbb R}_{N}$, \begin{equation} V^{(1)}_N:={\rm Vol}_{\rm HS} \bigl( {\cal M}^{\mathbb R}_{N} \bigr) = \frac{\sqrt{N}}{N!} \ \frac{{\rm Vol}\bigl(Fl^{(N)}_{\mathbb R}\bigr)}{C_N^{(1,1)}} \ , \label{volmixRe2a} \end{equation} which gives \begin{equation} V^{(1)}_N =\frac{\sqrt{N}}{N!}\ \frac{2^N (2\pi)^{N(N-1)/4}\ \Gamma\bigl[ \frac{N+1}{2} \bigr] } {\Gamma\bigl[\frac{N(N+1)}{2}\bigr]\ \Gamma\bigl[\frac{1}{2}\bigr]}\ \prod_{k=1}^N \Gamma\bigl[1+\frac{k}{2}] \ . \label{volmixRe2b} \end{equation} As in the complex case we find the volume of the boundary of ${\cal M}^{\mathbb R}_{N}$, and in general, the volume of edges of order $n$ with $0\le n\le N-1$. In the case of real density matrices these edges are $N(N+1)/2-1-n(n+1)/2$ dimensional, since the dimension of the set of such spectra is $N-n-1$, and the orbits have the structure of $O(N)/[O(n)\times (O(1))^{N-n}]$ and dimensionality $N(N-1)/2-n(n-1)/2$. In analogy to (\ref{voledges}) we obtain \begin{equation} S^{(1)}_{N,n}= \frac{\sqrt{N-n}}{(N-n)!} \ \frac{1}{ C_{N-n}^{(1+n,1)}}\ \frac{{\rm Vol}\bigl(Fl^{(N)}_{\mathbb R}\bigr)} { {\rm Vol}\bigl(Fl^{(n)}_{\mathbb R}\bigr)}, \label{voledgesRe} \end{equation} which for $n=1$ gives the volume $S$ of the boundary $\partial {\cal M}^{\mathbb R}_{N}$, and allows us to compute the ratio area to volume, \begin{equation} \gamma( {\cal M}_{N}^{\mathbb R}) := \frac{\mbox{vol} \bigl( \partial {\cal M}^{\mathbb R}_{N} \bigr)} { {\mbox{vol}} \bigl( {\cal M}^{\mathbb R}_{N} \bigr)} = \frac{N! \sqrt{N-1}}{\sqrt{N} (N-1)!}\ \frac{C_N^{(1,1)} } {C_{N-1}^{(2,1)}}= \sqrt{N(N-1)}(N-1)(1+N/2). 
\label{muMNRe} \end{equation} The product of the last two factors is equal to the dimensionality of the set of real density matrices, $D_1=N(N+1)/2-1$. Therefore, just as in the complex case, the ratio of area to volume for ${\cal M}^{\mathbb R}_{N}$ coincides with the ratio $\gamma=D_1/L$ for the maximal ball of radius $L=r_N=[N(N-1)]^{-1/2}$ contained in this set. In the simplest case of $N=2$ we obtain $V^{(1)}_2=\pi/2$ - the volume of the circle ${\bf B}^2$ of radius $R_2=1/\sqrt{2}$. The volume of the boundary, $S=\pi\sqrt{2}$, equals the circumference of the circle of radius $R_2=1/\sqrt{2}$, and gives $\gamma=2\sqrt{2}$ in agreement with (\ref{muMNRe}). \section{Concluding remarks} We have found the volume $V$ and the surface area $S$ of the $D=N^2-1$ dimensional set of mixed states ${\cal M}_{N}$ acting in the $N$--dimensional Hilbert space, and of its subset ${\cal M}^{\mathbb R}_{N}$ containing real symmetric matrices. Although the volume of the unitary (orthogonal) group depends on the definition used, as discussed in the appendix, the volume of the set of mixed states has a well specified, unambiguous meaning. For instance, for $N=2$ the volume $V_2$ may be interpreted as the ratio of the volume of the Bloch ball (of radius $R_2$ fixed by the Hilbert--Schmidt metric), to the cube spanned by three orthonormal vectors of the HS space: the rescaled Pauli matrices, $\{\sigma_x, \sigma_y, \sigma_z \}/{\sqrt{2}}$. On one hand, these explicit results may be applied to estimate the volume of the set of entangled states \cite{ZHSL98,Zy99,Sl00,Sl02,Sl03}, or of yet another subset of ${\cal M}_{N}$. It is also reasonable to expect that some of the integrals obtained in this work will be useful in such investigations. On the other hand, the outcomes of this paper advance our understanding of the properties of the set of mixed quantum states. The ratio of the hyperarea of the boundary of $D$--balls to their volume grows linearly with the dimension $D$. The same ratio for $D$--simplices behaves as $D^2$, while for the sets of complex and real density matrices it grows with the dimensionality $D$ as $D^{3/2}$. Hence these geometrical properties of the convex body of mixed states are somewhere in between the properties of $D$--balls and $D$--simplices. Furthermore, we have shown that for any $N$ the sets of complex (real) density matrices belong to the family of sets for which the ratio of area to volume is equal to such a ratio computed for the maximal ball inscribed into the set. It is necessary to emphasize that a similar problem of estimating the volume of the set of mixed states could also be considered with respect to other probability measures. In particular, analogous results presented by us in \cite{SZ03} for the measure \cite{Ha98,BS01} related to the Bures distance \cite{Bu69,Uh76} allow us to investigate similarities and differences between the geometry of mixed states induced by different metrics. It is a pleasure to thank M. Ku{\'s} for helpful discussions and to P. Slater for valuable correspondence. Financial support by Komitet Bada{\'n} Naukowych in Warsaw under the grant 2P03B-072~19 and the Sonder\-forschungs\-be\-reich/Transregio 12 der Deutschen Forschungs\-gemein\-schaft is gratefully acknowledged.
\appendix \section{Volumes of the unitary groups and flag manifolds} Although the volume of the unitary (orthogonal) group and the complex (real) flag manifold, we use in our calculations, were computed by Hua many years ago \cite{Hu63}, one may find in more recent literature related results, which in some cases seem to be contradicting. However, different authors used different definitions of the volume of unitary group \cite{Ma81,Tu85,Fu01,TBS02,BST02,Ca02}, so we review in this appendix three most common definitions and compare the results. \subsection{Unitary group $U(N)$} We shall recall (\ref{ds2}) the metric of the unitary group $U(N)$ induced by the Hilbert-Schmidt scalar product and used by Hua \cite{Hu63} \begin{equation} ({\rm d}s)^2 := -{\rm Tr} (U^{-1} {\rm d}U)^2= \sum_{j=1}^N|(U^{-1}{\rm d}U)_{jj}|^2 + 2 \sum_{j<k=1}^N |(U^{-1}{\rm d}U)_{jk}|^2\ , \label{dUs} \end{equation} which is left- and right-invariant under unitary transformations. The volume element is then given by the product of independent differentials times the square root of the determinant of the metric tensor. One has still the freedom of an overall scale factor for (\ref{dUs}) which appears then correspondingly in the volume element. To keep invariance the ratio of the prefactors $c_{\rm diag}$ and $c_{\rm off}$ of the diagonal and off--diagonal terms has to be fixed. Nevertheless one may introduce different scalings of the volume elements which we call $d\nu_A, d\nu_B, d\nu_C$: \begin{equation} {\rm d}\nu_A := \Bigl| \prod_{i=1}^N (U^{-1}{\rm d}U)_{ii} \prod_{j<k}^{1\cdots N} \sqrt{2}\ {\rm Re}(U^{-1}{\rm d}U)_{jk} {\rm Im}(U^{-1}{\rm d}U)_{jk} \Bigr|; \quad \quad c_{\rm diag}=1, \quad c_{\rm off}=2; \label{dnuA} \end{equation} \begin{equation} {\rm d}\nu_B := 2^{-N(N-1)/2}\ {\rm d}\nu_A; \quad \quad c_{\rm diag}=1, \quad c_{\rm off}=1; \label{dnuB} \end{equation} \begin{equation} {\rm d}\nu_C := 2^{-N/2}\ {\rm d}\nu_B; \quad \quad c_{\rm diag}=1/2 \quad c_{\rm off}=1. \label{dnuC} \end{equation} The product in (\ref{dnuA}), consistent with (\ref{dUs}), has to be understood in the sense of alternating external multiplication of differential forms. Only the first convention (\ref{dnuA}) labeled by the index $_A$ was used in the main part of this work. Note that the normalisation (\ref{dnuC}) corresponds to the rescaled line element $({\rm d}s)^2 = -\frac{1}{2}{\rm Tr} (U^{-1} {\rm d}U)^2$. In general we may scale \begin{equation} {\rm d}\nu_{_X} := (c_{\rm diag})^{N/2} (c_{\rm off})^{N(N-1)/2} {\rm d}\nu_B \label{dnu} \end{equation} where the label $_X$ denotes a certain choice of the prefactors $c_{\rm diag}$ and $c_{\rm off}$ for diagonal or off--diagonal elements in (\ref{dUs}). All these volumes correspond to the Haar measure which is unique up to an overall constant scale factor. Thus we deduce that \begin{equation} {\rm Vol}_A \bigl[ U(N)\bigr] = 2^{N(N-1)/2}{\rm Vol}_B \bigl[ U(N)\bigr] {\rm \quad and \quad} {\rm Vol}_C \bigl[ U(N)\bigr] = 2^{-N/2}{\rm Vol}_B \bigl[ U(N)\bigr] . \label{volUABC} \end{equation} In order to determine the volume of the unitary group let us recall the fiber bundle structure $U(N-1) \to U(N) \to {\bf S}^{2N-1}$, see e.g. \cite{Bo91}. 
This topological fact implies a relation between the volume of the unit sphere ${\bf S}^{2N-1}$ and the volume of the unitary group defined by the measure $d\nu_B$ (\ref{dnuB}), for which all components of the vector $(U^{-1}{\rm d}U)_{jk}$ have unit prefactors, \begin{equation} {\rm Vol}_B[U(N)]= {\rm Vol}_B[U(N-1)] \times {\rm Vol}[{\bf S}^{2N-1}]. \label{volBBS} \end{equation} To prove this equality by a direct calculation it is convenient to parametrize a unitary matrix of size $N$ as \begin{equation} U_N=\left[ \begin{array} [c]{cc} e^{i\phi} & 0 \\ 0 & U_{N-1} \end{array} \right] \left[ \begin{array} [c]{cc} \sqrt{1-|h|^2} & -h^{\dagger} \\ h & \sqrt{ {\mathbb I} - h\otimes h^{\dagger} } \end{array} \right] \ , \label{UUUN1} \end{equation} where $\phi \in [0,2\pi)$ is an arbitrary phase and $h$ is a complex vector with $N-1$ components such that $|h|\le 1$. This representation shows (we may arrange the two matrices in (\ref{UUUN1}) also in the opposite order) the relation \begin{equation} U(N)/[U(1) \times U(N-1)] = {\mathbb C}{\bf P}^{N-1}\ , \label{CPN-1} \end{equation} since the second factor represents the complex projective space ${\mathbb C}{\bf P}^{N-1}$. In fact, if one calculates the metric (\ref{dUs}) we find \begin{equation} ({\rm d}s_N)^2 \cong ({\rm d}s_1)^2 + ({\rm d}s_{N-1})^2 + 2({\rm d}s_h)^2 \label{dsN2} \end{equation} where $({\rm d}s_N)^2$ means the metric for $U(N)$ (the sign $\cong$ shall indicate that we have omitted some shifts in $({\rm d}s_1)^2$ and $({\rm d}s_{N-1})^2$ that are not relevant for the volume) and $({\rm d}s_h)^2$ means the metric of the complex projective space ${\mathbb C}{\bf P}^{N-1}$ with radius $1$: \begin{equation} ({\rm d}s_h)^2 = {\rm d}h^{\dagger}{\rm d}h +\frac{(h^{\dagger}{\rm d}h +{\rm d}h^{\dagger}h)^2}{4(1-|h|^2)}+ \frac{ (h^{\dagger}{\rm d}h -{\rm d}h^{\dagger}h)^2}{4}\ . \label{dsh2} \end{equation} It is easy to see by diagonalizing this metric (eigenvalues $1-|h|^2$, $1/(1-|h|^2)$, and otherwise $1$) that the corresponding volume is that of the real ball ${\bf B}^{2N-2}$ with radius $1$ and dimension $2N-2$. Thus one obtains \begin{equation} {\rm Vol}_X [U(N)]= {\rm Vol}_X [U(N-1)] \times {\rm Vol}_X [U(1)] \times c_{\rm off}^{N-1} {\rm Vol}[{\bf B}^{2N-2}] \ , \label{volBBS2} \end{equation} which for the measure (\ref{dnuB}) with $c_{\rm diag}=c_{\rm off}=1$ reduces to (\ref{volBBS}). Applying this relation $N-1$ times we obtain \begin{equation} {\rm Vol}_B \bigl[U(N)]= {\rm Vol}[{\bf S}^{2N-1}] \times \cdots \times {\rm Vol}[{\bf S}^{3}] \times {\rm Vol}[{\bf S}^{1}] . \label{volUNB} \end{equation} Taking into account that Vol$[{\bf S}^{2N-1}]=2\pi^N/(N-1)!$ and making use of the relation (\ref{volUABC}) we may write an explicit result for the volumes calculated with respect to different definitions (\ref{dnuA} -- \ref{dnuC}) \begin{equation} {\rm Vol}_X [U(N)]= a^U_X \ \frac {2^N \pi^{N(N+1)/2}}{0!1!\cdots (N-1)!}, \label{volUBB} \end{equation} where the proportionality constants read $a^U_A=2^{N(N-1)/2}$, $a^U_B=1$ and $a^U_C=2^{-N/2}$. The result for ${\rm Vol}_A[U(N)]$ was rigorously derived in \cite{Hu63}, ${\rm Vol}_B[U(N)]$ was given in \cite{Fu01}, while ${\rm Vol}_A[U(N)]$ and ${\rm Vol}_C[U(N)]$ were compared in \cite{Ca02}. In particular, ${\rm Vol}_A[U(1)]={\rm Vol}_B[U(1)]=2\pi$, while ${\rm Vol}_C[U(1)]={\sqrt{2}}\pi$ and ${\rm Vol}_A[U(2)]=8\pi^3$, ${\rm Vol}_B[U(2)]=4\pi^3$, ${\rm Vol}_C[U(2)]=2\pi^3$. In general, the volume of a coset space may be expressed as a ratio of the volumes. 
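The explicit values quoted above, together with the convention dependence (\ref{volUABC}), can be cross-checked numerically. The following minimal Python sketch (our own naming, not part of the original derivation) implements (\ref{volUNB}) and (\ref{volUBB}):
\begin{verbatim}
# Volume of U(N): product of odd-dimensional spheres (convention B) versus
# the closed formula with prefactors a^U_A = 2^{N(N-1)/2}, a^U_B = 1,
# a^U_C = 2^{-N/2}.
from math import pi, factorial

def vol_U_spheres(N):                 # Vol_B[U(N)] as a product of Vol[S^{2k-1}]
    v = 1.0
    for k in range(1, N + 1):
        v *= 2 * pi**k / factorial(k - 1)
    return v

def vol_U(N, convention="A"):
    a = {"A": 2**(N*(N-1)//2), "B": 1, "C": 2**(-N/2)}[convention]
    denom = 1.0
    for k in range(N):
        denom *= factorial(k)
    return a * 2**N * pi**(N*(N+1)/2) / denom

print(vol_U_spheres(2), vol_U(2, "B"))   # both equal 4*pi^3
print(vol_U(2, "A"), 8 * pi**3)          # both equal 8*pi^3
print(vol_U(2, "C"), 2 * pi**3)          # both equal 2*pi^3
\end{verbatim}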
Consider for instance the manifold of all pure states of dimensionality $N$. It forms the complex projective space ${\mathbb C}{\bf P}^{N-1}=U(N)/[U(N-1) \times U(1)]$. Therefore \begin{equation} {\rm Vol}_X [ {\mathbb C}{\bf P}^{N-1} ] = \frac {{\rm Vol}_X [U(N)] } { {\rm Vol}_X [U(1)]\ {\rm Vol}_X [U(N-1)]} \ , \label {CPNIND} \end{equation} which gives the general result \begin{equation} {\rm Vol}_X [{\mathbb C}{\bf P}^{k}] = a^{\rm CP}_X \ \frac {\pi^{k}}{k!} = a^{\rm CP}_X \ {\rm Vol} [ {\bf B}^{2k} ] \label{volcpn} \end{equation} The scale factors read $a^{\rm CP}_A=2^k$ and $a^{\rm CP}_B=a^{\rm CP}_C=1$. For instance ${\rm Vol}_A [{\mathbb C}{\bf P}^{1}]=2\pi$, which corresponds to the circle of radius $\sqrt{2}$, while ${\rm Vol}_B [{\mathbb C}{\bf P}^{1}]= {\rm Vol}_C [{\mathbb C}{\bf P}^{1}]=\pi$, equal to the area of the circle of radius $1$. The latter convention is natural if one uses the Fubini--Study metric in the space of pure states, $D_{FS}(|\varphi\rangle,|\psi\rangle) = {\rm arccos}(\sqrt{\kappa})$, where the transition probability is given by $\kappa = | \langle \varphi | \psi \rangle|^2$. Then the largest possible distance $D_{FS}=\pi/2$, obtained for any pair of orthogonal states, sets the geodesic length of the complex projective space to $\pi$, which corresponds to the geodesic distance between two opposite points on the unit circle, which are identified. It is worth adding that ${\rm Vol}_C [{\mathbb C}{\bf P}^{k}]={\rm Vol}[ {\bf S}^{2k+1}] / {\rm Vol} [{\bf S}^{1}]$ and this relation was used in \cite{BST02} to {\sl define} the volume Vol$_C$ of complex projective spaces. We see therefore that the different conventions adopted in (\ref{dnuA} -- \ref{dnuC}) lead to different sizes (geodesic lengths) of the manifolds analyzed. Unitary orbits of a generic mixed state with a non--degenerate spectrum have the structure of an $(N^2-N)$--dimensional complex flag manifold $Fl^{(N)}_{\mathbb C}=U(N)/[U(1)]^N$. Hence its volume reads \begin{equation} {\rm Vol}_X [ Fl_{\mathbb C}^{(N)}] = \ \frac { {\rm Vol}_X [ U(N)] } { \bigl( {\rm Vol}_X [U(1)] \bigr)^N} = a_X^{\rm Fl} \ \frac { \pi^{N(N-1)/2}}{1!2!\cdots (N-1)! } \label{volflagC} \end{equation} with convention dependent scale constants $a^{\rm Fl}_A=2^{N(N-1)/2}$ \cite{Hu63} and $a^{\rm Fl}_B=a^{\rm Fl}_C=1$ \cite{BST02}. It is easy to check that the relation \begin{equation} {\rm Vol}_X [ Fl_{\mathbb C}^{(N)}] = {\rm Vol}_X [{\mathbb C}{\bf P}^{1}] \times {\rm Vol}_X [{\mathbb C}{\bf P}^{2}] \times \cdots \times {\rm Vol}_X [{\mathbb C}{\bf P}^{N-1}] \label{volflagC2} \end{equation} holds for any definition (\ref{dnuA} -- \ref{dnuC}), since the scale constants cancel. For completeness we also discuss the group $SU(N)$, the volume of which is {\sl not} equal to ${\rm Vol}[U(N)] / {\rm Vol} [U(1)]$ \cite{Ma81,TBS02,BST02}. To show this let us parametrize a matrix $Y_N \in SU(N)$ as \begin{equation} Y_N=\left[ \begin{array} [c]{cc} e^{i\phi} & 0 \\ 0 & e^{-i[\phi/(N-1)]} Y_{N-1} \end{array} \right] \left[ \begin{array} [c]{cc} \sqrt{1-|h|^2} & -h^{\dagger} \\ h & \sqrt{ {\mathbb I} - h\otimes h^{\dagger} } \end{array} \right] =VW \ , \label{SUN1} \end{equation} where $\phi \in [0,2\pi)$ is an arbitrary phase and $h$ is a complex vector with $N-1$ components such that $|h|\le 1$. The condition det$Y_{N}=1$ implies Tr$Y^{-1}_N{\rm d}Y_N=0.$ As far as the volume is concerned, the metric (\ref{dUs}) gives \begin{equation} ({\rm d}s)^2 \cong -{\rm Tr} (V^{-1} {\rm d}V)^2 - {\rm Tr} (W^{-1} {\rm d}W)^2 .
\label{dSUNb} \end{equation} Since the first factor $V$ is block diagonal, the first term is equal to $({\rm d}\phi)^2 N/(N-1) - {\rm Tr} (Y_{N-1}^{-1} {\rm d} Y_{N-1})^2$, while the second one gives the metric on ${\mathbb C}{\bf P}^{N-1}$. Integrating an analogous expression in the general case of an arbitrary metric and using (\ref {CPNIND}) we obtain the following result \begin{equation} {\rm Vol}_X \bigl[SU(N)]= \frac { {\rm Vol}_X [U(N)]} { {\rm Vol}_X [U(N-1)]} \ \sqrt{\frac {N}{N-1}} \ {\rm Vol}_X \bigl[SU(N-1)] , \label{volSUN1} \end{equation} which iterated $N-1$ times gives the correct relation \begin{equation} {\rm Vol}_X \bigl[SU(N)] = \sqrt{N} \ \frac { {\rm Vol}_X [U(N)]} { {\rm Vol}_X [U(1)]} \label{volSUN2} \end{equation} with the stretching factor $\sqrt{N}$. For instance, working with the measure (\ref{dnuC}) and making use of (\ref{volUBB}) we obtain ${\rm Vol}_C \bigl[SU(N)]=\sqrt{N}2^{(N-1)/2}\pi^{(N+2)(N-1)/2} /[1!\cdots (N-1)!]$, so in particular, $ {\rm Vol}_C \bigl[SU(2)]=2\pi^2$, $ {\rm Vol}_C \bigl[SU(3)]=\sqrt{3} \pi^5$ and $ {\rm Vol}_C \bigl[SU(4)]=\sqrt{2}\pi^9/3$, consistent with the results obtained in \cite{Ma81,By98,BST02,Ca02}. \subsection{Orthogonal group $O(N)$} The analysis of the orthogonal group is simpler, since $(O^{-1} {\rm d}O)^T=- (O^{-1} {\rm d}O)$, so the diagonal elements of $O^{-1} {\rm d}O$ vanish. Thus we shall consider only two metrics (analogous to the measures (\ref{dnuA} -- \ref{dnuC})) with different scalings, \begin{equation} ({\rm d}s_A)^2 := -{\rm Tr} (O^{-1} {\rm d}O)^2= 2 \sum_{j<k=1}^N |(O^{-1}{\rm d}O)_{jk}|^2, \label{dOsa} \end{equation} used in section VII of this work, and \begin{equation} ({\rm d}s_B)^2= ({\rm d}s_C)^2 := -\frac{1}{2}{\rm Tr} (O^{-1} {\rm d}O)^2= \sum_{j<k=1}^N |(O^{-1}{\rm d}O)_{jk}|^2 , \label{dOsb} \end{equation} which both lead to the Haar measure on the orthogonal group. To obtain the volume of $O(N)$ we proceed as in the unitary case and parametrize an orthogonal matrix of size $N$ as \begin{equation} O_N=\left[ \begin{array} [c]{cc} O_1 & 0 \\ 0 & O_{N-1} \end{array} \right] \left[ \begin{array} [c]{cc} \sqrt{1-|h|^2} & -h^{T} \\ h & \sqrt{ {\mathbb I} - h \otimes h^{T} } \end{array} \right] \ , \label{OOON1} \end{equation} where $O_1\in O(1)=\{\pm 1\}$, while $h$ is here a real vector with $N-1$ components such that $|h|\le 1$. Representing the metric $({\rm d}s_B)^2$ by these two matrices we see that the term containing only the vector $h$ gives the metric of a real projective space. Integrating the resulting volume element (with scale factor $1$) we obtain the volume of ${\mathbb R}{\bf P}^{N-1}$, equal to $\frac{1}{2} {\rm Vol} [{\bf S}^{N-1}]$. Taking into account a factor of two resulting from $O(1)$ we arrive at Vol$_B[O(N)]={\rm Vol}_B [O(N-1)] {\rm Vol} [{\bf S}^{N-1}]$, which applied recursively leads to \begin{equation} {\rm Vol}_B[O(N)]= {\rm Vol} [{\bf S}^{N-1}] \times \cdots \times {\rm Vol} [{\bf S}^{1}] \times {\rm Vol} [{\bf S}^{0}] = \prod_{k=1}^N \frac{2 \pi^{k/2}}{\Gamma(k/2)} , \label{volON1} \end{equation} where ${\rm Vol} [{\bf S}^{0}] = {\rm Vol} [O(1)] = 2$. To get an equivalent result for the metric (\ref{dOsa}) we have to take into account the factor $\sqrt{2}$ which occurs for each of the $N(N-1)/2$ off-diagonal elements. Doing so we obtain \begin{equation} {\rm Vol}_A[O(N)]= 2^{N(N-1)/4} {\rm Vol}_B[O(N)]= 2^{N(N+3)/4} \prod_{k=1}^N \frac{ \pi^{k/2}}{\Gamma(k/2)} , \label{volON2} \end{equation} in agreement with Hua \cite{Hu63}.
In particular, ${\rm Vol}_A[O(1)]={\rm Vol}_B[O(1)]=2$, while ${\rm Vol}_A[O(2)]=4\sqrt{2} \pi$, ${\rm Vol}_B[O(2)]=4\pi$, and ${\rm Vol}_A[O(3)]=32\sqrt{2}\pi^2$, ${\rm Vol}_B[O(3)]=16\pi^2$. In full analogy to the unitary case we obtain the volume of the real projective manifold \begin{equation} {\rm Vol}_X [ {\mathbb R}{\bf P}^{N-1} ] = \frac {{\rm Vol}_X [O(N)] } { {\rm Vol}_X [O(1)]\ {\rm Vol}_X [O(N-1)]} \ . \label {RPNIND} \end{equation} For the metric (\ref{dOsb}) this expression reduces to ${\rm Vol}_B [ {\mathbb R}{\bf P}^{k} ]=\frac{1}{2}{\rm Vol} [{\bf S}^{k}]$. Hence this metric may be called the 'unit sphere' metric, while the convention (\ref{dOsa}) may be called the 'unit trace' metric. In a similar way we find the volume of the real flag manifolds, used in the analysis of real density matrices, \begin{equation} {\rm Vol}_X [ Fl_{\mathbb R}^{(N)}] = \ \frac { {\rm Vol}_X [ O(N)] } { \bigl( {\rm Vol}_X [O(1)] \bigr)^N} = \ \frac {1}{2^N} {\rm Vol}_X [ O(N)] . \label{volflagR} \end{equation} Exactly as in the complex case we observe that the relation \begin{equation} {\rm Vol}_X [ Fl_{\mathbb R}^{(N)}] = {\rm Vol}_X [{\mathbb R}{\bf P}^{1}] \times {\rm Vol}_X [{\mathbb R}{\bf P}^{2}] \times \cdots \times {\rm Vol}_X [{\mathbb R}{\bf P}^{N-1}] \label{volflagR2} \end{equation} is satisfied for any definition of the metric. Computation of the volume of the special orthogonal group $SO(N)$ is much easier than in the complex case, since there are no diagonal elements in the metric and hence no stretching factors. For any normalization one gets \begin{equation} {\rm Vol}_X [SO(N)] = \frac {{\rm Vol}_X [O(N)] } { {\rm Vol}_X [O(1)] } = \frac{1}{2} \ {\rm Vol}_X [O(N)] \label {SON} \end{equation} In particular, we get $ {\rm Vol}_B [SO(2)]=2\pi$ and $ {\rm Vol}_B [SO(3)]= {\rm Vol}_C \bigl[SO(3)]=8\pi^2$. The latter results seem to be inconsistent with $ {\rm Vol}_C [SU(2)]=2\pi^2$, since there exists a two--to--one relation between the two groups. This paradox is resolved by analyzing the scale effects \cite{BST02}: the volume of $SU(2)$ is two times larger than the volume of the real projective manifold diffeomorphic to $SO(3)$ of the appropriate geodesic length, $ {\rm Vol}_C [{\mathbb R}{\bf P}^3]=\pi^2$. \end{document}
\begin{document} \title{Sudden death of distillability in qutrit-qutrit systems} \author{Wei Song} \email{[email protected]} \affiliation{Laboratory of Quantum Information Technology, ICMP and SPTE, South China Normal University, Guangzhou 510006, China} \author{Lin Chen} \email{[email protected](Corresponding~Author)} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117542} \author{Shi-Liang Zhu} \affiliation{Laboratory of Quantum Information Technology, ICMP and SPTE, South China Normal University, Guangzhou 510006, China} \date{\today} \date{2 Jul 2009} \pacs{03.65.Yz, 03.67.Mn, 03.65.Ud, 03.67.Pp } \begin{abstract} We introduce the concept of distillability sudden death, i.e., free entangled states can evolve into non-distillable (bound entangled or separable) states in finite time under local noise. We describe the phenomenon through a specific model of local dephasing noise and compare the behavior of states in terms of the Bures fidelity. Then we propose a few methods to avoid distillability sudden death of states under (general) local dephasing noise, so that free entangled states can be robust against decoherence. Moreover, we find that bound entangled states are unstable in the limit of infinite time. \end{abstract} \maketitle \section{Introduction} Entanglement is not only a remarkable feature which distinguishes the quantum world from the classical one but also a key resource to realize high-speed quantum computation and high-security quantum communication \cite{Neilsen:2000}. In realistic quantum-information processing, entanglement usually needs to be prepared or distributed beforehand among different remote locations. However, in the process of entanglement distribution the quantum systems are not isolated and each system will unavoidably interact with the environment. This leads to local decoherence which will degrade the entanglement of the shared states. It is thus of fundamental importance to study the entanglement properties under the influence of the local decoherence. In this context, Yu and Eberly \cite{Yu:2004} investigated the time evolution of entanglement of a bipartite qubit system undergoing various modes of decoherence. Remarkably, they found that, although it takes infinite time to complete the decoherence locally, the global entanglement may vanish in finite time. The phenomenon of finite-time disentanglement, also named entanglement sudden death(ESD), unveils a fundamental difference between the global behavior of an entangled system and the local behavior of its constituents under the effect of local decoherence. Clearly, ESD puts an limit on the applicability of entangled states in the practical quantum information processing. Initially, Yu and Eberly reported the ESD for two-qubit entangled states, but this effect is not limited to such case. Further investigations in a wider context including higher dimensional Hilbert spaces have been made by various groups \cite{Yonac:2006,Cui:2007,Yu:2007,Lastra:2007,Zhang:2007,Vaglica:2007,Bellomo:2007,Ikram:2007,Qasimi:2008,Ficek:2006,Sainz:2008,Ann:2007,Rau:2008,Huang:2007,Li:2009,Shan:2008,Fei:2009}. There are also a number of studies looking at ESD in more complicated systems using other entanglement measures \cite{Carvalho:2007,Annb:2007,Sainz:2007,Lopez:2008,Aolita:2008,Man:2008,An:2009,Liu:2009}, and an attempt to give a geometric interpretation of the phenomenon has also been made \cite{Cunha:2007}. 
In addition, experimental evidence of ESD has been reported for optical setups \cite{Almeida:2007} and atomic ensembles \cite{Laurat:2007}. However, all previous studies omitted the important fact that high dimensional bipartite entangled states can be divided into two classes \cite{Horodecki:1998}. One is free, which means that the state can be distilled under local operations and classical communication (LOCC); the other is bound, which means that no LOCC strategy is able to extract pure-state entanglement from the state even if many copies are available. Bound entanglement (BE) cannot be used alone for quantum information processing, and irreversibility occurs in asymptotic manipulations of entanglement for all BE states \cite{Yang:2005}. Since it was constructed from a purely mathematical point of view, we may ask whether BE can appear naturally in physically relevant quantum systems. Very recently a few works have addressed this question \cite{Toth:2007}. Their results suggest that different many-body models present thermal bound entangled states. In this paper, we investigate the problem from a very different viewpoint, i.e., in the presence of local decoherence, which is one of the dominant noises during the distribution of entanglement. Analogous to the definition of ESD, if an initial free entangled state becomes non-distillable in finite time under the influence of local decoherence, then we say that it undergoes distillability sudden death (DSD). Note that when a free entangled state loses its distillability at a specific time, it may still be entangled since we cannot exclude the existence of BE. So far, it is not clear whether bipartite bound entangled states can be created naturally from free entangled states under a local decoherence process. The first aim of this paper is to show that such a process indeed exists through an explicit qutrit-qutrit example. Afterwards we propose the DSD-free state, whose entanglement is robust against local decoherence. Such entangled states are thus useful resources for practical quantum-information processing. So the second aim is to address DSD-free states. We develop a few systematic approaches to build DSD-free states. Finally we will show that, in contrast to free entangled and separable states, no PPT bound entangled states exist in the infinite-time limit. The paper is organized as follows. In Sec. II, the idea of DSD is presented. In particular, we show the phenomenon of DSD through a specific qutrit-qutrit example. In addition, we show that one can avoid the sudden death of distillability by performing a simple local unitary operation on the initial state. Furthermore, we compare the behaviors of states affected by decoherence in terms of the Bures fidelity. In Sec. III, we develop some methods to build DSD-free states which can protect free entanglement under general local dephasing noise. We also show that no PPT entangled states can exist in the limit of infinite time. Finally, in Sec. IV we discuss some open questions and also give a summary of our results. \section{Qutrit-Qutrit DSD states under local decoherence} Before discussing the dynamical process of entanglement, we briefly review how to characterize bound entangled states. It was proven in \cite{Horodecki:1998} that a quantum state with a positive partial transposition (PPT) is non-distillable under LOCC. Therefore PPT entangled states must be bound entangled states \cite{Song:2009}.
To verify them, one can use the so-called realignment criterion (cross-norm criterion) \cite{Rudolph:2005}. The definition of realignment on the density matrix is given by $ \left( {\rho ^R } \right)_{ij,kl} = \rho _{ik,jl}$. A separable state $\rho$ always satisfies ${\left\| {\rho ^R } \right\| \le 1}$. For a PPT state $\rho$, the positive value of the quantity ${\left\| {\rho ^R } \right\| - 1}$ can hence verify that it is a bound entangled state. \begin{figure} \caption{ (Color online). The eigenvalue $ \lambda _1$ as a function of time $t$ and dephasing rate $\Gamma$ for two different cases: (a)$ \alpha = 4.5$; (b)$ \alpha = 4.9$. Figures (c) and (d) correspond to the contour plots of the upper one, respectively.} \label{fig1} \end{figure} The system we study consists of two noninteracting qutrits in two independent local environments, each coupling to one of the qutrits. Here the qutrit is a three-dimensional state composed of the computational basis $ \left| 0 \right\rangle$, $ \left| 1 \right\rangle$, and $ \left| 2 \right\rangle$. Besides, we take the weak local dephasing noise to model the environment, such noise is indeed one of the main decoherence sources in solid-state systems. The general time-evolved density matrix expressible in the operator-sum decomposition is the completely positive trace preserving map \cite{Kraus:1983} $ \rho \left( t \right) = \varepsilon \left( {\rho \left( 0 \right)} \right) = \sum\nolimits_\mu {K_\mu ^\dag \left( t \right)\rho \left( 0 \right)} K_\mu \left( t \right)$. The operators $ \left\{ {K_\mu \left( t \right)} \right\}$ representing the influence of statistical noise satisfy the completeness condition $ \sum\nolimits_\mu {K_\mu \left( t \right)K_\mu ^\dag \left( t \right)} = \mathbb{I}$ which guarantees that the evolution is trace-preserving\cite{Neilsen:2000}. In our model, the operators are of the form $K_{\mu}(t) = D_{j}(t)E_{i}(t)$ such that \begin{eqnarray} \rho\left(t\right) = \mathcal{E}\left(\rho\left(0\right)\right) = \sum_{i,j = 1}^{2} D_{j}^{\dagger}\left(t\right)E_{i}^{\dagger}\left(t\right) \rho\left(0\right) E_{i}\left(t\right)D_{j}\left(t\right)\label{krausGeneral}. \end{eqnarray} Here, $E_{i}(t)$ and $D_{j}(t)$ correspond to local dephasing noise components acting on the first and second qutrits, respectively, and both operators satisfy the completeness conditions. 
For simplicity, we first take these to be of the specific forms \cite{Ann:2007} \begin{eqnarray*} E_{1}(t) &=& {\rm diag}(1,\gamma_{\rm A},\gamma_{\rm A}) \otimes \mathbb{I}_{3} \ ,\\ E_{2}(t) &=& {\rm diag}(0,\omega_{\rm A},\omega_{\rm A}) \otimes \mathbb{I}_{3} \ ,\\ D_{1}(t) &=& \mathbb{I}_{3} \otimes {\rm diag}(1,\gamma_{\rm B},\gamma_{\rm B}) \ ,\\ D_{2}(t) &=& \mathbb{I}_{3} \otimes {\rm diag}(0,\omega_{\rm B},\omega_{\rm B})\ , \end{eqnarray*} where $\mathbb{I}_{3}$ is the $3 \times 3$ identity matrix, $\gamma_{\rm A}\left(t\right) = e^{-\Gamma_{{\rm A}}t/2}, \ \gamma_{\rm B}\left(t\right) = e^{-\Gamma_{{\rm B}}t/2}, \ \omega_{\rm A}\left(t\right) = \sqrt{1-\gamma_{{\rm A}}^{2}(t)},\ {\rm and} \ \omega_{\rm B}\left(t\right) = \sqrt{1-\gamma_{{\rm B}}^{2}(t)}.$ For concreteness, we illustrate our ideas by considering the following qutrit-qutrit state, \begin{eqnarray} \rho \left( 0 \right) &=& \frac{2}{{21}}\left( {\left| {01} \right\rangle + \left| {10} \right\rangle + \left| {22} \right\rangle } \right)\left( {\left\langle {01} \right| + \left\langle {10} \right| + {\left\langle {22} \right|} } \right) \notag\\ &+& \frac{\alpha }{{21}}\left( {\left| {00} \right\rangle \left\langle {00} \right| + \left| {12} \right\rangle \left\langle {12} \right| + \left| {21} \right\rangle \left\langle {21} \right|} \right) \notag\\ &+& \frac{{5 - \alpha }}{{21}}\left( {\left| {11} \right\rangle \left\langle {11} \right| + \left| {20} \right\rangle \left\langle {20} \right| + \left| {02} \right\rangle \left\langle {02} \right|} \right) \end{eqnarray} \noindent with $ 4 < \alpha \le 5$. In fact, it is straightforward to prove that the state $ \rho \left( 0 \right)$ is free entangled. By using the local operation $\left( {\left| 0 \right\rangle \left\langle 0 \right| + \left| 1 \right\rangle \left\langle 1 \right|} \right) \otimes \left( {\left| 0 \right\rangle \left\langle 0 \right| + \left| 1 \right\rangle \left\langle 1 \right|} \right)$, one can converts $ \rho \left( 0 \right)$ into a $2 \otimes 2$ entangled state, which is afterwards distillable by virtue of the BBPSSW-Horodecki protocol \cite{Horodecki:1997}. So the state $ \rho \left( 0 \right)$ is a free entangled state and we take it as the initial state under the local dephasing noise in Eq. (1). This evolution can be calculated analytically, i.e., at time $t$ the two-qutrit density operator $\rho(t)$ reads \begin{equation} \left( {\begin{array}{*{20}c} {\frac{\alpha }{{21}}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & {\frac{2}{{21}}} & 0 & {\frac{2}{{21}}\gamma _A \gamma _B } & 0 & 0 & 0 & 0 & {\frac{2}{{21}}\gamma _A } \\ 0 & 0 & {\frac{{5 - \alpha }}{{21}}} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & {\frac{2}{{21}}\gamma _A \gamma _B } & 0 & {\frac{2}{{21}}} & 0 & 0 & 0 & 0 & {\frac{2}{{21}}\gamma _B } \\ 0 & 0 & 0 & 0 & {\frac{{5 - \alpha }}{{21}}} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & {\frac{\alpha }{{21}}} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & {\frac{{5 - \alpha }}{{21}}} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & {\frac{\alpha }{{21}}} & 0 \\ 0 & {\frac{2}{{21}}\gamma _A } & 0 & {\frac{2}{{21}}\gamma _B } & 0 & 0 & 0 & 0 & {\frac{2}{{21}}} \\ \end{array}} \right), \end{equation} \noindent where the basis are spanned by $ \left\{ {\left| {{\rm{00}}} \right\rangle ,\left| {{\rm{01}}} \right\rangle ,\left| {{\rm{02}}} \right\rangle ,\left| {{\rm{10}}} \right\rangle ,\left| {{\rm{11}}} \right\rangle ,\left| {{\rm{12}}} \right\rangle ,\left| {{\rm{20}}} \right\rangle ,\left| {{\rm{21}}} \right\rangle ,\left| {{\rm{22}}} \right\rangle } \right\}$. 
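The analytic form above is easy to verify numerically. The following is a minimal sketch (assuming NumPy; the helper names \texttt{ket}, \texttt{rho0} and \texttt{evolve} are ours and do not appear in the original derivation) which builds $\rho(0)$, applies the Kraus operators of Eq.(1), and checks two entries of the evolved matrix:
\begin{verbatim}
# Minimal numerical sketch of the local dephasing evolution in Eq. (1).
import numpy as np

def ket(i, j):                  # |ij> in the product basis {|00>,...,|22>}
    v = np.zeros(9)
    v[3*i + j] = 1.0
    return v

def rho0(alpha):                # the initial qutrit-qutrit state defined above
    psi = ket(0, 1) + ket(1, 0) + ket(2, 2)
    r = (2/21) * np.outer(psi, psi)
    for (i, j) in [(0, 0), (1, 2), (2, 1)]:
        r += (alpha/21) * np.outer(ket(i, j), ket(i, j))
    for (i, j) in [(1, 1), (2, 0), (0, 2)]:
        r += ((5 - alpha)/21) * np.outer(ket(i, j), ket(i, j))
    return r

def evolve(r, t, Gamma=1.0):    # Eq. (1) with Gamma_A = Gamma_B = Gamma
    g = np.exp(-Gamma*t/2); w = np.sqrt(1 - g**2)
    I3 = np.eye(3)
    E = [np.kron(np.diag([1, g, g]), I3), np.kron(np.diag([0, w, w]), I3)]
    D = [np.kron(I3, np.diag([1, g, g])), np.kron(I3, np.diag([0, w, w]))]
    return sum(Dj @ Ei @ r @ Ei @ Dj for Ei in E for Dj in D)

rt = evolve(rho0(4.5), t=0.3)
print(np.isclose(np.trace(rt), 1.0))              # the map preserves the trace
print(np.isclose(rt[1, 3], (2/21)*np.exp(-0.3)))  # coherence <01|rho(t)|10>
\end{verbatim}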
Let us analyze the matrix carefully. There are three eigenvalues of the partial transposition of the state $\rho \left(t \right)$ which could be negative, namely $ \lambda _1 = f\left( {\Gamma _A } \right),\lambda _2 = f\left( {\Gamma _B } \right),$ and $\lambda _3 = f\left( {\Gamma _A + \Gamma _B } \right)$, where $ f\left( \lambda \right) = \frac{{e^{ - \lambda t} }}{{882}}\left( {105e^{\lambda t} - \sqrt {11025e^{2\lambda t} + 1764e^{\lambda t} \left( {4 - 5\alpha e^{\lambda t} + \alpha ^2 e^{\lambda t} } \right)} } \right) $. For simplicity, we choose the local asymptotic dephasing rates $\Gamma_{\rm A} = \Gamma_{\rm B} = \Gamma$, and thus $ \lambda _1 = \lambda _2 < \lambda _3$ in the following arguments. In order to have a vivid illustration, in Fig. 1 we plot $\lambda _1$ as a function of $t$ and $\Gamma$ with specific $\alpha=4.5$ and $\alpha=4.9$, respectively. Fig.2(a) shows the value of $\lambda _1$ versus $t$ for different decoherence rates with $\alpha=4.5$, and we can see that this eigenvalue of the partial transposition of the state $\rho(t)$ always becomes positive in finite time. For example, if we choose $\Gamma=1$, the density matrix $\rho(t)$ will become a PPT state after time $t \approx 0.58$ in Fig.2(a). Analytically, the time at which $\rho(t)$ becomes a PPT state is $ t_d = \frac{1}{{\Gamma }}\ln \frac{4}{{\alpha \left( {5 - \alpha } \right)}}$. Next, we use realignment to verify the BE in this evolution. To this end, we need to compute the quantity $ \left\| {\rho(t) ^R } \right\|-1$, and it is given by $ \frac{2}{{21}}e^{ - \Gamma t} \left( {2 + 4e^{\frac{1}{2}\Gamma t} + \left( { - 7 + \sqrt {19 - 15\alpha + 3\alpha ^2 } } \right)e^{\Gamma t} } \right)$. In order to make a comparison, we plot $ \left\| {\rho \left( t \right)^R } \right\| - 1$ as a function of time $t$ by choosing $\alpha = 4.5$ and $\Gamma = 1$ in Fig.2(b). We can see that if $ t <0.84$, the value of $ \left\| {\rho \left( t \right)^R } \right\| - 1$ is always positive, which indicates that in the range $ 0.58 < t <0.84$, the two-qutrit system is in a bound entangled state. Thus we have shown that free initial entangled states can evolve into non-distillable (bound) entangled states in finite time under the local external asymptotic dephasing noise. \begin{figure} \caption{(a) The eigenvalue $\lambda _1$ versus time $t$ for different dephasing rates. The solid, dash-dotted, and dashed lines correspond to $\Gamma = 1,\Gamma = 0.7$, and $\Gamma = 0.4$, respectively. (b) The quantity $\left\| {\rho \left( t \right)^R } \right\| - 1$ versus time $t$ for $\alpha = 4.5$ and $\Gamma = 1$.} \label{fig2} \end{figure} Besides, we find that with $\alpha = 4.5$ and $\Gamma = 1$, the state $\rho(t)$ will be separable when $t>1.39$. It can be proved by extracting three ``$2\times2$'' density operators, which are respectively spanned by $\{\left|00\right\rangle,\left|01\right\rangle,\left|10\right\rangle,\left|11\right\rangle\}$, $\{\left|01\right\rangle,\left|02\right\rangle,\left|21\right\rangle,\left|22\right\rangle\}$ and $\{\left|10\right\rangle,\left|12\right\rangle,\left|20\right\rangle,\left|22\right\rangle\}$, from the state $\rho(t)$. Then one can find that the condition $t>1.39$ makes the three states separable. As the state $\rho(t)$ is actually a linear combination of the above three states and some product states, $\rho(t)$ also becomes separable when $t>1.39$. Nevertheless, there is still a small window $t\in[0.84,1.39]$ in which the state $\rho(t)$ is merely PPT and we do not know whether it is entangled or separable.
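These thresholds can also be located numerically. A short sketch (reusing \texttt{rho0} and \texttt{evolve} from the previous listing; again our own naming, not taken from the original text) computes the smallest eigenvalue of the partial transpose and the realignment quantity $\left\|\rho(t)^R\right\|-1$:
\begin{verbatim}
# Locating the window 0.58 < t < 0.84 for alpha = 4.5 and Gamma = 1.
import numpy as np

def min_pt_eig(r):              # smallest eigenvalue of the partial transpose
    T = r.reshape(3, 3, 3, 3)   # T[i,j,k,l] = <ij|rho|kl>
    pt = T.transpose(0, 3, 2, 1).reshape(9, 9)
    return np.linalg.eigvalsh(pt).min()

def realignment_norm(r):        # trace norm of the realigned matrix rho^R
    T = r.reshape(3, 3, 3, 3)
    R = T.transpose(0, 2, 1, 3).reshape(9, 9)
    return np.linalg.svd(R, compute_uv=False).sum()

for t in (0.5, 0.6, 0.8, 0.9):
    rt = evolve(rho0(4.5), t, Gamma=1.0)
    print(t, min_pt_eig(rt), realignment_norm(rt) - 1.0)
# NPT (distillable) for t < 0.58; PPT with ||rho^R|| - 1 > 0
# (bound entangled) for 0.58 < t < 0.84.
\end{verbatim}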
To summarize the example: starting from the initial state $\rho(0)$ with $ 4 < \alpha \le 5$, the qutrit-qutrit state $\rho(t)$ will experience three phases under the local dephasing noise in Eq.(1): it evolves from an initial distillable entangled state to a PPT entangled state, then changes to a separable state, and finally remains in this phase for ever. In other words, $\rho(t)$ first experiences DSD and then ESD under local decoherence. In contrast with the above example, we briefly consider the initial state $\rho(0)$ for $ 3 < \alpha \le 4$. In this case it is a PPT entangled state \cite{Horodecki:1999}, and one can verify that the state $\rho(t)$ will finally become separable in finite time under the local dephasing noise in Eq.(1); that is, $\rho(t)$ only experiences ESD. Moreover, we can easily verify the time-domain factorization relation as follows, \begin{equation} S_{t_1 + t_2 } = S_{t_1 } S_{t_2 }, \end{equation} \noindent where $t_1 ,t_2 \ge 0$, and $S$ denotes the noise in Eq.(1). Because the entanglement cannot increase under local operations, we conclude that if a state becomes separable at a specific time in Eq.(1), it must remain separable at all subsequent times. More generally, if an entangled state evolves into a PPT state (either entangled or separable) at a specific time in Eq.(1), it remains PPT for all subsequent time, since local operations preserve the PPT property. Of course, such disappearance of distillability in finite time can seriously affect the application of entanglement in quantum information tasks. An important question arises naturally: given a free initial entangled state, does there exist a suitable intervention that may alter the final fate of DSD? In the following we show that one can realize this aim by merely performing a simple local unitary operation on the initial state in the above example. Let the initial local operation be $ U = \mathbb{I}_3 \otimes \lambda$, with $ \lambda = \left| 0 \right\rangle \left\langle 1 \right| + \left| 1 \right\rangle \left\langle 0 \right| + \left| 2 \right\rangle \left\langle 2 \right| $. Then, the transformed state is $ \rho '\left( 0 \right) = U\rho \left( 0 \right)U^\dag=\frac{2}{7}P_ + + \frac{\alpha }{7}\rho _ + + \frac{{5 - \alpha }}{7}\rho _ - ,$ where the projectors are $P_ + = \left| {\Psi _ + } \right\rangle \left\langle {\Psi _ + } \right|,\left| {\Psi _ + } \right\rangle = \frac{1}{{\sqrt 3 }}\left( {\left| {00} \right\rangle + \left| {11} \right\rangle + \left| {22} \right\rangle } \right), \rho _ + = \frac{1}{3}\left( {\left| {01} \right\rangle \left\langle {01} \right| + \left| {12} \right\rangle \left\langle {12} \right| + \left| {20} \right\rangle \left\langle {20} \right|} \right), \rho _ - = \frac{1}{3}\left( {\left| {10} \right\rangle \left\langle {10} \right| + \left| {21} \right\rangle \left\langle {21} \right| + \left| {02} \right\rangle \left\langle {02} \right|} \right).$ Evidently, the states $\rho '\left( 0 \right)$ and $ \rho \left( 0 \right)$ are equally useful quantum resources without decoherence. In Ref.\cite{Horodecki:1999}, Horodecki demonstrated that $\rho' \left( 0 \right)$ is a free entangled state for $4 < \alpha \le 5$. It will evolve into $\rho '\left( t \right)$ according to Eq.(1), and its partial transpose always has a negative eigenvalue at any finite time $t$. So $\rho '\left( t \right)$ never experiences ESD when subject only to the local dephasing noise in Eq.(1). Furthermore, the state $ \rho' \left( t \right)$ does not undergo DSD in finite time.
To see this, we perform the local operation $ \left( {\left| 1 \right\rangle \left\langle 1 \right| + \left| 2 \right\rangle \left\langle 2 \right|} \right) \otimes \left( {\left| 1 \right\rangle \left\langle 1 \right| + \left| 2 \right\rangle \left\langle 2 \right|} \right) $ on $ \rho' \left( t \right)$. The resulting two-qubit state is easily proved to be entangled and hence distillable at any finite time $t$. So the state $ \rho ' \left( t \right)$ can always be distilled under the local dephasing noise. So far we have considered DSD in the presence of local dephasing noise. Another useful physical quantity in quantum-information problems is the fidelity, which measures to what extent the evolved state is close to the initial one. For concreteness, we study the states $\rho(t)$ and $\rho '(t)$ by means of the Bures fidelity \cite{Bures:1969}. The Bures fidelity of states $\rho$ and $\sigma$ is defined as $ F\left( {\rho ,\sigma } \right) = \left[ {tr\left( {\sqrt {\sqrt \rho \sigma \sqrt \rho } } \right)} \right]^2$. For the state $ {\rho \left( t \right)}$, the fidelity is given by \begin{equation} F^\rho \left( t \right) = \left[ {\frac{1}{{21}}\left( {15 + \sqrt {6e^{ - \Gamma t} \left( {1 + 2e^{\Gamma t} + \sqrt {1 + 8e^{\Gamma t} } } \right)} } \right)} \right]^2, \end{equation} \noindent while for $ \rho '\left( t \right)$ it reads \begin{equation} F^{\rho '} \left( t \right) = \left[ {\frac{1}{{21}}\left( {15 + \sqrt {18 + 6\sqrt {1 + 8e^{ - 2\Gamma t} } } } \right)} \right]^2. \end{equation} \noindent Eqs.(5) and (6) indicate that the fidelities do not depend on the coefficient $ \alpha$. In Fig.3, we plot $F^\rho \left( t \right)$ and $F^{\rho '} \left( t \right)$ as functions of $t$ with $ \Gamma = 1$. Obviously, the fidelity $F^{\rho '} \left( t \right)$ is always larger than $F^\rho \left( t \right)$, which means that $ \rho \left( t \right)$ degrades faster than $ \rho '\left( t \right)$. In the infinite time limit, the values of $F^{\rho '} \left( t \right)$ and $F^\rho \left( t \right)$ approach $ \left[ {\frac{1}{{21}}\left( {15 + 2\sqrt 6 } \right)} \right]^2$ and $ \left[ {\frac{1}{{21}}\left( {15 + 2\sqrt 3 } \right)} \right]^2$, respectively. Thus, we have shown that $ \rho '\left( t \right)$ has advantages over $ \rho \left( t \right)$ in two aspects: first, it does not experience DSD in finite time; second, its fidelity decays more slowly than that of $ \rho \left( t \right)$ for all time. \begin{figure} \caption{The fidelity versus time $t$ for different initial states. The solid and dashed lines correspond to $F^\rho \left( t \right)$ and $F^{\rho '} \left( t \right)$, respectively.} \label{fig3} \end{figure} \section{Construction of DSD-free states} In the last section we have presented examples of qutrit-qutrit states with and without DSD. We call the latter DSD-free states; they always have a negative partial transposition (NPT). NPT is a necessary condition for distillability under LOCC, so DSD-free states are a useful resource for quantum-information tasks, as their entanglement is robust against the local dephasing noise described in Eq.(1). It is thus worth identifying diverse DSD-free states for different kinds of local dephasing noise. In this sense, DSD-free states deserve storage in a quantum-information ``warehouse'' and they are more useful than DSD states. On the other hand, one can also directly propose schemes of quantum information processing based on the robustness of DSD-free states.
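The robustness of the DSD-free state $\rho'(t)$ can be illustrated with the same numerical tools (reusing \texttt{rho0}, \texttt{evolve} and \texttt{min\_pt\_eig} from the sketches above; SciPy is assumed only for the matrix square root in the Bures fidelity, and the naming is again ours):
\begin{verbatim}
# rho'(t) stays NPT (hence distillable) while rho(t) becomes PPT; its
# Bures fidelity with the initial state also decays more slowly.
import numpy as np
from scipy.linalg import sqrtm

def bures_fidelity(rho, sigma):
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s)))**2

swap01 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])  # the local unitary lambda
U = np.kron(np.eye(3), swap01)                        # U = I_3 (x) lambda
r0 = rho0(4.5)
rp0 = U @ r0 @ U.T

for t in (1.0, 3.0, 10.0):
    rt, rpt = evolve(r0, t), evolve(rp0, t)
    print(t, min_pt_eig(rt), min_pt_eig(rpt),         # rho'(t) remains NPT
          bures_fidelity(r0, rt), bures_fidelity(rp0, rpt))
\end{verbatim}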
We now turn to develop several approaches to construct DSD-free states for an arbitrary qutrit-qutrit state $\sigma$. As a direct method, we can locally project a few copies of the state $\sigma (t)$ onto some $2\otimes N (N\geq2)$ states. Such states are distillable (DSD-free) under LOCC, if and only if they have NPT in finite time \cite{Horodecki:1997}. Generally, it is difficult to find the projectors, and there have been a few results on this problem \cite{Song:2009,Horodecki:1999-2,Chen:2000}. Here we only investigate one copy of $\sigma (t)$ under the dephasing noise described in Eq.(1). For example: \textbf{Lemma 1}. Consider an NPT entangled state $\sigma(0)$ and let it experience the local dephasing noise in Eq.(1). Then $\sigma(t)$ is always distillable in finite time, when $\sum_{i= 1}^{2} D_{2}^{\dagger}\left(t\right)E_{i}^{\dagger}\left(t\right) \sigma\left(0\right) E_{i}\left(t\right)D_{2}\left(t\right)$ or $\sum_{i= 1}^{2} D_{i}^{\dagger}\left(t\right)E_{2}^{\dagger}\left(t\right) \sigma\left(0\right) E_{2}\left(t\right)D_{i}\left(t\right)$ is entangled. \textit{Proof}. One can get the above given state by projecting $\sigma(t)$ with the projector $D_{2}\left(t\right)$ on system $B$ or $E_{2}\left(t\right)$ on system $A$, respectively (both projectors actually have nothing to do with time $t$ when separately used as operators). Since the given state is an entangled qubit-qutrit state, it is distillable, and the assertion follows from \cite{Horodecki:1997}. It is interesting to note that the given state is just the sum of the third and fourth terms in Eq.(1). \hspace*{\fill}$\blacksquare$ Lemma 1 actually requires checking the PPT of some $6 \times 6$ matrices depending on time $t$, which can be done by using Mathematica. This is a systematic way to build the subspace of robust entangled states that never experience DSD in Eq. (1). To show the connection between the last section and lemma 1, we propose a weak version of lemma 1 as follows. \textbf{Lemma 2}. If $D_{2}^{\dagger}\left(t\right)E_{2}^{\dagger}\left(t\right) \sigma\left(0\right) E_{2}\left(t\right)D_{2}\left(t\right)$ is entangled, then the state $\sigma(0)$ is a DSD-free state under the local dephasing noise in Eq.(1). \textit{Proof}. The state $\sum_{i= 1}^{2} D_{2}^{\dagger}\left(t\right)E_{i}^{\dagger}\left(t\right) \sigma\left(0\right) E_{i}\left(t\right)D_{2}\left(t\right)$ is entangled when $D_{2}^{\dagger}\left(t\right)E_{2}^{\dagger}\left(t\right) \sigma\left(0\right) E_{2}\left(t\right)D_{2}\left(t\right)$ is entangled. The assertion then follows from lemma 1. \hspace*{\fill}$\blacksquare$ One may see that the state $\rho '(0)$ in the previous section belongs to the case of lemma 2. The advantage of lemma 2 is that the density operators $D_{2}^{\dagger}\left(t\right)E_{2}^{\dagger}\left(t\right) \sigma\left(0\right) E_{2}\left(t\right)D_{2}\left(t\right)$ do not contain time $t$ and are convenient for calculation. This result is reasonable for the local dephasing noise described in Eq.(1), which occurs only between the ground state $i=0$ and the $i$th ($i=1,2$) excited state. The model in Eq.(1) is neither the simplest case of local dephasing nor the most general case. In the most general case dephasing occurs between all local basis states within each subsystem. In what follows we put forward some results under such general local dephasing noise \cite{Ann:2007}. \textbf{Lemma 3}.
Qutrit-qutrit entangled MC states are DSD-free states in general local dephasing noise. \textit{Proof}. Let us consider the qutrit-qutrit maximally correlated (MC) states $\sigma_{MC}(0)$, which read \cite {Horodecki:2003} \begin{equation} \sigma_{MC}(0)=\sum^{2}_{i,j=0}a_{ij}\left|ii\rangle\langle jj\right|. \end{equation} We let such a state go through the channel of general local dephasing noise. Such noise cannot change the diagonal elements, and the dephasing factors multiplying the non-diagonal elements of the evolved state remain non-zero for any finite time. So the evolved state $\sigma_{MC}(t)$ is still an MC state. On the other hand, an MC state is separable if and only if all non-diagonal entries are zero. Then $\sigma_{MC}(t)$ is always entangled for all finite $t$ whenever the initial state $\sigma_{MC}(0)$ is entangled. Furthermore, one may prove that $\sigma_{MC}(t)$ is always distillable by using an argument similar to the one showing the distillability of the state $ \rho' \left( t \right)$ in the previous section.\hspace*{\fill}$\blacksquare$ This example suggests a stronger way of building DSD-free entangled states, i.e., \textbf{Lemma 4}. We perform the local operation $ \left( {\left| i \right\rangle \left\langle i \right| + \left| j \right\rangle \left\langle j \right|} \right) \otimes \left( {\left| m \right\rangle \left\langle m \right| + \left| n \right\rangle \left\langle n \right|} \right)$ on any initial qutrit-qutrit state, where $ i,j,m$, and $n$ are chosen from the basis labels 0, 1, and 2. If the projected state is an entangled MC state, then the initial state is a DSD-free state in general local dephasing noise.\hspace*{\fill}$\blacksquare$ One can prove lemma 4 by showing that there are always some nonzero non-diagonal entries of the projected state, and we can take it as the new initial state in the local dephasing noise. Evidently, the class of states built in lemma 4 contains the MC states of lemma 3. These results can be extended to higher dimensions. For simplicity, we consider the general MC state $\sigma^{\prime}_{MC}(0)=\sum^{d-1}_{i,j=0}a_{ij}\left|ii\rangle\langle jj\right|$. Suppose it experiences generalized local dephasing noise. Analogous to the arguments for lemma 3, one can see that the state $\sigma^{\prime}_{MC}(t)$ also does not undergo DSD in finite time. Thus we have provided a high dimensional subspace $\{|00\rangle,|11\rangle,\dots,|d-1,d-1\rangle\}$ in which all entangled states are distillable under local dephasing noise; that is, \textbf{Lemma 5}. Entangled MC states are DSD-free states in general local dephasing noise. \hspace*{\fill}$\blacksquare$ One can also build other higher dimensional DSD-free states, which could be locally projected onto entangled MC states by following the techniques for lemma 4. As a short summary, lemmas 1 to 5 give a few primary results on DSD-free states in finite time, in terms of the special dephasing noise in Eq.(1) and general dephasing noise, respectively. To gain a better understanding of the entanglement dynamics, we further study the properties of the evolved state in the infinite-time limit, which are summarized in the following lemma. \textbf{Lemma 6}. For any qutrit-qutrit state under the dephasing noise in Eq.(1), the final evolved state could be separable or eternally distillable in the infinite-time limit, but no PPT entangled state can exist as time goes to infinity. \textit{Proof}. Let us briefly justify the above statement.
Consider an arbitrary qutrit-qutrit state $\sigma$ subjected to the local dephasing for an infinite time; then the final state $ \sigma \left( t \right)$ can always be written as: $ a\left| {00} \right\rangle \left\langle {00} \right| + \left| 0 \right\rangle \left\langle 0 \right| \otimes \left( {b\left| 1 \right\rangle \left\langle 1 \right| + c\left| 1 \right\rangle \left\langle 2 \right| + c^* \left| 2 \right\rangle \left\langle 1 \right| + d\left| 2 \right\rangle \left\langle 2 \right|} \right) + \left( {e\left| 1 \right\rangle \left\langle 1 \right| + f\left| 1 \right\rangle \left\langle 2 \right| + f^* \left| 2 \right\rangle \left\langle 1 \right| + g\left| 2 \right\rangle \left\langle 2 \right|} \right) \otimes \left| 0 \right\rangle \left\langle 0 \right| + \left( {\left| 1 \right\rangle \left\langle 1 \right| + \left| 2 \right\rangle \left\langle 2 \right|} \right) \otimes \left( {\left| 1 \right\rangle \left\langle 1 \right| + \left| 2 \right\rangle \left\langle 2 \right|} \right)\sigma \left( 0 \right)\left( {\left| 1 \right\rangle \left\langle 1 \right| + \left| 2 \right\rangle \left\langle 2 \right|} \right) \otimes \left( {\left| 1 \right\rangle \left\langle 1 \right| + \left| 2 \right\rangle \left\langle 2 \right|} \right) $. There are four terms in all, each corresponding to one of the terms in Eq.(1). Note that the first three terms can be removed by the local operator $ \left( {\left| 1 \right\rangle \left\langle 1 \right| + \left| 2 \right\rangle \left\langle 2 \right|} \right) \otimes \left( {\left| 1 \right\rangle \left\langle 1 \right| + \left| 2 \right\rangle \left\langle 2 \right|} \right)$, which leaves the fourth term unchanged. So the evolved state $ \sigma \left( t \right)$ is entangled if and only if the fourth term is entangled. This immediately implies that the state $ \sigma \left( t \right)$ can be separable or eternally NPT and distillable in the infinite-time limit. As we have seen in the previous sections, both cases also occur during finite-time evolution. Since the entanglement property of the evolved state is determined by this two-qubit state, we conclude that any PPT entangled state is unstable as time goes to infinity under the local dephasing noise in Eq.(1). In contrast, it is still an open question whether there always exist PPT entangled states during finite-time evolution. \hspace*{\fill}$\blacksquare$ \section{Discussions and Conclusions} Before concluding we would like to discuss some open questions as follows: Firstly, the initial state in our scheme is a free entangled state. We have shown that it may evolve into a PPT entangled state and then into a separable state in finite time; it may also remain free entangled for all finite times under the local dephasing noise described by Eq.(1). We have also shown that any qutrit-qutrit PPT entangled state will lose its entanglement as time goes to infinity in our decoherence model. An open question is then whether PPT entanglement can be preserved for any finite time under (general) local dephasing noise. If the answer is no (as supported by the calculation in Sec. II), then we have another interesting difference between free and bound entangled states: the former can be robust against the decoherence at all times while the latter cannot. Secondly, we have proposed a few systematic methods to build DSD-free subspaces for $ 3 \otimes 3$ states, in which all states keep their distillability asymptotically under local dephasing noise.
The primary extension to higher-dimensional bipartite systems has also been proposed. It is then an open question to find more entangled states which are robust against different local noises (beyond our model) in spaces of arbitrary dimension. Such states are applicable to quantum information schemes and thus deserve a deeper study. In summary, we have introduced the concept of DSD through an explicit qutrit-qutrit state. We found that under the bi-local dephasing noise, the free entangled state may evolve into a PPT entangled state (DSD) and then become separable (ESD) permanently. Moreover, a simple local unitary operation on the initial state can avoid the sudden death of distillability. We have also compared the behavior of the DSD and DSD-free states in terms of the Bures fidelity. Next, we have proposed systematic methods of building DSD-free subspaces against decoherence, so that the evolved states are distillable at all finite times. Finally, we proved that there is no PPT entangled state in the infinite-time limit in our model. Our results imply that further study of the time evolution of free entanglement under practical local noises is required. \end{document}
\begin{document} \begin{abstract} In this paper, we prove a global existence and blow-up of the positive solutions to the initial-boundary value problem of the nonlinear porous medium equation and the nonlinear pseudo-parabolic equation on the stratified Lie groups. Our proof is based on the concavity argument and the Poincar\'e inequality, established in \cite{RS_JDE} for stratified groups. \end{abstract} \maketitle \tableofcontents \section{Introduction} The main purpose of this paper is to study the global existence and blow-up of the positive solutions to the initial-boundary problem of the nonlinear porous medium equation \begin{align}\label{main_eqn_p>2} \begin{cases} u_t -\mathcal{L}_p (u^m) = f(u), \,\,\, & x \in D,\,\, t>0, \\ u(x,t) =0, \,\,\,& x\in \partial D,\,\, t>0, \\ u(x,0) = u_0(x)\geq 0,\,\,\, & x \in \overline{D}, \end{cases} \end{align} and the nonlinear pseudo-parabolic equation \begin{align}\label{main_eqn_p} \begin{cases} u_t - \nabla_H \cdot (|\nabla_H u|^{p-2}\nabla_H u_t) -\mathcal{L}_pu = f(u), \,\,\, & x \in D,\,\, t>0, \\ u(x,t) =0, \,\,\,& x\in \partial D,\,\, t>0, \\ u(x,0) = u_0(x)\geq 0,\,\,\, & x \in \overline{D}, \end{cases} \end{align} where $m \geq 1$ and $p\geq 2$, $f$ is locally Lipschitz continuous on $\mathbb{R}$, $f(0)=0$, and such that $f(u)>0$ for $u>0$. Furthermore, we suppose that $u_0$ is a non-negative and non-trivial function in $C^1(\overline{D})$ with $u_0(x)=0$ on the boundary $\partial D$ for $p=2$ and in $L^{\infty}(D)\cap \mathring{S}^{1,p}(D)$ for $p>2$, respectively. \begin{defn} Let $\mathbb{G}$ be a stratified group. We say that an open set $D \subset \mathbb{G}$ is an admissible domain if it is bounded and if its boundary $\partial D$ is piecewise smooth and simple, that is, it has no self-intersections. \end{defn} Let $\mathbb{G}$ be a stratified group. Let $D \subset \mathbb{G}$ be an open set, then we define the functional spaces \begin{equation}\label{sobolev} S^{1,p}(D) =\{ u: D \rightarrow \mathbb{R}; u, |\nabla_{H} u| \in L^p(D) \}. \end{equation} We consider the following functional \begin{equation*} \mathcal{J}_p(u):= \left( \int_{D} |\nabla_{H} u(x)|^p dx\right)^{\frac{1}{p}}. \end{equation*} Thus, the functional class $\mathring{S}^{1,p}(D)$ can be defined as the completion of $C_0^1(D)$ in the norm generated by $\mathcal{J}_p$, see e.g. \cite{CDG1993}. A Lie group $\mathbb{G}=(\mathbb{R}^{n},\circ)$ is called a stratified (Lie) group if it satisfies the following conditions: (a) For some integer numbers $N_1+N_{2}+...+N_{r}=n$, the decomposition $\mathbb{R}^{n}=\mathbb{R}^{N_1}\times\ldots\times\mathbb{R}^{N_{r}}$ is valid, and for any $\lambda>0$ the dilation $$\delta_{\lambda}(x):=(\lambda x',\lambda ^{2}x^{(2)},\ldots,\lambda^{r}x^{(r)})$$ is an automorphism of $\mathbb{G}.$ Here $x'\equiv x^{(1)}\in \mathbb{R}^{N_1}$ and $x^{(k)}\in \mathbb{R}^{N_{k}}$ for $k=2,\ldots,r.$ (b) Let $N_1$ be as in (a) and let $X_{1},\ldots,X_{N_1}$ be the left-invariant vector fields on $\mathbb{G}$ such that $X_{k}(0)=\frac{\partial}{\partial x_{k}}|_{0}$ for $k=1,\ldots,N_1.$ Then the H\"ormander rank condition must be satisfied, that is, $${\rm rank}({\rm Lie}\{X_{1},\ldots,X_{N_1}\})=n,$$ for every $x\in\mathbb{R}^{n}.$ Then, we say that the triple $\mathbb{G}=(\mathbb{R}^{n},\circ, \delta_{\lambda})$ is a stratified (Lie) group. Recall that the standard Lebesgue measure $dx$ on $\mathbb R^{n}$ is the Haar measure for $\mathbb{G}$ (see e.g. \cite{FR}, \cite{RS_book}). 
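For orientation, we recall the simplest nonabelian example, which is also the setting of several of the results quoted below; it is included purely as an illustration of the definition. The Heisenberg group $\mathbb{H}^{1}=(\mathbb{R}^{3},\circ,\delta_{\lambda})$ is a stratified group of step $r=2$ with $N_{1}=2$ and $N_{2}=1$: with the usual group law one may take the left-invariant vector fields $$X_{1}=\frac{\partial}{\partial x_{1}}-\frac{x_{2}}{2}\frac{\partial}{\partial x_{3}},\qquad X_{2}=\frac{\partial}{\partial x_{2}}+\frac{x_{1}}{2}\frac{\partial}{\partial x_{3}},$$ so that $[X_{1},X_{2}]=\frac{\partial}{\partial x_{3}}$ and the H\"ormander rank condition holds, while the dilations are $$\delta_{\lambda}(x_{1},x_{2},x_{3})=(\lambda x_{1},\lambda x_{2},\lambda^{2} x_{3}),$$ see e.g. \cite{FR}, \cite{RS_book}. All statements below are formulated for a general stratified group $\mathbb{G}$.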
The left-invariant vector field $X_{j}$ has an explicit form: \begin{equation}\label{Xk0} X_{k}=\frac{\partial}{\partial x'_{k}}+ \sum_{l=2}^{r}\sum_{m=1}^{N_{l}}a_{k,m}^{(l)}(x',...,x^{(l-1)}) \frac{\partial}{\partial x_{m}^{(l)}}, \end{equation} see e.g. \cite{RS_book}. The following notations are used throughout this paper: $$\nabla_{H}:=(X_{1},\ldots, X_{N_1})$$ for the horizontal gradient, and \begin{equation}\label{pLap} \mathcal{L}_{p}f:=\nabla_{H}\cdot(|\nabla_{H}f|^{p-2}\nabla_{H}f),\quad 1<p<\infty, \end{equation} for the $p$-sub-Laplacian. When $p=2$, that is, the second order differential operator \begin{equation}\label{sublap} \mathcal{L}=\sum_{k=1}^{N_1}X_{k}^{2}, \end{equation} is called the sub-Laplacian on $\mathbb{G}$. The sub-Laplacian $\mathcal{L}$ is a left-invariant homogeneous hypoelliptic differential operator and it is known that $\mathcal{L}$ is elliptic if and only if the step of $\mathbb{G}$ is equal to 1. One of the important examples of the nonlinear parabolic equations is the porous medium equation, which describes widely processes involving fluid flow, heat transfer or diffusion, and its other applications in different fields such as mathematical biology, lubrication, boundary layer theory, and etc. Existence and nonexistence of solutions to problem \eqref{main_eqn_p>2} for the reaction term $u^m$ in the case $m=1$ and $m>1$ have been actively investigated by many authors, for example, \cite{Ball, Band-Brun, Chen-Fila-Guo, Ding-Hu, Deng-Levine, Gal-Vaz-97, Gr-Mu-Po-13, Hayakawa, Ia-San-14, Ia-San-19, Levine90, LP1, ST-21, Sam-Gal-Ku-Mik, Souplet}, Grillo, Muratori and Punzo considered fractional porous medium equation \cite{Gr-Mu-Pu-1, Gr-Mu-Pu-2}, and it was also considered in the setting of Cartan-Hadamard manifolds \cite{Gr-Mu-Pu-3}. By using the concavity method, Schaefer \cite{Sch09} established a condition on the initial data of a Dirichlet type initial-boundary value problem for the porous medium equation with a power function reaction term when blow-up of the solution in finite time occurs and a global existence of the solution holds. We refer for more details to Vazquez's book \cite{Vaz} which provides a systematic presentation of the mathematical theory of the porous medium equation. The energy for the isotropic material can be modeled by a pseudo-parabolic equation \cite{CG68}. Some wave processes \cite{BBM72}, filtration of the two-phase flow in porous media with the dynamic capillary pressure \cite{Baren} are also modeled by pseudo-parabolic equations. The global existence and finite-time blow-up for the solutions to pseudo-parabolic equations in bounded and unbounded domains have been studied by many researchers, for example, see \cite{Korpusov1, Korpusov2, Long, Luo, Peng, Xu1, Xu2, Xu3} and the references therein. In \cite{PohVer}, Veron and Pohozaev have obtained blow-up results for the following semi-linear diffusion equation on the Heisenberg groups $$ \frac{\partial u(x,t)}{\partial t}-\mathcal{L} u(x,t)=|u(x,t)|^{p},\,\,\,\,\,(x,t)\in \mathbb{H} \times(0,+\infty). $$ Also, blow-up of the solutions to the semi-linear diffusion and pseudo-parabolic equations on the Heisenberg groups was derived in \cite{AAK1, AAK2, DL1, JKS1, JKS2}. In addition, in \cite{RY} the authors found the Fujita exponent on general unimodular Lie groups. 
In some of our considerations a crucial role is played by \begin{itemize} \item The condition \begin{equation}\label{cond-f} \alpha F(u) \leq u^m f(u) + \beta u^{pm} +\alpha\gamma,\,\,\, u>0, \end{equation} where \begin{equation*} F(u)=\frac{pm}{m+1}\int_{0}^{u}s^{m-1}f(s)ds, \,\,\, m\geq 1, \end{equation*} introduced by Chung and Choi \cite{Chung-Choi} for a parabolic equation. We will deal with several variants of this condition. \item The Poincar\'e inequality established by the first author and Suragan in \cite{RS_JDE} for stratified groups: \begin{lem}\label{lem1} Let $D \subset \mathbb{G}$ be an admissible domain with $N_1$ being the dimension of the first stratum. Let $1<p<\infty$ with $p\neq N_1$. For every function $u \in C_0^{\infty}(D \backslash \{x'=0\})$ we have \begin{equation} \int_{D} |\nabla_{H} u|^p dx \geq \frac{|N_1-p|^p}{(pR)^p} \int_{D} |u|^p dx, \end{equation} where $R = \sup_{x \in D} |x'|$. \end{lem} \end{itemize} Note that it is possible to interpret the constant $\frac{|N_1-p|^p}{(pR)^p}$ as a measure of the size of the domain $D$. Hence $\beta$ in \eqref{cond-f} depends on the size of the domain $D$. Our paper is organised as follows: we discuss the existence and nonexistence of positive solutions to the nonlinear porous medium equation in Section \ref{sec1} and to the nonlinear pseudo-parabolic equation in Section \ref{sec2}. \section{Nonlinear porous medium equation}\label{sec1} In this section, we study the existence of global solutions and the blow-up phenomena for the initial-boundary value problem \eqref{main_eqn_p>2}. \subsection{Blow-up solutions of the nonlinear porous medium equation} We start with the blow-up property. \begin{thm}\label{thm_p>2} Let $\mathbb{G}$ be a stratified group with $N_1$ being the dimension of the first stratum. Let $D \subset \mathbb{G}$ be an admissible domain. Let $2\leq p<\infty$ with $p\neq N_1$. Assume that the function $f$ satisfies \begin{equation}\label{condt_p} \alpha F(u) \leq u^{m} f(u) + \beta u^{pm} +\alpha\gamma,\,\,\, u>0, \end{equation} where \begin{equation*} F(u)=\frac{pm}{m+1}\int_{0}^{u}s^{m-1}f(s)ds, \,\,\, m\geq 1, \end{equation*} for some \begin{equation*} \gamma >0,\,\, 0<\beta\leq \frac{|N_1-p|^p}{(pR)^p}\frac{(\alpha - m -1)}{m+1}\,\,\, \text{ and } \,\, \alpha>m+1, \end{equation*} where $R = \sup_{x \in D} |x'|$ and $x=(x',x'')$ with $x'$ being in the first stratum. Let $u_0 \in L^{\infty}(D)\cap\mathring{S}^{1,p}(D)$ satisfy the inequality \begin{equation}\label{J(1)} J_0:= - \frac{1}{m+1} \int_{D} |\nabla_{H} u^m_0(x)|^p dx + \int_{D} (F(u_0(x))-\gamma) dx >0. \end{equation} Then any positive solution $u$ of \eqref{main_eqn_p>2} blows up in finite time $T^*,$ i.e., there exists \begin{equation}\label{T} 0<T^*\leq \frac{M}{\sigma \int_{D}u_0^{m+1}(x)dx}, \end{equation} such that \begin{equation} \lim_{t\rightarrow T^*} \int_{0}^t \int_{D} u^{m+1} (x,\tau) dx d\tau = +\infty, \end{equation} where $M>0$ and $\sigma = \frac{\sqrt{pm\alpha}}{m+1}-1>0$. In fact, in \eqref{T}, we can take \begin{equation*} M = \frac{(1+\sigma)(1+1/\sigma)(\int_D u_0^{m+1}(x)dx)^2}{\alpha (m+1)J_0}. \end{equation*} \end{thm} \begin{rem} Note that the condition \eqref{condt_p} on the nonlinearity includes the following cases: \begin{itemize} \item[1.] Philippin and Proytcheva \cite{PhP-06} used the condition \begin{equation} (2+\epsilon)F(u) \leq u f(u), \,\,\, u>0, \end{equation} where $\epsilon >0$. It is a special case of an abstract condition by Levine and Payne \cite{LP2}. \item[2.]
Bandle and Brunner \cite{Band-Brun} relaxed this condition as follows \begin{equation} (2+\epsilon)F(u) \leq u f(u) + \gamma, \,\,\, u>0, \end{equation} where $\epsilon >0$ and $\gamma >0$. \end{itemize} These cases were established on the bounded domains of the Euclidean space, and it is a new result on the stratified groups. \end{rem} \begin{proof}[Proof of Theorem \ref{thm_p>2}] Assume that $u(x,t)$ is a positive solution of \eqref{main_eqn_p>2}. We use the concavity method for showing the blow-up phenomena. We introduce the functional \begin{align}\label{J1} J(t) := -\frac{1}{m+1} \int_{D} |\nabla_{H} u^m(x,t)|^p dx + \int_{D} (F(u(x,t))-\gamma) dx, \end{align} and by \eqref{J(1)} we have \begin{align}\label{J0} J(0) = -\frac{1}{m+1} \int_{D} |\nabla_{H} u^m_0(x)|^p dx + \int_{D} (F(u_0(x))-\gamma) dx>0. \end{align} Moreover, $J(t)$ can be written in the following form \begin{equation}\label{Fp} J(t) = J(0) + \int_{0}^t \frac{d J(\tau)}{d\tau}d\tau, \end{equation} where \begin{align*} \int_{0}^t \frac{d J(\tau)}{d\tau}d\tau &= - \frac{1}{m+1} \int_{0}^t\int_{D} \frac{d}{d\tau}|\nabla_{H} u^m(x,\tau)|^p dx d\tau + \int_{0}^t \int_{D} \frac{d}{d\tau} (F(u(x,\tau))-\gamma) dx d\tau\\ & = - \frac{p}{m+1} \int_{0}^t\int_{D} |\nabla_{H} u^m(x,\tau)|^{p-2} \nabla_{H} u^m \cdot \nabla_{H} (u^m(x,\tau))_{\tau} dx d\tau\\ & + \int_{0}^t \int_{D} F_u(u(x,\tau)) u_{\tau}(x,\tau) dx d\tau\\ & = \frac{p}{m+1} \int_{0}^t \int_{D} [\mathcal{L}_p (u^m )+ f(u)](u^m(x,\tau))_{\tau} dx d\tau \\ & = \frac{pm}{m+1}\int_{0}^t \int_{D} u^{m-1}(x,\tau) u_{\tau}^2(x,\tau) dx d\tau. \end{align*} Define \begin{equation*} E(t) = \int_{0}^t \int_{D} u^{m+1}(x, \tau) dx d\tau + M, \,\, t\geq 0, \end{equation*} with $M>0$ to be chosen later. Then the first derivative with respect $t$ of $E(t)$ gives \begin{equation*} E'(t)=\int_{D} u^{m+1}(x,t) dx = (m+1)\int_{D} \int_{0}^t u^{m}(x,\tau) u_{\tau}(x,\tau) d\tau dx + \int_{D} u^{m+1}_0(x) dx. \end{equation*} By applying \eqref{condt_p}, Lemma \ref{lem1} and $0<\beta\leq \frac{|N_1-p|^p}{(pR)^p}\frac{(\alpha - m -1)}{m+1}$, we estimate the second derivative of $E(t)$ as follows \begin{align*} E''(t) &=(m+1) \int_{D} u^{m}(x,t) u_t(x,t) dx\\ & = -(m+1) \int_{D} |\nabla_{H} u^m(x,t)|^p dx + (m+1)\int_{D} u^{m}(x,t)f(u(x,t))dx \\ & \geq - (m+1) \int_{D} |\nabla_{H} u^m(x,t)|^p dx + (m+1) \int_{D} \left[ \alpha F(u(x,t)) -\beta u^{pm} (x,t) -\alpha \gamma \right] dx\\ & =\alpha (m+1) \left[ -\frac{1}{m+1} \int_{D} |\nabla_{H} u^m(x,t)|^p dx + \int_{D} (F(u(x,t))-\gamma) dx \right] \\ &+ (\alpha-m-1) \int_{D} |\nabla_{H} u^m(x,t)|^p dx -\beta (m+1) \int_{D} u^{pm}(x, t) dx \\ & \geq \alpha(m+1) \left[ -\frac{1}{m+1} \int_{D} |\nabla_{H} u^m(x,t)|^p dx + \int_{D} (F(u(x,t))-\gamma) dx \right] \\ & + \left[ \frac{|N_1-p|^p}{(pR)^p}(\alpha - m-1) - \beta (m+1) \right]\int_{D} u^{pm}(x, t) dx\\ & \geq \alpha (m+1)\left[ -\frac{1}{m+1} \int_{D} |\nabla_{H} u^m(x,t)|^p dx + \int_{D} (F(u(x,t))-\gamma) dx \right]\\ & = \alpha (m+1)J(t)\\ & = \alpha (m+1) J(0) + p\alpha m \int_{0}^t \int_{D} u^{m-1}(x,\tau) u_{\tau}^2(x,\tau) dx d\tau. 
\end{align*} By employing the H\"older and Cauchy-Schwarz inequalities, we obtain the estimate for $[E'(t)]^2$ as follows \begin{align*} [E'(t)]^2&\leq (1+\delta)\left( \int_{D} \int_{0}^t (u^{m+1}(x,\tau))_{\tau} d\tau dx \right)^2 + \left( 1+ \frac{1}{\delta}\right)\left( \int_{D} u_0^{m+1}(x)dx \right)^2 \\ & = (m+1)^2(1+\delta)\left( \int_{D} \int_{0}^t u^{m}(x,\tau) u_{\tau}(x,\tau)dx d\tau \right)^2 + \left( 1+ \frac{1}{\delta}\right)\left( \int_{D} u_0^{m+1}(x)dx \right)^2\\ & = (m+1)^2(1+\delta)\left( \int_{D} \int_{0}^t u^{(m+1)/2 + (m-1)/2}(x,\tau) u_{\tau}(x,\tau)dx d\tau \right)^2 \\ &+ \left( 1+ \frac{1}{\delta}\right)\left( \int_{D} u_0^{m+1}(x)dx \right)^2 \end{align*} \begin{align*} & \leq (m+1)^2(1+\delta)\left( \int_{D} \left(\int_{0}^t u^{m+1} d\tau\right)^{1/2}\left( \int_{0}^t u^{m-1} u_{\tau}^2(x,\tau)d\tau\right)^{1/2} dx \right)^2 \\ &+ \left( 1+ \frac{1}{\delta}\right)\left( \int_{D} u_0^{m+1}(x)dx \right)^2 \\ & \leq (m+1)^2(1+\delta) \left(\int_{0}^t \int_{D} u^{m+1} dx d\tau\right)\left( \int_{0}^t \int_{D} u^{m-1} u_{\tau}^2(x,\tau)dxd\tau \right) \\ &+ \left( 1+ \frac{1}{\delta}\right)\left( \int_{D} u_0^{m+1}(x)dx \right)^2, \end{align*} for arbitrary $\delta>0$. So we have \begin{equation}\label{eq-E2} \small [E'(t)]^2\leq (m+1)^2(1+\delta) \left(\int_{0}^t \int_{D} u^{m+1} dx d\tau\right)\left( \int_{0}^t \int_{D} u^{m-1} u_{\tau}^2dxd\tau \right) + \left( 1+ \frac{1}{\delta}\right)\left( \int_{D} u_0^{m+1}dx \right)^2. \end{equation} The previous estimates together with $\sigma=\delta= \frac{\sqrt{pm\alpha}}{m+1}-1>0$ where positivity comes from $\alpha > m+1$, imply \begin{align*} &E''(t) E (t) - (1+\sigma) [E'(t)]^2\\ &\geq \alpha M(m+1) \left[ -\frac{1}{m+1} \int_{D} |\nabla_{H} u^m_0|^p dx + \int_{D} (F(u_0)-\gamma)dx\right] \\&+ pm\alpha \left( \int_{0}^t\int_{D} u^{m+1}(x, \tau) dx d\tau \right) \left(\int_{0}^t \int_{D} u_{\tau}^2(x,\tau) u^{m-1}(x,\tau) dx d\tau \right)\\ &- (m+1)^2(1+\sigma) (1+\delta) \left(\int_{0}^t \int_{D} u^{m+1} dx d\tau\right)\left( \int_{0}^t \int_{D} u^{m-1} u_{\tau}^2(x,\tau)dxd\tau \right)\\ & - (1+\sigma)\left( 1+ \frac{1}{\delta}\right)\left( \int_{D} u_0^{m+1}(x)dx \right)^2 \\ &\geq \alpha M(m+1) J(0) - (1+\sigma)\left( 1+ \frac{1}{\delta}\right)\left( \int_{D} u_0^{m+1}(x)dx \right)^2. \end{align*} By assumption $J(0)>0$, thus if we select $$M =\frac{(1+\sigma)\left( 1+ \frac{1}{\delta}\right)\left( \int_{D} u_0^{m+1}(x)dx \right)^2}{\alpha (m+1) J(0)}, $$ that gives \begin{equation} E''(t) E (t) - (1+\sigma) (E'(t))^2 \geq 0. \end{equation} We can see that the above expression for $t\geq 0$ implies \begin{equation*} \frac{d}{dt} \left[ \frac{E'(t)}{E^{\sigma+1}(t)} \right] \geq 0 \Rightarrow \begin{cases} E'(t) \geq \left[ \frac{E'(0)}{E^{\sigma+1}(0)} \right] E^{1+\sigma}(t),\\ E(0)=M. \end{cases} \end{equation*} Then for $\sigma = \frac{\sqrt{pm\alpha}}{m+1}-1>0$, we arrive at \begin{align*} - \frac{1}{\sigma} \left[ E^{-\sigma}(t) - E^{-\sigma}(0) \right] \geq \frac{E'(0)}{E^{\sigma+1}(0)} t, \end{align*} and some rearrangements with $E(0)=M$ give \begin{equation*} E(t) \geq \left( \frac{1}{M^{\sigma}}-\frac{ \sigma \int_{D} u^{m+1}_0(x)dx }{M^{\sigma+1}} t\right)^{-\frac{1}{\sigma}}. \end{equation*} Then the blow-up time $T^*$ satisfies \begin{equation*} 0<T^*\leq \frac{M}{\sigma \int_{D} u_0^{m+1}dx}. \end{equation*} That completes the proof. 
\end{proof} \subsection{Global existence for the nonlinear porous medium equation} We now show that under some assumptions, if a positive solution to \eqref{main_eqn_p>2} exists, its norm is globally controlled. \begin{thm}\label{thm_GEp} Let $\mathbb{G}$ be a stratified group with $N_1$ being the dimension of the first stratum. Let $D \subset \mathbb{G}$ be an admissible domain. Let $2 \leq p<\infty$ with $p\neq N_1$. Assume that \begin{equation}\label{global_cond-p} \alpha F(u) \geq u^{m} f(u) + \beta u^{pm} +\alpha\gamma, \,\,\, u>0, \end{equation} where \begin{equation*} F(u)=\frac{pm}{m+1}\int_{0}^{u}s^{m-1}f(s)ds, \,\,\, m\geq 1, \end{equation*} for some \begin{equation*} \gamma \geq 0, \,\, \alpha \leq 0 \,\,\, \text{ and } \,\,\, \beta \geq \frac{|N_1-p|^p}{(pR)^p}\frac{( \alpha-m-1 )}{m+1}, \end{equation*} where $R = \sup_{x \in D} |x'|$ and $x=(x',x'')$ with $x'$ being in the first stratum. Assume also that $u_0 \in L^{\infty}(D)\cap \mathring{S}^{1,p}(D)$ satisfies inequality \begin{equation}\label{J(0)} J_0:= \int_{D} (F(u_0(x))-\gamma) dx - \frac{1}{m+1} \int_{D} |\nabla_{H} u^m_0(x)|^p dx>0. \end{equation} If $u$ is a positive local solution of problem \eqref{main_eqn_p>2}, then it is global and satisfies the following estimate: \begin{equation*} \int_D u^{m+1}(x,t) dx \leq \int_D u^{m+1}_0(x)dx. \end{equation*} \end{thm} \begin{proof}[Proof of Theorem \ref{thm_GEp}] Recall from the proof of Theorem \ref{thm_p>2}, the functional \begin{align*} J(t) &:= -\frac{1}{m+1} \int_{D} |\nabla_{H} u^m(x,t)|^p dx + \int_{D} (F(u(x,t))-\gamma) dx\\ & = J_0 + \frac{pm}{m+1}\int_{0}^t \int_{D} u^{m-1}(x,\tau) u_{\tau}^2(x,\tau) dx d\tau. \end{align*} Let us define \begin{equation*} \mathcal E(t) = \int_{D} u^{m+1}(x,t)dx. \end{equation*} By applying \eqref{global_cond-p}, Lemma \ref{lem1} and $\beta \geq \frac{|N_1-p|^p}{(pR)^p}\frac{( \alpha-m-1 )}{m+1}$, respectively, one finds \begin{align*} \mathcal E'(t) &=(m+1) \int_{D} u^{m}(x,t) u_t(x,t) dx\\ & = (m+1) \left[\int_{D} u^{m}(x,t) \nabla_{H} \cdot (|\nabla_{H} u^m(x,t)|^{p-2}\nabla_{H} u^m(x,t)) + \int_{D} u^{m}(x,t)f(u(x, t))dx\right] \\ & = (m+1) \left[-\int_{D} |\nabla_{H} u^m(x,t)|^p dx + \int_{D} u^{m}(x,t)f(u(x,t))dx\right] \\ & \leq (m+1) \left[-\int_{D} |\nabla_{H} u^m(x,t)|^p dx + \int_{D} \left[ \alpha F(u(x,t)) -\beta u^{pm} (x,t) -\alpha \gamma \right] dx\right]\\ & =\alpha (m+1) \left[ -\frac{1}{m+1} \int_{D} |\nabla_{H} u^m(x,t)|^p dx + \int_{D} (F(u(x,t))-\gamma) dx \right] \\ &- (m+1-\alpha) \int_{D} |\nabla_{H} u^m(x,t)|^p dx -\beta (m+1) \int_{D} u^{pm}(x, t) dx\\ & \leq \alpha (m+1)\left[ -\frac{1}{m+1} \int_{D} |\nabla_{H} u^m(x,t)|^p dx + \int_{D} (F(u(x,t))-\gamma) dx \right] \\ & - \left[ \frac{|N_1-p|^p}{(pR)^p}(m+1-\alpha ) + \beta (m+1) \right]\int_{D} u^{pm}(x, t) dx\\ & \leq \alpha (m+1)\left[ -\frac{1}{m+1} \int_{D} |\nabla_{H} u^m(x,t)|^2 dx + \int_{D} (F(u(x,t))-\gamma) dx \right] \\ &=\alpha (m+1)J(t). \end{align*} We can rewrite $\mathcal E'(t)$ by using \eqref{Fp} and $\alpha \leq 0$ as follows \begin{align} \mathcal E'(t) \leq \alpha (m+1) J(0) + p\alpha m \int_{0}^t \int_{D} u^{m-1}(x,\tau) u_{\tau}^2(x,\tau) dx d\tau \leq 0. \end{align} That gives \begin{equation*} \mathcal E(t) \leq \mathcal E(0). \end{equation*} This completes the proof of Theorem \ref{thm_GEp}. \end{proof} \section{Nonlinear pseudo-parabolic equation}\label{sec2} In this section, we prove the global solutions and blow-up phenomena of the initial-boundary value problem \eqref{main_eqn_p}. 
\subsection{Blow-up phenomena for the pseudo-parabolic equation} We start with conditions ensuring the blow-up of solutions in finite time. \begin{thm}\label{thm_p>21} Let $\mathbb{G}$ be a stratified group with $N_1$ being the dimension of the first stratum. Let $D \subset \mathbb{G}$ be an admissible domain. Let $2\leq p<\infty$ with $p\neq N_1$. Assume that \begin{equation}\label{hyp-blow-p} \alpha F(u) \leq u f(u) + \beta u^{p} +\alpha\gamma,\,\,\, u>0, \end{equation} where \begin{equation*} F(u) = \int_{0}^u f(s)ds, \end{equation*} for some \begin{align} \alpha>p \,\, &\text{ and } \,\, 0<\beta\leq \frac{|N_1-p|^p}{(pR)^p}\frac{(\alpha - p)}{p}, \\ \gamma>0 \,\,&\text{ and }\,\, R = \sup_{x \in D} |x'|. \nonumber \end{align} Assume also that $u_0 \in L^{\infty}(D)\cap \mathring{S}^{1,p}(D)$ satisfies \begin{equation} \mathcal{F}_0:= -\frac{1}{p} \int_{D} |\nabla_H u_0(x)|^p dx + \int_{D} (F(u_0(x))-\gamma) dx >0. \end{equation} Then any positive solution $u$ of \eqref{main_eqn_p} blows up in finite time $T^*,$ i.e., there exists \begin{equation} 0<T^*\leq \frac{M}{\sigma \int_{D} u_0^2 + \frac{2}{p} |\nabla_H u_0|^p dx}, \end{equation} such that \begin{equation} \lim_{t\rightarrow T^*}\int_{0}^t \int_{D} [u^{2} + \frac{2}{p}|\nabla_H u|^p ]dx d\tau = +\infty, \end{equation} where $\sigma=\sqrt{\frac{\alpha}{2}}-1>0$ and \begin{equation*} M = \frac{(1+\sigma)\left( 1+ \frac{1}{\sigma}\right)\left( \int_{D} u^{2}_0 + \frac{2}{p}|\nabla_H u_0|^p dx \right)^2}{2\alpha \mathcal{F}_0}. \end{equation*} \end{thm} \begin{proof}[Proof of Theorem \ref{thm_p>21}] The proof is based on a concavity method. The main idea is to show that $[E^{-\sigma}_p(t)]''\leq 0$ which means that $E^{-\sigma}_p(t)$ is a concave function, for $E_p(t)$ defined below. Let us introduce some notations: \begin{align*} \mathcal{F}(t) := -\frac{1}{p} \int_{D} |\nabla_H u(x,t)|^p dx + \int_{D} (F(u(x,t))-\gamma) dx, \end{align*} and \begin{align*} \mathcal{F}(0) := -\frac{1}{p} \int_{D} |\nabla_H u_0(x)|^p dx + \int_{D} (F(u_0(x))-\gamma) dx, \end{align*} with \begin{equation*} F(u)=\int_{0}^{u}f(s)ds. \end{equation*} We know that \begin{equation}\label{F-p} \mathcal{F}(t) = \mathcal{F}(0) + \int_{0}^t \frac{d \mathcal{F}(\tau)}{d\tau}d\tau, \end{equation} where \begin{align*} \int_{0}^t \frac{d \mathcal{F}(\tau)}{d\tau}d\tau &= - \frac{1}{p} \int_{0}^t\int_{D} \frac{d}{d\tau}|\nabla_H u|^p dx d\tau + \int_{0}^t \int_{D} \frac{d}{d\tau} (F(u)-\gamma) dx d\tau\\ & = - \int_{0}^t\int_{D} |\nabla_H u|^{p-2} \nabla u \cdot \nabla_H u_{\tau} dx d\tau + \int_{0}^t \int_{D} F_u(u) u_{\tau} dx d\tau\\ & = \int_{0}^t \int_{D} [\mathcal{L}_p u + f(u)]u_{\tau} dx d\tau \\ & = \int_{0}^t \int_{D} u_{\tau}^2 - u_{\tau} \nabla_H \cdot (|\nabla_H u|^{p-2} \nabla_H u_{\tau})dx d\tau\\ & = \int_{0}^t \int_{D} u_{\tau}^2 + |\nabla_H u|^{p-2}|\nabla_H u_{\tau}|^2dx d\tau. \end{align*} Let us define \begin{align*} E_p(t) &:= \int_{0}^t \int_{D}[ u^{2} + \frac{2}{p}|\nabla_H u|^p] dx d\tau + M, \,\, t\geq 0, \end{align*} with a positive constant $M>0$ to be chosen later. Then \begin{align}\label{eq-E} E'_p(t) = \int_{D}[ u^2 + \frac{2}{p} |\nabla_H u|^p ]dx = \int_{0}^t \frac{d}{d\tau}\int_{D} [u^2 + \frac{2}{p}|\nabla_H u|^p]dx d\tau + \int_{D} u^{2}_0 + \frac{2}{p}|\nabla_H u_0|^p dx. 
\end{align} Now we estimate $E''_p(t)$ by using assumption \eqref{hyp-blow-p} and integration by parts, that gives \begin{align*} E''_p(t) &= 2\int_{D} u u_t dx + \frac{2}{p}\int_{D} (|\nabla_H u|^p)_t dx\\ & =2 \int_{D} [ u\mathcal{L}_pu+ u \nabla_H \cdot (|\nabla_H u|^{p-2}\nabla_H u_t) + uf(u)]dx + \frac{2}{p}\int_{D} (|\nabla_H u|^p)_t dx\\ & = -2 \int_{D} [|\nabla_H u|^p + |\nabla_H u|^{p-2}\nabla_H u \cdot \nabla_H u_t ] dx + 2\int_{D} uf(u)dx + \frac{2}{p}\int_{D} (|\nabla_H u|^p)_t dx \\ & \geq - 2 \int_{D} |\nabla_H u|^p dx + 2 \int_{D} \left[ \alpha F(u) -\beta u^{p} -\alpha \gamma \right] dx\\ & =2 \alpha \left[ -\frac{1}{p} \int_{D} |\nabla_H u|^p dx + \int_{D} (F(u)-\gamma) dx \right] \\ &+ \frac{2(\alpha-p)}{p} \int_{D} |\nabla_H u|^p dx -2\beta \int_{D} u^{p} dx. \end{align*} Next we apply Lemma \ref{lem1}, which gives \begin{align*} & \geq 2\alpha \left[ -\frac{1}{p} \int_{D} |\nabla_H u|^p dx + \int_{D} (F(u)-\gamma) dx \right] \\ & + 2\left[\frac{|N_1-p|^p}{(pR)^p}\frac{(\alpha -p)}{p} - \beta \right]\int_{D} u^{p} dx\\ & \geq 2\alpha\left[ -\frac{1}{p} \int_{D} |\nabla_H u|^p dx + \int_{D} (F(u)-\gamma) dx \right]\\ & = 2\alpha \mathcal{F}(t), \end{align*} with $\mathcal{F}(t)$ as in \eqref{F-p}, then $E''_p(t)$ can be rewritten in the following form \begin{align} E''_p(t) \geq 2\alpha \mathcal{F}(0) + 2\alpha \int_{0}^t \int_{D} [u_{\tau}^2 + |\nabla_H u|^{p-2}|\nabla_H u_{\tau}|^2]dx d\tau. \end{align} Also we have for arbitrary $\delta>0$, in view of \eqref{eq-E}, \begin{align*} [E'_p(t)]^2&\leq (1+\delta)\left( \int_{0}^t \frac{d}{d\tau}\int_{D}[u^2 + \frac{2}{p}|\nabla_H u|^p] dx d\tau \right)^2\\ & + \left( 1+ \frac{1}{\delta}\right)\left( \int_{D} [u^{2}_0 + \frac{2}{p}|\nabla_H u_0|^p] dx \right)^2. \end{align*} Then by taking $\sigma=\delta=\sqrt{\frac{\alpha}{2}}-1>0$, we arrive at \begin{align*} &E''_p(t) E_p(t) - (1+\sigma) [E'_p(t)]^2\\ &\geq 2\alpha M\mathcal{F}(0)+ 2\alpha \left(\int_{0}^t \int_{D} [u_{\tau}^2 + |\nabla_H u|^{p-2}|\nabla_H u_{\tau}|^2]dx d\tau \right) \left(\int_{0}^t \int_{D} [u^{2} + \frac{2}{p}|\nabla_H u|^p dx] d\tau \right)\\ &- (1+\sigma) (1+\delta)\left( \int_{0}^t \frac{d}{d\tau}\int_{D}[u^2 + \frac{2}{p}|\nabla_H u|^p] dx d\tau \right)^2 - (1+\sigma) \left( 1+ \frac{1}{\delta}\right)\left( \int_{D} [u^{2}_0 + \frac{2}{p}|\nabla_H u_0|^p] dx \right)^2 \\ &= 2\alpha M\mathcal{F}(0) - (1+\sigma)\left( 1+ \frac{1}{\delta}\right)\left( \int_{D} [u^{2}_0 + \frac{2}{p}|\nabla_H u_0|^p] dx \right)^2\\ & + 2\alpha \left[ \left(\int_{0}^t \int_{D} [u_{\tau}^2 + |\nabla_H u|^{p-2}|\nabla_H u_{\tau}|^2]dx d\tau \right) \left(\int_{0}^t \int_{D} [u^{2} + \frac{2}{p}|\nabla_H u|^p dx] d\tau \right) \right. \\ &- \left.\left(\int_{0}^t \int_{D} [uu_{\tau} + |\nabla_H u|^{p-2}\nabla_H u \cdot \nabla_H u_{\tau} ]dx d\tau \right)^2 \right]\\ &\geq 2 \alpha M \mathcal{F}(0) - (1+\sigma)\left( 1+ \frac{1}{\delta}\right)\left( \int_{D} u^{2}_0 + \frac{2}{p}|\nabla_H u_0|^p dx \right)^2. \end{align*} Note that in the last line we have used the following inequality \begin{align*} &\left( \int_{0}^t \int_{D} [u^2 + |\nabla_H u|^p]dx d\tau \right)\left(\int_{0}^t \int_{D} [u_\tau^2 +|\nabla_H u|^{p-2} |\nabla_H u_\tau|^2]dx d\tau \right) \\ &- \left( \int_{0}^t \int_{D} [u u_{\tau} + |\nabla_{H} u|^{p-2}\nabla_H u \cdot \nabla_H u_{\tau}]dx d\tau \right)^2 \\ & \geq \left[ \left( \int_{D} \int_{0}^t u^2 d\tau dx\right)^{\frac{1}{2}}\left( \int_{D} \int_{0}^t |\nabla_H u|^{p-2}|\nabla_H u_{\tau}|^2 d\tau dx\right)^{\frac{1}{2}} \right. \\ & \left. 
- \left( \int_{D} \int_{0}^t |\nabla_H u|^p d\tau dx\right)^{\frac{1}{2}}\left( \int_{D} \int_{0}^t u_{\tau}^2 d\tau dx\right)^{\frac{1}{2}} \right]^2 \geq 0, \end{align*} where making use of the H\"older inequality and Cauchy-Schawrz inequality we have \begin{align*}\small &\left( \int_{0}^t\int_{D} [u u_{\tau} + |\nabla_H u|^{p-2} \nabla_H u\cdot \nabla_H u_{\tau}]dx d\tau \right)^2 \\ \leq &\left( \int_{D} \left(\int_{0}^t u^2 d\tau\right)^{\frac{1}{2}} \left(\int_{0}^t u_{\tau}^2 d\tau\right)^{\frac{1}{2}} dx +\int_{D} \left(\int_{0}^t |\nabla_H u|^p d\tau\right)^{\frac{1}{2}}\left(\int_{0}^t |\nabla_{H}u|^{p-2}|\nabla_H u_{\tau}|^2 d\tau\right)^{\frac{1}{2}} dx \right)^2\\ =& \left( \int_{D} \left(\int_{0}^t u^2 d\tau\right)^{\frac{1}{2}} \left(\int_{0}^t u_{\tau}^2 d\tau\right)^{\frac{1}{2}} dx\right)^2+ \left(\int_{D} \left(\int_{0}^t |\nabla_H u|^p d\tau\right)^{\frac{1}{2}}\left(\int_{0}^t |\nabla_{H}u|^{p-2}|\nabla_H u_{\tau}|^2 d\tau\right)^{\frac{1}{2}} dx \right)^2 \\ + &2\left( \int_{D} \left(\int_{0}^t u^2 d\tau\right)^{\frac{1}{2}} \left(\int_{0}^t u_{\tau}^2 d\tau\right)^{\frac{1}{2}} dx\right)\left(\int_{D} \left(\int_{0}^t |\nabla_H u|^p d\tau\right)^{\frac{1}{2}}\left(\int_{0}^t |\nabla_{H}u|^{p-2}|\nabla_H u_{\tau}|^2 d\tau\right)^{\frac{1}{2}} dx \right)\\ \leq& \left( \int_{D} \int_{0}^t u^2 d\tau dx\right)\left( \int_{D} \int_{0}^t u^2_{\tau} d\tau dx\right)+ \left( \int_{D} \int_{0}^t |\nabla_H u|^p d\tau dx\right)\left( \int_{D} \int_{0}^t |\nabla_{H}u|^{p-2}|\nabla_H u_{\tau}|^2 d\tau dx\right)\\ + &2\left[ \left( \int_{D} \int_{0}^t u^2 d\tau dx\right)\left( \int_{D} \int_{0}^t u^2_{\tau} d\tau dx\right) \left( \int_{D} \int_{0}^t |\nabla_H u|^p d\tau dx\right)\left( \int_{D} \int_{0}^t |\nabla_{H}u|^{p-2}|\nabla_H u_{\tau}|^2 d\tau dx\right)\right]^{\frac{1}{2}}. \end{align*} By assumption $\mathcal{F}(0)>0$, thus we can select \begin{equation*} M = \frac{(1+\sigma)\left( 1+ \frac{1}{\delta}\right)\left( \int_{D} u^{2}_0 + \frac{2}{p}|\nabla_H u_0|^p dx \right)^2}{2\alpha \mathcal{F}(0)}, \end{equation*} that gives \begin{equation} E''_p(t) E_p(t) - (1+\sigma) [E'_p(t)]^2 \geq 0. \end{equation} We can see that the above expression for $t\geq 0$ implies \begin{equation*} \frac{d}{dt} \left[ \frac{E'_p(t)}{E^{\sigma+1}_p(t)} \right] \geq 0 \Rightarrow \begin{cases} E'_p(t) \geq \left[ \frac{E'_p(0)}{E^{\sigma+1}_p(0)} \right] E^{1+\sigma}_p(t),\\ E_p(0)=M. \end{cases} \end{equation*} Then for $\sigma =\sqrt{\frac{\alpha}{2}}-1>0$, we arrive at \begin{equation*} E_p(t) \geq \left( \frac{1}{M^{\sigma}}-\frac{ \sigma \int_{D} [u_0^2 + \frac{2}{p} |\nabla_H u_0|^p ]dx }{M^{\sigma+1}} t\right)^{-\frac{1}{\sigma}}. \end{equation*} Then the blow-up time $T^*$ satisfies \begin{equation*} 0<T^*\leq \frac{M}{\sigma \int_{D} [u_0^2 + \frac{2}{p} |\nabla_H u_0|^p] dx}. \end{equation*} This completes the proof. \end{proof} \subsection{Global solution for the pseudo-parabolic equation} We now show that positive solutions, when they exist for some nonlinearities, can be controlled. \begin{thm}\label{thm_GEp1} Let $\mathbb{G}$ be a stratified group with $N_1$ being the dimension of the first stratum. Let $D \subset \mathbb{G}$ be an admissible domain. Let $2 \leq p<\infty$ with $p\neq N_1$. 
Assume that function $f$ satisfies \begin{equation}\label{G1} \alpha F(u) \geq u f(u) + \beta u^{p} +\alpha\gamma, \,\,\, u>0, \end{equation} where \begin{equation*} F(u) = \int_{0}^u f(s)ds, \end{equation*} for some \begin{equation} \beta \geq\frac{( p-\alpha)}{2} \,\,\, \text{ and } \,\,\,\alpha \leq 0, \,\, \gamma \geq 0. \end{equation} Let $u_0 \in L^{\infty}(D)\cap \mathring{S}^{1,p}(D)$ satisfy \begin{equation} \mathcal{F}_0:= -\frac{1}{p} \int_{D} |\nabla_H u_0(x)|^p dx +\int_{D} (F(u_0(x))-\gamma) dx >0. \end{equation} If $u$ is a positive local solution of problem \eqref{main_eqn_p}, then it is global and satisfies the following estimate: \begin{equation*} \int_{D}[ u^{2} + \frac{2}{p}|\nabla_H u|^p] dx \leq \exp({-(p-\alpha)t})\int_{D}[ u^{2}_0 + \frac{2}{p}|\nabla_H u_0|^p] dx. \end{equation*} \end{thm} \begin{proof}[Proof of Theorem \ref{thm_GEp1}] Define \begin{align*} \mathcal{E}(t) := \int_{D}[ u^{2} + \frac{2}{p}|\nabla_H u|^p] dx. \end{align*} Now we estimate $\mathcal{E}'(t) $ by using assumption \eqref{G1}, that gives \begin{align*} \mathcal{E}'(t) &= 2\int_{D} u u_t dx +\frac{2}{p} \int_{D} (|\nabla_H u|^p)_t dx\\ & =2 \int_{D} [u\mathcal{L}_p u + u\nabla_H \cdot (|\nabla_H u|^{p-2}\nabla_H u_t) + uf(u)]dx + \frac{2}{p} \int_{D} (|\nabla_H u|^p)_t dx\\ & = -2 \int_{D} [|\nabla_H u|^p + |\nabla_H u|^{p-2}\nabla_H u \cdot \nabla_H u_t ] dx + 2\int_{D} uf(u)dx + \frac{2}{p}\int_{D} (|\nabla_H u|^p)_t dx \\ & \leq 2\alpha \left[ -\frac{1}{p} \int_{D} |\nabla_H u|^p dx + \int_{D} (F(u)-\gamma) dx \right] - \frac{2(p-\alpha)}{p} \int_{D} |\nabla_H u|^p dx -2\beta \int_{D} u^{p} dx \\ & \leq 2\alpha \left[ -\frac{1}{p} \int_{D} |\nabla_H u|^p dx + \int_{D} (F(u)-\gamma) dx \right] \\ &- (p-\alpha)[E_p(t) - \int_D u^2 dx] dx -2\beta \int_{D} u^{2} dx,\\ & =2\alpha \mathcal{F}(t) - (p-\alpha)\mathcal{E}(t) + [p-\alpha -2\beta]\int_{D}u^2 dx, \end{align*} with \begin{align*} \mathcal{F}(t)&:= -\frac{1}{p} \int_{D} |\nabla_H u(x,t)|^p dx + \int_{D} (F(u(x,t))-\gamma) dx\\ &= \mathcal{F}_0 + \int_{0}^t \int_{D} u_{\tau}^2 + |\nabla_H u|^{p-2}|\nabla_H u_{\tau}|^2dx d\tau. \end{align*} Since $\beta \geq \frac{p-\alpha}{2}$ we arrive at \begin{align*} \mathcal{E}'(t) + (p-\alpha)\mathcal{E}(t) \leq 2 \alpha \left[ \mathcal{F}_0 + \int_{0}^t \int_{D} u_{\tau}^2 + |\nabla_H u|^{p-2}|\nabla_H u_{\tau}|^2dx d\tau \right]\leq 0. \end{align*} This implies, \begin{equation*} \mathcal{E}(t) \leq \exp({-(p-\alpha)t})\mathcal{E}(0), \end{equation*} finishing the proof. \end{proof} \end{document}
\begin{document} \title[Birch's theorem with shifts]{Birch's theorem with shifts} \author[Sam Chow]{Sam Chow} \address{School of Mathematics, University of Bristol, University Walk, Clifton, Bristol BS8 1TW, United Kingdom} \email{[email protected]} \subjclass[2010]{11D75, 11E76} \keywords{Diophantine inequalities, forms in many variables, inhomogeneous polynomials} \thanks{} \date{} \begin{abstract} A famous result due to Birch (1961) provides an asymptotic formula for the number of integer points in an expanding box at which given rational forms of the same degree simultaneously vanish, subject to a geometric condition. We present the first inequalities analogue of Birch's theorem. \end{abstract} \maketitle \section{Introduction} \label{intro} A famous result due to Birch \cite[Theorem 1]{Bir1962} provides an asymptotic formula for the number of integer points in an expanding box at which given rational forms of the same degree simultaneously vanish, subject to a geometric condition. This in particular implies the existence of a nontrivial solution to the system of homogeneous equations, provided that a nonsingular solution exists in every completion of the rationals. We present the following inequalities analogue of Birch's theorem. \begin{thm} \label{MainThm} Let $f_1, \ldots, f_R$ be rational forms of degree $d \ge 2$ in \begin{equation} \label{numvars} n > \sigma + R(R+1)(d-1)2^{d-1} \end{equation} variables, where $\sigma$ is the dimension of the affine variety cut out by the condition \[ \rank(\nabla f_k)_{k=1}^R < R. \] Assume that the forms $(1,\ldots,1) \cdot \nabla f_k$ $(1 \le k \le R)$ are linearly independent. Let $\btau \in \bR^R$, and let $\eta$ be a positive real number. Let $\mu$ be an irrational real number, and write $\bmu = (\mu,\ldots,\mu) \in \bR^n$. Then the number \mbox{$N(P) = N_\bf(P; \mu, \btau, \eta)$} of integer solutions $\bx \in [-P,P]^n$ to \begin{equation*} |f_k(\bx + \bmu) - \tau_k| < \eta \qquad (1 \le k \le R) \end{equation*} satisfies \begin{equation} \label{asymp} N(P) = (2\eta)^R c P^{n-Rd} + o(P^{n-Rd}) \end{equation} as $P \to \infty$, where \begin{equation} \label{cdef} c = c_\bf = \int_{\bR^R} \int_{[-1,1]^n} e(\bgam \cdot \bf(\bt)) {\,{\rm d}} \bt {\,{\rm d}} \bgam. \end{equation} If $\bf = \bzero$ has a nonsingular real solution then $c > 0$. \end{thm} The definition \eqref{cdef} of the singular integral $c$ is the one given by Birch \cite{Bir1962}. We can interpret $c$ as the real density of points on the variety $\bf = \bzero$; we defer an extended discussion until \S \ref{SingularIntegral}. Theorem \ref{MainThm} implies that $\{ \bf(\bx + \bmu) : \bx \in \bZ^n \}$ is dense in $\bR^R$. The example with $R = 1$ and \[ f_1(\bx) = (x_1 - x_2)^3 + \ldots + (x_{99} - x_{100})^3 \] shows that some condition, such as the linear independence of the forms $(1,\ldots,1) \cdot \nabla f_k$ $(1 \le k \le R)$, is necessary in order for our statement to be true. Indeed, for this form a uniform shift cancels from each difference, so $f_1(\bx + \bmu) = f_1(\bx)$ takes only integer values on $\bZ^{100}$, and moreover $(1,\ldots,1) \cdot \nabla f_1$ vanishes identically. Theorem \ref{MainThm} involves a `uniform' shift $\bmu = (\mu, \ldots, \mu) \in \bR^n$. From our method it is not clear how to handle an arbitrary shift $\bmu = (\mu_1, \ldots, \mu_n) \in \bR^n \setminus \bQ^n$, as in \cite{scs,wps}, since many more simultaneous rational approximations would then be necessary.
In Theorem \ref{MainThm}, we have used the `Birch singular locus' to control the degeneracy of the system $\bf$. An alternative approach involves Schmidt's $h$-invariant \cite[\S 1]{Sch1985}. Quoting Schmidt, the $h$-invariant of a form $F$ of degree $d \ge 2$ with rational coefficients is the least $h$ such that $F$ `splits into $h$ products', i.e. \[ F = A_1B_1 + \ldots + A_hB_h \] for some forms $A_i$, $B_i$ of positive degrees and rational coefficients. If $f_1, \ldots, f_R$ are forms in $n$ variables with rational coefficients and have the same degree $d \ge 2$, then the $h$-invariant of the system $\bf$ is the minimum $h$-invariant of any form in the rational pencil. Writing $h$ for the $h$-invariant of $\bf$, we note that $h \le n$. Define $\Phi(d)$ by $\Phi(2) = \Phi(3) = 1$, $\Phi(4) = 3$, $\Phi(5) = 13$ and \[ \Phi(d) = \frac{d!}{(\log 2)^d} \qquad (d \ge 6). \] \begin{thm} \label{TheoremTwo} We may replace the condition \eqref{numvars} in Theorem \ref{MainThm} by the hypothesis \begin{equation} \label{hhyp} \frac h{\Phi(d)} > R(R+1)(d-1)2^{d-1} + R(R-1)(d-1), \end{equation} and the same conclusions hold. \end{thm} Cognoscenti will recall that in Schmidt's work \cite{Sch1985} the $h$-invariant needs to be larger if one seeks to ensure positivity of the singular series. This is not necessary for us: there is no singular series, since the main term comes from a single major arc around $\bzero$. Over its half century of fame, Birch's theorem has been an extremely popular result to improve and generalise. In fact it may be possible for one to incorporate into Theorem \ref{MainThm} a recent improvement in Birch's theorem due independently to Dietmann \cite{Die2014} and Schindler \cite{Sch2014}. Skinner \cite{Ski1997} generalised Birch's theorem to number fields, and Lee \cite{Lee2012} considered Birch's theorem in a function field setting. Other results related to Birch's theorem are too numerous to honestly describe in a confined space, but recent papers include those of Brandes, Browning, Dietmann, Heath-Brown and Prendiville \cite{Bra2014, BDHB2014, BHB2014, BP2014}. The case where $R=1$ and $f_1$ is an indefinite quadratic form has been solved in five variables by Margulis and Mohammadi \cite{MM2011}, who generalised famous results due to G\"otze \cite{Goe2004}, Margulis \cite{Mar1989} and others; four variables suffice unless the signature is $(2,2)$, while three variables suffice to obtain a lower bound of the expected strength. This present paper is a sequel to \cite{scs, wps}. The author was initially motivated to study shifted forms by Marklof's papers \cite{Mar2002, Mar2003}, which dealt with shifted quadratic forms in relation to the Berry--Tabor conjecture from quantum chaos; see also \cite{Mar2002cor}. To our knowledge, no author has previously considered inhomogeneous diophantine inequalities of degree three or higher without assuming any additive structure, although inhomogeneous cubic equations were investigated by Davenport and Lewis \cite{DL1964}. For previous results on additive inhomogeneous diophantine inequalities see \cite{scs, wps}, where the author built on work of Freeman \cite{Fre2003}, who applied important estimates due to Baker \cite{Bak1986}. Some of these ideas were used by Parsell to treat simultaneous diagonal inequalities in \cite{Par1, Par2, Par3}.
For homogeneous diophantine inequalities without additive structure, there is Schmidt's general result \cite[Theorem 1]{Sch1980}, as well as improved treatments of the cubic scenario due to Pitman \cite{Pit1968} and then Freeman \cite{Fre2000}. The more specialised cases of split cubic forms and cubic forms involving a norm form have been studied by the author \cite{sf} and Harvey \cite{Har2011}, respectively. We now outline our proof of the asymptotic formula \eqref{asymp} in Theorem \ref{MainThm}. Our main weapon is Freeman's variant \cite{Fre2002} of the Davenport--Heilbronn method \cite{DH1946}. We may assume that the coefficients of $f_1,\ldots,f_R$ are integer multiples of $d!$. Indeed, we may if necessary rescale $\bf, \btau, \eta$, and change variables in the outer integral of \eqref{cdef}. Our starting point is the Taylor expansion \begin{equation} \label{Taylor} f_k(\bx + \bmu) = f_k(\bx) + f_k(\bmu) + \sum_{j=1}^{d-1} \mu^{d-j} \sum_{|\bj|_1 = j} d_{k,\bj} \bx^\bj \qquad (1 \le k \le R) \end{equation} about $\bmu$, where for $\bj \in \bZ_{\ge 0}^n$ we write \[ \bx^\bj = x_1^{j_1} \cdots x_n^{j_n}, \quad |\bj|_1 = j_1+\ldots+j_n, \quad \bj! = j_1! \cdots j_n! \] and \begin{equation} \label{partials} d_{k,\bj} = \bj !^{-1} \partial^\bj f_k(1, \ldots, 1) \in \bZ \qquad (1 \le k \le R). \end{equation} Thus, we may regard our shifted forms as polynomials in $\bx$. Note that \begin{equation} \label{fknote} f_k(\bx) = \sum_{|\bj|_1 = d} d_{k,\bj} \bx^\bj \qquad (1 \le k \le R). \end{equation} The pertinent exponential sums are \begin{equation*} S(\balp) = \sum_{|\bx| \le P} e(\balp \cdot \bf (\bx + \bmu)) \qquad (\balp \in \bR^R). \end{equation*} From \eqref{Taylor} we see that the highest degree component of $f_k(\bx + \bmu)$ is precisely $f_k(\bx)$. We can therefore use Birch's argument \cite{Bir1962}, which is based on Weyl differencing and the geometry of $\bf$, to restrict consideration to a thin set of major arcs where $\balp$ is well approximated. Though the polynomials $f_k(\bx + \bmu)$ are of the particular shape \eqref{Taylor}, we shall also need some exponential sum bounds in a more general inhomogeneous context. There are \begin{equation} \label{NjDef} N_j := {j+n-1 \choose n-1} \end{equation} monomials of degree $j$ in $n$ variables, or in other words there are $N_j$ vectors $\bj \in \bZ_{\ge 0}^n$ such that $|\bj|_1 = j$. For $\balp \in \bR^R$ and \begin{equation} \label{omegaDiam} \bomega_\diam = (\omega_\bj)_{1 \le |\bj|_1 \le d-1} \in \bR^{N_1 + \ldots + N_{d-1}}, \end{equation} write \begin{equation} \label{gdef} g(\balp, \bomega_\diam) = \sum_{|\bx| \le P} e\Bigl( \balp \cdot \bf(\bx) + \sum_{1 \le |\bj|_1 \le d-1} \omega_\bj \bx^\bj \Bigr). \end{equation} Using \eqref{Taylor}, we shall view $S(\balp)$ as a special case of $g(\balp, \bomega_\diam)$, up to multiplication by a constant of absolute value 1. Thanks to the early steps of our argument, this will allow us to focus on the situation in which \begin{equation} \label{gbig} |g(\balp, \bomega_\diam)| \ge P^n H^{-1}, \end{equation} where $H$ is at most a small power of $P$.
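Before proceeding, we illustrate this notation with a toy example; it is included only to make \eqref{Taylor} and the relation between $S(\balp)$ and $g(\balp,\bomega_\diam)$ concrete, and it does not, of course, satisfy \eqref{numvars}. Take $R = 1$, $n = 2$, $d = 2$ and $f_1(\bx) = 2x_1x_2$. Then $d_{1,(1,1)} = 2$ and $d_{1,(1,0)} = d_{1,(0,1)} = 2$, so \eqref{Taylor} reads \[ f_1(\bx + \bmu) = 2x_1x_2 + 2\mu^2 + 2\mu(x_1 + x_2), \] as may be checked by expanding $2(x_1+\mu)(x_2+\mu)$ directly. The degree one part is $\mu \, (1,1)\cdot \nabla f_1(\bx)$, the form appearing in the linear independence hypothesis of Theorem \ref{MainThm}, and \[ S(\alpha) = e(2\alpha\mu^2) \sum_{|\bx| \le P} e\bigl( 2\alpha x_1 x_2 + 2\alpha\mu(x_1+x_2) \bigr), \] which is $e(2\alpha\mu^2) g(\alpha, \bomega_\diam)$ with $\omega_{(1,0)} = \omega_{(0,1)} = 2\alpha\mu$.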
Let \begin{equation} \label{Fdef} F_{k,j}(\bx) = \sum_{|\bj|_1 = j} d_{k,\bj} \bx^\bj \qquad (1 \le k \le R, 1 \le j \le d), \end{equation} and note from \eqref{fknote} that \begin{equation} \label{top} F_{k,d} = f_k \qquad (1 \le k \le R). \end{equation} We now see from \eqref{Taylor} that \begin{equation} \label{Taylor2} f_k(\bx+\bmu) = f_k(\bmu) + \sum_{1 \le j \le d} \mu^{d-j} F_{k,j}(\bx) \qquad (1 \le k \le R) \end{equation} and \[ \balp \cdot \bf(\bx+ \bmu) = \balp \cdot \bf(\bmu) + \balp \cdot \bf(\bx) + \sum_{k \le R}\alpha_k \sum_{1 \le |\bj|_1 \le d-1} \mu^{d-|\bj|_1} d_{k,\bj} \bx^\bj. \] Thus, with \eqref{omegaDiam} and the specialisation \begin{equation} \label{omegadef} \omega_\bj := \sum_{k \le R} d_{k,\bj} \alpha_k \mu^{d-|\bj|_1} \qquad (1 \le |\bj|_1 \le d), \end{equation} we have \begin{equation} \label{Sg} S(\balp) = e(\balp \cdot \bf(\bmu)) g(\balp, \bomega_\diam). \end{equation} Throughout, we define $\omega_\bj$ $(|\bj|_1 = d)$ in terms of $\balp$ by \begin{equation} \label{omegatop} \omega_\bj = \sum_{k \le R} d_{k,\bj} \alpha_k \qquad (|\bj|_1 = d). \end{equation} This is consistent with \eqref{omegadef}. Though $\bomega_\diam$ does not depend on those $\omega_\bj$ for which $|\bj|_1 = d$, it will be convenient to also consider them. Ideally, we would like to have good rational approximations to $\alpha_k \mu^{d-j}$ for all $k \in \{1,2,\ldots,R \}$ and all $j \in \{ 1,2,\ldots,d \}$. We could then use the procedure demonstrated in \cite[ch. 8]{Bro2009} to decompose $S(\balp)$ into archimedean and non-archimedean components. We are only able to achieve this ideal for $j \in \cS$, where $\cS$ is the set of $j \in \{1,2,\ldots,d\}$ such that $F_{1,j}, F_{2,j}, \ldots, F_{R,j}$ are linearly independent. For all $j \in \{1,2,\ldots, d\}$, we are nonetheless able to rationally approximate those linear combinations of $\alpha_1 \mu^{d-j}, \ldots, \alpha_R \mu^{d-j}$ that are needed at this stage of the argument, namely the $\omega_\bj$. These rational approximations are a nontrivial consequence of \eqref{gbig}. The key idea is to fix all but one of the variables, and to regard the summation thus obtained as a univariate exponential sum. We can then use the simultaneous approximation methods of Baker \cite{Bak1986}. Finally, we use the irrationality of $\mu$ to obtain nontrivial cancellation on Davenport--Heilbronn minor arcs $\fm$ (this is where $|\balp|$ is of `intermediate' size). We need the information that $d, d-1 \in \cS$. These facts follow from our geometric assumptions.
Indeed, to see that $d-1 \in \cS$ one may compare \eqref{Taylor2} to the Taylor expansion \[ f_k(\bx + \bmu) = f_k(\bx) + f_k(\bmu) + \sum_{i=1}^{d-1} \mu^i \sum_{|\bi|_1 = i} \bi!^{-1} \partial^\bi f_k (\bx) \] about $\bx$, which shows that \[ F_{k,d-1} = (1,\ldots,1) \cdot \nabla f_k \qquad (1 \le k \le R). \] We thus have good rational approximations to $\balp$ and $\mu \balp$, and their strength may be used to contradict the irrationality of $\mu$ unless we have a nontrivial estimate on $\fm$. The proof of Theorem \ref{TheoremTwo} is almost the same, with the only substantial change being a suitable analogue of Lemma \ref{Birch43}. It transpires that such an analogue can be deduced without much work from Schmidt's seminal paper \cite{Sch1985}. Further details shall be provided in \S \ref{SchmidtApproach}. We organise thus. In \S \ref{BirchType}, we use Freeman's kernel functions to relate $N(P)$ to exponential sums; see \cite[\S 2]{Fre2002}. Using Birch's argument, we then obtain good simultaneous rational approximations to the $\alpha_k$ ($1 \le k \le R$) in the case that $g(\balp, \bomega_\diam)$ is `large'; see \cite[Lemma 4.3]{Bir1962}. In \S \ref{BakerType}, we simultaneously approximate the $\omega_\bj$ ($1 \le |\bj|_1 \le d$). In \S \ref{special}, we use $S(\balp)$ to obtain simultaneous rational approximations to the $\alpha_{k,j}$ ($1 \le k \le R, j \in \cS$). In \S \ref{classical}, we adapt classical bounds to the present context. In \S \ref{bgf}, we exploit the irrationality of $\mu$ by using a simplification of the methods of Bentkus, G\"otze and Freeman, similarly to \cite[\S 2]{Woo2003}. The lemmas therein motivate our precise Davenport--Heilbronn trisection, which we present in \S \ref{dh}. We then resolve the asymptotic formula \eqref{asymp}. We complete the proof of Theorem \ref{MainThm} in \S \ref{SingularIntegral} by establishing the final statement of the theorem. It is then that we provide Schmidt's interpretation \cite{Sch1982, Sch1985} of the singular integral $c$ as a real density. Finally, we prove Theorem \ref{TheoremTwo} in \S \ref{SchmidtApproach}. We adopt the convention that $\varepsilon$ denotes an arbitrarily small positive number, so its value may differ between instances. For $x \in \bR$ and $r \in \bN$, we put $e(x) = e^{2 \pi i x}$ and $e_r(x) = e^{2 \pi i x / r}$. Bold face will be used for vectors, for instance we shall abbreviate $(x_1,\ldots,x_n)$ to $\bx$, and define $|\bx| = \max(|x_1|, \ldots, |x_n|)$. For a vector $\bx$ of length $n$, and for $\bj \in \bZ_{\ge 0}^n$, we define $\bx^{\bj} = x_1^{j_1} \cdots x_n^{j_n}$, $|\bj|_1 = j_1 + \ldots + j_n$ and $\bj! = j_1! \cdots j_n!$. If $M$ is a matrix then we write $|M|$ for the maximum of the absolute values of its entries. We will use the unnormalised sinc function, given by $\sinc(x) = \sin(x)/x$ for $x \in \bR \setminus \{0\}$ and $\sinc(0) = 1$. We regard $\btau, \mu$ and $\eta$ as constants. The word \emph{large} shall mean in terms of $\bf, \varepsilon$ and constants, together with any explicitly stated dependence. Similarly, the implicit constants in Vinogradov and Landau notation may depend on $\bf, \varepsilon$ and constants, and any other dependence will be made explicit. The pronumeral $P$ denotes a large positive real number.
The word \emph{small} will mean in terms of $\bf$ and constants. We sometimes use such language informally, for the sake of motivation; we make this distinction using quotation marks. The author thanks Trevor Wooley very much for his enthusiastic supervision, and for suggesting such an agreeable research programme. Special thanks go to Adam Morgan for an elegant proof of Lemma \ref{Adam}. Finally, thanks to the anonymous referees for doing a thorough job and making several helpful suggestions. \section{Approximations of Birch type} \label{BirchType} We deploy the kernel functions introduced by Freeman \cite[\S 2.1]{Fre2002}; see also \cite[\S 2]{PW2014}. We shall define $T: [1, \infty) \to [1, \infty)$ in due course. For now, it suffices to note that \begin{equation} \label{Tbound} T(P) \le P, \end{equation} and that $T(P) \to \infty$ as $P \to \infty$. Put \begin{equation} \label{Ldef} L(P) = \max(1,\log T(P)), \qquad \rho = \eta L(P)^{-1} \end{equation} and \begin{equation} \label{Kdef} K_{\pm}(\alpha) = \frac {\sin(\pi \alpha \rho) \sin(\pi \alpha(2 \eta \pm \rho))} {\pi^2 \alpha^2 \rho}. \end{equation} From \cite[Lemma 1]{Fre2002} and its proof, we have \begin{equation} \label{Kbounds} K_\pm(\alpha) \ll \min(1, L(P) |\alpha|^{-2}) \end{equation} and \begin{equation} \label{Ubounds} 0 \le \int_\bR e(\alpha t) K_{-}(\alpha){\,{\rm d}}\alpha \le U_\eta(t) \le \int_\bR e(\alpha t) K_{+}(\alpha){\,{\rm d}}\alpha \le 1, \end{equation} where \begin{equation*} U_\eta(t) = \begin{cases} 1, &\text{if } |t| < \eta \\ 0, &\text{if } |t| \ge \eta. \end{cases} \end{equation*} For $\balp \in \bR^R$, write \begin{equation} \label{Kprod} \bK_\pm(\balp) = \prod_{k \le R} K_\pm(\alpha_k). \end{equation} The inequalities \eqref{Ubounds} give \[ R_{-}(P) \le N(P) \le R_+(P), \] where \[ R_\pm(P) = \int_{\bR^R} S(\balp) e(-\balp \cdot \btau) \bK_\pm(\balp) {\,{\rm d}} \balp. \] In order to prove \eqref{asymp}, it therefore remains to show that \begin{equation} \label{goal1} R_\pm(P) = (2 \eta)^R c P^{n-Rd} + o(P^{n-Rd}) \end{equation} as $P \to \infty$, where $c$ is given by \eqref{cdef}. In this section we employ some classical bounds of Davenport \cite{Dav1959, Dav1962, Dav1963, Dav2005} and Birch \cite{Bir1962}; see also \cite[ch. 8]{Bro2009}. These results apply directly to Weyl sums associated to $\balp \cdot \bf$, and are proved by Weyl differencing down to degree one. As such, they are unaffected by the presence of terms of degree lower than $d$. The idea that lower order terms are irrelevant when establishing Weyl-type bounds is well known; Birch himself notes this in \cite[\S 2]{Bir1962}, and it was also used to prove \cite[Lemma 1]{DL1964}. From \eqref{gdef}, we see that the polynomial associated to the Weyl sum $g(\balp, \bomega_\diam)$ has $\balp \cdot \bf$ as its highest degree component. Exploiting this, we may deduce these classical bounds for $g(\balp, \bomega_\diam)$.
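To indicate, in the simplest case, why the lower-degree terms do not interfere, consider a univariate cubic exponent $\phi(x) = \alpha x^3 + \omega_2 x^2 + \omega_1 x$ and write $\Delta_h \phi(x) = \phi(x+h) - \phi(x)$ for the forward difference. Two differencing steps give \[ \Delta_{h_2}\Delta_{h_1}\phi(x) = 6\alpha h_1 h_2 x + c(h_1,h_2), \] where $c(h_1,h_2)$ does not depend on $x$, so the coefficient of $x$ involves only the leading coefficient $\alpha$. More generally, after $d-1$ differencing steps of $\alpha x^d + \sum_{j<d}\omega_j x^j$ the part depending on $x$ is $d!\,\alpha h_1\cdots h_{d-1} x$. The multivariate Weyl differencing in \cite{Bir1962} behaves in the same way, with the degree $d$ component $\balp\cdot\bf$ playing the role of $\alpha x^d$; this is the sense in which the bounds below are insensitive to the $\omega_\bj$ with $|\bj|_1 < d$.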
\begin{lemma} \label{Birch43}
Let $0 < \theta \le 1$. Suppose
\begin{equation} \label{gbig0}
|g(\balp, \bomega_\diamond)| > P^{n- (n-\sigma) \theta/ 2^{d-1} + \varepsilon}.
\end{equation}
Then there exist integers $q, a_1, \ldots, a_R$ such that
\begin{equation} \label{qa}
1 \le q \le P^{R(d-1)\theta}, \qquad \gcd(a_1, \ldots, a_R, q) = 1
\end{equation}
and
\begin{equation} \label{FirstApprox}
2|q \balp - \ba| \le P^{R(d-1)\theta - d}.
\end{equation}
In particular, if $|S(\balp)| > P^{n- (n-\sigma) \theta/ 2^{d-1} + \varepsilon}$ then there exist $q \in \bN$ and $\ba \in \bZ^R$ satisfying \eqref{qa} and \eqref{FirstApprox}.

We may replace $g(\balp, \bomega_\diamond)$ by
\[ \sum_{1 \le x_1, \ldots, x_n \le P} e\Bigl( \balp \cdot \bf(\bx) + \sum_{1 \le |\bj|_1 \le d-1} \omega_\bj \bx^\bj \Bigr), \]
and the same conclusions hold.
\end{lemma}

\begin{proof}
For the first statement we may imitate Birch's proof of \cite[Lemma 4.3]{Bir1962}. We have removed the implied constant from \eqref{gbig0} by redefining $\varepsilon$ and recalling that $P$ is large. Now \eqref{Sg} gives rise to our second claim. The third assertion follows in the same way as the first.
\end{proof}

Throughout, put
\begin{equation} \label{kapDef}
\kappa = \frac{n-\sigma}{R(d-1)2^{d-1}}.
\end{equation}
It follows from \eqref{numvars} that
\begin{equation} \label{kapBound}
\kappa > R+1.
\end{equation}
The argument of the corollary to \cite[Lemma 4.3]{Bir1962} now produces the following.

\begin{cor} \label{Birch43cor}
For $\balp \in \bR^R$ with $|\balp| < P^{-d/2}$, and for $\bomega_\diamond$ as in \eqref{omegaDiam}, we have
\begin{equation} \label{first}
g(\balp, \bomega_\diamond) \ll P^{n+\varepsilon} (P^d |\balp|)^{-\kappa}.
\end{equation}
\end{cor}

Fix a small positive real number $\theta_0$. Let $\fN$ be the set of $\balp \in \bR^R$ satisfying \eqref{qa} and \eqref{FirstApprox} with $\theta = \theta_0$, for some integers $q, a_1,\ldots,a_R$. Given $\balp \in \fN$, such integers would be unique.
Indeed, if we also had
\[ 1 \le t \le P^{R(d-1)\theta_0}, \qquad (b_1, \ldots, b_R,t) = 1 \]
and
\[ 2|t \balp - \bb| \le P^{R(d-1)\theta_0 - d} \]
for some integers $t,b_1, \ldots, b_R$, then
\begin{align*}
|q^{-1}\ba - t^{-1}\bb| &\le |\balp - q^{-1}\ba| + |\balp - t^{-1}\bb| \\
&< (1/q+1/t) P^{R(d-1)\theta_0 -d} < (qt)^{-1};
\end{align*}
this would imply that $t^{-1} \bb= q^{-1} \ba$, and hence that $t=q$ and $\bb = \ba$.

Let $\fU$ be an arbitrary unit hypercube in $R$ dimensions. Using Lemma \ref{Birch43}, the argument of \cite[Lemma 4.4]{Bir1962} shows that
\begin{equation} \label{BasicMinorBound}
\int_{(\bR^R \setminus \fN) \cap \fU} |S(\balp)| \,\mathrm{d} \balp \ll P^{n - Rd - \varepsilon}.
\end{equation}
Put
\begin{equation} \label{NstarDef}
\fN^* = \fN^*_P = \{ \balp \in \fN: |S(\balp)| > P^{n - R(R+1) d \theta_0} \}.
\end{equation}
The measure of $\fN \cap \fU$ is $O(P^{R(R+1)(d-1)\theta_0 - Rd})$, so
\[ \int_{(\fN \setminus \fN^*) \cap \fU} |S(\balp)| \,\mathrm{d} \balp \ll P^{n- Rd - R(R+1) \theta_0}. \]
Combining this with \eqref{BasicMinorBound} yields
\[ \int_{(\bR^R \setminus \fN^*) \cap \fU} |S(\balp)| \,\mathrm{d} \balp \ll P^{n-Rd-\varepsilon}. \]
Now \eqref{Tbound}, \eqref{Ldef}, \eqref{Kbounds} and \eqref{Kprod} give
\[ \int_{\bR^R \setminus \fN^*} |S(\balp) \bK_\pm(\balp)| \,\mathrm{d} \balp \ll L(P)^R P^{n-Rd-\varepsilon} = o(P^{n-Rd}). \]
In view of the discussion surrounding \eqref{goal1}, it remains to show that
\begin{equation} \label{goal2}
\int_{\fN^*} S(\balp) e(-\balp \cdot \btau) \bK_\pm(\balp) \,\mathrm{d} \balp = (2 \eta)^R c P^{n-Rd} + o(P^{n-Rd})
\end{equation}
as $P \to \infty$, with $c$ as in \eqref{cdef}.

\section{Approximations of Baker type} \label{BakerType}

By \eqref{gdef}, \eqref{Fdef}, \eqref{top} and \eqref{omegatop}, we have
\begin{equation} \label{omg}
g(\balp, \bomega_\diamond) = \sum_{|\bx| \le P} e \Bigl( \sum_{1 \le |\bj|_1 \le d} \omega_\bj \bx^\bj \Bigr).
\end{equation}
In the case that $g(\balp, \bomega_\diamond)$ is `large', we shall use \cite[Theorem 5.1]{Bak1986} to obtain simultaneous rational approximations to the $\omega_\bj$. The idea is to fix $x_2, \ldots, x_n$, so as to consider $\sum \omega_\bj \bx^\bj$ as a polynomial in $x_1$. If we simply do this, we are only able to approximate certain linear combinations of the $\omega_\bj$, and we do not acquire enough information. However, if we first change variables, then we can approximate different linear combinations of the $\omega_\bj$. The point is to use several carefully selected changes of variables.
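To illustrate the idea in the simplest nontrivial setting (this special case plays no role in the sequel, and the parameter $m$ is introduced only for this illustration), take $n = 2$ and $d = 2$, and consider only the quadratic part of the phase in \eqref{omg}. Substituting $y_2 + m x_1$ for $x_2$, where $m \in \bN$ is fixed, turns
\[ \omega_{(2,0)} x_1^2 + \omega_{(1,1)} x_1 x_2 + \omega_{(0,2)} x_2^2 \]
into a polynomial in $x_1$ with leading coefficient
\[ \omega_{(2,0)} + m \omega_{(1,1)} + m^2 \omega_{(0,2)}. \]
Approximating this coefficient for three suitable values of $m$ therefore yields an invertible Vandermonde system in the $\omega_\bj$ with $|\bj|_1 = 2$, from which the individual $\omega_\bj$ may be approximated; Lemma \ref{Adam} below supplies appropriate choices in general.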
We never actually make these changes of variables; we merely incorporate them into our summations. Suppose we were to put $\bx = \by + x_1 \bm$, regarding $m_1 = 1, m_2, \ldots, m_n \in \bN$ and $y_1 = 0$ as being fixed. For some $\by$, we will be able to simultaneously approximate the coefficients of the $x_1^j$ in $\sum_\bj \omega_\bj (\by + x_1 \bm)^\bj$. By the binomial theorem, the coefficient of $x_1^j$ in $(\by+x_1 \bm)^\bi$ is
\begin{equation} \label{zdef}
z_{j,\bi} = z_{j,\bi}(\bm, \by) := \sum_{\bj \le \bi: |\bj|_1 = j} {\bi \choose \bj} \bm^\bj \by^{\bi - \bj},
\end{equation}
where
\[ {\bi \choose \bj} = \prod_{v \le n} {i_v \choose j_v}, \]
and $\bj \le \bi$ means that $j_v \le i_v$ ($1 \le v \le n$). Hence, the coefficient of $x_1^j$ in $\sum_\bj \omega_\bj (\by+x_1 \bm)^\bj$ is
\begin{equation} \label{coeff}
\sum_{|\bj|_1 = j} \bm^\bj \omega_\bj + \sum_{j < |\bi|_1 \le d} z_{j,\bi} \omega_\bi.
\end{equation}
Since we wish to approximate the $\omega_\bj$, the first sum in \eqref{coeff} motivates the need for our next lemma. Recall \eqref{NjDef}.

\begin{lemma} \label{Adam}
There exist $\bm_1, \ldots, \bm_{N_d} \in \bN^n$ such that the first entry of $\bm_t$ is $1$ $(1 \le t \le N_d)$ and the square matrices
\begin{equation} \label{MjDef}
M_j = (\bm_t^\bj)_{1 \le t \le N_j, |\bj|_1 = j} \qquad (1 \le j \le d)
\end{equation}
are invertible over $\bQ$.
\end{lemma}

\begin{proof}
Put
\[ \bm_t = (\nu_1^{t-1}, \ldots, \nu_n^{t-1}) \qquad (1 \le t \le N_d) \]
with $\nu_1 = 1$ and $\nu_s = 2^{(d+1)^{s-2}}$ ($2 \le s \le n$). Let $j \in \{1,2,\ldots,d\}$, and note that the order of the vectors $\bj$ does not affect whether or not the matrix is invertible. We have
\[ \bm_t^\bj = (\bnu^\bj)^{t-1} \qquad (1 \le t \le N_j, |\bj|_1 = j), \]
so $M_j$ is a square Vandermonde matrix with parameters $\bnu^\bj$ ($|\bj|_1 = j$), and it remains to show that if $|\bi|_1 = |\bj|_1 = j$ and $\bi \ne \bj$ then $\bnu^\bi \ne \bnu^\bj$. We may assume that $\bi > \bj$ in reverse lexicographic order, so that there exists $r \in \{2,3,\ldots,n\}$ such that $i_r > j_r$ and $i_s = j_s$ ($r+1 \le s \le n$). Now
\[ \bnu^\bi / \bnu^\bj \ge \nu_r / \nu_{r-1}^j > 1. \]
\end{proof}

Henceforth, we let $\bm_1, \ldots, \bm_{N_d}$ be fixed vectors as in Lemma \ref{Adam}. Baker's work \cite[Theorem 5.1]{Bak1986} shows that if a Weyl sum in one variable is `large' then its non-constant coefficients admit good simultaneous rational approximations. There is currently no close analogue in many variables. However, since we have already restricted attention to a thin set of major arcs, we obtain a satisfactory analogue by fixing all but one variable and then using Baker's result. For the time being, we work with the more general Weyl sum $g(\balp, \bomega_\diamond)$. Put
\begin{equation} \label{Ndef}
N = N_1 + \ldots + N_d.
\end{equation}

\begin{lemma} \label{bak}
Let $H > 0$ be such that
\begin{equation} \label{Hlimit}
H^{2^d N + 1} \le P,
\end{equation}
and assume \eqref{gbig}. Then there exist unique $r \in \bN$ and
\[ \ba_\dagger = (a_{\bj})_{1 \le |\bj|_1 \le d} \in \bZ^N \]
such that
\begin{equation} \label{bak1}
r \ll H^{Nd} P^\varepsilon, \qquad \gcd(r,\ba_\dagger) = 1
\end{equation}
and
\begin{equation} \label{bak2}
r \omega_\bj - a_\bj \ll H^{Nd} P^{\varepsilon - |\bj|_1} \qquad (\bj \in \bZ_{\ge 0}^n: 1 \le |\bj|_1 \le d),
\end{equation}
where $\gcd(r,\ba_\dagger)$ denotes the greatest common divisor of $r$ and the entries of $\ba_\dagger$.
\end{lemma}

\begin{proof}
Let $t \in \{1,2,\ldots, N_d\}$, and set $y_1 = 0$. By \eqref{omg}, we have
\[ g(\balp, \bomega_\diamond) = \sum_{\substack{y_2, \ldots, y_n: \\ |\by| \le (|\bm_t|+1) P}} \sum_{x_1 \in I_t(\by)} e \Bigl( \sum_{1 \le |\bj|_1 \le d} \omega_\bj (\by + x_1 \bm_t)^\bj \Bigr), \]
where
\[ I_t(\by) = \{ x_1 \in \bZ : |\by + x_1 \bm_t| \le P \} \]
is a discrete subinterval of $[-P,P] \cap \bZ$. More precisely, given $t$ and $\by$ as above, there exists a real subinterval $[a,b]$ of $[-P,P]$ such that $I_t(\by) = [a,b] \cap \bZ$. By \eqref{gbig} and the triangle inequality, there exists $\by_t \in \bZ^n$ such that $|\by_t| \ll P$ and
\[ \Bigl | \sum_{x_1 \in I_t(\by_t)} e \Bigl( \sum_{1 \le |\bj|_1 \le d} \omega_\bj (\by_t + x_1 \bm_t)^\bj \Bigr) \Bigr | \gg PH^{-1}. \]
Now \cite[Theorem 5.1]{Bak1986} and the calculation \eqref{coeff} imply the existence of integers $q_t, v_{t,d}, \ldots, v_{t,1}$ such that
\[ 0 < q_t \ll H^d P^\varepsilon \]
and
\begin{equation} \label{q0errors}
q_t \Bigl( \sum_{|\bj|_1 = j} \bm_t^\bj \omega_\bj + \sum_{j < |\bi|_1 \le d} z_{j,\bi,t} \omega_\bi \Bigr) - v_{t,j} \ll H^d P^{\varepsilon-j} \quad (1 \le j \le d),
\end{equation}
where
\[ z_{j,\bi,t} = z_{j,\bi}(\bm_t, \by_t). \]
With \eqref{MjDef}, put
\begin{equation} \label{DeljDef}
\Delta_j = \det(M_j) \qquad (1 \le j \le d).
\end{equation}
In order for this to be well defined, we need to fix an ordering of the $\bj$ ($|\bj|_1 = j$), and we can do this by writing $\{ \bj_{1,j}, \ldots, \bj_{N_j,j} \}$ for the set of $\bj \in \bZ_{\ge 0}^n$ such that $|\bj|_1 = j$. Explicitly, we now have
\begin{equation} \label{MjExplicit}
M_j = \begin{pmatrix} \bm_1^{\bj_{1,j}} & \ldots & \bm_1^{\bj_{N_j,j}} \\ \vdots && \vdots \\ \bm_{N_j}^{\bj_{1,j}} & \ldots & \bm_{N_j}^{\bj_{N_j,j}} \end{pmatrix} \qquad (1 \le j \le d).
\end{equation}
Note that the matrices $\Delta_j M_j^{-1}$ have integer entries, since $\Delta_j M_j^{-1}$ is the adjugate of the integer matrix $M_j$.
For $j=1,2,\ldots,d$, write
\[ \Omega_j = \begin{pmatrix} \omega_{\bj_{1,j}} \\ \vdots \\ \omega_{\bj_{N_j,j}} \end{pmatrix}, \qquad V_j = \begin{pmatrix} v_{1,j} \\ \vdots \\ v_{N_j,j} \end{pmatrix}, \]
and also let $\fQ_j = \mathrm{diag}(q_1, \ldots, q_{N_j})$. Let
\[ Q_j = q_1 \cdots q_{N_j} \qquad (1 \le j \le d) \]
and
\[ \xi_j = \prod_{i=j}^d \Delta_i Q_i \qquad (1 \le j \le d). \]
For $j=1,2,\ldots,d$, put
\[ \psi_{t,j} = \sum_{j < |\bi|_1 \le d} z_{j,\bi,t} \omega_\bi, \qquad \Psi_j = \begin{pmatrix} \psi_{1,j} \\ \vdots \\ \psi_{N_j,j} \end{pmatrix}. \]
We proceed, by induction on $|\bi|_1$ from $d$ down to $1$, to show that there exist integers $v_\bi$ ($1 \le |\bi|_1 \le d$) such that
\begin{equation} \label{InductiveOutcome}
\xi_{|\bi|_1} \omega_\bi - v_\bi \ll (H^d P^\varepsilon)^{N_{|\bi|_1} + \ldots + N_d} P^{-|\bi|_1}.
\end{equation}
From \eqref{q0errors}, we have
\[ |\fQ_d M_d \Omega_d - V_d| \ll H^d P^{\varepsilon-d}. \]
Left multiplication by the integer matrix
\[ \Delta_d Q_d M_d^{-1} \fQ_d^{-1} = (\Delta_d M_d^{-1}) \cdot (Q_d \fQ_d^{-1}) \]
gives
\[ |\Delta_d Q_d \Omega_d - \Delta_d Q_d M_d^{-1} \fQ_d^{-1} V_d| \ll (H^d P^\varepsilon)^{N_d} P^{-d}, \]
since $q_t \ll H^d P^\varepsilon$ ($1 \le t \le N_d$). In particular, there exist $v_\bj \in \bZ$ ($|\bj|_1 = d$) such that
\begin{equation*}
\Delta_d Q_d \omega_\bj - v_\bj \ll (H^d P^\varepsilon)^{N_d} P^{-d} \qquad (|\bj|_1 = d).
\end{equation*}
We have confirmed \eqref{InductiveOutcome} whenever $|\bi|_1 = d$.

Next let $j \in \{1, 2, \ldots, d-1 \}$, and suppose that for $i \in \{j+1, j+2, \ldots, d\}$ there exist $v_\bi \in \bZ$ ($|\bi|_1 = i$) satisfying \eqref{InductiveOutcome}. Put
\[ Z_j = \begin{pmatrix} z_{1,j} \\ \vdots \\ z_{N_j,j} \end{pmatrix}, \]
where for $1 \le t \le N_j$ we write
\[ z_{t,j} = \sum_{i = j+1}^d \Delta_{j+1} \cdots \Delta_{i-1} Q_{j+1} \cdots Q_{i-1} \sum_{|\bi|_1 = i} z_{j,\bi,t} v_\bi. \]
From \eqref{q0errors}, we see that
\[ |\fQ_j (M_j \Omega_j + \Psi_j) - V_j| \ll H^d P^{\varepsilon-j}. \]
Noting that
\[ |\Delta_jQ_j M_j^{-1} \fQ_j^{-1}| = |(\Delta_j M_j^{-1}) \cdot (Q_j \fQ_j^{-1})| \ll (H^d P^\varepsilon)^{N_j-1}, \]
we now have
\[ |\Delta_j Q_j \Omega_j - \Delta_jQ_j M_j^{-1} \fQ_j^{-1} (V_j - \fQ_j \Psi_j)| \ll (H^d P^\varepsilon)^{N_j} P^{-j}. \]
Hence
\begin{align} \notag
\xi_j \Omega_j &= \xi_{j+1} \Delta_jQ_j M_j^{-1} \fQ_j^{-1} (V_j - \fQ_j \Psi_j) + O((H^d P^\varepsilon)^{N_j + \ldots + N_d} P^{-j}) \\
\label{MainCalc} &= X_j - \Delta_j Q_j M_j^{-1} \xi_{j+1} \Psi_j + O((H^d P^\varepsilon)^{N_j + \ldots + N_d} P^{-j}),
\end{align}
where $X_j = \xi_{j+1} \Delta_jQ_j M_j^{-1} \fQ_j^{-1} V_j$ has integer entries and we have used Landau's notation entry-wise. By our inductive hypothesis and the bound
\[ z_{j,\bi,t} \ll P^{|\bi|_1-j}, \]
we have
\begin{equation} \label{NiceApprox}
\xi_{j+1} \Psi_j = Z_j + O((H^d P^\varepsilon)^{N_{j+1} + \ldots + N_d} P^{-j}).
\end{equation}
Substituting \eqref{NiceApprox} into \eqref{MainCalc} yields
\[ \xi_j \Omega_j = X_j - \Delta_j Q_j M_j^{-1}Z_j + O((H^d P^\varepsilon)^{N_j + \ldots + N_d} P^{-j}). \]
In particular, there exist $v_\bj \in \bZ$ ($|\bj|_1 = j$) such that
\[ \xi_j \omega_\bj - v_\bj \ll (H^d P^\varepsilon)^{N_j + \ldots + N_d} P^{-j} \qquad (|\bj|_1 = j). \]
The induction has shown that there exist integers $v_\bi$ ($1 \le |\bi|_1 \le d$) satisfying \eqref{InductiveOutcome}. Our existence statement follows by redefining $\varepsilon$, and choosing $r, a_\bj$ ($1 \le |\bj|_1 \le d$) by rescaling the integers $\xi_1, (\xi_1 / \xi_{|\bj|_1}) v_\bj$ in such a way that $r > 0$ and $\gcd(r, \ba_\dagger) = 1$.

Next suppose \eqref{bak1} and \eqref{bak2} also hold with $s \in \bN$ and
\[ \bb_\dagger = (b_\bj)_{1 \le |\bj|_1 \le d} \in \bZ^N \]
in place of $r$ and $\ba_\dagger$. Then, by the triangle inequality, we have
\[ |a_\bj /r - b_\bj/s| \ll (1/r + 1/s) H^{Nd} P^{\varepsilon - 1} \qquad (1 \le |\bj|_1 \le d). \]
Since $P$ is large and $r,s \ll H^{Nd} P^\varepsilon$, we may now recall \eqref{Hlimit} to see that
\[ |a_\bj/r - b_\bj/s| < (rs)^{-1} \qquad (1 \le |\bj|_1 \le d). \]
Hence $a_\bj/r = b_\bj/s$ ($1 \le |\bj|_1 \le d$). The conditions
\[ \gcd(r,\ba_\dagger) = \gcd(s, \bb_\dagger) = 1 \]
now imply that $(r,\ba_\dagger) = (s, \bb_\dagger)$. We have demonstrated uniqueness.
\end{proof}

It may be possible to obtain the inequalities \eqref{bak1} and \eqref{bak2} with a smaller power of $H$, but we do not require this. Using an argument similar to that of the corollary to \cite[Lemma 4.3]{Bir1962}, we now deduce the following estimate for $g(\balp, \bomega_\diamond)$.

\begin{cor} \label{BakCor0}
Let $\xi$ be a small positive real number. Let $\balp \in \bR^R$, and let
\[ \bomega_\diamond = (\omega_\bj)_{1 \le |\bj|_1 \le d-1} \in \bR^{N_1 + \ldots + N_{d-1}} \]
be such that
\[ P^{|\bj|_1}|\omega_\bj| \le (P^{\xi + (2^dN + 1)^{-1}})^{Nd} \qquad (1 \le |\bj|_1 \le d-1). \]
Then
\begin{equation} \label{second}
g(\balp, \bomega_\diamond) \ll_\xi P^{n+\xi} \Bigl( \max_{1 \le |\bj|_1 \le d-1} P^{|\bj|_1} |\omega_\bj| \Bigr)^{-(Nd)^{-1}}.
\end{equation}
\end{cor}

\begin{proof}
Let $\bj \in \bZ_{\ge 0}^n$ be such that $1 \le |\bj|_1 = j \le d-1$, and determine $H > 0$ by
\begin{equation} \label{Hdef}
P^j|\omega_\bj| = (HP^\xi)^{Nd}.
\end{equation}
Note that we have \eqref{Hlimit}. Assume for a contradiction that
\[ |g(\balp, \bomega_\diamond)| \ge P^{n+\xi} (P^j |\omega_\bj| )^{-(Nd)^{-1}}, \]
for some $P$ that is large in terms of $\xi$.
Then $|g(\balp, \bomega_\diamond)| \ge P^n H^{-1}$, so by Lemma \ref{bak} there exist $r,a_\bj \in \bZ$ satisfying $0< r \ll H^{Nd} P^\xi$ and
\begin{equation} \label{2sub}
r\omega_\bj - a_\bj \ll H^{Nd} P^{\xi-j}.
\end{equation}
The triangle inequality and \eqref{Hdef} now give
\[ a_\bj \ll H^{Nd} P^{\xi-j} + (H^{Nd} P^\xi) \cdot (H^{Nd} P^{Nd\xi-j}). \]
By \eqref{Hlimit}, we must now have $a_\bj = 0$. Substituting this into \eqref{2sub} yields
\[ \omega_\bj \ll H^{Nd} P^{\xi - j}, \]
contradicting \eqref{Hdef}. We must therefore have \eqref{second}.
\end{proof}

Put
\begin{equation} \label{delDef}
\delta = (R(R+1) Nd^2+1)\theta_0.
\end{equation}
Recall \eqref{partials} and \eqref{NstarDef}. We henceforth define the $\omega_\bj$ ($1 \le |\bj|_1 \le d$) in terms of $\balp$ by \eqref{omegadef}. The following is another consequence of Lemma \ref{bak}.

\begin{cor} \label{BakCor}
Let $\balp \in \fN^*$. Then there exist unique
\[ r \in \bZ, \qquad \ba_\dagger = (a_{\bj})_{1 \le |\bj|_1 \le d} \in \bZ^N \]
such that
\begin{equation} \label{bak1cor}
1 \le r < P^\delta, \qquad \gcd(r,\ba_\dagger)=1
\end{equation}
and
\begin{equation} \label{bak2cor}
|r \omega_\bj - a_\bj | < P^{\delta - |\bj|_1} \qquad (1 \le |\bj|_1 \le d).
\end{equation}
There also exist unique integers $q, a_1, \ldots, a_R$ such that
\begin{equation} \label{qbound}
1 \le q \le P^{R(d-1)\theta_0}, \qquad \gcd(a_1, \ldots, a_R,q) = 1
\end{equation}
and
\begin{equation} \label{qerror}
2 |q\balp - \ba| \le P^{R(d-1)\theta_0 - d}.
\end{equation}
\end{cor}

\begin{proof}
Recall that \eqref{Sg} holds with the specialisation \eqref{omegadef}. For existence of satisfactory $r$ and $\ba_\dagger$, apply Lemma \ref{bak} with $H = P^{R(R+1)d \theta_0}$. Our first uniqueness assertion follows in the same way as the uniqueness statement in Lemma \ref{bak}. Existence and uniqueness of $q,a_1,\ldots,a_R$ follow from the definition of $\fN$ and the subsequent discussion, since $\balp \in \fN^* \subseteq \fN$.
\end{proof}

\section{Special approximations} \label{special}

Recall \eqref{NjDef}, and that the $\omega_\bj$ are now defined in terms of $\balp$ by \eqref{omegadef}. Recall \eqref{Fdef}, and that $\cS$ is the set of $j \in \{1,2,\ldots,d\}$ such that $F_{1,j}, F_{2,j}, \ldots, F_{R,j}$ are linearly independent.

\begin{lemma} \label{SpecLemma}
Let $j \in \cS$ and $\balp \in \bR^R$. Let $r, a_\bj \in \bZ$ $(|\bj|_1 = j)$.
Then there exist integers $D_j \ne 0$ and $a_{k,j}$ $(1 \le k \le R)$ such that
\[ D_j r \alpha_k \mu^{d-j} - a_{k,j} \ll \max_{|\bj|_1 = j} |r \omega_\bj - a_\bj| \qquad (1 \le k \le R) \]
and $D_j$ is bounded in terms of $\bf$.
\end{lemma}

\begin{proof}
As in the proof of Lemma \ref{bak}, we fix an ordering of the $\bj$ ($|\bj|_1 = j$) by writing $\{ \bj_{1,j}, \ldots, \bj_{N_j,j} \}$ for the set of $\bj \in \bZ_{\ge 0}^n$ such that $|\bj|_1 = j$. From \eqref{omegadef}, we have
\[ \Omega_j = C_j Y_j, \]
where
\[ \Omega_j = \begin{pmatrix} \omega_{\bj_{1,j}} \\ \vdots \\ \omega_{\bj_{N_j,j}} \end{pmatrix}, \quad C_j = \begin{pmatrix} d_{1,\bj_{1,j}} &\ldots& d_{R,\bj_{1,j}} \\ \vdots && \vdots \\ d_{1, \bj_{N_j,j}}&\ldots& d_{R, \bj_{N_j,j}} \end{pmatrix}, \quad Y_j = \begin{pmatrix} \alpha_1 \mu^{d-j} \\ \vdots \\ \alpha_R \mu^{d-j} \end{pmatrix}. \]
We note from \eqref{numvars} and \eqref{NjDef} that $N_j \ge R$. The condition $j \in \cS$ ensures that the $R$ columns of $C_j$ are linearly independent, and it follows from linear algebra that $C_j$ contains $R$ linearly independent rows, indexed say by $T_j \subseteq \{1,2,\ldots, N_j \}$ (row rank equals column rank). Form $C'_j$ by assembling these rows of $C_j$ to form an invertible $R \times R$ matrix, and let $A'_j = (a_{\bj_{t,j}})_{t \in T_j}$ be the $R \times 1$ matrix formed by assembling the same rows of $(a_{\bj_{t,j}})_{1 \le t \le N_j}$. We put $D_j = \det(C'_j)$ and
\[ \begin{pmatrix} a_{1,j} \\ \vdots \\ a_{R,j} \end{pmatrix} = D_j (C'_j)^{-1} A'_j. \]
Define the $R \times 1$ matrix $\Omega'_j = (\omega_{\bj_{t,j}})_{t \in T_j}$. Now $\Omega'_j = C'_j Y_j$, so
\[ D_j r Y_j - \begin{pmatrix} a_{1,j} \\ \vdots \\ a_{R,j} \end{pmatrix} = D_j (C'_j)^{-1} (r \Omega'_j - A'_j), \]
completing the proof.
\end{proof}

As $d,d-1 \in \cS$, we have the following corollary.

\begin{cor} \label{SpecCor}
Let $\balp \in \fN^*$. Let the integers $r$ and $a_\bj$ $(1 \le |\bj|_1 \le d)$ be as determined by Corollary \ref{BakCor}. Then there exists $C_\bf > 1$, depending only on $\bf$, as well as $D,E \in \bZ \setminus \{0\}$ and $\ba_1, \ba_2 \in \bZ^R$ such that
\begin{equation} \label{DE}
|D|, |E| \le C_\bf,
\end{equation}
\begin{equation} \label{Dr}
|Dr \balp - \ba_1| \ll \max_{|\bj|_1 = d} |r \omega_\bj - a_\bj|
\end{equation}
and
\begin{equation} \label{Er}
|Er \mu \balp - \ba_2| \ll \max_{|\bj|_1 = d-1} |r \omega_\bj - a_\bj|.
\end{equation}
The choice of $(D, E, \ba_1, \ba_2)$ is unique if we impose the further conditions
\begin{equation} \label{further}
D,E > 0, \quad \gcd(D,\ba_1) = \gcd(E,\ba_2) = 1.
\end{equation}
\end{cor}

\begin{proof}
For existence, apply Lemma \ref{SpecLemma} with $j=d$ and then with $j=d-1$.
For uniqueness, suppose we also have \eqref{DE}, \eqref{Dr}, \eqref{Er} and \eqref{further} with $D',E',\ba_1',\ba_2'$ in place of $D,E,\ba_1,\ba_2$. Combining these bounds with \eqref{bak2cor} and the triangle inequality gives
\[ |D^{-1} \ba_1 - (D')^{-1} \ba'_1| < (DD')^{-1} \]
and
\[ |E^{-1} \ba_2 - (E')^{-1} \ba'_2| < (EE')^{-1}, \]
so $D^{-1}\ba_1 = (D')^{-1}\ba'_1$ and $E^{-1}\ba_2 = (E')^{-1}\ba'_2$. Having made the assumptions \eqref{further} and
\[ D',E' > 0, \quad \gcd(D',\ba'_1) = \gcd(E',\ba'_2) = 1, \]
we must now have $(D',E',\ba_1',\ba_2') = (D,E,\ba_1,\ba_2)$.
\end{proof}

Henceforth, fix $C_\bf$ to be as in Corollary \ref{SpecCor}. Recall \eqref{Ndef}. For $r,D,E,q \in \bN$,
\[ \ba_\dagger = (a_\bj)_{1 \le |\bj|_1 \le d} \in \bZ^N, \quad \ba_1, \ba_2,\ba \in \bZ^R, \]
write
\begin{equation} \label{suppress}
\cX = (r,D,E,q,\ba_\dagger,\ba_1, \ba_2,\ba),
\end{equation}
and let $\fR(\cX) = \fR_P(\cX)$ be the set of $\balp \in \bR^R$ satisfying \eqref{bak1cor}, \eqref{bak2cor}, \eqref{qbound}, \eqref{qerror}, \eqref{DE}, \eqref{Dr}, \eqref{Er} and \eqref{further}. Let $\fR = \fR_P$ be the union of these sets. This union is disjoint, by the uniqueness assertions in Corollaries \ref{BakCor} and \ref{SpecCor}. These corollaries also tell us that
\begin{equation} \label{subset}
\fN^* \subseteq \fR.
\end{equation}
Recall \eqref{partials}.

\begin{lemma} \label{identities}
Suppose $\fR_P(\cX) \ne \emptyset$. Then
\begin{equation} \label{equality1}
(Dr)^{-1} \ba_1 = q^{-1} \ba,
\end{equation}
and $q$ divides $Dr$. We must also have
\begin{equation} \label{equality2}
r^{-1} a_\bj = q^{-1} \sum_{k \le R} d_{k,\bj} a_k \qquad (|\bj|_1 = d).
\end{equation}
\end{lemma}

\begin{proof}
Let $\balp \in \fR_P(\cX)$. From \eqref{bak2cor}, \eqref{qerror}, \eqref{Dr} and the triangle inequality, we see that
\[ |(Dr)^{-1} \ba_1 - q^{-1} \ba| \ll q^{-1} P^{R(d-1)\theta_0 - d} + r^{-1} P^{\delta - d}. \]
By \eqref{bak1cor}, \eqref{qbound} and \eqref{DE}, we now have
\[ |(Dr)^{-1} \ba_1 - q^{-1} \ba| < (Drq)^{-1}, \]
which implies \eqref{equality1}. Hence $q$ divides $Dr$, since $\gcd(a_1, \ldots, a_R, q) = 1$.

Now let $\bj \in \bZ_{\ge 0}^n$ be such that $|\bj|_1 = d$. We see from \eqref{omegadef} and \eqref{bak2cor} that
\[ \Bigl |\sum_{k \le R} d_{k,\bj} \alpha_k - a_\bj / r \Bigr | < r^{-1} P^{\delta-d}. \]
Combining this with \eqref{qerror} and the triangle inequality yields
\[ r^{-1} a_\bj - q^{-1} \sum_{k \le R} d_{k,\bj} a_k \ll r^{-1} P^{\delta-d} + q^{-1} P^{R(d-1)\theta_0 -d}. \]
In light of \eqref{bak1cor} and \eqref{qbound}, we now have
\[ \Bigl | r^{-1} a_\bj - q^{-1} \sum_{k \le R} d_{k,\bj} a_k \Bigr| < (qr)^{-1}, \]
which establishes \eqref{equality2}.
\end{proof}

\section{Adaptations of known bounds} \label{classical}

In this section we consider $S(\balp)$ for $\balp \in \fR$. Let $r, D, q \in \bN$, where $D \le C_\bf$ and $q$ divides $Dr$. Assume that $Dr \le P$. Let $\ba \in \bZ^R$ and
\begin{equation} \label{dagger}
\ba_\dagger = (a_{\bj})_{1 \le |\bj|_1 \le d} \in \bZ^N,
\end{equation}
where we recall \eqref{NjDef} and \eqref{Ndef}.
Recall that the $\omega_\bj$ are defined in terms of $\balp$ by \eqref{omegadef}, and put
\begin{equation} \label{spec1}
\balp = q^{-1} \ba + \bz, \qquad \omega_\bj = r^{-1} a_\bj + z_\bj \quad (1 \le |\bj|_1 \le d-1).
\end{equation}
Recall \eqref{gdef} and \eqref{Sg}. Our starting point is the calculation
\begin{equation} \label{gnote}
g(\balp, \bomega_\diamond) = \sum_{\bx \mmod Dr} e \Bigl( q^{-1} \ba \cdot \bf(\bx) + r^{-1} \sum_{1 \le |\bj|_1 \le d-1} a_\bj \bx^\bj \Bigr) S_{Dr} (\bx),
\end{equation}
where
\[ S_{Dr}(\bx) = \sum_{\substack{|\by| \le P \\ \by \equiv \bx \mmod Dr}} e\Bigl( \bz \cdot \bf(\by) + \sum_{1 \le |\bj|_1 \le d-1} z_\bj \by^\bj \Bigr). \]
For $\bgam \in \bR^R$ and
\begin{equation} \label{diam}
\bgam_\diamond = (\gamma_\bj)_{1 \le |\bj|_1 \le d-1} \in \bR^{N_1 + \ldots + N_{d-1}},
\end{equation}
write
\begin{equation} \label{Idef}
I(\bgam, \bgam_\diamond) = \int_{[-1,1]^n} e\Bigl( \bgam \cdot \bf(\bt) + \sum_{1 \le |\bj|_1 \le d-1} \gamma_\bj \bt^\bj \Bigr) \,\mathrm{d} \bt.
\end{equation}
By \cite[Lemma 8.1]{Bro2009} and a change of variables, we have
\begin{equation} \label{SIcompare}
S_{Dr}(\bx) = (P/(Dr))^n I(\bgam, \bgam_\diamond) + O((P/r)^{n-1} (1+|\bgam| + |\bgam_\diamond|))
\end{equation}
with
\begin{equation} \label{spec2}
\bgam = P^d \bz, \qquad \gamma_\bj = P^{|\bj|_1} z_\bj \qquad (1 \le |\bj|_1 \le d-1).
\end{equation}
Let
\begin{equation} \label{SraDef}
S_{r,D,q}(\ba, \ba_\dagger) =\sum_{\bx \mmod Dr} e \Bigl( q^{-1} \ba \cdot \bf(\bx) + r^{-1} \sum_{1 \le |\bj|_1 \le d-1} a_\bj \bx^\bj \Bigr).
\end{equation}
Since $D \le C_\bf$, substituting \eqref{SIcompare} into \eqref{gnote} shows that
\begin{equation} \label{one}
g(\balp, \bomega_\diamond) - P^n (Dr)^{-n} S_{r,D,q}(\ba,\ba_\dagger)I(\bgam, \bgam_\diamond) \ll rP^{n-1}(1+|\bgam| + |\bgam_\diamond|),
\end{equation}
with \eqref{spec2}. Specialising $r=D=q=1$, $\ba = \bzero$, and $\ba_\dagger = \bzero$ yields
\begin{equation} \label{two}
g(\balp, \bomega_\diamond) = P^n I(\bgam, \bgam_\diamond) + O(P^{n-1}(1+|\bgam| + |\bgam_\diamond|))
\end{equation}
with
\begin{equation} \label{WeirdGam}
\bgam = P^d \balp, \qquad \gamma_\bj = P^{|\bj|_1} \omega_\bj \quad (1 \le |\bj|_1 \le d-1).
\end{equation}
Emulating \cite[Lemma 5.2]{Bir1962} or \cite[Lemma 8.8]{Bro2009}, we combine \eqref{first}, \eqref{second} and \eqref{two} in order to bound $I(\bgam, \bgam_\diamond)$, uniformly for $\bgam \in \bR^R$ and $\bgam_\diamond \in \bR^{N_1 + \ldots + N_{d-1}}$.

\begin{lemma} \label{IboundLemma}
Let $\lambda$ be a small positive real number.
Then for $\bgam \in \bR^R$ and $\bgam_\diamond \in \bR^{N_1+N_2+ \ldots + N_{d-1}}$, we have
\begin{equation} \label{Ibound}
I(\bgam, \bgam_\diamond) \ll_\lambda (1 + |\bgam|^{\kappa - \lambda} + |\bgam_\diamond|^{(Nd)^{-1} - \lambda})^{-1}.
\end{equation}
\end{lemma}

\begin{proof}
As $I(\bgam, \bgam_\diamond) \ll 1$, we may assume that $|\bgam| + |\bgam_\diamond|$ is large. Recall that \eqref{two} holds with \eqref{WeirdGam}. This, \eqref{first} and \eqref{second} show that
\[ I(\bgam, \bgam_\diamond) \ll \frac{P^{\lambda n^{-2} N^{-1}}} {|\bgam|^\kappa + |\bgam_\diamond|^{(Nd)^{-1}}} + \frac{1+|\bgam| + |\bgam_\diamond|}P \]
whenever $|\bgam| < P^{d/2}$ and $|\bgam_\diamond| \le (P^{\lambda n^{-2} N^{-1} + (2^dN + 1)^{-1}})^{Nd}$. Recall \eqref{numvars}. As $|\bgam| + |\bgam_\diamond|$ is large and $I(\bgam, \bgam_\diamond)$ does not depend on $P$, we are free to choose $P = (|\bgam| + |\bgam_\diamond|)^n$, which gives
\[ I(\bgam, \bgam_\diamond) \ll \frac{(|\bgam| + |\bgam_\diamond|)^{\lambda (Nn)^{-1}}} {|\bgam|^\kappa + |\bgam_\diamond|^{(Nd)^{-1}}}. \]
Recall \eqref{kapDef}. By cross-multiplying and considering cases, we may now deduce that
\[ I(\bgam, \bgam_\diamond) \ll (|\bgam|^{\kappa - \lambda} + |\bgam_\diamond|^{(Nd)^{-1}-\lambda})^{-1}. \]
As $|\bgam| + |\bgam_\diamond| > 1$, this yields \eqref{Ibound}.
\end{proof}

In analogy with \cite[Lemma 15.3]{Dav2005}, we deduce the following bound. We note from Lemma \ref{identities} that the conditions below are necessarily met whenever $\fR_P(\cX) \ne \emptyset$.

\begin{lemma} \label{Sra}
Let $\psi > 0$, $q \in \bN$ and $\ba \in \bZ^R$ be such that $\gcd(a_1, \ldots, a_R, q) = 1$. Let $D \in \bN$ with $D \le C_\bf$. Let $r \in \bN$ be such that $q$ divides $Dr$, and let $\ba_\dagger$ be as in \eqref{dagger}. Then
\begin{equation} \label{SraBound}
S_{r,D,q}(\ba, \ba_\dagger) \ll_\psi r^n q^{\psi - \kappa}.
\end{equation}
\end{lemma}

\begin{proof}
We may assume without loss that $\psi < 1$. Since $|S_{r,D,q}(\ba, \ba_\dagger)| \le (Dr)^n$, we may assume $q$ to be large in terms of $\psi$. Suppose for a contradiction that
\[ |S_{r,D,q}(\ba, \ba_\dagger)| > (Dr)^n q^{\psi - \kappa}. \]
Break $S_{r,D,q}(\ba, \ba_\dagger)$ into $(Dr/q)^n$ sums, parametrised by $\bv \in \{1,2,\ldots,Dr/q\}^n$. The sum associated to a given $\bv$ is
\begin{equation} \label{shape}
\sum_{1 \le y_1, \ldots, y_n \le q} e \Bigl( q^{-1} \ba \cdot \bf(\by + q \bv) + \sum_{1 \le |\bj|_1 \le d-1} r^{-1} a_\bj (\by + q \bv)^\bj \Bigr)
\end{equation}
and, by the triangle inequality, at least one such sum must exceed $q^{n + \psi - \kappa}$ in absolute value. Fix $\bv \in \{1,2,\ldots,Dr/q\}^n$ so that the expression \eqref{shape} exceeds $q^{n+\psi - \kappa}$ in absolute value.
The polynomial in the Weyl sum \eqref{shape} is of the shape $q^{-1} \ba \cdot \bf(\by)$ plus lower degree terms. By \eqref{kapDef}, we may apply Lemma \ref{Birch43} with $P=q$ and $\theta = R^{-1}(d-1)^{-1} - \psi/n$. This shows that there exist integers $s, b_1, \ldots, b_R$ such that
\[ 1 \le s < q, \qquad |sa_k / q - b_k| < q^{-1} \qquad (1 \le k \le R). \]
Hence $a_k/q = b_k/s$ ($1 \le k \le R$). This is impossible, since $0 < s < q$ and $\gcd(a_1, \ldots, a_R,q) = 1$. This contradiction implies \eqref{SraBound}.
\end{proof}

In view of \eqref{subset}, we may restrict attention to $\fR$. With \eqref{one} as the harbinger of our endgame, we perceive the need to obtain a nontrivial upper bound for $S_{r,D,q}(\ba, \ba_\dagger) \cdot I(\bgam, \bgam_\diamond)$ on Davenport--Heilbronn minor arcs. From \eqref{kapBound}, \eqref{spec1} and \eqref{spec2}, we see that the inequalities \eqref{Ibound} and \eqref{SraBound} save a `large' power of $P^d|q\balp - \ba|$ on $\fR$. We shall also need to save a power of $P^{d-1}|Er \mu \balp - \ba_2|$. If $|\balp|$ is somewhat large, the irrationality of $\mu$ will force one of $|q\balp - \ba|$ and $|Er \mu \balp - \ba_2|$ to be somewhat large, leading to a nontrivial estimate. From \eqref{Er}, \eqref{spec1}, \eqref{diam} and \eqref{spec2}, we see that \eqref{Ibound} saves a power of $P^{d-1} |E\mu \balp - r^{-1} \ba_2|$ over a trivial estimate for $I(\bgam, \bgam_\diamond)$. Thus, our final task for this stage of the analysis is to save a power of $r$ over a trivial estimate for $S_{r,D,q}(\ba, \ba_\dagger)$. Roughly speaking, we achieve this by fixing $x_2, \ldots, x_n$ and then using \cite[Theorem 7.1]{Vau1997} to bound the resulting univariate exponential sum. This entails bounding the greatest common divisor of the coefficients of this latter sum, which leads us to consider several notional changes of variables, much like in the proof of Lemma \ref{bak}.

\begin{lemma}
Let $D \in \bN$ with $D \le C_\bf$. Let $r,q \in \bN$, and let $\psi > 0$. Let $\ba \in \bZ^R$, and let $\ba_\dagger$ be as in \eqref{dagger}. Assume \eqref{equality2}, and that $\gcd(r,\ba_\dagger) = 1$. Then
\begin{equation} \label{SraBound2}
S_{r,D,q}(\ba, \ba_\dagger) \ll_\psi r^{n-(N_d d)^{-1}+\psi}.
\end{equation}
\end{lemma}

\begin{proof}
By \eqref{fknote}, \eqref{equality2}, \eqref{SraDef} and periodicity, we have
\begin{equation} \label{periodicity}
S_{r,D,q}(\ba, \ba_\dagger) = D^n \sum_{\bx \mmod r} e_r \Bigl( \sum_{1 \le |\bj|_1 \le d} a_\bj \bx^\bj \Bigr).
\end{equation}
Set $y_1 = 0$, and recall that we have fixed $\bm_1, \ldots, \bm_{N_d} \in \bN^n$ as in Lemma \ref{Adam}. Write
\[ \bm_t = (m_{t,1}, \ldots, m_{t,n}) \qquad (1 \le t \le N_d), \]
where $m_{t,1} = 1$ ($1 \le t \le N_d$). Equation \eqref{periodicity} and the triangle inequality give
\begin{equation} \label{triangle}
|S_{r,D,q}(\ba, \ba_\dagger)| \le D^n \sum_{\substack{y_2, \ldots, y_n: \\ |\by| \le (|\bm_t|+1) r}} |S(\bm_t, \by)| \qquad (1 \le t \le N_d),
\end{equation}
where
\[ S(\bm_t, \by) = \sum_{x_1 \in I_r(\bm_t, \by)} e_r\Bigl( \sum_{1\le |\bj|_1 \le d} a_\bj (\by + x_1 \bm_t)^\bj \Bigr); \]
here
\[ I_r(\bm_t, \by) = \{ x_1 \in \bZ : 1 \le y_1 + m_{t,1} x_1, \ldots, y_n+m_{t,n} x_1 \le r \} \]
is a discrete subinterval of $\{1,2,\ldots,r\}$.
More precisely, given $t$ and $\by$ as above, there exists a real subinterval $[a,b]$ of $[1,r]$ such that $I_r(\bm_t, \by) = [a,b] \cap \bZ$. Suppose for a contradiction that
\[ |S_{r,D,q}(\ba, \ba_\dagger)| > D^n r^{n- (N_d d)^{-1} + \psi} \prod_{t \le N_d} (2|\bm_t| +3)^{n-1}, \]
and that $r$ is large in terms of $\psi$. Then, by \eqref{triangle}, there exist $\by_1, \ldots, \by_{N_d} \in \bZ^n$ such that
\begin{equation} \label{Sbig}
|S(\bm_t, \by_t)| > r^{1 - (N_d d)^{-1} + \psi} \qquad (1 \le t \le N_d).
\end{equation}
In view of the calculation \eqref{zdef}, we see that if $1 \le t \le N_d$ and $1 \le j \le d$ then the coefficient of $x_1^j$ in
\[ \sum_{1 \le |\bj|_1 \le d} a_\bj (\by_t + x_1 \bm_t)^\bj \]
is
\begin{equation} \label{ctj}
c_{t,j} := \sum_{|\bj|_1 = j} a_\bj \bm_t^\bj + \sum_{j < |\bi|_1 \le d} a_\bi z_{j,\bi} (\bm_t, \by_t).
\end{equation}
At this point we apply \cite[Theorem 7.1]{Vau1997}. It is necessary to remove any common divisors of $r,c_{t,1}, \ldots, c_{t,d}$. Moreover, since \cite[Theorem 7.1]{Vau1997} deals with complete exponential sums, we use an estimate due to Hua \cite[\S 3]{Hua1940} to compare our incomplete exponential sum to the corresponding complete exponential sum. Thus, it follows that
\[ S(\bm_t, \by_t) \ll \gcd(r,c_{t,1}, \ldots, c_{t,d})^{1/d-\varepsilon} r^{1-1/d+\varepsilon} \qquad (1 \le t \le N_d). \]
Coupling this with \eqref{Sbig}, we deduce that
\[ \gcd(r,c_{t,1}, \ldots, c_{t,d}) \gg r^{1- 1/N_d + \psi} \qquad (1 \le t \le N_d), \]
so
\begin{equation} \label{gcdBig0}
\prod_{t \le N_d} \gcd(r,c_{t,1}, \ldots, c_{t,d}) > r^{N_d - 1 + \psi}.
\end{equation}
By induction using the inequality
\[ (a,b)(a,c) \le a \cdot \gcd(a,b,c) \qquad (a,b,c \in \bZ, a > 0), \]
one can show that
\[ \prod_{t \le N_d} \gcd(r,c_{t,1}, \ldots, c_{t,d}) \le r^{N_d-1} G, \]
where $G$ is the greatest common divisor of $r$ and the $c_{t,j}$ \mbox{($1 \le t \le N_d$, $1 \le j \le d$).} This and \eqref{gcdBig0} give
\begin{equation} \label{Gbig}
G > r^\psi.
\end{equation}
Note that
\begin{equation} \label{decompose}
G \le \gcd(r, g_1, \ldots, g_d),
\end{equation}
where
\[ g_j = \gcd(c_{1,j}, \ldots, c_{N_j, j}) \qquad (1 \le j \le d). \]
We adopt the notation of \eqref{DeljDef}, \eqref{MjExplicit} and the discussion in between. Write
\[ C_j = \begin{pmatrix} c_{1,j} \\ \vdots \\ c_{N_j,j} \end{pmatrix}, \qquad A_j = \begin{pmatrix} a_{\bj_{1,j}} \\ \vdots \\ a_{\bj_{N_j,j}} \end{pmatrix} \qquad (1 \le j \le d). \]
We shall show by induction from $d$ down to $1$ that if $1 \le j \le d$ then
\begin{equation} \label{gcdInduct}
\gcd(g_j, \ldots, g_d) | \Delta_j \cdots \Delta_d G_j,
\end{equation}
where $G_j$ is the greatest common divisor of the $a_\bj$ ($j \le |\bj|_1 \le d$).

Let $\cD_0$ be a common divisor of $c_{1,d}, \ldots, c_{N_d, d}$. From \eqref{ctj} we have
\[ C_d = M_d A_d. \]
Hence
\[ \Delta_d A_d = \Delta_d M_d^{-1} C_d, \]
and so $\cD_0$ divides $\Delta_d a_\bj$ ($|\bj|_1 = d$). We thus conclude that $g_d | \Delta_d G_d$, thereby establishing the case $j=d$ of \eqref{gcdInduct}. Now let $j \in \{1, 2, \ldots, d-1 \}$, and assume that
\[ \gcd(g_i, \ldots, g_d) | \Delta_i \cdots \Delta_d G_i \qquad (j+1 \le i \le d). \]
Let $\cD$ be a common divisor of $g_j, \ldots, g_d$.
Then $\cD$ divides $c_{t,j}$ ($1 \le t \le N_j$), and our inductive hypothesis shows that
\begin{equation} \label{IndImplies}
\cD | \Delta_{j+1} \cdots \Delta_d a_\bi \qquad (j < |\bi|_1 \le d).
\end{equation}
Equations \eqref{ctj} and \eqref{IndImplies} yield
\[ \Delta_{j+1} \cdots \Delta_d C_j \equiv \Delta_{j+1} \cdots \Delta_d M_j A_j \mmod \cD, \]
so
\[ \Delta_j \cdots \Delta_d A_j \equiv (\Delta_j M_j^{-1}) \Delta_{j+1} \cdots \Delta_d C_j \equiv \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix} \mmod \cD. \]
Coupling this with \eqref{IndImplies} yields $\cD | \Delta_j \cdots \Delta_d G_j$. Hence $\gcd(g_j, \ldots, g_d)$ divides $\Delta_j \cdots \Delta_d G_j$, and our induction is complete.

We now have \eqref{gcdInduct}, in particular for $j=1$. Substituting this into \eqref{decompose} gives
\[ G \ll \gcd(r, G_1) = \gcd(r,\ba_\dagger) = 1. \]
This contradicts \eqref{Gbig}, thereby completing the proof of the lemma.
\end{proof}

With \eqref{spec1}, we now specialise \eqref{spec2}. Define $S^*: \fR \to \bC$ as follows: if $\balp \in \fR(\cX)$ then
\begin{equation} \label{SstarDef}
S^*(\balp) = P^n (Dr)^{-n} S_{r, D,q}(\ba, \ba_\dagger) I(\bgam, \bgam_\diamond) e(\balp \cdot \bf(\bmu)).
\end{equation}
We note from \eqref{delDef}, \eqref{bak1cor}, \eqref{bak2cor} and \eqref{qerror} that
\[ r < P^\delta, \qquad r|\bgam| < P^{2\delta}, \qquad r|\bgam_\diamond| < P^\delta. \]
By \eqref{Sg} and \eqref{one}, we now have
\begin{equation} \label{SstarCompare}
S(\balp) = S^*(\balp) + O(P^{n-1+2\delta}) \qquad (\balp \in \fR).
\end{equation}
Let $\fU$ be an arbitrary unit hypercube in $R$ dimensions. The measure of $\fN^* \cap \fU$ is $O(P^{R(R+1)(d-1)\theta_0 - Rd})$, so \eqref{delDef}, \eqref{subset} and \eqref{SstarCompare} show that
\[ \int_{\fN^* \cap \fU} |S(\balp) - S^*(\balp)| \,\mathrm{d} \balp \ll P^{n - Rd - 1 + 3 \delta}. \]
Since $\delta$ is small, we now see from \eqref{Tbound}, \eqref{Ldef}, \eqref{Kbounds} and \eqref{Kprod} that
\begin{align} \notag
\int_{\fN^*} S(\balp) e(-\balp \cdot \btau) \bK_{\pm}(\balp) \,\mathrm{d} \balp &= \int_{\fN^*} S^*(\balp) e(-\balp \cdot \btau) \bK_{\pm}(\balp) \,\mathrm{d} \balp \\
\label{IntCompare} &\qquad+ o(P^{n - Rd}).
\end{align}
Let $\balp \in \fR(\cX)$. Equations \eqref{spec1} and \eqref{spec2} give $q \bgam = P^d (q\balp - \ba)$ and
\[ r \gamma_\bj = P^{d-1} (r \omega_\bj - a_\bj) \qquad (|\bj|_1 = d-1). \]
Thus, by \eqref{Er}, \eqref{Ibound}, \eqref{SraBound}, \eqref{SraBound2} and \eqref{SstarDef}, we have
\[ S^*(\balp) \ll P^n(q+P^d |q \balp - \ba|)^{\varepsilon - \kappa} \]
and
\[ S^*(\balp) \ll P^n(r+P^{d-1} |Er \mu \balp - \ba_2 |)^{\varepsilon - (Nd)^{-1}}. \]
In light of \eqref{kapBound} and the bound $E \le C_\bf$, we now have
\begin{equation} \label{SstarBound}
S^*(\balp) \ll P^n (q+P^d |q \balp - \ba|)^{- R - 1 - \varepsilon} F(\balp)^\varepsilon,
\end{equation}
where
\begin{align} \notag
F(\balp) &= F(\balp; P) \\
\label{fdef} &= (q+P^d |q \balp - \ba|)^{-1} (Er+P^{d-1} |Er \mu \balp - \ba_2|)^{-1}
\end{align}
is well defined on $\fR = \fR_P$.
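One way to carry out the interpolation behind \eqref{SstarBound} explicitly is as follows; the weight $t$ below is introduced only for this sketch. Write $A = q+P^d |q \balp - \ba|$ and $B = Er+P^{d-1} |Er \mu \balp - \ba_2|$, so that $F(\balp) = (AB)^{-1}$. Since $1 \le E \le C_\bf$, the second of the two displayed bounds gives $S^*(\balp) \ll P^n B^{\varepsilon - (Nd)^{-1}}$, and so, for any $t \in (0,1)$,
\[ S^*(\balp) = (S^*(\balp))^{1-t} (S^*(\balp))^{t} \ll P^n A^{(1-t)(\varepsilon - \kappa)} B^{t(\varepsilon - (Nd)^{-1})}. \]
Taking $t = 2Nd\varepsilon$, say, we have $t((Nd)^{-1} - \varepsilon) \ge \varepsilon$, while \eqref{kapBound} ensures that $(1-t)(\kappa - \varepsilon) \ge R+1+2\varepsilon$ once $\varepsilon$ is small enough; as $A, B \ge 1$, this yields
\[ S^*(\balp) \ll P^n A^{-R-1-2\varepsilon} B^{-\varepsilon} = P^n A^{-R-1-\varepsilon} F(\balp)^{\varepsilon}, \]
which is \eqref{SstarBound}.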
\section{Lemmas of Freeman type} \label{bgf}

The saving of $(q+P^d|q \balp - \ba|)^{R+1+\varepsilon}$ in \eqref{SstarBound} suffices to obtain an upper bound for
\[ \int_{\fN^*} S^*(\balp) e(-\balp \cdot \btau) \bK_{\pm}(\balp) \,\mathrm{d} \balp \]
of the correct order of magnitude. On Davenport--Heilbronn minor arcs, however, we shall need to save slightly more. Using the methods of Bentkus, G\"otze and Freeman, as exposited in \cite[Lemmas 2.2 and 2.3]{Woo2003}, we will show that $F(\balp) = o(1)$ in the case that $|\balp|$ is of `intermediate' size, where $F(\balp)$ is as in \eqref{fdef}. The set on which we are able to prove this estimate will define our Davenport--Heilbronn minor arcs. The success of our endeavour depends crucially on the irrationality of $\mu$.

For the argument to work, we need to essentially replace $F$ by a function defined on all of $\bR^R$. For $\balp \in \bR^R$, let $\cF(\balp; P)$ be the supremum of the quantity
\[ (q+P^d |q \balp - \ba|)^{-1} (s + P^{d-1} |s \mu \balp - \bb|)^{-1} \]
over $q,s \in \bN$ and $\ba, \bb \in \bZ^R$ satisfying $q \le C_\bf s$. It follows from Lemma \ref{identities} and the bound $D \le C_\bf$ that
\begin{equation} \label{Fbound}
F(\balp; P) \le \cF(\balp; P) \qquad (\balp \in \fR_P).
\end{equation}

\begin{lemma} \label{Freeman1}
Let $V$ and $W$ be fixed real numbers such that $0 < V \le W$. Then
\begin{equation} \label{Freeman1eq}
\sup \{ \cF(\balp; P): V \le |\balp| \le W \} \to 0 \qquad (P \to \infty).
\end{equation}
\end{lemma}

\begin{proof}
Suppose for a contradiction that \eqref{Freeman1eq} is false. Then there exist $\psi > 0$ and
\[ (\balp^{(m)}, P_m, q_m, s_m, \ba^{(m)}, \bb^{(m)}) \in \bR^R \times [1, \infty) \times \bN^2 \times (\bZ^R)^2 \quad (m \in \bN) \]
such that (i) the sequence $(P_m)$ increases monotonically to infinity, (ii)
\[ V \le |\balp^{(m)}| \le W \qquad (m \in \bN) \]
and (iii) if $m \in \bN$ then
\begin{equation} \label{Fre1key}
(q_m+P_m^d|q_m \balp^{(m)} - \ba^{(m)}|) \cdot (s_m +P_m^{d-1} |s_m \mu \balp^{(m)} - \bb^{(m)}|) < \psi^{-1}.
\end{equation}
Now $q_m, s_m < \psi^{-1} \ll 1$, so $|\ba^{(m)}|, |\bb^{(m)}| \ll 1$. In particular, there are only finitely many possible choices for the tuple $(q_m, s_m, \ba^{(m)}, \bb^{(m)})$, so this tuple must take a particular value infinitely often, say $(q,s,\ba,\bb)$. Note that $\ba \ne \bzero$, for if $m$ is large then \eqref{Fre1key} and the condition $|\balp^{(m)}| \ge V$ ensure that $\ba^{(m)} \ne \bzero$. Let $k \in \{1,2,\ldots,R\}$ be such that $a_k \ne 0$. From \eqref{Fre1key} we have
\[ \alpha^{(m)}_k - q_m^{-1} a^{(m)}_k \ll P_m^{-d}, \qquad \mu \alpha^{(m)}_k - s_m^{-1} b^{(m)}_k \ll P_m^{1-d}. \]
Hence
\[ \mu q_m^{-1} a^{(m)}_k - s_m^{-1} b^{(m)}_k \ll P_m^{1-d} \to 0 \qquad (m \to\infty). \]
We conclude that
\[ \mu = \frac{qb_k}{sa_k}, \]
contradicting the irrationality of $\mu$. This contradiction establishes \eqref{Freeman1eq}.
\end{proof}

\begin{cor} \label{Freeman2}
There exists $T: [1,\infty) \to [1,\infty)$, increasing monotonically to infinity, such that
\begin{equation} \label{Tnote}
T(P) \le P^\delta
\end{equation}
and, for large $P$,
\begin{equation} \label{FreemanBound}
\sup \{ F(\balp; P) : \balp \in \fN^*_P, \: P^{\delta-d} \le |\balp| \le T(P) \} \le T(P)^{-1}.
\end{equation} \end{cor} \begin{proof} Recall \eqref{subset} and \eqref{Fbound}. We shall prove, \emph{a fortiori}, that \[ \sup \{ \cF(\balp; P) : P^{{{\,{\rm d}}elta}} {\,{\rm d}}ef\Del{{\Delta} - d} \elle |\balp| \elle T(P) \} \elle T(P)^{-1}. \] Lemma \ref{Freeman1} yields a sequence $(P_m)$ of positive real numbers such that if \[ 1/m \elle |\balp| \elle m \] then $\cF(\balp; P_m) \elle 1/m$. We may choose this sequence to be increasing, and such that if $m \in \bN$ then $P_m^{{\,{\rm d}}elta}} {\,{\rm d}}ef\Del{{\Delta} \ge m$. We define $T$ by $T(P) = 1$ ($1 \elle P < P_1$) and $T(P) = m$ ($P_m \elle P < P_{m+1}$). We note \eqref{Tnote}, and that $T$ increases monotonically to infinity. Now \[ \sup \{ \cF(\balp; P): T(P)^{-1} \elle |\balp| \elle T(P) \} \elle T(P)^{-1}, \] for if $P \ge P_m$ then $\cF(\balp; P) \elle \cF(\balp; P_m)$. It remains to show that if $P$ is large and \begin{equation} \ellabel{Tass} |\balp| < T(P)^{-1} < \cF(\balp; P) \end{equation} then $|\balp| < P^{{{\,{\rm d}}elta}} {\,{\rm d}}ef\Del{{\Delta} - d}$. Suppose $P$ is large and $\balp \in \bR^R$ satisfies \eqref{Tass}. Then \begin{equation} \ellabel{Tass2} (q+P^d |q \balp - \ba|) \cdot (s+P^{d-1} |s \mu \balp - \bb|) < T(P) \end{equation} for some $q,s \in \bN$ and some $\ba, \bb \in \bZ^R$ satisfying $q \elle C_\bf s$. We must therefore have $q + P^d|q\balp - \ba| < T(P)^{1/2}$ or $s + P^{d-1}|s \mu \balp - \bb| < T(P)^{1/2}$. \textbf{Case: $q + P^d|q\balp - \ba| < T(P)^{1/2}$.} Now $q < T(P)^{1/2}$ and \[ |q\balp - \ba| < P^{-d} T(P)^{1/2}. \] Combining these with \eqref{Tnote}, \eqref{Tass} and the triangle inequality yields \[ |\ba| < T(P)^{-1/2} + P^{-d} T(P)^{1/2} \to 0 \qquad (P \to \infty). \] Hence $\ba = \bzero$, so \[ |\balp| < P^{-d} T(P)^{1/2} \elle P^{{{\,{\rm d}}elta}} {\,{\rm d}}ef\Del{{\Delta}-d}, \] as desired. \textbf{Case: $s + P^{d-1}|s \mu \balp - \bb| < T(P)^{1/2}$.} In this case $s < T(P)^{1/2}$ and \[ |s \mu \balp - \bb| < P^{1-d}T(P)^{1/2}. \] By \eqref{Tnote}, \eqref{Tass} and the triangle inequality, we now have \[ |\bb| \elll T(P)^{-1/2} + P^{1-d}T(P)^{1/2} \to 0 \qquad (P \to \infty), \] so $\bb = \bzero$. Thus \[ |q\balp| \elle C_\bf s |\balp| \elll |s\mu \balp| \elll P^{1-d}T(P)^{1/2}. \] Combining this with \eqref{Tnote}, \eqref{Tass2} and the triangle inequality yields \[ |\ba| \elll P^{1-d}T(P)^{1/2} + P^{-d}T(P) \to 0 \qquad (P \to \infty), \] so $\ba= \bzero$. Substituting this into \eqref{Tass2} and using \eqref{Tnote} gives \[ |\balp| \elle |q\balp| < P^{-d}T(P) \elle P^{{{\,{\rm d}}elta}} {\,{\rm d}}ef\Del{{\Delta} -d}, \] completing the proof. \end{proof} \section{The Davenport--Heilbronn method} \ellabel{dh} In this section we finish the proof of the asymptotic formula \eqref{asymp}. Recall that it remains to prove \eqref{goal2}. By \eqref{IntCompare}, it now suffices to show that \begin{equation} \ellabel{goal3} \int_{\fN^*} S^*(\balp) e(-\balp \cdot \btau) \bK_{\pm}(\balp) {\,{\rm d}} \balp = (2\eta)^R c P^{n-Rd} + o(P^{n-Rd}) \end{equation} as $P \to \infty$, where $c$ is given by \eqref{cdef}. With $T(P)$ as in Corollary \ref{Freeman2}, we define our Davenport--Heilbronn major arc by \[ \fM_1 = \{ \balp \in \bR^R: |\balp| < P^{{{\,{\rm d}}elta}} {\,{\rm d}}ef\Del{{\Delta}-d} \}, \] our minor arcs by \[ \fm = \{ \balp \in \bR^R: P^{{{\,{\rm d}}elta}} {\,{\rm d}}ef\Del{{\Delta}-d} \elle |\balp| \elle T(P) \} \] and our trivial arcs by \[ \ft = \{ \balp \in \bR^R: |\balp| > T(P) \}. 
\]
Recall that to any $\balp \in \fN^*$ we have uniquely assigned $q \in \bN$ and $\ba \in \bZ^R$ via \eqref{qbound} and \eqref{qerror}. For any unit hypercube $\fU$ in $R$ dimensions, we have
\[ \int_{\fN^* \cap \fU} P^n (q+P^d|q\balp - \ba|)^{-R-1-\varepsilon} \,\mathrm{d} \balp \ll P^n X_1 Y_1, \]
where
\[ X_1 = \sum_{q \in \bN} q^{-1-\varepsilon} \ll 1 \]
and
\[ Y_1 = \int_{\bR^R} (1+P^d|\bbet|)^{- R - 1} \,\mathrm{d} \bbet \le \Bigl( \int_\bR (1+P^d |\beta|)^{-1-1/R} \,\mathrm{d} \beta \Bigr)^R \ll P^{-Rd}. \]
Hence
\begin{equation} \label{crux}
\int_{\fN^* \cap \fU} P^n (q+P^d|q\balp - \ba|)^{-R-1-\varepsilon} \,\mathrm{d} \balp \ll P^{n-Rd}.
\end{equation}
Combining this with \eqref{SstarBound} and \eqref{FreemanBound} gives
\[ \int_{\fN^* \cap \fm \cap \fU} |S^*(\balp)| \,\mathrm{d} \balp \ll \sup_{\balp \in \fN^* \cap \fm} F(\balp)^\varepsilon \cdot P^{n-Rd} \ll T(P)^{-\varepsilon} P^{n-Rd}. \]
In view of \eqref{Ldef}, \eqref{Kbounds} and \eqref{Kprod}, we now have
\begin{equation} \label{minor}
\int_{\fN^* \cap \fm} |S^*(\balp) \bK_\pm(\balp)| \,\mathrm{d} \balp \ll L(P)^R T(P)^{-\varepsilon} P^{n-Rd} = o(P^{n-Rd}).
\end{equation}
Note that
\begin{equation} \label{Fnote}
0 < F(\balp) \le 1.
\end{equation}
Together with \eqref{Ldef}, \eqref{Kbounds}, \eqref{Kprod}, \eqref{SstarBound} and \eqref{crux}, this gives
\begin{align} \notag
\int_{\fN^* \cap \ft} |S^*(\balp) \bK_\pm(\balp)| \,\mathrm{d} \balp &\ll P^{n-Rd} L(P)^R \sum_{m=0}^\infty (T(P)+m)^{-2} \\
\label{trivial} &\ll L(P)^R T(P)^{-1} P^{n-Rd} = o(P^{n-Rd}).
\end{align}
Recalling \eqref{NstarDef}, we claim that
\begin{equation} \label{claim}
\fN^* \cap \fM_1 = \{ \balp \in \bR^R: 2|\balp| \le P^{R(d-1)\theta_0 - d}, \: |S(\balp)| > P^{n - R(R+1)d\theta_0} \}.
\end{equation}
It is clear from \eqref{delDef} that if $2|\balp| \le P^{R(d-1)\theta_0 - d}$ and $|S(\balp)| > P^{n - R(R+1)d\theta_0}$ then $\balp \in \fN^* \cap \fM_1$. Conversely, let $\balp \in \fN^* \cap \fM_1$. Then $|S(\balp)| > P^{n - R(R+1)d\theta_0}$. Further, as $\balp \in \fN$ we have $2|\balp - q^{-1} \ba| \le P^{R(d-1)\theta_0 - d}$ for some $q \in \bN$ and $\ba \in \bZ^R$ satisfying $q \le P^{R(d-1)\theta_0}$. Since $\balp \in \fM_1$, the triangle inequality now gives
\[ |q^{-1}\ba| < P^{\delta - d} + P^{R(d-1)\theta_0-d} < q^{-1}, \]
so $\ba = \bzero$. Hence $2|\balp| \le P^{R(d-1)\theta_0 - d}$, and we have verified \eqref{claim}.
Put \[ \fM = \{ \balp \in \bR^R: 2|\balp| \le P^{R(d-1)\theta_0 - d} \} \] and \[ \fM_2 = \{ \balp \in \bR^R: 2|\balp| \le P^{R(d-1)\theta_0 - d}, \: |S(\balp)| \le P^{n - R(R+1) d\theta_0} \}. \] From \eqref{claim}, we see that $\fM$ is the disjoint union of $\fM_2$ and $\fN^* \cap \fM_1$. \begin{lemma} We have \begin{equation} \label{MajorSubset} \fM \subseteq \fR(1,1,1,1,\bzero,\bzero,\bzero,\bzero) \subseteq \fR. \end{equation} \end{lemma} \begin{proof} Let $\balp \in \fM$, and recall \eqref{suppress}. With $\cX = (1,1,1,1,\bzero,\bzero,\bzero,\bzero)$, the conditions \eqref{bak1cor}, \eqref{qbound}, \eqref{qerror}, \eqref{DE}, and \eqref{further} are plainly met, while the bound \eqref{bak2cor} follows from \eqref{omegadef} and \eqref{delDef}. It therefore remains to show that \begin{equation} \label{finalmajor0} |\balp| \ll \max_{|\bj|_1 = j} |\omega_\bj| \qquad (j=d,d-1). \end{equation} Recall that $d, d-1 \in \cS$. Lemma \ref{SpecLemma} reveals that there exist nonzero integers $D'$ and $E'$, bounded in terms of $\bf$, as well as $\ba'_1, \ba'_2 \in \bZ^R$, satisfying \begin{equation} \label{finalmajor1} |D' \balp - \ba'_1| \ll \max_{|\bj|_1 = d} |\omega_\bj|, \qquad |E' \mu \balp - \ba'_2| \ll \max_{|\bj|_1 = d-1} |\omega_\bj|. \end{equation} By \eqref{bak2cor}, we now have \[ |D' \balp - \ba'_1|, |E' \mu \balp - \ba'_2| \ll P^{\delta - 1}. \] Since $\balp \in \fM$, the triangle inequality now gives $|\ba'_1|, |\ba'_2| < 1$, so $\ba'_1 = \ba'_2 = \bzero$. Substituting this information into \eqref{finalmajor1} confirms \eqref{finalmajor0}. \end{proof} Now \eqref{Kbounds}, \eqref{Kprod} and \eqref{SstarCompare} yield \[ \int_{\fM_2} |S^*(\balp)\bK_{\pm}(\balp)| {\,{\rm d}} \balp \ll P^{R^2(d-1)\theta_0 - Rd}P^{n- R(R+1)d \theta_0} = o(P^{n- Rd}), \] so \begin{align} \notag \int_{\fN^* \cap \fM_1} S^*(\balp) e(-\balp \cdot \btau) \bK_\pm(\balp) {\,{\rm d}} \balp &= \int_{\fM} S^*(\balp) e(-\balp \cdot \btau) \bK_\pm(\balp) {\,{\rm d}} \balp \\ \label{MajorCompare} & \qquad + o(P^{n-Rd}). \end{align} By \eqref{Kdef}, we have \[ K_{\pm}(\alpha) = (2 \eta \pm \rho) \cdot \sinc(\pi \alpha \rho) \cdot \sinc(\pi \alpha(2\eta \pm \rho)) \] for $\alpha \in \bR$. Now \eqref{Tbound}, \eqref{Ldef} and the Taylor expansion of $\sinc(\cdot)$ yield \[ K_{\pm}(\alpha) = 2 \eta + O(L(P)^{-1}) \qquad (|\alpha| < P^{-1}). \] Substituting this into \eqref{Kprod} gives \begin{equation} \label{SincTaylor} \bK_\pm(\balp) = (2\eta)^R + O(L(P)^{-1}) \qquad (\balp \in \fM).
\end{equation} By \eqref{SstarBound}, \eqref{Fnote} and \eqref{MajorSubset}, we also have \begin{equation} \label{UpperBound} \int_\fM |S^*(\balp)| {\,{\rm d}} \balp \ll P^n \int_{\bR^R} (1+ P^d |\balp|)^{-R-1} {\,{\rm d}} \balp \ll P^{n-Rd}. \end{equation} From \eqref{SincTaylor} and \eqref{UpperBound}, we infer that \[ \int_\fM S^*(\balp) e(-\balp \cdot \btau) \bK_\pm(\balp) {\,{\rm d}} \balp = (2\eta)^R \int_{\fM} S^*(\balp) e(-\balp \cdot \btau) {\,{\rm d}} \balp + o(P^{n-Rd}). \] Combining this with \eqref{minor}, \eqref{trivial} and \eqref{MajorCompare} yields \begin{align} \notag \int_{\fN^*} S^*(\balp) e(-\balp \cdot \btau) \bK_\pm(\balp) {\,{\rm d}} \balp &= (2\eta)^R \int_\fM S^*(\balp) e(-\balp \cdot \btau) {\,{\rm d}} \balp \\ \label{RemovedK} & \qquad + o(P^{n-Rd}). \end{align} Let $\balp \in \fM$. Recall \eqref{Idef} and \eqref{SraDef}. By \eqref{SstarDef} and \eqref{MajorSubset}, we have \[ S^*(\balp) = P^n e(\balp \cdot \bf( \bmu)) \int_{[-1,1]^n} e\Bigl (\bgam \cdot \bf(\bt) + \sum_{1 \le |\bj|_1 \le d-1} \gamma_\bj \bt^\bj \Bigr) {\,{\rm d}} \bt, \] with \eqref{omegadef} and \eqref{WeirdGam}. Using \eqref{WeirdGam} and the change of variables $\by = P \bt$ gives \[ S^*(\balp) = \int_{[-P,P]^n} e \Bigl( \balp \cdot \bf(\by) + \balp \cdot \bf(\bmu) + \sum_{1 \le |\bj|_1 \le d-1} \omega_\bj \by^\bj \Bigr) {\,{\rm d}} \by. \] By \eqref{Taylor} and \eqref{omegadef}, we now have \[ S^*(\balp) = \int_{[-P,P]^n} e(\balp \cdot \bf(\by + \bmu)) {\,{\rm d}} \by = S_1(\balp) + O(P^{n-1}), \] where \[ S_1(\balp) = \int_{[-P,P]^n} e(\balp \cdot \bf(\bx)) {\,{\rm d}} \bx. \] Hence \begin{align} \notag \int_\fM S^*(\balp) e(-\balp \cdot \btau) {\,{\rm d}} \balp - \int_\fM S_1(\balp) e(-\balp \cdot \btau) {\,{\rm d}} \balp &\ll P^{n-1 + R^2(d-1)\theta_0 - Rd} \\ \label{S1compare} &= o(P^{n-Rd}). \end{align} Note that $S_1(\balp) = P^n I(P^d \balp, \bzero)$. In light of \eqref{kapBound}, the bound \eqref{Ibound} now yields \[ S_1(\balp) \ll P^n (1+P^d|\balp|)^{- R - 1} \ll P^n \prod_{k \le R} (1+P^d |\alpha_k|)^{-1-1/R}, \] so \[ \int_{\bR^R \setminus \fM} |S_1(\balp)| {\,{\rm d}} \balp \ll P^{n - (R-1)d} \int_{P^{R\varepsilon-d}}^\infty (1+P^d \alpha)^{-1-1/R} {\,{\rm d}} \alpha \ll P^{n- Rd-\varepsilon}. \] In particular \begin{equation} \label{S1rest} \int_\fM S_1(\balp) e(-\balp \cdot \btau) {\,{\rm d}} \balp = \int_{\bR^R} S_1(\balp) e(-\balp \cdot \btau) {\,{\rm d}} \balp + o(P^{n-Rd}). \end{equation} To apply \cite[Lemma 5.3]{Bir1962} directly, we need to work with a box of side length less than 1. Changing variables with $\bx = 3P \bu$ and $\bz = (3P)^d \balp$ shows that \begin{align*} \int_{\bR^R} S_1(\balp) e(- \balp \cdot \btau) {\,{\rm d}} \balp &= \int_{\bR^R} \int_{[-P,P]^n} e(\balp \cdot \bf(\bx)) e(-\balp \cdot \btau) {\,{\rm d}} \bx {\,{\rm d}} \balp \\ &= (3P)^{n - Rd} \int_{\bR^R} \cI(\bz) e( - (3P)^{-d} \btau \cdot \bz){\,{\rm d}} \bz, \end{align*} where \[ \cI(\bz) = \int_{[-1/3,1/3]^n} e(\bz \cdot \bf(\bu)) {\,{\rm d}} \bu.
\] Now \cite[Lemma 5.3]{Bir1962} gives \[ \int_{\bR^R} S_1(\balp) e(- \balp \cdot \btau) {\,{\rm d}} \balp = (3P)^{n - Rd} \Bigl( \int_{\bR^R} \cI(\bz) {\,{\rm d}} \bz + o(1) \Bigr) \] as $P \to \infty$. Moreover, changing variables yields \[ \int_{\bR^R} \cI(\bz) {\,{\rm d}} \bz = \int_{\bR^R} \int_{[-1/3,1/3]^n} e(\bz \cdot \bf(\bu)) {\,{\rm d}} \bu {\,{\rm d}} \bz = 3^{Rd-n} c, \] where we recall \eqref{cdef}. Hence \begin{equation} \label{S1eval} \int_{\bR^R} S_1(\balp) e(- \balp \cdot \btau) {\,{\rm d}} \balp = cP^{n-Rd} + o(P^{n-Rd}). \end{equation} Combining \eqref{S1compare}, \eqref{S1rest} and \eqref{S1eval} gives \[ \int_\fM S^*(\balp) e(-\balp \cdot \btau) {\,{\rm d}} \balp = cP^{n-Rd} + o(P^{n-Rd}). \] Substituting this into \eqref{RemovedK} yields \eqref{goal3}, confirming the desired asymptotic formula \eqref{asymp}.
\section{The singular integral} \label{SingularIntegral}
Schmidt \cite[\S 3]{Sch1985} gives the following geometric definition of the real density $c$. For $L > 0$ and $\xi \in \bR$, let \[ \lambda_L(\xi) = L \cdot \max(0,1 - L|\xi|). \] For $\bxi \in \bR^R$, put \[ \Lambda_L(\bxi) = \prod_{k \le R} \lambda_L(\xi_k). \] Set \[ I_L(\bf) = \int_{[-1,1]^n} \Lambda_L(\bf(\bt)) {\,{\rm d}} \bt, \] and define \begin{equation} \label{cSchmidt} c = \lim_{L \to \infty} I_L(\bf) \end{equation} whenever the limit exists. Schmidt explains in \cite[\S 11]{Sch1982} and \cite[\S 3]{Sch1985} that the limit does exist, and that this definition is equivalent to Birch's analytic definition \eqref{cdef}. The expression on the right hand side of \eqref{cdef} arose naturally in our proof of \eqref{asymp}. It is well defined, by \cite[Lemma 5.3]{Bir1962} and a change of variables (here Birch uses a box of side length less than 1). One can verify the final statement of Theorem \ref{MainThm} from \eqref{cSchmidt} by mimicking \cite[\S 4]{Sch1982}; one uses the implicit function theorem to construct a region of measure $\gg L^{-R}$ on which $|\bf(\bt)| < (2L)^{-1}$. Birch instead invokes the Fourier integral theorem to show from \eqref{cdef} that $c > 0$ whenever $\bf = \bzero$ has a nonsingular real solution (see \cite[\S6]{Bir1962}). This discussion concludes the proof of Theorem \ref{MainThm}.
\section{An alternative approach} \label{SchmidtApproach}
In this section we establish Theorem \ref{TheoremTwo}. The crux is a suitable analogue of Lemma \ref{Birch43}, and we shall deduce such an analogue from the work of Schmidt \cite{Sch1985}. Let $g$ be as defined in \cite[\S 10]{Sch1985}, and put \begin{equation*} \kappa' = \frac g{R(d-1)2^{d-1}}. \end{equation*} The quantity $\kappa'$ shall play the r\^ole played by $\kappa$ in the proof of Theorem \ref{MainThm}. We note at once that coupling \eqref{hhyp} with the corollary to \cite[Proposition III]{Sch1985} yields \[ \kappa' > R+1, \] in analogy with \eqref{kapBound}. We begin with an analogue of \cite[Lemma 2.5]{Bir1962}. \begin{lemma} \label{Bir25} Let $0 < \theta \le 1$ and $k > 0$. Then at least one of the following holds. \begin{enumerate}[(i)] \item We have \[ g(\balp, \bome_\diam) \ll P^{n-k}. \] \item There exist integers $q, a_1, \ldots, a_R$ satisfying \eqref{qa} and \eqref{FirstApprox}.
\item We have \[ g \le 2^{d-1}k/\theta. \] \end{enumerate} The same is true if we replace $g(\balp, \bome_\diam)$ by \[ \sum_{1 \le x_1, \ldots, x_n \le P} e\Bigl( \balp \cdot \bf(\bx) + \sum_{1 \le |\bj|_1 \le d-1} \omega_\bj \bx^\bj \Bigr). \] \end{lemma} \begin{proof} We may imitate Birch's proof of \cite[Lemma 2.5]{Bir1962}. As in \S \ref{BirchType}, the lower order terms have no bearing on the proof. Our second assertion follows in the same way as our first. \end{proof} This implies the following analogue of Lemma \ref{Birch43}. \begin{lemma} \label{Birch43alt} Let $0 < \theta \le 1$. Suppose \begin{equation} \label{gbigalt} |g(\balp, \bome_\diam)| > P^{n- R(d-1)\kappa' \theta + \varepsilon}. \end{equation} Then there exist integers $q, a_1, \ldots, a_R$ satisfying \eqref{qa} and \eqref{FirstApprox}. In particular, if $|S(\balp)| > P^{n- R(d-1)\kappa' \theta + \varepsilon}$ then there exist $q \in \bN$ and $\ba \in \bZ^R$ satisfying \eqref{qa} and \eqref{FirstApprox}. We may replace $g(\balp, \bome_\diam)$ by \[ \sum_{1 \le x_1, \ldots, x_n \le P} e\Bigl( \balp \cdot \bf(\bx) + \sum_{1 \le |\bj|_1 \le d-1} \omega_\bj \bx^\bj \Bigr), \] and the same conclusions hold. \end{lemma} \begin{proof} Choosing $k = R(d-1) \kappa' \theta - \varepsilon$ in Lemma \ref{Bir25} ensures that (iii) is impossible, reducing us to two possibilities. We have removed the implied constant from \eqref{gbigalt} by redefining $\varepsilon$ and recalling that $P$ is large. Our second claim follows from our first, by \eqref{Sg}. \end{proof} Using Lemma \ref{Birch43alt} instead of Lemma \ref{Birch43}, we can then follow the proof of Theorem \ref{MainThm}, with minimal changes. Corollary \ref{Birch43cor} follows with $\kappa'$ in place of $\kappa$. Similarly, Lemmas \ref{IboundLemma} and \ref{Sra} follow in the same way, but with $\kappa'$ in place of $\kappa$. Finally, it is important to note that we still have $d, d-1 \in \cS$. As explained in the introduction, our assumption that the $(1,\ldots,1) \cdot \nabla f_k$ are linearly independent implies that $d-1 \in \cS$. This assumption also implies that $d \in \cS$, in view of \eqref{top}. This completes the proof of Theorem \ref{TheoremTwo}. The quantity $\Phi(d)$ dominates the quantity $\varphi(d)$ in \cite[Proposition III$_C$]{Sch1985}. If we read \cite{Sch1985} more closely, we find that we can replace $\Phi(d)$ by \[ \max(\eta_{d-2}, 2^{d-2} - 1), \] where $\eta_0 = 1$ and \[ \eta_m = \sum_{q=1}^m \sum_{\substack{u_1 + \ldots + u_m = q \\ u_i > 0}} \frac {m!} {u_1 ! \cdots u_q!} \qquad (m \in \bN). \] \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \end{document}
\begin{document} \title[On the dynamics of Lipschitz operators]{On the dynamics of Lipschitz operators} \author[A. Abbar]{Arafat Abbar} \author[C. Coine]{Cl\'ement Coine} \author[C. Petitjean]{Colin Petitjean} \address[A. Abbar]{LAMA, Univ Gustave Eiffel, UPEM, Univ Paris Est Creteil, CNRS, F--77447, Marne-la-Vall\'ee, France} \email{[email protected]} \address[C. Coine]{Normandie Univ, UNICAEN, CNRS, LMNO, 14000 Caen, France} \email{[email protected]} \address[C. Petitjean]{LAMA, Univ Gustave Eiffel, UPEM, Univ Paris Est Creteil, CNRS, F--77447, Marne-la-Vall\'ee, France} \email{[email protected]} \date{} \begin{abstract} By the linearization property of Lipschitz-free spaces, any Lipschitz map $f : M \to N$ between two pointed metric spaces may be extended uniquely to a bounded linear operator $\widehat{f} : \mathcal{F}(M) \to \mathcal{F}(N)$ between their corresponding Lipschitz-free spaces. In this note, we explore the connections between the dynamics of Lipschitz self-maps $f : M \to M$ and the linear dynamics of their extensions $\widehat{f} : \mathcal{F}(M) \to \mathcal{F}(M)$. This not only allows us to relate topological dynamical systems to linear dynamical systems but also provides a new class of hypercyclic operators acting on Lipschitz-free spaces. \end{abstract} \subjclass[2010]{Primary 47A16, 54H20; Secondary 46B20, 54E35} \keywords{Chaoticity, Cyclicity, Hypercyclicity, Lipschitz-free space, Supercyclicity, Transitivity, Weakly mixing.} \maketitle \section{Introduction} A \textit{topological dynamical system} is a pair $(M,f)$ where $M$ is a metric space and $f : M \to M$ is a continuous map. In topological dynamics, it is often assumed that $M$ is compact. A \textit{linear dynamical system} is a pair $(X,T)$ where $X$ is a Banach space (or, more generally, a Fr\'echet space) and $T$ is a bounded linear operator on $X$. We refer to \cite{GrPe} (and references therein) for an introduction to dynamical systems as well as for more details on the notions introduced below. In what follows, the pair $(M,f)$ stands for a topological dynamical system, while $(X,T)$ denotes a linear dynamical system. Let $\mathbb{N}$ denote the set of positive integers and let $\mathbb{N}_0=\mathbb{N} \cup \{0\}$. For any point $x$ in $M$, \textit{the orbit of $x$ under $f$} is defined by $$ \mathrm{Orb}(x,f):=\lbrace f^{n}(x):\, n\in\mathbb{N}_0 \rbrace. $$ We will say that \textit{$f$ is hypercyclic} if it has a dense orbit, that is, if there exists $x \in M$ such that $\mathrm{Orb}(x,f)$ is dense in $M$; such an $x$ will be called a \textit{hypercyclic element} for $f$. Next, we say that $f$ is \textit{topologically transitive} if, for each pair of nonempty open sets $U,V$ of $M$, there exists $n \in \mathbb{N}_0 $ such that $f^n(U) \cap V \neq \emptyset$. It is known that if $M$ has no isolated point then any hypercyclic map is also topologically transitive \cite[Proposition 1.15]{GrPe}. Conversely, if $M$ is a separable Baire space then a topologically transitive map is hypercyclic (see the remark after \cite[Theorem 1.2]{Survey}).
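As a simple illustration (a classical example, recorded here only for the reader's convenience and not needed in the sequel), consider an irrational rotation of the unit circle $\mathbb{T}=\{z\in\mathbb{C} : |z|=1\}$:
$$ f : z \in \mathbb{T} \longmapsto e^{2i\pi\theta} z \in \mathbb{T}, \qquad \theta \in \mathbb{R}\setminus\mathbb{Q}. $$
Every orbit $\mathrm{Orb}(z,f)=\lbrace e^{2i\pi n\theta} z :\, n\in\mathbb{N}_0 \rbrace$ is dense in $\mathbb{T}$, so $f$ is hypercyclic; since $\mathbb{T}$ has no isolated points, $f$ is also topologically transitive.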
We will also consider the following stronger notions: \begin{itemize}[leftmargin=*] \item $f$ is \textit{(topologically) mixing} if for each pair of nonempty open sets $U,V$ of $M$ there exists $N \in \mathbb{N}_0$ such that for every $n \geq N$, $f^n(U) \cap V \neq \emptyset$; \item $f$ is \textit{(topologically) weakly mixing} if $f\times f$ is topologically transitive on $M\times M$; \item $f$ is \textit{Devaney chaotic} if it is topologically transitive and its set of periodic points is dense in $M$. We recall that $x$ is a periodic point of $f$ if there exists $n \in \mathbb{N}$ such that $f^n(x)=x$, and we will denote by $\mathrm{Per}(f)$ the set of all periodic points of $f$. \end{itemize} It is straightforward that $$\text{mixing } \implies \text{ weakly mixing } \implies \text{ topologically transitive.} $$ Moreover, for every bounded linear operator $T$ defined on a separable Banach space $X$ (see \cite{Survey, GrPe}): \[T \text{ is Devaney chaotic} \implies T \text{ is weakly mixing} \implies T \text{ is hypercyclic}.\] Then, we say that $T$ is \textit{supercyclic} whenever there exists a vector $x\in X$ whose projective orbit, i.e. the set $$ \mathrm{Orb}(\mathbb{K}\,x,T):=\lbrace \lambda T^nx:\, \lambda\in\mathbb{K},\, n\in\mathbb{N}_0 \rbrace, $$ is dense in $X$. Such a vector $x$ is called a \textit{supercyclic vector} for $T$. Finally, recall that $T$ is \textit{cyclic} if there exists a vector $x\in X$, called a \textit{cyclic vector} for $T$, such that the linear span of the orbit of $x$ under $T$ is dense in $X$. Clearly, the following chain of implications holds: \[\text{Hypercyclicity }\Rightarrow \text{ Supercyclicity }\Rightarrow \text{Cyclicity}.\] One of the main objectives of this paper is to relate topological dynamical systems to linear dynamical systems. Such a connection has already been explored, for instance in \cite[Corollary 2.9]{Feldman}, where a universal linear operator $T :X \to X$ is constructed in such a way that, for any compact metric space $M$ and any continuous map $f : M \to M$, there is an invariant compact set $K \subset X$ such that $T\mathord{\upharpoonright}_K$ is topologically conjugate to $f$. In our work, we consider a different point of view since we relate topological dynamical systems to linear dynamical systems by taking advantage of the fundamental linearization property of Lipschitz-free spaces. Let us briefly introduce the latter class of Banach spaces along with the mentioned linearization property; a more detailed overview will be made in Subsection~\ref{Subsection-Lipfree}. Let $(M,d)$ be a metric space equipped with a distinguished point denoted by $0 \in M$. Following \cite{GoKa_2003}, the Lipschitz-free space over $M$, denoted by $\mathcal{F}(M)$, is the canonical predual of the real Banach space $\Lip_0(M)$ of Lipschitz maps from $M$ to $\mathbb{R}$, vanishing at $0$, and equipped with the norm given by the best Lipschitz constant $\mathrm{Lip}(f)$ of $f$: $$\displaystyle \mathrm{Lip}(f) := \sup_{x \neq y \in M} \frac{|f(x)-f(y)|}{d(x,y)}.$$ More precisely, $$\mathcal{F}(M) := \overline{ \mbox{span}}^{\| \cdot \|}\left \{ \delta(x) \, : \, x \in M \right \} \subset \Lip_0(M)^*,$$ where $\delta(x)$ is the evaluation functional defined by $\langle f,\delta(x)\rangle = f(x)$ for any $f\in \Lip_0(M)$. It is readily seen that $\delta : x \mapsto \delta(x) \in \mathcal{F}(M)$ is an isometry.
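For instance, given $x \neq y \in M$, the function $u \in M \mapsto d(u,y)-d(0,y)$ belongs to $\Lip_0(M)$ and has Lipschitz constant $1$; testing $\delta(x)-\delta(y)$ against it provides the lower bound in the standard computation
$$ \|\delta(x)-\delta(y)\| \; = \; \sup\big\{ |f(x)-f(y)| \, : \, f\in \Lip_0(M), \ \mathrm{Lip}(f)\leq 1 \big\} \; = \; d(x,y), $$
the upper bound being immediate from the definition of $\mathrm{Lip}(f)$.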
We wish to point out that the class of Lipschitz-free spaces is a powerful tool which has been used in various fields of Mathematics for proving deep results (e.g. \cite{GoKa_2003}), simplifying some proofs (e.g. \cite{RosendalTalk}) and constructing counterexamples (e.g. \cite{AlbiacKalton}). The following linearization property of Lipschitz-free spaces is the cornerstone of our study. \begin{proposition} \label{diagramfree} Let $M$ and $N$ be two pointed metric spaces. Let $f \colon M \to N$ be a Lipschitz map such that $f(0_M) = 0_N$. Then, there exists a unique bounded linear operator $\widehat{f} \colon \mathcal{F}(M) \to \mathcal{F}(N)$ with $\|\widehat{f}\|=\mathrm{Lip}(f)$ and such that the following diagram commutes: $$\xymatrix{ M \ar[r]^f \ar[d]_{\delta_{M}} & N \ar[d]^{\delta_{N}} \\ \mathcal{F}(M) \ar[r]_{\widehat{f}} & \mathcal{F}(N) }$$ \end{proposition} In this paper, by \textit{Lipschitz operator} we mean any bounded linear operator $\widehat{f} : \mathcal{F}(M) \to \mathcal{F}(N)$ as defined in the previous proposition. A very natural and intriguing question is whether linear properties of $\widehat{f}$ can be characterised by properties of $f$, or vice versa. For instance, compact operators have been considered in \cite{vargas2, vargas}. In this note, we choose to focus on the dynamical properties introduced above. More precisely, we are interested in the following general questions: \begin{question}\label{Q1} Assume that $f: M \to M$ has a given dynamical property, what can be said about $\widehat{f} : \mathcal{F}(M) \to \mathcal{F}(M)$? \end{question} Or conversely: \begin{question} \label{Q2} Assume that $\widehat{f} : \mathcal{F}(M) \to \mathcal{F}(M)$ satisfies a given dynamical property, what can be said about $f: M \to M$? \end{question} Furthermore, another important motivation for exploring these questions is to provide a new class of hypercyclic linear operators. As we shall explain later, for some metric spaces there is a good description of the associated Lipschitz-free spaces as $L^1(\mu)$ spaces. This allows us for instance to recover some well-known examples, such as backward or forward shift operators, but also to give some new hypercyclic operators acting on $L^1(\mu)$ spaces, therefore providing a different angle on the study of the linear dynamics on $L^1(\mu)$. Of course, even when Lipschitz-free spaces are not isomorphic to $L^1(\mu)$ spaces, they give rise to interesting examples of Banach spaces and therefore possibly interesting examples of hypercyclic linear operators. To the best of our knowledge, these directions are rather new and not much explored. With respect to Question~\ref{Q1}, some answers are given by M. Murillo-Arcila and A. Peris in \cite[Theorem~2.3]{MuPe}. Indeed, they prove that if $T:X \to X$ is a bounded operator and $K\subset X$ is an invariant set for $T$ such that $0\in K$ and $T \mathord{\upharpoonright}_K$ is weakly mixing (mixing, weakly mixing and chaotic, respectively), then $T \mathord{\upharpoonright}_{\overline{\lspan} K}$ is also weakly mixing (mixing, weakly mixing and chaotic, respectively). Since by the very definition of Lipschitz-free spaces we have $\overline{\lspan} \; \delta(M) = \mathcal{F}(M)$, as a direct consequence they could obtain that if a Lipschitz self-map $f :M \to M$ is weakly mixing (mixing, weakly mixing and chaotic, respectively) then so is $\widehat{f}$ (see Example~2.4~(3) in \cite{MuPe}). As we shall explain later, the reverse implications are not true in general.
In fact, we define in Example~\ref{Example2} a Lipschitz self-map $f : [0,1] \to [0,1]$ such that $\widehat{f}$ is mixing and Devaney chaotic while $f$ is not even topologically transitive. Let us now describe the content of the paper. In what follows, unless otherwise specified, $f$ will stand for a base-point preserving Lipschitz mapping $f : M \to M$ and $\widehat{f}$ for its linearization obtained by Proposition~\ref{diagramfree}. We will first introduce below the notation as well as the main tools related to Lipschitz-free spaces which we will use throughout the paper. Next, we shall start our study in Section~\ref{Section-FO} by giving some properties which are preserved by the functor $f \mapsto \widehat{f}$. For instance, it is easy to see that a Lipschitz map $f : M \to N$ has a dense range if and only if $\widehat{f} : \mathcal{F}(M) \to \mathcal{F}(N)$ also has a dense range (Proposition~\ref{denserange}). Similarly, it is readily seen that if the set of periodic points of $f$ is dense in $M$, then the set of periodic points of $\widehat{f}$ is dense in $\mathcal F(M)$ (the converse being false; see Proposition~\ref{PropPeriodic} and Example~\ref{ExamplePeriodic}). Another observation is that a point $x \in M$ is a hypercyclic element for $f$ if and only if $\delta(x)$ is a cyclic vector for $\widehat{f}$ (Proposition~\ref{PropHCtoC}). In fact, if $\gamma$ is a hypercyclic vector for $\widehat{f}$ then $\gamma$ must be infinitely supported (Proposition~\ref{Prop-support}). Then, it is well-known that a bounded linear operator is weakly mixing if and only if it satisfies the ``Hypercyclicity Criterion'' (shortened HC, see Section~\ref{section_criteria} for more details). So one can use the connection between $f$ and $\widehat{f}$ (that is, the linearization property) to transfer the conditions on $\widehat{f}$ stated in the Hypercyclicity Criterion to metric conditions on $f$. Doing so, we obtain a criterion that we will call the ``\textit{Hypercyclicity Criterion for Lipschitz operators}'' (shortened HCL) which turns out to be very useful in a number of examples. Of course, if $\widehat{f}$ satisfies the HCL then $\widehat{f}$ satisfies the HC and therefore is hypercyclic (Theorem~\ref{HCriterion}). However, the converse is not true in general, as we will show in Example~\ref{ExampleHCnotHCL}. We also notice that if $M$ is a complete space without isolated points and if $f$ is weakly mixing, then $\widehat{f}$ satisfies the HCL (Theorem~\ref{Thm_WMimpliesHCL}). We summarize the above-mentioned general relations in the following diagram.
\begin{center} \begin{tikzpicture} \node (M) at (0,0) {$\widehat{f}$ mixing}; \node (WM) at (3,0) {$\widehat{f}$ weakly mixing}; \node (HC) at (7,0) {$\widehat{f}$ satisfies the HC}; \draw[-implies,double equal sign distance] (M) -- (WM); \draw[implies-implies,double equal sign distance] (WM) -- (HC); \node (fM) at (0,-1.5) {$f$ mixing}; \node (fWM) at (3,-1.5) {$f$ weakly mixing}; \node (fHC) at (7,-1.5) {$\widehat{f}$ satisfies the HCL}; \node at (-0.5,-0.8) {\cite{MuPe}}; \node at (2.5,-0.8) {\cite{MuPe}}; \node at (6.6,-0.8) {\ref{HCriterion}}; \node at (4.8,-1.2) {\ref{Thm_WMimpliesHCL}}; \draw[-implies,double equal sign distance] (HC) to [bend left] node [midway,right]{\;\ref{ExampleHCnotHCL}} (fHC); \node at (7.3,-0.8) {$\mathbin{\tikz [x=1.4ex,y=1.4ex,line width=.2ex, red] \draw (0,0) -- (1,1) (0,1) -- (1,0);}$}; \draw[-implies,double equal sign distance] (fM) -- (fWM); \draw[-implies,double equal sign distance] (fM) -- (M); \draw[-implies,double equal sign distance] (fWM) -- (WM); \draw[-implies,double equal sign distance] (fWM) -- (fHC); \draw[-implies,double equal sign distance] (fHC) -- (HC); \end{tikzpicture} \end{center} Since every linear operator satisfying the Hypercyclicity Criterion is hypercyclic, an obvious question is whether the HCL implies that $f$ is topologically transitive or has a dense orbit. Unfortunately, this is not the case (see Example~\ref{ExampleHCLnotTransitive} or Example~\ref{Example2} for instance) and we do not know how to characterise Lipschitz maps satisfying the HCL in dynamical terms. This is actually the point where the theory probably becomes less obvious since many natural and tempting implications fail. For instance, $f$ having a dense orbit does not necessarily imply that $\widehat{f}$ does so. In fact, $\widehat{f}$ might even not be supercyclic (see Example~\ref{ExamplefTTfhatNOTsuper}). 
\begin{center} \begin{tikzpicture} \node (HC) at (0,0) {$\widehat{f}$ satisfies the HC}; \node (H) at (4,0) {$\widehat{f}$ hypercyclic}; \node (S) at (7,0) {$\widehat{f}$ supercyclic}; \node (C) at (10,0) {$\widehat{f}$ cyclic}; \draw[-implies,double equal sign distance] (HC) -- (H); \draw[-implies,double equal sign distance] (H) -- (S); \draw[-implies,double equal sign distance] (S) -- (C); \node (fHC) at (0,-2) {$\widehat{f}$ satisfies the HCL}; \node (fTT) at (7,-2) {$f$ has a dense orbit}; \draw[-implies,double equal sign distance] (fHC) -- (HC); \draw[-implies,double equal sign distance] (fHC) -- (fTT); \node at (3.5,-1.7) {\ref{Example2}}; \node at (3.5,-2) {$\mathbin{\tikz [x=1.4ex,y=1.4ex,line width=.2ex, red] \draw (0,0) -- (1,1) (0,1) -- (1,0);}$}; \draw[-implies,double equal sign distance] (fTT) -- node [midway,left]{\ref{ExamplefTTfhatNOTsuper}\;} (S); \node at (7,-1) {$\mathbin{\tikz [x=1.4ex,y=1.4ex,line width=.2ex, red] \draw (0,0) -- (1,1) (0,1) -- (1,0);}$}; \draw[-implies,double equal sign distance] (fTT) -- node [midway,above]{\ref{PropHCtoC}\;\;\;\;\;} (C); \draw[-implies,double equal sign distance] (HC) -- (fTT); \node at (4,-0.9) {\ref{Example2}}; \node at (3.5,-1) {$\mathbin{\tikz [x=1.4ex,y=1.4ex,line width=.2ex, red] \draw (0,0) -- (1,1) (0,1) -- (1,0);}$}; \draw[-implies,double equal sign distance] (C) to [bend right] node [midway,above]{\ref{ExamplefTTfhatNOTsuper}} (S); \node at (8.5,0.6) {$\mathbin{\tikz [x=1.4ex,y=1.4ex,line width=.2ex, red] \draw (0,0) -- (1,1) (0,1) -- (1,0);}$}; \draw[-implies,double equal sign distance] (S) to [bend right] node [midway,above]{\ref{Counter-example}} (H); \node at (5.5,0.6) {$\mathbin{\tikz [x=1.4ex,y=1.4ex,line width=.2ex, red] \draw (0,0) -- (1,1) (0,1) -- (1,0);}$}; \draw[-implies,double equal sign distance] (H) to [bend right] node [midway,above]{Open question} (HC); \end{tikzpicture} \end{center} Most of our ``counter-examples" are built on discrete metric spaces $M$. This underlines that the structure of the metric space $M$ is as important as the Lipschitz self-map $f :M \to M$. For instance, there is no hypercyclic $\widehat{f}$ if $M$ is bounded and uniformly discrete (see Remark~\ref{UnifDiscrete}). Leaving apart those pathological examples, one can obtain some positive results by working on non-discrete metric spaces such as closed intervals in $\mathbb{R}$. Notably, we prove in Theorem~\ref{mainthminterval} that if $f : [a,b] \to [a,b]$ has a fixed point $c$ (considered to be the base-point of $M = [a,b]$) and is topologically transitive, then $\widehat{f}$ is weakly mixing. 
So the following implications hold for a base-point preserving Lipschitz self-map $f$ defined on a closed interval $M=[a,b]$: \begin{center} \begin{tikzpicture} \node (WMC) at (4,0) {$\widehat{f}$ weakly mixing and chaotic}; \node (WM) at (11,0) {$\widehat{f}$ weakly mixing}; \draw[implies-implies,double equal sign distance] (WMC) -- (WM); \node (fTT) at (12,-1.5) {$f$ is transitive}; \node (fWM) at (4,-1.5) {$f$ weakly mixing}; \node (fDC) at (8,-1.5) {$f$ Devaney chaotic}; \draw[implies-implies,double equal sign distance] (fDC) -- node [midway,above]{\cite{VeBe}} (fTT); \draw[-implies,double equal sign distance] (fWM) -- (fDC); \draw[-implies,double equal sign distance] (fTT) -- node [midway,left]{\ref{mainthminterval}} (WM); \draw[-implies,double equal sign distance] (fDC) -- (WMC); \node at (6.8,-0.75) {\ref{CorDC}}; \draw[-implies,double equal sign distance] (fDC) to [bend left] node [midway,below]{\cite{Wang2011} or \ref{Example2}} (fWM); \node at (6.15,-2.2) {$\mathbin{\tikz [x=1.4ex,y=1.4ex,line width=.2ex, red] \draw (0,0) -- (1,1) (0,1) -- (1,0);}$} ; \end{tikzpicture} \end{center} \subsection{Notation} \label{Subsection-Notation} Let us now introduce the notation that will be used throughout this paper. If $(M,d)$ is a metric space, we will denote by $B(x,r)$ the open ball of center $x\in M$ and radius $r>0$. When $E$ is a subset of $M$, we let $\dist(x,E) := \inf\{d(x,y) \; : \; y \in E\}$ be the distance from $x$ to $E$. If $(N, d')$ is another metric space and $f : M \to N$ is a Lipschitz map, then we let $$\mathrm{Lip}(f) = \sup_{x\neq y} \dfrac{d'(f(x), f(y))}{d(x,y)}$$ be the smallest Lipschitz constant of $f$. For a Banach space $X$, the unit ball of $X$ will simply be denoted by $B_X$ and its (topological) dual space by $X^*$. If $Y$ is another Banach space, we will write $X \equiv Y$ if there exists an isometric isomorphism between $X$ and $Y$. Finally, if $f : E \to F$ is a map between two sets and $U$ is a subset of $E$, $f\mathord{\upharpoonright}_U$ will stand for the restriction of $f$ to $U$. \subsection{Lipschitz-free spaces} \label{Subsection-Lipfree} We wish to end this introduction by giving a more detailed account of the theory of Lipschitz-free spaces (for the proofs, we refer the reader to \cite{Weaver2}, where the name Arens-Eells spaces is used instead). Consider a pointed metric space $(M,d)$ with distinguished point $0\in M$. For a \textbf{real} Banach space $X$, we denote by $\Lip_0(M,X)$ the vector space of Lipschitz maps from $M$ to $X$ satisfying $f(0)=0$. Then $\mathrm{Lip}(\cdot)$ is a norm on $\Lip_0(M,X)$, and equipped with that norm, $\Lip_0(M,X)$ is a Banach space. When the range space is $\mathbb{R}$, we simply write $\Lip_0(M)$ instead of $\Lip_0(M,\mathbb{R})$. Now recall that the Lipschitz-free space over $M$ is the following subspace of $\Lip_0(M)^*$: $$\mathcal{F}(M) := \overline{ \mbox{span}}^{\| \cdot \|}\left \{ \delta(x) \, : \, x \in M \right \},$$ where $\delta(x)$ is the functional defined by $\langle f,\delta(x)\rangle = f(x)$ for every $f\in \Lip_0(M)$. It is readily seen that $\delta(x) \in \Lip_0(M)^*$ with $\|\delta(x)\| = d(x,0)$. The map $\delta_{M} \colon x \in M \mapsto \delta(x) \in \mathcal{F}(M)$ is actually an isometry, which in turn implies that $\delta(M)$ is a closed subset of $\mathcal{F}(M)$ whenever $M$ is complete. In fact, if $\overline{M}$ is the completion of $M$ then $\mathcal{F}(M)$ and $\mathcal{F}(\overline{M})$ are linearly isometric.
So, even when it is not precisely specified, we will always assume our metric spaces to be complete. Notice also that $\mathcal{F}(M)$ is separable if and only if $M$ is so. Their most important application to non-linear geometry is certainly their universal extension property: for every Banach space $X$ and every $f \in \Lip_0(M,X)$, the unique linear operator $\overline{f} \colon \mathcal{F}(M) \to X$ defined on $\lspan \delta(M)$ by $$\overline{f}\Big(\sum_{i=1}^n a_i \delta(x_i)\Big)= \sum_{i=1}^n a_i f(x_i) \in X $$ is continuous with $\|\overline{f}\|=\mathrm{Lip}(f)$. In other words, the map $\Phi \colon f\in \Lip_0(M,X) \mapsto \overline{f} \in \mathcal{L}(\mathcal{F}(M),X)$ is an onto linear isometry. As a direct consequence (in the case $X=\mathbb{R}$) we obtain that $\mathcal{F}(M)^* \equiv \Lip_0(M)$. Moreover, the weak$^*$ topology coincides with the topology of pointwise convergence on bounded sets of $\Lip_0(M)$. Furthermore, if $N \subset M$ with $0 \in N$ then $\mathcal{F}(N)$ can be canonically isometrically identified with the closed subspace $\overline{\mathrm{span}}\,\{\delta(x) : x \in N\}$ of $\mathcal{F}(M)$. This is due to the well-known McShane--Whitney extension theorem (see e.g. \cite[Theorem 1.33]{Weaver2}), according to which every real-valued Lipschitz function on $N$ can be extended to $M$ with the same Lipschitz constant. \begin{remark} In linear dynamics, one often studies operators defined on \textbf{complex} Banach spaces. Here we want to highlight the fact that, by construction, Lipschitz-free spaces are Banach spaces over $\mathbb R$. Nevertheless, one could build a complex version of Lipschitz-free spaces by following the same steps as we did above. That is, we may consider the complex Banach space $\Lip_0(M,X)$, where $X$ is a Banach space over $\mathbb{C}$ as well, and then the evaluation functionals $\delta(x) \in \Lip_0(M,\mathbb{C})^*$ are defined in the same fashion. This leads to the complex version of the Lipschitz-free space $$ \mathcal{F}_{\mathbb{C}}(M) := \overline{ \mbox{span}}^{\| \cdot \|}\left \{ \delta(x) \, : \, x \in M \right \} \subset \Lip_0(M,\mathbb{C})^*.$$ One can prove that the universal extension property works perfectly fine and thus provides $\mathcal{F}_{\mathbb{C}}(M)^* \equiv \Lip_0(M,\mathbb{C})$. Now one should be careful since some features of $\mathcal{F}(M)$ might not work equally well for $\mathcal{F}_{\mathbb{C}}(M)$ (for instance, $\mathcal{F}_{\mathbb{C}}(N)$ may not be isometric but only isomorphic to a subspace of $\mathcal{F}_{\mathbb{C}}(M)$). To the best of our knowledge, the complex version of Lipschitz-free spaces has not been much studied in the literature (see the comments at pages 86 and 125 in \cite{Weaver2}). In our work, we claim that the results still hold if one replaces $\mathcal{F}(M)$ by $\mathcal{F}_{\mathbb{C}}(M)$. \end{remark} We now recall the fundamental linearization property of Lipschitz-free spaces (already stated in Proposition~\ref{diagramfree}), which is a direct consequence of the universal extension property presented above.
If $f \colon M \to N$ is a Lipschitz map such that $f(0_M) = 0_N$, then there exists a bounded linear operator $\widehat{f} \colon \mathcal{F}(M) \to \mathcal{F}(N)$ such that $\|\widehat{f}\|=\mathrm{Lip}(f)$ and which satisfies: $$ \text{For any } \gamma = \sum_{i=1}^n a_i\delta_M(x_i) \in \mathcal{F}(M) , \quad \widehat{f}(\gamma) = \sum_{i=1}^n a_i \delta_N(f(x_i)).$$ We recall that such an operator $\widehat{f}$ will be called a \textit{Lipschitz operator}. In this paper, we will focus on Lipschitz self-maps $f : M \to M$ preserving the distinguished point and we will often require that $f$ is transitive. It is readily seen that if $f$ is transitive then its Lipschitz constant satisfies $\mathrm{Lip}(f) > 1$. Notice also that if $0$ is an isolated point in $M$, then there is no Lipschitz map $f:M \to M$ and $x\in M$ such that $f(0)=0$ and $\mathrm{Orb}(x,f)$ is dense in $M$ (and thus no hypercyclic $f :M \to M$). \begin{remark} \label{UnifDiscrete} We recall that any separable infinite-dimensional Banach space supports a hypercyclic operator \cite{Ansari}. Yet, for some metric spaces $M$ there is no hypercyclic Lipschitz operator. For instance, let $M$ be a countable separable pointed metric space and suppose that: \begin{itemize} \item $M$ is uniformly discrete, that is, there exists $\theta >0$ such that $d(x,y)> \theta$ for every $x\neq y$; \item $M$ is bounded, i.e., $\mathrm{rad}(M):=\underset{x\in M}{\sup}\,d(x,0)<+\infty$. \end{itemize} Then it is known \cite[Proposition~4.4]{Kalton04} that $\mathcal{F}(M)$ is linearly isomorphic to the Banach space $\ell_1(\mathbb{N})$ of real sequences indexed by $\mathbb{N}$ whose series is absolutely convergent. However, every orbit under the action of $\widehat{f}$ is bounded, so $\widehat{f}$ cannot be hypercyclic: \begin{eqnarray*} \forall \gamma = \sum_{i=1}^{\infty} a_i \delta(x_i), \forall n \in \mathbb{N}, \quad \| \widehat{f}^n \gamma\| \leq \mathrm{rad}(M) \sum_{i=1}^{\infty} |a_i| \leq C \cdot \mathrm{rad}(M) \|\gamma\|. \end{eqnarray*} Here the expansion $\gamma = \sum_{i} a_i \delta(x_i)$ and the constant $C$ are provided by the isomorphism with $\ell_1(\mathbb{N})$ mentioned above. \end{remark} \begin{remark} A change of the base point in a metric space $M$ does not affect the isometric structure of the associated Lipschitz-free space. Indeed, if $b \in M$ is the new base point (instead of $0$), then $f \in \Lip_0(M) \mapsto f - f(b) \in \Lip_b(M)$ defines a linear and surjective isometry. Moreover, it is easy to check that this operator is continuous with respect to the topology of pointwise convergence, which in turn implies that it is weak$^*$-to-weak$^*$ continuous. Therefore its preadjoint is a surjective isometry between $\mathcal{F}_b(M)$ and $\mathcal{F}(M)$, where $\mathcal{F}_b(M)$ is the Lipschitz-free space over $M$ with $b$ considered to be the distinguished point. Now imagine that a Lipschitz self-map $f : M \to M$ admits two fixed points, say $p$ and $q$. One can consider $\widehat{f_p} : \mathcal{F}_p(M) \to \mathcal{F}_p(M)$ and $\widehat{f_q} : \mathcal{F}_q(M) \to \mathcal{F}_q(M)$ obtained by the linearization property of Lipschitz-free spaces. Let us denote by $T: \mathcal{F}_p(M) \to \mathcal{F}_q(M)$ the isometry described in the previous paragraph. Then, it is easy to check that $\widehat{f_q} = T \circ \widehat{f_p} \circ T^{-1}$. Therefore $\widehat{f_p}$ and $\widehat{f_q}$ are conjugate and they will enjoy the very same dynamical properties. \end{remark} To conclude this short introduction to the theory of Lipschitz-free spaces, we recall two famous examples and then discuss a more general point of view.
\begin{example} In the sequel, $L^1 = L^1([0,1])$ denotes the real Banach space of integrable functions from $[0,1]$ to $\mathbb{R}$ (as usual quotiented by the kernel of $\|\cdot\|_1$). \begin{enumerate}[itemsep=3pt] \item ``$(M,d) = (\mathbb{N},|\cdot|)$''. The linear operator $T \colon \delta(n) \in \mathcal{F}(\mathbb{N}) \mapsto \sum_{i=1}^n e_i \in \ell_1(\mathbb{N})$ is an onto linear isometry (the sequence $(e_n)_n \subset \ell_1$ stands for the canonical unit vector basis of $\ell_1$). \item ``$M =( [0,1],|\cdot|)$''. The linear operator $T \colon \delta(t) \in \mathcal{F}([0,1]) \mapsto \mathbbm{1}_{[0,t]} \in L^1([0,1])$ is an onto linear isometry. \end{enumerate} \end{example} More generally, we can see the two previous examples as particular cases of a more general theorem. Indeed, A. Godard gave a very explicit formula in \cite{Godard_2010} to prove that if $M$ is a subset of an $\mathbb{R}$-tree which contains all of its branching points, then $\mathcal{F}(M)$ is isometric to an $L^1(\mu)$ space. We recall that an $\mathbb{R}$-tree is an arc-connected metric space $(M,d)$ with the property that there is a unique arc connecting any pair of points $x\neq y\in M$, and that this arc is moreover isometric to the real segment $[0,d(x,y)]\subset\mathbb{R}$. A point $x\in M$ is called a \emph{branching point} of $M$ if $M\setminus\set{x}$ has at least three connected components. In this paper we use Godard's formula in a number of examples. We will always give the definition of the isometries for convenience, but we will never prove that they are indeed surjective isometries. In fact, most of the time we apply Godard's formula to a countably branching tree of height $1$; we state the explicit isometry in this case for future reference. \begin{proposition} \label{PropTreesl1} Let $M = \mathbb{N} \cup \{0\}$ be equipped with the tree metric $d$ described below \begin{equation*} \xymatrix @!0 @R=1pc @C=1.5pc { 1 && 2 && 3 & \ar@{.}[rr] &&& n & \ar@{.}[rr] && \\ \\ \\ & & & & \ar@/^/@{-}[lllluuu]^{d_1} \ar@{-}[lluuu]^{d_2} \ar@{-}[uuu]^{d_3} 0 \ar@/_/@{-}[rrrruuu]_{d_n} } \end{equation*} That is, for every $n \in \mathbb{N}$, $d(n,0) =d_n > 0$, and $d(n,m) =d_n +d_m$ whenever $n \neq m$. Then the linear map $\Phi : \mathcal{F}(M) \to \ell_1(\mathbb{N})$ given by $\Phi(\delta(n)) = d(n,0)e_n = d_n e_n$ is an onto linear isometry. In particular, any Lipschitz operator $\widehat{f} : \mathcal{F}(M) \to \mathcal{F}(M)$ is conjugate to a bounded operator $T: \ell_1(\mathbb{N}) \to \ell_1(\mathbb{N})$ such that $Te_n = d_{f(n)}d_n^{-1} e_{f(n)}$. \end{proposition} We refer the reader to the papers \cite{Godard_2010, AlPePr_2019} for more details on this topic. \section{First observations} \label{Section-FO} As we already mentioned, our aim is to study whether the arrow $f \mapsto \widehat{f}$ carries some dynamical information. First, we note that having a dense range is preserved through this functor. \begin{proposition} \label{denserange} Let $M$ and $N$ be two pointed metric spaces and let $f :M \to N$ be a Lipschitz map such that $f(0_M) = 0_N$. Then, the range of $\widehat{f}$ is dense in $\mathcal{F}(N)$ if and only if the range of $f$ is dense in $N$. \end{proposition} \begin{proof} By the very definition of $\widehat{f}$, notice that $\widehat{f}(\lspan \delta(M)) = \lspan \delta(f(M))$. \noindent $(\impliedby):$ If $f(M)$ is dense in $N$, then $\delta(f(M))$ is dense in $\delta(N)$ because the map $\delta$ is an isometry. Then, $\lspan \delta(f(M))$ is dense in $\overline{\lspan}\ \delta(N) = \mathcal{F}(N)$.
Since $\lspan \delta(f(M)) = \widehat{f} (\lspan (\delta(M))) \subset \widehat{f}(\mathcal{F}(M)),$ we get that $\widehat{f}(\mathcal{F}(M))$ is dense in $\mathcal{F}(N)$. \noindent $(\implies):$ Assume that $f(M)$ is not dense in $N$ and let $y\in N \setminus \overline{f(M)}$. Since $\dist\big(y,\overline{f(M)}\big):=\inf\big\{d(y,z) \; : \; z \in \overline{f(M)}\big\} >0$, we may define a Lipschitz map $g : N \to \mathbb{R}$ such that $g(y) = 1$ and $g(\overline{f(M)}) = \{0\}$ (such a map $g$ exists, see for instance the inf/sup-convolution formula \cite[Theorem 1.33]{Weaver2} to extend Lipschitz maps). In particular $g\in \Lip_0(N)$ and it is readily seen that $\langle g , \gamma \rangle = 0$ whenever $\gamma \in \overline{\widehat{f}(\mathcal{F}(M))} $. Therefore, the fact that $\widehat{f}$ does not have a dense range follows from the following simple estimate: \begin{align*} \dist\big(\delta(y) ,\overline{\widehat{f}(\mathcal{F}(M))}\big) \geq \inf_{\gamma \in \overline{\widehat{f}(\mathcal{F}(M))}}\left| \left\langle \delta(y) - \gamma , \dfrac{g}{\|g\|} \right\rangle \right| = \frac{1}{\|g\|} >0. \end{align*} \end{proof} \begin{corollary}\label{suphypdenserange} Let $M$ be a pointed metric space and let $f :M \to M$ be a Lipschitz map such that $f(0) = 0$. If $\widehat{f}$ is supercyclic (or hypercyclic), then the range of $f$ is dense in $M$. \end{corollary} The forward shift operator on $\ell_1(\mathbb{N})$ is cyclic, but its image is not dense in $\ell_1(\mathbb{N})$. This allows us below to show that the cyclicity of $\widehat{f}$ does not imply that $f$ has a dense image in $M$. This also underlines the fact that the hypercyclicity of $f$ does not imply the supercyclicity of $\widehat{f}$. \begin{example} \label{4} Let $f$ be the map defined on $M=\lbrace 1,2,3,\ldots\rbrace\cup \lbrace0\rbrace$ by $f(0) = 0$ and $f(n)=n+1$ for every $n \in \mathbb{N}$. We equip $M$ with the tree metric $d$ given by: for all $n\geqslant 1$, $d(n,0)=\frac{1}{n}$, and $d(n,m)=d(n,0)+d(m,0)$ whenever $n \neq m$. According to Proposition~\ref{PropTreesl1}, $\delta(n) \in \mathcal{F}(M) \mapsto \frac{1}{n} e_n \in \ell_1$ extends to a bijective linear isometry. In particular, $\widehat{f}$ is conjugate to the operator $T$ acting on $\ell_1$ by $Te_n = \frac{n}{n+1} e_{n+1}$. Thus $\widehat{f}$ is cyclic while $f$ does not have a dense range. By Corollary $\ref{suphypdenserange}$, this implies that $\widehat{f}$ is not supercyclic. Notice also that $\mathrm{Orb}(1,f)$ is dense in $M$. \end{example} Nevertheless, the example above is somehow the only pathology that may occur, as is shown by the next result. \begin{proposition} \label{propcyclicdense} If a Lipschitz operator $\widehat{f} : \mathcal{F}(M) \to \mathcal{F}(M)$ is cyclic, then either $f(M)$ is dense in $M$ or there exists $x \in M$ such that the range $f(M)$ is dense in $M\setminus \{x\}$. \end{proposition} Before giving the proof, let us state the following simple facts which we will use throughout the section. \begin{lemma}\label{LemmaOrbit} ~~ \begin{enumerate}[itemsep = 3pt] \item For every $n \in \mathbb{N}$, $\widehat{f^n} = (\widehat{f})^n$. \item For every $x \in M$, $\mathrm{Orb}(\delta(x),\widehat{f}) = \delta(\mathrm{Orb}(x,f))$. \end{enumerate} \end{lemma} \begin{proof}[Proof of Proposition~\ref{propcyclicdense}] Assume that $M\setminus \overline{f(M)}$ contains at least two points, say $x_1, x_2 \in M$. Set $E:=\lspan(\delta(x_1), \delta(x_2))$.
Let $P : \mathcal{F}(M) \to E$ be a continuous projection from $\mathcal{F}(M)$ onto $E$ such that $P\mathord{\upharpoonright}_{\overline{\lspan}\{\delta(f(M))\}}=0$. If there exists $\gamma\in \mathcal{F}(M)$ such that $\lspan \mathrm{Orb}(\gamma,\widehat{f}) $ is dense in $\mathcal{F}(M)$, then $$P\left(\lspan \mathrm{Orb}(\gamma,\widehat{f}) \right) = \lspan \left\lbrace P(\widehat{f}^n(\gamma)), n\geq 0 \right\rbrace$$ is dense in $E$. However, notice that for any $n\geq 1$, $P(\widehat{f}^n(\gamma))=0$. Indeed, if $y\in M$ then $f^n(y)\not \in \{x_1, x_2\}$, so that $P(\widehat{f}^n(\delta(y)))=P(\delta(f^n(y)))=0$. By linearity, this implies that $P(\widehat{f}^n(z))=0$ for any $z\in \lspan(\delta(M))$. By approximation and continuity of $P$, we get that $P(\widehat{f}^n(\gamma))=0$. The latter implies that $P\left(\lspan \mathrm{Orb}(\gamma,\widehat{f}) \right) = \mathbb{R}\, P(\gamma)$, which has dimension at most $1$ and hence cannot be dense in the $2$-dimensional space $E$. \end{proof} Next, we deduce from Lemma~\ref{LemmaOrbit}~(1) the following proposition. \begin{proposition} \label{PropPeriodic} Let $M$ be a pointed metric space and let $f :M \to M$ be a Lipschitz map such that $f(0) = 0$. If the set of periodic points $\mathrm{Per}(f)$ of $f$ is dense in $M$, then the set of periodic points $\mathrm{Per}(\widehat{f})$ of $\widehat{f}$ is dense in $\mathcal{F}(M)$. \end{proposition} \begin{proof} Since $\mathrm{Per}(f)$ is dense in $M$, $\lspan \delta(\mathrm{Per}(f))$ is dense in $\mathcal{F}(M)$. Now if $\gamma = \sum_{i=1}^n a_i \delta(x_i) \in \lspan \delta(\mathrm{Per}(f))$, then for every $i \in \{1 , \ldots , n\}$ there exists $n_i \in \mathbb{N}$ such that $f^{n_i}(x_i)=x_i$. We define $N = \prod_{i=1}^n n_i$ and notice that $f^{N}(x_i)=x_i$ for every $i \in \{1, \ldots , n\}$. We conclude the proof by using Lemma~\ref{LemmaOrbit}~(1) to show that $$ (\widehat{f})^{N}(\gamma) = \widehat{f^{N}}(\gamma) = \sum_{i=1}^n a_i \delta(f^{N}(x_i)) = \sum_{i=1}^n a_i \delta(x_i) = \gamma,$$ which implies that $\lspan \delta(\mathrm{Per}(f)) \subset \mathrm{Per}(\widehat{f})$. \end{proof} Another direct consequence of Lemma~\ref{LemmaOrbit}~(2) is that $\mathrm{Orb}(\delta(x),\widehat{f}) \subseteq \delta(M)$, and thus neither $\mathrm{Orb}(\delta(x), \widehat{f})$ nor $ \mathrm{Orb}(\mathbb R \,\delta(x),\widehat{f})$ (when $M\neq \{0,x\}$) can be dense in $\mathcal{F}(M)$. In other words, $\delta(x) \in \mathcal{F}(M)$ will never be a hypercyclic (or supercyclic) vector for the operator $\widehat{f}$. Nevertheless, $\delta(x) \in \mathcal{F}(M)$ may be a cyclic vector for $\widehat{f}$. \begin{proposition} \label{PropHCtoC} Let $M$ be a metric space with non-isolated distinguished point $0\in M$, $f :M \to M$ be a Lipschitz map such that $f(0) = 0$, and let $x\in M$. Then the following assertions are equivalent: \begin{enumerate}[label={$(\arabic*)$}] \item $x$ is a hypercyclic element for $f$. \item $\delta(x)$ is a cyclic vector for $\widehat{f}$. \end{enumerate} \end{proposition} \begin{proof} (1) $\implies$ (2): Thanks to Lemma~\ref{LemmaOrbit}, $\mathrm{Orb}(\delta(x),\widehat{f}) = \delta(\mathrm{Orb}(x,f))$. So, if $\mathrm{Orb}(x,f)$ is dense in $M$ then $\lspan \delta(\mathrm{Orb}(x,f))$ is dense in $\mathcal{F}(M)$, which in turn implies that $\lspan \mathrm{Orb}(\delta(x),\widehat{f})$ is dense in $\mathcal{F}(M)$. (2) $\implies$ (1): Assume that $\mathrm{Orb}(x,f)$ is not dense in $M$. Since $0$ is not an isolated point of $M$, it follows that $\mathrm{Orb}(x,f)$ is not dense in $M\setminus \set{0}$ either.
So there exist $y \in M\setminus\set{0}$ and $\varepsilon >0$ such that $ B(y,\varepsilon) \cap \overline{\mathrm{Orb}(x,f)} = \emptyset$. Then let $F \in \Lip_0(M)\equiv \mathcal{F}(M)^*$ be such that $F(y) >0$ and $\mathrm{supp}(F) \subseteq B(y,\varepsilon)$ (for instance $z \in M \mapsto d\big(z,B(y,C \varepsilon)^c\big)$ for some small enough constant $C\in(0,1)$ which ensures that $0 \not \in B(y, C\varepsilon)$). Clearly, for every $\gamma \in \lspan \overline{\mathrm{Orb}(\delta(x), \widehat{f})}$, $\langle F , \gamma\rangle =0$. However $\langle F , \delta(y)\rangle >0$, which implies that $$\dist_{\mathcal{F}(M)}\left(\delta(y) , \lspan \mathrm{Orb}(\delta (x),\widehat{f}) \right) = \dist_{\mathcal{F}(M)}\Big(\delta(y) , \lspan \delta(\mathrm{Orb}(x, f)) \Big) > 0,$$ which means that $\lspan \mathrm{Orb}(\delta(x),\widehat{f})$ is not dense in $\mathcal{F}(M)$. \end{proof} \subsection{Supports of supercyclic vectors} As we mentioned above, evaluation functionals, that is, elements whose support consists of a single non-zero point, cannot be hypercyclic or supercyclic vectors for any $\widehat{f}$. In fact, we can say more: if $\gamma \in \mathcal{F}(M)$ is a hypercyclic (or supercyclic) vector for $\widehat{f}$, then $\gamma$ cannot be finitely supported. \begin{definition} Let $M$ be a pointed metric space. We say that $ \gamma\in \mathcal{F}(M)$ is finitely supported if $$\gamma \in \mathrm{span}\lbrace \delta(x) : x \in M \rbrace.$$ The support of such a $\gamma$ is denoted by $\supp \gamma$ and is the smallest finite subset $F$ of $M$ which contains $0$ and such that $\gamma \in \mathrm{span}\lbrace \delta(x) : x \in F \rbrace$. More generally \cite{AP20, APPP_2019}, the support of any element $\gamma \in \mathcal{F}(M)$, also denoted by $\supp \gamma$, is the intersection of all closed subsets $K$ of $M$ such that $\gamma \in \mathcal{F}(K) \subset \mathcal{F}(M)$. It follows from \cite[Theorem~2.1]{APPP_2019} that $\gamma \in \mathcal{F}(\supp \gamma)$ and of course $\supp \gamma$ is the smallest closed subset with this property. \end{definition} \begin{proposition} \label{Prop-support} Let $M$ be an infinite complete metric space. If $\gamma \in \mathcal{F}(M)$ is a supercyclic vector for a Lipschitz operator $\widehat{f} : \mathcal{F}(M) \to \mathcal{F}(M)$, then $\gamma$ is infinitely supported. \end{proposition} Indeed, any element in $\mathrm{Orb}(\gamma,\widehat{f})$ has a support of cardinality less than or equal to that of $\gamma$. So our claim follows from the next result, which was proved by R. Aliaga, C. Noûs, the third named author and A. Proch\'azka. We are deeply grateful to them for allowing us to include their result as well as its proof. \begin{lemma}[\textbf{Aliaga, Noûs, Petitjean and Proch\'azka}] Let $M$ be a complete pointed metric space. Let $FS_n(M) = \{\gamma \in \mathcal{F}(M) \; : \; | \supp \gamma | \leq n \}$ be the set of finitely supported elements whose support contains at most $n$ points of $M$. Then $FS_n(M)$ is weakly closed. \end{lemma} The proof uses the notion of support introduced above. More precisely, we will need the following characterization (see \cite[Proposition 2.7]{APPP_2019}): Let $M$ be a complete pointed metric space and $\gamma \in \mathcal{F}(M)$. Then $x \in M$ lies in the support of $\gamma$ if and only if for every open neighbourhood $U_x$ of $x$ there exists a function $f\in \Lip_0(M)$ whose support is contained in $U_x$ and such that $\langle f , \gamma \rangle \neq 0$.
\begin{proof} Aiming for a contradiction, suppose $(\gamma_i)_i \subset FS_n(M)$ is a net which weakly converges to some $\gamma \not\in FS_n(M)$. This means that $\supp(\gamma)$ contains at least $n+1$ points $x_1,\ldots,x_{n+1}$. Let $\delta>0$ be small enough so that the balls $B(x_k,\delta)$, for $k=1,\ldots,n+1$, are pairwise disjoint. By \cite[Proposition 2.7]{APPP_2019}, there are $f_k\in\Lip_0(M)$ such that $\supp(f_k)\subset B(x_k,\delta)$ and $\langle f_k , \gamma \rangle \neq 0$. Therefore, if $i$ is large enough we must have $\langle f_k , \gamma_i \rangle \neq 0$ for every $k$, hence $\supp(\gamma_i)\cap B(x_k,\delta)\neq\emptyset$ for every $k$. This is impossible since $\supp(\gamma_i)$ has at most $n$ elements. \end{proof} \subsection{Quasi-conjugacy} It is well known that hypercyclicity and the notions of topological dynamics introduced in the introduction are preserved under quasi-conjugacy. \begin{definition} Let $f : M \rightarrow M$ and $g: N \rightarrow N$ be two continuous maps acting on metric spaces $M$ and $N$. The map $f$ is called quasi-conjugate to $g$ if there exists a continuous map $\phi:N\to M$ with dense range such that the diagram $$\xymatrix{ N \ar[r]^g \ar[d]_{\phi} & N \ar[d]^{\phi} \\ M \ar[r]_{f} & M }$$ commutes, that is, $f\circ\phi=\phi\circ g$. In this case, we say that $\phi$ defines a quasi-conjugacy from $g$ to $f$. \end{definition} \begin{proposition} Let $(M,d_M)$ and $(N,d_N)$ be two pointed metric spaces, let $f : M \rightarrow M$, $g: N \rightarrow N$ be two Lipschitz maps such that $f(0)=0$ and $g(0)=0$, and let $\phi:N\rightarrow M$ be a Lipschitz map such that $\phi(0)=0$. Then $\phi$ defines a quasi-conjugacy from $g$ to $f$ if and only if $\widehat{\phi}:\mathcal{F}(N)\rightarrow\mathcal{F}(M)$ defines a quasi-conjugacy from $\widehat{g}$ to $\widehat{f}$. \label{3} \end{proposition} \begin{proof} Thanks to Proposition \ref{denserange}, $\phi$ has a dense range if and only if $\widehat{\phi}$ has a dense range. So, it remains to show that $f\circ\phi=\phi\circ g$ is equivalent to $\widehat{f}\circ\widehat{\phi}=\widehat{\phi}\circ\widehat{g}$. Assume that $f\circ\phi=\phi\circ g$ and let $\gamma=\sum a_i\delta_N(x_i)\in \lspan\,\delta(N)$. We have \begin{align*} \widehat{\phi}\circ\widehat{g}(\gamma)&=\widehat{\phi}(\sum a_i\delta_N(g(x_i)))\\ &=\sum a_i\delta_M(\phi\circ g(x_i))\\ &=\sum a_i\delta_M(f\circ\phi(x_i))\\ &=\widehat{f}\circ\widehat{\phi}(\sum a_i\delta_N(x_i))\\ &=\widehat{f}\circ\widehat{\phi}(\gamma), \end{align*} so $\widehat{f}\circ\widehat{\phi}=\widehat{\phi}\circ\widehat{g}$ on $\lspan\,\delta(N)$, which implies that $\widehat{f}\circ\widehat{\phi}=\widehat{\phi}\circ\widehat{g}$ on $\mathcal{F}(N)$. Conversely, suppose that $\widehat{f}\circ\widehat{\phi}=\widehat{\phi}\circ\widehat{g}$ and let $x\in N$. We have $\widehat{f}\circ\widehat{\phi}(\delta_N(x))=\widehat{\phi}\circ\widehat{g}(\delta_N(x))$, which is equivalent to $\delta_M(\phi\circ g(x))=\delta_M(f\circ\phi(x))$. Since $\delta_M$ is an isometry, we get $\phi\circ g(x)=f\circ\phi(x)$. \end{proof} \section{Hypercyclicity Criterion for Lipschitz operators} \label{section_criteria} Proving that a given operator is hypercyclic by constructing a hypercyclic vector is not an easy task; it is sometimes easier to check the topological transitivity condition. Nonetheless, in many concrete situations it is not obvious how to verify the latter condition.
The purpose of \textit{the Hypercyclicity Criterion} is to provide several easily verified conditions under which an operator is hypercyclic (actually even weakly mixing). In this section, we will shift those conditions on the Lipschitz maps themselves, which will give us a very useful tool for particular examples. Let us start by recalling the statement of the Hypercyclicity Criterion \cite{Survey, GrPe}. \noindent \textbf{The Hypercyclicity Criterion (HC).} Let $X$ be a separable Banach space and let $T : X \to X$ be a bounded linear operator. We will say that $T$ satisfies the HC if there exists an increasing sequence of integers $(n_k)$, two dense sets $X_0$ and $Y_0$ in $X$, and a sequence of maps $S_{n_k} : Y_0 \to X$ such that \begin{enumerate} \item $T^{n_k}x \to 0$ for any $x \in X_0$; \item $S_{n_k}x \to 0$ for any $x \in Y_0$; \item $T^{n_k} S_{n_k}y \to y$ for each $y \in Y_0$. \end{enumerate} It is well known that if $T$ satisfies the HC then $T$ is hypercyclic (see \cite[Theorem~3.15]{GrPe} e.g.). Moreover, a bounded linear operator satisfies the HC if and only if it is weakly mixing (see \cite[Theorem 2.3]{BesPeris}), and if it satisfies the HC with respect to the full sequence $(n)_{n \in \mathbb{N}}$ then it is actually mixing (see page 32 in \cite{Survey}). The linearization property stated in Proposition~\ref{diagramfree} allows us to formulate a version of the HC for Lipschitz operators $\widehat{f} : \mathcal{F}(M) \to \mathcal{F}(M)$, only involving metric conditions. \begin{theorem} \label{HCriterion} Let $(M,d)$ be a pointed separable metric space, $f:M \to M$ be a Lipschitz map such that $f(0)=0$ and $\lambda\in\mathbb{R}\setminus\lbrace 0\rbrace$. Assume that there exist an increasing sequence of integers $(n_k)_{k\in\mathbb{N}}$, two dense subsets $\mathcal{D}_1$, $\mathcal{D}_2$ in $M$ and a sequence of maps $g_{n_k}:\mathcal{D}_2 \to M$ such that, for any $x\in\mathcal{D}_1$ and $y\in\mathcal{D}_2$ the following conditions hold: \begin{enumerate} \item $|\lambda|^{n_k}\,d(f^{n_k}(x),0)\underset{k\to+\infty}{\longrightarrow}0$; \item $\dfrac{d(g_{n_k}(y),0)}{|\lambda|^{n_k}}\underset{k\to+\infty}{\longrightarrow}0$; \item $d(f^{n_k}\circ g_{n_k}(y),y)\underset{k\to+\infty}{\longrightarrow}0$; \end{enumerate} Then $\lambda \widehat{f}$ satisfies the Hypercyclicity Criterion. In particular, $\lambda \widehat{f}$ is hypercyclic. \end{theorem} \begin{proof} Let $X_0=\mathrm{span}\,\delta(\mathcal{D}_1)$ and $Y_0=\mathrm{span}\,\delta(\mathcal{D}_2)$. It is clear that $X_0$ and $Y_0$ are dense in $\mathcal{F}ree(M)$. Moreover, for every $x\in\mathcal{D}_1$, we have: \[\|(\lambda\widehat{f})^{n_k}(\delta(x))\|=|\lambda|^{n_k}\|\delta(f^{n_k}(x))\|=|\lambda|^{n_k}\,d(f^{n_k}(x),0)\underset{k\to+\infty}{\longrightarrow}0.\] Therefore, for every $x_0\in X_0$, $\|(\lambda\widehat{f})^{n_k}(x_0)\|\underset{k\to+\infty}{\longrightarrow}0$. Let $S_{n_k}:Y_0 \to \mathcal{F}ree(M)$ be the linear map given by $S_{n_k}(\delta(y))=\dfrac{1}{\lambda^{n_k}}\delta(g_{n_k}(y))$, for every $y\in\mathcal{D}_2$. 
We thus have \[\|S_{n_k}(\delta(y))\|=\dfrac{d(g_{n_k}(y),0)}{|\lambda|^{n_k}}\underset{k\to+\infty}{\longrightarrow}0\] and \[\|(\lambda\widehat{f})^{n_k}\circ S_{n_k}(\delta(y))-\delta(y)\|=\| \delta(f^{n_k}\circ g_{n_k}(y))-\delta(y)\|=d(f^{n_k}\circ g_{n_k}(y),y)\underset{k\to+\infty}{\longrightarrow}0.\] Therefore, for every $y_0\in Y_0$, $S_{n_k} y_0\underset{k\to+\infty}{\longrightarrow}0$ and $(\lambda\widehat{f})^{n_k}\circ S_{n_k}y_0\underset{k\to+\infty}{\longrightarrow} y_0$. \end{proof} Throughout this paper, we will always have $g_{n_k} = g^{n_k}$ for some function, always denoted by $g$, that we will refer to as the ``{\it inverse function}" of $f$ even though $f$ need not be a bijection. \begin{definition} We will say that $\widehat{f}$ satisfies the \textit{hypercyclic criterion for Lipschitz operators} (HCL for short) if $f$ satisfies the conditions of Theorem~\ref{HCriterion} with $\lambda=1$. \end{definition} Here we present a setting where the HCL is satisfied. \begin{proposition} Let $(M,d)$ be a complete pointed metric space, and $f:M\to M$ be a Lipschitz map such that $f(0)=0$. Assume that there exist an increasing sequence of integers $(n_k)_{k\in\mathbb{N}}$, a dense subset $D$ in $M$, a subset $J$ of $M$ with $0\in J$, and an integer $p\geq1$ such that: \begin{enumerate} \item For every $x\in D$, $d(f^{pn_k}(x),0)\underset{k\to+\infty}{\longrightarrow}0$; \item $f^p_{|J}:J \to M$ is bijective and its inverse is a contraction; \end{enumerate} then $\widehat{f}$ satisfies the HCL. In particular, $\widehat{f}$ is hypercyclic. \end{proposition} \begin{proof} For each $k$, let $g_k=(f_{|J}^{p})^{-n_k}:M \to J$. Since $(f^p_{|J})^{-1}(0)=0$ and $(f^p_{|J})^{-1}$ is a contraction, we get $d((f^p_{|J})^{-n}(x),0)\underset{n\to+\infty}{\longrightarrow}0$, for each $x\in M$. In particular, for each $x\in M$, $d(g_k(x),0)\underset{k\to+\infty}{\longrightarrow}0$. Moreover, it is clear that the last condition of the HCL is satisfied. Hence $\widehat{f}$ is hypercyclic. \end{proof} We now explore the connections between the weakly mixing property and the HCL. This is of course motivated by the linear case since a bounded operator $T : X \to X$ satisfies the HC if and only if it is weakly mixing \cite[Theorem 3.15]{GrPe}. Unfortunately, if $\widehat{f}$ satisfies the HCL then $f$ is not necessarily weakly mixing (not even transitive; see Example~\ref{ExampleHCnotHCL} and Example~\ref{ExampleHCLnotTransitive}). Nevertheless, the reverse implication holds; we omit the proof since it follows the same lines as in \cite[Theorem~3.15]{GrPe}. \begin{theorem} \label{Thm_WMimpliesHCL} Let $(M,d)$ be a separable complete pointed metric space without isolated points, and let $f:M\to M$ be a Lipschitz map such that $f(0)=0$. If $f$ is weakly mixing then $f$ satisfies the HCL. \end{theorem} In \cite{DR}, M. De La Rosa and C. Read gave the first example of a hypercyclic operator which does not satisfy the Hypercyclicity Criterion. In the same direction, we do not know whether there is a Lipschitz operator $\widehat{f}$ which is hypercyclic but fails the Hypercyclicity Criterion. On the other hand, there are Lipschitz operators $\widehat{f}$ satisfying the HC but not the HCL. \begin{example} \label{ExampleHCnotHCL} Let $M$ be the compact space $\{0\} \cup \{ \frac{1}{n} \; : \; n \in \mathbb{N}\}$ equipped with the usual distance in $\mathbb{R}$. Let $f$ be the Lipschitz map defined by $f(0)=0$, $f(1)=1$ and $f(\frac{1}{n}) = \frac{1}{n-1}$ for $n \geq 2$.
We claim that $f$ does not satisfy the HCL, while $\widehat{f}$ satisfies the usual HC. To simplify the notation, we will write $x_n$ instead of $\frac{1}{n}$. First, it is readily seen that for every $k \in \mathbb{N}$, we have $\lim\limits_{n \to \infty} d(f^n(x_k),0)=1 \neq 0$, and so $f$ does not satisfy the HCL. Next, it is well known that $\mathcal{F}(M) \equiv \ell_1$. Indeed, the linear operator $\Phi : \ell_1 \to \mathcal{F}(M)$ given by $$ \Phi(e_n) = \frac{\delta(x_n) - \delta(x_{n+1})}{d(x_n,x_{n+1})} $$ is a surjective isometry. So $\widehat{f}$ is conjugate to the operator $T :=\Phi^{-1} \circ \widehat{f} \circ \Phi : \ell_1 \to \ell_1$, which satisfies $T(e_1) = 0$ and, for $n\geq 2$: \begin{eqnarray*} T(e_n) &=& \Phi^{-1} \widehat{f} \left(\frac{\delta(x_n) - \delta(x_{n+1})}{d(x_n,x_{n+1})}\right) \\ &=& \Phi^{-1} \left(\frac{\delta(x_{n-1}) - \delta(x_{n})}{d(x_n,x_{n+1})}\right) \\ &=& \frac{d(x_{n-1},x_{n})}{d(x_n,x_{n+1})}\Phi^{-1} \left(\frac{\delta(x_{n-1}) - \delta(x_{n})}{d(x_{n-1},x_{n})}\right) \\ &=& \frac{n+1}{n-1} e_{n-1}. \end{eqnarray*} Thus $T$ is a weighted backward shift on $\ell_1$, which is well known to satisfy the HC, and so does $\widehat{f}$. \end{example} \begin{remark} Notice that if we consider $1$ to be the base point of $M$ (instead of $0$), then $f$ satisfies the HCL. Therefore, when $f$ has multiple fixed points, the choice of the base point matters for studying the HCL. This is in clear contrast with the other dynamical properties considered in this paper. Nevertheless, the assertion ``there is a choice of the base point such that $f$ satisfies the HCL whenever $\widehat{f}$ satisfies the HC" is false. Indeed, it suffices to modify only the value of $f(1)$, by setting for instance $f(1) = \frac{1}{2}$, to disprove the latter statement ($1$ is no longer an acceptable base point since it is not a fixed point of the map $f$). \end{remark} \subsection{Application} Let us first prove the following characterisation of hypercyclicity for ``backward shift Lipschitz operators" defined on (the Lipschitz-free space over) the countably branching tree of height one. \begin{proposition} \label{hyperleftshift} Let $M= \mathbb{N} \cup \{0\}$ be a pointed metric space and let $f:M \to M$ be the map defined by $f(0)=0$ and $f(n)= n-1$ whenever $n \geq 1$. Assume that $M$ is endowed with a metric $d$ such that $f$ is Lipschitz and $d(n,m)=d(n,0)+d(0,m)$ whenever $n \neq m \in \mathbb{N}$. \begin{enumerate} \item Then the following conditions are equivalent: \begin{enumerate} \item $\underset{n\to+\infty}{\liminf}\, d(n,0)=0$. \item $\widehat{f}$ is hypercyclic. \end{enumerate} \item We also have equivalence between the two stronger conditions: \begin{enumerate} \item $\lim\limits_{n \to \infty} d(n,0) =0$. \item $\widehat{f}$ is mixing. \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} According to Proposition~\ref{PropTreesl1}, $$\psi: \delta(n) \in \mathcal{F}(M) \mapsto d(n,0)e_n \in \ell_1$$ induces an onto linear isometry between $\mathcal{F}ree(M)$ and $\ell_1$. Hence $T=\psi\circ\widehat{f}\circ\psi^{-1} : \ell_1 \to \ell_1$ is conjugate to $\widehat{f}$ while $Te_n=\frac{d(n-1,0)}{d(n,0)}e_{n-1}$ for $n\geqslant2$ and $Te_1=0$. Thus $T$ is a unilateral weighted backward shift acting on $\ell_1$.
According to \cite[Theorem 1.40]{Survey} (see also Remark 1.41 therein), we can deduce that \begin{align*} \widehat{f} \text{ is hypercyclic } &\iff T \text{ is hypercyclic } \\ &\iff \underset{n\to+\infty}{\limsup}\,\dfrac{d(1,0)}{d(2,0)}\times\dfrac{d(2,0)}{d(3,0)}\times\ldots\times\dfrac{d(n-1,0)}{d(n,0)}=+\infty \\ &\iff \underset{n\to+\infty}{\liminf}\, d(n,0)=0, \end{align*} which proves $(1)$. The proof of assertion $(2)$ is similar (replacing $\limsup$ by $\lim$) and based on the characterisation of mixing weighted backward shifts acting on $\ell_1$; see \cite[Theorem~4.8 and Example~4.9~(a)]{GrPe}. \end{proof} As an easy consequence, we provide a non-hypercyclic Lipschitz map $f : M \to M$ which satisfies the HCL. Thus, the HCL does not imply in general that $f$ itself is hypercyclic. \begin{example} \label{ExampleHCLnotTransitive} Let $M$ and $f :M \to M$ be as in the previous proposition with $d(0,n)=\frac{1}{2^n}$ for every $n \in \mathbb{N}$. It is clear that the orbit of any point of $M$ under $f$ is finite, and therefore cannot be dense. Nonetheless, the sequence $(d(n,0))_n$ decreases to $0$, so that $\widehat{f}$ is mixing. \end{example} As another application of Theorem~\ref{HCriterion}, we can state a modified version of \cite[Theorem 3]{DuMoZa_2013}. It corrects the former statement, which does not hold in general, as we will see in Example~\ref{Counter-example}. Note that the case when $\widehat{f}$ is the backward shift on (a copy of) $\ell_1$ is included in the following proposition (see again Example~\ref{Counter-example}). \begin{proposition} \label{PropBackShift} Let $(M,d)$ be a pointed metric space, and let $f:M \to M$ be a Lipschitz map such that $f(0)=0$. Let $(M_n)_{n\geq0}$ be a partition of $M$ such that $M_0= \left\lbrace 0 \right\rbrace$, and for every $n\geq 1$, $f_{|M_{n+1}}$ is injective, and $$f(M_{n+1}) = M_n.$$ Let $\lambda\in \mathbb{R}$ be such that $\dfrac{d^+_n}{\lambda^n} \underset{n\to \infty}{\rightarrow} 0$, where $d^+_n = \sup_{x\in M_n} d(x,0)$. Then $\lambda \widehat{f}$ is mixing. \end{proposition} \begin{proof} We apply Theorem~\ref{HCriterion} with $\mathcal{D}_1 = \mathcal{D}_2 = M$, $n_k = k$ and $g_k = g^k$ where $g : M \to M$ is the map defined as follows: $g(0) = 0$, and for every $n\geq 1$ and $y\in M_n$, $g(y) = x$ where $x$ is the unique element of $M_{n+1}$ such that $f(x) = y$.\\ Let us check the three conditions of Theorem~\ref{HCriterion}: \begin{enumerate} \item For every $x\in M$, $f^n(x) = 0$ when $n$ is large enough, and hence condition $(1)$ of Theorem~\ref{HCriterion} is satisfied. \item Let $k \geq 0$ and $y\in M_k$. If $k=0$ then $y=0$, so that $g_n(y) = 0$. If $k\geq 1$ then $g_n(y) \in M_{k + n}$ and we have $$ \dfrac{d(g_n(y),0)}{|\lambda|^n} = |\lambda|^k \dfrac{d(g_n(y),0)}{|\lambda|^{k+n}} \leq |\lambda|^k \dfrac{d^+_{k + n}}{|\lambda|^{k+n}} \underset{n \to \infty}{\rightarrow} 0.$$ \item For every $y\in M$, $f^n \circ g_n(y) = y$ and hence $d(f^n\circ g_n(y),y)=0$. \end{enumerate} Thanks to Theorem~\ref{HCriterion}, we conclude that $\lambda \widehat{f}$ satisfies the Hypercyclicity Criterion with respect to the full sequence, so it is mixing. \end{proof} We give a counter-example to Proposition~\ref{PropBackShift} in the case when $\frac{d^+_n}{\lambda^n}$ does not converge to $0$. Moreover, this provides an example of a Lipschitz operator $\widehat{f}$ which is supercyclic but not hypercyclic.
\begin{example}\label{Counter-example} Once more, let $M$ and $f : M \to M$ be as in Proposition~\ref{hyperleftshift} with $d(0,n) = n!$ for every $n \in \mathbb{N}$. Recall that $\Phi : \delta(n) \in \mathcal{F}(M) \mapsto n! \, e_n \in \ell_1$ defines an onto isometry and $\widehat{f} : \mathcal{F}ree(M) \to \mathcal{F}ree(M)$ is conjugate to $T : \ell_1 \to \ell_1$ given by $$T(e_{n+1}) = \dfrac{e_n}{n+1}.$$ For any $\lambda \in \mathbb{R}$, the operator $\lambda T$ is compact and therefore not hypercyclic. Hence $\lambda \widehat{f}$ is not hypercyclic either, while $$\left|\dfrac{d^+_n}{\lambda^n} \right| = \dfrac{n!}{|\lambda|^n}\to +\infty.$$ To check that $\widehat{f}$ is supercyclic, it is easy to see that the conditions of Theorem~\ref{LipSCC} below are satisfied. \end{example} Next, we exhibit a Lipschitz map $f : M \to M$ which has a dense orbit in $M$ while $\widehat{f}$ is cyclic but not supercyclic. \begin{example} \label{ExamplefTTfhatNOTsuper} Let $M=\mathbb{N} \cup \{0\}$ be as in Proposition~\ref{hyperleftshift}, where $(d_n)_n:=(d(0,n))_n$ is decreasing and tending to zero. This time we let $f : M \to M$ be the $1$-Lipschitz map defined by $f(0)=0$ and $f(n)=n+1$ otherwise. Then the orbit of $1$ under $f$ is dense in $M$, and since $\|\widehat{f}\|=\mathrm{Lip}(f) \leqslant1$, we get that $\widehat{f}$ is not hypercyclic. Let us show that $\widehat{f}$ is not even supercyclic. As usual, $\widehat{f}$ is conjugate to a bounded operator $T : \ell_1 \to \ell_1$ given by $T(e_{n}) = \dfrac{d_{n+1}}{d_n} e_{n+1}$. It is clear that $e_1$ is a cyclic vector for $T$, but $T$ is not supercyclic because it does not have dense range. \end{example} Finally, the following example shows that there is a pointed metric space $M$ and a Lipschitz map $f : M \to M$ such that both $f$ and $\widehat{f}$ are hypercyclic. \begin{example} Let $\Sigma_2$ be the space of $0$-$1$-sequences, that is, $\Sigma_{2}=\lbrace (x_n)_{n\in\mathbb{N}}:\, x_n\in\lbrace0,1\rbrace\rbrace$. The sequence $(0)_{n \in \mathbb{N}}$ is considered to be the base point of $\Sigma_2$. Let $d$ be the metric on $\Sigma_2$ defined by: \[d(x,y)=\sum_{n=1}^{+\infty}\dfrac{|x_n-y_n|}{2^n}\] where $x=(x_1,x_2, \ldots)$ and $y=(y_1,y_2,\ldots)$. We consider the following map: \[ \begin{array}{lrcl} \sigma : & \Sigma_2 & \longrightarrow & \Sigma_2 \\ & (x_1,x_2,x_3, \ldots) & \longmapsto & (x_2,x_3, x_4, \ldots) \end{array} \] The dynamical system $(\Sigma_2,\sigma)$ is often called the backward shift on two symbols. Note that $\sigma(0)=0$ and $\sigma$ is $2$-Lipschitz. It is well known that $\sigma$ is Devaney chaotic (\cite[Theorem 1.36]{GrPe}) and mixing (\cite[Exercise 1.4.1]{GrPe}). Consequently, $\widehat{\sigma}$ is also Devaney chaotic and mixing thanks to \cite[Theorem~2.3]{MuPe}. We wish to recall that $\sigma$ is related, via a quasi-conjugacy, to the doubling map acting on the unit circle $\mathbb{T}$ endowed with the normalized distance $d$ defined by: \[ \forall \; \theta_1,\theta_2 \in [0,1[, \quad d(e^{2\pi i\theta_1},e^{2\pi i\theta_2})=\begin{cases} |\theta_1-\theta_2| &\text{ if } |\theta_1-\theta_2| \leqslant\frac{1}{2} \\ 1-|\theta_1-\theta_2| &\text{ if } |\theta_1-\theta_2| \geqslant\frac{1}{2} \end{cases}. \] Indeed, let $D:\mathbb{T}\rightarrow\mathbb{T}$ be the doubling map $Dz=z^2$.
It is known \cite[Example 1.37]{GrPe} that the map \begin{equation} \begin{array}{lrcl} \phi : & \Sigma_2 & \longrightarrow & \mathbb{T} \\ & (x_n) & \longmapsto & \exp{\left(2\pi i \sum_{n=1}^{\infty}\dfrac{x_n}{2^{n}}\right)} \end{array} \label{2} \end{equation} defines a quasi-conjugacy from $\sigma$ to $D$. Note that $\phi(0)=1$ and $\phi$ is 1-Lipschitz. By Proposition \ref{3}, $\widehat{D}:\mathcal{F}ree(\mathbb{T})\to\mathcal{F}ree(\mathbb{T})$ is Devaney chaotic and mixing as well. \end{example} \subsection{Some other criteria} In the same way as we did for the HC, we may also ``push downward" the conditions of other well-known criteria to obtain metric conditions for Lipschitz operators. We will quickly mention two examples: the ``Supercyclicity Criterion" (see \cite[Theorem 1.14]{Survey}) and the ``Chaoticity Criterion" (see \cite[Theorem 6.10]{Survey}). \begin{theorem}[Supercyclicity Criterion for Lipschitz Operators] \label{LipSCC} ~\\ Let $(M,d)$ be a pointed separable metric space, $f:M\to M$ be a Lipschitz map such that $f(0)=0$. Assume that there exist an increasing sequence of integers $(n_k)_{k\in\mathbb{N}}$, two dense subsets $\mathcal{D}_1$, $\mathcal{D}_2$ in $M$ and a sequence of maps $g_{n_k}:\mathcal{D}_2 \to M$ such that, for any $x\in\mathcal{D}_1$ and $y\in\mathcal{D}_2$ the following conditions hold: \begin{enumerate} \item $d(f^{n_k}(x),0)\, d(g_{n_k}(y),0)\underset{k\to+\infty}{\longrightarrow}0$; \item $d(f^{n_k}\circ g_{n_k}(y),y)\underset{k\to+\infty}{\longrightarrow}0$; \end{enumerate} Then $\widehat{f}$ is supercyclic. \end{theorem} \begin{corollary} Let $M=\mathbb{N} \cup \{0\}$ be a pointed metric space endowed with a metric $d$ such that the map $f:M\rightarrow M$ defined by $f(0)=0$, $f(1)=0$ and $f(n+1)=n$ is Lipschitz. Then $\widehat{f}$ is supercyclic. \end{corollary} \begin{theorem}[A Chaoticity criterion for Lipschitz operators]\label{Lipschitz Chaoticity criterion} ~\\ Let $(M,d)$ be a pointed separable metric space and let $f:M\to M$ be a Lipschitz map such that $f(0)=0$. Assume that there exist a dense subset $\mathcal{D}$ in $M$ and a map $g:\mathcal{D} \to \mathcal{D}$ such that, for any $x\in\mathcal{D}$ the following conditions hold: \begin{enumerate} \item $\sum_{n=0}^{\infty}d(f^{n}(x),0)$ and $\sum_{n=0}^{\infty}d(g^{n}(x),0)$ are convergent; \item $f\circ g=\mathrm{Id}_{M}$; \end{enumerate} Then $\widehat{f}$ satisfies the Chaoticity Criterion. \end{theorem} \begin{remark} The map $f$ given in Example \ref{ExampleHCnotHCL} does not satisfy the chaoticity criterion for Lipschitz operators, but $\widehat{f}$ satisfies the Chaoticity Criterion. \end{remark} Similarly to Proposition~\ref{hyperleftshift}, we characterise below Devaney chaos for ``backward shift Lipschitz operators" defined on (the Lipschitz-free space over) the countably branching tree of height one. \begin{proposition}\label{ChaoShift} Let $M= \mathbb{N} \cup \{0\}$ be a pointed metric space and let $f:M \to M$ be the map defined by $f(0)=0$ and $f(n)= n-1$ whenever $n \geq 1$. Assume that $M$ is endowed with a metric $d$ such that $f$ is Lipschitz and $d(n,m)=d(n,0)+d(0,m)$ whenever $n \neq m \in \mathbb{N}$. Then the following conditions are equivalent: \begin{enumerate} \item $\widehat{f}$ is Devaney chaotic. \item The series $\sum_{n=1}^{\infty}\, d(n,0)$ converges.
\end{enumerate} If moreover the sequence $\big( d(n,0)\big)_n$ is decreasing, then the two conditions above are equivalent to \begin{enumerate} \item[(3)] $f$ satisfies the chaoticity criterion for Lipschitz operators. \end{enumerate} \end{proposition} \begin{proof} As in the proof of Proposition \ref{hyperleftshift}, the operator $\widehat{f}$ is conjugate to the weighted backward shift operator $T$ defined on $\ell_1$ by $$ Te_1=0 \quad \text{and} \quad \forall n\geqslant 2, \quad Te_n=\dfrac{d(n-1,0)}{d(n,0)}e_{n-1} .$$ So $\widehat{f}$ is chaotic if and only if $T$ is. Therefore, by \cite[Theorem 6.12]{Survey}, we obtain \begin{align*} \widehat{f} \text{ chaotic }&\Longleftrightarrow \sum_{n\geqslant1}\left(\dfrac{d(1,0)}{d(n,0)}\right)^{-1}<+\infty\\ &\Longleftrightarrow \sum_{n=1}^{\infty}\, d(n,0)<+\infty, \end{align*} which proves $(1)\Longleftrightarrow(2)$. It is clear that $(3)\Rightarrow(1)$, so let us show that $(2)\Rightarrow(3)$. Let $g:M\rightarrow M$ be the map defined by $g(0)=0$ and $g(n)=n+1$. Fix $m\geqslant1$. Since $d(g^n(m),0)\leqslant d(n,0)$ and $\sum_{n=1}^{\infty}\, d(n,0)$ converges, we have that the series $\sum_{n=1}^{\infty}\, d(g^n(m),0)$ also converges. Moreover, we have $f\circ g=\mathrm{Id}_M$ and the series $\sum_{n=1}^{\infty}\, d(f^n(m),0)$ converges as well, since $f^n(m)=0$ for every $n\geq m$. \end{proof} Recall that we proved in Proposition~\ref{PropPeriodic} that if $\mathrm{Per}(f)$ is dense in $M$, then $\mathrm{Per}(\widehat{f})$ is dense in $\mathcal{F}(M)$. We claim that the converse is not true. \begin{example} \label{ExamplePeriodic} Let $M$ and $f: M \to M$ be as in Proposition~\ref{ChaoShift}, with $d(0,n)=\frac{1}{2^n}$ for every $n \geq 1$. Clearly $\mathrm{Per}(f)=\lbrace0\rbrace$ is not dense in $M$. On the other hand, since $$\sum_{n=1}^{\infty}d(n,0)=\sum_{n=1}^{\infty}\dfrac{1}{2^n}<+\infty,$$ $\widehat{f}$ is Devaney chaotic thanks to Proposition~\ref{ChaoShift}. \end{example} \section{The particular case of compact intervals} From now on, we will consider metric spaces $M= [a,b]$, where $a<b\in \mathbb{R}$, and $f : M \to M$ will be a Lipschitz map having (at least) one fixed point $c\in [a,b]$. For such metric spaces $M$, we have a surjective isometry $\Phi : \delta(x) \in \mathcal{F}(M) \mapsto \mathbf{1}_{[c,x]} \in L^1([a,b])$ (where $\mathbf{1}_{[c,x]}$ is understood as $-\mathbf{1}_{[x,c]}$ when $x\leq c$). Thus, $\widehat{f} : \mathcal{F}(M) \to \mathcal{F}(M)$ is conjugate to an operator $T : L^1([a,b]) \to L^1([a,b])$. This operator acts on indicator functions as follows: if $a\leq s \leq t \leq b$, we have \begin{align}\label{formulesurL1} T(\mathbf{1}_{[s,t]})=\left\{\begin{array}{cl} \mathbf{1}_{[f(s),f(t)]},& \text{if} \ \ f(s) \leq f(t) \\ -\mathbf{1}_{[f(t),f(s)]}, & \text{if} \ \ f(t) \leq f(s). \end{array}\right. \end{align} The next theorem gives sufficient conditions on $f$ ensuring that $\widehat{f}$ is hypercyclic, thereby exhibiting a new class of hypercyclic operators acting on $L^1([a,b])$. To our knowledge, hypercyclic operators defined on $L^1(I)$, where $I$ is a bounded interval of $\mathbb{R}$, have not been studied much. If $I= [0, +\infty)$ or $I = \mathbb{R}$, translation operators $T_t : f \mapsto f(\cdot + t)$ have been studied, as well as the dynamical properties of the $C_0$-semigroup $(T_t)_{t\geq 0}$, on weighted $L^p$-spaces $L^p(I, \omega \,\text{d}\lambda)$, where $\lambda$ is the Lebesgue measure and $\omega$ is a weight on $I$.
For instance, in \cite{DSW}, necessary and sufficient conditions are given in terms of the weight $\omega$ that ensure the hypercyclicity of $T_t$ or $(T_t)_{t\geq 0}$. \begin{theorem}\label{mainthminterval} If $f:[a,b]\to[a,b]$ is a Lipschitz and topologically transitive map with a fixed point $c\in[a,b]$, then $\widehat{f}$ is weakly mixing. \\ If moreover $f$ admits at least two fixed points, then $\widehat{f}$ is mixing. \end{theorem} \begin{proof} Our main ingredient is a result due to Barge and Martin (see \cite[Theorem~3]{Barge}), the next formulation can be found in \cite[Theorem~2.19]{Sy}. It states that, since $f$ is topologically transitive, one of the following cases holds: \begin{enumerate} \item[$(i)$] $f$ is mixing. \item[$(ii)$] $c \in (a,b)$ is the unique fixed point of $f$, $f([a,c])=[c,b]$, $f([c,b])=[a,c]$ and both maps $f^{2}_{|[a,c]}$, $f^{2}_{|[c,b]}$ are mixing. \end{enumerate} If $f$ has at least two fixed points then it is mixing, and $\widehat{f}$ is also mixing thanks to \cite[Theorem~2.3]{MuPe}. Otherwise, $c \in (a,b)$ is the unique fixed point of $f$ and $f([a,c])=[c,b]$, $f([c,b])=[a,c]$ and both maps $f^{2}_{|[a,c]}$, $f^{2}_{|[c,b]}$ are mixing. Let $h_a=f^{2}_{|[a,c]}$ and $h_b=f^{2}_{|[c,b]}$. By the latter, we have that $$\widehat{h}_a:\mathcal{F}([a,c]) \to \mathcal{F}([a,c]) \quad \text{ and } \quad \widehat{h}_b:\mathcal{F}([c,b]) \to \mathcal{F}([c,b])$$ are mixing. It is enough to show that $\widehat{f ^2}$ is mixing. To do so, we will show that the corresponding conjugate map associated to $\widehat{f^2}$ (see \eqref{formulesurL1}), say $T: L^1([a,b]) \to L^1([a,b])$, is mixing. We will consider $L^1([a,c])$ and $L^1([c,b])$ as subspaces of $L^1([a,b])$. Let $B(u,r), B(v,r') \subset L^1([a,b])$ be two open balls and write $u = u_1 + u_2, v=v_1 + v_2$ where $u_1, v_1 \in L^1([a,c])$ and $u_2, v_2 \in L^1([c,b])$. The operator $T$ maps $L^1([a,c])$ into $L^1([a,c])$ and $L^1([c,b])$ into $L^1([c,b])$. Its restrictions to these two spaces, that we will denote by $T_1$ and $T_2$ respectively, are mixing. Indeed, $T_1$ is conjugate to $\widehat{h_a}$ while $T_2$ is conjugate to $\widehat{h_b}$. Hence, there exists $N\geq 1$ such that for any $n\geq N$, $$T_1^n(B(u_1, r/2)) \cap B(v_1, r'/2) \neq \emptyset \quad \text{ and } \quad T_2^n(B(u_2, r/2)) \cap B(v_2, r'/2)\neq \emptyset.$$ Then, for every $n\geq N$, there exist $u_{1,n} \in B(u_1, r/2)$ and $u_{2,n} \in B(u_2, r/2)$ such that $$T_1^n(u_{1,n}) \in B(v_1, r'/2) \quad \text{ and } \quad T_2^n(u_{2,n}) \in B(v_2, r'/2).$$ We deduce that $u_n := u_{1,n} + u_{2,n} \in B(u, r)$, where we naturally see $L^1([a,c])$ and $L^1([c,b])$ as subspaces of $L^1([a,b])$. Moreover, we have $$T^n(u_n) = T_1^n(u_{1,n}) + T_2^n(u_{2,n}) \in B(v_1, r'/2) + B(v_2, r'/2) \subset B(v, r').$$ We proved that for every $n\geq N$, $T^n(B(u, r)) \cap B(v, r') \neq \emptyset$, which shows that $T$ is mixing. \end{proof} \begin{remark} If $c=a$ or $c=b$, then necessarily $f$ has another fixed point. Indeed, assume that $c=a$, the case $c=b$ being similar. The map $f$ is continuous and transitive, so it must be onto. In particular, there exists $y\in (a,b]$ such that $f(y)=b$. Hence, the continuous function $h(x)=f(x)-x$ satisfies $h(y)\geq 0$ and $h(b)\leq 0$ so it vanishes at some point which is a fixed point of $f$, distinct from $a$. \end{remark} \begin{corollary} \label{CorDC} Let $f:[a,b] \to [a,b]$ be a Lipschitz and topologically transitive map with a fixed point $c\in[a,b]$. Then $\widehat{f}$ is Devaney chaotic. 
\end{corollary} \begin{proof} It is known \cite{VeBe} that a continuous map $g : I \to I$, where $I$ is a real interval, is topologically transitive if and only if it is Devaney chaotic. By assumption, our Lipschitz map $f:[a,b]\to [a,b]$ is transitive, so it is Devaney chaotic as well. By Proposition~\ref{PropPeriodic}, the set of periodic points of $\widehat{f}$ is dense in $\mathcal{F}(M)$. Moreover, $\widehat{f}$ is hypercyclic thanks to Theorem~\ref{mainthminterval}. Therefore, $\widehat{f}$ is Devaney chaotic. \end{proof} \begin{remark} We do not know whether $\widehat{f} : \mathcal{F}(\mathbb{R}) \to \mathcal{F}(\mathbb{R})$ is hypercyclic whenever $f : \mathbb{R} \to \mathbb{R}$ is a transitive Lipschitz map (having a fixed point). In fact, the decomposition given by \cite[Theorem~2.19]{Sy} was crucial to our proof (see also \cite[Theorem 2.2]{splitingtheorem} for a similar statement in the more general setting of locally connected compact metric spaces). We do not know if a comparable result holds in the case of unbounded intervals. Nevertheless, any weakly mixing Lipschitz map $f : \mathbb{R} \to \mathbb{R}$ produces a weakly mixing Lipschitz operator acting on $L^1(\mathbb{R})$ thanks to \cite[Theorem~2.3]{MuPe} (see page 2 of \cite{Nagar} for an example). \end{remark} \subsection{Application} In this subsection we provide two Lipschitz self-maps $f$ of $[0,1]$ which illustrate the two cases of \cite[Theorem~2.19]{Sy}. They are topologically transitive, so that their linearizations turn out to be both mixing and Devaney chaotic, thanks to Theorem~\ref{mainthminterval}, Corollary~\ref{CorDC} and the computations below. The resulting operators $\widehat{f}$, acting on $L^1([0,1])$, will also be made explicit. \begin{example}\label{Example1interval} The map $f : [0,1] \to [0,1]$ defined below is often called the tent map; it is transitive, as explained in \cite[Examples 1.12~(a)]{GrPe}. Here we consider $0$ to be the base point of $[0,1]$. \begin{minipage}{0.50\linewidth} \begin{equation*} f(x) = \left\{ \begin{array}{rl} 2x & \text{if } 0 \leq x \leq \frac 12, \\[0.5em] 2-2x & \text{if } \frac 12 \leq x \leq 1. \end{array} \right. \end{equation*} \end{minipage} \begin{minipage}[c]{0.45\linewidth} \begin{tikzpicture}[thick, scale=2.3] \draw[->,>=latex, gray] (-0.2,0) -- (1.2,0) node[above] {$x$}; \draw[->,>=latex, gray] (0,-0.2) -- (0,1.2) node[right] {$y$}; \draw[domain=0:0.5, blue,ultra thick, smooth] plot ({\x},{2*\x}) node[above] {$f(x)$}; \draw[domain=0.5:1, blue,ultra thick, smooth] plot ({\x},{-2*\x+2}) ; \fill (0,0) circle (0.2pt) node[below left] {$0$}; \fill (1,0) circle (0.2pt) node[below] {$1$}; \fill (0.5,0) circle (0.2pt) node[below] {$\frac 12$}; \fill (0.25,0) circle (0.2pt) node[below] {$\frac 14$}; \fill (0.75,0) circle (0.2pt) node[below] {$\frac 34$}; \fill (0,1) circle (0.2pt) node[left] {$1$}; \fill (0,0.5) circle (0.2pt) node[left] {$\frac 12$}; \fill (0,0.25) circle (0.2pt) node[left] {$\frac 14$}; \fill (0,0.75) circle (0.2pt) node[left] {$\frac 34$}; \end{tikzpicture} \end{minipage} Consequently, $\widehat{f}$ is mixing thanks to Theorem~\ref{mainthminterval}. The latter fact can also be checked by showing that $\widehat{f}$ satisfies the HCL with respect to the full sequence, where $g(x)=\frac{x}{2}$ stands for the ``inverse" mapping. We also know that $\widehat{f}$ is Devaney chaotic thanks to Corollary~\ref{CorDC}.
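Let us briefly sketch the HCL verification just mentioned; here the dyadic points $\mathcal{D}$ of $[0,1]$ are our choice of dense subsets $\mathcal{D}_1=\mathcal{D}_2$ in Theorem~\ref{HCriterion}, with $\lambda=1$ and $n_k=k$. Any $x=p/2^m\in\mathcal{D}$ satisfies $f^{n}(x)=0$ as soon as $n\geq m+1$, while for every $y\in[0,1]$ and every $n\geq 1$, \[ g^{n}(y)=\frac{y}{2^{n}}\underset{n\to+\infty}{\longrightarrow}0 \qquad\text{and}\qquad f^{n}\circ g^{n}(y)=f^{n}\Big(\frac{y}{2^{n}}\Big)=y, \] since $f$ doubles every point of $\big[0,\frac{1}{2}\big]$. Hence the three conditions of Theorem~\ref{HCriterion} hold with respect to the full sequence, which yields the mixing property.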
As explained at the beginning of this section, $\widehat{f}$ is conjugate to an operator $T : L^1([0,1]) \to L^1([0,1])$ which maps any indicator function $\mathbf{1}_{[a,b]}, a<b$, to \begin{equation*} T(\mathbf{1}_{[a,b]}) = \left\{ \begin{array}{rl} \mathbf{1}_{[2a,2b]} & \text{if } b\leq \frac{1}{2}, \\[0.5em] -\mathbf{1}_{[-2b+2,-2a+2]} & \text{if } a\geq \frac{1}{2}. \end{array} \right. \end{equation*} One can check that $T$ acts as a kind of backward shift on the Haar basis $(h_m)_{m}$ of $L^1([0,1])$ (more details will be given in the second example). Of course, this is not always the case: for instance the tent map is conjugate to the ``logistic map" $L(x) = 4x(1-x)$ (see \cite[Example~1.6]{GrPe}), and the operator $T' : L^1([0,1]) \to L^1([0,1]) $ associated to $\widehat{L}$ does not necessarily send a Haar function to another Haar function. \end{example} We now study another well-known example. It is a Lipschitz map $f : [0,1] \to [0,1]$, with only one fixed point $1/2$, and such that $f$ is Devaney chaotic but not weakly mixing. \begin{example} \label{Example2} Let us consider the following Lipschitz map: \begin{minipage}{0.50\linewidth} \begin{equation*} f(x) = \left\{ \begin{array}{rl} \frac 12 +2x & \text{if } 0 \leq x \leq \frac 14, \\[0.5em] \frac 32 -2x & \text{if } \frac 14 \leq x \leq \frac 12 ,\\[0.5em] 1-x & \text{if } \frac 12 \leq x \leq 1. \end{array} \right. \end{equation*} \end{minipage} \begin{minipage}[c]{0.45\linewidth} \begin{tikzpicture}[thick, scale=2.3] \draw[->,>=latex, gray] (-0.2,0) -- (1.2,0) node[above] {$x$}; \draw[->,>=latex, gray] (0,-0.2) -- (0,1.2) node[right] {$y$}; \draw[domain=0:0.25, blue,ultra thick, smooth] plot ({\x},{2*\x+0.5}) node[above, right] {$f(x)$}; \draw[domain=0.25:0.5, blue,ultra thick, smooth] plot ({\x},{-2*\x+3/2}) ; \draw[domain=0.5:1, blue,ultra thick, smooth] plot ({\x},{1-\x}) ; \fill (0,0) circle (0.2pt) node[below left] {$0$}; \fill (1,0) circle (0.2pt) node[below] {$1$}; \fill (0.5,0) circle (0.2pt) node[below] {$\frac 12$}; \fill (0.25,0) circle (0.2pt) node[below] {$\frac 14$}; \fill (0.75,0) circle (0.2pt) node[below] {$\frac 34$}; \fill (0,1) circle (0.2pt) node[left] {$1$}; \fill (0,0.5) circle (0.2pt) node[left] {$\frac 12$}; \fill (0,0.25) circle (0.2pt) node[left] {$\frac 14$}; \fill (0,0.75) circle (0.2pt) node[left] {$\frac 34$}; \end{tikzpicture} \end{minipage} Note that here the metric space that we consider is $M=[0,1]$ with $\frac 12$ being the distinguished point of $M$. In what follows, it will be quite convenient to consider the second iterate of $f$, so we give its expression explicitly below. \begin{minipage}{0.50\linewidth} \begin{equation*} f^2(x) = \left\{ \begin{array}{rl} \frac 12 -2x & \text{if } 0 \leq x \leq \frac 14 , \\[0.5em] -\frac 12 + 2x & \text{if } \frac 14 \leq x \leq \frac 34 ,\\[0.5em] \frac 52 - 2x & \text{if } \frac 34 \leq x \leq 1. \end{array} \right.
\end{equation*} \end{minipage} \begin{minipage}[c]{0.45\linewidth} \begin{tikzpicture}[thick, scale=2.3] \draw[->,>=latex, gray] (-0.2,0) -- (1.2,0) node[above] {$x$}; \draw[->,>=latex, gray] (0,-0.2) -- (0,1.2) node[right] {$y$}; \draw[domain=0:0.25, blue,ultra thick, smooth] plot ({\x},{-2*\x+0.5}); \draw[domain=0.25:0.75, blue,ultra thick, smooth] plot ({\x},{2*\x-1/2}) node[above] {$f^2(x)$}; \draw[domain=0.75:1, blue,ultra thick, smooth] plot ({\x},{5/2-2*\x}) ; \fill (0,0) circle (0.2pt) node[below left] {$0$}; \fill (1,0) circle (0.2pt) node[below] {$1$}; \fill (0.5,0) circle (0.2pt) node[below] {$\frac 12$}; \fill (0.25,0) circle (0.2pt) node[below] {$\frac 14$}; \fill (0.75,0) circle (0.2pt) node[below] {$\frac 34$}; \fill (0,1) circle (0.2pt) node[left] {$1$}; \fill (0,0.5) circle (0.2pt) node[left] {$\frac 12$}; \fill (0,0.25) circle (0.2pt) node[left] {$\frac 14$}; \fill (0,0.75) circle (0.2pt) node[left] {$\frac 34$}; \end{tikzpicture} \end{minipage} We then have the following claims with respect to $f$ and $\widehat{f}$. \begin{enumerate}[$(i)$, itemsep=5pt] \item \textit{The map $f$ is Devaney chaotic, but not weakly mixing} (see \cite[Example~2.21]{Sy}). \item \textit{The map $f^2$ is not topologically transitive.} Indeed, notice that $f^2([0,\frac 12]) \subset [0 , \frac 12]$ and $f^2([\frac 12 , 1]) \subset [\frac 12 , 1]$. \item \textit{The map $\widehat{f^2}$ satisfies the HCL.} Indeed, $\widehat{f^2}$ satisfies the HCL, where the ``inverse" map $g$ is defined, for every $x \in [0,1]$, by $g(x) = \frac x2 + \frac 14$. \item \textit{The operator $\widehat{f}$ is mixing and Devaney chaotic.} First, since $f$ is transitive, we can deduce that $\widehat{f}$ is weakly mixing and Devaney chaotic by applying Theorem~\ref{mainthminterval} and Corollary~\ref{CorDC}. Furthermore, one can prove directly that actually $\widehat{f}$ satisfies the HCL condition with respect to the full sequence, where the ``inverse" map $g$ is defined by $$ g(x) = \left\{ \begin{array}{cc} 1-x & \text{ if } 0 \leq x < \frac 12, \\ -\frac 12 x + \frac 34 & \text{ if } \frac 12 < x \leq 1. \end{array}\right. $$ We consider the dyadic numbers $\mathcal{D}$ in $[0,1]$ as the dense subset of $M$ on which one has to check the conditions given by the HCL. Since we do not pass to a subsequence for proving those conditions, $\widehat{f}$ is mixing. \end{enumerate} \begin{remark} Since $f^2$ is not topologically transitive while $\widehat{f^2}$ is Devaney chaotic and mixing, the implication ``$\widehat{h}$ Devaney chaotic and mixing $\implies$ $h$ transitive" does not hold in general. \end{remark} We now describe the operator $S : L^1([0,1]) \to L^1([0,1])$ conjugate to $\widehat{f}$ by its action on the Haar basis $(h_m)_m$. Recall that the Haar functions $(h_m)_{m\geq 1}$ are given by $h_1 = \mathbf{1}_{[0,1]}$ and for every $n\geq 0$ and every $1\leq k \leq 2^n$: $$h_{2^n+k}(t) = \mathbf{1}_{\left[\frac{2k-2}{2^{n+1}},\frac{2k-1}{2^{n+1}}\right]}(t) - \mathbf{1}_{\left[\frac{2k-1}{2^{n+1}},\frac{2k}{2^{n+1}}\right]}(t), \ \ t\in [0,1].$$ It is well-known that $(h_m)_{m\geq 1}$ is a Schauder basis of $L^1([0,1])$. We have $$S(h_1) = S(\mathbf{1}_{[0,1]}) = -\mathbf{1}_{[f(1),f(0)]} = -\mathbf{1}_{\left[0,\frac{1}{2}\right]},$$ $$S(h_2)=\mathbf{1}_{\left[0,\frac{1}{2}\right]}, ~ S(h_3)=2\mathbf{1}_{\left[\frac{1}{2}, 1\right]}, ~ S(h_4)=\mathbf{1}_{\left[0,\frac{1}{4}\right]} - \mathbf{1}_{\left[\frac{1}{4},\frac{1}{2}\right]}.$$ Now let $n\geq 2$.
We distinguish three cases:\\ $\bullet$ If $1\leq k \leq 2^{n-2}$, the two intervals defining $h_{2^n+k}$ are included in $\left[0, \frac{1}{4} \right]$ so that \begin{align*} S(h_{2^n+k}) & = \mathbf{1}_{\left[\frac{1}{2}+\frac{2k-2}{2^{n}},\frac{1}{2}+\frac{2k-1}{2^{n}}\right]} - \mathbf{1}_{\left[\frac{1}{2}+\frac{2k-1}{2^{n}},\frac{1}{2}+\frac{2k}{2^{n}}\right]} \\ & = \mathbf{1}_{\left[\frac{2(2^{n-2}+k)-2}{2^{n}},\frac{2(2^{n-2}+k)-1}{2^{n}}\right]} - \mathbf{1}_{\left[\frac{2(2^{n-2}+k)-1}{2^{n}},\frac{2(2^{n-2}+k)}{2^{n}}\right]} \\ & = h_{2^{n-1}+2^{n-2}+k}. \end{align*} $\bullet$ If $2^{n-2}+1\leq k \leq 2^{n-1}$, the two intervals that define $h_{2^n+k}$ are included in $\left[\frac{1}{4}, \frac{1}{2} \right]$ and with similar computations we get $$ S(h_{2^n+k}) = h_{2^{n-1}+3\cdot 2^{n-2}-k+1}. $$ $\bullet$ Finally, if $2^{n-1}+1\leq k \leq 2^n$, both intervals that define $h_{2^n+k}$ are included in $\left[\frac{1}{2}, 1 \right]$ and we have $$ S(h_{2^n+k}) = h_{2^n+2^n-k+1}. $$ \end{example} \subsection{An extension to some compact \texorpdfstring{$\mathbb{R}$-trees}{R-trees}} Our Theorem~\ref{mainthminterval} can actually be extended to a more general setting. Recall that an $\mathbb{R}$-tree is an arc-connected metric space $(M,d)$ with the property that there is a unique arc connecting any pair of points $x\neq y\in M$, and that this arc is moreover isometric to the real segment $[0,d(x,y)]\subset\mathbb{R}$. A point $x\in M$ is called a branching point if $M\setminus\lbrace x\rbrace$ has at least three connected components; we let $\mathrm{Br}(M)$ denote the set of all branching points of $M$. The main ingredient for proving Theorem~\ref{mainthminterval} was the decomposition given by \cite[Theorem~2.19]{Sy}. A similar result actually holds for compact $\mathbb{R}$-trees $M$ such that $M \setminus \mathrm{Br}(M)$ has finitely many connected components: If $f : M \to M$ is a continuous and transitive map, then \begin{itemize} \item Either $f$ is mixing, \item Or there is a positive integer $n_0$ such that there exist an interior fixed point $c$ and subtrees $M_1,\ldots,M_{n_0}$ of $M$ with $\cup_i M_i = M$, $M_i \cap M_j = \{c\}$ whenever $i \neq j$ and $f(M_i) =M_{i+1 (\mathrm{mod} \; n_0)}$ for $1 \leq i\leq n_0$. Moreover, $f^{n_0}\mathord{\upharpoonright}_{M_i}$ is mixing for every $1 \leq i \leq n_0$. \end{itemize} The previous statement can be found in \cite[Proposition~2.6]{Wang2011} and it is based on \cite[Proposition 3.1]{Alseda}. Therefore, we can state the following (the proof is similar to the one of Theorem~\ref{mainthminterval} and left to the reader). \begin{theorem} \label{ThmRtrees} Let $M$ be a compact $\mathbb{R}$-tree such that $M \setminus \mathrm{Br}(M)$ has finitely many connected components, and let $f:M\to M$ be a Lipschitz and topologically transitive map with a fixed point $c \in M$. Then either $\widehat{f}$ is mixing or there exists $n_0 \in \mathbb N$ such that $\widehat{f}^{n_0}$ is mixing. In any case, $\widehat{f}$ is weakly mixing. \end{theorem} It is proved in \cite{carac} that a separable metric space $(M,d)$ is locally arcwise connected and uniquely arcwise connected if and only if it admits an equivalent metric $d'$ such that $(M,d')$ is an $\mathbb{R}$-tree. Since $\mathcal{F}(M,d)$ and $\mathcal{F}(M,d')$ are isomorphic, the previous theorem applies to an even more general class of spaces. In the same way we can also state the following extension of Corollary~\ref{CorDC}.
\begin{corollary} Let $M$ be a compact $\mathbb{R}$-tree such that $M \setminus \mathrm{Br}(M)$ has finitely many connected components. Let $f:M\to M$ be a Lipschitz and topologically transitive map with a fixed point $c \in M$. Then $\widehat{f}$ is Devaney chaotic. \end{corollary} \begin{proof} It is proved in \cite[Lemma 2.3]{Wang2011} that $f : M \to M$ is topologically transitive if and only if it is Devaney chaotic. Therefore, exactly as in the proof of Corollary~\ref{CorDC}, we use Proposition~\ref{PropPeriodic} as well as Theorem~\ref{ThmRtrees} to conclude. \end{proof} \addtocontents{toc}{\protect\setcounter{tocdepth}{0}} \section*{Acknowledgments} Part of this work was carried out when the second named author was working in the ``Laboratoire d'analyse et de math\'ematiques appliqu\'ees" in Marne-la-Vall\'ee. He is deeply grateful for the excellent working conditions there. The authors would also like to thank Romuald Ernst and Anton\'in Proch\'azka for useful conversations, as well as Evgeny Abakumov and St\'ephane Charpentier for valuable comments which improved the presentation of this paper. \end{document}
\begin{document} \author{Marcos Salvai\thanks{ Partially supported by \textsc{fonc}y\textsc{t}, Antorchas, \textsc{ ciem\thinspace (conicet)} and \textsc{sec}y\textsc{t\thinspace (unc)}.}} \title{On the geometry of the space of oriented lines of the hyperbolic space} \date{} \maketitle \begin{abstract} Let $H$ be the $n$-dimensional hyperbolic space of constant sectional curvature $-1$ and let $G$ be the identity component of the isometry group of $H$. We find all the $G$-invariant pseudo-Riemannian metrics on the space $\mathcal{G}_{n}$ of oriented geodesics of $H$ (modulo orientation preserving reparametrizations). We characterize the null, time- and space-like curves, providing a relationship between the geometries of $ \mathcal{G}_{n}$ and $H$. Moreover, we show that $\mathcal{G}_{3}$ is K\"{a}hler and find an orthogonal almost complex structure on $\mathcal{G} _{7}$. \end{abstract} \noindent MSC 2000: 53A55, 53C22, 53C35, 53C50, 53D25. \noindent \textsl{Key words and phrases: }hyperbolic space, space of geodesics, invariant metric, K\"{a}hler, octonions. \begin{center} \textbf{1. The space of geodesics of a Hadamard manifold} \end{center} \noindent Let $M$ be a Hadamard manifold (a complete simply connected Riemannian manifold with nonpositive sectional curvature) of dimension $n+1$ . An \emph{oriented geodesic} $c$ of $M$ is a complete connected totally geodesic oriented submanifold of $M$ of dimension one. We may think of $c$ as the equivalence class of unit speed geodesics $\gamma :\mathbf{R} \rightarrow M$ with image $c$ such that $\left\{ \dot{\gamma}\left( t\right) \right\} $ is a positive basis of $T_{\gamma \left( t\right) }c$ for all $t$ . Let $\mathcal{G}=\mathcal{G}\left( M\right) $ denote the space of all oriented geodesics of $M$. The space of geodesics of a manifold all of whose geodesics are periodic with the same length is studied with detail in \cite {besse}. The geometry of the space of oriented lines of Euclidean space is studied in \cite{gk, salvaimm, salvai2}. Let $T^{1}M$ be the unit tangent bundle of $M$ and $\xi $ the spray of $M$, that is, the vector field on $T^{1}M$ defined by $\xi \left( v\right) =\left. d/dt\right| _{0}\gamma _{v}^{\prime }\left( t\right) $, where $ \gamma _{v}$ is the unique geodesic in $M$ with initial velocity $v$. Clearly, $\mathcal{G}$ may be identified with the set of oriented leaves of the foliation of $T^{1}M$ induced by $\xi $. By \cite{keilhauer}, if $M$ is Hadamard, this foliation is regular in the sense of Palais \cite{palais}. Hence, $\mathcal{G}$ admits a unique differentiable structure of dimension $ 2n$ such that the natural projection $T^{1}M\rightarrow \mathcal{G}$ is a submersion. Fix $o\in M$ and let Exp $:T_{o}M\rightarrow M$ denote the geodesic exponential map. Let $S=\left\{ v\in T_{o}M\mid \left\| v\right\| =1\right\} \cong S^{n}$. We identify as usual $T_{v}S\cong v^{\bot }\subset T_{o}M$. Hence, $TS\cong \left\{ \left( v,x\right) \mid v\in S\text{ and } \left\langle v,x\right\rangle =0\right\} $. Let $F:TS\rightarrow \mathcal{G}$ be defined by \[ F\left( v,x\right) =\left[ \gamma \right] \text{,} \] where $\gamma $ is the unique geodesic in $M$ with initial velocity $\tau _{0}^{1}v$ (here $\tau $ denotes parallel transport along the geodesic $ t\rightarrow $ Exp\thinspace $\left( tx\right) $ of $M$). This is called the minitwistor construction in \cite{hitchin}. Keilhauer proved in \cite {keilhauer} that $F$ is a diffeomorphism. \begin{center} \textbf{2. 
The geometry of $\mathcal{G}$ for the hyperbolic space} \end{center} \noindent Let $H=H^{n+1}$ be the hyperbolic space of constant sectional curvature $-1$ and dimension $n+1$. Consider on $\Bbb{R}^{n+2}$ the basis $ \left\{ e_{0},e_{1},\dots ,e_{n+1}\right\} $ and the inner product whose associated norm is given by $\left\| x\right\| =\left\langle x,x\right\rangle =-x_{0}^{2}+x_{1}^{2}+\dots +x_{n+1}^{2}$. Then $H=\left\{ x\in \Bbb{R}^{n+2}\mid \left\| x\right\| =-1\text{ and }x_{0}>0\right\} $ with the induced metric. Let $G$ be the identity component of the isometry group of $H$, that is, \[ G=O_{o}\left( 1,n+1\right) =\left\{ g\in O\left( 1,n+1\right) \mid \left( ge_{0}\right) _{0}>0\text{ and }\det g>0\right\} \text{.} \] In the following we denote $\mathcal{G}_{m}=\mathcal{G}\left( H^{m}\right) $ (or simply $\mathcal{G}$ if no confusion is possible). The group $G$ acts on $\mathcal{G}$ as follows: $g\left[ \gamma \right] =\left[ g\circ \gamma \right] $. This action is transitive, since $H$ is two-point homogeneous, and smooth, since $G$ acts smoothly on $T^{1}H$. Let $\gamma _{o}$ be the geodesic in $H$ with $\gamma _{o}\left( 0\right) =e_{0}$ and initial velocity $e_{1}\in T_{e_{0}}H$. The isotropy subgroup of $G$ at $c_{o}:=\left[ \gamma _{o}\right] $ is \[ G_{o}=\left\{ \text{diag}\left( T_{t},A\right) \mid t\in \Bbb{R},A\in SO_{n}\right\} \cong \Bbb{R}\times SO_{n}\text{,} \] where $T_{t}=\left( \begin{array}{ll} \cosh t & \sinh t \\ \sinh t & \cosh t \end{array} \right) $. Therefore we may identify $\mathcal{G}$ with $G/G_{o}$ in the usual way. Let $\frak{g}$ be the Lie algebra of $G$ and let \[ \frak{g}_{o}=\left\{ \text{diag}\left( tR,A\right) \mid t\in \Bbb{R},A\in so_{n}\right\} \] be the Lie algebra of $G_{o}$ (here $R=\left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array} \right) $). Let $B$ be the bilinear form on $\frak{g}$ defined by $B\left( X,Y\right) =\frac{1}{2}\,$tr\thinspace $\left( XY\right) $, which is well-known to be a multiple of the Killing form of $\frak{g}$, hence nondegenerate. Besides, the canonical projection $\pi :G\rightarrow H$, $\pi \left( g\right) =g\left( e_{0}\right) $, is a pseudo-Riemannian submersion. Let $\frak{g}=\frak{g}_{0}\oplus \frak{h}$ be the orthogonal decomposition with respect to $B$. Then \[ T_{c_{o}}\mathcal{G}=\frak{h}:=\left\{ x_{h}+y_{v}\mid x,y\in \Bbb{R} ^{n}\right\} , \] where for column vectors $x,y\in \Bbb{R}^{n}$, \[ x_{h}=\left( \begin{array}{cc} 0_{2} & \left( x,0\right) ^{t} \\ \left( x,0\right) & 0_{n} \end{array} \right) \text{ \ \ and\ \ \ \ }y_{v}=\left( \begin{array}{cc} 0_{2} & \left( 0,y\right) ^{t} \\ \left( 0,-y\right) & 0_{n} \end{array} \right) \] (here the exponent $t$ denotes transpose and $0_{m}$ the $m\times m$ zero matrix). We chose this notation since $x_{h}$ and $y_{v}$ are horizontal and vertical, respectively, tangent vectors in $T_{\left( e_{0},e_{1}\right) }\left( T^{1}H\right) $ with respect to the canonical projection $ T^{1}H\rightarrow H$. 
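Before stating the classification, let us record a direct computation with these block matrices, which is all that is needed at the beginning of the proof below. For $x,y\in \Bbb{R}^{n}$, writing $\left( x,y\right) $ for the $n\times 2$ matrix with columns $x$ and $y$, one has \[ x_{h}+y_{v}=\left( \begin{array}{cc} 0_{2} & \left( x,y\right) ^{t} \\ \left( x,-y\right) & 0_{n} \end{array} \right) \text{,\qquad }\left( x_{h}+y_{v}\right) ^{2}=\left( \begin{array}{cc} \left( x,y\right) ^{t}\left( x,-y\right) & 0 \\ 0 & \left( x,-y\right) \left( x,y\right) ^{t} \end{array} \right) \text{,} \] and since both diagonal blocks of the square have trace $\left| x\right| ^{2}-\left| y\right| ^{2}$, one gets $B\left( x_{h}+y_{v},x_{h}+y_{v}\right) =\frac{1}{2}\,$tr\thinspace $\left( \left( x_{h}+y_{v}\right) ^{2}\right) =\left| x\right| ^{2}-\left| y\right| ^{2}$.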
\begin{theorem} \label{homogeneous}For each $n\geq 1$ there exists a $G$-invariant pseudo-Riemannian metric $g_{1}$ on $\mathcal{G}_{n+1}$ whose associated norm at $c_{o}$ is given by \[ \left\| x_{h}+y_{v}\right\| _{1}=\left| x\right| ^{2}-\left| y\right| ^{2}\text{.} \] For $n=2,$ if one identifies $\Bbb{R}^{2}=\Bbb{C}$ as usual, there exists a $G$-invariant metric $g_{0}$ on $\mathcal{G}_{3}$ whose associated norm at $c_{o}$ is given by \[ \left\| x_{h}+y_{v}\right\| _{0}=\left\langle ix,y\right\rangle \text{.} \] For $n\ne 2$, any $G$-invariant pseudo-Riemannian metric on $\mathcal{G}_{n+1}$ is homothetic to $g_{1}$. Any $G$-invariant pseudo-Riemannian metric on $\mathcal{G}_{3}$ is of the form $\lambda g_{0}+\mu g_{1}$ for some $\lambda ,\mu \in \Bbb{R}$ not simultaneously zero. All the metrics are symmetric and have split signature $\left( n,n\right) $. In particular, $\mathcal{G}$ does not admit any $G$-invariant Riemannian metric and the geodesics in $\mathcal{G}$ through $c_{o}$ are exactly the curves $s\mapsto \exp _{G}\left( sX\right) c_{o}$, for $X\in \frak{h}$. \end{theorem} \noindent \textbf{Proof. }One computes easily that $B\left( X,X\right) =\left\| X\right\| _{1}$ for all $X\in \frak{h}$. Since $B$ is $G$-invariant, $g_{1}$ defines a $G$-invariant metric on $\mathcal{G}$. Let $Z=$ diag\thinspace $\left( R,0_{n}\right) $, $\frak{m}=\{ \text{diag}\left( 0_{2},A\right)\mid A\in so_{n}\} $ and $\frak{g}_{\lambda }=\{ U\in \frak{g}\mid \text{ad}_{Z}U=\lambda U\} $. One verifies that $\frak{g}_{0}= \frak{g}_{o}$ and $\frak{g}_{\pm 1}=\left\{ x_{h}\pm x_{v}\mid x\in \Bbb{R}^{n}\right\} $. Moreover, one has the decompositions \[ \frak{g}_{0}=\Bbb{R}Z\oplus \frak{m}\;\;\;\text{and\ \ \ }\frak{h}=\frak{g}_{1}\oplus \frak{g}_{-1}\text{,} \] which are preserved by the action of $\frak{m}$. Hence $\frak{h}$ is $\frak{g}_{0}$-invariant. Since $B$ is nondegenerate and $G_{o}$ is connected, any other pseudo-Riemannian metric $g$ on $\mathcal{G}$ has the form $g\left( U,V\right) =B\left( TU,V\right) $ for some $T:\frak{h}\rightarrow \frak{h}$ commuting with ad$_{Z}$ and ad$_{\frak{m}}$. In particular, $T$ preserves $\frak{g}_{\pm 1}$. We call $T_{\pm }$ the restrictions of $T$ to the corresponding subspaces. Under the identification $\frak{g}_{\pm 1}\equiv \Bbb{R}^{n}$, $x_{h}\pm x_{v}\equiv x$, the action of $\frak{m}\equiv so_{n}$ on $\Bbb{R}^{n}$ is the canonical one. If $T_{\pm }\in Gl\left( \frak{g}_{\pm 1}\right) \equiv Gl\left( n,\Bbb{R}\right) $ commutes with every $A\in so_{n}$, then either $T_{\pm }$ is a nonzero multiple of the identity or $n=2 $ and $T_{\pm }=a_{\pm }I_{2}+b_{\pm }J$ where $J=\left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right) $, for some not simultaneously zero constants $a_{\pm }$ and $b_{\pm }$. Next we consider the case $n=2$ and show that $a_{+}=a_{-}$ and $b_{-}=-b_{+}$. For $x\ne 0$ we denote $x^{\pm }=x_{h}\pm x_{v}$ and compute \begin{eqnarray*} B\left( T\left( x^{+}\right) ,x^{-}\right) &=&B\left( \left( a_{+}x+b_{+}ix\right) ^{+},x^{-}\right) \\ &=&a_{+}B\left( x^{+},x^{-}\right) +b_{+}B\left( \left( ix\right) ^{+},x^{-}\right) \\ &=&2a_{+}\left| x\right| ^{2}+0\text{.} \end{eqnarray*} Since $T$ must be symmetric with respect to $B$, this expression coincides with $B\left( x^{+},T\left( x^{-}\right) \right) $, which by similar computations equals $2a_{-}\left| x\right| ^{2}$. Hence $a_{+}=a_{-}$.
Using again the symmetry of $T$ in the case \[ B\left( T\left( x^{-}\right) ,\left( ix\right) ^{+}\right) =B\left( x^{-},T\left( \left( ix\right) ^{+}\right) \right) \] one obtains that $b_{-}=-b_{+}$. Finally, since $2\left( x_{h}+y_{v}\right) =\left( x+y\right) ^{+}+\left( x-y\right) ^{-}$, one computes that the metric associated with $T$ is homothetic to $g_{1}$ if $b_{+}=0$ and to $g_{0}$ if $a_{+}=0$. The case $n\ne 2$ is simpler since it does not involve $b_{\pm }$. Next we show that for any of the metrics above, $\mathcal{G}$ is a symmetric space. Let $G^{\uparrow }=\left\{ g\in O\left( 1,n+1\right) \mid \left( ge_{0}\right) _{0}>0\right\} $ be the isometry group of $H$ and let $C=$ diag\thinspace $\left( I_{2},-I_{n}\right) \in G^{\uparrow }$, which induces an involutive diffeomorphism $\widetilde{C}$ of $\mathcal{G}$ by $\widetilde{C}\left[ \gamma \right] =\left[ C\circ \gamma \right] $ fixing exactly $c_{o}$. If $n=2$, $C\in G$, hence $\widetilde{C}$ is clearly an isometry for any $G$-invariant metric on $\mathcal{G}_{3}$. The same happens for $n\ne 2$. Indeed, in this case, up to homotheties, we have seen that the unique metric on $\mathcal{G}_{m}$ with $m\ne 3$ comes from a multiple of the Killing form of $\frak{g}$, which is invariant by the action of $G^{\uparrow }$. The statement regarding geodesics follows from the theory of symmetric spaces, since conjugation by $C$ is an involutive automorphism of $\frak{g}$ whose $\left( -1\right) $-eigenspace is $\frak{h}$ and which preserves the given metrics. $\square $ \noindent \textbf{Remarks. }a) In contrast with the space of oriented lines of $\Bbb{R}^{n}$, which only for $n=3,7$ admits pseudo-Riemannian metrics invariant by the induced transitive action of a connected closed subgroup of the identity component of the isometry group (see \cite{salvaimm}), $\mathcal{G}_{n}$ admits $G$-invariant metrics for all $n$. b) The metric $g_{0}$ is the analogue of the metric defined in the Euclidean case in \cite{shepherd, gk1}. We will see below that also in the hyperbolic case it admits a K\"{a}hler structure. c) For any complete simply connected Riemannian manifold $M$ of negative curvature, the space $\mathcal{G}\left( M\right) $ of its oriented geodesics has a canonical pseudo-Riemannian metric, which is in general only continuous; see \cite{Kanai}. If $M$ is the hyperbolic space, then $g_{1}$ is the canonical metric on $\mathcal{G}$. d) If $H$ has dimension two, then $\mathcal{G}$ is isometric to the two-dimensional de Sitter sphere. We recall some well-known facts about the boundary at infinity of the hyperbolic space and the action of $G$ on it. For a geodesic $\gamma $ in $H$, $\gamma \left( \infty \right) $ is defined to be the unique $z\in S^{n}$ such that $\lim_{t\rightarrow \infty }\gamma \left( t\right) /\gamma \left( t\right) _{0}=e_{0}+z\in \Bbb{R}^{n+2}$. One defines analogously $\gamma \left( -\infty \right) $. Sometimes we will identify $\Bbb{R}^{n+1}$ with $e_{0}^{\bot }$ and $S^{n}$ with $\left\{ e_{0}\right\} \times S^{n}$. The group $G$ acts on $S^{n}$ by directly conformal (that is, orientation preserving conformal) diffeomorphisms. More precisely, any $g\in G$ induces the directly conformal transformation $\widetilde{g}$ of $S^{n}$, well-defined by $\widetilde{g}\left( \gamma \left( \infty \right) \right) =\left( g\circ \gamma \right) \left( \infty \right) $, and any directly conformal transformation of $S^{n}$ can be realized in this manner.
\begin{proposition} \label{proptoni}If $S$ is a subgroup of $G$ acting transitively on $\mathcal{ G}$, then $S=G$. \end{proposition} \noindent \textbf{Proof. }By the main result of \cite{toni}, it suffices to show that $ S$ acts irreducibly on $\Bbb{R}^{n+2}$. Suppose that $S$ leaves the nontrivial subspace $V$ invariant. If $V$ is degenerate, then $V$ contains a null line, say $\Bbb{R}\left( e_{0}+z\right) $, with $z\in \Bbb{R}^{n+1}$, $ \left| z\right| =1$. Hence $S$ takes the oriented line $\left[ \gamma \right] $ with $\gamma \left( \infty \right) =z$ to another line with the same point at $\infty $. If $V$ is nondegenerate, either $V$ or its complement (also $S$-invariant) intersects $H$. Let us call $ H_{1}\varsubsetneq H$ the intersection, which is a totally geodesic submanifold of $H$. Then $S$ takes any oriented line contained in $H_{1}$ to a line contained in $H_{1}$. If $H_{1}$ is a point $p$, then $S$ takes any line through $p$ to a line through $p$. Therefore the action of $S$ on $ \mathcal{G}$ is not transitive. $\square $ \noindent \textbf{Remark. }The hyperbolic case contrasts with the Euclidean one: We found in \cite{salvaimm} a pseudo-Riemannian metric on the space of oriented lines of $\Bbb{R}^{7}=$ Im\thinspace $\Bbb{O}$ which is invariant by the transitive action of $G_{2}\ltimes \Bbb{R}^{7}$, where $G_{2}$ is the automorphism group of the octonions $\Bbb{O}$. \begin{center} \textbf{3. Null, space- and time-like curves} \end{center} \noindent In order to give a geometric interpretation for a curve in $ \mathcal{G}$ endowed with some of the $G$-invariant metrics to be null, space- or time-like, we introduce the following concept, which makes sense for any Hadamard manifold. \noindent \textbf{Definition.} Let $H$ be a Hadamard manifold. Given a smooth curve $c$ in $\mathcal{G}$ defined on the interval $I$, a function $\varphi :\mathbf{R} \times I\rightarrow H$ is said to be a \emph{standard presentation of }$c$ if $s\mapsto \alpha _{t}\left( s\right) :=\varphi \left( s,t\right) $ is a unit speed geodesic of $H$ satisfying $c\left( t\right) =\left[ \alpha _{t}\right] $ and $\left\langle \dot{\beta}\left( t\right) ,\dot{\alpha} _{t}\left( 0\right) \right\rangle =0$ for all $t\in I$, where $\beta \left( t\right) =\varphi \left( 0,t\right) $. \begin{proposition} \label{presentation}Given a smooth curve $c:I\rightarrow \mathcal{G}$ and $p$ a point in the image of some \emph{(}any\emph{)} geodesic in the equivalence class $c\left( t_{o}\right) $, there exists a standard presentation $\varphi $ of $c$ such that $\varphi \left( 0,t_{o}\right) =p$. \end{proposition} \noindent \textbf{Proof.} Consider the submersion $\Pi :T^{1}H\rightarrow \mathcal{G}$ , $\Pi \left( v\right) =[\gamma _{v}]$. Let $v\left( t\right) $ be a lift of $c\left( t\right) $ to $T^{1}H$ with $v\left( t_{o}\right) \in T_{p}^{1}H$, and let $\psi :\mathbf{R}\times I\rightarrow H$ be defined by $\psi \left( s,t\right) =\gamma _{v\left( t\right) }\left( s\right) $. We look for a function $f:I\rightarrow \mathbf{R}$ such that \[ \varphi \left( s,t\right) =\psi \left( s+f\left( t\right) ,t\right) \] satisfies the required properties. 
Clearly $\alpha _{t}\left( s\right) =\varphi \left( s,t\right) $ has unit speed and \[ c\left( t\right) =\Pi \left( v\left( t\right) \right) =\Pi \left( \gamma _{v\left( t\right) }^{\prime }\left( f\left( t\right) \right) \right) =\left[ \alpha _{t}\right] \text{.} \] One can verify easily that taking as $f$ the solution of the differential equation \[ f^{\prime }\left( t\right) =-\frac{\left\langle \psi _{t}\left( f\left( t\right) ,t\right) ,\psi _{s}\left( f\left( t\right) ,t\right) \right\rangle }{\left\| \psi _{s}\left( f\left( t\right) ,t\right) \right\| ^{2}} \] (subindexes denote partial derivatives) with $f\left( t_{o}\right) =0$, then $\varphi ( 0,t_{o}) =p$ and $\left\langle \dot{\beta}\left( t\right) ,\dot{ \alpha}_{t}\left( 0\right) \right\rangle =0$ for all $t\in I$, where $\beta $ is as in the definition of the standard presentation. $\square $ The following Proposition characterizes the null, time- and space-like curves of $\mathcal{G}$, providing a relationship between the geometries of $ \mathcal{G}$ and $H$. \begin{proposition} \label{relation}For the metric $g_{1}$, a smooth curve $c$ in $\mathcal{G} _{n}$ is null \emph{(}respectively, space-, time-like\emph{)} if and only if, for any standard presentation, the rate of variation of the directions, that is, $\left\| \frac{D}{dt}\dot{\alpha}_{t}\left( 0\right) \right\| $, coincides with \emph{(}respectively, is smaller, larger than\emph{)} the rate of displacement $\left\| \dot{\beta}\left( t\right) \right\| $ for all $ t$ \emph{(}here $\frac{D}{dt}$ denotes covariant derivative along $\beta $ \emph{)}. For the metric $g_{0}$ on $\mathcal{G}_{3}$, a smooth curve $c$ in $\mathcal{ G}_{3}$ is null \emph{(}respectively, space-\nolinebreak , time-like\emph{)} if and only if, for any standard presentation, \[ \left\{ \dot{\beta}\left( t\right) ,\frac{D}{dt}\dot{\alpha}_{t}\left( 0\right) ,\dot{\alpha}_{t}\left( 0\right) \right\} \] is linearly dependent \emph{(}respectively, positively, negatively oriented \emph{)} for all $t$. \end{proposition} \noindent \textbf{Proof. }Let $\left[ \gamma \right] $ be an oriented geodesic of a Hadamard manifold and let $\mathcal{J}_{\gamma }$ be the space of Jacobi fields along $\gamma $ orthogonal to $\dot{\gamma}$. First we show that $ L_{\gamma }:\mathcal{J}_{\gamma }\rightarrow T_{\left[ \gamma \right] } \mathcal{G}$ given by \begin{eqnarray} L_{\gamma }\left( J\right) =\left( d/dt\right) _{0}\left[ \gamma _{t}\right] \text{,} \label{el} \end{eqnarray} where $\gamma _{t}$ is a variation of $\gamma $ by unit speed geodesics associated with the Jacobi field $J$, is a well-defined vector space isomorphism. Indeed, let $\mathcal{P}:T^{1}M\rightarrow \mathcal{G}$ be the canonical projection, which is a smooth submersion, by definition of the differentiable structure on $\mathcal{G}$. We compute \[ \left( d/dt\right) _{0}\left[ \gamma _{t}\right] =\left( d/dt\right) _{0} \mathcal{P}\left( \dot{\gamma}_{t}\left( 0\right) \right) =d\mathcal{P}_{ \dot{\gamma}\left( 0\right) }\left( \left( d/dt\right) _{0}\dot{\gamma} _{t}\left( 0\right) \right) . \] Now, let p $:T^{1}H\rightarrow H$ be the canonical projection and $\mathcal{K }:T_{\dot{\gamma}\left( 0\right) }\left( T^{1}H\right) \rightarrow \dot{ \gamma}\left( 0\right) ^{\bot }\subset T_{\gamma \left( 0\right) }H$ the connection operator. 
It is well-known that $\left( d\text{p},\mathcal{K} \right) :T_{\dot{\gamma}\left( 0\right) }\left( T^{1}H\right) \rightarrow T_{\gamma \left( 0\right) }H\oplus \dot{\gamma}\left( 0\right) ^{\bot }$ is a bijection and \[ \left( d/dt\right) _{0}\dot{\gamma}_{t}\left( 0\right) =\left( d\text{p}, \mathcal{K}\right) ^{-1}\left( J\left( 0\right) ,J^{\prime }\left( 0\right) \right) \] (see for instance \cite{besse}). Therefore, $L_{\gamma }$ is well-defined. Next we show that for any $J\in \mathcal{J}_{\gamma }$ one has \begin{eqnarray} \left\| L_{\gamma }\left( J\right) \right\| _{1} &=&\left\| J\left( 0\right) \right\| ^{2}-\left\| J^{\prime }\left( 0\right) \right\| ^{2} \label{nj} \\ \left\| L_{\gamma }\left( J\right) \right\| _{0} &=&\left\langle \dot{\gamma} \left( 0\right) \times J\left( 0\right) ,J^{\prime }\left( 0\right) \right\rangle \text{.} \nonumber \end{eqnarray} We may suppose without loss of generality that $c=c_{o}$ and $\gamma =\gamma _{o}.$ Let $c^{\prime }\left( 0\right) =x_{h}+y_{v}$ with $x,y\in \Bbb{R} ^{n} $. Then the Jacobi field along $\gamma _{o}$ satisfying $L_{\gamma _{o}}\left( J\right) =c^{\prime }\left( 0\right) $ is the one determined by \[ J\left( 0\right) =d\pi _{I}\left( x_{h}\right) \text{ and }J^{\prime }\left( 0\right) =d\pi _{I}\left( y_{h}\right) \text{,} \] where $\pi :G\rightarrow H$ is as before the canonical projection. In fact, clearly, $\gamma _{t}\left( s\right) =\exp \left( tx_{h}\right) \exp \left( ty_{v}\right) \gamma _{o}\left( s\right) $ is a variation of $\gamma _{o}$ by unit speed geodesics. Let us see that the associated Jacobi field is $J.$ Indeed, \[ J\left( 0\right) =\left. \frac{d}{dt}\right| _{0}\gamma _{t}\left( 0\right) =\left. \frac{d}{dt}\right| _{0}\exp \left( tx_{h}\right) e_{0}=d\pi _{I}\left( x_{h}\right) \text{,} \] since $\gamma _{o}\left( 0\right) =e_{0}$, which is fixed by $\exp \left( ty_{v}\right) $. If $\frac{D}{dt}$ denotes covariant derivative along $ t\mapsto \gamma _{t}\left( 0\right) $ and $Z$ is as in the beginning of the proof of Theorem \ref{homogeneous}, then \begin{eqnarray*} J^{\prime }\left( 0\right) &=&\left. \frac{D}{dt}\right| _{0}\dot{\gamma} _{t}\left( 0\right) =\left. \frac{D}{dt}\right| _{0}d\left( \exp \left( tx_{h}\right) \exp \left( ty_{v}\right) \right) _{\pi \left( I\right) }e_{1} \\ &=&\left. \frac{D}{dt}\right| _{0}d\exp \left( tx_{h}\right) d\pi _{I}\text{ Ad}\left( \exp ty_{v}\right) Z \\ &=&d\pi _{I}\left. \frac{d}{dt}\right| _{0}e^{t\text{ad\thinspace } y_{v}}Z=d\pi _{I}\left[ y_{v},Z\right] =d\pi _{I}\left( y_{h}\right) \text{,} \end{eqnarray*} since $d\exp \left( tx_{h}\right) $ realizes the parallel transport and $ d\pi _{I}\left( Z\right) =e_{1}$. Therefore (\ref{nj}) is true by Theorem \ref{homogeneous}. Finally, suppose that $\varphi $ is a standard presentation of $c$ and let $\alpha _{t},\beta $ be as above. Let $J_{t}$ denote the Jacobi field along $\alpha _{t}$ associated with the variation $ \varphi $. Clearly, $\dot{c}\left( t\right) =L_{\alpha _{t}}\left( J_{t}\right) $, $J_{t}\left( 0\right) =\frac{d}{dt}\varphi \left( 0,t\right) =\dot{\beta}\left( t\right) $ and \[ J_{t}^{\prime }\left( 0\right) =\left. \frac{D}{ds}\right| _{0}\frac{d}{dt} \varphi \left( s,t\right) =\frac{D}{dt}\left. \frac{d}{ds}\right| _{0}\varphi \left( s,t\right) =\frac{D}{dt}\dot{\alpha}_{t}\left( 0\right) . \] Consequently, the proposition follows from (\ref{nj}). 
$\square $ \noindent \textbf{A geometric invariant of }$\mathcal{G}$ \noindent We have mentioned in the introduction that $\mathcal{G}\left( H^{n}\right) $ is diffeomorphic to $\Bbb{T}^{n}$, the space of all oriented lines of $\Bbb{R}^{n}$. For $n=3$ and $n=7$, we found in \cite{salvaimm} pseudo-Riemannian metrics on $\Bbb{T}^{n}$ invariant by the induced transitive action of a connected closed subgroup of $SO_{n}\ltimes \Bbb{R}^{n}$ (only for those dimensions such metrics exist). \begin{proposition} For $n=3,7$, no metric on $\mathcal{G}_{n}$ invariant by the identity component of the isometry group of $H^{n}$ is isometric to $\Bbb{T}^{n}$ endowed with any of the metrics above. \end{proposition} \noindent \textbf{Proof. }We now compute a pseudo-Riemannian invariant of $\mathcal{G}_{n}$ involving its periodic geodesics. For any $c\in \mathcal{G}$, let $A$ denote the subset of $T_{c}\mathcal{G}$ consisting of the velocities of periodic geodesics of $\mathcal{G}$ through $c$. We show next that the frontier of $A$ in $T_{c}\mathcal{G}$ is the union of two subspaces of half the dimension of $\mathcal{G}$ intersecting only at zero. By homogeneity we may suppose that $c=c_{o}$. Since by the proposition below $A=\left\{ \lambda x_{h}+x_{v}\mid x\in \Bbb{R}^{n}\text{, }\left| \lambda \right| <1\right\} $, the frontier of $A$ is $\frak{g}_{1}\cup \frak{g}_{-1}$. On the other hand, we have computed in \cite{salvai2} that the analogous invariant for $\Bbb{T}^{n}$ ($n=3,7$) is a subspace of half the dimension of $\Bbb{T}^{n}$. Hence the proposition follows. $\square $ \noindent \textbf{Remarks. }a) Of course we could have considered more standard invariants, like the curvature or the isometry group, but we chose this one since the geodesics can be described so easily. b) Clearly the difference in the invariants is related to the fact that the two horospheres through a point associated with opposite directions coincide in the Euclidean case but are different in the hyperbolic case. \begin{proposition} A geodesic in $\mathcal{G}$ with initial velocity $x_{h}+y_{v}$ is periodic if and only if $x=\lambda y$ for some $\lambda \in \Bbb{R}$ with $\left| \lambda \right| <1$. \end{proposition} \noindent \textbf{Proof. }We may suppose that $x_{h}+y_{v}\ne 0$. We compute that Ad\thinspace $\left( e^{tZ}\right) \left( x_{h}+y_{v}\right) \allowbreak =x_{h}^{t}+y_{v}^{t}$, where \[ x^{t}=\left( \cosh t\right) x+\left( \sinh t\right) y\text{ \ \ and \ \ } y^{t}=\left( \sinh t\right) x+\left( \cosh t\right) y\text{.} \] Now, there exists $s$ such that $\left\langle x^{s},y^{s}\right\rangle =0$ (take $\tanh \left( 2s\right) =-\frac{2\left\langle x,y\right\rangle}{\left| x\right| ^{2}+\left| y\right| ^{2}} $; indeed, $\left\langle x^{s},y^{s}\right\rangle =\frac{1}{2}\sinh \left( 2s\right) \left( \left| x\right| ^{2}+\left| y\right| ^{2}\right) +\cosh \left( 2s\right) \left\langle x,y\right\rangle $). Hence $\left[ x_{h}^{s},y_{v}^{s}\right] =0$ and consequently \[ \pi \exp \left( t\left( x_{h}^{s}+y_{v}^{s}\right) \right) =\pi \exp \left( tx_{h}^{s}\right) \exp \left( ty_{v}^{s}\right) =\pi \exp \left( tx_{h}^{s}\right) \text{,} \] which is a geodesic in $H$; in particular it is periodic only if it is constant, or equivalently, only for $x^{s}=0$. Since $Z\in \frak{g}_{0}$ and the metric is $G$-invariant, the geodesics with initial velocities $x_{h}^{t}+y_{v}^{t}$ are simultaneously periodic or not periodic for all $t$. Now, one verifies that $x^{s}=0$ if and only if $x=\lambda y$ for some $\lambda \in \Bbb{R}$ with $\left| \lambda \right| <1$, and the proposition follows. $\square $ \begin{center} \textbf{4.
Additional geometric structures on $\mathcal{G}$} \end{center} \noindent An almost Hermitian structure on a pseudo-Riemannian manifold $ \left( M,g\right) $ is a smooth tensor field $J$ of type $\left( 1,1\right) $ on $M$ such that $J_{p}$ is an orthogonal transformation of $\left( T_{p}M,g_{p}\right) $ and satisfies $J_{p}^{2}=-\,$id for all $p\in M$. If $ \nabla $ is the Levi Civita connection of $\left( M,g\right) ,$ then $\left( M,g,J\right) $ is said to be K\"{a}hler if $\nabla J=0$. \noindent \textbf{A K\"{a}hler structure on }$\mathcal{G}\left( H^{3}\right) $ \noindent Let $\mathcal{G}=\mathcal{G}_{3}$ and let $j_{o}$ be the endomorphism of $\frak{h}\equiv T_{c_{o}}\mathcal{G}\equiv \Bbb{C}\times \Bbb{C}$ given by $j_{o}\left( z,w\right) =\left( iz,iw\right) $. One checks that $j_{o}$ commutes with the action of $G_{o}$, is orthogonal for $g_{0}$ and $g_{1}$ and $j_{o}^{2}=-\,$id. Therefore $j_{o}$ defines an orthogonal almost complex structure on $\mathcal{G}_{3}$ for any $G$-invariant metric on it. \begin{proposition} The space $\left( \mathcal{G}_{3},J\right) $ is K\"{a}hler for any pseudo-Riemannian $G$-invariant metric on $\mathcal{G}_{3}$. \end{proposition} \noindent \textbf{Proof. }We show that for every geodesic $\gamma $ in $ \mathcal{G}_{3}$ and any parallel vector field $Y$ along $\gamma $, the vector field $JY$ along $\gamma $ is parallel. By homogeneity we may suppose that $\gamma \left( 0\right) =c_{o}$. Suppose that $\gamma \left( t\right) =\exp \left( tX\right) c_{o}$ for some $X\in \frak{h}$. By a well-known property of symmetric spaces, $Y=d\exp \left( tX\right) _{c_{o}}Y_{c_{o}}$. Since $J$ is $G$-invariant, $JY=d\exp \left( tX\right) _{c_{o}}JY_{c_{o}}$ and thus $JY$ is parallel along $\gamma $, as desired. $\square $ \noindent \textbf{An orthogonal almost complex structure on }$\mathcal{G} _{7} $ \noindent We present another model of $\mathcal{G}_{n+1}$ endowed with the metric $g_{1}$ and use it to define an orthogonal almost complex structure on $\mathcal{G}_{7}$. In the following we use the notations given before Proposition \ref{proptoni} of concepts related to the imaginary border of $H$. We recall that $g\in G$ is called a transvection of $H$ if it preserves a geodesic $\gamma $ of $H$ and $dg$ realizes the parallel transport along $\gamma $, that is, $g\left( \gamma \left( t\right) \right) =\gamma \left( t+s\right) $ for all $t$ and some $s$ and $dg_{\gamma \left( t\right) }$ realizes the parallel transport between $\gamma \left( t\right) $ and $\gamma \left( t+s\right) $ along $ \gamma $. For any unit $v\in T_{e_{0}}H=e_{0}^{\bot }=\Bbb{R}^{n+1}$ the transvections through $e_{0}\in H$ preserving the geodesic with initial velocity $v$ form a one parameter subgroup $\phi _{t}$ such that the corresponding one parameter group $\widetilde{\phi _{t}}$ of conformal transformations of $S^{n}$ (which we also call transvections, by abuse of notation) is the flow of the vector field on $S^{n}$ defined at $q\in S^{n}$ as the orthogonal projection of the constant vector field $v$ on $\Bbb{R} ^{n+1}$ onto $T_{q}S^{n}=q^{\bot }$. In particular $\widetilde{\phi _{t}}$ fixes $\pm v\in S^{n}$. For $\tau =\widetilde{\phi _{t}}$ we will need specifically the following standard facts: $*$) If $u\in S^{n}$ is orthogonal to $v$, then $v\in T_{u}S^{n}$ and if $ \tau \left( u\right) =\left( \cos \theta \right) u+\left( \sin \theta \right) v$, then $\left( d\tau \right) _{u}v$ is a vector in $T_{\tau \left( u\right) }S^{n}$ spanned by $u$ and $v$ of length $\cos \theta $. 
$**$) There exists a positive constant $c$ such that $\left( d\tau \right) _{\pm v}$ is a multiple $c^{\pm 1}$ of the identity map on $T_{\pm v}S^{n}=v^{\bot }$. Let $\Delta _{n}=\left\{ \left( p,p\right) \mid p\in S^{n}\right\} $ denote the diagonal in $S^{n}\times S^{n}$. The map \begin{eqnarray} \psi :\mathcal{G}_{n+1}\rightarrow \left( S^{n}\times S^{n}\right) \backslash \Delta _{n}\text{,\ \ \ \ \ }\psi \left( \left[ \gamma \right] \right) =\left( \gamma \left( -\infty \right) ,\gamma \left( \infty \right) \right) \label{gss} \end{eqnarray} is a well-defined diffeomorphism. We denote by $\widehat{g}$ the induced action of $g$ $\in G$ on $\left( S^{n}\times S^{n}\right) \backslash \Delta _{n}$, that is $\widehat{g}\left( p,q\right) =\left( \widetilde{g}\left( p\right) ,\widetilde{g}\left( q\right) \right) $. Given distinct points $ p,q\in S^{n}$, let $T_{p,q}$ denote the reflection on $\Bbb{R}^{n+1}$ with respect to the hyperplane orthogonal to $p-q$. \begin{proposition} If $\mathcal{G}_{n+1}$ is endowed with the metric $g_{1}$ and one considers on $\left( S^{n}\times S^{n}\right) \backslash \Delta _{n}$ the pseudo-Riemannian metric whose associated norm is \begin{equation} \left\| \left( x,y\right) \right\| _{\left( p,q\right) }=4\left\langle T_{p,q}x,y\right\rangle /\left| q-p\right| ^{2} \label{mss} \end{equation} for $x\in p^{\bot }$, $y\in q^{\bot }$, then the diffeomorphism $\psi $ of $ \emph{(}$\emph{\ref{gss})} is an isometry. \end{proposition} \noindent \textbf{Proof.} Clearly $\psi $ is $G$-equivariant. Since the metric $g_{1}$ on $\mathcal{G}_{n+1}$ is $G$-invariant, it is sufficient to show that the metric (\ref{mss}) on $\left( S^{n}\times S^{n}\right) \backslash \Delta _{n}$ is $G$-invariant as well and that $d\psi _{\left[ \gamma _{o}\right] }$ is a linear isometry. Given distinct points $p_{\pm }\in S^{n}$, we show first that for any $g\in G $ with $\widetilde{g}\left( e_{\pm 1}\right) =p_{\pm }$, $d\widehat{g} _{\left( -e_{1},e_{1}\right) }$ is a linear isometry. A straightforward computation shows that the given metric on $\left( S^{n}\times S^{n}\right) \backslash \Delta _{n}$ is invariant by the action of $SO_{n+1}$, since for all $k$ in this group, $T_{k\left( p\right) ,k\left( q\right) }\circ k=k\circ T_{p,q}$ for all $p,q\in S^{n}$, $p\ne q$. Hence we may suppose without loss of generality that $p_{\pm }=\pm \left( \cos \theta \right) e_{1}+\left( \sin \theta \right) e_{2}$ for some $\theta \in [0,\pi /2)$. Now, any directly conformal transformation $\widetilde{g}$ as above may be written as a composition $\tau ^{2}\circ \tau ^{1}\circ R$, where $R$ is a rotation fixing $e_{1}$ and $\tau ^{1}$ and $\tau ^{2}$ are transvections fixing $\left( -e_{1},e_{1}\right) $ and $\left( -e_{2},e_{2}\right) $, respectively. The assertion ($**$) above, with $v=e_{1}$ and $\tau =\tau ^{1}$, implies that $d\widehat{\tau ^{1}}_{\left( -e_{1},e_{1}\right) }$ is a linear isometry. Now we use the assertion ($*$) with $v=e_{2}$ and $u=e_{1}$ to see that $d\widehat{\tau ^{2}}_{\left( -e_{1},e_{1}\right) }:e_{1}^{\bot }\times e_{1}^{\bot }\rightarrow p_{-}^{\bot }\times p_{+}^{\bot }$ is a linear isometry. Let $\lambda _{\pm }v+x_{\pm }\in T_{\pm u}S^{n}=u^{\bot }$, with $ \lambda _{\pm }$ real numbers and $\left\langle x_{\pm },v\right\rangle =0$. 
One computes \begin{eqnarray} \left\| \left( \lambda _{-}v+x_{-},\lambda _{+}v+x_{+}\right) \right\| _{\left( -u,u\right) } &=&4\left( \lambda _{-}\lambda _{+}+\left\langle x_{-},x_{+}\right\rangle \right) /\left| 2u\right| ^{2} \label{tal} \\ &=&\left( \lambda _{-}\lambda _{+}+\left\langle x_{-},x_{+}\right\rangle \right) \text{.} \nonumber \end{eqnarray} On the other hand, call $d\tau _{\pm u}^{2}\left( v\right) =v_{\pm }$ and $ d\tau _{\pm u}^{2}\left( x_{\pm }\right) =y_{\pm }$. Hence $\left| v_{\pm }\right| =\cos \theta $. Since $d\tau _{\pm u}^{2}$ is conformal, $y_{\pm }$ is orthogonal to $v_{\pm }$ and has length $\left| x_{\pm }\right| \cos \theta $. Also, $y_{\pm }$ is orthogonal to $u$, hence it is left fixed by $ T_{p_{-},p_{+}}$. Therefore one computes \[ \left\| \left( \lambda _{-}v_{-}+y_{-},\lambda _{+}v_{+}+y_{+}\right) \right\| _{\left( -u,u\right) }=\frac{4\cos ^{2}\theta}{\left| p_{-}-p_{+}\right| ^{2}}\left( \lambda _{-}\lambda _{+}+ \left\langle x_{-},x_{+}\right\rangle \right) \text{,} \] which coincides with (\ref{tal}) since $\left| p_{-}-p_{+}\right| =2\cos \theta $. This completes the proof that $d\widehat{g}_{\left( -e_{1},e_{1}\right) }$ is a linear isometry. It remains only to show that $ d\psi _{\left[ \gamma _{o}\right] }$ is a linear isometry. We have that $\gamma _{o}\left( t\right) =\left( \cosh t,\sinh t,0\right) \in \Bbb{R}^{n+2}$. Let $J$ be the Jacobi field along $\gamma _{o}$ orthogonal to $\gamma _{o}$ and satisfying $J\left( 0\right) =x$ and $ J^{\prime }\left( 0\right) =y$, both in $T_{e_{0}}H$ orthogonal to $ e_{1}=\gamma ^{\prime }\left( 0\right) $. We show next that \[ d\psi _{\left[ \gamma _{o}\right] }L_{\gamma _{o}}\left( J\right) =\left( x-y,x+y\right) \text{,} \] where $L_{\gamma _{o}}$ was defined in (\ref{el}). By invariance of $\psi $ by rotations it is sufficient to see that \begin{eqnarray} d\psi _{\left[ \gamma _{o}\right] }L_{\gamma _{o}}\left( J_{\pm }\right) =\left( \pm e_{2},e_{2}\right) \text{,} \label{psielpm} \end{eqnarray} where $J_{-}\left( 0\right) =0$, $J_{-}^{\prime }\left( 0\right) =e_{2}$, $ J_{+}\left( 0\right) =e_{2}$ and $J_{+}^{\prime }\left( 0\right) =0$. Let now \[ A_{s}=\left( \begin{array}{cc} \cos s & -\sin s \\ \sin s & \cos s \end{array} \right) \text{ and }B_{s}=\left( \begin{array}{ccc} \cosh s & 0 & \sinh s \\ 0 & 1 & 0 \\ \sinh s & 0 & \cosh s \end{array} \right) \text{.} \] The field $J_{-}$ is associated to the variation of $\gamma _{o}$ corresponding to the one parameter group of isometries $s\mapsto A_{s}^{-}=$ diag\thinspace $\left( 1,A_{s},I_{n-1}\right) $. One computes $ A_{s}^{-}\left( \gamma _{o}\left( t\right) \right) =\left( \cosh t\right) e_{0}+\sinh t\left( \left( \cos s\right) e_{1}+\left( \sin s\right) e_{2}\right) \in H$. Hence \begin{eqnarray*} \left( A_{s}^{-}\circ \gamma _{o}\right) \left( \pm \infty \right) &=&\lim_{t\rightarrow \pm \infty }\left( \tanh t\right) \left( \left( \cos s\right) e_{1}+\left( \sin s\right) e_{2}\right) \\ &=&\pm \left( \cos s\right) e_{1}\pm \left( \sin s\right) e_{2}\text{,} \end{eqnarray*} whose derivative at $s=0$ is $\pm e_{2}$. Therefore $\left( \left. d/ds\right| _{0}\right) \psi \left[ A_{s}^{-}\circ \gamma _{o}\right] =\left( -e_{2},e_{2}\right) $. Using $B_{s}^{+}=$ diag\thinspace $\left( B_{s},I_{n-1}\right) $ instead of $A_{s}^{-}$ one verifies the remaining identity of (\ref{psielpm}). 
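For completeness, that verification runs parallel to the previous computation: $B_{s}^{+}\left( \gamma _{o}\left( t\right) \right) =\left( \cosh s\cosh t\right) e_{0}+\left( \sinh t\right) e_{1}+\left( \sinh s\cosh t\right) e_{2}\in H$, hence \begin{eqnarray*} \left( B_{s}^{+}\circ \gamma _{o}\right) \left( \pm \infty \right) &=&\lim_{t\rightarrow \pm \infty }\left( \frac{\tanh t}{\cosh s}\,e_{1}+\left( \tanh s\right) e_{2}\right) \\ &=&\pm \frac{1}{\cosh s}\,e_{1}+\left( \tanh s\right) e_{2}\text{,} \end{eqnarray*} whose derivative at $s=0$ is $e_{2}$ in both cases, so that $\left( \left. d/ds\right| _{0}\right) \psi \left[ B_{s}^{+}\circ \gamma _{o}\right] =\left( e_{2},e_{2}\right) $, as claimed.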
Finally, since $T_{-e_{1},e_{1}}$ clearly fixes $x,y$, the norm (\ref{mss}) of $\left( x-y,x+y\right) $ at $\left( -e_{1},e_{1}\right) $ is $4\left\langle x-y,x+y\right\rangle /\left| 2e_{1}\right| ^{2}=\left| x\right| ^{2}-\left| y\right| ^{2}$, which coincides with the norm of $L_{\gamma _{o}}\left( J\right) $ by (\ref{nj}). This shows that $d\psi _{\left[ \gamma _{o}\right] }$ is a linear isometry. $\square $ Let $\Bbb{O}$ denote the normed division algebra of the octonions and let $ \Bbb{R}^{7}=$ Im\thinspace $\Bbb{O}$ endowed with its canonical cross product $\times $. Let $j$ be the almost complex structure of $S^{6}$ defined by $j_{p}\left( x\right) =p\times x$ if $x\in T_{p}S^{6}=p^{\bot }$. For $q\in S^{6}$, $q\ne p$, let $j_{p,q}$ be the linear operator on $ T_{q}S^{6}=q^{\bot }$ defined by $j_{p,q}=T_{p,q}\circ j_{p}\circ T_{p,q}$. \begin{proposition} For all $x\in p^{\bot },y\in q^{\bot }$, \[ J_{\left( p,q\right) }\left( x,y\right) =(j_{p}\left( x\right) ,j_{p,q}\left( y\right) ) \] defines an orthogonal almost complex structure on $\left( S^{6}\times S^{6}\right) \backslash \Delta _{n}$ with the metric above. \end{proposition} \noindent \textbf{Proof. }First we check that $J$ is an almost complex structure. Indeed, \[ \left\langle j_{p,q}\left( y\right) ,q\right\rangle =\left\langle j_{p}T_{p,q}\left( y\right) ,T_{p,q}\left( q\right) \right\rangle =\left\langle p\times T_{p,q}\left( y\right) ,p\right\rangle =0 \] and $J^{2}=-\,$id holds as well, since $j_{p}^{2}=-$\thinspace id and $ T_{p,q}^{2}=$ id. Finally, $J$ is orthogonal since both $j_{p}$ and $T_{p,q}$ are so. $\square $ \noindent \textbf{Remarks. }a) By Proposition \ref{proptoni} there exists no proper subgroup of $G$ acting transitively on $\mathcal{G}$ leaving $J$ invariant, as it is the case of the analogous almost complex structure defined in \cite {salvaimm} on the space of oriented lines of $\Bbb{R}^{7}$. b) The structure $J$ is not integrable, since $\left( S^{6}\backslash \left\{ p\right\} \right) \times \left\{ p\right\} $ is an almost complex submanifold for any $p$, whose induced almost complex structure is $q\mapsto j_{q}$, which is not integrable. \noindent \textbf{Acknowledgment.} I would like to thank Eduardo Hulett for his help and Antonio Di Scala for the statement and the idea of the proof of Proposition \ref{proptoni}. \noindent Marcos Salvai, \textsc{famaf\,-\,ciem}, Ciudad Universitaria, 5000 C\'ordoba, Argentina. \noindent E-mail: [email protected] \end{document}
\begin{document} \title{On codimension two subvarieties in hypersurfaces} \author{N. Mohan Kumar} \address{Department of Mathematics, Washington University in St. Louis, St. Louis, Missouri, 63130} \email{[email protected]} \urladdr{http://www.math.wustl.edu/$\sim$kumar} \author{A.~P.~Rao} \address{Department of Mathematics, University of Missouri-St. Louis, St. Louis, Missouri 63121} \email{[email protected]} \author{G. V. Ravindra} \address{Department of Mathematics, Indian Institute of Science, Bangalore 560012, India} \email{[email protected]} \subjclass{14F05} \keywords{Arithmetically Cohen-Macaulay subvarieties, ACM vector bundles} \thanks{We thank the referee for pointing out some relevant references.} \begin{abstract} We show that for a smooth hypersurface $X\subset {\mathbb P}^n$ of degree at least $2$, there exist arithmetically Cohen-Macaulay (ACM) codimension two subvarieties $Y\subset X$ which are not an intersection $X\cap{S}$ for a codimension two subvariety $S\subset{\mathbb P}^n$. We also show there exist $Y\subset X$ as above for which the normal bundle sequence for the inclusion $Y\subset X\subset{\mathbb P}^n$ does not split. \end{abstract} \date{\today} \maketitle \centerline{\it Dedicated to Spencer Bloch} \section{Introduction} In this note, we revisit some questions of Griffiths and Harris from 1985 \cite{GH}: \begin{question}[Griffiths and Harris]\label{ghconj} Let $X\subset{\mathbb P}^4$ be a general hypersurface of degree $d\geq 6$ and $C\subset X$ be a curve. \begin{enumerate} \item \label{ghconj3}Is the degree of $C$ a multiple of $d$? \item \label{ghconj1}Is $C=X\cap S$ for some surface $S\subset{\mathbb P}^4$? \end{enumerate}\end{question} The motivation for these questions comes from trying to extend the Noether-Lefschetz theorem for surfaces to threefolds. Recall that the Noether-Lefschetz theorem states that if $X$ is a very general surface of degree $d\geq 4$ in ${\mathbb P}^3$, then $\Pic(X)={\mathbb Z}$, and hence every curve $C$ on $X$ is the complete intersection of $X$ and another surface $S$. Soon afterwards, C.~Voisin \cite{Vo} proved that the second question has a negative answer by constructing counterexamples on any smooth hypersurface of degree at least 2. She also considered a third question: \begin{question1} With the same terminology and when $C$ is smooth: \begin{enumerate} \setcounter{enumi}{2} \item \label{ghconj2} Does the exact sequence of normal bundles associated to the inclusions $C\subset X\subset{\mathbb P}^4$: $$ 0 \to N_{C/X} \to N_{C/{\mathbb P}^4} \to {\mathcal O}_C(d) \to 0 $$ split? \end{enumerate} \end{question1} Her counterexamples provided a negative answer to this question as well. The first question, the Degree Conjecture of Griffiths-Harris, is still open. Strong evidence for this conjecture was provided by some elementary but ingenious examples of Koll{\'a}r (\cite{BCC}, Trento examples). In particular, he shows that if $\gcd(d,6)=1$ and $d\geq 4$ and $X$ is a very general hypersurface of degree $d^2$ in ${\mathbb P}^4$, then every curve on $X$ has degree a multiple of $d$. In the same vein, Van Geemen shows that if $d>1$ is an odd number and $X$ is a very general hypersurface of degree $54d$, then every curve on $X$ has degree a multiple of $3d$. The main result of this note is the existence of a large class of counterexamples which subsumes Voisin's counterexamples and places them in the context of arithmetically Cohen-Macaulay (ACM) vector bundles on $X$.
It is well known that ACM bundles which are not sums of line bundles can be found on any hypersurface of degree at least 2 \cite{BGS}, and for such a bundle, say of rank $r$, on $X$, ACM subvarieties of codimension two can be created on $X$ by considering the dependency locus of $r-1$ general sections. These subvarieties fail to satisfy the conclusions of Questions \ref{ghconj1} and \ref{ghconj2}. We will be working on hypersurfaces in ${\mathbb P}^n$ for any $n \geq 4$ and our constructions of ACM subvarieties may not give smooth ones. Hence in Question \ref{ghconj2}, we will consider the splitting of the conormal sheaf sequence instead. \section{Main results} Let $X\subset{\mathbb P}^n$ be a smooth hypersurface of degree $d\geq 2$ and let $Y\subset X$ be a codimension $2$ subscheme. Recall that $Y$ is said to be an arithmetically Cohen-Macaulay (ACM) subscheme of $X$ if $\HH^i(X,I_{Y/X}(\nu))=0$ for $0 < i\leq \dim{Y}$ and for any $\nu\in{\mathbb Z}$. Similarly, a vector bundle $E$ on $X$ is said to be ACM if $\HH^i(X,E(\nu))=0$ for $i\neq 0, \,\dim{X}$ and for any $\nu\in{\mathbb Z}$. Given a coherent sheaf $\mathcal{F}$ on $X$, let $s_i\in\HH^0(\mathcal{F}(m_i))$ for $1\leq i \leq k$ be generators for the $\oplus_{\nu\in{\mathbb Z}}\HH^0({\mathcal O}_X(\nu))$-graded module $\oplus_{\nu\in{\mathbb Z}}\HH^0(\mathcal{F}(\nu))$. These sections give a surjection of sheaves $\oplus_{i=1}^k{\mathcal O}_X(-m_i) \twoheadrightarrow \mathcal{F}$ which induces a surjection on global sections $\oplus_{i=1}^k\HH^0({\mathcal O}_X(\nu-m_i)) \twoheadrightarrow \HH^0(\mathcal{F}(\nu))$ for any $\nu\in{\mathbb Z}$. Applying this to the ideal sheaf $I_{Y/X}$ of an ACM subscheme of codimension $2$ in $X$, we obtain the short exact sequence $$ 0 \to G \to \oplus_{i=1}^k{\mathcal O}_X(-m_i) \to I_{Y/X} \to 0,$$ where $G$ is some ACM sheaf on $X$ of rank $k-1$. Since $Y$ is ACM as a subscheme of $X$, it is also ACM as a subscheme of ${\mathbb P}^n$. In particular, $Y$ is locally Cohen-Macaulay. Hence $G$ is a vector bundle by the Auslander-Buchsbaum Theorem (see \cite{Mat}, page 155). We will loosely say that $G$ is associated to $Y$. Conversely, the following Bertini-type theorem, which goes back to arguments of Kleiman in \cite{Kleiman} (see also \cite{Ban}), shows that given an ACM bundle $G$ on $X$, we can use $G$ to construct ACM subvarieties $Y$ of codimension $2$ in $X$: \begin{prop}\label{Kleiman} (Kleiman). Given a bundle $G$ of rank $k-1$ on $X$, a general map $G \to \oplus_{i=1}^k{\mathcal O}_X(m_i)$ for sufficiently large $m_i$ will determine the ideal sheaf (up to twist) of a subvariety $Y$ of codimension $2$ in $X$ with a resolution of sheaves: $$ 0 \to G \to \oplus_{i=1}^k{\mathcal O}_X(m_i) \to I_{Y/X}(m) \to 0.$$ \end{prop} Since the conclusion of Question \ref{ghconj1} implies that of Question \ref{ghconj2}, we will look at just Question \ref{ghconj2}, in the conormal sheaf version. Let $X$ be a hypersurface of degree $d$ in ${\mathbb P}^n$ defined by the equation $f=0$. Let $X_2$ be the thickening of $X$ defined by $f^2=0$ in ${\mathbb P}^n$. Given a subvariety $Y$ of codimension $2$ in $X$, let $I_{Y/{\mathbb P}}$ (resp. $I_{Y/X}$) denote the ideal sheaf of $Y\subset{\mathbb P}^n$ (resp. $Y\subset X$).
The conormal sheaf sequence is \begin{equation}\label{nbs} 0\to {\mathcal O}_Y(-d) \to I_{Y/{\mathbb P}}/I_{Y/{\mathbb P}}^2 \to I_{Y/X}/I_{Y/X}^2 \to 0. \end{equation} \begin{lemma} For the inclusion $Y\subset X\subset {\mathbb P}^n$, if the sequence of conormal sheaves (\ref{nbs}) splits, then there exists a subscheme $Y_2\subset X_2$ containing $Y$ such that $$ I_{Y_2/X_2}(-d) \stackrel{f}{\to} I_{Y_2/X_2} \to I_{Y/X} \to 0$$ is exact. Furthermore, $fI_{Y_2/X_2}(-d)=I_{Y/X}(-d)$. \end{lemma} \begin{proof} Suppose sequence (\ref{nbs}) splits: then we have a surjection $$ I_{Y/{\mathbb P}}\twoheadrightarrow I_{Y/{\mathbb P}}/I_{Y/{\mathbb P}}^2 \twoheadrightarrow {\mathcal O}_Y(-d)$$ where the first map is the natural quotient map and the second is the splitting map for the sequence. The kernel of this composition defines a scheme $Y_2$ in ${\mathbb P}^n$. Since this kernel $I_{Y_2/{\mathbb P}}$ contains $I_{Y/{\mathbb P}}^2$ and hence $f^2$, it is clear that $Y \subset Y_2 \subset X_2$. The splitting of (\ref{nbs}) also means that $f \in I_{Y/{\mathbb P}}(d)$ maps to $1 \in {\mathcal O}_Y$. We get the commutative diagram: \[ \begin{array}{ccccccccc} & & & & & & 0 & \\ & & & & & & \uparrow & & \\ 0 &\to & I_{Y_2/{\mathbb P}}& \to & I_{Y/{\mathbb P}} & \to & {\mathcal O}_Y(-d) & \to & 0 \\ & & \uparrow{\scriptstyle{f^2}} & & \uparrow{\scriptstyle{f}} & & \uparrow & &\\ 0 & \to & {\mathcal O}_{\mathbb P}(-2d) & \stackrel{f}{\to} & {\mathcal O}_{\mathbb P}(-d) & \to & {\mathcal O}_X(-d) &\to & 0\\ & & \uparrow & & \uparrow & & & &\\ & & 0 & & 0 & & & &\\ \end{array} \] This induces $$ 0 \to I_{Y/X}(-d) \to I_{Y_2/X_2} \to I_{Y/X} \to 0.$$ In particular, note that $I_{Y/X}(-d)$ is the image of the multiplication map $f: I_{Y_2/X_2}(-d) \to I_{Y_2/X_2}$. \end{proof} Now assume that $Y$ is an ACM subvariety on $X$ of codimension $2$. The ideal sheaf of $Y$ in $X$ has a resolution $$ 0 \to G \to \oplus_{i=1}^k{\mathcal O}_X(-m_i) \to I_{Y/X} \to 0,$$ for some ACM bundle $G$ on $X$ associated to $Y$. \begin{lemma}\label{bundle extends} Suppose the conditions of the previous lemma hold, and in addition $Y$ is an ACM subvariety. Then there is an extension of the ACM bundle $G$ (associated to $Y$) on $X$ to a bundle $\mathcal G$ on $X_2$. {\it ie.} there is a vector bundle $\mathcal G$ on $X_2$ such that the multiplication map $f: \mathcal G(-d) \to \mathcal G$ induces the exact sequence $0 \to G(-d) \to \mathcal G \to G \to 0$. \end{lemma} \begin{proof} Since $Y$ is ACM, $H^1(I_{Y/X}(-d+\nu))=0, \forall \nu$, hence in the sequence stated in the previous lemma, the right hand map is surjective on the level of sections. Therefore, the map $\oplus_{i=1}^k{\mathcal O}_X(-m_i) \to I_{Y/X}$ can be lifted to a map $\oplus_{i=1}^k{\mathcal O}_{X_2}(-m_i) \to I_{Y_2/X_2}$. Since a global section of $I_{Y_2/X_2}(\nu)$ maps to zero in $I_{Y/X}$ only if it is a multiple of $f$, by Nakayama's lemma, this lift is surjective at the level of global sections in different twists, and hence on the level of sheaves. 
Hence there is a commuting diagram of exact sequences: \[ \begin{array}{ccccccccc} 0 & & 0 & & 0 & & \\ \uparrow & & \uparrow & & \uparrow & & \\ I_{Y_2/X_2}(-d)& \to & I_{Y_2/X_2} & \to & I_{Y/X} &\to & 0 \\ \uparrow & & \uparrow & & \uparrow & & \\ \oplus_{i=1}^k{\mathcal O}_{X_2}(-m_i-d) & \to & \oplus_{i=1}^k{\mathcal O}_{X_2}(-m_i) & \to & \oplus_{i=1}^k{\mathcal O}_X(-m_i) & \to & 0\\ \uparrow & & \uparrow & & \uparrow & & \\ \mathcal G(-d) & \to & \mathcal G & \to & G & \to & 0\\ \uparrow & & \uparrow & & \uparrow & & \\ 0 & & 0 & & 0 & & \\ \end{array} \] where the sheaf $\mathcal G$ is defined as the kernel of the lift, and the map from the left column to the middle column is multiplication by $f$. It is easy to verify that the lowest row induces an exact sequence $$ 0 \to G(-d) \to \mathcal G \to G \to 0.$$ By Nakayama's lemma, $\mathcal G$ is a vector bundle on $X_2$. \end{proof} \begin{prop}\label{split bundle} Let $E$ be an ACM bundle on $X$. If $E$ extends to a bundle $\mathcal{E}$ on $X_2$, then $E$ is a sum of line bundles. \end{prop} \begin{proof} There is an exact sequence $0\to E(-d) \to \mathcal{E} \to E \to 0$, where the left hand map is induced by multiplication by $f$ on $\mathcal E$. Let $F_0=\oplus{\mathcal O}_{{\mathbb P}^n}(a_i) \twoheadrightarrow E$ be a surjection induced by the minimal generators of $E$. Since $E$ is ACM, this lifts to a map $F_0\twoheadrightarrow \mathcal{E}$. This lift is surjective on global sections by Nakayama's lemma (since the sections of $\mathcal{E}$ which are sent to $0$ in $E$ are multiples of $f$). Thus we have a diagram \[ \begin{array}{ccccccccc} & & & & & & 0 & & \\ & & & & & & \downarrow & & \\ & & 0 & & & & E(-d)& & \\ & & \downarrow & & & & \downarrow & & \\ 0 & \to & F_1 & \to & F_0 & \to & \mathcal{E} & \to & 0 \\ & & \downarrow & & || & & \downarrow & & \\ 0 & \to & G_1 & \to & F_0 & \to & E & \to & 0 \\ & & \downarrow & & & & \downarrow & & \\ & & E(-d) & & & & 0 \\ & & \downarrow & & & & & & \\ & & 0 & & & & & & \\ \end{array} \] $G_1$ and $F_1$ are sums of line bundles on ${\mathbb P}^n$ by Horrocks' Theorem. Furthermore, $G_1\cong F_0(-d)$. Thus $0 \to F_0(-d)\by{\Phi} F_0 \to E \to 0$ is a minimal resolution for $E$ on ${\mathbb P}^n$. As a consquence of this, one checks that $\det{\Phi}=f^{\rank{E}}$. On the other hand, the degree of $\det{\Phi}=d\rank{F_0}$ and so we have $\rank{F_0}=\rank{E}$. Restricting, this resolution to $X$, we get a surjection $F_0\otimes{\mathcal O}_X \twoheadrightarrow E$. The ranks of both vector bundles being the same, this implies that this is an isomorphism. \end{proof} \begin{cor} Let $Y\subset X$ be a codimension $2$ ACM subvariety. If the conormal sheaf sequence (\ref{nbs}) splits, then \begin{itemize} \item the ACM bundle $G$ associated to $Y$ is a sum of line bundles, \item there is a codimension $2$ subvariety $S$ in ${\mathbb P}^n$ such that $Y = X\cap S$. \end{itemize} \end{cor} \begin{proof} The first statement follows from Lemma \ref{bundle extends} and Proposition \ref{split bundle}. For the second statement, since the bundle $G$ associated to $Y$ is a sum of line bundles $\oplus_{i=1}^{k-1}{\mathcal O}_X(-l_i)$ on $X$, the map $G \to \oplus_{i=1}^k{\mathcal O}_X(-m_i)$ can be lifted to a map $\oplus_{i=1}^{k-1}{\mathcal O}_{\mathbb P}(-l_i) \to \oplus_{i=1}^k{\mathcal O}_{\mathbb P}(-m_i)$. The determinantal variety $S$ of codimension $2$ in ${\mathbb P}^n$ determined by this map has the property that $Y = X\cap S$. 
\end{proof} In conclusion, we obtain the following collection of counterexamples: \begin{cor} If $G$ is an ACM bundle on $X$ which is not a sum of line bundles, and if $Y$ is a subvariety of codimension $2$ in $X$ constructed from $G$ as in Proposition \ref{Kleiman}, then $Y$ does not satisfy the conclusion of either Question \ref{ghconj1} or Question \ref{ghconj2}. \end{cor} Buchweitz-Greuel-Schreyer have shown \cite{BGS} that any hypersurface of degree at least $2$ supports (usually many) non-split ACM bundles. We will give another construction in the next section. \section {Remarks} \begin{subsection}{} {The infinitesimal Question \ref{ghconj2} was treated by studying the extension of the bundle to the thickened hypersurface $X_2$. This method goes back to Ellingsrud, Gruson, Peskine and Str{\o}mme \cite{EGPS}.} If we are not interested in the infinitesimal Question \ref{ghconj2}, but just in the more geometric Question \ref{ghconj1}, a geometric argument gives an even easier proof of the existence of codimension $2$ ACM subvarieties $Y\subset X$ which are not of the form $Y=X\cap Z$ for some codimension $2$ subvariety $Z\subset {\mathbb P}^n$. \begin{prop} Let $E$ be an ACM bundle on a hypersurface $X$ in ${\mathbb P}^n$ which extends to a sheaf $\mathcal{E}$ on \comment{\textcolor[rgb]{0.98,0.00,0.00}}${\mathbb P}^n$; i.e. there is an exact sequence \begin{equation}\label{extension} 0 \to \mathcal{E}(-d) \stackrel{f}{\to} \mathcal{E} \to E \to 0 \end{equation} Then $E$ is a sum of line bundles. \end{prop} \begin{proof} At each point $p$ on $X$, over the local ring ${\mathcal O}_{{\mathbb P},p}$ the sheaf $\mathcal E$ is free, of the same rank as $E$. Hence $\mathcal E$ is locally free except at finitely many points. Let ${\mathbb H}$ be a general hyperplane not passing through these points. Let $X' = X\cap {\mathbb H}$, and $\mathcal E', E'$ be the restrictions of $\mathcal E, E$ to ${\mathbb H}, X'$. It is enough to show that $E'$ is a sum of line bundles on $X'$. This is because any isomorphism $\oplus {\mathcal O}_{X'}(a_i) \to E'$ can be lifted to an isomorphism $\oplus{\mathcal O}_{X}(a_i) \to E$, as $H^1(E(\nu))=0, \forall~ \nu\in\mathbb{Z}$. The bundle $E'$ on $X'$ is ACM and from the sequence $$ 0 \to \mathcal E'(-d) \to \mathcal E' \to E' \to 0,$$ it is easy to check that $H^i(\mathcal E'(\nu))=0, \forall~ \nu\in\mathbb{Z}$, for $2 \leq i \leq n-2$. Since $\mathcal E'$ is a vector bundle on ${\mathbb H}$, we can dualize the sequence to get $$ 0 \to \mathcal E'^{\vee}(-d) \to \mathcal E'^{\vee} \to E'^{\vee} \to 0.$$ $E'^{\vee}$ is still an ACM bundle, hence $H^i(\mathcal E'^{\vee}(\nu)) =0, \forall~ \nu\in\mathbb{Z}$, and $2 \leq i \leq n-2$. By Serre duality, we conclude that $\mathcal E'$ is an ACM bundle on ${\mathbb H}$, and by Horrocks' theorem, $\mathcal E'$ is a sum of line bundles. Hence, its restriction $E'$ is also a sum of line bundles on $X'$. \end{proof} \begin{prop} Let $Y$ be an ACM subvariety of codimension $2$ in the hypersurface $X$ such that the associated ACM bundle $G$ is not a sum of line bundles. Then there is no pure subvariety $Z$ of codimension $2$ in ${\mathbb P}^n$ such that $Z\cap{X}=Y$. \end{prop} \begin{proof} Suppose there is such a $Z$. Then there is an exact sequence $0 \to {I}_{Z/{\mathbb P}}(-d) \to {I}_{Z/{\mathbb P}} \to {I}_{Y/X} \to 0$, where the inclusion is multiplication by $f$, the polynomial defining $X$. Since $Z$ has no embedded points, $H^1({I}_{Z/{\mathbb P}}(\nu))=0$ for $\nu<<0$. 
Combining this with $H^1({I}_{Y/X}(\nu))=0, \forall~\nu\in{\mathbb Z}$, and using the long exact sequence of cohomology, we get $H^1({I}_{Z/{\mathbb P}}(\nu))=0, \forall~ \nu\in{\mathbb Z}$. Now suppose $Y$ has the resolution $ 0\to G \to \oplus{\mathcal O}_{X}(-m_i)\to I_{Y/X} \to 0$. From the vanishing just proved, the right hand map can be lifted to a map $\oplus{\mathcal O}_{{\mathbb P}}(-m_i)\to I_{Z/{\mathbb P}}$, which is easily checked to be surjective (at the level of global sections). It follows that if $\mathcal G$ is the kernel of this lift, $\mathcal G$ is an extension of $G$ to ${\mathbb P}^n$. By the previous proposition, $G$ is a sum of line bundles. This is a contradiction. \end{proof} \end{subsection} \begin{subsection}{} Voisin's original example was as follows. Let $P_1$ and $P_2$ be two planes meeting at a point $p$ in ${\mathbb P}^4$. The union $\Sigma$ is a surface which is not locally Cohen-Macaulay at $p$. Let $X$ be a smooth hypersurface of degree $d>1$ which passes through $p$. $X\cap \Sigma$ is a curve $Z$ in $X$ with an embedded point at $p$. The reduced subscheme $Y$ has the form $Y=C_1\cup C_2$, where $C_1$ and $C_2$ are plane curves. Voisin argues that $Y$ itself does not have the form $X\cap S$ for any surface $S$ in ${\mathbb P}^4$. We can treat this example from the point of view of ACM bundles. $I_{Z/X}$ has a resolution on $X$ which is just the restriction of the resolution of the ideal of the union $P_1\cup P_2$ in ${\mathbb P}^4$, {\it viz.} $$ 0 \to {\mathcal O}_X(-4) \to 4{\mathcal O}_X(-3) \to 4 {\mathcal O}_X(-2) \to I_{Z/X} \to 0. $$ From the sequence $0 \to I_{Z/X} \to I_{Y/X} \to k_p \to 0$, it is easy to see that $Y$ is ACM, with a resolution $$ 0 \to G \to 4{\mathcal O}_X(-2)\oplus {\mathcal O}_X(-d) \to I_{Y/X} \to 0.$$ $G$ is an ACM bundle. If it were a sum of line bundles, comparing the two resolutions, we find that $h^0(G(2)) =0$ and $h^0(G(3)) =4$, hence $G= 4{\mathcal O}_X(-3)$. But then $G \to 4{\mathcal O}_X(-2)\oplus {\mathcal O}_X(-d)$ cannot be an inclusion. Thus $G$ is an ACM bundle which is not a sum of line bundles. Voisin's subsequent smooth examples were obtained by placing $Y$ on a smooth surface $T$ contained in $X$ and choosing divisors $Y'$ in the linear series $|Y+mH|$ on $T$. When $m$ is large, $Y'$ can be chosen smooth. In fact, such curves $Y'$ are doubly linked to the original curve $Y$ in $X$, hence they have a similar resolution $ G' \to L \to I_{D'/X}\to 0$, where $L$ is a sum of line bundles and where $ G'$ equals $G$ up to a twist and a sum of line bundles. The fact that $G$ above is not a sum of line bundles is related (via the mapping cone of the map of resolutions) to the fact that $k_p$ itself cannot have a finite resolution by sums of line bundles on $X$. This follows from the following proposition which provides another argument for the existence of ACM bundles on arbitrary smooth hypersurfaces of degree $\geq 2$. \begin{prop}\label{constructions} Let $X$ be a smooth hypersurface in ${\mathbb P}^n$ of degree $\geq 2$ with homogeneous coordinated ring $S_X$. Let $L$ be a linear space (possibly a point or even empty) inside $X$ of codimension $r$, with homogeneous ideal $I(L)$ in $S_X$. A free presentation of $I(L)$ of length $r-2$ will have a kernel whose sheafification is an ACM bundle on $X$ which is not a sum of line bundles. \end{prop} \begin{proof} It should first be understood that the homogeneous ideal $I(L)$ of the empty linear space will be taken as the irrelevant ideal $(X_0, X_1, \dots, X_n)$. 
Let the free presentation of $I(L)$ together with the kernel be $$ 0\to M \to F_{r-2} \to \cdots \to F_0 \to I(L) \to 0, $$ where $F_i$ are free graded $S_X$ modules. Its sheafification looks like $$ 0 \to \tilde M \to \tilde F_{r-2} \to \cdots \to \tilde F_0 \to I_{L/X} \to 0.$$ Since $L$ is locally Cohen-Macaulay, $\tilde M$ is a vector bundle on $X$, and since $L$ is ACM, so is $\tilde M$. $M$ equals $\oplus_{\nu \in {\mathbb Z}} H^0(\tilde M(\nu))$. Hence, $\tilde M$ is a sum of line bundles only if $M$ is a free $S_X$ module. If ${\mathbb H}$ is a general hyperplane in ${\mathbb P}^n$ which meets $X$ and $L$ transversally along $X_{{\mathbb H}}$ and $L_{{\mathbb H}}$ respectively, the above sequences of modules and sheaves can be restricted to give similar sequences in ${\mathbb H}$. The restriction $\tilde M_{{\mathbb H}}$ is an ACM bundle on $X_{{\mathbb H}}$. Repeat this \comment{\textcolor[rgb]{0.98,0.00,0.00}}successively to find a maximal and general linear space ${\mathbb P}$ in ${\mathbb P}^n$ which does not meet $L$. If $X' = X\cap {\mathbb P}$, the restriction of the sequence of $S_X$ modules to $X'$ gives a resolution $$ 0 \to M' \to F_{r-2}' \to \cdots \to F_0' \to S_{X'} \to k \to 0.$$ Localize this sequence of graded $S_{X'}$ modules at the irrelevant ideal $I(L)\cdot S_{X'}$, to look at its behaviour at the vertex of the affine cone over $X'$. $k$ is the residue field of this local ring. Since $X$ and hence $X'$ has degree $\geq 2$, the cone is not smooth at the vertex. By Serre's theorem (\cite{Se}, IV-C-3-Cor 2), $k$ cannot have finite projective dimension over this local ring. Hence $M'$ is not a free module. Therefore neither is $M$. \end{proof} \end{subsection} \begin{subsection}{} We make a few concluding remarks about Question \ref{ghconj3}, the Degree Conjecture of Griffiths and Harris. A vector bundle $G$ on a smooth hypersurface $X$ in ${\mathbb P}^4$ has a second Chern class $c_2(G) \in A^2(X)$, the Chow group of codimension $2$ cycles. If $h \in A^1(X)$ is the class of the hyperplane section of $X$, the degree of any element $c\in A^2(X)$ will be defined to be the degree of the zero cycle $c\cdot h \in A^3(X)$. (Note that by the Lefschetz theorem, all classes in $A^1(X)$ are multiples of $h$.) With this notation, if $E$ is any bundle on $X$ and $Y$ is a curve obtained from $E$ with the sequence ({\it vide} Proposition \ref{Kleiman}) $$ 0 \to E \to \oplus_{i=1}^k{\mathcal O}_X(m_i) \to I_{Y/X}(m) \to 0,$$ a calculation tells us that the degree $d$ of $X$ divides the degree of $Y$ if and only if $d$ divides the degree of $c_2(E)$. More generally: let $Y$ be any curve in $X$ and resolve $I_{Y/X}$ to get $$ 0 \to E \to \oplus_{i=1}^l{\mathcal O}_X(b_i) \to \oplus_{i=1}^k{\mathcal O}_X(a_i) \to I_{Y/X} \to 0,$$ where $E$ is an ACM bundle on $X$. Then a similar calculation tells us that the degree $d$ of $X$ divides the degree of $Y$ if and only if $d$ divides the degree of $c_2(E)$. Hence we may ask the following question which is equivalent to the Degree Conjecture: \begin{ACM} If $X$ is a general hypersurface in ${\mathbb P}^4$ of degree $d \geq 6$, then for any indecomposable ACM vector bundle $E$ on $X$, $d$ divides the degree of $c_2(E)$. \end{ACM} The examples created above in Proposition \ref{constructions} satisfy this, when $L$ has codimension $> 2$ in $X$. In \cite{MRR}, this conjecture is settled for ACM bundles of rank $2$ on $X$. \end{subsection} \end{document}
\begin{document} \title[Remark on the Daugavet property for complex Banach spaces]{Remark on the Daugavet property for complex Banach spaces} \keywords{Daugavet points, $\Delta$-points, alternative convexity or smoothness, nonsquareness, polynomial Daugavet property} \subjclass[2010]{Primary 46B20; Secondary 46B04, 46E40, 46J10} \thanks{ The first author was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education, Science and Technology [NRF-2020R1A2C1A01010377]. } \thanks{ The second author was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education, Science and Technology [NRF-2020R1A2C1A01010377].} \author{Han Ju Lee} \address{Department of Mathematics Education, Dongguk University - Seoul, 04620 (Seoul), Republic of Korea} \email{[email protected]} \author{Hyung-Joon Tag*} \address{Department of Mathematics Education, Dongguk University - Seoul, 04620 (Seoul), Republic of Korea} \email{[email protected]} \date{\today} \maketitle \begin{abstract} In this article, we study the Daugavet property and the diametral diameter two properties in complex Banach spaces. The characterizations for both Daugavet and $\Delta$-points are revisited in the context of complex Banach spaces. We also provide relationships between some variants of alternative convexity and smoothness, nonsquareness, and the Daugavet property. As a consequence, every strongly locally uniformly alternatively convex or smooth (sluacs) Banach space does not contain $\Delta$-points from the fact that such spaces are locally uniformly nonsquare. We also study the convex diametral local diameter two property (convex-DLD2P) and the polynomial Daugavet property in the vector-valued function space $A(K, X)$. From an explicit computation of the polynomial Daugavetian index of $A(K, X)$, we show that the space $A(K, X)$ has the polynomial Daugavet property if and only if either the base algebra $A$ or the range space $X$ has the polynomial Daugavet property. Consequently, we obtain that the polynomial Daugavet property, Daugavet property, diameteral diameter two properties, and property ($\mathcal{D}$) are equivalent for infinite-dimensional uniform algebras. \end{abstract} \section{Introduction} In the theory of Banach spaces, various properties that are related to certain behaviors of vector measures and bounded linear operators have been studied from a geometrical point of view. We focus on the Daugavet property and diametral diameter two properties in this article. Let $X$ be a Banach space on $\mathbb{F} = \mathbb{R}$ or $\mathbb{C}$. For a given $\epsilon > 0$, a slice $S(x^*, \epsilon)$ of the unit ball $B_X$ is defined by $S(x^*, \epsilon)=\{x \in B_X: \text{Re}\,x^*x > 1 - \epsilon\}$ and a weak$^*$-slice $S(x, \epsilon)$ of $B_{X^*}$ by $S(x, \epsilon) = \{x^* \in B_{X^*}: \text{Re}\,x^*x > 1 - \epsilon\}$. Many results on Banach spaces with the Radon-Nikod\'ym property have been obtained from this perspective. Even though its definition was originally given in terms of vector measures, it is now well-known that the Banach spaces with the Radon-Nikod\'ym property can be characterized by the existence of slices with arbitrarily small diameter as well as the existence of denting points. For more details on the geometrical aspect of the Radon-Nikod\'ym property and its application to other research topics in Banach spaces, we refer to \cite{DU}. 
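For orientation, here is an elementary two-dimensional illustration of these notions: in the real space $X = \ell_{\infty}^2$, the functional $x^* = (1,0) \in S_{X^*}$ gives the slice \[ S(x^*, \epsilon) = \{(s,t) \in B_X : s > 1 - \epsilon\}, \] which contains both $(1,1)$ and $(1,-1)$ and therefore has diameter two for every $\epsilon \in (0,1)$, while the slice determined by $x^* = (\frac{1}{2}, \frac{1}{2})$ has diameter at most $2\epsilon$.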
A Banach space $X$ is said to have the {\it Daugavet property} if every rank-one operator $T: X \rightarrow X$ satisfies the following equation: \[ \|I + T\| = 1 + \|T\|. \] We call this equation the {\it Daugavet equation}. The spaces $C(K)$, where $K$ does not have isolated points, and $L_1(\mu)$ and $L_{\infty}(\mu)$ with a nonatomic measure $\mu$, are classical examples of spaces with the Daugavet property. Infinite-dimensional uniform algebras have the Daugavet property if and only if their Shilov boundaries do not have isolated points \cite{Wo, LT}. Moreover, the Daugavet property in Musielak-Orlicz spaces \cite{KK}, in Lipschitz-free spaces \cite{JR}, and in rearrangement-invariant Banach function lattices \cite{AKM, KMMW} has been examined. It is well-known that every slice of $B_X$ has diameter two if $X$ has the Daugavet property, which tells us that Banach spaces with this property are at the opposite end of the spectrum from the Radon-Nikod\'ym property. The following characterization allows us to study the Daugavet property by means of slices. \begin{Lemma}\cite[Lemma 2.2]{KSSW} \label{lem:Daug} The following are equivalent. \begin{enumerate}[{\rm(i)}] \item A Banach space $(X,\|\cdot\|)$ has the Daugavet property, \item\label{Daugii} For every slice $S = S(x^*,\epsilon)$ where $x^*\in S_{X^*}$, every $x \in S_X$ and every $\epsilon>0$, there exists $y\in S_{X}\cap S$ such that $\|x+y\|>2-\epsilon$, \item\label{Daugiii} For every weak$^{*}$-slice $S^* = S(x,\epsilon)$ where $x\in S_{X}$, every $x^* \in S_{X^*}$ and every $\epsilon>0$, there exists $y^*\in S_{X^*}\cap S^*$ such that $\|x^*+y^*\|>2-\epsilon$. \end{enumerate} \end{Lemma} Later, the diametral diameter two properties (diametral D2Ps), the property ($\mathcal{D}$), and the convex diametral local diameter two property have gained attention from many researchers \cite{AHLP, W}. They are known to be weaker than the Daugavet property. \begin{Definition} \begin{enumerate}[\rm(i)] \item A Banach space $X$ has the property ($\mathcal{D}$) if every rank-one, norm-one projection $P: X \rightarrow X$ satisfies $\|I - P\| = 2$. \item A Banach space $X$ has the diametral local diameter two property (DLD2P) if for every slice $S$ of the unit ball, every $x \in S \cap S_X$, and every $\epsilon > 0$ there exists $y \in S$ such that $\|x - y\| \geq 2 - \epsilon$. \item A Banach space $X$ has the diametral diameter two property (DD2P) if for every nonempty weakly open subset $W$ of the unit ball, every $x \in W \cap S_X$, and every $\epsilon > 0$, there exists $y \in W$ such that $\|x- y\| \geq 2 - \epsilon$. \item A Banach space $X$ has the convex diametral local diameter two property (convex-DLD2P) if $\overline{conv}\Delta_X = B_X$. \end{enumerate} \end{Definition} \noindent The first known example that possesses the property ($\mathcal{D}$) is a certain subspace of $L_1$ constructed with martingales \cite{BR}. Later on, this space was shown to have the Daugavet property \cite{KW}. In view of \cite{IK}, every rank-one projection $P$ on a Banach space $X$ with the DLD2P satisfies $\|I - P\| \geq 2$, and so the DLD2P implies the property ($\mathcal{D}$). As a matter of fact, the property ($\mathcal{D}$) was thought to be equivalent to the DLD2P. However, since a scalar multiple of a projection is not a projection \cite{AHLP}, the validity of this equivalence remains unclear. The implication (iii) $\implies$ (ii) holds because every slice is a weakly open subset of the unit ball.
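Summarizing the implications recorded above (and only these; no converse is asserted here), we have \[ \text{Daugavet property} \implies \text{DD2P} \implies \text{DLD2P} \implies \text{property } (\mathcal{D}). \]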
The DLD2P and the Daugavet property can be also considered from a local perspective by using $\Delta$-points and Daugavet points. Let $\Delta_{\epsilon}(x) = \{y \in B_X: \|x - y\| \geq 2 - \epsilon\}$. \begin{Definition} \begin{enumerate}[\rm(i)] \item A point $x \in S_X$ is a $\Delta$-point if $x \in \overline{conv}\Delta_{\epsilon}(x)$ for every $\epsilon > 0$ \item A point $x \in S_X$ is a Daugavet point if $B_X = \overline{conv}\Delta_{\epsilon}(x)$ for every $\epsilon > 0$. \end{enumerate} \end{Definition} \noindent Notice that the set $\Delta_{\epsilon}(x)$ is defined independently of the scalar fields $\mathbb{F} = \mathbb{R}$ or $\mathbb{C}$ on Banach spaces. Hence we may use the same definitions for $\Delta$-points and Daugavets points for complex Banach spaces. It is well-known that a real Banach space $X$ has the Daugavet property (resp. the DLD2P) if and only if every point on the unit sphere is a Daugavet point (resp. a $\Delta$-point). We mention that many recent results on the Daugavet property, the diametral D2Ps, Daugavet points, and $\Delta$-points have been mostly revolved around {\it real} Banach spaces. But there are several results concerning these concepts in complex Banach spaces; see \cite{Kad, MPR}. For a real Banach space $X$, it is well-known that $\Delta$-points are connected to certain behaviors of slices of the unit ball and of rank-one projections. \begin{Theorem}\label{th:realdelta}\cite{AHLP} Let $X$ be a real Banach space. Then the following statements are equivalent. \begin{enumerate}[\rm(i)] \item $x \in S_X$ is a $\Delta$-point. \item For every slice $S$ of $B_X$ with $x \in S \cap S_X$ and $\epsilon > 0$, there exists $y \in S$ such that $\|x - y\| \geq 2 - \epsilon$. \item For every rank-1 projection $P = x^* \otimes x$ with $x^*x = 1$, we have $\|I - P\| \geq 2$. \end{enumerate} \end{Theorem} \noindent Even though the complex analogue of this relationship may be well-known to the specialists, we will state and prove them for completeness. In addition, while the Daugavet property for complex Banach spaces can be examined through rank-one real-linear operators \cite{KMM}, it has not been known whether we can examine the DLD2P in a similar spirit with rank-one real-projections. We also study this here. Since a denting point is always contained in a slice of arbitrarily small diameter, such a point cannot be either a $\Delta$-point or a Daugavet point. This implies that any (locally) uniformly rotund real Banach spaces cannot have $\Delta$-points. Recently, identifying the Banach spaces that do not contain these points has been an active research topic. For example, it is shown in \cite{ALMP} that every uniformly nonsquare real Banach space does not have $\Delta$-points. Furthermore, a locally uniformly nonsquare real Banach space does not have $\Delta$-points \cite{KLT}. We will examine strongly locally uniformly alternatively convex or smooth (sluacs) Banach spaces in this article. We mention that alternative convexity or smoothness are related to the anti-Daugavet property and the nonsquareness property \cite{J, WSL}. Banach spaces that satisfy the Daugavet equation for weakly compact polynomials are also studied in \cite{CGMM, CGKM, MMP}. For Banach spaces $X, Y$, let $\mathcal{L}(^kX;Y)$ be the space of bounded $k$-linear mappings from $X$ to $Y$ and let $\Delta_k: X \rightarrow X^k$ be a diagonal mapping defined by \[ \Delta_k(x) = \underset{k\,\,\, \text{times}}{\underbrace{(x, x, \dots, x)}}. 
\] A mapping is called a bounded {\it $k$-homogeneous polynomial} from $X$ to $Y$ if it is the composition of $\Delta_k$ with an element in $\mathcal{L}(^kX;Y)$. We denote the set of all bounded $k$-homogeneous polynomials from $X$ to $Y$ by $\mathcal{P}(^k X;Y)$. A {\it polynomial} is a finite sum of bounded homogeneous polynomials from $X$ to $Y$. We also denote the set of all polynomials from $X$ to $Y$ by $\mathcal{P}(X;Y)$ and the set of all scalar-valued continuous polynomials by $\mathcal{P}(X)$. We endow the space $\mathcal{P}(X;X)$ (resp. $\mathcal{P}(X)$) with the norm $\|P\| = \sup_{x \in B_X}\|Px\|_X$ (resp. $\|P\| = \sup_{x \in B_X}|Px|$). We say a polynomial $P \in \mathcal{P}(X;Y)$ is weakly compact if $P(B_X)$ is a relatively weakly compact subset of $Y$. A Banach space $X$ is said to have the {\it polynomial Daugavet property} if every weakly compact polynomial $P \in \mathcal{P}(X; X)$ satisfies \[ \|I + P\| = 1 + \|P\|. \] If $X$ has the polynomial Daugavet property, then the space also has the Daugavet property. It is also well known that the polynomial Daugavet property can be described in terms of scalar-valued polynomials. \begin{Theorem}\cite[Corollary 2.2]{CGMM}\label{th:polydauggen} Let $X$ be a real or complex Banach space. Then the following statements are equivalent: \begin{enumerate} \item $X$ has the polynomial Daugavet property. \item For every $p \in \mathcal{P}(X)$ with $\|p\| = 1$, every $x_0 \in S_X$, and every $\epsilon > 0$, there exist $\omega \in S_{\mathbb{C}}$ and $y \in B_X$ such that $\text{Re}\,p(y) > 1 - \epsilon$ and $\|x_0 + \omega y\| > 2 - \epsilon$. \item For every $p \in \mathcal{P}(X)$ and every $x_0 \in X$, the polynomial $p \otimes x_0$ satisfies the Daugavet equation. \end{enumerate} \end{Theorem} In this article, we will look at the polynomial Daugavet property of a function space $A(K, X)$ over the base algebra $A$, which will be defined later. This class of function spaces includes uniform algebras and the space of Banach space-valued continuous functions on a compact Hausdorff space. The Daugavet property and the diametral D2Ps of the vector-valued function spaces $A(K, X)$ are studied in \cite{LT}. In the same article, it is shown that, assuming the uniform convexity of the range space $X$ and $A \otimes X \subset A(K, X)$, the space $A(K, X)$ has the Daugavet property if and only if its base algebra has the Daugavet property. It is also shown in \cite{CJT} that if $X$ has the Daugavet property, then $A(K, X)$ has the Daugavet property. Here we attempt to find a necessary and sufficient condition for $A(K, X)$ to have the polynomial Daugavet property. The article consists of three main parts. In Section 2, we revisit well-known facts about $\Delta$-points and Daugavet points in the context of complex Banach spaces. Like the Daugavet property, the DLD2P can also be analyzed by using rank-one real-projections (Theorem \ref{prop:deltaequiv}). In Section 3, we examine the relationship between alternative convexity or smoothness and nonsquareness. From the fact that strongly locally uniformly alternatively convex or smooth (sluacs) Banach spaces are locally uniformly nonsquare (Proposition 3.8.(i)), it follows that no sluacs Banach space has a $\Delta$-point (Theorem \ref{prop:nod}). In Section 4, we study the polynomial Daugavet property of the space $A(K, X)$. Here we explicitly compute the polynomial Daugavetian index of the space $A(K, X)$ (Theorem \ref{th:polydauind}).
The space $A(K, X)$ has a bicontractive projection if the Shilov boundary of the base algebra $A$ has isolated points (Proposition \ref{prop:bicontractive}). As a consequence, we will show that $A(K, X)$ has the polynomial Daugavet property if and only if either the base algebra $A$ or the range space $X$ has the polynomial Daugavet property (Corollary \ref{cor:polydaugAKX}). \section{Delta-points and Daugavet points in complex Banach spaces} In this section, we study $\Delta$-points and Daugavet points for complex Banach spaces. Although one may find that a certain portion of the proofs are similar to the real case, we include them in this article for completeness. However, we mention that the complex scalar field $\mathbb{C}$ provides something more, namely, a tool to analyze the Daugavet property and the DLD2P for complex Banach spaces through rank-one real-linear operators and rank-one real-projections, respectively. We recall the following useful lemma: \begin{Lemma}\cite[Lemma 1.4]{IK}\label{lem:subslice} Let $x^* \in S_{X^*}$, $\epsilon > 0$. Then for every $x \in S(x^*, \epsilon)$ and every $\delta \in(0, \epsilon)$ there exists $y^* \in S_{X^*}$ such that $x \in S(y^*, \delta)$ and $S(y^*, \delta) \subset S(x^*, \epsilon)$. \end{Lemma} \begin{Theorem}\label{prop:deltaequiv} Let $X$ be a complex Banach space. Then the following statements are equivalent. \begin{enumerate}[\rm(i)] \item $x \in S_X$ is a $\Delta$-point. \item For every slice $S$ of $B_X$ with $x \in S \cap S_X$ and $\epsilon > 0$, there exists $y \in S$ such that $\|x - y\| \geq 2 - \epsilon$. \item For every rank-1 projection $P = x^* \otimes x$ with $x^*x = 1$, we have $\|I - P\| \geq 2$. \item For every rank-1 real-projection $P = \text{Re}\,x^* \otimes x$ with $x^*x = 1$, we have $\|I - P\| \geq 2$. \end{enumerate} \end{Theorem} \begin{proof} The implications (i) $\iff$ (ii) and (ii) $\iff$ (iv) come from modifying the proofs given in \cite[Proposition 1.4.5]{P} for Daugavet points and \cite{IK}. (i) $\implies$ (ii): Assume to the contrary that there exist a slice $S = S(x^*, \alpha)$ containing $x$ and $\alpha > 0$ such that $\|x- y\| < 2 - \alpha$ for every $y\in S$. This implies that $S \cap \Delta_{\alpha}(x) = \emptyset$. Since $x$ is a $\Delta$-point and $x \in S$, we see that $S \cap \overline{conv}\Delta_{\alpha}(x) \neq \emptyset$. Choose $y \in S$ such that $\text{Re}\,x^*y > 1 - \alpha + \delta$ for sufficiently small $\delta > 0$. Then there exist $y_1, y_2, \dots, y_n \in \Delta_{\alpha}(x)$ such that \[ \text{Re}\,x^*y - \frac{1}{n}\sum_{i=1}^{n}\text{Re}\,x^*y_i \leq \left\|y - \frac{1}{n}\sum_{i=1}^{n}y_i\right\| < \delta. \] From the fact that $y_i$'s are not in the slice $S$, we have \begin{eqnarray*} 1 - \alpha < \text{Re}\,x^*y -\delta = \text{Re}\,x^*y - \frac{1}{n}\sum_{i=1}^{n}\text{Re}\,x^*y_i + \frac{1}{n}\sum_{i=1}^{n}\text{Re}\,x^*y_i - \delta &<& \frac{1}{n}\sum_{i=1}^{n}\text{Re}\,x^*y_i\\ &<& 1 - \alpha, \end{eqnarray*} which leads to contradiction. (ii) $\implies$ (i): Suppose that $x \notin \overline{conv}\Delta_{\epsilon}(x)$ for some $\epsilon > 0$. Notice that a singleton $\{x\}$ is convex as well as $\overline{conv}\Delta_{\epsilon}(x)$. Moreover, the set $\{x\}$ is compact. So by Hahn-Banach separation theorem, there exist $x^* \in S_{X^*}$ and $\alpha > 0$ such that for every $z \in \overline{conv}\Delta_{\epsilon}(x)$, we have $\text{Re}\,x^*z < \alpha < \text{Re} x^*x \leq 1$. 
Hence, we see that $z \notin S(x^*, 1 - \alpha)$ for every $z \in \overline{conv}\Delta_{\epsilon}(x)$, in particular, for every $z \in \Delta_{\epsilon}(x)$. This leads to a contradiction from our assumption (ii). (iii) $\implies$ (ii): Consider a slice $S(x^* ,\delta)$, an element $x \in S(x^*, \delta)$ and $\epsilon > 0$. Then there exist $\delta_1> 0$ such that $\frac{\sqrt{2\delta_1}}{1 - \delta_1} < \frac{\epsilon}{4}$ and a bounded linear functional $y^* \in S_{X^*}$ such that $x \in S(y^*, \delta_1) \subset S(x^*, \delta)$. Consider a rank-one projection $P: y \mapsto y^*y\frac{x}{y^*x}$. Then by the assumption (iii), for every $\beta < \frac{\epsilon}{2}$, there exists $y \in S_X$ such that \begin{equation}\label{eq:control} \| y - Py\| = \left\|y - y^*y\frac{x}{y^*x}\right\| = \left \|\gamma y - \text{Re}\,y^*(\gamma y)\frac{x}{y^*x}\right\| > 2 - \beta, \end{equation} where $\gamma \in \mathbb{T}$ such that $|y^*y| = \gamma y^*y = y^*(\gamma y) = \text{Re}\,y^*(\gamma y)$. Moreover, we also see that $\gamma y \in S(y^*, \delta_1)$. Let $\tilde{y} = \gamma y$. Then by (\ref{eq:control}), we obtain \begin{eqnarray*} \|\tilde{y}- x\| &=& \left\|\gamma y - \text{Re}\,y^*(\gamma y)\frac{x}{y^*x} + \text{Re}\,y^*(\gamma y)\frac{x}{y^*x} - x\right\|\\ &\geq& \left\|\gamma y - \text{Re}\,y^*(\gamma y)\frac{x}{y^*x}\right\| - \left\|x - \text{Re}\,y^*(\gamma y)\frac{x}{y^*x} \right\|\\ &>& 2 - \beta - \left|1 - \frac{\text{Re}\,y^*(\gamma y)}{y^*x}\right|. \end{eqnarray*} Since $|y^*x| \geq \text{Re}\,y^*x > 1 - \delta_1$, we can see that \[ \left|1 - \frac{\text{Re}\,y^*(\gamma y)}{y^*x}\right| = \frac{|y^*x - \text{Re}\,y^*(\gamma y)|}{|y^*x|}\leq \frac{|1 - y^*x| + (1 - \text{Re}\,y^*(\gamma y))}{1 - \delta_1} \] and \[ (\text{Im}\,y^*x)^2 = |y^*x|^2 - (\text{Re}\,y^*x)^2 < 1 - (1 - \delta_1)^2 < \delta_1, \] Hence, we have \begin{eqnarray*} |1 - y^*x| = \sqrt{(1 - \text{Re}\,y^*x)^2 + (\text{Im}\,y^*x)^2} &<& \sqrt{(1 - \text{Re}\,y^*x)^2 + \delta_1}\\ &<& \sqrt{\delta_1^2 + \delta_1} < \sqrt{2\delta_1}. \end{eqnarray*} These consequently show that \[ \|\tilde{y} - x\| > 2 - \beta - \frac{2\sqrt{2\delta_1}}{1 - \delta_1} > 2 - \epsilon. \] (ii) $\implies$ (iii): Every rank-one projection is of the form $P = x^* \otimes x$, where $\|x^*\| \geq 1, \|x\| = 1$, and $x^*x = 1$. Define an operator $T \in L(X)$ by $T(y) = \frac{x^*y}{\|x^*\|} \cdot x$ and consider a slice $S = \{y \in B_X: \text{Re}\, \frac{x^*}{\|x^*\|}y \geq 1 - \frac{\epsilon}{2}\}$ containing $x$. Since $\left|\frac{x^*}{\|x^*\|}y\right| \geq \text{Re}\,\frac{x^*}{\|x^*\|}y > 1 - \frac{\epsilon}{2}$, we know that $\left(\text{Im}\,\frac{x^*}{\|x^*\|}y\right)^2 = \left|\frac{x^*}{\|x^*\|}y\right|^2 - \left(\text{Re}\frac{x^*}{\|x^*\|}y\right)^2 < 1 - (1 -\frac{\epsilon}{2})^2$. Then \[ \left|1- \frac{x^*}{\|x^*\|}y\right| = \sqrt{\left(1 - \text{Re}\,\frac{x^*}{\|x^*\|}y\right)^2 + \left(\text{Im}\,\frac{x^*}{\|x^*\|}y\right)^2} < \sqrt{\frac{\epsilon^2}{4} + 1 - \left(1 - \frac{\epsilon}{2}\right)^2}< \sqrt{\epsilon}. \] Moreover, we see that \begin{eqnarray*} \|(I - T)y\| \geq \|y - x\| - \left\|x - \frac{x^*}{\|x^*\|}y\cdot x\right\|&>& 2 - \frac{\epsilon}{2} - \left|1 - \frac{x^*}{\|x^*\|}y\right|\\ &>& 2 - \frac{\epsilon}{2} - \sqrt{\epsilon}. \end{eqnarray*} Hence, $\|I-T\| \geq 2$. Now define a function $\varphi(\lambda) = \|I - \lambda T\|$ where $\lambda \in [0, \infty)$. It is easy to show that the function $\varphi$ is a convex function on $[0, \infty)$. 
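Indeed, the convexity of $\varphi$ follows directly from the triangle inequality: for $\lambda_1, \lambda_2 \geq 0$ and $t \in [0,1]$,
\[
\varphi(t\lambda_1 + (1-t)\lambda_2) = \|t(I - \lambda_1 T) + (1-t)(I - \lambda_2 T)\| \leq t\varphi(\lambda_1) + (1-t)\varphi(\lambda_2).
\]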
Also, $\varphi(0) = 1$ and $\varphi(1) = \|I - T\| \geq 2$. Furthermore, \[ 0 < \frac{\varphi(1) - \varphi(0)}{1-0} \leq \frac{\varphi(s) - \varphi(1)}{s-1}\,\,\, \text{for all} \,\,\, s > 1. \] So, for every $s \geq 1$ we see that $\varphi(s) \geq \varphi(1) \geq 2$. In particular, if $s = \|x^*\| \geq 1$, we obtain that $\varphi(\|x^*\|) = \|I - \|x^*\|\cdot T\| = \|I - P\| \geq 2$, which proves (ii) $\implies$ (iii). \end{proof} Hence we can verify, for complex Banach spaces, the relationship between $\Delta$-points, the DLD2P, and spaces with bad projections \cite{IK}. \begin{corollary} Let $X$ be a complex Banach space. The following statements are equivalent: \begin{enumerate}[\rm(i)] \item The space $X$ has the DLD2P. \item Every point on the unit sphere $S_X$ is a $\Delta$-point. \item Every rank-one projection $P$ has $\|I - P\| \geq 2$, i.e., $X$ is a space with bad projections. \item Every rank-one real-projection $P$ has $\|I - P\| \geq 2$. \end{enumerate} \end{corollary} The following statement about Daugavet points is a compilation of well-known results, but we include it for completeness. \begin{Theorem} Let $X$ be a complex Banach space. Then the following statements are equivalent. \begin{enumerate}[\rm(i)] \item $x \in S_X$ is a Daugavet point. \item For every slice $S$ of $B_X$ and $\epsilon > 0$, there exists $y \in S$ such that $\|x - y\| \geq 2 - \epsilon$. \item Every rank-one operator $T = x^* \otimes x$ satisfies $\|I - T\| = 1 + \|T\|$. \item Every rank-one operator $T = x^* \otimes x$ of norm one satisfies $\|I - T\| = 2$. \item Every rank-one real-linear operator $T = \text{Re}\,x^* \otimes x$ satisfies $\|I - T\| = 1 + \|T\|$. \item Every rank-one real-linear operator $T = \text{Re}\, x^* \otimes x$ of norm one satisfies $\|I - T\| = 2$. \end{enumerate} \end{Theorem} \begin{proof} One may see the proof for the real case in \cite[Proposition 1.4.5]{P}. (i) $\implies$ (ii): Assume to the contrary that there exist a slice $S = S(x^*, \alpha)$ with $\alpha > 0$ and an $\epsilon > 0$ such that $\|x - y\| < 2 - \epsilon$ for every $y\in S$. This implies that $S \cap \Delta_{\epsilon}(x) = \emptyset$. Since $x$ is a Daugavet point, we have $B_X = \overline{conv}\Delta_{\epsilon}(x)$, and in particular $S \subset \overline{conv}\Delta_{\epsilon}(x)$. Choose $y \in S$ such that $\text{Re}\,x^*y > 1 - \alpha + \delta$ for sufficiently small $\delta > 0$. Then there exist $y_1, y_2, \dots, y_n \in \Delta_{\epsilon}(x)$ such that \[ \text{Re}\,x^*y - \frac{1}{n}\sum_{i=1}^{n}\text{Re}\,x^*y_i \leq \left\|y - \frac{1}{n}\sum_{i=1}^{n}y_i\right\| < \delta. \] From the fact that the $y_i$'s are not in the slice $S$, we have \begin{eqnarray*} 1 - \alpha < \text{Re}\,x^*y -\delta = \text{Re}\,x^*y - \frac{1}{n}\sum_{i=1}^{n}\text{Re}\,x^*y_i + \frac{1}{n}\sum_{i=1}^{n}\text{Re}\,x^*y_i - \delta &<& \frac{1}{n}\sum_{i=1}^{n}\text{Re}\,x^*y_i\\ &\leq& 1 - \alpha, \end{eqnarray*} which leads to a contradiction. (ii) $\implies$ (i): Suppose that $B_X \neq \overline{conv}\Delta_{\epsilon}(x)$ for some $\epsilon > 0$, and pick $u \in B_X \setminus \overline{conv}\Delta_{\epsilon}(x)$. Notice that the singleton $\{u\}$ is convex and compact, while $\overline{conv}\Delta_{\epsilon}(x)$ is convex and closed. So by the Hahn-Banach separation theorem, there exist $x^* \in S_{X^*}$ and $\alpha < 1$ such that for every $z \in \overline{conv}\Delta_{\epsilon}(x)$, we have $\text{Re}\,x^*(z) < \alpha < \text{Re}\, x^*u \leq 1$. Hence, we see that $z \notin S(x^*, 1 - \alpha)$ for every $z \in \overline{conv}\Delta_{\epsilon}(x)$.
This contradicts our assumption (ii). (iii) $\implies$ (iv) is clear, and (iv) $\implies$ (iii) comes from the fact that for the Daugavet property it is enough to consider rank-one operators of norm one \cite{W}. The equivalence (ii) $\iff$ (v) $\iff$ (vi) is a well-known result from \cite{AHLP}. (iv) $\implies$ (ii): Let $x^* \in S_{X^*}$ and $\epsilon > 0$, and consider the slice $S = \{y \in B_X: \text{Re}\,x^*y > 1 - \frac{\epsilon}{2}\}$. Let $T \in L(X)$ be a rank-one operator of norm one of the form $T(y) = x^*y\cdot x$ where $\|x^*\| = \|x\| = 1$. By the assumption (iv), there exists $y \in S_X$ such that $\|y - Ty\| = \|y - x^*y\cdot x\| > 2 - \frac{\epsilon}{2}$. Notice that $|x^*y| = \text{Re}\,x^*(\gamma y)$ for some $\gamma \in \mathbb{T}$, and so \[ 1 + \text{Re}\,x^*(\gamma y) \geq \|y - x^*y \cdot x\| > 2 - \frac{\epsilon}{2}. \] Hence $\gamma y \in S$. Let $\tilde{y} = \gamma y$. Then we have \[ \|\tilde{y} - x\| \geq \|\gamma y - \text{Re}\,x^*(\gamma y) \cdot x\| - 1 + \text{Re}\,x^*(\gamma y) = \|y - x^*y\cdot x\| - 1 + \text{Re}\,x^*(\gamma y) > 2 - \epsilon. \] Therefore, we see that (ii) holds. (ii) $\implies$ (iv): Let $T = x^* \otimes x$ be a rank-one operator of norm one, so that $\|x^*\| = \|x\| = 1$. For $\epsilon > 0$, let $S = \{y \in B_X: \text{Re}\,x^*y > 1 - \frac{\epsilon}{2}\}$ be the corresponding slice. By the assumption (ii), there exists $y \in S$ such that $\|x - y\| > 2 - \frac{\epsilon}{2}$. Notice that $1 \geq |x^*y|^2 = (\text{Re}\,x^*y)^2 + (\text{Im}\,x^*y)^2$. Hence we have \[ |1- x^*y| = \sqrt{(1 - \text{Re}\,x^*y)^2 + (\text{Im}\,x^*y)^2} < \sqrt{\frac{\epsilon^2}{4} + 1 - \left(1 - \frac{\epsilon}{2}\right)^2} = \sqrt{\epsilon}. \] Moreover, \begin{eqnarray*} \|(I - T)y\| = \|y - x^*yx\| \geq \|y - x\| - \|x - x^*y \cdot x\| &>& 2 - \frac{\epsilon}{2} - |1 - x^*y|\\ &>& 2 - \frac{\epsilon}{2} - \sqrt{\epsilon}. \end{eqnarray*} Since $\epsilon > 0$ is arbitrary, we obtain $\|I - T\| \geq 2$. Then (iv) holds immediately from the fact that $\|I - T\| \leq 1 + \|T\| = 2$. \end{proof} \begin{Corollary} Let $X$ be a complex Banach space. Then the following statements are equivalent: \begin{enumerate}[\rm(i)] \item The space $X$ has the Daugavet property. \item Every point on the unit sphere $S_X$ is a Daugavet point. \end{enumerate} \end{Corollary} Now, we make a similar observation on rank-one, norm-one projections. \begin{Proposition}\label{prop:propD} Let $X$ be a complex Banach space. The following statements are equivalent: \begin{enumerate}[\rm(i)] \item Every rank-one projection $P = x^* \otimes x$ of norm one on $X$ satisfies $\|I - P\| = 2$. \item Every rank-one real-projection $P = \text{Re}\,x^*\otimes x$ of norm one on $X$ satisfies $\|I - P\| = 2$. \end{enumerate} \end{Proposition} \begin{proof} (i) $\implies$ (ii): Let $P = \text{Re}\,x^* \otimes x$, where $x^* \in S_{X^*}$ and $x \in S_X$ satisfy $\text{Re}\,x^*x = x^*x = 1$. Then for every $\epsilon >0$, there exists $y \in S_X$ such that $\|y - x^*y\cdot x\| \geq 2 - \epsilon$ in view of (i). Now take $\gamma \in \mathbb{T}$ such that $|x^*y| = \gamma x^*y$. This implies that $x^*(\gamma y) = \text{Re}\, x^*(\gamma y)$. Hence we see that \[ \|I - P\| \geq \|\gamma y - \text{Re}\,x^*(\gamma y)\cdot x\| = \|\gamma( y - x^*y\cdot x)\| = \|y - x^*y\cdot x\| \geq 2 -\epsilon. \] Since $\epsilon > 0$ is arbitrary, we obtain $\|I - P\| = 2$. (ii) $\implies$ (i): Let $P = x^* \otimes x$, where $x^* \in S_{X^*}$ and $x \in S_X$ satisfy $x^*x = 1$.
From (ii), we see that for every $\epsilon > 0$, there exists $y \in S_X$ such that $\|y - \text{Re}\,x^*y\cdot x\| \geq 2 - \frac{\epsilon}{2}$. Then we have $|\text{Re}\,x^*y| \geq 1- \frac{\epsilon}{2}$. Notice that \[ (\text{Im}\,x^*y)^2 = |x^*y|^2 - (\text{Re}\,x^*y)^2 \leq 1 - \left(1 - \frac{\epsilon}{2}\right)^2 < \epsilon. \] Hence we obtain \[ \|I - P\| \geq \|y - x^*y\cdot x\| \geq \|y - \text{Re}\,x^*y \cdot x\| - |\text{Im}\,x^*y| \geq 2 - \frac{\epsilon}{2} - \sqrt{\epsilon}. \] Therefore, since $\epsilon > 0$ is arbitrary, we obtain $\|I - P\| = 2$. \end{proof} \section{Alternative convexity or smoothness, nonsquareness, and the Daugavet property} In this section, we study the relationship between alternative convexity or smoothness, nonsquareness, and the Daugavet property. First, we recall various nonsquareness properties, in the sense of James \cite{J,WSL}. Uniform nonsquareness has been examined for both real and complex Banach spaces via Jordan-von Neumann constants \cite{KMT}. \begin{definition} \begin{enumerate}[\rm(i)] \item A Banach space $X$ is uniformly nonsquare (UNSQ) if there exists $\delta > 0$ such that for every $x, y \in S_X$, $\min\{\|x\pm y\|\} \leq 2 - \delta$. \item A Banach space $X$ is locally uniformly nonsquare (LUNSQ) if for every $x \in S_X$, there exists $\delta > 0$ such that $\min\{\|x \pm y\|\} \leq 2 - \delta$ for every $y \in S_X$. \item A Banach space $X$ is nonsquare (NSQ) if for every $x, y \in S_X$, $\min\{\|x \pm y\| \} < 2$. \end{enumerate} \end{definition} \noindent Here we call each point $x \in S_X$ in (ii) a {\it locally uniformly nonsquare point (or uniformly non-$\ell_1^2$ point)}. We have the following implications for these classes: \[ \text{UNSQ}\,\,\, \implies\,\,\, \text{LUNSQ}\,\,\, \implies\,\,\, \text{NSQ}. \] It has recently been shown that UNSQ real Banach spaces do not have $\Delta$-points at all \cite{ALMP}. We extend this result to the class of LUNSQ Banach spaces over the complex scalar field. Let us start with an improvement of Theorem \ref{prop:deltaequiv}. For the proof in the real case, we refer to \cite{JR}. \begin{Lemma}\label{lem:delta} Let $X$ be a complex Banach space and let $x\in S_X$ be a $\Delta$-point. For every $\epsilon > 0$, every $\alpha > 0$ with $\frac{\alpha}{1 - \alpha} < \epsilon$, and every slice $S = S(x^*, \alpha)$ containing $x$, there exists a slice $S(z^*, \alpha_1)$ of $B_X$ such that $S(z^*, \alpha_1) \subset S(x^*, \alpha)$ and $\|x - y\| > 2 - \epsilon$ for all $y \in S(z^*, \alpha_1)$. \end{Lemma} \begin{proof} Let $x^* \in S_{X^*}$ and $S = S(x^*, \alpha)$ be a slice containing $x$. First choose $\eta > 0$ such that $\eta < \min\left\{1 - \frac{1 - \alpha}{\text{Re}\,x^*(x)}, \epsilon - \frac{\alpha}{1 - \alpha}\right\}$. Since $x \in S_X$ is a $\Delta$-point, for the rank-one real-projection $P(y) = \frac{\text{Re}\,x^*y}{\text{Re}\,x^*x}\cdot x$ we have $\|I - P\| \geq 2$ by Theorem \ref{prop:deltaequiv}. Then there exists $y^* \in S_{X^*}$ such that $\|y^* - P^*y^*\| \geq 2 - \eta$. Now define $z^* = \frac{P^*y^* - y^*}{\|P^*y^* - y^*\|} \in S_{X^*}$ and $\alpha_1 = 1 - \frac{2 - \eta}{\|P^*y^* - y^*\|}$, where $P^*y^* = \frac{y^*x}{\text{Re}\,x^*x}\cdot \text{Re}\, x^*$. For every $y \in S(z^*, \alpha_1)$, notice that \[ \frac{\text{Re}\,x^*y}{\text{Re}\,x^*x}\cdot \text{Re}\,y^*x - \text{Re}\,y^*y = \text{Re}\,z^*y\cdot \|P^*y^* - y^*\| > 2 - \eta. \] Hence we see that $\frac{\text{Re}\,x^*y}{\text{Re}\,x^*x}\cdot \text{Re}\,y^*x > 1 - \eta$.
Since $\text{Re}\, y^*x$ cannot be zero, without loss of generality, assume that $\text{Re}\, y^*x > 0$. Then we have $\text{Re}\, x^*y > (1 - \eta)\cdot\text{Re}\,x^*x > 1 - \alpha$, which shows that $S(z^*, \alpha_1) \subset S(x^*, \alpha)$. Furthermore, notice that $\text{Re}\,x^*y \leq |x^*y| \leq 1$, and so \[ \left\|\frac{x}{\text{Re}\,x^*x} - y\right\|\geq \frac{\text{Re}\,y^*x}{\text{Re}\,x^*x} - \text{Re}\,y^*y \geq \frac{\text{Re}\,x^*y}{\text{Re}\,x^*x}\cdot\text{Re}\,y^*x - \text{Re}\,y^*y > 2 - \eta. \] Therefore, we obtain \[ \|x - y\| \geq \left\|\frac{x}{\text{Re}\,x^*x} - y\right\| - \left(\frac{1}{\text{Re}\,x^*x}-1\right) > (2 - \eta) - \left(\frac{\alpha}{1 - \alpha}\right) > 2 - \epsilon. \] \end{proof} Conversely, the same proof as in \cite[Lemma 2.2]{JR} transfers to complex Banach spaces, so together with Lemma \ref{lem:delta} we obtain the following characterization. \begin{Corollary} Let $X$ be a complex Banach space. Then $x \in S_X$ is a $\Delta$-point if and only if for every $\epsilon > 0$ and every slice $S = S(x^*, \alpha)$ containing $x \in S$, there exists a slice $S(z^*, \alpha_1)$ such that $S(z^*, \alpha_1) \subset S(x^*, \alpha)$ and $\|x - y\| > 2 - \epsilon$ for all $y \in S(z^*, \alpha_1)$. \end{Corollary} As a consequence, we obtain the relationship between locally uniformly nonsquare points and $\Delta$-points for both real and complex Banach spaces. \begin{Proposition}\label{prop:nod} Let $X$ be a complex Banach space. A locally uniformly nonsquare point $x \in S_X$ is not a $\Delta$-point of $X$. \end{Proposition} \begin{proof} We show that a $\Delta$-point $x \in S_X$ cannot be a locally uniformly nonsquare point. Let $\epsilon > 0$ and $\eta \in (0, \frac{\epsilon}{2})$. By Lemma \ref{lem:delta}, for every $\alpha > 0$ with $\frac{\alpha}{1 - \alpha} < \eta$ and every slice $S = S(x^*, \alpha)$ containing $x$, there exists a slice $S(z^*, \alpha_1) \subset S$ such that $\|x - y\| > 2 - \eta > 2 - \epsilon$ for all $y \in S(z^*, \alpha_1)$. In particular, we have $\frac{\text{Re}\,z^*y}{\|y\|} > \frac{1-\alpha_1}{\|y\|} \geq 1 - \alpha_1$ for every $y \in S(z^*, \alpha_1)$. Hence $y' = \frac{y}{\|y\|} \in S(z^*, \alpha_1)$ and $\|x - y'\| > 2 - \epsilon$ for every $y \in S(z^*, \alpha_1)$. Moreover, by the fact that $\alpha < \frac{\alpha}{1 - \alpha} < \eta$ and $x, y' \in S$, we have $\|x + y'\| \geq \text{Re}\,x^*x + \text{Re}\,x^*y' > 2 - 2\alpha > 2 - 2\eta > 2 - \epsilon$. Thus, for every $\epsilon > 0$ there exists $y' \in S_X$ such that $\min\{\|x + y'\|, \|x - y'\|\} > 2 - \epsilon$. This shows that $x \in S_X$ is not a locally uniformly nonsquare point. \end{proof} \begin{Corollary}\label{cor:nodelta} Let $X$ be a complex Banach space. If $X$ is LUNSQ, then the space does not admit $\Delta$-points. As a consequence, no LUNSQ space satisfies the Daugavet property, the DD2P, or the DLD2P. \end{Corollary} A Banach space $X$ is said to have the {\it anti-Daugavet property} for a class of operators $\mathcal{M}$ if the following equivalence holds: \[ \|I + T\| = 1 + \|T\| \iff \|T\| \in \sigma(T), \] where $\sigma(T)$ is the spectrum of $T \in \mathcal{M}$. If $\mathcal{M} = L(X)$, we simply say that the space $X$ satisfies the anti-Daugavet property. We mention that the ``if'' part of this equivalence always holds for any bounded linear operator. It is well known that any uniformly rotund or uniformly smooth Banach space has the anti-Daugavet property.
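For the reader's convenience, we sketch the standard argument for the ``if'' part mentioned above (this is classical and is recorded here only as an illustration): if $\|T\| \in \sigma(T)$, then $\|T\|$ lies on the boundary of the spectrum and hence belongs to the approximate point spectrum, so there are $x_n \in S_X$ with $\|Tx_n - \|T\|x_n\| \rightarrow 0$, and consequently
\[
\|I + T\| \geq \|x_n + Tx_n\| \geq (1 + \|T\|)\|x_n\| - \|Tx_n - \|T\|x_n\| \rightarrow 1 + \|T\|;
\]
together with the trivial bound $\|I + T\| \leq 1 + \|T\|$, this yields the Daugavet equation. The substance of the anti-Daugavet property is therefore the converse implication.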
Moreover, this property is connected to the alternative convexity or smoothness properties that are introduced in \cite{KSSW}: \begin{definition}\label{def:acs} \begin{enumerate}[\rm(i)] \item A Banach space $X$ is uniformly alternatively convex or smooth (uacs) if for all sequences $(x_n), (y_n) \subset S_X$ and $(x_n^*) \subset S_{X^*}$, the conditions $\|x_n + y_n\|\rightarrow 2$ and $x_n^*(x_n) \rightarrow 1$ imply $x_n^*(y_n) \rightarrow 1$. \item A Banach space $X$ is strongly locally uniformly alternatively convex or smooth (sluacs) if for every $x \in S_X$ and all sequences $(x_n) \subset S_X$ and $(x^*_n) \subset S_{X^*}$, the conditions $\|x_n + x\|\rightarrow 2$ and $x_n^*(x_n) \rightarrow 1$ imply $x_n^*(x) \rightarrow 1$. \item A Banach space $X$ is alternatively convex or smooth (acs) if for all $x,y \in S_X$ and $x^* \in S_{X^*}$, the conditions $\|x + y\|= 2$ and $x^*(x) = 1$ imply $x^*(y) = 1$. \end{enumerate} \end{definition} \noindent Uniformly convex (resp. locally uniformly rotund, rotund) and uniformly smooth (resp. uniformly G\^ateaux-smooth, smooth) Banach spaces are known to be uacs (resp. sluacs, acs) \cite{Hard}. Even though it is mentioned in \cite{KSSW} that alternative convexity or smoothness for complex Banach spaces can be defined in a similar fashion, recent investigations of these properties also assume the scalar field to be $\mathbb{R}$. Hence, we provide equivalent definitions that involve only the real parts of bounded linear functionals, which enables us to consider complex Banach spaces. \begin{Proposition} \begin{enumerate}[\rm(i)] \item A Banach space $X$ is uacs if and only if for all sequences $(x_n), (y_n) \subset S_X$ and $(x_n^*) \subset S_{X^*}$, the conditions $\|x_n + y_n\| \rightarrow 2$ and $\text{Re}\,x_n^*x_n \rightarrow 1$ imply $\text{Re}\,x_n^*y_n \rightarrow 1$. \item A Banach space $X$ is sluacs if and only if for every $x \in S_X$ and all sequences $(x_n) \subset S_X$ and $(x_n^*) \subset S_{X^*}$, the conditions $\|x_n + x\| \rightarrow 2$ and $\text{Re}\,x_n^*x_n \rightarrow 1$ imply $\text{Re}\,x_n^*x \rightarrow 1$. \item A Banach space $X$ is acs if and only if for all $x,y \in S_X$ and $x^* \in S_{X^*}$, the conditions $\|x + y\|= 2$ and $\text{Re}\,x^*(x) = 1$ imply $\text{Re}\,x^*(y) = 1$. \end{enumerate} \end{Proposition} \begin{proof} We assume that $X$ is a complex Banach space. Since the proofs for (i) and (ii) are similar, we only prove (i). Suppose first that the condition involving real parts in (i) holds, and let $(x_n), (y_n) \subset S_X$ and $(x_n^*) \subset S_{X^*}$ be sequences such that $x_n^*x_n \rightarrow 1$ and $\|x_n + y_n\| \rightarrow 2$. Since $x_n^*x_n \rightarrow 1$ in $\mathbb{C}$, we have in particular $\text{Re}\,x_n^*x_n \rightarrow 1$. Then by the assumption, we obtain $\text{Re}\,x_n^*y_n \rightarrow 1$. Now, for every $\epsilon \in (0,1)$, there exists $N \in \mathbb{N}$ such that $\text{Re}\,x_n^*y_n > 1 - \epsilon$ for every $n \geq N$. This implies that \[ 1 \geq |x_n^*y_n|^2 = (\text{Re}\,x_n^*y_n)^2 + (\text{Im}\,x_n^*y_n)^2 \geq (1 - \epsilon)^2 + (\text{Im}\, x_n^*y_n)^2, \] and so $\text{Im}\,x_n^*y_n \rightarrow 0$ as $n \rightarrow \infty$. Therefore, we see that $x_n^* y_n = \text{Re}\,x_n^* y_n + i\,\text{Im}\,x_n^*y_n \rightarrow 1$, that is, $X$ is uacs. Conversely, suppose that $X$ is uacs and that $\|x_n + y_n\| \rightarrow 2$ and $\text{Re}\,x_n^*x_n \rightarrow 1$. Since $(\text{Im}\,x_n^*x_n)^2 \leq 1 - (\text{Re}\,x_n^*x_n)^2 \rightarrow 0$, we have $x_n^*x_n \rightarrow 1$, and the uacs condition yields $x_n^*y_n \rightarrow 1$; in particular, $\text{Re}\,x_n^*y_n \rightarrow 1$.
For (iii), suppose that the condition involving real parts holds, and let $x, y \in S_X$ and $x^* \in S_{X^*}$ be such that $\|x +y\| = 2$ and $x^*x = 1$. Then $\text{Re}\,x^*x = 1$, and hence $\text{Re}\,x^*y = 1$ by the assumption. Hence, we see that \[ 1 \geq |x^*y|^2 = (\text{Re}\,x^*y)^2 + (\text{Im}\,x^*y)^2 = 1 + (\text{Im}\,x^*y)^2, \] and so $\text{Im}\,x^*y = 0$. Therefore, $x^*y = \text{Re}\,x^*y = 1$. The converse implication follows in the same way as in (i). \end{proof} Even though every uacs Banach space is UNSQ \cite{Hard, KSSW}, there has been no explicit description of the relationship between sluacs and LUNSQ (resp. acs and NSQ) Banach spaces. As a matter of fact, similar statements also hold for sluacs and acs Banach spaces. \begin{Proposition}\label{prop:acsnsq} \begin{enumerate}[\rm(i)] \item Every sluacs space is LUNSQ. \item Every acs space is NSQ. \end{enumerate} \end{Proposition} \begin{proof} (i) Suppose that a sluacs space $X$ is not LUNSQ. Then there exists $x \in S_X$ such that for every $\delta > 0$, there exists $y \in S_X$ such that $\|x + y\| > 2 - \delta$ and $\|x - y\| > 2 - \delta$. So choose a sequence $(x_n)_{n=1}^{\infty} \subset S_X$ such that $\|x + x_n\| > 2 - \frac{1}{2^n}$ and $\|x - x_n\| > 2 - \frac{1}{2^n}$. In view of the Hahn-Banach theorem, we can also find a sequence $(x_n^*)_{n=1}^{\infty} \subset S_{X^*}$ such that \[ \text{Re}\,x_n^*x_n + \text{Re}\,x_n^*x = \|x + x_n\|. \] Then we see that \[ 2 - \frac{1}{2^n} < \text{Re}\,x_n^*x_n + \text{Re}\,x_n^*x \leq \|x\| + \text{Re}\,x_n^*x_n = 1 + \text{Re}\,x_n^*x_n, \] and so $\text{Re}\,x_n^*x_n \rightarrow 1$ as $n \rightarrow \infty$. Since the space $X$ is assumed to be sluacs and $\|x_n + x\| \rightarrow 2$, we obtain $\text{Re}\,x_n^*x \rightarrow 1$ as $n \rightarrow \infty$. However, since $\|x_n + (-x)\| = \|x - x_n\| \rightarrow 2$ and $-x \in S_X$, repeating the same argument with $-x$ in place of $x$ yields $\text{Re}\,x_n^*(-x) \rightarrow 1$, that is, $-\text{Re}\,x_n^*x \rightarrow 1$. This leads to a contradiction. (ii) Suppose that an acs space $X$ is not nonsquare. Then there exist $x, y \in S_X$ such that $\|x + y\| = \|x - y\| = 2$. Let $x^* \in S_{X^*}$ be such that $\text{Re}\,x^*(x) + \text{Re}\,x^*(y) = \|x + y\|$. From the fact that \[ 2 = \text{Re}\,x^*(x) + \text{Re}\,x^*(y) \leq \text{Re}\,x^*(x) + \|y\| = \text{Re}\,x^*(x) + 1, \] we have $\text{Re}\,x^*x = 1$. This also shows that $\text{Re}\,x^*y = 1$. However, since $\|x - y\| = 2$ and the space $X$ is acs, we have $\text{Re}\,x^*(-y) = 1$, that is, $-\text{Re}\,x^*y = 1$, which is a contradiction. Therefore, the space $X$ must be nonsquare. \end{proof} We mention that locally uniformly rotund (LUR) Banach spaces do not have $\Delta$-points. As a matter of fact, based on our observations, we can show further that sluacs Banach spaces do not have $\Delta$-points. \begin{corollary} Let $X$ be a sluacs Banach space. Then $X$ does not contain $\Delta$-points. \end{corollary} \begin{proof} Every sluacs Banach space is LUNSQ by Proposition \ref{prop:acsnsq}.(i). Then by Proposition \ref{prop:nod}, we see that the space does not contain $\Delta$-points. \end{proof} There is a long-standing open problem of whether a Banach space with the Daugavet property can be rotund. While there is a rotund (incomplete) normed space with the Daugavet property \cite{KMMP}, the existence has not been verified for Banach spaces. Since every rotund Banach space is nonsquare, it would be interesting to know the answer to the following question, which may help to prove or disprove the open problem. \begin{Problem} Can an NSQ Banach space $X$ contain $\Delta$-points? \end{Problem} \section{Remarks on the Daugavet property of $A(K, X)$} Let $K$ be a compact Hausdorff space.
The space $C(K)$ is the set of all complex-valued continuous functions over $K$ endowed with the supremum norm $\|\cdot\|_{\infty}$. A {\it uniform algebra} $A$ is a closed subalgebra of $C(K)$ that separates points and contains constant functions. For a compact subset $K \subset \mathbb{C}$, the space $P(K)$ (resp. $R(K)$) of continuous functions that can be approximated uniformly on $K$ by polynomials in $z$ (resp. by rational functions with poles off $K$) and the space $A(K)$ of continuous functions that are analytic on the interior of $K$ are well-known examples of uniform algebras. When $K = \overline{\mathbb{D}}$, the corresponding uniform algebra $A(K) = A(\overline{\mathbb{D}})$ is the disk algebra. We refer to \cite{D, L} for more details on uniform algebras. For a complex Banach space $X$, let $C(K, X)$ be the set of all vector-valued continuous functions over $K$ equipped with the supremum norm. We recall the definition of the vector-valued function space $A(K, X)$. \begin{Definition} Let $K$ be a compact Hausdorff space and $X$ be a Banach space. The space $A(K,X)$ is called {\it a function space over the base algebra $A$} if it is a subspace of $C(K,X)$ that satisfies: \begin{enumerate}[\rm(i)] \item The base algebra $A := \{x^*\circ f: x^* \in X^*, f \in A(K, X)\}$ is a uniform algebra. \item $A \otimes X \subset A(K, X)$. \item For every $g \in A$ and every $f \in A(K, X)$, we have $g\cdot f \in A(K, X)$. \end{enumerate} \end{Definition} \noindent If $X = \mathbb{F}$, then the space $A(K, X)$ becomes the uniform algebra $A$ on a compact Hausdorff space $K$. It is clear that $C(K, X)$ is a function space over a base algebra $C(K)$. As a nontrivial example, for given Banach spaces $X$ and $Y$, let $A_{w^*}(B_{X^*}, Y)$ be the space of all weak$^*$-to-norm continuous functions on the closed unit ball $B_{X^*}$ that are holomorphic on the interior of $B_{X^*}$. It is a closed subspace of $C(B_{X^*}; Y)$, where $B_{X^*}$ is the weak$^*$-compact set. Then $A_{w^*u}(B_{X^*}; Y)$ is a function space over base algebra $A_{w^*u}(B_{X^*})$. A subset $L \subset K$ is said to be a {\it boundary} for $A$ if for every $f \in A$ there exists $t \in L$ such that $f(t) = \|f\|_\infty$. The smallest closed boundary for $A$ is called the {\it Shilov boundary} denoted by $\mathcal{G}amma$. A point $x \in K$ is a {\it strong boundary point} for a uniform algebra $A$ if for every open subset $U \subset K$ containing $x$, there exists $f \in A$ such that $\|f\|_{\infty} = |f(t_0)| = 1$ and $\sup_{t \in K \setminus U} |f(t)| < 1$. For a compact Hausdorff space $K$, the set of all strong boundary points on $A$ is coincides with the Choquet boundary $\mathcal{G}amma_0$, that is, the set of all extreme points on the set $K_A = \{\lambda \in A^*: \|\lambda\| = \lambda(1_A) = 1\}$ \cite[Theorem 4.3.5]{D}. Moreover, the closure of $\mathcal{G}amma_0$ is $\mathcal{G}amma$ in this case \cite[Corollary 4.3.7.a]{D}. For instance, the Shilov boundary of the disk algebra $A(\overline{\mathbb{D}})$ is the unit circle $\partial\overline{\mathbb{D}}$. To study various geometrical properties of $A(K, X)$ and Bishop-Phelps-Bollob\'as property for Asplund operators which range space is a uniform algebra, a Urysohn-type lemma has played an important role. Here we use a stronger version of the lemma provided in \cite{CGK}. \begin{Lemma}\cite[Lemma 3.10]{CJT}\label{lem:urysohnAKX} Let $K$ be a compact Hausdorff space. 
If $t_0$ is a strong boundary point for a uniform algebra $A \subset C(K)$, then for every open subset $U \subset K$ containing $t_0$ and $\epsilon > 0$, there exists $\phi = \phi_U \in A$ such that $\phi(t_0) = \|\phi\|_{\infty} = 1$, $\sup_{K \setminus U}|\phi(t)| < \epsilon$ and \[ |\phi(t)| + (1 - \epsilon)|1 - \phi(t)| \leq 1 \] for every $t \in K$. \end{Lemma} We can also construct a Urysohn-type function at an isolated point in the Shilov boundary. \begin{Lemma}\cite[Lemma 2.5]{LT}\label{lem:auxiso} Let $A$ be a uniform algebra on a compact Hausdorff space $K$ and let $t_0$ be an isolated point of the Shilov boundary $\mathcal{G}amma$ of $A$. Then there exists a function $\phi \in A$ such that $\phi(t_0) = \|\phi\| = 1$ and $\phi(t) = 0$ for $t \in \mathcal{G}amma \setminus \{t_0\}$. \end{Lemma} The next statement is in the proof of the case (iii) for \cite[Theorem 4.2]{LT}, but we state it explicitly here. \begin{Lemma} \label{lem:akxdecomp} Let $K$ be a compact Hausdorff space and $\mathcal{G}amma$ be the Shilov boundary of the base algebra for the space $A(K, X)$. Suppose that $\mathcal{G}amma$ has an isolated point. Then, $A(K, X)$ is isometrically isomorphic to $X \oplus_{\infty} Y$ where $Y$ is $A(K, X)$ restricted to $K\setminus \{t_0\}$. \end{Lemma} \begin{proof} Let $t_0 \in \mathcal{G}amma$ be an isolated point. By Lemma \ref{lem:auxiso}, there exists $\phi \in A$ such that $\phi(t_0) = \|\phi\|_{\infty} = 1$ and $\phi(t) = 0$ for $t \in \mathcal{G}amma \setminus \{t_0\}$. Let $\tilde{K} = K \setminus \{t_0\}$. Define a norm-one projection $P: A(K, X) \rightarrow A(K, X)$ by $Pf = \phi \cdot f$ and denote $Y$ be the restriction of $A(K, X)$ to $\tilde{K}$. As a matter of fact, the image $P(A(K, X))$ is isometrically isomorphic to $X$. Indeed, define a linear operator $\Psi: P(A(K, X)) \rightarrow X$ by $\Psi(Pf) = f(t_0)$. Then $\|\Psi(Pf)\|_X =\|f(t_0)\|_X = \|Pf\|$. Moreover, we see that for every $x \in X$ there exists $f \in A(K, X)$ such that $f(t_0) = x$. Hence, $\Psi$ is surjective, which in turn implies that the operator $\Psi$ is an isometric isomorphism on $P(A(K, X))$. Now, we claim that the space $A(K, X)$ is isometrically isomorphic to $X \oplus_{\infty} Y$. For $f \in A(K, X)$, define a bounded linear operator $\Phi: A(K, X) \rightarrow X \oplus_{\infty} Y$ by $\Phi f = (Pf, f_{|\tilde{K}})$. Then we see that \[ \|\Phi f\| = \max\{\|Pf\|, \|f_{|\tilde{K}}\|\} = \max\left\{\|f(t_0)\|_X, \sup_{t \in\tilde{K}}\|f(t)\|_X\right\} = \|f\|. \] Notice that for a given $(f, g) \in X \oplus_{\infty} Y$, there exist $f_1, f_2 \in A(K, X)$ such that $f = Pf_1$ and $g = f_{2|\tilde{K}}$. Let $h = Pf_1 + f_2 - Pf_2 \in A(K, X)$. Then we have $\Phi(h) = (Pf_1, f_{2|\tilde{K}})$. Hence, the operator $\Phi$ is also surjective, and so it is an isometric isomorphism between $A(K, X)$ and $X \oplus_{\infty} Y$. \end{proof} The following lemma will be useful for later. \begin{Lemma}\label{lem:AKXisom} Let $X$ be a Banach space. Suppose that $L$ is a closed boundary for $A$. The space of restrictions of elements of $A(K, X)$ to $L$ is denoted by $A(L, X)$ and the restrictions of elements of $A$ to $L$ is denoted by $A(L)$. Then $A(L, X)$ is isometrically isomorphic to $A(K, X)$. \end{Lemma} \subsection{The polynomial Daugavet property in $A(K,X)$} First we provide a sufficient condition for $A(K, X)$ with the polynomial Daugavet property. We mention that the proof method is inspired by \cite[Theorem 2.7]{CGKM}. 
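Before turning to the sufficient condition, we recall, purely as an illustration, the simplest polynomials to which the polynomial Daugavet property applies: for a scalar polynomial $p \in \mathcal{P}(X)$ and $x_0 \in X$, the rank-one polynomial $p \otimes x_0$, defined by $(p\otimes x_0)(x) = p(x)\,x_0$, is weakly compact and satisfies $\|p \otimes x_0\| = \|p\|\,\|x_0\|$, so for such polynomials the Daugavet equation reads
\[
\|I + p\otimes x_0\| = 1 + \|p\|\,\|x_0\|.
\]
By Theorem \ref{th:polydauggen}, verifying the Daugavet equation for polynomials of this form is enough to obtain the polynomial Daugavet property. As a concrete instance of the criterion below, the Shilov boundary of the disk algebra is the unit circle, which has no isolated points, so Theorem \ref{th:polydaug} applies, for example, whenever the base algebra of $A(K, X)$ is the disk algebra.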
\begin{Theorem}\label{th:polydaug} Let $K$ be a compact Hausdorff space and let $\mathcal{G}amma$ be the Shilov boundary of the base algebra $A$ of $A(K, X)$. If $\mathcal{G}amma$ does not have isolated points, then $A(K, X)$ has the polynomial Daugavet property. \end{Theorem} \begin{proof} In view of \cite[Corollary 2.2]{CGMM}, it suffices to show that for every $p \in \mathcal{P}(X)$ with $\|p\| = 1$, every $x_0 \in S_X$, and every $\epsilon > 0$, there exist $\alpha \in S_{\mathbb{C}}$ and $y \in B_X$ such that \[ \text{Re}\,\alpha p(y) > 1 - \epsilon\,\,\, \text{and} \,\,\, \|x_0 + \alpha y\| > 2 - \epsilon. \] Let $0 < \epsilon < 1$, $P \in \mathcal{P}(X)$ with $\|P\| = 1$, and let $f_0 \in S_{A(K, X)}$. Choose $h \in S_{A(K, X)}$ and $\alpha \in \mathbb{T}$ such that $|P(h)| > 1 - \frac{\epsilon}{2}$ and $\text{Re}\,\alpha P(h) > 1 - \frac{\epsilon}{2}$. Also choose $t_0 \in \mathcal{G}amma_0$ such that $\|f_0(t_0)\|_X > 1 - \frac{\epsilon}{8}$. Let $U = \{t \in K: \|f_0(t) - f_0(t_0)\|_X < \frac{\epsilon}{8}\,\,\, \text{and}\,\,\, \|h(t) - h(t_0)\|_X < \frac{\epsilon}{8}\}$ be a nonempty open subset of $K$. We consider two cases. Case 1: Suppose that there exists $(t_i)_{i=1}^{\infty} \subset U$ such that $\|\alpha^{-1}f_0(t_i) - h(t_i)\| \rightarrow 0$. Then we have \begin{eqnarray*} \|f_0 + \alpha h\| &\geq& \sup_i \|f_0(t_i) + \alpha h(t_i)\|\\ &\geq& \sup_i(2\|f_0(t_0)\|_X - 2\|f_0(t_0) - f_0(t_i)\|_X - \|f_0(t_i) - \alpha h(t_i)\|_X\\ &\geq& 2 - \frac{\epsilon}{4} - \frac{\epsilon}{4} - \frac{\epsilon}{4} > 2 - \epsilon. \end{eqnarray*} Case 2: Now suppose that there exists $\eta > 0$ such that $\|\alpha^{-1}f_0(t) - h(t)\| > \eta$ for every $t \in U$. Since $\mathcal{G}amma$ is perfect, we see that the strong boundary point $t_0 \in U$ is not an isolated point. Let $\{U_i\}_{i=1}^{\infty}$ be a collection of pairwise disjoint open subsets of $U$ such that $\cup_{i = 1}^{\infty} U_i \subset U$. From the fact that the Choquet boundary $\mathcal{G}amma_0$ is dense in $\mathcal{G}amma$, there exist strong boundary points $t_i \in U_i$ for each $i \in \mathbb{N}$. Then by Lemma \ref{lem:urysohnAKX}, there exists $\phi_i \in A$ such that \begin{equation}\label{eq:urysohns} \phi_i(t_i) = 1, \,\,\,, \sup_{K\setminus U_i}|\phi_i(t)| < \frac{\epsilon}{2^{i+3}}, \,\,\, \text{and} \,\,\, |\phi_i(t)| + \left(1 - \frac{\epsilon}{2^{i+3}}\right)|1 - \phi_i(t)| \leq 1 \,\,\, \text{for every} \,\,\, t \in K. \end{equation} Let $h_i = h + \phi_i(\alpha^{-1} f_0(t_i) - h(t_0)) \in A(K, X)$. Then for every $t \in \cup_{i = 1}^{\infty} U_i$, by (\ref{eq:urysohns}), we have \begin{eqnarray*} \|h_i(t)\|_X &=& \|h(t) + \phi_i(t)\alpha^{-1}f_0(t_i) - \phi_i(t)h(t_i)\|_X\\ &\leq& \|h(t) - h(t_i)\|_X + \|h(t_i) - \phi_i(t)h(t_i)\|_X + \|\phi_i(t) \alpha^{-1} f_0(t_i)\|_X\\ &\leq& \|h(t) - h(t_0)\|_X + \|h(t_0) - h(t_i)\|_X + |1 - \phi_i(t)| + |\phi_i(t)|\\ &\leq& \frac{\epsilon}{4} + \left(1 - \frac{\epsilon}{2^{i+3}}\right)|1 - \phi(t)| + \frac{\epsilon}{2^{i+3}}|1 - \phi_i(t)| + |\phi_i(t)|\\ &\leq& \frac{\epsilon}{4} + 1 + \frac{\epsilon}{2^{i+2}} < 1 + \frac{\epsilon}{2}. \end{eqnarray*} On the other hand, for every $t \in K \setminus \cup_{i=1}^{\infty} U_i$, \begin{equation}\label{eq:hit} \|h_i(t)\|_X \leq \|h(t)\|_X + |\phi_i(t)| \|\alpha^{-1} f_0(t_i) - h(t_i)\|_X \leq 1 + \frac{\epsilon}{2^{i+3}}\cdot 2 < 1 + \frac{\epsilon}{2}. 
\end{equation} Moreover, we see that \begin{equation}\label{eq:hi} \|h_i\| \geq \|h_i(t_i)\|_X = \|f_0(t_i)\|_X \geq \|f_0(t_0)\| - \|f_0(t_0) - f_0(t_i)\| \geq 1 - \frac{\epsilon}{2}. \end{equation} Now, let $g_i = \frac{h_i}{\|h_i\|}$. By (\ref{eq:hi}) we obtain \[ \|h_i - g_i\| = |1 - \|h_i\|| < \frac{\epsilon}{2}. \] For every $(\beta_i) \in \ell_{\infty}$, notice that \begin{eqnarray*} \sup_n \left\|\sum_{i=1}^n \beta_i \phi_i(\alpha^{-1}f_0(t_i) - h(t_i)) \right\| &\leq& \sup_n \sup_{t \in K} \sum_{i=1}^{n}|\beta_i||\phi_i(t)|\|\alpha^{-1}f_0(t_i) - h(t_i)\|_X\\ &\leq& \sup_n \sup_{t \in K}\sum_{i=1}^n2|\beta_i||\phi_i(t)|\\ &\leq& 2 \sup_i|\beta_i|\left(1 + \frac{\epsilon}{2^4} + \frac{\epsilon}{2^5}\cdots\right) = 2 \left(1 + \frac{\epsilon}{8}\right) \sup_i|\beta_i| \end{eqnarray*} Hence by \cite[Theorem V.6]{Diest}, the series $\sum_{i=1}^n \beta_i \phi_i(\alpha^{-1}f_0(t_i) - h(t_i))$ is weakly unconditionally Cauchy. Since we assumed that $\|\alpha^{-1}f_0(t) - h(t)\| > \eta$, there exists a basic subsequence $(\phi_{\sigma(i)}(\alpha^{-1}f_0(t_{\sigma(i)}) - h(t_{\sigma(i)}))$ that is equivalent to the basis $(e_i)$ in $c_0$ by using Bessaga-Pe\l czynski principle \cite[pg. 45]{Diest}. From the fact that a polynomial on a bounded subset of $c_0$ is weakly continuous \cite[Proposition 1.59]{Dn} , we have $\text{Re}\,\alpha P(h_{\sigma(i)})\rightarrow \text{Re}\,\alpha P(h)$ as $i \rightarrow \infty$. Choose $k \in \mathbb{N}$ such that $\text{Re}\,\alpha P(h_k) > 1 - \frac{\epsilon}{2}$. Then we have \[ \text{Re}\, \alpha P(g_k) = \frac{\text{Re}\, \alpha P(h_k)}{\|h_k\|} > \frac{1 - \epsilon /2}{1 + \epsilon/2} \geq 1 - \epsilon. \] Therefore, we finally obtain \begin{eqnarray*} \|f_0 + \alpha g_k\| \geq \|f_0 + \alpha h_k\|- \|g_k - h_k\| &\geq& \|f_0(t_i) + \alpha h_k(t_i)\| - \frac{\epsilon}{2}\\ &=& 2\|f_0(t_i)\| - \frac{\epsilon}{2}\\ &\geq& 2\|f_0(t_0)\| - 2\|f_0(t_i) - f_0(t_0)\| - \frac{\epsilon}{2}\\ &\geq& 2\|f_0(t_0)\| - \frac{3\epsilon}{4}\\ &\geq& 2 - \epsilon. \end{eqnarray*} \end{proof} Let $\mathcal{P}_K(X, X)$ be the set of all compact polynomials from $X$ to itself. For $P \in \mathcal{P}_K(X, X)$, the numerical range $V(P)$ is defined by \[ V(T) = \{x^*(Tx): x^* \in S_{X^*} \,\,\,\text{and} \,\,\, x \in S_X \,\,\,\text{where} \,\,\, x^*(x) = 1\}. \] Now, we recall the polynomial Daugavetian index. \begin{Definition}\cite{S} For a Banach space $X$, the Daugavetian index $\text{Daug}\,(X)$ is defined by \begin{eqnarray*} \text{Daug}_p\,(X) &=& \max\{m \geq 0: \|I + P\| \geq 1 + m\|P\|, \,\,\,\text{for every} \,\,\,P \in \mathcal{P}_K(X, X)\}\\ &=& \inf\{\omega(P): P \in \mathcal{P}_K(X, X), \|P\| =1\}, \end{eqnarray*} where $\omega(P) = \sup \text{Re}\,V(T)$. \end{Definition} \noindent It is well-known that $\text{Daug}_p\,(X) \in [0,1]$ and $\text{Daug}_p(X) \leq \text{Daug}(X)$, where $\text{Daug}(X)$ is the Daugavetian index introduced in \cite{M}. A Banach space $X$ has the polynomial Daugavet property if and only if $\text{Daug}_p(X) = 1$. This comes from the fact that a Banach space $X$ satisfies the Daugavet equation for every rank-one polynomials if and only if $X$ satisfies the same equation for every weakly compact polynomials (see Theorem \ref{th:polydauggen}). We recall the following lemma that will be useful for later. \begin{Lemma}\cite[Proposition 2.2, 2.3]{S}\label{lem:familydaug} Let $\{X_{\lambda}\}_{\lambda \in \Lambda}$ be a family of infinite-dimensional Banach spaces and let $Z$ be the $c_0$- or $\ell_{\infty}$-sum of the family. 
Then \[ \text{Daug}_p\,(Z) = \inf\{\text{Daug}_p\,(X_\lambda): \lambda \in \Lambda\}. \] \end{Lemma} If there exists a finite-rank projection on $X$ such that $\|P\| = \|I - P\| = 1$, then $\text{Daug}\,(X) = 0$ \cite{M}. Hence $\text{Daug}_p\,(X) = 0$ in this case. Examples of such spaces are $C(K)$ where $K$ has isolated points and Banach spaces $X$ with an 1-unconditional basis \cite[pp. 635]{M}. Similar to the space $C(K)$, we can also construct such a projection for uniform algebras. \begin{proposition}\label{prop:bicontractive} Let $A$ be a uniform algebra on a compact Hausdorff space $K$ and let $t_0$ be an isolated point of the Shilov boundary $\mathcal{G}amma$ of $A$. Then, there exists $P:A \rightarrow A$ defined by $P = \delta_{t_0} \otimes \phi$, where $\delta_{t_0} \in S_{A^*}$ and $\phi \in S_A$, such that $P$ is a projection and $\|P\| = \|I - P\| = 1$. \end{proposition} \begin{proof} Let $t_0 \in K$ be an isolated point of $\mathcal{G}amma$. Then, by Lemma \ref{lem:auxiso}, there exists a function $\phi \in A$ such that $\phi(t_0) = \|\phi\| = 1$ and $\phi(t) = 0$ for $t \in \mathcal{G}amma \setminus \{t_0\}$. Now, define $P = \delta_{t_0} \otimes \phi$ where $\delta_{t_0}$ is the pointwise evaluation at $t_0$. Since $P^2f = f(t_0)\phi(t_0)\cdot \phi = f(t_0)\phi = Pf$, the rank-one operator $P$ is a projection on $A$. Let $f \in S_A$. Then we have $|Pf(t)| = |f(t_0)\phi(t)| = 0$ for $t \in K\setminus \{t_0\}$ and $|Pf(t_0)| = |f(t_0)|$. Hence, $\|Pf\|_{\infty} = |f(t_0)| \leq 1$. However, we see that $\|P\phi\|_{\infty} = 1$. So $\|P\| = 1$. Similarly, $|[(I - P)f](t)| = |f(t) - f(t_0)\phi(t)| = |f(t)|$ for $t \in K \setminus \{t_0\}$ and $|[(I - P)f](t_0)| = 0$. Let $U$ be an open set in $K$ that contains $t_1 \in \mathcal{G}amma_0$. Then there exists $\tilde{\phi} \in A$ such that $\tilde{\phi}(t_1) = 1$ and $\sup_{K\setminus U} |\tilde{\phi}| < 1$. We see that $\|(I - P)\tilde{\phi}\|_{\infty} = \|\tilde{\phi}\|_{\infty} = 1$. Therefore, we obtain $\|I - P\| = 1$. \end{proof} \begin{Corollary}\label{cor:daugzero} Let $K$ be a compact Hausdorff space and let $A$ be a uniform algebra on $K$. If the Shilov boundary of $A$ contains an isolated point, then $\text{Daug}_p(A) = 0$. \end{Corollary} \begin{proof} This is an immediate consequence of Proposition \ref{prop:bicontractive}. \end{proof} \begin{Theorem}\label{th:polydauind} Let $X$ be a complex Banach space and let $K$ be a compact Hausdorff space. Then \[ \text{Daug}_p\,(A(K, X)) = \max\{\text{Daug}_p(A), \text{Daug}_p(X)\}. \] \end{Theorem} \begin{proof} Let $P \in \mathcal{P}_K(A(K, X))$. We first show that \[ \|I + P\| \geq 1 + \text{Daug}_p(X)\|P\|. \] For a given $\epsilon > 0$ there exists $f_0\in S_{A(K, X)}$ and $t_0 \in \mathcal{G}amma_0$ such that $\|P(f_0)(t_0)\|_X \geq \|P\| - \frac{\epsilon}{2}$. Since $P$ is continuous at $f_0$, there exists $\delta > 0$ such that \begin{equation}\label{eq:Pcont} \text{If} \,\,\, \|f_0 - g\| < \delta,\,\,\, \text{then}\,\,\, \|P(f_0) - P(g)\| \leq \frac{\epsilon}{2}. \end{equation} Now, consider $U = \{t \in K: \|f_0(t) - f_0(t_0)\|_X <\frac{\delta}{4}\}$. Since the set $U$ is a nonempty open subset of $K$ that contains the strong boundary point $t_0$, by Lemma \ref{lem:urysohnAKX}, there exists $\phi \in A$ such that \begin{equation}\label{eq:urysohn1} \phi(t_0) = 1,\,\,\, \sup_{K \setminus U} |\phi(t)| < \frac{\delta}{8},\,\,\, \text{and}\,\,\, |\phi(t)| + \left(1 - \frac{\delta}{8}\right) |1 - \phi(t)| \leq 1 \,\,\, \text{for every} \,\,\, t \in K. 
\end{equation} Fix $x_0 \in S_X$ such that $f_0(t_0) = \|f_0(t_0)\| \cdot x_0$ and define $\Psi: \mathbb{C} \rightarrow A(K, X)$ by \[ \Psi(z) = \left(1 - \frac{\delta}{8}\right)(1 - \phi)f_0 + \phi \cdot x_0 \cdot z. \] Then we have \begin{eqnarray*} \Psi(\|f_0(t_0)\|_X)(t) - f_0(t) &=& \left(1 - \frac{\delta}{8}\right)(1 - \phi(t)) f_0(t) + \phi(t)f_0(t_0) - f_0(t)\\ &=& \phi(t)(f_0(t) - f_0(t_0)) - \frac{\delta(1- \phi(t))f_0(t)}{8}. \end{eqnarray*} In view of (\ref{eq:urysohn1}), notice that \[ \left\|\phi(t)(f_0(t) - f_0(t_0))- \frac{\delta(1- \phi(t))f_0(t)}{8}\right\|_X \leq \|\phi\|_{\infty} \cdot \|(f_0(t) - f_0(t_0))\|_X + \frac{\delta}{4}< \frac{\delta}{2} \] for every $t \in U$ and that \[ \left\|\phi(t)(f_0(t) - f_0(t_0))- \frac{\delta(1- \phi(t))f_0(t)}{8}\right\|_X < 2 \cdot \frac{\delta}{4} = \frac{\delta}{2} \] for every $t \in K \setminus U$. Hence we can see that $\|\Psi(\|f_0(t_0)\|_X) - f_0\| < \delta$, and so \[ \|P(\Psi(\|f_0(t_0)\|_X))(t_0) - P(f_0)(t_0)\|_X \leq \|P(\Psi(\|f_0(t_0)\|_X)) - P(f_0)\| < \frac{\epsilon}{2} \] by (\ref{eq:Pcont}). This implies that \[ \|P(\Psi(\|f_0(t_0)\|_X))(t_0)\|_X > \|P(f_0)(t_0)\|_X - \frac{\epsilon}{2}> \|P\| - 2\cdot \frac{\epsilon}{2} = \|P\| - \epsilon. \] In view of Hahn-Banach theorem, there exists $x_0^* \in S_{X^*}$ such that \[ x_0^*\left(P(\Psi(\|f_0(t_0)\|_X))(t_0)\right) = \|P(\Psi(\|f_0(t_0)\|_X))(t_0)\|_X > \|P\| - \epsilon. \] Notice that the function $f(z) = x_0^*\left(P(\Psi(z))(t_0)\right)$ is holomorphic. Hence, by the maximum modulus theorem, there exists $z_0 \in \mathbb{T}$ such that \[ \|P(\Psi(z_0))(t_0)\|_X \geq x_0^*\left(P(\Psi(\|f_0(t_0)\|_X))(t_0)\right) > \|P\| - \epsilon. \] Take $x_1 = z_0 x_0 \in S_X$ and let $x_1^* \in S_{X^*}$ such that $x_1^*x_1 = 1$. Define a function $\Phi: X \rightarrow A(K, X)$ by \[ \Phi(x) = x_1^*x\left(1 - \frac{\delta}{8}\right)(1 - \phi)f_0 + \phi\cdot x. \] We see that $\|\Phi(x)\| \leq 1$ for every $x \in B_X$ from (\ref{eq:urysohn1}). In particular, $\Phi(x_1) = \Psi(z_0)$. Hence $\|P(\Phi(x_1))(t_0)\| > \|P\| - \epsilon$. Consider $Q \in \mathcal{P}_K(X)$ defined by $Q(x) = P(\Phi(x))(t_0)$. Notice that \[ \|Q\| \geq \|Qx_1\|_X = \|(P(\Phi(x_1))(t_0)\|_X > \|P\| - \epsilon. \] This implies that $\|I + Q\| \geq 1 + \text{Daug}_p(X) \|Q\| > 1 + \text{Daug}_p(X) (\|P\| - \epsilon)$. Now choose $x_2 \in B_X$ such that $\|x_2 + Qx_2\| > 1 + \text{Daug}_p(X) (\|P\| - \epsilon)$ and let $g = \Phi(x_2)$. Then we obtain \begin{eqnarray*} \| I + P\| \geq \|g + Pg\| &\geq& \|g(t_0) + P(g)(t_0)\|_X\\ &\geq& \left\|x_1^*x_2\left(1 - \frac{\delta}{8}\right)(1 - \phi(t_0)f(t_0) + \phi(t_0)x_2 - Q(x_2)\right\|_X\\ &=& \|x_2 + Q(x_2)\| > 1 + \text{Daug}_p(X) (\|P\| - \epsilon) \end{eqnarray*} As $\epsilon \rightarrow 0$, we have $\|I + P\| \geq 1 + \text{Daug}_p(X)\|P\|$. This consequently shows that $\text{Daug}_p(A(K, X)) \geq \text{Daug}_p(X)$. If $\mathcal{G}amma$ does not have isolated points, then $A(K, X)$ has the polynomial Daugavet property by Theorem \ref{th:polydaug}. This implies that \[ \text{Daug}_p(A(K, X)) = \text{Daug}_p(A) = 1, \] and so we have $\text{Daug}_p(A(K, X)) = \max\{\text{Daug}_p(A), \text{Daug}_p(X)\}$. If $\mathcal{G}amma$ has isolated points, then $A(K, X) = X \oplus_{\infty} Y$ by Lemma \ref{lem:akxdecomp} and $\text{Daug}(A) = 0$ by Corollary \ref{cor:daugzero}. From Lemma \ref{lem:familydaug}, we see that $\text{Daug}_p (A(K, X)) \leq \text{Daug}_p (X)$. 
Therefore, we also obtain $\text{Daug}_p (A(K, X)) = \max\{\text{Daug}_p (A), \text{Daug}_p (X)\}$. \end{proof} \begin{corollary}\label{cor:polydaugAKX} Let $K$ be a compact Hausdorff space. Then the space $A(K, X)$ has the polynomial Daugavet property if and only if either the base algebra $A$ or $X$ has the polynomial Daugavet property. \end{corollary} \subsection{Remarks on the property $(D)$ and the convex diametral local diameter two property in $A(K, X)$} Since the equivalence between the property ($\mathcal{D}$) and the DLD2P is not clear, it is natural to explore various Banach spaces that potentially distinguish these properties. However, we show that this is not the case for $A(K, X)$. Under the additional assumption that $X$ is uniformly convex, the space $A(K,X)$ has the Daugavet property if and only if the Shilov boundary of the base algebra does not have isolated points \cite[Theorem 5.6]{LT}. Moreover, the Daugavet property of $A(K, X)$ is equivalent to all diametral D2Ps under the same assumption. In fact, carefully inspecting the proof of \cite[Theorem 5.4]{LT}, we see that the rank-one projection constructed in there has norm-one. With the aid of our previous observations, we can see that the DLD2P is also equivalent to the property ($\mathcal{D}$) for $A(K, X)$. \begin{proposition}\label{prop:dpointvect}\cite[Theorem 5.4]{LT} Let $X$ be a uniformly convex Banach space, $K$ be a compact Hausdorff space, $\mathcal{G}amma$ be the Shilov boundary of the base algebra $A$ of $A(K, X)$, and $f \in S_{A(K, X)}$. Then the following statements are equivalent: \begin{enumerate} [\rm(i)] \item $f$ is a Daugavet point. \item $f$ is a $\Delta$-point. \item Every rank-one, norm-one projection $P = \psi \otimes f$, where $\psi \in A(K, X)^*$ with $\psi(f) = 1$, satisfies $\|I - P\| = 2$. \item there is a limit point $t_0$ of $\mathcal{G}amma$ such that $\|f\| = \|f(t_0)\|_X$. \end{enumerate} \end{proposition} \begin{proof} (i) $\implies$ (ii) is clear. The implication (ii) $\implies$ (iii) comes from Theorem \ref{prop:deltaequiv}. Indeed, for a Banach space $Y$, a point $f \in S_Y$ is a $\Delta$-point if and only if every rank-one projection of the form $P = \psi\otimes f$, where $\psi \in Y^*$ with $\psi(f) = 1$, satisfies $\|I - P\| \geq 2$. Hence, we immediately have (iii) if this projection $P$ has the norm one. (iii) $\implies$ (iv) and (iv) $\implies$ (i) are identical to the proof of (ii) $\implies$ (iii) and (iii) $\implies$ (i) in \cite[Theorem 5.4]{LT}, respectively. \end{proof} \begin{Corollary}\cite[Corollary 5.5]{LT}\label{th:dpoint} Let $K$ be a compact Hausdorff space, $\mathcal{G}amma$ be the Shilov boundary of $A(K)$, and $f \in S_{A(K)}$. Then the following statements are equivalent: \begin{enumerate}[\rm(i)] \item $f$ is a Daugavet point. \item $f$ is a $\Delta$-point. \item Every rank-one, norm-one projection $P = \psi \otimes f$, where $\psi \in A(K)^*$ with $\psi(f) = 1$, satisfies $\|I - P\| = 2$. \item there is a limit point $t_0$ of $\mathcal{G}amma$ such that $\|f\|_\infty = |f(t_0)|$. \end{enumerate} \end{Corollary} As a consequence, we obtain the following characterizations for the space $A(K, X)$ and infinite-dimensional uniform algebras. \begin{Proposition}\cite[Theorem 5.6]{LT} Let $X$ be a uniformly convex Banach space, let $K$ be a compact Hausdorff space, and let $\mathcal{G}amma$ be the Shilov boundary of the base algebra $A$ of $A(K, X)$. 
Then the following statements are equivalent: \begin{enumerate}[\rm(i)] \item $A(K,X)$ has the polynomial Daugavet property. \item $A(K,X)$ has the Daugavet property. \item $A(K,X)$ has the DD2P. \item $A(K,X)$ has the DLD2P. \item $A(K,X)$ has the property ($\mathcal{D}$). \item The Shilov boundary $\mathcal{G}amma$ does not have isolated points. \end{enumerate} \end{Proposition} \begin{proof} (i) $\implies$ (ii) $\implies$ (iii) $\implies$ (iv) $\implies$ (v) is clear from their definitions. The implication (vi) $\implies$ (i) can be shown by Theorem \ref{th:polydaug}. Showing (v) $\implies$ (vi) is also identical to the proof of \cite[Theorem 5.6]{LT} with Corollary \ref{prop:dpointvect}. \end{proof} \begin{Corollary}\cite[Corollary 5.7]{LT}\label{lemma:equiDau} Let $K$ be a compact Hausdorff space and let $\mathcal{G}amma$ be the Shilov boundary of a uniform algebra $A(K)$. Then the following are equivalent: \begin{enumerate}[\rm(i)] \item $A(K)$ has the polynomial Daugavet property. \item $A(K)$ has the Daugavet property. \item $A(K)$ has the DD2P. \item $A(K)$ has the DLD2P. \item $A(K)$ has the property ($\mathcal{D}$). \item The Shilov boundary $\mathcal{G}amma$ does not have isolated points. \end{enumerate} \end{Corollary} In view of Lemma \ref{lem:urysohnAKX}, we can also show that the sufficient condition for the convex-DLD2P in \cite[Theorem 5.9]{LT} can be described with strong boundary points. \begin{Theorem} Let $K$ be a compact Hausdorff space, $X$ be a uniformly convex Banach space, and let $\mathcal{G}amma$ be the Shilov boundary of the base algebra of $A(K, X)$. Denote by $\mathcal{G}amma'$ the set of limit points of the Shilov boundary. If $\mathcal{G}amma'\cap \mathcal{G}amma_0 \neq \emptyset$, then $A(K, X)$ has the convex-DLD2P. \end{Theorem} \begin{proof} In view of Lemma \ref{lem:AKXisom}, we assume that $K = \mathcal{G}amma$. Denote the set of all $\Delta$-points of $A(K, X)$ by $\Delta$ and the base algebra of $A(K,X)$ by $A$. We claim that $S_{A(K, X)} \subset \overline{\text{conv}}\Delta$. Let $f \in S_{A(K, X)}$. Choose a point $t_0 \in \mathcal{G}amma' \cap \mathcal{G}amma_0$ and let $\lambda = \frac{1 + \|f(t_0)\|_X}{2}$. For $\epsilon > 0$, let $U$ be an open subset of $K$ such that $\|f(t) - f(t_0)\|_X < \epsilon$. Then by Lemma \ref{lem:urysohnAKX}, there exists $\phi \in A$ such that $\|\phi\|_{\infty} = \phi(t_0) = 1$, $\sup_{t \in K \setminus U}|\phi(t)| < \epsilon$, and \[ |\phi(t)| + (1 - \epsilon)|1 - \phi(t)| \leq 1 \] for every $t \in K$. Choose a norm-one vector $v_0 \in X$ and let \[ x_0 = \begin{cases} \frac{f(t_0)}{\|f(t_0)\|_X} &\mbox{if } f(t_0) \neq 0 \\ v_0 &\mbox{if } f(t_0)=0. \end{cases} \] Now, define \begin{align*} f_1(t) &= (1 - \epsilon)(1 - \phi(t))f(t) + \phi(t)x_0\\ f_2(t) &= (1 - \epsilon)(1 - \phi(t))f(t) - \phi(t)x_0, \ \ \ t\in K. \end{align*} Notice that $f_1, f_2 \in A(K, X)$ because $A\otimes X \subset A(K, X)$. Moreover, \begin{eqnarray*} \|f_1(t)\|_X &=\left\|(1 - \epsilon)(1 - \phi(t))f(t) + \phi(t) x_0\right\|_X \\&\leq (1 - \epsilon)|1 - \phi(t)| + |\phi(t)| \leq 1, \end{eqnarray*} for every $t \in K$. In particular, we have $\|f_1(t_0)\|_X = 1$, and so $ \|f_1(t_0)\|_X = \|f_1\| = 1$. By the same argument, we also have $\|f_2(t_0)\|_X = \|f_2\|= 1$. Thus, $f_1, f_2 \in \Delta$ by Proposition \ref{prop:dpointvect}. Let $g(t) = \lambda f_1(t) + (1 - \lambda)f_2(t)$. We need to consider two cases. Case 1: Suppose $f(t_0) \neq 0$. Then $g(t) = (1 - \epsilon)(1 - \phi(t))f(t) + \phi(t)f(t_0)$. 
We see that \begin{eqnarray*} \|g(t) - f(t)\|_X &=& \| (1 - \epsilon)(1 - \phi(t))f(t) + \phi(t)f(t_0) - f(t)\|_X\\ &=& \| (1 - \epsilon)(1 - \phi(t))f(t) + \phi(t)f(t_0) - (1 -\epsilon)f(t) - \epsilon f(t)\|_X\\ &=& \|(1-\epsilon)(- \phi(t))f(t) + (1 - \epsilon)\phi(t)f(t_0) + \epsilon\phi(t)f(t_0) - \epsilon f(t)\|_X\\ &=&\|(1 - \epsilon)\phi(t)(f(t_0) - f(t)) + \epsilon\phi(t)f(t_0) - \epsilon f(t)\|_X\\ &\leq& (1 -\epsilon)|\phi(t)|\cdot \|f(t) - f(t_0)\|_X + \epsilon|\phi(t)|\cdot\|f(t_0)\|_X+ \epsilon \|f(t)\|_X\\ &\leq & (1 -\epsilon)|\phi(t)|\cdot \|f(t) - f(t_0)\|_X + 2\epsilon. \end{eqnarray*} For $t \in U$, we see that $(1 - \epsilon)|\phi(t)| \cdot \|f(t) - f(t_0)\|_X \leq (1 - \epsilon)\epsilon < \epsilon$. On the other hand, for $t \in K \setminus U$, we have $(1 - \epsilon)|\phi(t)| \cdot \|f(t) - f(t_0)\|_X \leq 2(1-\epsilon)\epsilon < 2 \epsilon$. Hence, $\|g - f\| < 4\epsilon$, and so $f \in \overline{\text{conv}}\Delta$. Case 2: Now, suppose $f(t_0) = 0$. Then we have $\|f(t)\|_X < \epsilon$ for every $t \in U$. Moreover, notice that $\lambda = \frac{1}{2}$ and $g(t) = (1 - \epsilon)(1 - \phi(t))f(t)$. This implies that \begin{eqnarray*} \|g(t) - f(t)\|_X &=& \|(1 - \epsilon)(1 - \phi(t))f(t) - (1 - \epsilon)f(t) - \epsilon f(t)\|_X\\ &\leq& (1- \epsilon)|\phi(t)|\cdot \|f(t)\|_X + \epsilon \|f(t)\|_X \leq (1- \epsilon)|\phi(t)|\cdot \|f(t)\|_X + \epsilon. \end{eqnarray*} Notice that $(1- \epsilon)|\phi(t)|\cdot \|f(t)\|_X \leq (1 - \epsilon)\epsilon < \epsilon$ for every $t \in U$. Since $\sup_{t \in K \setminus U} |\phi(t)| < \epsilon$, for $t \in K \setminus U$ we also have $(1- \epsilon)|\phi(t)|\cdot \|f(t)\|_X \leq (1- \epsilon)\epsilon < \epsilon$. This shows that $\|g - f\| < 2 \epsilon$, and so $f \in \overline{\text{conv}}\Delta$. Since $f \in S_{A(K, X)}$ is arbitrary, we see that $S_{A(K, X)} \subset \overline{\text{conv}}\Delta$. Therefore, the space $A(K, X)$ has the convex-DLD2P. \end{proof} \begin{corollary} Let $K$ be a compact Hausdorff space and $\Gamma'$ be the set of limit points of the Shilov boundary of a uniform algebra. If $\Gamma' \cap \Gamma_0 \neq \emptyset$, then the uniform algebra has the convex-DLD2P. \end{corollary} \end{document}
\begin{document} \title{Modeling Single Picker Routing Problems in Classical and Modern Warehouses} \category{Working Paper DPO-2018-11 (version 1, 04.11.2018)} \authors{\textbf{Dominik Goeke and Michael Schneider}\\ goeke$|[email protected]\\ Deutsche Post Chair\,--\,Optimization of Distribution Networks\\ RWTH Aachen University\\[2ex]} \abstract{The standard single picker routing problem (SPRP) seeks the cost-minimal tour to collect a set of given articles in a rectangular single-block warehouse with parallel picking aisles and a dedicated storage policy, i.e, each SKU is only available from one storage location in the warehouse. We present a compact formulation that forgoes classical subtour elimination constraints by directly exploiting two of the properties of an optimal picking tour used in the dynamic programming algorithm of Ratliff and Rosenthal~(1983). We extend the formulation to three important settings prevalent in modern e-commerce warehouses: scattered storage, decoupling of picker and cart, and multiple end depots. In numerical studies, our formulation outperforms existing standard SPRP formulations from the literature and proves able to solve large instances within short runtimes. Realistically sized instances of the three problem extensions can also be solved with low computational effort. We find that decoupling of picker and cart can lead to substantial cost savings depending on the speed and capacity of the picker when traveling alone, whereas additional end depots have rather limited benefits in a single-block warehouse.\\[1ex] \textbf{Keywords:} \textit{warehouse management, picker routing, scattered storage, decoupling, multiple end depots}} \logo{\includegraphics[scale=0.75]{logo}} \titlepage \section{Introduction} \label{sec:intro} Order picking is a central and labor-intensive task in warehouses. The aim of {single picker routing problems} (SPRPs) is to determine a picker tour of minimum cost---starting from and ending at a depot---to collect all stock keeping units (SKUs) contained in a pick list from their storage locations in the warehouse. The cost of a tour is typically measured as distance or time. Single-block SPRPs are defined on a rectangular warehouse, in which the SKUs are stored in racks along both sides of multiple parallel picking aisles that are enclosed by a storage-free cross aisle at the top and at the bottom (see Figure~\ref{fig:warehouse}). Each of the picking aisles contains a number of picking positions, and multiple different SKUs can be located at the same picking position. In single-block SPRPs, we do not distinguish between a picking request from the rack on the left side, on the right side, or from both sides of a picking position. All these cases are treated equally, and only the travel cost to reach the picking positions in the aisles is of relevance. Therefore, a pick list translates into a set of required picking positions that the picker needs to visit. The single-block SPRP with dedicated storage, in which each SKU is only available from one picking position in the warehouse, is the most well-studied SPRP variant, and is denoted as standard SPRP in the following. In a seminal work, \citet{Ratliff:1983} introduce a dynamic programming (DP) algorithm to solve the standard SPRP to optimality with a runtime linear in the number of picking aisles. 
\cite{Roodbergen:2001} extend the DP to two-block warehouses, and \citet{Pansart:2018} present a DP that is applicable to warehouses with an arbitrary number of blocks, however, the runtime complexity is exponential in the number of cross aisles. SPRPs can also be tackled using mathematical formulations that are solved with the help of optimization software. To address the standard SPRP, \citet{Scholz:2016} reduce the number of vertices that have to be considered in each picking aisle based on the fact that the largest gap in an aisle is never traversed if the aisle is entered from top and bottom, originally discussed in \citet{Ratliff:1983}. On the resulting graph, they solve a single-commodity flow formulation of a traveling salesman problem (TSP) variant that contains optional vertices indicating the direction of travel at the entry and exit of each picking aisle. In this way, they obtain a formulation whose size is linear to the number of picking aisles. This formulation is compared to three TSP formulations defined on a complete graph spanning the SKUs to be picked and one Steiner TSP formulation that is adapted to single-block warehouses. The authors demonstrate on a large set of test instances that their formulation is superior with regard to the size of the instances that can be solved and the runtimes for solving the instances. Their formulation can also be extended to multi-block warehouses, but they only present results for the standard SPRP. \citet{Pansart:2018} present a model of the SPRP in multi-block warehouses that is based on a single-commodity flow formulation of the Steiner TSP. The authors use a procedure similar to the one described in \citet{Scholz:2016} to reduce the number of vertices, and the number of arcs is decreased by solving the minimum 1-spanner problem using a commercial solver. In addition, valid inequalities exploiting the special structure of the warehouse are added, and the solver is provided with upper bounds that are computed using a freely available version of the heuristic of \citet{Lin:1973}. On single-block warehouse instances, their formulation is clearly superior to the formulation of \citet{Scholz:2016}. We propose a compact formulation of the standard SPRP that directly exploits two properties of an optimal tour used in the algorithm of \citet{Ratliff:1983}: (i)~two consecutive picking aisles can only be connected using four possible configurations, and (ii)~to prevent the generation of isolated subtours, it is sufficient to ensure that the tour is always connected and the degree of the connections at the top and bottom of each picking aisle is of even degree. Thus, no classical subtour elimination constraints are needed. Although we do not rely on preprocessing or the addition of cuts to speed up the solution of our model, our formulation vastly outperforms the one of \citet{Scholz:2016} and is approximately six times faster than the one of \citet{Pansart:2018} on a set of benchmark instances with up to 30 picking aisles and 45 required picking positions using a comparable computer. Our approach shows a convincing scaling behavior and is able to solve instances with 1000 aisles, 1000 available picking positions per aisle, and 1000 required picking positions in approximately two minutes. 
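To make property~(ii) above concrete, the following minimal sketch (plain Python, using only the standard library, and not part of the formulation itself) checks the two conditions on a candidate set of cross-aisle and picking-aisle traversals: every aisle-top and aisle-bottom node must have even degree, and the traversed edges must form a single connected component. The encoding of nodes as \texttt{(side, aisle)} pairs and the function name are illustrative choices of ours.

\begin{verbatim}
from collections import Counter, defaultdict

def is_single_picking_tour(edges):
    # edges: list of ((side_u, aisle_u), (side_v, aisle_v)) pairs,
    # listed once per traversal (an edge traversed twice appears twice)
    degree = Counter()
    adjacent = defaultdict(set)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        adjacent[u].add(v)
        adjacent[v].add(u)
    # condition 1: even degree at the top and at the bottom of every visited aisle
    if any(d % 2 for d in degree.values()):
        return False
    # condition 2: all traversed edges lie in one connected component
    nodes = list(degree)
    if not nodes:
        return False
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for w in adjacent[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

# Aisles 0 and 1 each traversed once and connected at the top and at the bottom:
tour = [(("top", 0), ("bottom", 0)), (("top", 1), ("bottom", 1)),
        (("top", 0), ("top", 1)), (("bottom", 0), ("bottom", 1))]
print(is_single_picking_tour(tour))  # True
\end{verbatim}

Together, the two conditions ensure that the selected traversals can be sequenced into one closed picker tour; the model presented below enforces them through parity and component variables instead of classical subtour elimination constraints.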
In addition, our model can be extended to cope with three important settings relevant to modern e-commerce warehouses: \begin{itemize} \item \textbf{Scattered storage:} In warehouses with scattered storage, any SKU can be available from more than one picking position. This setting plays a major role in modern e-commerce warehouses of companies like Amazon or Zalando and is receiving growing attention from the scientific community \citep[see, e.g.,][]{Boysen:2018, Weidinger:2018b}. \citet{Daniels:1998} propose a TSP formulation for the SPRP with scattered storage for arbitrary warehouse layouts and compare several heuristics. \citet{Weidinger:2018a} shows that the single-block SPRP with scattered storage is NP-hard. He proposes a heuristic based on the decomposition of the problem into a selection and a routing problem. As comparison method, the formulation of \citet{Daniels:1998} using \citet*{Miller:1960} subtour elimination constraints is realized with Gurobi. Given a time limit of three hours, the formulation is able to solve most of the single-block warehouse instances generated by the authors with three picking aisles, 30 picking positions per aisle, and pick lists with up to seven requested SKUs. In contrast, the extension of our formulation to the single-block SPRP with scattered storage solves large instances with up to 100 picking aisles, 180 picking positions per aisle, and pick lists containing up to 30 SKUs within short runtimes of at most three minutes. \item \textbf{Decoupling of picker and cart:} In manual order picking, items are typically retrieved from the warehouse by a picker pushing a cart, so that multiple items can be picked during one tour. To speed up the order picking, Zalando, a large fashion online retailer, allows pickers to park the cart during the tour, retrieve a few items traveling on their own, then return to the cart and continue their tour (comparable to the picking behavior of people in supermarkets). The company also incorporates this option when planning picker tours \citep{Zalando:2014,Nvidia:2015}; however, no mathematical model or algorithm has yet been published. We extend our formulation to the single-block SPRP with decoupling of picker and cart and investigate the potential time savings of this approach depending on the carrying capacity and the speed of the picker without cart. \item \textbf{Multiple end depots:} To reduce unnecessary trips back to a central depot, warehouse managers can use multiple end depots at which collected items can be dropped off, e.g., at dedicated positions of a conveyor belt. \Citet{DeKoster:1998} consider the single-block SPRP with decentralized depositing, in which they assume that it is possible to drop items anywhere along the upper or lower cross aisle. \citet{Scholz:2016} show how to extend their formulation to this problem variant, but they only present results for the single depot case. We extend our formulation to single-block SPRP with multiple end depots, and we investigate the potential cost savings depending on the number of available end depots. \end{itemize} \noindent Although using our formulation in a commercial solver still cannot match the performance of a dedicated implementation of the DP approach of \citet{Ratliff:1983} in a higher programming language, the former approach has the following advantages: \begin{itemize} \item The formulation can be easily be implemented and used by anyone familiar with a mathematical programming solver. 
No knowledge of a higher programming language is required, and no experience in algorithmic programming to realize a DP is necessary. This point is certainly relevant in practice, where algorithmic programming skills are generally far rarer than at universities and other scientific organizations. \item The formulation is extendable to handle three important settings in modern e-commerce warehouses---scattered storage, decoupling of picker and cart, and multiple end depots---and seems likely to be able to incorporate other real-world-inspired constraints. \item The formulation can be used in approaches for integrated problems, in which the higher-level decision depends on the outcome of the SPRP, e.g., order batching \citep{Gademann:2005, Valle:2017} or storage assignment \citep{Petersen:1999}. For example, the integrated order batching and picker routing problem could be solved by i)~column generation, where our model can be modified to solve the pricing subproblem, i.e., the orders are associated with current prices, and the picker can only pick orders such that the number of collected items does not exceed the maximum batch size, or ii)~by a compact formulation that extends our model by an index for each batch (up to the maximum number of batches), a limited capacity for each batch, and a set covering constraint. The integrated storage assignment and picker routing problem could be studied in a scattered-storage setting using our model. \end{itemize} This paper is organized as follows. We introduce our compact formulation for the standard SPRP in Section~\ref{sec:model}. The following sections present the extensions of the model to the setting with scattered storage (Section~\ref{sec:mixshelves}), decoupling of picker and cart (Section~\ref{sec:cart}), and multiple end depots (Section~\ref{sec:openDepot}). Section~\ref{sec:results} presents the numerical studies to investigate the performance of our formulation and the benefits of the considered extensions. Section~\ref{sec:conclusion} concludes the paper. \section{Mathematical Formulation of the Standard SPRP} \label{sec:model} To solve an instance of the standard SPRP, the warehouse can be restricted to its relevant part, i.e., all aisles that lie to the left of both the depot and the leftmost aisle in which a SKU needs to be picked, and, analogously, all aisles that lie to the right of both the depot and the rightmost aisle in which a SKU needs to be picked, can be removed. The resulting part of the warehouse is represented as a set $\mathcal{J}=\{ 0, \ldots, \numberOfAisles-1 \}$ indexing $m$ aisles numbered from left to right. Each aisle $j \in \mathcal{J}$ has $n$ available picking positions, numbered from top to bottom, and is associated with a set of required picking positions $\mathcal{H}InAisle{j} \subseteq \{0,\ldots, n-1\}$ that the picker needs to visit to complete the pick list. The depot can be located at the entries to the picking aisles in the top or bottom cross aisle. The picking aisle above\,/\,below which the depot is located is denoted as aisle $l$. The parameter $\depotTop{}$ is set to 1 if the depot is located in the top cross aisle and to zero otherwise. The example in Figure~\ref{fig:warehouse} illustrates the introduced concepts: There are eight picking aisles with $n=10$ available picking positions per aisle. SKUs need to be picked from nine required picking positions that are marked black. We only have to consider the $m = 6$ aisles containing required picking positions, i.e., $\mathcal{J}=\{0, \ldots 5\}$. 
The required picking positions in the aisles are given by $\mathcal{H}InAisle{0}=\{1\}$, $\mathcal{H}InAisle{1}=\{9\}$, $\mathcal{H}InAisle{2}=\{2,3,7\}$, $\mathcal{H}InAisle{3}=\{3\}$, $\mathcal{H}InAisle{4}=\{5,6\}$, and $\mathcal{H}InAisle{5}=\{7\}$. The depot is located at the bottom of aisle 3, i.e., $l=3$ and $\depotTop{} = 0$. \begin{figure} \caption{Optimal solution of an example instance of the standard SPRP.} \label{fig:warehouse} \end{figure} As described by \cite{Ratliff:1983}, there exist four feasible configurations to connect aisle $j$ and aisle $j+1$ using the cross aisles located at the top and bottom of the warehouse. We represent these configurations by means of the following binary decision variables: \begin{itemize} \item $\xTwoTop{j}$ equals 1 if the top cross aisle is traversed twice (back and forth), 0 otherwise, \item $\xTwoBottom{j}$ equals 1 if the bottom cross aisle is traversed twice (back and forth), 0 otherwise, \item $\xTopBottom{j}$ equals 1 if the bottom and the top cross aisle are both traversed once, 0 otherwise, and \item $\xTwoTopBottom{j}$ equals 1 if both the top and the bottom cross aisle are traversed twice, 0 otherwise. \end{itemize} With regard to the traversal of picking aisles, the binary decision variables $\xUp{j}$ are used to indicate that aisle $j$ is completely traversed once (in arbitrary direction). If the costs for traversing the picking aisles are non-uniform, it may be beneficial to traverse an aisle $j$ twice: in this case, $\xTwoUp{j}$ equals 1. For uniform traversal costs, there always exists an optimal solution in which no aisle is traversed twice \citep[see][]{Ratliff:1983}. Furthermore, the binary decision variables $\xPickTop{j}{i}$ ($\xPickBottom{j}{i}$) define that picking position $i$---and all picking positions that are located between the top (bottom) and picking position $i$---are accessed via a vertical\xspace branch-and-pick tour from the top (bottom) of aisle $j$, i.e., the picking aisle is entered and left from the same cross aisle. For each of the described decision variables, we precompute cost coefficients $\cost{}$ that correspond to the additional travel cost of the picker if the respective decision variable equals 1, e.g., $\xTwoTopBottom{j}$ has a coefficient $\costTwoTopBottom{j}$ that corresponds to four times the travel cost between aisle $j$ and $j+1$. Figure~\ref{fig:warehouse} illustrates the meaning of the decision variables: here, $\xPickTop{0}{1}, \xTwoTop{0}, \xPickBottom{1}{9}, \xTwoTopBottom{1}, \xUp{2}, \xTopBottom{2},\xPickTop{3}{2}, \xTopBottom{3}, \xUp{4},\xTwoBottom{4}, \xPickBottom{5}{7}$ are equal to $1$. According to \citet{Ratliff:1983}, the generation of isolated subtours is prevented if the degree of the connections at the top and bottom of each picking aisle is even, and the picking tour is connected. Using the observation that an even degree divided by two is an integer, we introduce for each picking aisle $j$ an integer variable $\degreeEvenTop{j}$ ($\degreeEvenBottom{j}$), whose value corresponds to the degree of the connections at the top (at the bottom) of aisle $j$ divided by two. For example, $\degreeEvenTop{2} = 2$ and $\degreeEvenBottom{3} = 1$ in Figure~\ref{fig:warehouse}.
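To illustrate how the cost coefficients $\cost{}$ can be precomputed, the following sketch assumes a purely hypothetical geometry: adjacent picking aisles a distance \texttt{W} apart, adjacent picking positions a distance \texttt{H} apart, and each cross aisle a distance \texttt{H} beyond the first and the last picking position. The concrete distances are placeholders; any other distance metric can be plugged in without changing the model, because the coefficients only enter the objective.

\begin{verbatim}
W, H = 5.0, 1.0             # assumed aisle spacing and picking-position spacing
n = 10                      # available picking positions per aisle, indexed 0 (top) to n-1
aisle_length = (n + 1) * H  # assumed cost of one full traversal of a picking aisle

def cross_aisle_costs(j):
    """Coefficients of the four configurations connecting aisle j and aisle j+1."""
    d = W  # distance between the entries of aisles j and j+1 along a cross aisle
    return {"two_top": 2 * d,         # top cross aisle traversed back and forth
            "two_bottom": 2 * d,      # bottom cross aisle traversed back and forth
            "top_bottom": 2 * d,      # top and bottom cross aisle each traversed once
            "two_top_bottom": 4 * d}  # both cross aisles traversed back and forth

def pick_from_top_cost(i):
    """Branch-and-pick tour from the top cross aisle down to position i and back."""
    return 2 * (i + 1) * H

def pick_from_bottom_cost(i):
    """Branch-and-pick tour from the bottom cross aisle up to position i and back."""
    return 2 * (n - i) * H

print(cross_aisle_costs(0))  # {'two_top': 10.0, 'two_bottom': 10.0, ...}
print(aisle_length, pick_from_top_cost(2), pick_from_bottom_cost(7))  # 11.0 6.0 6.0
\end{verbatim}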
To guarantee that the picking tour is connected, we introduce an additional binary variable $\xComponent{j}$ for each picking aisle $j$, which is equal to 0 if the picking tour from the leftmost relevant aisle in the warehouse up to aisle $j$ is connected, and equal to 1 if this picking tour consists of two components. Note that two components emerge whenever (i)~we have configuration $\xTwoTopBottom{j}$ for the leftmost aisle, or (ii)~we switch from configuration $\xTwoTop{j-1}$ or $\xTwoBottom{j-1}$ to configuration $\xTwoTopBottom{j}$ without connecting the top and bottom cross aisle. Using the above definitions, we can formulate the standard SPRP. To present a concise and comprehensible model (by avoiding the repetition of basically identical constraints for different index sets), we take the following two liberties with regard to notation: (i)~we sometimes use conditional statements for defining the relevant index set or to define the validity of a certain set of constraints, and (ii)~we use the notation $\underset{\mathclap{\mathit{range}}}{[}\ldots]$ to define that a certain part of an expression is only relevant for a given range of the index (and otherwise disappears). \begin{align} \text{min~}&\sum_{j \in \mathcal{J}} \costTwoBottom{j} \xTwoBottom{j} + \costTwoTop{j} \xTwoTop{j} + \costTopBottom{j} \xTopBottom{j} + \costTwoTopBottom{j} \xTwoTopBottom{j} + \costUp{j} \xUp{j} + \costTwoUp{j} \xTwoUp{j}+ \sum_{j \in \mathcal{J}}\sum_{i \in \mathcal{H}InAisle{j}} \left(\costPickBottom{j}{i} \xPickBottom{j}{i} +\costPickTop{j}{i} \xPickTop{j}{i}\right) &~~~~~~~~~~~~~~~~~~~~~~~~\label{F0} \end{align} {\begin{align} \text{s.t.~}&\xTwoBottom{j} + \xTwoTop{j} + \xTopBottom{j} + \xTwoTopBottom{j} = 1 & j \in \mathcal{J}\setminus \{\numberOfAisles-1\} \label{F1}\\ &\xUp{j} + \xTwoUp{j} + \sum_{i' \in \mathcal{H}InAisle{j} : i' \geq i} \xPickTop{j}{i'} + \sum_{i' \in \mathcal{H}InAisle{j} : i' \leq i}\xPickBottom{j}{i'} \geq 1 & j \in \mathcal{J}, i \in \mathcal{H}InAisle{j} \label{F2}\\ &\underset{\mathclap{j > 0}}{[}\xTwoBottom{j-1} + \xTopBottom{j-1} + \xTwoTopBottom{j-1}]+ \xTwoBottom{j} + \xTopBottom{j} + \xTwoTopBottom{j} \geq \xPickBottom{j}{i} & \begin{aligned}[c] \mathit{if}(\depotTop{} = 1)\,\{j \in \mathcal{J}\}\,\\\mathit{else}\,\{j \in \mathcal{J} \setminus \{l\} \}, i \in \mathcal{H}InAisle{j} \end{aligned} \label{F3}\\[15pt] & \underset{\mathclap{j > 0}}{[} \xTwoTop{j-1} + \xTopBottom{j-1} + \xTwoTopBottom{j-1}] + \xTwoTop{j} + \xTopBottom{j} + \xTwoTopBottom{j} \geq \xPickTop{j}{i} & \begin{aligned}[c] \mathit{if}(\depotTop{} = 0)\,\{j \in \mathcal{J}\}\,\\\mathit{else}\,\{j \in \mathcal{J} \setminus \{l\} \}, i \in \mathcal{H}InAisle{j} \end{aligned} \label{F4}\\[15pt] &\xTwoTop{j-1} + \xTwoBottom{j} \leq \xTwoUp{j}+1 & j \in \mathcal{J} \setminus \{0\}\label{F5}\\ &\xTwoBottom{j-1} + \xTwoTop{j} \leq \xTwoUp{j}+1 & j \in \mathcal{J} \setminus \{0\}\label{F6}\\ &2 \xTwoUp{l} + \xUp{l} +\underset{\mathclap{l > 0}}{[}\xTwoTop{l-1} +\xTwoTopBottom{l-1} ] + \xTwoTop{l} +\xTwoTopBottom{l} \geq \underset{\mathclap{l > 0}}{[}\xTwoBottom{l-1}] + \xTwoBottom{l}& \mathit{if}(\depotTop{} = 1) \label{F7}\\ &2 \xTwoUp{l} + \xUp{l} +\underset{\mathclap{l > 0}}{[}\xTwoBottom{l-1} + \xTwoTopBottom{l-1}] + \xTwoBottom{l} +\xTwoTopBottom{l} \geq \underset{\mathclap{l > 0}}{[}\xTwoTop{l-1}] + \xTwoTop{l}& \mathit{if}(\depotTop{}= 0)\label{F8}\\ &\underset{\mathclap{j > 0}}{[}\xTopBottom{j-1} + 2 \xTwoTopBottom{j-1} + 2 \xTwoTop{j-1} ] + \xTopBottom{j} + 2 \xTwoTopBottom{j} + 2 \xTwoTop{j} +
\xUp{j}+ 2 \xTwoUp{j} = 2 \degreeEvenTop{j} & j \in \mathcal{J} \label{E1}\\ &\underset{\mathclap{j > 0}}{[}\xTopBottom{j-1} + 2 \xTwoTopBottom{j-1} + 2 \xTwoBottom{j-1} ] + \xTopBottom{j} + 2 \xTwoTopBottom{j} + 2 \xTwoBottom{j} + \xUp{j}+ 2 \xTwoUp{j} = 2 \degreeEvenBottom{j} & j \in \mathcal{J} \label{E2}\\ &\xTwoTopBottom{j} + \xTwoBottom{j-1} + \xTwoTop{j-1} - \xTwoUp{j} \leq \xComponent{j} +1 & j \in \mathcal{J} \setminus \{0\}\label{T1}\\ &\xTwoTopBottom{j}+\underset{\mathclap{j > 0}}{[}-\xTwoTopBottom{j-1}-\xTwoTop{j-1} -\xTwoBottom{j-1}] - \xTwoUp{j} - \xUp{j} \leq \xComponent{j} & j \in \mathcal{J} \label{T3}\\ &\xComponent{j-1} - \xUp{j}- \xTwoUp{j} \leq \xComponent{j} & j \in \mathcal{J} \setminus \{0\}\label{T4}\\ &\xComponent{j} \leq \xTwoTopBottom{j} & j \in \mathcal{J} \label{T5}\\ &\xTopBottom{j}, \xTwoTop{j}, \xTwoBottom{j}, \xTwoTopBottom{j}, \xComponent{j} \in \{0,1\} & j \in \mathcal{J} \setminus \{\numberOfAisles-1\} \label{B1}\\ &\xUp{j}, \xTwoUp{j} \in \{0,1\} & j \in \mathcal{J} \label{B2}\\ & \xPickTop{j}{i}, \xPickBottom{j}{i}\in \{0,1\} & j \in \mathcal{J}, i \in \mathcal{H}InAisle{j} \label{B3}\\ & \degreeEvenTop{j},\degreeEvenBottom{j} \in \mathcal{N}_{0} & j \in \mathcal{J} \label{B4}\\ & \xTopBottom{\numberOfAisles-1}, \xTwoTop{\numberOfAisles-1}, \xTwoBottom{\numberOfAisles-1}, \xTwoTopBottom{\numberOfAisles-1}, \xComponent{\numberOfAisles-1} = 0 & \label{B5} \end{align}} The objective~\eqref{F0} is to minimize the total costs of the picker tour. Constraints~\eqref{F1} guarantee that the relevant part of the warehouse is visited by the picker using one of the four cross aisle configurations. Constraints~\eqref{F2} ensure that the picker visits all required picking positions. Constraints~\eqref{F3} and \eqref{F4} guarantee that a vertical\xspace branch-and-pick tour from the bottom (top) cross aisle into aisle $j$ can only take place if $j$ is connected to the previous or successive aisle with a configuration that uses the bottom (top) cross aisle. The squared brackets exclude the terms involving the preceding aisle when the constraints for the first picking aisle are determined. Constraints~\eqref{F5} and \eqref{F6} guarantee that switches between top and bottom cross aisle are connected in a feasible manner. Constraints~\eqref{F7} and \eqref{F8} ensure that the depot is included in the tour. For example, if the depot is located at the top $(\depotTop{} = 1)$ and the aisle containing the depot is connected via $\xTwoBottom{l} =1$ and $\xTwoBottom{l-1} =1$, the depot must be included in the tour by setting $\xTwoUp{l} =1$. Instead, if $\xTwoBottom{l} =1$ and $\xTwoBottom{l-1} =0$, the depot must be included by setting $\xTwoUp{l} =1$, $\xTwoTopBottom{l-1} =1$, or $\xTwoTop{l-1}=1$. Constraints~\eqref{E1} and \eqref{E2} establish that the degrees of all connections at the top and also at the bottom of each picking aisle must be even, i.e, every position must be left as often as it is entered. Constraints~\eqref{T1} set the number of components to two (i.e., $\xComponent{j}=1$) if there is a transition from configurations $\xTwoBottom{j-1} =1$ or $\xTwoTop{j-1} =1$ to $\xTwoTopBottom{j} =1$ without directly connecting top and bottom by $\xTwoUp{j} =1$. Constraints~\eqref{T3} set the number of components to two, if top and bottom are not connected by a traversal of the picking aisle, and the part of the warehouse to the left is not visited. Constraints~\eqref{T4} propagate the number of components. 
Constraints~(\ref{T5}) ensure that configuration $\xTwoTopBottom{j}$ is used as long as there are two components. Finally, constraints~\eqref{B1}--\eqref{B5} define the decision variables. Note that the model could be further improved by substituting for every aisle $j$ the variables $\xPickBottom{j}{i}, \xPickTop{j}{i}, i \in \mathcal{H}InAisle{j}$ with three new variables $\xPickBottom{}{j}, \xPickTop{}{j},\xPickTopBottom{j}$ that represent the three vertical\xspace branch-and-pick tours that can alternatively be part of an optimal tour, i.e, from the bottom cross aisle to the topmost requested SKU, from the top cross aisle to the bottommost requested SKU, and from top cross aisle and bottom cross aisle to the two neighboring requested SKUs with the largest distance between them. We refrained from implementing this improvement to keep a more general formulation as a basis for the extensions presented in the following sections. \section{The Single-Block SPRP with Scattered Storage} \label{sec:mixshelves} We extend formulation~\eqref{F0}--\eqref{B5} to the single-block SPRP with scattered storage, i.e., now, any SKU can be available from multiple picking positions. Figure~\ref{fig:warehouse2} shows the optimal solution of an example instance of this problem variant. We assume that multiple items of each individual SKU may be contained in the pick list and that the supply of items of a SKU available at a given picking position is limited. Set $\mathcal{H}$ contains all SKUs that need to be picked, and $\demand{h}$ denotes the number of items of SKU $h \in \mathcal{H}$ that are requested. Set $\setOfCellsInAisleForArticle{j}{h}$ contains all picking positions from which SKU $h$ is available in aisle $j$, and $\capacityTwo{j}{i}{h}$ denotes the number of items of SKU $h$ that are available in aisle $j$ at position $i$. Set $\mathcal{H}InAisle{j}$ is redefined to contain all picking positions in aisle $j$ from which SKUs present in the pick list are available: several positions storing the same SKU may be contained, and not all positions have to be visited. To indicate whether picking position $i$ in aisle ${j}$ is visited, we introduce additional binary variables $\xPick{j}{i}$. In the example in Figure~\ref{fig:warehouse2}, we assume $\mathcal{H} = \{a,b,c,d,e,f,g,h,i\}$, $\capacityTwo{j}{i}{h} = 1,h \in \mathcal{H}, j \in \mathcal{J}, i \in \mathcal{H}InAisle{j}$, and $b_h = 1,h \in \mathcal{H}$. The picking tour is given by $\xPickBottom{1}{7}, \xTwoBottom{1}, \xTwoBottom{2}, \xUp{3}, \xTopBottom{3}, \xPickBottom{4}{9}, \xTopBottom{4},\xUp{5}$ equal to $1$, and, e.g., $\xPick{3}{8} = 1$ indicates that the requested item of SKU 'a' is picked in aisle 3 at position 8. \begin{figure} \caption{Optimal solution of an example instance of the single-block SPRP with scattered storage. 
Picking positions from which a requested SKU is picked are marked in black.} \label{fig:warehouse2} \end{figure} To model the decision where to pick the requested SKUs, we replace constraints~\eqref{F2} by constraints~\eqref{F2c}--\eqref{F2d}: \begin{align} \tiny &\sum_{j \in \mathcal{J}} \sum_{i \in \setOfCellsInAisleForArticle{j}{h}} \capacityTwo{j}{i}{h} \xPick{j}{i} \geq \demand{h} & h \in \mathcal{H} \label{F2c} \\ &\xUp{j} + \xTwoUp{j} + \sum_{i' \in \mathcal{H}InAisle{j} :i'\geqi} \xPickTop{j}{i'} + \sum_{i' \in \mathcal{H}InAisle{j} :i'\leqi}\xPickBottom{j}{i'} \geq \xPick{j}{i}\ & j \in \mathcal{J}, i \in \mathcal{H}InAisle{j} \label{F2b} \\ &\xPick{j}{i} \in \{0,1\} & j \in \mathcal{J}, i \in \mathcal{H}InAisle{j} \label{F2d} \end{align} Constraints~\eqref{F2c} ensure that the requested number of items of each SKU is picked from the picking positions at which the SKU is available. Constraints~\eqref{F2b} guarantee that the selected positions are visited by the picking tour. Because not all picking positions storing requested SKUs have to be visited in the case of scattered storage, it is not possible to define the relevant part of the warehouse by means of {these} picking positions and the location of the depot like in the standard SPRP. Instead, we require additional binary variables $\xLast{j}$ that indicate whether aisle $j$ is reached by the picker or not, i.e., in Figure~\ref{fig:warehouse2}, $\xLast{1}, ..., \xLast{5}$ are equal to $1$. We replace constraints~\eqref{F1} by the following constraints: \begin{align} \tiny & \xLast{j} \geq \xPick{j}{i} & j \in \mathcal{J}, i \in \mathcal{H}InAisle{j} \label{F1e} \\ &\xLast{l} = 1 & \label{F1fb} \\ &\xTwoBottom{j} + \xTwoTop{j} + \xTopBottom{j} + \xTwoTopBottom{j} = \xLast{j+1} & j \in \mathcal{J}\setminus \{\numberOfAisles-1Vis\} : j \geq l \label{F1b} \\ &\xTwoBottom{j} + \xTwoTop{j} + \xTopBottom{j} + \xTwoTopBottom{j} = \xLast{j} & j \in \mathcal{J} : j < l \label{F1bb} \\ & \xLast{j} \geq \xLast{j+1} & j \in \mathcal{J}\setminus \{\numberOfAisles-1Vis\} : j \geq l \label{F1d} \\ & \xLast{j} \leq \xLast{j+1} & j \in \mathcal{J}: j < l \label{F1db} \\ &\xLast{j} \in \{0,1\} & j \in \mathcal{J} \label{F1f} \end{align} Constraints~\eqref{F1e} and \eqref{F1fb} guarantee that all aisles containing the selected picking positions of the requested SKUs and the aisle containing the depot are reached. Constraints~\eqref{F1b} define the configurations to connect aisles that allow reaching a certain aisle located to the right of the depot, and \eqref{F1bb} does the same for the aisles to the left of the depot. Constraints~\eqref{F1d} and \eqref{F1db} describe how to propagate the $\xLast{}$ variables. \section{The Single-Block SPRP with Decoupling} \label{sec:cart} In this section, we extend our model to cover the possibility of the picker to park the cart and continue the tour for a certain period without the cart, then returning to the cart. Figure~\ref{fig:warehouse3} depicts an example of an optimal solution for the resulting single-block SPRP with decoupling. We make the following modeling assumptions: \begin{figure} \caption{Optimal solution of an example instance of the single-block SPRP with decoupling. Only one SKU is stored at each picking position, and only one item is requested of each SKU. The picker capacity is two items, and the speed of the picker without cart is twice the speed with cart. 
The tours that the picker does without cart are indicated by dotted edges.} \label{fig:warehouse3} \end{figure} \begin{itemize} \item The speed of the picker without cart differs from the speed when pushing the cart. The cost coefficients $\cost{}$ now represent the travel times of the picker when pushing the cart, the coefficients $\cost{}^p$ the times of the picker when traveling alone. \item Like in the previous models, we assume that the pick list is generated such that the capacity of the picking cart is sufficient to carry all items. However, the carrying capacity of the picker alone is limited to $C$ items. We return to a warehouse with dedicated storage, i.e., each SKU is only stored at one picking position, and $\demandTwo{j}{i}{h}$ denotes the number of requested items of SKU $h$ that have to be picked from aisle $j$ at position $i$. \item If a picking aisle is completely traversed, the picker pushes the cart, and no decoupling takes place. \item In horizontal\xspace branch-and-pick tours, the picker alone travels along a cross aisle and may visit one or more picking aisles to retrieve SKUs. In Figure~\ref{fig:warehouse3}, one horizontal\xspace branch-and-pick tour starts at the top of aisle 3, another one at the bottom. We assume that after parking the cart, at most one vertical\xspace branch-and-pick tour (see Figure~\ref{fig:warehouse3}, aisles 0 and 2 for examples of vertical\xspace branch-and-pick tours) and at most two horizontal\xspace branch-and-pick tours---one to the left and one to the right---are possible. Note that this assumption is not restrictive if the picker without cart is not more than twice as fast as the picker pushing the cart. Lifting the assumption is not possible with the presented modeling approach because it would entail that a section of an aisle is traversed more than twice. For the same reason, we assume that branch-and-pick tours (vertical\xspace as well as horizontal\xspace) starting from different parking positions cannot overlap. Parking the cart directly at the entry or within a picking aisle and doing a vertical\xspace branch-and-pick tour in that aisle does not explicitly have to be modeled: Under our assumption that only one vertical\xspace branch-and-pick tour for each parking position is allowed, it is beneficial to only push the cart as far into the picking aisle as is needed so that the capacity of the picker is sufficient to visit the remaining required picking positions in the aisle (see Figure~\ref{fig:warehouse3}, aisle 0). Because the decision $\xPickTop{j}{i}=1$ ($\xPickBottom{j}{i}=1$) implicates that all positions located above (below) position $i$ in aisle $j$ are picked in the branch-and-pick tour, the parking position of the cart and consequently the respective cost coefficients $\costPickTop{j}{i}$ and $\costPickBottom{j}{i}$ can be precomputed. However, decoupling and the parking position of the cart must explicitly be modeled for horizontal\xspace branch-and-pick tours. In this case, possible parking positions are located in the cross aisles at the entries to the picking aisles. \end{itemize} To model the path of the picker when traveling without cart, we introduce additional binary variables: Variables $\xPickerTopRight{j}$ indicate that the picker traverses the top cross aisle from the entry of picking aisle $j$ to the entry of $j+1$, and the cart is parked somewhere to his left, i.e., the picker is traveling to the right. Variables $\xPickerBottomRight{j}$ are defined analogously for the bottom cross aisle. 
Variables $\xPickerBottomLeft{j}$ and $\xPickerTopLeft{j}$ indicate that the picker is traveling from the entry of aisle $j+1$ to the entry of $j$, and the cart is parked to his right. Variables $\xPickTop{j}{i}^p$ ($\xPickBottom{j}{i}^p$) indicate that picking aisle $j$ is entered from the top (bottom) without cart, and all required picking positions down (up) to position $i$ are visited. The variables are only defined for those picking positions $i$ for which the picker capacity is sufficient to carry the requested number of items of all SKUs that are stored in the picking positions passed by the picker. In the example in Figure~\ref{fig:warehouse3}, variables $\xPickTop{0}{3}, \xTwoTop{0}, \xUp{1}, \xTopBottom{1}, \xPickTop{2}{2}, \xTopBottom{2}, \xUp{3}$ are equal to $1$ and define the picking tour with cart, and variables $\xPickerTopRight{3}, \xPickTop{4}{4}^p, \xPickerBottomRight{3}, \xPickBottom{4}{9}^p$ equal $1$ and describe the two horizontal\xspace branch-and-pick tours starting from aisle 3. We modify formulation~\eqref{F0}--\eqref{B5} as follows: First, the objective function~\eqref{F0} is changed to minimize the total travel time, i.e., the sum of the time the picker travels alone and the time that the picker travels with cart: \begin{align} \tiny & \begin{aligned} \text{min~}&\sum_{j \in \mathcal{J}} \costTwoBottom{j} \xTwoBottom{j} + \costTwoBottom{j}^p (\xPickerBottomLeft{j} + \xPickerBottomRight{j}) + \costTwoTop{j} \xTwoTop{j} + \costTwoTop{j}^p (\xPickerTopLeft{j} + \xPickerTopRight{j}) + \costTopBottom{j} \xTopBottom{j} + \costTwoTopBottom{j} \xTwoTopBottom{j} + \costUp{j} \xUp{j} + \costTwoUp{j} \xTwoUp{j}+ \\ &\sum_{j \in \mathcal{J}}\sum_{i \in \mathcal{H}InAisle{j}} \left(\costPickBottom{j}{i} \xPickBottom{j}{i} +\costPickTop{j}{i} \xPickTop{j}{i}\right) + \left(\costPickBottom{j}{i}^p \xPickBottom{j}{i}^p +\costPickTop{j}{i}^p \xPickTop{j}{i}^p\right) \end{aligned} & \label{H4} \end{align} \noindent We replace constraints~\eqref{F2} with constraints~\eqref{F2Pick} to take picking without cart into account: \begin{align} \tiny &\xUp{j} + \xTwoUp{j} + \sum_{i' \in \mathcal{H}InAisle{j} :i'\geqi} (\xPickTop{j}{i'} + \xPickTop{j}{i'}^p) + \sum_{i' \in \mathcal{H}InAisle{j} :i'\leqi} (\xPickBottom{j}{i'} + \xPickBottom{j}{i'}^p) \geq 1 & j \in \mathcal{J}, i \in \mathcal{H}InAisle{j} \label{F2Pick} \end{align} To determine the part of the warehouse in which the cart is used, we replace constraints~\eqref{F1} with constraints~\eqref{F1fb}--\eqref{F1f}, and we add the following constraints: \begin{align} \tiny &\xLast{j} \geq \xTwoUp{j} & j \in \mathcal{J} \label{F1z} \end{align} \noindent To model feasible horizontal\xspace branch-and-pick tours of the picker without cart, we add: \begin{align} \tiny &\xPickerBottomRight{j} + \xPickerBottomLeft{j} + \xTwoBottom{j} + \xTwoTopBottom{j} + \xTopBottom{j} \leq 1 & j \in \mathcal{J}\setminus \{\numberOfAisles-1Vis\} \label{G1} \\ &\xPickerTopRight{j} + \xPickerTopLeft{j} + \xTwoTop{j} + \xTwoTopBottom{j} + \xTopBottom{j} \leq 1 & j \in \mathcal{J}\setminus \{\numberOfAisles-1Vis\} \label{G2} \\ &\underset{\mathclap{j > 0}}{[}\xPickerTopRight{j-1} + \xTwoTop{j-1} + \xTwoTopBottom{j-1}] + \xUp{j} +\xTwoUp{j} \geq \xPickerTopRight{j} & j \in \mathcal{J}\setminus \{\numberOfAisles-1Vis\}\label{G3} \\ &\underset{\mathclap{j > 0}}{[}\xPickerBottomRight{j-1} + \xTwoBottom{j-1} + \xTwoTopBottom{j-1}] + \xUp{j} +\xTwoUp{j} \geq \xPickerBottomRight{j} & j \in \mathcal{J}\setminus \{\numberOfAisles-1Vis\}\label{G4} \\ 
&\xPickerTopLeft{j+1} + \xTwoTop{j+1} + \xTwoTopBottom{j+1} + \xUp{j+1} +\xTwoUp{j+1} \geq \xPickerTopLeft{j} & j \in \mathcal{J}\setminus \{\numberOfAisles-1Vis\} \label{G5} \\ &\xPickerBottomLeft{j+1} + \xTwoBottom{j+1} + \xTwoTopBottom{j+1} + \xUp{j+1} +\xTwoUp{j+1} \geq \xPickerBottomLeft{j} & j \in \mathcal{J}\setminus \{\numberOfAisles-1Vis\} \label{G6} \\ &\underset{\mathclap{j > 0}}{[}\xPickerBottomRight{j-1}] + \xPickerBottomLeft{j} \geq \xPickBottom{j}{i}^p & j \in \mathcal{J}, i \in \mathcal{H}InAisle{j} \label{G7}\\ &\underset{\mathclap{j > 0}}{[}\xPickerTopRight{j-1}] + \xPickerTopLeft{j} \geq \xPickTop{j}{i}^p & j \in \mathcal{J}, i \in \mathcal{H}InAisle{j} \label{G8}\\ & \xPickerBottomRight{j},\xPickerBottomLeft{j},\xPickerTopRight{j}, \xPickerTopLeft{j} \in \{0,1\} & j \in \mathcal{J} \setminus \{\numberOfAisles-1Vis\} \label{G9}\\ & \xPickerBottomRight{\numberOfAisles-1Vis},\xPickerBottomLeft{\numberOfAisles-1Vis},\xPickerTopRight{\numberOfAisles-1Vis}, \xPickerTopLeft{\numberOfAisles-1Vis} = 0& \label{G10}\\ & \xPickTop{j}{i}^p \in \{0,1\} & j \in \mathcal{J}, i \in \mathcal{H}InAisle{j} \label{G11}\\ & \xPickBottom{j}{i}^p \in \{0,1\}& j \in \mathcal{J}, i \in \mathcal{H}InAisle{j} \label{G12} \end{align} Constraints~\eqref{G1} and \eqref{G2} forbid overlaps of horizontal\xspace branch-and-pick tours with other horizontal\xspace branch-and-pick tours and also with the tour with cart. Constraints~\eqref{G3}--\eqref{G8} define feasible conditions for starting or continuing horizontal\xspace branch-and-pick tours. Constraints~\eqref{G9}--\eqref{G12} define the domains of the variables. To ensure that the carrying capacity of the picker is not exceeded, we introduce variables $\capBottom{j}$ ($\capTop{j}$) that keep track of the total load collected by the picker on a horizontal\xspace branch-and-pick tour at the moment when passing at the bottom (at the top) of aisle $j$. In Figure~\ref{fig:warehouse3}, $\capTop{4} =2$ and $\capBottom{4}=1$. Let $\itemsTop{j}{i}$ ($\itemsBottom{j}{i}$) indicate the number of required items of all SKUs stored between the top (bottom) aisle and picking position $i$, i.e., $\itemsTop{j}{i} = \sum_{i' \in \mathcal{H}InAisle{j}: i' \leqi} \sum_{h \in \mathcal{H} }\demandTwo{j}{i'}{h}$ and $\itemsBottom{j}{i} = \sum_{i' \in \mathcal{H}InAisle{j}: i' \geqi} \sum_{h \in \mathcal{H} }\demandTwo{j}{i'}{h}$. 
Then, we add the following constraints: \begin{align} \tiny &\capTop{j+1} + \sum_{i \in \mathcal{H}InAisle{j+1}} \itemsTop{j+1}{i}\,\xPickTop{j+1}{i}^p - C (1-\xPickerTopLeft{j}) \leq \capTop{j} & j \in \mathcal{J}\setminus \{\numberOfAisles-1Vis\} \label{H1} \\ &\capBottom{j+1} + \sum_{i \in \mathcal{H}InAisle{j+1}} \itemsBottom{j+1}{i} \xPickBottom{j+1}{i}^p - C (1-\xPickerBottomLeft{j}) \leq \capBottom{j} & j \in \mathcal{J}\setminus \{\numberOfAisles-1Vis\} \label{H2} \\ &\capTop{j} + \sum_{i \in \mathcal{H}InAisle{j}} \itemsTop{j}{i} \xPickTop{j}{i}^p - C (1-\xPickerTopRight{j}) \leq \capTop{j+1} & j \in \mathcal{J}\setminus \{\numberOfAisles-1Vis\} \label{H3} \\ &\capBottom{j} + \sum_{i \in \mathcal{H}InAisle{j}} \itemsBottom{j}{i} \xPickBottom{j}{i}^p - C (1-\xPickerBottomRight{j}) \leq \capBottom{j+1} & j \in \mathcal{J}\setminus \{\numberOfAisles-1Vis\} \label{H4} \\ & \capTop{j} + \sum_{i \in \mathcal{H}InAisle{j}} \itemsTop{j}{i} \xPickTop{j}{i}^p \leq C & j \in \mathcal{J} \label{H5} \\ & \capBottom{j} + \sum_{i \in \mathcal{H}InAisle{j}} \itemsBottom{j}{i} \xPickBottom{j}{i}^p \leq C & j \in \mathcal{J} \label{H6} \\ & \capTop{j}, \capBottom{j} \geq 0 & j \in \mathcal{J} \label{H7} \end{align} Constraints~\eqref{H1}--\eqref{H4} propagate the load to the next aisle, and constraints~\eqref{H5} and \eqref{H6} restrict the load to the maximum picker capacity. Finally, note that the described modeling approach can also be used to implement inaccessibility constraints for certain aisles when traveling with cart. \section{The Single-Block SPRP with Multiple End Depots} \label{sec:openDepot} Finally, we consider the single-block SPRP with multiple end depots, in which the picker does not have to return to the start depot but may select an arbitrary end depot from a set of possible candidates. The start depot is always included in the set of candidates, and additional depots can be located in both cross aisles at the entries to all picking aisles. Initially, the relevant part of the warehouse, i.e., the set of aisles $\mathcal{J}$, can only be restricted to the area between the leftmost depot or required picking position and the rightmost depot or required picking position. Figure~\ref{fig:warehouse4} shows an example with the given start depot 'D' and four potential end depots given by the filled circles. Depot 'E' is finally selected as end depot. As illustrated in the figure, the idea of our modeling approach is to simultaneously determine a picking tour according to formulation~\eqref{F0}--\eqref{B5} and a simple path (given by the dot-dashed edges) on which we do not return from the selected end depot to the given start depot. The edges of this path are removed from the edges of the closed loop which starts and ends at the start depot and includes the selected end depot. To preserve the connectivity of the picking tour, the path can only contain edges that are traversed twice in the closed loop, i.e., those for which $\xTwoTopBottom{j},\xTwoTop{j}, \xTwoBottom{j}$, or $\xTwoUp{j}$ are equal to $1$. \begin{figure} \caption{Optimal solution of an example instance of the single-block SPRP with multiple end depots.} \label{fig:warehouse4} \end{figure} To define the path, we introduce the binary variables $\yTop{j},\yBottom{j},\yUp{j}$, and $\yTopBottom{j}$, which are defined analogous to the $x_{j}$ variables in Section~\ref{sec:model}. 
The binary variables $\openDepotTop{j}$ and $\openDepotBottom{j}$ indicate whether the depot at the top (bottom) of aisle $j$ is selected as end depot. To keep notation simple, we set $\openDepotTop{j}$ and $\openDepotBottom{j}$ to zero if no potential end depot is available at the respective position. The binary variables $\degreeEvenTopReturn{j}$ ($\degreeEvenBottomReturn{j}$) equal 0 if the degree of the path at the top (bottom) of aisle $j$ is zero or uneven and equal 1 if the degree is even. In Figure~\ref{fig:warehouse4}, the path between the start depot 'D' and the selected end depot 'E' ($\openDepotBottom{1} = 1$) is given by $\yUp{3}, \yTop{2}, \yUp{2}, \yBottom{1} = 1$. The degree of the path at the start and end depot is uneven ($\degreeEvenBottomReturn{1},\degreeEvenBottomReturn{3} = 0$), and the degrees at the top and bottom of aisle 2 and the top of aisle 3 are even ($\degreeEvenTopReturn{2}, \degreeEvenBottomReturn{2}, \degreeEvenTopReturn{3} = 1$). We modify the objective function \eqref{F0} and now subtract the cost associated with the path: \begin{align} \tiny & \begin{aligned} \text{min~} &\sum_{j \in \mathcal{J}} \costTwoBottom{j} (\xTwoBottom{j}-\yBottom{j}) + \costTwoTop{j} (\xTwoTop{j}-\yTop{j}) + \costTopBottom{j} \xTopBottom{j} + \costTwoTopBottom{j} ( \xTwoTopBottom{j}-\yTopBottom{j})+ \costUp{j} \xUp{j} + \costTwoUp{j}( \xTwoUp{j}-\yUp{j})+\\ &\sum_{j \in \mathcal{J}}\sum_{i \in \mathcal{H}InAisle{j}} \left(\costPickBottom{j}{i} \xPickBottom{j}{i} +\costPickTop{j}{i} \xPickTop{j}{i}\right) \end{aligned} & \label{K0} \end{align} We keep constraints~\eqref{F2}--\eqref{B5} and replace constraints (2) with constraints~\eqref{F1fb}--\eqref{F1f} and constraints~\eqref{F1y} to restrict the relevant part of the warehouse: \begin{align} \tiny &\xLast{j} \geq 1 & j \in \mathcal{J}: \mathcal{H}InAisle{j} \neq \emptyset\label{F1y} \end{align} To characterize the path, we add the following constraints: \begin{align} \tiny & \xTwoTop{j} + \xTwoTopBottom{j} \geq \yTop{j} & j \in \mathcal{J} \label{K1}\\ & \xTwoBottom{j} + \xTwoTopBottom{j} \geq \yBottom{j}& j \in \mathcal{J} \label{K2}\\ & \xTwoTopBottom{j} \geq \yTopBottom{j}& j \in \mathcal{J} \label{K3}\\ & \xTwoUp{j} \geq \yUp{j}& j \in \mathcal{J} \label{K4}\\ &\yTop{j} + \yBottom{j+1} \leq \yUp{j+1}+1 & j \in \mathcal{J} \setminus \{\numberOfAisles-1Vis\} \label{K6}\\ &\yBottom{j} + \yTop{j+1} \leq \yUp{j+1}+1 & j \in \mathcal{J} \setminus \{\numberOfAisles-1Vis\} \label{K7}\\ & \yTop{j} + \yBottom{j} + \yTopBottom{j} \geq \yTop{j+1} + \yBottom{j+1} + \yTopBottom{j+1} & j \in \mathcal{J} \setminus \{\numberOfAisles-1Vis\} : j \geq l \label{K9}\\ & \yTop{j} + \yBottom{j} +\yTopBottom{j} \geq \yTop{j-1} + \yBottom{j-1} +\yTopBottom{j-1} & j \in \mathcal{J} \setminus \{0\} : j < l \label{K10}\\ & \sum_{j \in \mathcal{J}} \openDepotBottom{j} + \openDepotTop{j} \leq 1 & \label{K11}\\[0.1cm] &\underset{\mathclap{j > 0}}{[}\yTopBottom{j-1} + \yTop{j-1}] + \yTopBottom{j} + \yTop{j} + \yUp{j} = 2 \degreeEvenTopReturn{j} +\openDepotTop{j} & \begin{aligned}[c] \mathit{if}(\depotTop{} = 0)\,\{j \in \mathcal{J}\}\,\\\mathit{else}\,\{j \in \mathcal{J} \setminus \{l\} \} \end{aligned} \label{K12}\\[0.3cm] &\underset{\mathclap{j > 0}}{[}\yTopBottom{j-1} + \yBottom{j-1}] + \yTopBottom{j} + \yBottom{j} + \yUp{j} = 2 \degreeEvenBottomReturn{j} +\openDepotBottom{j} & \begin{aligned}[c] \mathit{if}(\depotTop{} = 1)\,\{j \in \mathcal{J}\}\,\\\mathit{else}\,\{j \in \mathcal{J} \setminus \{l\} \} \end{aligned} \label{K13}\\[0.3cm] & 
\yTopBottom{j} + \yTop{j} + \yBottom{j} \leq 1& j \in \mathcal{J} \label{K5}\\ &\yBottom{j}, \yTop{j}, \yTopBottom{j}, \yUp{j}, \openDepotBottom{j}, \openDepotTop{j}, \degreeEvenTopReturn{j},\degreeEvenBottomReturn{j} \in \{0,1\} & j \in \mathcal{J} \label{K14} \end{align} Constraints~\eqref{K1}--\eqref{K4} restrict the path to the edges of the closed loop that are traversed twice. Constraints~\eqref{K6}--\eqref{K10} guarantee that the path is connected. Constraint~\eqref{K11} ensures that at most one end depot is selected. Constraints \eqref{K12}--\eqref{K14} guarantee that the path is simple and connects start and end depot. \section{Computational experiments} \label{sec:results} This section presents the numerical studies to assess the performance of our models and the benefits of (i)~allowing the decoupling of picker and cart, and (ii)~having multiple end depots. We discuss results for the standard SPRP (Section~\ref{sec:single}), the settings with scattered storage (Section~\ref{sec:mixed}), decoupling (Section~\ref{sec:decouple}), and multiple end depots (Section~\ref{sec:multdepot}). We used Gurobi\,6.5.0 set to use only a single thread and otherwise using the standard setting to solve our formulation. All studies were conducted on a Desktop PC with an AMD FX-6300 processor at 3.5 GHz with 8 GB of RAM and running Windows 10 Pro. \subsection{Results for the Standard SPRP} \label{sec:single} We compare our formulation (denoted as GS) to the two best performing formulations from the literature: SHSW (\citet*{Scholz:2016}) and PPC (\citet*{Pansart:2018}). The comparison between GS and SHSW is performed on the 900-instance benchmark of \citet{Scholz:2016}, which are generated in groups of 30 instances for different numbers of aisles $m \in \{5,10,15,20,25,30\}$ and required picking positions $a \in \{30,45,60,75,90\}$. The number of available picking positions per aisle $n$ is set to 45, and the required picking positions are uniformly distributed over the warehouse. The depot is located at the bottom left of the warehouse. The comparison between GS and PPC is carried out on the benchmark of \citet{Pansart:2018}, which is generated in an analogous fashion to that of \citet{Scholz:2016} but only contains 10 instances per group. Because \citet{Pansart:2018} do not use a subset of the \citet{Scholz:2016} benchmark but generate new instances using a different random number generator, results between SHSW and PPC are not directly comparable. The two benchmarks are available at \url{http://www.mansci.ovgu.de/Forschung/Materialien/2016+_+I_-p-534.html} and \url{https://pagesperso.g-scop.grenoble-inp.fr/~pansartl/en/en_picking.html}, respectively. Table~\ref{tab:resultsForm} shows the results of the comparison: Column $(m,a)$ denotes the instance group defined as combination of the number of aisles $m$ and the number of required picking positions $a$. Column $\# opt$ reports the number of instances solved to optimality with each formulation within the runtime limit of 30 minutes. In column \runtime{avg}, we report the average runtime in seconds calculated over all instances in the respective group that were solved by the respective method within the time limit, and in column \runtime{max} the maximum runtime observed for any of the instances in the considered group. 
We judge the comparison of runtimes to be relatively fair: the speed of our processor and the one used by \citet{Pansart:2018} (an Intel Xeon E5-2440 v2 at 1.9\,GHz) are roughly comparable based on their Passmark single-thread scores; \citet{Scholz:2016} only provide the information that a 3.4\,GHz Pentium processor was used, but in general, models that fit this description have similar or even superior Passmark single-thread scores than the other two machines. \begin{table}[htbp] \centering \begin{footnotesize} \sffamily \setlength{\tabcolsep}{5.0pt} \ra{1.1} \begin{tabular}{@{}l@{\hspace{0.5cm}}rr@{\hspace{0.5cm}}rrr@{\hspace{.5cm}}rrr@{\hspace{0.5cm}}rrr@{}} \toprule & \multicolumn{2}{c}{SHSW} & \multicolumn{3}{c}{GS} & \multicolumn{3}{c}{PPC} & \multicolumn{3}{c}{GS}\\ \cmidrule(rr{.55cm}l{.05cm}){2-3} \cmidrule(r{.05cm}l{.05cm}){4-6} \cmidrule(r{.05cm}l{.05cm}){7-9} \cmidrule(r{.05cm}l{.05cm}){10-12} $(m,a)$ &\multicolumn{1}{c}{\#opt.} & \multicolumn{1}{c}{\runtime{avg}} & \multicolumn{1}{c}{\#opt.} & \multicolumn{1}{c}{\runtime{avg}} & \runtime{max} & \multicolumn{1}{c}{\#opt.} & \multicolumn{1}{c}{\runtime{avg}} & \multicolumn{1}{c}{\runtime{max}} & \multicolumn{1}{c}{\#opt.} & \multicolumn{1}{c}{\runtime{avg}} & \multicolumn{1}{c}{\runtime{max}} \\ \midrule (5,30) & 30/30 & 0.09 & 30/30 & 0.01 & 0.02 & 10/10 & 0.03 & 0.08 & 10/10 & 0.00 & 0.02 \\ (5,45) & 30/30 & 0.09 & 30/30 & 0.01 & 0.02 & 10/10 & 0.06 & 0.09 & 10/10 & 0.01 & 0.02 \\ (5,60) & 30/30 & 0.09 & 30/30 & 0.01 & 0.02 & 10/10 & 0.08 & 0.10 & 10/10 & 0.02 & 0.03 \\ (5,75) & 30/30 & 0.09 & 30/30 & 0.01 & 0.02 & 10/10 & 0.08 & 0.11 & 10/10 & 0.02 & 0.03 \\ (5,90) & 30/30 & 0.10 & 30/30 & 0.01 & 0.02 & 10/10 & 0.08 & 0.13 & 10/10 & 0.03 & 0.03 \\ \addlinespace (10,30) & 30/30 & 1.60 & 30/30 & 0.02 & 0.05 & 10/10 & 0.05 & 0.13 & 10/10 & 0.03 & 0.06 \\ (10,45) & 30/30 & 1.03 & 30/30 & 0.02 & 0.03 & 10/10 & 0.08 & 0.19 & 10/10 & 0.03 & 0.05 \\ (10,60) & 30/30 & 1.42 & 30/30 & 0.02 & 0.05 & 10/10 & 0.13 & 0.20 & 10/10 & 0.02 & 0.05 \\ (10,75) & 30/30 & 1.36 & 30/30 & 0.02 & 0.05 & 10/10 & 0.09 & 0.17 & 10/10 & 0.02 & 0.05 \\ (10,90) & 30/30 & 0.62 & 30/30 & 0.02 & 0.03 & 10/10 & 0.10 & 0.20 & 10/10 & 0.02 & 0.05 \\ \addlinespace (15,30) & 30/30 & 2.29 & 30/30 & 0.02 & 0.10 & 10/10 & 0.06 & 0.13 & 10/10 & 0.04 & 0.08 \\ (15,45) & 30/30 & 5.28 & 30/30 & 0.03 & 0.06 & 10/10 & 0.11 & 0.20 & 10/10 & 0.04 & 0.05 \\ (15,60) & 30/30 & 10.64 & 30/30 & 0.03 & 0.08 & 10/10 & 0.14 & 0.32 & 10/10 & 0.05 & 0.08 \\ (15,75) & 30/30 & 15.10 & 30/30 & 0.03 & 0.08 & 10/10 & 0.15 & 0.41 & 10/10 & 0.04 & 0.08 \\ (15,90) & 30/30 & 19.41 & 30/30 & 0.04 & 0.08 & 10/10 & 0.23 & 0.45 & 10/10 & 0.05 & 0.08 \\\addlinespace (20,30) & 30/30 & 10.57 & 30/30 & 0.04 & 0.16 & 10/10 & 0.09 & 0.31 & 10/10 & 0.06 & 0.08 \\ (20,45) & 30/30 & 27.32 & 30/30 & 0.03 & 0.13 & 10/10 & 0.10 & 0.21 & 10/10 & 0.07 & 0.11 \\ (20,60) & 30/30 & 114.33 & 30/30 & 0.04 & 0.08 & 10/10 & 0.26 & 0.49 & 10/10 & 0.06 & 0.11 \\ (20,75) & 30/30 & 216.63 & 30/30 & 0.04 & 0.08 & 10/10 & 0.84 & 5.21 & 10/10 & 0.07 & 0.13 \\ (20,90) & 30/30 & 485.71 & 30/30 & 0.05 & 0.11 & 10/10 & 0.39 & 1.64 & 10/10 & 0.09 & 0.13 \\\addlinespace (25,30) & 30/30 & 54.46 & 30/30 & 0.04 & 0.14 & 10/10 & 0.11 & 0.24 & 10/10 & 0.05 & 0.06 \\ (25,45) & 30/30 & 85.46 & 30/30 & 0.05 & 0.13 & 10/10 & 0.24 & 0.50 & 10/10 & 0.06 & 0.10 \\ (25,60) & 30/30 & 258.92 & 30/30 & 0.06 & 0.13 & 10/10 & 0.71 & 2.34 & 10/10 & 0.08 & 0.17 \\ (25,75) & 29/30 & 527.39 & 30/30 & 0.07 & 0.19 & 10/10 & 0.76 & 2.34 & 10/10 & 0.09 & 
0.15 \\ (25,90) & 24/30 & 646.59 & 30/30 & 0.07 & 0.18 & 10/10 & 1.01 & 4.41 & 10/10 & 0.09 & 0.18 \\\addlinespace (30,30) & 30/30 & 204.18 & 30/30 & 0.05 & 0.11 & 10/10 & 0.08 & 0.21 & 10/10 & 0.06 & 0.13 \\ (30,45) & 30/30 & 406.19 & 30/30 & 0.06 & 0.17 & 10/10 & 0.19 & 0.49 & 10/10 & 0.08 & 0.11 \\ (30,60) & 30/30 & 508.80 & 30/30 & 0.07 & 0.17 & 10/10 & 0.39 & 0.59 & 10/10 & 0.10 & 0.13 \\ (30,75) & 24/30 & 638.89 & 30/30 & 0.07 & 0.16 & 10/10 & 0.99 & 6.69 & 10/10 & 0.11 & 0.19 \\ (30,90) & 21/30 & 786.29 & 30/30 & 0.08 & 0.22 & 10/10 & 1.63 & 6.69 & 10/10 & 0.15 & 0.30 \\ \midrule \textbf{Avg.} & & \textbf{167.70} & & \textbf{0.04} & & & \textbf{0.31} & & & \textbf{0.05} & \\ \bottomrule \end{tabular} \rmfamily \caption{\textrm{Comparison of the formulations SHSW, PPC, and GS on standard SPRP instances from the literature}.\label{tab:resultsForm}} \end{footnotesize} \end{table} Comparing GS to SHSW, we note that GS is able to solve all instances, while SHSW fails to solve 22 of the instances with a number of aisles $m \geq 25$ and a number of required picking positions $a \geq 75$. The runtimes of SHSW grow strongly with a larger number of aisles, and, for a number of aisles above 10 also with an increasing number of required picking positions. Contrary to this, the runtime of GS only grows moderately with a larger number of aisles, and the number of required picking positions seems to have no clear influence on the runtime. The average runtime of GS is approximately 4500 times lower than that of SHSW. The difference between GS and PPC is smaller: Both formulations are able to solve all instance groups within less than 10 seconds. Still, GS is approximately six times faster on average, and the worst runtime on the instance groups is up to 40 times lower than that of PPC. To assess the scaling behavior of GS, we generate a set of large instances, more precisely, 30 instances for all combinations of $m, a, n \in \{100,250,500,750,1000\}$. Again, the required picking positions are uniformly distributed over the warehouse, and the depot is located at the bottom left corner of the warehouse. Table~\ref{tab:resultsLarge} presents aggregate results over the instances in a group. In addition to the values reported in Table~\ref{tab:resultsForm}, we report in rows \runtimeSup{avg}{a} the average runtimes over different values of $a$ within one group (defined by a fixed value of $m$ and $n$). Column \runtimeSup{avg}{n} reports the average runtime over all different values of $n$ for a fixed combination of $m$ and $a$. 
\begin{table}[htbp] \centering \begin{footnotesize} \sffamily \setlength{\tabcolsep}{5.0pt} \ra{1.1} \begin{tabular}{@{}l@{\hspace{0.5cm}}rr@{\hspace{0.5cm}}rr@{\hspace{0.5cm}}rr@{\hspace{0.5cm}}rr@{\hspace{0.5cm}}rr@{\hspace{0.5cm}}r@{}}\toprule & \multicolumn{2}{c}{$n = 100$} & \multicolumn{2}{c}{$n = 250$} & \multicolumn{2}{c}{$n = 500$} & \multicolumn{2}{c}{$n = 750$} & \multicolumn{2}{c}{$n = 1000$} \\ \cmidrule(l{.2cm}r{.2cm}){2-3} \cmidrule(l{.2cm}r{.2cm}){4-5} \cmidrule(l{.2cm}r{.2cm}){6-7} \cmidrule(l{.2cm}r{.2cm}){8-9} \cmidrule(l{.2cm}r{.2cm}){10-11} $(m,a)$ & \multicolumn{1}{r}{\runtime{avg}} & \multicolumn{1}{r@{\hspace{0.5cm}}}{\runtime{max}} & \multicolumn{1}{r}{\runtime{avg}} & \multicolumn{1}{r@{\hspace{0.5cm}}}{\runtime{max}} & \multicolumn{1}{r}{\runtime{avg}} & \multicolumn{1}{r@{\hspace{0.5cm}}}{\runtime{max}} & \multicolumn{1}{r}{\runtime{avg}} & \multicolumn{1}{r@{\hspace{0.5cm}}}{\runtime{max}} & \multicolumn{1}{r}{\runtime{avg}} & \multicolumn{1}{r}{\runtime{max}} & \multicolumn{1}{r}{\boldmath\runtimeSup{avg}{n}} \\\midrule (100,100) & 1.03 & 2.38 & 1.33 & 5.50 & 1.87 & 13.47 & 1.23 & 6.49 & 1.30 & 4.63 & \textbf{1.35} \\ (100,250) & 1.67 & 8.87 & 2.08 & 7.11 & 1.79 & 5.06 & 1.54 & 4.82 & 2.22 & 6.44 & \textbf{1.86} \\ (100,500) & 2.50 & 10.20 & 1.35 & 4.49 & 1.30 & 4.60 & 1.13 & 2.13 & 1.14 & 1.87 & \textbf{1.49} \\ (100,750) & 1.38 & 5.09 & 1.32 & 3.81 & 1.65 & 6.05 & 1.42 & 6.29 & 1.16 & 1.57 & \textbf{1.39} \\ (100,1000) & 1.46 & 6.12 & 1.65 & 4.80 & 1.27 & 2.79 & 1.27 & 2.87 & 1.45 & 2.86 & \textbf{1.42} \\%\addlinespace {\boldmath\runtimeSup{avg}{a}} & \textbf{1.61} & & \textbf{1.54} & & \textbf{1.58} & & \textbf{1.32} & & \textbf{1.45} & & \\ \midrule (250,100) & 2.42 & 5.21 & 3.58 & 17.52 & 3.43 & 12.50 & 5.60 & 23.89 & 4.79 & 32.91 & \textbf{3.96} \\ (250,250) & 5.55 & 22.54 & 7.52 & 24.61 & 10.98 & 29.25 & 9.93 & 28.51 & 9.14 & 26.16 & \textbf{8.62} \\ (250,500) & 8.26 & 29.42 & 6.07 & 32.26 & 10.10 & 26.03 & 7.59 & 26.32 & 8.97 & 29.03 & \textbf{8.20} \\ (250,750) & 11.06 & 44.55 & 12.77 & 33.40 & 14.10 & 36.57 & 12.35 & 30.44 & 14.21 & 35.33 & \textbf{12.90} \\ (250,1000) & 14.99 & 34.76 & 15.03 & 35.66 & 9.96 & 32.23 & 11.32 & 31.41 & 9.22 & 25.84 & \textbf{12.10} \\%\addlinespace {\boldmath\runtimeSup{avg}{a}} & \textbf{8.45} & & \textbf{8.99} & & \textbf{9.71} & & \textbf{9.36} & & \textbf{9.27} & & \\ \midrule (500,100) & 4.32 & 15.45 & 6.78 & 18.43 & 8.82 & 31.58 & 11.46 & 47.96 & 15.22 & 68.86 & \textbf{9.32} \\ (500,250) & 14.44 & 63.67 & 12.81 & 37.21 & 22.07 & 68.54 & 28.85 & 54.87 & 26.87 & 87.59 & \textbf{21.01} \\ (500,500) & 11.43 & 56.09 & 18.99 & 66.95 & 23.75 & 59.27 & 29.56 & 64.88 & 34.35 & 75.92 & \textbf{23.62} \\ (500,750) & 17.86 & 55.94 & 21.56 & 66.74 & 26.19 & 71.22 & 25.35 & 68.96 & 30.05 & 79.15 & \textbf{24.20} \\ (500,1000) & 16.03 & 64.74 & 18.64 & 62.14 & 23.48 & 71.43 & 30.40 & 83.83 & 24.12 & 63.57 & \textbf{22.53} \\%\addlinespace {\boldmath\runtimeSup{avg}{a}} & \textbf{12.81} & & \textbf{15.75} & & \textbf{20.86} & & \textbf{25.12} & & \textbf{26.13} & & \\ \midrule (750,100) & 8.68 & 41.81 & 8.70 & 55.56 & 12.67 & 77.53 & 16.09 & 114.41 & 16.18 & 117.38 & \textbf{12.46} \\ (750,250) & 19.75 & 56.34 & 29.74 & 95.73 & 44.66 & 107.42 & 55.18 & 125.61 & 50.61 & 120.86 & \textbf{39.99} \\ (750,500) & 24.11 & 73.18 & 56.25 & 134.75 & 64.34 & 161.30 & 72.61 & 211.40 & 68.02 & 166.36 & \textbf{57.06} \\ (750,750) & 31.09 & 112.22 & 44.27 & 156.87 & 60.81 & 140.97 & 68.15 & 130.10 & 68.59 & 154.34 & \textbf{54.58} \\ 
(750,1000) & 23.09 & 106.81 & 34.24 & 104.50 & 50.49 & 130.80 & 52.78 & 127.16 & 72.75 & 173.93 & \textbf{46.67} \\%\addlinespace {\boldmath\runtimeSup{avg}{a}} & \textbf{21.34} & & \textbf{34.64} & & \textbf{46.59} & & \textbf{52.96} & & \textbf{55.23} & & \\ \midrule (1000,100) & 10.40 & 74.06 & 26.68 & 300.07 & 22.36 & 98.25 & 23.42 & 141.61 & 22.77 & 154.93 & \textbf{21.12} \\ (1000,250) & 24.25 & 84.93 & 47.38 & 174.19 & 74.49 & 257.29 & 82.95 & 371.27 & 89.06 & 617.04 & \textbf{63.63} \\ (1000,500) & 25.07 & 72.91 & 70.79 & 227.91 & 83.98 & 181.89 & 110.82 & 287.68 & 90.04 & 291.63 & \textbf{76.14} \\ (1000,750) & 33.60 & 127.68 & 117.88 & 310.40 & 161.87 & 338.99 & 122.89 & 239.56 & 102.21 & 242.78 & \textbf{107.69} \\ (1000,1000) & 35.62 & 134.22 & 96.56 & 261.85 & 127.16 & 307.27 & 106.87 & 226.21 & 122.96 & 251.78 & \textbf{97.83} \\%\addlinespace {\boldmath\runtimeSup{avg}{a}} & \textbf{25.79} & & \textbf{71.86} & & \textbf{93.97} & & \textbf{89.39} & & \textbf{85.41} & & \\ \bottomrule \end{tabular} \end{footnotesize} \rmfamily \caption{Results of our formulation on newly generated large standard SPRP instances\label{tab:resultsLarge}.} \end{table} The results show that the runtimes consistently increase for a larger number of aisles $m$, while the relationship between the number of required picking positions $a$ or the number of available picking positions $n$ and the runtime is not entirely consistent: we can only note the rough tendency that runtimes are higher for larger values of the two parameters. Overall, the scaling behavior of our formulation is quite convincing, even the largest instances with $m, a, n = 1000$ can be solved with an average runtime of about two minutes, and the most challenging instance in the benchmark was solved in approximately 10 minutes. \subsection{Results for the Single-Block SPRP with Scattered Storage} \label{sec:mixed} Because the benchmark instances used by \citet{Weidinger:2018a} were not archived by the author, we generated new instances in a fashion similar to the procedure described in the original paper. To allow for a fair comparison, we reimplemented the mathematical model of \citet{Weidinger:2018a} using Gurobi (in the following denoted as formulation W). The instances consider warehouses of different sizes by varying the number of aisles $m \in \{5,25,100\}$ and the number of available picking positions $n \in \{30,60,180\}$. We assume that one SKU is stored in each picking position. To investigate the influence of the degree of duplication of the SKUs in the warehouse, we vary the number of different SKUs $\xi$ stored in the warehouse depending on (i)~the storage capacity of the warehouse (given by $m \cdot n$), (ii)~a factor $\alpha \in \{1,5,10,50\}$ that determines the frequency with which SKUs are assigned to multiple storage positions, and (iii)~the number of different SKUs in the pick list $a$ as follows: $$ \xi =\max(a,\lceil m \cdot n / \alpha \rceil).$$ For example, if we set $\alpha = 1$, we have a standard warehouse in which each picking position is occupied by a different SKU. The maximum expression guarantees that for higher degrees of duplication at least as many different SKUs as required in the pick list are available in the warehouse. The SKUs in the warehouse are divided into three classes A, B, and C based on their turnover rate. We assign $20\%$ of the SKUs to class A, $30\%$ to class B, and $50\%$ to class C. 
To ensure that each SKU is available at least at one picking position, each SKU is first assigned to one randomly selected picking position. Afterward, all remaining picking positions are assigned a randomly drawn SKU from class A with $80\%$ probability, from class B with $15\%$ probability, and from class C with 5\% probability. The number of items of the selected SKU that is available at each picking position is randomly selected from $\mathbb{N} \cap [1,3]$. Next, we generate pick lists with $a \in \{3,7,15,30\}$ SKUs. Each SKU is selected from classes A, B, and C according to the probabilities $80\%$, $15\%$, and $5\%$. The demand $\demand{h}$ for each SKU $h$ is randomly drawn from $\mathbb{N} \cap [1,min(6,\bar{\capacity{h}{}})]$ with $\bar{\capacity{h}{}}$ the total supply of $h$. In the described way, we generate $3 \cdot 3 \cdot 4 \cdot 4 = 144$ instances. Both formulations were given a time limit of one hour. Table~\ref{tab:mixed} reports the results for different warehouse sizes $(m,n)$, number of SKUs in the pick list $a$, and degrees of duplication $\alpha$. For W, we report the runtime in column $t_W$ (TL indicates that the time limit was reached, OOM that an out of memory error occurs) and the difference between the upper bound found and the optimal solution (if no valid upper bound is found, this is indicated with a ``-''). GS finds the optimal solution for all instances, and we only report the runtime in column $t_{GS}$. If all instances of an instance group were solved to optimality by the respective formulation, we report averages of the runtimes for the group. \begin{table}[htbp] \centering \begin{footnotesize} \sffamily \setlength{\tabcolsep}{3.0pt} \ra{1.1} \begin{tabular}{@{}l@{\hspace{0.6cm}}rr@{\hspace{0.5cm}}r@{\hspace{0.9cm}}rr@{\hspace{0.5cm}}r@{\hspace{0.9cm}}rr@{\hspace{0.5cm}}r@{\hspace{0.9cm}}rr@{\hspace{0.5cm}}r@{}} \toprule & \multicolumn{3}{c}{$a=3$} & \multicolumn{3}{c}{$a=7$} & \multicolumn{3}{c}{$a=15$} & \multicolumn{3}{c}{$a=30$} \\ \cmidrule(l{.0cm}r{.5cm}){2-4} \cmidrule(l{.0cm}r{.5cm}){5-7} \cmidrule(l{.0cm}r{.5cm}){8-10} \cmidrule(l{.0cm}r{.0cm}){11-13} $(m,n)$ & $\Delta_{ub}$ & $t_{W}$ & $t_{GS}$ & $\Delta_{ub}$ & $t_{W}$ & $t_{GS}$ & $\Delta_{ub}$ & $t_{W}$ & $t_{GS}$ & $\Delta_{ub}$ & $t_{W}$ & $t_{GS}$ \\ \midrule {\boldmath$\alpha=1$} & & & & & & & & & & & & \\ (5,30) & 0.0 & 0.00 & 0.01 & 0.0 & 0.02 & 0.00 & 0.0 & 0.88 & 0.01 & 0.0 & 98.92 & 0.02 \\ (5,60) & 0.0 & 0.00 & 0.01 & 0.0 & 0.01 & 0.00 & 0.0 & 0.93 & 0.01 & 0.0 & 3.73 & 0.01 \\ (5,180) & 0.0 & 0.03 & 0.00 & 0.0 & 0.04 & 0.01 & 0.0 & 0.14 & 0.01 & 0.0 & 14.41 & 0.01 \\ (25,30) & 0.0 & 0.01 & 0.08 & 0.0 & 0.03 & 0.14 & 0.0 & 3.04 & 0.07 & 0.0 & 570.42 & 0.14 \\ (25,60) & 0.0 & 0.01 & 0.03 & 0.0 & 0.03 & 0.02 & 0.0 & 9.98 & 0.15 & 0.0 & TL & 0.05 \\ (25,180) & 0.0 & 0.01 & 0.15 & 0.0 & 0.04 & 0.06 & 0.0 & 0.23 & 0.03 & 0.0 & 114.35 & 0.10 \\ (100,30) & 0.0 & 0.01 & 0.21 & 0.0 & 0.03 & 0.93 & 0.0 & 56.99 & 0.15 & 0.4 & TL & 0.15 \\ (100,60) & 0.0 & 0.00 & 0.53 & 0.0 & 0.03 & 0.28 & 0.0 & 15.22 & 0.12 & 0.2 & TL & 0.44 \\ (100,180) & 0.0 & 0.01 & 0.67 & 0.0 & 0.04 & 0.30 & 0.0 & 0.23 & 0.13 & 0.0 & 1761.86 & 0.17 \\ \textbf{Avg.} & & \textbf{0.01} & \textbf{0.19} & & \textbf{0.03} & \textbf{0.19} & & \textbf{9.74} & \textbf{0.07} & & & \textbf{0.12} \\ \midrule {\boldmath$\alpha=5$} & & & & & & & & & & & & \\ (5,30) & 0.0 & 1.09 & 0.03 & 0.0 & TL & 0.05 & 1.2 & TL & 0.10 & 16.0 & TL & 0.06 \\ (5,60) & 0.0 & 2.63 & 0.05 & 0.0 & TL & 0.02 & 6.8 & TL & 0.07 & 86.9 & TL & 0.10 \\ (5,180) 
& 0.0 & 1.54 & 0.01 & 12.3 & TL & 0.52 & 106.1 & TL & 0.18 & - & TL & 0.75 \\ (25,30) & 0.0 & TL & 0.39 & 3.9 & TL & 0.20 & 86.1 & TL & 0.16 & 320.3 & TL & 0.16 \\ (25,60) & 0.0 & 1.59 & 0.10 & 3.4 & TL & 0.31 & 34.7 & TL & 0.98 & - & TL & 2.24 \\ (25,180) & 0.0 & 2.68 & 0.25 & 0.0 & 128.34 & 0.07 & 26.5 & TL & 0.39 & - & TL & 1.59 \\ (100,30) & 0.0 & TL & 0.27 & 15.3 & TL & 0.19 & 72.8 & TL & 1.99 & - & TL & 3.18 \\ (100,60) & 0.0 & 28.33 & 0.30 & 1.6 & TL & 1.24 & 13.7 & TL & 0.55 & - & TL & 0.48 \\ (100,180) & 0.0 & 15.84 & 3.37 & 12.1 & TL & 1.72 & 148.5 & TL & 1.83 & 105.8 & TL & 1.13 \\ \textbf{Avg.} & & & \textbf{0.53} & & & \textbf{0.48} & & & \textbf{0.69} & & & \textbf{1.08} \\\midrule {\boldmath$\alpha=10$} & & & & & & & & & & & & \\ (5,30) & 0.0 & TL & 0.02 & 0.0 & TL & 0.04 & 3.0 & TL & 0.05 & 14.9 & TL & 0.02 \\ (5,60) & 0.0 & 17.35 & 0.02 & 0.0 & TL & 0.20 & 83.0 & TL & 0.07 & 37.8 & TL & 0.19 \\ (5,180) & 0.0 & TL & 0.07 & 68.9 & TL & 0.32 & 77.0 & TL & 0.36 & - & TL & 0.64 \\ (25,30) & 0.0 & 92.49 & 0.19 & 10.7 & TL & 0.33 & 28.6 & TL & 0.72 & - & TL & 0.27 \\ (25,60) & 0.0 & 447.62 & 4.24 & 32.7 & TL & 0.44 & 375.0 & TL & 1.00 & - & TL & 0.87 \\ (25,180) & 0.0 & 86.79 & 0.47 & 74.7 & TL & 0.36 & 525.8 & TL & 30.94 & - & TL & 3.11 \\ (100,30) & 0.0 & TL & 0.51 & 12.6 & TL & 0.82 & 412.1 & TL & 1.43 & - & TL & 2.79 \\ (100,60) & 0.0 & 418.16 & 0.33 & 15.9 & TL & 11.69 & 310.9 & TL & 3.21 & - & TL & 6.04 \\ (100,180) & 0.0 & 849.48 & 1.69 & 38.3 & TL & 22.58 & - & TL & 93.84 & - & OOM & 29.07 \\ \textbf{Avg.} & & & \textbf{0.84} & & & \textbf{4.09} & & & \textbf{14.62} & & & \textbf{4.78} \\\midrule {\boldmath$\alpha=40$} & & & & & & & & & & & & \\ (5,30) & 0.0 & TL & 0.38 & 2.1 & TL & 0.19 & 0.0 & TL & 0.03 & 7.1 & TL & 0.04 \\ (5,60) & 0.0 & TL & 0.39 & 50.9 & TL & 1.00 & 20.6 & TL & 0.12 & 84.5 & TL & 0.10 \\ (5,180) & 0.0 & TL & 0.76 & 333.9 & TL & 2.07 & - & TL & 1.26 & - & TL & 0.74 \\ (25,30) & 0.0 & TL & 0.12 & 607.9 & TL & 0.35 & - & TL & 1.28 & - & TL & 0.49 \\ (25,60) & 29.2 & TL & 0.47 & - & TL & 0.68 & - & TL & 4.74 & - & TL & 5.31 \\ (25,180) & 178.0 & TL & 0.88 & - & OOM & 0.79 & - & TL & 1.20 & - & TL & 8.98 \\ (100,30) & 439.5 & TL & 0.78 & - & OOM & 2.41 & - & TL & 3.25 & - & OOM & 4.90 \\ (100,60) & 0.0 & 193.97 & 0.11 & - & OOM & 81.39 & - & TL & 21.61 & - & TL & 12.63 \\ (100,180) & 0.0 & 245.70 & 1.76 & - & OOM & 1.30 & - & OOM & 4.75 & - & OOM & 155.13 \\ \textbf{Avg.} & & & \textbf{0.63} & & & \textbf{10.02} & & & \textbf{4.25} & & & \textbf{20.92} \\ \bottomrule \end{tabular} \end{footnotesize} \rmfamily \caption{Comparison of formulations W and GS for the single-block SPRP with scattered storage.} \label{tab:mixed} \end{table} For W, both a higher number of SKUs in the pick list and a higher degree of duplication make the instances more difficult to solve. Of the 144 instances, 95 cannot be solved to proven optimality within the time limit (no valid upper bound is found in 24 cases, and an OOM error occurs in 8 cases). Contrary to this, GS is able to solve all instances, and the average runtime on the hardest instance group is approximately 21 seconds. The highest runtime observed for any instance is around 155 seconds. The relationship between the number of SKUs in the pick list or the degree of duplication and the runtime of GS is not entirely consistent: we can again note only a rough tendency that runtimes are higher for larger values of the two parameters. 
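To make the scattered-storage instance generation described above concrete, the following Python sketch mirrors its individual steps (number of distinct SKUs $\xi$, ABC class split, storage assignment, per-position supplies, pick list, and demands). It is an illustrative reconstruction only: all identifiers, the random source, and minor details such as the rounding of the class sizes are assumptions rather than a description of the generator actually used for the experiments.
\begin{verbatim}
import math, random

def generate_scattered_instance(m, n, a, alpha, seed=None):
    rng = random.Random(seed)
    positions = [(aisle, slot) for aisle in range(m) for slot in range(n)]
    xi = max(a, math.ceil(m * n / alpha))        # number of distinct SKUs
    skus = list(range(xi))
    # ABC classes by turnover rate: 20% A, 30% B, 50% C
    n_a, n_b = round(0.2 * xi), round(0.3 * xi)
    classes = {"A": skus[:n_a], "B": skus[n_a:n_a + n_b], "C": skus[n_a + n_b:]}

    def draw_sku():
        cls = rng.choices(["A", "B", "C"], weights=[0.80, 0.15, 0.05])[0]
        return rng.choice(classes[cls]) if classes[cls] else rng.choice(skus)

    # each SKU is first assigned to one randomly selected picking position ...
    rng.shuffle(positions)
    storage = dict(zip(positions[:xi], skus))
    # ... and all remaining positions receive a class-weighted random SKU
    for pos in positions[xi:]:
        storage[pos] = draw_sku()
    supply = {pos: rng.randint(1, 3) for pos in positions}  # items per position

    # pick list: a distinct SKUs, drawn with the same class probabilities
    pick_list = set()
    while len(pick_list) < a:
        pick_list.add(draw_sku())
    total = {h: sum(q for p, q in supply.items() if storage[p] == h)
             for h in pick_list}
    demand = {h: rng.randint(1, min(6, total[h])) for h in pick_list}
    return storage, supply, demand, pick_list
\end{verbatim}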
\subsection{Results for the Single-Block SPRP with Decoupling} \label{sec:decouple} To investigate the performance of our formulation in the setting with decoupling of picker and cart, we use the standard SPRP benchmark of \citet{Scholz:2016}. Because we want to study the effect of different capacities and speeds of the picker when traveling without cart, we consider three values of the picker capacity $C \in \{2,4,6\}$, and we vary the travel time required by the picker without cart by multiplying the original travel time with cart by factors $\beta\in\{0.5,0.75\}$, e.g., $\costTwoTop{j}^p = \beta\cdot \costTwoTop{j}$. Thus, we study $6 \cdot 900 = 5400$ instances. Table~\ref{tab:decouple} provides aggregate values for the 30 instances in each group $(m,a)$: the average gap $\Delta$ between the objective value obtained with the respective setting and the base setting without decoupling of picker and cart, the average runtime \runtime{avg}, and the maximum runtime \runtime{max}. In the last row, we provide the average of the gaps and of the average runtimes for all combinations of travel speed and capacity of the picker. \afterpage{ \begin{landscape} \begin{table}[p] \centering \ra{1.1} \begin{footnotesize} \sffamily \setlength{\tabcolsep}{2.0pt} \begin{tabular}{@{}l@{\hspace{0.5cm}}rrr@{\hspace{0.5cm}}rrr@{\hspace{0.5cm}}rrr@{\hspace{0.5cm}}rrr@{\hspace{0.5cm}}rrr@{\hspace{0.5cm}}rrr@{}} \toprule & \multicolumn{6}{c}{$C = 2$} & \multicolumn{6}{c}{$C = 4$} & \multicolumn{6}{c}{$C = 6$} \\ \cmidrule(l{.2cm}r{.2cm}){2-7} \cmidrule(l{.2cm}r{.2cm}){8-13} \cmidrule(l{.2cm}r{.0cm}){14-19} & \multicolumn{3}{c}{$\beta = 0.75$} & \multicolumn{3}{c}{$\beta = 0.5$}& \multicolumn{3}{c}{$\beta = 0.75$} & \multicolumn{3}{c}{$\beta = 0.5$} & \multicolumn{3}{c}{$\beta = 0.75$} & \multicolumn{3}{c}{$\beta = 0.5$} \\ \cmidrule(l{.2cm}r{.2cm}){2-4} \cmidrule(l{.0cm}r{.2cm}){5-7} \cmidrule(l{.2cm}r{.2cm}){8-10} \cmidrule(l{.2cm}r{.2cm}){11-13} \cmidrule(l{.2cm}r{.2cm}){14-16} \cmidrule(l{.2cm}r{.0cm}){17-19} $(m,a)$ & $\Delta$ & \runtime{avg} & \runtime{max} & $\Delta$ & \runtime{avg} & \runtime{max} & $\Delta$ & \runtime{avg} & \runtime{max} & $\Delta$ & \runtime{avg} & \runtime{max} & $\Delta$ & \runtime{avg} & \runtime{max} & $\Delta$ & \runtime{avg} & \runtime{max} \\ \midrule (5,30) & -2.14 & 0.01 & 0.03 & -6.01 & 0.01 & 0.02 & -4.39 & 0.02 & 0.03 & -17.02 & 0.03 & 0.12 & -5.77 & 0.02 & 0.03 & -20.51 & 0.13 & 0.22 \\ (5,45) & -1.88 & 0.02 & 0.03 & -4.25 & 0.01 & 0.03 & -4.00 & 0.02 & 0.03 & -11.36 & 0.02 & 0.06 & -5.03 & 0.02 & 0.03 & -17.52 & 0.07 & 0.22 \\ (5,60) & -1.40 & 0.02 & 0.05 & -3.14 & 0.02 & 0.05 & -3.17 & 0.03 & 0.06 & -7.11 & 0.02 & 0.05 & -4.66 & 0.03 & 0.06 & -13.06 & 0.03 & 0.09 \\ (5,75) & -0.96 & 0.03 & 0.05 & -2.19 & 0.03 & 0.05 & -2.87 & 0.03 & 0.05 & -6.66 & 0.03 & 0.05 & -4.30 & 0.03 & 0.05 & -9.80 & 0.03 & 0.06 \\ (5,90) & -0.79 & 0.04 & 0.05 & -1.66 & 0.03 & 0.05 & -2.03 & 0.03 & 0.06 & -4.51 & 0.03 & 0.06 & -3.36 & 0.03 & 0.06 & -7.53 & 0.03 & 0.05 \\\addlinespace (10,30) & -7.43 & 0.06 & 0.14 & -18.24 & 0.15 & 0.62 & -12.25 & 0.11 & 0.37 & -29.06 & 0.55 & 2.37 & -12.95 & 0.11 & 0.37 & -30.23 & 1.19 & 2.68 \\ (10,45) & -3.83 & 0.05 & 0.09 & -10.47 & 0.08 & 0.28 & -7.31 & 0.07 & 0.26 & -23.85 & 0.46 & 1.95 & -8.40 & 0.07 & 0.26 & -27.80 & 1.66 & 5.08 \\ (10,60) & -2.26 & 0.05 & 0.13 & -6.81 & 0.07 & 0.22 & -5.59 & 0.07 & 0.18 & -20.25 & 0.28 & 3.77 & -6.89 & 0.07 & 0.18 & -25.97 & 0.71 & 4.23 \\ (10,75) & -1.30 & 0.04 & 0.06 & -3.73 & 0.06 & 0.16 & -3.27 & 0.06 & 0.11 & -13.92 & 
0.13 & 0.36 & -4.48 & 0.06 & 0.11 & -21.74 & 0.36 & 2.06 \\ (10,90) & -0.64 & 0.05 & 0.12 & -1.92 & 0.06 & 0.22 & -1.67 & 0.07 & 0.14 & -8.57 & 0.12 & 0.38 & -2.51 & 0.07 & 0.14 & -16.39 & 0.25 & 0.66 \\\addlinespace (15,30) & -9.84 & 0.17 & 0.99 & -22.38 & 0.78 & 4.05 & -14.18 & 0.29 & 1.18 & -31.72 & 2.15 & 5.09 & -14.50 & 0.29 & 1.18 & -32.49 & 3.28 & 6.13 \\ (15,45) & -7.48 & 0.12 & 0.33 & -18.60 & 0.34 & 1.01 & -12.02 & 0.17 & 0.42 & -30.73 & 1.92 & 5.59 & -12.80 & 0.17 & 0.42 & -32.44 & 3.16 & 6.04 \\ (15,60) & -4.82 & 0.09 & 0.17 & -12.73 & 0.24 & 0.66 & -9.84 & 0.17 & 0.50 & -28.41 & 1.00 & 5.81 & -11.00 & 0.17 & 0.50 & -31.65 & 3.37 & 7.56 \\ (15,75) & -3.22 & 0.10 & 0.20 & -9.99 & 0.19 & 0.55 & -6.98 & 0.12 & 0.28 & -23.74 & 0.41 & 3.63 & -8.02 & 0.12 & 0.28 & -28.63 & 3.08 & 8.39 \\ (15,90) & -2.07 & 0.07 & 0.13 & -6.33 & 0.12 & 0.38 & -4.86 & 0.12 & 0.25 & -18.58 & 0.29 & 0.58 & -6.08 & 0.12 & 0.25 & -25.48 & 2.61 & 9.02 \\\addlinespace (20,30) & -11.78 & 0.46 & 4.81 & -26.11 & 2.09 & 8.49 & -15.34 & 1.08 & 3.58 & -33.11 & 4.49 & 7.99 & -15.55 & 1.08 & 3.58 & -33.65 & 5.44 & 9.76 \\ (20,45) & -9.11 & 0.17 & 0.47 & -21.80 & 1.13 & 6.78 & -13.85 & 0.57 & 4.81 & -33.09 & 4.86 & 16.42 & -14.52 & 0.57 & 4.81 & -34.40 & 6.55 & 16.95 \\ (20,60) & -7.71 & 0.18 & 0.66 & -19.10 & 0.74 & 4.52 & -12.68 & 0.29 & 0.82 & -32.32 & 3.57 & 9.36 & -13.33 & 0.29 & 0.82 & -34.14 & 6.23 & 12.20 \\ (20,75) & -5.15 & 0.15 & 0.30 & -13.87 & 0.75 & 4.37 & -10.16 & 0.22 & 0.66 & -29.27 & 2.30 & 9.15 & -11.03 & 0.22 & 0.66 & -32.26 & 5.76 & 15.64 \\ (20,90) & -3.98 & 0.14 & 0.37 & -11.00 & 0.30 & 0.56 & -8.03 & 0.23 & 0.45 & -25.50 & 1.47 & 6.16 & -9.17 & 0.23 & 0.45 & -30.44 & 5.09 & 16.37 \\\addlinespace (25,30) & -11.69 & 0.83 & 4.99 & -25.91 & 3.46 & 12.47 & -14.92 & 1.73 & 7.21 & -32.17 & 6.35 & 18.22 & -15.19 & 1.73 & 7.21 & -32.73 & 7.74 & 13.89 \\ (25,45) & -10.87 & 0.32 & 2.69 & -24.38 & 2.55 & 15.51 & -15.11 & 0.74 & 2.64 & -33.76 & 7.01 & 14.90 & -15.58 & 0.74 & 2.64 & -34.65 & 9.75 & 20.71 \\ (25,60) & -9.18 & 0.25 & 0.51 & -21.57 & 2.13 & 8.09 & -14.12 & 0.45 & 1.82 & -33.82 & 6.46 & 20.65 & -14.75 & 0.45 & 1.82 & -35.23 & 9.11 & 26.48 \\ (25,75) & -7.32 & 0.23 & 0.48 & -18.44 & 1.69 & 5.01 & -12.50 & 0.35 & 0.92 & -32.56 & 8.07 & 27.64 & -13.34 & 0.35 & 0.92 & -34.55 & 8.80 & 19.91 \\ (25,90) & -6.22 & 0.22 & 0.41 & -15.82 & 0.71 & 3.38 & -11.02 & 0.35 & 0.81 & -30.33 & 4.66 & 26.77 & -12.15 & 0.35 & 0.81 & -33.83 & 10.51 & 26.92 \\\addlinespace (30,30) & -12.51 & 2.18 & 8.97 & -26.66 & 7.53 & 29.84 & -15.54 & 3.31 & 11.85 & -32.30 & 11.95 & 34.28 & -15.73 & 3.31 & 11.85 & -32.81 & 13.30 & 34.73 \\ (30,45) & -11.26 & 0.41 & 2.07 & -25.20 & 3.93 & 29.24 & -15.55 & 1.08 & 7.44 & -34.21 & 11.48 & 28.04 & -15.81 & 1.08 & 7.44 & -34.80 & 14.30 & 37.37 \\ (30,60) & -10.57 & 0.43 & 1.24 & -24.26 & 3.36 & 10.33 & -15.50 & 1.08 & 4.66 & -35.18 & 9.84 & 24.00 & -15.85 & 1.08 & 4.66 & -36.01 & 16.53 & 49.18 \\ (30,75) & -8.36 & 0.43 & 1.55 & -20.66 & 4.33 & 14.04 & -13.94 & 0.75 & 2.88 & -33.98 & 8.92 & 28.41 & -14.65 & 0.75 & 2.88 & -35.54 & 11.87 & 26.38 \\ (30,90) & -7.62 & 0.27 & 0.59 & -18.57 & 2.80 & 7.31 & -12.78 & 0.55 & 1.73 & -32.59 & 12.17 & 50.15 & -13.68 & 0.55 & 1.73 & -35.16 & 18.77 & 35.48 \\ \midrule \textbf{Avg.} & \textbf{-6.11} & \textbf{0.25} & & \textbf{-14.73} & \textbf{1.32} & & \textbf{-9.85} & \textbf{0.47} & & \textbf{-25.32} & \textbf{3.70} & & \textbf{-10.70} & \textbf{0.47} & & \textbf{-28.38} & \textbf{5.66} & \\\bottomrule \end{tabular} \end{footnotesize} \rmfamily 
\caption{Results for the single-block SPRP with decoupling for different values of travel time factor $\beta$ and picker capacity $C$.}\label{tab:decouple} \end{table} \end{landscape} }
Concerning the performance of our formulation, we observe that average and maximum runtimes tend to increase if either the travel time required by the picker traveling alone decreases, i.e., $\beta$ decreases, or the capacity $C$ increases. Nevertheless, all instances can be solved to optimality within very short runtimes of at most 50 seconds. The average runtimes over all instances of a certain combination of picker speed and capacity range between 0.25 and 5.66 seconds.
We find that decoupling results in considerable cost savings, which grow with increasing picker speed and capacity. Even assuming the conservative values $C=2$ and $\beta=0.75$, the objective value is notably reduced by around 6\% on average. For the most optimistic setting with $C=6$ and $\beta=0.5$, savings rise to more than 28\%.
\subsection{Results for the Single-Block SPRP with Multiple End Depots} \label{sec:multdepot}
To study the effect of multiple end depots, we generate $3 \cdot 900 = 2700$ new instances from the standard SPRP instances of \citet{Scholz:2016} by locating end depots at the top or the bottom (independent of each other) of each aisle of the respective instance with a probability of $\sigma \in \{0.1, 0.5,1.0\}$. Thus, the instances with $\sigma=1$ refer to the single-block SPRP with decentralized depositing introduced in \citet{DeKoster:1998}. Table~\ref{tab:multEndDepot} presents aggregate results for each instance group. Columns $\Delta$ report the average gap between the optimal solution with multiple end depots and the optimum of the standard setting in which the picker must return to the start depot. Columns \runtime{avg} and \runtime{max} again report average and maximum runtime for each instance group.
Our formulation is able to solve all instances to optimality within a maximum runtime of approximately seven seconds. We note that, in general, the runtimes slightly increase with $\sigma$, on average from 1.13 seconds to 2.09 seconds. With respect to solution quality, we see that only moderate average savings between 2\% for $\sigma=0.1$ and 3.4\% for $\sigma=1.0$ can be realized. This suggests that, in a single-block rectangular warehouse, the overall benefits of multiple end depots are rather limited. This result might be different if the batching decision already incorporates the availability of multiple end depots. The results also indicate that even a few additional end depots already achieve a large portion of the possible benefits.
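As a brief illustration of how the two experimental variants discussed in this and the previous subsection are derived from a standard SPRP instance, the following Python sketch scales the travel times of the picker walking alone by the factor $\beta$ and samples the optional end depots aisle by aisle with probability $\sigma$. The data representation and the names are assumptions made for illustration only.
\begin{verbatim}
import random

def picker_alone_times(travel_time, beta):
    # decoupling: the picker walking without the cart needs beta times
    # the original travel time with the cart (beta in {0.5, 0.75})
    return {arc: beta * t for arc, t in travel_time.items()}

def sample_end_depots(m, sigma, seed=None):
    # an end depot at the top and/or the bottom of each of the m aisles,
    # each placed independently with probability sigma
    rng = random.Random(seed)
    return [(aisle, end) for aisle in range(m)
            for end in ("top", "bottom") if rng.random() < sigma]
\end{verbatim}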
\begin{table}[htbp] \centering \begin{footnotesize} \sffamily \ra{1.1} \setlength{\tabcolsep}{3.0pt} \begin{tabular}{@{}l@{\hspace{0.3cm}}rrr@{\hspace{0.3cm}}rrr@{\hspace{0.3cm}}rrr@{}} \toprule & \multicolumn{3}{c}{$\sigma = 0.1$} & \multicolumn{3}{c}{$\sigma = 0.5$} & \multicolumn{3}{c}{$\sigma = 1.0$} \\ \cmidrule(l{.1cm}r{.1cm}){2-4} \cmidrule(l{.1cm}r{.1cm}){5-7} \cmidrule(l{.1cm}r{.0cm}){8-10} $(m,a)$ & $\Delta$ & \runtime{avg} & \runtime{max} & $\Delta$ & \runtime{avg} & \runtime{max} & $\Delta$ & \runtime{avg} & \runtime{max} \\ \midrule (5,30) & -1.25 & 0.02 & 0.05 & -2.78 & 0.03 & 0.09 & -3.24 & 0.03 & 0.08 \\ (5,45) & -1.68 & 0.03 & 0.09 & -3.82 & 0.03 & 0.08 & -4.22 & 0.03 & 0.09 \\ (5,60) & -3.21 & 0.03 & 0.09 & -6.85 & 0.02 & 0.05 & -7.26 & 0.02 & 0.06 \\ (5,75) & -3.75 & 0.04 & 0.12 & -6.47 & 0.03 & 0.13 & -7.48 & 0.03 & 0.05 \\ (5,90) & -3.87 & 0.04 & 0.12 & -7.46 & 0.03 & 0.05 & -7.67 & 0.03 & 0.05 \\\addlinespace (10,30) & -1.25 & 0.11 & 0.38 & -2.51 & 0.26 & 0.51 & -2.83 & 0.28 & 0.53 \\ (10,45) & -1.55 & 0.11 & 0.31 & -2.50 & 0.17 & 0.46 & -2.76 & 0.21 & 0.42 \\ (10,60) & -1.31 & 0.10 & 0.25 & -2.39 & 0.17 & 0.47 & -2.55 & 0.17 & 0.36 \\ (10,75) & -1.27 & 0.08 & 0.16 & -2.47 & 0.13 & 0.45 & -2.59 & 0.18 & 0.58 \\ (10,90) & -1.86 & 0.08 & 0.19 & -3.06 & 0.12 & 0.30 & -3.22 & 0.11 & 0.36 \\\addlinespace (15,30) & -1.96 & 0.32 & 1.53 & -2.71 & 0.70 & 3.27 & -2.97 & 0.80 & 2.62 \\ (15,45) & -1.32 & 0.29 & 0.75 & -2.45 & 0.73 & 2.90 & -2.58 & 0.98 & 3.68 \\ (15,60) & -1.65 & 0.29 & 0.93 & -2.40 & 0.59 & 2.14 & -2.48 & 0.76 & 3.54 \\ (15,75) & -1.68 & 0.22 & 0.60 & -2.56 & 0.30 & 0.79 & -2.68 & 0.39 & 1.36 \\ (15,90) & -1.98 & 0.19 & 0.63 & -2.68 & 0.20 & 0.39 & -2.74 & 0.22 & 0.53 \\\addlinespace (20,30) & -2.53 & 0.75 & 2.70 & -3.42 & 1.69 & 4.55 & -3.57 & 2.10 & 4.94 \\ (20,45) & -2.27 & 0.70 & 2.66 & -2.92 & 1.73 & 3.81 & -3.08 & 1.98 & 5.08 \\ (20,60) & -1.45 & 1.18 & 3.15 & -2.33 & 1.81 & 4.31 & -2.37 & 2.68 & 6.07 \\ (20,75) & -1.49 & 0.76 & 3.37 & -2.16 & 1.51 & 4.73 & -2.27 & 1.69 & 4.97 \\ (20,90) & -1.74 & 0.72 & 1.96 & -2.52 & 1.12 & 3.96 & -2.65 & 1.30 & 5.47 \\\addlinespace (25,30) & -3.23 & 1.31 & 4.36 & -4.31 & 2.95 & 6.46 & -4.48 & 3.77 & 7.49 \\ (25,45) & -2.44 & 1.97 & 5.48 & -3.06 & 3.57 & 9.39 & -3.24 & 4.10 & 8.78 \\ (25,60) & -1.77 & 2.09 & 5.88 & -2.44 & 3.81 & 8.29 & -2.60 & 3.62 & 7.23 \\ (25,75) & -1.79 & 1.73 & 6.34 & -2.40 & 3.26 & 8.60 & -2.53 & 3.08 & 7.40 \\ (25,90) & -2.02 & 1.56 & 5.23 & -2.63 & 2.43 & 5.67 & -2.69 & 3.06 & 9.80 \\\addlinespace (30,30) & -3.16 & 3.65 & 8.22 & -4.20 & 5.01 & 11.12 & -4.43 & 5.50 & 12.12 \\ (30,45) & -2.53 & 3.74 & 8.93 & -3.04 & 5.54 & 12.90 & -3.19 & 6.63 & 17.93 \\ (30,60) & -2.13 & 4.30 & 9.40 & -2.55 & 5.92 & 10.22 & -2.62 & 7.05 & 11.18 \\ (30,75) & -2.09 & 3.67 & 10.50 & -2.48 & 5.32 & 13.70 & -2.62 & 5.90 & 14.82 \\ (30,90) & -1.77 & 3.77 & 12.93 & -2.16 & 5.36 & 14.52 & -2.24 & 5.89 & 14.15 \\ \midrule \textbf{Avg.} & \textbf{-2.07} & \textbf{1.13} & & \textbf{-3.19} & \textbf{1.82} & & \textbf{-3.39} & \textbf{2.09} & \\\bottomrule \end{tabular} \end{footnotesize} \rmfamily \caption{Results for single-block SPRP with multiple end depots for different probabilites $\sigma$ that an aisle contains an end depot at the top or bottom.} \label{tab:multEndDepot} \end{table} \section{Conclusion} \label{sec:conclusion} In this paper, we present a compact formulation of the standard SPRP that directly exploits two properties of an optimal picking tour used in the algorithm of \citet{Ratliff:1983} and thus does 
not require classical subtour elimination constraints. Our formulation outperforms existing standard SPRP formulations from the literature and is able to solve large problem instances within short runtimes. The extensions of our formulation to scattered storage, to the decoupling of picker and cart, and to multiple end depots also solve realistically sized instances with low computational effort. Large savings are possible by allowing the decoupling of picker and cart: assuming a picker capacity of only two items and a reduction of travel time of $25\%$ when the picker travels alone, cost savings of $6\%$ are possible, and up to $28\%$ are achieved with a picker capacity of six items and a doubling of picker speed. Contrary to this, the cost savings of multiple end depots are rather limited. An interesting topic for future research is the extension or utilization of our formulation for integrated warehousing problems with a picker routing component, as outlined in Section 1. \end{document}
\begin{document} \title{Single-shot measurement of quantum optical phase} \author{K. L. Pregnell and D. T. Pegg} \affiliation{School of Science, Griffith University, Nathan, Brisbane, 4111, Australia} \date{\today} \begin{abstract} Although the Canonical phase of light, which is defined as the complement of photon number, has been described theoretically by a variety of distinct approaches, there have been no methods proposed for its measurement. Indeed doubts have been expressed about whether or not it is measurable. Here we show how it is possible, at least in principle, to perform a single-shot measurement of Canonical phase using beam splitters, mirrors, phase shifters and photodetectors. \end{abstract} \pacs{42.50.Dv, 42.50.-p} \maketitle Quantum-limited phase measurements of the optical field have important applications in precision measurements of small distances in interferometry and in the emerging field of quantum communication, where there is the possibility of encoding information in the phase of light pulses. Much work has been done in attempting to understand the quantum nature of phase. Some approaches have been motivated by the aim of expressing phase as the complement of photon number \cite{Leon}. Examples of these approaches include the probability operator measure approach \cite{Helstrom,SS}, a formalism in which the Hilbert space is doubled \cite{Newton}, a limiting approach based on a finite Hilbert space \cite{PB,tute} and a more general axiomatic approach \cite{Leon}. Although these approaches are quite distinct, they all lead to the same phase probability distribution for a field in state $|\psi\rangle$ as a function of the phase angle $\theta$ \cite{Leon}: \begin{equation} P(\theta) =\frac{1}{2\pi}|\sum_{n=0}^\infty \langle\psi|n\rangle \exp (in\theta)|^{2} \label{0} \end{equation} where $|n\rangle$ is a photon number state. Leonhardt {\em et al.} \cite{Leon} have called this common distribution the ``canonical'' phase distribution to indicate a quantity that is the canonical conjugate, or complement, of photon number. This distribution is shifted uniformly when a phase shifter is applied to the field and is not changed by a photon number shift \cite{Leon}. We adopt this definition here and use the term Canonical phase to denote the quantity whose distribution is given by (\ref{0}). Much less progress has been made on ways to measure Canonical phase. Homodyne techniques can be used to measure phase-like properties of light but are not measurements of Canonical phase. It is possible in principle to measure the Canonical phase distribution by a series of experiments on a reproducible state of light \cite{BPprl} but there has been no known way of performing a single-shot measurement. Indeed it is thought that this might be impossible \cite{Wiseman}. Even leaving aside the practical issues, the concept that a particular fundamental quantum observable may not be measurable, even in principle, has interesting general conceptual ramifications for quantum mechanics. A different approach to the phase problem, which avoids difficulties in finding a way to measure Canonical phase, is to define phase operationally in terms of observables that can be measured \cite {Leon}. The best known of these operational phase approaches is that of Noh {\em et al.} \cite {Noh1, Noh2}. 
Although the experiments to measure this operational phase produce excellent results, they were not designed to measure Canonical phase as defined here and, as shown by the measured phase distribution \cite{Noh2}, they do not measure Canonical phase. In this paper we show how, despite these past difficulties, it is indeed possible, at least in principle, to perform a single-shot measurement of Canonical phase in the same sense that the experiments of Noh {\em et al.} are single-shot measurements of their operational phase. A single-shot measurement of a quantum observable must not only yield one of the eigenvalues of the observable, but repeating the measurement many times on systems in identical states should result in a probability distribution appropriate to that state. If the spectrum of eigenvalues is discrete, the probabilities of the results can be easily obtained from the experimental statistics. Where the spectrum is continuous, the probability density is obtainable by dividing the eigenvalue range into a number of small bins and finding the number of results in each bin. As the number of experiments needed to obtain measurable probabilities increases as the reciprocal of the bin size, a practical experiment will require a non-zero bin size and will produce a histogram rather than a smooth curve. Although the experiments of Noh {\em et al.} \cite{Noh1, Noh2} were not designed to measure Canonical phase, it is helpful to be guided by their approach. In addition to their results being measured and plotted as a histogram, some of the experimental data are discarded, specifically photon count outcomes that lead to an indeterminacy of the type zero divided by zero in their definitions of the cosine and sine of the phase \cite{Noh1}. The particular experiment that yields such an outcome is ignored and its results are not included in the statistics. Concerning the discarding of some results, we note in general that the well-known expression for the probability that a von Neumann measurement on a pure state $|\psi \rangle $ yields a result $q$ is $\langle \psi |q\rangle \langle q|\psi \rangle $, where $|q\rangle $ is the eigenstate of the measurement operator corresponding to eigenvalue $q$. If the state to be measured is a mixed state with a density operator $\hat{\rho }$, the expression becomes Tr($\hat{\rho } \widehat{\Pi }_q)$ where $\widehat{\Pi }_q$ = $|q\rangle \langle q|$. The operator $\widehat{\Pi }_q$ is a particular case of an element of a probability operator measure (POM) \cite{Helstrom}. The sum of all the elements of a POM is the unit operator and the expression Tr($\hat{\rho }\widehat{\Pi }_q)$ for the probability is based on the premise that all possible outcomes of the measurement are retained for the statistics. If some of the possible outcomes of an experiment are discarded, the probability of a particular result calculated from the final statistics is given by the normalized expression Tr($\hat{\rho }\widehat{\Pi }_q)/\sum_p$Tr($\hat{\rho } \widehat{\Pi }_p)$, where the sum is over all the elements of the POM corresponding to outcomes of the measurement that are retained. We seek now to approximate the continuous distribution (\ref{0}) by a histogram representing the probability distribution for a discrete observable $\theta_{m}$ such that when the separation $\delta\theta$ of consecutive values of $\theta_{m}$ tends to zero the continuous distribution is regained.
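As a purely numerical illustration of this binning, the following short Python sketch evaluates the distribution (\ref{0}) for a given set of number-state amplitudes on a fine grid and collects the probability into a chosen number of equal bins of width $\delta\theta$. It is an illustrative aid only, plays no role in the measurement scheme itself, and its names and numerical details are assumptions for the purpose of illustration.
\begin{verbatim}
import cmath, math

def binned_phase_distribution(amps, bins, grid=10000):
    # amps[n] = <psi|n> for n = 0 ... N (a truncated, normalized state);
    # returns the probability collected in each of 'bins' equal bins of
    # width 2*pi/bins, by integrating the canonical phase distribution
    # P(theta) = |sum_n <psi|n> exp(i n theta)|^2 / (2 pi) on a fine grid.
    def P(theta):
        s = sum(a * cmath.exp(1j * n * theta) for n, a in enumerate(amps))
        return abs(s) ** 2 / (2 * math.pi)
    dtheta = 2 * math.pi / grid
    hist = [0.0] * bins
    for k in range(grid):
        theta = (k + 0.5) * dtheta
        hist[int(theta * bins / (2 * math.pi))] += P(theta) * dtheta
    return hist  # approximately sums to 1 for a normalized input state

# example: the state (|0> + |1>)/sqrt(2), collected into 8 bins
print(binned_phase_distribution([1 / math.sqrt(2), 1 / math.sqrt(2)], bins=8))
\end{verbatim}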
A way to do this is first to define a state \begin{equation} |\theta_m\rangle =\frac 1{(N+1)^{1/2}}\sum_{n=0}^N\exp (in\theta_m)|n\rangle. \label{1} \end{equation} There are $N+1$ orthogonal states $|\theta_{m}\rangle$ corresponding to $N+1$ values $\theta_{m}=m\delta\theta$ with $\delta\theta = 2\pi/(N+1)$ and $m=0,1,\dots N$. This range for $m$ ensures that $\theta_{m}$ takes values between $0$ and $2\pi$. Then, if we can find a measurement technique that yields the result $\theta_{m}$ with a probability of $|\langle \psi |\theta _m\rangle |^2$, the resulting histogram will approximate a continuous distribution with a probability density of $|\langle \psi |\theta _m\rangle |^2/\delta\theta$. It follows that, as we let $N$ tend to infinity, there will exist a value of $\theta_{m}$ as close as we like to any given value of $\theta$ with a probability density approaching $P(\theta)$ given by (\ref{0}). If we keep $N$ finite so that we can perform an experiment with a finite number of outcomes, then the value of $N$ must be sufficiently large to give the resolution $\delta\theta$ of phase angle required and also for $|\psi \rangle$ to be well approximated by $\sum_{n} \langle n|\psi \rangle |n\rangle$ where the sum is from $n = 0$ to $N$. The latter condition ensures that the terms with coefficients $\langle n|\psi\rangle$ for $n > N$ have little effect on the probability $|\langle \psi |\theta _m\rangle |^2$. As we shall be interested mainly in weak optical fields in the quantum regime with mean photon numbers of the order of unity, the maximum phase resolution $\delta\theta$ desired will usually be the determining factor in the choice of $N$. When $N$ is finite, the states $|\theta _m\rangle $ do not span the whole Hilbert space, so the projectors $|\theta _m\rangle \langle \theta _m|$ will not sum to the unit operator $\hat{1}$. Thus these projectors by themselves do not form the elements of a POM. To complete the POM we need to include an element $ \hat{1}-\sum_m|\theta _m\rangle \langle \theta _m|$. If we discard the outcome associated with this element, that is, treat an experiment with this outcome as an unsuccessful attempt at a measurement in a similar way that Noh {\it et al.} \cite{Noh1} treated experiments with indeterminate outcomes, then the probability that the outcome of a measurement is the phase angle $\theta _m$ is given by \begin{equation} Pr(\theta _m)=\frac{\text {Tr}(\hat{\rho }|\theta _m\rangle \langle \theta _m|)} {\sum_p\text {Tr}(\hat{\rho }|\theta _p\rangle \langle \theta _p|)}, \label{2} \end{equation} where $p=0,1\ldots\,N$. We now require a single-shot measuring device that will reproduce this probability in repeated experiments. \begin{figure} \caption{Multi-port device for measuring phase. The input and output modes are labelled $0, 1, \ldots N$ from the top. In input mode $0$ is the field in state $|\psi\rangle_{0}$ to be measured, in input mode $1$ is the reference field in state $|B\rangle_{1}$, and the remaining input modes are in vacuum states.} \label{Fig1} \end{figure} As measurements will be performed ultimately by photodetectors, we seek an optical device that transforms input fields in such a way that photon number measurements at the output ports can be converted to phase measurements. We examine the case of a multi-port device with $N+1$ input modes and $N+1$ output modes as depicted schematically in Fig. \ref{Fig1}.
As phase is not an absolute quantity, that is it can only be measured in relation to some reference state, we shall need the field in state $|\psi \rangle _0$ that is to be measured to be in one input and a reference field in state $|B\rangle _{1\text{ }}$ to be in another input. We let there be vacuum states $|0\rangle _i$ with $i=2,3\ldots N$ in the other input modes. We let the device be such that the input states are transformed by a unitary transformation $\widehat{R}$. We let $N+1$ photodetectors that can distinguish between zero photons, one photon and more than one photon be in the output modes. Consider the case where one photon is detected in each output mode except for mode $m$, in which zero photons are detected. The amplitude for this detection event is \begin{equation} _m\langle 0|\left (\prod_{j\neq m}\,_j\langle 1|\right )\widehat{R} \left (\prod_{i=2}^N|0\rangle _i\right )|B\rangle _1|\psi \rangle _0\, \label{3} \end{equation} where the first product is over values of $j$ from $0$ to $N$ excluding the value $m$. We require the transformation and the reference state to be such that this amplitude is proportional to $_0\langle \theta _m|\psi \rangle _0$ . We write the photon creation operators acting on the mode $i$ as $\hat{ a}_i^{\dagger}$. Writing $|1\rangle _j$ as $\hat{a}_j^{\dagger }|0\rangle _j$ we see that we require $|\theta _m\rangle _0$ to be proportional to \begin{equation} _1\langle B|\left (\prod_{i=2}^N\,_i\langle 0|\right )\left (\prod_{j\neq m} \widehat{R}^{\dagger}\hat{a}_j^{\dagger }\widehat{R}\right ) \widehat{R}^{\dagger }\,\left (\prod_{k=0}^{N}|0\rangle _k\right ) \label{3a} \end{equation} We rewrite the unitary transformation in the form \begin{equation} \widehat{R}^{\dagger }\hat{a}_j^{\dagger }\widehat{R}=\sum_{i=0}^NU_{ij} \hat{a}_i^{\dagger } \label{4} \end{equation} where $U_{ij}$ are the elements of a unitary matrix. Reck {\em et al.} \cite {Reck} have shown how it is possible to construct a multi-port device from mirrors, beam splitters and phase shifters that will transform the input modes into the output modes in accord with any $(N+1)\times (N+1)$ unitary matrix. We choose such a device for which the associated unitary matrix is \begin{equation} U_{ij}=\frac{\omega ^{ij}}{\sqrt{N+1}} \label{5} \end{equation} where $\omega =$ $\exp[-i2\pi /(N+1)]$ that is, a $(N+1)$th root of unity. Such a device will transform the combined vacuum state to the combined vacuum state so we can delete the $\widehat R^{\dagger}$ on the right of expression (\ref{3a}). Substituting (\ref{5}) into (\ref{4}) gives eventually \begin{eqnarray} \left (\prod_{i=2}^N\,_i\langle 0|\right )\left (\prod_{j\neq m} \widehat{R}^{\dagger}\hat{a}_j^{\dagger }\widehat{R}\right ) \left (\prod_{k=0}^{N}|0\rangle_k\right )= \nonumber \\ \kappa_{1}\left [\prod_{j\neq m}(\hat{a}_0^{\dagger }+\omega ^j\hat{a} _1^{\dagger })\right ] |0\rangle _0|0\rangle _1 \label{6} \end{eqnarray} where $\kappa_{1} =(N+1)^{-N/2}$. To evaluate (\ref{6}) we divide both sides of the identity \begin{equation} X^{N+1}+(-1)^N=(X+1)(X+\omega )(X+\omega ^2)\ldots (X+\omega ^N) \label{7} \end{equation} by $X+\omega ^m$ to give, after some rearrangement and application of the relation $\omega ^{m(N+1)}=1$, \begin{equation} \prod_{j\neq m}(X+\omega ^j)=(-1)^N\omega ^{mN}\frac{1-(-X\omega ^{-m})^{N+1} }{1-(-X\omega ^{-m})}\text{ .} \label{8} \end{equation} The last factor is the sum of a geometric progression. 
Writing this sum explicitly as $\sum_{n=0}^{N}(-X\omega ^{-m})^{n}$, expanding, and substituting $X=x/y$ gives eventually the identity \begin{equation} \prod_{j\neq m}(x+\omega ^jy)=\sum_{n=0}^N x^n(-\omega ^my)^{N-n}\text{ .} \label{9} \end{equation} We now expand $|B\rangle _1$ in terms of photon number states as \begin{equation} |B\rangle _1=\sum_{n=0}^Nb_n|n\rangle_{1} \label{10} \end{equation} and put $x=\hat{a}_0^{\dagger }$ and $y=\hat{a}_1^{\dagger }$ in (\ref{9}). Then from (\ref {6}) we find that (\ref{3a}) becomes \begin{eqnarray} _1\langle B|\left (\prod_{i=2}^N\,_i\langle 0|\right )\left (\prod_{j\neq m} \widehat{R}^{\dagger}\hat{a}_j^{\dagger }\widehat{R}\right ) \widehat{R}^{\dagger }\,\left (\prod_{k=0}^{N}|0\rangle _k\right ) =\nonumber \\ \kappa_{2}\sum_{n=0}^{N}(-1)^{N-n}{N\choose n}^{-1/2}\omega^{-nm}b_{N-n}^{*}|n\rangle_{0} \label{11} \end{eqnarray} where $\kappa_{2}= \kappa_{1}\omega^{-m}(N!)^{1/2}$. We see then that, if we let $|B\rangle_{1}$ be the binomial state \begin{equation} |B\rangle_{1}=2^{-N/2}\sum_{n=0}^{N}(-1)^{n}{N\choose n}^{1/2}|n\rangle_{1}, \label{12} \end{equation} then expression (\ref{11}) is proportional to $\sum_{n}\omega^{-nm}|n\rangle_{0}$, that is, to $|\theta_{m}\rangle_{0}$. Thus the amplitude for the event that zero photons are detected in output mode $m$ and one photon is detected in all the other output modes will be proportional to $_{0}\langle \theta_{m}|\psi\rangle_{0}$. The probability that the outcome of a measurement is this event, given that only outcomes associated with the $(N+1)$ events of this type are recorded in the statistics, will be given by (\ref {2}), where we note that the proportionality constant $\kappa_{2}$ will cancel from this expression. Thus the measurement event that zero photons are detected in output mode {\em m} and one photon is detected in all the other output modes can be taken as the event that the result of the measurement of the phase angle is $\theta_{m}$. Thus the photodetector with zero photocounts, when all other photodetectors have registered one photocount, can be regarded as a digital pointer to the value of the measured phase angle. We have shown, therefore, that it is indeed possible in principle to conduct a single-shot measurement of Canonical phase to within any given non-zero error, however small. This error is of the order $2\pi/(N+1)$ and will determine the value of $N$ chosen. \begin{figure} \caption{Triangular array for $N+1 = 4$. The outside beam splitters in the top row are 50:50, the middle one is fully reflecting.} \label{Fig2} \end{figure} While the aim of this paper is to establish how Canonical phase can be measured in principle, it is worth briefly considering some practical issues. Although we have specified that the photodetectors need only be capable of distinguishing among zero, one and more than one photon, reflecting the realistic case, there are other imperfections such as inefficiency. These will give rise to errors in the phase measurement, just as they will cause errors in a single-shot photon number measurement. In practice, there is no point in choosing the phase resolution $\delta\theta$ much smaller than the expected error due to photodetector inefficiencies; thus there is nothing lost in practice in keeping $N$ finite. A requirement for the measuring procedure is the availability of a binomial state. Such states have been studied for some time \cite{binom} but their generation has not yet been achieved.
In practice, however, we are usually interested in measuring weak fields in the quantum regime with mean photon numbers around unity \cite{Noh1, Noh2} and even substantially less \cite{Torg}. Only the first few coefficients of $|n\rangle_{0}$ in (\ref{11}) will be important for such weak fields. Also, it is not difficult to show that the reference state need not be truncated at $n = N$, as indicated in (\ref{10}), as coefficients $b_{n}$ with $n > N$ will not appear in (\ref{11}). Thus we need only prepare a reference state with a small number of its photon number state coefficients proportional to the appropriate binomial coefficients. Additionally, of course, in a practical experiment we are forced to tolerate some inaccuracy due to photodetector errors, so it will not be necessary for the reference state coefficients to be exactly proportional to the corresponding binomial state coefficients. These factors give some latitude in the preparation of the reference state. The multi-port device depicted in Fig. 1 can be constructed in a variety of ways. Reck {\em et al.} \cite{Reck} provide an algorithm for constructing a triangular array of beam splitters to realize any unitary transformation matrix. Fig. 2 shows, for example, such a device that, with suitable phase shifters in the output modes, will realize the transformation $U_{ij}$ in (\ref{5}) with $N+1 = 4$. Because we are detecting photons, however, these output phase shifters are not actually necessary and are thus not shown. The number of beam splitters needed for a general triangular array increases quadratically with $N$. Fortunately, however, our required matrix (\ref{5}) represents a discrete Fourier transformation and we only require two of the input ports to have input fields that are not in the vacuum state. The device of T\"{o}rm\"{a} and Jex \cite{Jex} is ideally suited for these specific requirements. This device, which has an even number of input and output ports, is pictured in Ref. \cite{Jex}. It consists of just $(N+1)/2$ ordinary 50:50 beam splitters and two plate beam splitters. The latter are available in the form of glass plates with modulated transmittivity along the direction of the incoming beam propagation \cite{Jex}. As a large fraction of the raw data in this procedure is discarded, an interesting question arises as to whether or not there is a relationship between the method of this paper and a limiting case of the operational phase measurements of Torgerson and Mandel \cite{Torg2} where it is found that the distribution becomes sharper as more data are discarded. While preliminary analysis indicates that there is not, this will be discussed in more detail elsewhere. In conclusion we have shown that it is possible in principle to perform a single-shot measurement of the Canonical phase in the same sense that the experiments of Noh {\it et al.} are single-shot measurements of operational phase. The technique relies on generating a reference state with some number state coefficients proportional to those of a binomial state. \begin{acknowledgments} D. T. P. thanks the Australian Research Council for funding. \end{acknowledgments} \end{document}
\begin{document} \rhead{\thepage} \lhead{\author} \thispagestyle{empty} \raggedbottom \pagenumbering{arabic} \setcounter{section}{0} \title{Embedding 3-manifolds in spin 4-manifolds} \begin{abstract} An invariant of orientable 3-manifolds is defined by taking the minimum $n$ such that a given 3-manifold embeds in the connected sum of $n$ copies of $S^2 \times S^2$, and we call this $n$ the embedding number of the 3-manifold. We give some general properties of this invariant, and make calculations for families of lens spaces and Brieskorn spheres. We show how to construct rational and integral homology spheres whose embedding numbers grow arbitrarily large, and which can be calculated exactly if we assume the 11/8-Conjecture. In a different direction we show that any simply connected 4-manifold can be split along a rational homology sphere into a positive definite piece and a negative definite piece. \end{abstract} \begin{section}{Introduction}\label{intro} It is natural to ask which 3-manifolds embed in $S^4$ (or, equivalently, in $\mathbb{R}^4$). Such a 3-manifold must necessarily be orientable, and it turns out that there are different answers depending on whether one requires the embeddings to be smooth or only topologically locally flat. Freedman~\cite{F} showed that every integral homology sphere embeds topologically locally flatly in $S^4$, while there are several obstructions to a homology sphere embedding smoothly. An integral homology sphere embedded in $S^4$ splits $S^4$ into two integral homology 4-balls, and so any obstruction to bounding a smooth integral homology ball gives an obstruction to a smooth embedding in $S^4$. The simplest such obstruction is the Rokhlin invariant, and so any integral homology sphere with nontrivial Rokhlin invariant (for example, the Poincar\'e sphere) does not admit a smooth embedding into $S^4$. Other obstructions include the correction terms of Heegaard Floer homology, and for the case of rational homology spheres there are simpler obstructions coming from the torsion linking form and indeed the order of the first homology (it must be a square). On the constructive side, Casson and Harer~\cite{CH} gave several infinite families of Brieskorn homology spheres that smoothly embed in $S^4$ (see~\cite{BB}). More general classes of 3-manifolds that smoothly embed in $S^4$ include those that arise as cyclic branched covers of doubly slice knots (see~\cite{Gil-Liv},~\cite{Meier},~\cite{Don}) and homology spheres obtained by surgery on ribbon links~\cite{L}. For some specific classes of 3-manifolds it is known exactly which ones smoothly embed in $S^4$, for example circle bundles over closed surfaces~\cite{Crisp-Hillman} and connected sums of lens spaces~\cite{Don}. Budney and Burton~\cite{BB} have examined this question from the perspective of the 11-tetrahedron census of triangulated 3-manifolds. Unfortunately a complete answer to which 3-manifolds embed in $S^4$ remains out of reach. However, the question can be generalized by asking which 3-manifolds embed in some larger class of 4-manifolds (for the case of connected sums of $\mathbb{C}P^2$ see~\cite{EL}).
Since it is known that every orientable 3-manifold smoothly embeds in a connected sum of $S^2 \times S^2$'s, the minimum $n$ such that a given 3-manifold $Y$ \emph{smoothly} embeds in $\#_n S^2 \times S^2$ is a well-defined invariant of $Y$, which we call the \emph{embedding number} of $Y$ and denote $\varepsilon(Y)$. Hence the 3-manifolds that embed smoothly in $S^4$ are precisely those with embedding number equal to 0 (by convention the empty connected sum is $S^4$). Similar 3-manifold invariants, for example the surgery number (the minimal number of components of a link that admits a surgery to a given 3-manifold), are often notoriously difficult to compute. However, Kawauchi~\cite{Kaw} was able to produce infinite families of 3-manifolds whose embedding numbers grow arbitrarily large, and to compute them exactly for these manifolds (although he did not use this terminology). One drawback to his method is that it only works for 3-manifolds with non-zero $b_1$, and indeed for his examples the first Betti numbers are also unbounded. In this paper we focus on computing embedding numbers for integral and rational homology spheres. Lens spaces are an interesting and instructive class of 3-manifolds to consider. It is known that no lens space embeds in $S^4$, but if the lens space $L(p,q)$ is punctured (that is, we remove an open ball) then it embeds in $S^4$ if and only if $p$ is odd~\cite{Epstein,Zeeman}. For even $p$, the punctured lens space embeds in $S^2\times S^2$~\cite{EL}. Furthermore, Edmonds~\cite{Edm} showed that every lens space embeds \emph{topologically locally flatly} in $\#_4 S^2 \times S^2$. In contrast, the smooth embedding numbers for lens spaces behave quite differently, as we show in Section \ref{lens}. Indeed, for the family $L(n,1)$ the embedding numbers grow arbitrarily large (Proposition~\ref{bad lower bound}). We give upper and lower bounds for these embedding numbers, and give exact calculations for $n\leq19$; for even $n$ the embedding number is 1, and for odd $n$ the embedding numbers are listed in Figure~\ref{table}. As a tool we construct embeddings of $L(17,16)$ and $L(19,18)$ (and their associated canonical negative definite plumbings) into the $K3$ surface. We also consider the question of which lens spaces have embedding number 1 (Theorem~\ref{lens space ball}), and for odd $p$ they are exactly those lens spaces that bound rational homology balls; such lens spaces were classified by Lisca~\cite{Lisca-ribbon}. \begin{figure} \caption{Embedding numbers of $L(n,1)$, for odd $n\leq19$.} \label{table} \end{figure} In Section~\ref{general} we consider some general constructions and bounds. The most common technique we use to construct embeddings into $\#_n S^2 \times S^2$ is to realize the 3-manifold as surgery on an $n$-component, even-framed link (the double of the corresponding 4-manifold is $\#_n S^2\times S^2$, see Theorem \ref{all embed}), although we also use branched double cover arguments. In the other direction, most of our obstructions depend essentially on the fact that $\#_n S^2 \times S^2$ is a spin 4-manifold. Hence Rokhlin's Theorem and the 10/8-Theorem~\cite{Furuta} provide powerful tools. Besides lens spaces, the other class of 3-manifolds we consider in depth consists of the Brieskorn homology spheres (in Section \ref{Brieskorn}).
We give some general upper bounds on their embedding numbers, as well as some exact calculations for several infinite families where the embedding numbers are bounded. For example, each member of the family $\Sigma(2,3,6n+1)$ with $n$ odd has embedding number 10 (Proposition~\ref{Brieskorn_comp}). Work of Tange~\cite{Tange} allows us also to give families of Brieskorn spheres where the embedding numbers are unbounded, although we cannot give exact calculations. Unfortunately, the task of giving exact calculations of arbitrarily large embedding numbers (in the case of integral or rational homology spheres) appears to be related to the gap between the 10/8-Theorem and the 11/8-Conjecture (recall the 11/8-Conjecture states that for a spin, closed 4-manifold $X$ the signature and second Betti number should be related by the inequality $b_2(X) \geq \frac{11}{8}|\sigma(X)|$, while Furuta~\cite{Furuta} proved that $b_2(X) \geq \frac{10}{8}|\sigma(X)| +2$). While the 10/8-Theorem is effective for showing unboundedness of embedding numbers for many families of 3-manifolds, to give exact calculations it appears we must assume the validity of the 11/8-Conjecture (or else have counterexamples to the conjecture). In Section \ref{exact} we show how to do this by constructing integral and rational homology spheres that split connected sums of the $K3$ surface (these 4-manifolds lie on the 11/8-line) into definite pieces. In particular this method gives integral homology spheres that bound two negative definite spin 4-manifolds with different rank, answering a question of Tange \cite[Question 5.2]{Tange}. In fact our technique can be generalized using a structure theorem of Stong~\cite{Stong} to show that any simply connected 4-manifold can be decomposed into a positive definite 4-manifold and a negative definite 4-manifold (both simply connected), glued along a rational homology sphere (Theorem~\ref{definite splitting}). Finally, we point out that many of the techniques used in this paper are quite general, and can be applied to calculate embedding numbers for other classes of 3-manifolds than those explicitly addressed here, as well as to study embeddings of 3-manifolds into other spin 4-manifolds. \subsection*{Acknowledgements} The authors would like to thank Bob Gompf, Ahmad Issa, D. Kotschick, Ana Lecuona, and Andr\'as Stipsicz for helpful conversations. The first and third authors were partially supported by the ERC Advanced Grant LDTBud, and additionally the third author was partially supported by NSF grant DMS-1148490. The second author is supported by the Knut and Alice Wallenberg Foundation. \end{section} \begin{section}{Preliminaries and general statements}\label{general} Recall that the group ${\rm Spin}(n)$ is the double cover of $SO(n)$, which is also its universal cover as long as $n\ge 3$. A \emph{spin structure} on an $n$-manifold $M$ is a lift of the principal $SO(n)$-bundle associated to the tangent space $TM$ to a ${\rm Spin}(n)$-bundle over $M$. A spin structure on $M$ exists if and only if $M$ is orientable and the second Stiefel--Whitney class of its tangent bundle vanishes, i.e. if and only if $w_1(M) = 0$ and $w_2(M) = 0$; moreover, spin structures on $M$ are an affine space over $H^1(M;\mathbb{Z}/2\mathbb{Z})$. Since every orientable 3-manifold $Y$ is parallelizable, the Stiefel--Whitney classes of its tangent bundle vanish, hence $Y$ always admits a spin structure. Moreover, if $H^1(Y;\mathbb{Z}/2\mathbb{Z}) = 0$, it is unique.
This happens, for instance, when $Y$ is a rational homology sphere whose $H_1$ has odd order. Four-manifolds, on the other hand, do not always admit spin structures. In fact, a closed, simply connected 4-manifold $X$ admits a spin structure if and only if it has an even intersection form. Spin structures behave well with respect to gluing: if $(X_1, \mathfrak{s}_1)$ and $(X_2, \mathfrak{s}_2)$ are two spin 4-manifolds with boundary $\partial X_i = Y$, then $X_1\cup (-X_2)$ admits a spin structure provided the restrictions $\mathfrak{s}_1|_Y$ and $\mathfrak{s}_2|_Y$ agree. This condition is automatic when $Y$ is a rational homology sphere whose $H_1$ has odd order. Throughout we will assume all manifolds and maps to be smooth, and in addition we require that all manifolds be oriented. The following theorem is well-known (see~\cite[Section 5.7]{GS}). \begin{theorem}\label{all embed} Every $3$-manifold embeds in $\#_n S^2 \times S^2$ for some $n$. More precisely, every closed $3$-manifold can be realized as integral surgery on a link in $S^3$ where all the surgery coefficients are even. If there are $n$ components in such a link, then this surgery description gives an embedding into $\#_n S^2 \times S^2$. \end{theorem} \begin{proof}[Sketch of proof] Let $Y$ be a closed 3-manifold (if a 3-manifold has boundary we can double it to obtain a closed 3-manifold, and then embed the double by the following argument). Kaplan~\cite{Kap} gives an algorithm to realize $Y$ as integral surgery on an $n$-component link $L$ (for some $n$) where all the coefficients are even. From this description $Y$ is realized as the boundary of a spin 4-manifold $X$ obtained by attaching $n$ 2-handles to $B^4$ along $L$ with even framings (the intersection form is even, and hence $X$ is spin since it is simply connected). To obtain a handle decomposition for the double $\mathcal{D}X$ of $X$ we add $n$ additional 2-handles, each attached along a 0-framed meridian of a component of $L$, and then a 4-handle. Then repeatedly sliding over these 0-framed meridians results in $n$ 0-framed Hopf pairs (see Figure \ref{f:hopf}), which shows that $\mathcal{D}X$ is diffeomorphic to $\#_n S^2 \times S^2$ (see~\cite[Corollary 5.1.6]{GS} for more details of the necessary handle slides). \end{proof} \begin{figure} \caption{Two 0-framed Hopf pairs and a 4-handle give $\#_2 S^2 \times S^2$.} \label{f:hopf} \end{figure} Therefore the following is a well-defined invariant of 3-manifolds. \begin{definition} Given a 3-manifold $M$, let $\varepsilon(M)$ be the minimum $n$ such that $M$ embeds in $\#_n S^2 \times S^2$. Call $\varepsilon(M)$ the \emph{embedding number} of $M$. \end{definition} For example, $M$ embeds in $S^4$ if and only if $\varepsilon(M)=0$. Now we consider some general properties of this invariant. \begin{proposition}\label{p:generalproperties} Let $M$ and $N$ be $3$-manifolds, and let $\overline{M}$ denote $M$ with the opposite orientation. Then the embedding number satisfies the following properties: \begin{enumerate} \item $\varepsilon(M) = \varepsilon(\overline M)$; \item $\varepsilon(M \# N) \leq \varepsilon(M) + \varepsilon(N)$; \item $\varepsilon(M \# \overline{M}) \leq \varepsilon(M)$. \end{enumerate} \end{proposition} \begin{proof} Point (1) is obvious, since every embedding of $M$ is also an embedding of $\overline M$. Now we prove (2). Let $m=\varepsilon(M)$ and $n=\varepsilon(N)$.
Then $M$ embeds in $\#_m S^2 \mathfrak{t}imes S^2$ and $N$ embeds in $\#_n S^2 \mathfrak{t}imes S^2$, so the disjoint union $M \mathfrak{s}qcup N$ embeds in $\#_{m+n} S^2 \mathfrak{t}imes S^2$. Since $M$ and $N$ embed disjointly in $\#_{m+n} S^2 \mathfrak{t}imes S^2$, their connected sum also embeds. Just perform ambient surgery along an embedded arc $\gamma$ in $\#_{m+n} S^2 \mathfrak{t}imes S^2$ with one endpoint on $M$ and the other endpoint on $N$, such that the interior of the arc misses $M$ and $N$ (notice that in order to arrange the correct orientations we may have to change which connected component of $\#_m S^2 \mathfrak{t}imes S^2\mathfrak{s}etminus M$ and $\#_n S^2 \mathfrak{t}imes S^2\mathfrak{s}etminus N$ we use for the connected sum). Therefore $\varepsilon(M \# N) \leq m+n = \varepsilon(M) + \varepsilon(N)$. Now we prove (3). Let $M^\circ$ denote $M$ with an open $B^3$ removed. If $M$ embeds in $\#_n S^2 \mathfrak{t}imes S^2$, then obviously so does $M^\circ$. The boundary of a collar neighborhood $M^\circ \mathfrak{t}imes I$ of $M^\circ$ is $M \# \overline{M}$, and so $M \# \overline{M}$ embeds in $\#_n S^2 \mathfrak{t}imes S^2$ as well, finishing the proof. \end{proof} The next few results explore the relationship between small embedding numbers and bounding an integral or rational homology ball. Here and in the following, $H_*(\cdot)$ will always denote homology with integer coefficients, unless otherwise stated. \begin{proposition}\label{Zspheres} Let $M$ be an integral homology sphere. If $\varepsilon(M) \leq 1$ then $M$ bounds an integral homology ball. \end{proposition} \begin{proof} If $\varepsilon(M) \leq 1$ then $M$ embeds in $S^2 \mathfrak{t}imes S^2$. Let $X_1, X_2$ be the closures of the two connected components of $S^2\mathfrak{t}imes S^2\mathfrak{s}etminus M$, so that $S^2 \mathfrak{t}imes S^2 = X_1 \cup_M X_2$. Notice that $X_1$ and $X_2$ are spin 4-manifolds since they are codimension-0 submanifolds of the spin manifold $S^2\mathfrak{t}imes S^2$. Since $M$ is an integral homology sphere, the Mayer--Vietoris sequence implies that $H_1(X_1)=H_1(X_2)=0$, and furthermore, we get a splitting $H := Q_{S^2 \mathfrak{t}imes S^2}\colon\thinspaceng Q_{X_1} \oplus Q_{X_2}$ using the unimodular intersection forms $Q_{X_1}$ and $Q_{X_2}$. But this implies that one of $Q_{X_1}$ or $Q_{X_2}$ is trivial (since the forms must be even, and $H$ is the only nontrivial even, unimodular form of rank less than 8), say $Q_{X_1}$, and so $X_1$ must be an integral homology ball. \end{proof} Note that we do not know of any obstruction that can distinguish between an integral homology sphere embedding in $S^2 \mathfrak{t}imes S^2$ and embedding in $S^4$. Hence it is possible, although it seems unlikely, that every integral homology sphere that embeds in $S^2 \mathfrak{t}imes S^2$ also embeds in $S^4$. We now give a generalization of Proposition~\ref{Zspheres}. \begin{theorem}\label{t:oddH1} Let $Y$ be a rational homology sphere such that $H_1(Y)$ has odd order. If $\varepsilon(Y) \leq 1$, then $Y$ bounds a spin rational homology ball. \end{theorem} \begin{proof} Suppose $Y$ embeds in $S^2\mathfrak{t}imes S^2$, splitting $S^2\mathfrak{t}imes S^2$ into two spin connected components $X_1$ and $X_2$. Assume by contradiction that $Y$ does not bound a rational homology ball. The Mayer--Vietoris long exact sequence reads: \[ 0 \mathfrak{t}o H_2(X_1)\oplus H_2(X_2) \mathfrak{t}o H_2(S^2\mathfrak{t}imes S^2) \mathfrak{t}o H_1(Y) \mathfrak{t}o H_1(X_1)\oplus H_1(X_2) \mathfrak{t}o 0. 
\] In particular, since $H_1(Y)$ is finite, $H_2(X_1)$ and $H_2(X_2)$ are two free groups, the sum of whose ranks is $2$, and if $Y$ does not bound a rational homology ball, then $\rk H_2(X_1) = \rk H_2(X_2) = 1$. Hence both groups are isomorphic to $\mathbb{Z}$. Moreover, since $|H_1(Y)|$ is odd and it surjects onto $H_1(X_1)$ and $H_1(X_2)$, these two groups have odd order, too. The long exact sequence of the pair for $(X_i, Y)$ reads: \[ 0 \longrightarrow H_2(X_i) \stackrel{\phi}{\longrightarrow} H_2(X_i,Y) \longrightarrow H_1(Y) \longrightarrow H_1(X_i) \longrightarrow 0, \] where the fact that $H_1(X_i,Y)$ vanishes follows from the surjectivity of $H_1(Y)\to H_1(X_i)$, observed above. Now $H_2(X_i,Y)$ may have torsion, since by the universal coefficient theorem and Poincar\'e--Lefschetz duality $H_2(X_i,Y) \cong H^2(X_i) \cong H_2(X_i)\oplus H_1(X_i)$. Let $\alpha$ be a generator of $H_2(X_i)$, and $\beta$ be the Poincar\'e dual of an element in $H^2(X_i)$ that evaluates to 1 on $\alpha$; then $\beta \cdot \alpha =1$ and $\phi(\alpha) = \ell \beta + t$ for some $\ell$ and some torsion element $t\in H_2(X_i,Y)$. But $\alpha \cdot \alpha = \phi(\alpha)\cdot\alpha = \ell \beta \cdot \alpha = \ell$, and so $\ell=2k$ must be an even number, since $Q_{X_i}$ is an even intersection form (because $X_i$ is spin). Since the torsion in $H_2(X_i,Y)$ has odd order, the order of $t$ is an odd number $d$ (if $t=0$, $d=1$). It is easy to see that the element $\bar x = dk\beta$ is \emph{not} in the image of $\phi$, while $2\bar x = \phi(d\alpha)$ is; that is, $\bar x$ represents a nonzero element in $\operatorname{coker} \phi$ such that $2\bar x = 0$. But this contradicts the fact that $\operatorname{coker}\phi$ is a subgroup of $H_1(Y)$, which has odd order by assumption. \end{proof} The next theorem is a partial converse to Theorem \ref{t:oddH1}, and both of these theorems will be crucial in understanding which lens spaces $L(p,q)$ with odd $p$ have embedding number 1 (see Section \ref{lens}). \begin{figure} \caption{A rational homology ball with a single 1-handle and a single 2-handle.} \label{f:rational ball} \end{figure} \begin{theorem}\label{t:one1-handle} Let $Y$ be a rational homology sphere such that $H_1(Y)$ has odd order. If $Y$ bounds a rational homology ball with only a single $1$-handle and a single $2$-handle, then $\varepsilon(Y) \leq 1$. \end{theorem} \begin{remark} Note that if $Y$ is an \emph{integral} homology sphere, the argument used to prove \cite[Theorem 2.13]{BB} implies that actually $Y$ embeds in $S^4$, i.e. $\varepsilon(Y)=0$. \end{remark} \begin{proof} We need to show that $Y$ embeds in $S^2 \times S^2$. Let $B$ be a rational homology ball with a single 1-handle and a single 2-handle, such that $\partial B= Y$. Then $B$ has a handle diagram as in Figure \ref{f:rational ball}, where there are $n$ strands running through the dotted circle, and the box labeled $D$ represents some $n$-tangle filling which results in a single attaching circle for a 2-handle with framing $m$. Since $\partial B=Y$, the 2-handle attaching circle has an odd linking number with the dotted circle (because if we surger the 1-handle to a 0-framed 2-handle then the intersection form of the resulting 4-manifold will present $H_1(Y)$). Note that this implies that $n$ (the total number of strands) is odd as well. Let $\gamma$ denote the attaching circle for the 2-handle. Now first consider the case when the framing $m$ is even.
Then we attach two additional 2-handles to $B$ along 0-framed meridians of $\gamma$ and the dotted circle, as in the left-hand side of Figure \ref{f:ballsplus}. We can then slide $\gamma$ off the dotted circle by sliding over the 0-framed meridian of the dotted circle, and then the 2-handle attached to the meridian cancels the 1-handle. What remains after the cancellation is a 2-handle attached to $\gamma$ with framing $m$, and another 2-handle attached to a 0-framed meridian of $\gamma$. As in the proof of Theorem \ref{all embed}, we can slide over this meridian some number of times to realize a 0-framed Hopf pair, and then cap off with a 4-handle to obtain $S^2 \mathfrak{t}imes S^2$. \begin{figure} \caption{Adding 2-handles to the rational homology ball.} \label{f:ballsplus} \end{figure} When $m$ is odd we again add 2-handles along meridians of $\gamma$ and the dotted circle, but this time we use framing 1 with the meridian of the dotted circle as in the right-hand side of Figure \ref{f:ballsplus}. When we slide $\gamma$ over the 1-framed meridian we increase $m$ by one and $\gamma$ now links this meridian once (see the left-hand side of Figure \ref{f:ballslide}). By sliding the 1-framed meridian over the 0-framed meridian we can unlink $\gamma$ from the 1-framed meridian, and we have reduced to the starting position except $m$ has been increased by one and $\gamma$ runs one fewer time through the dotted circle (see the right-hand side of Figure \ref{f:ballslide}). \begin{figure} \caption{Sliding $\gamma$ off the dotted circle.} \label{f:ballslide} \end{figure} We now repeat this combination of handle slides $n$ times to slide $\gamma$ completely off the dotted circle, and then the 1-framed meridian cancels the 1-handle. What remains is a 2-handle attached to $\gamma$ with framing $m+n$, and the 2-handle attached to the 0-framed meridian. Since $n$ is odd, $m+n$ is even, and as before we can obtain $S^2 \mathfrak{t}imes S^2$ by capping off with a 4-handle. \end{proof} \begin{subsection}{Surgery on knots} Let $S^3_{p/q}(K)$ denote the 3-manifold obtained by $p/q$-Dehn surgery on a knot $K$ in $S^3$. Here we prove some simple facts about embedding numbers for 3-manifolds obtained by surgery on knots. \begin{proposition}\label{surgery prop} Let $K$ be a knot in $S^3$. \begin{enumerate} \item $\varepsilon(S^3_{p/q}(K)) \geq 1$ for all $|p| >1$ and $q \neq 0$. \item $\varepsilon(S^3_{2n}(K)) = 1$ for all nonzero $n$. \item $\varepsilon(S^3_{2n+1}(K)) > 1$ if $2n+1$ is not a square. \item $\varepsilon(S^3_{1/{2n}}(K)) \leq 2$ for all nonzero $n$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item We must show that $S^3_{p/q}(K)$ does not embed in $S^4$. If a rational homology 3-sphere $Y$ embeds in $S^4$, then $H_1(Y) \colon\thinspaceng G \oplus G$ for some torsion group $G$~\cite{Gil-Liv}. Since $H_1(S^3_{p/q}(K))\colon\thinspaceng \mathbb{Z}/p\mathbb{Z}$, $S^3_{p/q}(K)$ does not embed in $S^4$. \item This follows from Theorem \ref{all embed} and Part (1). \item If $\varepsilon(S^3_{2n+1}(K)) \leq 1$, then $S^3_{2n+1}(K)$ embeds in $S^2\mathfrak{t}imes S^2$. By Theorem \ref{t:oddH1} $S^3_{2n+1}(K)$ must bound a rational homology ball, and it is well-known that if a rational homology sphere $Y$ bounds a rational homology ball then the order of $H_1(Y)$ is a square (see, for instance, \cite[Proposition 2.2]{AG}). 
\item By the reverse slam dunk move illustrated in Figure \ref{f:slamdunk} (see~\cite[Section 5.3]{GS} for a discussion of the slam dunk) with $m=2n$, we can realize $S^3_{1/{2n}}(K)$ as integral surgery on a 2-component link where the coefficients are even. Then by Theorem \ref{all embed} $\varepsilon(S^3_{1/{2n}}(K)) \leq 2$.\qedhere \end{enumerate} \end{proof} \begin{figure} \caption{The reverse slam dunk move.} \label{f:slamdunk} \end{figure} \begin{remark} It follows from (4) above and Proposition \ref{Zspheres} that if $\varepsilon(S^3_{1/{2n}}(K)) \neq 2$, then $S^3_{1/{2n}}(K)$ bounds an integral homology ball, and so all integral homology cobordism invariants (for example, the Rokhlin invariant and the Heegaard Floer correction term) must vanish for this integral homology sphere. \end{remark} Proposition \ref{surgery prop}(3) suggests that most odd surgeries on a knot will have embedding number larger than 1. However, in the next example we show that this is not always the case. \begin{example} We can show that $\varepsilon(S^3_9 (T_{2,3})) = 1$ by realizing $S^3_9 (T_{2,3})$ as the boundary of a rational homology ball with a single 1-handle and a single 2-handle, and then applying Theorem \ref{t:one1-handle}. This is demonstrated in Figure \ref{f:trefoil}. We blow up to obtain the second picture, which we then think of as a 4-dimensional 2-handlebody. Since the 0-framed 2-handle is attached along the unknot, we can surger it to a 1-handle in dotted circle notation to get the third picture. In the third picture we see $S^3_9 (T_{2,3})$ as the boundary of the required rational homology ball. Note that the same argument works for $S^3_{d^2}(T_{d-1,d})$ and $S^3_{d^2}(T_{d,d+1})$ for each odd $d$. \end{example} \begin{figure} \caption{Realizing $S^3_9 (T_{2,3})$ as the boundary of a rational homology ball with a single 1-handle and a single 2-handle.} \label{f:trefoil} \end{figure} \end{subsection} \begin{subsection}{Branched double covers} We finish this section by relating the embedding numbers of branched double covers to classical knot invariants. Given a knot $K$ in $S^3$, let $\Sigma(K)$ denote the double cover of $S^3$ branched over $K$. Furthermore, let $g(K)$ denote the Seifert genus of the knot and $u(K)$ denote the unknotting number. \begin{proposition}\label{DBC} For a knot $K$ in $S^3$, let $m$ denote the minimum of $g(K)$ and $u(K)$. Then $\varepsilon(\Sigma(K))\le 2m$. \end{proposition} \begin{proof} It is a standard fact that $K$ bounds a surface in $B^4$, built with a single 0-handle and $2m$ 1-handles. (Note that this is not the case if we replace the Seifert genus with the slice genus.) Now Akbulut and Kirby~\cite{AK} gave an algorithm to build a handle decomposition of the branched double cover of $B^4$ over such a surface, and the resulting handle decomposition consists of a single 0-handle and $2m$ 2-handles, all with even framing. Since the boundary of this 4-manifold is $\Sigma(K)$, Theorem \ref{all embed} then finishes the proof. \end{proof} \end{subsection} \end{section} \begin{section}{Brieskorn spheres}\label{Brieskorn} We now consider the embedding numbers of a specific class of 3-manifolds. Recall that the Seifert fibered manifold $\Sigma(p,q,r)$ is the boundary of the Milnor fiber $M_c(p,q,r)$; this is a spin 4-manifold that can be constructed by taking the $p$-fold cover of $B^4$ branched over the pushed-in Seifert surface of minimal genus of the $T_{q,r}$ torus link. Furthermore, $M_c(p,q,r)$ admits a handle decomposition with one 0-handle and $(p-1)(q-1)(r-1)$ 2-handles, all with even framing (see~\cite{AK} and \cite[Section 6.3]{GS}).
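For instance, for $(p,q,r)=(2,3,5)$ this count is $1\cdot 2\cdot 4 = 8$: the Milnor fiber $M_c(2,3,5)$ is the $E_8$-manifold, obtained by plumbing according to the $E_8$ graph, with eight $2$-handles all attached with framing $-2$.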
Therefore doubling $M_c(p,q,r)$ results in $\#_{(p-1)(q-1)(r-1)} S^2 \times S^2$, and we get the following upper bound for the embedding numbers of these Seifert fibered manifolds. \begin{proposition}\label{SFSbound} For the Seifert fibered manifold $\Sigma (p,q,r)$, we have $\varepsilon (\Sigma (p,q,r)) \leq (p-1)(q-1)(r-1)$. \end{proposition} Note that if $p$, $q$, and $r$ are relatively prime then $\Sigma (p,q,r)$ is a Brieskorn homology sphere. While Proposition \ref{SFSbound} gives an upper bound for the embedding numbers of Brieskorn spheres, in many cases we can improve on this bound or even give an exact computation. \begin{proposition} If $p$, $q$, and $r$ are relatively prime, all odd, have absolute value greater than $1$, and satisfy $pq+pr+qr= -1$, then $\varepsilon (\Sigma (|p|, |q|, |r|)) = 2$. \end{proposition} \begin{proof} If $\varepsilon (\Sigma (|p|, |q|, |r|)) < 2$, then by Proposition \ref{Zspheres}, $\Sigma (|p|, |q|, |r|)$ bounds an integral homology ball. However, Fintushel and Stern \cite[Theorem 10.7]{FS1} showed that these manifolds never bound integral homology balls. Therefore $\varepsilon (\Sigma (|p|, |q|, |r|)) \geq 2$. Now $\Sigma (|p|, |q|, |r|)$ admits a surgery diagram with a 0-framed unknot and three meridians with framings $\pm p$, $\pm q$, and $\pm r$ (see \cite[Section 1.1.4]{Sav}). Sliding two meridians over the third allows us to slam dunk (see Figure \ref{f:slamdunk}) the 0-framed unknot against the third meridian; this eliminates the 0-framed unknot and turns the third meridian into an $\infty$-framed curve, which can also be removed from the diagram. The result is a surgery diagram with two even-framed components. This shows that $\varepsilon (\Sigma (|p|, |q|, |r|)) = 2$. \end{proof} \begin{example} For odd $p$, each member of the family $\Sigma (p-2, p, (p^2-2p-1)/2)$ of Brieskorn spheres satisfies $\varepsilon (\Sigma (p-2, p, (p^2-2p-1)/2)) = 2$. \end{example} \begin{proposition}\label{Poincare} For the Poincar\'e sphere $\Sigma(2,3,5)$, we have $\varepsilon(\Sigma(2,3,5))=8$. \end{proposition} \begin{proof} Since $\Sigma(2,3,5)$ is the boundary of the $E_8$ plumbing, we immediately have $\varepsilon(\Sigma(2,3,5))\leq 8$ and that the Rokhlin invariant $\mu(\Sigma(2,3,5))$ is nonzero. Now assume that $\Sigma(2,3,5)$ embeds in $\#_m S^2 \times S^2$ for $m<8$, splitting $\#_m S^2 \times S^2$ into two spin pieces $U$ and $V$. Then by the classification of indefinite, unimodular even forms and the fact that there are no definite, unimodular even forms of rank less than 8, both of the intersection forms $Q_U$ and $Q_V$ must have signature 0. This contradicts the fact that $\Sigma(2,3,5)$ has nontrivial Rokhlin invariant, so $\varepsilon(\Sigma(2,3,5))=8$. \end{proof} Note that this proof actually shows that any integral homology sphere $Y$ with nontrivial Rokhlin invariant has $\varepsilon(Y)\geq8$. The Poincar\'e sphere is one of a collection of Brieskorn spheres obtained by surgery on torus knots, namely $\Sigma(p,q,pqn \pm 1) = S^3_{-1/n}(T_{p,\pm q})$ \cite[Example 1.2]{Sav}. Note that when $n$ is even the embedding numbers are less than or equal to $2$ by Proposition \ref{surgery prop}(4). When $n$ is odd, the situation is more difficult. \begin{proposition}\label{Brieskorn_comp} For any \emph{odd} integer $n>0$ we have: \begin{itemize} \item $\varepsilon(\Sigma(2,3,6n+1)) = 10$.
\end{itemize} More generally, for any \emph{odd} integer $n>0$ and \emph{even} integer $p>0$, we have the following bound: \begin{itemize} \item $\varepsilon(\Sigma(p,p+1,p(p+1)n + 1)) \leq (p+1)^2 + 1$. \end{itemize} \end{proposition} \begin{figure} \caption{Surgery diagrams for $\Sigma(p,p+1,p(p+1)n + 1)$.} \label{tks} \end{figure} \begin{proof} We start by proving the bound $\varepsilon(\Sigma(p,p+1,p(p+1)n + 1)) \leq (p+1)^2 + 1$. Now $\Sigma(p,p+1,p(p+1)n + 1) = S^3_{-1/n}(T_{p,p+1})$, and therefore we see a surgery diagram for these manifolds in the first picture of Figure \ref{tks}, where we have already performed a reverse slam dunk move (see Figure \ref{f:slamdunk}). Note that we draw $T_{p,p+1}$ so that there are $p+1$ strands, and hence we need to compensate for the full right-handed twist in the $p+1$ strands by adding a $-\frac{1}{p+1}$-twist. Now the goal is to transform this surgery description into one where all the surgery coefficients are even (see~\cite{Kap} for approaches to this type of problem). First we blow up the $p+1$ strands as in the second picture of Figure \ref{tks}. This changes the framing on our knot to $-(p+1)^2$, but now the knot is unknotted. We blow up the knot $(p+1)^2-1$ times as in the third picture of Figure \ref{tks}. Now all the other components have odd framing, and have odd linking number with the knot (here we use the fact that $p+1$ is odd). Hence we can blow down the knot to obtain a surgery diagram with $(p+1)^2 + 1$ even-framed components, and thus we achieve the upper bound on the embedding number by applying Theorem~\ref{all embed} as usual. Now we consider the family $\Sigma(2,3,6n+1)$, for $n$ odd. Applying the upper bound we just obtained to the case $p=2$, we get $\varepsilon(\Sigma(2,3,6n+1)) \leq 10$. As in Proposition \ref{Poincare}, the fact that each member of this family has nontrivial Rokhlin invariant implies that $\varepsilon(\Sigma(2,3,6n+1)) \geq 8$. If for some odd $n$ we have $\varepsilon(\Sigma(2,3,6n+1)) = 8$ or 9, then $\Sigma(2,3,6n+1)$ embeds into $\#_9 S^2 \times S^2$, splitting it into two spin pieces $U$ and $V$. Then the intersection forms $Q_U$ and $Q_V$ are unimodular, even, and have signature $\pm8$. It then follows by the total rank that one form must be $\pm E_8$ and the other $\mp E_8 \oplus H$. Hence $\Sigma(2,3,6n+1)$ bounds a non-standard even definite form, and we claim that this is impossible. Indeed, Ozsv\'ath and Szab\'o computed the Heegaard Floer correction term $d(\Sigma(2,3,6n+1))=0$ \cite[Section 8.1]{OzsvathSzabo-absolutely}. Suppose $\Sigma(2,3,6n+1)$ bounds a negative definite spin 4-manifold $W$ (if it bounds a positive definite spin 4-manifold we can reverse orientations and apply the same argument); \cite[Theorem 9.6]{OzsvathSzabo-absolutely} reads: \[ c_1(\mathfrak{s})^2 + b_2(W) \le 4d(\Sigma(2,3,6n+1)) = 0. \] Since $W$ is even, $0$ is a characteristic vector in $H^2(W)$, and therefore it is the cohomology class of a spin$^c$ structure $\mathfrak{s}_0$ on $W$, and setting $\mathfrak{s}=\mathfrak{s}_0$ in the inequality above shows $b_2(W)\le 0$.
\end{itemize} Then $\varepsilon(M_n) \rightarrow \infty$ as $n \rightarrow \infty$. \end{proposition} \begin{proof} Tange~\cite{Tange} showed that $M_n$ bounds a spin, definite 4-manifold $X_n$ with $b_2(X_n)= 8n$. Now, by way of contradiction, suppose there is an $m >0$ such that $\varepsilon(M_n) \leq m$ for all $n$. In particular we can choose $a > m$ such that $8a + 2m < 10a - \frac{10}{4}m +2$, and $M_a$ embeds in $\#_m S^2 \times S^2$, splitting $\#_m S^2 \times S^2$ into two spin pieces $U$ and $V$, say, with $\partial V = \overline{M_a}$. Then $Z:= X_a \cup_{M_a} V$ is a closed, spin 4-manifold, with $$b_2(Z) \leq 8a + 2m < 10a - \frac{10}{4}m +2 = \frac{10}{8}|8a-2m| + 2 \leq \frac{10}{8}|\sigma(Z)|+2.$$ But this contradicts the 10/8-Theorem~\cite{Furuta}. \end{proof} \end{section} \begin{section}{Lens spaces}\label{lens} In this section we study the embedding numbers of lens spaces. We give partial results for lens spaces with small embedding number, and we also study the family $L(n,1)$. In what follows, $L(p,q)$ will always denote the 3-manifold obtained as $-p/q$-surgery along the unknot, and we will assume $p>1$ (i.e. $L(p,q)$ is not the 3-sphere nor $S^1 \times S^2$) and $\gcd(p,q)=1$. Recall that $L(p,q)$ is the double cover of $S^3$ branched over a 2-bridge link, which we denote by $K(p,q)$; namely, $L(p,q) = \Sigma(K(p,q))$. Moreover, $K(p,q)$ is a knot if $p$ is odd, and a 2-component link if $p$ is even. Recall also that $L(p,q)$ is (orientation-preserving) diffeomorphic to $L(p,q')$ if and only if $qq' \equiv 1 \pmod p$, and that $\overline{L(p,q)}= L(p,p-q)$. We start by giving an upper bound. \begin{proposition}\label{bad bound 2} $\varepsilon(L(p,q)) \le p-1$. \end{proposition} \begin{proof} Consider the linear plumbing $P$ associated to the continued fraction expansion $p/q = [a_1,\dots,a_n]^-$, namely: \[ \xygraph{ !{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::} !~-{@{-}@[|(2.5)]} !{(0,0) }*+{\bullet}="x" !{(1.5,0) }*+{\dots}="a1" !{(3,0) }*+{\bullet}="a2" !{(0,0.4) }*+{-a_1} !{(3,0.4) }*+{-a_n} "x"-"a1" "a2"-"a1" } \] where each $a_i\geq2$. This represents surgery along a framed link $L$ that is a chain of unknots. Suppose $L'\subset L$ is a characteristic sublink (see \cite[Section 5.7]{GS}) with $\ell'$ components, indexed by the set $I'\subset \{1,\dots, n\}$. For each $i\in I'$, we blow up the $i$-th component of $L$ $a_i-1$ times, and then we blow it down. The resulting link $L''$ has even framing on each component, and presents $L(p,q)$ as the boundary of a spin 2-handlebody $W$. We claim that $b_2(W)\le p-1$. Indeed, it is enough to count the number of components $\ell''$ of $L''$, namely $\ell'' = n+\sum_{i\in I'} (a_i-1) - \ell'$. Since $a_i \ge 2$ for each $i$ we have: \[ \ell'' = n - \ell' +\sum_{i\in I'} (a_i-1) = \sum_{i \not\in I'} 1 + \sum_{i \in I'} (a_i-1) \le \sum_{i=1}^n (a_i-1). \] We now prove by induction on $n$ that $\sum_{i=1}^n (a_i-1) \le p-1$. This is obviously true when $n=1$, since in that case $a_1 = p$. Suppose now $n>1$, and let $p = kq-r$ with $0<r<q$. Then \[ \sum_{i=1}^n (a_i-1) = a_1-1 + \sum_{i=2}^n(a_i-1) = k-1+\sum_{i=2}^n(a_i-1) \le k-1+q-1, \] where the last inequality follows from the inductive hypothesis, and the observation that $q/r = [a_2,\dots,a_n]^-$.
Now $k-1+q-1 \le p-1$, since \[ k-1+q \le p = kq-r \Longleftrightarrow r\le (k-1)(q-1), \] which is trivially true since $k\ge 2$ and $q-1\ge r$ by assumption. \end{proof} \begin{remark} In fact, Neumann and Raymond show that, up to reversing the orientation, every lens space bounds a spin linear plumbing of spheres~\cite[Lemma 6.3]{NeumannRaymond}; furthermore, an easy induction shows that the number of spheres in the plumbing is at most $p-1$, giving an alternative proof of the proposition above. \end{remark} Now Proposition~\ref{surgery prop}(1) shows that $\varepsilon(L(p,q)) \ge 1$. Moreover, it has been observed by Rasmussen that every lens space in Lisca's list\footnote{As observed by several authors, the case $\gcd(m,k)=2$ should be included in type (1) in the definition of $\mathcal{R}$ in that paper.}~\cite{Lisca-ribbon} bounds a rational homology ball that admits a handle decomposition with one handle of each index 0, 1, and 2 (see~\cite{LecuonaBaker}). Combining this directly with Theorem~\ref{t:one1-handle} and Theorem~\ref{t:oddH1} gives the following result. \begin{theorem}\label{lens space ball} A lens space $L(p,q)$ with $p$ odd has $\varepsilon(L(p,q)) = 1$ if and only if it bounds a rational homology ball, i.e. if and only if it belongs to Lisca's list. \end{theorem} This naturally leads to considering which lens spaces with $p$ even have embedding number 1. For example, $L(pq\pm1, \mp q^2)$ with $pq$ odd has embedding number 1, since it is $(pq\pm1)$-surgery along $T_{p,q}$~\cite{Moser}. More generally, the Berge conjecture provides more lens spaces for which the embedding number is 1: the classification of lens spaces that arise as surgery along knots in the 3-sphere has been solved by Greene~\cite{Greene}. \begin{question} Are all lens spaces with embedding number 1 either in Lisca's list or in Greene's list? \end{question} We begin the study of this problem by generalizing Theorem~\ref{t:oddH1} to some rational homology spheres that arise as branched double covers (and in particular to lens spaces). \begin{proposition}\label{p:spin-link} Let $L$ be a link in $S^3$ with $\ell$ components, whose branched double cover $\Sigma(L)$ is a rational homology sphere. If $\Sigma(L)$ bounds a spin $4$-manifold $W$, then $b_2(W) \equiv \ell+1 \pmod 2$. \end{proposition} First of all, let us observe that the proposition does indeed generalize Theorem~\ref{t:oddH1} in the case of lens spaces. Indeed, when $p$ is odd, $L(p,q)$ is the branched double cover of a knot (i.e. $\ell = 1$ in the proposition above), and every embedding of $L(p,q)$ in $S^2\times S^2$ splits the $4$-manifold into two connected components, each with even $b_2$ (namely, $0$ and $2$). On the other hand, when $p$ is even, we have the following corollaries. \begin{corollary} If $p$ is even, $L(p,q)$ bounds no spin rational homology ball. \end{corollary} \begin{proof} Indeed, when $p$ is even, $L(p,q)$ is the double cover of $S^3$ branched over the 2-component link $K(p,q)$, and therefore every spin filling of $L(p,q)$ has odd $b_2$. \end{proof} \begin{corollary} If $p$ is even and $\varepsilon(L(p,q)) = 1$, then every embedding of $L(p,q)$ into $S^2\times S^2$ splits $S^2\times S^2$ into two spin $4$-manifolds $X_1$, $X_2$ with $b_2(X_i)=1$ and $b_1(X_i) = b_3(X_i) = 0$. \end{corollary} \begin{proof} As remarked above, $L(p,q)$ is the branched double cover of a $2$-component link, hence $\ell = 2$ in the proposition above, and therefore $b_2(X_i)$ is odd.
Looking at the Mayer--Vietoris sequence, we have that $b_1(X_i) = b_3(X_i) = 0$; since $b_2(S^2\mathfrak{t}imes S^2) = 2$ and the $b_2(X_i)$ are odd, we have $b_2(X_1) = b_2(X_2) = 1$, hence the corollary follows. \end{proof} \begin{remark} The classification of which lens spaces bound a $4$-manifold $X_1$ as in the corollary above is still an open question. Observe also that, if we restrict our attention to lens spaces $L(p,q)$ with squarefree, even $p$, there is a further restriction on the homology of $X_1$; namely, $H_1(X_1) = H_3(X_1) = 0$, and $H_2(X_1) = \mathbb{Z}$. However, this restriction is not sufficient to allow for a complete classification, either. \end{remark} \begin{proof}[Proof of Proposition~\ref{p:spin-link}] By restriction $\Sigma(L)$ inherits a spin structure $\mathfrak{t}$ from $W$. Turaev~\cite[Section 2.2]{Turaev} has shown that to each orientation $o$ on $L$ one can associate a spin structure $\mathfrak{t}_o$ on $\Sigma(L)$, and Donald and Owens~\cite[Proposition 3.3]{DonaldOwens} gave the following interpretation of $\mathfrak{t}_o$. Fix a Seifert surface for the oriented link $(L,o)$, and push it into the 4-ball, obtaining a surface $F_o$; the branched double cover $\Sigma(B^4,F_o)$ admits a spin structure $\mathfrak{s}_{F_o}$ (the pull-back of the spin structure on $B^4$), and $\mathfrak{t}_o$ is the restriction of $\mathfrak{s}_{F_o}$ to $\Sigma(L)$. Recall that $\mathfrak{s}igma(L,o)$ is the signature of the double cover $\Sigma(B^4,F_o)$. In the case at hand, since $\Sigma(L)$ is a rational homology sphere, $\det L\neq0$, and the Seifert form of $F_o$ is nondegenerate for every choice of orientation $o$ and every choice of Seifert surface $F_o$. It follows that $\mathfrak{s}igma(L,o) \equiv b_1(F_o) \pmod 2$, and since $b_1(F_o) \equiv \ell+1 \pmod 2$, $\mathfrak{s}igma(L,o) \equiv \ell+1 \pmod 2$. Summing up, for each spin structure on $\Sigma(L)$ we constructed a spin filling, whose signature is congruent to $\ell+1$ modulo 2. In particular, we have a spin filling $W_\mathfrak{t}$ of $(Y,\mathfrak{t})$. We can glue $W_\mathfrak{t}$ and $-W$ along $Y$, and we obtain a closed spin $4$-manifold $X$. Thus, since closed spin 4-manifolds have even signature, $\mathfrak{s}igma(W_\mathfrak{t}) - \mathfrak{s}igma(W) = \mathfrak{s}igma(X) \equiv 0 \pmod 2$, from which the result follows. \end{proof} \begin{subsection}{The family $L(n,1)$} We now focus on lens spaces of the form $L(n,1)$. For convenience we work with the opposite orientation, $\overline{L(n,1)}=L(n,n-1)$, and so let $L_n$ denote the lens space $L(n,n-1)$. According to Proposition~\ref{surgery prop}, $L_{2n}$ has embedding number 1. However, we can refine the notion of the embedding number by requiring that the restriction of the unique spin structure on $\#_nS^2\mathfrak{t}imes S^2$ induce a given spin structure on the 3-manifold. In particular, the embedding of $L_{2n}$ in $S^2\mathfrak{t}imes S^2$ realizes the restriction of the unique spin structure on the 2-handlebody obtained by attaching a 2-handle to $B^4$ along the unknot, with framing $2n$. We can ask what happens with the other spin structure on $L_{2n}$, which is the restriction of the spin structure on the plumbing $P_{2n}$ of a chain of $2n-1$ spheres with Euler number $-2$. Observe also that $L_{2n+1}$ is the boundary of the plumbing $P_{2n+1}$, and restricting the spin structure on $P_{2n+1}$ gives the unique spin structure on $L_{2n+1}$. 
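For concreteness, and since these facts are used repeatedly below, we record the intersection form of the plumbing $P_n$: with respect to the basis given by the $n-1$ spheres of the chain it is the $(n-1)\times(n-1)$ tridiagonal matrix
\[
Q_{P_n} =
\begin{pmatrix}
-2 & 1 & & \\
1 & -2 & \ddots & \\
 & \ddots & \ddots & 1 \\
 & & 1 & -2
\end{pmatrix},
\]
which is negative definite, has determinant $(-1)^{n-1}n$, and has signature $\sigma(P_n) = 1-n$; in particular $|\det Q_{P_n}| = n = |H_1(L_n)|$, as it must be.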
For the remainder of this discussion we consider $L_n$, implicitly equipped with the spin structure described above. Proposition~\ref{bad bound 2} states that $\varepsilon(L_n) \leq n-1$. Indeed, we can see this directly since $L_n$ is the boundary of the linear plumbing $P_n$ (the induced surgery diagram is a chain of $n-1$ unknotted components, all with framing $-2$). We will see below that this upper bound can be improved upon in many cases. But first we use the 10/8-Theorem to show that the embedding numbers of these lens spaces grow arbitrarily large. Recall from the introduction that Edmonds~\cite{Edm} proved that every lens space embeds topologically locally flatly in $\#_4 S^2 \mathfrak{t}imes S^2$, and hence we see that these embedding numbers reflect the sharp contrast between the smooth and topological categories of 4-manifolds. In what follows, we will repeatedly use Novikov additivity. More precisely, whenever we have a splitting of $\#_nS^2\mathfrak{t}imes S^2$ into $X$ and $Y$ along a 3-manifold, we have $\mathfrak{s}igma(X) + \mathfrak{s}igma(Y) = \mathfrak{s}igma(\#_nS^2\mathfrak{t}imes S^2) = 0$; that is, $\mathfrak{s}igma(X) = -\mathfrak{s}igma(Y)$. \begin{proposition}\label{bad lower bound} $\varepsilon(L_n) \geq \frac19 (n+7)$. Additionally, if the $11/8$-Conjecture is true, then $\varepsilon(L_n) \geq \frac{3}{19}(n-1)$. \end{proposition} \begin{proof} As noted above, $L_n = \partial P_n$, where the latter is a spin $4$-manifold with $\mathfrak{s}igma(P_n) = 1-n$. Suppose $\varepsilon(L_n) = m$, and fix an embedding of $L_n$ into $\#_{m}S^2\mathfrak{t}imes S^2$. Note we can assume that $m\leq n-1$ by Proposition \ref{bad bound 2}. Let $Z$ be the connected component of the complement of this embedding such that $\partial Z = -L_n$. Consider the closed, spin $4$-manifold $X := P_n \cup_{L_n} Z$: since $L_n$ is a rational homology sphere, $b_2(X) = n-1+b_2(Z)$ and $\mathfrak{s}igma(X) = 1-n+\mathfrak{s}igma(Z)$. Observe that, by definition, $\mathfrak{s}igma(Z) \le b_2(Z)$. By the 10/8-Theorem~\cite{Furuta}, \begin{equation}\label{e:furuta} n-1 + b_2(Z) = b_2(X) \ge \frac54|\mathfrak{s}igma(X)| + 2 = \frac54|1-n+\mathfrak{s}igma(Z)| + 2= \frac54(n-1-\mathfrak{s}igma(Z))+2, \end{equation} and in particular $7-4n-4b_2(Z) \le 5\mathfrak{s}igma(Z)-5n \le 5b_2(Z)-5n$, from which we obtain $b_2(Z) \geq \frac19n +\frac79$. Let $Z' = \#_{m}S^2\mathfrak{t}imes S^2 \mathfrak{s}etminus Z$. By Novikov additivity, $\mathfrak{s}igma(Z') = -\mathfrak{s}igma(Z)$, and, by gluing $-P_n$ onto $Z'$, the same manipulations as above give the inequality \[ m = \frac12 b_2(\#_{m}S^2\mathfrak{t}imes S^2) = \frac12(b_2(Z) + b_2(Z')) \geq \frac19n+\frac79, \] as desired. Assuming the 11/8-Conjecture, instead of~\eqref{e:furuta} we have \[ n-1 + b_2(Z) = b_2(X) \ge \frac{11}{8}|\mathfrak{s}igma(X)| = \frac{11}{8}|1-n+\mathfrak{s}igma(Z)|, \] from which one readily obtains $3n-8b_2(Z) \le 11\mathfrak{s}igma(Z)+3$, from which the desired bound follows as before. \end{proof} Now we show two relations between $\varepsilon(L_n)$: one is a form of subadditivity, and the other asserts that $\varepsilon(L_{n-1})$ gives tight restrictions on $\varepsilon(L_n)$; in particular, the values can differ by at most 1. \begin{theorem}\label{steps} We have: \begin{enumerate} \item $\varepsilon(L_{m+n}) \le \varepsilon(L_m)+\varepsilon(L_n)+1$; \item $\varepsilon(L_{n-1})-1 \leq \varepsilon(L_{n}) \leq \varepsilon(L_{n-1}) + 1$. 
\end{enumerate} \end{theorem} We first pause and observe a consequence of (1) in the theorem above. \begin{corollary} The sequence $\varepsilon(L_n)/n$ converges to a limit $\varepsilon_L\in \mathbb{R}$. Moreover, $\varepsilon_L\ge \frac19$, and, if the $11/8$-Conjecture holds, $\varepsilon_L \ge \frac3{19}$. \end{corollary} \begin{proof} The sequence $a_n = \varepsilon(L_n)+1$ is subadditive by Theorem~\ref{steps}(1), above. Therefore, by Fekete's lemma~\cite{Fekete}, we have $\inf_n \frac{a_n}n = \lim_n \frac{a_n}n$. Since $a_n = \varepsilon(L_n) + 1$, we have $\lim_n \frac{\varepsilon(L_n)}n= \lim_n \frac{a_n}n$, and in particular the sequence ${\varepsilon(L_n)}/n$ is convergent. The inequalities follow directly from Proposition~\ref{bad lower bound} above. \end{proof} \begin{remark} Steven Sivek pointed out to the authors that we can obtain upper bounds for $\varepsilon_L$ as follows. Suppose $\varepsilon(L_n) =e$ for some $n$. Then by induction one can show that $\varepsilon(L_{2^k n})\leq2^k(e+1)-1$, and hence $\varepsilon_L \leq \frac{1}{n}(e+1)$. For example, below we will show that $\varepsilon(L_{19})=4$, and so this implies that $\varepsilon_L \leq \frac{5}{19}$. \end{remark} \begin{proof}[Proof of Theorem~\ref{steps}] \begin{enumerate} \item By Proposition~\ref{p:generalproperties}, it is enough to prove that $\varepsilon(L_{m+n}) \le \varepsilon(L_m\#L_n)+1$. Suppose therefore that $\varepsilon(L_m\# L_n) = a$, i.e. that $\#_a S^2\mathfrak{t}imes S^2 = X \cup_{L_m\# L_n} Y$. Consider the cobordism $W_0$ in Figure~\ref{cobordism0}; this is obtained from $(L_m\#L_n)\mathfrak{t}imes I$ (represented by the disjoint plumbings of $m-1$ and $n-1$ components with framings $-2$ in brackets) by adding a $-2$-framed 2-handle $h$ joining two ends of $P_m$ and $P_n$, and a 2-handle $b$ attached along a meridian of the attaching curve of $h$ with framing 0. The upper boundary of $W_0$ is still $L_m\# L_n$. Notice that $W_0$ contains $L_{m+n}$, since it contains the boundary of the plumbing of a chain of $-2$-framed unknots of length $m+n-1$. In particular, the closed 4-manifold $Z_0 = X\cup_{L_m\# L_n} W_0\cup_{L_m\# L_n} Y$ contains a copy of $L_{m+n}$. We will show in Lemma~\ref{l:trivialcob} that $Z_0$ is diffeomorphic to $\#_{a+1} S^2\mathfrak{t}imes S^2$, and hence we obtain $\varepsilon(L_{m+n}) \le a+1$, as desired. \item Suppose that $\varepsilon(L_n)=a$, so that $\#_a S^2 \mathfrak{t}imes S^2 = X \cup_{L_{n}} Y$. Consider the cobordisms $W_1$ in Figure \ref{cobordism} and $W_2$ in Figure \ref{cobordism2}, each consisting of two 2-handles (labelled $h,b$) added to $L_n \mathfrak{t}imes I$. Notice that in both cases the upper boundary of the cobordism is still $L_n$. Hence we can form new spin 4-manifolds $Z_i := X \cup W_i \cup Y$. Now observe that $L_{n-1}$ is embedded in $W_1$ and $L_{n+1}$ is embedded in $W_2$ as middle levels. In both cases, just take the surgery description given by the plumbing of $-2$ spheres and add the surgery curve corresponding to $h$. For $W_2$ the claim is immediate, and for $W_1$ we must slam dunk the 0-framed curve $h$. 
Now to complete the proof we only need to show that each $Z_i$ is diffeomorphic to $\#_{a+1} S^2 \mathfrak{t}imes S^2$ (to get the right-hand inequality we reindex), and we do this in the next lemma.\qedhere \end{enumerate} \end{proof} \begin{figure} \caption{The cobordism $W_0$.} \label{cobordism0} \end{figure} \begin{figure} \caption{The cobordism $W_1$.} \label{cobordism} \end{figure} \begin{figure} \caption{The cobordism $W_2$.} \label{cobordism2} \end{figure} \begin{lemma}\label{l:trivialcob} In each case above, $Z_i$ is diffeomorphic to $\#_{a+1} S^2 \mathfrak{t}imes S^2$, for $i=0,1,2$. \end{lemma} \begin{proof} Let $L$ denote either $L_m\#L_n$ or $L_n$, depending on the respective case. In any of the three cases we can surger $b$ in $W_i$ to a 1-handle in dotted circle notation, and the resulting cobordism is the trivial cobordism $L\mathfrak{t}imes I$ since $h$ will cancel the resulting 1-handle. Therefore we obtain $W_i$ by reversing this operation, i.e. performing surgery on a copy of $S^1 \mathfrak{t}imes D^3$ in $L \mathfrak{t}imes I$. Now consider this operation occurring inside the closed manifold $X \cup (L \mathfrak{t}imes I) \cup Y =\#_a S^2\mathfrak{t}imes S^2$ (where $X$ and $Y$ come from the proof of Theorem~\ref{steps} above, and depend on which case we are in). Since this manifold is simply connected, the surgery curve $S^1 \mathfrak{t}imes \{0\}$ is null-homotopic and the effect of the surgery is to produce a connected summand with one of the two $S^2$ bundles over $S^2$~\cite[Proposition 5.2.3]{GS}. We now claim that $W_i$ is spin: in fact, $W_i$ embeds in the spin $2$-handlebody obtained by removing all brackets in Figures~\ref{cobordism0}--\ref{cobordism2} above. It follows that we must be taking a connected sum with $S^2 \mathfrak{t}imes S^2$, and we get that $Z_i= X \cup W_i \cup Y$ is diffeomorphic to $\#_{a+1} S^2 \mathfrak{t}imes S^2$, as claimed. \end{proof} We end this section with a lengthy discussion of the embedding numbers for $L_n$ with $n\leq 19$. We again emphasize that in this section we are abusing notation by using $\varepsilon(L_{2n})$ to denote the embedding number of $L_{2n}$ with respect to the spin structure that extends over $P_{2n}$. \begin{theorem}\label{t:smallLn} The embedding numbers for $L_n$ with $n\leq 19$ are given by the following table. \begin{center} \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline $n$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$ & $13$ & $14$ & $15$ & $16$ & $17$ & $18$ & $19$\\ \hline $\varepsilon(L_n)$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ & $10$ & $9$ & $8$ & $7$ & $6$ & $5$ & $4$\\ \hline \end{tabular} \end{center} \end{theorem} We will make several of these computations in detail: in particular, we will compute $\varepsilon(L_2)$, $\varepsilon(L_{12})$, and $\varepsilon(L_{19})$, and these will suffice to prove the theorem. We will also give a more explicit computation of $\varepsilon(L_{17})$. Observe that for these values of $n$, Theorem~\ref{bad lower bound} is insufficient to give the necessary lower bounds. From Proposition~\ref{bad bound 2} it follows that $\varepsilon(L_n) \leq n-1$, and for $n=3, \cdots, 11$ we claim that this cannot be improved upon. \begin{claim} $\varepsilon(L_2) = 1$. \end{claim} \begin{proof} Indeed, $L_2$ is understood to have the spin structure induced as the boundary of $P_2$, whose double is $S^2\mathfrak{t}imes S^2$. 
Moreover, since the order of $H_1(L_2)$ is not a square, $\varepsilon(L_2)\ge 1$, hence $\varepsilon(L_2) = 1$. \end{proof} \begin{claim} $\varepsilon(L_{12})=11$. \end{claim} \begin{proof} Since $L_{12}$ is the boundary of $P_{12}$, by doubling we obtain $\varepsilon(L_{12})\le 11$. If $\varepsilon(L_{12}) < 11$, then $L_{12}$ embeds in $\#_{10} S^2 \mathfrak{t}imes S^2$, splitting it into two spin pieces $U$ and $V$, say, with $\partial U =L_{12}$ and $\partial V = \overline{L_{12}}$. Since the Rokhlin invariant associated to this spin structure satisfies $\mu(L_{12}) = -11 \pmod{16}$, and $b_2(\#_{10} S^2 \mathfrak{t}imes S^2) = 20$, we must have that $\mathfrak{s}igma(U)=5$ and $\mathfrak{s}igma(V)=-5$. Furthermore, at least one of $U$ or $V$ must have $b_2 \leq 10$, and we can glue this manifold to either $P_{12}$ or $\overline{P_{12}}$ to get a spin closed 4-manifold $X$ with $b_2(X) \leq 21$ and $|\mathfrak{s}igma(X)|=16$. But then \[ b_2(X) \leq 21 < \frac{10}{8}(16) +2= \frac{10}{8}|\mathfrak{s}igma(X)| +2, \] which contradicts the 10/8-Theorem. \end{proof} Before considering $\varepsilon(L_{19})$ we will first prove directly that $\varepsilon(L_{17}) =6$, since the strategy is the same as for $L_{19}$ and the argument in this case is easier. In both cases we claim that the plumbings $P_{17}$ and $P_{19}$ embed in the $K3$ surface. For $L_{17}$ we show directly that the complement $K3\mathfrak{s}etminus P_{17}$ consists of a single 0-handle and six 2-handles (necessarily with even framing, since the $K3$ surface is spin), and so the standard argument shows that $\varepsilon(L_{17})\leq6$. However, we were unable to carry out fully the analogous argument for $L_{19}$. Instead we embed $P_{19}$ in a \emph{homotopy} $K3$ such that the complement admits a handle decomposition with a single 0-handle and four even-framed 2-handles, giving $\varepsilon(L_{19})\leq4$. The following fact is related to, though not directly relevant for, computing $\varepsilon(L_n)$. \begin{proposition}\label{sextic cover} The plumbings $P_n$ embed in the standard smooth $K3$ surface for every $n\le 19$. \end{proposition} \begin{proof} It clearly suffices to prove the statement for $n=19$. To this end, recall that there exists a complex sextic $C$ in $\mathbb{C}P^2$ with a unique singular point, whose link is the torus knot $T_{2,19}$ \cite[Theorem 5.8]{sextics}. We can smooth out the singularity of $C$ by taking a small perturbation, thus obtaining a complex curve $C'$. In a neighborhood of the singular point, this has the effect of replacing the singularity with its Milnor fiber $F$, that is isotopic to a push-off of the minimal-genus Seifert surface of $T_{2,19}$ pushed into $B^4$. By taking the double cover of $\mathbb{C}P^2$ branched over $C'$, we obtain a complex $K3$ surface, that contains the double cover of $B^4$ branched over $F$, and this is known to be a plumbing of $-2$-spheres of length $19$. Therefore, $P_{19}$ embeds in a $K3$. \end{proof} Now for $L_{17}$ we can see an embedding of $P_{17}$ explicitly in a handle diagram for the $K3$ surface. \begin{claim} $\varepsilon(L_{17})=6$. Indeed, $P_{17}$ embeds in the $K3$ surface such that the complement admits a handle decomposition with a single $0$-handle and six even-framed $2$-handles. \end{claim} \begin{figure} \caption{A handle diagram for the $K3$ surface.} \label{K3} \end{figure} \begin{proof} In \cite[Section 8.3]{GS} it is shown that Figure~\ref{K3} (plus a 4-handle) is a handle diagram for the $K3$ surface. 
Notice that in total there are sixteen $-1$-framed meridians. These can be slid, one over the other, as in Figure~\ref{chain}, to form a chain of fifteen $-2$-framed unknots, attached at the end to a $-1$-framed meridian that cancels the 1-handle. Hence we immediately see an embedding of $P_{16}$ into the $K3$ surface. To obtain $P_{17}$ we must work a little harder. If in Figure~\ref{K3} we slide the left-most 0-framed 2-handle over the left-most $-1$-framed meridian as indicated by the arrow, then after isotopy the 0-framed 2-handle becomes a $-1$-framed meridian of the dotted circle, itself with a $-2$-framed meridian (and of course linking the meridian it slid over). Now we may begin the process of sliding the meridians to create the $-2$ chain, but we gained an extra $-2$-framed 2-handle (the original $-2$-framed 2-handle on the left of the diagram). After these slides we will see $P_{17}$ embedded in the $K3$ surface, linked with two $-1$-framed meridians. Either of these meridians cancels the 1-handle after sliding the remaining 2-handles off. The result will be a handle diagram for the $K3$ surface composed of twenty-two 2-handles, a single 0- and 4-handle, and with $P_{17}$ as a sub-handlebody. Then if we take the 2-handles not in $P_{17}$ and the 4-handle, and turn this handlebody upside down, we get a handlebody with a single 0-handle and six even-framed 2-handles (necessarily even-framed, since the $K3$ surface is spin) with $L_{17}$ as the boundary. Hence $\varepsilon(L_{17})\leq 6$. Now we finish the claim by showing that $L_{17}$ does not embed in $\#_5 S^2 \times S^2$. If it did, it would split $\#_5 S^2 \times S^2$ into two spin pieces $U$ and $V$. Since $\mu(L_{17})=-16 \equiv 0$ (mod 16), we have $\sigma(U)=\sigma(V)=0$. At least one of $U$ or $V$ must have $b_2 \leq5$, and gluing this manifold to either $P_{17}$ or $\overline{P_{17}}$ results in a closed spin 4-manifold $X$ with \[ b_2(X)\leq 21 < \frac{10}{8}(16) +2= \frac{10}{8}|\sigma(X)| +2, \] which contradicts the 10/8-Theorem. \end{proof} \begin{figure} \caption{Sliding $-1$-framed meridians.} \label{chain} \end{figure} Observe that our argument shows that $L_{17}$ can be obtained by even-framed surgery on a 6-component link in $S^3$. In principle one could follow through the handle calculus to realize this surgery diagram, but we have not done this. For $L_{19}$ we were unable to prove that the complement of $P_{19}$ in the $K3$ surface (given by the embedding described in Proposition~\ref{sextic cover}) admits a handle decomposition without 1- or 3-handles. Instead we will construct a certain \emph{smooth} sextic in $\mathbb{C}P^2$ whose branched double cover is a homotopy $K3$, which we denote by $\mathcal{K}$, such that $P_{19}$ embeds in $\mathcal{K}$ with a complement that admits a handle decomposition with a single 0-handle and four even-framed 2-handles. \begin{lemma} There is a genus-$1$ cobordism $C$ from $T_{2,19}$ to $T_{6,6}$ with no minima and no maxima. \end{lemma} \begin{proof} Consider the group $B_6$ of braids on six strands; we let $\sigma_1,\dots,\sigma_5$ denote the five standard generators, and $\tau = \sigma_1\sigma_2\dots\sigma_5$ be their product; the full twist on six strands is then $\tau^6$. We note the following relation in the braid group, which holds for every $i+j\le5$: $\tau^j \sigma_i = \sigma_{i+j}\tau^j$.
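Equivalently, conjugation by $\tau$ shifts the generators: $\tau\sigma_i\tau^{-1} = \sigma_{i+1}$ for $1\le i\le 4$, as one verifies by tracking the strands, and iterating this $j$ times gives the relation above. For instance, taking $i=2$ and $j=1$ yields $\tau\sigma_2 = \sigma_3\tau$, that is, $\sigma_3\tau = \tau\sigma_2$.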
In particular, we will make use of the relations $\sigma_3\tau = \tau\sigma_2$ and $\sigma_2\sigma_3\tau = \tau\sigma_1\sigma_2$, and of the fact that $\sigma_1$, $\sigma_3$ and $\sigma_5$ commute. \begin{figure} \caption{The braid $\beta = \tau^2\sigma_1^3\sigma_3^6\sigma_5^4$.} \label{f:braid} \end{figure} We claim that $T_{2,19}$ is the closure of the 6-braid $\beta = \tau^2\sigma_1^3\sigma_3^6\sigma_5^4$ (see Figure~\ref{f:braid} above). In fact, the closure of $\beta$ is clearly a $2$-cable of the unknot, viewed as the closure of a 3-braid; therefore, $\widehat\beta$ is a torus knot $T_{2,q}$ for some $q$. Moreover, since $\beta$ is quasipositive, the slice genus of its closure is determined by the exponent sum of $\beta$ \cite{R}. It follows that $g_*(\widehat\beta) = 9$, thus $\widehat\beta = T_{2,19}$, as claimed. We now exhibit the cobordism, by adding bands corresponding to generators. Indeed, let us write: \[ \tau^2\sigma_1^3\sigma_3^6\sigma_5^4 = \tau^2(\sigma_1\sigma_3\sigma_5)\sigma_3(\sigma_1\sigma_3\sigma_5)\sigma_3(\sigma_1\sigma_3\sigma_5)\sigma_3\sigma_5; \] we can insert bands corresponding to $\sigma_2$ and $\sigma_4$ in each $\sigma_1\sigma_3\sigma_5$ factor, and one extra $\sigma_4$ in the next-to-last position, thus obtaining: \[ \gamma := \tau^2\cdot\tau\cdot\sigma_3\cdot\tau\cdot\sigma_3\cdot\tau\cdot\sigma_3\cdot\sigma_4\cdot\sigma_5. \] We now use the relations $\sigma_3\tau = \tau\sigma_2$ and $\sigma_2\sigma_3\tau = \tau\sigma_1\sigma_2$ mentioned above to write: \[ \gamma = \tau^3\sigma_3\tau\sigma_3\tau\sigma_3\sigma_4\sigma_5 = \tau^4\sigma_2\sigma_3\tau\sigma_3\sigma_4\sigma_5 = \tau^5\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5 = \tau^6, \] and hence $\widehat\gamma = T_{6,6}$. \end{proof} We can now cap off the cobordism in $\mathbb{C}P^2$ with six discs, since the Hopf link $T_{6,6}$ is the link at infinity of a degree-6 curve. Filling the lower boundary component with the Milnor fiber of $T_{2,19}$ (i.e. the pushed-in canonical Seifert surface) we obtain a genus-10 smooth surface $F$ in the homology class $6h\in H_2(\mathbb{C}P^2)$. The double cover of $\mathbb{C}P^2$ branched over $F$ is a spin 4-manifold $\mathcal{K}$, since the homology class $[F]$ is even, but not divisible by 4~\cite{Nagami}; moreover, the Euler characteristic of $\mathcal{K}$ is 24, since the surface $F$ is of genus 10. Since $F$ contains the Milnor fiber of $T_{2,19}$, and the double cover of $B^4$ branched over this surface is $P_{19}$~\cite{AK}, we see that $P_{19}$ embeds in $\mathcal{K}$. \begin{lemma} The complement of $P_{19}$ in $\mathcal{K}$ admits a handle decomposition with a single $0$-handle and four even-framed $2$-handles.
\end{lemma} \begin{proof} $F$ is constructed so that the standard Morse function on $\mathbb{C}P^2$ induces a handle decomposition of $F$ with a single 0-handle, twenty-five 1-handles, and six 2-handles. We now apply work of Akbulut and Kirby~\cite{AK} (see also \cite[Section 6.3]{GS}) to determine the structure of the handle decomposition of the double cover $\Sigma(F)$ of $\mathbb{C}P^2$ branched over $F$. The double cover of $B^4$ branched over a properly embedded surface with a single 0-handle and no 2-handles admits a handle decomposition with a single 0-handle and $k$ 2-handles with even framing, where $k$ is the number of 1-handles of the surface. If we take the double cover of $B^4 \cup 2$-handle branched over such a surface (whose boundary is disjoint from the attaching circle of the 2-handle), the additional 2-handle lifts to two 2-handles in the cover. Now the branched double cover of $B^4$ over the Milnor fiber of $T_{2,19}$ is $P_{19}$, and by the preceding remarks it follows that the branched double cover of the cobordism $C$ constructed above is obtained by attaching only 2-handles from $L_{19} = L(19,18) = \Sigma(T_{2,19})$ to get to $\Sigma(T_{6,6})$. The only potential difficulty is when we take the double cover of $\mathbb{C}P^2 \mathfrak{s}etminus B^4$ branched over the six disks (the 2-handles of $F$). However, here we can apply a trick from \cite[Section 5]{AK}. Since $F$ is connected, and each disk is glued onto one of the six upper boundary components of $C$, we can connect any pair of disks with an embedded band in $C$. This band can then be ``lifted'' to the 4-handle, connecting the pair of disks and so eliminating one of the 2-handles of $F$. After performing this move five times, we will have isotoped $F$ so that it has a handle decomposition with a single 0-handle and single 2-handle. Then the work of~\cite{AK} implies that the branched double cover admits a handle decomposition without 1- or 3-handles. Since this handle decomposition includes $P_{19}$, the proof is completed by recalling that $\Sigma(F)=\mathcal{K}$ and that $b_2(\mathcal{K}) = 22$. \end{proof} We have not attempted to verify whether $\mathcal{K}$ is diffeomorphic to the $K3$ surface; nor have we attempted to determine the 4-component link in $S^3$ that admits an even-framed surgery to $L_{19}$, whose existence is guaranteed by the argument above. \begin{claim} $\varepsilon(L_{19})=4$. \end{claim} \begin{proof} We have shown that $\varepsilon(L_{19}) \leq4$. The proof of the necessary lower bound is by now a familiar argument. If $L_{19}$ embeds in $\#_3 S^2 \mathfrak{t}imes S^2$, then it would split $\#_3 S^2 \mathfrak{t}imes S^2$ into two spin pieces $U$ and $V$, say, with $\partial U =L_{19}$ and $\partial V = \overline{L_{19}}$. Since $\mu(L_{19})=-18 \equiv -2$ (mod 16), we have $\mathfrak{s}igma(U)= -2$ and $\mathfrak{s}igma(V)=2$. At least one of $U$ or $V$ must have $b_2 \leq3$, and gluing this manifold to either $P_{19}$ or $\overline{P_{19}}$ results in a closed spin 4-manifold $X$ with \[ b_2(X)\leq 21 < \frac{10}{8}(16) +2= \frac{10}{8}|\mathfrak{s}igma(X)| +2, \] which contradicts the 10/8-Theorem. \end{proof} At last, we can sum up everything we have obtained to prove Theorem~\ref{t:smallLn}. \begin{proof}[Proof of Theorem~\ref{t:smallLn}] In the previous claims we have proven that $\varepsilon(L_2) = 1$ and $\varepsilon(L_{12}) = 11$. By Theorem~\ref{steps}(2), $\varepsilon(L_n) = n-1$ for $n=2,\dots, 12$. 
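Indeed, Theorem~\ref{steps}(2) says that consecutive values $\varepsilon(L_n)$ and $\varepsilon(L_{n+1})$ differ by at most $1$; since $\varepsilon(L_{12})-\varepsilon(L_2) = 10 = 12-2$, every step in this range must increase the embedding number by exactly $1$.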
Analogously, since $\varepsilon(L_{12})= 11$ and $\varepsilon(L_{19}) = 4$, by Theorem~\ref{steps}(2), $\varepsilon(L_n) = 23-n$ for $n=12,\dots, 19$. \end{proof} \begin{remark} We observe here that the computation of the embedding numbers of Theorem~\ref{t:smallLn} can be also done with a case-by-case analysis, without appealing to Theorem~\ref{steps}. Indeed, one can combine explicit constructions (analogous to those of $L_{12}$, $L_{17}$, and $L_{19}$) either with Rokhlin's theorem (for small $n$) or with the 10/8-Theorem (for large $n$) to achieve the same result. \end{remark} \end{subsection} \end{section} \begin{section}{Exact calculations for arbitrarily large embedding numbers}\label{exact} Finally we show how to construct integral and rational homology 3-spheres with arbitrarily large embedding numbers, such that we can give exact calculations provided that we assume the validity of the 11/8-Conjecture. Let $K_n$ be the spin 4-manifold $K_n = \#_{2n} K3$. Then the intersection form $Q_{K_n}$ of $K_n$ is isomorphic to $4nE_8\oplus 6nH$ (note that our convention is that the $E_8$ form is \emph{negative} definite) and so has signature $-32n$ and rank $44n$. Now $K_n$ admits a handle decomposition without 1-handles or 3-handles. With this handle decomposition we can perform handle slides so that in the basis for $H_2(K_n)$ corresponding to the cores of the 2-handles, $Q_{K_n}$ is given by $4nE_8\oplus 6nH$. We can further perform handle slides to obtain a basis such that each of the $n$ copies of $\oplus 6H$ looks like $\left(\begin{array}{cc}Q & I \\ I & O\end{array}\right)$, where all submatrices are $6\times 6$, and $Q$ is defined by the following matrix:\\ $$\begin{pmatrix} -2 & 1 & 0 & 0 & 0 & 0 \\ 1 & -2 & 1 & 0 & 0 & 0 \\ 0 & 1 & -2 & 1 & 0 & 0 \\ 0 & 0 & 1 & -2 & 1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 1 \\ 0 & 0 & 0 & 0 & 1 & -2 \end{pmatrix}.$$\\ Observe that this is possible since doubling the linear plumbing of six $-2$-disk bundles over $S^2$ yields $\#_6 S^2 \times S^2$. Now let $U_n$ be the sub-handlebody of $K_n$ formed by taking the 0-handle and the 2-handles corresponding to each basis element in the $4n$ $E_8$'s and the first six basis elements in each $\oplus 6H$ block. Then the intersection form $Q_{U_n}$ of $U_n$ will be $4nE_8 \oplus nQ$. One can check that $Q_{U_n}$ is negative definite of rank $38n$, and will have determinant $\pm 7^n$. Hence the boundary $Y_n = \partial U_n$ will be a $\mathbb{Z}/2\mathbb{Z}$-homology sphere, and therefore has a unique spin structure. Let $V_n$ be the closure of $K_n \setminus U_n$. Then by Novikov additivity and Mayer--Vietoris we get that $V_n$ is positive definite of rank $6n$, and so $\overline{V_n}$ is negative definite with $\partial \overline{V_n} = Y_n$. Since $\overline{V_n}$ is a spin 2-handlebody, by doubling we get $\#_{6n} S^2 \times S^2$ and hence $\varepsilon(Y_n) \leq 6n$. \begin{proposition} If the $11/8$-Conjecture is true then $\varepsilon(Y_n) = 6n$. \end{proposition} \begin{proof} Fix a natural number $n$. $Y_n$ embeds in $\#_{\varepsilon(Y_n)} S^2 \times S^2$ and splits $\#_{\varepsilon(Y_n)} S^2 \times S^2$ into two spin pieces, $X_1$ and $X_2$. We can assume that $\partial X_1 = Y_n$ and $\partial X_2 = \overline{Y_n}$. Then let $W_1 = X_1 \cup \overline{U_n}$ and $W_2 = U_n \cup X_2$, where in each case the 4-manifolds are glued along $Y_n$.
Now $W_1$ and $W_2$ are spin 4-manifolds, and so assuming the validity of the 11/8-Conjecture we obtain $b_2(W_1) \geq \frac{11}8|\sigma(W_1)|$ and $b_2(W_2) \geq \frac{11}8|\sigma(W_2)|$. By Novikov additivity and Mayer--Vietoris (and applying what we know about $U_n$) these become $b_2(X_1) + 38n \geq \frac{11}8(38n+\sigma(X_1))$ and $b_2(X_2) + 38n \geq \frac{11}8(38n-\sigma(X_2))$. Adding these two inequalities gives $b_2(X_1) + b_2(X_2) +76n \geq \frac{11}8 76n + \frac{11}8(\sigma(X_1) - \sigma(X_2))$. Since $X_1 \cup X_2 = \#_{\varepsilon(Y_n)} S^2 \times S^2$ this simplifies to $2\varepsilon(Y_n) +76n \geq \frac{11}8 76n + \frac{11}8 2\sigma(X_1)$, and after rearranging sides $2\varepsilon(Y_n) - \frac{11}8 2\sigma(X_1) \geq \frac38 76n$. Finally, since $\sigma(X_1) \geq -\varepsilon(Y_n)$, this becomes $\frac{19}4\varepsilon(Y_n) \geq \frac38 76n$. Upon simplifying we get that $\varepsilon(Y_n) \geq 6n$, and since we saw previously that $\varepsilon(Y_n) \leq 6n$ it follows that $\varepsilon(Y_n) = 6n$. \end{proof} We emphasize that this construction consists of many choices, and each of these choices will affect the resulting manifolds $Y_n$. We can apply a similar argument to compute embedding numbers for integral homology spheres by splitting the intersection form of $\#_{8n} K3$. The intersection form is $16n E_8 \oplus 24n H$, which is isomorphic to $19n E_8 \oplus -3n E_8$. Applying the same technique as above splits $\#_{8n} K3$ along an integral homology sphere $Z_n$ that will have embedding number $24n$. Note that this splitting implies that $Z_n$ bounds two simply connected spin 4-manifolds, one of which has intersection form $19nE_8$ and the other $3nE_8$. In particular, the $Z_n$ bound spin, simply connected, negative definite 4-manifolds that have different $b_2$, answering Question 5.2 in~\cite{Tange}. Our technique has a similar flavor to an argument of Stong~\cite{Stong}, who proved that if the intersection form $Q_X$ of a simply connected closed 4-manifold $X$ decomposes as the direct sum of unimodular forms $U_1 \oplus U_2$, then $X$ can be smoothly decomposed into two simply connected pieces $X_1$ and $X_2$ with $Q_{X_1} = U_1$ and $Q_{X_2} = U_2$. (Note that this is insufficient for our argument; we need that each piece is a 2-handlebody.) Stong's theorem is a strengthening of a result of Freedman and Taylor~\cite{FT}, which only guarantees that $H_1(X_i) = 0$ rather than that the $X_i$ are simply connected. Stong's splitting theorem depends on the following structure theorem. \begin{theorem}[Stong~\cite{Stong}]\label{structure} Let $X$ denote a simply connected closed smooth $4$-manifold. Then $X$ admits a handlebody decomposition $\mathcal{H}$ with $2$-handles $\{ H_1, \cdots, H_m\}$ such that the following holds. \begin{enumerate} \item The attaching circles for the handles $H_1, \cdots, H_r$ represent a free basis for $\pi_1(X^{(1)})$ (where $X^{(1)}$ denotes the union of the $0$-handle and the $1$-handles of $X$), and the attaching circles for the other $2$-handles are null-homotopic in $\pi_1(X^{(1)})$. \item The belt spheres of the handles $H_{r+1}, \cdots, H_{r+s}$ represent a free basis for $\pi_1(\overline{X}^{(3)})$ (where $\overline{X}^{(3)}$ denotes the union of the $4$-handle and the $3$-handles of $X$), and the belt spheres for the other $2$-handles are null-homotopic in $\pi_1(\overline{X}^{(3)})$.
\end{enumerate} \end{theorem} Finally, we use this structure theorem to prove a result unrelated to the main theme of the paper, but perhaps interesting in its own right. When the 4-manifold $X$ is non-spin, the result below follows directly from the work of Stong~\cite{Stong} mentioned in the paragraph preceding Theorem~\ref{structure}, and therefore the most interesting case is when $X$ is spin. \begin{theorem}\label{definite splitting} Any simply connected $4$-manifold $X$ can be decomposed as $X = X_1 \cup X_2$, where $X_1$ and $X_2$ are simply connected $4$-manifolds that are positive definite and negative definite, respectively, glued along a rational homology sphere. \end{theorem} \begin{proof} Give $X$ a handlebody decomposition as in Theorem \ref{structure} (and we use the notation of Theorem \ref{structure} as well). Then the cores of the 2-handles $H_{r+s+1}, \dots, H_m$ represent a free basis for $H_2(X) = \mathbb{Z}^{m-r-s}$. If $X$ is definite, then the theorem is trivial with one of the $X_i$ empty. Otherwise $X$ is indefinite and by the classification of indefinite unimodular forms we have that $Q_X$ is isomorphic to either $aE_8 \oplus bH$ or $a\langle 1 \rangle \oplus b\langle -1 \rangle$ for some $a$ and $b$, depending on whether $X$ is spin or non-spin. In the non-spin case we slide and reorder $H_{r+s+1}, \cdots, H_m$ so that the first $a$ handles represent $a\langle 1 \rangle$ and the rest represent $b\langle -1 \rangle$ (where $a+b = m-r-s$). Then we define $X_1$ to be $X^{(1)} \cup H_1 \cup \cdots \cup H_r \cup H_{r+s+1} \cup \cdots \cup H_{r+s+a}$, and let $X_2$ denote the remaining handles. It follows that the $X_i$ are simply connected and definite as required. In the spin case we make a similar argument. We have $Q_X = aE_8 \oplus bH$, and we can assume $a$ is nonnegative if we allow reversing the orientation of $X$. Next we use the fact that $bH$ can be represented by $A=\left(\begin{array}{cc}Q & I \\ I & O\end{array}\right)$, where all submatrices are $b\times b$, and $Q$ is negative definite (as we described earlier, this follows since doubling the linear plumbing of $b$ $-2$-spheres results in $\#_b S^2 \times S^2$). Then we slide and reorder our 2-handles, so that $H_{r+s+1}, \dots, H_{r+s+8a +b}$ represent the first $8a+b$ elements in $aE_8 \oplus A$. Then $X_1 := X^{(1)} \cup H_1 \cup \cdots \cup H_r \cup H_{r+s+1} \cup \cdots \cup H_{r+s+8a+b}$ is simply connected and negative definite, and its complement $X_2$ consisting of the remaining handles will be simply connected and positive definite. In the non-spin case we have that $\det(X_i)=\pm1$, and so we are actually splitting along an integral homology sphere. In the spin case we still have that $\det(X_i) \neq 0$, and so $\partial X_i$ is a rational homology sphere as required. \end{proof} \end{section} \end{document}
\begin{document} \title{Testing normality for unconditionally heteroscedastic macroeconomic variables} \noindent {\em Abstract:} In this paper the testing of normality for unconditionally heteroscedastic macroeconomic time series is studied. It is underlined that the classical Jarque-Bera test (JB hereafter) for normality is inadequate in our framework. On the other hand it is found that the approach which consists in correcting for the heteroscedasticity by kernel smoothing before testing normality is justified asymptotically. Nevertheless it appears from Monte Carlo experiments that such a methodology can noticeably suffer from size distortion for samples that are typical of macroeconomic variables. As a consequence a parametric bootstrap methodology for correcting the problem is proposed. The innovations distribution of a set of inflation measures for the U.S., Korea and Australia is analyzed. Keywords: Unconditionally heteroscedastic time series; Jarque-Bera test.\\ JEL: C12, C15, C18\\ \section{Introduction} In the econometric literature, the Jarque-Bera (1980) test is routinely used to assess the normality of variables. The properties of this test are well documented for stationary conditionally heteroscedastic processes. For instance Fiorentini, Sentana and Calzolari (2004), Lee, Park and Lee (2010) and Lee (2012) investigated the JB test in the context of GARCH models. However few studies are available on the distributional specification of time series in the presence of unconditional heteroscedasticity. Drees and St\u{a}ric\u{a} (2002), Mikosch and St\u{a}ric\u{a} (2004) and Fry\'{z}lewicz (2005) investigated the possibility of modelling financial returns by nonparametric methods. To this end, Drees and St\u{a}ric\u{a} (2002) and Mikosch and St\u{a}ric\u{a} (2004) examined the distribution of S\&P500 returns corrected for heteroscedasticity. On the other hand Fry\'{z}lewicz (2005) pointed out that large sample kurtosis for financial time series may be explained by a non constant unconditional variance. In general we did not find references that specifically address the problem of assessing the distribution of unconditionally heteroscedastic time series. Note that non constant variance constitutes an important pattern for time series in general, and macroeconomic variables in particular. Reference can be made to Sensier and van Dijk (2004) who found that most of the 214 U.S. macroeconomic time series they studied have a time-varying variance. In this paper we aim to provide a reliable methodology for testing normality for small-sample time series with non constant unconditional variance. The structure of the paper is as follows. In Section 2 we first specify the dynamics ruling the observed process. In particular the unconditionally heteroscedastic structure of the errors is given. The inadequacy of the standard JB test in our framework is highlighted. The approach consisting in correcting the errors for heteroscedasticity before building a JB test is presented. We then introduce a parametric bootstrap procedure that is intended to improve normality testing for unconditionally heteroscedastic macroeconomic time series. In Section \ref{numerical} numerical experiments are conducted to shed some light on the finite sample behaviors of the studied tests.
In particular it is found that when estimating the non constant variance structure by kernel smoothing, a correct bandwidth choice is a necessary condition for the proper implementation of the normality tests based on heteroscedasticity correction. We illustrate our results by examining the distributional properties of the U.S., Korean and Australian GDP implicit price deflators. \section{Testing normality in the presence of unconditional heteroscedasticity} We consider processes $(y_{t,n})$ which can be written as \begin{eqnarray} y_{t,n}&=&\omega_0+x_{t,n},\nonumber\\ x_{t,n}&=&\sum_{i=1}^pa_{0i}x_{t-i,n}+u_{t,n},\label{model} \end{eqnarray} where $y_{1,n},\dots,y_{n,n}$ are available, $n$ being the sample size and $E(x_{t,n})=0$. The conditional mean of $x_{t,n}$ is driven by the autoregressive parameters $\theta_0=(a_{01},\dots,a_{0p})'$. We make the following assumption on the conditional mean.\\ \textbf{Assumption A0:}\: The ${a}_{0i}\in\mathbb{R}$, $1\leq i\leq p$, are such that $\det ({a}(z))\neq 0$ for all $|z|\leq 1$, with ${a}(z)=1-\sum_{i=1}^{p}{a}_{0i} z^i$.\\ In the assumption {\bf A1} below, the well-known rescaling device introduced by Dahlhaus (1997) is used to specify the errors process $(u_{t,n})$. For a random variable $v$ we define $\|v\|_q=(E|v|^q)^{1/q}$, with $E|v|^q<\infty$ and $q\geq1$.\\ \textbf{Assumption A1:}\: We assume that $u_{t,n}=h_{t,n}\epsilon_t$ where: \begin{itemize} \item[(i)] $h_{t,n}\geq c>0$ for some constant $c>0$, and satisfies $h_{t,n}=g(t/n)$, where $g(r)$ is a measurable deterministic function on the interval $(0,1]$, such that $\sup_{r\in(0,1]}|g(r)|<\infty$. The function $g(.)$ satisfies a Lipschitz condition piecewise on a finite number of sub-intervals that partition $(0,1]$. \item[(ii)] The process $(\epsilon_t)$ is iid and such that $E(\epsilon_t)=0,$ $E(\epsilon_t^2)=1$, and $E|\epsilon_t|^{8\nu}<\infty$ for some $\nu>1$.\\ \end{itemize} The non constant variance induced by {\bf A1(i)} allows for a wide range of non stationarity patterns commonly faced in practice, such as abrupt shifts, smooth changes or cyclical behaviors. Note that in the zero mean AR case, tools needed to carry out the Box and Jenkins specification-estimation-validation modeling cycle are provided in Patilea and Ra\"{\i}ssi (2013) and Ra\"{\i}ssi (2015). For $\omega_0\neq0$ define the estimator $\hat{\omega}=n^{-1}\sum_{t=1}^{n}y_{t,n}$, and $x_{t,n}(\omega)=y_{t,n}-\omega$ for any $\omega\in\mathbb{R}$. Writing $\hat{\omega}-\omega_0=n^{-1}\sum_{t=1}^{n}x_{t,n}$, it can be shown that \begin{equation}\label{const} \sqrt{n}(\hat{\omega}-\omega_0)=O_p(1), \end{equation} using the Beveridge-Nelson decomposition. Now let \begin{equation}\label{condmean} \hat{\theta}(\omega)=\left(\Sigma_{\underline{x}}(\omega)\right)^{-1}\Sigma_x(\omega), \end{equation} where $$\Sigma_{\underline{x}}(\omega)= n^{-1}\sum_{t=1}^{n}\underline{x}_{t-1,n}(\omega)\underline{x}_{t-1,n}(\omega)'\quad\mbox{and}\quad \Sigma_x(\omega)=n^{-1}\sum_{t=1}^{n}\underline{x}_{t-1,n}(\omega)x_{t,n}(\omega),$$ with $\underline{x}_{t-1,n}(\omega)=(x_{t-1,n}(\omega),\dots,x_{t-p,n}(\omega))'$. With these notations define the OLS estimator $\hat{\theta}(\hat{\omega})$ and the unfeasible estimator $\hat{\theta}(\omega_0)$. Straightforward computations give $\sqrt{n}(\hat{\theta}(\hat{\omega})-\hat{\theta}(\omega_0))$ $=o_p(1)$, so that using the results of Patilea and Ra\"{\i}ssi (2012) we have \begin{equation}\label{autoreg} \sqrt{n}(\hat{\theta}(\hat{\omega})-\theta_0)=O_p(1).
\end{equation} Once the conditional mean is filtered in accordance with (\ref{const}) and (\ref{autoreg}), we can proceed to test the following hypotheses: $$H_0:\:\epsilon_t\sim\mathcal{N}(0,1)\quad\mbox{vs.}\quad H_1:\epsilon_t\:\mbox{ has a different distribution},$$ with the usual slight abuse of interpretation inherent in the use of the JB test for normality testing. Clearly the skewness and kurtosis of $u_{t,n}$ correspond to those of $\epsilon_t$. However in practice the absence of skewness and of excess kurtosis is checked using the JB test statistic: \begin{equation}\label{jb} Q_{JB}^u=n\left[Q_{JB}^{S,u}+Q_{JB}^{K,u}\right], \end{equation} where $$Q_{JB}^{S,u}=\frac{\hat{\mu}_3^2}{6\hat{\mu}_2^3}\quad\mbox{and}\quad Q_{JB}^{K,u}=\frac{1}{24}\left(\frac{\hat{\mu}_4}{\hat{\mu}_2^2}-3\right)^2,$$ with $\hat{\mu}_j=n^{-1}\sum_{t=1}^{n}(\hat{u}_{t,n}-\bar{\hat{u}})^j$ and $\bar{\hat{u}}=n^{-1}\sum_{t=1}^{n}\hat{u}_{t,n}$. The $\hat{u}_{t,n}$'s are the residuals obtained from the estimation step. Let us denote by $\Rightarrow$ convergence in distribution. If we suppose the process $(u_t)$ homoscedastic ($g(.)$ is constant), then the standard result $Q_{JB}^u\Rightarrow\chi_2^2$ is retrieved (see Yu (2007), Section 2.2). However under {\bf A0} and {\bf A1} with $g(.)$ non constant (the unconditionally heteroscedastic case) we have: \begin{equation}\label{divergence2} Q_{JB}^{K,u}=\frac{1}{24}\left[\kappa_2\left(E(\epsilon_t^4)-3\right) +3\left(\kappa_2-1\right)\right]+o_p(1), \end{equation} where $\kappa_2=\frac{\int_{0}^{1}g^4(r)dr}{\left(\int_{0}^{1}g^2(r)dr\right)^2}$. Hence if we suppose the errors process unconditionally heteroscedastic with $E(\epsilon_t^4)=3$, we have $Q_{JB}^u=Cn+o_p(n)$ for some strictly positive constant $C$. As a consequence, the classical JB test will tend to spuriously detect departures from the null hypothesis of a normal distribution in our framework. This argument was used by Fry\'{z}lewicz (2005) to underline that unconditionally heteroscedastic specifications can cover financial time series that typically exhibit an excess of kurtosis.\\ In order to assess the distribution of S\&P500 returns, Drees and St\u{a}ric\u{a} (2002) considered data corrected for heteroscedasticity, using a kernel estimator of the variance. We will follow this approach in the sequel, considering $$\hat{h}_{t,n}^2=\sum_{i=1}^nw_{ti}(\hat{u}_{i,n}-\bar{\hat{u}})^2,\qquad 1\leq t\leq n,$$ with $w_{ti}=\left(\sum_{j=1}^nK_{tj}\right)^{-1}K_{ti}$, $K_{ti}= K((t-i)/nb)$ if $t\neq i$ and $K_{ii}=0,$ where $K(\cdot)$ is a kernel function on the real line and $b$ is the bandwidth. The following assumption is needed for our variance estimator.\\ \textbf{Assumption A2:} \, (i) The kernel $K(\cdot)$ is a bounded density function defined on the real line such that $K(\cdot)$ is nondecreasing on $(-\infty, 0]$ and decreasing on $[0,\infty)$ and $\int_\mathbb{R} v^2K(v)dv < \infty$. The function $K(\cdot)$ is differentiable except at a finite number of points and the derivative $K^\prime(\cdot)$ satisfies $\int_{\mathbb{R}}|x K^\prime (x)| dx < \infty.$ Moreover, the Fourier Transform $\mathcal{F}[K](\cdot)$ of $K(\cdot)$ satisfies $\int_{\mathbb{R}} \left| s\right|^\tau \left| \mathcal{F}[K](s) \right|ds <\infty$ for some $\tau>0$.
(ii) The bandwidth $b$ is taken in the range $\mathfrak{B}_n = [c_{min} b_n, c_{max} b_n]$ with $0< c_{min}< c_{max}< \infty$ and $nb_n^{4-\gamma} + 1/nb_n^{2+\gamma} \rightarrow 0$ as $n\rightarrow \infty$, for some small $\gamma >0$.\\ Let $\hat{\epsilon}_t=(\hat{u}_{t,n}-\bar{\hat{u}})/\hat{h}_{t,n}$. We are now ready to consider the following JB test statistic: \begin{equation*} Q_{JB}^{\epsilon}=n\left[Q_{JB}^{S,\epsilon}+Q_{JB}^{K,\epsilon}\right], \end{equation*} where $$Q_{JB}^{S,\epsilon}=\frac{\hat{\nu}_3^2}{6\hat{\nu}_2^3}\quad\mbox{and}\quad Q_{JB}^{K,\epsilon}=\frac{1}{24}\left(\frac{\hat{\nu}_4}{\hat{\nu}_2^2}-3\right)^2,$$ with $\hat{\nu}_j=n^{-1}\sum_{t=1}^{n}\hat{\epsilon}_t^j$. The following proposition gives the asymptotic distribution of $Q_{JB}^{\epsilon}$. \begin{prop}\label{propostu} Under the assumptions {\bf A0}, {\bf A1} and {\bf A2}, we have as $n\to\infty$ \begin{equation}\label{res} Q_{JB}^{\epsilon}\Rightarrow\chi_2^2, \end{equation} uniformly with respect to $b\in\mathfrak{B}_n$. \end{prop} Proposition 1 can be proved using the same arguments given in Yu (2007), together with those of the proof of Proposition 4 in Patilea and Ra\"{\i}ssi (2014). Therefore we omit the proof. To build a test using the above result, we will consider the normal kernel in the next section. On the other hand we suggest choosing the bandwidth by minimizing the cross-validation (CV) criterion (see Wasserman (2006, pp. 69-70)), unless otherwise specified. The test obtained using (\ref{res}) and the above settings will be denoted by $T_{cv}$. The standard test, which does not take into account the unconditional heteroscedasticity, is denoted by $T_{st}$. For high frequency time series it is reasonable to suppose that the approximation (\ref{res}) is satisfactory when the bandwidth is carefully chosen. Nevertheless relying on this asymptotic approximation for small $n$ is questionable. Therefore we propose to apply the following parametric bootstrap algorithm inspired by Francq and Zako\"{\i}an (2010, p. 335).\\ \begin{itemize} \item[1-] Generate $\epsilon_t^{(b)}\sim\mathcal{N}(0,1)$, $1\leq t\leq n$, build the bootstrap errors $u_{t,n}^{(b)}=\epsilon_t^{(b)}\hat{h}_{t,n}$, and the bootstrap series $y_t^{(b)}$ using (\ref{model}), but with $\hat{\omega}$ and $\hat{\theta}(\hat{\omega})$ (see (\ref{const}) and (\ref{condmean})). \item[2-] Estimate the autoregressive parameters and a constant as in (\ref{model}), but using the $y_t^{(b)}$'s. Build the kernel estimators $\hat{h}_{t,n}^{(b)}$ from the resulting residuals $\hat{u}_{t,n}^{(b)}$. \item[3-] Compute $\hat{\epsilon}_{t,n}^{(b)}=\hat{u}_{t,n}^{(b)}/\hat{h}_{t,n}^{(b)}$ for $t=1,\dots,n$. Compute $Q_{JB}^{\epsilon,(b)}$. \item[4-] Repeat steps 1 to 3 $B$ times for some large $B$. Use the $Q_{JB}^{\epsilon,(b)}$'s to compute the p-values of the bootstrap JB test. \end{itemize} The test obtained using the above parametric bootstrap procedure is denoted by $T_{boot}$; a schematic implementation is sketched below. \section{Numerical illustrations} \label{numerical} The finite sample properties of the $T_{st}$, $T_{cv}$ and $T_{boot}$ tests are first examined by means of Monte Carlo experiments. The distribution of the U.S., Korean and Australian GDP implicit price deflators is then investigated. Throughout this section the asymptotic nominal level of the tests is 5\%. In the sequel we fix $B=499$.
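To fix ideas, the computations involved in the heteroscedasticity-corrected JB statistic and in the bootstrap p-value can be summarized by the following Python sketch (based on the NumPy library). It is only meant as an illustration under simplifying assumptions and is not the code used for the experiments reported below: the AR order is fixed to one, the normal kernel is hard-coded, the bandwidth $b$ is treated as a given input (whereas the $T_{cv}$ and $T_{boot}$ tests select it by cross-validation), and the function names are ours.
\begin{verbatim}
import numpy as np

def fit_ar1(y):
    # OLS estimation of the constant and of the AR(1) coefficient;
    # returns the estimated coefficient and the residuals (sketch only).
    x = y - y.mean()
    a_hat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
    return a_hat, x[1:] - a_hat * x[:-1]

def kernel_variance(u, b):
    # h_t^2 = sum_i w_{ti} (u_i - u_bar)^2 with a normal kernel and K_{ii} = 0.
    n = len(u)
    t = np.arange(n)
    K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / (n * b)) ** 2)
    np.fill_diagonal(K, 0.0)
    w = K / K.sum(axis=1, keepdims=True)
    return w @ (u - u.mean()) ** 2

def jb_stat(e):
    # Jarque-Bera statistic computed from (rescaled) residuals.
    m = e - e.mean()
    mu2, mu3, mu4 = (m ** 2).mean(), (m ** 3).mean(), (m ** 4).mean()
    return len(e) * (mu3 ** 2 / (6 * mu2 ** 3) + (mu4 / mu2 ** 2 - 3) ** 2 / 24)

def t_boot_pvalue(y, b, B=499, seed=0):
    # Parametric bootstrap p-value for the normality test (schematic sketch).
    rng = np.random.default_rng(seed)
    a_hat, u = fit_ar1(y)
    h2 = kernel_variance(u, b)
    q_obs = jb_stat((u - u.mean()) / np.sqrt(h2))
    q_boot = np.empty(B)
    for j in range(B):
        eps = rng.standard_normal(len(u))
        u_b = eps * np.sqrt(h2)                     # bootstrap errors, step 1
        x_b = np.zeros(len(u) + 1)
        for t in range(1, len(x_b)):                # bootstrap AR(1) series
            x_b[t] = a_hat * x_b[t - 1] + u_b[t - 1]
        _, u_hat_b = fit_ar1(y.mean() + x_b)        # re-estimation, step 2
        h2_b = kernel_variance(u_hat_b, b)
        q_boot[j] = jb_stat((u_hat_b - u_hat_b.mean()) / np.sqrt(h2_b))  # step 3
    return (1 + np.sum(q_boot >= q_obs)) / (B + 1)  # step 4
\end{verbatim}
The sketch follows steps 1--4 of the algorithm above, with $B=499$ as a default value.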
\subsection{Monte Carlo experiments} \label{mce} We simulate $N=1000$ trajectories of AR(1) processes: \begin{equation}\label{ar1} y_{t,n}=a_0y_{t-1,n}+u_{t,n}, \end{equation} where $a_0=0.4$ and $u_{t,n}=h_{t,n}\epsilon_t$ with $\epsilon_t$ iid(0,1). Under the null hypothesis we set $\epsilon_t\sim\mathcal{N}(0,1)$. On the other hand, under the alternative hypothesis we take $\epsilon_t=\cos(\delta)v_t+\sin(\delta)w_t$, with $v_t\sim\mathcal{N}(0,1)$, $(\sqrt{2}w_t+1)\sim\chi_1^2$, $0<\delta\leq\frac{\pi}{2}$, $v_t$ and $w_t$ being mutually independent. In order to study the case where the series are actually homoscedastic, we set $h_{t,n}=1$. For the heteroscedastic case, the variance structure is given by \begin{equation}\label{simhet} h_{t,n}=1+2\exp\left(t/n\right)+0.3(1+t/n)\sin\left(5\pi t/n+\pi/6\right). \end{equation} In such a situation the variance structure exhibits a global monotone behavior together with a cyclical/seasonal component that is common in macroeconomic data (see e.g. Trimbur and Bell (2010) for seasonal effects in the variance). In all our experiments, the mean in (\ref{ar1}) is treated as unknown. More precisely the AR parameter in (\ref{ar1}) is estimated using $y_{t,n}-\hat{\omega}$, where $\hat{\omega}$ is given in (\ref{const}), and then the resulting centered residuals are used to compute the test statistics.\\ The outputs obtained under the null hypothesis are first analyzed. The results are given in Table \ref{tab1} for the homoscedastic case and in Table \ref{tab2} for the heteroscedastic case. Noting that samples of macroeconomic time series with noticeable heteroscedasticity are relatively large but generally smaller than $n=400$, special emphasis will be put on interpreting the results for the sample sizes $n=100,200,400$. Since $N=1000$ processes are simulated, and under the hypothesis that the finite sample size of a given test is $5\%$, the relative rejection frequencies should be between the significance limits 3.65\% and 6.35\% with probability 0.95. The outputs outside these confidence bands will be displayed in bold type.\\ From Table \ref{tab1}, it appears that the $T_{cv}$ test is oversized for small samples ($n=100$ and $n=200$). This could be explained by the fact that this test is too sophisticated for the standard case. When the sample size increases the relative rejection frequencies become close to 5\% ($n=400$ and $n=800$). On the other hand the $T_{st}$ and $T_{boot}$ tests give good results for all sample sizes. Of course if there is no evidence of heteroscedasticity, the simple $T_{st}$ should be used. However Table \ref{tab1} reveals that in case of doubt, the use of the $T_{boot}$ test is a good alternative. In the heteroscedastic case, it is seen from Table \ref{tab2} that the $T_{st}$ test fails to control the type I error as $n\to\infty$. This was expected from (\ref{divergence2}). Next it seems that the relative rejection frequencies of the $T_{cv}$ test are somewhat far from the nominal level 5\%, even when $n=800$. From Table \ref{tab2} it also emerges that the $T_{boot}$ test controls the type I error reasonably well. Therefore we can draw the conclusion that the $T_{boot}$ gives a substantial improvement for sample sizes that are typical of heteroscedastic macroeconomic variables. Note that the $T_{cv}$ test has better results for larger samples ($n\gg1000$). For instance, conducting experiments similar to those of Table \ref{tab2}, we obtained 7.4\% rejections for $n=1600$ and 6.9\% rejections for $n=3200$.
Hence the potential improvements of the $T_{boot}$ in comparison to the $T_{cv}$ should become slight as $n\to\infty$. For this reason if high frequency time series are analyzed, the $T_{cv}$ should certainly be preferred to the computationally intensive $T_{boot}$. In general it is important to point out that the bandwidth must be carefully selected to ensure a good implementation of the $T_{boot}$ and $T_{cv}$ tests. It turns out from our experiments that selecting the bandwidth by cross-validation leads to relatively accurate results. Indeed we found that the rejection frequencies of the $T_{cv}$ converge to 5\%, and that the rejection frequencies of the $T_{boot}$ remain close to the nominal level in such a case. However other choices can deteriorate the control of the type I error. For instance let us consider the $T_{f}$ test which consists in correcting the heteroscedasticity, but with the bandwidth fixed at $\gamma(\hat{\sigma}^2/n)^{0.2}$, where $\hat{\sigma}^2$ is the sample variance and $\gamma$ is a constant. The corresponding bootstrap test will be denoted by $T_{f,boot}$. Here the normal kernel is kept. We only study the heteroscedastic case. The results given in Table \ref{tabgam} show that the rejection frequencies are strongly affected by this way of selecting the bandwidth. Finally let us point out that when the heteroscedastic structure is relatively easy to estimate (for instance if the sine part is removed in (\ref{simhet})), we found better results (not displayed here) for the $T_{boot}$ and $T_{cv}$ tests in comparison to those of Table \ref{tab2}.\\ Now we turn to the analysis of the behavior of the tests under the alternative hypothesis. For a fair comparison we only studied the $T_{st}$ and $T_{boot}$ in the homoscedastic case. The sample size $n=100$ is fixed, and recall that the parameter $\delta$ measures the departure from the null hypothesis. The outputs of our simulations, displayed in Figure \ref{powerfig}, show that the $T_{boot}$ test does not suffer from a lack of power in comparison to the $T_{st}$. In conclusion it turns out that the $T_{boot}$ improves the distribution analysis, in the sense that it ensures a good control of the type I error, but without entailing a noticeable loss of power. \subsection{Real data analysis} \label{rda} Inflation measures are commonly used to analyze macroeconomic facts. Reference can be made to the numerous empirical papers studying the relation between price levels and money supply (see e.g. Jones and Uri (1986)). On the other hand inflation is of great importance in finance, as many central banks adjust their interest rates in view of targeting a certain inflation level. Accordingly, constructing valid confidence intervals for inflation forecasts may often be crucial. In such investigations the distributional analysis can clearly help to build a model for the data. In a stationary setting, authors have aimed to detect ARCH effects by assessing asymmetry and/or leptokurticity in inflation variables, following Engle (1982) (see Broto and Ruiz (2008, p. 22) among others). In the same way, it is reasonable to think that a test for normality taking into account the time-varying variance can help to choose between a deterministic specification, as in {\bf A1}, and the case where, in addition to unconditional heteroscedasticity, second order dynamics are present (as in the case of spline-GARCH processes introduced by Engle and Rangel (2008)).
In other words, once the unconditional heteroscedasticity is removed from $u_t=h_t\epsilon_t$, the JB tests can help to decide whether ARCH effects are present or not in $(\epsilon_t)$. In this part we will study the normality of the log differences of the quarterly GDP implicit price deflators for the U.S., Korea and Australia from 10/01/1983 to 01/01/2017 ($n=132$). More precisely we use $y_{t,n}=100\log\left(GDP_{t,n}/GDP_{t-1,n}\right)$. The data can be downloaded from the webpage of the research division of the Federal Reserve Bank of St. Louis: https://fred.stlouisfed.org. The studied variables plotted in Figure \ref{data} seem to show cyclical heteroscedasticity. In the case of Korea we can suspect a global decreasing behavior leading to a stabilization after the Asian crisis. The time series are first filtered according to (\ref{model}). The non-correlation of the residuals is tested using the adaptive portmanteau test of Patilea and Ra\"{\i}ssi (2013). On the other hand we applied tests for second order dynamics developed by Patilea and Ra\"{\i}ssi (2014). The outputs (not displayed here) show that the hypothesis of no ARCH effects cannot be rejected. Hence the deterministic specification of the time-varying variance in {\bf A1} seems valid. Once the linear dynamics of the series appear to be adequately captured, the tests considered in this paper are applied to the residuals. The results are given in Table \ref{tab3}. When the null hypothesis of normality is rejected at the 5\% level, the p-value is displayed in bold type. It emerges that the outputs of the $T_{boot}$ test are in general clearly different from those of the $T_{cv}$ and $T_{st}$ tests. The p-values of the $T_{cv}$ are all lower than those of the $T_{boot}$. Note that in the case of the U.S. GDP implicit price deflator, the $T_{boot}$ test on the one hand, and the $T_{st}$ and $T_{cv}$ tests on the other hand, lead to different conclusions. In view of the outputs obtained from the simulation experiments, it is reasonable to decide that the normality assumption cannot be rejected for the U.S. data. It is likely that rejecting normality would suggest more sophisticated models, and could entail misspecified confidence intervals for the forecasts through the fitting of a heavy-tailed distribution to the U.S. data. \section*{References} \begin{description} \item[]{\sc Broto, C., and Ruiz, E.} (2008) Testing for conditional heteroscedasticity in the components of inflation. Working document, Banco de Espa\~{n}a. \item[]{\sc Dahlhaus, R.} (1997) Fitting time series models to nonstationary processes. \textit{Annals of Statistics} 25, 1-37. \item[]{\sc Drees, H., and St\u{a}ric\u{a}, C.} (2002) A simple non stationary model for stock returns. Preprint, Universit\"{a}t des Saarlandes. \item[]{\sc Engle, R.F.} (1982) Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. \textit{Econometrica} 50, 987-1007. \item[]{\sc Engle, R.F., and Rangel, J.G.} (2008) The spline GARCH model for unconditional volatility and its global macroeconomic causes. \textit{Review of Financial Studies} 21, 1187-1222. \item[]{\sc Fiorentini, G., Sentana, E., and Calzolari, G.} (2004) On the validity of the Jarque-Bera normality test in conditionally heteroscedastic dynamic regression models. \textit{Economics Letters} 83, 307-312. \item[] {\sc Francq, C., and Zako\"{i}an, J-M.} (2010) \textit{GARCH models: structure, statistical inference, and financial applications.} Wiley, Chichester.
\item[] {\sc Fry\'{z}lewicz, P.} (2005) Modelling and forecasting financial log-returns as locally stationary wavelet processes. \textit{Journal of Applied Statistics} 32, 503-528. \item[] {\sc Jarque, C.M., and Bera, A.K.} (1980) Efficient tests for normality, homoscedasticity and serial independence of regression residuals. \textit{Economics Letters} 6, 255-259. \item[]{\sc Jones, J.D., and Uri, N.} (1986) Money, inflation and causality (another look at the empirical evidence for the USA, 1953-84). \textit{Applied Economics} 19, 619-634. \item[]{\sc Lee, S., Park, S., and Lee, T.} (2010) A note on the Jarque-Bera normality test for GARCH innovations. \textit{Journal of the Korean Statistical Society} 39, 93-102. \item[]{\sc Lee, T.} (2012) A note on Jarque-Bera normality test for ARMA-GARCH innovations. \textit{Journal of the Korean Statistical Society} 41, 37-48. \item[]{\sc Mikosch, T., and St\u{a}ric\u{a}, C.} (2004) Stock market risk-return inference. An unconditional non-parametric approach. Research report, the Danish National Research Foundation: Network in Mathematical Physics and Stochastics. \item[] {\sc Patilea, V., and Ra\"{i}ssi, H.} (2012) Adaptive estimation of vector autoregressive models with time-varying variance: application to testing linear causality in mean. \textit{Journal of Statistical Planning and Inference} 142, 2891-2912. \item[] {\sc Patilea, V., and Ra\"{i}ssi, H.} (2013) Corrected portmanteau tests for VAR models with time-varying variance. \textit{Journal of Multivariate Analysis} 116, 190-207. \item[] {\sc Patilea, V., and Ra\"{i}ssi, H.} (2014) Testing second order dynamics for autoregressive processes in presence of time-varying variance. \textit{Journal of the American Statistical Association} 109, 1099-1111. \item[]{\sc Ra\"{i}ssi, H.} (2015) Autoregressive order identification for VAR models with non-constant variance. \textit{Communications in Statistics: Theory and Methods} 44, 2059-2078. \item[]{\sc Sensier, M., and van Dijk, D.} (2004) Testing for volatility changes in U.S. macroeconomic time series. \textit{Review of Economics and Statistics} 86, 833-839. \item[]{\sc Trimbur, T.M., and Bell, W.R.} (2010) Seasonal heteroscedasticity in time series data: modeling, estimation, and testing. In W. Bell, S. Holan, and T. McElroy (Eds.), \textit{Economic Time Series: Modelling and Seasonality}. Chapman and Hall, New York. \item[]{\sc Wasserman, L.} (2006) \textit{All of Nonparametric Statistics}. Springer, New York. \item[]{\sc Yu, H.} (2007) High moment partial sum processes of residuals in ARMA models and their applications. \textit{Journal of Time Series Analysis} 28, 72-91. \end{description} \section*{Tables and Figures} \begin{table}[ht] \begin{center} \caption{\small{Empirical size (in \%) of the studied tests for normality. The homoscedastic case.}} \begin{tabular}{|c|c|c|c|c|} \hline $n$ & 100 & 200 & 400 & 800 \\ \hline $T_{st}$ & 4.0 & 5.2 & 4.9 & 4.2 \\ \hline $T_{cv}$ & {\bf 7.2} & {\bf 7.5} & 5.6 & 5.0 \\ \hline $T_{boot}$ & 4.5 & 5.6 & 5.0 & 4.7 \\ \hline \end{tabular} \label{tab1} \end{center} \end{table} \begin{table}[ht] \begin{center} \caption{\small{Empirical size (in \%) of the studied tests for normality.
The heteroscedastic case.}} \begin{tabular}{|c|c|c|c|c|} \hline $n$ & 100 & 200 & 400 & 800 \\ \hline $T_{st}$ & {\bf 8.7} & {\bf 13.0} & {\bf 11.5} & {\bf 19.1} \\ \hline $T_{cv}$ & {\bf 9.4} & {\bf 9.2} & {\bf 8.3} & {\bf 7.8} \\ \hline $T_{boot}$ & 4.4 & {\bf 6.5} & 6.3 & 6.3 \\ \hline \end{tabular} \label{tab2} \end{center} \end{table} \begin{table}[ht] \begin{center} \caption{\small{Empirical size (in \%) of the $T_{f}$ and $T_{f,boot}$ tests for normality with fixed bandwidth. The heteroscedastic case.}} \begin{tabular}{|c|c|c||c|c|} \hline $\gamma$ &\multicolumn{2}{c||}{$1$}&\multicolumn{2}{c|}{$1.5$}\\ \hline $n$ & 100 & 200 & 100 & 200 \\ \hline $T_{f}$ & {\bf 11.3} & {\bf 14.0} & {\bf 12.0} & {\bf 14.3} \\ \hline $T_{f,boot}$ & {\bf 6.8} & {\bf 10.2} & {\bf 7.6} & {\bf 11.1} \\ \hline \end{tabular} \label{tabgam} \end{center} \end{table} \begin{table}[ht] \begin{center} \caption{\small{The p-values (in \%) of the tests for normality for GDP implicit price deflators for the U.S., Korea and Australia.}} \begin{tabular}{c|c|c|c|}\cline{2-4} & \mbox{U.S.} & \mbox{Korea} & \mbox{Australia} \\ \hline \multicolumn{1}{|c|}{$T_{st}$} & {\bf 3.8} & 16.4 & 50.9 \\ \hline \multicolumn{1}{|c|}{$T_{cv}$} & {\bf 2.3} & 82.0 & 21.0 \\ \hline \multicolumn{1}{|c|}{$T_{boot}$} & 8.2 & 87.0 & 49.0 \\ \hline \end{tabular} \label{tab3} \end{center} \end{table} \begin{figure} \caption{Empirical power of the $T_{st}$ and $T_{boot}$ tests as a function of the departure $\delta$ from the null hypothesis ($n=100$).} \label{powerfig} \end{figure} \begin{figure} \caption{The log differences of the quarterly GDP implicit price deflators for the U.S., Korea and Australia.} \label{data} \end{figure} \end{document}
\begin{document} \title{Optimal error intervals for properties of the quantum state} \date[]{Posted on the arXiv on \today} \author{Xikun Li} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543, Singapore} \author{Jiangwei Shang} \altaffiliation[Now at\ ]{Naturwissenschaftlich-Technische Fakult\"at, Universit\"at Siegen, Walter-Flex-Stra\ss{}e 3, 57068 Siegen, Germany} \email{corresponding email: [email protected]} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543, Singapore} \author{Hui Khoon Ng} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543, Singapore} \affiliation{Yale-NUS College, 16 College Avenue West, Singapore 138527, Singapore} \affiliation{MajuLab, CNRS-UNS-NUS-NTU International Joint Research Unit, UMI 3654, Singapore} \author{Berthold-Georg Englert} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543, Singapore} \affiliation{MajuLab, CNRS-UNS-NUS-NTU International Joint Research Unit, UMI 3654, Singapore} \affiliation{Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542, Singapore} \begin{abstract} Quantum state estimation aims at determining the quantum state from observed data. Estimating the full state can require considerable efforts, but one is often only interested in a few properties of the state, such as the fidelity with a target state, or the degree of correlation for a specified bipartite structure. Rather than first estimating the state, one can, and should, estimate those quantities of interest directly from the data. We propose the use of optimal error intervals as a meaningful way of stating the accuracy of the estimated property values. Optimal error intervals are analogs of the optimal error regions for state estimation [New J.~Phys.~\textbf{15}, 123026 (2013)]. They are optimal in two ways: They have the largest likelihood for the observed data and the pre-chosen size, and are the smallest for the pre-chosen probability of containing the true value. As in the state situation, such optimal error intervals admit a simple description in terms of the marginal likelihood for the data for the properties of interest. Here, we present the concept and construction of optimal error intervals, report on an iterative algorithm for reliable computation of the marginal likelihood (a quantity difficult to calculate reliably), explain how plausible intervals --- a notion of evidence provided by the data --- are related to our optimal error intervals, and illustrate our methods with single-qubit and two-qubit examples. \end{abstract} \pacs{03.65.Wj, 02.50.-r, 03.67.-a} \maketitle \section{Introduction}\label{sec:Intro} Quantum state estimation (QSE) --- the methods, procedures, and algorithms by which one converts tomographic experimental data into an educated guess about the state of the quantum system under investigation \cite{LNP649} --- provides just that: an estimate of the \emph{state}. For high-dimensional systems, such a state estimate can be hard to come by. But one is often not even interested in all the details the state conveys and rather cares only about the values of a few functions of the state. 
For example, when a source is supposed to emit quantum systems in a specified target state, the fidelity between the actual state and this target could be the one figure of merit we want to know. Then, a direct estimate of the few properties of interest, without first estimating the quantum state, is more practical and more immediately useful. The full state estimate may not even be available in the first place, if only measurements pertinent to the quantities of interest are made instead of a tomographically complete set, the latter involving a forbidding number of measurement settings in high dimensions. Furthermore, even if we have a good estimate for the quantum state, the values of the few properties of interest computed from this state may not be, and often are not, the best guess for those properties (see an illustration of this point in Sec.~\ref{sec:SCPR}). Therefore, we need to supplement QSE with SPE --- state-property estimation, that is: methods, procedures, and algorithms by which one directly arrives at an educated guess for the few properties of interest. Several schemes have been proposed for determining particular properties of the quantum state. These are prescriptions for the measurement scheme and/or the estimation procedure applied to the collected data. For example, there are schemes for measuring the traces of powers of the statistical operator, and then performing separability tests with the numbers thus found \cite{Ekert+5:02,Horodecki+1:02,Bovino+5:05}. Alternatively, one could use likelihood ratios for an educated guess about whether the state is separable or entangled \cite{BK+2:10}. Other schemes are tailored for measuring the fidelity with particular target states \cite{Somma+2:06,Guhne+3:07,Flammia+1:11}, yet another can be used for estimating the concurrence \cite{Walborn+4:06}. Schemes for measuring other properties of the quantum state can be found by Paris's method \cite{Paris:09}. Many of these schemes are property-specific, sometimes involving ad-hoc estimation procedures well suited only for those properties of interest. Here, in full analogy to the \emph{state} error regions of Ref.~\cite{Shang+4:13} for QSE, we describe general-purpose optimal error intervals for SPE, from measurement data obtained from generic tomographic measurements or property-specific schemes like those mentioned above. Following the maximum-likelihood philosophy for statistical inference, these error intervals give precise ``error bars'' around the maximum-likelihood (point) estimator for the properties in question consistent with the data. According to the Bayesian philosophy, they are intervals with a precise probability (credibility) of containing the true property values. As is the case for QSE error regions, these SPE error intervals are optimal in two ways. First, they have the largest likelihood for the data among all the intervals of the same size. Second, they are smallest among all intervals of the same credibility. Here, the natural notion of the size of an interval is its prior content, i.e., our belief in the interval's importance before any data are taken; the credibility of an interval is its posterior --- after taking the data into account --- content. We will focus on the situation in which a single property of the state is of interest. This is already sufficient for illustration, but is not a restriction of our methods.
(Note: If there are several properties of interest and a consistent set of values is needed, they should be estimated jointly, not one-by-one, to ensure that constraints are correctly taken into account.) The optimal error interval is a range of values for this property that answers the question: Given the observed data, how well do we know the value of the property? This question is well answered by the above-mentioned generalization of the maximum-likelihood point estimator to an interval of most-likely values, as well as the dual Bayesian picture of intervals of specified credibility. Our error interval is in contrast to other work \cite{Faist+1:16} based on the frequentists' concept of confidence regions/intervals, which answer a different question, one pertaining to all possible data that could have been observed, and are not the right concept for drawing inferences from the actual data acquired in a single run (see Appendix \ref{sec:appCC}). As we will see below, the concepts and strategies of the optimal error regions for QSE \cite{Shang+4:13,Shang+4:15,Seah+4:15} carry over naturally to this SPE situation. However, additional methods are needed for the specific computational tasks of SPE. In particular, there is the technical challenge of computing the property-specific likelihood: In QSE, the likelihood for the data as a function over the state space is straightforward to compute; in SPE, the relevant likelihood is the property-specific \emph{marginal likelihood}, which requires an integration of the usual (state) likelihood over the ``nuisance parameters'' that are not of interest. This can be difficult to compute even in classical statistics \cite{Bos:02}. Here, we offer an iterative algorithm that allows for reliable estimation of this marginal likelihood. In addition, we point out the connection between our optimal error intervals and \emph{plausible intervals}, an elegant notion of evidence for property values supported by the observed data \cite{Evans:15}. Plausible intervals offer a complementary understanding of our error intervals: Plausibility identifies a unique error interval that contains all values that the data are in favor of, with an associated critical credibility value. Here is a brief outline of the paper. We set the stage in Sec.~\ref{sec:stage} where we introduce the reconstruction space and review the notion of size and credibility of a region in the reconstruction space. Analogously, we identify the size and credibility of a range of property values in Sec.~\ref{sec:SCPR}. Then, the flexibility of choosing priors in the property-value space is discussed in Sec.~\ref{sec:prior}. With these tools at hand, we formulate in Sec.~\ref{sec:PE-OEI} the point estimators as well as the optimal error intervals for SPE. Section~\ref{sec:evidence} explains the connection to plausible regions and intervals. Section~\ref{sec:MCint} gives an efficient numerical algorithm that solves the high-dimensional integrals for the size and credibility. We illustrate the matter by simulated single-qubit and two-qubit experiments in Secs.~\ref{sec:1qubit} and \ref{sec:2qubit}, and close with a summary. Additional material is contained in several appendixes: The fundamental differences between Bayesian credible intervals and the confidence intervals of frequentism are the subject matter of Appendix \ref{sec:appCC}. Appendixes \ref{sec:appA} and \ref{sec:appB} deal with the limiting power laws of the prior-content functions that are studied numerically in Sec.~\ref{sec:2qubit}.
For ease of reference, a list of the various prior densities is given in Appendix \ref{sec:appC} and a list of the acronyms in Appendix \ref{sec:appD}. \section{Setting the stage}\label{sec:stage} As in Refs.~\cite{Shang+4:13,Shang+4:15,Seah+4:15}, we regard the probabilities $p=(p_1,p_2,\dots,p_K)$ of a measurement with $K$ outcomes as the basic parameters of the quantum state $\rho$. The Born rule \begin{equation}\label{eq:2-0a} p_k=\tr{\Pi_k\rho}=\expect{\Pi_k} \end{equation} states that the $k$th probability $p_k$ is the expectation value of the $k$th probability operator $\Pi_k$ in state $\rho$. Together, the $K$ probability operators constitute a probability-operator measurement (POM), \begin{equation}\label{eq:2-0b} \Pi_k\geq0\,,\quad\sum_{k=1}^K\Pi_k=\dyadic{1}\,, \end{equation} where $\dyadic{1}$ is the identity operator. The POM is fully tomographic if we can infer a unique state $\rho$ when the values of all $p_k$s are known. If the measurement provides partial rather than full tomography, we choose a suitable set of statistical operators from the state space, such that the mapping ${p\leftrightarrow\rho(p)}$ is one-to-one; this set is the reconstruction space $\mathcal{R}_0$. While there is no unique or best choice for the ``suitable set'' that makes up $\mathcal{R}_0$, the intended use for the state, once estimated, may provide additional criteria for choosing the reconstruction space. As far as QSE and SPE are concerned, however, the particulars of the mapping $p\to\rho(p)$ do not matter at all. Yet, that there is such a mapping, permits viewing a region $\mathcal{R}$ in $\mathcal{R}_0$ also as a region in the probability space, and we use the same symbols in both cases whenever the context is clear. Note, however, that while the probability space --- in which the numerical work is done --- is always convex, the reconstruction space of states may or may not be. Examples for that can be found in \cite{Rehacek+6:15} where various aspects of the mapping ${p\leftrightarrow\rho(p)}$ are discussed in the context of measuring pairwise complementary observables. The parameterization of the reconstruction space in terms of the probabilities gives us \begin{equation}\label{eq:2-1} (\mathrm{d} \rho)=(\mathrm{d} p)\, w_0(p) \end{equation} for the volume element $\equiv$ prior element in $\mathcal{R}_0$, where \begin{equation}\label{eq:2-2} (\mathrm{d} p)=\mathrm{d} p_1\mathrm{d} p_2\ldots \mathrm{d} p_K \, w_{\mathrm{cstr}}(p)\,, \end{equation} is the volume element in the probability space. The factor $w_{\mathrm{cstr}}(p)$ accounts for all the constraints that the probabilities must obey, among them the constraints that follow from the positivity of $\rho(p)$ in conjunction with the quantum-mechanical Born rule. Other than the mapping ${p\leftrightarrow\rho(p)}$, this is the \emph{only} place where quantum physics is present in the formalism of QSE and SPE. Yet, the quantum constraints in $w_{\mathrm{cstr}}(p)$ are the defining feature that distinguishes quantum state estimation from non-quantum state estimation. Probabilities $p$ that obey the constraints are called ``physical'' or ``permissible''. $w_{\mathrm{cstr}}$ vanishes on the unphysical $p$s and is generally a product of step functions and delta functions. The factor $w_0(p)$ in Eq.~\eqref{2-1} is the prior density of our choice; it reflects what we know about the quantum system before the data are taken. 
Usually, the prior density $w_0(p)$ gives positive weight to the finite neighborhoods of all states in $\mathcal{R}_0$; criteria for choosing the prior are reviewed in appendix A of Ref.~\cite{Shang+4:13} --- ``use common sense'' is a guiding principle. Although not really necessary, we shall assume that $w_0(p)$ and $w_{\mathrm{cstr}}(p)$ are normalized, \begin{equation}\label{eq:2-3} \int(\mathrm{d} p)=1\quad\mbox{and}\quad\int_{\mathcal{R}_0}(\mathrm{d}\rho)=1\,, \end{equation} so that we do not need to exhibit normalizing factors in what follows. Then, the size of a region ${\mathcal{R}\subseteq\mathcal{R}_0}$, that is: its prior content, is \begin{equation}\label{eq:2-4} S_{\mathcal{R}}=\int_{\mathcal{R}}(\mathrm{d}\rho) =\int_\mathcal{R}(\mathrm{d} p)\,w_0(p)\leq1\,, \end{equation} with equality only for ${\mathcal{R}=\mathcal{R}_0}$. This identification of size and prior content is natural in the context of state estimation; see \cite{Shang+4:13} for a discussion of this issue. While other contexts may very well have their own natural notions of size, such other contexts do not concern us here. After measuring a total number of ${N=\sum_{k=1}^K n_k}$ copies of the quantum system and observing the $k$th outcome $n_k$ times, the data $D$ are the recorded sequence of outcomes (``detector clicks''). The probability of obtaining $D$ is the point likelihood \begin{equation}\label{eq:2-5} L\left(D|p\right)=p_1^{n_1}p_2^{n_2}\cdots p_K^{n_K}\,. \end{equation} In accordance with Sec.~2.3 in Ref.~\cite{Shang+4:13}, then, the joint probability of finding $\rho(p)$ in the region $\mathcal{R}$ and obtaining data $D$ is \begin{eqnarray}\label{eq:2-6} \mathrm{Pr}\bigl(D\wedge \{\rho\in\mathcal{R}\}\bigr) &=&\int_{\mathcal{R}}(\mathrm{d} p)\,w_0(p)\,L(D|p) \nonumber\\&=&L(D|\mathcal{R})S_{\mathcal{R}}=C_{\mathcal{R}}(D)L(D)\,, \end{eqnarray} with (i) the region likelihood $L(D|\mathcal{R})$, (ii) the credibility --- the posterior content --- $C_{\mathcal{R}}(D)$ of the region, \begin{equation}\label{eq:2-6a} C_\mathcal{R}(D)=\frac{1}{L(D)}\int_\mathcal{R}(\mathrm{d} p)\,w_0(p)\,L(D|p)\,, \end{equation} and (iii) the prior likelihood for the data \begin{equation}\label{eq:2-7} L(D)= \int_{\mathcal{R}_0}(\mathrm{d} p)\,w_0(p)\,L(D|p)\,. \end{equation} \section{Size and credibility of a range of property values}\label{sec:SCPR} We wish to estimate a particular property, specified as a function $f(p)$ of the probabilities, with values between $0$ and $1$, \begin{equation}\label{eq:3-1} 0\leq f(p)\leq1\,; \end{equation} the restriction to this convenient range can be easily lifted, of course. Usually, there is at first a function $\tilde{f}(\rho)$ of the state $\rho$, and ${f(p)=\tilde{f}\bigl(\rho(p)\bigr)}$ is the implied function of $p$. We take for granted that the value of $\tilde{f}(\rho)$ can be found without requiring information that is not contained in the probabilities $p$. Otherwise, we need to restrict $\tilde{f}(\rho)$ to $\rho$s in $\mathcal{R}_0$. \begin{figure} \caption{A property value $F=f(p)$ specifies a hypersurface in the probability space and in the reconstruction space, and an interval of $F$ values corresponds to a region (see text).} \label{fig:regions} \end{figure} By convention, we use lower-case letters for the functions on the probability space and upper-case letters for the function values. The generic pair is $f(p),F$ here; we will meet the pairs $\phi(p),\Phi$ and $\gamma(p),\Gamma$ in Sec.~\ref{sec:1qubit}, and the pairs $\theta(p),\Theta$ and $\thetaopt(p),\Thetaopt$ in Sec.~\ref{sec:2qubit}.
A given $f(p)$ value --- $F=f(p)$, say --- identifies hypersurfaces in the probability space and the reconstruction space, and an interval ${F_1\leq f(p)\leq F_2}$ corresponds to a region; see Fig.~\figref{regions}. Such a region has size \begin{eqnarray}\label{eq:3-2} &&\int_{\mathcal{R}_0}(\mathrm{d}\rho)\,\Bigl[\eta\bigl(F_2-\tilde{f}(\rho)\bigr) -\eta\bigl(F_1-\tilde{f}(\rho)\bigr)\Bigr] \nonumber\\&=& \int(\mathrm{d} p)\,w_0(p)\Bigl[\eta\bigl(F_2-f(p)\bigr) -\eta\bigl(F_1-f(p)\bigr)\Bigr] \nonumber\\&=& \int(\mathrm{d} p)\,w_0(p)\int_{F_1}^{F_2}\mathrm{d} F\,\delta\bigl(F-f(p)\bigr) \end{eqnarray} and credibility \begin{eqnarray}\label{eq:3-3} &&\frac{1}{L(D)}\int_{\mathcal{R}_0}(\mathrm{d}\rho)\,L(D|p) \Bigl[\eta\bigl(F_2-\tilde{f}(\rho)\bigr) -\eta\bigl(F_1-\tilde{f}(\rho)\bigr)\Bigr] \nonumber\\&=& \frac{1}{L(D)}\int(\mathrm{d} p)\,w_0(p)L(D|p) \int_{F_1}^{F_2}\mathrm{d} F\,\delta\bigl(F-f(p)\bigr)\,, \end{eqnarray} where $\eta(\,)$ is Heaviside's unit step function and $\delta(\,)$ is Dirac's delta function. For an infinitesimal slice, ${F\leq f(p)\leq F+\mathrm{d} F}$, the size \eqref{3-2} identifies the prior element ${\mathrm{d} F\,W_0(F)}$ in $F$, \begin{equation}\label{eq:3-4} \mathrm{d} F\,W_0(F)=\int(\mathrm{d} p)\,w_0(p)\,\mathrm{d} F\,\delta\bigl(F-f(p)\bigr)\,, \end{equation} and the credibility \eqref{3-3} tells us the likelihood $L(D|F)$ of the data for given property value $F$, \begin{eqnarray}\label{eq:3-5} &&\frac{1}{L(D)}\ \mathrm{d} F\,W_0(F)L(D|F) \nonumber\\&=& \frac{1}{L(D)}\int(\mathrm{d} p)\,w_0(p)L(D|p)\,\mathrm{d} F\,\delta\bigl(F-f(p)\bigr)\,. \end{eqnarray} Of course, Eqs.~\eqref{3-4} and \eqref{3-5} are just the statements of Eqs.~\eqref{2-4} and \eqref{2-6a} in the current context of infinitesimal regions defined by an increment in $F$; it follows that $W_0(F)$ and $L(D|F)$ are positive everywhere, except possibly for a few isolated values of $F$. To avoid any potential confusion with the likelihood $L(D|p)$ of Eq.~\eqref{2-5}, we shall call $L(D|F)$ the $F$-likelihood. In passing, we note that $L(D|F)$ can be viewed as the marginal likelihood of $L(D|p)$ with respect to the probability density $\delta\bigl(F-f(p)\bigr)/W_0(F)$ in $p$. For the computation of $L(D|F)$, however, standard numerical methods for marginal likelihoods, such as those compared by Bos \cite{Bos:02}, do not give satisfactory results. The bench marking conducted by Bos speaks for itself; in particular, we note that none of those standard methods has a built-in accuracy check. Therefore, we are using the algorithm described in Sec.~\ref{sec:MCint}. In terms of $W_0(F)$ and $L(D|F)$, a finite interval of $F$ values, or the union of such intervals, denoted by the symbol $\mathcal{I}$, has the size \begin{equation}\label{eq:3-6} S_{\mathcal{I}}=\int_{\mathcal{I}}\mathrm{d} F\,W_0(F) \end{equation} and the credibility \begin{equation}\label{eq:3-7} C_{\mathcal{I}}=\frac{1}{L(D)}\int_{\mathcal{I}}\mathrm{d} F\,W_0(F)L(D|F)\,, \end{equation} where \begin{equation}\label{eq:3-8} L(D)=\int_{\mathcal{I}_0}\mathrm{d} F\,W_0(F)L(D|F) \end{equation} has the same value as the integral of Eq.~\eqref{2-7}. $\mathcal{I}_0$ denotes the whole range ${0\leq F\leq1}$ of property values, where we have $S_{\mathcal{I}_0}=C_{\mathcal{I}_0}=1$. 
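As a simple illustration of these definitions --- and only as an illustration, not the method used for the actual computations, which relies on the algorithm of Sec.~\ref{sec:MCint} --- the integrals above can be approximated by naive Monte Carlo sampling: draw many $p$s from the prior, weight each sample by the point likelihood of Eq.~\eqref{2-5}, and bin the corresponding values $F=f(p)$; the unweighted and weighted bin contents then approximate $W_0(F)\,\mathrm{d} F$ and $W_0(F)L(D|F)\,\mathrm{d} F$ of Eqs.~\eqref{3-4} and \eqref{3-5}, and sums over an interval approximate $S_{\mathcal{I}}$ and $C_{\mathcal{I}}$ of Eqs.~\eqref{3-6} and \eqref{3-7}. The short Python sketch below (written for this presentation, using the NumPy library) does this for a hypothetical four-outcome measurement with a prior that is uniform over the probability simplex; the counts $n_k$ and the property $f(p)=p_1$ are arbitrary choices for the sake of the example, and the quantum constraints contained in $w_{\mathrm{cstr}}(p)$ are ignored here.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

K = 4                                   # number of measurement outcomes
counts = np.array([22, 9, 5, 14])       # hypothetical click counts n_k

def f(p):
    # property of interest; here simply f(p) = p_1 (any other choice works)
    return p[:, 0]

# sample from a prior that is uniform over the probability simplex;
# the quantum constraints in w_cstr are ignored in this toy example
M = 200000
p = rng.dirichlet(np.ones(K), size=M)

logL = np.log(p) @ counts               # log of the point likelihood L(D|p)
L = np.exp(logL - logL.max())           # rescaled; overall factors cancel below
F = f(p)

# histogram approximations of W_0(F) dF and W_0(F) L(D|F) dF
bins = np.linspace(0.0, 1.0, 51)
W0_dF, _ = np.histogram(F, bins=bins)
WL_dF, _ = np.histogram(F, bins=bins, weights=L)
L_of_F = WL_dF / np.maximum(W0_dF, 1)   # proportional to L(D|F)

# size and credibility of the interval I = [F1, F2]
F1, F2 = 0.3, 0.5
inside = (F >= F1) & (F <= F2)
S_I = inside.mean()                     # prior content, Eq. (3-6)
C_I = L[inside].sum() / L.sum()         # posterior content, Eq. (3-7)
print(S_I, C_I)
\end{verbatim}
Such brute-force binning becomes unreliable when the likelihood is narrowly peaked in a high-dimensional probability space, which is one reason for the dedicated algorithm of Sec.~\ref{sec:MCint}.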
Note that the $F$-likelihood $L(D|F)$ is the natural derivative of the interval likelihood, the conditional probability
\begin{eqnarray}\label{eq:3-9}
L(D|\mathcal{I})&=&\frac{\mathrm{Pr}\bigl(D\wedge \{F\in\mathcal{I}\}\bigr)} {\mathrm{Pr}(F\in\mathcal{I})}\\
&=&\frac{1}{S_\mathcal{I}}\int(\mathrm{d} p)\,w_0(p)\,L(D|p)\int_\mathcal{I}\mathrm{d} F\,\delta(F-f(p))\,. \nonumber
\end{eqnarray}
If we now define the $F$-likelihood by the requirement
\begin{equation}\label{eq:3-10}
L(D|\mathcal{I})=\frac{1}{S_\mathcal{I}}\int_\mathcal{I} \mathrm{d} F\,W_0(F)\,L(D|F)\,,
\end{equation}
we recover the expression for $L(D|F)$ in Eq.~\eqref{3-5}.
\section{Free choice of prior}\label{sec:prior}
The prior density $W_0(F)$ and the $F$-likelihood $L(D|F)$ have an implicit dependence on the prior density $w_0(p)$ in probability space, and it may seem that we cannot choose $W_0(F)$ as we like, nor would the $F$-likelihood be independent of the prior for $F$. This is only apparently so: As usual, the likelihood does not depend on the prior. When we restrict the prior density $w_0(p)$ to the hypersurface where ${f(p)=F}$,
\begin{equation}\label{eq:4-0a}
w_0(p)\Bigr|_{f(p)=F}=W_0(F)u_F(p)\,,
\end{equation}
we exhibit the implied prior density $u_F(p)$ that tells us the relative weights of $p$s within the iso-$F$ hypersurface. As a consequence of the normalization of $w_0(p)$ and $W_0(F)$,
\begin{equation}\label{eq:4-0b}
\int(\mathrm{d} p)\,w_0(p)=1\,,\qquad\int_0^1\mathrm{d} F\,W_0(F)=1\,,
\end{equation}
which are more explicit versions of ${S_{\mathcal{R}_0}=1}$ and ${S_{\mathcal{I}_0}=1}$, $u_F(p)$ is also normalized,
\begin{equation}\label{eq:4-1}
\int (\mathrm{d} p)\,u_F(p)\,\delta(F-f(p))=1\,.
\end{equation}
In a change of perspective, let us now regard $u_F(p)$ and $W_0(F)$ as independently chosen prior densities for all iso-$F$ hypersurfaces and for property $F$. Since $F$ is the coordinate in $p$-space that is normal to the iso-$F$ hypersurfaces (see Fig.~\figref{regions}), these two prior densities together define a prior density on the whole probability space,
\begin{equation}\label{eq:4-3}
w_0(p)=W_0(f(p))\,u_{f(p)}(p)\,.
\end{equation}
The restriction to a particular value of $f(p)$ takes us back to Eq.~\eqref{4-0a}, as it should. For a prior density of the form \eqref{4-3}, the $F$-likelihood
\begin{eqnarray}\label{eq:4-0c}
L(D|F)&=&\frac{1}{W_0(F)}\int(\mathrm{d} p)\, w_0(p)L(D|p)\delta\bigl(F-f(p)\bigr) \nonumber\\
&=&\int(\mathrm{d} p)\, u_F(p)L(D|p)\delta\bigl(F-f(p)\bigr)
\end{eqnarray}
does not involve $W_0(F)$ and is solely determined by $u_F(p)$. Therefore, different choices for $W_0(F)$ in Eq.~\eqref{4-3} do not result in different $F$-likelihoods. Put differently, if we begin with some reference prior density $w_{\mathrm{r}}(p)$, which yields the iso-$F$ prior density
\begin{equation}\label{eq:4-0e}
u_F(p)=\frac{\displaystyle w_{\mathrm{r}}(p)\Bigr|_{f(p)=F}} {\displaystyle\int(\mathrm{d} p')\,w_{\mathrm{r}}(p')\,\delta\bigl(F-f(p')\bigr)}
\end{equation}
that we shall use throughout, then
\begin{equation}\label{eq:4-0d}
w_0(p)=\frac{w_{\mathrm{r}}(p)W_0(f(p))} {\displaystyle\int(\mathrm{d} p')\,w_{\mathrm{r}}(p')\,\delta\bigl(f(p)-f(p')\bigr)}
\end{equation}
is the corresponding prior density for the $W_0(F)$ of our liking.
Clearly, the normalization of $w_{\mathrm{r}}(p)$ is not important; more generally yet, the replacement
\begin{equation}\label{eq:4-0f}
w_{\mathrm{r}}(p)\to w_{\mathrm{r}}(p)g\bigl(f(p)\bigr)
\end{equation}
with an arbitrary function ${g(F)>0}$ has no effect on the right-hand sides of Eqs.~\eqref{4-0e}, \eqref{4-0d}, as well as \eqref{4-5} below. One can think of this replacement as modifying the prior density in $F$ that derives from $w_{\mathrm{r}}(p)$ upon proper normalization. While the $F$-likelihood
\begin{equation}\label{eq:4-5}
L(D|F)=\frac{\displaystyle\int(\mathrm{d} p)\,w_{\mathrm{r}}(p)\,\delta\bigl(F-f(p)\bigr)L(D|p)} {\displaystyle\int(\mathrm{d} p)\,w_{\mathrm{r}}(p)\,\delta\bigl(F-f(p)\bigr)}
\end{equation}
is the same for all $W_0(F)$s, it will usually be different for different $u_F(p)$s and thus for different $w_{\mathrm{r}}(p)$s. For sufficient data, however, $L(D|p)$ is so narrowly peaked in probability space that it will be essentially vanishing outside a small region within the iso-$F$ hypersurface, and then it is irrelevant which reference prior is used. In other words, the data dominate rather than the priors unless the data are too few. Typically, we will have a natural choice of prior density $w_0(p)$ on the probability space and accept the induced $W_0(F)$ and $u_F(p)$. Nevertheless, the flexibility offered by Eq.~\eqref{4-0d} is useful. We exploit it for the numerical procedure in Sec.~\ref{sec:MCint}. In the examples below, we employ two different reference priors $w_{\mathrm{r}}(p)$. The first is the \emph{primitive prior},
\begin{equation}\label{eq:4-6}
w_{\mathrm{primitive}}(p)= 1\,,
\end{equation}
so that the density is uniform in $p$ over the (physical) probability space. The second is the \emph{Jeffreys prior} \cite{Jeffreys:46},
\begin{equation}\label{eq:4-7}
w_{\mathrm{Jeffreys}}(p)\propto \frac1{\sqrt{p_{1}p_{2}\cdots p_{K}}}\,,
\end{equation}
which is a common choice of prior when no specific prior information is available \cite{Kass+1:96}. For ease of reference, there is a list of the various prior densities in Appendix~\ref{sec:appC}. In Sec.~\ref{sec:1qubit}, we use $w_{\mathrm{primitive}}(p)$ and $w_{\mathrm{Jeffreys}}(p)$ for $w_0(p)$ and then work with the induced priors $W_0(F)$ of Eq.~\eqref{3-4}, as this enables us to discuss the difference between direct and indirect estimation in Sec.~\ref{sec:1qubitb}. The natural choice of ${W_0(F)=1}$ will serve as the prior density in Sec.~\ref{sec:2qubit}.
\section{Point estimators and optimal error intervals}\label{sec:PE-OEI}
The $F$-likelihood $L(D|F)$ is largest for the maximum-likelihood estimator $\widehat{F}^{\ }_{\textsc{ml}}$,
\begin{equation}\label{eq:5-1}
\max_F\{L(D|F)\}=L(D|\widehat{F}^{\ }_{\textsc{ml}})\,.
\end{equation}
Another popular point estimator is the Bayesian mean estimator
\begin{equation}\label{eq:5-2}
\widehat{F}^{\ }_{\textsc{bm}}=\frac{1}{L(D)}\int_0^1\mathrm{d} F\,W_0(F)\,L(D|F)\,F\,.
\end{equation}
They are immediate analogs of the maximum-likelihood estimator $\widehat{\rho}^{\ }_{\textsc{ml}}$ for the state,
\begin{equation}\label{eq:5-3}
\widehat{\rho}^{\ }_{\textsc{ml}}=\rho(\widehat{p}^{\ }_{\textsc{ml}})\quad\mbox{with}\quad\max_p\{L(D|p)\}=L(D|\widehat{p}^{\ }_{\textsc{ml}})\,,
\end{equation}
and the Bayesian mean of the state,
\begin{equation}\label{eq:5-4}
\widehat{\rho}^{\ }_{\textsc{bm}}=\frac{1}{L(D)}\int(\mathrm{d}\rho)\,L(D|p)\,\rho\,.
\end{equation}
Usually, the value of $\tilde{f}(\rho)$ for one of these state estimators is different from the corresponding estimator,
\begin{equation}\label{eq:5-5}
\tilde{f}(\widehat{\rho}^{\ }_{\textsc{ml}})\neq\widehat{F}^{\ }_{\textsc{ml}}\,,\quad\tilde{f}(\widehat{\rho}^{\ }_{\textsc{bm}})\neq\widehat{F}^{\ }_{\textsc{bm}}\,,
\end{equation}
although the equal sign can hold for particular data $D$; see Fig.~\figref{regions}. As an exception, we note that $\tilde{f}\left(\widehat{\rho}^{\ }_{\textsc{bm}}\right)=\widehat{F}^{\ }_{\textsc{bm}}$ is always true if $f(p)$ is linear in $p$. The observation of Eq.~\eqref{5-5} --- the best guess for the property of interest may not, and often does not, come from the best guess for the quantum state --- deserves emphasis, although it is not a new insight. For example, the issue is discussed in Ref.~\cite{Schwemmer+6:15} in the context of confidence regions (see topic SM4 in the supplemental material). We return to this in Sec.~\ref{sec:1qubitb}. For reasons that are completely analogous to those for the optimal error regions in Ref.~\cite{Shang+4:13}, the optimal error intervals for property ${F=\tilde{f}(\rho)=f(p)}$ are the bounded-likelihood intervals (BLIs) specified by
\begin{equation}\label{eq:5-6}
\mathcal{I}_{\lambda}=\left\{F\bigm|L(D|F)\geq\lambda L(D|\widehat{F}^{\ }_{\textsc{ml}})\right\} \quad\mbox{with}\quad0\leq\lambda\leq1\,.
\end{equation}
While the set of $\mathcal{I}_{\lambda}$s is fully specified by the $F$-likelihood $L(D|F)$ and is independent of the prior density $W_0(F)$, the size and credibility of a specific $\mathcal{I}_{\lambda}$ do depend on the choice of $W_0(F)$. The interval of largest $F$-likelihood for given size $s$ --- the maximum-likelihood interval (MLI) --- is the BLI with ${\displaystyle s=S_{\mathcal{I}_\lambda}\equiv s_\lambda}$, and the interval of smallest size for given credibility~$c$ --- the smallest credible interval (SCI) --- is the BLI with ${\displaystyle c=C_{\mathcal{I}_{\lambda}}}\equiv c_{\lambda}$, where $S_{\mathcal{I}_{\lambda}}$ and $C_{\mathcal{I}_{\lambda}}$ are the size and credibility of Eqs.~\eqref{3-6} and \eqref{3-7} evaluated for the interval $\mathcal{I}_{\lambda}$. We have ${\mathcal{I}_\lambda\subseteq\mathcal{I}_0}$, ${s_\lambda\leq s_0=1}$, and ${c_\lambda\leq c_0=1}$ for ${\lambda\leq\lambda_0}$, with ${\lambda_0\geq0}$ given by ${\min_F\{L(D|F)\}=\lambda_0 L(D|\widehat{F}^{\ }_{\textsc{ml}})}$. As $\lambda$ increases from $\lambda_0$ to $1$, $s_\lambda$ and $c_\lambda$ decrease monotonically from $1$ to $0$. Moreover, we have the link between $s_{\lambda}$ and $c_{\lambda}$,
\begin{equation}\label{eq:5-7}
c_{\lambda}=\frac{\displaystyle\lambda s_{\lambda}+\int_\lambda^1 \mathrm{d} \lambda' \,s_{\lambda'}} {\displaystyle\int_0^1 \mathrm{d} \lambda' \,s_{\lambda'}}\,,
\end{equation}
exactly as that for the size and credibility of bounded-likelihood regions (BLRs) for state estimation in Ref.~\cite{Shang+4:13}. The normalizing integral of the size in the denominator has a particular significance of its own, as is discussed in the next section. As soon as the $F$-likelihood $L(D|F)$ is at hand, it is a simple matter to find the MLIs and the SCIs. Usually, we are most interested in the SCI for the desired credibility~$c$: The actual value of $F$ is in this SCI with probability $c$.
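To make this construction concrete, here is a minimal Python sketch (not part of the original analysis) that computes $s_\lambda$ and $c_\lambda$ for the BLIs of Eq.~\eqref{5-6} from a tabulated prior density $W_0(F)$ and $F$-likelihood $L(D|F)$, and then locates the SCI for a target credibility by bisection in $\lambda$; the gaussian-shaped likelihood at the end is a made-up test input, and a single-interval (unimodal) likelihood is assumed.
\begin{verbatim}
import numpy as np

def s_c_of_lambda(F, W0, LDF, lam):
    # size and credibility of the BLI of Eq. (5-6),
    # cf. Eqs. (3-6) and (3-7)
    inside = LDF >= lam * LDF.max()
    s = np.trapz(W0 * inside, F)
    c = (np.trapz(W0 * LDF * inside, F)
         / np.trapz(W0 * LDF, F))
    return s, c

def sci(F, W0, LDF, c_target, tol=1e-5):
    # c_lambda decreases as lambda grows, so bisect;
    # assumes the BLI is a single interval
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        _, c = s_c_of_lambda(F, W0, LDF, lam)
        lo, hi = (lam, hi) if c > c_target else (lo, lam)
    inside = LDF >= lo * LDF.max()
    return F[inside].min(), F[inside].max()

# made-up test input
F = np.linspace(0, 1, 2001)
W0 = np.ones_like(F)                        # flat prior
LDF = np.exp(-0.5 * ((F - 0.7) / 0.05)**2)  # toy L(D|F)
print(sci(F, W0, LDF, 0.95))    # about (0.60, 0.80)
\end{verbatim}
The MLI for a given size is found in the same way, with the bisection performed on $s_\lambda$ instead of $c_\lambda$.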
Since all BLIs contain the maximum-likelihood estimator $\widehat{F}^{\ }_{\textsc{ml}}$, each BLI, and thus each SCI, reports an error bar on $\widehat{F}^{\ }_{\textsc{ml}}$ in this precise sense. In marked contrast, $\widehat{F}^{\ }_{\textsc{bm}}$ plays no such distinguished role. \section{Plausible regions and intervals}\label{sec:evidence} The data provide evidence in favor of the $\rho$s in a region $\mathcal{R}\subset\mathcal{R}_0$ if we would put a higher bet on $\mathcal{R}$ after the data are recorded than before, that is: if the credibility of $\mathcal{R}$ is larger than its size, \begin{equation}\label{eq:5-8} C_{\mathcal{R}}(D)=\int_\mathcal{R}(\mathrm{d} p)\,w_0(p)\frac{L(D|p)}{L(D)} > \int_\mathcal{R}(\mathrm{d} p)\,w_0(p)=S_{\mathcal{R}}\,. \end{equation} In view of Eq.~\eqref{2-6}, this is equivalent to requiring that the region likelihood $L(D|\mathcal{R})$ exceeds $L(D)$, the likelihood for the data. Upon considering an infinitesimal vicinity of a state $\rho\leftrightarrow p$, we infer from Eq.~\eqref{5-8} that we have \emph{evidence in favor} of $\rho(p)\in\mathcal{R}_0$ if ${L(D|p)>L(D)}$, and we have \emph{evidence against} $p$, and thus against $\rho(p)$, if $L(D|p)<L(D)$. The ratio $L(D|p)/L(D)$, or any monotonic function of it, measures the strength of the evidence \cite{Evans:15}. It follows that the data provide strongest evidence for the maximum-likelihood estimator. Further, since ${c_{\lambda}>s_{\lambda}}$ for all BLRs, there is evidence in favor of each BLR. The larger BLRs, however, those for the lower likelihood thresholds set by smaller $\lambda$ values, contain subregions against which the data give evidence. The $\rho(p)$s with evidence against them are not plausible guesses for the actual quantum state. We borrow Evans's terminology \cite{Evans:15} and call the set of all $\rho$s, for which the data provide evidence in favor, the \emph{plausible region} --- the largest region with evidence in favor of all subregions. It is the SCR $\mathcal{R}_{\lambda}$ for the \emph{critical value} of $\lambda$, \begin{subequations}\label{eq:5-9} \begin{eqnarray}\label{eq:5-9a} &&\lambda_{\mathrm{crit}}\equiv\frac{L(D)}{L_{\mathrm{max}}(D)}=\int_0^1 \mathrm{d} \lambda \,s_{\lambda} \\\label{eq:5-9b} \mbox{with}\quad&&L_{\mathrm{max}}(D)=\max_p\bigl\{L(D|p)\bigr\}\,. \end{eqnarray} \end{subequations} The equal sign in Eq.~\eqref{5-9a} is that of Eq.~(21) in Ref.~\cite{Shang+4:13}. In a plot of $s_{\lambda}$ and $c_{\lambda}$, such as those in Figs.~4 and 5 of \cite{Shang+4:13} or in Figs.~\figref{Size&Credibility} and \figref{BLI-SizeCred} below, we can identify $\lambda_{\mathrm{crit}}$ as the $\lambda$ value with the largest difference $c_{\lambda}-s_{\lambda}$. This concept of the plausible region for QSE carries over to SPE, where we have the \emph{plausible interval} composed of those $F$ values for which $L(D|F)$ exceeds $L(D)$. It is the SCI $\mathcal{I}_{\lambda}$ for the critical $\lambda$ value, \begin{subequations}\label{eq:5-10} \begin{eqnarray}\label{eq:5-10a} &&\lambda_{\mathrm{crit}}\equiv\frac{L(D)}{L_{\mathrm{max}}(D)}=\int_0^1 \mathrm{d} \lambda \,s_{\lambda} \\\label{eq:5-10b} \mbox{with}\quad&&L_{\mathrm{max}}(D)=\max_{F}\bigl\{L(D|F)\bigr\}\,, \end{eqnarray} \end{subequations} where now $L_{\mathrm{max}}(D)$ and $s_{\lambda}$ refer to the $F$-likelihood $L(D|F)$. Usually, the values of $L_{\mathrm{max}}(D)$ in Eqs.~\eqref{5-9b} and \eqref{5-10b} are different and, therefore, the critical $\lambda$ values are different. 
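The two expressions for $\lambda_{\mathrm{crit}}$ in Eq.~\eqref{5-10a} are easily checked numerically for tabulated $W_0(F)$ and $L(D|F)$; the Python sketch below uses the same made-up gaussian-shaped $F$-likelihood as in the previous sketch (a toy input, not a measured one) and also reports the corresponding plausible interval.
\begin{verbatim}
import numpy as np

F = np.linspace(0, 1, 2001)
W0 = np.ones_like(F)                     # flat prior
LDF = np.exp(-0.5*((F - 0.7)/0.05)**2)   # toy L(D|F)

LD = np.trapz(W0*LDF, F)                 # Eq. (3-8)
lam_crit = LD / LDF.max()                # Eq. (5-10a)

lams = np.linspace(0, 1, 2001)
s = np.array([np.trapz(W0*(LDF >= l*LDF.max()), F)
              for l in lams])
print(lam_crit, np.trapz(s, lams))       # equal values

inside = LDF >= lam_crit*LDF.max()       # plausible
print(F[inside].min(), F[inside].max())  # interval
\end{verbatim}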
After measuring a sufficient number of copies of the quantum system --- symbolically: ``${\,N\gg1\,}$'' --- one can invoke the central limit theorem and approximate the $F$-likelihood by a gaussian with a width $\propto N^{-1/2}$, \begin{equation}\label{eq:5-11} N\gg1\,:\quad L(D|F)\simeq L_{\mathrm{max}}(D)\,\Exp{-\frac{N}{2\alpha^2}(F-\widehat{F}^{\ }_{\textsc{ml}})^2}\,, \end{equation} where ${\alpha>0}$ is a scenario-dependent constant. The weak $N$-dependence of $\alpha$ and $\widehat{F}^{\ }_{\textsc{ml}}$ is irrelevant here and will be ignored. Then, the critical $\lambda$ value is \begin{equation}\label{eq:5-12} N\gg1\,:\quad \lambda_{\mathrm{crit}}\simeq W_0(\widehat{F}^{\ }_{\textsc{ml}})\alpha\sqrt{\frac{2\pi}{N}}\,, \end{equation} provided that $W_0(F)$ is smooth near $\widehat{F}^{\ }_{\textsc{ml}}$, which property we take for granted. Accordingly, the size and credibility of the plausible interval are \begin{equation}\label{eq:5-13} N\gg1\,:\quad\left\{\begin{array}{r@{\;\simeq\;}l}\displaystyle s_{\lambda_{\mathrm{crit}}}^{\ }&\displaystyle 2\lambda_{\mathrm{crit}}{\left(\frac{1}{\pi}\log\frac{1}{\lambda_{\mathrm{crit}}}\right)}^{\frac{1}{2}}\,, \\[2.5ex] \displaystyle c_{\lambda_{\mathrm{crit}}}^{\ } &\displaystyle \mathrm{erf}{\left({\left(\log\frac{1}{\lambda_{\mathrm{crit}}}\right)}^{\frac{1}{2}}\right)} \end{array}\right. \end{equation} under these circumstances. When focusing on the dominating $N$ dependence, we have \begin{equation}\label{eq:5-14} N\gg1\,:\quad \lambda_{\mathrm{crit}}\,,\ s_{\lambda_{\mathrm{crit}}}^{\ }\,,\ 1-c_{\lambda_{\mathrm{crit}}}^{\ } \propto\frac{1}{\sqrt{N}}\,, \end{equation} which conveys an important message: As more and more copies of the quantum system are measured, the plausible interval is losing in size and gaining in credibility. \section{Numerical procedures}\label{sec:MCint} The size element of Eq.~\eqref{3-4}, the credibility element of Eq.~\eqref{3-5}, and the $F$-likelihood of Eqs.~\eqref{4-0c} and \eqref{4-5}, introduced in Eq.~\eqref{3-5}, are the core ingredients needed for the construction of error intervals for $F$. The integrals involved are usually high-dimensional and can only be computed by Monte Carlo (MC) methods. The expressions with the delta-function factors in their integrands are, however, ill-suited for a MC integration. Therefore, we consider the antiderivatives \begin{equation}\label{eq:6-1} P_{\mathrm{r},0}(F)=\int(\mathrm{d} p)\,w_{\mathrm{r}}(p)\,\eta\bigl(F-f(p)\bigr) \end{equation} and \begin{equation}\label{eq:6-2} P_{\mathrm{r},0}D(F)=\frac{1}{L(D)} \int(\mathrm{d} p)\,w_{\mathrm{r}}(p)L(D|p)\,\eta\bigl(F-f(p)\bigr)\,. \end{equation} These are the prior and posterior contents of the interval ${0\leq f(p)\leq F}$ for the reference prior with density $w_{\mathrm{r}}(p)$. The denominator in the $F$-likelihood of Eq.~\eqref{4-5} is the derivative of $P_{\mathrm{r},0}(F)$ with respect to $F$, the numerator that of $L(D)P_{\mathrm{r},0}D(F)$. Let us now focus on the denominator in Eq.~\eqref{4-5}, \begin{equation}\label{eq:6-3} W_{\mathrm{r},0}(F)=\frac{\partial}{\partial F}P_{\mathrm{r},0}(F) =\int(\mathrm{d} p)\,w_{\mathrm{r}}(p)\,\delta\bigl(F-f(p)\bigr)\,. \end{equation} For the MC integration, we sample the probability space in accordance with the prior $w_{\mathrm{r}}(p)$ and due attention to $w_{\mathrm{cstr}}(p)$ of Eq.~\eqref{2-2}, for which the methods described in Refs.~\cite{Shang+4:15} and \cite{Seah+4:15} are suitable. 
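A direct MC estimate of the antiderivative $P_{\mathrm{r},0}(F)$ of Eq.~\eqref{6-1} is simply the fraction of the sampled $p$s with ${f(p)\leq F}$; a minimal Python sketch, again with the flat simplex standing in for the reference prior and a made-up property, is given below. For the posterior analog one would weight each sampled $p$ by $L(D|p)$ or sample from the posterior directly.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
K = 4
f = lambda p: p[..., 0] + p[..., 1]  # made-up property

# sample per the reference prior w_r(p); here: flat
# simplex, ignoring the quantum constraint
p = rng.dirichlet(np.ones(K), size=100_000)
Fs = np.sort(f(p))

def P_r0(F):
    # Eq. (6-1): fraction of the sample with f(p) <= F
    return np.searchsorted(Fs, F, side='right') / Fs.size

for F in (0.25, 0.5, 0.75):
    P = P_r0(F)
    err = np.sqrt(P*(1 - P)/Fs.size)  # error quoted below
    print(F, P, err)
\end{verbatim}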
This gives us $P_{\mathrm{r},0}(F)$ together with fluctuations that originate in the random sampling and the finite size of the sample; for a sample with $N_{\mathrm{sample}}$ values of $p$, the expected root-mean-square error is $\bigl[P_{\mathrm{r},0}(F)\bigl(1-P_{\mathrm{r},0}(F)\bigr)/N_{\mathrm{sample}}\bigr]^{1/2}$. We cannot differentiate this numerical approximation of $P_{\mathrm{r},0}(F)$, but we can fit a several-parameter function to the values produced by the MC integration, and then differentiate this function and so arrive at an approximation $\widetilde{W}_{\mathrm{r},0}(F)$ for $W_{\mathrm{r},0}(F)$. How can we judge the quality of this approximation? For the prior density $w_0(p)$ in Eq.~\eqref{4-0d} with any chosen $W_0(F)$ \cite{note6}, the antiderivative of the integral in Eq.~\eqref{3-4} yields
\begin{eqnarray}\label{eq:6-4}
P_0(F)&=&\int(\mathrm{d} p)\,w_0(p)\,\eta\bigl(F-f(p)\bigr)\nonumber\\&=& \int(\mathrm{d} p)\,\frac{w_{\mathrm{r}}(p)W_0\bigl(f(p)\bigr)}{W_{\mathrm{r},0}\bigl(f(p)\bigr)} \int_0^F\mathrm{d} F'\,\delta\bigl(F'-f(p)\bigr)\nonumber\\&=& \int_0^F\frac{\mathrm{d} F'\, W_0(F')}{W_{\mathrm{r},0}(F')}\int(\mathrm{d} p)\,w_{\mathrm{r}}(p)\, \delta\bigl(F'-f(p)\bigr)\nonumber\\ &=&\int_0^F\mathrm{d} F'\, W_0(F')
\end{eqnarray}
upon recalling Eq.~\eqref{6-3}. When the approximation
\begin{equation}\label{eq:6-4a}
\widetilde{w}_0(p)=\frac{w_{\mathrm{r}}(p)W_0\bigl(f(p)\bigr)}{\widetilde{W}_{\mathrm{r},0}\bigl(f(p)\bigr)}
\end{equation}
is used instead, we find
\begin{eqnarray}\label{eq:6-5}
\widetilde{P}_0(F)&=&\int(\mathrm{d} p)\,\widetilde{w}_0(p)\,\eta\bigl(F-f(p)\bigr) \nonumber\\&=&\int_0^F\!\mathrm{d} F'\,\frac{W_{\mathrm{r},0}(F')}{\widetilde{W}_{\mathrm{r},0}(F')}W_0(F')\,.
\end{eqnarray}
It follows that $\widetilde{W}_{\mathrm{r},0}(F)$ approximates $W_{\mathrm{r},0}(F)$ well if ${\widetilde{P}_0(F)\simeq \int_0^F\mathrm{d} F'\,W_0(F')}$ is sufficiently accurate. If it is not, an approximation $\widetilde{W}_0(F)$ for $\displaystyle\frac{\partial}{\partial F}\widetilde{P}_0(F)$ provides $\widetilde{W}_{\mathrm{r},0}(F)\Bigr|_{\mathrm{new}}=\widetilde{W}_{\mathrm{r},0}(F)\widetilde{W}_0(F)/W_0(F)$, which improves on the approximation $\widetilde{W}_{\mathrm{r},0}(F)$. It does not give us $W_{\mathrm{r},0}(F)$ exactly because the integral in Eq.~\eqref{6-5} also requires a MC integration with its intrinsic noise. Yet, we have here the essence of an iteration algorithm for successive approximations of $W_{\mathrm{r},0}(F)$. Since the $F$-likelihood $L(D|F)$ does not depend on the prior $W_0(F)$, we can choose ${W_0(F)=1}$ so that ${P_0(F)=F}$ in Eq.~\eqref{6-4}, and the $n$th iteration of the algorithm consists of these steps:
\STEP{S1}{For given $W_{\mathrm{r},0}^{(n)}(F)$, sample the probability space in accordance with the prior ${w^{(n)}_0(p)=w_{\mathrm{r}}(p)/W_{\mathrm{r},0}^{(n)}\bigl(f(p)\bigr)}$.}
\STEP{S2}{Use this sample for a MC integration of
\begin{displaymath}
P^{(n)}_0(F)=\int(\mathrm{d} p)\,w^{(n)}_0(p)\,\eta\bigl(F-f(p)\bigr)\,.
\end{displaymath}}
\STEP{S3}{Escape the loop if ${P^{(n)}_0(F)\simeq F}$ with the desired accuracy.}
\STEP{S4}{Fit a suitable several-parameter function to the MC values of $P^{(n)}_0(F)$.}
\STEP{S5}{Differentiate this function to obtain $$W^{(n)}_0(F)\simeq \frac{\partial}{\partial F}P^{(n)}_0(F)\,;$$ update ${n\to n+1}$ and $$W_{\mathrm{r},0}^{(n)}(F)\to W_{\mathrm{r},0}^{(n+1)}(F)=W_{\mathrm{r},0}^{(n)}(F)W^{(n)}_0(F)\,;$$ return to step S1.}
\par\noindent The sampling in step S1 consumes most of the CPU time in each round of iteration. It is, therefore, economical to start with smaller samples and increase the sample size as the approximation gets better. Numerical codes for sampling by the Hamiltonian MC method described in Ref.~\cite{Seah+4:15} are available at a website \cite{QSampling}, where one also finds large ready-for-use samples for a variety of POMs and priors. Similarly, we compute the numerator $W_{\mathrm{r},0}D(F)$ in Eq.~\eqref{4-5}. With the replacements ${W_{\mathrm{r},0}^{(n)}(F)\to W_{\mathrm{r},0}D^{(n)}(F)}$ and ${w_{\mathrm{r}}(p)\to w_{\mathrm{r}}(p)L(D|p)}$, the same iteration algorithm works. Eventually, we get the $F$-likelihood,
\begin{equation}\label{eq:6-6}
L(D|F)=\frac{W_{\mathrm{r},0}D(F)}{W_{\mathrm{r},0}(F)}\,,
\end{equation}
and can then proceed to determine the BLIs of Sec.~\ref{sec:PE-OEI}. In practice, it is not really necessary to iterate until $P^{(n)}_0(F)$ equals $F$ to a very high accuracy. A few rounds of the iteration are usually enough for establishing a $w^{(n)}_0(p)$ for which the induced prior density $W_{\mathrm{r},0}^{(n)}(F)$ is reliable over the whole range from ${F=0}$ to ${F=1}$. Then a fit to $P_{\mathrm{r},0}D(F)$, obtained from a MC integration with a sample in accordance with the posterior density $\propto w^{(n)}_0(p)L(D|p)$, provides an equally reliable posterior density $W_{\mathrm{r},0}D^{(n)}(F)$, and so gives us the $F$-likelihood of Eq.~\eqref{6-6}. Regarding the fitting of a several-parameter function in step S4, we note that, usually, a truncated Fourier series of the form
\begin{eqnarray}\label{eq:6-7}
P_0^{(n)}(F)&\simeq&F+a_1\sin(\pi F)+a_2\sin(2\pi F)\nonumber\\ &&\phantom{F}+a_3\sin(3\pi F)+\cdots\,,
\end{eqnarray}
with the amplitudes $a_1,a_2,a_3,\dots$ as the fitting parameters, is a good choice, possibly modified such that known properties of $P_0^{(n)}(F)$ are properly taken into account. These matters are illustrated by the examples in Sec.~\ref{sec:2qubit}; see, in particular, Fig.~\figref{P0-Iteration}.
\section{Example: One qubit}\label{sec:1qubit}
\begin{figure*}
\caption{\label{fig:Size&Credibility}}
\end{figure*}
As a first application, let us consider the single-qubit situation. The state of a qubit can be written as
\begin{equation}\label{eq:7-4a}
\rho(\vec{r})=\frac{1}{2}(\dyadic{1}+\vec{r}\cdot\bfsym{\sigma}),
\end{equation}
where $\bfsym{\sigma}=(\sigma_x,\sigma_y,\sigma_z)$ is the vector of Pauli operators, and ${\vec{r}=(x,y,z)}$ is the Bloch vector with ${x=\langle\sigma_x\rangle}$, ${y=\langle\sigma_y\rangle}$, and ${z=\langle\sigma_z\rangle}$. The tomographic measurement is taken to be the four-outcome tetrahedron measurement of Ref.~\cite{Rehacek+1:04}, with outcome operators
\begin{equation}\label{eq:7-4}
\Pi_k=\frac{1}{4}(\dyadic{1}+\vec{a}_k\cdot\bfsym{\sigma}) \quad \mathrm{with}\,\,k=1,2,3,4\,.
\end{equation}
Here, the four unit vectors $\vec{a}_k$ are chosen such that they are respectively orthogonal to the four faces of a symmetric tetrahedron (hence the name). We orient them such that the probabilities ${p_k=\frac{1}{4}(1+\vec{r}\cdot\vec{a}_k)}$ for the four outcomes are
\begin{eqnarray}\label{eq:7-5}
&&p_1=\frac{1}{4}(1-z)\,,\qquad p_2=\frac{1}{4}\!\left(1+\frac{\sqrt{8}}{3}y+\frac{1}{3}z\right),\nonumber\\ &&\begin{array}{@{}l@{}}p_3\\p_4\end{array}\Biggr\} =\frac{1}{4}\!\left(1\pm \sqrt{\frac{2}{3}}x -\frac{\sqrt 2}{3}y+\frac{1}{3}z\right).
\end{eqnarray}
The tetrahedron measurement is tomographically complete for the qubit and so allows the full reconstruction of the state, which we accomplish with the aid of
\begin{equation}\label{eq:7-5'}
\vec{r}=3\sum_{k=1}^4p_k\vec{a}_k\,.
\end{equation}
This tomographic completeness is useful for our discussion, since it permits both the estimation of a property of interest directly from the $p_k$s and the estimation of that property by first estimating the density operator $\rho$; see Sec.~\ref{sec:1qubitb}.
\subsection{SCIs for fidelity and purity}\label{sec:1qubita}
\begin{figure}
\caption{\label{fig:qubitSCIs}}
\end{figure}
We construct the SCIs for two properties: the fidelity with respect to some target state, and the normalized purity. Both have values between $0$ and $1$, so that the concepts and tools of the preceding sections are immediately applicable. The fidelity
\begin{equation}\label{eq:7-0a}
\phi=\tr{\bfsym{|}\sqrt{\rho}\,\sqrt{\rho_{\mathrm{tar}}}\bfsym{|}}
\end{equation}
is a measure of overlap between the actual state $\rho$ and the target state $\rho_{\mathrm{tar}}$. For these two qubit states, we express the fidelity in terms of the Bloch vectors $\vec{r}$ and ${\vec{t}=\tr{\bfsym{\sigma}\rho_{\mathrm{tar}}}}$,
\begin{equation}\label{eq:7-0b}
\phi={\left[\frac{1}{2}(1+\vec{r}\cdot\vec{t}) +\frac{1}{2}\sqrt{1-r^2}\sqrt{1-t^2}\right]}^{\frac{1}{2}} \geq\sqrt{\frac{1-t}{2}}\,,
\end{equation}
where ${r=\bfsym{|}\vec{r}\bfsym{|}}$ and ${t=\bfsym{|}\vec{t}\bfsym{|}}$, and the lower bound is reached for ${\vec{r}=-\vec{t}/t}$. When the target state is pure ($\rho_{\mathrm{tar}}=|\mathrm{tar}\rangle\langle\mathrm{tar}|$, ${t=1}$), Eq.~\eqref{7-0b} simplifies to $\phi ={\left[\frac{1}{2}(1+\vec{r}\cdot\vec{t})\right]}^{\frac{1}{2}} =\langle\mathrm{tar}|\rho|\mathrm{tar}\rangle^{\frac{1}{2}}$. In particular, for $|\mathrm{tar}\rangle=|0\rangle$, the $+1$ eigenstate of $\sigma_z$, we have ${\vec{t}=\vec{e}_z}$, and the fidelity is a function of only the $z$-component of $\vec{r}$, namely $\phi={\left[\frac{1}{2}(1+z)\right]}^{\frac{1}{2}}$. The purity $\tr{\rho^2}$ is a measure of the mixedness of a state, with values between $\frac{1}{2}$ (for the completely mixed state) and $1$ (for pure states). We define the normalized purity by ${\gamma=2\,\tr{\rho^2}-1}$, so that ${\gamma=r^2}$ is simply the squared length of the Bloch vector. Expressed in terms of the tetrahedron probabilities in Eq.~\eqref{7-5}, we have
\begin{equation}\label{eq:7-0d}
\gamma(p)=12\sum_{k=1}^4p_k^2-3 \quad\mbox{and}\quad \phi(p)=\sqrt{1-2p_1}
\end{equation}
for the normalized purity and the fidelity with $\rho_{\mathrm{tar}}={\frac{1}{2}(\dyadic{1}+\sigma_z)}$, respectively. In a simulated experiment, the state used to generate the data is ${\rho=\frac{1}{2}(\dyadic{1}+0.9\,\sigma_{z})}$.
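The relations above are easy to code; the Python sketch below (illustrative only, with one set of unit vectors $\vec{a}_k$ consistent with Eq.~\eqref{7-5}) evaluates the tetrahedron probabilities for a given Bloch vector, inverts them with Eq.~\eqref{7-5'}, computes $\gamma(p)$ and $\phi(p)$ of Eq.~\eqref{7-0d}, and shows one possible way of simulating the detector clicks.
\begin{verbatim}
import numpy as np

s23, s2 = np.sqrt(2/3), np.sqrt(2)
a = np.array([[0.0, 0.0, -1.0],
              [0.0, 2*s2/3, 1/3],
              [ s23, -s2/3, 1/3],
              [-s23, -s2/3, 1/3]])  # matches Eq. (7-5)

def tetra_probs(r):
    return (1 + a @ r) / 4          # p_k = (1 + r.a_k)/4

def bloch_from_probs(p):
    return 3 * p @ a                # Eq. (7-5')

def gamma(p):
    return 12 * np.sum(p**2) - 3    # purity, Eq. (7-0d)

def phi(p):
    return np.sqrt(1 - 2*p[0])      # fidelity, Eq. (7-0d)

r_true = np.array([0.0, 0.0, 0.9])  # state of the text
p = tetra_probs(r_true)
print(p, bloch_from_probs(p))       # recovers r_true
print(gamma(p), phi(p))             # 0.81 and sqrt(0.95)

# one way to simulate N = 36 clicks (illustrative)
rng = np.random.default_rng(0)
print(rng.multinomial(36, p))
\end{verbatim}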
This state has fidelity ${\Phi=\sqrt{0.95}=0.9747}$ (for target state $|0\rangle$) and normalized purity ${\Gamma=0.81}$ --- the ``true'' values for the two properties to be estimated from the data. A particular simulation measured $36$ copies of this state using the tetrahedron measurement, and gave data $D=(n_1,n_2,n_3,n_4)=(2,10,11,13)$, where $n_k$ is the number of clicks registered by the detector for outcome $\Pi_k$. In this low-dimensional single-qubit case, the induced priors $W_0(\Phi)$ and $W_0(\Gamma)$, both for the primitive prior \eqref{4-6} and the Jeffreys prior \eqref{4-7}, are obtained by an analytical evaluation of the analogs of the integral in Eq.~\eqref{3-4}. While a MC integration is needed for the analogs of the integral in Eq.~\eqref{3-5}, one can do without the full machinery of Sec.~\ref{sec:MCint}. The top plots in Fig.~\ref{fig:Size&Credibility} report the $F$-likelihoods $L(D|\Phi)$ and $L(D|\Gamma)$ thus obtained for the Jeffreys prior and the primitive prior, respectively. The bottom plots show the size $s_{\lambda}^{\ }$ and the credibility $c_{\lambda}^{\ }$ for the resulting BLIs, computed from these $F$-likelihoods together with the respective induced priors. The dots mark values obtained by numerical integration that employs the Hamiltonian Monte Carlo algorithm for sampling the quantum state space \cite{Seah+4:15} in accordance with the prior and posterior distributions. Consistency with the relation in Eq.~\eqref{5-7} between $s_\lambda$ and $c_\lambda$ is demonstrated by the green curves through the credibility points, which are obtained by integrating over the cyan curves fitted to the size points. The SCIs resulting from these $s_\lambda$ and $c_\lambda$ are reported in Fig.~\figref{qubitSCIs} for fidelity $\Phi$ and normalized purity $\Gamma$, both for the primitive prior (red lines `a') and for the Jeffreys prior (blue lines `b'). The SCI with a specific credibility is the horizontal interval between the two branches of the curves; see the plausible intervals marked on the plots. An immediate observation is that the choice of prior has little effect on the SCIs, although the total number of measured copies is not large. In other words, already for the small number of ${N=36}$ qubits measured, our conclusions are dominated by the data, not by the choice of prior.
\begin{figure}
\caption{\label{fig:DSPEvsISPE}}
\end{figure}
\subsection{Direct and indirect estimation of state properties}\label{sec:1qubitb}
As mentioned in the Introduction and also in Sec.~\ref{sec:PE-OEI}, the best guess for the properties of interest may not, and often does not, come from the best guess for the quantum state. For an illustration of this matter, we compare here the two approaches for our qubit example. The error intervals are either constructed by directly estimating the value of the property from the data, as we have done in the previous section, or by first constructing the error regions (SCRs specifically; see Ref.~\cite{Shang+4:13}) for the quantum state and then taking, as the error interval for the desired property, the range of property values for the states contained in that state error region; see Fig.~\figref{regions}. We refer to the two respective approaches as direct and indirect state-property estimation, with the abbreviations DSPE and ISPE. Of course, DSPE is simply SPE proper.
Figure~\figref{DSPEvsISPE} shows the error intervals for fidelity $\Phi$ and normalized purity $\Gamma$ for the single-qubit data of Figs.~\figref{Size&Credibility} and \figref{qubitSCIs}. The purple curves labeled `a' are obtained via ISPE and the red curves labeled `b' via DSPE. Here, the primitive prior of Eq.~\eqref{4-6} is used as $w_0(p)$ on the probability space, together with the induced prior densities $W_0(\Phi)$ and $W_0(\Gamma)$ for the fidelity and the normalized purity. Clearly, the error intervals obtained by these two approaches are quite different in this situation and, in particular, DSPE reports smaller intervals than ISPE does. More importantly, the intervals obtained via ISPE and DSPE are also rather different in meaning: The credibility value used for constructing the interval from DSPE (the SCI) is the posterior content of that interval for the property itself; the credibility value used in ISPE, however, is the posterior content for the \emph{state} error region, and often has no simple relation to the probability of containing the true property value. This is the situation depicted in Fig.~\figref{regions}, where the range of $F$ values across the SCR is larger than the range of the SCI.
\section{Example: Two qubits}\label{sec:2qubit}
\subsection{CHSH quantity, TAT scheme, and simulated experiment}
In our second example we consider qubit pairs and, as in Sec.~4.3 in Ref.~\cite{Seah+4:15}, the property of interest is the Clauser-Horne-Shimony-Holt (CHSH) quantity \cite{Clauser+3:69,Clauser+1:74},
\begin{equation}\label{eq:8-1}
\theta=\tr{(A_1\otimes B_1 + A_2\otimes B_1 + A_1\otimes B_2 - A_2\otimes B_2)\rho}\,,
\end{equation}
where $A_j=\vec{a}_j\cdot\bfsym{\sigma}$ and $B_{j'}=\vec{b}_{j'}\cdot\bfsym{\sigma}$, with unit vectors $\vec{a}_1$, $\vec{a}_2$, $\vec{b}_1$, and $\vec{b}_2$, are components of the Pauli vector operators for the two qubits. We recall that $\bfsym{|}\theta\bfsym{|}$ cannot exceed $\sqrt{8}$, and the two-qubit state is surely entangled if ${\bfsym{|}\theta\bfsym{|}>2}$. Therefore, one usually wishes to distinguish reliably between ${\bfsym{|}\Theta\bfsym{|}<2}$ and ${\bfsym{|}\Theta\bfsym{|}>2}$. A standard choice for the single-qubit observables is
\begin{equation}\label{eq:8-2}
A_1=\sigma_x\,,\quad A_2=\sigma_z\,,\quad \left.\begin{array}{c}B_1\\B_2\end{array}\right\}= -\frac{1}{\sqrt{2}}(\sigma_x\pm\sigma_z)\,,
\end{equation}
for which
\begin{equation}\label{eq:8-3}
\theta=-\sqrt{2}\expect{\sigma_x\otimes\sigma_x+\sigma_z\otimes\sigma_z}\,.
\end{equation}
The limiting values ${\theta=\pm\sqrt{8}}$ are reached for two of the ``Bell states'', \textsl{viz.}\ the maximally entangled states $\rho=\frac{1}{4}(\dyadic{1}\mp\sigma_x\otimes\sigma_x) (\dyadic{1}\mp\sigma_z\otimes\sigma_z)$, the common eigenstates of $\sigma_x\otimes\sigma_x$ and $\sigma_z\otimes\sigma_z$ with the same eigenvalue, $-1$ or $+1$. One does not need full tomography for the experimental determination of this $\Theta$; a measurement that explores the $xz$ planes of the two Bloch balls provides the necessary data. We use the trine-antitrine (TAT) scheme (see Ref.~\cite{Tabia+1:11} and Sec.~6 in Ref.~\cite{Shang+4:13}) for this purpose.
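As a quick consistency check of Eqs.~\eqref{8-1}--\eqref{8-3}, the Python sketch below builds the CHSH operator for the standard settings of Eq.~\eqref{8-2} and verifies that one of the Bell states mentioned above indeed gives ${\theta=\sqrt{8}}$ (a toy verification, not part of the data analysis).
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
one = np.eye(4, dtype=complex)

A1, A2 = sx, sz                     # Eq. (8-2)
B1 = -(sx + sz) / np.sqrt(2)
B2 = -(sx - sz) / np.sqrt(2)

chsh_op = (np.kron(A1, B1) + np.kron(A2, B1)
           + np.kron(A1, B2) - np.kron(A2, B2))

# Bell state (1/4)(1 - sx x sx)(1 - sz x sz)
rho = (one - np.kron(sx, sx)) @ (one - np.kron(sz, sz)) / 4
theta = np.real(np.trace(chsh_op @ rho))   # Eq. (8-1)
print(theta, np.sqrt(8))                   # both 2.828...
\end{verbatim}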
Qubit~1 is measured by the three-outcome POM with outcome operators \begin{equation}\label{eq:8-4} \Pi_1^{(1)}=\frac{1}{3}\left(\mathds{1}+\sigma_z\right),\quad \left.\begin{array}{c}\Pi_2^{(1)}\\[0.5ex]\Pi_3^{(1)}\end{array}\right\}= \frac{1}{3}{\left(\mathds{1}\pm \frac{\sqrt{3}}{2}\sigma_x-\frac{1}{2}\sigma_z\right)}, \end{equation} and the $\Pi_{j'}^{(2)}$s for qubit~2 have the signs of $\sigma_x$ and $\sigma_z$ reversed. The nine probability operators of the product POM are \begin{equation}\label{eq:8-5} \Pi_k^{\ }=\Pi_{j}^{(1)}\otimes\Pi_{j'}^{(2)}\quad\mbox{with}\quad k=3(j-1)+j'\equiv[jj']\,, \end{equation} that is $1=[11]$, $2=[12]$, \dots, $5=[22]$, \dots, $8=[32]$, $9=[33]$, and we have \begin{equation}\label{eq:8-6} \theta(p)=\sqrt{8}\Bigl[3(p_1+p_5+p_9)-1\Bigr] \end{equation} for the CHSH quantity in Eq.~\eqref{8-3}. With the data provided by the TAT measurement, we can evaluate $\theta$ for any choice of the unit vectors $\vec{a}_1$, $\vec{a}_2$, $\vec{b}_1$, $\vec{b}_2$ in the $xz$ plane. If we choose the vectors such that $\theta$ is largest for the given $\rho$, then \begin{eqnarray}\label{eq:8-7} \thetaopt&=&2\Bigl[\expect{\sigma_x\otimes\sigma_x}^2 +\expect{\sigma_x\otimes\sigma_z}^2 +\expect{\sigma_z\otimes\sigma_x}^2\nonumber\\ &&\phantom{2\Bigl[}+\expect{\sigma_z\otimes\sigma_z}^2\Bigr]^{\frac{1}{2}} \end{eqnarray} for the optimized CHSH quantity. In terms of the TAT probabilities, it is given by \begin{eqnarray}\label{eq:8-8} {\left(\frac{\thetaopt(p)}{4}\right)}^2&=& 1+9\sum_{k=1}^9p_k^2\\&&\mbox{} -3\Bigl[(p_1+p_2+p_3)^2+(p_4+p_5+p_6)^2\nonumber\\&&\phantom{-3\Bigl[} +(p_7+p_8+p_9)^2+(p_1+p_4+p_7)^2\nonumber\\ &&\phantom{-3\Bigl[}+(p_2+p_5+p_8)^2+(p_3+p_6+p_9)^2\Bigr].\nonumber \end{eqnarray} Whereas the fixed-vectors CHSH quantity in Eq.~\eqref{8-6} is a linear function of the TAT probabilities, the optimal-vectors quantity is not. The inequality $\bfsym{|}\theta\bfsym{|}\leq \thetaopt$ holds for any two-qubit state $\rho$, of course. Extreme examples are the Bell states $\rho=\frac{1}{4}(\dyadic{1}\pm\sigma_x\otimes\sigma_x) (\dyadic{1}\mp\sigma_z\otimes\sigma_z)$, the common eigenstates of $\sigma_x\otimes\sigma_x$ and $\sigma_z\otimes\sigma_z$ with opposite eigenvalues, for which ${\theta=0}$ and ${\thetaopt=\sqrt{8}}$. The same values are also found for other states, among them all four common eigenstates of $\sigma_x\otimes\sigma_z$ and $\sigma_z\otimes\sigma_x$. The simulated experiment uses the true state \begin{equation}\label{eq:8-9} \rho^{\ }_{\mathrm{true}}=\frac{1}{4}(\dyadic{1}-x\sigma_x\otimes\sigma_x -y\sigma_y\otimes\sigma_y -z\sigma_z\otimes\sigma_z) \end{equation} with $(x,y,z)=\frac{1}{20}(18,-15,-14)$, for which the TAT probabilities are \begin{equation}\label{eq:8-10} {\left(\begin{array}{ccc} p_1 & p_2 & p_3 \\ p_4 & p_5 & p_6 \\ p_7 & p_8 & p_9 \end{array}\right)}=\frac{1}{60} {\left(\begin{array}{ccc} 2 & 9 & 9 \\ 9 & 10 & 1 \\ 9 & 1 & 10 \end{array}\right)} \end{equation} and the true values of $\Theta$ and $\Thetaopt$ are \begin{eqnarray}\label{eq:8-11} &&\Theta=\sqrt{2}(x+z)=\frac{1}{5}\sqrt{2}=0.2828\,,\nonumber\\ &&\Thetaopt=2\sqrt{x^2+z^2}=\sqrt{\frac{26}{5}}=2.2804\,. 
\end{eqnarray}
When simulating the detection of ${N=180}$ copies, we obtained the relative frequencies
\begin{eqnarray}\label{eq:8-12}
\frac{1}{180} {\left(\begin{array}{ccc} 9 & 28 & 30 \\ 28 & 27 & 3 \\ 29 & 1 & 25 \end{array}\right)} &=& {\left(\begin{array}{ccc} p_1 & p_2 & p_3 \\ p_4 & p_5 & p_6 \\ p_7 & p_8 & p_9 \end{array}\right)}\nonumber\\&&\mbox{}+\frac{1}{180} {\left(\begin{array}{ccc} 3 & 1 & 3 \\ 1 & -3 & 0 \\ 2 & -2 & -5 \end{array}\right)}.
\end{eqnarray}
If we estimate the probabilities by the relative frequencies and use these estimates in Eqs.~\eqref{8-6} and \eqref{8-8}, the resulting estimates for $\Theta$ and $\Thetaopt$ are ${\sqrt{2}/30=0.0471}$ and ${16\sqrt{39}/45=2.2204}$, respectively. This so-called ``linear inversion'' is popular, and one can supplement the estimates with error bars that refer to confidence intervals \cite{Schwemmer+6:15}, but the approach has well-known problems \cite{Shang+2:14}. Instead, we report SCIs for $\Theta$ and $\Thetaopt$, and for those we need the $\Theta$-likelihoods $L(D|\Theta)$ and $L(D|\Thetaopt)$. We describe in the following Sec.~\ref{sec:2qb-MCint} how the iteration algorithm of Sec.~\ref{sec:MCint} is implemented, and present $L(D|\Theta)$ and $L(D|\Thetaopt)$ thus found in Sec.~\ref{sec:2qb-SCIs} together with the resulting SCIs.
\subsection{Iterated MC integrations}\label{sec:2qb-MCint}
Rather than ${F=\frac{1}{2}{\left(\Theta/\sqrt{8}+1\right)}}$ or ${F=\Thetaopt/\sqrt{8}}$, which have values in the range ${0\leq F\leq1}$, we shall use $\Theta$ and $\Thetaopt$ themselves as the properties to be estimated, with the necessary changes in the expressions in Secs.~\ref{sec:SCPR}--\ref{sec:MCint}. For the MC integration of $P_0(\Theta)$, say, we sample the probability space with the Hamiltonian MC algorithm described in Sec.~4.3 in \cite{Seah+4:15}. In this context, we note the following implementation issue: The sample probabilities carry a weight proportional to the range of permissible values for ${\expect{(\sigma_x\otimes\sigma_x)(\sigma_z\otimes\sigma_z)} =-\expect{\sigma_y\otimes\sigma_y}}$, i.e., parameter $q$ in \eqref{B-4}. It is expedient to generate an unweighted sample by resampling (``bootstrapping'') the weighted sample. The unweighted sample is then used for the MC integration.
\begin{figure}
\caption{\label{fig:CHSH-histo}}
\end{figure}
The histograms in Fig.~\figref{CHSH-histo}(a) show the distribution of $\Theta$ and $\Thetaopt$ values in such a sample, drawn from the probability space in accordance with the primitive prior of \eqref{4-6}. These prior distributions contain few values with ${\Thetaopt>2}$ and far fewer with ${\bfsym{|}\Theta\bfsym{|}>2}$. In Fig.~\figref{CHSH-histo}(b), we have the histograms for a corresponding sample drawn from the posterior distribution for the simulated data of Eq.~\eqref{8-12}. In the posterior distributions, values exceeding $2$ are prominent for $\Thetaopt$, but virtually non-existent for~$\Theta$. We determine the $\Theta$-likelihoods $L(D|\Theta)$ and $L(D|\Thetaopt)$ by the method described in Sec.~\ref{sec:MCint}. The next five paragraphs deal with the details of carrying out a few rounds of the iteration. The green dots in Fig.~\figref{P0-Iteration}(a) show the $P_0(\Theta)$ values obtained with the sample of 500\,000 sets of probabilities that generated the histograms in Fig.~\figref{CHSH-histo}(a).
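In passing, we note that the probability-space functions $\theta(p)$ of Eq.~\eqref{8-6} and $\thetaopt(p)$ of Eq.~\eqref{8-8} are straightforward to evaluate; the Python sketch below reproduces the true values of Eq.~\eqref{8-11} and the linear-inversion estimates quoted above (a consistency check only).
\begin{verbatim}
import numpy as np

def theta(p):
    # Eq. (8-6); p[0..8] = p_1..p_9 in the [jj'] order
    return np.sqrt(8) * (3*(p[0] + p[4] + p[8]) - 1)

def theta_opt(p):
    # Eq. (8-8)
    P = np.reshape(p, (3, 3))
    q = (1 + 9*np.sum(P**2)
         - 3*(np.sum(P.sum(axis=1)**2)
              + np.sum(P.sum(axis=0)**2)))
    return 4 * np.sqrt(q)

p_true = np.array([2, 9, 9, 9, 10, 1, 9, 1, 10]) / 60
counts = np.array([9, 28, 30, 28, 27, 3, 29, 1, 25])

print(theta(p_true), theta_opt(p_true))  # 0.2828, 2.2804
print(theta(counts/180), theta_opt(counts/180))
                                         # 0.0471, 2.2204
\end{verbatim}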
We note that the MC integration is not precise enough to distinguish ${P_0(\Theta)\gtrsim0}$ from ${P_0(\Theta)=0}$ for ${\Theta<-2}$ or ${P_0(\Theta)\lesssim1}$ from ${P_0(\Theta)=1}$ for ${\Theta>2}$ and, therefore, we cannot infer a reliable approximation for ${W_0(\Theta)=\frac{\mathrm{d}}{\mathrm{d}\Theta}P_0(\Theta)}$ for these $\Theta$ values; the sample contains only 144 entries with ${\bfsym{|}\Theta\bfsym{|}>2}$ and no entries with ${\bfsym{|}\Theta\bfsym{|}>2.49}$. The iteration algorithm solves this problem.
\begin{figure}
\caption{\label{fig:P0-Iteration}}
\end{figure}
As discussed in Appendix~\ref{sec:appA}, we have
\begin{equation}\label{eq:8-13}
\frac{\mathrm{d}}{\mathrm{d}\Theta}P_0(\Theta)=W_0(\Theta)\propto {\left(\sqrt{8}-\bfsym{|}\Theta\bfsym{|}\right)}^\frac{11}{2} \quad\mbox{for}\quad \bfsym{|}\Theta\bfsym{|}\lesssim\sqrt{8}
\end{equation}
near the boundaries of the $\Theta$ range in Fig.~\figref{P0-Iteration}(a). In conjunction with the symmetry property ${W_0(\Theta)=W_0(-\Theta)}$ or ${P_0(\Theta)+P_0(-\Theta)=1}$, this invites the four-parameter approximation
\begin{eqnarray}\label{eq:8-14}
P_0(\Theta)\simeq P^{(0)}_0(\Theta)&=& w_1B_{\alpha_1^{\ }}(\Theta) +w_2B_{\alpha_2^{\ }}(\Theta)\nonumber\\ &&\mbox{}+w_3B_{\alpha_3^{\ }}(\Theta)\,,
\end{eqnarray}
where
\begin{equation}\label{eq:8-15}
B_{\alpha}(\Theta)=\Biggl(\frac{1}{32}\Biggr)^{\alpha+\frac{1}{2}} \frac{(2\alpha+1)!}{(\alpha!)^2} \int_{-\sqrt{8}}^{\Theta}\mathrm{d} x\,{\left(8-x^2\right)}^{\alpha}
\end{equation}
is a normalized incomplete beta function integral with ${B_{\alpha}(-\sqrt{8})=0}$ and ${B_{\alpha}(\sqrt{8})=1}$; $\alpha_2$ and $\alpha_3$ are fitting parameters larger than ${\alpha_1=\frac{11}{2}}$; and $w_1,w_2,w_3$ are weights with unit sum. A fit with a root-mean-square error of $2.7\times10^{-4}$ is achieved by $\alpha_2=\alpha_1+1.6700$, $\alpha_3=\alpha_1+5.4886$, and $(w_1,w_2,w_3)=(0.4691,0.2190,0.3119)$. The graph of $P^{(0)}_0(\Theta)$ is the black curve through the green dots in Fig.~\figref{P0-Iteration}(a); the corresponding four-parameter approximation for $W_0(\Theta)$ is shown as the black envelope for the green $\Theta$ histogram in Fig.~\ref{fig:CHSH-histo}(a). The subsequent approximations $P_0^{(1)}(\Theta)$, $P_0^{(2)}(\Theta)$, and $P_0^{(3)}(\Theta)$ are shown as the blue, cyan, and red dots in Fig.~\figref{P0-Iteration}(a) and, after subtracting $\frac{1}{2}{\left(\Theta/\sqrt{8}+1\right)}$, also in Fig.~\figref{P0-Iteration}(b). We use the truncated Fourier series of Eq.~\eqref{6-7} with ${F=\frac{1}{2}{\left(\Theta/\sqrt{8}+1\right)}}$ for fitting a smooth curve to the noisy MC values for $P_0^{(1)}(\Theta)$, $P_0^{(2)}(\Theta)$, and $P_0^{(3)}(\Theta)$. As a consequence of ${P_0(\Theta)+P_0(-\Theta)=1}$, all Fourier amplitudes $a_k$ with odd $k$ vanish.
\begin{figure}
\caption{\label{fig:Fourier}}
\end{figure}
For an illustration of the method, we report in Fig.~\figref{Fourier} the amplitudes $a_k$ of a full Fourier interpolation between the blue dots (${n=1}$) in Fig.~\figref{P0-Iteration}(b). Upon discarding all components with ${k>8}$ and thus retaining only four nonzero amplitudes, the resulting truncated Fourier series gives the smooth blue curve through the blue dots. Its derivative contributes a factor $W_0^{(1)}(F)$ to the reference prior density $W_{\mathrm{r},0}(F)$, in accordance with step S5 of the iteration algorithm in Sec.~\ref{sec:MCint}.
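For step S4, the least-squares fit of the truncated Fourier series \eqref{6-7} and its analytic derivative (step S5) take only a few lines; the Python sketch below uses a made-up smooth $P_0(F)$ with artificial noise solely to demonstrate the fit, not the actual MC values.
\begin{verbatim}
import numpy as np

def fourier_fit(Fg, P_mc, kmax=8):
    # fit P0(F) ~ F + sum_k a_k sin(k pi F), Eq. (6-7),
    # then differentiate analytically (steps S4/S5)
    k = np.arange(1, kmax + 1)
    design = np.sin(np.pi * np.outer(Fg, k))
    a, *_ = np.linalg.lstsq(design, P_mc - Fg, rcond=None)
    P = lambda F: F + np.sin(np.pi*np.outer(F, k)) @ a
    W = lambda F: 1 + np.cos(np.pi*np.outer(F, k)) @ (np.pi*k*a)
    return a, P, W

# made-up test data in place of the MC values
rng = np.random.default_rng(3)
Fg = np.linspace(0, 1, 201)
P_true = Fg + 0.05*np.sin(2*np.pi*Fg) - 0.02*np.sin(4*np.pi*Fg)
a, P, W = fourier_fit(Fg, P_true + 1e-3*rng.standard_normal(201))
print(np.round(a, 3))   # ~0.05 at k=2, ~-0.02 at k=4
\end{verbatim}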
In the next round we treat $P_0^{(2)}(\Theta)$ in the same way, followed by $P_0^{(3)}(\Theta)$ in the third round.
\subsection{Likelihood and optimal error intervals}\label{sec:2qb-SCIs}
After each iteration round, we use the current reference prior and the likelihood $L(D|p)$ for a MC integration of the posterior density and so obtain the corresponding $P_D^{(n)}(\Theta)$ as well as its analytical parameterization analogous to that of $P_0^{(n)}(\Theta)$; the black envelopes to the histograms in Fig.~\ref{fig:CHSH-histo}(b) show the final approximations for the derivatives of $P_{\mathrm{r},0}D(\Theta)$ and $P_{\mathrm{r},0}D(\Thetaopt)$ thus obtained. The ratio of their derivatives is the $n$th approximation to the $\Theta$-likelihood $L(D|\Theta)$; and likewise for $L(D|\Thetaopt)$, see Appendix~\ref{sec:appB}. Figure~\figref{S-Likelihood} shows the sequence of approximations.
\begin{figure}
\caption{\label{fig:S-Likelihood}}
\end{figure}
\begin{figure}
\caption{\label{fig:BLI-SizeCred}}
\end{figure}
We note that the approximations for the $\Theta$-likelihood hardly change from one iteration to the next, so that we can stop after just a few rounds and proceed to the calculation of the size $s_\lambda$ and the credibility $c_\lambda$ of the BLIs. These are shown in Fig.~\figref{BLI-SizeCred} for the flat priors in $\Theta$ and $\Thetaopt$, respectively. The plots in Figs.~\ref{fig:CHSH-histo}--\ref{fig:BLI-SizeCred} refer to the primitive prior of Eq.~\eqref{4-6} as the reference prior on the probability space. The analogous plots for the Jeffreys prior of Eq.~\eqref{4-7} are quite similar. As a consequence of this similarity, there is not much of a difference in the SCIs obtained for the two reference priors, although the number of measured copies (${N=180}$) is not large; see Fig.~\figref{S-OEIs}. The advantage of $\Thetaopt$ over $\Theta$ is obvious: Whereas virtually all $\Theta$-SCIs with non-unit credibility are inside the range ${-2<\Theta<2}$, the $\Thetaopt$-SCIs are entirely in the range ${\Thetaopt>2}$ for credibility up to 95\% and 98\% for the primitive reference prior and the Jeffreys reference prior, respectively.
\begin{figure}
\caption{\label{fig:S-OEIs}}
\end{figure}
\section{Summary and outlook}\label{sec:Sum}
In full analogy to the likelihood $L(D|p)$ of the data $D$ for the specified probability parameters $p$ of the quantum state, which is the basic ingredient exploited by all strategies for quantum state estimation, the $F$-likelihood $L(D|F)$ plays this role when one estimates the value $F$ of a function $f(p)$ --- the value of a property of the quantum state. Although the definition of $L(D|F)$ in terms of $L(D|p)$ relies on Bayesian methodology and, in particular, needs a pre-selected reference prior on the probability space, the prior density for $F$ can be chosen freely and the $F$-likelihood is independent of this choice. As soon as the $F$-likelihood is at hand, we have a maximum-likelihood estimator for $F$, embedded in a family of smallest credible intervals that report the accuracy of the estimate in a meaningful way. This makes optimal use of the data. The dependence of the smallest credible regions on the prior density for $F$ is irrelevant when enough data are available. In the examples studied, ``enough data'' are obtained by measuring a few tens of copies per outcome.
Not only is there no need for estimating the quantum state first and finding its smallest credible regions, this is not even useful: The $F$ value of the best-guess state is not the best guess for $F$, and the smallest credible region for the state does not carry the meaning of the smallest credible interval for $F$. The reliable computation of the marginal $F$-likelihood $L(D|F)$ from the primary state-conditioned likelihood $L(D|p)$ is indeed possible. It requires the evaluation of high-dimensional integrals with Monte Carlo techniques. It can easily happen that the pre-selected prior on the probability space gives very little weight to sizeable ranges of $F$ values, and then the $F$-likelihood is ambiguous there. We overcome this problem by an iterative algorithm that replaces the inadequate prior by suitable ones, and so yields an $F$-likelihood that is reliable for all values of $F$. The two-qubit example, in which we estimate CHSH quantities, illustrates these matters. From a general point of view, one could regard values $F$ of functions $f(p)$ of the quantum state as parameters of the state. The term \emph{quantum parameter estimation} is, however, traditionally used for the estimation of parameters of the experimental apparatus, such as the emission rate of the source, efficiencies of detectors, or the phase of an interferometer loop. A forthcoming paper \cite{Dai+4:16} will deal with optimal error regions for quantum parameter estimation in this traditional sense --- smallest credible regions, that is. In this context, it is necessary to account, in the proper way, for the quantum systems that are emitted by the source but escape detection. There are also situations in which the quantum state and parameters of the apparatus are estimated from the same data, often referred to as \emph{self-calibrating experiments} \cite{Mogilevtsev+2:09,Mogilevtsev:10}. Various aspects of the combined optimal error regions for the parameters of both kinds are discussed in~\cite{Sim:15} and are the subject matter of ongoing research.
\begin{acknowledgments}
We thank David Nott and Michael Evans for stimulating discussions. This work is funded by the Singapore Ministry of Education (partly through the Academic Research Fund Tier 3 MOE2012-T3-1-009) and the National Research Foundation of Singapore. H.~K.~N.\ is also funded by a Yale-NUS College start-up grant.
\end{acknowledgments}
\appendix
\renewcommand{\thesubsection}{\Alph{section}.\arabic{subsection}}
\section{Confidence \textit{vs} credibility}\label{sec:appCC}
It is common practice to state the result of a measurement of a physical quantity in terms of a \emph{confidence interval}. Usually, two standard deviations on either side of the average value define a 95\% confidence interval for the observed data. This is routinely interpreted as assurance that the actual value (among all thinkable values) is in this range with 95\% probability. Although this interpretation is temptingly suggested by the terminology, it is incorrect --- one must not have such confidence in a confidence interval. Rather, the situation is this: After defining a full set of confidence intervals for quantity $F$, one interval for each thinkable data, the confidence level of the set is its so-called \emph{coverage}, which is the fraction of intervals that cover the actual value, minimized over all possible $F$ values, whereby each interval is weighted by the probability of observing the data associated with it.
Upon denoting the confidence interval for data $D$ by $\mathcal{C}_{D}$, the coverage of the set $\mathbf{C}=\{\mathcal{C}_D\}$ is thus calculated in accordance with
\begin{equation}\label{eq:CC-1}
\mathrm{cov}(\mathbf{C})=\min_{F}\sum_DL(D|F)\left\{ \begin{array}{c@{\ \mbox{if}\ }l} 1 & F\in\mathcal{C}_D \\ 0 & F\not\in\mathcal{C}_D \end{array}\right\}\,.
\end{equation}
We emphasize that the coverage is a property of the set, not of any individual confidence interval; the whole set is needed for associating a level of confidence with the intervals that compose the set. A set $\mathbf{C}$ of confidence intervals with coverage $\mathrm{cov}(\mathbf{C})=0.95$ has this meaning: If we repeat the experiment very often and find the respective interval $\mathcal{C}_D$ for each data $D$ obtained, then 95\% of these intervals will contain the actual value of $F$. Confidence intervals are a concept of frequentism where the notion of probability refers to asymptotic relative frequencies --- the confidence intervals are random while the actual value of $F$ is whatever it is (yet unknown to us) and 95\% of the confidence intervals contain it. Here we do statistics on the intervals, not on the value of $F$. It is incorrect to infer that, for each 95\% confidence interval, there are odds of 19:1 in favor of containing the actual value of $F$; an individual confidence interval conveys no such information. It is possible, as demonstrated by the example that follows below, that the confidence interval associated with the observed data contains the actual value of $F$ certainly, or certainly not, and that the data tell us about this. This can even happen for each confidence interval in a set determined by standard optimality criteria \cite{Plante:91} (see also Example 3.4.3 in \cite{Evans:15}). The example just alluded to is a scenario invented by Jaynes \cite{Jaynes:76} (see also \cite{VanderPlas:14}). We paraphrase it as follows: A certain process runs perfectly for duration $T$, after which failures occur at a rate $r$, so that the probability of observing the first failure between time $t$ and $t+\mathrm{d} t$ is
\begin{equation}\label{eq:CC-2}
\mathrm{d} t\, r\, \Exp{-r(t-T)}\eta(t-T)\,.
\end{equation}
We cannot measure $T$ directly; instead we record first-failure times $t_1,t_2,\dots,t_N$ when restarting the process $N$ times. Question: What do the data $D=\{t_1,t_2,\dots,t_N\}$ tell us about $T$? One standard frequentist approach begins with noting that the expected first-failure time is
\begin{equation}\label{eq:CC-3}
\mathbb{E}(t)=\int_{-\infty}^{\infty}\mathrm{d} t\, r\, \Exp{-r(t-T)}\eta(t-T)\,t =T+\frac{1}{r}\,.
\end{equation}
Since the average $t_{\mathrm{av}}$ of the observed failure times,
\begin{equation}\label{eq:CC-4}
t_{\mathrm{av}}=\frac{1}{N}\sum_{n=1}^N t_n\,,
\end{equation}
is an estimate for $\mathbb{E}(t)$, we are invited to use
\begin{equation}\label{eq:CC-5}
\widehat{T}=t_{\mathrm{av}}-\frac{1}{r}
\end{equation}
as the point estimator for $T$.
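Since Eq.~\eqref{CC-3} gives ${\mathbb{E}(t)=T+1/r}$, this estimator averages to the true $T$; a quick simulation of the scenario of Eq.~\eqref{CC-2} confirms this (the Python sketch below uses arbitrary parameter values).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
r, T, N, reps = 1.0, 5.0, 3, 200_000

# first-failure times: T plus exponential waiting
# times with rate r, cf. Eq. (CC-2)
t = T + rng.exponential(1/r, size=(reps, N))
T_hat = t.mean(axis=1) - 1/r     # Eq. (CC-5)
print(T_hat.mean())              # close to T = 5.0
print(T_hat.std(ddof=1) / np.sqrt(reps))  # MC uncertainty
\end{verbatim}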
In many repetitions of the experiment, then, the probability of obtaining the estimator between $\widehat{T}$ and $\widehat{T}+\mathrm{d}\widehat{T}$ is
\begin{eqnarray}\label{eq:CC-6}
&&\int_T^{\infty}\mathrm{d} t_1\,r\,\Exp{-r(t_1-T)} \int_T^{\infty}\mathrm{d} t_2\,r\,\Exp{-r(t_2-T)}\cdots \nonumber\\&&\times\int_T^{\infty}\mathrm{d} t_N\,r\,\Exp{-r(t_N-T)} \delta\biggl(\widehat{T}-t_{\mathrm{av}}+\frac{1}{r}\biggr)\mathrm{d}\widehat{T} \nonumber\\&=&\mathrm{d}\widehat{T}\,f_N(\widehat{T}-T)
\end{eqnarray}
with
\begin{equation}\label{eq:CC-7}
f_N(t)=Nr\frac{[N(rt+1)]^{N-1}}{(N-1)!}\Exp{-N(rt+1)}\eta(rt+1)\,.
\end{equation}
Accordingly, the expected value of $\widehat{T}$ is $T$,
\begin{equation}\label{eq:CC-8}
\mathbb{E}(\widehat{T}) =\int_{-\infty}^{\infty}\mathrm{d}\widehat{T}\,f_N(\widehat{T}-T)\,\widehat{T}=T\,,
\end{equation}
which says that the estimator of Eq.~\eqref{CC-5} is unbiased. It is also consistent (the more important property) since
\begin{equation}\label{eq:CC-9}
f_N(\widehat{T}-T)\xrightarrow{N\to\infty}\delta(\widehat{T}-T)\,.
\end{equation}
Next, we consider the set $\mathbf{C}_N(t_1,t_2)$ of intervals specified by
\begin{equation}\label{eq:CC-10}
\widehat{T}-t_1<T<\widehat{T}+t_2
\end{equation}
and establish its coverage,
\begin{eqnarray}\label{eq:CC-11}
\mathrm{cov}\bigl(\mathbf{C}_N(t_1,t_2)\bigr) &=&\min_T\int_{-\infty}^{\infty}\mathrm{d}\widehat{T}\,f_N(\widehat{T}-T) \nonumber\\&&\hphantom{\min_T\int}\times \eta(T-\widehat{T}+t_1)\eta(\widehat{T}-T+t_2)\nonumber\\ &=&\int_{\max\{0,y_2\}}^{y_1}\mathrm{d} y\,\frac{y^{N-1}}{(N-1)!}\,\Exp{-y}
\end{eqnarray}
with ${y_1=N(rt_1+1)}$ and ${y_2=N(1-rt_2)<y_1}$. Of the $y_1,y_2$ pairs that give a coverage of $0.95$, one would usually not use the pairs with ${y_1=\infty}$ or ${y_2=0}$ but rather opt for the pair that gives the shortest intervals --- the frequentist analog of the smallest credible intervals. These shortest intervals are obtained by the restrictions
\begin{eqnarray} \label{eq:CC-12}
&&0<y_2<N-1<y_1<\infty\nonumber\\[1ex] \mbox{with}&\quad& y_2^{N-1}\Exp{-y_2} =y_1^{N-1}\Exp{-y_1}
\end{eqnarray}
on $y_1$ and $y_2$ in Eq.~\eqref{CC-11}. When ${N=3}$, we have ${y_1=6.400}$ and ${y_2=0.3037}$, and the shortest confidence intervals with $95\%$ coverage are given by
\begin{equation}\label{eq:CC-13}
\frac{1}{3}{\left(\sum_{n=1}^3t_n-\frac{6.400}{r}\right)}<T< \frac{1}{3}{\left(\sum_{n=1}^3t_n-\frac{0.3037}{r}\right)}\,.
\end{equation}
There is, for instance \cite{VanderPlas:14}, the interval associated with the data ${t_1=10/r}$, ${t_2=12/r}$, and ${t_3=15/r}$,
\begin{equation}\label{eq:CC-14}
\frac{10.2}{r} < T < \frac{12.2}{r}\,.
\end{equation}
Most certainly, the actual value of $T$ is \emph{not inside} this 95\% confidence interval since $T$ must be less than the earliest observed failure time,
\begin{equation}\label{eq:CC-15}
T<t_{\mathrm{min}}=\min_n\{t_n\}\,,
\end{equation}
here: ${T<10/r}$. By contrast, the 95\% confidence interval for the data ${t_1=1.9/r}$, ${t_2=2.1/r}$, and ${t_3=2.3/r}$, namely
\begin{equation}\label{eq:CC-14'}
-\frac{0.03}{r} < T < \frac{2.00}{r}\,,
\end{equation}
contains all values between ${T=0}$ and ${T=t_{\mathrm{min}}=1.9/r}$, so that the actual value is \emph{certainly inside.} These examples illustrate well what is stated above: The interpretation ``the actual value is inside this 95\% confidence interval with 95\% probability'' is incorrect.
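The quoted values ${y_1=6.400}$ and ${y_2=0.3037}$ follow from Eqs.~\eqref{CC-11} and \eqref{CC-12}; a short numerical verification (Python with SciPy, illustrative only) is:
\begin{verbatim}
import numpy as np
from scipy.special import gammainc
from scipy.optimize import brentq

N, cov = 3, 0.95

def partner(y2):
    # y1 > N-1 with y1^(N-1) exp(-y1) = y2^(N-1) exp(-y2),
    # cf. Eq. (CC-12)
    g = lambda y1: (N-1)*np.log(y1/y2) - (y1 - y2)
    return brentq(g, N-1, 60.0)

def coverage(y2):
    y1 = partner(y2)
    # regularized incomplete gamma = Eq. (CC-11), y2 > 0
    return gammainc(N, y1) - gammainc(N, y2)

y2 = brentq(lambda y: coverage(y) - cov, 1e-6, N-1-1e-6)
y1 = partner(y2)
print(y1, y2)        # 6.400... and 0.3037...

r, t = 1.0, np.array([10.0, 12.0, 15.0])
print((t.sum()-y1/r)/3, (t.sum()-y2/r)/3)  # Eq. (CC-14)
\end{verbatim}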
Jaynes's scenario is particularly instructive because the data tell us that the confidence interval of Eq.~\eqref{CC-14} is completely off target and that of Eq.~\eqref{CC-14'} is equally useless. Clearly, these 95\% confidence intervals do not answer the question asked above: What do the data tell us about $T$? This is not the full story, however. The practicing frequentist can use alternative strategies for constructing sets of shortest confidence intervals. There is, for example, another standard method that takes the maximum-likelihood point estimator as its starting point. The point likelihood for observing first failures at times $t_1,t_2,\dots,t_N$ is \begin{eqnarray}\label{eq:CC-16} L(D|T)&=&\prod_{n=1}^Nr\tau\,\Exp{-r(t_n-T)}\eta(t_n-T)\nonumber\\ &=&L_{\mathrm{max}}(D)\Exp{-Nr(t_{\mathrm{min}}-T)}\eta(t_{\mathrm{min}}-T)\qquad \end{eqnarray} where $\tau\ll1/r$ is the precision of the observations and the maximal value \begin{equation}\label{eq:CC-17} L_{\mathrm{max}}(D)=L(D|T=\widehat{T}^{\ }_{\textsc{ml}})=(r\tau)^N\Exp{-Nr(t_{\mathrm{av}}-t_{\mathrm{min}})} \end{equation} is obtained for the maximum-likelihood estimator $\widehat{T}^{\ }_{\textsc{ml}}=t_{\mathrm{min}}$. In this case, $f_N(\widehat{T}-T)$ of Eqs.~\eqref{CC-6} and \eqref{CC-7} is replaced by \begin{equation}\label{eq:CC-23} \widehat{T}=t_{\mathrm{min}}:\quad f_N(\widehat{T}-T)=Nr\,\Exp{-Nr(\widehat{T}-T)}\,\eta(\widehat{T}-T)\,, \end{equation} which, not accidentally, is strikingly similar to the likelihood $L(D|T)$ in Eq.~\eqref{CC-16} but has a completely different meaning. Since Eq.~\eqref{CC-9} holds, this estimator is consistent, and it has a bias, \begin{equation}\label{eq:CC-24} \mathbb{E}(\widehat{T}) =\int_{-\infty}^{\infty}\mathrm{d}\widehat{T}\,f_N(\widehat{T}-T)\,\widehat{T} =T+\frac{1}{Nr}\neq T\,, \end{equation} that could be removed. The resulting shortest confidence intervals are specified by \begin{equation}\label{eq:CC-26} t_{\mathrm{min}}-\frac{1}{Nr}\log\frac{1}{1-\mathrm{cov}(\mathbf{C})}<T<t_{\mathrm{min}}\,, \end{equation} where $\mathrm{cov}(\mathbf{C})$ is the desired coverage of the set $\mathbf{C}$ thus defined. Here, we obtain the 95\% confidence intervals \begin{equation}\label{eq:CC-81} \frac{9.0}{r} < T < \frac{10.0}{r}\quad\mbox{and}\quad \frac{0.90}{r} < T < \frac{1.90}{r} \end{equation} for the ${N=3}$ data that yielded the intervals in Eqs.~\eqref{CC-14} and \eqref{CC-14'}. While this suggests, and rather strongly so, that the confidence intervals of this second kind are more reasonable and more useful than the previous ones, it confronts us with the need for a criterion by which we select the preferable set of confidence intervals among equally legitimate sets. Chernoff offers pertinent advice for that \cite{Chernoff-quote}: ``Start out as a Bayesian thinking about it, and you'll get the right answer. Then you can justify it whichever way you like.'' So, let us now find the corresponding SCIs of the Bayesian approach, where probability quantifies our belief --- in colloquial terms: Which betting odds would we accept? For the point likelihood of Eq.~\eqref{CC-16}, the BLI $\mathcal{I}_{\lambda}$ is specified by \begin{equation}\label{eq:CC-18} \max{\left\{0,t_{\mathrm{min}}-\frac{1}{Nr}\log\frac{1}{\lambda}\right\}}<T<t_{\mathrm{min}}\,. \end{equation} Jaynes recommends a flat prior in such applications --- unless we have specific prior information about $T$, that is --- but, without a restriction on the permissible $T$ values, that would be an improper prior here. 
Instead we use $\mathrm{d} T\,\kappa\,\Exp{-\kappa T}\eta(T)$ for the prior element and enforce ``flatness'' by taking the limit of ${\kappa\to0}$ eventually. Then, the likelihood for the observed data is \begin{eqnarray}\label{eq:CC-19} L(D)&=&\int_0^{\infty}\mathrm{d} T\,\kappa\,\Exp{-\kappa t}\,L(D|T) \nonumber\\ &=&L_{\mathrm{max}}(D)\frac{\kappa}{Nr-\kappa} {\left(\Exp{-\kappat_{\mathrm{min}}}-\Exp{-Nrt_{\mathrm{min}}}\right)}\qquad \end{eqnarray} and the credibility of $\mathcal{I}_{\lambda}$ is \begin{eqnarray}\label{eq:CC-20} c_{\lambda}&=&\int_0^{\infty}\mathrm{d} T\,\kappa\,\Exp{-\kappa t}\, \frac{L(D|T)}{L(D)} \eta{\left(T-t_{\mathrm{min}}+\frac{1}{Nr}\log\frac{1}{\lambda}\right)} \nonumber\\ &\makebox[0pt][l]{$\displaystyle\xrightarrow{\kappa\to0}\min{\left\{1,\frac{1-\lambda} {1-\Exp{-Nrt_{\mathrm{min}}}}\right\}}$}& \end{eqnarray} after taking the ${\kappa\to0}$ limit. We so arrive at \begin{equation}\label{eq:CC-21} t_{\mathrm{min}}-\frac{1}{Nr}\log\frac{1}{(1-c)+\Exp{-Nrt_{\mathrm{min}}}c}<T<t_{\mathrm{min}} \end{equation} for the SCI with pre-chosen credibility $c$. For example, the SCIs for ${c=0.95}$ that corresponds to the confidence intervals in Eqs.~\eqref{CC-14} and \eqref{CC-14'}, and also to the confidence intervals in Eq.~\eqref{CC-81}, are \begin{equation}\label{eq:CC-22} \frac{9.0}{r} < T < \frac{10.0}{r}\quad\mbox{and}\quad \frac{0.92}{r} < T < \frac{1.90}{r}\,. \end{equation} These really \emph{are} useful answers to the question of what do the data tell us about $T$: The actual value is in the respective range with 95\% probability. Regarding the choice between the set of confidence intervals of the first and the second kind --- associated with the point estimators ${\widehat{T}=t_{\mathrm{av}}}$ and ${\widehat{T}=t_{\mathrm{min}}}$, respectively --- Chernoff's strategy clearly favors the second kind. Except for the possibility of getting a negative value for the lower bound, the confidence intervals of Eq.~\eqref{CC-26} are the BLIs of Eq.~\eqref{CC-18} for $\lambda=1-\mathrm{cov}(\textbf{C})$, and they are virtually identical with the SCIs of Eq.~\eqref{CC-21} --- usually the term $\Exp{-Nrt_{\mathrm{min}}}c$ is negligibly small there. Yet, these confidence intervals retain their frequentist meaning. Such a coincidence of confidence intervals and credible intervals is also possible under other circumstances, and this observation led Jaynes to the verdict that ``confidence intervals are satisfactory as inferences \emph{only} in those special cases where they happen to agree with Bayesian intervals after all'' (Jaynes's emphasis, see p.~674 in \cite{Jaynes:03}). That is: One can get away with misinterpreting the confidence intervals as credible intervals for an unspecified prior. In the context of the example we are using, the coincidence occurs as a consequence of two ingredients: (i) We are guided by the Bayesian reasoning when choosing the set of confidence intervals; (ii) we are employing the flat prior when determining the SCIs. The coincidence does not happen when (i) another strategy is used for the construction of the set of confidence intervals, or (ii) for another prior, as we would use it if we had genuine prior information about $T$; the coincidence could still occur when $N$ is large but hardly for ${N=3}$. 
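For a side-by-side comparison, the following sketch (again not part of the original analysis; the Monte Carlo run uses a hypothetical true $T$, on which the coverage does not depend) evaluates the $t_{\mathrm{min}}$-based confidence intervals of Eq.~\eqref{CC-26} and the flat-prior SCIs of Eq.~\eqref{CC-21} for the two ${N=3}$ data sets, reproducing the intervals of Eqs.~\eqref{CC-81} and \eqref{CC-22}, and confirms the 95\% coverage of the former:

\begin{verbatim}
import numpy as np

# Sketch (not from the paper): confidence intervals of the second kind,
# Eq. (CC-26), versus flat-prior SCIs, Eq. (CC-21), for the two N = 3
# data sets of the text; plus a Monte Carlo check of the 95% coverage.
r, N, c = 1.0, 3, 0.95

def ci_lower(t_min):              # Eq. (CC-26); upper endpoint is t_min
    return t_min - np.log(1.0 / (1.0 - c)) / (N * r)

def sci_lower(t_min):             # Eq. (CC-21); upper endpoint is t_min
    return t_min - np.log(1.0 / ((1 - c) + np.exp(-N * r * t_min) * c)) / (N * r)

for t_min in (10.0 / r, 1.9 / r):
    print(ci_lower(t_min), sci_lower(t_min), t_min)
    # ~ (9.0, 9.0, 10.0) and (0.90, 0.92, 1.90)

rng = np.random.default_rng(2)
T_true = 5.0                      # hypothetical; coverage does not depend on it
t_min = T_true + rng.exponential(1.0 / r, size=(200_000, N)).min(axis=1)
print(np.mean((ci_lower(t_min) < T_true) & (T_true < t_min)))   # ~0.95
\end{verbatim}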
In way of summary, the fundamental difference between the confidence intervals of Eqs.~\eqref{CC-14} and \eqref{CC-14'}, or those of Eq.~\eqref{CC-81}, and the credible intervals of Eq.~\eqref{CC-22}, which refer to the same data, is this: We judge the quality (= confidence level = coverage) of the confidence interval $\mathcal{C}_D$ by the company it keeps (= the full set $\mathbf{C}=\{\mathcal{C}_D\}$), whereas the credible interval is judged on its own merits (= credibility). It is worth repeating here that the two types of intervals tell us about very different things: Confidence intervals are about statistics on the data; credible intervals are about statistics on the quantity of interest. If one wishes, as we do, to draw reliable conclusions from the data of a single run, one should use the Bayesian credible interval and not the frequentist confidence interval. What about many runs? If we take, say, one hundred measurements of three first-failure times, we can find the one hundred shortest 95\% confidence intervals of either kind and base our conclusions on the properties of this set. Alternatively, we can combine the data and regard them as three hundred first-failure times of a single run and so arrive at a SCI with a size that is one-hundredth of each SCI for three first-failure times. Misconceptions such as ``confidence regions have a natural Bayesian interpretation as regions which are credible for any prior'' \cite{NJPexpert:16}, as widespread as they may be, arise when the fundamental difference in meaning between confidence intervals and credible intervals is not appreciated. While, obviously, one can compute the credibility of any region for any prior, there is no point in this ``natural Bayesian interpretation'' for a set of confidence regions; the credibility thus found for a particular confidence region has no universal relation to the coverage of the set. It is much more sensible to determine the SCRs for the data actually observed. On the other hand, it can be very useful to pay attention to the corresponding credible regions when constructing a set of confidence regions. In the context of QSE, this Chernoff-type strategy is employed by Christandl and Renner \cite{Christandl+1:12} who take a set of credible regions and enlarge all of them to produce a set of confidence regions; see also \cite{Blume-Kohout:12}. Another instance where a frequentist approach benefits from Bayesian methods is the marginalization of nuisance parameters in \cite{Faist+1:16} where a MC integration employs a flat prior, apparently chosen because it is easy to implement. The histograms thus produced --- they report differences of $P_{\mathrm{r},0}D(F)$ in Eq.~\eqref{6-2} between neighboring $F$ values, just like the binned probabilities in Fig.~\figref{CHSH-histo} --- depend on the prior, and so do the confidence intervals inferred from the histograms. There is also a rather common misconception about the subjectivity or objectivity of the two methods. The frequentist confidence regions are regarded as objective, in contrast to the subjective Bayesian credible regions. The subjective nature of the credible regions originates in the necessity of a prior, privately chosen by the scientist who evaluates the data and properly accounts for her prior knowledge. No prior is needed for the confidence regions, they are completely determined by the data --- or so it seems. 
In fact, the choice between different sets of confidence regions is equally private and subjective; in the example above, it is the choice between the confidence intervals of Eqs.~\eqref{CC-10}--\eqref{CC-12}, those of Eq.~\eqref{CC-26}, and yet other legitimate constructions which, perhaps, pay attention to prior knowledge. Clearly, either approach has unavoidable subjective ingredients, and this requires that we state, completely and precisely, how the data are processed; see Sec.~1.5.2 in \cite{Evans:15} for further pertinent remarks. \section{Prior-content function $P_0(\Theta)$ near ${\Theta=\pm\sqrt{8}}$} \label{sec:appA} In this appendix, we consider the sizes of the regions with ${\theta(p)\gtrsim-\sqrt{8}}$ and ${\theta(p)\lesssim\sqrt{8}}$. It is our objective to justify the power law stated in Eq.~\eqref{8-13} and so motivate the approximation in Eq.~\eqref{8-14}. We denote the kets of the maximally entangled states with ${\theta=\pm\sqrt{8}}$ by $\ket{\pm}$, that is \begin{equation}\label{eq:A-1} \ket{+}=\frac{\ket{\uparrow\downarrow}-\ket{\downarrow\uparrow}}{\sqrt{2}} \quad\mbox{and}\quad \ket{-}=\frac{\ket{\uparrow\uparrow}+\ket{\downarrow\downarrow}}{\sqrt{2}} \,, \end{equation} where ${\ket{\uparrow\downarrow}=\ket{\uparrow}\otimes\ket{\downarrow}}$, for example, has ${\sigma_z=1}$ for the first qubit and ${\sigma_z=-1}$ for the second. Since \begin{equation}\label{eq:A-2} \sigma_x\otimes\dyadic{1}\ket{\pm}=\mp\dyadic{1}\otimes\sigma_x\ket{\pm} \quad\mbox{and}\quad \sigma_z\otimes\dyadic{1}\ket{\pm}=\mp\dyadic{1}\otimes\sigma_z\ket{\pm}\,, \end{equation} we have [recall Eq.~\eqref{8-5}] \begin{eqnarray}\label{eq:A-3} \Pi_k^{\ }\ket{\pm}&=&\Pi_{[jj']}^{\ }\ket{\pm} =\Pi_j^{(1)}\otimes\Pi^{(2)}_{j'}\ket{\pm}\nonumber\\&=& \frac{1}{9}(1+\vec{t}_j\cdot\bfsym{\sigma})\otimes (1-\vec{t}_{j'}\cdot\bfsym{\sigma})\ket{\pm}\nonumber\\&=& \frac{1}{9}(1+\vec{t}_j\cdot\bfsym{\sigma}) (1\pm\vec{t}_{j'}\cdot\bfsym{\sigma})\otimes\dyadic{1}\ket{\pm} \nonumber\\&=& \frac{1}{9}\Bigl[(1\pm\vec{t}_j\cdot\vec{t}_{j'})\dyadic{1}\nonumber\\ &&\phantom{\frac{1}{9}\Bigl[} +(\vec{t}_j\pm\vec{t}_{j'}\pm\mathrm{i}\vec{t}_j\bfsym{\times}\vec{t}_{j'})\cdot \bfsym{\sigma}\Bigr] \otimes\dyadic{1}\ket{\pm}\,,\qquad \end{eqnarray} where \begin{equation}\label{eq:A-4} \vec{t}_1=\vec{e}_z\,,\quad \vec{t}_2=\frac{\sqrt{3}}{2}\vec{e}_x-\frac{1}{2}\vec{e}_z\,,\quad \vec{t}_3=-\frac{\sqrt{3}}{2}\vec{e}_x-\frac{1}{2}\vec{e}_z \end{equation} are the three unit vectors of the trine. States in an $\epsilon$-vicinity of $\ketbra{\pm}$ are of the form \begin{eqnarray}\label{eq:A-5} \rho_{\epsilon}^{\ }&=& \frac{\bigl(\ketbra{\pm}+\epsilon A^{\dagger}\bigr) \bigl(\ketbra{\pm}+\epsilon A\bigr)} {1+\epsilon\bra{\pm}(A^\dagger+A)\ket{\pm}+\epsilon^2\tr{A^\dagger A}} \nonumber\\&=& \ketbra{\pm}+\epsilon\,(A_{\pm}^\dagger+A_{\pm}^{\ })+O(\epsilon^2)\,, \end{eqnarray} where $A$ is any two-qubit operator and \begin{equation}\label{eq:A-6} A_{\pm}^{\ }=\ketbra{\pm}A\bigl(\dyadic{1}-\ketbra{\pm}\bigr) \end{equation} is a traceless rank-1 operator with the properties \begin{equation}\label{eq:A-6a} A_{\pm}^{\ }\ket{\pm}=0 \quad\mbox{and}\quad \ketbra{\pm}A_{\pm}^{\ }=A_{\pm}^{\ }\,. 
\end{equation} The TAT probabilities are \begin{eqnarray}\label{eq:A-7} p_k&=&\tr{\rho_{\epsilon}\Pi_k}\nonumber\\&=&\bra{\pm}\Pi_k\ket{\pm} +\epsilon\,\tr{(A_{\pm}^\dagger+A_{\pm}^{\ })\Pi_k}+O(\epsilon^2) \nonumber\\&=& \frac{1}{9}\bigl(1\pm\vec{t}_j\cdot\vec{t}_{j'}\bigr) +\epsilon\Bigl[(\vec{t}_j\pm\vec{t}_{j'})\cdot\bfsym{\alpha}_{\pm}^{\ } \mp(\vec{t}_j\bfsym{\times}\vec{t}_{j'})\cdot\bfsym{\beta}_{\pm}^{\ }\Bigr] \nonumber\\ &&+O(\epsilon^2) \end{eqnarray} with the real vectors $\bfsym{\alpha}_{\pm}^{\ }$ and $\bfsym{\beta}_{\pm}^{\ }$ given by \begin{equation}\label{eq:A-8} \frac{2}{9}\tr{\bfsym{\sigma}\otimes\dyadic{1} A_{\pm}^{\ }} =\bfsym{\alpha}_{\pm}^{\ } +\mathrm{i} \bfsym{\beta}_{\pm}^{\ }\,. \end{equation} Owing to the trine geometry, the $x$ and $z$ components of $\bfsym{\alpha}_{\pm}^{\ }$ and the $y$ component of $\bfsym{\beta}_{\pm}^{\ }$ matter, but the other three components do not. In the eight-dimensional probability space, then, we have increments ${\propto\epsilon}$ in three directions only, and increments ${\propto\epsilon^2}$ in the other five directions. For the primitive prior, therefore, the size of the $\epsilon$-vicinity is ${\propto\epsilon^{3\times1+5\times2}=\epsilon^{13}}$. The sum of probabilities in Eq.~\eqref{8-6} is \begin{equation}\label{eq:A-9} p_1+p_5+p_9=p_{[11]}+p_{[22]}+p_{[33]}=\frac{1}{3}(1\pm1)+O(\epsilon^2)\,, \end{equation} so that ${\Theta=\pm\sqrt{8}\,[1-O(\epsilon^2)]}$ or ${\sqrt{8}-\bfsym{|}\Theta\bfsym{|}\propto\epsilon^2}$. Accordingly, we infer that \begin{equation}\label{eq:A-10} P_0(\Theta)\propto{\left(\sqrt{8}+\Theta\right)}^\frac{13}{2} \quad\mbox{near ${\Theta=-\sqrt{8}}$} \end{equation} and \begin{equation}\label{eq:A-11} 1-P_0(\Theta)\propto{\left(\sqrt{8}-\Theta\right)}^\frac{13}{2} \quad\mbox{near ${\Theta=\sqrt{8}}$}\,, \end{equation} which imply Eq.~\eqref{8-13}. \section{Prior-content function $P_0(\Thetaopt)$ near ${\Thetaopt=0}$ and ${\Thetaopt=\sqrt{8}}$ }\label{sec:appB} In this appendix, we consider the sizes of the regions with ${\Thetaopt\gtrsim0}$ and ${\Thetaopt\lesssim\sqrt{8}}$. We wish to establish the $\Thetaopt$ analogs of Eqs.~\eqref{8-13} and \eqref{8-14}. In the context of $P_0(\Thetaopt)$, it is expedient to switch from the nine TAT probabilities $p_1,p_2,\dots,p_9$ to the expectation values of the eight single-qubit and two-qubit observables that are linearly related to the probabilities, \begin{widetext} \begin{eqnarray}\label{eq:B-1} {\left(\begin{array}{@{}ccc@{}} p_1 & p_2 & p_3 \\ p_4 & p_5 & p_6 \\ p_7 & p_8 & p_9 \end{array}\right)} \begin{array}{c} \mbox{\footnotesize{}linear}\\[-1.5ex] \longleftarrow\hspace*{-0.5em}\frac{\quad}{}\hspace*{-0.5em}\longrightarrow \\[-1.5ex]\mbox{\footnotesize{}relation} \end{array} {\left[\begin{array}{@{}c|cc@{}} & \expect{\dyadic{1}\otimes\sigma_x} & \expect{\dyadic{1}\otimes\sigma_z} \\ \hline \expect{\sigma_x\otimes\dyadic{1}} & \expect{\sigma_x\otimes\sigma_x} & \expect{\sigma_x\otimes\sigma_z} \\ \expect{\sigma_z\otimes\dyadic{1}} & \expect{\sigma_z\otimes\sigma_x} & \expect{\sigma_z\otimes\sigma_z} \end{array}\right]}\equiv {\left[\begin{array}{@{}c|cc@{}} & x_3 & x_4 \\ \hline x_1 & y_1 & y_2 \\ x_2 & y_3 & y_4 \end{array}\right]}. 
\end{eqnarray} The Jacobian matrix associated with the linear relation does not depend on the probabilities and, therefore, we have \begin{equation}\label{eq:B-2} (\mathrm{d}\rho)=(\mathrm{d} p)=(\mathrm{d} x)\,(\mathrm{d} y)\,w_{\mathrm{cstr}}(x,y) \end{equation} for the primitive prior, where $(\mathrm{d} x)=\mathrm{d} x_1\,\mathrm{d} x_2\,\mathrm{d} x_3\,\mathrm{d} x_4$ and $(\mathrm{d} y)=\mathrm{d} y_1\,\mathrm{d} y_2\,\mathrm{d} y_3\,\mathrm{d} y_4$, and $w_{\mathrm{cstr}}(x,y)$ equals a normalization factor for permissible values of ${x=(x_1,x_2,x_3,x_4)}$ and ${y=(y_1,y_2,y_3,y_4)}$, whereas ${w_{\mathrm{cstr}}(x,y)=0}$ for unphysical values. Thereby, the permissible values of $x$ and $y$ are those for which one can find $q$ in the range ${-1\leq q\leq1}$ such that \cite{note11} \begin{equation}\label{eq:B-4} {\left(\begin{array}{@{}c@{\,}c@{\,}c@{\,}c@{}} 1+x_1+x_3+y_1 & x_2+y_3 & x_4+y_2 & y_4-q \\ x_2+y_3 & 1-x_1+x_3-y_1 & y_4+q & x_4-y_2 \\ x_4+y_2 & y_4+q & 1+x_1-x_3-y_1 & x_2-y_3 \\ y_4-q & x_4-y_2 & x_2-y_3 & 1-x_1-x_3+y_1 \end{array}\right)}\geq0\,. \end{equation} While the implied explicit conditions on $x$ and $y$ are rather involved, the special cases of interest here --- namely ${x=0}$ and ${y=0}$, respectively --- are quite transparent. We have \begin{equation}\label{eq:B-5} w_{\mathrm{cstr}}(x,0)=0 \enskip \mbox{unless $\displaystyle{\left(x_1^2+x_2^2\right)}^{\frac{1}{2}} +{\left(x_3^2+x_4^2\right)}^{\frac{1}{2}}\leq1$} \end{equation} and \begin{eqnarray}\label{eq:B-6} w_{\mathrm{cstr}}(0,y)=0\enskip &&\mbox{unless the two characteristic values}\nonumber\\ &&\mbox{of ${\left(\begin{array}{@{}cc@{}}y_1 & y_2 \\ y_3 & y_4\end{array}\right)}$ are ${\leq1}$.} \end{eqnarray} The sum of the squares of these characteristic values is ${y_1^2+y_2^2+y_3^2+y_4^2}$; it determines the value of $\thetaopt(p)$, \begin{equation}\label{eq:B-7} \thetaopt=2{\left(y_1^2+y_2^2+y_3^2+y_4^2\right)}^{\frac{1}{2}}\,. \end{equation} \end{widetext} \subsection{The vicinity of ${\Thetaopt=0}$}\label{sec:appB-1} We obtain ${\thetaopt=0}$ for ${y=0}$ and \begin{eqnarray}\label{eq:B-8} {\left(\begin{array}{@{}c@{}} x_1 \\x_2\end{array}\right)}&=& {\left(\begin{array}{@{}cc@{}} \cos\varphi_1 & -\sin\varphi_1 \\ \sin\varphi_1 & \cos\varphi_1\end{array}\right)} {\left(\begin{array}{@{}c@{}} r_1 \\ 0 \end{array}\right)} ={\left(\begin{array}{@{}c@{}} r_1\cos\varphi_1 \\ r_1\sin\varphi_1 \end{array}\right)}\,, \nonumber\\ {\left(\begin{array}{@{}cc@{}} x_3 & x_4\end{array}\right)}&=& {\left(\begin{array}{@{}cc@{}} r_2 & 0 \end{array}\right)} {\left(\begin{array}{@{}cc@{}} \cos\varphi_2 & \sin\varphi_2 \\ -\sin\varphi_2 & \cos\varphi_2\end{array}\right)} \nonumber\\ &=& {\left(\begin{array}{@{}cc@{}} r_2\cos\varphi_2 & r_2\sin\varphi_2 \end{array}\right)}\,, \nonumber\\ (\mathrm{d} x)&=&\mathrm{d} r_1\,r_1\,\mathrm{d}\varphi_1\,\mathrm{d} r_2\,r_2\,\mathrm{d}\varphi_2 \end{eqnarray} \par\noindent{} with ${0\leq r_1\leq1-r_2\leq1}$ and ${0\leq\varphi_1,\varphi_2\leq2\pi}$. These $x$ values make up a four-dimen\-sional volume \begin{equation}\label{eq:B-9} \int(\mathrm{d} x)=(2\pi)^2\int_0^1\mathrm{d} r_1\,r_1\int_0^{1-r_1}\mathrm{d} r_2\,r_2 =\frac{\pi^2}{6} \end{equation} but, since there is no volume in the four-dimensional $y$ space, the set of probabilities with ${\thetaopt=0}$ has no eight-dimensional volume --- it has no size. The generic state in this set has ${r_1+r_2<1}$ and full rank. A finite, if small, four-dimensional ball is then available for the $y$ values. 
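As a numerical cross-check of Eq.~\eqref{B-9} (not part of the original derivation), a Monte Carlo estimate of the volume of the $x$ region allowed by Eq.~\eqref{B-5} indeed gives $\pi^2/6\approx1.645$:

\begin{verbatim}
import numpy as np

# Monte Carlo check (not from the paper) of Eq. (B-9): the constraint
# sqrt(x1^2+x2^2) + sqrt(x3^2+x4^2) <= 1 of Eq. (B-5) encloses a
# four-dimensional volume of pi^2/6 ~ 1.645.
rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, size=(2_000_000, 4))
r1 = np.hypot(x[:, 0], x[:, 1])
r2 = np.hypot(x[:, 2], x[:, 3])
volume = 16.0 * np.mean(r1 + r2 <= 1.0)   # 16 = volume of the sampling box
print(volume, np.pi**2 / 6)
\end{verbatim}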
All $y$ values on the three-dimensional surface of the ball have the same value of $\thetaopt$, equal to the diameter of the ball. The volume of the ball is proportional to $\thetaopt^4$ and, therefore, we have \begin{equation}\label{eq:B-10} P_0(\Thetaopt)\propto\Thetaopt^4\quad\mbox{for}\quad 0\lesssim\Thetaopt\ll1\,. \end{equation} \begin{widetext} \subsection{The vicinity of ${\Thetaopt=\sqrt{8}}$}\label{sec:appB-2} We reach ${\Thetaopt=\sqrt{8}}$ for all maximally entangled states with ${\expect{\sigma_y\otimes\sigma_y}^2=1}$. Then, ${x=0}$ and both characteristic values of the ${2\times2}$ matrix in Eq.~\eqref{B-6} are maximal. More generally, when ${x=0}$, the permissible $y$ values are \begin{eqnarray}\label{eq:B-11} {\left(\begin{array}{@{}cc@{}}y_1 & y_2 \\ y_3 & y_4\end{array}\right)}&=& {\left(\begin{array}{@{}cc@{}}\cos\phi_1 & -\sin\phi_1 \\ \sin\phi_1 & \cos\phi_1\end{array}\right)} {\left(\begin{array}{@{}cc@{}}\vartheta_1 & 0 \\ 0 & \vartheta_2\end{array}\right)} {\left(\begin{array}{@{}cc@{}} \cos\phi_2 & \sin\phi_2 \\ -\sin\phi_2 & \cos\phi_2\end{array}\right)} \\\hspace*{-1.5em}&=& {\left(\begin{array}{@{}cc@{}} \vartheta_1\cos\phi_1\cos\phi_2 +\vartheta_2\sin\phi_1\sin\phi_2 & \vartheta_1\cos\phi_1\sin\phi_2 -\vartheta_2\sin\phi_1\cos\phi_2 \\ \vartheta_1\sin\phi_1\cos\phi_2 -\vartheta_2\cos\phi_1\sin\phi_2 & \vartheta_1\sin\phi_1\sin\phi_2 +\vartheta_2\cos\phi_1\cos\phi_2 \end{array}\right)} \nonumber \end{eqnarray} \end{widetext} with ${0\leq \vartheta_1\leq1}$, ${-1\leq \vartheta_2\leq1}$, ${0\leq\phi_1,\phi_2\leq2\pi}$, where $\vartheta_1$ and $\bfsym{|}\vartheta_2\bfsym{|}$ are the characteristic values. The determinant $\vartheta_1\vartheta_2$ can be positive or negative; we avoid double coverage by restricting $\vartheta_1$ to positive values while letting $\phi_1$ and $\phi_2$ range over a full $2\pi$ period. The Jacobian factor in \begin{equation}\label{eq:B-12} (\mathrm{d} y)=\mathrm{d} \vartheta_1\,\mathrm{d} \vartheta_2\,\mathrm{d}\phi_1\,\mathrm{d}\phi_2\, \bfsym{|}\vartheta_1^2-\vartheta_2^2\bfsym{|} \end{equation} vanishes when ${\vartheta_1=\bfsym{|}\vartheta_2\bfsym{|}=1}$ and $\Thetaopt=2{\left(\vartheta_1^2+\vartheta_2^2\right)}^{\frac{1}{2}}=\sqrt{8}$. Therefore, there is no nonzero four-dimensional volume in the $y$ space for ${\Thetaopt=\sqrt{8}}$. More specifically, the $y$-space volume for ${\vartheta_1^2+\vartheta_2^2>\frac{1}{4}\Thetaopt^2}$ is \begin{eqnarray}\label{eq:B-13} &&(2\pi)^2\int_0^1\mathrm{d} \vartheta_1\int_{-1}^1\mathrm{d} \vartheta_2\, \bfsym{|}\vartheta_1^2-\vartheta_2^2\bfsym{|} \,\eta\Bigl(4{\left(\vartheta_1^2+\vartheta_2^2\right)}-\Thetaopt^2\Bigr) \nonumber\\&=&(2\pi)^2{\left[\frac{2}{3}-\frac{1}{32}\Thetaopt^4 +\frac{1}{6}{\left(\Thetaopt^2-4\right)}^\frac{3}{2}\, \eta{\left(\Thetaopt^2-4\right)} \right]} \nonumber\\&=&\frac{\sqrt{8}\,\pi^2}{3} {\left(\sqrt{8}-\Thetaopt\right)}^3 +O{\left({\left(\sqrt{8}-\Thetaopt\right)}^4\right)}\nonumber\\ &&\mbox{for}\quad\Thetaopt\lesssim\sqrt{8}\,. 
\end{eqnarray} With respect to the corresponding $x$-space volume, we note that the maximally entangled states with \begin{align}\label{eq:B-14} &{\left(\begin{array}{@{}cc@{}}y_1 & y_2 \\ y_3 & y_4\end{array}\right)}= {\left(\begin{array}{@{}cc@{}} \cos(\phi_1-\phi_2) & \sin(\phi_1-\phi_2) \\ -\sin(\phi_1-\phi_2) &\cos(\phi_1-\phi_2) \end{array}\right)}\nonumber\\ \mbox{or}\quad& {\left(\begin{array}{@{}cc@{}}y_1 & y_2 \\ y_3 & y_4\end{array}\right)}= {\left(\begin{array}{@{}cc@{}} \cos(\phi_1+\phi_2) & \sin(\phi_1+\phi_2) \\ \sin(\phi_1+\phi_2) &-\cos(\phi_1+\phi_2) \end{array}\right)} \end{align} are equivalent because local unitary transformations turn them into each other. It is, therefore, sufficient to consider an $\epsilon$-vicinity of one such state, for which we take that with ${y_1=y_4=-1}$ and ${y_2=y_3=0}$. This is $\ketbra{+}$ of Eq.~\eqref{A-1}, with $\rho_{\epsilon}$ in Eq.~\eqref{A-5}. As a consequence of Eq.~\eqref{A-2}, we have \begin{eqnarray}\label{eq:B-15} && x_1+x_3\propto\epsilon^2\,,\enskip x_1-x_3\propto\epsilon\nonumber\\\mbox{and}&\quad& x_2+x_4\propto\epsilon^2\,,\enskip x_2-x_4\propto\epsilon\,, \end{eqnarray} so that the $x$-space volume is proportional to $\epsilon^6$. Since we know from \eqref{A-11} that ${\sqrt{8}-\Thetaopt\propto\epsilon^2}$, it follows that the $x$-space volume is proportional to ${\left(\sqrt{8}-\Thetaopt\right)}^3$. Together with the $y$-space volume in \eqref{B-13}, we so find that \begin{equation}\label{eq:B-16} 1-P_0(\Thetaopt)\propto{\left(\sqrt{8}-\Thetaopt\right)}^6 \quad\mbox{for}\quad 0\lesssim\sqrt{8}-\Thetaopt\ll1\,. \end{equation} \subsection{Analog of \eqref{8-14} and \eqref{8-15} for $P_0(\Thetaopt)$} Just like Eq.~\eqref{8-13} suggests the approximation Eq.~\eqref{8-14} for $P_0(\Theta)$, the power laws for $P_0(\Thetaopt)$ near ${\Thetaopt=0}$ and ${\Thetaopt=\sqrt{8}}$ in \eqref{B-10} and \eqref{B-16}, respectively, invite the approximation \begin{equation}\label{eq:B-17}\qquad P_0(\Thetaopt)\simeq P_0^{(0)}(\Thetaopt)=\sum_l w_l B_{\alpha_l,\beta_l}(\Thetaopt) \end{equation} with $\displaystyle\sum_lw_l=1$ and \begin{eqnarray}\label{eq:B-18} B_{\alpha,\beta}(\Thetaopt)&=&{\left(\frac{1}{8}\right)}^{\frac{1}{2}(\alpha+\beta+1)} \frac{(\alpha+\beta+1)!}{\alpha!\;\beta!}\nonumber\\&&\times \int_0^{\Thetaopt}\mathrm{d} x\,x^{\alpha}(\sqrt{8}-x)^{\beta}\,. \end{eqnarray} One of the powers $\alpha_l$ is equal to $3$ and one of the $\beta_l$s is equal to $5$, and the other ones are larger. For the sample of 500\,000 sets of probabilities that generated the red $\Thetaopt$ histograms in Fig.~\figref{CHSH-histo}(a), a fit with a mean squared error of $4.2\times10^{-4}$ is achieved by a five-term approximation with these parameter values: \begin{equation}\label{eq:B-19} \begin{array}{@{}crrr@{}} l & \multicolumn{1}{c}{w_l}& \multicolumn{1}{c}{\alpha_l} & \multicolumn{1}{c}{\beta_l}\\ \hline 1 & 0.2187 & \multicolumn{1}{c}{3} & 5.2467\\ 2 & 0.2469 & 5.2238 & \multicolumn{1}{c}{5} \\ 3 & 0.3153 & 14.1703 & 11.7922\\ 4 & 0.2478 & 7.9878 & 11.8061 \\ 5 & -0.0287 & 37.5270 & 15.7518 \end{array} \end{equation} There are 12 fitting parameters here. The black curve to that histogram shows the corresponding approximation for $\displaystyle W_0(\Thetaopt)=\frac{\mathrm{d}}{\mathrm{d}\Thetaopt}P_0(\Thetaopt)$. 
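For reference, this approximation can be evaluated with standard special functions. With the substitution ${x=\sqrt{8}\,u}$ in Eq.~\eqref{B-18}, $B_{\alpha,\beta}(\Thetaopt)$ equals the regularized incomplete beta function $I_{\Thetaopt/\sqrt{8}}(\alpha+1,\beta+1)$. The sketch below (not the code used for the fit) evaluates the five-term approximation $P_0^{(0)}(\Thetaopt)$ with the parameter values of Eq.~\eqref{B-19}:

\begin{verbatim}
import numpy as np
from scipy.special import betainc

# Sketch (not the fitting code): with x = sqrt(8)*u, B_{alpha,beta}(Theta)
# of Eq. (B-18) is the regularized incomplete beta function
# I_{Theta/sqrt(8)}(alpha+1, beta+1), available as scipy's betainc.
w     = np.array([0.2187, 0.2469, 0.3153, 0.2478, -0.0287])   # last weight negative
alpha = np.array([3.0, 5.2238, 14.1703, 7.9878, 37.5270])
beta  = np.array([5.2467, 5.0, 11.7922, 11.8061, 15.7518])

def P0_approx(theta):
    u = np.clip(theta / np.sqrt(8.0), 0.0, 1.0)
    return np.sum(w * betainc(alpha + 1, beta + 1, u))

print(P0_approx(0.0), P0_approx(np.sqrt(8.0)))   # 0 and sum(w) = 1
print(P0_approx(1.5))                            # fitted prior content at Theta_opt = 1.5
\end{verbatim}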
\newcommand{\prior}[1]{\par\noindent\makebox[\columnwidth][r]{ \begin{tabular}{@{}r@{$\enskip=\enskip$}p{148pt}@{\quad\enskip}} #1 \end{tabular}}} \section{List of prior densities}\label{sec:appC} The various prior densities introduced in Secs.~\ref{sec:stage}--\ref{sec:prior} are \prior{$w_0(p)$ & probability-space prior density in Eq.~\eqref{2-1};} \prior{$W_0(F)$ & prior density for property value $F$ in Eq.~\eqref{3-4};} \prior{$u_F(p)$ & prior density on an iso-$F$ hypersurface in Eq.~\eqref{4-0a};} \prior{$w_{\mathrm{r}}(p)$ & reference prior density in Eq.~\eqref{4-0e};} \prior{$w_{\mathrm{primitive}}(p)$ & primitive prior of Eq.~\eqref{4-6};} \prior{$w_{\mathrm{Jeffreys}}(p)$ & Jeffreys prior of Eq.~\eqref{4-7}.\\[1ex]} There is also the probability-space factor $w_{\mathrm{cstr}}(p)$ in Eq.~\eqref{2-2} that accounts for the constraints. If we choose $w_0(p)$ to our liking, then $W_0(F)$ and $u_F(p)$ are determined by Eqs.~\eqref{3-4} and \eqref{4-0a}, respectively. Alternatively, we can freely choose $W_0(F)$ and either $u_F(p)$ or $w_{\mathrm{r}}(p)\propto u_{f(p)}(p)$, and then obtain $w_0(p)$ from Eq.~\eqref{4-3} or \eqref{4-0d}. For given $u_F(p)$, the $F$-likelihood $L(D|F)$ does not depend on $W_0(F)$. \newcommand{\acronym}[1]{\par\noindent\makebox[\columnwidth][r]{ \begin{tabular}{@{}p{28pt}@{\qquad}p{187pt}@{}} #1 \end{tabular}}} \section{List of acronyms}\label{sec:appD} \acronym{BLI & bounded-likelihood interval} \acronym{BLR & bounded-likelihood region} \acronym{CHSH& Clauser-Horne-Shimony-Holt} \acronym{CPU & central processing unit} \acronym{DSPE& direct state-property estimation} \acronym{ISPE& indirect state-property estimation} \acronym{MC & Monte Carlo} \acronym{MLI & maximum-likelihood interval} \acronym{POM & probability-operator measurement} \acronym{QSE & quantum state estimation} \acronym{SCI & smallest credible interval} \acronym{SCR & smallest credible region} \acronym{SPE & state-property estimation} \acronym{TAT & trine-antitrine} \end{document}
\begin{document} \title{An asymmetric double-slit interferometer for small and large quantum particles} \author{Mirjana Bo\v zi\'c$^1$, Du\v san Arsenovi\'c$^1$ and Lep\v sa Vu\v skovi\'c$^2$\\ $^1$Institute of Physics, P.O.\ Box 57,\\ 11001 Belgrade, Serbia and Montenegro\\ e-mail: [email protected], [email protected]\\ $^2$Old Dominion University, Department of Physics,\\ 4600 Elkhorn Avenue, Norfolk, VA 23529, USA\\ e-mail: [email protected]} \maketitle PACS numbers: 03.65, 03.65.Bz, 03.75, 07.60.Ly \begin{abstract} Quantum theory of interference phenomena does not take the diameter of the particle into account, since particles were much smaller than the width of the slits in early observations. In recent experiments with large molecules, the diameter of the particle has approached the width of the slits. Therefore, analytical description of these cases should include a finite particle size. The generic quantum interference setup is an asymmetric double slit interferometer. We evaluate the wave function of the particle transverse motion using two forms of the solution of Schr\"odinger's equation in an asymmetric interferometer: the Fresnel-Kirchhoff form and the form derived from the transverse wave function in the momentum representation. The transverse momentum distribution is independent of the distance from the slits, while the space distribution strongly depends on this distance. Based on the transverse momentum distribution we determined the space distribution of particles behind the slits. We will present two cases: {\it a\/}) when the diameter of the particle may be neglected with respect to the width of both slits, and {\it b\/}) when the diameter of the particle is larger than the width of the smaller slit. \end{abstract} \section{Introduction} \null\hskip\parindent Until recently various quantum interference experiments were conducted with objects (photons, electrons, neutrons,...) of the size much smaller than the characteristic dimensions of the diffraction structure \cite{Martini}. The first single slit experiment with Rydberg atoms, objects of non negligible size with respect to the width ot the slit was performed by Fabre et al. \cite{Fabre}. By measuring transmission through micrometer size slits, these authors determined the size of Rydberg atoms. Later, Hunter and Wadlinger \cite{Hunter} proposed the single-slit diffraction experiment in order to measure the diameter of the photon. In order to investigate the reasons of unobservability of quantum effects in the classical world Arndt et al. \cite{Arndt}, Nairz et al. \cite{Nairz}, Brezger et al. \cite{Brezger} performed quantum interference experiments with objects of large mass and diameter, including macromolecules. The experiments raise various questions about theoretical concepts. The following three are obvious: 1) Are there new effects applying to particles bigger than their de Broglie wavelength? 2) Does internal structure have influence on interference? Arndt et.\ al.\ emphasized the task \cite{Arndt}: ``Here we report the observation of de Broglie wave interference of ${\rm C}_{60}$ molecules by diffraction at a material absorption grating. This molecule is the most massive and complex object in which wave behavior has been observed. Of particular interest is the fact that ${\rm C}_{60}$ is almost a classical body, because of its many excited internal degrees of freedom...''. 3) What happened with an ensemble of incoming particles if slits are smaller than the diameter of the particles? 
In the experiment of Arndt et.\ al.\ the de Broglie wavelength of the interfering fullerenes is already smaller than their diameter by a factor of almost $400$ and authors pointed out that ``it would be certainly interesting to investigate the interference of objects the size of which is equal or even bigger than the diffracting structure'' \cite{Arndt}. These experiments could shed more light on a long standing dilema whether each quanton consists of a particle and accompanied wave (as two different compatible entities) \cite{deBroglie}, or quantons sometimes behave like a wave and sometimes behave like a particle (obeying principle of complementarity) \cite{Bohr}? The following citations illustrate the present situation. ``We performed an experiment which was proposed by Ghose, Home and Agarwal showing both classical wave-like and particle-like behaviors of single photon states of light in a single experiment, in conformity with quantum optics.'' \cite{Mizobuchi}. ``Here we give a detailed justification of our claim that this experimental results contradict the tenet of mutual exclusiveness of classical wave and particle pictures assumed in Bohr's complementarity principle.'' \cite{Ghose}. ``Simultaneous observations of wave and particle behavior is prohibited'' \cite{Buks}. ``Although interference patterns were once thought of as evidence for wave motion, when looked at in detail it can be seen that the electron arrive in individual lumps. ... We must therefore conclude that electrons show wave-like interference in their arrival pattern despite the fact that they arrive in lumps just like bullets''. \cite{Hey} ``It is frequently said or implied that the wave-particle duality of matter embodies the notion that a particle -- the electron, for example -- propagates like a wave, but registers at a detector like a particle. Here one must again exercise care in expression so that what is already intrinsically difficult to understand is not made more so by semantic confusion. The manifestations of wave-like behavior are statistical in nature and {\em always} emerge from the collective outcome of many electron events... That electrons behave singly as particles and collectively as waves is indeed mysterious, ... '' \cite{Silverman} ``Each atom is therefore at the same time a particle and a wave, the wave allowing one to get the probability to observe the particle at a given place.'' \cite{Cohen-Tannoudji} ``...Ever since then the two sides of the same quantum object appeared together: on the one hand the non-local wave nature needed to describe the unperturbed propagation and on the other hand the local aspect of the object when it is registered by the detector" \cite{Arndt} In this paper we present the theoretical study of the dependence of the quantum interference pattern on the diameter of the particle, assuming that the characteristic sizes of the diffraction structure are of the order of the diameter of the particle. We argue that an asymmetric double-slit interferometer (an interferometer whose slits have different widths $\delta_1$ and $\delta_2$) is the generic case for this study. {\raggedright \section{The particle wave function behind an asymmetric grating} } \null\hskip\parindent We shall now determine, the wave function of a quanton which travels with velocity, $\vec v=v \vec i$ through the region I, towards the slits and is then sent through the slits to the region II (Fig.~1). Results in this section are valid for arbitrary slits. 
This wave function is a stationary solution of the time-dependent two dimensional Schr\"odinger equation \begin{equation} -{\hbar^2\over2m}\left(\pzv{^2}{x^2}+\pzv{^2}{y^2}\right)\Psi(x,y,t)=i\hbar\pzv{}t\Psi(x,y,t).\label{timeschr} \end{equation} The solution of (\ref{timeschr}) has the form \begin{equation} \Psi(x,y,t)=e^{-i\omega t}\varphi(x,y),\label{two} \end{equation} where $\hbar\omega=mv^2/2$ and $p=mv=\hbar k$. Space dependent function $\varphi(x,y)$ satisfies the Helmholtz equation \begin{equation} -{\hbar^2\over2m}\left(\pzv{^2}{x^2}+\pzv{^2}{y^2}\right)\varphi(x,y)=\hbar\omega\varphi(x,y).\label{statschr} \end{equation} The solution of this equation in the region I is a spherical wave \begin{equation} \varphi(P^\prime)=\varphi(x^\prime,y^\prime)=A{e^{ikr^\prime}\over r^\prime}, \end{equation} where $A$ is a constant and $r^\prime$ is a distance (Fig.~1) from the source ($P_0$) to the point $P^\prime=(x^\prime,y^\prime)$ in the region I. The distance $a$ of the double-slit screen from the source $P_0$ being very large compared to the width of the slits, this spherical wave at the slit points $(x^\prime=x^{\prime\prime},y^\prime=0)$ may be approximated by the plane wave. In the region II the equation (\ref{statschr}) is as simple as before but initial condition makes the solution more difficult. Solution known as Fresnel-Kirchhoff diffraction formula \cite{BornWolf} reads: \begin{equation} \varphi(x,y)=-{iA\over2\lambda}{e^{ika}\over a}\int_{\cal A}dx^{\prime\prime}{e^{iks}\over s}[1+\cos\chi],\label{fksol} \end{equation} where $s=\sqrt{y^2+(x^{\prime\prime}-x)^2}$, $\cos\chi=y/s$, $\lambda=2\pi/k$. The region $\cal A$ is the union of all intervals along the $x$-axis where slits are open. From now on $x^{\prime\prime}$ represents a variable of integration along the line of the slits. Far enough from the slits wave function resembles the Fourier transform of the wave field accross the aperture. This can be verified from the Fresnel-Kirchhoff solution. In the far region it follows: \begin{equation} \varphi(x,y)\approx-{iA\over\lambda}{e^{ika}\over a}{e^{iky}\over y}\int_{\cal A}dx^{\prime\prime}\,\varphi(x^{\prime\prime},0)\,e^{-ikxx^{\prime\prime}/y}. \end{equation} Wave function is now separable into two functions, one depending on $y$ and the other depending on $K_x\equiv kx/y$ \cite{Hecht}: \begin{eqnarray} &\displaystyle\varphi(x,y)=D(y){\cal F}(K_x)&\nonumber\\ &\displaystyle D(y)=-\sqrt{2\pi}{iA\over\lambda}{e^{ika}\over a}{e^{iky}\over y}&\nonumber\\ &\displaystyle{\cal F}(K_x)={1\over\sqrt{2\pi}}\int_{\cal A}dx^{\prime\prime}\,\varphi(x^{\prime\prime},0)\,e^{-iK_xx^{\prime\prime}}.&\label{faraway} \end{eqnarray} The solution of equation (\ref{statschr}) in region II can be written in another form \cite{Naturforsch,Arsenovic,Vuskovic}. This form is more convenient for our analysis than the form (\ref{fksol}). 
With approximation valid for small diffraction angles $\chi$ we have: \begin{equation} \varphi(x,y)=e^{iky}{1\over\sqrt{2\pi}}\int_{-\infty}^{+\infty}dk_x\,c(k_x)\,e^{ik_xx}e^{-i{k_x^2\over2k}y}\equiv e^{iky}\phi(x,y).\label{planewave} \end{equation} where $c(k_x)$ is the Fourier transform of the function $\varphi(x,y)$ on the aperture $\varphi(x,y=0)$: \begin{equation} c(k_x)={1\over\sqrt{2\pi}}\int_{-\infty}^{+\infty}dx^{\prime\prime}\,\varphi(x^{\prime\prime},0)\,e^{-ik_xx^{\prime\prime}}.\label{cdef} \end{equation} Inserting (\ref{cdef}) into (\ref{planewave}), after integration over $k_x$ one finds \begin{equation} \varphi(x,y)=e^{-i{\pi\over4}}e^{iky}\sqrt{k\over2\pi}{1\over\sqrt y}\int_{\cal A}\varphi(x^{\prime\prime},0)e^{i{k(x-x^{\prime\prime})^2\over2y}}dx^{\prime\prime}.\label{plwa} \end{equation} This function is normalized $\int_{-\infty}^{+\infty}\vert\varphi(x^{\prime\prime},y)\vert^2\,dx^{\prime\prime}=1$, provided $\int_{\cal A}\vert\varphi(x^{\prime\prime},0)\vert^2\allowbreak dx^{\prime\prime}=1$. The form (\ref{plwa}) clearly expresses wave function's dependence on the boundary condition and it is appropriate for numerical computation. For large values of $y$ the function $\varphi(x,y)$ in (\ref{plwa}) is approximated by \begin{equation} \varphi(x,y)=\sqrt{k\over2\pi y}e^{-i{\pi\over4}}e^{iky}e^{ikx^2/2y}\int_{\cal A}\varphi(x^{\prime\prime},0)e^{-ikxx^{\prime\prime}/y}dx^{\prime\prime}.\label{plwaa} \end{equation} Taking Eq.~(\ref{cdef}) into account, Eq.~(\ref{plwaa}) takes the form \begin{equation} \varphi(x,y)=e^{iky}\sqrt{k\over y}e^{-i{\pi\over4}}e^{ikx^2/2y}c(kx/y).\label{farc} \end{equation} We see that the variable $K_x={kx\over y}={mx\over\hbar t}$ plays the role of $k_x$. Since $K_x$ is proportional to $x/y$ functions $\vert\varphi(x,y)=const\vert$ are family of functions of $x$ spreading along the $x-$ axis as $y$ increases. In fact, for each value of $\vert\varphi\vert$, in the far field there exists the straight line with origin at the center of the grating along which this particular value of $\vert\varphi\vert$ propagates. {\raggedright \section{The understanding of the space distribution using transverse momentum distribution} } \null\hskip\parindent By assuming that the motion of an atom along the $y$-axis may be treated classically and that the transverse motion is quantum, one is tempted to use the relation $y=vt$ and to determine the time dependent function of the transverse motion $\psi(x,t)$ from the function $\phi(x,y)$, by the following definition: \begin{equation} \psi(x,t)\equiv\phi(x,vt)={1\over\sqrt{2\pi}}\int dk_x\,c(k_x)e^{ik_xx}e^{-i\omega_xt}\label{ft} \end{equation} where $\omega_x=\hbar k_x^2/2m$. We see that the function $\psi(x,t)$ has the form of a general solution of the one-dimensional Schr\"odinger equation. The wave function $c(k_x)$ is then seen as a wave function of this one-dimensional (transverse) motion in the momentum representation. It's modulus square, $\vert c(k_x)\vert^2$, determines the distribution of transverse momenta. The wave function $\Psi(x,y,t)$ from Eq.~(\ref{two}) is expressed through $\psi(x,t)$ as \begin{equation} \Psi(x,y,t)=e^{iky}e^{-i\omega t}\psi(x,t). \end{equation} By taking Eq.~(\ref{farc}) into account one concludes that in the far field the relation (\ref{ft}) between the wave functions $\psi(x,t)$ and $c(k_x)$ reduces to the simplier form: \begin{equation} \psi\left(x,t={ym\over\hbar k}\right)=\sqrt{k\over y}e^{-i{\pi\over4}}e^{ikx^2/2y}c(kx/y). 
\end{equation} Based on the above factorization of the wave function $\Psi(x,y,t)$ and the properties of its factors summarized above, we proposed \cite{Arsenovic} the new expression for the probability density $\tilde P(x,t)$ fo the particle's arrival to a certain point $(x,y=vt)$ at time $t$: \begin{eqnarray} &\displaystyle\tilde P\left(x,{y\over v}\right)=\tilde P(x,t)\equiv&\nonumber\\ &\displaystyle\equiv\int_{-\infty}^{+\infty}dk_x\int_{-\infty}^{+\infty}dx^{\prime\prime}\,\vert c_n(k_x)\vert^2\vert\phi(x^{\prime\prime},0)\vert^2\delta\left(x-x^{\prime\prime}-{\hbar k_xt\over m}\right).&\label{trajec} \end{eqnarray} Particles emerge from different points $(x^{\prime\prime},0)$ at the aperture. That is the reason for integration over $x^{\prime\prime}$. The contribution of each point at the aperture is proportional to $\vert\phi(x^{\prime\prime},0)\vert^2$. The integration over $dk_x$ and the function $\vert c_n(k_x)\vert^2$ reflect the contribution of various angles/momenta in diffraction. Finally, $\delta$-function assumes straight trajectory from a point $(x^{\prime\prime},0)$ at the slits to the point $(x,y)$ and leads to the simplified form \begin{equation} \tilde P(x,t)=\int_{-\infty}^{+\infty}dk_x\,\vert c_n(k_x)\vert^2\left\vert\phi\left(x-{\hbar k_xt\over m},0\right)\right\vert^2.\label{sumtraj} \end{equation} By assuming that the function $\phi(x^{\prime\prime},0)=0$ for $x^{\prime\prime}\not\in{\cal A}$ and $\phi(x^{\prime\prime},0)=const$ such that $\int_{\cal A}\vert\phi(x^{\prime\prime},0)\vert^2=1$, for $x^{\prime\prime}\in{\cal A}$, the Eq.~(\ref{sumtraj}) is transformed to the following usefull form \begin{equation} \tilde P(x,t)={1\over\sqrt{\sum_{i=1}^n\delta_i}}\sum_{i=1}^n\int_{{m\over\hbar}(x-x_r^i)}^{{m\over\hbar}(x-x_l^i)}dk_x\vert c(k_x)\vert^2\equiv\sum_{i=1}^n\tilde P_i(x,t). \label{sumck} \end{equation} Here $x_l^i$ and $x_r^i$ are the coordinates of the left and right edge of the $i$-th slit. The total probability density $\tilde P(x,t)$ is a sum of $n$ terms, $\tilde P_i(x,t)$. $\tilde P_i(x,t)$ is interpreted to be the probability that a quanton reaches $(x,y=vt)$ at time $t$ after passing through the $i$-th slit of the $n$-slits grating. Numerical calculation shows that far from the slits the function $\tilde P(x,t)$ (Fig.~4) is very nearly equal to the exact probability density $\vert\Psi(x,y,t)\vert^2=\vert\psi(x,t)\vert^2$ (Fig.~2). Near the slits $\tilde P(x,t)$ and $\vert\psi(x,t)\vert^2$ qualitatively look similarly but they differ numerically. {\raggedright \section{On the possible influence of particle's diameter on the interference pattern} } \null\hskip\parindent We outline an approach to investigate how the widths of the slits influence the interference pattern in the double-slit experiment with quantons - photons, electrons, neutrons, atoms, molecules. Interference effects are visible when the wavelength of quantons is of the order of the distance between the slits. In practice, this distance is $d=(2-50)\,\lambda$. The slit width is often equal or up to ten times smaller than the distance between the slits. In quantum interference experiments with electrons and neutrons the diameter of the particle is smaller than the wavelength. Consequently, in classical experiments the width $\delta$ of the slits is much greater than $D$. But, depending on the velocity, atoms may have de Broglie wavelength which is smaller than the diameter of the atom. 
With macromolecules such a situation encounters more often, as shown in the experiment of Arndt et.\ al.\ and discussed by Arndt et al.\ \cite{Arndt} and Nairz et al.\ \cite{Nairz}. So, interference experiments with such quantum particles could have the slit widths smaller than the particle diameter. This requires a theoretical approach to quantum interference which takes the diameter of the particle into account \cite{Bozic}. A study of a quantum particle in an asymmetric double-slit interferometer ($\delta_1>\delta_2$) seems to be useful for this purpose because we identify two characteristic cases for the ratio of slit widths $\delta_1$ and $\delta_2$ and the diameter of the particle $D$: {\it a\/}) The diameter $D$ is negligeable with respect to the widths $\delta_1$ and $\delta_2$. {\it b\/}) The diameter $D$ is greater than the width $\delta_2$, where $\delta_2<\delta_1$. In the case {\it a\/}), which was until recently the only case of physical interest, there is no need to consider or take into account the diameter of the particle. The particle momentum $\vert c(k_x)\vert^2$ and space distribution $\vert\psi(x,t)\vert^2$ behind the grating are determined by the wave function \begin{equation} \psi(x,t)=\phi(x,vt)={1\over\sqrt{2\pi}}\int_{-\infty}^{+\infty}c(k_x)e^{i(k_xx-\omega_x t)}dk_x \label{ift} \end{equation} where \begin{equation} \phi(x,0)=\psi(x,0)=\cases{{1\over\sqrt{\delta_1+\delta_2}}&$x\in{\cal A},\,\,\,{\cal A}=\left({-d-\delta_1\over2},{-d+\delta_1\over2}\right)\cup\left({d-\delta_2\over2},{d+\delta_2\over2}\right)$\cr \strut0&$x\not\in{\cal A}$\cr} \label{cases} \end{equation} and \begin{equation} c(k_x)={1\over\sqrt{2\pi(\delta_1+\delta_2)}}{2\over k_x}\left[e^{ik_xd/2}\sin{k_x\delta_1\over2}+e^{-ik_xd/2}\sin{k_x\delta_2\over2}\right].\label{cktwo} \label{cdefd} \end{equation} The functions $\vert\psi(x,t)\vert^2$ and $\vert c(k_x)\vert^2$ are graphically represented at Fig.~2 and Fig.~3, for the chosen set of parameters. In the case {\it b\/}), we are faced with the question how and where to take the diameter of the particle into account. We know that the diameter of the particle is not incorporated anywhere in the Schr\"odinger equation. But, we expect that a particle with diameter $D$, such that $\delta_1>D>\delta_2$ could not pass through the second slit. So, it seems to us that we are forced to assume that wave functions in the coordinate and momentum representation in the case {\it b\/}) is also given by expressions (\ref{ift})-(\ref{cktwo}). The momentum distribution $\vert c(k_x)\vert^2$ of particles is given also by (\ref{cdefd}), because it is determined by the values of the wave function at the boundary. But the space distribution of particles in case {\it b\/}) is different from the space distribution in case {\it a\/}), because the particles arriving to the smaller slit can not go through. We conclude that particle distribution in case {\it b\/}) is given by $\tilde P_1(x,t)$ from the expression (\ref{sumck}) of $\tilde P(x,t)$. \begin{equation} \tilde P(x,t)\approx\tilde P_1(x,t)={1\over\sqrt{\sum_{i=1}^n\delta_i}}\int_{{m\over\hbar}(x-x_r^i)}^{{m\over\hbar}(x-x_l^i)}\vert c(k_x)\vert^2dk_x. \end{equation} The probability $\tilde P_1(x,t)$ is graphically represented in Fig.\ 5. 
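For completeness, the momentum distribution of Eq.~(\ref{cktwo}) is easy to evaluate numerically. The sketch below (not the authors' code) uses the slit parameters of the figures, $\delta_1=1\,\mu{\rm m}$, $\delta_2=0.25\,\mu{\rm m}$, $d=8\,\mu{\rm m}$, with lengths in micrometers and $k_x$ in ${\rm rad}/\mu{\rm m}$; it checks the normalization of $\vert c(k_x)\vert^2$ and cross-checks the closed form against the defining integral (\ref{cdef}) with the aperture function (\ref{cases}):

\begin{verbatim}
import numpy as np

# Sketch (not the authors' code): transverse momentum amplitude c(k_x)
# for the asymmetric double slit, lengths in micrometers, k_x in rad/um.
d1, d2, d = 1.0, 0.25, 8.0

def c_closed(kx):
    kx = np.where(kx == 0.0, 1e-12, kx)      # remove the harmless 0/0 at k_x = 0
    return (2.0 / kx) / np.sqrt(2 * np.pi * (d1 + d2)) * (
        np.exp(1j * kx * d / 2) * np.sin(kx * d1 / 2)
      + np.exp(-1j * kx * d / 2) * np.sin(kx * d2 / 2))

kx = np.linspace(-200.0, 200.0, 200_001)
dk = kx[1] - kx[0]
# close to 1; the small deficit is the tail cut off by the finite window
print(dk * np.sum(np.abs(c_closed(kx)) ** 2))

# cross-check against the Fourier integral of the aperture wave function
x = np.linspace(-10.0, 10.0, 400_001)
dx = x[1] - x[0]
phi0 = np.where((np.abs(x + d / 2) < d1 / 2) | (np.abs(x - d / 2) < d2 / 2),
                1.0 / np.sqrt(d1 + d2), 0.0)
for k in (0.5, 3.0, 10.0):
    c_int = dx * np.sum(phi0 * np.exp(-1j * k * x)) / np.sqrt(2 * np.pi)
    print(abs(c_int - c_closed(k)))          # small: the two agree
\end{verbatim}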
\section{Conclusion} Inspired by current efforts to perform diffraction and interference experiments with objects of size that is equal or even larger than the diffraction structure, we outline an approach to investigate how the particle diameter influences the interference pattern in an asymmetric double slit interferometer. We identify two characteristic cases for the ratio of slit widths $\delta_1$ and $\delta_2$ and the diameter $D$ of the particle: {\it a\/}) $D\ll\delta_1$ and $D\ll\delta_2$, {\it b\/}) $\delta_1>D>\delta_2$. The wave function behind the grating has the same form in both cases because it is the solution of the Schr\"odinger equation which is not sensitive to the diameter of the particle. The space distribution of particles in case {\it a\/}) is given as usual by the modulus square of this function. Using the same wave function and assuming that a particle with diameter $D$, such that $\delta_1>D>\delta_2$ could not pass through the second slit, we determine the space distribution in case {\it b\/}). We conclude that the momentum distribution of particles behind the grating is the same in cases {\it a\/}) and {\it b\/}). \null\hskip\parindent \centerline{\bf Figure captions} \vskip\baselineskip Fig.\ 1. Illustration of a grating with $n$ slits of various widths. Fig.\ 2. The particle distribution function $\vert\psi(x,t)\vert^2$ behind the asymmetric double slit grating ($\delta_1=1\,\mu{\rm m}$, $\delta_2=0.25\,\mu{\rm m}$, $d=8\,\mu{\rm m}$) close to the slits $(a,b)$ and far from the slits $(c,d)$. It is evaluated from the form $(19)$ of the wave function. The initial longitudinal wave vector is $k=4\pi\cdot10^{10}\,{\rm m}^{-1}$, the particle mass is $m=3.8189\cdot10^{-26}\,{\rm kg}$. Fig.\ 3. The particle transverse momentum distribution $\vert c(k_x)\vert^2$ behind the asymmetric double-slit grating ($\delta_1=1\,\mu{\rm m}$, $\delta_2=0.25\,\mu{\rm m}$, $d=8\,\mu{\rm m}$). Fig.\ 4. The probability density $\tilde P(x,t)$ of particles arrival to the point $x$ at time $t$ ($y=vt$) behind the asymmetric double slit grating ($\delta_1=1\,\mu{\rm m}$, $\delta_2=0.25\,\mu{\rm m}$, $d=8\,\mu{\rm m}$) close to the slits $(a,b)$ and far from the slits $(c,d)$. It is evaluated from Eq.~(18). Particles' diameter $D$ is negligible with respect to the widths of the slits. The initial longitudinal wave vector is $k=4\pi\cdot10^{10}\,{\rm m}^{-1}$, the particle mass is $m=3.8189\cdot10^{-26}\,{\rm kg}$. Fig.\ 5. The probability density $\tilde P_1(x,t)$ of particles reaching $(x,y)$ at time $t$ after passing through the larger slit, near the slits $(a,b)$ and far from the slits $(c,d)$. It is evaluated from Eq.~(22). $D$ is assumed to be larger than $\delta_2$ and smaller than $\delta_1$. The values of parameters are the same as in captions of Figs.~2,3,4. \end{document}
\begin{document} \title{The robust superreplication problem: a dynamic approach} \author[1]{Laurence Carassus} \author[2\footnote{ Support from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no. 335421 is gratefully acknowledged. JO and JW are also thankful to St.\ John's College in Oxford for its financial support. JW further acknowledges support from the German Academic Scholarship Foundation.}]{Jan Ob\l{}\'oj} \author[2$^*$]{Johannes Wiesel} \affil[1]{L\'eonard de Vinci P\^ole Universitaire, Research Center and LMR, Universit\'e de Reims-Champagne Ardenne. Email: [email protected]} \affil[2]{Mathematical Institute and St. John’s College, University of Oxford, Oxford} \maketitle \begin{abstract} In the frictionless discrete time financial market of Bouchard et al.(2015) we consider a trader who, due to regulatory requirements or internal risk management reasons, is required to hedge a claim $\xi$ in a risk-conservative way relative to a family of probability measures ${\cal P}$. We first describe the evolution of $\pi_t(\xi)$ - the superhedging price at time $t$ of the liability $\xi$ at maturity $T$ - via a dynamic programming principle and show that $\pi_t(\xi)$ can be seen as a concave envelope of $\pi_{t+1}(\xi)$ evaluated at today's prices. Then we consider an optimal investment problem for the trader who is rolling over her robust superhedge and phrase this as a robust maximisation problem, where the expected utility of inter-temporal consumption is optimised subject to a robust superhedging constraint. This utility maximisation is carrried out under a new family of measures ${\cal P}^u$, which no longer have to capture regulatory or institutional risk views but rather represent trader's subjective views on market dynamics. Under suitable assumptions on the trader's utility functions, we show that optimal investment and consumption strategies exist and further specify when, and in what sense, these may be unique. \end{abstract} \section{Introduction} We consider a discrete time financial market and an agent who needs to hedge a liability $\xi$ maturing at a future date $T$ in a robust and risk-conservative way. Our focus is on the interplay between the beliefs used for assessing the risks, the beliefs used for agent's investment decisions and the dynamics of agent's actions. For simplicity we assume away other factors and consider an agent who can trade in a dynamic way with no constraints or frictions in $d$ assets available in the market at prices which are exogenous. More precisely, following the approach of \cite{Samuelson} and \cite{BS73}, risky assets are modelled as stochastic processes and their behaviour specified by a probability measure. However, unlike the classical uni-prior approach which fixes one such measure $P$, we consider a multi-prior framework and work simultaneously under a whole family of measures $P\in {\cal P}$. This offers a robust approach which accounts for model ambiguity, also referred to as \emph{Knightian uncertainty} after \cite{Kni}. The price to pay for a robust modelling view comes through specificity of outputs: while the uni-prior setting might generate a unique fair price for a derivate contract a multi-prior setting will typically generate a relatively wide interval of no-arbitrage prices, a tradeoff first identified in the seminal paper of \cite{merton_no_dominance1973}. 
We consider a trader who, due to regulatory requirements or internal risk management reasons, is required to hedge $\xi$ in a risk-conservative way relative to ${\cal P}$. This means that initially she has to allocate capital equal to $\pi(\xi)$, the superhedging price of $\xi$, i.e., the price of cheapest trading strategies which are guaranteed to cover the liability $\xi$ under all $P\in {\cal P}$. There might be many such cheapest superhedging strategies and the trader can pick any one of them to follow until time $T$. This is a conservative and non-linear risk assessment: the capital the trader would be allowed to borrow against a long position in $\xi$ is $-\pi(-\xi)$ and is typically significantly lower than $\pi(\xi)$. The superhedging price $\pi(\xi)$ can be characterised theoretically and has been considered in a number of papers, see \cite{BN} and the discussion below. To the best of our knowledge, the focus of most of these works has been on the static problem: the problem today for the horizon $T$. In contrast, in this paper we want to focus on the dynamics of the robust pricing and hedging problem \emph{through time}. We ask how $\pi(\xi)$ changes \emph{over time} and how the trader should act optimally through time. Clearly, tomorrow she will see new prices in the market and will be able to recompute the superhedging price. If the new price is lower, she will be able to unwind her old position, buy a new position and be left with a surplus. She could then consume this (e.g., pay into her credit line if the initial capital was borrowed) or invest further if she believes the market offers suitable opportunities. Our first main contribution is to describe the evolution of $\pi_t(\xi)$ - the superhedging price at time $t$ of the liability $\xi$ at maturity $T$. We work in the setting of \cite{BN} and consider an abstract set of priors ${\cal P}$, possibly large and in particular not dominated by a single probability measure. The measures $P\in {\cal P}$ are represented as compositions of one-step kernels and to establish the dual characterisation of $\pi_0(\xi)$ \cite{BN} have essentially proven a dynamic programming principle for the dual objects. We prove that $(\pi_t(\xi))_{0\leq t\leq T}$ satisfy a dynamic programming principle, and that $\pi_t(\xi)$ can be seen as a concave envelope of $\pi_{t+1}(\xi)$ evaluated at today's prices. To the best of our knowledge, this was first suggested in the robust setting by \cite{Dupire}. We also characterise $\pi_t(\xi)$ as the wealth of a minimal superhedging strategy in the sense of \cite{FK97}. These results provide natural robust extensions of classical uni-prior results, see \cite{fs}, including a robust version of the algorithm in \cite{CGT06}. Further, considering ${\cal P}$ which corresponds to the pointwise robust setting of \cite{Bur16b}, we show that $\pi_t(\xi)$ corresponds to the uniprior superhedging price for an extreme $P\in {\cal P}$. Proving our results in the robust setting requires rather lengthy and technical arguments. This is mainly due to delicate measurability questions. Our second main contribution is to consider an optimal investment problem for a trader who is rolling over her robust superhedge. This is phrased as a problem of robust maximisation of expected utility of inter-temporal consumption subject to a robust superhedging constraint. Here the robust constraint means the superhedging has to be satisfies $P$-a.s.\ for all $P\in {\cal P}$. 
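To make the concave-envelope form of the dynamic programming principle concrete, consider the following schematic one-asset sketch (a toy illustration, not taken from the paper): the one-step price ratio is only known to lie in a finite set, the claim is a call, and all numbers are hypothetical. In this pointwise setting the time-$t$ superhedging price at price $s$ is the value at $s$ of the concave envelope of the time-$(t+1)$ price over the attainable next-step prices, and the recursion can be evaluated directly.

\begin{verbatim}
import numpy as np

# Toy illustration (not from the paper) of the dynamic programming
# principle: one asset whose one-step price ratio lies in a finite set R;
# pi_t(s) is the concave envelope of pi_{t+1}, taken over the attainable
# prices {s*r : r in R}, evaluated at s.  All numbers are hypothetical.
R = np.array([0.8, 1.0, 1.25])            # possible one-step price ratios
K = 1.0                                   # strike of the claim (S_T - K)^+

def concave_envelope_at(s, xs, fs):
    """Value at s of the concave envelope of the points (xs, fs)."""
    best = -np.inf
    for i in range(len(xs)):
        for j in range(len(xs)):
            if xs[i] <= s <= xs[j] and xs[i] < xs[j]:
                lam = (s - xs[i]) / (xs[j] - xs[i])
                best = max(best, (1 - lam) * fs[i] + lam * fs[j])
            elif xs[i] == s:
                best = max(best, fs[i])
    return best

def step_back(pi_next, s):
    """One DPP step: concave envelope of pi_{t+1} over {s*r}, at s."""
    xs = s * R
    fs = np.array([pi_next(x) for x in xs])
    return concave_envelope_at(s, xs, fs)

pi_2 = lambda s: max(s - K, 0.0)          # terminal payoff
pi_1 = lambda s: step_back(pi_2, s)       # time-1 superhedging price
pi_0 = lambda s: step_back(pi_1, s)       # time-0 superhedging price
print(pi_1(1.0), pi_0(1.0))               # both ~ 1/9 in this example
\end{verbatim}

In this example the recursion reproduces the price obtained under the extreme two-point sub-model with ratios $0.8$ and $1.25$, in line with the extreme-measure characterisation mentioned above.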
The robust utility maximisation means that we consider a max-min problem, where minimisation is over $P\in {\cal P}^u$. We argue that this utility maximisation should be considered with respect to a possibly different set of priors ${\cal P}^u\subseteq {\cal P}$ than the superhedging constraint. Measures $P\in {\cal P}^u$ no longer have to capture regulatory or institutional risk views but rather represent the trader's subjective views on market dynamics. Under suitable assumptions on the trader's utility functions, we show that optimal investment and consumption strategies exist and further specify when, and in what sense, these may be unique. We provide examples to illustrate various pitfalls occurring when our assumptions are not satisfied. Throughout, we work in the setup of \cite{BN} who extended the classical uni-prior theory of pricing and hedging in discrete time to the robust multi-prior case, introducing a suitable notion of no-arbitrage, proving a robust version of the fundamental theorem of pricing and hedging and establishing a robust pricing-hedging duality. Numerous authors have since adopted their setup and worked on robust extensions of the classical problems in quantitative finance such as pricing and hedging of American options, utility maximisation or transaction cost theory, to name just a few examples; see \cite{Nutz,BC16,Aksamitetal:18,bayraktar2017arbitrage,bouchard2017super} and the references therein. We note that alternative ways to address model uncertainty are possible, including the pathwise, or pointwise, approach developed in \cite{DaHo07,ABPW13,ClassS,Burz16a,Bur16b} among others. Whilst the resulting robust framework for pricing and hedging is equipped with different notions of arbitrage and different fundamental theorems, it was recently shown by \cite{jw} to be equivalent to the multi-prior approach. Thus, on an abstract level, there is no loss of generality in our choice to adopt the multi-prior approach of \cite{BN}. It is important however that we work in discrete time. While in the classical setup the no-arbitrage theory, including a dynamic understanding of the superhedging price, is well developed in continuous time, see \cite{FK97,DelSch05}, in the robust setting an extension of the abstract no-arbitrage theory, as developed in \cite{BN} or \cite{Bur16b}, to continuous time is still open. This is despite a body of works which have achieved either particular or generic steps towards such a goal, large enough that we cannot do it justice in this introduction but refer to \cite{AP95,Ly95,DenisMartini:06,CO11, denis2007utility, EJ13,biagini2014robust,HO18,BCHPP:17,bartl2017pathwise} and the references therein. We note that $d$ may be large and our assets may include both primary and derivative assets. Indeed, one way of making robust outputs more specific is by including more traded assets in the analysis. This was the original motivation behind the works on robust pricing and hedging in continuous time, going back to \cite{Ho982}, where one typically assumes that the market prices of European options on the underlying assets co-maturing with our liability $\xi$ are known. Here, we consider an abstract general setup and allow any $d$-tuple of traded assets, for a finite $d$. We may expect that the level of uncertainty regarding different assets may differ and this would be reflected in ${\cal P}$. However it is crucial that all the assets are traded dynamically.
From a theoretical standpoint, this is both necessary to obtain a dynamic programming principle for the superhedging prices and without loss of generality in the sense that any \cite{BN} setup where some assets are only available for trading at time $0$ can be lifted to a setup with dynamic trading in all assets in a way which does not introduce arbitrage and does not affect time-$0$ superhedging prices, see \cite{Aksamitetal:18}. From a practical standpoint, this is not a significant assumption as we may only consider liquidly traded assets. The remainder of the paper is organised as follows. The next section introduces and discusses our modelling framework. \Cref{sec. conc} presents the results characterising the dynamics of the superhedging price. We then specialise, in \cref{secex}, to the pathwise setting when ${\cal P}$ contains all measures with specified supports. This allows for a more intuitive interpretation of the results, easier proofs and explicit examples. \Cref{sec:utility} then considers the secondary utility maximisation problem for a trader who dynamically re-balances her superhedging strategy and states the existence and uniqueness results for the optimal investment and consumption strategies. Finally, proofs are presented in three appendices. \section{Models of Financial markets} \label{setup} In this section we set up the multi-prior modelling framework and give introductory definitions. Future dynamics of financial assets are modelled using probability measures but, unlike the classical case where one such measure is fixed, we typically work simultaneously under all $P$ from a large family of measures ${\cal P}$. Our market has $d$ traded assets; these could be stocks or options, but importantly all are traded dynamically. We do not consider statically traded assets, i.e., assets only available for buy-and-hold trading, as then the superhedging prices typically cannot admit a dynamic programming principle across all times, see \cite{Aksamitetal:18}. \subsection{Uncertainty modelling} \label{ts} We work in the setting of \cite{BN} to which we refer for details and motivation. We only recall the main objects of interest here and refer to \cite[Chapter 7]{bs} for technical details. Let $\Omega$ be a Polish space and denote by $\Omega^t$ its $t$-fold Cartesian product. We define the price process $S$ of discounted prices of $d$ traded stocks as a Borel measurable map $S_t(\o)=(S_t^1(\o), \dots, S_t^d(\o)):\O^T \to \R^d_+$ for every $\o=(\o_0,\ldots,\o_T)$, with the convention $S_0(\omega)=s_0 \in \R^d_+$, where $T\in \mathbb{N}$ is the time horizon. Prices are specified in discounted units and we have a riskless asset with price equal to $1$ for all $0\le t \le T$. Furthermore let $\mathfrak{P}(\Omega^t)$ be the set of all probability measures on $\mathcal{B} (\Omega^t)$, the Borel-$\sigma$-algebra on $\Omega^t$. We denote by ${\cal F}_t^{\mathcal{U}}$ the universal completion of $\mathcal{B}(\Omega^t)$. We often consider $(\Omega^t, {\cal F}_t^{\mathcal{U}})$ as a subspace of $(\Omega^T, {\cal F}_T^{\mathcal{U}})$ and write $\mathbb{F}^{\mathcal{U}}= ({\cal F}_t^{\mathcal{U}})_{t=0, \dots, T}$. In the rest of the paper, we will use the same notation for $P \in \mathfrak{P}(\Omega^T)$ and for its (unique) extension to ${\cal F}_T^{\mathcal{U}}$. For a given ${\cal P} \subseteq \mathfrak{P}(\Omega^T)$, a set $N \subset \Omega^T$ is called ${\cal P}$-polar if for all $P \in {\cal P}$, there exists some $A_{P} \in \mathcal{B}(\Omega^T)$ such that $P(A_{P})=0$ and $N \subset A_{P}$.
We say that a property holds ${\cal P}$-quasi-surely (q.s.) if it holds outside a ${\cal P}$-polar set. Finally we say that a set is of ${\cal P}$-full measure if its complement is a ${\cal P}$-polar set.\\ To give a probabilistic description of the market we consider a family of random sets ${\cal P}_{t} : \Omega^t \twoheadrightarrow \mathfrak{P}(\O)$, for all $0\leq t\leq T-1$. The set ${\cal P}_{t}(\omega)$ can be seen as the set of all possible models for the $(t+1)$-th period given the path $\omega \in \Omega^t$ at time $t$. In order to aggregate trading strategies on different paths in a measurable way, we assume here that the sets ${\cal P}_t$ have the following property: \begin{assumption} \label{Qanalytic} The set ${\cal P}$ has Analytic Product Structure (APS), which means that \begin{align*} {\cal P}=\{P_0 \otimes \cdots \otimes P_{T-1} \ | \ P_t \text{ is an }{\cal F}_t^{\mathcal{U}}\text{-measurable selector of }{\cal P}_t \}, \end{align*} where the sets ${\cal P}_t(\omega) \subseteq \mathfrak{P}(\Omega)$ are nonempty, convex and \begin{align*} \text{graph}({\cal P}_t) = \{ (\omega, P)\ | \ \omega \in \Omega^t, \ P \in {\cal P}_t (\omega)\} \end{align*} is analytic. \end{assumption} The fact that $\text{graph}(\mathcal{P}_t)$ is analytic allows for an application of the Jankov-von-Neumann theorem (\cite[Prop. 7.49, p.182]{bs}), which guarantees the existence of universally measurable selectors $P_t: \O^t \to \mathfrak{P}(\O)$. Here $P_0 \otimes \cdots \otimes P_{T-1}$ denotes the $T$-fold application of Fubini's theorem, which defines an element of $\mathfrak{P}(\Omega^T)$. Indeed, analyticity of the graph of ${\cal P}_t$ is of paramount importance for the preservation of measurability properties. For example, the proof of a quasisure superreplication theorem (see \cite[Lemma 4.10]{BN}) uses the fact that if $X_{t+1}: \Omega^{t+1} \to \mathbb{R}$ is upper semianalytic, then $\sup_{P \in \mathcal{P}_{t}(\omega)} {\rm I\kern-2pt E}_{P} [X_{t+1}(\o,\cdot)]$ remains upper semianalytic. Apart from \cref{Qanalytic}, we make no specific assumptions on the set of priors ${\cal P}$. It is neither assumed to be dominated by a given reference probability measure nor to be weakly compact. Some concrete examples, including when ${\cal P}_{t}(\omega)$ are non-compact random sets, are discussed in \cref{secex}. \subsection{Trading} Trading strategies are represented by $\mathbb{F}^{\mathcal{U}}$-predictable $d$-dimensional processes $H:=\{H_{t}\}_{ 1 \le t \le T}$ where for all $1 \leq t \leq T$, $H_{t}$ represents the investor's holdings in each of the $d$ assets at time $t$. The set of trading strategies is denoted by $\mathcal{H}(\mathbb{F}^{\mathcal{U}})$, or simply $\mathcal{H}$. Investors are allowed to consume and their cumulative consumption is represented by an $\R$-valued $\mathbb{F}^{\mathcal{U}}$-adapted process $C=\{C_{t} \}_{1 \le t \le T}$ with $C_{0}=0$, which is assumed to be non-decreasing: $C_{t} \leq C_{t+1}$ ${\cal P}\mbox{-q.s.}$ The set of cumulative consumption processes is denoted by ${\cal C}$. We will use the notation $\Delta S_{t}=S_{t}-S_{t-1}$ and $\Delta C_{t}=C_{t}-C_{t-1}$ for $1\le t\le T.$ Given an initial wealth $x\in \R$, a trading portfolio $H$ and a cumulative consumption process $C$, the wealth process $V^{x,H,C}$ is governed by \begin{align}\label{hj} V^{x,H,C}_0 &=x \nonumber \\ V^{x,H,C}_t &=V^{x,H,C}_{t-1}+H_{t} \Delta S_{t}-\Delta C_{t}\quad\mbox{for}\;1\le t\le T.
\end{align} The condition $C=0$ means that the portfolio $H$ is self-financing and in this case we write $V^{x,H}$ instead of $V^{x,H ,0}$. We are interested in superhedging of a (European) contingent claim and therefore adapt the presentation of \cite{FK97} to the robust framework. A (European) contingent claim is represented by an ${\cal F}_T^{\mathcal{U}}$-measurable random variable $\xi$ and the set of superhedging strategies for $\xi$ is denoted by \begin{align} \label{aah} \Ac(\xi):=\left\{ (x,H ,C) \in \R \times \mathcal{H} \times {\cal C} \ \bigg| \ V^{x,H,C}_T \geq \xi \ {\cal P}\mbox{-q.s.} \right\}. \end{align} \begin{definition} The superreplication price $\pi(\xi)$ of an ${\cal F}_T^{\mathcal{U}}$-measurable random variable $\xi$ is the minimal initial capital needed for superhedging $\xi$, i.e., \begin{align}\label{piG} \pi(\xi):=\inf \left\{x \in \mathbb{R} \ | \ \exists (H ,C)\in \mathcal{H} \times {\cal C} \mbox{ such that } (x,H,C) \in \Ac(\xi) \right\}, \end{align} with $\pi(\xi)=+\infty$ if $\Ac(\xi) = \emptyset$. A superhedging strategy $(\hat{x},\hat{H},\hat{C}) \in \Ac(\xi)$ is called \emph{minimal} if for all $(x,H ,C) \in \Ac(\xi)$ we have $V^{x,H,C}_t \ge V_{t}^{\hat{x},\hat{H},\hat{C}}$ ${\cal P}$-q.s. for all $0\le t\le T$. \end{definition} It is easy to see that $\hat{x}=\pi(\xi)$ for any minimal superhedging strategy $(\hat{x},\hat{H},\hat{C}) \in \Ac(\xi)$. \subsection{No-arbitrage condition and Pricing measures} \label{BNexp} We recall the no-arbitrage condition introduced in \cite{BN}. \begin{assumption}\label{NAQT} There is no ${\cal P}$-quasisure arbitrage, NA$({\cal P})$, in the market: for all $H \in \mathcal{H}$ with $V_{T}^{0,H} \geq 0 \ {\cal P}\mbox{-q.s.}$ we have $V_{T}^{0,H} = 0 \ {\cal P}\mbox{-q.s. }$ \end{assumption} The above definition gives an intuitive extension of the classical no-arbitrage condition, specified under a fixed probability measure $P$, to the multi-prior case of a family of probability measures ${\cal P}$. The intuition is justified by the FTAP generalisation proved in \cite[Theorem 4.5]{BN}: under \cref{Qanalytic} (recall that $S$ is Borel-adapted) NA$({\cal P})$ is equivalent to the fact that for all $P \in {\cal P}$, there exists some $Q \in {\cal Q}$ such that $ P \ll Q$ where \begin{align}\label{mathR} {\cal Q}:=\{Q \in \mathfrak{P}(\Omega^T)\ | \ \exists \, P \in {\cal P},\ Q \ll P \; \mbox{and $S$ is a martingale under $Q$}\}. \end{align} \begin{remark} By the same token, further results, e.g., on the Superhedging Theorem or the worst-case expected utility maximisation (see \cite{Nutz}, \cite{BC16}, \cite{Bart16} and \cite{NS16}) provide more evidence supporting the view that NA$({\cal P})$ is a well-chosen extension of the classical no-arbitrage assumption. However, the price to pay when using NA$({\cal P})$ is related to technical measurability issues arising when one considers a one-step version of NA$({\cal P})$ (see \eqref{NP1} below). In \cite{Bart16} a stronger version of \cref{NAQT} is introduced which states that \eqref{NP1} below is satisfied for all $\o \in \O^t$. In \cite{BC16}, a stronger version of no-arbitrage is proposed (sNA$({\cal P})$) which states that there is no-arbitrage in the classical sense for all measures $P \in {\cal P}$. In both cases some of the measurability issues are simplified. Finally, different approaches to model uncertainty may lead to fundamentally different notions of arbitrage.
In the pathwise approach, one typically asks that some subset of paths supports a feasible model -- this is in contrast to the multi-prior setup in this paper where essentially \emph{all} $P\in {\cal P}$ are assumed to be feasible models. In consequence, the no-arbitrage conditions in the pathwise approach, e.g., model independent arbitrage as in \cite{DaHo07,CO11,ABPW13} or Arbitrage de la classe ${\cal S}$ (see \cite{ClassS}), are much weaker than NA$({\cal P})$, i.e., their notions of arbitrage are much stronger than the ${\cal P}$-q.s.\ arbitrage. To wit, negation of sNA$({\cal P})$ above gives that there is a classical arbitrage for at least one $P\in {\cal P}$ while \cite{DaHo07} say that there is a \emph{weak arbitrage opportunity} if there is a classical arbitrage under \emph{all} $P\in {\cal P}$. \end{remark} The one-step version of NA$({\cal P})$ is the following: for $\omega \in \Omega^t$ fixed we say that the NA$({\cal P}_{t}(\omega))$ condition holds if for all $H \in \mathbb{R}^d$ \begin{align} \label{NP1} H\Delta S_{t+1}(\omega,\cdot) \geq 0 \; {\cal P}_{t}(\omega) \mbox{-q.s.} \quad\Rightarrow \quad H\Delta S_{t+1}(\omega,\cdot) = 0 \;{\cal P}_{t}(\omega)\mbox{-q.s.} \end{align} It is proved in \cite[Theorem 4.5]{BN} that, under the assumption that $S$ is Borel measurable and that ${\cal P}$ satisfies (APS), the condition NA$({\cal P})$ is equivalent to the fact that for all $0\leq t\leq T-1$, there exists some ${\cal P}$-full measure set $\Omega^{t}_{NA} \in {\cal F}_t^{\mathcal{U}}$, such that for all $\omega \in \Omega^{t}_{NA}$, NA$({\cal P}_{t}(\omega))$ holds. We also introduce the one-step versions of the set ${\cal Q}$: \begin{align*} {\cal Q}_{t}(\omega)=\left\{Q \in \mathfrak{P}(\Omega)\ | \ \exists \, P\in {\cal P}_{t}(\omega) \mbox{ such that } Q \ll P\; \mbox{ and } {\rm I\kern-2pt E}_Q [\Delta S_{t+1}(\omega,\cdot)] =0\right\}. \end{align*} As is shown in \cite[Lemma 4.8]{BN}, ${\cal Q}_{t}$ has an analytic graph. An application of the Jankov-von Neumann Theorem and Fubini's Theorem shows that we have \begin{align}\label{Mstar} {\cal Q}= \{ Q_0 \otimes \cdots \otimes Q_{T-1} \ | \ Q_t \mbox{ is an }{\cal F}_t^{\mathcal{U}}\mbox{-measurable selector of }{\cal Q}_t \text{ for all }0\le t\le T-1 \}. \end{align} \section{Existence and characterisation of minimal superhedging strategies} \label{sec. conc} The Superhedging theorem, also known as the pricing-hedging duality, is one of the fundamental results in the classical setting of ${\cal P}=\{P\}$, see \cite{fs,FK97} and the references therein. One of the main results in \cite{BN} was its extension to the multi-prior case: \begin{align} \label{repdual} \pi(\xi)=\sup_{Q \in {\cal Q}} {\rm I\kern-2pt E}_{Q} [\xi]. \end{align} While this duality is important and theoretically pleasing, its use for computations may be hampered by the lack of a tractable characterisation of the set ${\cal Q}$. One of our aims is to give a more algorithmic approach to the above duality. To this end, we establish a suitable dynamic programming principle (DPP) for the superhedging price and also show existence of minimal superhedging strategies in the spirit of \cite{FK97}. This leads to a robust generalisation of the algorithm in \cite{CGT06} and gives a way to handle computation of superhedging prices and, importantly, strategies. \subsection{Main Result} To state our main result we need to introduce some further notation.
For an upper semianalytic function $\xi: \Omega^T \to \R$ let $\{\pi_t(\xi)\}_{0\le t\le T}$ denote the one-step superhedging prices $\pi_t(\xi): \Omega^t \to \overline{\R}$ given by \begin{equation} \begin{split} \pi_T(\xi)(\omega) &= \xi(\omega),\quad \textrm{ and for }0 \leq t \leq T-1 \label{defminr}\\ \pi_t(\xi)(\omega) &= \inf \{x \ | \ \exists H \in \R^d \text{ such that } x + H \Delta S_{t+1}(\omega,\cdot) \ge \pi_{t+1}(\xi)(\omega,\cdot) \; {\cal P}_{t}(\omega)\mbox{-q.s.}\}. \end{split} \end{equation} Note that the above superhedging prices can be construed as concave envelopes. Indeed, with a slight abuse of notation we denote the one-step quasisure concave envelope $\widehat{f}: \Omega^{t}\times \R_+^d \to \R$ by \begin{align*} \widehat{f}(\omega,s)=\inf \{u(s) \ | \ u:\R^d_+ \to \R \mbox{ closed concave, }u(S_{t+1}(\o,\cdot)) \ge f(\omega,\cdot) \ {\cal P}_t(\omega)\text{-q.s.} \} \end{align*} for $t\in \{1, \dots, T\}$ and an upper semianalytic function $f:\Omega^{t}\times \Omega\to\R$, where we recall that a concave function is closed if its superlevel sets are closed. As every concave function can be written as the pointwise infimum of linear functions, the equality \begin{equation}\label{eq:sh-cncenv} \pi_t(\xi)(\omega)=\widehat{\pi_{t+1}(\xi)}(\omega,S_{t}(\omega)),\quad \omega\in \Omega^t,\quad 0\leq t\leq T-1 \end{equation} holds and the one-step superhedging prices can be obtained by iteratively taking concave envelopes in the coordinates of $\Omega$. Let us now define the corresponding dual expressions for the one-step case. For $\omega \in \Omega^t$ and $f: \, \O^t \times \O \to \overline{\R}$, we define $\mathcal{E}_t (f) : \, \Omega^t \to \overline{\R}$ by \begin{align*} \mathcal{E}_t(f) (\o)= \sup_{Q \in {\cal Q}_{t}(\omega)}{\rm I\kern-2pt E}_Q[f(\omega, \cdot)]. \end{align*} Furthermore, for measurable $\xi: \Omega^T \to \R$, we define the sequence of operators \begin{align} \mathcal{E}^T(\xi) = \xi\quad \textrm{ and }\quad \label{defect} \mathcal{E}^t(\xi) = \mathcal{E}_t \circ \mathcal{E}^{t+1} (\xi),\quad 0 \leq t \leq T-1. \end{align} With this notation at hand, we can state our first main result which gives existence of minimal superhedging strategies and establishes a Dynamic Programming Principle for $\pi_t(\xi)$ and $\mathcal{E}^t(\xi)$. \begin{theorem} \label{eu} Let \cref{Qanalytic} and NA$({\cal P})$ hold. Let $\xi:\Omega^T \to\R$ be an upper semianalytic function such that $\sup_{Q\in\mathcal{Q}} {\rm I\kern-2pt E}_Q[\xi^- ] < \infty.$ Then: \begin{itemize} \item[(i)] there exists a minimal superhedging strategy in $\Ac(\xi)$; \item[(ii)] for any minimal superhedging strategy $(\hat{x},\hat{H},\hat{C})\in \Ac(\xi)$, its value satisfies \begin{align} \label{eeu} V^{\hat{x},\hat{H},\hat{C}}_t &= \pi_t(\xi)=\mathcal{E}^t(\xi) \;\; {\cal P} \mbox{-q.s.},\quad 0\leq t\leq T. \end{align} In particular, \begin{align*} \hat{x}=\pi(\xi) &= \pi_0(\xi)=\mathcal{E}^0 (\xi). \end{align*} \end{itemize} \end{theorem} Perhaps surprisingly, the proof of the above result is technically involved and is thus relegated to Appendix \ref{appendix:laurence}. However, in the special case of the canonical setting $\Omega=\R_+^d$, $S_t(\omega)=\omega_t$ and ${\cal P}=\{P \in \mathfrak{P}(X) \ | \ \text{supp}(P) \text{ is finite}\}$ for an analytic set $X \subseteq \Omega^T$, the underlying arguments are quite intuitive and simple. We outline them in the next section.
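When each ${\cal P}_t(\omega)$ is supported on finitely many successor scenarios, the ${\cal P}_{t}(\omega)$-q.s.\ inequality in \eqref{defminr} reduces to finitely many linear constraints in $(x,H)$, so the one-step price can be computed by linear programming and the DPP \eqref{eeu} by backward iteration. The following sketch is purely illustrative and not part of the results above: the node data, the payoff values and the use of \texttt{scipy} are our own hypothetical choices.
\begin{verbatim}
# Illustrative sketch: one-step robust superhedging price, cf. (defminr),
# assuming finitely many successor scenarios at the given node.
import numpy as np
from scipy.optimize import linprog

def one_step_price(s_t, successors, next_prices):
    """s_t: current price vector (d,); successors: (m, d) possible next
    prices; next_prices: (m,) values of pi_{t+1} at those successors.
    Solves  min x  s.t.  x + H.(s' - s_t) >= pi_{t+1}(s')  for all s'."""
    s_t = np.asarray(s_t, float)
    delta = np.asarray(successors, float) - s_t      # price increments
    d = s_t.size
    c = np.r_[1.0, np.zeros(d)]                      # minimise x over (x, H)
    A_ub = np.hstack([-np.ones((len(delta), 1)), -delta])
    b_ub = -np.asarray(next_prices, float)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (d + 1))
    return res.fun, res.x[1:]

# toy node: one asset at 2, successors {1, 2, 3}, payoff |s' - 2|
pi, H = one_step_price([2.0], [[1.0], [2.0], [3.0]], [1.0, 0.0, 1.0])
print(pi, H)   # 1.0, H = 0: the concave envelope of |s-2| on {1,2,3} at s=2
\end{verbatim}
Iterating this routine backwards through a finite scenario tree reproduces, in this simple setting, the dynamic programming principle of \cref{eu}.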
\subsection{Canonical space: Concave envelopes and computation of the superhedging price} \label{secex} In this subsection we work on the canonical space, i.e. we set $\Omega= \R^d_+$ and $S_t(\o)=(\o_t^1, \dots, \o_t^d)$. In particular $\xi(S_1(\o), \dots, S_T(\omega))=\xi(\o)$ holds.\\ We start by developing in more detail the special case when ${\cal P}$ is obtained by specifying the support for feasible moves of the stock prices. This captures the pathwise approach but is also natural in the quasisure framework as NA$({\cal P})$ and $\pi(\xi)$ only depend on the polar sets of ${\cal P}$. More precisely we give the following definition: \begin{definition} Assume that for $0\le t\le T-1$ we are given correspondences $f_t: \Omega^t \twoheadrightarrow \R^d$. We say that a sequence of correspondences $({\cal P}_t)_{0\le t \le T-1}$ with ${\cal P}_t: \Omega^t \twoheadrightarrow \mathfrak{P}(\Omega)$ for all $0 \le t \le T-1$ is generated by $\{f_t\}_{0 \le t \le T-1}$ if \begin{align*} {\cal P}_t(\omega)=\{ P \in \mathfrak{P}(\Omega) \ | \ \text{supp}(P) \subseteq f_t(\omega) \} \end{align*} for $0\le t \le T-1$, where $\text{supp}(P)$ denotes the support of a measure $P$. \end{definition} Recall that a correspondence $f: \Omega^t \twoheadrightarrow \R^d$ is called measurable if $\{\omega \in \Omega^t \ | \ f(\omega) \cap O \neq \emptyset\}\in \mathcal{B}(\Omega^t)$ for all open sets $O \subseteq \R^d$. We refer to \cite[14.A, p.643ff.]{rw} for the theory of measurable correspondences. \begin{lemma}\label{lem. graph} Let $({\cal P}_t)_{0\le t \le T-1}$ be generated by measurable, closed valued correspondences $\{f_t\}_{0 \le t\le T-1}$. Then ${\cal P}_t$ has Borel measurable graph for all $0 \le t \le T-1$. \end{lemma} Under the assumptions of \cref{lem. graph} we can then define ${\cal P} \subseteq \mathfrak{P}(\Omega^T)$ satisfying (APS) as in \cref{Qanalytic}.\\ \begin{proof} By assumption the graph of $f_t$ is ${\cal B}(\Omega^t) \otimes {\cal B}(\R^d)={\cal B}((\R^d)^{t+1})$-measurable for all $t \in \{0, \dots T-1\}$ (see \cite[Theorem 14.8, p.648]{rw}). Thus by \cite[Cor. 7.25.1, p.134]{bs} $\mathfrak{P}(\text{graph}(f_t))$ is Borel as well. Define the map \begin{align*} D: \Omega^t \times \mathfrak{P}(\R^d_+) \to \mathfrak{P}(\Omega^{t+1}), \ (\omega, P) \mapsto \delta_{\omega} \otimes P \end{align*} and note that $D$ is a homeomorphism from $\Omega^t \times \mathfrak{P}(\R^d_+)$ to $\{\delta_{\omega} \otimes P \ |\ \omega \in \Omega^t, \ P \in \mathfrak{P}(\R^d_+) \}$. Indeed, take a sequence $(\omega_n, P_n) \in \Omega^t \times \mathfrak{P}(\R^d_+)$ such that $(\omega_n, P_n)$ converges to $(\omega, P)$ in the product topology. Denote by $\mathcal{L}^1_b(\Omega^{t+1})$ the bounded 1-Lipschitz functions on $\Omega^{t+1}$. Then \begin{align*} &\lim_{n \to \infty}\sup_{f \in \mathcal{L}^1_b(\Omega^{t+1})} \left|\int_{\Omega^{t+1}} f d(\delta_{\omega_n} \otimes P_n) - \int_{\Omega^{t+1}} f d(\delta_{\omega} \otimes P) \right| \\ \le\ &\lim_{n \to \infty} \left(|\omega_n-\omega|+\sup_{f \in \mathcal{L}^1_b(\Omega^{t+1})} \left|\int_{\Omega} f(\omega, \cdot) dP_n - \int_{\Omega} f(\omega, \cdot) dP \right|\right)=0, \end{align*} so $\delta_{\omega_n} \otimes P_n$ converges weakly to $\delta_{\omega}\otimes P$. Continuity of the inverse map follows directly from the definition of weak convergence of measures. Note also that a homeomorphism maps Borel sets to Borel sets.
As \begin{align*} \mathfrak{P}(\text{graph}(f_t)) \cap \{\delta_{\omega} \otimes P \ |\ \omega \in \Omega^t,\ P \in \mathfrak{P}(\R^d) \} \end{align*} is Borel-measurable, applying the inverse map $D^{-1}$ we conclude that \begin{align*} \text{graph}({\cal P}_t)=D^{-1}(\mathfrak{P}(\text{graph}(f_t)) \cap \{\delta_{\omega} \otimes P \ |\ \omega \in \Omega^t,\ P \in \mathfrak{P}(\R^d) \}) \end{align*} is Borel. \end{proof} In fact, for such a set ${\cal P}$ the condition NA$({\cal P}_t(\omega))$ is equivalent to $0 \in \text{ri}(f_t(\omega)-S_t(\omega))$, where $\text{ri}(A)$ denotes the relative interior of the convex hull of $A$. For a proof of this result in a more general setup, see \cite[Thm. 3.3, p. 6]{jw}. This deterministic condition is called No Pointwise Arbitrage in \cite{Bur16b} and can be checked without resorting to the use of probability measures. As an intuitive outline of the proof of \cref{eu}, let us now assume that ${\cal P}=\{P \in \mathfrak{P}(X) \ | \ \text{supp}(P) \text{ is finite}\}$ and NA$({\cal P})$ holds, where $X \subseteq \Omega^T$ is some analytic set. We can now prove the crucial equality $\pi_t(\xi)=\mathcal{E}_t(\pi_{t+1}(\xi))$ directly using the concave envelope characterisation \eqref{eq:sh-cncenv}, see also \cite{beiglbock2014martingale} and the references therein. Indeed, it follows from \cite[Prop 6.1, p. 14]{jw} that ${\cal P}$ satisfies \cref{Qanalytic} in this case and \begin{align*} {\cal Q}= \{ Q \in \mathfrak{P}(X) \ | \ \text{supp}(Q) \text{ is finite and }S \text{ is a martingale under }Q \}, \end{align*} see also \cite[Example 1.2, p.827]{BN} for $X=(\R^d)^T$ and \cite[Cor. 4.6, p.151]{Lange} for locally compact $X$. Let $\omega=(\omega_1, \dots, \omega_t) \in \Omega^t$. Using Jensen's inequality, we obtain \begin{align}\label{eq. jensen} \mathcal{E}_t(f)(\omega) &= \sup_{Q \in {\cal Q}_t(\omega)} {\rm I\kern-2pt E}_Q[f(\omega, \cdot)] \le \sup_{Q \in {\cal Q}_t(\omega)} {\rm I\kern-2pt E}_Q[\widehat{f}(\omega, \cdot)]\nonumber\\ &\le \sup_{Q \in {\cal Q}_t(\omega)}\widehat{f}(\omega, {\rm I\kern-2pt E}_Q[\cdot])=\widehat{f}(\omega, \omega_t), \end{align} where ${\rm I\kern-2pt E}_{Q}[\cdot]=\int_{\R_+^d} yQ(dy)$. To establish the ``$\ge$''-inequality, it suffices to observe that \begin{align*} s \mapsto \sup_{Q \ll P \text{ for some }P\in {\cal P}_t(\omega), \ {\rm I\kern-2pt E}_{Q}[\cdot]=s} {\rm I\kern-2pt E}_Q[f(\omega,\cdot)] \end{align*} is concave and dominates $f(\omega,\cdot)$ on $S_{t+1}(\Sigma_t^{\omega})$, where $\Sigma_t^{\omega}:= \{\tilde{\omega}\in X \ | \ (\tilde{\omega}_1, \dots, \tilde{\omega}_t)=\omega \}$. While concavity is clear in general (see \cite[Lemma 2.2]{beiglbock2014martingale}), the domination property crucially relies on the fact that the set $\{Q \ll P \ \text{for some }P \in {\cal P}_t(\omega), \ {\rm I\kern-2pt E}_{Q}[\cdot]=s\}$ contains the Dirac measures at points $s \in S_{t+1}(\Sigma_t^{\omega}).$ For a general set ${\cal P}$ this is not true: for example, in the case ${\cal P}=\{P\}$ for some $P \in \mathfrak{P}(\Omega)$, in general only the set $ \{Q \ll P, \ {\rm I\kern-2pt E}_{Q}[\cdot]=s\}$ is non-empty for $s$ in the relative interior of the convex hull of the support of $P$ (see \cite[Theorem 1.48, p.29]{fs}).
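To make the preceding envelope argument concrete in the simplest case $d=1$ with finitely many support points, note that the supremum in \eqref{eq. jensen} reduces to measures charging at most two of the support points with barycentre $\omega_t$, so the dual value can be computed by inspecting pairs of atoms. The following sketch is again purely illustrative, with hypothetical data, and assumes that $\omega_t$ lies in the convex hull of the support so that NA$({\cal P}_t(\omega))$ holds.
\begin{verbatim}
# Illustrative sketch (d = 1, finite support): one-step dual value via
# two-point martingale measures, i.e. the concave envelope at s0,
# cf. (eq:sh-cncenv) and (eq. jensen).
import itertools

def one_step_dual(s0, support, values):
    """support: possible next prices s'_k; values: f evaluated there.
    Returns sup over Q = lam*delta_{s_i} + (1-lam)*delta_{s_j}
    with barycentre s0 of E_Q[f]."""
    atoms = list(zip(support, values))
    best = max((fk for sk, fk in atoms if sk == s0), default=float("-inf"))
    for (si, fi), (sj, fj) in itertools.combinations(atoms, 2):
        lo, hi = (si, sj) if si < sj else (sj, si)
        if lo < s0 < hi:                       # s0 strictly between the atoms
            lam = (hi - s0) / (hi - lo)        # weight on the smaller atom
            flo, fhi = (fi, fj) if si < sj else (fj, fi)
            best = max(best, lam * flo + (1 - lam) * fhi)
    return best

# same toy data as in the previous sketch: spot 2, support {1,2,3}, f = |s'-2|
print(one_step_dual(2.0, [1.0, 2.0, 3.0], [1.0, 0.0, 1.0]))   # 1.0
\end{verbatim}
The value agrees with the primal (hedging) linear programme of the previous sketch, as the duality \eqref{repdual} suggests.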
The following definition further characterises closed-valued correspondences $\{f_t\}_{0 \le t \le T-1}$ and is needed to identify an important subclass of sets $\{{\cal P}_t\}_{0 \le t \le T-1}$ generated by $\{f_t\}_{0 \le t \le T-1}$: \begin{definition} \label{def. corunifcont} A closed-valued correspondence $f_t: \Omega^t \twoheadrightarrow \R^d$ is called uniformly continuous if for all $\epsilon>0$ there exists $\delta >0$ such that for all $\omega, \omega' \in \Omega^t$ with $|\omega'-\omega|\le \delta$ we have $d_H(f_t(\omega), f_t(\omega'))\le \epsilon$, where $$d_H(A,B):=\max\left(\sup_{v \in A} \inf_{\tilde{v} \in B}|v- \tilde{v}|, \sup_{\tilde{v}\in B} \inf_{v \in A}|v-\tilde{v}| \right)$$ denotes the Hausdorff metric on closed subsets $A,B$ of $\Omega$. \end{definition} Uniformly continuous correspondences are in particular continuous (see \cite[Def. 5.4, p.152]{rw}) and thus measurable (\cite[Theorem 5.7, p.154]{rw}). It turns out that, when the correspondences fulfil this continuity condition and are compact-valued, the ${\cal P}$-q.s. superhedging price of a continuous payoff $\xi$ coincides with the $P$-a.s. superhedging price of $\xi$ for every $P$ with support equal to the paths generated by the correspondences $\{f_t\}_{0 \le t \le T-1}$: \begin{proposition}\label{prop:pathwiseex} Suppose $({\cal P}_t)_{0 \le t \le T-1}$ is generated by closed-valued, uniformly continuous correspondences $\{f_t\}_{0 \le t \le T-1}$ and that NA$({\cal P})$ holds. Furthermore assume that the function $\xi: \Omega^T \to \R$ is continuous and $\{f_t\}_{0\le t\le T-1}$ are compact-valued. Take any measure $P=P_0 \otimes \cdots \otimes P_{T-1}$ such that \begin{align*} \text{supp}(P_t(\omega))=f_{t}(\omega), \quad 0 \le t \le T-1, \ \omega \in \Omega^t. \end{align*} Then, for all $0 \le t \le T-1$ and $\omega \in \Omega^t$, \begin{align}\label{eq. continuous} \pi_t(\xi)(\omega)=\inf \{ x \in \R \ | \ \exists H \in \R^d \text{ such that } x+H \Delta S_{t+1}(\omega, \cdot) \ge \pi_{t+1}(\xi)(\omega, \cdot) \ P\text{-a.s.}\} \end{align} and $\omega \mapsto \pi_t(\xi)(\omega)$ is continuous. \end{proposition} The proof of the above result is relegated to Appendix \ref{appendix:A}. \\ We now apply this result to a one-dimensional case of particular interest, as in \cite{CV17}, where it is easy to explicitly compute the minimal superhedging prices: \begin{proposition} \label{pg1} Assume that for all $0\leq t\leq T-1$, $d_{t+1}< 1 < u_{t+1}$ and that the (random) sets ${\cal P}_{t}$ are given by \begin{align*} {\cal P}_t(\omega)=\left\{P \in \mathfrak{P}(\R)\ |\ \mbox{supp}(P) \subset [\o_t d_{t+1},\o_t u_{t+1}] \right\}, \end{align*} where $\omega=(\omega_1, \dots, \omega_t)\in \Omega^t$. Then NA$({\cal P})$ holds. Let $\xi: \R^T \to \R$ be convex. Then \begin{align} \label{fgp} \pi_T(\xi) & = \xi \nonumber\\ \pi_t(\xi) (\omega) & = \alpha_{t+1} \pi_{t+1}(\xi)(\omega, \omega_{t} u_{t+1}) + (1-\alpha_{t+1}) \pi_{t+1}(\xi)(\omega, \o_{t} d_{t+1}), \end{align} where $\alpha_{t} := \frac{1-d_{t}}{u_{t}-d_{t}}$, $1\le t \le T$. \end{proposition} \begin{proof} Noting that $f_t(\omega)=[\omega_t d_{t+1}, \omega_t u_{t+1}]$ is a uniformly continuous compact-valued correspondence, the sets ${\cal P}_t(\omega)$ are clearly non-empty and convex, and the graph of ${\cal P}_t$ is Borel measurable for $0\le t \le T-1$ by \cref{lem. graph}. As $0 \in \text{ri}(f_t(\omega)-S_t(\omega))=\text{ri}([-\omega_t(1-d_{t+1}),\omega_t(u_{t+1}-1)])$, NA$({\cal P})$ holds.
We prove by induction that $\pi_t(\xi)$ satisfies \eqref{fgp} and is convex: This is clear for $t=T$. Now we assume that for some $0\le t \le T-1$, $\pi_{t+1}(\xi)$ is convex. As ${\cal P}_t(\omega)$ contains the Dirac measures on $[\omega_td_{t+1}, \omega_t u_{t+1}]$ we conclude that \begin{align*} \pi_{t}(\xi)(\omega)&= \inf \{ x \in \R \ | \ \exists H \text{ s. t. } x+H\Delta S_{t+1}(\omega, \cdot) \ge \pi_{t+1}(\xi)(\omega, \cdot) \text{ on } [\omega_{t} d_{t+1}, \omega_{t} u_{t+1}]\}. \end{align*} As $\pi_{t}(\xi)(\omega)$ is the pointwise concave envelope of the convex function $\pi_{t+1}(\xi)(\omega,\cdot)$, it can be written as the unique convex combination of the extreme points of $\pi_{t+1}(\xi)(\omega,\cdot)$ on the interval $[\omega_{t} d_{t+1}, \omega_{t} u_{t+1}]$, which conserves the barycentre $\omega_t$. Thus, we obtain \eqref{fgp} for $t$. Clearly $\pi_t(\xi):\R^{t} \to \R$ is then a linear combination of convex functions (with non-negative coefficients) and thus also a convex function. \end{proof} It is insightful to observe that the above superreplication price corresponds to the actual replication price in a Cox-Ross-Rubinstein model of \cite{CRR79} where the stock price evolves on a binomial tree with $S_{t+1}\in \{d_{t+1}S_t,u_{t+1}S_t\}$. \section{Maximising expected utility of consumption in $\Ac(\xi)$}\label{sec:utility} \subsection{Main results} In \cref{eu} above, we characterised the superhedging prices $\pi_t(\xi)$ and introduced ways for computing minimal superhedging strategies. However, these are typically non-unique. Indeed, as we see from \eqref{eq:sh-cncenv}, if the concave envelope $\widehat{f(\omega,\cdot)}$ of a function $f: \O^{t+1}\to \R$ is not differentiable at $\omega_t$, every point $H\in \R^d$ in its superdifferential constitutes a minimal superhedging strategy, see also \cref{Ex 1} below. To select the ``best" among minimal superhedging strategies we propose a secondary optimisation problem of robust maximisation of expected utility with intermediate consumption, given by \begin{align} \label{eq. optpro} \sup_{(H,C)\in \mathcal{A}_x} \inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}\left[\sum_{s=1}^T U(s,\Delta C_s)\right], \end{align} where $\mathcal{A}_x$ is the set of investment-consumption strategies which superhedge $\xi:\Omega^T \to \R$, i.e. \begin{align*} \mathcal{A}_x :=\{ (H,C) \in \mathcal{H}(\mathbb{F}^{\mathcal{U}}) \times \mathcal{C}\ | \ V_T^{x,H,C}\ge \xi \ {\cal P} \text{-q.s.}\} \end{align*} and the set ${\cal P}^{u} \subseteq \mathfrak{P}(\Omega^T)$ fulfils the following condition: \begin{assumption}\label{Ass 1b} ${\cal P}^{u}$ satisfies (APS) and ${\cal P}^{u} \subseteq \mathcal{P}$. \end{assumption} The set ${\cal P}^{u}$ represents the subjective views of an investor. While superhedging with respect to ${\cal P}$ reflects the necessity to satisfy certain regulatory and risk requirements, ${\cal P}^u$ is used to express individual preferences for the optimisation problem \eqref{eq. optpro} and does not need to satisfy any further requirements than those of \cref{Ass 1b}, e.g. NA$({\cal P}^{u})$ can fail. In \cref{thm. exis_new} and \cref{thm. unique_new} below, we show that \eqref{eq. optpro} is well posed and admits an optimiser which, under suitable assumptions, is unique. 
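To fix ideas before stating the assumptions, the following sketch illustrates \eqref{eq. optpro} in a one-period, one-asset toy setting with finitely many scenarios and finitely many priors in ${\cal P}^u$, so that both the ${\cal P}$-q.s.\ constraint and the inner infimum reduce to finitely many checks. All data below (scenarios, priors, utility, payoff) are hypothetical and chosen so that several superhedging strategies are available from the initial capital $\pi(\xi)$; since the utility is increasing, it is optimal to consume the whole surplus $x+H\Delta S_1-\xi$ at time $1$.
\begin{verbatim}
# Illustrative sketch: one-period (T = 1, d = 1) robust utility problem,
# cf. (eq. optpro), with a finite scenario set and a finite family P^u.
import math

s0, x = 2.0, 2.0                          # spot and initial capital x = pi(xi)
scenarios = [1.0, 2.0, 3.0]               # support relevant for the q.s. constraint
payoff = lambda s1: min(s1, s0)           # xi = running minimum of the price
P_u = [{1.0: 0.5, 3.0: 0.5},              # two subjective priors in P^u
       {1.0: 0.8, 3.0: 0.2}]
U = lambda c: 1.0 - math.exp(-c)          # bounded, concave, increasing utility

def surplus(H, s1):                       # consumption at time 1: V_1 - xi
    return x + H * (s1 - s0) - payoff(s1)

best_H, best_val = None, -math.inf
for k in range(1001):                     # grid search over H in [0, 1]
    H = k / 1000.0
    if min(surplus(H, s1) for s1 in scenarios) < 0:
        continue                          # violates the P-q.s. superhedging constraint
    val = min(sum(p * U(surplus(H, s1)) for s1, p in P.items()) for P in P_u)
    if val > best_val:
        best_H, best_val = H, val

print(best_H, best_val)                   # approx. H = 0.5, value U(1/2)
\end{verbatim}
Each feasible $H$ superhedges $\xi$ from the same initial capital $\pi(\xi)$; the max-min criterion \eqref{eq. optpro} then singles out a particular choice (here $H=1/2$ for the chosen data).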
The assumptions imposed on the utility functions $U(t, \cdot, \cdot)$ are in line with those in \cite{Nutz}: \begin{assumption} \label{Ass 1a} For $t=1, \dots, T$ the utility function $U(t,\cdot, \cdot): \Omega^t \times [0,\infty) \to \R$ is lower semianalytic and bounded from above. Furthermore \begin{enumerate} \item $\omega \mapsto U(t,\omega, x)$ is bounded from below for each $x>0$. \item $x \mapsto U(t,\omega,x)$ is non-decreasing, concave and continuous for each $\omega \in \Omega^t$. \end{enumerate} \end{assumption} We believe that the boundedness assumptions on utility functions which we make here could be weakened, similarly to \cite{BC16}. However, due to the overall length and already technical character of the proofs, we decided to leave this extension for further research. \\ We remark that by 2. in \cref{Ass 1a} it is sufficient to consider investment-consumption strategies which hedge $\xi$ exactly, i.e. for which $V_T^{x,H,C}=\xi$, since the superhedging surplus can be consumed at terminal time.\\ Note that by \cref{Ass 1a} and standard results on Carath\'{e}odory functions (see \cite[Lemma 4.51, p. 153]{Hitch}) we conclude that $U(t, \cdot, \cdot)$ is $\mathcal{F}^{\mathcal{U}}_t\otimes \mathcal{B}(\R_+)$-measurable. We set $U(t,\omega,x)= -\infty$ for $x<0$ and often write $U(t,x)$ instead of $U(t,\omega,x)$. \begin{theorem}\label{thm. exis_new} Let $U(t,\cdot,\cdot)$ be given for $1\le t \le T$ and let NA$({\cal P})$, \cref{Qanalytic}, \cref{Ass 1b} and \cref{Ass 1a} hold. Then for any Borel $\xi: \O^T\to \R$ such that $\sup_{Q\in\mathcal{Q}} {\rm I\kern-2pt E}_Q[\xi^- ] < \infty$ there exists $(\hat{H}, \hat{C}) \in \mathcal{A}_{\pi}$ such that \begin{align*} \inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}\left[\sum_{s=1}^T U(s,\Delta\hat{C}_s)\right]=\sup_{(H,C)\in \mathcal{A}_{\pi}} \inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}\left[\sum_{s=1}^T U(s,\Delta C_s)\right], \end{align*} where $\pi=\pi(\xi)$ is the ${\cal P}$-q.s. superhedging price of $\xi$. \end{theorem} In order to obtain uniqueness of the above maximiser $(\hat{H}, \hat{C})$, we again switch to the canonical setup $\Omega^T= (\R_+^d)^T$, $S_t(\omega)=\omega_t$. In line with \cite{denis2007utility} we strengthen the assumptions on the utility functions $U(t, \cdot, \cdot)$ and also assume weak compactness of the set ${\cal P}^u$. This enables us to show existence of a ``worst-case'' measure $\hat{P}\in{\cal P}^u$, in analogy to the argumentation in \cite{schied2005duality}. In fact, \cref{ex. non-unique} below shows that one cannot expect uniqueness of maximisers in general if ${\cal P}^u$ is not weakly closed. \begin{assumption}\label{Ass 3} For $t=1, \dots, T$ the non-random utility functions $U(t, \cdot)$ satisfy \cref{Ass 1a} and are bounded. The mapping $x \mapsto U(t,x)$ is strictly concave, non-decreasing and continuous.
Furthermore, for $t=0, \dots T-1$ and ${\cal P}^u$-q.e $\omega \in \Omega^t$ the set ${\cal P}^u_t(\omega)$ is weakly compact and the sets ${\cal P}$ and ${\cal P}^u$ fulfil the following continuity criteria: \begin{enumerate} \item If $\omega, \tilde{\omega} \in \Omega^t$ and $\epsilon>0$, then there exists $\delta >0$ such that for $|\omega-\tilde{\omega}|\le \delta$ and for every $P \in {\cal P}^u_t(\omega)$ there exists $\tilde{P} \in {\cal P}^u_t(\tilde{\omega})$ such that $d_L(P, \tilde{P}) \le \epsilon$, where $$d_L(P, \tilde{P})=\inf\{\epsilon \ge 0 \ | \ P(A) \le \tilde{P}(A^\epsilon) + \epsilon \text{ for all }A \in \mathcal{B}(\Omega)\}$$ denotes the Levy metric on $\mathfrak{P}(\Omega)$ and $A^\epsilon=\{\omega \in \Omega \ | \ \exists \tilde{\omega}\in A \text{ such that }|\omega-\tilde{\omega}|< \epsilon \}$. \item The map $f_t(\omega):=\text{supp}({\cal P}_t(\omega))$ is uniformly continuous in the sense of \cref{def. corunifcont}, where \begin{align*} \text{supp}({\cal P}_t(\omega))= \bigcap \{A \subseteq \Omega \text{ closed} \ \big| \ P(A)=1 \text{ for all } P \in {\cal P}_t(\omega)\} \end{align*} is the quasisure support of ${\cal P}_t(\omega)$ for $\omega \in \Omega^t$. \end{enumerate} \end{assumption} \begin{theorem}\label{thm. unique_new} In the setup of \cref{thm. exis_new} assume further that \cref{Ass 3} holds and that the functions $\pi_t(\xi): \Omega^t\to \R$ are continuous on $\{(\omega, v)\in \Omega^t \ | \ v\in f_{t-1}(\omega)\}$ for all $1\le t\le T$. Then there exists a probability measure $\hat{P} \in {\cal P}^u$ such that \begin{align*} \sup_{(H,C)\in \mathcal{A}_{\pi}} \inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}\left[\sum_{s=1}^T U(s,\Delta C_s)\right]=\sup_{(H,C)\in \mathcal{A}_{\pi}} {\rm I\kern-2pt E}_{\hat{P}}\left[\sum_{s=1}^T U(s,\Delta C_s)\right]. \end{align*} Furthermore, the maximising strategy $(\hat{H}, \hat{C}) \in \mathcal{A}_{\pi}$ is unique in the following sense: for any two maximising strategies $(H^1, C^1), (H^2, C^2) \in \mathcal{A}_{\pi}$ and for $1\le t\le T$ we have $ C^1_t= C^2_t$ and $H_t^1 \Delta S_t= H_t^2 \Delta S_t$ $\hat{P}$-a.s. \end{theorem} The proofs of \cref{thm. exis_new} and \cref{thm. unique_new} are given in \cref{sec:app_utility}. We first establish \cref{thm. exis_new} in the one-period case ($T=1$) and then extend it to the general multi-step setting and consider the uniqueness. \subsection{Examples and comments} To illustrate the above results, we discuss several examples. We start with a simple example for non-uniqueness of minimal superhedging strategies. \begin{example}[Non-Uniqueness of minimal superhedging strategies and maximizers] \label{Ex 1} We take $\Omega=\R_+$, where $d=1$ and $T=2$ as well as $s_0=2$. Furthermore $S_t(\omega)=\omega_t$ for $t=1,2$ and \begin{align*} {\cal P}_t(\omega)=\{P \in \mathfrak{P}(\R_+) \}, \quad t=0,1. \end{align*} We want to superhedge the running minimum at time 2, i.e. $\xi(\omega)=\underline{S}_2(\omega)$. Clearly ${\cal Q}_t(\omega)=\{ Q \in \mathfrak{P}(\R_+) \ | \ {\rm I\kern-2pt E}_Q[\Delta S_{t+1}(\omega, \cdot)]=0 \}$ for all $\omega \in \Omega^t$ and $t=0,1$. Besides it is easy to see that \begin{align*} \sup_{Q \in {\cal Q}} {\rm I\kern-2pt E}_{Q}[\xi]=s_0=2, \end{align*} so we have some degree of freedom to choose our superhedging strategy $H \in t\in\{0,\dots,T\}r$. As it turns out we can choose any $H_1 \in [0,1]$, which gives a wealth of $2+H_1(S_1-2)$ at time 1. 
For time 2 we have \begin{align*} H_2(\omega) \in \left\{ \begin{array}{ll} [0,H_1] &\text{ if }S_1(\omega)\ge 2, \\ \left[0, \frac{2}{S_1(\omega)}+\frac{H_1}{S_1(\omega)}(S_1(\omega)-2)\right] &\text{ if } S_1(\omega)<2. \end{array} \right. \end{align*} Note also that the superhedging cost at time 1 is given by \begin{align*} \pi_1(\xi)(\omega)=\sup_{Q \in {\cal Q}_1(\omega)} {\rm I\kern-2pt E}_{Q}[\xi(\omega, \cdot)]=\left\{ \begin{array}{ll} 2 &\text{ if }S_1(\omega)\ge 2, \\ S_1(\omega) &\text{ if } S_1(\omega)<2. \end{array} \right. \end{align*} So according to \eqref{hj} and \eqref{defminr} we can consume \begin{align*} C_1(\omega)\in \left\{\begin{array}{ll} [0,H_1(S_1(\omega)-2)] &\text{ if }S_1(\omega)\ge 2, \\ \left[0,(H_1(\omega)-1)(S_1(\omega)-2)\right] &\text{ if } S_1(\omega)<2 \end{array} \right. \end{align*} at time 1. \\ We now show that if \cref{Ass 3} is not satisfied (namely ${\cal P}^u$ does not fulfil \cref{Ass 3}.1.), then \cref{thm. unique_new} is not true in general. For this we specify the set ${\cal P}^u$ and iteratively solve the optimization problem \eqref{eq. optpro}: We set $U(2,\omega,x)=U(1,\omega,x)=U(x)$ for some bounded concave, non-decreasing and continuous function $U: \R_+ \to \R_+$ as well as ${\cal P}^u_1(S_1)=\{\delta_{S_1}\}$ for $S_1>2$ and ${\cal P}^u_1(S_1)=\{\delta_{S_1+1}\}$ for $S_1\le 2$. Note that ${\cal P}^u_1$ obviously violates \cref{Ass 3}.1. We obtain the following optimal one-step prices, where we use notation from \cref{Sec Mulit}: For $S_1 >2$ and $x \ge 2$ we find \begin{align*} U_1(S_1,x)&= \sup_{(H,c) \in \mathcal{A}_{1,x}(S_1)}\left( {\rm I\kern-2pt E}_{\delta_{ S_1}}[U(x+H(S_2-S_1)-\underline{S_2}-c)]+U(c)\right)\\ &=\sup_{(H,c) \in \mathcal{A}_{1,x}(S_1)} \left(U(x-2-c)+U(c)\right)=2U\left(\frac{x-2}{2}\right) \end{align*} with $c=(x-2)/2$ and some $0 \le H\le \min(\frac{x/2+1}{S_1},\frac{x/2-1}{S_1-2})$. For $S_1 \le 2$ and $x \ge S_1$ we have \begin{align*} U_1(S_1,x)&= \sup_{(H,c) \in \mathcal{A}_{1,x}(S_1)} \left({\rm I\kern-2pt E}_{\delta_{ S_1+1}}[U(x+H(S_2-S_1)-\underline{S_2}-c)]+U(c)\right)\\ &=\sup_{(H,c) \in \mathcal{A}_{1,x}(S_1)} (U(x+H-S_1-c)+U(c))\ge U(0)+U(1) \end{align*} with $H =x/S_1$ and $c=0$. Setting ${\cal P}^u_0=\{\delta_x \ | \ x \in \R_+\}$ we obtain \begin{align*} U_0(2)&=\sup_{H \in \mathcal{A}_{0,2}} \inf_{P \in {\cal P}^u_0} {\rm I\kern-2pt E}_{P}[U_1(S_1, 2+H(S_1-2))]\\ &=\sup_{H \in \mathcal{A}_{0,2}} \inf_{P \in {\cal P}^u_0} {\rm I\kern-2pt E}_{P}\bigg[\mathds{1}_{\{S_1>2 \}} 2U\left(\frac{2+H(S_1-2)-2}{2} \right)\\ &+ \mathds{1}_{\{S_1 \le 2\}} U_1(2+H(S_1-2))\bigg]\\ &=2U\left(0 \right). \end{align*} Note that by the proof of \cref{thm. unique_new} under \cref{Ass 3} there would exist $\hat{P}\in \mathcal{P}_0^u$ such that \begin{align*} U_0(2)=\sup_{H \in \mathcal{A}_{0,2}} {\rm I\kern-2pt E}_{\hat{P}} [U_{1}(S_1,x+H \Delta S_{1})]. \end{align*} On the contrary, in our case there exists no $\hat{P}\in \mathcal{P}_0^u$ such that \begin{align*} U_0(2)=2U(0)= {\rm I\kern-2pt E}_{\hat{P}}\bigg[\mathds{1}_{\{S_1 >2\}} 2U\left(\frac{S_1-2}{2} \right)+ \mathds{1}_{\{S_1 <2\}} U_1(S_1, 2)\bigg] \end{align*} as the RHS is strictly greater than $2U(0)$ for all $\hat{P}\in \mathcal{P}_0^u$: Thus \cref{thm. unique_new} does not hold. \end{example} The next example shows that we cannot expect to have uniqueness of maximizers without assuming some closedness property of ${\cal P}^{u}$. \begin{example}[Non-uniqueness of maximisers for non-closed ${\cal P}^{u}$]\label{ex. 
non-unique} Let $T=1$, $d=2$, $\Omega =\R_+^2$, ${\cal P}=\mathfrak{P}(\R_+^2)$, $S_t(\omega)=\omega_t$ and $S_0=(1,1)$. Consider $\xi=\min(S^1_1, S^2_1)$. Then $\pi(\xi)=1$ and $H_1$ is of the form \begin{align*} H_1= \left( \begin{array}{c} \lambda\\ 1-\lambda \end{array} \right), \end{align*} where $\lambda \in [0,1].$ Take \begin{align*} {\cal P}^u=\{ P_n\}_{n=1}^{\infty} \hspace{0.5cm} \text{where } P_n=\frac{\delta_{\left\{S_1^1=n-\frac{1}{n}, \ S^2_1=n+\frac{1}{n} \right\}}}{2}+\frac{\delta_{\left\{S_1^1=0, \ S^2_1=0 \right\}}}{2}. \end{align*} Then clearly ${\cal P}^u$ is not closed. We note that for $H \in \mathcal{A}_1$ \begin{align*} {\rm I\kern-2pt E}_{P_n}\left[U\left(1+H \Delta S_1-\xi\right)\right]&= \frac{1}{2}U\left(\lambda\left(n-\frac{1}{n}\right)+(1-\lambda)\left(n+\frac{1}{n}\right)-\left(n-\frac{1}{n}\right)\right)+\frac{1}{2}U(0)\\ &=\frac{1}{2}U\left(\left(1-\lambda\right)\frac{2}{n} \right)+\frac{1}{2}U(0)\downarrow U(0), \ n\to \infty. \end{align*} Thus we conclude \begin{align*} \sup_{H \in \mathcal{A}_1} \inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}[U(1+ H \Delta S_1-\xi)]=U(0), \end{align*} in particular \begin{align*} H \mapsto \inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}[U(1+ H \Delta S_1-\xi)]=U(0) \end{align*} is constant and thus the maximiser is not unique. \end{example} Finally, we illustrate that even with a compact ${\cal P}^{u}$ we cannot strengthen the sense in which the optimisers are unique in \cref{thm. unique_new}. \begin{example}[On uniqueness property of maximisers] We consider a one-step version of \cref{Ex 1}: $T=1$, $d=1$, $\Omega=\R_+$, $S_t(\omega)=\omega_t$, $s_0=2$, $\xi(S)=\underline{S_1}$, ${\cal P}=\mathfrak{P}(\R_+)$. We have $\pi(\xi)=2$. We also set ${\cal P}^u=\{\delta_2 \}$, where $\delta_2$ is defined by $$\delta_2( S_t=2 \ \text{for all } t=0, 1)=1.$$ Furthermore let $U(\cdot)=U(1, \cdot)$ be such that the conditions of \cref{thm. unique_new} are satisfied. The optimisers are then non-unique in the sense that \eqref{eq. optpro} is equal to $U(0)$ and is attained for every $H \in [0,1]$, but they are unique in the sense of \cref{thm. unique_new} since $H\Delta S_1=0$ $\delta_2$-a.s.\ for all $H \in \R$. \end{example} \newpage \begin{appendices} \appendixpage We now provide the proofs of \cref{prop:pathwiseex}, \cref{eu}, \cref{thm. exis_new} and of \cref{thm. unique_new}. These proofs require a number of technical lemmata which are established alongside the main proofs. \section{Proof of \cref{prop:pathwiseex}}\label{appendix:A} \begin{proof} Fix $\omega \in \Omega^{T-1}$ and $\epsilon>0$. Recall that $\xi$ is continuous and $\{f_t\}_{0\le t\le T-1}$ are compact-valued. Note that the set $$B:=\{(\tilde{\omega},\tilde{v})\in \Omega^{T-1}\times \R^d \ | \ \text{dist}((\omega,f_{T-1}(\omega)),(\tilde{\omega},\tilde{v}))\le 1\}$$ is compact, thus $\xi$ is uniformly continuous on $B$, i.e. there exists $\delta\in (0,1)$ such that $|\xi(\omega, v)- \xi(\tilde{\omega}, \tilde{v})|\le \epsilon/3$ for all $v \in f_{T-1}(\omega)$ and $(\tilde{\omega}, \tilde{v}) \in B$ with $|(\omega,v)-(\tilde{\omega},\tilde{v})|\le \delta$. This implies $\sup_{\{\tilde{\omega}| \ |\omega-\tilde{\omega}|\le 1\}} \pi_{T-1}(\xi)(\tilde{\omega})<\infty$ and that for all $\tilde{\omega}\in \Omega^{T-1}$ with $|\omega-\tilde{\omega}|\le 1$ there exists $H_{T}(\tilde{\omega}) \in \R^d$ such that \begin{align}\label{eq.
superhedgingl} \epsilon/3+\pi_{T-1}(\xi)(\tilde{\omega})+H_{T}(\tilde{\omega}) \Delta S_{T}(\tilde{\omega},\cdot) \ge \xi(\tilde{\omega}, \cdot) \quad \text{on } f_{T-1}(\tilde{\omega}) \end{align} or equivalently that the inequality \eqref{eq. superhedgingl} holds ${\cal P}_{T-1}(\tilde{\omega})$-q.s.\\ Note that by the uniform continuity of the correspondence $f_{T-1}$, for any $\tilde{\omega}$ close to $\omega$ and for any $v \in f_{T-1}(\omega)$ there exists $\tilde{v} \in f_{T-1}(\tilde{\omega})$ which is close to $v$, so that $|(\omega,v)-(\tilde{\omega},\tilde{v})|$ is small. Furthermore we show below that $H_T(\tilde{\omega})$ can be chosen bounded uniformly in $\tilde{\omega}$ for all $\tilde{\omega}$ close to $\omega$. Thus, for some $\delta_1$ determined below, $|\omega-\tilde{\omega}|\le \delta_1$ implies \begin{align}\label{eq. epsilon} \epsilon+\pi_{T-1}(\xi)(\tilde{\omega})+H_{T}(\tilde{\omega}) \Delta S_{T}(\omega,v) &\ge \epsilon+\pi_{T-1}(\xi)(\tilde{\omega})+H_{T}(\tilde{\omega}) \Delta S_{T}(\tilde{\omega},\tilde{v})-\epsilon/3\\ &\ge \epsilon/3+\xi(\tilde{\omega},\tilde{v}) \ge \xi(\omega,v),\nonumber \end{align} and thus $\pi_{T-1}(\xi)(\omega) \le \pi_{T-1}(\xi)(\tilde{\omega})+\epsilon$. Exchanging the roles of $\omega$ and $\tilde{\omega}$ concludes the proof of continuity of $\omega \mapsto \pi_{T-1}(\xi)(\omega)$. \\ We now argue that there exist $\delta_0>0$ and $C>0$ such that $|H_{T}(\tilde{\omega})|<C$ for all $\tilde{\omega}\in \Omega^{T-1}$ with $|\omega-\tilde{\omega}|\le \delta_0$ and $H_T(\tilde{\omega})\in \text{lin}(f_{T-1}(\tilde{\omega})-S_{T-1}(\tilde{\omega}))$. Assume towards a contradiction that this is not the case, i.e. there exists a sequence $(\tilde{\omega}^N)_{N\in \N}$ with $|\omega-\tilde{\omega}^N|\le 1/N$, $H_T(\tilde{\omega}^N)\in \text{lin}(f_{T-1}(\tilde{\omega}^N)-S_{T-1}(\tilde{\omega}^N))$ for all $N\in \N$ and $\lim_{N\to \infty}|H_T(\tilde{\omega}^N)|=\infty$. After passing to a subsequence (without relabelling) $\tilde{H}^N:=H_T(\tilde{\omega}^N)/|H_T(\tilde{\omega}^N)|\to \tilde{H}$ with $|\tilde{H}|=1$. Note that as $f_{T-1}(\tilde{\omega}^N)$ converges in Hausdorff distance to $f_{T-1}(\omega)$ and as $f_{T-1}(\omega)$ is compact, it follows by the same arguments as above that $\sup_{f_{T-1}(\tilde{\omega}^N)}\xi(\tilde{\omega}^N,\cdot)$ and $\pi_{T-1}(\xi)(\tilde{\omega}^N)$ are bounded uniformly in $N\in \N$. Thus dividing \eqref{eq. superhedgingl} by $|H_T(\tilde{\omega}^N)|$ and taking limits we get \begin{align*} \tilde{H}\Delta S_{T}(\omega,\cdot)\ge 0 \quad\text{on }f_{T-1}(\omega). \end{align*} By NA$({\cal P}_{T-1}(\omega))$ this yields $\tilde{H}\Delta S_T(\omega, \cdot)=0$ on $f_{T-1}(\omega)$. As $\tilde{H}\in \text{lin}(f_{T-1}(\omega)-S_{T-1}(\omega))$, $\tilde{H}=0$ follows, a contradiction.\\ Now we choose $\delta_1\le \delta_0$ such that for $|\omega- \tilde{\omega}|\le \delta_1$ we have $$d_H\big((\omega,f_{T-1}(\omega)), (\tilde{\omega},f_{T-1}(\tilde{\omega}))\big) \le \min(\delta, \epsilon/(3C))$$ and see that \eqref{eq. epsilon} holds. The proof of continuity of $\omega \mapsto \pi_t(\xi)(\omega)$ for $1\le t\le T-2$ follows by backward induction using the dynamic programming principle and the same arguments as above.
Lastly, as for any $P \in \mathfrak{P}(\R^d)$ such that $\text{supp}(P)=f_{t-1}(\omega)$ \begin{align*} \pi_{t-1}(\xi)(\omega)+H_{t}(\omega) \Delta S_{t}(\omega, \cdot) \ge \pi_t(\xi)(\omega,\cdot) \quad P\text{-a.s.} \end{align*} implies \begin{align*} \pi_{t-1}(\xi)(\omega)+H_{t}(\omega) \Delta S_{t}(\omega, \cdot) \ge \pi_t(\xi)(\omega,\cdot) \quad \text{ on }f_{t-1}(\omega), \end{align*} the claim follows. \end{proof} \begin{remark}\label{rem:bounded} Note that the proof of boundedness of $H_{T}(\tilde{\omega})$ above does not require that $f_{T-1}(\tilde{\omega})$ is compact-valued. \end{remark} \section{Proof of \cref{eu}}\label{appendix:laurence} \begin{lemma} \label{olala} Let NA$({\cal P})$ hold. Assume that $\xi$ is upper semianalytic. Furthermore let \mbox{$\sup_{Q\in{\cal Q}} {\rm I\kern-2pt E}_Q [\xi^- ]< \infty.$} Then $\mathcal{E}^{t}(\xi)$ is upper semianalytic and $\mathcal{E}^{t}(\xi^-)$ is lower semianalytic for all $0\le t\le T-1$. Furthermore \begin{align*} \sup_{Q \in {\cal Q}}{\rm I\kern-2pt E}_{Q}[\mathcal{E}^{t}(\xi^-)]<\infty \end{align*} and the analytic set $\O^t_{\xi}:=\{\mathcal{E}^{t}(\xi^-)<\infty\}$ is of full ${\cal P}$-measure. Let \begin{align} \label{pasmes} \hat{\O}^t_{\xi}:=\{\omega \in \Omega^t\ | \ \mathcal{E}^{t+1}(\xi)(\omega,\cdot)>-\infty, \; {\cal P}_{t}(\omega)\mbox{-q.s.}\}. \end{align} Then ${\O}^t_{\xi} \subset \hat{\O}^t_{\xi}$, in particular $\hat{\O}^t_{\xi}$ is a ${\cal P}$-full measure set. \end{lemma} \begin{proof} Using \cite[Lemma 4.10]{BN} recursively, $\mathcal{E}^{t}(\xi)$ is upper semianalytic and $\mathcal{E}^{t}(\xi^-)$ is lower semianalytic for all $0 \leq t \leq T$. As ${\Omega}_{\xi}^{t} = \{ \mathcal{E}^{t}(\xi^-) < \infty \}= \bigcup_{n \ge 1} \{ \mathcal{E}^{t}(\xi^-) \le n\}$, $\O^t_{\xi}$ is an analytic set. We now prove by induction that ${\Omega}_{\xi}^{t}$ is a ${\cal P}$-full measure set and that $\sup_{Q \in {\cal Q}}{\rm I\kern-2pt E}_{Q}[\mathcal{E}^{t}(\xi^-)]<\infty$. For $t=T$, $\sup_{Q \in {\cal Q}}{\rm I\kern-2pt E}_{Q} [\xi^-]<\infty$ by assumption. If there exists some $P\in {\cal P}$ such that $P({\Omega}_{\xi}^{T})<1$ then $\sup_{Q \in {\cal Q}}{\rm I\kern-2pt E}_{Q}[\mathcal{E}^{T}(\xi^-)] =\infty$, as ${\cal P}$ and ${\cal Q}$ have the same polar sets (see \cite[First Fundamental Theorem, p. 828]{BN}). Assume for some $ t\leq T-1$ that ${\Omega}_{\xi}^{t+1}$ is a ${\cal P}$-full measure set and that\\ $\sup_{Q \in {\cal Q}}{\rm I\kern-2pt E}_{Q}[\mathcal{E}^{t+1}(\xi^-)]<\infty.$ Fix $\epsilon>0$. From \cite[Proposition 7.50, p.184]{bs} (recall that ${\cal Q}_{t}$ has an analytic graph), there exists an ${\cal F}_t^{\mathcal{U}}$-measurable function $Q_{\epsilon}: \Omega^t \to \mathfrak{P}(\Omega)$, such that $Q_{\epsilon}(\omega) \in {\cal Q}_{t}(\omega)$ for all $\omega \in \Omega^{t}$ and \begin{align} \label{eqjt} {\rm I\kern-2pt E}_{Q_{\epsilon}(\omega)} [\mathcal{E}^{t+1}(\xi^-)(\omega,\cdot)] \ge \begin{cases} \mathcal{E}^t (\xi^-)(\omega)- \epsilon &\mbox{if $\o \in {\Omega}_{\xi}^{t}$},\\ \frac{1}{\epsilon} \; &\mbox{otherwise}. \end{cases} \end{align} Assume that $\Omega_{\xi}^{t}$ is not a ${\cal P}$-full measure set. Then there exists some $P \in {\cal P}$ such that $P({\Omega}_{\xi}^{t} )<1$. As ${\cal P}$ and ${\cal Q}$ have the same polar sets, we have that $Q({\Omega}_{\xi}^{t} )<1$ for some $Q \in {\cal Q}$.
We denote by $Q|_{{\cal F}_t^{\mathcal{U}}}$ the restriction of $Q$ to ${\cal F}_t^{\mathcal{U}}$ and set $Q^{*}:=Q|_{{\cal F}_t^{\mathcal{U}}}\otimes Q_{\epsilon} $. Then $Q^{*}\in {\cal Q}|_{{\cal F}_{t+1}^{\mathcal{U}}}$ (see \eqref{Mstar}) and we have that \begin{align*} \sup_{Q \in {\cal Q}} {\rm I\kern-2pt E}_Q [\mathcal{E}^{t+1}(\xi^-)] \ge {\rm I\kern-2pt E}_{Q^{*}} [\mathcal{E}^{t+1}(\xi^-)] \ge \frac{1}{\epsilon} (1-Q^{*}({\Omega}_{\xi}^{t} )) - {\epsilon}Q^{*}({\Omega}_{\xi}^{t} ) . \end{align*} As the previous inequality holds for all $\epsilon>0$, letting $\epsilon$ go to $0$ we obtain that $$\sup_{Q \in {\cal Q}}{\rm I\kern-2pt E}_{Q}[\mathcal{E}^{t+1}(\xi^-)]=\infty,$$ a contradiction. Thus $\Omega_{\xi}^{t}$ is a ${\cal P}$-full measure set.\\ Now, for all $Q \in {\cal Q}$, we set $Q^*=Q|_{{\cal F}_t^{\mathcal{U}}}\otimes Q_{\epsilon} \in {\cal Q}|_{{\cal F}_{t+1}^{\mathcal{U}}}$ (see \eqref{Mstar}). Then, using \eqref{eqjt} we see that \begin{align*} {\rm I\kern-2pt E}_{Q} [\mathcal{E}^{t}(\xi^-)] -\epsilon = {\rm I\kern-2pt E}_{Q} [\mathds{1}_{\Omega^{t}_{\xi}} \mathcal{E}^{t}(\xi^-)] -\epsilon \le {\rm I\kern-2pt E}_{Q^*}[ \mathcal{E}^{t+1}(\xi^-)] \le \sup_{Q \in {\cal Q}}{\rm I\kern-2pt E}_{Q}[\mathcal{E}^{t+1}(\xi^-)]. \end{align*} Again, as this is true for all $\epsilon>0$ and all $Q \in {\cal Q}$ we obtain that $ \sup_{Q \in {\cal Q}}{\rm I\kern-2pt E}_{Q}[\mathcal{E}^{t}(\xi^-)] \le \sup_{Q \in {\cal Q}}{\rm I\kern-2pt E}_{Q}[\mathcal{E}^{t+1}(\xi^-)]<\infty.$\\ Let $0\le t \le T-1$ and $\omega \in \Omega^t_{\xi}$. Then for all $Q \in {\cal Q}_{t}(\omega)$, ${\rm I\kern-2pt E}_Q[\mathcal{E}^{t+1}(\xi^-)(\omega, \cdot)]<\infty$, which implies that $\mathcal{E}^{t+1}(\xi^-)(\omega, \cdot)<\infty$ $Q$-a.s. and thus $\mathcal{E}^{t+1}(\xi^-)(\omega, \cdot)<\infty$ ${\cal P}_{t}(\omega)$-q.s. Assume for a moment that we have proved $\mathcal{E}^{t+1}(\xi) \geq -\mathcal{E}^{t+1}(\xi^-)$. Then $-\mathcal{E}^{t+1}(\xi)(\omega, \cdot)<\infty$ ${\cal P}_{t}(\omega)$-q.s. and $\omega \in \hat{\Omega}^t_{\xi}$. Thus ${\Omega}^t_{\xi} \subseteq \hat{\Omega}^t_{\xi}$ and $\hat{\Omega}^t_{\xi}$ is a ${\cal P}$-full measure set.\\ Let $0\le t \le T-1$. We now prove that $\mathcal{E}^{t+1}(\xi) \geq -\mathcal{E}^{t+1}(\xi^-)$ by backward induction. The claim is clearly true for $t=T-1$. Assume that it is true for some $1\le t+1 \le T$. Then for $\omega \in \Omega^t$ we find \begin{align*} \mathcal{E}^{t}(\xi)(\omega) & = \sup_{Q \in {\cal Q}_{t}(\omega)}{\rm I\kern-2pt E}_Q[\mathcal{E}^{t+1}(\xi)(\omega, \cdot)] \ge \sup_{Q \in {\cal Q}_{t}(\omega)}{\rm I\kern-2pt E}_Q[-\mathcal{E}^{t+1}(\xi^-)(\omega, \cdot)] \\ & \ge \inf_{Q \in {\cal Q}_{t}(\omega)}{\rm I\kern-2pt E}_Q[-\mathcal{E}^{t+1}(\xi^-)(\omega, \cdot)]= -\mathcal{E}^{t}(\xi^-)(\omega). \end{align*} This concludes the proof. \end{proof} \begin{remark}\label{rem. laurence} Recall the set $\Omega_{\text{NA}}^t=\{\omega \in \Omega^t \ | \ \text{NA}({\cal P}_t(\omega))\text{ holds} \}$, which is universally measurable and of ${\cal P}$-full measure (see \cite[Lemma 4.6, p.842]{BN}). Let $\omega \in \Omega^t_{\text{NA}}$. From \cite[Lemma 4.1]{BN}, we know that $\mathcal{E}^{t}(\xi)(\omega)=-\infty$ implies that $\{\mathcal{E}^{t+1}(\xi)(\omega,\cdot)=-\infty\}$ is not ${\cal P}_{t}(\omega)$-polar, i.e.
$\omega \notin \hat{\Omega}^t_{\xi}$. Thus $${\Omega}^t_{\xi} \cap \Omega^t_{\text{NA}} \subseteq \hat{\Omega}^t_{\xi} \cap \Omega^t_{\text{NA}} \subseteq \{\omega \in \Omega_{\text{NA}}^t\ | \ \mathcal{E}^{t}(\xi)(\omega)>-\infty\}.$$ \end{remark} \begin{lemma} \label{lem::usa} If $\xi: \Omega^T \to \R$ is upper semianalytic, then $\pi_t(\xi)$ is upper semianalytic for all $0 \le t \le T-1$. \end{lemma} \begin{proof} We proceed by induction. As $\pi_T(\xi)=\xi$ the claim is true for $t=T$. Assume now that $\pi_{t+1}(\xi)$ is upper semianalytic for some $t\in \{0, \dots, T-1\}$. We show that the claim is true for $t$. Indeed for all $a \in \R$ \begin{align*} &\{\omega \in \Omega^{t} \ | \ \pi_t(\xi)(\omega) < a \}\\ =\ &\{\omega \in \Omega^t \ | \ \exists H \in \R^d, \ \epsilon>0 \text{ s.t. }\forall P \in \mathcal{P}_t(\omega) \ P(a-\epsilon+H \Delta S_{t+1}(\omega, \cdot) \ge \pi_{t+1}(\xi)(\omega, \cdot))=1 \}\\ =\ &\{\omega \in \Omega^t \ | \ \sup_{\epsilon \in \mathbb{Q}_+}\sup_{H \in \mathbb{Q}^d} \inf_{P \in \mathcal{P}_t(\omega)} P(a-\epsilon+H \Delta S_{t+1}(\omega, \cdot) \ge \pi_{t+1}(\xi)(\omega, \cdot))\ge 1 \}. \end{align*} As the function $(\omega, P,H,\epsilon) \mapsto {\rm I\kern-2pt E}_{P}\left[\mathds{1}_{\{a-\epsilon+H \Delta S_{t+1}(\omega, \cdot) \ge \pi_{t+1}(\xi)(\omega, \cdot)\}}\right]$ is lower semianalytic, the same holds true for $\omega \mapsto \sup_{\epsilon \in \mathbb{Q}_+}\sup_{H \in \mathbb{Q}^d} \inf_{P \in \mathcal{P}_t(\omega)} {\rm I\kern-2pt E}_{P}\left[\mathds{1}_{\{a-\epsilon+H \Delta S_{t+1}(\omega, \cdot) \ge \pi_{t+1}(\xi)(\omega, \cdot)\}}\right]$ (see \cite[Lemma 7.30, p.177, Prop. 7.47, p.180]{bs}), thus the set above is coanalytic. To complete the proof, we argue why \begin{align*} &\{\omega \in \Omega^t \ | \ \exists H \in \R^d, \ \epsilon>0 \text{ such that }a-\epsilon+H\Delta S_{t+1}(\omega, \cdot) \ge \pi_{t+1}(\xi)(\omega, \cdot) \ {\cal P}_t(\omega)\text{-q.s.}\} \\ \subseteq\ &\{\omega \in \Omega^t \ | \ \exists H \in \mathbb{Q}^d,\ \epsilon\in \mathbb{Q}_+ \text{ such that }a-\epsilon+H\Delta S_{t+1}(\omega, \cdot) \ge \pi_{t+1}(\xi)(\omega, \cdot) \ {\cal P}_t(\omega)\text{-q.s.}\}: \end{align*} Fix $\omega \in \Omega^t$, $\tilde{H}\in \R^d$, $\epsilon>0$ such that $a-\epsilon+\tilde{H}\Delta S_{t+1}(\omega, \cdot) \ge \pi_{t+1}(\xi)(\omega, \cdot) \ {\cal P}_t(\omega)\text{-q.s.}$ Take $\tilde{\epsilon}\in \mathbb{Q}_+$ such that $0<\tilde{\epsilon}<\epsilon/2$ and $H \in [0, \infty)^d$ such that \begin{align*} H^1+ \dots +H^d \le \frac{\epsilon/2}{\max_{1\le i \le d}S^i_t(\omega)}. \end{align*} It follows that for ${\cal P}_t(\omega)$-q.e. $\omega' \in \Omega$ \begin{align*} a-\tilde{\epsilon}+(H+\tilde{H})\Delta S_{t+1}(\omega, \omega') &\ge a -\epsilon/2+ \tilde{H} \Delta S_{t+1}(\omega, \omega') +H\Delta S_{t+1}(\omega, \omega')\\ &\ge \pi_{t+1}(\xi)(\omega, \omega')+\epsilon/2-H S_t(\omega) \\ &\ge \pi_{t+1}(\xi)(\omega, \omega'). \end{align*} In particular the above inequality is valid for some $H$ such that $\tilde{H}+H \in \mathbb{Q}^d$. \end{proof} \begin{proof}[of \cref{eu}] Let \begin{align*} \Omega_{\text{NA}, \xi}:= \{\omega \in \Omega^T \ | \ \omega \in \Omega_{\text{NA}}^t \cap \Omega_{\xi}^t \text{ for all }0\le t \le T-1 \}, \end{align*} where the definition of $\Omega^t_{\xi}$ is given in \cref{olala} and the definition of $\Omega_{\text{NA}}^t$ in \cref{rem. laurence}. Then by \cref{olala} and \cite[Lemma 4.6, p. 842]{BN} $\Omega_{\text{NA}, \xi}$ is universally measurable and of ${\cal P}$-full measure.
Let $\omega \in \Omega_{\text{NA}, \xi}$. By \cite[Lemma 4.10]{BN}, there exists a universally measurable function $\hat{H}_{t+1}$ such that \begin{align} \label{fantarec} \mathcal{E}^{t}(\xi) (\omega) + \hat{H}_{t+1} (\omega) \Delta S_{t+1}(\omega, \cdot) \ge \mathcal{E}^{t+1}(\xi) (\omega,\cdot) \quad {\cal P}_{t}(\omega)\mbox{-q.s.} \end{align} To see that \begin{align} \label{beindidonc} \pi_t(\xi) = \mathcal{E}^t(\xi) \quad {\cal P}\text{-q.s.} \end{align} for $0\le t \le T$ we argue by backwards induction. Indeed the claim is true by definition for $t=T$. Now we assume that the claim is true for $t+1 \in \{1, \dots, T\}$. By \cite[eq. (4.8) in Lemma 4.8, p.843]{BN} the correspondence \begin{align*} \mathcal{H}_t(\omega)=\{ (Q, P) \in \mathfrak{P}(\Omega) &\times \mathfrak{P}(\Omega) \ | \ {\rm I\kern-2pt E}_{Q}[\Delta S_{t+1}(\omega, \cdot)]=0, \ P \in \mathcal{P}_t(\omega), \ Q \ll P \} \end{align*} has analytic graph. By \cite[Prop. 7.47, p. 179, Prop. 7.48, p. 180, Prop. 7.50, p.184]{bs} $(\omega,Q, P) \mapsto {\rm I\kern-2pt E}_{Q}[\mathcal{E}^{t+1}(\xi)(\omega,\cdot)]$ and $(\omega,Q, P) \mapsto {\rm I\kern-2pt E}_{Q}[\pi_{t+1}(\xi)(\omega, \cdot)]$ are upper semianalytic functions and there exist sequences $(\hat{P}_n,\hat{Q}_n)_{n \in \N}$ and $(\bar{P}_n,\bar{Q}_n)_{n \in \N}$ of $\mathcal{F}^{\mathcal{U}}_t$-measurable selectors of $\mathcal{H}_t$ such that \begin{align*} \lim_{n \to \infty} {\rm I\kern-2pt E}_{\hat{Q}_n(\omega)}[\mathcal{E}^{t+1}(\xi)(\omega, \cdot)] &=\sup_{(Q,P)\in \mathcal{H}_t(\omega)}{\rm I\kern-2pt E}_{Q}[\mathcal{E}^{t+1}(\xi)(\omega, \cdot)]= \mathcal{E}^t(\xi)(\omega),\\ \lim_{n \to \infty} {\rm I\kern-2pt E}_{\bar{Q}_n(\omega)}[\pi_{t+1}(\xi)(\omega,\cdot)]&=\sup_{(Q,P)\in \mathcal{H}_t(\omega)}{\rm I\kern-2pt E}_{Q}[\pi_{t+1}(\xi)(\omega,\cdot)]= \mathcal{E}_t(\pi_{t+1}(\xi))(\omega). \end{align*} Define $P_n(\omega)=(\hat{P}_n(\omega)+\bar{P}_n(\omega))/2\in {\cal P}_t(\omega)$ and $\tilde{P}_t(\omega)= \sum_{n =1}^{\infty}2^{-n}P_n(\omega)$. Then $\tilde{P}_t(\omega) \in \mathfrak{P}(\Omega)$ for all $\omega \in \Omega^t$, $\omega \mapsto \tilde{P}_t(\omega)$ is $\mathcal{F}^{\mathcal{U}}_t$-measurable and $\hat{P}_n(\omega), \bar{P}_n(\omega), P_n(\omega)$ are absolutely continuous with respect to $\tilde{P}_t(\omega)$. Furthermore for $\omega \in \Omega_{\text{NA}}^t$ \begin{align*} {\rm I\kern-2pt E}_{\hat{Q}_n(\omega)}[\mathcal{E}^{t+1}(\xi)(\omega, \cdot)]&\le \sup_{Q \ll \tilde{P}_t(\omega), \ {\rm I\kern-2pt E}_{Q}[\Delta S_{t+1}(\omega, \cdot)]=0} {\rm I\kern-2pt E}_{Q}[\mathcal{E}^{t+1}(\xi)(\omega, \cdot)] \\&\le \inf\{x \in \R \ | \ \exists H\in \R^d \text{ such that }x+H\Delta S_{t+1}(\omega, \cdot) \ge \mathcal{E}^{t+1}(\xi)(\omega, \cdot) \ \tilde{P}_t(\omega)\text{-a.s.}\}\\ &\le \pi_{t}(\mathcal{E}^{t+1}(\xi))(\omega)=\mathcal{E}_{t}(\mathcal{E}^{t+1}(\xi))(\omega)=\mathcal{E}^t(\xi)(\omega), \end{align*} where the third inequality follows from the fact that $P_n(\omega) \in {\cal P}_t(\omega)$ for $n \in \N$ and the first equality follows from \cite[Theorem 3.4]{BN} as $\omega \in \Omega_{\text{NA}}^t$.
Letting $n \to \infty$ we conclude \begin{align*} \sup_{Q \ll \tilde{P}_t(\omega), \ {\rm I\kern-2pt E}_{Q}[\Delta S_{t+1}(\omega, \cdot)]=0} {\rm I\kern-2pt E}_{Q}[\mathcal{E}^{t+1}(\xi)(\omega, \cdot)]&= \mathcal{E}^{t}(\xi)(\omega),\\ \sup_{Q \ll \tilde{P}_t(\omega), \ {\rm I\kern-2pt E}_{Q}[\Delta S_{t+1}(\omega, \cdot)]=0} {\rm I\kern-2pt E}_{Q}[\pi_{t+1}(\xi)(\omega,\cdot)]&= \mathcal{E}_t(\pi_{t+1}(\xi))(\omega). \end{align*} Fix now $P \in \mathcal{P}$ and define $\tilde{P}=P|_{\mathcal{F}^{\mathcal{U}}_t}\otimes \tilde{P}_t$. Then as $P_n(\omega)\in \mathcal{P}_t(\omega)$ the induction assumption implies that $\mathcal{E}^{t+1}(\xi)=\pi_{t+1}(\xi)$ holds $\tilde{P}$-a.s. and thus for $\tilde{P}$-a.e. $\omega \in \Omega^t$ we have \begin{align*} \mathcal{E}^{t}(\xi)(\omega)&=\sup_{Q \ll \tilde{P}_t(\omega), \ {\rm I\kern-2pt E}_{Q}[\Delta S_{t+1}(\omega, \cdot)]=0} {\rm I\kern-2pt E}_{Q}[\mathcal{E}^{t+1}(\xi)(\omega, \cdot)]\\ &=\sup_{Q \ll \tilde{P}_t(\omega), \ {\rm I\kern-2pt E}_{Q}[\Delta S_{t+1}(\omega, \cdot)]=0} {\rm I\kern-2pt E}_{Q}[\pi_{t+1}(\xi)(\omega, \cdot)] \\&=\mathcal{E}_t(\pi_{t+1}(\xi))(\omega)=\pi_t(\xi)(\omega), \end{align*} where the last equality again follows from \cite[Theorem 3.4]{BN} if $\omega \in \Omega_{\text{NA}}^t$. This concludes the proof of \eqref{beindidonc}.\\ Let $(x,H,C) \in \mathcal{A}(\xi)$. Now we show that \begin{align} \label{eeu2} V_t^{x,H,C} & \geq \pi_t(\xi) \quad {\cal P}\mbox{-q.s.} \end{align} This is clearly true at $t=T$. Fix some $1\le t\le T$ and assume that \eqref{eeu2} holds true for $t$. Then \begin{align*} V_{t-1}^{x,H,C} + H_t \Delta S_t \geq V_{t}^{x,H,C} \ge \pi_t(\xi)\quad{\cal P}\mbox{-q.s.} \end{align*} Noting that $V_{t-1}^{x,H,C}$ is $\mathcal{F}^{\mathcal{U}}_{t-1}$-measurable and $\pi_t(\xi)$ is upper semianalytic and using the same reasoning as in \cite[proof of Lemma 4.10, pp.846-848]{BN} we conclude that for $\omega\in \Omega^{t-1}$ in a ${\cal P}$-full measure set \begin{align}\label{eq. bn_lem4.10} V_{t-1}^{x,H,C} (\omega) + H_t (\omega)\Delta S_t(\omega, \cdot) \geq \pi_t(\xi) (\omega, \cdot)\quad {\cal P}_{t-1}(\omega)\text{-q.s.} \end{align} Thus $V_{t-1}^{x,H,C} (\omega) \geq \pi_{t-1}(\xi)(\omega)$ by \eqref{defminr} and \eqref{eeu2} is proved for $t-1$. Next we define the consumption process $\hat{C}$. Let $P=P_{0}\otimes P_{1} \otimes \dots \otimes P_{T-1} \in \mathcal{P}$, where $P_{t} \in \mathcal{P}_{t}(\omega)$ for all $0\le t \le T-1$. Then using \cref{fantarec} and Fubini's Theorem (recall \cite[Proposition 7.45, p.175]{bs}), we get that \begin{align} \label{fantarectot} \mathcal{E}^{t-1}(\xi) + \hat{H}_{t} \Delta S_{t} \ge \mathcal{E}^{t}(\xi) \quad {\cal P}\mbox{-q.s.} \end{align} for a universally measurable function $\hat{H}_t: \Omega^{t-1} \to \R^d$. Using \eqref{fantarectot} recursively, \begin{align} \label{fantacool} \mathcal{E}^0(\xi) + \sum_{u=1}^t \hat{H}_u \Delta S_u \ge \mathcal{E}^t(\xi)\quad {\cal P}\mbox{-q.s.} \end{align} follows. Now we set $\hat{C}_t=\mathcal{E}^0(\xi) + \sum_{u=1}^t \hat{H}_u \Delta S_u - \mathcal{E}^t(\xi)$. Then $\hat{C}_{t}(\omega, \cdot) -\hat{C}_{t-1}(\omega)=\mathcal{E}^{t-1}(\xi)(\omega) - \mathcal{E}^{t} (\xi)(\omega, \cdot) + \hat{H}_{t}(\omega) \Delta S_{t}(\omega, \cdot) \geq 0$ ${\cal P}_{t-1}(\omega)$-q.s.
and, using Fubini's Theorem again, $\hat{C}_{t} -\hat{C}_{t-1}\geq 0$ ${\cal P}\mbox{-q.s.}$ Thus $\hat{C}=(\hat{C}_t)_{0\leq t \leq T}$ is a cumulative consumption process.\\ Now we prove that $\pi(\xi)=\pi_0(\xi)$. Let $(x,H) \in \mathcal{A}(\xi)$. Then as $V_{T-1}^{x,H}+H_{T} \Delta S_{T} \geq \xi \ {\cal P}\mbox{-q.s.}$ it follows as in \eqref{eq. bn_lem4.10} that \begin{align*} V_{T-1}^{x,H}(\omega)+H_{T}(\omega) \Delta S_{T}(\omega, \cdot) \geq \xi(\omega, \cdot) \; {\cal P}_{T-1}(\omega)\mbox{-q.s.} \end{align*} for all $\omega\in \Omega^{T-1}$ in an $\mathcal{F}^{{\cal U}}_{T-1}$-measurable and ${\cal P}$-full measure set. From \eqref{defminr}, we conclude that $\pi_{T-1}(\xi)(\omega) \le V_{T-1}^{x,H}(\omega).$ By induction we see that $\pi_0(\xi) \le x$ and thus $\pi_0(\xi) \le \pi(\xi)$. Conversely, using \eqref{fantacool} and \eqref{beindidonc} \begin{align*} V_{T}^{\pi_0,\hat{H}} =\pi_0(\xi) + \sum_{t=1}^T \hat{H}_t \Delta S_t \geq \mathcal{E}^T(\xi)=\xi \quad {\cal P}\mbox{-q.s.} \end{align*} and therefore $\pi_0(\xi) \geq \pi(\xi)$. Thus $\mathcal{E}^{0}(\xi)=\pi_0(\xi)=\pi(\xi)$ by \eqref{beindidonc} and we obtain (recall \eqref{fantacool} and the definition of $\hat{C}$) that \begin{align*} V_t^{\pi(\xi),\hat{H},\hat{C}}=\mathcal{E}^t(\xi)=\pi_t(\xi) \quad {\cal P}\mbox{-q.s.} \end{align*} Since $V_T^{\pi(\xi),\hat{H},\hat{C}}=\mathcal{E}^T (\xi)=\xi$ ${\cal P}$-q.s., $(\pi(\xi),\hat{H},\hat{C})$ is a superhedging strategy and it is also minimal. Indeed, let $(x,H,C)\in \mathcal{A}(\xi)$; then $V_T^{x,H,C} \ge \xi$ ${\cal P}$-q.s. From \eqref{eeu2}, $V_t^{x,H,C} \ge \pi_t(\xi)=V_t^{\pi(\xi),\hat{H},\hat{C}}$ ${\cal P}$-q.s. This concludes the proof. \end{proof} \newpage \section{Proofs of \cref{thm. exis_new} and \cref{thm. unique_new} } \label{sec:app_utility} \subsection{Proof of \cref{thm. exis_new}: The one-period case}\label{Sec. one-per} We now prove \cref{thm. exis_new} in the case $T=1$, where we follow arguments given in \cite{Nutz}. Let $\xi:\Omega^T \to \R$ be Borel. In preparation for the multi-period case we define the set \begin{align*} \mathcal{A}_{0,x}= \{(H,c) \in \R^d\times \R_+ \ |\ x-c+H\Delta S_1 \ge \pi_1(\xi)\ {\cal P}\text{-q.s.} \}. \end{align*} Recall the definition of $\pi_t(\xi)$ given in \eqref{defminr} for $t=0,1$ and note that if $(H,c)\in \mathcal{A}_{0,x}$ then also $(H,0)\in \mathcal{A}_{0,x}$. We thus often write $H\in \mathcal{A}_{0,x}$ instead of $(H,c)\in \mathcal{A}_{0,x}$. Let $U(1,\cdot, \cdot):\Omega \times [0,\infty) \to \R$ be bounded from above and $\mathcal{F}^{\mathcal{U}}_1$-measurable. Moreover, let us assume that $x \mapsto U(1,\omega,x)$ is non-decreasing, concave and continuous for each $\omega \in \Omega$. Furthermore let the deterministic function $U(0, \cdot):[0,\infty) \to \R$ be non-decreasing and continuous. As usual we set $U(t,\omega, x)=-\infty$ for $x<0$ and $t=0,1$. Let us now state the main result for $T=1$: \begin{proposition} \label{Thm Nutz1} Let NA$({\cal P})$ hold and $x \ge \pi_0(\xi)$. Then \begin{align*} u(x) := \sup_{(H,c) \in \mathcal{A}_{0,x}} \left(\inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}[U(1,x-c+H\Delta S_1-\pi_1(\xi))]+U(0,c)\right)< \infty \end{align*} and there exists $(\hat{H},\hat{c}) \in \mathcal{A}_{0,x}$ such that $\inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}[U(1,x-\hat{c}+\hat{H}\Delta S_1-\pi_1(\xi))]+U(0,\hat{c})=u(x).$ \end{proposition} We prove the result via a lemma.
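\begin{remark}
Before turning to the proof, the following elementary example (ours, for illustration only; the numerical values are chosen purely for concreteness and are not used anywhere else) may help to fix ideas. Let $d=1$ and suppose that, ${\cal P}$-q.s., $\Delta S_1$ takes only the two values $1$ and $-1/2$, neither of the two corresponding events being ${\cal P}$-polar, and that $\pi_1(\xi)$ equals $1$ on the first event and $0$ on the second. Then
\begin{align*}
\mathcal{A}_{0,x}=\left\{(H,c) \in \R\times \R_+ \ \big| \ x-c+H\ge 1 \ \text{ and } \ x-c-H/2\ge 0\right\},
\end{align*}
which is non-empty if and only if $x\ge 1/3$; in particular $\pi_0(\xi)=1/3$, attained by the hedge $H=2/3$. For $x\ge 1/3$ one computes
\begin{align*}
\mathcal{A}_{0,x}=\left\{(H,c) \ \big| \ 0\le c\le x-1/3, \ 1-(x-c)\le H\le 2(x-c)\right\},
\end{align*}
a non-empty, convex and compact subset of $\R\times\R_+$, so that, once upper semicontinuity of the objective is established as in the proof below, the supremum defining $u(x)$ is attained, in line with \cref{Thm Nutz1} and the lemma below.
\end{remark}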
Here we denote \begin{align*} L=\text{span}\left(\left\{\text{supp}\left(P \circ (\Delta S_1)^{-1}\right) \ | \ P \in \mathcal{P}\right\}\right)\subseteq \R^d \end{align*} and the orthogonal complement \begin{align*} L^{\perp}=\{H \in \R^d\ |\ H V=0 \text{ for all } V \in L\}. \end{align*} \begin{lemma} Assume $x \ge \pi_0(\xi)$. Under NA$({\cal P})$ the set $K_{x}=\mathcal{A}_{0,x} \cap (L \times \R_+) \subseteq \R^{d+1}$ is non-empty, convex and compact. \end{lemma} \begin{proof} Clearly $K_{x}$ is convex and closed. It remains to show that $K_{x}$ is bounded: As by definition of $\pi_0(\xi)$ clearly $c \in [0,x-\pi_0(\xi)]$ for all $(H,c) \in \mathcal{A}_{0,x}$, we only need to show that the $H$-component of $K_x$ is bounded. Note that after a translation by $(H_0,0)\in K_{x}$ we have $0 \in \tilde{K}_{x}:= K_{x}-(H_0,0)$. Now we assume towards a contradiction that there exist $H_n \in \tilde{K}_{x}$ such that $|H_n| \to \infty$. We define $\delta=|H_0|+1$. We can extract a subsequence $\delta H_n/|H_n|$ that converges to a limit $H \in \R^d$, so $|H|=\delta$. As $\tilde{K}_{x}$ is convex and contains the origin we have for $n$ large enough $\delta H_n/|H_n|\in \tilde{K}_{x}$. It follows $H \in \tilde{K}_{x}$, since $\tilde{K}_{x}$ is closed. Furthermore \begin{align*} H\Delta S_1 \ge \liminf_{n \to \infty} \frac{\pi_1(\xi)-x-H_0\Delta S_1}{|H_n|/\delta}=0 \hspace{0.5cm} \ {\cal P}\text{-q.s.} \end{align*} By NA$({\cal P})$ this implies $H \Delta S_1=0$ ${\cal P}$-q.s. and thus $H \in L^{\perp}$ by use of \cite[Lemma 2.6]{Nutz}. As $H\in \tilde{K}_{x}$ this implies $H_0+H \in K_{x} \subseteq L$, which means $|H|^2=-H_0H$. As $|H|^2=-H_0H\le |H_0|\,|H|$ by the Cauchy-Schwarz inequality, we get $|H|\le |H_0|=\delta-1$, which contradicts $|H|=\delta$. \end{proof} \begin{proof}[of \cref{Thm Nutz1}] Fatou's lemma implies that for all $P \in {\cal P}^u$ the function $(H,c) \mapsto {\rm I\kern-2pt E}_{P}[U(1,x-c+H\Delta S_1-\pi_1(\xi))]+U(0,c)$ is upper semicontinuous on $\mathcal{A}_{0,x}$. It follows that $(H,c) \mapsto \inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}[U(1,x-c+H\Delta S_1-\pi_1(\xi))]+U(0,c)$ is upper semicontinuous and thus attains its supremum on the compact set $K_{x}$. Finally again using \cite[Lemma 2.6]{Nutz} and recalling that ${\cal P}^u \subseteq {\cal P}$ \begin{align*} &\sup_{(H,c) \in {\mathcal{A}}_{0,x}} \left(\inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}[U(1,x-c+H\Delta S_1-\pi_1(\xi))]+U(0,c)\right)\\ =\ &\sup_{(H,c) \in K_{x}} \left(\inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}[ U(1,x-c+H\Delta S_1-\pi_1(\xi))]+U(0,c)\right). \end{align*} \end{proof} \begin{corollary} \label{cor. minim} Under the conditions of \cref{Thm Nutz1} we have \begin{align*} &\sup_{(H,c) \in \mathcal{A}_{0,x}} \left(\inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}[U(1,x-c+H\Delta S_1-\pi_1(\xi))]+U(0,c)\right)\\ =\ &\inf_{P \in {\cal P}^u}\left( \sup_{(H,c) \in \mathcal{A}_{0,x}} \left({\rm I\kern-2pt E}_{P}[U(1,x-c+H\Delta S_1-\pi_1(\xi))]+U(0,c)\right)\right). \end{align*} \end{corollary} \begin{proof} Note that $K_x$ is compact, convex and ${\cal P}^u$ is convex. Define \begin{align*} f: K_x \times \mathfrak{P}(\Omega) \to \R \hspace{0.5cm} (H,c,P) \mapsto {\rm I\kern-2pt E}_{P}[U(1,x-c+H\Delta S_1-\pi_1(\xi))]+U(0,c) \end{align*} and note that $(H,c) \mapsto f(H,c,P)$ is upper semicontinuous and concave. Furthermore $P \mapsto f(H,c, P)$ is convex on ${\cal P}^u$. The claim follows from \cite[Corollary 2]{terk}.
\end{proof} \begin{remark} The boundedness from above of $U(1,\cdot, \cdot)$ can be replaced by a weaker condition: Indeed it is sufficient to assume there exists a constant $a>0$ such that $\omega \mapsto U(1,\omega, a/2)$ is bounded from below and \begin{align*} {\rm I\kern-2pt E}_{P}[U^+(1,x+H\Delta S_1-\pi_1(\xi))] < \infty \ \text{ for all }H \in \mathcal{A}_{0,x} \ \text{and } P \in {\cal P}^u \end{align*} as well as \begin{align*} {\rm I\kern-2pt E}_{P}[U^+(1,a)] <\infty \quad \text{for all }P \in {\cal P}^u. \end{align*} The proof of \cref{Thm Nutz1} then follows along the lines of \cite[Lemma 1]{RS06} and \cite[Lemma 2.8]{Nutz} after a translation by $H_0\in \text{ri}(K_{x})$. \end{remark} \subsection{Proof of \cref{thm. exis_new}: The multi-period case}\label{Sec Mulit} For the rest of this section we assume NA$({\cal P})$ and that $\xi$ is Borel measurable. Furthermore we often abbreviate $\pi_t(\xi)$ by $\pi_t$. To simplify notation we assume $U(0,\cdot,0)=0$. We give the following definition: \begin{definition} \label{Def U_t} We define $U_T(\omega,x)=U(T,\omega, x)$ and for $0\le t \le T-1$ \begin{align*} U_{t}(\omega, x) &:= \sup_{(H,c) \in \mathcal{A}_{t,x}(\omega)}\bigg( \inf_{P \in {\cal P}_{t}^u(\omega)} {\rm I\kern-2pt E}_{P}[U_{t+1}((\omega,\cdot), x+H\Delta S_{t+1}(\omega, \cdot)-c-\mathds{1}_{\{t=T-1\}}\xi(\omega, \cdot))]\\ &\ +U(t,\omega,c)\bigg),\quad x\ge \pi_{t}(\omega) \end{align*} and $U_t(\omega,x)=-\infty$ otherwise, where for $x \in \R$ we set \begin{align*} \mathcal{A}_{0,x}(\omega) &:= \{ (H,c) \in \R^d\times \{0\} \ | \ x+H\Delta S_{1}(\omega,\cdot) \ge \pi_{1}(\omega, \cdot) \ {\cal P}_0(\omega)\text{-q.s.} \} \\ \mathcal{A}_{t,x}(\omega) &:= \{ (H,c) \in \R^d\times\R_+ \ | \ x+H\Delta S_{t+1}(\omega,\cdot)-c \ge \pi_{t+1}(\omega, \cdot) \ {\cal P}_t(\omega)\text{-q.s.} \}, \quad t \ge 1. \end{align*} \end{definition} We recall from \cref{lem::usa} that $\pi_t(\xi)$ is upper semianalytic. This means in particular that $$\{(\omega,x) \ | \ x < \pi_t(\xi)(\omega) \}=\bigcup_{q\in \mathbb{Q}} \pi_t^{-1}((q,\infty))\times (-\infty,q)$$ is analytic. Next we show by backwards induction, that if \cref{Ass 1a} is satisfied, then $U_t$ has ${\cal P}^u$-q.s. the following properties: \begin{condition} \label{Cond 2} Let $0\le t \le T-1$. The function $U_t: \Omega^t \times \R \to [-\infty, \infty)$ is lower semianalytic and bounded from above. Furthermore the following properties hold: \begin{enumerate} \item $\omega \mapsto U_t(\omega, x(\omega))$ is bounded from below for $x(\omega): = \pi_{t}(\omega)+\epsilon$ and each $\epsilon >0$. \item $x \mapsto U_t(\omega,x)$ is non-decreasing, concave and continuous on $[\pi_t(\omega),\infty)$ for each $\omega \in \Omega^t$. \end{enumerate} \end{condition} \begin{lemma}\label{lem usa} Let NA$({\cal P})$ and \cref{Qanalytic}, \cref{Ass 1b} and \cref{Ass 1a} hold for $U (t, \cdot, \cdot)$, $0 \le t\le T$. Then there exist functions $\tilde{U}_t: \Omega^t \times (-\infty, \infty) \to [-\infty, \infty)$, which satisfy \cref{Cond 2}, such that $\tilde{U}_t= U_t$ ${\cal P}^u$-q.s. \end{lemma} \begin{proof} We prove the claim by induction. Recall that $U_T$ satisfies \cref{Ass 1a}. We now show the induction step from $t+1$ to $t$ and therefore first fix $\omega \in \Omega^t$. For simplicity of presentation we assume $t \le T-2$.\\ We first state some results regarding lower semianalyticity, which lead to the definition of $\tilde{U}_t$: Using \cite[Lemma 7.30, p.177, Prop. 7.47, p.179, Prop. 
7.48, p.180]{bs}, \cref{Ass 1a} and the analytic graph of ${\cal P}^u_t$ we see that $\phi: \Omega^t \times (-\infty, \infty) \times \R^d \times \R\to \overline{\R}$ \begin{align*} \phi(\omega,x,H,c)= \inf_{P \in {\cal P}^u_t(\omega)} {\rm I\kern-2pt E}_{P}[U_{t+1}((\omega, \cdot), x+H\Delta S_{t+1}(\omega, \cdot)-c)]+U(t,\omega,c) \end{align*} is lower semianalytic, as $\Delta S_{t+1}(\omega, \cdot)$ is a Borel measurable function (and also $\xi(\omega, \cdot)$ for $t=T-1$). Now we define the function $\tilde{\phi}: \Omega^t\times \R \times \R^d \times \R\to \overline{\R}$ \begin{align*} \widetilde{\phi}(\omega,x,H,c)= \left\{\begin{array}{ll} -\infty &\text{if }(H,c) \notin \mathcal{A}_{t,x}(\omega) \text{ or }x<\pi_t(\xi)(\omega)\\ \phi(\omega,x,H,c) &\text{otherwise.} \end{array} \right. \end{align*} We show that $\tilde{\phi}$ is lower semianalytic. Fix $a \in \R$. Then \begin{align*} \left\{\widetilde{\phi}<a\right\}&= \{(\omega,x,H,c) \ | \ \phi(\omega,x,H,c) < a, \ (H,c) \in \mathcal{A}_{t,x}(\omega), x\ge \pi_t(\xi)(\omega)\} \\ &\cup \left\{(\omega,x,H,c) \ | \ (H,c) \notin \mathcal{A}_{t,x}(\omega) \text{ or }x< \pi_t(\xi)(\omega)\right\} \\ &= \{\phi < a \} \cup \left\{(\omega,x,H,c) \ | \ (H,c) \notin \mathcal{A}_{t,x}(\omega)\right\} \\ &\cup \left\{(\omega,x,H,c) \ | \ x < \pi_t(\xi)(\omega)\right\}. \end{align*} By the same arguments as for the lower semianalyticity of $\phi$ we see that \begin{align*} &\left\{(\omega,x,H,c) \ | \ (H,c) \notin \mathcal{A}_{t,x}(\omega)\right\} \\ =\ &\Bigg\{(\omega,x,H,c)\ \bigg| \ \sup_{P \in {\cal P}^u_t(\omega)} {\rm I\kern-2pt E}_{P}\left[\left(x+ H\Delta S_{t+1}(\omega, \cdot)-c-\pi_{t+1}(\omega, \cdot)\right)^-\right] > 0\Bigg\} \end{align*} is analytic and the sets \begin{align*} \{ \phi < a \} \qquad \text{and} \qquad \{ (\omega,x,H,c) \in \Omega^t \times\R \times \R^d \times \R\ | \ x < \pi_t(\xi)(\omega) \} \end{align*} are analytic, so $\widetilde{\phi}$ is lower semianalytic. Similarly to \cite[Proposition 3.27]{BC16} we define \begin{align*} \tilde{U}_t(\omega,x)=\lim_{n \to \infty}\sup_{(H,c) \in \mathbb{Q}^d\times \mathbb{Q}_+} \widetilde{\phi}\left(\omega,x+\frac{1}{n},H,c\right). \end{align*} As limits and countable suprema of lower semianalytic functions are lower semianalytic, we conclude that $\tilde{U}_t$ is lower semianalytic. \\ From the definition it is clear that $\tilde{U}_t(\omega, \cdot)$ is non-decreasing and bounded from above. Next we argue that $\tilde{U}_t(\omega,\cdot)$ is concave. As the infimum of concave functions is concave, it is enough to argue that $x \mapsto \sup_{(H,c) \in \mathbb{Q}^d\times \mathbb{Q}_+} \widetilde{\phi}\left(\omega,x,H,c\right)$ is concave. This follows very similarly to \cite[proof of Prop. 2, p.5]{RS06}: Indeed, it is enough to show midpoint-concavity of $\sup_{(H,c) \in \mathbb{Q}^d\times \mathbb{Q}_+} \widetilde{\phi}\left(\omega,\cdot,H,c\right)$, which is immediate by use of the triangle inequality. Concavity implies that $ \tilde{U}_t(\omega, \cdot)$ is continuous on $(\pi_t(\omega), \infty)$. By the definition of $\tilde{U}_t$ concavity and continuity extend to $[\pi_t(\omega), \infty)$. \\ By definition we clearly have \begin{align*} \sup_{(H,c) \in \mathbb{Q}^d\times \mathbb{Q}_+} \widetilde{\phi}\left(\omega,x,H,c\right)\le \sup_{(H,c) \in \mathcal{A}_{t,x}(\omega)} \phi(\omega,x,H,c). \end{align*} We now show equality of $U_t(\omega,x)$ and $\tilde{U}_t(\omega,x)$ for ${\cal P}^u$-q.e.
$\omega \in \Omega^t$. Let us therefore fix $x > \pi_t(\omega)$ and $\omega \in \Omega_{\text{NA}}^t$. Using \cite[Theorem 3.4]{BN} and ${\cal P}^u_t(\omega) \subseteq {\cal P}_t(\omega)$ there exists $\tilde{H} \in \R^d$ such that \begin{align*} \pi_t(\omega)+ \tilde{H}\Delta S_{t+1}(\omega,\omega') \ge \pi_{t+1}(\omega, \omega')\quad \text{ for }{\cal P}_t^u(\omega)\text{-q.e. }\omega' \in \Omega. \end{align*} Take $c<x-\pi_t(\omega)$ and $H \in [0, \infty)^d$ such that \begin{align*} H^1+ \dots +H^d \le \frac{x-\pi_t(\omega)-c}{\max_{1 \le i \le d}S^i_t(\omega)}. \end{align*} It follows for ${\cal P}^u_t(\omega)$-q.e. $\omega'\in \Omega$ that \begin{align*} x+(H+\tilde{H})\Delta S_{t+1}(\omega, \omega')-c &=x -\pi_t(\omega)+ H \Delta S_{t+1}(\omega, \omega')+ \pi_t(\omega) +\tilde{H}\Delta S_{t+1}(\omega, \omega')-c\\ &\ge x-\pi_t(\omega) -H S_t(\omega) +\pi_{t+1}(\omega, \omega')-c \\&\ge \pi_{t+1}(\omega, \omega'). \end{align*} Thus the affine hull of $\mathcal{A}_{t,x}(\omega)$ is $\R^{d+1}$ and consequently $\text{ri}(\mathcal{A}_{t,x}(\omega))$ is an open set in $\R^{d+1}$. This implies \begin{align*} \sup_{(H,c) \in \mathbb{Q}^d\times \mathbb{Q}_+} \widetilde{\phi}\left(\omega,x,H,c\right)= \sup_{(H,c) \in \mathcal{A}_{t,x}(\omega)} \phi(\omega,x,H,c) \end{align*} for $x > \pi_t(\omega)$. Equality at $x=\pi_t(\omega)$ follows by right-continuity of $U_t$ and $\tilde{U}_t$. Indeed, right-continuity of $U_t(\omega,x)$ at $x=\pi_{t}(\omega)$ follows by compactness of $\mathcal{A}_{t,\pi_t(\omega)+1}(\omega)\cap \text{span}\left(\left\{\text{supp}\left(P \circ (\Delta S_{t+1}(\omega, \cdot))^{-1}\right) \ | \ P \in \mathcal{P}_t(\omega)\right\}\right)$ and Fatou's Lemma.\\ Lastly we show boundedness of $\tilde{U}_t$ from below: Let $x(\omega)=\pi_t(\omega)+\epsilon$ for some $\epsilon>0$. By the above arguments there exists $\hat{H}\in \mathbb{Q}^d$ such that $\pi_t(\omega)+\epsilon/3+\hat{H}\Delta S_{t+1}(\omega, \omega')\ge \pi_{t+1}(\omega, \omega')$ $\mathcal{P}^u_t(\omega)$-q.s. Thus \begin{align*} U_{t}(\omega,x(\omega)) &\ge \inf_{P \in \mathcal{P}_t^u(\omega)} {\rm I\kern-2pt E}_{P}[U_{t+1}((\omega, \cdot), x(\omega)+\hat{H} \Delta S_{t+1}(\omega, \cdot)-\epsilon/3)] +U(t,\omega,\epsilon/3)\\ &\ge \inf_{P \in \mathcal{P}_t^u(\omega)} {\rm I\kern-2pt E}_{P}[U_{t+1}((\omega, \cdot), \pi_{t+1}(\omega, \cdot)+\epsilon/3)]+U(t, \omega,\epsilon/3) \end{align*} is bounded from below by the induction hypothesis and \cref{Ass 1a}. This shows the claim. \end{proof} \begin{lemma}\label{lem meas_selec} Let NA$({\cal P})$ and \cref{Qanalytic}, \cref{Ass 1b} and \cref{Ass 1a} hold for $U(t, \cdot, \cdot)$, $0 \le t\le T$. Let $t \in \{0, \dots, T-1\}$ and $(H,C) \in \mathcal{A}_{\pi_0}$. There exist universally measurable mappings $\hat{H}_{t+1},\hat{c}_t$ such that $\hat{c}_t$ is non-negative, \begin{align*} V^{\pi_0,H,C}_{t-1}(\omega)+H_t(\omega)\Delta S_t(\omega)+ \hat{H}_{t+1}(\omega)\Delta S_{t+1}(\omega,\cdot)-\hat{c}_t(\omega) \ge \pi_{t+1}(\omega,\cdot)\ {\cal P}_t(\omega)\text{-q.s.} \end{align*} and \begin{align*} \inf_{P \in {\cal P}_t^u(\omega)} &{\rm I\kern-2pt E}_P\left[U_{t+1}\left((\omega, \cdot), V^{\pi_0,H,C}_{t-1}(\omega)+H_t(\omega)\Delta S_t(\omega) + \hat{H}_{t+1}(\omega) \Delta S_{t+1}(\omega, \cdot)-\hat{c}_t(\omega)-\mathds{1}_{\{t=T-1\}}\xi(\omega, \cdot)\right)\right]\\&+U(t,\omega,\hat{c}_t(\omega)) = U_t\left(\omega, V^{\pi_0,H,C}_{t-1}(\omega)+H_t(\omega)\Delta S_t(\omega)\right) \end{align*} for ${\cal P}^u$-a.e. $\omega \in \Omega^t$.
\end{lemma} \begin{proof} We show that $\tilde{U}_t$ is $\mathcal{F}^{\mathcal{U}}_t \otimes \mathcal{B}(\R)$-measurable: Indeed, we know that $\omega \mapsto \tilde{U}_t(\omega,x)$ is lower semianalytic and in particular universally measurable. Also $x \mapsto \tilde{U}_t(\omega,x)$ is continuous on $[\pi_t(\omega), \infty)$, bounded from above and $\tilde{U}_t(\omega,x)=-\infty$ for $x< \pi_t(\omega)$. Thus it is concave and upper semicontinuous on $\R$ and the claim follows from \cite[Lemma A.35, p. 1889]{BC16}. Next we show that the function \begin{align*} \phi(\omega,x,H,c)= \inf_{P \in {\cal P}^u_t(\omega)} {\rm I\kern-2pt E}_{P}[\tilde{U}_{t+1}((\omega, \cdot), x+H\Delta S_{t+1}(\omega, \cdot)-c)]+U(t,\omega,c) \end{align*} is $\mathcal{F}^{\mathcal{U}}_t \otimes \mathcal{B}(\R) \otimes \mathcal{B}(\R^d)\otimes \mathcal{B}(\R)$-measurable: As we have argued in \cref{lem usa} $\omega \mapsto \phi(\omega,x,H,c)$ is lower semianalytic and in particular universally measurable. On the other hand, $x \mapsto \tilde{U}_{t+1}(\omega, x)$ is upper semicontinuous and concave for any $\omega \in \Omega^{t+1}$. Since $\tilde{U}_{t+1}$ is bounded from above, an application of Fatou's lemma yields that $(x,H,c) \mapsto \phi(\omega,x,H,c)$ is upper semicontinuous and concave for each $\omega \in \Omega^t$. Again by \cite[Lemma A.35, page 1889]{BC16} it follows that $\phi$ is $\mathcal{F}^{\mathcal{U}}_t \otimes \mathcal{B}(\R) \otimes \mathcal{B}(\R^d)\otimes \mathcal{B}(\R)$-measurable. Now we define the correspondence \begin{align*} \Phi(\omega) :&= \{ (H',c') \in \R^d\times \R_+ \ | \ \phi(\omega, V^{\pi_0,H,C}_{t-1}(\omega)+H_t(\omega) \Delta S_{t}(\omega)-\mathds{1}_{\{t=T-1\}}\xi(\omega, \cdot), H',c')\\ &\qquad = \tilde{U}_t(\omega,V^{\pi_0,H,C}_{t-1}(\omega)+H_{t}(\omega) \Delta S_t (\omega)) \} , \quad \omega \in \Omega^t. \end{align*} Then its graph is in $\mathcal{F}^{\mathcal{U}}_t \otimes \mathcal{B}(\R^d)\otimes\mathcal{B}(\R)$. Next we define the function $$\Upsilon :\omega \mapsto \mathcal{A}_{t,V_{t-1}^{\pi_0,H,C}(\omega)+H_t(\omega) \Delta S_t(\omega)}(\omega).$$ By a slight variation of the arguments given in \cite[proof of Lemma 4.10, pp.846-848]{BN} the graph of $\Upsilon$ is $\mathcal{F}^{\mathcal{U}}_t \otimes \mathcal{B}(\R^d)\otimes \mathcal{B}(\R)$-measurable and thus $ \text{graph}(\Upsilon) \cap ((\Omega_{\text{NA}}^t \cap \Omega_{\xi}^t)\times \R^d \times \R)\in\mathcal{F}_{t}^{\mathcal{U}} \otimes {\cal B}(\R^d) \otimes {\cal B}(\R)$. Then also the graph of \begin{align*} \tilde{\Phi}(\omega)=\begin{cases} \mathcal{A}_{t,V_{t-1}^{\pi_0,H,C}(\omega)+H_t(\omega) \Delta S_t(\omega)}(\omega) \cap \Phi(\omega) & \omega \in \Omega_{\text{NA}}^t \cap \Omega_{\xi}^t\\ \emptyset & \text{otherwise} \end{cases} \end{align*} is in $\mathcal{F}_{t}^{\mathcal{U}} \otimes {\cal B}(\R^d) \otimes {\cal B}(\R)$ and $\tilde{\Phi}$ admits an $\mathcal{F}_t^{\mathcal{U}}$-measurable selector $(\hat{H}_{t+1},\hat{c}_t)$ on the universally measurable set $\{ \tilde{\Phi} \neq \emptyset \}\in \mathcal{F}_t^{{\cal U}}$ by the von Neumann-Aumann selection theorem (\cite[Cor.1, p.120]{bv}). We extend $(\hat{H}_{t+1}, \hat{c}_t)$ by setting $\hat{H}_{t+1}= \hat{c}_t=0$ on $\{\tilde{\Phi} = \emptyset \}$. Moreover the one-period case given in \cref{Thm Nutz1} applied with $x=V_{t-1}^{\pi_0,H,C}(\omega)+H_t(\omega) \Delta S_t(\omega)$, \cref{olala}, \cref{rem.
laurence} as well as existence of superhedging strategies as stated in \cite[Theorem 3.4]{BN} show that $\tilde{\Phi}(\omega) \neq \emptyset$ for ${\cal P}^u$-q.e. $\omega \in \Omega^t$. This shows the claim, as $U_t= \tilde{U}_t$ ${\cal P}^u$-q.s. \end{proof} \begin{proof}[of \cref{thm. exis_new}] Let $(\hat{H}_1,0)$ be an optimal strategy for $$ \inf_{P \in {\cal P}^u_0} {\rm I\kern-2pt E}_P[U_{1}( \pi_0+ H_{1} \Delta S_{1})]$$ as in \cref{lem meas_selec}. Proceeding recursively, we use \cref{lem meas_selec} to define the strategy $\omega \mapsto (\hat{H}_{t+1}, \hat{c}_t)(\omega)$ for \begin{align*} \inf_{P \in {\cal P}^u_t(\omega)} &{\rm I\kern-2pt E}_{P}[U_{t+1}((\omega, \cdot), V^{\pi_0,\hat{H}, \hat{C}}_{t-1}(\omega)+\hat{H}_{t}(\omega)\Delta S_t(\omega)+ H_{t+1}(\omega) \Delta S_{t+1}(\omega, \cdot)-c_t(\omega) -\mathds{1}_{\{t=T-1\}}\xi(\omega, \cdot))]\\ &+U(t, c_t(\omega)) \end{align*} where $1\le t \le T-1$ and define $\hat{C}_t=\sum_{s=1}^t \hat{c}_s$ as well as $\Delta\hat{C}_T=V_{T-1}^{\pi_0,\hat{H}, \hat{C}}+\hat{H}_T \Delta S_T-\xi$. By construction we then have $(\hat{H}, \hat{C}) \in \mathcal{A}_{\pi_0}$. To establish that $(\hat{H}, \hat{C})$ is optimal we first show that \begin{align}\label{eq own2} \inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}\left[\sum_{s=1}^T U(s,\Delta\hat{C}_s)\right] \ge U_0(\pi_0). \end{align} Let $0\le t \le T-1.$ By definition of $(\hat{H}, \hat{C})$ we have \begin{align*} &\inf_{P' \in {\cal P}^u_t(\omega)} {\rm I\kern-2pt E}_{P'}[U_{t+1}((\omega, \cdot),V^{\pi_0,\hat{H}, \hat{C}}_{t}(\omega)+\hat{H}_{t+1}(\omega)\Delta S_{t+1}(\omega, \cdot)-\mathds{1}_{\{t=T-1\}}\xi(\omega, \cdot))]\\ +\ &U(t,\omega, \Delta \hat{C}_t(\omega)) =U_t(\omega, V^{\pi_0,\hat{H}, \hat{C}}_{t-1}(\omega)+\hat{H}_t(\omega) \Delta S_t(\omega)) \end{align*} for all $\omega \in \Omega^t$ outside a ${\cal P}^u$-polar set. Let $P \in {\cal P}^u$; then $P= P_0 \otimes \cdots \otimes P_{T-1}$ for some selectors $P_t$ of ${\cal P}^u_t$, $t=0, \dots, T-1$ and we conclude via Fubini's theorem that \begin{align*} &{\rm I\kern-2pt E}_{P}\left[U_{t+1}\left(V^{\pi_0,\hat{H}, \hat{C}}_{t}+\hat{H}_{t+1}\Delta S_{t+1}-\mathds{1}_{\{t=T-1\}}\xi\right)+ \sum_{s=1}^t U(s, \Delta \hat{C}_s)\right]\\ =\ &{\rm I\kern-2pt E}_{(P_0 \otimes \cdots \otimes P_{t-1})(d\omega)}\bigg( {\rm I\kern-2pt E}_{P_t(\omega)}\left[U_{t+1}\left((\omega, \cdot), V^{\pi_0,\hat{H}, \hat{C}}_{t}(\omega)+\hat{H}_{t+1}(\omega)\Delta S_{t+1}(\omega, \cdot)-\mathds{1}_{\{t=T-1\}}\xi(\omega, \cdot)\right)\right]\\ +\ &\sum_{s=1}^t U\left(s,\omega, \Delta \hat{C}_s(\omega)\right)\bigg)\\ \ge \ &{\rm I\kern-2pt E}_{P_0 \otimes \cdots \otimes P_{t-1}}\bigg[U_{t}\left(V^{\pi_0,\hat{H}, \hat{C}}_{t-1}+\hat{H}_t \Delta S_t\right)+ \sum_{s=1}^{t-1} U\left(s,\Delta \hat{C}_s\right) \bigg]\\ =\ &{\rm I\kern-2pt E}_{P}\bigg[U_t\left(V^{\pi_0,\hat{H}, \hat{C}}_{t-1}+\hat{H}_t \Delta S_t\right)+ \sum_{s=1}^{t-1} U\left(s, \Delta \hat{C}_s\right)\bigg]. \end{align*} A repeated application of this inequality shows \eqref{eq own2}. To conclude that $(\hat{H}, \hat{C})$ is optimal, it remains to prove that \begin{align*} U_0(\pi_0) \ge \sup_{(H,C) \in \mathcal{A}_{\pi_0}} \inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}\left[\sum_{s=1}^T U(s,\Delta C_s)\right] =:v(\pi_0).
\end{align*} To this end we fix an arbitrary $(H,C) \in \mathcal{A}_{\pi_0}$ and first show that \begin{align}\label{eq max} &\inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}\left[U_t\left(V^{\pi_0,H,C}_{t-1}+H_t \Delta S_t \right) +\sum_{s=1}^{t-1}U(s, \Delta C_s)\right]\\ \nonumber \ge\ &\inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}\left[U_{t+1}\left(V^{\pi_0,H,C}_{t}+H_{t+1} \Delta S_{t+1}-\mathds{1}_{\{t=T-1\}}\xi\right)+ \sum_{s=1}^t U(s, \Delta C_s)\right], \quad t=1, \dots, T-1. \end{align} Let $\epsilon>0$. As in the proof of \cref{lem usa} \begin{align*} (\omega, P) \mapsto {\rm I\kern-2pt E}_{P}\left[U_{t+1}((\omega, \cdot), V^{\pi_0,H,C}_{t}(\omega)+H_{t+1}\Delta S_{t+1}-\mathds{1}_{\{t=T-1\}}\xi(\omega, \cdot))\right]+\sum_{s=1}^{t} U(s,\omega,\Delta C_s(\omega)) \end{align*} is lower semianalytic. Using \cite[Prop. 7.50, p. 184]{bs} and \cite[Prop. 7.44, p.172]{bs} for $\omega \in \Omega^t$ outside a ${\cal P}^u$-polar set we have for some universally measurable $\epsilon$-optimal selector $P_t^{\epsilon}$ that \begin{align*} &{\rm I\kern-2pt E}_{P_t^{\epsilon}(\omega)} \bigg[U_{t+1}\left((\omega, \cdot), V^{\pi_0,H,C }_{t}(\omega)+H_{t+1}(\omega)\Delta S_{t+1}(\omega, \cdot)-\mathds{1}_{\{t=T-1\}}\xi(\omega,\cdot)\right)\bigg]+\sum_{s=1}^{t} U(s,\omega,\Delta C_s(\omega))-\epsilon\\ &\le \ (-\epsilon)^{-1} \vee \bigg(\inf_{P \in {\cal P}^u_t(\omega)} {\rm I\kern-2pt E}_{P}\left[U_{t+1}\left((\omega, \cdot), V^{\pi_0,H,C}_{t}(\omega)+H_{t+1}(\omega)\Delta S_{t+1}(\omega, \cdot)-\mathds{1}_{\{t=T-1\}}\xi(\omega,\cdot)\right)\right]\\ &+\sum_{s=1}^{t} U(s,\omega,\Delta C_s(\omega))\bigg)\\ &\le \ (-\epsilon)^{-1} \vee \bigg(\sup_{(H',c') \in \mathcal{A}_{t,V_{t-1}^{\pi_0,H,C}(\omega)+H_t(\omega)\Delta S_t(\omega)}(\omega)}\inf_{P \in {\cal P}^u_t(\omega)} {\rm I\kern-2pt E}_{P}\bigg[U_{t+1}\big((\omega, \cdot), V^{\pi_0,H,C}_{t-1}(\omega)\\ &+H_t(\omega)\Delta S_t(\omega)-c' +H'\Delta S_{t+1}(\omega,\cdot)-\mathds{1}_{\{t=T-1\}}\xi(\omega, \cdot)\big)\bigg]+\sum_{s=1}^{t-1} U(s,\omega,\Delta C_s(\omega))+U(t,c')\bigg)\\ &= \ (-\epsilon)^{-1} \vee \bigg( U_{t}(\omega, V^{\pi_0,H,C}_{t-1}(\omega)+H_t(\omega)\Delta S_t(\omega))+\sum_{s=1}^{t-1}U(s,\omega, \Delta C_s(\omega)) \bigg). \end{align*} Given $P \in {\cal P}^u$ we thus have \begin{align*} &{\rm I\kern-2pt E}_{P}\bigg[(-\epsilon)^{-1} \vee \bigg(U_{t}(V^{\pi_0,H,C}_{t-1}+H_t\Delta S_t)+\sum_{s=1}^{t-1}U(s,\Delta C_s)\bigg)\bigg] \\ \ge\ &{\rm I\kern-2pt E}_{P \otimes P_t^{\epsilon}}\bigg[U_{t+1}(V^{\pi_0,H,C}_{t}+H_{t+1}\Delta S_{t+1}-\mathds{1}_{\{t=T-1\}}\xi)+\sum_{s=1}^{t} U(s,\Delta C_s)\bigg]-\epsilon \\ \ge &\inf_{P' \in {\cal P}^u} {\rm I\kern-2pt E}_{P'}\bigg[U_{t+1}(V^{\pi_0,H,C}_{t}+H_{t+1}\Delta S_{t+1}-\mathds{1}_{\{t=T-1\}}\xi)+\sum_{s=1}^{t} U(s,\Delta C_s)\bigg]-\epsilon. \end{align*} As $\epsilon>0$ and $P \in {\cal P}^u$ were arbitrary \eqref{eq max} follows. Noting that $U_0(\pi_0)=\inf_{P \in {\cal P}^u}{\rm I\kern-2pt E}_{P}[U_0(V^{\pi_0,H,C}_0)]$ a repeated application of \eqref{eq max} yields \begin{align*} U_0(\pi_0) \ge \inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}[ U_1(\pi_0+H_1 \Delta S_1)] \ge \dots &\ge \inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}\bigg[U_T(V^{\pi_0,H,C}_{T-1}+H_T \Delta S_T-\xi)+\sum_{s=1}^{T-1} U(s, \Delta C_s)\bigg]\\ &=\inf_{P \in {\cal P}^u} {\rm I\kern-2pt E}_{P}\bigg[\sum_{s=1}^{T} U(s, \Delta C_s)\bigg]. \end{align*} As $(H,C) \in \mathcal{A}_{\pi_0}$ was arbitrary, it follows that $U_0(\pi_0) \ge v(\pi_0)$.
This concludes the proof, since $\pi_0=\pi(\xi)$. \end{proof} \subsection{Proof of \cref{thm. unique_new}} \begin{proof} Existence of an optimal investment-consumption strategy follows from \cref{thm. exis_new}. We now show uniqueness of optimizers. We fix $0\le t \le T-1$ and recall the definition of $\tilde{U}_t$ given in \cref{lem usa}. Note that one can show that the function $$(\omega,P) \mapsto \sup_{(H,c) \in \mathcal{A}_{t,x}(\omega)} {\rm I\kern-2pt E}_{P} [\tilde{U}_{t+1}((\omega, \cdot), x+H \Delta S_{t+1}(\omega,\cdot)-c-\mathds{1}_{\{t=T-1\}}\xi(\omega, \cdot))] +U(t, c)$$ is lower semianalytic by reducing the above expression to a supremum over a countable set as in the proof of \cref{lem usa}. Recall that again by \cref{lem usa} there exists a set of full ${\cal P}^u$ measure on which $\tilde{U}_t=U_t$ for all $0\le t \le T$. For the rest of the proof we take $\omega$ in the intersection of this set with $\Omega_{\text{NA}}^t$. Using the same Jankov-von-Neumann argument as in the proof of \cref{thm. exis_new} and \cref{cor. minim} we conclude that for each $t=0, \dots, T-1$ there exists a sequence $P^n_{t}:\Omega^{t}\to \mathfrak{P}(\Omega)$ of universally measurable kernels such that $P_{t}^n(\omega) \in {\cal P}^u_{t}(\omega)$ and for $x\ge \pi_t(\xi)(\omega)$ \begin{align*} \sup_{(H,c) \in \mathcal{A}_{t,x}(\omega)} {\rm I\kern-2pt E}_{P^n_{t}(\omega)} [\tilde{U}_{t+1}((\omega, \cdot), x+H \Delta S_{t+1}(\omega,\cdot)-c-\mathds{1}_{\{t=T-1\}}\xi(\omega, \cdot))] +U(t, c) \downarrow \tilde{U}_{t}(\omega,x). \end{align*} Since ${\cal P}^u_t(\omega)$ is compact, there exists a probability measure $\hat{P}_t(\omega) \in {\cal P}^u_t(\omega)$ and a subsequence $\{n_k(\omega)\}_{k \in \N}$ such that $\lim_{k \to \infty} P^{n_k(\omega)}_t(\omega)=\hat{P}_t(\omega)$. We now show that for ${\cal P}^u$-q.e. $\omega \in \Omega^{t}$ and $x\ge\pi_{t}(\omega)$ the functions \begin{align*} U_{t}(\omega, x) &= \sup_{(H,c) \in \mathcal{A}_{t,x}(\omega)} \inf_{P \in {\cal P}^u_{t}(\omega)} {\rm I\kern-2pt E}_{P}[U_{t+1}((\omega, \cdot), x+H\Delta S_{t+1}(\omega, \cdot)-c-\mathds{1}_{\{t=T-1\}}\xi(\omega,\cdot))]\\&+ U(t, \omega,c) \end{align*} have a unique optimizer $(H,c)\in \mathcal{A}_{t,x}(\omega)$. For notational convenience we assume that $0\le t \le T-2$. We note that by concavity of $\tilde{U}_{t+1}$ and $U(t, \cdot)$ the function \begin{align*} (H,c) \mapsto \inf_{P \in {\cal P}^u_{t}(\omega)}\ {\rm I\kern-2pt E}_{P}\left(\tilde{U}_{t+1}\left((\omega, \cdot), x+H\Delta S_{t+1}(\omega,\cdot)-c\right)\right)+U(t,c) \end{align*} is concave. Now assume that there are $(H^1,c^1)$, $(H^2,c^2) \in \mathcal{A}_{t, x}(\omega)$ such that \begin{align*} &\inf_{P \in {\cal P}^u_{t}(\omega)} {\rm I\kern-2pt E}_{P}\left(\tilde{U}_{t+1}\left((\omega, \cdot),x+H^1\Delta S_{t+1}(\omega,\cdot)-c^1\right)\right)+U(t,c^1)\\ =\ &\inf_{P \in {\cal P}^u_{t}(\omega)} {\rm I\kern-2pt E}_{P}\left(\tilde{U}_{t+1} \left((\omega, \cdot), x+H^2\Delta S_{t+1}(\omega,\cdot)-c^2\right)\right)+U(t, c^2)\\=\ &\tilde{U}_{t}(\omega,x).
\end{align*} Note that for the strategy $(H^3,c^3):=((H^1+H^2)/2,(c^1+c^2)/2) \in \mathcal{A}_{t, x}(\omega)$ we have by concavity \begin{align*} &\inf_{P \in {\cal P}^u_{t}(\omega)} {\rm I\kern-2pt E}_{P}\left(\tilde{U}_{t+1}\left((\omega, \cdot),x+H^3\Delta S_{t+1}(\omega,\cdot)-c^3\right)\right)+U(t,c^3)\\ \ge\ &\frac{1}{2}\bigg(\inf_{P \in {\cal P}^u_{t}(\omega)} {\rm I\kern-2pt E}_{P}\left(\tilde{U}_{t+1}\left((\omega, \cdot),x+H^1\Delta S_{t+1}(\omega,\cdot)-c^1\right)\right)+U(t,c^1)\\ +\ &\inf_{P \in {\cal P}^u_{t}(\omega)} {\rm I\kern-2pt E}_{P}\left(\tilde{U}_{t+1}\left((\omega, \cdot),x+H^2\Delta S_{t+1}(\omega,\cdot)-c^2\right)\right)+U(t,c^2)\bigg)=\tilde{U}_{t}(\omega,x). \end{align*} We thus conclude \begin{align*} \inf_{P \in {\cal P}^u_{t}(\omega)} {\rm I\kern-2pt E}_{P}\left(\tilde{U}_{t+1}\left((\omega, \cdot),x+H^3\Delta S_{t+1}(\omega,\cdot)-c^3\right)\right)+U(t,c^3)=\tilde{U}_{t}(\omega,x). \end{align*} Furthermore, for any $x\ge \pi_t(\omega)$ and any maximizer $(\tilde{H}, \tilde{c}) \in \mathcal{A}_{t,x}(\omega)$ of $\tilde{U}_{t}(\omega,x)$ we have \begin{align} \label{eq. limit1} &\sup_{(H,c) \in \mathcal{A}_{t, x}(\omega)}\left( {\rm I\kern-2pt E}_{P^{n_k(\omega)}_{t}(\omega)} [\tilde{U}_{t+1}((\omega, \cdot), x+H \Delta S_{t+1}(\omega,\cdot)-c)]+U(t, c)\right)\nonumber \\ \ge\ &{\rm I\kern-2pt E}_{P^{n_k(\omega)}_{t}(\omega)}[\tilde{U}_{t+1}((\omega, \cdot),x+\tilde{H} \Delta S_{t+1}(\omega,\cdot)-\tilde{c})]+U(t, \tilde{c}) \\ \ge\ &\inf_{P \in {\cal P}^u_{t}(\omega)} {\rm I\kern-2pt E}_{P}[\tilde{U}_{t+1}((\omega, \cdot),x+\tilde{H}\Delta S_{t+1}(\omega,\cdot)-\tilde{c})]+U(t, \tilde{c})=\tilde{U}_{t}(\omega,x),\nonumber \end{align} so taking limits in \eqref{eq. limit1} we find \begin{align*} \lim_{k \to \infty} {\rm I\kern-2pt E}_{P^{n_k(\omega)}_{t}(\omega)} [\tilde{U}_{t+1}((\omega, \cdot),x+\tilde{H} \Delta S_{t+1}(\omega,\cdot)-\tilde{c})]+U(t, \tilde{c})=\tilde{U}_{t}(\omega,x). \end{align*} Furthermore we note that by assumption and \cref{lem usa} $\tilde{U}_{t}(\omega,x)$ is bounded by some $C$ on $\{(\omega,x) \in \Omega^t \times \R \ | \ x \ge \pi_t(\xi)(\omega)\}$, non-decreasing as well as continuous in $x$, and $\xi$ is continuous. Note that the superhedging prices $\omega \mapsto \pi_t(\xi)(\omega)$ are continuous on $\{(\omega, v)\in \Omega^t \ | \ v\in f_{t-1}(\omega)\}$ by assumption.\\ For $n\in \N_+$ we define the shifted utility function $$U^{1/n}(T,x):=U(T,x+1/n).$$ Furthermore we inductively define the corresponding one-step versions for the multiperiod case $U^{1/n}_T(\omega,x):= U^{1/n}(T,x)$ and $$U^{1/n}_t(\omega,x):= \sup_{(H,c) \in \mathcal{A}_{t,x}(\omega)} \inf_{P \in {\cal P}^u_{t}(\omega)} {\rm I\kern-2pt E}_{P}[U^{1/n}_{t+1}((\omega, \cdot), x+1/n+H\Delta S_{t+1}(\omega, \cdot)-c)]+ U(t,c)$$ for $1\le t\le T-1$. Note that in particular $U^{1/n}(t,x)$ fulfils \cref{Ass 3} for all $n\in \N$ and $1\le t\le T$. Denote by $\tilde{U}^{1/n}_t(\omega,x)$ their lower semianalytic versions. Again by \cref{lem usa} there exists a set of full ${\cal P}^u$-measure such that $\tilde{U}^{1/n}_t(\omega,x)=U^{1/n}_t(\omega,x)$ for all $n\in \N$ and $1\le t\le T$, and we fix $\omega$ in this set from now on. We now show by backwards induction that for all $n\in \N$ the function $(\omega,x) \mapsto \tilde{U}^{1/n}_{t}(\omega,x+1/n)$ is continuous at every point of the set $\{(\omega,x)\in \Omega^t\times \R \ | \ x\ge \pi_{t}(\xi)(\omega) \}$: Let us assume that the hypothesis is true for $t+1$ and fix $n\in \N$, $x \ge \pi_{t}(\xi)(\omega)$.
For any $(\tilde{\omega},\tilde{x}) \in \Omega^t\times \R$ we have \begin{align*} \left|\tilde{U}^{1/n}_{t}(\omega,x+1/n)-\tilde{U}^{1/n}_{t}(\tilde{\omega},\tilde{x}+1/n)\right|&\le \left|\tilde{U}_{t}^{1/n}(\omega,x+1/n)-\tilde{U}_{t}^{1/n}(\omega,\tilde{x}+1/n)\right|\\ &+\left|\tilde{U}_{t}^{1/n}(\omega,\tilde{x}+1/n)-\tilde{U}_{t}^{1/n}(\tilde{\omega},\tilde{x}+1/n)\right|. \end{align*} As $x \mapsto \tilde{U}^{1/n}_{t}(\omega,x+1/n)$ is continuous on $[\pi_t(\xi)(\omega)-1/n,\infty)$, there exists $\delta>0$ such that the first summand can be bounded by $\epsilon/2$ if $|x- \tilde{x}|\le \delta$. Thus it is sufficient to show that there exists $\tilde{\delta} >0$ such that for all $|\tilde{\omega}-\omega| \le \tilde{\delta}$ we have \begin{align*} \left|\tilde{U}_{t}^{1/n}(\omega,\tilde{x}+1/n)-\tilde{U}_{t}^{1/n}(\tilde{\omega},\tilde{x}+1/n)\right| \le \epsilon/2. \end{align*} Indeed, note first that by \cref{rem:bounded} and the same contradiction argument as in the proof of \cref{prop:pathwiseex} choosing $\tilde{\delta}>0$ small enough we can assume that for any superhedging strategy $(H,c)\in \mathcal{A}_{t,\pi_{t}(\xi)(\tilde{\omega})}(\tilde{\omega})$ we have $|(H,c)| \le \tilde{C}$ for some $\tilde{C}>0$ independent of $\tilde{\omega}$. Furthermore we can choose $\tilde{\delta}>0$ such that $|\pi_t(\xi)(\omega)-\pi_t(\xi)(\tilde{\omega})|\le 1/n$.\\ Next we make the following observation: As ${\cal P}^u_t(\omega)$ is weakly compact by assumption, there exists a compact set $[0,K]^d \subseteq \Omega$, such that $P(([0,K]^d)^c) \le \epsilon/(48C)$ for all $P \in {\cal P}^u_t(\omega)$. By the induction hypothesis $(v,y)\mapsto \tilde{U}^{1/n}_{t+1}(v,y+1/n)$ is continuous at every point of the set $\{(v,y)\in \Omega^{t+1}\times \R \ | \ y\ge \pi_{t+1}(\xi)(v)\}$ and thus uniformly continuous on a compact subset. There exists $1/n>\delta_0>0$ such that for $v, \tilde{v}\in B_{1}(\omega)\times \{u\in \Omega \ | \ \inf_{\tilde{u}\in [0,K]^d}|u-\tilde{u}|\le \delta_0\}$, $y\in [\pi_{t+1}(\xi)(v), 2CK]$ and $|(v,y) -(\tilde{v},\tilde{y})|\le \delta_0$ we have \begin{align}\label{eq. estimate2} \left|\tilde{U}^{1/n}_{t+1}(v,y+1/n)-\tilde{U}^{1/n}_{t+1}(\tilde{v},\tilde{y}+1/n)\right|\le \epsilon/24. \end{align} By \cref{Ass 3}.(1) and by adapting $\tilde{\delta}$ accordingly, for all $\tilde{\omega}\in \Omega^{t}$ such that $|\omega-\tilde{\omega}|<\tilde{\delta}$ and for all $P \in {\cal P}^u_{t}(\omega)$, there exists $\tilde{P} \in {\cal P}^u_{t}(\tilde{\omega})$ such that $d_L(P, \tilde{P}) \le \tilde{\epsilon}:=\delta_0/(2\tilde{C}) \wedge \epsilon/(48C)$. It follows by Strassen's theorem that there exists a measure $\pi \in \mathfrak{P}(\R^d \times \R^d)$ and two random variables $X\sim P\circ ( S_{t+1}(\tilde{\omega},\cdot))^{-1}$ and $\tilde{X}\sim \tilde{P}\circ ( S_{t+1}(\tilde{\omega},\cdot))^{-1}$ such that $\pi(|X-\tilde{X}|\ge \tilde{\epsilon}) \le \tilde{\epsilon}$. Thus we conclude that for $y,\tilde{y}:\Omega\to \R$ with $|y(x) -\tilde{y}(\tilde{x})|\le \delta_0$ whenever $\pi_{t+1}(\tilde{\omega})\le \tilde{y}(\tilde{x})\le 2CK$ and $|x-\tilde{x}|\le \tilde{\epsilon}$ \begin{align}\label{eq.
estimate} &\left|{\rm I\kern-2pt E}_{P}\left[\tilde{U}^{1/n}_{t+1}((\tilde{\omega}, \cdot),1/n+y(\cdot))\right]- {\rm I\kern-2pt E}_{\tilde{P}}\left[\tilde{U}^{1/n}_{t+1}((\tilde{\omega}, \cdot),1/n+\tilde{y}(\cdot))\right]\right| \\ \ =\ &\left| {\rm I\kern-2pt E}_{\pi}\left[\tilde{U}^{1/n}_{t+1}((\tilde{\omega}, X),1/n+y(X))-\tilde{U}_{t+1}^{1/n}((\tilde{\omega}, \tilde{X}),1/n+\tilde{y}(\tilde{X}))\right]\right|\nonumber\\ \ \le&\ \ {\rm I\kern-2pt E}_{\pi}\bigg[\bigg|\tilde{U}^{1/n}_{t+1}((\tilde{\omega}, X),1/n+y(X))-\tilde{U}^{1/n}_{t+1}((\tilde{\omega}, \tilde{X}),1/n+\tilde{y}(\tilde{X}))\bigg| \mathds{1}_{\{X\in [0,K]^d, \ |X-\tilde{X}|\le \tilde{\epsilon}\}}\bigg]+\frac{C\epsilon}{12C}\nonumber\\ \ \le&\ \ \epsilon/12+\epsilon/12=\epsilon/6.\nonumber \end{align} Now we modify $\tilde{\delta}>0$ such that $|\pi_t(\xi)(\omega)-\pi_t(\xi)(\tilde{\omega})|\le \delta_0$ if $|\omega-\tilde{\omega}|\le \tilde{\delta}$. Furthermore applying \cref{Thm Nutz1} for the function $(\omega,x+1/n)\mapsto \tilde{U}^{1/n}_{t}(\omega,x+1/n)$ there exists a maximizer $(H',c') \in \mathcal{A}_{t,\tilde{x}+1/n}(\tilde{\omega})$ of $$ \sup_{(H,c) \in \mathcal{A}_{t,\tilde{x}+1/n}(\tilde{\omega})} \inf_{P \in {\cal P}^u_{t}(\tilde{\omega})} {\rm I\kern-2pt E}_{P}[\tilde{U}^{1/n}_{t+1}((\tilde{\omega}, \cdot), \tilde{x}+1/n+H\Delta S_{t+1}(\tilde{\omega}, \cdot)-c)]+ U(t,c)$$ and a strategy $(H,c'-\beta) \in \mathcal{A}_{t,\tilde{x}+1/n}(\omega)$, where $\beta:=c'\wedge|\pi_t(\xi)(\omega)-\pi_t(\xi)(\tilde{\omega})|\le \delta_0/2$. Furthermore there exists $P \in {\cal P}_{t}^u(\omega)$ such that $$\tilde{U}^{1/n}_{t}(\omega,\tilde{x}+1/n) \ge {\rm I\kern-2pt E}_{P}\left[\tilde{U}^{1/n}_{t+1}((\omega, \cdot),\tilde{x}+2/n+H\Delta S_{t+1}(\omega, \cdot)-c'+\beta)\right]+U(t,c'-\beta)-\epsilon/6.$$ Note that we can modify $\tilde{\delta}>0$ such that $|(\omega,HS_t(\omega))-(\tilde{\omega}, HS_t(\tilde{\omega}))|\le (\tilde{C}+2)\tilde{\delta}\le \delta_0/2$. Now by \eqref{eq. estimate2} with $y(\cdot)=\tilde{x}+1/n+H\Delta S_{t+1}(\omega,\cdot)-c'+\beta$ and $\tilde{y}(\cdot)=\tilde{x}+1/n+H\Delta S_{t+1}(\tilde{\omega},\cdot)-c'$ \begin{align*} &{\rm I\kern-2pt E}_{P}[\tilde{U}^{1/n}_{t+1}((\omega, \cdot),\tilde{x}+2/n+H\Delta S_{t+1}(\omega, \cdot)-c'+\beta)]+U(t,c'-\beta)-\epsilon/6\\ \ge\ &{\rm I\kern-2pt E}_{P}[\tilde{U}^{1/n}_{t+1}((\tilde{\omega}, \cdot),\tilde{x}+2/n+H\Delta S_{t+1}(\tilde{\omega}, \cdot)-c')] +U(t,c')-\epsilon/3 \end{align*} follows and by \eqref{eq. estimate} with $y(\cdot)=\tilde{x}+1/n+H\Delta S_{t+1}(\tilde{\omega},\cdot)-c'$, $\tilde{y}(\cdot)=\tilde{x}+1/n+H'\Delta S_{t+1}(\tilde{\omega},\cdot)-c'$ and noting that $|H-H'|\le 2\tilde{C}$ \begin{align*} &{\rm I\kern-2pt E}_{P}\left[\tilde{U}^{1/n}_{t+1}((\tilde{\omega}, \cdot),\tilde{x}+2/n+H\Delta S_{t+1}(\tilde{\omega}, \cdot)-c')\right] +U(t,c')-\epsilon/3 \\ \ge\ &{\rm I\kern-2pt E}_{\tilde{P}}\left[\tilde{U}^{1/n}_{t+1}((\tilde{\omega}, \cdot),\tilde{x}+2/n+H'\Delta S_{t+1}(\tilde{\omega}, \cdot)-c')\right]+U(t,c')- \epsilon/2 \\ \ge\ & \tilde{U}^{1/n}_{t}(\tilde{\omega},\tilde{x})- \epsilon/2. \end{align*} Exchanging the roles of $\omega$ and $\tilde{\omega}$ concludes the proof of the induction step.
\\ This shows in particular continuity of $\omega' \mapsto \tilde{U}^{1/n}_{t+1}((\omega, \omega'), x+1/n+\tilde{H} \Delta S_{t+1}(\omega, \omega')-\tilde{c})$ as $\omega' \mapsto x+\tilde{H} \Delta S_{t+1}(\omega, \omega')-\tilde{c}$ is continuous. As this function is also ${\cal P}^u_t(\omega)$-q.s. bounded by \cref{lem usa} (recall that $(\tilde{H},\tilde{c})\in \mathcal{A}_{t,x}(\omega)$), we conclude by use of the Portmanteau theorem that \begin{align*} \tilde{U}_{t}(\omega,x)&=\inf_{n\in \N}\tilde{U}_t^{1/n}(\omega,x)\\ &=\inf_{n\in \N}\liminf_{k \to \infty} {\rm I\kern-2pt E}_{P^{n_k(\omega)}_{t}(\omega)} [\tilde{U}_{t+1}^{1/n}((\omega, \cdot),x+1/n+\tilde{H} \Delta S_{t+1}(\omega,\cdot)-\tilde{c})]+U(t, \tilde{c})\\ & \ge \inf_{n\in \N}{\rm I\kern-2pt E}_{\hat{P}_{t}(\omega)} [\tilde{U}^{1/n}_{t+1}((\omega, \cdot), x+1/n+\tilde{H} \Delta S_{t+1}(\omega,\cdot)-\tilde{c})]+U(t, \tilde{c})\\ &= {\rm I\kern-2pt E}_{\hat{P}_{t}(\omega)} [\tilde{U}_{t+1}((\omega, \cdot), x+\tilde{H} \Delta S_{t+1}(\omega,\cdot)-\tilde{c})]+U(t, \tilde{c})\\ &\ge \inf_{P \in {\cal P}^u_{t}(\omega)} {\rm I\kern-2pt E}_{P} [\tilde{U}_{t+1}((\omega, \cdot),x+\tilde{H} \Delta S_{t+1}(\omega,\cdot)-\tilde{c})]+U(t, \tilde{c}), \end{align*} which yields for $x\ge \pi_t(\omega)$ \begin{align*} \tilde{U}_{t}(\omega,x) = {\rm I\kern-2pt E}_{\hat{P}_{t}(\omega)} [\tilde{U}_{t+1}((\omega,\cdot),x+\tilde{H} \Delta S_{t+1}(\omega,\cdot)-\tilde{c})]+U(t, \tilde{c}). \end{align*} In particular for $i=1,2$ \begin{align*} &{\rm I\kern-2pt E}_{\hat{P}_{t}(\omega)}\left[\tilde{U}_{t+1}\left((\omega, \cdot),x+H^3\Delta S_{t+1}(\omega,\cdot)-c^3\right)\right]+U(t,c^3)\\ = \ &{\rm I\kern-2pt E}_{\hat{P}_{t}(\omega)}\left[\tilde{U}_{t+1}\left((\omega, \cdot),x+H^i\Delta S_{t+1}(\omega,\cdot)-c^i\right)\right]+U(t,c^i). \end{align*} Now since \begin{align*} (H,c) \mapsto {\rm I\kern-2pt E}_{\hat{P}_{t}(\omega)} [\tilde{U}_{t+1}((\omega,\cdot),x+H \Delta S_{t+1}(\omega,\cdot)-c)]+U(t, c) \end{align*} is concave and strictly concave in $c$, we must have $c^1=c^2$ and \begin{align*} H^1 \Delta S_{t+1}(\omega,\cdot)&=H^2\Delta S_{t+1}(\omega, \cdot) \quad \hat{P}_{t}(\omega)\text{-a.s.} \end{align*} Lastly, denote by $\Xi_t$ the correspondence \begin{align*} \Xi_t(\omega)=\left\{ P \in \mathcal{P}^u_t(\omega) \ \bigg| \ \tilde{U}_t(\omega,x)= \sup_{(H,c) \in \mathcal{A}_{t, x}(\omega)} {\rm I\kern-2pt E}_{P} [\tilde{U}_{t+1}((\omega, \cdot), x+H \Delta S_{t+1}(\omega,\cdot)-c)]+U(t, c) \right\} \end{align*} for $x\ge \pi_t(\omega)$ and note that by measurable selection arguments as in \cite[proof of Lemma 4.10, p.848]{BN} the set \begin{align*} \left\{(\omega, P) \in \text{graph}({\cal P}^u_t) \ \bigg| \ \sup_{(H,c) \in \mathcal{A}_{t, x}(\omega)} {\rm I\kern-2pt E}_{P} [\tilde{U}_{t+1}((\omega, \cdot), x+H \Delta S_{t+1}(\omega,\cdot)-c)]+U(t, c) - \tilde{U}_t(\omega,x) \le 0 \right\} \end{align*} is an element of $\textbf{A}(\mathcal{F}^{\mathcal{U}}_t\otimes \mathcal{B}(\mathfrak{P}(\Omega)))$, where $\textbf{A}(\mathcal{F}^{\mathcal{U}}_t\otimes \mathcal{B}(\mathfrak{P}(\Omega)))$ is the set of all nuclei of Suslin schemes on $\mathcal{F}^{\mathcal{U}}_t\otimes \mathcal{B}(\mathfrak{P}(\Omega))$. In consequence there exists an $\mathcal{F}^{\mathcal{U}}_t$-measurable function $\hat{P}_t: \Omega^t \to \mathfrak{P}(\Omega)$ such that $\text{graph}(\hat{P}_t)\subseteq \text{graph}(\Xi_t)$. This concludes the proof. \end{proof} \begin{remark} If we assume that $H^1-H^2 \in \text{span}_{\hat{P}_{t}(\omega)}(\Delta S_{t+1}(\omega,\cdot))$, then $H^1=H^2$.
\end{remark} \end{appendices} \end{document}
\begin{document} \title{Impossibility of creating a superposition of unknown quantum states } \author{Somshubhro Bandyopadhyay } \email{[email protected]; [email protected]} \affiliation{Department of Physics, Bose Institute, Unified Academic Campus, EN 80, Sector V, Bidhannagar, Kolkata 700091, India} \begin{abstract} The superposition principle is fundamental to quantum theory. Yet a recent no-go theorem has proved that quantum theory forbids superposition of unknown quantum states, even with nonzero probability. The implications of this result, however, remain poorly understood so far. In this paper we show that the existence of a protocol that superposes two unknown pure states with nonzero probability (allowed to vary over input states) leads to the violation of other no-go theorems. In particular, such a protocol can be used to perform certain state discrimination and cloning tasks that are forbidden not only in quantum theory but in no-signaling theories as well. \end{abstract} \maketitle In quantum theory, a state of a physical system is a vector $\left|\psi\right\rangle $ of unit norm in a Hilbert space $\mathcal{H}$. Furthermore, $\left|\psi\right\rangle $ and $e^{i\vartheta}\left|\psi\right\rangle $ describe the same physical state of the system, where $\left|e^{i\vartheta}\right|=1$; thus, a global phase is inconsequential. The ``superposition principle'' states that for any two vectors $\left|\psi\right\rangle ,\left|\phi\right\rangle \in\mathcal{H}$ and nonzero complex numbers $\gamma,\delta$ satisfying $\left|\gamma\right|^{2}+\left|\delta\right|^{2}=1$, the linear superposition $\gamma\left|\psi\right\rangle +\delta\left|\phi\right\rangle \in\mathcal{H}$ is also a state of the system under consideration. However, unlike a global phase, the relative phase in a superposition is physically significant, i.e. $\gamma\left|\psi\right\rangle +\delta\left|\phi\right\rangle $ and $\gamma\left|\psi\right\rangle +\delta e^{i\vartheta}\left|\phi\right\rangle $ represent two different states of the same physical system. The superposition principle is fundamental to quantum theory. In fact, almost all nonclassical properties exhibited by quantum systems, e.g., nonorthogonality of quantum states \citep{quantum}, quantum interference \citep{quantum,Zeilinger-1999}, quantum entanglement \citep{quantum,entanglement}, and quantum coherence \citep{coherence} are consequences of quantum superpositions. Recently a basic question, closely related to quantum superposition, was considered \citep{no-superposition}: Does there exist a quantum operation that would superpose two unknown pure quantum states with some complex weights? The question is of particular interest because quantum theory is known to forbid physical realizations of certain operations, even plausible ones \citep{no-cloning-WZ,no-cloning-2,no-broadcasting,no-disentangling,no-deleting,Pati-2002,no-local-broadcasting,no-coherence,Araujo+-2014,Thompson+2018}, and therefore, it is important to understand whether similar restrictions are also in place on something as basic as the creation of quantum superpositions. Besides, exploring such questions often reveals new ways of manipulating quantum systems that have found useful applications in quantum information and computation. 
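To make the physical significance of the relative phase concrete, it may help to keep an elementary single-qubit illustration in mind (the particular vectors below are chosen only for illustration). For $\left|\psi\right\rangle =\left|0\right\rangle $, the vectors $\left|0\right\rangle $ and $e^{i\vartheta}\left|0\right\rangle $ are described by the same density matrix $\left|0\right\rangle \left\langle 0\right|$ and are therefore physically indistinguishable. In contrast, taking $\left|\phi\right\rangle =\left|1\right\rangle $ and $\gamma=\delta=1/\sqrt{2}$, the superpositions \[ \frac{1}{\sqrt{2}}\left(\left|0\right\rangle +\left|1\right\rangle \right)\quad\text{and}\quad\frac{1}{\sqrt{2}}\left(\left|0\right\rangle +e^{i\pi}\left|1\right\rangle \right)=\frac{1}{\sqrt{2}}\left(\left|0\right\rangle -\left|1\right\rangle \right) \] are mutually orthogonal and hence perfectly distinguishable, even though they are built from inputs described by identical density matrices. It is precisely this tension between the irrelevance of a global phase and the relevance of a relative phase that makes the creation of superpositions of unknown states a delicate question.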
Before we proceed, we note here that a special case of the above question was initially posed in Ref.$\,$\citep{no-adder} where the authors asked about the existence of a quantum adder, a unitary operator that would add two unknown pure quantum states, and proved that such a unitary operator cannot exist. The proof followed from the observation that an unobservable global phase associated with the input state can distribute itself in infinitely many ways in a superposition, thereby leading to infinitely many superpositions with observable relative phases, which is unphysical. Let us now consider the general formulation of the question on the existence of quantum superposers \citep{no-superposition}: For given nonzero complex numbers $\alpha,\beta$ satisfying $\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1$ and any given pair of vectors $\left|\psi\right\rangle ,\left|\phi\right\rangle \in\mathcal{H}$, does there exist a quantum protocol that prepares the superposed state $\left|\Psi\right\rangle \propto\alpha\left|\psi\right\rangle +\beta\left|\phi\right\rangle $? The ambiguity of the relative phase, which ruled out the existence of a quantum adder, however, is also present in this general formulation. Let $\rho_{\chi}=\left|\chi\right\rangle \left\langle \chi\right|$ denote the density matrix for any normalized vector $\left|\chi\right\rangle $. Then it is easy to see that $\rho_{\Psi}$ cannot be a well-defined function of $\rho_{\psi}$ and $\rho_{\phi}$ for the simple reason that the density matrix $\rho_{\chi}$ corresponds not only to $\left|\chi\right\rangle $ but also to any other normalized vector $\left|\chi^{\prime}\right\rangle =e^{i\theta}\left|\chi\right\rangle $. The authors \citep{no-superposition} therefore relaxed the definition of superposing such that there's no phase ambiguity. Specifically, for any pair of vectors $\left|\psi\right\rangle ,\left|\phi\right\rangle \in\mathcal{H}$ they allowed for complex superpositions of any two vectors with density matrices $\rho_{\psi}$ and $\rho_{\phi}$. With this, the question becomes well defined. Then a superposition protocol, if one such exists, could be realized by application of a quantum channel on the input systems and then tracing out one of them. The authors also allowed post-selection, which entails the possibility of obtaining the desired output with some nonzero probability. In other words, the most general class of quantum operations, described in terms of trace-nonincreasing completely positive (CP) maps, was considered in Ref.$\,$\citep{no-superposition}. The answer, however, turned out to be no. \begin{thm} \citep{no-superposition} Let $\alpha,\beta$ be any two nonzero complex numbers satisfying $\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1$. Let $\mathcal{H}$ be a Hilbert space, where $\dim\mathcal{H}\geq2$. Then there does not exist a nonzero CP map $\Lambda_{\alpha,\beta}:\mathcal{H}^{\otimes2}\rightarrow\mathcal{H}$ such that for all pure states $\rho_{\psi},\rho_{\phi}\in\mathcal{H}$, $\Lambda_{\alpha,\beta}\left(\rho_{\psi}\otimes\rho_{\phi}\right)\propto\left|\Psi\right\rangle \left\langle \Psi\right|,$ where $\left|\Psi\right\rangle =\alpha\left|\psi\right\rangle +\beta\left|\phi\right\rangle $ and the states appearing in the superposition may in general depend on both $\rho_{\psi}$ and $\rho_{\phi}$. \end{thm} Noting that the superposition in general may depend on both $\rho_{\psi}$ and $\rho_{\phi}$ and a global phase is not of any consequence, one has the following corollary. 
\begin{cor} Let $\alpha,\beta$ be any two nonzero complex numbers satisfying $\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1$. Let $\mathcal{H}$ be a Hilbert space, where $\dim\mathcal{H}\geq2$. Then there does not exist a nonzero CP map $\Lambda_{\alpha,\beta}:\mathcal{H}^{\otimes2}\rightarrow\mathcal{H}$ such that for all pure states $\left|\psi\right\rangle ,\left|\phi\right\rangle \in\mathcal{H}$, $\Lambda_{\alpha,\beta}\left(\rho_{\psi}\otimes\rho_{\phi}\right)\propto\left|\Psi\right\rangle \left\langle \Psi\right|,$ where $\left|\Psi\right\rangle =\alpha\left|\psi\right\rangle +\beta e^{i\theta}\left|\phi\right\rangle $ for some phase $\theta\in\left[0,2\pi\right)$ that may in general depend on the input states. \end{cor} The above result, known as the no-superposition theorem, forbids the existence of a universal probabilistic quantum superposer: a quantum operation that would superpose two unknown pure quantum states with nonzero probability. The result forms yet another no-go theorem in quantum theory. Now the no-go theorems \citep{no-cloning-WZ,no-cloning-2,no-broadcasting,no-disentangling,no-deleting,Pati-2002,no-local-broadcasting,no-coherence,Araujo+-2014,Thompson+2018} in quantum theory are of particular significance because they tell us which operations are physically allowed and which are not. For example, the no-cloning theorem \citep{no-cloning-WZ} states that it is impossible to make exact copies of an unknown quantum state. But at a more fundamental level, the no-go theorems can have deep implications. For example, in a world without the no-cloning theorem, it is possible to send signals faster than light \citep{clong-signaling,Hardy-Song} and reliably distinguish nonorthogonal states, both of which would lead to a complete breakdown of our existing physical theories. So while the no-go theorems are fairly easy to understand, their implications can be far reaching, but often not immediate. The implications of the no-superposition theorem, however, remain poorly understood so far. Neither do we know of any relation with any other existing no-go result, nor do we know of the consequences, if any, should it be violated. Although follow-up papers have come up with interesting results \citep{Dogra+-2018,Doosti+-2017,Sami+-2016} and variants \citep{Luo+-2017}, none could account for the most basic questions: Why is it not possible to superpose unknown quantum states, even with a nonzero probability? And what would be the consequences if we could? In this paper, we will show that the existence of universal probabilistic quantum superposers implies the existence of protocols that can perform certain quantum state discrimination and cloning tasks forbidden not only in quantum theory, but also in no-signaling theories. So indeed, there will be unphysical consequences should such quantum superposers exist. We begin by assuming that universal probabilistic quantum superposers exist. 
That is, \begin{assumption*} For every pair of nonzero complex numbers $\alpha,\beta$ satisfying $\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1$, there exists a universal probabilistic quantum superposer $\mathcal{Q}_{\alpha,\beta}$ that for any two pure quantum states $\left|\psi\right\rangle ,\left|\phi\right\rangle $ prepares, with probability $p_{\psi,\phi}^{\alpha,\beta}>0$, a superposition state $\left|\Psi\right\rangle \propto\alpha\left|\psi\right\rangle +\beta e^{i\theta}\left|\phi\right\rangle $ for some phase $\theta\in\left[0,2\pi\right)$, where $\theta$ may in general depend on the input states. \end{assumption*} Thus $\mathcal{Q}_{\alpha,\beta}$ is a two-input, single-output, quantum black box that takes a pair of pure quantum states as input and generates their linear superposition as output with some nonzero probability which is allowed to vary over input states (for the sake of full generality). The basic idea is to show that the existence of $\mathcal{Q}_{\alpha,\beta}$ implies violation of the following theorems: Let $S_{\psi}=\left\{ \left|\psi_{1}\right\rangle ,\left|\psi_{2}\right\rangle ,\dots,\left|\psi_{n}\right\rangle \right\} $ be a set of pure states such that $0\leq\left|\left\langle \psi_{i}\vert\psi_{j}\right\rangle \right|<1$ for $i\neq j$. Then, \begin{itemize} \item the states can be unambiguously distinguished (i.e., every state in the set can be correctly identified with a nonzero probability) if and only if they are linearly independent \citep{Chefles-unambiguous}. \item the states can be probabilistically cloned if and only if they are linearly independent \citep{prob-clone}. \end{itemize} Note that the above two statements are equivalent in the following sense: For any given set of states, if unambiguous discrimination is possible, then probabilistic cloning is possible as well and vice versa. Further note that the constraint on probabilistic cloning of states follows from the condition of no faster-than-light signaling \citep{Hardy-Song}. So quantum theory, and independently, the no-signaling condition, forbid unambiguous discrimination and probabilistic cloning of linearly dependent pure states. This in turn implies the following: \begin{itemize} \item Let $S_{\psi}=\left\{ \left|\psi_{1}\right\rangle ,\left|\psi_{2}\right\rangle ,\dots,\left|\psi_{n}\right\rangle \right\} $ be a set of linearly dependent pure states. Then, there does not exist a quantum protocol that achieves the state transformation $\left|\psi_{i}\right\rangle \rightarrow\left|\Psi_{i}\right\rangle $ for every $i$ with a nonzero probability such that the states $\left|\Psi_{1}\right\rangle ,\left|\Psi_{2}\right\rangle ,\dots,\left|\Psi_{n}\right\rangle $ are linearly independent. \end{itemize} The proof is simple. Suppose that a quantum system is prepared in a state chosen from the known set $S_{\psi}$ but we do not know which state. Now as the states $\left|\psi_{i}\right\rangle $ are linearly dependent, quantum theory will not allow us to correctly identify the state of the system, or make copies of it, even with nonzero probability. But it is easy to see that both the tasks become possible if there exists a protocol that achieves the transformation $\left|\psi_{i}\right\rangle \rightarrow\left|\Psi_{i}\right\rangle $ with nonzero probability for every $i$, where the states $\left|\Psi_{1}\right\rangle ,\left|\Psi_{2}\right\rangle ,\dots,\left|\Psi_{n}\right\rangle $ are linearly independent. Therefore such a protocol can not exist. 
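To make the first of these statements concrete, the following is a small numerical sketch (written in Python for this discussion; the specific states, the scaling of the POVM elements, and the variable names are illustrative choices and are not taken from \citep{Chefles-unambiguous} or \citep{prob-clone}). Given a linearly independent set, the reciprocal (dual) basis yields a POVM that never misidentifies a state and identifies each state with a nonzero probability.
\begin{verbatim}
import numpy as np

# Columns of S are linearly independent states |psi_i> in C^d (example choice).
d = 3
S = np.array([[1, 0, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=complex)
S = S / np.linalg.norm(S, axis=0)                  # normalize each column

# Dual (reciprocal) vectors: columns of D satisfy <d_i|psi_j> = delta_ij.
D = S @ np.linalg.inv(S.conj().T @ S)

# POVM: E_i = c |d_i><d_i| with c chosen so that sum_i E_i <= I;
# E_fail = I - sum_i E_i is the inconclusive outcome.
G = sum(np.outer(D[:, i], D[:, i].conj()) for i in range(d))
c = 1.0 / np.max(np.linalg.eigvalsh(G))
E = [c * np.outer(D[:, i], D[:, i].conj()) for i in range(d)]
E_fail = np.eye(d) - sum(E)

# Outcome i fires only on input |psi_i>, each time with probability c > 0.
probs = [[np.trace(E[i] @ np.outer(S[:, j], S[:, j].conj())).real
          for j in range(d)] for i in range(d)]
print(np.round(probs, 6))                           # diagonal, with c on the diagonal
print(np.all(np.linalg.eigvalsh(E_fail) > -1e-12))  # inconclusive element is PSD
\end{verbatim}
For a linearly dependent set the Gram matrix $S^{\dagger}S$ is singular and the dual basis does not exist, which is the numerical counterpart of the statement above.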
Let us now consider a $\mathcal{Q}_{\alpha,\beta}$-based state transformation protocol that works as follows. We feed our quantum machine $\mathcal{Q}_{\alpha,\beta}$ with two input states: The first is chosen from a known set $S_{\psi}=\left\{ \left|\psi_{1}\right\rangle ,\left|\psi_{2}\right\rangle ,\dots,\left|\psi_{n}\right\rangle \right\} $ of pure states and the second is some pure state $\left|\phi\right\rangle $. In this way it is possible to prepare an ensemble $S_{\Psi}=\left\{ \left|\Psi_{1}\right\rangle ,\left|\Psi_{2}\right\rangle ,\dots,\left|\Psi_{n}\right\rangle \right\} $ of output states, where each state $\left|\Psi_{j}\right\rangle \propto\alpha\left|\psi_{j}\right\rangle +\beta e^{i\theta_{j}}\left|\phi\right\rangle $ is generated with some nonzero probability $p_{\psi_{j},\phi}^{\alpha,\beta}$. Thus we have a protocol that transforms $\left|\psi_{j}\right\rangle \rightarrow\left|\Psi_{j}\right\rangle $ with a nonzero probability for every $j$. For a given $\mathcal{Q}_{\alpha,\beta}$ and $S_{\psi}$, observe that the output states will be different for different choices of $\left|\phi\right\rangle $. To make this explicit, denote the set of output states by $S_{\Psi}\left(\phi\right)$. So if the states $\left|\psi_{1}\right\rangle ,\left|\psi_{2}\right\rangle ,\dots,\left|\psi_{n}\right\rangle $ are linearly dependent, then for every $\left|\phi\right\rangle $ the states in $S_{\Psi}\left(\phi\right)$ must also be linearly dependent because otherwise the protocol would be unphysical. We now give a simple proof that the protocol, in fact, is unphysical for all $\mathcal{Q}_{\alpha,\beta}$. Consider a set of three linearly dependent pure states that belong to a $d$-dimensional Hilbert space, where $d\geq3$. The states are given by \begin{eqnarray} \left|\psi_{1}\right\rangle & = & \left|\psi\right\rangle ,\nonumber \\ \left|\psi_{2}\right\rangle & = & \left|\psi^{\perp}\right\rangle ,\label{psi}\\ \left|\psi_{3}\right\rangle & = & a\left|\psi\right\rangle +b\left|\psi^{\perp}\right\rangle ,\nonumber \end{eqnarray} where $a,b\neq0$, $a,b\in\mathbb{R}$, $a^{2}+b^{2}=1$. By construction, the states are linearly dependent as they live in the two-dimensional subspace spanned by $\left\{ \left|\psi\right\rangle ,\left|\psi^{\perp}\right\rangle \right\} $. Following the protocol, we feed $\mathcal{Q}_{\alpha,\beta}$ with two input states, where the first input is chosen from $\left\{ \left|\psi_{1}\right\rangle ,\left|\psi_{2}\right\rangle ,\left|\psi_{3}\right\rangle \right\} $ as given above, and the second input state is taken to be a pure state $\left|\phi\right\rangle $ which is orthogonal to both $\left|\psi\right\rangle $ and $\left|\psi^{\perp}\right\rangle $.
Then the possible output states, each of which is generated with some nonzero probability, are given by \begin{eqnarray} \left|\Psi_{1}\right\rangle & = & \alpha\left|\psi_{1}\right\rangle +\beta e^{i\theta_{1}}\left|\phi\right\rangle ,\nonumber \\ \left|\Psi_{2}\right\rangle & = & \alpha\left|\psi_{2}\right\rangle +\beta e^{i\theta_{2}}\left|\phi\right\rangle ,\label{Psi}\\ \left|\Psi_{3}\right\rangle & = & \alpha\left|\psi_{3}\right\rangle +\beta e^{i\theta_{3}}\left|\phi\right\rangle .\nonumber \end{eqnarray} We will show that the states $\left|\Psi_{j}\right\rangle $ for $j=1,2,3$ are linearly independent; that is, the equation \begin{eqnarray} x_{1}\left|\Psi_{1}\right\rangle +x_{2}\left|\Psi_{2}\right\rangle +x_{3}\left|\Psi_{3}\right\rangle & = & 0\label{LI-condition} \end{eqnarray} holds if and only if $x_{1}=x_{2}=x_{3}=0$. The \emph{if} part is trivial. So let us now consider the \emph{only if} part. First, we write Eq.$\,$(\ref{LI-condition}) as \begin{eqnarray} \alpha\left(x_{1}+ax_{3}\right)\left|\psi\right\rangle +\alpha\left(x_{2}+bx_{3}\right)\left|\psi^{\perp}\right\rangle +\beta\sum_{j=1}^{3}e^{i\theta_{j}}x_{j}\left|\phi\right\rangle & = & 0\nonumber \\ \label{LI-condition-1} \end{eqnarray} As the states $\left|\psi\right\rangle ,\left|\psi^{\perp}\right\rangle ,\left|\phi\right\rangle $ are mutually orthogonal they are linearly independent. Thus the coefficients appearing in the above superposition must vanish, i.e., \begin{eqnarray} \alpha\left(x_{1}+ax_{3}\right) & = & 0,\label{alpha,x,z}\\ \alpha\left(x_{2}+bx_{3}\right) & = & 0,\label{alpha,y,z}\\ \beta\left(e^{i\theta_{1}}x_{1}+e^{i\theta_{2}}x_{2}+e^{i\theta_{3}}x_{3}\right) & = & 0.\label{beta,x,y,z} \end{eqnarray} Since $\alpha,\beta\neq0$, we have \begin{eqnarray} x_{1}+ax_{3} & = & 0,\label{x,z}\\ x_{2}+bx_{3} & = & 0,\label{y,z}\\ e^{i\theta_{1}}x_{1}+e^{i\theta_{2}}x_{2}+e^{i\theta_{3}}x_{3} & = & 0.\label{x,y,z,theta} \end{eqnarray} Let us now find the conditions under which the above three equations are satisfied simultaneously. As $a,b\neq0$, we see that the above three equations are simultaneously satisfied when $x_{i}=0$ for all $i=1,2,3$. We will now show that this is the only solution. Suppose, to the contrary, that not all of the $x_{i}$ vanish. From (\ref{x,z}) and (\ref{y,z}) we get $x_{1}=-ax_{3}$ and $x_{2}=-bx_{3}$, so $x_{3}\neq0$ (otherwise all the $x_{i}$ would vanish). Substituting these in (\ref{x,y,z,theta}) and dividing by $x_{3}$ we obtain \begin{eqnarray} e^{i\theta_{1}}a+e^{i\theta_{2}}b-e^{i\theta_{3}} & = & 0,\label{a,b,thetas} \end{eqnarray} or, equivalently, \begin{eqnarray} a+e^{i\theta_{21}}b & = & e^{i\theta_{31}},\label{a,b,theta21,theta31} \end{eqnarray} where $\theta_{21}=\theta_{2}-\theta_{1}$ and $\theta_{31}=\theta_{3}-\theta_{1}$. Equation (\ref{a,b,theta21,theta31}) implies that \begin{eqnarray} \left|a+b^{\prime}\right| & = & 1,\label{|a+bprime|=00003D1} \end{eqnarray} where $b^{\prime}=e^{i\theta_{21}}b$. Now, recall that $a,b\neq0$, $a,b\in\mathbb{R}$, and $a^{2}+b^{2}=1$. Then \begin{eqnarray} a^{2}+\left|b^{\prime}\right|^{2} & = & 1.\label{a^2+|bprime|^2=00003D1} \end{eqnarray} A simple calculation shows that Eqns.$\,$(\ref{|a+bprime|=00003D1}) and (\ref{a^2+|bprime|^2=00003D1}) are satisfied only when $\theta_{21}=\pi/2,3\pi/2$ (since $a,b\neq0$). Then from (\ref{a,b,theta21,theta31}) it follows that $a=\cos\theta_{31}$ and $b=\pm\sin\theta_{31}$. But the phases that $\mathcal{Q}_{\alpha,\beta}$ associates with the superposition cannot have any dependence on $a$ and $b$!
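As an aside, the fine-tuned nature of this exceptional case is easy to check numerically (the short Python sketch below is an illustration added for this discussion and is not part of the original argument). In the orthonormal basis $\left\{ \left|\psi\right\rangle ,\left|\psi^{\perp}\right\rangle ,\left|\phi\right\rangle \right\} $, the coefficient matrix of the states in Eq.$\,$(\ref{Psi}) has determinant $\alpha^{2}\beta\left(e^{i\theta_{3}}-ae^{i\theta_{1}}-be^{i\theta_{2}}\right)$, which vanishes exactly when Eq.$\,$(\ref{a,b,thetas}) holds; for generic phases the three output states therefore have rank $3$.
\begin{verbatim}
import numpy as np

alpha = beta = 1/np.sqrt(2)        # any nonzero pair with |alpha|^2 + |beta|^2 = 1
a, b = 0.6, 0.8                    # real, nonzero, a^2 + b^2 = 1

def output_states(th1, th2, th3):
    """Rows: |Psi_1>, |Psi_2>, |Psi_3> in the basis {|psi>, |psi_perp>, |phi>}."""
    return np.array([[alpha,   0.0,     beta*np.exp(1j*th1)],
                     [0.0,     alpha,   beta*np.exp(1j*th2)],
                     [alpha*a, alpha*b, beta*np.exp(1j*th3)]])

# Generic phases: rank 3, i.e. the output states are linearly independent.
th = np.random.default_rng(0).uniform(0, 2*np.pi, 3)
print(np.linalg.matrix_rank(output_states(*th)))       # 3

# Rank drops only for the fine-tuned choice theta_2 - theta_1 = pi/2 (or 3pi/2)
# with a = cos(theta_3 - theta_1) and b = sin(theta_3 - theta_1).
th1 = 0.3
th31 = np.arctan2(b, a)
print(np.linalg.matrix_rank(output_states(th1, th1 + np.pi/2, th1 + th31)))  # 2
\end{verbatim}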
The reason is that $a$ and $b$ are basis-dependent coefficients and $\left|\psi_{3}\right\rangle $ has infinitely many such representations. In other words, while $\theta_{3}$ may depend on $\left|\psi_{3}\right\rangle $ and $\left|\phi\right\rangle $, it cannot depend on the basis representation of $\left|\psi_{3}\right\rangle $. So the solutions are not feasible. Therefore, Eqns.$\,$(\ref{x,z}), (\ref{y,z}), and (\ref{x,y,z,theta}) can only be simultaneously satisfied when $x_{i}=0$, $i=1,2,3.$ Consequently, the only solution to (\ref{LI-condition}) is $x_{1}=x_{2}=x_{3}=0$. Thus the states $\left|\Psi_{1}\right\rangle ,\left|\Psi_{2}\right\rangle ,\left|\Psi_{3}\right\rangle $ are linearly independent. Note that the analysis holds for all $\mathcal{Q}_{\alpha,\beta}$. This completes the proof. Thus we have shown that the existence of $\mathcal{Q}_{\alpha,\beta}$ implies the existence of a protocol that can transform linearly dependent pure states into linearly independent pure states. As explained earlier, such protocols can be used to perform the tasks of unambiguous discrimination and probabilistic cloning of linearly dependent pure states, both of which are forbidden in quantum theory and also violate the no-signaling condition. So the unconditional superposition of unknown quantum states, even probabilistically, gives rise to unphysical consequences. Now, interestingly, there exists a probabilistic quantum protocol to superpose two unknown pure states \citep{no-superposition}. Not surprisingly though, the protocol comes with strings attached -- in particular, each input state must have a fixed overlap with a known reference state and the superposition coefficients are functions of the overlaps as well. It is now clear why these conditions are necessary, for without these constraints probabilistic superposition would not be possible. Our result also sheds light on a recent theorem \citep{Luo+-2017} which states that it is possible to superpose two unknown pure states chosen from a known set if and only if the states are linearly independent. The \emph{if} part here is easy to understand: If the states are linearly independent, then one first identifies the input states (with some nonzero probability) by performing suitable unambiguous state discrimination measurements and then creates the desired superposition. The \emph{only if} part, on the other hand, can be understood by noting that $\mathcal{Q}_{\alpha,\beta}$ can transform linearly dependent pure states into linearly independent ones, which is unphysical. This implies that the states in the given set must all be linearly independent. \emph{Conclusions. }The no-go theorems in quantum theory help us to understand the class of allowed physical operations. But it is also equally important to understand the consequences should the no-go theorems be violated, and the answers must come from physics. Here we showed that the existence of a protocol that superposes unknown pure states, even with nonzero probability, leads to unambiguous discrimination and probabilistic cloning of linearly dependent pure states -- tasks that are forbidden in quantum theory and also in no-signaling theories. One question, in the context of the present result, however, remains open: What kind of unphysical consequences would arise if a universal probabilistic quantum superposer, assuming it exists, is restricted to act only on qubit states? We could not find a satisfactory answer.
Nevertheless, we are hopeful that a satisfactory answer will eventually be found, perhaps by considering a different physical scenario. \begin{acknowledgments} The author is grateful to Guruprasad Kar and Tomasz Paterek for many helpful discussions. \end{acknowledgments} \end{document}
\begin{document} \title[Spectral Tur\'an Problems]{Linear spectral Tur\'an problems for expansions of graphs with given chromatic number} \author[C.-M. She]{Chuan-Ming She} \address{School of Mathematical Sciences, Anhui University, Hefei 230601, P. R. China} \email{[email protected]} \author[Y.-Z. Fan]{Yi-Zheng Fan*} \address{Center for Pure Mathematics, School of Mathematical Sciences, Anhui University, Hefei 230601, P. R. China} \email{[email protected]} \thanks{*The corresponding author. Supported by National Natural Science Foundation of China (Grant No. 11871073).} \author[L. Kang]{Liying Kang$^\dag$} \address{Department of Mathematics, Shanghai University, Shanghai 200444, P. R. China} \email{[email protected]} \thanks{$^\dag$Supported by National Natural Science Foundation of China (Grant No. 11871329).} \author[Y. Hou]{Yaoping Hou$^\ddag$} \address{College of Mathematics and Statistics, Hunan Normal University, Changsha 410081, P. R. China} \email{[email protected]} \thanks{$^\ddag$Supported by National Natural Science Foundation of China (Grant No. 11971164).} \subjclass[2000]{05C35, 05C65} \keywords{Linear hypergraph; extremal problem; adjacency tensor; spectral radius; expansion of graph} \begin{abstract} An $r$-uniform hypergraph is linear if every two edges intersect in at most one vertex. The $r$-expansion $F^{r}$ of a graph $F$ is the $r$-uniform hypergraph obtained from $F$ by enlarging each edge of $F$ with a vertex subset of size $r-2$ disjoint from the vertex set of $F$ such that distinct edges are enlarged by disjoint subsets. Let $\ex_{r}^{\text{lin}}(n,F^{r})$ and $\spex_{r}^{\text{lin}}(n,F^{r})$ be the maximum number of edges and the maximum spectral radius of all $F^{r}$-free linear $r$-uniform hypergraphs with $n$ vertices, respectively. In this paper, we present the sharp (or asymptotic) bounds of $\ex_{r}^{\text{lin}}(n,F^{r})$ and $\spex_{r}^{\text{lin}}(n,F^{r})$ by establishing a connection between the spectral radii of linear hypergraphs and those of their shadow graphs, where $F$ is a $(k+1)$-color critical graph or a graph with chromatic number $k$. \end{abstract} \maketitle \section{Introduction} A \emph{hypergraph} $H=(V(H),E(H))$ consists of a vertex set $V(H)$ and an edge set $E(H)$, where each edge $e \in E(H)$ is a subset of $V(H)$. If each edge $e$ of $H$ is an $r$-element subset of $V(H)$, then $H$ is called \emph{$r$-uniform}. A hypergraph is \emph{linear} if any two edges intersect in at most one vertex. So, a simple graph is a linear $2$-uniform hypergraph. Given a family $\mathcal{F}$ of hypergraphs, we say a hypergraph $G$ is \emph{$\mathcal{F}$-free} if it does not contain a (not necessarily induced) subgraph isomorphic to any graph $F\in \mathcal{F}$. Let $\ex_{r}(n,\mathcal{F})$ and $\spex_{r}(n,\mathcal{F})$ denote the maximum number of edges and the maximum spectral radius of all $\mathcal{F}$-free $r$-uniform hypergraphs on $n$ vertices, respectively. For brevity, we write $\ex_{r}(n,F)$ and $\spex_{r}(n,F)$ instead of $\ex_{r}(n,\{F\})$ and $\spex_{r}(n,\{F\})$ when $\mathcal{F}=\{F\}$. When considering only simple graphs, we abbreviate them as $\ex(n,F)$ and $\spex(n,F)$ respectively. The number $\ex_{r}(n,\mathcal{F})$ is called the \emph{Tur\'an number}, and the corresponding Tur\'an problem of determining the Tur\'an number of graphs or hypergraphs is one of the most fundamental problems in extremal combinatorics \cite{furedi2013history}.
The study of Tur\'an problems dates back at least to 1907~\cite{mantel1907problem}, when Mantel proved that $e(G)\le \left \lfloor n^{2}/4 \right \rfloor $ for every triangle-free graph $G$ on $n$ vertices, where $e(G)$ denotes the number of edges of $G$. Tur\'an~\cite{turaan1941extremal} determined $\ex(n,K_{t+1})$, and proved the well-known Tur\'an theorem: the Tur\'an graph, namely the complete $t$-partite graph on $n$ vertices with the part sizes as equal as possible, is the unique $K_{t+1}$-free graph with the maximum number of edges, where $K_{t+1}$ is a complete graph on $t+1$ vertices. A graph $G$ is \emph{$k$-colorable} if there exists a mapping $\phi: V(G) \to \{ 1,2,\cdots,k\}$ satisfying $\phi (u)\ne \phi(v)$ for any two adjacent vertices $u$ and $v$. The \emph{chromatic number} of $G$, denoted by $\chi(G)$, is defined as the minimum number $k$ such that $G$ is $k$-colorable. A graph $G$ is \emph{$(k+1)$-color critical} if $\chi(G)=k+1$, and there exists an edge $e$ of $G$ such that $\chi(G-e)=k$, where $G-e$ denotes the graph obtained from $G$ by deleting the edge $e$. Using the stability method, Simonovits \cite{simonovits1968method} determined $\ex(n,F)$ when $F$ is a color critical graph. The spectral Tur\'an problem is a spectral version of the Tur\'an problem, namely, determining $\spex_r(n,\mathcal{F})$ or the maximum spectral radius of an $\mathcal{F}$-free graph or hypergraph on $n$ vertices. Brualdi and Solheid~\cite{brualdi1986spectral} proposed the problem of maximizing the spectral radius over a class of graphs with prescribed structural property in 1986. Nikiforov~\cite{nikiforov2007bounds} presented a spectral version of the Tur\'an theorem in 2007, namely, the Tur\'an graph is also the unique $K_{t+1}$-free graph with the maximum spectral radius. Nikiforov \cite{nikiforov2009spectral} also obtained a spectral Erd\"os-Stone-Bollob\'as theorem, which implies the Erd\"os-Stone-Bollob\'as and Erd\"os-Stone-Simonovits theorems. The purpose of this paper is to consider spectral Tur\'an problems for hypergraphs. Let $F$ be a graph. The \emph{$r$-expansion} of a graph $F$ is the $r$-uniform hypergraph $F^{r}$ which is obtained from $F$ by inserting $r-2$ additional vertices into each edge of $F$. Formally, for each edge $e \in E(F)$, we associate it with an $(r-2)$-set $S_e$ with the property: $S_e \cap V(F)=\emptyset$ for any $e \in E(F)$, and $S_e \cap S_f=\emptyset$ for any two distinct edges $e,f \in E(F)$. The vertex set and edge set of $F^r$ are defined as follows: $$ V(F^r)=V(F) \cup (\cup_{e \in E(F)} S_e),~ E(F^r)=\{ e \cup S_e: e \in E(F)\}.$$ Obviously, $F^{r}$ is a linear $r$-uniform hypergraph which has exactly $|E(F)|$ edges and $|V(F)|+(r-2)|E(F)|$ vertices. In 2006, Mubayi~\cite{Mubayi2006A} determined the asymptotic value of $ \ex_{r}(n,K_{t+1}^{r})/\binom{n}{r}$. Combining this result with the supersaturation technique of Erd\'os~\cite{erdos1964extremal}, we will give the asymptotic value of $\ex_{r}(n,F^{r})$ for every $(t+1)$-color critical graph $F$ with $t\ge r$. In 2007, Mubayi and Pikhurko~\cite{mubayi2007new} determined $\ex_{r}(n,\text{Fan}^{r})$ for an $r$-uniform hypergraph $\text{Fan}^{r}$ which is a generalization of the triangle (as a simple graph). When restricting to linear hypergraphs, we denote by $\ex^{\text{lin}}_{r}(n,\mathcal{F})$ and $\spex^{\text{lin}}_{r}(n,\mathcal{F})$ the maximum number of edges and the maximum spectral radius of all $\mathcal{F}$-free linear $r$-uniform hypergraphs on $n$ vertices, respectively.
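Returning to the $r$-expansion defined above, the following short Python sketch (an illustration added for this discussion; the data representation is an arbitrary choice and not part of the cited works) builds $F^{r}$ from an edge list of $F$ and checks that the result is a linear $r$-uniform hypergraph.
\begin{verbatim}
from itertools import combinations

def expansion(edges, n, r):
    """r-expansion F^r of a graph F with vertices {0,...,n-1} and the given edges.

    Each edge {u,v} is enlarged by r-2 fresh vertices, and distinct edges are
    enlarged by disjoint sets, so F^r is a linear r-uniform hypergraph."""
    hyperedges, next_vertex = [], n
    for (u, v) in edges:
        extra = list(range(next_vertex, next_vertex + r - 2))
        next_vertex += r - 2
        hyperedges.append(frozenset([u, v] + extra))
    return hyperedges

def is_linear(hyperedges):
    """True if every two distinct hyperedges intersect in at most one vertex."""
    return all(len(e & f) <= 1 for e, f in combinations(hyperedges, 2))

# Example: the 3-expansion of K_3 has 3 edges and 3 + (3-2)*3 = 6 vertices.
H = expansion([(0, 1), (1, 2), (0, 2)], n=3, r=3)
print(H, is_linear(H))   # three 3-element edges, pairwise sharing one vertex -> True
\end{verbatim}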
In 2020, F\"uredi and Gy\'arf\'as~\cite{furedi2020extension} considered the problem of determing $\ex_{r}^{\text{lin}}(n,\text{Fan}^{r})$. In 2021, Gao and Chang~\cite{gao2021linear} gave an upper bound for $\ex_{r}^{\text{lin}}(n,K_{s,t}^{r})$, where $K_{s,t}$ denotes a complete bipartite graph with two parts of size $s,t$ respectively. For more investigations of the hypergraph Tur\'an problem, we recommend readers the survey of Mubayi~\cite{mubayi2016survey}. The spectral Tur\'an problem of hypergraphs has attracted a great deal of attention recently. In 2014, Keevash, Lenz and Mubayi~\cite{keevash2014spectral} gave two general criteria under which spectral extremal results may be deduced from ‘strong stability’ forms of the corresponding extremal results. In \cite{hou2021spectral}, Hou, Chang and Cooper gave the maximum spectral radius of linear $r$-uniform hypergraphs with forbidden $\text{Fan}^{r}$ under some conditions, and an upper bound for the spectral radius of linear $r $-uniform hypergraphs without Berge $C_4$. Gao, Chang and Hou~\cite{gao2022spectral} determined the value $\spex_{r}^{\text{lin}}(n,K_{r+1}^{r})$ and gave the extremal graph attained at the maximum spectral radius, which is a transversal design $T(n,r)$. In this paper, we mainly consider the linear spectral Tur\'an problems for the expansion of $(k+1)$-color critical graph $F$. As $K_{r+1}$ is a $(r+1)$-color critical graph, our results will generalize the related results in \cite{gao2022spectral}. The main results in this paper are as follows. \begin{thm}\lambdabel{sp-crit} Let $F$ be a $(k+1) $-color critical graph and $ F^{r}$ be the $r $-expansion of $F$, where $k \ge r $. Then, for sufficiently large $n$, \begin{equation} \lambdabel{spcri}\spex_{r}^{\text{lin}}(n,F^{r} ) \le\frac{n(k-1)}{k(r-1)}. \end{equation} If $n=km$ and $m,k,r$ satisfy (\ref{Cond2}) in Theorem \ref{T-Cond2}, the inequality in (\ref{spcri}) can hold as equality for sufficiently large $m$. \end{thm} \begin{thm}\lambdabel{sp-gen} Let $H$ be a connected linear $r$-uniform hypergraph. Then \begin{equation} \lambdabel{spec} \rho (H)\le \frac{n-1}{r-1}, \end{equation} with equality if and only if $H$ is a $2$-$(n,r,1)$ design. If $n,r$ satisfy (\ref{Cond1}) in Theorem \ref{T-Cond1}, a $2$-$(n,r,1)$ design exists for sufficiently large $n$. \end{thm} It is known that for an $r$-uniform hypergraph $H$ with $n$ vertices and $e(H)$ edges, \begin{equation}\lambdabel{sp-ed} \rho(H) \ge \frac{r e(H)}{n}.\end{equation} So we can have the following corollary immediately. \begin{cor} \lambdabel{C-cri} Let $F$ be a $(k+1)$-color critical graph with $k \ge r$. Then, for sufficiently large $n$, \begin{equation}\lambdabel{cri} \ex_{r}^{\text{lin}}(n,F^{r} )\le\frac{n^{2} (k-1)}{kr(r-1)}. \end{equation} If $n=km$ and $m,k,r$ satisfy (\ref{Cond2}) in Theorem \ref{T-Cond2}, the inequality in (\ref{cri}) can hold as equality for sufficiently large $m$ \end{cor} \begin{cor}\lambdabel{C-edge} Let $H$ be a linear $r $-uniform hypergraph. Then \begin{equation}\lambdabel{edge} e(H)\le \frac{n(n-1)}{r(r-1)}, \end{equation} with equality if and only if $H$ is a $2$-$(n,r,1)$ design. If $n,r$ satisfy (\ref{Cond1}) in Theorem \ref{T-Cond1}, a $2$-$(n,r,1)$ design exists for sufficiently large $n$. \end{cor} As an application of the above results, we have the following results. \begin{lem}\lambdabel{N-1} Let $k,l,r$ be positive integers such that $k \ge 3$, $l\ge 2$ and $r \ge 2$, and let $c$ be a positive real number. 
Then there exists an $n_0=n_{0}(c,k,l,r)$ such that if $H$ is a linear $r$-uniform hypergraph on $n \ge n_0$ vertices satisfying \begin{equation} \rho(H)\ge \frac{n}{r-1}\left(1-\frac{1}{k-1}+c\right), \end{equation} then $H$ contains a $K_{k}(l,\ldots,l)^{r}$. \end{lem} \begin{thm}\label{N-2} Let $F$ be a graph with chromatic number $k$, where $k \ge r+1 \ge 3$. Then \begin{equation} \spex_{r}^{\text{lin}}(n,F^{r})=n\left(\frac{1}{r-1}\left(1-\frac{1}{k-1}\right)+o(1)\right). \end{equation} \end{thm} The rest of this paper is organized as follows. In Section 2, we introduce the eigenvalues and eigenvectors of a tensor, the hypergraphs constructed from designs, and some necessary lemmas for the subsequent discussion. We prove the main results in Section 3. \section{Preliminaries} \subsection{Tensors and hypergraphs} For positive integers $n,r$, a complex \emph{tensor} (also called hypermatrix) $\mathcal{A} =( a_{i_{1}i_{2}\dots i_{r}})$ of order $r$ and dimension $n$ refers to a multidimensional array with entries $a_{i_{1}i_{2}\ldots i_{r} }\in \mathbb{C}$ for all $i_{1},i_{2},\dots,i_{r}\in [n]:= \{ 1,2,\ldots,n\}$. A tensor $\mathcal{A}$ is called \emph{symmetric} if its entries are invariant under any permutation of their indices. In 2005, Qi~\cite{qi2005eigenvalues} and Lim~\cite{lim2005singular} independently introduced the eigenvalues of tensors. If there exists a number $\lambda \in \mathbb{C}$ and a nonzero vector $x \in \mathbb{C}^{n}$ such that \begin{equation}\label{ev}\mathcal{A} x^{r-1}=\lambda x^{[r-1]}, \end{equation} then $\lambda$ is called an \emph{eigenvalue} of $\mathcal{A}$, and ${x}$ is called an \emph{eigenvector} of $\mathcal{A}$ corresponding to the eigenvalue $\lambda$, where $\mathcal{A} {x}^{r-1}$ is a vector in $\mathbb{C}^{n}$ whose $i$-th component is given by \[ (\mathcal{A} {x}^{r-1} )_{i} =\sum_{i_{2},\ldots,i_{r}=1 }^{n} a_{ii_{2}\dots i_{r}} x_{i_{2} }\cdots x_{i_{r}}, \quad i\in [n],\] and $x^{[r-1]}:=(x_1^{r-1},\cdots,x_n^{r-1})$. The \emph{spectral radius} of $\mathcal{A}$, denoted by $\rho(\mathcal{A})$, is the maximum modulus of the eigenvalues of $\mathcal{A}$. The Perron-Frobenius theorem for nonnegative matrices was generalized to nonnegative tensors by Chang et al. \cite{chang2008perron}, Friedland et al. \cite{friedland2013perron} and Yang et al. \cite{YY2011-2}. Here we list part of the theorem, where the definition of weak irreducibility of nonnegative tensors can be found in \cite{friedland2013perron}. \begin{thm}[\cite{chang2008perron,friedland2013perron,YY2011-2}]\label{PF} Let $\mathcal{A}$ be a nonnegative tensor of order $r$ and dimension $n$. Then the following statements hold. \begin{itemize} \item[(1)] $\rho(\mathcal{A})$ is an eigenvalue of $\mathcal{A}$ corresponding to a nonnegative eigenvector. \item[(2)] If furthermore $\mathcal{A}$ is weakly irreducible, then $\rho(\mathcal{A})$ is the unique eigenvalue of $\mathcal{A}$ corresponding to a unique positive eigenvector up to a scalar. \end{itemize} \end{thm} Let $H$ be an $r$-uniform hypergraph on $n$ vertices $v_{1},v_{2},\ldots,v_{n}$. In 2012, Cooper and Dutle~\cite{cooper2012spectra} introduced the \emph{adjacency tensor} $\mathcal{A}(H)$ of $H$, which is an order $r$ and dimension $n$ tensor whose $(i_{1},i_{2},\ldots,i_{r})$-entry is given by \[ a_{i_{1}i_{2}\dots i_{r}}=\begin{cases} \frac{1}{(r-1 )!}, & \text{ if } \{ v_{i_{1} },v_{i_{2} },\dots, v_{i_{r} } \}\in E(H ), \\ ~~0, & \text{ otherwise }.
\end{cases} \] Clearly, $\mathcal{A}(H)$ is nonnegative and symmetric, and it is weakly irreducible if and only if $H$ is connected (\cite{friedland2013perron, YY2011-2}). The eigenvalues and eigenvectors of $H$ refer to those of $\mathcal{A}(H)$. In particular, denote by $\rho(H)$ the spectral radius of $H$ (or $\mathcal{A}(H)$). By Theorem \ref{PF}, if $H$ is connected, then there exists a unique positive eigenvector $x$ up to a scalar corresponding to $\rho(H)$, called the \emph{Perron vector} of $H$. In addition, $\rho (H)$ is the optimal value of the following maximization (\cite{qi2005eigenvalues}) \begin{equation}\label{max} \rho (H )=\max_{\|x\|_{r}=1} x^\top \mathcal{A}(H)x^{r-1}, \end{equation} and the optimal vector $x$ such that $\rho(H)=x^\top \mathcal{A}(H)x^{r-1}$ is exactly the Perron vector of $H$, where \begin{equation}\label{r-form} x^\top \mathcal{A}(H)x^{r-1}=\sum_{i_1,\ldots, i_r} a_{i_1,\ldots,i_r} x_{i_1}\cdots x_{i_r}=r \sum_{\{v_{i_1},\ldots,v_{i_r}\} \in E(H)}x_{v_{i_1}}\cdots x_{v_{i_r}}. \end{equation} When $r=2$, Eqs. (\ref{max}) and (\ref{r-form}) reduce to the usual expressions for the spectral radius and the quadratic form of a graph, respectively. \begin{lem}[\cite{cooper2012spectra}]\label{deg} Let $H$ be an $r$-uniform hypergraph. Let $\bar{d}$ be the average degree of $H$ and $\Delta$ be the maximum degree of $H$. Then \[ \bar{d} \leq \rho (H) \leq \Delta. \] In particular, if $H$ is a $d$-regular hypergraph, then $\rho (H)=d$. \end{lem} One can refer to \cite{FHB,FHBproc} for more about the spectral radius and its associated eigenvectors of nonnegative tensors and hypergraphs. \subsection{Designs} We will introduce two kinds of designs in the following. \begin{defi}[\cite{wilson1975existence}] A \emph{pairwise balanced design} of index $\mu$ (in brief, a $\mu$-PBD) is a pair $(X,\mathcal{B})$ where $X$ is a set of points, $\mathcal{B}$ is a family of subsets of $X$ (called blocks), such that each $B\in \mathcal{B}$ contains at least two points, and any $2$-subset $\{x,y\}$ of $X$ is contained in exactly $\mu$ blocks $B$ of $\mathcal{B}$. \end{defi} A $\mu$-PBD on $n$ points in which all blocks have the same cardinality $r$ is traditionally called an \emph{$(n,r,\mu)$-BIBD} (balanced incomplete block design), or a $2$-$(n,r,\mu)$ design. \begin{defi}[\cite{mohacsy2011asymptotic}] Let $n$, $\mu$, $t$ be positive integers, and $K$ be a set of positive integers. A \emph{group divisible $t$-design} (or $t$-GDD) of order $n$, index $\mu$ and block sizes from $K$ is a triple $(X,\Gamma,\mathcal{B})$ such that \begin{itemize} \item[(1)] $X$ is a set of $n$ elements (called points), \item[(2)] $\Gamma =\{G_{1},G_{2},\ldots \}$ is a set of non-empty subsets of $X$ which partition $X$ (called groups), \item[(3)] $\mathcal{B}$ is a family of subsets of $X$ each of size from $K$ (called blocks) such that each block $B\in \mathcal{B}$ intersects any given group in at most one point, \item[(4)] each $t$-set of points from $t$ distinct groups is contained in exactly $\mu$ blocks. \end{itemize} \end{defi} If a $t$-GDD has $n_{i}$ groups of size $g_{i}$ for $i \in [l]$, we denote its group type by $g_{1}^{n_{1}}g_{2}^{n_{2}}\ldots g_{l}^{n_{l}}$. In particular, a $t$-GDD with group type $m^r$ and block size $r$ (namely $K=\{r\}$) is called a \emph{transversal design}. For a $2$-$(n,r,\mu)$ design, if we treat its points as vertices and blocks as edges, we can get a hypergraph associated with the design.
We will not distinguish between a design and the hypergraph associated with it. So, a $2$-$(n,r,1)$ design is a $\frac{n-1}{r-1}$-regular linear hypergraph with $\frac{n(n-1)}{r(r-1)}$ edges. Similarly, a $2$-GDD of group type $m^{k}$ with block size $r$ and index $1$ is a $\frac{m(k-1)}{r-1}$-regular hypergraph with $\frac{m^{2}k (k-1) }{r(r-1)}$ edges. An important issue is the question of the existence of the above two designs or the associated hypergraphs. The following theorems characterize the existence of the above two designs. \begin{thm}[\cite{wilson1975existence}]\label{T-Cond1} Given a positive integer $r$, a $2$-$(n,r,1)$ design exists for all sufficiently large integers $n$ satisfying the following conditions: \begin{equation}\label{Cond1} n-1 \equiv 0 \mod{(r-1)}, ~~~n(n-1) \equiv 0 \mod{r(r-1)}. \end{equation} \end{thm} \begin{thm}[\cite{mohacsy2011asymptotic}]\label{T-Cond2} Let $r$ and $k$ be positive integers with $2 \le r \le k$. Then there exists an integer $m_{0}=m_{0}(r,k)$ such that, for any integer $m \ge m_{0}$ satisfying the conditions \begin{equation}\label{Cond2} m(k-1) \equiv 0 \mod{r-1},~~~ m^{2}k(k-1) \equiv 0 \mod{r(r-1)}, \end{equation} there exists a group divisible design of group type $m^{k}$ with block size $r$ and index $1$. \end{thm} \subsection{Spectral radii of graphs and hypergraphs} Some of the necessary lemmas on the spectral radii of graphs and hypergraphs are presented below for the discussion in Section 3, with most of the results coming from Nikiforov. We write $\T_{k}(n)$ for the $k$-partite Tur\'an graph of order $n$, $K_{k}(s_{1},\ldots ,s_{k})$ for the complete $k$-partite graph with parts of sizes $s_{1}\ge 2$, $s_{2},\ldots ,s_{k}$, respectively, and $K_{k}^{+}(s_{1},\ldots,s_{k})$ for the graph obtained from $K_{k}(s_{1},\ldots,s_{k})$ by adding an edge within the first part. By the result in \cite{FLZ}, \begin{equation}\label{sp-Tur}\rho(\T_{k}(n))\le n\left(1-\frac{1}{k}\right).\end{equation} \begin{lem}[\cite{nikiforov2009spectral}]\label{N-sp} Let $n,k$ be positive integers and $c$ be a positive real number such that $k\ge 3$ and $(c/k^{k} )^{k}\ln{n}\ge 1$. If $G$ is a graph with $n$ vertices satisfying \[ \rho (G )\ge n\left(1-\frac{1}{k-1}+c\right), \] then $G$ contains a $K_{k}(s,\dots,s,t)$ with $s \ge \left \lfloor (c/k^{k} )^{k}\ln{n} \right \rfloor$ and $t> n^{1-c^{k-1}}$. \end{lem} \begin{lem}[\cite{nikiforov2007spectral}]\label{N-Tur} Let $n,k$ be positive integers and $c$ be a positive real number such that $k\ge 2$, $c=k^{-(2k+9)(k+1)}$ and $n\ge e^{2/c}$. If $G$ is a graph on $n$ vertices satisfying $\rho(G)> \rho(\T_{k}(n))$, then $G$ contains a $K_{k}^{+}(\left \lfloor c\ln{n}\right\rfloor,\ldots,\left \lfloor c\ln{n} \right \rfloor)$. \end{lem} The key method of this paper is to build a connection between the spectral radius of a hypergraph and that of its shadow graph. Let $H$ be a linear $r$-uniform hypergraph. The \emph{shadow graph} of $H$, denoted by $\partial H$, is the graph with vertex set $V(\partial H)=V(H)$ and edge set $E(\partial H)=\{\{x,y\}: x \neq y,\ \{x,y\}\subseteq e \text{ for some } e \in E(H)\}$. Alternatively, $\partial H$ is obtained from $H$ by replacing each edge $e$ with a clique on the vertices of $e$. \begin{lem}\label{Lconn} Let $H$ be a connected linear $r$-uniform hypergraph. Then \begin{equation}\label{conn} \rho (H )\le \frac{1}{r-1}\rho (\partial H), \end{equation} with equality if and only if $H$ is regular. \end{lem} \begin{proof} Let ${x}$ be a Perron vector of $H$ with $\| x \|_{r}=1$.
For an edge $e \in E(H)$, denote $x^e=\prod_{v \in e}x_v$. We have \begin{equation}\label{key} \begin{split} \rho (H )&=x^\top \mathcal{A}(H)x^{r-1}=r\sum_{e\in E(H)}x^{e}\\ &=r\sum_{e\in E(H)} \prod_{\{ i,j \}\in \binom{e}{2}} (x_{i}x_{j})^{\frac{1}{r-1}}\\ &\le r\sum_{e\in E(H)} \frac{1}{\binom{r}{2} }\sum_{\{ i,j \}\in \binom{e}{2} } (x_{i}x_{j})^{\frac{\binom{r}{2}}{r-1}}\\ &=\frac{2}{r-1} \sum_{e\in E(H)} \sum_{\{ i,j \}\in \binom{e}{2}} (x_{i})^{\frac{r}{2} } (x_{j} )^{\frac{r}{2}}=\frac{2}{r-1} \sum_{\{ i,j\}\in E(\partial H)} (x_{i})^{\frac{r}{2}} (x_{j})^{\frac{r}{2}}\\ &= \frac{1}{r-1} (x^{[r/2]})^\top \mathcal{A}(\partial H) x^{[r/2]} \\ &\le \frac{1}{r-1} \rho(\partial H), \end{split} \end{equation} where the first inequality follows from the inequality for arithmetic means and geometric means, and the second inequality follows from Eq. (\ref{max}) as $\|x^{[r/2]}\|_2=1$. If $H$ is regular, then $\partial H$ is also regular as $H$ is linear, and the Perron vector $x=|V(H)|^{-1/r}\mathbf{1}$, where $\mathbf{1}$ is the all-ones vector. It is easy to see all inequalities in (\ref{key}) become equalities in this case, and hence we arrive at the equality in Eq. (\ref{conn}). On the other hand, if $\rho (H )= \frac{1}{r-1}\rho (\partial H)$, then all entries of $x$ are equal from the first inequality in (\ref{key}), implying that $x=|V(H)|^{-1/r}\mathbf{1}$. By the eigenvector equation (\ref{ev}), we get that $H$ is regular. \end{proof} \section{Proof} \begin{proof}[Proof of Theorem \ref{sp-crit}] Since $F$ is $(k+1)$-color critical, there is an edge $e$ of $F$ such that $F-e$ is a $k$-partite graph; let $l$ be larger than all of its part sizes, so that $K_{k}^{+}(l,\ldots,l)$ contains a copy of $F$. Let $H$ be an $F^{r}$-free linear hypergraph. Suppose that $\rho(H)> \frac{n(k-1) }{k(r-1)}$. By Lemma \ref{Lconn} and Eq. (\ref{sp-Tur}), we have \[ \rho(\partial H)\ge (r-1 )\rho(H)> \frac{n(k-1)}{k}\ge \rho(\T_{k}(n) ). \] So, by Lemma \ref{N-Tur}, for sufficiently large $n$, $\partial H$ contains a $K_{k}^{+}(\left \lfloor c\ln{n}\right\rfloor,\ldots,\left \lfloor c\ln{n} \right \rfloor)$, where $c$ satisfies the condition in Lemma \ref{N-Tur}. Below we will construct a $(K_{k}^{+}(l,\ldots,l))^{r}$, the $r$-expansion of $K_{k}^{+}(l,\dots ,l)$, from a copy of $K_{k}^{+}(\left \lfloor c\ln{n} \right \rfloor,\ldots ,\left \lfloor c\ln{n} \right \rfloor)$ contained in $\partial H$. Thus $H$ contains an $F^{r}$, a contradiction. Construction method. Take a copy of $K_{k}^{+}(\lfloor c\ln{n}\rfloor,\ldots, \lfloor c\ln{n} \rfloor)$ in $\partial H$. Let $V_{i}=\{v_{i1},v_{i2},\ldots,v_{i,\lfloor c\ln{n} \rfloor}\}$, $i\in [k]$, be the $k$ parts of this copy, where the critical edge $e:=\{v_{11},v_{12}\}$ is contained in $V_{1}$. Since $H$ is linear, if $\{x,y\}$ is an edge of $\partial H$, then $\{x,y\}$ must be contained in a unique edge of $H$ denoted by $E_{xy}$. Denote $\E_{xy}=E_{xy}\backslash \{x,y\}$ if $\{x,y\} \in E(\partial H)$, and $\E_{xy}=\emptyset$ otherwise. First let $A_{1}=\{v_{11},v_{12}\}$ and $B_{1}=\E_{v_{11}v_{12}}$. For $j=2,\ldots,l-1$, take a vertex $v \in V_1\setminus (A_{j-1} \cup B_{j-1})$, and let $A_j=A_{j-1} \cup \{v\}$ and $B_j =B_{j-1} \cup \bigcup_{w \in A_{j-1}}\E_{vw}$. So we have taken an $l$-subset $A_{l-1}$ from $V_1$. We are going to take an $l$-subset from each $V_i$ for $i=2,\ldots,k$ in the following way.
For each $i=2,\ldots, k$, and each $j=(i-1)l,\ldots,il-1$, let $C_{j-1}=\bigcup\{\E_{xy}: x \in A_{j-1},\,y \in B_{j-1}\}$, take a vertex $v \in V_i \backslash (A_{j-1} \cup B_{j-1}\cup C_{j-1})$, and let $A_j=A_{j-1} \cup \{v\}$ and $B_j=B_{j-1} \cup \bigcup_{w \in A_{j-1}}\E_{vw}$. So we get sets $A_i,B_i$ for $i=1,\ldots,kl-1$, where $A_{il-1}\backslash A_{(i-1)l-1}$ is an $l$-subset taken from $V_i$ for $i=2,\ldots,k$. Note that $|A_{j-1}|=j$, $|B_{j-1}|\le (r-2)\binom{j}{2}$, and $|C_{j-1}| \le (r-2)|A_{j-1}||B_{j-1}|=j(r-2)^2\binom{j}{2}$ for $j=l,\ldots,kl-1$. So, for $j=2,\ldots,l-1$, if $$ |V_1|=\lfloor c \ln n\rfloor > j+(r-2)\binom{j}{2}\ge |A_{j-1}|+|B_{j-1}|,$$ and for $i=2,\ldots,k$ and $j=(i-1)l,\ldots,il-1$, if $$|V_i|=\lfloor c \ln n\rfloor > j+(r-2)\binom{j}{2}+j(r-2)^2\binom{j}{2} \ge |A_{j-1}|+|B_{j-1}|+|C_{j-1}|,$$ the above process can be continued. This is guaranteed by taking $n$ sufficiently large. Let $G$ be the subgraph of $K_{k}^{+}(\lfloor c\ln{n}\rfloor,\ldots, \lfloor c\ln{n} \rfloor)$ induced by the vertices of $A_{kl-1}$ and $G^r$ be the sub-hypergraph of $H$ induced by the edges $E_{xy}$ for $\{x,y\} \in E(G)$. Then $G=K_{k}^{+}(l,\ldots,l)$ and $G^r=(K_{k}^{+}(l,\ldots,l))^r$. This completes the construction. On the other hand, if $n=mk$ and $m,k,r$ satisfy condition (\ref{Cond2}) in Theorem \ref{T-Cond2}, then a $2$-GDD of group type $m^{k}$ with block size $r$ and index $1$ exists. Let $\hat{H}$ be the hypergraph associated with the above $2$-GDD. Then $\hat{H}$ is $F^r$-free and $\frac{n(k-1)}{k(r-1)}$-regular. So $\rho(\hat{H})=\frac{n(k-1)}{k(r-1)}$ by Lemma \ref{deg}. \end{proof} \begin{proof}[Proof of Theorem \ref{sp-gen}] Observe that $\partial H$ is a connected graph on $n$ vertices, and $\rho(\partial H)\le n-1$ with equality if and only if $\partial H$ is a complete graph. By Lemma \ref{Lconn}, we have \[\rho(H)\le \frac{1}{r-1}\rho (\partial H)\le \frac{n-1}{r-1}. \] The above inequality holds with equality if and only if $H$ is regular and $\partial H$ is a complete graph, which is equivalent to $H$ being a $2$-$(n,r,1)$ design. \end{proof} \begin{proof}[Proof of Corollary \ref{C-cri}] By inequality (\ref{sp-ed}) and Theorem \ref{sp-crit}, \begin{equation}\label{Coro1} \ex_r^{\text{lin}}(n,F^r) \le \frac{n}{r}\spex_r^{\text{lin}}(n,F^r) \le \frac{n^2(k-1)}{kr(r-1)}.\end{equation} If $n=km$ and $m,k,r$ satisfy (\ref{Cond2}) in Theorem \ref{T-Cond2}, then a $2$-GDD of group type $m^{k}$ with block size $r$ and index $1$ exists. The associated hypergraph $\hat{H}$ is $F^r$-free and has $\frac{n^2(k-1)}{kr(r-1)}$ edges, so the inequality in (\ref{cri}) holds with equality in this case. \end{proof} \begin{proof}[Proof of Corollary \ref{C-edge}] By inequality (\ref{sp-ed}) and Theorem \ref{sp-gen}, $$ e(H) \le \frac{n}{r}\rho(H) \le \frac{n(n-1)}{r(r-1)}.$$ The second inequality holds with equality if and only if $H$ is a $2$-$(n,r,1)$ design. Observe that in this case $H$ has exactly $\frac{n(n-1)}{r(r-1)}$ edges. This proves Corollary \ref{C-edge}. \end{proof} \begin{proof}[Proof of Lemma \ref{N-1}] Since $\rho (H)\ge \frac{1}{r-1}(1-\frac{1}{k-1}+c)n$, by Lemma \ref{Lconn} we have \[ \rho(\partial H) \ge (r-1) \rho(H)\ge n(1-\frac{1}{k-1}+c). \] By Lemma \ref{N-sp}, there exists an $n_0=n_0(c,k)$ such that if $n \ge n_0$, then $\partial H$ contains a $K_{k}(s,\ldots,s,t)$ with $s \ge \lfloor (c/k^{k})^{k}\ln n \rfloor$ and $t> n^{1-c^{k-1}}$.
By an argument similar to that in the proof of Theorem \ref{sp-crit}, there exists an $n'_0=n'_0(c,k,l,r)$ such that if $n \ge n'_0$, we can construct a $K_{k}(l,\ldots,l)^{r}$ in $H$. \end{proof} \begin{proof}[Proof of Theorem \ref{N-2}] By Lemma \ref{N-1}, we have \[ \limsup_{n \to \infty} \frac{\spex_{r}^{\text{lin}}(n,F^{r})}{n} \le \frac{1}{r-1}\left(1-\frac{1}{k-1}\right).\] On the other hand, by Theorem \ref{T-Cond2}, there exists an $m_0=m_{0}(r,k)$ such that, for every $m\ge m_{0}$ satisfying \[ m(k-2) \equiv 0 \mod{(r-1)}, ~~~m^{2}(k-1)(k-2) \equiv 0 \mod{r(r-1)}, \] there exists a $2$-GDD of group type $m^{k-1}$ with block size $r$ and index $1$. So, the hypergraph $H$ associated with the above $2$-GDD on $n=m(k-1)$ vertices is $\frac{n}{r-1} (1-\frac{1}{k-1})$-regular, and then $\rho(H)= \frac{n}{r-1}(1-\frac{1}{k-1})$. Obviously, $H$ is $F^{r}$-free, which implies that \[ \liminf_{n \to \infty} \frac{\spex_{r}^{\text{lin}}(n,F^{r})}{n} \ge \frac{1}{r-1}\left(1-\frac{1}{k-1}\right).\] The result follows. \end{proof} \end{document}
\begin{document} \title{Observing quantum chaos with noisy measurements and highly mixed states} \author{Jason F. Ralph} \email{[email protected]} \affiliation{Department of Electrical Engineering and Electronics, University of Liverpool, Brownlow Hill, Liverpool, L69 3GJ, UK.} \author{Kurt Jacobs} \email{[email protected]} \affiliation{U.S. Army Research Laboratory, Computational and Information Sciences Directorate, Adelphi, Maryland 20783, USA.} \affiliation{Department of Physics, University of Massachusetts at Boston, Boston, MA 02125, USA} \affiliation{Hearne Institute for Theoretical Physics, Louisiana State University, Baton Rouge, LA 70803, USA} \author{Mark J. Everitt} \email{[email protected]} \affiliation{Quantum Systems Engineering Research Group, Department of Physics, Loughborough University, Loughborough, LE11 3TU, UK.} \date{\today} \begin{abstract} A fundamental requirement for the emergence of classical behavior from an underlying quantum description is that certain observed quantum systems make a transition to chaotic dynamics as their action is increased relative to $\hbar$. While experiments have demonstrated some aspects of this transition, the emergence of quantum trajectories with a positive Lyapunov exponent has never been observed directly. Here, we remove a major obstacle to achieving this goal by showing that, for the Duffing oscillator, the transition to a positive Lyapunov exponent can be resolved clearly from observed trajectories even with measurement efficiencies as low as 20\%. We also find that the positive Lyapunov exponent is robust to highly mixed, low purity states and to variations in the parameters of the system. \end{abstract} \pacs{05.45.Mt, 03.65.Ta, 05.45.Pq} \keywords{quantum state-estimation, continuous measurement, quantum chaos, classical limit} \maketitle The emergence of classical chaotic-like behavior from quantum mechanical systems has been an area of active research for many years~\cite{Haa2013}. There has been a great deal of interest both in closed quantum systems, which display unitary evolution, and in non-unitary open quantum systems. This paper is concerned with open quantum systems whose classical counterparts are chaotic, and which make a transition to chaotic behavior as their size (more precisely, their action) is increased so as to be large compared to $\hbar$~\cite{Spi1994,Sch1995,Zur1995,Bru1996,Bru1997,Hab1998,Bha2000,Mil2000,Ste2001,Sco2001,Bha2003,Eve2005a,Eve2005b,Sha2013,Len2013,Eas2016,Pok2016,Nei2016}. This transition is enabled by their interaction with the environment or by subjecting them to continuous observation. In the former case, the evolution approaches that of the probability density in phase space for the equivalent classical system as the action is increased~\cite{Zur1995,Hab1998,Sha2013}. Continuous observation turns this probability density into individual trajectories that follow the nonlinear classical motion with the requisite Lyapunov exponents~\cite{Bha2003,Sha2013,Eas2016,Pok2016}. Recent experimental progress in the control and measurement of quantum systems has enabled the continuous measurement of individual quantum systems and the calculation of quantum trajectories and state estimates~\cite{Mur2013,Web2014,Six2015,Cam2016}. This opens up the exciting possibility of directly observing the trajectories of classical chaotic dynamics emerging in open quantum systems.
By observing a sufficiently long trajectory, it should also be possible to identify a positive Lyapunov exponent, a fundamental characteristic parameter that is indicative of chaos. Although experiments have been performed to explore the quantum-classical transition~\cite{Mil2000,Ste2001,Len2013,Nei2016} and to identify aspects of chaotic behavior in open quantum systems, the Lyapunov exponents have not been determined experimentally from quantum trajectories. One of the difficulties in such experiments is the efficiency of the measurement process. In an ideal measurement, the noise will be purely quantum in origin and the measurement efficiency, defined to be the fraction of the noise power due to the quantum measurement as opposed to extraneous classical noise from other sources, will be 100\%. Unfortunately, practical measurement systems are often far from ideal, and even the best experiments have efficiencies well below 100\%. For example, the experimental efficiencies reported in \cite{Web2014} are around 35\%. For the observation of certain purely quantum effects, the efficiency must be above some minimum threshold level. {\em Rapid-purification}~\cite{Jac2003, Com2006, Wis2006, Hil2007, Com2008, Wis2008, Li2013}, for example, requires a measurement efficiency of at least 50\% \cite{Li2013}. In this paper, we show that a positive Lyapunov exponent and the associated transition to classical chaos could be derived from quantum trajectories and continuous measurements with efficiencies as low as 20\%. Further, we find that the value of the positive Lyapunov exponent is robust across a wide range of purities and is insensitive to variations in the system parameters of at least $5\%$. This opens the way to observing the emergence of chaos in open quantum systems with current technology. The evolution of a continuously observed quantum system is described by a stochastic master equation~\cite{Bel1999,Wis2010,Jac2014}. Accordingly, our work here is aided greatly by a recent and significant improvement in the numerical methods available to solve such equations, due to Rouchon and collaborators~\cite{Ami2011,Rou2015}. It also benefits from the ``moving basis'' method used by Schack, Brun and Percival~\cite{Sch1995, Bru1996}. The system that we consider is a standard example from classical chaos: the Duffing oscillator. This system has been studied for pure states and efficient measurements, and it has been shown to make a transition from non-chaotic to chaotic motion as the action is increased relative to $\hbar$~\cite{Bru1996,Bru1997,Bha2000,Sco2001,Bha2003,Eve2005a,Eve2005b,Eas2016,Pok2016}. To achieve this, one must change the mass of the oscillator, the potential, and any driving forces in such a way that the dynamics remain the same up to a scaling of the coordinates and time, while the area of the phase space increases with respect to $\hbar$. A simple way to do this is to first write the Hamiltonian of the system, $\hat{H}$, in terms of dimensionless variables $\hat{q}$ and $\hat{p}$, then to change the size of the phase space by defining the new Hamiltonian to be $\hat{H}_{\beta} = \beta^{-2} \hat{H}(\beta \hat{q}, \beta \hat{p})$. The overall factor of $\beta^{-2}$ merely scales time. The size of the phase space for the Hamiltonian $\hat{H}_{\beta}$ now scales as $\beta^{-2}$ so that the classical limit is given by $\beta \rightarrow 0$~\cite{Sch1995,Bru1996}.
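For instance, applying this rescaling to the dimensionless driven double-well Hamiltonian $\hat{H}(\hat{q},\hat{p})=\frac{1}{2}\hat{p}^{2}+\frac{1}{4}\hat{q}^{4}-\frac{1}{2}\hat{q}^{2}+g\cos(t)\hat{q}$ (omitting the damping term, which is quadratic and therefore unaffected) gives
\[
\hat{H}_{\beta}=\beta^{-2}\hat{H}(\beta\hat{q},\beta\hat{p})=\frac{1}{2}\hat{p}^{2}+\frac{\beta^{2}}{4}\hat{q}^{4}-\frac{1}{2}\hat{q}^{2}+\frac{g}{\beta}\cos(t)\hat{q},
\]
so the quadratic terms are unchanged while the quartic and driving terms carry the explicit $\beta$ dependence that appears below.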
The resulting dimensionless Hamiltonian for the Duffing oscillator is \begin{equation}\label{Ham} \hat{H}_\beta = \frac{1}{2}\hat{p}^2+\frac{\beta^2}{4}\hat{q}^4-\frac{1}{2}\hat{q}^2+\frac{g}{\beta}\cos(t)\hat{q}+\frac{\Gamma}{2}(\hat{q}\hat{p}+\hat{p}\hat{q}) . \end{equation} The first term in $\hat{H}_\beta$ is the kinetic energy, the second and third terms give the double-well potential, and the fourth is the periodic linear driving with a tunable amplitude $g/\beta$. The final term in the expression for $\hat{H}_\beta$ may look unusual, and is included because, in combination with the dissipative measurement process, it generates linear damping in momentum. (The Markovian dissipative measurement damps both $\hat{p}$ and $\hat{q}$. The Hamiltonian term amplifies $\hat{q}$ and damps $\hat{p}$, thus canceling the damping of $\hat{q}$ so that only the damping of $\hat{p}$ remains \cite{Duf2016}). While damping of momentum is not required to observe chaos in the Duffing oscillator~\cite{Bha2000}, it is useful in numerical simulations to keep the phase space contained. In terms of the real physical position $\hat{X}$, the momentum $\hat{P}$, and the Hamiltonian $\hat{H}_{\ms{r}}$, the scaled variables are given by $\hat{q} = \hat{X}/\sqrt{\hbar/m\omega}$, $\hat{p} = \hat{P}/\sqrt{\hbar m \omega}$, and $\hat{H}_\beta = \hat{H}_{\ms{r}}/(\hbar\omega)$, in which $m$ is the mass of the oscillator and $\omega$ is an arbitrary frequency scale. Since the observer will not have full information about which pure state the system is in at any given time, the observer's knowledge about this state is described by the density matrix, $\rho$. The purity of the density matrix is defined by $P = \mathrm{Tr}[\rho^2]$, and indicates the level of certainty that the observer has about the system's state. Under the action of a continuous measurement, the evolution of the density matrix is stochastic. This is due to the fact that the stream of measurement results necessarily has a fluctuating component, and the density matrix is a full description of the observer’s state-of-knowledge conditioned on these results. To emphasize this, we will denote the density matrix by $\rho_{\ms{c}}$. For the continuous measurement, we use a standard model in which a transmission line --- or more generally a medium that supports a continuum of traveling waves --- is coupled to the system so as to mediate both damping and the extraction of information~\cite{Wis2010,Jac2014}. The exact type of measurement has been shown to be unimportant in observing the emergence of classical dynamics, so long as it provides enough information about the position and momentum to maintain sufficient localization of the state in phase space~\cite{Bha2000, Sch1995, Bru1996}. In fact, for the work presented here, we have also performed the simulations using a continuous measurement of the position, $\hat{q}$, and this showed very similar behavior. 
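As an indication of how this model can be set up numerically, the following minimal Python sketch builds $\hat{H}_\beta$ of Eq.~(\ref{Ham}) and the measurement operator in a plain truncated Fock basis (the truncation, the parameter values, and the variable names are illustrative choices; the moving-basis representation described below is omitted).
\begin{verbatim}
import numpy as np

N = 120                                   # truncation of the oscillator basis
beta, g, Gamma = 0.3, 0.3, 0.125          # illustrative parameter values

a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator
q = (a + a.conj().T) / np.sqrt(2)              # dimensionless position
p = (a - a.conj().T) / (1j * np.sqrt(2))       # dimensionless momentum
L = np.sqrt(2 * Gamma) * a                     # measurement/damping operator

def H_beta(t):
    """Duffing Hamiltonian defined in the text, at scaled time t, as an N x N matrix."""
    return (0.5 * p @ p
            + 0.25 * beta**2 * np.linalg.matrix_power(q, 4)
            - 0.5 * q @ q
            + (g / beta) * np.cos(t) * q
            + 0.5 * Gamma * (q @ p + p @ q))

print(H_beta(0.0).shape)   # (N, N); N must be large enough for the states explored
\end{verbatim}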
Under the action of continuous measurement, the evolution of the density matrix is given by the stochastic master equation (SME)~\cite{Wis2010,Jac2014}, \begin{eqnarray}\label{sme1} d\rho_{\ms{c}}&=&- i \left[\hat{H}_\beta,\rho_{\ms{c}}\right]dt \nonumber \\ &&+\left\{ \hat{L} \rho_{\ms{c}} \hat{L}^{\dagger} -\frac{1}{2}\left(\hat{L}^{\dagger} \hat{L} \rho_{\ms{c}} + \rho_{\ms{c}} \hat{L}^{\dagger} \hat{L} \right)\right\}dt \nonumber \\ &&+\sqrt{\eta}\left(\hat{L}\rho_{\ms{c}}+\rho_{\ms{c}} \hat{L}^{\dagger}-\mathrm{Tr}[\hat{L}\rho_{\ms{c}}+\rho_{\ms{c}} \hat{L}^{\dagger}] \right)dW \nonumber \\ \end{eqnarray} in which $\hat{L} = \sqrt{2\Gamma}\hat{a}$, with $\hat{a} = (\hat{q} + i \hat{p})/\sqrt{2}$, and the stream of measurement results (the ``measurement record'') is given by \begin{equation}\label{record} y(t+dt) = y(t) + \sqrt{\eta}\mathrm{Tr}[\hat{L}\rho_{\ms{c}}+\rho_{\ms{c}} \hat{L}^{\dagger}] dt+dW \end{equation} where $dW$ are increments of Weiner noise and thus satisfy $\langle dW \rangle =0$ and $dW^2 = dt$. The efficiency of the measurement is denoted by $\eta$, and is defined to be the fraction of the noise power due to the measurement rather than other (classical) sources of noise, i.e. the fraction of the output signal that is recorded by the measuring device. For Rouchon's method~\cite{Ami2011,Rou2015} with a moving basis, the increment to $\rho_{\ms{c}}$ for the time step from $t_n = n\Delta t$ to $t_{n+1} = (n+1)\Delta t$, is given by $\Delta \rho_{\ms{c}}^{(n)} = \rho_{\ms{c}}^{(n+1)}- \rho_{\ms{c}}^{(n)}$, where \begin{equation}\label{sme2} \rho_{\ms{c}}^{(n+1)}= \frac{\hat{M}_n\rho_{\ms{c}}^{(n)}\hat{M}_n^{\dagger}+(1-\eta)\hat{L}\rho_{\ms{c}}^{(n)}\hat{L}^{\dagger}\Delta t } {\mathrm{Tr}\left[\hat{M}_n\rho_{\ms{c}}^{(n)}\hat{M}_n^{\dagger}+ (1-\eta)\hat{L}\rho_{\ms{c}}^{(n)}\hat{L}^{\dagger}\Delta t \right]} \end{equation} and $\hat{M}_n$ is given by \begin{eqnarray}\label{Mn1} \hat{M}_n &=& I-\left(i\hat{H} +\frac{1}{2} \hat{L}^{\dagger}\hat{L}\right)\Delta t +\frac{\eta}{2}\hat{L}^2(\Delta W(n)^2-\Delta t) \nonumber\\ && +\sqrt{\eta}\hat{L}\left(\sqrt{\eta}\mathrm{Tr}[\hat{L}\rho_{\ms{c}}^{(n)}+\rho_{\ms{c}}^{(n)}\hat{L}^{\dagger}]\Delta t +\Delta W(n)\right), \end{eqnarray} where the $\Delta W$'s are independent Gaussian variables with zero mean and a variance equal to $\Delta t$. To represent the density matrix we use a harmonic oscillator (Fock) basis, changing the location of this oscillator to follow the expected location of the system in phase space (i.e. the expectation values of $\hat{q}$ and $\hat{p}$). This greatly reduces the size of the state-space required for the simulation. Figure 1 shows an example trajectory in the chaotic regime. To verify whether a system exhibits chaotic behavior or not, it is necessary to calculate the Lyapunov exponents for the trajectory. In classical dynamics, this is fairly straightforward and uses the Jacobian, calculated from the classical dynamical equations, and the Lyapunov exponents are found from the eigenvalues of the product of the Jacobian matrices along the trajectory and taking the infinite time limit \cite{Sko2010}. In practice, the Lyapunov exponents are estimated in the long (but finite) time limit and the Jacobian products are repeatedly renormalized using a Gram-Schmidt orthonormalization procedure to constrain the tendency of the eigenvalues to increase beyond the numerical limits of the computer \cite{Sko2010}. 
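For reference, this classical estimation procedure can be summarized in a short sketch. The following \texttt{numpy} code (a standard QR-based variant of the Gram--Schmidt renormalization; the sequence of Jacobian matrices is assumed to be supplied by the dynamical model, so the \texttt{jacobians} argument is a placeholder) accumulates estimates of the Lyapunov exponents from a finite trajectory, the largest value estimating the leading exponent:
\begin{verbatim}
import numpy as np

def lyapunov_spectrum(jacobians, dt):
    """Estimate Lyapunov exponents from a sequence of Jacobians J_k
    (one per time step of length dt), renormalizing with QR
    decompositions to avoid numerical overflow."""
    dim = jacobians[0].shape[0]
    Q = np.eye(dim)
    log_r = np.zeros(dim)
    for J in jacobians:
        Q, R = np.linalg.qr(J @ Q)
        signs = np.sign(np.diag(R))   # keep diag(R) > 0 so logs are defined
        Q, R = Q * signs, signs[:, None] * R
        log_r += np.log(np.abs(np.diag(R)))
    return log_r / (len(jacobians) * dt)
\end{verbatim}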
For quantum systems, a number of approaches have been proposed and used to define Lyapunov exponents~\cite{Haa1992, Zyc1993,Jal2001}. Here, the generation of trajectories means that an approach analogous to the classical calculation method can be used~\cite{Jal2001}, but rather than using the classical dynamical equations to generate a Jacobian at each time step, an approximate Jacobian is constructed using the evolution of the expectation values for $\hat{q}$ and $\hat{p}$ under the non-stochastic evolution given by (\ref{sme2}), i.e. the evolution predicted when $dW=0$. Because of these factors, the finite time of the simulation and the differences in the construction of the Jacobian matrices, the solutions that generate a positive Lyapunov exponent are strictly chaotic-like rather than true chaos in the mathematical sense. However, we refer to the solutions as chaotic for reasons of practicality. With a two-dimensional phase space and an arbitrary phase variable for the drive term, we would expect to obtain three Lyapunov exponents, one of which would always be zero (corresponding to perturbations along the trajectory). We will denote the two non-zero Lyapunov exponents by $\lambda_{+}$ and $\lambda_{-}$ respectively, noting that $\lambda_{+}$ could be positive (chaotic solution) or negative (periodic solution) and $\lambda_{+}+\lambda_{-} <0$. The estimates of the Lyapunov exponents calculated below were obtained using the parameter values $g=0.3$ and $\Gamma=0.125$, with a moving basis containing between 80 and 200 oscillator states, and between 2000 and 6000 time increments per cycle of the drive term. The size of the basis and time steps was varied to ensure that the integration of the SME was numerically stable. Although the values of the Lyapunov exponents are found to be insensitive to measurement inefficiencies, the state estimates generated using (\ref{sme1}) and a particular measurement record (\ref{record}) can be numerically unstable if the basis contains insufficient numbers of states or the time increments are too large. For measurement efficiencies around 20\% and $\beta$ values around $0.1$, the number of states required to generate a stable trajectory in the moving basis grows to 150-200 states and the time step must be $\Delta t \simeq \pi/3000$. \begin{figure} \caption{An example quantum trajectory for $\beta = 0.1$ and $\eta = 0.4$, with $g=0.3$ and $\Gamma=0.125$.} \label{fig:phaseplot} \end{figure} Figure~\ref{fig:lyapunov1} shows the largest non-zero Lyapunov exponents estimated for $\beta$ values between 1.0 (noisy-periodic) and 0.1 (noisy-chaotic) for measurement efficiencies from 20\% to 100\%. A small number of simulations were performed for measurement efficiencies as low as 10\%. It was possible to obtain values for positive exponents in some cases but the numerical stability of the SME was affected for $\beta < 0.2$ so these results are not shown. The main feature to note in Figure~\ref{fig:lyapunov1} is that the positive Lyapunov exponents are approximately constant as a function of purity and for the range of measurement efficiencies, up to some small fluctuations due to the stochastic nature of the trajectories. There is a weak linear dependence on the average purity for the negative exponents (noisy-periodic trajectories). The figure also shows that the periodic solutions often have a higher average purity for the same measurement efficiency. 
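The trajectories and purities discussed here are generated by iterating the update of Eqs.~(\ref{sme2}) and (\ref{Mn1}). As a point of reference, a single step of that update can be written in a few lines of \texttt{numpy}; the sketch below is an illustration of the formula rather than the moving-basis implementation used for our results, and the operator matrices, time step and Wiener increment are assumed to be supplied by the caller.
\begin{verbatim}
import numpy as np

def rouchon_step(rho, H, L, eta, dt, dW):
    """One update of the conditioned state rho (Eqs. (4)-(5)), given
    Hamiltonian H, measurement operator L, efficiency eta, time step
    dt and a Wiener increment dW ~ N(0, dt)."""
    I = np.eye(rho.shape[0], dtype=complex)
    Ld = L.conj().T
    expect = np.trace(L @ rho + rho @ Ld)          # <L + L^dag>
    M = (I - (1j * H + 0.5 * Ld @ L) * dt
         + 0.5 * eta * (L @ L) * (dW**2 - dt)
         + np.sqrt(eta) * L * (np.sqrt(eta) * expect * dt + dW))
    new_rho = M @ rho @ M.conj().T + (1 - eta) * (L @ rho @ Ld) * dt
    return new_rho / np.trace(new_rho)
\end{verbatim}
A trajectory is produced by drawing \texttt{dW = np.sqrt(dt) * np.random.randn()} at each step; setting \texttt{dW = 0} gives the deterministic evolution used to construct the approximate Jacobians described above.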
The chaotic solutions have lower purities except for cases where $\eta = 1.0$, which will always asymptote to a pure state $P=1$, because all of the information contained in the measurement record is available to construct the quantum state. This relationship between periodic solutions and a higher average purity might be expected intuitively and has been noted in\cite{Sha2013}. Chaos is associated with information ``creation'', in that two chaotic solutions from neighboring points will diverge as the small differences are amplified by the stretching and folding of phase space associated with chaotic evolution \cite{Sko2010} -- although not shown, this stretching and folding process can be seen in the quantum states if the phase space Wigner functions are plotted on the $q-p$ plane \cite{Sha2013}. As a result of this, it could be anticipated that a chaotic trajectory with a positive Lyapunov exponent would require more measurements to extract the information required to construct an accurate state estimate, and an inefficient measurement would be likely to produce a less accurate state estimate for chaotic evolution than for periodic evolution. The minimum Lyapunov exponents ($\lambda_{-}$) are all negative, as expected. They are not shown explicitly, but they were also found to be relatively insensitive to the purity of the states and the efficiency of the measurements. \begin{figure} \caption{The largest non-zero Lyapunov exponent ($\lambda_{+} \label{fig:lyapunov1} \end{figure} Figure \ref{fig:lyapunov2} shows the largest non-zero Lyapunov exponents as functions of $\beta$, as in \cite{Eas2016}, for three different measurement efficiencies. These represent the transition from the quantum ($\beta = 1.0$) to the near classical regime ($\beta=0.1$). Positive Lyapunov exponents and chaotic behavior appear at $\beta = 0.3$ \cite{Eas2016}. The figure shows that the transition is preserved even when the measurement efficiency and, consequently, the average purity of the quantum states are low, which is relevant for possible experimental investigations where the measurements are not idealized theoretical models. \begin{figure} \caption{The largest non-zero Lyapunov exponents ($\lambda_{+} \label{fig:lyapunov2} \end{figure} The accuracy of the estimation process and of the quantum trajectory are reliant on the accuracy of the Hamiltonian and the parameters used in the SME to estimate the quantum state from the measurement record. If there is a mis-match between the system that generates the measurements and the parameter values used in the SME, the fidelity of the quantum state estimate will be adversely affected. It is natural, therefore, to ask what effect such mis-matches would have on the estimation of the Lyapunov exponents. To address this concern, simulations were conducted using one filter to generate a continuous measurement record, and this record was then fed into a second SME, where the second SME had errors in each of the parameters in the Hamiltonian (\ref{Ham}) and the SME (\ref{sme1}): $g$, $\beta$, $\Gamma$, $\eta$, and the initial phase of the cosine drive term. In each case, the accuracies of the quantum trajectories did deteriorate, but the estimates for the Lyapunov exponents were found to be insensitive to errors up to 5\% of the true parameter values. So, the Lyapunov exponents were found to be robust against measurement inefficiencies, highly mixed states and mis-matches in the state estimation processing. 
The importance of these results lies in the accessibility of the characteristic Lyapunov exponents to experimental investigation. As we have already noted, continuous measurements are difficult to achieve in experiments and are often limited in terms of their efficiency \cite{Web2014}. A signature of chaos that is related to the ``quantum-ness'' or classicality of the system and that is relatively insensitive to the measurement efficiency could be a significant factor in the experimental observation of quantum chaos in such systems. The signature is also robust against highly mixed states and inaccuracies in the experimental parameters. It is also a benefit that the Duffing oscillator can be realized using superconducting circuits and Josepson junctions \cite{Man2007,Guo2010} and it already forms the basis for nonlinear amplifiers used in quantum circuit experiments \cite{Man2007}. Using the notation given in \cite{Man2007}, we can define dimensionless quantities for the parallel circuit configuration (also called the radio-frequency SQUID \cite{Bar1983}): $\hat{q}=\Phi (\hbar\sqrt{L_p/C_p})^{-\frac{1}{2}}$, $\hat{p}=Q (\hbar\sqrt{C_p/L_p})^{-\frac{1}{2}}$, where $\Phi$ is the magnetic flux, $Q=C_p\dot{\Phi}$ is the conjugate momentum, $C_p$ is the junction capacitance, $L_p$ is the parallel inductance formed from the Josephson inductance $L_J$ and the geometric inductance $L_{pe}$ and $\omega=1/\sqrt{C_p L_p}$. To produce the potential given in (\ref{Ham}), the circuit must be biased to give a negative quadratic term and a positive quartic term, with $L_p=(1/L_J-1/L_{pe})^{-1}$. For this configuration, the classical scaling parameter is given by $\beta=\sqrt{e/(3\omega C_p(1-L_J/L_{pe}))}$, where $e$ is the electron charge, and the classical limit is taken by letting the effective ``mass'' of the system $C_p\rightarrow\infty$. In this paper, we have studied the properties of the quantum Duffing oscillator in the presence of a continuous measurement, mediated by a weak coupling to an environment. The stochastic master equation was used to follow the evolution of the quantum state, for both ideal (efficient) measurements and inefficient measurements; including very inefficient measurements, leading to highly mixed states. The resultant quantum trajectories are stochastic and can exhibit periodic or chaotic behavior as the dynamical evolution is scaled from the quantum regime towards the classical limit. The standard indicators of chaos, the Lyapunov exponents, have been calculated for this system. Positive Lyapunov exponents were shown to be insensitive to the measurement efficiency and to the purity of the quantum states, meaning that the emergence of chaotic behavior can be determined even when using very inefficient measurements and highly mixed states. The Lyapunov exponents calculated from the quantum trajectories were also found to be robust to variations in all of the parameter values used in the state estimation process. The robustness of the Lyapunov exponents to these factors would be significant for any experimental investigation of chaos in open quantum systems, because it demonstrates that the quantum-classical transition to chaotic behavior should be accessible even when the measurements are not ideal and the system parameters have not been characterized perfectly. \textit{Acknowledgments:} JFR would like to thank the US Army Research Laboratories (contract no. W911NF-16-2-0067). \end{document}
\begin{document} \title{EBSD Grain Knowledge Graph Representation Learning for Material Structure-Property Prediction} \titlerunning{EBSD Grain Knowledge Graph Representation Learning} \author{Chao Shu \and Zhuoran Xin \and Cheng Xie\inst{*}} \authorrunning{C. Shu et al.} \institute{Yunnan University, School of Software, Kunming 650504, China \\ \email{[email protected]}} \maketitle \begin{abstract} The microstructure is an essential part of materials, storing the genes of materials and having a decisive influence on materials' physical and chemical properties. The material genetic engineering program aims to establish the relationship between material composition/process, organization, and performance to realize the reverse design of materials, thereby accelerating the research and development of new materials. However, tissue analysis methods of materials science, such as metallographic analysis, XRD analysis, and EBSD analysis, cannot directly establish a complete quantitative relationship between tissue structure and performance. Therefore, this paper proposes a novel data-knowledge-driven organization representation and performance prediction method to obtain a quantitative structure-performance relationship. First, a knowledge graph based on EBSD is constructed to describe the material's mesoscopic microstructure. Then a graph representation learning network based on graph attention is constructed, and the EBSD organizational knowledge graph is input into the network to obtain graph-level feature embedding. Finally, the graph-level feature embedding is input to a graph feature mapping network to obtain the material's mechanical properties. The experimental results show that our method is superior to traditional machine learning and machine vision methods. \keywords{Knowledge Graph \and EBSD \and Graph Neural Network \and Representation Learning \and Materials Genome \and Structure-Property.} \end{abstract} \section{Introduction} Material science research is a continuous understanding of the organization's evolution, and it is also a process of exploring the quantitative relationship between organizational structure and performance. In the past, the idea of material research was to adjust the composition and process to obtain target materials with ideal microstructure and performance matching. However, this method relies on a lot of experimentation and trial-error experience and is inefficient. Therefore, to speed up the research and development(R\&D) of materials, the Material Genome Project~\cite{jain2013commentary} has been proposed in various countries. The idea of the Material Genome Project is to establish the internal connections between ingredients, processes, microstructures, and properties, and then design microstructures that meet the material performance requirements~\cite{jain2013commentary,de2019new}. According to this connection, the composition and process of the material are designed and optimized. Therefore, establishing the quantitative relationship between material composition/process, organizational structure, and performance is the core issue of designing and optimizing materials. At present, most tissue structure analysis is based on image analysis technology to extract specific geometric forms and optical density data~\cite{Rekha_2017}. 
However, the data obtained by this method is generally limited to quantitative information about one-dimensional or two-dimensional images, and it is difficult to directly establish a quantitative relationship between tissue structure and material properties, so the method has obvious limitations. In addition, current material microstructure analysis (e.g., metallographic analysis, XRD analysis, EBSD analysis) is often qualitative or only partially quantitative and relies on manual experience~\cite{Rekha_2017}. It is still impossible to directly calculate material properties from the overall organizational structure. In response to the above problems, this paper proposes a novel data-driven~\cite{zhou2020property} material performance prediction method based on EBSD~\cite{humphreys2004characterisation}. EBSD is currently one of the most effective material characterization methods; its data not only contain structural information but are also easy for computers to process. Therefore, we construct a digital knowledge graph~\cite{wang2017knowledge} representation based on EBSD and then design a representation learning network to embed graph features. Finally, we use a neural network~\cite{priddy2005artificial} to predict material performance from the graph embedding. We conducted experiments on magnesium metal and compared our method with traditional machine learning methods and computer vision methods. The results show the scientific validity of our proposed method and the feasibility of property calculation. The contributions of this paper include: \begin{enumerate} \item We design an EBSD grain knowledge graph that can digitally represent the mesoscopic structural organization of materials. \item We propose an EBSD representation learning method that can predict a material's performance based on the EBSD organization representation. \item We establish a database of structure-performance calculations that expands the material gene database. \end{enumerate} \section{Related work} \subsection{Data-driven material structure-performance prediction} Machine learning algorithms can obtain abstract features of data and mine the association rules behind the data, and they have accelerated the transformation of materials R\&D to the fourth paradigm (i.e., the data-driven R\&D model). Machine learning has been applied to material-aided design. Ruho Kondo et al. used a lightweight VGG16 network to predict the ionic conductivity of ceramics from microstructure images~\cite{kondo2017microstructure}. Zhi-Lei Wang et al. developed a new machine learning tool, Material Genome Integrated System Phase and Property Analysis (MIPHA)~\cite{wang2019property}; they use neural networks to predict stress-strain curves and mechanical properties from constructed quantitative structural features. Pokuri et al. used deep convolutional neural networks to map microstructures to photovoltaic performance and learn structure-attribute relationships from the data~\cite{pokuri2019interpretable}; they designed a CNN-based model to extract the active-layer morphology features of thin-film OPVs and predict photovoltaic performance. Machine learning methods based on numerical and visual features can detect the relationship between organization and performance. However, the microstructure of materials contains essential structural information and connection relationships, and learning methods based on descriptors and images ignore this information.
\subsection{Knowledge graph representation learning} Knowledge Graph is an important data storage form in artificial intelligence technology. It forms a large amount of information into a form of graph structure close to human reasoning habits and provides a way for machines to understand the world better. Graph representation learning\cite{hamilton2020graph} gradually shows great potential. In medicine, knowledge graphs are commonly used for embedding representations of drugs. The knowledge graph embedding method is used to learn the embedding representation of nodes directly and construct the relationship between drug entities. The constructed knowledge graphs can be used for downstream prediction tasks. Lin Xuan et al. propose a graph neural network based on knowledge graphs(KGNN) to solve the problem of predicting interactions in drug knowledge graphs~\cite{lin2020kgnn}. Similarly, in the molecular field, knowledge graphs are used to characterize the structure of molecules/crystals~\cite{wieder2020compact,chen2019graph,jang2020structure}. Nodes can describe atoms, and edges can describe chemical bonds between atoms. The molecular or crystal structure is seen as an individual "graph". By constructing a molecular network map and applying graph representation learning methods, the properties of molecules can be predicted. In the biological field, graphs are used for the structural characterization of proteins. The Partha Talukdar research group of the Indian Institute of Science did work on the quality assessment of protein models~\cite{2020ProteinGCN}. In this work, they used nodes to represent various non-hydrogen atoms in proteins. Edges connect the K nearest neighbors of each node atom. Edge distance, edge coordinates, and edge attributes are used as edge characteristics. After generating the protein map, they used GCN to learn atomic embedding. Finally, the non-linear network is used to predict the quality scores of atomic embedding and protein embedding. Compared with the representation of descriptors and visual features, knowledge graphs can represent structural information and related information. The EBSD microstructure of the material contains important grain structure information and connection relationships. Therefore, this paper proposes the representation method of the knowledge graph and uses it for the prediction of organizational performance. \section{Representation of the EBSD Grain Knowledge Graph} In this part of the work, we construct a knowledge graph representation of the micro-organization structure. As shown in the Figure \ref{fig.ebsd}, the left image is the scanning crystallographic data onto the sample, and the right is the Inverse Pole Figure map of the microstructure. The small squares in the Figure \ref{fig.ebsd.b} represents the grains. Based on this grain map data, we construct a grain knowledge graph representation. Because the size, grain boundary, and orientation of the crystal grains affect the macroscopic properties of the material, such as yield strength, tensile strength, melting point, and thermal conductivity~\cite{carneiro2020recent}. Therefore, in this article, we choose the grain as the primary node in the map, and at the same time, we discretize the main common attributes of the grain size and orientation as the attribute node. Then, according to the grain boundaries of the crystal grains, we divided the two adjacent relationships between the crystal grains, namely, strong correlation and weak correlation. 
Finally, affiliations between the grain nodes and the attribute nodes are established. \begin{figure} \caption{EBSD scan organization information.} \label{fig.ebsd.a} \label{fig.ebsd.b} \label{fig.ebsd} \end{figure} \subsection{Nodes Representation} \label{sec.node_construct} \subsubsection{Grain node.} We segment each grain in the grain organization map and map it to the knowledge graph as a grain node. First, we use the Atex software to count and segment all the grains in a grain organization map. Then we individually number each grain so that all the grains are uniquely identified, and finally, we build the corresponding nodes in the graph. As shown in Figure \ref{fig.grain2node2}, the left side corresponds to the grains of Figure \ref{fig.ebsd.b}, and the right side shows the nodes to be built. The original grains correspond to the grain nodes one-to-one. The grain node is the main node entity in the graph, reflecting the existence and distribution of the grains. \begin{figure} \caption{The grain node corresponds to the original grain.} \label{fig.grain2node2} \end{figure} \subsubsection{Grain size attribute node.} Next, we construct grain size attribute nodes used to discretize and identify the grain size. First, we discretize the size of the crystal grains. As shown in Figure \ref{fig.size2node}, the color represents the difference in the size of the grains, $SIZE_{max}$ represents the largest grain size, and $SIZE_{min}$ represents the smallest grain size. The grain sizes are divided into $N_{SIZE}$ levels, and the interval size of each level is $(SIZE_{max}-SIZE_{min})/N_{SIZE}$. We regard each interval as a category; as shown in Equation \ref{eq1}, each grain is assigned to the corresponding category according to its size. We use the discretized category to represent the grain size instead of the original value. Then we construct a size attribute node for each size category, as shown in Figure \ref{fig.size2node}. Finally, we use the one-hot method to encode these $N_{SIZE}$ categories and use the one-hot encoding as the feature of the size attribute node. \begin{equation} L\_S_{node}=\lceil Grain.size/\lceil (SIZE_{max}-SIZE_{min})/N_{SIZE}\rceil\rceil \label{eq1} \end{equation} where $L\_S_{node}$ represents the size category of the grain, $Grain.size$ is the circle-equivalent diameter of the grain, and $\lceil\rceil$ means rounding up. (A short code sketch of this discretization, together with the orientation discretization and the edge rules below, is given at the end of the subsection on edge representation.) \begin{figure} \caption{Grain size discretization and corresponding size attribute nodes. The size of the grains is marked with different colors; from left to right, the grains become larger. The size interval $[SIZE_{min}, SIZE_{max}]$ is divided into $N_{SIZE}$ levels, and a size attribute node is constructed for each level.} \label{fig.size2node} \end{figure} \subsubsection{Grain orientation attribute node.} In this work, Euler angles are used to identify the orientation of grains. Similarly, we also discretize the Euler angles, as shown in Figure \ref{fig.ori2node}. The orientation of a grain is determined by the Euler angles in three directions, so we discretize the Euler angles in the three directions and combine them. The resulting combinations of Euler angle intervals are the discretized types of orientation. Specifically, as shown in Equation \ref{eq2}, we first calculate the maximum and minimum values of the three Euler angles $\phi(\phi1, \phi, \phi2)$ over all grains, namely $\bm{\phi_{max}}=\{\phi1_{max}, \phi_{max}, \phi2_{max}\}$ and $\bm{\phi_{min}}=\{\phi1_{min}, \phi_{min}, \phi2_{min}\}$.
Then each Euler angle $\phi(\phi1, \phi, \phi2)$ is divided into $N_\phi$ equal parts, the length of each part is $( \bm{\phi_{max}}-\bm{\phi_{min}})/N_\phi$. Finally, the $N_\phi$ equal parts of each Euler angle are cross-combined to obtain $N_ \phi^3$ combinations. We regard each combination as a kind of orientation, i.e., there are $N_\phi^3$ orientation categories. For each grain, we can map it to one of $N_\phi^3$ categories according to its three Euler angles $\phi(\phi1, \phi, \phi2)$, As shown in the equation \ref{eq2}. In this way, all crystal grains are divided into a certain type of orientation. We construct an orientation attribute node for each type of orientation to represent orientation information. Similarly, we use the one-hot method to encode these $N_\phi^3$ categories individually. Each orientation category will be represented by a $N_\phi^3$-dimensional one-hot vector used as the feature of the corresponding orientation attribute node. \begin{equation} \begin{aligned} L\_O_{node} = \{ &\lceil Grain.\phi1/ \lceil(\phi1_{max}-\phi1_{min})/N_\phi \rceil \rceil, \\ &\lceil Grain.\phi/ \lceil(\phi_{max}-\phi_{min})/N_\phi \rceil \rceil, \\ &\lceil Grain.\phi2/ \lceil(\phi2_{max}-\phi2_{min})/N_\phi \rceil \rceil\} \end{aligned} \label{eq2} \end{equation} where $Grain.\phi1$, $Grain.\phi$ and $Grain.\phi2$ are the euler angles in the three directions. $L\_O_{node}$ represents the orientation category to which the grains are classified. $\lceil\rceil$ refers to rounding up. \begin{figure} \caption{Grain orientation discretization and corresponding orientation attribute nodes. On the left are three directions Euler angles, whose angles are represented by the RGB color. Each Euler angle is divided into $N_{\phi} \label{fig.ori2node} \end{figure} \subsection{Edges Representation} After the construction of the node, the edges between the nodes need to be constructed. The nodes reflect the entities in the graph, and the edges contain the structural information of the graph. We build edges in the grain knowledge graph based on crystallographic knowledge. The constructed edge represents the association between nodes, including position association and property association. The edges between grain nodes reflect position information and grain boundaries; the edges between grain nodes and grain attribute nodes describe the properties of the grains. \subsubsection{Edge between grain nodes.} The contact interface between the grains is called the grain boundary, representing the transition of the atomic arrangement from one orientation to another. Generally speaking, grain boundaries have a significant impact on the various properties of the metal. In order to describe the boundary information of grains, we construct edges between grain nodes. First, we obtain the neighboring grains of each grain and construct the connection between the neighbor grain nodes. In order to further restore more complex grain spatial relationships, we set up a knowledge of neighboring rules. As shown in the equation \ref{eq3}, we use $lp$ to represent the ratio of the bordering edge length of the grain to the total perimeter of the grain. Then we set a threshold $\lambda$, as shown in the equation \ref{eq4}, when the $lp$ of grain A and grain B is greater than or equal to $\lambda$, we set the relationship between the A node and the B node to be a strong correlation; otherwise, it is set to weak correlation. Figure \ref{fig.ref_G2G} shows the edge between grain nodes. 
\begin{equation} lp = bound\_length / perimeter \label{eq3} \end{equation} \begin{equation} Rel\_G\_G(lp)=\begin{cases} \text{Strong association}, & lp \ge \lambda \\ \text{Weak association}, & lp < \lambda \end{cases} \label{eq4} \end{equation} \begin{figure} \caption{Adjacent grains and the edges between them.} \label{fig.ref_G2G} \end{figure} \subsubsection{Edge between grain node and size attribute node.} In Section \ref{sec.node_construct}, we constructed two types of attribute nodes. Here we associate the grain nodes with the attribute nodes to identify the properties of the grains. First, we calculate the grain size category according to Equation \ref{eq1}, and then associate the corresponding grain node with the corresponding size attribute node to form an edge. As shown in Figure \ref{fig.ref_G2S}, we calculate the size categories $\lbrace$m, m, n, r$\rbrace$ of the four grains $\lbrace$1, 2, 3, 4$\rbrace$, and then associate the corresponding grain nodes with the size attribute nodes to form the belonging relationships. \begin{figure} \caption{Edge between grain node and size attribute node.} \label{fig.ref_G2S} \end{figure} \subsubsection{Edge between grain node and orientation attribute node.} Similarly, we identify the orientation category of a grain node by associating it with the corresponding orientation attribute node. As shown in Figure \ref{fig.ref_G2O}, we first split the grains in the map; the bottom left of the picture shows the three Euler angles of the grains. Then we calculate the orientation categories $\lbrace$i, j, k, l$\rbrace$ of the grains $\lbrace$1, 2, 3, 4$\rbrace$ according to Equation \ref{eq2}. Finally, we associate the corresponding orientation attribute node with the corresponding grain node. Figure \ref{fig.ref_G2O} shows the edges between the grain nodes and the orientation attribute nodes, reflecting the discrete orientation characteristics and orientation distribution of the grains. \begin{figure} \caption{Edge between grain node and orientation attribute node.} \label{fig.ref_G2O} \end{figure}
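To make the construction above concrete, the following Python sketch assembles a small grain knowledge graph from a list of grains following Equations (\ref{eq1})--(\ref{eq4}). It is only an illustration: the grain records, the bin counts $N_{SIZE}$ and $N_{\phi}$, the threshold $\lambda$, and the plain-dictionary storage are assumptions made for the example rather than the data structures of our implementation.
\begin{verbatim}
import math

def size_category(size, size_min, size_max, n_size):
    """Eq. (1): discretize the circle-equivalent diameter."""
    width = math.ceil((size_max - size_min) / n_size)
    return math.ceil(size / width)

def orientation_category(phi, phi_min, phi_max, n_phi):
    """Eq. (2): bin each of the three Euler angles separately."""
    cats = []
    for x, lo, hi in zip(phi, phi_min, phi_max):
        width = math.ceil((hi - lo) / n_phi)
        cats.append(math.ceil(x / width))
    return tuple(cats)

def build_grain_graph(grains, neighbours, n_size=10, n_phi=6, lam=0.1):
    """grains: id -> {'size': float, 'euler': (p1, p, p2), 'perimeter': float}
    neighbours: list of (id_a, id_b, shared_boundary_length)."""
    sizes = [g['size'] for g in grains.values()]
    eulers = list(zip(*[g['euler'] for g in grains.values()]))
    phi_min, phi_max = [min(e) for e in eulers], [max(e) for e in eulers]
    nodes, edges = {}, []
    for gid, g in grains.items():
        nodes[gid] = {
            'size_cat': size_category(g['size'], min(sizes), max(sizes), n_size),
            'ori_cat': orientation_category(g['euler'], phi_min, phi_max, n_phi),
        }
        edges.append((gid, ('SIZE', nodes[gid]['size_cat']), 'has_size'))
        edges.append((gid, ('ORI', nodes[gid]['ori_cat']), 'has_orientation'))
    for a, b, bound_length in neighbours:
        lp = bound_length / grains[a]['perimeter']       # Eq. (3)
        rel = 'strong' if lp >= lam else 'weak'          # Eq. (4)
        edges.append((a, b, rel))
    return nodes, edges
\end{verbatim}
The node categories produced here are what the one-hot attribute-node features described above encode.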
\subsection{Grain graph convolutional prediction model} The structured grain knowledge graph can describe the microstructure of the material. Next, we build a graph feature convolution network (the grain graph convolutional network) to embed the grain knowledge graph and realize graph feature extraction. Then, a feature mapping network based on a neural network is built to predict material properties from the graph features. The complete model we built is shown in Figure \ref{fig.model}. \begin{figure} \caption{Grain graph convolutional prediction model. The model is divided into a graph feature extraction part and a performance prediction part. The graph feature extraction network is composed of multiple node-level graph attention networks (GAT) and a path-level attention aggregation network. The prediction network is a multilayer neural network. The graph feature network extracts graph-level features, and the prediction network maps graph-level features to material properties.} \label{fig.model} \end{figure} \subsubsection{Grain graph convolutional network.} The graph feature convolution network is a heterogeneous graph convolution network~\cite{wang2019heterogeneous}. First, the heterogeneous grain knowledge graph is divided into multiple bipartite graphs and isomorphic graphs according to the type of edges. Next, the features of the nodes along the meta-paths of each subgraph are transferred and aggregated. Then the features of the same nodes across the subgraphs are fused, and finally, graph-level characterization nodes are obtained through multiple convolutions. The process of graph convolution is shown at the bottom of Figure \ref{fig.model}. Specifically, the node aggregation process includes node-level feature aggregation and path-level feature aggregation. In node message transmission, we use node-level attention to learn the attention values of adjacent nodes on each meta-path. After completing the message transmission over all meta-paths, we use path-level attention to learn the attention values of the same nodes on different meta-paths. With this double-layer attention, the model can capture the influence factors of the nodes and obtain the optimal combination of multiple meta-paths. Moreover, the nodes in the graph can better exploit the complex heterogeneous structure and rich information of the graph. Equation \ref{eq5} shows the feature aggregation transformation under node-level attention. $\iota$ indexes the different paths/edge types, of which there are $p$ in total. $\alpha_{ij}$ represents the attention score between node $i$ and node $j$. $LeakyReLU$ and $Softmax$ are activation functions, $W$ is a learnable weight matrix, and $\vec{a}$ is a learnable weight vector. $\parallel$ represents concatenation, and $N(i)$ refers to all neighbor nodes of node $i$. $h_i^{k+1}$ represents the layer-$(k+1)$ embedding of node $i$. \begin{equation} \begin{split} \vec{z}_i^{k^{(\iota)}} &= \bm{W}^{k^{(\iota)}} \cdot \vec{h}_i^{k^{(\iota)}} \\ e_{ij}^{k^{(\iota)}} &= LeakyReLU(\vec{a}^{k^{(\iota)}} \cdot [\vec{z}_i^{k^{(\iota)}} \parallel \vec{z}_j^{k^{(\iota)}}]) \\ \alpha_{ij}^{k^{(\iota)}} &= Softmax_j(e_{ij}^{k^{(\iota)}}) \\ \vec{h}_i^{{k+1}^{(\iota)}} &= \sigma(\sum\limits_{j \in {N(i)}^{(\iota)}} \alpha_{ij}^{k^{(\iota)}} \cdot \vec{z}_j^{k^{(\iota)}}) \end{split} \label{eq5} \end{equation} Equation \ref{eq6} shows the change of node characteristics at the path level, where $\beta_{(\iota)}^k$ is the importance coefficient of each meta-path. We first perform a nonlinear transformation on the output $\vec{h}_i^{{k+1}^{(\iota)}}$ of the node-level attention network, and then perform a similarity measurement with a learnable attention vector $\vec{q}$. Next, we input the result of the similarity measurement into the Softmax function to obtain the importance coefficients, and finally perform a weighted summation of the node embeddings over the meta-paths. After completing multiple graph feature convolutions, we obtain graph-level node embeddings. \begin{equation} \begin{split} \beta_{(\iota)}^k &= Softmax(\frac{1}{N(i)} \sum\limits_{\iota \in N(i)} \vec{q} \cdot tanh(\bm{W}^k \cdot \vec{h}_i^{{k+1}^{(\iota)}} + \vec{b}) ) \\ \vec{h}_i^{k+1} &= \sum\limits_{\iota = 1}^p \beta_{(\iota)}^k \cdot \vec{h}_i^{{k+1}^{(\iota)}} \end{split} \label{eq6} \end{equation}
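As an illustration of the node-level step in Equation (\ref{eq5}), the following \texttt{numpy} sketch performs one attention aggregation on a single meta-path. The dimensions, the LeakyReLU slope, the choice $\sigma=\tanh$, and the assumption that every node has at least one neighbour on the meta-path are illustrative; in the actual model the weights $W$ and $\vec{a}$ are learned end-to-end.
\begin{verbatim}
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def node_attention_layer(h, adj, W, a):
    """One node-level attention step (Eq. (5)) on a single meta-path.
    h: (n, d_in) node features, adj: (n, n) 0/1 adjacency,
    W: (d_in, d_out) weights, a: (2*d_out,) attention vector."""
    z = h @ W                                    # z_i = W h_i
    n = z.shape[0]
    e = np.full((n, n), -np.inf)
    for i in range(n):
        for j in np.nonzero(adj[i])[0]:
            e[i, j] = leaky_relu(a @ np.concatenate([z[i], z[j]]))
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)    # softmax over neighbours
    return np.tanh(alpha @ z)                    # h_i' = sigma(sum_j alpha_ij z_j)

# Example with random data: 5 nodes, 4 input and 8 output features.
rng = np.random.default_rng(0)
h = rng.normal(size=(5, 4))
adj = (rng.random((5, 5)) < 0.5).astype(float)
np.fill_diagonal(adj, 1.0)   # ensure every node has a neighbour
out = node_attention_layer(h, adj, rng.normal(size=(4, 8)), rng.normal(size=16))
\end{verbatim}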
\subsubsection{Feature mapping network.} The microstructure-performance relationship is usually studied only qualitatively through statistical methods (e.g., statistics of grain size, orientation, and grain boundaries), and the relationship between microstructure and properties is difficult to obtain through comparative observation or direct calculation. However, artificial neural networks can mine more essential characteristics of data and establish complex relationships between data~\cite{priddy2005artificial}. Since the graph feature convolution network has already extracted the features of the grain knowledge graph, we use a feature mapping network based on a neural network to carry out the prediction task. As shown in Equation \ref{eq7}, $\vec{h}_i$ is the final graph-level node vector and $fc$ is the mapping network. The network comprises a data normalization layer, a fully connected layer, an activation layer, and a dropout layer. Through this network, the features of the grain knowledge graph are mapped to the material property. \begin{equation} prop = \frac{1}{n} \sum_{i=1}^{n} fc(\vec{h}_i) \label{eq7} \end{equation} \section{Experiment} \subsection{Dataset} The experimental data come from EBSD measurements of 19 Mg metal samples and also include the yield strength (ys), tensile strength (ts), and elongation (el) of each sample. The EBSD scan data contain a total of 4.46 million scan points. As a result, the number of nodes in all the constructed knowledge graphs is 40,265, and the number of edges reaches 389,210. We use the EBSD knowledge graph representation as the model input and the mechanical properties as labels. \subsection{Comparison methods and Results} We compare our method with two classes of baselines: traditional machine learning methods based on statistics, and image feature extraction methods based on computer vision. The traditional machine learning methods compute material properties directly from the attribute characteristics of all grains; they include Ridge, SVR, KNN, and ExtraTree. In addition, we use a pre-trained CNN model, ResNet-50, to learn visual features directly from the microstructure map and predict performance. The model performance evaluation results are shown in Table \ref{tb2}. It can be seen that our method is superior to the other methods; it obtains an $R^2$ value of 0.74, which shows that it can extract more effective features. The traditional machine learning methods and the machine vision method obtain acceptable $R^2$ values, which shows that both can capture microstructure characteristics to a certain extent. However, compared with traditional machine learning, the machine vision method does not show much superiority. This is because CNN training requires a larger amount of data, and our current data set is small. \begin{table} \centering \caption{Results of model for ys prediction} \label{tb2} \setlength{\tabcolsep}{2.74mm}{ \begin{tabular}{llll} \hline Model &MSE &MAE &$R^2$ \\ \hline Ridge &112.5 & 6.9 &0.590 \\ SVR &102.7 & 4.8 &0.626 \\ KNN & 97.6 & 3.6 &0.651 \\ ExtraTree &105.9 & 5.6 &0.610 \\ \hline Resnet50 & 94.8 & \textbf{3.1} &0.667 \\ \hline \textbf{Hetero\_GAT (Ours)} &\textbf{73.1} &5.9 &\textbf{0.74} \\ \hline \end{tabular}} \end{table} \section{Conclusion} This paper proposes a novel material organization representation and performance calculation method. First, we use a knowledge graph to construct the EBSD representation. Then, we design a representation learning network to abstract the EBSD representation into graph-level features. Finally, we build a neural network prediction model to predict the corresponding properties. The experimental results prove the effectiveness of our method. Compared with traditional machine learning methods and machine vision methods, our method is more reasonable and practical. \end{document}
\begin{document} \title[Derived equivalences for simplices of higher-dimensional flops]{Relating derived equivalences for \\ simplices of higher-dimensional flops} \author{W.\ Donovan} \address{W.\ Donovan, Yau Mathematical Sciences Center, Tsinghua University, Haidian District, Beijing 100084, China.} \email{[email protected]} \thanks{I am supported by the Yau MSC, Tsinghua University, and the Thousand Talents Plan. I am grateful for travel support from EPSRC Programme Grant EP/R034826/1} \begin{abstract} I study a sequence of singularities in dimension~$4$ and above, each given by a cone of rank~$1$ tensors of a certain signature, which have crepant resolutions whose exceptional loci are isomorphic to cartesian powers of the projective line. In each dimension~$n$, these resolutions naturally correspond to vertices of an $(n-2)$-simplex, and flops between them correspond to edges of the simplex. I show that each face of the simplex may then be associated to a certain relation between flop functors. \end{abstract} \subjclass[2010]{Primary 14F08; Secondary 14J32, 18G80} \keywords{Calabi--Yau manifolds, crepant resolutions, tensors, derived category, derived equivalence, birational geometry, flops, simplices.} \maketitle \setcounter{tocdepth}{1} \section{Introduction} This note is motivated by a desire to better understand the derived autoequivalence groups of $4$-folds and beyond, in particular the contributions coming from birational geometry. I study triangles of birational maps between resolutions of certain singularities, and find that the corresponding flop functors obey a pleasing relation involving spherical twists, extending a well-known story for $3$-folds related by Atiyah flops. \subsection{Singularities and resolutions} Consider the $n$-fold singular cone for $n\geq 3$ given by the rank~$1$ tensors of signature $2^{n-1}$ as follows. \[ Z = \{ v_1 \otimes \dots \otimes v_{n-1} \in V_1 \otimes \dots \otimes V_{n-1} \} \qquad V_i \cong \C^2 \] By a straightforward construction explained later, $Z$ has $n-1$ crepant resolutions. These will be written $X_i$ for $i=1,\dots,n-1$. Each is given by replacing the singularity $0 \in Z$ by an exceptional locus $\Exc \cong (\P^1)^{n-2}$ which, for a given $i$, arises as a product of the $\P V_j$ for $j\neq i$. \begin{figure} \caption{Resolutions and birational maps for small $n$.} \label{functors4I} \end{figure} For $n=3$, we have the $3$-fold Atiyah flop between two resolutions of the cone of singular $2\times 2$ matrices. In general, assigning each resolution $X_i$ to a vertex of an $(n-2)$-simplex, the edges of the simplex correspond to birational maps $X_i \birational X_j$ as illustrated in Figure~\ref{functors4I}. For~$n=4$ we have a~triangle of $4$-folds, and for~$n=5$ we have a~tetrahedron of $5$-folds. \subsection{Equivalences} The birational maps appearing here are family Atiyah flops, and therefore have associated flop functors, which are derived equivalences, illustrated in Figure~\ref{functors4J}. \begin{figure} \caption{Flop functors for $n=4$.} \label{functors4J} \end{figure} This work grew out of interest in describing `derived monodromy' for \mbox{Figure~\ref{functors4J}}, namely an autoequivalence given by composition of a $3$-cycle of equivalences in the triangle. However, it seems that a question with a neater answer is the following.
\begin{center}\it How are routes between two of the vertices of Figure~\ref{functors4J} related?\end{center} Theorem~\ref{mainthm.equiv} below gives such a relation where we take, without loss of generality, the two vertices $X_1$ and $X_3$. Furthermore it gives a relation associated to each face ($2$-simplex) in the analogous diagram for $n>4$. \subsection{Result} I prove the following. \begin{keythm}[Theorems~\ref{thm.keynatiso} and~\ref{thm.equiv}]\label{mainthm.equiv} For $n\geq 4$, write flop functors as follows. \begin{center} \begin{tikzpicture}[scale=1.0] \trianglepic[7] \end{tikzpicture} \end{center} Then there is a natural isomorphism \[ \fun_3 \cong \operatorname{\fun[Tw]}_2 \circ \fun_2 \circ \fun_1 \] where $\operatorname{\fun[Tw]}_2$ is one of the following. \begin{description} \item[Case $n=4$] a spherical twist around the torsion sheaf $\mathcal{O}_\Exc (0,-1)$ on $X_3$, where $\Exc \cong \P V_1 \times \P V_2$, given in Definition~\ref{defn.twist}. \item[Case $n>4$] a family version of this spherical twist, given in Definition~\ref{defn.twistfam}, over base $\P V_4 \times \dots \times \P V_{n-1}$. \end{description} \end{keythm} \noindent For each $n$, replacing the indices $1,2,3$ in the above with general $i,j,k$ gives a relation for each face ($2$-simplex) of the $(n-2)$-simplex. \begin{keyrem} As a mnemonic for Theorem~\ref{mainthm.equiv}, I draw a diagram as follows. \begin{center} \begin{tikzpicture}[scale=1.0] \trianglepic \end{tikzpicture} \end{center} \end{keyrem} I prove Theorem~\ref{mainthm.equiv} by calculating the action of flops and twists on a certain (relative) tilting bundle. \subsection{Related questions} As an immediate corollary of Theorem~\ref{mainthm.equiv}, we get the following formula for~$\operatorname{\fun[Tw]}_2$. \begin{equation*}\tag{$\ast$}\label{equation flops} \fun_3 \circ \fun_1^{-1} \circ \fun_2^{-1} \cong \operatorname{\fun[Tw]}_2 \end{equation*} There are many formulas of the form `flop-flop = twist' (up to taking inverses) in the literature, including~\cite{ADM,ALb,Bar,BB,DS1,DW1,DW3,JL,Har,Tod}. I~hope~\eqref{equation flops} gives a hint of how they may be generalized to flop cycles of length~3 and above. For discussion of the relation of Theorem~\ref{mainthm.equiv} to derived monodromy, in particular to the autoequivalence of $\D(X_1)$ given by a composition of $3$~flop functors, see Remark~\ref{rem.monod}. \subsection{Atiyah flop}\label{sect.Atiyah} For $n=3$, we have resolutions $X_1$ and $X_2$ related by an Atiyah flop, and functors as follows, satisfying the relation shown, where $\operatorname{\fun[Tw]}$ is a spherical twist about $\mathcal{O}_\Exc(-1)$. \begin{center} \begin{tikzpicture}[scale=1.5] \node (0) at (-1,0) {$\D(X_1)$}; \node (1) at (0,0) {$\D(X_2)$}; \node (2) at (1.3,0) {$\id \cong \operatorname{\fun[Tw]} \circ \fun_2 \circ \fun_1$}; \draw[->,transform canvas={yshift=+2.5pt}] (0) -- node[above]{$\fun_1$} (1); \draw[<-,transform canvas={yshift=-2.5pt}] (0) -- node[below]{$\fun_2$} (1); \end{tikzpicture} \end{center} The argument for Theorem~\ref{mainthm.equiv} is an elaboration of a standard argument for this relation: see Section~\ref{sec.discuss} for discussion. \subsection{Related work} The example $n=4$ is studied extensively by Kite~\cite[Sections~5.3 and~7.2]{Kit}. He gives an action of $\pi_1(\cM)$ on the derived categories~$\D(X_i)$ where $\cM$ is a certain Fayet--Iliopoulos parameter space. I expect Theorem~\ref{mainthm.equiv} in the case $n=4$ also follows from his methods. 
Kite uses a realization of the $X_i$ as toric GIT quotients, and the technology of `fractional magic windows'. This may give an alternative, and perhaps swifter, method to prove Theorem~\ref{mainthm.equiv}. However, as it was not needed to obtain the result, I feel the proofs here may be more accessible without it. I also have in mind extensions to more general flops, where a GIT presentation is not available. Halpern-Leistner and Sam~\cite{HLSam} construct similar actions of $\pi_1$ for GIT problems which are `quasi-symmetric'. This condition does not hold for the GIT quotients realizing the $X_i$. \begin{keyrem} The constructions here readily generalize to the case of $V_i \cong \C^d$ for any fixed $d>2$, and I expect that similar results can be proved, see Remark~\ref{rem.gen}. Kite notes that, for $n=4$, the above `fractional magic windows' technology may not apply in this new setting~\cite[Example~4.37]{Kit}.\end{keyrem} \subsection{Contents}Section~\ref{sect.res} describes the resolutions~$X_i$ and flop functors between them. Section~\ref{sect.flopcalc} gives properties of these functors for later use. Sections~\ref{sec.4} and~\ref{sect.higher} prove Theorem~\ref{mainthm.equiv}, first in dimension~$4$ and then in higher dimension by a family construction. Section~\ref{sect.fam} gives further family constructions relating resolutions. \subsection{Notation} When I write $X^{(n)}$ and similar notations, the $(n)$ denotes the dimension~$n$ of the space, or the relative dimension~$n$ of a family. Letters L and~R indicate derived functors throughout, but are sometimes dropped in Section~\ref{sect.higher} for the sake of readability. The bounded derived category of coherent sheaves is denoted by~$\D(X)$. \begin{acks}I am grateful for conversations with Tatsuki Kuwagaki, Xun Lin, Mauricio Romo, Ed Segal, and Weilin Su, and thank the organizers of the conference `McKay correspondence, mutation and related topics' at Kavli IPMU for their work to make the meeting a success during the pandemic. I started calculations for this project while visiting Michael Wemyss at University of Glasgow, and am grateful for his hospitality and support there. \end{acks} \section{Resolutions}\label{sect.res} We construct resolutions of the singularity $Z$ from the introduction, namely \[ Z = \{ v_1 \otimes \dots \otimes v_{n-1} \in V_1 \otimes \dots \otimes V_{n-1} \} \qquad V_i \cong \C^2 \] for $n\geq 3$. To see that $\dim Z = n$, note that $z\in Z - \{ 0 \}$ taken up to scale determines a point of the cartesian power $(\P^1)^{n-1}$. There are various terminologies for~$Z$. For instance, we may describe it as the cone of rank~$1$ tensors of signature~$2^{n-1}$, or as the simple hypermatrices of order $n-1$ in dimension $2$. \subsection{Construction} We construct crepant resolutions $X_i$ of $Z$ for $i=1,\dots,n-1$. \begin{defn} Let $X_1$ be the total space of a rank~$2$ bundle \[ \lab{X_1} V_1 \otimes \mathcal{O}(-1,\dots,-1) \longrightarrow \P V_2 \times \dots \times \P V_{n-1} \] where the line bundle $\mathcal{O}(-1,\dots,-1)$ has degree~$-1$ on each factor $\P V_i$. Other spaces $X_i$ are obtained similarly, by applying the cyclic symmetry of the set $\{ V_i \}$ to replace $V_1$ by $V_i$.\end{defn} \begin{notn} Indices will be written in cyclic order. For instance, for $n=4$ I~write the following. 
\begin{align*} \lab{X_1} & V_1 \otimes \mathcal{O}(-1,-1) \longrightarrow \P V_2 \times \P V_3 \\ \lab{X_2} & V_2 \otimes \mathcal{O}(-1,-1) \longrightarrow \P V_3 \times \P V_1 \\ \lab{X_3} & V_3 \otimes \mathcal{O}(-1,-1) \longrightarrow \P V_1 \times \P V_2 \end{align*} \end{notn} To see the resolution morphism $g_1 \colon X_1 \to Z$ write $X_1$ as follows, where $L_i$ denotes the tautological subspace bundle on $\P V_i$. \[ \lab{X_1} V_1 \otimes (L_2 \boxtimes \dots \boxtimes L_{n-1}) \longrightarrow \P V_2 \times \dots \times \P V_{n-1} \] Then the inclusions $L_i \subset V_i$ induce the required morphism, which is easily seen to contract the zero section $\Exc_1 \subset X_1$ while being an isomorphism elsewhere. The other $g_i$ are obtained similarly. \begin{notn} Let $\Exc_i \subset X_i$ be the exceptional locus of the resolution morphism $g_i \colon X_i \to Z$. I often drop subscripts and write $\Exc$ for readability.\end{notn} By the following proposition, the $g_i$ are crepant resolutions. \begin{prop}\label{prop.cy} Each $X_i$ is Calabi--Yau.\end{prop} \begin{proof} The total space of the bundle $X_1$ is Calabi--Yau because both its determinant, and the canonical bundle of its base $\P V_2 \times \dots \times \P V_{n-1}$, are isomorphic to $\mathcal{O}(-2,\dots,-2)$. The same holds for the other $X_i$ by the cyclic symmetry.\end{proof} I set notation for sheaves and bundles on the $X_i$ for $n=4$. \begin{notn}\label{sect bun notation} For $X_1$, let $\mathcal{O}_\Exc(a,b)$ denote a line bundle on $\Exc_1 \cong \P V_2 \times \P V_3$ considered as a torsion sheaf on $X_1$. Let $\mathcal{O}(a,b)$ denote a line bundle on $X_1$ given by pullback from the base $\P V_2 \times \P V_3$. Similar notations are used for the other $X_i$. \end{notn} \subsection{Flop functors} \label{sect flop fun} If we blow up the zero section $\Exc_i \subset X_i$ for any $i$ we obtain \[ \lab{Q} \mathcal{O}(-1,\dots,-1) \longrightarrow \P V_1 \times \dots \times \P V_{n-1} \] with a blowup map $f_i \colon Q \to X_i$. Using these $f_i$ to form birational roofs, we have birational maps \[ \phi_{ji} \colon X_i \,\rational X_j. \] \begin{defn}\label{def.ffunc} We write functors \[ \fun_{ji} = \RDerived f_{j*} \circ \LDerived f_i^* \colon \D(X_i) \longrightarrow \D(X_j). \] \end{defn} These functors are equivalences. For $n=3$ they are simply the Bondal--Orlov equivalences for the Atiyah flop. For $n\geq 4$, the $\phi_{ji}$ are family Atiyah flops which implies that the $\fun_{ji}$ are equivalences. This is explained in Section~\ref{sect.fam_descrip} for $n=4$, and a similar argument using Proposition~\ref{prop.gen_fam} suffices for general $n$. \begin{rem} For $n=4$ the birational maps $\phi_{ji}$ may be drawn as follows. Note that at each $X_i$ the two birational maps have the same exceptional locus $\P^1 \times \P^1$, but that each is a flop of a different ruling. \begin{center} \begin{tikzpicture}[scale=0.9] \trianglepic[2] \end{tikzpicture} \end{center} \end{rem} \begin{rem}\label{rem.gen} The constructions in this section extend to the setting where $V_i \cong \C^d$ for any fixed $d \geq 2$. The flops appearing are then families of standard flops of~$\P^{k-1}$. It would be interesting to prove similar results in this case. For comparison, see \cite{ADM} for a `flop-flop = twist' formula for such flops. \end{rem} \subsection{Notation for families} \label{sect notat} For a given $n$, each resolution may be constructed as a non-trivial family of the analogous resolution for $n-1$. 
I explain the $n=4$ case in the following Section~\ref{sect.flopcalc}. The general case, which is analogous but notationally more complex, is deferred to Proposition~\ref{prop.iterate}. Here I give the notation that will be used in these constructions. To specify a particular $n$, the notation~$X_i^{(n)}$ is used. To specify furthermore the vector spaces used in the construction, I write the following. \[X_i^{(n)}(V_1,\dots,V_{n-1})\] The construction will often be repeated in a family, replacing the vector spaces $V_i$ with vector bundles $\pi_i$ over some base $B$. The result is denoted as follows. \[\familyX_i^{(n)}(\pi_1,\dots,\pi_{n-1})\] Finally, similar notations are used for other constructions, for instance the birational roof~$Q$ from Section~\ref{sect flop fun} may be written as $Q^{(n)}(V_1,\dots,V_{n-1}).$ \section{Flop calculations}\label{sect.flopcalc} I explain how each resolution for $n=4$ may be constructed as a non-trivial family of the analogous resolutions for $n=3$, and use this to calculate the effect of the flop functors for $n=4$ on certain objects, for use in the proof of Theorem~\ref{thm.keynatiso}. This calculation is routine, but I write it out in full to show how the non-triviality is handled, and anticipating a further family version in Section~\ref{sect.higher}. \subsection{Family construction}\label{sect.fam_descrip} Recall that we take the following. \[\lab{X_1^{(4)}} V_1 \otimes \mathcal{O}(-1,-1) \longrightarrow \P V_2 \times \P V_3\] The proposition below realizes $X_1^{(4)}$ as a family of copies of $X_1^{(3)}$ over $\P V_3$. \begin{prop}\label{prop.fam4} Using the notation of Section~\ref{sect notat}, we have that \begin{equation}\label{eqn.family1} X_1^{(4)}(V_1, V_2, V_3) \cong \familyX_1^{(3)} (\pi_1, \pi_2) \end{equation} where we take bundles \begin{align*} \lab{\pi_1} & V_1\otimes\mathcal{O} \longrightarrow \P V_3 \\ \lab{\pi_2} & V_2\otimes\mathcal{O}(-1) \longrightarrow \P V_3 \end{align*} so that $\pi_1$ is the trivial bundle with fibre $V_1$. \end{prop} \begin{proof} Writing $ \operatorname{Tot} (\P \pi_2) $ for the total space of $\P \pi_2$, there is an isomorphism \[ \operatorname{Tot} (\P \pi_2) \cong \P V_2 \times \P V_3 \] under which $\mathcal{O}_{\P \pi_2}(-1)$ corresponds to $\mathcal{O}(-1,-1)$, where the second $-1$ comes from the definition of~$\pi_2$, giving the claim. \end{proof} We now extend the argument to birational roofs. We have an isomorphism \begin{equation}\label{eqn.family2} X_2^{(4)}(V_1, V_2, V_3) \cong \familyX_2^{(3)} (\pi_1, \pi_2) \end{equation} using that $ \operatorname{Tot} (\P \pi_1) \cong \P V_3 \times \P V_1$ under which $\mathcal{O}_{\P \pi_1}(-1)$ corresponds to $\mathcal{O}(0,-1)$. Furthermore, we have an isomorphism \begin{equation}\label{eqn.family3}Q^{(4)}(V_1, V_2, V_3) \cong \familyQ^{(3)}(\pi_1, \pi_2)\makebox[0pt]{ .} \end{equation} The isomorphisms \eqref{eqn.family1}, \eqref{eqn.family2} and \eqref{eqn.family3} intertwine birational roof diagrams of blowup maps $f_i \colon Q \to X_i$ as follows. 
\begin{center} \begin{tikzpicture}[scale=0.6,xscale=1.7] \node (14) at (0,0) {$X_1^{(4)}(V_1, V_2, V_3) $}; \node (r4) at (1,1) {$Q^{(4)}(V_1, V_2, V_3) $}; \node (24) at (2,0) {$X_2^{(4)}(V_1, V_2, V_3) $}; \draw[->] (r4) -- (14); \draw[->] (r4) -- (24); \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.6,xscale=1.5] \node (13) at (0,0) {$\familyX_1^{(3)} (\pi_1, \pi_2)$}; \node (r3) at (1,1) {$Q^{(3)} (\pi_1, \pi_2)$}; \node (23) at (2,0) {$\familyX_2^{(3)} (\pi_1, \pi_2)$}; \draw[->] (r3) -- (13); \draw[->] (r3) -- (23); \end{tikzpicture} \end{center} The right-hand diagram is a family of $3$-fold Atiyah flops. It follows that the flop functor $\fun_{21}$ from Definition~\ref{def.ffunc} using the left-hand diagram is an equivalence. \subsection{Flop functors}\label{section.flop} The above construction lets us calculate the effect of the flop $X_1 \rational X_2$ for $n=4$ on the derived category. Recall that $\mathcal{O}(a,b)$ denotes a certain line bundle on each of the $X_i$ by the convention of Notation~\ref{sect bun notation}. \begin{prop}\label{prop flop action general} For $n=4$, the flop functor \[ \fun \colon \D(X_1) \to \D(X_2) \] acts as follows, where we write projections $\rho_i\colon X_i \to \P V_3$. \begin{enumerate} \item\label{prop flop action general a} For any $\varobj \in \D(\P V_3)$, taking $\mathcal{A}_1,\mathcal{A}_0 \in \D(X_1)$ given by \begin{align*} \mathcal{A}_1 &= \rho_1^* \varobj \otimes \mathcal{O}(-1,0) \\ \mathcal{A}_0 &= \rho_1^* \varobj \end{align*} we have the following. \begin{align*} \fun(\mathcal{A}_1) & \cong \rho_2^* \varobj \otimes \mathcal{O}(1,1) \\ \fun(\mathcal{A}_0) & \cong \rho_2^* \varobj \end{align*} \item\label{prop flop action general b} There exist canonical inclusions, given at the end of the proof, such that the following diagram commutes, with $\Hom$s taken in the derived category. \begin{center} \begin{tikzpicture}[scale=0.8,xscale=1.3] \node (1) at (-1,0) {$\Hom(\mathcal{A}_1,\mathcal{A}_0)$}; \node (2) at (1,0) {$\Hom(\fun(\mathcal{A}_1),\fun(\mathcal{A}_0))$}; \node (1b) at (-0.05,-1) {$V_2^\vee$}; \draw[->] (1) to node[above]{\scriptsize $\fun$} (2); \draw[left hook->] (1b) to (1); \draw[right hook->] (1b) to (2); \end{tikzpicture} \end{center} \end{enumerate} \end{prop} \begin{proof} The $4$-fold $X_1$ is isomorphic to the family of $3$-folds $X^{(3)}_1(\pi_1, \pi_2)$ over $\P V_3$ by Proposition~\ref{prop.fam4}. Under this isomorphism $\mathcal{A}_1,\mathcal{A}_0 \in \D(X_1)$ go to \begin{equation}\label{eq.flop_inter1} \rho_1^* ( \varobj \otimes \mathcal{O}(1) ) \otimes \mathcal{O}_{\P \pi_2}(-1) \quad\text{and}\quad \rho_1^* \varobj \end{equation} where we reuse $\rho_1$ for the projection from $X^{(3)}_1(\pi_1, \pi_2)$ to $ \P V_3$. The twist $\otimes \mathcal{O}(1)$ appearing here is dual to the twist in the definition of $\pi_2$.
Then the flop \begin{equation}\label{eqn.familyflop} X^{(3)}_1(\pi_1, \pi_2) \rational X^{(3)}_2(\pi_1, \pi_2)\end{equation} is a family over $\P V_3$ of copies of the $3$-fold Atiyah flop $Y_1 \rational Y_2$ with \begin{align*} \lab{Y_1} & W_1 \otimes \mathcal{O}(-1) \longrightarrow \P W_2 \\ \lab{Y_2} & W_2 \otimes \mathcal{O}(-1) \longrightarrow \P W_1 \end{align*} where $\dim W_i = 2$. Write $\fun[G]\colon \D(Y_1) \to \D(Y_2)$ for the flop functor. The following description of the effect of $\fun[G]$ is obtained by standard arguments, see for instance~\cite[Proposition~1]{DS2}. Letting \[\mathcal{B}_1 = \mathcal{O}(-1) \qquad \mathcal{B}_0 = \mathcal{O}\] where $\mathcal{O}(k)$ denotes a bundle on $Y_i$ obtained by pullback from $\P W_i$, we have \[\fun[G](\mathcal{B}_1) = \mathcal{O}(+1) \qquad \fun[G](\mathcal{B}_0) = \mathcal{O}\makebox[0pt]{ .}\] Furthermore we have a commutative triangle \begin{center} \begin{tikzpicture}[scale=0.8,xscale=1.3] \node (1) at (-1,0) {$\Hom(\mathcal{B}_1,\mathcal{B}_0)$}; \node (2) at (1,0) {$\Hom(\fun[G](\mathcal{B}_1),\fun[G](\mathcal{B}_0))$}; \node (1b) at (-0.05,-1) {$W_2^\vee$}; \draw[->] (1) to node[above]{\scriptsize $\fun[G]$} (2); \draw[left hook->] (1b) to (1); \draw[right hook->] (1b) to (2); \end{tikzpicture} \end{center} where the inclusions are given by observing the following. \begin{align*} \Hom(\mathcal{B}_1,\mathcal{B}_0) \cong \RDerived\Gamma_{Y_1} \,\mathcal{O}(+1) & \cong \RDerived\Gamma_{\P W_2} \big( \mathcal{O}(+1) \otimes \operatorname{Sym}^\bullet ( W_1^\vee \otimes \mathcal{O}(1)) \big) \\ \Hom(\fun[G](\mathcal{B}_1),\fun[G](\mathcal{B}_0)) \cong \RDerived\Gamma_{Y_2} \,\mathcal{O}(-1) & \cong \RDerived\Gamma_{\P W_1} \big( \mathcal{O}(-1) \otimes \operatorname{Sym}^\bullet ( W_2^\vee \otimes \mathcal{O}(1)) \big) \end{align*} These have no higher cohomology by standard vanishing on $\P^1$, and after taking $0^{\text{th}}$ cohomology, the pieces coming from $\operatorname{Sym}^0$ and $\operatorname{Sym}^1$ respectively are both $W_2^\vee$. Now repeating the standard arguments in a family, we find that the flop functor for the flop \eqref{eqn.familyflop} applied to the objects~\eqref{eq.flop_inter1} gives \begin{equation}\label{eq.flop_inter2} \rho_2^* ( \varobj \otimes \mathcal{O}(1) ) \otimes \mathcal{O}_{\P \pi_1}(+1) \quad\text{and}\quad \rho_2^* \varobj \end{equation} where we reuse $\rho_2$ for the projection from $X^{(3)}_2(\pi_1, \pi_2)$ to $ \P V_3$. Under the isomorphism~\eqref{eqn.family2} it is easily seen that the objects~\eqref{eq.flop_inter2} go to the required objects on~$X_2$. For the last part, we also repeat the argument for the Atiyah flop in a family. The inclusions in the statement are given by observing the following.
\begin{align*} \Hom(\mathcal{A}_1,\mathcal{A}_0) & \cong \RDerived\Gamma_{X_1}\, \mathcal{O}(1,0) \\ & \cong \RDerived\Gamma_{\P V_2 \times \P V_3} \big( \mathcal{O}(1,0) \otimes \operatorname{Sym}^\bullet ( V_1^\vee \otimes \mathcal{O}(1,1)) \big) \\[3pt] \Hom(\fun(\mathcal{A}_1),\fun(\mathcal{A}_0)) & \cong \RDerived\Gamma_{X_2}\, \mathcal{O}(-1,-1) \\ & \cong \RDerived\Gamma_{\P V_3 \times \P V_1} \big( \mathcal{O}(-1,-1) \otimes \operatorname{Sym}^\bullet ( V_2^\vee \otimes \mathcal{O}(1,1)) \big) \end{align*} These have no higher cohomology by standard vanishing on $\P^1 \times \P^1$, and after taking $0^{\text{th}}$ cohomology, the pieces coming from $\operatorname{Sym}^0$ and $\operatorname{Sym}^1$ respectively are both $V_2^\vee$. \end{proof} The following describes the action of the flop functors on certain line bundles, again using the conventions of Notation~\ref{sect bun notation}. \begin{prop}\label{prop flop action} For $n=4$, considering the flop functors \begin{center} \begin{tikzpicture}[scale=1.0] \trianglepic[3] \end{tikzpicture} \end{center} between \begin{align*} \lab{X_1} & V_1 \otimes \mathcal{O}(-1,-1) \longrightarrow \P V_2 \times \P V_3 \\ \lab{X_2} & V_2 \otimes \mathcal{O}(-1,-1) \longrightarrow \P V_3 \times \P V_1 \\ \lab{X_3} & V_3 \otimes \mathcal{O}(-1,-1) \longrightarrow \P V_1 \times \P V_2 \end{align*} we have the following. \begin{align*} \lab{\fun_{21}} \mathcal{O}(0,b) & \mapsto \mathcal{O}(b,0) \\ \mathcal{O}(-1,b) & \mapsto \mathcal{O}(b+1,1) \\[3pt] \lab{\fun_{31}} \mathcal{O}(a,0) & \mapsto \mathcal{O}(0,a) \\ \mathcal{O}(a,-1) & \mapsto \mathcal{O}(1,a+1) \end{align*} \end{prop} \begin{rem}\label{rem cycle} From this proposition we may deduce the action of all $\fun_{ji}$ by cyclic symmetry. Because of our conventions, the statements for $\fun_{32}$ and $\fun_{13}$ read the same as for $\fun_{21}$, and so on. \end{rem} \begin{proof} The claim for $\fun_{21}$ is from Proposition~\ref{prop flop action general} using isomorphisms as follows, and the claim for $\fun_{31}$ is obtained by symmetry. \[ \mathcal{O}(0,b) \cong \rho_1^* \mathcal{O}(b) \quad\text{and}\quad \mathcal{O}(-1,b) \cong \rho_1^* \mathcal{O}(b) \otimes \mathcal{O}(-1,0)\quad\text{on~$X_1$} \] \[ \mathcal{O}(b,0) \cong \rho_2^* \mathcal{O}(b) \quad\text{and}\quad \mathcal{O}(b+1,1) \cong \rho_2^* \mathcal{O}(b) \otimes \mathcal{O}(1,1)\quad\text{on~$X_2$}\qedhere \] \end{proof} \begin{rem} Propositions~\ref{prop flop action general} and~\ref{prop flop action} may also be obtained directly by adapting an argument of Kawamata~\cite[Proposition~3.1]{Kaw}, after calculating the canonical bundles of the roofs $Q$. \end{rem} The following straightforward observation about the flop functors~$\fun_{ji}$ will be used in the proof of Theorem~\ref{thm.keynatiso}. \begin{prop}\label{prop.intertwine} Writing restriction functors \[ \res_i \colon \D(X_i) \to \D(X_i - \Exc_i) \] we have intertwinements \begin{align*} \res_j \circ \fun_{ji} & \cong g_{ji*} \circ \res_i \end{align*} where we write $g_{ji}\colon X_i - \Exc_i \to X_j - \Exc_j$ for the isomorphism induced by the birational map $\phi_{ji}$. \end{prop} \begin{proof}Each $\res_i$ is by definition a pullback along an open immersion, so this follows using flat base~change. \end{proof} \section{Dimension~$4$}\label{sec.4} Here I prove Theorem~\ref{mainthm.equiv} for dimension~$4$. Though the argument is standard, I~write it in detail, anticipating a family version in Section~\ref{sect.higher}. 
After the proof, some discussion of the method is given in Section~\ref{sec.discuss}. \subsection{Proof} Recall that $\Exc \cong \P V_1 \times \P V_2$ is the exceptional locus of $X_3$. \begin{prop}\label{prop.sph} The object $\mathcal{E}=\mathcal{O}_\Exc(0,-1)$ is spherical in $\D(X_3)$. \end{prop} \begin{proof} Noting that $X_3$ is Calabi--Yau by Proposition~\ref{prop.cy}, we require \[ \Hom(\mathcal{E},\mathcal{E}) \cong \operatorname{H}^*(S^4) \] which follows by a standard spectral sequence calculation, using that for the normal bundle $\mathcal{N}$ of $\Exc$ we have \begin{align*} \mathcal{N} & \cong \mathcal{O}_{\P^1\times \P^1}(-1,-1)^{\oplus 2} \\ \wedge^2 \mathcal{N} & \cong \mathcal{O}_{\P^1\times \P^1}(-2,-2) . \end{align*} See for instance~\cite[Examples~8.10(v)]{Huy} for a similar calculation. \end{proof} \begin{defn}\label{defn.twist} Take the spherical twist autoequivalence on $X_3$ \[ \operatorname{\fun[Tw]}_2 = \operatorname{\fun[Tw]} \big( \mathcal{O}_\Exc(0,-1) \big) \] where the subscript~$2$ is used because pullback of $\mathcal{O}_{\P V_2}(-1)$ gives $\mathcal{O}_\Exc(0,-1)$. The autoequivalence $\operatorname{\fun[Tw]} (\mathcal{E})$ is defined so that there is a triangle of Fourier--Mukai functors \[ \operatorname{\fun[Tw]} (\mathcal{E}) = \Cone\big( \RHom_{X_3}(\mathcal{E},-)\Lotimes \mathcal{E} \to \id \big)\makebox[0pt]{\,,} \] see \cite{ST} or~\cite[Section~8.1]{Huy}. \end{defn} Similarly, we have an autoequivalence on $X_2$ \[ \operatorname{\fun[Tw]}_3 = \operatorname{\fun[Tw]} \big( \mathcal{O}_\Exc(-1,0) \big) \] where the subscript~$3$ is used because pullback of $\mathcal{O}_{\P V_3}(-1)$ gives $\mathcal{O}_\Exc(-1,0)$. \begin{figure} \caption{Relations between functors from Theorem~\ref{thm.keynatiso}}\label{functors.rel} \end{figure} \begin{thm}[Theorem~\ref{mainthm.equiv}]\label{thm.keynatiso} For $n=4$ there are natural isomorphisms \begin{align*} \fun_{21} & \cong \operatorname{\fun[Tw]}_3 \circ \fun_{23} \circ \fun_{31} \\ \fun_{31} & \cong \operatorname{\fun[Tw]}_2 \circ \fun_{32} \circ \fun_{21} \end{align*} of functors from $\D(X_1)$ to $\D(X_2)$ and $\D(X_3)$ respectively, illustrated in Figure~\ref{functors.rel}. \end{thm} \begin{proof} I prove the second statement, as the first follows by the same argument. Using the results of Section~\ref{section.flop}, we evaluate the functors in the proposed isomorphism on the following bundle $\mathcal{T}$ on $X_1$. Note that $\mathcal{T}$ is tilting, by standard methods using the Beilinson tilting bundles $\mathcal{O} \oplus \mathcal{O}(-1)$ on $\P V_2$ and $\P V_3$.
\[ \mathcal{T} = \mathcal{O} \,\oplus\, \mathcal{O}(-1,0) \,\oplus\, \mathcal{O}(0,-1) \,\oplus\, \mathcal{O}(-1,-1) \] By applying functors to the summands using Proposition~\ref{prop flop action}, we find that \[ \setlength\arraycolsep{2pt} \begin{array}{rcccccccc} \fun_{21} (\mathcal{T}) & \cong & \mathcal{O} & \oplus & \mathcal{O}(1,1) & \oplus & \mathcal{O}(-1,0) & \oplus & \mathcal{O}(0,1) \\[3pt] \fun_{31} (\mathcal{T}) & \cong & \mathcal{O} & \oplus & \mathcal{O}(0,-1) & \oplus & \mathcal{O}(1,1) & \oplus & \mathcal{O}(1,0) \end{array} \] and furthermore by Remark~\ref{rem cycle} that \[ \fun_{32}\fun_{21} (\mathcal{T}) \,\cong\, \mathcal{O} \,\oplus\, \calcobj \,\oplus\, \mathcal{O}(1,1) \,\oplus\, \mathcal{O}(1,0)\makebox[0pt]{ ,} \] where we let \begin{equation}\label{eqn cE} \calcobj = \fun_{32} (\mathcal{O}(1,1)) \cong \fun_{32}\fun_{21} (\mathcal{O}(-1,0))\makebox[0pt]{ .} \end{equation} This $\calcobj$ may be described as follows. \begin{align*} \calcobj & \cong \fun_{32} \Cone \big( \!\wedge^2 \! V_3^\vee \otimes \mathcal{O}(-1,1) \overset{\psi_2\,}\longrightarrow V_3^\vee \otimes \mathcal{O}(0,1) \big) \\ & \cong \Cone \big( \!\wedge^2 \! V_3^\vee \otimes \mathcal{O}(2,1) \overset{\psi_3\,}\longrightarrow V_3^\vee \otimes \mathcal{O}(1,0) \big) \end{align*} The first line uses the pullback of the Euler short exact sequence from $\P V_3 \cong \P^1$ to~$X_2$. The second line follows by Proposition~\ref{prop flop action}, where we let $\psi_3 = \fun_{32} (\psi_2)$. To describe $\psi_3$ we apply Proposition~\ref{prop flop action general}(\ref{prop flop action general b}) with $\mathcal{B}=\mathcal{O}(1)$ on $\P V_1$ and cycling the indices as in Remark~\ref{rem cycle}. As in this proposition, $\Hom (\mathcal{O}(-1,1),\mathcal{O}(0,1))$ on $X_2$ has a canonical summand $V_3^\vee$. By construction, $\psi_2$ is induced by this summand. It follows from the proposition that $\psi_3$ is induced by the canonical summand~$V_3^\vee$ of $\Hom (\mathcal{O}(2,1),\mathcal{O}(1,0))$ on $X_3$. Using the Koszul resolution of the exceptional locus $\Exc$ on $X_3$, we therefore find the following. \begin{align}\label{eqn.inv_tw_sh} \calcobj & \cong \Cone \big( \mathcal{O}(0,-1) \overset{\operatorname{res}}{\longrightarrow} \mathcal{O}_\Exc(0,-1) \big)[-1] \notag \\ & \cong \operatorname{\fun[Tw]}_2^{-1} (\mathcal{O}(0,-1)) \notag \\ & \cong \operatorname{\fun[Tw]}_2^{-1} \fun_{31} (\mathcal{O}(-1,0)) \end{align} For the second isomorphism we use that \begin{equation}\label{eqn.invtwist} \operatorname{\fun[Tw]}_2^{-1} = \operatorname{\fun[Tw]} (\mathcal{E})^{-1} \cong \Cone\big(\id \to \RHom_{X_3}(-,\mathcal{E})^\vee \Lotimes \mathcal{E} \,\big)[-1] \end{equation} with $\mathcal{E}=\mathcal{O}_\Exc(0,-1)$, where the morphism is an adjunction unit and \begin{equation}\label{eqn.onkeysheaf} \RHom_{X_3}(\mathcal{O}(0,-1),\mathcal{O}_\Exc(0,-1))^\vee \cong \RDerived\Gamma_\Exc (\mathcal{O}_\Exc)^\vee = \C \end{equation} by projectivity. I now argue that we have the following. \begin{equation}\label{eq.alt_iso} \operatorname{\fun[Tw]}_2^{-1} \fun_{31} (\mathcal{T}) \cong \fun_{32} \fun_{21} (\mathcal{T})\end{equation} We first take a splitting $\mathcal{T}=\mathcal{U} \oplus \mathcal{F}$ with $\mathcal{U}$ given below, and show that \eqref{eq.alt_iso} holds with $\mathcal{T}$ replaced by $\mathcal{U}$.
\[ \mathcal{U} \,=\, \mathcal{O} \,\oplus\, \mathcal{O}(0,-1) \,\oplus\, \mathcal{O}(-1,-1) \] By the above argument $ \fun_{31} (\mathcal{U}) \cong \fun_{32} \fun_{21} (\mathcal{U}) $ with \[ \fun_{31} (\mathcal{U}) \,\cong\, \mathcal{O} \,\oplus\, \mathcal{O}(1,1) \,\oplus\, \mathcal{O}(1,0). \] This is in the kernel of \[ \RHom_{X_3}(-,\mathcal{O}_\Exc(0,-1)), \] and therefore is unchanged up to isomorphism by applying $\operatorname{\fun[Tw]}_2^{-1}$. We deduce that \eqref{eq.alt_iso} holds with $\mathcal{T}$ replaced by $\mathcal{U}$. Combining with the definition~\eqref{eqn cE} and description~\eqref{eqn.inv_tw_sh} of $\calcobj$, we conclude~\eqref{eq.alt_iso}. Finally, we prove the claim by considering the `difference' of the two sides of the claimed isomorphism, namely proving the following natural isomorphism of functors on $\D(X_1)$. \begin{equation}\label{eq.diff_iso} \Psi = \fun_{21}^{-1} \circ \fun_{32}^{-1} \circ \operatorname{\fun[Tw]}_2^{-1} \circ \fun_{31} \cong \id \end{equation} By~\eqref{eq.alt_iso}, $\Psi(\mathcal{T}) \cong \mathcal{T}$, so $\Psi$ induces a composition as follows. \begin{equation}\label{eq.tilt_alg} \End (\mathcal{T}) \overset\sim\longrightarrow \End (\Psi(\mathcal{T})) \cong \End (\mathcal{T}) \end{equation} We will show this is the identity, and thence that \eqref{eq.diff_iso} holds by the tilting equivalence. Recall the restriction functors $\res_i \colon \D(X_i) \to \D(X_i - \Exc)$ of Proposition~\ref{prop.intertwine}. We have \[ \res_1 \circ \, \Psi \cong \res_1 \] by combining the intertwinements of Proposition~\ref{prop.intertwine} with \begin{equation}\label{eq.sph_intertw}\res_2 \circ \operatorname{\fun[Tw]}_2^{-1} \cong \res_2\end{equation} which follows from definition of the twist, in particular that the spherical object is supported on $\Exc$. It follows immediately that \eqref{eq.tilt_alg} intertwines via $\res_1$ with the identity on $\End (\res_1 \mathcal{T})$. Noting that $\res_1 \mathcal{T}$ is just the bundle $\mathcal{T}|_{X_1 - \Exc}$, that $\Exc$ is codimension~$2$, and $X_1$ is smooth thence normal, we deduce that \eqref{eq.tilt_alg} is the identity. Using that $\mathcal{T}$ is tilting so that there is an equivalence $\D(\End (\mathcal{T})) \cong \D(X_1)$, we find that \eqref{eq.diff_iso} holds, and this completes the proof.\end{proof} \begin{rem}\label{rem.monod} I briefly explain how Theorem~\ref{thm.keynatiso} above relates to calculating the derived monodromy around the triangle formed by the $\D(X_i)$, namely, to determining the composition $\fun_{13} \circ \fun_{32} \circ \fun_{21}$. Using the theorem we have the following. \begin{align*} \fun_{13} \circ \fun_{32} \circ \fun_{21} & \cong \fun_{13} \circ \operatorname{\fun[Tw]}_2^{-1} \circ \fun_{31} \\ & \cong (\fun_{13} \circ \fun_{31}) \circ (\fun_{31}^{-1} \circ \operatorname{\fun[Tw]}_2^{-1} \circ \fun_{31}) \end{align*} The two brackets may then be calculated by standard techniques. The first may be expressed as a product of twists of spherical objects by, for instance, flop-flop formulas for toric variation of GIT in~\cite{HLShi}. The second may be expressed as a twist by a spherical object using the following. \[ \Phi \circ \operatorname{\fun[Tw]}(\mathcal{E}) \circ \Phi^{-1} \cong \operatorname{\fun[Tw]}(\Phi \mathcal{E}) \quad \Longleftrightarrow \quad \Phi \circ \operatorname{\fun[Tw]}^{-1}(\mathcal{E}) \circ \Phi^{-1} \cong \operatorname{\fun[Tw]}^{-1}(\Phi \mathcal{E}) \] It would be interesting to carry out this calculation, for this example and more generally. 
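As a first step in this direction (a sketch relying only on the displayed conjugation formula, taking $\Phi = \fun_{31}^{-1}$ and $\mathcal{E} = \mathcal{O}_\Exc(0,-1)$ on $X_3$), the second bracket may be rewritten as an inverse twist on $\D(X_1)$:
\[
\fun_{31}^{-1} \circ \operatorname{\fun[Tw]}_2^{-1} \circ \fun_{31} \;\cong\; \operatorname{\fun[Tw]}^{-1} \big( \fun_{31}^{-1}\, \mathcal{O}_\Exc(0,-1) \big).
\]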
\end{rem} \subsection{Discussion}\label{sec.discuss} For some intuition for the above Theorem~\ref{thm.keynatiso}, I include Figure~\ref{functors4} showing the action of the flop functors on line bundles on the $X_i$. The arrows in Figure~\ref{functors4} indicate source and target for the given functor, up to isomorphism. The dotted arrows indicate the same thing, but where the target $\mathcal{O}(a,b)$ should be replaced with \[\operatorname{\fun[Tw]}_{\mathcal{O}_\Exc(a,b)}^{-1} \mathcal{O}(a,b) \cong \Cone\big(\mathcal{O}(a,b) \to \mathcal{O}_\Exc(a,b)\big)[-1] \cong \mathcal{I}_\Exc(a,b) \] using a similar argument to the proof of Theorem~\ref{thm.keynatiso}. Note also that each flop functor takes $\mathcal{O}$ to~$\mathcal{O}$. Therefore in the diagrams each flop ``cycles'' the bundles by $2\pi / 3$, up to spherical twists. Inspecting Figure~\ref{functors4} we see, for instance, that $\fun_{32} \circ \fun_{21}$ and $\fun_{31}$ give the same results on all summands of the tilting bundle $\mathcal{T}$ except for $\mathcal{O}(-1,0)$. The spherical twist $\operatorname{\fun[Tw]}_2$ in Theorem~\ref{thm.keynatiso} accounts precisely for the disparity. \begin{figure} \caption{Flop functors for $n=4$, with summands of $\mathcal{T}$ indicated}\label{functors4} \end{figure} \begin{rem} There is a similar diagram, albeit simpler and better known, for the case $n=3$, given in Figure~\ref{functors3}. Here we have $X_1$ and $X_2$ related by an Atiyah flop, and functors as follows. \begin{center} \begin{tikzpicture}[scale=1.5] \node (0) at (-1,0) {$\D(X_1)$}; \node (1) at (0,0) {$\D(X_2)$}; \draw[->,transform canvas={yshift=+2.5pt}] (0) -- node[above]{$\fun_{21}$} (1); \draw[<-,transform canvas={yshift=-2.5pt}] (0) -- node[below]{$\fun_{12}$} (1); \end{tikzpicture} \end{center} Then Figure~\ref{functors3} shows the action of the flop functors. The dotted arrow indicates that the target $\mathcal{O}(a)$ should be replaced with \[\operatorname{\fun[Tw]}_{\mathcal{O}_\Exc(a)}^{-1} \mathcal{O}(a) \cong \Cone\big(\mathcal{O}(a) \to \mathcal{O}_\Exc(a)\big) [-1] \cong \mathcal{I}_\Exc(a) \] by standard arguments. This can be used to prove the relation $\id \cong \operatorname{\fun[Tw]} \circ \fun_2 \circ \fun_1$ from Section~\ref{sect.Atiyah} for the Atiyah flop, by a simple analogue of the argument of Theorem~\ref{thm.keynatiso}. Namely, we check the relation on the tilting bundle $\mathcal{O}(-1)\oplus\mathcal{O}$, and then deduce that it holds in general.
\begin{figure} \caption{Flop functors $\fun_{21}$ and $\fun_{12}$ for $n=3$}\label{functors3} \end{figure} \end{rem} \section{Higher dimension} \label{sect.higher} Having proved the result for $n=4$ in Section~\ref{sec.4}, I explain how to extend to $n>4$ by a family construction. \subsection{Family construction} I realize $X_1^{(n)}$ as a family of $X_1^{(4)}$ over base \[ B = \P V_4 \times \dots \times \P V_{n-1}\makebox[0pt]{ .} \] \begin{prop}\label{prop.famhigher} For $n>4$ we have for $i=1,2,3$ \[ X_i^{(n)}(V_1, \dots, V_{n-1}) \cong X_i^{(4)} (\pi_1, \pi_2, \pi_3 ) \] where we take the following bundles. \begin{align*} \lab{\pi_1} & V_1\otimes\mathcal{O} \longrightarrow B \\ \lab{\pi_2} & V_2\otimes\mathcal{O} \longrightarrow B \\ \lab{\pi_3} & V_3\otimes\mathcal{O}(-1,\dots,-1) \longrightarrow B \end{align*} \end{prop} \begin{proof} Recalling that we have \[ \lab{X_1^{(n)}} V_1 \otimes \mathcal{O}(-1,-1,-1,\dots,-1) \longrightarrow \P V_2 \times \P V_3 \times \P V_4 \times \dots \times \P V_{n-1} \] the result follows by the methods of Section~\ref{sect.fam_descrip}. \end{proof} Proposition~\ref{prop.famhigher} also holds trivially for $n=4$, if we take $B$ to be a point. \begin{rem} The isomorphisms of Proposition~\ref{prop.famhigher} express each $n$-fold as a family of $4$-folds, similarly to how the isomorphisms~\eqref{eqn.family1} and~\eqref{eqn.family2} expressed $4$-folds as a family of~$3$-folds. \end{rem} \subsection{Family spherical twist}\label{sect.famtwist} I construct a family spherical twist on~$X_3$ for $n\geq 4$ generalizing the twist~$\operatorname{\fun[Tw]}_2$ for $n=4$ from Definition~\ref{defn.twist}. Via the isomorphism of Proposition~\ref{prop.famhigher}, the exceptional locus on $X_3$ is \[\Exc = \P \pi_1 \underset{B}{\times} \P \pi_2 ,\] a bundle over $B$ with fibre $\P V_1 \times \P V_2$. Write \[\mathcal{O}_\Exc(a,b) = \mathcal{O}_{\P \pi_1}(a) \underset{B}{\boxtimes} \mathcal{O}_{\P \pi_2}(b)\] and consider this as a torsion sheaf on $X_3$ via the inclusion. This is a relative analog of Notation~\ref{sect bun notation}. Continuing the analogy, write $\mathcal{O}(a,b)$ for the pullback of $\mathcal{O}_\Exc(a,b)$ to $X_3$, and use similar notation for the other $X_i$. I define the following functor, which I will show to be spherical. Here and in the next subsection, the letters L and R on derived functors are often dropped for the sake of readability. \begin{defn} Take the functor \begin{align*} S & = \sphobj \otimes \tau^*(-) \colon \D(B) \to \D(X_3) \end{align*} where $\sphobj=\mathcal{O}_\Exc(0,-1)$ on $X_3$ and $\tau \colon X_3 \to B$ denotes the projection morphism. \end{defn} \begin{prop}\label{prop.adj} We have adjoints to $S$ as follows. \begin{align*} L & = \tau_! \sHom ( \sphobj, - ) \cong \tau_* \sHom ( -, \sphobj )^\vee \\ R & = \tau_* \sHom ( \sphobj, - ) \end{align*} \end{prop} \begin{proof} First note that $\sphobj^\vee \otimes -$, where we take derived dual, is a two-sided adjoint to $\sphobj \otimes - $. We then use that $\tau_! \dashv \tau^* \dashv \tau_*$. The isomorphism follows after noting that $\tau_! \cong \tau_*(-^\vee)^\vee$. \end{proof} \begin{prop} $S$ is a spherical functor. \end{prop} \begin{proof} Using the framework of Anno--Logvinenko~\cite{AL}, $S$ yields a cotwist endofunctor $C$ of $\D(B)$, satisfying the following. \[ C \cong \Cone(\id \to RS)[-1]\] It suffices that this is an equivalence, along with a certain Calabi--Yau condition, namely that a canonical natural transformation $R\to CL[1]$ is an isomorphism.
Now we have that \begin{align*} RS & \cong \tau_* \sHom ( \sphobj, \sphobj \otimes \tau^*(-) ) \\ & \cong \tau_* \big( \sHom ( \sphobj, \sphobj) \otimes \tau^*(-) \big) \\ & \cong \tau_* \sHom ( \sphobj, \sphobj) \otimes - \end{align*} where all functors are derived. Using a family version of the argument of Proposition~\ref{prop.sph}, we may then calculate $\tau_* \sHom ( \sphobj, \sphobj)$ and obtain an isomorphism \[RS \cong \id \oplus\, (-\otimes\omega_B) [-4]\] where $\omega_B \cong \mathcal{O}(-2,\dots,-2)$. The $\omega_B$ in this calculation arises from the determinant of the relative normal bundle of $\Exc$ over $B$, namely \[\wedge^2 \mathcal{N}_{\text{rel}} \cong \mathcal{O}_{\Exc}(-2,-2) \otimes \sigma^*\omega_B \] where we take projection $\sigma\colon \Exc \to B$. It follows that the cotwist \[ C \cong (-\otimes\omega_B)[-5]\] which is an autoequivalence. I briefly explain how the Calabi--Yau condition mentioned above is obtained from the Calabi--Yau property of the target space $X_3$. Note that for the fibration $\tau\colon X_3 \to B$ we have $\omega_\tau \cong \tau^* \omega_B^{-1}$ because $X_3$ is Calabi--Yau by Proposition~\ref{prop.cy}. Then recall \[ \tau_! = \tau_* ( - \otimes \omega_\tau [\dim \tau]) \cong \tau_* (-) \otimes \omega_B^{-1} [4] \] where we use the projection formula. The condition then follows using Proposition~\ref{prop.adj}. \end{proof} \begin{defn}\label{defn.twistfam} Take the spherical twist autoequivalence on $X_3$ \begin{equation*} \operatorname{\fun[Tw]}_2 = \operatorname{\fun[Tw]} (S) \end{equation*} where the subscript 2 is used because under the isomorphism $\Exc \cong \P V_1 \times \P V_2 \times B$ the bundle $\mathcal{O}_\Exc(0,-1)$ on $\Exc$ appearing in the definition of $S$ is the pullback of~$\mathcal{O}_{\P V_2}(-1)$. The autoequivalence $\operatorname{\fun[Tw]} (S)$ is defined so that there is a triangle of Fourier--Mukai functors \begin{equation*} \operatorname{\fun[Tw]} (S) \cong \Cone(SR \to \id)\makebox[0pt]{\,,} \end{equation*} as explained in~\cite{AL}. \end{defn} Similarly, we have an autoequivalence $\operatorname{\fun[Tw]}_3$ on $X_2$, by repeating the construction of this subsection using instead $\sphobj=\mathcal{O}_\Exc(-1,0)$ on $X_2$ and projection $\tau \colon X_2 \to B$. \begin{rem} Definition~\ref{defn.twistfam} reduces to Definition~\ref{defn.twist} when $n=4$, using Proposition~\ref{prop.adj}. \end{rem} \begin{rem} The above twists can be formulated as EZ-twists \cite{Hor}, see for instance \cite[Definition~8.43]{Huy}. Indeed, taking projection $\sigma\colon \Exc \to B$ and inclusion $i\colon \Exc \to X_3$ we get that \begin{align*} S & \cong i_* (\sphobj \otimes \sigma^* (-)) \end{align*} using $\tau \circ i = \sigma$ and the projection formula. \end{rem} \subsection{Proof} The following uses a family version of the argument of Theorem~\ref{thm.keynatiso} to generalize the result there to dimension $n> 4$. \begin{thm}[Theorem~\ref{mainthm.equiv}]\label{thm.equiv} For $n\geq 4$ there are natural isomorphisms \begin{align*} \fun_{21} & \cong \operatorname{\fun[Tw]}_3 \circ \fun_{23} \circ \fun_{31} \\ \fun_{31} & \cong \operatorname{\fun[Tw]}_2 \circ \fun_{32} \circ \fun_{21} \end{align*} of functors from $\D(X_1)$ to $\D(X_2)$ and $\D(X_3)$ respectively, where the twists~$\operatorname{\fun[Tw]}$ are defined in Definition~\ref{defn.twistfam} and following it. \end{thm} \begin{proof} As before, we prove the second statement, with the first following by the same argument.
We replace the spaces $X_i$ for $i=1,2,3$ in the proof of Theorem~\ref{thm.keynatiso} with \[ \familyX_i^{(4)} (\pi_1,\pi_2, \pi_3) ,\] writing projections $\tau_i\colon X_i^{(4)} \to B$, or simply $\tau$. The proof proceeds by repeating the argument of Theorem~\ref{thm.keynatiso} in a family over $B$. I explain the key modifications needed for this relative context. The tilting bundle $\mathcal{T}$ of Theorem~\ref{thm.keynatiso}, namely \[ \mathcal{T} = \mathcal{O} \,\oplus\, \mathcal{O}(-1,0) \,\oplus\, \mathcal{O}(0,-1) \,\oplus\, \mathcal{O}(-1,-1) \] makes sense in the relative context using the notation of Section~\ref{sect.famtwist}. It is now a relative tilting bundle over $B$, as follows. Taking $\mathcal{R} = \tau_* \sEnd(\mathcal{T})$, a sheaf of algebras on $B$, we have \[ \D(\mathcal{R}) \cong \D(X_1) \] where we take right $\mathcal{R}$-modules, with mutually inverse equivalences as follows. \[\RDerived\tau_* \RsHom(\mathcal{T},-) \qquad\qquad \tau^{-1}(-) \underset{\tau^{-1}\mathcal{R}}{\Lotimes} \mathcal{T}\] The description of the flop functors is similar to Propositions~\ref{prop flop action general} and~\ref{prop flop action}. We replace the $\mathcal{A}_i$ as written there with $\mathcal{A}_i(\baseobj)$ for any $\baseobj\in \D(B)$, where we put \[ \mathcal{A}(\baseobj) = \tau^{-1}(\baseobj) \underset{\tau^{-1} \mathcal{O}_B}\otimes \mathcal{A}. \] In place of the commuting diagram in Proposition~\ref{prop flop action general}(\ref{prop flop action general b}) it suffices to prove the result with the following diagram. \begin{center} \begin{tikzpicture}[scale=0.8,xscale=1.8] \node (1) at (-1,0) {$\tau_*\sHom(\mathcal{A}_1,\mathcal{A}_0)$}; \node (2) at (1,0) {$\tau_*\sHom(\fun(\mathcal{A}_1),\fun(\mathcal{A}_0))$}; \node (1b) at (-0.05,-1) {$V_2^\vee\otimes\mathcal{O} $}; \draw[->] (1) to node[above]{\scriptsize $\fun$} (2); \draw[left hook->] (1b) to (1); \draw[right hook->] (1b) to (2); \end{tikzpicture} \end{center} Given this, the calculation of the action of the flop functors on~$\mathcal{T}$ at the beginning of the proof of Theorem~\ref{thm.keynatiso} proceeds in the relative context. Note that when Proposition~\ref{prop flop action general}(\ref{prop flop action general b}) is applied to describe $\psi_3$, it takes the following form, with the twist $\mathcal{O}(1,\dots,1)$ being dual to the twist in the definition of $\pi_3$. \begin{center} \begin{tikzpicture}[scale=0.8,xscale=1.9] \node (1) at (-1,0) {$\tau_*\sHom(\mathcal{A}_1,\mathcal{A}_0)$}; \node (2) at (1,0) {$\tau_*\sHom(\fun(\mathcal{A}_1),\fun(\mathcal{A}_0))$}; \node (1b) at (-0.05,-1) {$V_3^\vee\otimes\mathcal{O}(1,\dots,1) $}; \draw[->] (1) to node[above]{\scriptsize $\fun$} (2); \draw[left hook->] (1b) to (1); \draw[right hook->] (1b) to (2); \end{tikzpicture} \end{center} For equation \eqref{eqn.inv_tw_sh} in the relative context we have \begin{align}\label{eqn.inv_tw_sh.rel} \calcobj & = \fun_{32}\fun_{21} (\mathcal{O}(-1,0)) \notag \\ & \cong \Cone \big( \mathcal{O}(0,-1) \overset{\operatorname{res}}{\longrightarrow} \mathcal{O}_\Exc(0,-1) \big)[-1] \notag \\ & \cong \operatorname{\fun[Tw]}_2^{-1} (\mathcal{O}(0,-1)) \notag \\ & \cong \operatorname{\fun[Tw]}_2^{-1} \fun_{31} (\mathcal{O}(-1,0)) \end{align} where the second isomorphism arises as follows. 
The formula~\eqref{eqn.invtwist} for the inverse twist is replaced by \begin{equation*}\label{eqn.invtwistfam} \operatorname{\fun[Tw]}_2^{-1} \cong \Cone(\id \to SL)[-1] \end{equation*} and \eqref{eqn.onkeysheaf} is replaced by \begin{align*}\label{eqn.onkeysheaffam} L(\mathcal{O}(0,-1)) & \cong \tau_* \sHom ( \mathcal{O}(0,-1), \sphobj)^\vee \\ & \cong \tau_* \sHom ( \mathcal{O}(0,-1), \mathcal{O}_\Exc(0,-1))^\vee \\ & \cong \sigma_* (\mathcal{O}_\Exc)^\vee \\ & \cong \mathcal{O}_B. \end{align*} Here $\sigma\colon \Exc \to B$ denotes the projection morphism, and we obtain the last line using that $\sigma$ is a bundle with fibre $\P V_1 \times \P V_2$. Finally, we use $S (\mathcal{O}_B) \cong \sphobj = \mathcal{O}_\Exc(0,-1) $ to get \eqref{eqn.inv_tw_sh.rel}. The analogs of the intertwinements of Proposition~\ref{prop.intertwine} and \eqref{eq.sph_intertw} follow by similar arguments, using that all the Fourier--Mukai functors are relative to $B$, that is their kernels are pushed forward from the fibre product over $B$. The argument concludes by showing the following. \begin{equation*} \Psi = \fun_{21}^{-1} \circ \fun_{32}^{-1} \circ \operatorname{\fun[Tw]}_2^{-1} \circ \fun_{31} \cong \id \end{equation*} For this, we study the endomorphism induced by $\Psi$ of the sheaf of algebras $\mathcal{R} = \tau_* \sEnd(\mathcal{T})$, similarly to the end of the proof of Theorem~\ref{thm.keynatiso}. \end{proof} \begin{rem} It would be interesting to give a global analog of Theorem~\ref{mainthm.equiv} taking, for instance, a collection of quasiprojective $Y_i$ having a diagram of birational maps as for the $X_i$, and further having formal completions isomorphic to the formal completions of the $X_i$ along $\Exc_i$. A first step could be to extend existing methods for proving relations between derived equivalences on $3$-folds to this setting, for instance~\cite[Section 7.6]{DW1} which follows~\cite{Tod}. \end{rem} \section{Family constructions}\label{sect.fam} I conclude with some straightforward constructions which realize each $n$-fold resolution as a family of $k$-fold resolutions for some $k<n$. I first explain the statement for $k=n-1$. This coincides with Proposition~\ref{prop.fam4} for $n=4$ and a case of Proposition~\ref{prop.famhigher} for $n=5$. \begin{prop}\label{prop.iterate} For $n\geq 4$ we have for $i=1,\dots,n-2$ \[ X_i^{(n)}(V_1, \dots, V_{n-1}) \cong X_i^{(n-1)} (\pi_1, \dots, \pi_{n-2}) \] where we take the following bundles. \begin{align*} \lab{\pi_1} & V_1 \longrightarrow \P V_{n-1} \\ \labpos{\vdots} & \\ \lab{\pi_{n-3}} & V_{n-3} \longrightarrow \P V_{n-1} \\ \lab{\pi_{n-2}} & V_{n-2}\otimes\mathcal{O}(-1) \longrightarrow \P V_{n-1} \end{align*} \end{prop} \begin{proof} We may write \[ \lab{X_1^{(n)}} V_1 \otimes \mathcal{O}(-1,\dots,-1,-1,-1) \longrightarrow (\P V_2 \times \dots \times \P V_{n-3}) \times \P V_{n-2} \times \P V_{n-1} \] so that the claim again follows by the methods of Section~\ref{sect.fam_descrip}. Similar arguments suffice for the other $i$. \end{proof} \begin{rem} Note for completeness that the $n=3$ case of the above Proposition~\ref{prop.iterate} is true too, when suitably interpreted. For this, we take $X_1^{(2)}(V_1)$ as simply~$V_1$, because $Z^{(2)}(V_1) = V_1$ so no resolution is needed here. By extension we take $X_1^{(2)}(\pi_1)$ as simply $\pi_1$. We may then put \[ \lab{\pi_1} V_1 \otimes \mathcal{O}(-1) \longrightarrow \P V_2 \] so that the statement follows by definition of $X_1^{(3)}(V_1,V_2)$. 
\end{rem} By iterating the above Proposition~\ref{prop.iterate} in families, we may realize $X_1^{(n)}$ as a family of $X_1^{(k)}$. The following is a direct construction to show this fact, which coincides with the above when $k=n-1$. \begin{prop}\label{prop.gen_fam} For $n>k\geq 3$ we have for $i=1,\dots,k-1$ \[ X_i^{(n)}(V_1, \dots, V_{n-1}) \cong X_i^{(k)} (\pi_1, \dots, \pi_{k-1} ) \] where we take the bundle \[ \lab{\pi_{k-1}} V_{k-1} \otimes\mathcal{O}(-1,\dots,-1) \longrightarrow \P V_k \times \dots \times \P V_{n-1} \] and $\pi_1,\dots,\pi_{k-2}$ bundles with constant fibre $V_1,\dots,V_{k-2}$ over the same base. \end{prop} \begin{proof} Note first for consistency that the dimension of the base is $n-k$. Observe that, taking $k\geq 4$, we may write the following. \begin{multline*} \lab{X_1^{(n)}} V_1 \otimes \mathcal{O}(-1,\dots,-1,-1,-1,\dots,-1) \\ \longrightarrow (\P V_2 \times \dots \times \P V_{k-2}) \times \P V_{k-1} \times (\P V_k \times \dots \times \P V_{n-1}) \end{multline*} We deduce the claim for $i=1$ for $k\geq 4$. A similar argument gives the claim for other $i$, and also $k= 3$. \end{proof} \end{document}
\begin{document} \title{Parabolic Anderson model with rough dependence in space} \begin{abstract} This paper studies the one-dimensional parabolic Anderson model driven by a Gaussian noise which is white in time and has the covariance of a fractional Brownian motion with Hurst parameter $H \in (\frac{1}{4}, \frac{1}{2})$ in the space variable. We derive the Wiener chaos expansion of the solution and a Feynman-Kac formula for the moments of the solution. These results allow us to establish sharp lower and upper asymptotic bounds for the $n$th moment of the solution. \noindent{\it Keywords. Stochastic heat equation, fractional Brownian motion, Feynman-Kac formula, Wiener chaos expansion, sharp lower and upper moment bounds, intermittency} \noindent{\it AMS 2010 subject classification.} Primary 60H15; Secondary 35R60, 60G60. \end{abstract} \setlength{\parindent}{1.5em} \setcounter{equation}{0}\Section{Introduction} A recent paper \cite{HHLNT} studies the stochastic heat equation for $(t,x) \in (0,\infty)\times \mathbb{R}$ \begin{equation}\label{eq:SHE sigma} \frac{\partial u}{\partial t}=\frac{\kappa}{2}\frac{\partial ^2 u}{\partial x ^2}+\sigma(u)\,\dot W\,, \end{equation} where $\dot{W}$ is a centered Gaussian noise which is white in time and behaves as a fractional Brownian motion with Hurst parameter $1/4 < H < 1/2$ in space, and $\sigma$ may be a nonlinear function with some smoothness. However, the specific case $\sigma(u)=u$, i.e. \begin{equation}\label{spde} \frac{\partial u}{\partial t}=\frac{\kappa}{2}\frac{\partial ^2 u}{\partial x ^2}+u\,\dot W \end{equation} deserves separate treatment due to its simplicity. Indeed, this linear equation turns out to be a continuous version of the parabolic Anderson model, and is related to challenging systems in random environment such as the KPZ equation \cite{Ha,BeC} or polymers~\cite{AKQ,BTV}. The localization and intermittency properties of \eqref{spde} have thus been thoroughly studied for equations driven by a space-time white noise (see \cite{Kh} for a nice survey), while a recent trend consists in extending this kind of result to equations driven by very general Gaussian noises \cite{Ch14,HHNT,HN,HNS}; the rough noise $\dot W$ considered here, however, is not covered by the aforementioned references. To fill this gap, we first address the existence and uniqueness problem. Although the existence and uniqueness of the solution in the general nonlinear case \eqref{eq:SHE sigma} has been established in \cite{HHLNT}, in the linear case \eqref{spde} one can implement a rather simple procedure involving Fourier transforms. Since this point of view is interesting in its own right and is short enough, we develop it in Subsection \ref{sec:picard}. In Subsection \ref{subsec: chaos}, we study the random field solution using chaos expansions. Following the approach introduced in \cite{HN,HHNT}, we obtain an explicit formula for the kernels of the Wiener chaos expansion and show its convergence, and thus obtain the existence and uniqueness of the solution. It is worth noting that these methods treat classes of initial data which are more general than in \cite{HHLNT} and different from those in \cite{BJQ}. We then move to a Feynman-Kac type representation for the moments of the solution.
In fact, we cannot expect a Feynman-Kac formula for the solution, because the covariance is rougher than in the space-time white noise case, and this type of formula requires smoother covariance structures (see, for instance, \cite{HNS}). However, by means of Fourier analysis techniques as in \cite{HN,HHNT}, we are able to obtain a Feynman-Kac formula for the moments that involves a fractional derivative of the Brownian local time. Finally, the previous considerations allow us to handle, in the last section of the paper, the intermittency properties of the solution. More precisely, we show sharp lower bounds for the moments of the solution of the form $\mathbf{E} [u(t,x)^n]\ge\exp(C n^{1+\frac 1H} t)$, for all $t\ge 0$, $x\in \mathbb{R}$ and $n\ge 2$, where $C$ is independent of $t$, $x$ and $n$. These bounds entail the intermittency phenomenon and match the corresponding estimates for the case $H>\frac 12$ obtained in \cite{HHNT}. After the completion of this work, three of the authors have studied the parabolic Anderson model in more detail in \cite{HLN}. In particular, existence and uniqueness results are extended to a wider class of initial data, and exact long-term asymptotics for the moments of the solution are obtained. \setcounter{equation}{0}\Section{Preliminaries} Let us start by introducing our basic notation for Fourier transforms of functions. The space of Schwartz functions is denoted by $\mathcal{S}$. Its dual, the space of tempered distributions, is $\mathcal{S}'$. The Fourier transform of a function $u \in \mathcal{S}$ is defined with the normalization \[ \mathcal{F}u ( \xi) = \int_{\mathbb{R}} e^{- i \xi x } u ( x) d x, \] so that the inverse Fourier transform is given by $\mathcal{F}^{- 1} u ( \xi) = ( 2 \pi)^{- 1} \mathcal{F}u ( - \xi)$. Let $ \mathcal{D}((0,\infty)\times \mathbb{R})$ denote the space of real-valued infinitely differentiable functions with compact support on $(0, \infty) \times \mathbb{R}$. Taking into account the spectral representation of the covariance function of the fractional Brownian motion in the case $H<\frac 12$ proved in \cite[Theorem 3.1]{PT}, we represent our noise $W$ by a zero-mean Gaussian family $\{W(\varphi) ,\, \varphi\in \mathcal{D}((0,\infty)\times \mathbb{R})\}$ defined on a complete probability space $(\Omega,\mathcal F,\mathbf{P})$, whose covariance structure is given by \begin{equation}\label{eq:cov1} \mathbf{E}\left[ W(\varphi) \, W(\psi) \right] = c_{1,H}\int_{\mathbb{R}_{+}\times\mathbb{R}} \mathcal F\varphi(s,\xi) \, \overline{\mathcal F\psi(s,\xi)} \, |\xi|^{1-2H} \, ds d\xi, \end{equation} where the Fourier transforms $\mathcal F\varphi,\mathcal F\psi$ are understood as Fourier transforms in space only and \begin{equation}\label{eq:expr-c1H} c_{1,H}= \frac 1 {2\pi} \Gamma(2H+1)\sin(\pi H) \,. \end{equation} We denote by $\mathfrak H $ the Hilbert space obtained by completion of $ \mathcal{D}((0,\infty)\times \mathbb{R})$ with respect to the inner product \begin{equation}\label{eq: H_0 element H prod} \langle\varphi, \psi \rangle_{ \mathfrak H}=c_{1, H}\int_{\mathbb{R}_+\times \mathbb{R}}\mathcal{F}\varphi(s,\xi)\overline{\mathcal{F}\psi(s,\xi)}|\xi|^{1-2H }d\xi ds\,. \end{equation} The next proposition is from Theorem 3.1 and Proposition 3.4 in \cite{PT}.
\begin{proposition} \label{prop: H} If $\mathfrak H_0$ denotes the class of functions $\varphi \in L^2( \mathbb{R}_+\times \mathbb{R})$ such that $$ \int_{\mathbb{R}_+\times \mathbb{R}} |\mathcal{F}\varphi(s,\xi)|^2|\xi|^{1-2H}d\xi ds < \infty\,, $$ then $ \mathfrak H_0$ is not complete and the inclusion $\mathfrak H_0 \subset \mathfrak H$ is strict. \end{proposition} We recall that the Gaussian family $W$ can be extended to $\mathfrak H$ and this produces an isonormal Gaussian process, for which Malliavin calculus can be applied. We refer to~\cite{Nua} and \cite{hubook} for a detailed account of the Malliavin calculus with respect to a Gaussian process. On our Gaussian space, the smooth and cylindrical random variables $F$ are of the form \begin{equation*} F=f(W(\phi_1),\dots,W(\phi_n))\,, \end{equation*} with $\phi_i \in \mathfrak H$, $f \in C^{\infty}_p (\mathbb{R}^n)$ (namely $f$ and all its partial derivatives have polynomial growth). For this kind of random variable, the derivative operator $D$ in the sense of Malliavin calculus is the $\mathfrak H$-valued random variable defined by \begin{equation*} DF=\sum_{j=1}^n\frac{\partial f}{\partial x_j}(W(\phi_1),\dots,W(\phi_n))\phi_j\,. \end{equation*} The operator $D$ is closable from $L^2(\Omega)$ into $L^2(\Omega; \mathfrak H)$ and we define the Sobolev space $\mathbb{D}^{1,2}$ as the closure of the space of smooth and cylindrical random variables under the norm \[ \|F\|_{1,2}=\sqrt{\mathbf{E} [F^2]+\mathbf{E} [\|DF\|^2_{\mathfrak H} ]}\,. \] We denote by $\delta$ the adjoint of the derivative operator (called the divergence operator), given by the duality formula \begin{equation}\label{dual} \mathbf{E} \left[ \delta (u)F \right] =\mathbf{E} \left[ \langle DF,u \rangle_{\mathfrak H}\right] , \end{equation} for any $F \in \mathbb{D}^{1,2}$ and any element $u \in L^2(\Omega; \mathfrak H)$ in the domain of $\delta$. For any integer $n\ge 0$ we denote by $\mathbf{H}_n$ the $n$th Wiener chaos of $W$. We recall that $\mathbf{H}_0$ is simply $\mathbb{R}$ and for $n\ge 1$, $\mathbf{H}_n$ is the closed linear subspace of $L^2(\Omega)$ generated by the random variables $\{ H_n(W(\phi)),\phi \in \mathfrak H, \|\phi\|_{\mathfrak H}=1 \}$, where $H_n$ is the $n$th Hermite polynomial. For any $n\ge 1$, we denote by $\mathfrak H^{\otimes n}$ (resp. $\mathfrak H^{\odot n}$) the $n$th tensor product (resp. the $n$th symmetric tensor product) of $\mathfrak H$. Then, the mapping $I_n(\phi^{\otimes n})= H_n(W(\phi))$ can be extended to a linear isometry between $\mathfrak H^{\odot n}$ (equipped with the modified norm $\sqrt{n!}\| \cdot\|_{\mathfrak H^{\otimes n}}$) and $\mathbf{H}_n$. Consider now a random variable $F\in L^2(\Omega)$ which is measurable with respect to the $\sigma$-field $\mathcal F$ generated by $W$. This random variable can be expressed as \begin{equation}\label{eq:chaos-dcp} F= \mathbf{E} \left[ F\right] + \sum_{n=1} ^\infty I_n(f_n), \end{equation} where the series converges in $L^2(\Omega)$, and the elements $f_n \in \mathfrak H ^{\odot n}$, $n\ge 1$, are determined by $F$. This identity is called the Wiener chaos expansion of $F$. The Skorohod integral (or divergence) of a random field $u$ can be computed by using the Wiener chaos expansion.
More precisely, suppose that $u=\{u(t,x) , (t,x) \in \mathbb{R}_+ \times\mathbb{R}\}$ is a random field such that for each $(t,x)$, $u(t,x)$ is an $\mathcal F_t$-measurable and square-integrable random variable; here $\mathcal{F}_t$ is the $\sigma$-algebra generated by $W$ up to time $t$. Then, for each $(t,x)$ we have a Wiener chaos expansion of the form \begin{equation} \label{exp1} u(t,x) = \mathbf{E} \left[ u(t,x) \right] + \sum_{n=1}^\infty I_n (f_n(\cdot,t,x)). \end{equation} Suppose that $\mathbf{E} [\|u\|_{ \mathfrak H}^{2}]$ is finite. Then, we can interpret $u$ as a square-integrable random function with values in $\mathfrak H$ and the kernels $f_n$ in the expansion (\ref{exp1}) are functions in $\mathfrak H ^{\otimes (n+1)}$ which are symmetric in the first $n$ variables. In this situation, $u$ belongs to the domain of the divergence operator (that is, $u$ is Skorohod integrable with respect to $W$) if and only if the following series converges in $L^2(\Omega)$ \begin{equation}\label{eq:delta-u-chaos} \delta(u)= \int_0 ^\infty \int_{\mathbb{R}} u(t,x) \, \delta W(t,x) = W(\mathbf{E} [u]) + \sum_{n=1}^\infty I_{n+1} (\widetilde{f}_n), \end{equation} where $\widetilde{f}_n$ denotes the symmetrization of $f_n$ in all its $n+1$ variables. We note here that if $\Lambda_{H}$ denotes the space of predictable processes $g$ defined on $\mathbb{R}_+\times \mathbb{R}$ such that almost surely $g\in \mathfrak H$ and $\mathbf{E} [\|g\|^2_{\mathfrak H}] < \infty$, then the Skorohod integral of $g$ with respect to $W$ coincides with the It\^o integral defined in \cite{HHLNT}; moreover, we have the isometry \begin{equation}\label{int isometry} \mathbf{E} \left [\left( \int_{\mathbb{R}_+} \int_{\mathbb{R}} g(s,x) W(ds,dx) \right)^2 \right] = \mathbf{E} \left[ \|g\|^2_{\mathfrak H} \right] \,. \end{equation} Now we are ready to state the definition of the solution to equation \eqref{spde}. \begin{definition} Let $u=\{u(t,x), 0 \leq t \leq T, x \in \mathbb{R}\}$ be a real-valued predictable stochastic process such that for all $t\in[0,T]$ and $x\in\mathbb{R}$ the process $\{p_{t-s}(x-y)u(s,y) {\bf 1}_{[0,t]}(s), 0 \leq s \leq t, y \in \mathbb{R}\}$ is Skorohod integrable, where $p_t(x)$ is the heat kernel on the real line related to $\frac{\kappa}{2}\Delta$. We say that $u$ is a mild solution of \eqref{spde} if for all $t \in [0,T]$ and $x\in \mathbb{R}$ we have \begin{equation}\label{eq:mild-formulation sigma} u(t,x)= p_t*u_0(x) + \int_0^t \int_{\mathbb{R}}p_{t-s}(x-y)u(s,y) W(ds,dy) \quad a.s., \end{equation} where the stochastic integral is understood in the sense of Skorohod or It\^o. \end{definition} \setcounter{equation}{0}\Section{Existence and uniqueness}\label{sec:anderson-exist-uniq} In this section we prove the existence and uniqueness result for the solution to equation \eqref{spde} by means of two different methods: one is via Fourier transform and the other is via chaos expansion. \subsection{Existence and uniqueness via Fourier transform}\label{sec:picard} In this subsection we discuss the existence and uniqueness of equation (\ref{spde}) using techniques of Fourier analysis. Let $\dot{H}^{\frac 12-H}_0$ be the set of functions $f\in L^2(\mathbb{R})$ such that $\int_\mathbb{R} | \mathcal F f(\xi)| ^2 |\xi|^{1-2H} d\xi <\infty$.
This space is the time-independent analogue of the space $\mathfrak H_0$ introduced in Proposition \ref{prop: H}. We know that $\dot{H}^{\frac 12-H}_0$ is not complete with respect to the seminorm $ \left[ \int_\mathbb{R} | \mathcal F f(\xi)| ^2 |\xi|^{1-2H} d\xi \right] ^\frac 12$ (see \cite{PT}). However, it is not difficult to check that the space $\dot{H}^{\frac 12-H}_0$ is complete for the norm $\|f\|_{\mathcal V(H)} ^2:=\int_\mathbb{R} | \mathcal F f(\xi)| ^2 (1+|\xi|^{1-2H} )d\xi$. In the next theorem we show the existence and uniqueness result assuming that the initial condition belongs to $\dot{H}^{\frac 12-H}_0$ and using estimates based on the Fourier transform in the space variable. To this purpose, we introduce the space $\mathcal V_T(H)$ as the completion of the set of elementary $\dot{H}^{\frac 12-H}_0$-valued stochastic processes $\{u(t,\cdot), t\in [0,T]\}$ with respect to the seminorm \begin{equation} \label{nuH} \|u\|_{\mathcal V_{T}(H)}^{2}:=\sup_{t\in [0,T]} \mathbf{E} \| u(t,\cdot)\|_{\mathcal V(H)}^{2}. \end{equation} We now state a convolution result. \begin{proposition}\label{prop:convolution-fourier} Consider a function $u_{0}\in \dot{H}^{\frac 12-H}_0$ and $\frac{1}{4}<H<\frac{1}{2}$. For any $v\in\mathcal V_{T}(H)$ we set $\Gamma(v)=V$ in the following way: \begin{equation*} \Gamma(v):=V(t,x)=p_t*u_0(x) + \int_0^t \int_{\mathbb{R}}p_{t-s}(x-y) v(s,y) W(ds,dy), \quad t\in[0,T], \, x\in\mathbb{R}. \end{equation*} Then $\Gamma$ is well-defined as a map from $\mathcal V_{T}(H)$ to $\mathcal V_{T}(H)$. Furthermore, there exist two positive constants $c_{1},c_{2}$ such that the following estimate holds true on $[0,T]$: \begin{equation}\label{eq:intg-bnd-V-lin} \|V(t,\cdot)\|_{\mathcal V(H)}^{2} \le c_{1} \, \|u_0\|_{\mathcal V(H)}^{2} +c_{2}\int_0^t (t-s)^{2H-3/2} \|v(s,\cdot)\|_{\mathcal V(H)}^{2} \, ds\,. \end{equation} \end{proposition} \begin{proof} Let $v$ be a process in $\mathcal V_{T}(H)$ and set $V=\Gamma(v)$. We focus on the bound \eqref{eq:intg-bnd-V-lin} for $V$. Notice that the Fourier transform of $V$ can be computed easily. Indeed, setting $v_0(t,x)=p_t*u_0(x)$ and invoking a stochastic version of Fubini's theorem, we get \begin{equation*} \mathcal{F}V(t,\xi)=\mathcal{F}v_0(t,\xi) +\int_0^t\int_{\mathbb{R}} \left( \int_{\mathbb{R}} e^{-i x \xi} \, p_{t-s}(x-y) \, dx \right) v(s,y)W(ds,dy)\,. \end{equation*} According to the expression of $\mathcal F p_{t}$, we obtain \begin{eqnarray*} \mathcal{F}V(t,\xi)=\mathcal{F}v_0(t,\xi)+\int_0^t\int_{\mathbb{R}}e^{-i\xi y} e^{-\frac{\kappa}{2}(t-s)\xi^2}v(s,y)W(ds,dy)\,. \end{eqnarray*} We now evaluate the quantity $\mathbf{E}[\int_{\mathbb{R}}|\mathcal{F}V(t,\xi)|^2|\xi|^{1-2H}d\xi ]$ in the definition of $\|V\|_{\mathcal V_{T}(H)}$ given by \eqref{nuH}. We thus write \begin{multline*} \mathbf{E}\left[ \int_{\mathbb{R}}|\mathcal{F}V(t,\xi)|^2|\xi|^{1-2H}d\xi \right] \leq 2 \, \int_{\mathbb{R}}|\mathcal{F}v_0(t,\xi)|^2|\xi|^{1-2H}d\xi \\ +2 \, \int_{\mathbb{R}}\mathbf{E}\left[\Big|\int_0^t\int_{\mathbb{R}}e^{-i\xi y}e^{-\frac{\kappa}{2}(t-s)\xi^2}v(s,y)W(ds,dy)\Big|^2 \right] |\xi|^{1-2H}d\xi := 2\left( I_{1} + I_{2} \right) \, , \end{multline*} and we handle the terms $I_{1}$ and $I_{2}$ separately.
The term $I_1$ can be easily bounded by using that $u_0 \in\dot{H}^{\frac 12-H}_0$ and recalling $v_{0}=p_{t}*u_{0}$. That is, \[ I_1 = \int_\mathbb{R}| \mathcal{F}u_0(\xi) | ^2e^{-\kappa t|\xi|^2} |\xi|^{1-2H} d\xi \le C \, \|u_{0}\|_{\mathcal V(H)}^{2}. \] We thus focus on the estimation of $I_{2}$, and we set $f_{\xi}(s,\eta)=e^{-i\xi \eta}e^{-\frac{\kappa}{2}(t-s)\xi^2}v(s,\eta)$. Applying the isometry property \eqref{int isometry} we have: \begin{equation*} \mathbf{E}\left[\Big|\int_0^t\int_{\mathbb{R}} e^{-i\xi y}e^{-\frac{\kappa}{2}(t-s)\xi^2}v(s,y)W(ds,dy)\Big|^2 \right] =c_{1,H} \int_0^t \int_{\mathbb{R}} \mathbf{E}\left[ |\mathcal F_{\eta}f_{\xi}(s,\eta) |^{2}\right] |\eta|^{1-2H} \, ds d\eta, \end{equation*} where $\mathcal F_{\eta}$ is the Fourier transform with respect to $\eta$. It is obvious that the Fourier transform of $e^{-i\xi y} V(y)$ is $\mathcal{F} V(\eta+\xi)$. Thus we have \begin{align*} I_{2}&= C\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}}e^{-\kappa(t-s)\xi^2} \, \mathbf{E}\left[|\mathcal{F}v(s,\eta+\xi)|^2 \right] |\eta|^{1-2H}|\xi|^{1-2H} \, d\eta d\xi ds\\ &= C\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}}e^{-\kappa(t-s)\xi^2} \, \mathbf{E}\left[|\mathcal{F}v(s,\eta )|^2 \right] |\eta-\xi|^{1-2H}|\xi|^{1-2H} \, d\eta d\xi ds\, . \end{align*} We now bound $|\eta-\xi |^{1-2H}$ by $|\eta|^{1-2H}+|\xi|^{1-2H}$, which yields $I_{2}\le I_{21}+I_{22}$ with: \begin{align*} I_{21}&=C \int_0^t \int_{\mathbb{R}}\int_{\mathbb{R}} e^{-\kappa(t-s)\xi^2} \, \mathbf{E}\left[|\mathcal{F}v(s,\eta)|^2 \right] |\eta|^{1-2H}|\xi|^{1-2H} \, d\eta d\xi ds \\ I_{22}&=C\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}}e^{-\kappa(t-s)\xi^2} \, \mathbf{E}\left[|\mathcal{F}v(s,\eta)|^2 \right] |\xi|^{2-4H} \, d\eta d\xi ds\,. \end{align*} Performing the change of variable $\xi \rightarrow (t-s)^{1/2}\xi$ and then trivially bounding the integrals of the form $\int_{\mathbb{R}}|\xi|^{\beta} e^{-\kappa\xi^{2}} d\xi$ by constants, we end up with \begin{align*} I_{21}&\leq C \int_0^t (t-s)^{H-1} \int_{\mathbb{R}} \mathbf{E}\left[|\mathcal{F}v(s,\eta)|^2 \right] |\eta|^{1-2H} \, d\eta \, ds \\ I_{22}&\leq C \int_0^t (t-s)^{2H-3/2} \int_{\mathbb{R}} \mathbf{E}\left[|\mathcal{F}v(s,\eta)|^2 \right] \, d\eta \, ds . \end{align*} Observe that for $H\in(\frac 14, \frac 12)$ the term $(t-s)^{2H-3/2}$ is more singular than $(t-s)^{H-1}$, but we still have $2H-\frac 32>-1$ (this is where we need to impose $H>1/4$). Summarizing our considerations up to now, we have thus obtained \begin{multline}\label{eq:bnd-picard-1} \int_{\mathbb{R}}\mathbf{E}\left[ |\mathcal{F}V(t,\xi)|^2 \right] |\xi|^{1-2H}d\xi \\ \le C_{1,T} \, \|u_{0}\|_{\mathcal V(H)}^{2} + C_{2,T} \int_{0}^{t} (t-s)^{2H-3/2} \int_{\mathbb{R}} \mathbf{E}\left[|\mathcal{F}v(s,\xi)|^2 \right] \, (1+ |\xi|^{1-2H}) \, d\xi \, ds , \end{multline} for two strictly positive constants $C_{1,T},C_{2,T}$.
The term $\mathbf{E}[\int_{\mathbb{R}}|\mathcal{F}V(t,\xi)|^2 d\xi ]$ in the definition of $\|V\|_{\mathcal V_{T}(H)}$ can be bounded with the same computations as above, and we find
\begin{multline}\label{eq:bnd-picard-2}
\int_{\mathbb{R}}\mathbf{E}\left[ |\mathcal{F}V(t,\xi)|^2 \right] \, d\xi \\
\le C_{1,T} \, \|u_{0}\|_{\mathcal V(H)}^{2} + C_{2,T} \int_{0}^{t} (t-s)^{H-1} \int_{\mathbb{R}} \mathbf{E}\left[|\mathcal{F}v(s,\xi)|^2 \right] \, (1+ |\xi|^{1-2H}) \, d\xi \, ds .
\end{multline}
Hence, gathering our estimates \eqref{eq:bnd-picard-1} and \eqref{eq:bnd-picard-2}, our bound \eqref{eq:intg-bnd-V-lin} is easily obtained, which finishes the proof.
\end{proof}

As in the forthcoming general case, Proposition \ref{prop:convolution-fourier} is the key to the existence and uniqueness result for equation \eqref{spde}.

\begin{theorem}\label{thm:exist-uniq-picard}
Suppose that $u_{0}$ is an element of $\dot{H}^{\frac 12-H}_0$ and $\frac{1}{4}<H<\frac{1}{2}$. Fix $T>0$. Then there is a unique process $u$ in the space $\mathcal V_{T}(H)$ such that for all $t\in [0,T]$,
\begin{equation}
u(t,\cdot)=p_t* u_0 + \int_0^t \int_{\mathbb{R}}p_{t-s}(\cdot-y)u(s,y) W(ds,dy).
\end{equation}
\end{theorem}
\begin{proof}
The proof follows from the standard Picard iteration scheme, where we just set $u_{n+1}=\Gamma(u_{n})$. Details are left to the reader for sake of conciseness.
\end{proof}

\subsection{Existence and uniqueness via chaos expansions}\label{subsec: chaos}

Next, we provide another way to prove the existence and uniqueness of the solution to equation \eqref{spde}, by means of chaos expansions. This will enable us to obtain moment estimates. Before stating our main theorem in this direction, let us label an elementary lemma borrowed from \cite{HHNT} for further use.

\begin{lemma}\label{lem:intg-simplex}
For $m\ge 1$ let $\alpha \in (-1+\varepsilon, 1)^m$ with $\varepsilon>0$ and set $|\alpha |= \sum_{i=1}^m \alpha_i $. For $t\in[0,T]$, the $m$-dimensional simplex over $[0,t]$ is denoted by $T_m(t)=\{(r_1,r_2,\dots,r_m) \in \mathbb{R}^m: 0<r_1 <\cdots < r_m < t\}$. Then there is a constant $c>0$ such that
\[
J_m(t, \alpha):=\int_{T_m(t)}\prod_{i=1}^m (r_i-r_{i-1})^{\alpha_i} dr \le \frac { c^m t^{|\alpha|+m } }{ \Gamma(|\alpha|+m +1)},
\]
where by convention, $r_0 =0$.
\end{lemma}

Let us now state a new existence and uniqueness theorem for our equation of interest.

\begin{theorem}\label{thm:exist-uniq-chaos}
Suppose that $\frac 14 <H<\frac 12$ and that the initial condition $u_0$ satisfies
\begin{equation}\label{cond:fu0}
\int_{\mathbb{R}}(1+|\xi|^{\frac{1}{2}-H})|\mathcal{F}u_0(\xi)|d\xi < \infty\,.
\end{equation}
Then there exists a unique solution to equation \eqref{spde}, that is, there is a unique process $u$ such that $p_{t-\cdot}(x-\cdot)u$ is Skorohod integrable for any $(t,x)\in[0,T]\times\mathbb{R}$ and relation \eqref{eq:mild-formulation sigma} holds true.
\end{theorem}

\begin{remark}
(i) The formulation of Theorem \ref{thm:exist-uniq-chaos} yields the definition of our solution $u$ for all $(t,x)\in[0,T]\times\mathbb{R}$.
This is in contrast with Theorem \ref{thm:exist-uniq-picard}, which gives a solution sitting in $\dot{H}^{\frac12-H}_0$ for every value of $t$, and thus defined a.e. in $x$ only.

(ii) Condition \eqref{cond:fu0} is satisfied by constant functions.
\end{remark}

\begin{remark}
In the later paper \cite{HLN}, the existence and uniqueness in Theorem \ref{thm:exist-uniq-chaos} is obtained under a more general initial condition. Since the proof of Theorem \ref{thm:exist-uniq-chaos} under condition \eqref{cond:fu0} is easier and shorter, we present it here.
\end{remark}

\begin{proof}[Proof of Theorem \ref{thm:exist-uniq-chaos}]
Suppose that $u=\{u(t,x), \, t\geq 0, x \in \mathbb{R}\}$ is a solution to equation~\eqref{eq:mild-formulation sigma} in $\Lambda_{H}$. Then according to \eqref{eq:chaos-dcp}, for any fixed $(t,x)$ the random variable $u(t,x)$ admits the following Wiener chaos expansion
\begin{equation}\label{eq:chaos-expansion-u(tx)}
u(t,x)=\sum_{n=0}^{\infty}I_n(f_n(\cdot,t,x))\,,
\end{equation}
where for each $(t,x)$, $f_n(\cdot,t,x)$ is a symmetric element in $\mathfrak H^{\otimes n}$. Hence, thanks to~\eqref{eq:delta-u-chaos} and using an iteration procedure, one can find an explicit formula for the kernels $f_n$ for $n \geq 1$. Indeed, we have:
\begin{multline}\label{eq:expression-fn}
f_n(s_1,x_1,\dots,s_n,x_n,t,x)\\
=\frac{1}{n!}p_{t-s_{\sigma(n)}}(x-x_{\sigma(n)})\cdots p_{s_{\sigma(2)}-s_{\sigma(1)}}(x_{\sigma(2)}-x_{\sigma(1)}) p_{s_{\sigma(1)}}u_0(x_{\sigma(1)})\,,
\end{multline}
where $\sigma$ denotes the permutation of $\{1,2,\dots,n\}$ such that $0<s_{\sigma(1)}<\cdots<s_{\sigma(n)}<t$ (see, for instance, formula (4.4) in \cite{HN} or formula (3.3) in \cite{HHNT}). Then, to show the existence and uniqueness of the solution it suffices to prove that for all $(t,x)$ we have
\begin{equation}\label{chaos}
\sum_{n=0}^{\infty}n!\|f_n(\cdot,t,x)\|^2_{\mathfrak H^{\otimes n}}< \infty\,.
\end{equation}
The remainder of the proof is devoted to proving relation \eqref{chaos}. Starting from relation \eqref{eq:expression-fn}, some elementary Fourier computations show that
\begin{align*}
\mathcal F f_n(s_1,\xi_1,\dots,s_n,\xi_n,t,x)&= \frac{c_{H}^n}{n!} \int_{\mathbb{R}} \prod_{i=1}^n e^{-\frac{\kappa}{2}(s_{\sigma(i+1)}-s_{\sigma(i)})|\xi_{\sigma(i)}+\cdots + \xi_{\sigma(1)} -\zeta|^2} \\
&\quad\times e^{-ix (\xi_{\sigma(n)}+ \cdots + \xi_{\sigma(1)}-\zeta)} \mathcal{F}u_0(\zeta) e^{-\frac {\kappa s_{\sigma(1)}|\zeta|^2} 2} d\zeta,
\end{align*}
where we have set $s_{\sigma(n+1)}=t$.
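As an illustration of formula \eqref{eq:expression-fn}, note that for $n=1$ it reduces (with the convention $p_{s}u_{0}:=p_{s}*u_{0}$ used above) to $f_{1}(s,y,t,x)=p_{t-s}(x-y)\,p_{s}u_{0}(y)$ for $0<s<t$, so that
\begin{equation*}
I_{1}(f_{1}(\cdot,t,x))=\int_{0}^{t}\int_{\mathbb{R}}p_{t-s}(x-y)\,(p_{s}u_{0})(y)\,W(ds,dy)
\end{equation*}
is precisely the stochastic term produced by the first step of the Picard iteration of Subsection \ref{sec:picard}.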
Hence, owing to formula \eqref{eq: H_0 element H prod} for the norm in $\mathfrak H$ (in its Fourier mode version), we have
\begin{multline}\label{eq:expression-norm-fn}
n!\| f_n(\cdot,t,x)\|_{\mathfrak H^{\otimes n}}^2 =\frac{c_H^{2n} }{n!} \, \int_{[0,t]^n}\int_{\mathbb{R}^n}\bigg| \int_{\mathbb{R}} \prod_{i=1}^n e^{-\frac {\kappa}{2} (s_{\sigma(i+1)}-s_{\sigma(i)})|\xi_i+\cdots +\xi_1-\zeta |^2} e^{-ix (\xi_{\sigma(n)}+ \cdots + \xi_{\sigma(1)}-\zeta)} \\
\mathcal{F}u_0(\zeta) e^{-\frac {\kappa s_{\sigma(1)}|\zeta|^2} 2} d\zeta \bigg|^2 \times \prod_{i=1}^n |\xi_i |^{1-2H} d\xi ds\,,
\end{multline}
where $d\xi$ denotes $d\xi_1 \cdots d\xi_n$ and similarly for $ds$. Then using the change of variable $\xi_{i}+\cdots + \xi_{1}=\eta_{i}$, for all $i=1,2,\dots, n$, and a linearization of the above expression, we obtain
\begin{multline*}
n!\| f_n(\cdot,t,x)\|_{\mathfrak H^{\otimes n}}^2 = \frac{c_H^{2n} }{n!}\int_{[0,t]^n}\int_{\mathbb{R}^n} \int_{\mathbb{R}^2}\prod_{i=1}^n e^{-\frac{\kappa}{2}(s_{\sigma(i+1)}-s_{\sigma(i)})(|\eta_{i}-\zeta|^2+|\eta_i-\zeta^{\prime}|^2)} \mathcal{F}u_0(\zeta) \overline{\mathcal{F}u_0(\zeta^{\prime})} \\
\times e^{ix(\zeta -\zeta')}e^{-\frac{\kappa s_{\sigma(1)}(|\zeta|^2+|\zeta^{\prime}|^2)}{2}} \prod_{i=1}^n|\eta_{i}-\eta_{i-1}|^{1-2H} d\zeta d\zeta^{\prime} d\eta ds\,,
\end{multline*}
where we have set $\eta_{0}=0$. Then we use the Cauchy-Schwarz inequality and bound the term $\exp(-\kappa s_{\sigma(1)}(|\zeta|^2+|\zeta^{\prime}|^2)/2)$ by $1$ to get
\begin{multline*}
n!\| f_n(\cdot,t,x)\|_{\mathfrak H^{\otimes n}}^2 \le \frac{c_H^{2n}}{n!} \int_{\mathbb{R}^2} \left ( \int_{[0,t]^n} \int_{\mathbb{R}^n} \prod_{i=1}^n e^{- \kappa (s_{\sigma(i+1)}-s_{\sigma(i)})|\eta_{i}-\zeta|^2}\prod_{i=1}^n|\eta_{i}-\eta_{i-1}|^{1-2H}d\eta ds \right)^{\frac{1}{2}} \\
\times \left ( \int_{[0,t]^n} \int_{\mathbb{R}^n} \prod_{i=1}^n e^{- \kappa (s_{\sigma(i+1)}-s_{\sigma(i)})|\eta_{i}-\zeta^{\prime}|^2}\prod_{i=1}^n|\eta_{i}-\eta_{i-1}|^{1-2H}d\eta ds \right)^{\frac{1}{2}} \left|\mathcal{F}u_0(\zeta)\right| \left|\mathcal{F}u_0(\zeta^{\prime})\right| d\zeta d\zeta^{\prime}.
\end{multline*}
Arranging the integrals again, performing the change of variables $\eta_{i}:=\eta_{i}-\zeta$ and invoking the trivial bound $|\eta_{i}-\eta_{i-1}|^{1-2H}\le |\eta_{i-1}|^{1-2H}+|\eta_{i}|^{1-2H}$, this yields
\begin{equation}\label{eq:bnd-fn-L2-1}
n!\| f_n(\cdot,t,x)\|_{\mathfrak H^{\otimes n}}^2 \le \frac{c_H^{2n} }{n!} \Bigg(\int_{\mathbb{R}} L_{n,t}^{\frac{1}{2}}(\zeta) \left |\mathcal{F}u_0(\zeta)\right|d\zeta\Bigg)^2 ,
\end{equation}
where $L_{n,t}(\zeta)$ is
\begin{equation*}
\int_{[0,t]^n} \int_{\mathbb{R}^n} \prod_{i=1}^n e^{-\kappa (s_{\sigma(i+1)}-s_{\sigma(i)})|\eta_{i}|^2} (|\zeta|^{1-2H}+|\eta_1|^{1-2H}) \times \prod_{i=2}^n(|\eta_{i}|^{1-2H}+|\eta_{i-1}|^{1-2H})d\eta ds.
\end{equation*}
Let us expand the product $\prod_{i=2}^{n} (|\eta_{i}|^{1-2H}+|\eta_{i-1}|^{1-2H})$ in the integral defining $L_{n,t}(\zeta)$. We obtain an expression of the form $\sum_{\alpha\in D_{n}} \prod_{i=1}^{n} |\eta_{i}|^{\alpha_{i}}$, where $D_{n}$ is a set of multi-indices of length $n$. The complete description of $D_{n}$ is omitted for sake of conciseness, and we will just use the following facts: $\text{Card}(D_{n})=2^{n-1}$ and for any $\alpha\in D_{n}$ we have
\begin{equation*}
|\alpha|\equiv \sum_{i=1}^{n} \alpha_i = (n-1)(1-2H), \quad\text{and}\quad \alpha_{i} \in \{0, 1-2H, 2(1-2H)\}, \quad i=1,\ldots, n.
\end{equation*}
This simple expansion yields the following bound
\begin{multline*}
L_{n,t}(\zeta) \leq|\zeta|^{1-2H}\sum_{\alpha \in D_{n}} \int_{[0,t]^n} \int_{\mathbb{R}^n} \prod_{i=1}^n e^{-\kappa (s_{\sigma(i+1)}-s_{\sigma(i)})|\eta_{i}|^2} \prod_{i=1}^n |\eta_i|^{\alpha_i}d\eta ds\\
+\sum_{\alpha \in D_{n}} \int_{[0,t]^n}\int_{\mathbb{R}^n}\prod_{i=1}^n e^{-\kappa (s_{\sigma(i+1)}-s_{\sigma(i)})|\eta_{i}|^2} |\eta_1|^{1-2H} \prod_{i=1}^n |\eta_i|^{\alpha_i}d\eta ds\,.
\end{multline*}
Perform the change of variable $\xi_{i}= (\kappa (s_{\sigma(i+1)}-s_{\sigma(i)}))^{1/2} \eta_{i}$ in the above integral, and notice that $\int_{\mathbb{R}} e^{- \xi^{2}} |\xi|^{\alpha_i}d\xi$ is bounded by a constant for $\alpha_i>-1$. Changing the integral over $[0,t]^{n}$ into an integral over the simplex, we get
\begin{eqnarray*}
L_{n,t}(\zeta)&\leq& C |\zeta|^{1-2H} n! c_H^n \sum_{\alpha \in D _{n}} \int_{T_n(t)}\prod_{i=1}^n (\kappa(s_{i+1}-s_{i}))^{-\frac{1}{2}(1+\alpha_i)}ds\\
&&+\,C n! c_H^n \sum_{\alpha \in D _{n}} \int_{T_n(t)}(\kappa(s_{2}-s_{1}))^{-\frac{2-2H+\alpha_1}{2}}\prod_{i=2}^n (\kappa(s_{i+1}-s_{i}))^{-\frac{1}{2}(1+\alpha_i)}ds.
\end{eqnarray*}
We observe that whenever $\frac{1}{4}< H < \frac{1}{2}$, we have $\frac12(1+\alpha_{i})<1$ for all $i=2,\ldots, n$, and it is easy to see that $\alpha_1$ is at most $1-2H$, so $\frac{1}{2}(2-2H+\alpha_1)<1$. (The condition $H>1/4$ comes from the requirement that when $\alpha_1=1-2H$, we need $\frac{1}{2}(2-2H+\alpha_1)=\frac{1}{2}(3-4H)<1$.)
Thanks to Lemma \ref{lem:intg-simplex} and recalling that $\sum_{i=1}^n\alpha_i = (n-1)(1-2H)$ for all $\alpha\in D_{n}$, we thus conclude that
\begin{equation*}
L_{n,t}(\zeta) \leq\frac{C (1+t^{\frac{1}{2}-H}\kappa^{\frac{1}{2}-H}|\zeta|^{1-2H})\,n!\, c^nc_H^n t^{nH}\kappa^{nH-n}}{\Gamma(nH+1)}\,.
\end{equation*}
Plugging this expression into \eqref{eq:bnd-fn-L2-1}, we end up with
\begin{equation}\label{eq:bnd-H-norm-fn}
n!\| f_n(\cdot,t,x)\|_{\mathfrak H^{\otimes n}}^2 \leq \frac{C c_H^n c^n t^{nH}\kappa^{nH-n}}{\Gamma(nH+1)}\left(\int_{\mathbb{R}}(1+t^{\frac{1}{2}-H}\kappa^{\frac{1}{2}-H}|\zeta|^{\frac{1}{2}-H})\left| \mathcal{F}u_0(\zeta)\right| d\zeta\right)^2.
\end{equation}
The proof of \eqref{chaos} is now easily completed thanks to the asymptotic behavior of the Gamma function and our assumption on $u_0$, and this finishes the existence and uniqueness proof.
\end{proof}

\setcounter{equation}{0}\Section{Moment formula and bounds}
\label{sec:Anderson.momentbounds}

In this section we derive a Feynman-Kac formula for the moments of the solution to equation \eqref{spde}, together with upper and lower bounds on these moments which allow us to conclude that the solution is intermittent. We proceed by first getting an approximation result for $u$, and then deriving the upper and lower bounds for the approximation.

\subsection{Approximation of the solution}

The approximation of the solution we consider is based on the following approximation of the noise $W$. For each $\varepsilon>0$ and $\varphi\in \mathfrak H $, we define
\begin{equation}\label{eq:cov-W-epsilon}
W_{\varepsilon}(\varphi) = \int_0^t \int_{\mathbb{R}} [\rho_{\varepsilon}*\varphi](s,y)W(ds,dy) =\int_0^t \int_{\mathbb{R}}\int_{\mathbb{R}}\varphi(s,x)\rho_{\varepsilon}(x-y)W(ds,dy)dx\,,
\end{equation}
where $\rho_{\varepsilon}(x)=(2\pi \varepsilon)^{-\frac{1}{2}} e^{-{x^2}/{(2\varepsilon)}}$. Notice that the covariance of $W_{\varepsilon}$ can be read (either in Fourier or direct coordinates) as:
\begin{eqnarray}\label{eq:ident-cov-W-ep}
\mathbf{E}\left[W_{\varepsilon}(\varphi) W_{\varepsilon}(\psi) \right] &=& c_{1,H} \int_0^t \int_{\mathbb{R}} \mathcal{F}\varphi(s,\xi)\, \overline{\mathcal{F}\psi(s,\xi)} \, e^{-\varepsilon |\xi|^2} |\xi|^{1-2H} d\xi ds \\
&=& c_{1,H} \int_0^t \int_{\mathbb{R}}\int_{\mathbb{R}}\varphi(s,x)f_{\varepsilon}(x-y)\psi(s,y) \, dx dy ds, \notag
\end{eqnarray}
for every $\varphi, \psi$ in $\mathfrak H$, where $f_{\varepsilon}$ is given by $f_{\varepsilon}(x)= \mathcal{F}^{-1}(e^{-\varepsilon |\xi|^2} |\xi|^{1-2H})$. In other words, $W_{\varepsilon}$ is still a white noise in time, but its space covariance is now given by $f_{\varepsilon}$. Note that $f_{\varepsilon}$ is a real positive-definite function, but it is not necessarily positive.
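A quick way to see this lack of pointwise positivity (an observation which is not needed in the sequel) is the following: since $\mathcal F f_{\varepsilon}(\xi)=e^{-\varepsilon|\xi|^{2}}|\xi|^{1-2H}$ is continuous and vanishes at $\xi=0$, we have, at least formally,
\begin{equation*}
\int_{\mathbb{R}}f_{\varepsilon}(x)\,dx
\;=\;\text{const}\cdot\mathcal F f_{\varepsilon}(0)\;=\;0,
\qquad\text{while}\qquad
f_{\varepsilon}(0)\;=\;\text{const}\cdot\int_{\mathbb{R}}e^{-\varepsilon|\xi|^{2}}|\xi|^{1-2H}\,d\xi\;>\;0,
\end{equation*}
where the constants are positive and depend only on the normalization of the Fourier transform; hence $f_{\varepsilon}$ has to take negative values.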
The noise $W_{\varepsilon}$ induces an approximation to the mild formulation of equation \eqref{spde}, namely
\begin{equation}\label{appr eq}
u_{\varepsilon}(t,x)=p_t* u_0(x) + \int_0^t \int_{\mathbb{R}}p_{t-s}(x-y)u_{\varepsilon}(s,y) \, W_{\varepsilon}(ds,dy) ,
\end{equation}
where the integral is understood (as in Subsection \ref{sec:picard}) in the It\^o sense. We will start with a formula for the moments of $u_{\varepsilon}$.

\begin{proposition}\label{prop:appro-moments}
Let $W_{\varepsilon}$ be the noise defined by \eqref{eq:cov-W-epsilon}, and assume $\frac{1}{4}<H<\frac{1}{2}$. Assume $u_0$ is such that $\int_{\mathbb{R}}(1+|\xi|^{\frac{1}{2}-H})|\mathcal{F}u_0(\xi)|d\xi< \infty$. Then

\noindent \emph{(i)} Equation \eqref{appr eq} admits a unique solution.

\noindent \emph{(ii)} For any integer $n \geq 2$ and $(t,x)\in[0,T]\times\mathbb{R}$, we have
\begin{equation}\label{appro moment}
\mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right] = \mathbf{E}_B \left[\prod_{j=1}^n u_0(x+B_{\kappa t}^j) \exp \left( c_{1,H} \sum_{1\leq j\neq k \leq n} V_{t,x}^{\varepsilon,j,k}\right)\right],
\end{equation}
with
\begin{equation}\label{eq:def-V-tx-epsilon}
V_{t,x}^{\varepsilon,j,k} = \int_0^t f_{\varepsilon}(B_{ \kappa r}^j-B_{\kappa r}^k)dr = \int_0^t \int_{\mathbb{R}}e^{-\varepsilon |\xi|^2} |\xi|^{1-2H} e^{i\xi (B_{\kappa r}^j-B_{\kappa r}^k)} \, d\xi dr .
\end{equation}
In formula \eqref{eq:def-V-tx-epsilon}, $\{ B^j; j=1,\dots,n\}$ is a family of $n$ independent standard Brownian motions which are also independent of $W$, and $\mathbf{E}_{B}$ denotes the expected value with respect to the randomness in $B$ only.

\noindent \emph{(iii)} The quantity $\mathbf{E} [u^n_{\varepsilon}(t,x)]$ is uniformly bounded in $\varepsilon$. More generally, for any $a>0$ we have
\begin{equation*}
\sup_{\varepsilon>0} \mathbf{E}_B \left[ \exp \left( a \sum_{1\leq j\neq k \leq n} V_{t,x}^{\varepsilon,j,k}\right)\right] \equiv c_{a}<\infty .
\end{equation*}
\end{proposition}
\begin{proof}
The proof of item (i) is almost identical to the proof of Theorem~\ref{thm:exist-uniq-chaos}, and is omitted for sake of conciseness. Moreover, in the proof of (ii) and (iii), we may take $u_0(x)\equiv 1$ for simplicity. In order to check item (ii), set
\begin{equation}\label{eq:def-Atx}
A_{t,x}^{\varepsilon}(r,y)= \rho_{\varepsilon}(B_{\kappa (t-r)}^x-y), \quad\text{and}\quad \alpha^{\varepsilon}_{t,x}=\|A^{\varepsilon}_{t,x}\|^2_{\mathfrak H}.
\end{equation}
Then one can prove, similarly to Proposition 5.2 in \cite{HN}, that $u_{\varepsilon}$ admits a Feynman-Kac representation of the form
\begin{equation}\label{eq:feynman-u-ep}
u_{\varepsilon}(t,x)=\mathbf{E}_B \left[ \exp \left( W ( A_{t,x}^{\varepsilon})-\frac{1}{2}\alpha^{\varepsilon}_{t,x}\right) \right]\,.
\end{equation}
Now fix an integer $n \geq 2$.
According to \eqref{eq:feynman-u-ep} we have
\begin{equation*}
\mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right]=\mathbf{E}_W \left[\prod_{j=1}^n \mathbf{E}_B\left[ \exp \left( W(A^{\varepsilon, B^j}_{t,x})- \frac{1}{2}\alpha_{t,x}^{\varepsilon,B^j}\right) \right] \right]\,,
\end{equation*}
where for any $j=1,\dots,n$, $A_{t,x}^{\varepsilon,B^j}$ and $\alpha_{t,x}^{\varepsilon,B^j}$ are evaluations of \eqref{eq:def-Atx} using the Brownian motion $B^j$. Therefore, since $W(A^{\varepsilon, B^j}_{t,x})$ is a Gaussian random variable conditionally on $B$, we obtain
\begin{eqnarray*}
\mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right]&=& \mathbf{E}_B \left[ \exp \left(\frac{1}{2}\|\sum_{j=1}^n A_{t,x}^{\varepsilon,B^j}\|^2_{\mathfrak H} -\frac{1}{2}\sum_{j=1}^n \alpha_{t,x}^{\varepsilon,B^j}\right)\right] \notag\\
&=& \mathbf{E}_B \left[ \exp \left(\frac{1}{2}\|\sum_{j=1}^n A_{t,x}^{\varepsilon,B^j}\|^2_{\mathfrak H} -\frac{1}{2}\sum_{j=1}^n \| A_{t,x}^{\varepsilon,B^j}\|^2_{\mathfrak H}\right)\right] \notag\\
&=&\mathbf{E}_B \left[ \exp \left(\sum_{1\leq i < j \leq n}\langle A_{t,x}^{\varepsilon,B^i}, A_{t,x}^{\varepsilon,B^j}\rangle _{\mathfrak H}\right)\right]\,.
\end{eqnarray*}
The evaluation of $\langle A_{t,x}^{\varepsilon,B^i}, A_{t,x}^{\varepsilon,B^j}\rangle _{\mathfrak H}$ easily yields our claim \eqref{appro moment}, the last details being left to the patient reader.

Let us now prove item (iii), namely
\begin{equation}\label{appro moment finite}
\sup_{\varepsilon > 0} \sup_{t \in [0,T], x \in \mathbb{R}} \mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right] < \infty\,.
\end{equation}
To this aim, observe first that we have obtained an expression \eqref{appro moment} which does not depend on $x\in\mathbb{R}$, so that the $\sup_{t \in [0,T], x \in \mathbb{R}}$ in \eqref{appro moment finite} can be reduced to a $\sup$ in $t$ only. Next, still resorting to formula \eqref{appro moment}, it is readily seen that it suffices to show that for two independent Brownian motions $B$ and $\tilde{B}$, we have
\begin{equation}\label{eq:bnd-exp-F-t-epsilon}
\sup_{\varepsilon > 0, t\in [0,T]} \mathbf{E}_{B} \left[ \exp \left (c \, F_t^{\varepsilon} \right)\right] <\infty, \quad\text{with}\quad F_t^{\varepsilon} \equiv \int_0^t \int_{\mathbb{R}} e^{-\varepsilon |\xi|^2} |\xi|^{1-2H} e^{i \xi (B_{\kappa r}-\tilde{B}_{\kappa r})}d\xi dr,
\end{equation}
for any positive constant $c$. In order to prove \eqref{eq:bnd-exp-F-t-epsilon}, we expand the exponential and write:
\begin{equation}\label{eq:moments-F-t-epsilon}
\mathbf{E}_{B} \left[ \exp (c \, F_t^{\varepsilon})\right] =\sum_{l=0}^{\infty}\frac{\mathbf{E}_{B} \left[ (c \, F_t^{\varepsilon})^l\right]}{l!}\,.
\end{equation}
Next, we have
\begin{align*}
\mathbf{E}_{B} \left[\left( F_t^{\varepsilon}\right)^l\right]&= \mathbf{E}_{B} \left[ \int_{[0,t]^l} \int_{\mathbb{R}^l} \prod_{j=1}^l e^{-i \xi_j (B_{\kappa r_j}-\tilde{B}_{\kappa r_j})-\varepsilon |\xi_j|^2} |\xi_j|^{1-2H} d\xi dr \right] \\
& \leq \int_{[0,t]^l} \int_{\mathbb{R}^l} \prod_{j=1}^{l} e^{-\kappa (t-r_{\sigma(j)})|\xi_j+\dots+\xi_1|^2} \, |\xi_j|^{1-2H} \, d\xi dr\,,
\end{align*}
where $\sigma$ is the permutation on $\{1,2,\dots, l\}$ such that $t \geq r_{\sigma(l)} \geq \cdots \geq r_{\sigma(1)}$. We have thus gone back to an expression which is very similar to \eqref{eq:expression-norm-fn}. We now proceed as in the proof of Theorem \ref{thm:exist-uniq-chaos} to show that \eqref{appro moment finite} follows from equation \eqref{eq:moments-F-t-epsilon}.
\end{proof}

Starting from Proposition \ref{prop:appro-moments}, let us take limits in order to get the moment formula for the solution $u$ to equation~\eqref{spde}.

\begin{theorem}\label{THM moment}
Assume $\frac{1}{4}<H<\frac{1}{2}$ and consider $n\ge 1$, $j,k\in\{1,\ldots,n\}$ with $j\ne k$. For $(t,x)\in[0,T]\times\mathbb{R}$, denote by $V_{t,x}^{j,k}$ the limit in $L^2(\Omega)$ as $\varepsilon\rightarrow 0$ of
\begin{equation*}
V_{t,x}^{\varepsilon,j,k} = \int_0^t \int_{\mathbb{R}}e^{-\varepsilon |\xi|^2} |\xi|^{1-2H} e^{i\xi (B_{ \kappa r}^j-B_{\kappa r}^k)}d\xi dr.
\end{equation*}
Then $\mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right]$ converges as $\varepsilon \to 0$ to $\mathbf{E} [u^n(t,x)]$, which is given by
\begin{equation}\label{moment}
\mathbf{E}[u^n(t,x)] = \mathbf{E}_{B}\left[ \prod_{j=1}^n u_0(B^j_{\kappa t}+x)\exp \left( c_{1,H} \sum_{1\leq j \neq k \leq n} V_{t,x}^{j,k} \right)\right]\, .
\end{equation}
\end{theorem}

We note that in the recent paper \cite{HLN}, the moment formula is obtained for a general covariance function. However, we present the proof here for the sake of completeness.

\begin{proof}
As in Proposition \ref{prop:appro-moments}, we will prove the theorem for $u_0 \equiv 1$ for simplicity. For any $p\ge 1$ and $1\le j < k \le n$, we can easily prove that $V_{t,x}^{\varepsilon,j,k}$ converges in $L^{p}(\Omega)$ to $V_{t,x}^{j,k}$ defined by
\begin{equation}\label{eq:def-Vtx-jk}
V_{t,x}^{j,k} = \int_0^t \int_{\mathbb{R}} |\xi|^{1-2H} e^{i\xi (B_{\kappa r}^j-B_{\kappa r}^k)}d\xi dr.
\end{equation}
Indeed, this is due to the fact that $e^{-\varepsilon |\xi|^2} |\xi|^{1-2H} e^{i\xi (B_{\kappa r}^j-B_{\kappa r}^k)}$ converges to $|\xi|^{1-2H} e^{i\xi (B_{\kappa r}^j-B_{\kappa r}^k)}$ in the $d\xi\otimes dr\otimes d\mathbf{P}$ sense, plus standard uniform integrability arguments.
Now, taking into account relation \eqref{appro moment}, Proposition \ref{prop:appro-moments} (iii), the fact that $V_{t,x}^{\varepsilon,j,k}$ converges to $V_{t,x}^{j,k}$ in $L^{2}(\Omega)$ as $\varepsilon\to 0$, and the inequality $|e^{x}-e^{y}|\leq (e^x+e^y)|x-y|$, we obtain
\begin{eqnarray*}
&&\mathbf{E}_B\left|\exp \left(c_{1,H}\sum_{1\leq j\neq k \leq n}V_{t,x}^{\varepsilon,j,k} \right)-\exp \left(c_{1,H}\sum_{1\leq j\neq k \leq n}V_{t,x}^{j,k} \right)\right|\\
&\leq&\sup_{\varepsilon >0}2\left(\mathbf{E}_B\left|\exp \left(2c_{1,H}\sum_{1\leq j\neq k \leq n}V_{t,x}^{\varepsilon,j,k} \right)+\exp \left(2c_{1,H}\sum_{1\leq j\neq k \leq n}V_{t,x}^{j,k} \right)\right|^2\right)^{\frac{1}{2}} \\
&&\quad \times \left(\mathbf{E}_B \left|c_{1,H}\sum_{1\leq j\neq k \leq n}V_{t,x}^{\varepsilon,j,k}-c_{1,H}\sum_{1\leq j\neq k \leq n}V_{t,x}^{j,k}\right|^2\right)^{\frac{1}{2}}\,,
\end{eqnarray*}
which implies
\begin{eqnarray}\label{eq:lim-moments-u-epsilon}
\lim_{\varepsilon\to 0} \mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right] &=& \lim_{\varepsilon\to 0} \mathbf{E}_B \left[ \exp \left( c_{1,H} \sum_{1\leq j\neq k \leq n} V_{t,x}^{\varepsilon,j,k}\right)\right] \notag \\
&=& \mathbf{E}_B \left[ \exp \left( c_{1,H} \sum_{1\leq j\neq k \leq n} V_{t,x}^{j,k}\right)\right].
\end{eqnarray}
To end the proof, let us now identify the right hand side of \eqref{eq:lim-moments-u-epsilon} with $\mathbf{E} [u^n(t,x)]$, where $u$ is the solution to equation \eqref{spde}. For $\varepsilon,\varepsilon'>0$ we write
\[
\mathbf{E} \left[ u_{\varepsilon}(t,x) \, u_{\varepsilon'}(t,x) \right]= \mathbf{E}_B \left[ \exp \left( \langle A^{\varepsilon,B^1}_{t,x} , A^{\varepsilon', B^2}_{t,x} \rangle_{\mathfrak H}\right)\right]\, ,
\]
where we recall that $A^{\varepsilon,B}_{t,x}$ is defined by relation \eqref{eq:def-Atx}. As before we can show that this converges as $\varepsilon, \varepsilon'$ tend to zero. So, $u_{\varepsilon}(t,x)$ converges in $L^2$ to some limit $v(t,x)$, and in fact the convergence holds in $L^p$ for all $p \geq 1$. Moreover, $\mathbf{E} [v^n(t,x)]$ is equal to the right hand side of~\eqref{eq:lim-moments-u-epsilon}. Finally, for any smooth random variable $F$ which is a linear combination of $W({\bf 1}_{[a,b]}(s)\varphi(x))$, where $\varphi$ is a $C^{\infty}$ function with compact support, using the duality relation \eqref{dual}, we have
\begin{equation}\label{eq:duality-u-varepsilon}
\mathbf{E} \left[ F u_{\varepsilon}(t,x)\right] =\mathbf{E} \left[ F\right]+\mathbf{E} \left[ \langle Y^{\varepsilon} ,DF\rangle _{\mathfrak H}\right],
\end{equation}
where
\begin{equation*}
Y^{\varepsilon}(s,z)= \left( \int_{\mathbb{R}} p_{t-s}(x-y) \, p_{\varepsilon}(y-z) u_{\varepsilon} (s,y)\, dy \right) {\bf 1}_{[0,t]}(s) .
\end{equation*}
Letting $\varepsilon$ tend to zero in equation \eqref{eq:duality-u-varepsilon}, after some easy calculations we get
\begin{equation*}
\mathbf{E} [F v(t,x)]= \mathbf{E}[ F] +\mathbf{E} \left[ \langle DF, v\, p_{t-\cdot}(x-\cdot)\rangle_{\mathfrak H}\right]\,.
\end{equation*}
This equation is valid for any $F \in \mathbb{D}^{1,2}$ by approximation. So the above equation implies that the process $v$ is the solution of equation \eqref{spde}, and by the uniqueness of the solution we have $v=u$.
\end{proof}

\subsection{Intermittency estimates}

In this subsection we prove some upper and lower bounds on the moments of the solution which entail the intermittency phenomenon.

\begin{theorem}\label{thm:intermittency-estimates}
Let $\frac{1}{4}<H<\frac{1}{2}$, and consider the solution $u$ to equation \eqref{spde}. For simplicity we assume that the initial condition is $u_0(x)\equiv 1$. Let $n \geq 2$ be an integer, $x\in\mathbb{R}$ and $t\ge 0$. Then there exist some positive constants $c_{1},c_{2},c_{3}$ independent of $n$, $t$ and $\kappa$ with $0<c_{1}<c_{2}<\infty$ satisfying
\begin{equation}\label{eq:intermittency-bounds}
\exp (c_{1} n^{1+\frac{1}{H}}\kappa^{1-\frac{1}{H}}t) \leq \mathbf{E}\left[ u^n(t,x) \right] \leq c_{3} \exp \big(c_{2} n^{1+\frac{1}{H}}\kappa^{1-\frac{1}{H}} t\big)\,.
\end{equation}
\end{theorem}

\begin{proof}[Proof of Theorem \ref{thm:intermittency-estimates}]
We divide this proof into upper and lower bound estimates.

\noindent \textit{Step 1: Upper bound.}
Recall from equation \eqref{eq:chaos-expansion-u(tx)} that for $(t,x)\in\mathbb{R}_{+}\times\mathbb{R}$, $u(t,x)$ can be written as $u(t,x)=\sum_{m=0}^{\infty}I_m(f_m(\cdot,t,x))$. Moreover, as a consequence of the hypercontractivity property on a fixed chaos we have (see \cite[p. 62]{Nua})
\begin{equation*}
\|I_m(f_m(\cdot,t,x))\|_{L^{n}(\Omega)}\leq (n-1)^{\frac{m}{2}}\|I_m(f_m(\cdot,t,x))\|_{L^{2}(\Omega)} \,,
\end{equation*}
and substituting the above right hand side by the bound \eqref{eq:bnd-H-norm-fn}, we end up with
\begin{eqnarray*}
\|I_m(f_m(\cdot,t,x))\|_{L^{n}(\Omega)} \leq n^{\frac{m}{2}}\|I_m(f_m(\cdot,t,x))\|_{L^{2}(\Omega)} \leq \frac{c^{\frac{m}{2}}n^{\frac{m}{2}}t^{\frac{mH}{2}}\kappa^{\frac{Hm-m}{2}}} { \Gamma(mH/2+1) } \,.
\end{eqnarray*}
Therefore, by the asymptotic bound for the Mittag-Leffler function, $\sum_{m\ge 0}x^{m}/\Gamma(\alpha m+1) \le c_1 \exp(c_2 x^{1/\alpha})$ (see \cite{kilbas}, Formula (1.8.10)), we get:
\begin{eqnarray*}
\|u(t,x)\|_{L^{n}(\Omega)} \leq \sum_{m=0}^{\infty} \|I_m(f_m(\cdot,t,x))\|_{L^{n}(\Omega)} \leq \sum_{m=0}^{\infty}\frac{c^{\frac{m}{2}}n^{\frac{m}{2}}t^{\frac{mH}{2}}\kappa^{\frac{Hm-m}{2}}}{\big(\Gamma(m H+1)\big)^{\frac{1}{2}}}\leq c_{1}\exp {\big(c_{2} t n^{\frac{1}{H}} \kappa^{\frac{H-1}{H}}\big)}\,,
\end{eqnarray*}
from which the upper bound in our theorem is easily deduced.
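For the reader's convenience, here is one way to reduce the last step to the quoted Mittag-Leffler bound. Setting $y:=c\,n\,t^{H}\kappa^{H-1}$, the $m$-th summand above is $y^{m/2}/\Gamma(mH+1)^{1/2}$, and the Cauchy-Schwarz inequality gives
\begin{equation*}
\sum_{m\ge 0}\frac{y^{m/2}}{\Gamma(mH+1)^{1/2}}
\le\Big(\sum_{m\ge 0}2^{-m}\Big)^{1/2}\Big(\sum_{m\ge 0}\frac{(2y)^{m}}{\Gamma(mH+1)}\Big)^{1/2}
\le c_{1}\exp\big(c_{2}\,y^{1/H}\big),
\end{equation*}
with $y^{1/H}=c^{1/H}\,n^{1/H}\,t\,\kappa^{\frac{H-1}{H}}$, which is the exponent displayed above; raising the resulting bound on $\|u(t,x)\|_{L^{n}(\Omega)}$ to the power $n$ then produces the factor $n^{1+\frac{1}{H}}\kappa^{1-\frac{1}{H}}t$ appearing in \eqref{eq:intermittency-bounds}.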
\noindent \textit{Step 2: Lower bound for $u_{\varepsilon}$.}
For the lower bound, we start from the moment formula \eqref{appro moment} for the approximate solution, and write
\begin{multline*}
\mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right] \\
= \mathbf{E}_{B} \left[ \exp \left(c_{1,H}\left[ \int_0^t \int_{\mathbb{R}} e^{-\varepsilon |\xi|^2} \Big| \sum_{j=1}^n e^{-i B_{\kappa r}^j \xi}\Big|^2 |\xi|^{1-2H} d\xi dr -nt \int_{\mathbb{R}} e^{-\varepsilon |\xi|^2} |\xi|^{1-2H} d\xi\right] \right)\right].
\end{multline*}
In order to estimate the expression above, notice first that the obvious change of variable $\lambda= \varepsilon^{1/2}\xi$ yields $\int_{\mathbb{R}} e^{-\varepsilon |\xi|^2} |\xi|^{1-2H} d\xi=C \varepsilon^{-(1-H)}$ for some constant $C$. Now for an additional arbitrary parameter $\eta>0$, consider the set
\begin{equation*}
A_{\eta}=\left\{\omega; \, \sup_{1\leq j\leq n}\sup_{0\leq r \leq t}|B_{\kappa r}^{j}(\omega)|\leq \frac{\pi}{3\eta}\right\}.
\end{equation*}
Observe that classical small ball inequalities for Brownian motion (see (1.3) in \cite{LS}) yield $\mathbf{P}(A_{\eta})\geq c_{1} e^{-c_{2} \eta^2 n \kappa t}$ for $\eta$ large enough. In addition, if we assume that $A_{\eta}$ is realized and $|\xi|\le\eta$, some elementary trigonometric identities show that the following deterministic bound holds true: $| \sum_{j=1}^n e^{-i B_{\kappa r}^j \xi}| \ge \frac{n}{2}$. Gathering those considerations, we thus get
\begin{align*}
\mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right] &\geq \exp \left( c_1 n^2 \int_0^t \int_0^{\eta} e^{-\varepsilon |\xi|^2} |\xi|^{1-2H} d\xi dr - c_2 nt \varepsilon^{H-1} \right) \mathbf{P}\left( A_{\eta} \right) \\
&\geq C \exp \left( c_1 n^2 t \varepsilon^{-(1-H)} \int_0^{\varepsilon^{1/2}\eta} e^{- |\xi|^2} |\xi|^{1-2H} d\xi - c_2 nt \varepsilon^{-(1-H)} - c_{3} n \kappa t \eta^{2} \right).
\end{align*}
We now choose the parameter $\eta$ such that $\kappa \eta^2=\varepsilon^{-(1-H)}$, which means in particular that $\eta \to \infty$ as $\varepsilon \to 0$. It is then easily seen that $\int_0^{\varepsilon^{1/2}\eta} e^{- |\xi|^2} |\xi|^{1-2H} d\xi$ is of order $\varepsilon^{H(1-H)}$ in this regime, and some elementary algebraic manipulations entail
\begin{equation*}
\mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right] \geq C \exp \left( c_1 n^2 t \kappa^{H-1}\varepsilon^{-(1-H)^2} -c_2 nt\varepsilon^{-(1-H)}\right) \geq C \exp \left(c_{3} t \kappa^{1-\frac{1}{H}}n^{1+\frac{1}{H}}\right),
\end{equation*}
where the last inequality is obtained by choosing $\varepsilon^{-(1-H)}=c \, \kappa ^{\frac{H-1}{H}}n^{\frac{1}{H}}$ in order to optimize the second expression.
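For the reader's convenience, the exponent count behind this choice is the following elementary substitution: with $\varepsilon^{-(1-H)}=c\,\kappa^{\frac{H-1}{H}}n^{\frac{1}{H}}$ we obtain
\begin{equation*}
n^{2}t\,\kappa^{H-1}\varepsilon^{-(1-H)^{2}}
=c^{1-H}\,n^{2+\frac{1-H}{H}}\,t\,\kappa^{(H-1)\left(1+\frac{1-H}{H}\right)}
=c^{1-H}\,n^{1+\frac{1}{H}}\,\kappa^{1-\frac{1}{H}}\,t,
\qquad
n\,t\,\varepsilon^{-(1-H)}=c\,n^{1+\frac{1}{H}}\,\kappa^{1-\frac{1}{H}}\,t,
\end{equation*}
so both terms in the previous exponent are of order $n^{1+\frac{1}{H}}\kappa^{1-\frac{1}{H}}t$, and an appropriate choice of the constant $c$ makes the positive term dominant.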
We have thus reached the desired lower bound in \eqref{eq:intermittency-bounds} for the approximation $u_{\varepsilon}$ in the regime $\varepsilon=c \, \kappa ^{\frac{1}{H}}n^{-\frac{1}{H(1-H)}}$.

\noindent \textit{Step 3: Lower bound for $u$.}
To complete the proof, we need to show that for all sufficiently small $\varepsilon$, $\mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right]\leq \mathbf{E}[u^n(t,x)]$. We thus start from equation \eqref{appro moment} and use the series expansion of the exponential function as in \eqref{eq:moments-F-t-epsilon}. We get
\begin{equation}\label{eq:expansion-moment-u-epsilon}
\mathbf{E} \left[ u^n_{\varepsilon}(t,x)\right]= \sum_{m=0}^{\infty} \frac{ c_{1,H}^m}{m!} \, \mathbf{E} _{B} \!\left[ \left( \sum_{1\leq j \neq k \leq n} V_{t,x}^{\varepsilon,j,k} \right)^m \right],
\end{equation}
where we recall that $V_{t,x}^{\varepsilon,j,k}$ is defined by \eqref{eq:def-V-tx-epsilon}. Furthermore, expanding the $m$th power above, we have
\begin{equation*}
\mathbf{E}_{B} \!\left[ \left( \sum_{1\leq j \neq k \leq n} V_{t,x}^{\varepsilon,j,k} \right)^m \right] = \sum_{\alpha\in K_{n,m}} \int_{[0,t]^m} \int_{\mathbb{R}^m} e^{-\varepsilon \sum_{l=1}^m |\xi_l|^2} \mathbf{E}_{B} \left[ e^{i B^{\alpha}(\xi)} \right] \prod_{l=1}^m |\xi_l|^{1-2H} \, d\xi dr\,,
\end{equation*}
where $K_{n,m}$ is a set of multi-indices defined by
\begin{equation*}
K_{n,m}= \left\{ \alpha=(j_{1},\ldots,j_{m},k_{1},\ldots,k_{m}) \in \{1,\ldots,n\}^{2m} ; \, j_{l}\neq k_{l} \text{ for all } l=1,\ldots,m \right\},
\end{equation*}
and $B^{\alpha}(\xi)$ is a shorthand for the linear combination $\sum_{l=1}^m \xi_{l}(B_{\kappa r_{l}}^{j_{l}}-B_{\kappa r_{l}}^{k_{l}})$. The important point here is that $\mathbf{E} _{B}[ e^{iB^{\alpha}(\xi)}]$ is positive for any $\alpha\in K_{n,m}$. We thus get the following inequality, valid for all $m\ge 1$:
\begin{eqnarray*}
\mathbf{E} _{B} \!\left[ \left( \sum_{1\leq j \neq k \leq n} V_{t,x}^{\varepsilon,j,k} \right)^m \right] &\le& \sum_{\alpha\in K_{n,m}} \int_{[0,t]^m} \int_{\mathbb{R}^m} \mathbf{E} _{B} \left[ e^{i B^{\alpha}(\xi)} \right] \prod_{l=1}^m |\xi_l|^{1-2H} \, d\xi dr \\
&=& \mathbf{E} _{B} \!\left[ \left( \sum_{1\leq j \neq k \leq n} V_{t,x}^{j,k} \right)^m \right],
\end{eqnarray*}
where $V_{t,x}^{j,k}$ is defined by \eqref{eq:def-Vtx-jk}. Plugging this inequality back into \eqref{eq:expansion-moment-u-epsilon} and recalling expression \eqref{moment} for $\mathbf{E} [u^n(t,x)]$, we easily deduce that $\mathbf{E} [u^n_{\varepsilon}(t,x)] \le \mathbf{E}[u^n(t,x)]$, which finishes the proof.
\end{proof}

\begin{thebibliography}{999}

\bibitem{AKQ} Alberts, T., Khanin, K., Quastel, J. The continuum directed random polymer. \textit{J. Stat. Phys.} \textbf{154} (2014), no. 1-2, 305-326.

\bibitem{BJQ} Balan, R., Jolis, M., Quer-Sardanyons, L. SPDEs with fractional noise in space with index $H<1/2$. \textit{Electron. J. Probab.} \textbf{20} (2015), paper no. 54, 36 pp.

\bibitem{BeC} Bertini, L., Cancrini, N. The stochastic heat equation: Feynman-Kac formula and intermittence. \textit{J. Statist. Phys.} \textbf{78} (1995), no. 5-6, 1377-1401.

\bibitem{BTV} Bezerra, S., Tindel, S., Viens, F.
Superdiffusivity for a Brownian polymer in a continuous Gaussian environment. \textit{Ann. Probab.} \textbf{36} (2008), no. 5, 1642-1675.

\bibitem{Ch14} Chen, X. Spatial asymptotics for the parabolic Anderson models with generalized time-space Gaussian noise. \textit{Ann. Probab.} \textbf{44} (2016), no. 2, 1535-1598.

\bibitem{Ha} Hairer, M. Solving the KPZ equation. \textit{Ann. of Math.} (2) \textbf{178} (2013), no. 2, 559-664.

\bibitem{hubook} Hu, Y. Analysis on Gaussian space. World Scientific, Singapore, 2017.

\bibitem{HN} Hu, Y., Nualart, D. Stochastic heat equation driven by fractional noise and local time. \textit{Probab. Theory Related Fields} \textbf{143} (2009), no. 1-2, 285-328.

\bibitem{HHLNT} Hu, Y., Huang, J., L\^e, K., Nualart, D., Tindel, S. Stochastic heat equation with rough dependence in space. \textit{Ann. Probab.}, pending revision.

\bibitem{HHNT} Hu, Y., Huang, J., Nualart, D., Tindel, S. Stochastic heat equations with general multiplicative Gaussian noises: H\"older continuity and intermittency. \textit{Electron. J. Probab.} \textbf{20} (2015), no. 55, 50 pp.

\bibitem{HNS} Hu, Y., Nualart, D., Song, J. Feynman-Kac formula for heat equation driven by fractional white noise. \textit{Ann. Probab.} \textbf{30}, 291-326.

\bibitem{HLN} Huang, J., L\^e, K., Nualart, D. Large time asymptotics for the parabolic Anderson model driven by spatially correlated noise. \textit{Ann. Inst. H. Poincar\'e}, to appear.

\bibitem{kilbas} Kilbas, A.~A., Srivastava, H.~M., Trujillo, J.~J. Theory and applications of fractional differential equations. North-Holland Mathematics Studies, 204. Elsevier Science B.V., Amsterdam, 2006.

\bibitem{Kh} Khoshnevisan, D. Analysis of stochastic partial differential equations. \textit{CBMS Regional Conference Series in Mathematics}, 119. Published for the Conference Board of the Mathematical Sciences, Washington, DC, by the American Mathematical Society, Providence, RI, 2014. viii+116 pp.

\bibitem{LS} Li, W. V., Shao, Q.-M. Gaussian processes: inequalities, small ball probabilities and applications. \textit{Stochastic processes: theory and methods, Handbook of Statist.} \textbf{19}, North-Holland, Amsterdam, 2001.

\bibitem{Nua} Nualart, D. The Malliavin calculus and related topics. Second edition. Probability and its Applications (New York). Springer-Verlag, Berlin, 2006. xiv+382 pp.

\bibitem{PT} Pipiras, V., Taqqu, M. Integration questions related to fractional Brownian motion. \textit{Probab. Theory Related Fields} \textbf{118} (2000), no. 2, 251-291.

\end{thebibliography}

\begin{minipage}{1.0\textwidth}
\vskip 1cm
Yaozhong Hu, Jingyu Huang, Khoa L\^e and David Nualart: Department of Mathematics, University of Kansas, 405 Snow Hall, Lawrence, Kansas, 66044, USA.
\textit{E-mail address:} [email protected], [email protected], [email protected], [email protected]

\vskip 0.5cm
Samy Tindel: Department of Mathematics, Purdue University, West Lafayette, IN 47907, USA.
\textit{E-mail address:} [email protected]
\end{minipage}

\end{document}
\begin{document}

\title{Extinction of Fleming-Viot-type particle systems with strong drift}

\author{ {\bf Mariusz Bieniek}, {\bf Krzysztof Burdzy} \ and \ {\bf Soumik Pal} }

\address{MB: Instytut Matematyki, Uniwersytet Marii Sk\l odowskiej-Curie, 20-031 Lublin, Poland}
\address{KB, SP: Department of Mathematics, Box 354350, University of Washington, Seattle, WA 98195, USA}
\email{[email protected]}
\email{[email protected]}
\email{[email protected]}
\thanks{Research supported in part by NSF Grants DMS-0906743, DMS-1007563, and by grant N N201 397137, MNiSW, Poland.}

\begin{abstract}
We consider a Fleming-Viot-type particle system consisting of independently moving particles that are killed on the boundary of a domain. At the time of death of a particle, another particle branches. If there are only two particles and the underlying motion is a Bessel process on $(0,\infty)$, both particles converge to 0 at a finite time if and only if the dimension of the Bessel process is less than 0. If the underlying diffusion is Brownian motion with a drift stronger than (but arbitrarily close to, in a suitable sense) the drift of a Bessel process, all particles converge to 0 at a finite time, for any number of particles.
\end{abstract}

\keywords{Fleming-Viot particle system, extinction}
\subjclass{60G17}

\maketitle

\section{Introduction}\label{sec:intro}

Our paper is motivated by an open problem concerning extinction in a finite time of a branching particle system. We prove two results that are related to the original problem and might shed some light on the still unanswered question.

The following Fleming-Viot-type particle system was studied in \cite{BurdzyMarch00}. Consider an open bounded set $D\subset\mathbb R^d$ and an integer $N\geq 2$. Let $\mathbf X_t=(X_t^1,\dotsc,X_t^N)$ be a process with values in $D^N$ defined as follows. Let $\mathbf X_0=(x^1,\dotsc,x^N)\in D^N$. Then the processes $X_t^1,\dotsc,X_t^N$ evolve as independent Brownian motions until the time $\tau_1$ when one of them, say, $X^j$, hits the boundary of $D$. At this time one of the remaining particles is chosen uniformly, say, $X^k$, and the process $X^j$ jumps at time $\tau_1$ to $X^k_{\tau_1}$. The processes $X_t^1,\dotsc,X_t^N$ continue evolving as independent Brownian motions after time $\tau_1$ until the first time $\tau_2>\tau_1$ when one of them hits the boundary of $D$. Again at the time $\tau_2$ the particle which approaches the boundary jumps to the current location of a particle chosen uniformly at random from amongst the ones strictly inside $D$. The subsequent evolution of $\mathbf X$ proceeds in the same way. We will say that $\mathbf X$ constructed above is \emph{driven} by Brownian motion. The main results in this paper are concerned with Fleming-Viot particle systems driven by other processes.

The above recipe defines the process $\mathbf X_t$ only for $t < \tau_\infty$, where
\begin{equation*}
\tau_\infty = \lim_{k\to \infty} \tau_k.
\end{equation*}
There is no natural way to define the process $\mathbf X_t$ for $t\geq \tau_\infty$, and, therefore, it is of interest to investigate what conditions ensure that $\tau_\infty=\infty$. In Theorem 1.1 of \cite{BurdzyMarch00} the authors claim that for every domain $D\subset\mathbb R^d$ and every $N\geq 2$ we have $\tau_\infty=\infty$, so the Fleming-Viot process is always well-defined. However, the proof of Theorem 1.1 in \cite{BurdzyMarch00} contains an error which is irreparable in the following sense.
That proof is based on only two properties of Brownian motion---the strong Markov property and the fact that the hitting time distribution of a compact set has no atoms (assuming that the starting point lies outside the set). Hence, if some version of that argument were true, it would apply to almost all non-trivial examples of Markov processes with continuous time, and in particular to all diffusions. However, in \cite{BieniekBurdzyFinch10}, the authors provided an example of a diffusion $X$ on $D=(0,\infty)$ (a Bessel process with dimension $\nu=-4$), such that $\tau_\infty<\infty$ for the Fleming-Viot process driven by this diffusion with $N=2$. It is not known whether Theorem 1.1 of \cite{BurdzyMarch00} is correct in full generality. It was proved in \cite{BieniekBurdzyFinch10,GK} that the theorem holds in domains which do not have thin channels.

\subsection{Main results}

We will prove two theorems. The first theorem is concerned with Bessel processes but it is motivated by the original model based on Brownian motion in an open bounded subset of $\mathbb R^d$. Recall that for any real $\nu$, a $\nu$-dimensional Bessel process on $(0,\infty)$ killed at 0 may be defined as a solution to the stochastic differential equation
\begin{align}\label{s30.3}
dX_t = dW_t + \frac{\nu-1}{2 X_t} dt,
\end{align}
where $W$ is the standard Brownian motion. To make a link between Brownian motion in a domain and Bessel processes, we recall that there exists a regularized version $\rho$ of the distance function (\cite[Theorem 2, p. 171]{Stein}). More precisely, there exist $0<c_1,c_2,c_3, c_4<\infty$ and a $C^\infty$ function $\rho: D \to (0,\infty)$ with the following properties,
\begin{align*}
&c_1 \dist(x, \partial D) \leq \rho(x) \leq c_2 \dist(x, \partial D) ,\\
&\sup_{x\in D} |\nabla \rho(x)| \leq c_3, \\
&\sup_{x\in D} \left| \rho (x) \frac\partial{\partial x_i} \frac\partial{\partial x_m} \rho(x)\right| \leq c_4 \qquad \hbox{for } 1\leq i,m\leq d.
\end{align*}
The above estimates and the It\^o formula show that if $B=(B^1, \dots, B^d)$ is a $d$-dimensional Brownian motion and $Z_t = \rho(B_t)$ then
\begin{align*}
d Z_t = \sum_{k=1}^ d a_k(Z_t) dB^k_t + \frac{b(Z_t)}{Z_t} dt,
\end{align*}
where the functions $a_k(\,\cdot\,)$ and $b(\,\cdot\,)$ are bounded. This shows that the dynamics of $Z$ resembles that of a Bessel process. Note that if $\tau_\infty < \infty$ for the Fleming-Viot process driven by Brownian motion in a domain $D$ then the distances of all particles to $\partial D$ go to 0 as $t\uparrow \tau_\infty$, by Lemma 5.2 of \cite{BieniekBurdzyFinch10}. Hence, it is of some interest to see whether a Fleming-Viot process based on a Bessel process can become extinct in a finite time. We have a complete answer only for $N=2$, i.e., a two-particle process.

\begin{theorem}\label{thm:main1}
Let $\mathbf X$ be a Fleming-Viot process with $N$ particles on $(0,\infty)$ driven by a Bessel process of dimension $\nu\in\mathbb R$.

(i) If $N=2$ then $\tau_\infty<\infty$, a.s., if and only if $\nu< 0$.

(ii) If $N\nu \geq 2$ then $\tau_\infty=\infty$, a.s.
\end{theorem}

Our second main result is also motivated by some results presented in \cite{BurdzyMarch00}. Several theorems in \cite{BurdzyMarch00,BieniekBurdzyFinch10} are concerned with limits when $N\to \infty$. To formulate rigorously any of these theorems it would suffice that $\tau_\infty = \infty$, a.s., for all sufficiently large $N$.
In other words, it is not necessary to know whether $\tau_\infty = \infty$ for small values of $N$. One may wonder whether it is necessarily the case that $\tau_\infty = \infty$ for any Fleming-Viot-type process and sufficiently large $N$. Our next result shows that once the drift of the diffusion is slightly stronger than the drift of any Bessel process then $\tau_\infty < \infty$ for the Fleming-Viot process driven by this diffusion and {\it every} $N$.

Consider the following SDE for a diffusion on $(0,2]$,
\begin{equation}\label{eq:SDEX1}
X_t=x_0+W_t-\int_0^t \frac1{\beta X_s^{\beta -1}}\,ds-L_t,\quad t\leq T_0,
\end{equation}
where $x_0 \in (0,2]$, $\beta >2$, $W$ is Brownian motion, $T_0$ is the first hitting time of 0 by $X$, and $L_t$ is the local time of $X$ at $2$, i.e., $L_t$ is a CAF of $X$ such that
\begin{equation*}
\int_0^\infty \mathbf 1_{\set{X_s\ne 2}}dL_s=0,\quad\text{a.s.}
\end{equation*}
It is well known that \eqref{eq:SDEX1} has a unique pathwise solution $(X,L)$ (see, e.g., \cite{BassDEO}, Theorem I.12.1).

We will analyze a Fleming-Viot process on $(0,2]$ driven by the diffusion defined in \eqref{eq:SDEX1}. The role of the boundary is played by the point 0, and only this point. In other words, the particles jump only when they approach 0. Let $\P[\mathbf x]$ denote the distribution of the Fleming-Viot particle system starting from $\mathbf X_0 =\mathbf x$.

\begin{theorem}\label{thm:main2}
Fix any $\beta >2$. For every $N\geq 2$, the $N$-particle Fleming-Viot process on $(0,2]$ driven by the diffusion defined in \eqref{eq:SDEX1} has the property that $\tau_\infty<\infty$, a.s. Moreover,
\begin{align}\label{s30.2}
\P[\mathbf x](\tau_\infty > t ) \leq c_1\mathrm e^{- c_2 t}, \qquad t \geq 0,\ \mathbf x \in (0,2]^N,
\end{align}
where $c_1$ and $c_2$ depend only on $N$ and $\beta$, and satisfy $0< c_1, c_2 < \infty$.
\end{theorem}

\begin{remark}
(i) If we take $\beta = 2$ in \eqref{eq:SDEX1} then the diffusion is a Bessel process (locally near 0). Hence, we may say that Theorem \ref{thm:main2} is concerned with a diffusion with a drift ``slightly stronger'' than the drift of any Bessel process.

(ii) The theorem still holds if the constant $1/\beta$ in the drift term in \eqref{eq:SDEX1} is replaced by any other positive constant. We chose $1/\beta$ to simplify some formulas in the proof.

(iii) The diffusion \eqref{eq:SDEX1} is reflected at 2 so that we can prove the exponential bound in \eqref{s30.2}. For some Markov processes, the hitting time of a point can be finite almost surely but it may have an infinite expectation; the hitting time of 0 by one-dimensional Brownian motion starting at 1 is a classical example of such a situation. The reflection is used in \eqref{eq:SDEX1} to get rid of the effects of excursions of the diffusion far away from the boundary at 0. A different example could be constructed based on a diffusion on $(0,\infty)$ with no reflection but with very strong negative drift far away from 0.
\end{remark}

We end this section with two open problems.

\begin{problem}
Find necessary and sufficient conditions, in terms of $N$ and $\nu$, for non-extinction in a finite time of an $N$-particle Fleming-Viot process driven by a $\nu$-dimensional Bessel process.
\end{problem}

\begin{problem}
Does there exist a Fleming-Viot-type process, not necessarily driven by Brownian motion, such that $\tau_\infty = \infty$, a.s., for the $N$-particle system, but $\tau_\infty < \infty$ with positive probability for the $(N+1)$-particle system, for some $N\geq 2$?
\end{problem}

The rest of the paper contains the proofs of the two main theorems.

\section{Proof of Theorem \ref{thm:main1}}\label{sec:proof1}

\subsection{Bessel processes}\label{sec:bessel}

We start with a review of some facts about Bessel processes and Gamma distributions. Let $Z_t$, $t\geq 0$, be a squared Bessel process of dimension $\nu\in\mathbb R$ starting at $x\geq 0$ ($Z\sim\besq{x}$, for short), i.e., $Z$ is the unique strong solution to the stochastic differential equation
\begin{equation*}
dZ_t=\nu \,dt+2\sqrt{|Z_t|}\,dW_t,\quad Z_0=x,
\end{equation*}
where $W$ is a one-dimensional Brownian motion (see \cite[Chapter 11]{RevuzYor99} for the case $\nu\geq 0$ and \cite{GoingYor03} for the general case). Squares of Bessel processes have the following scaling property: if $Z_t\sim\besq{x}$ and for some $c>0$ and all $t\geq 0$ we have $Z'_t=cZ_{c^{-1}t}$, then $Z'\sim\besq{cx}$.

If $Z\sim\besq{x}$ with $x>0$, and $T_0$ denotes the first hitting time of 0, then $T_0=\infty$, a.s., if $\nu\geq 2$, and $T_0<\infty$, a.s., if $\nu<2$. Moreover, in the latter case we have
\begin{equation}\label{eq:invgamma}
T_0\overset{d}{=}\frac{x}{2G},
\end{equation}
where $G$ is a $\Gamma\left( 1-\frac{\nu}{2} \right)$-distributed random variable \cite[eqn.~(15)]{GoingYor03}. Here and in what follows we say that a random variable is $\Gamma(\alpha)$-distributed if it has the density
\begin{equation*}
f_\alpha(x)=\frac{1}{\Gamma(\alpha)}\,x^{\alpha-1}\mathrm e^{-x},\quad x>0,\,\alpha>0,
\end{equation*}
where
\begin{equation*}
\Gamma(\alpha)=\int_0^\infty x^{\alpha-1}\,\mathrm e^{-x}\,dx
\end{equation*}
denotes the standard gamma function. Note that we consider only a one-parameter family of gamma densities, unlike the traditional two-parameter family.

In \cite{RevuzYor99}, a Bessel process $X$ of dimension $\nu\geq 0$ starting at $x\geq 0$ ($X\sim\bes{x}$) is defined as the square root of a $\besq{x^2}$ process $Z$. If $\nu\geq 0$, then by so-called comparison theorems, the paths of $Z_t$ are defined for all $t\geq 0$, so $X_t$ is well defined for all $t\in [0,\infty)$. We define a Bessel process $X$ of dimension $\nu<0$ starting at $x\geq 0$ as the square root of a $\besq{x^2}$ process $Z$, i.e., $X_t=\sqrt{Z_t}$ for $ t\leq T_0$. For any real $\nu$, these definitions are equivalent to the definition given in \eqref{s30.3} by the It\^o formula.

Processes $\bes{x}$ with $\nu\in \mathbb R$ scale as follows. If $X\sim\bes{x}$ is a Bessel process on $[0,T_0)$, then for all $c>0$, $c X_{c^{-2}t }$ is a $\bes{cx}$ process on $[0,c^2T_0]$. This follows easily from the scaling property of $\besq{x}$ processes.

\subsection{Proof of Theorem \ref{thm:main1} (i)}

We start with an alternative construction of the Fleming-Viot process $\mathbf X$. Let $X=(X_t,\,t\in[0, T_0))$ be a $\bes{1}$ process. Let $\mathbf Y=\br{Y^{1}_t,Y^{2}_t}$, where $Y^{1}_t$ and $Y^{2}_t$ are independent copies of $X_t$, and let $\mathbf Y^i_t = \br{Y^{i,1}_t,Y^{i,2}_t}$, $i=1,2,\dotsc$, be a sequence of independent copies of $\mathbf Y$. For $i=1,2,\dotsc$ we set
\begin{gather*}
\sigma_i = \inf\setof{t>0}{Y^{i,1}_t \wedge Y^{i,2}_t = 0}, \\
\intertext{and}
\alpha_i = Y^{i,1}_{\sigma_i} \vee Y^{i,2}_{\sigma_i}.
\end{gather*}
It is easily seen that $\sigma_1$ may be represented as $\sigma_1=\min(T_0,T'_0)$, where $T'_0$ is an independent copy of $T_0$, and that $(\sigma_i,\,i=1,2,\dotsc)$ is a sequence of independent and identically distributed random variables.

We construct a two-particle Fleming-Viot type process $\mathbf X_t = \br{X^1_t, X^2_t}$ as follows. First let $\tau_1=\sigma_1$ and set $\mathbf X_t = \mathbf Y^1_t$ for $t\in\halfint{0,\tau_1}$. At $\tau_1$ one of the particles hits the boundary of $D=(0,\infty)$, and it jumps to $\xi_1 = \alpha_1$. To continue the process we use the scaling property of $\mathbf Y_t$: let $\tau_2 = \tau_1 + \xi_1^2\sigma_2$ and set $\mathbf X_t = \xi_1\mathbf Y^2_{\xi_1^{-2}(t-\tau_1)}$ for $t\in\halfint{\tau_1,\tau_2}$. At $\tau_2$, one of the particles hits the boundary and jumps, this time to $\xi_2 = \alpha_2\xi_1$. We continue the process in the same way by setting
\begin{gather*}
\xi_j =\prod_{i=1}^j \alpha_i,\\
\tau_n = \sum_{j=1}^{n} \xi_{j-1}^2 \sigma_{j}, \\
\intertext{and}
\mathbf X_t = \xi_n \mathbf Y^{n+1}_{\xi_n^{-2}(t-\tau_n)} , \qquad \text{for } t\in\halfint{\tau_n,\tau_{n+1}}.
\end{gather*}
It is easy to see that the construction of $\mathbf X$ given above is equivalent to that given in the Introduction, except that the driving process is a $\nu$-dimensional Bessel process. The process $\mathbf X_t$ is well defined up until $\tau_\infty$, and we will show now that $\tau_\infty<\infty$ almost surely if and only if $\nu<0$.

Note that $\mathbf X_0 = (1,1)$ for the process constructed above. However, it is easy to see that for any two starting points $\mathbf X_0 = (x_0^1, x_0^2)$ and $\mathbf X_0 = (z_0^1, z_0^2)$ with $x_0^1,x_0^2, z_0^1, z_0^2 >0$, the distributions of $\mathbf X_{\tau_1}$ are mutually absolutely continuous. This implies that the argument given below proves the theorem for any initial value of $\mathbf X$.

The case $\nu\geq 2$ is very simple: then $\sigma_1=\infty$, a.s., so $\tau_\infty=\infty$, a.s. So for the rest of this section we assume that $\nu<2$. To check whether $\tau_\infty<\infty$ or $\tau_\infty=\infty$, we will apply the following theorem. Let $\log^+ x=\max(\log x, 0)$.

\begin{theorem}\label{thm:diaco_freed}
(\cite{DiaconisFreedman99}; see also \cite{BougerolNico92} or \cite{GoldieMaller00}) Let $\left\{ (A_n,B_n),\, n\geq 1 \right\}$ be a sequence of independent and identically distributed random vectors such that $A_n,B_n \in \mathbb R$ and
\begin{equation*}
\E\left(\log^{+}|A_1|\right)<\infty,\quad \E\left(\log^{+}|B_1|\right)<\infty.
\end{equation*}
Then the infinite random series
\begin{equation*}
\sum_{n=1}^\infty\Bigl( \prod_{j=1}^{n-1} A_j \Bigr) B_n
\end{equation*}
converges a.s. to a finite limit if and only if
\begin{equation*}
\E\log|A_1|<0.
\end{equation*}
\end{theorem}

We will apply Theorem \ref{thm:diaco_freed} with $A_n=\alpha_n^2$ and $B_n=\sigma_n$. Thus, in order to prove Theorem \ref{thm:main1} (i), it suffices to show that
\begin{enumerate}[(i)]
\item $\E\log\sigma_1<\infty$ for $\nu<2$;
\item $\E\log(\alpha_1^2)<0$ for $\nu<0$ and $\E\log(\alpha_1^2)\geq 0$ for $\nu\geq 0$.
\end{enumerate}

\begin{proof}[Proof of (i)]
Note that, in view of \eqref{eq:invgamma},
\begin{equation*}
\E\log\sigma_1 \leq \E\log T_0 =\E\log\frac{1}{2G} =-\log 2- \E\log G,
\end{equation*}
where $G\sim\Gamma\left( 1-\frac{\nu}{2} \right)$.
But for $G\sim\Gamma(\alpha)$ with $\alpha = 1-\frac{\nu}{2} >0$, we have \begin{equation}\label{eq:ElogG} \begin{split} \E\log G & =\int_0^\infty \log x\, f_\alpha(x)\,dx\\ & =\frac{1}{\Gamma(\alpha)}\int_0^\infty x^{\alpha-1}\log x\,\mathrm e^{-x}\,dx\\ & =\frac{1}{\Gamma(\alpha)}\,\frac{d}{d\alpha}\Gamma(\alpha)=\psi(\alpha)<\infty, \end{split} \end{equation} where $\psi$ is the well-known digamma function (\cite[Section 6.3]{AS}) defined as \begin{equation*} \psi(x)=\frac d{dx}\log\Gamma(x)=\frac{\Gamma'(x)}{\Gamma(x)}. \qedhere \end{equation*} \end{proof} \begin{proof}[Proof of (ii)] By Theorem 11 of \cite{Pal10}, the density of $\alpha_1^2$ is given by \begin{equation*} \begin{split} h_\nu(y)&=\frac{(y+2)^{\nu-3}}{\Gamma\left( 1-\frac{\nu}{2} \right)} \sum_{n=0}^\infty \frac{\Gamma\left( 3-\nu+2n \right)}{n!\Gamma\left( 2-\frac{\nu}{2}+n \right)}\left( \frac{y}{(y+2)^2} \right)^n\\ & = \frac{(y+2)^{\nu-3}}{\Gamma\left( 1-\frac{\nu}{2} \right)} g\left( \frac{y}{(y+2)^2} \right), \end{split} \end{equation*} where \begin{equation*} g(z)=\sum_{n=0}^{\infty}c_n z^n \end{equation*} with \begin{equation*} c_n=\frac{\Gamma\left( 2n+3-\nu \right)} {n!\Gamma\left( n+ 2-\frac{\nu}{2} \right)}, \quad n=0,1,2,\dotsc. \end{equation*} By the duplication formula for the gamma function (\cite[eqn.~6.1.18]{AS}), i.e., \begin{equation*} \Gamma(2z)=\frac{2^{2z-1}}{\sqrt\pi}\Gamma(z)\Gamma\left( z+\frac{1}{2} \right), \end{equation*} we have \begin{equation*} \begin{split} c_n & = \frac{2^{2n+2-\nu}}{\sqrt\pi}\cdot\frac{\Gamma\left( n+\frac{3-\nu}{2} \right)}{n!}\\ & =\frac{2^{2n+2-\nu}}{\sqrt\pi}\binom{n+\frac{1-\nu}{2}}n \Gamma\left( \frac{3-\nu}{2} \right), \end{split} \end{equation*} where for $x>-1$ and $k\in\mathbb N$, \begin{equation*} \binom x k=\frac{\Gamma(x+1)}{k!\Gamma(x-k+1)} \end{equation*} is a generalized binomial coefficient. Therefore, \begin{equation*} \begin{split} g(z) & = \frac{2^{2-\nu}}{\sqrt\pi}\Gamma\left( \frac{3-\nu}{2} \right) \sum_{n=0}^\infty \binom{n+\frac{1-\nu}{2}}n (4z)^n\\ & = \frac{2^{2-\nu}}{\sqrt\pi}\Gamma\left( \frac{3-\nu}{2} \right) \left( \frac{1}{1-4z} \right)^{\frac{3-\nu}{2}}, \end{split} \end{equation*} since for $a\in\mathbb R$ and $|z|<1$, \begin{equation*} \sum_{n=0}^\infty\binom{n+a}n z^n= (1-z)^{-a-1}. \end{equation*} Now \begin{equation*} g\left( \frac{y}{(y+2)^2} \right)=\frac{2^{2-\nu}}{\sqrt\pi}\Gamma\left( \frac{3-\nu}{2} \right) \frac{(y+2)^{3-\nu}}{(y^2+4)^{\frac{3-\nu}{2}}} \end{equation*} and therefore for $y\geq 0$ \begin{equation*} h_\nu(y)=\frac{2^{2-\nu}}{\sqrt\pi}\frac{\Gamma\left( \frac{3-\nu}{2} \right)}{\Gamma\left( 1-\frac{\nu}{2} \right)}\,\frac{1}{\left( y^2+4 \right)^{\frac{3-\nu}{2}}}. \end{equation*} So, to prove (ii) we need to study the sign of the integral \begin{equation*} I(\nu)=\int_0^{\infty}h_\nu(y)\log y\,dy. \end{equation*} Recall Student's $t$-distribution with $a>0$ degrees of freedom (\cite[section 26.7]{AS}). The density of this distribution is given by \begin{equation*} f(x;a)= \frac{\Gamma\left( \frac{a+1}{2} \right)}{\sqrt{\pi a}\,\Gamma\left( \frac{a}{2} \right)}\left( 1+\frac{x^2}{a} \right)^{-\frac{a+1}{2}},\quad -\infty < x < \infty.
\end{equation*} Changing the variable $y=\frac{2x}{\sqrt{2-\nu}}$ in $I(\nu)$ we get \begin{equation*} \begin{split} I(\nu)&=\int_0^{\infty}f(x;2-\nu)\log\frac{2x}{\sqrt{2-\nu}}\,dx\\ &=\frac{1}{2}\E\log\frac{2\abs{X}}{\sqrt{2-\nu}}, \end{split} \end{equation*} where $X$ is a random variable with the $t$-distribution with $(2-\nu)$ degrees of freedom. It is well known (\cite[section 26.7]{AS}) that \begin{equation*} X\overset{d}{=}\frac{Z\sqrt{2-\nu}}{\sqrt{V}}, \end{equation*} where $Z$ has a standard normal distribution and $V$ has a chi-squared distribution with $(2-\nu)$ degrees of freedom, and $Z$ and $V$ are independent. Therefore \begin{equation*} \begin{split} I(\nu)&=\frac12\E\log\frac{2\abs{Z}}{\sqrt{V}}\\ &=\frac12\log 2+\frac{1}{4}\left( \E\log\frac{Z^2}{2}-\E\log\frac{V}{2} \right). \end{split} \end{equation*} Note that $\frac{Z^2}{2}$ has a $\Gamma\left(\frac{1}{2}\right)$ distribution and $\frac{V}{2}$ has a $\Gamma\left(\frac{2-\nu}{2} \right)$ distribution. Therefore, by \eqref{eq:ElogG}, \begin{equation*} I(\nu)=\frac12\log 2+\frac{1}{4}\left( \psi\left( \frac{1}{2} \right) -\psi\left( \frac{2-\nu}{2} \right) \right). \end{equation*} The function $\psi$ is strictly increasing with $\psi\left( \frac{1}{2} \right)=-2\log 2-\gamma$ and $\psi(1)=-\gamma$, where $\gamma$ is the Euler constant (\cite[eqns.~6.3.2, 6.3.3]{AS}). Using these facts we see that \begin{equation*} I(\nu)=\frac{1}{4}\left( \psi(1)-\psi\left( \frac{2-\nu}{2} \right) \right), \end{equation*} and therefore $I(\nu)<0$ iff $\frac{2-\nu}{2}>1$ iff $\nu<0$. This completes the proof of (ii) and of Theorem \ref{thm:main1} (i). \end{proof} \subsection{Proof of Theorem \ref{thm:main1} (ii)} Suppose that $\mathbf X= (X^1, \dots , X^N)$ is a Fleming-Viot process driven by a $\nu$-dimensional Bessel process, $\mathbf X_0 = (x^1, \dots, x^N)$, $x^j >0$ for all $1\leq j \leq N$, and $N\nu \geq 2$. Let $Z_t = (X^1_t)^2 + \cdots + (X^N_t)^2$ and $z_0 = (x^1)^2 + \cdots + (x^N)^2>0$. According to \cite[Thm. 2.1]{Shiga_Wat}, the process $\{Z_t, t\in[0, \tau_1)\}$ is an $(N\nu)$-dimensional squared Bessel process, i.e., it has distribution $\besselsq^{N\nu}(z_0)$. More generally, $\{Z_t, t\in[\tau_k, \tau_{k+1})\}$ has distribution $\int\besselsq^{N\nu}(z)\P(Z_{\tau_k} \in dz)$ for $k\geq 0$, where, by convention, $\tau_0=0$. Let $Y_t = Z_t^{1/2}$, $B_0=0$ and define $B$ inductively on intervals $(\tau_k, \tau_{k+1}]$ by \begin{align*} B_t = B_{\tau_k} + \int_{\tau_k}^t dY_s - \int_{\tau_k}^t\frac{N\nu-1}{2 Y_s} ds. \end{align*} Then, by the It\^o formula, $B$ is a Brownian motion and \begin{equation*} dZ_t=2\sqrt{Z_t}\,dB_t + N\nu \,dt, \end{equation*} for $t\in (\tau_k, \tau_{k+1})$, $k\geq 0$. Let $\widehat Z$ be defined by $\widehat Z_0 = z_0$ and \begin{equation*} \widehat Z_t= \widehat Z_0+\int_0^t 2\sqrt{\widehat Z_s}\,dB_s + N\nu t, \qquad t\geq 0. \end{equation*} By definition, $\widehat Z$ is an $(N\nu)$-dimensional squared Bessel process on $[0, \infty)$. We assumed that $N\nu \geq 2$ and $z_0>0$, so we have $\widehat Z_t >0$ for all $t\geq 0$, a.s. Since $\widehat Z$ is continuous, for every integer $j>0$ there exists a random variable $a_j$ such that $\widehat Z_t > a_j >0$ for all $t\in[0,j]$, a.s. Note that $\widehat Z_t = Z_t$ for $t\in [0, \tau_1)$ and $\widehat Z_{\tau_1} < Z_{\tau_1}$ because $Z$ has a positive jump at time $\tau_1$.
Strong existence and uniqueness for SDE's with smooth coefficients implies that $\widehat Z_t \leq Z_t$ for all $t\in [\tau_1, \tau_2)$, because if the trajectories of $\widehat Z$ and $Z$ ever meet then they have to be identical after that time up to $\tau_2$. Once again, $\widehat Z_{\tau_2} < Z_{\tau_2}$ because $Z$ has a positive jump at time $\tau_2$. By induction, $\widehat Z_t \leq Z_t$ for all $t\in [\tau_k, \tau_{k+1})$, $k\geq 0$, a.s. Hence, $ Z_t > a_j >0$ for all $t\in[0,j]$ and $j>0$, a.s. This implies that $\tau _\infty = \infty$, a.s., by an argument similar to that in Lemma 5.2 of \cite{BieniekBurdzyFinch10}. \section{Proof of Theorem \ref{thm:main2}} \subsection{Preliminaries} We will give new meanings to some symbols used in the previous section. Constants denoted by $c$ with subscripts will be tacitly assumed to be strictly positive and finite; in addition, they may be assumed to satisfy some other conditions. (i) Let $W$ be one-dimensional Brownian motion and let $b$ be a Lipschitz function defined on an interval in $\mathbb R$, i.e., $|b(x_1) - b(x_2)| \leq L |x_1 - x_2|$ for some $L< \infty$ and all $x_1$ and $x_2$ in the domain of $b$. Consider a diffusion $X_t, t\in [s,u]$, satisfying the following stochastic differential equation, \begin{equation}\label{o6.1} dX_t= dW_t+b\left( X_t \right)\,dt,\quad X_s=a. \mathrm end{equation} Let $y_t$ be the solution to the ordinary differential equation \begin{equation*} \frac d {dt} y_t=b(y_t),\quad y_s=a. \mathrm end{equation*} We will later write $y'=b(y)$ instead of $\frac d {dt} y_t=b(y_t)$. The following inequality appears in Ch.~3, Sect.~1 of the book by Freidlin and Wentzell \cite{FredlinWentzell83}. For every $\delta>0$, \begin{align*} \P\left( \sup_{s\leq t\leq u}\left|X_t-y_t\right|>\delta \right)\leq \P \left( \sup_{s\leq t\leq u} |W_t| > \delta \mathrm e^{-L(u-s)} \right), \mathrm end{align*} where $L$ is a Lipschitz constant of $b$. It follows that \begin{align}\nonumber \P\left( \sup_{s\leq t\leq u}\left|X_t-y_t\right|>\delta \right) &\leq \P \left( \sup_{s\leq t\leq u} W_t > \delta \mathrm e^{-L(u-s)} \right) +\P \left( \inf_{s\leq t\leq u} W_t <- \delta \mathrm e^{-L(u-s)} \right)\\ & = 2\P \left( \sup_{s\leq t\leq u} W_t > \delta \mathrm e^{-L(u-s)} \right)\nonumber\\ & = 4\P \left( W_u-W_s > \delta \mathrm e^{-L(u-s)} \right)\nonumber\\ &\leq c_0 \mathrm exp\left( -\frac{\delta^2}{2(u-s)}\,\mathrm e^{-2L(u-s)} \right), \label{eq:FWineq} \mathrm end{align} where $c_0$ is an absolute constant. (ii) Recall that $\beta > 2$ and consider the function \begin{equation}\label{eq:defb} b(x)=-\frac{1}{\beta x^{\beta-1}},\quad x>0. \mathrm end{equation} We need the assumption that $\beta >2$ for the main part of the argument but many calculations given below hold for a larger family of $\beta$'s. It is easy to check that \begin{equation}\label{eq:yt} y_{s,a}(t):=\left( a^\beta+s -t \right)^{1/\beta},\quad s\leq t\leq s+a^\beta, \mathrm end{equation} is the solution to the ordinary differential equation \begin{equation}\label{eq:ODEb} y'=b(y) \mathrm end{equation} with the initial condition $y_{s,a}(s)=a$, where $s\in \mathbb R$, $a>0$. Note that the function $y_{s,a}(t)$ approaches 0 vertically at $ t = s+a^\beta$. (iii) Fix any $\gamma\in(0,1)$ and let $L$ be the Lipschitz constant of $b$ on the interval $\left[ a(\gamma/2)^{1/\beta}/2, 2a\right]$. 
Then \begin{equation*} L=b'\bigl( a(\gamma/2)^{1/\beta}/2 \bigr), \qquad b'(x)=\frac{\beta-1}{\beta x^\beta}, \mathrm end{equation*} and, therefore, \begin{equation}\label{o7.1} L=\frac{\beta-1}{\beta(a(\gamma/2)^{1/\beta}/2)^\beta} = \frac{\beta-1}{\beta\gamma 2^{1-\beta} a ^\beta}. \mathrm end{equation} Let $X$ be the solution to \mathrm eqref{o6.1} with $b$ defined in \mathrm eqref{eq:defb}. Assume that $\delta>0$ is so small that \begin{align}\label{o6.5} a(\gamma/2)^{1/\beta}/2 \leq y_{0,a}((1-\gamma/2)a^\beta)-\delta < y_{0,a}(0) + \delta \leq 2 a. \mathrm end{align} It follows that if $ \sup_{0\leq t\leq (1-\gamma/2)a^\beta} \left|X_t-y_{0,a}(t)\right|\leq\delta$ then $X_t\in \left[ a(\gamma/2)^{1/\beta}/2, 2a\right]$ for $0\leq t\leq (1-\gamma/2)a^\beta$. Hence, we can apply \mathrm eqref{eq:FWineq} with $L$ given by \mathrm eqref{o7.1} to obtain the following estimate \begin{equation}\label{eq:y0A} \P\left( \sup_{0\leq t\leq (1-\gamma/2)a^\beta} \left|X_t-y_{0,a}(t)\right|>\delta \mid X_0 =a \right) \leq c_0 \mathrm exp\left( -c_1\,\frac{\delta^2}{a^\beta} \right), \mathrm end{equation} where $X_t$ satisfies \mathrm eqref{o6.1} and $c_1$ depends on $\beta$ and $\gamma$, but it does not depend on $\delta$ and $a$. (iv) Suppose that $a>0$ and $u=(1-\gamma)a^\beta$. Then $y_{0,a}(u)=a\gamma^{1/\beta}$. For $\varepsilon\in(0,1)$, let \begin{equation}\label{o1.1} \bar\delta= \bar\delta(\varepsilon)=a\left[ 1-(1-\varepsilon\gamma)^{1/\beta} \right]. \mathrm end{equation} We fix $\mathrm eps\in(0,1)$ so small that for all $a>0$ the inequality \mathrm eqref{o6.5} is satisfied with $\bar\delta$ in place of $\delta$ and, moreover, \begin{align}\label{o4.1} (1-\gamma/2)(a-\bar\delta)^\beta >(1-\gamma)a^\beta \mathrm end{align} and \begin{align}\label{o4.7} \gamma^{1/\beta} + 1-(1-\varepsilon\gamma)^{1/\beta} < 1. \mathrm end{align} If $\delta\in[-\bar\delta,\bar\delta]$ then \begin{equation*} y_{0, a+\delta}(t)=\left( \left( a+\delta \right)^{\beta}-t \right)^{1/\beta},\quad 0\leq t\leq (a+\delta)^{\beta}. \mathrm end{equation*} It is straightforward to check that \begin{equation*} y_{0, a+\delta}(u)\geq a\gamma^{1/\beta}(1-\varepsilon)^{1/\beta}>0. \mathrm end{equation*} We will estimate the difference $ y_{0, a+\delta}(u) - y_{0, a}(u)$ as a function of $\delta$. Let \begin{equation*} f(\delta)=y_{0, a+\delta}(u) =\left( \left( a+\delta \right)^{\beta}-(1-\gamma)a^\beta \right)^{1/\beta}. \mathrm end{equation*} Then \begin{equation*} y_{0, a+\delta}(u) - y_{0, a}(u) =f(\delta)-f(0)=\delta f'(\hat\delta), \mathrm end{equation*} where $\hat\delta$ is between 0 and $\delta$. But \begin{equation*} f'(\delta)=\left( a+\delta \right)^{\beta-1} \left( \left( a+\delta \right)^{\beta}-(1-\gamma)a^\beta \right)^{1/\beta-1} \mathrm end{equation*} and \begin{equation*} f''(\delta)=-a^\beta(\beta-1)(1-\gamma)\left( a+\delta \right)^{\beta-2}\left( \left( a+\delta \right)^{\beta}-(1-\gamma)a^\beta \right)^{1/\beta-2}. \mathrm end{equation*} It follows from \mathrm eqref{o1.1} and the fact that $\mathrm eps\in(0,1)$ that $\left( a+\delta \right)^{\beta}-(1-\gamma)a^\beta>0$. Thus $f''<0$ and $f'$ is strictly decreasing. Therefore, for $\delta\in[-\bar\delta,\bar\delta]$ we have \begin{equation}\label{eq:Mdef} f'(\delta)\leq f'(-\bar\delta)=\left( \frac{1-\varepsilon\gamma}{\gamma(1-\varepsilon)} \right)^{1-1/\beta}=:M >1. \mathrm end{equation} Hence $ \left| y_{0, a+\delta}(u) - y_{0, a}(u) \right|\leq M|\delta|$ for $\delta\in[-\bar\delta,\bar\delta]$. 
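For completeness, let us verify that indeed $M>1$ in \eqref{eq:Mdef}: since $\gamma,\varepsilon\in(0,1)$,
\begin{equation*}
\frac{1-\varepsilon\gamma}{\gamma(1-\varepsilon)}>1
\quad\Longleftrightarrow\quad
1-\varepsilon\gamma>\gamma-\varepsilon\gamma
\quad\Longleftrightarrow\quad
1-\gamma>0,
\end{equation*}
which holds; as $\beta>2$ gives $1-1/\beta>0$, the right-hand side of \eqref{eq:Mdef} is a number larger than $1$ raised to a positive power, hence $M>1$.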
Suppose that $\delta'\in(0,\bar\delta]$ and $\delta=\frac{\delta'}{M+1}$. If $a_0\in[a-\delta,a+\delta]$, then \begin{equation}\label{eq:Mdelta} y_{0,a}(u)-\delta' \leq y_{0,a_0}(u) \leq y_{0,a}(u)+\delta'. \mathrm end{equation} The function $b(x)$ is strictly increasing on $(0,\infty)$. This easily implies that if $0 < a_1 < a_2$ and $0 \leq s < t \leq a_1^\beta$ then \begin{align}\label{o4.3} y_{0,a_2}(s) - y_{0,a_1}(s) < y_{0,a_2}(t) - y_{0,a_1}(t). \mathrm end{align} Hence, inequality \mathrm eqref{eq:Mdelta} holds in fact for all $t\in[0,u]$ in place of $u$ and, moreover, $y_{0,a-\bar\delta}(t)\leq y_{0,a}(t)-\delta'$. Another consequence of \mathrm eqref{eq:Mdelta} and \mathrm eqref{o4.3} is that if $s\in[0,u]$ and $a_0\in[y_{0,a}(s)-\delta,y_{0,a}(s)+\delta]$ then for $t \in[s,u]$, \begin{equation}\label{o3.5} y_{0,a-\bar\delta}(t)\leq y_{0,a}(t)-\delta' \leq y_{s,a_0}(t) \leq y_{0,a}(t)+\delta'. \mathrm end{equation} The following generalization of \mathrm eqref{o4.1} also follows from \mathrm eqref{o4.3}. If $s\in[0,u]$ and $a_0 \geq y_{0,a}(s)-\delta$ then \begin{align}\label{o4.2} s+(1-\gamma/2)a_0^\beta >(1-\gamma)a^\beta. \mathrm end{align} \subsection{Proof of Theorem \ref{thm:main2}} Suppose that $\mathbf X_t=(X_t^1,\dotsc,X_t^N)$ is a Fleming-Viot process on $(0,2]$ driven by the diffusion defined in \mathrm eqref{eq:SDEX1}, with an arbitrary $2 \leq N < \infty$. Recall that the role of the boundary is played by the point 0, and only this point. In other words, the particles jump only when they approach 0. {\it Step 1}. Let $\mathbf x=\left( x^1,\dotsc,x^N \right)$, $[N]=\set{1,\dotsc,N}$, and let $j_1$ be the smallest integer in $[N]$ with \begin{equation*} x^{j_1}=\max_{1\leq j\leq N} x^j. \mathrm end{equation*} Consider process $\mathbf X$ starting from $\mathbf X_0=\mathbf x$ and let $I_1=\set{j_1}$, $J_1=[N]\setminus I_1$ and $S_0=0$. Let $u=(1-\gamma)(x^{j_1})^\beta$ and \begin{equation*} S_1= u \wedge\inf\set{t\geq 0:\mathrm exists_{j\in J_1}\,X_t^j=X_t^{j_1}}. \mathrm end{equation*} Note that two processes $X^i$ and $X^j$ can meet either when their paths intersect at a time when both processes are continuous or when one of the processes jumps onto the other. Let $j_2$ be the smallest index in $J_1$ such that the equality in the definition of $S_1$ holds with $j = j_2$. Let $I_2=\set{j_1,j_2}$ and $J_2=[N]\setminus I_2$. Next we proceed by induction. Assume that, for some $n<N$, the sets $I_1,\dotsc,I_n$, $J_1,\dotsc,J_n$, and stopping times $S_1<S_2<\dotsc<S_{n-1}$ are defined. Then we let \begin{equation*} S_{n}= u \wedge\inf\set{t\geq S_{n-1}:\mathrm exists_{i\in I_n}\,\mathrm exists_{j\in J_n}\, X_t^i=X_t^j}, \mathrm end{equation*} $I_{n+1}=I_n\cup\set{j_{n+1}}$ and $J_{n+1}=[N]\setminus I_{n+1}$, where $j_{n+1}$ is the smallest index in $J_n$ such that the equality in the definition of $S_n$ holds with $j = j_{n+1}$. The set $I_n$ has $n$ elements which are indices of particles which are ``descendants'' of the particle $X^{j_1}$ that was the highest at time 0. By convention, we let $I_n=I_N$ and $S_n=u$ for $n\geq N$. {\it Step 2}. Write $a=x^{j_1}$ and $u=(1-\gamma)a^\beta$. Then $\mathbf x\in(0,a]^N$. Recall $\bar\delta$ and $M$ defined in \mathrm eqref{o1.1} and \mathrm eqref{eq:Mdef}, and for $1\leq n\leq N$ define \begin{equation}\label{eq:hatdelta} \hat\delta_{n}=\frac{\bar\delta}{(M+1)^{N-n}}. \mathrm end{equation} Note that $\hat\delta_{n}=(M+1)\hat\delta_{n-1}$. 
Consider events \begin{equation*} \mathrm eF_n=\bigcup_{ j\in I_n} \left\{ \sup_{S_{n-1}\leq t<S_n}\left|X_t^j-y_{0,a}(t)\right|>\hat\delta_n \right\} . \mathrm end{equation*} Note that for every $t$, $\max_{j\in I_n} X^j_t \geq \max_{j\in J_n} X^j_t$. Hence, \begin{align*} \bigcup_{1\leq j\leq N} \left\{\sup_{0\leq t<u}X_t^j- y_{0,a}(t) >\bar\delta\right\} \subset \bigcup_{1\leq n\leq N} \mathrm eF _n, \mathrm end{align*} and, therefore, \begin{equation}\label{eq:D} \begin{split} \P[\mathbf x]&\left( \bigcup_{1\leq j\leq N} \left\{\sup_{0\leq t<u}X_t^j- y_{0,a}(t) >\bar\delta\right\} \right)\\ &\leq \P[\mathbf x] \left( \bigcup_{1\leq n\leq N} \mathrm eF _n \right) \\ &= \P[\mathbf x] \left( \bigcup_{1\leq n\leq N} \mathrm eF _n \cap \mathrm eF_1^c \cap \dots \cap \mathrm eF_{n-1}^c \right)\\ &\leq \sum_{1\leq n\leq N} \P[\mathbf x] \left( \mathrm eF _n \cap \mathrm eF_1^c \cap \dots \cap \mathrm eF_{n-1}^c \right) \\ &\leq \sum_{1\leq n\leq N} \P[\mathbf x]\left( \mathrm eF_n \mid \mathrm eF_{1}^c,\dotsc,\mathrm eF_{n-1}^c \right)\\ &\leq\sum_{1\leq n\leq N}\sum_{j\in I_n}\P[\mathbf x]\left( \sup_{S_{n-1}\leq t<S_n} \left|X_t^j-y_{0,a}(t)\right|>\hat\delta_n \Bigm| \mathrm eF_{1}^c,\dotsc,\mathrm eF_{n-1}^c \right), \mathrm end{split} \mathrm end{equation} where we adopted the convention $\P[\mathbf x](\mathrm eF_1\mid\mathrm eF_0^c)=\P[\mathbf x](\mathrm eF_1)$. Suppose that $\mathrm eF_{n-1}^c$ holds and $j \in I_{n}$. Then $|X^j_{S_{n-1}} - y_{0,a}(S_{n-1})| \leq \hat\delta_{n-1}$. Let $y^j_t$, $t\geq S_{n-1}$, be a solution to $y'=b(y)$ with $y^j_{S_{n-1}}=X^j_{S_{n-1}}$. By \mathrm eqref{o3.5}, \begin{equation*} \begin{split} \P[\mathbf x]\left( \sup_{S_{n-1}\leq t<S_{n}} \left| X_t^j- y_{0,a}(t)\right|>\hat\delta_{n} \mid \mathrm eF_{n-1}^c \right) &\leq \P[\mathbf x]\left( \sup_{S_{n-1}\leq t<S_{n}} \left| X_t^j-y_t^j\right|>\hat\delta_{n} \mid \mathrm eF_{n-1}^c \right). \mathrm end{split} \mathrm end{equation*} It follows from \mathrm eqref{o4.2} that we can apply \mathrm eqref{eq:y0A} (with an appropriate shift of the time scale) to $X^j$, assuming that $\mathrm eF_{n-1}^c$ holds, on the interval $[S_{n-1},S_{n}]\subset [0, u] = [0,(1-\gamma)a^\beta]$. We obtain \begin{align*} &\P[\mathbf x]\left( \sup_{S_{n-1}\leq t<S_{n}} \left| X_t^j-y_t^j\right|>\hat\delta_{n} \mid \mathrm eF_{n-1}^c \right) \leq c_0 \mathrm exp\left( -c_2\,\frac{\hat\delta_{n}^2} {(y_{0,a}(S_{n-1}) + \hat\delta_{n-1})^\beta} \right)\\ &\quad \leq c_0\mathrm exp\left( -c_2\,\frac{\bar\delta^2 (M+1)^{-2N}} {(a + \bar\delta)^\beta} \right) = c_0 \mathrm exp\left( -c_3\,\frac{\bar\delta^2 } {(a + \bar\delta)^\beta} \right). \mathrm end{align*} We combine this estimate with \mathrm eqref{eq:D} to see that \begin{align}\label{o6.2} \P[\mathbf x]&\left( \bigcup_{1\leq j\leq N} \left\{\sup_{0\leq t<u}X_t^j- y_{0,a}(t) >\bar\delta\right\} \right)\\ &\leq\sum_{1\leq n\leq N}\sum_{j\in I_n}\P[\mathbf x]\left( \sup_{S_{n-1}\leq t<S_n} \left|X_t^j- y_{0,a}(t) \right|>\hat\delta_n \Bigm| \mathrm eF_{1}^c,\dotsc,\mathrm eF_{n-1}^c \right) \nonumber \\ &\leq c_0 N^2 \mathrm exp\left( -c_3\,\frac{\bar\delta^2 } {(a + \bar\delta)^\beta} \right). \nonumber \mathrm end{align} {\it Step 3}. We will prove that there exist $v<\infty$ and $r\in(0,2)$ such that if $\mathbf x\in(0,r]^N$ then \begin{equation}\label{o2.1} \P[\mathbf x](\tau_\infty> v)\leq \frac{1}{2}. 
\end{equation} Consider an $r\in(0,2)$ and for $\mathbf x = (x^1, \dots, x^N)$, let $A_0 = \max_j x^j$, $U_0=0$, and for $k=0,1,2,\dotsc$, let \begin{align*} U_{k+1}&= U_k+(1-\gamma) A_k^\beta,\\ A_{k+1}&= \max _{1\leq j \leq N} X^j_{ U_{k+1}}. \end{align*} Let $y^{(k)}_t$ denote the solution to the ODE \eqref{eq:ODEb} with the initial condition $y^{(k)}_{ U_k}= A_k$. Recall $\varepsilon$ from \eqref{o4.1} and let $ \Delta_k = A_k\left[ 1-(1-\varepsilon\gamma)^{1/\beta} \right]$. For $k=0,1,2,\dotsc$ define events \begin{equation*} \Gamma_k=\left[\max_{1\leq j\leq N} \sup_{ U_{k}\leq t< U_{k+1}} X_t^j>y^{(k)}_t+ \Delta_k\right]. \end{equation*} Note that $y^{(k)}_{U_{k+1}} = \gamma^{1/\beta} A_k$. Suppose that $\bigcap_{k=0}^\infty\Gamma_k^c$ holds. Then \begin{align*} A_{k+1} \leq \gamma^{1/\beta} A_k + A_k\left[ 1-(1-\varepsilon\gamma)^{1/\beta} \right] = c_4 A_k, \end{align*} for all $k$, where $c_4 = \gamma^{1/\beta} + 1-(1-\varepsilon\gamma)^{1/\beta} < 1$, by \eqref{o4.7}. Hence, $ A_k \leq c_4^k A_0$ and, therefore, $\sum_k A_k^\beta < \infty$. If we let $v= (1-\gamma) \sum_{k=0}^\infty r ^\beta c_4^{k\beta} < \infty$, then $ \lim _{k\to \infty} U_k \leq (1-\gamma) \sum_{k=0}^\infty A_0 ^\beta c_4^{k\beta} \leq v $ and $\limsup_{t \uparrow v} \max_{1\leq j\leq N} X_t^j = 0$. This implies easily that $\tau_\infty\leq v$. Thus, to prove \eqref{o2.1}, it will suffice to show that there exists $r\in(0,2)$ such that if $\mathbf x\in(0,r]^N$, then \begin{equation}\label{eq:mainseries} \P[\mathbf x]\left( \bigcup_{k=0}^\infty\Gamma_k \right)<\frac{1}{2}. \end{equation} But \begin{equation}\label{eq:A} \P[\mathbf x]\left( \bigcup_{k=0}^\infty\Gamma_k \right)\leq \P[\mathbf x](\Gamma_0)+\sum_{k=1}^\infty \P[\mathbf x]\left( \Gamma_k \mid \Gamma_0^c,\dotsc,\Gamma_{k-1}^c \right). \end{equation} By \eqref{o6.2} and the strong Markov property applied at $U_k$, \begin{equation*} \begin{split} \P[\mathbf x]&\left( \Gamma_k \mid \Gamma_0^c,\dotsc,\Gamma_{k-1}^c \right) \leq c_0 N^2\exp\left( -c_3 \frac{ \Delta_k^2}{( A_k + \Delta_k)^\beta} \right) \\ & = c_0 N^2\exp\left( - c_3 A_k^{2-\beta} \frac{(1-(1-\varepsilon\gamma)^{1/\beta})^2}{(2-(1-\varepsilon\gamma)^{1/\beta})^\beta} \right)\\ & \leq c_0 N^2\exp\left( - c_5 A_0^{2-\beta} c_4^{k(2-\beta)} \right), \end{split} \end{equation*} where $c_4 < 1$. So by \eqref{eq:A}, if $\max_j x^j\leq r$, then \begin{equation*} \P[\mathbf x]\left( \bigcup_{k=0}^\infty \Gamma_k \right) \leq c_0 N^2\sum_{k=0}^\infty\exp\left( - c_5 r^{2-\beta} c_4^{k(2-\beta)} \right), \end{equation*} which is convergent. Since $2-\beta<0$, we can choose $r>0$ so small that the above sum is less than $1/2$, proving \eqref{eq:mainseries}. {\it Step 4}. Let $r\in(0,2)$ and $v$ be as in Step 3. Partition the set $(0,2]^N$ into two sets $A=(0,r]^N$ and $A^c$. First we will show that the time when the process $\mathbf X$ enters the set $A$ has a distribution with an exponentially decreasing tail. So assume that $\mathbf X_0\in A^c$ and let \begin{equation*} I_1=\set{j\in[N]:X_0^j\in(r,2]},\quad I_2=[N]\setminus I_1. \end{equation*} Let $\tau_1^{j}$ be the first hitting time of $0$ by the process $X^j$ and let \begin{equation*} \eta= \begin{cases} 0 & \text{if $I_2=\emptyset$,}\\ \max_{j\in I_2}\left\{ \tau_1^j \right\},&\text{otherwise}.
\mathrm end{cases} \mathrm end{equation*} Consider \begin{equation*} p_1(\mathbf x)=\P[\mathbf x]\left\{ \forall_{j\in I_1}\,\forall_{0\leq t\leq 1/2}\, X_t^j\in\left[ \frac{r}{2},2 \right];\mathrm eta<1/2; \forall_{i\in I_2}\forall_{\tau_1^j<t\leq 1/2}\, X_t^j\in\left[ \frac{r}{4},2 \right]\right\}. \mathrm end{equation*} We will argue that for $\mathbf x\in A^c$ we have \begin{equation}\label{eq:p1} p_1(\mathbf x)\geq p_1>0. \mathrm end{equation} Indeed, with probability at least $q_1>0$ any particle from $I_1$ stays in the interval $[r/2,2]$ up to time $t=1/2$. With probability at least $q_2>0$ any particle from $I_2$ hits $0$ before time $t=1/2$; with probability at least $1/N$ it jumps onto a particle in $I_1$; and then with probability at least $q_3>0$ it stays in the interval $[r/4,2]$ up to time $t=1/2$. Therefore \mathrm eqref{eq:p1} holds with $p_1=(q_1q_2q_3/N)^N$. Obviously $q_1, q_2, q_3$ and $p_1$ depend on $r$. Next, if we define \begin{equation*} p_2(\mathbf x)=\P[\mathbf x]\left\{ \forall_{j\in[N]}\,\forall_{0<t<1/2} \, X_t^j\in\left[ \frac{r}{8},2 \right]; X_{1/2}^j\in\left[ \frac{r}{8},r \right]\right\}, \mathrm end{equation*} and $B=\left[ \frac{r}{4},2 \right]^N$, then an argument similar to that proving \mathrm eqref{eq:p1} shows that for $\mathbf x\in B$ we have $p_2(\mathbf x)\geq p_2>0$, where $p_2$ depends on $r$. Therefore, by the Markov property at time $t=1/2$, for $\mathbf x\in A^c$ we have \begin{equation}\label{o6.4} \P[\mathbf x](\mathbf X_1\in A)\geq p:=p_1p_2>0. \mathrm end{equation} Now let \begin{equation*} T=\inf\left\{ t\geq 0:\mathbf X_{ t}\in A \right\}. \mathrm end{equation*} By \mathrm eqref{o6.4}, for all $\mathbf x\in(0,2]^N$, \begin{equation*} \P[\mathbf x](T\leq 1)\geq p>0. \mathrm end{equation*} Applying the Markov property at $t=1,2, \dots$ we obtain \begin{equation*} \P[\mathbf x](T\geq k) \leq (1-p)^{k}. \mathrm end{equation*} Choose $k$ so large that $(1-p)^{k}<\frac{1}{2}$. Recall that $r$ and $v$ are as in Step 3. Let $\theta$ denote the usual Markovian shift operator. Then for any $\mathbf x\in (0,2]^N$, \begin{equation*} \P[\mathbf x]\left( \tau_\infty\geq k+v \right)\leq\P[\mathbf x](T\geq k)+\P[\mathbf x](\tau_\infty \circ \theta_T \geq v)\leq (1-p)^{k}+\frac{1}{2}:=q<1. \mathrm end{equation*} Therefore, applying the Markov property at times $k+v, 2(k+v), 3(k+v), \dots$, we obtain, \begin{equation*} \P[\mathbf x](\tau_\infty \geq n(k+v))\leq q^n,\quad n=1,2,\dotsc, \mathrm end{equation*} which proves \mathrm eqref{s30.2}. This implies that $\tau_\infty < \infty$, a.s. \mathrm end{document}
\begin{document} \begin{abstract} We propose a sharp-interface model for a hyperelastic material consisting of two phases. In this model, phase interfaces are treated in the deformed configuration, resulting in a fully Eulerian interfacial energy. In order to penalize large curvature of the interface, we include a geometric term featuring a curvature varifold. Equilibrium solutions are proved to exist via minimization. We then utilize this model in an Eulerian topology optimization problem that incorporates a curvature penalization. \end{abstract} \maketitle \section{Introduction}\label{sec:intro} In the field of elasticity, it is commonly assumed that experimentally observed patterns in materials correspond to the minimization of a suitable phase-dependent energy. Indeed, some materials have multiple phases, and the optimal energetic configuration is often achieved by creating spatial microstructures composed of these phases. These microstructures feature their own unique size, shape, and distribution (such as grains, precipitates, dendrites, spherulites, lamellae, or pores). The phases can be distinguished from each other by their various crystalline, semicrystalline, or amorphous properties, which can be experimentally identified through microscopy techniques. To fully understand the behavior of a material, it is necessary to characterize the relation between the macroscopic properties and the underlying phenomena occurring at the microstructural scale. Shedding light on the multiscale nature of this phenomenon is paramount for optimizing material performance and developing new materials with tailored properties. A prominent example of materials with microstructure is given by shape memory alloys, which show a highly symmetric crystallographic variant called austenite, preferred at high temperatures, as well as different low-symmetry variants called martensites, favored at low temperatures. These alloys, including NiTi, CuAlNi, or InTh, are widely used in various technological applications, as discussed in \cite{Jani2014ARO}. The mixing of these different phases leads to the formation of complex microstructures, which ultimately govern the rich thermomechanical response of the material. In the continuum theory, the total stored energy of the system usually consists of bulk and interfacial energy contributions. Neglecting the interfacial energy generically leads to a minimization problem that has no solution due to the formation of spatially finer and finer oscillations of the deformation gradient among the various phases. If spatial phase changes are penalized by the interfacial energy, an optimal material layout is reached by balancing energy contributions arising from the bulk and the interface, under the effect of external loading.
Different models have been considered taking into account interfacial energy in various forms. This includes strain gradients \cite{BallMora-Corral-2009,Toup62EMCS} but also gradients of nonlinear null Lagrangians of the deformation \cite{BeKrSc17NLMGP}. Recently, \v{S}ilhav\'{y} introduced in \cite{Silhavy-2011} a notion of interface polyconvexity and proved that it is sufficient to ensure the existence of minimizers for the corresponding static problem. In particular, in his model the perimeters of interfaces in the reference and deformed configurations, as well as the deformations of lines in the referential interfaces, are penalized. A more explicit characterization of interface polyconvexity can be found in \cite{GKMS19,grandi2020}, which also discuss the case of materials with more than two phases. Let us also mention that the mathematical treatment of multiphase materials without surface-energy penalization typically leads to ill-posed problems where the existence of a solution is not necessarily guaranteed, and some relaxation is needed, cf.~\cite{Daco89DMCV}. This, however, would challenge orientation preservation of the involved deformations, and consequently, also injectivity may be lost \cite{Ball_puzzles}. In this article, we consider a material with two phases, separated by a sharp interface. Note, however, that our model can be extended to describe more phases similarly as in \cite{Silhavy-2010}, see Remark \ref{rem:multi}. To incorporate the penalization of large interface curvatures, we describe the interface in terms of a curvature varifold, a measure-theoretic generalization of classical surfaces with a notion of curvature and with good compactness properties \cite{Hutchinson:86,Mantegazza:96,Simon:83}. Varifolds have been used to describe bending-resistant interfaces in a wide range of applications, for instance in the modeling of cracks \cite{MR2658342,MR2644754,KrMaMu:22}, biological membranes \cite{EichmannAGAG,BLS:20,RS:23}, or anisotropic phase transitions \cite{Moser:12}. The state of the elastic body is characterized by the deformation $y$ of the reference configuration $\Omega\subset\mathbb{R}^3$, the phase field $\phi$, and a curvature varifold $V$ describing the phase interface. The equilibrium state minimizes the energy $E$, consisting of the elastic bulk energy and the energy of the phase interface. If the varifold $V$ and the phase interface correspond to a smoothly embedded surface $M \subset \mathbb{R}^3$ with second fundamental form $I\!I$, our energy typically takes the form \begin{align} E(y,\phi,V)=\int_\Omega\Big((1-\phi\circ y)W_0(\nabla y)+\phi\circ y\, W_1(\nabla y)\, \Big)\mathrm{d} X + \mathcal{H}^2(M) + \int_M |I\!I|^p \,\mathrm{d} \mathcal{H}^2, \end{align} see Section \ref{subsec:energies} for the general definition and all necessary details.
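To give a feeling for the size of the two interfacial terms, consider the purely illustrative special case in which the deformed phase region $\{\phi=1\}$ is a ball $B_R(x_0)\subset\subset y(\Omega)$ of arbitrary radius $R>0$, so that $M=\partial B_R(x_0)$. Then
\begin{align*}
\mathcal{H}^2(M)=4\pi R^2,\qquad |I\!I|^2=\frac{2}{R^2},\qquad
\int_M |I\!I|^p \,\mathrm{d} \mathcal{H}^2=4\pi R^2\Big(\frac{\sqrt{2}}{R}\Big)^{p}=2^{\frac{p}{2}+2}\pi R^{2-p}.
\end{align*}
In particular, for $p>2$ the curvature term blows up as $R\to 0$, so the interfacial energy penalizes very small inclusions with highly curved boundaries.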
\color{black} \color{black} The main result of this work is \color{black} the proof of \color{black} the existence of minimizers with the phase field and the varifold defined in the deformed configuration, i.e., in the Eulerian setting. In order to find a good framework for the direct method in the Calculus of Variations, two important challenges need to be met: Firstly, a suitable coupling needs to be imposed to identify the varifold with the phase field, see Definition \ref{def:coupling}. Secondly, the Eulerian setting implies that both $\phi$ and $V$ are defined in the deformed configuration $y(\Omega)$, which itself is subject to minimization. Once compactness is achieved, the existence of minimizers follows from the closedness of the coupling condition together with the lower semicontinuity of the energy via the usual (poly-)convexity assumptions. \color{black} Moreover, we adapt the variational theory to study a related problem in topology optimization, taking into account the curvature of the design material surface. We also provide a corresponding referential formulation which might be computationally more feasible. \color{black} This article is organized as follows. \color{black} In Section~\ref{sec:prelim}, \color{black} we introduce basic notions and notation \color{black} on functions of bounded variation and varifolds. \color{black} Our model is presented in Section~\ref{sec:model} and the existence of a solution is proved in Section~\ref{sec:existence}. This allows us to settle a problem of topology optimization in the Eulerian coordinates and to establish the existence of an optimal topological design in Section~\ref{sec:topology}. \color{black} \section{Notation and preliminaries}\label{sec:prelim} \subsection{Piecewise constant functions of bounded variation} Let $U\subset\mathbb{R}^3$ be open. By $BV(U)$ we denote the class of {\em functions of bounded variation} and by $SBV(U)$ the class of {\em special functions of bounded variation}, see \cite{AmFuPa:00}. We set $$ SBV(U;\{0,1\}):=\{\phi\in SBV(U):\:\phi(x)\in\{0,1\}\:\text{for a.e.}\: x\in U\}. $$ Its elements are {\em piecewise constant functions} in the sense of \cite[Def.\ 4.21]{AmFuPa:00}, restricted to only assuming values in $\{0,1\}$. The weak derivative of $\phi\in SBV(U;\{0,1\})$ is the $\mathbb{R}^3$-valued Radon measure $$ D\phi=\nu_\phi(\mathcal{H}^{2}\,\text{\Large{$\llcorner$}} J_\phi)\in\mathcal{M}(U;\mathbb{R}^3). $$ Here, $\mathcal{H}^2$ is the two-dimensional Hausdorff measure, $J_\phi\subset U$ is the approximate jump set, which is countably $\mathcal{H}^2$-rectifiable, and $\nu_\phi\colon J_\phi\to\mathbb{S}^2$ is the unit normal vector. The total variation norm of $D\phi$ is given by \begin{align} |D\phi|(U)=\mathcal{H}^2(J_\phi).\label{eq:Dphi_H2} \end{align} By definition, functions $\phi\in SBV(U;\{0,1\})$ are characteristic functions of some $E\subset U$ of finite perimeter, i.e., $\phi=1_E\colon U\to\mathbb{R}$ with $$ 1_E(x):=\begin{cases} 1, & x\in E\\ 0, & \text{otherwise.} \end{cases} $$ In particular, $J_\phi$ coincides with the reduced boundary of $E$ up to a $\mathcal{H}^2$-null set, $\nu_\phi$ points in the interior of $E$, and $|D\phi|(U)$ is the perimeter of $E$. For the convenience of the reader, we recall the compactness theorem for piecewise constant functions which follows from \cite[Thm.~3.23, Thm.~4.25]{AmFuPa:00}. \begin{theorem}[Compactness of piecewise constant SBV-functions]\label{thm:SBVpwcpt} Let $U\subset\mathbb{R}^3$ be an \linebreak open, bounded Lipschitz domain. 
Let $(\phi_n)_{n\in\mathbb{N}}\subset SBV(U)$ be piecewise constant functions, satisfying $$ \sup_{n\in\mathbb{N}}\left(\|\phi_n\|_{L^\infty(U)}+\mathcal{H}^2(J_{\phi_n})\right)<\infty. $$ Then there exists a piecewise constant function $\phi\in SBV(U)$ such that after passing to a subsequence, we have $\phi_n\to \phi$ in $L^1(U)$ \color{black} and \color{black} $D\phi_n \rightharpoonup^* D\phi$ in $\mathcal{M}(U;\mathbb{R}^3)$ as $n\to\infty$. \end{theorem} \subsection{Oriented curvature varifolds}\label{sec:ocv} We briefly introduce the relevant definitions for varifolds, restricting to two-varifolds in the open set $U\subset \mathbb{R}^3$. Let $G_{2,3}$ denote the Grassmannian, i.e., the set of all two-dimensional linear subspaces of $\mathbb{R}^3$, which we describe by their orthogonal projection matrices $P\in\mathbb{R}^{3\times 3}$. We identify the oriented Grassmannian with the two-sphere $\mathbb{S}^2$ by representing an oriented two-dimensional subspace by its unit normal. Following \cite{Hutchinson:86}, an {\em oriented two-varifold} in $U$ is a (nonnegative) Radon measure $$ V\in\mathcal{M}(U\times\mathbb{S}^2). $$ The mass of $V$ is the Radon measure $\mu_V\in\mathcal{M}(U)$ given by $$ \mu_V(B)=V(B\times\mathbb{S}^2) \quad \text{for all Borel sets} \quad B\subset U. $$ By Riesz' Representation Theorem, e.g., \cite{Simon:83}, $V$ is defined through its action on continuous functions with compact support, given by $$ \langle V,u\rangle=\int_{U\times\mathbb{S}^2}u(x,\nu)\mathrm{d} V(x,\nu)\quad\text{for all}\quad u\in C^0_c(U\times\mathbb{S}^2). $$ Pushforward of $V$ by the covering map $q\colon U\times \mathbb{S}^2\to U\times G_{2,3}$, $q(x,\nu)=(x,\mathbb{I}_{3\times 3}-\nu\otimes\nu)$ gives the (unoriented) two-varifold $q_\sharp V\in\mathcal{M}(U\times G_{2,3})$, namely, $$ \langle q_\sharp V,v\rangle=\int_{U\times G_{2,3}}v(x,P)\mathrm{d} (q_\sharp V)(x,P)= \int_{U\times\mathbb{S}^2}v(q(x,\nu))\mathrm{d} V(x,\nu) $$ for all $v\in C^0_c(U\times G_{2,3})$. Moreover, to every oriented two-varifold $V$ we can associate a two-current $T_V\in\mathcal{D}_2(U)$ by $$ \langle T_V,\omega\rangle =\int_{U\times\mathbb{S}^2}\langle\star\nu,\omega(x)\rangle\mathrm{d} V(x,\nu) $$ for all $\omega\in C_c^\infty(U;\Lambda^2(\mathbb{R}^3))$, the smooth, compactly supported two-forms in $U$. Here, $\star\nu\in \Lambda_2(\mathbb{R}^3)$ stands for the simple two-vector associated to $\nu\in\mathbb{S}^2$ through the Hodge star operator $\star$. The boundary of $T_V$ is the one-current $\partial T_V\in\mathcal{D}_1(U)$, which is given by $$ \langle \partial T_V,\eta\rangle=\langle T_V,\mathrm{d}\eta\rangle $$ for all one-forms $\eta\in C_c^\infty(U;\Lambda^1(\mathbb{R}^3))$, where $\mathrm{d}\eta$ denotes the exterior derivative of $\eta$. An {\em oriented integral two-varifold} is a varifold $V\in \mathcal{M}(U\times \mathbb{S}^2)$ given by $$ \langle V,u\rangle=\int_{M}\left(u(x,\nu^M(x))\theta^+(x)+u(x,-\nu^M(x))\theta^-(x)\right)\mathrm{d} \mathcal{H}^2(x) $$ for all $u\in C^0_c(U\times\mathbb{S}^2)$ and for which we will also write \begin{align}\label{eq:rec_or_varif} V(x,\nu)=(\mathcal{H}^2\,\text{\Large{$\llcorner$}} M)(x)\otimes\left(\theta^+(x)\delta_{\nu^M(x)}(\nu)+\theta^-(x)\delta_{-\nu^M(x)}(\nu)\right). 
\end{align} Here, $M\subset U$ is a countably $\mathcal{H}^2$-rectifiable set, the {\em orientation} $\nu^M\in L^1_{\textup{loc},\mathcal{H}^2}(M;\mathbb{S}^2)$ selects one of the two unit normals to the approximate tangent plane $T_xM$ at $\mathcal{H}^2$-a.e.\ $x\in M$, and the corresponding {\em multiplicities} $\theta^\pm\in L^1_{\textup{loc},\mathcal{H}^2}(M)$ are integer-valued, i.e., $\theta^\pm(x)\in\mathbb{N}$ for $\mathcal{H}^2$-a.e.\ $x\in M$. The class of oriented integral two-varifolds in $U$ is denoted by $IV_2^o(U)$. The unoriented varifold associated to $V$ is the {\em integral two-varifold} given by $$ \langle q_\sharp V,v\rangle=\int_{M}v(x,T_xM)(\theta^+(x)+\theta^-(x))\mathrm{d} \mathcal{H}^2(x) $$ for all $v\in C^0_c(U\times G_{2,3})$. The class of (unoriented) integral two-varifolds in $U$ is denoted by $IV_2(U)$. The current associated to $V$ is the {\em integral two-current} given by $$ \langle T_V,\omega\rangle=\int_{M}\langle \star\nu^M(x),\omega(x)\rangle(\theta^+(x)-\theta^-(x))\mathrm{d} \mathcal{H}^2(x) $$ for all $\omega\in C_c^\infty(U;\Lambda^2(\mathbb{R}^3))$. A {\it curvature two-varifold} (in the sense of \cite{Hutchinson:86}, i.e., without boundary measure \cite{Mantegazza:96}), denoted by $V\in CV_2(U)$, is an integral varifold $V\in IV_2(U)$ such that there exist {\em generalized curvature function} $A^V=(A^V_{ijk})_{i,j,k=1}^3\in L^1_{\textup{loc},V}(U\times G_{2,3};\mathbb{R}^{3 \times 3 \times 3})$ satisfying $$ \int_{U\times G_{2,3}}\sum_{j=1}^3\Big(P_{ij}\partial_j\varphi+\sum_{k=1}^3(\partial_{P_{jk}}\varphi)\,A_{ijk}^V+A_{jij}^V\,\varphi\Big)\mathrm{d} V=0 $$ for all $\varphi\in C_c^1(U\times G_{2,3})$ and $1\leq i\leq 3$. An {\it oriented curvature two-varifold}, denoted by $V\in CV^o_2(U)$, is an oriented integral varifold $V\in IV_2^o(U)$ whose unoriented counterpart $q_\sharp V$ is a curvature varifold, i.e., $$ CV_2^o(U)=\{V\in IV_2^o(U):\:q_\sharp V\in CV_2(U)\}. $$ We conclude with a compactness theorem for oriented curvature varifolds without boundary, still restricting to two-varifolds in $U\subset \mathbb{R}^3$. The result is a combination of the compactness theorem \cite[Thm.\ 3.1]{Hutchinson:86} for oriented integral varifolds and \cite[Thm.\ 6.1]{Mantegazza:96} for curvature varifolds. \begin{theorem}[Compactness of oriented curvature varifolds without boundary]\label{thm:CVocpt}\phantom{e}\linebreak Let $p>1$ and $(V_n)_{n\in\mathbb{N}}\subset CV_2^o(U)$ satisfy $\partial T_{V_n}=0$ for all $n\in\mathbb{N}$ and $$ \sup_{n\in\mathbb{N}}\left(\mu_{V_n}(U)+\|A^{q_\sharp V_n}\|^p_{L_{q_\sharp V_n}^p(U\times G_{2,3})}\right)<\infty. $$ Then there exists $V\in CV_2^o(U)$ such that after passing to a subsequence, $V_{n}\rightharpoonup^\ast V$ in $\mathcal{M}(U\times \mathbb{S}^2)$ and $q_\sharp V_n \rightharpoonup^\ast q_\sharp V$, $A^{q_\sharp V_n}_{ijk}q_\sharp V_n \rightharpoonup^\ast A^{q_\sharp V}_{ijk}q_\sharp V$ in $\mathcal{M}(U\times G_{2,3})$ for all $1\leq i,j,k\leq 3$. \end{theorem} Here, the $p$-th power of the $L^p$-norm of the curvature function of $V\in CV_2^o(U)$ is $$ \|A^{q_\sharp V}\|^p_{L_{q_\sharp V}^p(U\times G_{2,3})}=\int_{U\times G_{2,3}} |A^{q_\sharp V}(x,P)|^p\,\mathrm{d}(q_\sharp V)(x,P) $$ with Frobenius norm $|A^{q_\sharp V}|=\sqrt{\sum_{i,j,k=1}^3(A^{q_\sharp V}_{ijk})^2}$. 
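For illustration only, consider the simplest example: assume that the unit sphere $\mathbb{S}^2$ is contained in $U$ and let $V(x,\nu)=(\mathcal{H}^2\,\text{\Large{$\llcorner$}}\,\mathbb{S}^2)(x)\otimes\delta_{\nu^M(x)}(\nu)$ with outer unit normal $\nu^M(x)=x$, i.e., $\theta^+\equiv 1$ and $\theta^-\equiv 0$ in \eqref{eq:rec_or_varif}. Then $V\in CV_2^o(U)$, $\partial T_V=0$ since $\mathbb{S}^2$ is a closed surface, and, anticipating the identity $|A^{q_\sharp V}|^2=2|I\!I|^2$ recalled in the next paragraph,
\begin{equation*}
\mu_V(U)=\mathcal{H}^2(\mathbb{S}^2)=4\pi,\qquad
\|A^{q_\sharp V}\|^p_{L_{q_\sharp V}^p(U\times G_{2,3})}=\int_{U\times G_{2,3}}2^{p}\,\mathrm{d}(q_\sharp V)=4\pi\cdot 2^{p},
\end{equation*}
so that a constant sequence of such varifolds trivially satisfies the assumptions of Theorem \ref{thm:CVocpt}.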
\color{black} In particular, if $V\in CV_2^o(U)$ corresponds to a smoothly embedded closed oriented surface $M\subset U$ and has multiplicities $\theta^++\theta^-\equiv 1$, cf.\ \eqref{eq:rec_or_varif}, then $\mu_V(U)=\int_M(\theta^++\theta^-)\mathrm{d}\mathcal{H}^2=\mathcal{H}^2(M)$ and the Frobenius norms of the curvature function $A=A^{q_\sharp V}$ of $M$ and of its second fundamental form $I\!I$ are related by $|A|^2=2|I\!I|^2$. \color{black} \section{Model}\label{sec:model} \subsection{States} Let $\Omega\subset\mathbb{R}^3$ be an open, bounded Lipschitz domain, which describes the reference configuration of an elastic body. The state of the body is characterized by the {\em deformation} $y$, the {\em phase field} (or {\em phase indicator}) $\phi$, and the oriented curvature varifold $V$ corresponding to the \emph{phase interfaces.} The deformation is a homeomorphism $$ y\colon\Omega\to y(\Omega)\subset\mathbb{R}^3, $$ mapping points in $X\in\Omega$ to points $x=y(X)\in y(\Omega)$. We describe the interfaces in the Eulerian setting, that is, both $\phi$ and $V$ are defined on the current configuration $y(\Omega)$ of the body, namely $$ \phi\in SBV(y(\Omega);\{0,1\})\qquad\text{and}\qquad V\in CV_2^o(y(\Omega)). $$ Since $y$ is a homeomorphism, $y(\Omega)\subset \mathbb{R}^3$ is an open set. A crucial ingredient of the model is that we introduce a coupling relating the phase $\phi$ and the varifold $V$ in the following sense. \begin{definition}[Coupling]\label{def:coupling} Let $U\subset \mathbb{R}^3$ be open and consider the linear map $$Q\colon C^0_c(U;\mathbb{R}^3)\to C^0_c(U\times\mathbb{S}^2), \quad (QY)(x,\nu):=Y(x)\cdot\nu.$$ An oriented varifold $V\in\mathcal{M}(U\times\mathbb{S}^2)$ and a phase field $\phi\in SBV(U;\{0,1\})$ are called \emph{coupled in $U$} if $D\phi=Q'V$, i.e., if for all $Y\in C^0_c(U;\mathbb{R}^3)$ we have \begin{align}\label{eq:coupling} \langle D\phi,Y\rangle =\int_{J_\phi} Y\cdot\nu_\phi\,\mathrm{d} \mathcal{H}^2 = \int_{U\times \mathbb{S}^2}Y(x)\cdot\nu\,\mathrm{d} V(x,\nu) = \langle V, QY\rangle = \langle Q'V,Y\rangle. \end{align} \end{definition} Here, $Q'$ denotes the Banach space adjoint and we use the dualities $C^0_c(U;\mathbb{R}^3)'=\mathcal{M}(U;\mathbb{R}^3)$ and $C^0_c(U\times \mathbb{S}^2)'=\mathcal{M}(U\times \mathbb{S}^2)$, respectively. \begin{definition}[Admissible set]\label{def:admissible} We say that a triple $(y,\phi, V)$ is \emph{admissible}, in short, $$ (y,\phi,V)\in\mathcal{A}, $$ if the following conditions are satisfied. \begin{enumerate}[(i)] \item\label{item:adm_y} $y\in W^{1,r}(\Omega;\mathbb{R}^3)$ is a homeomorphism, $r>3$; \item $\phi\in SBV(y(\Omega);\{0,1\})$; \item $V\in CV_2^o(y(\Omega))$ and has no boundary, i.e., $\partial T_V=0$; \item\label{item:coupling} $V$ is coupled to $\phi$ in $y(\Omega)$ in the sense of Definition \ref{def:coupling}. \end{enumerate} \end{definition} Condition \eqref{item:adm_y} is enforced by assuming that $y\in W^{1,r}(\Omega;\mathbb{R}^3)$ satisfies $\det\nabla y>0$ a.e. in $\Omega$, that it fulfills the Ciarlet-Ne\v{c}as condition \cite{CiaNec87ISCN}, \color{black} i.e., \color{black} \begin{equation}\label{ciarlet-necas} \int_\Omega\det\nabla y(x)\,\mathrm{d} x\le \mathcal{L}^3(y(\Omega)), \end{equation} and that the distortion $|\nabla y|^3/\det\nabla y\in L^{r-1}(\Omega)$. Namely, nonnegativity of the Jacobian determinant together with \eqref{ciarlet-necas} makes $y$ injective almost everywhere in $\Omega$. 
Controlling the distortion in $L^{r-1}(\Omega)$ ensures that $y$ is an open map; cf.~\cite[Thm.~3.24, p.~43]{HenKos14LMFD}. This together with almost everywhere injectivity implies that $y$ is homeomorphic in $\Omega$. \begin{lemma}[Properties of admissible triples]\label{lem:A_properties} Let $(y,\phi,V)\in \mathcal{A}$. Then we have \begin{enumerate}[(i)] \item\label{item:A_prop_J_vs_spt_mu} $J_\phi\subset \operatorname{spt} \mu_V$; \item\label{item:C2_rectif} $J_\phi$ and $\operatorname{spt} \mu_V$ are \emph{countably $\mathcal{H}^2$-rectifiable of class $2$,} i.e., up to a set of $\mathcal{H}^2$-measure zero, they can be covered by a countable union of embedded, two-dimensional $C^2$-submanifolds of $\mathbb{R}^3$. \end{enumerate} \end{lemma} \begin{proof} If we take a test vector field $Y\in C^0_c(y(\Omega);\mathbb{R}^3)$ with $Y\equiv 0$ on $\operatorname{spt}\mu_V$, the coupling \eqref{eq:coupling} implies \begin{align} \int_{J_\phi} Y\cdot\nu_\phi\,\mathrm{d}\mathcal{H}^2 = \int_{y(\Omega)\times \mathbb{S}^2}\underbrace{Y(x)\cdot\nu}_{\leq |Y(x)|}\mathrm{d} V(x,\nu)\leq \int_{y(\Omega)} |Y|\mathrm{d} \mu_V =0, \end{align} and \eqref{item:A_prop_J_vs_spt_mu} follows. For \eqref{item:C2_rectif}, note that $A^{q_\sharp V}\in L^1_{\mathrm{loc},q_\sharp V}(y(\Omega)\times G_{2,3})$ implies that $\mu_V$ has locally bounded first variation with generalized mean curvature in $L^1_{\mathrm{loc},\mu_V}(y(\Omega))$, and consequently the statement follows from \cite[Theorem 1]{Menne}. \end{proof} \begin{remark} The coupling \eqref{item:coupling}, i.e., $D\phi=Q'V$ in $y(\Omega)$, does not imply that the multiplicity of $V$ must be one, in particular, it does not imply $V(x,\nu)=(\mathcal{H}^2\,\text{\Large{$\llcorner$}} J_\phi)(x)\otimes\delta_{\nu_\phi(x)}(\nu)$, \color{black} cf.\ \eqref{eq:rec_or_varif}. \color{black} Moreover, there may in general be multiple different varifolds coupled with a fixed phase $\phi$. The reason for this is that $Q$ is not surjective: If $u\in C^0_c(y(\Omega)\times\mathbb{S}^2)$ satisfies $u(x_0,\pm \nu)=1$ for some $x_0\in y(\Omega), \nu\in \mathbb{S}^2$, there is no representation $u=QY$ for $Y\in C^0_c(y(\Omega);\mathbb{R}^3)$. \end{remark} \subsection{Energies}\label{subsec:energies} Equilibrium configurations of the body are admissible states $(y,\phi,V)\in\mathcal{A}$ that minimize the {\em energy} \begin{equation}\label{eq:E} E(y,\phi,V):=E_{\textrm{bulk}}(y,\phi)+E_{\textrm{int}}(y,V), \end{equation} which consists of the {\em bulk energy} $$ E_{\textrm{bulk}}(y,\phi):=\int_\Omega\Big((1-(\phi\circ y)(X))\,W_0(\nabla y(X))+(\phi\circ y)(X)\,W_1(\nabla y(X))\Big)\mathrm{d} X $$ and the {\em interface energy} \begin{align} E_{\textrm{int}}(y,V):=\int_{y(\Omega)\times G_{2,3}} \Psi(A^{q_\sharp V}(x,P))\mathrm{d}(q_\sharp V)(x,P). \end{align} Here, $W_i\colon\mathbb{R}^{3\times 3}\to \mathbb{R}$ is the \emph{elastic energy density} of phase $i$ ($i=0,1$), and $\Psi\colon \mathbb{R}^{3\times 3\times 3}\to \mathbb{R}$ is the \emph{interface energy density.} We assume that there exist $c_{\textrm{bulk}}>0$ and $s>0$ such that \begin{align}\label{eq:W_coercive} W_i(F) \begin{cases}\geq c_{\textrm{bulk}} \left(|F|^{r} + \left(\displaystyle\frac{|F|^3}{\det F}\right)^{r-1} +(\det F)^{-s}\right) & \text{ if } \det F>0\\[2mm] =+\infty &\text{ if } \det F\le 0, \end{cases} \end{align} \color{black} where $r>3$ is the same as in Definition \ref{def:admissible}. 
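For concreteness, one illustrative choice of stored energy densities compatible with \eqref{eq:W_coercive} (with arbitrary constants $a_i,b_i,c_i>0$, introduced here only as an example) is
\begin{align*}
W_i(F)=a_i|F|^{r}+b_i\left(\frac{|F|^3}{\det F}\right)^{r-1}+c_i(\det F)^{-s}\quad\text{if }\det F>0,
\qquad W_i(F)=+\infty\quad\text{if }\det F\leq 0;
\end{align*}
it satisfies \eqref{eq:W_coercive} with $c_{\textrm{bulk}}=\min\{a_i,b_i,c_i\}$, while its polyconvexity follows from the observations in the next paragraph together with the convexity of $F\mapsto|F|^r$ and of $t\mapsto t^{-s}$ on $(0,\infty)$.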
\color{black} Moreover, we assume that $W_i$ is polyconvex \cite{Ball:77}, i.e., that there exists a convex function $h_i:\mathbb{R}^{19}\to\mathbb{R}$ such that $W_i(F)=h_i(F, {\rm Cof}\, F,\det F)$ for all $F\in\mathbb{R}^{3\times 3}$ with $\det F>0$ and $i=0,1$. It is easily seen that $F\mapsto (|F|^3/\det F)^{r-1}$ is polyconvex in the set of matrices with positive determinants if $r>3$. Therefore the right-hand side of \eqref{eq:W_coercive} can serve as an example of a polyconvex stored energy density. In the interfacial energy, we assume that $\Psi\colon \mathbb{R}^{3\times 3\times 3}\to \mathbb{R}$ is a convex function satisfying \begin{align}\label{eq:Psi_coercivity} \Psi(A) \geq c_{\mathrm{int}}(1+ |A|^p)\quad \text{ for all } A\in \mathbb{R}^{3\times 3\times 3} \end{align} for some $p>1$ and $c_{\mathrm{int}}>0$. Integrating \eqref{eq:Psi_coercivity}, we find that the interface energy controls both the curvature and the mass of the varifold, since \begin{align}\label{eq:Psi_control} E_{\textrm{int}}(y,V) \geq c_\mathrm{int}\Big(\mu_V(y(\Omega))+\int_{y(\Omega)\times G_{2,3}}|A^{q_\sharp V}|^p \mathrm{d} (q_\sharp V)\Big). \end{align} \section{Existence of equilibrium states}\label{sec:existence} Let $E$ be given by \eqref{eq:E} and $\mathcal{A}$ as in Definition \ref{def:admissible}. Then, the following result holds. \begin{theorem}[Existence]\label{thm:main} There exists a minimizer of $E$ on $\mathcal{A}$. \end{theorem} \begin{proof} We observe that $(y,\phi,V)=(\text{id},1,0)\in\mathcal{A}$ and $E(\text{id},1,0)=\int_\Omega W_1(\mathbb{I}_{3\times 3})\,\mathrm{d} X<\infty$, which is a consequence of $W_1(\mathbb{I}_{3\times 3})<\infty$ and $|\Omega|<\infty$. In particular, $\inf_{\mathcal{A}}E<\infty$. Let $(y_n,\phi_n,V_n)_{n\in\mathbb{N}}\subset\mathcal{A}$ be a minimizing sequence for $E$. Without loss of generality, we may assume $\int_\Omega y_n\,\mathrm{d} X =0$ and \begin{align} E(y_n, \phi_n, V_n)\leq K\quad \text{ for all }n\in \mathbb{N}. \end{align} In particular, by \eqref{eq:W_coercive}, we have $\det \nabla y_n>0$ a.e.\ in $\Omega$ and $(y_n)_{n\in \mathbb{N}}\subset W^{1,r}(\Omega;\mathbb{R}^3)$ is bounded. Thus, after passing to a subsequence, we may assume $y_n\rightharpoonup y$ in $W^{1,r}(\Omega;\mathbb{R}^3)$ and also \begin{align}\label{eq:yn_unif} y_n\to y \quad \text{ in } C^0(\bar{\Omega};\mathbb{R}^3) \text{ as }n\to\infty. \end{align} Moreover, the weak convergence of $(y_n)_{n\in\mathbb{N}}$, the sequential weak continuity of $y\mapsto\det\nabla y\colon W^{1,r}(\Omega;\mathbb{R}^3)\to L^{r/3}(\Omega)$, and of $y\mapsto{\rm Cof}\,\nabla y\colon W^{1,r}(\Omega;\mathbb{R}^3)\to L^{r/2}(\Omega;\mathbb{R}^{3\times 3})$ yield \begin{align} &\nabla y_n\rightharpoonup\nabla y \quad \text{ in }L^r(\Omega;\mathbb{R}^{3\times 3}) \text{ as }n\to\infty,\label{eq:nabla_conv}\\ &\det \nabla y_n\rightharpoonup \det\nabla y \quad \text{ in } L^{r/3}(\Omega) \text{ as }n\to\infty,\label{eq:det_conv}\\ & {\rm Cof}\, \nabla y_n\rightharpoonup {\rm Cof}\,\nabla y \quad \text{ in } L^{r/2}(\Omega;\mathbb{R}^{3\times 3}) \text{ as }n\to\infty.\label{eq:cof_conv} \end{align} The convergence in \eqref{eq:yn_unif} allows us to pass to the limit in the right-hand side of \eqref{ciarlet-necas} while the left-hand side passes to the limit due to the sequential weak continuity of $y\mapsto\det\nabla y$: $W^{1,r}(\Omega;\mathbb{R}^3)\to L^{r/3}(\Omega)$. 
Moreover, \eqref{eq:nabla_conv} and \eqref{eq:det_conv}, together with polyconvexity of $F\mapsto (|F|^3/\det F)^{r-1}$, imply that $$\liminf_{n\to\infty}\int_\Omega \frac{|\nabla y_n|^{3(r-1)}}{(\det\nabla y_n)^{r-1}}\, {\rm d}X\ge \int_\Omega \frac{|\nabla y|^{3(r-1)}}{(\det\nabla y)^{r-1}}\, {\rm d}X, $$ i.e., the distortion of the limit deformation also belongs to $L^{r-1}(\Omega)$, in particular, it implies that $y$ is also a homeomorphism. Given a strictly decreasing zero-sequence $(\varepsilon_\ell)_{\ell\in\mathbb{N}}\subset (0,1)$, let $(U^\ell)_{\ell\in\mathbb{N}}$ denote a sequence of open, bounded Lipschitz domains such that \begin{align} \{x\in y(\Omega):\: \mathrm{dist}(x, \partial y(\Omega))>\varepsilon_\ell \} \subset U^\ell \subset\subset y(\Omega) \end{align} and which, upon extracting a subsequence, is increasing, i.e., $U^{\ell_1}\subset U^{\ell_2}$ whenever $\ell_1<\ell_2$. By \eqref{eq:yn_unif}, one easily shows that $U^\ell$ for $\ell\in\mathbb{N}$ fixed is contained in the image set $y_n(\Omega)$, namely, \begin{align}\label{eq:Ueps_in_y_n} U^\ell \subset y_n(\Omega) \quad\text{whenever $n\geq n(\ell)\in\mathbb{N}$ is large enough.} \end{align} {\bf Compactness:} First, we examine the sequence of varifolds. For $\ell\in\mathbb{N}$ and $n\geq n(\ell)$, consider the restriction \begin{align} V_n^{\ell} := V_n \,\text{\Large{$\llcorner$}} (U^{\ell}\times \mathbb{S}^2). \end{align} By testing the definitions of curvature varifolds and current boundaries with test functions supported in $U^\ell\times \mathbb{S}^2$ and $U^\ell$, one verifies that \begin{align} V_n^{\ell} &\in CV^{o}_2(U^{\ell}),\quad A^{q_\sharp V_n^{\ell}} = A^{q_\sharp V_n}\,\text{\Large{$\llcorner$}} (U^{\ell}\times \mathbb{S}^2), \quad \text{and}\quad \partial T_{V_n^\ell}=0 \text{ in }U^{\ell}. \end{align} The coercivity assumption \eqref{eq:Psi_coercivity} implies that \begin{align} c_{\textrm{int}}\Big(\mu_{V_n^\ell}(U^\ell)+\int_{U^{\ell}\times G_{2,3}}|A^{q_\sharp V_n^{\ell}}|^p\,\mathrm{d}(q_\sharp V_n^{\ell}) \Big)\leq E_{\textrm{int}}(y_n,V_n) \leq K. \end{align} After passing to a subsequence, it thus follows from Theorem \ref{thm:CVocpt} that there exist $V^{\ell}\in CV^{o}_2(U^{\ell})$ such that $V_n^{\ell}\rightharpoonup^\ast V^{\ell}$ in $\mathcal{M}(U^{\ell}\times \mathbb{S}^2)$ and $q_\sharp V_n^\ell \rightharpoonup^* q_\sharp V^\ell$, $A_{ijk}^{q_\sharp V_n^\ell} q_\sharp V_n^\ell \rightharpoonup^* A_{ijk}^{q_\sharp V^\ell} q_\sharp V^\ell$ in $\mathcal{M}(U^\ell \times G_{2,3})$ for $1\leq i,j,k\leq 3$. In particular, it follows $\partial T_{V^\ell}=0$ in $U^{\ell}$. Similarly, we find that \begin{align} \phi_n^{\ell}:= \phi_n\vert_{U^{\ell}}\in SBV(U^{\ell};\{0,1\}) \quad \text{ with }\quad D\phi_n^{\ell} = D\phi_n \,\text{\Large{$\llcorner$}} U^{\ell}. \end{align} By \eqref{eq:Dphi_H2} and since $J_{\phi_n^\ell}\subset \operatorname{spt} \mu_{V_n^\ell}$ by Lemma \ref{lem:A_properties}\eqref{item:A_prop_J_vs_spt_mu}, it follows that \begin{align}\label{eq:SBV_bounds_Ueps} \mathcal{H}^2(J_{\phi_n^\ell})=|D \phi_n^\ell|(U^\ell)\leq \mu_{V^{\ell}_n}(y_n(\Omega))\leq K. \end{align} Moreover, $\Vert \phi_n^{\ell}\Vert_{L^\infty(U^{\ell})}<\infty$ uniformly as well, because $\phi_n^{\ell}\in SBV(U^\ell;\{0,1\})$ and $U^\ell\subset y(\bar\Omega)$ is bounded. 
Consequently, from Theorem \ref{thm:SBVpwcpt} it follows that after passing to a subsequence, we have $\phi_n^{\ell}\to \phi^{\ell}$ in $L^1(U^{\ell})$ as $n\to\infty$ with also $\phi^{\ell}\in \{0,1\}$ a.e.\ and $D\phi_n^{\ell} \rightharpoonup^\ast D\phi^{\ell}$ in $\mathcal{M}(U^{\ell};\mathbb{R}^3)$. It is not difficult to see that the above limits are local, i.e., if $\ell<\ell'$, then we have \begin{align} V^{\ell} = V^{\ell'}\,\text{\Large{$\llcorner$}} (U^{\ell}\times \mathbb{S}^2),\quad \phi^{\ell}&= \phi^{\ell'}\vert_{U^{\ell}}. \label{eq:well_def} \end{align} Now, \color{black} choosing an appropriate diagonal sequence, we thus may assume $V_n^\ell \rightharpoonup^* V^\ell$ as $n\to\infty$ for all $\ell\in \mathbb{N}$, and \color{black} we obtain a limit varifold $V\in CV^{o}_2(y(\Omega))$ by setting \begin{align} \langle V, u\rangle = \langle V^{\ell},u\rangle = \lim_{n\to\infty} \langle V_n^\ell, u\rangle \end{align} for any $u\in C^0_c(y(\Omega)\times \mathbb{S}^2)$ and any $\ell\in\mathbb{N}$ such that $\operatorname{spt} u\subset U^{\ell}$. Similarly, we define $\phi \in L^1(y(\Omega);\{0,1\})$ by the condition $$ \phi\vert_{U^{\ell}} = \phi^{\ell} $$ for all $\ell\in\mathbb{N}$. By \eqref{eq:well_def} above, $V$ and $\phi$ are well-defined. Moreover, we have that $\phi \in SBV(y(\Omega);\{0,1\})$ since by \eqref{eq:SBV_bounds_Ueps} and \eqref{eq:Dphi_H2} we have \begin{align} |D\phi| (y(\Omega)) \leq \sup_{\ell\in\mathbb{N}}\liminf_{n\to\infty} |D \phi^\ell_n|(U^\ell)=\sup_{\ell\in\mathbb{N}}\liminf_{n\to\infty} \mathcal{H}^2(J_{\phi_n^\ell})\leq K. \end{align} To see that $V$ is coupled to $\phi$, fix any $Y\in C^0_c(y(\Omega);\mathbb{R}^3)$. Taking $\ell\in\mathbb{N}$ sufficiently large, we have that $\operatorname{spt} Y\subset U^{\ell}$, and consequently \begin{align} \langle D\phi, Y\rangle &= \langle D\phi^{\ell}, Y\rangle = \lim_{n\to\infty}\langle D\phi_n^{\ell},Y\rangle = \lim_{n\to\infty}\langle D\phi_n, Y\rangle \\ &= \lim_{n\to\infty} \langle V_n, QY\rangle = \lim_{n\to\infty} \langle V_n^{\ell}, QY\rangle = \langle V^{\ell}, QY\rangle = \langle V, QY\rangle, \end{align} using that $V_n$ and $\phi_n$ are coupled in $y_n(\Omega)$. {\bf Lower semicontinuity:} For the interface part of the energy, by convexity of $\Psi$ and \cite[Theorem 5.3.2]{Hutchinson:86}, for every $\ell\in\mathbb{N}$ we have \begin{align}\label{eq:lsc_varif_eps} \int_{U^{\ell}\times G_{2,3}} \Psi\big(A^{q_\sharp V^{\ell}}\big)\mathrm{d}(q_\sharp V^{\ell}) \leq \liminf_{n\to\infty}\int_{U^{\ell}\times G_{2,3}}\Psi\big(A^{q_\sharp V_n^{\ell}}\big)\mathrm{d}(q_\sharp V_n^{\ell}). \end{align} Sending $\ell\to\infty$, monotone convergence implies \begin{align} E_{\textrm{int}}(y,V)=\int_{y(\Omega)\times G_{2,3}}\Psi\big(A^{q_\sharp V}\big)\mathrm{d} (q_\sharp V) & \leq \liminf_{n\to\infty}\int_{y_n(\Omega)\times G_{2,3}}\Psi\big(A^{q_\sharp V_n}\big)\mathrm{d}(q_\sharp V_n)\\ &=\liminf_{n\to\infty}E_{\textrm{int}}(y_n,V_n). \end{align} We now turn to the bulk term. From the above construction of $U^\ell$ it follows that \begin{align} &\phi_n \to \phi \quad\text{ in } L^1(U^\ell), \end{align} for all $\ell\in\mathbb{N}$. We now find that \begin{align} \int_{y_n(\Omega)\cap y(\Omega)} |\phi_n(x)-\phi(x)|\mathrm{d}x &\leq \int_{U^\ell} |\phi_n(x)-\phi(x)| \mathrm{d}x + |y(\Omega)\setminus U^\ell|, \end{align} so that, by sending first $n\to\infty$ and then $\ell\to\infty$, by \eqref{eq:yn_unif} we conclude that \begin{align} \lim_{n\to\infty}\Vert \phi_n-\phi\Vert_{L^1(y_n(\Omega)\cap y(\Omega))}= 0. 
\end{align} By the energy bound and the coercivity assumptions \eqref{eq:W_coercive}, we find that the $y_n$ have uniformly $L^{r-1}$-bounded distortion, $r-1>2$, and consequently the assumptions of \cite[Lemma 5.3]{GKMS19} are satisfied. This yields that \begin{align}\label{eq:phi_circ_y} \phi_n \circ y_n \to \phi \circ y \quad \text{ in }L^1(\Omega). \end{align} The last limit passage, polyconvexity of the bulk energy densities, weak convergence of minors \eqref{eq:nabla_conv}, \eqref{eq:det_conv}, \eqref{eq:cof_conv}, and the lower semicontinuity result of Eisen \cite{Eisen} imply that \begin{align} E_{\textrm{bulk}}(y,\phi)\leq \liminf_{n\to\infty} E_{\textrm{bulk}}(y_n,\phi_n). \end{align} Consequently, a minimizer of $E$ in $\mathcal{A}$ exists. \end{proof} \begin{remark}[Multiple phases] \label{rem:multi} Although the model above deals only with two phases, an extension to a general multiphase material is possible in a similar way as in \cite{Baldo90} or \cite{grandi2020,Silhavy-2011}. More precisely, one can describe the case of $m\in {\mathbb N}$ distinct phases by redefining $$ E(y,\phi,V) = \sum_{i=1}^m \left(\int_\Omega (\phi_i \circ y)W_i(\nabla y)\, {\rm d} X + c_i\,E_{\textrm{int}}(y,V_i)\right). $$ Here, the phase descriptor $\phi=(\phi_1,\dots, \phi_m)$ takes values in $\{0,1\}^m$, i.e., $\phi\colon y(\Omega)\to \{0,1\}^m$, and the components $\phi_i$ $(i=1,\dots,m)$ describe the local proportion of the different phases. In particular, $\phi$ is constrained to the set of pure phases $\{\phi=(\phi_1,\cdots, \phi_m) \in \{0,1\}^m\: : \: \phi_1+\dots+\phi_m=1\}$. Correspondingly, the vector of varifolds $V=(V_1,\cdots,V_m)$ collects $m$ varifolds, such that $V_i$ is coupled to $\phi_i$ in $y(\Omega)$, $i=1,\dots,m$. The energy densities $W_i$ are all assumed to be coercive as in \eqref{eq:W_coercive} and the constants $c_i$ are assumed to be positive. Indeed, the latter positivity turns out to be necessary for lower semicontinuity; see \cite{AmbBra90a} for details. \end{remark} \section{Topology optimization}\label{sec:topology} In this section, we build on the above theory and tackle a problem in topology optimization \cite{Allaire,Bendsoe}. With respect to more classical settings, the novelty is twofold here. Firstly, in line with this note's general approach, the problem's description is fully Eulerian, which naturally corresponds to the large-deformation setting. Secondly, the curvature of the material interface is taken into account. The penalization of the curvature of the boundary of the body, in addition to its surface area, fits well with applications where sharp edges should be avoided. Indeed, in many mechanical applications sharp edges, especially reentrant edges, may be subject to strong stresses and are often the onset of plasticization and damage. Given a deformation $y\colon \Omega\to \mathbb{R}^3$ of an open, bounded Lipschitz domain $\Omega\subset\mathbb{R}^3$, we interpret the image set $y(\Omega)$ as an a priori unknown {\it design domain}. This has to be understood as a container, to be partially occupied by an elastic solid, whose actual shape is the subject of the optimization procedure. We reinterpret the phase indicator $\phi\colon y(\Omega)\to \{0,1\}$ as a descriptor of the optimal shape.
More precisely, the deformed state of the body to be identified corresponds to the subset of $y(\Omega)$ where $\phi\equiv 1$. On the contrary, the subset of $y(\Omega)$ where $\phi\equiv 0$ is interpreted as the deformed state of a very compliant {\it Ersatz} material, which is still assumed to be elastic. As customary in topology optimization, in order to avoid trivial solutions, we prescribe the volume of the body to be determined by imposing \begin{equation} \label{eq:volume} \int_{y(\Omega)}\phi\, {\rm d} X = \eta\, {\mathcal L}^3(y(\Omega)) \end{equation} for a fixed parameter $\eta\in(0,1)$. As the deformation $y$ is a priori unknown, for mathematical convenience, the scalar field $\phi$ is from here on defined on the whole space $\mathbb{R}^3$ without changing notation. We will refer to such a field $\phi:\mathbb{R}^3 \to \{0,1\}$ as {\it Eulerian material distribution} in the following. Given an Eulerian material distribution $\phi$, we start by solving the equilibrium problem with some appropriate boundary conditions in the referential configuration. More precisely, we let $\partial \Omega $ be decomposed into $\Gamma_{\rm D},\, \Gamma_{\rm N}\subset \partial \Omega$, which are assumed to be open (in the topology of $\partial \Omega$) with $\Gamma_{\rm D}\cap\Gamma_{\rm N} =\emptyset$, $\overline{\Gamma}_{\rm D}\cup \overline{\Gamma}_{\rm N} = \partial \Omega $ (closure taken in the topology of $\partial \Omega$), and ${\mathcal H}^2(\Gamma_{\rm D})>0$. The body is assumed to be clamped on $\Gamma_{\rm D}$ and the set of {\em admissible deformations} reads $${\mathcal Y}=\{y \in W^{1,r}(\Omega;{\mathbb R}^3) \::\: \text{$y$ is a homeomorphism}, \ y=\text{id on} \ \Gamma_{\rm D}\}.$$ In addition, a traction $g \in L^1(\Gamma_{\rm N}; {\mathbb R}^3)$ is exerted at the boundary part $\Gamma_{\rm N}$ and the material is subjected to a force with given force density $f \in L^1(\Omega; {\mathbb R}^3)$. Force and traction could also be assumed to be formulated in Eulerian coordinates. For a fixed $\eta\in (0,1)$, the set of equilibria related with $\phi$ is defined as \begin{align} {\mathcal Y}(\phi)&=\argmin \bigg\{ E_{\rm bulk}(y,\phi) - \int_\Omega (\phi\circ y) f\cdot y \, {\rm d} X- \int_{\Gamma_{\rm N}} g \cdot y \, {\rm d} \mathcal{H}^2 : \\ &\qquad\qquad \qquad \qquad \text{$y\in {\mathcal Y}$ is such that} \ \eqref{eq:volume} \ \text{holds}\bigg\}. \label{eq:Yphi_def} \end{align} By following the arguments from the proof of Theorem \ref{thm:main}, one readily checks that $\mathcal{Y}(\phi)$ is not empty, provided $\phi$ is such that the constraint \eqref{eq:volume} is satisfied by some $y\in \mathcal{Y}$. Note that such $\phi$ exists; see the proof of Theorem \ref{thm4} below. However, for general $\phi$, even if nonempty, $\mathcal{Y}(\phi)$ may not be a singleton, since equilibrium deformations need not be unique. Our goal is to minimize the {\it compliance} $$ C(y,\phi) = \int_\Omega (\phi\circ y)f\cdot y \, {\rm d} X + \int_{\Gamma_{\rm N}} g \cdot y \, {\rm d} \mathcal{H}^2$$ which is a measure of the elastic energy stored by the deformed piece at equilibrium. In order to describe the curvature of the Eulerian interface, we augment the description of the material by an oriented curvature varifold, which we relate to $\phi$ as in Section \ref{sec:model}.
Recalling $\mathcal{A}$ from Definition \ref{def:admissible}, the topology optimization problem reads \begin{align} \min\Big\{C(y,\phi) + E_{\textrm{int}}(y,V):~\phi\in L^\infty(\mathbb{R}^3), \Vert \phi\Vert_{\infty}\leq 1, y\in \mathcal{Y}(\phi), (y,\phi\vert_{y(\Omega)},V)\in\mathcal{A} \Big\}.\label{eq:to2} \end{align} Note that the volume constraint is included in the definition of ${\mathcal Y}(\phi)$. The main result of this section is the following. \begin{theorem}[Topology optimization]\label{thm4} Problem \eqref{eq:to2} admits a solution. \end{theorem} \begin{proof} Let us first check that the infimum in \eqref{eq:to2} is not $\infty$. To this aim, we construct an admissible triplet in the domain of the functional. Let $\phi_0=1_H\in SBV(\mathbb{R}^3;\{0,1\})$, where $H\subset \mathbb{R}^3$ is a half-space with ${\mathcal L}^3(\Omega \cap H)= \eta {\mathcal L}^3(\Omega)$. Call $P_0 = \partial H\subset\mathbb{R}^3$ and let $N_0\color{black}\in\mathbb{S}^2\color{black}$ be the unit normal vector to $P_0$ pointing towards the interior of $H$. Moreover, let $V_0\color{black}\in CV_2^o(\mathbb{R}^3)\color{black}$ be as in \eqref{eq:rec_or_varif} with $M=P_0$, $\nu^M(x)=N_0$, $\theta^+(x)=1$, and $\theta^-(x)=0$ for $x\in M$. This entails in particular that $V_0$ and $\phi_0$ are coupled in $\mathbb{R}^3$. Now $y_0=\mathrm{id}$ satisfies the constraint \eqref{eq:volume}, so by the discussion after \eqref{eq:Yphi_def}, the set ${\mathcal Y}(\phi_0)$ is not empty. Let now $y_\ast \in \mathcal{Y}(\phi_0)$ be given. We readily check that $(y_\ast, \phi_0\vert_{y_\ast(\Omega)},V_0\,\text{\Large{$\llcorner$}} (y_\ast(\Omega)\times \mathbb{S}^2))\in\mathcal{A}$. Moreover, as $P_0$ being a plane implies $A^{q_\sharp V_0} \equiv 0$, we have that $$C(y_\ast,\phi_0\vert_{y_\ast(\Omega)}) + E_{\textrm{int}}(y_\ast,V_0\,\text{\Large{$\llcorner$}} (y_\ast(\Omega)\times \mathbb{S}^2)) = C(y_\ast,\phi_0\vert_{y_\ast(\Omega)}) + \Psi(0){\mathcal H}^2 (P_0 \cap y_\ast(\Omega))<\infty,$$ since $y_\ast \in W^{1,r}(\Omega;\mathbb{R}^3)$, $r>3$, implies that $y_\ast(\Omega)$ is bounded. Let $(y_n,\phi_n,V_n)$ be an infimizing sequence for problem \eqref{eq:to2} so that $\phi_n\in L^\infty(\mathbb{R}^3)$, $\Vert\phi_n\Vert_\infty\leq 1$, $y_n\in \mathcal{Y}(\phi_n)$, $(y_n,\phi_n\vert_{y_n(\Omega)},V_n)\in \mathcal{A}$. Starting from the observation that $\mathcal{Y}(\phi_n)$ is bounded in $W^{1,r}(\Omega;\mathbb{R}^3)$, independently of $n\in \mathbb{N}$, we may argue as in the proof of Theorem \ref{thm:main} to conclude that, after passing to a not relabeled subsequence, there exist $\phi\in L^\infty(\mathbb{R}^3)$, $y\in {\mathcal Y}$, and $V\in CV_2^o(y(\Omega))$ with $(y,\phi\vert_{y(\Omega)},V)\in \mathcal{A}$ such that in particular \begin{align} &y_n \to y \quad \text{in} \ C^0(\bar \Omega;{\mathbb R}^3). 
\label{eq:c1}\\ &\phi_n \rightharpoonup^* \phi \quad \text{in} \ L^\infty(\mathbb{R}^3), \ \ \phi_n \to \phi\quad \text{in} \ L^1_\mathrm{loc}(y(\Omega)),\label{eq:c2}\\ &\phi_n\circ y_n \to \phi\circ y\quad \text{in} \ L^1(\Omega),\label{eq:c3} \end{align} and, we have by lower semicontinuity that $\Vert\phi\Vert_\infty \leq \liminf_{n\to\infty}\Vert \phi_n\Vert_\infty\leq 1$, as well as \begin{align} E_{\textrm{bulk}}(y,\phi)&\leq \liminf_{n\to\infty} E_{\textrm{bulk}}(y_n, \phi_n),\label{eq:c6}\\ E_{\textrm{int}}(y, V) &\leq \liminf_{n\to\infty} E_{\textrm{int}}(y_n,V_n).\label{eq:Eint_lsc} \end{align} Equation \eqref{eq:c1} and \eqref{eq:c3} also imply \begin{align} C(y, \phi) &= \lim_{n\to\infty} C(y_n, \phi_n).\label{eq:c7} \end{align} As the volume constraint \eqref{eq:volume} passes to the limit under convergences \eqref{eq:c1} and \eqref{eq:c2}, from \eqref{eq:c1}, \eqref{eq:c3}, and \eqref{eq:c6} we get that $y \in {\mathcal Y}(\phi)$. Hence, owing to inequality \eqref{eq:Eint_lsc} and the convergence \eqref{eq:c7} we conclude that $(y,\phi,V)$ solves the topology optimization problem~\eqref{eq:to2}. \end{proof} \begin{remark}[Worst-case-scenario compliance.] Given the Eulerian material distribution $\phi$, the set ${\mathcal Y}(\phi)$ may contain more than one equilibrium, for uniqueness may genuinely fail \cite{Spadaro}. In order to tackle this indeterminacy, one could consider solving a topology optimization problem \eqref{eq:to2} where the compliance $C(y,\phi)$ is replaced by the {\it worst-case-scenario} compliance $$C_{\rm max}(\phi) = \max_{y\in {\mathcal Y}(\phi)} C(y,\phi).$$ Note that $C_{\rm max}(\phi)$ can be proved to be well-defined, as soon as ${\mathcal Y}(\phi)$ is not empty. To treat the worst-case-scenario-compliance case, however, one needs to require some stability of \color{black} the \color{black} set ${\mathcal C}(\phi) \subset {\mathcal Y}(\phi)$ \color{black} given by \color{black} those equilibria $y\in {\mathcal Y}(\phi)$ realizing the maximum, namely, such that $C(y,\phi)=C_{\rm max}(\phi)$. A condition which would ensure the validity of an existence result in the spirit of Theorem~\ref{thm4} would be \begin{align} &\forall \phi_n ,\, \phi \in L^\infty(\Omega), \ \| \phi_n\|_\infty,\, \| \phi\|_\infty\leq 1, \ \phi_n \to \phi \ \text{in} \ L^1(\Omega), \ \forall y \in {\mathcal C}(\phi), \\ &\qquad \qquad \exists\ y_n \in {\mathcal Y}(\phi_n): \quad C(y,\phi)\leq \liminf_{n\to \infty } C(y_n,\phi_n). \end{align} The latter entails the existence of a recovery sequence for each equilibrium $y\in {\mathcal C}(\phi)$. In particular, it is trivially satisfied in case the set of equilibria ${\mathcal Y}(\phi)$ is a singleton, i.e., in case of uniqueness. Even in the case of nonuniqueness, the above condition holds if $C(y,\phi)$ takes the same value for all $y\in {\mathcal Y}(\phi)$. This is for instance the classical case of buckling of a rod under longitudinal compression. \end{remark} \begin{remark}[A referential formulation] The fully Eulerian setting above can be computationally challenging. One could resort to a more classical referential setting by identifying the optimized body via $\varphi: \Omega \to \{0,1\}$ defined on the fixed reference configuration, while still retaining the penalization of the curvature of the referential boundary of the body, in addition to its referential surface area. 
In this setting, the volume constraint \eqref{eq:volume} can be simplified to $\| \varphi\|_1=\eta {\mathcal L}^3(\Omega)$ for some given $\eta \in (0,1)$. The set of equilibrium deformations related with $\varphi$ is defined as \begin{align} &{\mathcal Y}(\varphi)=\argmin_{y \in {\mathcal Y}}\left\{ \int_\Omega\Big((1{-}\varphi)\,W_0(\nabla y)+\varphi\,W_1(\nabla y)\Big)\mathrm{d} X - \int_\Omega \varphi f\cdot y \, {\rm d} X- \int_{\Gamma_{\rm N}} g \cdot y \, {\rm d} \mathcal{H}^2\right\} \end{align} which can be readily checked to be not empty. By defining the {\it referential} compliance as $$C_{\rm ref}(y,\varphi) = \int_\Omega \varphi f\cdot y \, {\rm d} X + \int_{\Gamma_{\rm N}} g \cdot y \, {\rm d} \mathcal{H}^2,$$ the referential topology optimization problem reads \begin{align} &\min \bigg\{ C_{\rm ref}(y,\varphi)+ \int_{\Omega\times G_{2,3}} \Psi(A^{q_\sharp V})\, {\rm d } (q_\sharp V) \: : \: \varphi \in SBV(\Omega;\{0,1\}), \ \| \varphi\|_1=\eta {\mathcal L}^3(\Omega), \\ &\qquad \qquad \qquad y \in {\mathcal Y}(\varphi), \ V \in CV_2^o(\Omega), \ \partial T_V=0, \, \varphi \text{ and }V\text{ are coupled in }\Omega\bigg\}.\label{eq:to} \end{align} Note that the varifold $V$ is now defined in the fixed set $\Omega \times \mathbb{S}^2$, and we are using the same notation of Section \ref{sec:ocv}. By arguing along the lines above, one can prove that the referential topology optimization problem \eqref{eq:to} admits a solution. \end{remark}
\end{document}
\begin{document} \author[R. Sebastian]{Ronnie Sebastian} \address{Indian Institute of Science Education and Research (IISER), Dr. Homi Bhabha Road, Pashan, Pune 411008, India } \email{[email protected]} \subjclass[2010]{14C25} \keywords{Algebraic cycles, smash nilpotence} \begin{abstract} Voevodsky has conjectured that numerical and smash equivalence coincide on a smooth projective variety. We prove that this conjecture holds for uniruled 3-folds and for one dimensional cycles on products of Kummer surfaces. \end{abstract} \maketitle \section{Introduction} Throughout this article we work over an algebraically closed field $k$ and with algebraic cycles with rational coefficients. Let $X$ be a smooth and projective variety over $k$. In \cite{voe}, Voevodsky defines a cycle $\alpha$ to be smash nilpotent if the cycle $\alpha^n:=\alpha\times\alpha\times\ldots\times\alpha$ on the variety $X^n:=X\times X\times\ldots\times X$ is rationally equivalent to 0. It is trivial to see that a smash nilpotent cycle is numerically trivial; Voevodsky conjectured that the converse also holds. Voevodsky, \cite{voe}, and Voisin, \cite{voisin}, proved that a cycle which is algebraically trivial is smash nilpotent. Kimura, \cite[Proposition 6.1]{kimura}, proved that a morphism between finite dimensional motives of different parity is smash nilpotent. Thus, if an algebraic cycle can be viewed as a morphism between motives of different parities, then it is smash nilpotent. In \cite{ks}, the authors use this fact to prove that skew cycles on an abelian variety are smash nilpotent. A cycle $\beta$ is called skew if it satisfies $[-1]^*\beta=-\beta$. In \cite{ks} such cycles are expressed as morphisms between motives of different parity, using the fact that the motive of an abelian variety has a Chow--K\"unneth decomposition, \[h(A)=\bigoplus_{i=0}^{2\,{\rm dim}\,A}h^i(A)\] and the motives $h^i(A)$ for $i$ odd are oddly finite dimensional. In \cite{sebastian1} it is proved that for one dimensional cycles on a variety dominated by a product of curves, smash equivalence and numerical equivalence coincide. The same result can be deduced from \cite{marini} and \cite{herbaut}, where it is shown that for a smooth projective curve $C$, for any adequate equivalence relation, $[C]_{i}=0$ implies that $[C]_{i+1}=0$, for $i\geq2$. Here $[C]_i$ denotes the Beauville component of the curve $C$ in its Jacobian satisfying $[n]_*[C]_i=n^i[C]_i$. If we combine this with \cite{ks}, where it is shown that $[C]_3=0$ modulo smash equivalence, then one can deduce the results in \cite{sebastian1}. If we take the Chow ring of an abelian variety modulo algebraic equivalence and go modulo the subring generated by the cycles in the preceding paragraphs under the Pontryagin product, intersection product and Fourier transform, then there are no nontrivial examples of higher dimensional cycles (dim $>$ 1) for which Voevodsky's conjecture is known to hold. The purpose of this article is to write down some more examples for which this conjecture holds. The main theorems in this article are \begin{theorem}\label{thm1} Let $X$ be a uniruled 3-fold. Then numerical and smash equivalence coincide for cycles on $X$. \end{theorem} \begin{theorem}\label{thm2} Let $K_i$, $i=1,2,\ldots,N$, be Kummer surfaces. Then numerical and smash equivalence coincide for one dimensional cycles on $X:=K_1\times K_2\times \cdots \times K_N$. \end{theorem} The proofs of the above theorems use Lemma \ref{l1}, which implies the following.
If numerical and smash equivalence coincide on a smooth and projective variety $Y$, then they coincide on $\t{Y}$, which is obtained by blowing up $Y$ along a smooth subvariety of dimension $\leq 2$. {\bf Acknowledgements}. We thank Najmuddin Fakhruddin for useful discussions. \section{Smash equivalence and blow ups} Let $Y$ be a smooth variety and $i:X\hookrightarrow Y$ be a smooth and closed subvariety. Let $f:\t{Y}\to Y$ denote the blow-up of $Y$ along $X$. \begin{lemma}\label{l1} If numerical and smash equivalence coincide for elements in $CH_i(X)$ for $i\leq r$ and $CH_r(Y)$, then they coincide for elements in $CH_r(\t{Y})$. \end{lemma} \begin{proof} Consider the Cartesian square \[\xymatrix{\t{X}\ar[r]\ar[d]_g & \t{Y}\ar[d]^f\\ X\ar[r]^{i} & Y}\] Then \cite[Proposition 6.7]{fulton} says that there is an exact sequence \[0\to CH_r(X)\to CH_r(\t{X})\oplus CH_r(Y)\to CH_r(\t{Y})\to 0\] Since $X$ is a smooth subvariety of $Y$, we have that $\t{X}\xr{g}X$ is the projective bundle associated to the locally free sheaf $\sc{I}_{X}/\sc{I}_X^2$ on $X$. Thus, every element $\beta\in CH_r(\t{X})$ may be expressed as the sum \[\beta=\sum_{i=0}^{d-1}c_1(\sc{O}(1))^i\cap g^*g_*(\beta\cap c_1(\sc{O}(1))^{d-1-i}),\] where $\sc{O}(1)$ is the tautological bundle on $\t{X}$ and $d$ is the codimension of $X$ in $Y$. If we assume that numerically trivial elements in $CH_i(X)$ are smash nilpotent for $i\leq r$, then the above formula shows that numerically trivial elements in $CH_r(\t{X})$ are smash nilpotent. If numerical and smash equivalence coincide for elements in $CH_r(Y)$, then the above exact sequence shows that they coincide for elements in $CH_r(\t{Y})$ as well. \end{proof} The following is a standard result which we include for the benefit of the reader. \begin{lemma}\label{l2} Let $X$ be a smooth projective variety and let $h:Y\to X$ be a dominant morphism. If numerical and smash equivalence coincide for cycles on $Y$, then they coincide for cycles on $X$. \end{lemma} \begin{proof} Let $l\in CH^1(Y)$ be a relatively ample line bundle. The relative dimension of $h$ is $r:=\rm{dim}(Y)-\rm{dim}(X)$; define $d$ by $h_*(l^r)=:d[X]$. Then by the projection formula, we have, for all $\alpha\in CH^*(X)$, \[h_*(l^r\cdot h^*\alpha)=d\alpha.\] If $\alpha$ is a numerically trivial cycle on $X$, then $l^{r}\cdot h^*\alpha$ is a numerically trivial cycle on $Y$ and so is smash nilpotent. The above equation shows that $\alpha$ is smash nilpotent. \end{proof} \section{Examples} \subsection{Uniruled 3-folds} \begin{definition}By a uniruled 3-fold we mean a smooth projective variety $X$ for which there is a dominant rational map $\varphi:S\times \mathbb{P}^1\dashrightarrow X$ for some smooth projective surface $S$. \end{definition} \begin{proof}[{\bf Proof of Theorem \ref{thm1}}]Since $X$ is projective and $Y:=S\times \mathbb{P}^1$ is normal, $\varphi$ can be defined on an open set $U$ whose complement has codimension $\geq 2$. Let $X\hookrightarrow \mathbb{P}^n$ be a closed immersion; composing this with $\varphi$ we get a morphism $g:U\to \mathbb{P}^n$. Let $L$ denote the pullback of $\sc{O}(1)$ along $g$. Since $Y\setminus U$ has codimension $\geq2$, there is a unique line bundle on $Y$ which restricts to $L$; we denote this also by $L$. As $Y$ is smooth and the codimension of $Y\setminus U$ is $\geq2$, the restriction map $H^0(Y,L)\to H^0(U,L)$ is an isomorphism, see, for example, \cite[Chapter 3, Ex 3.5]{hart}. Let $V\subset H^0(Y,L)$ be the subspace of global sections $g^*H^0(\mathbb{P}^n,\sc{O}(1))$.
Let $J\subset L$ be the subsheaf generated by $V$; then $I=J\otimes L^{-1}$ is an ideal sheaf such that $Y\setminus \rm{Supp}(I)=U$ and $V$ is contained in the image of the map $H^0(Y,I\otimes L)\to H^0(Y,L)$ and it generates $I\otimes L$. We want to apply the principalization theorem to the ideal sheaf $I$. In characteristic 0, see \cite[Theorem 3.21]{kollar}, and in positive characteristic, see \cite[Theorem 1.3]{cutkosky}. We get a morphism $f:Y'\to Y$ which is obtained as a composite of smooth blow-ups, such that $f^*I$ is a locally principal ideal sheaf and $f$ is an isomorphism on $f^{-1}(U)$. The subspace $f^*V\subset H^0(Y',f^*I\otimes f^*L)$ defines a map $Y'\to \mathbb{P}^n$ which extends $g$. Thus, we get a dominant morphism $Y'\to X$. As $S$ is a surface, numerical and smash equivalence coincide for cycles on $S$ and so for cycles on $Y$. Since $Y'$ is obtained from $Y$ by blowing up at smooth centers and $\rm{dim}(Y)=3$, numerical and smash equivalence coincide for $Y'$ using Lemma \ref{l1}. Finally, use Lemma \ref{l2} to get the same result for $X$. \end{proof} \subsection{Kummer surfaces} Let $Y$ be an abelian surface and let $X$ be the set of 2 torsion points. These are exactly the fixed points for the involution $x\mapsto x^{-1}$ on $Y$. This involution lifts to an involution, denoted $\t{i}$, of the blow-up $\t{Y}$ of $Y$ along $X$, and the quotient $\t{Y}/\t{i}$ is the Kummer surface associated to $Y$. We denote this surface by $K$ and by $\pi$ the quotient map $\t{Y}\to K$. Let $Y_i$, $i=1,2$, be abelian surfaces and let $K_i$, $i=1,2$, be the associated Kummer surfaces. Let $X_i$ be the set of 2 torsion points in $Y_i$. Similarly, we have the varieties $\t{Y}_i$ and there is a dominant projective map $\t{Y}_1\times \t{Y}_2\to K_1\times K_2$. The map $\t{Y}_1\times \t{Y}_2 \to Y_1\times Y_2$ may be factored as the composite of two blow ups \[\t{Y}_1\times \t{Y}_2 \to \t{Y}_1\times Y_2 \to Y_1\times Y_2,\] the first along the surface $X_1\times Y_2$ and the second along the surface $\t{Y}_1\times X_2$. Applying Lemma \ref{l1} to both these blow ups, we get that numerical and smash equivalence coincide for one dimensional cycles on $\t{Y}_1\times \t{Y}_2$ and so using Lemma \ref{l2} they coincide on $K_1\times K_2$. \begin{proof}[{\bf Proof of Theorem \ref{thm2}}] We recall a result from \cite{sebastian1} which we need. Let $N\geq 3$ be an integer and let $C$ be a smooth projective curve with a base point $c_0$. Let $\Delta_C$ denote the diagonal embedding $C\hookrightarrow C^N$. Let $p_{ij}:C^N\to C^N$ denote the map which leaves the $i$th and $j$th coordinates intact and changes the other coordinates to $c_0$, for example, $p_{12}(x_1,x_2,\ldots,x_N)=(x_1,x_2,c_0,c_0,\ldots,c_0)$. Then there are rational numbers $q_{ij}$ such that \begin{equation}\label{e1}\Delta_C\sim_{sm}\sum_{i\neq j}q_{ij}p_{ij*}(\Delta_C).\end{equation} Let $X:=K_1\times K_2\times\cdots\times K_N$ be a product of Kummer surfaces. Fix base points $e_i\in K_i$, and define (we abuse notation here) $p_{ij}:X\to X$ in the same way as above, using these base points. We remark that if we work modulo algebraic equivalence, for any cycle $\alpha\in CH_*(X)$, the cycle $p_{ij*}(\alpha)$ is independent of the choice of these base points. Hence, the same is true modulo smash equivalence. Let $D\hookrightarrow X$ be a reduced and irreducible one dimensional subvariety. Let $C\to D$ be its normalization and denote the composite map by $f:C\to X$.
If we let $p_i$ denote the projection from $X$ to $K_i$ and let \[\pi:=(p_1\circ f)\times (p_2\circ f)\times\cdots\times(p_N\circ f),\] then we get $f_*([C])=\pi_*(\Delta_C)$. Using equation \eqref{e1}, we get that modulo smash equivalence \begin{align*} [D]=f_*([C])=&\pi_*(\Delta_C)=\sum_{i\neq j}q_{ij}\pi_*p_{ij*}(\Delta_C)=\sum_{i\neq j}q_{ij}p_{ij*}\pi_*(\Delta_C)\\ =&\sum_{i\neq j}q_{ij}p_{ij*}([D]). \end{align*} In particular, modulo smash equivalence we get, for any one dimensional cycle $\alpha$, \[\alpha=\sum_{i\neq j}q_{ij}p_{ij*}(\alpha).\] As we have seen above, on a product of two Kummer surfaces, numerical and smash equivalence coincide for one dimensional cycles. If $\alpha$ is numerically trivial, each $p_{ij*}(\alpha)$ is numerically trivial and so smash nilpotent. Thus, $\alpha$ is smash nilpotent. \end{proof} \end{document}
\begin{document} \setcounter{page}{1} \vspace*{1.0cm} \title[Data-compatibility of algorithms] {Data-compatibility of algorithms} \author[Y. Censor, M. Zaknoon, A. J. Zaslavski]{ Yair Censor$^{1,*}$, Maroun Zaknoon$^2$, Alexander J. Zaslavski$^3$} \maketitle \vspace*{-0.6cm} \begin{center} {\footnotesize {\it $^1$Department of Mathematics, University of Haifa\\Mt. Carmel, Haifa 3498838, Israel\\ $^2$Department of Mathematics, The Arab Academic College for Education\\22 HaHashmal Street, Haifa 32623, Israel\\ $^3$Department of Mathematics, The Technion -- Israel Institute of Technology\\Technion City, Haifa 3200003, Israel }}\end{center} \vskip 4mm {\small \noindent {\bf Abstract.} The data-compatibility approach to constrained optimization, proposed here, strives to reach a point that is \textquotedblleft close enough\textquotedblright\ to the solution set and whose target function value is \textquotedblleft close enough\textquotedblright\ to the constrained minimum value. These notions can replace analysis of asymptotic convergence to a solution point of infinite sequences generated by specific algorithms. We consider a problem of minimizing a convex function over the intersection of the fixed point sets of nonexpansive mappings and demonstrate the data-compatibility of the Hybrid Subgradient Method (HSM). A string-averaging HSM is obtained as a by-product and relevance to the minimization over disjoint hard and soft constraints sets is discussed. \vskip 1mm \noindent {\bf Keywords.} Data-compatibility; constrained minimization; feasibility-seeking; hybrid subgradient method; string-averaging; common fixed points; proximity function; nonexpansive operators; hard and soft constraints. } \renewcommand{\thefootnote}{} \footnotetext{ $^*$Corresponding author. \par E-mail addresses: [email protected] (Y. Censor), [email protected] (M. Zaknoon),\\ [email protected] (A. J. Zaslavski). \par Received May 17, 2020; Accepted October 19, 2020. } \section{Introduction} The data of a constrained minimization problem $\min\{f(x)\mid$ $x\in C\}$ consists of a target function $f$ and a constraints set $C.$ For this problem to be meaningful, $C$ needs to be nonempty, and for asymptotic convergence analysis of an algorithm for solving the problem one commonly needs that the solution set of the problem be nonempty, i.e., that there exists at least one point, say $x^{\ast},$ in $C$ with the property that $f(x^{\ast})\leq f(x)$ for all $x\in C.$ In real-world practical situations these nonemptiness assumptions cannot always be guaranteed or verified. To cope with this we define the notion of \textit{data-compatibility} in a Hilbert space. Such data-compatibility is a finite, not an asymptotic, notion. Even when the set $C$ and the solution set of the constrained minimization problem are nonempty, striving for data-compatibility is a worthwhile aim because it can be \textquotedblleft reached\textquotedblright,\ contrary to asymptotic limit points. Data-compatibility of a point $x$ means that it simultaneously (i) is \textquotedblleft close enough\textquotedblright\ to the set of minimizers of $f$ over the constraints $C,$ and (ii) has a function value $f(x)$ \textquotedblleft close enough\textquotedblright\ to the minimum of $f$ over that set of constraints.
Once we precisely define these notions, the question arises whether or not it is possible to guarantee that, under certain conditions, compatibility with a data pair $(C,f)$ can be reached by an iterative process designed to solve a constrained optimization problem. The advantage of these data-compatibility notions is that they can cater better to practical situations. On the theoretical side, we propose that instead of proving asymptotic convergence of iterative processes and afterwards studying the effects of various stopping rules, it is possible to directly formulate data-compatibility and provably guarantee that it can be reached by an iterative process. We demonstrate our principle and approach in a specific scenario. The problem formulation and the algorithm that we use in our demonstration serve only as vehicles to present our data-compatibility approach, which we believe is novel. To do so we study the behavior of a specific iterative process for convex minimization over the intersection of the fixed point sets of nonexpansive operators, called the Hybrid Subgradient Method (HSM)\footnote{The term HSM is in analogy with the established term, coined by Yamada \cite{Yamada2001a}, of the Hybrid Steepest Descent Method (HSDM). The structural similarity of the HSM with the HSDM is that the former uses subgradient steps instead of the steepest descent steps used by the latter.}. In contrast with many existing works, rather than investigating asymptotic convergence of the generated sequences, we specify conditions under which the iterative HSM process generates solutions that are compatible with the data pair $(\mathrm{Fix} \left( T\right) ,f).$ Minimization over the intersection of the fixed point sets of nonexpansive operators has been treated extensively in the literature, of which we reference a few works below. But in all these earlier works the asymptotic convergence of algorithms, under various sets of conditions, is the central theme, not data-compatibility. As an important special case of the general algorithmic formulation we discuss a string-averaging algorithmic scheme. The string-averaging algorithmic notion has a quite general structure in itself. Invented in \cite{ceh01}, it has spurred many extensions and applications since then, e.g., \cite{Bargetz2018,Kong2019} and the book \cite{Zaslavskibook2018}, and it works in general as follows. From a current iteration point, it performs consecutively specified iterative algorithmic steps \textquotedblleft along\textquotedblright\ different \textquotedblleft strings\textquotedblright\ of individual constraints sets and then takes a combination, convex or other, of the strings' end-points as the next iterate. The string-averaging algorithmic scheme gives rise to a variety of specific algorithms by judiciously choosing the number of strings, their assignments and the nature of the combination of the strings' end-points. Details are given in the sequel.
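For orientation, the following schematic instance, written down here only for illustration, takes orthogonal projections onto four individual constraints sets $C_{1},C_{2},C_{3},C_{4}$ of a Hilbert space as the algorithmic steps: with the two strings $(C_{1},C_{2})$ and $(C_{3},C_{4})$ and equal convex weights, one sweep from the current iterate $x^{k}$ reads
\begin{equation*}
x^{k+1}=\frac{1}{2}P_{C_{2}}\left( P_{C_{1}}(x^{k})\right) +\frac{1}{2}P_{C_{4}}\left( P_{C_{3}}(x^{k})\right) ,
\end{equation*}
where $P_{C_{i}}$ denotes the orthogonal projection onto $C_{i}.$ Other choices of the strings, of the algorithmic operators applied along them, and of the weights used to combine the strings' end-points yield other members of this family of algorithms.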
Methods similar to HSM were studied by several researchers, e.g., Shor \cite{Shor1985}, Albert, Iusem and Solodov \cite{Albert1998}, Yamada and Ogura \cite{yamada2005hybrid}, Hirstoaga \cite{Hirstoaga2006}, Martinez-Yanes and Xu \cite{Martinez-Yanes2006}, Maing\'{e} \cite{Mainge2008a}, Aoyama and Kohsaka \cite{Aoyama2014}, Cegielski \cite{Cegielski2015}, Bello Cruz \cite{Cruz2017}. In contrast with these, our main contribution is in the novel quest for data-compatibility instead of asymptotic convergence properties. The paper is organized as follows. In Section \ref{sec:data-comp} we define the notion of data-compatibility of a point with the data of a constrained minimization problem, and, in particular in Subsection \ref{subsec:investigating} we discuss the problem of guaranteeing a priori data-compatibility. In Section \ref{sect:origin} we present the origin and motivation of our work. The problem of minimizing a convex function over the intersection of the fixed point sets of nonexpansive mappings is defined in Section \ref{sec:Assump present prob} along with the Hybrid Subgradient Method (HSM) for its solution. Inexact iterates are discussed in Section \ref{Inexact Iter section} followed in Section \ref{sec:convergence} by work on the main result that proves the ability of the HSM to generate a data-compatible point for the problem. We present the string-averaging variant of the HSM in Section \ref{sect:SAPv}. In Section \ref{sect:inconsistent} we consider a specific situation wherein the data of a constrained minimization problem does not necessarily obey feasibility of the constraints, i.e., does not demand that $C=\cap_{i=1}^{m}C_{i}$ is nonempty. Finally, in Section \ref{sect:hard-soft} we describe the minimization over disjoint hard and soft constraints sets problem and its relation to the work presented in this paper. \section{Data-compatibility \label{sec:data-comp}} In this section we define the notion of \textit{data-compatibility of a point with the data of a constrained minimization problem. }Let $\Omega\subseteq H$ be a given nonempty set in the Hilbert space $H$ and let there be given, for $i=1,2,\ldots,m,$ nonempty sets $C_{i}\subseteq\Omega.$ We denote by $\Gamma:=\{C_{i}\}_{i=1}^{m}$ the family of sets and refer to it as the \textquotedblleft constraints data $\Gamma$\textquotedblright. We introduce a set $\Delta$ such that $\Omega\subseteq\Delta\subseteq H$ and assume that we are given a function $f:\Delta\rightarrow R$ which is referred to as \textquotedblleft the target function $f$\textquotedblright\ or, in short, the data $f$. A pair $(\Gamma,f)$ is referred to as the \textquotedblleft data pair $(\Gamma,f)$\textquotedblright. \subsection{Data-compatibility for constraints\label{subsec:D-C-const}} Constraint modelling has a prominent role in operations research and is used in a wide range of industrial projects, such as, but by far not only, the control of an intelligent interface linking computer aided design and automatic inspection systems, the identification of manufacturing errors from inspection results and the design synthesis and analysis of mechanisms, to name a few, see, e.g., \cite{Hicks2006}. In our language, it is the modelling of real-world problems via \textit{convex feasibility problems} (CFPs). 
Given a finite family of (commonly convex) sets $\Gamma:=\{C_{i}\}_{i=1}^{m}$ the CFP is to find a point $x^{\ast}\in C:=\cap_{i=1}^{m}C_{i}.$ This approach has a long history, see, e.g., Combettes' or Bauschke and Borwein's cornerstone reviews \cite{Combettes1993}, \cite{Bauschke96}, respectively, Cegielski's book \cite{Cegielski2012Book} and Bauschke and Combettes' book \cite{BC11}. First we look at compatibility with the constraints data alone. For this we need an appropriate \textit{proximity function} that \textquotedblleft measures\textquotedblright\ how incompatible an $x\in\Omega$ is with the constraints of $\Gamma$. There is no common definition in the literature, but a proximity function $\operatorname*{Prox}_{\Gamma}:\Omega\rightarrow R_{+}$ (the nonnegative reals) should have the property that $\operatorname*{Prox}_{\Gamma}(x)=0$ if and only if $x\in C:=\cap_{i=1}^{m}C_{i}.$ Since it evaluates/measures a \textquotedblleft distance\textquotedblright\ to the constraints, the lower the value of $\operatorname*{Prox}_{\Gamma}(x)$ is, the less incompatible $x$ is with the constraints. A proximity function does not require that $C\neq\varnothing$ and it is a useful tool, particularly for situations when $C\neq\varnothing$ does not hold, or cannot be verified. An enlightening discussion of proximity functions for the convex feasibility problem can be found in Cegielski's book \cite[Subsection 1.3.4]{Cegielski2012Book}. An important and often used proximity function is \begin{equation} \text{Prox}_{\Gamma}(x):=\frac{1}{2}\sum_{i=1}^{m}w_{i}\left\Vert P_{C_{i}}(x)-x\right\Vert ^{2}, \label{eq:prox-function} \end{equation} where $P_{C_{i}}(x)$ is the orthogonal (metric) projection onto $C_{i}$ and $\{w_{i}\}_{i=1}^{m}$ is a set of weights such that $w_{i}\geq0$ and $\sum_{i=1}^{m}w_{i}=1.$ With a proximity function at hand we define compatibility with constraints as follows. \begin{definition} \label{def:epsilon-comp}$\gamma$\textbf{-compatibility with constraints data }$\Gamma$\textbf{. }Given constraints data $\Gamma,$ a proximity function $\operatorname*{Prox}_{\Gamma},$ and a real number $\gamma\geq0,$ we say that a point $x\in\Omega$ is \textquotedblleft$\gamma$-compatible with $\Gamma $\textquotedblright\ if $\operatorname*{Prox}_{\Gamma}(x)\leq\gamma.$ We define the set of all points that are $\gamma$-compatible with $\Gamma$ by $\Pi(\Gamma,\gamma):=\{x\in\Omega\mid\operatorname*{Prox}_{\Gamma}(x)\leq\gamma\}$ and call it the $(\Gamma,\gamma)$-compatibility set. \end{definition} \begin{definition} \label{def:gamma-output}\textbf{The} $\gamma$\textbf{-output of a sequence. }Given constraints data $\Gamma,$ a proximity function $\operatorname*{Prox}_{\Gamma},$ a real number $\gamma\geq0$ and a sequence $R:=\{x^{k}\}_{k=0}^{\infty}$ of points in $\Omega$, we use $O\left( \Gamma ,\gamma,R\right) $ to denote the point $x\in\Omega$ that has the following properties: $\operatorname*{Prox}_{\Gamma}(x)\leq\gamma,$ and there is a nonnegative integer $K$ such that $x^{K}=x$ and, for all nonnegative integers $k<K$, $\operatorname*{Prox}_{\Gamma}(x^{k})>\gamma$. If there is such an $x$, then it is unique. If there is no such $x$, then we say that $O\left( \Gamma,\gamma,R\right) $ is \textit{undefined}, otherwise it is \textit{defined}.
\end{definition} If $R$ is an infinite sequence generated by a certain process then $O\left( \Gamma,\gamma,R\right) $ is the \textit{output} produced by that process when we add to it instructions that make it terminate as soon as it reaches a point that is $\gamma$-compatible with $\Gamma$. The $(\Gamma,\gamma)$-compatibility set $\Pi(\Gamma,\gamma)\subseteq\Omega$ need not be nonempty for all $\gamma.$ If, however, $\Pi(\Gamma,0)\neq \varnothing$ then $\Pi(\Gamma,0)=C.$ We have used the notion of $\gamma $-compatibility with constraints earlier in our work on the superiorization method, see, e.g., \cite{Censor2019}. The unconstrained minimization of Prox$_{\Gamma}(x)$ always yields an infimum of Prox$_{\Gamma}(x)$ but this does not guarantee that $\Pi(\Gamma,\gamma)$ is nonempty. However, $\Pi (\Gamma,\gamma)$ can be nonempty even if the constraints intersection $C=\cap_{i=1}^{m}C_{i}$ is empty. In order to proceed with data-compatibility for constrained minimization in the next subsection we require that $\Pi(\Gamma,\gamma)\neq\varnothing.$ It should be emphasized that a sequence considered in Definition \ref{def:gamma-output} need not be convergent, or can be convergent but not necessarily to a point in $\Pi(\Gamma,\gamma)$, and still yield an output $O\left( \Gamma,\gamma,R\right) $ that is $\gamma$-compatible with $\Gamma.$ \subsection{Data-compatibility for constrained minimization\label{subsec:D-C-const-minim}} For a $\gamma,$ for which $\Pi(\Gamma,\gamma)\neq\varnothing,$ we define the set of minimizers of $f$ over $\Pi(\Gamma,\gamma),$ \begin{equation} S(f,\Pi(\Gamma,\gamma)):=\{x\in\Pi(\Gamma,\gamma)\mid f(x)\leq f(y),\text{ for all }y\in\Pi(\Gamma,\gamma)\}. \end{equation} If $f$ is the zero function or if $f=$constant then $S(f,\Pi(\Gamma ,\gamma))=\Pi(\Gamma,\gamma).$ We use the distance function between a point $x$ and a set $S$ defined as \begin{equation} d(x,S):=\inf\{d(x,y)\ |\ y\in S\} \end{equation} where $d(x,y)$ is the distance between points $x$ and $y.$ Next we propose our definition of compatibility with a data pair $(\Gamma,f)$. \begin{definition} \label{def:tau-el-compatibility}$(\tau,\bar{L})$\textbf{-compatibility with a data pair }$(\Gamma,f).$ Given a $\tau\geq0,$ and a real number $\bar{L}>0,$ we say that a point $x\in\Omega$ is \textquotedblleft$(\tau,\bar{L} )$-compatible with the data pair $(\Gamma,f)$\textquotedblright\ if $S(f,\Pi(\Gamma,\gamma))\neq\varnothing$ and the following two conditions hold \begin{gather} d(x,S(f,\Pi(\Gamma,\gamma)))\leq\tau\text{ \label{eq:comp-delta}}\\ \text{and }\nonumber\\ f(x)\leq f(z)+\tau\bar{L},\text{ for all }z\in S(f,\Pi(\Gamma,\gamma)), \label{eq:comp-S} \end{gather} where the constraints data $\Gamma,$ a proximity function $\operatorname*{Prox}_{\Gamma},$ a target function $f,$ and a $\gamma\geq0$ such that $\Pi(\Gamma,\gamma)\neq\varnothing,$ are given. \end{definition} This means that such a point $x$ simultaneously, (i) is \textquotedblleft close enough\textquotedblright\ to the set of minimizers of $f$ over a $(\Gamma,\gamma)$-compatibility set of the constraints, and (ii) has a function value $f(x)$ \textquotedblleft close enough\textquotedblright\ to the minimum of $f$ over that $(\Gamma,\gamma)$-compatibility set of the constraints. 
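The following elementary two-dimensional illustration, with arbitrarily chosen sets, weights and target function, is included only to fix ideas regarding Definitions \ref{def:epsilon-comp} and \ref{def:tau-el-compatibility}.

\begin{example}
Let $H=\Omega=\Delta=R^{2},$ let $C_{1}:=\{x\in R^{2}\mid x_{1}\leq0\}$ and $C_{2}:=\{x\in R^{2}\mid x_{2}\leq0\},$ and take equal weights $w_{1}=w_{2}=1/2$ in (\ref{eq:prox-function}). Writing $t_{+}:=\max\{t,0\},$ the orthogonal projections are $P_{C_{1}}(x)=(\min\{x_{1},0\},x_{2})$ and $P_{C_{2}}(x)=(x_{1},\min\{x_{2},0\}),$ so that
\begin{equation*}
\operatorname*{Prox}\nolimits_{\Gamma}(x)=\frac{1}{4}\left( (x_{1})_{+}^{2}+(x_{2})_{+}^{2}\right) .
\end{equation*}
For $\gamma=1$ the point $(1,1)$ is $\gamma$-compatible with $\Gamma$ because $\operatorname*{Prox}_{\Gamma}((1,1))=1/2\leq1,$ although it does not belong to $C=C_{1}\cap C_{2}.$ Taking the target function $f(x):=(x_{1}-3)^{2}+x_{2}^{2},$ a direct computation gives $S(f,\Pi(\Gamma,1))=\{(2,0)\}$ with minimal value $f((2,0))=1.$ The point $x=(2,0.1)$ then satisfies $d(x,S(f,\Pi(\Gamma,1)))=0.1$ and $f(x)=1.01\leq f((2,0))+(0.1)\cdot1,$ so it is $(\tau,\bar{L})$-compatible with the data pair $(\Gamma,f)$ for $\tau=0.1$ and $\bar{L}=1.$
\end{example}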
This definition does not require nonemptiness of the intersection of the constraints $C=\cap_{i=1}^{m}C_{i},$ nor does it require that the constrained minimization problem $\min\{f(x)\mid$ $x\in C\},$ has a nonempty solution set $\operatorname*{SOL}(f,C)$ which is defined by \begin{equation} \operatorname*{SOL}(f,C):=S(f,C)=\{x\in C\mid f(x)\leq f(y),\text{ for all }y\in C\}. \label{eq:SOL} \end{equation} It relies on the weaker assumptions that $\Pi(\Gamma,\gamma)\neq\varnothing$ and $S(f,\Pi(\Gamma,\gamma))\neq\varnothing.$ Therefore, use of these notions makes it possible to deviate from the nonemptiness assumptions which usually lie at the heart of asymptotic analyses in optimization theory. \begin{example} Here is a motivating example from a specific real-world application that stands to benefit from the situation described above. A fully-discretized modeling approach to intensity-modulated radiation therapy (IMRT) treatment planning, e.g., \cite{Censor2008}, leads to a very large and sparse system of linear inequalities, see, e.g., \cite[Equation 6.13]{Brooke-2019}, which is, in practice, further equipped with box constraints on the unknown vector $x$ there. The sets $C_{i}$ are half-spaces and the commonly used proximity function (\ref{eq:prox-function}) above always has a minimum value $g$ and its solution set is nonempty. This is guaranteed, e.g., by Proposition 7 of \cite{Combettes1994} with $\Omega=H$ and the box constraints providing the condition there that one of the sets must be bounded. Therefore, $\Pi (\Gamma,\gamma)$ will be nonempty for any $\gamma$ for which $g\leq\gamma.$ Often, an exogenous target function is imposed on these constraints in order to incorporate some prior knowledge. A commonly employed such function $f$ is the \textquotedblleft total variation\textquotedblright\ (TV), see, e.g., \cite{TV2011}, which smoothes the solution vector, see, e.g., \cite{DCSGX2015}. In this situation we know in advance that $S(f,\Pi(\Gamma,\gamma ))\neq\emptyset$ without having to invest preliminary computing resources in finding whether $C=\cap_{i=1}^{m}C_{i}$ is or is not empty. This example is also relevant to Theorems \ref{thm:7.1 string} and \ref{thm:7.1 Proj} below. \end{example} In the consistent case when $C\neq\varnothing$ and $\operatorname*{SOL} (f,C)\neq\varnothing$, i.e., $\gamma=0$ and $\Pi(\Gamma,0)=C,$ Definition \ref{def:tau-el-compatibility} takes the following form. \begin{definition} \label{def:consistent-data-comp}$(\tau,\bar{L})$\textbf{-compatibility with a data pair }$(\Gamma,f)$\textbf{ in the consistent case}$.$ Given consistent constraints data $\Gamma$ via $C:=\cap_{i=1}^{m}C_{i}\neq\varnothing,$ a target function $f,$ a $\tau\geq0,$ and a real number $\bar{L}>0,$ we say that a point $x\in\Omega$ is \textquotedblleft$(\tau,\bar{L})$-compatible with the consistent data pair $(\Gamma,f)$\textquotedblright\ if $\operatorname*{SOL} (f,C)\neq\varnothing$ and the following two conditions hold \begin{gather} d(x,\operatorname*{SOL}(f,C))\leq\tau\text{ \label{eq:data-comp-1}}\\ \text{and }\nonumber\\ f(x)\leq f(z)+\tau\bar{L},\text{ for all }z\in\operatorname*{SOL} (f,C).\text{\label{eq:data-comp-2}} \end{gather} \end{definition} \subsection{Data-compatibility instead of asymptotic convergence\label{subsec:investigating}} We consider in this paper the consistent case situation of Definition \ref{def:consistent-data-comp}.
Given a constrained minimization problem $\min\{f(x)\mid$ $x\in C\}$ via its data pair $(\Gamma,f),$ one traditional route in optimization theory is to design/construct an iterative process for its solution and investigate the asymptotic convergence of this process. After asymptotic convergence of the iterative process has been secured, a common approach is to use the process as a basis for creating an algorithm\footnote{An algorithm is a finite sequence of well-defined, computer-implementable instructions, typically to solve a class of problems or to perform a computation. However, it is common practice in the literature to use the term \textquotedblleft algorithm\textquotedblright\ also for iterative processes that do not include a stopping rule.}. Such an algorithm will use the iteration formulae dictated by the process but have a user-chosen stopping rule attached to it. The algorithm so obtained will stop when the stopping rule is met and output an approximate solution to the original problem. Various stopping rules are in use and we do not attempt to review them. But it is clear that the question \textquotedblleft when to stop\textquotedblright\ has a profound influence on the practical output of an algorithmic run. With roots and research in the fields of statistics and optimization, \textquotedblleft optimal stopping is concerned with the problem of choosing a time to take a given action based on sequentially observed random variables in order to maximize an expected payoff or to minimize an expected cost.\textquotedblright, see \cite{opt-stop-book-2008}. One aspect of stopping rules is the question whether or not a particular stopping rule will actually cause the iterative process to which it is attached to stop and yield an output. If, for example, the stopping rule is to stop after a specified number of iterations then the algorithmic run will undoubtedly stop when this number is reached. On the other hand, if one considers a feasibility-seeking problem to find a point in the intersection of a given finite family of sets $C:=\cap_{i=1}^{m}C_{i}$ then a feasibility-seeking iterative process that is proven to asymptotically converge to a point in $C$ can be used. However, if one uses the stopping rule that iterations should stop when an iterate $x^{k}$ is reached that belongs to the intersection $C$ then the process might never stop. \begin{problem} The question that we ask ourselves here is whether or not it is possible to a priori guarantee that, under certain conditions, $(\tau,\bar{L})$-compatibility with a data pair $(\Gamma,f)$ can be reached by an iterative process designed to solve a constrained optimization problem. \end{problem} Obviously, if the considered iterative process is proven to asymptotically converge to a point in $\operatorname*{SOL}(f,C)$ then the answer is positive and (\ref{eq:comp-delta})-(\ref{eq:comp-S}) can be used as a stopping rule that will indeed yield a data-compatible output. On the other hand, if an algorithm generates $(\tau,\bar{L})$-compatible sequences for all values of $\tau$ then all the sequences generated by it are asymptotically convergent. Nevertheless, the notion of $(\tau,\bar{L})$-compatibility makes sense even if it holds for certain (not all) values of $\tau$ and in this case it can provide useful information. To approach this, inspired by \cite[Definition 2.1]{Censor2019}, we suggest the following definition for the first iterate that satisfies both conditions (\ref{eq:data-comp-1})-(\ref{eq:data-comp-2}).
\begin{definition} \textbf{The }$(\tau,\bar{L})$\textbf{-output of a sequence}. Given constraints data $\Gamma,$ a proximity function $\operatorname*{Prox}_{\Gamma},$ a target function $f,$ a $\gamma\geq0,$ a $\tau\geq0,$ and a real number $\bar{L}>0,$ we consider an infinite sequence $\mathcal{X}=\{x^{k}\}_{k=0}^{\infty}$ of points in $\Omega.$ Let $\operatorname*{OUT}((\Gamma,f),\gamma,(\tau,\bar{L}),\mathcal{X})$ denote the point $x\in\Omega$ that fulfills (\ref{eq:data-comp-1})-(\ref{eq:data-comp-2}) and such that there exists a nonnegative integer $K$ such that $x^{K}=x$ and for all nonnegative integers $k<K$ at least one of the two conditions (\ref{eq:data-comp-1})-(\ref{eq:data-comp-2}) is violated. If there is such an $x,$ then it is unique. If there is no such $x$ then we say that $\operatorname*{OUT}((\Gamma ,f),\gamma,(\tau,\bar{L}),\mathcal{X})$ is undefined, otherwise it is defined. \end{definition} If $\mathcal{X}$ is generated by an iterative process, then $\operatorname*{OUT}((\Gamma,f),\gamma,(\tau,\bar{L}),\mathcal{X})$ is the output produced by that process when we add to it instructions that make it terminate as soon as it reaches a point that is $(\tau,\bar{L})$-compatible with a data pair $(\Gamma,f)$. General results that will characterize, or give conditions for, an iterative process to be provably data-compatible are not yet known, but to initiate research in this direction we demonstrate our approach in a specific scenario. We work in the framework of Definition \ref{def:consistent-data-comp} and study the behavior of an iterative process for convex minimization over fixed point sets of nonexpansive operators. Rather than generating infinite sequences that asymptotically converge to a point in $\operatorname*{SOL} (f,C),$ we specify conditions under which the iterative process generates solutions that are $(\tau,\bar{L})$-compatible with the data pair $(\Gamma,f).$ In the sequel (Section \ref{sect:inconsistent}) we also discuss a specific situation wherein the data pair $(\Gamma,f),$ with $\Gamma:=\{C_{i}\}_{i=1}^{m}$ a family of closed and convex subsets of $H,$ does not necessarily obey that $C=\cap_{i=1}^{m}C_{i}$ is nonempty. \section{Origin and motivation\label{sect:origin}} The origin of the idea of data-compatibility is the work done in \cite{Censor2014}. There the string-averaging projected subgradient method (SA-PSM) was developed and studied. The SA-PSM is a variant of the projected subgradient method for solving constrained minimization problems. It differs from the traditional projected subgradient method (PSM) by replacing, in each iteration, the single step of projecting onto the entire constraints set with projections onto the individual sets whose intersection is the feasible set of the minimization problem. This is an advantage in the frequently occurring situations when the individual sets are easier to project onto than their entire intersection. The main result in \cite[Theorem 9]{Censor2014} is not really an asymptotic convergence theorem. Instead, it provably guarantees the ability of the algorithm to reach an iterate of the generated sequence that is, up to predetermined bounds, close to a \textquotedblleft solution\textquotedblright. Here we formulate this notion and extend it to encompass also the case when the solution set of the problem is empty. \section{The problem and the algorithm \label{sec:Assump present prob}} Our problem, iterative process, and main result are set in a Hilbert space.
However, some intermediate auxiliary results are true, and have some independent value, in a general metric space. Let $\left( X,\rho\right) $ be a complete metric space and let $T:X\longrightarrow X$ be an operator. The fixed point set of $T$ is
\begin{equation}
\mathrm{Fix}\left( T\right) :=\left\{ x\in X\ \mid T(x)=x\right\} . \label{eq:1.9}
\end{equation}
An operator $T$ is nonexpansive if it satisfies
\begin{equation}
\rho\left( T(x),T(y)\right) \leq\rho\left( x,y\right) ,\text{ for all }x,y\in X. \label{eq:1.8}
\end{equation}
Given a nonempty set $E\subseteq X,$ define the distance of a point $x\in X$ from it by
\begin{equation}
d(x,E):=\inf\{\rho(x,y)\ |\ y\in E\}. \label{eq:1.8 m 1}
\end{equation}
We denote the ball with center at a given $x\in X$ and radius $r>0$ by $B(x,r).$ The result of applying the operator $T$ to a given initial point $x$ consecutively $n$ times is denoted by $T^{n}x,$ and $T^{0}x:=x.$ For $X=H$ a Hilbert space, we look at a constrained minimization problem of the form
\begin{equation}
\min\{f(x)\mid x\in\mathrm{Fix}\left( T\right) \} \label{prob:cons-min-1}
\end{equation}
where $f$ is a convex target function from $H$ into the reals and $T$ is a given nonexpansive operator. Solving this problem means to
\begin{equation}
\text{find a point }x\text{ in }\operatorname*{SOL}(f,\mathrm{Fix}\left( T\right) ), \label{prob:cons-min Version 2}
\end{equation}
where
\begin{equation}
\operatorname*{SOL}(f,\mathrm{Fix}\left( T\right) ):=\{x\in\mathrm{Fix}\left( T\right) \mid f(x)\leq f(y){\text{ for all }}y\in\mathrm{Fix}\left( T\right) \}. \label{Solutoin Set}
\end{equation}
For this task we employ an iterative Hybrid Subgradient Method (HSM) that uses the powers of the operator $T$ combined with subgradient steps. We denote by $\partial f(x^{k})$ the subgradient set of $f$ at $x^{k}.$
\begin{algorithm}
\label{alg:sa-psm}$\left. {}\right. $\textbf{Hybrid Subgradient Method (HSM).}

\textbf{Initialization}: Let $\{\alpha_{k}\}_{k=0}^{\infty}\subset(0,1]$ be a scalar sequence and let $x^{0}\in H$ be an arbitrary initialization vector.

\textbf{Iterative step}: Given a current iteration vector $x^{k}$ calculate the next vector as follows: Choose any $s^{k}\in\partial f(x^{k})$ and calculate
\begin{equation}
x^{k+1}=T\left( x^{k}-\alpha_{k}\frac{s^{k}}{\parallel s^{k}\parallel}\right) \text{,} \label{eq:alg-sa-psm-2}
\end{equation}
but if $s^{k}=0$ then set $\frac{\textstyle s^{k}}{\textstyle\parallel s^{k}\parallel}:=0.$
\end{algorithm}
Note that if $s^{k}=0$ then the algorithm simply calculates
\begin{equation}
x^{k+1}=T(x^{k})\text{,} \label{eq:alg-sa-psm-1}
\end{equation}
otherwise, it uses (\ref{eq:alg-sa-psm-2}). As mentioned above, our data-compatibility result, presented in Theorem \ref{thm:7.1} below, will not be about asymptotic convergence but will rather specify conditions that guarantee the existence of an iterate, of any sequence generated by the HSM of Algorithm \ref{alg:sa-psm}, that is $(\tau,\bar{L})$-compatible with the data pair $(\Gamma=\mathrm{Fix}(T),f).$ That is, for every $\tau\in(0,1)$ and any sequence $\{x^{k}\}_{k=0}^{\infty}$ generated by Algorithm \ref{alg:sa-psm}, there exists an integer $K$ so that, for all $k\geq K$:
\begin{gather}
d(x^{k},\operatorname*{SOL}(f,\mathrm{Fix}\left( T\right) ))\leq\tau{\text{ }}\\
{\text{and }}\nonumber\\
f(x^{k})\leq f(z)+\tau\bar{L}\text{ for all }z\in\operatorname*{SOL}(f,\mathrm{Fix}\left( T\right) )
\end{gather}
where $\bar{L}$ is some well-defined constant.
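For illustration only, the following minimal Python sketch implements the iterative step of the HSM, wrapped with a user-chosen stopping rule in the spirit of the discussion above. The names \texttt{T}, \texttt{subgrad}, \texttt{alpha} and \texttt{is\_compatible} are hypothetical placeholders and not part of the method itself: \texttt{T} stands for the nonexpansive operator, \texttt{subgrad} for a subgradient oracle of $f$, \texttt{alpha} for the relaxation sequence $\{\alpha_{k}\}\subset(0,1]$ (e.g., $\alpha_{k}=1/(k+1)$), and \texttt{is\_compatible} for whatever implementable check the user attaches as a surrogate of the $(\tau,\bar{L})$-compatibility conditions.
\begin{verbatim}
import numpy as np

def hsm(T, subgrad, x0, alpha, is_compatible, max_iter=10000):
    # Minimal sketch of the Hybrid Subgradient Method (HSM).
    # All arguments are user-supplied placeholders (assumptions),
    # not objects defined in the paper.
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        if is_compatible(x):              # user-chosen stopping rule
            return x, k
        s = np.asarray(subgrad(x), dtype=float)
        norm_s = np.linalg.norm(s)
        d = s / norm_s if norm_s > 0 else np.zeros_like(s)
        x = T(x - alpha(k) * d)           # the HSM iterative step
    return x, max_iter                    # stopping rule not met
\end{verbatim}
When, for example, $T$ is the metric projection onto a closed and convex set $C$, so that $\mathrm{Fix}(T)=C$, the sketch reduces to a normalized projected subgradient scheme.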
\section{Inexact iterates\label{Inexact Iter section}}
In this section we establish some properties of sequences of the form $\left\{ T^{j}y^{0}\right\} _{j=0}^{\infty},$ for any $y^{0}\in X,$ with \textquotedblleft computational errors\textquotedblright. These will serve as tools in proving the main result. In our work we need to focus on operators that have the property formulated in the next condition.
\begin{condition}
\label{cond:condition}Let $X\ $be a complete metric space, assume that $T:X\rightarrow X$ is a nonexpansive operator such that $\lim_{j\rightarrow\infty}T^{j}y^{0}$ exists for any $y^{0}\in X$.
\end{condition}
Condition \ref{cond:condition} holds for many nonexpansive mappings. In \cite{Reich2014} it was shown that for several important classes of nonexpansive mappings this property is generic (typical) in the sense of the Baire category. This means that the class of mappings is equipped with an appropriate complete metric and it is shown that there exists a subset of the space of mappings, which is a countable intersection of open everywhere dense sets, such that any mapping from this intersection possesses the desirable convergence property.
\begin{proposition}
\label{The limit is fixed point}Let $X$ be a complete metric space and let $T:X\rightarrow X$ be a nonexpansive operator.
\begin{enumerate}
\item The set $\mathrm{Fix}\left( T\right) $ is closed.
\item If $\lim_{j\rightarrow\infty}T^{j}y^{0}$ exists for some $y^{0}\in X$, then $\lim_{j\rightarrow\infty}T^{j}y^{0}$ is a fixed point of $T$ and, consequently, $\mathrm{Fix}\left( T\right) \not =\varnothing.$
\end{enumerate}
\end{proposition}
\begin{proof}
The proof is obvious.
\end{proof}
\begin{proposition}
\label{Sa Prop 1.3}Let $X\ $be a complete and compact metric space, assume that $T:X\rightarrow X$ is a nonexpansive operator and let $\varepsilon>0$. Then there exists a $\delta>0$ such that for each $x\in X$ satisfying $\rho(x,Tx)\leq\delta$ we have
\begin{equation}
d(x,\mathrm{Fix}(T))\leq\varepsilon.
\end{equation}
\end{proposition}
\begin{proof}
Assume to the contrary that for each integer $k\geq1$ there exists a point $x^{k}\in X$ such that
\begin{equation}
\rho(x^{k},Tx^{k})\leq k^{-1}\text{ and}\;d(x^{k},\mathrm{Fix}(T))>\varepsilon. \label{Sa (1.2)}
\end{equation}
Since $X$ is compact, extracting a subsequence and re-indexing, if necessary, we may assume without loss of generality that the sequence $\{x^{k}\}_{k=1}^{\infty}$, generated by the repeated use of the above negation, converges, and we denote
\begin{equation}
z:=\lim_{k\rightarrow\infty}x^{k}. \label{Sa (1.3)}
\end{equation}
Since $T$ is nonexpansive, inequality (\ref{Sa (1.2)}) and the limit (\ref{Sa (1.3)}) yield, for all integers $k\geq1$,
\begin{align}
\rho(z,Tz) & \leq\rho(z,x^{k})+\rho(x^{k},Tx^{k})+\rho(Tx^{k},Tz)\nonumber\\
& \leq2\rho(z,x^{k})+k^{-1}\rightarrow0,\text{ as }k\rightarrow\infty,
\end{align}
thus,
\begin{equation}
z\in\mathrm{Fix}(T).
\end{equation}
In view of (\ref{Sa (1.3)}), for all sufficiently large integers $k$,
\begin{equation}
d(x^{k},\mathrm{Fix}(T))\leq\rho(x^{k},z)<\varepsilon.
\end{equation}
This contradicts (\ref{Sa (1.2)}) and completes the proof.
\end{proof}
\begin{lemma}
\label{Sa Lem 1.5}Let $X\ $be a complete and compact metric space, assume that $T:X\rightarrow X$ is a nonexpansive operator for which Condition \ref{cond:condition} holds, and let $\mu>0$.
Then there exists an integer $k_{1}$ such that for each $x\in X$ there exists $j\in\{0,1,\dots,k_{1}\}$ such that
\begin{equation}
d(T^{j}x,\mathrm{Fix}(T))\leq\mu.
\end{equation}
\end{lemma}
\begin{proof}
Assume to the contrary that for each integer $k\geq1$ there exists a point $x^{k}\in X$ such that
\begin{equation}
d(T^{j}x^{k},\mathrm{Fix}(T))>\mu,\text{\ for all }j=0,1,\dots,k. \label{Sa (1.4)}
\end{equation}
Since $X$ is compact, extracting a subsequence and re-indexing, if necessary, we may assume without loss of generality that the sequence $\{x^{k}\}_{k=1}^{\infty}$, generated by the repeated use of the above negation, converges, and let
\begin{equation}
z=\lim_{k\rightarrow\infty}x^{k}. \label{Sa (1.5)}
\end{equation}
By Proposition \ref{The limit is fixed point} and Condition \ref{cond:condition}, we conclude that there exists an integer $j\geq0$ such that
\begin{equation}
d(T^{j}z,\mathrm{Fix}(T))<\mu.
\end{equation}
By (\ref{Sa (1.5)}) and since $T$ is nonexpansive, for all sufficiently large integers $k$ (in particular, for $k\geq j$),
\begin{equation}
d(T^{j}x^{k},\mathrm{Fix}(T))<\mu,
\end{equation}
contradicting (\ref{Sa (1.4)}) and thus concluding the proof.
\end{proof}
\begin{theorem}
\label{Sa Theo 1.4}Let $X\ $be a complete and compact metric space, assume that $T:X\rightarrow X$ is a nonexpansive operator for which Condition \ref{cond:condition} holds, and let $\varepsilon>0$. Then there exists a natural number $k_{0}$ such that for each $x\in X$ and each integer $k\geq k_{0}$,
\begin{equation}
\rho(T^{k}x,\lim_{i\rightarrow\infty}T^{i}x)\leq\varepsilon.
\end{equation}
\end{theorem}
\begin{proof}
By Lemma \ref{Sa Lem 1.5}, there exists an integer $k_{0}$ such that for each $x\in X$ there exists $j\in\{0,1,\dots,k_{0}\}$ so that
\begin{equation}
d(T^{j}x,\mathrm{Fix}(T))<\varepsilon/2. \label{Sa Property (a)}
\end{equation}
This implies that there exist a
\begin{equation}
j\in\{0,1,\dots,k_{0}\} \label{Sa (1.6)}
\end{equation}
and a
\begin{equation}
z\in\mathrm{Fix}(T) \label{Sa (1.7)}
\end{equation}
such that
\begin{equation}
\rho(T^{j}x,z)<\varepsilon/2. \label{Sa (1.8)}
\end{equation}
Since $T$ is nonexpansive, we get, by (\ref{Sa (1.7)}) and (\ref{Sa (1.8)}), that for all integers $k\geq j$,
\begin{equation}
\rho(T^{k}x,z)\leq\rho(T^{j}x,z)<\varepsilon/2, \label{Sa (1.9)}
\end{equation}
yielding,
\begin{equation}
\rho(\lim_{i\rightarrow\infty}T^{i}x,z)\leq\varepsilon/2.
\end{equation}
Together with (\ref{Sa (1.6)}) and (\ref{Sa (1.9)}) this implies that for all integers $k\geq k_{0}$,
\begin{equation}
\rho(T^{k}x,\lim_{i\rightarrow\infty}T^{i}x)\leq\rho(T^{k}x,z)+\rho(z,\lim_{i\rightarrow\infty}T^{i}x)\leq\varepsilon/2+\varepsilon/2=\varepsilon,
\end{equation}
completing the proof.
\end{proof}
Theorem \ref{Sa Theo 1.4} implies the following theorem:
\begin{theorem}
\label{Sa Theorem 1.6 - 20200426}Let $X\ $be a complete metric space, assume that $T:X\rightarrow X$ is a nonexpansive operator for which Condition \ref{cond:condition} holds, the closure of $T\left( X\right) $ is compact, and let $\varepsilon>0$. Then there exists a natural number $k_{0}$ such that for each $x\in X$ and each integer $k\geq k_{0}$,
\begin{equation}
\rho(T^{k}x,\lim_{i\rightarrow\infty}T^{i}x)\leq\varepsilon.
\end{equation} \end{theorem} \begin{proposition} \label{Sa Prop 1.7}Under the assumptions of Theorem \ref{Sa Theorem 1.6 - 20200426}, there exist an integer $k_{0}$ and a $\delta>0$ such that for each finite sequence $\{x^{i}\}_{i=0}^{k_{0}}\subset X$ satisfying \begin{equation} \rho(x^{i+1},Tx^{i})\leq\delta,\text{ for all }i=0,1,\dots,k_{0}-1, \label{eq:cond-prop-1.7} \end{equation} the inequality \begin{equation} d(x^{k_{0}},\mathrm{Fix}(T))\leq\varepsilon \end{equation} holds. \end{proposition} \begin{proof} Theorem \ref{Sa Theorem 1.6 - 20200426} implies that there exists an integer $k_{0}$ such that for each $x\in X$, \begin{equation} d(T^{k_{0}}x,\mathrm{Fix}(T))\leq\varepsilon/4. \label{Sa (1.10)} \end{equation} Define \begin{equation} \delta:=4^{-1}\varepsilon(k_{0})^{-1}, \label{Sa (1.11)} \end{equation} assume that $\{x^{i}\}_{i=0}^{k_{0}}\subset X$ satisfies (\ref{eq:cond-prop-1.7}) and set \begin{equation} y^{0}:=x^{0},\;y^{i+1}:=Ty^{i},\;\text{for all }i=0,1,\dots,k_{0}-1. \label{Sa (1.13)} \end{equation} In view of (\ref{Sa (1.10)}) and (\ref{Sa (1.13)}), \begin{equation} d(y^{k_{0}},\mathrm{Fix}(T))\leq\varepsilon/4. \label{Sa (1.14)} \end{equation} Next we show, by induction, that \begin{equation} \rho(y^{i},x^{i})\leq i\delta,\;\text{for all }i=0,1,\dots,k_{0}. \label{Sa (1.15)} \end{equation} Equation (\ref{Sa (1.13)}) implies that (\ref{Sa (1.15)}) holds for $i=0$. Let $i\in\{0,1,\dots,k_{0}-1\}$ for which (\ref{Sa (1.15)}) holds. By the nonexpansiveness of $T$, (\ref{eq:cond-prop-1.7}), (\ref{Sa (1.13)}) and (\ref{Sa (1.15)}), \begin{align} \rho(y^{i+1},x^{i+1}) & =\rho(Ty^{i},x^{i+1})\nonumber\\ & \leq\rho(Ty^{i},Tx^{i})+\rho(Tx^{i},x^{i+1})\leq\rho(y^{i},x^{i} )+\delta\leq(i+1)\delta, \end{align} in particular, \begin{equation} \rho(y^{k_{0}},x^{k_{0}})\leq k_{0}\delta. \label{Sa (1.16)} \end{equation} It follows now from (\ref{Sa (1.11)}), (\ref{Sa (1.14)}) and (\ref{Sa (1.16)}) that \begin{equation} d(x^{k_{0}},\mathrm{Fix}(T))\leq d(x^{k_{0}},y^{k_{0}})+d(y^{k_{0} },\mathrm{Fix}(T))\leq\varepsilon/4+\varepsilon/4, \end{equation} which concludes the proof. \end{proof} \begin{theorem} \label{Sa Theo 1.6}Under the assumptions of Theorem \ref{Sa Theorem 1.6 - 20200426}, there exist an integer $k_{0}$ and a $\delta>0$ such that for each sequence $\{x^{i}\}_{i=0}^{\infty}\subset X$ satisfying $\rho(x^{i+1},Tx^{i})\leq\delta$, for all $i=0,1,\dots,$ the inequality \begin{equation} d(x^{i},\mathrm{Fix}(T))\leq\varepsilon \end{equation} holds for all integers $i\geq k_{0}.$ \end{theorem} \begin{proof} The proof follows from Proposition \ref{Sa Prop 1.7}. \end{proof} Theorem \ref{Sa Theo 1.6} implies the next result. \begin{theorem} \label{thm:thm5.1}Under the assumptions of Theorem \ref{Sa Theorem 1.6 - 20200426}, if we take a sequence \begin{equation} \{\mu_{k}\}_{k=1}^{\infty}\subset(0,\infty),\text{ }\lim_{k\rightarrow\infty }\mu_{k}=0, \label{eq:5.3} \end{equation} then there exists an integer $k_{1}>0$ such that for each sequence $\{x^{i}\}_{i=0}^{\infty}\subset X$ satisfying \begin{equation} \rho(x^{i+1},Tx^{i})\leq\mu_{i+1},\;i=0,1,\dots, \label{eq:5.3 second} \end{equation} the inequality $d(x^{k},\mathrm{Fix}(T))\leq\varepsilon$ holds for all integers $k\geq k_{1}$. 
\end{theorem}
\section{Data-compatibility of the hybrid subgradient method \label{sec:convergence}}
Our main data-compatibility result in Theorem \ref{thm:7.1} below is obtained under the following assumptions: $T:H\rightarrow H$ is a nonexpansive operator for which Condition \ref{cond:condition} holds, the closure of $T\left( H\right) $ (which is denoted by $cl\left( T\left( H\right) \right) $) is compact, and $f:H\rightarrow R$ is a convex function which is Lipschitz on any bounded set. Since $cl\left( T\left( H\right) \right) $ is compact, there exists a ball $B(0,M),$ with $M>0,$ such that
\begin{equation}
\mathrm{Fix}\left( T\right) \subset cl\left( T\left( H\right) \right) \subset B(0,M), \label{eq 52 N}
\end{equation}
which means that the set $\mathrm{Fix}(T)$ is bounded. Proposition \ref{The limit is fixed point} yields that $\mathrm{Fix}(T)$ is closed. Since $\mathrm{Fix}(T)$ is a closed subset of the compact set $cl\left( T\left( H\right) \right) $, it is easy to see that $\mathrm{Fix}(T)$ is compact as well. By Condition \ref{cond:condition} and Proposition \ref{The limit is fixed point} we get that $\mathrm{Fix}(T)$ is nonempty. Being Lipschitz on any bounded set, $f$ is continuous on the space $H$ and, in particular, on the subset $\mathrm{Fix}(T)$. The continuity of $f$ on $\mathrm{Fix}(T)$ and the compactness of $\mathrm{Fix}(T)$ imply that there exists some point $x\in\operatorname*{SOL}(f,\mathrm{Fix}(T)),$ i.e., $\operatorname*{SOL}(f,\mathrm{Fix}(T))\neq\varnothing$. Since $f$ is Lipschitz on any bounded set, there exists a number $\bar{L}>1$ such that
\begin{equation}
|f(z^{1})-f(z^{2})|\leq\bar{L}||z^{1}-z^{2}||,{\text{ for all }}z^{1},z^{2}\in B(0,3M+2). \label{eq 53 N}
\end{equation}
We need the following lemma to prove the main result.
\begin{lemma}
{\label{lem-8.3}}Assume $T:H\rightarrow H$ is a nonexpansive operator for which Condition \ref{cond:condition} holds and $cl\left( T\left( H\right) \right) $ is compact. Let $f:H\rightarrow R$ be a convex function which is Lipschitz on any bounded set. Let $\bar{x}\in\operatorname*{SOL}(f,\mathrm{Fix}(T))$ and let $\Delta\in(0,1],$ $\alpha>0$ and $x\in cl\left( T\left( H\right) \right) $ satisfy
\begin{equation}
\left\Vert x\right\Vert \leq3M+2,\;f(x)>f(\bar{x})+\Delta, \label{eq:8.3}
\end{equation}
where $M$ is as in (\ref{eq 52 N}). Further, let $v\in\partial f(x).$ Then $v\not =0$ and
\begin{equation}
y:=T\left( x-\alpha||v||^{-1}v\right)
\end{equation}
satisfies
\begin{equation}
\Vert y-\bar{x}\Vert^{2}\leq\Vert x-\bar{x}\Vert^{2}-2\alpha(4\bar{L})^{-1}\Delta+\alpha^{2},
\end{equation}
where $\bar{L}$ is as in (\ref{eq 53 N}). Moreover,
\begin{equation}
d(y,\operatorname*{SOL}(f,\mathrm{Fix}(T)))^{2}\leq d(x,\operatorname*{SOL}(f,\mathrm{Fix}(T)))^{2}-2\alpha(4\bar{L})^{-1}\Delta+\alpha^{2}.
\end{equation}
\end{lemma}
\begin{proof}
From (\ref{eq:8.3}), $v\neq0$. For $\bar{x}\in\operatorname*{SOL}(f,\mathrm{Fix}(T))$, we have, by (\ref{eq 53 N}) and (\ref{eq 52 N}), that for each $z\in B(\bar{x},4^{-1}\Delta\bar{L}^{-1})$,
\begin{equation}
f(z)\leq f(\bar{x})+\bar{L}||z-\bar{x}||\leq f(\bar{x})+4^{-1}\Delta.
\end{equation}
Therefore, (\ref{eq:8.3}) and $v\in\partial f(x)$ imply that
\begin{equation}
\left\langle v,z-x\right\rangle \leq f(z)-f(x)\leq-(3/4)\Delta,\text{ for all }z\in B(\bar{x},4^{-1}\Delta\bar{L}^{-1}).
\end{equation}
From this inequality we deduce that
\begin{equation}
\left\langle ||v||^{-1}v,z-x\right\rangle <0,{\text{ for all }}z\in B(\bar{x},4^{-1}\Delta\bar{L}^{-1}),
\end{equation}
or, setting $\bar{z}:=\bar{x}+4^{-1}\bar{L}^{-1}\Delta\Vert v\Vert^{-1}v,$ that
\begin{equation}
0>\left\langle ||v||^{-1}v,\bar{z}-x\right\rangle =\left\langle ||v||^{-1}v,\bar{x}+4^{-1}\bar{L}^{-1}\Delta\Vert v\Vert^{-1}v-x\right\rangle .
\end{equation}
This leads to
\begin{equation}
\left\langle ||v||^{-1}v,\bar{x}-x\right\rangle <-4^{-1}\bar{L}^{-1}\Delta.
\end{equation}
Putting $\tilde{y}:=x-\alpha\Vert v\Vert^{-1}v,$ we arrive at
\begin{align}
\Vert\tilde{y}-\bar{x}\Vert^{2} & =\Vert x-\alpha\Vert v\Vert^{-1}v-\bar{x}\Vert^{2}\nonumber\\
& =\Vert x-\bar{x}\Vert^{2}-2\left\langle x-\bar{x},\alpha\Vert v\Vert^{-1}v\right\rangle +\alpha^{2}\nonumber\\
& \leq\Vert x-\bar{x}\Vert^{2}-2\alpha(4\bar{L})^{-1}\Delta+\alpha^{2}.
\end{align}
From all the above we obtain
\begin{align}
\Vert y-\bar{x}\Vert^{2} & =\Vert T\tilde{y}-\bar{x}\Vert^{2}\leq\Vert\tilde{y}-\bar{x}\Vert^{2}\nonumber\\
& \leq\Vert x-\bar{x}\Vert^{2}-2\alpha(4\bar{L})^{-1}\Delta+\alpha^{2},
\end{align}
which completes the proof.
\end{proof}
Now we present the main theorem, showing that sequences generated by the hybrid subgradient method (HSM) have a $(\tau,\bar{L})$-output, i.e., contain an iterate that is data-compatible. The condition (\ref{eq:7.3}) in this and subsequent theorems is common in many results in optimization theory, see, e.g., Theorem 2.2 and many subsequent theorems in N. Shor's classic book \cite{Shor1985}.
\begin{theorem}
{\label{thm:7.1}} Assume $T:H\rightarrow H$ is a nonexpansive operator for which Condition \ref{cond:condition} holds and $cl\left( T\left( H\right) \right) $ is compact. Let $f:H\rightarrow R$ be a convex function which is Lipschitz on any bounded set. Let
\begin{equation}
\{\alpha_{k}\}_{k=0}^{\infty}\subset(0,1],\text{ be a sequence such that }\lim_{k\rightarrow\infty}\alpha_{k}=0\text{ and}\;\sum_{k=0}^{\infty}\alpha_{k}=\infty, \label{eq:7.3}
\end{equation}
let $\bar{L}$ be fixed, as defined by (\ref{eq 53 N}), and let $\tau\in(0,1)$. Then there exists an integer $K$ such that for any sequence $\{x^{k}\}_{k=1}^{\infty}\subset cl\left( T\left( H\right) \right) $, generated by Algorithm \ref{alg:sa-psm}, the inequalities
\begin{gather}
d(x^{k},\operatorname*{SOL}(f,\mathrm{Fix}\left( T\right) ))\leq\tau{\text{ }}\\
{\text{and }}\nonumber\\
f(x^{k})\leq f(z)+\tau\bar{L}\text{ for all }z\in\operatorname*{SOL}(f,\mathrm{Fix}\left( T\right) )
\end{gather}
hold for all integers $k\geq K$.
\end{theorem}
\begin{proof}
Fix an $\bar{x}\in\operatorname*{SOL}(f,\mathrm{Fix}(T)).$ It is not difficult to see that there exists a number $\tau_{0}\in(0,\tau/4)$ such that for each $x\in cl\left( T\left( H\right) \right) $ satisfying $d(x,\mathrm{Fix}(T))\leq\tau_{0}$ and $f(x)\leq f(\bar{x})+\tau_{0}$ we have
\begin{equation}
d(x,\operatorname*{SOL}(f,\mathrm{Fix}(T)))\leq\tau/4. \label{eq:P2}
\end{equation}
Since $\{x^{k}\}_{k=1}^{\infty}$ is generated by Algorithm \ref{alg:sa-psm} we know, from (\ref{eq:alg-sa-psm-1}), (\ref{eq:alg-sa-psm-2}) and (\ref{eq:1.8}), that
\begin{equation}
\left\Vert x^{k}-Tx^{k-1}\right\Vert \leq\alpha_{k-1},\text{ for all }k\geq1. \label{eq:8.23}
\end{equation}
Thus, by Theorem \ref{thm:thm5.1} and (\ref{eq:7.3}), there exists an integer $n_{1}$ such that
\begin{equation}
d(x^{k},\mathrm{Fix}(T))\leq\tau_{0},\text{ for all }k\geq n_{1}.
\label{eq:8.24} \end{equation} This, along with (\ref{eq 52 N}), guarantees that \begin{equation} \Vert x^{k}\Vert\leq M+1,{\text{ for all }}k\geq n_{1}. \label{eq:8.25} \end{equation} Choose a positive $\tau_{1}$ for which $\tau_{1}<(8\bar{L})^{-1}\tau_{0},$ by (\ref{eq:7.3}) there is an integer $n_{2}>n_{1}$ such that \begin{equation} \alpha_{k}\leq\tau_{1}(32)^{-1},{\text{ for all }}k>n_{2}, \label{eq:8.16} \end{equation} and so, there is an integer $n_{0}>n_{2}+4$ such that \begin{equation} \sum_{k=n_{2}}^{n_{0}-1}\alpha_{k}>8(2M+1)^{2}\bar{L}\tau_{0}^{-1}. \label{eq:8.17} \end{equation} We show now that there exists an integer $p\in\lbrack n_{2}+1,n_{0}]$ such that $f(x^{p})\leq f(\bar{x})+\tau_{0}$. Assuming the contrary means that for all $k\in\lbrack n_{2}+1,n_{0}]$, \begin{equation} f(x^{k})>f(\bar{x})+\tau_{0}. \label{eq:8.26} \end{equation} By (\ref{eq:8.26}), (\ref{eq:7.3}), (\ref{eq:8.25}) and using Lemma \ref{lem-8.3}, with $\Delta=\tau_{0}$, $\alpha=\alpha_{k}$, $x=x^{k}$, $y=x^{k+1}$, $v=s^{k}$, we get, for all $k\in\lbrack n_{2}+1,n_{0}]$, \begin{align} & d(x^{k+1},\operatorname*{SOL}(f,\mathrm{Fix}(T)))^{2}\nonumber\\ & \leq d(x^{k},\operatorname*{SOL}(f,\mathrm{Fix}(T)))^{2}-2\alpha_{k} (4\bar{L})^{-1}\tau_{0}+\alpha_{k}^{2}. \end{align} According to the choice of $\tau_{1}$ and by (\ref{eq:8.16}) this implies that for all $k\in\lbrack n_{2}+1,n_{0}]$, \begin{align} & d(x^{k},\operatorname*{SOL}(f,\mathrm{Fix}(T)))^{2}-d(x^{k+1} ,\operatorname*{SOL}(f,\mathrm{Fix}(T)))^{2}\nonumber\\ & \geq\alpha_{k}[(2\bar{L})^{-1}\tau_{0}-\alpha_{k}]\nonumber\\ & \geq\alpha_{k}(4\bar{L})^{-1}\tau_{0}, \end{align} which, together with (\ref{eq:8.25}) and (\ref{eq 52 N}), gives \begin{align} & (2M+1)^{2}\nonumber\\ & \geq d(x^{n_{2}+1},\operatorname*{SOL}(f,\mathrm{Fix}(T)))^{2}\nonumber\\ & \geq\sum_{k=n_{2}+1}^{n_{0}}\left( d(x^{k},\operatorname*{SOL} (f,\mathrm{Fix}(T)))^{2}-d(x^{k+1},\operatorname*{SOL}(f,\mathrm{Fix} (T)))^{2}\right) \nonumber\\ & \geq(4\bar{L})^{-1}\tau_{0}\sum_{k=n_{2}+1}^{n_{0}}\alpha_{k}, \end{align} and \begin{equation} \sum_{k=n_{2}+1}^{n_{0}}\alpha_{k}\leq(2M+1)^{2}4\bar{L}\tau_{0}^{-1}. \end{equation} This contradicts (\ref{eq:8.17}), proving that there is an integer $p\in\lbrack n_{2}+1,n_{0}]$ such that $f(x^{p})\leq f(\bar{x})+\tau_{0}$. Thus, by (\ref{eq:8.24}) and (\ref{eq:P2}), \begin{equation} d(x^{p},\operatorname*{SOL}(f,\mathrm{Fix}(T)))\leq\tau/4. \end{equation} We show that for all $k\geq p$, $d(x^{k},\operatorname*{SOL}(f,\mathrm{Fix} (T)))\leq\tau$. Assuming the contrary, \begin{equation} \text{there exists a }q>p\text{ such that }d(x^{q},\operatorname*{SOL} (f,\mathrm{Fix}(T)))>\tau. \label{eq:8.31} \end{equation} We may assume, without loss of generality, that \begin{equation} d(x^{k},\operatorname*{SOL}(f,\mathrm{Fix}(T)))\leq\tau,{\text{ for all } }p\leq k<q. \label{eq:8.32} \end{equation} One of the following two cases must hold: (i) $f(x^{q-1})\leq f(\bar{x} )+\tau_{0},$ or (ii) $f(x^{q-1})>f(\bar{x})+\tau_{0}.$ In case (i), since $p\in\lbrack n_{2}+1,n_{0}],$ (\ref{eq:8.24}), (\ref{eq:8.25}) and (\ref{eq:P2}) show that \begin{equation} d(x^{q-1},\operatorname*{SOL}(f,\mathrm{Fix}(T)))\leq\tau/4. 
\end{equation} Thus, there is a point $z\in\operatorname*{SOL}(f,\mathrm{Fix}(T))$ such that $\left\Vert x^{q-1}-z\right\Vert <\tau/3.$ Using this fact and (\ref{eq:8.23} ), (\ref{eq:1.8}), (\ref{eq:1.9}) and (\ref{eq:8.16}), yields \begin{align} \left\Vert x^{q}-z\right\Vert & \leq\left\Vert x^{q}-Tx^{q-1}\right\Vert +\left\Vert Tx^{q-1}-z\right\Vert \nonumber\\ & \leq\alpha_{q-1}+\left\Vert x^{q-1}-z\right\Vert \leq\tau/4+\tau/3, \end{align} proving that $d(x^{q},\operatorname*{SOL}(f,\mathrm{Fix}(T)))\leq\tau.$ This contradicts (\ref{eq:8.31}) and implies that case (ii) must hold, namely that $f(x^{q-1})>f(\bar{x})+\tau_{0}$. This, along with (\ref{eq:8.25}), (\ref{eq:8.16}), the choice of $\tau_{1}$, (\ref{eq:8.32}) and Lemma \ref{lem-8.3}, with $\Delta=\tau_{0}$, $\alpha=\alpha_{q-1}$, $x=x^{q-1}$, $y=x^{q}$, shows that \begin{align} & d(x^{q},\operatorname*{SOL}(f,\mathrm{Fix}(T)))^{2}\nonumber\\ & \leq d(x^{q-1},\operatorname*{SOL}(f,\mathrm{Fix}(T)))^{2}-2\alpha _{q-1}(4\bar{L})^{-1}\tau_{0}+\alpha_{q-1}^{2}\nonumber\\ & \leq d(x^{q-1},\operatorname*{SOL}(f,\mathrm{Fix}(T)))^{2}-\alpha _{q-1}((2\bar{L})^{-1}\tau_{0}-\alpha_{q-1})\nonumber\\ & \leq d(x^{q-1},\operatorname*{SOL}(f,\mathrm{Fix}(T)))^{2}\leq\tau^{2}, \end{align} namely, that $d(x^{q},\operatorname*{SOL}(f,\mathrm{Fix}(T)))\leq\tau.$ This contradicts (\ref{eq:8.31}), proving that, for all $k\geq p$, $d(x^{k} ,\operatorname*{SOL}(f,\mathrm{Fix}(T)))\leq\tau$. Together with (\ref{eq 52 N}) and (\ref{eq 53 N}) this implies that, for all $k\geq n_{0}$, \begin{equation} f(x^{k})\leq f(z)+\tau\bar{L}\text{ for all }z\in\operatorname*{SOL} (f,\mathrm{Fix}\left( T\right) ), \end{equation} and the proof is complete. \end{proof} \section{\textbf{The string-averaging hybrid subgradient }\newline \textbf{method }(SA-HSM)\textbf{ }\label{sect:SAPv}} Assume that $O_{1},O_{2},\dots,O_{m}$ are nonexpansive operators mapping $H$ into $H,$ for which \begin{equation} \mathcal{F}:= {\textstyle\bigcap\limits_{i=1}^{m}} \mathrm{Fix}\left( O_{i}\right) \neq\varnothing\label{Non Empty Inter} \end{equation} Let $f:H\rightarrow R$ be a convex function and Lipschitz on any bounded set. We are interested in solving the following problem by using a string-averaging algorithmic scheme. \begin{equation} \min\{f(x)\mid x\in\mathcal{F}\} \label{Prov_inter 1} \end{equation} whose solution means to \begin{equation} \text{find a point }x\text{ in }\operatorname*{SOL}(f,\mathcal{F}), \label{Prov_inter 2} \end{equation} where \begin{equation} \operatorname*{SOL}(f,\mathcal{F}):=\{x\in\mathcal{F}\mid f(x)\leq f(y),{\text{ for all }}y\in\mathcal{F}\}. \label{Sol_Def_Inter} \end{equation} For $t=1,2,\dots,\Theta,$ let the \textit{string} $I_{t}$ be an ordered subset of $\{1,2,\dots,m\}$ of the form \begin{equation} I_{t}=(i_{1}^{t},i_{2}^{t},\dots,i_{m(t)}^{t}), \label{block} \end{equation} with $m(t)$ the number of elements in $I_{t}.$ For any $x\in H,$ the product of operators along a string $I_{t},$ $t=1,2,\dots,\Theta,$ is \begin{equation} F_{t}(x):=O_{i_{m(t)}^{t}}\cdots O_{i_{2}^{t}}O_{i_{1}^{t}}(x), \label{notation 1} \end{equation} and is called a \textquotedblleft string operator\textquotedblright. We deal with string-averaging of fixed strings and fixed weights. To this end we assume that \begin{equation} \{1,2,\dots,m\}\subset {\displaystyle\bigcup\limits_{t=1}^{\Theta}} I_{t} \label{eq:contain} \end{equation} and that a system of nonnegative weights $w_{1,}w_{2},\cdots,w_{\Theta}$ such that $\sum_{t=1}^{\Theta}w_{t}=1$ is fixed and given. 
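For concreteness, the following minimal Python sketch, assuming user-supplied operators $O_{1},\dots,O_{m}$ given as callables (the names \texttt{ops}, \texttt{strings} and \texttt{weights} are hypothetical and only for illustration), assembles a string operator of the form (\ref{notation 1}) and the weighted average of string operators that is defined next.
\begin{verbatim}
def string_operator(ops, string):
    # Compose the operators along a string I_t = (i_1, ..., i_m(t)),
    # applying O_{i_1} first and O_{i_m(t)} last; indices are 1-based.
    def F_t(x):
        for i in string:
            x = ops[i - 1](x)
        return x
    return F_t

def string_average(ops, strings, weights):
    # Weighted average of the string operators, with fixed nonnegative
    # weights summing to one.
    F = [string_operator(ops, I_t) for I_t in strings]
    def O(x):
        return sum(w * F_t(x) for w, F_t in zip(weights, F))
    return O
\end{verbatim}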
We define the operator
\begin{equation}
O(x):=\sum_{t=1}^{\Theta}w_{t}F_{t}(x). \label{eq:sum}
\end{equation}
This operator will be called \textquotedblleft fit\textquotedblright\ if the strings that define it obey (\ref{eq:contain}). We will need the following condition.
\begin{condition}
\label{Strict inequality}For all $i=1,2,\ldots,m,$ the following holds: For any $y\in H\backslash\mathrm{Fix}\left( O_{i}\right) $ there exists an $x\in\mathcal{F}={\textstyle\bigcap\limits_{i=1}^{m}}\mathrm{Fix}\left( O_{i}\right) $ such that $\left\Vert O_{i}\left( y\right) -x\right\Vert <\left\Vert y-x\right\Vert .$
\end{condition}
\begin{proposition}
\label{FixO_Eq_FixInt}Let $O_{1},O_{2},\dots,O_{m}$ be nonexpansive operators $O_{i}:H\rightarrow H$, and let $O=\sum_{t=1}^{\Theta}w_{t}F_{t}$ be as in (\ref{eq:sum}). If (\ref{eq:contain}) and Condition \ref{Strict inequality} hold, then $\mathrm{Fix}\left( O\right) =\mathcal{F}.$
\end{proposition}
\begin{proof}
Clearly, $\mathcal{F}\subset\mathrm{Fix}\left( O\right) ;$ therefore, it is sufficient to prove that $\mathrm{Fix}\left( O\right) \subset\mathcal{F}.$ Assume by negation that there exists a $\widehat{y}\in\mathrm{Fix}\left( O\right) $ such that $\widehat{y}\notin\mathcal{F}.$ This means that there is an index $\widehat{i},$ $1\leq\widehat{i}\leq m,$ such that $\widehat{y}\notin\mathrm{Fix}\left( O_{\widehat{i}}\right) .$ Condition \ref{Strict inequality} implies that there exists an $\overline{x}\in\mathcal{F}$ that satisfies $\left\Vert O_{\widehat{i}}\left( \widehat{y}\right) -\overline{x}\right\Vert <\left\Vert \widehat{y}-\overline{x}\right\Vert $. From this inequality, since $O_{1},O_{2},\dots,O_{m}$ are nonexpansive operators, it is easy to see that
\begin{equation}
\left\Vert O\left( \widehat{y}\right) -\overline{x}\right\Vert =\left\Vert \sum_{t=1}^{\Theta}w_{t}F_{t}\left( \widehat{y}\right) -\overline{x}\right\Vert \leq\sum_{t=1}^{\Theta}w_{t}\left\Vert F_{t}\left( \widehat{y}\right) -\overline{x}\right\Vert <\left\Vert \widehat{y}-\overline{x}\right\Vert ,
\end{equation}
and, consequently, that $\widehat{y}\notin\mathrm{Fix}\left( O\right) .$ This contradicts the negation assumption made above and completes the proof.
\end{proof}
We propose the following string-averaging hybrid subgradient method (SA-HSM) for solving the problem (\ref{Prov_inter 1}).
\begin{algorithm}
\label{String-alg:sa-psm}\textbf{String-Averaging Hybrid Subgradient Method (SA-HSM).}

\textbf{Initialization}: Let $\{\alpha_{k}\}_{k=0}^{\infty}\subset(0,1]$ be a scalar sequence and let $x^{0}\in H$ be an arbitrary initialization vector.

\textbf{Iterative step}: Given a current iteration vector $x^{k}$ calculate the next vector as follows: Choose any $s^{k}\in\partial f(x^{k})$ and calculate
\begin{equation}
x^{k+1}=O\left( x^{k}-\alpha_{k}\frac{s^{k}}{\parallel s^{k}\parallel}\right) \text{,} \label{eq:alg-sa-psm-2_string}
\end{equation}
but if $s^{k}=0$ then set $\frac{\textstyle s^{k}}{\textstyle\parallel s^{k}\parallel}:=0.$
\end{algorithm}
The next theorem shows that sequences generated by the string-averaging hybrid subgradient method (SA-HSM) have a $(\tau,\bar{L})$-output, i.e., contain an iterate that is data-compatible.
\begin{theorem}
{\label{thm:7.1 string}}Let $O_{1},O_{2},\dots,O_{m}$ be nonexpansive operators mapping $H$ into $H,$ such that $\mathcal{F}={\textstyle\bigcap\limits_{i=1}^{m}}\mathrm{Fix}\left( O_{i}\right) \neq\varnothing$ and $cl\left( O_{i}\left( H\right) \right) $ is compact for all $i=1,2,\ldots,m.$ Let $O=\sum_{t=1}^{\Theta}w_{t}F_{t}(x)$ be as in (\ref{eq:sum}) and assume that $\lim_{j\rightarrow\infty}O^{j}y^{0}$ exists for any $y^{0}\in H$. Let $f:H\rightarrow R$ be a convex function which is Lipschitz on any bounded set. Let $\{\alpha_{k}\}_{k=0}^{\infty}\subset(0,1]$ be a sequence such that
\begin{equation}
\lim_{k\rightarrow\infty}\alpha_{k}=0\text{ and}\;\sum_{k=0}^{\infty}\alpha_{k}=\infty, \label{eq:7.3_String}
\end{equation}
let $\bar{L}$ be fixed, as defined by (\ref{eq 53 N}), and let $\tau\in(0,1)$. If (\ref{eq:contain}) and Condition \ref{Strict inequality} hold then there exists an integer $K$ such that for any sequence $\{x^{k}\}_{k=0}^{\infty}\subset H$, generated by Algorithm \ref{String-alg:sa-psm}, the inequalities
\begin{gather}
d(x^{k},\operatorname*{SOL}(f,\mathcal{F}))\leq\tau{\text{ }}\\
{\text{and }}\nonumber\\
f(x^{k})\leq f(z)+\tau\bar{L}\text{ for all }z\in\operatorname*{SOL}(f,\mathcal{F})
\end{gather}
hold for all integers $k\geq K$.
\end{theorem}
\begin{proof}
Since $O_{1},O_{2},\dots,O_{m}$ are nonexpansive and $O$ is a convex combination of compositions of these operators, it follows that $O$ is nonexpansive. Moreover, Condition \ref{Strict inequality} and Proposition \ref{FixO_Eq_FixInt} ensure that $\mathrm{Fix}\left( O\right) =\mathcal{F}.$ This, along with the other assumptions of the theorem, enables the use of Theorem \ref{thm:7.1} to complete the proof.
\end{proof}
\section{Data-compatibility for constrained minimization with inconsistent constraints\label{sect:inconsistent}}
In this section we consider a data pair $(\Gamma,f)$ in which $\Gamma:=\{C_{i}\}_{i=1}^{m}$ is a family of closed and convex subsets of $H$ that does not necessarily satisfy $C:=\cap_{i=1}^{m}C_{i}\neq\varnothing$. Let $\{w_{i}\}_{i=1}^{m}$ be a set of weights such that $w_{i}\geq0$ and $\sum_{i=1}^{m}w_{i}=1.$ It is well-known that the operator $P_{w}:=\sum_{i=1}^{m}w_{i}P_{C_{i}}$ is nonexpansive and satisfies
\begin{equation}
\mathrm{Fix}\left( P_{w}\right) =\operatorname*{Arg}\min\{\operatorname*{Prox}_{\Gamma}(x)\mid x\in H\},
\end{equation}
where $\operatorname*{Prox}_{\Gamma}(x):=\frac{1}{2}\sum_{i=1}^{m}w_{i}\left\Vert P_{C_{i}}(x)-x\right\Vert ^{2},$ see the succinct \cite[Subsection 5.4]{Cegielski2012Book} on the simultaneous projection method. If $C\neq\varnothing$ then $\mathrm{Fix}\left( P_{w}\right) =C.$ If, however, $C=\varnothing$ then $\mathrm{Fix}\left( P_{w}\right) =\Pi(\Gamma,\gamma)$ for $\gamma=\min\{\operatorname*{Prox}_{\Gamma}(x)\mid x\in H\}$ and is nonempty. Moreover, for any $y^{0}\in H$ the limit $\lim_{k\rightarrow\infty}(P_{w})^{k}y^{0}$ exists and belongs to $\mathrm{Fix}\left( P_{w}\right) $. Consider the following algorithm.
\begin{algorithm}
\label{alg:Sim Proj sa-psm}\textbf{Simultaneous Projection Hybrid Subgradient Method (SP-HSM).}

\textbf{Initialization}: Let $\{\alpha_{k}\}_{k=0}^{\infty}\subset(0,1]$ be a scalar sequence and let $x^{0}\in H$ be an arbitrary initialization vector.
\textbf{Iterative step}: Given a current iteration vector $x^{k}$ calculate the next vector as follows: Choose any $s^{k}\in\partial f(x^{k})$ and calculate
\begin{equation}
x^{k+1}=P_{w}\left( x^{k}-\alpha_{k}\frac{s^{k}}{\parallel s^{k}\parallel}\right) \text{,}
\end{equation}
but if $s^{k}=0$ then set $\frac{\textstyle s^{k}}{\textstyle\parallel s^{k}\parallel}:=0.$
\end{algorithm}
From the above assumptions and discussion we obtain the following theorem as a consequence of Theorem \ref{thm:7.1}. It does not assume consistency of the underlying constraints $\Gamma=\{C_{i}\}_{i=1}^{m},$ and shows that sequences generated by the simultaneous projection hybrid subgradient method (SP-HSM) have a $(\tau,\bar{L})$-output, i.e., contain an iterate that is data-compatible. Observe in the next theorem that $\overline{L}$ is a fixed constant that obeys (\ref{eq 53 N}) and that the parameter $\tau$ must obey $\tau\in(0,1),$ so, once $\overline{L}$ is fixed, the \textquotedblleft user\textquotedblright\ can choose a small $\tau$ so that $\tau\overline{L}$ is as small as desired.
\begin{theorem}
\label{thm:7.1 Proj}Assume that $cl\left( P_{w}\left( H\right) \right) $ is compact. Let $f:H\rightarrow R$ be a convex function which is Lipschitz on any bounded set, let
\begin{equation}
\{\alpha_{k}\}_{k=0}^{\infty}\subset(0,1],\text{ be a sequence such that }\lim_{k\rightarrow\infty}\alpha_{k}=0\text{ and}\;\sum_{k=0}^{\infty}\alpha_{k}=\infty,
\end{equation}
let $\bar{L}$ be fixed, as defined by (\ref{eq 53 N}), and let $\tau\in(0,1)$. Then there exists an integer $K$ such that, for any sequence $\{x^{k}\}_{k=0}^{\infty}\subset H$ generated by Algorithm \ref{alg:Sim Proj sa-psm}, the inequalities
\begin{gather}
d(x^{k},\operatorname*{SOL}(f,\mathrm{Fix}\left( P_{w}\right) ))\leq\tau{\text{ }}\\
{\text{and }}\nonumber\\
f(x^{k})\leq f(z)+\tau\bar{L}\text{ for all }z\in\operatorname*{SOL}(f,\mathrm{Fix}\left( P_{w}\right) )
\end{gather}
hold for all integers $k\geq K$.
\end{theorem}
\section{Minimization over disjoint hard and soft constraints sets\label{sect:hard-soft}}
Here we describe the problem of minimization over disjoint hard and soft constraints sets and its relation to our work in this paper. The issue of hard and soft constraints often arises in convex feasibility problems (CFPs), mentioned in Subsection \ref{subsec:D-C-const} above, see, e.g., \cite{Combettes1999}. Many studies consider the often-occurring situation in which the CFP is not consistent, i.e., the intersection of all the constraint sets is empty, see, e.g., \cite{BauschkeBS1997} and the recent review \cite{CZ2018}. In that case, hard constraints are those which definitely must be satisfied, while soft constraints are those we would like to be satisfied, but not at the expense of the others. Let $\Gamma_{1}:=\{C_{i}\}_{i=1}^{m_{1}}$ and $\Gamma_{2}:=\{Q_{i}\}_{i=1}^{m_{2}}$ be two finite families of constraints such that $C:=\cap_{i=1}^{m_{1}}C_{i}\neq\varnothing$ and $Q:=\cap_{i=1}^{m_{2}}Q_{i}\neq\varnothing$ but $C\cap Q=\varnothing$ and let $C$ and $Q$ be the hard and soft constraints, respectively. In view of the inability to solve the CFP given by $C\cap Q,$ it makes sense to look for a point that will solve the hard/soft-CFP (h/s-CFP): Find a point in $C$ that is closest to the set $Q,$ according to some metric, say the Euclidean distance.
This can be done, in principle, by using the well-known 1959 Cheney-Goldstein theorem \cite{Cheney1959} that specifies conditions under which alternating metric projections onto two sets are guaranteed to converge to the best approximation pair. A \textit{best approximation pair }relative to two closed convex sets $C$ and $Q$ is a pair $(c,q)\in C\times Q$ attaining $\left\Vert {c-q}\right\Vert =\min\left\Vert {C-Q}\right\Vert $, where $C-Q:=\left\{ x-y\mid x\in C,\text{ }y\in Q\right\} ,$ see, e.g., Deutsch's book \cite{Deutsch2001} or the recent \cite{Aharoni2018}. Cheney and Goldstein considered the case of two nonempty closed convex sets $C$ and $Q$ in Hilbert space, with $P_{1}$ and $P_{2}$ denoting the orthogonal projection (proximity map) onto $C$ and $Q,$ respectively, and $T:=P_{1}P_{2} $. They show that the sequence $x^{k}$ $=$ $T^{k}(x^{0})$ obtained by alternating distance minimizations converges to a fixed point of $T,$ regardless whether $C\cap Q=\varnothing$ or not, if either (i) one of the two sets is compact, or (ii) one set is finite-dimensional and the distance between the two sets is attained. In particular, when the intersection of the two convex sets is nonempty, the sequence $\{x^{k}\}_{k=0}^{\infty}$ converges to a member of that intersection, in either of the two cases above, see, e.g., Subsection 2.1 of \cite{CZ2018}. The problem of minimization over disjoint hard and soft constraints sets occurs when the fixed point set $\mathrm{Fix}\left( T\right) $ is larger than a singleton and we want to find in it a minimizer of some given target function $f,$ leading to a constrained minimization problem like (\ref{prob:cons-min-1}). The SA-PSM of \cite{Censor2014}, mentioned in Section \ref{sect:origin} above, will not apply to this problem but the hybrid subgradient method (HSM) of Algorithm \ref{alg:sa-psm} studied here will, allowing us to reach a data-compatible point. \vskip 6mm \noindent{\bf Acknowledgments} \noindent We greatly appreciate the comprehensive and very constructive referee report that helped us improve the paper. The work of Y.C. is supported by the ISF-NSFC joint research program grant No. 2874/19. \end{document}
\begin{document}
\title{ \huge{Exponentially Convergent Algorithm Design for Constrained Distributed Optimization via Non-smooth Approach} }
\author{Weijian~Li, Xianlin~Zeng, Shu~Liang and Yiguang~Hong \thanks{W.~Li is with the Department of Automation, University of Science and Technology of China, Hefei 230027, Anhui, China, and is also with the Key Laboratory of Systems and Control, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, 100190, China, e-mail: \texttt{[email protected]}. X.~Zeng is with the Key Laboratory of Intelligent Control and Decision of Complex Systems, School of Automation, Beijing Institute of Technology, Beijing, 100081, China, e-mail: \texttt{[email protected]}. S. Liang is with the Key Laboratory of Knowledge Automation for Industrial Processes of Ministry of Education, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China, and is also with the Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing 100083, China. e-mail: \texttt{[email protected]}. Y.~Hong is with the Key Laboratory of Systems and Control, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, 100190, China, e-mail: \texttt{[email protected]}. }}
\maketitle
\begin{abstract}
We consider minimizing a sum of non-smooth objective functions with set constraints in a distributed manner. For this problem, we propose, for the first time, a distributed algorithm with an exponential convergence rate. By the exact penalty method, we reformulate the problem equivalently as a standard distributed one without consensus constraints. Then we design a distributed projected subgradient algorithm with the help of differential inclusions. Furthermore, we show that the algorithm converges to the optimal solution exponentially for strongly convex objective functions.
\end{abstract}
\begin{IEEEkeywords}
Exponential convergence, constrained distributed optimization, non-smooth approach, projected gradient dynamics, exact penalty method
\end{IEEEkeywords}
\section{INTRODUCTION}
In the past decade, distributed convex optimization has received intensive research interest due to its broad applications in distributed control \cite{antonell2013interconnected}, resource allocation \cite{amir2014optimal}, machine learning \cite{li2014communication}, etc. The basic idea is that all agents cooperate to compute an optimal solution with their local information and neighbors' states in a multi-agent network. A variety of distributed algorithms, either discrete-time or continuous-time, have been proposed for different formulations. For instance, the subgradient method \cite{nedic2009distri}, the dual averaging method \cite{duchi2011dual} and the augmented Lagrangian method \cite{chatzipanagiotis2015augmented} were designed for unconstrained distributed optimization, while the primal-dual dynamics was explored for constrained distributed problems \cite{liang2017distributed, cherukuri2016asymptotic, zeng2016distributed, zhu2018projected}. Among these formulations, one of the most important is the distributed optimization problem with set constraints \cite{zeng2016distributed, lei2016primal}.
As to algorithms, continuous-time designs, including differential equations \cite{wang2010control} and differential inclusions \cite{liu2016collective}, are increasingly popular because they may be implemented by continuous-time physical systems, and moreover, Lyapunov stability theory provides a powerful tool for their convergence analysis.

Convergence rate is an important criterion to evaluate the performance of a distributed algorithm. In particular, exponential convergence is desired in many scenarios. In fact, great efforts have been devoted to the exponential convergence of distributed algorithms, especially for unconstrained distributed optimization \cite{shi2015Extra, ali2017convergence, liang2019exponential}. In \cite{shi2015Extra}, an exact first-order algorithm was proposed with fixed stepsizes and linear convergence rates for strongly convex objective functions. The linear convergence of the alternating direction method of multipliers (ADMM) was discussed in \cite{shi2014on, ali2017convergence}. In \cite{liang2019exponential, yi2019exponential}, the strong convexity assumption was relaxed to metric sub-regularity and the restricted secant inequality, respectively, to achieve exponential convergence for distributed primal-dual dynamics. As to distributed problems with affine constraints, some pioneering works have also been reported in \cite{cortes2019distributed, yi2016initialization, nedic2018improved}. Based on the saddle-point dynamics, a distributed algorithm was proposed with exponential convergence in \cite{cortes2019distributed}. In \cite{deng2017distributed, yi2016initialization}, distributed continuous-time algorithms for resource allocation were explored with exponential convergence rates in the absence of set constraints. In \cite{nedic2018improved}, an improved distributed algorithm was designed with geometric rates under time-varying communication topologies.

For distributed formulations with set constraints, convergence rates have been analyzed for some existing methods. In \cite{liu2017convergence}, the authors reconsidered the consensus-based projected subgradient algorithm, which resulted in a convergence rate of $O(1/{\sqrt k})$ with a non-summable stepsize. Both distributed continuous-time \cite{zeng2016distributed, zeng2018distributedsiam} and discrete-time \cite{lei2016primal} primal-dual methods have been developed, and they converge to an optimal solution with a convergence rate of $O(1/t)$. However, it is still challenging to design a distributed algorithm with exponential convergence rates for these problems.

Inspired by the above observations, we focus on the distributed optimization problem of a sum of non-smooth objective functions with local set constraints. By the exact penalty method, we reformulate the problem equivalently to remove the consensus constraints. Furthermore, we explore a distributed algorithm, and provide rigorous proofs for its correctness and convergence properties. The main contributions are summarized as follows.
\begin{enumerate}[a)]
\item We propose a new distributed continuous-time algorithm by combining the differentiated projection operator and the subgradient method. Note that the algorithm can deal with non-smooth objective functions. Additionally, it has a lower computation and communication burden than the primal-dual algorithms in \cite{zeng2016distributed, lei2016primal}.
\item We show that the proposed algorithm converges to an optimal solution exponentially for strongly convex objective functions. Compared with the existing results, this is, to the best of our knowledge, the first work with an exponential convergence rate for this problem.
\end{enumerate}
The rest of this paper is organized as follows. Section \uppercase\expandafter{\romannumeral2} presents necessary preliminaries, while Section \uppercase\expandafter{\romannumeral3} formulates and reformulates our problem. In Section \uppercase\expandafter{\romannumeral4}, a distributed algorithm is proposed with convergence analysis. Finally, numerical simulations are carried out in Section \uppercase\expandafter{\romannumeral5} and concluding remarks are given in Section \uppercase\expandafter{\romannumeral6}.

\textbf{Notations:} Let $\mathbb R$ be the set of real numbers, $\mathbb R_{\ge 0}$ be the set of non-negative real numbers and $\mathbb R^m$ be the set of $m$ dimensional real column vectors. Denote $\bm 0$ as vectors with all entries being 0, whose dimensions are indicated by their subscripts. Denote $x^T$ as the transpose of $x$. Denote $\otimes$ as the Kronecker product. Let $|\cdot |$, $\Vert \cdot \Vert$ be the $l_1$-norm and the $l_2$-norm of a vector, respectively. For $x, y \in \mathbb R^m$, their Euclidean inner product is denoted by $\langle x,y\rangle$, or sometimes simply $x^T y$. For $x_i\in\mathbb R^m, i\in \{1,\dots,n\}$, we denote $\bm x={\rm col} \{x_1,\dots,x_n\}$ as the vector in $\mathbb R^{mn}$ defined by stacking $x_i$ together in columns. Define $B(x;r)=\{y~|~\Vert y-x\Vert \le r\}$. Denote ${\rm int}(\Omega)$ as the set of interior points of $\Omega$. Let $\Omega_1 \times \Omega_2$ be the Cartesian product of two sets $\Omega_1$ and $\Omega_2$.
\section{PRELIMINARIES}
In this section, we introduce some necessary preliminaries about convex analysis, graph theory and differential inclusions.
\subsection{Convex Analysis}
A set $\Omega \subset \mathbb R^m$ is convex if $\lambda x+(1-\lambda)y \in \Omega$ for all $x,y\in \Omega$ and $\lambda \in [0,1]$. A function $f:\Omega \rightarrow \mathbb R$ is convex if $\Omega \subset \mathbb R^m$ is a convex set, and moreover,
\begin{equation*}
f(\theta x+(1-\theta) y) \le \theta f(x)+(1-\theta)f(y), ~\forall x, y\in \Omega, ~\forall \theta \in [0,1].
\end{equation*}
If $g_f(x)\in \mathbb R^m$ satisfies
\begin{equation} \label{cov}
f(y) \ge f(x)+\langle y-x, g_f(x) \rangle, ~\forall y\in \Omega
\end{equation}
then $g_f(x)\in \partial f(x)$, where $\partial f(x)$ is the subdifferential of $f$ at $x$. Furthermore, $f$ is said to be $\mu$-strongly convex if
\begin{equation*}
\langle g_f(x)-g_f(y), x-y\rangle \ge \mu \Vert x-y\Vert^2, ~\forall x, y\in\Omega.
\end{equation*}
Let $\Omega \subset \mathbb R^m$ be a convex set. For $x \in \Omega$, the tangent cone to $\Omega$ at $x$ is defined by
\begin{equation}
\begin{aligned} \label{tancon_def}
\mathcal T_\Omega(x) \triangleq \{\lim_{k\to +\infty} \frac {x_k-x}{\tau_k} | &x_k \in \Omega,x_k \to x, \tau_k >0, \tau_k \to 0\},
\end{aligned}
\end{equation}
while the normal cone to $\Omega$ at $x$ is defined by
\begin{equation} \label{norcon_def}
\mathcal N_{\Omega}(x) \triangleq \{v\in \mathbb R^m |v^T(y-x)\le 0, ~\forall y\in\Omega\}.
\end{equation}
For $x\in \mathbb R^m$, the projection operator $P_{\Omega}(x)$ is defined by
\begin{equation} \label{pro}
P_{\Omega}(x)={\rm argmin}_{y\in\Omega} \Vert x-y \Vert.
\end{equation}
Then
\begin{subequations}
\begin{align}
\langle x-P_{\Omega}(x), z-P_{\Omega}(x) \rangle \le& ~0,~\forall z\in \Omega \label{pro_ine}\\
x-P_{\Omega}(x) \in& ~\mathcal N_{\Omega}(x). \label{pro_ine2}
\end{align}
\end{subequations}
Referring to \cite{brogliato2006equivalence}, we define the differentiated projection operator $P_{\mathcal T_{\Omega}(x)}(y)$, which can be computed by
\begin{equation} \label{tancom}
P_{\mathcal T_{\Omega}(x)}(y)=y-\beta z^*,
\end{equation}
where $\beta={\rm max}\{0, y^T z^*\}$ and $z^* ={\rm argmax}\{\langle y,z\rangle~|~z \in \mathcal N_{\Omega}(x),~\Vert z\Vert=1\}$.
\subsection{Graph Theory}
Consider a multi-agent network described by an undirected graph $\mathcal G(\mathcal V, \mathcal E)$, where $\mathcal V$ is the node set and $\mathcal E \subset \mathcal V \times \mathcal V$ is the edge set. Node $j$ is a neighbor of node $i$ if and only if $(i,j)\in \mathcal E$. Denote $\mathcal N_i=\{j|(i,j)\in \mathcal E\}$ as the set of agent $i$'s neighbors. All nodes can exchange information with their neighbors. A path between nodes $i$ and $j$ is a sequence of edges $(i, i_1), (i_1, i_2), \dots, (i_k, j)$ in the graph with distinct nodes $i_l \in \mathcal V$. Graph $\mathcal G$ is said to be connected if there exists a path between any pair of distinct nodes.
\subsection{Differential Inclusion}
A differential inclusion is given by
\begin{equation} \label{dif_in}
\dot x(t) \in \mathcal F(x(t)), ~x(0)=x_0,~t\ge 0
\end{equation}
where $\mathcal F: \mathbb R^m \rightrightarrows \mathbb R^m$ is a set-valued map. A Caratheodory solution to (\ref{dif_in}) defined on $[0,\tau) \subset [0,+\infty)$ is an absolutely continuous function $x:[0,\tau) \rightarrow \mathbb R^m$ satisfying (\ref{dif_in}) for almost all $t\in [0,\tau)$ (in the sense of Lebesgue measure) \cite{cortes2008discontinuous}. The solution $t \rightarrow x(t)$ to (\ref{dif_in}) is a right maximal solution if it cannot be extended in time. Suppose that all the right maximal solutions to (\ref{dif_in}) exist on $[0,+\infty)$. If $\bm 0_m \in \mathcal F(x_e)$, then $x_e$ is an equilibrium point of (\ref{dif_in}). The graph of $\mathcal F$ is defined by ${\rm gph}\mathcal F=\{(x,y)~|~y\in \mathcal F(x),~x \in \mathbb R^m\}$. The set-valued map $\mathcal F$ is said to be upper semi-continuous at $x$ if, for every $\epsilon>0$, there exists $\delta>0$ such that
\begin{equation*}
\mathcal F(y) \subset \mathcal F(x)+B(0;\epsilon), ~\forall y\in B(x;\delta)
\end{equation*}
and it is upper semi-continuous if it is so at every $x \in \mathbb R^m$. We collect the following results from \cite[p. 266, p. 267]{aubin1984differential}.
\begin{lemma} \label{exi_lem}
Let $\Omega$ be a closed convex subset of $\mathbb R^m$, and $\mathcal F$ be an upper semi-continuous map with non-empty compact values from $\Omega$ to $\mathbb R^m$. Consider two differential inclusions given by
\begin{subequations}
\begin{align}
\dot x(t) \in&~ \mathcal F(x(t)) -\mathcal N_{\Omega}(x(t)), ~x(0)=x_0 \label{dyn_1}\\
\dot x(t) \in&~ P_{\mathcal T_\Omega(x(t))} [\mathcal F(x(t))], ~~~~~~~~~x(0)=x_0. \label{dyn_2}
\end{align}
\end{subequations}
Then the following two statements hold.\\
(\romannumeral1) There is a solution to dynamics (\ref{dyn_1}) if $\mathcal F$ is bounded on $\Omega$. \\
(\romannumeral2) The trajectory $x(t)$ is a solution of (\ref{dyn_1}) if and only if it is a solution of (\ref{dyn_2}).
\end{lemma}
Consider dynamics (\ref{dif_in}).
Let $V$ be a locally Lipschitz continuous function, and $\partial V(x)$ be the Clarke generalized gradient of $V$ at $x$. The set-valued Lie derivative for $V$ is defined by $\mathcal L_{\mathcal F} V(x) \triangleq\{a\in \mathbb R: a = p^T v, p\in \partial V(x),v\in \mathcal F(x)\}$. The following Barbalat's lemma \cite[Lemma 4.1]{haddadnonlinear} will be used in the convergence analysis of this paper.
\begin{lemma} \label{Barlem}
Let $\sigma: [0, \infty) \rightarrow \mathbb R$ be a uniformly continuous function. Suppose that $\lim_{t\to\infty}\int_0^t \sigma(s)ds$ exists and is finite. Then $\lim_{t\to\infty} \sigma(t)=0$.
\end{lemma}
\section{FORMULATION AND REFORMULATION}
In this section, we formulate the problem, and reformulate it equivalently by the exact penalty method. In addition, we address the optimality conditions for the reformulation.
\subsection{Problem Formulation}
Consider a network of $n$ agents interacting over an undirected graph $\mathcal G(\mathcal V, \mathcal E)$, where $\mathcal V=\{1, \dots, n\}$ and $\mathcal E \subset \mathcal V \times \mathcal V$. For all $i\in \mathcal V$, there are a local (non-smooth) objective function $f_i: \mathbb R^m \rightarrow \mathbb R$ and a local feasible constraint set $\Omega_i \subset \mathbb R^m$. All agents cooperate to reach a consensus solution that minimizes the global objective function $\sum_{i=1}^n f_i(x)$ in the feasible set $\cap_{i=1}^n \Omega_i$. Formally, the optimization problem can be formulated as
\begin{equation} \label{pri_pro}
\begin{aligned}
{\rm min}~~ \sum\limits_{i=1}^n f_i(x) \quad {\rm s.t.}~~x\in \cap_{i=1}^n \Omega_i
\end{aligned}
\end{equation}
where $x \in \mathbb R^m$ is the decision variable to be solved for. In fact, (\ref{pri_pro}) is a well-known constrained distributed optimization problem. Non-smooth objective functions appear in a variety of fields including resource allocation, signal processing and machine learning, while local set constraints are often necessary due to the limited computation and communication capacities of the agents. Both discrete-time \cite{lei2016primal, liu2017constrained} and continuous-time \cite{zeng2016distributed} algorithms have been explored for (\ref{pri_pro}). However, to the best of our knowledge, the convergence rates of the existing results are no better than $\mathcal O(1/t)$. The goal of this paper is to design a new distributed algorithm with an exponential convergence rate. To ensure the well-posedness of problem (\ref{pri_pro}), the following assumptions are made \cite[Assumption 3.1]{liang2017distributed}, \cite[Assumption 3.1]{zeng2016distributed}.
\begin{assumption} \label{con_ass}
For $i\in\mathcal V$, $f_i$ is convex on an open set containing $\Omega_i$, and $\Omega_i$ is convex and compact with $\cap_{i=1}^n {\rm int}(\Omega_i)\neq \emptyset$.
\end{assumption}
\begin{assumption} \label{lip_ass}
For $i\in\mathcal V$, $f_i$ is Lipschitz continuous on $\Omega_i$. There exists a constant $c>0$ such that
\begin{equation}\label{lip_ie}
|f_i(x)-f_i(y)|\le c\Vert x-y\Vert,~\forall x, y\in\Omega_i.
\end{equation}
\end{assumption}
\begin{assumption} \label{gra_ass}
The undirected graph of the multi-agent network is connected.
\end{assumption}
\begin{assumption} \label{sto_con_ass}
For $i\in \mathcal V$, $f_i$ is $\beta$-strongly convex on $\Omega_i$, that is,
\begin{equation} \label{stro_equ}
\langle x-y, g_{f_i}(x)- g_{f_i}(y)\rangle \ge \beta \Vert x-y \Vert^2,~\forall x,y\in\Omega_i.
\end{equation}
where $g_{f_i}(x) \in \partial f_i(x)$ and $g_{f_i}(y) \in \partial f_i(y)$.
\end{assumption}
Assumption \ref{con_ass} on feasibility is reasonable to ensure the solvability of (\ref{pri_pro}). Compared with the formulation in \cite{zeng2016distributed}, we suppose in Assumption \ref{lip_ass} that $f_i$ is Lipschitz continuous, which is necessary for the problem reformulation in this work. However, we should note that the assumption is easy to satisfy in practice, especially when $\Omega_i$ is a compact set. Assumption \ref{gra_ass} on the communication topology is broadly employed so that all agents can obtain a consensus solution. Furthermore, Assumption \ref{sto_con_ass} is well known to guarantee exponential convergence. Subgradients are utilized in (\ref{stro_equ}) because non-smooth objective functions are considered.
\subsection{Reformulation}
Notice that (\ref{pri_pro}) is not of a standard distributed structure. Under Assumption \ref{gra_ass}, it is equivalent to
\begin{equation}\label{dis_str}
\begin{aligned}
{\rm min} \quad &\sum\limits_{i=1}^n f_i(x_i)\\
{\rm s.t.}\quad &x_i=x_j,~x_i\in \Omega_i,~i\in \mathcal V, ~j\in \mathcal N_i
\end{aligned}
\end{equation}
where $\mathcal N_i$ is the neighbor set of agent $i$. By the exact penalty method, (\ref{dis_str}) can be cast into
\begin{equation}\label{dis_pen}
\begin{aligned}
{\rm min}\quad& \sum\limits_{i=1}^n f_i(x_i)+ \frac K2\sum\limits_{i=1}^n\sum\limits_{j\in \mathcal N_i} |x_i-x_j|\\
{\rm s.t.}\quad &x_i\in \Omega_i,~i\in \mathcal V, ~j\in \mathcal N_i
\end{aligned}
\end{equation}
where $K \in \mathbb R_{\ge 0}$ is the penalty factor. Throughout this paper, we define $\bm x={\rm col}\{x_1,\dots, x_n\}$, $\bar \Omega =\Omega_1 \times \dots \times \Omega_n$, $f(\bm x)= \sum_{i=1}^n f_i(x_i)$, and
\begin{equation} \label{lag_fun}
\mathcal L(\bm x)=\sum\limits_{i=1}^n f_i(x_i)+ \frac K2\sum\limits_{i=1}^n\sum\limits_{j\in \mathcal N_i} |x_i-x_j|.
\end{equation}
The following lemma addresses the relationship between (\ref{dis_str}) and (\ref{dis_pen}).
\begin{lemma} \label{pro_equ}
Let Assumptions \ref{con_ass}, \ref{lip_ass} and \ref{gra_ass} hold. If the penalty factor satisfies $K>nc$, then $\bm x^*={\rm col} \{x_1^*,\dots, x_n^*\}$ is an optimal solution to (\ref{dis_str}) if and only if $\bm x^*$ is an optimal solution to (\ref{dis_pen}).
\end{lemma}
\begin{proof}
Define $\bar x=\frac 1n \sum_{i=1}^n x_i$, $\bar{\bm x}=1_n \otimes {\bar x}$,
\begin{equation*}
\begin{aligned}
h(\bm x)&=\frac 12\sum\limits_{i=1}^n \sum\limits_{j\in \mathcal N_i} |x_i-x_j| \ge \frac 12 \sum\limits_{i=1}^n \sum\limits_{j\in \mathcal N_i} \Vert x_i-x_j\Vert,
\end{aligned}
\end{equation*}
and moreover,
\begin{equation*}
\begin{aligned}
d(\bm x)=\sum\limits_{k=1}^n \Vert x_k-\bar x\Vert \le \frac 1n\sum\limits_{k=1}^n \sum\limits_{l=1}^n\Vert x_k- x_l\Vert.
\end{aligned}
\end{equation*}
For any $k,l\in \mathcal V$, there must be a path $\mathcal P_{kl} \subset \mathcal E$ connecting $k$ and $l$ due to Assumption \ref{gra_ass}. Then
\begin{equation} \label{pen_prf1}
\begin{aligned}
d(\bm x)\le \frac 1{2n} \sum\limits_{k=1}^n \sum\limits_{l=1}^n \sum\limits_{(i,j)\in \mathcal P_{kl}}\Vert x_i- x_j\Vert \le& \frac 1{n} \sum\limits_{k=1}^n \sum\limits_{l=1}^n h(\bm x)=n h(\bm x).
\end{aligned}
\end{equation}
Let $K$ be a scalar such that $K>nc$.
As a result, we derive \begin{equation} \label{equ_ine} \begin{aligned} \mathcal L(\bm x) =& f(\bm x)+Kh(\bm x) \ge f(\bm x)+cd(\bm x) \\ =& f(\bar{\bm x})+f(\bm x)-f(\bar{\bm x})+cd(\bm x) \ge f(\bar{\bm x}). \end{aligned} \end{equation} The first inequality holds due to (\ref{pen_prf1}), while the second inequality holds via (\ref{lip_ie}). It follows from (\ref{equ_ine}) that ${\rm min}~\mathcal L(\bm x) \ge {\rm min}_{x_i=x_j} ~f(\bm x)$. Furthermore, the equality holds if and only if $x_i=\bar x$ for $i \in \mathcal V$. Conversely, all minima of ${\rm min}_{x_i=x_j} f(\bm x)$ are also minima of ${\rm min}_{x_i=x_j} \mathcal L(\bm x)$, and they are also of minima of $\mathcal L(\bm x)$ due to (\ref{equ_ine}). Thus, the conclusion follows. \end{proof} \begin{remark} In light of \cite[Theorem 6.9]{ruszczynski2006nonlinear}, $K/2$ can be selected as a constant larger than the infinite norm of the Lagrange multipliers for the equality constraints in (\ref{dis_str}). However, it is difficult to derive the Lagrange multiplier before solving a dual problem. Under Assumption \ref{lip_ass}, the explicit relationship between the penalty factor, objective functions and the network structure is established in Lemma \ref{pro_equ}. \end{remark} In fact, Lemma \ref{pro_equ} provides a sufficient condition for the equivalence between (\ref{dis_str}) and (\ref{dis_pen}). Thus, we can focus on solving (\ref{dis_pen}) without consensus constraints, whose optimal conditions are shown as follows. \begin{lemma} \label{opt_lem} Under Assumptions \ref{con_ass}, \ref{lip_ass} and \ref{gra_ass}, $\bm x^*={\rm col} \{x_1^*,\dots, x_n^*\}$ is an optimal solution to problem (\ref{dis_pen}) if and only if \begin{equation} \label{opt_leme} \bm 0_{m} \in P_{\mathcal T_{\Omega_i}(x_i^*)}[-\partial f_i(x_i^*)-K\sum\limits_{j\in \mathcal N_i} {\rm Sgn}(x_i^*-x_j^*)], ~x_i^* \in \Omega_i \end{equation} where $\mathcal T_{\Omega_i}(x_i^*)$, $P_{\mathcal T_{\Omega_i}(x_i^*)}(\cdot)$ are defined by (\ref{tancon_def}) and (\ref{tancom}), respectively, and ${\rm Sgn}(\cdot)$ is the set-valued sign function with each entry defined by \begin{equation} \label{sgn_def} \begin{aligned} {\rm sgn}(u)\triangleq \partial |u|= \left\{\begin{split} &\{1\}, &{\rm if}~ u>0&\\ &[-1,1], &{\rm if}~ u=0&\\ &\{-1\}, &{\rm if}~ u<0&. \end{split}\right. \end{aligned} \end{equation} Furthermore, $\bm x^*$ is also an optimal solution to problem (\ref{dis_str}) if $K>nc$. \end{lemma} \begin{proof} According to the Karush-Kuhn-Tucker (KKT) optimal conditions \cite[Theorem 3.34]{ruszczynski2006nonlinear}, $\bm x^*={\rm col} \{x_1,\dots, x_n^*\}$ is an optimal solution to (\ref{dis_pen}) if and only if \begin{equation} \label{tan_non} \bm 0_m \in \partial f_i(x_i^*)+ K\sum\limits_{j\in \mathcal N_i} {\rm Sgn}(x_i^*-x_j^*)+ \mathcal N_{\Omega_i} (x_i^*), \end{equation} where $\mathcal N_{\Omega_i} (x_i^*)$ is defined in (\ref{norcon_def}). It follows from (\ref{norcon_def}) that (\ref{tan_non}) holds if and only if (\ref{opt_leme}) holds. Therefore, $\bm x^*$ is an optimal solution to (\ref{dis_pen}) if and only if (\ref{opt_leme}) holds. By Lemma \ref{pro_equ}, $\bm x^*$ is also an optimal solution to (\ref{dis_str}) if $K>nc$. Thus, the proof is completed. \end{proof} \section{MAIN RESULTS} In this section, we propose a distributed continuous-time projected algorithm for (\ref{dis_pen}) with the help of a differential inclusion and the differentiated projection operator (\ref{tancom}). Then we show convergence properties of the algorithm. 
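To make the forthcoming algorithm concrete before stating it, we include a brief computational sketch. The snippet below is a minimal illustration only (in Python): it assumes a hypothetical box constraint $\Omega_i=\{x~|~l_i\le x\le u_i\}$, one particular single-valued selection of the set-valued sign function (\ref{sgn_def}) and of the subgradients, and a simple explicit Euler discretization with stepsize $\alpha$; it is not the algorithm of this paper in its full generality.
\begin{verbatim}
import numpy as np

def sgn_selection(u, tol=1e-12):
    # One admissible selection of the set-valued sign Sgn(u):
    # componentwise sign, choosing 0 from [-1, 1] wherever u vanishes.
    u = np.asarray(u, dtype=float)
    s = np.sign(u)
    s[np.abs(u) < tol] = 0.0
    return s

def euler_step_agent_i(x_i, neighbor_states, subgrad_f_i, l_i, u_i, K, alpha):
    # One explicit Euler step for agent i of the projected dynamics,
    # assuming (hypothetically) a box constraint Omega_i = {x : l_i <= x <= u_i}.
    # Clipping the Euler point back into the box plays the role of the
    # tangent-cone projection, up to an O(alpha) error.
    g = -subgrad_f_i(x_i) - K * sum(sgn_selection(x_i - x_j)
                                    for x_j in neighbor_states)
    return np.clip(x_i + alpha * g, l_i, u_i)
\end{verbatim}
Here \texttt{subgrad\_f\_i} returns an arbitrary element of $\partial f_i(x_i)$.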
\subsection{Distributed Continuous-Time Projected Algorithm} For (\ref{dis_pen}), we design a distributed continuous-time projected algorithm as \begin{equation}\label{alg_sim} \dot {\bm x}(t)\in P_{\mathcal T_{\bar \Omega}(\bm x(t))} [-\partial \mathcal L(\bm x(t))],~\bm x(0)=\bm x_0\in \bar \Omega. \end{equation} In fact, (\ref{alg_sim}) is inspired by the projected subgradient method \cite{mainge2008strong}. $\mathcal L(\bm x(t))$ is non-smooth due to the non-smoothness of $f_i$ and $l_1$-norm in (\ref{lag_fun}). Thus, subgradients and differential inclusions are adopted. The projection operator is employed to guarantee the state trajectory of $\bm x(t)$ being in the constraint set $\bar \Omega$. For agent $i$, the specific form of (\ref{alg_sim}) is \begin{equation}\label{alg} \begin{aligned} \dot x_i(t) \in P_{\mathcal T_{\Omega_i}(x_i(t))}[-\partial f_i(x_i(t)) -K\sum\limits_{j\in \mathcal N_i}{\rm Sgn}(x_i(t)-x_j(t))], ~x_i(0)=x_{i,0}\in \Omega_i. \end{aligned} \end{equation} Dynamics (\ref{alg_sim}) is discontinuous because of the projection onto the tangent cone and the non-smoothness of $\mathcal L(\bm x)$. However, its solution is still well-defined in the Caratheodory sense. The reasons are as follows. Obviously, (\ref{alg_sim}) is of the form (\ref{dyn_2}), where $\mathcal F(\bm x(t))=-\partial \mathcal L(\bm x(t))$. Notice that $\mathcal L(\bm x(t))$ is convex. Then ${\rm gph}\mathcal F$ is closed. As a result, $\mathcal F$ is upper semi-continuous with compact convex values. Additionally, $\bar \Omega$ is convex and compact. Recalling part (\romannumeral1) of Lemma \ref{exi_lem}, dynamics \begin{equation} \dot{\bm x}(t) \in -\partial \mathcal L(\bm x(t)) -\mathcal N_{\bar\Omega}(\bm x) \end{equation} has a solution. Therefore, (\ref{alg_sim}) has a solution according to part (\romannumeral2) of Lemma \ref{exi_lem}. We should note that (\ref{alg}) is a fully distributed algorithm because for each agent, only its local objective function, set constraint and neighbor's states are necessary. By Lemma \ref{opt_lem}, $\bm x^*$ is an optimal solution to (\ref{dis_pen}) if and only if it is an equilibrium point of dynamics (\ref{alg}). For (\ref{alg}), agent $i$ is required to project $-\partial f_i(x_i) -K\sum_{j\in \mathcal N_i}{\rm Sgn}(x_i-x_j)$ onto the tangent cone $\mathcal T_{\Omega_i}(x_i)$. However, the closed form for this projection is not difficult to be computed in practice, especially for some special convex sets such as polyhedrons, Euclidean balls and boxes. Similar projection operators have also been utilized in \cite{ zhu2018projected, zeng2016distributed, yi2016initialization}. \begin{remark} One of the most intriguing methods for (\ref{dis_str}) is the distributed projected primal-dual algorithm, which has been discussed in \cite{zeng2016distributed, lei2016primal}. As a comparison, (\ref{alg}) is with lower communication and computation burden for each agent because dual variables are not necessary. In fact, similar ideas as (\ref{alg}) have also been explored in \cite{lin2016distributed, zhang2018distributed}. However, the set constraints were not considered in \cite{zhang2018distributed}. We extend the smooth objective functions in \cite{lin2016distributed} into non-smooth cases, and design a different algorithm with an exponential convergence rate. \end{remark} \subsection{Convergence Analysis} It is time to show the convergence for dynamics (\ref{alg}). Before showing the result, we introduce a lemma as follows. 
\begin{lemma}\label{solset_lem} Consider dynamics (\ref{alg}). If $x_i(0)\in \Omega_i$, then $x_i(t) \in \Omega_i$ for all $t\ge 0$. \end{lemma} \begin{proof} For $i\in \mathcal V$, we construct a function as $$E_i(x_i(t))=\Vert x_i(t)-P_{\Omega_i}(x_i(t)) \Vert^2.$$ Then we have \begin{equation*} \nabla E_i(x_i)= x_i-P_{\Omega_i}(x_i)\in N_{\Omega_i}(x_i). \end{equation*} On the other hand, \begin{equation*} \begin{aligned} \dot E_i =&\langle x_i(t)-P_{\Omega_i}(x_i(t)),~\dot x_i(t) \rangle. \end{aligned} \end{equation*} By (\ref{norcon_def}) and (\ref{alg}), we obtain $\dot E_i \le 0$. In other words, $E_i(x_i(t))$ is non-increasing. Furthermore, $E_i(x_i(0))=0$ due to $x_i(0) \in \Omega_i$. As a result, $E_i(x_i(t))=0$ for all $t \ge 0$, and then $x_i(t)\in \Omega_i$. Thus, the conclusion follows. \end{proof} Lemma \ref{solset_lem} implies that $\bm x(t) \in \bar \Omega$ for all $t>0$ if $\bm x(0)\in \bar \Omega$. The following theorem shows the convergence of $\bm x(t)$. \begin{theorem} \label{conve_the} Consider dynamics (\ref{alg}). Under Assumptions \ref{con_ass}, \ref{lip_ass} and \ref{gra_ass}, $\bm x(t)$ converges to an equilibrium point $\bm x^*$, which is also an optimal solution to (\ref{dis_pen}). \end{theorem} \begin{proof} Let $\bm x^*=\{x_1^*, \dots, x_n^*\}$ be an equilibrium point of dynamics (\ref{alg}). Construct a Lyapunov candidate function as \begin{equation} \label{lya1} V=\frac 12\sum\limits_{i=1}^n \Vert x_i(t)-x_i^*\Vert^2. \end{equation} Clearly, the function $V$ along (\ref{alg}) satisfies \begin{equation*} \label{lya_re} \begin{aligned} \mathcal L_{\mathcal F} V&=\{a\in \mathbb R: a=\sum\limits_{i=1}^n\langle x_i-x_i^*, P_{\mathcal T_{\Omega_i}(x_i)}[-\eta_i &-K\sum\limits_{j\in \mathcal N_i} \xi_{ij}] \rangle, \eta_i \in \partial f_i(x_i), \xi_{ij} \in {\rm Sgn}(x_i-x_j)\}. \end{aligned} \end{equation*} By (\ref{pro_ine2}) and (\ref{alg}), we obtain \begin{equation*} \begin{aligned} -\eta_i -K\sum\limits_{j\in \mathcal N_i} \xi_{ij}-\dot x_i \in N_{\Omega_i}(x_i). \end{aligned} \end{equation*} Due to (\ref{norcon_def}), we have \begin{equation} \label{dev_in2} \begin{aligned} \langle x_i^*-x_i, -\eta_i -K\sum\limits_{j\in \mathcal N_i} \xi_{ij}-\dot x_i\rangle \le 0. \end{aligned} \end{equation} As a result, \begin{equation}\label{dev_in} a \le \sum\limits_{i=1}^n\langle x_i-x_i^*, -\eta_i -K\sum\limits_{j\in \mathcal N_i} \xi_{ij} \rangle. \end{equation} Define \begin{equation} \label{Wdef} W(t)=\mathcal L({\bm x}(t)) -\mathcal L(\bm x^*). \end{equation} Recalling Lemma \ref{solset_lem} gives $W(\bm x)\ge 0$ because $\bm x(t) \in \bar\Omega$. Combining (\ref{cov}) and (\ref{dev_in}), we derive \begin{equation} \label{lya_re3} \begin{aligned} a \le\sum\limits_{i=1}^n(f_i(x_i^*)-f_i(x_i)) -\frac K2 \sum\limits_{i=1}^n \sum\limits_{j\in \mathcal N_i}|x_i-x_j| \le -W(t) \le 0. \end{aligned} \end{equation} Since $\mathcal L(\cdot)$ is locally Lipschitz continuous and $\bm x(t)$ is absolutely continuous, $W(t)$ is uniformly continuous in $t$. Then $W(t)$ is Riemman integrable. As a result, $W(t)$ is Lebesgue integrable, and the integral is equal to the Riemann integral. It follows from (\ref{lya_re3}) that the Lebesgue integral of $W(t)$ over the infinite interval $[0, +\infty)$ is bounded. In summary, $\int_0^t W(\bm x(\tau)) d\tau$ exists and is finite. Furthermore, $\int_0^t W(\tau) d\tau$ is monotonically increasing because $W(t)$ is nonnegative. Define $\mathcal M=\{\bm x|W(\bm x(t))=0\}$. 
By (\ref{Wdef}), any $\bm x \in \mathcal M$ is an equilibrium point of dynamics (\ref{alg}). Based on Lemma \ref{Barlem}, we derive $\bm x(t) \to \mathcal M$ as $t\to \infty$. Finally, we show that the trajectory $\bm x(t)$ converges to one of the equilibrium points in $\mathcal M$. There exists a strictly increasing sequence $\{t_k\}$ with $\lim_{k\to\infty} t_k=+\infty$ such that $\lim_{k\to\infty} \bm x(t_k) =\tilde {\bm x}$ because $\lim_{t\to \infty} \bm x(t) \rightarrow \mathcal M$. Consider a new Lyapunov function $\tilde V$ defined as (\ref{lya1}) by replacing $\bm x^*$ with $\tilde {\bm x}$. By a similar procedure as above for $V$, we have $\dot{\tilde V} \le 0$. For any $\epsilon > 0$ , there exists $T$ such that $\tilde V(\bm x(T)) < \epsilon$. Because of $\dot{\tilde V} \le 0$, we obtain \begin{equation*} \frac 12\Vert\bm x(t)-\tilde {\bm x} \Vert^2 \le \tilde V(\bm x(T)) < \epsilon,~\forall t\ge T \end{equation*} which implies $\lim_{t\to \infty} \bm x(t) =\tilde {\bm x}$. According to Lemma \ref{opt_lem}, $\bm x^*$ is an equilibrium point of dynamics (\ref{alg}) if and only if it is an optimal solution to (\ref{dis_pen}). Thus, the proof is completed. \end{proof} \begin{remark} Theorem \ref{conve_the} indicates that dynamics (\ref{alg}) converges to one of the equilibria without Assumption \ref{sto_con_ass}, even in the absence of the strict convexity assumption on objective functions \cite[Assumption 3.3]{zeng2018distributedsiam}, \cite[Remark 4.5]{cherukuri2018role}. \end{remark} \subsection{Convergence Rate Analysis} In this subsection, we analyze the convergence rate for dynamics (\ref{alg}). As is known to all, for a nonlinear dynamics, there is not a unifying framework to estimate its convergence rate. However, in this work, dynamics (\ref{alg}) is carefully designed with the projection onto the tangent cone, which can be easily eliminated via (\ref{dev_in2}). Then Assumption \ref{sto_con_ass} can be naturally employed for the exponential convergence analysis. The main result is shown as follows. \begin{theorem} Under Assumptions 1–4, dynamics (\ref{alg}) converges to its equilibrium point exponentially. \end{theorem} \begin{proof} Note that there is only one optimal solution to (\ref{dis_pen}) due to Assumption \ref{sto_con_ass}. Then the equilibrium point of (\ref{alg}) is unique. According to (\ref{dev_in}), we derive \begin{equation} \begin{aligned} \label{lya2_1} a \le \sum\limits_{i=1}^n\langle x_i-x_i^*, -\eta_i -K\sum\limits_{j\in \mathcal N_i} \xi_{ij} \rangle. \end{aligned} \end{equation} For $\eta_i^* \in \partial f_i(x_i^*), \xi_{ij}^* \in {\rm Sgn}(x_i^*-x_j^*)$, $x_i^*$ is an optimal solution to (\ref{dis_pen}) if and only if \begin{equation} \label{lya2_var} \langle x_i-x_i^*,\eta_i^* +K\sum\limits_{j\in \mathcal N_i} \xi_{ij}^* \rangle \ge 0,~\forall x_i \in \Omega_i. \end{equation} Substituting (\ref{lya2_var}) into (\ref{lya2_1}), we obtain \begin{equation*} \begin{aligned} a \le& -\sum\limits_{i=1}^n\langle x_i-x_i^*, \eta_i-\eta_i^* +K\sum\limits_{j\in \mathcal N_i} \xi_{ij}-K\sum\limits_{j\in \mathcal N_i} \xi_{ij}^* \rangle \\ \le& -\sum\limits_{i=1}^n \big\langle x_i-x_i^*,\eta_i-\eta_i^*\rangle-\sum\limits_{i=1}^n \sum\limits_{j \in \mathcal N_i}\frac K2 |x_i-x_j|\\ \le& -\sum\limits_{i=1}^n \big\langle x_i-x_i^*,\eta_i-\eta_i^*\rangle \end{aligned} \end{equation*} Recalling (\ref{stro_equ}) gives \begin{equation*} a \le -\sum\limits_{i=1}^n\beta\Vert x_i-x_i^* \Vert^2 = -\beta V. 
\end{equation*} As a result, $V(t) \le V(0)e^{-\beta t}$, and $\bm x(t)$ converges to $\bm x^*$ exponentially. Thus, the conclusion follows. \end{proof} \begin{remark} For problem (\ref{pri_pro}), a distributed algorithm with an exponential convergence rate is provided for the first time in this work. On one hand, the consensus constraints in (\ref{dis_str}) are eliminated by the exact penalty method, and then dual variables are not necessary for the algorithm design. On the other hand, properties of the differentiated projection operator (\ref{tancom}) are greatly explored in this work. \end{remark} \section{NUMERICAL SIMULATIONS} In this section, two numerical simulations are carried out for illustration. Dynamics (\ref{alg}) is a differential inclusion, and thus, Euler discretization is employed for its numerical implementation in this work. At each step, any subgradient in (\ref{alg}) can be selected. With a fixed stepsize $\alpha$, the $\mathcal O(\alpha)$ approximation is preserved. \begin{example} Here, we provide a numerical example with non-smooth objective functions to verify the convergence of dynamics (\ref{alg}). Consider a multi-agent network with four agents whose communication graph forms a star network. The objective functions and constraints are given by \begin{equation*} f_i(x)=|x-a_i|+b_i^Tx, \end{equation*} and $\Omega_i=\{x \in \mathbb R^4 ~|~\Vert x-c_i\Vert \le d_i, ~d_i> 0\}$, respectively, where $a_i, b_i, c_i$ and $d_i$ are randomly generated. Fig. 1 shows the state trajectories of dynamics (\ref{alg}). From the result, (\ref{alg}) converges to an equilibrium point. Moreover, all agents reach a consensus solution because $\lim_{t\rightarrow \infty} \Vert x_i(t)-x_j(t)\Vert=0$. \begin{figure} \caption{State trajectories of algorithm (\ref{alg}).} \end{figure} \end{example} \begin{example} We carry out a numerical example with strongly convex objective functions to demonstrate the exponential convergence of dynamics (\ref{alg}). The multi-agent network consists of thirty agents connected by a cyclic graph. The objective functions are given by \begin{equation*} f_i(x)= \frac 12 x^TP_ix+ q_i^T x+r_i|x|, \end{equation*} where $P_i \in \mathbb R^{10 \times 10}$ is positive definite, $q_i\in \mathbb R^{10}$ and $r_i\in \mathbb R_{\ge 0}$. Box constraints are utilized, that is, $\Omega_i=\{x \in \mathbb R^{10} ~|~ l_i \le x\le u_i, u_i> l_i\}$. Coefficients including $P_i, q_i, r_i, l_i$ and $u_i$ are randomly generated. Fig. 2(a) shows the trajectory of $f(\bm x)$, while Fig. 2(b) presents the trajectory of ${\rm log}(V(t))$. Fig. 2(a) indicates the convergence of (\ref{alg}), and Fig. 2(b) reveals the exponential convergence. \begin{figure} \caption{Trajectories of $f(\bm x)$ and ${\rm log}(V(t))$.} \end{figure} \end{example} \section{CONCLUSION} This paper addressed the distributed optimization problem with local set constraints. The problem was equivalently reformulated without consensus constraints by the exact penalty method. Resorting to a differentiated projection operation and the subgradient method, a distributed continuous-time algorithm was proposed. The optimal solution is obtained with an exponential convergence rate for strongly convex objective functions. Finally, two numerical examples were carried out to verify the results. \end{document}
\begin{document} \title{A proof of Sanov’s Theorem via discretizations} \date{\today} \author{Rangel Baldasso \thanks{ mail: [email protected], Mathematical Institute, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands.} \and Roberto I. Oliveira \thanks{[email protected], IMPA, Estrada Dona Castorina 110, 22460-320 Rio de Janeiro, RJ - Brazil.} \and Alan Pereira \thanks{[email protected], Instituto de Matem\'{a}tica, Universidade Federal de Alagoas, Rua Lorival de Melo Mota s/n, 57072970 Macei\'{o}, AL - Brazil.} \and Guilherme Reis \thanks{\Letter [email protected], Fakult\"at f\"ur Mathematik, Technische Universit\"at M\"unchen, Boltzmannstra{\ss}e 3, 85748 Garching bei M\"unchen, Germany.}} \maketitle \begin{abstract} We present an alternative proof of Sanov’s theorem for Polish spaces in the weak topology that follows via discretization arguments. We combine the simpler version of Sanov's Theorem for discrete finite spaces and well chosen finite discretizations of the Polish space. The main tool in our proof is an explicit control on the rate of convergence for the approximated measures. \noindent \emph{Keywords and phrases:} large deviations; Sanov's Theorem \noindent MSC 2010: 60F10 \end{abstract} \section{Introduction} ~ \par Sanov's Theorem is a well know result in the theory of large deviations principles. It provides the large deviations profile of the empirical measure of a sequence of i.i.d. random variables and characterizes its rate function as the relative entropy. This short note provides an alternative proof of this fact, by exploring the metric structure of the weak topology with the variational formulation of the relative entropy. Formally, let $(M,\dist)$ be a Polish space and let $\big(X_n \big)_{n \in \mathbb{N}}$ be a sequence of independent $M$-valued random elements identically distributed according to $\mu \in \mc{P}(M)$, where $\mc{P}(M)$ is the set of Borel probability measures on $M$. We denote by $\delta_x$ the probability measure degenerate at $x \in M$, and define the \textit{empirical measure} of $X_1, \dots , X_n$ by \begin{equation} \label{eq:DefLn} L_n := \frac{1}{n}\sum_{i=1}^n\delta_{X_i}. \end{equation} Also, given $\upsilon, \mu \in \mc{P}(M)$, the \textit{relative entropy} between $\upsilon$ and $\mu$ is defined as \begin{equation} \label{eq:variational} H(\upsilon|\mu):=\sup \left\{\int f \d \upsilon -\log \int e^f \d \mu;\,f \text{ is measurable and bounded} \right\}. \end{equation} Sanov's Theorem is given by the following statement. \begin{theorem}[Sanov] \label{theoem:sanov} Let $\big(X_n \big)_{n \in \mathbb{N}}$ be a sequence of i.i.d.\ random variables taking values in a Polish space $(M,\dist)$ with distribution $\mu \in \mc{P}(M)$. The sequence of empirical measures $\big(L_n\big)_{n \in \mathbb{N}}$ of $\big(X_{n}\big)_{n \in \mathbb{N}}$ (defined in Equation~\eqref{eq:DefLn}) satisfies a large deviations principle on the space $\mathcal{P}(M)$ with rate function $H( \,\cdot\, |\mu)$. \end{theorem} When the space $M$ is finite, the theorem above is proved in an elementary and elegant way (see Den Hollander~\cite[Theorem II.2]{denhollander}, and Dembo and Zeitouni~\cite[Theorem 2.1.10]{dembo_zeitouni}). In this work, we prove the theorem for general Polish metric spaces by extending this elementary proof via sequences of discretizations of the space. We split the set $M$ in a finite number of subsets which belong to one of two distinct categories. 
The well-behaved sets are the ones with small diameter, while the badly behaved sets will have small $\mu$-measure. We remark though that when the space $M$ is compact, no badly behaved sets are necessary. These partitions define natural projections on the space and allow us to approximate the sequence $\big(X_{n} \big)_{n \in \mathbb{N}}$ by variables in the discretized spaces, and by consequence provide approximations for its empirical measures. The main technical observation is that the discretized relative entropy converges to the relative entropy~\eqref{eq:variational} as we take thinner partitions (see Lemma~\ref{le:supgoodpart}) and the relative entropy is well approximated in balls (see Lemma~\ref{lemma:supinf}). Some ideas used to prove Lemma~\ref{lemma:supinf} are roughly inspired by the proof of the upper bound in Csisz\'ar~\cite{csiszar}. His work presents a proof of Sanov's Theorem for the $\uptau$-topology, a stronger topology than that of weak convergence, with an approach that differs greatly from more classical ones that can be found, for example, in~\cite[Theorem 6.2.10]{dembo_zeitouni}. There are two proofs of Sanov's Theorem in~\cite{dembo_zeitouni}, one by means of Cram\'er's Theorem for Polish spaces and the other following a projective limit approach. Although we strongly use the metric structure of the space, our proof does not require profound knowledge of large deviations theory or general topology. \noindent\textbf{Organization of the paper.} In the next section we collect some preliminary notation and results that are used during the text. Section~\ref{sec:discretization} introduces the discretization considered here. Section~\ref{sec:theorem} contains the statement of the main lemmas used in the proof. We also show how Sanov's Theorem is proved in Section~\ref{sec:theorem}. Sections~\ref{sec:proof:le:supgoodpart} and~\ref{section:proof_supinf} contain the proofs of Lemmas~\ref{le:supgoodpart} and \ref{lemma:supinf}, respectively. \noindent\textbf{Acknowledgments.} RB is supported by the Mathematical Institute of Leiden University for support. RIO counted on the support of CNPq, Brazil via a {\em Bolsa de Produtividade em Pesquisa} (304475/2019-0) and a {\em Universal} grant (432310/2018-5). GR was partially supported by a Capes/PNPD fellowship 888887.313738/2019-00 while he was a post doc at Federal University of Bahia (UFBA). \noindent\textbf{Data Availability Statement .} Data sharing is not applicable to this article as no datasets were generated or analysed during the current study. \section{Preliminaries} ~ \par In this section we review some basic concepts. We provide the definition of large deviations principle and weak topology, and collect some properties of the relative entropy. \begin{definition}[Large deviation principle]\label{def:LDP} A sequence $\big( \bb{P}_n \big)_{n \in \mathbb{N}}$ of probabilities over a metric space $(\mathfrak{X}, \d_{\mathfrak{X}})$ satisfies a large deviation principle with rate function $I$ if \begin{enumerate} \item (Lower bound) For any open set $\mc{O} \subset \mathfrak{X}$, \begin{equation} \liminf_{n \to \infty}\frac{1}{n} \log \bb{P}_n (\mc{O}) \geq -\inf_{x \in \mc{O}} I(x); \end{equation} \item (Upper bound) For any closed set $\mc{C} \subset \mathfrak{X}$, \begin{equation} \limsup_{n \to \infty}\frac{1}{n} \log \bb{P}_n(\mc{C}) \leq -\inf_{x \in \mc{C}} I(x). 
\end{equation} \end{enumerate} \end{definition} The weak topology on $\mc{P}(M)$ is defined as the topology generated by the functionals \begin{equation} \upsilon \mapsto \int \varphi \, \d \upsilon, \end{equation} where $\varphi \in C_b(M)$ is a continuous bounded function. A metric compatible with the weak topology is the bounded Lipschitz metric \begin{equation} \d_{\rm BL}(\mu, \upsilon) = \sup \left \{ \left| \int f \, \d \upsilon - \int f \, \d \mu \right|: f \in BL(M) \right\}, \end{equation} where $BL(M)$ is the class of $1$-Lipschitz functions $f : M \to \bb{R}$ bounded by one. For the next lemma, let $x \wedge y$ denote the minimum between $x$ and $y$. \begin{lemma} Let $(X, Y)$ be a coupling of two distributions $\mu$ and $\upsilon$. Then \begin{equation} \d_{\rm BL}(\mu,\upsilon)\leq \bb{E}(\d(X,Y) \wedge 2). \end{equation} \end{lemma} \begin{proof} Let $x, y \in M$ and notice that, for each $f \in BL(M)$, \begin{equation} |f(x) -f(y)| \leq \d(x,y) \wedge 2, \end{equation} since $f$ is 1-Lipschitz and bounded by one. The proof is now complete by noting that \begin{equation} \left| \int f \, \d \upsilon - \int f \, \d \mu \right| = \big| \bb{E} \big(f(X)-f(Y) \big) \big| \leq \bb{E}(\d(X,Y) \wedge 2). \end{equation} \end{proof} Equation~\eqref{eq:variational} is called the variational formulation of entropy, and it readily implies the so-called \textit{entropy inequality} \begin{equation}\label{eq:entropy_inequality} \int f \d \upsilon \leq H(\upsilon|\mu) + \log \int e^f \d \mu, \end{equation} for any measurable bounded function $f$. We will also make use of the \textit{integral formulation} of the relative entropy, provided in the next lemma. This formulation will be the key result used to approximate relative entropies in the discrete case to the general case. \begin{lemma}\label{lemma:entropy} The variational formula of the relative entropy~\eqref{eq:variational} is equivalent to the following integral formulation of the entropy \begin{equation} H(\upsilon|\mu)=\begin{cases} \int \frac{\d\upsilon}{\d\mu}\log \frac{\d\upsilon}{\d\mu} \d \mu, & \text{ if } \upsilon \ll \mu, \\ +\infty, & \text{ otherwise}. \end{cases} \end{equation} \end{lemma} We refrain from presenting the proof of the lemma above, and refer the reader to~\cite[Theorem 5.2.1]{gray}. \section{Discretization}\label{sec:discretization} ~ \par In this section we present the discretization procedure used for the space $M$ and related constructions for measures and random variables. We start by discretizing the space. Let $\mu \in \mc{P}(M)$ and recall that, since $M$ is a Polish space, there exists, for each $m \in \bb{N}$, a compact set $K_m$ with \begin{equation}\label{eq:choice_K} \mu(K_{m}^{\complement}) \leq \frac{e^{-m^{2}-1}}{m}. \end{equation} The support of the measure $\mu$ is contained in the closure of the union of the compacts $K_{m}$. Notice that the collection of probability measures supported on the closure of $\cup_{m=1}^{\infty} K_{m}$ forms a closed subset of $\mc{P}(M)$, and thus it is enough to prove a large deviations principle for this subspace (see~\cite[Lemma 4.1.5]{dembo_zeitouni}). We assume from now on that \begin{equation} M = \overline{\bigcup_{m=1}^{\infty} K_{m}}. \end{equation} Given a sequence of partitions $\big( \mc{A}_{m} \big)_{m \in \bb{N}}$, let $\mathcal{F}_{m}$ and $\mathcal{F}_{\infty}$ denote the $\sigma$-algebras generated by $\mathcal{A}_{m}$ and by the union $\cup_{m=1}^{\infty}\mathcal{A}_{m}$, respectively. 
We write $\mathcal{B}(M)$ for the Borel $\sigma$-algebra in $M$. \begin{lemma}\label{lemma:good_partition} There exists a sequence of nested partitions $\big( \mc{A}_{m} \big)_{m \in \bb{N}}$ such that $\mc{A}_{m} = \{ A_{m, 1}, \dots, A_{m, \ell_{m}} \}$ and \begin{itemize} \item $\diam(A_{m, i})<\frac{1}{m}$, if $i=1,\dots, \tilde{\ell}_{m}$, for some $\tilde{\ell}_{m} \leq \ell_{m}$. \item $K_{m}^{\complement} = \bigcup_{i=\tilde{\ell}_{m}+1}^{\ell_{m}} A_{m, i}$. \item $\mathcal{F}_{\infty} = \mathcal{B}(M)$. \end{itemize} \end{lemma} \begin{proof} Notice that if we can construct partitions $\mc{A}_{m}$ for each $m$ that satisfy the three first requirements of the lemma without requiring them to be nested, it is possible to take refinements in order to obtain a nested sequence. Recall the definition the compact set $K_{m}$ in~\eqref{eq:choice_K}. By compactness, it is possible to partition $K_{m}$ into subsets $\{ C_{m, 1}, \dots, C_{m, \bar{\ell}_m} \}$ of diameter at most $\frac{1}{m}$, so that $\mc{C}_m=\{K_{m}^{\complement}, C_{m, 1}, \dots, C_{m, \bar{\ell}_m}\}$ defines a partition of $M$. Consider an enumeration $\big(B^{i}\big)_{i \in \bb{N}}$ of balls of rational radius and centered in a countable dense subset of $M$. We now define the partition $\mc{A}_{m}$ via the intersections of sets in $\mc{C}_{m}$ with $B^{m}$ and its complement. We write \begin{equation} \mc{A}_{m} = \{ A_{m, 1}, \dots, A_{m, \ell_{m}} \}, \end{equation} where $A_{m, i}$, $i \leq \tilde{\ell}_{m}$, denotes the sets contained in $K_{m}$ and $A_{m, i}$, $\tilde{\ell}_{m}+1 \leq i \leq \ell_{m}$, indicates the sets contained in $K_{m}^{\complement}$. Notice that the first two statements about the partition $\mc{A}_{m}$ are immediately verified. To check the last claim, notice that $B^{i} \in \mathcal{F}_{i}$, and thus $B^{i} \in \mc{F}_{\infty}$, for all $i \in \bb{N}$, which implies $\mathcal{F}_{\infty} = \mathcal{B}(M)$ and concludes the proof. \end{proof} We select a subset $M_m:=\{a_{m, 1}, \dots, a_{m, \ell_{m}} \} \subset M$ such that $a_{m, i} \in A_{m, i}$ for $i=1, \dots,\ell_m$ and turn $(\mc{A}_{m},M_{m})$ into a tagged partition. We will furthermore assume that $M_m \subset M_{m+1}$. For each $m \in \bb{N}$, the tagged partition $(\mc{A}_{m}, M_{m})$ defines a natural projection $\pi^{m}: M \to M_{m}$ via \begin{equation} \pi^m(x)=a_{m, i}, \text{ if } x \in A_{m, i}. \end{equation} This allows us to define, for any measure $\upsilon \in \mc{P}(M)$, its discretized version $\upsilon^{m} \in \mc{P}(M)$ as the probability measure supported in $M_m$ given by the pushforward of $\upsilon$ via the map $\pi^{m}$, i.e. \begin{equation} \upsilon^{m}(a_{m, i})=\upsilon \big( (\pi^{m})^{-1}(a_{m, i}) \big) = \upsilon(A_{m, i}), \text{ for }i=1,\dots,\ell_m. \end{equation} Random elements are also discretized with the aid of the projection maps $\pi^{m}$. If $\big( X_i \big)_{i \in \bb{N}}$ is an i.i.d.\ sequence of random elements with distribution $\mu \in \mc{P}(M)$, then $X_{i}^{m}=\pi^m(X_i)$ yields an i.i.d.\ sequence of random elements distributed according to $\mu^{m}$. The empirical measure for the discretized elements is given by \begin{equation} \label{eq:DefLnk} L_n^{m}:=\frac{1}{n}\sum_{i=1}^n\delta_{X_i^{m}}. 
\end{equation} Since, for each $m \in \bb{N}$, the elements $X_{i}^{m}$ take values on the finite space $M_{m}$, we know that the sequence of empirical measures $\big( L_n^m \big)_{n \in \bb{N}}$ satisfies a large deviations principle on the space $\mc{P}(M_{m})$ with rate function $H( \,\cdot\, |\mu^m)$. Via~\cite[Lemma 4.1.5]{dembo_zeitouni}, we can extend these large deviation principles to the whole space $\mc{P}(M)$ with rate function also given by $H( \,\cdot\, |\mu^m)$ (note that $H( \upsilon |\mu^m)$ is infinite if $\upsilon \notin \mc{P}(M_{m})$). Lemma~\ref{lemma:entropy} yields the following expression for the rate function $H( \upsilon|\mu^m)$, when $\upsilon \in \mc{P}(M_{m})$: \begin{equation} H(\upsilon|\mu^m) = \sum_{a \in M_m}\upsilon(a)\log \frac{\upsilon(a)}{\mu^m(a)}. \end{equation} \section{Proof of Theorem \ref{theoem:sanov}}\label{sec:theorem} ~ \par In this section we present our approach to the proof of Sanov's Theorem. Our goal is to deduce that the empirical measures $L_{n}$ given by~\eqref{eq:DefLn} satisfy a large deviations principle from the information that the sequences $\big( L_n^m \big)_{n \in \bb{N}}$ satisfy large deviations principles, for all $m \in \bb{N}$. Since the rate function given by Sanov's Theorem (Theorem~\ref{theoem:sanov}) is the relative entropy, the following two lemmas that relate the entropies in discrete and Polish spaces are the central pieces in our proof. \begin{lemma}\label{le:supgoodpart} For any two probability measures $\mu, \upsilon \in \mc{P}(M)$, \begin{equation}\label{eq:limit_entropy} \sup_m H(\upsilon^m|\mu^m)=\lim_{m} H(\upsilon^m|\mu^m)=H(\upsilon|\mu). \end{equation} Furthermore, if $\sup_m H(\upsilon^{m}| \mu^m)<\infty$ then there exists a positive constant $c \geq 0$ such that \begin{equation} \d_{\rm BL}(\upsilon^m,\upsilon)\leq \frac{c}{m}. \end{equation} \end{lemma} \begin{remark}\label{remark:rate_function} Notice that the lemma above provides an alternative expression for the rate function in Sanov's Theorem, depending on the sequence of partitions chosen. Even more is true: the supremum in~\eqref{eq:limit_entropy} can be taken over all finite partitions of $M$. In fact, simply notice that, if $(\mathcal{A}, \mathcal{M})$ is a dotted partition of $M$ with projection map $\pi: M \to \mathcal{M}$, then any function $f: \mathcal{M} \to \bb{R}$ can be extended to $M$ via $\tilde{f}=f \circ \pi$. To conclude, apply the variational formulation of relative entropy to obtain $H(\upsilon^{\mathcal{A}}|\mu^\mathcal{A}) \leq H(\upsilon|\mu)$. \end{remark} \begin{lemma}\label{lemma:supinf} For any $\mu, \upsilon \in \mc{P}(M)$ and $m_{0} \in \bb{N}$, \begin{equation} \sup_{m \geq m_{0}} \inf_{\sigma \in \bar{B}_{\frac{1}{\sqrt{m}}}(\upsilon)}H(\sigma|\mu^m)=H(\upsilon|\mu). \end{equation} \end{lemma} We prove Lemma~\ref{le:supgoodpart} in Section~\ref{sec:proof:le:supgoodpart} and Lemma~\ref{lemma:supinf} in Section~\ref{section:proof_supinf}. The next lemma is a result of exponential equivalence. \begin{lemma}\label{prop:boundLkLnk} Let $L_n$ and $L_n^m$ as defined in Equations~\eqref{eq:DefLn} and~\eqref{eq:DefLnk}, respectively. Then \begin{equation} \bb{P}\bigg( \d_{\rm BL}(L_n,L_n^m) > \frac{3}{m}\bigg) \leq \exp\big(-mn\big). \end{equation} \end{lemma} \begin{proof} Observe that if $X_i \in A_{m, j}$ for some $j=1,\dots,\tilde{\ell}_m$ then $d(X_i,X_i^{m})\leq 1/m$. 
In particular, \begin{equation} \d_{BL}(L_n,L_n^m)\leq \frac{1}{n}\sum_{i=1}^n \d(X_i,X_i^{m}) \wedge 2 \leq \frac{1}{m}+\frac{2}{n}\sum_{i=1}^n1_{\{X_i \in K_m^{\complement}\}}. \end{equation} This implies \begin{equation} \bb{P}\bigg( \d_{BL}(L_n,L_n^m)>\frac{3}{m}\bigg)\leq \bb{P}\bigg(\frac{1}{n}\sum_{i=1}^n 1_{\{X_i \in K_m^{\complement}\}}>\frac{1}{m}\bigg) \end{equation} In order to bound the last probability, we use union bound and independence to obtain \begin{equation} \begin{split} \bb{P}\bigg(\frac{1}{n}\sum_{i=1}^n 1_{\{X_i \in K_m^{\complement}\}}>\frac{1}{m}\bigg) & \leq \sum_{A \subset [n]: |A|=\frac{n}{m}} \bb{P} \big(X_i \in K_m^{\complement}, \text{ for all } i \in A \big) \\ & \leq \binom{n}{\frac{n}{m}}\mathcal{B}ig( \frac{e^{-m^{2}-1}}{m} \mathcal{B}ig)^{\frac{n}{m}} \\ & \leq \bigg(em\frac{e^{-m^{2}-1}}{m} \bigg)^{\frac{n}{m}} \\ & \leq \exp\big(-mn\big), \end{split} \end{equation} concluding the proof. \end{proof} We are now ready to work on the proof of Theorem~\ref{theoem:sanov}. It is proved in~\cite[Lemma 6.2.6]{dembo_zeitouni} that the sequence $\big( L_n \big)_{n \in \bb{N}}$ is exponentially tight. In particular, there exists a subsequence $\big( L_{n_k} \big)_{n_{k}}$ that satisfies a large deviations principle with rate function $I$. From now on, we drop the subscript $k$ in $n_{k}$. Notice that \begin{equation} \label{eq:defI} -I(\upsilon) = \lim_{\varepsilon\to 0} \lim_{n\to\infty} \frac{1}{n}\log \bb{P}(L_n\in B_\varepsilon(\upsilon)). \end{equation} Even though the rate function $I$ might depend on the subsequence, our goal is to prove that this is not the case. In fact, we prove that $I(\, \cdot \,)=H( \, \cdot \,|\mu)$. In Proposition~\ref{prop:HgeqI}, we prove that $H( \, \cdot \,|\mu) \geq I(\, \cdot \,)$, while the opposite inequality is established in Proposition~\ref{prop:HleqI}. This concludes the proof of Theorem~\ref{theoem:sanov}, since any possible subsequence $\big( L_{n_{k}} \big)_{n_{k}}$ that satisfies a large deviations principle does so with the same rate function $H( \,\cdot\, | \mu)$, which implies that the whole sequence also satisfies a large deviations principle. \begin{proposition} \label{prop:HgeqI} The function $I$ in Equation~\eqref{eq:defI} satisfies $H(\, \cdot \,|\mu)\geq I(\, \cdot \,)$. \end{proposition} \begin{proof} Fix $\upsilon \in \mc{P}(M)$ and notice we can assume that $H(\upsilon | \mu)$ is finite since the statement is trivially verified if otherwise. Due to Lemma~\ref{le:supgoodpart}, we have \begin{equation} \d_{\rm BL}(\upsilon,\upsilon^m) \leq \frac{c}{m}, \end{equation} for some positive constant $c>0$. In particular, this implies \begin{equation}\label{eq:first_bound} \bb{P}( L^m_n \in B_\varepsilon(\upsilon^m))\leq \bb{P}\mathcal{B}ig( L_n \in \bar{B}_{\varepsilon+\frac{3+c}{m}}(\upsilon) \mathcal{B}ig)+\bb{P} \mathcal{B}ig( \d_{\rm BL}(L_n,L_n^m)> \frac{3}{m} \mathcal{B}ig), \end{equation} which yields \begin{equation}\label{eq:first_estiamte} \begin{split} \frac{1}{n} & \log \bb{P}(L^m_n \in B_\varepsilon(\upsilon^m)) \leq \frac{\log 2}{n} \\ & \qquad \qquad + \max \left\{\frac{1}{n}\log \bb{P}\mathcal{B}ig( L_n \in \bar{B}_{\varepsilon+\frac{3+c}{m}}(\upsilon) \mathcal{B}ig), \frac{1}{n}\log \bb{P} \mathcal{B}ig( \d_{\rm BL}(L_n,L_n^m)> \frac{3}{m} \mathcal{B}ig) \right\}. 
\end{split} \end{equation} Lemma~\ref{prop:boundLkLnk} gives \begin{equation} \frac{1}{n}\log \bb{P} \mathcal{B}ig( \d_{\rm BL}(L_n,L_n^m)> \frac{3}{ m} \mathcal{B}ig) \leq -m, \end{equation} and, by taking the limit as $n$ grows in~\eqref{eq:first_estiamte}, \begin{equation} -\inf_{\sigma \in B_{\varepsilon}(\upsilon^m)} H(\sigma|\mu^{m}) \leq \max \bigg\{-\inf_{\sigma \in \bar{B}_{\varepsilon+\frac{3+c}{m}}(\upsilon)}I(\sigma),-m \bigg\}. \end{equation} We now take the limit as $\varepsilon$ goes to zero to obtain \begin{equation} -H(\upsilon^m | \mu^m) \leq \max \bigg\{- \inf_{\sigma \in \bar{B}_{\frac{3+c}{m}}(\upsilon)} I (\sigma),-m \bigg\}, \end{equation} which readily implies, via Lemma~\ref{le:supgoodpart}, \begin{equation}\label{eq:hgI} H(\upsilon | \mu) = \sup_{\bar m}H(\upsilon^{\bar m}|\mu^{\bar m}) \geq \min \bigg\{ \inf_{\sigma \in \bar{B}_{\frac{3+c}{m}}(\upsilon)} I(\sigma), m \bigg\}, \end{equation} for all $m \in \bb{N}$. Since the function $I$ is lower semicontinuous, \begin{equation} \lim_{m \to \infty} \inf_{\sigma \in \bar{B}_{\frac{3+c}{m}}(\upsilon)}I(\sigma)=I(\upsilon). \end{equation} In particular, \begin{equation} H(\upsilon | \mu) \geq I(\upsilon), \end{equation} concluding the proof. \end{proof} \begin{proposition} \label{prop:HleqI} We have $H( \,\cdot\, |\mu)\leq I(\, \cdot \,)$. \end{proposition} \begin{proof} Fix $\upsilon \in \mc{P}(M)$ and observe once again that \begin{equation*} \begin{split} \frac{1}{n} & \log \bb{P}\big( L_n \in B_\varepsilon(\upsilon) \big) \leq \frac{\log 2}{n} \\ & \qquad \qquad + \max \bigg\{\frac{1}{n} \log \bb{P} \big( L_n^m \in \bar{B}_{\varepsilon+\frac{1}{\sqrt{m}}}(\upsilon) \big), \frac{1}{n} \log \bb{P} \big( \d_{\rm BL} (L_n,L_n^m)>\tfrac{1}{\sqrt{m}} \big) \bigg\}. \end{split} \end{equation*} Taking $n \to \infty$ and $\varepsilon\to 0$, we obtain with the aid of Lemma~\ref{prop:boundLkLnk} \begin{equation}\label{eq:bound_I} -I(\upsilon) \leq \max \bigg\{-\inf_{\sigma \in \bar {B}_{\frac{1}{\sqrt{m}}}(\upsilon)}H(\sigma|\mu^m),\,-m \bigg\}. \end{equation} Let us now split the discussion in whether $H(\upsilon|\mu)$ is finite or not. Assume first that this relative entropy is infinite and notice that Lemma~\ref{lemma:supinf} implies \begin{equation} \sup_{m}\inf_{\sigma \in \bar {B}_{\frac{1}{\sqrt{m}}}(\upsilon)}H(\sigma|\mu^m) = \infty, \end{equation} which readily implies $I(\upsilon)=\infty$, when combined with~\eqref{eq:bound_I}. If on the other hand we have $H(\upsilon|\mu)< \infty$, we combine~\eqref{eq:bound_I} and Lemma~\ref{lemma:supinf} with $m_{0} \geq H(\upsilon|\mu)$ to obtain \begin{equation} I(\upsilon) \geq \min \bigg\{ \inf_{\sigma \in \bar{B}_{\frac{1}{\sqrt{m}}}(\upsilon)} H(\sigma|\mu^m), m \bigg\} \geq \inf_{\sigma \in \bar{B}_{\frac{1}{\sqrt{m}}}(\upsilon)} H(\sigma|\mu^m), \end{equation} for every $m \geq m_{0}$. Taking the supremum in $m$ concludes the proof. \end{proof} \section{Proof of Lemma~\ref{le:supgoodpart}}\label{sec:proof:le:supgoodpart} ~ \par In this section we prove Lemma~\ref{le:supgoodpart}. We start with the following preliminary lemma, which in particular implies the second part of Lemma~\ref{le:supgoodpart}. We prove the first part afterwards. \begin{lemma}\label{lemma:convergenceNukDiv} If $\sigma \in \mc{P}(M)$ is such that $H(\sigma|\mu) \leq \alpha$, then, for any $\theta>0$, we have \begin{equation}\label{eq:distance_estimate} \d_{\rm BL}(\sigma^m,\sigma) \leq \frac{1}{m}+2\frac{\alpha}{\theta}+ 2\frac{e^{-m^{2}-1+\theta}}{m\theta}. 
\end{equation} In particular, \begin{equation} \d_{\rm BL}(\sigma^m,\sigma) \leq \frac{3+2\alpha}{m}, \text{ for all } m \in \bb{N}. \end{equation} \end{lemma} \begin{proof} Consider $X \sim \sigma$ and notice that $X^{m} = \pi^{m}(X)$ has distribution $\sigma^{m}$. Therefore, \begin{equation} \d_{\rm BL}(\sigma^m,\sigma)\leq \bb{E}(\d(X^m,X)\wedge 2). \end{equation} Splitting on whether $X\in K_{m}$ or not, we obtain \begin{equation}\label{eq:distance_estimate_1} \d_{\rm BL}(\sigma^m,\sigma)\leq \frac{1}{m}+2\sigma(K_m^{\complement}). \end{equation} We now combine the entropy inequality with the bound $\log (1+x)\leq x$ to obtain, for $\theta>0$, \begin{equation}\label{eq:bound_bad_probability} \begin{split} \sigma (K_m^{\complement}) & = \frac{1}{\theta} \mathbb{E}_{\sigma}[ \theta 1_{K_m^{\complement}} ] \leq \frac{1}{\theta}\mathcal{B}ig( H(\sigma|\mu)+\log \bb{E}_{\mu} [e^{\theta 1_{K_m^{\complement}}}] \mathcal{B}ig) \\ & = \frac{1}{\theta} \mathcal{B}ig( \alpha + \log \big(1-\mu(K_m^{\complement})+e^\theta \mu(K_m^{\complement})\big) \mathcal{B}ig) \\ & \leq \frac{\alpha}{\theta} + \frac{(e^{\theta}-1)}{\theta} \mu(K_m^{\complement}) \\ & \leq \frac{\alpha}{\theta} + \frac{(e^{\theta}-1)}{\theta}\frac{e^{-m^{2}-1}}{m}, \end{split} \end{equation} by the choice of $K_m$ in~\eqref{eq:choice_K}. Combining the equation above with~\eqref{eq:distance_estimate_1} concludes the proof of~\eqref{eq:distance_estimate}. Choose now $\theta=m$ in~\eqref{eq:distance_estimate} to obtain \begin{equation} \d_{\rm BL}(\sigma^m,\sigma)\leq \frac{1}{m}+2\frac{\alpha}{m} + 2\frac{e^{-m^{2}+m-1}}{m^{2}} \leq \frac{3+2\alpha}{m}, \end{equation} concluding the proof. \end{proof} Second, we provide a martingale that will be useful during the proof. \begin{lemma}\label{lemma:martingale} Assume either that $H(\upsilon|\mu)$ or $\sup_{m}H(\upsilon^{m}|\mu^{m})$ is finite. Then \begin{equation}\label{eq:martingale} S_m = \frac{\d \upsilon^m}{\d \mu^m} \circ \pi^{m} \end{equation} is an uniformly-integrable martingale in the probability space $\big(M, \mc{B}(M), \mu \big)$ with respect to the filtration $\big(\mathcal{F}_{m}\big)_{m \in \bb{N}}$. \end{lemma} \begin{proof} Assume first that $H(\upsilon|\mu) < \infty$. In this case, $\tfrac{\d \upsilon}{\d \mu}$ exists and \begin{equation} \hat{S}_m = \mathbb{E}_\mu \mathcal{B}ig[ \frac{\d \upsilon}{\d \mu} \mathcal{B}ig| \mathcal{F}_m \mathcal{B}ig] \end{equation} is a uniformly-integrable martingale. It follows directly from the definition of conditional expectation and Radon-Nikodyn derivative that $\hat{S}_{m}=S_{m}$ almost surely for every $m \in \bb{N}$, concluding the proof of the first case. Assume now that $\sup_{m}H(\upsilon^{m}|\mu^{m}) < \infty$ and observe that this implies that $S_m$ is well defined for all $m \in \bb{N}$, has expectation one, and is non-negative. We first have to verify that $\mathbb{E}[S_{m+1} |\mathcal{F}_m] = S_m$. Take an element $A_{m,k} \in \mathcal{F}_m$, with $0 \leq k \leq \ell_{m}$, so that \begin{equation} \mathbb{E}_\mu [S_{m} \cdot 1_{A_{m,k}}] = \frac{\upsilon(A_{m,k})}{\mu(A_{m,k})} \mathbb{E}_{\mu}[ 1_{A_{m,k}}] = \upsilon(A_{m,k}). \end{equation} Now, let $B_1, \cdots, B_j$ elements of $\mathcal{F}_{m+1}$ such that $\cup_{i=1}^j B_i = A_{m,k}$, then \begin{equation} \begin{split} \mathbb{E}_\mu\left[\mathbb{E}_\mu[S_{m+1} | \mathcal{F}_m \right] \cdot 1_{A_{m,k}}] & = \mathbb{E}_\mu\left[ \mathbb{E}_\mu\left[S_{m+1} \left. 
\sum_{i=1}^j 1_{B_i} \right| \mathcal{F}_m \right] \right] \\ & = \mathbb{E}_\mu\left[ \mathbb{E}_\mu\left[\left. \sum_{i=1}^j \frac{\upsilon(B_i)}{\mu(B_i)} 1_{B_i} \right| \mathcal{F}_m \right] \right] \\ &= \sum_{i=1}^j \frac{\upsilon(B_i)}{\mu(B_i)} \mu(B_i) \\ &= \upsilon(A_{m,k})=\mathbb{E}[S_m1_{A_{m,k}}], \end{split} \end{equation} which implies $\mathbb{E}[S_{m+1} |\mathcal{F}_m] = S_m$, concluding our first statement. In order to verify the uniform integrability of $S_{m}$, observe that \begin{equation} \mathbb{E}_\mu[S_m \log S_m] = H(\upsilon^{m}|\mu^{m}) \leq \sup_{m} H(\upsilon^{m}|\mu^{m}):=K < \infty. \end{equation} Now, for each $M>0$ we have, uniformly in $m \in \bb{N}$, \begin{equation} \mathbb{E}_\mu[S_m 1_{\{S_m \geq M\}}] \leq \mathbb{E}_\mu \left[ S_m 1_{\{S_m \geq M\}} \dfrac{\log S_m}{\log M} \right] \leq \frac{K}{\log M}. \end{equation} Therefore, $S_m$ is a uniformly-integrable martingale, concluding the proof of the lemma. \end{proof} We are now in a position to prove the first part of Lemma~\ref{le:supgoodpart}. \begin{proof}[Proof of Lemma~\ref{le:supgoodpart}] We first observe that, via the variational definition of entropy and the fact that $M_{m} \subset M_{m+1}$, for all $m \in \bb{N}$, we obtain that $H(\upsilon^m|\mu^m)$ is monotone increasing in $m$ (following the steps pointed out in Remark~\ref{remark:rate_function} or directly as a consequence of~\cite[Corollary 5.2.2]{gray}). In particular, \begin{equation} \sup_m H(\upsilon^m|\mu^m)=\lim_{m \to \infty} H(\upsilon^m|\mu^m) \end{equation} and thus it suffices to verify that \begin{equation}\label{eq:limit} \lim_m H(\upsilon^m|\mu^m) = H(\upsilon|\mu). \end{equation} Again from the variational definition of relative entropy we have $H(\upsilon^m|\mu^m) \leq H(\upsilon|\mu)$, for all $m \in \bb{N}$, so that \begin{equation} \limsup_{m \to \infty} H(\upsilon^{m}|\mu^{m}) \leq H(\upsilon|\mu). \end{equation} We now work on the proof of the reverse inequality. The strategy of the proof is as follows. If at least one of the two quantities of interest is finite, we have access to the uniformly-integrable martingale $S_{m}$ given by the Radon-Nikodym derivative of $\upsilon^{m}$ with respect to $\mu^{m}$. As we will see, this martingale converges in $L^{1}$ and almost surely to $\tfrac{\d \upsilon}{\d \mu}$, which will yield the result when combined with Fatou's Lemma. Assume that either $H(\upsilon|\mu)<\infty$ or $\sup_{m}H(\upsilon^{m}|\mu^{m})<\infty$. The martingale $S_{m}$ introduced in~\eqref{eq:martingale} is uniformly integrable and thus converges almost surely and in $L^{1}$ to a random variable $X$. In the case $H(\upsilon|\mu)<\infty$, we have \begin{equation} X=\mathbb{E}_{\mu}\Big[ \frac{\d \upsilon}{\d \mu} \Big| \mathcal{F}_{\infty} \Big] = \frac{\d \upsilon}{\d \mu}, \end{equation} since $\mathcal{F}_{\infty} = \mathcal{B}(M)$ (see Lemma~\ref{lemma:good_partition}). If we are in the case $\sup_{m}H(\upsilon^{m}|\mu^{m})<\infty$, the above also holds precisely because elements in $\mathcal{B}(M)$ can be approximated by elements in $\cup_{m=1}^{\infty} \mathcal{F}_{m}$. We now note that, since $x \log x \geq -e^{-1}$, Fatou's Lemma implies \begin{equation} \liminf_{m} \mathbb{E}_\mu \big[ S_{m} \log S_{m} \big] \geq \mathbb{E}_\mu \Big[ \frac{\d \upsilon}{\d \mu} \log \frac{\d \upsilon}{\d \mu} \Big], \end{equation} which verifies~\eqref{eq:limit_entropy} (see also Lemma~\ref{lemma:entropy}) and concludes the proof of the lemma.
\end{proof} \section{Proof of Lemma~\ref{lemma:supinf}}\label{section:proof_supinf} ~ \par In this section we prove Lemma~\ref{lemma:supinf}. We fix $m_{0} \in \bb{N}$ and denote by \begin{equation}\label{eq:sanov:defIstar} I^0(\upsilon) := \sup_{m \geq m_{0}} \inf_{\sigma \in \bar{B}_{\frac{1}{\sqrt{m}}}(\upsilon)} H(\sigma|\mu^m). \end{equation} Our goal is to show that $I^0(\upsilon)= H(\upsilon|\mu)$. We will prove this in two steps, by checking that $I^0(\upsilon) \leq H(\upsilon|\mu)$ and $I^0(\upsilon) \geq H(\upsilon|\mu)$. The first inequality is verified in the next paragraph. The reverse inequality is more delicate and we dedicate the rest of the section to verify it. Let us check that $I^0(\upsilon) \leq H(\upsilon|\mu)$. Indeed, the inequality is trivial if $H(\upsilon|\mu) = \infty$. If on the other hand this entropy is finite, we have, in view of Lemma~\ref{le:supgoodpart}, \begin{equation} \d_{\rm BL}(\upsilon, \upsilon^{m}) \leq \frac{c}{m } \leq \frac{1}{\sqrt{m}}, \end{equation} for $m$ large enough, from which our claim follows by noting that $\upsilon^{m} \in \bar{B}_{\frac{1}{\sqrt{m}}}(\upsilon)$ and applying Lemma~\ref{le:supgoodpart}. We now focus on the proof of the inequality \begin{equation}\label{eq:goal} I^0(\upsilon)\geq H(\upsilon|\mu). \end{equation} Once again we assume that $I^0(\upsilon)<\infty$, since the alternative case is trivial. The first observation we make is that~\eqref{eq:goal} follows if, for any $\alpha > 0$, \begin{equation} \label{implicationInequality} I^0(\upsilon) < \alpha \text{ implies } H(\upsilon|\mu) \leq \alpha. \end{equation} This follows directly from the following lemma together with the lower semicontinuity of the relative entropy $H( \,\cdot\, |\mu)$. \begin{lemma} If $I^0(\upsilon) < \alpha$, then, for every $\varepsilon>0$, there exists $\rho \in B_{\varepsilon}(\upsilon)$ such that \begin{equation} H(\rho|\mu) \leq \alpha. \end{equation} \end{lemma} \begin{proof} Our goal will be find $\rho$ such that $H(\rho|\mu) \leq \alpha$ and $\d_{\rm BL}(\upsilon, \rho)< \varepsilon$. Fix $m \geq m_{0}$ large enough such that \begin{equation} \frac{3+2\alpha}{m}+\frac{1}{\sqrt{m}} < \varepsilon, \end{equation} Recall from~\eqref{eq:sanov:defIstar} that $I^0(\upsilon) < \alpha$ implies that there exists $\sigma \in \bar{B}_{\frac{1}{\sqrt{m}}}(\upsilon)$ such that $H(\sigma|\mu^{m}) \leq \alpha$. Notice that, since this entropy is finite, we have $\sigma = \sigma^{m}$. Define \begin{equation}\label{eq:sanov:defnu} \rho(F):=\sum_{i=1}^{\ell_m}\frac{\sigma(A_{m, i})}{\mu (A_{m, i})} \mu(F \cap A_{m, i}) = \sum_{i=0}^{\ell_m} \mu(F | A_{m, i}) \sigma(A_{m, i}). \end{equation} Via direct substitution it follows that $\rho^{m} = \sigma$. We claim that $H(\rho|\mu)=H(\sigma|\mu^{m}) \leq \alpha$ and $\d_{\rm BL}(\upsilon, \rho)< \varepsilon$. In order to verify that $H(\rho|\mu)=H(\sigma|\mu^{m})$, observe that \begin{equation} \begin{split} H(\rho^{m+j}|\mu^{m+j}) & =\sum_{i=0}^{\ell_{m+j}} \rho(A_{m+j, i}) \log \frac{\rho(A_{m+j, i})}{\mu(A_{m+j, i})}\\ &=\sum_{i=0}^{\ell_m}\sum_{k:A_{m+j, k}\subset A_{m, i}} \rho(A_{m+j, k})\log\frac{ \rho(A_{m+j, k})}{\mu(A_{m+j, k})}. \end{split} \end{equation} Furthermore, if $A_{m+j, k} \subset A_{m, i}$, then, from~\eqref{eq:sanov:defnu}, \begin{equation} \rho(A_{k+j, m})=\frac{\sigma(A_{m, i})}{\mu(A_{m, i})}\mu(A_{m+j, k}). 
\end{equation} Therefore, \begin{equation} \begin{split} H(\rho^{m+j}|\mu^{m+j}) & = \sum_{i=0}^{\ell_m} \sum_{k:A_{m+j, k}\subset A_{m, i}} \frac{\sigma(A_{m, i})}{\mu(A_{m, i})}\mu(A_{m+j, k}) \log\frac{\sigma(A_{m, i})}{\mu(A_{m, i})} \\ & = H(\sigma|\mu^{m}), \end{split} \end{equation} since \begin{equation} \sum_{k:A_{m+j, k} \subset A_{m, i}} \mu(A_{m+j, k})= \mu(A_{m, i}). \end{equation} In particular, from Lemma~\ref{le:supgoodpart}, $H(\rho|\mu) = H(\sigma|\mu^{m}) \leq \alpha$. Finally, we now prove that $\d_{\rm BL}(\rho, \upsilon) < \varepsilon$ by estimating \begin{equation} \begin{split} \d_{\rm BL}(\rho, \upsilon) & \leq \d_{\rm BL}(\rho, \rho^{m}) + \d_{\rm BL}(\rho^{m}, \sigma) + \d_{\rm BL}(\sigma, \upsilon) \\ & \leq \frac{3+2\alpha}{m}+\frac{1}{\sqrt{m}} \leq \varepsilon, \end{split} \end{equation} where the last line uses Lemma~\ref{lemma:convergenceNukDiv}, since $H(\rho|\mu)$ is bounded by $\alpha$ and recalling that $\rho^m=\sigma$. This concludes the proof. \end{proof} \end{document}
\begin{document} \begin{abstract} We survey some new results regarding a priori regularity estimates for the Boltzmann and Landau equations conditional to the boundedness of the associated macroscopic quantities. We also discuss some open problems in the area. In particular, we describe some ideas related to the global well posedness problem for the space-homogeneous Landau equation for Coulomb potentials. \end{abstract} \title{Regularity estimates and open problems in kinetic equations} \section{Introduction} In this note, we discuss some recent developments and open problems concerning regularity estimates for kinetic equations. The material is based on the minicourse given by the author in the Barret Memorial lectures in May 2021. The lectures focused on the conditional regularity estimates for the Boltzmann equation. In this notes, we also discuss some additional topics. We describe the conditional regularity estimates for the Boltzmann and Landau equations, and the main techniques leading to that result. We explain how some techniques that were initially explored in the context of elliptic nonlocal equations apply to the context of the Boltzmann equation. We also discuss open problems in the space-homogeneous setting, and present some ideas and conjectures related to those problems. The program on conditional regularity consists in studying solutions to the inhomogeneous Boltzmann or Landau equation under the assumption that their associated macroscopic hydrodynamic quantities stay pointwise bounded. This assumption rules out the formation of an implosion singularity that would be visible macroscopically. The conditional regularity estimates tell us, essentially, that no other type of singularity other than macroscopic implosions are possible for the non-cutoff Boltzmann or Landau equations. There are various tools that play a role in obtaining the conditional regularity result. The first step, which is about $L^\infty$ bounds works for the case of hard or moderately soft potentials. In the very soft potential case (which corresponds to $\gamma+2s<0$, where $\gamma$ and $s$ are certain parameters in the collision kernel $B$ defined below), the lower order reaction term in the collision operator is too singular and difficult to control with the diffusion term. Currently, there are no upper bounds in the very soft potential case even for space-homogeneous solutions. The most extreme case, and also arguably the most interesting, is the Landau-Coulomb equation. We discuss the well known open problem of existence of global-in-time solutions for the Landau-Coulomb equation in Section \ref{s:space-homogeneous}. In this section we present informally some ideas that lead to new points of view on the problem. We include some conjectures. We describe the regularity results including minimal technical details about their proofs. A more detailed description of the methods involved in obtaining the conditional regularity result for the Boltzmann equations can be found, still in survey form, in \cite{imbert-silvestre-survey2020}. \section{Description of the equations} Kinetic equations describe the evolution in time of a function $f(t,x,v)$ representing the density of particles in a dilute gas with respect to their position and velocity. 
The kinetic representation of a fluid lies at an intermediate scale between the large dimensional dynamical system following the position and velocity of each and every particle, and the macroscopic hydrodynamic description given by the Euler and Navier-Stokes equations. A kinetic equation typically has the following form \[ f_t + v \cdot \nabla_x f + F(t,x) \cdot \nabla_v f = Q(f,f).\] The first two terms on the left hand side form the pure transport equation. In the absence of any force, each particle moves in a straight line with its corresponding velocity $v$. A model in which there is no interaction between particles would simply be $f_t + v \cdot \nabla_x f = 0$. The third term, $F(t,x) \cdot \nabla_v f$, accounts for the macroscopic force. The force $F(t,x)$ may be the result of external forces, or a mean field potential force computed in terms of the distribution $f$ itself. The right hand side, $Q(f,f)$, accounts for local interactions between particles, typically in the form of collisions. In these notes, we concentrate on models that are limited to local interactions. We take $F \equiv 0$ and reduce the equation to \begin{equation} \label{e:boltzmann} f_t + v \cdot \nabla_x f = Q(f,f). \end{equation} The right hand side $Q(f,f)$ is a quadratic nonlocal operator acting on $f(t,x,\cdot)$, for each value of $(t,x)$. The Boltzmann collision operator has the following form \begin{equation} \label{e:bco} Q(f,f)(v) = \int_{\mathbb R^d} \int_{S^{d-1}} (f'_\ast f' - f_\ast f) B(|v-v_\ast|,\cos \theta) \dd \sigma \dd v_\ast. \end{equation} Let us make some clarifications to understand the expression for $Q(f,f)$. It is a double integral, with respect to two parameters $v_\ast \in \mathbb R^d$ and a spherical variable $\sigma \in S^{d-1}$. The notation $f$, $f_\ast$, $f'$ and $f'_\ast$ denotes the values of the function $f$ evaluated at the same value of $(t,x)$, but with different velocities $v$, $v_\ast$, $v'$ and $v'_\ast$ respectively. The last two values depend on the integration parameters through the formulas \begin{align*} v' &= \frac{v+v_\ast}2 + \frac{|v-v_\ast|}2 \sigma, \\ v'_\ast &= \frac{v+v_\ast}2 - \frac{|v-v_\ast|}2 \sigma. \end{align*} The first term in the integral, $f'_\ast f'$, accounts for particles with pre-collisional velocities $v'_\ast$ and $v'$ that may be colliding at the point $(t,x)$ and turning their velocities to $v$ and $v_\ast$. The loss term $-f_\ast f$ accounts for particles of velocities $v_\ast$ and $v$ that are colliding and changing their velocities. The rate at which these collisions occur is measured by the nonnegative kernel $B$. There are different choices for this kernel depending on modelling assumptions. The angle $\theta$ measures the deviation between the pre- and postcollisional velocities. It is precisely given by the formula \[ \cos(\theta) = \frac{(v'-v'_\ast) \cdot (v-v_\ast)}{|v-v_\ast|^2}.\] We write the collision kernel as $B(|v-v_\ast|,\cos \theta)$ to emphasize that it only depends on these two values. In particular, it is an even and $2\pi$-periodic function of $\theta$. The Boltzmann collision operator \eqref{e:bco}, for any kernel $B \geq 0$ and any function $f \geq 0$, has remarkable cancellation properties that we list here: \begin{itemize} \item $\int Q(f,f) \dd v = 0.$ \item $\int v Q(f,f) \dd v = 0.$ \item $\int |v|^2 Q(f,f) \dd v = 0.$ \item $\int Q(f,f) \log f \,\dd v \leq 0.$ \end{itemize} Each of these items corresponds to a conserved or monotone physical quantity.
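A quick way to see where these cancellations come from is the symmetrized weak formulation of the collision operator. We only sketch it here: using the pre/post-collisional change of variables $(v,v_\ast)\leftrightarrow(v',v'_\ast)$ and the exchange $v \leftrightarrow v_\ast$, under which $B$ and the measure are invariant, one obtains, for any (say bounded) test function $\varphi$, writing $\varphi_\ast = \varphi(v_\ast)$, $\varphi' = \varphi(v')$ and $\varphi'_\ast = \varphi(v'_\ast)$, \[ \int_{\mathbb R^d} Q(f,f)\, \varphi \dd v = \frac 14 \int_{\mathbb R^d} \int_{\mathbb R^d} \int_{S^{d-1}} (f'_\ast f' - f_\ast f)\, \big(\varphi + \varphi_\ast - \varphi' - \varphi'_\ast\big)\, B(|v-v_\ast|,\cos \theta) \dd \sigma \dd v_\ast \dd v. \] The choices $\varphi = 1$, $\varphi = v$ and $\varphi = |v|^2$ make the factor $\big(\varphi + \varphi_\ast - \varphi' - \varphi'_\ast\big)$ vanish, because $v'+v'_\ast = v+v_\ast$ and $|v'|^2+|v'_\ast|^2 = |v|^2+|v_\ast|^2$, while $\varphi = \log f$ makes the integrand nonpositive, since $(f'_\ast f' - f_\ast f)$ and $\log(f_\ast f) - \log(f'_\ast f')$ have opposite signs.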
Any solution $f$ of the Boltzmann equation \eqref{e:boltzmann} formally satisfies the following conservation laws \begin{itemize} \item Conservation of mass: the integral $\iint f(t,x,v) \dd v \dd x$ is constant in time. \item Conservation of momentum: the integral $\iint v \, f(t,x,v) \dd v \dd x$ is constant in time. \item Conservation of energy: the integral $\iint |v|^2 \, f(t,x,v) \dd v \dd x$ is constant in time. \item Entropy dissipation: the integral $\iint f(t,x,v) \log f(t,x,v) \dd v \dd x$ is monotone decreasing in time. \end{itemize} It can be verified directly that the only functions $f : \mathbb R^d \to [0,\infty)$ so that $Q(f,f)=0$ are the Gaussians. In other words, $Q(f,f)=0$ if and only if $f(v) = a \exp(-b |v-u|^2)$ for some $a,b \geq 0$ and $u \in \mathbb R^d$. In the kinetic context, these stationary solutions of \eqref{e:boltzmann} are called \emph{Maxwellians}. \subsection{Common collision kernels and variants} The exact form of the collision kernel $B$ depends on modelling choices. Particles that bounce against each other like billiard balls lead to a different collision kernel than if we modeled particles that behave like sponges. There are many different possible microscopic interactions between particles, leading to a variety of choices for the collision kernel $B$. Rigorously justifying the derivation of the Boltzmann equation from microscopic particle interactions is a delicate mathematical problem, with several fundamental open questions. In many cases, one can carry out informal derivations based on heuristics. In this section, we merely list the most common collision kernels $B$ that are found in the literature, without any attempt to justify them. The hard spheres model is the one that results from particles that bounce like billiard balls. Its collision kernel $B$ takes a particularly simple form \[ B(r,\cos \theta) = r.\] Another natural model is to consider point particles that repel each other by a potential of the form $1/r^q$, with $q \geq 1$. In that case, the kernel $B$ is not explicit, but it satisfies the following bounds \begin{equation} \label{e:non-cutoff} B(r,\cos \theta) \approx r^\gamma |\sin(\theta/2)|^{-d+1-2s}. \end{equation} Here, we write $a \approx b$ to denote the fact that there is a constant $C$ so that $a/C \leq b \leq Ca$. The parameters $\gamma$ and $s$ depend on the power $q$ through the formulas \[ \gamma=\frac{q-2d+2}{q}, \qquad 2s = \frac 2 {q}.\] These kernels have a singularity at $\theta = 0$. Remarkably, for any values of $v$ and $v_\ast$ \[ \int_{S^{d-1}} B(|v-v_\ast|,\cos(\theta)) \dd \sigma = \infty.\] The expression \eqref{e:bco} still makes sense for any smooth function $f$ when $s \in (0,1)$. This is because the cancellation in the factor $(f'_\ast f' - f_\ast f)$ compensates the singularity in $B$. It is the same way we normally make sense of general integro-differential operators with singular kernels. Indeed, as we will see in Section \ref{s:ide}, the Boltzmann collision operator consists of a diffusion term of fractional order $2s$ plus a lower order term. When $s=1$, the kernel $B$ is too singular to make sense of the integral in \eqref{e:bco}. Taking $B(r,\cos(\theta)) = c_d (1-s) r^\gamma |\sin(\theta/2)|^{-d+1-2s}$, for some dimensional constant $c_d$, the expression \eqref{e:bco} converges to the following expression as $s \to 1$. \begin{equation} \label{e:landau_expression1} Q(f,f)(v) = \bar a_{ij}(v) \partial_{ij} f(v) + \bar c(v) f(v). 
\end{equation} Here \begin{equation} \label{e:landau} \bar a_{ij} = \int_{\mathbb R^d} (|w|^2 \delta_{ij} - w_i w_j) |w|^\gamma f(v-w) \dd w, \qquad \bar c = -\partial_{ij} \bar a_{ij} = c \, f \ast |\cdot|^\gamma. \end{equation} The case of Coulombic potentials in 3D corresponds to $d=3$, $q=1$, $s=1$ and $\gamma=-3$. In that case, the convolutions in \eqref{e:landau} also become too singular. An appropriate asymptotic limit of \eqref{e:landau} as $\gamma \to -3$ leads to the Landau-Coulomb operator with \begin{equation} \label{e:landau-coulomb} \bar a_{ij} = -\partial_{ij} (-\Delta)^{-2} f = \frac 1 {8\pi} \int_{\mathbb R^3} (|w|^2 \delta_{ij} - w_i w_j) |w|^{-3} f(v-w) \dd w, \qquad \bar c = f. \end{equation} There are alternative equivalent expressions to \eqref{e:landau_expression1} that are useful depending on the type of computation that we want to do. The following integral expression is equivalent to \eqref{e:landau_expression1} and is useful, for example, to prove the dissipation of entropy. \[ Q(f,f) = \partial_i \int_{\mathbb R^d} (|w|^2 \delta_{ij} - w_i w_j) |w|^\gamma ( f(v-w) \partial_j f(v) - \partial_j f(v-w) f(v) ) \dd w. \] The Landau operator is the limit of the Boltzmann collision operator as its angular singularity approaches its limit point $s=1$. One could argue that, because it is the most singular case, the Landau operator should be \emph{harder} to study than the Boltzmann collision operator. However, the Landau operator involves only classical differentiation instead of an integro-differential expression. In the end, depending on what we want to prove, sometimes the Landau equation is harder, and sometimes it is simpler, than the Boltzmann equation. It is useful to analyze the Landau operator as a way to understand the nature of the Boltzmann collision operator as well. The first term is a diffusion operator whose coefficients depend on the solution $f$. It is a quasilinear second order operator. The second term is of lower order. Note that the Landau operator is \textbf{not} the Laplacian plus a lower order term (i.e. it is not semilinear), in the same way that the Boltzmann collision operator is \textbf{not} the fractional Laplacian plus a lower order term. The Boltzmann collision operator \eqref{e:bco} does not make sense in one dimension ($d=1$). The constraints on the pre- and postcollisional velocities force $\{v',v'_\ast\} = \{v,v_\ast\}$. There is a common one-dimensional toy model proposed by Kac in which there is conservation of energy but not of momentum. The Kac collision operator takes the following form. \[ Q(f,f)(v) = \int_{\mathbb R} \int_{-\pi}^{\pi} (f'_\ast f' - f_\ast f) B(|v-v_\ast|,|\theta|) \dd \theta \dd v_\ast. \] Here, we write \begin{align*} v' &= v \cos \theta - v_\ast \sin \theta, \\ v'_\ast &= v \sin \theta + v_\ast \cos \theta. \end{align*} In the original Kac model $B(r,|\theta|) = 1/(2\pi)$. It also makes sense to consider non-cutoff kernels with bounds as in \eqref{e:non-cutoff}. It is natural to expect that the regularity estimates that we prove for the Boltzmann equation in Theorem \ref{t:conditional-regularity_boltzmann} should apply to the Kac model as well. However, as far as we are aware, this has not been explicitly analyzed so far. \subsection{Mathematical problems in kinetic equations} There are many mathematical problems related to kinetic equations. An important area of research is on the rigorous derivation of the models. 
We can study the derivation of every variant of the Boltzmann equation and the Landau equation from particle models. We can also study the derivation of the Euler and Navier-Stokes equations from the Boltzmann equation. Another, rather different, direction of research is on the well posedness of the equations. Ultimately, the key to prove a well posedness result lies in the a priori estimates that we are able to obtain for the solution. Using merely the conservation of mass and the entropy dissipation, one can construct a very weak notion of global solution. They are the \emph{renormalized solutions} in the cutoff case, and the \emph{renormalized solutions with defect measure} in the non-cutoff case. The uniqueness of solutions within this class is currently unknown. From an optimistic perspective, one may expect a solution $f$ to be $C^\infty$ smooth, strictly positive everywhere, and with a strong decay as $|v| \to \infty$. Proving the existence of such a solution would require estimates that ensure these properties. The main focus of this note is on these a priori estimates. There are several kinds of estimates that have attracted people's attention through the years. The following is a rough attempt to classify them. \begin{enumerate} \item \textbf{Moment estimates}. They refer to upper bounds on quantities of the form \[ \iint f(t,x,v) \omega(v) \dd v \dd x.\] Typically $\omega(v) = \langle v \rangle^q$. A moment estimate for some large value of $q$ should be understood as a type of decay estimate as $|v| \to \infty$. \item \textbf{Pointwise upper bounds}. They refer to an inequality of the form $f(t,x,v) \lesssim \omega(v)$. Typically, $\omega(v) = \langle v \rangle^{-q}$. One can easily argue that pointwise upper bounds are more desirable than moment estimates. Note that a bound of the form $f(t,x,v) \lesssim \langle v \rangle^{-d-q}$ implies a moment estimate for the weight $\langle v \rangle^p$ for any $p<q$, since $\int_{\mathbb R^d} \langle v \rangle^{p-d-q} \dd v < \infty$ precisely when $p<q$. \item \textbf{Regularity estimates}. They may be upper bounds on some weighted Sobolev norm, on some H\"older norm, or an upper bound on the derivatives of $f$. \item \textbf{Lower bounds}. They quantify how far $f$ is from vacuum. They have the form $f \gtrsim \omega(v)$. Because the Maxwellians are stationary solutions, the best possible lower bound one can hope for would have a Gaussian decay as $|v| \to \infty$. \item \textbf{Convergence rates to equilibrium}. As time $t \to +\infty$, it is natural to expect that the solution $f$ will converge to the equilibrium Maxwellian distribution. In some cases, it is possible to quantify this rate of convergence. \end{enumerate} We use the Japanese bracket convention $\langle v \rangle := \sqrt{1+|v|^2}$. Estimates of every kind described above are known in some circumstances. Below, we describe some of the mathematical difficulties and common simplifications. \subsubsection{Cutoff vs non-cutoff} It may seem convenient to be able to make sense of the formula \eqref{e:bco} for $Q(f,f)(v)$, even when $f$ is not necessarily smooth. For that, it would be desirable that the collision kernel $B$ be integrable with respect to the variable $\sigma$. This condition is known as Grad's cutoff assumption. It says that for any $v,v_\ast \in \mathbb R^d$, \begin{equation} \label{e:cutoff} \int_{S^{d-1}} B(|v-v_\ast|,\cos \theta) \dd \sigma < +\infty. \end{equation} The collision kernel $B$ corresponding to the hard spheres model satisfies this condition. 
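As a quick check, note that the hard spheres kernel does not depend on $\sigma$, so the integral in \eqref{e:cutoff} equals $|S^{d-1}| \, |v-v_\ast| < +\infty$. For a kernel with the angular singularity of \eqref{e:non-cutoff}, on the other hand, we have $\dd \sigma \approx \theta^{d-2} \dd \theta$ near $\theta = 0$, so the angular integral behaves like $\int_0 \theta^{-1-2s} \dd \theta$, which diverges for every $s>0$.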
The kernels $B$ that arise from inverse power law models \eqref{e:non-cutoff}, in contrast, do not satisfy \eqref{e:cutoff}. The Boltzmann equation with a kernel of the form \eqref{e:non-cutoff} is referred to as the non-cutoff Boltzmann equation. In the cutoff case, when \eqref{e:cutoff} holds, it is possible to split the Boltzmann operator $Q(f,f)$ into its \emph{gain} and \emph{loss} terms. That is, $Q=Q_+ - Q_-$, where \[ Q_+ = \int_{\mathbb R^d} \int_{S^{d-1}} f'_\ast f' \, B(|v-v_\ast|,\cos \theta) \dd \sigma \dd v_\ast, \qquad Q_- = \int_{\mathbb R^d} \int_{S^{d-1}} f_\ast f \, B(|v-v_\ast|,\cos \theta) \dd \sigma \dd v_\ast.\] Both terms $Q_+$ and $Q_-$ are nonnegative. It is common to use this decomposition when studying upper and lower bounds for the cutoff Boltzmann equation. In the non-cutoff case, it makes no sense to split $Q$ into a gain and a loss term. Indeed, the terms $Q_+$ and $Q_-$ defined above would both be equal to $+\infty$ in general. Interestingly, the non-cutoff Boltzmann equation exhibits a regularization effect that does not hold in the cutoff case. The Boltzmann collision operator \eqref{e:bco} is in some sense a quasilinear integro-differential elliptic operator of order $2s$. This regularization effect is the reason why this note focuses on the non-cutoff case. \subsubsection{Homogeneous vs inhomogeneous} When we consider solutions to \eqref{e:boltzmann} that are constant in $x$, the equation is significantly simpler. The second term vanishes and we have \[ f_t = Q(f,f).\] Thinking of $Q(f,f)$ as a quasilinear elliptic operator, what we have is a nonlinear parabolic equation that satisfies \begin{itemize} \item \textbf{Conservation of mass:} $\int_{\mathbb R^d} f(t,v) \dd v$. \item \textbf{Conservation of momentum:} $\int_{\mathbb R^d} v f(t,v) \dd v$. \item \textbf{Conservation of energy:} $\int_{\mathbb R^d} |v|^2 f(t,v) \dd v$. \item \textbf{Monotone decreasing entropy:} $\int_{\mathbb R^d} f(t,v) \log f(t,v) \dd v$. \end{itemize} From a macroscopic perspective, the space homogeneous equation is rather plain. It corresponds to a fluid whose macroscopically observable quantities are homogeneous and stationary. From a mathematical perspective, the homogeneous problem is significantly simpler than the inhomogeneous one. Well posedness is well understood for the homogeneous non-cutoff Boltzmann equation when $\gamma+2s \geq 0$. Remarkably, the existence of global in time classical solutions when $\gamma+2s < 0$ remains open. \section{On the existence of solutions} For any evolution equation that comes from mathematical physics, we are interested in whether the problem is well posed or not. In the case of the Boltzmann equation, we consider the Cauchy problem for a given initial value $f_0(x,v) \geq 0$, and we want to determine whether there exists a solution $f$ to \eqref{e:boltzmann} so that $f(0,x,v) = f_0(x,v)$. We know that a generalized notion of solution exists globally. It was established first in the cutoff case \cite{diperna1989} and then also in the non-cutoff case \cite{alexandre-villani2002renormalized}. It is unknown, for any kind of physically relevant collision kernel, whether these solutions may develop singularities in finite time starting from smooth data. It is also unclear whether the solution stays unique past a potential singularity. Establishing the existence of global smooth solutions depends crucially on obtaining a priori regularity estimates. Results about the existence of classical solutions can be classified into three rough sub-categories. 
They are the following. \begin{enumerate} \item \textbf{Global well posedness near equilibrium.} If the initial value $f_0$ is sufficiently close to a Maxwellian with respect to some appropriate norm, then it is often possible to prove that there exists a smooth solution with $f_0$ as its initial data. \item \textbf{Short time existence results.} For generic values of $f_0$ in some functional space, a short-time existence result says that there exists a solution $f$ in some interval of time $[0,T]$ for $T$ sufficiently small (depending on $f_0$). \item \textbf{Unconditional global well posedness.} For generic values of $f_0$ in some functional space, we would want to prove that there exists a solution $f$ defined for all positive times. This is the ultimate goal, but it might also be too ambitious. \end{enumerate} The analysis of solutions for short time is a basic type of well posedness result. However, it is a more complex issue than one might initially expect. The first result of this kind for the inhomogeneous non-cutoff Boltzmann equation appeared in \cite{amuxy2011qualitative} (see also \cite{amuxy-existence2}). It applies for $0<s<1/2$ and $\gamma+2s < 1$. It also requires the initial data to belong to a Sobolev space involving at least four derivatives in $x$ and $v$, with a weight that forces $f_0$ to have Gaussian decay as $|v| \to \infty$. In particular, the result in \cite{amuxy2011qualitative} cannot be applied for initial data that has polynomial, or even simply exponential, decay for large velocities. The first result that allows initial data with algebraic decay rate appeared in \cite{morimoto2015polynomial}. It was later improved in \cite{henderson2020polynomial}. It applies in the case of soft potentials ($\gamma \leq 0$) and requires the initial data to be in $H^4$ with a polynomial weight. As far as we are aware, there are no further short-time existence results for the inhomogeneous Boltzmann equation without cutoff explicitly covered in the literature. The case of initial data with algebraic decay rate (as in $f_0 \lesssim \langle v \rangle^{-q}$), with hard potentials ($\gamma > 0$), is not handled by any documented result. It is remarkable that even though the Boltzmann equation is one of the most studied problems in mathematical physics, such a basic question still remains unanswered. Note also that the issue of uniqueness of solutions depends exclusively on our short-time well posedness theory. There are a number of papers dealing with solutions near equilibrium, for practically the full range of parameters $\gamma$ and $s$. See \cite{gressmanstrain2011,amuxy2011_hard,amuxy2012_soft,duan2019global,herau2020regularization,alonso2018non,alonso2020giorgi,zhang2020global,silvestre2021solutions}. From the proofs in any of these papers, in theory one should be able to extract a proof of short-time existence. Its conditions on the initial data would be milder than for the global existence result. However, in most of these papers the short-time existence results are not explicitly stated, and it is difficult to determine what can be done in each case. For any inhomogeneous model like \eqref{e:boltzmann}, with a nontrivial and physically meaningful collision operator $Q$, the global existence of classical solutions for generic initial data $f_0$ remains an outstanding open problem. A short-time existence result is a first step in the Cauchy theory. In order to make the solution global, we would need sufficiently strong a priori estimates. 
If the solution stays bounded in a sufficiently strong norm, one would be able to reapply the short-time existence result arbitrarily many times and extend the solution forever. The key for global well posedness would be to combine a short-time existence result with good enough a priori estimates. It is not necessary to work with weak solutions. A priori estimates for classical solutions would suffice, since these are the type of solutions provided by short-time existence results. We review a priori estimates for the inhomogeneous Boltzmann equation in these notes. As we will see, they are not strong enough to ensure the unconditional global well posedness of the equation yet. While global well posedness results near equilibrium are interesting in themselves, they say hardly anything about the solutions for arbitrary initial data. Moreover, any appropriate short-time existence result combined with the convergence to equilibrium \cite{desvillettes2005global}, and Theorem \ref{t:conditional-regularity_boltzmann}, implies the existence of global solutions when the initial data is sufficiently close to a nonzero Maxwellian (see \cite{silvestre2021solutions}). It is also possible to study the global well posedness of the equations when the initial data is near zero (see \cite{luk2019} and \cite{chaturvedi2021}). It is a completely different setting, driven by dispersion rather than by diffusion. \subsection{Conditional regularity estimates} We now describe our recent results on a priori regularity estimates. Let us start with some discussion of what we would want to prove, and what we can realistically achieve. The hydrodynamic equations, like Euler and Navier-Stokes, describe the evolution of a fluid at a coarser scale than kinetic equations. The velocity $u(t,x)$, density $\rho(t,x)$, and temperature $\theta(t,x)$ in the compressible Euler equation make sense at a macroscopic scale. Microscopically, they correspond to some local average over particles of the fluid around each point $(t,x)$. The compressible Euler equation can be obtained formally as an asymptotic limit of solutions to the Boltzmann equation. The compressible Euler equation develops singularities in finite time. Should we expect the Boltzmann equation to develop singularities as well? Or is there any reason to believe that the Boltzmann equation will act as a regularization of the usual hydrodynamic equations? There are two very different kinds of singularities that one observes in the compressible Euler equation: shocks and implosions. A \emph{shock} is the same type of singularity that emerges from the flow of the Burgers equation. The values of the function stay bounded, and a discontinuity is created when two characteristic curves collide. It is as if the equation were pushing the solution to take two contradictory values at the same point. In the kinetic setting, on the contrary, there can be no such thing as a shock singularity. The function $f(t,x,v)$ allows for a distribution of different velocities coexisting at each point $(t,x)$. An \emph{implosion} singularity has a very different nature. The flow concentrates at one point, where the energy accumulates and the density blows up. While some implosion profiles for the Euler equations appeared a long time ago, it was only recently that smooth implosion profiles and their stability were properly understood \cite{merle2019smooth,merle2019implosion}. 
There is no obvious reason to rule out the existence of implosion singularities for the Boltzmann equation. It seems conceivable that there may exist a solution to the Boltzmann equation \eqref{e:boltzmann} whose mass, momentum and temperature density behave as in the implosion singularity of the compressible Euler equation. Rigorously constructing, or ruling out, such a solution seems like a very difficult problem right now. Beyond the possibility of implosion singularities, we may still wonder if there exists any other kind of kinetic singularity for the Boltzmann equation that is not related to anything that we can see macroscopically in the Euler equation. Our conditional regularity results described below rule this out. Essentially, they say that the solution to the non-cutoff Boltzmann (and Landau) equation will be $C^\infty$ smooth for as long as there is no implosion singularity in its associated macroscopically observable quantities. Let us state our conditions. They say that the mass density is bounded below and above, the energy density is bounded above, and the entropy density is bounded above. The following three inequalities will be assumed to hold at every point $(t,x)$. \begin{align} m_0 \leq \int_{\mathbb R^d} f(t,x,v) \dd v &\leq M_0, \label{e:mass_density_assumption} \\ \int_{\mathbb R^d} f(t,x,v) |v|^2 \dd v &\leq E_0, \label{e:energy_density_assumption} \\ \int_{\mathbb R^d} f(t,x,v) \log f(t,x,v) \dd v &\leq H_0. \label{e:entropy_density_assumption} \end{align} We stress that we will not prove the hydrodynamic inequalities \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption} and \eqref{e:entropy_density_assumption}. We take them as assumptions. From the discussion above, one would be inclined to believe that there may exist some initial data that evolves into an implosion singularity for which all these quantities blow up at a point. However, such a solution has not been constructed yet for any physically reasonable collision operator $Q$. Here is our conditional regularity theorem, proved in \cite{imbert2019global}. \begin{thm} \label{t:conditional-regularity_boltzmann} Let $f$ be a (classical) solution to the inhomogeneous Boltzmann equation \eqref{e:boltzmann}, periodic in space, where the collision operator $Q$ has the form \eqref{e:bco} and $B$ is the standard non-cutoff collision kernel of the form \eqref{e:non-cutoff} with $\gamma+2s \in [0,2]$. Assume that there are constants $m_0>0$, $M_0$, $E_0$ and $H_0$ such that the hydrodynamic bounds \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption} and \eqref{e:entropy_density_assumption} hold. Moreover, if $\gamma \leq 0$, we also assume that the initial data $f_0$ decays rapidly at infinity: $f_0 \leq C_q \langle v \rangle^{-q}$ for all $q>0$. Then, for any $\tau>0$, all derivatives $D^\alpha f$, for any multi-index $\alpha$, are bounded in $[\tau,\infty) \times \mathbb R^d \times \mathbb R^d$ and decay as $|v| \to \infty$. More precisely, there are constants $C_{\tau, \alpha,q}$ so that \[ \langle v \rangle^q |D^\alpha f(t,x,v)| \leq C_{\tau,\alpha,q} \qquad \text{for all } t>\tau, x\in \mathbb R^d, v \in \mathbb R^d.\] \end{thm} The proof of Theorem \ref{t:conditional-regularity_boltzmann} contains several ingredients that originated in the study of nonlocal parabolic equations. We describe some of the key ingredients in this manuscript. A more detailed description of the proof, still in survey form, is given in \cite{imbert-silvestre-survey2020}. 
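As a simple illustration of these assumptions (not part of the theorem itself), for a Maxwellian $f(v) = a e^{-b|v-u|^2}$ with $a,b>0$, all three quantities are explicit:
\[ \int_{\mathbb R^d} f \dd v = a \left(\frac{\pi}{b}\right)^{d/2}, \qquad \int_{\mathbb R^d} f |v|^2 \dd v = a \left(\frac{\pi}{b}\right)^{d/2} \left( |u|^2 + \frac{d}{2b} \right), \qquad \int_{\mathbb R^d} f \log f \dd v = a \left(\frac{\pi}{b}\right)^{d/2} \left( \log a - \frac d 2 \right). \]
The content of \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption} and \eqref{e:entropy_density_assumption} is that bounds of this type persist for the actual solution, uniformly in $(t,x)$.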
There is a parallel result for the inhomogeneous Landau equation, proved by Chris Henderson and Stan Snelson in \cite{hst-smoothing-landau}. We state it here. \begin{thm} \label{t:conditional-regularity_landau} Let $f$ be a (possibly weak) solution to the inhomogeneous Landau equation \eqref{e:boltzmann} where the collision operator $Q$ has the form \eqref{e:landau_expression1} and \eqref{e:landau} with $\gamma \in (-2,0)$. Assume that there are constants $m_0>0$, $M_0$, $E_0$ and $H_0$ such that the hydrodynamic bounds \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption} and \eqref{e:entropy_density_assumption} hold. There is a $\mu>0$, depending on $m_0$, $M_0$, $E_0$, $H_0$ and $\gamma$, so that if \[ f_0(x,v) \leq C e^{-\mu |v|^2}\] then, for any $\tau>0$ and $\mu' \in (0,\mu)$, all derivatives $D^\alpha f$, for any multi-index $\alpha$, are bounded in $[\tau,\infty) \times \mathbb R^d \times \mathbb R^d$ and decay as $|v| \to \infty$. More precisely, there are constants $C_{\tau, \mu', \alpha}$ so that \[ e^{\mu' |v|^2} |D^\alpha f(t,x,v)| \leq C_{\tau,\mu',\alpha} \qquad \text{for all } t>\tau, x\in \mathbb R^d, v \in \mathbb R^d.\] \end{thm} The proof of Theorem \ref{t:conditional-regularity_landau} follows similar steps to the proof of Theorem \ref{t:conditional-regularity_boltzmann}, except that instead of regularity techniques for nonlocal equations, it involves more classical second order diffusion. It uses the upper bounds from \cite{css-upperbounds-landau} instead of the ones from \cite{silvestre-boltzmann2016,imbert-mouhot-silvestre-decay2020}, and the Harnack inequality from \cite{gimv-harnacklandau} instead of \cite{imbert-silvestre-whi2020}. The regularity estimates from Theorems \ref{t:conditional-regularity_boltzmann} and \ref{t:conditional-regularity_landau} stay uniform as $t \to \infty$, or for as long as \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption} and \eqref{e:entropy_density_assumption} hold. This is important. For example, the celebrated convergence to equilibrium result by Desvillettes and Villani in \cite{desvillettes2005global} applies to solutions provided that they are uniformly $C^\infty$ as $t \to \infty$. However, if we only seek to construct a global smooth solution, it would suffice to have regularity estimates that are allowed to deteriorate for large time. In that case, it is shown in \cite{hst-continuation-landau,hst-continuation_boltzmann} that the upper bounds on mass density and energy density suffice. We summarize both results in the following statement. \begin{thm} \label{t:continuation} Let $f$ be a solution to \eqref{e:boltzmann}, periodic in space, where the collision operator $Q$ is either the non-cutoff Boltzmann collision operator \eqref{e:bco} with $B$ as in \eqref{e:non-cutoff} with $s \in (0,1)$ and $\gamma+2s \in [0,2]$, or the Landau operator \eqref{e:landau_expression1} and \eqref{e:landau} with $\gamma \in (-2,0)$. Assume that there are upper bounds for the mass and energy density. That means that there is a constant $N_0$ so that for all $t,x$, \begin{align*} \int f(t,x,v) (1+|v|^2) \dd v &\leq N_0. \end{align*} Assume that $f_0$ is continuous and periodic in $x$. It does not need to be strictly positive. There might be values of $x$ so that $f_0(x,\cdot)$ vanishes. In the case that $\gamma \leq 0$, for the Boltzmann equation, we also assume that $f_0(x,v) \leq C_q \langle v \rangle^{-q}$ for all $q>0$. 
In the case of the Landau equation, we also assume $f_0(x,v) \leq C e^{-\mu |v|^2}$. Then, for all derivatives $D^\alpha f$, for any multi-index $\alpha$, there exist upper bounds of the form \[ |D^\alpha f(t,x,v)| \leq \begin{cases} C_{\alpha,q,f_0}(t) \langle v \rangle^{-q} & \text{for any } q \geq 0, \text{ for Boltzmann,} \\ C_{\alpha,\mu',f_0}(t) e^{-\mu' |v|^2} & \text{for any } \mu' \in [0,\mu), \text{ for Landau}. \\ \end{cases} \] \end{thm} Unlike the upper bounds in Theorems \ref{t:conditional-regularity_boltzmann} and \ref{t:conditional-regularity_landau}, the upper bounds in Theorem \ref{t:continuation} deteriorate as $t \to \infty$. They suffice to construct a global smooth solution, but they would not let us apply the result in \cite{desvillettes2005global} and deduce the relaxation to equilibrium. Theorem \ref{t:continuation} builds upon Theorems \ref{t:conditional-regularity_boltzmann} and \ref{t:conditional-regularity_landau} by propagating a lower bound that is stronger than the first inequality in \eqref{e:mass_density_assumption}. Indeed, for any nonzero continuous initial data $f_0$, there will be a ball $B = B_r(x_0,v_0)$ and a $\delta>0$ so that $f_0(x,v) \geq \delta$ whenever $(x,v) \in B$. The upper bounds on mass and energy densities turn out to suffice to propagate and expand this lower bound to the full space by a barrier-like argument. Thus, the lower bound in \eqref{e:mass_density_assumption} holds for positive times. The upper bound on the entropy \eqref{e:entropy_density_assumption} is not proved directly. Instead, the authors observe that the proofs of Theorems \ref{t:conditional-regularity_boltzmann} and \ref{t:conditional-regularity_landau} go through without \eqref{e:entropy_density_assumption} as soon as we have a barrier from below. The right hand side of the regularity estimates in Theorem \ref{t:continuation} depends on the initial data through this ball. \section{Integro-differential diffusion inside the Boltzmann collision operator} \label{s:ide} The analysis of equations involving integro-differential diffusion has been an intensely researched area since the beginning of the 21st century. A linear \emph{elliptic} integro-differential operator has the general form \begin{equation} \label{e:id-op} Lf(v) = \int (f(v')- f(v)) K(v,v') \dd v', \end{equation} for some nonnegative kernel $K$. The typical linear parabolic integro-differential equation would be $f_t = Lf$, for a nonlocal operator $L$ as above. The operator $L$ in \eqref{e:id-op} is the nonlocal analog of more classical second order elliptic operators. In the case $K(v,v') = |v-v'|^{-d-2s}$, it is a multiple of the fractional Laplacian: $Lf(v) = -c (-\Delta)^sf(v)$. The operator $L$ would be considered elliptic of order $2s$ if its kernel is comparable with the kernel of the fractional Laplacian $|v-v'|^{-d-2s}$. However, this concept is more subtle than it may seem initially. There are several different ellipticity conditions in the literature depending on different interpretations of the word \emph{comparable}. There are two types of second order elliptic operators: divergence-form operators $Lf = \partial_i a_{ij}(v) \partial_j f$, and nondivergence-form operators $Lf = a_{ij}(v) \partial_{ij} f$. In both cases, the uniform ellipticity condition consists of requiring the coefficients $a_{ij}(v)$ to be strictly positive definite at every point $v$, with some specific upper and lower bounds $\lambda I \leq \{a_{ij}\} \leq \Lambda I$. 
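As a reminder of why these two classes call for different techniques, note that a divergence-form operator can be rewritten in nondivergence form only at the cost of differentiating the coefficients,
\[ \partial_i \big( a_{ij}(v) \partial_j f \big) = a_{ij}(v) \partial_{ij} f + \big( \partial_i a_{ij}(v) \big) \partial_j f, \]
which is not meaningful when the $a_{ij}$ are merely bounded and measurable. Divergence-form operators are suited to variational (energy) methods, while nondivergence-form operators are suited to pointwise comparison arguments.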
In the nonlocal setting \eqref{e:id-op}, the divergence or nondivergence structure of the operator will be reflected in different symmetry conditions on the kernel $K$. A possible uniform ellipticity condition would be to require $\lambda |v-v'|^{-d-2s} \leq K(v,v') \leq \Lambda |v-v'|^{-d-2s}$. It is a common condition in the literature on nonlocal equations. However, the richness of the class of integral kernels gives us a lot of flexibility, and more general notions of ellipticity exist. Most of the central regularity results for elliptic and parabolic equations have a nonlocal counterpart. In particular, there are nonlocal versions of the De Giorgi-Nash-Moser theorem \cite{komatsu1995, barlow2009non,basslevin2002,caffarelli2010drift,kassmann2009priori,Felsinger2013,chan2011,kassmann2013regularity}, the Krylov-Safonov theorem \cite{basslevin2002,bass2002harnack,song2004,bass2005holder,bass2005harnack, silvestre2006holder,caffarelli2009regularity,silvestre2011differentiability,davila2012nonsymmetric,davila2014parabolic,kassmann2013intrinsic,bjorland2012,rang2013h,schwab2016}, and Schauder estimates \cite{MP,tj2015,serra2015,imbert2016schauder,tj2018}. In order to apply methods that originate in the study of elliptic equations in divergence form (like in the theorem of De Giorgi, Nash and Moser), we would need the kernel $K$ to satisfy a symmetry condition that allows us to apply variational techniques. The usual elliptic operators in divergence form $f \mapsto \partial_i a_{ij} \partial_j f$ are self-adjoint. Analogously, the integro-differential operator $L$ above is self-adjoint when $K(v,v') = K(v',v)$. On the other hand, in order to apply methods that originate in the study of elliptic equations in non-divergence form (like in the theorem of Krylov and Safonov), we need different symmetry assumptions on the kernel $K$. The key property of non-divergence operators like $f \mapsto a_{ij} \partial_{ij} f$ is that for any smooth function $f$ the values of the operator make sense pointwise. This is always true for integro-differential operators when $s<1/2$. If $s \geq 1/2$, we would need the symmetry condition $K(v,v+w) = K(v,v-w)$. The type of techniques that one can apply to integro-differential equations varies according to the symmetry conditions on the kernel. The Boltzmann collision operator \eqref{e:bco} involves an integral of differences. However, it is not immediately clear how to relate it to results about nonlocal operators of the form \eqref{e:id-op}. With that in mind, let us analyze the expression for $Q(f,f)$ in \eqref{e:bco} and derive an equivalent formulation. We add and subtract a term $f'_\ast f$ inside the integrand. \begin{align*} Q(f,f)(v) &= \int_{\mathbb R^d} \int_{S^{d-1}} (f'_\ast f' - f'_\ast f + f'_\ast f - f_\ast f) \, B \dd \sigma \dd v_\ast, \\ &= \int_{\mathbb R^d} \int_{S^{d-1}} (f' - f) \, [f'_\ast B] \dd \sigma \dd v_\ast + f(v) \int_{\mathbb R^d} \int_{S^{d-1}} (f'_\ast - f_\ast) B \dd \sigma \dd v_\ast \end{align*} For the first term, we use a change of variables known as \emph{Carleman coordinates}. For the second term, we refer to the \emph{cancellation lemma}. The cancellation lemma appeared first in \cite{alexandre_entropy_dissipation2000} (see also \cite{silvestre-boltzmann2016}). It reduces the integral in the second term to a convolution. 
\begin{lemma}[Cancellation lemma] There is a nonnegative function $b : \mathbb R^d \to [0,\infty)$, depending only on the collision kernel $B$ in \eqref{e:bco}, so that \[ \int_{\mathbb R^d} \int_{S^{d-1}} (f'_\ast - f_\ast) B \dd \sigma \dd v_\ast = [f \ast b](v).\] In particular, if $B$ has the standard non-cutoff form \eqref{e:non-cutoff}, then $b(v) \approx |v|^\gamma$. \end{lemma} For the first term, we want to change the variables of integration from $v_\ast \in \mathbb R^d$ and $\sigma \in S^{d-1}$ to $v' \in \mathbb R^d$ and $w \in (v'-v)^\perp$, so that $v'_\ast = v+w$ and $v_\ast = v'+w$. With these new variables, it can be checked (see \cite{silvestre-boltzmann2016}) that \[ \dd \sigma \dd v_\ast = \frac{2^{2d-1}}{|v'-v| |v-v_\ast|^{d-2}}\dd w \dd v'. \] Thus, the first term becomes \[ \int_{\mathbb R^d} \int_{S^{d-1}} (f' - f) \, [f'_\ast B] \dd \sigma \dd v_\ast = \int_{\mathbb R^d} (f'-f) K_f(v,v') \dd v', \] where \begin{equation}\label{e:boltzmann-kernel} K_f(v,v') = \frac{2^{2d-1}}{|v'-v|} \int_{w \perp v'-v} |v-v_\ast|^{-d+2} f(v'_\ast) B(|v-v_\ast|,\cos \theta) \dd w. \end{equation} For a standard non-cutoff kernel $B$ satisfying \eqref{e:non-cutoff}, we obtain \begin{equation}\label{e:boltzmann-kernel-approx} K_f(v,v') \approx |v'-v|^{-d-2s} \int_{w \perp v'-v} |w|^{\gamma+2s+1} f(v+w) \dd w. \end{equation} When this last integral factor is bounded below and above, we recover the most classical notion of ellipticity, $K_f(v,v') \approx |v-v'|^{-d-2s}$. However, the integral is on a hyperplane. If $f$ is a function in $L^1$, with finite second moment, it is not possible to bound this integral pointwise either from below or from above. For that reason, we must explore more general notions of ellipticity. Let us suppose that all we know about the function $f : \mathbb R^d \to [0,\infty)$ is that it has finite mass (integral), energy (second moment) and entropy. Moreover, its mass is bounded below. We state these assumptions precisely below. \begin{align*} 0 < m_0 \leq \int f(v) \dd v \leq M_0, \\ \int f(v) |v|^2 \dd v \leq E_0, \\ \int f(v) \log f(v) \dd v \leq H_0. \end{align*} Based on these four constants only, we can deduce the following nondegeneracy conditions on the kernel $K_f$ (see \cite{imbert-silvestre-survey2020,silvestre-boltzmann2016,imbert-silvestre-whi2020,imbert2019global}). \begin{description} \item[Symmetry in the \emph{nondivergence} way] This is automatic from the formula \eqref{e:boltzmann-kernel}. \[ K_f(v,v+w) = K_f(v,v-w).\] \item[Lower bound on a cone of directions] There is a constant $\lambda>0$ depending on $m_0$, $M_0$, $E_0$ and $H_0$ such that \[K_f(v,v') \geq \lambda |v'-v|^{-d-2s}, \] when $v'$ belongs to a cone of directions emanating from $v$. This cone is \emph{thick} in the sense that the measure of its intersection with the unit sphere centered at $v$ has a lower bound depending also on $m_0$, $M_0$, $E_0$ and $H_0$. \item[Upper bound on average] There is a constant $\Lambda$ depending only on $M_0$ and $E_0$ such that \[ \int_{B_{r}(v)} K_f(v,v') |v-v'|^2 \dd v' \leq \Lambda r^{2-2s}.\] \item[Cancellation] The exact \emph{divergence-form} symmetry does not hold. 
We do have some symmetry in the form of cancellation estimates \[ \left\vert \int_{B_r(v)} K_f(v,v') - K_f(v',v) \dd v' \right\vert \leq \Lambda r^{-2s}.\] If $s \geq 1/2$, we also have \[ \left\vert \int_{B_r(v)} (v-v') (K_f(v,v') - K_f(v',v)) \dd v' \right\vert \leq \Lambda r^{1-2s}.\] The first of these inequalities is a rewriting of the classical cancellation lemma from \cite{alexandre_entropy_dissipation2000} in terms of the kernel $K_f$. \end{description} These are the estimates that we get for $K_f$ based on minimal physically meaningful assumptions on $f$. Interestingly, much of the theory of elliptic and parabolic integro-differential equations can be recovered for equations involving kernels that satisfy these assumptions only. In that sense, the lower bound on the cone of nondegeneracy and the upper bound on average described above are a mild form of integro-differential uniform ellipticity of order $2s$. The values of $\lambda$ and $\Lambda$, and the thickness of the cone of nondegeneracy, depend on $m_0$, $M_0$, $E_0$ and $H_0$, and also on $|v|$. Their values degenerate in a certain precise way as $|v| \to \infty$. A key property of divergence form elliptic operators that is central in the proof of the theorem of De Giorgi, Nash and Moser is their coercivity and boundedness with respect to the $H^1$ norm. In the case of integro-differential operators, we work with fractional order Sobolev spaces. To have a nonlocal analog of this important regularity theorem, we want the integro-differential operator $L$ to be bounded and coercive with respect to the $H^s$ norm. The following results tell us that the cone condition and the upper bound on average described above for the Boltzmann kernel $K_f$ are enough. \begin{prop} \label{p:Hs_boundedness} Assume that a nonnegative kernel $K: \mathbb R^d \times \mathbb R^d \to \mathbb R$ satisfies the following two conditions \begin{itemize} \item There is a constant $\Lambda$ such that \[ \int_{B_{r}(v)} K(v,v') |v-v'|^2 \dd v' \leq \Lambda r^{2-2s}.\] \item The cancellation condition holds \[ \left\vert \int_{B_r(v)} K(v,v') - K(v',v) \dd v' \right\vert \leq \Lambda r^{-2s}.\] If $s \geq 1/2$, we also have \[ \left\vert \int_{B_r(v)} (v-v') (K(v,v') - K(v',v)) \dd v' \right\vert \leq \Lambda r^{1-2s}.\] \end{itemize} Then the operator $L$ defined in \eqref{e:id-op} is a well defined bounded linear operator from $H^s$ to $H^{-s}$. \end{prop} As we discussed, the kernel $K_f$ of the Boltzmann equation satisfies the assumptions of Proposition \ref{p:Hs_boundedness}. Yet, Proposition \ref{p:Hs_boundedness} is a general statement about integro-differential operators. It applies to any kernel $K$, regardless of whether it was obtained from the Boltzmann collision operator, or from any other origin. A proof of Proposition \ref{p:Hs_boundedness} is given in \cite{imbert-silvestre-whi2020}. A form of coercivity follows from the cone of nondegeneracy described above. It is a consequence of the following general result for integro-differential operators proved in \cite{chaker2020}. \begin{thm} \label{t:coercivity} Let $K: B_2 \times B_2 \to \mathbb R$ be a nonnegative kernel. Assume that there exist two constants $\lambda>0$ and $\mu>0$ so that for any $v \in B_2$ and any ball $B \subset B_2$ that contains $v$, we have \[ |\{v' \in B : K(v,v') \geq \lambda |v-v'|^{-d-2s} \}| \geq \mu |B|. 
\] Then, the following coercivity estimate holds \[ \iint_{B_2 \times B_2} |f(v') - f(v)|^2 K(v,v') \dd v' \dd v \geq c \lambda \iint_{B_1 \times B_1} |f(v') - f(v)|^2 |v-v'|^{-d-2s} \dd v' \dd v = \tilde c \lambda \|f\|_{\dot H^s(B_1)}^2. \] Here, the constants $c$ and $\tilde c$ depend on $\mu$, $s$ and dimension only. \end{thm} The cone of nondegeneracy condition satisfied by the Boltzmann kernel $K_f$ is naturally stronger than the assumption of Theorem \ref{t:coercivity}. The coercivity of the Boltzmann collision operator was proved several times in the literature using various techniques. It is interesting to realize that it follows as a consequence of a general statement about integro-differential quadratic forms. \section{Regularity estimates for the kinetic Fokker-Planck equation} The collision operator $Q(f,f)$ in \eqref{e:boltzmann} acts as a diffusion in the velocity variable. It may be a nonlocal diffusion in the case of the Boltzmann equation, or a more classical second order diffusion in the case of the Landau equation. In any case, we must address the fact that the equation has a regularization effect with respect to all variables $t$, $x$ and $v$, even though the diffusion is with respect to velocity only. It is a hypoelliptic effect that comes from the interaction between the diffusion in velocity and the kinetic transport. In 1934, Kolmogorov studied the following equation \[ f_t + v \cdot \nabla_x f = \Delta_v f. \] It is a simpler linear model than \eqref{e:boltzmann}, where the collision operator $Q$ is replaced by the usual Laplacian. Kolmogorov observed that for any initial data $f_0 \in L^1 + L^\infty$, the equation has a smooth solution. This follows immediately after explicitly computing its fundamental solution. \[ K(t,x,v) = \begin{cases} c_d t^{-2d} \exp \left( -\frac{|v|^2}{4t} - \frac{3 |x-tv/2|^2}{t^3} \right) &\text{for } t>0, \\ 0 &\text{for } t \leq 0. \end{cases} \] We compute the solution $f$, for any initial data $f_0$, by a modified convolution of $f_0$ with $K$. From this formula, we observe immediately that the solution is $C^\infty$ smooth in all variables. For fractional diffusion, a similar analysis applies, but this time the fundamental solution is not explicit. If we consider, for $s \in (0,1),$ \[ f_t + v \cdot \nabla_x f + (-\Delta_v)^s f = 0, \] there is a fundamental solution $K_s$ whose Fourier transform with respect to $x$ and $v$ is given by \[ \hat K_s(t,\psi,\xi) = c \exp\left(-\int_0^t |\xi + \tau \psi|^{2s} \dd \tau \right). \] It is easy to see that $K_s$ is smooth in all variables, so the fractional Kolmogorov equation enjoys regularization properties analogous to those of the classical one. It is also possible to see that the kernels $K_s$ are bounded, nonnegative and integrable. They have some polynomial decay as $x$ and $v$ go to infinity. However, the exact asymptotics have not been precisely described yet. The regularity analysis for nonlinear equations depends on regularity results for \emph{linear} equations with variable (possibly rough) coefficients. In this section, we describe the three most fundamental regularity results, which bring to the kinetic setting the theorems of De Giorgi-Nash-Moser and Schauder. \subsection{De Giorgi meets kinetic equations} The theorem of De Giorgi, Nash and Moser is arguably the most fundamental regularity result for nonlinear elliptic PDE. It is usually stated in terms of \emph{linear} equations. 
But since it does not have any regularity assumption on the coefficients, it is readily applicable to solutions of quasilinear elliptic equations whose coefficients depend somehow on the solution. For kinetic equations with classical second order diffusion in divergence form, we study equations of the following form. \begin{equation} \label{e:kinetic-div-form} f_t + v \cdot \nabla_x f - \partial_{v_i} a_{ij}(t,x,v) \partial_{v_j} f = b \cdot \nabla_v f + h. \end{equation} Here, the function $h$ is a source term. We also include an extra drift term $b \cdot \nabla_v f$ on the right hand side. The coefficients $a_{ij}(t,x,v)$ are assumed to be uniformly elliptic. This means that there exist constants $\Lambda \geq \lambda > 0$ so that for all $(t,x,v)$ in the domain of the equation, $\lambda \mathrm I \leq \{a_{ij}(t,x,v)\} \leq \Lambda \mathrm I$. It is important that there is no regularity assumption on the coefficients $a_{ij}$ beyond boundedness and measurability. The kinetic version of the De Giorgi-Nash-Moser theorem provides H\"older continuity estimates for solutions to \eqref{e:kinetic-div-form}. When the right hand side $h$ vanishes, a form of Harnack inequality also holds. The following theorem was developed in \cite{pascucci2004}, \cite{zhang2008}, \cite{wang2009}, \cite{wang2011} and \cite{gimv-harnacklandau}. The presentation below matches the result in the last one of these papers. \begin{thm} \label{t:kinetic-DG-div} Assume $f: [0,1] \times B_1 \times B_1 \to \mathbb R$ is a weak solution of \eqref{e:kinetic-div-form} in $(0,1] \times B_1 \times B_1$ for some bounded functions $b$ and $h$. Then, there exists an $\alpha>0$ depending only on dimension and the ellipticity parameters $\lambda$ and $\Lambda$, and a constant $C$ depending on dimension, $\lambda$, $\Lambda$ and $\|b\|_{L^\infty}$, so that \[ \|f\|_{C^\alpha((1/2,1)\times B_{1/2} \times B_{1/2})} \leq C \left( \|f\|_{L^\infty([0,1] \times B_1 \times B_1)} + \|h\|_{L^\infty([0,1] \times B_1 \times B_1)} \right).\] \end{thm} Theorem \ref{t:kinetic-DG-div} can be used to conclude that solutions to the inhomogeneous Landau equation that satisfy \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption} and \eqref{e:entropy_density_assumption} are H\"older continuous. It is a key ingredient in the proof of Theorem \ref{t:conditional-regularity_landau}. In order to study the regularity of the Boltzmann equation, we need a similar result but for integro-differential diffusion. Let us now consider equations of the form \begin{equation} \label{e:kinetic-integral} f_t + v \cdot \nabla_x f - \int_{\mathbb R^d} (f(t,x,v') - f(t,x,v)) K(t,x,v,v') \dd v' = h. \end{equation} Here $h$ is a source term. The kernel $K$ is a nonnegative function. We want to study equation \eqref{e:kinetic-integral} with an eye toward possible applications to the Boltzmann equation. Thus, we will assume that the kernel $K$ satisfies the nondegeneracy conditions described in Section \ref{s:ide}. The following theorem is proved in \cite{imbert-silvestre-whi2020}. \begin{thm} \label{t:kinetic-DG-integral} Assume $f: [0,1] \times B_1 \times \mathbb R^d \to \mathbb R$ is a bounded weak solution of \eqref{e:kinetic-integral} for $(t,x,v) \in (0,1] \times B_1 \times B_1$ and some bounded function $h$. We make the following assumptions on the kernel $K$. \begin{itemize} \item \textbf{Upper bound on average}. 
There is a constant $\Lambda$ such that for all $(t,x,v) \in (0,1] \times B_1 \times B_1$ and $r>0$, \[ \int_{B_{r}(v)} K(t,x,v,v') |v-v'|^2 \dd v' \leq \Lambda r^{2-2s}.\] \item \textbf{Nondegeneracy condition}. There exist two constants $\lambda>0$ and $\mu>0$ so that for any $(t,x,v) \in (0,1] \times B_1 \times B_1$ and any ball $B \subset B_2$ that contains $v$, we have \[ |\{v' \in B : K(t,x,v,v') \geq \lambda |v-v'|^{-d-2s} \}| \geq \mu |B|. \] \item \textbf{The cancellation condition}. For all $(t,x,v) \in (0,1] \times B_1 \times B_1$ and $r \in [0,1]$, \[ \left\vert \int_{B_r(v)} K(t,x,v,v') - K(t,x,v',v) \dd v' \right\vert \leq \Lambda r^{-2s}.\] If $s \geq 1/2$, we also have \[ \left\vert \int_{B_r(v)} (v-v') (K(t,x,v,v') - K(t,x,v',v)) \dd v' \right\vert \leq \Lambda r^{1-2s}.\] \end{itemize} Then $f$ is H\"older continuous and it satisfies the following estimate for some $\alpha>0$. \[ \|f\|_{C^\alpha((1/2,1)\times B_{1/2} \times B_{1/2})} \leq C \left( \|f\|_{L^\infty([0,1] \times B_1 \times \mathbb R^d)} + \|h\|_{L^\infty([0,1] \times B_1 \times B_1)} \right).\] The constants $\alpha$ and $C$ depend on dimension, $\lambda$, $\Lambda$ and $\mu$ only. \end{thm} Note that the assumptions of Theorem \ref{t:kinetic-DG-integral} are exactly the union of the assumptions of Proposition \ref{p:Hs_boundedness} and Theorem \ref{t:coercivity}. The Boltzmann kernel satisfies a cone of nondegeneracy condition that implies the nondegeneracy hypothesis of Theorem \ref{t:kinetic-DG-integral}. Remarkably, Theorems \ref{t:kinetic-DG-div} and \ref{t:kinetic-DG-integral} are not specific to the Landau and Boltzmann equations as in \eqref{e:boltzmann}. They apply to a more general family of kinetic Fokker-Planck equations. In the case of the Landau equation, the coefficients $a_{ij}$ will be related to the solution through \eqref{e:landau}. In the case of the Boltzmann equation, the kernel $K$ will be related to $f$ through \eqref{e:boltzmann-kernel}. Theorems \ref{t:kinetic-DG-div} and \ref{t:kinetic-DG-integral} apply regardless of these extra relations. \subsection{Schauder meets kinetic equations} The Schauder estimates provide an a priori estimate in higher order H\"older spaces when we know, in addition to uniform ellipticity, that the coefficients of the equation are H\"older continuous. Unlike the H\"older estimate obtained using De Giorgi's method, the Schauder estimates provide an estimate in a H\"older norm with a precise exponent. When we consider a kinetic Fokker-Planck equation like \eqref{e:kinetic-div-form}, we gain exactly two derivatives in $v$. In the fractional order case \eqref{e:kinetic-integral}, we should expect to gain $2s$ derivatives in $v$. The exact gain of regularity with respect to the variables $t$ and $x$ is less obvious. We need to keep track of the precise balance of scales between these variables. For that, it is important to analyze the group of transformations that leave the class of equations \eqref{e:kinetic-div-form} or \eqref{e:kinetic-integral} invariant. Let us focus on the fractional order case \eqref{e:kinetic-integral}, which is the more complex of the two. We observe that if $f$ solves \eqref{e:kinetic-integral} for some kernel $K$ satisfying the hypothesis of Theorem \ref{t:kinetic-DG-integral}, then the same is true for the scaled function \[ f_r(t,x,v) = f(r^{2s}t, r^{1+2s}x, rv). \] The kernel $K$ would have to be replaced with $r^{d+2s} K(r^{2s}t, r^{1+2s}x, rv, rv')$, which satisfies the same assumptions as the original $K$. 
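To see where the exponents come from, here is a quick computation. By the chain rule,
\[ \partial_t f_r + v \cdot \nabla_x f_r = r^{2s} \big( f_t + v \cdot \nabla_x f \big)(r^{2s}t, r^{1+2s}x, rv), \]
while the substitution $u' = rv'$ in the integral term gives
\[ \int_{\mathbb R^d} \big( f_r(t,x,v') - f_r(t,x,v) \big)\, r^{d+2s} K(r^{2s}t, r^{1+2s}x, rv, rv') \dd v' = r^{2s} \int_{\mathbb R^d} \big( f(\cdot, u') - f(\cdot, rv) \big) K(\cdot, rv, u') \dd u', \]
where the dots stand for the scaled arguments $(r^{2s}t, r^{1+2s}x)$. Both sides of \eqref{e:kinetic-integral} are thus multiplied by the same factor $r^{2s}$ (with the source term replaced by $r^{2s} h$), and the factor $r^{d+2s}$ in the new kernel exactly compensates the change of variables in $v'$ together with the order $-d-2s$ in the kernel bounds, so the assumptions of Theorem \ref{t:kinetic-DG-integral} are preserved.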
Because this is the natural scaling of the equation, we define the kinetic cylinders at the origin $Q_r$ as \[ Q_r:= (-r^{2s},0] \times B_{r^{1+2s}} \times B_r.\] The class of equations \eqref{e:kinetic-integral} is not translation invariant because the coefficient in the drift term depends on $v$. It is \emph{Galilean} invariant, in the sense that we can change coordinates to another inertial frame of reference. That is, given $(s,y,w)$, if we define \[ (s,y,w) \circ (t,x,v) = (s+t, y+x+tw, w+v),\] then $\tilde f(t,x,v) = f((s,y,w) \circ (t,x,v))$ also satisfies an equation like \eqref{e:kinetic-integral}, with $K$ replaced with $\tilde K(t,x,v,v') = K((s,y,w) \circ (t,x,v), v'+w)$. In order to state the Schauder estimates for the equation \eqref{e:kinetic-integral}, we need to use a H\"older norm that is compatible with the scaling and Galilean invariance of the equation. The best way to achieve this is by defining an appropriate notion of distance. \begin{defn} \label{d:distance} For any two points $z_1 = (t_1,x_1,v_1)$ and $z_2 = (t_2, x_2, v_2)$ in $\mathbb R^{1+2d}$, we define the following \emph{kinetic} distance \[ d_\ell(z_1,z_2) := \min_{w \in \mathbb R^d} \left\{ \max \left( |t_1-t_2|^{ \frac 1 {2s} } , |x_1-x_2-(t_1-t_2)w|^{ \frac 1 {1+2s} } , |v_1-w| , |v_2-w| \right) \right\}.\] \end{defn} Strictly speaking, this is only a distance (it satisfies the triangle inequality) when $s \geq 1/2$. For smaller values of $s \in (0,1/2)$, $d_\ell^{2s}$ is a distance. In either case, we use the expression for $d_\ell$ as in Definition \ref{d:distance} to keep the $1$-homogeneity with respect to the variable $v$. The distance $d_\ell$ has the following two properties. \begin{itemize} \item It is homogeneous with respect to the scaling of the equation, in the sense that \[ d_\ell((r^{2s}t_1, r^{1+2s}x_1, rv_1),(r^{2s}t_2, r^{1+2s}x_2, rv_2)) = r d_\ell((t_1,x_1,v_1),(t_2,x_2,v_2)).\] \item It is invariant by left Galilean translations. That is, for every $z_0,z_1,z_2 \in \mathbb R^{2d+1}$, \[ d_\ell(z_0 \circ z_1, z_0 \circ z_2) = d_\ell(z_1,z_2).\] \end{itemize} The subindex ``$\ell$'' is meant to indicate that the distance is \textbf{``l''}eft invariant, as opposed to right-invariant. In terms of this kinetic distance, we define the kinetic H\"older norms. \begin{defn} \label{d:holder-space} Let $\Omega \subset \mathbb R^{2d+1}$. For any $\alpha \in (0,\infty)$, we define the $C_\ell^\alpha(z_0)$ semi-norm of a function $f : \Omega \to \mathbb R$ as the smallest constant $C$ so that for all $z \in \Omega$ \[ |f(z) - p(z)| \leq C d_\ell(z,z_0)^\alpha,\] for some polynomial $p(t,x,v)$ whose kinetic degree is less than $\alpha$. Moreover, the $C_\ell^\alpha$ semi-norm of $f$ in $\Omega$, denoted $[f]_{C_\ell^\alpha(\Omega)}$, is the supremum of these semi-norms over all $z_0 \in \Omega$. The full norm $\|f\|_{C_\ell^\alpha(\Omega)}$ is defined as $[f]_{C_\ell^\alpha(\Omega)}+\|f\|_{L^\infty(\Omega)}$. \end{defn} We have left out the definition of the kinetic degree of a polynomial. It is adjusted by the scaling defined above so that the variable $v$ has degree one, $t$ has degree $2s$, and $x$ has degree $1+2s$. Now that we have a precise notion of H\"older continuity that takes into account the scaling and Galilean invariance of the equation, we are ready to state the kinetic Schauder estimates. The following theorem is proved in \cite{imbert2018schauder}. \begin{thm}[The Schauder estimate] \label{t:schauder} Let $\alpha \in (0, \min (1,2s))$ and $\alpha' = 2s \alpha / (1+2s)$. 
Let $K\colon Q_1 \times \mathbb R^d \to \mathbb R$ be a nonnegative kernel satisfying the following assumptions. \begin{itemize} \item \textbf{Upper bound on average}. There is a constant $\Lambda$ such that for all $(t,x,v) \in Q_1$ and $r>0$, \[ \int_{B_{r}(v)} K(t,x,v,v') |v-v'|^2 \dd v' \leq \Lambda r^{2-2s}.\] \item \textbf{Nondegeneracy condition}. There exist two constants $\lambda>0$ and $\mu>0$ so that for any $(t,x,v) \in Q_1$ and any ball $B \subset B_2$ that contains $v$, we have \[ |\{v' \in B : K(t,x,v,v') \geq \lambda |v-v'|^{-d-2s} \}| \geq \mu |B|. \] \item \textbf{Nondivergence symmetry}. For all $(t,x,v) \in Q_1$ and $w \in \mathbb R^d$, $K(t,x,v,v+w) = K(t,x,v,v-w)$. \item \textbf{H\"older continuity}. For all $(t_1,x_1,v_1), (t_2,x_2,v_2) \in Q_1$ and $r>0$, \[ \int_{B_r} |K(t_1,x_1,v_1,v_1+w)-K(t_2,x_2,v_2,v_2+w)| |w|^2 \dd w \leq A_0 r^{2-2s} d_\ell((t_1,x_1,v_1),(t_2,x_2,v_2))^{\alpha'}. \] \end{itemize} Let $h: Q_1 \to \mathbb R$ be $\alpha'$-H\"older continuous. Assume further that $2s+\alpha' \notin \{1,2\}$. If $f$ satisfies \eqref{e:kinetic-integral} in $Q_1$, then \[ [f]_{C^{2s+\alpha'} (Q_{1/2})} \le C (\| f \|_{C^\alpha ((-1,0] \times B_1 \times \mathbb R^d)} + \|h\|_{C^{\alpha'} (Q_1)}) . \] The constant $C$ only depends on dimension, $\alpha$, the order $2s$ of the integral diffusion, the ellipticity constants $\mu,\lambda,\Lambda$, and $A_0$. \end{thm} The first two conditions on the kernel are the same as in Theorem \ref{t:kinetic-DG-integral}. The symmetry condition replaces the cancellation condition. The Schauder theorem is, in this case, a statement about equations in nondivergence form. The last condition on the kernel is the integral version, with respect to the kinetic distance, of the H\"older continuity of the coefficients in the classical case. The Schauder estimates for kinetic equations with standard second order diffusion are a particular case of a more general theory developed in the context of hypoelliptic equations (see \cite{sergio2004recent} and references therein). A simplified version of the proof in \cite{imbert2018schauder} also leads to the following result. \begin{thm}[The Schauder estimate for kinetic equations with second order diffusion] \label{t:schauder-2nd-order} Let $\alpha \in (0, 1)$. Suppose $f$ is a classical solution of \[ f_t + v \cdot \nabla_x f - a_{ij}(t,x,v) \partial_{v_i v_j} f = h \text{ in } Q_1,\] for some $C^\alpha$ function $h$ and uniformly elliptic coefficients $a_{ij}$ that are $C^\alpha$ H\"older continuous in $Q_1$ with respect to the $d_\ell$ distance defined with $s=1$. Then \[ [f]_{C^{2+\alpha} (Q_{1/2})} \le C (\| f \|_{C^\alpha (Q_1)} + \|h\|_{C^{\alpha} (Q_1)}) . \] The constant $C$ depends on dimension, the ellipticity constants $\lambda,\Lambda$, and the H\"older norm of $a_{ij}$. \end{thm} \section{Proving the conditional regularity estimates of Theorem \ref{t:conditional-regularity_boltzmann}} Let us outline the ingredients in the proof of Theorem \ref{t:conditional-regularity_boltzmann}. It has the following steps. \begin{enumerate} \item Obtain pointwise upper bounds for the solution $f$ depending on the constants in \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption} and \eqref{e:entropy_density_assumption}. These upper bounds should decay faster than $\langle v \rangle^{-q}$, for any exponent $q>0$. \item Use Theorem \ref{t:kinetic-DG-integral} to deduce H\"older estimates for $f$. 
\item Once we know that $f$ is H\"older, we see through the formula \eqref{e:boltzmann-kernel} that the kernel $K_f$ is H\"older continuous as well. Thus, we use the Schauder estimates of Theorem \ref{t:schauder} to achieve higher order H\"older estimates for $f$.
\item There is a change of variables that makes the ellipticity of $K_f$ uniform for large velocities. In that way, the local regularity estimates given by Theorems \ref{t:kinetic-DG-integral} and \ref{t:schauder} are transformed into global estimates.
\item We iterate the application of the Schauder estimates to derivatives of the function $f$ to obtain $C^\infty$ estimates.
\end{enumerate}
While Theorems \ref{t:kinetic-DG-integral} and \ref{t:schauder} are results about generic kinetic equations with integral diffusion, the upper bounds and the change of variables are two steps that are specific to the Boltzmann equation. We describe these two ingredients below. A more detailed description of the ideas in the proof of Theorem \ref{t:conditional-regularity_boltzmann} can be found in \cite{imbert-silvestre-survey2020}.
\subsection{The pointwise upper bounds}
The following theorem is proved in \cite{imbert-mouhot-silvestre-decay2020}.
\begin{thm} \label{t:upper-bounds} Let $f$ be a (classical) solution to the inhomogeneous Boltzmann equation \eqref{e:boltzmann}, periodic in space, where the collision operator $Q$ has the form \eqref{e:bco} and $B$ is the standard non-cutoff collision kernel of the form \eqref{e:non-cutoff} with $\gamma+2s \in [0,2]$. Assume that there are constants $m_0>0$, $M_0$, $E_0$ and $H_0$ such that the hydrodynamic bounds \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption} and \eqref{e:entropy_density_assumption} hold. Then
\begin{itemize}
\item If $\gamma > 0$, for $t>0$ we get $f \leq C_q \langle v \rangle^{-q}$ for all $q>0$. This is a self-generated bound independent of the initial data.
\item If $\gamma \leq 0$, and we assume in addition $f_0 \leq A_q \langle v \rangle^{-q}$ for all $q>0$, then we have $f \leq C_q \langle v \rangle^{-q}$.
\end{itemize}
The constants $C_q$ depend on dimension, $\gamma$, $s$, $m_0$, $M_0$, $E_0$, $H_0$, and, in the case $\gamma \leq 0$, also on the constants $A_q$. \end{thm}
The proof of Theorem \ref{t:upper-bounds} is based on a nonlocal barrier argument. We argue by contradiction. We look at the first point where the function invalidates our pointwise upper bound, and a contradiction is reached by carefully analyzing the equation at that point. Using similar techniques, we also obtain Maxwellian lower bounds in \cite{imbert-mouhot-silvestre-lowerbound2020}. The first time a barrier argument was used in the context of the Boltzmann equation was in \cite{gamba2009} to obtain Maxwellian upper bounds in the case of the space homogeneous cutoff Boltzmann. See also \cite{taskovic,alonso2019} for related results for the space-homogeneous non-cutoff Boltzmann.
These pointwise upper bounds tell us that the solution $f$ decays as $|v| \to \infty$ faster than any algebraic rate. We may compare them with the classical moment estimates for the Boltzmann equation. Needless to say, pointwise estimates are a much stronger result than moment estimates. Theorem \ref{t:upper-bounds} implies in particular that for every $p>0$ there is a $C_p$ so that for all $(t,x) \in [0,\infty) \times \mathbb R^d$, \[ \int_{\mathbb R^d} f(t,x,v) \langle v \rangle^p \dd v \leq C_p.\] Moment estimates like these are well known in the space-homogeneous case.
However, there is no direct proof in the space inhomogeneous case, other than as a consequence of the pointwise estimates.
\subsection{A change of variables to handle large velocities}
Theorems \ref{t:kinetic-DG-integral} and \ref{t:schauder} are used to obtain estimates in H\"older norms for the solution $f$ of the Boltzmann equation \eqref{e:boltzmann}. The hydrodynamic assumptions \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption} and \eqref{e:entropy_density_assumption} are used to ensure that the kernel $K_f$ satisfies the appropriate nondegeneracy conditions. However, a careful analysis of the formula \eqref{e:boltzmann-kernel} reveals that while the three assumptions of Theorem \ref{t:kinetic-DG-integral} hold in any bounded set of velocities $v$, the corresponding parameters degenerate as $|v| \to \infty$. In order to obtain global estimates in H\"older spaces, we must figure out what happens to the kernels $K_f$ as $|v| \to \infty$. The following change of variables resolves this problem altogether.
For any value of $v_0 \notin B_1$, we define the linear transformation $T_{v_0}$ by the formula \[ T_{v_0}(a v_0 + w) := \frac{a}{|v_0|} v_0 + w \qquad \text{ whenever } w \perp v_0. \] Note that $T_{v_0}$ maps the unit ball $B_1$ into an ellipsoid that is flattened by the factor $1/|v_0|$ in the direction of $v_0$. If $v_0 \in B_1$, we simply take $T_{v_0}$ to be the identity. Given any $z_0 = (t_0,x_0,v_0)$, we further define \[ \mathcal T_{z_0}(t,x,v) := z_0 \circ \left(|v_0|^{-\gamma-2s} t, |v_0|^{-\gamma-2s} T_{v_0} x, T_{v_0} v \right). \] Here, $\circ$ is the Galilean group operator in $\mathbb R^{1+2d}$. This transformation $\mathcal T_{z_0}$ maps $Q_1$ into a certain neighborhood of $z_0$ in $\mathbb R^{2d+1}$.
Let $f$ be a solution to the non-cutoff Boltzmann equation. Consider $\bar f (z)=f (\mathcal T_{z_0} (z))$. This function $\bar f$ solves a modified equation in $Q_1$ \[ \partial_t \bar f + v \cdot \nabla_x \bar f - \int_{\mathbb R^d} (\bar f(v') - \bar f(v)) \bar K_f(t,x,v,v') \dd v' = \bar h,\] where \[ \bar h(z) = c |v_0|^{-\gamma-2s} \bar f(z) (f \ast_v |\cdot|^\gamma)(\mathcal T_{z_0} z),\] and \[ \bar K_f(t,x,v,v') = |v_0|^{-\gamma-2s} K_f(\mathcal T_{z_0} z, v_0 + T_{v_0} v').\] Remarkably, the kernel $\bar K_f$ satisfies the assumptions of Theorems \ref{t:kinetic-DG-integral} and \ref{t:schauder} with parameters that are independent of $v_0$. Thus, by working out how the H\"older norms are affected by this linear change of variables, we have a recipe to compute the precise asymptotics as $|v| \to \infty$ of all our regularity estimates. We make a precise statement in the following proposition.
\begin{prop} \label{p:change_of_variables} Let $f : [0,T] \times \mathbb R^d \times \mathbb R^d \to [0,\infty)$ be a function that satisfies \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption} and \eqref{e:entropy_density_assumption}. Let $K_f$ be the Boltzmann kernel that is obtained by the formula \eqref{e:boltzmann-kernel}, and $\bar K_f: Q_1 \times \mathbb R^d \to [0,\infty)$ be the modified kernel as above. Then, there are constants $\lambda>0$, $\Lambda$ and $\mu$, depending only on $m_0$, $M_0$, $E_0$ and $H_0$, but \textbf{not} on $v_0$, such that for all $(t,x,v) \in Q_1$
\begin{itemize}
\item For all $r>0$, \[ \int_{B_{r}(v)} K(t,x,v,v') |v-v'|^2 \dd v' \leq \Lambda r^{2-2s}.\]
\item For any ball $B \subset \mathbb R^d$ of radius $r$ that contains $v$, we have \[ |\{v' \in B : K(v,v') \geq \lambda |v-v'|^{-d-2s} \}| \geq \mu r^d.
\] \item For all $r \in [0,1]$, \[ \left\vert \int_{B_r(v)} K(t,x,v,v') - K(t,x,v',v) \dd v' \right\vert \leq \Lambda r^{-2s}.\] If $s \geq 1/2$, we also have \[ \left\vert \int_{B_r(v)} (v-v') (K(t,x,v,v') - K(t,x,v',v)) \dd v' \right\vert \leq \Lambda r^{1-2s}.\] \end{itemize} \end{prop} Proposition \ref{p:change_of_variables} tells us that the assumptions of Theorem \ref{t:kinetic-DG-integral} hold uniformly after the change of variables. A similar computation holds for the H\"older continuity assumption of Theorem \ref{t:schauder}. Effectively, Proposition \ref{p:change_of_variables} turns the local estimates of Theorems \ref{t:kinetic-DG-integral} and \ref{t:schauder} into global regularity estimates, with optimal asymptotics as $|v| \to \infty$. Proposition \ref{p:change_of_variables} is proved in \cite{imbert2019global}. It is inspired by a similar change of variables for the Landau equation that first appeared in \cite{css-upperbounds-landau}. \section{The homogeneous problem} \label{s:space-homogeneous} A simplified form of the Boltzmann or Landau equation \eqref{e:boltzmann} takes place when we consider solutions that are constant in the space variable $x$. In that case, the kinetic transport term disappears from the equation and we are left with \begin{equation} \label{e:space-homogeneous} f_t = Q(f,f). \end{equation} Here, $Q(f,f)$ may be the Boltzmann or Landau operators described in \eqref{e:bco} and \eqref{e:landau_expression1}. The equation takes the more classical form of quasilinear parabolic equations. Moreover, the conserved quantities become stronger, since they do not involve integration in $x$. More precisely, we have the following. \begin{description} \item[Conservation of mass:] \[ \int_{\mathbb R^d} f(t,v) \dd v \qquad \text{ is constant in time.}\] \item[Conservation of energy:] \[ \int_{\mathbb R^d} f(t,v) |v|^2 \dd v \qquad \text{ is constant in time.}\] \item[Conservation of momentum:] \[ \int_{\mathbb R^d} f(t,v) v \dd v \qquad \text{ is constant in time.}\] \item[Entropy:] \[ \int_{\mathbb R^d} f(t,v) \log f(t,v) \dd v \qquad \text{ is monotone decreasing.}\] \end{description} In particular, the hydrodynamic assumptions \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption}, \eqref{e:entropy_density_assumption} are automatically true for any solution \footnote{We mean for \emph{classical} solutions with sufficient decay as $|v| \to \infty$. Some of these conservation laws might break if we work with an overly weak notion of solution.} of the space homogeneous problem whose initial data has finite mass, energy and entropy. Establishing the existence of global smooth solutions of \eqref{e:space-homogeneous} does not face the same difficulties as in the space in-homogeneous case. Clearly, Theorems \ref{t:conditional-regularity_boltzmann} and \ref{t:conditional-regularity_landau} can be combined with any short-time well posedness result to produce a global-in-time solution. Anyhow, the global well posedness of the space homogeneous problem, at least for some range of the values $\gamma$ and $s$, can be proved directly and it was done earlier. An incomplete list of references would be \cite{desvillettes2000spatially,villani1998spatially,silvestre_landau} for the homogeneous Landau equation, and \cite{desvillettes1997,desvillettes2004,alexandre2005,huo2008,alexandre2009,desvillettes2009,morimoto2009,morimoto2010,chen2011,alexandre2012,he2012} for the homogeneous Boltzmann equation. 
Regularity estimates for the space homogeneous problem face a strong limitation when $\gamma+2s<0$. This range is called the \emph{very soft potential} case. The Landau equation corresponds to $s=1$, thus very soft potentials are those with $\gamma < -2$. Note that the very soft potential range is excluded from Theorems \ref{t:conditional-regularity_boltzmann} and \ref{t:conditional-regularity_landau}. The problem is that, even in the space homogeneous scenario, we do not know how to obtain an a priori estimate for the solution in $L^\infty$ when $\gamma+2s < 0$. We explore this major open problem in the rest of this section.
In order to understand the difficulties with very soft potentials, it is best to focus on the most extreme case: the Landau-Coulomb equation. It corresponds to $s=1$ and $\gamma=-3$ in three dimensions. It is a model for plasma dynamics, which makes it particularly interesting as a natural equation from mathematical physics. The operator $Q(f,f)$ takes the form \eqref{e:landau-coulomb}, which we recall here.
\begin{equation} \label{e:landau-coulomb2} Q(f,f) = \bar a_{ij} \partial_{ij} f + f^2, \end{equation}
where \[ \bar a_{ij}(v) = -\partial_{ij} (-\Delta)^{-2} f(v) = \frac 1 {8\pi} \int_{\mathbb R^3} (|w|^2 \delta_{ij} - w_i w_j) |w|^{-3} f(v-w) \dd w.\]
A lower bound on the coefficients can be computed in terms of the mass, energy and entropy of $f$. Let us state it for a general power $\gamma$ in the next lemma.
\begin{lemma} \label{l:landau-coefficients} Let $f : \mathbb R^d \to [0,\infty)$ be a function with finite mass, energy and entropy. Assume that
\begin{align*}
\int_{\mathbb R^d} f(v) \dd v &\geq m_0, \\
\int_{\mathbb R^d} f(v) |v|^2 \dd v &\leq E_0, \\
\int_{\mathbb R^d} f(v) \log f(v) \dd v &\leq H_0.
\end{align*}
Consider the coefficients $\bar a_{ij}$ given by \eqref{e:landau} for any $\gamma \in (-d-2,0]$. That is \[ \bar a_{ij}(v) = \int_{\mathbb R^d} (|w|^2 \delta_{ij} - w_i w_j) |w|^{\gamma} f(v-w) \dd w.\] Then, we have the following lower bounds, \[ \bar a_{ij}(v) e_i e_j \geq c \begin{cases} \langle v \rangle^\gamma & \text{for any } e \in S^{d-1}, \\ \langle v \rangle^{\gamma+2} & \text{if } e \cdot v = 0. \end{cases} \] Here, the constant $c$ depends on $\gamma$, dimension, $m_0$, $E_0$ and $H_0$ only. \end{lemma}
The proof of Lemma \ref{l:landau-coefficients} can be found in \cite{silvestre_landau} (see the proof of Lemma 3.1 inside that paper). See \cite[Lemma 2.1]{css-upperbounds-landau} for the corresponding upper bounds for $\gamma \in [-2,0]$. If $\gamma < -2$, there is no pointwise upper bound for $\bar a_{ij}$ in terms of the mass, energy and entropy of $f$ only. We may see this fact as a hint that analyzing the very soft potential case $\gamma<-2$ will be delicate.
In order to obtain pointwise upper bounds for the coefficients $\bar a_{ij}$, we would need to have some extra information about the function $f$. For example, it would suffice to have an upper bound of its weighted $L^p$ norm \[ \|f\|_{L^p_k(\mathbb R^d)} = \left( \int |f(v)|^p \langle v \rangle^k \dd v \right)^{1/p} \leq K.\] In that case, if $p > d/(d+2+\gamma)$ and $k$ is sufficiently large (depending on $p$, $\gamma$ and dimension), the upper bounds for the coefficients $\bar a_{ij}$ would differ from the lower bounds by a constant factor. A conditional a priori estimate for the solution to \eqref{e:space-homogeneous} follows in that case (see \cite{silvestre_landau}).
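To get a feeling for why, as mentioned above, mass, energy and entropy alone cannot give a pointwise upper bound on $\bar a_{ij}$ when $\gamma < -2$, here is a quick sketch in the Coulomb case $\gamma = -3$, $d = 3$ (this particular construction is ours and constants are not tracked). Fix a Maxwellian $M$ with unit mass and, for small $m>0$, consider
\[ f_m := M + e^{1/m} \chi_{B_{r_m}}, \qquad \text{where } r_m \text{ is chosen so that } e^{1/m}\, |B_{r_m}| = m.\]
The mass and the energy of $f_m$ remain bounded uniformly in $m$, and the entropy of the concentrated bump equals $e^{1/m} |B_{r_m}| \log\left(e^{1/m}\right) = 1$, so $\int f_m \log f_m$ stays bounded as well. On the other hand,
\[ \tr \bar a_{ij}(0) \geq 2\, e^{1/m} \int_{B_{r_m}} |w|^{-1} \dd w \approx m^{2/3} e^{1/(3m)} \longrightarrow +\infty \qquad \text{as } m \to 0.\]
Thus the hydrodynamic quantities alone cannot control the coefficients pointwise in the very soft potential range.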
The bounds for the coefficients $\bar a_{ij}$ might suggest a comparison between the homogeneous Landau-Coulomb equation and the nonlinear heat equation $f_t = \Delta f + f^2$. Since the latter blows up in finite time, one may be inclined to believe that the Landau-Coulomb equation also develops singularities. It is an example of how we may arrive to the wrong conclusions when we think of the collision operator $Q(f,f)$ as if it was semilinear \footnote{It would be equally misleading to think of the Boltzmann collision operator as the fractional Laplacian plus lower order terms.}. The coefficients $\bar a_{ij}$ are not bounded above for $\gamma<-2$. Intuitively, the coefficients $\bar a_{ij}$ will become large around the areas where the mass of $f$ is accumulating. This enhanced dissipation seems to prevent blowup altogether. Nowadays, the consensus among researchers in the area is that everybody expects the homogeneous Landau-Coulomb equation to have global smooth solutions for any reasonable initial data. There are two positive, unconditional, results about the Landau-Coulomb equation that provide some partial progress in the problem. They are the following. \begin{enumerate} \item It is shown in \cite{desvillettes2014entropy} that we can get a bound for $\|f\|_{L^1_t L^3_{-k}}$, for some suitable exponent $k$, using the entropy dissipation. \item In \cite{golse2019partial}, the authors show that the potential set of times where a weak solution blows up has Hausdorff dimension at most $1/2$. \end{enumerate} There are also conditional regularity estimates for the homogeneous Landau equation, which say that the solution will not blow up unless certain quantities become infinite. Results of that kind can be found in \cite{silvestre_landau,gualdani2016,gualdani2019}. \subsection{The Krieger-Strain model} A simplified toy model that captures the properties and key difficulties of the Landau-Coulomb operator was proposed by Krieger and Strain in \cite{krieger2012a}. The idea is to replace the Landau-Coulomb operator \eqref{e:landau-coulomb2} with \[ Q(f,f) = (-\Delta)^{-1} f(v) \cdot \Delta f(v) + f(v)^2.\] Solutions to the equation \eqref{e:space-homogeneous} with this new operator $Q$ still conserve mass and dissipate entropy, but they do not conserve energy. Note also that the Maxwellian functions are no longer stationary solutions. In fact, the only function $f$ so that $Q(f,f)=0$ is $f \equiv 0$. We can modify the operator further by introducing a parameter $\alpha \in [0,1]$ and letting \[ Q_\alpha(f,f) = (-\Delta)^{-1} f(v) \cdot \Delta f(v) + \alpha f(v)^2.\] The idea is that by setting $\alpha<1$, we are making the dissipation term stronger than the reaction term. In \cite{krieger2012a}, the authors study the Cauchy problem for \eqref{e:space-homogeneous} with $Q_\alpha$, and establish the existence of global smooth solutions for radially symmetric, positive and monotone initial data, and $\alpha < 2/3$. Soon after, in \cite{krieger2012b}, the result is extended to the range $\alpha \in (0,74/75)$. The milestone $\alpha=1$ is achieved in \cite{gualdani2016}, also for symmetric, positive and radial initial data. Note that all the regularity estimates for the Krieger-Strain equation in the current literature are limited to radially symmetric solutions. 
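As a quick sanity check of the conservation properties mentioned above (a side computation; we assume enough decay in $v$ to integrate by parts), note that
\[ \int_{\mathbb R^d} (-\Delta)^{-1} f \, \Delta f \dd v = \int_{\mathbb R^d} \Delta \left[ (-\Delta)^{-1} f \right] f \dd v = - \int_{\mathbb R^d} f^2 \dd v,\]
so that along the flow of \eqref{e:space-homogeneous} with the operator $Q_\alpha$,
\[ \frac{\dd}{\dd t} \int_{\mathbb R^d} f \dd v = (\alpha-1) \int_{\mathbb R^d} f^2 \dd v.\]
In particular, the mass is conserved exactly when $\alpha=1$, and it is nonincreasing when $\alpha<1$, which is consistent with the idea that for $\alpha<1$ the dissipation dominates the reaction term.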
\subsection{Toward global smooth solutions} We mentioned above that we believe that the space-homogeneous equation \eqref{e:space-homogeneous}, with the Landau-Coulomb operator \eqref{e:landau-coulomb2} has global smooth solutions for any reasonably smooth initial data that decays quickly as $|v| \to \infty$. But how could we ever possibly prove it? In this subsection, we outline some potential ideas to attack this problem. This is a purely speculative section. We state two conjectures. We also state two lemmas, with complete proofs, that support possible directions to study Conjectures \ref{c:Linfty-growth} and \ref{c:liouville}. If we had an upper bound in $L^\infty$ for the solution $f$ of \eqref{e:space-homogeneous}, further regularity would follow by a standard application of the Schauder estimates. The key difficulty is to determine, given a (classical) solution $f : [0,T] \times \mathbb R^d \to [0,\infty)$ of \eqref{e:space-homogeneous}, whether $\|f(t,\cdot)\|_{L^\infty}$ may blow up at time $T$ or not. Given mass, momentum and energy, the Maxwellian distribution is the function that minimizes the entropy. It is also easy to verify that the entropy dissipation of a nonnegative function $f$ is zero if and only if $f$ is a Maxwellian. Thus, a global solution to \eqref{e:space-homogeneous} will naturally converge to the unique Maxwellian function $M(v)$ whose mass, momentum and energy coincide with those of the initial data $f_0$. Before the limit $t \to \infty$ can effectively take place, we need to make sure that no singularity emerges from the flow of the equation. If we had a solution $f : [0,T] \times \mathbb R^d \to [0,\infty)$ so that $\lim_{t \to T} f(t,x_t) = +\infty$, along some curve $x_t$ we should also have the $L^p$ norm of $f$ blowing up around the points $x_t$, for any $p > 3/2$. In that case, it is reasonable to expect some separation of scales: to have some local blow-up profile at a small scale, separated from the bulk of the mass. The Landau-Coulomb operator \eqref{e:landau-coulomb2} is nonlocal only though the formula of the coefficients $\bar a_{ij}$. The values of $f(t,y)$, for $y$ far from $x_t$, would only influence the equation around $x_t$ by enhancing its dissipation. Intuitively, more dissipation reduces the chances of a blow-up. The more negative $\gamma$ is in \eqref{e:landau}, the more localized the formula for $\bar a_{ij}$ is in terms of $f$. In the very soft potential case, and in particular for Landau-Coulomb, we might expect that the local blow-up profile solves its own Landau equation \eqref{e:landau}, but at accelerated time scale. We will do a more precise blow-up analysis below, that justifies this intuition in part. If the local blow-up profile also solves \eqref{e:space-homogeneous}, then it should also converge to a Maxwellian. The mass, momentum and energy of the local blow-up profile are completely uncorrelated with the global function $f$ (and we will see that they may even be infinite). The following proposition shows an interesting inequality that is independent of their values. \begin{prop} \label{p:Linfty-growth} Let $m>0$, $q \in \mathbb R^d$ and $e > 0$ be given. 
Let $M : \mathbb R^d \to [0,\infty)$ be the Maxwellian distribution so that \[ m = \int_{\mathbb R^d} M(v) \dd v, \qquad q = \int_{\mathbb R^d} M(v) v \dd v, \qquad e = \int_{\mathbb R^d} M(v) |v|^2 \dd v.\] For any nonnegative function $f \in L^1_2(\mathbb R^d)$ so that \[ m = \int_{\mathbb R^d} f(v) \dd v, \qquad q = \int_{\mathbb R^d} f(v) v \dd v, \qquad e = \int_{\mathbb R^d} f(v) |v|^2 \dd v,\] we have \[ \frac{\|M\|_{L^\infty(\mathbb R^d)}} {\|f\|_{L^\infty(\mathbb R^d)}} \leq C_d,\] for some constant $C_d$ depending on dimension only. Moreover, the equality is achieved if and only if $f$ is a positive multiple of the characteristic function of some ball.
\end{prop}
We can compute the constant $C_d$ explicitly for each dimension. In three dimensions, we have \[ C_3 = \sqrt{\frac{250}{9 \pi}}.\] Indeed, after the normalization used in the proof below, the extremal $f$ is the characteristic function of the ball of unit volume, with radius $R = (3/(4\pi))^{1/3}$ and energy $e = \frac 35 R^2$; the Maxwellian with unit mass, zero momentum and energy $e$ has maximum $(2\pi e/3)^{-3/2}$, which equals $\sqrt{250/(9\pi)}$.
\begin{proof}
By translating and scaling the functions $f$ and $M$ if necessary, we assume without loss of generality that $q=0$, $m=1$ and $\|f\|_{L^\infty}=1$. We analyze, under these conditions, what the maximum possible value of $\|M\|_{L^\infty}$ is.
The Maxwellian $M$ is uniquely determined by the values of $m$, $q$ and $e$. In this case, after fixing $q=0$ and $m=1$, we observe that $\|M\|_{L^\infty}$ will be maximal if $e$ takes the minimum possible value. Indeed, if $M_1$ is the Maxwellian with $q=0$, $m=1$, and $e=1$, we scale it to recover a Maxwellian $M$ with any arbitrary energy $e$ by \[ M(v) = e^{-d/2} M_1(e^{-1/2} v).\]
Thus, the problem is reduced to minimizing the energy of the function $f$, constrained to $0 \leq f \leq 1$, with unit mass and zero momentum. Let $B_R$ be the ball centered at the origin with volume one, and $\chi(v)$ be its characteristic function. We claim that the energy of $f$ is always greater than or equal to the energy of $\chi$. We compute
\begin{align*}
\int f(v) |v|^2 \dd v - \int \chi(v) |v|^2 \dd v &= \int_{B_R} (f(v)-\chi(v)) |v|^2 \dd v + \int_{\mathbb R^d \setminus B_R} (f(v)-\chi(v)) |v|^2 \dd v\\
\intertext{Since $f$ takes values in $[0,1]$, $f(v)-\chi(v)$ is nonpositive in $B_R$ and nonnegative in $\mathbb R^d \setminus B_R$. In both cases, replacing $|v|^2$ with $R^2$ gives a lower bound.}
&\geq \int_{B_R} (f(v)-\chi(v)) R^2 \dd v + \int_{\mathbb R^d \setminus B_R} (f(v)-\chi(v)) R^2 \dd v \\
&= R^2 \int_{\mathbb R^d} (f(v)-\chi(v)) \dd v = 0
\end{align*}
The last equality is a consequence of $f$ and $\chi$ having the same mass. The inequality will be strict unless $f = \chi$ a.e.
\end{proof}
Proposition \ref{p:Linfty-growth} says that the ratio between the maximum of the asymptotic limit of $f(t,\cdot)$ as $t \to \infty$ and the maximum of the initial data $f_0$ is bounded by a universal constant depending on dimension only. Our intuition of separation of scales suggests that this ratio is the most the $L^\infty$ norm of a solution should be able to grow under the evolution of the equation. We formulate the following bold conjecture.
\begin{conjecture} \label{c:Linfty-growth} Let $f: [0,T] \times \mathbb R^3 \to [0,\infty)$ be a classical solution of \eqref{e:space-homogeneous}, where $Q$ is the Landau-Coulomb operator \eqref{e:landau-coulomb2}, and $f(0,v) = f_0(v)$. Then, for all values of $(t,v) \in [0,T] \times \mathbb R^3$, we have the inequality \[ f(t,v) \leq \sqrt{\frac{250}{9 \pi}} \max f_0(v).\]
\end{conjecture}
Conjecture \ref{c:Linfty-growth}, if true, rules out finite time blow-up for solutions to \eqref{e:space-homogeneous} in the Landau-Coulomb case.
Moreover, since the inequality is independent of the initial mass, momentum and energy, it suggests that the required analysis should be oblivious of these conserved quantities. While we formulated Conjecture \ref{c:Linfty-growth} for the Landau-Coulomb case, similar intuition applies for solutions to \eqref{e:space-homogeneous} where $Q$ is the Boltzmann or Landau operator in the soft potential case. The intuition is less clear for hard potentials, but even in that case there is no apparent counterexample to rule out this inequality. The analysis of blow-up limits is a common tool in the analysis of regularity of solutions across partial differential equations. Let us describe some attempt to apply that logic to the homogeneous Landau-Coulomb equation. Our objective is to prove that no solution to the Landau-Coulomb equation may blow up in finite time. Let us consider the simpler scenario of a radially symmetric solution that is monotone decreasing along the radial direction. It is a class of functions that is preserved by the evolution, and they can only possibly blow up at the origin. We argue by contradiction. Let us suppose that the radially symmetric, monotone solution, $f$ blows up at time $T$. That is, we have a function $f: [0,T] \times \mathbb R^3 \to [0,\infty)$, which is smooth for $t<T$ and solves \eqref{e:space-homogeneous}, where $Q$ is the Landau-Coulomb operator \eqref{e:landau-coulomb2}. We assume that $f$ depends only on $t$ and $|v|$ (it is radially symmetric), $v \cdot \nabla_v f \leq 0$ and therefore $\max f(t,\cdot) = f(t,0)$. Since it is blowing up at time $T$, we have $f(t,0) \to +\infty$ as $t \to T$. Ideally, we would want to extract a blow-up limit profile and find some contradiction. Since $f$ attains it maximum always at $v=0$, we must have $D^2f(t,0) \leq 0$. Thus, $f_t(t,0) = Q(f,f)(t,0) \leq f(t,0)^2$. We deduce a minimum blowup rate: if $f$ is blowing up at time $t=T$, then $f(t,0) \geq (T-t)^{-1}$. Let $t_k \to T$ be such that \[ f(t_k,0) = \sup \{ f(t,v) : t\leq t_k, v \in \mathbb R^d\},\] and, for some arbitrary $\varepsilon>0$, \begin{equation} \label{e:blowup-rate} \left(\frac 12 + \varepsilon \right) f(t_k,0) \geq f(t_k-f(t_k,0)^{-1},0) . \end{equation} The reason why we can find a sequence satisfying the second inequality is precisely the blowup rate $f(t,0) \geq (T-t)^{-1}$. We consider the rescaled solutions \[ f_k(t,v) = a_k^{-1} f(t_k + a_k^{-1} t, b_k v).\] We choose $a_k = f(t_k, 0)$ and $b_k$ so that $(-\Delta_v)^{-1} f_k(0,0) = 1$. by construction, the functions $f_k$ solve the homogeneous Landau-Coulomb equation in $(-a_k t_k,0] \times \mathbb R^d$. Moreover, $0 \leq f_k \leq 1$ in the same domain, $f_k(0,0) = 1$, and $(-\Delta_v)^{-1}f_k(0,0) = 1$. Also, \eqref{e:blowup-rate} implies that \begin{equation} \label{e:blowup-nonconstant} f_k(-1,0) \leq \left(\frac 12 + \varepsilon \right). \end{equation} Since we are working with a radially symmetric function $f$, the coefficients $\bar a_{ij}$ are isotropic at $v=0$. We have $\{\bar a_{ij}(t,0)\} = \frac 13 (-\Delta)^{-1} f(t,0) \mathrm I$. The same can be said about the rescaled functions $f_k$. The purpose of the normalization $(-\Delta)^{-1} f_k(0,0)=1$ is so that we fix the corresponding coefficients $\{\bar a_{ij}^k(0,0)\} = \frac 13 \mathrm I$. We would like to pass to the limit as $k \to \infty$ and recover an ancient solution. We have $t_k \to T$ and $a_k \to \infty$, so the domain of the functions will grow to $(-\infty,0] \times \mathbb R^3$ as $k \to \infty$. 
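Let us record why the rescaled functions $f_k$ solve the same equation (a brief verification that is not spelled out above; the bracket notation $\bar a_{ij}[f]$ for the coefficients in \eqref{e:landau-coulomb2} computed from $f$ is ours). If $f$ solves $f_t = \bar a_{ij}[f]\, \partial_{ij} f + f^2$ and we set $g(t,v) := A f(t_0 + At, Cv)$ for constants $A,C>0$, then $\partial_t g = A^2 f_t$ and $\partial_{ij} g = AC^2 \partial_{ij} f$. Moreover, since the kernel $(|w|^2\delta_{ij} - w_iw_j)|w|^{-3}$ is homogeneous of degree $-1$ in $w$, a change of variables in the convolution gives
\[ \bar a_{ij}[g](v) = A C^{-2}\, \bar a_{ij}[f](t_0+At, Cv).\]
Hence $\bar a_{ij}[g]\partial_{ij} g + g^2 = A^2\left( \bar a_{ij}[f]\partial_{ij} f + f^2 \right) = A^2 f_t = \partial_t g$. The amplitude and the time scale must agree (here $A = a_k^{-1}$), while the velocity scale $C = b_k$ is completely free; this is what allows us to normalize $(-\Delta_v)^{-1} f_k(0,0) = 1$ independently of the choice of $a_k$.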
The following lemma says that if a radially symmetric solution of the Landau-Coulomb equation is bounded by one, and its coefficients are bounded below and above at one point, then we can obtain a bound for the coefficients in any bounded domain. \begin{lemma} \label{l:coefficients_comparable} Let $f : [-T,0] \times \mathbb R^3 \to [0,1]$ be a classical, radially symmetric, solution of \eqref{e:space-homogeneous}, where $Q$ is the Landau-Coulomb operator \eqref{e:landau-coulomb2}. Assume that $c_0 \leq (-\Delta_v)^{-1} f(0,0) \leq 1$ for some constant $c_0 > 0$. Then, for any values of $R>0$ and $T>0$, there are constants $C_1 \geq c_1 > 0$ so that for $(t,v) \in [-T,0] \times B_R$, \[ c_1 \mathrm I \leq \{ \bar a_{ij}(t,v) \} \leq C_1 \mathrm I.\] The constants $c_1$ and $C_1$ depend on $c_0$, $R$ and $T$ only. \end{lemma} \begin{proof} We first estimate the value of the coefficients at $v=0$. Recall that \[ \{ \bar a_{ij}(t,0) \} = \frac 13 (-\Delta)^{-1} f(t,0) \mathrm I = c \left( \int_{\mathbb R^3} f(t,v) |v|^{-1} \dd v \right) \mathrm I . \] Let $r_0 > 0$ be so that $r_0^2 < c_0/2$. Let $\varphi(v)$ be a smooth positive function so that $\varphi(v) = |v|^{-1}$ whenever $|v|>1$, and $0 < \varphi(v) \leq |v|^{-1}$ elsewhere. We define \[ a(t) := \int_{\mathbb R^3} f(t,v) \varphi(v) \dd v.\] Naturally, $a(t) \leq (-\Delta)^{-1}f(t,0)$. Also, using that $0 \leq f(t,v) \leq 1$, we also have $a(t) \geq (-\Delta)^{-1}f(t,0) - r_0^2$. Thus $c_0/2 \leq a(0) \leq 1$. We compute $a'(t)$ using the integral expression for the Landau operator. \begin{align*} a'(t) &= \frac 1 {8 \pi} \int \varphi(v) \partial_i \int \left( |w|^2 \delta_{ij} - w_i w_j \right) |w|^{-3} (f(t,v+w) \partial_j f(t,v) - f(t,v) \partial_j f(t,v+w)) \dd w \dd v, \\ \intertext{We integrate by parts twice to get,} &= \frac 1 {8 \pi} \iint \left( \partial_{ij} \varphi(v) \left( |w|^2 \delta_{ij} - w_i w_j \right) |w|^{-3} + 2 \partial_i \varphi(v) \frac {w_i}{|w|^3} \right) f(t,v+w) f(t,v) \dd w \dd v. \end{align*} Taking absolute values, we have \[ |a'(t)| \lesssim \iint \left( \langle v \rangle^{-3} |w|^{-1} + \langle v \rangle^{-2} |w|^{-2} \right) f(t,v+w) f(t,v) \dd w \dd v. \] In order to estimate $|a'(t)|$, we divide the domain of integration in various subdomains. We start with $|v+w|<2\langle v \rangle$. \begin{align*} I &:= \iint_{|v+w|<2\langle v \rangle} \left( \langle v \rangle^{-3} |w|^{-1} + \langle v \rangle^{-2} |w|^{-2} \right) f(t,v+w) f(t,v) \dd w \dd v, \\ \intertext{Using $f(t,v+w) \leq 1$,} &\lesssim \int f(t,v) \left( \int_{|v+w|<2\langle v \rangle} \langle v \rangle^{-3} |w|^{-1} + \langle v \rangle^{-2} |w|^{-2} \dd w \right) \dd v, \\ &\approx \int f(t,v) \langle v \rangle^{-1} \dd v \approx a(t) \end{align*} We continue with one of the terms in the integrand in the subdomain $|v+w| > 2\langle v \rangle$. \begin{align*} II &:= \iint_{|v+w| > 2\langle v \rangle} \langle v \rangle^{-2} |w|^{-2} f(t,v+w) f(t,v) \dd w \dd v, \\ \intertext{Using $f(t,v) \leq 1$, and changing variables $z=v+w$,} &\lesssim \int f(t,z) \left( \int_{\langle v \rangle < |z|/2} \langle v \rangle^{-2} |v-z|^{-2} \dd v \right) \dd z, \\ &\lesssim \int f(t,z) \langle z \rangle^{-1} \dd z \approx a(t) \end{align*} We finish with the second term in the integrand in the subdomain $|v-w| > 2\langle v \rangle$. 
\begin{align*}
III &:= \iint_{|v+w| > 2\langle v \rangle} \langle v \rangle^{-3} |w|^{-1} f(t,v+w) f(t,v) \dd w \dd v \\
\intertext{Changing variables $z=v+w$,}
&\lesssim \int f(t,z) \left( \int_{\langle v \rangle < |z|/2} \langle v \rangle^{-3} |v-z|^{-1} f(t,v) \dd v \right) \dd z \\
\intertext{Note that the domain of the inner integral is empty if $|z|<2$. Thus, $|v-z|^{-1} \lesssim \langle z \rangle^{-1}$.}
&\lesssim \int f(t,z) \langle z \rangle^{-1} \left( \int f(t,v) \langle v \rangle^{-3} \dd v \right) \dd z \\
\intertext{For an arbitrary $R>0$, we divide the inner integral into the two subdomains $B_R$ and $\mathbb R^3 \setminus B_R$.}
&\lesssim \int f(t,z) \langle z \rangle^{-1} \left( \int_{B_R} f(t,v) \langle v \rangle^{-3} \dd v + \int_{\mathbb R^3 \setminus B_R} f(t,v) \langle v \rangle^{-3} \dd v \right) \dd z, \\
\intertext{We use $f(t,v) \leq 1$ in the first term, and the value of $a(t)$ to bound the second term.}
&\lesssim \int f(t,z) \langle z \rangle^{-1} \left( \log(1+R^3) + \langle R \rangle^{-2} a(t) \right) \dd z. \\
\intertext{Choosing $R = \sqrt{a(t)}$, we get}
&\lesssim a(t) \left( 1+ \log(1+a(t)) \right).
\end{align*}
Adding the three terms $I$, $II$ and $III$, we conclude that $|a'(t)| \lesssim a(t) ( 1+ \log(1+a(t)))$. This ODE, together with the value of $a(0)$, implies that $a(t)$ may grow at most double exponentially as $t \to -\infty$. It has to stay bounded in any interval of time $[-T,0]$. Moreover, for small values of $a(t)$ the logarithmic correction is irrelevant. The value of $a(t)$ may decay at most exponentially as $t \to -\infty$. This ODE, together with $a(0) \geq c_0/2$, gives us a lower bound for $a(t)$ for $t \in [-T,0]$.
It is easy to obtain bounds for $(-\Delta)^{-1}f(t,0)$ from the values of $a(t)$. Indeed, $a(t) \leq (-\Delta)^{-1}f(t,0) \leq a(t) + r_0^2 \|f\|_{L^\infty}$. Thus, we have already obtained lower and upper bounds for the coefficients $\bar a_{ij}(t,0)$ for $t \in [-T,0]$. We need to extend these bounds to nonzero values of $v \in B_R$.
For the upper bounds, we observe that $\tr \bar a_{ij}(t,v) = (-\Delta)^{-1} f(t,v)$. Therefore
\begin{align*}
\tr \bar a_{ij}(t,v) &= c \int_{\mathbb R^3} |v-w|^{-1} f(t,w) \dd w \\
&\lesssim \int_{B_r(v)} |v-w|^{-1} f(t,w) \dd w + \int_{\mathbb R^3 \setminus B_r(v)} |v-w|^{-1} f(t,w) \dd w \\
&\lesssim r^2 (\sup f) + \frac{|v|+r}r (-\Delta)^{-1} f(t,0).
\end{align*}
Choosing any arbitrary $r>0$, we get an upper bound for $\bar a_{ij}(t,v)$.
To get a lower bound for $\bar a_{ij}(t,v)$, we use that $f$ is radially symmetric in $v$. Let $\tilde f : [-T,0] \times [0,\infty) \to [0,1]$ be so that $f(t,v) = \tilde f(t,|v|)$. The smallest eigenvalue $\lambda$ of $\bar a_{ij}(t,v)$ corresponds to the radial direction. It can be computed from $\tilde f$ by the formula
\begin{align*}
\lambda(t,v) &= \frac 13 |v|^{-3} \int_0^{|v|} s^4 \tilde f(t,s) \dd s + \frac 13 \int_{|v|}^\infty s \tilde f(t,s) \dd s. \\
\intertext{We choose any $s_0 < |v|$ and get}
&\geq \frac 13 |v|^{-3} s_0^3 \int_0^\infty s \tilde f(t,s) \dd s - \frac 16 |v|^{-3} s_0^5 (\sup \tilde f) \\
&\geq \frac 13 |v|^{-3} s_0^3 (-\Delta)^{-1} f(t,0) - \frac 16 |v|^{-3} s_0^5.
\end{align*}
This gives us a lower bound if we choose $s_0$ sufficiently small, and we conclude the proof.
\end{proof}
Going back to the blow-up sequence $f_k$, because of Lemma \ref{l:coefficients_comparable}, we have that the coefficients $\bar a_{ij}^k(t,v)$ will be uniformly elliptic in any bounded set $[-T,0] \times B_R$.
Using standard parabolic estimates (first the theorem by Krylov and Safonov, and then Schauder estimates) it is easy to see that the sequence $f_k$ is uniformly smooth on bounded sets. We pass to the limit so that, up to extracting a subsequence, there is a function $f_\infty: (-\infty,0] \times \mathbb R^d \to [0,1]$ and coefficients $\bar a_{ij}^\infty : (-\infty,0] \times \mathbb R^d \to \mathbb R$, so that \begin{align*} f_k &\to f_\infty, \\ \partial_t f_k &\to \partial_t f_\infty, \\ D^2_v f_k &\to D^2_v f_\infty, \\ \bar a_{ij}^k &\to \bar a_{ij}^\infty, \end{align*} with convergence being uniform over every compact set. Clearly, the function $f_\infty$ solves \[ \partial_t f_\infty = \bar a_{ij}^\infty \partial_{ij} f_\infty + f_\infty^2 \qquad \text{ in } (-\infty,0] \times \mathbb R^3.\] From Fatou's lemma, we see that \[ \bar a_{ij}^\infty = \lim_{k \to \infty} c \int_{\mathbb R^3} (|w|^2 \mathrm I - w_i w_j) |w|^{-3} f_k(v-w) \dd w \geq c \int_{\mathbb R^3} (|w|^2 \mathrm I - w_i w_j) |w|^{-3} f_\infty(v-w) \dd w.\] We cannot say that $f_\infty$ is an ancient solution of the Landau-Coulomb equation because the coefficients $\bar a_{ij}^\infty$ are not exactly given by the formula \eqref{e:landau-coulomb2} applied to $f_\infty$. Yet, the inequality above tells us that the diffusion is only \emph{enhanced} in the limit. Intuitively, more diffusion should only improve our regularity estimates. Admittedly, it is far from clear how to derive a contradiction from this blowup analysis. It seems reasonable to expect that the only bounded ancient solutions to the Landau-Coulomb equation are stationary Maxwellians. However, even if we start with a solution $f$ that has finite mass, energy and entropy, none of these quantities will be preserved through the sequence and blow-up limit. The final function $f_\infty$ that we obtain is a smooth, bounded function that belongs in addition to $L^\infty_{loc}((-\infty,0], L^1_{-1}(\mathbb R^d))$ (as a consequence of the upper bounds in Lemma \ref{l:coefficients_comparable}). There is no obvious reason to expect the blow-up limit $f_\infty$ to have finite mass or energy. We state the corresponding Liouville-type conjecture here. \begin{conjecture} \label{c:liouville} Let $f : (-\infty,0] \times \mathbb R^3 \to [0,\infty)$ be a smooth classical solution of \eqref{e:space-homogeneous}, where $Q$ is the Landau-Coulomb operator \eqref{e:landau-coulomb2}. We make the following assumptions. \begin{itemize} \item For all $(t,v) \in (-\infty,0] \times \mathbb R^3$, we have $0 \leq f(t,v) \leq 1$. \item For every $t \in (-\infty,0]$, there is some constant $C_t$ so that \[ \int_{\mathbb R^3} f(v) \langle v \rangle^{-1} \dd v \leq C_t.\] \end{itemize} Then $f$ must be a stationary Maxwellian. \end{conjecture} Note that the second assumption in Conjecture \ref{c:liouville} is necessary just to make sense of the formula for the coefficients in \eqref{e:landau-coulomb2}. In the radially symmetric case, the constants $C_t$, are related for different values of $t$ through Lemma \ref{l:coefficients_comparable}. The blow-up limit $f_\infty$ cannot be a stationary Maxwellian because it would contradict \eqref{e:blowup-nonconstant}. That is the basic idea of how the blowup analysis would rule out the emergence of singularities. However, a positive resolution of Conjecture \ref{c:liouville} seems to be very difficult with current techniques. 
It would be interesting to study solutions to the space homogeneous Landau-Coulomb equation \eqref{e:space-homogeneous}, with $Q$ given by \eqref{e:landau-coulomb2}, in the class $f \in L^\infty([0,T] \times \mathbb R^3) \cap L^\infty([0,T], L^1_{-1}(\mathbb R^3))$. These are solutions with possibly infinite mass, energy and entropy. There is currently no short-time Cauchy theory in this class. \section{Open problems} In Section \ref{s:space-homogeneous}, we analyzed the problem of finding $L^\infty$ bounds in the case of soft potentials, and in particular the problem of global existence of smooth solutions for the homogeneous Landau-Coulomb equation. In this section we quickly review other open problems that are related to the regularity issues reviewed in this paper. \subsection{Unconditional regularity estimates} Observing the conditional regularity results of Theorems \ref{t:conditional-regularity_landau} and \ref{t:conditional-regularity_boltzmann}, the obvious question is whether the hydrodynamic bounds \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption} and \eqref{e:entropy_density_assumption} can possibly be proved. If we somehow established the bounds for the hydrodynamic quantities in \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption} and \eqref{e:entropy_density_assumption}, we would conclude that the inhomogeneous Landau and Boltzmann equations have global smooth solutions, for any initial data. This is a remarkable open problem that Cedric Villani describes in the first chapter of his book \cite{villani2012theoreme}. If unconditional regularity estimates were true, it seems that they would be very difficult to prove. At certain scale, the hydrodynamic quantities associated to solutions of the Landau or Boltzmann equation approximately solve the compressible Euler equation (see \cite{bardos1991}). There are some recent results studying the stability of implosion singularities for the compressible Euler equation, and producing singularities for the compressible Navier-Stokes equation (see \cite{merle2019smooth} and \cite{merle2019implosion}). Can there be similar implosion singularities for the Boltzmann equation? The idea is to look for a solution that mostly flows toward a single point. The mass and energy densities would be blowing up at that point in a way that resembles the blowup profile of the compressible Euler equation in \cite{merle2019smooth}. This scenario appears to be more plausible than the unconditional regularity of the previous paragraph. If such an implosion singularity was indeed possible for the Boltzmann and/or Landau equation, then the bounds \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption} and \eqref{e:entropy_density_assumption} would not be true in general. \subsection{The kinetic Krylov-Safonov theorem} We explain a kinetic version of De Giorgi - Nash - Moser theorem in Theorem \ref{t:kinetic-DG-div}. It concerns the equation \eqref{e:kinetic-div-form}, which is a kinetic equation with second order diffusion in divergence form. A central result for parabolic equations in non-divergence form is the theorem of Krylov and Safonov \cite{krylov1980}. Its statement is analogous to that of De Giorgi, Nash and Moser, but for diffusion in non-divergence form. It would be natural to expect a result of that kind to hold for kinetic equations as well. It is still an open problem. We state it here as a conjecture. 
\begin{conjecture} \label{c:kinetic-krylov} Assume $f: [0,1] \times B_1 \times B_1 \to \mathbb R$ is a (classical) solution of an equation of the following form \[ f_t + v \cdot \nabla_x f = a_{ij}(t,x,v) \partial_{v_i v_j} f.\] Here, the coefficients $a_{ij}$ are assumed to be uniformly elliptic in the sense that there are constants $\Lambda \geq \lambda > 0$, such that for all $(t,x,v) \in [0,1] \times B_1 \times B_1$, \[ \lambda \mathrm I \leq \{a_{ij}(t,x,v)\} \leq \Lambda \mathrm I.\] Then, there exist constants $\alpha>0$ and $C$, depending only on dimension and the ellipticity parameters $\lambda$ and $\Lambda$, so that \[ \|f\|_{C^\alpha((1/2,1)\times B_{1/2} \times B_{1/2})} \leq C \|f\|_{L^\infty([0,1] \times B_1 \times B_1)} .\]
\end{conjecture}
It would also make sense to expect an integro-differential version of Conjecture \ref{c:kinetic-krylov} to hold. One can state it for different ellipticity conditions on the kernel. As a starting point, we state the conjecture using a strong notion of fractional ellipticity.
\begin{conjecture} \label{c:kinetic-krylov-integral} Assume $f: [0,1] \times B_1 \times \mathbb R^d \to \mathbb R$ is a (classical) solution of \eqref{e:kinetic-integral} for $(t,x,v) \in (0,1] \times B_1 \times B_1$ and some bounded function $h$. We assume that the kernel $K$ is symmetric, $K(t,x,v,v+w) = K(t,x,v,v-w)$, and that there exist constants $\Lambda \geq \lambda>0$ such that \[ \lambda |v'-v|^{-d-2s} \leq K(t,x,v,v') \leq \Lambda |v'-v|^{-d-2s}.\] Then there are constants $\alpha>0$ and $C$, depending only on dimension, $\lambda$ and $\Lambda$, so that $f$ satisfies the estimate \[ \|f\|_{C^\alpha((1/2,1)\times B_{1/2} \times B_{1/2})} \leq C \left( \|f\|_{L^\infty([0,1] \times B_1 \times \mathbb R^d)} + \|h\|_{L^\infty([0,1] \times B_1 \times B_1)} \right).\]
\end{conjecture}
The elliptic version of the theorem of Krylov and Safonov is easier to prove for integro-differential equations than it is for classical second order elliptic equations (see \cite{silvestre2006holder}). Based on that, it is possible that Conjecture \ref{c:kinetic-krylov-integral} may have a simpler resolution than Conjecture \ref{c:kinetic-krylov}. As of now, both remain open.
\subsection{Bounded domains}
The current versions of the conditional regularity results for Boltzmann and Landau equations in Theorems \ref{t:conditional-regularity_boltzmann} and \ref{t:conditional-regularity_landau} do not allow any form of spatial boundary. For physical applications, it is natural to have the space variable $x$ confined to some bounded domain. There are different boundary conditions that have been considered in the literature: diffuse reflection, specular reflection and bounce-back reflection. The boundary effects may have an impact on the regularity of the solutions to kinetic equations. See \cite{guo2010decay,kim2011thesis,guo2016bv,guo2017regularity,kim2018specular,ouyang2020} for results concerning the cutoff Boltzmann and Landau equations, and solutions near a Maxwellian.
It is an interesting research direction to study the possibility of extending some form of Theorem \ref{t:conditional-regularity_boltzmann} to any domain with boundary, for any of the physical boundary conditions. In the cutoff case, it is known that the boundary conditions produce discontinuities when the domain is not convex (see \cite{kim2011thesis}). Naturally, in the non-cutoff case the solution is expected to be smooth away from the boundary.
However, it is unclear if we should expect uniform smoothness estimates up to the boundary, especially in the nonconvex case. \subsection{Sharper conditions for coercivity estimates} In Theorem \ref{t:coercivity}, we present a sufficient condition for a kernel $K: B_2 \times B_2 \to [0,\infty)$ to have coercivity estimates with respect to the $\dot H^s$ norm. Our conditions are not sharp. We would like to explore what kernels $K$ have the property that for some $c>0$, \[ \iint_{B_2 \times B_2} |f(v') - f(v)|^2 K(v,v') \dd v' \dd v \geq c \iint_{B_1 \times B_1} |f(v') - f(v)|^2 |v'-v|^{-d-2s} \dd v' \dd v.\] There are simple examples replacing $K(v,v') \dd v' \dd v$ for a singular measure that satisfy the coercivity condition. For example, in two dimensions, the operator $(-\partial_1)^{2s} + (-\partial_2)^{2s}$ is clearly coercive. It corresponds to \[ \begin{aligned} \int_{\mathbb R^2} \int_{\mathbb R} & \left( |f(v_1+w,v_2) - f(v_1,v_2)|^2 + |f(v_1,v_2+w) - f(v_1,v_2)|^2 \right) |w|^{-1-2s} \dd w \dd v \\ & \geq c \iint_{\mathbb R^d \times \mathbb R^d} |f(v') - f(v)|^2 |v'-v|^{-d-2s} \dd v' \dd v. \end{aligned}\] What this simple example shows is that a sharp condition for coercivity should allow for singular measures instead of $K(v,v') \dd v' \dd v$, possibly supported in a set of zero Lebesgue measure. In the context of the Boltzmann equation, Theorem \ref{t:coercivity} allows us to obtain a coercivity estimate for the Boltzmann collision operator in terms of the mass, energy and entropy of $f$. We would need a sharper version of Theorem \ref{t:coercivity} if we want to replace the upper bound on the entropy with a lower bound on the temperature tensor. \subsection{Conditional regularity for the cutoff Boltzmann equation} It is well known that we should not expect any regularization from the cutoff Boltzmann equation. Yet, one may start with a smooth initial data and study the propagation of its initial regularity. It seems plausible that if the hydrodynamic bounds \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption} and \eqref{e:entropy_density_assumption} hold, then a solution of the inhomogeneous Boltzmann equation whose initial data is smooth and rapidly decaying would stay smooth and rapidly decaying for all time. Moreover, it is conceivable that the smoothness estimates stay uniform as $t \to \infty$, just like in the noncutoff case. Conditional regularity estimates of this type, for the full Boltzmann equation with cutoff, have not yet been studied. \subsection{Renormalized solutions} For generic initial data $f_0$, Alexandre and Villani constructed certain kind of global solution $f$ of the non-cutoff Boltzmann equation. This type of generalized solutions are called \emph{renormalized solutions with defect measure}. The uniqueness of solutions is not known within this class. It is natural to wonder if a localized version of Theorem \ref{t:conditional-regularity_boltzmann} may hold for renormalized solutions with defect measure. That is, if a renormalized solution $f$ satisfies the hydrodynamic inequalities \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption} and \eqref{e:entropy_density_assumption} for almost every $(t,x)$ in some cylinder $(-1,0] \times B_1$, should we expect $f$ to be $C^\infty$ for $(t,x) \in (-1/2,0] \times B_{1/2}$? 
\subsection{Conditional regularity for solutions without rapid decay in the moderately soft potential case}
In the case of soft potentials ($\gamma\leq 0$), Theorem \ref{t:conditional-regularity_boltzmann} requires the initial data to have rapid decay. More precisely, we assume that for every $q\geq 0$, there exists a $C_q$ so that $f_0(x,v) \leq C_q \langle v \rangle^{-q}$. The reason for this assumption is that when, in the proof of Theorem \ref{t:conditional-regularity_boltzmann}, we iteratively compute estimates for all derivatives of $f$, the decay of our estimates deteriorates every time we take an extra derivative. If we only start with $f_0(x,v) \leq C_q \langle v \rangle^{-q}$ for one particular value of $q$, after taking a certain number of derivatives, our estimates would not decay at all, and eventually the analysis fails.
It seems difficult to imagine that if $f_0(x,v) \leq C_q \langle v \rangle^{-q}$ for some large value of $q$, and the hydrodynamic bounds \eqref{e:mass_density_assumption}, \eqref{e:energy_density_assumption} and \eqref{e:entropy_density_assumption} hold, then $f$ can possibly fail to be $C^\infty$. Yet, this problem is currently open whenever $\gamma \leq 0$.
\index{Bibliography@\emph{Bibliography}}
\end{document}
\betaegin{document} \subjclass[2020] {15A63, 11E04} \keywords{Inner product, simultaneous orthogonalization, simultaneous diagonalization, ultrafilter.} \muaketitle \betaegin{abstract} For an arbitrary field $\muathbb{K}$ and a family of inner products in a $\muathbb{K}$-vector space $V$ of arbitrary dimension, we study necessary and sufficient conditions in order to have an orthogonal basis relative to all the inner products. If the family contains a nondegenerate element plus a compatibility condition, then under mild hypotheses the simultaneous orthogonalization can be achieved. So we investigate several constructions whose purpose is to add a nondegenerate element to a degenerate family and we study under what conditions the enlarged family is nondegenerate. \end{abstract} \section{Introduction and preliminaries} Historically the research about the problem of simultaneous orthogonalization of two inner products over a ${\muathbb{K}}$-vector space of finite dimension, with ${\muathbb{K}}$ a field satisfying different restrictions, has been largely studied by several authors, see, for example, the works \cite{Finsler}, \cite{Calabi}, \cite{Wo}, \cite{Uhligart}, \cite{Greub} and \cite{Becker}. The analogous problem for a family with two or more inner products over a ${\muathbb{K}}$-vector space of finite dimension has been considered in \cite{BMV} when the ground field is the real or complex numbers. In general, for an arbitrary field ${\muathbb{K}}$ and a finite-dimensional vector space over ${\muathbb{K}}$, a thorough study has been recently realised in \cite{CGMM}. In fact, \cite{CGMM} has motivated a natural follow-up to the simultaneous orthogonalization of families of inner products over a vector space of arbitrary dimension over an arbitrary field ${\muathbb{K}}$. \newline \indent The paper is organized as follows: Section 2 contains the connection between the notion of roots and simultaneous diagonalization of families of endomorphisms. This is preparatory for the results on simultaneous orthogonalization that we pursue. Section 3 deals firstly with some topological notions that we will need in order to introduce the concept of a nondegenerate family of inner products. This latest notion will lead us to divide the problem of simultaneous orthogonalization into two procedures, depending on whether the family of inner products is nondegenerate or not. The nondegenerate case is contained in Theorem \mathfrak ref{ogacem} and its corollaries. The degenerate case is studied in the remaining subsection (see Theorem \mathfrak ref{oepem} and its corollary). In Section 4 we consider different constructions whose goal is to modify a given family $\muathcal F$ of inner products, where possibly all of them are degenerated, as to get a new family containing a nondegenerate inner product and whose orthogonalization is induced by an orthogonalization of $\muathcal F$. To this end, we use several philosophies, some of them exploit the idea of adding a suitable linear combination which turns out to be nondegenerate. Others use an ultrafilter construction that we particularize at the end of the paper to the case in which the ground field is $\muathbb{R}$ or $\muathbb{C}$. To do so, we introduce new concepts related to the idea of how elements on the vector space behave, such as pathological (or negligible in the real-complex case) elements. 
This will guarantee, together with the simultaneous orthogonalization, the nondegenerancy of new certain families of inner products constructed from the original one. We are talking about Theorem \mathfrak ref{pajaro} and Corollary \mathfrak ref{playita} when the base field is arbitrary and Theorem \mathfrak ref{wwe} for $\muathbb{R}$ or $\muathbb{C}$. From now on we will use the following basic definitions and notations. The complementary of a subset $A$ of $X$ will be denoted by $X\setminus A$ or by $\complement A$ (if there is no possible ambiguity). We write the cardinal of $X$ as $\mathfrak rm{card}(X)$. If $I$ is a set, $\hbox{\bf P}_{\mathbf{F}}(I)$ will denote the set of finite parts of $I$. \newline \indent We consider the naturals with zero included: $\muathbb{N}=\{0,1,\lambdadots\}$ and use the notation $\muathbb{N}^*=\muathbb{N}\setminus\{0\}$. For a field ${\muathbb{K}}$ we use any of the notations ${\muathbb{K}}^*$ or ${\muathbb{K}}^\times$ for the multiplicative group ${\muathbb{K}}\setminus\{0\}$. \newline \indent All through this work, we denote by $V$ a ${\muathbb{K}}$ vector space where ${\muathbb{K}}$ is the field. We use ${\mathfrak rm char}({\muathbb{K}})$ for the characteristic of ${\muathbb{K}}$. If $V$ is a ${\muathbb{K}}$-vector space and $S$ is a subset of $V$, we write $\mathop{\hbox{\mathfrak rm span}}an(S)$ to denote the subspace generated by $S$. A vector subspace $H$ of a vector space $V$ is a {\it hyperplane} if $\dim(V/H)=1$. A vector subspace $H$ of a vector space $V$ is {\it proper} if it is other than the whole space and $H$ is {\it trivial} if it is the null space.\newline \indent By an {\it inner product} on a ${\muathbb{K}}$-vector space $V$ we will mean a symmetric bilinear form $\esc{\cdot,\cdot}\colon V\times V\to {\muathbb{K}}$. And with $(V,\esc{\cdot,\cdot})$ we refer to the so-called {\it inner product ${\muathbb{K}}$-vector space}. We denote by $V^{\circ}:=\{v \in V \colon \esc{v,v}=0\}$ the \emph{set of isotropic} vectors of $V$. Observe that if ${\mathfrak rm char}({\muathbb{K}})=2$, then $V^\circ$ is a subspace of $V$. If $U$ is a vector subspace of $V$, we can consider the inner product ${\muathbb{K}}$-vector space $(U,\esc{\cdot,\cdot}\vert_U)$ where $\esc{\cdot,\cdot}\vert_U$ is the restriction of $\esc{\cdot,\cdot}$ to $U$. We denote by $U^{\perp}:=\{x \in V \colon \esc{x,U}=0\}$. We say that $U$ is {\it $\perp$-closed} if $U^{\perp \perp}=U$. For an element $u\in V$ we set $u^\betaot:=({\muathbb{K}} u)^\betaot$. Define $\muathop{\hbox{\betaf rad}}(V,\esc{\cdot,\cdot}):=\{x\in V\colon \esc{x,V}=0\}$. If there is no possible confusion we will write $\muathop{\hbox{\betaf rad}}(\esc{\cdot,\cdot})$. When $\muathop{\hbox{\betaf rad}}(\esc{\cdot,\cdot})=0$ we say that $(V,\esc{\cdot,\cdot})$ is \emph{nondegenerate}. In this case we may consider the {\it dual pair} $(V,V)$ in the sense of \cite[\S IV, section 6, Definition 1, p. 69]{Jacobson}. For us, $\omegaPerp ^{\esc{\cdot,\cdot}}$ means orthogonal direct sum relative to the inner product $\esc{\cdot,\cdot}$. If it is clear we write $\omegaPerp$ instead of $\omegaPerp ^{\esc{\cdot,\cdot}}$. A family of inner products $\muathcal F=\{\esc{\cdot,\cdot}_i\}_{i\in I}$ is said to be \emph{simultaneously orthogonalizable} if there exists a basis $\{v_j\}_{j\in\mathcal{L}ambda}$ of $V$ such that for any $i\in I$ we have $\esc{v_j,v_k}_i=0$ whenever $j\ne k$. 
Finally, we denote by $\omegaPerp^{\muathcal F}$ the orthogonal direct sum relative to any inner product of $\muathcal F$ and by $\betaot^{\!\!\tiny\muathcal F} $ the perpendicularity relationship for any $\esc{\cdot,\cdot}\in\muathcal F$. \section{Roots and root spaces}\lambdaabel{gatito} Let $\mathcal T\subset \mathop{\hbox{\mathfrak rm End}}(V)$ where $\mathop{\hbox{\mathfrak rm End}}(V)$ is the set of ${\muathbb{K}}$-linear maps $V\to V$. A map $\alpha\colon\mathcal T\to{\muathbb{K}}$ is said to be a {\em root} for $\mathcal T$ if and only if there is some nonzero $v\in V$ such that for any $T\in\mathcal T$ one has $T(v)=\alpha(T)v$. Any such vector $v$ is called a {\em root vector} of $\mathcal T$ (relative to $\alpha$) and the set of all $w\in V$ such that $T(w)=\alpha(T)w$ (for any $T\in\mathcal T$) is called the {\em root space} of $\alpha$ and denoted $V_\alpha$. Observe that the constant map $0\colon\mathcal T\to{\muathbb{K}}$ such that $T\muapsto 0$ is a root of $\mathcal T$ if and only if $\cap_{T\in\mathcal T}\ker(T)\ne 0$. Now let ${\muathbb{K}}\mathcal T$ denote the ${\muathbb{K}}$-subspace of $\mathop{\hbox{\mathfrak rm End}}(V)$ generated by $\mathcal T$. We will denote by $\muathop{\hbox{\betaf roots}}(\mathcal T)$ the set of all roots of $\mathcal T$. Then we have: \betaegin{lemma} There is a canonical bijection $\tau\colon \muathop{\hbox{\betaf roots}}(\mathcal T)\cong\muathop{\hbox{\betaf roots}}({\muathbb{K}}\mathcal T)$ such that $\tau(\alpha)\vert_\mathcal T=\alpha$ for any $\alpha\in\mathcal T$. \end{lemma} \betaegin{proof} Given a root $\alpha\in\muathop{\hbox{\betaf roots}}(\mathcal T)$ we must extend it to a root $\tau(\alpha)$ of ${\muathbb{K}}\mathcal T$. To do this, we can fix a basis of ${\muathbb{K}}\mathcal T$ given by $\{T_i\}_{i\in I}\subset\mathcal T$ and then, for a generic element $\sum_i \lambda_i T_i\in{\muathbb{K}}\mathcal T$ with $\lambdaambda_i\in{\muathbb{K}}$ and $T_i\in\mathcal T$ we define $\tau(\alpha)(\sum_i\lambda_i T_i):=\sum_i\lambda_i\alpha(T_i)$. This definition does not depend on the chosen basis. Indeed, if we consider a different basis $\{S_j\}\subset \mathcal T$, then we must prove that when $\sum_i\lambda_i T_i=\sum_j\muu_j S_j$ (with $\lambda_i,\muu_j\in{\muathbb{K}}$) we get $\sum_i\lambda_i \alpha(T_i)=\sum_j\muu_j\alpha(S_j)$. To check this equality, we take a root vector $v$ relative to $\alpha$ and apply both members of the equality $\sum_i\lambda_i T_i=\sum_j\muu_j S_j$ to $v$. Now the well defined map $\tau(\alpha)\colon{\muathbb{K}}\mathcal T\to{\muathbb{K}}$ is a root of ${\muathbb{K}}\mathcal T$ because any root vector of $\mathcal T$ relative to $\alpha$ is a root vector of ${\muathbb{K}}\mathcal T$ relative to $\tau(\alpha)$. So we have a map $\tau\colon\muathop{\hbox{\betaf roots}}(\mathcal T)\to\muathop{\hbox{\betaf roots}}({\muathbb{K}}\mathcal T)$. Also, note that $\tau(\alpha)(T_i)=\alpha(T_i)$ for any element $T_i$ of the basis $\{T_i\}$, hence the restriction of $\tau(\alpha)$ to $\mathcal T$ is $\alpha$. This proves that $\tau$ is injective. To see that $\tau$ is surjective consider a root $\beta$ of ${\muathbb{K}}\mathcal T$ and let $\alpha:=\beta\vert_\mathcal T$. Then $\alpha$ is a root of $\mathcal T$ and $\alpha(T_i)=\beta(T_i)$ for any basic element $T_i$. Since $\tau(\alpha)(T_i)=\alpha(T_i)$ also, we conclude that $\tau(\alpha)=\beta$. \end{proof} So if $\mathcal T\subset\mathop{\hbox{\mathfrak rm End}}(V)$, then essentially $\mathcal T$ and ${\muathbb{K}}\mathcal T$ have the same roots. 
This is why we can replace $\mathcal T$ with ${\mathbb{K}}\mathcal T$ in our work. Therefore, we can assume from the beginning that the set of linear maps $\mathcal T$ is in fact a vector subspace. In this setting, we see that any $\alpha\in\mathop{\hbox{\bf roots}}(\mathcal T)$ is a linear map:
\begin{enumerate}[label=(\roman*)]
\item If $T,S\in\mathcal T$ we consider a root vector $v$ of $\mathcal T$ relative to $\alpha$ and then $(T+S)(v)=\alpha(T+S)v$ and, on the other hand, $T(v)+S(v)=\alpha(T)v+\alpha(S)v$, so that $\alpha(T+S)=\alpha(T)+\alpha(S)$.
\item If $T\in\mathcal T$ we consider a root vector $v$ of $\mathcal T$ relative to $\alpha$ and $\lambda\in{\mathbb{K}}$; then $(\lambda T)(v)=\alpha(\lambda T)v$ and $\lambda T(v)=\lambda\alpha(T)v$, whence $\lambda\alpha(T)=\alpha(\lambda T)$.
\end{enumerate}
So every root of $\mathcal T$ is a linear form, that is, $\mathop{\hbox{\bf roots}}(\mathcal T)$ is a subset of the dual space of $\mathcal T$.

\begin{lemma} \label{igual}
Let $\mathcal T$ be a vector subspace of $\mathop{\hbox{\rm End}}(V)$. For two roots $\alpha, \beta$ of $\mathcal T$, we have $V_\alpha=V_\beta$ if and only if $\alpha=\beta$.
\end{lemma}
\begin{proof}
Take a root vector $v$ of $\mathcal T$ relative to $\alpha$. Then for any $T\in\mathcal T$ we have $T(v)=\alpha(T)v$, but $v\in V_\beta$, hence $T(v)=\beta(T)v$, whence $\alpha(T)=\beta(T)$; since $T$ is arbitrary, $\alpha=\beta$. The other implication is straightforward.
\end{proof}

Recall that a family $\mathcal T \subset \mathop{\hbox{\rm End}}{(V)}$ is {\it simultaneously diagonalizable} if there exists a basis $B$ of $V$ such that $T(v)\in {\mathbb{K}} v$ for any $v \in B$ and any $T\in\mathcal T$.

\begin{proposition}\label{palmera}
Let $V$ be a ${\mathbb{K}}$-vector space and let $\mathcal T$ be a vector subspace of $\mathop{\hbox{\rm End}}(V)$. Then we have the following assertions:
\begin{enumerate}[label=(\roman*)]
\item If $\mathcal T$ is simultaneously diagonalizable and $B=\{v_i\}_{i\in I}$ is a basis of $V$ diagonalizing all the elements of $\mathcal T$, define for each $i\in I$ a map $\alpha_i\colon\mathcal T\to{\mathbb{K}}$ such that $\alpha_i(T)$ is the eigenvalue of $T$ at $v_i$ for every $T\in \mathcal T$. Now, from the indexed family of roots $\{\alpha_i\colon i\in I\}$ we eliminate repetitions, obtaining a set $\Phi$. Then
$$V=\bigoplus_{\alpha_i\in \Phi}V_{\alpha_i}$$
where $V_{\alpha_i}:=\{x\in V \colon \forall \, T\in\mathcal T,\ T(x)=\alpha_i(T)x\}$.
\item Conversely, if $V$ is a direct sum of root spaces relative to some set of roots of $\mathcal T$, then $\mathcal T$ is simultaneously diagonalizable.
\item \label{coco} If all the elements of $\mathcal T$ are self-adjoint relative to an inner product $\esc{\cdot, \cdot}$ defined on $V$, then the root spaces are pairwise orthogonal.
\end{enumerate}
\end{proposition}
\begin{proof}
By the definition of $\alpha_i$ we have $T(v_i)=\alpha_i(T)v_i$, hence $v_i\in V_{\alpha_i}$. Let us prove that for any $i$ we have
$$V_{\alpha_i}=\mathop{\hbox{\rm span}}(\{v_j \in B \colon V_{\alpha_j}=V_{\alpha_i}\}).$$
Let $0\ne x\in V_{\alpha_i}$ and write it in terms of the basis $B$. Thus $x=\sum_l k_l v_l$ (for scalars $k_l\in{\mathbb{K}}$, not all null). Then, for an arbitrary $T\in\mathcal T$, we have $\sum_l\alpha_i(T)k_l v_l=\sum_l k_l T(v_l)$, whence for the $l$'s such that $k_l\ne 0$ we conclude that $\alpha_i(T)=\alpha_l(T)$, since $v_l\in V_{\alpha_l}$.
Consequently $V_{\alpha_i}$ is the linear span of the $v_l$'s such that $\alpha_l=\alpha_i$. Thus each $V_{\alpha_i}$ has a basis consisting of elements of $B$. This implies, applying Lemma \ref{igual}, that the sum $\sum_{\alpha_i\in\Phi}V_{\alpha_i}$ is direct. To show that it coincides with the whole $V$, take into account that for any $v_k\in B$ the root $\alpha_k$ is in $\Phi$, hence $v_k\in V_{\alpha_k}$. The second assertion of the proposition is straightforward: take a basis of each root space and form the union of these. To prove item \ref{coco}, consider two roots $\alpha,\beta$ with $\alpha\ne\beta$. Then there is some $T\in\mathcal T$ such that $\alpha(T)\ne\beta(T)$. So for any $x\in V_\alpha$ and $y\in V_\beta$ we have $\alpha(T)\esc{x,y}=\esc{T(x),y}=\esc{x,T(y)}=\beta(T)\esc{x,y}$. Since $\alpha(T)\ne\beta(T)$ we conclude $\esc{x,y}=0$.
\end{proof}

\begin{corollary}
Let $\mathcal T$ be a subspace of $\mathop{\hbox{\rm End}}(V)$ which is simultaneously diagonalizable and denote by $\hat{\mathcal T}$ the subalgebra of $\mathop{\hbox{\rm End}}(V)$ generated by $\mathcal T$. Then any root $\alpha\colon\mathcal T\to{\mathbb{K}}$ can be extended to a homomorphism of ${\mathbb{K}}$-algebras $\hat\alpha\colon\hat{\mathcal T}\to{\mathbb{K}}$.
\end{corollary}
\begin{proof}
The natural extension of $\alpha$ is
$$\hat\alpha(\sum_{i_1,\ldots,i_n}k_{i_1,\ldots,i_n}T_{i_1}\cdots T_{i_n}):=\sum_{i_1,\ldots,i_n}k_{i_1,\ldots,i_n}\alpha(T_{i_1})\cdots \alpha(T_{i_n}).$$
It is easy to prove that $\hat\alpha$ is well-defined.
\end{proof}

Observe that for each root $\alpha$, the projection of $V$ onto $V_\alpha$ is an idempotent of $\mathop{\hbox{\rm End}}(V)$ and the collection of all such projections is a system of orthogonal idempotents whose sum is $1_V$. Concretely, the idempotents in \cite[Theorem 4.10 (c)]{IMR} are these projections onto root subspaces. This establishes a link between the current section and \cite{IMR}.

\section{Simultaneous orthogonalization in infinite dimension}

In the first subsection we will consider the most favorable case of simultaneous orthogonalization, roughly speaking, the case in which the family contains a nondegenerate inner product. Then we advance to the case in which there is one inner product whose radical is contained in the radical of the remaining members of the family. Before approaching the simultaneous orthogonalization of a family of inner products, we would like to observe that in a finite-dimensional space endowed with two inner products, with the radical of the first one contained in the radical of the second one, the latter can be written in terms of the former. This fact will also play a substantial role in the infinite-dimensional case. An elementary linear algebra result states that if $R,S\colon V\to V$ are linear maps with $\ker(R)\subset\ker(S)$, then there is a linear map $P\colon V\to V$ such that $S=PR$. We use this result to prove the following lemma:

\begin{lemma}
Let $V$ be a finite-dimensional vector space with $\esc{\cdot,\cdot}_0$ and $\esc{\cdot,\cdot}_1$ two inner products on $V$ such that $\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_0)\subset \mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_1)$. Then $\esc{x,y}_1=\esc{T(x),y}_0$ for some linear map $T\colon V\to V$ and any $x,y \in V$.
\end{lemma}
\begin{proof}
Take a basis $B=\{v_i\}_{i=1}^n$ of $V$ and consider the matrices $(\esc{v_i,v_j}_0)_{i,j}$ and $(\esc{v_i,v_j}_1)_{i,j}$.
Then
$$ x\, (\esc{v_i,v_j}_0)_{i,j}=0 \Rightarrow x\, (\esc{v_i,v_j}_1)_{i,j}=0$$
for any row vector $x\in{\mathbb{K}}^n$, because $\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_0)\subset\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_1)$. So there is an $n\times n$ matrix $P$ such that $(\esc{v_i,v_j}_1)_{i,j}=(\esc{v_i,v_j}_0)_{i,j}\, P$. If we write $P=(p_{ij})$ we have
$$\esc{v_i,v_j}_1=\sum_k \esc{v_i,v_k}_0\, p_{kj}= \esc{v_i,\sum_k p_{kj}v_k}_0=\esc{v_i,T(v_j)}_0$$
where $T\colon V\to V$ is such that $T(v_j)=\sum_k p_{kj}v_k$. Summarizing, $\esc{x,y}_1=\esc{x,T(y)}_0$, which is equivalent to $\esc{x,y}_1=\esc{T(x),y}_0$ by symmetry.
\end{proof}

\medskip
In order to extend the previous finite-dimensional result as far as possible we will need to implement some fundamental topological ideas.
\medskip

Let $V$ be a vector space over a field ${\mathbb{K}}$ and let $\esc{\cdot,\cdot} \colon V \times V \to {\mathbb{K}}$ be an inner product such that $(V,V)$ is a dual pair. We can consider the topology whose basis of neighborhoods of a point $x\in V$ is given by the sets $x+\cap_{i=1}^n v_i^\bot$ for finite collections $v_1,\ldots,v_n$ (see \cite[\S IV, section 6, Definition 2, p. 70]{Jacobson}). We will call this the $\esc{\cdot,\cdot}$-{\it topology} of $V$. One can see that the closed subspaces are precisely those subspaces $U\subset V$ such that $U^{\bot\bot}=U$ (see \cite[\S IV, section 6, Proposition 1, p. 71]{Jacobson}). This can be applied to the ground field endowed with the inner product $\esc{\lambda,\mu}_{{\mathbb{K}}}:=\lambda \mu$ with $\lambda, \mu \in {\mathbb{K}}$. Of course the topology induced in ${\mathbb{K}}$ is the discrete topology.

\begin{definition} \rm
Let $X$ and $Y$ be topological spaces and $B\colon X\times X\to Y$ a map. We will say that $B$ is {\em partially continuous} if both maps $B(x,\_),B(\_,x)\colon X\to Y$ are continuous for every $x \in X$.
\end{definition}

\begin{remark}\label{copioso}\rm
Let $(V,\esc{\cdot,\cdot})$ be a nondegenerate inner product ${\mathbb{K}}$-vector space. Recall the definition of the {\it topological dual} $V^*$: it consists of all linear maps $V\to {\mathbb{K}}$ which are continuous when $V$ carries the $\esc{\cdot,\cdot}$-topology and ${\mathbb{K}}$ the discrete topology. By \cite[\S IV, section 7, Lemma, p. 72]{Jacobson} each $f\in V^*$ is of the form $f=\esc{y,\_}$ for some $y\in V$. Furthermore, observe that $\esc{\cdot,\cdot}$ is partially continuous relative to itself. Indeed, for $x \in V$ let $f_x \colon V \to {\mathbb{K}}$ be given by $f_x(v):=\esc{x,v}$. We will prove that $f_x$ is continuous by finding an adjoint (see \cite[\S IV, section 7, Theorem 1, p. 72]{Jacobson}). So, let $(f_x)^{\sharp} \colon {\mathbb{K}} \to V$ be defined linearly by $(f_x)^{\sharp}(1)=x$. For $\lambda \in {\mathbb{K}}$, we have $\esc{f_x(v),\lambda}_{{\mathbb{K}}}=\lambda f_x(v)= \lambda\esc{x,v}=\esc{(f_x)^{\sharp}(\lambda),v}$.
\end{remark}

\begin{proposition}\label{atupoi}
Let $V$ be an arbitrary ${\mathbb{K}}$-vector space provided with two inner products $\esc{\cdot,\cdot}_i$ ($i=0,1$). Assume that $\esc{\cdot,\cdot}_0$ is nondegenerate and endow $V$ with the $\esc{\cdot,\cdot}_0$-topology. Then:
\begin{enumerate}[label=(\roman*)]
\item \label{atupoi1} The inner product $\esc{\cdot,\cdot}_1$ is partially continuous if and only if there is a continuous linear map $T\colon V\to V$ such that $\esc{x,y}_1=\esc{T(x),y}_0$ for any $x,y\in V$.
\item \label{atupoi2} If both inner products are simultaneously orthogonalizable, then $\esc{\cdot,\cdot}_1$ is partially continuous.
\end{enumerate}
\end{proposition}
\begin{proof}
If $\esc{\cdot,\cdot}_1$ is partially continuous, then for any $x\in V$ the linear map $f_x:=\esc{x,\_}_1$ is continuous. Whence, by Remark \ref{copioso}, there is a unique element $a_x\in V$ verifying $f_x=\esc{a_x,\_}_0$ (the uniqueness follows from the nondegeneracy of $\esc{\cdot,\cdot}_0$). Defining $T\colon V\to V$ such that $T(x)=a_x$, we have
\begin{equation}\label{reason}
\esc{x,\_}_1=\esc{T(x),\_}_0
\end{equation}
for any $x \in V$. It can be seen that $T$ is linear. Next we prove the continuity of $T$ or, equivalently, that it has an adjoint relative to $\esc{\cdot,\cdot}_0$. In fact, $T$ is self-adjoint:
$$\esc{T(x),y}_0=\esc{x,y}_1=\esc{y,x}_1=f_y(x)=\esc{a_y,x}_0=\esc{x,T(y)}_0.$$
Conversely, assume that the inner product $\esc{\cdot,\cdot}_1$ is written as $\esc{x,y}_1=\esc{T(x),y}_0$ for some continuous linear map $T\colon V\to V$ and arbitrary $x,y\in V$. In order to prove that $\esc{\cdot,\cdot}_1$ is partially continuous we need to check that for any $x\in V$ the map $f_x\colon V\to{\mathbb{K}}$ defined as $f_x(v):=\esc{x,v}_1$ is continuous. But this is equivalent, again, to proving that $f_x$ has an adjoint. Define the linear map $S\colon {\mathbb{K}}\to V$ such that $1\mapsto T(x)$. We check that this is the adjoint map of $f_x$:
$$\esc{f_x(v),\lambda}_{{\mathbb{K}}}=\lambda f_x(v)=\lambda \esc{x,v}_1=\lambda \esc{v,T(x)}_0=\esc{v,S(\lambda)}_0$$
for every $\lambda \in {\mathbb{K}}$ and $v \in V$, which proves that $f_x$ is continuous.\newline
For the second item take a basis $B=\{v_j\}_{j\in J}$ which is orthogonal for both inner products. The nondegeneracy of $\esc{\cdot,\cdot}_0$ forces $\esc{v_j,v_j}_0=\lambda_j\ne 0$, so, denoting $\mu_j:=\esc{v_j,v_j}_1$, we can define the linear map $T\colon V\to V$ such that $T(v_j)=\frac{\mu_j}{\lambda_j}v_j$ for any $j$. Denoting by $\delta_{jk}$ the Kronecker delta, we have
$$\esc{T(v_j),v_k}_0=\frac{\mu_j}{\lambda_j}\esc{v_j,v_k}_0=\delta_{jk}\frac{\mu_j}{\lambda_j}\lambda_j=\esc{v_j,v_k}_1,$$
which gives $\esc{T(x),y}_0=\esc{x,y}_1$ for any $x,y\in V$. Furthermore, $T$ is easily seen to be self-adjoint, hence continuous. Thus, applying item \ref{atupoi1}, we prove the statement.
\end{proof}

\subsection{Simultaneous orthogonalization of a nondegenerate family of inner products}\label{gallo}

In this subsection we approach the issue of finding necessary and sufficient conditions for the existence of a basis which simultaneously orthogonalizes a nondegenerate family of inner products. We start by defining such a nondegenerate family.

\begin{definition}\label{fuerade} \rm
Let $\mathcal F$ be a family of inner products on a vector space $V$ over ${\mathbb{K}}$. We will say that $\mathcal F$ is {\it nondegenerate} if there is some element in $\mathcal F$ whose radical is $0$, say $\esc{\cdot,\cdot}_0$, such that any $\esc{\cdot,\cdot}\in\mathcal F$ is partially continuous relative to the $\esc{\cdot,\cdot}_0$-topology of $V$. Otherwise, we will refer to a {\it degenerate} family.
\end{definition}

If a family $\mathcal F$ is nondegenerate we will use the notation $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i\in I \cup \{0\}}$ to indicate that the specific inner product $\esc{\cdot,\cdot}_0$ is nondegenerate and the remaining ones are partially continuous relative to the $\esc{\cdot,\cdot}_0$-topology of $V$.
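In the finite-dimensional case the continuity requirement in the definition above is automatic (see the remark that follows), and the operator $T$ of Proposition \ref{atupoi}\ref{atupoi1} can be computed directly from Gram matrices: if $G_0$ is the (invertible) Gram matrix of $\esc{\cdot,\cdot}_0$ and $G_1$ that of $\esc{\cdot,\cdot}_1$, then $T=G_0^{-1}G_1$ satisfies $\esc{x,y}_1=\esc{T(x),y}_0$ and is $\esc{\cdot,\cdot}_0$-self-adjoint. The following SymPy fragment is a small sketch of this computation; the matrices are ad hoc illustrative choices, not taken from the text.

\begin{verbatim}
# Sketch (finite dimension): the operator T of Proposition (atupoi)(i).
# If G0 is the invertible Gram matrix of <.,.>_0 and G1 that of <.,.>_1,
# then T = G0^{-1} G1 represents <.,.>_1 through <.,.>_0 and is
# self-adjoint with respect to <.,.>_0.  (Illustrative matrices only.)
from sympy import Matrix

G0 = Matrix([[2, 1, 0],
             [1, 2, 0],
             [0, 0, 1]])     # nondegenerate symmetric matrix
G1 = Matrix([[1, 0, 1],
             [0, 0, 0],
             [1, 0, 2]])     # any symmetric matrix (may be degenerate)

T = G0.inv() * G1

# <x,y>_1 = <T(x),y>_0 for all x,y   <=>   T^t G0 = G1
assert T.T * G0 == G1
# T is <.,.>_0-self-adjoint:  <T(x),y>_0 = <x,T(y)>_0   <=>   T^t G0 = G0 T
assert T.T * G0 == G0 * T
\end{verbatim}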
\begin{remark}\rm
If $V$ is finite-dimensional the condition on the continuity is redundant, so that $\mathcal F$ is nondegenerate if and only if there is some nondegenerate inner product within $\mathcal F$. Regardless of the dimension of $V$, if $\mathcal F$ is simultaneously orthogonalizable then the condition that any inner product in $\mathcal F$ be partially continuous relative to the $\esc{\cdot,\cdot}_0$-topology of $V$ is redundant, by virtue of Proposition \ref{atupoi}\ref{atupoi2}.
\end{remark}

\begin{remark}\label{fragel}\rm
If $V$ is of arbitrary dimension and $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i\in I}$ is simultaneously orthogonalizable, we can construct a new family $\mathcal F':=\mathcal F\sqcup\{\esc{\cdot,\cdot}_0\}$ by adding the inner product defined in the following way: take a basis $\{v_j\}$ which is orthogonal for $\mathcal F$ and set $\esc{v_i,v_j}_0=\delta_{ij}$. The basis of the $v_j$'s also orthogonalizes $\mathcal F'$. Furthermore, the family $\mathcal F'$ is nondegenerate. Thus a necessary condition for the simultaneous orthogonalization of a given family $\mathcal F$ is that, by adding at most one element, we get a new family $\mathcal F'$ which is nondegenerate and simultaneously orthogonalizable.
\end{remark}

So the natural starting point for this study should be the nondegenerate families. In this sense we have the following result.

\begin{theorem} \label{ogacem}
Assume that $\mathcal F=\{\esc{\cdot,\!\cdot}_i\}_{i\in I\cup\{0\}}$ is a nondegenerate family of inner products on the vector space $V$ of arbitrary dimension over ${\mathbb{K}}$. Then:
\begin{enumerate}[label=(\roman*)]
\item \label{ogacem1} For each $i\in I$ there is a linear map $T_i\colon V\to V$ such that $\esc{x,y}_i=\esc{T_i(x),y}_0$ for any $x,y\in V$. Furthermore, each $T_i$ is a self-adjoint operator of $(V,\esc{\cdot , \cdot}_0)$.
\item \label{ogacem2} $\mathcal F$ is simultaneously orthogonalizable if and only if there exists an orthogonal basis $B$ of $(V,\esc{\cdot , \cdot}_0)$ such that each $T_i$ (as in item \ref{ogacem1}) is diagonalizable relative to $B$.
\item \label{short} In case ${\mathop{\hbox{\rm char}}}({\mathbb{K}})\ne 2$ and $\dim(V)$ is either finite or countably infinite, the family $\mathcal F$ is simultaneously orthogonalizable if and only if $\{T_i\}_{i\in I}$ (as in item \ref{ogacem1}) is simultaneously diagonalizable.
\item \label{short2} In case ${\mathop{\hbox{\rm char}}}({\mathbb{K}})= 2$ and $\dim(V)$ is either finite or countably infinite, if $\mathcal F$ is simultaneously orthogonalizable, then $\{T_i\}_{i\in I}$ (as in item \ref{ogacem1}) is simultaneously diagonalizable. Conversely, when $\{T_i\}_{i\in I}$ is simultaneously diagonalizable we can consider the root space decomposition $V=\oplus_\alpha V_\alpha$. If for each $\alpha$ the set $V_\alpha^{\circ}=\{x \in V_{\alpha} \colon \esc{x,x}_0=0\}$ is not $\bot$-closed or $V_\alpha/V_\alpha^\circ$ is infinite-dimensional, then $\mathcal F$ is simultaneously orthogonalizable.
\end{enumerate}
\end{theorem}
\begin{proof}
For proving item \ref{ogacem1} we proceed analogously to the proof of item \ref{atupoi1} of Proposition \ref{atupoi}. So we have the existence of self-adjoint maps $T_i\colon V\to V$ such that $\esc{x,y}_i=\esc{T_i(x),y}_0$ for any $x,y\in V$ and any $i \in I$.
Now we prove item \ref{ogacem2}. Assume first that $\mathcal F$ is simultaneously orthogonalizable. Let $B=\{v_j\}$ be a basis of $V$ with $\esc{v_j,v_k}_i=0$ for $j\ne k$ and any $i\in I\cup\{0\}$. If we write $T_i(v_j)=\sum_k a_{ij}^k v_k$ we have
$$\esc{T_i(v_j)-a_{ij}^j v_j, v_k}_0=\esc{T_i(v_j), v_k}_0-a_{ij}^j\esc{ v_j, v_k}_0 = \esc{v_j, v_k}_i=0 \hbox{ if $k\ne j$, }$$
$$\esc{T_i(v_j)-a_{ij}^j v_j, v_j}_0= \sum_{q\ne j}a_{ij}^q\esc{v_q,v_j}_0=0.$$
Since $\esc{\cdot,\cdot}_0$ is nondegenerate, this gives $T_i(v_j)=a_{ij}^j v_j\in {\mathbb{K}} v_j$ for any $i\in I$ and any $j$. Thus each self-adjoint operator $T_i$ is diagonalizable in the basis $B$. Conversely, assume that each $T_i$, $i\in I$, is diagonalizable relative to a certain basis $B=\{v_j\}$ which is orthogonal with respect to $\esc{\cdot,\!\cdot}_0$. Thus $T_i(v_j)\in {\mathbb{K}} v_j$ and we can write $T_i(v_j)=a_{ij}v_j$ for some $a_{ij}\in {\mathbb{K}}$. So $\mathcal F$ is simultaneously orthogonalizable in $B$ since for any $i,j,k$ with $j\ne k$ we have
$$\esc{v_j,v_k}_i=\esc{T_i(v_j),v_k}_0=a_{ij}\esc{v_j,v_k}_0=0.$$
To prove item \ref{short} we only need to show that if $\mathcal T:=\{T_i\}_{i\in I}$ is simultaneously diagonalizable then there is an orthogonal basis of $V$ relative to $\esc{\cdot,\cdot}_0$ which diagonalizes the family $\mathcal T$. For this purpose, applying Proposition \ref{palmera} we have that $V=\oPerp_{\alpha}^{\esc{\cdot,\cdot}_0}V_\alpha$ is an orthogonal direct sum of root spaces. Moreover, since
$$\esc{x,y}_i=\esc{T_i(x),y}_0=\alpha(T_i)\esc{x,y}_0$$
whenever $x\in V_\alpha$, the vanishing of $\esc{x,y}_0$ implies that of $\esc{x,y}_i$ for every $i \in I$, and therefore $V=\operpf{\alpha}V_\alpha$. Now, if we can find an orthogonal basis (relative to $\esc{\cdot,\cdot}_0$) in each $V_\alpha$, joining together all those bases we get an orthogonal basis of $V$ relative to $\esc{\cdot,\cdot}_0$ which diagonalizes $\mathcal T$. Note that the restriction of $\esc{\cdot,\cdot}_0$ to each $V_\alpha$ is not alternate: on the contrary we would have $\esc{x,x}_0=0$ for any $x\in V_\alpha$, hence (as ${\mathop{\hbox{\rm char}}}({\mathbb{K}})\ne 2$) $\esc{\cdot,\cdot}_0\vert_{V_\alpha}=0$, which would imply the contradiction $V_\alpha\subset\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_0)=0$. Hence, applying \cite[Chapter Two, Corollary 2, p. 65]{gross}, we get that there are orthogonal bases $B_{\alpha}$ relative to $\esc{\cdot,\cdot}_0$ in each $V_\alpha$. Now, for any two different elements $x, y \in B_\alpha$ we have
$$\esc{x,y}_i =\esc{T_i(x),y}_0=\alpha(T_i)\esc{x,y}_0=0.$$
Next, if we consider $B=\sqcup_\alpha B_{\alpha}$, then $B$ is an orthogonal basis of $V$ relative to any inner product of the family $\mathcal F$. Finally we show item \ref{short2}. Item \ref{ogacem2} proves that when $\mathcal F$ is simultaneously orthogonalizable, then $\{T_i\}_{i\in I}$ is simultaneously diagonalizable. Now assume that $\{T_i\}_{i\in I}$ is simultaneously diagonalizable. Applying Proposition \ref{palmera} and arguing as in the proof of item \ref{short}, we have that $V=\oPerp_{\alpha}^{\mathcal F} V_\alpha$. So the existence of an orthogonal basis relative to all the inner products will follow by noting that each orthogonal summand $V_\alpha$ has such a basis. For this, apply \cite[Chapter Two, Corollary 2, p.~65]{gross}
taking into account that for each root $\alpha$ either $V_\alpha^{\circ}=\{x \in V_{\alpha} \colon \esc{x,x}_0=0\}$ is not $\perp$-closed or $V_\alpha/ V_\alpha^{\circ}$ is infinite-dimensional.
\end{proof}

From the proof of Theorem \ref{ogacem} we can deduce the following corollary.

\begin{corollary}
Suppose $\mathcal F=\{\esc{\cdot,\!\cdot}_i\}_{i\in I\cup\{0\}}$ is a nondegenerate family of inner products on a vector space $V$ over a field ${\mathbb{K}}$. If $\mathcal F$ is simultaneously orthogonalizable, then $V=\operpf{\alpha} V_\alpha$, where the $\alpha$'s are the roots of the family $\{T_i\}_{i\in I}$ of Theorem \ref{ogacem}\ref{ogacem1}. Moreover, for any $i$ and $\alpha$, we get $\esc{\cdot,\!\cdot}_i\vert_{V_\alpha}=c_{i,\alpha}\esc{\cdot,\!\cdot}_0\vert_{V_\alpha}$ for suitable $c_{i,\alpha}\in {\mathbb{K}}$, and each $\esc{\cdot,\!\cdot}_i$ can be represented in block diagonal form where each block is the matrix of $c_{i,\alpha}\esc{\cdot,\!\cdot}_0\vert_{V_\alpha}$ relative to some basis of $V_\alpha$.
\end{corollary}

We can go even further, as the next corollary shows.

\begin{corollary}
Let $\mathcal F=\{\esc{\cdot,\!\cdot}_i\}_{i\in I\cup\{0\}}$ be a nondegenerate family of inner products on a vector space $V$ over a field ${\mathbb{K}}$ such that ${\mathop{\hbox{\rm char}}}({\mathbb{K}})\ne 2$ and $\dim(V)$ is either finite or countably infinite. Then the following statements are equivalent:
\begin{enumerate}[label=(\roman*)]
\item The family $\mathcal F$ is simultaneously orthogonalizable.
\item There exists an orthogonal basis $B$ of $(V,\esc{\cdot,\cdot}_0)$ such that for every $i \in I$ the matrix of the product $\esc{\cdot,\cdot}_i$ relative to $B$ is a diagonal matrix.
\end{enumerate}
\end{corollary}
\begin{proof}
Suppose that $\mathcal F$ is simultaneously orthogonalizable. Consider the basis $B= \sqcup_{\alpha} B_\alpha$, where $B_{\alpha}$ is the orthogonal basis of each $V_{\alpha}$ as in the proof of Theorem \ref{ogacem}. Let $M_{0,{B_\alpha}}$ be the matrix of the restriction of the product $\esc{\cdot,\cdot}_0$ to $V_\alpha$ relative to $B_{\alpha}$. Then, for each $i \in I$, we have $M_{i,B}:=M_B(\esc{\cdot,\cdot}_i)= {\rm diag}(\alpha(T_i)M_{0,{B_\alpha}}\colon \alpha \in \Phi)$, with $\Phi$ as in Proposition \ref{palmera}. The converse follows immediately.
\end{proof}

\subsection{Simultaneous orthogonalization of a degenerate family of inner products}

Note that a degenerate family $\mathcal F$ of inner products on $V$ is one such that either all the inner products in $\mathcal F$ are degenerate or, whenever there is a nondegenerate inner product in $\mathcal F$, not all of the remaining inner products are partially continuous relative to the topology it induces. We will focus on families in which all the inner products are degenerate. This is because if we consider a nondegenerate inner product $\esc{\cdot,\cdot}_0$ and another one $\esc{\cdot,\cdot}_1$ which is not partially continuous relative to the $\esc{\cdot,\cdot}_0$-topology, then $\esc{\cdot,\cdot}_0$ and $\esc{\cdot,\cdot}_1$ cannot be simultaneously orthogonalizable, in view of Proposition \ref{atupoi} item \ref{atupoi2}. In this subsection we take a step forward with respect to the previous one. We will assume that all the inner products in a given family $\mathcal F$ are degenerate but there is some element $\esc{\cdot,\cdot}_0\in\mathcal F$ whose radical is contained in the radical of the other members of $\mathcal F$. We will see that we can still recover much of the philosophy of Subsection \ref{gallo}.
We will start by introducing some topological tools. If $X$ and $B$ are topological spaces and $A$ is a set, then any surjective map $f\colon A\to B$ is continuous relative to the initial topology on $A$, that is, the topology whose closed sets are the sets $f^{-1}(F)$ where $F$ ranges over the closed subsets of $B$. Moreover, for any other continuous map $\varphi\colon A\to X$ such that for any $a,a'\in A$
$$f(a)=f(a')\Rightarrow \varphi(a)=\varphi(a'),$$
there is a unique continuous map $\theta\colon B\to X$ such that $\theta f=\varphi$. This is elementary material on initial topologies, but it allows us to endow a ${\mathbb{K}}$-vector space $V$ equipped with an inner product with an interesting topology. In fact, we have the following definition.

\begin{definition}\label{calabacin} \rm
Let $(V,\esc{\cdot,\cdot})$ be an inner product ${\mathbb{K}}$-vector space. Let $\mathfrak r:=\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot})$ and define the inner product $\esc{\cdot,\cdot}_{\mathfrak r}\colon V/\mathfrak r\times V/\mathfrak r\to{\mathbb{K}}$ as
\begin{equation}\label{azorrac}
\esc{v+\mathfrak r,w+\mathfrak r}_{\mathfrak r}:=\esc{v,w} \text{ for } v,w \in V.
\end{equation}
Since $(V/\mathfrak r,\esc{\cdot,\cdot}_{\mathfrak r})$ is nondegenerate, we can consider the $\esc{\cdot,\cdot}_{\mathfrak r}$-topology of $V/\mathfrak r$ and consequently the initial topology on $V$ induced by the canonical projection $p\colon V\to V/\mathfrak r$. We will call this topology the \emph{$\esc{\cdot,\cdot}$-topology} of $V$, by extension of the nondegenerate case. Note that the $\esc{\cdot,\cdot}$-topology of $V$ is the smallest topology on $V$ making $p$ continuous.
\end{definition}

\begin{lemma}\label{tomate}
Let $(V,\esc{\cdot,\cdot})$ be an inner product ${\mathbb{K}}$-vector space and topologize $V$ with the $\esc{\cdot,\cdot}$-topology. Then $\esc{\cdot,\cdot}$ is partially continuous.
\end{lemma}
\begin{proof}
Consider $a \in V$ and define $f_a\colon V \to {\mathbb{K}}$ as $f_a = \esc{a,\_}$ and $\tilde{f}_{a+\mathfrak r}\colon V/\mathfrak r \to {\mathbb{K}}$ as $\tilde{f}_{a+\mathfrak r}(x+\mathfrak r)=\esc{a+\mathfrak r,x+\mathfrak r}_{\mathfrak r}=\esc{a,x}$, where $\mathfrak r=\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot})$. Observe that in fact $f_a$ is the composition $V \xrightarrow{p} V/\mathfrak r \xrightarrow{\tilde{f}_{a+\mathfrak r}} {\mathbb{K}}$. Hence, in order to see that $f_a$ is continuous, it suffices to check that $\tilde{f}_{a+\mathfrak r}$ is continuous. Since $V/\mathfrak r$ is nondegenerate this is equivalent to finding an adjoint map $(\tilde{f}_{a+\mathfrak r})^{\sharp}\colon {\mathbb{K}} \to V/\mathfrak r$. Define linearly $(\tilde{f}_{a+\mathfrak r})^{\sharp}(1)=a+\mathfrak r$, so for $x \in V$ and $\lambda \in {\mathbb{K}}$ we have:
$$ \esc{\tilde{f}_{a+\mathfrak r}(x+\mathfrak r),\lambda}_{{\mathbb{K}}}=\lambda \esc{a,x}=\esc{x,\lambda a}=\esc{x+\mathfrak r,\lambda (a+\mathfrak r)}_{\mathfrak r}=\esc{x+\mathfrak r,(\tilde{f}_{a+\mathfrak r})^{\sharp}(\lambda)}_{\mathfrak r}.$$
\end{proof}

Before going on, consider an inner product ${\mathbb{K}}$-vector space $(V,\esc{\cdot,\cdot})$ with $\mathfrak r=\mathop{\hbox{\bf rad}}{(\esc{\cdot,\cdot})}$. Then the quotient space $V/\mathfrak r$ is nondegenerate relative to $\esc{\cdot,\cdot}_{\mathfrak r}$, as defined in (\ref{azorrac}), and we fix our attention on it.
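In finite dimension the objects in Definition \ref{calabacin} can be computed explicitly: the radical is the null space of the Gram matrix, any vector space complement $W$ of it satisfies $V=\mathfrak r\oPerp W$ (orthogonality is automatic since $\mathfrak r$ is the radical), and the restriction of the form to $W$ is a concrete nondegenerate model of $(V/\mathfrak r,\esc{\cdot,\cdot}_{\mathfrak r})$, which is the viewpoint adopted in the next paragraph. The following SymPy fragment is only a sketch of this computation on an ad hoc degenerate Gram matrix.

\begin{verbatim}
# Sketch: radical, a complement W, and the induced nondegenerate form.
# (Ad hoc example matrix; illustration only.)
from sympy import Matrix

G = Matrix([[1, 1, 2],
            [1, 1, 2],
            [2, 2, 4]])        # degenerate symmetric Gram matrix on K^3

rad = G.nullspace()            # basis of rad<.,.> = {x : Gx = 0}

# Extend the radical basis to a basis of K^3; the added standard basis
# vectors span a complement W, automatically orthogonal to the radical.
W_basis, current = [], Matrix.hstack(*rad)
for i in range(3):
    e = Matrix.eye(3).col(i)
    candidate = Matrix.hstack(current, e)
    if candidate.rank() > current.rank():
        W_basis.append(e)
        current = candidate

W = Matrix.hstack(*W_basis)
G_W = W.T * G * W              # Gram matrix of <.,.> restricted to W,
assert G_W.det() != 0          # a nondegenerate model of (V/r, <.,.>_r)
\end{verbatim}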
If we choose a subspace $W$ of $V$ with $V=\mathfrak r\oPerp W$, then there is a canonical vector space isomorphism $W\cong V/\mathfrak r$, so we can consider the inner product ${\mathbb{K}}$-vector space $(W,\esc{\cdot,\cdot}\vert_W)$. If $(V_i,\esc{\cdot,\cdot}_i)_{i=1,2}$ are nondegenerate inner product spaces over the same field, a linear isomorphism $T\colon V_1\to V_2$ is said to be an {\em isometry} if $\esc{T(x),T(y)}_2=\esc{x,y}_1$ for arbitrary elements $x,y\in V_1$. If $T$ is an isometry, then $T$ is continuous (relative to the $\esc{\cdot,\cdot}_i$-topologies), its adjoint being $T^\sharp=T^{-1}$. Taking this into account, the above canonical isomorphism is an isometry
$$(W,\esc{\cdot,\cdot}\vert_W)\cong (V/\mathfrak r,\esc{\cdot,\cdot}_{\mathfrak r}).$$
Note that the inverse of the above isometry is an isometry
\begin{equation}\label{sartahcram}
\Omega\colon(V/\mathfrak r,\esc{\cdot,\cdot}_{\mathfrak r})\cong (W,\esc{\cdot,\cdot}\vert_W),
\end{equation}
which will be useful in the sequel.

\begin{lemma}\label{marcha}
In the previous setting, the canonical inclusion ${\boldsymbol{i}} \colon (W,\esc{\cdot,\cdot}\vert_W)\to V$ is continuous relative to the $\esc{\cdot,\cdot}$-topology of $V$. Moreover, the linear map ${\boldsymbol{i}} \Omega\colon V/\mathfrak r \to V$ is a continuous monomorphism.
\end{lemma}
\begin{proof}
Let $p\colon V\to (V/\mathfrak r,\esc{\cdot,\cdot}_{\mathfrak r})$ be the canonical projection. The basic neighborhoods of $0$ in $V$ are of the form $p^{-1}(\cap_1^n (x_i+\mathfrak r)^\bot)$ where $x_1,\dots,x_n$ is a finite collection of elements in $V$. For any $i$ write $x_i=r_i+w_i$ where $r_i\in\mathfrak r$ and $w_i\in W$. Next we claim that
$$p^{-1}(\cap_1^n (x_i+\mathfrak r)^\bot)=\cap_1^n w_i^\bot \quad\text{ where } w_i^\bot=\{x\in V\colon \esc{x,w_i}=0\}.$$
If $v\in p^{-1}(\cap_1^n (x_i+\mathfrak r)^\bot)$ then $p(v)\in \cap_1^n (x_i+\mathfrak r)^\bot$, so that $\esc{p(v),p(x_i)}_{\mathfrak r}=0$ for $i=1,\ldots,n$. But then formula \eqref{azorrac} gives $\esc{v,x_i}=0$ for any $i$, hence $\esc{v,w_i}=0$, so that $v\in\cap_i w_i^\bot$. The other inclusion is proved analogously. So far we have proved that the basic neighborhoods of $0$ in $V$ are of the form $\cap_1^n w_i^\bot$ where $w_1,\ldots,w_n$ is a finite collection of elements in $W$. Taking into account that
$${\boldsymbol{i}}^{-1}(\cap_1^n w_i^\bot)=\cap_1^n w_i^{\bot_W},$$
where $w_i^{\bot_W}=\{x\in W\colon \esc{x,w_i}=0\}$, we get the continuity of ${\boldsymbol{i}}$. Therefore, we have a continuous linear map $V/\mathfrak r\to V$ given by the composition ${\boldsymbol{i}}\Omega$.
\end{proof}

We will call this map ${\boldsymbol{i}}\Omega$ the ``backward gear'' from $V/\mathfrak r$ to $V$.

\begin{proposition} \label{banana2}
Let $(V_i,\esc{\cdot,\cdot}_i)$ $(i=1,2)$ be two inner product ${\mathbb{K}}$-vector spaces with $\mathfrak r_i:=\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_i)$, and write $V_i=\mathfrak r_i \oPerp^{\esc{\cdot,\cdot}_i} W_i$ with $W_i$ a vector subspace of $V_i$. Let $T\colon V_1 \to V_2$ be a linear map such that $T(\mathfrak r_1) \subset \mathfrak r_2$ and $T(W_1) \subset W_2$. Then $T$ is continuous if and only if $T$ has an adjoint.
\end{proposition}
\begin{proof}
Suppose that $T$ has an adjoint $T^{\sharp}$.
The induced linear map $S\colon V_1/\mathfrak r_1 \to V_2/\mathfrak r_2$ such that $S(x+\mathfrak r_1)=T(x)+\mathfrak r_2$ for $x \in V_1$ has an adjoint, because for every $x \in V_1$, $y \in V_2$:
$$\esc{S(x+\mathfrak r_1),y+\mathfrak r_2}_{\mathfrak r_2}=\esc{T(x)+\mathfrak r_2,y+\mathfrak r_2}_{\mathfrak r_2}=\esc{T(x),y}_2=$$
$$\esc{x,T^{\sharp}(y)}_1=\esc{x+\mathfrak r_1,T^{\sharp}(y)+\mathfrak r_1}_{\mathfrak r_1}.$$
Then an adjoint of $S$ is $S^{\sharp} \colon V_2/\mathfrak r_2 \to V_1/\mathfrak r_1$ given by $S^{\sharp}(y+\mathfrak r_2):=T^{\sharp}(y)+\mathfrak r_1$ (which is well defined since $T^\sharp(\mathfrak r_2)\subset\mathfrak r_1$). Thus $S$ is continuous. Next we compare $T$ with ${\boldsymbol{i}}_2\Omega_2 S p_1$, where ${\boldsymbol{i}}_2\Omega_2$ is defined as in Lemma \ref{marcha} on $V_2/\mathfrak r_2$. Take an arbitrary $x\in V_1$ and write $x=w+r$ with $w\in W_1$ and $r\in\mathfrak r_1$; then ${\boldsymbol{i}}_2\Omega_2 S p_1(x)={\boldsymbol{i}}_2\Omega_{2}(T(x)+\mathfrak r_2)={\boldsymbol{i}}_2(z)$ for the unique $z\in W_2$ such that $z+\mathfrak r_2=T(x)+\mathfrak r_2$, and the hypothesis gives $z=T(w)$. Hence $T(x)-{\boldsymbol{i}}_2\Omega_2 S p_1(x)=T(r)\in\mathfrak r_2$, that is, the following diagram commutes modulo $\mathfrak r_2$:
\[
\begin{tikzcd}
V_1/\mathfrak r_1 \arrow[r,"S"]& V_2/\mathfrak r_2 \arrow[d,"{\boldsymbol{i}}_2\Omega_{2}"]\\
V_1 \arrow[r,"T"'] \arrow[u,"p_1"]& V_2\\
\end{tikzcd}
\]
Since every basic neighborhood of $0$ in $V_2$ is of the form $\cap_1^n w_i^\bot$ with $w_i\in W_2$ (see the proof of Lemma \ref{marcha}) and contains $\mathfrak r_2$, the maps $T$ and ${\boldsymbol{i}}_2\Omega_2 S p_1$ have the same preimages of basic neighborhoods. Hence $T$ is continuous because $p_1$, $S$ and ${\boldsymbol{i}}_2 \Omega_{2}$ are continuous (apply Lemma \ref{marcha}). Conversely, assuming that $T$ is continuous, define $S:=p_2T {\boldsymbol{i}}_1\Omega_{1}$, where ${\boldsymbol{i}}_1\Omega_{1}$ is defined as in Lemma \ref{marcha} on $V_1/\mathfrak r_1$. Since $S$ is continuous, there is an adjoint $S^\sharp\colon V_2/\mathfrak r_2\to V_1/\mathfrak r_1$ by \cite[\S IV, section 7, Theorem 1, p. 72]{Jacobson}. Let $x\in V_1$ and $y \in V_2$. If we write $S^\sharp(y + \mathfrak r_2)=z + \mathfrak r_1$ for a certain $z \in W_1$, then ${\boldsymbol{i}}_1\Omega_1 (z+\mathfrak r_1)=z$. Now we check that the map ${\boldsymbol{i}}_1\Omega_{1}S^\sharp p_2\colon V_2\to V_1$, which vanishes on $\mathfrak r_2$, is an adjoint of $T$:
$$\esc{T(x),y}_2=\esc{T(x)+\mathfrak r_2,y+\mathfrak r_2}_{\mathfrak r_2}=\esc{S(x+\mathfrak r_1),y+\mathfrak r_2}_{\mathfrak r_2}=\esc{x+\mathfrak r_1,S^\sharp(y+\mathfrak r_2)}_{\mathfrak r_1}= $$
$$ \esc{x+\mathfrak r_1,z+\mathfrak r_1}_{\mathfrak r_1}=\esc{x,z}_{1} =\esc{x, {\boldsymbol{i}}_1\Omega_{1}S^\sharp(y+\mathfrak r_2)}_1=\esc{x,{\boldsymbol{i}}_1\Omega_1S^{\sharp}p_2 (y)}_{1}.$$
Therefore $T$ has an adjoint.
\end{proof}

Proposition \ref{banana2} is a generalization of the well-known principle asserting that, in the context of nondegenerate spaces, a map which possesses an adjoint is automatically continuous (see \cite[\S IV, section 7, Theorem 1, p. 72]{Jacobson}).

\begin{remark}\label{gato}\rm
Assume that $\mathcal F$ is a family of inner products such that there exists $\esc{\cdot,\cdot}_0\in\mathcal F$ for which any element $\esc{\cdot,\cdot}\in\mathcal F$ is partially continuous relative to the $\esc{\cdot,\cdot}_0$-topology of $V$. Assume further that $\mathfrak r_0:=\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_0)\subset \mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot})$ for any $\esc{\cdot,\cdot}\in\mathcal F$. Consider next the new family $\mathcal F_{\mathfrak r_0}$ of inner products $\esc{\cdot,\cdot}'$ on $V/\mathfrak r_0$ defined as $\esc{x+\mathfrak r_0,y+\mathfrak r_0}':=\esc{x,y}$ for any $\esc{\cdot,\cdot}\in\mathcal F$.
Then each element $\esc{\cdot,\cdot}'$ is partially continuous relative to $\esc{\cdot,\cdot}_{\mathfrak r_{0}}$. To prove this, fix $x\in V$ and consider the map $g\colon V/\mathfrak r_0\to{\mathbb{K}}$ such that $g(y +\mathfrak r_0):=\esc{x+\mathfrak r_0,y+\mathfrak r_0}'$. We have to prove that $g$ is continuous (more precisely, $\esc{\cdot,\cdot}_{\mathfrak r_{0}}$-continuous). But the map $f\colon V\to{\mathbb{K}}$ given by $f(y):=\esc{x,y}$ is continuous (that is, $\esc{\cdot,\cdot}_0$-continuous) and $g$ is the composition $g=f{\boldsymbol{i}}\Omega$, whence the continuity of $g$.
\[
\begin{tikzcd}[column sep=tiny, row sep=small]
V/\mathfrak r_0\arrow[rr,"g"]\arrow[dr,"{\boldsymbol{i}}\Omega"'] & &{\mathbb{K}}\\
& V\arrow[ur,"f"'] &
\end{tikzcd}
\]
This allows us to conclude that the new family $\mathcal F_{\mathfrak r_0}$ has a nondegenerate element $\esc{\cdot,\cdot}_{\mathfrak r_{0}}$ and any other $\esc{\cdot,\cdot}'\in\mathcal F_{\mathfrak r_0}$ is partially continuous relative to $\esc{\cdot,\cdot}_{\mathfrak r_{0}}$.
\end{remark}

\begin{proposition}\label{berenjena}
Let $V$ be a ${\mathbb{K}}$-vector space endowed with two inner products $\esc{\cdot,\cdot}_i$ ($i=0,1$) such that $\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_0)\subset\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_1)$. Then the following assertions are equivalent:
\begin{enumerate}[label=(\roman*)]
\item $\esc{\cdot,\cdot}_1$ is partially continuous relative to the $\esc{\cdot,\cdot}_0$-topology of $V$.
\item \label{ber} There is a continuous linear map $T\colon V\to V$ (relative to the $\esc{\cdot,\cdot}_0$-topology) vanishing on $\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_0)$ such that $\esc{x,y}_1=\esc{T(x),y}_0$ for any $x,y\in V$.
\end{enumerate}
\end{proposition}
\begin{proof}
Consider $\mathfrak r_0=\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_0)$, $\mathfrak r_1=\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_1)$, the inner product $\esc{\cdot,\cdot}_{\mathfrak r_{0}}$ as in \eqref{azorrac}, the $\esc{\cdot,\cdot}_{\mathfrak r_{0}}$-topology of $V/\mathfrak r_0$ and the decomposition $V=\mathfrak r_0\oPerp^{\esc{\cdot,\cdot}_0} W$ for a certain subspace $W$ of $V$. Let $p\colon V\to V/\mathfrak r_0$ be the canonical projection, which gives the initial topology on $V$. Define now the inner product $\esc{\cdot,\cdot}_1'\colon V/\mathfrak r_0\times V/\mathfrak r_0 \to {\mathbb{K}}$ as $\esc{x+\mathfrak r_0,y+\mathfrak r_0}_1':=\esc{x,y}_1$. It is well defined because $\mathfrak r_0\subseteq\mathfrak r_1$. As $\esc{\cdot,\cdot}_1$ is partially continuous, for any $x\in V$ the linear map $\esc{x,\_}_1\colon V\to{\mathbb{K}}$ is continuous. Since $\mathfrak r_0\subset\ker(\esc{x,\_}_1)$ there is a unique linear map $\esc{x+\mathfrak r_0,\_}'_1\colon V/\mathfrak r_0\to{\mathbb{K}}$ such that the diagram below is commutative
\[
\begin{tikzcd}
V \arrow[r,"p"]\arrow[dr,"\esc{x,\_}_1"']& V/\mathfrak r_0\arrow[d, dashed, "\esc{x+\mathfrak r_0,\_}'_1"]\\
& {\mathbb{K}}\\
\end{tikzcd}
\]
and the continuity of $\esc{x+\mathfrak r_0,\_}'_1$ is automatic from the universal property. Now, applying Proposition \ref{atupoi} item \ref{atupoi1}, there exists a continuous linear map $S\colon V/\mathfrak r_0 \to V/\mathfrak r_0$ (continuity relative to the $\esc{\cdot,\cdot}_{\mathfrak r_{0}}$-topology of $V/\mathfrak r_0$) such that $\esc{x+\mathfrak r_0,\_}'_1=\esc{S(x+\mathfrak r_0),\_}_{\mathfrak r_{0}}$.
Define $T\colon V\to V$ by $T={\boldsymbol{i}}\Omega Sp$, so that the following diagram is commutative:
\[
\begin{tikzcd}
V/\mathfrak r_0 \arrow[r,"S"]& V/\mathfrak r_0\arrow[d,"{\boldsymbol{i}}\Omega"]\\
V \arrow[r,"T"] \arrow[u,"p"]& V\\
\end{tikzcd}
\]
So $T(x)= {\boldsymbol{i}} \Omega(S(x+\mathfrak r_0))$. Observe that $T$ is linear and continuous because $p$, $S$ and ${\boldsymbol{i}} \Omega$ are linear and continuous (apply Lemma \ref{marcha}), and $T$ vanishes on $\mathfrak r_0$ because $p$ does. In fact, if we write $S(x+\mathfrak r_0) = w + \mathfrak r_0$ for a unique $w \in W$, then we have that $S(x+\mathfrak r_0)=p(w)$ and $T(x)=w$. Moreover, for $x,z \in V$ we get
$$ \esc{x,z}_1\!=\!\esc{x+\mathfrak r_0, z+\mathfrak r_0}'_1\!=\!\esc{S(x+\mathfrak r_0), z+\mathfrak r_0}_{\mathfrak r_{0}}=\! \esc{p(w), z+\mathfrak r_0}_{\mathfrak r_{0}}=\!\esc{w,z}_0\!=\esc{T(x),z}_0. $$
For the converse, \ref{ber} implies (i) by Lemma \ref{tomate}.
\end{proof}

\begin{remark}\label{cachondo} \rm
Observe that the map $T$ defined in the proof of Proposition \ref{berenjena} satisfies $T(V)\subset W$.
\end{remark}

\begin{theorem} \label{oepem}
Let $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i\in I\cup \{0\}}$ be a family of inner products on the ${\mathbb{K}}$-vector space $V$ such that $\mathfrak r_0:=\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_0)\subseteq \mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_i)$ for any $i\in I$ and each $\esc{\cdot,\cdot}_i$ is partially continuous relative to the $\esc{\cdot,\cdot}_0$-topology of $V$. Then:
\begin{enumerate}[label=(\roman*)]
\item\label{oepem1} $\mathcal F$ is simultaneously orthogonalizable if and only if there exist an orthogonal basis $B=\{v_j\}$ of $(V,\esc{\cdot , \cdot}_0)$ and a collection of $\esc{\cdot,\cdot}_0$-self-adjoint continuous linear maps $T_i\colon V \to V$, $i \in I \cup \{0\}$, simultaneously diagonalized by $B$ and such that, for any $i$, $T_i(\mathfrak r_0)=0$ and $\esc{x,y}_i=\esc{T_i(x),y}_0$ for any $x,y\in V$.
\item \label{oepem2} In case ${\mathop{\hbox{\rm char}}}({\mathbb{K}})\ne 2$ and $\dim(V)$ is either finite or countably infinite, the family $\mathcal F$ is simultaneously orthogonalizable if and only if $\{T_i\}_{i\in I}$ (as in item \ref{oepem1}) is simultaneously diagonalizable.
\end{enumerate}
\end{theorem}
\begin{proof}
In order to prove item \ref{oepem1} suppose first that $\mathcal F$ is simultaneously orthogonalizable. Then there exists a basis $B=\{v_j\}_{j \in J}$ such that $\esc{v_j, v_k}_i=0$ for $i \in I \cup \{0\}$ and $j\ne k$. Consider $\mathcal F_{\mathfrak r_0}=\{\esc{\cdot , \cdot}'_i\}_{i \in I \cup\{0\}}$ where $\esc{\cdot,\cdot}'_i$ is the inner product defined on the quotient $V/\mathfrak r_0$ by $\esc{x+\mathfrak r_0, y +\mathfrak r_0}'_i:=\esc{x,y}_i$ (note that $\esc{\cdot,\cdot}'_0=\esc{\cdot, \cdot}_{\mathfrak r_{0}}$). Observe that, since $\mathcal F$ is simultaneously orthogonalizable, $\mathcal F_{\mathfrak r_0}$ is simultaneously orthogonalizable. By Remark \ref{gato} and Proposition \ref{berenjena} item \ref{ber} we have a collection of continuous linear maps $T_i \colon V \to V$ (relative to the $\esc{\cdot , \cdot}_0$-topology) vanishing on $\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_0)$ such that $\esc{x,y}_i=\esc{T_i(x),y}_0$ for any $x,y\in V$ and $i \in I$.
Moreover, we know that $T_i= {\boldsymbol{i}} \Omega S_i p$, where $S_i \colon V/\mathfrak r_0 \to V/\mathfrak r_0$ verifies $\esc{x+\mathfrak r_0, y +\mathfrak r_0}'_i=\esc{S_i(x+\mathfrak r_0),y+\mathfrak r_0}'_0$, with $V \xrightarrow{p} V/\mathfrak r_0$ and $V/\mathfrak r_0 \xrightarrow{{\boldsymbol{i}} \Omega} V$ as in the proof of Proposition \ref{berenjena}. Since $\mathcal F_{\mathfrak r_0}$ is simultaneously orthogonalizable, by Theorem \ref{ogacem} item \ref{ogacem2} the family $\{S_i\}_{i\in I}$ is simultaneously diagonalizable. So for $j \in J$ there exist $\lambda_{ij} \in {\mathbb{K}}$ such that $S_i(v_j + \mathfrak r_0)=\lambda_{ij}(v_j+\mathfrak r_0)$. Now we prove that $\{T_i\}$ is simultaneously diagonalizable. Indeed, for $v_j \in W$ we have that $T_i(v_j)={\boldsymbol{i}} \Omega S_i p(v_j)= {\boldsymbol{i}} \Omega S_i (v_j + \mathfrak r_0)= {\boldsymbol{i}} \Omega (\lambda_{ij}(v_j +\mathfrak r_0))=\lambda_{ij} {\boldsymbol{i}} \Omega(v_j+\mathfrak r_0)= \lambda_{ij} {\boldsymbol{i}}(v_j)=\lambda_{ij} v_j$. For the converse, suppose that there exists a basis $B=\{v_j\}$ such that $\esc{v_j,v_k}_0=0$ if $j \neq k$ and $T_i(v_j)=a_{ij}v_j$ with $a_{ij} \in {\mathbb{K}}$. Now, for every $i \in I$ we get $\esc{v_j,v_k}_i=\esc{T_i(v_j),v_k}_0=a_{ij}\esc{v_j,v_k}_0=0$ if $j \neq k$. Next we prove \ref{oepem2}. The nontrivial implication is as follows. Assume that $\{T_i\}_{i\in I}$ is simultaneously diagonalizable. Consider $\{S_i\}_{i\in I}$ given by $S_i \colon V/\mathfrak r_0 \to V/\mathfrak r_0$ defined as $S_i(x+\mathfrak r_0)=T_i(x) + \mathfrak r_0$. Since $\mathop{\hbox{\rm char}}({\mathbb{K}}) \neq 2$ and $\dim(V/\mathfrak r_0)$ is either finite or countably infinite then, applying Theorem \ref{ogacem} item \ref{short}, we get that $\{\esc{\cdot,\cdot}'_i\}_{i \in I}$ is simultaneously orthogonalizable. Consequently we have a basis $\{x_j + \mathfrak r_0\}_{j \in J}$ of $V/\mathfrak r_0$ which simultaneously orthogonalizes this family. Let ${\boldsymbol{i}} \Omega \colon V/\mathfrak r_0 \to V$ be as in Lemma \ref{marcha}, that is, for $x \in V$ we have ${\boldsymbol{i}} \Omega(x+\mathfrak r_0)=w$ with $x+\mathfrak r_0=w+\mathfrak r_0$, where $w \in W$ and $V= W \oPerp \mathfrak r_0$. Suppose ${\boldsymbol{i}} \Omega (x_j + \mathfrak r_0)=t_j$ and observe that $t_j \in W$ for every $j \in J$. Then $\{t_j\}_{j \in J}$ is a basis of $W$ because $\Omega$ is an isomorphism. For our purposes, consider $B= \{t_j\}_{j \in J} \sqcup \{r_k\}_{k \in K}$ where $\{r_k\}$ is a basis of $\mathfrak r_0$. For $j \neq k$ we finally have
$$\esc{t_j,t_k}_i=\esc{t_j+\mathfrak r_0,t_k+\mathfrak r_0}'_i=\esc{x_j+\mathfrak r_0,x_k+\mathfrak r_0}'_i=0,$$
and the vectors $r_k$ lie in $\mathfrak r_0\subseteq\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_i)$, so they are orthogonal to every vector for every member of the family. This provides a basis $B$ of $V$ which simultaneously orthogonalizes the family $\mathcal F$.
\end{proof}

\begin{remark}\label{guanga}\rm
The $T_i$'s defined in the proof of Theorem \ref{oepem} item \ref{oepem1} may be chosen in such a way that $T_i(V)\subset W$ for any $i$.
\end{remark}

Analogously to the nondegenerate case we can conclude:

\begin{corollary}
Suppose that $\mathcal F=\{\esc{\cdot,\!\cdot}_i\}_{i\in I\cup\{0\}}$ is a family of inner products on a vector space $V$ over a field ${\mathbb{K}}$ such that $\mathfrak r_0:=\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_0)\subseteq \mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_i)$ for any $i\in I$ and each $\esc{\cdot,\cdot}_i$ is partially continuous relative to the $\esc{\cdot,\cdot}_0$-topology of $V$. If $\mathcal F$ is simultaneously orthogonalizable, then $V=\operpf{\alpha} V_\alpha$, where the $\alpha$'s are the roots of the family of endomorphisms $\{T_i\}$ given by Theorem $\ref{oepem}$ item $\ref{oepem1}$. Moreover, for any $i$ and $\alpha$, we get $\esc{\cdot,\!\cdot}_i\vert_{V_\alpha}=c_{i,\alpha}\esc{\cdot,\!\cdot}_0\vert_{V_\alpha}$ for suitable $c_{i,\alpha}\in {\mathbb{K}}$, and each $\esc{\cdot,\!\cdot}_i$ can be represented in block diagonal form where each block is the matrix of $c_{i,\alpha}\esc{\cdot,\!\cdot}_0\vert_{V_\alpha}$ relative to some basis of $V_\alpha$.
\end{corollary}
\begin{proof}
First we can write $V=W \oPerp \mathfrak r_0$ for a certain subspace $W \subset V$. We know that there exists a family of endomorphisms $\{T_{i}\}_{i \in I}$ in the conditions of Theorem \ref{oepem} item \ref{oepem1}. By Proposition \ref{palmera} we have $V=\bigoplus_{\alpha \in \Phi} V_{\alpha}$. Next we prove that, for each $\alpha$, $V_{\alpha} = (V_{\alpha} \cap \mathfrak r_0) \oplus (V_{\alpha} \cap W)$. Let $x=w+r \in V_{\alpha}$ where $w \in W$, $r \in \mathfrak r_0$. By Remark \ref{guanga}, $T_i(V)\subset W$ for $i \in I$; furthermore $T_i(x)=T_i(w)$ since $T_i(r)=0$. But $T_i(x)=\alpha(T_i)x=\alpha(T_i)w+\alpha(T_i)r$, and since $T_i(x)=T_i(w)\in W$ while $\alpha(T_i)r\in\mathfrak r_0$, the decomposition $V=W\oPerp\mathfrak r_0$ gives $\alpha(T_i)r=0$ and $T_i(w)=\alpha(T_i)w$; that is, $w \in V_{\alpha} \cap W$. Consequently $r=x-w \in V_{\alpha} \cap \mathfrak r_0$, so $V_{\alpha} \subseteq (V_{\alpha} \cap \mathfrak r_0) \oplus (V_{\alpha} \cap W)$. The other containment is trivial. Moreover, $V_{\alpha} = (V_{\alpha} \cap \mathfrak r_0) \oPerp^{\mathcal F} (V_{\alpha} \cap W)$ because $\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_0)\subseteq \mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_i)$. Secondly, we check that $V_{\alpha} \perp^{\mathcal F} V_{\beta}$ for $\alpha \neq \beta$. Indeed, it suffices to prove that $(V_{\alpha} \cap W) \perp^{\mathcal F} (V_{\beta} \cap W)$. To this end, take $x \in V_{\alpha} \cap W$ and $y \in V_{\beta} \cap W$: then $\esc{x,y}_i=\esc{T_i(x),y}_0=\alpha(T_i)\esc{x,y}_0$ and $\esc{x,y}_i=\esc{y,x}_i=\esc{T_i(y),x}_0=\beta(T_i)\esc{x,y}_0$, so $\alpha(T_i)\esc{x,y}_0=\beta(T_i)\esc{x,y}_0$. Since $\alpha \neq \beta$, there exists $T_k$ such that $\alpha(T_k)\neq \beta(T_k)$, whence $\esc{x,y}_0=0$, as desired. Finally, as $\esc{x,y}_i=\esc{T_i(x),y}_0=\alpha(T_i)\esc{x,y}_0$ for any $x, y \in V_{\alpha}$, we have $\esc{\cdot,\!\cdot}_i\vert_{V_\alpha}=c_{i,\alpha}\esc{\cdot,\!\cdot}_0\vert_{V_\alpha}$ with $c_{i,\alpha}=\alpha(T_i)$. Taking this into account, each $\esc{\cdot,\!\cdot}_i$ can be represented in block diagonal form where each block is the matrix of $c_{i,\alpha}\esc{\cdot,\!\cdot}_0\vert_{V_\alpha}$ relative to some basis of $V_\alpha$.
\end{proof}

\section{Different constructions of a new family of inner products with a nondegenerate element}

We have seen in Remark \ref{fragel} that a necessary condition for a family of inner products $\mathcal F$ on $V$ to be simultaneously orthogonalizable is that we can add (at most) one inner product so as to get a new family $\mathcal F'$ which is nondegenerate. Furthermore, if $B$ is a basis of $V$ orthogonalizing $\mathcal F$, the new family $\mathcal F'$ can be constructed in such a way that the same basis $B$ orthogonalizes it. In this section we consider several constructions whose goal is to enlarge a given family $\mathcal F$ of inner products so as to get a new family $\mathcal F'$ whose simultaneous orthogonalization is essentially the same as that of $\mathcal F$ and such that $\mathcal F'$ is better behaved than the original $\mathcal F$. The optimum that we can expect is to get a nondegenerate inner product in $\mathcal F'$ while possibly all the inner products in $\mathcal F$ are degenerate. We also investigate some properties that hold automatically in the finite-dimensional case but deserve certain attention in the infinite-dimensional case.

\subsection{Construction by adding a linear combination.}

Under certain conditions, it is possible to construct, from a given family (in which possibly all the inner products are degenerate), a new family containing a nondegenerate inner product. Moreover, if the initial family is simultaneously orthogonalizable, then the new one is also simultaneously orthogonalizable.

\begin{definition}\rm
Let $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i \in I}$ be a family of inner products on the ${\mathbb{K}}$-vector space $V$. We define the {\it radical of the family} $\mathcal F$ by $\mathop{\hbox{\bf rad}}(\mathcal F):=\cap_{i\in I}\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_i)$. For any $F\in\hbox{\bf P}_{\mathbf{F}}(I)$ we define the subspace
$$R_F:=\cap_{i\in F}\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_i).$$
We will say that $\mathcal F$ satisfies the {\em descending chain condition on radicals} if it satisfies the descending chain condition on the subspaces $R_F$. More precisely: for any sequence $F_1, F_2,\ldots$ of finite subsets of $I$ such that $R_{F_1}\supset R_{F_2}\supset\cdots\supset R_{F_k}\supset\cdots$, there is some $n$ such that $R_{F_n}=R_{F_m}$ for any $m\ge n$.
\end{definition}

It is easy to see that $\mathcal F$ satisfies the descending chain condition on radicals if and only if any nonempty collection of subspaces $R_F$ has a minimal element (relative to inclusion). Of course, if $V$ happens to be finite-dimensional, then $\mathcal F$ automatically satisfies the descending chain condition on radicals. An example of a family $\mathcal F$ which does not satisfy the descending chain condition on radicals is $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i\in\mathbb{N}^*}$, where $V$ is a countably-dimensional ${\mathbb{K}}$-vector space with a (simultaneously) orthogonal basis $\{v_j\}$ and, for any $i$, the product $\esc{\cdot,\cdot}_i$ is given by $\esc{v_j,v_j}_i=1$ for $1\le j\le i$ and $\esc{v_j,v_j}_i=0$ for $j>i$. This $\mathcal F$ has $\mathop{\hbox{\bf rad}}(\mathcal F)=0$.

\medskip
Although the following result is well known in the literature, we give here a slightly more general version than the one we can find in \cite[\S X, section 14, Proposition 3, p. 248]{Jacobson}.

\begin{lemma}\label{retorta}
Let $V$ be a ${\mathbb{K}}$-vector space of finite dimension.
Let $J$ be a set of cardinality $\omega$ and let ${\mathbb{K}}$ be a field of cardinality $>\omega$. Let $\{l_i\}_{i\in J}$ be a family of proper subspaces of $V$ indexed by $J$. Then $V\not\subset \cup_{i\in J}l_i$.
\end{lemma}
\begin{proof}
Let $n$ be the dimension of $V$. For $n=1$ this is trivial. Suppose that the assertion is true for any dimension $k<n$, so that no ${\mathbb{K}}$-vector space of dimension $<n$ is a union (indexed by $J$) of proper subspaces. Assume on the contrary that $V$ is a ${\mathbb{K}}$-vector space of dimension $n$ which is a union of proper subspaces, $V\subset \cup_{i\in J}l_i$. Let $L=\{l_i\}_{i\in J}$ and let $h$ be a hyperplane not in $L$. It exists because ${\mathbb{K}}$ has cardinality $>\omega$ and $L$ is a collection whose cardinality is $\le \omega$ (so $V$ contains many more hyperplanes). Then $h=h\cap V \subset \cup_{i\in J} (h\cap l_i)$. But for any $i$, the intersection $h\cap l_i$ is a proper subspace of $h$ (otherwise $h\subset l_i$, and this would imply $h=l_i$). So we have that $h$ (which has dimension $n-1$) is a union (indexed by $J$) of proper subspaces. This is a contradiction.
\end{proof}

Note that if $J$ has finite cardinality $\omega$ and ${\rm card}({\mathbb{K}})>\omega$, Lemma \ref{retorta} applies. It also applies if $J$ is countable and ${\mathbb{K}}$ uncountable.

\begin{corollary}\label{krop}
Let ${\mathbb{K}}$ be a field, $n\in \mathbb{N}$, $n\ge 1$, and let $L_n=\{(a_{1i},\ldots, a_{ni})\in{\mathbb{K}}^n\setminus\{0\} \colon i\in J \}$ be a family of nonzero vectors with ${\rm card}({\mathbb{K}}) > {\rm card}(J)$. Then there are elements $x_1,\ldots,x_n\in{\mathbb{K}}$ such that for any $i$ we have $x_1 a_{1i}+\cdots+x_n a_{ni}\ne 0$.
\end{corollary}
\begin{proof}
Assume the contrary, so that for any $(x_1,\ldots,x_n)\in{\mathbb{K}}^n$ we have $a_{1j}x_1+\cdots+a_{nj}x_n=0$ for some $j$. In geometrical terms this means that ${\mathbb{K}}^n$ is contained in a union of hyperplanes indexed by $J$ (each tuple $(a_{1j},\ldots,a_{nj})$ is nonzero, so its solution set is a hyperplane), which is not possible by Lemma \ref{retorta}.
\end{proof}

\begin{proposition}
Let $V$ be a ${\mathbb{K}}$-vector space and let $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i\in I}$ be a simultaneously orthogonalizable family of inner products of $V$ satisfying the descending chain condition on radicals. Then there is a finite subset $F\subset I$ such that
\begin{equation}\label{merenguito}
\bigcap_{i\in F}\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_i)=\mathop{\hbox{\bf rad}}(\mathcal F).
\end{equation}
Furthermore, if $\mathop{\hbox{\bf rad}}(\mathcal F)=0$ and ${\rm card}({\mathbb{K}}) > \dim(V)$, there exists a linear combination of elements of $\mathcal F$ which is nondegenerate.
\end{proposition}
\begin{proof}
If ${\rm card}(I)$ is finite the equality \eqref{merenguito} holds trivially. So assume that $I$ is an infinite set. Denote $J=\hbox{\bf P}_{\mathbf{F}}(I)$ and consider the collection $\{R_F\}_{F\in J}$, which by our hypothesis has a minimal element $R_{F_0}$; since $R_F\cap R_{F'}=R_{F\cup F'}$, this minimal element is in fact a minimum. On the one hand we have $\mathop{\hbox{\bf rad}}(\mathcal F)\subset R_{F_0}$. On the other hand, for every $F\in J$ we have $R_F\supset R_{F_0}$. Thus
$$R_{F_0}\supset\mathop{\hbox{\bf rad}}(\mathcal F) \supset \bigcap_{F\in J} R_F\supset R_{F_0}.$$
Now we prove the second part. For some finite set of inner products (which can be chosen linearly independent) we have $0=\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_1)\cap\cdots\cap\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_n)$ (reordering if necessary).
Let $\{v_k\}_{k\in H}$ be the simultaneous orthogonal basis (so that ${\rm card}(H)=\dim(V)$) and write $\esc{v_k,v_k}_j=a_{jk}$, so that the matrix of $\esc{\cdot,\cdot}_j$ is ${\rm diag}(a_{jk})_{k \in H}$. No tuple $(a_{1k},\ldots,a_{nk})$ is zero, since otherwise $v_k$ would lie in $\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_1)\cap\cdots\cap\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_n)=0$. Then, applying Corollary \ref{krop}, there are scalars $x_1,\ldots, x_n\in{\mathbb{K}}$ such that $x_1\esc{\cdot,\cdot}_1+\cdots+x_n\esc{\cdot,\cdot}_n$ is nondegenerate.
\end{proof}

Note that, in the hypotheses of the previous proposition, we can construct a new family $\mathcal F'=\mathcal F\cup\{x_1\esc{\cdot,\cdot}_1+\cdots+x_n\esc{\cdot,\cdot}_n\}$ which contains a nondegenerate element and induces the same simultaneous orthogonalization as $\mathcal F$. Furthermore, the new family is obtained by adding at most one element. Moreover, $n$ can be chosen minimum.

\subsection{Scalar extension of inner products}\label{periquito}

The aim of this subsection is to study the behaviour of nondegenerate families of inner products under scalar extension. If $\esc{\cdot,\cdot}\colon V\times V\to{\mathbb{K}}$ is an inner product and ${\mathbb{F}}\supset{\mathbb{K}}$ a field extension, then we can define on $V_{\mathbb{F}}:= V \otimes {\mathbb{F}}$ an inner product $\esc{\cdot,\cdot}_{\mathbb{F}}\colon V_{\mathbb{F}}\times V_{\mathbb{F}}\to{\mathbb{F}}$ such that $\esc{v\otimes\lambda,w\otimes\mu}_{{\mathbb{F}}}=\lambda\mu\esc{v,w}$. If we take a (possibly infinite) basis $B=\{v_i\}$ of $V$, then $B\otimes 1:=\{v_i\otimes 1\}$ is a basis of $V_{\mathbb{F}}$ as an ${\mathbb{F}}$-vector space. In the particular case in which $V$ is finite-dimensional, the matrix of $\esc{\cdot,\cdot}_{\mathbb{F}}$ relative to the basis $B\otimes 1$ coincides with the matrix of $\esc{\cdot,\cdot}$ relative to $B$, up to the canonical identification of $V$ with $V \otimes {\mathbb{K}}$. Consequently
\begin{equation}\label{pepino}
\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_{{\mathbb{F}}}) \cong \mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot})\otimes{\mathbb{F}}
\end{equation}
\noindent and, in particular, nondegeneracy of $\esc{\cdot,\cdot}$ implies that of $\esc{\cdot,\cdot}_{\mathbb{F}}$. We can prove the above isomorphism for arbitrary $V$, taking a basis $\{v_i\}$ of $V$ and a basis $\{f_j\}$ of the ${\mathbb{K}}$-vector space ${\mathbb{F}}$. If $\esc{z,V}=0$ then automatically $\esc{z\otimes 1,V_{\mathbb{F}}}_{{\mathbb{F}}}=0$, so that the map $\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot})\otimes{\mathbb{F}}\to V_{\mathbb{F}}$ induced by the inclusion is a monomorphism with image contained in $\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_{\mathbb{F}})$. To prove that the image of the above map is the whole of $\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_{\mathbb{F}})$, let $c_{ij}\in{\mathbb{K}}$ be scalars and take an element $z=\sum_{ij}c_{ij}v_i\otimes f_j\in V_{\mathbb{F}}$ such that $\esc{z,V_{\mathbb{F}}}_{{\mathbb{F}}}=0$. Then $\esc{z,u\otimes 1}_{\mathbb{F}}=0$ for any $u\in V$. Thus $0=\sum_{ij}f_jc_{ij}\esc{v_i,u}$, which implies $\sum_i c_{ij}\esc{v_i,u}=0$ for any $j$ and $u \in V$. Whence $\sum_i c_{ij}v_i\in \mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot})$ for any $j$. Defining $z_j:=\sum_i c_{ij}v_i\in \mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot})$ we have $z=\sum_j z_j\otimes f_j$, hence $z\in\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot})\otimes{\mathbb{F}}$.
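One half of the identification \eqref{pepino} can be checked mechanically in coordinates: since the Gram matrix of $\esc{\cdot,\cdot}_{\mathbb{F}}$ in the basis $B\otimes 1$ has exactly the same entries as that of $\esc{\cdot,\cdot}$, any ${\mathbb{F}}$-linear combination of a ${\mathbb{K}}$-basis of $\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot})$ stays inside $\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_{\mathbb{F}})$. The SymPy fragment below sketches only this easy inclusion for ${\mathbb{K}}=\mathbb{Q}$ inside ${\mathbb{F}}=\mathbb{Q}(\sqrt2)$, with an ad hoc Gram matrix; the reverse inclusion is the content of the argument above.

\begin{verbatim}
# Sketch: the easy inclusion rad<.,.> (x) F  <=  rad<.,.>_F.
# (Ad hoc rational Gram matrix; illustration only.)
from sympy import Matrix, sqrt

G = Matrix([[1, 1, 0],
            [1, 1, 0],
            [0, 0, 3]])        # rational Gram matrix with 1-dim radical

rad_K = G.nullspace()          # K-basis of rad<.,.>, here K = Q
assert len(rad_K) == 1

# An F-linear combination of the K-radical basis, F = Q(sqrt(2)):
z = (2 + sqrt(2)) * rad_K[0]
# The Gram matrix of <.,.>_F in the basis "B tensor 1" is the same G,
# so z still lies in the radical of the extended inner product:
assert (G * z).expand() == Matrix.zeros(3, 1)
\end{verbatim}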
\begin{definition}\rm Consider a field extension ${\mathbb{F}}\supset{\mathbb{K}}$ and a linear map $T\colon V_1\to V_2$ with $V_1$ and $V_2$ two ${\mathbb{K}}$-vector spaces. We will denote by $T_{\mathbb{F}}:=T\otimes 1$ the linear map $V_{1{\mathbb{F}}}\to V_{2{\mathbb{F}}}$ such that $T_{\mathbb{F}}(v\otimes\lambda)=T(v)\otimes\lambda$ for any $v\in V_1$ and $\lambda\in{\mathbb{F}}$. \end{definition} Observe that if $T$ has an adjoint $S$, then $T_{\mathbb{F}}$ also has an adjoint, namely $S_{\mathbb{F}}$. \begin{remark}\label{tapita} \rm If $V_i$ is provided with a nondegenerate inner product $\esc{\cdot,\cdot}_i$ $(i=1,2)$ and $T\colon V_1\to V_2$ is a continuous linear map (relative to the $\esc{\cdot,\cdot}_i$-topology of $V_i$ for $i=1,2$), then $T_{\mathbb{F}}$ is also continuous (relative to the $\esc{\cdot,\cdot}_{i{\mathbb{F}}}$-topology of $V_{i{\mathbb{F}}}$ for $i=1,2$), since continuity in this case is characterized by the existence of a unique adjoint and we have $(T_{\mathbb{F}})^\sharp=(T\otimes 1)^\sharp=T^\sharp\otimes 1=(T^\sharp)_{\mathbb{F}}$ (see \cite[\S IV, section 7, Theorem 1, p. 72]{Jacobson}). \end{remark} \begin{proposition}\label{sotreumsus} For $i=1,2$ let $(V_i,\esc{\cdot,\cdot}_i)$ be inner product ${\mathbb{K}}$-vector spaces with radical $\mathfrak r_i$, and $T\colon V_1\to V_2$ be a continuous linear map (relative to the $\esc{\cdot,\cdot}_i$-topology of $V_i$). Let ${\mathbb{F}}\supset{\mathbb{K}}$ be a field extension. Assume that $T(\mathfrak r_1)\subset\mathfrak r_2$ and $T(W_1)\subset W_2$ for some subspaces $W_i$ with $V_i= \mathfrak r_i \oplus W_i$ (orthogonal direct sum), $i=1,2$. Then the linear map $T_{\mathbb{F}}$ is continuous (relative to the $\esc{\cdot,\cdot}_{i{\mathbb{F}}}$-topology of $V_{i{\mathbb{F}}}$). \end{proposition} \begin{proof} Applying Proposition \ref{banana2} there exists an adjoint $T^\sharp\colon V_2\to V_1$. It follows from the definition that $T_{\mathbb{F}}$ also has an adjoint, namely $(T^\sharp)_{\mathbb{F}}$. Then Proposition \ref{banana2} again gives the continuity of $T_{\mathbb{F}}$, because $T_{\mathbb{F}}(\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_{1{\mathbb{F}}}))=T(\mathfrak r_1)\otimes{\mathbb{F}}\subset\mathfrak r_2\otimes{\mathbb{F}}=\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_{2{\mathbb{F}}})$ and $V_{i{\mathbb{F}}}=\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_{i{\mathbb{F}}})\oplus W_{i{\mathbb{F}}}$ for $i=1,2$. Furthermore $T_{\mathbb{F}}(W_{1{\mathbb{F}}})=T(W_1)\otimes{\mathbb{F}}\subset W_2\otimes{\mathbb{F}}=W_{2{\mathbb{F}}}$. \end{proof} \begin{proposition} For $i=0,1$ let $(V,\esc{\cdot,\cdot}_i)$ be two inner product ${\mathbb{K}}$-vector spaces with $\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_0)\subset \mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_1)$. If $\esc{\cdot,\cdot}_1$ is partially continuous relative to the $\esc{\cdot,\cdot}_0$-topology of $V$, then for any field extension ${\mathbb{F}}\supset{\mathbb{K}}$ the inner product $\esc{\cdot,\cdot}_{1{\mathbb{F}}}$ is partially continuous relative to the $\esc{\cdot,\cdot}_{0{\mathbb{F}}}$-topology of $V_{\mathbb{F}}$. \end{proposition} \begin{proof} Indeed, if $\esc{\cdot,\cdot}_1$ is partially continuous relative to the $\esc{\cdot,\cdot}_0$-topology, Proposition \ref{berenjena} implies that $\esc{x,y}_1=\esc{T(x),y}_0$ for some linear map $T\colon V\to V$ which is continuous and vanishes on $\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_0)$.
Then $T_{\mathbb{F}}\colon V_{\mathbb{F}}\to V_{\mathbb{F}}$ is continuous by Proposition \ref{sotreumsus} (taking into account Remark \ref{cachondo}). Moreover, we have $$\esc{x\otimes\lambda,y\otimes\mu}_{1{\mathbb{F}}}=\lambda\mu \esc{x,y}_1=\lambda\mu\esc{T(x),y}_0=\esc{T_{\mathbb{F}}(x\otimes\lambda),y\otimes\mu}_{0{\mathbb{F}}}.$$ So $\esc{z,z'}_{1{\mathbb{F}}}=\esc{T_{\mathbb{F}}(z),z'}_{0{\mathbb{F}}}$ for any $z,z'\in V_{\mathbb{F}}$, and since $\esc{\cdot,\cdot}_{0{\mathbb{F}}}$ is partially continuous, by Lemma \ref{tomate} we conclude that $\esc{\cdot,\cdot}_{1{\mathbb{F}}}$ is also partially continuous. \end{proof} Let $\mathcal F$ be a family of inner products in a ${\mathbb{K}}$-vector space $V$. Assume that: \begin{enumerate}[label=(\roman*)] \item there is an element $\esc{\cdot,\cdot}_0\in\mathcal F$ whose radical $\mathfrak r_0$ is contained in $\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot})$ for every $\esc{\cdot,\cdot}\in\mathcal F$; \item each $\esc{\cdot,\cdot}$ is partially continuous relative to the $\esc{\cdot,\cdot}_0$-topology of $V$. \end{enumerate} Then for any field extension ${\mathbb{F}}\supset{\mathbb{K}}$, the extended family $$\mathcal F_{\mathbb{F}}:=\{\esc{\cdot,\cdot}_{\mathbb{F}}\colon \esc{\cdot,\cdot}\in\mathcal F\}$$ satisfies both properties: $\mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_{0{\mathbb{F}}})\subset \mathop{\hbox{\bf rad}}(\esc{\cdot,\cdot}_{\mathbb{F}})$ for any $\esc{\cdot,\cdot}_{\mathbb{F}}\in\mathcal F_{\mathbb{F}}$, and $\esc{\cdot,\cdot}_{\mathbb{F}}$ is partially continuous relative to the $\esc{\cdot,\cdot}_{0{\mathbb{F}}}$-topology of $V_{\mathbb{F}}$. As a consequence, the next result follows. \begin{corollary} If $\mathcal F$ is a nondegenerate family of inner products in a ${\mathbb{K}}$-vector space $V$ and ${\mathbb{F}}\supset{\mathbb{K}}$ a field extension, then the family $\mathcal F_{\mathbb{F}}$ is again nondegenerate. \end{corollary} \subsection{Construction by ultrafilters.} Roughly speaking, in this subsection we start with a simultaneously orthogonalizable family $\mathcal F$ of inner products on a vector space $V$ over ${\mathbb{K}}$. Under suitably mild hypotheses, we find a field extension ${\mathbb{F}}\supset{\mathbb{K}}$ and an expanded family $\mathcal F_{\mathbb{F}}\cup\{\escd{\cdot,\cdot}_{\mathbb{F}}\}$ which is nondegenerate in the sense of Definition \ref{fuerade}. \medskip \indent Recall that $(I,\le)$ is a {\it directed set} if it is a preordered set satisfying $\forall i,j\in I, \exists \, k\in I\colon i,j\le k$. \begin{remark}\label{onimuhc} \rm Let $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i\in I}$ be a family of inner products in a ${\mathbb{K}}$-vector space $V$. We may assume that $I$ is a directed set; in fact, by Zermelo's Theorem (also known as the Well-Ordering Theorem), any set can be well-ordered. Since any well-ordered set is a directed set, we conclude that there is no loss of generality in assuming that $I$ is a directed set for some preorder relation $\le$. \end{remark} We recall that a {\it filter of subsets} ${\mathfrak F}$ of a given set $X$ is a collection ${\mathfrak F}\subset\text{\bf P}(X)$ (the {\it power set of} $X$) such that: \begin{enumerate}[label=(\roman*)] \item $\emptyset\notin\mathfrak{F}, \; X\in \mathfrak{F}$. \item $S,S'\in\mathfrak{F}$ implies $S\cap S'\in\mathfrak{F}$.
\item $S\in\mathfrak{F}$ and $S\subset S'$ implies $S'\in \mathfrak{F}$. \end{enumerate} An {\it ultrafilter} on $X$, say $\mathfrak{U}$, is just a maximal filter of subsets of $X$ (equivalently, for any $S\subset X$ one has $S\in{\mathfrak{U}}$ or $X\setminus S\in{\mathfrak{U}}$). For more information about filters see \cite[Chapter 3]{F}. In these conditions we have the following definitions. \begin{definition}\label{Leo} \rm Let $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i\in I}$ be a family of inner products in a ${\mathbb{K}}$-vector space $V$. Note that $I$ is a directed set by Remark \ref{onimuhc}. For each $i\in I$ define $[i,\to):=\{j\in I\colon i\le j\}$. Take the filter $\mathfrak{F}:=\{S\subset I\colon \exists \, \, i\in I,\, [i,\to)\subset S\}$ and an ultrafilter $\mathfrak{U}$ containing $\mathfrak{F}$. If we define ${\mathbb{K}}_i:={\mathbb{K}}$, we will denote the elements of $\prod_i{\mathbb{K}}_i$ in the form $(x_i)_{i\in I}$ with $x_i \in {\mathbb{K}}$. The usual equivalence relation in $\prod_i{\mathbb{K}}_i$ is given by $(x_i)_{i\in I}\equiv (y_i)_{i\in I}$ if and only if $\{i\in I\colon x_i=y_i\}\in\mathfrak{U}$. The equivalence class of $(x_i)_{i\in I}$ will be denoted $[(x_i)_{i\in I}]$, or $[(x_i)]$ for short. The quotient of $\prod_{i\in I}{\mathbb{K}}_i$ modulo the relation $\equiv$ will be denoted \begin{equation}\label{coneho} {\mathbb{F}}:=\prod_{i\in I}{\mathbb{K}}_i/\mathfrak{U}. \end{equation} \end{definition} It is well known that ${\mathbb{F}}\supset{\mathbb{K}}$ is a field extension for the operations $$[(x_i)]+[(y_i)]:=[(x_i+y_i)], \ [(x_i)][(y_i)]:=[(x_iy_i)].$$ Indeed, the extension is realized via the canonical monomorphism ${\mathbb{K}}\to{\mathbb{F}}$ such that $\lambda\mapsto\vec{\lambda}$, where $\vec{\lambda}$ denotes the equivalence class of the element $(x_i)_{i\in I}$ such that $x_i=\lambda$ for any $i$. For further details about the construction of the field ${\mathbb{F}}$ see \cite[Chapter 3]{F}. \begin{remark} \label{estornudo} \rm Assume that $I$ is finite and endow it with a well ordering, so that there is no loss of generality in assuming that $I=\{1,\ldots,n\}$ with its usual order as a subset of the natural numbers. Let $\mathfrak{F}$ be a filter containing all the intervals $[i,\to)$ for $i=1,\ldots,n$. Then $$\mathfrak{F}=\{S\subset I\colon n\in S\}.$$ Moreover, this filter is an ultrafilter, because any subset of $I$ either contains $n$ or its complement contains $n$. We will denote ${\mathbb{K}}^I=\prod_{i=1}^n{\mathbb{K}}_i$. So we take ${\mathfrak{U}}=\mathfrak{F}$ and consider ${\mathbb{K}}^I/{\mathfrak{U}}$, where now two $n$-tuples $(x_i)_1^n$ and $(y_i)_1^n$ are related if and only if $x_n=y_n$. Therefore the canonical monomorphism ${\mathbb{K}}\to{\mathbb{F}}:={\mathbb{K}}^I/{\mathfrak{U}}$ is an isomorphism, since for any $[(x_i)_1^n]\in{\mathbb{F}}$ we have $[(x_i)_1^n]=[(x_n,x_n,\ldots,x_n)]=\vec{x_n}$. In this case we do not get a proper extension field of ${\mathbb{K}}$ but ${\mathbb{K}}$ itself. \end{remark}
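The collapse described in the previous remark is easy to verify mechanically. The following toy sketch (Python, with $n=4$ and tuples of integers standing in for elements of ${\mathbb{K}}^I$; the data is purely illustrative) checks that $\{S\subset I\colon n\in S\}$ is an ultrafilter and that two tuples are identified in ${\mathbb{K}}^I/\mathfrak{U}$ precisely when their last coordinates agree.
\begin{verbatim}
from itertools import combinations

n = 4
I = frozenset(range(1, n + 1))

def subsets(X):
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1)
            for c in combinations(X, r)]

F = [S for S in subsets(I) if n in S]

# Filter axioms: no empty set, contains I, closed under
# intersections and supersets.
assert frozenset() not in F and I in F
assert all(S & T in F for S in F for T in F)
assert all(T in F for S in F for T in subsets(I) if S <= T)

# Ultrafilter: every subset of I or its complement is in F.
assert all(S in F or (I - S) in F for S in subsets(I))

# Two tuples are equivalent iff they agree on a set in F,
# i.e. iff their n-th entries coincide.
x, y = (7, 8, 9, 5), (0, 0, 0, 5)
agree = frozenset(i for i in I if x[i - 1] == y[i - 1])
assert agree in F      # so [x] = [y] in K^I / U
print("all checks passed")
\end{verbatim}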
\begin{definition} \rm Let $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i\in I}$ be a family of inner products in the ${\mathbb{K}}$-vector space $V$. Observe that $I$ is a directed set by Remark \ref{onimuhc}. We will say that an element $x\in V$ is {\em pathological} if for any finite-dimensional subspace $S$ of $V$ the set $S_x:=\{i\in I\colon \esc{x,S}_i=0\}$ belongs to $\mathfrak{U}$, where $\mathfrak{U}$ is the ultrafilter of Definition \ref{Leo}. The set of all pathological elements will be denoted $\mathop{\hbox{\rm path}}_{\mathcal F}(V)$. When $\mathop{\hbox{\rm path}}_{\mathcal F}(V)=0$ we will say that $\mathcal F$ is {\em nonpathological}. \end{definition} It is easy to prove that the set of all pathological elements is a subspace of $V$. Let $x,y\in\mathop{\hbox{\rm path}}_{\mathcal F}(V)$; then for any finite-dimensional subspace $S$ of $V$ consider $S_x=\{i\in I\colon \esc{x,S}_i=0\}$ and similarly $S_y$. Then $S_x,S_y\in\mathfrak{U}$, so $S_x\cap S_y\in\mathfrak{U}$. We have $S_x\cap S_y\subset S_{x+y}$, whence $S_{x+y}\in\mathfrak{U}$, implying $x+y\in\mathop{\hbox{\rm path}}_{\mathcal F}(V)$. Thus $\mathop{\hbox{\rm path}}_{\mathcal F}(V)+\mathop{\hbox{\rm path}}_{\mathcal F}(V)\subset\mathop{\hbox{\rm path}}_{\mathcal F}(V)$, and analogously ${\mathbb{K}} \mathop{\hbox{\rm path}}_{\mathcal F}(V)\subset\mathop{\hbox{\rm path}}_{\mathcal F}(V)$. Consequently, choosing a complement to $\mathop{\hbox{\rm path}}_{\mathcal F}(V)$ in $V$ we have a decomposition $V=\mathop{\hbox{\rm path}}_{\mathcal F}(V)\oplus W$. When $W=0$ all the elements are pathological, and one can see that this is equivalent to the statement that for any $x,y\in V$ we have $\{i\in I\colon \esc{x,y}_i=0\}\in{\mathfrak{U}}$. When $W\ne 0$ we can consider $\mathcal F\vert_W$ and the question is: is $\mathcal F\vert_W$ a nonpathological family? \begin{proposition} Suppose $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i\in I}$ is a family of inner products in the ${\mathbb{K}}$-vector space $V$, where $V=\mathop{\hbox{\rm path}}_{\mathcal F}(V)\oplus W$. If $W\ne 0$ then $\mathcal F\vert_W$ is a nonpathological family. \end{proposition} \begin{proof} Let $a \in \mathop{\hbox{\rm path}}_{\mathcal F\vert_W}(W)$; this means that for any finite-dimensional subspace $R$ of $W$ the set $R_a=\{i\in I\colon \esc{a,R}_i=0\}$ belongs to $\mathfrak{U}$. Consider a finite-dimensional subspace $S$ of $V$. We want to prove that $\{i\in I\colon \esc{a,S}_i=0\}\in {\mathfrak{U}}$. Let $\pi_W\colon V \to W$ be the projection onto $W$ and consider $R:=\pi_W(S)$, the projection of $S$ onto $W$. We know that $R_a\in\mathfrak{U}$. Write $s= p + w \in S$ where $p \in \mathop{\hbox{\rm path}}_{\mathcal F}(V)$ and $w \in W$; then $\esc{a,s}_i=\esc{a,p+w}_i=\esc{a,p}_i+\esc{a,w}_i$. Since $p$ is pathological, $({\mathbb{K}} a)_p=\{i\in I\colon\esc{p,{\mathbb{K}} a}_i=0\}\in {\mathfrak{U}}$. Hence $R_a\cap ({\mathbb{K}} a)_p \in {\mathfrak{U}}$. Now observe that if $i\in({\mathbb{K}} a)_p$ then $\esc{p,a}_i=0$, and if $i\in R_a$ then $0=\esc{a,R}_i=\esc{a,\pi_W(S)}_i$, which implies $\esc{a,w}_i=0$. Consequently, if $i\in R_a\cap ({\mathbb{K}} a)_p$ we have $\esc{a,s}_i=\esc{a,p}_i+\esc{a,w}_i=0$. Hence for a fixed $s\in S$ we have $\{i\in I\colon \esc{a,s}_i=0\}\in{\mathfrak{U}}$. To finish the proof, consider a basis $\{s_1,\ldots, s_k\}$ of $S$. Each set $\{i\in I\colon \esc{a,s_j}_i=0\}$ belongs to ${\mathfrak{U}}$ ($j=1,\ldots,k$), hence so does their intersection, which is contained in $\{i\in I\colon \esc{a,S}_i=0\}$. We conclude that $\{i\in I\colon \esc{a,S}_i=0\}\in{\mathfrak{U}}$.
Thus $a\in\mathop{\hbox{\rm path}}_{\mathcal F}(V)\cap W=0$, so the family $\mathcal F\vert_W$ is nonpathological. \end{proof} \begin{definition} \rm Consider a family $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i\in I}$ of inner products in the ${\mathbb{K}}$-vector space $V$. We define a ${\mathbb{K}}$-bilinear symmetric map $\escd{\cdot,\cdot}\colon V\times V\to{\mathbb{F}}$, with ${\mathbb{F}}$ as in \eqref{coneho}, by \begin{equation}\label{etnetop} \escd{x,y}:=[(\esc{x,y}_i)_{i\in I}] \ \text{ for any }\ x, y \in V. \end{equation} \end{definition} \begin{proposition} Let $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i\in I}$ be a family of inner products in the ${\mathbb{K}}$-vector space $V$ and let $\escd{\cdot,\cdot}\colon V\times V\to{\mathbb{F}}$ be the ${\mathbb{K}}$-bilinear symmetric map defined in \eqref{etnetop}. Then $ \mathop{\hbox{\rm path}}_{\mathcal F}(V)=\{x\in V\colon \escd{x,V}=0\}.$ \end{proposition} \begin{proof} If $x\in \mathop{\hbox{\rm path}}_{\mathcal F}(V)$ then each $S_x\in{\mathfrak{U}}$ (for an arbitrary finite-dimensional subspace $S$). Take an arbitrary $v\in V$; then $({\mathbb{K}} v)_x\in{\mathfrak{U}}$, hence $\{i\colon\esc{x,v}_i=0\}\in{\mathfrak{U}}$, so $\escd{x,v}=0$, and since $v$ is arbitrary, $\escd{x,V}=0$. Conversely, if $\escd{x,V}=0$, take a finite-dimensional subspace $S$ of $V$ and consider a basis $\{s_1,\ldots,s_k\}$ of $S$. Then $0=\escd{x,s_j}=[(\esc{x,s_j}_i)_{i\in I}]$, whence $\{i\in I\colon \esc{x,s_j}_i=0\}\in{\mathfrak{U}}$ for $j=1,\ldots,k$. So $\{i\in I\colon \esc{x,S}_i=0\}\in{\mathfrak{U}}$ and $x\in \mathop{\hbox{\rm path}}_{\mathcal F}(V)$. \end{proof} \begin{definition} \rm Let $\mathcal F=\{\esc{\cdot, \cdot}_i\}_{i\in I}$ be a family of inner products in the ${\mathbb{K}}$-vector space $V$ and let $\escd{\cdot,\cdot}\colon V\times V\to{\mathbb{F}}$ be as in \eqref{etnetop}. We define on $V_{\mathbb{F}}:=V \otimes {\mathbb{F}}$ an inner product $\escd{\cdot,\cdot}_{\mathbb{F}}\colon V_{\mathbb{F}}\times V_{\mathbb{F}}\to {\mathbb{F}}$ such that $$\escd{x\otimes\lambda,y\otimes\mu}_{\mathbb{F}}:=\lambda\mu\escd{x,y} \ \text{ for any }\ x\otimes\lambda,\ y\otimes\mu \in V_{\mathbb{F}}.$$ \end{definition} \begin{theorem}\label{pajaro} Let $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i\in I}$ be a family of inner products in the ${\mathbb{K}}$-vector space $V$. We have the following: \begin{enumerate}[label=(\roman*)] \item\label{papa} If $\mathcal F$ is nonpathological and simultaneously orthogonalizable, then $\escd{\cdot,\cdot}_{\mathbb{F}}$ is nondegenerate. \item \label{pipi} If $\escd{\cdot,\cdot}_{\mathbb{F}}$ is nondegenerate then $\mathcal F$ is nonpathological. \end{enumerate} \end{theorem} \begin{proof} Let $B=\{v_j\}$ be an orthogonal basis of $V$ relative to every inner product of $\mathcal F$. Then $\{v_j\otimes 1\}$ is an orthogonal basis of $(V_{\mathbb{F}},\escd{\cdot,\cdot}_{\mathbb{F}})$, and to prove that $\escd{\cdot,\cdot}_{\mathbb{F}}$ is nondegenerate it suffices to prove that $\escd{v_j\otimes 1,v_j\otimes 1}_{\mathbb{F}}$ is nonzero for any $j$. But $\escd{v_j\otimes 1,v_j\otimes 1}_{\mathbb{F}}=\escd{v_j,v_j}$. If we suppose that there exists $j$ such that $\escd{v_j,v_j}=0$, we can define $T:=\{i\in I\colon \esc{v_j,v_j}_i=0\}\in{\mathfrak{U}}$.
Next we prove that $v_j$ is a pathological element of $\mathcal F$: let $S$ be a finite-dimensional subspace of $V$; then there is a finite collection $v_{k_1},\ldots,v_{k_m}$ of elements of $B$ such that $S\subset\mathop{\hbox{\rm span}}(\{v_{k_q}\}_{q=1}^m)$. Now we see that $\{i\in I\colon \esc{v_j,S}_i=0\}\in{\mathfrak{U}}$. Indeed, first $T\subset S_q:=\{i\in I\colon \esc{v_j,v_{k_q}}_i=0\}$ for each $q\in\{1,\ldots,m\}$. Thus $\cap_1^m S_q\in{\mathfrak{U}}$ and $\cap_1^m S_q\subset \{i\in I\colon \esc{v_j,S}_i=0\}$. We conclude that $v_j\in\mathop{\hbox{\rm path}}_{\mathcal F}(V)=0$, which is a contradiction. Next, to prove \ref{pipi} we see that if $v\in\mathop{\hbox{\rm path}}_{\mathcal F}(V)$, then $v\otimes 1\in\mathop{\hbox{\bf rad}}(\escd{\cdot,\cdot}_{\mathbb{F}})$. Indeed, we have $\escd{v\otimes 1,x\otimes 1}_{\mathbb{F}}=\escd{v,x}=[(\esc{v,x}_i)]$ for $x\in V$. But $v$ is pathological, so $\{i\colon \esc{v,x}_i=0\}\in{\mathfrak{U}}$. Thus $[(\esc{v,x}_i)]=0$, hence $v\otimes 1\in\mathop{\hbox{\bf rad}}(\escd{\cdot,\cdot}_{{\mathbb{F}}})$. \end{proof} \begin{remark}\label{aparruz}\rm Note that if $\{v_j\}$ is a basis of $V$ diagonalizing $\mathcal F$, then $\{v_j\otimes 1\}$ is a basis of $V_{\mathbb{F}}$ diagonalizing the inner product $\escd{\cdot,\cdot}_{\mathbb{F}}$. In this way the family $\mathcal F_{\mathbb{F}}\cup\{\escd{\cdot,\cdot}_{\mathbb{F}}\}$ is simultaneously orthogonalizable. \end{remark} In order to illustrate Theorem \ref{pajaro} we can consider the following example. \begin{example}\rm Assume that $V$ is $\aleph_0$-dimensional and consider a family of inner products in $V$ given by $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i\in\mathbb{N}}$, where relative to a basis $B=\{v_j\}$ of $V$ we have $\esc{v_i,v_j}_k=0$ if $i\ne j$ for each $k$, $\esc{v_j,v_j}_j=0$ and $\esc{v_j,v_j}_i=1$ for $i\ne j$. Thus the matrix of any $\esc{\cdot,\cdot}_i$ relative to $B$ is a diagonal matrix with $1$'s on the diagonal except for the $(i,i)$ entry, which is $0$. Consequently each inner product of $\mathcal F$ is degenerate. Furthermore, $\mathcal F$ is nonpathological: if $x=\sum_k\lambda_k v_k$ is a pathological element and we consider an arbitrary $v_j\in B$, then $\{i\in \mathbb{N} \colon \esc{x,v_j}_i=0\} \in {\mathfrak{U}}$, so $\{i\in \mathbb{N} \colon \esc{x,v_j}_i=0\}\ne \{j\}$ and there exists $i\ne j$ such that $0=\esc{x,v_j}_i=\sum_{k}\lambda_k\esc{v_k,v_j}_i=\lambda_j\esc{v_j,v_j}_i=\lambda_j$; since $j$ is arbitrary, $x=0$. Hence if we compute $\escd{v_j,v_i}$ we find $$\escd{v_j,v_i}=\begin{cases}1 & \hbox{if}\ i=j\\ 0 & \hbox{if}\ i\ne j\end{cases}$$ so that $\escd{\cdot,\cdot}_{\mathbb{F}}$ is nondegenerate. \end{example} Now we observe that when $\mathcal F$ is nonpathological and $I$ is finite, then $\mathcal F$ is nondegenerate. \begin{remark}\rm Assume that $\mathcal F$ is nonpathological, that $I$ is finite (without loss of generality, $I=\{1,\dots,n\}$) and that the ultrafilter ${\mathfrak{U}}$ consists of all the subsets of $I$ containing $n$. The ${\mathbb{K}}$-bilinear symmetric map $\escd{\cdot,\cdot}$ is precisely $\esc{\cdot,\cdot}_n$, because $\escd{x,y}=[(\esc{x,y}_i)_1^n]=[(\esc{x,y}_n,\ldots,\esc{x,y}_n)]$.
We can check that in this case $\esc{\cdot,\cdot}_n$ is nondegenerate: assume that for a nonzero $x\in V$ we have $\esc{x,V}_n=0$. Since $x$ is not pathological there is some finite-dimensional subspace $S$ of $V$ such that $S_x\notin{\mathfrak{U}}$. Then $n\notin S_x$, whence $n\in \complement S_x$, and so $\esc{x,S}_n\ne 0$, which is a contradiction. \end{remark} \begin{corollary} \label{playita} If $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i\in I}$ is a nonpathological and simultaneously orthogonalizable family of inner products in the ${\mathbb{K}}$-vector space $V$, then the family $\mathcal F_{\mathbb{F}}\cup\{\escd{\cdot,\cdot}_{\mathbb{F}}\}$ is nondegenerate. \end{corollary} \begin{proof} First observe that $\escd{\cdot,\cdot}_{\mathbb{F}}$ is nondegenerate by Theorem \ref{pajaro}, item \ref{papa}. In order to prove the statement we need to check that any $\esc{\cdot,\cdot}_{i{\mathbb{F}}}$ is partially continuous relative to the $\escd{\cdot,\cdot}_{\mathbb{F}}$-topology. Since $\esc{\cdot,\cdot}_{i{\mathbb{F}}}$ and $\escd{\cdot,\cdot}_{{\mathbb{F}}}$ are simultaneously orthogonalizable (see Remark \ref{aparruz}), applying Proposition \ref{atupoi}, item \ref{atupoi2}, we conclude that $\esc{\cdot,\cdot}_{i{\mathbb{F}}}$ is partially continuous. Summarizing, the family $\mathcal F_{\mathbb{F}}\cup\{\escd{\cdot,\cdot}_{{\mathbb{F}}}\}$ is nondegenerate. \end{proof} \subsubsection{\textit{\textbf{The real-complex case.}}} In this subsubsection we study the case of the previous statement in which the field is $\mathbb{R}$ or $\mathbb{C}$. Their respective extended fields are the hyperreal numbers, denoted by ${^*\R}$, and the hypercomplex numbers, denoted by ${^*\C}$. Recall that ${^*\R}=\mathbb{R}^I/{\mathfrak{U}}$ (and similarly for ${^*\C}$) as in the previous subsection. An element of ${^*\R}$ or of ${^*\C}$ will be called a {\it hypernumber} when we do not need to specify whether it is a hyperreal or a hypercomplex number. The absolute value in $\mathbb{R}$ and the complex modulus in $\mathbb{C}$ will be denoted by $\vert\cdot\vert$ (we distinguish them only by the context). The extension of the absolute value function $\vert\cdot\vert$ to ${^*\R}$ will be denoted, by abuse of notation, $\vert\cdot\vert\colon{^*\R}\to {{^*\R}}$ (and similarly $\vert\cdot\vert\colon{^*\C}\to{{^*\C}}$). This extension is given by $\vert [(x_i)]\vert:=[(\vert x_i\vert)]$ for any $[(x_i)]$. Also, given two hyperreals $x,y\in{{^*\R}}$ with $x=[(x_i)]$ and $y=[(y_i)]$, it is said that $x<y$ if $\{i\in I\colon x_i<y_i\}\in{\mathfrak{U}}$. Recall some basic facts on infinite and infinitesimal hypernumbers. Let ${\mathbb{K}}=\mathbb{R}$ or $\mathbb{C}$; an element $x\in{^*\K}$ is said to be {\em finite} if there is some real $M\in\mathbb{R}$ such that $\vert x\vert<M$. An element $x\in{^*\K}$ is said to be an {\em infinitesimal} if for any real $\epsilon>0$ we have $\vert x\vert<\epsilon$. A known result is that any finite hypernumber $x\in{^*\K}$ can be written in the form $x=\mathop{\hbox{\rm st}}(x)+w$ where $w$ is an infinitesimal and $\mathop{\hbox{\rm st}}(x)\in{\mathbb{K}}$. The element $\mathop{\hbox{\rm st}}(x)$ is unique and is called the {\em standard part} of $x$. Note also that if $x,y$ are finite hypernumbers then $x+y$ is a finite hypernumber and $\mathop{\hbox{\rm st}}(x+y)=\mathop{\hbox{\rm st}}(x)+\mathop{\hbox{\rm st}}(y)$.
Also, if $\lambda\in{\mathbb{K}}$ and $x$ is a finite element of ${^*\K}$, then $\lambda x$ is finite and $\mathop{\hbox{\rm st}}(\lambda x)=\lambda\mathop{\hbox{\rm st}}(x)$. Finally, we recall the notation $x\approx y$ for hypernumbers whose difference is an infinitesimal (consequently $\mathop{\hbox{\rm st}}(x)=\mathop{\hbox{\rm st}}(y)$). \begin{definition}\rm Let $V$ be a ${\mathbb{K}}$-vector space for ${\mathbb{K}}= \mathbb{R}, \mathbb{C}$ and consider a family of inner products $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i \in I}$ of $V$. We recall that $I$ is a directed set by Remark \ref{onimuhc}. We say that $\mathcal F$ is \emph{bounded} if for any $x,y \in V$ there exists $M \in \mathbb{R}$ such that $\{ i \in I \colon \vert \esc{x,y}_i \vert < M \} \in {\mathfrak{U}}$, where ${\mathfrak{U}}$ is an ultrafilter as in Definition \ref{Leo}. \end{definition} In case $I$ is finite, it is trivial that any family $\mathcal F$ indexed by $I$ is bounded, so the definition is meaningful only if $I$ is an infinite set. In general, $\mathcal F$ is bounded if and only if for any $x,y\in V$ the hypernumber $[(\esc{x,y}_i)]$ is finite. Defining $\escd{\cdot,\cdot}\colon V\times V\to{^*\K}$ as in formula \eqref{etnetop}, we can say that $\mathcal F$ is bounded if and only if for any $x,y\in V$ the hypernumber $\escd{x,y}$ is finite. \begin{definition}\rm Given the family $\mathcal F=\{\esc{\cdot,\cdot}_i\}_{i \in I}$, consider an ultrafilter ${\mathfrak{U}}$ as in Definition \ref{Leo}, containing all the intervals $[i,\to)$ with $i\in I$. An element $x \in V$ is said to be \emph{negligible} if for every $y \in V$ and for every real $\epsilon > 0$ we have $\{i \in I \colon \vert \esc{x,y}_i \vert < \epsilon\} \in {\mathfrak{U}}$. We will denote the set of all negligible elements by $\mathop{\text{neg}}_{\mathcal F}(V)$. When $\mathop{\text{neg}}_{\mathcal F}(V)=0$ we will say that $\mathcal F$ is \emph{robust}. \end{definition} An element $x$ is negligible if and only if for any $y\in V$ the hypernumber $\escd{x,y}$ (as in formula \eqref{etnetop} again) is an infinitesimal. Observe that $\mathop{\text{neg}}_{\mathcal F}(V)$ is a subspace of $V$ and that if $x \in \mathop{\hbox{\rm path}}_{\mathcal F}(V)$ then $x$ is negligible. So we can write $V=\mathop{\text{neg}}_{\mathcal F}(V) \oplus W$ for a certain subspace $W \subset V$. Furthermore, $\mathcal F\vert_W$ is robust. \begin{definition}\rm If $\mathcal F$ is a bounded family we can define an inner product $\esct{\cdot,\cdot}\colon V\times V\to{\mathbb{K}}$ such that $\esct{x,y}:=\mathop{\hbox{\rm st}}(\escd{x,y})$ for any $x,y\in V$. \end{definition} As a final result we have the following. \begin{theorem}\label{wwe} Let $V$ be a ${\mathbb{K}}$-vector space for ${\mathbb{K}}= \mathbb{R}, \mathbb{C}$ and consider a bounded family of inner products $\mathcal F=\{\esc{\cdot,\cdot}_{i}\}_{i \in I}$ of $V$. Then: \begin{enumerate}[label=(\roman*)] \item\label{wweuno} We have $\mathop{\hbox{\bf rad}}\esct{\cdot,\cdot}=\mathop{\text{neg}}\nolimits_{\mathcal F}(V)$. \item\label{wwedos} If $\mathcal F$ is robust, the inner product $\esct{\cdot,\cdot}$ is nondegenerate. \item\label{wwetres} $\mathcal F$ is simultaneously orthogonalizable if and only if the enlarged family $\mathcal F\cup\{\esct{\cdot,\cdot}\}$ is simultaneously orthogonalizable.
Furthermore, if $\mathcal F$ is robust and simultaneously orthogonalizable, the enlarged family $\mathcal F\cup\{\esct{\cdot,\cdot}\}$ is nondegenerate. \item\label{wwecuat} The simultaneous orthogonalizations of $V$ relative to $\mathcal F$ and to $\mathcal F\cup\{\esct{\cdot,\cdot}\}$ agree. \end{enumerate} \end{theorem} \begin{proof} To prove the equality in \ref{wweuno}, consider $x\in\mathop{\hbox{\bf rad}}\esct{\cdot,\cdot}$. This happens exactly when $\escd{x,y}\approx 0$ for any $y$, equivalently, when for any real $\epsilon>0$ the set $\{i\in I\colon\vert\esc{x,y}_i\vert<\epsilon\}$ is in ${\mathfrak{U}}$. So $x$ is negligible, and conversely. Now \ref{wwedos} is immediate from the previous item. To prove \ref{wwetres}, take a basis $\{v_j\}$ of $V$ orthogonal relative to $\mathcal F$. Then if $j\ne k$ we have $\esct{v_j,v_k}=\mathop{\hbox{\rm st}}(\escd{v_j,v_k})$. But $\escd{v_j,v_k}=[(\esc{v_j,v_k}_i)]=0$, hence $\esct{v_j,v_k}=0$. Thus the same basis is also orthogonal for $\esct{\cdot,\cdot}$. Assume in addition that $\mathcal F$ is robust. The inner products $\esc{\cdot,\cdot}_i$ are partially continuous relative to the $\esct{\cdot,\cdot}$-topology because of item \ref{wwedos} of this theorem and Proposition \ref{atupoi}, item \ref{atupoi2}. Consequently $\mathcal F\cup\{\esct{\cdot,\cdot}\}$ is nondegenerate. For the assertion in \ref{wwecuat}, take into account that any orthogonal basis $B$ relative to $\mathcal F$ is automatically an orthogonal basis of $\mathcal F\cup\{\esct{\cdot,\cdot}\}$, and conversely. \end{proof}
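For concreteness we close with a small illustration of Theorem \ref{wwe} (an example we add here; it does not appear elsewhere in the text): take $V=\mathbb{R}^2$ with canonical basis $\{e_1,e_2\}$, $I=\mathbb{N}$, and let $\esc{\cdot,\cdot}_i$ have Gram matrix $\mathrm{diag}(1,1/i)$ relative to $\{e_1,e_2\}$. The family is bounded and simultaneously orthogonalizable. The element $e_2$ is not pathological, since $\{i\colon\esc{e_2,e_2}_i=0\}=\emptyset\notin{\mathfrak{U}}$, but it is negligible: for every real $\epsilon>0$ the set $\{i\colon 1/i<\epsilon\}$ contains a tail $[i_0,\to)$ and hence belongs to ${\mathfrak{U}}$. Accordingly $\escd{e_2,e_2}=[(1/i)]$ is a nonzero infinitesimal, the inner product $\esct{\cdot,\cdot}$ has Gram matrix $\mathrm{diag}(1,0)$, and $\mathop{\hbox{\bf rad}}\esct{\cdot,\cdot}=\mathbb{R}e_2=\mathop{\text{neg}}_{\mathcal F}(V)$, as item \ref{wweuno} predicts; restricting to $W=\mathbb{R}e_1$ gives a robust family, in accordance with item \ref{wwedos}.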
\begin{thebibliography}{00}
\bibitem{Becker} Ronald I. Becker, Necessary and sufficient conditions for the simultaneous diagonability of two quadratic forms. \textit{Linear Algebra Appl.} \textbf{30} (1980), pp. 129--139.
\bibitem{CGMM} Yolanda Cabrera Casado, Crist\'obal Gil Canto, Dolores Mart\'in Barquero and C\'andido Mart\'in Gonz\'alez, Simultaneous orthogonalization of inner products over arbitrary fields. https://arxiv.org/pdf/2012.06533.pdf.
\bibitem{Calabi} Eugenio Calabi, Linear systems of real quadratic forms. \textit{Proc. Amer. Math. Soc.} \textbf{15} (1965), pp. 844--846.
\bibitem{BMV} Miguel D. Bustamante, Pauline Mellon and M. Victoria Velasco, Determining when an algebra is an evolution algebra. \textit{Mathematics} \textbf{8}, 1349 (2020).
\bibitem{Finsler} Paul Finsler, \"Uber das Vorkommen definiter und semidefiniter Formen in Scharen quadratischer Formen. \textit{Comment. Math. Helv.} \textbf{9} (1937), pp. 188--192.
\bibitem{F} Jacques Fleuriot, A Combination of Geometry Theorem Proving and Nonstandard Analysis with Application to Newton's Principia. Distinguished Dissertations. \textit{Springer, London} (2001). https://doi.org/10.1007/978-0-85729-329-9
\bibitem{Greub} Werner Greub, Linear Algebra. \textit{Springer-Verlag, Heidelberg}, 4th ed. (1975).
\bibitem{gross} Herbert Gross, Quadratic Forms in Infinite Dimensional Vector Spaces. Progress in Mathematics, vol. 1. Birkh\"auser (1979). DOI: 10.1007/978-1-4757-1454-8
\bibitem{IMR} Miodrag C. Iovanov, Zachary Mesyan and Manuel L. Reyes, Infinite-dimensional diagonalization and semisimplicity. \textit{Israel Journal of Mathematics} \textbf{215} (2016), pp. 801--855. https://doi.org/10.1007/s11856-016-1395-5.
\bibitem{Jacobson} Nathan Jacobson, Structure of Rings. \textit{American Mathematical Society, Colloquium Publications, Volume XXXVII} (1964).
\bibitem{Uhligart} Frank Uhlig, Simultaneous block diagonalisation of two real symmetric matrices. \textit{Linear Algebra and Appl.} \textbf{7} (1973), pp. 281--289.
\bibitem{Wo} Maria J. Wonenburger, Simultaneous diagonalization of symmetric bilinear forms. \textit{Journal of Mathematics and Mechanics} \textbf{15} (4) (1966), pp. 617--622.
\end{thebibliography} \end{document}
\begin{document} \begin{center} {\bf Exact Computation for Existence of a Knot Counterexample}\\ {\bf T. J. Peters, K. Marinelli, University of Connecticut}\\ {\bf Abstract}\\ \end{center} Previously, numerical evidence was presented of a self-intersecting B\'ezier curve having the unknot for its control polygon. This numerical demonstration resolved open questions in scientific visualization, but did not provide a formal proof of self-intersection. An example with a formal existence proof is given, even while the exact self-intersection point remains undetermined. \section{Introduction to Existence Condition for Self-intersection} \label{sec:ex} The formal proof of an unknotted control polygon with a self-intersecting B\'ezier curve (see Definition~\ref{def:c}) appears in Theorem~\ref{thm:hasselfi}, but illustrative images are shown in Figure~\ref{fig:xselfi}. The progression, from left to right, shows `snapshots' of a piecewise linear (PL) curve being continuously perturbed, generating new B\'ezier curves at each instant. The top-most point is the only one perturbed linearly. The initial (left) and final (right) B\'ezier curves are shown to have differing knot types, so there must be an intermediate B\'ezier curve with a self-intersection (in the middle image, a red dot approximates a neighborhood containing that intersection). Relevant definitions follow. \hspace*{-15ex} \begin{figure} \caption{Existence of Self-intersection} \label{fig:unkview} \label{fig:xselfi} \end{figure} \section{Data for the Example} \label{sec:dataex} Let $$ (0, 9, 20), (-15, -95, -50), (40, 80, -20), \mathbf{(-10, -60, 58)}, (-60, 30, 20), (40, -60, -60), (0, 9, 20)$$ be the ordered list of vertices for the initial PL knot, denoted by $K_0$, where the integer values support exact computation. In Figure~\ref{fig:xselfi}, the vertex $(0, 9, 20)$ is the initial and final point of the B\'ezier curve shown in each of the three images (in green). The remaining vertices are arranged counterclockwise. A single Reidemeister move of the vertex $(40, -60, -60)$ shows that $K_0$ is the unknot. The vertex in bold font is perturbed linearly to $\mathbf{(10, -60, 58)}$ to obtain the final PL knot, denoted by $K_1$, with all other vertices remaining fixed. The perturbation generates uncountably many new control polygons and associated B\'ezier curves~\footnote{Author T. J. Peters thanks the referee of the previously cited numerical work \cite{JL2012} for a suggestion, similar in spirit, to generate the example shown here.}. The B\'ezier curves corresponding to $K_0$ and $K_1$ are denoted by $\beta_0$ and $\beta_1$, respectively. Both $K_0$ and $K_1$ were produced from visual experiments. \section{Mathematical Preliminaries} \label{sec:mathpre} Some relevant mathematical definitions are summarized. \begin{definition} \label{def:knot} A subspace of $\mathbb{R}^3$ is called a knot \cite{Armstrong1983} if it is homeomorphic to a circle. \end{definition} Homeomorphism does not distinguish between different embeddings. The stronger equivalence by ambient isotopy distinguishes embeddings and is fundamental in knot theory. \begin{definition} Let $X$ and $Y$ be two subspaces of $\mathbb{R}^3$.
A continuous function \[ H:\mathbb{R}^3 \times [0,1] \to \mathbb{R}^3 \] is an {\bf ambient isotopy} between $X$ and $Y$ if $H$ satisfies the following conditions: \begin{enumerate} \item $H(\cdot, 0)$ is the identity, \item $H(X,1) = Y$, and \item $\forall t \in [0,1], H(\cdot,t)$ is a homeomorphism from $\mathbb{R}^3$ onto $\mathbb{R}^3$. \end{enumerate} \label{def:aiso} \end{definition} \begin{definition}\label{def:c} Let $\mathcal{C}(t)$ denote the parameterized B\'ezier curve of degree $n$ with control points $P_0,\ldots,P_n \in \mathbb{R}^3$, defined by $$ \mathcal{C}(t)=\sum_{i=0}^{n}{B_{i,n}(t)P_i}, \hspace{2ex} t\in[0,1] $$ where $B_{i,n}(t) = \binom{n}{i}t^i(1-t)^{n-i}$. \end{definition} The curve $\mathcal{P}$ formed by PL interpolation on the ordered list of points $P_0,P_1,\ldots,P_n$ is called the {\em control polygon} and is a PL approximation of $\mathcal{C}$. The work presented was partially motivated by development of wireless data gloves. The smooth B\'ezier representations are manipulated by the gloves, while the illustrative computer graphics are created by PL approximations derived from the control polygon. Topological fidelity between these representations is of interest to provide reliable visual feedback to the user. \section{Related Work} \label{sec:relw} This article arose from a question posed by a referee from a previous numerical example~\cite{JL2012}. The mathematical objects of study here are B\'ezier curves. The canonical references~\cite{G.Farin1990} (any edition) and \cite{Piegl} focus on approximation and modeling, with less attention to associated topological properties. Relations between a smooth curve and its PL approximation are dominant in computer graphics~\cite{Fvd90}, but the topological aspects are often ignored. One prominent property is that any B\'ezier curve is contained in the convex hull of its control points~\cite{G.Farin1990}, while recent enhancements have been shown~\cite{PetersWu}, where the containing set is a subset of the convex hull. The \emph{push} receives prominent attention by R. H. Bing~\cite{Bing1983} as a fundamental tool in developing ambient isotopies in $\mathbb{R}^2$. The perturbation in this article is the trivial extension of a push to $\mathbb{R}^3$. The preservation of topological characteristics in geometric modeling and graphics has become of contemporary interest \cite{Amenta2003, L.-E.Andersson2000, Andersson1998, Chazal2005, ENSTPPKS12, bez-iso,KiSi08, JL-isoconvthm}. Sufficient conditions for a homeomorphism between a B\'ezier curve and its control polygon have been studied \cite{M.Neagu_E.Calcoen_B.Lacolle2000}, while topological differences have also been shown \cite{Bisceglio, JL2012, Piegl}. Topological aspects are relevant in `molecular movies' \cite{MMovies}. Sufficient conditions were established for preservation of knot type~\cite{TJP08} during dynamic visualization of ongoing molecular simulations \cite{WeMi06}. Interest by biochemists in using hand gestures to interactively manipulate these complex molecular images prompts related topological considerations for data gloves~\cite{CG,Manus,VM}. The beautiful visual studies of knot symmetry~\cite{Carlo} provided an example of the $7_4$ knot that was helpful in the initial visual studies performed to create the example presented here.
Exact computation has some advantages over floating point arithmetic~\cite{culver2004exact,kettner2008classroom}, particularly regarding loss of precision in finite bit strings, with alternative views expressed~\cite{Jiang2006}. Some \emph{ad hoc} techniques are often invoked to provide adequate bit length for full precision relative to a particular computation, as has been done here. \section{Exact Computation for Subdivision} \label{sec:exsub} The de Casteljau algorithm \cite{G.Farin1990} is a subdivision method for B\'ezier curves. The algorithm recursively generates control polygons that more closely approximate the curve~\cite{G.Farin1990} under Hausdorff distance~\cite{J.Munkres1999}. Figure~\ref{fig:sub-ex} shows the first step of the de Casteljau algorithm with an input value of $\frac{1}{2}$ on a cubic B\'ezier curve. The initial control polygon $P$ is used as input to generate local PL approximations, $P^1$ and $P^2$, as Figure~\ref{fig:dec3} shows. The result is to subdivide the original curve into two subsets, with the new control polygons, $P^1$ and $P^2$, respectively. \begin{figure} \caption{A subdivision with parameter $\frac{1}{2}$} \label{fig:dec1} \label{fig:dec3} \label{fig:sub-ex} \end{figure} Figure~\ref{fig:dec1} has the initial control polygon $\mathcal{P}$ as the three outer edges. The next step of this subdivision produces 4 edges by selecting the midpoint of each edge of $P$ and connecting these midpoints, producing 4 edges closer to the B\'ezier curve in Figure~\ref{fig:dec1}. Recursion continues until a tangent to the B\'ezier curve \cite{G.Farin1990} is created. The union of the edges from the final step then forms two new PL curves, as shown in Figure~\ref{fig:dec1}. Termination is guaranteed over finitely many edges. For $i$ iterations, the subdivision process has generated $2^i$ PL sub-curves, each being a control polygon for part of the original curve \cite{G.Farin1990}, which is a \textbf{sub-control polygon}\footnote{Note that by the subdivision process, each sub-control polygon of a simple B\'ezier curve is open.}, denoted by $\mathcal{P}^k$ for $k = 1, 2, 3, \ldots, 2^i$. For a curve initially defined by $n + 1$ control points, each $\mathcal{P}^k$ has $n+1$ points and their union $\bigcup_k \mathcal{P}^k$ forms a new PL curve that converges in Hausdorff distance to the original B\'ezier curve. The B\'ezier curve defined by $\bigcup_k \mathcal{P}^k$ is exactly the same as the original B\'ezier curve~\cite{Lane_Riesenfeld1980}. \begin{remark} A B\'ezier curve is contained within the convex hull of its control points~\cite{G.Farin1990}. \label{rem:convhull} \end{remark} Remark~\ref{rem:convhull} also applies to each sub-control polygon, as will be extensively used. The stick knots of Section~\ref{sec:dataex} are refined via the de Casteljau subdivision algorithm. Each refinement stage in the de Casteljau algorithm uses a series of averages for the vertices of the form $\frac{a+b}{2}$. In order to retain integer-valued vertices throughout all computed subdivisions, the vertices will be scaled by $2^m$, for an appropriately chosen $m$. The existence of self-intersections in the B\'ezier curves is preserved under scaling. Each subdivision iteration has one division by 2 for each degree $d$. In our examples, we invoke $\ell$ levels of subdivision, so that $m = \ell \cdot (d + 1) + 1$.
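To make the preceding description concrete, the following is a minimal sketch (in Python, using exact rational arithmetic via the standard {\tt fractions} module; it is not the Mathematica implementation used for the tables in Appendix~\ref{app:subdata}) of one de Casteljau split at the parameter $\frac{1}{2}$, applied to the control points of $K_0$ from Section~\ref{sec:dataex}. With exact rationals the $2^m$ scaling described above is unnecessary, although the same integer-scaling trick could be applied verbatim.
\begin{verbatim}
from fractions import Fraction

K0 = [(0, 9, 20), (-15, -95, -50), (40, 80, -20), (-10, -60, 58),
      (-60, 30, 20), (40, -60, -60), (0, 9, 20)]   # control points of K_0

def decasteljau_split(points, t=Fraction(1, 2)):
    """Split one Bezier control polygon into its two sub-control polygons."""
    left, right = [points[0]], [points[-1]]
    while len(points) > 1:
        points = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                  for p, q in zip(points, points[1:])]
        left.append(points[0])       # first point of each averaging stage
        right.append(points[-1])     # last point of each averaging stage
    return left, right[::-1]

P1, P2 = decasteljau_split([tuple(map(Fraction, p)) for p in K0])
print(P1)   # exact sub-control polygon for the first half of the curve
print(P2)   # exact sub-control polygon for the second half
\end{verbatim}
Iterating the split on each resulting sub-control polygon $\ell$ times produces the $2^{\ell}$ sub-control polygons referred to above.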
\section{Ambient Isotopy Between Stick Knots} \label{sec:aisticks} The two curves $K_0$ and $K_1$ will be shown to be the unknot. \begin{lemma} The curve $K_0$ is the unknot. \end{lemma} \begin{proof} The necessary demonstration that $K_0$ is simple is elementary over the six edges. That no consecutive edges are collinear is established by comparing slopes of projections into the $XY$-plane. The non-consecutive pairs of edges have separating hyperplanes. \hspace{5ex} $\Box$ \end{proof} \begin{lemma} The curves $K_0$ and $K_1$ are ambient isotopic. \end{lemma} \begin{proof} The single perturbation to create $K_1$ is similar to a push~\cite{Bing1983} and incurs no self-intersections with other segments. It is easily extended to an ambient isotopy having compact support. \hspace{5ex} $\Box$ \end{proof} \begin{cor} The curve $K_1$ is the unknot. \end{cor} \section{Subdivision Analysis} \label{sec:subdivan} The points from the 4th iteration of the de Casteljau algorithm, using a subdivision value of $1/2$, are listed in Appendix~\ref{app:subdata}. These can be verified simply by executing the algorithm on the initial data for $K_0$ and $K_1$, with the provision that the implementation relies upon exact arithmetic. It was observed that each sub-control polygon in Appendix~\ref{app:subdata} had the property that its control points were strictly monotone in one of its coordinate values. This is annotated in the appendix by abbreviations noting the co-ordinates in which strict monotonicity was observed. In some cases, this occurs for all three co-ordinates. This observation greatly facilitated the proof techniques used. Let $K_{0,4}$ denote the PL curve formed from the 4th subdivision on $K_0$ and similarly denote $K_{1,4}$ for $K_1$. Within each of $K_{0,4}$ and $K_{1,4}$, there are 16 sub-control polygons. The rest of the paper proceeds by showing that $K_{0,4}$ is the unknot and is ambient isotopic to $\beta_0$, and that $K_{1,4}$ is the trefoil and is ambient isotopic to $\beta_1$. As the convex hulls of the sub-control polygons become central, their images are shown in Figure~\ref{fig:chulls}. \begin{figure} \caption{Convex Hulls -- Fourth subdivision (parameter $\frac{1}{2}$)} \label{fig:sub0-4} \label{fig:sub1-4} \label{fig:chulls} \end{figure} \pagebreak \begin{lemma} \label{lem:both4-simp} Each of $K_{0,4}$ and $K_{1,4}$ is simple. \end{lemma} \begin{proof} First consider $K_{0,4}$. Let $K_{0,4,i}$, for $i = 0, 1, \ldots, 15$, denote the 16 sub-control polygons for $K_{0,4}$. Each $K_{0,4,i}$, for $i = 0, 1, \ldots, 15$, is simple, because of the strict monotonicity in at least one co-ordinate. The $K_{0,4,i}$, for $i = 0, 1, \ldots, 15$, are pairwise disjoint, as shown by computing the convex hull for each and establishing that these convex hulls are disjoint. Again, any ambiguity that could arise from floating point arithmetic is avoided by exact computations. The argument for $K_{1,4}$ follows the same pattern. \hspace{5ex} $\Box$ \end{proof} The derivative of a B\'ezier curve $\mathcal{C}$ of degree $n$ is expressed as $$ \mathcal{C}^{'}(t)=n\sum_{i=0}^{n-1} \binom{n-1}{i} t^i (1-t)^{n-1-i}\, (P_{i+1}-P_i).$$ \begin{lemma} \label{lem:both4-mono} Each B\'ezier segment of $\beta_0$ and $\beta_1$ is strictly monotonic in the same co-ordinate as its corresponding sub-control polygon, and $\beta_0$ and $\beta_1$ are simple. \end{lemma} \begin{proof} The strict monotonicity in some co-ordinate of each sub-control polygon implies that the corresponding component of the derivative keeps a constant sign on $(0,1)$, implying simplicity for that segment of the B\'ezier curve. Since each B\'ezier segment is contained in the convex hull of its sub-control polygon and since those convex hulls are pairwise disjoint, the simplicity follows. \hspace{5ex} $\Box$ \end{proof} \begin{cor} \label{cor:monobez} If a control polygon is strictly monotonic in some co-ordinate, then its associated B\'ezier curve is also. \end{cor}
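The monotonicity bookkeeping used in the preceding lemmas is easy to automate. The sketch below (Python; the data is the sub-control polygon $K[1,3,0]$ from Table~III of Appendix~\ref{app:subdata}, and any of the tabulated polygons can be checked the same way) reports the coordinates in which a list of control points is strictly monotone.
\begin{verbatim}
def strictly_monotone_axes(points):
    axes = []
    for k, name in enumerate("XYZ"):
        deltas = [q[k] - p[k] for p, q in zip(points, points[1:])]
        if all(d > 0 for d in deltas) or all(d < 0 for d in deltas):
            axes.append(name)
    return axes

K_1_3_0 = [(0, 4831838208, 10737418240),
           (-1006632960, -2147483648, 6039797760),
           (-1426063360, -6786383872, 2181038080),
           (-1420820480, -9707716608, -893386752),
           (-1127219200, -11385044992, -3252682752),
           (-655933440, -12176949248, -4975001600),
           (-93900800, -12353556480, -6142648320)]

print(strictly_monotone_axes(K_1_3_0))   # ['Y', 'Z'], as annotated in Table III
\end{verbatim}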
Subdivision need not preserve knot type, so we establish the knot types for $K_{0,4}$ and $K_{1,4}$. \begin{lemma} The curve $K_{0,4}$ is the unknot. \end{lemma} \begin{proof} The projection onto the $XY$-plane of $K_{0,4}$ has three pairs of crossings, as, again, can be verified by exact computation, with some co-ordinate values being represented exactly as rational numbers. For the $z$ co-ordinates, a bounding integer value is given to these exact rational representations, for ease of comparison in determining the crossings relative to the $XY$-plane: \vskip 0.1in overcrossing: $( (70600874219532518400/90998355737), (-331936500725210686464/90998355737) )$, \newline with $z$ values $(346821834258400839680/90998355737) > -179855360$ \vskip 0.1in overcrossing: $( (-2606810091676070400/2228701007), (-141298996572704411136/15600907049) )$, \newline with $z$ values $ -2734161920 < (-695629074309606400/821100371) < -847191279$ \vskip 0.1in undercrossing: $( (70736463317966883840/46441845451), (-2183313901438239630336/232209227255) )$, \newline with $z$ values $ (-312474348049921062912/46441845451) < -6728293094 < -5234410880$ \vskip 0.1in The two consecutive overcrossings permit conclusion of the unknot. \hspace{5ex} $\Box$ \end{proof} \begin{lemma} The curve $K_{1,4}$ is the trefoil. \end{lemma} \begin{proof} The projection onto the $YZ$-plane of $K_{1,4}$ has three pairs of crossings, from exact computation: \vskip 0.1in undercrossing: $( (-2321511588927897600/2778711187), (-12860064917297823744/2778711187) )$, \newline with $z$ values $ (8352902508791726080/2778711187) < 3006034794 < 4026531840$ \vskip 0.1in overcrossing: $( (-8217249783839948800/7079707793), (-58835463893254373376/7079707793) )$, \newline with $z$ values $ (-24693447116718080/7079707793) > -34879189 > \\ \hspace*{25ex} -1547779857 > (-10957829115791278080/7079707793)$ \vskip 0.1in undercrossing: $( (2736710937161450496/968030497), (-222045650166834551808/24200762425) )$, \newline with $z$ values $ (-6443291820066594816/968030497) < -6656083501 < -5348303360$. These three pairs of alternating crossings permit conclusion of the trefoil. \hspace{5ex} $\Box$ \end{proof} \section{Ambient Isotopy for B\'ezier Curves} \label{sec:aibez} We show that $\beta_0$ is ambient isotopic to $K_{0,4}$ and that $\beta_1$ is ambient isotopic to $K_{1,4}$. Both proofs proceed by showing that each sub-control polygon is ambient isotopic to its associated B\'ezier curve. We show this for one sub-control polygon, as representative. Let $K_{0,4,i}$ be one such control polygon and let $\beta_{0,i}$ be its corresponding B\'ezier curve segment. \begin{theorem} Each $K_{0,4,i}$ is ambient isotopic to its corresponding B\'ezier curve, denoted as $\beta_{0,i}$. \end{theorem} \begin{proof} Without loss of generality assume $K_{0,4,i}$ is strictly monotonic in its $x$ co-ordinate, which implies that $\beta_{0,i}$ is also strictly monotonic in its $x$ co-ordinate by Corollary~\ref{cor:monobez}. For each $p \in K_{0,4,i}$, let $\Pi_p$ be the plane at $p$ that is parallel to the $YZ$-plane. By the indicated strict monotonicity, $\Pi_p \cap K_{0,4,i} = \{p\}$, a unique point.
Similarly, by the same strict monotonicity on $\beta_{0,i}$, the intersection $\Pi_p \cap \beta_{0,i}$ has at most one point, and the connectivity of $\beta_{0,i}$ (as the continuous image of a connected subset of $[0,1]$) implies that this intersection is non-empty. For each $p \in K_{0,4,i}$, let $q_p = \Pi_p \cap \beta_{0,i}$. To construct the ambient isotopy, consider the line segment between each $p$ and $q_p$. These line segments will not intersect, due to the strict monotonicity. The remaining details to construct an ambient isotopy of compact support are standard and left to the reader. \hspace{5ex} $\Box$ \end{proof} \pagebreak \begin{cor} \label{cor:aibez0} The B\'ezier curve $\beta_0$ is ambient isotopic to its control polygon $K_{0,4}$, the unknot. \end{cor} \begin{proof} The existence of an ambient isotopy within each convex hull of $K_{0,4,i}$, the lack of intersection between convex hulls $K_{0,4,i}$ and $K_{0,4,j}$ for $i \neq j$ (except at the end points when $j = i + 1$, where those end points remain fixed) and composition of the ambient isotopies on each sub-control polygon conclude this proof. \hspace{5ex} $\Box$ \end{proof} \begin{cor} \label{cor:aibothbez3} The B\'ezier curve $\beta_1$ is ambient isotopic to its control polygon $K_{1,4}$, the trefoil. \end{cor} \begin{theorem} \label{thm:hasselfi} The ambient isotopy between $K_0$ and $K_1$ generates a B\'ezier curve with a self-intersection. \end{theorem} \begin{proof} The perturbation of the control point $(-10, -60, 58)$ to $(10, -60, 58)$ creates a homotopy from the B\'ezier curve $\beta_0$ to $\beta_1$, defined over the interval $[0,1]$. If for all values of $t \in [0,1]$ the perturbed B\'ezier curve were simple, then the homotopy would be an isotopy between $\beta_0$ and $\beta_1$. However, since $\beta_0$ and $\beta_1$ have differing knot types, this cannot be. So, for some value $\tilde{t} \in [0,1]$, the corresponding B\'ezier curve must have a self-intersection. \hspace{5ex} $\Box$ \end{proof} \section{More Aggressive Enclosures} \label{sec:agg} In the spirit of optimal enclosures for splines~\cite{PetersWu}, a modification is shown to the already presented proof of isotopy for the B\'ezier trefoil at the 4th subdivision. The benefit is an explicit construction of an isotopy of the trefoil at the 3rd subdivision. The modification presented is, admittedly, \emph{ad hoc} for the given data, but suggests the possibility of using the broader generalizations~\cite{PetersWu}. In the construction for the 4th subdivision, it was shown that the convex hulls generated were pairwise disjoint, except, possibly, at a single control point. For the 3rd subdivision, this property failed only for one pair of convex hulls, which did not have any shared control points. It was possible to modify one of these convex hulls to a more aggressive enclosure to be disjoint from the other convex hull. An outline of the modification follows. The convex hull chosen for modification is determined by the control polygon $K_{1,3,7}$, as listed in the Appendix. Let $H_{1,3,7}$ denote the convex hull of $K_{1,3,7}$ and let $c$ denote the B\'ezier segment within $K_{1,3,7}$, re-parameterized as $c: [0,1] \rightarrow \mathbb{R}^3$, where $$c(1/2) = (4399876800, -4859733312, -855503360).$$ Let $p_L$ denote the initial point of $K_{1,3,7}$ and let $p_R$ denote the final point of $K_{1,3,7}$. Two half-spaces will be constructed, with their definitions being interdependent. First, consider the line $\ell_L$ defined by $c(1/2)$ and $p_L$.
A closed half-space ${\mathcal{H}}_L$ will be chosen to contain $\ell_L$. There are uncountably many normals to $\ell_L$, but the vector $(1, 68748075/151430805, 0)$ is selected to define the separating plane for ${\mathcal{H}}_L$, with this plane denoted by $\Pi_L$. Let ${\mathcal{H}}_L$ denote the closed half-space of all points with non-negative evaluation according to the planar equation defining $\Pi_L$. Similarly, let ${\mathcal{H}}_R$ be determined by the line containing $c(1/2)$ and $p_R$, with a normal chosen as $(10, -57032750/60642987, 0)$ and an associated separating plane denoted by $\Pi_R$. The two normals were chosen by numerical experiments to define the set $\mathcal{E}$ by \begin{center} $\mathcal{E} = H_{1,3,7} \cap ({\mathcal{H}}_L \cup {\mathcal{H}}_R).$ \end{center} \pagebreak \begin{claim} The set $K_{1,3,7}$ is contained in $\mathcal{E}$. \label{claim:cptsin} \end{claim} \begin{claim} The set $\mathcal{E}$ is disjoint from all the convex hulls for $K_{1,3,i}, i = 0, \ldots, 6$, except at the end points $p_L$ and $p_R$. \label{claim:pwdis} \end{claim} The geometric calculations necessary to verify the two claims are elementary. The well-known \emph{variation diminishing property}~\cite{Lane_Riesenfeld1980} is fundamental for B\'ezier curves, where the monograph~\cite{G.Farin1990} serves as a convenient reference. \begin{theorem} Variation Diminishing Property for B\'ezier Curves: A B\'ezier curve $\mathbf{B}$ has no more intersections with any plane than occur between that plane and the control polygon of $\mathbf{B}$. \label{thm:vdpbez} \end{theorem} \begin{lemma} The curve $c$ is contained within $\mathcal{E}$. \label{lem:cin} \end{lemma} \begin{proof} The set $\Pi_L \cap K_{1,3,7}$ has cardinality two, as can be shown directly. Similarly, the set $\Pi_R \cap K_{1,3,7}$ has cardinality two. Invoking Theorem~\ref{thm:vdpbez} yields \begin{center} $\Pi_L \cap c = \{p_L, c(1/2)\}$ and $\Pi_R \cap c = \{c(1/2), p_R\}$. \end{center} It can easily be shown that $c(1/4)$ and $c(3/4)$ both lie within $\mathcal{E}$ and are not equal to each other or to any of the points $p_L, c(1/2)$, or $p_R$. Connectivity of $c$ suffices to conclude the proof. \hspace{5ex} $\Box$ \end{proof} \begin{cor} The control polygon for the 3rd subdivision is ambient isotopic to the trefoil. \label{cor:n3tref} \end{cor} \begin{proof} For the 3rd subdivision, the coordinate monotonicity property prevails. The containment of $c$ within $\mathcal{E}$ and the disjointness properties shown relative to $\mathcal{E}$ and the other convex hulls are sufficient to lead to minor modifications of the proof of Corollary~\ref{cor:aibez0} to construct a similar isotopy for this 3rd subdivision. \hspace{5ex} $\Box$ \end{proof} \begin{remark} Corollary~\ref{cor:n3tref} permits the explicit construction of an isotopy. However, once it is known that $\beta_1$ is the trefoil, it can be shown by direct calculations that each of the control polygons $K_{1,1}$ and $K_{1,2}$ is also the trefoil, implying the \emph{existence} of ambient isotopies with $\beta_1$, but that method does not specifically construct the isotopies. \label{re:n1} \end{remark} \section{Conclusion and Future Work} \label{sec:concandfu} An example is shown of a closed B\'ezier curve and its control polygon which are both the unknot. A perturbation of one vertex is shown to cause the perturbed B\'ezier curve to be the trefoil, while its control polygon remains the unknot.
This transition demonstrates that there must have been an intermediate state where a self-intersection occurred in the transforming B\'ezier curves. These topological differences between the mathematical representation and its PL approximation are of interest in computer graphics and animation. Visual evidence previously existed, but the only supportive mathematics relied on floating point arithmetic, which left open the question of whether an intermediate intersection could be rigorously proven. The formal proof presented relies on the implementation of exact, integer computations to resolve that open question. Comparisons between the exact techniques and certifiable numeric methods merit further consideration, as it may often be of interest to specify a neighborhood in which a self-intersection is known to exist. The present example was quite carefully constructed to show the self-intersection, and robust numerical methods are likely to have broader scope for applications. \small \pagebreak \appendix \section{Subdivision Data} \label{app:subdata} The first two tables list the vertices for both the original and perturbed control polygons, each after 4 subdivisions. The strict monotonicity of specific coordinates is indicated for each subdivided control polygon. In Table I for the subdivision of $K_0$ the sixteen control polygons are denoted by P[0], $\dots$, P[15] and all consecutive control polygons are separated by half-planes. Between P[0] and P[1] the separating plane is the $XZ$ plane at the $y$-value of the last point of P[0]. Between P[1] and P[2] the separating plane is the $XZ$ plane at the $y$-value of the last point of P[1]. For all others, it is a $YZ$ plane at the $x$ value of the shared subdivision point. In Table II for the subdivision of $K_1$ the sixteen control polygons are denoted by Q[0], $\dots$, Q[15] and all consecutive control polygons are separated by half-planes. Between Q[9] and Q[10] the separating plane is the $XZ$ plane at the $y$-value of the last point of Q[9]. Between Q[14] and Q[15] the separating plane is the $XZ$ plane at the $y$-value of the last point of Q[14]. For all others, it is a $YZ$ plane at the $x$ value of the shared subdivision point. A notational change for points occurs in Appendix~\ref{app:subdata}. In the preceding narrative, points are denoted by parentheses in the format $(x,y,z)$. Within Appendix~\ref{app:subdata}, points are bounded by set brackets in the format $\{x,y,z\}$, as this is the required syntax within Mathematica, where many of the computations were performed. For integrity of the data, it was judged best to report the points using this alternative formatting. The code used and its output will be posted for public access, with a permanent site being determined.
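The separating-plane verifications mentioned above (and the half-space claims of Section~\ref{sec:agg}) reduce to evaluating affine functions with rational coefficients at finitely many points, which can be done exactly. The following sketch (Python, exact arithmetic via the standard {\tt fractions} module; the plane and the point sets are illustrative placeholders rather than data from the tables) shows the shape of such a check.
\begin{verbatim}
from fractions import Fraction

def side(normal, offset, point):
    # signed value of the affine form n.x - offset at the point, exactly
    return sum(Fraction(a) * Fraction(b)
               for a, b in zip(normal, point)) - Fraction(offset)

def separates(normal, offset, pts_nonneg, pts_nonpos):
    return (all(side(normal, offset, p) >= 0 for p in pts_nonneg) and
            all(side(normal, offset, p) <= 0 for p in pts_nonpos))

# Example: the YZ-parallel plane x = 1 weakly separates these two point sets.
A = [(2, 5, -1), (3, -4, 0), (1, 7, 7)]        # all with x >= 1
B = [(0, 9, 20), (-15, -95, -50), (1, 0, 0)]   # all with x <= 1
print(separates((1, 0, 0), 1, A, B))           # True
\end{verbatim}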
\pagebreak \begin{center} {\bf Table I: Subdivision Points from 4 Iterations on $K_0$} \end{center} \vspace*{4ex} \begin{multicols}{2} \input{h4Un-TJP2a.txt} \end{multicols} \vspace*{3ex} \begin{center} \vspace*{-0.09in} {\bf Table II: Subdivision Points from 4 Iterations on $K_1$} \vspace*{-0.09in} \end{center} \vspace*{1ex} \begin{multicols}{2} \input{HW4-2.txt} \end{multicols} \pagebreak \begin{center} \vspace*{-0.09in} {\bf Table III: Subdivision Points from 3 Iterations on $K_1$} \vspace*{-0.09in} \end{center} \begin{multicols}{2} \small{ $K[1,3,0]$: Y, Z \begin{verbatim} { 0, 4831838208,10737418240 }, { -1006632960, -2147483648,6039797760 }, { -1426063360, -6786383872,2181038080 }, { -1420820480, -9707716608,-893386752 }, { -1127219200, -11385044992,-3252682752 }, { -655933440, -12176949248,-4975001600 }, { -93900800, -12353556480,-6142648320 } \end{verbatim} $K[1,3,1]$: X \begin{verbatim} { -93900800, -12353556480,-6142648320 }, { 468131840, -12530163712,-7310295040 }, { 1120911360, -12091473920,-7923269632 }, { 1777500160, -11307614208,-8063877120 }, { 2374696960, -10360258560,-7818575872 }, { 2871132160, -9369157632,-7273185280 }, { 3244032000, -8410890240,-6509035520 } \end{verbatim} $K[1,3,2]$: Y, Z \begin{verbatim} { 3244032000, -8410890240,-6509035520 }, { 3616931840, -7452622848,-5744885760 }, { 3866296320, -6527188992,-4761976832 }, { 3969351680, -5711167488,-3641638912 }, { 3921920000, -5037965312,-2460712960 }, { 3735183360, -4516569088,-1287700480 }, { 3431116800, -4142518272,-179855360 } \end{verbatim} $K[1,3,3]$: X, Z \begin{verbatim} { 3431116800, -4142518272,-179855360 }, { 3127050240, -3768467456,927989760 }, { 2705653760, -3541762048,1970667520 }, { 2188902400, -3457941504,2890924032 }, { 1609564160, -3499098112,3642753024 }, { 1006632960, -3644850176,4194304000 }, { 419430400, -3875536896,4529848320 } \end{verbatim} $K[1,3,4]$: X, Y \begin{verbatim} { 419430400, -3875536896,4529848320 }, { -167772160, -4106223616,4865392640 }, { -739246080, -4421844992,4984930304 }, { -1255669760, -4802740224,4872732672 }, { -1677393920, -5229969408,4529192960 }, { -1970339840, -5688508416,3972792320 }, { -2113228800, -6162665472,3241123840 } \end{verbatim} $K[1,3,5]$: Y, Z \begin{verbatim} { -2113228800, -6162665472,3241123840 }, { -2256117760, -6636822528,2509455360 }, { -2248949760, -7126597632,1602519040 }, { -2070446080, -7616299008,557907968 }, { -1712128000, -8089567232,-567672832 }, { -1185546240, -8524791808,-1697382400 }, { -530841600, -8882749440,-2734161920 } \end{verbatim} $K[1,3,6]$: X \begin{verbatim} { -530841600, -8882749440,-2734161920 }, { 123863040, -9240707072,-3770941440 }, { 906690560, -9521397760,-4714790912 }, { 1777500160, -9685598208,-5468651520 }, { 2667560960, -9676472320,-5915246592 }, { 3470991360, -9407209472,-5916999680 }, { 4034867200, -8740884480,-5316894720 } \end{verbatim} $K[1,3,7]$ \begin{verbatim} { 4034867200, -8740884480,-5316894720 }, { 4598743040, -8074559488,-4716789760 }, { 4923064320, -7011172352,-3514826752 }, { 4854906880, -5413797888,-1553989632 }, { 4194304000, -3095396352,1342177280 }, { 2684354560, 201326592,5368709120 }, { 0, 4831838208,10737418240 } \end{verbatim} } \end{multicols} \begin{center} \vspace*{-0.09in} {\bf Table IV: Subdivision Points from First Two Iterations on $K_1$} \vspace*{-0.09in} \end{center} \begin{multicols}{2} \small{ \begin{verbatim} (*Subdivision 1 *) { 0, 4831838208,10737418240 }, { -4026531840, -23085449216,-8053063680 }, { 1342177280, -13555990528,-13421772800 }, { 5704253440, 
-6442450944,-8858370048 }, { 5368709120, -3388997632,-1610612736 }, { 2768240640, -2952790016,3187671040 }, { 419430400, -3875536896,4529848320 } { 419430400, -3875536896,4529848320 }, { -1929379840, -4798283776,5872025600 }, { -4026531840, -7079985152,3758096384 }, { -3355443200, -9462349824,-2818572288 }, { 2684354560, -10871635968,-10737418240 }, { 10737418240, -13690208256,-10737418240 }, { 0, 4831838208,10737418240 } \end{verbatim} } \end{multicols} \pagebreak \begin{multicols}{2} \small{ \begin{verbatim} (*Subdivision 2 *) { 0, 4831838208,10737418240 }, { -2013265920, -9126805504,1342177280 }, { -1677721600, -13723762688,-4697620480 }, { -293601280, -13941866496,-7818182656 }, { 1258291200, -12375293952,-8690597888 }, { 2498232320, -10327425024,-8037335040 }, { 3244032000, -8410890240,-6509035520 } { 3244032000, -8410890240,-6509035520 }, { 3989831680, -6494355456,-4980736000 }, { 4241489920, -4709154816,-2577399808 }, { 3816816640, -3667918848,50331648 }, { 2831155200, -3292528640,2323644416 }, { 1593835520, -3414163456,3858759680 }, { 419430400, -3875536896,4529848320 } { 419430400, -3875536896,4529848320 }, { -754974720, -4336910336,5200936960 }, { -1866465280, -5138022400,5007998976 }, { -2600468480, -6121586688,3825205248 }, { -2637168640, -7141851136,1784676352 }, { -1840250880, -8166834176,-660602880 }, { -530841600, -8882749440,-2734161920 } { -530841600, -8882749440,-2734161920 }, { 778567680, -9598664704,-4807720960 }, { 2600468480, -10005512192,-6509559808 }, { 4613734400, -9789505536,-7063207936 }, { 6039797760, -8355053568,-5368709120 }, { 5368709120, -4429185024,0 }, { 0, 4831838208,10737418240 } \end{verbatim} } \end{multicols} \end{document}
\begin{document} \title {On the Littlewood--Offord problem} \author{Yulia S. Eliseeva$^{1,2}$, Andrei Yu. Zaitsev$^{1,3}$} \email{[email protected]} \address{St.~Petersburg State University\newline\indent and Laboratory of Chebyshev in St. Petersburg State University } \email{[email protected]} \address{St.~Petersburg Department of Steklov Mathematical Institute \newline\indent Fontanka 27, St.~Petersburg 191023, Russia\newline\indent and St.~Petersburg State University} \begin{abstract}{The paper deals with studying a connection of the Littlewood--Offord problem with estimating the concentration functions of some symmetric infinitely divisible distributions. Some multivariate generalizations of results of Arak (1980) are given. They show a connection of the concentration function of the sum with the arithmetic structure of supports of distributions of independent random vectors for arbitrary distributions of summands.}\end{abstract} \keywords {concentration functions, inequalities, the Littlewood--Offord problem, sums of independent random variables} \maketitle \footnotetext[1]{The first and the second authors are supported by grants RFBR 13-01-00256 and NSh-2504.2014.1.} \footnotetext[2]{The first author was supported by Laboratory of Chebyshev in St. Petersburg State University (grant of the Government of Russian Federation 11.G34.31.0026) and by grant of St. Petersburg State University 6.38.672.2013.}\footnotetext[3]{The second author was supported by the Program of Fundamental Researches of Russian Academy of Sciences ``Modern Problems of Fundamental Mathematics''.} Let $X,X_1,\ldots,X_n$ be independent identically distributed (i.i.d.) random variables. Let $a=(a_1,\ldots,a_n)$, where $a_k=(a_{k1},\ldots,a_{kd})\in \mathbf{R}^d$, $k=1,\ldots, n$. The concentration function of a $d$-dimensional random vector $Y$ with distribution $F=\mathcal L(Y)$ is defined by the equality \begin{equation} Q(F,\lambda)=\sup_{x\in\mathbf{R}^d}\mathbf{P}(Y\in x+ \lambda B), \quad \lambda\geq0, \nonumber \end{equation} where $B=\{x\in\mathbf{R}^d:\|x\|\leq 1/2\}$. In this paper we study the behavior of the concentration functions of the weighted sums $S_a=\sum\limits_{k=1}^{n} X_k a_k$ with respect to the properties of vectors~$a_k$. Recently, interest in this subject has increased considerably in connection with the study of eigenvalues of random matrices (see, for instance, Friedland and Sodin \cite{Fried:Sod:2007}, Nguyen and Vu~\cite{Nguyen:Vu:2011}, Rudelson and Vershynin \cite{Rud:Ver:2008}, \cite{Rud:Ver:2009}, Tao and Vu \cite{Tao:Vu:2009:Ann}, \cite{Tao:Vu:2009:Bull}, Vershynin \cite{Ver:2011}). For a detailed history of the problem we refer to a recent review of Nguyen and Vu \cite{Nguyen and Vu13}. The authors of the above articles (see also Hal\'asz \cite{Hal:1977}) called this question the Littlewood--Offord problem, since this problem was first considered in 1943 by Littlewood and Offord \cite{Lit:Off:1943} in connection with the study of random polynomials. They considered a special case, where the coefficients $a_k \in \mathbf{R}$ are one-dimensional and $X$ takes values $\pm1$ with probabilities~$1/2$. Let us introduce some notation. In the sequel, let $F_a$ denote the distribution of the sum $S_a$, let $E_y$ be the probability measure concentrated at a point $y$, and let $G$ be the distribution of the random variable $\widetilde{X}$, where $\widetilde{X}=X_1-X_2$ is the symmetrized random variable. 
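As a simple numerical illustration of these definitions (it plays no role in the proofs), the concentration function $Q(F_a,\lambda)$ can be estimated by simulation in the classical one-dimensional case $X=\pm1$ with probabilities $1/2$; the weights used below are placeholders chosen for the example.
\begin{verbatim}
# Toy Monte Carlo estimate of the concentration function Q(F_a, lam)
# for d = 1, X = +/-1 with probability 1/2, S_a = sum_k X_k a_k.
import random

def estimate_Q(a, lam, samples=200_000, seed=0):
    """Crude estimate of sup_x P(S_a in [x - lam/2, x + lam/2])."""
    rng = random.Random(seed)
    sums = sorted(sum(rng.choice((-1, 1)) * ak for ak in a)
                  for _ in range(samples))
    best, j = 0, 0
    for i, s in enumerate(sums):      # slide a window of length lam
        while sums[j] < s - lam:
            j += 1
        best = max(best, i - j + 1)
    return best / samples

if __name__ == "__main__":
    n = 30
    print(estimate_Q([1.0] * n, 0.5))                        # equal weights
    print(estimate_Q([float(k) for k in range(1, n + 1)], 0.5))
\end{verbatim}
For equal weights the estimate reproduces the classical order $n^{-1/2}$, while distinct weights give a much smaller value; it is this dependence on the arithmetic structure of the $a_k$ that is quantified below.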
The symbol $c$ will be used for absolute positive constants which may be different even in the same formulas. Writing $A\ll B$ means that $|A|\leq c B$. Also we will write $A\asymp B$, if $A\ll B$ and $B\ll A$. We will write $A\ll _{d} B$, if $|A|\leq c(d) B$, where $c(d)>0$ depends on $d$ only. Similarly, $A\asymp_{d} B$, if $A\ll_{d} B$ and $B\ll_{d} A$. The scalar product in $\mathbf{R}^d$ will be denoted $\left\langle \, \cdot\,,\,\cdot\,\right\rangle$. In what follows, $\lfloor x\rfloor$ denotes the largest integer~$k$ such that~$k< x$. For~${x=(x_1,\dots,x_n )\in\mathbf R^n}$ we will use the norms $\|x\|^2= x_1^2+\dots +x_n^2$ and $|x|= \max_j|x_j|$. We denote by $\widehat F(t)$, $t\in\mathbf R^d$, the characteristic function of a $d$-dimensional distribution~$F$. Products and powers of measures will be understood in the convolution sense. If a distribution~$F$ is infinitely divisible, then $F^\lambda$, $\lambda\ge0$, denotes the infinitely divisible distribution with characteristic function $\widehat F^\lambda(t)$. The elementary properties of concentration functions are well studied (see, for instance, \cite{Arak:Zaitsev:1986}, \cite{Hen:Teod:1980}, \cite{Petrov:1972}). It is known that \begin{equation}\label{8jrt} Q(F,\mu)\ll_d (1+ \lfloor \mu/\lambda \rfloor)^d\,Q(F,\lambda) \end{equation} for any $\mu,\lambda>0$. Hence, \begin{equation}\label{8art} Q(F,c\lambda)\asymp_{d}\,Q(F,\lambda). \end{equation} Let us formulate a generalization of the classical Ess\'een inequality \cite{Ess:1966} to the multivariate case (\cite{Ess:1968}, see also \cite{Hen:Teod:1980}): \begin{lemma}\label{lm3} Let $\tau>0$ and let $F$ be a $d$-dimensional probability distribution. Then \begin{equation} Q(F, \tau)\ll_d \tau^d\int_{|t|\le1/\tau}|\widehat{F}(t)| \,dt. \label{4s4d} \end{equation} \end{lemma} In the general case $Q(F,\lambda)$ cannot be estimated from below by the right-hand side of inequality~\eqref{4s4d}. However, if we assume additionally that the distribution $F$ is symmetric and its characteristic function is non-negative for all~$t\in\mathbf R^d$, then we have the lower bound: \begin{equation} \label{1a}Q(F, \tau)\gg_d \tau^d\int\limits_{|t|\le1/\tau}{|\widehat{F}(t)| \,dt}, \end{equation} and, therefore, \begin{equation} \label{1b} Q(F, \tau)\asymp_d \tau^d\int\limits_{|t|\le1/\tau}{|\widehat{F}(t)| \,dt}, \end{equation} (see \cite{Arak:1980} or \cite{Arak:Zaitsev:1986}, Lemma~1.5 of Chapter II for $d=1$). In the multivariate case relations \eqref{1a} and~\eqref{1b} were obtained by Zaitsev \cite{Zaitsev:1987}, see also Eliseeva \cite{Eliseeva}. It is precisely the use of relation \eqref{1b} that allows us to simplify the arguments of Friedland and Sodin \cite{Fried:Sod:2007}, Rudelson and Vershynin~\cite{Rud:Ver:2009} and Vershynin \cite{Ver:2011} which were applied to the Littlewood--Offord problem (see \cite{Eliseeva}, \cite{EGZ} and \cite{Eliseeva and Zaitsev}). The main result of this paper is a general inequality which reduces the estimation of concentration functions in the Littlewood--Offord problem to the estimation of concentration functions of some infinitely divisible distributions. This result is formulated in Theorem~\ref{lm43}. For $z\in \mathbf{R}$, introduce the distribution $H_z$ with the characteristic function \begin{equation} \label{11b}\widehat{H}_z(t) =\exp\Big(-\frac{\,1\,}2\;\sum_{k=1}^{n}\big(1-\cos(\left\langle \, t,a_k\right\rangle z)\big)\Big). \end{equation} It depends on the vector~$a$. It is clear that $H_z$ is a symmetric infinitely divisible distribution. 
Therefore, its characteristic function is positive for all $t\in \mathbf{R}^d$. \begin{theorem}\label{lm43}Let\/ $V$ be an arbitrary\/ $d$-dimensional Borel measure such that\/ $\lambda=V\{\mathbf R\}>0$,\/ and $V\le G$, that is,\/ $V\{B\}\le G\{B\}$, for any Borel set~$B$. Then, for any\/ $\varepsilon>0$ and\/ $\tau>0$, we have \begin{equation}\label{cc23} Q(F_a,\tau) \ll_d Q(H_{1}^{\lambda},\varepsilon)\,\exp\bigg(d \int_{z\in\mathbf{R}}\log\big(1+\lfloor\tau(\varepsilon |z|)^{-1}\rfloor \big)\,F\{dz\}\bigg), \end{equation}where $F=\lambda^{-1}V$.\end{theorem} Note that $\log\big(1+\lfloor\tau(\varepsilon |z|)^{-1}\rfloor \big)=0$, for $|z|\ge\tau/\varepsilon$. Therefore, the integration in~\eqref{cc23} is taken, in fact, over the set $\big\{z:|z|<\tau/\varepsilon\big\}$ only. \begin{corollary}\label{lm42}Let\/ $\delta>0$ and \begin{equation}p(\delta)= G\big\{\{z:|z| \ge \delta\}\big\}>0.\end{equation}Then, for any\/ $\varepsilon,\tau>0$, we have \begin{equation}\label{10abc} Q(F_a, \tau) \ll_d e^{\Delta}\, Q(H_1^{p(\delta)}, \varepsilon), \end{equation} where \begin{equation}\Delta=\Delta(\tau,\varepsilon,\delta)=\frac{d}{p(\delta)}\int\limits_{|z| \ge \delta} \log\big(1+\lfloor\tau(\varepsilon |z|)^{-1}\rfloor \big) \, G\{dz\}.\end{equation} \end{corollary} In particular, choosing $\delta=\tau/\varepsilon$, we get \begin{corollary}\label{lm452}For any\/ $\varepsilon,\tau>0$, we have \begin{equation}\label{11abc} Q(F_a, \tau) \ll_d Q(H_1^{p(\tau/\varepsilon)}, \varepsilon). \end{equation} \end{corollary}Just the statement of Corollary \ref{lm452} (usually for $\tau=\varepsilon$) is actually the starting point of almost all recent studies on the Littlewood--Offord problem (see, for instance, \cite{Fried:Sod:2007},~\cite{Hal:1977}, \cite{Nguyen:Vu:2011}, \cite{Rud:Ver:2008}, \cite{Rud:Ver:2009} and \cite{Ver:2011}). More precisely, with the help of Lemma \ref{lm3} or its analogs, the authors of the above-mentioned papers have obtained estimates of the type \begin{equation}\label{11abc2} Q(F_a, \tau) \ll_d \sup_{z\ge\tau/\varepsilon }\tau^d\int_{|t|\le1/\tau}\widehat{H}_z^{p(\tau/\varepsilon)}(t) \,dt. \end{equation} The fact that \eqref{8jrt} and \eqref{1b} imply that \begin{eqnarray} \sup_{z\ge\tau/\varepsilon }\tau^d\int_{|t|\le1/\tau}\widehat{H}_z^{p(\tau/\varepsilon)}(t) \,dt &\asymp_d&\sup_{z\ge\tau/\varepsilon}\; Q\big(H_{z}^{p(\tau/\varepsilon)},\tau\big)\nonumber \\ = \sup_{z\ge\tau/\varepsilon}\; Q\big(H_{1}^{p(\tau/\varepsilon)},\tau/z\big)&=&Q\big(H_{1}^{p(\tau/\varepsilon)},\varepsilon\big), \end{eqnarray} remained apparently unnoticed by the authors of these papers that significantly hampered further evaluation of the right-hand side of inequality~\eqref{11abc2}. Choosing $V$ so that \begin{equation}\label{cc243} V\{dz\}=\big(\max\big\{1,\,\log\big(1+\lfloor\tau(\varepsilon |z|)^{-1}\rfloor \big)\big\}\big)^{-1}\,G\{dz\}, \end{equation} we obtain \begin{corollary}\label{lm429} For any\/ $\varepsilon,\tau>0$, we have \begin{equation}\label{cc234} Q(F_a,\tau) \ll_d Q(H_{1}^{\lambda},\varepsilon)\,\exp\big(d\lambda^{-1}G\big\{\{z:|z|<\tau/\varepsilon\}\big\} \big), \end{equation}where \begin{equation}\label{cc239} \lambda=\lambda(G,\tau/\varepsilon)=V\{\mathbf R\}=\int\limits_{z\in\mathbf{R}}\big(\max\big\{1,\,\log\big(1+\lfloor\tau(\varepsilon |z|)^{-1}\rfloor \big)\big\}\big)^{-1}\,G\{dz\}. \end{equation} \end{corollary} In Corollaries~\ref{lm42}--\ref{lm429} we choose the measure $V$ in the form $V\{dz\}=f(z)\,G\{dz\}$ with $0\le f(z)\le1$. 
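To make these choices concrete, consider (as an illustration only) the classical Littlewood--Offord case, where $X$ takes the values $\pm1$ with probabilities $1/2$. Then $\widetilde{X}=X_1-X_2$ takes the value $0$ with probability $1/2$ and the values $\pm2$ with probabilities $1/4$, so that $p(\delta)=G\big\{\{z:|z|\ge\delta\}\big\}=1/2$ for any $0<\delta\le2$. Taking $f(z)=\mathbf{1}\{|z|\ge\delta\}$ with $\delta=2$ (the choice behind Corollary~\ref{lm42}) and $\tau=\varepsilon$, we get $\lfloor\tau(\varepsilon |z|)^{-1}\rfloor=\lfloor1/2\rfloor=0$ on the support of $V$, hence $\Delta=0$ and
\begin{equation*}
Q(F_a,\varepsilon)\ll_d Q\big(H_1^{1/2},\varepsilon\big),
\end{equation*}
in agreement with Corollary~\ref{lm452}. Note that any $f$ for which the integral in~\eqref{cc23} is finite must satisfy $f(0)=0$ here, since $G$ has an atom at zero.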
It is not clear what choice of $f$ is optimal; this depends on $a$ and $G$. Choosing the optimal function $f$, minimizing the right-hand sides of inequalities \eqref{10abc}, \eqref{11abc} and \eqref{cc234}, is a difficult problem, and its solution again depends on $a$ and $G$. Certainly, it is sufficient to consider non-decreasing functions~$f$ only. For a fixed $\varepsilon$, an increase of~$\lambda$ implies a decrease of $Q(H_{1}^{\lambda},\varepsilon)$. Theorem \ref{lm43} may be applied for $V=G$. Then $\lambda=1$. This is the maximal possible value of~$\lambda$. However, the integral in the right-hand side of~\eqref{cc23} may be infinite in this case. In particular, it diverges if the distribution~$G$ has a nonzero atom at zero. This atom should in any case be excluded in constructing the measure $V$, if we expect to get a meaningful bound for $Q(F_a,\tau)$. For a fixed measure~$V$, decreasing $\varepsilon$ implies a decrease of $Q(H_{1}^{\lambda},\varepsilon)$, but an increase of the integral in the right-hand side of inequality~\eqref{cc23}. In Corollary \ref{lm429} we used the measure $V$ defined in \eqref{cc243} so that the integral in the right-hand side of inequality~\eqref{cc23} always converges, no matter what the measure~$G$ is. The proof of Theorem~$\ref{lm43}$ is based on elementary properties of concentration functions; it will be given below. Note that $H_1^{\lambda}$ is an infinitely divisible distribution with the L\'evy spectral measure $M_\lambda=\frac{\,\lambda\,}4\;M^*$, where $M^*=\sum\limits_{k=1}^{n}\big(E_{a_k}+E_{-a_k}\big)$. It is clear that the assertions of Theorem~$\ref{lm43}$ and Corollaries~\ref{lm42}--\ref{lm429} reduce the Littlewood--Offord problem to the study of the measure~$M^*$, uniquely corresponding to the vector~$a$. In fact, almost all the results obtained when solving this problem are formulated in terms of the coefficients~$a_j$ or, equivalently, in terms of the properties of the measure~$M^*$. Sometimes this leads to a loss of information on the distribution of the random variable~$X$, which can help in obtaining more precise estimates. In particular, if $\mathcal L(X)$ is the standard normal distribution, then $F_a$ is a Gaussian distribution with zero mean and a covariance operator which can be easily calculated. Thus, there are situations in which it is possible to obtain estimates for $ Q (F_a, \tau) $ that do not follow from the results formulated in terms of the measure~$M^*$. Note that using the results of Arak \cite{Arak:1980},~\cite{Arak:1981} (see also \cite{Arak:Zaitsev:1986}) one could derive from Theorem \ref{lm43} estimates similar to the estimates of concentration functions in the Littlewood--Offord problem which were obtained in a recent paper of Nguyen and Vu~\cite{Nguyen:Vu:2011} (see also \cite{Nguyen and Vu13}). A detailed discussion of this fact is presented in a joint paper of the authors with Friedrich G\"otze which is being prepared for publication. The same paper contains a proof of multidimensional analogs of some results of Arak \cite{Arak:1980}. 
In Theorems~\ref{t4} and~\ref{thm1} below, we provide without proof the formulations of these results, which demonstrate a relation between the order of smallness of the concentration function of the sum and the arithmetic structure of the supports of distributions of independent random vectors for {\it arbitrary} distributions of summands, in contrast to the results of \cite{Fried:Sod:2007}, \cite{Nguyen:Vu:2011}, \cite{Rud:Ver:2008}--\cite{Ver:2011}, in which a similar relationship was found in a particular case of summands with the distributions arising in the Littlewood--Offord problem. We need some notation. Let ${\mathbf Z}_+$ be the set of non-negative integers. For any $r\in{\mathbf Z}_+$ and $u=(u_1,\ldots, u_r)\in {({\mathbf R}^d)}^r$, $u_j\in {\mathbf R}^d$, $j=1,\ldots,r$, introduce the set \begin{equation} {K}_{1}(u)=\Big\{\sum\limits_{j=1}^r n_j u_j:n_j\in \{-1,0,1\} \hbox{ for }j=1,\ldots,r\Big\}.\label{1s17} \end{equation} We denote by $[B]_\tau$ the closed $\tau$-neighborhood of a set $B$ in the sense of the norm $|\,\cdot\,|$. \begin{theorem}\label{t4} Let $\tau\ge0$ and let $F_j$, $j=1,\ldots,n$, be $d$-dimensional probability distributions. Denote $\gamma=Q\Big(\prod_{j=1}^n F_j,\tau\Big)$. Then there exist $r\in\mathbf Z_+$ and vectors $u_1,\ldots, u_r;x_1,\ldots, x_n\in {{\mathbf R}^d}$ such that \begin{equation} r\ll_d \left|\log \gamma\right|+1, \label{1s682} \end{equation} and \begin{equation} \sum\limits_{j=1}^n F_j\{{\mathbf R}^d\setminus([K_{1}(u)]_\tau+x_j)\}\ll_d \bigl(\left|\log \gamma\right|+1\bigr)^3, \label{1s692} \end{equation} where $u=(u_1,\ldots, u_r)\in {({\mathbf R}^d)}^r$, and the set $K_{1}(u)$ is defined in~\eqref{1s17}. \end{theorem} \begin{theorem}\label{thm1} Let $D$ be a $d$-dimensional infinitely divisible distribution with characteristic function of the form $\exp\big\{\alpha(\widehat M(t)-1)\big\}$, $t\in{\mathbf R}^d$, where $\alpha>0$ and $M$ is a probability distribution. Let $\tau\ge0$ and $\gamma=Q(D, \tau)$. Then there exist $r\in\mathbf Z_+$ and vectors $u_1,\ldots, u_r\in {{\mathbf R}^d}$ such that \begin{equation} r\ll_d \left|\log \gamma\right|+1, \label{1s65} \end{equation} and \begin{equation} \alpha \,M\{{\mathbf R}^d\setminus[K_{1}(u)]_\tau\}\ll_d \bigl(\left|\log \gamma\right|+1\bigr)^3, \label{1s68d} \end{equation} where $u=(u_1,\ldots, u_r)\in {({\mathbf R}^d)}^r$. \end{theorem} \noindent {\bf Proof of Theorem~\ref{lm43}.} Let us show that, for an arbitrary probability distribution~$F$ and $\lambda,T>0$, \begin{multline} \log\int_{|t|\le T}\exp\Big(-\frac{\,1\,}2\;\sum_{k=1}^{n}\int_{z\in\mathbf{R}}\big(1-\cos(\left\langle \, t,a_k\right\rangle z)\big)\,\lambda\,F\{dz\}\Big)\,dt\\ \le \int_{z\in\mathbf{R}}\bigg(\log\int_{|t|\le T}\exp\Big(-\frac{\,\lambda\,}2\;\sum_{k=1}^{n}\big(1-\cos(\left\langle \, t,a_k\right\rangle z)\big)\Big)\,dt\bigg)\,F\{dz\}\\ = \int_{z\in\mathbf{R}}\bigg(\log\int_{|t|\le T}\widehat{H}_{z}^{\lambda}(t)\,dt\bigg)\,F\{dz\}.\label{dd11} \end{multline} It suffices to prove \eqref{dd11} for discrete distributions~$F= \sum_{j=1}^{\infty}p_j E_{z_j} $, where $0\le p_j\le1$, $z_j\in\mathbf{R}$, $\sum_{j=1}^{\infty}p_j =1 $. 
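The H\"older inequality is used here in the form $\int\prod_{j}\varphi_j^{\,p_j}\,dt\le\prod_{j}\big(\int\varphi_j\,dt\big)^{p_j}$, valid for non-negative functions $\varphi_j$ and weights $p_j\ge0$ with $\sum_j p_j=1$; in \eqref{d11} below it is applied with $\varphi_j(t)=\exp\Big(-\frac{\,\lambda\,}2\;\sum_{k=1}^{n}\big(1-\cos(\left\langle \, t,a_k\right\rangle z_j)\big)\Big)$.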
Applying in this case the H\"older inequality, we have \begin{multline} \int\limits_{|t|\le T}\exp\Big(-\frac{\,1\,}2\;\sum\limits_{k=1}^{n}\int\limits_{z\in\mathbf{R}}\big(1-\cos(\left\langle \, t,a_k\right\rangle z)\big)\,\lambda\,F\{dz\}\Big)\,dt\\ = \int\limits_{|t|\le T}\exp\Big(-\frac{\,\lambda\,}2\;\sum\limits_{j=1}^{\infty}p_j\sum\limits_{k=1}^{n}\big(1-\cos(\left\langle \, t,a_k\right\rangle z_j)\big)\Big)\,dt\\ \le\prod\limits_{j=1}^{\infty} \bigg(\int\limits_{|t|\le T}\exp\Big(-\frac{\,\lambda\,}2\;\sum\limits_{k=1}^{n}\big(1-\cos(\left\langle \, t,a_k\right\rangle z_j)\big)\Big)\,dt\bigg)^{p_j}.\label{d11} \end{multline} Taking the logarithms of the left and right-hand sides of~\eqref{d11}, we get~\eqref{dd11}. In general case we can approximate the distribution~$F$ by discrete distributions in the sense of weak convergence and to pass to the limit. We use that the weak convergence of probability distributions is equivalent to the convergence of characteristic functions which is uniform on bounded sets. Moreover, the weak convergence of symmetric infinitely divisible distributions is equivalent to the weak convergence of the corresponding spectral measure. Note also that the integrals $\int_{|t|\le T}$ may be replaced in~\eqref{dd11} by the integrals $\int_{t\in B}$ over an arbitrary Borel set~$B$. Since for characteristic function $\widehat{W}(t)$ of a random vector $Y$, we have $$|\widehat{W}(t)|^2 = \mathbf{E}\exp(i\langle \,t,\widetilde{Y}\rangle ) = \mathbf{E}\cos(\langle \,t,\widetilde{Y}\rangle ),$$ where $\widetilde{Y}$ is the corresponding symmetrized random vector, then \begin{equation}\label{6}|\widehat{W}(t)| \leq \exp\Big(-\cfrac{\,1\,}{2}\,\big(1-|\widehat{W}(t)|^2\big)\Big) = \exp\Big(-\cfrac{\,1\,}{2}\,\mathbf{E}\,\big(1-\cos(\langle \,t,\widetilde{Y}\rangle )\big)\Big). \end{equation} According to Theorem \ref{lm3} and relations $V=\lambda\,F\le G$, \eqref{dd11} and \eqref{6}, we have \begin{eqnarray} Q(F_a,\tau)&\ll_{d}& \tau^d\int_{\tau|t|\le 1}|\widehat{F}_a(t)|\,dt \nonumber\\ &\ll_{d}& \tau^d\int_{\tau|t|\le 1}\exp\Big(-\frac{\,1\,}{2}\,\sum_{k=1}^{n}\mathbf{E}\,\big(1-\cos(\left\langle \,t,a_k\right\rangle \widetilde{X})\big)\Big)\,dt\nonumber\\ &=&\tau^d\int_{\tau|t|\le 1}\exp\Big(-\frac{\,1\,}2\;\sum_{k=1}^{n}\int_{z\in\mathbf{R}}\big(1-\cos(\left\langle \, t,a_k\right\rangle z)\big)\,G\{dz\}\Big)\,dt\nonumber\\ &\le&\tau^d\int_{\tau|t|\le 1}\exp\Big(-\frac{\,1\,}2\;\sum_{k=1}^{n}\int_{z\in\mathbf{R}}\big(1-\cos(\left\langle \, t,a_k\right\rangle z)\big)\,\lambda\,F\{dz\}\Big)\,dt\nonumber\\ &\le&\exp\bigg( \int_{z\in\mathbf{R}}\log\bigg(\tau^d\int_{\tau|t|\le 1}\widehat{H}_{z}^{\lambda}(t)\,dt\bigg)\,F\{dz\}\bigg).\label{cc11} \end{eqnarray} Using \eqref{8jrt} and \eqref{1b}, we have \begin{eqnarray} \tau^d\int_{\tau|t|\le 1} \widehat{H}_{z}^{\lambda}(t) \,dt &\asymp_{d}& Q(H_{z}^{\lambda},\tau) = Q\big(H_{1}^{\lambda}, \tau{|z|}^{-1}\big)\nonumber \\&\le& \big(1+\lfloor\tau(\varepsilon |z|)^{-1}\rfloor \big)^d\, Q(H_{1}^{\lambda},\varepsilon).\label{cc22} \end{eqnarray} Substituting this estimate into \eqref{cc11}, we obtain~\eqref{cc23}. $\square$ \end{document}
\begin{document} \begin{abstract} In this paper, we examine the locality condition for non-splitting and determine the level of uniqueness of limit models that can be recovered in some stable, but not superstable, abstract elementary classes. In particular we prove: \begin{theorem1}\label{uniqueness theorem} Suppose that $\mathcal{K}$ is an abstract elementary class satisfying \begin{enumerate} \item the joint embedding and amalgamation properties with no maximal model of cardinality $\mu$. \item stability in $\mu$. \item $\kappa^*_\mu(\mathcal{K})<\mu^+$. \item continuity for non-$\mu$-splitting (i.e. if $p\in\operatorname{ga-S}(M)$ and $M$ is a limit model witnessed by $\langle M_i\mid i<\alpha\rangle$ for some limit ordinal $\alpha<\mu^+$ and there exists $N \prec M_0$ so that $p\upharpoonright M_i$ does not $\mu$-split over $N$ for all $i<\alpha$, then $p$ does not $\mu$-split over $N$). \end{enumerate} For $\theta$ and $\delta$ limit ordinals $<\mu^+$ both with cofinality $\geq\kappa^*_\mu(\mathcal{K})$, if $\mathcal{K}$ satisfies symmetry for non-$\mu$-splitting (or just $(\mu,\delta)$-symmetry), then, for any $M_1$ and $M_2$ that are $(\mu,\theta)$ and $(\mu,\delta)$-limit models over $M_0$, respectively, we have that $M_1$ and $M_2$ are isomorphic over $M_0$. \end{theorem1} Note that no tameness is assumed. \end{abstract} \maketitle \section{Introduction} Because the main test question for developing a classification theory for abstract elementary classes (AECs) is Shelah's Categoricity Conjecture \cite[Problem D.1]{Ba}, the development of independence notions for AECs has often started with an assumption of categoricity (\cite{sh576, vas3, V1} and others). Consequently, the independence relations that result are superstable or stronger (see, for instance, good $\lambda$-frames and the superstable prototype \cite[Example II.3.(A)]{shelahaecbook}). However, little progress has been made to understand stable, but not superstable, AECs. A notable exception is the work on $\kappa$-coheir of Boney and Grossberg \cite{bgcoheir}, which only requires stability in the guise of `no weak $\kappa$-order property.' In this paper, we add to the understanding of strictly stable AECs with a different approach and under different assumptions than \cite{bgcoheir}. In particular, our analysis uses towers and the standard definition of Galois-stability. Moreover, we work without assuming any of the strong locality assumptions (tameness, type shortness, etc.) of \cite{bgcoheir}. We hope that this work will lead to further exploration in this context. The main tool in our analysis is a tower, which was first conceived to study superstable AECs (see, for instance, \cite{ShVi} or \cite{Va1}). The `right analogue' of superstability in AECs has been the subject of much research. Shelah has commented that this notion suffers from `schizophrenia,' where several equivalent concepts in first-order seem to bifurcate into distinct notions in nonelementary settings; see the recent Grossberg and Vasey \cite{grva} for a discussion of the different possibilities (and a surprising proof that they are equivalent under tameness). Common to much analysis of superstable AECs is the uniqueness of limit models. Uniqueness of limit models was first proved to follow from a categoricity assumption in \cite{sh394, Sh:600, ShVi, Va1, Va1e}. 
Later, $\mu$-superstability, which was isolated by Grossberg, VanDieren, and Villaveces \cite[Assumption 2.8(4)]{gvv}, was shown to imply uniqueness of limit models under the additional assumption of $\mu$-symmetry \cite{Va2}. $\mu$-superstability was modeled on the local character characterization of superstability in first-order and was already known to follow from categoricity \cite{ShVi}. The connection between $\mu$-symmetry and structural properties of towers \cite{Va2} inspired recent research on $\mu$-superstable classes: \cite{Va4, VaVa}. Moreover, years of work culminating in the series of papers \cite{ShVi, Va1, Va1e, Va2, Va4, VaVa} have led to the extraction of a general scheme for proving the uniqueness of limit models (note that amalgamation is generally assumed in these papers, but this is not true of \cite{ShVi, Va1, Va1e}). In this paper we witness the power of this new scheme by adapting the technology developed in \cite{Va2} to cover $\mu$-stable, but not $\mu$-superstable, classes. We suspect that this new technology of towers will likely be used to answer other problems in classification theory (in both first order and non-elementary settings). This paper focuses on the question to what degree the uniqueness of limit models can be recovered if we assume the class is Galois-stable in $\mu$, but not $\mu$-superstable, by refocusing the question from ``\emph{Are all} $(\mu, \alpha)$-limit models isomorphic (over the base)?'' to ``\emph{For which $\alpha, \beta < \mu^+$ are } $(\mu, \alpha)$-limit models and $(\mu, \beta)$-limit models isomorphic (over the base)?'' Based on first-order results (summarized in \cite[Section 2]{gvv}), we have the following conjecture. \begin{conjecture}\label{stab-conj} Suppose $\mathcal{K}$ is an AEC with $\mu$-amalgamation and is $\mu$-stable. The set $$\{ \alpha < \mu^+ : \cf(\alpha)=\alpha\text{ and } (\mu, \alpha)\text{-limit models are isomorphic to }(\mu, \mu)\text{-limit models} \}$$ is a non-trivial interval of regular cardinals. Moreover, the minimum of this set, denoted by $\kappa^*_\mu(\mathcal{K})$, is an important measure of the complexity of $\mathcal{K}$. \end{conjecture} Our main result (restated from the abstract) proves this conjecture under certain assumptions. \begin{theorem}\label{uniqueness theorem} Suppose that $\mathcal{K}$ is an abstract elementary class satisfying \begin{enumerate} \item the joint embedding and amalgamation properties with no maximal model of cardinality $\mu$. \item stability in $\mu$. \item $\kappa^*_\mu(\mathcal{K})<\mu^+$. \item continuity for non-$\mu$-splitting (i.e. if $p\in\operatorname{ga-S}(M)$ and $M$ is a limit model witnessed by $\langle M_i\mid i<\alpha\rangle$ for some limit ordinal $\alpha<\mu^+$ and there exists $N$ so that $p\upharpoonright M_i$ does not $\mu$-split over $N$ for all $i<\alpha$, then $p$ does not $\mu$-split over $N$). \end{enumerate} For $\theta$ and $\delta$ limit ordinals $<\mu^+$ both with cofinality $\geq\kappa^*_\mu(\mathcal{K})$, if $\mathcal{K}$ satisfies symmetry for non-$\mu$-splitting (or just $(\mu,\delta)$-symmetry), then, for any $M_1$ and $M_2$ that are $(\mu,\theta)$ and $(\mu,\delta)$-limit models over $M_0$, respectively, we have that $M_1$ and $M_2$ are isomorphic over $M_0$. \end{theorem} Assumption \ref{assm} collects these assumptions together, and we discuss them following that statement. 
In this statement, the ``measure of complexity'' from Conjecture \ref{stab-conj} is $\kappa^*_\mu(\mathcal{K})$, a generalization of the first-order $\kappa(T)$ (see Definition \ref{kappastar-def}). An important feature of this work is that it explores the underdeveloped field of strictly stable AECs. We end with a short comment contextualizing this paper within the body of work on limit models. The general arguments for investigating the uniqueness of limit models have appeared before (see \cite{Va1, gvv}). One use is that they give a version of saturated models without dealing with smaller models and give a sense of how difficult it is to create saturated models. Many works on AECs take a `local approach' of analyzing $\mathcal{K}_\lambda$ (the models of size $\lambda$) to derive structure on $\mathcal{K}_{\lambda^+}$ (see \cite[Chapter II]{shelahaecbook} or \cite{sh576} for the most prominent examples). Because not even the existence of models of size $<\lambda$ is assumed, Galois saturation (which quantifies over smaller models) cannot be used, and limit models have become the standard substitute. Moreover, we expect that limit models will take on a greater importance in the context of strictly stable AECs, especially those without the assumption of tameness. Of the various analogues of superstability for AECs (see \cite[Theorem 1.2]{grva}), most have seen extensive analysis, but only in the context of tameness. One of the remaining notions (solvability; see \cite[Chapter IV]{shelahaecbook}) seems to have no weakening to the strictly stable context. What remains are $\mu$-superstability and the uniqueness of limit models. Thus, it is reasonable to assume that understanding strictly stable AECs will require understanding the connection between `$\mu$-stability' (Assumption \ref{assm} here) and limit models. Theorem \ref{uniqueness theorem} is a step towards this understanding. After circulating this paper but before publication, Vasey and Mazari-Armida used our results to make further progress in the field. Vasey used Theorem \ref{uniqueness theorem} in his work to characterize stable AECs \cite{v-stab-AEC}, especially in terms of unions of sufficiently saturated models being saturated \cite[Theorem 11.11]{v-stab-AEC}. Additionally, Vasey \cite[Theorem 3.7]{v-stab-AEC} gives some natural conditions for Assumption \ref{assm}.(\ref{wc-split}) below, which he calls the weak continuity of splitting. On the other hand, Mazari-Armida identified naturally occurring strictly stable AECs. By analyzing limit models of different cofinalities, he demonstrated that the class of torsion-free abelian groups and the class of finitely Butler groups, both with the pure subgroup relation, are strictly stable AECs \cite{marcos}. Section \ref{background-sec} reviews key definitions and facts, with Assumption \ref{assm} being the key hypothesis throughout the paper. Section \ref{relfulltow-sec} discusses the notion of relatively full towers. Section \ref{redtow-sec} discusses reduced towers and proves the key lemma, Theorem \ref{reduced are continuous}. Section \ref{ulm-sec} concludes with a proof of the main theorem, Theorem \ref{uniqueness theorem}. We would like to thank Rami Grossberg and Sebastien Vasey for comments on earlier drafts of this paper that led to a vast improvement in presentation. \section{Background} \label{background-sec} We refer the reader to \cite{Ba}, \cite{GV2}, \cite{gvv}, \cite{Va1}, and \cite{Va2} for definitions and notations of concepts such as Galois-stability, $\mu$-splitting, etc. 
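We recall the standard definition of $\mu$-splitting used in those references: a type $p\in\operatorname{ga-S}(M)$ \emph{$\mu$-splits} over $N\prec_{\mathcal{K}}M$ if there are $N_1,N_2\in\mathcal{K}_\mu$ with $N\prec_{\mathcal{K}}N_1,N_2\prec_{\mathcal{K}}M$ and an isomorphism $h:N_1\cong_N N_2$ such that $h(p\upharpoonright N_1)\neq p\upharpoonright N_2$.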
We reproduce a few of the more specialized definitions and results here. Grossberg, VanDieren, and Villaveces \cite[Assumption 2.8]{gvv} isolated a notion they call `$\mu$-superstability'\footnote{We do not use this here, but the definition of $\mu$-superstability strengthens Assumption \ref{assm} by requiring that $\kappa^*_\mu(\mathcal{K})$ be $\omega$.} by examining consequences of categoricity from \cite{sh394} and \cite{ShVi}. The key feature in this assumption is that there are no infinite splitting chains (as forbidden in \cite[Theorem 2.2.1]{ShVi}). We weaken $\mu$-superstability by only forbidding long enough splitting chains. How long is `long enough' is measured by $\kappa^*_\mu(\mathcal{K})$, which is a relative of \cite[Definition 4.3]{GV2} and universal local character \cite[Definition 3.5]{bgcoheir}. Following \cite{bgcoheir}, we add the `*' to this symbol to denote that the chain is required to have the property that $M_{i+1}$ is universal over $M_i$. \begin{definition} \label{kappastar-def} We define $\kappa^*_\mu(\mathcal{K})$ to be the minimal, regular $\kappa<\mu^+$ so that for every increasing and continuous sequence $\langle M_i\in\mathcal{K}_\mu\mid i\leq\alpha \rangle$ with $\alpha\geq \kappa$ regular which satisfies for every $i<\alpha$, $M_{i+1}$ is universal over $M_i$, and for every non-algebraic $p\in\operatorname{ga-S}(M_\alpha)$, there exists $i<\alpha$ such that $p$ does not $\mu$-split over $M_i$. If no such $\kappa$ exists, we say $\kappa^*_\mu(\mathcal{K})=\infty$. We call $\kappa^*_\mu(\mathcal{K})$ the `universal local character for $\mu$-nonsplitting for $\mathcal{K}$,' or simply the `universal local character' for short when $\mu$ and $\mathcal{K}$ are fixed. \end{definition} In \cite[Theorem 4.13]{GV2}, Grossberg and VanDieren show that if $\mathcal{K}$ is a tame stable abstract elementary class satisfying the joint embedding and amalgamation properties with no maximal models, then there exists a single bound for $\kappa^*_\mu(\mathcal{K})$ for all sufficiently large $\mu$ in which $\mathcal{K}$ is $\mu$-stable. This proof works by considering the $\chi$-order property of Shelah. We can also give a direct bound assuming tameness. \begin{proposition} Let $\mathcal{K}$ be an AEC with amalgamation that is $\lambda$-stable and $(\lambda, \mu)$-tame. Then $\kappa^*_\mu(\mathcal{K}) \leq \lambda$. \end{proposition} Note that the proof does not require the extensions to be universal. \begin{proof} Let $\langle M_i \in K_\mu : i \leq \alpha \rangle$ be an increasing, continuous chain with $\cf(\alpha) \geq \lambda$ and $p \in \operatorname{ga-S}(M_\alpha)$. By \cite[Claim 3.3.(1)]{sh394} and $\lambda$-stability, there is $N_0 \prec M_\alpha$ of size $\lambda$ such that $p$ does not $\lambda$-split over $N_0$. By tameness, $p$ does not $\mu$-split over $N_0$. By the cofinality assumption, there is $i_* < \alpha$ such that $N_0 \prec M_{i_*}$. By monotonicity, $p$ does not $\mu$-split over $M_{i_*}$. \end{proof} This definition motivates our main assumption. We use this collection only to group these items together and will explicitly list Assumption \ref{assm} when it is part of a result's hypothesis. \begin{assumption}\label{assm} \mbox{} \begin{enumerate} \item $\mathcal{K}$ satisfies the joint embedding and amalgamation properties with no maximal model of cardinality $\mu$. \item $\mathcal{K}$ is stable in $\mu$. \item $\kappa^*_\mu(\mathcal{K})<\mu^+$. \item \label{wc-split}$\mathcal{K}$ satisfies (limit) continuity for non-$\mu$-splitting (i.e. 
if $p\in\operatorname{ga-S}(M)$ and $M$ is a limit model witnessed by $\langle M_i\mid i<\theta\rangle$ for some limit ordinal $\theta<\mu^+$ and there exists $N$ so that $p\upharpoonright M_i$ does not $\mu$-split over $N$ for all $i<\theta$, then $p$ does not $\mu$-split over $N$). \end{enumerate} \end{assumption} A few comments on the assumption are in order. Note that tameness is not assumed in this paper. Amalgamation is commonly assumed in the study of limit models, although \cite{ShVi,Va1, Va1e} replace it with more nuanced results about amalgamation bases. Stability in $\mu$ is necessary for the conclusion of Theorem \ref{uniqueness theorem} to make sense; otherwise, there are no limit models! We have argued (both in principle and in practice) that varying the local character cardinal is the right generalization of superstability to stability in this context. However, we have kept the ``continuity cardinal'' to be $\omega$; this is the content of Assumption \ref{assm}.(\ref{wc-split}). This seems necessary for the arguments\footnote{The first author claimed in the discussion following \cite[Lemma 9.1]{extendingframes} that only long continuity was necessary. However, after discussion with Sebastien Vasey, this seems to be an error.}. It seems reasonable to hope that some failure of continuity for non-splitting will lead to a nonstructure result, but this has not yet been achieved. The assumptions are (trivially) satisfied in any superstable AEC and, therefore, any categorical AEC. However, in this context, the result is already known. For a new example, we look to the context of strictly stable homogeneous structures as developed in Hyttinen \cite[Section 1]{hyttinen}. In the homogeneous context, Galois types are determined by syntactic types. Armed with this, Hyttinen studies the normal syntactic notion of nonsplitting under a stable, unsuperstable hypothesis \cite[Assumption 1.1]{hyttinen}, and shows that syntactic splitting satisfies continuity and that (more than) the universal local character of syntactic nonsplitting is $\aleph_1$.\footnote{It shows that it is at most $\aleph_1$. However, if it were $\aleph_0$, the class would be superstable, contradicting the assumption.} It is easy to see that the syntactic version of nonsplitting implies our nonsplitting, which already implies $\kappa^*_\mu(\mathcal{K}) = \aleph_1$. The following argument shows that, if $N$ is limit over $M$, the converse holds as well, which is enough to get the limit continuity for our semantic definition of splitting. Since the context of homogeneous model theory is very tame, we don't worry about attaching a cardinal to non-splitting because they are all equivalent. Suppose that $N$ is a limit model over $M$, witnessed by $\langle N_i \mid i < \alpha \rangle$, and $p \in \operatorname{ga-S}(N)$ syntactically splits over $M$. Then, since Galois types are syntactic, there are $b, c \in N$ such that $\tp(b/M) = \tp(c/M)$ and, for an appropriate $\phi$, $\phi(x, b, m) \wedge \neg \phi(x, c, m) \in p$. We can find $\beta, \beta' < \alpha$ such that $b \in N_\beta$ and $c \in N_{\beta'}$. Since $b$ and $c$ have the same type, we can find an amalgam $N_* \succ N_\beta$ and $f:N \to_{M} N_*$ such that $f(b) = c$. Since $N$ is universal over $N_{\beta'}$, we can find $h:N_* \to_{N_{\beta'}} N$. 
This gives us an isomorphism $h \circ f:N_\beta \cong h(f(N_\beta))$ and we claim that this witnesses the semantic version of splitting: $c \in N_{\beta'}$, so $c = h(c) = h(f(b)) \in h(f(N_\beta))$ and, thus, $\neg \phi(x, c, m) \in p \upharpoonright h(f(N_\beta))$. On the other hand, $\phi(x, c, m) = h\circ f(\phi(x, b, m)) \in h \circ f( p \upharpoonright N_\beta)$. Thus, we have witnessed $h \circ f(p \upharpoonright N_\beta) \neq p \upharpoonright h(f(N_\beta))$. Note that if $\kappa^*_\mu(\mathcal{K}) = \mu$, then the conclusion of Theorem \ref{uniqueness theorem} is uninteresting, but the results still hold: any two limit models whose lengths have the same cofinality are isomorphic on general grounds. Also, we assume joint embedding, etc. only in $\mathcal{K}_\mu$. However, to simplify presentation, we work as though these properties held in all of $\mathcal{K}$ and, thus, we work inside a monster model. This will allow us to write $\tp(a/M)$ rather than $\tp(a/M;N)$ and witness Galois type equality with automorphisms. The standard technique of working inside a $(\mu,\mu^+)$-limit model can translate our proofs to ones not using a monster model. Under these assumptions, it is possible to construct towers. This is the key technical tool in this construction. Towers were introduced in Shelah and Villaveces \cite{ShVi} and expanded upon in \cite{Va1} and subsequent works. Recall that, if $I$ is well-ordered, then it has a successor function which we will denote $+1$ (or $+_I 1$ if necessary). Also, we typically restrict our attention to well-ordered $I$. \begin{definition}[{\cite[Definition I.5.1]{Va1}}]\ \begin{enumerate} \item A \emph{tower indexed by $I$ in $\mathcal{K}_\mu$} is a triple $\mathcal{T} = \langle \bar M, \bar a, \bar N \rangle$ where \begin{itemize} \item $\bar M=\langle M_i\in\mathcal{K}_\mu\mid i\in I\rangle$ is an increasing sequence of limit models; \item $\bar a=\langle a_{i}\in M_{i+1}\backslash M_i\mid i+1\in I\rangle$ is a sequence of elements; \item $\bar N=\langle N_{i} \in K_\mu \mid i+1\in I\rangle$ such that $N_i \prec M_i$ with $M_i$ universal over $N_i$; and \item $\tp(a_i/M_i)$ does not $\mu$-split over $N_i$. \end{itemize} \item A tower $\mathcal{T} = \langle \bar M, \bar a, \bar N\rangle$ is \emph{continuous} iff $\bar M$ is, i.e., $M_i = \cup_{j<i} M_j$ for all limit $i \in I$. \item $\mathcal{K}^*_{\mu, I}$ is the collection of all towers indexed by $I$ in $\mathcal{K}_\mu$. \end{enumerate} \end{definition} Note that continuity is not required of all towers. We will switch back and forth between the notation $\mathcal{K}^*_{\mu,\alpha}$ where $\alpha$ is an ordinal and $\mathcal{K}^*_{\mu,I}$ where $I$ is a well ordered set (of order type $\alpha$) when it will make the notation clearer. When we deal with relatively full towers, we will find the notation using $I$ to be more convenient for book-keeping purposes. For $\beta<\alpha$ and $\mathcal{T}=(\bar M,\bar a,\bar N)\in\mathcal{K}^*_{\mu,\alpha}$ we write $\mathcal{T}\upharpoonright \beta$ for the tower made up of the sequences $\bar M\upharpoonright \beta:=\langle M_i\mid i<\beta\rangle$, $\bar a\upharpoonright\beta:=\langle a_i\mid i+1<\beta\rangle$, and $\bar N\upharpoonright \beta:=\langle N_i\mid i+1<\beta\rangle$. We will construct increasing chains of towers. 
Here we define what it means for one tower to extend another: \begin{definition} For $I$ a sub-ordering of $I'$ and towers $(\bar M,\bar a,\bar N)\in\mathcal{K}^*_{\mu,I}$ and $(\bar M',\bar a',\bar N')\in\mathcal{K}^*_{\mu,I'}$, we say $$(\bar M,\bar a,\bar N)\leq (\bar M',\bar a',\bar N')$$ if $\bar a=\bar a'\upharpoonright I$, $\bar N=\bar N'\upharpoonright I$, and for $i\in I$, $M_i\preceq_{\mathcal{K}}M'_i$ and whenever $M'_i$ is a proper extension of $M_i$, then $M'_i$ is universal over $M_i$. If for each $i\in I$, $M'_i $ is universal over $M_i$ we will write $(\bar M,\bar a,\bar N)< (\bar M',\bar a',\bar N')$. \end{definition} For $\gamma$ a limit ordinal $<\mu^+$ and $\langle I_j\mid j<\gamma\rangle$ a sequence of well ordered sets with $I_j$ a sub-ordering of $I_{j+1}$, if $\langle(\bar M^j,\bar a,\bar N)\in\mathcal{K}^*_{\mu,I_j}\mid j<\gamma\rangle$ is a $<$-increasing sequence of towers, then the union $\mathcal{T}$ of these towers is determined by the following: \begin{itemize} \item for each $\beta\in \bigcup_{j<\gamma}I_j$, $M_\beta:=\bigcup_{\beta\in I_j;\; j<\gamma}M^j_\beta$, \item the sequence $\langle a_\beta\mid \exists (j<\gamma)\; \beta+1,\beta\in I_j\rangle$, and \item the sequence $\langle N_\beta\mid \exists (j<\gamma)\; \beta+1,\beta\in I_j\rangle$. \end{itemize} Provided that $\mathcal{K}$ satisfies the continuity property for non-$\mu$-splitting and that $\bigcup_{j<\gamma} I_j$ is well ordered, this union is a tower in $\mathcal{K}^*_{\mu,\bigcup_{j<\gamma}I_j}$. Note that it is our desire to take increasing unions of towers that leads to the necessity of the continuity property. We also need to recall a few facts about directed systems of partial extensions of towers that are implicit in \cite{Va1}. These are helpful tools in the inductive construction of towers and are used in other work (see, e.g., \cite[Facts 2 and 3]{Va2}): Fact \ref{successor stage prop} will get us through the successor step of inductive constructions of directed systems, and Fact \ref{limit stage prop} describes how to pass through the limit stages. An explicit proof of Fact \ref{limit stage prop} appears as \cite[Fact 3]{Va2}, and we provide a proof of Fact \ref{successor stage prop} below. Two important notes: \begin{itemize} \item These facts do not require that the towers be continuous. \item The work in \cite{Va1} does not assume amalgamation, so more care had to be taken in working with large limit models (in place of the monster model) and towers made of amalgamation bases. The amalgamation assumption in this (and other) papers significantly simplifies the situation. \end{itemize} \begin{fact}[\cite{Va1}]\label{successor stage prop} Suppose $\mathcal{T}$ is a tower in $\mathcal{K}^*_{\mu,\alpha}$ and $\mathcal{T}'$ is a tower of length $\beta<\alpha$ with $\mathcal{T}\upharpoonright \beta<\mathcal{T}'$. If $f\in\Aut_{M_\beta}(\mathfrak{C})$ and $M''_\beta$ is a limit model universal over $M_{\beta}$ such that $\tp(a_\beta/M''_\beta)$ does not $\mu$-split over $N_\beta$ and $f(\bigcup_{i<\beta}M'_i)\prec_{\mathcal{K}}M''_\beta$, then the tower $\mathcal{T}''\in\mathcal{K}^*_{\mu,\beta+1}$ defined by $f(\mathcal{T}')$ concatenated with the model $M''_\beta$, element $a_\beta$ and submodel $N_\beta$ is an extension of $\mathcal{T}\upharpoonright (\beta+1)$. \end{fact} \begin{proof} This is a routine verification from the definitions. 
$\mathcal{T}''\upharpoonright \beta$ is isomorphic to the tower $\mathcal{T}'$ and we are given the required nonsplitting and that, for $i < \beta$, $f(M_i')\prec M_\beta''$, so we have that $\mathcal{T}'' \in \mathcal{K}^*_{\mu, \beta+1}$. Similarly, $f$ fixes $\mathcal{T} \upharpoonright \beta$, so $\mathcal{T} \upharpoonright \beta < \mathcal{T}'$ implies $\mathcal{T} \upharpoonright \beta < \mathcal{T}'' \upharpoonright \beta$. To extend this to $\mathcal{T} \upharpoonright (\beta+1) < \mathcal{T}''\upharpoonright (\beta+1) = \mathcal{T}''$, we note that $M_\beta''$ is universal over $M_\beta$ by assumption. \end{proof} \begin{fact}[\cite{Va1}]\label{limit stage prop} Fix $\mathcal{T}\in\mathcal{K}^*_{\mu,\alpha}$ for $\alpha$ a limit ordinal. Suppose $\langle \mathcal{T}^i\in\mathcal{K}^*_{\mu,i}\mid i<\alpha\rangle$ and $\langle f_{i,j}\mid i\leq j<\alpha\rangle$ form a directed system of towers. Suppose \begin{itemize} \item each $\mathcal{T}^i$ extends $\mathcal{T}\upharpoonright i$ \item $f_{i,j}\upharpoonright M_i=id_{M_i}$ \item $M^{i+1}_{i+1}$ is universal over $f_{i,i+1}(M^i_i)$. \end{itemize} Then there exists a direct limit $\mathcal{T}^\alpha$ and mappings $\langle f_{i,\alpha}\mid i<\alpha\rangle$ to this system so that $\mathcal{T}^\alpha\in\mathcal{K}^*_{\mu,\alpha}$, $\mathcal{T}^\alpha$ extends $\mathcal{T}$, and $f_{i,\alpha}\upharpoonright M_i=id_{M_i}$. \end{fact} Finally, to prove results about the uniqueness of limit models, we will additionally need to assume that non-$\mu$-splitting satisfies a symmetry property over limit models. We refine the definition of symmetry from \cite[Definition 3]{Va2} for non-$\mu$-splitting; this localization only requires symmetry to hold when $M_0$ is $(\mu, \delta)$-limit over $N$. \begin{definition}\label{mu-delta symmetry} Fix $\mu\geq\LS(\mathcal{K})$ and $\delta$ a limit ordinal $<\mu^+$. We say that an abstract elementary class exhibits \emph{$(\mu,\delta)$-symmetry for non-$\mu$-splitting} if whenever models $M,M_0,N\in\mathcal{K}_\mu$ and elements $a$ and $b$ satisfy the conditions \ref{limit sym cond}-\ref{last} below, then there exists $M^b$ a limit model over $M_0$, containing $b$, so that $\tp(a/M^b)$ does not $\mu$-split over $N$. See Figure \ref{fig:sym}. \begin{enumerate} \item\label{limit sym cond} $M$ is universal over $M_0$ and $M_0$ is a $(\mu,\delta)$-limit model over $N$. \item\label{a cond} $a\in M\backslash M_0$. \item\label{a non-split} $\tp(a/M_0)$ is non-algebraic and does not $\mu$-split over $N$. \item\label{last} $\tp(b/M)$ is non-algebraic and does not $\mu$-split over $M_0$. \end{enumerate} \end{definition} \begin{figure} \caption{A diagram of the models and elements in the definition of $(\mu,\delta)$-symmetry. We assume the type $\tp(b/M)$ does not $\mu$-split over $M_0$ and $\tp(a/M_0)$ does not $\mu$-split over $N$. Symmetry implies the existence of $M^b$ a limit model over $M_0$ so that $\tp(a/M^b)$ does not $\mu$-split over $N$.} \label{fig:sym} \end{figure} Note that $(\mu, \delta)$-symmetry is the same as $(\mu, \cf \delta)$-symmetry. \section{Relatively Full Towers} \label{relfulltow-sec} One approach to proving the uniqueness of limit models is to construct a continuous relatively full tower of length $\theta$, and then conclude that the union of the models in this tower is a $(\mu,\theta)$-limit model. In this section we confirm that this approach can be carried out in this context, even if we remove continuity along the relatively full tower. 
\begin{definition}[{\cite[Definition 3.2.1]{ShVi}}]\label{strong type defn} For $M$ a $(\mu,\theta)$-limit model, \index{strong types}\index{Galois-type!strong}\index{$\operatorname{\mathfrak{S}t}(M)$}\index{$(p,N)$} let $$\operatorname{\mathfrak{S}t}(M):=\left\{\begin{array}{ll} (p,N) & \left|\begin{array}{l} N\prec_{\mathcal{K}}M;\\ N\text{ is a }(\mu,\theta)\text{-limit model};\\ M\text{ is universal over }N;\\ p\in \operatorname{ga-S}(M)\text{ is non-algebraic}\\ \text{and }p\text{ does not }\mu\text{-split over }N. \end{array}\right\} \end{array}\right . $$ Elements of $\operatorname{\mathfrak{S}t}(M)$ are called {\em strong types.} Two strong types $(p_1,N_1)\in\operatorname{\mathfrak{S}t}(M_1)$ and $(p_2,N_2)\in\operatorname{\mathfrak{S}t}(M_2)$ are \emph{parallel} iff for every $M'$ of cardinality $\mu$ extending $M_1$ and $M_2$ there exists $q\in\operatorname{ga-S}(M')$ such that $q$ extends both $p_1$ and $p_2$ and $q$ does not $\mu$-split over $N_1$ nor over $N_2$. \end{definition} \begin{definition}[Relatively Full Towers]\label{def:relativefulltowers} Suppose that $I$ is a well-ordered set. Let $(\bar M,\bar a,\bar N)$ be a tower indexed by $I$ such that each $M_i$ is a $(\mu,\sigma)$-limit model. For each $i$, let $\langle M^\gamma_{i}\mid \gamma<\sigma\rangle$ witness that $M_{i}$ is a $(\mu,\sigma)$-limit model.\\ The tower $(\bar M,\bar a,\bar N)$ is \emph{full relative to $(M^\gamma_{i})_{\gamma<\sigma,i\in I}$} iff \begin{enumerate} \item \label{niceorder-def} there exists a cofinal sequence $\langle i_\alpha\mid\alpha<\theta\rangle$ of $I$ of order type $\theta$ such that there are $\mu\cdot \omega$ many elements between $i_\alpha$ and $i_{\alpha+1}$ and \item\label{strong type condition} for every $\gamma<\sigma$ and every $(p,M^\gamma_{i})\in\operatorname{\mathfrak{S}t}(M_{i})$ with $i_\alpha\leq i<i_{\alpha+1}$, there exists $j\in I$ with $i\leq j< i_{\alpha+1}$ such that $(\tp(a_j/M_j),N_j)$ and $(p,M^\gamma_{i})$ are parallel. \end{enumerate} \end{definition} The following proposition will allow us to use relatively full towers to produce limit models. The fact that relatively full towers yield limit models was first proved in \cite{Va1} and in \cite{gvv} and later improved in \cite[Proposition 4.1.5]{Dr}. We notice here that the proof of \cite[Proposition 4.1.5]{Dr} does not require that the tower be continuous and does not require that $\kappa^*_\mu(\mathcal{K})=\omega$. We provide the proof for completeness. \begin{proposition}[Relatively full towers provide limit models]\label{relatively full is limit} Let $\theta$ be a limit ordinal $<\mu^+$ satisfying $\theta=\mu\cdot\theta$. Suppose that $I$ is a well-ordered set as in Definition \ref{def:relativefulltowers}.(\ref{niceorder-def}). Let $(\bar M,\bar a,\bar N)\in\mathcal{K}_{\mu,I}^*$ be a tower made up of $(\mu,\sigma)$-limit models, for some fixed $\sigma$ with $\kappa^*_\mu(\mathcal{K})\leq\cf(\sigma)<\mu^+$. If $(\bar M,\bar a,\bar N)\in\mathcal{K}^*_{\mu,I}$ is full relative to $(M^\gamma_i)_{i\in I,\gamma<\sigma}$, then $M:=\bigcup_{i\in I}M_i$ is a $(\mu,\theta)$-limit model over $M_{i_0}$. \end{proposition} \begin{proof} Because the sequence $\langle i_\alpha\mid \alpha<\theta\rangle$ is cofinal in $I$ and $\theta=\mu\cdot\theta$, we can rewrite $M:=\bigcup_{i\in I}M_i=\bigcup_{\beta<\theta}M_{i_{\beta}}=\bigcup_{\alpha<\theta}\bigcup_{\delta<\mu}M_{i_{\mu\alpha+\delta}}$. 
For $\alpha<\theta$ and $\delta<\mu$, notice \begin{equation}\label{special equation} M_{i_{\mu\alpha+\delta+1}}\text{ realizes every type over }M_{i_{\mu\alpha+\delta}}. \end{equation} To see this take $p\in\operatorname{ga-S}(M_{i_{\mu\alpha+\delta}})$. By our assumption that $\cf(\sigma)\geq\kappa^*_\mu(\mathcal{K})$, $p$ does not $\mu$-split over $M^\gamma_{i_{\mu\alpha+\delta}}$ for some $\gamma<\sigma$. Therefore $(p,M^\gamma_{i_{\mu\alpha+\delta}})\in\operatorname{\mathfrak{S}t}(M_{i_{\mu\alpha+\delta}})$. By definition of relatively full towers, there is an $a_k$ with $i_{\mu\alpha+\delta}\leq k<i_{\mu\alpha+\delta+1}$ so that $(\tp(a_k/M_k),N_k)$ and $(p,M^\gamma_{i_{\mu\alpha+\delta}})$ are parallel. Because $M_{i_{\mu\alpha+\delta}}\prec_{\mathcal{K}}M_k$, by the definition of parallel strong types, it must be the case that $a_k\models p$. By a back and forth argument we can conclude from $(\ref{special equation})$ that $M_{i_{\mu\alpha+\mu}}$ is universal over $M_{i_{\mu\alpha}}$. Thus $M$ is a $(\mu,\theta)$-limit model. To see the details of the back-and-forth argument mentioned in the previous paragraph, first translate $(\ref{special equation})$ to the terminology of \cite{Ba}: $(\ref{special equation})$ witnesses that $\bigcup_{\beta<\mu}M_{i_{\mu\alpha+\beta}}$ is $1$-special over $M_{i_{\mu\alpha}}$. Then, refer to the proof of Lemma 10.5 of \cite{Ba}. \end{proof} \section{Reduced Towers} \label{redtow-sec} The proof of the uniqueness of limit models from \cite{sh394, gvv, Va1, Va1e} is two dimensional. In the context of towers, the relatively full towers are used to produce a $(\mu,\theta)$-limit model, but to conclude that this model is also a $(\mu,\omega)$-limit model, a $<$-increasing chain of $\omega$-many continuous towers of length $\theta+1$ is constructed. We adapt this construction to prove Theorem \ref{uniqueness theorem}. Instead of creating a chain of $\omega$-many towers, we produce a chain of $\delta$-many towers, and instead of each tower in this chain being continuous, we only require that these towers are continuous at limit ordinals of cofinality at least $\kappa^*_\mu(\mathcal{K})$. The use of towers should be compared with the proof uniqueness of limit models in \cite[Section II.4]{shelahaecbook} (details are given in \cite[Section 9]{extendingframes}). Both proofs create a `square' of models, but do so in a different way. The proof here will proceed by starting with a 1-dimensional tower of models and then, in the induction step, extend this tower to fill out the square. In contrast, the induction step of \cite[Lemma II.4.8]{shelahaecbook} adds single models at a time. This seems like a minor distinction (or even just a difference in how the induction step is carried out), but there is a real distinction in the resulting squares. In \cite{shelahaecbook}, the construction is `symmetric' in the sense that $\theta$ and $\delta$ are treated the same. However, in the proof presented here, this symmetry is broken and one could `detect' which side of the square was laid out initially by observing where continuity fails. In \cite{gvv, Va1, Va1e, Va2}, the continuity of the towers is achieved by restricting the construction to reduced towers, which under the stronger assumptions of \cite{gvv, Va1, Va1e, Va2} are shown to be continuous. We take this approach and notice that continuity of reduced towers at certain limit ordinals can be obtained with the weaker assumptions of Theorem \ref{uniqueness theorem}, in particular $\kappa^*_\mu(\mathcal{K})<\mu^+$. 
\begin{definition}\label{reduced defn}\index{reduced towers} A tower $(\bar M,\bar a,\bar N)\in\mathcal{K}^*_{\mu,\alpha}$ is said to be \emph{reduced} provided that for every $(\bar M',\bar a,\bar N)\in\mathcal{K}^*_{\mu,\alpha}$ with $(\bar M,\bar a,\bar N)\leq(\bar M',\bar a,\bar N)$ we have that for every $i<\alpha$, $$(*)_i\quad M'_i\cap\bigcup_{j<\alpha}M_j = M_i.$$ \end{definition} The proofs of the following three results about reduced towers only require that the class $\mathcal{K}$ be stable in $\mu$ and that $\mu$-splitting satisfies the continuity property. Although \cite{ShVi} works under stronger assumptions than we currently, none of these results use anything beyond Assumption \ref{assm}. In particular, $\kappa^*_\mu(\mathcal{K})=\omega$ holds in \cite{ShVi}, but is not used. \begin{fact}[{\cite[Theorem 3.1.13]{ShVi}}]\label{density of reduced}\index{reduced towers!density of} Let $\mathcal{K}$ satisfy Assumption \ref{assm}. There exists a reduced $<$-extension of every tower in $\mathcal{K}^*_{\mu,\alpha}$. \end{fact} \begin{fact}[{\cite[Theorem 3.1.14]{ShVi}}]\label{union of reduced is reduced} Let $\mathcal{K}$ satisfy Assumption \ref{assm}. Suppose $\langle (\bar M,\bar a,\bar N)^\gamma\in\mathcal{K}^*_{\mu,\alpha}\mid \gamma<\beta\rangle$ is a $<$-increasing and continuous sequence of reduced towers such that the sequence is continuous in the sense that for a limit $\gamma<\beta$, the tower $(\bar M,\bar a,\bar N)^\gamma$ is the union of the towers $(\bar M,\bar a,\bar N)^\zeta$ for $\zeta<\gamma$. Then the union of the sequence of towers $\langle (\bar M,\bar a,\bar N)^\gamma\in\mathcal{K}^*_{\mu,\alpha}\mid \gamma<\beta\rangle$ is itself a reduced tower. \end{fact} In fact the proof of Fact \ref{union of reduced is reduced} gives a slightly stronger result which allows us to take the union of an increasing chain of reduced towers of increasing index sets and conclude that the union is still reduced. \begin{fact}[{\cite[Lemma 5.7]{gvv}}]\label{monotonicity} Let $\mathcal{K}$ satisfy Assumption \ref{assm}. Suppose that $(\bar M,\bar a,\bar N)\in\mathcal{K}^*_{\mu,\alpha}$ is reduced. If $\beta<\alpha$, then $(\bar M,\bar a,\bar N)\upharpoonrightriction \beta$ is reduced. \end{fact} The following theorem is related to \cite[Theorem 3]{Va2}, which additionally assumes that $\kappa^*_\mu(\mathcal{K}) = \omega$; in other words it assumes $\mathcal{K}$ $\mu$-superstable. Instead, we allow for strict stability (that is, $\kappa^*_\mu(\mathcal{K})$ to be uncountable) at the cost of only guaranteeing continuity at limits of large cofinality. In particular, the proof is similar to the proof of $(a)\to(b)$ in \cite[Theorem 3]{Va2}, but we crucially allow our towers to be discontinuous at $\gamma$ where $\cf(\gamma)<\kappa^*_\mu(\mathcal{K})$. We provide the details where the proof differs. \begin{theorem}\label{reduced are continuous} Suppose $\mathcal{K}$ satisfies Assumption \ref{assm}. Let $\alpha$ be an ordinal and $\delta$ be a limit ordinal so that $\kappa^*_\mu(\mathcal{K})\leq\cf(\delta)<\alpha$. If $\mathcal{K}$ satisfies $(\mu, \delta)$-symmetry for non-$\mu$-splitting and $(\bar M,\bar a,\bar N)\in\mathcal{K}^*_{\mu,\alpha}$ is reduced, then the tower $(\bar M,\bar a,\bar N)$ is continuous at $\delta$ (i.e., $M_\delta=\bigcup_{\beta<\delta}M_\beta$). \end{theorem} \begin{proof} Suppose the theorem is false. 
Then we can find a reduced tower $\mathcal{T} := (\bar M, \bar a, \bar N) \in \mathcal{K}^*_{\mu, \alpha}$ that is a counterexample of minimal length at $\delta$ in the sense that: \begin{enumerate} \item $M_\delta \neq \cup_{i<\delta} M_i$ and \item if $(\bar M', \bar a', \bar N') \in \mathcal{K}^*_{\mu, \alpha'}$ is reduced and discontinuous at $\delta$, then $\alpha \leq \alpha'$. \end{enumerate} Notice that Fact \ref{monotonicity} implies that $\alpha=\delta+1$. Let $b\in M_\delta\backslash \bigcup_{i<\delta}M_i$ witness the discontinuity of the tower at $\delta$. By Fact \ref{density of reduced} and Fact \ref{union of reduced is reduced}, we can build $\mathcal{T}^i=(\bar M^i,\bar a^i,\bar N^i)\in\mathcal{K}^*_{\mu,\delta}$ for $i \leq \delta$ such that $\mathcal{T}^0 = \mathcal{T}\upharpoonright \delta$ and $\langle \mathcal{T}^i \mid i \leq \delta\rangle$ is a $<$-increasing, continuous chain. By $\delta$-applications of Fact \ref{density of reduced} in between successor stages of the construction, we can require that for $\beta<\delta$ \begin{align}\label{limit at successor}\begin{split} M^{i+1}_{\beta}\text{ is a }(\mu,\delta)\text{-limit over }M^i_{\beta}\\ \text{and consequently }M^{i+1}_{\beta}\text{ is a }(\mu,\delta)\text{-limit over }N_{\beta}. \end{split} \end{align} Let $\displaystyle{M^\delta_{diag}:=\bigcup_{i<\delta,\;\beta<\delta}M^i_\beta}$. Figure \ref{fig:Mdeltas} is an illustration of these models. \begin{figure} \caption{$(\bar M,\bar a,\bar N)$ and the towers $(\bar M,\bar a,\bar N)^j$ extending $(\bar M,\bar a,\bar N)\upharpoonrightriction\delta$.} \label{fig:Mdeltas} \end{figure} There are two cases depending on whether $b$ is in $M^\delta_{diag}$ or not. Both cases lead to a contradiction of our assumption that $\mathcal{T}$ is reduced. {\bf Case 1:} $b\in M^\delta_{diag}$\\ The first case will contradict our assumption that $(\bar M,\bar a,\bar N)$ is reduced. We have that $\mathcal{T}^\delta$ is an extension of $\mathcal{T}\upharpoonright \delta$ and that $M^\delta_{diag}$ contains $b$. Let $M^\delta_\delta$ be an extension of $M^\delta_{diag}$ that is also a universal extension of $M_\delta$. Then $\mathcal{T}^\delta {}^\frown \langle M^\delta_\delta \rangle$ is an extension of $\mathcal{T}$. Since $b \in M^\delta_{diag}$, there is some $j < \delta$ so $b \in M^\delta_j$. Because $\mathcal{T}$ is reduced, we have that $$M_j^\delta \cap \bigcup_{i < \alpha} M_i = M_j.$$ Notice that the $M^\delta_j \cap M_\delta$ on the LHS contains $b$, but the RHS does not contain $b$, a contradiction. {\bf Case 2:} $b\notin M^\delta_{diag}$\\ Then $\tp(b/M^\delta_{diag})$ is non-algebraic. Consider the sequence $\langle \check M_i\mid i<\delta\rangle$ defined by $\check M_i:=M^i_i$ if $i$ is a successor and $\check M_i:=\bigcup_{j<i}M^j_j$ for $i$ a limit ordinal. Notice that $(\ref{limit at successor})$ implies that this sequence witnesses that $M^\delta_{diag}$ is a $(\mu,\delta)$-limit model. Because $M^\delta_{diag}$ is a $(\mu,\delta)$-limit model, by our assumption that $\cf(\delta)\geq\kappa^*_\mu(\mathcal{K})$ and monotonicity of non-splitting, there exists a successor ordinal $i^*<\delta$ so that \begin{equation}\label{i* equation} \tp(b/M^\delta_{diag})\text{ does not }\mu\text{-split over }M^{i^*}_{i^*}. \end{equation} Our next step in Case (2) is to consider the tower formed by the diagonal elements in Figure \ref{fig:Mdeltas}. In particular, let $\mathcal{T}^{diag}$ be the sequence $(M^i_i, a_i, N_i)_{i<\delta}$. 
We claim that $\mathcal{T}^{diag} \in \mathcal{K}^*_{\mu, \delta}$ and that $\mathcal{T}^{diag}$ extends $\mathcal{T}\upharpoonright \delta$. We will now use $\mathcal{T}^{diag}$ to construct a tower containing $b$ that extends $\mathcal{T}\upharpoonright \delta$. First we find an approximation, $\mathcal{T}^b$, which is a tower of length $i^*+1$ that contains $b$ and extends $\mathcal{T}^{diag}\upharpoonrightriction(i^*+2)$. Then through a directed system of mappings, we move this tower so that the result is as desired. To define $\mathcal{T}^b$, first notice that by (\ref{limit at successor}), $M^{i^*}_{i^*}$ is a $(\mu,\delta)$-limit over $N_{i^*}$. Now, referring to the Figure \ref{fig:sym}, apply $(\mu,\delta)$-symmetry to $a_{i^*}$ standing in for $a$, $M^{i^*}_{i^*}$ representing $M_0$, $N_{i^*}$ as $N$, $M^\delta_{diag}$ as $M$, and $b$ as itself. We can conclude that there exists $M^b$ containing $b$, a limit model over $M^{i^*}_{i^*}$, for which $\tp(a_{i^*}/M^b)$ does not $\mu$-split over $N_{i^*}$. Define the tower $\mathcal{T}^b\in\mathcal{K}^*_{\mu,i^*+2}$ by the sequences $\bar a\upharpoonrightriction (i^*+1)$, $\bar N\upharpoonrightriction (i^*+1)$ and $\bar M'$ with $M'_j:=M^j_j$ for $j\leq i^*$ and $M'_{i^*+1}:=M^b$. Notice that $\mathcal{T}^b$ is an extension of $\mathcal{T}^{diag}\upharpoonrightriction(i^*+2)$ containing $b$. Next, we will explain how we can use this tower to find a tower $\mathring\mathcal{T}^\delta\in\mathcal{K}^*_{\mu,\delta}$ extending $\mathcal{T}^{diag}$ with $b\in \bigcup_{j<\delta}\mathring M^\delta_{j}$. This will be enough to contradict our assumption that $\mathcal{T}$ was reduced. We want to build $\langle \mathring\mathcal{T}^j, f_{j,k}\mid i^*+2\leq j\leq k\leq\delta\rangle$ a directed system of towers so that for $j \geq i^*+2$ \begin{enumerate} \item\label{base} $\mathring\mathcal{T}^{i^*+2}=\mathcal{T}^b$ \item $\mathring\mathcal{T}^j\in\mathcal{K}^*_{\mu,j}$ for $j\leq\delta$ \item $\mathcal{T}^{diag}\upharpoonrightriction j \leq\mathring\mathcal{T}^j$ for $j\leq\delta$ \item $f_{j,k}(\mathring\mathcal{T}^j)\leq\mathring\mathcal{T}^k\upharpoonrightriction j$ for $j\leq k<\delta$ \item\label{id condition} $f_{j,k}\upharpoonrightriction M^{j}_j=id_{M^{j}_j}$ $j\leq k<\delta$ \item\label{limit M'} $\mathring M^{j+1}_{j+1}$ is universal over $f_{j,j+1}(\mathring M^j_j)$ for $j<\delta$ \item\label{b in} $b\in\mathring M^{j}_{i^*+1}$ for $j\leq\delta$ \item\label{non splitting} $\tp(f_{j,k}(b)/M^{k}_{k})$ does not $\mu$-split over $M^{i^*}_{i^*}$ for $j<k<\delta$. \end{enumerate} {\bf Construction:} We will define this directed system by induction on $k$, with $i^*+2\leq k\leq\alpha$. The base and successor case are exactly as in the proof of Theorem 5 of \cite{Va2}. The only difference in the construction here is at limit stages in which $\mathcal{T}^{diag}$ is not continuous. Therefore we will concentrate on the details of the construction for stage $k$ and $k+1$ where $k<\delta$ is a limit ordinal for which $\mathcal{T}^{diag}$ is discontinuous at $k$. {\bf Construction, Case 1:} $k$ is limit where $\mathcal{T}^{diag}$ is discontinuous.\\ First, let $\grave \mathcal{T}^k$ and $\langle\grave f_{j,k}\mid i^*+2\leq j<k\rangle$ be a direct limit of the system defined so far. We use the $\grave{}$ notation since these are only approximations to the tower and mappings that we are looking for. We will have to take some care to find a direct limit that contains $b$ in order to satisfy Condition \ref{b in} of the construction. 
By Fact \ref{limit stage prop} and our induction hypothesis, we may choose this direct limit so that for all $j<k$ \begin{equation*} \grave f_{j,k}\upharpoonrightriction M^{j}_j=id_{M^{j}_j}. \end{equation*} Consequently $\grave M^\alpha_j:=\grave f_{j,k}(\mathring M^j_j)$ is universal over $M^{j}_j$, and $\bigcup_{j<k}\grave M^k_j$ is a limit model witnessed by Condition \ref{limit M'} of the construction. Additionally, the tower $\grave\mathcal{T}^k$ composed of the models $\grave M^k_j$, extends $\mathcal{T}^{diag}\upharpoonrightriction k$. We will next show that for every $j<k$, \begin{equation}\label{limit non split eqn} \tp(\grave f_{i^*+2,k}(b)/M^j_j)\text{ does not }\mu\text{-split over }M^{i^*}_{i^*}. \end{equation} To see this, recall that for every $j<k$, by the definition of a direct limit, $\grave f_{i^*+2,k}(b)=\grave f_{j,k}(f_{i^*+2,j}(b))$. By Condition \ref{non splitting} of the construction, we know \begin{equation*} \tp(f_{i^*+2,j}(b)/M^{j}_{j})\text{ does not }\mu\text{-split over }M^{i^*}_{i^*}. \end{equation*} Applying $\grave f_{j,k}$ to this implies $\tp(\grave f_{i^*+2,k}(b)/M^j_j)$ does not $\mu$-split over $M^{i^*}_{i^*}$, establishing $(\ref{limit non split eqn})$. Because $M^{j+1}_{j+1}$ is universal over $M^j_j$ by construction, we can apply the continuity of non-splitting to $(\ref{limit non split eqn})$, yielding \begin{equation}\label{grave f} \tp(\grave f_{i^*+2,k}(b)/\bigcup_{j<k}M^j_j)\text{ does not }\mu\text{-split over }M^{i^*}_{i^*}. \end{equation} Because $\grave f_{i^*+2,k}$ fixes $M^{i^*+1}_{i^*+1}$, $\tp(b/M^{i^*+1}_{i^*+1})=\tp(\grave f_{i^*+2,k}(b)/M^{i^*+1}_{i^*+1})$. We can then apply the uniqueness of non-splitting extensions (see \cite[Theorem I.4.12]{Va1}) to $(\ref{grave f})$ to see that $\tp(\grave f_{i^*+2,k}(b)/\bigcup_{j<k}M^j_j)=\tp(b/\bigcup_{j<k}M^j_j)$. Thus we can fix $g$ an automorphism of the monster model fixing $\bigcup_{j<k}M^j_j$ so that $g(\grave f_{i^*+2,k}(b))=b.$ We will then define $\mathring \mathcal{T}^k$ to be the tower $g(\grave\mathcal{T}^k)$, and the mappings for our directed system will be $f_{j,k}:=g\circ\grave f_{j,k}$ for all $ i^*+2\leq j<k$. Notice that by our induction hypothesis we have that $b\in \mathring M^{i^*+2}_{i^*+1}$. Then, by definition of a direct limit we have $\grave f_{i^*+2,k}(b)\in \grave M^k_{i^*+1}$. Therefore $g(\grave f_{i^*+2,k}(b))=b\in \mathring M^k_{i^*+1}$, satisfying Condition \ref{b in} of the construction. Furthermore for all $j<k$, we have that $f_{j,k}(b)=b$. Therefore by $(\ref{i* equation})$ and monotonicity of non-splitting, Condition \ref{non splitting} of the construction holds. Notice that $\mathcal{T}^{diag}$ being discontinuous at $k$ does not impact this stage of the construction since we only require that $\mathring \mathcal{T}^k$ be a tower of length $k$ and therefore $\mathring \mathcal{T}^k$ need not contain models extending $M^k_k$. The discontinuity plays a role at the next stage of the construction. {\bf Construction, Case 2:} $k+1$ is successor of limit where $\mathcal{T}^{diag}$ is discontinuous.\\ Suppose that $\mathcal{T}^{diag}$ is discontinuous at $k$ and that $\mathring\mathcal{T}^k\in\mathcal{K}^*_{\mu,k}$ has been defined. By our choice of $i^*$, we have $\tp(b/\bigcup_{l<\alpha}M^{l}_l)$ does not $\mu$-split over $M^{i^*}_{i^*}$. So in particular by monotonicity of non-splitting, we notice: \begin{equation}\label{Mjj non-split} \tp(b/M^{k+1}_{k})\text{ does not }\mu\text{-split over }M^{i^*}_{i^*}. \end{equation} Using the definition of towers (i.e. 
$M^{k+1}_{k}$ is a $(\mu,\delta)$-limit over $N_{k}$ and $\tp(a_k/M^{k+1}_k)$ does not $\mu$-split over $N_k$) and the choice of $i^*$, we can apply $(\mu,\delta)$-symmetry to $a_{k}$, $M^{k+1}_{k}$, $ \bigcup_{l<\delta}M^{l}_l$, $b$ and $N_{k}$ which will yield $M^b_{k}$ a limit model over $M^{k+1}_{k}$ containing $b$ so that $\tp(a_{k}/M^b_{k})$ does not $\mu$-split over $N_{k}$ (see Figure \ref{fig:successor}). \begin{figure} \caption{A diagram of the application of $(\mu,\delta)$-symmetry in the successor stage of the directed system construction in the proof of Theorem \ref{reduced are continuous} \label{fig:successor} \end{figure} Notice that $M^b_k$ has no relationship to $\mathring \mathcal{T}^k$. In particular, it does not contain $\bigcup_{l<k}\mathring M^l_l$. Fix $M'$ to be a model of cardinality $\mu$ extending both $\bigcup_{l<k}\mathring M^l_l$ and $M^{k+1}_{k}$. Since $M^b_{k}$ is a limit model over $M^{k+1}_{k}$ which is a limit model over $M^k_k$, there exits $f:M'\rightarrow M^{k+1}_{k}$ with $f=id_{M^{k}_{k}}$ so that $M^b_{k}$ is also universal over $f(\bigcup_{l<k}\mathring M^l_l)$. Because $\tp(b/M^k_k)$ does not $\mu$-split over $M^{i^*}_{i^*}$ and $f$ fixes $M^k_k$, we know that $\tp(f(b)/M^k_k)$ does not $\mu$-split over $M^{i^*}_{i^*}$. But because $f(b)$ and $b$ both realize the same types over $M^{i^*+1}_{i^*+1}$, we can conclude by the uniqueness of non-splitting extensions that $\tp(f(b)/M^k_k)=\tp(b/M^k_k)$; so there is $g\in\Aut_{M^k_k}(\mathfrak{C})$ with $g(f(b))=b$. Since $M^b_k$ is universal over $M^k_k$ and $b\in M^b_k$, we can choose $g$ so that $g(f(M'))\prec_{\mathcal{K}}M^b_k$. Take $\mathring M^{k+1}_{k}$ to be an extension of $M^b_{k}$ which is also universal over $M^{k+1}_{k+1}$, and set $f_{k,k+1}:=g\circ f$. To see that Condition \ref{non splitting} of the construction holds, just apply monotonicity and the fact that $f_{k,k+1}(b)=b$ to $(\ref{i* equation})$. See figure \ref{fig:reduced tower}. \begin{figure} \caption{The construction of $\mathring \mathcal{T} \label{fig:reduced tower} \end{figure} It is easy to check by invariance and the induction hypothesis that $\mathring \mathcal{T}^{k+1}$ defined by the models $\mathring M^{k+1}_l:=f_{k,k+1}(\mathring M^k_l)$ for $l< k$ satisfies the remaining requirements on $\mathring \mathcal{T}^{k+1}$. Then the rest of the directed system can be defined by the induction hypothesis and the mappings $f_{l,k+1}:=f_{l,k}\circ f_{k,k+1}$ for $i^*+2\leq l<k$. This completes the construction. {\bf Case (2), continued:} Now that we have a tower $\mathring\mathcal{T}^\delta$ extending $\mathcal{T}\upharpoonrightriction\delta$ which contains $b$, we are in a situation similar to the proof in Case $(1)$. To contradict that $\mathcal{T}$ is reduced, we need only lengthen $\mathring\mathcal{T}^\delta$ to a discontinuous extension of the entire tower $(\bar M,\bar a,\bar N)$ by taking the $\delta^{th}$ model to be some extension of $\bigcup_{i<\delta}\mathring M^i_i$ which is also universal over $M_\delta$. This discontinuous extension of $(\bar M,\bar a,\bar N)$ along with $b\in\mathring M^{\delta}_{i^*+1}$ witness that $(\bar M,\bar a,\bar N)$ cannot be reduced. \end{proof} Although not used here, the converse of this theorem is also true, as in \cite{Va2}. Note that the following does not have any assumption about $\kappa^*_\mu(\mathcal{K})$. \begin{proposition} Suppose $\mathcal{K}$ satisfies Assumption \ref{assm}.(1), (2), and (4). 
Suppose further that, for every reduced tower $(\bar M, \bar a, \bar N) \in \mathcal{K}_{\mu, \alpha}^*$, $\bar M$ is continuous at limit ordinals of cofinality $\delta$. Then $\mathcal{K}$ satisfies $(\mu, \delta)$-symmetry for non-$\mu$-splitting. \end{proposition} \begin{proof} The proof is an easy adaptation of \cite[Theorem 3.$(b)\to(a)$]{Va2}. The same argument works; the only adaptations are to require that every limit model in fact be a $(\mu, \delta)$-limit model and that the tower $\mathcal{T}$ be of length $\delta+1$\footnote{In a happy coincidence, the notation in that proof already agrees with this change.}. \end{proof} \section{Uniqueness of Long Limit Models} \label{ulm-sec} We now begin the proof of Theorem \ref{uniqueness theorem}, which we restate here. \begin{theorem1} Suppose that $\mathcal{K}$ is an abstract elementary class satisfying Assumption \ref{assm}. For $\theta$ and $\delta$ limit ordinals $<\mu^+$ both with cofinality $\geq\kappa^*_\mu(\mathcal{K})$, if $\mathcal{K}$ satisfies symmetry for non-$\mu$-splitting (or just $(\mu,\delta)$-symmetry), then, for any $M_1$ and $M_2$ that are $(\mu,\theta)$ and $(\mu,\delta)$-limit models over $M_0$, respectively, we have that $M_1$ and $M_2$ are isomorphic over $M_0$. \end{theorem1} The structure of the proof of Theorem \ref{uniqueness theorem} from this point on is similar to the proof in \cite[Theorem 1.9]{gvv}. For completeness we include the details here, and emphasize the points of departure from \cite[Theorem 1.9]{gvv}. We construct an array of models which will produce a model that is both a $(\mu,\theta)$- and a $(\mu,\delta)$-limit model. Let $\theta$ be an ordinal as in the definition of relatively full towers so that $\cf(\theta)\geq\kappa^*_\mu(\mathcal{K})$ and let $\delta=\kappa^*_\mu(\mathcal{K})$. The goal is to build an array of models with $\delta+1$ rows so that the bottom row of the array is a relatively full tower indexed by a set of cofinality $\theta+1$ continuous at $\theta$. To do this, we will be adding elements to the index set of towers row by row, so that at stage $\beta$ of our construction the tower that we build is indexed by the set $I_\beta$ described below. The index sets $I_\beta$ will be defined inductively so that $\langle I_\beta\mid \beta<\delta+1\rangle$ is an increasing and continuous chain of well-ordered sets. We fix $I_0$ to be an index set of order type $\theta+1$ and will denote it by $\langle i_\alpha\mid\alpha\leq\theta\rangle$. We will refer to the members of $I_0$ by name in many stages of the construction. These indices serve as anchors for the members of the remaining index sets in the array. Next we demand that for each $\beta<\delta$, $\{j\in I_\beta\mid i_\alpha<j<i_{\alpha+1}\}$ has order type $\mu\cdot \beta$ and that each $I_\beta$ has supremum $i_\theta$. An example of such $\langle I_\beta\mid \beta\leq\delta\rangle$ is $I_\beta=\theta\times(\mu\cdot \beta)\bigcup\{i_\theta\}$ ordered lexicographically, where $i_\theta$ is an element $\geq$ each $i\in \bigcup_{\beta<\delta}I_\beta$. Also, let $I=\bigcup_{\beta<\delta}I_\beta$. To prove Theorem \ref{uniqueness theorem}, we need to prove that, for a fixed $M\in\mathcal{K}$ of cardinality $\mu$, any $(\mu,\theta)$-limit and $(\mu,\delta)$-limit model over $M$ are isomorphic over $M$. Since all $(\mu, \theta)$-limits over $M$ are isomorphic over $M$ (and the same holds for $(\mu, \delta)$-limits), it is enough to construct a single model that is simultaneously a $(\mu, \theta)$-limit and a $(\mu, \delta)$-limit over $M$.
Let us begin by fixing a limit model $M\in\mathcal{K}_\mu$. We define, by induction on $\beta\leq\delta$, a $<$-increasing and continuous sequence of towers $(\bar M,\bar a,\bar N)^\beta$ such that \begin{enumerate} \item $\mathcal{T}^0:=(\bar M,\bar a,\bar N)^0$ is a tower with $M^0_0=M$. \item $\mathcal{T}^\beta:=(\bar M,\bar a,\bar N)^\beta\in\mathcal{K}^*_{\mu,I_\beta}$. \item\label{realizing types} For every $(p,N)\in\operatorname{\mathfrak{S}t}(M^\beta_i)$ with $i_\alpha\leq i< i_{\alpha+1}$ there is $j\in I_{\beta+1}$ with $i_\alpha< j<i_{\alpha+1}$ so that $(\tp(a_j/M^{\beta+1}_j),N^{\beta+1}_j)$ and $(p,N)$ are parallel. \end{enumerate} See Figure \ref{fig:arrayconstruction}. \begin{figure} \caption{The chain of length $\delta$ of towers of increasing index sets $I_j$ of cofinality $\theta+1$. The symbol $\lll$ indicates that there are $\mu$ many new indices between $i_\beta$ and $i_{\beta+1} \label{fig:arrayconstruction} \end{figure} Given $M$, we can find a tower $(\bar M,\bar a,\bar N)^0\in\mathcal{K}^*_{\mu,I_0}$ with $M\preceq_{\mathcal{K}}M^0_0$ because of the existence of universal extensions and because $\kappa^*_\mu(\mathcal{K})<\mu^+$. At successor stages we first take an extension of $(\bar M,\bar a,\bar N)^\beta$ indexed by $I_{\beta+1}$ and realizing all the strong types over the models in $(\bar M,\bar a,\bar N)^\beta$. This tower may not be reduced, but by Fact \ref{density of reduced}, it has a reduced extension. At limit stages take unions of the chain of towers defined so far. Notice that by Fact \ref{union of reduced is reduced}, the tower $\mathcal{T}^\delta$ formed by the union of all the $(\bar M,\bar a,\bar N)^\beta$ is reduced. Furthermore, by Theorem \ref{reduced are continuous} every one of the reduced towers $\mathcal{T}^j$ is continuous at $\theta$ because $\cf(\theta)\geq\kappa^*_\mu(\mathcal{K})$. Therefore $M^\delta_{i_\theta}=\bigcup_{k<\theta}M^\delta_{i_k}$, and by the definition of the ordering $<$ on towers, the last model in this tower ($M^\delta_{i_\theta}$) is a $(\mu,\delta)$-limit model witnessed by $\langle M^j_{i_\theta}\mid j<\delta\rangle$. Since $M^1_{i_\theta}$ is universal over $M$, we have that $M^\delta_{i_\theta}$ is $(\mu, \delta)$-limit over $M$. Next to see that $M^\delta_{i_\theta}$ is also a $(\mu,\theta)$-limit model, notice that $\mathcal{T}^\delta$ is relatively full by condition \ref{realizing types} of the construction and the same argument as \cite[Claim 5.11]{gvv}. Therefore by Theorem \ref{reduced are continuous} and our choice of $\delta$ with $\cf(\delta)\geq\kappa^*_\mu(\mathcal{K})$, the last model $M^\delta_{i_\theta}$ in this relatively full tower is a $(\mu,\theta)$-limit model over $M$. This completes the proof of Theorem \ref{uniqueness theorem}. \end{document}
\begin{document} \theoremstyle{plain} \newtheorem{theorem}{Theorem} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{conjecture}[theorem]{Conjecture} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \begin{center} \vskip 1cm{\LARGE\bf Factors of Alternating Convolution of the Gessel Numbers \vskip 1cm} \large Jovan Miki\'{c}\\ University of Banja Luka\\ Faculty of Technology\\ Bosnia and Herzegovina\\ \href{mailto:[email protected]}{\tt [email protected]} \\ \end{center} \vskip .2in \begin{abstract} The Gessel number $P(n,r)$ is the number of the paths in plane with $(1,0)$ and $(0,1)$ steps from $(0,0)$ to $(n+r, n+r-1)$ that never touch any of the points from the set $\{(x,x) \in \mathbb{Z}^2: x \geq r\}$. We show that there is a close relationship between the Gessel numbers $P(n,r)$ and the super Catalan numbers $S(n,r)$. By using new sums, we prove that an alternating convolution of the Gessel numbers $P(n,r)$ is always divisible by $\frac{1}{2}S(n,r)$. \end{abstract} \noindent\emph{ \textbf{Keywords:}} Gessel Number, Super Catalan Number, Catalan Number, $M$ sum, Stanley's formula. \noindent \textbf{2020} {\it \textbf{Mathematics Subject Classification}}: 05A10, 11B65. \section{Introduction}\label{l:1} Let $n$ be a non-negative integer, and let $r$ be a fixed positive integer. Let the number $P(n,r)$ denote $\frac{r}{2(n+r)}\binom{2n}{n}\binom{2r}{r}$. We shall call $P(n,r)$ as the $n$th Gessel number of order $r$. It is known that $P(n,r)$ is always an integer. They have an interesting combinatorial interpretation. The Gessel number $P(n,r)$ \cite[p.\ 191]{IG} counts all paths in the plane with unit horizontal and vertical steps from $(0, 0)$ to $(n+r, n+r-1)$ that never touch any of the points $(r,r)$, $(r+1, r+1)$, $\ldots$, . Recently \cite[Theorem 5, p.\ 2]{JM6}, it is shown that $P(n,r)$ is the number of all lattice paths in plane with $(1,0)$ and $(0,1)$ steps from $(0,0)$ to $(n+r, n+r-1)$ that never touch any of the points from the set $\{(x,x)\in \mathbb{Z}^2:1\leq x \leq n\}$; where $n$ and $r$ are positive integers. Let $C_n=\frac{1}{n+1}\binom{2n}{n}$ denote the $n$th Catalan number, and let $S(n,r)=\frac{\binom{2n}{n}\binom{2r}{r}}{\binom{n+r}{n}}$ denote the $n$th super Catalan number of order $r$. Obviously, $P(n,1)=C_n$. Furthermore, it is readily verified that \begin{equation}\label{eq:1} P(n,r)=\binom{n+r-1}{n}\frac{1}{2}S(n,r)\text{.} \end{equation} It is known that $S(n,r)$ is always an even integer except for the case $n=r=0$. See \cite[Introduction]{EAllen} and \cite[Eq.~(1), p.\ 1]{DC}. For only a few values of $r$, there exist combinatorial interpretations of $S(n,r)$. See, for example \cite{EAllen, DC, ChenWang,PipSch, Sch}. The problem of finding a combinatorial interpretation for super Catalan numbers of an arbitrary order $r$ is an intriguing open problem. By using Eq.~(\ref{eq:1}), it follows that $P(n,r)$ is an integer. Note that Gessel numbers $P(n,r)$ have a generalization \cite[Eq.~(1.10), p.\ 2]{VG2}. Also, it is known \cite[p.\ 191]{IG} that, for a fixed positive integer $r$, the smallest positive integer $K_r$ such that $\frac{K_r}{n+r}\binom{2n}{n}$ is an integer for every $n$ is $\frac{r}{2}\binom{2r}{r}$. 
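As a quick numerical sanity check (this is our own illustration, with function names of our choosing, and it is not needed anywhere in the proofs), the following Python sketch computes $P(n,r)$ and $S(n,r)$ directly from their defining formulas and verifies Eq.~(\ref{eq:1}), the integrality of $P(n,r)$, and the evenness of $S(n,r)$ on small values.
\begin{verbatim}
from math import comb
from fractions import Fraction

def gessel(n, r):
    # P(n,r) = r/(2(n+r)) * C(2n,n) * C(2r,r)
    return Fraction(r, 2 * (n + r)) * comb(2 * n, n) * comb(2 * r, r)

def super_catalan(n, r):
    # S(n,r) = C(2n,n) * C(2r,r) / C(n+r,n)
    return Fraction(comb(2 * n, n) * comb(2 * r, r), comb(n + r, n))

for n in range(0, 8):
    for r in range(1, 8):
        P, S = gessel(n, r), super_catalan(n, r)
        assert P.denominator == 1               # P(n,r) is an integer
        assert (S / 2).denominator == 1         # S(n,r) is even
        assert P == comb(n + r - 1, n) * S / 2  # Eq. (1)
print("all checks passed")
\end{verbatim}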
Let us consider the following sum: \begin{equation} \varphi(2n,m,r-1)=\sum_{k=0}^{2n}(-1)^k\binom{2n}{k}^m P(k,r)P(2n-k,r)\text{;}\label{eq:2} \end{equation} where $m$ is a positive integer. For $r=1$, the sum in Eq.~(\ref{eq:2}) reduces to \begin{equation} \varphi(2n,m,0)=\sum_{k=0}^{2n}(-1)^k\binom{2n}{k}^m C_k C_{2n-k}\text{.}\label{eq:3} \end{equation} Recently, by using new method, it is shown \cite[Cor.\ 4, p.\ 2]{JM4} that the sum $\varphi(2n,m,0)$ is divisible by $\binom{2n}{n}$ for all non-negative integers $n$ and for all positive integers $m$. In particular, $\varphi(2n,1,0)=C_n\binom{2n}{n}$. See \cite[Th.\ 1, Eq.~(2)]{JM5}. Interestingly, Gessel numbers appear \cite[Eq.~(68), p.\ 17]{JM4} in this proof. By using the Eq.~(\ref{eq:1}), the sum $ \varphi(2n,m,r-1)$ can be rewritten, as follows: \begin{equation}\label{eq:4} \sum_{k=0}^{2n}(-1)^k\binom{2n}{k}^m \binom{k+r-1}{k}\binom{2n-k+r-1}{2n-k}\frac{1}{2}S(k,r)\frac{1}{2}S(2n-k,r)\text{.} \end{equation} Let $\varPsi(2n,m,r-1)$ denote the following sum \begin{equation}\label{eq:5} \sum_{k=0}^{2n}(-1)^k\binom{2n}{k}^m S(k,r)S(2n-k,r)\text{.} \end{equation} Recently, by using new method, it is shown \cite[Th.\ 3, p.\ 3]{JM3} that the sum $\varPsi(2n,m,r-1)$ is divisible by $S(n,r)$ for all non-negative integers $n$ and $m$. In particular, $\varPsi(2n,1,r-1)=S(n,r)S(n+r,n)$. See \cite[Th.\ 1, Eq.~(1), p.\ 2]{JM3}. Also, it is known \cite[Th.\ 12]{JM2} that the sum $\sum_{k=0}^{2n}(-1)^k\binom{2n}{k}^m \binom{k+r-1}{k}\binom{2n-k+r-1}{2n-k}$ is divisible by $lcm(\binom{2n}{n},\binom{n+r-1}{n})$ for all non-negative integers $n$ and all positive integers $m$ and $r$. Our main result is, as follows: \begin{theorem}\label{t:1} The sum $\varphi(2n,m,r-1)$ is always divisible by $\frac{1}{2}S(n,r)$ for all non-negative integers $n$ and for all positive integers $m$ and $r$. \end{theorem} For proving Theorem \ref{t:1}, we use a new class of binomial sums \cite[Eqns.~(27) and (28)]{JM2} that we call $M$ sums. \begin{definition}\label{def:1} Let $n$ and $a$ be non-negative integers, and let $m$ be a positive integer. Let $S(n,m,a)=\sum_{k=0}^{n}\binom{n}{k}^m F(n,k,a)$, where $F(n,k,a)$ is an integer-valued function. Then $M$ sums for the sum $S(n,m,a)$ are, as follows: \begin{equation}\label{eq:6} M_S(n,j,t;a)=\binom{n-j}{j}\sum_{v=0}^{n-2j}\binom{n-2j}{v}\binom{n}{j+v}^t F(n, j+v, a)\text{;} \end{equation} where $j$ and $t$ are non-negative integers such that $j\leq \lfloor \frac{n}{2} \rfloor$. \end{definition} Obviously, the following equation \cite[Eq.~(29)]{JM2}, \begin{equation}\label{eq:7} S(n,m,a)=M_S(n,0,m-1;a) \end{equation} holds. Let $n$, $j$, $t$, and $a$ be same as in Definition \ref{def:1}. It is known \cite[Th.\ 8]{JM2} that $M$ sums satisfy the following recurrence relation: \begin{equation}\label{eq:8} M_S(n,j,t+1;a)=\binom{n}{j}\sum_{u=0}^{\lfloor\frac{n-2j}{2}\rfloor}\binom{n-j}{u}M_S(n,j+u,t;a)\text{.} \end{equation} Moreover, we shall prove that $M$ sums satisfy another interesting recurrence relation: \begin{theorem}\label{t:2} Let $n$ and $a$ be non-negative integers, and let $m$ be a positive integer. Let $R(n,m,a)$ denote \begin{equation} \sum_{k=0}^{n}\binom{n}{k}^m G(n,k,a)\text{;}\notag \end{equation} where $G(n,k, a)$ is an integer-valued function. Let $Q(n,m,a)$ denote \begin{equation} \sum_{k=0}^{n}\binom{n}{k}^m H(n,k,a)\text{;}\notag \end{equation} where $H(n,k,a)=\binom{a+k}{a}\binom{a+n-k}{a}G(n,k,a)$. 
Then the following recurrence relation is true: \begin{equation} M_Q(n,j,0;a)=\binom{a+j}{a}\sum_{l=0}^{a}\binom{n-j+l}{l}\binom{n-j}{a-l}M_R(n,j+a-l,0;a)\text{.}\label{eq:9} \end{equation} \end{theorem} Note that, by using substitution $k=v+j$, the Eq.~(\ref{eq:6}) becomes, as follows: \begin{equation} M_S(n,j,t;a)=\binom{n-j}{j}\sum_{k=j}^{n-j}\binom{n-2j}{k-j}\binom{n}{k}^t F(n, k, a)\text{.}\label{eq:10} \end{equation}\label{eq:15} From now on, we use the Eq.~(\ref{eq:10}) instead of the Eq.~(\ref{eq:6}). \section{Background}\label{l:2} In $ 1998$, Calkin proved that the alternating binomial sum $S_1(2n, m)=\sum_{k=0}^{2n}(-1)^k\binom{2n}{k}^m $ is divisible by $\binom{2n}{n} $ for all non-negative integers $n$ and all positive integers $m$. In 2007, Guo, Jouhet, and Zeng proved, among other things, two generalizations of Calkin's result \cite[Thm.\ 1.2, Thm.\ 1.3, p.\ 2]{VG1}. In $2018$, Calkin's result \cite[Thm.\ 1]{NC} is proved by using $D$ sums \cite[Section 8]{JM1}. Note that there is a close relationship between $D$ sums and $M$ sums \cite[Eqns.~(9) and (19)]{JM2}. Recently, Calkin's result \cite[Thm.\ 1]{NC} is proved by using $M$ sums \cite[Section 5]{JM2}. In particular, it is known \cite[Eqns.~(22) and (25)]{JM2} that: \begin{align*} M_{S_1}(2n,j,0)&= \begin{cases}0, & \text{if } 0 \leq j <n;\\ (-1)^n, & \text{if } j=n. \end{cases}\\ M_{S_1}(2n,j,1)&=(-1)^n\binom{2n}{n}\binom{n}{j}\text{.} \end{align*} By using $M$ sums, it is shown \cite[Section 7]{JM2} that the sum $S_2(2n,m,a)\\=\sum_{k=0}^{2n}(-1)^k\binom{2n}{k}^m\binom{a+k}{k}\binom{a+2n-k}{2n-k}$ is divisible by $lcm(\binom{a+n}{a}, \binom{2n}{n})$ for all non-negative integers $n$ and $a$; and for all positive integers $m$. In particular, it is known \cite[Eqns.~(52) and (57)]{JM2} that: \begin{align*} M_{S_2}(2n,j,0;a)&=(-1)^{n-j}\binom{a+n}{a}\binom{a+j}{j}\binom{a}{n-j},\\ M_{S_2}(2n,j,1;a)&=(-1)^{n-j}\binom{2n}{n}\sum_{u=0}^{n-j}(-1)^u\binom{n}{j+u}\binom{j+u}{u}\binom{a+j+u}{j+u}\binom{a+n}{2n-j-u}\text{.} \end{align*} By using $D$ sums, it is shown \cite[Thm.\ 1]{JM4} that the sum $S_3(2n,m)\\ =\sum_{k=0}^{2n}(-1)^k\binom{2n}{k}^m\binom{2k}{k}\binom{2(2n-k)}{2n-k}$ is divisible by $\binom{2n}{n}$ for all non-negative integers $n$, and for all positive integers $m$. The same result also can be proved by using $M$ sums. It can be shown that \begin{equation}\label{eq:11} M_{S_3}(2n,j,0)=(-1)^j\binom{2n}{n}\binom{2j}{j}\binom{2(n-j)}{n-j}\text{.} \end{equation} Note that Eq.~(\ref{eq:11}) is equivalent with the \cite[Eq.~(12)]{JM4}. Recently, by using $D$ sums, it is shown \cite[Th.\ 3, p.\ 3]{JM3} that $\varPsi(2n,m,l-1)$ is divisible by $S(n,l)$ for all non-negative integers $n$ and $m$. The same result also can be proved by using $M$ sums. Let $l$ be a positive integer, and let $n$ and $j$ be non-negative integers such that $j\leq n$. It is readily verified \cite[Eq.~(91)]{JM2} that \begin{equation}\label{eq:12} M_{\varPsi}(2n,j,0;l-1)=(-1)^j\frac{\binom{2l}{l}\binom{2n}{n}\binom{2j}{j}\binom{2(n+l-j)}{n+l-j}\binom{2n-j}{n}}{\binom{n+l}{n}\binom{2n+l-j}{n}}\text{.} \end{equation} Furthermore, by using \cite[Eq.~(103)]{JM3} and \cite[Eq.~(33)]{JM2}, it can be shown that \begin{equation}\label{eq:13} M_{\varPsi}(2n,j,1;l-1)=(-1)^jS(n,l)\binom{n}{j}\sum_{v=0}^{n-j}(-1)^vS(n+l-j-v,n)\binom{2(j+v)}{j+v}\binom{n-j}{v}\text{.} \end{equation} The rest of the paper is structured as follows. In Section \ref{l:3}, we give a proof of Theorem \ref{t:2} by using Stanley's formula. 
In Section \ref{l:4}, we give a proof of Theorem \ref{t:1}. Our proof of Theorem \ref{t:1} consists from two parts. In the first part, we prove that Theorem \ref{t:1} is true for $m=1$. In the second part, we prove that Theorem \ref{t:1} is true for all postive integers $m$ such that $m\geq 2$. \section{A Proof of Theorem \ref{t:2}}\label{l:3} We use three known binomial formulas. Let $a$, $b$, and $c$ be non-negative integers such that $a \geq b \geq c$. The first formula is: \begin{equation} \binom{a}{b}\binom{b}{c}=\binom{a}{c}\binom{a-c}{b-c}\text{.}\label{eq:14} \end{equation} Let $a$, $b$, $m$, and $n$ be non-negative integers. The second formula is Stanley's formula: \begin{equation} \sum_{k=0}^{min(m,n)}\binom{a}{m-k}\binom{b}{n-k}\binom{a+b+k}{k}=\binom{a+n}{m}\binom{b+m}{n}\text{.}\label{eq:15} \end{equation} The third formula is a symmetry of binomial coefficients. \begin{proof} By setting $S:=Q$ and $t:=0$ in the Eq.~(\ref{eq:10}), it follows that \begin{align} M_Q(n,j,0;a)&=\binom{n-j}{j}\sum_{k=j}^{n-j}\binom{n-2j}{k-j}\binom{n}{k}^{0} H(n,k,a)\text{,}\notag\\ &=\sum_{k=j}^{n-j}\binom{n-j}{j}\binom{n-2j}{k-j}\binom{a+k}{a}\binom{a+n-k}{a}G(n,k,a)\label{eq:16}\text{.} \end{align} By using symmetry of binomial coefficients and the Eq.~(\ref{eq:14}), it follows that \begin{equation} \binom{n-j}{j}\binom{n-2j}{k-j}=\binom{n-j}{k-j}\binom{n-k}{n-k-j}\text{.}\label{eq:17} \end{equation} By using the Eq.~(\ref{eq:17}), Eq.~(\ref{eq:16}) becomes, as follows: \begin{align} M_Q(n,j,0;a)&=\sum_{k=j}^{n-j}\binom{n-j}{k-j}\binom{n-k}{n-k-j}\binom{a+k}{a}\binom{a+n-k}{a}G(n,k,a)\notag\text{,}\\ &=\sum_{k=j}^{n-j}\binom{n-j}{k-j}\bigl{(}\binom{a+n-k}{a}\binom{n-k}{n-k-j}\bigr{)}\binom{a+k}{a}G(n,k,a)\text{.}\label{eq:18} \end{align} By using symmetry of binomial coefficients and the Eq.~(\ref{eq:18}), it follows that \begin{equation} \binom{a+n-k}{a}\binom{n-k}{n-k-j}=\binom{a+n-k}{a+j}\binom{a+j}{j}\text{.}\label{eq:19} \end{equation} By using the Eq.~(\ref{eq:19}), the Eq.~(\ref{eq:18}) becomes, as follows: \begin{equation} M_Q(n,j,0;a)=\binom{a+j}{j}\sum_{k=j}^{n-j}\binom{n-j}{k-j}\bigl{(}\binom{a+n-k}{a+j}\binom{a+k}{a}\bigr{)}G(n,k,a)\text{.}\label{eq:20} \end{equation} By setting $m:=a+j$, $n:=a$, $a:=n-k$, $b:=k-j$, and $k:=l$ in Stanley's formula \ref{eq:15}, we obtain that \begin{equation}\label{eq:21} \binom{a+n-k}{a+j}\binom{a+k}{a}=\sum_{l=0}^{a}\binom{n-k}{a+j-l}\binom{k-j}{a-l}\binom{n-j+l}{l}\text{.} \end{equation} By using Eqns.~(\ref{eq:20}) and (\ref{eq:21}), it follows that $M_Q(n,j,0;a)$ is equal to \begin{equation} \binom{a+j}{j}\sum_{k=j}^{n-j}\binom{n-j}{k-j}\bigl{(}\sum_{l=0}^{a}\binom{n-k}{a+j-l}\binom{k-j}{a-l}\binom{n-j+l}{l}\bigr{)}G(n,k,a)\text{.}\label{eq:22} \end{equation} Note that, by the Eq.~(\ref{eq:21}), we obtain the following inequality: \begin{align} n-k\geq a+j-l\text{, or }\notag\\ k\leq n-j-a+l\text{.}\label{neq:1} \end{align} Similarly, by the Eq.~(\ref{eq:21}), we obtain another inequality \begin{align} k-j\geq a-l\text{, or}\notag\\ k\geq a+j-l\text{.}\label{neq:2} \end{align} By changing the order of sumation in the Eq.~(\ref{eq:22}) and by using Inequalites (\ref{neq:1}) and (\ref{neq:2}), it follows that $M_Q(n,j,0;a)$ is equal to \begin{equation} \binom{a+j}{j}\sum_{l=0}^{a}\binom{n-j+l}{l}\sum_{k=a+j-l}^{n-(a+j-l)}\binom{n-j}{k-j}\binom{n-k}{a+j-l}\binom{k-j}{a-l}G(n,k,a)\text{.}\label{eq:25} \end{equation} By using symmetry of binomial coefficients and the Eq.~(\ref{eq:14}), it follows that \begin{equation} 
\binom{n-j}{k-j}\binom{n-k}{a+j-l}=\binom{n-j}{a+j-l}\binom{n-a-2j+l}{k-j}\text{.}\label{eq:26} \end{equation} By using Eqns.~(\ref{eq:22}) and (\ref{eq:26}), it follows that $M_Q(n,j,0;a)$ is equal to \begin{equation} \binom{a+j}{j}\sum_{l=0}^{a}\binom{n-j+l}{l}\binom{n-j}{a+j-l}\sum_{k=a+j-l}^{n-(a+j-l)}\binom{n-a-2j+l}{k-j}\binom{k-j}{a-l}G(n,k,a)\text{.}\label{eq:27} \end{equation} By using symmetry of binomial coefficients and the Eq.~(\ref{eq:14}), it follows that \begin{equation} \binom{n-a-2j+l}{k-j}\binom{k-j}{a-l}=\binom{n-a-2j+l}{a-l}\binom{n-2a-2j+2l}{k-j-a+l}\text{.}\label{eq:28} \end{equation} By using Eqns.~(\ref{eq:27}) and (\ref{eq:28}), it follows that $M_Q(n,j,0;a)$ is equal to \begin{equation} \binom{a+j}{j}\sum_{l=0}^{a}\binom{n-j+l}{l}\binom{n-j}{a+j-l}\binom{n-a-2j+l}{a-l}\sum_{k=a+j-l}^{n-(a+j-l)}\binom{n-2a-2j+2l}{k-j-a+l}G(n,k,a)\text{.}\label{eq:29} \end{equation} Again, by using symmetry of binomial coefficients and the Eq.~(\ref{eq:14}), it follows that \begin{equation} \binom{n-j}{a+j-l}\binom{n-a-2j+l}{a-l}=\binom{n-j}{a-l}\binom{n-(j+a-l)}{j+a-l}\text{.}\label{eq:30} \end{equation} By using Eqns.~(\ref{eq:29}) and (\ref{eq:30}), it follows that $M_Q(n,j,0;a)$ is equal to \begin{equation} \binom{a+j}{j}\sum_{l=0}^{a}\binom{n-j+l}{l}\binom{n-j}{a-l}\bigl{(}\binom{n-(j+a-l)}{j+a-l}\sum_{k=a+j-l}^{n-(a+j-l)}\binom{n-2(a+j-l)}{k-(j+a-l)}G(n,k,a)\bigr{)}\text{.}\label{eq:31} \end{equation} Note that , by setting $S:=R$, $F:=G$, $j:=j+a-l$, and $t:=0$ in the Eq.~(\ref{eq:10}), it follows that: \begin{equation} M_R(n,j+a-l,0;a)=\binom{n-(j+a-l)}{j+a-l}\sum_{k=a+j-l}^{n-(a+j-l)}\binom{n-2(a+j-l)}{k-(j+a-l)}G(n,k,a)\text{.}\label{eq:32} \end{equation} Hence, by using Eqns.~(\ref{eq:31}) and (\ref{eq:32}), it follows that \begin{equation} M_Q(n,j,0;a)=\binom{a+j}{j}\sum_{l=0}^{a}\binom{n-j+l}{l}\binom{n-j}{a-l}M_R(n,j+a-l,0;a)\text{.}\notag \end{equation} This completes the proof of Theorem \ref{t:2}. \end{proof} \section {A Proof of Theorem \ref{t:1}}\label{l:4} Let the function $\varphi(2n,m,r-1)$ be defined as in Eq.~(\ref{eq:4}). Let us consider the following sum \begin{equation} \phi(2n,m,r-1)=\frac{1}{4}\varPsi(2n,m,r-1)\text{.}\label{eq:33} \end{equation} By Eq.~(\ref{eq:5}), the Eq.~(\ref{eq:33}) becomes \begin{equation}\label{eq:34} \phi(2n,m,r-1)=\sum_{k=0}^{2n}(-1)^k\binom{2n}{k}^m( \frac{1}{2}S(k,r))(\frac{1}{2}S(2n-k,r))\text{.} \end{equation} Note that, since $r$ is a positive integer, both numbers $ \frac{1}{2}S(k,r)$ and $\frac{1}{2}S(2n-k,r)$ are integers. Therefore, the sum $\phi(2n,m,r-1)$ is a sum from Definition \ref{def:1}. Now we can apply Theorem \ref{t:2}. 
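Before carrying out this application, we record a brute-force numerical check of the divisibility asserted in Theorem~\ref{t:1}; the following Python sketch is our own hedged illustration (all function names are ours) and plays no role in the proof.
\begin{verbatim}
from math import comb
from fractions import Fraction

def gessel(n, r):                    # P(n,r)
    return Fraction(r, 2 * (n + r)) * comb(2 * n, n) * comb(2 * r, r)

def half_super_catalan(n, r):        # (1/2) S(n,r)
    return Fraction(comb(2 * n, n) * comb(2 * r, r), 2 * comb(n + r, n))

def phi(two_n, m, r):
    # varphi(2n, m, r-1) from Eq. (2), in the paper's notation
    return sum((-1) ** k * comb(two_n, k) ** m
               * gessel(k, r) * gessel(two_n - k, r)
               for k in range(two_n + 1))

for n in range(0, 6):
    for r in range(1, 5):
        for m in range(1, 4):
            q = phi(2 * n, m, r) / half_super_catalan(n, r)
            assert q.denominator == 1  # varphi(2n,m,r-1) is divisible by S(n,r)/2
print("Theorem 1 verified on these small cases")
\end{verbatim}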
By setting $Q:=\varphi$, $n:=2n$, $a:=r-1$, $R:=\phi$ and $G(2n,k,r-1):=\frac{1}{2}S(k,r)\frac{1}{2}S(2n-k,r)$ in Theorem \ref{t:2}, by the Eq.~(\ref{eq:9}), it follows that $M_{\varphi}(2n,j,0;r-1)$ is equal to \begin{equation}\label{eq:35} \binom{j+r-1}{j}\sum_{l=0}^{r-1}\binom{2n-j+l}{l}\binom{2n-j}{r-1-l}M_{\phi}(2n,j+r-1-l,0;r-1)\text{.} \end{equation} By Eqns.~(\ref{eq:6}) and (\ref{eq:33}), it follows that \begin{equation}\label{eq:36} M_{\phi}(2n,j,t;r-1)=\frac{1}{4}M_{\varPsi}(2n,j,t;r-1)\text{.} \end{equation} By setting $t:=0$ and $l:=r$ in the Eq.~(\ref{eq:12}) and by using the Eq.~(\ref{eq:36}) and the definition of super Catalan numbers, we obtain that \begin{equation}\label{eq:37} M_{\phi}(2n,j,0;r-1)=(-1)^j\frac{1}{2}S(n, r)\frac{\binom{2j}{j}\binom{2(n+r-j)}{n+r-j}\binom{2n-j}{n}}{2\binom{2n+r-j}{n}}\text{.} \end{equation} By using Eq.~(\ref{eq:37}), it follows that the sum $M_{\phi}(2n,j+r-1-l,0;r-1)$ is equal to \begin{equation} (-1)^{j+r-1-l}\frac{1}{2}S(n,r)\frac{\binom{2(j+r-1-l)}{j+r-1-l}\binom{2(n-j+l+1)}{n-j+l+1}\binom{2n-j-r+l+1}{n}}{2\binom{2n-j+l+1}{n}}\text{.} \end{equation}\label{eq:38} By using a symmetry of binomial coefficients and the Eq.~(\ref{eq:14}), it follows that \begin{equation} \binom{2n-j}{r-1-l}\binom{2n-j-r+l+1}{n}=\binom{2n-j}{n}\binom{n-j}{j+r-l-1}\text{.}\label{eq:39} \end{equation} By using Eqns.~(38) and (39), it follows that the sum $\binom{2n-j}{r-1-l}M_{\phi}(2n,j+r-1-l,0;r-1)$ is equal to \begin{equation}\label{eq:40} (-1)^{j+r-1-l}\frac{1}{2}S(n,r)\binom{2n-j}{n}\frac{\binom{2(j+r-1-l)}{j+r-1-l}\binom{2(n-j+l+1)}{n-j+l+1}\binom{n-j}{j+r-l-1}}{2\binom{2n-j+l+1}{n}}\text{.} \end{equation} By using Eqns.~(\ref{eq:35}) and (\ref{eq:40}), it follows that the sum $M_{\varphi}(2n,j,0;r-1)$ is equal to \begin{equation}\label{eq:41} (-1)^{j+r-1}\binom{j+r-1}{j}\frac{1}{2}S(n,r)\binom{2n-j}{n}\sum_{l=0}^{r-1}(-1)^l\binom{2n-j+l}{l}\frac{\binom{2(j+r-1-l)}{j+r-1-l}\binom{2(n-j+l+1)}{n-j+l+1}\binom{n-j}{j+r-l-1}}{2\binom{2n-j+l+1}{n}}\text{.} \end{equation} By setting $j:=0$ in the Eq.~(\ref{eq:41}), we obtain that \begin{align} M_{\varphi}(2n,0,0;r-1)&=(-1)^{r-1}\frac{1}{2}S(n,r)\binom{2n}{n}\sum_{l=0}^{r-1}(-1)^l\binom{2n+l}{l}\frac{\binom{2(r-1-l)}{r-1-l}\binom{2(n+l+1)}{n+l+1}\binom{n}{r-l-1}}{2\binom{2n+l+1}{n}}\notag\\ &=(-1)^{r-1}\frac{1}{2}S(n,r)\sum_{l=0}^{r-1}(-1)^l\binom{2n+l}{l}\binom{2(r-1-l)}{r-1-l}\binom{n}{r-l-1}\frac{\binom{2n}{n}\binom{2(n+l+1)}{n+l+1}}{2\binom{2n+l+1}{n}}\notag\\ &=(-1)^{r-1}\frac{1}{2}S(n,r)\sum_{l=0}^{r-1}(-1)^l\binom{2n+l}{l}\binom{2(r-1-l)}{r-1-l}\binom{n}{r-l-1}\bigl{(}\frac{1}{2}S(n,n+l+1)\bigr{)}\text{.}\label{eq:42} \end{align} By using the Eq.~(\ref{eq:41}), it follows that the sum $M_{\varphi}(2n,0,0;r-1)$ is divisible by $\frac{1}{2}S(n,r)$. By setting $n:=2n$, $m:=1$, $S:=\varphi$, and $r:=r-1$ in the Eq.~(\ref{eq:7}), we obtain that \begin{equation}\label{eq:43} \varphi(2n,1,r-1)=M_{\varphi}(2n,0,0;r-1)\text{.} \end{equation} By using Eqns.~(\ref{eq:42}) and (\ref{eq:43}), it follows that the sum $\varphi(2n,1,r-1)$ is divisible by $\frac{1}{2}S(n,r)$. This completes the proof of Theorem \ref{t:1} for the case $m=1$. Let us calculate the sum $M_{\varphi}(2n,j,1;r-1)$, where $n$ and $j$ are non-negative integers such that $j \leq n$. 
By setting $n:=2n$, $S:=\varphi$, $t:=0$, and $a:=r-1$ in the Eq.~(\ref{eq:8}), we obtain that \begin{equation}\label{eq:44} M_{\varphi}(2n,j,1;r-1)=\sum_{u=0}^{n-j}\binom{2n}{j}\binom{2n-j}{u}M_{\varphi}(2n,j+u,0;r-1)\text{.} \end{equation} By using a symmetry of binomial coefficients and the Eq.~(\ref{eq:14}), it follows that \begin{equation}\label{eq:45} \binom{2n}{j}\binom{2n-j}{u}=\binom{2n}{2n-j-u}\binom{j+u}{u}\text{.} \end{equation} By using Eqns.~(\ref{eq:44}) and (\ref{eq:45}), it follows that \begin{equation}\label{eq:46} M_{\varphi}(2n,j,1;r-1)=\sum_{u=0}^{n-j}\binom{j+u}{u}\bigl{(}\binom{2n}{2n-j-u}M_{\varphi}(2n,j+u,0;r-1)\bigr{)}\text{.} \end{equation} By using the Eq.~(\ref{eq:41}), it follows that the sum $ \binom{2n}{2n-j-u}M_{\varphi}(2n,j+u,0;r-1)$ equals \begin{equation}\label{eq:47} \begin{split} \binom{2n}{2n-j-u}\binom{2n-j-u}{n}(-1)^{j+u+r-1}\binom{j+u+r-1}{j+u}\frac{1}{2}S(n,r)\cdot\\\cdot\sum_{l=0}^{r-1}(-1)^l\binom{2n-j-u+l}{l}\frac{\binom{2(j+u+r-1-l)}{j+u+r-1-l}\binom{2(n-j-u+l+1)}{n-j-u+l+1}\binom{n-j-u}{j+u+r-l-1}}{2\binom{2n-j-u+l+1}{n}}\text{.} \end{split} \end{equation} By using a symmetry of binomial coefficients and the Eq.~(\ref{eq:14}), it follows that \begin{equation}\label{eq:48} \binom{2n}{2n-j-u}\binom{2n-j-u}{n}=\binom{2n}{n}\binom{n}{j+u}\text{.} \end{equation} By using Eqns.~(\ref{eq:47}) and (\ref{eq:48}), it follows that the sum $ \binom{2n}{2n-j-u}M_{\varphi}(2n,j+u,0;r-1)$ equals \begin{equation} \begin{split} \frac{1}{2}S(n,r)\binom{n}{j+u}(-1)^{j+u+r-1}\binom{j+u+r-1}{j+u}\sum_{l=0}^{r-1}(-1)^l\binom{2n-j-u+l}{l}\cdot\\\cdot\binom{2(j+u+r-1-l)}{j+u+r-1-l}\binom{n-j-u}{j+u+r-l-1}\bigl{(}\frac{1}{2}S(n,n-j-u+l+1)\bigr{)}\text{.}\label{eq:49} \end{split} \end{equation} Hence, by using the Eq.~(\ref{eq:49}), we obtain that \begin{equation}\label{eq:50} \binom{2n}{2n-j-u}M_{\varphi}(2n,j+u,0;r-1)=\frac{1}{2}S(n,r)\cdot c(n,j+u,r-1)\text{;} \end{equation} where $c(n,j+u,r-1)$ is always an integer. By using the Eq.~(\ref{eq:50}), the Eq.~(\ref{eq:46}) becomes, as follows: \begin{equation}\label{eq:51} M_{\varphi}(2n,j,1;r-1)=\frac{1}{2}S(n,r)\sum_{u=0}^{n-j}\binom{j+u}{u}c(n,j+u,r-1)\text{.} \end{equation} By Eq.~(\ref{eq:51}), it follows that the sum $M_{\varphi}(2n,j,1;r-1)$ is divisible by $\frac{1}{2}S(n,r) $ for all non-negative integers $n$ and $j$ such that $j\leq n$; and for all positive integers $r$. By using the Eq.~(\ref{eq:8}) and the induction principle, it can be shown that the sum $M_{\varphi}(2n,j,t;r-1)$ is divisible by $\frac{1}{2}S(n,r) $ for all non-negative integers $n$ and $j$ such that $j\leq n$; and for all positive integers $r$ and $t$. By setting $S=\varphi$, $n:=2n$, $m:=t+1$, and $a:=r-1$ in the Eq.~(\ref{eq:7}), it follows that \begin{equation}\label{eq:51} \varphi(2n,t+1,r-1)=M_{\varphi}(2n,0,t;r-1)\text{.} \end{equation} Since $t \geq 1$, it follows that $t+1\geq 2$. By Eq.~(\ref{eq:51}), it follows that the sum $\varphi(2n,m,r-1)$ is always divisible by $\frac{1}{2}S(n,r) $ for all non-negative integers $n$; and for all positive integers $m$ and $r$ such that $m \geq 2$. See \cite[Section 4]{JM2}. This completes the proof of Theorem \ref{t:1} \begin{remark} \label{rem:1} For $r=1$, by using Theorem \ref{t:1} and the fact $\frac{1}{2}S(n,1)=C_n$, it follows that the sum $\varphi(2n,m,0)$ is divisible by $C_n$. Therefore, for $r=1$, the result of Theorem \ref{t:1} is weaker than the result that sum $\varphi(2n,m,0)$ is divisible by $\binom{2n}{n}$ \cite[Cor.\ 4, p.\ 2]{JM4}. 
However, for $n=3$, $r=2$, and $m=1$, the sum $\varphi(2n,m,r-1)$ is neither divisible by $\binom{2n}{n}$ nor by $S(n,r)$. \end{remark} \section{Conclusion}\label{l:5} Theorem \ref{t:1} and Eq.~(\ref{eq:1}) show that there is a close relationship between the Gessel numbers $P(n,r)$ and the super Catalan numbers $S(n,r)$. The Gessel numbers have at least two combinatorial interpretations \cite{IG, JM6}, while the super Catalan numbers do not have a combinatorial interpretation in general. We believe that some combinatorial properties of the Gessel numbers could be used to obtain a combinatorial interpretation of the super Catalan numbers in general. \section*{Acknowledgments} I want to thank Professor Tomislav Do\v{s}li\'{c} for inviting me to the Third Croatian Combinatorial Days in Zagreb. \end{document}
\begin{document} \title{Parameterized Algorithms for Conflict-free Colorings of Graphs} \titlerunning{The Parameterized Complexity of Conflict-free Colorings} \author{I. Vinod Reddy} \authorrunning{I. V. Reddy} \institute{IIT Gandhinagar, India \\ \email{reddy\[email protected]} } \maketitle \begin{abstract} In this paper, we study the conflict-free coloring of graphs induced by neighborhoods. A coloring of a graph is conflict-free if every vertex has a uniquely colored vertex in its neighborhood. The conflict-free coloring problem is to color the vertices of a graph using the minimum number of colors such that the coloring is conflict-free. We consider both closed neighborhoods, where the neighborhood of a vertex includes itself, and open neighborhoods, where a vertex is not included in its own neighborhood. We study the parameterized complexity of the conflict-free closed neighborhood coloring and conflict-free open neighborhood coloring problems. We show that both problems are fixed-parameter tractable ($\ensuremath{\sf{FPT}}\xspace$) when parameterized by the cluster vertex deletion number of the input graph. This generalizes the result of Gargano et al.~(2015) that conflict-free coloring is fixed-parameter tractable when parameterized by the vertex cover number. Also, we show that both problems admit an additive constant approximation algorithm when parameterized by the distance to threshold graphs. We also study the complexity of these problems on special graph classes. We show that both problems can be solved in polynomial time on cographs. For split graphs, we give a polynomial time algorithm for the closed neighborhood conflict-free coloring problem, whereas we show that open neighborhood conflict-free coloring is $\ensuremath{\sf{NP}}\xspace$-complete. We show that interval graphs can be conflict-free colored using at most four colors. \end{abstract} \section{Introduction} A hypergraph is a pair $\mathcal{H}=(V, E)$ where $V$ is the set of vertices and $E$ is a set of non-empty subsets of $V$ called \emph{hyperedges}. A proper $k$-coloring of $\mathcal{H}$ is an assignment of colors from $\{1,2, \cdots, k\}$ to every vertex of $\mathcal{H}$ such that every hyperedge contains at least two vertices of distinct colors. The minimum number of colors required to properly color the vertices of $\mathcal{H}$ is called the chromatic number of $\mathcal{H}$ and is denoted by $\chi(\mathcal{H})$. \emph{Conflict-free} coloring is a special case of coloring; it is defined as follows. \begin{definition} Let $\mathcal{H}=(V, E)$ be a hypergraph. A coloring $C_\mathcal{H}$ is called a conflict-free coloring of $\mathcal{H}$ if for every $e \in E$ there exists a vertex $u \in e$ such that for all $v\in e$ with $v \neq u$ we have $C_\mathcal{H}(u) \neq C_\mathcal{H}(v)$. The minimum number of colors needed to conflict-free color the vertices of a hypergraph $\mathcal{H}$ is called the \emph{conflict-free chromatic number} of $\mathcal{H}$. \end{definition} The conflict-free coloring problem was introduced by Even et al. \cite{even2003conflict} to study the frequency assignment problem for cellular networks. These networks contain two types of nodes: base stations and clients. Fixed frequencies are assigned to base stations to allow connections to clients. Each client scans for the available base stations in its neighborhood and connects to one of the available base stations.
Suppose if two base stations are available to a client, which are assigned the same frequency then mutual interference occurs and the connection between the client and base stations can become noisy. Our aim is to reduce the disturbances occur in connections between base stations and clients. The frequency assignment problem on cellular networks is an assignment of frequencies to base stations such that for each client there exists a base station of unique frequency within his region. The goal here is to minimize the number of assigned frequencies, since available frequencies are limited and expensive. We can model this problem using the hypergraphs. The vertices of hypergraph correspond to the base stations and the set of base stations available for each client is represented by a hyperedge. The problem reduces to assigning frequencies to vertices of hypergraph such that each hyperedge contains a vertex of unique frequency. Conflict-free coloring is well studied for hypergraphs induced by geometric objects like, intervals \cite{bar2008deterministic}, rectangles \cite{ajwani2007conflict}, unit disks \cite{lev2009conflict} etc. This problem also has applications in areas like radio frequency identification and robotics, VLSI design and many other fields. In this paper, we study the conflict-free coloring of hypergraphs induced by graph neighborhoods. Let $G=(V,E)$ be a graph, for a vertex $v \in V(G)$, $N(v)$ denotes the set consisting of all vertices which are adjacent to $v$, called open neighborhood of $v$. The set $N[v]=N(v) \cup \{v\}$ is called the closed neighborhood of $v$. The conflict-free open neighborhood (\textsc {CF-ON}{}) coloring of a graph $G$ is defined as the conflict-free coloring of the hypergraph $\mathcal{H}$ with $$ V(\mathcal{H}) =V(G) \qquad \textrm{and} \qquad E(\mathcal{H})=\{N(v)~:~ v \in V(G)\}$$ Similarly conflict-free closed neighborhood (\textsc {CF-CN}{}) coloring problem can be defined. Alternatively we can also define both \textsc {CF-CN}{} coloring and \textsc {CF-ON}{} coloring problems as follows. Given a graph $G$ and a coloring $C_G$, we say that a subset $U \subseteq V(G)$ has a \emph {unique color} with respect to $C_G$ if there exists a color $c$ such that $|\{u \in U ~|~ C_G(u)=c\}|=1$. \begin{definition} \begin{enumerate} \item A coloring $C_G$ of a graph $G$ is called \emph{conflict-free closed neighborhood (\textsc {CF-CN}{}) coloring} if for every vertex $v \in V(G)$, the set $N[v]$ has a unique color. \item A coloring $C_G$ of a graph $G$ is called \emph{conflict-free open neighborhood (\textsc {CF-ON}{}) coloring} if for every vertex $v\in V(G)$, the set $N(v)$ has a unique color. \end{enumerate} \end{definition} The minimum value $k$ for which there is a \textsc {CF-ON}{} (resp. \textsc {CF-CN}{}) coloring of $G$ with $k$ colors is called the \emph{\textsc {CF-ON}{} (resp. \textsc {CF-CN}{}) chromatic number} of $G$ and is denoted as $\chi_{cf}(G)$ (resp. $\chi_{cf}[G]$). \begin{figure} \caption{~A schematic showing the relation between the various parameters. An arrow from parameter $a$ to $b$ indicates that $a$ is larger than $b$. Parameters marked with $*$ are studied in this paper.} \label{fig-parameters} \end{figure} \paragraph{ Related work.} Gargano et al.\cite{gargano2015complexity} studied the complexity of conflict-free colorings induced by the graph neighborhoods and showed that the \textsc {CF-CN}{} 2-coloring and \textsc {CF-ON}{} $k$-coloring are $\ensuremath{\sf{NP}}\xspace$-complete. 
In the parameterized setting, both conflict-free closed and open neighborhood colorings are fixed-parameter tractable ($\ensuremath{\sf{FPT}}\xspace$) when parameterized by the vertex cover number or the neighborhood diversity of the graph \cite{gargano2015complexity}. Ashok et al. \cite{ashok2015exact} showed that maximizing the number of conflict-free colored edges in hypergraphs is $\ensuremath{\sf{FPT}}\xspace$ when parameterized by the number of conflict-free edges in the solution. \paragraph{Our contributions.} In this paper, we give parameterized algorithms for the \textsc {CF-CN}{} and \textsc {CF-ON}{} coloring problems with respect to various structural parameters. Gargano et al.\cite{gargano2015complexity} showed that both problems are $\ensuremath{\sf{FPT}}\xspace$ parameterized by the vertex cover number. Both problems are $\ensuremath{\sf{FPT}}\xspace$ parameterized by tree-width, which follows from an application of Courcelle's Theorem \cite{courcelle1990monadic} and the fact that the \textsc {CF-CN}{} and \textsc {CF-ON}{} problems can be expressed by a monadic second order (MSO) formula. In this paper, our focus is on \emph{distance-to-triviality} parameters~\cite{guo2004structural,cai2003parameterized}. These parameters measure how far a graph is from some class of graphs on which the problem is tractable, so it is natural to parameterize by the distance of a general instance to such a tractable class. The main advantage of studying structural parameters is that if a problem is tractable on a class of graphs $\mathcal{F}$, then it is natural to expect that the problem might remain tractable on graphs which are close to $\mathcal{F}$. Our notion of distance to a graph class is the vertex deletion distance. More precisely, for a class $\mathcal{F}$ of graphs we say that $X$ is an $\mathcal{F}$-modulator of a graph $G$ if $X\subseteq V(G)$ and $G \setminus X \in \mathcal{F}$. If the size of the smallest modulator to $\mathcal{F}$ is $k$, we also say that the distance of $G$ to the class $\mathcal{F}$ is $k$. We study the parameterized complexity of the conflict-free coloring problems with respect to the distance from the following graph classes: \emph{cluster graphs} (disjoint unions of cliques) and \emph{threshold graphs}. Studying the parameterized complexity of the conflict-free coloring problems with respect to these parameters improves our understanding of the tractable parameterizations. For instance, the parameterization by the distance to cluster graphs directly generalizes the vertex cover number and is not comparable with tree-width (see Fig.~\ref{fig-parameters}). In particular, we obtain the following results. \begin{itemize} \item We show that both variants of the conflict-free coloring problem are \ensuremath{\sf{FPT}}\xspace{} when parameterized by the size of the modulator to cluster graphs (the cluster vertex deletion number). \item We show that the \textsc {CF-CN}{} (resp.\ \textsc {CF-ON}{}) coloring problem admits an additive $1$-approximation (resp.\ $2$-approximation) when parameterized by the size of the modulator to threshold graphs. \item We show that on split graphs \textsc {CF-CN}{} coloring admits a polynomial time algorithm, whereas \textsc {CF-ON}{} coloring is $\ensuremath{\sf{NP}}\xspace$-complete. \item We give polynomial time algorithms for both the \textsc {CF-CN}{} and \textsc {CF-ON}{} coloring problems on cographs. We show that interval graphs can be conflict-free colored using at most four colors.
\end{itemize} \section{Preliminaries} In this section, we introduce the notation and terminology that we will need to describe our algorithms. Most of our notation is standard. We use $[k]$ to denote the set $\{1,2,\ldots,k\}$. All graphs we consider in this paper are undirected, connected, finite and simple. For a graph $G=(V,E)$, let $V(G)$ and $E(G)$ denote the vertex set and edge set of $G$, respectively. An edge in $E$ between vertices $x$ and $y$ is denoted as $xy$ for simplicity. For a subset $X \subseteq V(G)$, the graph $G[X]$ denotes the subgraph of $G$ induced by the vertices of $X$. Also, we abuse notation and use $G \setminus X$ to refer to the graph obtained from $G$ after removing the vertex set $X$. For a vertex $v\in V(G)$, $N(v)$ denotes the set of vertices adjacent to $v$ and $N[v] = N(v) \cup \{v\}$ is the closed neighborhood of $v$. A vertex is called a \emph{universal vertex} if it is adjacent to every other vertex of the graph. \paragraph{Graph classes.} We now define the graph classes which are considered in this paper. A graph is a \emph {split graph} if its vertices can be partitioned into a clique and an independent set. Split graphs are $(2K_2,C_4,C_5)$-free. The $P_4$-free graphs are called \emph{cographs}. A graph is a \emph{threshold graph} if it can be constructed recursively by repeatedly adding an isolated vertex or a universal vertex. A cluster graph is a disjoint union of complete graphs. Cluster graphs are exactly the $P_3$-free graphs. A graph $G$ is called an \emph{interval graph} if there exists a set $\{I_v ~|~ v \in V(G) \}$ of real intervals such that $I_u \cap I_v \neq \emptyset$ if and only if $(u,v) \in E(G)$. It is easy to see that a graph that is both a split graph and a cograph is a threshold graph. We denote a threshold graph (or a split graph) by $G = (C,I)$, where $C$ and $I$ denote the partition of $G$ into a clique and an independent set. For any two vertices $x, y$ in a threshold graph $G$ we have either $N(x)\subseteq N[y]$ or $N(y)\subseteq N[x]$. For a class of graphs $\mathcal{F}$, the distance to $\mathcal{F}$ of a graph $G$ is the minimum number of vertices that have to be deleted from $G$ to obtain a graph in $\mathcal{F}$. A vertex subset $X$ with $G \setminus X \in \mathcal{F}$ is called an $\mathcal{F}$-modulator, and the size of a smallest $\mathcal{F}$-modulator is a natural measure of closeness to $\mathcal{F}$. \paragraph{Parameterized Complexity.} A parameterized problem is a language $L \subseteq \Sigma^*\times \mathbb{N}$, where $\Sigma$ is a fixed alphabet; in an instance $(I,k)$, the integer $k$ is called the parameter. We say that a parameterized problem is {\it fixed parameter tractable} with respect to the parameter $k$ if there exists an algorithm which solves the problem in time $f(k) |I|^{O(1)}$, where $f$ is a computable function. A kernel for a parameterized problem $\Pi$ is an algorithm which transforms an instance $(I,k)$ of $\Pi$ into an equivalent instance $(I',k')$ in polynomial time such that $k' \leq g(k)$ and $|I'| \leq f(k)$ for some computable functions $f$ and $g$. It is known that a parameterized problem is fixed parameter tractable if and only if it has a kernel. For a detailed survey of the methods used in parameterized complexity, we refer the reader to the texts \cite{CyganFKLMPPS15,downey2013fundamentals}. Since cluster graphs are $P_3$-free and threshold graphs are $(P_4, C_4, 2K_2)$-free, modulators to these classes can be computed in \ensuremath{\sf{FPT}}\xspace{} time. Therefore we assume that a modulator to these graph classes is given as a part of the input.
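To make the neighborhood conditions concrete, the following is a minimal illustrative sketch (ours, not part of the formal development) that checks whether a given coloring is a \textsc {CF-ON}{} or \textsc {CF-CN}{} coloring. It assumes a simple adjacency-list representation of the graph, and all function names are purely illustrative. On the $4$-cycle used in the example, coloring opposite vertices alike yields a \textsc {CF-CN}{} coloring but not a \textsc {CF-ON}{} coloring, since every open neighborhood then consists of two vertices of the same color.
\begin{verbatim}
from collections import Counter

# Illustrative sketch (not from the paper): verify the conflict-free
# neighborhood conditions for a given coloring.
#   graph    : dict mapping each vertex to the set of its neighbors N(v)
#   coloring : dict mapping each vertex to its color

def has_unique_color(vertices, coloring):
    """True if some color appears exactly once among `vertices`."""
    counts = Counter(coloring[v] for v in vertices)
    return any(c == 1 for c in counts.values())

def is_cf_on(graph, coloring):
    """CF-ON coloring: every open neighborhood N(v) has a unique color."""
    return all(has_unique_color(graph[v], coloring) for v in graph)

def is_cf_cn(graph, coloring):
    """CF-CN coloring: every closed neighborhood N[v] has a unique color."""
    return all(has_unique_color(graph[v] | {v}, coloring) for v in graph)

# The 4-cycle 0-1-2-3-0 colored with opposite vertices alike:
# a CF-CN coloring, but not a CF-ON coloring.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
coloring = {0: 1, 1: 2, 2: 1, 3: 2}
print(is_cf_cn(c4, coloring), is_cf_on(c4, coloring))   # -> True False
\end{verbatim}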
The problems we consider in this paper are formally defined as follows; we only state the closed neighborhood case, and the open neighborhood case is defined similarly. \problembox{\sc Conflict-free closed neighborhood coloring (CF-CNC)}{A graph $G$, a subset $X \subseteq V(G)$ such that $G \setminus X \in \mathcal{F}$ and an integer $k$.} {The size $d:=|X|$ of the modulator to $\mathcal{F}$.}{Does there exist a coloring $C_G: V(G) \rightarrow [k]$ such that for each $v \in V(G)$ the set $N[v]$ has a unique color with respect to $C_G$?} \section{Parameterized Algorithms} In this section, we give parameterized algorithms for the conflict-free open/closed neighborhood coloring problems parameterized by the cluster vertex deletion number and the distance to threshold graphs. \subsection{Parameterized by Cluster Vertex Deletion Number} The cluster vertex deletion number (or distance to cluster) of a graph $G$ is the minimum number of vertices that have to be deleted from $G$ to obtain a disjoint union of complete graphs, i.e., a cluster graph. The cluster vertex deletion number is an intermediate parameter between the vertex cover number and clique-width/rank-width \cite{doucha2012cluster}. In this section we show that both variants of the conflict-free coloring problem are $\ensuremath{\sf{FPT}}\xspace$ parameterized by the cluster vertex deletion number. The following lemma gives an upper bound on the number of colors needed in a conflict-free coloring. \begin{lemma}\label{lem-bound} Let $X \subseteq V(G)$ be a set of size $d$ such that $G \setminus X$ is a cluster graph. Then $$\chi_{cf}[G] \leq d+2 \qquad \textrm {and} \qquad \chi_{cf}(G) \leq 2d+2.$$ \end{lemma} \begin{proof} A \textsc {CF-CN}{} $(d+2)$-coloring $C_G$ of $G$ can be obtained as follows. \begin{enumerate} \item For each clique $C \in G \setminus X$, assign $C_G(u)=0$ for some vertex $u \in C$ and assign $C_G(v)=1$ for all $v \in C \setminus \{u\}$. \item For each $x \in X$, assign $C_G(x)$ a color from the set $\{2, \cdots, d+1\}$ that is not already used by $C_G$. \end{enumerate} According to this coloring, the unique color in $N[x]$ is $C_G(x)$ if $x \in X$ and $0$ if $x \in G \setminus X$. A \textsc {CF-ON}{} $(2d+2)$-coloring $C_G$ of $G$ can be obtained as follows. \begin{enumerate} \item For each vertex $u \in G \setminus X$, assign the color $C_G(u)=0$. \item For each $x \in X$, assign $C_G(x)$ a color from $\{1, \cdots, d\}$ that is not already used by $C_G$. \item For a vertex $x \in X$, if $C_G(N(x))= \{0\}$, then recolor any one vertex in $N(x)$ with a color from $\{d+1, \cdots ,2d\}$ which is not already used by $C_G$. \item For each clique $C \in G \setminus X$, if $C_G(C)=\{0\}$, then recolor an arbitrary vertex $u$ in $C$ with color $2d+1$. \end{enumerate} It is easy to see that the above coloring is a \textsc {CF-ON}{} coloring of $G$. \qed \end{proof} \begin{theorem} \label{th-dcluster} Both the \textsc {CF-CN}{} and \textsc {CF-ON}{} coloring problems are fixed-parameter tractable when parameterized by the cluster vertex deletion number of the input graph. \end{theorem} \begin{proof} Let $G$ be a graph and $X \subseteq V(G)$ a set of size $d$ such that $G \setminus X$ is a cluster graph. \par {\bf \textsc {CF-CN}{} coloring}: First we show that the \textsc {CF-CN}{} coloring problem admits a kernel of size $O(d^{2^d+2})$. Without loss of generality we assume that $k < d+2$; otherwise, by Lemma~\ref{lem-bound}, we can obtain a \textsc {CF-CN}{} coloring of $G$. We partition the vertices of each clique $C$ in $G \setminus X$ based on their neighborhoods in $X$.
For every subset $Y \subseteq X$, $T_Y^C:=\{x \in C ~|~ N(x) \cap X=Y\}$. Notice that in this way we can partition vertices of a clique $C$ into at most $2^{d}$ subsets (called types), one for each $Y \subseteq X$. We represent each clique $C$ in $G \setminus X$ with a vector $T^C$ of length $2^{d}$, where each entry of $T^C$ corresponds to a type and its value equals to the number of vertices in that type. \begin{Reduction} For a clique $C \in G \setminus X$, if a type $T_Y^C$ has more than $k+1$ vertices for some $Y \subseteq X$, then removing all vertices except $k+1$ from $T^C_Y$ does not change $ \chi_{cf} [G]$. \end{Reduction} \begin{proof} Let $G_1$ be the graph obtained after applying the reduction rule $1$ on $G$. Given a \textsc {CF-CN}{} coloring $C_{G_1}$ of reduced instance $G_1$, we extend it to a \textsc {CF-CN}{} coloring $C_G$ of $G$. Let $C$ be a clique in $G \setminus X$ such that the number of vertices in $T_Y^C$ are more than $k+1$ for some $Y \subseteq X$. We color the deleted vertices in $T_Y^C$ as follows. Since there are at least $k+1$ vertices in $T_Y^C$ after applying reduction rule, there exists two vertices $u$ and $v$ in $T_Y^C$ such that $C_G(v)=C_G(u)$. For each deleted vertex of $T_Y^C$ we assign the color $C_G(u)$. Since $C_G(u)$ is not unique in $N[v]$ for any $v \in V(G)$, it is easy to see that $C_G$ is a \textsc {CF-CN}{} coloring of $G$. \end{proof} After applying Reduction rule 1, each clique in $G_1 \setminus X$ has at most $k+1$ vertices in each type. Now we partition the cliques in $G_1 \setminus X$ based on their type vector of length $2^d$. For every subset $S \subseteq \{0,1, \cdots, k+1\}^{2^d}$, $T_S^{G_1}:=\{ C \in G_1 \setminus X ~|~ T^C=S\}$. Notice that in this way we can partition cliques of $G_1 \setminus X$ into at most $({k+2})^{2^{d}}$ subsets (called mega types), one for each $S \subseteq \{0,1, \cdots, k+1\}^{2^d}$. Let $\tau$ be an arbitrary but fixed ordering on $V(G_1)$. For a vertex $v$ in the modulator $X$ and a clique $C$ in $G_1 \setminus X$, we say that $C$ is critical for $v$ with respect to a \textsc {CF-CN}{} coloring if the first uniquely colored vertex in the neighborhood of $v$ belongs to $C$, where the notion of the first vertex is with respect to the ordering $\tau$. \begin{Reduction} For a subset $S \subseteq \{0,1, \cdots, k+1\}^{2^d}$, If a mega type $T_S^{G_1}$ has more than $d+1$ cliques then removing all cliques except $d+1$ cliques from $T_S^{G_1}$ does not change the $ \chi_{cf} [G_1]$. \end{Reduction} \begin{proof} Let $G_2$ be the graph obtained after applying the reduction rule $2$ on $G_1$. Given a \textsc {CF-CN}{} coloring $C_{G_2}$ of reduced instance, we extend it to a \textsc {CF-CN}{} coloring $C_{G_1}$ of $G_1$. Let $T_S^{G_1}$ be a mega type with more than $d+1$ cliques for some $S \subseteq \{0,1, \cdots, k+1\}^{2^d}$. We color the deleted cliques in $T_S^{G_1}$ as follows. First, for every vertex $v \in X$, we \textit{mark} a clique $C$ in $T_S^{G_2}$ if it is critical for $v$ with respect to $C_{G_2}$. Since there are $d+1$ cliques in $T_S^{G_2}$ after applying the reduction rule 2, at the end of the procedure above, at least one clique is not marked. Let this clique be $C$. Note that reusing the colors of clique $C$ to color deleted cliques does not violate the uniqueness of a color in $N[x]$ for all $x \in X$. So we recolor all deleted cliques according to the coloring of $C$. 
\end{proof} After applying the above reduction rules on input graph $G$, It is easy to see that the size of the reduced instance is at most $O((k+2)^{2^d}(d+1)(k+1))$. As $k < d+2$, we get a kernel of size at most $O(d^{2^d+2})$. \par {\bf \textsc {CF-ON}{} coloring} : The proof of \textsc {CF-ON}{} coloring is similar to \textsc {CF-CN}{} coloring except some minor changes in reduction rules. \setcounter{Reduction}{0} \begin{Reduction} For a clique $C \in G \setminus X$, if a type $T_Y^C$ has more than $2k+1$ vertices for some $Y \subseteq X$, then removing all vertices except $2k+1$ from $T_Y^C$ does not change $ \chi_{cf} (G)$. \end{Reduction} \begin{proof} Let $G_1$ be the graph obtained after applying the reduction rule $1$ on $G$. Given a \textsc {CF-ON}{} coloring $C_{G_1}$ of reduced instance $G_1$, we extend it to a \textsc {CF-ON}{} coloring $C_G$ of $G$. Let $C$ be a clique in $G \setminus X$ such that the number of vertices in $T_Y^C$ are more than $2k+1$ for some $Y \subseteq X$. Given a \textsc {CF-ON}{} coloring $C_G$ of reduced instance, we can color the deleted vertices in $T_Y^C$ as follows. Since there are at least $2k+1$ vertices in $T_Y^C$ after applying reduction rule, there exists three vertices $u$, $v$ and $w$ in $T_Y^C$ such that $C_G(v)=C_G(u)=C_G(w)$. For each deleted vertex of $T_Y^C$ we assign the color $C_G(u)$. Since $C_G(u)$ is not unique in $N(x)$ for any $x \in G$, it is easy to see that $C_G$ is a \textsc {CF-ON}{} coloring of $G$. \end{proof} \begin{Reduction} For a subset $S \subseteq \{0,1, \cdots, k+1\}^{2^d}$, If mega type $T_S^{G_1}$ has more than $d+1$ cliques then removing all cliques except $d+1$ cliques does not change $ \chi_{cf} (G_1)$. \end{Reduction} After applying the above two reduction rules on input graph $G$ we obtain a kernel of size at most $O(d^{2^d+2})$ for \textsc {CF-ON}{} coloring problem. \qed \end{proof} \subsection{Parameterized by Distance to Threshold Graphs} In this section we obtain an additive one (resp. two) approximation algorithm for \textsc {CF-CN}{} (resp. \textsc {CF-ON}{}) coloring of a graph parameterized by distance to threshold graphs in $\ensuremath{\sf{FPT}}\xspace$-time. \begin{theorem}\label{th-threshold} Let $G$ be a graph and $X \subseteq V(G)$ of size $d$ such that $G \setminus X$ is a threshold graph. Then we can find a \textsc {CF-CN}{} $k$-coloring of $G$ such that $\chi_{cf}[G] \leq k \leq \chi_{cf}[G]+1$ in $\ensuremath{\sf{FPT}}\xspace$-time. \end{theorem} \begin{proof} Let $G'=G[X \cup N(X)]$ is a subgraph of $G$ induced by $X$ and its neighbors. Let $H$ be a graph obtained from $G'$ by removing all edges between vertices of $N(X)$. The set $X$ is a vertex cover of $H$ of size $d$. We divide the problem into two subproblems. First, partial \textsc {CF-CN}{} coloring of $H$: color the vertices of $H$ using the minimum number of colors such that for each $x \in X$, $N[x]$ has a unique color. Second, color the vertices of $V(G) \setminus V(H)$ such that $N[x]$ has a unique color for each $x \in G \setminus X$. It is easy to see that $\chi_{cf}[G]$ is at least the number of colors needed in a partial \textsc {CF-CN}{} coloring of $H$. Now, we give a procedure to find a partial \textsc {CF-CN}{} coloring of $H$. We partition the vertices of $N(X)$ based on their neighborhoods in $X$ into at most $2^{d}$ subsets (called types). For every subset $Y \subseteq X$, $T_Y^H:=\{x \in N(X) ~|~ N(x) \cap X=Y\}$. 
Since every vertex cover of a graph is also a cluster vertex deletion set, by following the proof of Theorem~\ref{th-dcluster} we get a partial \textsc {CF-CN}{} coloring of $H$. Color all vertices of $(G \setminus X) \setminus (N(X))$ with an arbitrary color used in $H$. Since these vertices are non-neighbors of $X$, this step does not disturb the existence of a unique color in $N[x]$ for all $x \in X$. Recall that every connected threshold graph has a universal vertex. Recolor the universal vertex $u \in G \setminus X$ with a new color which is not used in $G$. This new color is the unique color in $N[x]$ for all $x \in G \setminus X$. This completes the proof for \textsc {CF-CN}{} coloring. \qed \end{proof} \begin{corollary} Let $G$ be a graph and $X \subseteq V(G)$ a set of size $d$ such that $G \setminus X$ is a threshold graph. Then we can find a \textsc {CF-ON}{} $k$-coloring of $G$ such that $\chi_{cf}(G) \leq k \leq \chi_{cf}(G)+2$ in $\ensuremath{\sf{FPT}}\xspace$-time. \end{corollary} \begin{proof} The proof for \textsc {CF-ON}{} coloring is similar to that of Theorem~\ref{th-threshold}, except that at the end we need to recolor two vertices, the universal vertex and some arbitrary vertex of $G \setminus X$, with two new colors. Hence we get a \textsc {CF-ON}{} $k$-coloring of $G$ such that $\chi_{cf}(G) \leq k \leq \chi_{cf}(G)+2$. \end{proof} \section{Special Graph Classes} In this section we study the complexity of the \textsc {CF-CN}{} and \textsc {CF-ON}{} coloring problems on bipartite graphs, split graphs, interval graphs and cographs. For a bipartite graph $G=(A,B,E)$, a \textsc {CF-CN}{} coloring can be obtained by coloring all vertices of $A$ with color $0$ and all vertices of $B$ with color $1$. The \textsc {CF-ON}{} coloring problem for bipartite graphs is $\ensuremath{\sf{NP}}\xspace$-complete (this follows from Theorem~3 in \cite{gargano2015complexity}). We show that \textsc {CF-CN}{} coloring can be solved in polynomial time on split graphs, whereas \textsc {CF-ON}{} coloring is $\ensuremath{\sf{NP}}\xspace$-complete. For cographs we prove that both the \textsc {CF-CN}{} and \textsc {CF-ON}{} coloring problems can be solved in polynomial time. For interval graphs we give upper bounds on the number of colors needed for both \textsc {CF-CN}{} and \textsc {CF-ON}{} colorings. \subsection{Split Graphs} We begin by showing that \textsc {CF-CN}{} coloring can be solved in polynomial time on split graphs. \begin{lemma}\label{lem-cfcn-upper} Let $G$ be a split graph with at least one edge. Then $$2 \leq \chi_{cf}[G] \leq 3.$$ \end{lemma} \begin{proof} Let $G=(C,I)$ be a split graph. We assume that $|C| \geq 3$ and $|I| \geq 3$. Clearly $2 \leq \chi_{cf}[G]$, and we obtain a \textsc {CF-CN}{} 3-coloring $C_G$ of $G$ as follows. For a vertex $v \in C$, assign $C_G(v)=0$ and for all $u \in C \setminus \{v\}$ assign $C_G(u)=1$. Color all vertices of the independent set $I$ with $2$. It is easy to see that $C_G$ is a \textsc {CF-CN}{} coloring of $G$. Therefore we have $2 \leq \chi_{cf}[G] \leq 3$. \qed \end{proof} In the following lemma we give a characterization of the split graphs that admit a \textsc {CF-CN}{} coloring using two colors. If there is a universal vertex in the graph, then coloring it with one color and the rest with a second color gives a \textsc {CF-CN}{} coloring. Hence we assume that the graph does not contain a universal vertex. \begin{lemma}\label{lem-cfcn-split} Let $G=(C,I)$ be a split graph without a universal vertex. Then $\chi_{cf}[G]=2$ if and only if $|N(v) \cap I|=1$ for all $v \in C$.
\end{lemma} \begin{proof} For the reverse direction, assume that $|N(v) \cap I|=1$ for all $v \in C$. Then coloring all vertices of $C$ with $0$ and all vertices of $I$ with $1$ gives a \textsc {CF-CN}{} coloring of $G$ with two colors. For the forward direction, assume $\chi_{cf}[G]=2$. First, we show that all vertices in the clique get the same color. Suppose not; then at least one vertex $v$ in the clique has a different color. If $N(v) \cap I = \emptyset$, let $v'\in C$ be a vertex with a neighbor $u' \in N(v') \cap I$; then the color of $u'$ has to be different from that of $v'$, and $v'$ does not have a unique color in its closed neighborhood. If $u \in N(v) \cap I$, then since $v$ is not a universal vertex, there is a vertex $u'$ in $I$ not adjacent to $v$. Let $v'$ be the neighbor of $u'$ in $C$. Then it is easy to see that either $u'$ or $v'$ has no uniquely colored vertex in its closed neighborhood in any such coloring. If all vertices in the clique have the same color, then all vertices in $I$ must have a different color, as otherwise their closed neighborhoods would have no uniquely colored vertex. If $C$ and $I$ are colored as described above, then every vertex in $C$ must have exactly one neighbor in $I$ to make its closed neighborhood conflict-free. This completes the proof of the lemma. \end{proof} \begin{theorem} The \textsc {CF-CN}{} coloring problem is polynomial time solvable on the class of split graphs. \end{theorem} \begin{proof} Follows from Lemma~\ref{lem-cfcn-upper} and Lemma~\ref{lem-cfcn-split}. \end{proof} Now we show that the \textsc {CF-ON}{} coloring problem is $\ensuremath{\sf{NP}}\xspace$-complete on split graphs. \begin{theorem} The \textsc {CF-ON}{} coloring problem is $\ensuremath{\sf{NP}}\xspace$-complete on the class of split graphs. \end{theorem} \begin{proof} We give a reduction from the well-known $\ensuremath{\sf{NP}}\xspace$-complete \textsc{Graph Coloring} problem~\footnote{An assignment of colors to the vertices of a graph such that no two adjacent vertices receive the same color.}. Given an instance $(G,k)$ of graph coloring, define a graph $G'$ with $V(G')=V(G) \cup \{x,y\}$ and $E(G') = E(G) \cup \{xy\} \cup \{xv, yv ~|~ v \in V(G)\}$; i.e., $x$ and $y$ are universal vertices in the graph $G'$. Construct a split graph $H=(C,I)$ as follows: $$ V(H)= V(G') \cup \{I_{uv} ~|~ uv \in E(G')\} \qquad \text{and} $$ $$E(H)= \{uv ~|~ u,v \in V(G')\} ~\cup~ \{uI_{uv}, vI_{uv} ~|~ uv \in E(G') \}.$$ It is easy to see that the graph $H$ is a split graph with clique $C=H[V(G')]$ and independent set $I=H[V(H) \setminus V(G')]$. The construction of the graph $H$ can be done in polynomial time. We show that for any $k \geq 3$, the graph $G$ is $k$-colorable if and only if the split graph $H$ has a \textsc {CF-ON}{} $(k+2)$-coloring. Let $C_G$ be a $k$-coloring of $G$. We construct a \textsc {CF-ON}{} $(k+2)$-coloring $C_H$ of $H$ as follows: $C_H(v)=C_G(v)$ for all $v \in V(G)$, $C_H(x)=k+1$, $C_H(y)=k+2$, and color all independent set vertices of $H$ with color $k$. Now we show that $C_H$ is a \textsc {CF-ON}{} $(k+2)$-coloring of $H$. According to the coloring $C_H$, the unique color in $N(v)$ is $k+2$ if $v \in V(G) \cup \{x\}$ and $k+1$ if $v=y$. For all $I_{uv} \in V(H)$, $N(I_{uv})=\{u,v\}$ and we have $C_H(u) =C_G(u)\neq C_G(v)=C_H(v)$ for all $uv \in E(G')$, therefore $N(I_{uv})$ has a unique color. For the reverse direction, let $C_H$ be a \textsc {CF-ON}{} $(k+2)$-coloring of $H$; we show that $C_H$ restricted to the vertices of $G$ gives a $k$-coloring of $G$.
By the construction, for any $I_{uv} \in V(H)$, we have $N( I_{uv}) = \{ u , v \}$, therefore $C_H(u) \neq C_H(v)$ which implies $C_H$ when restricted to vertices of $G'$ gives a proper coloring of $G'$ with $k+2$ colors. Now we show that the color of $x$ (resp $y$) is unique in $V(G')$. For every $v \in V(G)$ we have $N(I_{vx})=\{v,x\}$, implies $C_H(x) \neq C_H(v)$ for all $v \in V(G)$. Similarly $C_H(y) \neq C_H(v)$ for all $v \in V(G)$ and $C_H(x) \neq C_H(y)$ as $N( I_{xy}) = \{ x , y \}$. Therefore $C_H$ when restricted to $V(G)$ gives a $k$-coloring of $G$. \qed \end{proof} \subsection{Interval Graphs} In this section, we give upper bounds on the number of colors needed for both \textsc {CF-CN}{} and \textsc {CF-ON}{} coloring problems for interval graphs. Let $G$ be an interval graph and $\mathcal{I}$ be an \emph{interval representation} of $G$, i.e., there is a mapping from $V(G)$ to closed intervals on the real line such that for any two vertices $u$ and $v$, $uv \in E(G)$ if and only if $I_u \cap I_v \neq \emptyset$. For any interval graph, there exists an interval representation with all endpoints distinct. Such a representation is called a \emph{distinguishing interval representation} and it can be computed starting from an arbitrary interval representation of the graph. Interval graphs can be recognized in linear time and an interval representation can be obtained in linear time \cite{booth1976testing}. Let $l(I_u)$ and $r(I_u)$ denote the left and right end points of the interval corresponding to the vertex $u$ respectively. We say that an interval $I \in \mathcal{I}$ is \emph{rightmost interval} if $r(J) \leq r(I)$ for all $J \in \mathcal{I}$. \begin{algorithm}[t] \caption{Conflict-free closed neighborhood coloring of an interval graph with at most four colors} \label{algo-cfcn} \KwIn{A connected interval graph $G$ along with its distinguishing interval representation $\mathcal{I}$.} \KwOut{A conflict-free closed neighborhood coloring $C_G$ of $G$ with at most four colors.} \For{{\bf each} interval $I_{v_i} \in \mathcal{I}$ by increasing left end point}{ \If {$v_i$ is not colored}{ \eIf {$I_{v_i}$ is the rightmost interval in $\mathcal{I}$}{ $C_G(v_i)=1$ and $C_G(v_j)=0$ for all $v_j \in N(v_i)$ with $l(I_{v_j}) \geq l(I_{v_{i}})$ \; } {find $v_l \in N(v_i)$ such that $r(I_{v_j}) \leq r(I_{v_l})$ for all $v_j \in N[v_i]$\; \eIf {$I_{v_l}$ is the rightmost interval in $\mathcal{I}$}{ $C_G(v_i)=1$, $C_G(v_l)=2$ and $C_G(v_j)=0$ for all $v_j \in N(v_i \cup v_l)$ with $l(I_{v_j}) \geq l(I_{v_{i}})$\; } {find $v_{l'}\in N(v_l)$ such that $r(I_{v_j}) \leq r(I_{v_{l'}})$ for all $v_j \in N[v_l]$ \; $C_G(v_i)=1, C_G(v_l)=2$, $C_G(v_{l'})=3$ and $C_G(v_j)=0$ for all $v_j \in N(v_i \cup v_l \cup v_{l'})$ with $r(I_{v_j}) \leq r(I_{v_{l'}})$ and $l(I_{v_j}) \geq l(I_{v_{i}})$\;} {\bf end if } } {\bf end if } } {\bf end if } } {\bf end for } \end{algorithm} \setcounter{theorem}{4} \begin{theorem}\label{th-cfcn-interval} Let $G$ be an interval graph with at least one edge, then $$2 \leq \chi_{cf}[G] \leq 4$$ \end{theorem} \begin{proof} It is easy to see that for any graph with at least one edge, $2 \leq \chi_{cf}[G]$. The algorithm that computes the \textsc {CF-CN}{} coloring of $G$ with at most four colors is given in Algorithm~\ref{algo-cfcn}. Now we present details on the proof of correctness of the algorithm. Let $C_G$ be a \textsc {CF-CN}{} coloring of $G$ obtained using Algorithm~\ref{algo-cfcn}. 
For any $u_i, u_j \in V(G)$, if $C_G(u_i)=C_G(u_j)=1$ then $u_iu_j \notin E(G)$: Suppose that $u_iu_j \in E(G)$ and $l(I_{u_i}) \leq l(I_{u_j})$. If $I_{u_j}$ is the rightmost interval in $\mathcal{I}$ then $C_G(u_j)=2$ [lines 7-8 in Algorithm~\ref{algo-cfcn}], which is a contradiction to $C_G(u_j)=1$. If $I_{u_j}$ is not the rightmost interval in $\mathcal{I}$, then there exists $v_l$ adjacent to $u_i$ such that $r(I_{v_k}) \leq r(I_{v_l})$ for all $v_k \in N(u_i)$. So we have $r(I_{u_j}) \leq r(I_{v_l})$, and the algorithm colors $u_j$ with $0$, which is a contradiction as $C_G(u_j)=1$. If $I_{u_j}$ lies completely inside $I_{u_i}$ then $C_G(u_j)=0$, which is again a contradiction to $C_G(u_j)=1$. Therefore $u_iu_j \notin E(G)$. Similarly, we can show that any two vertices of color $2$ or color $3$ are also not adjacent. This shows that any vertex $v$ colored with a non-zero color by Algorithm~\ref{algo-cfcn} has a unique color in the set $N[v]$. Now we show that if a vertex $v$ is assigned the color $0$ by Algorithm~\ref{algo-cfcn}, then some non-zero color appears exactly once in $N[v]$. It is easy to see that if a vertex is colored $0$ by Algorithm~\ref{algo-cfcn} then it is adjacent to at least one vertex of non-zero color. Since $C_G(v)=0$, $I_v$ is not the rightmost interval in $\mathcal{I}$. Suppose that $v$ is adjacent to two vertices $u_1$ and $u_2$ of color $1$, with $l(I_{u_1}) \leq l(I_{u_2})$. Note that $u_1$ and $u_2$ are not adjacent as $C_G(u_1)=C_G(u_2)= 1$, so we have $r(I_{u_1}) \leq l(I_{u_2})$. Since $C_G(v)=0$, there exists a $v_l$ in $N(u_1)$ such that $r(I_{v_j}) \leq r(I_{v_l})$ for all $v_j \in N[u_1]$. This implies $r(I_{v}) \leq r(I_{v_l})$ and $I_{v_l} \cap I_{u_2} \neq \emptyset$. If $I_{u_2}$ is the rightmost interval, then $C_G(v_l)=2$ and $C_G(u_2)=3$, which is a contradiction to $C_G(u_2)=1$. If $I_{u_2}$ is not the rightmost interval, then there exists a $v_{l'}$ in $N(v_l)$ such that $r(I_{v_j}) \leq r(I_{v_{l'}})$ for all $v_j \in N[v_l]$. This implies $r(I_{u_2}) \leq r(I_{v_{l'}})$. The algorithm colors the vertex $v_l$ with $2$, $v_{l'}$ with $3$ and $u_2$ with $0$, which is again a contradiction to $C_G(u_2)=1$. Along similar lines, we can also show that any vertex colored zero cannot be adjacent to two vertices of the same non-zero color.\qed \end{proof} \begin{theorem} Let $G$ be an interval graph with at least two edges. Then $$2 \leq \chi_{cf}(G) \leq 4.$$ \end{theorem} \begin{proof} The algorithm that finds a \textsc {CF-ON}{} coloring of an interval graph with four colors is given in Algorithm~\ref{algo-cfon}. The correctness proof of Algorithm~\ref{algo-cfon} is similar to the proof of Theorem~\ref{th-cfcn-interval}.
\qed \end{proof} \begin{algorithm}[t] \caption{Conflict-free open neighborhood coloring of an interval graph with at most four colors} \label{algo-cfon} \KwIn{A connected interval graph $G$ along with its distinguishing interval representation $\mathcal{I}$.} \KwOut{A conflict-free open neighborhood coloring $C_G$ of $G$ with four colors.} \For{{\bf each} interval $I_{v_i} \in \mathcal{I}$ by increasing left end point}{ \If {$v_i$ is not colored}{ \eIf {$I_{v_i}$ is the rightmost interval in $\mathcal{I}$}{ \eIf{there is no vertex $v_{i'}$ such that $I_{v_{i'}} \subseteq I_{v_i}$ }{ $C_G(v_i)=1$ \;} {select an arbitrary vertex $v_{i'}$ in $N(v_i)$ with $l(I_{v_{i'}}) \geq l(I_{v_{i}})$ \; $C_G(v_i)=1$,$C_G(v_{i'})=2$ and $C_G(v_j)=0$ for all $v_j \in N(v_i)$ with $l(I_{v_j}) \geq l(I_{v_{i}})$} {\bf end if } } {find a $v_l \in N(v_i)$ such that $r(I_{v_j}) \leq r(I_{v_l})$ for all $v_j \in N[v_i]$\; \eIf {$I_{v_l}$ is the rightmost interval in $\mathcal{I}$}{ $C_G(v_i)=1$, $C_G(v_l)=2$ and $C_G(v_j)=0$ for all $v_j \in N(v_l \cup v_i)$ with $l(I_{v_j}) \geq l(I_{v_{i}})$ \; } {find a $v_{l'}\in N(v_l)$ such that $r(I_{v_j}) \leq r(I_{v_{l'}})$ for all $v_j \in N[v_l]$ \; $C_G(v_i)=1$,$C_G(v_l)=2$, $C_G(v_{l'})=3$ and $C_G(v_j)=0$ for all $v_j \in N(v_i \cup v_l \cup v_{l'})$ with $r(I_{v_j}) \leq r(I_{v_{l'}})$ and $l(I_{v_j}) \geq l(I_{v_{i}})$ \; } {\bf end if } } {\bf end if } } {\bf end if } } {\bf end for } \end{algorithm} \subsection{Cographs} \begin{theorem}\label{th-cographs} The \textsc {CF-CN}{} (resp. \textsc {CF-ON}{}) coloring problem can be solved in polynomial time on cographs. \end{theorem} \begin{proof} We use modular decomposition \cite{habib2010survey} technique to solve conflict free coloring problem on cographs. First, we define the notion of \emph{modular decomposition}. A set $M \subseteq V(G)$ is called {\it module} of $G$ if all vertices of $M$ have the same set of neighbors in $V(G)\setminus M$. The \emph{trivial modules} are $V(G)$, and $\{v\}$ for all $v$. A prime graph is a graph in which all modules are trivial. The modular decomposition of a graph is one of the decomposition techniques which was introduced by Gallai~\cite{gallai1967transitiv}. The {\it modular decomposition} of a graph $G$ is a rooted tree $M_G$ that has the following properties: \begin{enumerate} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \item The leaves of $M_G$ are the vertices of $G$. \item For an internal node $h$ of $M_G$, let $M(h)$ be the set of vertices of $G$ that are leaves of the subtree of $M_G$ rooted at $h$. ($M(h)$ forms a module in $G$). \item For each internal node $h$ of $M_G$ there is a graph $G_h$ (\emph{representative graph}) with $V(G_h)=\{h_1,h_2,\cdots,h_r\}$, where $h_1,h_2,\cdots,h_r$ are the children of $h$ in $M_G$ and for $1 \leq i<j \leq r$, $h_i$ and $h_j$ are adjacent in $G_h$ iff there are vertices $u \in M(h_i)$ and $v \in M(h_j)$ that are adjacent in $G$. \item $G_h$ is either a clique, an independent set, or a prime graph and $h$ is labeled \emph{Series} if $G_h$ is clique, \emph{Parallel} if $G_h$ is an independent set, and \emph{Prime} otherwise. \end{enumerate} Modular decomposition tree of cographs has only parallel and series nodes. Let $G$ be a cograph whose modular decomposition tree is $M_G$. Without loss of generality we assume that the root $r$ of tree $M_G$ is a series node, otherwise, $G$ is not connected and we can color each connected component independently. Let the children of $r$ be $x$ and $y$. 
Further, let the cographs corresponding to the subtrees at $x$ and $y$ be $G_x$ and $G_y$. First we consider the \textsc {CF-CN}{} coloring problem on cographs. If $G$ has a universal vertex, then color it with one color and the rest of the vertices with a second color. If $G$ does not have a universal vertex, then color one vertex of $G_x$ with $0$, one vertex of $G_y$ with $1$ and all remaining vertices of $G$ with $2$. For the \textsc {CF-ON}{} coloring, if $G$ contains only two vertices, then color one vertex with color $0$ and the other with color $1$. If $G$ contains at least three vertices, then color one vertex of $G_x$ with $0$, one vertex of $G_y$ with $1$ and all remaining vertices of $G$ with $2$. \qed \end{proof} \section{Conclusion} In this paper, we have studied the parameterized complexity of the conflict-free coloring problem with respect to open/closed neighborhoods. We have shown that both closed and open neighborhood conflict-free colorings are $\ensuremath{\sf{FPT}}\xspace$ parameterized by the cluster vertex deletion number, and we have also shown that both variants of the problem admit an additive constant approximation algorithm when parameterized by the distance to threshold graphs. We have also studied the complexity of the problem on special classes of graphs. We have shown that both closed and open neighborhood conflict-free colorings can be solved in polynomial time on cographs. On split graphs, closed neighborhood coloring can be solved in polynomial time, whereas open neighborhood coloring is $\ensuremath{\sf{NP}}\xspace$-complete. For interval graphs, we have given upper bounds on the number of colors needed for both the open and closed neighborhood conflict-free coloring problems. The following problems remain open. \begin{itemize} \item Does conflict-free open/closed neighborhood coloring admit a polynomial kernel when parameterized by (a) the size of a vertex cover, or (b) the distance to a clique? \item Is the \textsc {CF-CN}{} problem \ensuremath{\sf{FPT}}\xspace{} when parameterized by the distance to cographs or the distance to split graphs? \item What is the parameterized complexity of both problems when parameterized by clique-width/rank-width? \item What is the complexity of the \textsc {CF-CN}{} and \textsc {CF-ON}{} coloring problems on interval graphs? \end{itemize} \end{document}
\begin{document} \title{Secure direct bidirectional communication protocol using the Einstein-Podolsky-Rosen pair block \\ \thanks{*Email: [email protected] }} \author{Z. J. Zhang and Z. X. Man \\ {\normalsize Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071, China } \\ {\normalsize *Email: [email protected] }} \date{\today} \maketitle \begin{minipage}{380pt} In light of Deng-Long-Liu's two-step secret direct communication protocol using the Einstein-Podolsky-Rosen pair block [Phys. Rev. A {\bf 68}, 042317 (2003)], by introducing additional local operations for encoding, we propose a brand-new secure direct communication protocol, in which two legitimate users can simultaneously transmit their different secret messages to each other in a set of quantum communication device.\\ PACS Number(s): 03.67.Hk, 03.65.Ud \\ \end{minipage}\\ Quantum key distribution (QKD) is an ingenious application of quantum mechanics, in which two remote legitimate users (Alice and Bob) establish a shared secret key through the transmission of quantum signals. Much attention has been focused on QKD after the pioneering work of Bennett and Brassard published in 1984 [1]. Till now there have been many theoretical QKDs [2-19]. They can be classified into two types, the nondeterministic one [2-14] and the deterministic one [15-19]. The nondeterministic QKD can be used to establish a shared secret key between Alice and Bob, consisting of a sequence of random bits. This secret key can be used to encrypt a message which is sent through a classical channel. In contrast, in the deterministic QKD, the legitimate users can get results deterministically provided that the quantum channel is not disturbed. It is more attractive to establish a deterministic secure direct communication protocol by taking advantage of the deterministic QKDs. However, different from the deterministic QKDs, the deterministic secure direct communication protocol is more demanding on the security. Hence, only recently a few of deterministic secure direct protocols have been proposed [15-16,19]. One of these protocols is Deng-Long-Liu's two-step quantum direct communication protocol using the EPR pair block [19]. It is provably secure and has a high capacity. However, this deterministic secure direct protocol is also a message-unilaterally-transmitted protocol as well as the protocols in [15-16], i.e., two parties can not simultaneously transmit their different secret messages to each other in a set of quantum communication device. In general, convenient bidirectional simultaneous mutual communications are very useful and usually desired. In this paper in light of Deng-Long-Liu's communication protocol, by introducing additional local operations for encoding, we propose a secure direct bidirectional communication protocol, in which two legitimate users can simultaneously transmit their secret messages to each other in a set of quantum communication device. Let us start with a brief description of the two-step protocol. Alice prepares an ordered $N$ EPR photon pairs in state $|\Psi \rangle _{CM}=|\Psi ^{-}\rangle =(|0\rangle _{C}|1\rangle _{M}-|1\rangle _{C}|0\rangle _{M})/\sqrt{2}$ for each and divides them into two partner-photon sequences $[C_1, C_2, \dots, C_N]$ and $[M_1, M_2, \dots, M_N]$, where $C_i$ $(M_i)$ stands for the $C$ ($M$) photon in the $i$th photon pair. Then she sends the $C$ photon sequence to Bob. 
Bob randomly chooses a fraction of the photons in the $C$ sequence and tells Alice publicly which photons he has chosen. Then Bob chooses randomly one of two measurement bases (MB), say $\sigma_z$ or $\sigma_x$, to measure the chosen photons and tells Alice which MB he has chosen for each photon and the corresponding measurement result. Alice uses the same MB as Bob to measure the corresponding partner photons in the $M$ sequence and checks her outcomes against Bob's results. Their results should be anticorrelated with each other provided that no eavesdropping exists [16]. If Eve is on the line, they have to discard their transmission and abort the communication. Otherwise, Alice performs unitary operations on the unmeasured photons in the $M$ sequence to encode her messages according to the following correspondences: $U_0 \leftrightarrow 00$; $U_1 \leftrightarrow 01$; $U_2 \leftrightarrow 10$; $U_3 \leftrightarrow 11$, where $U_0=I=|0\rangle \langle 0|+|1\rangle \langle 1|$, $U_1=\sigma _{z}=|0\rangle \langle 0|-|1\rangle \langle 1|$, $U_2=\sigma _{x}=|1\rangle \langle 0|+|0\rangle \langle 1|$, $U_3=i\sigma _{y}=|0\rangle \langle 1|-|1\rangle \langle 0|$. Then Alice sends Bob the photons on which the unitary operations have been performed. After Bob receives the photons, he performs a Bell-basis measurement on each of them together with its partner photon in the initial pair. Since $U_0|\Psi ^{-}\rangle=|\Psi ^{-}\rangle$, $U_{1}|\Psi ^{-}\rangle=|\Psi ^{+}\rangle$, $U_{2}|\Psi ^{-}\rangle=|\Phi ^{-}\rangle$ and $U_{3}|\Psi ^{-}\rangle=|\Phi ^{+}\rangle$, Bob can extract Alice's encoding according to his measurement results. By the way, in Alice's second transmission, a small trick similar to message authentication is used by Alice to detect Eve's attack without eavesdropping. In [19], the security of the two-step protocol is proven. Let us turn to our protocol. We only revise the two-step protocol in a subtle way; however, the function of the protocol changes substantially, i.e., two legitimate users can simultaneously transmit their different secret messages to each other in a single set of quantum communication devices. When Bob receives the photons on which Alice has performed unitary operations to encode her messages, he does not perform the Bell-basis measurements on each photon with its partner in the initial pair at once, but instead carries out a unitary operation (i.e., $U_{0}$, $U_{1}$, $U_{2}$ or $U_{3}$) on one photon of each initial pair to encode his own message. After his unitary operations, Bob performs the Bell-basis measurements on the photon pairs and publicly announces his measurement results. Since Bob knows which unitary operation he has performed on one photon of each pair, he can still extract Alice's encodings according to his measurement results (see Table 1). Meanwhile, since Alice knows which unitary operation she has performed on one photon of each pair, she can also extract Bob's encodings according to Bob's public announcements of his measurement results (see Table 1). So far we have proposed a deterministic direct bidirectional communication protocol. \\ \begin{minipage}{370pt} \begin{center} Table 1. Corresponding relations among Alice's and Bob's unitary operations (i.e., the encoding bits) and Bob's Bell measurement results on the photon pair.
Alice's (Bob's) unitary operations are listed in the first column (line).\\ \begin{tabular}{ccccc} \hline & $U_{0}(00)$ & $U_{1}(01)$ & $U_{2}(10)$ & $U_{3}(11)$ \\ \hline $U_{0}(00)$ & $|\Psi ^{-}\rangle $ & $|\Psi ^{+}\rangle $ & $|\Phi ^{-}\rangle $ & $|\Phi ^{+}\rangle $ \\ $U_{1}(01)$ & $|\Psi ^{+}\rangle $ & $|\Psi ^{-}\rangle $ & $|\Phi ^{+}\rangle $ & $|\Phi ^{-}\rangle $ \\ $U_{2}(10)$ & $|\Phi ^{-}\rangle $ & $|\Phi ^{+}\rangle $ & $|\Psi ^{-}\rangle $ & $|\Psi ^{+}\rangle $ \\ $U_{3}(11)$ & $|\Phi ^{+}\rangle $ & $|\Phi ^{-}\rangle $ & $|\Psi ^{+}\rangle $ & $|\Psi ^{-}\rangle $\\ \hline \end{tabular}\\ \end{center} \end{minipage}\\ \vskip 1cm Let us discuss the security of our protocol. Before Bob's announcement, the present protocol is only nearly same with the two step protocol due to the additional unitary operations of Bob. However, since all the photons are in Bob's hand, Eve can not know which unitary operation Bob has performed at all. In fact, in this case the essence of our protocol is the two-step protocol. Hence it is secure for Bob to get the secret message from Alice according to his Bell-basis measurements. Although later Bob publicly announces his Bell measurement results, because he has performed unitary operations which Eve can not know at all, Eve still can not know which unitary operations Alice has ever performed. Hence, it is still secure for Bob to get the secret message from Alice via our protocol. Now that Eve can not know which unitary operations Alice has performed and Bob publicly announces his measurement results, Alice can securely know which unitary operation Bob has ever performed, i.e., she can extract securely Bob's encodings. Hence the present quantum dense coding protocol is secure against eavesdropping. As for Eve's attack without eavesdropping, we can also adopt the strategy as the trick in [19] to detect it. To summarize, we have proposed a deterministic secure direct bidirectional communication protocol by using the Einstein-Podolsky-Rosen pair block. In this protocol two legitimate users can simultaneously transmit their different secret messages to each other in a set of quantum communication device. This work is supported by the National Natural Science Foundation of China under Grant No. 10304022. \\ \noindent [1] C. H. Bennett and G. Brassard, in {\it Proceedings of the IEEE International Conference on Computers, Systems and Signal Processings, Bangalore, India} (IEEE, New York, 1984), p175. \noindent[2] A. K. Ekert, Phys. Rev. Lett. {\bf67}, 661 (1991). \noindent[3] C. H. Bennett, Phys. Rev. Lett. {\bf68}, 3121 (1992). \noindent[4] C. H. Bennett, G. Brassard, and N.D. Mermin, Phys. Rev. Lett. {\bf68}, 557(1992). \noindent[5] L. Goldenberg and L. Vaidman, Phys. Rev. Lett. {\bf75}, 1239 (1995). \noindent[6] B. Huttner, N. Imoto, N. Gisin, and T. Mor, Phys. Rev. A {\bf51}, 1863 (1995). \noindent [7] M. Koashi and N. Imoto, Phys. Rev. Lett. {\bf79}, 2383 (1997). \noindent[8] W. Y. Hwang, I. G. Koh, and Y. D. Han, Phys. Lett. A {\bf244}, 489 (1998). \noindent[9] P. Xue, C. F. Li, and G. C. Guo, Phys. Rev. A {\bf65}, 022317 (2002). \noindent[10] S. J. D. Phoenix, S. M. Barnett, P. D. Townsend, and K. J. Blow, J. Mod. Opt. {\bf42}, 1155 (1995). \noindent[11] H. Bechmann-Pasquinucci and N. Gisin, Phys. Rev. A {\bf59}, 4238 (1999). \noindent[12] A. Cabello, Phys. Rev. A {\bf61},052312 (2000); {\bf64}, 024301 (2001). \noindent[13] A. Cabello, Phys. Rev. Lett. {\bf85}, 5635 (2000). \noindent[14] G. P. Guo, C. F. Li, B. S. Shi, J. Li, and G. C. 
Guo, Phys. Rev. A {\bf64}, 042301 (2001). \noindent[15] A. Beige, B. G. Englert, C. Kurtsiefer, and H.Weinfurter, Acta Phys. Pol. A {\bf101}, 357 (2002). \noindent[16] Kim Bostrom and Timo Felbinger, Phys. Rev. Lett. {\bf89}, 187902 (2002). \noindent[17] G. L. Long and X. S. Liu, Phys. Rev. A {\bf65}, 032302 (2002). \noindent[18] F. G. Deng and G. L. Long, Phys. Rev. A {\bf68}, 042315 (2003). \noindent[19] F. G. Deng, G. L. Long, and X. S. Liu, Phys. Rev. A {\bf68}, 042317 (2003). \end{document}
\begin{document} \title[Commutators]{Commutators of elements of coprime orders in finite groups} \author{Pavel Shumyatsky} \address{Department of Mathematics, University of Brasilia, Brasilia-DF, 70910-900 Brazil} \email{[email protected]} \thanks{This work was supported by CNPq-Brazil} \keywords{} \subjclass{} \begin{abstract} This paper is an attempt to find out which properties of a finite group $G$ can be expressed in terms of commutators of elements of coprime orders. A criterion of solubility of $G$ in terms of such commutators is obtained. We also conjecture that every element of a nonabelian simple group is a commutator of elements of coprime orders and we confirm this conjecture for the alternating groups. \end{abstract} \maketitle \section{Introduction} Let $w$ be a group word, i.e., an element of the free group on $x_1,\dots,x_d$. For a group G we denote by $w(G)$ the subgroup generated by the $w$-values. The subgroup $w(G)$ is called the verbal subgroup of $G$ corresponding to the word $w$. An important family of words are the lower central words $\gamma_k$, given by \[ \gamma_1=x_1, \qquad \gamma_k=[\gamma_{k-1},x_k]=[x_1,\ldots,x_k], \quad \text{for $k\geq 2$.} \] Here, as usual, we write $[x,y]$ to denote the commutator $x^{-1}y^{-1}xy$. The corresponding verbal subgroups $\gamma_k(G)$ are the terms of the lower central series of $G$. Another interesting sequence of words are the derived words $\delta_k$, on $2^k$ variables, which are defined recursively by \[ \delta_0=x_1, \quad \delta_k=[\delta_{k-1}(x_1,\ldots,x_{2^{k-1}}),\delta_{k-1}(x_{2^{k-1}+1},\ldots,x_{2^k})], \quad \text{for $k \geq 1$.} \] The verbal subgroup that corresponds to the word $\delta_k$ is the familiar $k$th derived subgroup of $G$ usually denoted by $G^{(k)}$. It is well-known that many properties of $G$ can be detected by just looking at the set of $w$-values. For example, the group $G$ is nilpotent of class at most $k$ if and only if $\gamma_{k+1}(G)=1$ and $G$ is soluble with derived length at most $k$ if and only if $\delta_{k}(G)=1$. In the case where $G$ is finite some important group-theoretical properties can be detected by studying the set of commutators $[x,y]$, where $x$ and $y$ are elements of coprime orders. In particular, it is easy to show that a finite group $G$ is nilpotent if and only if $[x,y]=1$ for all $x,y\in G$ such that $(|x|,|y|)=1$. The present paper is an attempt to find out which properties of a finite group can be expressed in terms of commutators of elements of coprime orders. There is no canonical way to define the $\gamma_k$-commutators and $\delta_k$-commutators in elements of coprime orders of a finite group $G$. Thus, we propose the following definitions. Let $G$ be a finite group and $k$ a nonnegative integer. Every element of $G$ is a $\gamma_1^*$-commutator as well as a $\delta_0^*$-commutator. Now let $k\geq 2$ and let $X$ be the set of all elements of $G$ that are powers of $\gamma_{k-1}^*$-commutators. An element $x$ is a $\gamma_k^*$-commutator if there exist $a\in X$ and $b\in G$ such that $x=[a,b]$ and $(|a|,|b|)=1$. For $k\geq 1$ let $Y$ be the set of all elements of $G$ that are powers of $\delta_{k-1}^*$-commutators. The element $x$ is a $\delta_k^*$-commutator if there exist $a,b\in Y$ such that $x=[a,b]$ and $(|a|,|b|)=1$. The subgroups of $G$ generated by all $\gamma_k^*$-commutators and all $\delta_k^*$-commutators will be denoted by $\gamma_k^*(G)$ and $\delta_k^*(G)$, respectively. 
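As a small concrete illustration of these definitions (an informal computational sketch, not part of the formal development), one can enumerate the $\delta_1^*$-commutators of the symmetric group $S_4$, that is, the commutators $[a,b]$ with $(|a|,|b|)=1$, by brute force. The Python code below, with permutations stored as tuples and all helper names ours, finds that these commutators are exactly the twelve elements of the alternating group $A_4$.
\begin{verbatim}
from itertools import permutations
from math import gcd

# Brute-force illustration: the delta_1^*-commutators of S_4, i.e. the
# commutators [a, b] of elements a, b of coprime orders.  A permutation of
# {0,1,2,3} is stored as a tuple p with p[i] = image of i.

IDENTITY = (0, 1, 2, 3)

def compose(p, q):
    # "p followed by q": i -> p(i) -> q(p(i))
    return tuple(q[p[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def order(p):
    # order of p = least n >= 1 with p^n = identity
    q, n = p, 1
    while q != IDENTITY:
        q, n = compose(q, p), n + 1
    return n

def commutator(a, b):
    # [a, b] = a^{-1} b^{-1} a b
    return compose(compose(inverse(a), inverse(b)), compose(a, b))

S4 = list(permutations(range(4)))
delta1_star = {commutator(a, b)
               for a in S4 for b in S4
               if gcd(order(a), order(b)) == 1}

# Every commutator lies in the derived subgroup A_4; the enumeration shows
# that all 12 elements of A_4 occur as commutators of coprime-order elements.
print(len(delta1_star))   # -> 12
\end{verbatim}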
One can easily see that if $N$ is a normal subgroup of $G$ and $x$ an element whose image in $G/N$ is a $\gamma_k^*$-commutator (respectively a $\delta_k^*$-commutator), then there exists a $\gamma_k^*$-commutator $y\in G$ (respectively a $\delta_k^*$-commutator) such that $x\in yN$. \section{$\delta_k^*$-Commutators} For a finite group $G$ we have $\gamma_k^*(G)=1$ if and only if $G$ is nilpotent. Indeed, we have already remarked that if $G$ is nilpotent then $\gamma_2^*(G)=1$. Suppose that $\gamma_k^*(G)=1$ but $G$ is not nilpotent. We can assume that the counter-example $G$ is chosen with minimal possible order. Then every proper subgroup of $G$ is nilpotent. Finite groups all of whose proper subgroups are nilpotent have been classified by Schmidt in \cite{schmidt}. In particular, such groups are soluble. Therefore $G$ contains a minimal normal abelian $p$-subgroup $M$ for some prime $p$. By induction $G/M$ is nilpotent. If $M$ commutes with every $p'$-element of $G$, it follows easily that $G$ is nilpotent, a contradiction. Hence $G=M\langle x\rangle$ for some $p'$-element $x$ of $G$ and $M=[M,x]$. Since $M$ is abelian, it is clear that each element of $M$ can be written in the form $[m,x]$ for suitable $m\in M$. Further, the obvious induction shows that each element of $M$ can be written in the form $[m,\underbrace{x,\dots,x}_l]$ for suitable $m\in M$ and an arbitrary positive integer $l$. Since all elements of the form $[m,\underbrace{x,\dots,x}_l]$ are $p$-elements and $x$ is a $p'$-element, we conclude that $$[M,\underbrace{x,\dots,x}_{k-1}]=\gamma_k^*(G)=1.$$ This yields a contradiction since $G$ is not nilpotent. We will now study the influence of $\delta_k^*$-commutators on the structure of $G$. In what follows we use without explicit references the fact that any $\delta_k^*$-commutator in $G$ can be viewed as a $\delta_{i}^*$-commutator for each $i\leq k$. We start with the following well-known lemma. \begin{lemma}\label{1} Let $\alpha$ be an automorphism of a finite group $G$ with $(|\alpha|,|G|)=1$. \begin{enumerate} \item $G=[G,\alpha]C_{G}(\alpha)$. \item $[G,\alpha]=[G,\alpha,\alpha]$. In particular, if $[G,\alpha,\alpha]=1$ then $\alpha=1$. \end{enumerate} \end{lemma} We will also require the following lemma from \cite{lau}. \begin{lemma}\label{2} Let $A$ be a group of automorphisms of a finite group $G$ with $(|A|,|G|)=1$. Suppose that $B$ is a normal subset of $A$ such that $A=\langle B\rangle$. Let $i\geq 1$ be an integer. Then $[G,A]$ is generated by the subgroups of the form $[G,b_1,\dots,b_i]$, where $b_1,\dots,b_i\in B$. \end{lemma} The next lemma will be very useful. \begin{lemma}\label{1113} Let $G$ be a finite group and $y_1,\dots,y_k$ $\delta_k^*$-commutators in $G$. Suppose the elements $y_1,\dots,y_k$ normalize a subgroup $N$ such that $(|y_i|,|N|)=1$ for every $i=1,\dots,k$. Then for every $x\in N$ the element $[x,y_1,\dots,y_k]$ is a $\delta_{k+1}^*$-commutator. \end{lemma} \begin{proof} We note that all elements of the form $[x,{y_1,\dots,y_s}]$ are of order prime to $|y_{s+1}|$. An easy induction on $i$ shows that whenever $i\leq k$ the element $[x,y_1,\dots,y_i]$ is a $\delta_{i+1}^*$-commutator. The lemma follows. \end{proof} The famous Burnside $p^aq^b$-Theorem says that a finite group whose order is divisible by only 2 primes is soluble (see \cite[Theorem 4.3.3]{go}). Our next result may be viewed as a generalization of the Burnside theorem. As usual, $O_\pi(G)$ denotes the largest normal $\pi$-subgroup of $G$. 
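Before turning to this result, we note that the nilpotency criterion recalled at the beginning of this section -- a finite group $G$ is nilpotent if and only if $[x,y]=1$ whenever $(|x|,|y|)=1$ -- is easy to experiment with on small examples. The following brute-force sketch (ours and purely illustrative; a group is passed as a list of elements together with its multiplication and identity) applies the test to $S_3$, which is not nilpotent, and to the cyclic group $C_6$, which is abelian and hence nilpotent.
\begin{verbatim}
from itertools import permutations
from math import gcd

# Sketch of the coprime-order commutator test for nilpotency:
# a finite group G is nilpotent iff [x, y] = 1 whenever (|x|, |y|) = 1.
# A group is given by its list of elements, a multiplication function and
# its identity element; inverses and orders are found by brute force.

def order(g, mul, e):
    x, n = g, 1
    while x != e:
        x, n = mul(x, g), n + 1
    return n

def inverse(g, elements, mul, e):
    return next(h for h in elements if mul(g, h) == e)

def coprime_commutators_trivial(elements, mul, e):
    for x in elements:
        for y in elements:
            if gcd(order(x, mul, e), order(y, mul, e)) == 1:
                xi = inverse(x, elements, mul, e)
                yi = inverse(y, elements, mul, e)
                if mul(mul(xi, yi), mul(x, y)) != e:  # [x, y] = x^{-1} y^{-1} x y
                    return False
    return True

# S_3, realised as permutation tuples on {0,1,2}: not nilpotent, and indeed
# a transposition and a 3-cycle have a nontrivial commutator.
S3 = list(permutations(range(3)))
perm_mul = lambda p, q: tuple(q[p[i]] for i in range(len(p)))
print(coprime_commutators_trivial(S3, perm_mul, (0, 1, 2)))                # -> False

# C_6 = Z/6 under addition: abelian, hence nilpotent, and the test passes.
print(coprime_commutators_trivial(range(6), lambda a, b: (a + b) % 6, 0))  # -> True
\end{verbatim}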
\begin{theorem}\label{suzuki} Let $k$ be a positive integer, $\pi$ a set consisting of at most two primes and $G$ a finite group in which all $\delta_k^*$-commutators are $\pi$-elements. Then $G$ is soluble and $\delta_k^*(G)\leq O_\pi(G)$. \end{theorem} \begin{proof} First we will prove that $G$ is soluble. Suppose that this is false and let $G$ be a counterexample of minimal possible order. Then $G$ is nonabelian simple and all proper subgroups of $G$ are soluble. The minimal simple groups have been classified by Thompson in his famous paper \cite{thompson}. It follows that $G$ is isomorphic with a group of type $Sz(q)$, $L_2(q)$ or $L_3(3)$. Suppose first that $G=Sz(q)$ is a Suzuki group. Let $Q$ be a Sylow 2-subgroup of $G$ and $K$ a (cyclic) subgroup of order $q-1$ that normalizes $Q$. Let $x$ be a generator of $K$. Choose an involution $j\in G$ such that $x^j=x^{-1}$. We remark that for every $y\in K$ there exists $y_1\in K$ such that $y=[y_1,j]$. Moreover for every $n\geq 1$ and every involution $a\in Q$ we have $a=[b,\underbrace{x,\dots,x}_{n-1}]$ for a suitable involution $b\in Q$. Using Lemma \ref{1113} it is easy to show that both $a$ and $x$ are $\delta_{n}^*$-commutator for every $n=0,1\dots$. Indeed suppose by induction that $n\geq 1$ and $x$ is a $\delta_{n-1}^*$-commutator. Lemma \ref{1113} shows that $a$ is a $\delta_{n}^*$-commutator. Since all involutions in $G$ are conjugate, we conclude that $j$ is a $\delta_{n}^*$-commutator. Now write $x=[y,\underbrace{j,\dots,j}_n]$ for suitable $y\in K$. Lemma \ref{1113} shows that $x$ is a $\delta_{n+1}^*$-commutator, as required. This argument actually shows that every strongly real element of odd order is a $\delta_{n}^*$-commutator for every $n$. Since $G$ contains strongly real elements of orders dividing $q-1$ and $q\pm r+1$, where $r^2=2q$, we obtain a contradiction. Therefore in the case where $G=Sz(q)$ not all $\delta_{k}^*$-commutators are $\pi$-elements. Other minimal simple groups can be treated in a similar way. Really, all involutions in those groups are conjugate. In all possible cases $G$ contains an elementary abelian 2-subgroup $R$ which is normalized by a strongly real element acting on $R$ irreducibly. Thus, in those groups all involutions and all strongly real elements of odd order are $\delta_{n}^*$-commutators for every $n$. Suppose $G=L_3(3)$. Then $G$ has strongly real element of order 3 which acts irreducibly on a cyclic subgroup of order 13. It follows that for every $n$ the group $G$ contains $\delta_{n}^*$-commutators of orders 2, 3 and 13. If $G=L_2(q)$ where $q$ is even, $G$ contains strongly real elements of orders dividing $q-1$ and $q+1$ and we get a contradiction. If $G=L_2(q)$ where $q=p^s$ is odd, $G$ contains strongly real elements of orders dividing $(q-1)/2$ and $(q+1)/2$. Choose an element $x$ of prime order dividing $(q-1)/2$. We know that $x$ normalizes a Sylow $p$-subgroup $Q$ in $G$ and $Q=[Q,\underbrace{x,\dots,x}_n]$. Thus, again by Lemma \ref{1113} it follows that $G$ contains $\delta_{n}^*$-commutators of order $p$. Hence, $G$ is soluble and we will now prove that $\delta_k^*(G)\leq O_\pi(G)$. Again we assume that the claim is false and let $G$ be a counterexample of minimal possible order. Then $O_\pi(G)=1$. Let $M$ be a minimal normal subgroup of $G$. We know that $G$ is soluble and therefore $M$ is an elementary abelian $r$-group for some prime $r\not\in\pi$. Choose a $\delta_{k}^*$-commutator $x\in G$. 
By Lemma \ref{1113} every element of $[M,\underbrace{x,\dots,x}_{k-1}]$ is a $\delta_{k}^*$-commutator. Since the orders of $\delta_{k}^*$-commutators in $G$ are not divisible by $r$, we conclude that $[M,\underbrace{x,\dots,x}_{k-1}]=1$. Lemma \ref{1} now shows that $x$ commutes with $M$. Denote $\delta_k^*(G)$ by $N$. It follows that $[M,N]=1$. By induction the image of $N$ in $G/M$ is a $\pi$-group. Hence, $N/Z(N)$ is a $\pi$-group. Schur's Theorem now shows that $N'$ is a $\pi$-group \cite[p. 102]{rob}. Since $O_\pi(G)=1$, we conclude that $N$ is abelian. But then $N$, being generated by $\pi$-elements, must be a $\pi$-group. This is a contradiction. The proof is complete. \end{proof} We will now proceed to show that the finite groups $G$ satisfying $\delta_k^*(G)=1$ are precisely the soluble groups with Fitting height at most $k$. Recall that the Fitting height $h=h(G)$ of a finite soluble group $G$ is the minimal number $h$ such that $G$ possesses a normal series all of whose quotients are nilpotent. Following \cite{lau} we call a subgroup $H$ of $G$ a tower of height $h$ if $H$ can be written as a product $H=P_1\cdots P_h$, where (1) $P_i$ is a $p_i$-group ($p_i$ a prime) for $i=1,\dots,h$. (2) $P_i$ normalizes $P_j$ for $i<j$. (3) $[P_i,P_{i-1}]=P_i$ for $i=2,\dots,h$. It follows from (3) that $p_i\neq p_{i+1}$ for $i=1,\dots,h-1$. A finite soluble group $G$ has Fitting height at least $h$ if and only if $G$ possesses a tower of height $h$ (see for example Section 1 in \cite{turull}). We will need the following lemma. \begin{lemma}\label{12} Let $P_1\cdots P_h$ be a tower of height $h$. For every $1\leq i\leq h$ the subgroup $P_i$ is generated by $\delta_{i-1}^*$-commutators contained in $P_i$. \end{lemma} \begin{proof} If $i=1$ the lemma is obvious so we suppose that $i\geq 2$ and use induction on $i$. Thus, we assume that $P_{i-1}$ is generated by $\delta_{i-2}$-commutators contained in $P_{i-1}$. Denote the set of $\delta_{i-2}$-commutators contained in $P_{i-1}$ by $B$. Combining Lemma \ref{2} with the fact that $P_i=[P_i,P_{i-1}]$, we deduce that $P_i$ is generated by subgroups of the form $[P_i,b_1,\dots,b_{i-2}]$, where $b_1,\dots,b_{i-2}\in B$. The result is now immediate from Lemma \ref{1113}. \end{proof} \begin{theorem}\label{crite} Let $G$ be a finite group and $k$ a positive integer. We have $\delta_k^*(G)=1$ if and only if $G$ is soluble with Fitting height at most $k$. \end{theorem} \begin{proof} Assume that $\delta_k^*(G)=1$. We know from Theorem \ref{suzuki} that $G$ is soluble. Suppose that $h(G)\geq k+1$. Then $G$ possesses a tower $P_1\cdots P_{k+1}$ of height $k+1$. Lemma \ref{12} shows that $P_{k+1}$ is generated by $\delta_k^*$-commutators. Since $\delta_k^*(G)=1$, it follows that $P_{k+1}=1$, a contradiction. Now suppose that $G$ is soluble with Fitting height at most $k$. Let $$G=N_1\geq N_2\dots\geq N_{t}=1$$ be the lower Fitting series of $G$. Here the subgroup $N_2=\gamma_\infty(G)$ is the last term of the lower central series of $G$, the subgroup $N_3=\gamma_\infty(N_2)$ is the last term of the lower central series of $N_2$ etc. Let us show that $N_i=\delta_{i-1}^*(G)$ for every $i=1,2,\dots,t$. This is clear for $i=1$ and so suppose that $i\geq 2$ and use induction on $i$. Thus, we assume that $N_{i-1}=\delta_{i-2}^*(G)$. Since $N_i=\gamma_\infty(N_{i-1})$, it follows that $N_i$ contains all commutators of elements of coprime orders in $N_{i-1}$. In particular, $N_i\geq\delta_{i-1}^*(G)$. 
On the other hand, the previous paragraph shows that $h(G/\delta_{i-1}^*(G))\leq i-1$ and therefore $N_i\leq\delta_{i-1}^*(G)$. Hence, indeed $N_i=\delta_{i-1}^*(G)$. It is clear that $t\leq k+1$ and therefore $\delta_k^*(G)=1$. \end{proof} Now a simple combination of Theorem \ref{crite} with Theorem \ref{suzuki} yields the following corollary. \begin{corollary}\label{uki} Let $k$ be a positive integer, $p$ a prime and $G$ a finite group in which all $\delta_k^*$-commutators are $p$-elements. Then $G$ is soluble and $h(G)\leq k+1$. \end{corollary} \begin{proof} Indeed, by Theorem \ref{suzuki} $\delta_k^*(G)\leq O_p(G)$ and by Theorem \ref{crite} $h(G/O_p(G))\leq k$. \end{proof} \section{Commutators in the alternating groups} If $\pi$ is a set of primes and $G$ a finite group in which all $\delta_k$-commutators are $\pi$-elements, then $G^{(k)}\leq O_\pi(G)$. This is straightforward from the main result of \cite{focal}. It seems likely that if $\pi$ is a set of primes and $G$ a finite group in which all $\delta_k^*$-commutators are $\pi$-elements, then $\delta_k^*(G)\leq O_\pi(G)$. Theorem \ref{suzuki} tells us that this is true whenever $\pi$ consists of at most two primes, and it is easy to adapt the proof of Theorem \ref{suzuki} to show that this is true in the case where $G$ is soluble. One possible approach to the general case would be via a modification of the well-known Ore Conjecture. In 1951 Ore conjectured that every element of a nonabelian finite simple group is a commutator. Ore's conjecture was confirmed almost sixty years later by Liebeck, O'Brien, Shalev and Tiep \cite{lost}. Ore himself proved that every element of a simple alternating group $A_n$ is a commutator. Our proof of Theorem \ref{suzuki} suggests that perhaps every element of a nonabelian finite simple group is a commutator of elements of coprime orders. The goal of this section is to show that this is true for the alternating groups $A_n$. More precisely, we will prove the following theorem. \begin{theorem}\label{alter} Let $n\geq 5$. Every element of the alternating group $A_n$ is a commutator of an element of odd order and an element of order dividing 4. \end{theorem} \begin{proof} Let $x\in A_n$. The decomposition of $x$ into a product of independent cycles may contain cycles of odd order and an even number of cycles of even order. Our theorem follows, therefore, if one can show that every cycle of odd order and every pair of cycles of even order are commutators of the required form of elements lying in $A_n$ and moving only the symbols involved in the cycles. In the arguments that follow we more than once use the fact that for any $i,j,k,l\leq n$ we have $$(i,j)(k,l)(j,k)=(i,k,l,j),$$ which is of order four. Here and throughout, products of permutations are executed from left to right. First consider the case where $x$ is the cycle $(1,2,\dots,n)$ with $n$ odd. Suppose that $m=\frac{n-1}{2}$ is even and let $y=x^m$. Consider the product of $m$ transpositions $$a=(1,n)(2,n-1)\dots(m,m+2).$$ It is clear that $x^a=x^{-1}$ and $[y,a]=y^{-2}=(x^{m})^{-2}=x$. Thus, we have $x=[y,a]$ where $|y|=n$ and $|a|=2$. Of course, both $y$ and $a$ are elements of $A_n$. Now suppose that $m$ is odd. The previous argument is not quite adequate for this case as the product $(1,n)(2,n-1)\dots(m,m+2)$ does not belong to $A_n$. Set $$y_1=(n,m,n-1,m-1,m-2,\dots,2,1).$$ Thus, $y_1$ is a cycle of order $m+2$, which is odd.
Consider the product of $m$ transpositions $$b=(n-1,n)(1,n-2)(2,n-3)\dots(m-1,m+1).$$ It is straightforward to check that $x=[y_1,b]$. Let $b_1$ denote the product of the transposition $(m+1,m+2)$ with $b$. Thus, $b_1=(m+1,m+2)b$ and $|b_1|=4$. Since the transposition $(m+1,m+2)$ commutes with $y_1$, it follows that $x=[y_1,b_1]$. Finally, we remark that $b_1\in A_n$, and so the expression $x=[y_1,b_1]$ is the required one. Now we consider the case where $n=2i+2j$ and $x$ is the product of two cycles of even sizes $x=(1,2,\dots,2i)(2i+1,2i+2,\dots,2i+2j)$. We assume that $i\leq j$ and consider first the case where $i\neq j$. Put $y_2=(2i,n,n-1,\dots,i+j+1)$ and let $a_2$ be the product of the cycle $(2j+1,2i,i+j+1,i+j)$ with the $i+j-2$ transpositions of the form $(m_1,m_2)$, where $m_1+m_2=n+1$ and $m_1\not\in\{i+j+1,2i,2j+1,i+j\}$. We see that $x=[y_2,a_2]$. Moreover, $|a_2|=4$ while $|y_2|=n/2+1$. Suppose that $i+j$ is even. In this case $y_2\in A_n$ but $a_2\not\in A_n$. Therefore we will replace $a_2$ by an element $b_2$, of order 4, such that $[y_2,a_2]=[y_2,b_2]$ and $b_2\in A_n$. Choose a transposition $b_0=(l,k)$ such that $l,k\geq i+j+2$. Then $b_0$ commutes with $y_2$ since $l,k$ are not involved in $y_2$. Hence $[y_2,a_2]=[y_2,b_0a_2]$. One checks that $b_0a_2$ is of order 4 and $b_0a_2\in A_n$. Thus, taking $b_2=b_0a_2$ gives us the required expression $x=[y_2,b_2]$. Assume now that $i+j$ is odd. Then $a_2\in A_n$ while $y_2\not\in A_n$. Note that $a_2$ commutes with the transposition $(1,n)$. Set $y_3=(1,n)y_2$. Then we have $[y_2,a_2]=[y_3,a_2]$. We see that $y_3=(2i,n,1,n-1,\dots,i+j+1)$ and this is an element of odd order. Therefore the expression $x=[y_3,a_2]$ is of the required type. Finally, we have to consider the case where $i=j$. Now $y_2=(2i,n,n-1,\dots,2i+1)$ and this belongs to $A_n$. Put $$a_3=(1,n)(2,n-1)\dots(2i,2i+1).$$ Note that $a_3\in A_n$. We have $x=[y_2,a_3]$, and this expression is of the required form. \end{proof}
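Although it is not part of the argument above, the statement of Theorem \ref{alter} is easy to test by exhaustive search for small $n$. The following Python sketch (ours; the helper names are our own and the computation is only meant as an illustration) verifies the claim for $n=5$ by listing all commutators $[a,b]$ with $|a|$ odd and $|b|$ dividing 4:
\begin{verbatim}
# Brute-force check for n = 5: every element of A_n is a commutator
# [a, b] = a^(-1) b^(-1) a b with |a| odd and |b| dividing 4.
from itertools import permutations

n = 5
idp = tuple(range(n))

def compose(p, q):          # (p*q)(i) = q(p(i)); products act left to right
    return tuple(q[p[i]] for i in range(n))

def inverse(p):
    r = [0] * n
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def order(p):
    k, q = 1, p
    while q != idp:
        q, k = compose(q, p), k + 1
    return k

def is_even(p):             # even number of inversions
    return sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n)) % 2 == 0

A = [p for p in permutations(range(n)) if is_even(p)]
odd_order = [a for a in A if order(a) % 2 == 1]
order_div4 = [b for b in A if 4 % order(b) == 0]

def commutator(a, b):
    return compose(compose(inverse(a), inverse(b)), compose(a, b))

covered = {commutator(a, b) for a in odd_order for b in order_div4}
print(len(covered) == len(A))   # prints True
\end{verbatim}
The same search extends to slightly larger $n$ at the cost of running time.
\end{document}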
\begin{document} \title[Time-dependent Hamiltonians with 100\% evolution speed efficiency]{Time-dependent Hamiltonians with 100\% evolution speed efficiency} \author{Raam Uzdin$^{1}$, Uwe Günther$^{2}$, Saar Rahav$^{3}$ and Nimrod Moiseyev$^{1,3}$} \address{$^{1}$Faculty of Physics, Technion - Israel Institute of Technology\\ $^{2}$Helmholtz-Center Dresden-Rossendorf, POB 510119, D-01314 Dresden, Germany\\ $^{3}$Schulich Faculty of Chemistry, Technion - Israel Institute of Technology\\ \eads{\mailto{[email protected]},\quad{}\mailto{[email protected]}} } \begin{abstract} The evolution speed in projective Hilbert space is considered for Hermitian Hamiltonians and for non-Hermitian (NH) ones. Based on the Hilbert-Schmidt norm and the spectral norm of a Hamiltonian, resource-related upper bounds on the evolution speed are constructed. These bounds are also valid for NH Hamiltonians, and they are illustrated for an optical NH Hamiltonian and for a non-Hermitian $\mathcal{PT}$-symmetric matrix Hamiltonian. Furthermore, the concept of quantum speed efficiency is introduced as a measure of the system resources directly spent on the motion in the projective Hilbert space. A recipe for the construction of time-dependent Hamiltonians which ensure 100\% speed efficiency is given. Generally these efficient Hamiltonians are NH, but there is a Hermitian efficient Hamiltonian as well. Finally, the extremal case of a non-Hermitian non-diagonalizable Hamiltonian with vanishing energy difference is shown to produce a 100\% efficient evolution with minimal resource consumption.\global\long\def\ket#1{\left|#1\right\rangle } \global\long\def\bra#1{\left\langle #1\right|} \global\long\def\braket#1#2{\left\langle #1|#2\right\rangle } \global\long\def\ketbra#1#2{\left|#1\right\rangle \left\langle #2\right|} \global\long\def\braOket#1#2#3{\left\langle #1\left|#2\right|#3\right\rangle } \end{abstract} \section{Introduction} Non-Hermitian models naturally emerge in many fields of physics as efficient tools for the description of complicated large systems in terms of smaller effective subsystems \cite{Nimbook,Berry NH phys}. Examples range from atomic/molecular physics \cite{Gilary,Berry grating}, light propagation in optically active crystals \cite{MVB_pol} and media with anisotropic pumping and absorption \cite{christo-prl-08,KGM-prl-08,kottos-prl-09,christo-prl-09,Longhi-prl-09,christo-nature-10,Kottos-nature-10,PT-lattice-pra-10,eva-hugh-pra-11,Morales,WG}, through microwave cavities \cite{Richter EP,pt-micro-prl-2012} and coupled electronic circuits \cite{kottos-LRC-pra-11,kottos-UG-LRC-pra-12}, to mechanics \cite{Seyranian-Mailyb-book,cmb-mech-exp-2012}, hydrodynamics \cite{schmid-book-2001,shkalikov-2004} and magnetohydrodynamics \cite{GSZ-jmp-2005,GK-jpa-2006,GLT-2010,stefani-spectra-2012}.
Apart from the spectral properties of the Hamiltonians, the evolution processes generated by non-Hermitian (NH) Hamiltonians can significantly differ from those generated by Hermitian Hamiltonians \cite{MVB_pol,cmb-brach-prl-2007,cmb-lnp-2009,BU,UMM}. In this regard it appears natural to ask what new possibilities NH evolution entails and what bounds can be broken when a Hamiltonian is no longer Hermitian. For example, it was shown in \cite{cmb-brach-prl-2007} that a NH $2\thetaimes2$ matrix Hamiltonian with some predefined energy difference can generate a much faster evolution than a Hermitian Hamiltonian with the same energy difference. The evolution speed is the rate in which a state changes into other states (e.g. the angular speed of the state vector on a Bloch sphere). In the Hermitian case, the energy difference sets an upper bound on the evolution speed (the Fleming bound \cite{Fleming}). The implicitly demonstrated clear violation of this bound by non-unitary evolution processes of NH systems leads to the conclusion that these energy-difference based bounds should be replaced by some more adequate bounds for NH systems. The first goal of this article is to derive an upper bound on the evolution speed that works for any Hamiltonian, be it Hermitian or non-Hermitian. The bounds derived here may not be tight for some Hamiltonians and/or for some initial conditions, but this statement is equally true for the Hermitian Fleming bound. This leads directly to the second goal of this article: to show how to construct Hamiltonians for which the evolution speed of the state of interest reaches the upper bound for any instant of the evolution. We call these Hamiltonians {}``maximal efficiency\thetaextquotedblright{} Hamiltonians (or maximally efficient Hamiltonians). For every state evolution there exists a family of Hamiltonians that are maximally efficient. This family contains both Hermitian and NH Hamiltonians. Our third goal is to explore a very special Hamiltonian in this family which is of rank-one, non-diagonalizable and similar to a Jordan block with zero-eigenvalue. This Hamiltonian corresponds to a NH degeneracy called exceptional point (EP) which has only one (geometric) eigenvector\thetaextbf{ }\cite{Nimbook,Seyranian-Mailyb-book,baumg}.\thetaextbf{ }We show that any state evolution can be generated solely by such NH degeneracies yielding an EP-driven evolution. This special evolution minimizes the Hilbert-Schmidt norm $\sum_{i,j}|\mathcal{H}_{i,j}|^{2}$ of the matrix Hamiltonian $\mathcal{H}$. We note that the second goal strongly differs conceptually from the so called quantum brachistochrone problem. The quantum brachistochrone problem consists in finding the (time independent) Hamiltonian which evolves some predefined initial state into some predefined final state in a minimal time. This problem was the subject of intensive studies during the last few years for Hermitian systems \cite{Dorje Herm,Hook,japan-brach-prl-2006} as well as for NH ones \cite{cmb-brach-prl-2007,cmb-lnp-2009,Assis,GRS-ep-jpa-2007,Ali-brach-prl-2007,Giri,GS-brach-pra-2008,GS-brach-prl-2008,Ali-brach-pra-2009}. As shown in \cite{Dorje Herm,Hook,AA-prl-1990} for Hermitian systems, the corresponding minimal-passage-time trajectories correspond to geodesics in projective Hilbert space. For the evolution problems we are investigating here, the trajectories in projective Hilbert space are predefined and not necessarily geodesic. 
Instead, we are searching for Hamiltonians capable of producing evolution processes which exactly follow these predefined trajectories with minimal resources. That is, we look for efficient evolution and not for a fast evolution. In fact, our optimization problem is closer in spirit to the reverse engineering approach used to quicken adiabatic evolution \cite{Demir,MVB Trnsls,Muga}. Yet there are two main differences: The first difference is that in our case we seek only Hamiltonians which yield maximal efficiency. The second difference is that we take as input only the evolution of a single state (the state of interest), while in \cite{Demir,MVB Trnsls,Muga} the number of states needed to be specified is equal to the Hilbert space dimension (number of levels in the system). The article is organized as follows: Section \ref{sec: Preliminaries} contains some basic facts on the evolution speed in projective Hilbert space. In section \ref{sec: bounds}, the Hilbert-Schmidt norm and the spectral norm of a Hamiltonian are introduced as upper bounds on the evolution speed. In section \ref{sec :Speed-Efficiency}, the concept of speed efficiency is introduced, and for the predefined evolution of a given state a family of Hamiltonians is constructed which ensure a speed efficiency of 100\%. The generic properties of maximally efficient evolutions are explored. Section \ref{sec: EP-DE} is devoted to the special case of a maximally efficient evolution which is driven by a Hamiltonian at an EP (a NH degeneracy). In Appendix 1, for completeness we briefly discuss the relation between Bloch sphere and projective Hilbert space. In Appendix 2, the norm speed bounds are illustrated for a Hamiltonian that describes two optical systems recently studied. In Appendix 3, the norm speed bound is applied to the matrix Hamiltonian of a $\mathcal{PT}-$symmetric quantum brachistochrone. \section{Preliminaries - the evolution speed in projective Hilbert space $\mathbb{P}(\mathfrak{H})$\langlebel{sec: Preliminaries}} Let $\ket{\psi}\in\mathfrak{H}=\mathbb{C}^{N}$ be a solution of the time-dependent Schrödinger equation (TDSE): \betaegin{equation} i\partial_{t}\ket{\psi}=H(t)\ket{\psi},\langlebel{eq: TDSE} \end{equation} where $H(t)\neq H^{\dagger}(t)\in\mathbb{C}^{N\thetaimes N}$ is the matrix of the corresponding time-dependent NH Hamiltonian. Defining the bra-vector $\langle\psi|$ in a standard way \footnote{Unlike other choices often made for NH Hamiltonians in order to exploit the bi-orthogonal relations of the eigenstates\thetaextbf{ }\cite{Nimbook}. } as $\langle\psi|=|\psi\rangle^{\dagger}$, the adjoint TDSE has the form \betaegin{equation} -i\partial_{t}\langle\psi|=\langle\psi|H^{\dagger}(t)\,.\langlebel{eq: TDSE-2} \end{equation} Our main interest is to study the rate at which states evolve into different states. Phase evolution is irrelevant for this purpose. It makes sense, then, to study the evolution of states in a space where the phase is eliminated. The so called projective Hilbert space (PHS) is exactly suited for this purpose. The well known Bloch sphere for two-level systems is closely related to PHS (see Appendix 1), but strictly speaking it is not a PHS. For the reader not familiar with PHS we provide a simplified and very limited presentation of the basic ideas needed to understand the present work. For a more complete and rigorous treatment see, e.g., \cite{quant-geom-book}. 
The angle $\Thetaheta$ between two complex vectors $\ket{\psi_{1}},\ket{\psi_{2}}\in\mathbb{\mathbb{C}}^{N}$ can be obtained from the standard inner product of the two vectors: \betaegin{equation} \cos\Thetaheta=\frac{\left|\betaraket{\psi_{1}}{\psi_{2}}\right|}{\sqrt{\betaraket{\psi_{1}}{\psi_{1}}}\sqrt{\betaraket{\psi_{2}}{\psi_{2}}}}\langlebel{eq: angle} \end{equation} The angle $\Thetaheta$ acts as a measure of distance between two states: $\Thetaheta=0$ means the two states are identical up to a complex factor and $\Thetaheta=\pi/2$ indicates the states are mutually orthogonal \footnote{This is different from the Bloch sphere construction discussed in Appendix 1, where orthogonal states correspond to the angle of $\pi$ between antipodal points on the sphere. }. Now imagine that a state is infinitesimally changed from $\ket{\psi}$ to $\ket{\psi}+\ket{d\psi}$ where $\betaraket{\psi}{\psi}\gg\betaraket{d\psi}{d\psi}$. The angle between the original state and the modified state, $d\Thetaheta$, can be obtained from (\ref{eq: angle}). Keeping leading orders in $\ket{d\psi}$ and $d\Thetaheta$ we get: \betaegin{equation} d\Thetaheta^{2}=\frac{\betaraket{d\psi}{d\psi}}{\betaraket{\psi}{\psi}}-\frac{\betaraket{d\psi}{\psi}\betaraket{\psi}{d\psi}}{\betaraket{\psi}{\psi}^{2}}=ds_{FS}^{2}.\langlebel{eq: FS-metric} \end{equation} $ds_{FS}^{2}$ is known as the Fubini-Study metric \cite{quant-geom-book} which describes the length of an infinitesimal arc traced on a unit hypersphere by changing a state by $\ket{d\psi}$. To quantify the rate at which states change we will look at the evolution speed defined by: $\left|\frac{d\Thetaheta}{dt}\right|$ (or equivalently $\left|\frac{ds_{FS}}{dt}\right|$), which can be interpreted as angular speed/frequency. This hypersphere is related to the projective Hilbert space associated with the Fubini-Study metric. All states which differ by a complex number are mapped to the same point on the hypersphere (hence phase is immaterial in this space). The details of this mapping are not important for the present article. What is important is that the distance between two states on the hypersphere which differ by $\ket{d\psi}$ is given by (\ref{eq: FS-metric}). Formally, the projective Hilbert space of an N-level system is denoted by $\mathbb{P}(\mathfrak{H})=\mathbb{C}\mathbb{P}^{N-1}=\mathbb{C}_{*}^{N}/\mathbb{C}_{*}$ where $\mathbb{C}_{*}^{N}=\mathbb{C}^{N}-\{(0,0,\ldots,0)\}$, $\mathbb{C}_{*}:=\mathbb{C}-\{0\}$. As the state $\ket{\psi}$ evolves in time it traces a certain trajectory in $\mathbb{P}(\mathfrak{H})$ (i.e. on the hypersphere associated with the PHS). We denote the trajectory induced by $\ket{\psi}$ by $\pi(\ket{\psi})\in\mathbb{P}(\mathfrak{H})$. Notice that for any complex function of time, $c(t)$ $(c(t)\neq0)$: \betaegin{equation} \pi(\ket{\psi})=\pi(c(t)\ket{\psi})\langlebel{eq: pi traj} \end{equation} Next we wish to establish a relation between the evolution speed $\left|\frac{ds_{FS}}{dt}\right|$ and the Hamiltonian. 
Making use of the TDSE (\ref{eq: TDSE}) and its adjoint (\ref{eq: TDSE-2}), and introducing the normalized state vectors $|\Psi\rangle:=|\psi\rangle/\sqrt{\langle\psi|\psi\rangle}$, we find the squared evolution speed in $\mathbb{P}(\mathfrak{H})$ is given by: \betaegin{equation} \fl\left(\frac{ds_{FS}}{dt}\right)^{2}=\langle\Psi|H^{\dagger}(t)H(t)|\Psi\rangle-\langle\Psi|H^{\dagger}(t)|\Psi\rangle\langle\Psi|H(t)|\Psi\rangle=:K(t).\langlebel{eq: FS K def} \end{equation} Henceforth, we refer to $K(t)$ as \thetaextquotedbl{}kinetic scalar\thetaextquotedbl{} because it plays a structurally similar role like the kinetic energy in classical mechanical systems. The expression (\ref{eq: FS K def}) is a straight-forward generalization for non-Hermitian Hamiltonians of the corresponding evolution speed discussed in \cite{AA-prl-1990,quant-geom-book} for Hermitian systems. We note that for those systems $K(t)$ just reduces to the instantaneous energy variance $K(t)=\langle\Psi|H^{2}(t)|\Psi\rangle-\langle\Psi|H(t)|\Psi\rangle^{2}=\Delta E^{2}(t)$.\thetaextbf{ }Further insight into this expression can be obtained by\thetaextbf{ }introducing an instantaneous orthonormal basis set $\{|k\rangle\}_{k=1}^{N}\in\mathfrak{H}=\mathbb{C}^{N}$, with $|\Psi\rangle$ identified with one of its elements $|\Psi\rangle=|j\rangle$, the kinetic scalar (\ref{eq: FS K def}) in this basis set takes the form: \betaegin{equation} K=\sum_{k=1}^{N}\langle j|H^{\dagger}|k\rangle\langle k|H|j\rangle-\langle j|H^{\dagger}|j\rangle\langle j|H|j\rangle=\sum_{k\neq j}|\langle k|H|j\rangle|^{2}\ge0.\langlebel{eq:fub-3} \end{equation} Obviously, $K$ is characterizing the total rate for transitions from the given state $|\Psi\rangle=|j\rangle$ to other states of the system. Splitting off the trace of the Hamiltonian \betaegin{equation} H=\mathcal{H}+\mu I,\qquad\mu:=\mbox{Tr}(H)/N\langlebel{fub-4} \end{equation} one immediately sees that $K$ is invariant with regard to trace shifts \footnote{A time-dependent trace $\mbox{tr}(H)=N\mu(t)\in\mathbb{C}$ can be removed from the Hamiltonian $H$ by the transformation $\ket{\psi}\thetao e^{i\intop_{0}^{t}\mu(t')dt'}\ket{\psi}$. Since this transformation involves only a multiplication by a complex function the motion in the PHS is not affected by this transformation (\ref{eq: pi traj}). Setups with non-vanishing complex traces have been considered, e.g., in \cite{Assis}. }, $K[H]=K[\mathcal{H}]$. Hence, we can subsequently restrict our attention to traceless Hamiltonians $\mathcal{H}$.\thetaextbf{ }In Appendix 1, we discuss the dynamics on the PHS-related Bloch sphere and obtain its relation to the kinetic scalar. \section{Upper norm bounds on the evolution speed\langlebel{sec: bounds}} The absolute values of the matrix elements of Hermitian or non-Hermitian Hamiltonians are directly defined by the intensities of interactions and the strength of the corresponding fields. Naturally, a model remains valid only within the region of applicability of the corresponding underlying theory and/or the applicability of the approximations made in the derivation of the model.\thetaextbf{ }Hence, it is natural to ask for the maximal evolution speed achievable by a given quantum system when the resources are limited. 
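Before turning to the norm bounds, we note that the kinetic scalar (\ref{eq: FS K def}) is easy to probe numerically. The following Python sketch (ours; not taken from the original text, and written with numpy purely as an illustration) propagates a random state over one small Euler step of the TDSE (\ref{eq: TDSE}) and compares $K$ with the squared Fubini-Study speed obtained from the angle formula (\ref{eq: angle}):
\begin{verbatim}
# Compare K = <Psi|H^+ H|Psi> - <Psi|H^+|Psi><Psi|H|Psi> with the
# squared Fubini-Study speed estimated from a small time step dt.
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # generic NH matrix
psi = rng.normal(size=2) + 1j * rng.normal(size=2)

def normalize(v):
    return v / np.sqrt(np.vdot(v, v).real)

Psi = normalize(psi)
K = (np.vdot(H @ Psi, H @ Psi)
     - np.vdot(H @ Psi, Psi) * np.vdot(Psi, H @ Psi)).real

dt = 1e-5
phi = normalize(psi - 1j * dt * (H @ psi))    # Euler step of i d/dt psi = H psi
dtheta = np.arccos(min(abs(np.vdot(Psi, phi)), 1.0))
print(K, (dtheta / dt) ** 2)                  # agree up to discretization error
\end{verbatim}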
\subsection{The Hilbert-Schmidt-norm upper bound on the evolution speed} To obtain the first simple upper bound on the evolution speed we notice that: \betaegin{eqnarray} K\le\langle\Psi|\mathcal{H}^{\dagger}\mathcal{H}|\Psi\rangle\le\mbox{tr}(\mathcal{H}^{\dagger}\mathcal{H})\equiv||\mathcal{H}||_{HS}^{2}.\langlebel{eq: K<HS} \end{eqnarray} where $||\mathcal{H}||_{HS}^{2}$ is the Hilbert-Schmidt norm of the Hamiltonian \cite{horn}. It is also known as Euclidean norm, $l_{2}-$norm, Schatten 2-norm, Frobenius norm and Schur norm \cite{horn}. It was possible to use the trace in the last inequality since $\mathcal{H}^{\dagger}\mathcal{H}$ is a positive operator. Equations (\ref{eq: FS K def}) and (\ref{eq: K<HS}) set a bound on the evolution speed: \betaegin{equation} \left|\frac{ds_{FS}}{dt}\right|=\sqrt{K}\le\left\Vert \mathcal{H}\right\Vert _{HS}. \end{equation} Upon writing the $HS$ norm explicitly in terms of matrix elements \betaegin{equation} \left\Vert \mathcal{H}\right\Vert _{HS}^{2}=\sum_{i,j}\left|\mathcal{H}_{ij}\right|^{2},\langlebel{eq: HS comp} \end{equation} it becomes clear that the evolution speed is limited by the size of the Hamiltonian elements and not just by the eigenvalue difference. This becomes very important in the vicinity of NH degeneracies as shown in Appendix 3. \subsection{The spectral norm upper bound --- a tighter bound on the evolution speed} To get a tighter bound on the evolution speed we use the following inequality: \betaegin{equation} K\le\langle\Psi|\mathcal{H}^{\dagger}\mathcal{H}|\Psi\rangle=\sum_{k=1}^{N}\lambda_{k}|\langle\Psi|k\rangle|^{2}\le\max(\lambda_{k})\equiv||\mathcal{H}||_{SP}^{2},\langlebel{eq: K<SP} \end{equation} where $\lambda_{k}\ge0$ and $|k\rangle$ are the eigenvalues and eigenstates of the matrix $\mathcal{H}^{\dagger}\mathcal{H}$, \ $\mathcal{H}^{\dagger}\mathcal{H}|k\rangle=\lambda_{k}|k\rangle$. $||\mathcal{H}||_{SP}=\sqrt{\max(\lambda_{k})}$ is known as the spectral norm of $\mathcal{H}$ (also known as Ky Fan 1-norm \cite{horn}).\thetaextbf{ }To understand the second inequality in (\ref{eq: K<SP}), notice that the states $\{|k\rangle\}$ constitute a complete orthonormal basis set and $\betaraket{\Psi}{\Psi}=1$, so that the projection sum satisfies $\sum_{k}\left|\left\langlengle \Psi|k\right\ranglengle \right|^{2}=1$. Thus, $\sum_{k=1}^{N}\lambda_{k}|\langle\Psi|k\rangle|^{2}$ is just a weighted average of positive numbers $\lambda_{k}$. Such a weighted average is always smaller or equal to the largest element. Obviously, the Hilbert-Schmidt norm $||\mathcal{H}||_{HS}^{2}$ and the spectral norm $||\mathcal{H}||_{SP}^{2}$ can be represented in terms of the eigenvalues $\lambda_{k}$\,. This implies the following useful relation: \betaegin{equation} ||\mathcal{H}||_{SP}^{2}\le||\mathcal{H}||_{HS}^{2}\le\mbox{rank}(H)||\mathcal{H}||_{SP}^{2}\,.\langlebel{fub-8} \end{equation} Therefore, the spectral norm bound on the evolution speed \betaegin{equation} \left|\frac{ds_{FS}}{dt}\right|=\sqrt{K}\le\left\Vert \mathcal{H}\right\Vert _{SP}\langlebel{SP bound} \end{equation} is always tighter than the Hilbert-Schmidt norm bound. Simple and useful lower and upper bounds on $\left\Vert \mathcal{H}\right\Vert _{SP}$ for $\mathcal{H}(t)\in\mathbb{C}^{N\thetaimes N}$ (but not on the evolution speed!) 
are given by : \betaegin{equation} \max(\left|\mathcal{H}_{i,j}\right|)\leq\left\Vert \mathcal{H}\right\Vert _{SP}\le N\,\max(\left|\mathcal{H}_{i,j}\right|).\langlebel{eq: sp lower bound} \end{equation} Finally, we briefly comment on two extremal cases of traceless $2\thetaimes2$ matrix Hamiltonians. \betaegin{itemize} \item For a NH two-level Hamiltonian $\mathcal{H}$ which is similar to a Jordan block with zero-eigenvalue $\mathcal{H}\sim J_{2}(0)=\left(\betaegin{array}{cc} 0 & 1\\ 0 & 0 \end{array}\right)$ it holds $\mbox{rank}(\mathcal{H})=1$ so that \betaegin{equation} \mathcal{H}\sim J_{2}(0)\qquad\Longrightarrow\qquad\left\Vert \mathcal{H}\right\Vert _{SP}=\left\Vert \mathcal{H}\right\Vert _{HS}.\langlebel{eq: rank 1 case} \end{equation} This fact will be important later on in sec. \ref{sec: EP-DE}. \item For a Hermitian traceless two-level Hamiltonian with energy separation $\Delta E$, the spectral norm is $\left\Vert \mathcal{H}\right\Vert _{SP}=\left|\Delta E\right|/2$ and we obtain: \betaegin{equation} \left|\frac{ds_{FS}}{dt}\right|\leq\left|\Delta E\right|/2. \end{equation} As discussed in Appendix 1 the Bloch unit vector, $\hat{n}$, is related to the Kinetic scalar via: $\left|\frac{d\hat{n}}{dt}\right|=2\sqrt{K}$. Therefore the corresponding upper bound on the evolution speed over the Bloch sphere in the \thetaextit{Hermitian} case is: \betaegin{equation} \left|\frac{d\hat{n}}{dt}\right|=2\left|\frac{ds_{FS}}{dt}\right|\le|\Delta E|\langlebel{fub-14} \end{equation} which is known as Fleming bound \cite{Fleming}. That is, for a two-level Hermitian operator the spectral bound coincides with the known Hermitian bound. \end{itemize} For explicit examples of the speed bound we refer the reader to Appendices 2 and 3. In Appendix 2, a Hamiltonian that describes certain optical systems is analyzed. Appendix 3 studies a $\mathcal{\mathcal{P}}\mathcal{T}$- symmetric Hamiltonian that was introduced in \cite{cmb-brach-prl-2007}, in the context of the $\mathcal{\mathcal{P}}\mathcal{T}$-symmetric brachistochrone problem. In the next section we introduce the notion of speed efficiency which quantifies how close the actual motion in $\mathbb{P}(\mathfrak{H})$ is to the speed bound just derived. Later we show how to construct a Hamiltonian which reaches the spectral bound for a given motion in $\mathbb{P}(\mathfrak{H})$ at all times. \section{Speed efficiency of quantum evolution\langlebel{sec :Speed-Efficiency}} In this section we introduce the notion of a maximally efficient evolution. We wish to compare the actual speed of motion in the projective Hilbert space $\mathbb{P}(\mathfrak{H})$ to the speed bound given by the spectral norm $\left\Vert \mathcal{H}\right\Vert _{SP}$ characterizing the available resources of the system. We use $\left\Vert \mathcal{H}\right\Vert _{SP}$ since it is tighter than the Hilbert-Schmidt norm $\left\Vert \mathcal{H}\right\Vert _{HS}$. Let $\ket{\psi}\in\mathbb{C}^{N}$ be a time-dependent state in an $N$-level system that induces some predefined evolution $\pi(\ket{\psi})$ in the corresponding projective Hilbert space $\mathbb{P}(\mathfrak{H})=\mathbb{C}\mathbb{P}^{N-1}\ni\pi(|\psi\rangle)$. We define the efficiency to be: \betaegin{equation} \eta(\mathcal{H},\ket{\psi})=\frac{\sqrt{K(\psi)}}{\left\Vert \mathcal{H}\right\Vert _{SP}}\le1\,.\langlebel{eq: eta} \end{equation} It is important to realize that this efficiency is an instantaneous (or local) property of $\mathcal{H}$ and its solution $\ket{\psi}$. 
The shape of the curve in $\mathbb{P}(\mathfrak{H})$ alone has nothing to do with efficiency. For example, a geodesic in $\mathbb{P}(\mathfrak{H})$ can have efficiency smaller than one, and on the other hand, non-geodesic curves can have 100\% efficiency. Loosely speaking, the value of $\eta$ quantifies to what extent the Hamiltonian really uses all its resources to generate motion in $\mathbb{P}(\mathfrak{H})$. That is why we call an $(\eta=1)-$evolution, a {}``maximally efficient evolution''.\thetaextbf{ }As an example of inefficient evolution, consider a spin in a magnetic field which is not exactly perpendicular to the spin direction. The part of the magnetic field which is parallel to the spin is wasted as it does not contribute to the precession motion. As we shall demonstrate in the next section, this inefficiency can be fixed by making the Hamiltonian time-dependent (rotating the magnetic field in time). In the NH case, reaching 100\% efficiency becomes even more difficult. As explained at the end of section \ref{sec: Preliminaries}, for NH systems the condition $\frac{d}{dt}H=0$ does not guarantee a constant evolution speed, i.e. $\left|\frac{ds_{FS}}{dt}\right|\neq\mbox{const}$. On the other hand, the spectral norm is fixed if $\frac{d}{dt}H=0$. Equation (\ref{eq: eta}), then, implies that $\eta$ varies with time and, therefore, the evolution cannot be maximally efficient at all times. In the next subsection we show how to construct Hamiltonians that are designed to generate maximally efficient evolution for a given predefined motion in projective Hilbert space at all times. We will demonstrate that such an $(\eta=1)-$evolution can be either Hermitian or non-Hermitian. \subsection{Maximally efficient evolution\langlebel{sec: Maximal-Efficiency-Evolution}} Our goal in this section is to find a Hamiltonian $\mathcal{H}_{0}$ that generates the same motion $\pi(\ket{\psi})$ in $\mathbb{P}(\mathfrak{H})$ as $\ket{\psi}$ but with 100\% efficiency. The solution $\ket m$ corresponding to the maximally efficient Hamiltonian $\mathcal{H}_{0}$ may differ from $\ket{\psi}$ only by a time dependent complex factor. In short, we look for $\ket m$ and $\mathcal{H}_{0}$ that satisfy: \betaegin{eqnarray} \pi(\ket m) & = & \pi(\ket{\psi})\langlebel{req 1}\\ i\partial_{t}\ket m & = & \mathcal{H}_{0}\ket m\langlebel{req 2}\\ \eta(t) & = & 1.\langlebel{req 3} \end{eqnarray} The first requirement is that the states $\ket m$ and $\ket{\psi}$ have the same motion in $\mathbb{P}(\mathfrak{H})$. The second requirement states that $\{\mathcal{H}_{0},\ket m\}$ satisfy the TDSE, whereas the third requirement simply means that we are searching for maximal efficiency. To satisfy the first requirement we set: \betaegin{equation} \ket m=c(t)\ket{\psi}\langlebel{eq: |n>} \end{equation} where $c(t)$ is a complex differentiable function of time (see eq. (\ref{eq: pi traj})). For reasons that will become clear shortly, we fix $c(t)$ by choosing $\ket m$ to be normalized to unity and to be parallel transported \footnote{If $\ket{\chi}$ is a normalized state, $\betaraket{\chi}{\chi}=1$, then its parallel transported form is given by$\ket{\thetailde{\chi}}=e^{-\intop_{0}^{t}\betaraket{\chi}{\partial_{t'}\chi}dt'}\ket{\chi}$. \thetaextbf{$\ket{\thetailde{\chi}}$ }satisfies\thetaextbf{$\betaraket{\thetailde{\chi}}{\partial_{t'}\thetailde{\chi}}=\betaraket{\thetailde{\partial_{t'}\chi}}{\thetailde{\chi}}=0$. 
}This fixes the phase of the state up to a constant determined by the choice of the lower limit of the time integral. }: \betaegin{eqnarray} \betaraket mm & = & 1,\langlebel{eq: n norm}\\ \betaraket m{\partial_{t}|m}=(\partial_{t}\betaraket m{)|m} & = & 0.\langlebel{eq: n phase} \end{eqnarray} The first condition determines $\left|c(t)\right|$, and the second one yields the phase of $c(t)$ (up to a time-independent constant). To find the Hamiltonian that drives $\ket m$ with maximal efficiency we choose the following ansatz: \betaegin{equation} \mathcal{H}_{0}=i\ket{\partial_{t}m}\betara m-ig\ket m\betara{\partial_{t}m},\langlebel{eq: H opt} \end{equation} where, in general, $g$ can be time-dependent. For $g=1$ the Hamiltonian is Hermitian. Notice that $\ket{\partial_{t}m}\equiv\partial_{t}\ket m$ is not normalized and not parallel transported. In contrast to $\ket m$, $\ket{\partial_{t}m}$ is also not a solution of the TDSE (with $\mathcal{H}_{0}$ as Hamiltonian). However, $\ket m$ and $\ket{\partial_{t}m}$ are mutually orthogonal by virtue of the parallel transport we imposed. By applying (\ref{eq: H opt}) to the state $\ket m$ we see that the requirement (\ref{req 2}) is immediately satisfied. To fulfill the remaining third requirement we note that in the basis $\{\ket m,\ket{\partial_{t}m}\}$ the Hamiltonian $\mathcal{H}_{0}$ has only off-diagonal elements so that $\mathcal{H}_{0}$ is traceless by construction. To calculate the efficiency we\thetaextbf{ }first calculate the spectral bound and then the kinetic scalar\thetaextbf{.} Evaluating $\mathcal{H}_{0}^{\dagger}\mathcal{H}_{0}$ we get \betaegin{eqnarray} \mathcal{H}_{0}^{\dagger}\mathcal{H}_{0} & = & \betaraket{\partial_{t}m}{\partial_{t}m}\ket m\betara m+\left|g\right|^{2}\ket{\partial_{t}m}\betara{\partial_{t}m}.\langlebel{eq: H^dag H} \end{eqnarray} The eigenstates of $\mathcal{H}_{0}^{\dagger}\mathcal{H}_{0}$ are $\ket m$ and $\ket{\partial_{t}m}$ and the corresponding eigenvalues are $\betaraket{\partial_{t}m}{\partial_{t}m}$ and $\left|g\right|^{2}\betaraket{\partial_{t}m}{\partial_{t}m}$, respectively.\thetaextbf{ }The spectral norm is given by: \betaegin{equation} \left\Vert \mathcal{H}_{0}\right\Vert _{SP}=\sqrt{\betaraket{\partial_{t}m}{\partial_{t}m}}\max(1,\left|g\right|). \end{equation} Using the fact that $\betaraOket n{\mathcal{H}_{0}}n=0$ and Eq. (\ref{eq: FS K def}) and (\ref{eq: H^dag H}) we get that: \betaegin{equation} K(\mathcal{H}_{0},\ket m)=\betaraket{\partial_{t}m}{\partial_{t}m}. \end{equation} The efficiency, then, is given by: \betaegin{equation} \eta=\frac{\sqrt{K(\mathcal{H}_{0},\ket m)}}{\left\Vert \mathcal{H}_{0}\right\Vert _{SP}}=\frac{\sqrt{\betaraket{\partial_{t}m}{\partial_{t}m}}}{\sqrt{\betaraket{\partial_{t}m}{\partial_{t}m}}\max(1,\left|g\right|)}. \end{equation} Clearly, maximal efficiency $\eta=1$ (third requirement (\ref{req 3})) is achieved provided that: \betaegin{equation} \left|g\right|\le1.\langlebel{eq: g_cond} \end{equation} In summary, given any arbitrary state $\ket{\psi}$, equations (\ref{eq: |n>})-(\ref{eq: H opt}) together with (\ref{eq: g_cond}) show how to construct maximal efficiency Hamiltonians that generate the same motion in projective Hilbert space $\mathbb{P}(\mathfrak{H})$ as $\ket{\psi}$. It is instructive to look on the instantaneous eigenvalues of $\mathcal{H}_{0}^{2}$. 
From \betaegin{eqnarray} \mathcal{H}_{0}^{2} & = & g\ket{\partial_{t}m}\betara{\partial_{t}m}+g\betaraket{\partial_{t}m}{\partial_{t}m}\ket m\betara m\langlebel{eq: H0_sq} \end{eqnarray} and $\mbox{tr}(\mathcal{H}_{0})=0$ it follows that the instantaneous eigenvalues $E_{\pm}(t)$ of $\mathcal{H}_{0}$ are: \betaegin{equation} E_{\pm}(t)=\pm\sqrt{g}\sqrt{\betaraket{\partial_{t}m}{\partial_{t}m}}. \end{equation} In case of $g\in\mathbb{R}$, these eigenvalues are real for $g>0$ and purely imaginary for $g<0$, i.e. $E_{\pm}(t)\in\mathbb{R}\cup i\mathbb{R}$. This indicates a hidden instantaneous pseudo-Hermiticity of $\mathcal{H}_{0}$ --- in a certain analogy to the considerations \footnote{We leave a corresponding detailed investigation to future research. } in Appendix 3. One of the key points of this work is that the evolution can be maximally efficient regardless of whether the Hamiltonian is Hermitian or not. Finally, we note that the Hamiltonian $\mathcal{H}_{0}$ in (\ref{eq: H opt}) shows some structural analogy to the brachistochrone Hamiltonians for Hermitian systems (constructed in \cite{Hook}). In fact, $\mathcal{H}_{0}$ extends the geodesic-trajectory paradigm of \cite{Dorje Herm,Hook,AA-prl-1990} to maximally efficient evolution regimes over arbitrarily predefined time-dependent trajectories in $\mathbb{P}(\mathfrak{H})$. Moreover, the constraint $\Delta E=\mbox{const}$ is replaced by the constraint $\eta=1$. \subsection{Inherent properties of the maximally efficient evolution} Here we wish to highlight three points which are generic for maximally efficient evolutions. The first point concerns the fact that $\ket m$ is normalized and parallel transported. In the construction of $\mathcal{H}_{0}$ we demanded $\betaraket{m(t)}{m(t)}=1$ and $\betaraket m{\partial_{t}m}=0$. Now we wish to show that if these constraints are relaxed the spectral norm will increase even though the motion in $\mathbb{P}(\mathfrak{H})$ remains unaltered. According to (\ref{eq: eta}) the efficiency will drop below 100\% by this modification. Assume we wish to change the amplitude and phase of $\ket m$ by some complex factor $e^{-i\varphi(t)}$ where $\varphi(t)$ is some complex number. This is accomplished by adding $\mathcal{H}_{0}$ a diagonal term so that: \betaegin{equation} \mathcal{H}_{new}=\mathcal{H}_{0}+\ketbra mm\partial_{t}\varphi(t). \end{equation} To keep the trace zero, another diagonal term must be added as well, in principle, but it is of no importance to the present discussion. This transformation does not change the value of $K$. The spectral norm squared is the largest expectation of $\mathcal{H}_{new}^{\dagger}\mathcal{H}_{new}$, so: \betaegin{equation} \fl\left\Vert \mathcal{H}_{new}\right\Vert _{SP}^{2}\geq\betaraOket m{\mathcal{H}_{new}^{\dagger}\mathcal{H}_{new}}m=\left|\partial_{t}\varphi(t)\right|^{2}+\betaraket{\partial_{t}m}{\partial_{t}m}>\betaraket{\partial_{t}m}{\partial_{t}m} \end{equation} Since $K$ remained the same and spectral norm increased, we see that the efficiency is now: \betaegin{equation} \eta_{new}=\frac{\sqrt{\betaraket{\partial_{t}m}{\partial_{t}m}}}{\left\Vert \mathcal{H}_{new}\right\Vert _{SP}}\le\frac{\sqrt{\betaraket{\partial_{t}m}{\partial_{t}m}}}{\sqrt{\left|\partial_{t}\varphi(t)\right|^{2}+\betaraket{\partial_{t}m}{\partial_{t}m}}}<1, \end{equation} This decrease in efficiency with respect to $\mathcal{H}_{0}$ expresses the simple fact that changes in phase and/or amplitude also require spectral norm resources from the Hamiltonian. 
In order to direct all resources to motion in $\mathbb{P}(\mathfrak{H})$, any phase and amplitude changes should be avoided. The second point concerns the role of $g$. While the $\mathcal{H}_{0}$ found earlier conserves the norm of $\ket m$, it doesn't do so for other initial states (with the exception of the Hermitian case $g=1$). Moreover, while $\ket m$ evolves exactly in the same way for all values of $g$, the evolution of other states strongly depends on the value of $g$. This is demonstrated in the example shown in Figure 1. The state of interest in this example was chosen to be: $\ket m=\cos(t)\ket{\uparrow}+\sin(t)\ket{\downarrow}$. We considered two different maximally efficient Hamiltonians. The first one is Hermitian (g=1) and the other is not (g=-0.8). The Hamiltonian is constructed using the recipe in Sec. \ref{sec: Maximal-Efficiency-Evolution}. If the initial state is $\ket{\psi(t=0)}=\ket{m(t=0}=\ket{\uparrow}$ we observe that, as expected, both Hamiltonians generate the same evolution. Yet, if $\ket{\psi(t=0)}=\ket{\downarrow}$, the different Hamiltonians generate different evolutions and the effect of $g$ becomes apparent. \betaegin{figure} \includegraphics[width=9cm]{fig11_s1} \caption{(color online) The value of $g$ has no effect on the evolution when applied to the initial state $\mathcal{H}_{0}$ was designed to propagate efficiently (North pole of the Bloch sphere in this example). Yet when applying $\mathcal{H}_{0}$ to a different initial state (South pole) the $g$ may completely change the evolution. The large black dots mark the starting points of the evolution. See text for details.} \end{figure} The third point that we note is that for periodic motion in $\mathbb{P}(\mathfrak{H})$ the state $\ket m$ accumulates only an Anandan-Aharonov phase \cite{AA phase}, since the dynamical phase $\betaraOket mHm$ is zero for maximally efficient evolution. Once again, this is the result of wasting no resources on phase accumulation. \section{EP driven evolution\langlebel{sec: EP-DE}} A special case of great interest is $g=0$. Equation (\ref{eq: H0_sq}) shows that in this case $\mathcal{H}_{0}^{2}=0$. This can only happen if $\mathcal{H}_{0}$ is similar to a rank-one Jordan block with zeros on the diagonal, i.e. when $\mathcal{H}_{0}\sim J_{2}(0)$. That is, $\mathcal{H}_{0}(g=0)$ describes an EP operator --- a NH degeneracy. The dynamics can be fast even though at each instant the instantaneous eigenvalue difference is zero. \thetaextit{This degeneracy is time-dependent}. Moreover, the orientation of the single geometric eigenvector of a Hamiltonian $\mathcal{H}_{0}\sim J_{2}(0)$, associates a preferred directionality to this degeneracy. The directionality of the EP at each instant of time is chosen such that it induces the desired dynamics. Thus it appears natural to name this evolution an EP Driven Evolution (EP-DE). Another unique feature of this evolution can be seen by evaluating the Hilbert-Schmidt norm. For a general value of $g$ the $HS$ norm is: \betaegin{equation} \left\Vert \mathcal{H}_{0}\right\Vert _{HS}^{2}=(1+\left|g\right|^{2})\betaraket{\partial_{t}m}{\partial_{t}m}. \end{equation} Obviously, the $HS$ norm takes its minimal value for EP-DE ($g=0$), i.e. from all the possible maximally efficient evolutions the EP-DE has the minimal $HS$ norm. At this point the HS norm $\left\Vert \mathcal{H}_{0}\right\Vert _{HS}$ and the spectral norm $\left\Vert \mathcal{H}_{0}\right\Vert _{SP}$ coincide, a fact mentioned in (\ref{eq: rank 1 case}). 
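These properties of the $g=0$ case, and of the construction (\ref{eq: H opt}) in general, can be checked numerically in a few lines. The sketch below (ours, for illustration only) uses the state $\ket m=\cos(t)\ket{\uparrow}+\sin(t)\ket{\downarrow}$ considered above, which is normalized and parallel transported, and verifies at a fixed time that $\mathcal{H}_{0}$ satisfies the TDSE with $\eta=1$ for $\left|g\right|\le1$, and that for $g=0$ the operator is nilpotent with coinciding spectral and Hilbert-Schmidt norms:
\begin{verbatim}
# Check of H0 = i|dm><m| - i g |m><dm| at a fixed time t.
import numpy as np

t = 0.7
m  = np.array([np.cos(t), np.sin(t)], dtype=complex)      # |m>,  <m|m> = 1
dm = np.array([-np.sin(t), np.cos(t)], dtype=complex)     # |d_t m>, <m|d_t m> = 0

def H0(g):
    return 1j * np.outer(dm, m.conj()) - 1j * g * np.outer(m, dm.conj())

def kinetic(H, v):
    Hv = H @ v
    return (np.vdot(Hv, Hv) - np.vdot(Hv, v) * np.vdot(v, Hv)).real

for g in (1.0, -0.8, 0.0):                    # Hermitian, NH, EP-driven cases
    H = H0(g)
    print(np.allclose(H @ m, 1j * dm),        # TDSE: i d_t|m> = H0|m>
          np.isclose(np.sqrt(kinetic(H, m)) / np.linalg.norm(H, 2), 1.0))

H = H0(0.0)                                   # the EP (g = 0) case
print(np.allclose(H @ H, 0),                  # H0^2 = 0: Jordan-block structure
      np.isclose(np.linalg.norm(H, 2), np.linalg.norm(H, 'fro')))
\end{verbatim}
All printed checks return \texttt{True}.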
Equation (\ref{eq: HS comp}) shows that the EP-DE provides the minimal value of $\sum_{i,j}\left|\mathcal{H}_{ij}\right|^{2}$ for a given trajectory in $\mathbb{P}(\mathfrak{H})$. \section*{Conclusion} The concept of speed efficiency was defined using spectral speed bounds derived for NH or Hermitian systems. A recipe for the construction of 100\% efficiency Hamiltonians for any evolution in the projective Hilbert space was given. These Hamiltonians contain a free parameter, $"g"$. 100\% efficiency is obtained for $\left|g\right|\le1$. The Hermitian case corresponds to $g=1$. We conclude that it is possible to have a Hamiltonian which is both 100\% efficient and Hermitian. The $g=0$ case corresponds to a Hamiltonian which is not diagonalizable, i.e. the evolution is driven solely by a time-dependent NH degeneracy (exceptional point). This particular evolution minimizes the quantity $\sum_{i,j}\left|\mathcal{H}_{ij}\right|^{2}$ with respect to all other 100\% efficiency evolutions considered in this work. \section*{Appendix 1 - The Bloch Sphere and the Fubini-Study metric} The dynamics of NH $2\thetaimes2$ matrix systems in $\mathfrak{H}=\mathbb{C}^{2}\ni|\psi\rangle$ is conveniently analyzed as dynamics on the Bloch sphere. The latter is spanned by the unit vectors \betaegin{equation} \hat{n}(t)=\frac{\left\langlengle \psi|\vec{\sigma}|\psi\right\ranglengle }{\left\langlengle \psi|\psi\right\ranglengle }=\left\langlengle \Psi|\vec{\sigma}|\Psi\right\ranglengle \in S^{2}\subset\mathbb{R}^{3}.\langlebel{eq: n def} \end{equation} Its close relationship to the projective Hilbert space $\mathbb{C}\mathbb{P}^{1}=\mathbb{P}(\mathfrak{H})=\mathbb{C}_{*}^{2}/\mathbb{C}_{*}\sim S^{2}$ can be seen by the explicit comparison of the Bloch sphere metric with the Fubini-Study metric of $\mathbb{C}\mathbb{P}^{1}$. For a qubit $|\Psi\rangle\in\mathbb{C}^{2}$ parametrized as $|\Psi\rangle=\left(\cos(\theta/2),e^{i\phi}\sin(\theta/2)\right)^{T}$, $\theta\in[0,\pi]$, \ $\phi\in[0,2\pi]$ it holds $\hat{n}=(\sin(\theta)\cos(\phi),\sin(\theta)\sin(\phi),\cos(\theta))^{T}$ and the Bloch sphere metric reads \betaegin{equation} d\hat{n}^{2}=d\theta^{2}+\sin^{2}(\theta)d\phi^{2}.\langlebel{fub-9} \end{equation} For the \thetaextit{same state} $\ket{\Psi}$ the Fubini-Study metric (\ref{eq: FS K def}) reduces to \betaegin{equation} ds_{FS}^{2}=\frac{1}{4}(d\theta^{2}+\sin^{2}(\theta)d\phi^{2})\langlebel{fub-10} \end{equation} and therefore: \betaegin{equation} \left(\frac{d\hat{n}}{dt}\right)^{2}=4\left(\frac{ds_{FS}}{dt}\right)^{2}=4K(t).\langlebel{eq: bloch speed} \end{equation} This is closely related to the fact that orthogonal states are antipodal on the Bloch sphere having a geodesic distance $\pi$, whereas the corresponding Fubini-Study distance as discussed in section 2 is $s_{FS}=\Thetaheta=\pi/2$. The main results of this article can be expressed using the Bloch Sphere and the NH Bloch Equations (see for example \cite{MVB_pol}) formalism, but we found that the results are more neatly described by the {}``ket-bra'' operator formalism and the Schrödinger equation. Moreover, unlike the NH Bloch Equation formalism the {}``ket-bra'' formalism is applicable to a multilevel system without any alterations. \section*{Appendix 2 - Speed bounds in optical systems\langlebel{sec: optics examp}} Let us examine the NH evolution in optical systems where the Hamiltonians are explicitly known and a two-level description is either a good approximation or even exact. 
Consider the Hamiltonian $\mathcal{H}_{0}$ introduced and studied in \cite{BU} in the context of {}``EP cycling'' \cite{Gilary,MVB_pol,WG,UMM}: \betaegin{equation} \mathcal{H}(z)=\left(\betaegin{array}{cc} 0 & i\\ -iq(z) & 0 \end{array}\right),\langlebel{eq: H Berry} \end{equation} where $z$, the propagation coordinate, plays the role of time. This Hamiltonian can describe different physical systems. In \cite{WG}, it was used to describe the evolution of the transverse electric field and its spatial derivative in a waveguide. In this system $q(z)$ is proportional to the change in the index of refraction with respect to vacuum. In \cite{MVB_pol} $\mathcal{H}(z)$ describes the evolution of the two optical polarizations in crystals. $q(z)$ in this case is related to the change in the transverse part of the reciprocal dielectric tensor. The eigenvalue difference of $\mathcal{H}(z)$ is $\Delta E=2\sqrt{q(z)}$, and at $q=0$ a NH degeneracy (an EP) forms which is experimentally well accessible in these systems. From the structure of $\mathcal{H}_{0}(z)$ in (\ref{eq: H Berry}), it is obvious that for $q=0$ and $\Delta E(q=0)=0$ the evolution speed does not vanish for all states. Using (\ref{eq: bloch speed}) and the kinetic scalar definition (\ref{eq: FS K def}), one finds that the non-vanishing angular velocity is $\mbox{\ensuremath{\left|\frac{d\thetaheta}{dz}\right|}=}|\dot{\thetaheta}|=2\left|q(z)\right|$ for the spin-up state $\ket{\uparrow}=(1,0)^{T}$, and $|\dot{\thetaheta}|=2$ for the spin-down state $\ket{\downarrow}=(0,1)^{T}$. In particular, at the degeneracy $q=0$ the spin-up state becomes an eigenstate ($\dot{\thetaheta}=0$), whereas the spin-down state still has a speed $\dot{\thetaheta}=2$ regardless of $\Delta E=0$. For $q\neq0$ the evolution speed of a general state (not necessarily spin-up or spin-down) can be shown to be limited by \betaegin{equation} \mbox{\ensuremath{\dot{\left|\thetaheta\right|}}}\leq2\,\max(1,|q(z)|)\le2\,\sqrt{1+\left|q\right|^{2}}\,,\langlebel{eq: H Br speed ineq} \end{equation} where the last inequality simply follows from $\max(\left|a\right|,\left|b\right|)\le\sqrt{\left|a\right|^{2}+\left|b\right|^{2}}.$ From the norms of the Hamiltonian (\ref{eq: H Berry}),$\left\Vert \mathcal{H}\right\Vert _{SP}=\max(\left|i\right|,\left|-iq(z)\right|)\,\,,\,\,\left\Vert \mathcal{H}\right\Vert _{HS}=\sqrt{1+\left|q\right|^{2}}$ , we see that \betaegin{equation} \mbox{\ensuremath{\dot{\left|\thetaheta\right|}}}\le2\,\left\Vert \mathcal{H}\right\Vert _{SP}\leq2\,\left\Vert \mathcal{H}\right\Vert _{HS}. \end{equation} Hence, the maximal speed exactly fits within the spectral norm bound of $\mathcal{H}(z)$. Other optical systems whose evolution speeds near EPs can easily be studied are discussed, e.g., in \cite{Morales}. \section*{Appendix 3 - Spectral speed bounds for pseudo-Hermitian and $\mathcal{\mathcal{P}}\mathcal{T}-$symmetric two-level Hamiltonians\langlebel{sub:Two-level-pseudo-Hermitian}} We start from a general type NH traceless Hamiltonian, $\mathcal{H}$, written in terms of the Pauli matrices $\vec{\sigma}$: \betaegin{equation} \mathcal{H}(t)=[\vec{\alphalpha}(t)+i\vec{\betaeta}(t)]\cdot\vec{\sigma}\,, \end{equation} where $\vec{\alphalpha},\vec{\betaeta}\in\mathbb{R}^{3}$ are some time-dependent real vectors. 
The eigenvalues are: \betaegin{equation} E_{\pm}=\pm\sqrt{(\vec{\alphalpha}+i\vec{\betaeta})^{2}}=\pm\sqrt{\vec{\alphalpha}^{2}-\vec{\betaeta}^{2}+i2\vec{\alphalpha}\cdot\vec{\betaeta}}, \end{equation} and become real or pairwise complex conjugate for \betaegin{equation} \vec{\alphalpha}\cdot\vec{\betaeta}=0\qquad\Longrightarrow\qquad E_{\pm}\in\mathbb{R}\cup i\mathbb{R}.\langlebel{pseudo-cond} \end{equation} Operators and matrices with this specific spectral behavior are known to be symmetric under an anti-unitary transformation \cite{BBM-jpa-2002}, to be pseudo-Hermitian \cite{ali-jmp-2002} and self-adjoint in a Pontryagin space \cite{dijksma-langer-book-1996} (a finite-dimension type version of a Krein space \cite{azizov-book-1989}). For Hamiltonians $\mathcal{H}$ with $\vec{\alphalpha}\cdot\vec{\betaeta}=0$ the $SP$ norm and the eigenvalue difference reduce to \betaegin{equation} \left\Vert \mathcal{H}\right\Vert _{SP}=|\vec{\alphalpha}|+|\vec{\betaeta}|\,,\qquad\Delta E=2\sqrt{\vec{\alphalpha}^{2}-\vec{\betaeta}^{2}}.\langlebel{eq: sp of ph} \end{equation} Obviously it is possible to have an arbitrary large $\left\Vert \mathcal{H}\right\Vert _{SP}$ and a vanishing energy difference by choosing $|\vec{\alphalpha}|\thetao|\vec{\betaeta}|$. This choice corresponds to an EP limit for which $\left|\Delta E\right|/\left\Vert \mathcal{H}\right\Vert _{SP}\thetao0$, since the eigenvalues are very small near the degeneracy while $\left\Vert \mathcal{H}\right\Vert _{SP}$ remains roughly constant and finite. A simple example for a NH Hamiltonian with a similar type of behavior is the Hamiltonian $\mathcal{H}$ used in studies of the $\mathcal{\mathcal{P}}\mathcal{T}-$symmetric quantum brachistochrone problem \cite{cmb-brach-prl-2007} \betaegin{equation} \fl\mathcal{H}=\left(\betaegin{array}{cc} ir\sin\chi & s\\ s & -ir\sin\chi \end{array}\right)=s\sigma_{x}+ir\sin\chi\,\sigma_{z},\qquad r,s,\chi\in\mathbb{R}.\langlebel{eq: H bender} \end{equation} This complex symmetric Hamiltonian is $\mathcal{\mathcal{P}}\mathcal{T}-$symmetric, $[\mathcal{\mathcal{P}}\mathcal{T},\mathcal{H}]=0$ and $\mathcal{\mathcal{P}}-$pseudo-Hermitian, $\mathcal{\mathcal{P}}\mathcal{H}=\mathcal{H}^{\dagger}\mathcal{\mathcal{P}}$, with the parity operation given as $\mathcal{\mathcal{P}}=\sigma_{x}$ and the time reversal, $\mathcal{T}$, as complex conjugation. In \cite{cmb-brach-prl-2007}, it was shown that for certain parameter combinations $r,s,\chi$ such Hamiltonians with fixed and purely real eigenvalues difference $\Delta E\in\mathbb{R}$ can evolve a given initial state $|\Psi_{I}\rangle\in\mathbb{C}^{2}$ into an orthogonal final state $\ket{\Psi_{F}}\in\mathbb{C}^{2}$, $\betaraket{\Psi_{I}}{\Psi_{F}}=0$, in an arbitrarily short time interval. Due to the finite geodesic distance $\pi$ between these antipodal states $\ket{\Psi_{I}}$ and $\ket{\Psi_{F}}$ on the Bloch sphere, the corresponding evolution speed should diverge in this limit. 
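Before writing down the concrete relations, the mechanism can be illustrated numerically (the following sketch is ours and only meant as an illustration). Taking (\ref{eq: H bender}) with $s=1$ and writing $r\sin\chi=b$, the eigenvalue difference closes in the EP limit $b\to1$ while the spectral norm stays finite, exactly as anticipated by (\ref{eq: sp of ph}):
\begin{verbatim}
# EP limit of H = s*sigma_x + i*b*sigma_z  (here s = 1, b = r*sin(chi)):
# Delta E = 2*sqrt(1 - b^2) -> 0 while ||H||_SP = 1 + b stays of order one.
import numpy as np

def H(b, s=1.0):
    return np.array([[1j * b, s], [s, -1j * b]])

for b in (0.0, 0.9, 0.99, 0.999):
    E = np.linalg.eigvals(H(b))
    print(b, abs(E[0] - E[1]), np.linalg.norm(H(b), 2))
\end{verbatim}
Hence $\left|\Delta E\right|/\left\Vert \mathcal{H}\right\Vert _{SP}\to0$ in this limit, which is precisely the regime where the energy-difference based bound loses its meaning while the spectral norm bound (\ref{SP bound}) remains informative.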
The concrete relations can be easily obtained in terms of the simplifying reparametrization $r\sin\chi=s\sin\alpha$ which yields \betaegin{eqnarray} \fl\mathcal{H}=s\left(\betaegin{array}{cc} i\sin\alpha & 1\\ 1 & -i\sin\alpha \end{array}\right),\quad\Delta E=2s\cos\alpha,\quad\left\Vert \mathcal{H}\right\Vert _{SP}=|s|(1+|\sin\alpha|).\langlebel{fub-14-1} \end{eqnarray} As demonstrated in \cite{GRS-ep-jpa-2007}, the ultra-fast evolution regime predicted in \cite{cmb-brach-prl-2007} corresponds to an EP-limit $\alpha\thetao\pm\pi/2$ so that for fixed $\Delta E=\mbox{const}$ it holds $s=\Delta E/(2\cos\alpha)\thetao\infty$ and \betaegin{equation} \fl\mathcal{H}\thetao s\left(\betaegin{array}{cc} i & 1\\ 1 & -i \end{array}\right),\qquad\left\Vert \mathcal{H}\right\Vert _{SP}\alphapprox2|s|\thetao\infty\,,\qquad\left|\Delta E\right|/\left\Vert \mathcal{H}\right\Vert _{SP}\alphapprox\cos\alpha\thetao0.\langlebel{fub-15} \end{equation} According to (\ref{eq: bloch speed}), this would indeed allow for diverging evolution speeds on the Bloch sphere \betaegin{equation} \left|\frac{d\hat{n}}{dt}\right|=2\sqrt{K}\le2\left\Vert \mathcal{H}\right\Vert _{SP}\thetao\infty.\langlebel{fub-16} \end{equation} From this diverging spectral norm one might be led to the conclusion that actually such ultra-high evolution speeds and corresponding ultra-short evolution times might be forbidden by the limited resources of the system and the validity region of the model used. Both would set some natural upper bounds (ultra-violet cut-offs) on the evolution speed. This would be true if one were keeping within the present NH setups. Nevertheless, the same ultra-high-speed evolution regimes can be induced in subsystems of entangled Hermitian systems in larger Hilbert spaces \cite{GS-brach-prl-2008}. Due to geometric contraction effects the corresponding evolution speed of the associated (Naimark-dilated) Hermitian system in the larger Hilbert space will remain finite, well-behaved and much below any ultra-violet cutoffs. For completeness we note that the present evolution speed considerations are closely related to questions for possible lower bounds on evolution times (quantum brachistochrone problems) and possible violations of such bounds. Corresponding intensive theoretical studies for Hermitian setups \cite{Dorje Herm,Hook,japan-brach-prl-2006} in the early 2000's have been followed by investigations of various aspects of non-Hermitian systems ($\mathcal{\mathcal{P}}\mathcal{T}-$symmetric \cite{cmb-brach-prl-2007,cmb-lnp-2009,GRS-ep-jpa-2007,Ali-brach-prl-2007,Giri,GS-brach-pra-2008,GS-brach-prl-2008} and quasi-Hermitian ones \cite{Ali-brach-pra-2009}, as well as other of more general non-Hermitian types \cite{Assis}). $\mathcal{\mathcal{P}}\mathcal{T}-$symmetric setups \cite{cmb-prl-1998,cmb-rev-2007,ali-review-2010} have been experimentally studied via special arrangements of gain-loss components (active $\mathcal{\mathcal{P}}\mathcal{T}-$symmetry) and components of different loss (passive $\mathcal{\mathcal{P}}\mathcal{T}-$symmetry) in optical waveguide systems \cite{christo-prl-09,christo-nature-10}, microwave billiards \cite{pt-micro-prl-2012}, electronic LRC-circuits \cite{kottos-LRC-pra-11,kottos-UG-LRC-pra-12} and in mechanical systems of coupled pendulums \cite{cmb-mech-exp-2012}.\\ \betaegin{thebibliography}{References} \betaibitem{Nimbook}Moiseyev N, \thetaextit{Non-Hermitian Quantum Mechanics}, (Cambridge University Press, Cambridge, 2011). \betaibitem{Berry NH phys}Berry M V 2004 Czech. J. 
Phys \thetaextbf{54} 1039 \betaibitem{Gilary}Gilary I and Moiseyev N 2012 J. Phys. B \thetaextbf{45} 051002 \betaibitem{Berry grating}Berry M V and O'Dell D H J 1998 J. Phys. A. 31 2093. \betaibitem{MVB_pol}Berry M V 2011 J Opt. \thetaextbf{13} 115701. \betaibitem{christo-prl-08} K. G. Makris \thetaextit{et al.}, Phys. Rev. Lett. \thetaextbf{100}, 103904 (2008). \betaibitem{KGM-prl-08} S. Klaiman, U. Günther, N. Moiseyev, Phys. Rev. Lett. \thetaextbf{101}, 080402 (2008). \betaibitem{kottos-prl-09} O. Bendix \thetaextit{et al.}, Phys. Rev. Lett. \thetaextbf{103}, 030402 (2009). \betaibitem{christo-prl-09} A. Guo \thetaextit{et al.}, Phys. Rev. Lett. \thetaextbf{103}, 093902 (2009). \betaibitem{Longhi-prl-09} S. Longhi, Phys. Rev. Lett. \thetaextbf{103}, 123601 (2009). \betaibitem{christo-nature-10} C. E. Rüter \thetaextit{et al.}, Nat. Phys. \thetaextbf{6}, 192 (2010). \betaibitem{Kottos-nature-10} T. Kottos, Nature Physics \thetaextbf{6}, 166 (2010). \betaibitem{PT-lattice-pra-10}Makris K G et al. 2010 Phys. Rev. A \thetaextbf{81} 063807. \betaibitem{eva-hugh-pra-11}E. M. Graefe and H. F. Jones, Phys. Rev. A \thetaextbf{84}, 013818 (2011). \betaibitem{Morales}Morales-Molina L and Reyes S A 2011 J. Phys. B \thetaextbf{44} 205403. \betaibitem{WG}Uzdin R and Moiseyev N, 2012 Phys. Rev. A \thetaextbf{85} 031804(R). \betaibitem{Richter EP}Dembowski C et al., 2001 Phys. Rev. Lett. \thetaextbf{86} 787. \betaibitem{pt-micro-prl-2012} Bittner S et al., Phys. Rev. Lett. \thetaextbf{108}, 024101 (2012). \betaibitem{kottos-LRC-pra-11} Schindler J. et al., Phys. Rev. A \thetaextbf{84}, 040101(R) (2011). \betaibitem{kottos-UG-LRC-pra-12}Ramezani H. et al., Phys. Rev. A \thetaextbf{85}, 062122 (2012); arXiv:1205.1847. \betaibitem{Seyranian-Mailyb-book} A. P. Seyranian and A. A. Mailybaev, \thetaextit{Multiparameter stability theory with mechanical applications}, (World Scientific, Singapore, 2003). \betaibitem{cmb-mech-exp-2012} C. M. Bender, B. K. Berntson, D. Parker, and E. Samuel, arXiv:1206.4972. \betaibitem{schmid-book-2001} P. J. Schmid and D. S. Henningson, \thetaextit{Stability and Transition in Shear Flows}, (Springer, New York, 2001). \betaibitem{shkalikov-2004} A. A. Shkalikov, J. Math. Sc. \thetaextbf{124}, 5417 (2004); math-ph/0304030. \betaibitem{GSZ-jmp-2005} U. Günther, S. Stefani and M. Znojil, J. Math. Phys. \thetaextbf{46}, (2005), 063504. \betaibitem{GK-jpa-2006} U. Günther and O. Kirillov, J. Phys. A: Math. Gen. \thetaextbf{39}, (2006), 10057. \betaibitem{GLT-2010} U. Günther, H. Langer and C. Tretter, SIAM J. Math. Anal. \thetaextbf{42}, (2010), 1413; arXiv:1004.0231. \betaibitem{stefani-spectra-2012} A. Giesecke, F. Stefani and G. Gerbeth, arXiv:1202.2218. \betaibitem{cmb-brach-prl-2007} Bender C M, Brody D C, Jones H F, and Meister B K, 2007 Phys. Rev. Lett. \thetaextbf{98} 040403. \betaibitem{cmb-lnp-2009} C. M. Bender and D. C. Brody, Lect. Notes Phys. \thetaextbf{789}, 341 (2009); arXiv:0808.1823. \betaibitem{BU}Berry M V and Uzdin R 2011 J. Phys. A \thetaextbf{44} 435303 \betaibitem{UMM}Uzdin R, Mailybaev A and Moiseyev N 2011 J. Phys. A \thetaextbf{44} 435302 \betaibitem{Fleming}Fleming G N, 1973 Nuov. Cim. A \thetaextbf{16} 232. \betaibitem{baumg} H. Baumgärtel: \thetaextit{Analytic perturbation theory for matrices and operators} (Akademie-Verlag, Berlin, 1984) and (Operator Theory: Adv. Appl. \thetaextbf{15}, Birkhäuser, Basel, 1985). \betaibitem{Dorje Herm}Brody D C, 2003 J. Phys. A. \thetaextbf{36} 5587. \betaibitem{Hook}Brody D C and Hook D W, 2006 J. Phys. A. 
\textbf{39} L167. \bibitem{japan-brach-prl-2006} A. Carlini, A. Hosoya, T. Koike, and Y. Okudaira, Phys. Rev. Lett. \textbf{96}, 060503 (2006). \bibitem{Assis}Assis P E G and Fring A, 2008 J. Phys. A \textbf{41} 244002 \bibitem{GRS-ep-jpa-2007}Günther U, Rotter I and Samsonov B F 2007 J. Phys. A \textbf{40} 8815 \bibitem{Ali-brach-prl-2007} Mostafazadeh A, 2007 Phys. Rev. Lett. \textbf{99} 130502 \bibitem{Giri}Giri P R 2008 Int. J. Theor. Phys. \textbf{47} 2095 \bibitem{GS-brach-pra-2008}Günther U and Samsonov B F 2008 Phys. Rev. A \textbf{78} 042115 \bibitem{GS-brach-prl-2008}Günther U and Samsonov B F 2008 Phys. Rev. Lett. \textbf{101}, 230404 \bibitem{Ali-brach-pra-2009}Mostafazadeh A, 2009 Phys. Rev. A \textbf{79} 014101 \bibitem{AA-prl-1990} Anandan J and Aharonov Y 1990 Phys. Rev. Lett. \textbf{65} 1697 \bibitem{Demir}Demirplak M and Rice S A, 2003 J. Phys. Chem. \textbf{107} 9937; 2005 J. Phys. Chem. \textbf{109} 6838 \bibitem{MVB Trnsls}Berry M V 2009 J. Phys. A \textbf{42} 365303 \bibitem{Muga} Ib\'a\~{n}ez S, Mart\'\i nez-Garaot S, Chen X, Torrontegui E and Muga J G 2011 Phys. Rev. A \textbf{84} 023415 \bibitem{quant-geom-book} Bengtsson I and Zyczkowski K, \textit{Geometry of Quantum States}, (Cambridge University Press, Cambridge, 2006). \bibitem{horn} R. A. Horn and C. R. Johnson, \textit{Matrix Analysis}, (Cambridge: Cambridge University Press, 1985). \bibitem{AA phase}Aharonov Y and Anandan J 1987 Phys. Rev. Lett. \textbf{58} 1593 \bibitem{BBM-jpa-2002} C M Bender, M V Berry and A Mandilara, 2002, J. Phys. A: Math. Gen. \textbf{35} L467 \bibitem{ali-jmp-2002} A. Mostafazadeh, J. Math. Phys. \textbf{43}, 205-214 (2002), math-ph/0107001. \bibitem{dijksma-langer-book-1996} A. Dijksma and H. Langer, \textit{Operator theory and ordinary differential operators}, in A. Böttcher (ed.) \textit{et al.}, \textit{Lectures on operator theory and its applications}, Providence, RI: Am. Math. Soc., Fields Institute Monographs, \textbf{Vol. 3}, 75 (1996). \bibitem{azizov-book-1989}T. Ya. Azizov and I. S. Iokhvidov, \textit{Linear operators in spaces with an indefinite metric} (Wiley-Interscience, New York, 1989). \bibitem{cmb-prl-1998} C. M. Bender and S. Boettcher, Phys. Rev. Lett. \textbf{80}, 5243 (1998). \bibitem{cmb-rev-2007} Bender C M 2007 Rep. Prog. Phys. \textbf{70} 947-1018 (Preprint hep-th/0703096) \bibitem{ali-review-2010} A. Mostafazadeh, Int. J. Geom. Meth. Mod. Phys. \textbf{7}, 1191-1306 (2010), arXiv:0810.5643.\end{thebibliography} \end{document}
\begin{document} \maketitle \def\thefootnote{} \footnotetext{2000 {\it{Mathematics Subject Classification}} Primary 14J45, Secondary 14M20.} \begin{abstract} In this paper we investigate Fano manifolds $X$ whose Chern characters $\text {ch}_k(X)$ satisfy some positivity conditions. Our approach is via the study of \emph{polarized minimal families of rational curves} $(H_x,L_x)$ through a general point $x\in X$. First we translate positivity properties of the Chern characters of $X$ into properties of the pair $(H_x,L_x)$. This allows us to classify polarized minimal families of rational curves associated to Fano manifolds $X$ satisfying $\text {ch}_2(X)\geq0$ and $\text {ch}_3(X)\geq0$. As a first application, we provide sufficient conditions for these manifolds to be covered by subvarieties isomorphic to $\p^2$ and $\p^3$. Moreover, this classification enables us to find new examples of Fano manifolds satisfying $\text {ch}_2(X)\geq0$. \end{abstract} \section{Introduction}\label{introduction} Fano manifolds, i.e., smooth complex projective varieties with ample anticanonical class, play an important role in the classification of complex projective varieties. In \cite{mori79}, Mori showed that Fano manifolds are uniruled, i.e., they contain rational curves through every point. Then he studied minimal dominating families of rational curves on Fano manifolds, and used them to characterize projective spaces as the only smooth projective varieties having ample tangent bundle. Since then, minimal dominating families of rational curves have been extensively investigated and proved to be a useful tool in the study of uniruled varieties. Let $X$ be a smooth complex projective uniruled variety, and $x\in X$ a general point. A \emph{minimal family of rational curves through $x$} is a \emph{smooth} and \emph{proper} irreducible component $H_x$ of the scheme $\rat(X,x)$ parametrizing rational curves on $X$ passing through $x$. There always exists such a family. For instance, one can take $H_x$ to be an irreducible component of $\rat(X,x)$ parametrizing rational curves through $x$ having minimal degree with respect to some fixed ample line bundle on $X$. While we view $X$ as an abstract variety, $H_x$ comes with a natural polarization $L_x$, which can be defined as follows (see Section~\ref{section:rat_curves} for details). By \cite{kebekus}, there is a \emph{finite morphism} $\tau_x: \ H_x \to \p(T_xX^{^{\vee}})$ that sends a point parametrizing a curve smooth at $x$ to its tangent direction at $x$. We set $L_x=\tau_x^*\o(1)$. We call the pair $(H_x, L_x)$ a \emph{polarized minimal family of rational curves through $x$}. The image ${\mathcal {C}}_x$ of $\tau_x$ is called the \emph{variety of minimal rational tangents} at $x$. The natural projective embedding ${\mathcal {C}}_x\subset \p(T_xX^{^{\vee}})$ has been successfully explored to investigate the geometry of Fano manifolds. See \cite{hwang} and \cite{hwang_ICM} for an overview of applications of the variety of minimal rational tangents. In this paper we view $(H_x, L_x)$ as a \emph{smooth polarized variety}. We start by giving a formula for all the Chern characters of the variety $H_x$ in terms of the Chern characters of $X$ and $c_1(L_x)$. This illustrates the general principle that the pair $(H_x, L_x)$ encodes many properties of the ambient variety $X$. 
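A simple guiding example, which is standard and will not be needed in what follows, is $X=\p^n$: the lines through a point $x$ form a minimal family, the morphism $\tau_x$ is an isomorphism, and
$$
(H_x, L_x)\ \cong\ \big(\p^{n-1},\o_{\p^{n-1}}(1)\big), \qquad {\mathcal {C}}_x=\p(T_xX^{^{\vee}}).
$$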
In what follows $\pi_x:U_x\to H_x$ and $\ev_x:U_x\to X$ denote the universal family morphisms introduced in section~\ref{section:rat_curves}, and the $B_j$'s denote the Bernoulli numbers, defined by the formula $\frac{x}{e^x-1}=\sum_{j=0}^{\infty} \frac{B_j}{j!} x^j$. \begin{prop} \label{chern_characters} Let $X$ be a smooth complex projective uniruled variety. Let $(H_x, L_x)$ be a polarized minimal family of rational curves through a general point $x\in X$. For any $k\geq1$, the $k$-th Chern character of $H_x$ is given by the formula: \begin{equation}\label{ch_k for H_x} ch_k(H_x)=\sum_{j=0}^{k}\frac{(-1)^jB_j}{j!}c_1(L_x)^j{\pi_x}_*\ev_x^*\big(\text {ch}_{k+1-j}(X)\big)-\frac{1}{k!}c_1(L_x)^k. \end{equation} When $k$ is $1$ or $2$ this becomes: \begin{equation}\label{c_1 of H_x} c_1(H_x)={\pi_x}_*\ev_x^*\big(\text {ch}_2(X)\big)+\frac{d}{2}c_1(L_x), \ \ and \end{equation} \begin{equation}\label{ch_2 of H_x} \text {ch}_2(H_x)={\pi_x}_*\ev_x^*\big(\text {ch}_3(X)\big)+\frac{1}{2}{\pi_x}_*\ev_x^*\big(\text {ch}_2(X)\big)\cdot c_1(L_x)+\frac{d-4}{12}c_1(L_x)^2. \end{equation} \end{prop} Formulas for the first Chern class $c_1(H_x)$ were previously obtained in \cite[Proposition 4.2]{druel_chern_classes} and \cite[Theorem 1.1]{dJ-S:Chern_classes}. However, $c_1(L_x)$ appears disguised in those formulas. Next we turn our attention to Fano manifolds $X$ whose Chern characters satisfy some positivity conditions. In order to state our main theorem, we introduce some notation. See section~\ref{section:rat_curves} for details. Given a positive integer $k$, we denote by $N_k(X)_{\r}$ the real vector space of $k$-cycles on $X$ modulo numerical equivalence. We denote by $\eff_k(X)\subset N_k(X)_{\r}$ the closure of the cone generated by effective $k$-cycles. There is a linear map ${\ev_x}_*\pi_x^*: N_k(H_x)_{\r} \to N_{k+1}(X)_{\r}$. A codimension $k$ cycle $\alpha\in A^k(X)\otimes_\z\q$ is \emph{weakly positive} (respectively \emph{nef}) if $\alpha\cdot \beta>0$ (respectively $\alpha\cdot \beta \geq 0$) for every effective integral $k$-cycle $\beta\neq 0$. In this case we write $\alpha>0$ (respectively $\alpha\geq0$). One easily checks that the only del Pezzo surface satisfying $\text {ch}_2>0$ is $\p^2$. In \cite{2Fano_3folds}, we go through the classification of Fano threefolds, and check that the only ones satisfying $\text {ch}_2>0$ are $\p^3$ and the smooth quadric hypersurface in $\p^4$. In higher dimensions, Proposition~\ref{chern_characters} above allows us to translate positivity properties of the Chern characters of $X$ into those of $H_x$, and classify polarized varieties $(H_x, L_x)$ associated to Fano manifolds $X$ with $\text {ch}_2(X)\geq0$ and $\text {ch}_3(X)\geq0$. The following is our main theorem. \begin{thm}\label{thm1} \label{thm2} Let $X$ be a Fano manifold. Let $(H_x, L_x)$ be a polarized minimal family of rational curves through a general point $x\in X$. Set $d=\dim H_x$. \begin{enumerate} \item If $\text {ch}_2(X)>0$ (respectively $\text {ch}_2(X)\geq0$), then $-2K_{H_x}-dL_x$ is ample (respectively nef). This necessary condition is also sufficient provided that ${\ev_x}_*\pi_x^*\big(\eff_1(H_x)\big)=\eff_2(X)$. 
\item If $\text {ch}_2(X)>0$, then $H_x$ is a Fano manifold with $\rho(H_x)=1$ except when $(H_x, L_x)$ is isomorphic to one of the following: \begin{enumerate} \item $\Big(\p^m\times \p^m, p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)\Big)$, with $d=2m$, \item $\Big(\p^{m+1}\times\p^{m} \ ,\ p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)\Big)$, with $d=2m+1$, \item $\Big(\p_{\p^{m+1}}\Big(\o(2)\oplus \o(1)^{^{\oplus m}}\Big) \ , \ \o_{\p}(1)\Big)$, with $d=2m+1$, \item $\Big(\p^{m}\times Q^{m+1} \ ,\ p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)\Big)$, with $d=2m+1$, or \item $\Big(\p_{\p^{m+1}}\big(T_{\p^{m+1}}\big) \ , \ \o_{\p}(1)\Big)$, with $d=2m+1$. \end{enumerate} Moreover, each of these exceptional pairs occurs as $(H_x, L_x)$ for some Fano manifold $X$ with $\text {ch}_2(X)>0$. \item If $\text {ch}_2(X)>0$, $\text {ch}_3(X)\geq0$ and $d\geq 2$, then $H_x$ is a Fano manifold with $\rho(H_x)=1$ and $\text {ch}_2(H_x)>0$. \end{enumerate} \end{thm} \begin{rems} \noindent {\bf (i)} Fano manifolds $X$ with $\text {ch}_2(X)\geq 0$ were introduced in \cite{dJ-S:2fanos_1} and \cite{dJ-S:2fanos_2}. In \cite{dJ-S:2fanos_1} de Jong and Starr described a few examples and many non-examples of such manifolds. Roughly, the only examples of Fano manifolds with $\text {ch}_2(X)>0$ in their list are complete intersections of type $(d_1,\cdots, d_m)$ in $\p^n$, with $\sum d_i^2\leq n$, and the Grassmannians $G(k,2k)$ and $G(k,2k+1)$. Theorem~\ref{thm1} explains why, as pointed out in \cite{dJ-S:2fanos_1}, other examples are difficult to find. \noindent {\bf (ii)} Eventually, one would hope to classify all Fano manifolds with weakly positive (or even nef) higher Chern characters. Theorem~\ref{thm1} is a step in this direction. In fact, many homogeneous spaces $X$ are characterized by their variety of minimal rational tangents ${\mathcal {C}}_x=\tau_x(H_x)\subset \p(T_xX^{^{\vee}})$ among Fano manifolds with Picard number one. This is the case when $X$ is a Hermitian symmetric space or a homogeneous contact manifold {\cite[Main Theorem]{mok_05}}, or $X$ is the quotient of a complex simple Lie group by a maximal parabolic subgroup associated to a long simple root {\cite[Main Theorem]{hong_hwang}}. Notice, however, that $(H_x,L_x)$ carries less information than the embedding ${\mathcal {C}}_x\subset \p(T_xX^{^{\vee}})$. For instance, in Example~\ref{G2/P}, $X$ is the $5$-dimensional homogeneous space $G_2/P$, $(H_x, L_x)\cong \big(\p^1,\o(3)\big)$, and ${\mathcal {C}}_x$ is a twisted cubic in $\p(T_xX^{^{\vee}})\cong \p^4$, and thus degenerate. \noindent {\bf (iii)} In a forthcoming paper we classify polarized varieties $(H_x, L_x)$ associated to Fano manifolds $X$ satisfying $\text {ch}_2(X)\geq0$. In this case, the list of pairs $(H_x, L_x)$ with $\rho(H_x)>1$ is much longer than the one in Theorem~\ref{thm1}(2). \end{rems} By \cite{mori79}, Fano manifolds are covered by rational curves. In \cite{dJ-S:2fanos_2}, de Jong and Starr considered the question of whether there is a rational surface through a general point of a Fano manifold $X$ satisfying $\text {ch}_2(X)\geq 0$. They showed that the answer is positive if the pseudoindex $i_X$ of $X$ is at least $3$. The condition $i_X\geq 3$ implies that $\dim H_x\geq1$, and so Theorem~\ref{thm1}(1) recovers their result. In fact, we can say a bit more: \begin{thm}\label{thm3} Let $X$ be a Fano manifold, and $(H_x, L_x)$ a polarized minimal family of rational curves through a general point $x\in X$. Suppose that $\text {ch}_2(X)\geq0$ and $d=\dim H_x\geq 1$.
\begin{enumerate} \item (\cite{dJ-S:2fanos_2}) There is a rational surface through $x$. \item If $\text {ch}_2(X)> 0$ and $(H_x, L_x)\not\cong$ $\big(\p^d,\o(2)\big)$, $\big(\p^1,\o(3)\big)$, then there is a generically injective morphism $g:(\p^{2},p)\to (X,x)$ mapping lines through $p$ to curves parametrized by $H_x$. \end{enumerate} Suppose moreover that $\text {ch}_2(X)> 0$, $\text {ch}_3(X)\geq0$ and $d\geq 2$. \begin{enumerate} \setcounter{enumi}{2} \item There is a rational $3$-fold through $x$, except possibly if $(H_x, L_x)\cong \big(\p^2,\o(2)\big)$ and ${\mathcal {C}}_x=\tau_x(H_x)$ is singular. \item Let $(W_h,M_h)$ be a polarized minimal family of rational curves through a general point $h\in H_x$. Suppose that $(H_x, L_x)\not\cong \big(\p^d,\o(2)\big)$ and $(W_h,M_h)\not\cong$ $\big(\p^k,\o(2)\big)$, $\big(\p^1,\o(3)\big)$. Then there is a generically injective morphism $h:(\p^{3},q)\to (X,x)$ mapping lines through $q$ to curves parametrized by $H_x$. \end{enumerate} \end{thm} \begin{rems} \noindent {\bf (i)} We believe that the exception in Theorem~\ref{thm3}(3) does not occur. \noindent {\bf (ii)} If $H_x$ parametrizes lines under some projective embedding $X\DOTSB\lhook\joinrel\rightarrow \p^N$, then the morphisms $g$ and $h$ from Theorem~\ref{thm3}(2) and (4) map lines of $\p^{2}$ and $\p^3$ to lines of $\p^N$. Hence, they are isomorphisms onto their images. \end{rems} This paper is organized as follows. In section~\ref{section:rat_curves} we introduce polarized minimal families of rational curves and study some of their basic properties. In section~\ref{section:Chern} we make a Chern class computation to prove Proposition~\ref{chern_characters}. This is a key ingredient in the proof of Theorem~\ref{thm1}. Theorems~\ref{thm1} and \ref{thm3} are proved in section~\ref{section:proofs}. In section~\ref{examples}, we give new examples of Fano manifolds satisfying $\text {ch}_2(X)\geq 0$. In particular, we exhibit Fano manifolds $X$ with $\text {ch}_2(X)>0$ realizing each of the exceptional pairs in Theorem~\ref{thm1}. \noindent {\bf Notation.} Throughout this paper we work over the field of complex numbers. We often identify vector bundles with their corresponding locally free sheaves. We also identify a divisor on a smooth projective variety $X$ with its corresponding line bundle and its class in $\pic(X)$. Let $E$ be a vector bundle on a variety $X$. We denote the Grothendieck projectivization $\operatorname{Proj}_X(\operatorname{Sym}(E))$ by $\p(E)$, and the tautological line bundle on $\p(E)$ by $\o_{\p(E)}(1)$, or simply $\o_{\p}(1)$. By a \emph{rational curve} we mean a \emph{proper rational curve}, unless otherwise noted. \noindent {\it Acknowledgments.} Most of this work was developed while we were research members at the Mathematical Sciences Research Institute (MSRI) during the 2009 program in Algebraic Geometry. We are grateful to MSRI and the organizers of the program for providing a very stimulating environment for our research and for the financial support. This work has benefitted from ideas and suggestions by J\'anos Koll\'ar and Jaros\l aw Wi\'sniewski. We thank them for their comments and interest in our work. We thank Izzet Coskun, Johan de Jong, Jason Starr and Jenia Tevelev for fruitful discussions on the subject of this paper. The first named author was partially supported by a CNPq-Brazil Research Fellowship and a L'Or\'eal-Brazil For Women in Science Fellowship.
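Before turning to the general theory, we record an elementary sanity check of Proposition~\ref{chern_characters}; the computation below is standard and is not used elsewhere in the paper. For $X=\p^n$ the Euler sequence gives $\text {ch}(T_{\p^n})=(n+1)e^{H}-1$, with $H$ the hyperplane class, so $\text {ch}_2(\p^n)=\frac{n+1}{2}H^2$. The minimal family through $x$ is the family of lines, with $(H_x,L_x)\cong\big(\p^{n-1},\o(1)\big)$ and $d=n-1$, and ${\pi_x}_*\ev_x^*(H^2)=c_1(L_x)$ (a special case of Lemma~\ref{Tk_preserves_positivity}(1) below, with $a=1$). Formula \eqref{c_1 of H_x} then yields
$$
c_1(H_x)=\frac{n+1}{2}\,c_1(L_x)+\frac{n-1}{2}\,c_1(L_x)=n\,c_1(L_x),
$$
in agreement with $c_1(\p^{n-1})=n\,c_1\big(\o_{\p^{n-1}}(1)\big)$.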
\section{Polarized minimal families of rational curves}\label{section:rat_curves} We refer to \cite[Chapters I and II]{kollar} for the basic theory of rational curves on complex projective varieties. See also \cite{debarre}. Let $X$ be a smooth complex projective uniruled variety of dimension $n$. Let $x\in X$ be a \emph{general} point. There is a scheme $\rat(X,x)$ parametrizing rational curves on $X$ passing through $x$. This scheme is constructed as the normalization of a certain subscheme of the Chow variety $\operatorname{Chow}(X)$ parametrizing effective $1$-cycles on $X$. We refer to \cite[II.2.11]{kollar} for details on the construction of $\rat(X,x)$. An irreducible component $H_x$ of $\rat(X,x)$ is called a \emph{family of rational curves through $x$}. It can also be described as follows. There is an irreducible open subscheme $V_x$ of the Hom scheme $\Hom(\p^1,X,o\mapsto x)$ parametrizing morphisms $f:\p^1\to X$ such that $f(o)=x$ and $f_*[\p^1]$ is parametrized by $H_x$. Then $H_x$ is the quotient of $V_x$ by the natural action of the automorphism group $\aut(\p^1,o)$. Given a morphism $f:\p^1\to X$ parametrized by $V_x$, we use the same symbol $[f]$ to denote the element of $V_x$ corresponding to $f$, and its image in $H_x$. Since $x\in X$ is a general point, both $V_x$ and $H_x$ are smooth, and every morphism $f:\p^1\to X$ parametrized by $V_x$ is \emph{free}, i.e., $f^*T_X\ \cong \ \bigoplus_{i=1}^{\dim X}\o_{\p^1}(a_i)$, with all $a_i\geq 0$. From the universal properties of $\Hom(\p^1,X,o\mapsto x)$ and $\operatorname{Chow}(X)$, we get a commutative diagram: \begin{equation} \label{diagram_Hx} \xymatrix{ \p^1 \times V_x \ar[d] \ar[r] \ar@/^0.8cm/[rrr]^{(t,[f])\mapsto f(t)} & U_x \ar[d]_{\pi_x} \ar[rr]^{\ev_x} & & X, \\ V_x \ar[r] \ar@/^0.4cm/[u]^{\{o\}\times \id} & H_x \ar@/_0.4cm/[u]_{\sigma_x} } \end{equation} where $\pi_x$ is a $\p^1$-bundle and $\sigma_x$ is the unique section of $\pi_x$ such that $\ev_x\big(\sigma_x(H_x)\big)=x$. We denote by the same symbol both $\sigma_x$ and its image in $U_x$, which equals the image of $\{o\}\times V_x$ in $U_x$. In \cite[Proposition 3.7]{druel_chern_classes}, Druel gave the following description of the tangent bundle of $H_x$: \begin{equation} \label{druel} T_{H_x}\ \cong \ (\pi_x)_* \Big( \big( (\ev_x^*T_X)/T_{\pi_x} \big)(-\sigma_x) \Big), \end{equation} where the relative tangent sheaf $T_{\pi_x}=T_{U_x/H_x}$ is identified with its image under the map $d\ev_x: T_{U_x}\to \ev_x^*T_X$. When $H_x$ is proper, we call it a \emph{minimal family of rational curves through $x$}. \begin{say}[Minimal families of rational curves]\label{Hx} Let $H_x$ be a minimal family of rational curves through $x$. For a general point $[f]\in H_x$, we have $f^*T_X \cong \o(2)\oplus \o(1)^{\oplus d}\oplus \o^{\oplus n-d-1}$, where $d=\dim H_x=\deg(f^*T_X)-2\leq n-1$ (see \cite[IV.2.9]{kollar}). Moreover, $d=n-1$ if and only if $X\cong \p^n$ by \cite{CMSB} (see also \cite{kebekus_on_CMSB}). Let $H_x^{\text{Sing},x}$ denote the subvariety of $H_x$ parametrizing curves that are singular at $x$. By a result of Miyaoka (\cite[V.3.7.5]{kollar}), if $Z\subset H_x\setminus H_x^{\text{Sing},x}$ is a \emph{proper} subvariety, then $\ev_x\big|_{\pi_x^{-1}(Z)}: \pi_x^{-1}(Z)\to X$ is generically injective. In particular, if $H_x^{\text{Sing},x}=\emptyset$, then $\ev_x$ is birational onto its image. By \cite[Theorem 3.3]{kebekus}, $H_x^{\text{Sing},x}$ is at most finite, and every curve parametrized by $H_x$ is immersed at $x$.
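For instance, if $X=\p^n$ and $[f]\in H_x$ parametrizes a line through $x$, then restricting the Euler sequence to the line gives the standard splitting $f^*T_{\p^n}\cong \o(2)\oplus \o(1)^{\oplus n-1}$, so that $d=\deg(f^*T_{\p^n})-2=n-1$, in accordance with $H_x\cong \p^{n-1}$.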
\end{say} \begin{say}[Polarized minimal families of rational curves]\ \label{describing_Lx} Now we describe a natural polarization $L_x$ associated to a minimal family $H_x$ of rational curves through $x$. There is an inclusion of sheaves \begin{equation}\label{tau} \sigma_x^*\big(T_{\pi_x}\big)\ \DOTSB\lhook\joinrel\rightarrow \ \sigma_x^*\ev_x^*T_X\ \cong \ T_xX\otimes \o_{H_x}. \end{equation} By \cite[Theorems 3.3 and 3.4]{kebekus}, the cokernel of this map is locally free, and defines a finite morphism $\tau_x: \ H_x \to \p(T_xX^{^{\vee}})$. By \cite{hwang_mok_birationality}, $\tau_x$ is birational onto its image. Notice that $\tau_x$ sends a curve that is smooth at $x$ to its tangent direction at $x$. Set $L_x=\tau_x^*\o(1)$. It is an ample and globally generated line bundle on $H_x$. We call the pair $(H_x, L_x)$ a \emph{polarized minimal family of rational curves through $x$}. The following description of $L_x$ from \cite[4.2]{druel_chern_classes} is very useful for computations. Set $E_x=(\pi_x)_*\o_{U_x}(\sigma_x)$. Then $U_x\cong \p(E_x)$ over $H_x$, and under this isomorphism $\o_{U_x}(\sigma_x)$ is identified with the tautological line bundle $\o_{\p(E_x)}(1)$. Notice also that $\sigma_x^*\big(\o_{U_x}(\sigma_x)\big) \cong \sigma_x^* \big({\mathcal {N}}_{\sigma_x/U_x}\big) \cong \sigma_x^*\big(T_{\pi_x}\big)$. Therefore \eqref{tau} induces an isomorphism \begin{equation}\label{Lx=-normal} L_x\cong \sigma_x^*\big({T_{\pi_x}}\big)^{-1}\cong \sigma_x^*\o_{U_x}(-\sigma_x) \cong \sigma_x^*\o_{\p(E_x)}(-1). \end{equation} By pulling back by $\sigma_x$, the Euler sequence \begin{equation}\label{euler} 0 \ \to \ \ \o_{\p(E_x)}(1)\otimes\big({T_{\pi_x}}\big)^{-1} \ \to \ \pi_x^*E_x \ \to \ \o_{\p(E_x)}(1) \ \to \ 0 \end{equation} induces an exact sequence \begin{equation}\label{L} 0 \ \to \ \o_{H_x} \ \to \ E_x \ \to \ L_x^{-1} \ \to \ 0. \end{equation} This description of $L_x$ and the projection formula yield the following identities of cycles on $U_x$: \begin{itemize} \item[(i) ] $\ev_x^*(c_1(X))=(d+2)(\sigma_x+\pi_x^*c_1(L_x))$, \item[(ii) ] $\sigma_x\cdot\ev_x^*(\gamma)=0$ for any $\gamma\in A^k(X)$, $k\geq1$, and \item[(iii) ] $\sigma_x^2=-\sigma_x\cdot\pi_x^*c_1(L_x)$, \end{itemize} where, as before, $d=\dim H_x=\deg(f^*T_X)-2$ for any $[f]\in H_x$. \end{say} \begin{lemma} \label{f:P^k->X} Let $X$ be a smooth complex projective uniruled variety. Let $(H_x, L_x)$ be a polarized minimal family of rational curves through a general point $x\in X$. Suppose there is a subvariety $Z\subset H_x$ such that $(Z, L_x|_Z)\cong (\p^k,\o_{\p^k}(1))$. \begin{enumerate} \item Then there is a finite morphism $g:(\p^{k+1},p)\to (X,x)$ that maps lines through $p$ birationally to curves parametrized by $H_x$. \item If moreover $Z\subset H_x\setminus H_x^{\text{Sing},x}$, then $g$ is generically injective. \end{enumerate} \end{lemma} \begin{proof} Let $Z\subset H_x$ be a subvariety such that $(Z, L_x|_Z)\cong (\p^k,\o_{\p^k}(1))$. Set $U_Z:=\pi_x^{-1}(Z)$, $\sigma_Z:=\sigma_x\cap U_Z$, and $E_Z:=E_x|_{Z}$. By \ref{describing_Lx}, $U_Z$ is isomorphic to $\p(E_Z)$ over $Z$, and under this isomorphism $\o_{U_Z}(\sigma_Z)$ is identified with the tautological line bundle $\o_{\p(E_Z)}(1)$. By \eqref{L}, $E_Z\cong \o_{\p^k}\oplus \o_{\p^k}(-1)$. Thus $U_Z$ is isomorphic to the blowup of $\p^{k+1}$ at a point $p$, and under this isomorphism $\sigma_Z$ is identified with the exceptional divisor.
Since $\ev_x|_{U_Z}: U_Z\to X$ maps $\sigma_Z$ to $x$ and contracts nothing else, it factors through a finite morphism $g:\p^{k+1}\to X$ mapping $p$ to $x$. The lines through $p$ on $\p^{k+1}$ are images of fibers of $\pi_x$ over $Z$, and thus are mapped birationally to curves parametrized by $Z\subset H_x$. If $Z\subset H_x\setminus H_x^{\text{Sing},x}$, then $g$ is generically injective by \cite[V.3.7.5]{kollar}. \end{proof} \begin{say}\label{H} There is a scheme $\rat(X)$ parametrizing rational curves on $X$. A \emph{minimal dominating family of rational curves on $X$} is an irreducible component $H$ of $\rat(X)$ parametrizing a family of rational curves that sweeps out a dense open subset of $X$, and satisfying the following condition. For a general point $x\in X$, the (possibly reducible) subvariety $H(x)$ of $H$ parametrizing curves through $x$ is proper. In this case, for each irreducible component $H(x)^i$ of $H(x)$, there is a minimal family $H_x^i$ of rational curves through $x$ parametrizing the same curves as $H(x)^i$. Moreover, $H_x^i$ is naturally isomorphic to the normalization of $H(x)^i$. This follows from the construction of $\rat(X)$ and $\rat(X,x)$ in \cite[II.2.11]{kollar}. If in addition $H$ is proper, then we say that it is an \emph{unsplit covering family of rational curves}. This is the case, for instance, when the curves parametrized by $H$ have degree $1$ with respect to some ample line bundle on $X$. \end{say} We end this section by investigating the relationship between the Chow ring of a Fano manifold and that of its minimal families of rational curves. \begin{defn}\label{defn_cycles} Let $X$ be a projective variety, and $k$ a non-negative integer. We denote by $A_k(X)$ the group of $k$-cycles on $X$ modulo rational equivalence, and by $A^k(X)$ the $k^{\text{th}}$ graded piece of the Chow ring $A^*(X)$ of $X$. Let $N_k(X)$ (respectively $N^k(X)$) be the quotient of $A_k(X)$ (respectively $A^k(X)$) by numerical equivalence. Then $N_k(X)$ and $N^k(X)$ are finitely generated Abelian groups, and the intersection product induces a perfect pairing $N^k(X)\times N_k(X)\to \z$. For every $\z$-module $B$, set $N_k(X)_B:=N_k(X)\otimes B$ and $N^k(X)_B:=N^k(X)\otimes B$. We denote by $\eff_k(X)\subset N_k(X)_{\r}$ the closure of the cone generated by effective $k$-cycles. Let $\alpha \in N^k(X)_{\r}$. We say that $\alpha$ is \begin{enumerate} \item[$\bullet$] \emph{ample} if $\alpha=A^k$ for some ample $\r$-divisor $A$ on $X$; \item[$\bullet$] \emph{positive} if $\alpha\cdot \beta>0$ for every $\beta\in \eff_k(X)\setminus \{0\}$; \item[$\bullet$] \emph{weakly positive} if $\alpha\cdot \beta>0$ for every effective integral $k$-cycle $\beta\neq 0$; \item[$\bullet$] \emph{nef} if $\alpha\cdot \beta\geq0$ for every $\beta\in \eff_k(X)$. \end{enumerate} We write $\alpha>0$ for $\alpha$ weakly positive and $\alpha\geq 0$ for $\alpha$ nef. \end{defn} \begin{defn}\label{defn_Tk} Let $X$ be a smooth projective uniruled variety, and $H_x$ a minimal family of rational curves through a general point $x\in X$. Let $\pi_x$ and $\ev_x$ be as in \eqref{diagram_Hx}. For every positive integer $k$, we define linear maps \begin{align} T^k: \ & N^k(X)_{\r} \ \to \ N^{k-1}(H_x)_{\r}\ , & \ T_k: \ & N_k(H_x)_{\r} \ \to \ N_{k+1}(X)_{\r}. \notag \\ & \ \ \ \ \ \alpha \ \mapsto \ {\pi_x}_*\ev_x^*\alpha & & \ \ \ \ \ \beta \ \mapsto \ {\ev_x}_*\pi_x^*\beta \notag \end{align} This is possible because $\ev_x$ is proper and $\pi_x$ is a $\p^1$-bundle, and thus flat.
We remark that in general these maps are neither injective nor surjective. \end{defn} \begin{lemma}\label{Tk_preserves_positivity} Let $X$ be a smooth projective uniruled variety, and $(H_x, L_x)$ a polarized minimal family of rational curves through a general point $x\in X$. \begin{enumerate} \item Let $A$ be an $\r$-divisor on $X$, and set $a=\deg f^*A$, where $[f]\in H_x$. Then $T^k(A^k)= a^k c_1(L_x)^{k-1}$. \item $T_k$ maps $\eff_k(H_x)\setminus \{0\}$ into $\eff_{k+1}(X)\setminus \{0\}$. \item $T^k$ preserves the properties of being ample, positive, weakly positive and nef. \end{enumerate} \end{lemma} \begin{proof} To prove (1), let $A$ be an $\r$-divisor on $X$, and set $a=\deg f^*A$, where $[f]\in H_x$. Using \eqref{Lx=-normal} and \ref{describing_Lx}(ii), it is easy to see that $\ev_x^*A= a(\sigma_x+\pi_x^*L_x)$ in $N^1(U_x)$. By \ref{describing_Lx}(iii) and the projection formula, {\small \begin{align} T^k(A^k)={\pi_x}_*\ev_x^*(A^k)&= a^k{\pi_x}_*\left[\sum_{i=0}^k\binom{k}{i}\sigma_x^{k-i}\cdot \pi_x^*c_1(L_x)^i\right] \notag \\ &=a^k{\pi_x}_*\left[\left(\sum_{i=0}^{k-1}\binom{k}{i}(-1)^{k-i-1}\right)\sigma_x \cdot \pi_x^*c_1(L_x)^{k-1} + \pi_x^*c_1(L_x)^{k}\right] \notag \\ &=a^k{\pi_x}_*\Big[\sigma_x\cdot \pi_x^*c_1(L_x)^{k-1} + \pi_x^*c_1(L_x)^{k}\Big] = a^k c_1(L_x)^{k-1}.\notag \end{align}} Notice that $T_k$ maps effective cycles to effective cycles, inducing a linear map $T_k: \eff_k(H_x) \to\eff_{k+1}(X)$. By taking $A$ an ample divisor in (1) above, we see that $T_k$ maps $\eff_k(H_x)\setminus \{0\}$ into $\eff_{k+1}(X)\setminus \{0\}$. By the projection formula, $T^{k+1}(\alpha)\cdot\beta=\alpha\cdot T_k(\beta)$ for every $\alpha\in N^{k+1}(X)_{\r}$ and $\beta\in N_k(H_x)_{\r}$. Together with (1) and (2) above, this implies that $T^k$ preserves the properties of being ample, positive, weakly positive and nef. \end{proof} \section{A Chern class computation}\label{section:Chern} In this section we prove Proposition~\ref{chern_characters}. We refer to \cite{fulton} for basic results about intersection theory. In particular, $\text {ch}(F)$ denotes the Chern character of the sheaf $F$, and $\text {td}(F)$ denotes its Todd class. We follow the lines of the proof of \cite[Proposition 4.2]{druel_chern_classes}. \begin{proof}[Proof of Proposition~\ref{chern_characters}] We use the notation introduced in Section \ref{section:rat_curves}. By Grothendieck-Riemann-Roch \begin{gather*} \text {ch}\Big({\pi_x}_!\big(\ev_x^*T_X/T_{\pi_x}(-\sigma_x)\big)\Big)= \\ {\pi_x}_*\Big(\text {ch}\big(\ev_x^*T_X/T_{\pi_x}(-\sigma_x)\big)\cdot\text {td}(T_{\pi_x})\Big)\in A(H_x)_{\q}. \end{gather*} Since $f:\p^1\to X$ is free for any $[f]\in H_x$, $R^1{\pi_x}_*\big(\ev_x^*T_X/T_{\pi_x}(-\sigma_x)\big)=0$, and thus $ {\pi_x}_!\big(\ev_x^*T_X/T_{\pi_x}(-\sigma_x)\big)={\pi_x}_*\big(\ev_x^*T_X/T_{\pi_x}(-\sigma_x)\big). $ It follows from \eqref{druel} that $$ \text {ch}_k(H_x)=\text {ch}_k(T_{H_x})={\pi_x}_*\Big(\Big[\text {ch}\big(\ev_x^*T_X/T_{\pi_x}(-\sigma_x)\big)\cdot\text {td}(T_{\pi_x})\Big]_{k+1}\Big). $$ Denote by $W_k$ the codimension $k$ part of the cycle \begin{gather*} \text {ch}\big(\ev_x^*T_X/T_{\pi_x}(-\sigma_x)\big)\cdot\text {td}(T_{\pi_x})= \\ \big(\ev_x^*\text {ch}(T_X)-\text {ch}(T_{\pi_x})\big)\cdot\text {ch}\big(\o_{U_x}(-\sigma_x)\big)\cdot\text {td}(T_{\pi_x}).
\end{gather*} Denote by $Z_k$ the codimension $k$ part of the cycle \begin{equation*} \big(\ev_x^*\text {ch}(T_X)-\text {ch}(T_{\pi_x})\big)\cdot\text {ch}\big(\o_{U_x}(-\sigma_x)\big). \end{equation*} Then $\text {ch}_k(H_x)={\pi_x}_*\big(W_{k+1}\big)$, and $W_{k+1}=\sum_{j=0}^{k+1}Z_{k+1-j}\cdot\big[\text {td}(T_{\pi_x})\big]_j$. We have: $$\ev_x^*\big(\text {ch}(T_X)\big)=\ev_x^*\big(n+c_1(X)+\text {ch}_2(X)+\text {ch}_3(X)+\cdots\big),$$ $$\text {ch}\big(\o_{U_x}(-\sigma_x)\big)=\sum_{k=0}^{\infty}\frac{(-1)^k}{k!}\sigma_x^k,$$ $$\text {ch}(T_{\pi_x})=\sum_{k=0}^{\infty}\frac{1}{k!}c_1(T_{\pi_x})^k,\quad \text {td}(T_{\pi_x})=\sum_{k=0}^{\infty}\frac{(-1)^kB_k}{k!}c_1(T_{\pi_x})^k.$$ From \eqref{euler} and \eqref{L}, $c_1(T_{\pi_x})=2\sigma_x+\pi_x^*c_1(L_x)$. By repeatedly using \ref{describing_Lx}(iii), we have the following identities: \begin{itemize} \item[(iv) ] $\pi_x^*c_1(L_x)^i\cdot\sigma_x^j=(-1)^i\sigma_x^{i+j}$, for any $j\geq1$, \item[(v) ] $c_1(T_{\pi_x})^i\cdot\sigma_x^j=\sigma_x^{i+j}$, for any $j\geq1$, \item[(vi) ] $c_1(T_{\pi_x})^i=\left\{ \begin{aligned} &\pi_x^*c_1(L_x)^i, &\text { if } i \text{ is even}\\ &2\sigma_x^i+\pi_x^*c_1(L_x)^i, &\text { if } i \text{ is odd.} \end{aligned} \right.$ \end{itemize} \begin{claim}\label{Z formula} For any $k\geq1$ we have the following formulas: \begin{equation}\label{Z} Z_k=\ev_x^*\text {ch}_k(X)+\frac{(n+1)(-1)^k}{k!}\sigma_x^k-\frac{1}{k!}\pi_x^*c_1(L_x)^k, \end{equation} \begin{equation}\label{Z times sigma} Z_k\cdot\sigma_x=\frac{(n+1)(-1)^k}{k!}\sigma_x^{k+1}-\frac{1}{k!}\sigma_x\cdot\pi_x^*c_1(L_x)^k, \end{equation} \begin{equation}\label{push forward Z} {\pi_x}_*Z_k={\pi_x}_*\ev_x^*\text {ch}_k(X)-\frac{(n+1)}{k!}c_1(L_x)^{k-1}, \end{equation} \begin{equation}\label{push forward Z times sigma} {\pi_x}_*\big(Z_k\cdot\sigma_x\big)=\frac{n}{k!}c_1(L_x)^k. \end{equation} \end{claim} In addition, $Z_0=n-1$ (formula (\ref{Z}) does not hold for $k=0$). \begin{proof}[Proof of Claim~\ref{Z formula}] For $k\geq1$ we have: $$Z_k=\sum_{j=0}^k\big(\ev_x^*\text {ch}_j(X)-\frac{1}{j!}c_1(T_{\pi_x})^j\big)\cdot\frac{(-1)^{k-j}}{(k-j)!}\sigma_x^{k-j}.$$ By identity \ref{describing_Lx}(ii), $$Z_k=\ev_x^*\text {ch}_k(X)-\sum_{j=0}^k\frac{(-1)^{k-j}}{j!(k-j)!}c_1(T_{\pi_x})^j\cdot\sigma_x^{k-j}.$$ Formula (\ref{Z}) follows now from identities (v), (vi) and $\sum_{j=0}^k\frac{(-1)^{k-j}}{j!(k-j)!}=0$. Formula (\ref{Z times sigma}) follows immediately from (\ref{Z}) and \ref{describing_Lx}(ii). Using the identity (iv) and the projection formula, we have \begin{itemize} \item[(vii) ] ${\pi_x}_*\sigma_x^{k}=(-1)^{k-1}c_1(L_x)^{k-1}$, for any $k\geq1$, and \item[(viii) ] ${\pi_x}_*\pi_x^*(\gamma)=0$ for any class $\gamma\in A(H_x)$. \end{itemize} Formulas (\ref{push forward Z}) and (\ref{push forward Z times sigma}) now follow from (vii) and (viii). \end{proof} For simplicity, we denote by $A_j$ the coefficient of $c_1(M)^j$ in the formula for the Todd class $\text {td}(M)$ of a line bundle $M$, i.e., $A_j=\frac{(-1)^j}{j!}B_j$. Recall that $A_0=1$, $A_1=1/2$, $A_2=1/12$, $A_3=0$, $A_4=-1/720$, etc. We have $W_{k+1}=\sum_{j=0}^{k+1}A_{k+1-j}Z_j\cdot c_1(T_{\pi_x})^{k+1-j}$.
Since $A_l=0$ for all odd $l\geq3$, by identity (vi) the formula for $W_{k+1}$ becomes: $$W_{k+1}=\sum_{j=0}^{k+1}A_{k+1-j}Z_j\cdot \pi_x^*c_1(L_x)^{k+1-j}+Z_k\cdot\sigma_x.$$ By the projection formula, $${\pi_x}_*W_{k+1}=\sum_{j=1}^{k+1}A_{k+1-j}\big({\pi_x}_*Z_j\big)\cdot c_1(L_x)^{k+1-j}+{\pi_x}_*\big(Z_k\cdot\sigma_x\big).$$ Using (\ref{push forward Z}), (\ref{push forward Z times sigma}), we have \begin{gather*} {\pi_x}_*W_{k+1}=\sum_{j=1}^{k+1}A_{k+1-j}{\pi_x}_*\ev_x^*\text {ch}_j(X)\cdot c_1(L_x)^{k+1-j}+\\ -(n+1)\big(\sum_{j=1}^{k+1}\frac{A_{k+1-j}}{j!}\big)c_1(L_x)^k+\frac{n}{k!}c_1(L_x)^k. \end{gather*} It is easy to see that the identity $\sum_{l=0}^m B_l \binom{m+1}{l}=0$ implies the identity \begin{equation*} \sum_{j=1}^{k+1}\frac{A_{k+1-j}}{j!}=\frac{1}{k!} \end{equation*} (use $A_l=0$ for all odd $l\geq3$) and now \eqref{ch_k for H_x} follows. To prove \eqref{c_1 of H_x} and \eqref{ch_2 of H_x} from \eqref{ch_k for H_x}, observe that ${\pi_x}_*\ev_x^*c_1(X)=d+2$. \end{proof} \section{Higher Fano manifolds} \label{section:proofs} We start this section by recalling some results about the index and pseudoindex of a Fano manifold and extremal rays of its Mori cone. We refer to \cite{kollar_mori} for basic definitions and results about the minimal model program. \begin{defn}\label{index} Let $X$ be a Fano manifold. The \emph{index} of $X$ is the largest integer $r_X$ that divides $-K_X$ in $\pic(X)$. The \emph{pseudoindex} of $X$ is the integer $i_{X}=\min\big\{-K_X\cdot C\ \big| \ C\subset X \text{ rational curve }\big\}$. \end{defn} Notice that $1\leq r_X \leq i_X$. Moreover $i_X\leq \dim X+1$, and $i_X=\dim X+1$ if and only if $X\cong \p^n$ (see \ref{Hx}). By \cite{wisniewski_90}, if $\rho(X)>1$, then $i_X\leq \frac{\dim X}{2}+1$. \begin{defn}\label{index&rays} Let $X$ be a smooth complex projective variety. Let $R$ be an extremal ray of the Mori cone $\nec{X}$, and let $f:X\to Y$ be the corresponding contraction. The \emph{exceptional locus} $E(R)$ of $R$ is the closed subset of $X$ where $f$ fails to be a local isomorphism. Given a divisor $L$ on $X$, we set $L\cdot R= \min\big\{L\cdot C\ \big| \ C\subset X \text{ rational curve such that } [C]\in R\big\}$. In particular, the \emph{length} of $R$ is $l(R)=-K_X\cdot R=\min\big\{-K_X\cdot C\ \big| \ C\subset X \text{ rational curve such that } [C]\in R\big\}\geq i_X$. \end{defn} Let $X$ be a Fano manifold, and $R$ an extremal ray of $\nec{X}$. By the theorem on lengths of extremal rays, $l(R)\leq \dim X+1$. By \cite{AO_long_R}, if $\rho(X)>1$, then \begin{equation} \label{long_R} i_X\ +\ l(R)\ \leq \ \dim E(R) \ + \ 2. \end{equation} \begin{lemma}\label{adjunction} Let $Y$ be a $d$-dimensional Fano manifold. Let $L$ be an ample divisor on $Y$ such that $-2K_Y-dL$ is ample. \begin{enumerate} \item Suppose that $(Y,L) \not\cong$ $\big(\p^d,\o(2)\big)$, $\big(\p^1,\o(3)\big)$, and let $R$ be any extremal ray of $\nec{Y}$. Then $L\cdot R=1$. \item Suppose that $\rho(Y)>1$. Then $(Y,L)$ is isomorphic to one of the following: \begin{enumerate} \item $\Big(\p^m\times \p^m, p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)\Big)$, with $d=2m$, \item $\Big(\p^{m+1}\times\p^{m} \ ,\ p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)\Big)$, with $d=2m+1$, \item $\Big(\p_{\p^{m+1}}\Big(\o(2)\oplus \o(1)^{^{\oplus m}}\Big) \ , \ \o_{\p}(1)\Big)$, with $d=2m+1$, \item $\Big(\p^{m}\times Q^{m+1} \ ,\ p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)\Big)$, with $d=2m+1$, or \item $\Big(\p_{\p^{m+1}}\big(T_{\p^{m+1}}\big) \ , \ \o_{\p}(1)\Big)$, with $d=2m+1$.
\end{enumerate} \end{enumerate} \end{lemma} \begin{rem} In (c) above, $Y$ can also be described as the blowup of $\p^{2m+1}$ along a linear subspace of dimension $m-1$. In (e) above, $Y$ can also be described as a smooth divisor of type $(1,1)$ on $\p^{m+1}\times\p^{m+1}$. \end{rem} \begin{proof}[{Proof of Lemma~\ref{adjunction}}] Suppose that $\rho(Y)=1$. Then $\nec{Y}$ consists of a single extremal ray $R$, and there is an ample divisor $L'$ on $Y$ such that $\pic(Y)=\z\cdot [L']$. Let $\lambda$ be the positive integer such that $L\sim \lambda L'$. If $i_Y=d+1$, then $(Y,L')\cong \big(\p^d, \o_{\p^d}(1)\big)$, and $-2K_Y-dL\sim \big(d(2-\lambda) +2\big)L'$. Since this is ample, either $\lambda\leq 2$ or $(d,\lambda)=(1,3)$. If $i_Y\leq d$, then $ 1\leq (-2K_Y-dL)\cdot R = 2i_Y-d\lambda (L'\cdot R)\leq d\big(2-\lambda (L'\cdot R)\big). $ Hence, $\lambda=L'\cdot R=1$. From now on we assume that $\rho(Y)>1$. Then $d>1$ and $i_Y\geq \frac{d+1}{2}$. Moreover, by \cite{wisniewski_90}, $r_Y\leq i_Y\leq \frac{d}{2}+1$. Let $R$ be any extremal ray of $\nec{Y}$. We claim that $L\cdot R=1$. Indeed, if $L\cdot R\geq 2$, then $l(R)=d+1$, contradicting \eqref{long_R}. Suppose that $d=2m$ is even. Then $i_Y= m+1$. Set $A=-K_Y-mL$. By assumption $A$ is ample. For any extremal ray $R\subset \nec{Y}$, \eqref{long_R} implies that $l(R)=m+1$, and thus $A\cdot R= 1=L\cdot R$. Hence, $A\equiv L$, and so $A\sim L$ since $Y$ is Fano. In particular $-K_Y\sim (m+1)L$, and thus $r_Y=m+1$. By \cite[Theorem B]{wisniewski_90}, this implies that $Y\cong \p^{m}\times \p^{m}$. Now suppose that $d=2m+1$ is odd. Then $i_Y= m+1$. Set $A'=-2K_Y-(2m+1)L$. By assumption $A'$ is ample. Let $R$ be an extremal ray of $\nec{Y}$. Then $l(R)\geq m+1$, and $\dim E(R)\leq 2m+1$. By \eqref{long_R}, there are three possibilities: \begin{enumerate} \item[(a)] $l(R)= m+2$, $E(R)=Y$, and equality holds in \eqref{long_R}; \item[(b)] $l(R)= m+1$, $\dim E(R) =2m$, and equality holds in \eqref{long_R}; or \item[(c)] $l(R)= m+1$, $E(R)=Y$, and equality in \eqref{long_R} fails by $1$. \end{enumerate} In \cite{AO_long_R}, Andreatta and Occhetta classify the cases in which equality holds in \eqref{long_R}, assuming $\dim Y-1\leq\dim E(R)\leq \dim Y$. They show that in this case either $Y$ is a product of projective spaces, or a blowup of $\p^{2m+1}$ along a linear subspace of dimension at most $m-1$. From this we see that in case (a) we must have $Y\cong \p^{m+1}\times\p^{m}$, while in case (b) $Y$ must be isomorphic to the blowup of $\p^{2m+1}$ along a linear subspace of dimension $m-1$. From now on we assume that every extremal ray $R$ of $\nec{Y}$ falls into case (c) above, which implies that $A'\cdot R= 1=L\cdot R$. Thus $A'\sim L$, $-K_Y\sim (m+1)L$, and thus $r_Y=m+1$. By \cite{wisniewski_91}, this implies that either $Y\cong \p^{m}\times Q^{m+1}$, or $Y\cong \p_{\p^{m+1}}\big(T_{\p^{m+1}}\big)$. \end{proof} \begin{proof}[{Proof of Theorem~\ref{thm1}}] Let $X$ be a Fano manifold with $\text {ch}_2(X)\geq 0$. Let $(H_x, L_x)$ be a polarized minimal family of rational curves through a general point $x\in X$. Set $d=\dim H_x$. By Proposition~\ref{chern_characters}, $$ c_1(H_x)={\pi_x}_*\ev_x^*\big(\text {ch}_2(X)\big)+\frac{d}{2}c_1(L_x). $$ By Lemma~\ref{Tk_preserves_positivity}, ${\pi_x}_*\ev_x^*$ preserves the properties of being weakly positive and nef. Thus $-K_{H_x}$ is ample and $-2K_{H_x}-dL_x$ is nef. Since $H_x$ is Fano, ${\pi_x}_*\ev_x^*\big(\text {ch}_2(X)\big)$ is ample if and only if it is weakly positive.
Hence, $\text {ch}_2(X)> 0$ implies that $-2K_{H_x}-dL_x$ is ample. If ${\ev_x}_*\pi_x^*\big(\eff_1(H_x)\big)=\eff_2(X)$, and $-2K_{H_x}-dL_x$ is ample (respectively nef), then clearly $\text {ch}_2(X)$ is positive (respectively nef). This proves the first part of the theorem. The second part follows from Lemma~\ref{adjunction}(2). Examples of Fano manifolds $X$ with $\text {ch}_2(X)>0$ realizing each of the exceptional pairs are given in section~\ref{examples}. Finally, suppose that $\text {ch}_2(X)>0$, $\text {ch}_3(X)\geq 0$ and $d\geq 2$. We already know from part (1) that $-2K_{H_x}-dL_x$ is ample. We want to prove that $\text {ch}_2(H_x)>0$ and $\rho(H_x)=1$. For that purpose we may assume that $(H_x, L_x)\not\cong \big(\p^d,\o(2)\big)$. Let $R\subset \eff_1(H_x)$ be an extremal ray. By Lemma~\ref{adjunction}(1), there is a rational curve $\ell\subset H_x$ such that $R=\r_{\geq 0}[\ell]$ and $L_x\cdot \ell=1$. Moreover, $$ {\pi_x}_*\ev_x^*\big(2 \ \text {ch}_2(X)\big)\cdot [\ell]=2\ \text {ch}_2(X)\cdot {\ev_x}_*\pi_x^*[\ell] = \big(c_1(T_X)^2-2c_2(T_X)\big)\cdot {\ev_x}_*\pi_x^*[\ell] $$ is a positive integer, and thus $\geq 1$. Therefore $\eta:={\pi_x}_*\ev_x^*\big(\text {ch}_2(X)\big)-\frac{1}{2}c_1(L_x)\in N^1(H_x)_{\q}$ is nef. We rewrite formula \eqref{ch_2 of H_x} of Proposition~\ref{chern_characters} as $$ \text {ch}_2(H_x)={\pi_x}_*\ev_x^*\big(\text {ch}_3(X)\big)+\frac{1}{2}\eta\cdot c_1(L_x)+\frac{d-1}{12}c_1(L_x)^2. $$ Since ${\pi_x}_*\ev_x^*$ preserves the properties of being nef, we conclude that $\text {ch}_2(H_x)$ is positive. By \ref{non-examples}, none of the exceptional pairs $(H_x, L_x)$ from part (2) satisfy $\text {ch}_2(H_x)>0$. Hence, $\rho(H_x)=1$. \end{proof} \begin{lemma} \label{Hx_covered_by_lines} Let $X$ be a Fano manifold. Let $(H_x, L_x)$ be a polarized minimal family of rational curves through a general point $x\in X$. Set $d=\dim H_x$. \begin{enumerate} \item Suppose that $\text {ch}_2(X)>0$, $d\geq 1$ and $(H_x, L_x)\not\cong$ $\big(\p^d,\o(2)\big)$, $\big(\p^1,\o(3)\big)$. Then any minimal dominating family of rational curves on $H_x$ parametrizes smooth rational curves of $L_x$-degree equal to $1$. \item Suppose that $\text {ch}_2(X)>0$, $\text {ch}_3(X)\geq0$, $d\geq 2$ and $(H_x, L_x)\not\cong$ $\big(\p^d,\o(2)\big)$. Let $(W_h,M_h)$ be a polarized minimal family of rational curves through a general point $h\in H_x$. Suppose that $(W_h,M_h)\not\cong$ $\big(\p^k,\o(2)\big)$, $\big(\p^1,\o(3)\big)$. Then there is an isomorphism $g:\p^2\to S\subset H_x$ mapping a point $p\in \p^2$ to $h\in H_x$, sending lines through $p$ to curves parametrized by $W_h$, and such that $g^*L_x\cong \o_{\p^2}(1)$. \end{enumerate} \end{lemma} \begin{proof} Suppose that $\text {ch}_2(X)>0$, $d\geq 1$ and $(H_x, L_x)\not\cong$ $\big(\p^d,\o(2)\big)$, $\big(\p^1,\o(3)\big)$. Let $W$ be a minimal dominating family of rational curves on $H_x$. We will show that the curves parametrized by $W$ have $L_x$-degree equal to $1$. If $\rho(H_x)>1$, then this can be checked directly from the list in Theorem~\ref{thm1}(2). So we assume $\rho(H_x)=1$. Let $\ell\subset H_x$ be a curve parametrized by $W$. Then, as in \ref{Hx}, $-K_{H_x}\cdot \ell\leq d+1$, and $-K_{H_x}\cdot \ell= d+1$ if and only if $H_x\cong \p^d$. By Theorem~\ref{thm1}(1), $-K_{H_x}\cdot \ell>\frac{d}{2} L_x\cdot \ell$. If $ L_x\cdot \ell >1$, then $(H_x, L_x)\cong$ $\big(\p^d,\o(2)\big)$ or $\big(\p^1,\o(3)\big)$, contradicting our assumptions. We conclude that $L_x\cdot \ell =1$. 
The generically injective morphism $\tau_x: H_x \to \p(T_xX^{^{\vee}})$ defined in \ref{describing_Lx} maps curves parametrized by $W$ to lines. So all curves parametrized by $W$ are smooth. Now suppose we are under the assumptions of the second part of the lemma. By Theorem~\ref{thm1}(3), $H_x$ is a Fano manifold with $\text {ch}_2(H_x)>0$ and $\rho(H_x)=1$. If $d=2$, then $H_x\cong \p^2$, and by our assumptions $L_x\cong \o(1)$. Now suppose $d=3$. Recall that the only Fano threefolds satisfying $\text {ch}_2>0$ are $\p^3$ and the smooth quadric hypersurface $Q^3\subset \p^4$ (\cite{2Fano_3folds}). The polarized minimal family of rational curves through a general point of $Q^3$ is isomorphic to $\big(\p^1,\o(2)\big)$. So our assumptions imply that $(H_x, L_x)\cong \big(\p^3,\o(1)\big)$, and the conclusion of the lemma is clear. Finally assume that $d\geq 4$. By the first part of the lemma, $L_x\cdot \ell =1$ for any curve $\ell$ parametrized by $W_h$, and $W_h^{\text{Sing},h}=\emptyset$. Theorem~\ref{thm1}(1) implies that $i_{H_x}>\frac{d}{2}\geq 2$, and thus $\dim W_h\geq i_{H_x}-2\geq 1$. By the first part of the lemma, now applied to the variety $H_x$, $W_h$ is covered by smooth rational curves of $M_h$-degree equal to $1$. By Lemma~\ref{f:P^k->X}, applied to the variety $H_x$, there is a generically injective morphism $g:(\p^2,p)\to (H_x, h)$ mapping lines through $p$ to curves on $H_x$ parametrized by $W_h$. Since these curves have $L_x$-degree equal to $1$, they are mapped to lines by the generically injective morphism $\tau_x:H_x \to \p(T_xX^{^{\vee}})$. We conclude that the composition $\tau_x\circ g:\p^2 \to \p(T_xX^{^{\vee}})$ is an isomorphism onto its image and $(\tau_x\circ g)^*\o_{\p}(1) \cong \o_{\p^2}(1)$. This proves the second part of the lemma. \end{proof} \begin{proof}[{Proof of Theorem~\ref{thm3}}] Let the notation and assumptions be as in Theorem~\ref{thm3}. Part (1) was proved in \cite{dJ-S:2fanos_2}. It also follows from Theorem~\ref{thm1}(1): the conditions $\text {ch}_2(X)\geq 0$ and $d\geq 1$ imply that $H_x$ is a positive dimensional Fano manifold, and thus covered by rational curves. For any rational curve $\ell\subset H_x$, $S=\ev_x\big(\pi_x^{-1}(\ell)\big)$ is a rational surface on $X$ through $x$. Notice that $S$ is covered by rational curves parametrized by $H_x$. For part (2), assume $\text {ch}_2(X)> 0$ and $(H_x, L_x)\not\cong$ $\big(\p^d,\o(2)\big)$, $\big(\p^1,\o(3)\big)$. If $d=1$, then $(H_x, L_x)\cong \big(\p^1,\o(1)\big)$. In this case the morphism $\tau_x: H_x \to \p(T_xX^{^{\vee}})$ is an isomorphism onto its image, and thus $H_x^{\text{Sing},x}=\emptyset$ by \cite[Corollary 2.8]{artigo_tese}. If $d>1$, let $W$ be a minimal dominating family of rational curves on $H_x$. By Lemma~\ref{Hx_covered_by_lines}(1), the curves parametrized by $W$ are smooth and have $L_x$-degree equal to $1$. Moreover, since $H_x^{\text{Sing},x}$ is at most finite, a general curve parametrized by $W$ is contained in $H_x\setminus H_x^{\text{Sing},x}$ by \cite[II.3.7]{kollar}. Part (2) now follows from Lemma~\ref{f:P^k->X}. From now on assume that $\text {ch}_2(X)> 0$, $\text {ch}_3(X)\geq 0$ and $d\geq 2$. Then $H_x$ is a Fano manifold with $\rho(H_x)=1$ and $\text {ch}_2(H_x)>0$ by Theorem~\ref{thm1}(3). If $d=2$, then $H_x\cong\p^2$ and $U_x=\p(E_x)$ is a rational $3$-fold. Hence, $\ev_x(U_x)$ is a rational $3$-fold through $x$ except possibly if $\ev_x$ fails to be birational onto its image. 
This can only occur if ${\mathcal {C}}_x$ is singular by \cite[Corollary 2.8]{artigo_tese}, in which case $L_x\cong \o(2)$. Now suppose $d\geq 3$. We claim that there is a rational surface $S\subset H_x\setminus H_x^{\text{Sing},x}$. From this it follows that $\ev_x\big|_{\pi_x^{-1}(S)}$ is generically injective and $\ev_x\big(\pi_x^{-1}(S)\big)$ is a rational $3$-fold through $x$. If $d=3$, then $H_x\cong \p^3$ or $Q^3\subset \p^4$, and we can find a rational surface $S\subset H_x\setminus H_x^{\text{Sing},x}$. Now assume $d\geq 4$, and let $W_h$ be a minimal family of rational curves through a general point $h\in H_x$. Then $W_h$ is a Fano manifold and $\dim W_h\geq i_{H_x}-2\geq 1$. As in the proof of part (1), now applied to $H_x$, each rational curve on $W_h$ yields a rational surface $S$ on $H_x$ through $h$. Recall that $H_x^{\text{Sing},x}$ is at most finite. If $S \cap H_x^{\text{Sing},x} \neq \emptyset$, then there is a rational curve on $H_x$ parametrized by $W_h$ meeting $H_x^{\text{Sing},x}$. If this holds for a general point $h\in H_x$, then there is a point $h_0\in H_x^{\text{Sing},x}$ that can be connected to a general point of $H_x$ by a curve parametrized by a suitable minimal dominating family of rational curves on $H_x$. But this implies that a curve from this minimal dominating family has $-K_{H_x}$-degree equal to $d+1$, and so we must have $H_x\cong \p^d$ (see \ref{Hx}). Since $d>2$, we can find a rational surface $S'\subset H_x\setminus H_x^{\text{Sing},x}$. This proves part (3). Finally, suppose we are under the assumptions of part (4). By Lemma~\ref{Hx_covered_by_lines}(2), $H_x$ is covered by surfaces $S$ such that $(S, L_x|_S)\cong (\p^2,\o_{\p^2}(1))$. Exactly as in the proof of part (3) above, we can take such a surface $S\subset H_x\setminus H_x^{\text{Sing},x}$. Part (4) now follows from Lemma~\ref{f:P^k->X}. \end{proof} \section{Examples}\label{examples} In this section we discuss examples of Fano manifolds $X$ with $\text {ch}_2(X)\geq 0$. Theorem~\ref{thm1} provides a new way of checking positivity of $\text {ch}_2(X)$, enabling us to find new examples. Examples \ref{C.I.} and \ref{G} below appear in \cite{dJ-S:2fanos_1}. Example \ref{H_in_G} does not appear explicitly in \cite{dJ-S:2fanos_1}, but it can be inferred from \cite[Theorem 1.1(3)]{dJ-S:2fanos_1}. Examples \ref{OG}, \ref{SG}, \ref{degenerate SG} and \ref{G2/P} are new. \begin{say}[Complete Intersections]\label{C.I.} Let $X$ be a complete intersection of type $(d_1,\ldots, d_c)$ in $\p^n$. Standard Chern class computations show that $\text {ch}_k(X)>0$ (respectively $\geq0$) if and only if $\sum d_i^k\leq n$ (respectively $\leq n+1$). See for instance \cite[2.1 and 2.4]{dJ-S:2fanos_1}. Let $x\in X$ be a general point, and let $H_x$ be the variety of lines through $x$ on $X$. Then $H_x$ is a complete intersection of type $(1,2,\ldots, d_1,\ldots,1,2,\ldots, d_c)$ in $\p^{n-1}$, and $L_x\cong \o(1)$. The condition from Theorem~\ref{thm1}(1) of $-2K_{H_x}-dL_x$ being ample (respectively nef) is clearly equivalent to $\sum d_i^2\leq n$ (respectively $\leq n+1$). \end{say} \begin{say}[Grassmannians] \label{G} Let $X=G(k,n)$ be the Grassmannian of $k$-dimensional linear subspaces of an $n$-dimensional vector space $V$, with $2\leq k\leq\frac{n}{2}$. As computed in \cite[2.2]{dJ-S:2fanos_1}, the second Chern character of $X$ is given by $$ \text {ch}_2(X)=\frac{n+2-2k}{2}\sigma_2-\frac{n-2-2k}{2}\sigma_{1,1}, $$ where $\sigma_2$ and $\sigma_{1,1}$ are the usual Schubert cycles of codimension $2$.
Recall that $\eff_{2}(X)$ is generated by the dual Schubert cycles $\sigma_{1,1}^{*}$ and $\sigma_2^{*}$. Thus $\text {ch}_2(X)>0$ (respectively $\geq0$) if and only if $2k\leq n \leq2k+1$ (respectively $2k\leq n \leq2k+2$). Given $x\in X$, let $H_x$ be the variety of lines through $x$ on $X$ under the Pl\"ucker embedding. As explained in \cite[1.4.4]{hwang}, $(H_x,L_x)\cong \big(\p^{k-1}\times\p^{n-k-1}, p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)\big)$. Indeed, if $x$ parametrizes a linear subspace $[W]$, then a line through $x$ corresponds to subspaces $U$ and $U'$ of $V$, of dimension $k-1$ and $k+1$, such that $U\subset W\subset U'$. So there is a natural identification $H_x\cong\p(W)\times\p(V/W)^*$. The condition of $-2K_{H_x}-dL_x$ being ample (respectively nef) is clearly equivalent to $2k\leq n\leq 2k+1$ (respectively $2k\leq n\leq 2k+2$). Notice also that the map $T_1: \eff_1(H_x) \to\eff_{2}(X)$ sends lines on fibers of $p_1$ and $p_2$ to the dual Schubert cycles $\sigma_2^{*}$ and $\sigma_{1,1}^{*}$. In particular it is surjective. Exceptional pairs (a), (b) in Theorem~\ref{thm1}(2) occur in this case. \end{say} \begin{say}[Hyperplane sections of Grassmannians] \label{H_in_G} Let $X$ be a general hyperplane section of the Grassmannian $G(k,n)$ under the Pl\"ucker embedding, where $2\leq k\leq\frac{n}{2}$. Let $x\in X$ be a general point, and $H_x$ the variety of lines through $x$ on $X$. Then $H_x$ is a smooth divisor of type $(1,1)$ in $\p^{k-1}\times\p^{n-k-1}$ and $L_x$ is the restriction to $H_x$ of $p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)$. Thus $-2K_{H_x}-dL_x$ is ample (respectively nef) if and only if $n=2k$ (respectively $2k\leq n\leq 2k+1$). In these cases $T_1: \eff_1(H_x) \to\eff_{2}(X)$ is surjective, and thus Theorem~\ref{thm1}(1) applies. We conclude that $\text {ch}_2(X)>0$ (respectively $\geq0$) if and only if $n=2k$ (respectively $2k\leq n\leq 2k+1$). This example occurs as the exceptional case (e) in Theorem~\ref{thm1}(2). \end{say} \begin{say}[Orthogonal Grassmannians]\label{OG} We fix $Q$ a nondegenerate symmetric bilinear form on the $n$-dimensional vector space $V$, and $k$ an integer satisfying $2\leq k<\frac{n}{2}-1$. Let $X=OG(k,n)$ be the subvariety of the Grassmannian $G(k,n)$ parametrizing linear subspaces that are isotropic with respect to $Q$. Then $X$ is a Fano manifold of dimension $\frac{k(2n-3k-1)}{2}$ and $\rho(X)=1$. Notice that $X$ is the zero locus in $G(k,n)$ of a global section of the vector bundle $\sym^2({\mathcal {S}}^*)$, where ${\mathcal {S}}^*$ is the universal quotient bundle on $G(k,n)$. Using this description and the formula for $\text {ch}_2\big(G(k,n)\big)$ described in \ref{G}, standard Chern class computations show that $$ \text {ch}_2(X)=\frac{n-1-3k}{2}\sigma_2-\frac{n-3-3k}{2}\sigma_{1,1}, $$ where we denote by the same symbols $\sigma_2$ and $\sigma_{1,1}$ the restriction to $X$ of the corresponding Schubert cycles on $G(k,n)$. Given $x\in X$, let $H_x$ be the variety of lines through $x$ on $X$ under the Pl\"ucker embedding. We claim that $(H_x,L_x)\cong \big(\p^{k-1}\times Q^{n-2k-2}, p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)\big)$. Indeed, if $x$ parametrizes a linear subspace $[W]$, then a line through $x$ on $X$ corresponds to a pair $(U,U')\in\p(W)\times\p(V/W)^*$ such that $U'\subset U^{\perp}$ and $Q(v,v)=0$ for any $v\in U'$. This is equivalent to the condition that $U'\subset W^{\perp}$ and $Q(v,v)=0$ for any $v\in U'$. 
The form $Q$ induces a nondegenerate quadratic form on $W^{\perp}/W$, which defines a smooth quadric $Q^{n-2k-2}$ in $\p(W^{\perp}/W)^*\cong \p^{n-2k-1}$. The condition then becomes $U'\subset W^{\perp}$ and $[U'/W]\in Q^{n-2k-2}$, proving the claim. Thus $-2K_{H_x}-dL_x$ is ample (respectively nef) if and only if $n=3k+2$ (respectively $3k+1\leq n\leq 3k+3$). In these cases there are lines on fibers of $p_1$ and $p_2$ contained in $H_x\subset \p^{k-1}\times\p^{n-k-1}$, and thus the composite map $\eff_1(H_x) \to\eff_{2}(X)\DOTSB\lhook\joinrel\rightarrow \eff_{2}\big(G(k,n)\big)$ is surjective. Thus $T_1: \eff_1(H_x) \to\eff_{2}(X)$ is surjective, and Theorem~\ref{thm1}(1) applies. We conclude that $\text {ch}_2(X)>0$ (respectively $\geq0$) if and only if $n=3k+2$ (respectively $3k+1\leq n\leq 3k+3$). The exceptional pair (d) in Theorem~\ref{thm1}(2) occurs in this case. \end{say} \begin{say}[Symplectic Grassmannians] \label{SG} We fix $\omega$ a non-degenerate antisymmetric bilinear form on the $n$-dimensional vector space $V$, $n$ even, and $k$ an integer satisfying $2\leq k\leq\frac{n}{2}$. Let $X=SG(k,n)$ be the subvariety of the Grassmannian $G(k,n)$ parametrizing linear subspaces that are isotropic with respect to $\omega$. Then $X$ is a Fano manifold of dimension $\frac{k(2n-3k+1)}{2}$ and $\rho(X)=1$. Notice that $X$ is the zero locus in $G(k,n)$ of a global section of the vector bundle $\wedge^2({\mathcal {S}}^*)$, where ${\mathcal {S}}^*$ is the universal quotient bundle on $G(k,n)$. Using this description and the formula for $\text {ch}_2\big(G(k,n)\big)$ described in \ref{G}, standard Chern class computations show that $$ \text {ch}_2(X)=\frac{n+3-3k}{2}\sigma_2-\frac{n+1-3k}{2}\sigma_{1,1}, $$ where we denote by the same symbols $\sigma_2$ and $\sigma_{1,1}$ the restriction to $X$ of the corresponding Schubert cycles on $G(k,n)$. Given $x\in X$, let $H_x\subset \p^{k-1}\times\p^{n-k-1}$ be the variety of lines through $x$ on $X$ under the Pl\"ucker embedding. By \cite[1.4.7]{hwang}, $(H_x,L_x)\cong \big( \p_{\p^{k-1}}(\o(2)\oplus\o(1)^{n-2k}), \o_{\p}(1) \big)$. When $n=2k$ this becomes $(H_x,L_x)\cong \big(\p^{k-1},\o(2)\big)$. When $n>2k$, $H_x$ can also be described as the blow-up of $\p^{n-k-1}$ along a linear subspace $\p^{n-2k-1}$, and $L_x$ as $2H-E$, where $H$ is the hyperplane class in $\p^{n-k-1}$ and $E$ is the exceptional divisor. Thus $-2K_{H_x}-dL_x$ is ample (respectively nef) if and only if $n=2k$ or $n=3k-2$ (respectively $n=2k$ or $3k-3\leq n\leq 3k-1$). In these cases $T_1: \eff_1(H_x) \to\eff_{2}(X)$ is surjective. Indeed, if $n=2k$, then $b_4(X)=1$. If $3k-3\leq n\leq 3k-1$, then there are lines on fibers of $p_1$ and $p_2$ contained in $H_x\subset \p^{k-1}\times\p^{n-k-1}$, and thus the composite map $\eff_1(H_x) \to\eff_{2}(X)\DOTSB\lhook\joinrel\rightarrow \eff_{2}\big(G(k,n)\big)$ is surjective. So Theorem~\ref{thm1}(1) applies, and we conclude that $\text {ch}_2(X)>0$ (respectively $\geq0$) if and only if $n=2k$ or $n=3k-2$ (respectively $n=2k$ or $3k-3\leq n\leq 3k-1$). When $m$ is even, the exceptional pair (c) in Theorem~\ref{thm1}(2) occurs for $X=SG(m+2,3m+4)$. The exceptional pair $(H_x,L_x)\cong (\p^d, \o(2))$ in Theorem~\ref{thm3}(2) occurs for $X=SG(d+1,2d+2)$. \end{say} \begin{say}[A two-orbit variety]\label{degenerate SG} We fix $\omega$ an antisymmetric bilinear form of maximum rank $n-1$ on the $n$-dimensional vector space $V$, $n$ odd, and $k$ an integer satisfying $2\leq k<\frac{n}{2}$.
Let $X$ be the subvariety of the Grassmannian $G(k,n)$ parametrizing linear subspaces that are isotropic with respect to $\omega$. Then $X$ is a Fano manifold of dimension $\frac{k(2n-3k+1)}{2}$ and $\rho(X)=1$. Note that $X$ is not homogeneous. The same argument presented in \ref{SG} above, taking $x\in X$ a general point, shows that $\text {ch}_2(X)>0$ (respectively $\text {ch}_2(X)\geq0$) if and only if $n=3k-2$ (respectively $3k-3\leq n\leq 3k-1$). When $m$ is odd, the exceptional pair (c) in Theorem~\ref{thm1}(2) occurs for such $X$, with $k=m+2$ and $n=3m+4$. \end{say} \begin{say}[The $5$-dimensional homogeneous space $G_2/P$] \label{G2/P} Let $X$ be the $5$-dimensional homogeneous space $G_2/P$. Then $X$ is a Fano manifold with $\rho(X)=1$, and $(H_x,L_x)\cong\big(\p^1,\o(3)\big)$, as explained in \cite[1.4.6]{hwang}. Since $b_4(X)=1$, the map $T_1: \eff_1(H_x) \to\eff_{2}(X)$ is surjective, and thus Theorem~\ref{thm1}(1) applies. We conclude that $\text {ch}_2(X)>0$. The exceptional pair $(H_x,L_x)\cong (\p^1, \o(3))$ in Theorem~\ref{thm3}(2) occurs in this case. \end{say} \begin{say}[Non-Examples]\label{non-examples} By \cite[Theorem 1.2]{dJ-S:2fanos_1}, the following smooth projective varieties do not satisfy $\text {ch}_2(X)>0$. \begin{itemize} \item Products $X\times Y$, with $\dim X, \dim Y>0$. \item Projective space bundles $\p(E)$, with $\dim X>0$ and $\rank E \geq2$. \item Blowups of $\p^n$ along smooth centers of codimension $2$. \end{itemize} By Theorem \ref{thm2}(1), if $X$ is a Fano manifold and $H_x$ is not Fano, then $\text {ch}_2(X)$ is not nef. This is the case, for instance, when $X$ is the moduli space of rank $2$ vector bundles with fixed determinant of odd degree on a smooth curve $C$ of genus $\geq2$. In this case $H_x$ is the family of Hecke curves through $x=[E]\in X$, which are conics with respect to the ample generator of $\pic(X)$. As explained in \cite[1.4.8]{hwang}, $H_x\cong\p_C(E)$, which is not Fano. \end{say} \end{document}
\begin{document} \title{Microfabricated Ion Traps} \author{Marcus D. Hughes\footnote{Corresponding author. Email: [email protected]}} \author{Bjoern Lekitsch} \author{Jiddu A. Broersma} \author{Winfried K. Hensinger} \affiliation{Department of Physics and Astronomy, University of Sussex, Brighton, UK\\ BN1 9QH} \begin{abstract} Ion traps offer the opportunity to study fundamental quantum systems with high level of accuracy highly decoupled from the environment. Individual atomic ions can be controlled and manipulated with electric fields, cooled to the ground state of motion with laser cooling and coherently manipulated using optical and microwave radiation. Microfabricated ion traps hold the advantage of allowing for smaller trap dimensions and better scalability towards large ion trap arrays also making them a vital ingredient for next generation quantum technologies. Here we provide an introduction into the principles and operation of microfabricated ion traps. We show an overview of material and electrical considerations which are vital for the design of such trap structures. We provide guidance in how to choose the appropriate fabrication design, consider different methods for the fabrication of microfabricated ion traps and discuss previously realized structures. We also discuss the phenomenon of anomalous heating of ions within ion traps, which becomes an important factor in the miniaturization of ion traps.\\ \textbf{Keywords:} Ion traps, microfabrication, quantum information processing, anomalous heating, laser cooling and trapping. \end{abstract} \maketitle \section{Introduction} Ion trapping was developed by Wolfgang Paul \cite{Paul} and Hans Dehmelt \cite{Dehmelt} in the 1950's and 60's and ion traps became an important tool to study important physical systems such as ion cavity QED \cite{Keller,Drewsen}, quantum simulators \cite{Pons,Friedenauer,Johanning,Clark,Kim}, determine frequency standards \cite{Udem,Webster,Tamm2,Chwalla}, as well as the development towards a quantum information processor \cite{Ciracgate,Wineland,Haeffner}. In general, ion traps compare well to other physical systems with good isolation from the environment and long coherence times. Progress in many of the research areas where ion traps are being used may be aided by the availability of a new generation of ion traps with the ion - electrode distance on the order of tens of micrometers. While in some cases, the availability of micrometer scale ion - electrode distance and a particular electrode shape may be of sole importance, often the availability of versatile and scalable fabrication methods (such as micro-electromechanical systems (MEMS) and other microfabrication technologies) may be required in a particular field. \\ One example of a field which will see step-changing innovation due to the emergence of microfabricated ion traps is the general area of quantum technology with trapped ions. In 1995 David DiVincenzo set out criteria which determine how well a system can be used for quantum computing \cite{DiVincenzo}. Most of these criteria have been demonstrated with an ion trap: Qubit initialization \cite{Leibfriedstate,Leibfriedstate2}, a set of universal quantum gates creating entanglement between ions \cite{Leibfriedphasequbit,Ciracgate,SorensenMolmer,Benhelm}, long coherence times \cite{Lucas}, detection of states \cite{Myersonreadout,acton} and a scalable architecture to host a large number of qubits \cite{Stick,Seidelin,Hensinger,Blakestad}. 
An important research area is the development of a scalable architecture which can incorporate all of the DiVincenzo criteria. A realistic architecture has been proposed consisting of an ion trap array incorporating storage and gate regions \cite{CiracandZoller,Kielpinski,Steane} and could be implemented using microfabricated ion traps. Microfabricated ion traps hold the possibility of small trap dimensions on the order of tens of micrometers and more importantly fabrication methods like photolithography that allow the fabrication of very large scale arrays. Electrodes with precise shape, size and geometry can be created through a number of process steps when fabricating the trap. \\ In this article we will focus on the design and fabrication of microfabricated radio frequency (rf) ion traps as a promising tool for many applications in ion trapping. Another ion trap type is the Penning trap \cite{Thompson} and advances in their fabrication have been discussed by Castrej\'{o}n-Pita et al. \cite{Castre} and will not be discussed in this article. Radio-frequency ion traps include multi-layer designs where the ion is trapped between electrodes located in two or more planes with the electrodes symmetrically surrounding the ion, for example as the ion trap reported by Stick et al. \cite{Stick}. We will refer to such traps as symmetric ion traps. Geometries where all the electrodes lie in a single plane and the ion is trapped above that plane, for example as the trap fabricated by Seidelin et al. \cite{Seidelin}, will be referred to as asymmetric or surface traps.\\ There have been many articles discussing ion traps and related physics, including studies of fundamental physics \cite{Paul2,Ghosh,Horvath}, spectroscopy \cite{Thompson2}, and coherent control \cite{Wineland}. This article focuses on the current progress and techniques used for the realisation of microfabricated ion traps. First we discuss basic principles of ion traps and their operation in section \ref{iontraps}. Section \ref{linear} discusses linear ion trap geometries as a foundation of most ion trap arrays. The methodology of efficient simulation of electric fields within ion trap arrays is discussed in Section \ref{simu}. Section \ref{ElecChara} discusses some material characteristics that have to be considered when designing microfabricated ion traps including electric breakdown and rf dissipation. Section \ref{FabProcess} provides a guide to realizing such structures with the different processes outlined together with the capabilities and limitations of each one. Finally in Section \ref{heating} we discuss motional heating of the ion due to fluctuating voltage patches on surfaces and its implications for the design and fabrication of microfabricated ion traps. \section{Radio frequency ion traps}\label{iontraps} Static electric fields alone cannot confine charged particles, this is a consequence of Earnshaw's theorem \cite{G99} which is analogous to Maxwell's equation $\nabla \cdot E=0$. To overcome this Penning traps use a combination of static electric and magnetic fields to confine the ion \cite{Penning,Thompson}. Radio frequency (rf) Paul traps use a combination of static and oscillating electric fields to achieve confinement. We begin with an introduction into the operation of radio frequency ion traps, highlighting important factors when considering the design of microfabricated ion traps. 
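A compact way to state this restriction: for a static potential $\phi$ in the charge-free trapping region, $\nabla \cdot E=0$ with $E=-\nabla\phi$ gives Laplace's equation
\begin{equation*}
\nabla^2\phi=\frac{\partial^2\phi}{\partial x^2}+\frac{\partial^2\phi}{\partial y^2}+\frac{\partial^2\phi}{\partial z^2}=0,
\end{equation*}
so the three curvatures of $\phi$ must sum to zero and cannot all be confining at the same time. At least one direction is always anti-confining, which is why a static field alone cannot trap a charged particle and an oscillating field is required.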
\subsection{Ion trap dynamics} First we consider a quadrupole potential within the radial directions, $x$ and $y$-axes, which is created from hyperbolic electrodes as shown in Fig. \ref{linhyp}, whilst there is no confinement within the axial ($z$) direction. By considering a static voltage $V_0$ applied to two opposite electrodes, the resultant electric potential will produce a saddle as depicted in Fig. \ref{oscillating potential} (a). With the ion present within this potential, the ion will feel an inward force in one direction and outward force in the direction perpendicular to the first. Reversing the polarity of the applied voltage the saddle potential will undergo an inversion as shown in Fig. \ref{oscillating potential} (b). As the force acting on the ion is proportional to the gradient of the potential, the magnitude of the force is less when the ion is closer to the centre. The initial inward force will move the ion towards the centre where the resultant outward force half a cycle later will be smaller. Over one oscillation the ion experiences a greater force towards the centre of the trap than outwards resulting in confinement. The effective potential the ion sees when in an oscillating electric field is shown in Fig. \ref{oscillating potential} (c). If the frequency of the oscillating voltage is too small then the ion will not be confined long enough in one direction. For the case where a high frequency is chosen then the effective difference between the inward and outward forces decreases and the resultant potential is minimal. By selecting the appropriate frequency $\Omega_T$ of this oscillating voltage together with the amplitude $V_0$ which is dependent on the mass of the charged particle, confinement of the particle can be achieved within the radial directions.\\ \begin{figure} \caption{Hyperbolic electrodes within the $x$ and $y$-axes where an rf voltage of $V_0\cos{(\Omega_Tt)} \label{linhyp} \end{figure} \begin{figure} \caption{Principles of confinement with a pseudopotential. (a) A saddle potential created by a static electric field from a hyperbolic electrode geometry. (b) The saddle potential acquiring an inversion from a change in polarity. (c) The effective potential the ion sees resulting from the oscillating electric potential.} \label{oscillating potential} \end{figure} There are two ways to calculate the dynamics of an ion within a Paul trap, firstly a comprehensive treatment can be given using the Mathieu equation. The Mathieu equation provides a complete solution for the dynamics of the ion. It also allows for the determination of parameter regions of stability where the ion can be trapped. These regions of stability are determined by trap parameters such as voltage amplitude and rf drive frequency. Here we outline the process involved in solving the equation of motion via the Mathieu equation approach. By applying an oscillating potential together with a static potential, the total potential for the geometry in Fig. \ref{linhyp} can be expressed as \cite{Ghosh} \begin{equation} \phi(x,y,t)=(U_0-V_0\cos{(\Omega_Tt)})\left(\frac{x^2-y^2}{2r_0^2}\right) \end{equation} where $r_0$ is defined as the ion-electrode distance. This is from the centre of the trap to the nearest electrode and $\Omega_T$ is the drive frequency of the applied time varying voltage. 
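The sign flip of this saddle potential over an rf cycle, sketched in Fig. \ref{oscillating potential}, can also be illustrated numerically. The short sketch below simply evaluates $\phi(x,y,t)$ at two phases half an rf period apart; the parameter values are arbitrary illustrative assumptions rather than the parameters of any particular trap.
\begin{verbatim}
import numpy as np

# Illustrative values only (assumed, not from a specific trap): r0 = 500 um,
# V0 = 100 V, U0 = 0 V, drive frequency Omega_T = 2*pi x 20 MHz.
r0 = 500e-6                  # ion - electrode distance (m)
V0, U0 = 100.0, 0.0          # rf amplitude and static offset (V)
Omega_T = 2 * np.pi * 20e6   # rf drive frequency (rad/s)

def phi(x, y, t):
    """Quadrupole potential of the hyperbolic electrodes, phi(x, y, t)."""
    return (U0 - V0 * np.cos(Omega_T * t)) * (x**2 - y**2) / (2 * r0**2)

# Potential 50 um from the centre along x and along y, at t = 0 and half an
# rf period later: the confining and anti-confining directions swap sign.
for t in (0.0, np.pi / Omega_T):
    print(f"t = {t:.2e} s:  phi along x = {phi(50e-6, 0, t):+.2f} V,"
          f"  phi along y = {phi(0, 50e-6, t):+.2f} V")
\end{verbatim}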
The equations of motion of the ion due to the above potential are then given by \cite{Ghosh} \begin{equation}\label{eomx} \frac{d^2x}{dt^2}=-\frac{e}{m}\frac{\partial\phi(x,y,t)}{\partial x}=-\frac{e}{m r_0^2}(U_0-V_0\cos{\Omega_Tt})x\\ \end{equation} \begin{equation}\label{eomy} \frac{d^2y}{dt^2}=-\frac{e}{m}\frac{\partial\phi(x,y,t)}{\partial y}=\frac{e}{m r_0^2}(U_0-V_0\cos{\Omega_Tt})y\\ \end{equation} \begin{equation} \frac{d^2z}{dt^2}=0 \end{equation} Later it will be shown that the confinement in the $z$-axis will be produced from the addition of a static potential. Making the following substitutions \begin{equation*} a_x=-a_y=\frac{4eU_0}{mr_0^2\Omega_T^2}, \qquad q_x=-q_y=\frac{2eV_0}{mr_0^2\Omega_T^2}, \qquad \zeta=\Omega_Tt/2 \end{equation*} equations \ref{eomx} and \ref{eomy} can be written in the form of the Mathieu equation. \begin{equation}\label{Mathieu} \frac{d^2i}{d\zeta^2}+(a_i-2q_i\cos{2\zeta})i=0, \qquad i=[x,y] \end{equation} The general Mathieu equation given by equation \ref{Mathieu} is periodic due to the $2q_i\cos{2\zeta}$ term. The Floquet theorem \cite{Abramowitz} can be used as a method for obtaining a solution. Stability regions for certain values of the $a$ and $q$ parameters exist in which the ion motion is stable. By considering the overlap of both the stability regions for the $x$ and $y$-axes of the trap \cite{Ghosh,Horvath}, the parameter region where stable trapping can be accomplished is obtained. For the case when $a=0$ and $q\ll1$, the motion of the ion in the $x$-axis can be described as follows, \begin{equation} x(t) = x_0\cos(\omega_xt)\left[1+\frac{q_x}{2}\cos(\Omega_Tt)\right] \end{equation} with the equation of motion in the $y$-axis of the same form. The motion of the ion is composed of secular motion at frequency $\omega_x$ (high amplitude, slow frequency) and micromotion at the drive frequency $\Omega_T$ (small amplitude, high frequency).\\ The second method to calculate the motion is the pseudopotential approximation \cite{Dehmelt}. This considers the time averaged force experienced by the ion in an inhomogeneous field. With an rf voltage of $V_0\cos{\Omega_Tt}$ applied to the trap, a solution for the pseudopotential approximation is given by \cite{Dehmelt} \begin{equation}\label{pseudo} \psi(x,y,z)=\frac{e^{2}}{4m\Omega_{T}^{2}}|\nabla V(x,y,z)|^2 \end{equation} where $m$ is the mass of the ion and $\nabla V(x,y,z)$ is the gradient of the potential. The motion of the ion in an rf potential can be described just by the secular motion in the limit where $q_i/2\equiv\sqrt{2}\omega_i/\Omega_T\ll1$. The secular frequency of the ion is then given by \cite{Madsen} \begin{equation} \omega_{i}^2(x,y,z)=\frac{e^2}{4m^{2}\Omega_{T}^{2}}\frac{\partial^2}{\partial i^2}(|\nabla V(x,y,z)|^2) \end{equation} The pseudopotential approximation provides a means to treat the rf potential in terms of electrostatics only, leading to simpler analysis of electrode geometries.\\ Micromotion can be divided into intrinsic and extrinsic micromotion. Intrinsic micromotion refers to the driven motion of the ion when displaced from the rf nil position due to the secular oscillation within the trap. Extrinsic micromotion describes an offset of the ion's position from the rf nil caused by stray electric fields; this can be due to imperfections in the symmetry of the construction of the trap electrodes or the build-up of charge on dielectric surfaces.
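As a cross-check of the approximate solution above, equation \ref{eomx} can be integrated numerically. The sketch below (which assumes SciPy is available and uses purely illustrative trap parameters, not values from a specific device) reproduces the expected structure of slow secular motion with fast micromotion superimposed.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Assumed illustrative parameters: a 40Ca+ ion, r0 = 500 um, V0 = 200 V,
# U0 = 0 and Omega_T = 2*pi x 20 MHz.
e, m = 1.602e-19, 40 * 1.661e-27
r0, V0, U0 = 500e-6, 200.0, 0.0
Omega_T = 2 * np.pi * 20e6

q_x = 2 * e * V0 / (m * r0**2 * Omega_T**2)
omega_x = q_x * Omega_T / (2 * np.sqrt(2))   # secular frequency for a = 0, q << 1
print(f"q_x = {q_x:.2f}, predicted secular frequency = "
      f"{omega_x/(2*np.pi)/1e6:.2f} MHz")

# d^2x/dt^2 = -(e/(m r0^2)) (U0 - V0 cos(Omega_T t)) x as a first-order system
def eom(t, y):
    x, v = y
    return [v, -(e / (m * r0**2)) * (U0 - V0 * np.cos(Omega_T * t)) * x]

t_end = 10 * 2 * np.pi / omega_x             # ten secular periods
sol = solve_ivp(eom, (0, t_end), [10e-6, 0.0],
                max_step=2 * np.pi / Omega_T / 40)

# sol.y[0] oscillates slowly at omega_x with a small ripple at Omega_T on top,
# i.e. secular motion plus micromotion, matching
# x(t) ~ x0 cos(omega_x t)[1 + (q_x/2) cos(Omega_T t)].
\end{verbatim}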
Micromotion can cause problems through the widening of atomic transition linewidths, second-order Doppler shifts and reduced lifetimes without cooling \cite{Berkeland}. It is therefore important when designing ion traps that compensation of stray electric fields can occur in all directions of motion. Another important factor is the occurrence of a possible phase difference $\varphi$ between the rf voltages on different rf electrodes within the ion trap. This will result in micromotion that cannot be compensated for. A phase difference of $\varphi=1^{\circ}$ can lead to an increase of 0.41 K in the equivalent temperature associated with the kinetic energy of the excess micromotion \cite{Berkeland}, well above the Doppler limit of a few millikelvin. \\ The trap depth of an ion trap is the potential difference between the pseudopotential at the minimum of the ion trap and the lowest turning point of the potential well. For hyperbolic geometries this is at the surface of the electrodes; for linear geometries (see section \ref{linear}) it can be obtained through electric field simulations. Higher trap depths are preferable as they allow the ion to remain trapped longer without cooling. Typical trap depths are on the order of a few eV. The speed of optical qubit gates for quantum information processing \cite{Steane2} and shuttling within arrays \cite{Hucul} is dependent on the secular frequency of the ion trap. Secular frequencies and trap depth are a function of the applied voltage, the drive frequency $\Omega_T$, the mass of the ion $m$ and the particular geometry (particularly the ion - electrode distance). Since the variation of the drive frequency is limited by the stability parameters, it is important to achieve large maximal rf voltages for the design of microfabricated ion traps, which is typically limited by bulk breakdown and surface flashover (see Section \ref{ElecChara}). It is also important to note that the secular frequency increases whilst scaling down trap dimensions for a given applied voltage, allowing for large secular frequencies at relatively small applied voltages. \subsection{Motional and internal states of the ion} Single ions can be considered to be trapped within a three-dimensional harmonic well with the three directions of motion uncoupled. Considering the motion of the ion along one of the axes, the Hamiltonian describing this model can be represented as \begin{equation} \textit{H}=\hbar \omega\left(a^{\dag}a+\frac{1}{2}\right) \end{equation} with $\omega$ the secular frequency and $a^{\dag}$ and $a$ the raising and lowering operators respectively. These operators have the following properties: $a^{\dag}|n\rangle =\sqrt{n+1}|n+1\rangle$, $a|n\rangle =\sqrt{n}|n-1\rangle$. When an ion moves up one motional level, it is said to have gained one motional quantum of kinetic energy. For most quantum gates with trapped ions, the ion must reside within the Lamb-Dicke regime. This is where the spread of the ion's wave function is much less than the optical wavelength of the photons interacting with the ion. The original proposed gates \cite{CiracandZoller} required the ion to be in the motional ground state, but more robust schemes \cite{SorensenMolmer} do not have such stringent requirements anymore.\\ Another requirement for many quantum technology applications is the availability of a two-level system in which the qubit can be represented, such that the ion's internal states can be used for encoding.
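As a rough numerical illustration of the Lamb-Dicke condition just described, the sketch below compares the ground-state wave-packet spread $\sqrt{\hbar/2m\omega}$ with an optical wavelength; the ion species, secular frequency and wavelength are assumed here purely for illustration.
\begin{verbatim}
import numpy as np

# Assumed illustrative values: a 40Ca+ ion with a 1 MHz secular frequency
# addressed on a 729 nm optical transition.
hbar = 1.055e-34
m = 40 * 1.661e-27           # ion mass (kg)
omega = 2 * np.pi * 1e6      # secular frequency (rad/s)
wavelength = 729e-9          # optical wavelength (m)

x0 = np.sqrt(hbar / (2 * m * omega))   # ground-state wave-packet spread (m)
eta = 2 * np.pi * x0 / wavelength      # Lamb-Dicke parameter

print(f"wave-packet spread x0 = {x0*1e9:.1f} nm")
print(f"Lamb-Dicke parameter eta = {eta:.2f}")   # eta << 1: Lamb-Dicke regime
\end{verbatim}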
The qubit can then be initialised into the state $|1\rangle$, $|0\rangle$ or a superposition of both. Typical ion species used are hydrogenic ions, which are left with one orbiting electron in the outer shell and similar structure to hydrogen once ionised. These have the simplest lower level energy diagrams. Candidates for ions to be used as qubits can be subdivided into two categories. Hyperfine qubits, $^{171}$Yb$^+$, $^{43}$Ca$^+$, $^{9}$Be$^+$, $^{111}$Cd$^+$, $^{25}$Mg$^+$ use the hyperfine levels of the ground state and have lifetimes on the order of thousands of years, whilst optical qubits, $^{40}$Ca$^+$, $^{88}$Sr$^+$, $^{172}$Yb$^+$ use a ground state and a metastable state as the two level system. These metastable states typically have lifetimes on the order of seconds and are connected via optical transitions to the other qubit state. \subsection{Laser cooling}\label{Lasercool} For most applications, the ion has to be cooled to a state of sufficiently low motional quanta, which can be achieved via laser cooling. For a two level system, when a laser field with a frequency equivalent to the spacing between the two energy levels is applied to the ion, photons will be absorbed resulting in a momentum ``kick" onto the ion. The photon is then spontaneously emitted which leads to another momentum kick onto the ion in a completely random direction so the net effect of many photon emissions averages to zero. Due to the motion of the ion within the harmonic potential the laser frequency will undergo a Doppler shift. By red detuning (lower frequency) the laser frequency by $\delta$ from resonance, Doppler cooling can be achieved. When the ion moves towards the laser, the ion will experience a Doppler shift towards the resonant transition frequency and more scattering events will occur with the net momentum transfer slowing the ion down. Less scattering events will occur when travelling away from the Doppler shifted laser creating a net cooling of the ion's motion. Doppler cooling can typically only achieve an average motional energy $\bar{n} > 1$. In order to cool to the ground state of motion, resolved sideband cooling can be utilized. This can be achieved with stimulated Raman transitions \cite{Monroe,King}.\\ For effective cooling of the ion, the $\vec{k}$-vector of the laser needs a component in all three directions of uncoupled motion. These directions depend on the trap potential and they are called the principal axes. For a convenient choice of directions for the laser beam, principal axes can be rotated by an angle $\theta$ by application of appropriate voltages \cite{Madsen,Allcock} or asymmetries in the geometry about the ion's position \cite{Britton2,Amini,Nizamani}. The angle of rotation of the principal axes can be obtained through the Hessian matrix of the electric field. This angle describes a linear transformation of the electric potential that eliminates any cross terms between the axes creating uncoupled equations of motion. The eigenvectors of the matrix signify the direction of the principal axes.\\ \subsection{Operation of microfabricated ion traps} In order to successfully operate a microfabricated ion trap a certain experimental infrastructure needs to be in place, from the ultra high vacuum (UHV) system apparatus to the radio-frequency source. A description of experimental considerations for the operation of microfabricated ion traps was given by McLoughlin et al. \cite{McLoughlin}. 
For long storage times of trapped ions and for performing gate operations, collisions with background particles must not be a limiting factor. Ion traps are therefore typically operated under ultra high vacuum (UHV) conditions (pressures of $10^{-9}-10^{-12}$ mbar). The materials used need to be chosen carefully such that outgassing does not pose a problem. The materials used for different trap designs are discussed in more detail within section \ref{FabProcess}.\\ To generate the high rf voltage ($\sim$100-1000V) a resonator \cite{Siverns} is commonly used. Typical resonator designs include helical and coaxial resonators. The advantage of using a resonator is that it provides a frequency source with a narrow bandpass, defined by the quality factor Q of the combined resonator - ion trap circuit. This provides a means of filtering out frequencies that would couple to the motion of the ion and lead to motional heating of the trapped ion. A resonator also fulfills the function of impedance matching the frequency source to the ion trap. The total resistance and capacitance of the trap lower the Q factor. It is therefore important to minimize the resistance and capacitance of the ion trap array if a high Q value is desired.\\ To provide electrical connections, ion traps are typically mounted on a chip carrier, with electrical connections provided by wire bonding individual electrodes to an associated connection on the chip carrier. Bond pads on the chip provide a surface to which the wire can be connected, see Fig. \ref{bondpads}. The pins of the chip carrier are connected to wires which pass to external voltage supplies outside the vacuum system.\\ \begin{figure} \caption{Wire bonding a microfabricated chip to a chip carrier providing external electrical connections.} \label{bondpads} \end{figure} The loading of atomic ions within Paul traps is performed utilizing a beam of neutral atoms typically originating from an atomic oven consisting of a resistively heated metallic tube filled with the appropriate atomic species or its oxide. The atomic flux is directed to the trapping region where atoms can be ionised via electron bombardment or, more commonly, by photoionisation, see for example refs. \cite{McLoughlin,Deslauriers2}. The latter has the advantage of faster loading rates while requiring lower neutral atom pressures, and it avoids much of the charge build-up that results from electron bombardment. For asymmetric traps in which all the electrodes lie in the same plane, see Fig. \ref{lineartraps}, a hole within the electrode structure can be used for the atomic flux to pass through the trap structure; this is referred to as backside loading \cite{Britton2}. The motivation behind this method is to reduce the coating of the electrodes and, more importantly, the coating of the notches between the electrodes by the atomic beam, thereby reducing charge build-up and the possibility of shorting between electrodes. However, the atomic flux can also be directed parallel to the surface in an asymmetric ion trap due to the low atomic flux required for photoionisation loading. \section{Linear ion traps}\label{linear} The previously mentioned ideal linear hyperbolic trap only provides confinement within the radial directions and does not allow for optical access. By modifying the geometry as depicted in Fig. \ref{lineartraps}, linear ion traps are created. To create an effective static potential for the confinement in the axial ($z$-axis) direction, the associated electrodes are segmented.
This allows for the creation of a saddle potential which, when superimposed onto the rf pseudopotential, provides trapping in three dimensions. By selecting the appropriate amplitudes for the rf and static potentials such that the radial secular frequencies $\omega_x,\omega_y$ are significantly larger than the axial frequency $\omega_z$, multiple ions will form a linear chain along the $z$-axis. The motion of the ion near the centre of the trap can be considered to be harmonic to a very good approximation. The radial secular frequency of a linear trap is on the order of that of a hyperbolic ion trap with the same ion - electrode distance, differing by a geometric factor $\eta$ \cite{Madsen}. \subsection{Linear ion trap geometries} Linear ion trap geometries can be realised in a symmetric or asymmetric design as depicted in Fig. \ref{lineartraps}. \begin{figure} \caption{Different linear trap geometries. (a) A two-layer design in which the rf electrodes (yellow) are diagonally opposite and the dc electrodes (grey) are segmented. (b) A three-layer design in which the rf electrodes are surrounded by the dc electrodes. (c) A five-wire asymmetric design where all the electrodes lie in the same plane. } \label{lineartraps} \end{figure} In symmetric designs the ions are trapped between the electrodes, as shown for two- and three-layer designs in Fig. \ref{lineartraps} (a) and (b) respectively. These types of designs offer higher trap depths and secular frequencies compared to asymmetric traps with the same trap parameters. Two-layer designs offer the highest secular frequencies and trap depths, whilst three-layer designs offer more control of the ion's position for micromotion compensation and shuttling.\\ The aspect ratio for symmetric designs is defined as the ratio of the separation w between the two sets of electrodes to the separation d of the layers, as depicted in Fig. \ref{AR}. As the aspect ratio rises, the geometric efficiency factor $\eta$ decreases and asymptotically approaches $1/\pi$ for two-layer designs \cite{Madsen}. \begin{figure} \caption{For two-layer traps the aspect ratio is defined as w/d.} \label{AR} \end{figure} Another advantage of symmetric traps is more freedom of optical laser access, allowing laser beams to enter the trapping zone at various angles. In asymmetric trap structures the laser beams typically have to enter the trapping zone parallel to the trap surface. Asymmetric designs offer the possibility of simpler fabrication processes. Buried wires \cite{Amini} and vertical interconnects can provide electrical connections to electrodes which cannot be connected via surface pathways. Trap depths are typically smaller than for symmetric ion traps; therefore higher voltages need to be applied to obtain the same trap depth and secular frequencies as an equivalent symmetric ion trap. The widths of the individual electrodes can be optimised to maximise trap depth \cite{Nizamani}.\\ \begin{figure} \caption{Cross-section in the $x$-$y$ plane of the different types of asymmetric designs. (a) Four-wire design in which the principal axes are naturally non-perpendicular with respect to the plane of the electrodes. (b) Five-wire design where the electrodes are symmetric and one principal axis is perpendicular to the surface.
(c) Five-wire design with different widths rf electrodes, this allows the principal axes to be rotated.} \label{asymmtraps} \end{figure} In order to successfully cool ions, Doppler cooling needs to occur along all three principal axes therefore the $\vec{k}$-vector of the laser needs to have a component along all the principal axes. Due to the limitation of the laser running parallel to the surface it is important that all principal axes have a component along the $\vec{k}$-vector of the Doppler cooling laser beam. Five-wire designs (Fig. \ref{asymmtraps}(b) and (c)) have a static voltage electrode below the ions position, surrounded by rf electrodes and additional static voltage electrodes. With the rf electrodes of equal width (Fig. \ref{asymmtraps}(b)) one of the principal axes is perpendicular to the surface of the trap. However, it can be rotated via utilizing two rf electrodes of different width (Fig. \ref{asymmtraps}(c)) or via splitting the central electrodes \cite{Allcock}. A four-wire design shown in Fig. \ref{asymmtraps}(a) has the principal axes naturally rotated but the ion is in direct sight of the dielectric layer below since the ion is located exactly above the trench separating two electrodes. Deep trenches have been implemented \cite{Britton,Britton2} to reduce the effect of exposed dielectrics. \subsection{From linear ion traps to arrays} For ions stored in microfabricated ion traps to become viable for quantum information processing, thousands or even millions of ions need to be stored and interact with each other. This likely requires a number of individual trapping regions that is on the same order as the number of ions and furthermore, the ability for the ions to interact with each other so that quantum information can be exchanged. This could be achieved via arrays of trapping zones that are connected via junctions. To scale up to such an array requires fabrication methods that are capable of producing large scale arrays without requiring an unreasonable overhead in fabrication difficulty. This makes some fabrication methods more viable for scalability than other techniques. An overview of different fabrication methods is discussed in more detail within section \ref{FabProcess}.\\ The transport of ions through junctions has first been demonstrated within a three-layer symmetric design \cite{Hensinger} and later near-adiabatic in a two-layer symmetric trap array \cite{Blakestad}. Both ion trap arrays were made from laser machined alumina substrates incorporating mechanical alignment. The necessity of mechanical alignment and laser machining limit the opportunity to scale up to much larger numbers of electrodes making other microfabrication methods more suitable in the long term. Transport through an asymmetric ion trap junction was then demonstrated by Amini et al. \cite{Amini}, however, this non adiabatic transport required continuous laser cooling. Wesenberg carried out a theoretical study \cite{Wesenberg2} how one can implement optimal ion trap array intersections. Splatt et al. demonstrated reordering of ions within a linear trap \cite{Splatt}.\\ \begin{figure} \caption{Junctions that have been used to successfully shuttle ions. The yellow parts represent the rf electrodes. No segmentation of the static voltage electrodes (grey) is shown. 
(a) T-junction design \cite{Hensinger} \label{junctions} \end{figure} \section{Simulating the electric potentials of ion trap arrays}\label{simu} Accurate simulations of the electric potentials are important for determining trap depths, secular frequencies and to simulate adiabatic transport including the separation of multiple ions and shuttling through corners \cite{Hucul,Reichle}. Various methods can be used in determining the electric potentials from the trap electrodes, with both analytical and numerical methods available. Numerical simulations using the finite element method (FEM) and the boundary element method (BEM) \cite{Hucul,Singer} provide means to obtain the full 3D potential of the trap array. FEM works by dividing the region of interest into a mesh of nodes and vertices, an iterative process then finds a solution which connects the nodes whilst satisfying the boundary conditions and a potential can be found for each node. BEM starts with the integral equation formulation of Laplace's equation resulting in only surface integrals being non-zero in an empty ion trap. Due to BEM solving surface integrals, this is a dimensional order less than FEM thus providing a more efficient numerical solution than FEM \cite{Hucul}. To obtain the total potential the basis function method is used \cite{Hucul}. A basis function for a particular electrode is obtained by applying 1V to one particular electrode whilst holding the other electrodes at ground. By summing all the basis functions (with each basis function multiplied by the actual voltage for the particular electrode) the total trapping potential can be obtained.\\ For the case of asymmetric ion traps, analytical methods provide means to calculate the trapping potential at a quicker rate and scope for optimisation of the electrode structures. A Biot-Savart-like law \cite{Oliveira} can be used and is related to the Biot-Savart law for magnetic fields in which the magnetic field at a point of interest is obtained by solving the line integral of an electrical current around a closed loop. This analogy is then applied to electric fields in the case of asymmetric ion traps \cite{Wesenberg}. One limitation for these analytical methods is the fact that all the electrodes must lie in a single plane, with no gaps, which is referred to as the gapless plane approximation. House \cite{House} has obtained analytical solutions to the electrostatic potential of asymmetric ion trap geometries with the electrodes located on a single plane within a gapless plane approximation. Microfabrication typically requires gaps of a few micrometers \cite{Britton2} which need to be created to allow for different voltages on neighboring electrodes. The approximation is suggested to be reasonable for gaps much smaller than the electrode widths and studies into the effect of gapped and finite electrodes have been conducted \cite{Schmied}. However, within the junction region where electrodes can be very small and high accuracy is required, the gapless plane approximation may not necessarily be sufficient. \section{Electrical characteristics}\label{ElecChara} \subsection{Voltage breakdown and surface flashover} Miniaturization of ion traps is not only limited by the increasing motional heating of the ion (see section \ref{heating}), but also by the maximum applied voltages allowed by the dielectrics and gaps separating the electrodes. Both secular frequency and trap depth depend on the applied voltage. 
Therefore it is important to highlight important aspects involved in electrical breakdown. Breakdown can occur either through the bulk material, a vacuum gap between electrodes, or across an insulator surface (surface flashover). There are many factors which contribute to the breakdown of a trap, from the specific dielectric material used and its deposition process, residues on insulating materials to the geometry of the electrodes itself and the frequency of the applied voltage.\\ Bulk breakdown describes the process of breakdown via the dielectric layer between two independent electrodes. An important variable which has been modelled and measured is the dielectric strength. This is the maximum field that can be applied before breakdown occurs. The breakdown voltage $V_c$ is related to the dielectric strength for an ideal capacitor by $V_c=dE_c$, where d is the thickness of the dielectric. There have been many studies into dielectric strengths showing an inverse power law relation $E_c\propto d^{-n}$ \cite{Agarwal1,Agarwal2,MRB82,KS01,ZSZ03,B03,MPC}. The results show a typical range of values ($0.5-1$) for the scaling parameter $n$. Although decreasing the thickness will increase the dielectric strength this will not increase the breakdown voltage if the scaling parameter lies below one.\\ Surface flashover occurs over the surface of the dielectric material between two adjacent electrodes. The topic has been reviewed \cite{Miller} with studies showing a similar trend with a distance dependency on the breakdown voltage with $V_b\propto d^\alpha$ where $\alpha\approx0.5$ \cite{Pillai2,MPC}. Surface flashover usually starts from electron emission from the interface of the electrode, dielectric and vacuum known as the triple point. Imperfections at this point increase the electric field locally and will reduce the breakdown voltage. The electric field strength for surface breakdown has been measured to be a factor of 2.5 less than that for bulk breakdown of the same material, dimensions and deposition process \cite{MPC}, with thicknesses of $1-3.9\mu m$ for substrate breakdown and lengths of $5-600\mu m$ considered.\\ The range of parameters that can affect breakdown from the difference between rf and applied static voltages \cite{Stick,Pillai,Pillai2} includes the dielectric material, deposition process and the geometry of the electrodes \cite{Miller}. Therefore it is most advisable to carry out experimental tests on a particular ion trap fabrication design to determine reliable breakdown parameters. In a particular design it is very important to avoid sharp corners or similar features as they will give rise to large local electric fields at a given applied voltage. \subsection{Power dissipation and loss tangent}\label{power} When scaling to large trap arrays the finite resistance $R$ and capacitance $C$ of the electrodes as well as the dielectric materials within the trap structure result in losses and therefore have to be taken into account when designing an ion trap. The power dissipation, which results from rf losses, is highly dependent on the materials used and the dimensions of the trap structure and can result in heating and destruction of trap structures. To calculate the power dissipated in a trap a simple lumped circuit model, as shown in Fig. 
\ref{circuitmodel} can be utilized.\\ \begin{figure} \caption{The rf electrode modelled with resistance $R$, capacitance $C$, inductance $L$ and conductance $G$.} \label{circuitmodel} \end{figure} The dielectric material insulating the electrodes cannot be considered a perfect insulator, resulting in a complex permittivity $\varepsilon$, with $\varepsilon=\varepsilon'-j\varepsilon''$. Lossless parts are represented by $\varepsilon'$ and lossy parts by $\varepsilon''$. Substituting this into Amp\`ere's circuital law and rearranging, one obtains \cite{Kraus} \begin{equation} {\bf \nabla \times H} = \left[ \left( \sigma + \omega \varepsilon ''\right) + j \omega \varepsilon '\right]{\bf E} \end{equation} where $\sigma$ is the conductivity of the dielectric for an applied alternating current and $\omega$ is the angular frequency of the applied field. The effective conductance is often referred to as $ \sigma' = \sigma + \omega \varepsilon ''$. The ratio of the conduction to displacement current densities is commonly defined as the loss tangent $\tan \delta$ \begin{equation} \tan \delta = \frac{\sigma + \omega \varepsilon ''}{\omega \varepsilon '} \end{equation} For a good dielectric the conductivity $\sigma$ is much smaller than $\omega \varepsilon ''$ and we can then make the following approximation $\tan \delta = \varepsilon '' / \varepsilon '$. For a parallel plate capacitor the real and imaginary parts of the permittivity can be expressed as \cite{DAG08} \begin{equation} \varepsilon'=\frac{Cd}{\varepsilon_{0}A}, \qquad \varepsilon''=\frac{Gd}{\varepsilon_{0}\omega A} \end{equation} Substituting this into the approximated loss tangent expression, the conductance $G$ can be expressed as \begin{equation}\label{Conductance} G=\omega C \tan \delta \end{equation} Now we can calculate the power dissipated through the rf electrodes driven at frequency $\omega=\Omega_T$, with $I_{rms}=V_{rms}/Z$. The total impedance for this circuit is \begin{equation}\label{imp} Z=R+j\omega L+\frac{G-j\omega C}{G^2+\omega^2C^2} \end{equation} The second term of the impedance represents the inductance $L$ of the electrodes, which we approximate with the inductance of two parallel plates separated by a dielectric with $L= \mu l (d/w)$ \cite{Thierauf}. Assuming a dielectric of thickness $d \approx 10 \mu $m, electrode width $w \approx 100 \mu$m, electrode length $l \approx 1 $mm and magnetic permeability of the dielectric $\mu \approx 10^{-6} \frac{H}{m}$ \cite{Howard}, the inductance can be approximated to be $L\approx 10^{-10}$H. Comparing the imaginary impedance terms $j \omega L $ and $ j \omega \frac{C}{G^2+\omega^2 C^2} $, it becomes clear that the inductance can be neglected. The average power dissipated in a lumped circuit is given by $P_d=Re(V_{rms}I^{\ast}_{rms})$ \cite{Horwitz}. Using the approximation for the total impedance $Z=R+\frac{G-j\omega C}{G^2+\omega^2C^2}$ we obtain \begin{equation} P_d= \frac{V_{0}^{2} R (G^{2}+C^{2}\omega^{2})^{2}} {2(C^{2} \omega^{2} + R^{2} (G^{2} +C^{2} \omega^{2})^{2})} \end{equation} and using equation \ref{Conductance} one obtains \begin{equation}\label{powerdiss} P_d=\frac{V_{0}^2\Omega_{T}^{2}C^2R(1 + \tan^2 \delta)^2}{2(1+\Omega_{T}^{2}C^2R^2(1 + \tan^2 \delta)^2)} \end{equation} In the limit where $\tan \delta \ll 1$ and $\Omega_{T} C R \ll 1$, the dissipated power can be simplified as $P_d=\frac{1}{2}V_{0}^2\Omega_{T}^{2}C^2R$.
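To give a sense of scale, the sketch below evaluates equation \ref{powerdiss} and its simplified limit for a set of assumed, purely illustrative circuit values; the capacitance, resistance and loss tangent are not measured values for any particular trap.
\begin{verbatim}
import numpy as np

# Assumed illustrative values (not measurements of a particular trap):
V0 = 200.0                   # rf amplitude (V)
Omega_T = 2 * np.pi * 30e6   # drive frequency (rad/s)
C = 10e-12                   # trap capacitance (F)
R = 0.5                      # series resistance of the rf electrode (Ohm)
tan_delta = 1e-3             # loss tangent of the dielectric

# Full expression of the power dissipation formula and its small-loss limit
num = V0**2 * Omega_T**2 * C**2 * R * (1 + tan_delta**2)**2
den = 2 * (1 + Omega_T**2 * C**2 * R**2 * (1 + tan_delta**2)**2)
P_full = num / den
P_simple = 0.5 * V0**2 * Omega_T**2 * C**2 * R

print(f"P_d (full expression) = {P_full*1e3:.1f} mW")
print(f"P_d (simplified)      = {P_simple*1e3:.1f} mW")
\end{verbatim}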
Considering equation \ref{powerdiss}, the important factors for reducing power dissipation are an electrode material with low resistivity, a low capacitance of the electrode geometry and a dielectric material with a low loss tangent at typical drive frequencies ($\Omega_{T}= 10-80$MHz).\\ Values for the loss tangent have been studied in the GHz range for microwave integrated circuit applications \cite{Krupka} and for diode structures in the kHz range \cite{DAG08,T06,B06}. Generally the loss tangent decreases with increasing frequency \cite{Karatas,Selcuk} and a temperature dependence has been shown for specific structures \cite{T06}. Values at 1 MHz can be obtained but are dependent on the structures tested: Au/SiO$_2$/n-Si $\tan\delta\sim 0.05$ \cite{DAG08}, Au/Si$_3$N$_4$/p-Si $\tan\delta\sim 0.025$ \cite{B06}, Cr/SiO$_{1.4}$/Au $\tan\delta\sim 0.09$ \cite{T08}. The loss tangent will depend on the specific structure and also on the doping levels. It is suggested that the loss tangent for specific materials be requested from the manufacturer or measured at the appropriate drive frequency. For optimal trap operation, careful design considerations have to be made to reduce the overall resistance and capacitance of the trap structure, minimising the dissipated power. \section{Fabrication Processes}\label{FabProcess} Using the information given in the previous sections about ion trap geometries, materials and electrical characteristics, common microfabrication processes can now be discussed. One of the main criteria for the choice of fabrication process is compatibility with the desired electrode geometry. Asymmetric and symmetric geometries result in different requirements and therefore we will discuss them separately. First, process designs for asymmetric traps will be discussed, followed by symmetric ion trap designs and universally compatible processes. The compatibility of a process with the materials and geometries discussed above will be highlighted. Structural characteristics and limitations of a process will be explained and solutions will then be given. As the exact fabrication steps depend on the materials used and the available equipment, the process sequences will be discussed as generally as possible. However, to give the reader a guideline of possible choices, material and process step details of published ion traps will be given in brackets. The discussion will start with a simple process that can be performed in a wide variety of laboratories and is offered by many commercial suppliers. \subsection{Printed Circuit Board (PCB)} The first fabrication technique discussed for ion trap microfabrication is the widely available printed circuit board (PCB) process. This technology is commonly used to create electrical circuits for a wide variety of devices and does not require cleanroom technology. For ion traps generally a monolithic single- or two-layer PCB process is used. Monolithic processes rely on the fabrication of ion trap structures by adding to or subtracting material from one component. In contrast to this, wafer bonding, mechanical mounting or other assembly techniques are used to fabricate traps from pre-structured, potentially monolithic parts in a non-monolithic process. When combined with cleanroom fabrication such a process can be used for very precise and large scale structures; however, mechanical alignment remains an issue.\\ PCB processes commonly allow for minimal structure sizes of approximately 100$\mu$m \cite{Kenneth} and slots milled into the substrate of 500$\mu$m width.
This limits the uses of this technique for large and complex designs but also results in a less complex and easier accessible process. Smaller features are possible with special equipment, which is not widely available. The following section will give a general introduction into the PCB processes used to manufacture ion traps.\\ This process is based on removing material instead of depositing materials as used in most cleanroom processes. Therefore the actual process sequence starts by selecting a suitable metal coated PCB substrate. The substrate has to exhibit the characteristics needed for ion trapping, ultra high vacuum (UHV) compatibility, low rf loss tangent and high breakdown voltage (commercially available high-frequency (hf) Rogers 4350B used in ref. \cite{Kenneth}). One side of the PCB substrates is generally pre-coated with a copper layer by the manufacturer, which is partially removed in the process to form the trap structures as shown in Fig. \ref{PCB} (c). Possible techniques to do this are mechanical milling shown in Fig. \ref{PCB} (b) or chemically wet etching using a patterned mask. The mask can either be printed directly onto the copper layer or photolithography can be used to pattern photo resist as shown in Fig. \ref{PCB} (a). To reduce exposed dielectrics and prevent shorting from material deposited in the trapping process slots can be milled into the substrate underneath the trapping zones as shown in Fig. \ref{PCB} (d).\\ \begin{figure} \caption{Overview of several PCB fabrication processes. (a) Deposition and exposure of the resist used as a mask in later etch steps. (b) Removal of the top copper layer using mechanical milling. (c) Removal of the copper layer using a chemical wet etch and later removal of the deposited mask. (d) Mechanical removal of the substrate underneath the trapping zone \cite{Pearson} \label{PCB} \end{figure} The fabricated electrodes form one plane and make the process unusable for symmetric ion traps (SIT). Not only are the electrodes located in one plane they also sit directly on the substrate, which makes a low rf loss tangent substrate necessary to minimize energy dissipation from the rf rails into the substrate. As explained in Section \ref{power} high power dissipation leads to an increase of the temperature of the ion trap structure and also reduces the quality factor of the loaded resonator. PCB substrates intended for hf devices generally exhibit a low rf loss tangent (values of tan$\delta= 0.0031$ are typical \cite{Chenard}) and if UHV compatible can be used for ion traps. Other factors reducing the quality factor are the resistances and capacitances of the electrodes. When slots are milled into the substrate exposed dielectrics underneath the ion can be avoided despite the electrode structures being formed directly on the substrate. The otherwise exposed dielectrics would lead to charge built up underneath the ion resulting in stray fields pushing the ion out of the rf nil and leading to micromotion. By relying on widely available fabrication equipment this technique enabled the realization of several ion trap designs with micrometer-scale structures, published in \cite{Splatt,Harlander,LeibrandtPCB,Kenneth,Pearson}, without the need of cleanroom techniques.\\ The discussed single layer PCB process can be used for electrode geometries including y-junctions and linear sections. More complex topologies with isolated electrodes and buried wires require a more complex two-layer process. 
To address limitations caused by the minimal structure separations and sizes allowing for ion traps with higher electrode and trapping-zone densities a process based on common cleanroom technology will be discussed next. \subsection{Conductive Structures on Substrate (CSS)} \label{CSSPro} Common cleanroom fabrication techniques like high resolution photolithography, isotropic and anisotropic etching, deposition, electroplating and epitaxial growth allow a very precise fabrication of large scale structures. The monolithic technique we discuss here is based on a ``conductive structures on substrate" process and was used to fabricate the first microfabricated asymmetric ion trap (AIT) \cite{Seidelin}. Electrode structures separated by only 5$\mu$m \cite{Allcock} can be achieved with this process and smaller structures are not limited by the process but flashover and bulk breakdown voltage of the used materials. The structures are formed by means of deposition and electroplating of conductive material onto a substrate instead of removing precoated material. This makes it possible to use a much wider variety of materials in a low number of process steps. Several variations or additions were published \cite{Seidelin,Wang,Wang2,Labaz,StickThesis} to adjust the process for optimal results with desired materials and structures.\\ First the standard process will be presented, which starts by coating the entire substrate (for example polished fused quartz as used in \cite{Seidelin}) with a metal layer working as a seed layer (0.1$\mu$m copper), necessary for a following electroplating step. Commonly an adhesion layer (0.03$\mu$m titanium) is evaporated first, as most seed layer materials (\cite{Seidelin,Labaz}) have a low adhesion on common substrates, see Fig. \ref{CSS} (a). Then a patterned mask (photolithographic structured photo resist) with a negative of the electrode structures is formed on the seed layer. The structures are then formed by electroplating (6$\mu$m gold) metal onto the seed layer, see Fig. \ref{CSS} (b). Afterwards the patterned mask is no longer needed and removed. The seed layer, providing electrical contact between the structures during electroplating, also needs to be removed to allow trapping operations. This can be done using the electroplated electrodes as a mask and an isotropic chemical wet etch process to remove the material, see Fig. \ref{CSS} (c). Then the adhesion layer is removed in a similar etch process (for example using hydrofluoric acid) and the completed trapping structure is shown in Fig. \ref{CSS} (d).\\ \begin{figure} \caption{Fabrication sequence of the standard ``Conductive Structures on Substrate" process. (a) The sequence starts with the deposition of an adhesion and seed layer. (b) An electroplating step creating the electrodes using a patterned mask. (c) Removal of the mask followed by a first etch step removing the seed layer. d) Second etch step removing the adhesion layer.} \label{CSS} \end{figure} Depending on the available equipment additional or different steps can be performed, which also allow special features to be included on the chip. One variation was published in \cite{Labaz} replacing the seed layer with a thicker metal layer (1$\mu$m silver used in \cite{Labaz}) and forming the patterned mask on top of that, as shown in Fig. \ref{CSS2} (a). The electrode structures are then formed by using this mask to wet etch through the thick silver layer (NH3OH: H2O2 silver etch) and the adhesion layer (hydrofluoric acid) Fig. \ref{CSS2} (b). 
To counter potentially sharp electrode edges resulting from the wet etch process an annealing step (720$^{\circ} $C to 760$^{\circ} $C for 1 h) was performed to reflux the material and flatten the sharp edges. An ion trap with superconductive electrodes was fabricated using a similar process \cite{Wang2}. Low temperature superconductors Nb and NbN were grown onto the substrate in a sputtering step. Then the electrodes were defined using a mask and an anisotropic etch step. An annealing step was not performed after this etch step. With this variation of the standard process electroplating equipment and compatible materials are not needed to fabricate an ion trap.\\ A possible addition to the process was presented in \cite{Seidelin} incorporating on-chip meander line resistors on the trap as part of the static voltage electrode filters. Bringing filters closer to the electrodes can reduce the noise induced in connecting wires as the filters are typically located outside the vacuum system or on the chip carrier. On-chip integration of trap features is essential for the future development of very large scale ion trap arrays with controls for thousands of electrodes need for quantum computing.\\ In order to allow for appropriate resistance values for the resistors, processing of the chip occurs in two stages. Within the first stage only the electrode geometry is patterned with regions where resistors are to be fabricated entirely coated in photoresist. This step consists of patterning photo resist on the substrate (for example polished fused quartz \cite{Seidelin}) to shape the electrodes followed by the deposition of the standard adhesion (0.030$\mu$m titanium) and seed layers (0.100$\mu$m copper) on the substrate (thicknesses stated correspond to the values used by Seidelin et al. \cite{Seidelin}). After the electrodes have been patterned, the first mask is removed. In the next step, only the on-chip resistors are patterned allowing for different thickness of conducting layers used for resistors. A second mask is patterned on the substrate parts reserved for the on-chip resistors. Then an adhesion layer (0.013$\mu$m titanium) and metal layer (0.030$\mu$m gold) is deposited forming the meander lines as shown in Fig. \ref{CSS2} (c). The mask is removed and the rest of the sequence follows the standard process steps. This process illustrates one example for on-chip features integrated on the trap. Another process was published in \cite{Wang}, which integrates on chip magnetic field coils to generate a magnetic field gradient at the trapping zone. It was fabricated using the standard process and a specific mask to form the electrodes incorporating the coils.\\ \begin{figure} \caption{Different variations of the standard CSS process are shown. (a) Conductive material is deposited on the entire substrate and a resist mask is structured. (b) Structured electrodes after the etch step. (c) On chip resistor meander line, reported in ref. \cite{Seidelin} \label{CSS2} \end{figure} Most published variations \cite{Seidelin,Wang,Wang2,Labaz,Labaz2,StickThesis,Allcock,Dani} of this process feature structure sizes much smaller than achievable with the PCB process but similar to it electrodes sit directly on the substrate resulting in the same rf loss tangent requirements. It also makes the process incompatible with symmetric ion traps. As a result of the smaller structure separations, high flash over and bulk breakdown voltages are also more important. 
Because no slots are milled into the substrate, electric charges can accumulate on the dielectrics beneath the trapping zones, which can result in stray fields and additional micromotion. To reduce this effect, a high aspect ratio of electrode height to electrode-electrode distance is desirable.\\ Similar to the one-layer PCB process, topologies including junctions are possible, but isolated electrodes are prohibited by the lack of buried wires. By moving towards cleanroom fabrication techniques, it is possible with this process to reduce the electrode-electrode distance and increase the trapping zone density. The scalability is still limited and the exposed dielectrics can lead to unwanted stray fields. To counter the exposed dielectrics, another variation and a different process featuring ``patterned silicon on insulator (SOI) layers'' can be used and will be explained in Section \ref{SOI}.\\ One variation of this process that prevents exposed dielectrics was fabricated by Sandia National Laboratories \cite{StickThesis}, featuring free standing rf rails with a large section of the substrate underneath the trapping region removed. The electrodes consist of free standing wires held in place by anchor-like structures fabricated at the ends of the slot in the substrate, as shown in Fig. \ref{CSS2} (d). To prevent snapping under stress, the wires are made from connected circles, increasing their flexibility. While no dielectrics are exposed underneath, the design's scalability and compatibility with different electrode geometries are limited, as the rf rails are suspended between the anchors in a straight line. \subsection{Patterned Silicon on Insulator Layers (SOI)}\label{SOI} The ``Patterned Silicon on Insulator Layers'' process makes use of commercially available silicon-on-insulator (SOI) substrates and was first reported by Britton et al. \cite{Britton2}. Similar to the PCB process, this technique removes parts of a substrate instead of adding material to form the ion trap structures. Using the selective etch characteristics of the oxide layer between the two conductive silicon layers of the substrate, it is also possible to create an undercut of the dielectric and shield the ion from it, without introducing several additional process steps. A metal deposition step performed at the end of the process to lower the electrode resistance does not require an additional mask, keeping the number of process steps low. Therefore, by making clever use of the substrate, this process allows for much more advanced trap structures without increasing the number of process steps.\\ The substrates are available with insulator thicknesses of up to 10$\mu$m and different Si doping grades, resulting in different resistances. After a substrate with the desired characteristics (100$\mu$m Si, 3$\mu$m SiO$_{2}$, 540$\mu$m Si in ref. \cite{Britton2}) is found, the process sequence starts with the photolithographic patterning of a mask on the top SOI silicon layer, as shown in Fig. \ref{NIST2009a} (a). The mask is used to etch through the top Si layer by means of an anisotropic process (deep reactive ion etching, DRIE), see Fig. \ref{NIST2009a} (b). Using a wet etch process, the exposed SiO$_{2}$ layer parts are removed and an undercut is formed to further reduce exposed dielectrics, as shown in Fig. \ref{NIST2009a} (c).
If the doping grade of the top Si layer is high enough the created structure can already be used to trap ions.\\ To reduce the resistance of the trap electrodes resulting from the use of bare Si a deposition step can be used to apply an additional metal coating onto the electrodes. No mask is required for this deposition step, the adhesion layer (Chromium or Titanium) and metal layer (1 $\mu$m Gold, in \cite{Britton2}) can be deposited directly onto the silicon structures as shown in Fig. \ref{NIST2009a} (d). This way the electrodes and the exposed parts of the second Si layer will be coated. This has the benefit that possible oxidization of the lower silicon layer is also avoided. A slot can be fabricated into the substrate underneath the trapping zone, which allows backside loading in these traps \cite{Britton2}. This slot is microfabricated by means of anisotropic etching using a patterned mask from the backside. The benefit of such a slot is that the atom flux needed to ionize atoms in the trapping region can be created at the backside and guided perpendicular to the trap surface away from the electrodes. Therefore coating of the electrodes as a result of the atom flux can be minimized.\\ \begin{figure} \caption{Fabrication sequence of the SOI process. (a) Resist mask is directly deposited on the substrate. (b) First silicon layer is etched in an anisotropic step. (c) Isotropic selective wet etch step removing the SiO$_{2} \label{NIST2009a} \end{figure} An ion trap based on the SOI process was fabricated with shielded dielectrics by Britton et al. \cite{Britton2}. A similar approach is used by Sterling et al. \cite{Sterling} to create 2-D ion trap arrays. Similar to the standard CSS process, discussed in the previous section, y-junctions and other complex topologies are possible, but buried wires are prohibited. The process design incorporates a limited material choice for substrate and insulator layer. Only the doping grade of the upper and lower conductive layers can be varied not the material. The insulator layer separating the Si layers must be compatible with the SOI fabrication technique and commercially available materials are SiO$_{2}$ and Al$_{2}$O$_{3}$ (Sapphire). Adjustments of electrical characteristics, like rf loss tangent and electrode capacitances are therefore limited to the two insulator materials and geometrical variations.\\ To further increase the scalability and trapping zone density of ion traps, process techniques should incorporate buried wires to allow isolated static voltage and rf electrodes. Examples of such monolithic processes will be discussed next, first a process incorporating buried wires but exhibiting exposed dielectrics will be presented followed by a process allowing for buried wires and shielding of dielectrics. \subsection{Conductive Structures on Insulator with Buried Wires (CSW)} The process to be discussed here is a further development of the CSS process discussed in section \ref{CSSPro}, which adds two more layers to incorporate buried wires. With these it is possible to connect isolated static voltage and rf electrodes and was used to fabricate an ion trap with six junctions arranged in a hexagonal shape \cite{Amini}, see Fig. \ref{NIST2010b} (a). 
The capability of buried wires is essential for more complex ion trapping arrays that could be used to trap hundreds or thousands of ions necessary for advanced quantum computing.\\ The process starts with the deposition of the buried conductor layer (0.300$\mu$m gold layer \cite{Amini}) sandwiched between two adhesion layers (0.02$\mu$m titanium) on the substrate (380$\mu$m quartz) and a patterned mask is deposited on these layers. Conductor and adhesion layers are then patterned in an isotropic etch step, see Fig. \ref{NIST2010b} (b) and then buried with an insulator material (1$\mu$m SiO$_{2}$ deposited by means of chemical vapor deposition (CVD)). To establish contact between electrodes and the buried conductor, windows are formed in the insulator layer (plasma etching) as shown in Fig. \ref{NIST2010b} (c). Then another patterned mask is used to deposit an adhesion and conductor layer forming the electrodes (0.020$\mu$m titanium and 1$\mu$m gold). In this deposition step the windows in the insulator are also filled and connection between buried conductor and electrodes is established as shown in Fig. \ref{NIST2010b} (d). For this ion trap a backside loading slot was also fabricated using a combination of mechanical drilling and focused ion beam milling to achieve a precise slot in the substrate \cite{Amini}. As demonstrated by Amini et al. \cite{Amini} this process allows for large scale ion trap arrays with many electrodes and trapping zones. While the scalability is further increased, this particular process design results in exposed dielectrics. The next process improves this further by shielding the electrodes and the substrate. \begin{figure} \caption{Process sequence for the buried wire process. (a) Hexagonal shaped six junction ion trap array \cite{Amini} \label{NIST2010b} \end{figure} \subsection{Conductive Structures on Insulator with Ground Layer (CSL)} To achieve buried wires or in this case vertical interconnect access (vias) and shielding of dielectrics a monolithic process including a ground layer and overhanging electrodes can be carried out. Several structured and deposited layers are necessary for this process and while providing the greatest flexibility of all processes it also results in a complicated process sequence. Therefore these processes are commonly performed using Very-Large-Scale Integration (VLSI) facilities that are capable of performing many process steps with high reliability.\\ A process design, which makes use of vias, was presented by Stick et al. \cite{Sandia}. In this process design the substrate (SOI in ref. \cite{Sandia}) is coated with an insulator and a structured conductive ground layer (1$\mu$m Al) as shown in Fig. \ref{Lucentb} (a). This is followed by a thick structured insulator (9-14 $\mu$m). The trapping electrodes are then placed in a plane above the thick insulator. The electrodes overhang the thick insulator layer and in combination with the conductive ground layer shield the trapping zone from dielectrics. Vias are used to connect static electrodes to the conductive layer beneath the thick insulator as shown in Fig. \ref{Lucentb} (b). The process also includes creation of a backside loading slot.\\ A variation from the discussed process making use of a ground plate without vias is described by Leibrandt et al. \cite{Leibrandt} and more detailed process steps are given in ref. \cite{StickThesis}.\\ \begin{figure} \caption{Two variations of the CSL process sequence. 
(a) In the process sequence published in \cite{Sandia}, the static voltage electrodes are connected through vias to a structured conductive ground layer beneath a thick insulator, and the overhanging electrodes together with this ground layer shield the trapping zone from dielectrics. (b) A variation using a ground plate without vias, as described in \cite{Leibrandt}.} \label{Lucentb} \end{figure} The described process design \cite{Sandia} combines vias with shielded dielectrics and also makes the trap structures independent of the substrate. Therefore, the same level of scalability as with the CSW process, discussed in the previous section, can be achieved, while the dielectrics are shielded similarly to the SOI process described in section \ref{SOI}. The process uses aluminium to form the electrodes, which can lead to oxidisation and unwanted charge build-up. To avoid this, the electrodes could be coated with gold or other non-oxidising conductors in an additional step. An easier choice of substrate, insulator and electrode materials, combined with vias and shielded dielectrics, makes this process well suited for very large scale asymmetric ion traps with high trapping zone densities and low stray fields. With electrodes in one plane and another conductive layer above the substrate, this structure could also be used to fabricate a symmetric ion trap (SIT). \subsection{Double Conductor/Insulator Structures on Substrate (DCI)} Symmetric ion traps have electrodes placed in two or three planes with the ions trapped between the planes. Therefore, the electrodes need to be precisely structured in several vertically separated layers and cannot be fabricated in one plane, resulting in different requirements on the microfabrication processes. In the DCI process described here, a specifically grown substrate with selective etch capabilities similar to SOI substrate layers is used and all electrodes are structured in a single etch step. The first such microfabricated ion trap was created by Stick et al. \cite{Stick} using this process with MBE-grown AlGaAs/GaAs structures, and it constitutes the first realisation of a monolithic ion trap chip.\\ The process starts by growing alternating layers (AlGaAs 4$\mu$m, GaAs 2.3$\mu$m) on a substrate using an MBE system, as shown in Fig. \ref{DCI} (a). The doping grades and atomic percentages (70\% Al, 30\% Ga in the AlGaAs layers; GaAs layers highly doped at $3\times10^{18}\,$cm$^{-3}$) are chosen to achieve low power dissipation in the electrodes. After the structure is grown, a slot is etched into the substrate from the backside to allow optical access, as shown in Fig. \ref{DCI} (b). To provide electrical contact to both GaAs layers, an anisotropic etch process is performed to remove parts of the first AlGaAs and GaAs layers; then metal is deposited onto parts of the top GaAs layer and the now exposed lower GaAs layer, as shown in Fig. \ref{DCI} (c). This is followed by an anisotropic etch step defining all the electrode structures. A subsequent isotropic etch step creates an undercut of the insulating AlGaAs layer to increase the distance between the trapping zone and the dielectric, see Fig. \ref{DCI} (d).\\ \begin{figure} \caption{Fabrication process used for the symmetric trap published in \cite{Stick}.} \label{DCI} \end{figure} This ion trap chip could only be operated at a very low rf amplitude of 8 V as a result of the low breakdown voltage of the insulating AlGaAs material and possible residues on the insulator surfaces. This problem can be addressed by choosing a different and/or thicker insulator material.\\ Removing the insulator material between the electrode layers can avoid this problem almost entirely and was used in the design described by Hensinger et al. \cite{Winni}. This structure is based on depositing layers instead of using a specifically grown substrate and etching material away.
It incorporates a selectively etchable oxide layer, which can be completely removed after electrodes are created on this layer. This sacrifical layer allows for cantilever like electrodes without any supporting oxide layers. Therefore breakdown can only occur over the surfaces and the capacitance between electrodes is kept at a minimum.\\ The process starts with the deposition of an insulating layer (Si$_{3}$N$_{4}$) on an oxidized Si substrate as shown in Fig. \ref{DCI2} (a). Then a conductor layer (polycrystal silicon \cite{Winni}) is deposited onto the insulator and structured using an anisotropic etch process (DRIE) with a mask. Now selectively etchable sacrificial material (polysilicon glass) is deposited on the conductor structures. With a second mask windows are etched into the insulator layer allowing vertical connections to the following upper electrodes, see Fig. \ref{DCI2} (b). This is followed by another conductor layer (polycrystalline silicon) deposited onto the patterned insulator also filling the previously etched windows. Then an anisotropic etch is performed to structure the top conductor layer as shown in Fig. \ref{DCI2} (c). An etch step using a mask is then performed from the backside through the entire substrate creating a slot in the silicon substrate. With an isotropic selective hydrofluoric etch the sacrificial material used to support the polycrystal silicon electrodes during fabrication is completely removed. The electrodes now form cantilever like structures and the previously created slot in the substrate allows optical access to the trapping zone, see Fig. \ref{DCI2} (d).\\ \begin{figure} \caption{Process sequence for the trap proposed in \cite{Winni} \label{DCI2} \end{figure} This design potentially allows for the application of much higher voltages (breakdown for this design would occur over an insulator surface rather than through insulator bulk).\\ Independent of the material, the distance between the two electrode planes is limited by the maximal allowable insulator thickness which is often determined by particular deposition and etch constraints. To keep the ion-electrode distance at a desired level the aspect ratio of horizontal and vertical electrode-electrode distances (see Fig. \ref{DCI2} (d)) has to be much higher than one. As described in chapter \ref{linear} this leads to a low geometric efficiency factor $\eta$ and reduces the trap depths and secular frequencies of these traps. By forming the electrodes on the upper and lower side of an oxidized substrate this aspect ratio can be dramatically decreased. A process technique following this consideration will be discussed next. \subsection{Double Conductor Structures on Oxidized Silicon Substrate (DCS)} In this proposed monolithic process \cite{Brownnutt} an oxidized silicon substrate separates the electrode structures, which allows for much higher electrode to electrode distances. Making use of both sides of the substrate separates this process from all other described techniques. It allows electrode-electrode distances of hundreds of micrometers, which would otherwise be impossible due to maximal thicknesses of deposited or grown oxide layers. Therefore a ratio of one for the horizontal and vertical electrode-electrode distances can be achieved, resulting in an optimal geometric efficiency factor $\eta$. The trap is fabricated in several process steps starting with the oxidization of a silicon wafer. 
Using an anisotropic etch process and a patterned mask, slots are then formed in the SiO$_{2}$ layers on both sides exposing the Si wafer and creating the future trenches between electrodes as shown in Fig. \ref{DCS} (a). Then conductive layers are deposited on both sides of the wafer forming the two electrode layers. The individual electrodes are then created on both sides in an anisotropic etch step using a patterned mask as shown in Fig. \ref{DCS} (b). This is followed by an isotropic wet etch through the silicon using another mask resulting in an under cut of the electrodes and providing the optical access as shown in Fig. \ref{DCS} (c). The now exposed dielectric SiO$_{2}$ layers are then coated in a shadow metal evaporation process and the remaining mask are removed. After all conductor layers are deposited, the layer thickness is increased by means of electroplating resulting in the final structure shown in Fig. \ref{DCS} (d) \cite{Brownnutt}. By using the entire substrate to separate the electrodes this proposed process sequence allows for a low aspect ratio of horizontal and vertical electrode electrode distance. All electrodes sit on a dielectric material increasing the importance of a low rf loss tangent of the insulating SiO$_{2}$ layers. \begin{figure} \caption{Process sequence as used by Brownnutt et al. \cite{Brownnutt} \label{DCS} \end{figure} \subsection{Assembly of Precision Machined structures (PMS)} Non-monolithic fabrication processes are commonly based on the assembly of pre-structured parts, using techniques like wafer bonding or mechanical clamping. The scalability of non-monolithic processes can be limited due to this and therefore are commonly used for ion traps with only one or a few trapping zones. Systems with thousands of isolated electrodes would be difficult to realize with a non-monolithic process. Advantages of the non-monolithic processes are a greater choice of materials used and fabrication techniques, ranging from machined metal structures over laser machined alumina, to structured Si substrates.\\ A non-monolithic process can be based on the assembly of precision machined structures commonly done in workshops and will not be discussed in detail. Two examples for ion traps fabricated with this process are the needle trap with needle tip radius of approximately 3$\mu$m reported by Deslauriers et al. \cite{D06} and a blade trap used by McLoughlin et al. in \cite{McLoughlin}. More complex ion trap geometries including junctions and more electrodes can also be realized with a non-monolithic processes and will therefore be discussed next. \subsection{Assembly of Laser Machined Alumina structures (LMA)} Trap structures can be created by precision laser machining and coating of alumina substrates and mechanically assembling these using spacers and clamps as shown in Fig. \ref{LMA}. The structures are created by laser machining slots into an alumina substrate resulting in cantilevers providing the mechanical stability for the electrodes, see Fig. \ref{LMA} (a). The alumina cantilevers are then coated with a conductive layer, commonly metal, to form the electrodes. To prevent electrical shorting between the electrodes a patterned mask is used during this step. Several structured and coated alumina plates are then mechanically assembled to form a two or three layer trap \cite{T00,Hensinger,Blakestad}. Normally Spacers are used to maintain an exact distance between the plates, which are then exactly positioned and held in place using mechanical pressure, see Fig. 
\ref{LMA} (b).\\ \begin{figure} \caption{``Assembly of Laser Machined Alumina structures" process sequence. (a) Cantilever like structures are formed into the alumina structures using a laser. (b) Parts of the substrate are coated using a mask forming the electrodes and electrical connections. Three similar layers are mounted on top of each other to form a three layer symmetric trap.} \label{LMA} \end{figure} One of the first ion traps with linear sections using this process was reported by Turchette et al. \cite{T00}. Another example for such a trap was used for the first demonstration of corner shuttling of ions within a two-dimensional ion trap array \cite{Hensinger}. Other traps fabricated with this process include the trap used for near adiabatic shuttling through a junction \cite{Blakestad} and the traps reported in refs. \cite{Rowe,Schulz}. The necessary alumina substrates show a small loss tangent. A similar non-monolithic process based on clean room fabrication techniques, which allows for higher precision and greater choice of substrate material, will be discussed next. \subsection{Wafer Bonding of Lithographic Structured Semiconductor Substrates (WBS)} This process technique is similar to the monolithic SOI substrate technique discussed in section \ref{SOI} creating a highly doped conductor on an insulator using wafer bonding techniques. Much higher insulator thicknesses can be achieved with this process and many types of insulator and substrates can be used. This process was used to fabricate symmetric and asymmetric ion traps.\\ \begin{figure} \caption{Processes used to fabricate symmetric and asymmetric non-monolithic ion traps via waver bonding. (a) Microfabricated structures used for the assembly of an asymmetric trap. The structures are still physically connected within each layer. (b) These parts are then waferbonded together and physical connections are removed forming the trap structure. (c) Microfabricated parts used for a symmetric trap. (d) Waferbonded parts creating a symmetric trap. } \label{WBS} \end{figure} First a process used for an asymmetric ion trap will be discussed, starting with a commercially available substrate (Si wafer, resistivity$ = 500 \times 10^{-6} \Omega$ cm). Using anisotropic etching with a patterned mask (DRIE) the wafer is structured. All electrodes are physically connected outside the trapping zone to provide the necessary stability during the fabrication as shown in Fig. \ref{WBS} (a). The structured substrate is then bonded on another substrate with a sandwiched and structured insulator layer providing electrical insulation and mechanical stability as shown in Fig. \ref{WBS} (b). In the insulator layer a gap is formed underneath the trapping region leaving no exposed dielectrics underneath the ion. The bonding also provides the needed structural stability and the physical connections between the electrodes can then be removed by dicing the substrate Fig. \ref{WBS} (b). Similar to the SOI substrate process a conductive layer can be deposited on top of the structured substrate. A patterned mask is not necessary and the conductive layer can be directly deposited. Depending on the material used and substrate an adhesion layer has to be added.\\ Another variation of this process can be used to fabricate symmetric ion traps. Both substrates are identically structured and wafer bonded. An additional etch step is introduced to make the substrate thinner adjacent to the trapping zone in order to improve optical access as shown in Fig. 
\ref{WBS} (c). Similar to the asymmetric trap, the parts are then wafer bonded together with a sandwiched insulator layer Fig. \ref{WBS} (d). When used for asymmetric traps the dielectrics are completely shielded from the trapping zone and for the case of symmetric traps the dielectrics can be placed far away from the trapping zone. Possible geometries for asymmetric ion traps are limited with this technique as buried wires and therefore isolated electrodes are not possible. The fabrication method and choice of electrode materials used for this trap can result in a very low surface roughness of less than 1nm.\\ \section{Anomalous heating}\label{heating} One limiting factor in producing smaller and smaller ion traps is motional heating of trapped ions. While it only has limited impact in larger ion traps it becomes more important when scaling down to very small ion - electrode distances. The most basic constraint is to allow for laser cooling of the ion. If the motional heating rate is of similar magnitude as the photon scattering rate, laser cooling is no longer possible. Therefore for most applications, the ion - electrode distance should be chosen so the expected motional heating rate is well below the photon scattering rate. Depending on the particular application, there may be more stringent constraints. For example, in order to realize high fidelity quantum gates that rely on motional excitation for entanglement creation of internal states \cite{molmer,PLee}, motional heating should be negligible on the time scale of the quantum gate. This timescale is typically related to the secular period $1/\omega_m$ of the ion motion, however, can also be faster \cite{Garcia}. Motional heating of trapped ions in an ion trap is caused by fluctuating electric fields (typically at the secular frequency of the ion motion). These electric fields originate from voltage fluctuations on the ion trap electrodes. One would expect some voltage fluctuations from the electrodes due to the finite impedance of the trap electrodes, this effect is known as Johnson noise. Resulting heating would have a $1/d^2$ scaling \cite{T00} where $d$ is the characteristic nearest ion-electrode distance. However, in actual experiments a much larger heating rate has been observed. In fact, heating measurements taken for a variety of ions and ion trap materials seem to loosely imply a $1/d^4$ dependence of the motional heating rate $\dot{\bar{n}}$. A mechanism beyond Johnson noise must be responsible for this heating and this mechanism was termed 'anomalous heating'. In order to establish a more reliable scaling law, an experiment was carried out where the heating rate of an ion trapped between two needle electrodes was measured \cite{D06}. The experimental setup allowed for controlled movement of the needle electrodes. It was therefore possible to vary the ion - electrode distance and an experimental scaling law was measured $\dot{\bar{n}}\sim1/d^{3.5\pm0.1}$ \cite{D06}. The motional heating of the secular motion of the ion can be expressed as \cite{T00} \begin{equation}\label{ndot} \dot{\bar{n}}=\frac{q^2}{4m\hbar \omega_m}\left(S_E(\omega_m)+\frac{\omega^2_m}{2\Omega^2_T}S_E(\Omega_T \pm \omega_m)\right) \end{equation} $\omega_m$ is the secular frequency of the mode of interest, typically along the axial direction of the trap, $\Omega_T$ is the drive frequency and the power spectrum of the electric field noise is defined as $S_E(\omega)=\int^\infty_{-\infty}\langle E(\tau)E(t+\tau)\rangle e^{i\omega\tau}d\tau$. 
The second term represents the cross-coupling between the noise and the rf fields and can be neglected for axial motion in linear traps, as the axial confinement is produced only via static fields \cite{Wineland,T00}.\\ A model was suggested to explain the $1/d^4$ trend that considers fluctuating patch potentials: a large number of randomly distributed `small' patches on the inside of a sphere, with the ion sitting at the centre at a distance $d$ \cite{T00}. All patches have a power noise spectral density that influences the electric field at the ion position, which is averaged over to eventually deduce the heating rate. Figure \ref{EFN} shows a collection of published motional heating results. Instead of plotting the actual heating rate, we plot the spectral noise density $S_E(\omega_m)$ multiplied by the secular frequency in order to scale out the effect of the different ion masses and secular frequencies used in individual experiments. We also plot a $1/d^4$ trend line. We note that previous experiments \cite{D06,McLoughlin} consistently showed $S_E(\omega_m)\sim 1/\omega_m$, allowing the secular frequency to be scaled out by plotting $S_E(\omega_m)\times\omega_m$ rather than just $S_E(\omega_m)$. \begin{figure} \caption{Previously published measurements of motional heating plotted as the product of the electric field noise spectral density $S_{E}(\omega_m)$ and the secular frequency $\omega_m$ versus the ion-electrode distance $d$, together with a $1/d^4$ trend line.} \label{EFN} \end{figure} In the experiment by Deslauriers et al. \cite{D06}, another discovery was made. The heating rate was found to be massively suppressed by mild cooling of the trap electrodes. Cooling the ion trap from 300 K down to 150 K reduced the heating by an order of magnitude \cite{D06}. This suggests that the patches are thermally activated. Labaziewicz et al. \cite{Labaz} measured motional heating for temperatures as low as 7 K and found a multiple-order-of-magnitude reduction of motional heating at low temperatures. The same group measured a scaling law for the temperature dependence of the spectral noise density for a particular ion trap as $S_E(T)= 42(1+(T/46\,{\rm K})^{4.1})\times10^{-15}$ V$^2$/m$^2$/Hz \cite{Labaz2}. Superconducting ion traps consisting of niobium and niobium nitride were tested above and below the critical temperature $T_c$ and showed no significant change in heating rate between the two states \cite{Wang2}. Within the same study, heating rates were reported for gold and silver trap electrodes at the same temperature (6 K), showing no significant difference between the two normal metals and the superconducting electrodes. This suggests that anomalous heating is mainly caused by noise sources on the surfaces, although its exact cause is not yet fully understood. From the information available, it is likely that surface properties play a critical role; however, other factors such as bulk material properties and oxide layers may also play an important role, and much more work is still needed to fully understand and control anomalous heating. While anomalous heating limits our ability to make extremely small ion traps, it does not prevent the use of slightly larger microfabricated ion traps. Learning how to mitigate anomalous heating is therefore not a prerequisite for many experiments; however, mitigating it will help to increase experimental fidelities (such as in quantum gates) and will allow for the use of smaller ion traps.
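To make Eq. \eqref{ndot} concrete, the short Python sketch below converts an electric field noise spectral density into a heating rate, keeping only the first term (the rf cross-coupling term is neglected, as discussed above for axial motion in linear traps). The ion mass, secular frequency and value of $S_E(\omega_m)$ used here are purely illustrative assumptions and do not correspond to a particular published measurement.
\begin{verbatim}
import math

# Physical constants (SI units)
e = 1.602176634e-19      # elementary charge [C]
hbar = 1.054571817e-34   # reduced Planck constant [J s]
amu = 1.66053906660e-27  # atomic mass unit [kg]

def heating_rate(S_E, mass_kg, omega_m, charge=e):
    """First term of Eq. (ndot): quanta per second gained by the secular
    mode, neglecting the rf cross-coupling term (valid for the axial mode
    of a linear trap, where the confinement is purely static)."""
    return charge**2 * S_E / (4.0 * mass_kg * hbar * omega_m)

# Assumed, illustrative inputs (not taken from a specific experiment):
mass = 171 * amu               # singly charged ion of mass 171 amu
omega_m = 2 * math.pi * 1e6    # 1 MHz axial secular frequency
S_E = 1e-12                    # field noise density [V^2 m^-2 Hz^-1]

print(f"heating rate ~ {heating_rate(S_E, mass, omega_m):.1f} quanta/s")

# The temperature scaling quoted in the text for one particular trap,
# S_E(T) = 42*(1 + (T/46 K)^4.1) * 1e-15 V^2/m^2/Hz, can be used in the
# same way, e.g. S_E_300K = 42 * (1 + (300 / 46)**4.1) * 1e-15
\end{verbatim}
For the assumed inputs above this gives a rate of a few tens of quanta per second, which illustrates why, for a given noise level, lighter ions and lower secular frequencies are more strongly affected.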
\section{Conclusion} Microfabricated ion traps provide the opportunity for significant advances in quantum information processing, quantum simulation, cavity QED, quantum hybrid systems, precision measurements and many other areas of modern physics. We have discussed the basic principles of microfabricated ion traps and highlighted important factors in designing such ion traps, as well as the electrical and material considerations that arise when scaling down trap dimensions. We presented a detailed overview of a wide range of ion trap geometries and showed how they can be fabricated using different fabrication processes employing advanced microfabrication methods. A limiting factor in scaling down trap dimensions even further is motional heating in ion traps, and we have summarized the current knowledge of its nature and how it can be mitigated. The investigation of microfabricated ion traps lies at the interface of atomic physics and state-of-the-art nanoscience. This is a very young research field with room for many step-changing innovations. In addition to the development of trap structures themselves, future research will focus on advanced on-chip features such as cavities, electronics, digital signal processing, fibres, waveguides and other integrated functionalities. Eventually, progress in this field will result in on-chip architectures for next-generation quantum technologies, allowing for the implementation of large scale quantum simulations and quantum algorithms. The inherent scalability of such condensed matter systems, coupled with the provision of atomic qubits that are highly decoupled from the environment, will allow for ground-breaking innovations in many areas of modern physics. \section*{Notes on contributors} \end{document}
\begin{document} \maketitle \begin{abstract} The aim of this paper is to show the possible Milnor numbers of deformations of semi-quasi-homogeneous isolated plane curve singularities. The main result states that if $f$ is irreducible and nondegenerate, then by deforming $f$ one can attain all Milnor numbers ranging from $\mu(f)$ to $\mu(f)-r(p-r)$, where $r$ and $p$ are easily computed from the Newton diagram of $f$. \end{abstract} \section*{Introduction} The main goal of this paper is to identify all possible Milnor numbers attained by deformations of plane curve singularities. This question is closely related to some of Arnold's problems~\cite{ArnoldProblems}. A~direct motivation for our study was a talk of Arkadiusz Płoski on recent developments and open questions regarding jumps of Milnor numbers, given at the Łódź-Kielce seminar in June 2013, as well as questions from an article of Arnaud Bodin~\cite{Bo}. The most interesting point is establishing the initial Milnor jumps, i.e. the greatest Milnor numbers attained by deformations. As was shown in general in~\cite{GuseinZade} and for special cases in~\cite{BK}, it is possible that not all Milnor numbers are attained, meaning that the jumps may be greater than one. Moreover, in these cases the Milnor numbers that are not attained give exactly the first jump greater than one. These results are related to bounds on Milnor numbers of singularities and refer to questions on possible Milnor numbers of singularities of given degree, see for instance \cite{Pl14} or~\cite{GreuelShustinL}. Moreover, the fact that the first jump is not equal to one has in turn interesting implications for multiparameter versal deformations and adjacency of $\mu$-constant strata~\cite{ArnoldProblems}. In this paper we show, in particular, that this is not the case when considering irreducible nondegenerate semi-quasi-homogeneous singularities. The approach presented here stems from the observation that many properties of the sequence of Milnor numbers attained by deformations of a singularity can be established combinatorially, a fact that has not, in our opinion, been sufficiently explored. In this paper we focus on the irreducible case, which in our opinion is the hardest. A careful analysis shows that for semi-quasi-homogeneous singularities, assuming nondegeneracy, the problem boils down to three cases depending on the greatest common divisor of $p$ and $q$ (using the notation \eqref{eqPostacfQSH}). The irreducible case in such a setting is equivalent to saying that $p$ and $q$ are coprime. We show that the initial jumps of Milnor numbers are equal to one. This result, on its own, can be used iteratively for many singularities to prove that all jumps are equal to one, as shown in Section~\ref{sectionRks}. On the other hand, we think of this paper as an introduction to more general results, based on the observation that if the procedure presented here is adjusted, it also yields solutions in the other two cases. For instance, given an isolated singularity $f$ of the form~\eqref{eqPostacfQSH} with ${\rm GCD}(p,q)=g$ such that $1<g<p$, one can show that the first jump is not bigger than $g$ (as was already shown in \cite{Bo} and \cite{W1}), but all Milnor numbers ranging from $\mu(f)-g$ to $\mu(f)-m-r(p-r)+1$ can be attained by deformations of $f$ (with the notation $q\equiv r\ ({\rm mod}\ p)$). We defer the details to a subsequent publication.
One would also like to note that parallel and complementary research on the problem of jumps of Milnor numbers can be found in the recent paper \cite{BKW}. This article is organised as follows. First, we state the main result. In Section \ref{sectionPreliminaries} we begin by introducing notation that we hope will provide more clarity to further considerations. In paragraphs \ref{subsecNewtdiag} and \ref{subsecNewtonNumber} we recall some properties of the Newton diagram and Newton numbers. General combinatorial remarks and a reminder on Euclid's algorithm follow in paragraphs \ref{subsecGeneralComb}, \ref{subsecEEA} and \ref{subsecEEArks}. Section~\ref{secLemmas} presents the steps needed in the proof of Theorem~\ref{mainThm}. It is divided into three parts. In Section~\ref{subsecDecrease} we prove the validity of Procedure~\ref{procka1}, which gives minimal jumps and allows us to replace $p$ and $q$ by $n(a-a')+a'$ and $n(b-b')+b'$ respectively (compare table \eqref{EEA}). In Section~\ref{subsecReductionLine} we prove that an iteration of this procedure, namely Procedure~\ref{procka2}, is valid and gives minimal jumps until $p,q$ are reduced to $n'a'+a''$ and $n'b'+b''$ respectively. In Section~\ref{subsecShortEEA} we deal with the case (that is, the last line in Euclid's Algorithm) when $q\equiv \pm 1\ ({\rm mod}\ p)$. Section~\ref{secMainThmComb} brings the procedures together to prove Theorem~\ref{thmMainCombinatorially}. The main Theorem~\ref{mainThm} follows immediately. The article concludes with some remarks and observations on further developments. \section{Statement of the main result} Throughout this paper we will consider an isolated plane curve singularity $f$, i.e.~the germ $f:(\mathbb{C}^2,0)\to(\mathbb{C},0)$ is analytic and $0$ is the only solution of the system of equations $\nabla f(x,y)=f(x,y)=0$. By a deformation of $f$ we mean any analytic function $F:(\mathbb{C}^{3},0)\to\mathbb{C}$ such that $F(0,\cdot)=f$ and $F(t,\cdot)$ is an isolated singularity for every $t$ small enough. The Milnor number $\mu(f)$ of an isolated singularity $f$ is the multiplicity of $\nabla f$ at zero. A classic result is that the Milnor number of a deformation $F$ of $f$ always satisfies the inequality $\mu(f)\ge \mu(F(t,\cdot))$ for $t$ small enough, see for instance \cite{GreuelShustinL}. Hence it makes sense to consider the strictly decreasing sequence $(\mu_i)_{i=0,\dots,w}$ of all positive integers attained as Milnor numbers of deformations of $f$. We have $\mu_0=\mu(f)$ and $\mu_w=1$. The sequence of positive integers $(\mu_{i-1}-\mu_{i})_{i=1,\dots,w}$ will be henceforth called the sequence of jumps of Milnor numbers. We will consider the isolated singularity $f$ of the form \begin{equation} \label{eqPostacfQSH} f=\sum_{p\alpha + q\beta\ \geq\ pq}c_{\alpha\beta}\ x^{\alpha}y^{\beta} \end{equation} for some positive integers $p,q$. \begin{thm} \label{mainThm} Given a nondegenerate isolated singularity $f$ of the form~\eqref{eqPostacfQSH} with $p<q$ coprime, the sequence of Milnor jumps begins with $$\underbrace{1\ ,\ \dots\ ,\ 1}_{r(p-r)}$$ where $r$ is the remainder of the division of $q$ by $p$. \end{thm} \noindent{\bf Proof.} The proof follows immediately from Kouchnirenko's theorem (see Fact \ref{factNewMil}) and the minimality of the jumps in Theorem \ref{thmMainCombinatorially}.
$\blacksquare$ If $f$ is nondegenerate of the form~\eqref{eqPostacfQSH}, $p,q$ are coprime and $c_{p,0}c_{0,q}\neq 0$, then $f$ is irreducible. On the other hand, any nondegenerate irreducible isolated singularity $f$ is of the form \eqref{eqPostacfQSH} with $p,q$ coprime and $c_{p,0}c_{0,q}\neq 0$. Hence, as a special case, a direct generalisation of \cite[Theorem 2]{Bo} follows. Namely \begin{cor} Given an irreducible nondegenerate isolated singularity, the claim of Theorem \ref{mainThm} holds. \end{cor} \section{Preliminaries on combinatorial aspects} \label{sectionPreliminaries} \subsection{Notations} A Newton diagram of a set of points $\mathcal{S}$ is the convex hull of the set $$\bigcup_{P\in\mathcal{S}} \left(P+\mathbb{R}_+^2\right).$$ We will refer to Newton diagrams simply as diagrams. Since a Newton diagram is uniquely determined by the compact faces of its border, we will often refer only to these compact faces. We say that a diagram $\Gamma$ is supported by a set $\mathcal{S}$ if $\Gamma$ is the smallest diagram containing every point $P\in\mathcal{S}$. We say that $\Gamma$ lies below $\Sigma$ if $\Sigma\subset \Gamma$. Let us denote by $(P_1,\dots,P_n)$ a diagram supported by points $P_1,\dots,P_n$. If $\Gamma$ is a diagram, we will write $\Gamma+(P_1,\dots,P_n)$ for a diagram supported by ${\rm supp} \Gamma \cup \{P_1,\dots,P_n\}$. Any such diagram will be called a deformation of the diagram $\Gamma$. If $P=(p,0)$, $Q=(0,q)$, then any translation of the segment $PQ$ will be denoted by $\triangle(p,q)$; in other words \begin{eqnarray} \triangle(p,q)& := &\text{hypotenuse of a right triangle with base}\nonumber\\ & &\text{of length }p\text{ and height }q\nonumber \end{eqnarray} We will write $n\triangle(p,q)$ instead of $\triangle(np,nq)$. Moreover, for $\triangle(p_1,q_1),\dots,\triangle(p_l,q_l)$ denote by $$(-1)^k\left(\ \triangle(p_1,q_1)+\dots+\triangle(p_l,q_l)\ \right)$$ any translation of a polygonal chain with endpoints $Q$, $Q+(-1)^k[p_1,-q_1]$, $\dots$ , $Q+(-1)^k\left[\sum_{i=1}^l p_i\ ,\ -\sum_{i=1}^l q_i\right].$ Note that if $(-1)^k=1$ we list the segments from top to bottom, and if the sequence of the slopes $q_i/p_i$ is increasing, then $\triangle(p_1,q_1)+\dots+\triangle(p_l,q_l)$ is a Newton diagram. We will also write $\triangle(P,Q)$ instead of $\triangle(p,q)$ when we want to indicate fixed endpoints $P$ and $Q$ of the segment $\triangle(p,q)$. \subsection{Newton diagrams of singularities}\label{subsecNewtdiag} We say that $\Gamma$ is the Newton diagram of an isolated singularity $f(x,y)=\sum_{i,j} c_{ij}x^iy^j$ if $\Gamma$ is the diagram supported by the set ${\rm supp} f=\{P\in\mathbb{Z}^2:c_P\neq 0 \}$. In such a case we denote it by $\Gamma(f)$. We will say that $f$ is nondegenerate if it is nondegenerate in the sense of Kouchnirenko, see \cite{Kush}. Note that \cite{WallNonDeg} defines nondegeneracy differently, but the definitions are equivalent in dimension $2$, see \cite{GreuelNonDegN2}. A Newton diagram of a singularity is at distance at most $1$ from any axis. \subsection{Newton numbers}\label{subsecNewtonNumber} For a diagram $\Gamma\subset\mathbb{R}^2_+$ that has common points with both axes, its Newton number $\nu(\Gamma)$ is equal to $$2A-p-q+1,$$ where $A$ is the area of the complement $\mathbb{R}^2_+\setminus \Gamma$ and $p,q$ are the non-zero coordinates of the points of intersection.
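For instance, the diagram $\triangle(p,q)$ supported by the two points $(p,0)$ and $(0,q)$ has as complement $\mathbb{R}^2_+\setminus \triangle(p,q)$ a right triangle of area $pq/2$, so that $$\nu\left(\triangle(p,q)\right)=2\cdot\frac{pq}{2}-p-q+1=(p-1)(q-1),$$ which agrees with the classical Milnor number of the singularity $x^p+y^q$.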
For any diagram $\Gamma\subset\mathbb{R}^2_+$ let $\nu_{p,q}(\Gamma)$ be the Newton number of a Newton diagram of $\Gamma+\{ (p,0), (0,q) \}$. Note that if $\Gamma$ is a diagram of an isolated singularity, then the definition does not depend on the choice of $p$ or $q$ if they are large enough, see \cite{Len08}. Hence the Newton number $\nu(\Gamma)=\nu_{p,q}(\Gamma)$, where $p,q$ sufficiently large, is well defined in the general case. The motivation to study Newton numbers was given by Kouchnirenko in \cite{Kush}. In particular, \begin{fact}\label{factNewMil} For an isolated nondegenerate singularity the Newton number of its diagram and its Milnor number are equal. \end{fact} Similarly as for Milnor numbers, for a diagram $\Gamma$ consider the strictly decreasing sequence $(\nu_i)_{i=0,\dots,s}$ of positive integers attained as Newton numbers of deformations of $\Gamma$. Of course, $\nu_0=\nu(\Gamma)$ and $\nu_s=1$. The sequence $(\nu_{i-1}-\nu_{i})_{i=1,\dots,s}$ is the sequence of minimal jumps of Newton numbers. Now for two useful properties. \begin{prty} \begin{enumerate}\label{prtyNewtonNr}\label{prty33} \item If $\Sigma$ lies beneath $\Gamma$, then for any system of points $P_1,\dots,P_n$ the diagram $\Sigma+(P_1,\dots,P_n)$ lies below $\Gamma+(P_1,\dots,P_n)$ and the diagrams have common endpoints provided $\Sigma$ and $\Gamma$ had common endpoints. \item If $\Sigma$ lies below $\Gamma$ and they have common endpoints, then the difference of Newton numbers $\nu_{p,q}(\Gamma) - \nu_{p,q}(\Sigma)$ is twice the difference of their areas. \end{enumerate} \end{prty} \subsection{General combinatorial remarks}\label{subsecGeneralComb} Since a Newton number can be computed from the diagram, we will give some classic combinatorial tools that will help us in doing so. \begin{fact}[Pick's Formula] The area of a polygon with vertices from the lattice $\mathbb{Z}^2$ is equal to $${B\over 2}+W-1,$$ where $B$ is equal to the number of points of the lattice $\mathbb{Z}^2$ which lie on its border and $W$ is the number of points of the lattice $\mathbb{Z}^2$ which lie in the interior. \end{fact} \begin{rk}[Tile Argument] Consider a rhomboid $R(p,q)$ with vertices $(p,0)$, $(p-a,b)$, $(0,q)$, $(a,q-b)$, where $bp-aq=\pm 1$. The family $$\mathcal{R}(p,q)= \{\ R(p,q)+i[a,-b]+j[-(p-a),q-b] : \ i,j\in\mathbb{Z}\ \}$$ covers the real plane and consists of rhomboids with pairwise disjoint interiors. Moreover, every point in $\mathbb{Z}^2$ is a vertex of some rhomboid from this family. \end{rk} Indeed, since the area of $R(p,q)$ is $|pq-bp-(p-a)q|=1$, Pick's Formula implies that $R(p,q)\cap \mathbb{Z}^2$ is equal to the set of four vertices of $R(p,q)$. The rest follows immediately. \subsection{EEA}\label{subsecEEA} Let us recall the Extended Euclid's Algorithm. Note that $p,q$ being coprime implies $p,q\neq 1$. \begin{fact}[Extended Euclid's Algorithm] \label{factClassicEEA} Take positive integers $p$ and $q$ which are coprime and $q>p$. The EEA goes as follows $$ \begin{array}{r|ccccccc} \text{variables} & P & Q & A' & A & B' & B & N \\\hline \text{initial condition} & p & q & 0 & 1 & 1& 0 & \left\lfloor {q\over p} \right\rfloor \\ \\ \text{as long as $P\neq 0$ substitute} & Q-NP & P & A-NA' & A' & B-NB' & B' & \left\lfloor {P\over Q-NP} \right\rfloor \\ \\ \text{the output line } P=0 & 0 & 1 & \pm\ p & \mp\ a & \mp\ q & \pm\ b & 0 \end{array} $$ Positive integers $a,b$ in the last line are such that $a<p, b<q$ and $|bp-aq|=1$. \end{fact} We will adjust the algorithm to our needs. 
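A minimal computational sketch may help the reader follow the tables used throughout the proofs; the Python snippet below is only an illustration of Fact \ref{factClassicEEA} and the function names are not standard. It builds the classic table row by row, then reverses it and drops the signs of the $A'$ and $B'$ columns, which corresponds to the modified table introduced next; for $p=40$, $q=73$ it reproduces Example \ref{przyEEA}.
\begin{verbatim}
def eea_rows(p, q):
    """Rows (P, Q, A', A, B', B, N) of the classic EEA table of
    Fact (Extended Euclid's Algorithm); assumes q > p and GCD(p, q) = 1."""
    P, Q, Ap, A, Bp, B = p, q, 0, 1, 1, 0
    N = q // p
    rows = [(P, Q, Ap, A, Bp, B, N)]
    while P != 0:
        P, Q, Ap, A, Bp, B = Q - N * P, P, A - N * Ap, Ap, B - N * Bp, Bp
        N = Q // P if P != 0 else 0       # the output line gets N = 0
        rows.append((P, Q, Ap, A, Bp, B, N))
    return rows

def modified_table(p, q):
    """Columns A', B', N in reverse order with the signs dropped, i.e. the
    rows (a_k, b_k, n_k) of the modified table; the first row is (p, q)."""
    return [(abs(Ap), abs(Bp), N)
            for (P, Q, Ap, A, Bp, B, N) in reversed(eea_rows(p, q))]

# Check against the worked example for p = 40, q = 73 in the text; the N
# values printed for the first and last rows are not used in that table.
for a_k, b_k, n_k in modified_table(40, 73):
    print(a_k, b_k, n_k)
\end{verbatim}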
Reverse the order of the lines and number them from $0$ for the output line to $k_0+2$ for the initial conditions line (we always have at least 3 lines, hence $k_0\ge 0$). Note that $a_0=p, b_0=q$ and we get a modified table \begin{equation} \label{EEAFullGeneral} \begin{array}{cc|l} p & q & \\\hline a_1 & b_1 & n_1\\ \vdots & \vdots & \vdots\\ a_{k_0+1} & b_{k_0+1} & n_{k_0+1}\\ a_{k_0+2} & b_{k_0+2} & \end{array} \end{equation} which consists of columns $A', B'$ and $N$ from original EEA in reverse order and dropping the signs. Note that $a_1=a$ and $b_1=b$. Consider an example that we will use as an illustration throughout this paper. \begin{ex}\label{przyEEA} For $p=40$ and $q=73$ we have $k_0=4$ and $$ \begin{array}{cc|l} 40 & 73 & \\\hline 17 & 31 & 2\\ 6 & 11 & 2\\ 5 & 9 & 1\\ 1 & 2 & 4\\ 1& 1 & 1\\ 0&1 \end{array} $$ In particular, $31\cdot 40-17\cdot 73=-1=(-1)^{4-0+1}$. \end{ex} We will list some properties of EEA adjusted to our notations. \begin{prty}\label{wlEEA} \begin{enumerate} \item The values in the last two lines are always $$ \begin{array}{lll} a_{k_0+1}=1, & b_{k_0+1}=\left\lfloor {q\over p} \right\rfloor, & n_{k_0+1}=a_{k_0}\\ a_{k_0+2}=0, & b_{k_0+2}=1 & \end{array} $$ and necessarily $a_{k_0+1}b_{k_0+2}-b_{k_0+1}a_{k_0+2}=1$. \item Each new line can be obtained as the rest from division from the former two lines (except $a_{k_0+1}$ and $b_{k_0+2}$). In particular for any $k=1,\dots,k_0+1$ we have $$ a_{k+1}=a_{k-1}-n_{k}a_{k},\quad b_{k+1}=b_{k-1}-n_{k}b_{k} , $$ $n_k= \left\lfloor {a_{k-1}\over a_k}\right\rfloor$ for $k< k_0$ and $n_k=\left\lfloor {b_{k-1}\over b_k} \right\rfloor$ for $k\le k_0$. \item The positive integers $a_k$ and $b_k$ are coprime and the sign of $a_kb_{k+1}-b_ka_{k+1}$ alternates. In particular, for $k=0,\dots,k_0$ we get $ b_{k}>a_k\ge 1$ and $$a_kb_{k+1}-b_ka_{k+1}=(-1)^{k_0-k+1}$$ \end{enumerate} \end{prty} \subsection{Remarks on EEA}\label{subsecEEArks} Let $q=mp+r$, where $r<p<q$ and $q,p$ are coprime. Consider EEA beginning with \begin{equation} \label{EEA} \begin{array}{cc|l} p & q & \\\hline a & b & n\\ a' & b' & n'\\ a'' & b'' & n''\\ \dots & \dots & \dots \end{array} \end{equation} and denote $${\rm sign}(p,q):=bp-aq.$$ Note that ${\rm sign}(p,q)$ is equal $(-1)^{k_0+1}$. We will retain this notation throughout the rest of the paper and prove some technical properties that will be useful. \begin{prty} \label{prtyN} We may assume that $n$ in EEA is >1. \end{prty} \noindent{\bf Proof.} Instead of \eqref{EEA} consider a shorter EEA $$ \begin{array}{cc|l} p & q & \\\hline a' & b' & n'+1\\ a'' & b'' & n''\\ \dots & \dots & \dots \end{array} $$ We have $pb'-qa'= -n \cdot {\rm sign}(p,q) =-{\rm sign}(p,q)$ and $p=a+a'=n'a'+a'+a''$. Hence in the table above signs alternate and the table above has all properties listed in Property \mathbb{R}f{wlEEA}. The rest of the table does not change. $\blacksquare$ \begin{prty} \label{prtyShortEEA} EEA ends with $$ \begin{array}{cc|l} \dots & \dots & \dots\\ \tilde{a} & \tilde{a}m+1 & \tilde{n}\\ 1 & m & \tilde{a}\\ 0 & 1 & \end{array} $$ where $\tilde{a},\tilde{n}$ are positive integers. Moreover, $\tilde{a}=1$ iff $2r>p$. \end{prty} \noindent{\bf Proof.} The last lines were already given in Property \mathbb{R}f{wlEEA}. We need to prove the second part. To study the last lines recall the classic EEA, Fact \mathbb{R}f{factClassicEEA}. It easily follows that $\tilde{a}=\lfloor p/r \rfloor$. Hence $\tilde{a}$ is equal to one if and only if $2r>p$. 
$\blacksquare$ \begin{prty} \label{prtyResztaRownaN} \label{prtyShortEEAa1} If $k_0=2$, EEA is of the form \begin{equation} \label{eqSchortEEA} \begin{array}{cc|l} p & q & \\\hline a & am+1 & n=r\\ 1 & m & n'=a\\ 0 & 1 & \end{array} \end{equation} and $a=1$ iff $q=mp+p-1$. \end{prty} \noindent{\bf Proof.} Taking into account Property~\mathbb{R}f{prtyShortEEA} above one needs to show only that $n=r$ as well as the equivalence. Indeed, if $a''=0$, then by the above Property~\mathbb{R}f{prtyShortEEA} we get $\tilde{a}=a, \tilde{a}m+1=b$, $\tilde{n}=n$ thus $(am+1)p-aq=1$. Hence $1=a(mp-q)+p=-ra+p$ and it follows that $p=ra +1$ on one hand, while $p=na+1$ on the other. Moreover, if $a=1$, then $p(m+1)-q=1$. On the other hand, if $q=mp+p-1$ the EEA is of the form \eqref{eqSchortEEA} with $a=1$. $\blacksquare$ Moreover, as a special case of Property \mathbb{R}f{prtyShortEEA} we get \begin{prty} \label{prtyVeryShortEEA} $q=mp+1$ if and only if EEA is of the form \begin{equation} \label{eqVerySchortEEA} \begin{array}{cc|l} p & q & \\\hline 1 & m & p\\ 0 & 1 & \end{array} \end{equation} \end{prty} Now for two technical properties \begin{prty} \label{prtyEEApjqj} Take $j< n'$, any positive integer $l$, $p^j=l(a-ja')+a'$ and $q^j=l(b-jb')+b'$. Then $p^j,q^j$ are coprime and their EEA is of the form $$ \begin{array}{cc|l} l(a-ja')+a' & l(b-jb')+b' & \\\hline a-ja' & b-jb' & l\\ a' & b' & n'-j\\ a'' & b'' & n''\\ \dots & \dots & \dots \end{array} $$ \end{prty} Indeed, $(n(a-ja')+a')(b-jb') - (n(b-jb')+b')(a-ja') = a'(b-jb')-b'(a-ja')=a'b-b'a = {\rm sign}(p,q)$. \begin{prty} \label{prtyEEAatilde} Take a positive integer $N$, assume $a''\neq 0$. Then $Na'+a''$ and $Nb'+b''$ are coprime and their EEA is of the form $$ \begin{array}{cc|l} Na'+a'' & Nb'+b'' & \\\hline a' & b' & N\\ a'' & b'' & n''\\ \dots & \dots & \dots \end{array} $$ \end{prty} Indeed, $(Na'+a'')b''-(Nb'-b'')a''=a''b'-a'b''=-{\rm sign}(a',b')$. Now it is easy to see that if $a\neq 1$, then $a'\neq 0$. \section{Main steps of proof}\label{secLemmas} Choose the line $i\le k_0$ in the EEA \eqref{EEAFullGeneral} for $a_0,b_0$, where $1<a_0<b_0$ are coprime. Denote $p=a_i,q=b_i$ and assume EEA is of the form \eqref{EEA}. We have $bp-aq=(-1)^{k_0-i+1}$, recall ${\rm sign}(p,q)=bp-aq$. We will consider deformations of $\triangle(p,q)$. Let $Q$ denote the upper and $P$ denote the lower endpoint of the diagram $\triangle(p,q)$ if ${\rm sign}(p,q)=-1$, reversely if ${\rm sign}(p,q)=1$. \subsection{Decreasing $p$ and $q$} \label{subsecDecrease} In this paragraph, informally speaking, we will aim at replacing $p=na+a'$ by $p=n(a-a')+a'$ (and at the same time $q=nb+b'$ by $n(b-b')+b'$). In the next paragraph~\mathbb{R}f{subsecReductionLine} we will prove that one can do it recursively until $a-ka'=a''$. This will allow us to use EEA and reduce the problem to repetition of the procedure for consecutive levels of the EEA table~\eqref{EEAFullGeneral}. Consider a diagram $$ \Gamma^k = -{\rm sign}(p,q)\big( k\triangle(a',b')+\triangle(p-ka,q-kb)+k\triangle(a-a',b-b')\big)$$ where $0\leq k\leq n$. Denote also the points $$P^k=P-{\rm sign}(p,q) k[-(a-a'),b-b'],\quad Q^k=Q-{\rm sign}(p,q)k[a',-b'], $$ in the support of $\Gamma^k$ such that $$\Gamma^k = -{\rm sign}(p,q)\left(\triangle(Q,Q^k)+\triangle(Q^k,P^k)+\triangle(P^k,P)\right).$$ Note that $\Gamma^0=\triangle(p,q)$, $$\Gamma^n=-{\rm sign}(p,q)\big(\ (n+1)\triangle(a',b')+n\triangle(a-a',b-b')\ \big)$$ and every $\Gamma^k$ is a Newton diagram. 
Consider points $$P_i^k = P^k-{\rm sign}(p,q)\cdot i[-a,b],\quad i=1,\dots,n-k$$ and $$D_i^k = P_i^k+{\rm sign}(p,q)[-a',b'],\quad i=1,\dots,n-k.$$ \begin{proc}\label{procka1} Consecutively for $k=0,\dots,n-1$ take diagrams $$\Gamma^k+P_i^{k}\quad i=1,\dots,n-k,$$ $$\Gamma^k+D_i^{k}\quad i=1,\dots,n-k.$$ \end{proc} Note that $$\Gamma^k+P_i^{k} = \Gamma+(P_{n-k-1}^{k-1},P_i^{k},D_1^{k-1}) = \Gamma + (P^k,P^k_i,Q^k)$$ and analogously $$\Gamma^k+D_i^{k} = \Gamma+(P_{n-k-1}^{k-1},D_i^{k},D_1^{k-1}) = \Gamma + (P^k,D^k_i,Q^k).$$ \begin{prop}\label{propReductioAtoA1} If $a\neq 1$ the choice of deformations in Procedure \mathbb{R}f{procka1} gives the opening terms of the sequence of minimal jumps of Newton numbers $$\underbrace{1\ ,\ \dots\ ,\ 1}_{n(n+1)}$$ \end{prop} Proof will follow after some lemmas below. \begin{lem}\label{lemPropertiesP} For any fixed $k$ we have \begin{enumerate} \item For $i=1,\dots,n-k-1$ the deformation $\Gamma^k+P_i^k$ has the diagram $$-{\rm sign}(p,q)(\triangle(Q,Q^k)+\triangle(Q^k,P_i^k)+\triangle(P_i^k,P^k)+\triangle(P^k,P)).$$ \item The deformation $\Gamma^k+P_{n-k}^k$ has the diagram $$-{\rm sign}(p,q)\big((k+1)\triangle(a',b')+(n-k)\triangle(a,b) +k\triangle(a-a',b-b')\big).$$ \end{enumerate} \end{lem} \noindent{\bf Proof.} First note that $P_1^k,\dots,P_{n-k}^k$ and $P^k$ are colinear. Moreover, from Euclid's Algorithm $p-ka=(n-k)a + a'$ and $q-kb=(n-k)b+b'$ with $a'b'\neq 0$ and $k=0,\dots,n$. To prove (1) it suffices to note that as a consequence the slopes of $\triangle(P^k,P), \triangle(P^k,P_i^k), \triangle(P_i^k,Q^k)$ and $\triangle(Q^k,Q)$ exactly in that order constitute a strictly monotone sequence. Point (2) follows from the above considerations taking into account the fact that $p-na=a'$ and $q-nb=b'$, hence the slopes of $\triangle(P_{n-k}^k,Q^k)$ and $\triangle(Q^k,Q)$ are equal. $\blacksquare$\\ \begin{lem} \label{lemjumpP} For fixed $k$ and $i=1,\dots,n-k$ we have $$\nu(\Gamma^k)-\nu(\Gamma^k+P_i^k)=i.$$ \end{lem} \noindent{\bf Proof.} Note that from Lemma \mathbb{R}f{lemPropertiesP} it follows that we add only points that are in the interior of the triangle with hypotenuse $\triangle(p-ka,q-kb)$. Moreover, they all lie on or over the line passing through $Q^k$ with the slope as of $\triangle(a',b')$ and on or over the line passing through $P^k$ with the slope as in $\triangle(a-a',b-b')$. Hence the difference of Newton numbers of $\Gamma^k$ and $\Gamma^k+P_i^k$ is equal to double the area of their difference. Now the claim easily follows from Tile Argument and Pick's formula, since double the area of the triangle $P^kQ^kP^k_{n-k}$ is equal $n-k$. $\blacksquare$\\ \begin{lem}\label{lemPropertiesD} For any fixed $k$ we have \begin{enumerate} \item For $i=2,\dots,n-k$ the deformation $\Gamma^k+D_i^k$ has the diagram $$-{\rm sign}(p,q)(\triangle(Q,Q^k)+\triangle(Q^k,D_i^k)+\triangle(D_i^k,P^k)+\triangle(P^k,P)).$$ \item The above holds for $i=1$ provided $a\neq 1$. \end{enumerate} \end{lem} \noindent{\bf Proof.} First note that from their definition, the points $D_1^k, \dots, D_{n-k}^k$ all lie on a translation of the segment with endpoints $P_1^k, P_{n-k}^k$ by the vector $[-a',b']$. Hence to prove (1) it suffices to note that $Q,Q^k$ and $D_{n-k}^k$ are colinear and the slope of $\triangle(D_2^k,P^k)$ is bigger then that of $\triangle(P^k,P)$ in the case ${\rm sign}(p,q)=-1$ (smaller in the other case). Note that we have $a\neq a'$ unless $a=a'=1$. 
Hence if $a\neq a'$, we have equality of the slopes of $\triangle(D_{1}^k,P^k)$ and $\triangle(P^k,P)$, which proves (2). $\blacksquare$\\ \begin{rk} If $a\neq 1$, $$\Gamma^k+(P_{n-k}^k, D_1^k)=\Gamma^0 + (P_{n-k}^k, D_1^k) = \Gamma^{k+1}$$ and $\Gamma^n$ lies below all points considered above. \end{rk} \begin{lem} \label{lemjumpD} For fixed $k$, if $a\neq 1$, then for $i=1,\dots,n-k$ we have $$\nu(\Gamma^k)-\nu(\Gamma^k+D_i^k)=n-k+i.$$ \end{lem} \noindent{\bf Proof.} Similarly to the opening argument of the proof of Lemma \ref{lemjumpP}, we derive from Lemma \ref{lemPropertiesD} that $\nu(\Gamma^k)-\nu(\Gamma^k+D_i^k)$ is equal to double the area of the difference of the diagrams. Moreover, this difference can be computed when considering only the segment $\triangle(p-ka,q-kb)$. Consider double the area of $P^kQ^kD^k_{j}$ with fixed $j$. We will compute it using Pick's formula. Without loss of generality we can assume that $bp-aq=-1$. Note that due to the Tile Argument, the only points that may lie in the triangle $P^kQ^kD^k_{j}$ are the points $P_i^k$. First, note that any $P_i^k$ with $i<j$ lies in the interior of the triangle with vertices $P^kQ^kD^k_{j}$; to see this, it suffices to notice that the segments $P^kP^k_j$ and $D_i^kD_j^k$ are parallel. Any $P_i^k$ with $i\ge j$ lies in the interior of the triangle with vertices $P^kQ^kD^k_{j}$ if and only if the slope of $Q^kD^k_j$ is greater than the slope of $Q^kP^k_i$ (in absolute value), i.e. \begin{equation} \label{EQlemma4slopes} (q-kb-jb+b')(p-ka-ia)>(p-ka-ja+a')(q-kb-ib), \end{equation} whereas $P_i^k$ lies on its side if and only if there is an equality of the slopes. Equation~\eqref{EQlemma4slopes} is equivalent to $$(i-j)(b(p-ka)-(q-kb)a)+b'(p-ka)-a'(q-kb)-i>0.$$ Note that from $q=nb+b'$ and $p=na+a'$ it follows that $a'q-b'p=-n$. Hence $a'(q-kb)-b'(p-ka)=k-n$ and \eqref{EQlemma4slopes} is equivalent to $${j+n-k\over 2}>i. $$ Of course, equality in \eqref{EQlemma4slopes} holds if and only if ${j+n-k\over 2}=i$. Denote by $\#$ the number of elements of a set. The above combined with Pick's formula gives that double the area of the triangle with vertices $P^kQ^kD^k_{j}$ is $$B+2W-2 = 3 + \#\left\{i\ |\ {j+n-k\over 2}=i\ge j \right\}+ 2(j-1)+2\#\left\{i\ |\ {j+n-k\over 2}>i\ge j \right\} -2.$$ If $j+n-k$ is even, then the above is equal to $$1+ 1+ 2(j-1)+2\left({j+n-k\over 2}-1-j+1\right)=j+n-k.$$ If $j+n-k$ is odd, then the above is equal to $$1+ 0 + 2(j-1)+2\left({j+n-k-1\over 2}-j+1\right)=j+n-k.$$ This gives the assertion. $\blacksquare$\\ \begin{rk}\label{rkJumps} In particular, for $a\neq 1$ Lemmas \ref{lemjumpP} and \ref{lemjumpD} imply that the sequence of minimal jumps for the diagram $\Gamma^k$ begins with $$\underbrace{\ 1\ ,\ \dots\dots\ ,\ 1\ }_{2(n-k)}.$$ \end{rk} \noindent{\bf Proof of Proposition \ref{propReductioAtoA1}.} Thanks to Lemmas \ref{lemjumpP} and \ref{lemjumpD} (see Remark \ref{rkJumps}) we only have to show that $$\nu(\Gamma^k)-\nu(\Gamma^{k+1})=2(n-k)$$ for $k=0,\dots,n-1$. We compute this number as double the area of the polygon with vertices $P_{n-k}^{k}, D_1^{k}, P_{n-k-1}^{k+1}, D_1^{k+1}$. From the Tile Argument for $[a,-b]$ it follows that the only integer points on the boundary are the vertices, whereas $P^k_j$ for $j=1,\dots,n-k-1$ lie in the interior. Again from the Tile Argument for $[a',-b']$ these points are the only integer ones to lie there.
Therefore, from Pick's formula we get $$\nu(\Gamma^k)-\nu(\Gamma^{k+1}) = 4+2(n-k-1) -2=2(n-k).$$ Hence the consecutive choices in Procedure \ref{procka1} give consecutive $1$'s in the sequence of minimal jumps and $$\nu(\Gamma^0) - \nu(\Gamma^n)=\sum_{k=0}^{n-1}2(n-k)=n(n+1).$$ This ends the proof. $\blacksquare$\\ \subsection{Reduction of the line in EEA} \label{subsecReductionLine} We will now recursively substitute $p$ by $n(a-a')+a'$ (compare the previous Subsection~\ref{subsecDecrease}), i.e.\ we will reduce $p$ to $a$. Consider diagrams $$\Sigma^j=-{\rm sign}(p,q)\big(nj\ \triangle(a',b')+\triangle\left(\ a'+n(a-ja'),\ b'+n(b-jb')\ \right)\big)$$ for $j=0,\dots,n'$. Let $$P(\Sigma^j)=Q-{\rm sign}(p,q)nj[a',-b']$$ be the point such that $$\Sigma^j=-{\rm sign}(p,q)(\triangle\left(Q,P(\Sigma^j)\right)+\triangle\left(P(\Sigma^j),P\right)).$$ Note that $\Sigma^0=\triangle(p,q)$, $$\Sigma^1=-{\rm sign}(p,q)(n\ \triangle(a',b')+\triangle\left(\ a'+n(a-a'),\ b'+n(b-b')\ \right))$$ and $$\Sigma^{n'}=-{\rm sign}(p,q)(nn'\triangle(a',b')+\triangle(a'+na'',b'+nb'')).$$ Every $\Sigma^j$ is a diagram. \begin{figure} \caption{Procedure} \label{r:1} \label{f:1} \end{figure} Let $p^j=a'+n(a-ja')$, $q^j=b'+n(b-jb')$ and $$ \begin{array}{c} \Gamma^k(\Sigma^j) =-{\rm sign}(p,q) \Big( (nj + k)\triangle(a',b') + \triangle\left(p^j-k(a-ja'),q^j-k(b-jb')\right) +\\ + k\triangle\left(a-(j+1)a',b-(j+1)b'\right) \Big). \end{array} $$ Hence $$\Gamma^k(\Sigma^j) = -{\rm sign}(p,q)( \triangle(Q,Q^k(\Sigma^j))+\triangle(Q^k(\Sigma^j),P^k(\Sigma^j))+\triangle(P^k(\Sigma^j),P) ),$$ where $$P^k(\Sigma^j)=P-{\rm sign}(p,q)\cdot k[-(a-(j+1)a'),b-(j+1)b'],$$ $$Q^k(\Sigma^j)=Q-{\rm sign}(p,q)\cdot (nj+k)[a',-b'].$$ Consider points $$P_i^k(\Sigma^j) = P^k(\Sigma^j)-{\rm sign}(p,q)\cdot i[-(a-ja'),b-jb'],\quad i=1,\dots,n-k$$ and $$D_i^k (\Sigma^j)= P_i^k(\Sigma^j)+{\rm sign}(p,q)[-a',b'],\quad i=1,\dots,n-k.$$ \begin{proc}\label{procka2} Fix $j\in\{0,\dots,n'-1\}$. Consecutively for $k=0,\dots,n-1$ take diagrams $$\Gamma^k(\Sigma^j)+P_i^{k}(\Sigma^j)\quad i=1,\dots,n-k,$$ $$\Gamma^k(\Sigma^j)+D_i^{k}(\Sigma^j)\quad i=1,\dots,n-k.$$ \end{proc} Note that $$\Gamma^n(\Sigma^j)= -{\rm sign}(p,q)\big( \left(n(j+1)+1 \right) \triangle (a',b') + n\ \triangle\left(a-(j+1)a',b-(j+1)b'\right) \big).$$ \begin{rk} \label{rkAllDeformationsAboveGammaN} All points $P_i^{k}(\Sigma^j)$ and $D_i^{k}(\Sigma^j)$ lie on or above the diagram $\Gamma^n(\Sigma^{n'-1})= -{\rm sign}(p,q) ( \left(nn'+1 \right) \triangle (a',b') + n\ \triangle\left(a'',b''\right) ).$ \end{rk} Below is a generalisation of Proposition \ref{propReductioAtoA1}. \begin{prop} \label{propSigmaKdoSigmaK1} If $a\neq 1$ the choice of deformations in Procedure \ref{procka2} for the diagram $\Sigma^j$ gives the opening terms of the sequence of minimal jumps of Newton numbers $$\underbrace{1\ ,\ \dots\ ,\ 1}_{n(n+1)}$$ provided $j<n'-2$ or $j=n'-1$ and $a''\neq 0$. \end{prop} \noindent{\bf Proof.} Apply Proposition \ref{propReductioAtoA1} to $\triangle\left( p^j,q^j \right)$, where $$p^j=a'+n(a-ja')\quad {\rm and}\quad q^j=b'+n(b-jb').$$ From Property \ref{prtyEEApjqj} the last diagram $\Gamma^n$ is of the form $$-{\rm sign}(p,q) \big((n+1)\triangle(a',b')+n\triangle(a-ja'-a',b-jb'-b') \big).$$ Hence $nj\triangle(a',b')+\Gamma^n$ is a diagram. Moreover, it is exactly $\Gamma^n(\Sigma^j)$ and no segment lies on any axis if $j<n'-2$ or $j=n'-1$ and $a''\neq 0$.
Therefore, all preceding $\Gamma^k$ for $k=0,\dots, n-1$ coupled with $nj\triangle(a',b')$ are also diagrams (in fact equal to $\Gamma^k(\Sigma^j)$). Hence the claim follows from Property \ref{prtyNewtonNr} and Proposition \ref{propReductioAtoA1}. $\blacksquare$\\ \begin{prop} \label{propSigma0DoSigmaN} Let $a\neq 1$ and $a''\neq 0$. For consecutive $j=0,\dots,n'-1$ consider points as in Procedure \ref{procka2}. They give the opening terms for $\triangle(p,q)$ of the sequence of minimal jumps of Newton numbers $$\underbrace{1\ ,\ \dots\ ,\ 1}_{n(nn'+1)}$$ \end{prop} The proof will follow immediately from \begin{lem} \label{lemSigmaGamma} For $j=0,\dots,n'-1$ the diagram $\Gamma^n(\Sigma^j)$ lies below $\Sigma^{j+1}$. Moreover, if $j<n'-2$ or $j=n'-1$ and $a''\neq 0$ we have $$\nu(\Sigma^{j+1})=\nu(\Gamma^n(\Sigma^j))+n.$$ \end{lem} \noindent{\bf Proof.} From Property \ref{prtyNewtonNr} and the form of the diagrams one has to compute double the area of the triangle with vertices $P(\Sigma^{j+1}), P(\Sigma^{j+1})-{\rm sign}(p,q)[a',-b']$ and $Q$. From the Tile Argument for $p^{j+1}$ and $q^{j+1}$ as well as for $a'$ and $b'$, the only points that lie in this triangle lie on its sides and there are exactly $n+2$ such points. Hence from Pick's formula we get the claim. $\blacksquare$\\ \noindent{\bf Proof of Proposition \ref{propSigma0DoSigmaN}.} The claim follows from Lemma \ref{lemSigmaGamma} above and the fact that we have $n'$ steps. Each step gives $n^2+n$ ones in the sequence (see Proposition \ref{propSigmaKdoSigmaK1}), where $n$ ones are attained twice with the last deformations of $\Sigma^j$ and the initial deformations of $\Sigma^{j+1}$ (see Lemma \ref{lemSigmaGamma} above), with the exception of the $(n'-1)$th step. Since $a''\neq 0$, none of the points lie on an axis. $\blacksquare$\\ \begin{figure} \caption{Procedure} \label{r:2} \end{figure} Suppose $a''\neq 0$. Consider the diagram $$\Theta=-{\rm sign}(p,q) \big(\triangle\left((nn'+1)a'+a'', (nn'+1)b'+b'' \right) + (n-1) \triangle(a'',b'') \big). $$ Note that $n-1>0$ from Property \ref{prtyN}. Moreover, from Property \ref{prtyEEAatilde} we get that the EEA of this pair is the same as the EEA of $a,b$ up to the first line. \begin{lem} \label{lemTheta} If $a''\neq 0$, then $\Theta$ lies above $\Gamma^n(\Sigma^{n'-1})$ and $$\nu(\Theta)-\nu(\Gamma^n(\Sigma^{n'-1}))=nn'+1$$ \end{lem} \noindent{\bf Proof.} Note that $(nn'+1)a'+a''$ and $(nn'+1)b'+b''$ are coprime and their EEA series is given in Property \ref{prtyEEAatilde}. Hence using the Tile Argument and Pick's formula we easily get the claim. $\blacksquare$\\ \subsection{Short EEA} \label{subsecShortEEA} In this subsection we consider the two remaining cases, i.e.\ what happens if $a''=0$ or $a'=0$. Note that $a=1$ implies $a''=0$ or $a'=0$, i.e.\ the EEA is of the form \eqref{eqSchortEEA} or \eqref{eqVerySchortEEA}. Let us recall that $q=mp+r$, where $0<r<p$. From Property \ref{prtyResztaRownaN} we have $r=n$ for the short EEA. Under the notation of Procedure \ref{procka1} consider \begin{proc} \label{prockaShortNie1} Let $a''=0$ and $ a\neq 1$. For $j\in\{0,\dots,n'-2\}$ consecutively for $k=0,\dots,n-1$ take diagrams $$\Gamma^k(\Sigma^j)+P_i^{k}(\Sigma^j)\quad i=1,\dots,n-k,$$ $$\Gamma^k(\Sigma^j)+D_i^{k}(\Sigma^j)\quad i=1,\dots,n-k.$$ \end{proc} Note that the last diagram in the procedure above is $\Gamma^n(\Sigma^{n'-2}) $ of the form \begin{equation} \label{eqDiagramKoncowy} n\triangle(1,m+1)+(n(n'-1)+1)\triangle(1,m).
\end{equation} \begin{proc} \label{prockaShortA1} Let $a'\neq 0$ and $ a= 1$. Consecutively take diagrams $$(P,Q)+Q_i\quad i=1,\dots,p-1,$$ where $Q_i=Q-i[-1,m+1].$ \end{proc} Note that $a'=0$ iff $q=mp+1$. \begin{proc} \label{prockaVeryShortEEA} Let $a'=0$. Take diagrams $$(P,Q)+P_i \quad i=1,\dots,p-1,$$ where $P_i=P+i[-1,m].$ \end{proc} \begin{figure} \caption{Procedures} \label{r:4} \end{figure} \begin{prop} \label{propShortEEA} If $a''=0$ or $a'=0$, the opening terms of the sequence of minimal jumps of Newton numbers are $$\underbrace{1\ ,\ \dots\ ,\ 1}_{r(p-r)}$$ \end{prop} \noindent{\bf Proof.} We have three cases. If $q=mp+1$, i.e.\ $a'=0$, consider the choice of deformations in Procedure \ref{prockaVeryShortEEA}; the claim follows immediately from Lemma \ref{lemjumpP} and Property \ref{prtyVeryShortEEA}. Note that the number of jumps above is also equal to $r(p-r)$, because here $r=1$ and the diagram $(P,Q)+P_{p-1}$ is of the form \eqref{eqDiagramKoncowy}. If $a'\neq 0$ and $ a= 1$, then $a'=1, a''=0$. Consider the choice of deformations in Procedure \ref{prockaShortA1}; the claim follows immediately from Lemma \ref{lemjumpP}. Note that the number of jumps above is also equal to $r(p-r)$, because here $r=p-1$, see Property~\ref{prtyShortEEAa1}. Again, the diagram $(P,Q)+Q_{p-1}$ is of the form \eqref{eqDiagramKoncowy}. If $a''=0$ and $ a\neq 1$, consider deformations in Procedure \ref{prockaShortNie1}; the proof is the same as in Proposition \ref{propSigma0DoSigmaN} and follows from Lemma \ref{lemSigmaGamma}. The length of the sequence of jumps is hence equal to $(n'-1)n^2+n=n(na-n+1)=r(p-r)$ thanks to Properties~\ref{prtyResztaRownaN} and \ref{prtyShortEEA}. Hence the claim. $\blacksquare$\\ \begin{rk} \label{rkLastDiagramShort} Note that if $a''=0$ or $a'=0$, the last diagram is of the form \eqref{eqDiagramKoncowy}. \end{rk} \section{Main theorem combinatorially}\label{secMainThmComb} \begin{thm} \label{thmMainCombinatorially} Given a diagram $\triangle(a_0,b_0)$, where $a_0,b_0$ are coprime, the sequence of minimal jumps of Newton numbers commences with $$\underbrace{1\ ,\ \dots\ ,\ 1}_{r(a_0-r)}$$ where $r$ is the remainder of the division of $b_0$ by $a_0$. \end{thm} \noindent{\bf Proof.} Suppose EEA is of the form \eqref{EEAFullGeneral}. Consider an auxiliary sequence $$z_0=1,\quad z_1=n_1,\quad z_k=z_{k-2} + z_{k-1}n_k.$$ This sequence coincides with the column $P$ in reverse order in EEA, see Fact \ref{factClassicEEA}. Note that by Property \ref{prtyN} we may assume that $z_1>1$ and hence $(z_k)$ is strictly increasing. \begin{proc} \label{prockaWszystko} Take $k=1$. Put $L=z_{k-1}-1$, $$ \begin{array}{ll|l} p=z_ka_k+a_{k+1} & q=z_kb_k+b_{k+1} & \\\hline a=a_k & b=b_k & n=z_k\\ a'=a_{k+1} & b'=b_{k+1} & n'=n_{k+1}\\ a''=a_{k+2} & b''=b_{k+2} & \end{array} $$ and consider deformations of the diagrams $$\Theta_k=-{\rm sign}(p,q) \big( L\triangle(a',b') + \triangle(p,q) \big).$$ For $\triangle(p,q)$: If $a'=0$, use Procedure \ref{prockaVeryShortEEA}. If $a''=0$, $a'=1$ and $a=1$ use Procedure \ref{prockaShortA1}. If $a''=0$, $a'=1$ and $a\neq 1$ use Procedure \ref{prockaShortNie1}. Otherwise, use Procedure \ref{procka2}, afterwards substitute $k$ by $k+1$ and proceed as above. \end{proc} This procedure will end, because the EEA sequence is finite. The last step is the $k_0$th step.
We have $\Theta_0=\triangle(a_0,b_0)$, all diagrams $\Theta_k$ lie below $\triangle(a_0,b_0)$ and have fixed endpoints for $k_0> k>0$. We will argue that the procedure above gives the asserted jumps. For $\nu(\Theta_0)$ the initial jumps are $1$ due to Propositions \ref{propSigma0DoSigmaN} and \ref{propShortEEA}. For $k>0$ we only need to show that all intermediate polygonal chains are diagrams. \begin{figure} \caption{Using Procedure} \label{r:3} \end{figure} Indeed, if $a''\neq 0$, recall that due to Remark \ref{rkAllDeformationsAboveGammaN} all points considered for $\triangle(p,q)$ lie above $\Gamma^n(\Sigma^{n'-1}) = -{\rm sign}(p,q) ( \left(nn'+1 \right) \triangle (a',b') + n\ \triangle\left(a'',b''\right) )$. Assume ${\rm sign}(p,q)=-1$. Note that $L\triangle(a',b') + \Gamma^n(\Sigma^{n'-1})$ is a diagram with the same endpoints as $\Theta_k$. Hence all intermediate diagrams combined with $L\triangle(a',b')$ as the initial segment are diagrams. Recall Property \ref{prtyNewtonNr}. By Lemma \ref{lemTheta} we get that $\nu(\Theta_k)$ has already been attained in the sequence. Hence from Proposition \ref{propSigma0DoSigmaN} the jumps are at most 1. The same argument applies when ${\rm sign}(p,q)=1$. Moreover, if $a''=0$ or $a'=0$, the same argument together with Proposition \ref{propShortEEA} gives that the jumps are at most 1. Now we only have to compute the total number of jumps. The last diagram, due to Remark \ref{rkLastDiagramShort}, is $M\triangle(1,m+1)+N\triangle(1,m)$ for some positive integers $M,N$. Hence we have obtained all numbers ranging from $\nu(\triangle(a_0,b_0))$ down to $\nu(M\triangle(1,m+1)+N\triangle(1,m))$; the difference is double the area and is equal to $r(a_0-r)$. Indeed, we have $a_0=M+N$ and $b_0=M(m+1)+Nm=m(M+N)+M$. Hence $M=r$ and $N=a_0-r$. Thus double the area of the difference is $a_0b_0-Mb_0-a_0mN=N(b_0-a_0m)=r(a_0-r)$. This gives the claim. $\blacksquare$\\ Note that the above can also be computed explicitly using the EEA and the inductive definition of $z_k$. \begin{rk} Theorem \ref{thmMainCombinatorially} is at its weakest for $q\equiv \pm 1 ({\rm mod}\ p)$, when the function $r(p-r)$ is minimised and equals $p-1$. \end{rk} \begin{ex}\label{exDalej} We continue Example \ref{przyEEA}. For the diagram $\triangle(40,73)$ from Theorem~\ref{thmMainCombinatorially} we get that the sequence of jumps of Newton numbers begins with $33\cdot(40-33)=231$ ones. \end{ex} \section{Remarks}\label{sectionRks} We will indicate one possible use of the algorithm described in this paper in finding all Milnor numbers attained by deformations. Note that this combinatorial approach also gives the form of the deformations that attain the prescribed Milnor number. Let us look at a continuation of Example~\ref{przyEEA}. \begin{ex}\label{rk_ex} Take an irreducible singularity $f$ of the form~\eqref{eqPostacfQSH} with $p=40$ and $q=73$. We claim that all positive integers less than $\mu(f)$ are attained as Milnor numbers of deformations of $f$, i.e.\ the sequence of jumps is constantly equal to $1$. Indeed, as was already indicated in Example~\ref{exDalej}, we have at least $231$ initial ones in the sequence of jumps of Milnor numbers. Take nondegenerate deformations $F_{k,l}$ of $f$ such that $$\Gamma(F_{k,l})=\Gamma(f)+((0,k),(l,0)).$$ Note that the diagram $\Gamma(F_{k,l})$ consists of a single segment. Consider for instance $F_{37,73}$.
The numbers $37$ and $73$ are coprime; moreover, $\mu(F_{37,73})>\mu(f)-231>\mu(F_{37,73})-36\cdot(37-36)$. Using Theorem~\ref{mainThm}, we get that the initial sequence of jumps equal to $1$ has length at least $252$. This improves the previous result. In the same manner consider deformations $F_{k,l}$ with $(k,l)$ consecutively equal to $$ \begin{array}{l} (39,73),\ (38,73),\ (37,73),\\ (37,73),\ (37,71),\ \dots,\ (37,41) \end{array} $$ and apply Theorem~\ref{mainThm} to each. Now one can continue with deformations with $(k,l)$ equal to $$ \begin{array}{l} (37,41), \ (36,41),\ \dots,\ (23,41),\\ (23,41),\ (23,40),\ \dots,\ (23,29),\\ (23,29),\ (22,29),\ \dots\ etc \end{array} $$ or use the main result of~\cite{BKW} for $k=40$. Precisely, the result we are referring to states that for a homogeneous nondegenerate isolated singularity $f_k$ of degree $k$ all positive integers less than $\mu(f_k)-k+2$ are attained as Milnor numbers of deformations. Note that $\mu(f_{40})-40+2>\mu(F_{37,41})-4\cdot(37-4)$. Both approaches give the assertion of Example~\ref{rk_ex}. \end{ex} Note that the computation by hand presented above (which is also easy to implement as a program) is essentially better than a straightforward numerical computation. Our numerical experiments with naive algorithms have lasted for hours in the case of the singularity from Example~\ref{rk_ex}, whereas doing it by hand using Theorem~\ref{mainThm} is a matter of minutes. An easy generalisation of the above example is \begin{cor}\label{cor_ex} Take an isolated singularity $f$ of the form~\eqref{eqPostacfQSH} with $p<q$ coprime. Suppose there exists an injective sequence of coprime numbers $(p_s,q_s)_{s=1,\dots,v}$ such that $p_s\leq q_s$, both sequences $(p_s), (q_s)$ are non-increasing and \begin{eqnarray}\label{eqRk_ex} (p_{s}-1)(q_{s}-1)-r_{s}(p_{s}-r_{s})\le (p_{s+1}-1)(q_{s+1}-1), \end{eqnarray} where we denote by $r_s$ the positive integer such that $q_s\equiv r_s ({\rm mod}\ p_s)$. Then all positive integers between $\mu(f)$ and $(p_{v}-1)(q_{v}-1)-r_{v}(p_{v}-r_{v})$ are attained as Milnor numbers of deformations of $f$. Moreover, if \begin{equation}\label{eqRk_ex2} (p_{v}-1)(q_{v}-1)-r_{v}(p_{v}-r_{v})< (p-1)(p-2)+1, \end{equation} then all positive integers are attained as Milnor numbers of deformations of~$f$. \end{cor} \noindent{\bf Proof.} As in Example~\ref{rk_ex}, consider nondegenerate deformations $F_{k,l}$ of $f$ such that $$\Gamma(F_{k,l})=\Gamma(f)+((0,k),(l,0)).$$ Since $p_{s},q_{s}$ are coprime, for each deformation $F_{p_{s},q_{s}}$ use Theorem~\ref{mainThm}. A deformation of a deformation is again a deformation (it can be chosen as a one-parameter deformation and, by nondegeneracy of the deformations involved, one can assume it is nondegenerate) due to the form of the diagrams. Hence we get deformations of $f$ giving Milnor numbers from $(p_{s}-1)(q_{s}-1)$ to $(p_{s}-1)(q_{s}-1)-r_{s}(p_{s}-r_{s})$. The inequality \eqref{eqRk_ex} guarantees that the Milnor number of $F_{p_{s+1},q_{s+1}}$ is at least $\mu(F_{p_{s},q_{s}})-r_{s}(p_{s}-r_{s})$. Hence all integers between $\mu(f)$ and $(p_{v}-1)(q_{v}-1)-r_{v}(p_{v}-r_{v})$ are attained as Milnor numbers of deformations of $f$. Moreover, if inequality \eqref{eqRk_ex2} holds, it means that the Milnor number of the deformation $F_{p,p}$ of $f$ is bigger by at least $p-2$ than the last Milnor number already attained.
Hence we can use the result from \cite{BKW} that for a homogeneous nondegenerate isolated singularity of degree $p$ all positive integers less than or equal to $\mu(F_{p,p})-p+2$ are attained as Milnor numbers of deformations. This ends the proof.~ $\blacksquare$ The procedure above may be stated constructively, but we cannot at the moment guarantee that we can choose sequences which satisfy inequality~\eqref{eqRk_ex2}. On the contrary, for $p,q$ relatively small, for instance $(5,7)$, such a sequence may not exist (compare \cite{W2}). The question of when such a sequence exists could possibly be resolved using the distribution of primes (compare the explicit sequence from Example~\ref{rk_ex}). In particular, this paper answers open questions posed in the article \cite{Bo}, which call for constructive methods. Our combinatorial approach, in the spirit of the previous sections, is very powerful in answering these questions. The authors also have results for all bivariate semi-quasi-homogeneous singularities, in particular extending the results of this paper on irreducible germs, but we defer the details to a subsequent publication. \section*{Acknowledgements} The authors were supported by grant NCN 2013/09/D/ST1/03701. \end{document}
\begin{document} \title{Quantitative Redundancy \ in Partial Implications} \begin{abstract} We survey the different properties of an intuitive notion of redundancy, as a function of the precise semantics given to the notion of partial implication. The final version of this survey will appear in the Proceedings of the Int.~Conf.~Formal Concept Analysis, 2015. \end{abstract} \section{Introduction} The discovery of regularities in large scale data is a multifaceted current challenge. Each syntactic mechanism proposed to represent such regularities opens the door to wide research questions. We focus on a specific sort of regularities sometimes found in transactional data, that is, data where each observation is a set of items, and defined in terms of pairs of sets of items. Syntactically, the fact that this sort of regularity holds for a given pair $(X,Y)$ of sets of items is often denoted as an implication: $X \to Y$. However, whereas in Logic an implication like this is true if and only if $Y$ holds whenever $X$ does, in our context, namely, partial implications and association rules, it is enough if $Y$ holds ``most of the times'' $X$ does. Thus, in association mining, the aim is to find out which expressions of that sort are valid for a given transactional dataset: for what $X$ and what $Y$, the transactions that contain $X$ ``tend to contain'' $Y$~as~well. In many current works, that syntax is defined as if its meaning was sufficiently clear. Then, any of a number of ``measures of interestingness'' is chosen to apply to them, in order to select some to be output by a data analysis process on a particular dataset. Actually, the mere notation $X \to Y$ is utterly insufficient: any useful perspective requires to endow these expressions with a definite semantics that makes precise how that na\"\i{}ve intuition of ``most of the times'' is formalized; only then can we study and clarify the algorithmic properties of these syntactical expressions. Thus, we are not really to ``choose a measure of interestingness'' but plainly to \emph{define} what $X\to Y$ means, and there are many acceptable ways of doing this. This idea of a relaxed implication connective is a relatively natural concept, and versions sensibly defined by resorting to conditional probability have been proposed in different research communities: a common semantics of $X\to Y$ is through a lower bound on its ``confidence'', the conditional probability of $Y$ given~$X$. This meaning appears already in the ``partial implications'' of \cite{Lux} (actually, ``implications partielles'', with confidence christened there ``pr\`ecision''). Some contributions based on Mathematical Logic develop notions related to these partial implications defined in terms of conditional probability: see \cite{GUHA}. However, it must be acknowledged that the contribution that turned on the spotlights on partial implications was \cite{AgrawalImielinskiSwami} and the improved algorithm in \cite{AMSTV}:~the proposal of exploring large datasets in search for association rules of high support and confidence has led to huge amounts of research since. Association rules are partial implications that impose the additional condition that the consequent is a single item. Three of the major foci of research in association rules and partial implications are as follows. First, the quantity of candidate itemsets for both the antecedent $X$ and, sometimes, the consequent $Y$ grows exponentially with the number of items. 
Hence, the space to explore is potentially enormous: on real-world data, we very soon run into billions of candidate antecedents. Most existing solutions are based on the acceptance that, as not all of them can be considered within reasonable running times, we make do with those that obey the support constraint (``frequent itemsets''). The support constraint combines well with confidence in order to avoid reporting mere statistical artifacts \cite{MegSrik}, but its major role is to reduce the search space. A wide repertory of algorithms for frequent sets and association rule mining exists by now \cite{FqPattBook}. Second, many variations have been explored: for instance, cases of more complicated structures in the data and, also, combinations with other machine-learning models or tasks, as in \cite{JarSchSim,YinHan}. This paper surveys part of a research line that belongs to a third focus: in a vast majority of practical applications, if any partial implication is found at all, it often happens that the search returns hundreds of thousands of them. It is far from trivial to design an associator able to choose well, among them, a handful to show to an impatient user. This is tantamount to modifying the semantics of the partial implication connective, by adding or changing the conditions under which one such expression is deemed valid and is to be reported. Most often, but not always (as we report in Sections \ref{s:reprules}~and~\ref{s:multprem}), this approach takes the form of ``quality evaluations'' performed to select which partial implications are to be highlighted for the user. We do not consider this problem solved yet, but deep progress has been achieved so far; we survey a humble handful of contributions in which the present author was actively involved. For a wider perspective on all three of these aspects of association rule mining, see Part II of~\cite{ZakiMeira}. The main thread running through this paper can be described informally as follows: human intuition, maybe on the basis of our experience with full, standard implications, tends to expect that smaller antecedents are better than larger ones, and larger consequents are better than smaller ones. We call this statement here the \emph{central intuition} of this paper; many references express, in various variants, this intuition (e.g.~\cite{BAG,KryszPKDD,LiuHsuMa,PadTu2000,ShahLaksRS,ToKleRHM} just to name a few). This intuition is only partially true in implications, where the GD basis gets to be minimal through the use of subtly enlarged antecedents~\cite{GD}. This survey paper discusses, essentially, the particular fact that, on partial implications, this intuition is both true and false\dots\ as a function, of course, of the actual semantics given to the partial implication connective. \section{Notation and Preliminary Definitions} Our datasets are transactional. This means that they are composed of transactions, each of which consists of an itemset with a unique transaction identifier. Itemsets are simply subsets of some fixed set ${\cal U}$ of items. We will denote itemsets by capital letters from the end of the alphabet, and use juxtaposition to denote union, as in $XY$. The inclusion sign as in $X\subset Y$ denotes proper subset, whereas improper inclusion is denoted $X\subseteq Y$. The cardinality of a set $X$ (either an itemset or a set of transactions) is denoted $|X|$.
\subsection{Partial Implications} As indicated in the Introduction, the most common semantics of partial implication is its \emph{confidence}: the conditional empirical probability of the consequent given the antecedent, that is, the ratio between the number of transactions in which $X$ and $Y$ are seen together and the number of transactions that contain~$X$. We will see below that this semantics may be somewhat misleading. In most application cases, the search space is additionally restricted by a minimal \emph{support} criterion, thus avoiding itemsets that appear very seldom in the dataset. More precisely, for a given dataset~${\cal D}$, consisting of~$n$ transactions, the \emph{supporting set} ${\cal D}_X\subseteq{\cal D}$ of an itemset $X$ is the subset of transactions that include $X$. (For the reader familiar with the FP-growth frequent set miner \cite{HPYM04}, these are the same as their ``projected databases'', except for the minor detail that, here, we do not remove $X$ from the transactions.) The \emph{support} $s_{{\cal D}}(X) = |{\cal D}_X|/n \in[0,1]$ of an itemset $X$ is the cardinality of the set of transactions that contain~$X$ divided by~$n$; it corresponds to the relative frequency or empirical probability of $X$. An alternative rendering of support is its unnormalized version, but some of the notions that will play a major role later on are simpler to handle with normalized supports. Now, the \emph{confidence} of a partial implication $X\to Y$ is $c_{{\cal D}}(X\to Y) = s_{{\cal D}}(XY)/s_{{\cal D}}(X)$: that is, the empirical approximation to the corresponding conditional probability. The \emph{support} of a partial implication $X\to Y$ is $s_{{\cal D}}(X\to Y) = s_{{\cal D}}(XY)$. In both expressions, we will omit the subscript ${\cal D}$ whenever the dataset is clear from the context. Clearly, $s_{{\cal D}_Z}(X) = \frac{|{\cal D}_{XZ}|}{|{\cal D}_Z|} = c(Z\to X)$. Often, we will assume that $X\cap Y = \emptyset$ in partial implications $X\to Y$. Some works impose this condition globally; we will mention it explicitly whenever it is relevant, but, generally speaking, we allow $X$ and $Y$ to intersect or, even, to fulfill $X\subseteq Y$. Note that, if only support and confidence are at play, then $c_{{\cal D}}(X\to XY) = c_{{\cal D}}(X\to Y)$ and $s_{{\cal D}}(X\to XY) = s_{{\cal D}}(X\to Y)$. Of course, in practical terms, after a partial implication mining process, only the part of $Y$ that does not appear in $X$ would be shown to the user. We do allow $X=\emptyset$ as antecedent of a partial implication: then, its confidence coincides with the support, $c_{{\cal D}}(\emptyset\to Y) = s_{{\cal D}}(Y)$, since $s_{{\cal D}}(\emptyset)=1$. Allowing \hbox{$Y=\emptyset$} as consequent as well is possible but turns out not to be very useful; therefore, empty-consequent partial implications are always omitted from consideration. Throughout the paper, there are occasional glitches where the empty set requires separate consideration. Being interested in the general picture, here we will mostly ignore these issues, but the reader can check that these cases are given careful treatment in the original references provided for each part of our discussion. By $X \Rightarrow Y$ we denote full, standard logical implication; this expression will be called the \emph{full counterpart} of the partial implication $X\to Y$.
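These definitions translate directly into code. The following Python sketch (the transactions shown are purely illustrative and not taken from the paper) computes supporting sets, supports and confidences exactly as defined above.
\begin{verbatim}
from fractions import Fraction

def supporting_set(D, X):
    # D_X: the transactions of D that include the itemset X.
    return [t for t in D if X <= t]

def support(D, X):
    # Normalized support s_D(X) = |D_X| / n.
    return Fraction(len(supporting_set(D, X)), len(D))

def confidence(D, X, Y):
    # c_D(X -> Y) = s_D(XY) / s_D(X).
    return support(D, X | Y) / support(D, X)

# Illustrative dataset: four transactions over the items a, b, c.
D = [frozenset('abc'), frozenset('ab'), frozenset('ac'), frozenset('a')]
X, Y = frozenset('a'), frozenset('b')
print(support(D, X | Y), confidence(D, X, Y))   # 1/2 and 1/2
\end{verbatim}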
\subsection{Partial Implications versus Association Rules} Association rules were defined originally as partial implications $X\to Y$ with singleton consequents: $|Y| = 1$; we abbreviate $X\to \{A\}$ as $X\to A$. This decision allows one to reduce association mining to a simple postprocessing after finding frequent sets. Due to the illusion of augmentation, many users are satisfied with this syntax; however, more items in the consequent provide more information. Indeed, in full implications, the expression $(A\Rightarrow B)\land(A\Rightarrow C)$ is fully equivalent to $A\Rightarrow BC$, and we lose little by enforcing singleton consequents (equivalently, definite Horn clauses); an exception is the discussion of minimal bases, where nonsingleton consequents allow for canonical bases that are unreachable in the Horn clause syntax~\cite{GD}. But, in partial implications, $A\to BC$ says more than the conjunction of $A\to B$ and $A\to C$, namely, that $B$ and $C$ abound \emph{jointly} in ${\cal D}_{A}$. Whenever possible, $A\to BC$ is better, being both more economical and more informative. This can be illustrated by the following example from~\cite{BalTKDD}, to which we will return later on. \begin{example} \label{ex:running} Consider a dataset on ${\cal U} = \{ A, B, C, D, E \}$ consisting of 12 transactions: 6 of them include all of ${\cal U}$, 2 consist of $ABC$, 2 more are $AB$, and then one each of $CDE$ and $BC$. It can be seen that the confidence of $B\to A$ is $10/11$ and that of $B\to C$ is $9/11$, whereas the confidence of $B\to AC$ is $8/11$. \end{example} Actually, even restricted to association rules, the output of confidence-based associators is often still too large: the rest of this paper discusses how to reduce the output with no loss of information, first, and, then, as the outcome is often still too large in practice, we will need to allow for a carefully tuned loss of information. \section{Redundancy in Confidence-Based Partial Implications}\label{s:reprules} We start our discussion by ``proving correct'' our \emph{central intuition}, that is, providing a natural semantics under which that intuition is correct. For this section, we work under confidence and support thresholds, and it turns out to be convenient to explicitly assume that the left-hand side of each partial implication is included in the right-hand side. We force that inclusion using notations in the style of $X\to XY$. Several references (\cite{AgYu} for one) have considered the following argument: assume that we could know beforehand that, in all datasets, the confidence and support of $X_0\to X_0Y_0$ are always larger than or equal to those of $X_1\to X_1Y_1$. Then, whenever we are mining some dataset under confidence and support thresholds, assume that we find $X_1\to X_1Y_1$: we should not bother to report $X_0\to X_0Y_0$ as well, since it must be there anyhow, and its presence in the output is uninformative. In a very strong sense, $X_0\to X_0Y_0$ is redundant with respect to $X_1\to X_1Y_1$. Irredundant partial implications according to this criterion are called ``essential rules'' in \cite{AgYu} and \emph{representative rules} in \cite{KryszPAKDD}; we will follow this last term. \begin{lemma} \label{l:redchar} Consider two partial implications, $X_0\to X_0Y_0$ and $X_1\to X_1Y_1$.
The following are equivalent: \begin{enumerate} \item The confidence and support of $X_0\to X_0Y_0$ are larger than or equal to those of $X_1\to X_1Y_1$, in {\em all} datasets: for every ${\cal D}$, $c_{{\cal D}}(X_0\to X_0Y_0)\geq c_{{\cal D}}(X_1\to X_1Y_1)$ and $s_{{\cal D}}(X_0\to X_0Y_0)\geq s_{{\cal D}}(X_1\to X_1Y_1)$. \item The confidence of $X_0\to X_0Y_0$ is larger than or equal to that of $X_1\to X_1Y_1$, in {\em all} datasets: for every ${\cal D}$, $c_{{\cal D}}(X_0\to X_0Y_0)\geq c_{{\cal D}}(X_1\to X_1Y_1)$. \item $X_1\subseteq X_0\subseteq X_0Y_0\subseteq X_1Y_1$. \end{enumerate} \end{lemma} When these cases hold, we say that $X_1\to X_1Y_1$ makes $X_0\to X_0Y_0$ \emph{redundant}. The fact that the inequality on support follows from the inequality on confidence is particularly striking. This lemma can be interpreted as proving correct the \emph{central intuition} that smaller antecedents and larger consequents are better, by identifying a semantics of the partial implication connective that makes this true and by pointing out that it is not just the consequent that is to be maximized, but the union of antecedent and consequent. If only consequents are maximized separately, and are kept disjoint from the antecedents, then one gets to a considerably more complicated situation, discussed below. \begin{definition} Fix a dataset and confidence and support thresholds. The \emph{representative rule basis} for that dataset at these support and confidence thresholds consists of those partial implications that pass both thresholds in the dataset, and are not made redundant, in the sense of the previous paragraph, by other partial implications also above the thresholds. \end{definition} Hence, a redundant partial implication is so because we can know beforehand, from the information in the basis, that its confidence is above the threshold. We have: \begin{proposition} (Essentially, from \cite{KryszPAKDD}.) For a fixed dataset ${\cal D}$ and a fixed confidence threshold~$\gamma$: \begin{enumerate} \item Every partial implication of confidence at least $\gamma$ is made redundant by some representative rule. \item Partial implication $X\to Y$ with $X\subseteq Y$ is a representative rule if and only if $c_{{\cal D}}(X\to Y) \geq\gamma$ but there is no $X'$ and $Y'$ with $X'\subseteq X$ and $XY\subseteq X'Y'$ such that $c_{{\cal D}}(X'\to Y') \geq\gamma$, except $X=X'$ and $Y=Y'$. \end{enumerate} \end{proposition} According to statement \emph{(3)} in Lemma~\ref{l:redchar}, that last point means that a representative rule is not redundant with respect to any partial implication (different from itself) that has confidence at least $\gamma$ in the dataset. It is interesting to note that one does not need to mention support in this last proposition, the reason being, of course, statement \emph{(2)} in Lemma~\ref{l:redchar}. The fact that statement \emph{(3)} implies statement \emph{(1)} was already pointed out in \cite{AgYu,KryszPAKDD,PhanLuongICDM} (in somewhat different terms). The remaining implications are from~\cite{BalLMCS}; see this reference as well for proofs of additional properties, including the fact that the representative basis has the minimum possible size among all bases for this notion of redundancy, and for discussions of other related redundancy notions. In particular, several other natural proposals are shown there to be equivalent to this redundancy. Also \cite{BalTKDD} provides further properties of the representative rules.
These references discuss as well the connection with a similar notion in \cite{Zaki}. In Example~\ref{ex:running}, at confidence threshold 0.8, the representative rule basis consists of seven partial implications: $\emptyset\to C$, $B\to C$, $\emptyset\to AB$, $C\to AB$, $A\to BC$, $D\to ABCE$, and $E\to ABCD$. \subsection{Quantitative Evaluation of Non-Redundancy: Confidence Width} \label{ss:cwidth} Redundancy is a qualitative property; still, it allows for a quantitative discussion. Consider a representative rule $X\to XY$: at confidence $c(X\to XY)$, no partial implication makes it redundant. But we could now consider to what extent we need to reduce the confidence threshold in order to find a partial implication that would make this one redundant. If a partial implication of almost the same confidence can be found to make $X\to XY$ redundant, then our partial implication is not so interesting. According to this idea, one can define a parameter, the \emph{confidence width} \cite{Bal09}, that, in a sense, evaluates how different our partial implication is from other similar ones. We do not discuss this parameter further, but a related quantity is treated below in Section~\ref{ss:cboost}. \subsection{Closure-Aware Redundancy Notions}\label{ss:bstar} Redundancy of one partial implication with respect to another can be redefined as well in a similar but slightly more sophisticated form by taking into account the closure operator obtained from the data (see \cite{GanWil99}). Often, this variant yields a more economical basis because the full implications are described by their often very short Guigues-Duquenne basis~\cite{GD}; see again \cite{BalLMCS} for the details. \section{Redundancy with Multiple Premises}\label{s:multprem} The previous section indicates precisely when ``one partial implication follows logically from another''. It is natural to ask whether a stronger, more useful notion to reduce the size of a set of partial implications could be based on partial implications following logically from several others together, beyond the single-premise case. Simply considering standard examples with full implications like Augmentation (from $X\Rightarrow Y$ and $X'\Rightarrow Y'$ it follows $XX'\Rightarrow YY'$) or Transitivity (from $X\Rightarrow Y$ and $Y\Rightarrow Z$ it follows $X\Rightarrow Z$), it is easy to see that these cases fail badly for partial implications. Indeed, one might suspect, as this author did for quite some time, that one partial implication would not follow logically from several premises unless it follows from one of them. Generally speaking, however, this suspicion is wrong. It is indeed true for confidence thresholds $\gamma\in(0,0.5)$, but these are not very useful in practice, as an association rule $X\to A$ of confidence less than 0.5 means that, in ${\cal D}_X$, the absence of $A$ is more frequent than its presence. And, for $\gamma\in[0.5,1)$, it turns out that, for instance, from $A\to BC$ and $A\to BD$ it follows $ACD\to B$, in the sense that if both premises have confidence at least $\gamma$ in any dataset, then the conclusion also does. The general case for two premises was fully characterized in \cite{BalLMCS}, but the case of arbitrary premise sets has remained elusive for some years. Eventually, a very recent result from~\cite{AtsBal} proved that redundancy with respect to a set of premises that are partial implications hinges on a complicated combinatorial property of the premises themselves.
We give that property a short (if admittedly uninformative) name here: \looseness=1 \begin{definition} \label{df:nice} Let $X_1 \to Y_1,\ldots,X_k \to Y_k$ be a set of partial implications. We say that it is \emph{nice} if $X_1 \Rightarrow Y_1,\ldots,X_k \Rightarrow Y_k \models X_i \Rightarrow U$, for all $i \in 1\ldots k$, where $U = X_1Y_1 \cdots X_kY_k$. \end{definition} Here we use the standard symbol $\models$ for logical entailment; that is, whenever the implications on the left-hand side are true, the one on the right-hand side must be as well. Note that the definition of nicety of a set of partial implications states a property, not of the partial implications themselves, but of their full counterparts. Then, we can characterize entailment among partial implications for high enough thresholds of confidence, as follows: \begin{theorem} \label{th:mainHGnew} \cite{AtsBal} Let $X_1 \to Y_1,\ldots,X_k \to Y_k$ be a set of partial implications with $k \geq 1$, the candidate premises, and let $X_0 \to Y_0$ be a candidate conclusion. If $\gamma \geq (k-1)/k$, then the following are equivalent: \begin{enumerate} \item in any dataset where the confidence of the premises $X_1 \to Y_1,\ldots,X_k \to Y_k$ is at least $\gamma$, $c(X_0 \to Y_0)\geq\gamma$ as well; \item either $Y_0 \subseteq X_0$, or there is a non-empty $L \subseteq \{1\ldots k\}$ such that the following conditions hold: \begin{enumerate} \item $\{ X_i \to Y_i : i \in L \}$ is nice, \item $\bigcup_{i \in L} X_i \subseteq X_0 \subseteq \bigcup_{i \in L} X_iY_i$, \item $Y_0 \subseteq X_0 \cup \bigcap_{i \in L} Y_i$. \end{enumerate} \end{enumerate} \end{theorem} Interestingly, the last couple of conditions are reasonably correlated, for the case of several premises, with the \emph{central intuition} that smaller antecedents are better than larger ones, and larger consequents are better than smaller ones. The premises actually necessary must all include the consequent of the conclusion, and their antecedents are to be included in the antecedent of the conclusion. Even the additional fact that the antecedent of the conclusion does not have ``extra items'' not present in the premises also makes sense. However, there is the additional condition that only nice sets of partial implications may have a nontrivial logical consequence, and all this just for high enough confidence thresholds. The proof is complex and we refrain from discussing it here; see \cite{AtsBal}, where, additionally, the case of $\gamma < 1/k$ is also characterized and the rather complicated picture for intermediate values of $\gamma$ is discussed. We do indicate, though, that the notion of ``nicety'', in practice, turns out to be so restrictive that we have not found any case of nontrivial entailment from more than one premise in a number of tests with standard benchmark datasets. Therefore, this approach is not particularly useful in practice to reduce the size of the outcome of an associator. \subsection{Ongoing Developments} As for representative rules (Subsection~\ref{ss:bstar}), there exists a natural variant of the question of redundancy, whereby full implications are handled separately; essentially, the redundancy notion becomes ``closure-based''. This extension was fully characterized as well for the case of two premises in \cite{BalLMCS}, but extending the scheme to the case of arbitrarily many premises is work in progress.
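The concrete two-premise entailment mentioned in the previous section (from $A\to BC$ and $A\to BD$ it follows $ACD\to B$, for confidence thresholds $\gamma\geq 1/2$) can be probed empirically. The following Python sketch is only a randomized sanity check over small random datasets, not a proof; it uses the plain confidence semantics of Section 2 and simply skips datasets in which some of the confidences involved are undefined.
\begin{verbatim}
import random
from fractions import Fraction

def conf(D, X, Y):
    dx = [t for t in D if X <= t]
    return Fraction(sum(1 for t in dx if Y <= t), len(dx)) if dx else None

ITEMS = list('ABCD')
random.seed(0)
for _ in range(10000):
    # Random small dataset over the items A, B, C, D.
    D = [frozenset(i for i in ITEMS if random.random() < 0.6)
         for _ in range(random.randint(1, 8))]
    c1 = conf(D, {'A'}, {'B', 'C'})          # premise A -> BC
    c2 = conf(D, {'A'}, {'B', 'D'})          # premise A -> BD
    c3 = conf(D, {'A', 'C', 'D'}, {'B'})     # conclusion ACD -> B
    if None in (c1, c2, c3):
        continue
    gamma = min(c1, c2)
    if gamma >= Fraction(1, 2):              # threshold at least (k-1)/k = 1/2
        assert c3 >= gamma, (D, c1, c2, c3)
print("no counterexample found")
\end{verbatim}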
\section{Alternative Evaluation Measures} We move on to discuss how to reinterpret the \emph{central intuition} as we change the semantics of the partial implication connective. Confidence is widely used as a definition of partial implication but, in practice, presents two drawbacks. First, it does not detect negative correlations; and, second, as already indicated, it often lets far too many rules pass; moreover, fiddling with the confidence threshold turns out to be a mediocre or just useless solution. Examples of both disadvantages are easy to construct and easy to find on popular benchmark datasets. Both objections can be addressed by changing the semantics of the expression $X\to Y$, either by replacing the confidence measure or by strengthening it with extra conditions. The literature on this topic is huge and cannot be reviewed here: see \cite{GH,LencaEtAl,TKS} and their references for information about the relevant developments published on these issues. We focus here on just a tiny subset of all these studies. The first objection alluded to in the previous paragraph can be naturally solved via an extra normalization (more precisely, dividing the confidence by the support of the consequent). The outcome is \emph{lift}, a well-known expression in basic probability; a closely related parameter is \emph{leverage}: \begin{definition} \label{df:liftleverage} Assume $X\cap Y = \emptyset$. The {\em lift} of partial implication $X\to Y$ is $\ell_{{\cal D}}(X\to Y) = \frac{c_{{\cal D}}(X\to Y)}{s_{{\cal D}}(Y)} = \frac{s_{{\cal D}}(XY)}{s_{{\cal D}}(X)\times s_{{\cal D}}(Y)}$. The {\em leverage} of partial implication $X\to Y$ is $\lambda_{{\cal D}}(X\to Y) = s_{{\cal D}}(XY) - s_{{\cal D}}(X)\times s_{{\cal D}}(Y)$. \end{definition} If supports are unnormalized, extra factors $n$ are necessary. In case of independence of both sides of a partial implication $X\to Y$, we would have $s(XY) = s(X)s(Y)$; therefore, both lift and leverage are measuring deviation from independence: lift is the multiplicative deviation, whereas leverage measures it as an additive distance instead. Leverage was introduced in \cite{PSG} and, under the name ``Novelty'', in \cite{LavracFlachZupan}, and received much attention via the Magnum Opus associator \cite{WebbMO}. We find lift in the references going by several different names: it has been called {\em interest} \cite{BrinMS} or, in a slightly different but fully equivalent form, {\em strength} \cite{ShahLaksRS}; \emph{lift} seems to be catching up as a short name, possibly aided by the fact that the Intelligent Miner system from IBM employed that name. These notions allow us to exemplify that we are modifying the semantics of our expressions: if we define the meaning of $X\to Y$ through confidence, then partial implications of the form $X\to Y$ and $X\to XY$ are always equivalent, whereas, if we use lift, then they may not be. Note that, in case $X=\emptyset$, the lift trivializes to~1. Also, if we are to use lift, then we must be careful to keep the right-hand side $Y$ disjoint from the left-hand side: $X\cap Y = \emptyset$. A related notion is: \begin{definition} \label{df:relconf} \cite{LavracFlachZupan} The {\em relative confidence} of partial implication $X\to Y$, also called {\em centered confidence} or {\em relative accuracy}, is $r_{{\cal D}}(X\to Y) = c_{{\cal D}}(X\to Y) - c_{{\cal D}}(\emptyset\to Y)$.
\end{definition} Therefore, the relative confidence is measuring additively the effect, on the support of the consequent $Y$, of ``adding the condition'' or antecedent $X$. Since $c_{{\cal D}}(\emptyset\to Y) = s_{{\cal D}}(Y)$, lift can be seen as comparing $c_{{\cal D}}(X\to Y)$ with $c_{{\cal D}}(\emptyset\to Y)$, that is, effecting the same comparison but multiplicatively this time: $\ell(\hbox{$X\to Y$}) = \frac{s(XY)}{s(X)\times s(Y)} = \frac{c(X\to Y)}{s(Y)} = \frac{c(X\to Y)}{c(\emptyset\to Y)}$. Also, it is easy to check that leverage can be rewritten as $\lambda_{{\cal D}}(X\to Y) = s_{{\cal D}}(X)\times r_{{\cal D}}(X\to Y)$ and is therefore also called {\em weighted relative accuracy}~\cite{LavracFlachZupan}. Relative confidence has the potential to solve the ``negative correlation'' objection to confidence, and all subsequent measures to be described here inherit this property as well. An objection of a different sort is that lift and leverage are symmetric. As the implicational syntax is asymmetric, they do not fit very well the directional intuition of an expression like $X\to Y$; that is one of the reasons behind the exploration of many other options. However, to date, none of the more sophisticated attempts seems to have gained a really noticeable ``market share''. Most common implementations either offer a long list of options of measures for the user to choose from (like \cite{Borgelt} for one), or employ the simpler notions of confidence, support, lift, or leverage (for instance, Magnum Opus \cite{WebbMO}). We believe that one must keep close to confidence and to deviation from independence. Confidence is the most natural option for many educated domain experts not specialized in data mining, and it actually provides a directionality to our partial implications. The vast majority of these alternatives attempt to define the quality of a partial implication $X\to Y$ relying only on the supports of $X$, $Y$, $XY$, or their complements. One major exception is \emph{improvement} \cite{BAG}, which is the added confidence obtained by using the given antecedent as opposed to any properly smaller one. We discuss it and two other related quantities next. They are motivated again by our \emph{central intuition}: if the confidence of a partial implication with a smaller antecedent and the same consequent is sufficiently high, the larger partial implication should not be provided in the output. They have in common that their computation requires exploration of a larger space, however; we return to this point in the next section. \subsection{Improvement: Additive and Multiplicative} The key observation for this section is that $X\to Y$ and $Z\to Y$, for $Z\subset X$, provide different, independent information. From the perspective of confidence, either may have it arbitrarily higher than the other. For inequality in one direction, suppose that almost all transactions with $X$ have $Y$, but they are just a small fraction of those supporting $Z$, which mostly lack $Y$; conversely, $Y$ might hold for most transactions having $Z$, but the only transactions having all of $X$ can be those without $Y$. In Example~\ref{ex:running}, one can see that $c(\emptyset\to BC) < c(A\to BC)$ whereas $c(\emptyset\to C) > c(B\to C)$. This fact underlies the difficulty in choosing a proper confidence bound. Assume that there exists a mild correlation giving, say, $c(Z\to A) = 2/3$.
If the threshold is set higher, of course this rule is not found; but an undesirable side effect may appear: there may be many ways of choosing subsets of the support of $Z$, by enlarging $Z$ a bit, where $A$ is frequent enough to pass the threshold. Thus, often, in practice, the algorithms enlarge $Z$ into various supersets $X_i$ so that all the confidences $c(X_i\to A)$ do pass, and then $Z\to A$ is not seen, but generates dozens of very similar ``noisy'' rules, to be manually explored and filtered. Finding the appropriate threshold becomes difficult, also because, for different partial implications, this sort of phenomenon may appear at several threshold values simultaneously. Relative confidence tests confidence by a comparison to what happens if the antecedent is replaced by one particular subset of it, namely $\emptyset$. Improvement generalizes it by considering not only the alternative partial implication $\emptyset\to Y$ but all proper subsets of the antecedent, as alternative antecedents, and in the same additive form: \begin{definition} \label{df:improvement} The {\em improvement} of $X\to Y$, where $X\neq\emptyset$, is $i(X\to Y) = \min \{ c(X\to Y) - c(Z\to Y) \bigm| Z\subset X \}$. \end{definition} The definition is due to \cite{BAG}, where only association rules are considered, that is, cases where $|Y|=1$. The work on productive rules~\cite{Webb07} is related: these coincide with the rules of \emph{positive improvement}. In~\cite{LiuHsuMa}, improvement is combined with further pruning on the basis of the $\chi^2$ value. We literally quote from \cite{BAG}: ``A rule with negative improvement is typically undesirable because the rule can be simplified to yield a proper sub-rule that is more predictive, and applies to an equal or larger population due to the antecedent containment relationship. An improvement greater than 0 is thus a desirable constraint in almost any application of association rule mining. A larger minimum on improvement is also often justified because most rules in dense data-sets are not useful due to conditions or combinations of conditions that add only a marginal increase in confidence.'' The same process, and with the same intuitive justification, can be applied to lift, which is, actually, a multiplicative, instead of additive, version of relative confidence as indicated above: $\ell(X\to Y) = c(X\to Y) / c(\emptyset\to Y)$. Taking inspiration from this correspondence, we studied in~\cite{BalDog} a multiplicative variant of improvement that generalizes lift, exactly in the same way as improvement generalizes relative confidence: \begin{definition} \label{df:multimpr} The {\em multiplicative improvement} of $X\to Y$, where $X\neq\emptyset$, is $m(X\to Y) = \min \{ c(X\to Y)/c(Z\to Y) \bigm| Z\subset X \}$. \end{definition} In Example~\ref{ex:running}, the facts that $c(A\to BC) = 4/5$ and $c(\emptyset\to BC) = 3/4$ lead to $i(A\to BC) = 4/5 - 3/4 = 0.05$ and $m(A\to BC) = (4/5) / (3/4) \approx 1.066$. Here, as the size of the antecedent is 1, there is one single candidate $Z = \emptyset$ for a proper subset of the antecedent and, therefore, improvement coincides with relative confidence, and multiplicative improvement coincides with lift. For larger left-hand sides, the values will be different in general.
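These values are easy to recompute. The following Python sketch rebuilds the dataset of Example~\ref{ex:running} (6 copies of $ABCDE$, 2 of $ABC$, 2 of $AB$, one $CDE$ and one $BC$) and evaluates $i(A\to BC)$ and $m(A\to BC)$ by enumerating all proper subsets of the antecedent, exactly as in the two definitions above.
\begin{verbatim}
from fractions import Fraction
from itertools import combinations

def conf(D, X, Y):
    dx = [t for t in D if X <= t]
    return Fraction(sum(1 for t in dx if Y <= t), len(dx))

def proper_subsets(X):
    items = sorted(X)
    return [frozenset(c) for r in range(len(items))
            for c in combinations(items, r)]

# Dataset of the running example.
D = ([frozenset('ABCDE')] * 6 + [frozenset('ABC')] * 2 +
     [frozenset('AB')] * 2 + [frozenset('CDE')] + [frozenset('BC')])

X, Y = frozenset('A'), frozenset('BC')
c = conf(D, X, Y)                                          # 4/5
impr = min(c - conf(D, Z, Y) for Z in proper_subsets(X))   # 1/20
mult = min(c / conf(D, Z, Y) for Z in proper_subsets(X))   # 16/15
print(c, impr, mult)
\end{verbatim}
The printed values $1/20 = 0.05$ and $16/15 \approx 1.066$ agree with the figures given above.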
\subsection{Rule Blocking} Attempting to formalize the same part of the \emph{central intuition}, we proposed in \cite{Bal09} a notion of ``rule blocking'', where a smaller antecedent $Z\subset X$ would ``block'' (that is, suggest to omit) a given partial implication $X\to Y$. We will compare the number of tuples having $XY$ (that is, having $Y$ within the supporting set of $X$) with the quantity that would be predicted from the confidence of the partial implication $Z\to Y$, which applies to a larger supporting set: we are going to bound the relative error incurred if the support $s(X)$ and the confidence of $Z\to Y$ are employed to approximate the confidence of $X\to Y$. More precisely, let $c(Z\to ZY) = c$. If $Y$ is distributed along the support of $X$ at the same ratio as along the larger support of $Z$, we would expect $s(XY)\approx c \times s(X)$: we consider the relative error committed by $c\times s(X)$ used as an approximation to $s(XY)$ and, if the error is low, we consider that $Z\to Y$ is sufficient information about $X\to Y$ and discard the latter. \begin{definition} \label{df:blocking} \cite{Bal09} $Z\subset X$ blocks $X\to Y$ at blocking threshold $\epsilon$ when $$ \frac{s(XY) - c(Z\to Y)s(X)}{c(Z\to Y)s(X)} \leq \epsilon. $$ \end{definition} In case the difference in the numerator is negative, it would mean that $s(XY)$ is even lower than what $Z\to Y$ would suggest. If it is positive but the quotient is low, $c(Z\to Y)\times s(X)$ still suggests a good approximation to $c(X\to Y)$, and the larger partial implication $X\to Y$ does not bring high enough confidence to be considered besides $Z\to Y$, a simpler one: it remains blocked. But, if the quotient is larger, and this happens for all $Z$, then $X\to Y$ becomes interesting since its confidence is sufficiently higher than suggested by other partial implications of the form $Z\to Y$ for smaller antecedents $Z$. Of course, the higher the blocking threshold, the more demanding the constraint is. Note that, in the presence of a support threshold $\tau$, $s(ZY) \geq s(XY) > \tau$ or a similar inequality would be additionally required. The value $\epsilon$ is intended to take positive but small values, say around 0.2 or lower. In Example~\ref{ex:running}, $\emptyset$ blocks $A\to BC$ at blocking threshold $1/15 \approx 0.066$. Rule blocking relates to multiplicative improvement as follows: \begin{proposition} \label{pr:block} The smallest blocking threshold at which $X\to Y$ is blocked is $m(X\to Y)-1$. \end{proposition} \begin{proof} As everything around is finite, this is equivalent to proving that $Z\subset X$ blocks $X\to Y$ at blocking threshold $\epsilon$ if and only if $\frac{c(X\to Y)}{c(Z\to Y)} -1 \leq\epsilon$, for all such $Z$. Starting from the definition of blocking, multiplying both sides of the inequality by $c(Z\to Y)$, separating the two terms of the left-hand side, replacing $s(XY)/s(X)$ by its meaning, $c(X\to Y)$, and then solving first for $c(Z\to Y)$ and finally for $\epsilon$, we find the stated equivalence. All the algebraic manipulations are reversible. \end{proof} \subsection{Ongoing: Conditional Weighted Versions of Lift and Leverage} We propose here one additional step to enhance the flexibility of both lift and leverage by considering their action, on the same partial implication, but with respect to many different subsets of the dataset, and under a weighting scheme that leads to different existing measures according to the weights chosen.
For a given partial implication $X\to Y$, we consider many limited views of the dataset, namely, all its projections into subsets of the antecedent. We propose to measure a weighted variant of the lift and/or the leverage of the same partial implication in all these projections, and to evaluate the quality of the partial implication as the minimum value thus obtained. That is, we want our high-quality partial implications not only to have high lift or leverage, but also to maintain it when we consider projections of the dataset on the subsets of the antecedent. We call the measures obtained \emph{conditional weighted} lift and leverage. \begin{definition} \label{df:cwliftleverage} Assume $X\cap Y = \emptyset$. Let $w$ be a weighting function associating a weight (either a positive real number or $\infty$) to each proper subset of $X$. The {\em conditional weighted lift} of partial implication $X\to Y$ is $\ell'_{{\cal D},w}(X\to Y) = \min\{ w(Z)\ell_{{\cal D}_Z}(\hbox{$X\to Y$}) \bigm| Z\subseteq X \}$. The {\em conditional weighted leverage} of partial implication $X\to Y$ is $\lambda'_{{\cal D},w}(X\to Y) = \min\{ w(Z)g_{{\cal D}_Z}(X\to Y) \bigm| Z\subseteq X \}$. \end{definition} These notions can be connected to other existing notions with unifying effect. We only state here one such connection. Further development will be provided in a future paper in preparation. \begin{proposition} For inverse confidence weights, conditional weighted leverage is improvement: for all $X\to Y$, $\lambda'_{{\cal D},w}(X\to Y) = i(X\to Y)$ holds for the weighting function $w_r(Z) = c_{{\cal D}}(Z\to X)^{-1}$. \end{proposition} \section{Support Ratio and Confidence Boost} From the perspective of our \emph{central intuition}, the previous section has dealt, essentially, with issues related to small antecedents. This is fully appropriate for the discussion of association rules, which were defined originally as partial implications with singleton consequents. We now briefly concentrate on large consequents, and then join both perspectives. \subsection{Support ratio} The support ratio was first employed, to our knowledge, in \cite{KryszIDA}, where no particular name was assigned to it. Together with other similar quotients, it was introduced in order to help obtain faster algorithms. \begin{definition} \label{d:suppratio} In the presence of a support threshold $\tau$, the {\em support ratio} of a partial implication $X\to Y$ is $$ \sigma(X\to Y) = \frac{s(XY)}{\max \{ s(Z) \bigm| XY\subset Z, \, s(Z) > \tau \}}. $$ \end{definition} We see that this quantity depends on $XY$ but not on the antecedent~$X$ itself. In Example~\ref{ex:running}, we find that $\sigma(A\to BC) = 4/3$. \subsection{Confidence Boost} \label{ss:cboost} \begin{definition} \label{d:boost} The {\em confidence boost} of a partial implication $X\to Y$ (always with $X\cap Y = \emptyset$) is $\beta(X\to Y) = {}$ $$ \frac{c(X\to XY)}{\max \{ c(X'\to X'Y') \bigm| (X\to XY) \not\equiv (X'\to X'Y'), \, X'\subseteq X, \, Y\subseteq Y'\}}, $$ where the partial implications in the denominator are implicitly required to clear the support threshold, in case one is enforced: $s(X'\to X'Y') > \tau$. \end{definition} Let us explain the interpretation of this parameter. Suppose that $\beta(X\to Y)$ is low, say $\beta(X\to Y)\leq b$, where $b$ is just slightly larger than~1.
Then, according to the definition, there must exist some {\em different} partial implication $X'\to X'Y'$, with $X'\subseteq X$ and $Y\subseteq X'Y'$, such that $\frac{c(X\to Y)}{c(X'\to Y')}\leq b$, or $c(X'\to Y')\geq c(X\to Y)/b$. This inequality says that the partial implication $X'\to Y'$, stating that transactions with $X'$ tend to have $X'Y'$, has a relatively high confidence, not much lower than that of $X\to Y$; equivalently, the confidence of $X\to Y$ is not much higher (it could be lower) than that of $X'\to Y'$. But all transactions having $X$ do have $X'$, and all transactions having $Y'$ have $Y$, so that the confidence found for $X\to Y$ is not really that novel: it does not add much confidence over a partial implication that states a similarly confident, and intuitively stronger, fact, namely $X'\to Y'$. This author has developed a quite successful open-source partial implication miner based on confidence boost ({\tt yacaree.sf.net}); all readers are welcome to experiment with it and provide feedback. We note also that the confidence width alluded to in Section~\ref{ss:cwidth}, while having different theoretical and practical properties, is surprisingly close in definition to confidence boost. See \cite{BalTKDD} for further discussion of all these issues. Confidence boost fits the general picture as follows: \begin{proposition} $\beta(X\to Y) = \min\{ \sigma(X\to Y), m(X\to Y) \}$. \end{proposition} The inequalities $\beta(X\to Y)\leq \sigma(X\to Y)$ (due to \cite{BalTirZor10a}) and $\beta(X\to Y)\leq m(X\to Y)$ are simple to argue: the consequent leading to the support ratio, or the antecedent leading to the multiplicative improvement, takes a role in the denominator of confidence boost. Conversely, taking the maximizing partial implication in the denominator, if it has the same antecedent $X$ then one obtains a bound on the support ratio whereas, if the antecedent is properly smaller, a bound on the multiplicative improvement follows. In Example~\ref{ex:running}, since $\sigma(A\to BC) = 4/3$ and $m(A\to BC) = (4/5) / (3/4)$, which is smaller, we obtain $\beta(A\to BC) = (4/5) / (3/4) \approx 1.066$. A related proposal in \cite{KryszPKDD} suggests minimizing the antecedents directly and maximizing the consequents, within the confidence bound, and in a context where antecedents and consequents are kept disjoint. This is similar to statement $(3)$ in Lemma~\ref{l:redchar}, except that, there, one maximizes jointly consequent and antecedent. If consequents are maximized separately, then the \emph{central intuition} fails, but there is an interesting connection with confidence boost; see~\cite{BalTKDD}. The measures in this improvement family, including conditional weighted variants and also confidence boost, tend to require exploration of larger spaces of antecedents compared to simpler rule quality measures. This objection turns out not to be too relevant because human-readable partial implications often have just a few items in the antecedent. Nontrivial algorithmic proposals for handling this issue appear as well in \cite{BalTKDD}. \subsection{Ongoing Developments} We briefly mention here the following observations. First, as in Section~\ref{ss:bstar}, a variant of confidence boost appropriate for closure-based analysis exists \cite{BalTKDD}. Second, both variants trivialize if they are applied directly, in their literal terms, to full implications.
However, the intuitions leading to confidence boost can be applied as well to full implications. In future work, currently in preparation, we will discuss proposals for formalizing the same intuition in the context of full implications. \section{Evaluation of Evaluation Measures} We have covered just a small fraction of the evaluation measures proposed to endow the partial implication connective with useful semantics. All of these actually attempt to capture a potential (but maybe nonexistent) ``na\"\i{}ve concept'' of interesting partial implication from the perspective of an end user. Eventually, we would like to find one such semantics that fits that hypothetical na\"\i{}ve concept as well as possible. We can see no choice but to embark, at some point, on the creation of resources where, for specific datasets, the interest of particular implications is recorded as per the assessment of individual humans. Some approximations to this plan are Section~5.2 of \cite{BalTKDD}, where the author, as a scientific expert, subjectively evaluates partial implications obtained from abstracts or scientific papers; a similar approach in \cite{MINI} using PKDD abstracts; and the work in \cite{BalTirZor10b,ZorGB} where partial implications found on educational datasets from university course logs are evaluated by the teachers of the corresponding courses. These preliminary experiments are positive and we hope that a more ambitious attempt could be made in the future along these lines. The idea of evaluating associators through the predictive capabilities of the rules found has been put forward in several sources, e.g.~\cite{MHF}. The usage of association rules for direct prediction (where the ``class'' attribute is forced to occur in the consequent) has been widely studied (e.g.~\cite{YinHan}). In \cite{MHF}, two different associators are employed to find rules with the ``class'' as consequent, and they are compared in terms of predictive accuracy. This scheme is inappropriate to evaluate our proposals for the semantics of partial implications because, first, we must focus on single pairs of attribute and value as right-hand side, thus making it useless to consider larger right-hand sides; and, also, the classification will only be sensitive to minimal left-hand sides, independently of their confidences. In \cite{BalDog}, we have deployed an alternative framework that allows us to evaluate the diverse options of semantics for association rules in terms of their usefulness for subsequent predictive tasks. By means of a mechanism akin to the AUC measure for predictor evaluation, we have focused on potential accuracy improvements of predictors on given, public, standard benchmark datasets, if one more Boolean column is added, namely, one that is true exactly for those observations that are exceptions to one association rule: the antecedent holds but the consequent does not. In a sense, we use the association rule as a ``hint of outliers'', but, instead of removing them, we simply offer direct access to this label to the predictor, through the extra column. Of course, in general this may lead the predictor astray instead of helping it. Our experiments suggest that leverage, support, and multiplicative improvement tend to be better than the other measures with respect to this evaluation score.
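As an indication of how such an evaluation can be set up in practice, the following Python sketch (our own, simplified illustration of the idea rather than the exact protocol of \cite{BalDog}; the data, the rule, and all names are hypothetical, and scikit-learn is assumed to be available) appends the ``exception to the rule'' indicator column to a Boolean data matrix and compares cross-validated AUC with and without it.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical Boolean data matrix (rows: observations, columns: items)
# and class labels; in practice these come from a benchmark dataset.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 12))
y = rng.integers(0, 2, size=500)

# A candidate rule, given by column indices; in the actual evaluation it
# would be mined by an associator and selected by one of the measures above.
antecedent, consequent = [0, 3], [7]

# Extra column: 1 exactly on the exceptions to the rule
# (antecedent holds but the consequent does not).
holds_ant = X[:, antecedent].all(axis=1)
holds_con = X[:, consequent].all(axis=1)
exception = (holds_ant & ~holds_con).astype(int).reshape(-1, 1)

clf = LogisticRegression(max_iter=1000)
auc_base = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
auc_extra = cross_val_score(clf, np.hstack([X, exception]), y,
                            cv=5, scoring="roc_auc").mean()
print(auc_base, auc_extra)
\end{verbatim}
On real benchmark data, repeating this comparison over the rules selected by each quality measure yields the evaluation score mentioned above; on the synthetic data of the sketch both AUC values are, of course, close to $0.5$.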
\subsection{Ongoing Developments} We are currently developing further frameworks that, hopefully, might be helpful in assessing the relative merits of the different candidates for a semantics of partial implications, often put forward as rule quality measures. One of them resorts to an empirical application of approximations to the MDL principle along the lines of Krimp \cite{krimp}. A second idea is to make explicit the dependence on alternative partial implications, in the sense that $X\to Y$ would mean, intuitively, that $Y$ appears often on the support of $X$ and that, barring the presence of some other partial implication to the contrary, it is approximately uniformly distributed there. These avenues will hopefully be explored over the coming months or years. A common thread is that additional statistical knowledge, along the lines of the self-sufficient itemsets of Webb~\cite{Webb10}, for instance, is expected to be at play in future developments of the task of endowing the partial implication connective with the right intuitive semantics. \end{document}
\begin{document} \title[Differential cocycles]{Arithmetic differential equations on $GL_n$, I:\\ differential cocycles} \author{Alexandru Buium and Taylor Dupuy} \def \Rp{R_p} \def \Rpi{R_{\pi}} \def \dpi{\d_{\pi}} \def \bT{{\bf T}} \def \cI{{\mathcal I}} \def \cH{{\mathcal H}} \def \cJ{{\mathcal J}} \def \ZN{\bZ[1/N,\zeta_N]} \def \tA{\tilde{A}} \def \o{\omega} \def \tB{\tilde{B}} \def \tC{\tilde{C}} \def \alph{A} \def \bet{B} \def \bsigma{\bar{\sigma}} \def \y{^{\infty}} \def \Ra{\Rightarrow} \def \uBS{\overline{BS}} \def \lBS{\underline{BS}} \def \lB{\underline{B}} \def \<{\langle} \def \>{\rangle} \def \hL{\hat{L}} \def \cU{\mathcal U} \def \cF{\mathcal F} \def \S{\Sigma} \def \st{\stackrel} \def \sd{Spec_{\d}\ } \def \pd{Proj_{\d}\ } \def \s{\sigma_2} \def \i{\sigma_1} \def \bs{ } \def \cD{\mathcal D} \def \cC{\mathcal C} \def \cT{\mathcal T} \def \cK{\mathcal K} \def \cX{\mathcal X} \def \sX{X_{set}} \def \cY{\mathcal Y} \def \cS{X} \def \cR{\mathcal R} \def \cE{\mathcal E} \def \tcE{\tilde{\mathcal E}} \def \cP{\mathcal P} \def \cA{\mathcal A} \def \cV{\mathcal V} \def \cM{\mathcal M} \def \cL{\mathcal L} \def \cN{\mathcal N} \def \tcM{\tilde{\mathcal M}} \def \caS{\mathcal S} \def \cG{\mathcal G} \def \cB{\mathcal B} \def \tG{\tilde{G}} \def \cF{\mathcal F} \def \h{\hat{\ }} \def \hp{\hat{\ }} \def \tS{\tilde{S}} \def \tP{\tilde{P}} \def \tA{\tilde{A}} \def \tX{\tilde{X}} \def \tcS{\tilde{X}} \def \tT{\tilde{T}} \def \tE{\tilde{E}} \def \tV{\tilde{V}} \def \tC{\tilde{C}} \def \tI{\tilde{I}} \def \tU{\tilde{U}} \def \tG{\tilde{G}} \def \tu{\tilde{u}} \def \chu{\check{u}} \def \tx{\tilde{x}} \def \tL{\tilde{L}} \def \tY{\tilde{Y}} \def \d{\delta} \def \e{\chi} \def \bW{\mathbb W} \def \bV{{\mathbb V}} \def \bF{{\bf F}} \def \bE{{\bf E}} \def \bC{{\bf C}} \def \bO{{\bf O}} \def \bR{{\bf R}} \def \bA{{\bf A}} \def \bB{{\bf B}} \def \cO{\mathcal O} \def \ra{\rightarrow} \def \bx{{\bf x}} \def \f{{\bf f}} \def \bX{{\bf X}} \def \bH{{\bf H}} \def \bS{{\bf S}} \def \bF{{\bf F}} \def \bN{{\bf N}} \def \bK{{\bf K}} \def \bE{{\bf E}} \def \bB{{\bf B}} \def \bQ{{\bf Q}} \def \bd{{\bf d}} \def \bY{{\bf Y}} \def \bU{{\bf U}} \def \bL{{\bf L}} \def \bQ{{\bf Q}} \def \bP{{\bf P}} \def \bR{{\bf R}} \def \bC{{\bf C}} \def \bD{{\bf D}} \def \bM{{\bf M}} \def \bZ{{\mathbb Z}} \def \xtoleqr{x^{(\leq r)}} \def \hU{\hat{U}} \def \k{\kappa} \def \ee{\overline{p^{\k}}} \newtheorem{THM}{{\!}}[section] \newtheorem{THMX}{{\!}} \renewcommand{\theTHMX}{} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{\bf Example} \numberwithin{equation}{section} \address{University of New Mexico \\ Albuquerque, NM 87131} \email{[email protected]} \maketitle \begin{abstract} The theory of differential equations has an arithmetic analogue \cite{book} in which derivatives are replaced by Fermat quotients. One can then ask what is the arithmetic analogue of a linear differential equation. The study of usual linear differential equations is the same as the study of the differential cocycle from $GL_n$ into its Lie algebra given by the logarithmic derivative \cite{kolchin}. However we prove here that there are no such cocycles in the context of arithmetic differential equations. 
In sequels of this paper \cite{adel2, adel3} we will remedy the situation by introducing arithmetic analogues of Lie algebras and a skew version of differential cocycles; this will lead to a theory of linear arithmetic differential equations. \end{abstract} \section{Introduction and main results} In \cite{char} an arithmetic analogue of differential equations was introduced in which derivations are replaced by Fermat quotient operators; cf. \cite{book} for an overview of the theory which was mostly concerned with Abelian varieties and Shimura varieties. With the exception of \cite{simple}, however, little attention has been given to the case of arithmetic differential equations attached to linear algebraic groups such as $GL_n$. The present paper is the first in a series of papers whose purpose is to shed some light into the theory for $GL_n$. In particular, one of our main motivations is to understand what is the correct notion of ``linear arithmetic differential equation". The basic clue should come from the Ritt-Kolchin differential algebra \cite{kolchin} whose arithmetic analogue is the theory in \cite{char,book}. So let us examine Kolchin's setting first. Since we will later treat Kolchin's theory and our arithmetic theory simultaneously it is convenient to use the same notation in two different contexts. We call these contexts the {\it $\delta$-algebraic} setting (corresponding to Kolchin's theory) and the {\it $\delta$-arithmetic} setting (corresponding to the theory in \cite{char, book}). In the $\delta$-algebraic setting we denote by $R$ a field of characteristic zero equipped with a derivation $\d:R\ra R$ and we assume $R$ is $\d$-{\it closed} (which is the same as {\it constrainedly closed} in Kolchin's terminology \cite{kolchin}). We denote by $R^{\d}$ the field of constants $\{c\in R;\d c=0\}$. In this context we will consider smooth schemes of finite type $X$ over $R$ (i.e. nonsingular varieties) and we denote by $X(R)$ the set of $R$-points of $X$; if there is no danger of confusion we often simply write $X$ in place of $X(R)$. In the $\delta$-arithmetic setting we assume $R$ is the unique complete discrete valuation ring with maximal ideal generated by an odd prime $p$ and residue field $k=R/pR$ equal to the algebraic closure ${\mathbb F}_p^a$ of ${\mathbb F}_p$; then we denote by $\d:R\ra R$ the unique $p$-derivation on $R$ in the sense of \cite{char}; recall that $\d x=\frac{\phi(x)-x^p}{p}$ where $\phi:R\ra R$ is the unique ring homomorphism lifting the $p$-power Frobenius on the residue field $k$. We denote by $R^{\d}$ the monoid of constants $\{\lambda\in R;\d \lambda=0\}$; so $R^{\d}$ consists of $0$ and all roots of unity in $R$. Also we denote by $K$ the fraction field of $R$. In this context we will consider smooth schemes of finite type $X$ over $R$ or, more generally, smooth $p$-formal schemes of finite type, by which we mean formal schemes locally isomorphic to $p$-adic completions of smooth schemes of finite type; we denote by $X(R)$ the set of $R$-points of $X$; if there is no danger of confusion we often simply write $X$ in place of $X(R)$. Groups in the category of smooth $p$-formal schemes will be called smooth group $p$-formal schemes. 
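As a concrete illustration of the operator $\d$ (our own toy example, restricted to the subring $\bZ\subset R$, on which the Frobenius lift $\phi$ acts as the identity), the following short Python sketch computes $\d x=\frac{\phi(x)-x^p}{p}$ and checks the standard identities satisfied by a $p$-derivation; the choice $p=5$ and the tested range are of course arbitrary.
\begin{verbatim}
p = 5  # an odd prime, as in the text

def delta(x):
    # p-derivation on the integers, where phi is the identity:
    # delta(x) = (phi(x) - x**p) / p = (x - x**p) / p.
    q, r = divmod(x - x**p, p)
    assert r == 0  # divisibility is Fermat's little theorem
    return q

# Standard p-derivation identities (additivity and the Leibniz rule
# hold only up to "carry" correction terms):
#   delta(a+b) = delta(a) + delta(b) + (a**p + b**p - (a+b)**p) // p
#   delta(a*b) = a**p * delta(b) + b**p * delta(a) + p * delta(a) * delta(b)
for a in range(-4, 5):
    for b in range(-4, 5):
        assert delta(a + b) == \
            delta(a) + delta(b) + (a**p + b**p - (a + b)**p) // p
        assert delta(a * b) == \
            a**p * delta(b) + b**p * delta(a) + p * delta(a) * delta(b)
print("p-derivation identities hold on the tested range")
\end{verbatim}
Note that $\d$ is additive and satisfies the Leibniz rule only up to these correction terms, in contrast with the derivation $\d$ of the $\delta$-algebraic setting.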
In the $\d$-algebraic setting a map $f:R^N\ra R^M$ will be called a $\d$-map of order $n$ if there exists an $M$-vector $F=(F_j)$ of polynomials $F_j\in R[x_0,...,x_n]$ with coefficients in $R$ in $n+1$ $N$-tuples of variables $x_0,...,x_n$, such that \begin{equation} \label{singe} f(a)=F(a,\d a,...,\d^n a),\ \ a\in R^N.\end{equation} In the $\d$-arithmetic setting a map $f:R^N\ra R^M$ will be called a $\d$-map of order $n$ if there exists an $M$-vector $F=(F_j)$ of restricted power series $F_j\in R[x_0,...,x_n]\h$ (where $\h$ means $p$-adic completion) such that \ref{singe} holds. In both the $\d$-algebraic and the $\d$-arithmetic setting one can then consider affine smooth schemes $X,Y$ and define a map $f:X(R)\ra Y(R)$ to be a $\d$-map of order $n$ if there exist embeddings $X\subset {\mathbb A}^N$, $Y\subset {\mathbb A}^M$ such that $f$ is induced by a $\d$-map $R^N\ra R^M$ of order $n$; we simply write $f:X\ra Y$. Even more generally if $X$ is any smooth scheme and $Y$ is an affine scheme a set theoretic map $X\ra Y$ is called a $\d$-map of order $n$ if there exists an affine cover $X=\bigcup X_i$ such that all the induced maps $X_i\ra Y$ are $\d$-maps of order $n$. If $G,H$ are smooth group schemes a $\d$-homomorphism $G\ra H$ is, by definition, a $\d$-map which is also a homomorphism. We shall review these concepts in section 2 following \cite{char,book} in the $\delta$-arithmetic setting and \cite{kolchin, cassidy, hermann} in the $\delta$-algebraic setting. We will use a slightly different (but equivalent) approach using jet spaces. Assume we are in either the $\delta$-algebraic or in the $\delta$-arithmetic setting. Let $G$ be an algebraic group (i.e. smooth group scheme) over $R$ and let $L(G)$ be its Lie algebra which we view as a group scheme over $R$ isomorphic as a scheme with the affine space. Denote by $\star:G\times L(G)\ra L(G)$ the (left) adjoint action. \begin{definition} A classical $\d$-cocycle of $G$ with values in $L(G)$ is a $\d$-map $f:G\ra L(G)$ which is a cocycle for the adjoint action; i.e. for all $g_1,g_2\in G(R)$ we have \begin{equation} \label{theone} f(g_1g_2)=f(g_1)+g_1\star f(g_2).\end{equation}\end{definition} Here $G(R)$ is the set of $R$-points of $G$. If in the definition above $f$ is a regular map (i.e. a morphism of schemes over $R$) then we say $f$ is a regular cocycle. The adjective {\it classical} in the above definition was used in order to distinguish between the {\it classical $\d$-cocycles} introduced above and {\it skew $\d$-cocycles} to be introduced in \cite{adel2}; skew $\d$-cocycles will then be viewed as ``non-classical" objects. \begin{remark}\ 1) For $G$ a smooth closed subgroup scheme of some $GL_n$, identifying $L(G)$ with a Lie subalgebra of ${\mathfrak gl}_n=L(GL_n)$, we get that condition \ref{theone} reads $$f(g_1g_2)=f(g_1)+g_1f(g_2)g_1^{-1}.$$ On the other hand if $G$ is a (non-necessarily affine) smooth commutative group then condition \ref{theone} reads $$f(g_1g_2)=f(g_1)+f(g_2)$$ i.e. $f$ is simply a $\d$-homomorphism. In view of Kolchin's theory \cite{kolchin} it seems reasonable to introduce the following definition (in both the $\delta$-algebraic and the $\delta$-arithmetic contexts): a linear differential equation is an equation of the form $f(u)=\alpha$ where $f$ is a classical $\d$-cocycle of $G$ with values in $L(G)$, $u\in G(R)$ is the unknown and $\alpha \in L(G)(R)$ is a given $R$-point of the Lie algebra. 
2) Assume we are in the $\delta$-algebraic context and assume $G$ is defined over the field of constants $R^{\d}$ of $\d$. In that case there is a remarkable order $1$ classical $\d$-cocycle $l\d:G\ra L(G)$ called the {\it Kolchin logarithmic derivative}. For $G$ closed in $GL_n$ and $\d$-horizontal (i.e. defined by equations with coefficients in $R^{\d}$) $l\d$ is defined by $l\d g=\d g \cdot g^{-1}$, $g\in G(R)$. This essentially models, in the context at hand, concepts that go back to Lie and Cartan. Also the equation $l\d(u)=\alpha$ for $G=GL_n$ reduces to an equation $\d u=\alpha u$ which is the familiar form of a linear differential equation. The above construction can be generalized to the case when $G$ has a structure of $D$-group \cite{lnm,pillay} i.e. $G$ comes equipped with a derivation $\cO_G\ra \cO_G$ extending that of $R$ and compatible with the group structure. 3) Assume again that we are in the $\d$-algebraic context. Then for any Abelian variety $A$ over $R$ (not necessarily defined over the field of constants $R^{\d}$!) there is a surjective, order $2$, $\d$-homomorphism $f:A\ra L(A)$ (hence a classical $\d$-cocycle) into the Lie algebra $L(A)$ of $A$; this follows from Manin's work \cite{manin} (cf. \cite{annals,hermann} for a different construction of Manin's map $f$). It turns out that there is an analogue of Manin maps in the $\delta$-arithmetic setting \cite{char}: for any abelian scheme $A$ over $R$ there is an order $2$ surjective $\d$-homomorphism $f:A\ra L(A)$ with remarkable properties (e.g. one has a precise description of its kernel which is analogous to the Manin theorem of the kernel \cite{manin}). Similar maps exist for linear tori in place of abelian schemes; in particular there is a complete description of the $\d$-homomorphisms ${\mathbb G}_m\ra {\mathbb G}_a=L({\mathbb G}_m)$; cf. Lemma \ref{allchar} (which follows immediately from \cite{char}). \end{remark} To summarize, one has a good definition of linear differential equations in the $\delta$-algebraic setting. Moreover there is a nice $\d$-arithmetic analogue of linear differential equations in the case of Abelian varieties. However our Theorem \ref{main} below shows that there is no ``naive'' $\d$-arithmetic analogue of linear differential equations in the case of $GL_n$: what we show is that there is no ``naive'' $\d$-arithmetic analogue of the Kolchin logarithmic derivative for $GL_n$, $n\geq 2$. \begin{theorem}\label{main} Assume we are in the $\delta$-arithmetic setting. Let $f:GL_n\ra {\mathfrak gl}_n$ be a classical $\d$-cocycle. Then there exists a $\d$-ho\-mo\-morphism $\omega:{\mathbb G}_m\ra {\mathbb G}_a$ and there exists $v\in gl_n(R)$ such that for all $g\in GL_n(R)$ we have: $$f(g)=\omega(\det(g)) 1_n +gvg^{-1}-v.$$\end{theorem} The corresponding statement in algebraic geometry, saying that if a map of varieties $GL_n\ra {\mathfrak gl}_n$ over an algebraically closed field of characteristic zero is a cocycle then the map must be a coboundary, is ``well known'' and follows easily from Whitehead's lemma; cf. Lemma \ref{cocoa}. Note also that our computations have a $\delta$-algebraic variant leading to a characterization of Kolchin's logarithmic derivative that seems to be new. Indeed assume we are in the $\delta$-algebraic setting and let $f:GL_n\ra {\mathfrak gl}_n$ be a classical $\d$-cocycle. Let us say that $f$ is $\d$-coherent if for any algebraic subgroup $G\subset GL_n$ defined over the field of constants $R^{\d}$ with Lie algebra $L(G)$ we have that $f(G)\subset L(G)$.
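Before turning to the converse $\delta$-algebraic statement, we record, for the reader's convenience, the short standard computation showing that $l\d$ is indeed a classical $\d$-cocycle; it uses only the additivity of $\d$ and the Leibniz rule for matrix products, both available in the $\delta$-algebraic setting: $$l\d(g_1g_2)=\d(g_1g_2)\cdot (g_1g_2)^{-1}=(\d g_1\cdot g_2+g_1\cdot \d g_2)g_2^{-1}g_1^{-1}=\d g_1\cdot g_1^{-1}+g_1(\d g_2\cdot g_2^{-1})g_1^{-1}=l\d(g_1)+g_1\star l\d(g_2).$$ Since these two properties fail for the arithmetic $p$-derivation, no such formula is available in the $\delta$-arithmetic setting; Theorem \ref{main} makes this precise.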
It is well known (and trivial to check) that Kolchin's logarithmic derivative $l\d:GL_n\ra {\mathfrak gl}_n$, $l\d g=\d g \cdot g^{-1}$, is a $\d$-coherent classical $\d$-cocycle. Conversely we will prove: \begin{theorem} \label{secondary} Assume we are in the $\delta$-algebraic setting. Let $f:GL_n\ra {\mathfrak gl}_n$ be a $\d$-coherent classical $\d$-cocycle and assume $n\geq 2$. Then there exists $\nu\in R$ such that for all $g\in GL_n(R)$, $$f(g)=\nu \cdot \d g \cdot g^{-1}.$$ \end{theorem} For $n=1$ the above fails; instead one has a complete description of the situation in this case due to Cassidy \cite{cassidy}; cf. Lemma \ref{allchar} for a review of this. As already mentioned Theorem \ref{main} shows that there is no ``naive" analogue of Kolchin's logarithmic derivative for $GL_n$ when $n\geq 2$. In order to find, then, an arithmetic analogue of linear differential equations we will be led in \cite{adel2, adel3} to replace Lie algebras of algebraic groups by objects that are less linear and, arguably, are better adapted to the arithmetic jet theory. This will lead to a theory of linear arithmetic differential equations. The present paper is organized as follows. Section 2 reviews some of the basic concepts in \cite{char,book}. In section 3 we present the proofs of Theorems \ref{main} and \ref{secondary} about classical $\d$-cocycles. {\bf Acknowledgement}. The authors are indebted to P. Cartier for inspiring discussions and to A. Minchenko for providing the proof of Lemma \ref{cocoa}. Also the first author would like to acknowledge partial support from the Hausdorff Institute of Mathematics in Bonn and from the NSF through grant DMS 0852591. \section{Review of $p$-jets} The aim of this section is to review the relevant material on the $\delta$-arithmetic setting in \cite{char,book}. We will also include the relevant corresponding comments on differential algebra \cite{kolchin, cassidy, hermann}. Assume first we are in the $\delta$-arithmetic setting. Recall from the introduction that we denote by $R$ the complete discrete valuation ring with maximal ideal generated by an odd prime $p$ and algebraically closed residue field $k={\mathbb F}_p^a$. Then $R$ comes equipped with a unique lift of Frobenius $\phi:R\ra R$ i.e. with a ring homomorphism whose reduction mod $p$ is the $p$-th power map. For $x$ a tuple of indeterminates over $R$ and tuples of indeterminates $x',...,x^{(n)},...$ we let $R\{x\}=R[x,x',x'',...]$ and we still denote by $\phi:R\{x\}\ra R\{x\}$ the unique lift of Frobenius extending $\phi$ on $R$ such that $\phi(x)=x^p+px'$, $\phi(x')=(x')^p+px''$, etc. Then we let $\d:R\{x\}\ra R\{x\}$ be defined as $\d f=p^{-1}(\phi(f)-f^p)$; so $\d x=x'$, $\d x'=x''$, etc. We view $\d$ as an analogue of the total derivative operator in differential algebra. Now for any affine scheme of finite type $X=Spec\ R[x]/(f)$ over $R$, where $f$ is a tuple of polynomials, we define the $p$-jet spaces of $X$ as being the $p$-formal schemes \begin{equation} \label{fort} J^n(X)=Spf\ \frac{R[x,x',...,x^{(n)}]\h}{(f,\d f,...,\d^n f)}\end{equation} For $X$ of finite type but not necessarily affine we define $J^n(X)=\bigcup J^n(X_i)$ where $X=\bigcup X_i$ is an affine cover and the gluing is an obvious one. The spaces $J^n(X)$ have an obvious universality property for which we refer to \cite{char,book} and can be defined for $X$ a $p$-formal scheme of finite type as well. If $X/R$ is smooth then $J^n(X)$ is locally the $p$-adic completion of a smooth scheme. 
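As a small worked illustration of formula \ref{fort} (our own example, not taken from \cite{char,book}): for a single variable $x$ we have $\phi(x^2)=\phi(x)^2=(x^p+px')^2$, hence $$\d (x^2)=\frac{(x^p+px')^2-x^{2p}}{p}=2x^px'+p(x')^2,$$ the arithmetic counterpart of the total derivative $2x\,x'$ from the $\delta$-algebraic setting; reducing mod $p$ leaves $2x^px'$.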
The universality property yields natural maps on sets of $R$-points $\nabla^n:X(R)\ra J^n(X)(R)$; for $X={\mathbb A}^m$ (the affine space), $J^n(X)={\mathbb A}^{m(n+1)}$ and $\nabla^n(a)=(a,\d a,...,\d^na)$. Let $X,Y$ be schemes of finite type over $R$; by a $\d$-map (of order $n$) $f:X\ra Y$ we understand a map of $p$-formal schemes $J^n(X)\ra J^0(Y)=\widehat{Y}$. Two $\d$-maps $X\ra Y$ and $Y\ra Z$ of orders $n$ and $m$ respectively can be composed (using the universality property) to yield a $\d$-map of order $n+m$. Any $\d$-map $f:X\ra Y$ induces a set theoretic map $f_*:X(R)\ra Y(R)$ defined by $f_*(P)=f(\nabla^n(P))$; if $X,Y$ are smooth the map $f_*$ uniquely determines the map $f$ and, in this case, we simply write $f$ instead of $f_*$ (and $X,Y$ instead of $X(R),Y(R)$). A $\d$-map $X\ra Y$ of order zero is nothing but a map of $p$-formal schemes $\widehat{X}\ra \widehat{Y}$. The functors $J^n$ commute with products and send groups into groups. By a $\d$-homomorphism $f:G\ra H$ between two group schemes (or group $p$-formal schemes) we understand a group homomorphism $J^n(G)\ra J^0(H)=\widehat{H}$. The Kolchin $\delta$-algebraic setting can be presented in a similar way. In this setting one defines jet spaces $J^n(X)$ of schemes of finite type $X$ over $R$ by the same formula \ref{fort} in which one drops the symbol $\h$, one replaces $Spf$ by $Spec$, and one takes $\d:R\{x\}\ra R\{x\}$ to be the unique derivation extending $\d$ on $R$ and sending $\d x=x'$, $\d x'=x''$, etc. If $X$ is smooth then $J^n(X)$ are smooth over $R$. Again one has natural set theoretic maps $\nabla^n:X(R)\ra J^n(X)(R)$. A $\d$-map $f:X\ra Y$ is, as before, a morphism of schemes $J^n(X)\ra Y$; one defines similarly the composition of $\d$-maps. Any $\d$-map $f:X\ra Y$ induces a set theoretic map $f_*:X(R)\ra Y(R)$ which determines $f$ uniquely if $X$ and $Y$ are smooth; in this case we write $f$ in place of $f_*$. A $\d$-map of order zero is nothing but a map of schemes. One defines $\d$-homomorphisms in the expected way. The formalism of flows and prime integrals is similar and boils down, of course, to the classical picture. \begin{remark} The $p$-jet spaces $J^n(X)$ have natural algebraizations which, in their turn, have multi-prime versions. The resulting jet functors are adjoint to the Witt functors. Cf. Borger's work \cite{borger, borger2}. \end{remark} \section{Proofs of Theorems \ref{main} and \ref{secondary}} We start with the ``non-differential" case which is ``well known"; for convenience we include a proof (for which we are indebted to A. Minchenko): \begin{lemma}\label{cocoa} Let $f:GL_n(F) \ra {\mathfrak gl}_n(F)$ be a regular map of algebraic varieties over an algebraically closed field $F$ of characteristic zero and assume $f$ is a cocycle i.e. for all $g_1,g_2\in GL_n(F)$ equation \ref{theone} holds. Then $f$ is a coboundary, i.e. there exists $v\in gl_n(F)$ such that for all $g\in GL_n(F)$, $f(g)=gvg^{-1}-v.$ \end{lemma} {\it Proof}. If in \ref{theone} we set $g_1=x\in GL_n(F)$, $g_2=1+ \epsilon \xi\in GL_n(F[\epsilon])$ where $1$ is the identity, $\epsilon^2=0$, and $\xi\in {\mathfrak gl}(F)$ and if we take the coefficient of $\epsilon$ we get $$ d_x f(x\xi)=x d_1 f(\xi) x^{-1}, $$ equivalently \begin{equation}\label{unu} d_x f(\xi)=x d_1 f(x^{-1}\xi) x^{-1}. \end{equation} Similarly setting $g_1=1+\epsilon \xi$, $g_2=x$ in \ref{theone} we get \begin{equation}\label{doi} d_x f(\xi)= d_1 f(\xi x^{-1})+[\xi x^{-1},f(x)]. 
\end{equation} From \ref{unu} and \ref{doi} we get \begin{equation}\label{trei} x d_1 f(x^{-1}\xi) x^{-1}=d_1 f(\xi x^{-1})+[\xi x^{-1},f(x)] \end{equation} for all $x\in GL_n(F)$ and $\xi\in {\mathfrak gl}_n(F)$ and hence for all $x\in GL_n(A)$, $\xi\in {\mathfrak gl}_n(A)$, $A$ an $F$-algebra. Setting in \ref{trei} $x=1+\epsilon \eta\in GL_n(F[\epsilon])$, with $\eta\in {\mathfrak gl}_n(F)$ and taking the coefficient of $\epsilon$ we get $$d_1f([\xi,\eta])=[d_1f(\xi),\eta]+[\xi,d_1f(\eta)]$$ so $d_1f$ is a derivation on ${\mathfrak gl}_n(F)$. Note that $d_1f$ preserves ${\mathfrak sl}_n(F)$. Indeed by \ref{theone} we have $$\text{tr}\ f(g_1g_2)=\text{tr} \ f(g_1)+\text{tr} \ f(g_2).$$ Since $GL_n$ has no regular non-zero homomorphism into ${\mathbb G}_a$ we get $\text{tr}\ f=0$ hence $f$ takes values in ${\mathfrak sl}_n(F)$. Now by the first Whitehead lemma \cite{jacobson}, p. 77, all derivations of ${\mathfrak sl}_n(F)$ are inner hence there exists $v\in {\mathfrak sl}_n(F)$ such that \begin{equation} \label{patru} d_1f(\xi)=[\xi,v] \end{equation} for all $\xi\in {\mathfrak sl}_n(F)$. By \ref{trei}, for all $\lambda \in F$, $f(\lambda 1)$ centralizes all of ${\mathfrak gl}_n(F)$ and, since $f(\lambda 1)\in{\mathfrak sl}_n(F)$, we must have $f(\lambda 1)=0$. We conclude that \ref{patru} holds for all $\xi\in {\mathfrak gl}_n(F)$. Let $f^v:GL_n(F)\ra {\mathfrak gl}_n(F)$ be the coboundary $f^v(g)=gvg^{-1}-v$ and note that $d_1f=d_1f^v$. Since, by \ref{trei}, $d_xf$ is determined by $d_1f$ it follows that $d_xf=d_xf^v$ for all $x\in GL_n(F)$. Since $f(1)=f^v(1)=0$ it follows that $f=f^v$.\qed As a consequence we get: \begin{lemma}\label{cocoa2} Assume we are in the $\delta$-arithmetic setting. Let $f:GL_n \ra {\mathfrak gl}_n$ be a map of schemes over $R$ and assume $f$ is a cocycle i.e. for all $g_1,g_2\in GL_n(R)$ equation \ref{theone} holds. Then $f$ is a coboundary, i.e. there exists $v\in gl_n(R)$ such that for all $g\in GL_n(R)$, $f(g)=gvg^{-1}-v.$ \end{lemma} {\it Proof}. Let $K$ be the fraction field of $R$ and $F$ its algebraic closure. By Lemma \ref{cocoa} there exists $v_{F}\in {\mathfrak gl}_n(F)$ such that $f(g)=gv_{F}g^{-1}-v_{F}$ for all $g\in GL_n(F)$. Denoting by $v_K$ the sum of the conjugates of $v_{F}$ over $K$ we have $f(g)=gv_Kg^{-1}-v_K$ for all $g\in GL_2(K)$. Let $m\geq 1$ be an integer such that $p^{m}v_K\in {\mathfrak gl}_2(R)$. If $w=p^{m}v_K-\alpha 1=(w_{ij})$ then we have $gwg^{-1}-w=p^{m}f(g)\in p^{m}{\mathfrak gl}_n(R)$ for all $g\in GL_n(R)$. Taking $g$ to be an arbitrary diagonal matrix we get that $w_{ij}\equiv 0$ mod $p^m$ for $i\neq j$. Taking $g=1+e_{ij}$, $i\neq j$, with $e_{ij}$ having all its entries $0$ except the $(i,j)$ entry which is $1$ we get $w_{ii}\equiv w_{jj}$ mod $p^m$; since $w_{11}\equiv 0$ mod $p^m$ we get that all the entries of $w$ are divisible by $p^{m}$ in $R$. So $w=p^{m}v$, $v\in {\mathfrak gl}_n(R)$, and hence $f(g)=gvg^{-1}-v$ for all $g\in GL_n(R)$, which proves our claim.\qed \begin{lemma} \label{croco} \ 1) \cite{char} Assume we are in the $\delta$-arithmetic setting. Then any $\d$-homomorphism $f:{\mathbb G}_a\ra {\mathbb G}_a$, viewed as a map $f:R\ra R$, has the form $$f(a)=\sum_{i=0}^r\lambda_i \phi^i(a),\ \ a\in R$$ for some $\lambda_0,...,\lambda_r\in R$. 2) \cite{cassidy} Assume we are in the $\delta$-algebraic setting. Then any $\d$-homomorphism $f:{\mathbb G}_a\ra {\mathbb G}_a$, viewed as a map $f:R\ra R$, has the form $$f(a)=\sum_{i=0}^r\lambda_i \d^i(a),\ \ a\in R$$ for some $\lambda_0,...,\lambda_r\in R$. 
\end{lemma} {\it Proof.} 2) is due to Cassidy \cite{cassidy}, p. 936. To check 1), note that by \cite{char} there exists $\nu\geq 1$ and $\lambda_i\in R$ such that $p^{\nu}f(a)=\sum\lambda_i \phi^i(a)$, $a\in R$. We are left to check that if for $c_i\in R$ we have that for any $a\in R$, $\sum c_i \phi^i(a)\in pR$ then we must have $c_i\in pR$ for all $i$. But this follows from the fact that $\sum c_i a^{p^i}\equiv 0$ mod $p$ for all $a\in R$ and from the fact that the residue field of $R$ is algebraically closed. \qed Next recall from \cite{char}, p. 313, that there exists a remarkable $\d$-homomorphism $\psi:{\mathbb G}_m\ra {\mathbb G}_a$ given on $R$-points by $$\psi(a)=\sum_{n\geq 1}(-1)^{n-1}\frac{p^{n-1}}{n}\left(\frac{\d a}{a^p}\right)^n.$$ With this notation we have: \begin{lemma} \label{allchar} \ 1) \cite{char} Assume we are in the $\delta$-arithmetic setting. Then any $\d$-homomorphism $f:{\mathbb G}_m\ra {\mathbb G}_a$, viewed as a map $f:R^{\times}\ra R$, has the form $$f(a)=\sum_{i=0}^r \lambda_i \phi^i(\psi(a)),\ \ a\in R^{\times}$$ for some $\lambda_0,...,\lambda_r\in R$. 2) \cite{cassidy} Assume we are in the $\delta$-algebraic setting. Then any $\d$-homomorphism $f:{\mathbb G}_m\ra {\mathbb G}_a$, viewed as a map $f:R^{\times}\ra R$, has the form $$f(a)=\sum_{i=0}^r \lambda_i \d^i(\d a \cdot a^{-1}),\ \ a\in R^{\times}$$ for some $\lambda_0,...,\lambda_r\in R$. \end{lemma} {\it Proof}. 2) is due to Cassidy \cite{cassidy}. To check 1) note that by \cite{char} there exist $\nu\geq 1$ such that $p^{\nu}f$ has the above form; we conclude exactly as in Lemma \ref{croco}. \qed If instead of looking at $\d$-homomorphisms ${\mathbb G}_m\ra {\mathbb G}_a$ we are looking at cocycles of non-trivial actions the situation is very different: \begin{lemma} \label{aj} Assume we are either in the $\delta$-algebraic setting or in the $\delta$-arithmetic setting. Assume $f:{\mathbb G}_m\ra {\mathbb G}_a$ is a $\d$-morphism and $0\neq s\in \bZ$ is an integer such that for any $a_1,a_2\in R^{\times}$ we have $$f(a_1a_2)=f(a_1)+a_1^sf(a_2).$$ Then there exists $\mu \in R$ such that $$f(a)=\mu(1-a^s),\ \ a\in R^{\times};$$ in particular $f$ is a regular cocyle. \end{lemma} {\it Proof}. Let us assume first we are in the $\delta$-arithmetic setting. By induction on $r\geq 1$ one gets, for all $a\in R^{\times}$: $$f(a^r)=(1+a^s+a^{2s}+...+a^{(r-1)s})f(a).$$ In particular for any $\nu\geq 0$ one gets \begin{equation} \label{cucu} (1-a^s)f(a^{p^{\nu}})=(1-a^{sp^{\nu}})f(a)\end{equation} Now recall that $$\cO(J^n({\mathbb G}_m))=R[x,x^{-1},x',...,x^{(n)}]\h.$$ Identify the point set map $f$ with a restricted power series $f\in R[x,x^{-1},x',...,x^{(n)}]\h$ and write $$f=\sum_{i_1,...,i_n\geq 0} f_{i_1...i_n} (x) (x')^{i_1}...(x^{(n)})^{i_n}$$ where $f_{i_1...i_n}\in R[x,x^{-1}]\h$ tend to $0$ $p$-adically for $|i_1|+...+|i_n|\ra \infty$. By \ref{cucu} we get \begin{equation}\label{iri}(1-x^s)\sum_{i_1,...,i_n\geq 0} f_{i_1...i_n} (x^{p^{\nu}}) \d(x^{p^{\nu}})^{i_1}...\d^n(x^{p^{\nu}})^{i_n}\end{equation} $$=(1-x^{sp^{\nu}})\sum_{i_1,...,i_n\geq 0} f_{i_1...i_n} (x) (x')^{i_1}...(x^{(n)})^{i_n}$$ By induction on $\nu$ it is easy to see that for any $\nu\geq r\geq 1$ we have \begin{equation} \label{plan} \d^r(x^{p^{\nu}})\in p^{\nu-r+1}R[x,x',...,x^{(r)}];\end{equation} cf. also \cite{pfin1}. Fix $(i_1,...,i_n)\neq (0,...,0)$ and let $\nu\geq n$ be arbitrary. 
Picking out the coefficient of $(x')^{i_1}...(x^{(n)})^{i_n}$ in the equation \ref{iri} we get $$(1-x^{sp^{\nu}})f_{i_1...i_n}\in p^{\nu-n+1}R[x,x^{-1}]\h.$$ Since $\nu$ is arbitrary we get $$f_{i_1...i_n}=0 \ \ \text{for}\ \ \ (i_1,...,i_n)\neq (0,...,0),$$ hence $f=f_{0...0}$. For $\nu=1$ we may rewrite equation \ref{iri} as $$(1-x^s)f_{0...0}(x^{p})=(1-x^{sp})f_{0...0}(x)$$ The latter is an equality in $R[x,x^{-1}]\h$ so we can also view it as an equality in the ring $$A=\{\sum_{i=-\infty}^{\infty} c_ix^i; c_i\ra 0\ \ \text{for}\ \ i\ra -\infty\}$$ If $s\geq 1$ we get the following equality in $A$: $$g(x^{p})=g(x),\ \ \ g(x)=\frac{f_{0...0}(x)}{1-x^s}=f_{0...0}(x)(1+x^s+x^{2s}+...);$$ if $s\leq -1$ we get the following equality in $A$: $$g(x^{p})=g(x),\ \ \ g(x)=\frac{-x^{-s}f_{0...0}(x)}{1-x^{-s}}=-x^{-s}f_{0...0}(x)(1+x^{-s}+x^{-2s}+...);$$ In either case we get $g=\mu\in R$ and we are done. Assume now we are in the $\delta$-algebraic setting. Identify $f$ with a polynomial in $A[x,x^{-1},x',...,x^{(n)}]$ where $A\subset R$ is a subring finitely generated over $\bZ$. Now select a prime $p\in \bZ$ which is not invertible in $A$; hence by Krull's intersection theorem $\cap_{n=1}^{\infty} p^nA=0$. Then one can conclude by the same argument as above in which $R$ is replaced by $A$ (and all completions are dropped); by the way \ref{plan} is, in this case, obvious. \qed \begin{lemma} \label{success} Assume we are either in the $\delta$-algebraic setting or in the $\delta$-arithmetic setting. Assume we are given a $\d$-map $\epsilon:{\mathbb G}_m\times {\mathbb G}_a^{n-1}\ra {\mathfrak gl}_{n-1}$ and a row vector $\lambda\in R^{n-1}$ such that for any $(a_1,b_1), (a_2,b_2)\in R^{\times}\times R^{n-1}$: \begin{equation} \label{train1} \epsilon(a_1a_2,b_1+a_1b_2) = \epsilon(a_1,b_1)+\epsilon(a_2,b_2)+ a_1^{-1}(a_2^{-1}-1)\lambda^t b_1. \end{equation} Then, for any $a_1,a_2,a\in R^{\times}$, $b\in R^{n-1}$: \begin{equation} \label{camil1} \epsilon(a_1a_2,0)=\epsilon(a_1,0)+\epsilon(a_2,0) \end{equation} \begin{equation} \label{petrescu1} \epsilon(a,b)=\epsilon(a,0)+ a^{-1}\lambda^t b. \end{equation} \end{lemma} {\it Proof}. Assume first we are in the $\delta$-arithmetic setting. Setting $b_1=b_2=0$ in equation \ref{train1} we get \ref{camil1}. Setting $a_1=a_2=1$ in equation \ref{train1} we get $$\epsilon(1,b_1+b_2) = \epsilon(1,b_1)+\epsilon(1,b_2). $$ So if $\epsilon=(\epsilon_{kl})$ then by Lemma \ref{croco} there exist $\lambda_{ikl}\in R^{n-1}$, $i\geq 0$, such that \begin{equation} \label{foc} \epsilon_{kl}(1,b)=\sum \phi^i(b)\lambda_{ikl}^t,\ \ b \in R^{n-1}.\end{equation} Now setting $a_1=a\in R^{\times}$, $a_2=1$, $b_1=0$, $b_2=a^{-1}b\in R^{n-1}$ in equation \ref{train1} we get \begin{equation} \label{trainy1} \epsilon_{kl}(a,b) = \epsilon_{kl}(a,0)+\epsilon_{kl}(1,a^{-1}b)=\epsilon_{kl}(a,0)+\sum \phi^i(a)^{-1}\phi^i(b)\lambda_{ikl}^t.\end{equation} Setting $a_1=1$, $a_2=a\in R^{\times}$, $b_1=b$, $b_2=0$, in equation \ref{train1} we get \begin{equation} \label{nice1} \epsilon_{kl}(a,b) = \epsilon_{kl}(1,b)+\epsilon_{kl}(a,0)+(a^{-1}-1)(\lambda^t b)_{kl}\end{equation} hence, using equation \ref{trainy1} we get \begin{equation} \label{mention1} \sum (\phi^i(a)^{-1}-1)\phi^i(b)\lambda^t_{ikl}=(a^{-1}-1)(\lambda^tb)_{kl}. \end{equation} This can be viewed as an identity in $\phi^i(a),\phi^i(b)$ so we get $\lambda_{ilk}=0$ for all $i\geq 1$ and all $k,l$ and also $b\lambda_{0kl}^t=(\lambda^t b)_{kl}$. Then \ref{petrescu1} follows from \ref{trainy1}. 
Assume now we are in the $\delta$-algebraic setting. The above argument can be modified as follows. Instead of equation \ref{foc} one gets \begin{equation} \label{focs} \epsilon_{kl}(1,b)=\sum (\d^ib)\lambda_{ikl}^t,\ \ b \in R^{n-1}.\end{equation} Instead of equation \ref{trainy1} one gets \begin{equation} \label{trainy1s} \epsilon_{kl}(a,b) =\epsilon_{kl}(a,0)+\sum \d^i(a^{-1}b)\lambda_{ikl}^t.\end{equation} Equation \ref{nice1} remains then valid. Then instead of \ref{mention1} we get \begin{equation} \label{mention1s} \sum (\d^i(a^{-1}b)-\d^ib)\lambda^t_{ikl}=(a^{-1}-1)(\lambda^tb)_{kl}. \end{equation} Setting $b=e_j=(0,...,1,...,0)$ (with $1$ on the $j$th place) we get $$(a^{-1}-1)e_j\lambda^t_{0kl}+\sum_{i\geq 1} \d^i(a^{-1})e_j\lambda^t_{ikl}=(a^{-1}-1)\lambda_k \d_{lj}$$ ($\d_{lj}$ the Kronecker symbol). Hence $e_j \lambda^t_{ikl}=0$ for $i\geq 1$ and all $j,k,l$. Hence $\lambda_{ilk}=0$ for $i\geq 1$ and all $k,l$ and also $e_j\lambda^t_{0kl}=\lambda_k\d_{lj}$. So $a^{-1}b \lambda_{0kl}^t=(a^{-1}\lambda^tb)_{kl}$. This ends the proof. \qed \begin{lemma} \label{successs} Assume we are either in the $\delta$-algebraic setting or in the $\delta$-arithmetic setting. Assume we are given a $\d$-map $\alpha:{\mathbb G}_m\times {\mathbb G}_a^{n-1}\ra {\mathbb A}^1$ and a row vector $\lambda\in R^{n-1}$ such that for any $(a_1,b_1), (a_2,b_2)\in R^{\times}\times R^{n-1}$: \begin{equation} \label{asking} \alpha(a_1a_2,b_1+a_1b_2) = \alpha(a_1,b_1)+\alpha(a_2,b_2)+ a_1^{-1}(1-a_2^{-1})b_1 \lambda^t. \end{equation} Then for any $a_1,a_2,a\in R^{\times}$, $b\in R^{n-1}$: \begin{equation} \label{homo} \alpha(a_1a_2,0)=\alpha(a_1,0)+\alpha(a_2,0)\end{equation} \begin{equation} \label{cap} \alpha(a,b)=\alpha(a,0)- a^{-1}b\lambda^t. \end{equation} \end{lemma} {\it Proof}. Entirely similar to the proof of Lemma \ref{success}.\qed \begin{lemma} \label{successss} Assume we are either in the $\delta$-algebraic setting or in the $\delta$-arithmetic setting. Assume we are given a $\d$-map $\beta:{\mathbb G}_m\times {\mathbb G}_a^{n-1}\ra {\mathbb A}^{n-1}$, a $\d$-homomorphism $\psi:{\mathbb G}_m\ra {\mathfrak gl}_{n-1}$, and a row vector $\lambda\in R^{n-1}$ such that for any $(a_1,b_1), (a_2,b_2)\in R^{\times}\times R^{n-1}$: \begin{equation} \label{maddd} \begin{array}{rcl} \beta(a_1a_2,b_1+a_1b_2) & = &\beta(a_1,b_1)+a_1 \beta(a_2,b_2)\\ \ & \ & \ \\ \ & \ & +b_1 \psi(a_2)\\ \ & \ & \ \\ \ & \ & + a_2^{-1}b_2\lambda^t b_1+a_2^{-1} b_1\lambda^tb_2+a_1^{-1}(a_2^{-1}-1)b_1 \lambda^t b_1\end{array}\end{equation} 1) Assume we are in the $\delta$-arithmetic setting. Then $\psi=0$ and there exist $\mu\in R^{n-1}$ and $\nu\in {\mathfrak gl}_{n-1}(R)$ such that for any $a\in R^{\times}$, $b\in R^{n-1}$ we have \begin{equation} \label{baby} \beta(a,b)=(1-a)\mu+ a^{-1}b \lambda^t b+b\nu. \end{equation} 2) Assume we are in the $\delta$-algebraic setting. Then there exist $\mu\in R^{n-1}$ and $\nu, \eta\in {\mathfrak gl}_{n-1}(R)$ such that for any $a\in R^{\times}$, $b\in R^{n-1}$ we have \begin{equation} \label{shoes} \psi(a)=-(a^{-1}\d a)\nu; \end{equation} \begin{equation} \label{baby} \beta(a,b)=(1-a)\mu+ a^{-1}b \lambda^t b+b\eta+a\d(a^{-1}b)\nu. \end{equation} \end{lemma} {\it Proof}. Assume first we are in the $\delta$-arithmetic setting. 
Setting $b_1=b_2=0$ in equation \ref{maddd} we get $$\beta(a_1a_2,0) = \beta(a_1,0)+a_1\beta(a_2,0).$$ So by Lemma \ref{aj} there exists $\mu\in R^{n-1}$ such that $$\beta(a,0)=(1-a)\mu,\ \ a\in R^{\times}.$$ Setting $a_1=a_2=1$ in equation \ref{maddd} we get $$\beta(1,b_1+b_2) = \beta(1,b_1)+\beta(1,b_2)+b_2 \lambda^t b_1+b_1\lambda^t b_2, $$ because $\psi(1)=0$. Let $\beta^*(b)=\beta(1,b)-b\lambda^t b$. Then $$\beta^*(b_1+b_2)=\beta^*(b_1)+\beta^*(b_2).$$ So by Lemma \ref{croco} there exist $\lambda_{i}\in {\mathfrak gl}_{n-1}(R)$, $i\geq 0$, such that $$\beta^*(b)=\sum \phi^i(b)\lambda_{i},\ \ b \in R^{n-1}$$ hence \begin{equation} \label{heat} \beta(1,b)=b\lambda^t b+\sum \phi^i(b)\lambda_i.\end{equation} Now setting $a_1=a\in R^{\times}$, $a_2=1$, $b_1=0$, $b_2=a^{-1}b\in R^{n-1}$ in equation \ref{maddd} we get \begin{equation} \label{trainyyy} \begin{array}{rcl} \beta(a,b) & = & \beta(a,0)+a\beta(1,a^{-1}b)\\ \ & \ & \ \\ \ & = & (1-a)\mu+a\left( a^{-2}b \lambda^t b+\sum \phi^i(a)^{-1}\phi^i(b)\lambda_i\right).\end{array}\end{equation} Setting $a_1=1$, $a_2=a\in R^{\times}$, $b_1=b$, $b_2=0$ in equation \ref{maddd} we get \begin{equation} \label{nice} \beta(a,b) = \beta(1,b)+\beta(a,0)+b\psi(a)+(a^{-1}-1)b\lambda^tb,\end{equation} hence, using equation \ref{trainyyy} we get, after cancellation: \begin{equation} \label{mentionnn} \sum (a\phi^i(a)^{-1}-1)\phi^i(b)\lambda_i =b\psi(a).\end{equation} Let $a=\zeta$ be an arbitrary root of unity. Since $\psi(\zeta)=0$ and $\phi(\zeta)=\zeta^p$ we get: $$ \sum (\zeta^{1-p^i}-1)\phi^i(b)\lambda_i =0.$$ Since, for each $\zeta$ this can be seen as an identity in $\phi^i(b)$ we get that for all $\zeta$ and all $i\geq 0$, $$ (\zeta^{1-p^i}-1)\lambda_i =0.$$ We get that $\lambda_i=0$ for all $i\geq 1$. By \ref{mentionnn} we get $\psi=0$. By \ref{trainyyy}, if $\nu:=\lambda_0$, we get \ref{baby}. This ends to the proof of assertion 1) in the Lemma. Assume now we are in the $\delta$-algebraic setting. The above argument can be modified as follows. Instead of \ref{heat} we get \begin{equation} \label{heats} \beta(1,b)=b\lambda^t b+\sum (\d^i b)\lambda_i.\end{equation} Instead of equation \ref{trainyyy} we get \begin{equation} \label{trainyyys} \begin{array}{rcl} \beta(a,b) = (1-a)\mu+a\left( a^{-2}b \lambda^t b+\sum \d^i(a^{-1}b)\lambda_i\right).\end{array}\end{equation} Equation \ref{nice} is valid. Instead of equation \ref{mentionnn} we then get \begin{equation} \label{mentionnns} \sum (a\d^i(a^{-1}b)-\d^i b)\lambda_i =b\psi(a).\end{equation} Let $\psi=(\psi_{kl})$ and $\lambda_i=(\lambda_{ikl})$. Setting $b=e_j=(0,...,1,...,0)$ we get $$\sum_{i\geq 1} a\d^i(a^{-1})\lambda_{ijl}=\psi_{jl}(a)$$ Replacing $a$ by $a^{-1}$ and using $\psi(a^{-1})=-\psi(a)$ we get $$\sum_{i\geq 1} (\d^ia)\lambda_{ijl}=-a\psi_{jl}(a)$$ Using the fact that $\psi_{jl}$ is a $\d$-homomorphism ${\mathbb G}_m\ra {\mathbb G}_a$ and $\d$ is a $\d$-homomorphism ${\mathbb G}_a\ra {\mathbb G}_a$ and setting $a=a_1a_2$ one immediately concludes that $\lambda_{ijl}=0$ for $i\geq 1$ and all $j,l$. Hence $\psi(a)=-(a^{-1}\d a)\lambda_1$. Setting $\nu=\lambda_1$ and $\eta=\lambda_0$ we conclude by \ref{trainyyys}. \qed \begin{lemma} \label{indet} Let $x$ be an $n\times n$ matrix of indeterminates over $\bZ$ and let $\Delta_i(x)$ be the determinant of the matrix obtained from $x$ by removing the first $i$ rows and the first $i$ columns. Let $\Delta(x)=\prod_{i=1}^{n-1}\Delta_i(x)$. 
Let $W$ be the group of all $n\times n$ matrices obtained from the identity matrix $1_n$ by permuting its columns. Then there exist $w_0,...,w_N \in W$ and there exist elements $$a_1(x),...,a_N(x)\in \bZ[x,\Delta(x)^{-1}],$$ and $1\times (n-1)$ vectors $$b_1(x),...,b_N(x)\in \bZ[x,\Delta(x)^{-1}]^{n-1},$$ such that if $$s_i(x)=\left(\begin{array}{ll} a_i(x) & b_i(x)\\ 0 & 1_{n-1}\end{array}\right)$$ then $$x=w_0 \cdot s_1(x)\cdot w_1 \cdot s_2(x)\cdot w_2 \cdot ... \cdot s_N(x)\cdot w_N$$ \end{lemma} {\it Proof}. We proceed by induction on $n$. Write $$x=\left(\begin{array}{ll} u & y\\ z^t & w\end{array}\right)$$ where $u$ is one variable, $y,z$ are $1\times (n-1)$ matrices of variables and $w$ is an $(n-1)\times (n-1)$ matrix of variables. We may write \begin{equation} \label{foamee} x=\left(\begin{array}{ll} u-yw^{-1}z^t & yw^{-1}\\ 0& 1_{n-1}\end{array}\right) \left(\begin{array}{ll} 1 & 0\\ 0 & w\end{array}\right) \left(\begin{array}{ll} 1 & 0\\ w^{-1}z^t & 1_{n-1}\end{array}\right). \end{equation} Now for $n \times n$ matrices $m,m'$ we say $m$ and $m'$ are $W$-equivalent if $W mW=W m' W$. We then have that $\left(\begin{array}{ll} 1 & 0 \\ 0 & w\end{array}\right)$ is $W$-equivalent to $\left(\begin{array}{ll} w & 0\\ 0 & 1\end{array}\right)$. Also, if $w^{-1}z^t=(c_1(x),...,c_{n-1}(x))^t$ then $$\left(\begin{array}{ll} 1 & 0\\ w^{-1}z^t & 1_{n-1}\end{array}\right)=\left(\begin{array}{lll} 1 & 0 & 0 \\ c_1(x) & 1 & 0\\ 0 & 0 & 1_{n-2}\end{array}\right)\left(\begin{array}{llll} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ c_2(x) & 0 & 1 & 0\\ 0 & 0 & 0 & 1_{n-3}\end{array}\right).... $$ and each of the matrices in the right hand side of the above equation is $W$-equivalent to its transpose. We may conclude from \ref{foamee} by the induction hypothesis applied to $w$. \qed {\it Proof of Theorem \ref{main}}. Write matrices in $GL_{n}(R)$ in block from $\left(\begin{array}{ll}a & b\\ c^t & d\end{array}\right)$ where $a\in R$, $b,c\in R^{n-1}$ are viewed as a $1\times (n-1)$ matrices, $c^t$ is the transpose of $c$, and $d$ is an $(n-1)\times (n-1)$ matrix. Also let $1_{n-1}$ be the identity $(n-1)\times (n-1)$ matrix. We may write $$f\left(\begin{array}{ll}a & b\\ 0 & 1_{n-1}\end{array}\right)= \left(\begin{array}{cc}\alpha(a,b) & \beta(a,b)\\ \gamma(a,b)^t & \epsilon(a,b)\end{array}\right)$$ where $\alpha, \beta,\gamma, \epsilon$ are $\d$-morphisms from ${\mathbb G}_m\times {\mathbb G}_a^{n-1}$ to ${\mathbb A}^1,{\mathbb A}^{n-1},{\mathbb A}^{n-1}, {\mathfrak gl}_{n-1}$ respectively. If we let $(a_i,b_i)\in R^{\times}\times R^{n-1}$, $i=1,2$, and if we set $$ \alpha_i=\alpha(a_i,b_i), \ \ \beta_i=\beta(a_i,b_i), \ \ \gamma_i=\gamma(a_i,b_i),\ \ \epsilon_i=\epsilon(a_i,b_i),\ \ i=1,2, $$ then the cocycle relation for $f$ yields: \begin{equation} \label{list} \begin{array}{rrcl} (1) & \alpha(a_1a_2,b_1+a_1b_2) & = & \alpha_1+\alpha_2+a_1^{-1}b_1\gamma_2^t\\ (2) & \beta(a_1a_2,b_1+a_1b_2) & = & \beta_1+a_1 \beta_2-\alpha_2 b_1+b_1 \epsilon_2 -a_1^{-1}b_1\gamma_2^t b_1\\ (3) & \gamma(a_1a_2,b_1+a_1b_2) & = & \gamma_1+a_1^{-1}\gamma_2\\ (4) & \epsilon(a_1a_2,b_1+a_1b_2) & = & \epsilon_1+\epsilon_2-a_1^{-1}\gamma_2^t b_1 \end{array} \end{equation} By equation \ref{list} (3) and by Lemma \ref{aj} there exists $\lambda\in R^{n-1}$ such that \begin{equation} \label{official} \gamma(a,b)=(1-a^{-1})\lambda,\ \ a\in R^{\times}, b\in R^{n-1}. 
\end{equation} So equation \ref{list} (4) reads: \begin{equation} \label{ask} \epsilon(a_1a_2,b_1+a_1b_2) = \epsilon(a_1,b_1)+\epsilon(a_2,b_2)+ a_1^{-1}(a_2^{-1}-1)\lambda^t b_1. \end{equation} By Lemma \ref{success} we get \begin{equation} \label{homoe} \epsilon(a_1a_2,0)=\epsilon(a_1,0)+\epsilon(a_2,0)\end{equation} \begin{equation} \label{cape} \epsilon(a,b)=\epsilon(a,0)+ a^{-1}\lambda^t b. \end{equation} Similarly, equation \ref{list} (1) reads \begin{equation} \label{asking} \alpha(a_1a_2,b_1+a_1b_2) = \alpha(a_1,b_1)+\alpha(a_2,b_2)+ a_1^{-1}(1-a_2^{-1})b_1 \lambda^t. \end{equation} By Lemma \ref{successs} we get \begin{equation} \label{homo} \alpha(a_1a_2,0)=\alpha(a_1,0)+\alpha(a_2,0)\end{equation} \begin{equation} \label{cap} \alpha(a,b)=\alpha(a,0)- a^{-1}b\lambda^t. \end{equation} Finally by equations \ref{official}, \ref{cape}, \ref{cap}, equation \ref{list} (2) reads \begin{equation} \label{mad} \begin{array}{rcl} \beta(a_1a_2,b_1+a_1b_2) & = & \beta(a_1,b_1)+a_1 \beta(a_2,b_2) \\ \ & \ & \ \\ \ & \ & + b_1(\epsilon(a_2,0)-\alpha(a_2,0)1_{n-1})\\ \ & \ & \ \\ \ & \ & + a_2^{-1}b_2\lambda^t b_1+a_2^{-1} b_1\lambda^tb_2+a_1^{-1}(a_2^{-1}-1)b_1 \lambda^t b_1.\end{array}\end{equation} By Lemma \ref{successss} \begin{equation} \label{zgomot} \epsilon(a,0)=\alpha(a,0)1_{n-1}, \ a\in R^{\times}\end{equation} and there exist $\mu\in R^{n-1}$, $\nu\in {\mathfrak gl}_{n-1}(R)$ such that \begin{equation} \label{shack} \beta(a,b)=(1-a)\mu+ a^{-1}b \lambda^t b+b\nu. \end{equation} Let $\omega:{\mathbb G}_m\ra {\mathbb G}_a$ be the $\d$-homomorphism defined by $\omega(a)=\alpha(a,0)$. Consider the classical $\d$-cocycle $f_0:GL_n\ra {\mathfrak gl}_n$ defined by $$f_0(g):=f(g)-\omega(\det\ g)1_n.$$ By Lemma \ref{cocoa2} it is sufficient to show that $f_0$ is induced on points by a regular map. Let $H\subset GL_n$ the natural closed subgroup scheme such that $$H(R)=\{\left(\begin{array}{ll}a & b \\ 0 & 1_{n-1}\end{array}\right);a\in R^{\times}, b\in R^{n-1}\}.$$ By equations \ref{cap}, \ref{cape}, \ref{official}, the restriction of $f_0$ to $H$ is a regular map. On the other hand by Lemma \ref{indet} there exist finitely many matrices $w_0,w_1,...,w_N\in W$ such that the map $$H\times...\times H\ra GL_n$$ defined on points by $$ (h_1,...,h_N)\mapsto w_0 h_1w_1 \cdot\cdot\cdot h_N w_N$$ has a rational section $$s:U\ra H\times ... \times H, \ s(g)=(s_1(g),...,s_N(g))$$ defined on a Zariski open set $U$ of $GL_n$ which meets the special fiber at $p$. So for $g\in U(R)$ we have $$f_0(g)=f_0(w_0\cdot s_1(g)\cdot w _1 \cdot...\cdot s_N(g)\cdot w_N).$$ By the cocycle condition we get that the restriction of $f_0$ to $U$ is regular and hence, using translations and the cocycle condition again we get that $f_0$ is regular on the whole of $GL_n$. \qed {\it Proof of Theorem \ref{secondary}}. One can redo the argument in the proof of Theorem \ref{main} as follows. First we assume $f$ is any classical $\d$-cocycle (not necessarily $\d$-coherent). The formulae giving $\gamma, \alpha, \epsilon$ (\ref{official}, \ref{homoe}, \ref{cape}, \ref{homo}, \ref{cap}) are unchanged. Hence equation \ref{mad} is valid. By Lemma \ref{successss} there exist $\mu,\lambda \in R^{n-1}$ and $\nu,\eta\in {\mathfrak gl}_{n-1}$ such that \begin{equation} \label{zgomot} \epsilon(a,0)=\alpha(a,0)1_{n-1}-(a^{-1}\d a)\nu, \ a\in R^{\times}\end{equation} \begin{equation} \label{shack} \beta(a,b)=(1-a)\mu+ a^{-1}b \lambda^t b+b\eta+a\d(a^{-1}b)\nu. 
\end{equation} Set $g=\left(\begin{array}{ll} a & b \\ 0 & 1_{n-1}\end{array}\right)\in H(R)$. By the above we conclude that \begin{equation} \label{trip} f(g)= \left(\begin{array}{lll} \alpha(a,0)1_n-a^{-1}b\lambda^t & (1-a)\mu+ a^{-1}b \lambda^t b+b\eta+a\d(a^{-1}b)\nu\\ \ &\ & \ \\ (1-a^{-1})\lambda^t & \alpha(a,0)1_{n-1}-(a^{-1}\d a)\nu+a^{-1}\lambda^t b\end{array}\right) \end{equation} Now let us assume $f$ is $\d$-coherent. Since $f$ must send $H$ into its Lie algebra, consisting of the matrices $\left(\begin{array}{ll} \star & \star \\ 0 & 0_{n-1}\end{array}\right)$, we get that $\lambda=0$ and $$\alpha(a,0)1_{n-1}=(a^{-1}\d a)\nu.$$ This forces $\nu$ to be a scalar matrix, say $\nu=\epsilon 1_{n-1}$ and $\alpha(a,0)=\epsilon \cdot a^{-1}\d a$. If for $v\in {\mathfrak gl}_n(R)$ we write $f^v(g)=gvg^{-1}-v$, $g\in GL_n(R)$, then note that $$f^{v}\left(\begin{array}{ll} a & b \\0 & 1_{n-1}\end{array}\right)=\left(\begin{array}{cc} 0 & (1-a)\mu +b\eta \\ 0 & 0 \end{array}\right)\ \ \text{for}\ \ v=\left(\begin{array}{cc} 0 & -\mu\\0 & \eta\end{array}\right)$$ We conclude that $$f(g)=f^v(g)+\epsilon \cdot \d g \cdot g^{-1}$$ for all $g\in H(R)$ and hence (as in the proof of Theorem \ref{main}) for all $g\in GL_n(R)$. Now since $f$ and $g\mapsto \d g \cdot g^{-1}$ are $\d$-coherent it follows that $f^v$ is $\d$-coherent. It is enough to prove that $v$ is a scalar matrix. Assume $v$ is not scalar. Then we claim there exists $u\in GL_n(R^{\d})$ such that $uvu^{-1}$ is not in the Lie algebra ${\mathfrak t}(R)\subset {\mathfrak gl}_n(R)$ of diagonal matrices. Indeed if one assumes $$\{uvu^{-1};u\in GL_n(R^{\d})\}\subset {\mathfrak t}(R),$$ since $\{uvu^{-1};u\in GL_n(R^{\d})\}$ is Zariski dense in $\{uvu^{-1};u\in GL_n(R)\}$ we get that $\{uvu^{-1};u\in GL_n(R)\}\subset {\mathfrak t}(R)$. Note that $v$ itself is in ${\mathfrak t}(R)$. So the stabilizer of $v$ in $GL_n(R)$ under the adjoint action has dimension $n_1^2+...+n_s^2$ where $n_1,...,n_s$ are the multiplicities of the eigenvalues of $v$; note that $s\geq 2$ and $n_1+...+n_s=n$. Hence the dimension of $\{uvu^{-1};u\in GL_n(R)\}$ is $n^2-(n_1^2+...+n_s^2)> n=\dim {\mathfrak t}$ (because $s\geq 2$), a contradiction. Let now $G=u^{-1}Tu$ where $T\subset GL_n$ is the torus of diagonal matrices; so $G$ is defined over $R^{\d}$ and has Lie algebra $L(G)=u^{-1}L(T)u$. Since $f^v$ is $\d$-regular we have $u^{-1}duvu^{-1}d^{-1}u-v\in u^{-1}L(T)(R)u$ for all $d\in T(R)$. Hence $duvu^{-1}d^{-1} -uvu^{-1}\in {\mathfrak t}$ for all $d\in T(R)$. But $uvu^{-1}$ is not diagonal so there are indices $i\neq j$ such that then $(i,j)$-entry of $uvu^{-1}$ is non zero. Taking $d$ with diagonal entries $d_i\neq d_j$ we get that the $(i,j)$-entry of $duvu^{-1}d^{-1} -uvu^{-1}$ is non-zero, a contradiction. This ends the proof. \qed \end{document}
\begin{document} \title[Anisotropic and isotropic persistent singularities]{Anisotropic and isotropic persistent singularities of solutions of the fast diffusion equation} \author[M. Fila]{Marek Fila} \address{Department of Applied Mathematics and Statistics, Comenius University, \\ 842 48 Bratislava, Slovakia} \email{[email protected]} \thanks{ The first author was partially supported by the Slovak Research and Development Agency under the contract No. APVV-18-0308 and by VEGA grant 1/0339/21. The second author was partially supported by VEGA grant 1/0339/21 and by Comenius University grant UK/111/2021. The third author was partially supported by JSPS KAKENHI Early-Career Scientists (No.~19K14567). The fourth author was partially supported by JSPS KAKENHI Grant-in-Aid for Scientific Research (A) (No.~17H01095). } \author[P. Mackov\'a]{Petra Mackov\'a} \address{Department of Applied Mathematics and Statistics, Comenius University, \\ 842 48 Bratislava, Slovakia} \email{[email protected]} \author[J. Takahashi]{Jin Takahashi} \address{Department of Mathematical and Computing Science, Tokyo Institute of Technology, \\ Tokyo 152-8552, Japan} \email[Communicating author]{[email protected]} \author[E. Yanagida]{Eiji Yanagida} \address{Department of Mathematics, Tokyo Institute of Technology, \\ Tokyo 152-8551, Japan} \email{[email protected]} \subjclass[2020]{Primary 35K67; Secondary 35A21, 35B40.} \keywords{nonlinear diffusion, fast diffusion, singular solution, moving singularity, anisotropic singularity, Dirac source term} \begin{abstract} The aim of this paper is to study a class of positive solutions of the fast diffusion equation with specific persistent singular behavior. First, we construct new types of solutions with anisotropic singularities. Depending on parameters, either these solutions solve the original equation in the distributional sense, or they are not locally integrable in space-time. We show that the latter also holds for solutions with snaking singularities, whose existence has been proved recently by M. Fila, J.R. King, J. Takahashi, and E. Yanagida. Moreover, we establish that in the distributional sense, isotropic solutions whose existence was proved by M. Fila, J. Takahashi, and E. Yanagida in 2019, actually solve the corresponding problem with a moving Dirac source term. Last, we discuss the existence of solutions with anisotropic singularities in a critical case. \end{abstract} \maketitle \section{Introduction} Let $n \geq 2$ and $m \in (0,1)$. We study positive singular solutions of the fast diffusion equation \begin{equation} \label{eq:main} u_t = \Delta u^m, \qquad x \in \mathbb{R}^n \setminus \{ \xi_0 \}, \quad t>0, \end{equation} with an initial condition \begin{equation} \label{eq:main_initial} u(x,0)=u_0(x), \qquad x \in \mathbb{R}^n \setminus \{\xi_0\}. \end{equation} Here, $\xi_0 \in \mathbb{R}^n$ is a given point at which solutions are singular, i.e. \begin{equation*} u(x,t) \to \infty \quad \mbox{ as } \quad x \to \xi_0, \quad t>0. \end{equation*} Let $S^{n-1} := \{ x \in \mathbb{R}^n: |x| = 1 \}$ denote the unit $\displaystyle (n-1)$-sphere and set \begin{equation} \label{eq:r_omega} r := |x-\xi_0| \quad \text{ and } \quad \omega := (x-\xi_0)/|x-\xi_0|. \end{equation} Let $\lambda>0$ and $\alpha\in C^{2,1}(S^{n-1} \times [0,\infty))$ be positive. 
The aim of this paper is to study positive solutions with the persistent singular behavior of the form \begin{equation} \label{eq:anisotrop_asymp} u(x,t) = \alpha(\omega,t) r^{-\lambda} + o(r^{-\lambda} )\quad \text{ as } \quad r \to 0, \end{equation} for $\omega \in S^{n-1}$ and $t\geq 0$. We say that if $\alpha(\omega,t)$ depends non-trivially on the space variable $\omega$, the corresponding solution $u$ has an anisotropic singularity, otherwise it is asymptotically radially symmetric. Our main result formulated in Theorem~\ref{th:anisotropic} concerns the existence of solutions of~\eqref{eq:main}-\eqref{eq:main_initial} with anisotropic singularities. In order to prove the existence of such solutions, we introduce the following assumptions. \begin{itemize} \item[(A1)] Let $\alpha\in C^2(S^{n-1})$ be positive. \item[(A2)] Let $0<m<1$ and let $\lambda$, $\nu$ satisfy \[ \lambda>\frac{2}{1-m}, \qquad (1-m)\lambda-2-m(\lambda-\nu)>0, \qquad \lambda> \nu>0. \] \item[(A3)] Let $u_0\in C(\mathbb{R}^n\setminus\{\xi_0\})$ be positive and such that it has the asymptotic behavior \begin{equation*} u_0^m(x)=\alpha^m(\omega) |x-\xi_0|^{-m\lambda} + O\big(|x-\xi_0|^{-m\nu}\big) \quad \text{ as } \quad x\to \xi_0, \end{equation*} for each $\omega \in S^{n-1}$, and \begin{equation*} C^{-1}\leq u_0(x) \leq C \end{equation*} for $|x-\xi_0|\geq 1$ and some constant $C>1$. \end{itemize} Note that the condition (A2) implies that $\nu$ is sufficiently close to $\lambda$. \begin{theorem} \label{th:anisotropic} Let $n \geq 2$ and assume \emph{(A1)}, \emph{(A2)}, and \emph{(A3)}. Then there is a function \[ u\in C^{2,1}( \{(x,t)\in (\mathbb{R}^n\setminus\{\xi_0\})\times(0,\infty)\}) \cap C( \{(x,t)\in (\mathbb{R}^n\setminus\{\xi_0\})\times[0,\infty)\}), \] which satisfies~\eqref{eq:main}-\eqref{eq:main_initial} pointwise and \begin{equation*}\label{eq:sing1} u(x,t)^m = \alpha^m(\omega) |x-\xi_0|^{-m\lambda} + O(|x-\xi_0|^{-m\nu}) \quad \text{ as } \quad x\to \xi_0, \end{equation*} for each $\omega \in S^{n-1}$ and $t\geq 0$. \end{theorem} A subclass of solutions from Theorem~\ref{th:anisotropic} has been also studied in~\cite{TY2021}. The authors of~\cite{TY2021} focused on radially symmetric solutions of~\eqref{eq:main}-\eqref{eq:main_initial} with $n\geq 3$, $0<m<m_c:=(n-2)/n$ and with the initial condition $u_0(x)=(c_1^m |x-\xi_0|^{-m\lambda} + c_2^m)^{1/m}$, where $2/(1-m)<\lambda<(n-2)/m$, $c_1>0$, and $c_2 \geq 0$. In addition to the existence, several interesting properties of these solutions have been proved, among them their uniqueness. In our next result we show that, depending on parameters, solutions constructed in Theorem~\ref{th:anisotropic} either solve the original fast diffusion equation in the distributional sense, i.e. \begin{equation} \label{eq:weak} u_t=\Delta u^m \quad \text{ in } \quad \mathcal{D}'(\mathbb{R}^n \times (0, \infty)), \end{equation} or they are not locally integrable in space-time. \begin{theorem} \label{th:anisotropic_weak} Let the conditions from Theorem~\ref{th:anisotropic} be satisfied. \begin{enumerate}[label={\upshape(\roman*)}, align=left, widest=iii, leftmargin=*] \item \label{it:i} If $\lambda<n$, $0 < m < m_c$, and $n > 2$, then for the solution $u$ from Theorem~\ref{th:anisotropic} it holds that $u \in C([0,\infty);L^1_{loc}(\mathbb{R}^n))$, and it satisfies~\eqref{eq:weak} in the distributional sense, i.e. 
\[ \int_0^\infty \int_{\mathbb{R}^n} \big( u \varphi_t + u^m \Delta\varphi \big) \, dy \, dt = 0 \] for all $\varphi \in C_0^\infty (\mathbb{R}^n \times (0, \infty))$. \item \label{it:ii} If $\lambda \geq n$, then the solution $u$ from Theorem~\ref{th:anisotropic} satisfies $u \notin L^p_{loc}(\mathbb{R}^n \times [0,\infty))$ for any $p \geq 1$. \end{enumerate} \end{theorem} We note that in the supercritical exponent range $m_c < m < 1$, the authors of \cite{HP} proved that all solutions of~\eqref{eq:weak} with $u_0 \in L^1_{loc}(\mathbb{R}^n)$ become locally bounded and continuous for all $t>0$. A further related result concerning anisotropic singularities can be found in~\cite{FKTY}. Here, the authors constructed positive entire-in-time solutions with snaking singularities for the fast diffusion equation (in the range $m^*<m<1$ and $n \geq 2$, where $m^*:=(n-3)/(n-1)$ when $n \geq 3$ and $m^*:=0$ when $n=2$). In particular, these solutions have a singularity on a set $\Gamma(t):=\{\xi(s); -\infty<s<ct\}$ for $c>0$ and each $t \in \mathbb{R}$. Here $\xi:\mathbb{R}\rightarrow \mathbb{R}^n$ satisfies Condition 1.1 in~\cite{FKTY}. Their construction was based on the existence of the following explicit singular traveling wave solution with cylindrical symmetry \begin{equation} \label{eq:snaking} U(x,t) = C \big( |a| |x-ta| + a \cdot (x-ta) \big)^{-\frac{1}{1-m}}, \end{equation} where $a$ is a velocity vector, and $C$ is an explicitly computable constant. As in Theorem~\ref{th:anisotropic_weak}~\ref{it:ii}, the solution $U$ is also an example of a function with no local integrability in space-time. Namely, in Section~\ref{subsec:proof_snaking} we show the following. \begin{remark} \label{rem:snaking} Let $n \geq 2$ and $m^*<m<1$. Then for the function $U$ from~\eqref{eq:snaking} it holds that $U \notin L^p_{loc}(\mathbb{R}^n \times \mathbb{R})$ for any $p \geq 1$. \end{remark} To extend the idea of various possibilities of distributional solutions of the fast diffusion and porous medium equation, we present our last result in Theorem~\ref{th:dirac}. Here, a class of asymptotically radially symmetric singular solutions satisfies the corresponding equation with a moving Dirac source term in the distributional sense. The existence of such solutions of the initial value problem \begin{align} \label{eq:porous} u_t &= \Delta u^m, \qquad \,\, x \in \mathbb{R}^n \setminus \{ \xi(t) \}, \quad t \in (0, \infty), \\ \label{eq:porous-init} u(x,0)&=u_0(x), \qquad x \in \mathbb{R}^n \setminus \{\xi(0)\}, \end{align} was established in Theorem~1.1 in~\cite{FTY}. Assuming $n \geq 3$ and $m>m_*:= (n-2)/(n-1)$, the authors of~\cite{FTY} constructed singular solutions of~\eqref{eq:porous}-\eqref{eq:porous-init}, which for some given $C^1$ function $k(t)$ behave as \begin{equation} \label{eq:rad_as_sym} u^m(x,t) = k^m(t) |x-\xi(t)|^{-(n-2)} + o(|x-\xi(t)|^{-(n-2)})\quad \text{ as } \quad x \to \xi(t). \end{equation} It was shown in~\cite{FTY} that $m_*$ is a critical exponent for the existence of such solutions, and there are no such solutions if $m<m_*$. To construct global-in-time solutions of this form, suitable conditions on $\xi'$, $k$, and $k'$ were imposed. \begin{remark} \label{rem:1} The existence of solutions from~\cite{FTY} can be extended to the parameter range $n\geq 3$ and $m_c < m \leq m_*$ if the singularity is not moving, i.e. if $\xi(t) \equiv \xi_0$. 
This can be verified by an inspection of the proof of Theorem~1.1 in~\cite{FTY}, which is in this case simpler since all terms containing $\xi'$ vanish. \end{remark} We also remark that the results from~\cite{FTY} have previously been extended in a different way in~\cite{FMTY}. Here, the authors treated the case $n=2$, $m > m_*=0$. They established the existence of solutions that, near the singularity, behave like the fundamental solution of the Laplace equation to the power $1/m$. In Theorem~\ref{th:dirac} we show that solutions from~\cite{FTY} satisfy \begin{equation} \label{eq:n3_weak_delta} u_t = \Delta u^m + (n-2) |S^{n-1}| k^m(t) \delta_{\xi(t)}(x) \quad \text{ in } \quad \mathcal{D}'(\mathbb{R}^n \times (0, \infty)). \end{equation} Here, $\delta_{\xi(t)}$ denotes the Dirac measure on $\mathbb{R}^n$, giving unit mass to the point $\xi(t) \in \mathbb{R}^n$. \begin{theorem} \label{th:dirac} Let $n\geq3$ and assume that conditions on $k(t)$ and $u_0(x)$ from Theorem~1.1 in~\cite{FTY} hold. Let either $m > m_*$ and $\xi(t)$ be as in Theorem~1.1 in~\cite{FTY}, or $m_c < m \leq m_*$ and $\xi(t) \equiv \xi_0$. Then the solution $u$ satisfies equation~\eqref{eq:n3_weak_delta} in the distributional sense, i.e. \[ -\int_0^\infty \int_{\mathbb{R}^n} \big( u \varphi_t + u^m \Delta\varphi \big) \, dx \, dt = \int_0^\infty (n-2) |S^{n-1}| k^m(t) \varphi(\xi(t),t) \, dt \] for all $\varphi \in C_0^\infty (\mathbb{R}^n \times (0, \infty))$. \end{theorem} Equations~\eqref{eq:main} and~\eqref{eq:porous} with $n\geq3$ have radially symmetric stationary solutions of the form \begin{equation} \label{eq:steady_state} \tilde u(x) = K|x-\xi_0|^{-(n-2)/m}, \qquad x \in \mathbb{R}^n \setminus \{ \xi_0 \}, \end{equation} where $K$ is an arbitrary positive constant, and these solutions satisfy \begin{equation} \label{eq:fund_weak} -\Delta \tilde u^m(x) = (n-2)|S^{n-1}| K^m \delta_{\xi_0}(x) \quad \text{ in } \quad \mathcal{D}'(\mathbb{R}^n), \end{equation} where $|S^{n-1}|$ is the hypervolume of the $(n-1)$-dimensional unit sphere. Hence, the result of Theorem~\ref{th:dirac} can be expected. In~\cite{KT14}, the authors constructed singular solutions with time-dependent singularities for the heat equation \begin{equation*} u_t = \Delta u + w(t) \delta_{\xi(t)}(x) \quad \text{ in } \quad \mathcal{D}'(\mathbb{R}^n \times (0, T)), \end{equation*} where $n \geq 2$, $T \in (0,\infty]$, and $w \in L^1((0, t))$ for each $t \in (0,T)$. The behavior of solutions from~\cite{KT14} near the singularity does not always have to be like that of the fundamental solution of the Laplace equation, and the profile loses the asymptotic radial symmetry. Further results concerning the heat equation \begin{equation*} u_t = \Delta u + \delta_{\xi(t)}(x) \otimes M(t) \quad \text{ in } \quad \mathcal{D}'(\mathbb{R}^n \times (0, T)), \end{equation*} where $\delta_{\xi(t)}(x) \otimes M(t)$ is a product measure of $M(t)$ and $\delta_{\xi(t)}(x)$, can be found in~\cite{KT16}. Solutions of the porous medium ($m>1$) and fast diffusion equation in the supercritical range ($m_c < m < 1$) with singularities which are not necessarily standing were analyzed in \cite{Lu1} and \cite{Lu2}, respectively.
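Returning to the stationary solutions~\eqref{eq:steady_state}, note that $\tilde u^m(x) = K^m|x-\xi_0|^{-(n-2)}$ is a constant multiple of the fundamental solution of the Laplace equation, hence harmonic away from $\xi_0$; the Dirac mass at $\xi_0$ is therefore the only distributional contribution to $\Delta \tilde u^m$ in~\eqref{eq:fund_weak}. The following short symbolic computation (an illustrative sketch using the Python library SymPy; it is not part of the argument of~\cite{FTY} or of the proofs below) checks this pointwise harmonicity through the radial form of the Laplacian:

\begin{verbatim}
# Sketch: verify that K^m * r^(-(n-2)) is harmonic away from the origin,
# i.e. r^(1-n) d/dr ( r^(n-1) d/dr [K^m r^(-(n-2))] ) = 0 for n >= 3.
import sympy as sp

r, n, K, m = sp.symbols('r n K m', positive=True)
u_tilde_m = K**m * r**(-(n - 2))
radial_laplacian = r**(1 - n) * sp.diff(r**(n - 1) * sp.diff(u_tilde_m, r), r)
print(sp.simplify(radial_laplacian))   # expected output: 0
\end{verbatim}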
If $M$ is a nonnegative Radon measure on $\mathbb{R}^{n+1}$, which satisfies $M(\Omega \times (0,T)) < \infty$ for $T>0$ and a bounded domain $\Omega \subset \mathbb{R}^n$, then there exists a function $u$ such that \begin{equation*} u_t = \Delta u^m + M(x, t) \quad \text{ in } \quad \mathcal{D}'(\Omega \times (0, T)), \end{equation*} and $u^m \in L^q((0, T); W^{1,q}_0 (\Omega) )$ with $1 < q < 1 + 1/(1 + mn)$. A moving Dirac measure on the right-hand side of parabolic systems also appears in several biological applications concerning, for example, the growth of axons or angiogenesis. See~\cite{CZ} and \cite{Bookholt}, respectively. A moving Dirac measure also appears in~\cite{PO}, where the authors studied the Cattaneo telegraph equation with a moving time-harmonic source in the context of the Doppler effect. We also mention the following two results, which can be applied to solutions with anisotropic singularities. If $n \geq 3$, $0 < m < m_c$, and the singularity of the initial function satisfies $a_1 |x-\xi_0|^{-2/(1-m)} \leq u_0(x) \leq a_2 |x-\xi_0|^{-2/(1-m)}$ for some $a_1, a_2 > 0$ and for all $x \in \Omega$, then it follows from \cite{vazquez2} (for $\Omega = \mathbb{R}^n$) and \cite{VW} (for a smoothly bounded domain $\Omega \subset \mathbb{R}^n$) that finite-time blow-down occurs. More specifically, there is a $T > 0$ such that $u(\cdot, t) \notin L^\infty (\Omega)$ for $t < T$ but $u(\cdot, t) \in L^\infty (\Omega)$ for $t > T$, i.e. the singularity disappears after a time $T$. On the other hand, if $m$ is in the range $m_c < m < 1$, the authors of~\cite{ChV} concluded the monotonicity of strongly singular sets of extended solutions, i.e. that they cannot shrink in time. Hence, the singularity of such solutions persists for all times. This paper is organized as follows. A formal analysis of solutions with the asymptotic behavior~\eqref{eq:anisotrop_asymp} is given in Section~\ref{subses:fc}. The last part of this section is devoted to a critical case that is left as an open problem. The existence result in Theorem~\ref{th:anisotropic} is then proved in Section~\ref{subsec:proof_thn}. Formal computations in Section~\ref{subses:fc} suggest the choice of comparison functions in Subsections~\ref{subsec:super} and~\ref{subsec:sub}. We leave the question of extending the results from Theorem~\ref{th:anisotropic} from standing to moving singularities open. We see no problem in using the methods employed in this text; however, different critical exponents and technical difficulties may arise. We continue with the proof of Theorem~\ref{th:dirac} in Section~\ref{subsec:proof_th3}, and the proof of Theorem~\ref{th:anisotropic_weak} in Section~\ref{subsec:proof_anisotropic_weak}. Finally, computations concerning Remark~\ref{rem:snaking} are given in Section~\ref{subsec:proof_snaking}. \section{Formal computations} \label{subses:fc} Let $u(x,t)$ be given by~\eqref{eq:anisotrop_asymp}, i.e. \begin{equation*} u(x,t) = \alpha(\omega,t) r^{-\lambda} + o(r^{-\lambda}), \end{equation*} where $r>0$, $\omega \in S^{n-1}$, $t\geq 0$, and recall notation~\eqref{eq:r_omega} for $r$ and $\omega$. For $u(x,t) = w(r,\omega,t)$, the fast diffusion equation~\eqref{eq:main} is transformed into \begin{equation} \label{eq:wprob} w_t = r^{1-n} \frac{\partial}{\partial r} \left(r^{n-1} \frac{\partial w^m}{\partial r} \right) + \frac{1}{r^2} \Delta_\omega w^m. \end{equation} Here, $\Delta_\omega$ denotes the Laplace-Beltrami operator on $S^{n-1}$.
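Before carrying out the formal expansion, it may be convenient to have a reproducible check of the radial part of this operator. The following short sketch (an illustration using the Python library SymPy; it is not part of the original computation) confirms that, for a leading-order profile $w^m = \alpha^m r^{-m\lambda}$ with $\alpha$ independent of $r$, the radial part of $\Delta w^m$ produces exactly the coefficient $-m\lambda(n-2-m\lambda)$ that appears in the expansion below:

\begin{verbatim}
# Sketch: radial part of the Laplacian applied to alpha^m * r^(-m*lambda).
import sympy as sp

r, alpha, lam, m, n = sp.symbols('r alpha lambda m n', positive=True)
wm = alpha**m * r**(-m*lam)                     # leading-order profile of w^m
radial_part = r**(1 - n) * sp.diff(r**(n - 1) * sp.diff(wm, r), r)
target = -m*lam*(n - 2 - m*lam) * alpha**m * r**(-m*lam - 2)
print(sp.simplify(radial_part - target))        # expected output: 0
\end{verbatim}

The angular term $r^{-2}\Delta_\omega w^m$ contributes the remaining part $\Delta_\omega \alpha^m \, r^{-m\lambda-2}$ of the expansion.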
Simple computations show that \begin{equation} \label{eq:formal_comp} \begin{split} w_t &= \alpha_t r^{-\lambda} + o(r^{-\lambda}), \\ \Delta w^m &= \big(\Delta_\omega \alpha^m -m\lambda(n-2-m\lambda)\alpha^m \big) r^{-m\lambda -2} + o(r^{-m\lambda -2}). \end{split} \end{equation} The leading term is different in each of the three cases: $\lambda>m\lambda + 2$, $\lambda<m\lambda + 2$, and $\lambda=m\lambda + 2$. \subsection{$\boldsymbol{\lambda> 2/(1-m)}$} \label{sec:results2} The most singular case is $\lambda>m\lambda + 2$, which is equivalent to $\lambda> 2/(1-m)$. This implies that the leading term in~\eqref{eq:formal_comp} is $w_t$, hence, we set $\alpha = \alpha(\omega)$. The existence result in Theorem~\ref{th:anisotropic} is based on this observation. \subsection{$\boldsymbol{\lambda< 2/(1-m)}$} \label{sec:results1} The case $\lambda<m\lambda + 2$ is equivalent to $\lambda< 2/(1-m)$. The leading term in~\eqref{eq:formal_comp} is $\Delta w^m$, which implies that $\alpha$ must be a solution of \begin{center} $-\Delta_\omega \alpha^m = -m\lambda(n-2-m\lambda)\alpha^m$. \end{center} Eigenvalues of $-\Delta_\omega$ are non-negative and start with zero (the constant $1$ is the corresponding eigenfunction), other eigenfunctions change sign, see~\cite{LaplaceBeltrami}. Since we are looking for positive solutions, we obtain conditions $$\lambda = \frac{n-2}{m}, \quad m > m_c, \quad \text{and} \quad n \geq 3 .$$ As we pointed out in the introduction, the existence of the corresponding asymptotically radially symmetric solutions for $n\geq 3$ and $m > m_*$ in the case of a moving singularity was established in Theorem~1.1 in~\cite{FTY}. Moreover, in Remark~\ref{rem:1} we explain that in the case of a standing singularity, the proof of Theorem~1.1 in~\cite{FTY} is valid also in the parameter range $n\geq 3$, $m_c < m \leq m_*$. Our result extending the qualitative analysis of these solutions can be found in Theorem~\ref{th:dirac}. \subsection{Critical case $\boldsymbol{\lambda = 2/(1-m)}$, open problem} \label{sec:results3} In the critical case $\lambda = 2/(1-m)$, the terms $w_t$ and $\Delta w^m$ are balanced. Let \[ A:=m\lambda(m\lambda-n+2) = \frac{2 m n (m-m_c)}{(1 - m)^2}. \] Balancing the leading terms in~\eqref{eq:formal_comp} leads us to an initial value problem \begin{align} \label{eq:alfa_eq} \alpha_t(\omega,t) &= \Delta_\omega \alpha^m(\omega,t) + A\alpha^m(\omega,t), \qquad \omega \in S^{n-1}, \quad 0 <t <T, \\ \label{eq:alfa_eq_initial} \alpha(\omega,0) &= \alpha_0(\omega)>0, \qquad \qquad \qquad \qquad \, \omega \in S^{n-1}, \end{align} where $T \in (0,\infty]$. If we prove the existence of a positive classical solution $\alpha$ of~\eqref{eq:alfa_eq}-\eqref{eq:alfa_eq_initial} for some $T>0$, we obtain a positive classical solution of~\eqref{eq:main}-\eqref{eq:main_initial} of the form \[ u(x,t) = \alpha(\omega,t) |x-\xi_0|^{-\lambda}, \qquad x \in \mathbb{R}^n \setminus \{ \xi_0 \}, \quad 0 <t <T. \] At the end of this section, we present some examples of solutions of~\eqref{eq:alfa_eq}-\eqref{eq:alfa_eq_initial}. A well-known explicit solution is \[ \tilde \alpha (t) = \big((1-m)A \, t + t_0 \big)^{\frac{1}{1-m}}, \] where $t_0$ is an arbitrary positive constant. In order to obtain solutions of~\eqref{eq:main}-\eqref{eq:main_initial} with an anisotropic singularity, we are interested in solutions of~\eqref{eq:alfa_eq}-\eqref{eq:alfa_eq_initial} that, unlike $\tilde \alpha$, depend non-trivially on the space variable $\omega$. 
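As a quick consistency check (an illustrative sketch using the Python library SymPy, not part of the original text), one can confirm symbolically, for a few sample values of $m$, that $\tilde \alpha$ indeed satisfies the spatially homogeneous form of~\eqref{eq:alfa_eq}, for which $\Delta_\omega \tilde\alpha^m = 0$:

\begin{verbatim}
# Sketch: check that alpha~(t) = ((1-m) A t + t0)^(1/(1-m)) solves
# d/dt alpha~ = A * alpha~^m (the omega-independent form of the equation),
# spot-checked here for a few rational values of m in (0,1).
import sympy as sp

t, t0, A = sp.symbols('t t_0 A', positive=True)
for m in (sp.Rational(1, 3), sp.Rational(1, 2), sp.Rational(3, 4)):
    alpha_tilde = ((1 - m)*A*t + t0)**(1/(1 - m))
    residual = sp.diff(alpha_tilde, t) - A*alpha_tilde**m
    print(m, sp.simplify(residual))   # expected output: 0 for each m
\end{verbatim}

Solutions with a genuine dependence on the space variable $\omega$ require a different ansatz.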
Such solutions can be obtained by looking for solutions of the form $\alpha (\omega,t) = \tau(t) \beta^{1/m}(\omega)$, where $\beta$ is non-constant. Using the method of separation of variables, we have $\tau(t) = ((1-m) C t + t_0)^{\frac{1}{1-m}}$, where $t_0>0$ and $C$ is a constant from the separation of variables. For $\beta(\omega)$ we obtain a semilinear elliptic equation on a sphere \begin{equation} \label{eq:onsphere} \Delta_\omega \beta(\omega) + A \beta(\omega) = C \beta^{1/m}(\omega), \qquad \omega \in S^{n-1}. \end{equation} We briefly examine the existence of a class of solutions of~\eqref{eq:onsphere} depending only on an angle $\theta \in [0,2\pi)$. In this case, equation~\eqref{eq:onsphere} becomes $\ddot \beta(\theta) + A \beta(\theta) = C \beta^{1/m}(\theta)$. It represents a Hamiltonian system \[ \begin{cases} \dot \beta = v, \\ \dot v = - \beta (A - C \beta^{{1/m}-1}), \end{cases} \] with a relevant critical point $P=((A/C)^{\frac{m}{1-m}},0)$ if $C \neq 0$ has the same sign as $A \neq 0$. Notice that this condition guarantees the same asymptotic behavior of $\tau$ as that of $\tilde \alpha$, which is consistent with the results from~\cite{vazquez2, ChV} described in the introduction. Finally, the existence of periodic trajectories results from a standard ODE theory: the critical point $P$ is a center, i.e. all trajectories close to it are closed orbits if $A<0$. The existence of a more general class of classical positive solutions of~\eqref{eq:alfa_eq}-\eqref{eq:alfa_eq_initial}, which depend non-trivially on $\omega$, is left as an open problem. \section{Proof of Theorem~\ref{th:anisotropic}} \label{subsec:proof_thn} \subsection{Construction of supersolutions} \label{subsec:super} We set $a(t) := A e^{At}$, where $A \geq 1$ is a sufficiently large constant chosen later, and define a function \begin{equation} \label{eq:supersolform_n} w^+(r,\omega,t) := \big( \alpha^m(\omega) r^{-m\lambda} + a(t) r^{-m\nu} +A\big)^\frac{1}{m}. \end{equation} In what follows, we prove that $w^+$ is a supersolution of~\eqref{eq:wprob}. \begin{lemma} \label{l+:n} Let $n \geq 2$ and assume \emph{(A1)} and \emph{(A2)}. Then there exists constant $A \geq 1$, such that the function $w^+(r,\omega,t)$ defined in~\eqref{eq:supersolform_n} is a supersolution of equation~\eqref{eq:wprob} for $r>0$, $\omega \in S^{n-1}$, and $t>0$. \end{lemma} \begin{proof} We define a bounded function \[ \sigma(\omega) := \Delta_\omega \alpha^m(\omega) + m\lambda (m\lambda-n+2) \alpha^m(\omega) \] and compute \begin{equation*} \begin{split} w^+_t &= \frac{1}{m} A a r^{-m\nu} \big(\alpha^m r^{-m\lambda} + a r^{-m\nu} +A\big)^{\frac{1}{m}-1}, \\ -\Delta (w^+)^m &= -\sigma r^{-m\lambda-2} - m\nu (m\nu-n+2) a r^{-m\nu-2}. \end{split} \end{equation*} Since $(1-m)\lambda-2-m(\lambda-\nu)>0$, $\alpha \geq \alpha_{min} :=\min_{\omega \in S^{n-1}} \alpha(\omega) >0$, $\sigma \leq \sigma_{max}:= \max_{\omega\in S^{n-1}} \sigma(\omega) < \infty$, and $A\geq1$, for $r\leq 1$ we obtain \begin{equation*} \begin{split} &w^+_t -\Delta (w^+)^m \\ &= \frac{1}{m} A a \big(\alpha^m + a r^{m(\lambda-\nu)} + A r^{m\lambda} \big)^{\frac{1}{m}-1} r^{-\lambda + m(\lambda-\nu)} - \sigma r^{-m\lambda-2} - m\nu (m\nu-n+2) a r^{-m\nu-2} \\ &\geq \frac{1}{m} A a \alpha^{1-m} r^{-m\lambda-2} - \sigma r^{-m\lambda-2} - m\nu (m\nu+2) a r^{-m\nu-2} \\ &\geq \left( \Big( \frac{1}{m} A \alpha_{min}^{1-m} - m\nu (m\nu+2) \Big) a - \sigma_{max} \right) r^{-m\lambda-2}. 
\end{split} \end{equation*} Thus, for $A \geq m \alpha_{min}^{-(1-m)} (\sigma_{max} + m\nu (m\nu+2))$, it holds that $w^+_t -\Delta (w^+)^m \geq 0$ for all $r\leq 1$, $\omega \in S^{n-1}$, and $t>0$. Similarly, for $r>1$, $\omega \in S^{n-1}$, and $t > 0$ we have \begin{equation*} \begin{split} w^+_t -\Delta (w^+)^m &\geq \frac{1}{m} A^\frac{1}{m} a r^{-m\nu} - \sigma_{max} r^{-m\lambda-2} - m\nu (m\nu-n+2) a r^{-m\nu-2} \\ &\geq \left( \Big(\frac{1}{m} A - m\nu (m\nu-n+2) \Big) a - \sigma_{max} \right) r^{-m\nu-2}. \end{split} \end{equation*} This completes the proof that for any $A \geq 1$ sufficiently large, the function $w^+$ defined in~\eqref{eq:supersolform_n} is a supersolution of~\eqref{eq:wprob} for $t>0$ in the whole space. \end{proof} \subsection{Construction of subsolutions} \label{subsec:sub} Let $\mu>\lambda$ satisfy \begin{equation}\label{eq:asmu} m\mu (m\mu+2-n) \min_{\omega \in S^{n-1}} \alpha^m - \max_{\omega\in S^{n-1}}|\Delta_\omega(\alpha^m)| >0. \end{equation} Note that~\eqref{eq:asmu} implies $\mu>(n-2)/m$. Let $\delta>0$ satisfy \[ 0<\delta<\frac{\lambda-\nu}{\mu-\nu}, \] and define \[ b(t):=b_0 e^{Bt}, \qquad \rho(t):= (1-\delta)^\frac{1}{m(\lambda-\nu)} b^{-\frac{1}{m(\lambda-\nu)}}(t), \] where $b_0, B>1$ are sufficiently large constants chosen later. We set \[ \begin{aligned} w_{in}^-(r,\omega,t) &:= \alpha(\omega) r^{-\lambda} ( 1-b(t) r^{m(\lambda-\nu)} )^\frac{1}{m}, \\ w_{out}^-(r,\omega,t) &:= \alpha(\omega) \delta^\frac{1}{m} \rho^{\mu-\lambda}(t) r^{-\mu}. \end{aligned} \] Note that the zero point of $w_{in}^-$ is $b^{-\frac{1}{m(\lambda-\nu)}}(t)$ and that $w_{in}^-$ intersects $w_{out}^-$ at $r=\rho(t)<1$. Now we can construct a subsolution of the form \begin{equation} \label{eq:subsolform} w^-(r,\omega,t)= \left\{ \begin{aligned} & w_{in}^-(r,\omega,t) &&\mbox{ for }r\leq \rho(t), \text{ } t\geq 0, \\ & w_{out}^-(r,\omega,t) &&\mbox{ for }r> \rho(t), \text{ } t\geq 0. \end{aligned} \right. \end{equation} \begin{lemma} \label{l-:n} Let $n \geq 2$ and assume \emph{(A1)} and \emph{(A2)}. Then there exist constants $b_0, B>1$, such that the function $w^-$ defined in~\eqref{eq:subsolform} is a subsolution of equation~\eqref{eq:wprob} for $r>0$, $\omega \in S^{n-1}$, and $t >0$. \end{lemma} \begin{proof} \textbf{Inner part:} Let $t>0$. We consider the inner part $r\leq \rho(t)$. Straightforward computations show that \[ \begin{aligned} (w_{in}^-)_t &= \frac{1}{m} \alpha r^{-\lambda} ( 1-b r^{m(\lambda-\nu)} )^{\frac{1}{m}-1} (-b' r^{m(\lambda-\nu)} ) \\ &= - \frac{1}{m} B b \alpha r^{-\lambda+m(\lambda-\nu)} ( 1-b r^{m(\lambda-\nu)} )^{\frac{1}{m}-1} \end{aligned} \] and \[ \begin{aligned} \Delta (w_{in}^-)^m =& \, m \alpha^m ( (m\lambda+2-n) \lambda r^{-m\lambda-2} - (m\nu+2-n) \nu b r^{-m\nu-2} )\\ &+ ( r^{-m\lambda-2} -b r^{-m\nu-2} ) \Delta_\omega(\alpha^m). 
\end{aligned} \] By the definition of $\rho$, we have \[ \begin{aligned} (w_{in}^-)_t - \Delta (w_{in}^-)^m &= - \frac{1}{m} B b \alpha r^{-\lambda+m(\lambda-\nu)} ( 1-b r^{m(\lambda-\nu)} )^{\frac{1}{m}-1} - ( r^{-m\lambda-2} -b r^{-m\nu-2} ) \Delta_\omega(\alpha^m) \\ &\quad - m \alpha^m ( (m\lambda+2-n) \lambda r^{-m\lambda-2} - (m\nu+2-n) \nu b r^{-m\nu-2} ) \\ &\leq - \frac{1}{m} B b \alpha r^{-\lambda+m(\lambda-\nu)} ( 1-b \rho^{m(\lambda-\nu)} )^{\frac{1}{m}-1} + C r^{-m\lambda-2} + C b r^{-m\nu-2} \\ &= - \frac{1}{m} B b \alpha r^{-\lambda+m(\lambda-\nu)} \delta^{\frac{1}{m}-1} + C r^{-m\lambda-2} + C b r^{-m\nu-2} \end{aligned} \] for $r\leq \rho$, where $C>0$ is a constant independent of $b$. By $\alpha_{min}>0$, $\lambda>\nu$, $b>1$, and $r\leq \rho<1$, we have \[ \begin{aligned} (w_{in}^-)_t - \Delta (w_{in}^-)^m &\leq - \frac{1}{m} B b \delta^{\frac{1}{m}-1} \alpha_{min} r^{-\lambda+m(\lambda-\nu)} + C b r^{-m\lambda-2} \\ &= - b r^{ -\lambda+m(\lambda-\nu) } \left( \frac{1}{m} \delta^{\frac{1}{m}-1} \alpha_{min} B - C r^{(1-m)\lambda-2-m(\lambda-\nu)} \right). \end{aligned} \] Recall that $(1-m)\lambda-2-m(\lambda-\nu)>0$. Thus, \[ \begin{aligned} (w_{in}^-)_t - \Delta (w_{in}^-)^m \leq - b r^{ -\lambda+m(\lambda-\nu) } \left( \frac{1}{m} \delta^{\frac{1}{m}-1} \alpha_{min} B - C\right) \end{aligned} \] for $r\leq \rho$. Hence, by choosing $B>1$ large, we conclude that $w_{in}^-$ is a subsolution for $r\leq \rho(t)$. \textbf{Matching condition:} Since both $w_{in}^-$ and $w_{out}^-$ are of the separated form $\alpha(\omega) f(r,t)$, it is sufficient to check that \[ \left. \frac{\partial}{\partial r} (w_{in}^-)^m \right|_{r=\rho(t)} < \left. \frac{\partial}{\partial r} (w_{out}^-)^m \right|_{r=\rho(t)}. \] By the definition of $\rho$ and the choice of $\delta$, we have \[ \begin{aligned} \left. \frac{\partial}{\partial r} (w_{out}^-)^m \right|_{r=\rho(t)} - \left. \frac{\partial}{\partial r} (w_{in}^-)^m \right|_{r=\rho(t)} &= -\alpha^m m\mu \delta \rho^{-m\lambda-1} - \alpha^m (-m\lambda \rho^{-m\lambda-1} + m\nu b\rho^{-m\nu-1}) \\ &= \alpha^m m \rho^{-m\lambda-1} \left( - \mu \delta + \lambda - \nu b\rho^{m(\lambda-\nu)} \right) \\ &= \alpha^m m \rho^{-m\lambda-1} \big( - \mu \delta + \lambda - (1-\delta) \nu \big) \\ &= \alpha^m m \rho^{-m\lambda-1} \big( \lambda-\nu-(\mu-\nu)\delta \big) > 0. \end{aligned} \] \textbf{Outer part:} Note that \[ \rho'(t)= -\frac{1}{m(\lambda-\nu)} B\rho(t) \leq0. \] From this, it follows that \[ (w_{out}^-)_t = \alpha \delta^\frac{1}{m} (\mu-\lambda) \rho^{\mu-\lambda-1} \rho' r^{-\mu} \leq 0. \] By direct computations, we have \[ \begin{aligned} \Delta (w_{out}^-)^m &= \alpha^m \delta \rho^{m(\mu-\lambda)} m\mu (m\mu+2-n) r^{-m\mu-2} + \delta \rho^{m(\mu-\lambda)} r^{-m\mu-2} \Delta_\omega(\alpha^m). \end{aligned} \] Then~\eqref{eq:asmu} implies \[ \begin{aligned} (w_{out}^-)_t - \Delta (w_{out}^-)^m &\leq - \alpha^m \delta \rho^{m(\mu-\lambda)} m\mu (m\mu+2-n) r^{-m\mu-2} - \delta \rho^{m(\mu-\lambda)} r^{-m\mu-2} \Delta_\omega(\alpha^m) \\ &= -\delta \rho^{m(\mu-\lambda)} r^{-m\mu-2} \left( \alpha^m m\mu (m\mu+2-n) +\Delta_\omega(\alpha^m) \right) \\ &\leq -\delta \rho^{m(\mu-\lambda)} r^{-m\mu-2} \left( \alpha_{min}^m m\mu (m\mu+2-n) - \max_{\omega\in S^{n-1}}|\Delta_\omega(\alpha^m)| \right) \leq 0. \end{aligned} \] Hence $w_{out}^-$ is a subsolution for $r\geq \rho(t)$. 
\end{proof} \subsection{Completion of the proof of Theorem~\ref{th:anisotropic}} \label{subsec:compl} \begin{prop} \label{pro:n} Let $n \geq 2$ and assume \emph{(A1)}, \emph{(A2)}, and \emph{(A3)}. Then there exist a supersolution $w^+$ and a subsolution $w^-$ of~\eqref{eq:wprob}, which have for each $\omega \in S^{n-1}$ and $t\geq 0$ the asymptotic behavior \begin{equation*} w^+(r,\omega,t)^m = \alpha^m(\omega) r^{-m\lambda} + O(r^{-m\nu}), \quad w^-(r,\omega,t)^m = \alpha^m(\omega) r^{-m\lambda} + O(r^{-m\nu}) \end{equation*} as $r\to 0$. Moreover, \begin{equation*} w^-(r,\omega,t) \leq w^+(r,\omega,t) \end{equation*} and \begin{equation*} w^-(r,\omega,0) \leq u_0(x) \leq w^+(r,\omega,0) \end{equation*} for all $r>0$, $\omega \in S^{n-1}$, which are defined in~\eqref{eq:r_omega}, and $t\geq 0$. \end{prop} \begin{proof} We choose $w^+$ and $w^-$ as in Lemmata~\ref{l+:n} and~\ref{l-:n}, respectively. Note that \[ \begin{aligned} &w^+(r,\omega,0)= \big( \alpha^m(\omega) r^{-m\lambda} + A r^{-m\nu}+A\big)^\frac{1}{m}, \\ & w^-(r,\omega,0)= \left\{ \begin{aligned} & \alpha(\omega) r^{-\lambda} ( 1-b_0 r^{m(\lambda-\nu)} )^\frac{1}{m} &&\mbox{ for }r\leq \rho(0), \\ & \alpha(\omega) \delta^\frac{1}{m} \rho^{\mu-\lambda}(0) r^{-\mu} &&\mbox{ for }r> \rho(0). \end{aligned} \right. \end{aligned} \] Moreover, $\rho(0)<1$. Then by choosing $A$ and $b_0$ sufficiently large and $\delta$ sufficiently small, we see that the function $u_0$ satisfying (A3) can be always squeezed in between comparison functions $w^-(r,\omega,0)$ and $w^+(r,\omega,0)$. \end{proof} \begin{proof} [Proof of Theorem~\ref{th:anisotropic}] In Proposition~\ref{pro:n} we proved the existence of a global-in-time sub- and supersolution of problem~\eqref{eq:wprob}, which implies the existence of sub- and supersolution of~\eqref{eq:main} with the desired asymptotic behavior. The rest of the proof of Theorem~\ref{th:anisotropic} is the same as in Section~5 in~\cite{FTY}. Here, it was proved that the existence of global-in-time comparison functions, i.e. sub- and supersolution of~\eqref{eq:main}, which are positive and bounded on each compact subset of $(\mathbb{R}^{n}\setminus\{\xi_0\})\times(0,\infty)$, implies the existence of a global-in-time solution of~\eqref{eq:main}-\eqref{eq:main_initial}. \end{proof} \section{Proof of Theorem~\ref{th:dirac}} \label{subsec:proof_th3} \begin{proof} [Proof of Theorem~\ref{th:dirac}] For simplicity, let $B_R:=B_R \left(\xi(t)\right)$ denote an open ball in $\mathbb{R}^n$ of radius $R$ centered at $\xi(t)$. For $\varepsilon > 0$ let $\eta_\varepsilon \in C^2(\mathbb{R})$ be a non-negative cut-off function such that $\eta_\varepsilon(r) \equiv 0$ for $r\leq \varepsilon$, $\eta_\varepsilon(r) \equiv 1$ for $r\geq 3\varepsilon$, $\eta_\varepsilon''\geq 0$ for $r \in [\varepsilon, 2\varepsilon]$, $\eta_\varepsilon''\leq 0$ for $r \in [2\varepsilon, 3\varepsilon]$, and $0 \leq \eta_\varepsilon'(r) \leq \eta_\varepsilon'(2\varepsilon) = \tilde c_1 \varepsilon^{-1}$ for some $\tilde c_1 >0$ and $|\eta_\varepsilon''| \leq \tilde c_2 \varepsilon^{-2}$ for some $\tilde c_2>0$. Let $u$ be from Theorem~1.1 in~\cite{FTY}, that means that $u$ is a classical solution of~\eqref{eq:porous}-\eqref{eq:porous-init} such that $u \in C([0,\infty);L^1_{loc}(\mathbb{R}^n))$. Let $\varphi \in C_0^\infty (\mathbb{R}^n \times (0, \infty))$ and set $\varphi_\varepsilon(x,t) := \eta_\varepsilon(|x-\xi(t)|) \varphi(x,t)$. 
Without loss of generality, we may assume that there is a nonempty open time interval $I \subset (0,\infty)$ such that $\xi(t) \in \supp \varphi(\cdot,t)$ for all $t \in I$. We can fix $\varepsilon$ sufficiently small so that $B_{3\varepsilon} \subset \supp \varphi(\cdot,t)$ for all $t \in I$. Multiplying now equation~\eqref{eq:porous} by $\varphi_\varepsilon$ and integrating it over $\mathbb{R}^n \times (0, \infty)$, we obtain \begin{equation} \label{eq:int_in_y} \int_0^\infty \int_{\mathbb{R}^n} u_t \varphi_\varepsilon \, dx \, dt = \int_0^\infty \int_{\mathbb{R}^n} \Delta u^m \varphi_\varepsilon \, dx \, dt. \end{equation} Let us denote \begin{align*} \begin{split} I_\varepsilon &:= \int_{B_{3\varepsilon}\setminus B_{\varepsilon}} u^m \eta_\varepsilon\Delta\varphi \, dx, \quad J_\varepsilon := 2\int_{B_{3\varepsilon}\setminus B_{\varepsilon}} u^m \nabla\eta_\varepsilon \cdot \nabla\varphi \, dx, \\ K_\varepsilon &:= \int_{B_{3\varepsilon}\setminus B_{\varepsilon}} u^m \varphi\Delta\eta_\varepsilon \, dx,\quad H_\varepsilon := \int_{B_{3\varepsilon}\setminus B_\varepsilon} u \eta_\varepsilon \varphi_t \, dx . \end{split} \end{align*} Since $\varphi$ is smooth and compactly supported in $\mathbb{R}^n \times (0, \infty)$, integrating the right-hand side of~\eqref{eq:int_in_y} by parts we have \begin{equation}\label{eq:est0} \int_0^\infty \int_{\mathbb{R}^n} \Delta u^m \varphi_\varepsilon \, dx \, dt = \int_0^\infty \int_{\mathbb{R}^n \setminus B_{3\varepsilon}} u^m \Delta\varphi \, dx\, dt + \int_0^\infty \left( I_\varepsilon + J_\varepsilon + K_\varepsilon \right)\, dt . \end{equation} Similarly, we analyze the left-hand side of~\eqref{eq:int_in_y} and obtain \begin{equation*} \int_0^\infty \int_{\mathbb{R}^n} u_t \varphi_\varepsilon \, dx \, dt = - \int_0^\infty \int_{\mathbb{R}^n} u (\varphi_\varepsilon)_t \, dx \, dt = - \int_0^\infty \int_{\mathbb{R}^n \setminus B_{3\varepsilon}} u \varphi_t \, dx \, dt - \int_0^\infty H_\varepsilon \, dt. \end{equation*} Hence, equation~\eqref{eq:int_in_y} can be written as \begin{equation*} -\int_0^\infty \int_{\mathbb{R}^n \setminus B_{3\varepsilon}} \big( u \varphi_t + u^m \Delta\varphi \big) \, dx \, dt = \int_0^\infty \left(H_\varepsilon + I_\varepsilon + J_\varepsilon + K_\varepsilon \right) \, dt. \end{equation*} In the following, we show that \begin{equation} \label{eq:assertion} H_\varepsilon + I_\varepsilon + J_\varepsilon + K_\varepsilon \to (n-2) |S^{n-1}| k^m(t) \varphi(\xi(t),t) \end{equation} locally uniformly for $t$ as $\varepsilon \to 0$. In order to do that, we choose $\varepsilon$ sufficiently small so that the method of sub- and supersolutions in~\cite{FTY} provides estimates of the form \begin{equation} \label{eq:v_estimates} \begin{split} u^m(x,t) &\leq k^m(t) \left(|x-\xi(t)|^{2-n} + b(t)|x-\xi(t)|^{-\lambda}\right),\\ u^m(x,t) &\geq k^m(t) \left(|x-\xi(t)|^{2-n} - b(t)|x-\xi(t)|^{-\lambda}\right)_+, \end{split} \end{equation} for all $(x,t) \in B_{3\varepsilon} \times I$. Here, $b(t)=b_0e^{B t}$ for some constants $B$, $b_0 >1$ and $\lambda < n-2$. First, we deal with the integrals $H_\varepsilon, I_\varepsilon, J_\varepsilon$ and show that they converge to zero locally uniformly for $t$. In what follows, by $c$ we will denote a large enough but otherwise arbitrary constant independent of $t$ and $\varepsilon$. 
Given that $|\eta_\varepsilon| \leq 1$, for $m>m_c$ and $t<T$ for some $T>0$ it holds that \[ \begin{aligned} |H_\varepsilon| &= \left|\int_{B_{3\varepsilon}\setminus B_\varepsilon} u \eta_\varepsilon \varphi_t \, dx \right| \leq \sup_{B_{3\varepsilon}\setminus B_{\varepsilon}}|\varphi_t| \int_{B_{3\varepsilon}\setminus B_\varepsilon} u \, dx \leq c \int_{\varepsilon}^{3\varepsilon} \big(r^{2-n} + b(t) r^{-\lambda} \big)^{\frac{1}{m}} r^{n-1} \, dr \leq \\ &\leq c \int_{\varepsilon}^{3\varepsilon} r^{\frac{2-n}{m} + n-1} \to 0 \quad \text{ as } \quad \varepsilon \to 0. \end{aligned} \] Similarly, \[ \begin{aligned} |I_\varepsilon| &= \left| \int_{B_{3\varepsilon}\setminus B_{\varepsilon}} u^m \eta_\varepsilon\Delta\varphi \, dx \right| \leq \sup_{B_{3\varepsilon}\setminus B_{\varepsilon}}|\Delta\varphi| \int_{B_{3\varepsilon}\setminus B_\varepsilon} u^m \, dx \leq c \int_{\varepsilon}^{3\varepsilon} \big(r^{2-n} + b(t) r^{-\lambda} \big) r^{n-1} \, dr \leq \\ &\leq c \int_{\varepsilon}^{3\varepsilon} r \, dr \to 0 \quad \text{ as } \quad \varepsilon \to 0. \end{aligned} \] Moreover, using $0 \leq\eta_\varepsilon' \leq \tilde c_1 \varepsilon^{-1}$, we obtain \[ \begin{aligned} |J_\varepsilon| &= \left| 2 \int_{B_{3\varepsilon}\setminus B_{\varepsilon}} u^m \nabla\eta_\varepsilon \cdot \nabla\varphi \, dx \right| \leq 2 \tilde c_1 \varepsilon^{-1} \sup_{B_{3\varepsilon}\setminus B_{\varepsilon}} |\omega \cdot\nabla\varphi| \int_{B_{3\varepsilon}\setminus B_\varepsilon} u^m \, dx\leq \\ &\leq c \, \varepsilon^{-1} \int_{\varepsilon}^{3\varepsilon} \big(r^{2-n} + b(t) r^{-\lambda} \big) r^{n-1} \, dr \leq c \varepsilon^{-1} \int_{\varepsilon}^{3\varepsilon} r \, dr \to 0 \quad \text{ as } \quad\varepsilon \to 0. \end{aligned} \] Now we deal with the integral $K_\varepsilon$. Denoting \[ K_\varepsilon^1 := \int_{B_{3\varepsilon}\setminus B_{\varepsilon}} u^m \varphi |x-\xi(t)|^{-1}\eta_\varepsilon' \, dx, \quad K_\varepsilon^2 := \int_{B_{2\varepsilon}\setminus B_{\varepsilon}} u^m \varphi \eta_\varepsilon'' \, dx, \quad K_\varepsilon^3 := \int_{B_{3\varepsilon}\setminus B_{2\varepsilon}} u^m \varphi (-\eta_\varepsilon)'' \, dx, \] we can split $K_\varepsilon$ into \begin{equation*} K_\varepsilon = (n-1) K_\varepsilon^1 + K_\varepsilon^2 - K_\varepsilon^3. \end{equation*} By means of the non-negativity of $u^m$ and the properties $\eta_\varepsilon' \geq 0$, $\eta_\varepsilon'' \geq 0$ on $B_{2\varepsilon}\setminus B_{\varepsilon}$, and $\eta_\varepsilon'' \leq 0$ on $B_{3\varepsilon}\setminus B_{2\varepsilon}$, we obtain \[ \begin{aligned} &\inf_{B_{3\varepsilon}\setminus B_{\varepsilon}} \varphi \int_{B_{3\varepsilon}\setminus B_{\varepsilon}} u^m |x-\xi(t)|^{-1}\eta_\varepsilon' \, dx \leq K_\varepsilon^1 \leq \sup_{B_{3\varepsilon}\setminus B_{\varepsilon}} \varphi \int_{B_{3\varepsilon}\setminus B_{\varepsilon}} u^m |x-\xi(t)|^{-1}\eta_\varepsilon' \, dx, \\ &\inf_{B_{2\varepsilon}\setminus B_{\varepsilon}} \varphi \int_{B_{2\varepsilon}\setminus B_{\varepsilon}} u^m \eta_\varepsilon'' \, dx \leq K_\varepsilon^2 \leq \sup_{B_{2\varepsilon}\setminus B_{\varepsilon}} \varphi \int_{B_{2\varepsilon}\setminus B_{\varepsilon}} u^m \eta_\varepsilon'' \, dx, \\ &\inf_{B_{3\varepsilon}\setminus B_{2\varepsilon}} \varphi \int_{B_{3\varepsilon}\setminus B_{2\varepsilon}} u^m (-\eta_\varepsilon)'' \, dx \leq K_\varepsilon^3 \leq \sup_{B_{3\varepsilon}\setminus B_{2\varepsilon}} \varphi \int_{B_{3\varepsilon}\setminus B_{2\varepsilon}} u^m (-\eta_\varepsilon)'' \, dx. 
\end{aligned} \] We only consider the case where $\sup_{B_{3\varepsilon}\setminus B_{\varepsilon}} \varphi$ and $\inf_{B_{3\varepsilon}\setminus B_{\varepsilon}} \varphi$ are non-negative. The other cases can be handled in the same way by using ~\eqref{eq:est_L} below. Indeed, a change of the sign of $\sup_{B_{3\varepsilon}\setminus B_{\varepsilon}} \varphi$ and $\inf_{B_{3\varepsilon}\setminus B_{\varepsilon}} \varphi$ will not change the limit value. For $a:= n-2-\lambda>0$ we set \begin{equation*} L_\varepsilon^1 := b(t) \int_\varepsilon^{2\varepsilon} r^{a+1} \eta_\varepsilon'' \, dr, \quad L_\varepsilon^2 := b(t) \int_{2\varepsilon}^{3\varepsilon} r^{a+1} (-\eta_\varepsilon)'' \, dr, \quad L_\varepsilon^3 := b(t) \int_\varepsilon^{3\varepsilon} r^a \eta_\varepsilon' \, dr. \end{equation*} Since $0 \leq \eta_\varepsilon' \leq \tilde c_1 \varepsilon^{-1}$, for $t<T$ with some $T>0$ it holds that \begin{equation*} | L_\varepsilon^3 | = \left| b(t)\int_\varepsilon^{3\varepsilon} r^a \eta_\varepsilon' \, dr \right| \leq c \varepsilon^a, \end{equation*} and by $|\eta_\varepsilon''| \leq \tilde c_2 \varepsilon^{-2}$ we have \begin{equation*} | L_\varepsilon^1 | + | L_\varepsilon^2 | = \left| b(t) \int_\varepsilon^{2\varepsilon} r^{a+1} \eta_\varepsilon'' \, dr \right| + \left| b(t)\int_{2\varepsilon}^{3\varepsilon} r^{a+1} (-\eta_\varepsilon)'' \, dr \right| \leq c \varepsilon^a. \end{equation*} Hence, \begin{equation} \label{eq:est_L} L_\varepsilon^1 \to 0, \quad L_\varepsilon^2 \to 0, \quad L_\varepsilon^3 \to 0 \quad \text{ as } \quad \varepsilon \to 0 \end{equation} locally uniformly for $t$. Using inequalities~\eqref{eq:v_estimates}, we have \begin{equation} \label{eq:k1_up} K_\varepsilon^1 \leq |S^{n-1}| k^m(t) \sup_{B_{3\varepsilon}\setminus B_{\varepsilon}} \varphi \int_\varepsilon^{3\varepsilon} \big(1 + b(t)r^{a}\big)\eta_\varepsilon' \, dr = |S^{n-1}| k^m(t) \sup_{B_{3\varepsilon}\setminus B_{\varepsilon}} \varphi \left( 1 + L_\varepsilon^3 \right), \end{equation} and \begin{equation} \label{eq:k1_down} K_\varepsilon^1 \geq |S^{n-1}| k^m(t) \inf_{B_{3\varepsilon}\setminus B_{\varepsilon}} \varphi \int_\varepsilon^{3\varepsilon} \big(1 - b(t)r^{a}\big) \eta_\varepsilon' \, dr = |S^{n-1}| k^m(t) \inf_{B_{3\varepsilon}\setminus B_{\varepsilon}} \varphi \left( 1 - L_\varepsilon^3 \right). \end{equation} Integrating by parts and estimating integrals $K_\varepsilon^2$ and $K_\varepsilon^3$ we have \begin{equation*} \begin{split} K_\varepsilon^2 &\leq |S^{n-1}| k^m(t) \sup_{B_{2\varepsilon}\setminus B_{\varepsilon}} \varphi \int_\varepsilon^{2\varepsilon} \big(r+b(t)r^{a+1}\big) \eta_\varepsilon'' \, dr =\\ &= |S^{n-1}| k^m(t) \sup_{B_{2\varepsilon}\setminus B_{\varepsilon}} \varphi \left( 2\varepsilon \eta_\varepsilon'(2\varepsilon) - \eta_\varepsilon(2\varepsilon) + L_\varepsilon^1 \right), \end{split} \end{equation*} and \begin{equation*} \begin{split} K_\varepsilon^3 &\geq |S^{n-1}| k^m(t) \inf_{B_{3\varepsilon}\setminus B_{2\varepsilon}} \varphi \int_{2\varepsilon}^{3\varepsilon} \big(r-b(t)r^{a+1}\big) (-\eta_\varepsilon)'' \, dr =\\ &= |S^{n-1}| k^m(t) \inf_{B_{3\varepsilon}\setminus B_{2\varepsilon}} \varphi \left( 2\varepsilon \eta_\varepsilon'(2\varepsilon) + 1 - \eta_\varepsilon(2\varepsilon) - L_\varepsilon^2 \right). 
\end{split} \end{equation*} Thus, \begin{equation} \label{eq:k23_up} \begin{split} K_\varepsilon^2 - K_\varepsilon^3 \leq \, - |S^{n-1}| k^m(t) \Big[ &(1- L_\varepsilon^2) \inf_{B_{3\varepsilon}\setminus B_{2\varepsilon}}\varphi - L_\varepsilon^1 \sup_{B_{2\varepsilon}\setminus B_{\varepsilon}} \varphi \, - \\ & - \Big( \sup_{B_{2\varepsilon}\setminus B_{\varepsilon}}\varphi - \inf_{B_{3\varepsilon}\setminus B_{2\varepsilon}}\varphi \Big) \big( 2\varepsilon\eta_\varepsilon'(2\varepsilon) - \eta_\varepsilon(2\varepsilon) \big) \Big]. \end{split} \end{equation} Analogously, \begin{equation*} \begin{split} K_\varepsilon^2 &\geq |S^{n-1}| k^m(t) \inf_{B_{2\varepsilon}\setminus B_{\varepsilon}} \varphi \int_\varepsilon^{2\varepsilon} \big(r-b(t)r^{a+1}\big) \eta_\varepsilon'' \, dr = \\ &= |S^{n-1}| k^m(t) \inf_{B_{2\varepsilon}\setminus B_{\varepsilon}} \varphi \left( 2\varepsilon \eta_\varepsilon'(2\varepsilon) - \eta_\varepsilon(2\varepsilon) - L_\varepsilon^1 \right), \end{split} \end{equation*} and \begin{equation*} \begin{split} K_\varepsilon^3 & \leq |S^{n-1}| k^m(t) \sup_{B_{3\varepsilon}\setminus B_{2\varepsilon}} \varphi \int_{2\varepsilon}^{3\varepsilon} \big(r+b(t)r^{a+1}\big) (-\eta_\varepsilon)'' \, dr = \\ &= |S^{n-1}| k^m(t) \sup_{B_{3\varepsilon}\setminus B_{2\varepsilon}} \varphi \left( 2\varepsilon \eta_\varepsilon'(2\varepsilon) + 1 - \eta_\varepsilon(2\varepsilon) + L_\varepsilon^2 \right). \end{split} \end{equation*} Hence, \begin{equation} \label{eq:k23_down} \begin{split} K_\varepsilon^2 - K_\varepsilon^3 \geq \, - |S^{n-1}| k^m(t) \Big[ &(1 + L_\varepsilon^2) \sup_{B_{3\varepsilon}\setminus B_{2\varepsilon}}\varphi + L_\varepsilon^1 \inf_{B_{2\varepsilon}\setminus B_{\varepsilon}} \varphi \, - \\ &- \Big( \inf_{B_{2\varepsilon}\setminus B_{\varepsilon}}\varphi - \sup_{B_{3\varepsilon}\setminus B_{2\varepsilon}}\varphi \Big) \big( 2\varepsilon\eta_\varepsilon'(2\varepsilon) - \eta_\varepsilon(2\varepsilon) \big) \Big] . \end{split} \end{equation} Finally, using~\eqref{eq:est_L}, inequalities~\eqref{eq:k1_up},~\eqref{eq:k1_down},~\eqref{eq:k23_up}, and~\eqref{eq:k23_down} yield \begin{equation*} K_\varepsilon \to (n-2) |S^{n-1}| k^m(t) \varphi(\xi(t),t) \quad \text{ as } \quad\varepsilon \to 0 \end{equation*} locally uniformly for $t$. Thus, the assertion~\eqref{eq:assertion} is proved, which completes the proof. \end{proof} \section{Proof of Theorem~\ref{th:anisotropic_weak}} \label{subsec:proof_anisotropic_weak} \begin{proof} [Proof of Theorem~\ref{th:anisotropic_weak}~\ref{it:i}] The functions from Theorem~\ref{th:anisotropic} satisfy $u \in C([0,\infty);L^1_{loc}(\mathbb{R}^n))$. The proof of this statement is analogous to the proof of Lemma~5.2 in~\cite{FTY}, and so we omit it here. Since in this case $\lambda < n < (n-2)/m$ (recall that $\lambda<n$ and $m<m_c$), the singularity of solutions from Theorem~\ref{th:anisotropic} is necessarily weaker than the singularity of solutions of the type~\eqref{eq:steady_state} and~\eqref{eq:rad_as_sym}. This suggests that, in the distributional sense, solutions from Theorem~\ref{th:anisotropic_weak}~\ref{it:i} satisfy equation~\eqref{eq:weak} with no source term on the right-hand side, unlike solutions of the type~\eqref{eq:steady_state} and~\eqref{eq:rad_as_sym}, which satisfy equations~\eqref{eq:fund_weak} and~\eqref{eq:n3_weak_delta} with a singular source term.
Rigorously, the proof of this theorem can be carried out analogously to the proof of Theorem~\ref{th:dirac} in Section~\ref{subsec:proof_th3} (and it is less technical due to standing versus moving singularity). \end{proof} \begin{proof} [Proof of Theorem~\ref{th:anisotropic_weak}~\ref{it:ii}] To prove that $u \notin L^p_{loc}(\mathbb{R}^n \times [0,\infty))$ for any $p \geq 1$ if $\lambda \geq n$, we integrate over $B_1(\xi_0) \times [0,1]$, and use the comparison function $w^-$ to estimate the integral \begin{equation*} I:=\int_0^1 \int_{B_1(\xi_0)} u^p(x,t) \, dx \, dt \geq \int_0^1 \int_{B_1(\xi_0)} (w^-(r,\omega,t))^p \, dx \, dt. \end{equation*} By the definition of $w^- \geq 0$ in~\eqref{eq:subsolform}, $\rho(t)<1$, and $\alpha \geq \alpha_{min} >0$, we have \begin{equation*} I \geq \int_0^1 \int_{B_{\rho(t)}(\xi_0)} (w^-(r,\omega,t))^p \, dx \, dt \geq \alpha_{min}^p \int_0^1 \int_0^{\rho(t)} r^{n-1-p\lambda} \big( 1-b(t) r^{m(\lambda-\nu)} \big)^\frac{p}{m} \, dr \, dt. \end{equation*} Substituting $z=b(t) r^{m(\lambda-\nu)}$, by $b \geq 1$ we obtain \[ \begin{aligned} I &\geq \frac{\alpha_{min}^p}{m(\lambda-\nu)} \int_0^1 b^{\frac{p\lambda-n}{m(\lambda-\nu)}}(t) \int_0^{1-\delta} z^{-1-\frac{p\lambda-n}{m(\lambda-\nu)}} (1-z)^\frac{p}{m} \, dz \, dt \\ &\geq \frac{\alpha_{min}^p \delta^\frac{p}{m}}{m(\lambda-\nu)} \int_0^1 \int_0^{1-\delta} z^{-1-\frac{p\lambda-n}{m(\lambda-\nu)}} \, dz \, dt. \end{aligned} \] This integral is infinite exactly when $\lambda \geq n$ for any $p \geq 1$, which completes the proof. \end{proof} \section{Non-integrability of the singular traveling wave} \label{subsec:proof_snaking} In this section we show that for $U$ from~\eqref{eq:snaking} it holds that $U \notin L^p_{loc}(\mathbb{R}^n \times \mathbb{R})$ for any $p \geq 1$. Without loss of generality, we may take $a = e_n = (0,\ldots,0,1)$. Indeed, given any velocity vector $a \in \mathbb{R}^n$, we could transform the coordinate system and proceed as below. Let $B'_1:= \{ x' \in \mathbb{R}^{n-1}; |x'|<1\}$. For $p \geq 1$ we examine the integrability over $[0,1] \times B'_1 \times [-1,0]$, i.e. \begin{equation*} I:= \int_0^1 \int_{B'_1 \times [-1,0]} U^p(x,t) \, dx \, dt = \int_0^1 \int_{-1}^0 \int_{B'_1} C^p \big( \sqrt{|x'|^2+(x_n-t)^2} + (x_n-t) \big)^{-\frac{p}{1-m}} \, dx' \, dx_n \, dt. \end{equation*} By the change of variables $y_n=-(x_n-t)$ and $|x'| = y_n r$, we obtain \begin{equation*} \begin{split} I &= C^p |S^{n-2}| \int_0^1 \int_{1+t}^t \int_0^{1/y_n} (y_n r)^{n-2} \big( \sqrt{y_n^2 r^2 + y_n^2} - y_n \big)^{-\frac{p}{1-m}} y_n \, dr \, (-dy_n) \, dt, \\ &= C^p |S^{n-2}| \int_0^1 \int_t^{1+t} y_n^{n-1-\frac{p}{1-m}} \int_0^{1/y_n} r^{n-2} \big( \sqrt{r^2 + 1} - 1 \big)^{-\frac{p}{1-m}} \, dr \, dy_n \, dt,\\ &= C^p |S^{n-2}| \int_0^1 \int_t^{1+t} y_n^{n-1-\frac{p}{1-m}} \int_0^{1/y_n} r^{n-2-\frac{2p}{1-m}} \big( \sqrt{r^2 + 1} + 1 \big)^{\frac{p}{1-m}} \, dr \, dy_n \, dt. \end{split} \end{equation*} We have $1/y_n \geq 1/2$, hence \begin{equation*} I \geq C^p \int_0^1 \int_1^{1+t} y_n^{n-1-\frac{p}{1-m}} \int_0^{1/2} r^{n-2-\frac{2p}{1-m}} \, dr \, dy_n \, dt, \end{equation*} which is finite if $p < (1-m)(n-1)/2$. Since we assumed that $p \geq 1$, $m>(n-3)/(n-1)=m^*$ for $n\geq3$ and $m>0=m^*$ for $n=2$, this condition for $p$ cannot be satisfied. This implies the conclusion. \subsection*{Acknowledgment} We thank the referee for the comments that helped improve the presentation significantly. \end{document}
\begin{document} \title{Circuit Quantum Electrodynamics} \author{Alexandre Blais}\email[Send comments and feedback to ]{[email protected]} \affiliation{Institut quantique and D\'epartement de Physique, Universit\'e de Sherbrooke, Sherbrooke J1K 2R1 QC, Canada} \affiliation{Canadian Institute for Advanced Research, Toronto, ON, Canada} \author{Arne L. Grimsmo} \affiliation{Centre for Engineered Quantum Systems, School of Physics, The University of Sydney, Sydney, NSW 2006, Australia} \author{S. M. Girvin} \affiliation{Yale Quantum Institute, PO Box 208 334, New Haven, CT 06520-8263 USA} \author{Andreas Wallraff} \affiliation{Department of Physics, ETH Zurich, CH-8093, Zurich, Switzerland.} \begin{abstract} Quantum mechanical effects at the macroscopic level were first explored in Josephson junction-based superconducting circuits in the 1980's. In the last twenty years, the emergence of quantum information science has intensified research toward using these circuits as qubits in quantum information processors. The realization that superconducting qubits can be made to strongly and controllably interact with microwave photons, the quantized electromagnetic fields stored in superconducting circuits, led to the creation of the field of circuit quantum electrodynamics (QED), the topic of this review. While atomic cavity QED inspired many of the early developments of circuit QED, the latter has now become an independent and thriving field of research in its own right. Circuit QED allows the study and control of light-matter interaction at the quantum level in unprecedented detail. It also plays an essential role in all current approaches to quantum information processing with superconducting circuits. In addition, circuit QED enables the study of hybrid quantum systems, such as quantum dots, magnons, Rydberg atoms, surface acoustic waves, and mechanical systems, interacting with microwave photons. Here, we review the coherent coupling of superconducting qubits to microwave photons in high-quality oscillators focussing on the physics of the Jaynes-Cummings model, its dispersive limit, and the different regimes of light-matter interaction in this system. We discuss coupling of superconducting circuits to their environment, which is necessary for coherent control and measurements in circuit QED, but which also invariably leads to decoherence. Dispersive qubit readout, a central ingredient in almost all circuit QED experiments, is also described. Following an introduction to these fundamental concepts that are at the heart of circuit QED, we discuss important use cases of these ideas in quantum information processing and in quantum optics. Circuit QED realizes a broad set of concepts that open up new possibilities for the study of quantum physics at the macro scale with superconducting circuits and applications to quantum information science in the widest sense. \end{abstract} \maketitle \today \tableofcontents \section{Introduction} Circuit quantum electrodynamics (QED) is the study of the interaction of nonlinear superconducting circuits, acting as artificial atoms or as qubits for quantum information processing, with quantized electromagnetic fields in the microwave frequency domain. 
Inspired by cavity QED \cite{Haroche2006,Kimble1998}, a field of research originating from atomic physics and quantum optics, circuit QED has led to advances in the fundamental study of light-matter interaction, in the development of quantum information processing technology \cite{Kjaergaard2019,Krantz2019,Wendin2017,Clarke2008,Blais2020}, and in the exploration of novel hybrid quantum systems \cite{Xiang2013a,Clerk2020}. First steps toward exploring the quantum physics of superconducting circuits were made in the mid-1980's. At that time, the question arose whether quantum phenomena, such as quantum tunneling or energy level quantization, could be observed in macroscopic systems of any kind \cite{Leggett1980,Leggett1984}. One example of such a macroscopic system is the Josephson tunnel junction \cite{Josephson1962,Tinkham2004} formed by a thin insulating barrier at the interface between two superconductors and in which macroscopic quantities such as the current flowing through the junction or the voltage developed across it are governed by the dynamics of a macroscopic order parameter \cite{Eckern1984}. This macroscopic order parameter relates to the density and common phase of Bose-condensed Cooper pairs of electrons. The first experimental evidence for quantum effects in these circuits \cite{Clarke1988} was the observation of quantum tunneling of the phase degree of freedom of a Josephson junction \cite{Devoret1985}, rapidly followed by the measurement of quantized energy levels of the same degree of freedom \cite{Martinis1985}. While the possibility of observing coherent quantum phenomena in Josephson junction-based circuits, such as coherent oscillations between two quantum states of the junction and the preparation of quantum superpositions, was already envisaged in the 1980's \cite{Tesche1987}, the prospect of realizing superconducting qubits for quantum computation revived interest in the pursuit of this goal \cite{Bouchiat1998,Shnirman1997,Bocko1997,Makhlin1999,Makhlin2001}. In a groundbreaking experiment, time-resolved coherent oscillations with a superconducting qubit were observed in 1999 \cite{Nakamura1999}. Further progress resulted in the observation of coherent oscillations in coupled superconducting qubits \cite{Pashkin2003,Yamamoto2003} and in significant improvements of the coherence times of these devices by exploiting symmetries in the Hamiltonian underlying the description of the circuits \cite{Vion2002}. In parallel to these advances, in atomic physics and quantum optics, cavity QED developed into an excellent setting for the study of the coherent interactions between individual atoms and quantum radiation fields \cite{Rempe1987,Thompson1992,Brune1996,Haroche1989}, and its application to quantum communication \cite{Kimble2008} and quantum computation \cite{Kimble1998,Haroche2006}. In the early 2000's, the concept of realizing the physics of cavity QED with superconducting circuits emerged with proposals to coherently couple superconducting qubits to microwave photons in open 3D cavities \cite{AlSaidi2001,Yang2003, You2003}, in discrete LC oscillators \cite{Makhlin2001,Buisson2001}, and in large Josephson junctions \cite{Marquardt2001,Plastina2003, Blais2003a}.
The prospect of realizing strong coupling of superconducting qubits to photons stored in high-quality coplanar waveguide resonators, together with suggestions to use this approach to protect qubits from decoherence, to read out their state, and to couple them to each other in a quantum computer architecture advanced the study of cavity QED with superconducting circuits \cite{Blais2004}. This possibility of exploring both the foundations of light-matter interaction and advancing quantum information processing technology with superconducting circuits motivated the rapid advance in experimental research, culminating in the first experimental realization of a circuit QED system achieving the strong coupling regime of light-matter interaction where the coupling overwhelms damping \cite{Wallraff2004,Chiorescu2004}. Circuit QED combines the theoretical and experimental tools of atomic physics, quantum optics and the physics of mesoscopic superconducting circuits not only to further explore the physics of cavity QED and quantum optics in novel parameter regimes, but also to allow the realization of engineered quantum devices with technological applications. Indeed, after 15 years of continuous development, circuit QED is now a leading architecture for quantum computation. Simple quantum algorithms have been implemented \cite{DiCarlo2009}, cloud-based devices are accessible, demonstrations of quantum-error correction have approached or reached the so-called break-even point \cite{Ofek2016,Hu2019}, and devices with several tens of qubits have been operated with claims of quantum supremacy \cite{Arute2019}. More generally, circuit QED is opening new research directions. These include the development of quantum-limited amplifiers and single-microwave photon detectors with applications ranging from quantum information processing to the search for dark matter axions, to hybrid quantum systems \cite{Clerk2020} where different physical systems such as NV centers \cite{Kubo2010}, mechanical oscillators \cite{Aspelmeyer2013}, semiconducting quantum dots \cite{Burkard2020}, or collective spin excitations in ferromagnetic crystals \cite{Lachance-Quirion2019} are interfaced with superconducting quantum circuits. In this review, we start in \cref{sec:SupQuantumCircuits} by introducing the two main actors of circuit QED: high-quality superconducting oscillators and superconducting artificial atoms. The latter are also known as superconducting qubits in the context of quantum information processing. There are many types of superconducting qubits and we choose to focus on the transmon \cite{Koch2007}. This choice is made because the transmon is not only the most widely used qubit but also because this allows us to present the main ideas of circuit QED without having to delve into the very rich physics of the different types of superconducting qubits. Most of the material presented in this review applies to other qubits without much modification. \Cref{sec:LightMatter} is devoted to light-matter coupling in circuit QED including a discussion of the Jaynes-Cummings model and its dispersive limit. Different methods to obtain approximate effective Hamiltonians valid in the dispersive regime are presented. \Cref{sec:environment} addresses the coupling of superconducting quantum circuits to their electromagnetic environment, considering both dissipation and coherent control. In \cref{sec:readout}, we turn to measurements in circuit QED with an emphasis on dispersive qubit readout. 
Building on this discussion, \cref{sec:CouplingRegimes} presents the different regimes of light-matter coupling which are reached in circuit QED and their experimental signatures. In the last sections, we turn to two applications of circuit QED: quantum computing in \cref{sec:quantumcomputing} and quantum optics in \cref{sec:QuantumOptics}. Our objective with this review is to give the reader a solid background on the foundations of circuit QED rather than showcasing the very latest developments of the field. We hope that this introductory text will allow the reader to understand the recent advances of the field and to become an active participant in its development. \section{\label{sec:SupQuantumCircuits}Superconducting quantum circuits} Circuit components with spatial dimensions that are small compared to the relevant wavelength can be treated as lumped elements~\cite{Devoret1997}, and we start this section with a particularly simple lumped-element circuit: the quantum LC oscillator. We subsequently discuss the closely related two- and three-dimensional microwave resonators that play a central role in circuit QED experiments and which can be thought of as distributed versions of the LC oscillator with a set of harmonic frequencies. Finally, we move on to nonlinear quantum circuits with Josephson junctions as the source of nonlinearity, and discuss how such circuits can behave as artificial atoms. We put special emphasis on the transmon qubit~\cite{Koch2007}, which is the most widely used artificial atom design in current circuit QED experiments. \subsection{\label{sec:HO}The quantum LC resonator} An LC oscillator is characterized by its inductance $L$ and capacitance $C$ or, equivalently, by its angular frequency $\omega_r = 1/\sqrt{LC}$ and characteristic impedance $Z_r=\sqrt{L/C}$. The total energy of this oscillator is given by the sum of its charging and inductive energy \begin{equation}\label{eq:HLC} H_{LC} = \frac{Q^2}{2C} + \frac{\Phi^2}{2L}, \end{equation} where $Q$ is the charge on the capacitor and $\Phi$ the flux threading the inductor, see \cref{fig:LCpotential}. Charge is related to current, $I$, from charge conservation by $Q(t) = \int_{t_0}^t dt'\, I(t')$, and flux to voltage from Faraday's induction law by $\Phi(t) = \int_{t_0}^t dt'\, V(t')$, where we have assumed that the voltage and current are zero at an initial time $t_0$, often taken to be in the distant past \cite{Vool2017}. It is instructive to rewrite $H_{LC}$ as \begin{equation}\label{eq:HLCAnalogy} H_{LC} = \frac{Q^2}{2C} + \frac{1}{2}C\omega_r^2\Phi^2. \end{equation} This form emphasizes the analogy of the LC oscillator with a mechanical oscillator of coordinate $\Phi$, conjugate momentum $Q$, and mass $C$. With this analogy in mind, quantization proceeds in a manner that should be well known to the reader: The charge and flux variables are promoted to non-commuting observables satisfying the commutation relation \begin{equation} [\hat \Phi,\hat Q] = i\hbar. \end{equation} It is further useful to introduce the standard annihilation $\hat a$ and creation $\hat a^\dag$ operators of the harmonic oscillator. 
With the above mechanical analogy in mind, we choose these operators as \begin{align}\label{eq:HatPhiQ} \hat \Phi = \Phi_\zp(\hat a^\dag+\hat a), \qquad \hat Q = iQ_\zp (\hat a^\dag-\hat a), \end{align} with $\Phi_\zp = \sqrt{\hbar/2\omega_r C} = \sqrt{\hbar Z_r/2}$ and $Q_\zp = \sqrt{\hbar\omega_r C/2} = \sqrt{\hbar/2 Z_r}$ the characteristic magnitude of the zero-point fluctuations of the flux and the charge, respectively. With these definitions, the above Hamiltonian takes the usual form \begin{equation} \hat H_{LC} = \hbar\omega_r (\hat a^\daga+1/2), \end{equation} with eigenstates that satisfy $\hat a^\daga \ket n = n \ket n$ for $n=0,1,2,\dots$ In the rest of this review, we follow the convention of dropping from the Hamiltonian the factor of $\sfrac12$ corresponding to the zero-point energy. The action of $\hat a^\dag = \sqrt{1/2\hbar Z_r}(\hat \Phi-iZ_r\hat Q)$ is to create a quantized excitation of the flux and charge degrees of freedom of the oscillator or, equivalently, of the magnetic and electric fields. In other words, $\hat a^\dag$ creates a photon of frequency $\omega_r$ stored in the circuit. \begin{figure} \caption{\label{fig:LCpotential}} \end{figure} While formally correct, one can wonder if this quantization procedure is relevant in practice. In other words, is it possible to operate LC oscillators in a regime where quantum effects are important? For this to be the case, at least two conditions must be satisfied. First, the oscillator should be sufficiently well decoupled from uncontrolled degrees of freedom such that its energy levels are considerably less broad than their separation. In other words, we require the oscillator's quality factor $Q = \omega_r/\kappa$, with $\kappa$ the oscillator linewidth or equivalently the photon loss rate, to be large. An approach to treat the environment of a quantum system is described in~\secref{sec:environment}. Because losses are at the origin of level broadening, superconductors are ideal for reaching the quantum regime. In practice, most circuit QED devices are made of thin aluminum films evaporated on low-loss dielectric substrates such as sapphire or high-resistivity silicon wafers. Mainly for its larger superconducting gap, niobium is sometimes used in place of aluminum. In addition to internal losses in the metals forming the LC circuit, care must also be taken to minimize the effect of coupling to the external circuitry that is essential for operating the oscillator. As will be discussed below, large quality factors ranging from $Q \sim 10^{3}$ to $10^{8}$ can be obtained in the laboratory~\cite{Frunzio2005,Bruno2015,Reagor2016}. Given that your microwave oven has a quality factor approaching $10^4$~\cite{Vollmer2004}, it should not come as a surprise that large Q-factor oscillators can be realized in state-of-the-art laboratories. The analogy with kitchen appliances, however, stops here: the second condition requires that the energy separation $\hbar\omega_r$ between adjacent eigenstates be larger than the thermal energy $k_BT$. Since $1\, \mathrm{GHz} \times h/k_B \sim 50$~mK, the condition $\hbar\omega_r\gg k_BT$ can be satisfied easily with microwave frequency circuits operated at $\sim 10$~mK in a dilution refrigerator. These circuits are therefore operated at temperatures far below the critical temperature ($\sim 1-10$ K) of the superconducting films from which they are made. With these two requirements satisfied, an oscillator with a frequency in the microwave range can be operated in the quantum regime.
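As a simple numerical check of these two conditions (a sketch only, with representative parameter values that are assumptions chosen for illustration rather than taken from a specific device), the following Python snippet converts a quality factor into a linewidth $\kappa = \omega_r/Q$ and compares $\hbar\omega_r$ with $k_BT$ at dilution-refrigerator temperatures.
\begin{verbatim}
import math

# Illustrative check of the two conditions for quantum operation of an LC
# oscillator; fr, Q and T below are assumed, representative values.
h = 6.62607015e-34        # Planck constant (J s)
kB = 1.380649e-23         # Boltzmann constant (J/K)

fr = 8e9                  # oscillator frequency (Hz)
Q = 1e6                   # quality factor
T = 0.010                 # operating temperature (K)

kappa = 2 * math.pi * fr / Q                                 # photon loss rate (rad/s)
print(f"kappa/2pi = {kappa / (2 * math.pi) / 1e3:.0f} kHz")  # linewidth << fr

# Level spacing versus thermal energy
print(f"h*fr / (kB*T) = {h * fr / (kB * T):.0f}")   # >> 1
print(f"1 GHz <-> {h * 1e9 / kB * 1e3:.0f} mK")     # ~50 mK, as quoted in the text
\end{verbatim}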
Operating in the quantum regime means that the circuit can be prepared in its quantum-mechanical ground state $\ket{n=0}$ simply by waiting for a time of the order of a few photon lifetimes $T_\kappa = 1/\kappa$. It is also crucial to note that the vacuum fluctuations in such an oscillator can be made large. For example, taking reasonable values $L\sim0.8$~nH and $C\sim0.4$~pF, corresponding to $\omega_r/2\pi\sim 8$ GHz and $Z_r\sim 50~\Omega$, the ground state is characterized by vacuum fluctuations of the voltage with standard deviation as large as $\Delta V_0 = [\av{\hat V^2} - \av{\hat V}^2]^{1/2} = \sqrt{\hbar\omega_r/2C} \sim 1~\mu$V, with $\hat V = \hat Q/C$. As will be made clear later, this leads to large electric field fluctuations and therefore to large electric-dipole interactions when coupling to an artificial atom. \subsection{\label{sec:2D}2D resonators} Quantum harmonic oscillators come in many shapes and sizes, the LC oscillator being just one example. Other types of harmonic oscillators that feature centrally in circuit QED are microwave resonators where the electromagnetic field is confined either in a planar, essentially two-dimensional structure (2D resonators) or in a three-dimensional volume (3D resonators). The boundary conditions imposed by the geometry of these different resonators lead to a discretization of the electromagnetic field into a set of modes with distinct frequencies, where each mode can be thought of as an independent harmonic oscillator. Conversely (especially for the 2D case), one can think of these modes as nearly dissipationless acoustic plasma modes of superconductors. \begin{figure} \caption{(a) Schematic layout of a $\lambda/2$ coplanar waveguide resonator of length $d$, center conductor width $w$, ground plane separation $s$, together with its capacitively coupled input and output ports. The cosine shape of the second mode function ($m=1$) is illustrated with pink arrows. Also shown is the equivalent lumped element circuit model. Adapted from \textcite{Blais2004}.\label{fig:coplanaracrhitecture}} \end{figure} Early experiments in circuit QED were motivated by the observation of large quality factors in coplanar waveguide resonators in experiments on radiation detectors~\cite{Day2003} and by the understanding of the importance of presenting a clean electromagnetic environment to the qubits. Early circuit QED experiments were performed with these 2D coplanar waveguide resonators \cite{Wallraff2004}, an architecture that remains among the most commonly used today. A coplanar waveguide resonator consists of a coplanar waveguide of finite length formed by a center conductor of width $w$ and thickness $t$, separated on both sides by a distance $s$ from a ground plane of the same thickness, see Fig.~\ref{fig:coplanaracrhitecture}(a)~\cite{Pozar2012, Simons2001}. Both conductors are typically deposited on a low-loss dielectric substrate of permittivity $\varepsilon$ and thickness much larger than the dimensions $w, s, t$. This planar structure acts as a transmission line along which signals are transmitted in a way analogous to a conventional coaxial cable. As in a coaxial cable, the coplanar waveguide confines the electromagnetic field to a small volume between its center conductor and the ground, see Fig.~\ref{fig:coplanaracrhitecture}(b). The dimensions of the center conductor, the gaps, and the thickness of the dielectric are chosen such that the field is concentrated between the center conductor and ground, and radiation in other directions is minimized.
This structure supports a quasi-TEM mode~\cite{Wen1969}, with the electromagnetic field partly in the dielectric substrate and in the vacuum (or other dielectric) above the substrate, and with the largest concentration in the gaps between the center conductor and the ground planes. In practice, the coplanar waveguide can be treated as an essentially dispersion-free, linear dielectric medium. Ideally, the loss of coplanar waveguides is only limited by the conductivity of the center conductor and the ground plane, and by the loss tangent of the dielectric. To minimize losses, superconducting metals such as aluminum, niobium or niobium titanium nitride (NbTiN), are used in combination with dielectrics of low loss tangent, such as sapphire or high-resistivity silicon. Similarly to the lumped LC oscillator, the electromagnetic properties of a coplanar waveguide resonator are described by its characteristic impedance $Z_r = \sqrt{l_0/c_0}$ and the speed of light in the waveguide $v_0 = 1/\sqrt{l_0c_0}$, where we have introduced the capacitance to ground $c_0$ and inductance $l_0$ per unit length~\cite{Simons2001}. Typical values of these parameters are $Z_r \sim 50\,\Omega$ and $v_0 \sim 1.3\times 10^8$ m/s, or about a third of the speed of light in vacuum~\cite{Goppl2008}. For a given substrate, metal thickness and center conductor width, the characteristic impedance can be adjusted by varying the parameters $w$, $s$ and $t$ of the waveguide \cite{Simons2001}. In the coplanar waveguide geometry, transmission lines of constant impedance $Z_r$ can therefore be realized for varying center conductor width $w$ by keeping the ratio of $w/s$ close to a constant \cite{Simons2001}. This allows the experimenter to fabricate a device with large $w$ at the edges for convenient interfacing, and small $w$ away from the edges to minimize the mode volume or simply for miniaturization. A resonator is formed from a coplanar waveguide by imposing boundary conditions of either zero current or zero voltage at the two endpoints separated by a distance $d$. Zero current is achieved by micro-fabricating a gap in the center conductor (open boundary), while zero voltage can be achieved by grounding an end point (shorted boundary). A resonator with open boundary conditions at both ends, as illustrated in Fig.~\ref{fig:coplanaracrhitecture}(a), has a fundamental frequency $f_0 = v_0/2d$ with harmonics at $f_m = (m+1) f_0$, and is known as a $\lambda/2$ resonator. On the other hand, $\lambda/4$ resonators with fundamental frequency $f_0 = v_0/4d$ are obtained with one open end and one grounded end. A typical example is a $\lambda/2$ resonator of length $1.0$ cm and speed of light $1.3\times 10^8$ m/s corresponding to a fundamental frequency of $6.5$ GHz. This coplanar waveguide geometry is very flexible and a large range of frequencies can be achieved. In practice, however, the useful frequency range is restricted from above by the superconducting gap of the metal from which the resonator is made (82 GHz for aluminum). Above this energy, losses due to quasiparticles increase dramatically. Low frequency resonators can be made by using long, meandering, coplanar waveguides. For example, in \textcite{Sundaresan15}, a resonator was realized with a length of $0.68$ m and a fundamental frequency of $f_0 = 92$ MHz. With this frequency corresponding to a temperature of 4.4 mK, the low frequency modes of such long resonators are, however, not in the vacuum state. 
Indeed, according to the Bose-Einstein distribution, the thermal occupation of the fundamental mode at 10 mK is $\bar n_\kappa = 1/(e^{h f_0/k_B T}-1) \sim 1.8$. Typical circuit QED experiments rather work with resonators in the range of 5--15 GHz where, conveniently, microwave electronics is well developed. As already mentioned, entering the quantum regime for a given mode $m$ requires more than $\hbar \omega_m\gg k_B T$. It is also important that the linewidth $\kappa_m$ be small compared to the mode frequency $\omega_m$. As for the LC oscillator, the linewidth can be expressed in terms of the quality factor $Q_m$ of the resonator mode as $\kappa_m = \omega_m/Q_m$. An expression for the linewidth in terms of circuit parameters is given in~\cref{sec:environment}. There are multiple sources of loss, and it is common to distinguish between internal losses due to coupling to uncontrolled degrees of freedom (dielectric and conductor losses at the surfaces and interfaces, substrate dielectric losses, non-equilibrium quasiparticles, vortices, two-level fluctuators\ldots) and external losses due to coupling to the input and output ports used to couple signals in and out of the resonator \cite{Goppl2008}. In terms of these two contributions, the total dissipation rate of mode $m$ is $\kappa_m = \kappa_{\mathrm{ext},m} + \kappa_{\mathrm{int},m}$ and the total, or loaded, quality factor of the resonator is therefore $Q_{\mathrm{L},m} = (Q^{-1}_{\mathrm{ext},m} + Q^{-1}_{\mathrm{int},m})^{-1}$. It is always advantageous to maximize the internal quality factor, and much effort has been invested in improving resonator fabrication such that values of $Q_\mathrm{int} \sim 10^5$ are routinely achieved. A dominant source of internal losses in superconducting resonators at low power is believed to be two-level systems (TLSs) that reside in the bulk dielectric, at the metal-substrate interface, and at the metal-vacuum and substrate-vacuum interfaces where the electric field is large~\cite{Sage:2011,Wang2015n}. Internal quality factors over $10^6$ can be achieved by careful fabrication minimizing the occurrence of TLSs and by etching techniques to avoid substrate-vacuum interfaces in regions of high electric fields~\cite{Vissers2010b,Megrant2012,Bruno2015,Calusine2018}. On the other hand, the external quality factor can be adjusted by designing the capacitive coupling at the open ends of the resonator to input/output transmission lines. In coplanar waveguide resonators, these input and output coupling capacitors are frequently chosen either as a simple gap of a defined width in the center conductor, as illustrated in Fig.~\ref{fig:coplanaracrhitecture}(a), or formed by interdigitated capacitors \cite{Goppl2008}. The choice $Q_\mathrm{ext} \ll Q_\mathrm{int}$, corresponding to an `overcoupled' resonator, is ideal for fast qubit measurement, which is discussed in more detail in \cref{sec:readout}. On the other hand, undercoupled resonators, $Q_\mathrm{ext} \gg Q_\mathrm{int}$, where dissipation is only limited by internal losses which are kept as small as possible, can serve as quantum memories to store microwave photons for long times. Using different modes of the same resonator \cite{Leek2010}, or combinations of resonators \cite{Johnson2010, Kirchmair2013}, both regimes of high and low external losses can also be combined in the same circuit QED device. A general approach to describe losses in quantum systems is described in \cref{sec:environment}.
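The thermal occupation quoted above and the combination of internal and external losses into a loaded quality factor are straightforward to reproduce numerically. In the following sketch only the 92~MHz, 6.5~GHz and 10~mK values are taken from the text; the quality factors are assumptions chosen to illustrate the overcoupled regime.
\begin{verbatim}
import math

h = 6.62607015e-34
kB = 1.380649e-23

def n_thermal(f, T):
    """Bose-Einstein occupation of a mode of frequency f (Hz) at temperature T (K)."""
    return 1.0 / math.expm1(h * f / (kB * T))

print(round(n_thermal(92e6, 0.010), 1))   # 1.8 photons, as quoted above
print(f"{n_thermal(6.5e9, 0.010):.0e}")   # ~3e-14: a 6.5 GHz mode is essentially in vacuum

# Loaded quality factor Q_L = (1/Q_ext + 1/Q_int)^(-1); values are assumptions.
Q_int, Q_ext = 1e6, 1e4                   # overcoupled: Q_ext << Q_int
Q_L = 1.0 / (1.0 / Q_ext + 1.0 / Q_int)
print(f"Q_L = {Q_L:.0f}")                 # ~9900, dominated by the external coupling
\end{verbatim}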
Finally, the magnitude of the vacuum fluctuations of the electric field in coplanar waveguide resonators is related to the mode volume. While the longitudinal dimension of the mode is limited by the length of the resonator, which also sets the fundamental frequency $d \sim \lambda/2$, the transverse dimension can be adjusted over a broad range. Commonly chosen transverse dimensions are on the order of $w \sim 10 \, \rm{\mu m}$ and $s \sim 5\,\rm{\mu m}$ \cite{Wallraff2004}. If desired, the transverse dimension of the center conductor may be reduced to the sub-micron scale, up to a limit set by the penetration depth of the superconducting thin films which is typically of the order of 100 to 200 nm. When combining the typical separation $s \sim 5\,\rm{\mu m}$ with the magnitude of the voltage fluctuations $\Delta V_0 \sim 1~\mu$V already expected from the discussion of the LC circuit, we find that the zero-point electric field in coplanar resonator can be as large as $\Delta E_0 = \Delta V_0/s \sim 0.2$ V/m. This is at least two orders of magnitude larger than the typical amplitude of $\Delta E_0$ in the 3D cavities used in cavity QED~\cite{Haroche2006}. As will become clear later, together with the large size of superconducting artificial atoms, this will lead to the very large light-matter coupling strengths which are characteristic of circuit QED. \subsubsection{\label{sec:telegrapher}Quantized modes of the transmission line resonator} While only a single mode of the transmission line resonator is often considered, there are many circuit QED experiments where the multimode structure of the device plays an important role. In this section, we present the standard approach to finding the normal modes of a distributed resonator, first using a classical description of the circuit. The electromagnetic properties along the $x$-direction of a coplanar waveguide resonator of length $d$ can be modeled using a linear, dispersion-free one-dimensional medium. \Cref{fig:TelegrapherResonator} shows the telegrapher model for such a system where the distributed inductance of the resonator's center conductor is represented by the series of lumped elements inductances and the capacitance to ground by a parallel combination of capacitances~\cite{Pozar2011}. Using the flux and charge variables introduced in the description of the LC oscillator, the energy associated to each capacitance is $Q_n^2/2C_0$ while the energy associated to each inductance is $(\Phi_{n+1}-\Phi_n)^2/2L_0$. In these expressions, $\Phi_n$ is the flux variable associated with the $n$th node and $Q_n$ the conjugate variable which is the charge on that node. Using the standard approach~\cite{Devoret1997}, we can thus write the classical Hamiltonian corresponding to \Cref{fig:TelegrapherResonator} as \begin{equation} H = \sum_{n=0}^{N-1}\left[ \frac{1}{2C_0} Q_n^2 + \frac{1}{2L_0}(\Phi_{n+1}-\Phi_n)^2 \right]. \end{equation} It is useful to consider a continuum limit of this Hamiltonian where the size of a unit cell $\delta x$ is taken to zero. For this purpose, we write $C_0 = \delta x \, c_0$ and $L_0 = \delta x\, l_0$, with $c_0$ and $l_0$ the capacitance and inductance per unit length, respectively. Moreover, we define a continuum flux field via $\Phi(x_n) = \Phi_n$ and charge density field $Q(x_n) = Q_n/\delta x$. 
We can subsequently take the continuum limit $\delta x \to 0$ while keeping $d=N\delta x$ constant to find \begin{equation}\label{eq:HamiltonianResonatorContinous} H = \int_{0}^{d} dx\, \left\{ \frac{1}{2 c_0} Q(x)^2 + \frac{1}{2l_0}\left[\partial_x \Phi(x)\right]^2 \right\}, \end{equation} where we have used that $\partial_x \Phi(x_n) = \lim_{\delta x \to 0}\, (\Phi_{n+1}-\Phi_n)/\delta x$. In this expression, the charge density $Q(x,t) = c_0 \partial_t \Phi(x,t)$ is the canonical momentum conjugate to the generalized flux $\Phi(x,t) = \int_{-\infty}^t dt'\, V(x,t')$, with $V(x,t)$ the voltage to ground on the center conductor. \begin{figure} \caption{Telegrapher model of an open-ended transmission line resonator of length $d$. $L_0$ and $C_0$ are, respectively, the inductance and capacitance associated with each node $n$ of flux $\Phi_n$. The resonator is coupled to external transmission lines (not shown) at its input and output ports via the capacitors $C_\kappa$.} \label{fig:TelegrapherResonator} \end{figure} Using Hamilton's equations together with~\cref{eq:HamiltonianResonatorContinous}, we find that the propagation along the transmission line is described by the wave equation \begin{equation}\label{eq:WaveEq} v_0^2 \frac{\partial^2\Phi(x,t)}{\partial x^2} - \frac{\partial^2\Phi(x,t)}{\partial t^2} = 0, \end{equation} with $v_0=1/\sqrt{l_0c_0}$ the speed of light in the medium. The solution to~\eq{eq:WaveEq} can be expressed in terms of normal modes \begin{equation}\label{eq:ResonatorModeDecomp} \Phi(x,t) = \sum_{m=0}^\infty u_m(x) \Phi_m(t), \end{equation} with $\ddot\Phi_m = -\omega_m^2\Phi_m$ a function of time oscillating at the mode frequency $\omega_m$ and \begin{equation}\label{eq:ResonatorModeFunc} u_m(x) = A_m \cos\left[ k_m x + \varphi_m\right], \end{equation} the spatial profile of the mode with amplitude $A_m$. The wavevector $k_m = \omega_m/v_0$ and the phase $\varphi_m$ are set by the boundary conditions. For an open-ended $\lambda/2$-resonator these are \begin{equation}\label{eq:ZeroCurrentBoundary} I(x=0,d) = -\frac{1}{l_0}\left.\frac{\partial \Phi(x,t)}{\partial x}\right|_{x=0,d} = 0, \end{equation} corresponding to the fact that the current vanishes at the two extremities. A $\lambda/4$-resonator is modeled by requiring that the voltage $V(x,t) = \partial_t \Phi(x,t)$ vanishes at the grounded boundary. Asking for~\eq{eq:ZeroCurrentBoundary} to be satisfied for every mode implies that $\varphi_m = 0$ and that the wavevector is discrete with $k_m = (m+1)\pi/d$ for $m=0,1,\dots$ Finally, it is useful to choose the normalization constant $A_m$ such that \begin{equation} \frac 1d \int_{0}^{d} dx\, u_m(x)u_{m'}(x) = \delta_{mm'}, \end{equation} resulting in $A_m = \sqrt{2}$. This normalization implies that the amplitude of the modes in a 1D resonator goes down with the square root of the length $d$. Using this normal mode decomposition in~\eq{eq:HamiltonianResonatorContinous}, the Hamiltonian can now be expressed in the simpler form \begin{equation}\label{eq:NormalModeHResonator} H = \sum_{m=0}^\infty \left[ \frac{Q_m^2}{2C_\mathrm{r}} + \frac{1}{2} C_\mathrm{r} \omega_m^2 \Phi_m^2 \right], \end{equation} where $C_\mathrm{r} = dc_0$ is the total capacitance of the resonator and $Q_m = C_\mathrm{r} \dot\Phi_m$ the charge conjugate to $\Phi_m$. We immediately recognize this Hamiltonian to be a sum over independent harmonic oscillators, cf.~\eq{eq:HLC}.
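Before quantizing, the normal-mode structure can be checked numerically. The following sketch (an illustration only; apart from $v_0 \sim 1.3\times 10^8$~m/s and $d = 1$~cm quoted earlier, the parameter values are assumptions) diagonalizes the discrete telegrapher model of Fig.~\ref{fig:TelegrapherResonator} and compares the lowest mode frequencies with the continuum result $f_m = (m+1)v_0/2d$.
\begin{verbatim}
import numpy as np

# Normal modes of the discretized telegrapher model with open (zero-current)
# ends, compared with f_m = (m+1) * v0 / (2*d). Assumed values: d ~ 1 cm and
# v0 ~ 1.3e8 m/s as in the text, and a 50 ohm line.
d, v0, Z0 = 0.01, 1.3e8, 50.0
l0, c0 = Z0 / v0, 1.0 / (Z0 * v0)      # inductance and capacitance per unit length

N = 2000                               # number of unit cells
L0, C0 = l0 * d / N, c0 * d / N

# Equations of motion: C0 * Phi_n'' = -(2 Phi_n - Phi_{n+1} - Phi_{n-1}) / L0,
# with end nodes having a single neighbor (open boundaries).
K = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
K[0, 0] = K[-1, -1] = 1.0
omega = np.sqrt(np.clip(np.linalg.eigvalsh(K), 0.0, None) / (L0 * C0))

f_numeric = omega[1:5] / (2 * np.pi)            # drop the zero-frequency mode
f_analytic = np.arange(1, 5) * v0 / (2 * d)     # f_0, 2 f_0, 3 f_0, 4 f_0
print(np.round(f_numeric / 1e9, 2))             # approx. [ 6.5 13.  19.5 26. ] GHz
print(np.round(f_analytic / 1e9, 2))
\end{verbatim}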
Following once more the quantization procedure of Sec.~\ref{sec:HO}, the two conjugate variables $\Phi_m$ and $Q_m$ are promoted to non-commuting operators \begin{align} \hat \Phi_m &= \sqrt{\frac{\hbar Z_m}{2}} (\hat a^\dag_m+\hat a_m),\label{eq:ResonatorFlux}\\ \hat Q_m &= i \sqrt{\frac{\hbar}{2 Z_m}} (\hat a^\dag_m-\hat a_m),\label{eq:ResonatorCharge} \end{align} with $Z_m = \sqrt{L_m/C_\mathrm{r}}$ the characteristic impedance of mode $m$ and $L_m^{-1} \equiv C_r\omega_m^2$. Using these expressions in \cref{eq:NormalModeHResonator} immediately leads to the final result \begin{equation}\label{eq:H_cavity} \hat H = \sum_{m=0}^\infty \hbar\omega_m \hat a^\dag_m \hat a_m, \end{equation} with $\omega_m = (m+1) \omega_0$ the mode frequency and $\omega_0/2\pi = v_0 /2d$ the fundamental frequency of the $\lambda/2$ transmission-line resonator. To simplify the discussion, we have assumed here that the medium forming the resonator is homogenous. In particular, we have ignored the presence of the input and output port capacitors in the boundary condition of \cref{eq:ZeroCurrentBoundary}. In addition to lowering the external quality factor $Q_\mathrm{ext}$, these capacitances modify the amplitude and phase of the mode functions, as well as shift the mode frequencies. In some contexts, it can also be useful to introduce one or several Josephson junctions directly in the center conductor of the resonator. As discussed in \cref{sec:transmon}, this leads to a Kerr nonlinearity, $-K\hat{a}^{\dag2}\hat a^2/2$, of the oscillator. Kerr nonlinear coplanar waveguide resonators have been used, for example, as near quantum-limited linear amplifiers~\cite{Castellanos2008}, bifurcation amplifiers~\cite{Metcalfe2007}, and to study phenomena such as quantum heating~\cite{Ong2013}. We discuss the importance of quantum-limited amplification for qubit measurement in \cref{sec:readout} and applications of the Kerr nonlinearity in the context of quantum optics in \cref{sec:QuantumOptics}. A theoretical treatment of the resonator mode functions, frequencies and Kerr nonlinearity in the presence of resonator inhomogeneities, including embedded junctions, can be found in~\textcite{Bourassa2012} and is discussed in~\secref{sec:bbq}. \subsection{\label{sec:3D}3D resonators} Although their physical origin is not yet fully understood, dielectric losses at interfaces and surfaces are important limiting factors to the internal quality factor of coplanar transmission line resonators and lumped element LC oscillators, see~\textcite{Oliver2013b} for a review. An approach to mitigate the effect of these losses is to lower the ratio of the field energy stored at interfaces and surfaces to the energy stored in vacuum. Indeed, it has been observed that planar resonators with larger feature sizes ($s$ and $w$), and hence weaker electric fields near the interfaces and surfaces, typically have larger internal quality factors~\cite{Sage:2011}. This approach can be pushed further by using three-dimensional microwave cavities rather than planar circuits~\cite{Paik2011}. In 3D resonators formed by a metallic cavity, a larger fraction of the field energy is stored in the vacuum inside the cavity rather than close to the surface. As a result, the surface participation ratio can be as small as $10^{-7}$ in 3D cavities, in comparison to $10^{-5}$ for typical planar geometries \cite{Reagor2015}. Another potential advantage is the absence of dielectric substrate. 
In practice, however, this does not lead to a major gain in quality factor: even though coplanar resonators can have an air-substrate participation ratio as large as 0.9, the bulk loss tangent of sapphire and silicon substrates is significantly smaller than that of the interface oxides and does not appear to be the limiting factor~\cite{Wang2015n}. Three-dimensional resonators come in many different shapes and sizes, and can reach higher quality factors than lumped-element oscillators and 1D resonators. Quality factors as high as $4.2\times 10^{10}$ have been reported at 51 GHz and 0.8 K with Fabry-P\'erot cavities formed by two highly polished copper mirrors coated with niobium~\cite{Kuhr2007}. Corresponding to single microwave photon lifetimes of 130 ms, these cavities have been used in landmark cavity QED experiments~\cite{Haroche2006}. Similar quality factors have also been achieved with niobium micromaser cavities at 22~GHz and 0.15 K~\cite{Varcoe2000}. In the context of circuit QED, commonly used geometries include rectangular~\cite{Paik2011,Rigetti2012} and coaxial $\lambda/4$ cavities~\cite{Reagor2016}. The latter have important practical advantages in that no current flows near any seams created in the assembly of the device. \begin{figure} \caption{\label{fig:3Dmodes}} \end{figure} As illustrated in \cref{fig:3Dmodes}(a) and in close analogy with the coplanar waveguide resonator, rectangular cavities are formed by a finite section of a rectangular waveguide terminated by two metal walls acting as shorts. This three-dimensional resonator is thus simply vacuum surrounded on all sides by metal, typically aluminum to maximize the internal quality factor or copper if magnetic field tuning of components placed inside the cavity is required. The metallic walls impose boundary conditions on the electromagnetic field in the cavity, leading to a discrete set of TE and TM cavity modes of frequency~\cite{Pozar2012} \begin{equation} \omega_{mnl} = c\sqrt{ \left(\frac{m\pi}{a}\right)^2 + \left(\frac{n\pi}{b}\right)^2 + \left(\frac{l\pi}{d}\right)^2 }, \end{equation} labelled by the three integers $(m,n,l)$ and where $c$ is the speed of light, and $a$, $b$ and $d$ are the cavity dimensions. Dimensions of the order of a centimeter lead to resonance frequencies in the GHz range for the first modes. The TE modes, to which superconducting artificial atoms couple, are illustrated in \cref{fig:3Dmodes}(b-e). Because these modes are independent, once quantized, the cavity Hamiltonian again takes the form of \cref{eq:H_cavity} corresponding to a sum of independent harmonic oscillators. We return to the question of quantizing the electromagnetic field in arbitrary geometries in \secref{sec:bbq}. As already mentioned, a major advantage of 3D cavities compared to their 1D or lumped-element analogs is their high quality factor or, equivalently, long photon lifetime. A typical internal Q factor for rectangular aluminum cavities is $5\times 10^6$, corresponding to a photon lifetime above 50~$\mu$s~\cite{Paik2011}. These numbers are even higher for coaxial cavities where $Q_\mathrm{int} = 7 \times 10^7$, or above a millisecond of photon storage time, has been reported~\cite{Reagor2016}. Moreover, this latter type of cavity is more robust against imperfections which arise when integrating 3D resonators with Josephson-junction-based circuits. Lifetimes up to 2 s have also been reported in niobium cavities that were initially developed for accelerators \cite{Romanenko2020}.
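As a rough numerical illustration of these numbers (the cavity dimensions below, and the frequencies assigned to the aluminum and coaxial cavities, are assumptions chosen for the sketch; only the quality factors and the 51~GHz Fabry-P\'erot example are taken from the text), the mode frequencies and photon lifetimes can be estimated as follows.
\begin{verbatim}
import math

c = 299792458.0                      # speed of light (m/s)

def f_mnl(a, b, d, m, n, l):
    """Rectangular-cavity mode frequency (Hz) for inner dimensions a, b, d (m)."""
    return (c / 2.0) * math.sqrt((m / a) ** 2 + (n / b) ** 2 + (l / d) ** 2)

a, b, d = 0.035, 0.005, 0.025        # assumed centimeter-scale dimensions
print(f"first mode: {f_mnl(a, b, d, 1, 0, 1) / 1e9:.1f} GHz")   # a few GHz

def photon_lifetime(Q, f):
    """Photon lifetime T = Q / omega for quality factor Q and frequency f (Hz)."""
    return Q / (2 * math.pi * f)

print(f"{photon_lifetime(5e6, 8e9) * 1e6:.0f} us")     # Al cavity at an assumed 8 GHz
print(f"{photon_lifetime(7e7, 8e9) * 1e3:.1f} ms")     # coaxial cavity at an assumed 8 GHz
print(f"{photon_lifetime(4.2e10, 51e9) * 1e3:.0f} ms") # 51 GHz Fabry-Perot: ~130 ms
\end{verbatim}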
At such long photon lifetimes, microwave cavities are longer-lived quantum memories than the transmon qubit, which we will introduce in the next section. This has led to a new paradigm for quantum information processing where information is stored in a cavity, with the role of the qubit limited to providing the essential nonlinearity~\cite{Mirrahimi2014}. We come back to these ideas in \cref{sec:CatCodes}. \subsection{\label{sec:transmon}The transmon artificial atom} Although the oscillators discussed in the previous section can be prepared in their quantum mechanical ground state, it is challenging to observe clear quantum behavior with such linear systems. Indeed, harmonic oscillators are always in the correspondence limit and some degree of nonlinearity is therefore essential to encode and manipulate quantum information in these systems \cite{Leggett1984a}. Fortunately, superconductivity makes it possible to introduce nonlinearity in quantum electrical circuits while avoiding losses. Indeed, the Josephson junction is a nonlinear circuit element that is compatible with the requirements for very high quality factors and operation at millikelvin temperatures. The physics of these junctions was first understood in 1962 by Brian Josephson, then a 22-year-old PhD candidate~\cite{Josephson1962,McDonald2001}. Contrary to expectations~\cite{Bardeen1962}, Josephson showed that a dissipationless current, i.e.~a supercurrent, could flow between two superconducting electrodes separated by a thin insulating barrier. More precisely, he showed that this supercurrent is given by $I = I_c \sin\varphi$, where $I_c$ is the junction's critical current and $\varphi$ the phase difference between the superconducting condensates on either side of the junction~\cite{Tinkham1996}. The critical current, whose magnitude is determined by the junction size and material parameters, is the maximum current that can be supported before Cooper pairs are broken. Once this happens, dissipation kicks in and a finite voltage develops across the junction accompanied by a resistive current. Clearly, operation in the quantum regime requires currents well below this critical current. Josephson also showed that the time dependence of the phase difference $\varphi$ is related to the voltage across the junction according to $d\varphi/dt = 2\pi V /\Phi_0$, with $\Phi_0 = h/2e$ the flux quantum. It is useful to write this expression as $\varphi(t) = 2\pi \Phi(t)/\Phi_0~(\mathrm{mod}~2\pi)= 2\pi \int dt'\, V(t')/\Phi_0~(\mathrm{mod}~2\pi)$, with $\Phi(t)$ the flux variable already introduced in \secref{sec:HO}. The mod $2\pi$ in the previous equalities takes into account the fact that the superconducting phase $\varphi$ is a compact variable on the unit circle, $\varphi \equiv \varphi+2\pi$, while $\Phi$ can take arbitrary real values. Taken together, the two Josephson relations make it clear that a Josephson junction relates current $I$ to flux $\Phi$. This is akin to a geometric inductance whose constitutive relation $\Phi = L I$ also links these two quantities. For this reason, it is useful to define the Josephson inductance \begin{equation}\label{eq:LJ} L_J (\Phi) = \left(\frac{\partial I}{\partial \Phi}\right)^{-1} = \frac{\Phi_0}{2\pi I_c} \frac{1}{\cos(2\pi\Phi/\Phi_0)}. \end{equation} In contrast to geometric inductances, $L_J$ depends on the flux. As a result, when operated below the critical current, the Josephson junction can be thought of as a nonlinear inductor.
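As a small numerical illustration of \cref{eq:LJ} (the critical current below is an assumed, representative value and not taken from the text), the Josephson inductance can be evaluated at and away from zero flux.
\begin{verbatim}
import math

Phi0 = 2.067833848e-15    # magnetic flux quantum h/2e (Wb)

def L_J(Phi, Ic):
    """Josephson inductance Phi0 / (2*pi*Ic*cos(2*pi*Phi/Phi0)), in henries."""
    return Phi0 / (2 * math.pi * Ic * math.cos(2 * math.pi * Phi / Phi0))

Ic = 30e-9                # assumed critical current (A)
print(f"L_J(0)        = {L_J(0.0, Ic) * 1e9:.1f} nH")        # ~11 nH
print(f"L_J(0.2 Phi0) = {L_J(0.2 * Phi0, Ic) * 1e9:.1f} nH") # larger: the inductance is nonlinear
\end{verbatim}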
Replacing the geometric inductance $L$ of the LC oscillator discussed in \secref{sec:HO} by a Josephson junction, as in \cref{fig:transmonpotential}(b), therefore renders the circuit nonlinear. In this situation, the energy levels of the circuit are no longer equidistant. If the nonlinearity and the quality factor of the junction are large enough, the energy spectrum resembles that of an atom, with well-resolved and nonuniformly spread spectral lines. We therefore often refer to this circuit as an \emph{artificial atom}~\cite{Clarke1988}. In many situations, and as is the focus of much of this review, we can furthermore restrict our attention to only two energy levels, typically the ground and first excited states, forming a qubit. To make this discussion more precise, it is useful to see how the Hamiltonian of the circuit of \cref{fig:transmonpotential}(b) is modified by the presence of the Josephson junction taking the place of the linear inductor. While the energy stored in a linear inductor is $E = \int dt\, V(t) I(t) = \int dt\, (d\Phi/dt) I = \Phi^2/2L$, where we have used $\Phi = L I$ in the last equality, the energy of the nonlinear inductance rather takes the form \begin{equation} E = I_c \int dt\, \left(\frac{d\Phi}{dt}\right) \sin\left(\frac{2\pi}{\Phi_0}\Phi\right) = - E_J \cos\left(\frac{2\pi}{\Phi_0}\Phi\right), \end{equation} with $E_J = \Phi_0I_c/2\pi$ the Josephson energy. This quantity is proportional to the rate of tunnelling of Cooper pairs across the junction. Taking into account this contribution, the quantized Hamiltonian of the capacitively shunted Josephson junction therefore reads (see \cref{sec:AppendixTRcoupling}) \begin{equation}\label{eq:Hsj} \begin{split} \hat H_\mathrm{T} &= \frac{(\hat Q-Q_g)^2}{2C_\Sigma} - E_J \cos\left(\frac{2\pi}{\Phi_0}\hat \Phi\right) \\ & = 4E_C (\hat n-n_g)^2 - E_J \cos\hat \varphi. \end{split} \end{equation} In this expression, $C_\Sigma=C_J+C_S$ is the total capacitance, including the junction's capacitance $C_J$ and the shunt capacitance $C_S$. In the second line, we have defined the charge number operator $\hat n = \hat Q/2e$, the phase operator $\hat \varphi = (2\pi/\Phi_0)\hat\Phi$ (mod $2\pi$) and the charging energy $E_C = e^2/2C_\Sigma$. We have also included a possible offset charge $n_g = Q_g/2e$ due to capacitive coupling of the transmon to external charges. The offset charge can arise from spurious unwanted degrees of freedom in the transmon's environment or from an external gate voltage $V_g = Q_g/C_g$. As we show below, the choice of $E_J$ and $E_C$ is crucial in determining the system's sensitivity to the offset charge. \begin{figure} \caption{\label{fig:transmonpotential}} \end{figure} The spectrum of $\hat H_\mathrm{T}$ is controlled by the ratio $E_J/E_C$, with different values of this ratio corresponding to different types of superconducting qubits; see for example the reviews~\cite{Makhlin2001,Zagoskin2007,Clarke2008,Kjaergaard2019}. Regardless of the parameter regime, one can always express the Hamiltonian in the diagonal form $\hat H = \sum_j \hbar \omega_j \ket j \bra j$ in terms of its eigenfrequencies $\omega_j$ and eigenstates $\ket j$. In the literature, two notations are commonly used to label these eigenstates: $\{\ket{g},\ket{e},\ket{f},\ket{h}\ldots\}$ and, when there is no risk of confusion with resonator Fock states, $\{\ket{0},\ket{1},\ket{2}\ldots\}$. Depending on the context, we will use both notations in this review.
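The numerical diagonalization of \cref{eq:Hsj} referred to in the following discussion is straightforward to reproduce. The sketch below writes the Hamiltonian in the charge basis, where $\cos\hat\varphi$ couples neighboring charge states $\ket{n}$ and $\ket{n\pm1}$; the value of $E_C$ and the charge-basis truncation are assumptions chosen for illustration.
\begin{verbatim}
import numpy as np

def transmon_levels(EJ, EC, ng=0.0, ncut=30):
    """Eigenvalues of 4*EC*(n - ng)^2 - EJ*cos(phi) in a truncated charge basis."""
    n = np.arange(-ncut, ncut + 1)
    H = np.diag(4.0 * EC * (n - ng) ** 2)
    H -= 0.5 * EJ * (np.diag(np.ones(2 * ncut), 1) + np.diag(np.ones(2 * ncut), -1))
    return np.linalg.eigvalsh(H)

EC = 0.25  # charging energy in units of h x GHz (assumed illustrative value)
for ratio in (1, 10, 50):
    EJ = ratio * EC
    Es = transmon_levels(EJ, EC, ng=0.0)
    Ew = transmon_levels(EJ, EC, ng=0.5)   # the two extrema of the levels vs gate charge
    f01 = Es[1] - Es[0]
    anharmonicity = (Es[2] - Es[1]) - f01            # approaches -EC for EJ/EC >> 1
    dispersion = abs((Ew[1] - Ew[0]) - f01)          # suppressed exponentially in EJ/EC
    print(f"EJ/EC={ratio:3d}: f01={f01:5.2f} GHz, anharm={anharmonicity:6.3f} GHz, "
          f"dispersion={dispersion:.1e} GHz")
\end{verbatim}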
\Cref{fig:transmonregime} shows the energy difference $\omega_j-\omega_0$ for the three lowest energy levels for different ratios $E_J/E_C$ as obtained from numerical diagonalization of \cref{eq:Hsj}. If the charging energy dominates, $E_J/E_C < 1$, the eigenstates of the Hamiltonian are approximately given by eigenstates of the charge operator, $\ket j \simeq \ket n$, with $\hat n\ket n = n\ket n$. In this situation, a change in gate charge $n_g$ has a large impact on the transition frequency of the device. As a result, unavoidable charge fluctuations in the circuit's environment lead to corresponding fluctuations in the qubit transition frequency and consequently to dephasing. To mitigate this problem, one can work in the \emph{transmon regime}, where the ratio $E_J/E_C$ is large~\cite{Koch2007,Schreier2008}. Typical values are $E_J/E_C \sim 20 - 80$. In this situation, the charge degree of freedom is highly delocalized due to the large Josephson energy. For this reason, and as clearly visible in \cref{fig:transmonregime}(c), the first energy levels of the device become essentially independent of the gate charge. It can in fact be shown that the charge dispersion, which describes the variation of the energy levels with gate charge, decreases exponentially with $E_J/E_C$ in the transmon regime~\cite{Koch2007}. The net result is that the coherence time of the device is much larger than at small $E_J/E_C$. However, as is also clear from \cref{fig:transmonregime}, the price to pay for this increased coherence is the reduced anharmonicity of the transmon, an anharmonicity that is required to control the qubit without causing unwanted transitions to higher excited states. Fortunately, while the charge dispersion is exponentially small in $E_J/E_C$, the loss of anharmonicity has a much weaker dependence on this ratio given by $\sim (E_J/E_C)^{-1/2}$. As will be discussed in more detail in \cref{sec:quantumcomputing}, because of the gain in coherence, the reduction in anharmonicity is not an impediment to controlling the transmon state with high fidelity. \begin{figure} \caption{\label{fig:transmonregime}} \end{figure} While the fluctuations of the charge degree of freedom are large when $E_J/E_C\gg 1$, the fluctuations of its conjugate variable $\hat \varphi$ are correspondingly small, with $\Delta\hat \varphi = \sqrt{\braket{\hat \varphi^2}-\braket{\hat \varphi}^2} \ll 1$. In this situation, it is instructive to rewrite \eq{eq:Hsj} as \begin{equation}\label{eq:HsjLinNonLin} \begin{split} \hat H_\mathrm{T} = 4E_C \hat n^2 + \frac{1}{2}E_J \hat \varphi^2 - E_J\left(\cos\hat \varphi + \frac{1}{2}\hat \varphi^2\right), \end{split} \end{equation} the first two terms corresponding to an LC circuit of capacitance $C_\Sigma$ and inductance $E_J^{-1}(\Phi_0/2\pi)^2$, the linear part of the Josephson inductance \eq{eq:LJ}. We have dropped the offset charge $n_g$ in~\cref{eq:HsjLinNonLin} on the basis that the frequency of the relevant low-lying energy levels is insensitive to this parameter. Importantly, although these energies are not sensitive to variations in $n_g$, it is still possible to use an external oscillating voltage source to cause transitions between the transmon states. We come back to this later.
The last term of \cref{eq:HsjLinNonLin} is the nonlinear correction to this harmonic potential which, for $E_J/E_C\gg 1$ and therefore $\Delta\hat \varphi\ll1$ can be truncated to its first nonlinear correction \begin{equation}\label{eq:HsjPhi4} \begin{split} \hat H_\mathrm{q} = 4E_C \hat n^2 + \frac{1}{2}E_J \hat \varphi^2 - \frac{1}{4!}E_J\hat \varphi^4. \end{split} \end{equation} As expected from the above discussion, the transmon is thus a weakly anharmonic oscillator. Note that in this approximation, the phase $\hat \varphi$ is treated as a continuous variable with eigenvalues extending to arbitrary real values, rather than enforcing $2\pi$-periodicity. This is allowed as long as the device is in a regime where $\hat \varphi$ is sufficiently localized, which holds for low-lying energy eigenstates in the transmon regime with $E_J/E_C \gg 1$. Following the previous section, it is then useful to introduce creation and annihilation operators chosen to diagonalize the first two terms of \eq{eq:HsjPhi4}. Denoting these operators $\hat b^\dag$ and $\hat b$, in analogy to \eq{eq:HatPhiQ} we have \begin{align} \hat \varphi &= \left(\frac{2 E_C}{E_J}\right)^{1/4}(\hat b^\dag+\hat b),\label{eq:phiTransmon}\\ \quad \hat n &= \frac{i}{2}\left(\frac{E_J}{2 E_C}\right)^{1/4}(\hat b^\dag-\hat b).\label{eq:nTransmon} \end{align} This form makes it quite clear that fluctuations of the phase $\hat \varphi$ decrease with $E_J/E_C$, while the reverse is true for the conjugate charge $\hat n$. Using these expressions in \eq{eq:HsjPhi4} finally leads to\footnote{The approximate Hamiltonian \cref{eq:HsjPhi4b} is not bounded from below -- an artefact of the truncation of the cosine operator. Care should therefore be taken when using this form, and it should strictly speaking only be used in a truncated subspace of the original Hilbert space.} \begin{equation}\label{eq:HsjPhi4b} \begin{split} \hat H_\mathrm{q} &= \sqrt{8E_CE_J}\hat b^\dagb - \frac{E_C}{12}(\hat b^\dag+\hat b)^4\\ &\approx \hbar\omega_q \hat b^\dagb - \frac{E_C}{2}\hat b^\dag\hat b^\dag\hat b\hat b, \end{split} \end{equation} where $\hbar\omega_q = \sqrt{8E_CE_J}-E_C$. In the second line, we have kept only terms that have the same number of creation and annihilation operators. This is reasonable because, in a frame rotating at $\omega_q$, any terms with an unequal number of $\hat b$ and $\hat b^\dag$ will be oscillating. If the frequency of these oscillations is larger than the prefactor of the oscillating term, then this term rapidly averages out and can be neglected~\cite{Cohen-Tannoudji1977}. This rotating-wave approximation (RWA) is valid here if $\hbar\omega_q \gg E_C/4$, an inequality that is easily satisfied in the transmon regime. The particular combination $\omega_p = \sqrt{8E_CE_J}/\hbar$ is known as the Josephson plasma frequency and corresponds to the frequency of small oscillations of the effective particle of mass $C$ at the bottom of a well of the cosine potential of the Josephson junction. In the transmon regime, this frequency is renormalized by a `Lamb shift' equal to the charging energy $E_C$ such that $\omega_q = \omega_p-E_C/\hbar$ is the transition frequency between ground and first excited state. Finally, the last term of \eq{eq:HsjPhi4b} is a Kerr nonlinearity, with $E_C/\hbar$ playing the role of Kerr frequency shift per excitation of the nonlinear oscillator~\cite{Walls2008}. 
To see this even more clearly, it can be useful to rewrite \eq{eq:HsjPhi4b} as $H_\mathrm{q} = \hbar\tilde \omega_q(\hat b^\dagb)\hat b^\dagb$, where the frequency $\tilde \omega_q(\hat b^\dagb) = \omega_q-E_C(\hat b^\dagb-1)/2\hbar$ of the oscillator is a decreasing function of the excitation number $\hat b^\dagb$. Considering only the first few levels of the transmon, this simply means that the $e$--$f$ transition frequency is smaller by $E_C$ than the $g$--$e$ transition frequency, see \cref{fig:transmonpotential}(a). In other words, in the regime of validity of the approximation made to obtain \cref{eq:HsjPhi4}, the anharmonicity of the transmon is $-E_C$ with a typical value of $E_C/h \sim 100$--$400$~MHz~\cite{Koch2007}. Corrections to the anharmonicity from $-E_C$ can be obtained numerically or by keeping higher-order terms in the expansion of \cref{eq:HsjPhi4}. While the nonlinearity $E_C/\hbar$ is small with respect to the oscillator frequency $\omega_q$, it is in practice much larger than the spectral linewidth that can routinely be obtained for these artificial atoms and can therefore easily be spectrally resolved. As a result, and in contrast to more traditional realizations of Kerr nonlinearities in quantum optics, it is possible with superconducting quantum circuits to have a large Kerr nonlinearity even at the single-photon level. Some of the many implications of this observation will be discussed further in this review. For quantum information processing, the presence of this nonlinearity is necessary to address only the ground and first excited state without unwanted transition to other states. In this case, the transmon acts as a two-level system, or qubit. However, it is important to keep in mind that the transmon is a multilevel system and that it is often necessary to include higher levels in the description of the device to quantitatively explain experimental observations. These higher levels can also be used to considerable advantage in some cases \cite{Rosenblum2018b,Reinhold2019,Elder2020,Ma2019a}. \subsection{\label{sec:tunabletransmon}Flux-tunable transmons} A useful variant of the transmon artificial atom is the flux-tunable transmon, where the single Josephson junction is replaced with two parallel junctions forming a superconducting quantum interference device (SQUID), see \cref{fig:transmonpotential}(c)~\cite{Koch2007}. The transmon Hamiltonian then reads \begin{equation} \begin{split} \hat H_\mathrm{T} & = 4E_C \hat n^2 - E_{J1} \cos\hat \varphi_1 - E_{J2} \cos\hat \varphi_2, \end{split} \end{equation} where $E_{Ji}$ is the Josephson energy of junction $i$, and $\hat \varphi_i$ the phase difference across that junction. In the presence of an external flux $\Phi_x$ threading the SQUID loop and in the limit of negligible geometric inductance of the loop, flux quantization requires that $\hat \varphi_1-\hat \varphi_2 = 2\pi\Phi_x/\Phi_0~(\bmod\,2\pi)$~\cite{Tinkham1996}. Defining the average phase difference $\hat \varphi = (\hat \varphi_1+\hat \varphi_2)/2$, the Hamiltonian can then be rewritten as~\cite{Koch2007,Tinkham1996} \begin{equation}\label{eq:HTransmonFluxTunable} \hat H_\mathrm{T} = 4E_C \hat n^2 - E_J(\Phi_x)\cos(\hat \varphi-\varphi_0), \end{equation} where \begin{equation} \begin{aligned} E_J(\Phi_x) ={} E_{J\Sigma} \cos\left(\frac{\pi\Phi_x}{\Phi_0}\right) \sqrt{1+d^2\tan^2\left(\frac{\pi\Phi_x}{\Phi_0}\right)}, \end{aligned} \end{equation} with $E_{J\Sigma} = E_{J2}+E_{J1}$ and $d=(E_{J2}-E_{J1})/E_{J\Sigma}$ the junction asymmetry. 
The phase $\varphi_0 = d \tan(\pi\Phi_x/\Phi_0)$ can be ignored for a time-independent flux~\cite{Koch2007}. According to \cref{eq:HTransmonFluxTunable}, replacing the single junction with a SQUID loop yields an effective flux-tunable Josephson energy $E_J(\Phi_x)$. In turn, this results in a flux-tunable transmon frequency $\omega_q(\Phi_x) = [\sqrt{8E_C|E_J(\Phi_x)|}-E_C]/\hbar$.~\footnote{The absolute value arises because when expanding the Hamiltonian in powers of $\hat \varphi$ in \cref{eq:HsjPhi4}, the potential energy term must always be expanded around a minimum. This discussion also assumes that the ratio $|E_J(\Phi_x)|/E_C$ is in the transmon range for all relevant $\Phi_x$.} In practice, the transmon frequency can be tuned by as much as one GHz in as little as 10--20 ns \cite{Rol2019a,Rol2020}. Dynamic range can also be traded for faster flux excursions by increasing the bandwidth of the flux bias lines. This possibility is exploited in several applications, including for quantum logical gates as discussed in more detail in~\cref{sec:quantumcomputing}. As will become clear later, this additional control knob can lead to dephasing due to noise in the flux threading the SQUID loop. With this in mind, it is worth noting that a larger asymmetry $d$ leads to a smaller range of tunability and thus also to less susceptibility to flux noise~\cite{Hutchings2017}. Finally, first steps towards realizing voltage-tunable transmons, in which a semiconducting nanowire takes the place of the SQUID loop, have been demonstrated~\cite{Casparis2018,Luthi2018}. \subsection{Other superconducting qubits} While the transmon is currently the most extensively used and studied superconducting qubit, many other types of superconducting artificial atoms are used in the context of circuit QED. In addition to working with different ratios of $E_J/E_C$, these other qubits vary in the number of Josephson junctions and the topology of the circuit in which these junctions are embedded. This includes charge qubits~\cite{Bouchiat1998, Shnirman1997,Nakamura1999}, flux qubits~\cite{Orlando1999,Mooij1999} including variations with a large shunting capacitance~\cite{You2007,Yan2016a}, phase qubits~\cite{Martinis2002}, the quantronium~\cite{Vion02}, the fluxonium~\cite{Manucharyan2009} and the $0-\pi$ qubit~\cite{Gyenis2019a,Brooks2013}, amongst others. For more details about these different qubits, the reader is referred to reviews on the topic~\cite{Makhlin2001,Zagoskin2007,Clarke2008,Krantz2019,Kjaergaard2019}. \section{\label{sec:LightMatter} Light-matter interaction in circuit QED} \subsection{Exchange interaction between a transmon and an oscillator} \label{sec:TransmonOscillator} Having introduced the two main characters of this review, the quantum harmonic oscillator and the transmon artificial atom, we are now ready to consider their interaction. Because of their large size, which results from the requirement of a low charging energy (large capacitance), transmon qubits can very naturally be capacitively coupled to microwave resonators, see \cref{fig:transmonCQED} for schematic representations. With the resonator taking the place of the classical voltage source $V_g$, capacitive coupling to a resonator can be introduced in the transmon Hamiltonian~\cref{eq:Hsj} by the replacement $n_g \to -\hat n_r$, with $\hat n_r$ the quantized charge bias of the transmon due to the resonator (the choice of sign is simply a common convention in the literature that we will adopt here, see \cref{sec:AppendixTRcoupling}).
The Hamiltonian of the combined system is therefore \cite{Blais2004} \begin{equation}\label{eq:HTransmonResonator} \hat H = 4E_C (\hat n+\hat n_r)^2 - E_J \cos\hat \varphi + \sum_m\hbar\omega_m \hat a_m^\dagger \hat a_m, \end{equation} where $\hat n_r = \sum_m \hat n_m$ with $\hat n_m = (C_g/C_m) \hat Q_m/2e$ the contribution to the charge bias due to the $m$th resonator mode. Here, $C_g$ is the gate capacitance and $C_m$ the associated resonator mode capacitance. To simplify these expressions, we have assumed here that $C_g \ll C_\Sigma, C_m$. A derivation of the Hamiltonian of \cref{eq:HTransmonResonator} that goes beyond the simple replacement of $n_g$ by $-\hat n_r$ and without the above assumption can be found in~\cref{sec:AppendixTRcoupling} for the case of a single LC oscillator coupled to the transmon. \begin{figure} \caption{Schematic representation of a transmon qubit (green) coupled to (a) a 1D transmission-line resonator, (b) a lumped-element LC circuit and (c) a 3D coaxial cavity. Panel (a) is adapted from \textcite{Blais2004}.\label{fig:transmonCQED}} \end{figure} Assuming that the transmon frequency is much closer to one of the resonator modes than all the other modes, say $|\omega_0-\omega_q| \ll |\omega_m-\omega_q|$ for $m\ge 1$, we truncate the sum over $m$ in \cref{eq:HTransmonResonator} to a single term. In this single-mode approximation, the Hamiltonian reduces to that of a single oscillator of frequency denoted $\omega_r$ coupled to a transmon. It is interesting to note that, regardless of the physical nature of the oscillator---for example a single mode of a 2D or 3D resonator---it is possible to represent this Hamiltonian with an equivalent circuit where the transmon is capacitively coupled to an LC oscillator as illustrated in \figref{fig:transmonCQED}(b). This type of formal representation of complex geometries in terms of equivalent lumped element circuits is generally known as ``black-box quantization''~\cite{Nigg2012}, and is explored in more detail in \secref{sec:bbq}. As will be discussed in \cref{sec:Purcell}, there are many situations of experimental relevance where ignoring the multi-mode nature of the resonator leads to inaccurate predictions. Using the creation and annihilation operators introduced in the previous section, in the single-mode approximation \eq{eq:HTransmonResonator} reduces to \footnote{One might worry about the term $\hat n_r^2$ arising from~\cref{eq:HTransmonResonator}. However, this term can be absorbed in the charging energy term of the resonator mode, see \cref{eq:HLC}, and therefore leads to a renormalization of the resonator frequency which we omit here for simplicity. See~\cref{eq:app:cq:HTransmonResonatorNoApprox,eq:app:cq:HTransmonResonator} for further details.} \begin{equation}\label{eq:HTransmonRabi} \begin{aligned} \hat H \approx{}& \hbar\omega_r \hat a^\daga + \hbar\omega_q \hat b^\dagb - \frac{E_C}{2}\hat b^\dag\hat b^\dag\hat b\hat b\\ &- \hbar g (\hat b^\dag-\hat b)(\hat a^\dag-\hat a), \end{aligned} \end{equation} where $\omega_r$ is the frequency of the mode of interest and \begin{equation}\label{eq:gTransmon} \begin{aligned} g ={}& \omega_r \frac{C_g}{C_\Sigma} \left(\frac{E_J}{2E_C}\right)^{1/4} \sqrt{\frac{\pi Z_r}{R_K}}, \end{aligned} \end{equation} the oscillator-transmon, or light-matter, coupling constant. Here, $Z_r$ is the characteristic impedance of the resonator mode and $R_K = h/e^2 \sim 25.8$~k$\Omega$ the resistance quantum.
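For a sense of scale, \cref{eq:gTransmon} can be evaluated for representative parameters. In the sketch below, the capacitance ratio, $E_J/E_C$, the resonator frequency and the impedance are all assumptions chosen for illustration.
\begin{verbatim}
import math

h = 6.62607015e-34
e = 1.602176634e-19
RK = h / e**2                     # resistance quantum, ~25.8 kOhm

def g_over_2pi(fr, Cg_over_Csig, EJ_over_EC, Zr):
    """Transmon-resonator coupling g/2pi (Hz) from the expression for g above."""
    return fr * Cg_over_Csig * (EJ_over_EC / 2.0) ** 0.25 * math.sqrt(math.pi * Zr / RK)

# Assumed values: 8 GHz mode, 50 ohm impedance, EJ/EC = 50, Cg/Csigma = 0.1
print(f"g/2pi = {g_over_2pi(8e9, 0.1, 50.0, 50.0) / 1e6:.0f} MHz")  # on the order of 100 MHz
\end{verbatim}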
The above Hamiltonian can be simplified further in the experimentally relevant situation where the coupling constant is much smaller than the system frequencies, $|g| \ll \omega_r,\, \omega_q$. Invoking the rotating-wave approximation, it simplifies to
\begin{equation}\label{eq:HTransmonJC} \begin{aligned} \hat H \approx{}& \hbar\omega_r \hat a^\dag\hat a + \hbar\omega_q \hat b^\dag\hat b - \frac{E_C}{2}\hat b^\dag\hat b^\dag\hat b\hat b\\ &+ \hbar g (\hat b^\dag\hat a+\hat b\hat a^\dag). \end{aligned} \end{equation}
As can be seen from \eq{eq:nTransmon}, the prefactor $(E_J/2E_C)^{1/4}$ in \eq{eq:gTransmon} is linked to the size of charge fluctuations in the transmon. By introducing a length scale $l$ corresponding to the distance a Cooper pair travels when tunneling across the transmon's junction, it is tempting to interpret \eq{eq:gTransmon} as $\hbar g = d_0\mathcal{E}_0$ with $d_0 = 2e l (E_J/32 E_C)^{1/4}$ the dipole moment of the transmon and $\mathcal{E}_0= (\omega_r/l)(C_g/C_\Sigma)\sqrt{\hbar Z_r/2}$ the resonator's zero-point electric field as seen by the transmon. Since these two factors can be made large, especially so in the transmon regime where $d_0 \gg 2e l$, the electric-dipole interaction strength $g$ can be made very large, much more so than with natural atoms in cavity QED. It is also instructive to express \cref{eq:gTransmon} as
\begin{equation}\label{eq:g_alpha} g = \omega_r \frac{C_g}{C_\Sigma} \left(\frac{E_J}{2E_C}\right)^{1/4} \sqrt{\frac{Z_r}{Z_\mathrm{vac}}}\sqrt{2\pi\alpha}, \end{equation}
where $\alpha = Z_\mathrm{vac}/2R_K$ is the fine-structure constant and $Z_\mathrm{vac}= \sqrt{\mu_0/\epsilon_0} \sim 377~\Omega$ the impedance of vacuum with $\epsilon_0$ the vacuum permittivity and $\mu_0$ the vacuum permeability \cite{Devoret2007}. That $\alpha$ appears here should not be surprising because this quantity characterizes the interaction between the electromagnetic field and charged particles. Here, this interaction is reduced by the fact that both $Z_r/Z_\mathrm{vac}$ and $C_g/C_\Sigma$ are smaller than unity. Very large couplings can nevertheless be achieved by working with large values of $E_J/E_C$ or, in other words, in the transmon regime. Large $g$ is therefore obtained at the expense of reducing the transmon's relative anharmonicity $-E_C/\hbar\omega_q \simeq -\sqrt{E_C/8E_J}$. We note that the coupling can be increased by boosting the resonator's impedance, something that can be realized, for example, by replacing the resonator's center conductor with a junction array \cite{Andersen2017,Stockklauser2017}.
Apart from a change in the details of the expression of the coupling $g$, the above discussion holds for transmons coupled to lumped-element, 2D, or 3D resonators. Importantly, by going from 2D to 3D, the resonator mode volume is made significantly larger, leading to a significant reduction in the vacuum fluctuations of the electric field. As first demonstrated in \textcite{Paik2011}, this can be done without a change in the magnitude of $g$ simply by making the transmon larger, thereby increasing its dipole moment. As illustrated in Fig.~\ref{fig:transmonCQED}(c), the transmon then essentially becomes an antenna that is optimally placed within the 3D resonator to strongly couple to one of the resonator modes.
To strengthen the analogy with cavity QED even further, it is useful to restrict the description of the transmon to its first two levels.
This corresponds to making the replacements $\hat b^\dag \rightarrow\spp{} = \ket{e}\bra{g}$ and $\hat b \rightarrow\smm{} = \ket{g}\bra{e}$ in \eq{eq:HTransmonRabi} to obtain the well-known Jaynes-Cummings Hamiltonian~\cite{Haroche2006,Blais2004}
\begin{equation}\label{eq:HJC} \hat H_\mathrm{JC} = \hbar\omega_r \hat a^\dag\hat a + \frac{\hbar\omega_q}{2}\sz{} + \hbar g (\hat a^\dag\smm{} + \hat a\spp{}), \end{equation}
where we use the convention $\sz{} = \ket e\bra e - \ket g \bra g$. The last term of this Hamiltonian describes the coherent exchange of a single quantum between light and matter, here realized as a photon in the oscillator or an excitation of the transmon.
\subsection{The Jaynes-Cummings spectrum}\label{sec:JaynesCummings}
The Jaynes-Cummings Hamiltonian is an exactly solvable model which very accurately describes many situations in which an atom, artificial or natural, can be considered as a two-level system in interaction with a single mode of the electromagnetic field. This model can yield qualitative agreement with experiments in situations where only the first two levels of the transmon, $\ket{\sigma = \{g, e\}}$, play an important role. It is often the case, however, that quantitative agreement between theoretical predictions and experiments is obtained only when accounting for higher transmon energy levels and the multimode nature of the field. Nevertheless, since a great deal of insight can be gained from this, in this section we consider the Jaynes-Cummings model more closely.
In the absence of coupling $g$, the \emph{bare} states of the qubit-field system are labelled $\ket{\sigma,n}$ with $\sigma$ defined above and $n$ the photon number. The \emph{dressed} eigenstates of the Jaynes-Cummings Hamiltonian, $\ket{\overline{\sigma, n}} = \hat U^\dagger \ket{\sigma,n}$, can be obtained from these bare states using the Bogoliubov-like unitary transformation~\cite{Boissonneault2009,Carbonaro1979}
\begin{equation} \hat U = \exp \left[\Lambda(\hat N_T) (\hat a^\dag\smm{} - \hat a\spp{})\right], \label{eqn:JCdiag_transform} \end{equation}
where we have defined
\begin{equation}\label{eq:LambdaNT} \Lambda(\hat N_T) = \frac{\arctan\left(2\lambda\sqrt{\hat N_T}\right)}{2\sqrt{\hat N_T}}. \end{equation}
Here, $\hat N_T = \hat a^\dag\hat a + \spp{}\smm{}$ is the operator associated with the total number of excitations, which commutes with $\hat H_\mathrm{JC}$, and $\lambda = g/\Delta$ with $\Delta = \omega_{q}-\omega_r$ the qubit-resonator detuning. Under this transformation, $\hat H_\mathrm{JC}$ takes the diagonal form
\begin{equation}\label{eq:HJCdiagonal} \begin{split} &\hat H_D = \hat U^\dag \hat H_\mathrm{JC} \hat U\\ & = \hbar\omega_r\hat a^\dag\hat a + \frac{\hbar\omega_{q}}{2}\sz{} - \frac{\hbar\Delta}{2} \left( 1 - \sqrt{1+4\lambda^2\hat N_T} \right) \sz{}. \end{split} \end{equation}
The dressed-state energies can be read directly from this expression and, as illustrated in Fig.~\ref{fig:JCSpectrum}, the Jaynes-Cummings spectrum consists of doublets $\{\ket{\overline{g,n}},\ket{\overline{e,n-1}}\}$ of fixed excitation number\footnote{To arrive at these expressions, we have added $\hbar \omega_r/2$ to $\hat H_D$.
This global energy shift is without consequences.}
\begin{align}\label{eq:JCEnergies} \begin{split} E_{\overline{g,n}} = \hbar n \omega_r - \frac\hbar 2 \sqrt{\Delta^2 + 4g^2n},\\ E_{\overline{e,n-1}} = \hbar n \omega_r + \frac\hbar 2 \sqrt{\Delta^2 + 4g^2n}, \end{split} \end{align}
and of the ground state $\ket{\overline{g,0}}=\ket{g,0}$ of energy $E_{\overline{g,0}} = -\hbar\omega_{q}/2$. The excited dressed states are
\begin{equation}\label{eq:JCEigenstates} \begin{split} \ket{\overline{g, n}} &= \cos(\theta_n/2)\ket{g,n}-\sin(\theta_n/2)\ket{e,n-1},\\ \ket{\overline{e, n-1}} &= \sin(\theta_n/2)\ket{g,n} + \cos(\theta_n/2)\ket{e,n-1}, \end{split} \end{equation}
with $\theta_n = \arctan(2g\sqrt{n}/\Delta)$.
A crucial feature of the energy spectrum of~\eq{eq:JCEnergies} is the scaling with the photon number $n$. In particular, for zero detuning, $\Delta = 0$, the energy levels $E_{\overline{g,n}}$ and $E_{\overline{e,n-1}}$ are split by $2g\sqrt{n}$, in contrast to two coupled harmonic oscillators where the energy splitting is independent of $n$. Experimentally probing this spectrum thus constitutes a way to assess the quantum nature of the coupled system~\cite{Carmichael1996,Fink2008}. We return to this and related features of the spectrum in~\secref{sec:resonant}.
\begin{figure} \caption{Energy spectrum of the uncoupled (gray lines) and dressed (blue lines) states of the Jaynes-Cummings Hamiltonian at zero detuning, $\Delta = \omega_{q} - \omega_r = 0$.\label{fig:JCSpectrum}} \end{figure}
\subsection{\label{sec:dispersive}Dispersive regime}
On resonance, $\Delta = 0$, the dressed-states \cref{eq:JCEigenstates} are maximally entangled qubit-resonator states implying that the qubit is, by itself, never in a well-defined state, i.e.~the reduced state of the qubit found by tracing over the resonator is not pure. For quantum information processing, it is therefore more practical to work in the dispersive regime where the qubit-resonator detuning is large with respect to the coupling strength, $|\lambda| = |g/\Delta|\ll 1$. In this case, the coherent exchange of a quantum between the two systems described by the last term of $\hat H_\mathrm{JC}$ is not resonant, and interactions take place only via virtual photon processes. Qubit and resonator are therefore only weakly entangled and a simplified model obtained below from second-order perturbation theory is often an excellent approximation. As the virtual processes can involve higher energy levels of the transmon, it is however crucial to account for its multi-level nature. For this reason, our starting point will be the Hamiltonian of \cref{eq:HTransmonJC} and not its two-level approximation \cref{eq:HJC}.
\subsubsection{\label{sec:dispersive:SW}Schrieffer-Wolff approach}
To find an approximation to~\cref{eq:HTransmonJC} valid in the dispersive regime, we perform a Schrieffer-Wolff transformation to second order~\cite{Koch2007}.
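Before carrying out this approximation, we note that the $\sqrt{n}$ scaling of the doublet splitting discussed above is easy to verify numerically. The sketch below (plain \texttt{numpy}, $\hbar = 1$, purely illustrative parameter values) diagonalizes \cref{eq:HJC} in a truncated Fock space at zero detuning and compares the doublet splittings to $2g\sqrt{n}$.
\begin{verbatim}
import numpy as np

# Numerical check of the sqrt(n) doublet splitting of the Jaynes-Cummings
# spectrum at zero detuning. Parameters are illustrative; hbar = 1.
wr = wq = 2 * np.pi * 6.0   # resonant case, Delta = 0 (2pi * GHz)
g = 2 * np.pi * 0.1         # coupling (2pi * GHz)
n_max = 30                  # Fock-space truncation

a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)   # annihilation operator
sm = np.array([[0.0, 1.0], [0.0, 0.0]])          # sigma_- = |g><e|, basis (|g>, |e>)
sz = np.diag([-1.0, 1.0])                        # sigma_z = |e><e| - |g><g|
I_r, I_q = np.eye(n_max), np.eye(2)

H = (wr * np.kron(a.conj().T @ a, I_q)
     + 0.5 * wq * np.kron(I_r, sz)
     + g * (np.kron(a.conj().T, sm) + np.kron(a, sm.conj().T)))

evals = np.sort(np.linalg.eigvalsh(H))
for n in range(1, 6):
    splitting = evals[2 * n] - evals[2 * n - 1]
    print(f"n = {n}: numerical splitting / 2pi = {splitting / (2 * np.pi):.4f} GHz, "
          f"2 g sqrt(n) / 2pi = {2 * g * np.sqrt(n) / (2 * np.pi):.4f} GHz")
\end{verbatim}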
As shown in~\cref{sec:unitarytransforms}, as long as the interaction term in~\cref{eq:HTransmonJC} is sufficiently small, the resulting effective Hamiltonian is well approximated by
\begin{equation}\label{eq:HTransmonDispersiveSW} \begin{aligned} \hat H_\mathrm{disp} \simeq{}& \hbar \omega_r \hat a^\dag\hat a + \hbar\omega_q \hat b^\dag\hat b - \frac{E_C}{2}\hat b^\dag\hat b^\dag\hat b\hat b\\ +& \sum_{j=0}^\infty \hbar \left(\Lambda_{j} + \chi_{j}\hat a^\dag\hat a\right) \ket{j} \bra{j}, \end{aligned} \end{equation}
where $\ket{j}$ label the eigenstates of the transmon which, under the approximation used to obtain~\cref{eq:HsjPhi4b}, are just the eigenstates of the number operator $\hat b^\dag\hat b$. Moreover, we have defined
\begin{subequations} \begin{align} \Lambda_{j} ={}& \chi_{j-1,j},\quad \chi_{j} = \chi_{j-1,j} - \chi_{j,j+1},\\ \chi_{j-1,j} ={}& \frac{j g^2}{\Delta - (j - 1)E_C/\hbar}, \end{align} \end{subequations}
for $j>0$, with $\Lambda_0 = 0$ and $\chi_0 = -g^2/\Delta$. Here, the $\chi_j$ are known as dispersive shifts, while the $\Lambda_j$ are Lamb shifts and are signatures of vacuum fluctuations~\cite{Lamb1947,Bethe1947,Fragner2008}.
Truncating \eq{eq:HTransmonDispersiveSW} to the first two levels of the transmon leads to the more standard form of the dispersive Hamiltonian~\cite{Blais2004}
\begin{equation}\label{eq:HQubitDispersive} \begin{aligned} \hat H_\text{disp} \approx{}& \hbar \omega_r' \hat a^\dag\hat a + \frac{\hbar\omega_q'}{2} \sz{} + \hbar \chi \hat a^\dag\hat a \sz{}, \end{aligned} \end{equation}
where $\chi$ is the qubit-state-dependent dispersive cavity shift, with~\cite{Koch2007}
\begin{equation}\label{eq:HQubitDispersiveParametersSW} \begin{aligned} \omega_r' ={}& \omega_r - \frac{g^2}{\Delta-E_C/\hbar},\quad \omega_q' = \omega_q + \frac{g^2}{\Delta},\\ \chi ={}& -\frac{g^2E_C/\hbar}{\Delta(\Delta - E_C/\hbar)}. \end{aligned} \end{equation}
These dressed frequencies are what are measured experimentally in the dispersive regime. It is important to emphasize that the frequencies entering the right-hand sides of \cref{eq:HQubitDispersiveParametersSW} are the \emph{bare} qubit and resonator frequencies. The spectrum of this two-level dispersive Hamiltonian is illustrated in \cref{fig:DispersiveSpectrum}. Much of this review is devoted to the consequences of this dispersive Hamiltonian for qubit readout and quantum information processing. We note that the Schrieffer-Wolff transformation also gives rise to resonator and qubit self-Kerr nonlinearities at fourth order~\cite{Zhu2012}.
As already mentioned, these perturbative results are valid when the interaction term in~\cref{eq:HTransmonJC} is sufficiently small compared to the energy splitting of the bare transmon-oscillator energy levels, $|\lambda|=|g/\Delta|\ll1$. Because the matrix elements of the operators involved in the interaction term scale with the number of photons in the resonator and the number of qubit excitations, a more precise bound on the validity of \cref{eq:HTransmonDispersiveSW} needs to take these quantities into account. As discussed in~\cref{sec:SW:transmon}, we find that for the above second-order perturbative results to be a good approximation, the oscillator photon number $\bar n$ should be much smaller than a critical photon number $n_\text{crit}$
\begin{equation}\label{eq:ncrit} \bar n \ll n_\text{crit} \equiv \frac{1}{2j + 1} \left(\frac{|\Delta-j E_C/\hbar|^2}{4 g^2} - j\right), \end{equation}
where $j=0,1,\dots$ refers to the qubit state as before.
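These expressions are simple to evaluate. The sketch below (Python, with all numbers being illustrative choices rather than values taken from the text) computes the dispersive shift, the dressed frequencies of \cref{eq:HQubitDispersiveParametersSW}, and the critical photon number of \cref{eq:ncrit} for the qubit states $j=0$ and $j=1$.
\begin{verbatim}
import numpy as np

# Dispersive parameters for illustrative transmon/resonator values.
# All frequencies below are omega/2pi, in GHz; the formulas are homogeneous,
# so the 2pi factors drop out.
fr = 7.0            # bare resonator frequency
fq = 6.0            # bare qubit (0-1) frequency
EC = 0.25           # charging energy E_C / h
g = 0.1             # coupling g / 2pi

Delta = fq - fr     # qubit-resonator detuning

chi = -g**2 * EC / (Delta * (Delta - EC))
fr_dressed = fr - g**2 / (Delta - EC)
fq_dressed = fq + g**2 / Delta

# Critical photon number for qubit states j = 0 and j = 1.
n_crit = [((abs(Delta - j * EC)**2 / (4 * g**2)) - j) / (2 * j + 1) for j in (0, 1)]

print(f"chi / 2pi       = {chi * 1e3:.2f} MHz")
print(f"dressed f_r     = {fr_dressed:.4f} GHz (bare {fr} GHz)")
print(f"dressed f_q     = {fq_dressed:.4f} GHz (bare {fq} GHz)")
print(f"n_crit (j=0, 1) = {n_crit[0]:.1f}, {n_crit[1]:.1f}")
\end{verbatim}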
For $j=0$, this yields the familiar value $n_\text{crit}=(\Delta/2g)^2$ for the critical photon number expected from the Jaynes-Cummings model~\cite{Blais2004}, while setting $j=1$ gives a more conservative bound. In either case, this gives only a rough estimate for when to expect higher-order effects to become important.
\begin{figure} \caption{Energy spectrum of the uncoupled (gray lines) and dressed states in the dispersive regime (blue lines). The two lowest transmon states are labelled $\{\ket{g},\ket{e}\}$.\label{fig:DispersiveSpectrum}} \end{figure}
It is worth contrasting~\cref{eq:HQubitDispersiveParametersSW} to the results expected from performing a dispersive approximation to the Jaynes-Cummings model~\cref{eq:HJC}, which leads to $\chi=g^2/\Delta$ (see~\cref{sec:AppendixDispersiveTLS}, \textcite{Blais2004,Boissonneault2010}). This agrees with the above result in the limit of very large $E_C$ but, since $E_C/\hbar$ is typically rather small compared to $\Delta$ in most transmon experiments, the two-level-system Jaynes-Cummings model gives a poor prediction for the dispersive shift $\chi$ in practice. The intuition here is that $E_C$ determines the anharmonicity of the transmon. Two coupled harmonic oscillators can shift each other's frequencies, but only in a state-independent manner. Thus the dispersive shift must vanish in the limit of $E_C$ going to zero.
\subsubsection{\label{sec:dispersive:bb}Bogoliubov approach}
We now present an approach to arrive at~\cref{eq:HQubitDispersive} that can be simpler than performing a Schrieffer-Wolff transformation and which is often used in the circuit QED literature. To proceed, it is convenient to write~\cref{eq:HTransmonJC} as the sum of a linear and a nonlinear part, $\hat H = \hat H_\mathrm{L} + \hat H_\mathrm{NL}$, where
\begin{align} \hat H_\mathrm{L} ={}& \hbar\omega_r \hat a^\dag\hat a + \hbar\omega_q \hat b^\dag\hat b + \hbar g (\hat b^\dag\hat a+\hat b\hat a^\dag),\label{eq:HTransmonDispersiveL}\\ \hat H_\mathrm{NL} ={}& -\frac{E_C}{2}\hat b^\dag\hat b^\dag\hat b\hat b.\label{eq:HTransmonDispersiveNL} \end{align}
The linear part $\hat H_\mathrm{L}$ can be diagonalized exactly using the Bogoliubov transformation
\begin{equation}\label{eq:UDispersive} \hat U_\mathrm{disp} = \exp\left[\Lambda (\hat a^\dagger \hat b - \hat a \hat b^\dagger) \right]. \end{equation}
Under this unitary, the annihilation operators transform as $\hat U_\mathrm{disp}^\dagger \hat a \hat U_\mathrm{disp} = \cos(\Lambda)\hat a + \sin(\Lambda)\hat b$ and $\hat U_\mathrm{disp}^\dagger \hat b \hat U_\mathrm{disp} = \cos(\Lambda)\hat b - \sin(\Lambda)\hat a$. With the choice $\Lambda = \half \arctan(2\lambda)$, this results in the diagonal form
\begin{equation}\label{eq:HDiagonalLinear} \begin{aligned} \hat U_\mathrm{disp}^\dagger \hat H_\mathrm{L} \hat U_\mathrm{disp} = \hbar\tilde\omega_r \hat a^\dag \hat a + \hbar\tilde\omega_q \hat b^\dagger \hat b, \end{aligned} \end{equation}
with the dressed frequencies
\begin{subequations}\label{eq:DressedFrequenciesDispersive} \begin{align} \tilde\omega_r ={}& \half\left(\omega_r + \omega_q - \sqrt{\Delta^2+4g^2}\right),\\ \tilde\omega_q ={}& \half\left(\omega_r + \omega_q + \sqrt{\Delta^2+4g^2}\right).
\end{align} \end{subequations}
Applying the same transformation to $\hat H_\mathrm{NL}$ and, in the dispersive regime, expanding the result in orders of $\lambda$ leads to the dispersive Hamiltonian (see \cref{sec:AppendixDispersiveTransformation})
\begin{equation}\label{eq:HTransmonDispersive} \begin{aligned} \hat H_\text{disp} ={}& \hat U_\mathrm{disp}^\dagger \hat H \hat U_\mathrm{disp} \\ \simeq{}& \hbar\tilde\omega_r\hat a^\dag\hat a + \hbar\tilde\omega_q \hat b^\dag\hat b \\ +&\frac{\hbar K_a}{2} \hat a^\dagger \hat a^\dagger \hat a \hat a + \frac{\hbar K_b}{2} \hat b^\dag\hat b^\dag\hat b\hat b+ \hbar \chi_{ab} \hat a^\dag\hat a \hat b^\dag\hat b, \end{aligned} \end{equation}
where we have introduced
\begin{equation}\label{eq:HTransmonDispersiveParameters} \begin{aligned} K_a &\simeq -\frac{E_C}{2\hbar }\left(\frac{g}{\Delta}\right)^4,\; K_b \simeq -E_C/\hbar,\\ \chi_{ab} &\simeq -2 \frac{g^2 E_C/\hbar}{\Delta(\Delta-E_C/\hbar)}. \end{aligned} \end{equation}
The first two of these quantities are self-Kerr nonlinearities, while the third is a cross-Kerr interaction. All are negative in the dispersive regime. As discussed in \cref{sec:AppendixDispersiveTransformation}, the above expression for $\chi_{ab}$ is obtained after performing a Schrieffer-Wolff transformation to eliminate a term of the form $\hat b^\dagger \hat b\,\hat a^\dagger \hat b + \text{H.c.}$ that results from applying $\hat U_\mathrm{disp}$ on $\hat H_\mathrm{NL}$. Higher-order terms in $\lambda$ and other terms rotating at frequency $\Delta$ or faster have been dropped to arrive at \cref{eq:HTransmonDispersive}. These terms are given in~\cref{eq:app:dispersive:H_NL_lambda}. Truncating \eq{eq:HTransmonDispersive} to the first two levels of the transmon correctly leads to~\cref{eq:HQubitDispersive,eq:HQubitDispersiveParametersSW}. Importantly, these expressions are not valid if the excitation number of the resonator or the transmon is too large or if $|\Delta| \sim E_C/\hbar$. The regime $0 < \Delta < E_C/\hbar$, known as the straddling regime, is qualitatively different from the usual dispersive regime. It is characterized by positive self-Kerr and cross-Kerr nonlinearities, $K_a, \chi_{ab} > 0$, and is better addressed by exact numerical diagonalization of~\cref{eq:HTransmonResonator}~\cite{Koch2007}.
A remarkable feature of circuit QED is the large nonlinearities that are achievable in the dispersive regime. Dispersive shifts larger than the resonator or qubit linewidth, $\chi > \kappa,\, \gamma$, are readily realized in experiments, a regime referred to as strong dispersive coupling~\cite{Gambetta2006,Schuster2007a}. Some of the consequences of this regime are discussed in Sec.~\ref{sec:DispersiveConsequences}. It is also possible to achieve large self-Kerr nonlinearities for the resonator, $K_a > \kappa$.\footnote{Of course, the transmon is itself an oscillator with a very large self-Kerr given by $\hbar K_b = -E_C$.} These nonlinearities can be enhanced by embedding Josephson junctions in the center conductor of the resonator~\cite{Bourassa2012,Ong2013}, an approach which is used for example in quantum-limited parametric amplifiers~\cite{Castellanos2008} or for the preparation of quantum states of the microwave electromagnetic field~\cite{Kirchmair2013,Holland2015,Puri2017}.
\subsection{\label{sec:bbq}Josephson junctions embedded in multimode electromagnetic environments}
So far, we have focussed on the capacitive coupling of a transmon to a single mode of an oscillator.
For many situations of experimental relevance it is, however, necessary to consider the transmon, or even multiple transmons, embedded in an electromagnetic environment with a possibly complex geometry, such as a 3D cavity.
Consider the situation depicted in~\figref{fig:bbq}(a) where a capacitively shunted Josephson junction is embedded in some electromagnetic environment represented by the impedance $Z(\omega)$. To keep the discussion simple, we consider here a single junction but the procedure can easily be extended to multiple junctions. As discussed in \secref{sec:transmon}, the Hamiltonian of the shunted junction \cref{eq:HsjLinNonLin} can be decomposed into a linear part, with capacitance $C_\Sigma = C_S + C_J$ and linear inductance $L_J = E_J^{-1}(\Phi_0/2\pi)^2$, and a purely nonlinear element. This decomposition is illustrated in~\figref{fig:bbq}(b), where the spider symbol represents the nonlinear element~\cite{Manucharyan2007,Bourassa2012}.
\begin{figure} \caption{(a) Transmon qubit coupled to an arbitrary impedance, such as that realized by a 3D microwave cavity. (b) The Josephson junction can be represented as a capacitive element $C_J$ and a linear inductive element $L_J$ in parallel with a purely nonlinear element indicated here by the spiderlike symbol. Here, $C_\Sigma = C_S+C_J$ is the parallel combination of the Josephson capacitance and the shunting capacitance of the transmon. (c) Normal mode decomposition of the parallel combination of the impedance $Z(\omega)$ together with $L_J$ and $C_\Sigma$ represented by effective $LC$ circuits.\label{fig:bbq}} \end{figure}
We assume that the electromagnetic environment is linear, nonmagnetic and has no free charges or currents. Since $C_\Sigma$ and $L_J$ are themselves linear elements, we might as well consider them part of the electromagnetic environment too, something that is illustrated by the box in \cref{fig:bbq}(b). Combining all linear contributions, we write a Hamiltonian for the entire system, junction plus the surrounding electromagnetic environment, as $\hat H = \hat H_\mathrm{L} + \hat H_\mathrm{NL}$ with
\begin{equation} \hat H_\mathrm{NL} = -E_J\left(\cos\hat\varphi + \half \hat\varphi^2\right) \end{equation}
the nonlinear part of the transmon Hamiltonian already introduced in \cref{eq:HsjLinNonLin}. A good strategy is to first diagonalize the linear part, $\hat H_\mathrm{L}$, which can in principle be done exactly, much as was done in \secref{sec:dispersive}. Subsequently, the phase difference $\hat\varphi$ across the junction can be expressed as a linear combination of the eigenmodes of $\hat H_\mathrm{L}$, a decomposition which is then used in $\hat H_\mathrm{NL}$.
A convenient choice of canonical fields for the electromagnetic environment is the electric displacement field $\hat{\*D}(\*x)$ and the magnetic field $\hat{\*B}(\*x)$, which can be expressed in terms of bosonic creation and annihilation operators~\cite{Bhat2006}
\begin{subequations}\label{eq:bbq:DB} \begin{align} \hat{\*D}(\*x) ={}& \sum_m \left[\*D_m(\*x)\hat a_m + \text{H.c.} \right], \label{eq:bbq:D}\\ \hat{\*B}(\*x) ={}& \sum_m \left[\*B_m(\*x)\hat a_m + \text{H.c.} \right], \label{eq:bbq:B} \end{align} \end{subequations}
where $[\hat a_m, \hat a_n^\dagger] = \delta_{mn}$. The more commonly used electric field is related to the displacement field through $\hat{\*D}(\*x) = \varepsilon_0\hat{\*E}(\*x) + \hat{\*P}(\*x)$, where $\hat{\*P}(\*x)$ is the polarization of the medium.
Moreover, the mode functions $\*D_m(\*x)$ and $\*B_m(\*x)$ can be chosen to satisfy orthogonality and normalization conditions such that
\begin{equation}\label{eq:bbq:H_L} \hat H_\mathrm{L} = \sum_m \hbar\omega_m \hat a_m^\dagger \hat a_m. \end{equation}
In \cref{eq:bbq:DB,eq:bbq:H_L}, we have implicitly assumed that the eigenmodes form a discrete set. If some part of the spectrum is continuous, which is the case for infinite systems such as open waveguides, the sums must be replaced by integrals over the relevant frequency ranges. The result is very general, holds for arbitrary geometries, and can include inhomogeneities such as partially reflecting mirrors, and materials with dispersion~\cite{Bhat2006}. We will, however, restrict ourselves to discrete spectra in the following.
Diagonalizing $\hat H_\mathrm{L}$ amounts to determining the mode functions $\{\*D_m(\*x), \*B_m(\*x)\}$, which is essentially a classical electromagnetism problem that can, e.g., be approached using numerical software such as finite element solvers~\cite{minev2019catching}. Assuming that the mode functions have been found, we now turn to $\hat H_\mathrm{NL}$ for which we relate $\hat \varphi$ to the bosonic operators $\hat a_m$. This can be done by noting again that $\hat\varphi(t) = 2\pi \int dt'\, \hat V(t')/\Phi_0$, where the voltage is simply the line integral of the electric field $\hat V(t) = \int d\*l \cdot \hat{\*E}(\*x) = \int d\*l \cdot \hat{\*D}(\*x)/\varepsilon$ across the junction~\cite{Vool2016}. Consequently, the phase variable can be expressed as
\begin{equation}\label{eq:bbq:phase} \hat \varphi = \sum_m \left[\varphi_m \hat a_m + \text{H.c.} \right], \end{equation}
where $\varphi_m = i(2\pi/\Phi_0) \int_{\*x_J'}^{\*x_J} d \*l \cdot \*D_m(\*x)/(\omega_m\varepsilon)$ is the dimensionless magnitude of the zero-point fluctuations of the $m$th mode as seen by the junction, with the integration limits $\*x_J'$ and $\*x_J$ defined as in \cref{fig:bbq}(a).
Using~\cref{eq:bbq:phase} in $\hat H_\mathrm{NL}$ we expand the cosine to fourth order in analogy with~\cref{eq:HsjPhi4}. This means that we are assuming that the capacitively shunted junction is well in the transmon regime, with a small anharmonicity relative to the Josephson energy. Focusing on the dispersive regime where all eigenfrequencies $\omega_m$ are sufficiently well separated, and neglecting fast-rotating terms in analogy with \secref{sec:dispersive}, leads to
\begin{equation}\label{eq:blackbox:HNL} \begin{aligned} \hat H_\mathrm{NL} \simeq{}& \sum_m \hbar \Delta_m \hat a_m^\dagger \hat a_m + \half \sum_{m}\hbar K_m (\hat a_m^\dagger)^2 \hat a_m^2 \\ &+ \sum_{m > n}\hbar \chi_{m,n}\hat a_m^\dagger \hat a_m \hat a_n^\dagger \hat a_n, \end{aligned} \end{equation}
where $\Delta_m = \half \sum_n \chi_{m,n}$, $K_m = \frac{\chi_{m,m}}{2}$ and
\begin{align} \hbar \chi_{m,n} = -E_J \varphi_m^2\varphi_n^2. \end{align}
It is also useful to introduce the energy participation ratio $p_m$, defined to be the fraction of the total inductive energy of mode $m$ that is stored in the junction, $p_m = (2E_J/\hbar\omega_m)\varphi_m^2$~\cite{minev2019catching}, such that we can write
\begin{equation} \chi_{m,n} = -\frac{\hbar\omega_m\omega_n}{4 E_J} p_m p_n. \end{equation}
As is clear from the above discussion, finding the nonlinear Hamiltonian can be reduced to finding the eigenmodes of the system and the zero-point fluctuations of each mode across the junction.
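Once the mode frequencies and zero-point phase fluctuations (or, equivalently, the participation ratios) are in hand, assembling the coefficients of the nonlinear Hamiltonian is a few lines of arithmetic. A minimal sketch is shown below; the mode data are made up for illustration and would in practice come from an eigenmode or impedance simulation.
\begin{verbatim}
import numpy as np

# Kerr and cross-Kerr coefficients from assumed mode data (illustrative only).
EJ = 20.0                                # E_J / h in GHz
f_m = np.array([4.5, 7.2, 11.0])         # mode frequencies omega_m / 2pi (GHz)
phi_m = np.array([0.30, 0.05, 0.02])     # zero-point phase fluctuations per mode

# chi_{m,n} / 2pi = -E_J phi_m^2 phi_n^2 (in GHz)
chi = -EJ * np.outer(phi_m**2, phi_m**2)

K = np.diag(chi) / 2.0                   # self-Kerr K_m of each mode
Delta = 0.5 * chi.sum(axis=1)            # shifts Delta_m
p = 2.0 * EJ * phi_m**2 / f_m            # energy participation ratios p_m

print("self-Kerr K_m / 2pi (MHz):    ", np.round(K * 1e3, 2))
print("cross-Kerr chi_01 / 2pi (MHz):", round(chi[0, 1] * 1e3, 3))
print("participation ratios p_m:     ", np.round(p, 3))
\end{verbatim}
With these made-up numbers, the strongly participating mode plays the role of the transmon-like mode (large self-Kerr), while the weakly participating modes acquire MHz-scale cross-Kerr (dispersive) shifts.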
Of course, finding the mode frequencies $\omega_m$ and zero-point fluctuations $\varphi_m$, or alternatively the energy participation ratios $p_m$, can be complicated for a complex geometry. As already mentioned this is, however, an entirely classical electromagnetism problem~\cite{Bhat2006,minev2019catching}.
An alternative approach is to represent the linear electromagnetic environment seen by the purely nonlinear element as an impedance $Z(\omega)$, as illustrated in~\figref{fig:bbq}(c). Neglecting loss, any such impedance can be represented by an equivalent circuit of possibly infinitely many LC oscillators connected in series. The eigenfrequencies $\omega_m = 1/\sqrt{L_m C_m}$ can be determined from the real parts of the zeros of the admittance $Y(\omega) = Z^{-1}(\omega)$, and the effective impedance of the $m$th mode as seen by the junction can be found from $\mathcal{Z}^\mathrm{eff}_m = 2/[\omega_m \text{Im}\,Y'(\omega_m)]$~\cite{Nigg2012,Solgun2014}. The effective impedance is related to the zero-point fluctuations used above as $\mathcal{Z}^\mathrm{eff}_m = 2(\Phi_0/2\pi)^2\varphi_m^2/\hbar = R_K\varphi_m^2/(4\pi)$. From this point of view, the quantization procedure thus reduces to the task of determining the impedance $Z(\omega)$ as a function of frequency.
\subsection{Beyond the transmon: multilevel artificial atom}
In the preceding sections, we have relied on a perturbative expansion of the cosine potential of the transmon under the assumption $E_J/E_C \gg 1$. To go beyond this regime one can instead resort to exact diagonalization of the transmon Hamiltonian. Returning to the full transmon-resonator Hamiltonian~\cref{eq:HTransmonResonator}, we write \cite{Koch2007}
\begin{equation}\label{eq:HMultileveltResonator} \begin{aligned} \hat H ={}& 4E_C \hat n^2 - E_J \cos\hat \varphi + \hbar\omega_r \hat a^\dagger \hat a + 8E_C \hat n\hat n_r\\ ={}& \sum_j \hbar \omega_j \ket j \bra j + \hbar\omega_r \hat a^\dagger \hat a + i \sum_{ij} \hbar g_{ij} \ket i\bra j(\hat a^\dagger - \hat a), \end{aligned} \end{equation}
where $\ket j$ are now the eigenstates of the bare transmon Hamiltonian $\hat H_T = 4E_C \hat n^2 - E_J\cos\hat \varphi$ obtained from numerical diagonalization and we have defined
\begin{equation} \hbar g_{ij} = 2e\frac{C_g}{CC_\Sigma} Q_\mathrm{zp} \braket{i|\hat n|j}. \end{equation}
The eigenfrequencies $\omega_j$ and the matrix elements $\braket{i|\hat n|j}$ can be computed numerically in the charge basis. Alternatively, they can be determined by taking advantage of the fact that, in the phase basis, \eq{eq:Hsj} takes the form of a Mathieu equation whose exact solution is known~\cite{Cottet2002a,Koch2007}. The second form of~\cref{eq:HMultileveltResonator}, written in terms of energy eigenstates $\ket j$, is a very general Hamiltonian that can describe an arbitrary multilevel artificial atom capacitively coupled to a resonator. Similarly to the discussion of \cref{sec:dispersive}, in the dispersive regime where $|g_{ij}| \sqrt{n+1} \ll |\omega_i-\omega_j-\omega_r|$ for all relevant atomic transitions $i \leftrightarrow j$ and with $n$ the oscillator photon number, it is possible to use a Schrieffer-Wolff transformation to approximately diagonalize~\cref{eq:HMultileveltResonator}.
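As noted above, the bare transmon spectrum and the charge matrix elements are readily obtained numerically in the charge basis. The sketch below (plain \texttt{numpy}) illustrates this; the parameter values and the comparison against the transmon-limit estimates are purely illustrative.
\begin{verbatim}
import numpy as np

# Exact diagonalization of H_T = 4 E_C (n - n_g)^2 - E_J cos(phi) in the
# charge basis. Values are illustrative; energies are quoted as E/h in GHz.
EC, EJ, ng = 0.25, 12.5, 0.0
N = 30                                   # charge states n = -N ... N

n = np.arange(-N, N + 1)
H = np.diag(4.0 * EC * (n - ng)**2) \
    - 0.5 * EJ * (np.eye(2 * N + 1, k=1) + np.eye(2 * N + 1, k=-1))  # -E_J cos(phi)

evals, evecs = np.linalg.eigh(H)
evals -= evals[0]                        # measure energies from the ground state

f01, f12 = evals[1], evals[2] - evals[1]
n_op = np.diag(n.astype(float))
n_matrix = evecs.T @ n_op @ evecs        # charge matrix elements <i| n |j>

print(f"f_01 = {f01:.3f} GHz, anharmonicity f_12 - f_01 = {f12 - f01:.3f} GHz (~ -E_C)")
# Transmon-limit estimate of the 0-1 charge matrix element for comparison:
print(f"|<0|n|1>| = {abs(n_matrix[0, 1]):.3f}  "
      f"(compare (E_J/32E_C)^(1/4) = {(EJ / (32 * EC))**0.25:.3f})")
\end{verbatim}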
As discussed in~\cref{sec:SW:multilevel}, to second order one finds~\cite{Zhu2012}
\begin{equation} \begin{aligned} \hat H \simeq{}& \sum_j \hbar (\omega_j + \Lambda_j) \ket j \bra j + \hbar \omega_r \hat a^\dagger \hat a \\ &+ \sum_j \hbar \chi_j \hat a^\dagger \hat a \ket j \bra j, \end{aligned} \end{equation}
where
\begin{subequations} \begin{align} \Lambda_j ={}& \sum_i \frac{|g_{ij}|^2}{\omega_j-\omega_i-\omega_r},\\ \chi_j ={}& \sum_i \left(\frac{|g_{ij}|^2}{\omega_j-\omega_i-\omega_r} - \frac{|g_{ij}|^2}{\omega_i-\omega_j-\omega_r} \right). \end{align} \end{subequations}
This result is, as already stated, very general, and can be used with a variety of artificial atoms coupled to a resonator in the dispersive limit. Higher-order expressions can be found in~\cite{Boissonneault2010,Zhu2012}.
\subsection{Alternative coupling schemes}
Coupling the electric dipole moment of a qubit to the zero-point electric field of an oscillator is the most common approach to light-matter coupling in a circuit, but it is not the only possibility. Another approach is to take advantage of the mutual inductance between a flux qubit and the center conductor of a resonator to couple the qubit's magnetic dipole to the resonator's magnetic field. Stronger interaction can be obtained by galvanically connecting the flux qubit to the center conductor of a transmission-line resonator \cite{Bourassa2009}. In such a situation, the coupling can be engineered to be as large as, or even larger than, the system frequencies, allowing one to reach what is known as the ultrastrong coupling regime; see \cref{sec:ultrastrong}.
Yet another approach is to couple the qubit to the oscillator in such a way that the resonator field does not result in qubit transitions but only shifts the qubit's frequency. This is known as longitudinal coupling and is represented by the Hamiltonian \cite{Didier2015c,Kerman2013,Billangeon2015,Billangeon2015a,Richer2016,Richer2017a}
\begin{equation}\label{eq:H_Longitudinal} \hat H_\mathrm{z} = \hbar\omega_r \hat a^\dag\hat a + \frac{\hbar\omega_q}{2}\sz{} + \hbar g_z (\hat a^\dag+\hat a)\sz{}. \end{equation}
Because the light-matter interaction in $\hat H_\mathrm{z}$ is proportional to $\sz{}$ rather than $\sx{}$, the longitudinal interaction does not lead to dressing of the qubit by the resonator field of the form discussed in \cref{sec:JaynesCummings}. Some of the consequences of this observation, in particular for qubit readout, are discussed in \cref{sec:ReadoutOtherApproaches}.
\section{\label{sec:environment}Coupling to the outside world: The role of the environment}
So far we have dealt with isolated quantum systems. A complete description of quantum electrical circuits, however, must also take into account how these systems couple to their environment, including any measurement apparatus and control circuitry. In fact, the environment plays a dual role in quantum technology: Not only is a description of quantum systems as perfectly isolated unrealistic, as coupling to unwanted environmental degrees of freedom is unavoidable, but a perfectly isolated system would also not be very useful since we would have no means of controlling or observing it. For these reasons, in this section we consider quantum systems coupled to external transmission lines. We also introduce input-output theory, which is of central importance in understanding qubit readout, the subject of the next section.
\subsection{\label{sec:wiringup} Wiring up quantum systems with transmission lines}
We start the discussion by considering transmission lines coupled to individual quantum systems, which serve as a model for losses and can be used to apply and receive quantum signals for control and measurement. To be specific, we consider a semi-infinite coplanar waveguide transmission line capacitively coupled at one end to an oscillator; see \cref{fig:inout}. The semi-infinite transmission line can be considered as a limit of the coplanar waveguide resonator of finite length already discussed in~\secref{sec:telegrapher} where one of the boundaries is now pushed to infinity, $d\to\infty$. Increasing the length of the transmission line leads to a densely packed frequency spectrum, which in its infinite limit must be treated as a continuum. In analogy with \eq{eq:H_cavity}, the Hamiltonian of the transmission line is consequently
\begin{equation}\label{eq:Html} \hat H_\text{tml} = \int_0^\infty d\omega\, \hbar\omega\, \hat b_\omega^\dag\hat b_\omega, \end{equation}
where the mode operators now satisfy $[\hat b_\omega,\hat b_{\omega'}^\dagger]=\delta(\omega-\omega')$. Similarly, the position-dependent flux and charge operators of the transmission line are, in analogy with \cref{eq:ResonatorModeDecomp,eq:ResonatorModeFunc,eq:ResonatorFlux,eq:ResonatorCharge}, given in the continuum limit by~\cite{Yurke2004}
\begin{subequations} \begin{align} \hat\Phi_\text{tml}(x) ={}& \int_0^\infty d\omega\, \sqrt{\frac{\hbar}{\pi\omega cv}} \cos\left(\frac{\omega x}{v}\right)(\hat b_{\omega}^\dagger + \hat b_{\omega}), \label{eq:Phi_tml}\\ \hat Q_\text{tml}(x) ={}& i\int_0^\infty d\omega\, \sqrt{\frac{\hbar\omega c}{ \pi v}} \cos\left(\frac{\omega x}{v}\right) (\hat b_{\omega}^\dagger - \hat b_{\omega}).\label{eq:Q_tml} \end{align} \end{subequations}
These are the canonical fields of the transmission line and, in the Heisenberg picture under~\cref{eq:Html}, are related through $\hat Q_\text{tml}(x,t) = c\dot{\hat \Phi}_\text{tml}(x,t)$. In these expressions, $v=1/\sqrt{lc}$ is the speed of light in the transmission line, with $c$ and $l$ the capacitance and inductance per unit length, respectively.
\begin{figure} \caption{LC circuit capacitively coupled to a semi-infinite transmission line used to model both damping and driving of the system. Here, $\hat b_\mathrm{in}$ and $\hat b_\mathrm{out}$ denote the fields propagating towards and away from the oscillator along the line.\label{fig:inout}} \end{figure}
Considering capacitive coupling of the line to the oscillator at $x=0$, the total Hamiltonian takes the form
\begin{equation}\label{eq:HSBCavityNonMarkov} \hat H = \hat H_S + \hat H_\text{tml} - \hbar\int_{0}^\infty d\omega\, \lambda(\omega)(\hat b_{\omega}^\dagger-\hat b_{\omega})(\hat a^\dag-\hat a), \end{equation}
where $\hat H_S = \hbar\omega_r \hat a^\dagger \hat a$ is the oscillator Hamiltonian. Moreover, $\lambda(\omega) = (C_\kappa/\sqrt{c C_r})\,\sqrt{\omega_r \omega/2 \pi v}$ is the frequency-dependent coupling strength, with $C_\kappa$ the coupling capacitance and $C_r$ the resonator capacitance. These expressions neglect small renormalizations of the capacitances due to $C_\kappa$, as discussed in~\cref{sec:AppendixTRcoupling}. In the following, $\lambda(\omega)$ is assumed sufficiently small compared to $\omega_r$, such that the interaction can be treated as a perturbation. In this situation, the system's $Q$ factor is large and the oscillator only responds in a small bandwidth around $\omega_r$. It is therefore reasonable to take $\lambda(\omega) \simeq \lambda(\omega_r)$ in \cref{eq:HSBCavityNonMarkov}.
Dropping rapidly oscillating terms finally leads to~\cite{Gardiner1999}
\begin{equation}\label{eq:HSBCavityMarkov} \hat H \simeq \hat H_S + \hat H_\text{tml} + \hbar\int_{0}^\infty d\omega\, \lambda(\omega_r)(\hat a\hat b_{\omega}^\dagger+\hat a^\dag\hat b_{\omega}). \end{equation}
Under the well-established Born-Markov approximations, \eq{eq:HSBCavityMarkov} leads to a Lindblad-form Markovian master equation for the system's density matrix $\rho$~\cite{Gardiner1999,Carmichael2002,Breuer2002}
\begin{equation}\label{eq:ME_harmonic} \dot\rho = -i[\hat H_S,\rho] + \kappa (\bar n_\kappa + 1) \mathcal{D}[\hat a]\rho + \kappa \bar n_\kappa \mathcal{D}[\hat a^\dag]\rho, \end{equation}
where $\kappa = 2\pi \lambda(\omega_r)^2 = Z_\text{tml} \omega_r^2 C_\kappa^2 /C_\mathrm{r}$ is the photon decay rate, or linewidth, of the oscillator introduced earlier. Moreover, $\bar n_\kappa = \bar n_\kappa(\omega_r)$ is the number of thermal photons of the transmission line as given by the Bose-Einstein distribution, $\braket{\hat b_\omega^\dagger \hat b_{\omega'}} = \bar n_\kappa(\omega)\delta(\omega-\omega')$, at the system frequency $\omega_r$ and environment temperature $T$. The symbol $\mathcal{D}[\hat O]\bullet$ represents the dissipator
\begin{equation}\label{eq:dissipator} \mathcal{D}[\hat O]\bullet = \hat O \bullet \hat O^\dag - \frac{1}{2} \left\{\hat O^\dag \hat O, \bullet \right\}, \end{equation}
with $\{\cdot, \cdot\}$ the anticommutator. Focussing on the second term of \eq{eq:ME_harmonic}, the role of this superoperator can be understood intuitively by noting that the term $\hat O \rho \hat O^\dag$ with $\hat O = \hat a$ in \eq{eq:dissipator} acts on the Fock state $\ket n$ as $\hat a\ket n\bra n\hat a^\dag = n\ket{n-1}\bra{n-1}$. The second term of \eq{eq:ME_harmonic} therefore corresponds to photon loss at rate $\kappa$. Finite temperature stimulates photon emission, boosting the loss rate to $\kappa (\bar n_\kappa + 1)$. On the other hand, the last term of \eq{eq:ME_harmonic} corresponds to absorption of thermal photons by the system. Because $\hbar\omega_r\gg k_BT$ at dilution refrigerator temperatures, it is often assumed that $\bar n_\kappa\rightarrow 0$. Deviations from this expected behavior are, however, common in practice due to residual thermal radiation propagating along control lines connecting to room temperature equipment and to uncontrolled sources of heating. Approaches to mitigate this problem using absorptive components are being developed \cite{Corcoles2011,Wang2019}.
\subsection{\label{sec:inout}Input-output theory in electrical networks}
While the master equation describes the system's damped dynamics, it provides no information on the fields radiated by the system. Since radiated signals are what are measured experimentally, it is of practical importance to include them in our model. This is known as input-output theory, for which two standard approaches exist. The first is to work directly with~\cref{eq:HSBCavityMarkov} and consider Heisenberg picture equations of motion for the system and field annihilation operators $\hat a$ and $\hat b_\omega$. This is the route taken by Gardiner and Collett, which is widely used in the quantum optics literature~\cite{Collett1984,Gardiner1985}. An alternative approach is to introduce a decomposition of the transmission line modes in terms of left- and right-moving fields, linked by a boundary condition at the position of the oscillator, which we take to be $x=0$, with the transmission line at $x\ge0$~\cite{Yurke1984}.
The advantage of this approach is that the oscillator's input and output fields are then defined in terms of easily identifiable left- ($\hat b_{L\omega}$) and right-moving ($\hat b_{R\omega}$) radiation field components propagating along the transmission line. To achieve this, we replace the modes $\cos(\omega x/v) \hat b_\omega$ in Eqs.~(\ref{eq:Phi_tml}) and (\ref{eq:Q_tml}) by $(\hat b_{R\omega}e^{i \omega x /v} + \hat b_{L\omega}e^{-i \omega x /v} )/2$. Since the number of degrees of freedom of the transmission line has seemingly doubled, the modes $\hat b_{L/R\omega}$ cannot be independent. Indeed, the dynamics of one set of modes is fully determined by the other set through a boundary condition linking the left- and right-movers at $x=0$.
To see this, it is useful to first decompose the voltage $\hat V(x,t) = \dot{\hat \Phi}_\text{tml}(x,t)$ at $x=0$ into left-moving (input) and right-moving (output) contributions as $\hat V(t) = \hat V(x=0,t) = \hat V_\text{in}(t) + \hat V_\text{out}(t)$, where
\begin{equation} \begin{aligned} \hat V_\text{in/out}(t) ={} i\int_0^\infty d\omega\, \sqrt{\frac{\hbar\omega}{4 \pi cv}} e^{i\omega t} \hat b_{\text{L/R}\omega}^\dagger + \text{H.c.} \end{aligned} \end{equation}
The boundary condition at $x=0$ follows from Kirchhoff's current law
\begin{equation}\label{eq:voltage_inout} \hat I(t) = \frac{\hat V_\text{out}(t) - \hat V_\text{in}(t)}{Z_\text{tml}}, \end{equation}
where the left-hand side $\hat I(t) = (C_\kappa/C_r)\dot{\hat Q}_r(t)$ is the current injected by the sample, with $\hat Q_r$ the oscillator charge (see~\cref{app:inout} for a derivation), while the right-hand side is the transmission line voltage difference at $x=0$.\footnote{Note that if instead we had a boundary condition of zero current at $x=0$, it would follow that $\hat V_\text{in}(t) = \hat V_\text{out}(t)$, i.e.~the endpoint simply serves as a mirror reflecting the input signal.} A mode expansion of the operators involved in~\eq{eq:voltage_inout} leads to the standard input-output relation (see~\cref{app:inout} for details)
\begin{equation}\label{eq:inoutrel} \hat b_\mathrm{out}(t) - \hat b_\mathrm{in}(t) = \sqrt{\kappa} \hat a(t), \end{equation}
where the input and output fields are defined as
\begin{align} \hat b_\mathrm{in}(t) ={}& \frac{-i}{\sqrt{2\pi}} \int_{-\infty}^\infty d\omega\,\hat b_{L\omega}e^{-i(\omega-\omega_r) t},\label{eq:bin}\\ \hat b_\mathrm{out}(t) ={}& \frac{-i}{\sqrt{2\pi}} \int_{-\infty}^\infty d\omega\,\hat b_{R\omega}e^{-i(\omega-\omega_r) t}\label{eq:bout} \end{align}
and satisfy the commutation relations $[\hat b_\mathrm{in}(t),\hat b_\mathrm{in}^\dag(t')]=[\hat b_\mathrm{out}(t),\hat b_\mathrm{out}^\dag(t')]=\delta(t-t')$. To arrive at~\cref{eq:inoutrel}, terms rotating at $\omega + \omega_r$ have been dropped based on the already mentioned assumption that the system only responds to frequencies $\omega\simeq \omega_r$ such that these terms are fast rotating~\cite{Yurke2004}. In turn, this approximation allows us to extend the range of integration from $(0,\infty)$ to $(-\infty,\infty)$ in \cref{eq:bin,eq:bout}. We have also approximated $\lambda(\omega) \simeq \lambda(\omega_r)$ over the relevant frequency range. These approximations are compatible with those used to arrive at the Lindblad-form Markovian master equation of \cref{eq:ME_harmonic}.
The same expressions and approximations can be used to obtain the equation of motion for the resonator field $\hat a(t)$ in the Heisenberg picture, which takes the form (see~\cref{app:inout} for details)
\begin{equation}\label{eq:inouteom} \dot{\hat a}(t) = i[\hat H_S, \hat a(t)] - \frac{\kappa}{2}\hat a(t) + \sqrt{\kappa}\hat b_\mathrm{in}(t). \end{equation}
This expression shows that the resonator dynamics is determined by the input field (in practice, noise or drive), while~\eq{eq:inoutrel} shows how the output can, in turn, be found from the input and the system dynamics. The output field thus holds information about the system's response to the input, and it can be measured to give us indirect access to the system's dynamics. As will be discussed in more detail in \cref{sec:readout}, this can be done, for example, by measuring the voltage at some $x>0$ away from the oscillator. Under the approximations used above, this voltage can be expressed as
\begin{equation}\label{eq:TLvoltage} \begin{aligned} \hat V(x, t) \simeq{}& \sqrt{\frac{\hbar \omega_r Z_\text{tml}}{2}} \bigg[ e^{i\omega_r x/v - i\omega_r t} \hat b_\mathrm{out}(t) \\ &+ e^{-i \omega_r x/v - i\omega_r t} \hat b_\mathrm{in}(t) + \text{H.c.}\bigg]. \end{aligned} \end{equation}
Note that this approximate expression assumes that all relevant frequencies are near $\omega_r$ and furthermore neglects all non-Markovian time-delay effects.
In this section we have considered a particularly simple setup: a single quantum system connected to the endpoint of a semi-infinite transmission line. More generally, quantum systems can be made to interact by coupling them to a common transmission line, and multiple transmission lines can be used to form quantum networks. These more complex setups can be treated using the SLH formalism, which generalizes the results in this section~\cite{Gough2009,Combes2016a}.
\subsection{\label{sec:qubitnoise}Qubit relaxation and dephasing}
\begin{figure} \caption{\label{fig:qubitnoise}} \end{figure}
The master equation~\eq{eq:ME_harmonic} was derived for an oscillator coupled to a transmission line, but this form of the master equation is quite general. In fact,~\eq{eq:HSBCavityNonMarkov} is itself a very generic system-bath Hamiltonian that can be used to model dissipation due to a variety of different noise sources~\cite{Caldeira81}. To model damping of an arbitrary quantum system, for example a transmon qubit or a coupled resonator-transmon system, the operator $\hat a$ in~\eq{eq:HSBCavityNonMarkov} is simply replaced with the relevant system operator that couples to the transmission line (or, more generally, the bath). For the case of a transmon, see \cref{fig:qubitnoise}, $\hat H_S$ in~\cref{eq:ME_harmonic} is replaced with the Hamiltonian $\hat H_\mathrm{q}$ of~\cref{eq:HsjPhi4b} together with the additional replacements $\mathcal D[\hat a]\bullet \to \mathcal{D}[\hat b]\bullet$, $\mathcal D[\hat a^\dagger]\bullet \to \mathcal D[\hat b^\dagger]\bullet$, and $\kappa \rightarrow \gamma$. Here, $\gamma = 2\pi\lambda(\omega_{q})^2$ is the relaxation rate of the artificial atom, which is set by the qubit-environment coupling strength evaluated at the qubit frequency.
This immediately leads to the master equation
\begin{equation}\label{eq:MEQubitBs} \dot\rho = {-}i[\hat H_\mathrm{q},\rho] + \gamma (\bar n_{\gamma} + 1) \mathcal{D}[\hat b]\rho + \gamma \bar n_{\gamma} \mathcal{D}[\hat b^\dag]\rho, \end{equation}
where $\rho$ now refers to the transmon state and $\bar n_{\gamma}$ is the thermal photon number of the transmon's environment. It is often assumed that $\bar n_{\gamma} \rightarrow 0$ but, just like for the oscillator, a residual thermal population is often observed in practice~\cite{Corcoles2011,Wang2019}.
Superconducting quantum circuits can also suffer from dephasing caused, for example, by fluctuations of parameters controlling their transition frequency and by dispersive coupling to other degrees of freedom in their environment. For a transmon, a phenomenological model for dephasing can be introduced by adding the following term to the master equation~\cite{Carmichael2002}
\begin{equation}\label{eq:MEDephasingTransmon} 2\gamma_\varphi\mathcal{D}[\hat b^\dagger \hat b]\rho, \end{equation}
with $\gamma_\varphi$ the pure dephasing rate. Because of its insensitivity to charge noise (see \cref{fig:transmonregime}), $\gamma_\varphi$ is often very small for the 0-1 transition of transmon qubits~\cite{Koch2007}. Given that charge dispersion increases exponentially with level number, dephasing due to charge noise can, however, be apparent on higher transmon levels, see for example \textcite{Egger2019}. Another source of dephasing for the transmon is the residual thermal photon population of a resonator to which the transmon is dispersively coupled. This can be understood from the form of the interaction in the dispersive regime, $\chi_{ab} \hat a^\dag\hat a\, \hat b^\dag\hat b$, where fluctuations of the photon number lead to fluctuations in the qubit frequency and therefore to dephasing \cite{Bertet2005,Schuster2005,Gambetta2006,Rigetti2012}. We note that a term of the form of \cref{eq:MEDephasingTransmon} but with $\hat b^\dag\hat b$ replaced by $\hat a^\dag\hat a$ can be added to the master equation of the oscillator to model this aspect. Oscillator dephasing rates are, however, typically small and this contribution is often neglected \cite{Reagor2016}. Other sources of relaxation and dephasing include two-level systems within the materials and interfaces of the devices \cite{Mueller2019}, quasiparticles \cite{Glazman2020} generated by a number of phenomena including infrared radiation \cite{Corcoles2011,Barends2011} and even ionizing radiation \cite{Vepsalainen2020}.
Combining the above results, the master equation for a transmon subject to relaxation and dephasing assuming $\bar n_{\gamma}\to 0$ is
\begin{equation}\label{eq:ME_transmon} \begin{aligned} \dot\rho ={}& -i[\hat H_\mathrm{q},\rho] + \gamma \mathcal{D}[\hat b]\rho + 2\gamma_\varphi\mathcal D[\hat b^\dagger \hat b]\rho. \end{aligned} \end{equation}
It is common to express this master equation in the two-level approximation of the transmon, something that is obtained simply by taking $\hat H_\mathrm{q} \to \hbar\omega_a \sz{}/2$, $\hat b^\dagger \hat b \to \left(\sz{} + 1\right)/2$, $\hat b \to \smm{}$ and $\hat b^\dagger \to \spp{}$.
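In this two-level approximation the master equation can be integrated directly, which makes the relaxation and dephasing time scales explicit. The sketch below uses \texttt{numpy}/\texttt{scipy} with illustrative rates, works in the frame rotating at the qubit frequency so that the Hamiltonian term drops out, and builds the Liouvillian superoperator by applying the right-hand side to a basis of density matrices.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Two-level master equation drho/dt = gamma D[sm] rho + 2 gamma_phi D[sp sm] rho,
# integrated in the qubit rotating frame (H = 0). Rates are illustrative (1/us).
gamma, gamma_phi = 1.0 / 50.0, 1.0 / 200.0

sm = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma_- = |g><e|, basis (|g>, |e>)
sp = sm.conj().T

def D(L, rho):                                  # Lindblad dissipator D[L]rho
    return L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)

def rhs(rho):                                   # master-equation right-hand side
    return gamma * D(sm, rho) + 2 * gamma_phi * D(sp @ sm, rho)

# Build the 4x4 Liouvillian column by column from its action on basis matrices.
basis = [np.zeros((2, 2), dtype=complex) for _ in range(4)]
for k in range(4):
    basis[k].flat[k] = 1.0
L_super = np.column_stack([rhs(b).ravel() for b in basis])

rho0 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # (|g>+|e>)/sqrt(2)
for t in (0.0, 25.0, 50.0, 100.0):
    rho_t = (expm(L_super * t) @ rho0.ravel()).reshape(2, 2)
    print(f"t = {t:5.1f} us:  P_e = {rho_t[1, 1].real:.3f} "
          f"(exp(-gamma t)/2 = {0.5 * np.exp(-gamma * t):.3f}),  "
          f"|rho_ge| = {abs(rho_t[0, 1]):.3f} "
          f"(exp(-(gamma/2+gamma_phi) t)/2 = "
          f"{0.5 * np.exp(-(gamma / 2 + gamma_phi) * t):.3f})")
\end{verbatim}
The populations decay at the rate $\gamma$ while the coherences decay at $\gamma/2 + \gamma_\varphi$, which is precisely the structure behind the $T_1$ and $T_2$ times discussed next.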
Note that the rates $\gamma$ and $\gamma_\varphi$ appearing in the above expressions are related to the characteristic $T_1$ relaxation time and $T_2$ coherence time of the artificial atom which are defined as~\cite{schoelkopf:2003a}
\begin{subequations} \begin{align}\label{eq:T1} T_1 ={}& \frac{1}{\gamma_1} = \frac{1}{\gamma_\downarrow + \gamma_\uparrow} \simeq \frac{1}{\gamma},\\ T_2 ={}& \frac{1}{\gamma_2} = \left( \frac{\gamma_1}{2} + \gamma_\varphi \right)^{-1}, \label{eq:T2_definition} \end{align} \end{subequations}
where $\gamma_\downarrow = (\bar n_\gamma + 1)\gamma$, $\gamma_\uparrow = \bar n_\gamma \gamma$. The approximation in \cref{eq:T1} is for $\bar n_{\gamma}\rightarrow 0$. At zero temperature, $T_1$ is the characteristic time for the artificial atom to relax from its first excited state to the ground state. On the other hand, $T_2$ is the dephasing time, which quantifies the characteristic lifetime of coherent superpositions, and includes contributions from both pure dephasing ($\gamma_\varphi$) and relaxation ($\gamma_1$). Current best values for the $T_1$ and $T_2$ times of transmon qubits are in the 50 to 120~$\mu$s range for aluminum-based transmons \cite{Wei2019,Nersisyan2019,Devoret2013,Kjaergaard2019}. Relaxation times above 300~$\mu$s have been reported in transmon qubits where the transmon pads have been made with tantalum rather than aluminum, with the Josephson junction still made from aluminum and aluminum oxide \cite{Place2020}. Other superconducting qubits also show large relaxation and coherence times. Examples are $T_1,\,T_2\sim300~\mu$s for heavy-fluxonium qubits \cite{Zhang2020}, and $T_1 \sim 1.6$~ms and $T_2\sim25~\mu$s for the $0-\pi$ qubit \cite{Gyenis2019a}.
Qubit relaxation and incoherent excitation occur due to uncontrolled exchange of GHz frequency photons between the qubit and its environment. These processes are observed to be well described by the Markovian master equation of \cref{eq:ME_transmon}. In contrast, the dynamics leading to dephasing are typically non-Markovian, happening at low frequencies (i.e.~slow time scales set by the phase coherence time itself). As a result, these processes cannot very accurately be described by a Markovian master equation such as~\cref{eq:ME_transmon}. This equation thus represents a somewhat crude approximation to dephasing in superconducting qubits. That being said, in practice, the Markovian theory is still useful in particular because it correctly predicts the results of experiments probing the steady-state response of the system.
\subsection{Dissipation in the dispersive regime}\label{sec:DissipationDispersive}
\begin{figure} \caption{\label{fig:dispersivenoise}} \end{figure}
We now turn to a description of dissipation for the coupled transmon-resonator system of~\cref{sec:LightMatter}. Assuming that the transmon and the resonator are coupled to independent baths as illustrated in \cref{fig:dispersivenoise}, the master equation for this composite system is (taking $\bar n_{\kappa,\gamma}\rightarrow0$ for simplicity)
\begin{equation}\label{eq:JCMasterEquation} \begin{aligned} \dot\rho ={}& -i[\hat H,\rho] + \kappa \mathcal{D}[\hat a]\rho + \gamma \mathcal{D}[\hat b]\rho \\ &+ 2\gamma_\varphi\mathcal D[\hat b^\dagger \hat b]\rho, \end{aligned} \end{equation}
where $\rho$ is now a density matrix for the total system, and $\hat H$ describes the coupled system as in~\cref{eq:HTransmonJC}. Importantly, the above expression is only valid when $g$ is small compared to $\omega_r$ and $\omega_{q}$.
This is because energy decay occurs via transitions between system eigenstates while the above expression describes transitions between the uncoupled bare states. A derivation of the master equation valid at arbitrary $g$ can be found, for example, in \textcite{Beaudoin2011}. More important to the present discussion is the fact that, at first glance, \cref{eq:JCMasterEquation} gives the impression that dissipative processes influence the transmon and the resonator in completely independent manners. However, because $\hat H$ entangles the two systems, the loss, for example, of a resonator photon can lead to qubit relaxation. Moving to the dispersive regime, a more complete picture of dissipation emerges after applying the unitary transformation \eq{eq:UDispersive} not only on the Hamiltonian but also on the above master equation. Neglecting fast rotating terms and considering corrections to second order in $\lambda$ (which is consistent if $\kappa,\,\gamma,\,\gamma_\varphi = \mathcal{O}(E_C g^2/\Delta^2)$), leads to the dispersive master equation~\cite{Boissonneault2009} \begin{equation}\label{eq:DispersiveMasterEquation} \begin{aligned} \dot\rho_\mathrm{disp} ={}& -i[\hat H_\text{disp},\rho_\mathrm{disp}] \\ & + (\kappa+\kappa_\gamma) \mathcal{D}[\hat a]\rho_\mathrm{disp} + (\gamma+\gamma_\kappa) \mathcal{D}[\hat b]\rho_\mathrm{disp}\\ &+ 2\gamma_\varphi\mathcal D[\hat b^\dagger \hat b]\rho_\mathrm{disp}\\ &+ \gamma_\Delta \mathcal D[\hat a^\dagger \hat b]\rho_\mathrm{disp} + \gamma_\Delta \mathcal D[\hat b^\dagger \hat a]\rho_\mathrm{disp}, \end{aligned} \end{equation} where \begin{align}\label{eq:DispersiveRates} \gamma_\kappa ={}& \left(\frac{g}{\Delta}\right)^2\kappa,\;\; \kappa_\gamma = \left(\frac{g}{\Delta}\right)^2\gamma,\;\; \gamma_{\Delta} = 2\left(\frac{g}{\Delta}\right)^2\gamma_\varphi, \end{align} and $\rho_\mathrm{disp} = \hat U_\mathrm{disp}^\dag \rho \hat U_\mathrm{disp}$ is the density matrix in the dispersive frame. This expression has three new rates, the first of which is known as the Purcell decay rate $\gamma_\kappa$~\cite{Purcell1946}. This rate captures the fact that the qubit can relax by emission of a resonator photon. It can be understood simply following \eq{eq:JCEigenstates} from the form of the dressed eigenstate $\ket{\overline{e, 0}} \sim \ket{e,0} + (g/\Delta)\ket{g,1}$ which is closest to a qubit excitation $\ket{e}$. This state is the superposition of the qubit first excited state with no photon and, with probability $(g/\Delta)^2$, the qubit ground state with a photon in the resonator. The latter component can decay at the rate $\kappa$ taking the dressed excited qubit to the ground state $\ket{g, 0}$ with a rate $\gamma_\kappa$. A similar intuition also applies to $ \kappa_\gamma$, now associated with a resonator photon loss through a qubit decay event. The situation is more subtle for the last line of \eq{eq:DispersiveMasterEquation}. Following \textcite{Boissonneault2008,Boissonneault2009}, an effective master equation for the transmon only can be obtained from \eq{eq:DispersiveMasterEquation} by approximately eliminating the resonator degrees of freedom. This results in transmon relaxation and excitation rates given approximately by $\bar n \gamma_{\Delta}$, with $\bar n$ the average photon number in the resonator. 
Commonly known as dressed-dephasing, this leads to spurious transitions during qubit measurement and can be interpreted as originating from dephasing noise at the detuning frequency $\Delta$ that is up- or down-converted by readout photons to cause spurious qubit state transitions. Because we have taken the shortcut of applying the dispersive transformation on the master equation, the above discussion neglects the frequency dependence of the various decay rates. In a more careful derivation, the dispersive transformation is applied on the system plus bath Hamiltonian, and only then is the master equation derived~\cite{Boissonneault2009}. The result has the same form as \eq{eq:DispersiveMasterEquation}, but with different expressions for the rates. Indeed, it is useful to write $\kappa = \kappa(\omega_r)$ and $\gamma = \gamma(\omega_{q})$ to recognize that, while photon relaxation is probing the environment at the resonator frequency $\omega_r$, qubit relaxation is probing the environment at $\omega_{q}$. With this notation, the first two rates of \eq{eq:DispersiveRates} become in the more careful derivation $\gamma_\kappa = (g/\Delta)^2 \kappa(\omega_{q})$ and $\kappa_\gamma = (g/\Delta)^2 \gamma(\omega_r)$. In other words, Purcell decay occurs by emitting a photon at the qubit frequency and not at the resonator frequency as suggested by the completely white noise model used to derive \eq{eq:DispersiveRates}. In the same way, it is useful to write the dephasing rate as $\gamma_\varphi = \gamma_\varphi(\omega \to 0)$ to recognize the importance of low-frequency noise. Using this notation, the rates in the last two terms of \cref{eq:DispersiveMasterEquation} become, respectively, $\gamma_\Delta = 2(g/\Delta)^2\gamma_\varphi(\Delta)$ and $\gamma_{-\Delta} = 2(g/\Delta)^2\gamma_\varphi(-\Delta)$~\cite{Boissonneault2009}. In short, dressed dephasing probes the noise responsible for dephasing at the transmon-resonator detuning frequency $\Delta$. This observation was used to probe this noise at GHz frequencies by \textcite{Slichter2012}. It is important to note that the observations in this section result from the qubit-oscillator dressing that occurs under the Jaynes-Cummings Hamiltonian. For this reason, the situation is very different if the electric-dipole interaction leading to the Jaynes-Cummings Hamiltonian is replaced by a longitudinal interaction of the form of \cref{eq:H_Longitudinal}. In this case, there is no light-matter dressing and consequently no Purcell decay or dressed-dephasing \cite{Kerman2013,Billangeon2015}. This is one of the advantages of this alternative light-matter coupling. \subsection{Multi-mode Purcell effect and Purcell filters}\label{sec:Purcell} Up to now we have considered dissipation for a qubit dispersively coupled to a single-mode oscillator. Replacing the latter with a multi-mode resonator leads to dressing of the qubit by all of the resonator modes and therefore to a modification of the Purcell decay rate. Following the above discussion, one may then expect the contributions to add up, leading to the modified rate $\sum_{m=0}^\infty (g_m/\Delta_m)^2\kappa_m$, with $m$ the mode index. However, when accounting for the frequency dependence of $\kappa_m$, $g_m$ and $\Delta_m$, this expression diverges~\cite{Houck2008}. It is possible to cure this problem using a more refined model including the finite size of the transmon and the frequency dependence of the impedance of the resonator's input and output capacitors~\cite{Bourassa2012b,Malekakhlagh2017}. 
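To make the divergence of the naive estimate concrete, the following minimal numerical sketch (not taken from the references above) evaluates the multi-mode sum using illustrative scalings $g_m \propto \sqrt{m+1}$ and $\kappa_m \propto (m+1)^2$ for the higher modes of a $\lambda/2$ resonator; the specific parameter values are arbitrary assumptions of the sketch.
\begin{verbatim}
import numpy as np

# Naive multi-mode Purcell estimate, summing (g_m/Delta_m)^2 * kappa_m.
# Assumed illustrative scalings (not from the text):
#   mode frequencies w_m = (m+1)*w0, couplings g_m = g0*sqrt(m+1),
#   mode decay rates kappa_m = kappa0*(m+1)**2.
w0, wq = 2*np.pi*6e9, 2*np.pi*5e9        # fundamental mode and qubit frequencies
g0, kappa0 = 2*np.pi*100e6, 2*np.pi*1e6  # fundamental-mode coupling and linewidth

def naive_purcell(n_modes):
    m = np.arange(n_modes)
    wm, gm, km = (m + 1)*w0, g0*np.sqrt(m + 1), kappa0*(m + 1)**2
    return np.sum((gm/(wm - wq))**2 * km)

for n in (1, 10, 100, 1000):
    print(n, naive_purcell(n)/(2*np.pi*1e3), "kHz")
# The partial sums keep growing with the number of modes included,
# illustrating the divergence cured by the more refined models cited above.
\end{verbatim}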
Given that damping rates in quantum electrical circuits are set by classical system parameters~\cite{Leggett1984}, a simpler approach to compute the Purcell rate exists. It can indeed be shown that $\gamma_\kappa = \mathrm{Re}[Y(\omega_{q})]/C_\Sigma$, with $Y(\omega) = 1/Z(\omega)$ the admittance of the electromagnetic environment seen by the transmon~\cite{Esteve1986,Houck2008}. This expression again makes it clear that relaxation probes the environment (here represented by the admittance) at the system frequency. It also suggests that engineering the admittance $Y(\omega)$ such that it is purely reactive at $\omega_{q}$ can cancel Purcell decay (see the inset of \cref{fig:dispersivenoise}). This can be done, for example, by adding a transmission-line stub of appropriate length, terminated in an open circuit, at the output of the resonator, something which is known as a Purcell filter~\cite{Reed2010}. Because of the increased freedom in optimizing the system parameters (essentially decoupling the choice of $\kappa$ from the qubit relaxation rate), various types of Purcell filters are commonly used experimentally~\cite{Jeffrey2014,Bronn2015b,Walter2017}. \subsection{\label{sec:drives}Controlling quantum systems with microwave drives} While connecting a quantum system to external transmission lines leads to losses, such connections are nevertheless necessary to control and measure the system. Consider a continuous microwave tone of frequency $\omega_\mathrm{d}$ and phase $\phi_\mathrm{d}$ applied to the input port of the resonator. A simple approach to model this drive is based on the input-output approach of \cref{sec:inout}. Indeed, the drive can be taken into account by replacing the input field $\hat b_\mathrm{in}(t)$ in~\cref{eq:inouteom} with $\hat b_\mathrm{in}(t) \to \hat b_\mathrm{in}(t) + \beta(t)$, where $\beta(t) = A(t) \exp(-i\omega_\mathrm{d} t -i\phi_\mathrm{d})$ is the coherent classical part of the input field of amplitude $A(t)$. The resulting term $\sqrt{\kappa}\beta(t)$ in the Langevin equation can be absorbed in the system Hamiltonian with the replacement $\hat H_S \to \hat H_S + \hat H_\text{d}$ where \begin{equation}\label{eq:DriveOnCavity} \hat H_\text{d} = \hbar \left[\varepsilon(t) \hat a^\dag e^{-i\omega_\mathrm{d} t - i \phi_\mathrm{d}} + \varepsilon^*(t)\hat a e^{i\omega_\mathrm{d} t + i \phi_\mathrm{d}}\right], \end{equation} with $\varepsilon(t) = i\sqrt{\kappa}A(t)$ the possibly time-dependent amplitude of the drive as seen by the resonator mode. Generalizing to multiple drives on the resonator and/or drives on the transmon is straightforward. Moreover, the Hamiltonian $\hat H_\text{d}$ is the generator of displacement in phase space of the resonator. As a result, by choosing appropriate parameters for the drive, evolution under $\hat H_\text{d}$ will bring the intra-resonator state from vacuum to an arbitrary coherent state~\cite{Gardiner1999,Carmichael2002} \begin{equation}\label{eq:CoherentState} \ket\alpha = \hat D(\alpha)\ket0 = e^{-|\alpha|^2/2}\sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}}\ket{n}, \end{equation} where $\hat D(\alpha)$ is known as the displacement operator and takes the form \begin{equation}\label{eq:DisplacementOp} \hat D(\alpha) = e^{\alpha \hat a^\dag - \alpha^* \hat a}. \end{equation} As discussed in the next section, coherent states play an important role in qubit readout in circuit QED. 
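As a quick numerical illustration of \cref{eq:CoherentState,eq:DisplacementOp}, the following sketch builds the displacement operator in a truncated Fock basis and checks that it maps the vacuum onto a coherent state; the truncation and the value of $\alpha$ are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Displacement operator in a truncated Fock basis acting on vacuum,
# compared with the coherent-state amplitudes of Eq. (CoherentState).
N, alpha = 40, 1.5                                  # illustrative choices
a = np.diag(np.sqrt(np.arange(1, N)), k=1)          # annihilation operator
D = expm(alpha*a.conj().T - np.conj(alpha)*a)       # D(alpha), Eq. (DisplacementOp)

vac = np.zeros(N); vac[0] = 1.0
psi = D @ vac                                       # D(alpha)|0>

fact = np.concatenate(([1.0], np.cumprod(np.arange(1.0, N))))
target = np.exp(-abs(alpha)**2/2) * alpha**np.arange(N) / np.sqrt(fact)
print(np.max(np.abs(psi - target)))                 # small: matches the Fock expansion
print(np.vdot(psi, a @ psi).real)                   # <a> is close to alpha
\end{verbatim}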
As can be understood from \cref{eq:Hsj}, before this approximation, the drive rather takes the form $i\hbar \varepsilon(t) \cos(\omega_\mathrm{d} t + \phi_\mathrm{d})(\hat a^\dag - \hat a)$. Although $\hat H_\text{d}$ is sufficient in most cases of practical interest, departures from the predictions of \cref{eq:DriveOnCavity} can been seen at large drive amplitudes \cite{Pietikainen2017,Verney2019}. \section{\label{sec:readout}Measurements in circuit QED} Before the development of circuit QED, the quantum state of superconducting qubits was measured by fabricating and operating a measurement device, such as a single-electron transistor, in close proximity to the qubit~\cite{Makhlin2001}. A challenge with such an approach is that the readout circuitry must be strongly coupled to the qubit during measurement so as to extract information on a time scale much smaller than $T_1$, while being well decoupled from the qubit when the measurement is turned off to avoid unwanted back-action. Especially given that measurement necessarily involves dissipation~\cite{Landauer1991}, simultaneously satisfying these two requirements is challenging. Circuit QED, however, has several advantages to offer over the previous approaches. Indeed, as discussed in further detail in this section, qubit readout in this architecture is realized by measuring scattering of a probe tone off an oscillator coupled to the qubit. This approach first leads to an excellent measurement on/off ratio since qubit readout only occurs in the presence of the probe tone. A second advantage is that the necessary dissipation now occurs away from the qubit, essentially at a voltage meter located at room temperature, rather than in a device fabricated in close proximity to the qubit. Unwanted energy exchange is moreover inhibited when working in the dispersive regime where the effective qubit-resonator interaction \eq{eq:HQubitDispersive} is such that even the probe-tone photons are not absorbed by the qubit. As a result, the backaction on the qubit is to a large extent limited to the essential dephasing that quantum measurements must impart on the measured system leading, in principle, to a quantum non-demolition (QND) qubit readout. Because of the small energy of microwave photons with respect to optical photons, single-photon detectors in the microwave frequency regime are still being developed, see \cref{sec:SinglePhotonDetector}. Therefore, measurements in circuit QED rely on amplification of weak microwave signals followed by detection of field quadratures using heterodyne detection. Before discussing qubit readout, the objective of the next subsection is to explain these terms and go over the main challenges related to such measurements in the quantum regime. \subsection{Microwave field detection} \label{sec:FieldDetection} \begin{figure} \caption{\label{fig:MeasurementChain} \label{fig:MeasurementChain} \end{figure} \Cref{fig:MeasurementChain} illustrates a typical measurement chain in circuit QED. The signal of a microwave source is directed to the input port of the resonator first going through a series of attenuators thermally anchored at different stages of the dilution refrigerator. The role of these attenuators is to absorb the room-temperature thermal noise propagating towards the sample. The field transmitted by the resonator is first amplified, then mixed with a reference signal, converted from analog to digital, and finally processed with an FPGA or recorded. 
Circulators are inserted before the amplification stage to prevent noise generated by the amplifier from reaching the resonator. Circulators are directional devices that transmit signals in the forward direction while strongly attenuating signals propagating in the reverse direction (here coming from the amplifier)~\cite{Pozar2011}. In practice, circulators are bulky off-chip devices relying on permanent magnets that are not compatible with the requirement for integration with superconducting quantum circuits. They also introduce additional losses, for example from insertion loss and off-chip cabling. Significant effort is currently being devoted to developing compact, on-chip, superconducting circuit-based circulators~\cite{Kamal2011,Chapman2017,Lecocq2017,Abdo2019}. In practice, the different components and cables of the measurement chain have a finite bandwidth which we will assume to be larger than the bandwidth of the signal of interest $\hat b_\mathrm{out}(t)$ at the output of the resonator. To account for the finite bandwidth of the measurement chain and to simplify the following discussion, it is useful to consider the filtered output field \begin{equation}\label{eq:bout_filtered} \begin{aligned} \hat a_f(t) ={}& (f \star \hat b_\mathrm{out})(t) \\ ={}& \int_{-\infty}^\infty d\tau f(t-\tau) \hat b_\mathrm{out}(\tau)\\ ={}& \int_{-\infty}^\infty d\tau f(t-\tau) \left[\sqrt{\kappa}\hat a(\tau) + \hat b_\mathrm{in}(\tau)\right], \end{aligned} \end{equation} which is linked to the intra-cavity field $\hat a$ via the input-output boundary condition \cref{eq:inoutrel} which we have used in the last line. In this expression, the filter function $f(t)$ is normalized to $\int_{-\infty}^\infty dt |f(t)|^2 = 1$ such that $[\hat a_f(t),\hat a_f^\dagger(t)]=1$. As will be discussed later in the context of qubit readout, in addition to representing the measurement bandwidth, filter functions are used to optimize the distinguishability between the qubit states. Ignoring the presence of the circulator and assuming that a phase-preserving amplifier (i.e.~an amplifier that amplifies both signal quadratures equally) is used, in the first stage of the measurement chain the signal is transformed according to~\cite{Caves1982,Clerk2010} \begin{equation}\label{eq:amplifier_inout} \hat a_\mathrm{amp} = \sqrt G \hat a_f + \sqrt{G-1} \hat h^\dag, \end{equation} where $G$ is the power gain and $\hat{h}^\dag$ accounts for noise added by the amplifier. The presence of this added noise is required for the amplified signal to obey the bosonic commutation relation, $[\hat a_\mathrm{amp},\hat a_\mathrm{amp}^\dag]=1$. Equivalently, the noise must be present because the two quadratures of the signal are canonically conjugate. Amplification of both quadratures without added noise would allow us to violate the Heisenberg uncertainty relation between the two quadratures. In a standard parametric amplifier, $\hat a_f$ in \cref{eq:amplifier_inout} represents the amplitude of the signal mode and $\hat h$ represents the amplitude of a second mode called the idler. The physical interpretation of \cref{eq:amplifier_inout} is that an ideal amplifier performs a Bogoliubov transformation on the signal and idler modes. The signal mode is amplified, but the requirement that the transformation be canonical implies that the (phase conjugated and amplified) quantum noise from the idler port must appear in the signal output port. 
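The noise implied by \cref{eq:amplifier_inout} can be checked with a short Monte Carlo sketch, using the standard fact that symmetrized moments of Gaussian states are reproduced by classical Gaussian sampling; vacuum signal and idler inputs and the chosen gain are assumptions of the sketch.
\begin{verbatim}
import numpy as np

# Monte Carlo check of Eq. (amplifier_inout) for vacuum signal and idler.
rng = np.random.default_rng(1)
n_samples, G = 200_000, 100.0

def vacuum(n):
    # each quadrature has variance 1/4, so the symmetrized <|a|^2> is 1/2
    return rng.normal(0.0, 0.5, n) + 1j*rng.normal(0.0, 0.5, n)

signal, idler = vacuum(n_samples), vacuum(n_samples)
a_amp = np.sqrt(G)*signal + np.sqrt(G - 1)*np.conj(idler)

print(np.mean(np.abs(a_amp)**2))           # close to G - 1/2
print(np.mean(np.abs(a_amp)**2)/G - 0.5)   # added noise referred to the input: ~1/2 photon
\end{verbatim}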
Ideally, the input to the idler is vacuum with $\av{\hat{h}^\dag\hat{h}} = 0$ and $\av{\hat{h}\hat{h}^\dag} = 1$, so the amplifier only adds quantum noise. Near quantum-limited amplifiers with $\sim 20$ dB power gain approaching this ideal behavior are now routinely used in circuit QED experiments. These Josephson junction-based devices, as well as the distinction between phase-preserving and phase-sensitive amplification, are discussed further in~\cref{sec:QO:AmplificationSqueezing}. To measure the very weak signals that are typical in circuit QED, the output of the first near-quantum limited amplifier is further amplified by a low-noise high-electron-mobility transistor (HEMT) amplifier. The latter acts on the signal again following \cref{eq:amplifier_inout}, now with a larger power gain $\sim 30-40$ dB but also a larger added noise photon number. The very best cryogenic HEMT amplifiers in the 4-8 GHz band have added-noise photon numbers as low as $\av{\hat{h}^\dag\hat{h}} \sim 5 - 10$. However, the effect of attenuation in the cabling between the previous element of the amplification chain (i.e.~a quantum-limited amplifier or the sample of interest itself) and the HEMT can degrade this figure significantly. A more complete understanding of the added noise in this situation can be derived from \cref{fig:QuantumEfficiency}(a). There, beam splitters of transmissivity $\eta_{1,2}$ model the attenuation leading to the two amplifiers of gain labelled $G_1$ and $G_2$. Taking into account vacuum noise $\hat v_{1,2}$ at the beam splitters, the input-output expression of this chain can be cast in the form of \cref{eq:amplifier_inout} with a total gain $G_T = \eta_1\eta_2G_1G_2$ and noise mode $\hat h_T^\dag$ corresponding to the total added noise number \begin{equation}\label{eq:NoiseNumber} \begin{split} N_T &= \frac{1}{G_T-1} \Big[ \eta_2(G_1-1)G_2(N_1+1) \\ &\quad\quad\quad\quad +(G_2-1)(N_2+1) \Big] -1\\ &\approx \frac{1}{\eta_1}\left[1+N_1+\frac{N_2}{\eta_2 G_1}\right]-1, \end{split} \end{equation} where $N_i = \av{\hat h^\dag_i \hat h_i}$ for $i=1$, 2, $T$. The last expression corresponds to the large gain limit. If the gain $G_1$ of the first amplifier is large, the noise of the chain is dominated by the noise $N_1$ of the first amplifier. This emphasizes the importance of using near quantum-limited amplifiers with low noise in the first stage of the chain. In the literature, the quantum efficiency $\eta = 1/(N_T + 1)$ is often used to characterize the measurement chain, with $\eta = 1$ in the ideal case $N_T = 0$. \begin{figure} \caption{\label{fig:QuantumEfficiency}} \end{figure} It is worthwhile to note that another definition of the quantum efficiency can often be found in the literature. This alternative definition is based on \cref{fig:QuantumEfficiency}(b) where a noisy amplifier of gain $G$ is replaced by a noiseless amplifier of gain $G/\bar\eta$ preceded by a fictitious beam splitter of transmissivity $\bar\eta$ adding vacuum noise to the amplifier's input \cite{Leonhardt1993}. The quantum efficiency corresponds, here, to the transmissivity $\bar\eta$ of the fictitious beam splitter. 
The input-output relation of the network of \cref{fig:QuantumEfficiency}(b) with its noiseless phase-preserving amplifier reads $\hat a_\mathrm{amp} = \sqrt{G/\bar\eta}(\sqrt{\bar\eta}\hat a_f+\sqrt{1-\bar\eta}\hat v)$, something which can be expressed as \begin{equation}\label{eq:amplifier_noise1} \av{|\hat a_\mathrm{amp}|^2} = \frac{G}{\bar\eta}\left[(1-\bar\eta)\frac{1}{2}+\bar\eta\av{|\hat a_f|^2}\right], \end{equation} with $\av{|\hat O|^2} = \av{\{\hat O^\dag,\hat O\}}/2$ the symmetrized fluctuations. The first term of the above expression corresponds to the noise added by the amplifier, here represented by vacuum noise added to the signal before amplification, while the second term corresponds to noise in the signal at the input of the amplifier. On the other hand, \Cref{eq:amplifier_inout} for a noisy amplifier can also be cast in the form of \cref{eq:amplifier_noise1} with \begin{equation}\label{eq:amplifier_noise2} \av{|\hat a_\mathrm{amp}|^2} = G(\mathcal{A}+\av{|\hat a_f|^2}), \end{equation} where we have introduced the added noise \begin{equation}\label{eq:AddedNoise} \mathcal{A} = \frac{(G-1)}{G}\left(\av{\hat h^\dag\hat h}+\frac12\right). \end{equation} In the limit of low amplifier noise $\av{\hat h^\dag\hat h} \rightarrow 0$ and large gain, the added noise is found to be bounded by $\mathcal{A} \ge (1-G^{-1})/2 \simeq 1/2$ corresponding to half a photon of noise \cite{Caves1982}. Using \cref{eq:amplifier_noise1,eq:amplifier_noise2}, the quantum efficiency of a phase-preserving amplifier can therefore be written as $\bar\eta = 1/(2\mathcal{A}+1) \le 1/2$ and is found to be bounded by 1/2 in the ideal case. Importantly, the concept of quantum efficiency is not limited to amplification, and can be applied to the whole measurement chain illustrated in \cref{fig:MeasurementChain}. Using \cref{eq:TLvoltage,eq:amplifier_inout}, the voltage after amplification can be expressed as \begin{equation}\label{eq:Vamp} \hat V_\mathrm{amp}(t) \simeq{} \sqrt{\frac{\hbar \omega_\mathrm{RF} Z_\text{tml}}{2}} \left[ e^{- i\omega_\mathrm{RF} t} \hat a_\mathrm{amp} + \text{H.c.}\right], \end{equation} where $\omega_\mathrm{RF}$ is the signal frequency. To simplify the expressions, we have dropped the phase associated with the finite cable length. We have also dropped the contribution from the input field $\hat b_\mathrm{in}(t)$ moving towards the amplifier in the opposite direction at this point, cf.~\cref{fig:MeasurementChain}, because this field is not amplified and therefore gives a very small contribution compared to the amplified output field. Recall, however, the contribution of this field to the filtered signal~\cref{eq:bout_filtered}. \begin{figure} \caption{Schematic representation of an IQ mixer.\label{fig:IQMixer}} \end{figure} Different strategies can be used to extract information from the amplified signal, and here we take the next stage of the chain to be an IQ-mixer. As schematically illustrated in \cref{fig:IQMixer}, in this microwave device the signal first encounters a power divider, illustrated here as a beam splitter to account for added noise due to internal modes, followed in each branch by mixers with local oscillators (LO) that are offset in phase by $\pi/2$. The LO consists of a reference signal of well-defined amplitude $A_\mathrm{LO}$, frequency $\omega_\mathrm{LO}$ and phase $\phi_\mathrm{LO}$: \begin{equation} V_\mathrm{LO}(t) = A_\mathrm{LO}\cos(\omega_\mathrm{LO} t - \phi_\mathrm{LO}). 
\end{equation} Mixers use nonlinearity to down-convert the input signal to a lower frequency referred to as the intermediate frequency (IF) signal. Describing first the signal as a classical voltage $V_\mathrm{RF}(t)=A_\mathrm{RF}\cos(\omega_\mathrm{RF} t+ \phi_\mathrm{RF})$, the output at one of these mixers is~\cite{Pozar2011} \begin{equation}\label{eq:Mixer} \begin{split} V_\mathrm{mixer}(t) &= K V_\mathrm{RF}(t) V_\mathrm{LO}(t) \\ &= \frac{1}{2}K A_\mathrm{LO}A_\mathrm{RF} \left\{ \cos[(\omega_\mathrm{LO}-\omega_\mathrm{RF})t-\phi_\mathrm{LO}] \right.\\&\left.\qquad +\cos[(\omega_\mathrm{LO}+\omega_\mathrm{RF})t-\phi_\mathrm{LO}] \right\}, \end{split} \end{equation} where $K$ accounts for voltage conversion losses. According to the above expression, mixing with the LO results in two sidebands at frequencies $\omega_\mathrm{LO}\pm\omega_\mathrm{RF}$. The high frequency component is filtered out with a low-pass filter (not shown) leaving only the lower sideband of frequency $\omega_\mathrm{IF} = \omega_\mathrm{LO}-\omega_\mathrm{RF}$. The choice $\omega_\mathrm{IF} \neq0$ is known as heterodyne detection. Taking the LO frequency such that $\omega_\mathrm{IF}$ is in the range of a few tens to a few hundred MHz, the signal can be digitized using an analog to digital converter (ADC) with a sampling rate chosen in accordance with the bandwidth requirements of the signal to be recorded. This bandwidth is set by the choice of IF frequency and by the signal bandwidth, typically a few MHz to a few tens of MHz when determined by the linewidth $\kappa/2\pi$ of the cavity chosen for the specific circuit QED application, such as qubit readout. The recorded signal can then be averaged, or analyzed in more complex ways, using real-time field-programmable gate array (FPGA) electronics or processed offline. A detailed discussion of digital signal processing in the context of circuit QED can be found in \textcite{Salathe2018}. Going back to a quantum mechanical description of the signal by combining \cref{eq:Vamp,eq:Mixer}, the IF signals at the $I$ and $Q$ ports of the IQ-mixer read \begin{subequations}\label{eq:V_IF} \begin{align} \begin{split} \hat V_\mathrm{I}(t) &= V_\mathrm{IF} \left[\hat{X}_f(t) \cos(\omega_\mathrm{IF} t) - \hat{P}_f(t) \sin(\omega_\mathrm{IF} t)\right]\\ &\quad+ \hat V_\mathrm{noise,I}(t), \end{split}\\ \begin{split} \hat V_\mathrm{Q}(t) &= -V_\mathrm{IF} \left[\hat{P}_f(t) \cos(\omega_\mathrm{IF} t) + \hat{X}_f(t) \sin(\omega_\mathrm{IF} t)\right]\\ &\quad+ \hat V_\mathrm{noise,Q}(t), \end{split} \end{align} \end{subequations} where we have taken $\phi_\mathrm{LO} = 0$ in the $I$ arm of the IQ-mixer, and $\phi_\mathrm{LO} = \pi/2$ in the $Q$ arm. We have defined $V_\mathrm{IF} = K A_\mathrm{LO} \sqrt{\kappa G Z_\text{tml} \hbar \omega_\mathrm{RF}/2}$, and $\hat V_\mathrm{noise,I/Q}$ as the contributions from the amplifier noise and any other added noise. We have also introduced the quadratures \begin{equation} \hat{X}_f=\frac{\hat a_f^\dag+\hat a_f}{2}, \qquad \hat{P}_f=\frac{i(\hat a_f^\dag-\hat a_f)}{2}, \end{equation} the dimensionless position and momentum operators of the simple harmonic oscillator, here defined such that $[\hat{X}_f,\hat{P}_f] = i/2$. Taken together, $\hat V_\mathrm{I}(t)$ and $\hat V_\mathrm{Q}(t)$ trace a circle in the $x_f-p_f$ plane and contain information about the two quadratures at all times. 
It is therefore possible to digitally transform the signals by going to a frame where they are stationary using the rotation matrix \begin{equation} R(t) = \begin{pmatrix} \cos(\omega_\mathrm{IF} t) & -\sin(\omega_\mathrm{IF} t)\\ \sin(\omega_\mathrm{IF} t) & \cos(\omega_\mathrm{IF} t) \end{pmatrix} \end{equation} to extract $\hat{X}_f(t)$ and $\hat{P}_f(t)$. We note that the case $\omega_\mathrm{IF} = 0$ is generally known as homodyne detection~\cite{Leonhardt1997,Gardiner1999,Wiseman2010,Pozar2011}. In this situation, the IF signal in one of the arms of the IQ-mixer is proportional to \begin{equation}\label{eq:X_homodyne} \begin{split} \hat{X}_{f,\phi_\mathrm{LO}} &= \frac{\hat a^\dag_f e^{i\phi_\mathrm{LO}} + \hat a_f e^{-i\phi_\mathrm{LO}}}{2}\\ &= \hat{X}_f\cos\phi_\mathrm{LO} + \hat{P}_f\sin\phi_\mathrm{LO}. \end{split} \end{equation} While this is in appearance simpler and therefore advantageous, this approach is susceptible to $1/f$ noise and drift because the homodyne signal is at DC. It is also worthwhile to note that homodyne detection as realized with the approach described here differs from optical homodyne detection which can be performed in a noiseless fashion (in the present case, noise is added at the very least by the phase-preserving amplifiers and the noise port of the IQ mixer). The reader is referred to \textcite{Schuster2005} and \textcite{Krantz2019} for more detailed discussions of the different field measurement techniques and their applications in the context of circuit QED. \subsection{Phase-space representations and their relation to field detection} \label{sec:PhaseSpace} In the context of field detection, it is particularly useful to represent the quantum state of the electromagnetic field using phase-space representations. There exist several such representations and here we focus on the Wigner function and the Husimi-$Q$ distribution~\cite{Carmichael2002,Haroche2006}. This discussion applies equally well to the intra-cavity field $\hat a$ and to the filtered output field $\hat a_f$. The Wigner function is a quasiprobability distribution given by the Fourier transform \begin{equation}\label{eq:Wigner} W_\rho(x,p) = \frac{1}{\pi^2}\iint_{-\infty}^\infty dx' dp' C_\rho(x',p') e^{2i (px' - xp')} \end{equation} of the characteristic function \begin{equation} C_\rho(x,p) = \mathrm{Tr}\left\{\rho \,e^{2i(p\hat{X}-x\hat{P})}\right\}. \end{equation} With $\rho$ the state of the electromagnetic field, $C_\rho(x,p)$ can be understood as the expectation value of the displacement operator \begin{equation} \hat D(\alpha) = e^{2i(p\hat{X}-x\hat{P})} = e^{\alpha \hat a^\dag - \alpha^* \hat a}, \end{equation} with $\alpha = x + i p$, see \cref{eq:DisplacementOp}. \begin{figure} \caption{ Pictorial phase-space distribution of a coherent state and its marginal along an axis $X_\phi$ rotated by $\phi$ from $X$. } \label{fig:PhaseSpaceMarginal} \end{figure} Coherent states, already introduced in \cref{eq:CoherentState}, have particularly simple Wigner functions. Indeed, as illustrated schematically in \cref{fig:PhaseSpaceMarginal}, the Wigner function $W_{\ket \beta}(\alpha)$ of the coherent state $\ket\beta$ is simply a Gaussian centered at $\beta$ in phase space: \begin{equation}\label{eq:WignerCoherentState} W_{\ket{\beta}}(\alpha) = \frac2\pi e^{-2|\alpha-\beta|^2}. 
\end{equation} The $1/e$ half-width $1/\sqrt{2}$ of this Gaussian, equivalently a standard deviation of $1/2$ along each quadrature, is a signature of quantum noise and implies that coherent states saturate the Heisenberg inequality, $\Delta X \Delta P = 1/4$, with $\Delta O^2 = \langle\hat O^2\rangle-\langle\hat O\rangle^2$. We note that, in contrast to \cref{eq:WignerCoherentState}, Wigner functions can take negative values for non-classical states of the field. In the context of dispersive qubit measurements, the Wigner function is particularly useful because it is related to the probability distribution for the outcome of measurements of the quadratures $\hat{X}$ and $\hat{P}$. Indeed, the marginals $P(x)$ and $P(p)$, obtained by integrating $W_\rho(x,p)$ along the orthogonal quadrature, are simply given by \begin{subequations} \label{eq:WignerMarginals} \begin{align} P(x) = \int_{-\infty}^\infty dp\, W_\rho(x,p) &= \me{x}{\rho}{x},\\ P(p) = \int_{-\infty}^\infty dx\, W_\rho(x,p) &= \me{p}{\rho}{p}, \end{align} \end{subequations} where $\ket x$ and $\ket p$ are the eigenstates of $\hat{X}$ and $\hat{P}$, respectively. This immediately implies that the probability distribution of the outcomes of an ideal homodyne measurement of the quadrature $\hat{X}_\phi$ is given by $P(x_\phi)$ obtained by integrating the Wigner function $W_\rho(\alpha)$ along the orthogonal quadrature $\hat{X}_{\phi+\pi/2}$. This is schematically illustrated for a coherent state in Fig.~\ref{fig:PhaseSpaceMarginal}. Another useful phase-space function is the Husimi-Q distribution which, for a state $\rho$, takes the simple form \begin{equation} Q_\rho(\alpha) = \frac1\pi \me{\alpha}{\rho}{\alpha}. \end{equation} This function represents the probability distribution of finding $\rho$ in the coherent state $\ket\alpha$ and, in contrast to $W_\rho(\alpha)$, it is therefore always positive. Since $Q_\rho(\alpha)$ and $W_\rho(\alpha)$ are both complete descriptions of the state $\rho$, it is not surprising that one can be expressed in terms of the other. For example, in terms of the Wigner function, the Q-function takes the form~\cite{Carmichael2002} \begin{equation} Q_\rho(\alpha) = \frac{2}{\pi}\int_{-\infty}^\infty d^2\beta\, W_\rho(\beta) e^{-2|\alpha-\beta|^2} = W_\rho(\alpha) * W_{\ket{0}}(\alpha). \end{equation} The Husimi-Q distribution $Q_\rho(\alpha)$ is thus obtained by convolution of the Wigner function with a Gaussian, and is therefore smoother than $W_\rho(\alpha)$. As made clear by the second equality, this Gaussian is in fact the Wigner function of the vacuum state, $W_{\ket{0}}(\alpha)$, obtained from \eq{eq:WignerCoherentState} with $\beta=0$. In other words, the Q-function for $\rho$ is obtained from the Wigner function of the same state after adding vacuum noise. As already illustrated in Fig.~\ref{fig:IQMixer}, heterodyne detection with an IQ mixer adds (ideally) vacuum noise to the signal before detection. This leads to the conclusion that the probability distributions for the simultaneous measurement of two orthogonal quadratures in heterodyne detection are given by the marginals of the Husimi-Q distribution rather than of the Wigner function~\cite{Caves2012a}. 
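The statements above about the marginals and about the Q-function as a Gaussian-smoothed Wigner function can be verified numerically; in the following sketch the coherent amplitude $\beta$ and the test point $\alpha_0$ are arbitrary choices.
\begin{verbatim}
import numpy as np

# Numerical check of the coherent-state Wigner function, its marginal,
# and the Q-function obtained by Gaussian smoothing.
beta, alpha0 = 1.0 + 0.5j, 0.5 - 0.3j
x = np.linspace(-4, 4, 401)
p = np.linspace(-4, 4, 401)
dx, dp = x[1] - x[0], p[1] - p[0]
X, P = np.meshgrid(x, p, indexing="ij")
A = X + 1j*P

W = (2/np.pi)*np.exp(-2*np.abs(A - beta)**2)     # Wigner function of |beta>

# Marginal P(x): a Gaussian centered at Re(beta) with standard deviation 1/2.
Px = W.sum(axis=1)*dp
print(np.sum(Px*dx))                                       # ~1.0 (normalization)
print(np.sum(x*Px*dx))                                     # ~Re(beta)
print(np.sqrt(np.sum((x - beta.real)**2*Px*dx)))           # ~0.5

# Q-function at alpha0: convolution of W with the vacuum Wigner function.
kernel = (2/np.pi)*np.exp(-2*np.abs(alpha0 - A)**2)
print(np.sum(W*kernel)*dx*dp)                              # numerical Q(alpha0)
print(np.exp(-np.abs(alpha0 - beta)**2)/np.pi)             # (1/pi)|<alpha0|beta>|^2
\end{verbatim}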
\subsection{Dispersive qubit readout}\label{sec:DispersiveQubitReadout} \subsubsection{Steady-state intra-cavity field} As discussed in Sec.~\ref{sec:dispersive}, in the dispersive regime the transmon-resonator Hamiltonian is well approximated by \begin{equation}\label{eq:HQubitDispersiveSimple} \hat H_\text{disp} \approx \hbar \left(\omega_r + \chi \sz{} \right) \hat a^\dagger \hat a + \frac{\hbar\omega_{q}}2 \hat \sigma_z. \end{equation} To simplify the discussion, here we have truncated the transmon Hamiltonian to its first two levels, absorbed Lamb shifts in the system frequencies, and neglected a transmon-induced nonlinearity of the cavity [$K_a$ appearing in~\cref{eq:HTransmonDispersive}]. As made clear by the first term of the above expression, in the dispersive regime, the resonator frequency becomes qubit-state dependent: If the qubit is in $\ket{g}$ then $\av{\sz{}}=-1$ and the resonator frequency is $\omega_r-\chi$. On the other hand, if the qubit is in $\ket e$, $\av{\sz{}}=1$ and $\omega_r$ is pulled to $\omega_r+\chi$. In this situation, driving the cavity results in a qubit-state dependent coherent state, $\ket{\alpha_{g,e}}$. Thus, if the qubit is initialized in the superposition $c_g\ket{g} + c_e \ket{e}$, the system evolves to an entangled qubit-resonator state of the form \begin{equation}\label{eq:entangledQubitPointer} c_g\ket{g, \alpha_g} + c_e \ket{e, \alpha_e}. \end{equation} To interpret this expression, let us recall the paradigm of the Stern-Gerlach experiment. There, an atom passes through a magnet and the field gradient applies a spin-dependent force to the atom that entangles the spin state of the atom with the momentum state of the atom (which in turn determines where the atom lands on the detector). The experiment is usually described as measuring the spin of the atom, but in fact it only measures the final position of that atom on the detector. However, since the spin and position are entangled, we can uniquely infer the spin from the position, provided there is no overlap in the final position distributions for the two spin states. In this case we have effectively performed a projective measurement of the spin. By analogy, if the qubit-state-dependent coherent states of the microwave field, $\alpha_{e,g}$, can be resolved by heterodyne detection, then they act as pointer states in the qubit measurement. Moreover, since $\hat H_\text{disp}$ commutes with the observable that is measured, $\sz{}$, this is a QND (quantum non-demolition) measurement \cite{Braginsky1980} (in contrast to the Stern-Gerlach measurement which is destructive). Note that for a system initially in a superposition of eigenstates of the measurement operator, a QND measurement \emph{does} in fact change the state by randomly collapsing it onto one of the measurement eigenstates. The true test of `QNDness' is that subsequent measurement results are not random but simply reproduce the first measurement result. \begin{figure} \caption{(a) Path in phase space leading up to steady-state of the intra-cavity pointer states $\alpha_g$ and $\alpha_e$ for $2\chi/\kappa = 1$, a measurement drive at the bare cavity frequency with an amplitude leading to one measurement photon at steady-state, and assuming infinite qubit relaxation time (top). Corresponding marginals along $x$ with the signal, noise and error defined in the text (bottom). The circles of radius $1/\sqrt2$ represent vacuum noise. (b) Path in phase space for $2\chi/\kappa = 10$ and (c) $2\chi/\kappa = 0.2$. 
(d) Signal-to-noise ratio as a function of $2\chi/\kappa$ for an integration time $\kappa\tau_m = 200$ (dark blue) and $\kappa\tau_m = 10$ (light blue). The maximum of the SNR at short integration time is shifted away from $2\chi/\kappa = 1$. The letters correspond to the ratio $2\chi/\kappa$ of the three previous panels.} \label{fig:DispersiveQubitReadoutPhaseSpace} \end{figure} The objective in a qubit measurement is to maximize the readout fidelity in the shortest possible measurement time. To see how this goal can be reached, it is useful to first evaluate more precisely the evolution of the intra-cavity field under such a measurement. The intra-cavity field is obtained from the Langevin equation \cref{eq:inouteom} with $\hat H_S = \hat H_\text{disp}$ and by taking into account the cavity drive as discussed in \secref{sec:drives}. Doing so, we find that the complex field amplitude $\av{\hat a}_{\sigma} = \alpha_{\sigma}$ given that the qubit is in state $\sigma = \{g,e\}$ satisfies \begin{subequations} \begin{align} \dot\alpha_e(t)&= -i \varepsilon(t) - i(\delta_r + \chi)\alpha_e(t) - \kappa\alpha_e(t)/2,\label{eq:alpha_e}\\ \dot\alpha_g(t)&= -i \varepsilon(t) - i(\delta_r - \chi)\alpha_g(t) - \kappa\alpha_g(t)/2\label{eq:alpha_g}, \end{align} \end{subequations} with $\delta_r = \omega_r-\omega_\mathrm{d}$ the detuning of the measurement drive from the bare cavity frequency. The time evolution of these two cavity fields in phase space is illustrated for three different values of $2\chi/\kappa$ by dashed gray lines in \cref{fig:DispersiveQubitReadoutPhaseSpace}(a-c). Focusing for the moment on the steady-state ($\dot\alpha_\sigma = 0$) response \begin{equation}\label{eq:apha_eg_s} \alpha_{e/g}^\mathrm{s} = \frac{-\varepsilon}{(\delta_r \pm\chi)-i\kappa/2}, \end{equation} with $+$ for $e$ and $-$ for $g$, results in the steady-state intra-cavity quadratures \begin{subequations} \begin{align} \av{\hat{X}}_\mathrm{e/g} &= \frac{\varepsilon(\delta_r\pm\chi)}{(\delta_r\pm\chi)^2+(\kappa/2)^2},\\ \av{\hat{P}}_\mathrm{e/g} &= \frac{\varepsilon \kappa/2}{(\delta_r\pm\chi)^2+(\kappa/2)^2}. \end{align} \end{subequations} When driving the cavity at its bare frequency, $\delta_r=0$, information about the qubit is only contained in the $X$ quadrature, see \cref{fig:DispersiveQubitReadoutPhaseSpace}(a-c). It is also useful to define the steady-state amplitude \begin{equation}\label{eq:Amplitude} A_{e/g}^\mathrm{s} = \sqrt{\av{\hat{X}}_{e/g}^2+\av{\hat{P}}_{e/g}^2} = \frac{2\varepsilon}{\sqrt{\kappa^2 + 4(\delta_r\pm\chi)^2}} \end{equation} and phase \begin{equation}\label{eq:phiDispersive} \phi_{e/g}^\mathrm{s} = \arctan\left(\frac{\av{\hat{X}}_{e/g}}{\av{\hat{P}}_{e/g}}\right) = \arctan\left(\frac{\delta_r\pm\chi}{\kappa/2}\right). \end{equation} These two quantities are plotted in \cref{fig:DispersiveCavityPull}. As could already have been expected from the form of $\hat H_\text{disp}$, a coherent tone of frequency $\omega_r\pm\chi$ on the resonator is largely transmitted if the qubit is in the excited (ground) state, and mostly reflected if the qubit is in the ground (excited) state. Alternatively, driving the resonator at its bare frequency $\omega_r$ leads to a different phase accumulation for the transmitted signal depending on the state of the qubit. In particular, on resonance with the bare resonator, $\delta_r=0$, the phase shift of the signal associated with the two qubit states is simply $\pm\arctan(2\chi/\kappa)$. 
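A minimal sketch of \cref{eq:alpha_e,eq:alpha_g}, with arbitrary parameters expressed in units of $\kappa$, confirms this behavior: integrating the two equations from vacuum yields pointer states of equal amplitude and opposite phases $\pm\arctan(2\chi/\kappa)$ when $\delta_r=0$.
\begin{verbatim}
import numpy as np

# Pointer-state dynamics from Eqs. (alpha_e) and (alpha_g), in units of kappa.
kappa, chi, delta_r, eps = 1.0, 0.5, 0.0, 0.25      # 2*chi/kappa = 1, arbitrary drive

def steady_alpha(sign, t_final=20.0, dt=1e-3):
    alpha = 0.0 + 0.0j                              # start from vacuum
    for _ in range(int(t_final/dt)):                # simple Euler integration
        alpha += dt*(-1j*eps - (1j*(delta_r + sign*chi) + kappa/2)*alpha)
    return alpha

a_e, a_g = steady_alpha(+1), steady_alpha(-1)
print(abs(a_e), abs(a_g))                           # equal amplitudes when delta_r = 0
phi_e = np.arctan(a_e.real/a_e.imag)                # phase as defined in Eq. (phiDispersive)
phi_g = np.arctan(a_g.real/a_g.imag)
print(phi_e, phi_g, np.arctan(2*chi/kappa))         # ~ +/- arctan(2*chi/kappa)
\end{verbatim}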
As a result, in the dispersive regime, measuring the amplitude and/or the phase of the transmitted or reflected signal from the resonator reveals information about the qubit state~\cite{Blais2004}. On the other hand, when driving the resonator at the qubit frequency, for example to realize a logical gate as discussed further in \cref{subsec:SingleQubitGates}, the phase shift of the resonator field depends only negligibly on the qubit state. This results in negligible entanglement between the resonator and the qubit, and consequently in negligible measurement-induced dephasing of the qubit. \begin{figure} \caption{Resonator transmission (dashed lines) and corresponding phase shifts (full lines) for the two qubit states (blue:~ground; red:~excited). When driving the resonator close to its pulled frequencies, the resonator response strongly depends on the state of the qubit. Adapted from \cite{Blais2007}.\label{fig:DispersiveCavityPull}} \end{figure} It is very important to note that for purposes of simplification, all of the above discussion has been couched in terms of the amplitude and phase of the oscillating electric field internal to the microwave resonator. In practice, we can typically only measure the field externally in the transmission line(s) coupled to the resonator. The relation between the two is the subject of input-output theory discussed in \cref{sec:environment} and \cref{app:inout}. The main ideas can be summarized rather simply. Consider an asymmetric cavity with one port strongly coupled to the environment and one port weakly coupled. If driven from the weak port, nearly all of the information about the state of the qubit is in the field radiated by the cavity into the strongly coupled port. The same is true if the cavity is driven from the strongly coupled side, but now the output field is a superposition of the directly reflected drive plus the field radiated by the cavity. If the drive frequency is swept across the cavity resonance, the signal undergoes a phase shift of $\pi$ in the former case and $2\pi$ in the latter. This affects the sensitivity of the output field to the dispersive shift induced by the qubit. If the cavity is symmetric, then half the information about the state of the qubit appears at each output port so this configuration is inefficient. Further details can be found in \cite{Clerk2010}. \subsubsection{Signal-to-noise ratio and measurement fidelity} \label{sec:SNR} Except for the last paragraph, the above discussion concerned the steady-state intra-cavity field from which we can infer the steady-state heterodyne signal. It is, however, also crucial to consider the temporal response of the resonator's output field to the measurement drive since, in the context of quantum computing, qubit readout should be as fast as possible. Moreover, the probability of assigning the correct outcome to a qubit measurement, or, more simply put, the measurement fidelity, must also be large. As the following discussion hopes to illustrate, simultaneously optimizing these two important quantities requires some care. As discussed in \cref{sec:FieldDetection}, the quadratures $\hat{X}_f(t)$ and $\hat{P}_f(t)$ are extracted from heterodyne measurement of the resonator output field. 
Combining these signals and integrating for a time $\tau_m$, the operator corresponding to this measurement takes the form \begin{equation} \begin{split} \h{M}(\tau_m) & = \int_0^{\tau_m}dt\, \left\{ w_X(t)\left[V_\mathrm{IF}\hat{X}_f(t) + \hat V_\mathrm{noise,X_f}(t) \right]\right.\\ &\left.\quad\quad\quad\quad\;\; + w_P(t)\left[V_\mathrm{IF}\hat{P}_f(t) + \hat V_\mathrm{noise,P_f}(t) \right] \right\}, \end{split} \end{equation} where $\hat V_\mathrm{noise,X_f/P_f}(t)$ is the noise in the $X_f/P_f$ quadrature. The weighting functions $w_X(t) = |\av{\hat{X}_f}_e-\av{\hat{X}_f}_g|$ and $w_P(t) = |\av{\hat{P}_f}_e-\av{\hat{P}_f}_g|$ are chosen so as to increase the discrimination of the two qubit states~\cite{Bultink2018,Ryan2015,Magesan2015,Walter2017}. Quite intuitively, because of qubit relaxation, these functions give less weight to the cavity response at long times since it will always reveal the qubit to be in its ground state irrespective of the prepared state~\cite{Gambetta2007}. Moreover, for the situation illustrated in \cref{fig:DispersiveQubitReadoutPhaseSpace}, there is no information on the qubit state in the $P$ quadrature. Reflecting this, $w_P(t)=0$, which prevents the noise in that quadrature from being integrated. Following \cref{sec:FieldDetection,sec:PhaseSpace}, the probability distribution for the outcome of multiple shots of the measurement of $\h{M}(\tau_m)$ is expected to be Gaussian and characterized by the marginal of the Q-function of the intra-cavity field. Using the above expression, the signal-to-noise ratio ($\mathrm{SNR}$) of this measurement can be defined as illustrated in \cref{fig:DispersiveQubitReadoutPhaseSpace}(a) for the intra-cavity field: it is the separation of the average combined heterodyne signals corresponding to the two qubit states divided by the standard deviation of the signal, an expression which takes the form \begin{align}\label{eq:SNR2} \mathrm{SNR}^2(t)\equiv\frac{|\av{\h{M}(t)}_e-\av{\h{M}(t)}_g|^2}{\av{\h{M}_{N}^2(t)}_e+\av{\h{M}_{N}^2(t)}_g}. \end{align} Here, $\av{\h{M}}_\sigma$ is the average integrated heterodyne signal given that the qubit is in state $\sigma$, and $\h{M}_N=\h{M}-\av{\h{M}}$ is the noise operator, which takes into account the added noise as well as the intrinsic vacuum noise of the quantum states. In addition to the $\mathrm{SNR}$, another important quantity is the measurement fidelity~\cite{Gambetta2007,Walter2017}~\footnote{An alternative definition known as the assignment fidelity is $1 - \tfrac{1}{2}[P(e|g) + P(g|e)]$~\cite{Magesan2015}. This quantity takes values in $[0,1]$ while formally $F_m \in [-1,1]$. Negative values are, however, not relevant in practice. Indeed, because $F_m = -1$ corresponds to systematically reporting the incorrect value, a fidelity of $1$ is recovered after flipping the measurement outcomes.} \begin{equation} F_m = 1 - [P(e|g) + P(g|e)] \equiv 1- E_m, \end{equation} where $P(\sigma|\sigma')$ is the probability that a qubit prepared in state $\sigma'$ is measured to be in state $\sigma$. In the second equality, we have defined the measurement error $E_m$ which, as illustrated in \cref{fig:DispersiveQubitReadoutPhaseSpace}(a), is simply the overlap of the marginals $P_\sigma(x)$ of the Q-functions for the two qubit states. This can be expressed as $E_m = \int dx_{\phi_\mathrm{LO}+\pi/2}\, \min[P_g(x_{\phi_\mathrm{LO}+\pi/2}),P_e(x_{\phi_\mathrm{LO}+\pi/2})]$, where the LO phase is chosen to minimize $E_m$. 
Using this expression, the measurement fidelity is found to be related to the $\mathrm{SNR}$ by $F_m = 1 - \mathrm{erfc}(\mathrm{SNR}/2)$, where $\mathrm{erfc}$ is the complementary error function~\cite{Gambetta2007}. It is important to note that this last result is valid only if the marginals are Gaussian. In practice, qubit relaxation and higher-order effects omitted in the dispersive Hamiltonian \cref{eq:HQubitDispersiveSimple} can lead to distortion of the coherent states and therefore to non-Gaussian marginals~\cite{Gambetta2007,Hatridge2013}. Kerr-type nonlinearities that are common in circuit QED tend to create a banana-shaped distortion of the coherent states in phase space, something that is sometimes referred to as bananization \cite{Boutin2017b,Malnou2018,Sivak2019}. Although we are interested in short measurement times, it is useful to consider the simpler expression for the long-time behavior of the $\mathrm{SNR}$, which suggests different strategies to maximize the measurement fidelity. Assuming $\delta_r = 0$ and ignoring the prefactors related to gain and mixing, we find~\cite{Gambetta2008} \begin{equation}\label{eq:SNRLongTime} \mathrm{SNR}(\tau_m\rightarrow\infty) \simeq (2\varepsilon/\kappa) \sqrt{2\kappa\tau_m}\left|\sin{2\phi}\right|, \end{equation} where $\phi$ is given by \cref{eq:phiDispersive}; see \cite{Didier2015c} for a detailed derivation of this expression. The reader can easily verify that the choice $\chi/\kappa = 1/2$ maximizes \cref{eq:SNRLongTime}, see \cref{fig:DispersiveQubitReadoutPhaseSpace}(d) \cite{Gambetta2008}. This ratio is consequently often chosen experimentally~\cite{Walter2017}. While leading to a smaller steady-state $\mathrm{SNR}$, other choices of the ratio $\chi/\kappa$ can be more advantageous at finite measurement times. In the small $\chi$ limit, the factor $2\varepsilon/\kappa$ in $\mathrm{SNR}(\tau_m\rightarrow\infty)$ can be interpreted using \cref{eq:alpha_g} as the square-root of the steady-state average intra-cavity measurement photon number. Another approach to improve the $\mathrm{SNR}$ is therefore to work at large measurement photon number $\bar n$. This idea, however, cannot be pushed too far since increasing the measurement photon number leads to a breakdown of the approximations that have been used to derive the dispersive Hamiltonian \cref{eq:HQubitDispersiveSimple}. Indeed, as discussed in \cref{sec:dispersive} the small parameter in the perturbation theory that leads to the dispersive approximation is not $g/\Delta$ but rather $\bar n/n_\mathrm{crit}$, with $n_\mathrm{crit}$ the critical photon number introduced in \cref{eq:ncrit}. Well before reaching $\bar n/n_\mathrm{crit}\sim 1$, higher-order terms in the dispersive approximation start to play a role and lead to departures from the expected behavior. For example, it is commonly experimentally observed that the dispersive measurement loses its QND character well before $\bar n \sim n_\mathrm{crit}$ and often at measurement photon populations as small as $\bar n\sim 1-10$ \cite{Johnson2011a,minev2019}. Because of these spurious qubit flips, measurement photon numbers are typically chosen to be well below $n_\mathrm{crit}$~\cite{Walter2017}. While this non-QNDness at $\bar n<n_\mathrm{crit}$ is expected from the discussion of dressed-dephasing found in \cref{sec:DissipationDispersive}, the predicted measurement-induced qubit flip rates are smaller than often experimentally observed. 
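The interplay between \cref{eq:SNRLongTime} and the fidelity $F_m = 1-\mathrm{erfc}(\mathrm{SNR}/2)$ can be illustrated with the following sketch, in which the drive amplitude and integration time are arbitrary choices and $\delta_r = 0$ is assumed.
\begin{verbatim}
import numpy as np
from math import erfc

# Long-time SNR of Eq. (SNRLongTime) versus 2*chi/kappa, and the resulting
# measurement fidelity F_m = 1 - erfc(SNR/2).
kappa, eps = 1.0, 0.25
tau_m = 200.0/kappa

def snr_long_time(ratio):                    # ratio = 2*chi/kappa
    phi = np.arctan(ratio)                   # Eq. (phiDispersive) at delta_r = 0
    return (2*eps/kappa)*np.sqrt(2*kappa*tau_m)*abs(np.sin(2*phi))

ratios = np.linspace(0.05, 5.0, 200)
snrs = np.array([snr_long_time(r) for r in ratios])
print(ratios[np.argmax(snrs)])               # close to 1, i.e. chi/kappa = 1/2
print(1 - erfc(snrs.max()/2))                # corresponding measurement fidelity
\end{verbatim}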
We note that qubit transitions at $\bar n>n_\mathrm{crit}$ caused by accidental resonances within the qubit-resonator system have been studied in~\textcite{Sank2016}. To reach high fidelities, it is also important for the measurement to be fast compared to the qubit relaxation time $T_1$. A strategy to speed up the measurement is to use a low-Q oscillator which leads to a faster readout rate simply because the measurement photons leak out more rapidly from the resonator to be detected. However, this should not be done at the price of increasing the Purcell rate $\gamma_\kappa$ to the point where this mechanism dominates qubit decay~\cite{Houck2008}. As discussed in \cref{sec:Purcell}, it is possible to avoid this situation to a large extent by adding a Purcell filter at the output of the resonator~\cite{Reed2010,Jeffrey2014,Bronn2015b}. Fixing $\kappa$ so as to avoid Purcell decay and working at the optimal $\chi/\kappa$ ratio, it can be shown that the steady-state response is reached in a time $\propto1/\chi$~\cite{Walter2017}. Large dispersive shifts can therefore help to speed up the measurement. As can be seen from \cref{eq:HQubitDispersiveParametersSW}, $\chi$ can be increased by working at larger qubit anharmonicity or, in other words, larger charging energy $E_C$. Once more, this cannot be pushed too far since the transmon charge dispersion and therefore its dephasing rate increase with $E_C$. The above discussion shows that QND qubit measurement in circuit QED is a highly constrained problem. The state of the art for such measurements is currently $F_m\sim98.25\%$ in $\tau_m=48$ ns, when minimizing readout time, and $99.2\%$ in 88 ns, when maximizing the fidelity, in both cases using $\bar n\sim 2.5$ intra-cavity measurement photons~\cite{Walter2017}. These results were obtained by detailed optimization of the system parameters following the concepts introduced above but also given an understanding of the full time response of the measurement signal $|\av{\h{M}(t)}_1-\av{\h{M}(t)}_0|$. The main limitation on these reported fidelities was the relatively short qubit relaxation time of 7.6 $\mu$s. Joint simultaneous dispersive readout of two transmon qubits capacitively coupled to the same resonator has also been realized \cite{Filipp2009b}. The very small photon number used in these experiments underscores the importance of quantum-limited amplifiers in the first stage of the measurement chain, see Fig.~\ref{fig:MeasurementChain}. Before the development of these amplifiers, which opened the possibility to perform strong single-shot (i.e.~projective) measurements, the $\mathrm{SNR}$ in dispersive measurements was well below unity, forcing the results of these weak measurements to be averaged over tens of thousands of repetitions of the experiment to extract information about the qubit state~\cite{Wallraff2005}. The advent of near quantum-limited amplifiers (see \cref{sec:QO:AmplificationSqueezing}) has made it possible to resolve the qubit state in a single shot, something which has led, for example, to the observation of quantum jumps of a transmon qubit~\cite{Vijay2011}. 
Finally, we point out that the quantum efficiency, $\eta$, of the whole measurement chain can be extracted from the $\mathrm{SNR}$ using \cite{Bultink2018} \begin{equation}\label{eq:eta_chain} \eta = \frac{\mathrm{SNR}^2}{4\beta_m}, \end{equation} where $\beta_m = 2\chi \int_0^{\tau_m} dt\,\mathrm{Im}[\alpha_g(t)\alpha_e(t)^*]$ is related to the measurement-induced dephasing discussed further in \cref{sec:AcStark_MeasIndDephasing}. This connection between quantum efficiency, $\mathrm{SNR}$, and measurement-induced dephasing results from the fundamental link between the rate at which information is gained in a quantum measurement and the unavoidable backaction on the measured system \cite{Clerk2010,Korotkov2001}. \subsubsection{Other approaches} \label{sec:ReadoutOtherApproaches} \paragraph{Josephson Bifurcation Amplifier} While the vast majority of circuit QED experiments rely on the approach described above, several other qubit-readout methods have been theoretically explored or experimentally implemented. One such alternative is known as the Josephson Bifurcation Amplifier (JBA) and relies on using, for example, a transmission-line resonator that is made nonlinear by incorporating a Josephson junction in its center conductor~\cite{Boaknin2007}. This circuit can be seen as a distributed version of the transmon qubit and is well described by the Kerr-nonlinear Hamiltonian of \cref{eq:HsjPhi4b}~\cite{Bourassa2012}. With a relatively weak Kerr nonlinearity ($\sim -500$ kHz) and under a coherent drive of well-chosen amplitude and frequency, this system bifurcates from a low photon-number state to a high photon-number state~\cite{Dykman1980,Manucharyan2007}. By dispersively coupling a qubit to the nonlinear resonator, this bifurcation can be made qubit-state dependent~\cite{Vijay2009}. It is possible to exploit the fact that the low- and high-photon-number states can be easily distinguished to realize high-fidelity single-shot qubit readout~\cite{Mallet2009}. \paragraph{High-power readout and qubit `punch out'} Coming back to linear resonators, while the non-QNDness at moderate measurement photon number mentioned above leads to small measurement fidelity, it was observed by a fearless graduate student that, in the limit of very large measurement power, a fast and high-fidelity single-shot readout is recovered~\cite{Reed2010a}. An intuitive understanding of this observation can be obtained from the Jaynes-Cummings Hamiltonian \eq{eq:HJC}~\cite{Boissonneault2010,Bishop2010a}. Indeed, for large photon number $n$, the first term of this Hamiltonian, which scales as $n$, dominates over the qubit-oscillator interaction, which scales as $g\sqrt n$, such that the cavity responds at its bare frequency $\omega_r$ despite the presence of the transmon. This is sometimes referred to as `punching out' the qubit and can be understood as a quantum-to-classical transition where, in the correspondence limit, the system behaves classically and therefore responds at the bare cavity frequency $\omega_r$. Interestingly, with a multi-level system such as the transmon, the power at which this transition occurs depends on the state of the transmon, leading to a high-fidelity measurement. This high-power readout is, however, obtained at the expense of completely losing the QND nature of the dispersive readout~\cite{Boissonneault2010}. 
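While the full analysis of the high-power readout requires the references cited above, the basic intuition can be caricatured with a two-level Jaynes-Cummings calculation of the dressed transition frequencies; in the following sketch the parameters are arbitrary, and drive and dissipation are ignored.
\begin{verbatim}
import numpy as np

# Two-level Jaynes-Cummings caricature of the quantum-to-classical transition
# behind the high-power readout: photon-number-dependent cavity transition
# frequencies for the two dressed branches.
wr, wq, g = 2*np.pi*7.0, 2*np.pi*6.0, 2*np.pi*0.1   # GHz (angular frequencies)
Delta = wq - wr

def E(n, s):                                        # dressed energies, s = +/- 1
    return (n + 0.5)*wr + 0.5*s*np.sqrt(Delta**2 + 4*g**2*(n + 1))

for n in (1, 10, 100, 10_000):
    pulls = [(E(n, s) - E(n - 1, s) - wr)/(2*np.pi*1e-3) for s in (+1, -1)]
    print(n, pulls)                                 # cavity pull in MHz for each branch
# At small n the pull is of order g^2/Delta; at large n it collapses toward
# zero, so the cavity responds at its bare frequency.
\end{verbatim}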
\paragraph{Squeezing} Finally, the $\sqrt n$ scaling of $\mathrm{SNR}(\tau_m\rightarrow\infty)$ mentioned above can be interpreted as resulting from populating the cavity with a coherent state and is known as the standard quantum limit. It is natural to ask if replacing the coherent measurement tone with squeezed input radiation (see \cref{sec:Squeezing}) can lead to Heisenberg-limited scaling for which the $\mathrm{SNR}$ scales linearly with the measurement photon number~\cite{Giovanetti2004}. To achieve this, one might imagine squeezing a quadrature of the field to reduce the overlap between the two pointer states. In \cref{fig:DispersiveQubitReadoutPhaseSpace}, this corresponds to squeezing along $X$. The situation is not so simple since the large dispersive coupling required for high-fidelity qubit readout leads to a significant rotation of the squeezing angle as the pointer states evolve from the center of phase space to their steady state. This rotation results in increased measurement noise due to contributions from the antisqueezed quadrature~\cite{Barzanjeh2014}. Borrowing the idea of quantum-mechanics-free subsystems~\cite{Tsang2012}, it has been shown that Heisenberg-limited scaling can be reached with two-mode squeezing by dispersively coupling the qubit to two rather than one resonator~\cite{Didier2015b}. \paragraph{Longitudinal readout} An alternative approach to qubit readout is based on the Hamiltonian $\hat H_z$ of \cref{eq:H_Longitudinal} with its longitudinal qubit-oscillator coupling $g_z(\hat a^\dag + \hat a)\sz{}$. In contrast to the dispersive Hamiltonian which leads to a rotation in phase space, longitudinal coupling generates a linear displacement of the resonator field that is conditional on the qubit state. As a result, while under the dispersive evolution there is little information gain about the qubit state at short times [see the poor pointer state separation at short times in \cref{fig:DispersiveQubitReadoutPhaseSpace}(a)], $\hat H_z$ rather generates the ideal dynamics for a measurement with $180^\circ$ out-of-phase displacements of the pointer states $\alpha_g$ and $\alpha_e$. It is therefore expected that this approach can lead to much shorter measurement times than is possible with the dispersive readout \cite{Didier2015c}. Another advantage is that $\hat H_z$ commutes with the measured observable, $[\hat H_z,\sz{}]=0$, corresponding to a QND measurement. While the dispersive Hamiltonian $\hat H_\text{disp}$ also commutes with $\sz{}$, this is not the case for the full Hamiltonian~\cref{eq:HTransmonJC} from which $\hat H_\text{disp}$ is perturbatively derived. As already discussed, this non-QNDness leads to Purcell decay and to a breakdown of the dispersive approximation when the photon population is not significantly smaller than the critical photon number $n_\mathrm{crit}$. On the other hand, because $\hat H_z$ is genuinely QND, it does not suffer from these problems and the measurement photon number can, in principle, be made larger under longitudinal than under dispersive coupling. Moreover, given that $\hat H_z$ leads to displacement of the pointer states rather than to rotation in phase space, single-mode squeezing can also be used to increase the measurement SNR \cite{Didier2015c}. Because the longitudinal coupling can be thought of as a cavity drive of amplitude $\pm g_z$ with the sign being conditional on the qubit state, $\hat H_z$ leads in steady-state to a pointer state displacement $\pm g_z/(\omega_r+i\kappa/2)$, see \cref{eq:apha_eg_s}. 
With $\omega_r \gg g_z,\, \kappa$ in practice, this displacement is negligible and cannot realistically be used for qubit readout. One approach to increase the pointer state separation is to activate the longitudinal coupling by modulating $g_z$ at the resonator frequency \cite{Didier2015c}. Taking $g_z(t) = \tilde g_z \cos(\omega_r t)$ leads, in a rotating frame and after dropping rapidly oscillating terms, to the Hamiltonian
\begin{equation}
\tilde H_z = \frac{\tilde g_z}{2} (\hat a^\dag+\hat a)\sz{}.
\end{equation}
Under this modulation, the steady-state displacement now becomes $\pm \tilde g_z/\kappa$ and can be significant even for moderate modulation amplitudes $\tilde g_z$. Circuits realizing the longitudinal coupling with transmon or flux qubits have been studied \cite{Didier2015c,Kerman2013,Billangeon2015,Billangeon2015a,Richer2016,Richer2017a}.

Another approach to realize these ideas is to strongly drive a resonator dispersively coupled to a qubit \cite{Blais2007,Dassonneville2020}. Indeed, the strong drive leads to a large displacement of the cavity field $\hat a \rightarrow \hat a + \alpha$ which, when applied to the dispersive Hamiltonian, leads to
\begin{equation}
\chi\hat a^\dag\hat a\sz{} \rightarrow \chi\hat a^\dag\hat a\sz{} + \alpha\chi (\hat a^\dag+\hat a)\sz{} + \chi\alpha^2\sz{},
\end{equation}
where we have assumed $\alpha$ to be real for simplicity. For $\chi$ small and $\alpha$ large, the second term dominates, therefore realizing a synthetic longitudinal interaction of amplitude $g_z=\alpha\chi$. In other words, longitudinal readout can be realized as a limit of the dispersive readout where $\chi$ approaches zero while $\alpha$ grows such that $\chi\alpha$ is constant. A simple interpretation of this observation is that, for strong drives, the circle on which the pointer states rotate due to the dispersive interaction has a very large radius $\alpha$ such that, for all practical purposes, the motion appears linear. A variation of this approach which allows for larger longitudinal coupling strength was experimentally realized by \textcite{Touzard2019} and \textcite{Ikonen2019} and relies on driving the qubit at the frequency of the resonator. This is akin to the cross-resonance gate discussed further in \cref{sec:AllMicrowaveGates} and which leads to the desired longitudinal interaction, see the last term of \cref{eq:CrossResonanceTLS}. A more subtle approach to realize a synthetic longitudinal interaction is to drive a qubit with a Rabi frequency $\Omega_R$ while driving the resonator at the sideband frequencies $\omega_r\pm\Omega_R$. This idea was implemented by \textcite{Eddins2018} who also showed improvement of qubit readout with single-mode squeezing. Importantly, because these realizations are based on the dispersive Hamiltonian, they suffer from Purcell decay and non-QNDness. Circuits realizing dispersive-like interactions that are not derived from a Jaynes-Cummings interaction have been studied \cite{Dassonneville2020,Didier2015c}.

\section{Qubit-resonator coupling regimes}\label{sec:CouplingRegimes}

We now turn to a discussion of the different coupling regimes that are accessible in circuit QED and how these regimes are probed experimentally. We first consider the resonant regime where the qubit is tuned in resonance with the resonator, before moving on to the dispersive regime characterized by large qubit-resonator detuning.
While the situation of most experimental interest is the strong coupling regime where the coupling strength $g$ overwhelms the decay rates, we also touch upon the so-called bad-cavity and bad-qubit limits because of their historical importance and their current relevance to hybrid quantum systems. Finally, we briefly consider the ultrastrong coupling regime where $g$ becomes comparable to, or even larger than, the system frequencies. To simplify the discussion, we will treat the artificial atom as a simple two-level system throughout this section.

\subsection{\label{sec:resonant}Resonant regime}

The low-energy physics of the Jaynes-Cummings model is well described by the ground state $\ket{\overline{g,0}}=\ket{g,0}$ and the first two excited states
\begin{equation}\label{eq:OnResonanceDoublet}
\begin{split}
\ket{\overline{g, 1}} &= (\ket{g,1}-\ket{e,0})/\sqrt{2},\\
\ket{\overline{e, 0}} &= (\ket{g,1} + \ket{e,0})/\sqrt{2},
\end{split}
\end{equation}
which, as already illustrated in \cref{fig:JCSpectrum}, are split in frequency by $2g$. As discussed in \cref{sec:DispersiveQubitReadout} in the context of the dispersive readout, the coupled qubit-resonator system can be probed by applying a coherent microwave tone to the input of the resonator and measuring the transmitted or reflected signal. To arrive at an expression for the expected signal in such an experiment, we consider the equations of motion for the field and qubit annihilation operators in the presence of a coherent drive of amplitude $\varepsilon$ and frequency $\omega_\mathrm{d}$ on the resonator's input port. In a frame rotating at the drive frequency, these equations take the form
\begin{align}
\av{\dot{\hat a}} &= -\left(\frac{\kappa}{2}+i \delta_r\right)\av{\hat a} - ig\av{\smm{}} - i\varepsilon, \label{eq:a}\\
\av{\dot{\hat{\sigma}}_-} &= -\left(\gamma_2+i\delta_q\right)\av{\smm{}} + ig\av{\hat a\sz{}}, \label{eq:SigmaMinus_Original}
\end{align}
with $\delta_r = \omega_r-\omega_\mathrm{d}$ and $\delta_q = \omega_{q}-\omega_\mathrm{d}$, and where $\gamma_2$ is defined in \cref{eq:T2_definition}. These expressions are obtained using $\partial_t\av{\h{O}} = \mathrm{Tr}[\dot\rho\h{O}]$ and the master equations of \cref{eq:ME_harmonic,eq:ME_transmon} at zero temperature and in the two-level approximation for the transmon. Alternatively, the expression for $\partial_t\av{\hat a}$ is simply the average of \cref{eq:inouteom} with $\hat H_S$ the Jaynes-Cummings Hamiltonian.

At very low excitation amplitude $\varepsilon$, it is reasonable to truncate the Hilbert space to the first three levels defined above. In this subspace, $\av{\hat a\sz{}} = -\av{\hat a}$ since $\hat a$ acts nontrivially only if the qubit is in the ground state~\cite{Kimble1994}. It is then simple to compute the steady-state transmitted homodyne power by solving the above expressions with $\partial_t\av{\hat a} = \partial_t\av{\smm{}} = 0$ and using \cref{eq:Amplitude} to find
\begin{equation}\label{eq:CavityTransmission}
|A|^2 = \left(\frac{\varepsilon V_\mathrm{IF}}{2}\right)^2 \left| \frac{\delta_q-i\gamma_2}{(\delta_q-i\gamma_2)(\delta_r-i\kappa/2)-g^2} \right|^2.
\end{equation}
This expression is exact in the low excitation power limit. Taking the qubit and the oscillator to be on resonance, $\Delta = \omega_{q}-\omega_r=0$, we now consider the result of cavity transmission measurements in three different regimes of qubit-cavity interaction.
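As a concrete illustration of \cref{eq:CavityTransmission}, the short sketch below evaluates the transmitted power on resonance ($\omega_{q}=\omega_r$) for illustrative parameter values (with $V_\mathrm{IF}$ set to unity); the two transmission maxima appear near $\omega_r\pm g$, anticipating the vacuum Rabi splitting discussed below.
\begin{verbatim}
import numpy as np

# Low-power cavity transmission |A|^2 of Eq. (CavityTransmission) evaluated
# on resonance (omega_q = omega_r).  Parameter values are illustrative only.
wr = wq = 2*np.pi*6e9          # bare cavity and qubit frequencies (rad/s)
g      = 2*np.pi*100e6         # coupling strength
kappa  = 2*np.pi*1e6           # cavity linewidth
gamma2 = 2*np.pi*0.5e6         # qubit dephasing rate
eps, V_IF = 1.0, 1.0           # drive amplitude and IF gain (arbitrary units)

wd = np.linspace(wr - 2*g, wr + 2*g, 4001)   # probe frequency sweep
dq, dr = wq - wd, wr - wd
A2 = (eps*V_IF/2)**2*np.abs((dq - 1j*gamma2)
      /((dq - 1j*gamma2)*(dr - 1j*kappa/2) - g**2))**2

# The two maxima sit near the dressed-state frequencies wr -/+ g.
low = wd < wr
for sel in (low, ~low):
    peak = wd[sel][np.argmax(A2[sel])]
    print("peak offset from wr:", (peak - wr)/2/np.pi/1e6, "MHz")
\end{verbatim}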
\subsubsection{Bad-cavity limit}

We first consider the bad-cavity limit realized when the cavity decay rate overwhelms the coupling $g$, which is itself larger than the qubit linewidth: $\kappa > g \gg \gamma_2$. This situation corresponds to an overdamped oscillator and, at qubit-oscillator resonance, leads to rapid decay of the qubit. A simple model for this process is obtained using the truncated Hilbert space discussed above, where we now drop the cavity drive for simplicity. Because of the very large decay rate $\kappa$, we can assume the oscillator to rapidly reach its steady state, $\partial_t\av{\hat a}=0$. Using the resulting expression for $\av{\hat a}$ in \cref{eq:SigmaMinus_Original} immediately leads to
\begin{equation}
\av{\dot{\hat{\sigma}}_-} = -\left(\frac{\gamma_1+\gamma_\kappa'}{2}+\gamma_\varphi\right)\av{\smm{}},
\end{equation}
where we have defined the Purcell decay rate $\gamma_\kappa' = 4g^2/\kappa$. The expression for this rate has a rather different form than the Purcell rate $\gamma_\kappa = (g/\Delta)^2\kappa$ given in \cref{eq:DispersiveRates}. These two results are, however, not incompatible but have been obtained in very different regimes. An expression for the Purcell rate that interpolates between the two above expressions can be obtained and takes the form $\kappa g^2/[(\kappa/2)^2+\Delta^2]$~\cite{Sete2014}.

\begin{figure}
\caption{Numerical simulations of the qubit-oscillator master equation for (a,c,e) the time evolution starting from the bare state $\ket{0,e}$.}
\label{fig:ResonantRegimes}
\end{figure}

The situation described here is illustrated for $\kappa/g = 10$ and $\gamma_1 = 0$ in \cref{fig:ResonantRegimes}(a), which shows the probability for the qubit to be in its excited state versus time after initializing the qubit in its excited state and the resonator in vacuum. Even in the absence of qubit $T_1$, the qubit is seen to quickly relax to its ground state, something which, as discussed in \cref{sec:DissipationDispersive}, is due to qubit-oscillator hybridization. \Cref{fig:ResonantRegimes}(b) shows the transmitted power versus drive frequency in the presence of a very weak coherent tone populating the cavity with $\bar n\ll1$ photons. The response shows a broad Lorentzian peak of width $\kappa$ together with a narrow electromagnetically induced transparency (EIT)-like window of width $\gamma_\kappa'$~\cite{rice1996,Mlynek2014b}. This effect, which is due to interference between the intra-cavity field and the probe tone, vanishes in the presence of qubit dephasing. Although not the main regime of interest in circuit QED, the bad-cavity limit offers an opportunity to engineer the dissipation seen by the qubit. For example, this regime has been used to control the lifetime of long-lived donor spins in silicon in a hybrid quantum system~\cite{Bienfait2016a}.

\subsubsection{Bad-qubit limit}

The bad-qubit limit corresponds to the situation where a high-Q cavity with large qubit-oscillator coupling is realized, while the qubit dephasing and/or energy relaxation rates are large: $\gamma_2>g\gg\kappa$. Although this situation is not typical of circuit QED with transmon qubits, it is relevant for some hybrid systems that suffer from significant dephasing. This is the case, for example, in early experiments with charge qubits based on semiconductor quantum dots coupled to superconducting resonators \cite{Frey2012,Petersson2012a,Viennot2014a}.
In analogy to the bad-cavity case, the strong damping of the qubit together with the qubit-resonator coupling leads to the photon decay rate $\kappa_\gamma' = 4g^2/\gamma_1$ which is sometimes known as the `inverse' Purcell rate. This is illustrated in \cref{fig:ResonantRegimes}(c) which shows the time evolution of the coupled system starting with a single photon in the resonator and the qubit in the ground state. In this situation, the cavity response is a simple Lorentzian broadened by the inverse Purcell rate, see \cref{fig:ResonantRegimes}(d). If the qubit were to be probed directly rather than indirectly via the cavity, the atomic response would show the EIT-like feature of \cref{fig:ResonantRegimes}(b), now with a dip of width $\kappa_\gamma'$~\cite{rice1996}. The reader should also be aware that qubit-resonator detuning-dependent dispersive shifts of the cavity resonance can be observed in this bad-qubit limit. The observation of such dispersive shifts on its own should not be mistaken for an observation of strong coupling \cite{Wallraff2013}.

\subsubsection{Strong coupling regime}

We now turn to the case where the coupling strength overwhelms the qubit and cavity decay rates, $g > \kappa,\, \gamma_2$. In this regime, light-matter interaction is strong enough for a single quantum to be coherently exchanged between the electromagnetic field and the qubit before it is irreversibly lost to the environment. In other words, at resonance, $\Delta = 0$, the splitting $2g$ between the two dressed eigenstates $\{\ket{\overline{g, 1}},\ket{\overline{e, 0}}\}$ of \cref{eq:OnResonanceDoublet} is larger than their linewidth $\kappa/2 + \gamma_2$ and can be resolved spectroscopically. We note that, with the eigenstates being half-photon and half-qubit\footnote{According to some authors, these dressed states should therefore be referred to as quton and phobit~\cite{Schuster2007}.}, the above expression for the dressed-state linewidth is simply the average of the cavity and of the qubit linewidths~\cite{Haroche1992}. \Cref{fig:ResonantRegimes}(f) shows cavity transmission for $(\kappa,\gamma_1,\gamma_\varphi)/g = (0.1, 0.1,0)$ and at low excitation power such that, on average, there is significantly less than one photon in the cavity. The resulting doublet of peaks located at $\omega_r\pm g$ is the direct signature of the dressed states $\{\ket{\overline{g, 1}},\ket{\overline{e, 0}}\}$ and is known as the vacuum Rabi splitting. The observation of this doublet is the hallmark of the strong coupling regime.

\begin{figure}
\caption{(a) Transmission-line resonator transmission versus probe frequency in the first observation of vacuum Rabi splitting in circuit QED (full blue line). The qubit is a Cooper pair box qubit with $E_J/h\approx 8\,\rm{GHz}$.}
\label{fig:VacuumRabiSplitting}
\end{figure}

The first observation of this feature in cavity QED with a single atom and a single photon was reported by~\textcite{Thompson1992}. In this experiment, the number of atoms in the cavity was not well controlled and it could only be determined that there was \emph{on average} one atom in interaction with the cavity field. This distinction is important because, in the presence of $N$ atoms, the collective interaction strength is $g\sqrt{N}$ and the observed splitting correspondingly larger~\cite{Tavis1968,Fink2009}.
Atom number fluctuation is obviously not a problem in circuit QED and, with the very strong coupling and relatively small linewidths that can routinely be experimentally achieved, reaching the strong coupling regime is not particularly challenging in this system. In fact, the very first circuit QED experiment of \textcite{Wallraff2004} reported the observation of a clear vacuum Rabi splitting with $2g/(\kappa/2 + \gamma_2) \sim 10$, see \cref{fig:VacuumRabiSplitting}(a). This first demonstration used a charge qubit which, by construction, has a much smaller coupling $g$ than typical transmon qubits. As a result, more recent experiments with transmon qubits can display ratios of peak separation to linewidth of several hundred, see \cref{fig:VacuumRabiSplitting}(b)~\cite{Schoelkopf2008}.

\Cref{fig:AvoidedCrossing} shows the qubit-oscillator spectrum as a function of probe frequency, as above, but now also as a function of the qubit frequency, revealing the full qubit-resonator avoided crossing. The horizontal dashed line corresponds to the bare cavity frequency while the diagonal dashed line is the bare qubit frequency. The vacuum Rabi splitting of \cref{fig:ResonantRegimes}(f) is obtained from a linecut (dotted vertical line) at resonance between the bare qubit frequency $\omega_{q}$ and the bare cavity frequency $\omega_r$. Because it is the cavity that is probed here, the response is larger when the dressed states are mostly cavity-like and disappears away from the cavity frequency where the cavity no longer responds to the probe~\cite{Haroche1992}.

\begin{figure}
\caption{Vacuum Rabi splitting revealed in numerical simulations of the cavity transmission $A^2 = |\av{\hat a}|^2$ as a function of probe frequency and qubit transition frequency for the same parameters as in \cref{fig:ResonantRegimes}.}
\label{fig:AvoidedCrossing}
\end{figure}

It is interesting to note that the splitting predicted by \cref{eq:CavityTransmission} for the transmitted homodyne signal is in fact smaller than $2g$ in the presence of finite relaxation and dephasing. Although not significant in circuit QED with transmon qubits, this correction can become important in systems such as charge qubits in quantum dots that are not very deep in the strong coupling regime. We also note that the observed splitting can be smaller when measured in reflection rather than in transmission.

Rather than in spectroscopic measurements, strong light-matter coupling can also be revealed in time-resolved measurements~\cite{Brune1996}. Starting from the qubit-oscillator ground state, this can be done, for example, by first pulsing the qubit to its first excited state and then bringing it on resonance with the cavity. As illustrated in \cref{fig:ResonantRegimes}(e), this results in oscillations in the qubit and cavity populations at the vacuum Rabi frequency $2g$. Time-resolved vacuum Rabi oscillations in circuit QED were first performed with a flux qubit coupled to a discrete LC oscillator realized in the bias circuitry of the device \cite{Johansson2006}. This experiment was followed by a similar observation with a phase qubit coupled to a coplanar waveguide resonator \cite{Hofheinz2008}. In the limit of weak excitation power, which we have considered so far, the coupled qubit-oscillator system is indistinguishable from two coupled classical linear oscillators.
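A minimal numerical sketch of the time-resolved vacuum Rabi oscillations just described is given below; it is restricted to the single-excitation subspace of the resonant Jaynes-Cummings model, neglects dissipation ($\kappa=\gamma_1=0$), and uses a hypothetical coupling strength.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Sketch of dissipation-free vacuum Rabi oscillations in the single-excitation
# subspace {|e,0>, |g,1>} of the resonant Jaynes-Cummings model (hbar = 1).
# The coupling strength below is hypothetical.
g = 2*np.pi*100e6                      # rad/s
H = g*np.array([[0.0, 1.0],
                [1.0, 0.0]])           # JC coupling block at Delta = 0

t = np.linspace(0.0, 20e-9, 401)
psi0 = np.array([1.0, 0.0])            # qubit excited, cavity in vacuum
P_e = np.array([abs(expm(-1j*H*ti) @ psi0)[0]**2 for ti in t])

# P_e(t) = cos^2(g t): the excitation is exchanged at the vacuum Rabi
# frequency 2g, i.e. a full swap after t = pi/(2g).
print(np.allclose(P_e, np.cos(g*t)**2, atol=1e-8))
\end{verbatim}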
As a result of this classical equivalence, while the dressed states that are probed in these experiments are entangled, the observation of an avoided crossing cannot be taken as a conclusive demonstration that the oscillator field is quantized or of qubit-oscillator entanglement. Indeed, a vacuum Rabi splitting can be interpreted as the familiar normal-mode splitting of two coupled classical oscillators. A clear signature of the quantum nature of the system can, however, be obtained by probing the $\sqrt n$ dependence of the spacing of the higher excited states of the Jaynes-Cummings ladder already discussed in \cref{sec:JaynesCummings}. This dependence results from the matrix element of the operator $\hat a$ and is consequently linked to the quantum nature of the field~\cite{Carmichael1996}. Experimentally, these transitions can be accessed in several ways, including by two-tone spectroscopy~\cite{Fink2008}, by increasing the probe tone power~\cite{Bishop2009a}, or by increasing the system temperature~\cite{Fink2010}. The light blue line in \cref{fig:ResonantRegimes}(f) shows cavity transmission with a thermal photon number of $\bar n_\kappa = 0.35$ rather than $\bar n_\kappa = 0$ (dark blue line). At this more elevated temperature, additional pairs of peaks with smaller separation are now observed in addition to the original peaks separated by $2g$. As illustrated in \cref{fig:ResonantMultiPhotons}, these additional structures are due to multi-photon transitions and their $\sqrt n$ scaling reveals the anharmonicity of the Jaynes-Cummings ladder. Interestingly, the matrix elements of transitions that lie outside of the original vacuum Rabi splitting peaks are suppressed and these transitions are therefore not observed, see the red arrow in \cref{fig:ResonantMultiPhotons}~\cite{Rau2004a}. We also note that, at much larger power or at elevated temperature, the system undergoes a quantum-to-classical transition and a single peak at the resonator frequency $\omega_r$ is observed~\cite{Fink2010}. In short, the impact of the qubit on the system is washed away in the correspondence limit. This is to be expected from the form of the Jaynes-Cummings Hamiltonian \cref{eq:HJC} where the qubit-cavity coupling $\hbar g (\hat a^\dag\smm{} + \hat a\spp{})$ with its $\sqrt{n}$ scaling is overwhelmed by the free cavity Hamiltonian $\hbar\omega_r \hat a^\dag\hat a$ which scales as $n$. This is the same mechanism that leads to the high-power readout discussed in \cref{sec:ReadoutOtherApproaches}.

\begin{figure}
\caption{Ground state and first two doublets of the Jaynes-Cummings ladder. The dark blue arrows correspond to the transitions that are probed in a vacuum Rabi experiment. The transitions illustrated with light blue arrows lead to additional peaks at transition frequencies lying inside of the vacuum Rabi doublet at elevated temperature or increased probe power. On the other hand, the matrix element associated with the red transitions would lead to a response at transition frequencies outside of the vacuum Rabi doublet. Those transitions are, however, suppressed and are not observed \cite{Rau2004a}.}
\label{fig:ResonantMultiPhotons}
\end{figure}

Beyond this spectroscopic evidence, the quantum nature of the field and qubit-oscillator entanglement were also demonstrated in a number of experiments directly measuring the joint density matrix of the dressed states.
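The $\sqrt n$ anharmonicity invoked above can be made explicit with a few lines of code: on resonance, and taking the ground-state energy as zero with $\hbar=1$, the $n$-excitation dressed states lie at $n\omega_r\pm g\sqrt n$, so an $n$-photon transition from the ground state appears at the per-photon frequency $\omega_r\pm g/\sqrt n$, inside the vacuum Rabi doublet (the parameter values below are hypothetical).
\begin{verbatim}
import numpy as np

# Sqrt(n) anharmonicity of the resonant Jaynes-Cummings ladder (hbar = 1).
# Dressed energies relative to the ground state: E(n,+/-) = n*wr +/- g*sqrt(n).
# An n-photon transition from the ground state therefore appears at the
# per-photon frequency wr +/- g/sqrt(n), inside the vacuum Rabi doublet.
wr, g = 2*np.pi*6e9, 2*np.pi*100e6     # hypothetical parameters (rad/s)

for n in range(1, 4):
    E_plus, E_minus = n*wr + g*np.sqrt(n), n*wr - g*np.sqrt(n)
    offsets = (np.array([E_minus, E_plus])/n - wr)/(2*np.pi*1e6)
    print(f"n = {n}: peaks at wr {offsets[0]:+.1f} / {offsets[1]:+.1f} MHz")
# n = 1: +/-100 MHz (vacuum Rabi peaks); n = 2: +/-70.7 MHz; n = 3: +/-57.7 MHz
\end{verbatim}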
\textcite{Eichler2012b}, for example, demonstrated such a direct measurement by creating one of the entangled states $\{\ket{\overline{g, 1}},\ket{\overline{e, 0}}\}$ in a time-resolved vacuum Rabi oscillation experiment and, subsequently, measuring the qubit state in a dispersive measurement and the photon state using a linear detection method \cite{Eichler2012}. A range of experiments have used the ability to create entanglement between a qubit and a photon through the resonant interaction with a resonator, e.g.~in the context of quantum computation \cite{Mariantoni2011a}, for entangling resonator modes \cite{Mariantoni2011}, and for transferring quantum states \cite{Sillanpaa2007}.

\subsection{\label{sec:DispersiveConsequences}Dispersive regime}

For most quantum computing experiments, it is common to work in the dispersive regime where, as already discussed in \cref{sec:dispersive}, the qubit is strongly detuned from the oscillator with $|\Delta|\gg g$. There, the dressed eigenstates are only weakly entangled qubit-oscillator states. This is to be contrasted with the resonant regime where these eigenstates are highly entangled, resulting in the qubit and the oscillator completely losing their individual character. In the two-level system approximation, the dispersive regime is well described by the Hamiltonian $\hat H_\text{disp}$ of \cref{eq:HQubitDispersiveSimple}. There, we had interpreted the dispersive coupling as a qubit-state dependent shift of the oscillator frequency. This shift can be clearly seen in \cref{fig:AvoidedCrossing} as the deviation of the oscillator response from the bare oscillator frequency away from resonance (horizontal dashed line). This figure also makes it clear that the qubit frequency, whose bare value is given by the diagonal dashed line, is also modified by the dispersive coupling to the oscillator. To better understand this qubit-frequency shift, it is instructive to rewrite $\hat H_\text{disp}$ as
\begin{equation}\label{eq:HQubitDispersiveSimpleACStark}
\hat H_\text{disp} \approx \hbar \omega_r \hat a^\dagger \hat a + \frac{\hbar}{2} \left[\omega_{q}+ 2 \chi \left(\hat a^\dagger \hat a +\frac12 \right)\right] \sz{},
\end{equation}
where it is now clear that the dispersive interaction of amplitude $\chi$ not only leads to a qubit-state dependent frequency pull of the oscillator, but also to a photon-number dependent frequency shift of the qubit given by $2\chi\hat a^\dagger \hat a$. This is known as the ac-Stark shift (or the quantized light shift) and is here accompanied by a Lamb shift corresponding to the factor of $1/2$ in the last term of \cref{eq:HQubitDispersiveSimpleACStark} and which we had dropped in \cref{eq:HQubitDispersiveSimple}. In this section, we explore some consequences of this new point of view on the dispersive interaction, starting by first reviewing some of the basic aspects of qubit spectroscopic measurements.

\subsubsection{Qubit Spectroscopy}

To simplify the discussion, we first consider spectroscopically probing the qubit assuming that the oscillator remains in its vacuum state. This is done by applying a coherent field of amplitude $\alpha_d$ and frequency $\omega_\mathrm{d}$ to the qubit, either via a dedicated voltage gate on the qubit or to the input port of the resonator.
Ignoring the resonator for the moment, this situation is described by the Hamiltonian $\delta_q \sz{}/2 + \Omega_R \sx{}/2$, where $\delta_q = (\omega_{q}+\chi)-\omega_\mathrm{d}$ is the detuning between the Lamb-shifted qubit transition frequency and the drive frequency, and $\Omega_R \propto \alpha_d$ is the Rabi frequency. Under this Hamiltonian and using the master equation \cref{eq:ME_transmon} projected on two levels of the qubit, the steady-state probability $P_e=(\av{\sz{}}_\mathrm{s}+1)/2$ for the qubit to be in its excited state (or, equivalently, the probability $P_g$ to be in the ground state) is found to be \cite{Abragam1961}
\begin{equation}\label{eq:lineshape}
P_{e} = 1 - P_{g} = \frac{1}{2} \frac {\Omega_R^2} {\gamma_1\gamma_2+\delta_q^2\gamma_1/\gamma_2+\Omega_R^2}.
\end{equation}
The Lorentzian lineshape of $P_e$ as a function of the drive frequency is illustrated in \cref{fig:DispersiveQubitSpectroscopy}(a). In the limit of strong qubit drive, i.e.~large Rabi frequency $\Omega_R$, the steady-state qubit population reaches saturation with $P_e=P_g=1/2$, see \cref{fig:DispersiveQubitSpectroscopy}(b). Moreover, as the power increases, the full width at half maximum (FWHM) of the qubit lineshape evolves from the bare qubit linewidth given by $\gamma_q = 2 \gamma_2$ to $2\sqrt{{1}/{T_2^2} +\Omega_R^2 {T_1}/{T_2}}$, something that is known as power broadening and which is illustrated in \cref{fig:DispersiveQubitSpectroscopy}(c). In practice, the unbroadened dephasing rate $\gamma_2$ can be determined from spectroscopic measurements by extrapolating to zero spectroscopy tone power the linear dependence of $\nu_{\rm{HWHM}}^2$. This quantity can also be determined in the time domain from a Ramsey fringe experiment \cite{Vion2002}.\footnote{Different quantities associated with the dephasing time are used in the literature, the three most common being $T_2$, $T_2^*$ and $T_2^\mathrm{echo}$. While $T_2$ corresponds to the intrinsic or ``natural'' dephasing time of the qubit, $T_2^*\le T_2$ accounts for inhomogeneous broadening. For example, for a flux-tunable transmon, this broadening can be due to random fluctuations of the flux threading the qubit's SQUID loop. A change of the flux over the time of the experiment needed to extract $T_2$ results in a qubit frequency shift, something that is measured as a broadening of the qubit's intrinsic linewidth. Notably, the slow frequency fluctuations can be cancelled by applying a $\pi$-pulse midway through a Ramsey fringe experiment. The measured dephasing time is then known as $T_2^\mathrm{echo}$ and is usually longer than $T_2^*$, with its exact value depending on the spectrum of the low-frequency noise affecting the qubit \cite{Martinis2003}. The method of dynamical decoupling, which relies on more complex pulse sequences, can be used to cancel higher-frequency components of the noise \cite{Bylander2011}.}

\begin{figure}
\caption{Power broadening of the qubit line. (a) Excited qubit population (left vertical axis) and phase (right vertical axis) as a function of the drive detuning $\delta_q$ for the Rabi amplitudes $\Omega_R/2\pi = 0.1$ MHz (light blue), 0.5 MHz (blue) and 1 MHz (dark blue). The phase is obtained from $\phi = \arctan(2\chi \av{\sz{}}_\mathrm{s}/\kappa)$.}
\label{fig:DispersiveQubitSpectroscopy}
\end{figure}

In typical optical spectroscopy of atoms in a gas, one directly measures the absorption of photons by the gas as a function of the frequency of the photons.
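Before turning to how this lineshape is measured in circuit QED, the following sketch (with hypothetical qubit parameters, and taking $\gamma_1=1/T_1$ and $\gamma_2=1/T_2$) evaluates \cref{eq:lineshape} and compares the numerically extracted linewidth to the power-broadened expression quoted above.
\begin{verbatim}
import numpy as np

# Steady-state excited-state population [Eq. (lineshape)] and power-broadened
# full width at half maximum, for hypothetical qubit parameters.
T1, T2 = 20e-6, 15e-6
gamma1, gamma2 = 1/T1, 1/T2

def P_e(delta_q, Omega_R):
    return 0.5*Omega_R**2/(gamma1*gamma2 + delta_q**2*gamma1/gamma2 + Omega_R**2)

delta = np.linspace(-2e6, 2e6, 20001)*2*np.pi
for Omega_R in 2*np.pi*np.array([0.01e6, 0.1e6, 0.5e6]):
    pe = P_e(delta, Omega_R)
    half = delta[pe >= pe.max()/2]
    fwhm_numeric = half.max() - half.min()
    fwhm_formula = 2*np.sqrt(1/T2**2 + Omega_R**2*T1/T2)
    print(f"Omega_R/2pi = {Omega_R/2/np.pi/1e6:.2f} MHz: "
          f"FWHM/2pi = {fwhm_numeric/2/np.pi/1e3:.1f} kHz "
          f"(formula {fwhm_formula/2/np.pi/1e3:.1f} kHz)")
\end{verbatim}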
In circuit QED, one typically performs quantum jump spectroscopy by measuring the probability that an applied microwave drive places the qubit into its excited state. The variation in qubit population induced by the spectroscopy drive can be measured by monitoring the corresponding change in the cavity response. As discussed in \cref{sec:DispersiveQubitReadout}, this is realized using two-tone spectroscopy by measuring the cavity transmission, or reflection, of an additional drive of frequency close to $\omega_r$. In the literature, this second drive is often referred to as the probe or measurement tone, while the spectroscopy drive is also known as the pump tone. As shown by \cref{eq:phiDispersive}, the phase of the transmitted probe tone is related to the qubit population. In particular, with the probe tone at the bare cavity frequency and in the weak dispersive limit $\chi\ll\kappa$, this phase is simply proportional to the qubit population, $\phi_\mathrm{s} = \arctan(2\chi \av{\sz{}}_\mathrm{s}/\kappa) \approx 2\chi\av{\sz{}}_\mathrm{s}/\kappa$. Monitoring $\phi_\mathrm{s}$ as a function of the spectroscopy tone frequency therefore directly reveals the Lorentzian qubit lineshape~\cite{Schuster2005}.

\subsubsection{AC-Stark shift and measurement-induced broadening}
\label{sec:AcStark_MeasIndDephasing}

\begin{figure}
\caption{Excited state population as a function of the qubit drive frequency. (a) Dispersive regime with $\chi/2\pi = 0.1$ MHz and (b) strong dispersive limit with $\chi/2\pi = 5$ MHz. The resolved peaks correspond to different cavity photon numbers. The spectroscopy drive amplitude is fixed to $\Omega_R/2\pi = 0.1$ MHz and the damping rates to $\gamma_1/2\pi = \kappa/2\pi = 0.1$ MHz. In panel (a) the measurement drive is on resonance with the bare cavity frequency, with amplitude $\epsilon/2\pi =$ (0, 0.2, 0.4) MHz for the light blue, blue and dark blue line, respectively. In panel (b) the measurement drive is at the pulled cavity frequency $\omega_r-\chi$ with amplitude $\epsilon/2\pi = 0.1$ MHz.}
\label{fig:AcStarkShift}
\end{figure}

In the above discussion, we have implicitly assumed that the amplitude of the measurement tone is such that the intra-cavity photon population is vanishingly small, $\av{\hat a^\dag\hat a}\rightarrow 0$. As is made clear by \cref{eq:HQubitDispersiveSimpleACStark}, an increase in photon population leads to a qubit frequency shift by an average value of $2 \chi \av{\hat a^\dag\hat a}$. \Cref{fig:AcStarkShift}(a) shows this ac-Stark shift in the steady-state qubit population as a function of spectroscopy frequency for three different probe drive powers. Taking advantage of the dependence of the qubit frequency on measurement power, prior knowledge of the value of $\chi$ allows one to infer the intra-cavity photon number as a function of input measurement power from such measurements~\cite{Schuster2005}. However, care must be taken since the linear dependence of the qubit frequency on power predicted in \cref{eq:HQubitDispersiveSimple} is only valid well inside the dispersive regime or, more precisely, at small $\bar n/n_\mathrm{crit}$. We come back to this shortly.

As is apparent from \cref{fig:AcStarkShift}(a), in addition to causing a frequency shift of the qubit, the cavity photon population also causes a broadening of the qubit linewidth. This can be understood simply by considering again the form of $\hat H_\text{disp}$ in \cref{eq:HQubitDispersiveSimpleACStark}.
Indeed, while in the above discussion we considered only the \emph{average} qubit frequency shift, $2\chi\av{\hat a^\dag\hat a}$, the actual shift is rather given by $2\chi\hat a^\dag\hat a$ such that the full photon-number distribution is important. As a result, when the cavity is prepared in a coherent state by the measurement tone, each Fock state $\ket n$ of the coherent field leads to its own qubit frequency shift $2\chi n$. In the weak dispersive limit corresponding to $\chi/\kappa$ small, the observed qubit lineshape is thus the result of the inhomogeneous broadening due to the Poisson statistics of the coherent state populating the cavity. This effect becomes more apparent as the average measurement photon number $\bar n$ increases and results in a crossover from a Lorentzian qubit lineshape whose linewidth scales with $\bar n$ to a Gaussian lineshape whose linewidth rather scales as $\sqrt{\bar n}$~\cite{Schuster2005,Gambetta2006}. This square-root dependence can be traced to the coherent nature of the cavity field. For a thermal cavity field, a $\bar n (\bar n+1)$ dependence is rather expected and observed~\cite{Bertet2005,Kono2017}.

This change in qubit linewidth due to photon shot noise in the coherent measurement tone populating the cavity can be interpreted as the unavoidable dephasing that a quantum system undergoes during measurement. Using a polaron-type transformation familiar from condensed-matter theory, the cavity can be integrated out of the qubit-cavity master equation and, in this way, the associated measurement-induced dephasing rate can be expressed in the dispersive regime as $\gamma_\mathrm{m}(t) = 2\chi\mathrm{Im}[\alpha_g(t)\alpha_e^*(t)]$, where $\alpha_{g/e}(t)$ are the time-dependent coherent state amplitudes associated with the two qubit states~\cite{Gambetta2008}. In the long time limit, the above rate can be expressed in the more intuitive form $\gamma_\mathrm{m} = \kappa|\alpha_e-\alpha_g|^2/2$, where $\alpha_e-\alpha_g$ is the distance between the two steady-state pointer states~\cite{Gambetta2008}. Unsurprisingly, measurement-induced dephasing is faster when the pointer states are more easily distinguishable and the measurement thus more efficient. This last expression can also be directly obtained from the entangled qubit-pointer state \cref{eq:entangledQubitPointer} whose coherence decays, at short times, at the rate $\gamma_\mathrm{m}$ under photon loss~\cite{Haroche2006}. Using the expressions \cref{eq:alpha_g,eq:alpha_e} for the pointer state amplitudes, $\gamma_\mathrm{m}$ can be expressed as
\begin{equation}\label{eq:MeasurementInducedDephasingRate}
\gamma_\mathrm{m} = \frac {\kappa \chi ^2 (\bar n_g + \bar n_e)} {\delta_r^2+\chi^2 + (\kappa/2)^2},
\end{equation}
with $\bar n_\sigma = |\alpha_\sigma|^2$ the average cavity photon number given that the qubit is in state $\sigma$. The distinction between $\bar n_g$ and $\bar n_e$ is important if the measurement drive is not symmetrically placed between the two pulled cavity frequencies corresponding to the two qubit states. Taking $\delta_r = \omega_r-\omega_\mathrm{d} = 0$ and thus $\bar n_g = \bar n_e \equiv \bar n$ for a two-level system, the measurement-induced dephasing rate takes, in the small $\chi/\kappa$ limit, the simple form $\gamma_\mathrm{m} \sim 8 \chi^2 \bar n / \kappa$. Thus, as announced above, the qubit linewidth scales with $\bar n$. With the cautionary remarks that will come below, measuring this linewidth versus the drive power is thus another way to infer $\bar n$ experimentally.
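As a numerical illustration of \cref{eq:MeasurementInducedDephasingRate} and of its small-$\chi/\kappa$ limit, the sketch below evaluates $\gamma_\mathrm{m}$ for a drive at the bare cavity frequency ($\delta_r=0$, $\bar n_g=\bar n_e=\bar n$); all parameter values are hypothetical.
\begin{verbatim}
import numpy as np

# Measurement-induced dephasing rate [Eq. (MeasurementInducedDephasingRate)]
# for a drive at the bare cavity frequency (delta_r = 0), compared with the
# small chi/kappa limit 8*chi^2*nbar/kappa.  Parameters are hypothetical.
kappa = 2*np.pi*5e6
nbar  = 2.0                               # nbar_g = nbar_e for delta_r = 0

for chi in 2*np.pi*np.array([0.1e6, 0.5e6, 2.5e6]):
    gamma_m = kappa*chi**2*(nbar + nbar)/(chi**2 + (kappa/2)**2)
    approx  = 8*chi**2*nbar/kappa
    print(f"chi/kappa = {chi/kappa:.2f}: "
          f"gamma_m/2pi = {gamma_m/2/np.pi/1e3:.1f} kHz, "
          f"small-chi limit = {approx/2/np.pi/1e3:.1f} kHz")
\end{verbatim}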
So far, we have been concerned with the small $\chi/\kappa$ limit. However, given the strong coupling and high-quality factor that can be experimentally realized in circuit QED, it is also interesting to consider the opposite limit where $\chi/\kappa$ is large. A first consequence of this strong dispersive regime, illustrated in \cref{fig:AcStarkShift}(b), is that the qubit frequency shift per photon can then be large enough to be resolved spectroscopically~\cite{Gambetta2006,Schuster2007a}. More precisely, this occurs if $2\chi$ is larger than $\gamma_2 + (\bar n + n)\kappa/2$, the width of the $n$th photon peak~\cite{Gambetta2006}. Moreover, the amplitude of each spectroscopic line is a measure of the probability of finding the corresponding photon number in the cavity. Using this idea, it is possible, for example, to experimentally distinguish between coherent and thermal population of the cavity~\cite{Schuster2007a}. This strong dependence of the qubit frequency on the exact photon number also allows for conditional qubit-cavity logical operations where, for example, a microwave pulse is applied such that the qubit state is flipped if and only if there are $n$ photons in the cavity~\cite{Johnson2010}. Although challenging, this strong dispersive limit has also been achieved in some cavity QED experiments \cite{Gleyzes2007,Guerlin2007}. This regime has also been achieved in hybrid quantum systems, for example in phonon-number resolving measurements of nanomechanical oscillators \cite{Arrangoiz-Arriola2019,Sletten2019} and magnon-number resolving measurements \cite{Lachance-Quirion2017}.

We now come back to the question of inferring the intra-cavity photon number from ac-Stark shift or qubit linewidth broadening measurements. As mentioned previously, the linear dependence of the ac-Stark shift on the measurement drive power predicted from the dispersive Hamiltonian \cref{eq:HQubitDispersiveSimple} is only valid at small $\bar n/n_\mathrm{crit}$. Indeed, because of higher-order corrections, the cavity pull itself is not constant with $\bar n$ but rather decreases with increasing $\bar n$~\cite{Gambetta2006}. This change in cavity pull is illustrated in \cref{fig:NLCavityPull}(a) which shows the effective resonator frequency given that the qubit is in state $\sigma$ as a function of drive amplitude, $\omega_{\mathrm{r}\sigma}(n)= (E_{\overline{\sigma,n+1}} - E_{\overline{\sigma,n}})/\hbar$, with $E_{\overline{\sigma,n+1}}$ the dressed state energies defined in \cref{eq:JCEnergies}~\cite{Boissonneault2010}. At very low drive amplitude, the cavity frequency is pulled to the expected value $\omega_r\pm\chi$ depending on the state of the qubit. As the drive amplitude increases, and with it the intra-cavity photon number, the pulled cavity frequency goes back to its bare value $\omega_r$. Panels (b) and (c) show the pulled frequencies taking into account three and six transmon levels, respectively. In contrast to the two-level approximation and as expected from \cref{eq:HTransmonDispersiveSW}, in this many-level situation the symmetry that was present in the two-level case is broken and the pulled frequencies are not symmetrically placed around $\omega_r$. We note that this change in effective cavity frequency is at the heart of the high-power readout already discussed in \cref{sec:SNR}.
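A simple toy model is often sufficient to visualize the photon-number splitting described above: the qubit spectrum is approximated by a sum of lines at the shifted frequencies $\omega_{q}+2\chi n$, weighted by the Poisson distribution of the coherent measurement field. The sketch below implements this simple picture only; it is not the full theory of the lineshape, and the Lorentzian profile as well as all parameter values are illustrative assumptions.
\begin{verbatim}
import numpy as np
from math import factorial

# Toy model of the photon-number-split qubit spectrum in the strong
# dispersive regime: lines at omega_q + 2*chi*n weighted by the Poisson
# distribution of a coherent measurement field.  The Lorentzian profile of
# half-width gamma2 and the parameter values are illustrative assumptions.
gamma2 = 2*np.pi*0.1e6
chi    = 2*np.pi*5e6
nbar   = 2.0                                     # mean measurement photon number
delta  = np.linspace(-5e6, 65e6, 8001)*2*np.pi   # detuning from the bare qubit

spec = np.zeros_like(delta)
for n in range(20):
    weight = np.exp(-nbar)*nbar**n/factorial(n)  # probability of n photons
    spec += weight*gamma2**2/((delta - 2*chi*n)**2 + gamma2**2)

# Peaks appear at detunings 2*chi*n, with heights set by P(n).
peaks = np.where((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]))[0] + 1
print(np.round(delta[peaks]/(2*chi)).astype(int))   # -> photon numbers 0,1,2,...
\end{verbatim}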
\begin{figure}
\caption{Change of effective resonator frequency $\omega_{r\sigma}$ with increasing drive amplitude.}
\label{fig:NLCavityPull}
\end{figure}

Because of this change in cavity pull, which can be interpreted as $\chi$ itself changing with photon numbers, the ac-Stark shift and the measurement-induced dephasing do not necessarily follow the simple linear dependence expected from $\hat H_\text{disp}$. For this reason, it is only possible to safely infer the intra-cavity photon number from measurement of the ac-Stark shift or qubit linewidth broadening at small photon number. It is worth noting that, in some cases, the reduction in cavity pull can move the cavity frequency closer to the drive frequency, thereby leading to an increase in cavity population. For some system parameters, these two nonlinear effects -- reduction in cavity pull and increase in cavity population -- can partly compensate each other, leading to an \emph{apparent} linear dependence of the qubit ac-Stark shift with power~\cite{Gambetta2006}. We can only repeat that care must be taken when extracting the intra-cavity photon number in the dispersive regime.

\subsection{Beyond strong coupling: ultrastrong coupling regime}
\label{sec:ultrastrong}

We have discussed consequences of the strong coupling, $g> \kappa, \gamma_2$, and strong dispersive, $\chi> \kappa, \gamma_2$, regimes which can both easily be realized in circuit QED. Although light-matter interaction has important consequences in both of these regimes, $g$ remains small with respect to the system frequencies, $\omega_r,\omega_{q} \gg g$, a fact that allowed us to safely drop counter-rotating terms from~\cref{eq:HTransmonRabi}. In the case of a two-level system this allowed us to work with the Jaynes-Cummings Hamiltonian~\cref{eq:HJC}. The situation where these terms can no longer be neglected is known as the ultrastrong coupling regime. As discussed in \cref{sec:TransmonOscillator}, the relative smallness of $g$ with respect to the system frequencies can be traced to \cref{eq:g_alpha} where we see that $g/\omega_r \propto \sqrt\alpha$, with $\alpha\sim1/137$ the fine-structure constant. This is, however, not a fundamental limit and it is possible to take advantage of the flexibility of superconducting quantum circuits to engineer situations where light-matter coupling rather scales as $\propto 1/\sqrt\alpha$. In this case, the smallness of $\alpha$ now helps boost the coupling rather than constraining it. A circuit realizing this idea was first proposed in \cite{Devoret2007} and is commonly known as the in-line transmon. It simply consists of a transmission line resonator whose center conductor is interrupted by a Josephson junction. Coupling strengths as large as $g/\omega_r \sim 0.15$ can in principle be obtained in this way but increasing this ratio further can be challenging because it is done at the expense of reducing the transmon anharmonicity~\cite{Bourassa2012}. An alternative approach relies on galvanically coupling a flux qubit to the center conductor of a transmission-line resonator. In this configuration, light-matter coupling can be made very large by increasing the impedance of the center conductor of the resonator in the vicinity of the qubit, something that can be realized by interrupting the center conductor of the resonator by a Josephson junction or a junction array~\cite{Bourassa2009}. In this way, coupling strengths of $g/\omega_{q} \sim 1$ or larger can be achieved.
These ideas were first realized in \cite{Niemczyk2010,FornDiaz2010} with $g/\omega_{q} \sim 0.1$ and more recently with coupling strengths as large as $g/\omega_{q} \sim 1.34$ \cite{Yoshihara2016}. Similar results have also been obtained in the context of waveguide QED where the qubit is coupled to an open transmission line rather than to a localized cavity mode~\cite{Forn-Diaz2016b}. A first consequence of reaching this ultrastrong coupling regime is that, in addition to a Lamb shift $g^2/\Delta$, the qubit transition frequency is further modified by the so-called Bloch-Siegert shift of magnitude $g^2/(\omega_{q}+\omega_r)$~\cite{bloch:1940a}. Another consequence is that the ground state of the combined system is no longer the factorizable state $\ket{g0}$ but is rather an entangled qubit-resonator state. An immediate implication of this observation is that the master equation \cref{eq:JCMasterEquation}, whose steady-state is $\ket{g0}$, is not an appropriate description of damping in the ultrastrong coupling regime~\cite{Beaudoin2011}. The reader interested in learning more about this regime of light-matter interaction can consult the reviews by \textcite{FornDiaz2019} and \textcite{Kockum2019}.

\section{\label{sec:quantumcomputing}Quantum computing with circuit QED}

\begin{figure*}
\caption{False-colored optical microscope image of a four-transmon device. The transmon qubits are shown in yellow, the coupling resonators in cyan, the flux lines for single-qubit tuning in green, the charge lines for single-qubit manipulation in pink, and a common feedline for multiplexed readout in purple, with transmission line resonators for dispersive readout (red) employing Purcell filters (blue). Adapted from \textcite{Andersen2019}.}
\label{fig:CQED_chip}
\end{figure*}

One of the reasons for the rapid growth of circuit QED as a field of research is its prominent role in quantum computing. The transmon is today the most widely used superconducting qubit, and the dispersive measurement described in~\cref{sec:readout} is the standard approach to qubit readout. Moreover, the capacitive coupling between transmons that are fabricated in close proximity can be used to implement two-qubit gates. Alternatively, the transmon-resonator electric dipole interaction can also be used to implement such gates between qubits that are separated by distances as large as a centimeter, the resonator acting as a quantum bus to mediate qubit-qubit interactions. As illustrated in \cref{fig:CQED_chip}, realizing a quantum computer architecture, even of modest size, requires bringing together in a single working package essentially all of the elements discussed in this review. In this section, we describe the basic principles behind one- and two-qubit gates in circuit QED. Our objective is not to give a complete overview of the many different gates and gate-optimization techniques that have been developed. We rather focus on the key aspects of how light-matter interaction facilitates coherent quantum operations for superconducting qubits, and describe some of the more commonly used gates to illustrate the basic principles. Unless otherwise noted, in this section we will assume the qubits to be dispersively coupled to the resonator.

\subsection{Single-qubit control}
\label{subsec:SingleQubitGates}

Arbitrary single-qubit rotations can be realized in an NMR-like fashion with voltage drives at the qubit frequency~\cite{Blais2004,Blais2007}. One approach is to drive the qubit via one of the resonator ports~\cite{Wallraff2005}.
Because of the large qubit-resonator detuning, a large fraction of the input power is reflected at the resonator, something that can be compensated by increasing the power emitted by the source. The reader will recognize that this approach is very similar to a qubit measurement but, because of the very large detuning, with $\delta_r \gg \chi$ such that $\alpha_e - \alpha_g \sim 0$ according to \cref{eq:apha_eg_s}. As illustrated in \cref{fig:DispersiveCavityPull}, this far off-resonance drive therefore causes negligible measurement-induced dephasing~\cite{Blais2007}. We also note that in the presence of multiple qubits coupled to the same resonator, it is important that the qubits be sufficiently detuned in frequency from each other to prevent the control drive intended for one qubit from inadvertently affecting the other qubits.

Given this last constraint, an often more convenient approach, already illustrated in \cref{fig:dispersivenoise}, is to capacitively couple the qubit to an additional transmission line from which the control drives are applied. Of course, the coupling to this additional control port must be small enough to avoid any impact on the qubit relaxation time. Following~\cref{sec:drives}, the amplitude of the drive as seen by the qubit is given by $\varepsilon = -i\sqrt{\gamma}\beta$, where $\beta$ is the amplitude of the drive at the input port, and $\gamma$ is set by the capacitance between the qubit and the transmission line. A small $\gamma$, corresponding to a long relaxation time, can be compensated by increasing the drive amplitude $|\beta|$, while making sure that any heating due to power dissipation close to the qubit does not affect qubit coherence. Design guidelines for wiring, an overview of the power dissipation induced by drive fields in qubit drive lines, and their effect on qubit coherence are discussed, for example, in \textcite{Krinner2019}.

Similarly to \cref{eq:DriveOnCavity}, a coherent drive of time-dependent amplitude $\varepsilon(t)$, frequency $\omega_\mathrm{d}$ and phase $\phi_\mathrm{d}$ on a transmon is then modeled by
\begin{equation}\label{eq:singlequbitdrive}
\hat H(t) = \hat H_\mathrm{q} + \hbar \varepsilon(t)\left(\hat b^\dagger e^{-i\omega_\mathrm{d} t -i\phi_\mathrm{d}} + \hat b e^{i\omega_\mathrm{d} t +i\phi_\mathrm{d}}\right),
\end{equation}
where $\hat H_\mathrm{q} = \hbar \omega_q \hat b^\dagger \hat b - \frac{E_C}{2}(\hat b^\dagger)^2 \hat b^2$ is the transmon Hamiltonian. Going to a frame rotating at $\omega_\mathrm{d}$, $\hat H(t)$ takes the simpler form
\begin{equation}\label{eq:singlequbitdrive_rotatingframe}
\hat H' = \hat H_\mathrm{q}' + \hbar \varepsilon(t)\left(\hat b^\dagger e^{-i\phi_\mathrm{d}} + \hat b e^{i\phi_\mathrm{d}}\right),
\end{equation}
where $\hat H_\mathrm{q}' = \hbar \delta_q \hat b^\dagger \hat b - \frac{E_C}{2}(\hat b^\dagger)^2 \hat b^2$ with $\delta_q = \omega_q-\omega_\mathrm{d}$ the detuning between the qubit and the drive frequencies. Truncating to two levels of the transmon as in \cref{eq:HJC}, $\hat H'$ takes the form
\begin{equation}\label{eq:H_qubit_drive_TLS}
\hat H' = \frac{\hbar \delta_q}{2} \sz{} + \frac{\hbar \Omega_R(t)}{2} \left[\cos(\phi_\mathrm{d})\sx{} + \sin(\phi_\mathrm{d})\sy{}\right],
\end{equation}
where we have introduced the standard notation $\Omega_R = 2\varepsilon$ for the Rabi frequency. This form of $\hat H'$ makes it clear how the phase of the drive, $\phi_\mathrm{d}$, controls the axis of rotation on the qubit Bloch sphere.
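This is easily checked numerically by exponentiating the two-level Hamiltonian of \cref{eq:H_qubit_drive_TLS} for $\delta_q=0$ and $\hbar=1$; in the sketch below (hypothetical Rabi frequency and pulse length), a $\pi$ pulse with $\phi_\mathrm{d}=0$ leaves an eigenstate of $\hat\sigma_x$ unchanged up to a phase, while the same pulse with $\phi_\mathrm{d}=\pi/2$ maps it onto the orthogonal state.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Rotation axis set by the drive phase phi_d in Eq. (H_qubit_drive_TLS),
# for delta_q = 0 and hbar = 1.  Omega_R and the pulse length are chosen
# (hypothetically) such that Omega_R*t = pi, i.e. a pi pulse.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
Omega_R, t = 2*np.pi*20e6, 25e-9          # Omega_R*t = pi

def U(phi_d):
    H = 0.5*Omega_R*(np.cos(phi_d)*sx + np.sin(phi_d)*sy)
    return expm(-1j*H*t)

plus_x = np.array([1, 1], dtype=complex)/np.sqrt(2)   # +1 eigenstate of sigma_x
print(np.round(U(0.0) @ plus_x, 3))       # unchanged up to a phase: X rotation
print(np.round(U(np.pi/2) @ plus_x, 3))   # orthogonal state reached: Y rotation
\end{verbatim}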
Indeed, for $\delta_q=0$, the choice $\phi_\mathrm{d}=0$ leads to rotations around the $X$-axis while $\phi_\mathrm{d}=\pi/2$ leads to rotations around the $Y$-axis. Since any rotation on the Bloch sphere can be decomposed into $X$ and $Y$ rotations, arbitrary single-qubit control is therefore possible using sequences of resonant drives with appropriate phases.

Implementing a desired gate requires turning on and off the drive amplitude. To realize as many logical operations as possible within the qubit coherence time, the gate time should be as short as possible, and square pulses are optimal from that point of view. In practice, however, such pulses suffer from significant deformation as they propagate down the finite-bandwidth transmission line from the source to the qubit. Moreover, for a weakly anharmonic multi-level system such as a transmon, high-frequency components of the square pulse can cause unwanted transitions to levels outside the two-level computational subspace. This leakage can be avoided by using smooth (e.g.~Gaussian) pulses, but this leads to longer gate times. Another solution is to shape the pulse so as to remove the unwanted frequency components. A widely used approach that achieves this is known as Derivative Removal by Adiabatic Gate (DRAG). It is based on driving the two quadratures of the qubit with the envelope of the second quadrature chosen to be the time derivative of the envelope of the first quadrature~\cite{Motzoi2009,Gambetta2011a}. More generally, one can cast the problem of finding an optimal drive as a numerical optimization problem which can be tackled with optimal control approaches such as GRadient Ascent Pulse Engineering (GRAPE)~\cite{Khaneja2005}.

\begin{figure}
\caption{Single-qubit gate error as a function of gate time with and without DRAG pulse shaping \cite{Chow2010}.}
\label{fig:DRAG}
\end{figure}

Experimental results from~\textcite{Chow2010} comparing the error in single-qubit gates with and without DRAG are shown in~\cref{fig:DRAG}. At long gate times, decoherence is the dominant source of error such that both Gaussian and DRAG pulses initially improve as the gate time is reduced. However, as the pulses get shorter and their frequency bandwidth becomes comparable to the transmon anharmonicity, leakage leads to large errors for the Gaussian pulses. In contrast, the DRAG results continue to improve as gates are made shorter and are consistent with a two-level system model of the transmon. These observations show that small anharmonicity is not a fundamental obstacle to fast and high-fidelity single-qubit gates. Indeed, thanks to pulse shaping techniques and long coherence times, state of the art single-qubit gate errors are below $10^{-3}$, well below the threshold for topological error correcting codes~\cite{Barends2014,Chen2016}.

While rotations about the $Z$ axis can be realized by concatenating the $X$ and $Y$ rotations described above, several other approaches are used experimentally. Working in a rotating frame as in \cref{eq:H_qubit_drive_TLS} with $\delta_q = 0$, one alternative method relies on changing the qubit transition frequency such that $\delta_q\neq 0$ for a determined duration. In the absence of drive, $\Omega_R = 0$, this leads to phase accumulation by the qubit state and therefore to a rotation about the $Z$ axis. As discussed in~\cref{sec:tunabletransmon}, fast changes of the qubit transition frequency are possible by, for example, applying a magnetic field to a flux-tunable transmon. However, working with flux-tunable transmons is done at the cost of making the qubit susceptible to dephasing due to flux noise.
To avoid this, the qubit transition frequency can also be tuned without relying on a flux-tunable device by applying a strongly detuned microwave tone on the qubit. For $\Omega_R/\delta_q \ll 1$, this drive does not lead to Rabi oscillations but induces an ac-Stark shift of the qubit frequency due to virtual transitions caused by the drive~\cite{Blais2007}. Indeed, as shown in \cref{sec:AppendixDrivenTransmon}, to second order in $\Omega_R/\delta_q$ and assuming for simplicity a constant drive amplitude, this situation is described by the effective Hamiltonian
\begin{equation}\label{eq:HacStarkDrive}
\begin{aligned}
\hat H'' \simeq{}& \half \left(\hbar\omega_q - \frac{E_C}{2} \frac{\Omega_R^2}{\delta_q^2}\right) \sz{}.
\end{aligned}
\end{equation}
The drive-dependent shift can be turned on and off with the amplitude of the detuned microwave drive and can therefore be used to realize $Z$ rotations. Finally, since the $X$ and $Y$ axes in \cref{eq:H_qubit_drive_TLS} are defined by the phase $\phi_\mathrm{d}$ of the drive, a particularly simple approach to realize a $Z$ gate is to add the desired phase offset to the drive fields of all subsequent $X$ and $Y$ rotations and two-qubit gates. This so-called virtual $Z$-gate can be especially useful if the computation is optimized to use a large number of $Z$-rotations~\cite{McKay2017}.

\subsection{Two-qubit gates}

\begin{figure*}
\caption{Schematic illustration of the main approaches used to realize two-qubit gates in circuit QED, as discussed in the text.}
\label{fig:2QubitGates}
\end{figure*}

Two-qubit gates are generally more challenging to realize than single-qubit gates. Error rates for current two-qubit gates are typically around one to a few percent, which is an order of magnitude higher than those of single-qubit gates. Recent experiments are, however, closing this gap~\cite{Foxen2020}. Improving two-qubit gate fidelities at short gate times is a very active area of research, and a wide variety of approaches have been developed. A key challenge in realizing two-qubit gates is the ability to rapidly turn interactions on and off. While for single-qubit gates this is done by simply turning on and off a microwave drive, two-qubit gates require turning on a coherent qubit-qubit interaction for a fixed time. Achieving large on/off ratios is far more challenging in this situation.

Broadly speaking, one can divide two-qubit gates into different categories depending on how the qubit-qubit interaction is activated. The main approaches discussed in the following are illustrated schematically in~\cref{fig:2QubitGates}. An important distinction between these different schemes is whether they rely on frequency-tunable qubits or not. Frequency tunability is convenient because it can be used to controllably tune qubits into resonance with one another or with a resonator. Using flux-tunable transmons has led to some of the fastest and highest-fidelity two-qubit gates to date, see \cref{fig:2QubitGates}(a,b) \cite{Barends2014,Chen2014m,Arute2019}. However, as mentioned previously, this leads to additional qubit dephasing due to flux noise. An alternative is all-microwave gates, which only use microwave drives, either on the qubits or on a coupler bus such as a resonator, to activate an effective qubit-qubit interaction, see \cref{fig:2QubitGates}(c). Finally, yet another category of gates is parametric gates, where a system parameter is modulated in time at a frequency which bridges an energy gap between the states of two qubits.
Parametric gates can be all-microwave but, in some instances, involve modulating system frequencies using external magnetic flux, see \cref{fig:2QubitGates}(d).

\subsubsection{Qubit-qubit exchange interaction}

\paragraph{Direct capacitive coupling}\label{sec:DirectCoupling}

One of the conceptually simplest ways to realize two-qubit gates is through direct capacitive coupling between the qubits, see \cref{fig:2QubitGates}(a). In analogy with~\cref{eq:HTransmonJC}, the Hamiltonian describing this situation reads
\begin{equation}\label{eq:gates:H_swap}
\hat H = \hat H_{q1} + \hat H_{q2} + \hbar J (\hat b^\dag_1\hat b_2+\hat b_1\hat b^\dag_2),
\end{equation}
where $\hat H_{qi} = \hbar \omega_{qi}\hat b_i^\dagger \hat b_i - E_{C_i} (\hat b_i^\dagger)^2\hat b_i^2/2$ is the Hamiltonian of the $i$th transmon and $\hat b_i$ the corresponding annihilation operator. The interaction amplitude $J$ takes the form
\begin{equation}\label{eq:gates:twoqubitg}
\hbar J = \frac{2 E_{C1}E_{C2}}{E_{C_c}}\left(\frac{E_{J1}}{2E_{C1}} \times \frac{E_{J2}}{2E_{C2}}\right)^{1/4},
\end{equation}
with $E_{Ji}$ and $E_{Ci}$ the transmon Josephson and charging energies, and $E_{C_c} = e^2/2C_c$ the charging energy of the coupling capacitance labelled $C_c$. This beam-splitter Hamiltonian describes the coherent exchange of an excitation between the two qubits. In the two-level approximation, assuming the qubits to be tuned into resonance, $\omega_{q1} = \omega_{q2}$, and moving to a frame rotating at the qubit frequency, \cref{eq:gates:H_swap} takes the familiar form
\begin{equation}
\hat H' = \hbar J (\hat \sigma_{+1}\hat \sigma_{-2} + \hat \sigma_{-1}\hat \sigma_{+2}).
\end{equation}
Evolution under this Hamiltonian for a time $\pi/(4J)$ leads to an entangling $\sqrt{i\text{SWAP}}$ gate which, up to single-qubit rotations, is equivalent to a controlled NOT gate (CNOT)~\cite{Loss1998}.

As already mentioned, to precisely control the evolution under $\hat H'$ and therefore the gate time, it is essential to be able to vary the qubit-qubit interaction with a large on/off ratio. There are essentially two approaches to realizing this. The most straightforward way is to tune the qubits in resonance to perform a two-qubit gate, and to strongly detune them to stop the coherent exchange induced by $\hat H'$ \cite{Blais2003a,Dewes2012,Bialczak2010}. Indeed, for $J/\Delta_{12} \ll 1$ where $\Delta_{12} = \omega_{q1}-\omega_{q2}$ is the detuning between the two qubits, the coherent exchange $J$ is suppressed and can be dropped from \cref{eq:gates:H_swap} under the RWA. A more careful analysis following the same arguments and approach used to describe the dispersive regime (cf. \cref{sec:dispersive}) shows that, to second order in $J/\Delta_{12}$, a residual qubit-qubit interaction of the form $(J^2/\Delta_{12}) \sz{1}\sz{2}$ remains. This unwanted interaction in the off state of the gate leads to a conditional phase accumulation on the qubits. As a result, the on-off ratio of this direct coupling gate is estimated to be $\sim \Delta_{12}/J$. In practice, this ratio cannot be made arbitrarily large because increasing the detuning of one pair of qubits in a multi-qubit architecture might lead to accidental resonance with a third qubit. This direct coupling approach was implemented by \textcite{Barends2014} using frequency-tunable transmons with a coupling $J/2\pi = 30$ MHz and an on/off ratio of 100.
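To attach numbers to these expressions, the sketch below evaluates the $\sqrt{i\text{SWAP}}$ gate time $\pi/(4J)$ and the residual $\sz{1}\sz{2}$ coupling $J^2/\Delta_{12}$, using the coupling strength quoted above and an off-state detuning corresponding to the quoted on/off ratio of 100.
\begin{verbatim}
import numpy as np

# Gate time and residual interaction for the direct-coupling sqrt(iSWAP) gate.
# J/2pi = 30 MHz is the value quoted in the text; the off-state detuning
# Delta_12 is chosen to correspond to an on/off ratio of 100.
J        = 2*np.pi*30e6
Delta_12 = 100*J                       # on/off ratio Delta_12/J = 100

t_gate      = np.pi/(4*J)              # sqrt(iSWAP) gate time
residual_zz = J**2/Delta_12            # residual sigma_z sigma_z coupling

print(f"gate time       = {t_gate*1e9:.1f} ns")                # ~ 4.2 ns
print(f"residual ZZ/2pi = {residual_zz/2/np.pi/1e3:.0f} kHz")  # ~ 300 kHz
\end{verbatim}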
We note that the unwanted phase accumulation due to the residual $\sz{1}\sz{2}$ can in principle be eliminated using refocusing techniques borrowed from nuclear magnetic resonance~\cite{Slichter95}. Another approach to turn on and off the swap interaction is to make the $J$ coupling itself tunable in time. This is conceptually simple, but requires more complex coupling circuitry typically involving flux-tunable elements that can open additional decoherence channels for the qubits. One advantage is, however, that tuning a coupler rather than qubit transition frequencies helps in reducing the frequency crowding problem. This approach is used, for example, by~\cite{Chen2014m} where two transmon qubits are coupled via a flux-tunable inductive coupler. In this way, it was possible to realize an on/off ratio of 1000, with a maximum coupling of 100 MHz corresponding to a $\sqrt{i\text{SWAP}}$ gate in 2.5 ns. A simpler approach based on a frequency tunable transmon qubit acting as coupler, as suggested in \textcite{Yan2018c}, was also used to tune qubit-qubit coupling from 5 MHz to $-40$ MHz going through zero coupling with a gate time of $\sim$ 12 ns and a gate infidelity of $\sim 0.5\%$ \cite{Arute2019}. \paragraph{Resonator mediated coupling} An alternative to the above approach is to use a resonator as a quantum bus mediating interactions between two qubits, see \cref{fig:2QubitGates}(b)~\cite{Blais2004,Blais2007,Majer2007}. An advantage compared to direct coupling is that the qubits do not have to be fabricated in close proximity to each other. With the qubits coupled to the same resonator, and in the absence of any direct coupling between the qubits, the Hamiltonian describing this situation is \begin{equation}\label{eq:gates:H_resonatormediated} \hat H = \hat H_{q1} + \hat H_{q2} + \hbar \omega_r \hat a^\dag\hat a + \sum_{i=1}^2\hbar g_i (\hat a^\dag\hat b_i+\hat a\hat b^\dag_i). \end{equation} One way to make use of this pairwise interaction is, assuming the resonator to be in the vacuum state, to first tune one of the two qubits in resonance with the resonator for half a vacuum Rabi oscillation cycle, swapping an excitation from the qubit to the resonator, before tuning it back out of resonance. The second qubit is then tuned in resonance mapping the excitation from the resonator to the second qubit \cite{Sillanpaa2007}. While this sequence of operations can swap the quantum state of the first qubit to the second, clearly demonstrating the role of the resonator as a quantum bus, it does not correspond to an entangling two-qubit gate. Alternatively, a two-qubit gate can be performed by only virtually populating the resonator mode by working in the dispersive regime where both qubits are far detuned from the resonator~\cite{Blais2004,Blais2007,Majer2007}. Building on the results of \cref{sec:dispersive}, in this situation the effective qubit-qubit interaction is revealed by using the approximate dispersive transformation $\hat U = \exp\left[\sum_i \frac{g_i}{\Delta_i}\left( \hat a^\dag\hat b_i - \hat a\hat b^\dag_i\right) \right]$ on \cref{eq:gates:H_resonatormediated}. 
Making use of the Baker-Campbell-Hausdorff expansion \cref{eq:BCH} to second order in $g_{i}/\Delta_i$, we find
\begin{equation}\label{eq:gates:H_swap2}
\begin{aligned}
\hat H' ={}& \hat H_{q1}' + \hat H_{q2}' + \hbar J (\hat b^\dag_1\hat b_2+\hat b_1\hat b^\dag_2)\\
& + \hbar \tilde{\omega}_r \hat a^\dagger \hat a + \sum_{i=1}^2 \hbar \chi_{ab_i} \hat a^\dagger \hat a \hat b_i^\dagger \hat b_i \\
& + \sum_{i\neq j}\hbar \Xi_{ij}\hat b^\dag_i\hat b_i\left( \hat b^\dag_j \hat b_i + \hat b^\dag_i \hat b_j \right),
\end{aligned}
\end{equation}
with $\hat H_{qi}' \simeq \hbar\tilde \omega_{qi}\hat b_i^\dagger \hat b_i - \frac{E_{Ci}}{2} (\hat b_i^\dagger)^2\hat b_i^2$ the transmon Hamiltonians and $\chi_{ab_i} \simeq - 2 E_{Ci} g_i^2/\Delta_i^2$ a cross-Kerr coupling between the resonator and the $i$th qubit. The frequencies $\tilde \omega_{qi}$ and $\tilde \omega_{r}$ include the Lamb shift. The last line can be understood as an excitation-number-dependent exchange interaction with $\Xi_{ij} = E_{Ci} g_ig_j/(2\Delta_i\Delta_j)$. Since this term is much smaller than the $J$-coupling it can typically be neglected. Note that we have not included a self-Kerr term of order $\chi_{ab_i}$ on the resonator. This term is of no practical consequence in the dispersive regime where the resonator is only virtually populated. The resonator-induced $J$ coupling in $\hat H'$ takes the form
\begin{equation}\label{eq:gates:J_mediated}
J = \frac{g_1g_2}{2}\left(\frac1\Delta_1 + \frac1\Delta_2\right)
\end{equation}
and reveals itself in the frequency domain by an anticrossing of size $2J$ between the qubit states $\ket{01}$ and $\ket{10}$. This is illustrated in \cref{fig:1102}(b) which shows the eigenenergies of the Hamiltonian \cref{eq:gates:H_resonatormediated} in the 1-excitation manifold. In this figure, the frequency of qubit 1 is swept while that of qubit 2 is kept constant at $\sim 8$ GHz with the resonator at $\sim 7$ GHz. From left to right, we first see the vacuum Rabi splitting of size $2g$ at $\omega_{q1} = \omega_r$, followed by a smaller anticrossing of size $2J$ at the qubit-qubit resonance. It is worth mentioning that the above expression for $J$ is only valid for single-mode oscillators and is renormalized in the presence of multiple modes~\cite{Filipp2011a,Solgun2019}.
\begin{figure}
\caption{\label{fig:1102} (a) Energy levels involved in the $11$-$02$ phase gate, with the coupling-induced repulsion $\zeta$ between the states $\ket{11}$ and $\ket{02}$. (b) Eigenenergies of \cref{eq:gates:H_resonatormediated} in the 1-excitation manifold as the frequency of qubit 1 is swept, showing the vacuum Rabi splitting $2g$ at $\omega_{q1}=\omega_r$ and the qubit-qubit anticrossing of size $2J$.}
\end{figure}
To understand the consequence of the $J$-coupling in the time domain, it is useful to note that, if the resonator is initially in the vacuum state, it will remain in that state under the influence of $\hat H'$. In other words, the resonator is only virtually populated by its dispersive interaction with the qubits. For this reason, with the resonator initialized in the vacuum state, the second line of~\cref{eq:gates:H_swap2} can for all practical purposes be ignored and we are back to the form of the direct coupling Hamiltonian of \cref{eq:gates:H_swap}. Consequently, when both qubits are tuned in resonance with each other, but still dispersive with respect to the resonator, the latter acts as a quantum bus mediating interactions between the qubits. An entangling gate can thus be performed in the same way as with direct capacitive coupling, either by tuning the qubits in and out of resonance with each other \cite{Majer2007} or by making the couplings $g_i$ tunable \cite{Gambetta2011,Srinivasan2011}.
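To make the size of this anticrossing concrete, the short sketch below (with assumed, illustrative parameters close to those quoted above, not data from any specific device) diagonalizes \cref{eq:gates:H_resonatormediated} in the 1-excitation manifold and compares the minimal qubit-qubit splitting to the perturbative prediction $2J$ of \cref{eq:gates:J_mediated}.
\begin{verbatim}
import numpy as np

wr, wq2 = 7.0, 8.0        # resonator and second-qubit frequencies (GHz, assumed)
g1 = g2 = 0.1             # qubit-resonator couplings (GHz, assumed)

def energies(wq1):
    # One-excitation manifold in the basis {|e,g,0>, |g,e,0>, |g,g,1>}
    H = np.array([[wq1, 0.0, g1],
                  [0.0, wq2, g2],
                  [g1,  g2,  wr]])
    return np.linalg.eigvalsh(H)   # eigenvalues sorted in ascending order

# Sweep the first qubit frequency through the qubit-qubit resonance and record
# the splitting between the two qubit-like branches (the two upper eigenvalues).
wq1_list = np.linspace(7.9, 8.1, 2001)
splittings = [np.diff(energies(w))[-1] for w in wq1_list]
print(f"numerical minimum splitting = {min(splittings)*1e3:.1f} MHz")

Delta1 = Delta2 = wq2 - wr        # qubit-resonator detunings at the resonance point
J = 0.5 * g1 * g2 * (1/Delta1 + 1/Delta2)
print(f"perturbative 2J = {2*J*1e3:.1f} MHz")   # ~20 MHz, in good agreement
\end{verbatim}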
\subsubsection{\label{sec:gates:1102} Flux-tuned 11-02 phase gate}
The $11$-$02$ phase gate is a controlled-phase gate that is well suited to weakly anharmonic qubits such as transmons~\cite{Strauch2003,DiCarlo2009}. It is obtained from the exchange interaction of~\cref{eq:gates:H_swap} and can thus be realized through direct (static or tunable) qubit-qubit coupling or indirect coupling via a resonator bus. In contrast to the $\sqrt{i\text{SWAP}}$ gate, the $11$-$02$ phase gate is not based on tuning the transition frequencies of the computational states into resonance with each other, but rather exploits the third energy level of the transmon. The $11$-$02$ gate relies on tuning the qubits to a point where the states $\ket{11}$ and $\ket{02}$ are degenerate in the absence of $J$ coupling. As illustrated in \cref{fig:1102}(a), the qubit-qubit coupling lifts this degeneracy by an energy $\zeta$ whose value can be found perturbatively \cite{DiCarlo2009}. Because of this repulsion caused by coupling to the state $\ket{02}$, the energy $E_{11}$ of the state $\ket{11}$ is smaller than $E_{01}+E_{10}$ by $\zeta$. Adiabatically flux tuning the qubits in and out of the $11$-$02$ anticrossing therefore leads to a conditional phase accumulation which is equivalent to a controlled-phase gate. To see this more clearly, it is useful to write the unitary corresponding to this adiabatic time evolution as
\begin{equation}\label{eq:gates:CZ_general}
\hat C_Z(\phi_{01},\phi_{10},\phi_{11}) = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & e^{i\phi_{01}} & 0 & 0 \\ 0 & 0 & e^{i\phi_{10}} & 0 \\ 0 & 0 & 0 & e^{i\phi_{11}} \end{array} \right),
\end{equation}
where $\phi_{ab} = \int dt\, E_{ab}(t)/\hbar$ is the dynamical phase accumulated over the total flux excursion. Up to single-qubit rotations, this is equivalent to a standard controlled-phase gate since
\begin{equation}
\begin{split}
\hat C_Z(\phi) &= \text{diag}(1,1,1,e^{i\phi})\\
&= \hat R_Z^1(-\phi_{10})\hat R^2_Z(-\phi_{01}) \hat C_Z(\phi_{01},\phi_{10},\phi_{11}),
\end{split}
\end{equation}
with $\phi=\phi_{11}-\phi_{01}-\phi_{10} = \int dt \, \zeta(t)/\hbar$ and where $\hat R^i_Z(\theta) = \text{diag}(1,e^{i\theta})$ is a single-qubit phase gate acting on qubit $i$. For $\phi \neq 0$ this is an entangling two-qubit gate and, in particular, for $\phi=\pi$ it is a controlled-$Z$ gate (CPHASE). Rather than adiabatically tuning the flux in and out of the $11$-$02$ resonance, an alternative is to non-adiabatically pulse to this anticrossing \cite{Strauch2003,DiCarlo2010,Yamamoto2010}. In this sudden approximation, the system is left at the anticrossing for a time $t$ during which the state $\ket{11}$ evolves into $\cos(\zeta t /2\hbar)\ket{11}+\sin(\zeta t /2\hbar)\ket{02}$. For $t = h/\zeta$, $\ket{11}$ is mapped back into itself but acquires a minus sign in the process. On the other hand, since they are far from any resonance, the other logical states evolve trivially. This therefore again results in a CPHASE gate. In this way, fast controlled-$Z$ gates are possible. For direct qubit-qubit coupling in particular, some of the fastest and highest fidelity two-qubit gates have been achieved this way with error rates below the percent level and gate times of a few tens of ns \cite{Barends2014,Chen2014m}. Despite its advantages, a challenge associated with this gate is the distortion of the flux pulses due to the finite bandwidth of the control electronics and lines.
In addition to modifying the waveform experienced by the qubit, this can lead to long time scale distortions where the flux at the qubit at a given time depends on the previous flux excursions. This situation can be partially solved by pre-distorting the pulses taking into account the known distortion, but also by adapting the applied flux pulses to take advantage of the symmetry around the transmon sweet-spot to cancel out unwanted contributions~\cite{Rol2019a}. \subsubsection{All-microwave gates} \label{sec:AllMicrowaveGates} Because the on/off ratio of the gates discussed above is controlled by the detuning between the qubits, it is necessary to tune the qubit frequencies over relatively large frequency ranges or, alternatively, to have tunable coupling elements. In both cases, having a handle on the qubit frequency or qubit-qubit coupling opens the system to additional dephasing. Moreover, changing the qubit frequency over large ranges can lead to accidental resonance with other qubits or uncontrolled environmental modes, resulting in energy loss. For these reasons, it can be advantageous to control two-qubit gates in very much the same way as single-qubit gates: by simply turning on and off a microwave drive. In this section, we describe two so-called all-microwave gates: the resonator-induced phase (RIP) gate and the cross-resonance (CR) gate. Both are based on fixed-frequency far off-resonance qubits with an always-on qubit-resonator coupling. The RIP gate is activated by driving a common resonator and the CR gate by driving one of the qubits. Other all-microwave gates which will not be discussed further here include the sideband-based iSWAP~\cite{Leek2009}, the bSWAP~\cite{Poletto2012}, the microwave-activated CPHASE~\cite{Chow2013} and the fg-ge gate \cite{Zeytinoglu2015,Egger2019}. \paragraph{Resonator-induced phase gate} The RIP gate relies on two strongly detuned qubits that are dispersively coupled to a common resonator mode. The starting point is thus \cref{eq:gates:H_swap2} where we now neglect the $J$ coupling by taking $|\omega_{q1}-\omega_{q2}| \gg J$. In the two-level approximation and accounting for a drive on the resonator, this situation is described by the Hamiltonian \begin{equation}\label{eq:gates:H_RIP} \begin{aligned} \hat H'/\hbar ={}& \frac{\tilde \omega_{q1}}{2} \hat \sigma_{z1} + \frac{\tilde \omega_{q2}}{2} \hat \sigma_{z2} + \tilde \omega_r \hat a^\dagger \hat a \\ +& \sum_{i=1}^2 \chi_{i} \hat a^\dagger \hat a \hat \sigma_{zi} + \varepsilon(t)(\hat a^\dagger e^{-i\omega_\mathrm{d} t} + \hat a e^{i\omega_\mathrm{d} t}), \end{aligned} \end{equation} where $\varepsilon(t)$ is the time-dependent amplitude of the resonator drive and $\omega_\mathrm{d}$ its frequency. Note that we also neglect the resonator self-Kerr nonlinearity. The gate is realized by adiabatically ramping on and off the drive $\varepsilon(t)$, such that the resonator starts and ends in the vacuum state. Crucially, this means that the resonator is unentangled from the qubits at the start and end of the gate. Moreover, to avoid measurement-induced dephasing, the drive frequency is chosen to be far from the cavity mode, $\tilde\delta_r = \tilde\omega_r-\omega_\mathrm{d} \gg \kappa$. Despite this strong detuning, the dispersive shift causes the resonator frequency to depend on the state of the two qubits and, as a result, the resonator field evolves in a closed path in phase space that is qubit-state dependent. 
This leads to a different phase accumulation for the different qubit states, and therefore to a controlled-phase gate of the form of \cref{eq:gates:CZ_general}. This conditional phase accumulation can be made more apparent by moving \cref{eq:gates:H_RIP} to a frame rotating at the drive frequency and by applying the polaron transformation $\hat U = \exp[\hat \alpha'(t) \hat a^\dagger - \hat \alpha^{*\prime}(t)\hat a]$ with $\hat\alpha'(t) = \alpha(t) - \sum_i \chi_i \hat \sigma_{zi}/\tilde\delta_r$ on the resulting Hamiltonian. This leads to the approximate effective Hamiltonian~\cite{Puri2016c}
\begin{equation}
\begin{split}
\hat H'' \simeq{}& \sum_i \hbar\left[\frac{\tilde\delta_{qi}}{2} + \chi_i|\alpha(t)|^2\right]\hat\sigma_{zi} + \hbar\tilde\delta_r \hat a^\dagger \hat a \\
&+ \sum_{i=1}^2 \hbar \chi_{i} \hat a^\dagger \hat a \hat \sigma_{zi} - \hbar\frac{2\chi_1\chi_2|\alpha(t)|^2}{\tilde\delta_r}\hat \sigma_{z1}\hat \sigma_{z2},
\end{split}
\end{equation}
with $\tilde\delta_x = \tilde\omega_x-\omega_\mathrm{d}$ and where the field amplitude $\alpha(t)$ satisfies $\dot\alpha = -i\tilde\delta_r\alpha - i\varepsilon(t)$. In this frame, it is clear how the resonator mediates a $\sz{1}\sz{2}$ interaction between the two qubits and therefore leads to a conditional phase gate. This expression also makes it clear that the need to avoid measurement-induced dephasing with $\tilde\delta_r\gg\kappa$ limits the effective interaction strength and therefore leads to relatively long gate times. This can, however, be mitigated by taking advantage of pulse shaping techniques~\cite{Cross2015} or by using squeezed radiation to erase the which-qubit information in the output field of the resonator~\cite{Puri2016c}. Similarly to the longitudinal readout protocol discussed in~\cref{sec:ReadoutOtherApproaches}, longitudinal coupling also offers a way to overcome many of the limitations of the conventional RIP gate~\cite{Royer2017}. Some of the advantages of this two-qubit gate are that it can couple qubits that are far detuned from each other and that it does not introduce significant leakage errors~\cite{Paik2016}. This gate was demonstrated by~\textcite{Paik2016} with multiple transmons coupled to a 3D resonator, achieving error rates of a few percent and gate times of several hundred nanoseconds.
\paragraph{Cross-resonance gate}
The cross-resonance gate is based on qubits that are detuned from each other and coupled by an exchange term $J$ of the form of \cref{eq:gates:H_swap} or \cref{eq:gates:H_swap2}~\cite{Rigetti2010,Chow2011}. While the RIP gate relies on off-resonant driving of a common oscillator mode, this gate is based on directly driving one of the qubits at the frequency of the other. Moreover, since the resonator is not directly used and, in fact, ideally remains in its vacuum state throughout the gate, the $J$ coupling can be mediated by a resonator or by direct capacitive coupling. In the two-level approximation and in the absence of the drive, this interaction takes the form
\begin{equation}\label{eq:H_ExchangeTLS}
\hat H = \frac{\hbar\omega_{q1}}{2}\sz{1} +\frac{\hbar\omega_{q2}}{2}\sz{2} +\hbar J (\spp{1}\smm{2}+\smm{1}\spp{2}).
\end{equation}
To see how this gate operates, it is useful to diagonalize $\hat H$ using the two-level system version of the transformation \cref{eq:UDispersive}. The result takes the same general form as \cref{eq:HDiagonalLinear} and \cref{eq:DressedFrequenciesDispersive}, after projecting to two levels.
In this frame, the presence of the $J$ coupling leads to a renormalization of the qubit frequencies which, for strongly detuned qubits, $|\Delta_{12}| = |\omega_{q1}-\omega_{q2}|\gg |J|$, take the values $\tilde\omega_{q1} \approx \omega_{q1} + J^2/\Delta_{12}$ and $\tilde\omega_{q2} \approx \omega_{q2} - J^2/\Delta_{12}$ to second order in $J/\Delta_{12}$. In the same frame, a drive on the first qubit, $\hbar\Omega_R(t)\cos (\omega_\mathrm{d} t) \sx{1}$, takes the form \cite{Chow2011}
\begin{equation}\label{eq:CrossResonanceTLS}
\begin{split}
&\hbar\Omega_R(t)\cos (\omega_\mathrm{d} t) \left( \cos\theta \sx{1} + \sin\theta\sz{1}\sx{2} \right)\\
&\approx \hbar\Omega_R(t)\cos (\omega_\mathrm{d} t) \left( \sx{1} + \frac{J}{\Delta_{12}} \sz{1}\sx{2} \right),
\end{split}
\end{equation}
with $\theta = \arctan(2J/\Delta_{12})/2$ and where the second line is valid to first order in $J/\Delta_{12}$. As a result, driving the first qubit at the frequency of the second qubit, $\omega_\mathrm{d} = \tilde\omega_{q2}$, activates the term $\sz{1}\sx{2}$ which can be used to realize a CNOT gate. More accurate expressions for the amplitude of the CR term $\sz{1}\sx{2}$ can be obtained by taking into account more levels of the transmons. In this case, the starting point is the Hamiltonian \cref{eq:gates:H_swap} with, as above, a drive term on the first qubit
\begin{equation}\label{eq:gates:H_CR}
\begin{aligned}
\hat H ={}& \hat H_{q1} + \hat H_{q2} + \hbar J (\hat b^\dag_1\hat b_2+\hat b_1\hat b^\dag_2)\\
&+ \hbar \varepsilon(t)(\hat b_1^\dagger e^{-i\omega_\mathrm{d} t} + \hat b_1 e^{i\omega_\mathrm{d} t}),
\end{aligned}
\end{equation}
where $\omega_\mathrm{d} \sim \omega_{q2}$. Similarly to the previous two-level system example, it is useful to eliminate the $J$-coupling. We do this by moving to a rotating frame at the drive frequency for both qubits, followed by a Schrieffer-Wolff transformation to diagonalize the first line of~\cref{eq:gates:H_CR} to second order in $J$, see \cref{sec:SW}. The drive term is modified under the same transformation by using the explicit expression for the Schrieffer-Wolff generator $\hat S = \hat S^{(1)} + \dots$ given in~\cref{eq:SW:explicit_expansions_generator}, and the Baker-Campbell-Hausdorff formula~\cref{eq:BCH} to first order: $e^{\hat S} \hat b_1 e^{-\hat S} \simeq \hat b_1 + [\hat S^{(1)},\hat b_1]$. The full calculation is fairly involved and here we only quote the final result after truncating to the two lowest levels of the transmon qubits~\cite{Magesan2018,Tripathi2019a}
\begin{equation}\label{eq:gates:H_CR_approx}
\begin{aligned}
\hat H' \simeq{}& \frac{\hbar\tilde\delta_{q1}}{2}\hat \sigma_{z1} + \frac{\hbar\tilde\delta_{q2}}{2}\hat \sigma_{z2} + \frac{\hbar\chi_{12}}{2}\hat\sigma_{z1}\hat\sigma_{z2} \\
+& \hbar \varepsilon(t)\left( \hat \sigma_{x1} - J' \hat \sigma_{x2} -\frac{E_{C_1}}{\hbar}\frac{J'}{\Delta_{12}} \hat \sigma_{z1} \hat \sigma_{x2} \right).
\end{aligned}
\end{equation}
In this expression, the detunings include frequency shifts due to the $J$ coupling with $\tilde\delta_{q1} = \omega_{q1} + J^2/\Delta_{12} + \chi_{12} - \omega_\mathrm{d}$ and $\tilde\delta_{q2} = \omega_{q2} - J^2/\Delta_{12} + \chi_{12} - \omega_\mathrm{d}$. The parameters $\chi_{12}$ and $J'$ are given by
\begin{subequations}
\begin{align}
\chi_{12} &= \frac{J^2}{\Delta_{12} + \frac{E_{C_2}}{\hbar}} - \frac{J^2}{\Delta_{12} - \frac{E_{C_1}}{\hbar}},\\
J' & = \frac{J}{\Delta_{12}-\frac{E_{C_1}}{\hbar}}.
\end{align}
\end{subequations}
\Cref{eq:CrossResonanceTLS,eq:gates:H_CR_approx} agree in the limit of large anharmonicity $E_{C_{1,2}}$ and we again find that a drive on the first qubit at the frequency of the second qubit activates the CR term $\sz{1}\sx{2}$. However, there are important differences at finite $E_{C_{1,2}}$, something which highlights the importance of taking into account the multilevel nature of the transmon. Indeed, the amplitude of the CR term is smaller here than in \cref{eq:CrossResonanceTLS} with a two-level system. Moreover, in contrast to the latter case, when taking into account multiple levels of the transmon qubits we find a spurious interaction $\sz{1}\sz{2}$ of amplitude $\chi_{12}$ between the two qubits, as well as a drive on the second qubit of amplitude $J'\varepsilon(t)$. This unwanted drive can be echoed away with additional single-qubit gates~\cite{Corcoles2013,Sheldon2016}. The $\sz{1}\sz{2}$ interaction is detrimental to the gate fidelity as it effectively makes the frequency of the second qubit dependent on the logical state of the first qubit. Because of this, the effective dressed frequency of the second qubit cannot be known in general, such that it is not possible to choose the drive frequency $\omega_\mathrm{d}$ to be on resonance with the second qubit, irrespective of the state of the first. As a consequence, the CR term $\sz{1}\sx{2}$ in~\cref{eq:gates:H_CR_approx} will rotate at an unknown qubit-state dependent frequency, leading to a gate error. The $\sz{1}\sz{2}$ term should therefore be made small, which ultimately limits the gate speed. Interestingly, for a pair of qubits with equal and opposite anharmonicity, $\chi_{12}=0$ and this unwanted effect is absent. This cannot be realized with two conventional transmons, but is possible with other types of qubits~\cite{Winik2020,Ku2020}. Since $J'$ is small, another caveat of the CR gate is that large microwave amplitudes $\varepsilon$ are required for fast gates. For the typically low anharmonicity of transmon qubits, this can lead to leakage and to effects that are not captured by the second-order perturbative results of \cref{eq:CrossResonanceTLS,eq:gates:H_CR_approx}. More detailed modeling based on the Hamiltonian of \cref{eq:gates:H_CR} suggests that classical crosstalk induced on the second qubit from driving the first qubit can be important and is a source of discrepancy between the simple two-level system model and experiments~\cite{Magesan2018,Tripathi2019a,Ware2019}. Because of these spurious effects, CR gate times have typically been relatively long, of the order of 300 to 400 ns with gate fidelities $\sim$ 94--96\%~\cite{Corcoles2013}. However, with careful calibration and modeling beyond \cref{eq:gates:H_CR_approx}, it has been possible to push gate times down to the 100--200 ns range with errors per gate at the percent level~\cite{Sheldon2016}. Similarly to the RIP gate, an advantage of the CR gate is that it can be realized using the same drive lines that are used for single-qubit gates. Moreover, it works with fixed-frequency qubits, which often have longer phase coherence times than their flux-tunable counterparts. However, both the RIP and the CR gate are slower than what can now be achieved with additional flux control of the qubit frequency or of the coupler.
We also note that, due to the factor $E_{C1}/\hbar\Delta_{12}$ in the amplitude of the $\hat \sigma_{z1}\hat \sigma_{x2}$ term, the detuning of the two qubits cannot be too large compared to the anharmonicity, putting further constraints on the choice of the qubit frequencies. This may lead to frequency crowding issues when working with large numbers of qubits.
\subsubsection{Parametric gates}
As we have already discussed, a challenge in realizing two-qubit gates is activating a coherent interaction between two qubits with a large on/off ratio. The gates discussed so far have aimed to achieve this in different ways. The $\sqrt{i\text{SWAP}}$ and the $11$-$02$ gates are based on flux-tuning qubits into a resonance condition or on a tunable coupling element. The RIP gate is based on activating an effective qubit-qubit coupling by driving a resonator and the CR gate by driving one of the qubits. Another approach is to activate an off-resonant interaction by modulating a qubit frequency, a resonator frequency, or the coupling parameter itself at an appropriate frequency. This parametric modulation provides the energy necessary to bridge the energy gap between the far detuned qubit states. Several such schemes, known as parametric gates, have been theoretically developed and experimentally realized, see for example~\textcite{Bertet2006,Niskanen2006,Liu2007,Niskanen2007,Beaudoin2012a,Strand2013,Kapit2015a,McKay2016,Naik2017,Sirois2015a,Reagor2018,Caldwell2018,Didier2017}. The key idea behind parametric gates is that modulation of a system parameter can induce transitions between energy levels that would otherwise be too far off-resonance to give any appreciable coupling. We illustrate the idea first with two directly coupled qubits described by the Hamiltonian
\begin{equation}
\begin{aligned}
\hat H ={}& \frac{\hbar \omega_{q1}}{2}\hat \sigma_{z1} + \frac{\hbar \omega_{q2}}{2}\hat \sigma_{z2} + \hbar J(t)\hat\sigma_{x1}\hat\sigma_{x2},
\end{aligned}
\end{equation}
where we assume that the coupling is periodically modulated at the frequency $\omega_m$, $J(t) = J_0 + \tilde J\cos(\omega_m t)$. Moving to a rotating frame at the qubit frequencies, the above Hamiltonian takes the form
\begin{equation}
\begin{aligned}
\hat H' = \hbar J(t)\bigg(& e^{i(\omega_{q1}-\omega_{q2})t}\hat\sigma_{+1}\hat\sigma_{-2} \\
&+e^{i(\omega_{q1}+\omega_{q2})t}\hat\sigma_{+1}\hat\sigma_{+2} + \text{H.c.} \bigg).
\end{aligned}
\end{equation}
Just as in \cref{sec:DirectCoupling}, if the coupling is constant $J(t) = J_0$, and $J_0/(\omega_{q1}-\omega_{q2}),\,J_0/(\omega_{q1}+\omega_{q2})\ll1$, then all the terms of $\hat H'$ are fast-rotating and can be neglected. In this situation, the gate is in the off state. On the other hand, by appropriately choosing the modulation frequency $\omega_m$, it is possible to selectively activate some of these terms. Indeed, for $\omega_m = \omega_{q1}-\omega_{q2}$, the terms $\hat \sigma_{+1}\hat\sigma_{-2} + \text{H.c.}$ are no longer rotating and are effectively resonant. Dropping the rapidly rotating terms, this leads to
\begin{equation}
\begin{aligned}
\hat H' \simeq \frac{\hbar\tilde J}{2}\left( \hat\sigma_{+1}\hat\sigma_{-2} +\hat\sigma_{-1}\hat\sigma_{+2} \right).
\end{aligned}
\end{equation}
As already discussed, this interaction can be used to generate entangling gates such as the $\sqrt{i\text{SWAP}}$. If rather $\omega_m = \omega_{q1}+\omega_{q2}$ then $\hat\sigma_{+1}\hat\sigma_{+2} +\text{H.c.}$ is instead selected.
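As a simple numerical illustration of this mechanism (with assumed parameters chosen only for clarity, not taken from any experiment), the sketch below integrates the lab-frame dynamics with a modulated coupling and checks that choosing $\omega_m = \omega_{q1}-\omega_{q2}$ produces a full excitation swap at the rate $\tilde J/2$ expected from the rotating-frame analysis above.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Frequencies in rad/ns (angular GHz); times in ns. All values are assumed.
wq1, wq2 = 2*np.pi*6.0, 2*np.pi*5.0
J0, Jm = 2*np.pi*0.005, 2*np.pi*0.010     # static and modulated coupling amplitudes
wm = wq1 - wq2                            # modulation bridges the qubit-qubit detuning

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
H0 = 0.5*wq1*np.kron(sz, I2) + 0.5*wq2*np.kron(I2, sz)
Hxx = np.kron(sx, sx)

def rhs(t, psi):
    return -1j * (H0 + (J0 + Jm*np.cos(wm*t)) * Hxx) @ psi

# Basis ordering {|ee>, |eg>, |ge>, |gg>}; start with the excitation on qubit 1.
psi0 = np.array([0, 1, 0, 0], dtype=complex)
t_swap = np.pi / Jm   # full swap expected when (Jm/2)*t = pi/2, i.e. t ~ 50 ns here
sol = solve_ivp(rhs, (0, t_swap), psi0, max_step=0.005, rtol=1e-9, atol=1e-9)
print(f"P(|ge>) at t = {t_swap:.0f} ns: {abs(sol.y[2, -1])**2:.3f}")   # close to 1
\end{verbatim}
Repeating the same integration with $\tilde J = 0$ (constant coupling $J_0$ only) leaves the excitation on the first qubit, illustrating the large on/off ratio of the parametric scheme.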
In practice, it can sometimes be easier to modulate a qubit or resonator frequency rather than a coupling strength. To see how this leads to a similar result, consider the Hamiltonian
\begin{equation}
\begin{aligned}
\hat H ={}& \frac{\hbar \omega_{q1}(t)}{2}\hat \sigma_{z1} + \frac{\hbar \omega_{q2}}{2}\hat \sigma_{z2} + \hbar J\hat\sigma_{x1}\hat\sigma_{x2}.
\end{aligned}
\end{equation}
Taking $\omega_{q1}(t) = \omega_{q1} + \varepsilon\sin(\omega_m t)$, the transition frequency of the first qubit develops frequency modulation (FM) sidebands. The two qubits can then be effectively brought into resonance by choosing the modulation to align one of the FM sidebands with $\omega_{q2}$, thereby rendering the $J$ coupling effectively resonant. This can be seen more clearly by moving to a rotating frame defined by the unitary
\begin{equation}
\hat U = e^{ -\frac{i}{2}\int_0^t dt'\, \omega_{q1}(t')\hat\sigma_{z1} } e^{ - i\omega_{q2}t \hat\sigma_{z2}/2 },
\end{equation}
where the Hamiltonian takes the form \cite{Beaudoin2012a,Strand2013}
\begin{equation}
\begin{aligned}
\hat H' ={}& \hbar J \sum_{n=-\infty}^\infty J_n\left(\frac{\varepsilon}{\omega_m}\right) \bigg( i^n e^{i(\Delta_{12} - n\omega_m)t}\hat\sigma_{+1}\hat\sigma_{-2} \\
&+i^n e^{i(\omega_{q1}+\omega_{q2} - n\omega_m)t}\hat\sigma_{+1}\hat\sigma_{+2} + \text{H.c.} \bigg).
\end{aligned}
\end{equation}
To arrive at the above expression, we have used the Jacobi-Anger expansion $e^{iz\cos\theta} = \sum_{n=-\infty}^\infty i^n J_n(z) e^{in\theta}$, with $J_n(z)$ Bessel functions of the first kind. Choosing the modulation frequency such that $n\omega_m = \Delta_{12}$ aligns the $n$th sideband with the frequency of the second qubit such that a resonant qubit-qubit interaction is recovered. The largest contribution comes from the first sideband, whose amplitude $J_{1}$ has a maximum $J_1(1.84)\simeq 0.58$, thus corresponding to an effective coupling that is a large fraction of the bare $J$ coupling. Note that the assumption of a simple sinusoidal modulation of the frequency neglects the fact that the qubit frequency of a tunable transmon has a nonlinear dependence on the external flux. A sinusoidal frequency modulation can nevertheless be approximated by appropriately shaping $\Phi_x(t)$~\cite{Beaudoin2012a}. Parametric gates can also be mediated by modulating the frequency of a resonator bus to which qubits are dispersively coupled \cite{McKay2016}. Much as with flux-tunable transmons, the resonator is made tunable by inserting a SQUID loop in the center conductor of the resonator \cite{Sandberg2008a,Castellanos2007}. Changing the flux threading the SQUID loop changes the SQUID's inductance and therefore the effective length of the resonator. As in a trombone, this leads to a change of the resonator frequency. An advantage of modulating the resonator bus rather than the qubit frequency is that the qubits can then be operated at fixed frequencies, reducing their susceptibility to flux noise. Finally, it is worth pointing out that while the speed of the cross-resonance gate is reduced when the qubit-qubit detuning is larger than the transmon anharmonicity, parametric gates do not suffer from this problem. As a result, there is more freedom in the choice of the qubit frequencies with parametric gates, which is advantageous for avoiding frequency-crowding-related issues such as addressability errors and crosstalk. We also note that the modulation frequencies required to activate parametric gates can be a few hundred MHz, in contrast to the RIP gate or the CR gate which require microwave drives.
Removing the need for additional microwave generators simplifies the control electronics and \emph{may} help make the process more scalable. A counterpoint is that fast parametric gates often require large modulation amplitudes, which can be challenging.
\subsection{Encoding a qubit in an oscillator}
\label{sec:CatCodes}
\begin{table*}[t]
\begin{tabular}{ ccc }
 & 4-qubit code & Simplest binomial code\\
\hline \hline
Code word $\ket{0_\mathrm{L}}$ & $\frac{1}{\sqrt{2}}(\ket{0000}+\ket{1111})$ & $\frac{1}{\sqrt{2}}(\ket{0}+\ket{4})$ \\
\hline
Code word $\ket{1_\mathrm{L}}$ & $\frac{1}{\sqrt{2}}(\ket{1100}+\ket{0011})$ & $\ket{2}$ \\
\hline
Mean excitation number $\bar n$ & 2 & 2 \\
\hline
Hilbert space dimension & $2^4=16$ & $5$ (Fock states $\ket{0}$--$\ket{4}$) \\
\hline
Number of correctable errors & 5 ($\hat I,\hat\sigma_1^-,\hat\sigma_2^-,\hat\sigma_3^-,\hat\sigma_4^-$) & 2 ($\hat I,\hat a$) \\
\hline
Stabilizers & $\hat S_1=\hat Z_1\hat Z_2,\, \hat S_2=\hat Z_3\hat Z_4,\, \hat S_3=\hat X_1\hat X_2\hat X_3\hat X_4$ & $\hat P=(-1)^{\hat n}$\\
\hline
Number of stabilizers & 3 & 1\\
\hline
Approximate QEC? & Yes, 1st order in $\gamma t$ & Yes, 1st order in $\kappa t$\\
\end{tabular}
\caption{\label{Table:AmpDampComparison}Comparison of qubit and bosonic codes for amplitude damping. $\gamma$ and $\kappa$ are respectively the qubit and oscillator energy relaxation rates.}
\end{table*}
So far we have discussed encoding of quantum information into the first two energy levels of an artificial atom, the cavity being used for readout and two-qubit gates. However, cavity modes often have coherence properties superior to those of superconducting artificial atoms, something that is especially true for the 3D cavities discussed in \cref{sec:3D} \cite{Reagor2016}. This suggests that encoding quantum information in the oscillator mode can be advantageous. Using oscillator modes to store and manipulate quantum information can also be favorable for quantum error correction, which is an essential aspect of scalable quantum computer architectures \cite{Nielsen2000}. Indeed, in addition to their long coherence time, oscillators have a simple and relatively well-understood error model: to a large extent, the dominant error is single-photon loss. Taking advantage of this, it is possible to design quantum error correction codes that specifically correct for this most likely error. This is to be contrasted with more standard codes, such as the surface code, which aim at detecting and correcting both amplitude and phase errors~\cite{Fowler2012}. Moreover, as will become clear below, the infinite-dimensional Hilbert space of a single oscillator can be exploited to provide the redundancy which is necessary for error correction, thereby, in principle, requiring fewer physical resources to protect quantum information than approaches based on two-level systems. Finally, qubits encoded in oscillators can be concatenated with conventional error correcting codes, where the latter should be optimized to exploit the noise resilience provided by the oscillator encoding~\cite{Tuckett2018,Tuckett2019b,Tuckett2019,Puri2019,Guillaud2019,Grimsmo2020}. Of course, as we have already argued, nonlinearity remains essential to prepare and manipulate quantum states of the oscillator.
When encoding quantum information in a cavity mode, a dispersively coupled artificial atom (or other Josephson junction-based circuit element) remains present but only to provide nonlinearity to the oscillator ideally without playing much of an active role. Oscillator encodings of qubits investigated in the context of quantum optics and circuit QED include cat codes \cite{CochMilbMunr99,Ralph2003,Gilchrist2004,Mirrahimi2014,Ofek2016,Puri2017,Grimm2019,Lescanne2019}, the related binomial codes~\cite{Michael2016a,Hu2019}, and Gottesman-Kitaev-Preskill (GKP) codes~\cite{Gottesman2001b,HOME-GKP2019,Campagne2019}, as well as a two-mode amplitude damping code described in \cite{Chuang97}. To understand the basic idea behind this approach, we first consider the simplest instance of the binomial code in which a qubit is encoded in the following two states of a resonator mode~\cite{Michael2016a} \begin{align}\label{eq:catcodes:kitten} \ket{0_L} = \frac{1}{\sqrt 2}\left(\ket 0 + \ket 4\right),\qquad \ket{1_L} = \ket 2, \end{align} with Fock states $\ket n$. The first aspect to notice is that for both logical states, the average photon number is $\bar n = 2$ and, as a result, the likelihood of a photon loss event is the same for both states. An observer detecting a loss event will therefore not gain any information allowing her to distinguish whether the loss came from $\ket{0_L}$ or from $\ket{1_L}$. This is a necessary condition for a quantum state encoded using the logical states \cref{eq:catcodes:kitten} to not be `deformed' by a photon loss event. Moreover, under the action of $\hat a$, the arbitrary superposition $c_0\ket{0_L}+c_1\ket{1_L}$ becomes $c_0\ket{3}+c_1\ket{1}$ after normalization. The coefficients $c_0$ and $c_1$ encoding the quantum information are intact and the original state can in principle be recovered with a unitary transformation. By noting that while the original state only has support on even photon numbers, the state after a photon loss only has support on odd photon numbers, we see that the photon loss event can be detected by measuring photon number parity $\hat P = (-1)^{\hat n}$. The parity operator thus plays the role of a stabilizer for this code~\cite{Nielsen2000,Michael2016}. This simple encoding should be compared to directly using the Fock states $\{\ket0,\ket1\}$ to store quantum information. Clearly, in this case, a single photon loss on $c_0\ket{0}+c_1\ket{1}$ leads to $\ket{0}$ and the quantum information has been irreversibly lost. Of course, this disadvantage should be contrasted to the fact that the rate at which photons are lost, which scales with $\bar n$, is (averaged over the code words) four times as large when using the encoding \cref{eq:catcodes:kitten}, compared to using the Fock states $\{\ket0,\ket1\}$. This observation reflects the usual conundrum of quantum error correction: using more resources (here more photons) to protect quantum information actually increases the natural error rate. The protocol for detecting and correcting errors must be fast enough and accurate enough to counteract this increase. The challenge for experimental implementations of quantum error correction is thus to reach and go beyond the break-even point where the encoded qubit, here \cref{eq:catcodes:kitten}, has a coherence time exceeding the coherence time of the unencoded constituent physical components, here the Fock states $\{\ket0,\ket1\}$. Near break-even performance with the above binomial code has been experimentally reported by \textcite{Hu2019}. 
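The properties quoted above are easy to verify numerically. The following minimal sketch (a truncated-Fock-space check, independent of any experimental implementation) confirms the equal mean photon number of the two code words, the preservation of the logical amplitudes under a single photon loss, and the change of photon-number parity that signals the error.
\begin{verbatim}
import numpy as np

N = 10                                    # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
n = a.conj().T @ a                        # number operator
fock = lambda k: np.eye(N)[:, k]

zero_L = (fock(0) + fock(4)) / np.sqrt(2)
one_L = fock(2)
print(zero_L @ n @ zero_L, one_L @ n @ one_L)    # both give nbar = 2

c0, c1 = 0.6, 0.8                                # arbitrary logical amplitudes
psi = c0*zero_L + c1*one_L
err = a @ psi
err /= np.linalg.norm(err)                       # state after one photon loss
print(np.round(err[[1, 3]], 3))                  # -> (c1, c0): amplitudes intact

parity = np.diag((-1.0)**np.arange(N))
print(psi @ parity @ psi, err @ parity @ err)    # +1 before loss, -1 after loss
\end{verbatim}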
The simplest binomial code introduced above is able to correct a single amplitude-damping error (photon loss). Thus if the correction protocol is applied after a time interval $\delta t$, the probability of an uncorrectable error is reduced from ${\mathcal O}(\kappa\,\delta t)$ to ${\mathcal O}((\kappa\,\delta t)^2)$, where $\kappa$ is the cavity energy decay rate. To better understand the simplicity and efficiency advantages of bosonic QEC codes, it is instructive to do a head-to-head comparison of the simplest binomial code to the simplest qubit code for amplitude damping. The smallest qubit code able to protect logical information against a general single-qubit error requires five qubits \cite{Bennett1996,Laflamme96,Knill2001}. However, the specific case of the qubit amplitude-damping channel can be corrected to first order against single-qubit errors using a 4-qubit code \cite{Leung97} that, like the binomial code, satisfies the Knill-Laflamme conditions \cite{Knill1997} to lowest order and whose two logical codewords are
\begin{subequations}\label{eq:4qubitcode}
\begin{align}
\ket{0_\mathrm{L}} &= \frac{1}{\sqrt{2}}\left(\ket{0000}+\ket{1111}\right),\\
\ket{1_\mathrm{L}} &= \frac{1}{\sqrt{2}}\left(\ket{1100}+\ket{0011}\right).
\end{align}
\end{subequations}
This four-qubit amplitude damping code and the single-mode binomial bosonic code for amplitude damping are compared in Table~\ref{Table:AmpDampComparison}. Note that, just as in the binomial code, both codewords have mean excitation number equal to two and so are equally likely to suffer an excitation loss. The logical qubit of \cref{eq:4qubitcode} lives in a Hilbert space of dimension $2^4=16$ and has four different physical sites at which the damping error can occur. Counting the case of no errors, there are a total of five different error states, whose diagnosis requires measuring three distinct error syndromes $\hat Z_1\hat Z_2$, $\hat Z_3\hat Z_4$, and $\hat X_1\hat X_2\hat X_3\hat X_4$ (where $\hat P_i$ refers to Pauli operator $\hat P$ acting on qubit $i$). The required weight-two and weight-four operators have to date not been easy to measure in a highly QND manner and with high fidelity, but some progress has been made towards this goal \cite{Chow2014,Chow2015,Corcoles2015,Riste2015,Takita2016}. In contrast, the simple bosonic code in \cref{eq:catcodes:kitten} requires only the lowest five states out of the (formally infinite) oscillator Hilbert space. Moreover, since there is only a single mode, there is only a single error, namely photon loss (or no loss), and it can be detected by measuring a single stabilizer, the photon number parity. It turns out that, unlike in ordinary quantum optics, photon number parity is relatively easy to measure in circuit QED with high fidelity and minimal state demolition \cite{Sun2014,Ofek2016}. It is for all these reasons that, unlike the four-qubit code, the bosonic code \cref{eq:catcodes:kitten} has already been demonstrated experimentally to (very nearly) reach the break-even point for QEC \cite{Hu2019,LuyanSun2020}. Generalizations of this code to protect against more than a single photon loss event, as well as photon gain and dephasing, are described in \textcite{Michael2016a}. Operation slightly exceeding break-even has been reported by \textcite{Ofek2016} with the cat-state bosonic encoding which we describe now.
In the encoding used in that experiment, each logical code word is a superposition of four coherent states referred to as a four-component cat code~\cite{Mirrahimi2014}:
\begin{subequations}\label{eq:catcodes:fourlegs}
\begin{align}
\ket{0_L} &= \mathcal N_0\left( \ket{\alpha} + \ket{i\alpha} + \ket{-\alpha} + \ket{-i\alpha} \right),\\
\ket{1_L} &= \mathcal N_1\left( \ket{\alpha} - \ket{i\alpha} + \ket{-\alpha} - \ket{-i\alpha} \right),
\end{align}
\end{subequations}
where $\mathcal N_i$ are normalization constants, with $\mathcal N_0\simeq \mathcal N_1$ for large $|\alpha|$. The Wigner function of the $\ket{0_L}$ codeword is shown in~\cref{fig:catcodes:fourlegs}(a) for $\alpha=4$. The relationship between this encoding and the simple code in~\cref{eq:catcodes:kitten} can be seen by writing~\cref{eq:catcodes:fourlegs} using the expression \cref{eq:CoherentState} for $\ket\alpha$ in terms of Fock states. One immediately finds that $\ket{0_L}$ only has support on Fock states $\ket{4n}$ with $n=0,1,\dots$, while $\ket{1_L}$ has support on Fock states $\ket{4n+2}$, again for $n=0,1,\dots$. It follows that the two codewords are mapped onto orthogonal states under the action of $\hat a$, just as for the binomial code of~\cref{eq:catcodes:kitten}. Moreover, the average photon number $\bar n$ is approximately equal for the two logical states in the limit of large $|\alpha|$. The protection offered by this encoding is thus similar to that of the binomial code in~\cref{eq:catcodes:kitten}. In fact, these two encodings belong to a larger class of codes characterized by rotation symmetries in phase space~\cite{Grimsmo2020}.
\begin{figure}
\caption{\label{fig:catcodes:fourlegs} Wigner functions of (a) the four-component cat code word $\ket{0_L}$ of \cref{eq:catcodes:fourlegs} for $\alpha = 4$ and (b) the two-component cat state $\ket{+_L}$.}
\end{figure}
We end this section by discussing an encoding that is even simpler than~\cref{eq:catcodes:fourlegs}, sometimes referred to as a two-component cat code. In this case, the codewords are defined simply as $\ket{+_L} = \mathcal N_0(\ket{\alpha} + \ket{-\alpha})$ and $\ket{-_L} = \mathcal N_1(\ket{\alpha} - \ket{-\alpha})$~\cite{CochMilbMunr99,Ralph2003,Gilchrist2004,Mirrahimi2014,Puri2017}. The Wigner function for $\ket{+_L}$ is shown in~\cref{fig:catcodes:fourlegs}(b). The choice to define the above codewords in the logical $\hat X_L$ basis instead of the $\hat Z_L$ basis is, of course, just a convention, but turns out to be convenient for this particular cat code. In contrast to \cref{eq:catcodes:kitten,eq:catcodes:fourlegs}, these two states are \emph{not} mapped onto orthogonal error states outside the code space under the action of $\hat a$; instead, photon loss acts as a logical operation on the code words. To understand this encoding, it is useful to consider the logical $\hat Z_L$ basis states in the limit of large $|\alpha|$
\begin{subequations}\label{eq:catcodes:twolegs}
\begin{align}
\ket{0_L} &= \frac{1}{\sqrt2}(\ket{+_L}+\ket{-_L}) = \ket{\alpha} + \mathcal{O}(e^{-2|\alpha|^2}),\\
\ket{1_L} &= \frac{1}{\sqrt2}(\ket{+_L}-\ket{-_L}) = \ket{-\alpha} + \mathcal{O}(e^{-2|\alpha|^2}).
\end{align}
\end{subequations}
As is made clear by the second equality, for large enough $|\alpha|$ these logical states are very close to coherent states of the same amplitude but opposite phase. The action of $\hat a$ is thus, to a very good approximation, a phase flip since $\hat a\ket{0_L/1_L} \sim \pm \ket{0_L/1_L}$. The advantage of this encoding is that, while photon loss leads to phase flips, the bit-flip rate is exponentially small in $|\alpha|^2$.
This can be immediately understood from the golden rule whose relevant matrix element for bit flips is $\me{1_L}{\hat a}{0_L} \sim \me{-\alpha}{\hat a}{\alpha} = \alpha e^{-2|\alpha|^2}$. In other words, if the qubit is encoded in a coherent state with many photons, losing one simply does not do much. This is akin to the redundancy required for quantum error correction. As a result, the bit-flip rate ($1/T_1$) decreases \emph{exponentially} with $|\alpha|^2$ while the phase flip rate increases only \emph{linearly} with $|\alpha|^2$. The crucial point is that the bias between bit and phase flip error rates increases exponentially with $|\alpha|^2$, which has been verified experimentally \cite{Grimm2019,Lescanne2019}. While the logical states \cref{eq:catcodes:twolegs} do not allow for recovery from photon-loss errors, the strong asymmetry between different types of errors can be exploited to significantly reduce the qubit overhead necessary for fault-tolerant quantum computation \cite{Puri2019,Guillaud2019}. The basic intuition behind this statement is that the qubit defined by \cref{eq:catcodes:twolegs} can be used in an error correcting code tailored to predominantly correct the most likely error (here, phase flips) rather than devoting resources to correcting both amplitude and phase errors \cite{Tuckett2018,Tuckett2019b,Tuckett2019}. Another bosonic encoding that was recently demonstrated in circuit QED is the Gottesman-Kitaev-Preskill (GKP) code~\cite{Campagne2019}. This demonstration is the first QEC experiment able to correct all logical errors and it came close to reaching the break-even point. While all the bosonic codes described above are based on codewords that obey rotation symmetry in phase space, the GKP code is instead based on translation symmetry. We will not describe the GKP encoding in more detail here, but refer the reader to the review by \textcite{Terhal2020}.
\section{Quantum optics on a chip}
\label{sec:QuantumOptics}
\begin{figure*}
\caption{\label{fig:Hofheinz2009} Wigner tomography of superpositions of cavity Fock states involving up to six photons (top row: theory; bottom row: data)~\cite{Hofheinz2009}.}
\end{figure*}
The strong light-matter interaction realized in circuit QED together with the flexibility allowed in designing and operating superconducting quantum circuits has opened the possibility to explore the rich physics of quantum optics at microwave frequencies in circuits. As discussed previously, it has, for example, made possible the clear observation of vacuum Rabi splitting, of photon-number splitting in the strong-dispersive regime, as well as of signatures of ultrastrong light-matter coupling. The new parameter regimes that can be achieved in circuit QED have also made it possible to test some of the theoretical predictions from the early days of quantum optics and to explore new research avenues. A first indication that circuit QED is an ideal playground for these ideas is the strong Kerr nonlinearity relative to the decay rate, $K/\kappa$, that can readily be achieved in circuits. Indeed, from the point of view of quantum optics, a transmon is a Kerr nonlinear oscillator that is so nonlinear that it exhibits photon blockade. Given the very high Q factors that can be achieved in 3D superconducting cavities, such levels of nonlinearity can also readily be obtained in microwave resonators by using transmons or other Josephson junction-based circuits to induce nonlinearity in electromagnetic modes. Many of the links between circuit QED and quantum optics have already been highlighted in this review.
In this section, we continue this discussion by presenting some further examples. The reader interested in learning more about quantum optics at microwave frequencies can consult the review article by \textcite{Gu2017b}.
\subsection{Intra-cavity fields}
Because superconducting qubits can rapidly be tuned over a wide frequency range, it is possible to bring them in and out of resonance with a cavity mode on a time scale which is fast with respect to $1/g$, the inverse of the qubit-cavity coupling strength. For all practical purposes, this is equivalent to the thought experiment of moving an atom in and out of the cavity in cavity QED. An experiment by \textcite{Hofheinz2008} took advantage of this possibility to prepare the cavity in Fock states up to $\ket{n=6}$. With the qubit and the cavity in their respective ground states and the two systems largely detuned, their approach is to first $\pi$-pulse the qubit to its excited state. The qubit frequency is then suddenly brought in resonance with the cavity for half a vacuum Rabi oscillation, a time $\pi/2g$, so as to swap the qubit excitation to a cavity photon as the system evolves under the Jaynes-Cummings Hamiltonian \cref{eq:HJC}. The interaction is then effectively stopped by moving the qubit back to its original frequency, after which the cycle is repeated until $n$ excitations have been swapped in this way. Crucially, because the swap frequency between the states $\ket{e,n-1}$ and $\ket{g,n}$ is proportional to $\sqrt{n}$, the time during which qubit and cavity are kept in resonance must be adjusted accordingly at each cycle. The same $\sqrt{n}$ dependence is then exploited to characterize the cavity state using the qubit as a probe~\cite{Hofheinz2008,Brune1996}. Building on this technique and using a protocol proposed by \textcite{Law96} for cavity QED, the same authors have demonstrated the preparation of arbitrary states of the cavity field and characterized these states by measuring the cavity Wigner function \cite{Hofheinz2009}. \Cref{fig:Hofheinz2009} shows the result of this Wigner tomography for superpositions involving up to six cavity photons (top row: theory, bottom row: data). As noted in \textcite{Hofheinz2008}, a downside of this sequential method is that the preparation time rapidly becomes comparable to the Fock state lifetime, limiting the Fock states which can be reached and the fidelity of the resulting states. Taking advantage of the very large $\chi/\kappa$ which can be reached in 3D cavities, an alternative way to create such states is to drive qubit transitions conditioned on the Fock state of the cavity. Together with cavity displacements, these photon-number dependent qubit transitions can be used to prepare arbitrary cavity states \cite{Krastanov2015,Heeres2015}. Combining these ideas with numerical optimal control has allowed \textcite{Heeres2017} to synthesize cavity states, such as Fock states up to $\ket{n=6}$ and four-legged cat states, with high fidelity. The long photon lifetime that is possible in 3D superconducting cavities together with the possibility to realize a single-photon Kerr nonlinearity which overwhelms the cavity decay, $K/\kappa > 1$, has enabled a number of similar experiments such as the observation of collapse and revival of a coherent state in a Kerr medium \cite{Kirchmair2013} and the preparation of cat states with nearly 30 photons \cite{Vlastakis2013}. Another striking example is the experimental encoding of qubits in oscillator states already discussed in \cref{sec:CatCodes}.
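To give a sense of the time scales involved in the sequential Fock-state preparation described above, the following short sketch evaluates the resonant swap times $\pi/(2g\sqrt n)$ for climbing the Fock ladder one photon at a time; the coupling strength used here is an assumed, illustrative value and is not taken from the cited experiments.
\begin{verbatim}
import numpy as np

g = 2 * np.pi * 100e6                # assumed qubit-cavity coupling, g/2pi = 100 MHz
n = np.arange(1, 7)
t_n = np.pi / (2 * g * np.sqrt(n))   # duration of the swap |e, n-1> -> |g, n>
for k, t in zip(n, t_n):
    print(f"n = {k}: swap time = {t*1e9:.2f} ns")
print(f"total swap time for |n=6> (excluding qubit pi-pulses) = {t_n.sum()*1e9:.1f} ns")
\end{verbatim}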
\subsection{Quantum-limited amplification}
\label{sec:QO:AmplificationSqueezing}
Driven by the need for fast, high-fidelity single-shot readout of superconducting qubits, superconducting low-noise linear microwave amplifiers are a subject of intense research. There are two broad classes of linear amplifiers. First, phase-preserving amplifiers amplify both quadratures of the signal equally. Quantum mechanics imposes that these amplifiers add a minimum of half a photon of noise to the input signal~\cite{Caves1982,Caves2012a,Clerk2010}. Second, phase-sensitive amplifiers amplify one quadrature of the signal while squeezing the orthogonal quadrature. This type of amplifier can in principle operate without adding noise~\cite{Caves1982,Clerk2010}. Amplifiers adding the minimum amount of noise allowed by quantum mechanics, phase preserving or not, are referred to as quantum-limited amplifiers. We note that, in practice, phase-sensitive amplifiers are useful if the quadrature containing the relevant information is known in advance, a condition that is realized when trying to distinguish between two coherent states in the dispersive qubit readout discussed in \cref{sec:DispersiveQubitReadout}. While much of the development of near-quantum-limited amplifiers has been motivated by the need to improve qubit readout, Josephson-junction-based amplifiers have been theoretically investigated~\cite{Yurke1987} and experimentally demonstrated as early as the late 1980s~\cite{Yurke1988,Yurke1989}. These amplifiers have now found applications in a broad range of contexts. In its simplest form, such an amplifier is realized as a driven oscillator mode rendered weakly nonlinear by incorporating a Josephson junction, and is generically known as a Josephson parametric amplifier (JPA). For weak nonlinearity, the Hamiltonian of a driven nonlinear oscillator is well approximated by
\begin{equation}\label{eq:DPA1}
H = \omega_0 \hat a^\dag \hat a + \frac{K}{2}\hat{a}^{\dag 2}{\hat a}^2 + \epsilon_\mathrm{p} (\hat a^\dag e^{-i\omega_\mathrm{p} t}+ \hat a e^{i\omega_\mathrm{p} t}),
\end{equation}
where $\omega_0$ is the system frequency, $K$ the negative Kerr nonlinearity, and $\epsilon_\mathrm{p}$ and $\omega_\mathrm{p}$ are the pump amplitude and frequency, respectively. The physics of the JPA is best revealed by applying a displacement transformation $\hat D^\dag(\alpha)\hat a \hat D(\alpha) = \hat a + \alpha$ to $H$ with $\alpha$ chosen to cancel the pump term. Doing so leads to the transformed Hamiltonian
\begin{equation}\label{eq:DPA}
H_\mathrm{JPA} = \delta \hat a^\dag \hat a + \frac{1}{2}\left(\epsilon_2 \hat{a}^{\dag 2} + \epsilon_2^*{\hat a}^2\right) + H_\mathrm{corr},
\end{equation}
where $\delta = \omega_0 + 2|\alpha|^2 K - \omega_\mathrm{p}$ is the effective detuning, $\epsilon_2 = \alpha^2 K$, and $H_\mathrm{corr}$ are correction terms which can be dropped for weak enough pump amplitude and Kerr nonlinearity, i.e.~when $\kappa$ is large in comparison to $K$ and thus the drive does not populate the mode enough for higher-order nonlinearity to become important~\cite{Boutin2017b}. The second term, of amplitude $\epsilon_2$, is a two-photon pump which is the generator of quadrature squeezing. Depending on the size of the measurement bandwidth, this leads to phase-preserving or phase-sensitive amplification when operating the device close to but below the parametric threshold $\epsilon_2 < \sqrt{\delta^2 + (\kappa/2)^2}$, with $\kappa$ the device's single-photon loss rate \cite{Wustmann2013}.
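As a quick consistency check of \cref{eq:DPA} (a sketch of the intermediate algebra rather than a full derivation), displacing the Kerr term of \cref{eq:DPA1} and keeping only the contributions that are quadratic in the operators gives
\begin{equation}
\frac{K}{2}(\hat a^\dagger + \alpha^*)^2(\hat a + \alpha)^2 = \frac{K}{2}\left(\alpha^2 \hat a^{\dagger 2} + \alpha^{*2}\hat a^2\right) + 2K|\alpha|^2 \hat a^\dagger \hat a + \dots,
\end{equation}
which reproduces the two-photon pump amplitude $\epsilon_2 = \alpha^2 K$ and the Kerr-induced frequency shift $2|\alpha|^2 K$ entering $\delta$. The omitted linear terms, together with those coming from the $\omega_0$ and pump contributions, are the ones cancelled by the choice of $\alpha$, while the remaining cubic and quartic terms are part of $H_\mathrm{corr}$.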
Rather than driving the nonlinear oscillator as in \cref{eq:DPA1}, an alternative approach to arrive at $H_\mathrm{JPA}$ is to replace the junction by a SQUID and to apply a flux modulation at $2\omega_0$~\cite{Yamamoto2008}. \Cref{eq:DPA} is the Hamiltonian for a parametric amplifier working with a single physical oscillator mode. Using appropriate filtering in the frequency domain, single-mode parametric amplifiers can be operated in a phase-sensitive mode, when detecting the emitted radiation over the full bandwidth of the physical mode, see e.g.~\textcite{Eichler2011a}. This is also called the degenerate mode of operation. Alternatively, the same single-oscillator-mode amplifier can be operated in the phase-preserving mode, when separating frequency components above and below the pump in the experiment, e.g.~by using appropriately chosen narrow-band filters, see for example \textcite{Eichler2011a}. Parametric amplifiers with two or multiple physical modes are also frequently put to use \cite{Roy2016c} and can be operated both in the phase-sensitive and phase-preserving modes, i.e.~in the degenerate or non-degenerate mode of operation, as for example demonstrated in \textcite{Eichler2014a}. Important parameters which different approaches for implementing JPAs aim at optimizing include amplifier gain, bandwidth and dynamic range. The latter refers to the range of power over which the amplifier acts linearly, i.e.~powers at which the amplifier output is linearly related to its input. Above a certain input power level, the correction terms in \cref{eq:DPA} resulting from the junction nonlinearity can no longer be ignored and lead to saturation of the gain~\cite{Abdo2012,Kochetov2015,liu2017,Boutin2017b,Planat2019a}. For this reason, while transmon qubits are operated in a regime where the single-photon Kerr nonlinearity is large and overwhelms the decay rate, JPAs are operated in a very different regime with $|K|/\kappa \sim 10^{-2}$ or smaller. An approach to increase the dynamic range of JPAs is to replace the Josephson junction of energy $E_J$ by an array of $M$ junctions, each of energy $ME_J$~\cite{Castellanos2007,Castellanos2008,Eichler2014}. Because the voltage drop is now distributed over the array, the bias on any single junction is $M$ times smaller and therefore the effective Kerr nonlinearity of the device is reduced from $K$ to $K/M^2$. As a result, nonlinear effects kick in only at increased input signal powers, leading to an increased dynamic range. Importantly, this can be done without degrading the amplifier's bandwidth~\cite{Eichler2014}. Typical values are $\sim$ 50 MHz bandwidth with $\sim -117$~dBm saturation power for $\sim$ 20 dB gain~\cite{Planat2019a}. Impedance engineering can be used to improve these numbers further~\cite{Roy2015c}. Because the JPA is based on a localized oscillator mode, the product of its gain and bandwidth is approximately constant. Therefore, an increase in one must come at the expense of the other~\cite{Clerk2010,Eichler2014}. As a result, it has proven difficult to design JPAs with enough bandwidth and dynamic range to simultaneously measure more than a few transmons~\cite{Jeffrey2014}. To avoid the constant gain-bandwidth product which results from relying on a resonant mode, a drastically different strategy, known as the Josephson traveling-wave parametric amplifier (JTWPA), is to use an open nonlinear medium in which the signal co-propagates with the pump tone.
While in a JPA the signal interacts with the nonlinearity for a long time due to the finite Q of the circuit, in the JTWPA the long interaction time is rather a result of the long propagation length of the signal through the nonlinear medium~\cite{O'Brien2014a}. In practice, JTWPAs are realized with a metamaterial transmission line whose center conductor is made from thousands of Josephson junctions in series~\cite{Macklin2015}. This device does not have a fixed gain-bandwidth product and has been demonstrated to have 20 dB of gain over as much as 3 GHz of bandwidth while operating close to the quantum limit~\cite{Macklin2015,Planat2019b,White2015}. Because every junction in the array can be made only very weakly nonlinear, the JTWPA also offers large enough dynamic range for rapid multiplexed simultaneous readout of multiple qubits~\cite{Heinsoo2018a}. \subsection{Propagating fields and their characterization} \subsubsection{Itinerant single and multi-photon states} In addition to using qubits to prepare and characterize quantum states of intra-cavity fields, it is also possible to take advantage of the strong nonlinearity provided by a qubit to prepare states of propagating fields at the output of a cavity. This can be done, for example, in a cavity with relatively large decay rate $\kappa$ by tuning a qubit into and out of resonance with the cavity~\cite{Bozyigit2011} or by applying appropriately chosen drive fields~\cite{Houck2007}. Alternatively, it is also possible to change the cavity decay rate in time to create single-photon states~\cite{Sete2013,Yin2013b}. The first on-chip single-photon source in the microwave regime was realized with a dispersively coupled qubit engineered such that the Purcell decay rate $\gamma_\kappa$ dominates the qubit's intrinsic non-radiative decay rate $\gamma_1$~\cite{Houck2007}. In this situation, exciting the qubit leads to rapid qubit decay by photon emission. In the absence of single-photon detectors working at microwave frequencies, the presence of a photon was observed by using a nonlinear circuit element (a diode) whose output signal is proportional to the square of the electric field, $\propto (\hat a^\dag+\hat a)^2$, and therefore indicative of the average photon number, $\av{\hat a^\daga}$, in repeated measurements. Rather than relying on direct power measurements, techniques have also been developed to reconstruct arbitrary correlation functions of the cavity output field from the measurement records of the field quadratures \cite{daSilva2010,Menzel2010}. These approaches rely on multiple detection channels with uncorrelated noise to quantify and subtract from the data the noise introduced by the measurement chain. In this way, it is possible to extract, for example, first- and second-order coherence functions of the microwave field. Remarkably, with enough averaging, this approach does not require quantum-limited amplifiers, although the number of required measurement runs is drastically reduced when such amplifiers are used compared to HEMT amplifiers. This approach was used to measure second-order coherence functions, $G^{(2)}(t,t+\tau) = \av{\hat a^\dag(t)\hat a^\dag(t+\tau)\hat a(t+\tau)\hat a(t)}$, in the first demonstration of antibunching of a pulsed single microwave-frequency photon source \cite{Bozyigit2011}. 
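As a purely illustrative toy model of such a noise-subtraction scheme, the short script below mimics a phase-preserving detection chain by adding complex Gaussian noise to the complex envelope of a weak coherent signal, and recovers the mean photon number by subtracting interleaved reference runs taken with a vacuum input. The parameters \texttt{alpha}, \texttt{n\_added} and \texttt{n\_shots} are arbitrary illustrative values and the model is not meant to reproduce the analysis of the cited experiments.
\begin{verbatim}
# Toy illustration of moment-based reconstruction with reference
# (noise) subtraction. All parameters are illustrative choices.
import numpy as np

rng = np.random.default_rng(seed=1)

n_shots = 200_000        # number of repeated measurements
alpha = 1.3 + 0.4j       # coherent amplitude of the signal mode (assumed)
n_added = 20.0           # noise photons added by the amplification chain

def measure(mean_amplitude):
    """Sample the complex envelope S = a + h^dag recorded by a
    phase-preserving chain: signal mean plus Gaussian noise of
    variance n_added + 1 (the +1 is the quantum-limited part)."""
    sigma2 = n_added + 1.0
    noise = rng.normal(0, np.sqrt(sigma2 / 2), n_shots) \
          + 1j * rng.normal(0, np.sqrt(sigma2 / 2), n_shots)
    return mean_amplitude + noise

S_signal = measure(alpha)   # runs with the signal present
S_ref = measure(0.0)        # interleaved reference runs (vacuum input)

# Subtracting the reference removes the chain noise from the moments:
n_est = np.mean(np.abs(S_signal)**2) - np.mean(np.abs(S_ref)**2)
print(f"estimated <a^dag a> = {n_est:.3f} (exact {abs(alpha)**2:.3f})")
\end{verbatim}
The same subtraction strategy extends to higher-order moments and correlation functions, at the cost of a rapidly growing number of repetitions.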
The same technique also enabled the observation of resonant photon blockade at microwave frequencies \cite{Lang2011} and, using two single-photon sources at the input of a microwave beam splitter, the indistinguishability of microwave photons was demonstrated in a Hong-Ou-Mandel correlation function measurement \cite{Lang2013}. Moreover, a similar approach was used to characterize the blackbody radiation emitted by a 50 $\Omega$ load resistor~\cite{Mariantoni2010}. Building on these results, it is also possible to reconstruct the quantum state of itinerant microwave fields from measurements of the field's moments. This technique relies on interleaving two types of measurements: measurements on the state of interest and ones in which the field is left in the vacuum as a reference to subtract away the measurement chain noise~\cite{Eichler2011}. In this way, the Wigner function of arbitrary superpositions of vacuum and one-photon Fock states has been reconstructed~\cite{Eichler2011,Kono2018}. This technique was extended to propagating modes containing multiple photons~\cite{Eichler2012}. Similarly, entanglement between a (stationary) qubit and a propagating mode was quantified in this approach with joint state tomography~\cite{Eichler2012,Eichler2012b}. Quadrature-histogram analysis also enabled, for example, the measurement of correlations between radiation fields \cite{Flurin2015}, and the observation of entanglement of itinerant photon pairs in waveguide QED \cite{Kannan2020}. \subsubsection{Squeezed microwave fields} \label{sec:Squeezing} \begin{figure} \caption{(a) Squeezed vacuum in phase space resulting from the action of $S(\zeta)$ on the vacuum state. (b) Squeezing level $\mathcal{S}$ as a function of the quadrature angle $\phi$. (c) Bandwidth of the squeezing produced by a JPA compared to the linewidth of the artificial atom.\label{fig:squeezing}} \end{figure} Operated in the phase-sensitive mode, quantum-limited amplifiers are sources of squeezed radiation. Indeed, for $\delta = 0$ and ignoring the correction terms, the JPA Hamiltonian of \cref{eq:DPA} is the generator of the squeezing transformation \begin{equation} S(\zeta) = e^{\frac{1}{2} \zeta^* \hat a^2 - \frac{1}{2} \zeta \hat{a}^{\dag 2}}, \end{equation} which takes vacuum to squeezed vacuum, $\ket\zeta = S(\zeta)\ket0$. In this expression, $\zeta = r e^{i\theta}$ with $r$ the squeezing parameter and $\theta$ the squeezing angle. As illustrated in \cref{fig:squeezing}(a), the action of $S(\zeta)$ on vacuum is to `squeeze' one quadrature of the field at the expense of `anti-squeezing' the orthogonal quadrature while leaving the total area in phase space unchanged. As a result, squeezed states, like coherent states, saturate the Heisenberg inequality. This can be seen more clearly from the variance of the quadrature operator $\hat{X}_\phi$ which takes the form \begin{equation} \Delta X_\phi^2 = \frac{1}{2}\left(e^{2r}\sin^2\tilde\phi + e^{-2r}\cos^2\tilde\phi\right), \end{equation} where we have defined $\tilde\phi = \phi - \theta/2$. In experiments, the squeezing level is often reported in dB computed using the expression \begin{equation} \mathcal{S} = 10 \log_{10} \frac{\Delta X^2_\phi}{\Delta X^2_\mathrm{vac}}. \end{equation} \Cref{fig:squeezing}(b) shows this quantity as a function of $\phi$. The variance $\Delta X^2_\phi$ reaches its minimal value $e^{- 2r}/2$ at $\phi = \theta/2 + n\pi$, with $n$ an integer, where it dips below the vacuum noise level $\Delta X^2_\mathrm{vac} = 1/2$ (horizontal line). Squeezing in Josephson devices was observed already in the late 80's~\cite{Yurke1988,Yurke1989,Movshovich1990}, experiments that have been revisited with the development of near quantum-limited amplifiers~\cite{Castellanos2008,Zhong2013}. 
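To connect these expressions to the squeezing levels quoted in experiments, note that at the optimal quadrature angle the ideal pure-state result is
\begin{equation}
\mathcal{S}_\mathrm{min} = 10\log_{10} e^{-2r} = -(20\log_{10}e)\, r \approx -8.7\, r~\mathrm{dB},
\end{equation}
so that, for example, 3 dB of squeezing below vacuum corresponds to $r \approx 0.35$ and 10 dB to $r \approx 1.15$. Losses and added noise in the measurement chain reduce the level of squeezing that is actually observed below this ideal value.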
Quantum state tomography of an itinerant squeezed state at the output of a JPA was reported by \textcite{Mallet2011}. There, homodyne detection with different LO phases on multiple preparations of the same squeezed state, together with maximum likelihood techniques, was used to reconstruct the Wigner function of the propagating microwave field. Moreover, the photon number distribution of a squeezed field was measured using a qubit in the strong dispersive regime~\cite{Kono2017}. As is clear from the form of the squeezing transformation $S(\zeta)$, squeezed vacuum is composed of a superposition of only even photon numbers \cite{Schleich1987}, something which \textcite{Kono2017} confirmed in experiments. Thanks to the new parameter regimes that can be achieved in circuit QED, it is possible to experimentally test some long-standing theoretical predictions of quantum optics involving squeezed radiation. For example, in the mid-80's, theorists predicted how dephasing and resonance fluorescence of an atom would be modified in the presence of squeezed radiation~\cite{Gardiner1986,Carmichael1987}. Experimentally testing these ideas in the context of traditional quantum optics with atomic systems, however, represents a formidable challenge~\cite{Turchette1998,Carmichael2016}. The situation is different in circuits where squeezed radiation can easily be guided from the source of squeezing to the qubit playing the role of artificial atom. Moreover, the reduced dimensionality in circuits compared to free-space atomic experiments limits the number of modes that are involved, such that the artificial atom can be engineered so as to preferentially interact with a squeezed radiation field. Taking advantage of the possibilities offered by circuit QED, \textcite{Murch2013} confirmed the prediction that squeezed radiation can inhibit phase decay of an (artificial) atom~\cite{Gardiner1986}. In this experiment, the role of the two-level atom was played by the hybridized cavity-qubit states $\{\ket{\overline{g,0}},\ket{\overline{e,0}}\}$. Moreover, squeezing was produced by a JPA over a bandwidth much larger than the natural linewidth of the two-level system, see \cref{fig:squeezing}(c). According to theory, quantum noise below the vacuum level along the squeezed quadrature leads to a reduction of dephasing. Conversely, along the anti-squeezed quadrature, the enhanced fluctuations lead to increased dephasing. For the artificial atom, this results in different time scales for dephasing along orthogonal axes of the Bloch sphere. In the experiment, phase decay inhibition along the squeezed quadrature was such that the associated dephasing time increased beyond the usual vacuum limit of $2T_1$. By measuring the dynamics of the two-level atom, it was moreover possible to reconstruct the Wigner distribution of the itinerant squeezed state produced by the JPA. Using a similar setup, \textcite{Toyli2016a} studied resonance fluorescence in the presence of squeezed vacuum and found excellent agreement with theoretical predictions~\cite{Carmichael1987}. In this way, it was possible to infer the level of squeezing (3.1 dB below vacuum) at the input of the cavity. The discussion has so far been limited to squeezing of a single mode. It is also possible to squeeze a pair of modes, which is often referred to as two-mode squeezing. 
Labeling the modes as $\hat a_1$ and $\hat a_2$, the corresponding squeezing transformation reads \begin{equation} S_{12}(\zeta) = e^{\frac{1}{2} \zeta^* \hat a_1\hat a_2 - \frac{1}{2} \zeta \hat a^\dag_1\hat a^\dag_2}. \end{equation} Acting on vacuum, $S_{12}$ generates a two-mode squeezed state which is an entangled state of modes $\hat a_1$ and $\hat a_2$. As a result, in isolation, the state of one of the two entangled modes appears to be in a thermal state where the role of the Boltzmann factor $\exp(-\beta\hbar\omega_i)$, with $\omega_{i=1,2}$ the mode frequency, is played by $\tanh^2r$~\cite{Barnett2002}. In this case, correlations and therefore squeezing are revealed when considering joint quadratures of the form $\hat{X}_1 \pm \hat{X}_2$ and $\hat{P}_1 \pm \hat{P}_2$, rather than the quadratures of a single mode as in~\cref{fig:squeezing}(a). In Josephson-based devices, two-mode squeezing can be produced using nondegenerate parametric amplifiers of different types~\cite{Roy2016c}. Over 12 dB of squeezing below vacuum level between modes of different frequencies, often referred to as signal and idler in this context, has been reported~\cite{Eichler2014a}. Other experiments have demonstrated two-mode squeezing in two different spatial modes, i.e.~entangled signals propagating along different transmission lines~\cite{Bergeal2012,Flurin2012}. \subsection{Remote entanglement generation} Several approaches to entangle nearby qubits have been discussed in \cref{sec:quantumcomputing}. In some instances it can, however, be useful to prepare entangled states of qubits separated by larger distances. Together with protocols such as quantum teleportation, entanglement between distant quantum nodes can be the basis of a `quantum internet' \cite{Kimble2008}. Because optical photons can travel for relatively long distances in room-temperature optical fiber while preserving their quantum coherence, this vision appears easier to realize at optical than at microwave frequencies. Nevertheless, given that superconducting cables at millikelvin temperatures have similar losses per meter as optical fibers \cite{Kurpiers2017}, there is no reason to believe that complex networks of superconductor-based quantum nodes cannot be realized. One application of this type of network is a modular quantum computer architecture where the nodes are relatively small-scale error-corrected quantum computers connected by quantum links \cite{Monroe2014,Chou2018a}. One approach to entangle qubits fabricated in distant cavities relies on entanglement by measurement, which is easy to understand in the case of two qubits coupled to the same cavity. Assuming the qubits to have the same dispersive shift $\chi$ due to coupling to the cavity, the dispersive Hamiltonian in a doubly rotating frame takes the form \begin{equation} H = \chi (\sz{1}+\sz{2})\hat a^\daga. \end{equation} Crucially, the cavity pull associated with odd-parity states $\{\ket{01},\ket{10}\}$ is zero while it is $\pm2\chi$ for the even-parity states $\{\ket{00},\ket{11}\}$. As a result, for $\chi \gg \kappa$, a tone at the bare cavity frequency leads to a large cavity field displacement for the odd-parity subspace. On the other hand, the displacement is small or negligible for the even-parity subspace. 
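This parity dependence can be made quantitative with a simple estimate that neglects qubit relaxation: for a measurement tone of amplitude $\epsilon$ applied at the bare cavity frequency, the steady-state intracavity amplitude is $\alpha = -i\epsilon/(i\Delta_\mathrm{pull} + \kappa/2)$, with $\Delta_\mathrm{pull}$ the qubit-state-dependent pull, such that
\begin{equation}
|\alpha_\mathrm{odd}| = \frac{2\epsilon}{\kappa},
\qquad
|\alpha_\mathrm{even}| = \frac{\epsilon}{\sqrt{4\chi^2 + \kappa^2/4}} \simeq \frac{\epsilon}{2|\chi|} \ll |\alpha_\mathrm{odd}|
\end{equation}
for $\chi \gg \kappa$. The measurement therefore distinguishes the parity of the two-qubit state, but carries essentially no information about which state within the odd-parity subspace is occupied.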
Starting with a uniform unentangled superposition of the states of the qubits, homodyne detection therefore stochastically collapses the system to one of these subspaces thereby preparing an entangled state of the two qubits \cite{Lalumiere2010}, an idea that was realized experimentally \cite{Riste2013}. The same concept was used by \textcite{Roch2014} to entangle two transmon qubits coupled to two 3D cavities separated by more than a meter of coaxial cable. There, the measurement tone transmitted through the first cavity is sent to the second cavity, only after which it is measured by homodyne detection. In this experiment, losses between the two cavities -- mainly due to the presence of a circulator preventing any reflection from the second cavity back to the first cavity -- as well as finite detection efficiency limited the achievable concurrence, a measure of entanglement, to 0.35. While the above protocol probabilistically entangles a pair of qubits, a more powerful but also more experimentally challenging approach makes it possible, in principle, to realize this in a fully deterministic fashion \cite{Cirac1997}. Developed in the context of cavity QED, this scheme relies on mapping the state of an atom strongly coupled to a cavity to a propagating photon. If its wave packet is chosen to be time-symmetric, the photon is absorbed with unit probability by a second cavity also containing an atom. In this way, it is possible to exchange a quantum state between the two cavities. Importantly, this protocol relies on having a unidirectional channel between the cavities such that no signal can propagate from the second to the first cavity. At microwave frequencies, this is achieved by inserting a circulator between the cavities. By first entangling the emitter qubit to a partner qubit located in the same cavity, the quantum-state transfer protocol can be used to entangle the two nodes. Variations on this more direct approach to entangle remote nodes have been implemented in circuit QED \cite{Kurpiers2018,Campagne-Ibarcq2018,Axline2018}. All three experiments rely on producing time-symmetric propagating photons by using the interaction between a transmon qubit and a cavity mode. Multiple approaches to shape and catch propagating photons have been developed in circuit QED. For example, \textcite{Wenner2014} used a transmission-line resonator with a tunable input port to catch a shaped microwave pulse with over 99\% probability. Time-reversal-symmetric photons have been created by \textcite{Srinivasan2014} using 3-island transmon qubits \cite{Srinivasan2011,Gambetta2011} in which the coupling to a microwave resonator is controlled in time so as to shape the mode function of spontaneously emitted photons. In a similar fashion, shaped single photons can be generated by modulating the boundary condition of a semi-infinite transmission line using a SQUID \cite{Forn-Diaz2017} which effectively controls the spontaneous emission rate of a qubit coupled to the line and emitting the photon. Alternatively, the remote entanglement generation experiment of \textcite{Kurpiers2018} rather relies on a microwave-induced amplitude- and phase-tunable coupling between the qubit-resonator $\ket{f0}$ and $\ket{g1}$ states, akin to the fg-ge gate already mentioned in \cref{sec:AllMicrowaveGates} \cite{Zeytinoglu2015}. Exciting the qubit to its $\ket{f}$ state followed by a $\pi$-pulse on the $f0-g1$ transition transfers the qubit excitation to a single resonator photon which is emitted as a propagating photon. 
This single-photon wave packet can be shaped to be time-symmetric by tailoring the envelope of the $f0-g1$ pulse \cite{Pechal2014}. By inducing the reverse process with a time-reversed pulse on a second resonator also containing a transmon, the itinerant photon is absorbed by this second transmon. In this way, an arbitrary quantum state can be transferred with a probability of 98.1\% between the two cavities separated by 0.9 m of coaxial line bisected by a circulator \cite{Kurpiers2018}. By rather preparing the emitter qubit in a $(\ket{e}+\ket{f})/\sqrt{2}$ superposition, the same protocol deterministically prepares an entangled state of the two transmons with a fidelity of 78.9\% at a rate of 50 kHz \cite{Kurpiers2018}. The experiments of \textcite{Axline2018} and \textcite{Campagne-Ibarcq2018} reported similar Bell-state fidelities using different approaches to prepare time-symmetric propagating photons \cite{Pfaff2017}. The fidelity reported by the three experiments suffered from the presence of a circulator bisecting the nearly one meter-long coaxial cable separating the two nodes. Replacing the lossy commercial circulator by an on-chip quantum-limited version could improve the fidelity \cite{Chapman2017,Kamal2011,Metelmann2015}. By taking advantage of the multimode nature of a meter long transmission line, it was also possible to deterministically entangle remote qubits without the need of a circulator. In this way, a bidirectional communication channel between the nodes is established and deterministic Bell pair production with 79.3\% fidelity has been reported \cite{Leung2019}. \subsection{\label{sec:WaveguideQED}Waveguide QED} The bulk of this review is concerned with the strong coupling of artificial atoms to the confined electromagnetic field of a cavity. Strong light-matter coupling is also possible in free space with an atom or large dipole-moment molecule by tightly confining an optical field in the vicinity of the atom or molecule \cite{Schuller2010}. A signature of strong coupling in this setting is the extinction of the transmitted light by the single atom or molecule acting as a scatterer. This extinction results from destructive interference of the light beam with the collinearly emitted radiation from the scatterer. Ideally, this results in 100\% reflection. In practice, because the scatterer emits in all directions, there is poor mode matching with the focused beam and reflection of $\sim 10$\% is observed with a single atom \cite{Tey2008} and $\sim 30\%$ with a single molecule \cite{Maser2016}. Mode matching can, however, be made to be close to ideal with electromagnetic fields in 1D superconducting transmission lines and superconducting artificial atoms where the artificial atoms can be engineered to essentially only emit in the forward and backward directions along the line \cite{Shen2005}. In the first realization of this idea in superconducting quantum circuits, \textcite{Astafiev2010} observed extinction of the transmitted signal by as much as 94\% by coupling a single flux qubit to a superconducting transmission line. Experiments with a transmon qubit have seen extinction as large as 99.6\% \cite{Hoi2011}. Pure dephasing and non-radiative decay into other modes than the transmission line are the cause of the small departure from ideal behavior in these experiments. 
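These departures from unit extinction can be quantified with the standard input-output treatment of a two-level emitter side-coupled to a 1D transmission line (see, e.g., \textcite{Astafiev2010}). In the weak-drive, resonant limit, the transmission amplitude is approximately
\begin{equation}
t \simeq 1 - \frac{\gamma_\mathrm{r}/2}{(\gamma_\mathrm{r}+\gamma_\mathrm{nr})/2 + \gamma_\phi},
\end{equation}
where $\gamma_\mathrm{r}$ is the radiative decay rate into the line, $\gamma_\mathrm{nr}$ the non-radiative decay rate and $\gamma_\phi$ the pure-dephasing rate. Perfect extinction, $t=0$, is therefore obtained only for $\gamma_\mathrm{nr} = \gamma_\phi = 0$, and the measured extinction directly bounds how strongly radiative decay dominates over the other decoherence channels.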
Nevertheless, the large observed extinction is a clear signature that radiative decay in the transmission line $\gamma_\mathrm{r}$ (i.e.~Purcell decay) overwhelms non-radiative decay $\gamma_\mathrm{nr}$. In short, in this cavity-free system referred to as waveguide QED, $\gamma_\mathrm{r}/\gamma_\mathrm{nr} \gg 1$ is the appropriate definition of strong coupling and is associated with a clear experimental signature: the extinction of transmission by a single scatterer. Despite its apparent simplicity, waveguide QED is a rich toolbox with which a number of physical phenomena have been investigated \cite{Roy2017a}. This includes Autler-Townes splitting \cite{Abdumalikov2010}, single-photon routing \cite{Hoi2011}, the generation of propagating nonclassical microwave states \cite{Hoi2012b}, as well as large cross-Kerr phase shifts at the single-photon level \cite{Hoi2013b}. In another experiment, \textcite{Hoi2015} studied the radiative decay of an artificial atom placed in front of a mirror, here formed by a short to ground of the waveguide's center conductor. In the presence of a weak drive field applied to the waveguide, the atom relaxes by emitting a photon in both directions of the waveguide. The radiation emitted towards the mirror, assumed here to be on the left of the atom, is reflected back to interact again with the atom after having acquired a phase shift $\theta = 2\times 2\pi l/\lambda + \pi$, where $l$ is the atom-mirror distance and $\lambda$ the wavelength of the emitted radiation. The additional phase factor of $\pi$ accounts for the hard reflection at the mirror. Taking into account the resulting multiple round trips, this modifies the atomic radiative decay rate which takes the form $\gamma(\theta) = 2\gamma_\mathrm{r} \cos^2(\theta/2)$ \cite{Hoi2015,Koshino2012,Glaetzle2010a}. For $l/\lambda = 1/2$, the radiative decay rate vanishes corresponding to destructive interference of the right-moving field and the left-moving field after multiple reflections on the mirror. In contrast, for $l/\lambda = 1/4$, these fields interfere constructively leading to enhanced radiative relaxation with $\gamma(\theta) = 2\gamma_\mathrm{r}$. The ratio $l/\lambda$ can be modified by shorting the waveguide's center conductor with a SQUID. In this case, the flux threading the SQUID can be used to change the boundary condition seen by the qubit, effectively changing the distance $l$ \cite{Sandberg2008a}. The experiment of \textcite{Hoi2015} rather relied on flux-tuning of the qubit transition frequency, thereby changing $\lambda$. In this way, a modulation of the qubit decay rate by a factor close to 10 was observed. A similar experiment has been reported with a trapped ion in front of a movable mirror \cite{Eschner2001}. Engineering vacuum fluctuations in this system has been pushed even further by creating microwave photonic bandgaps in waveguides to which transmon qubits are coupled \cite{Liu2016a,Mirhosseini2018a}. For example, \textcite{Mirhosseini2018a} have coupled a transmon qubit to a metamaterial formed by periodically loading the waveguide with lumped-element microwave resonators. By tuning the transmon frequency in the band gap where there is zero or only little density of states to accept photons emitted by the qubit, an increase by a factor of 24 of the qubit lifetime was observed. An interpretation of the `atom in front of a mirror' experiments is that the atom interacts with its mirror image. Rather than using a boundary condition (i.e. 
a mirror) to study the resulting constructive and destructive interference and the change in the radiative decay rate, it is also possible to couple a second atom to the same waveguide \cite{Lalumiere2013,vanLoo2013}. In this case, photons (real or virtual) emitted by one atom can be absorbed by the second atom leading to interactions between the atoms separated by a distance $2l$. Similar to the case of a single atom in front of a mirror, when the separation between the atoms is such that $2l/\lambda = 1/2$, correlated decay of the pair of atoms at the enhanced rate $2\gamma_1$ is expected \cite{Lalumiere2013,Chang2012a} and experimentally observed \cite{vanLoo2013}. On the other hand, at a separation of $2l/\lambda = 3/4$, correlated decay is replaced by coherent energy exchange between the two atoms mediated by virtual photons \cite{Lalumiere2013,Chang2012a,vanLoo2013}. We note that the experiments of \textcite{vanLoo2013} with transmon qubits agree with a Markovian model of the interaction of the qubits with the waveguide \cite{Lalumiere2013,Chang2012a,Lehmberg1970}. Deviations from these predictions are expected as the distance between the atoms increases \cite{Zheng2013a}. Finally, following a proposal by \textcite{Chang2012a}, an experiment by \textcite{Mirhosseini2019} used a pair of transmon qubits to act as an effective cavity for a third transmon qubit, all qubits being coupled to the same waveguide. In this way, vacuum Rabi oscillations between the dark state of the effective cavity and the qubit playing the role of atom were observed, confirming that the strong-coupling regime of cavity QED was achieved. \subsection{Single microwave photon detection} \label{sec:SinglePhotonDetector} The development of single-photon detectors at infrared, optical and ultraviolet frequencies has been crucial to the field of quantum optics and to fundamental tests of quantum physics \cite{Hadfield2009,Eisaman2011}. High-efficiency photon detectors are, for example, one of the elements that allowed the loophole-free violation of Bell's inequality \cite{Hanson2015,Shalm2015a,Giustina2015a}. Because microwave photons have orders of magnitude less energy than infrared, optical or ultraviolet photons, the realization of a photon detector at microwave frequencies is more challenging. Yet, photon detectors in that frequency range would find a number of applications, including in quantum information processing \cite{Kimble2008,Narla2016}, for quantum radars \cite{Barzanjeh2015,Chang2019,Barzanjeh2019}, and for the detection of dark matter axions \cite{Lamoreaux2013}. Non-destructive counting of microwave photons localized in a cavity has already been demonstrated experimentally by using an (artificial) atom as a probe in the strong dispersive regime \cite{Gleyzes2007,Schuster2007a}. Similar measurements have also been done using a transmon qubit mediating interactions between two cavities, one containing the photons to be measured and a second acting as a probe \cite{Johnson2010}. The detection of itinerant microwave photons remains, however, more challenging. A number of theoretical proposals have appeared \cite{Helmer2009b,Romero2009,Wong2017,Kyriienko2016,Koshino2013b,Koshino2016a,Sathyamoorthy2014,Fan2014a,Leppakangas2018,Royer2018}. One common challenge for these approaches based on absorbing itinerant photons in a localized mode before detecting them can be linked to the quantum Zeno effect. Indeed, continuous monitoring of the probe mode will prevent the photon from being absorbed in the first place. 
Approaches to mitigate this problem have been introduced, including using an engineered, impedance-matched $\Lambda$-system to deterministically capture the incoming photon \cite{Koshino2016a}, and using the bright and dark states of an ensemble of absorbers \cite{Royer2018}. Despite these challenges, the first itinerant microwave photon detectors have been realized in the laboratory \cite{Narla2016,Chen2011a,Oelsner2017,Inomata2016}, in some cases achieving photon detection without destroying the photon in the process~\cite{Kono2018,Besse2017,Lescanne2019a}. Notably, a microwave photon counter was used to measure a superconducting qubit with a fidelity of 92\% without using a linear amplifier between the source and the detector \cite{Opremcak2018a}. Despite these advances, the realization of a high-efficiency, large-bandwidth, QND single microwave photon detector remains a challenge for the field. \section{Outlook} Fifteen years after its introduction~\cite{Blais2004,Wallraff2004}, circuit QED is a leading architecture for quantum computing and an exceptional platform to explore the rich physics of quantum optics in new parameter regimes. Circuit QED has, moreover, found applications in numerous other fields of research as discussed in the body of the review and in the following. In closing this review, we turn to some of these recent developments. Although there remain formidable challenges before large-scale quantum computation becomes a reality, the increasing number of qubits that can be wired up, as well as the improvements in coherence time and gate fidelity, suggests that it will eventually be possible to perform computations on circuit QED-based quantum processors that are out of reach of current classical computers. Quantum supremacy on a 53-qubit device has already been claimed \cite{Arute2019}, albeit on a problem of no immediate practical interest. There is, however, much effort deployed in finding useful computational tasks which can be performed on Noisy Intermediate-Scale Quantum (NISQ) devices \cite{Preskill2018}. First steps in this direction include the determination of molecular energies with variational quantum eigensolvers \cite{OMalley2016,Kandala2017,Colless2018} or boson sampling approaches \cite{Wang2019a} and machine learning with quantum-enhanced features \cite{Havlicek2019}. Engineered circuit QED-based devices also present an exciting avenue toward performing analog quantum simulations. In contrast to quantum computing architectures, quantum simulators are usually tailored to explore a single specific problem. An example is an array of resonators that are capacitively coupled to allow photons to hop from resonator to resonator. Taking advantage of the flexibility of superconducting quantum circuits, it is possible to create exotic networks of resonators such as lattices in an effective hyperbolic space with constant negative curvature \cite{Kollar2019}. Coupling qubits to each resonator realizes a Jaynes-Cummings lattice which exhibits a quantum phase transition similar to the superfluid-Mott insulator transition in Bose–Hubbard lattices \cite{Houck2012}. Moreover, the nonlinearity provided by capacitively coupled qubits, or by Josephson junctions embedded in the center conductor of the resonators, creates photon-photon interactions. This leads to effects such as photon blockade bearing some similarities to Coulomb blockade in mesoscopic systems \cite{Schmidt2013}. Devices comprising only a few resonators and qubits are also promising for analog quantum simulations. 
Examples are the exploration of a simple model of the light harvesting process in photosynthetic complexes in a circuit QED device under the influence of both coherent and incoherent drives \cite{Potocnik2018}, and the analog simulation of dissipatively stabilized strongly correlated quantum matter in a small photon Bose–Hubbard lattice \cite{Ma2019}. Because it is a versatile platform for interfacing quantum devices having transition frequencies in the microwave domain with photons stored in superconducting resonators at similar frequencies, the ideas of circuit QED are also now used to couple to a wide variety of physical systems. An example of such hybrid quantum systems is semiconductor-based double quantum dots coupled to superconducting microwave resonators. Here, the position of an electron in a double dot leads to a dipole moment to which the resonator electric field couples. First experiments with gate-defined double quantum dots in nanotubes \cite{Delbecq2011}, GaAs \cite{Frey2012,Toida2013,Wallraff2013}, and InAs nanowires \cite{Petersson2012a} have demonstrated dispersive coupling and its use for characterizing charge states of quantum dots \cite{Burkard2020}. These first experiments were, however, limited by the very large dephasing rate of the quantum dot's charge states, but subsequent experiments have been able to reach the strong coupling regime \cite{Mi2017,Stockklauser2017,Bruhat2018}. Building on these results and by engineering an effective spin-orbit interaction \cite{PioroLadriere2008,Beaudoin2016a}, it has been possible to reach the strong coupling regime with single spins \cite{Mi2018,Samkharadze2018,Landig2018}. When the coupling to a single spin cannot be made large enough to reach the strong coupling regime, it can be possible to rely on an ensemble of spins to boost the effective coupling. Indeed, in the presence of an ensemble of $N$ emitters, the coupling strength to the ensemble is enhanced by $\sqrt N$ \cite{Imamoglu2009,Fink2009}, such that for large enough $g\sqrt N$ the strong coupling regime can be reached. First realizations of these ideas used ensembles of $\sim 10^{12}$ spins to bring the coupling from a few Hz to $\sim 10$ MHz with NV centers in diamond \cite{Kubo2010} and Cr$^{3+}$ spins in ruby \cite{Schuster2010a}. One objective of these explorations is to increase the sensitivity of electron paramagnetic resonance (EPR) or electron spin resonance (ESR) spectroscopy for spin detection with the ultimate goal of achieving the single-spin limit. A challenge in reaching this goal is the long lifetime of single spins in these systems, which limits the repetition rate of the experiment. By engineering the coupling between the spins and an LC oscillator fabricated in close proximity, it has been possible to take advantage of the Purcell effect to reduce the relaxation time from $10^3$~s to $1$~s \cite{Bienfait2016a}. This faster time scale allows for faster repetition rates, thereby boosting the sensitivity, which could lead to spin sensitivities on the order of $0.1~\mathrm{spin}/\sqrt{\mathrm{Hz}}$ \cite{Haikka2017}. Mechanical systems operated in the quantum regime have also benefited from the ideas of circuit QED \cite{Aspelmeyer2013}. An example is a suspended aluminium membrane that plays the role of a vacuum gap capacitor in a microwave LC oscillator. The frequency of this oscillator depends on the separation between the plates of the capacitor, leading to a coupling between the oscillator and the flexural mode of the membrane. 
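A rough estimate of this coupling follows by assuming an ideal parallel-plate capacitor of gap $d$ and neglecting stray capacitance: since $C(x) \propto 1/(d + x)$, the oscillator frequency $\omega_0 = 1/\sqrt{LC}$ shifts with the membrane displacement $x$ as $\omega(x) \simeq \omega_0(1 + x/2d)$. Writing $\hat x = x_\mathrm{zpf}(\hat b + \hat b^\dag)$ for the flexural mode then yields the usual radiation-pressure interaction
\begin{equation}
\hat H_\mathrm{int} \simeq \hbar g_0\, \hat a^\dag\hat a\, (\hat b + \hat b^\dag),
\qquad g_0 = \frac{\omega_0}{2d}\, x_\mathrm{zpf},
\end{equation}
with $\hat a$ the mode of the LC oscillator and $x_\mathrm{zpf}$ the zero-point motion of the membrane; see \textcite{Aspelmeyer2013} for a complete treatment.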
Strong coupling between mechanical motion and the LC oscillator has been demonstrated \cite{Teufel2011a}, which allowed sideband cooling of the mechanical oscillator's motion to phonon occupation numbers as small as $n_\mathrm{phonon}\sim0.34$~\cite{Teufel2011}. Squeezed radiation generated by a Josephson parametric amplifier was also used to cool beyond the quantum backaction limit to $n_\mathrm{phonon}\sim0.19$ \cite{Clark2017}. Building on these ideas, entanglement of the mechanical motion and the microwave fields was demonstrated \cite{Palomaki2013a} as well as coherent state transfer between itinerant microwave fields and a mechanical oscillator \cite{Palomaki2013}. Hybrid systems are also important in the context of microwave-to-optical frequency transduction in the quantum regime. This is a very desirable primitive for quantum networks, as it would allow quantum processors based on circuit QED to be linked optically over large distances. A variety of hybrid systems are currently being investigated for this purpose, including electro-optomechanical, electro-optic and magneto-optic ones~\cite{Higginbotham2018,Lambert2019,Lauk2020}. Two other hybrid quantum systems that have recently emerged are quantum surface acoustic waves interacting with superconducting qubits \cite{Gustafsson2014}, and quantum magnonics where quanta of excitation of spin-wave modes known as magnons are strongly coupled to the field of a 3D microwave cavity \cite{Lachance-Quirion2019}. \section*{Acknowledgments} We thank Alexandre Choquette-Poitevin, Agustin Di Paolo, Christopher Eichler, Jens Koch, Dany Lachance-Quirion, Yasunobu Nakamura, William Oliver, Alexandru Petrescu, Baptiste Royer, Gerd Sch\"on and Irfan Siddiqi for discussions and comments on the manuscript. This work was undertaken thanks to funding from NSERC, the Canada First Research Excellence Fund, the U.S. Army Research Office Grant No.~W911NF-18-1-0411 and W911NF-18-1-0212, and the Australian Research Council (ARC) via Discovery Early Career Research Award (DECRA) DE190100380. \appendix \section{Hamiltonian of a voltage-biased transmon} \label{sec:AppendixTRcoupling} An excellent introduction to the quantization of electromagnetic circuits can be found in~\textcite{Vool2017}. Here, we only give a brief introduction to this topic by means of two examples that are used throughout this review: a transmon qubit biased by an external voltage source, and a transmon coupled to an LC oscillator. \subsection{Classical gate voltage} \begin{figure} \caption{(a) Voltage-biased transmon qubit with the three relevant flux branches. (b) Replacing the classical voltage source by an LC oscillator. The dashed arrows indicate the sign convention.\label{fig:app:transmon}} \end{figure} Consider first the circuit shown in~\cref{fig:app:transmon}(a), illustrating a transmon biased by an external voltage $V_g$. Following \textcite{Vool2017}, we start by associating a branch flux $\Phi_i(t) = \int_{-\infty}^t dt'\, V_i(t')$ to each branch of the circuit, with $V_i$ the voltage across branch $i=A,B,C$ indicated in~\cref{fig:app:transmon}(a). Because Kirchhoff's laws impose constraints between the branch fluxes, these fluxes are not independent variables and are therefore not independent degrees of freedom of the circuit. 
Indeed, Kirchhoff's voltage law dictates that $V_C + V_B + V_A = V_g + \dot \Phi_B + \dot \Phi_A = 0$ where we have used the sign convention dictated by the (arbitrarily chosen) orientation of the arrows in~\cref{fig:app:transmon}(a). This constraint allows us to eliminate $\Phi_B$ in favor of $\Phi_A$. Moreover, following Kirchhoff's current law, the currents $I_A$ and $I_B$ flowing into and out of the node indicated by the black dot in~\cref{fig:app:transmon}(a) obey $I_A = I_B$. This constraint can be expressed in terms of the branch fluxes using the constitutive relations for the capacitances $C_g$ and $C_\Sigma = C_S + C_J$ \begin{equation} Q_A = C_\Sigma\dot\Phi_A,\quad Q_B = C_g\dot\Phi_B, \end{equation} as well as the Josephson current relation \begin{equation} I_J = I_c\sin\varphi_A, \end{equation} where $\varphi_A = (2\pi/\Phi_0)\Phi_A$ and $I_c$ is the critical current. We can thus write $I_A = \dot Q_A + I_J = C_\Sigma\ddot \Phi_A + I_c\sin\varphi_A$ and $I_B = \dot Q_B = C_g \ddot \Phi_B$. Combining the above expressions, we arrive at \begin{equation}\label{eq:app:cq:Phi_A} C_\Sigma \ddot \Phi_A + I_c\sin\varphi_A = - C_g(\ddot \Phi_A + \ddot \Phi_C). \end{equation} Here, $\dot\Phi_C = V_g$ is the applied bias voltage and the only dynamical variable in the above equation is thus $\Phi_A$. As can easily be verified, this equation of motion for $\Phi_A$ can equivalently be derived from the Euler-Lagrange equation for $\Phi_A$ with the Lagrangian \begin{equation}\label{eq:app:cq:L_T} \mathcal L_T = \frac{C_\Sigma}{2}\dot\Phi_A^2 + \frac{C_g}{2}(\dot\Phi_A + \dot\Phi_C)^2 + E_J\cos\varphi_A, \end{equation} where $E_J = (\Phi_0/2\pi)I_c$. The corresponding Hamiltonian can be found by first identifying the canonical momentum associated with the coordinate $\Phi_A$, $Q_A = \partial \mathcal L_T/\partial \dot\Phi_A = (C_\Sigma+C_g)\dot\Phi_A + C_g\dot\Phi_C$, and performing a Legendre transform to obtain~\cite{Goldstein2001} \begin{equation} H_T = Q_A\dot\Phi_A - \mathcal L_T = \frac{(Q_A - C_gV_g)^2}{2(C_\Sigma+C_g)} - E_J\cos\varphi_A, \end{equation} where we have made the replacement $\dot\Phi_C = V_g$ and dropped the term $C_gV_g^2/2$ which only leads to an overall shift of the energies. Promoting the conjugate variables to non-commuting operators $[\hat\Phi_A, \hat Q_A]=i\hbar$, we arrive at~\cref{eq:Hsj} where we have assumed that $C_g \ll C_\Sigma$ to simplify the notation. \subsection{Coupling to an LC oscillator} As a model for the simplest realization of circuit QED, we now replace the voltage source by an LC oscillator, see \cref{fig:app:transmon}(b). The derivation follows the same steps as before, now with $V_g+\dot\Phi_B-\dot\Phi_A = 0$ and $I_A+I_B=0$ because of the different choice of orientation for branch $A$. Moreover, at the node labelled BC we have $I_B=I_C$. Eliminating $\Phi_B$ as before and using the constitutive relations for the capacitance $C$ and inductance $L$ of the LC oscillator to express the current through the oscillator branch as $I_C = C\ddot \Phi_C + \Phi_C/L$, we find \begin{equation}\label{eq:app:cq:Phi_C} C\ddot\Phi_C + \frac{\Phi_C}{L} = C_g(\ddot\Phi_A - \ddot \Phi_C). \end{equation} In contrast to the above example, $\Phi_C$ is now a dynamical variable in its own right rather than being simply set by a voltage source. 
Together with \Cref{eq:app:cq:Phi_A} which still holds, \cref{eq:app:cq:Phi_C} can equivalently be derived using the Euler-Lagrange equations with the Lagrangian \begin{equation}\label{eq:L} \mathcal L = \mathcal L_T + \mathcal L_{LC}, \end{equation} where $\mathcal L_T$ is given in~\cref{eq:app:cq:L_T} and $\mathcal L_{LC} = \frac{C}{2}\dot\Phi_C^2 - \frac{1}{2L}\Phi_C^2$. It is convenient to write \cref{eq:L} as $\mathcal L = T - V$ with $T = \half \*\Phi^T \*C \*\Phi$ and $V=\Phi_C^2/2L - E_J\cos\varphi_A$ where we have defined the vector $\*\Phi = (\Phi_A, \Phi_C)^T$ and the capacitance matrix \begin{equation} \*C = \left(\begin{array}{cc} C_\Sigma + C_g & - C_g \\ -C_g & C + C_g \end{array}\right). \end{equation} Defining the vector of conjugate momenta $\*Q = (Q_A, Q_C)^T$, the Hamiltonian is then~\cite{Goldstein2001} \begin{equation}\label{eq:app:cq:HTransmonResonatorNoApprox} \begin{aligned} H ={}& \half \*Q^T \*C^{-1} \*Q + V\\ ={}&\frac{(C+C_g)}{2\bar C^2}Q_A^2 + \frac{C_g}{\bar C^2}Q_A Q_C - E_J\cos\varphi_A\\ &+ \frac{(C_\Sigma + C_g)}{2\bar C^2}Q_C^2 + \frac{\Phi_C^2}{2L}, \end{aligned} \end{equation} where we have defined $\bar C^2 = C_gC_\Sigma + C_gC + C_\Sigma C$. The limit $C_g \ll C_\Sigma,\, C$ results in the simplified expression \begin{equation}\label{eq:app:cq:HTransmonResonator} \begin{aligned} H \simeq{}& \frac{\left(Q_A + \frac{C_g}{C} Q_C\right)^2}{2C_\Sigma} - E_J\cos\varphi_A + H_{LC}, \end{aligned} \end{equation} with $H_{LC} = \frac{Q_C^2}{2C} + \frac{\Phi_C^2}{2L}$ the Hamiltonian of the LC circuit. By promoting the flux and charge variables to operators, and defining $\hat n = \hat Q_A/2e$, $\hat n_r = (C_g/C)\hat Q_C/2e$ and diagonalizing $\hat H_{LC}$ as in~\cref{sec:HO}, we arrive at~\cref{eq:HTransmonResonator} for a single mode $m=r$. \Cref{eq:app:cq:HTransmonResonator} can easily be generalized to capacitive coupling between other types of circuits, such as resonator-resonator, transmon-transmon or transmon-transmission line coupling by simply replacing the potential energy terms $-E_J\cos\varphi_A$ and $\Phi_C^2/2L$ with terms describing the circuits in question. This leads, for example, to~\cref{eq:gates:H_swap} for two capacitively coupled transmons after introducing ladder operators as in~\cref{eq:phiTransmon,eq:nTransmon}. \section{\label{sec:unitarytransforms}Unitary transformations} We introduce a number of unitary transformations often employed in the field of circuit QED. The starting point is the usual transformation \begin{equation} \hat H_U = \hat U^\dagger \hat H \hat U - i\hbar \hat U^\dagger \dot{\hat U}, \end{equation} of a Hamiltonian under a time-dependent unitary $\hat U$ with the corresponding transformation for the states $\ket{\psi_U} = \hat U^\dagger \ket{\psi}$. Since the unitary can be written as $\hat U = \exp(-\hat S)$ with $\hat S$ an anti-Hermitian operator, a very useful result in this context is the Baker-Campbell-Hausdorff (BCH) formula, which holds for any two operators $\hat S$ and $\hat H$ \begin{equation}\label{eq:BCH} \begin{aligned} e^{\hat S} \hat H e^{-\hat S} ={}& \hat H + [\hat S, \hat H] + \frac{1}{2!} [\hat S,[\hat S, \hat H]] + \dots \\ ={}& \sum_{n=0}^\infty \frac{1}{n!} \mathcal{C}_{\hat S}^n[\hat H], \end{aligned} \end{equation} where in the last line we have introduced the short-hand notation $\mathcal{C}_{\hat S}^n[\hat H] = [\hat S,[\hat S,\dots,[\hat S,\hat H]\dots]]$ for the $n$-fold nested commutator, and $\mathcal{C}_{\hat S}^0[\hat H] = \hat H$~\cite{Boissonneault2009}. 
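As an elementary illustration of \cref{eq:BCH}, take $\hat S = \alpha \hat a^\dag - \alpha^*\hat a$ with $\alpha$ a complex number. Since $[\hat S,\hat a] = -\alpha$ commutes with $\hat S$, the series truncates after the first commutator and
\begin{equation}
e^{\hat S}\hat a\, e^{-\hat S} = \hat a + [\hat S,\hat a] = \hat a - \alpha,
\end{equation}
which is nothing but the displacement transformation used, for example, in \cref{sec:AppendixDrivenTransmon}. In general the series does not truncate and must instead be carried out to a given order; this is the basis of the Schrieffer-Wolff approach described next.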
\subsection{\label{sec:SW}Schrieffer-Wolff perturbation theory} We often seek a unitary transformation that diagonalizes the Hamiltonian of an interacting system. Exact diagonalization can, however, be impractical, and we must resort to finding an effective Hamiltonian which describes the physics at low energies using perturbation theory. A general approach to perturbation theory which we follow here is known as a Schrieffer-Wolff transformation~\cite{Schrieffer1966,Bravyi2011schrieffer}. The starting point is a generic Hamiltonian of the form \begin{equation} \hat H = \hat H_0 + \hat V, \end{equation} with $\hat H_0$ typically some free Hamiltonian and $\hat V$ a perturbation. We divide the total Hilbert space of our system into different subspaces such that $\hat H_0$ does not couple states in different subspaces while $\hat V$ does. The goal of the Schrieffer-Wolff transformation is to arrive at an effective Hamiltonian for which the different subspaces are completely decoupled. The different subspaces, which we label by an index $\mu$, can conveniently be defined by a set of projection operators~\cite{Zhu2012,Cohen-Tannoudji1998} \begin{equation} \hat P_\mu = \sum_n \ket{\mu,n}\bra{\mu,n}, \end{equation} where $\ket{\mu,n}$, $n=0,1,\dots$, is an orthonormal basis for the subspace labeled $\mu$. For the Schrieffer-Wolff transformation to be valid, we must assume that $\hat V$ is a small perturbation. Formally, the operator norm $||\hat V|| = \max_{\ket\psi}||\hat V \ket\psi||$, with the maximization over normalized states $\ket\psi$, should be smaller than half the energy gap between the subspaces we intend to decouple; see Eq.~(3.1) of \textcite{Bravyi2011schrieffer}. While $\hat V$ is often formally unbounded in circuit QED applications, the operator is always bounded when restricting the problem to physically relevant states. The Schrieffer-Wolff transformation is based on finding a unitary transformation $\hat U=e^{-\hat S}$ which approximately decouples the different subspaces $\mu$ by truncating the Baker-Campbell-Hausdorff formula~\cref{eq:BCH} at a desired order. We first expand both $\hat H$ and $\hat S$ in formal power series \begin{subequations}\label{eq:SW:formal_expansions} \begin{align} \hat H ={}& \hat H^{(0)} + \varepsilon \hat H^{(1)} + \varepsilon^2 \hat H^{(2)} + \dots,\\ \hat S ={}& \varepsilon \hat S^{(1)} + \varepsilon^2 \hat S^{(2)} + \dots, \end{align} \end{subequations} where $\varepsilon$ is a fiducial parameter introduced to simplify order counting and which we can ultimately set to $\varepsilon \to 1$. The Schrieffer-Wolff transformation is found by inserting~\cref{eq:SW:formal_expansions} back into~\cref{eq:BCH}, and collecting terms at each order $\varepsilon^k$. We can then iteratively solve for $\hat S^{(k)}$ and $\hat H^{(k)}$ by requiring that the resulting Hamiltonian $\hat H_U$ is block-diagonal (i.e.~it does not couple different subspaces $\mu$) at each order, and the additional requirement that $\hat S$ is itself block off-diagonal~\cite{Bravyi2011schrieffer}. 
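To illustrate how these requirements determine the expansion order by order, consider the terms of order $\varepsilon$ obtained by inserting the above series in \cref{eq:BCH}, which we write as
\begin{equation}
\hat H^{(1)}_U = \hat V + [\hat S^{(1)}, \hat H_0],
\end{equation}
with $\hat H^{(1)}_U$ denoting the order-$\varepsilon$ contribution to $\hat H_U$ (a notation introduced here only for this illustration). The block-diagonal part of this expression is $\sum_\mu \hat P_\mu \hat V \hat P_\mu$, while requiring the block off-diagonal part to vanish, $\braket{\mu,n|\hat V|\nu,l} + (E_{\nu,l}-E_{\mu,n})\braket{\mu,n|\hat S^{(1)}|\nu,l} = 0$ for $\mu \neq \nu$, fixes the matrix elements of $\hat S^{(1)}$ and reproduces the first-order expressions quoted below.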
For the reader's convenience, the explicit results up to $k=2$ are for the generator (with $\varepsilon = 1$) \begin{subequations}\label{eq:SW:explicit_expansions_generator} \begin{align} &\braket{\mu,n|\hat S^{(1)}|\nu,l} = \frac{\braket{\mu,n|\hat V|\nu,l}}{E_{\mu,n}-E_{\nu,l}},\\ &\begin{aligned} \braket{\mu,n|\hat S^{(2)}|\nu,l} ={}& \sum_{k} \Bigg( \frac{\braket{\mu,n|\hat V|\nu,k}}{E_{\mu,n}-E_{\nu,l}} \frac{\braket{\nu,k|\hat V|\nu,l}}{E_{\mu,n}-E_{\nu,k}}\\ &- \frac{\braket{\mu,n|\hat V|\mu,k}}{E_{\mu,n}-E_{\nu,l}} \frac{\braket{\mu,k|\hat V|\nu,l}}{E_{\mu,k}-E_{\nu,l}}\Bigg) \end{aligned}, \end{align} \end{subequations} for $\nu\neq\mu$, while the block-diagonal matrix elements with $\mu=\nu$ are zero, and \begin{subequations}\label{eq:SW:explicit_expansions} \begin{align} &\hat H^{(0)} = \hat H_0,\\ &\hat H^{(1)} = \sum_\mu \hat P_\mu \hat V \hat P_\mu,\\ &\begin{aligned} \braket{\mu,n|\hat H^{(2)}|\mu,m} &= \sum_{\nu\neq\mu,l} \braket{\mu,n|\hat V|\nu,l}\braket{\nu,l|\hat V|\mu,m}\\ \times \half \bigg( &\frac{1}{E_{\mu,n}-E_{\nu,l}} + \frac{1}{E_{\mu,m}-E_{\nu,l}} \bigg), \end{aligned} \end{align} \end{subequations} for the transformed Hamiltonian (block off-diagonal matrix elements are zero, i.e., $\braket{\mu,n|\hat H^{(2)}|\nu,m}=0$ for $\mu\neq\nu$). In these expressions, $E_{\mu,n}$ refers to the bare energy of $\ket{\mu,n}$ under the unperturbed Hamiltonian $\hat H_0$. Explicit formulas for $\hat H^{(k)}$ up to $k=4$ can be found, e.g., in~\textcite{Winkler2003}. \subsection{\label{sec:SW:multilevel}Schrieffer-Wolff for a multilevel system coupled to an oscillator in the dispersive regime} As an application of the general result of \cref{eq:SW:explicit_expansions} we consider in this section a situation that is commonly encountered in circuit QED: An arbitrary artificial atom coupled to a single-mode oscillator in the dispersive regime. Both the transmon artificial atom and the two-level system discussed in~\cref{sec:dispersive} are special cases of this more general example. The artificial atom, here taken to be a generic multilevel system, is described in its eigenbasis with the Hamiltonian $\hat H_\text{atom} = \sum_j \hbar\omega_j \ket j \bra j$. The full Hamiltonian is therefore given by \begin{equation} \hat H = \hbar \omega_r \hat a^\daga + \sum_j \hbar\omega_j \ket j \bra j + \left( \hat B \hat a^\dag + \hat B^\dagger \hat a\right), \end{equation} where $\hat B$ is an arbitrary operator of the artificial atom which couples to the oscillator. For example, in the case of capacitive coupling, it is proportional to the charge operator with $\hat B \sim i\hat n$, cf.~\cref{eq:HTransmonResonator}. By inserting resolutions of the identity $\hat I = \sum_j \ket j \bra j$, the interaction term can be re-expressed in the atomic eigenbasis as \cite{Koch2007} \begin{equation}\label{eq:SW:multilevel_hamiltonian} \begin{aligned} \hat H ={}& \hbar \omega_r \hat a^\daga + \sum_j \hbar\omega_j \ket j \bra j \\ &+ \sum_{ij} \hbar \left( g_{ij} \ket i \bra j \hat a^\dag + g_{ij}^* \ket j \bra i \hat a\right), \end{aligned} \end{equation} where $\hbar g_{ij} = \braket{i|\hat B|j}$, and with $g_{ij} = g_{ji}^*$ if $\hat B = \hat B^\dagger$. To use~\cref{eq:SW:explicit_expansions}, we identify the first line of~\cref{eq:SW:multilevel_hamiltonian} as $\hat H_0$ and the second line as the perturbation $\hat V$. 
The subspaces labeled by $\mu$ are in this situation one-dimensional, $\hat P_\mu = \ket\mu\bra\mu$, with $\ket{\mu} = \ket{n,j} = \ket{n}\otimes \ket{j}$, $\ket{n}$ an oscillator number state and $\ket j$ an artificial atom eigenstate. A straightforward calculation yields the second-order result~\cite{Zhu2012} \begin{equation}\label{eq:SW:multilevel} \begin{aligned} \hat H_\mathrm{disp} = e^{\hat S} \hat H e^{-\hat S} \simeq{}& \hbar \omega_r \hat a^\daga + \sum_j \hbar (\omega_j + \Lambda_j) \ket j \bra j \\ &+ \sum_j \hbar \chi_j \hat a^\dag \hat a \ket j \bra j, \end{aligned} \end{equation} where \begin{equation} \Lambda_j = \sum_i \chi_{ij},\quad \chi_j = \sum_i \left(\chi_{ij} - \chi_{ji} \right), \end{equation} with \begin{equation} \chi_{ij} = \frac{|g_{ji}|^2}{\omega_j-\omega_i-\omega_r}. \end{equation} Note that we are following here the convention of \textcite{Koch2007} rather than of \textcite{Zhu2012} for the definition of $\chi_{ij}$. Projecting~\cref{eq:SW:multilevel} on the first two atomic levels $j=0,1$ with the convention $\hat\sigma_z = \ket 1\bra 1 - \ket 0\bra 0$ we obtain \begin{equation}\label{eq:SW:twolevel} \begin{aligned} \hat H_\mathrm{disp} \simeq{}& \hbar \omega_r' \hat a^\daga + \frac{\hbar\omega_q'}{2}\sz{} + \hbar\chi \hat a^\dag \hat a \sz{}, \end{aligned} \end{equation} where we have dropped a constant term and defined $\omega_r' = \omega_r + (\chi_0 + \chi_1)/2$, $\omega_{q}'=\omega_1-\omega_0 + \Lambda_1-\Lambda_0$ and $\chi= (\chi_1 - \chi_0)/2$. \subsubsection{\label{sec:SW:transmon}The transmon} The transmon capacitively coupled to an oscillator is one example of the above result. From \cref{eq:HTransmonJC}, we identify the free Hamiltonian as \begin{equation} \hat H_0 = \hbar\omega_r \hat a^\daga + \hbar\omega_q \hat b^\dagb - \frac{E_C}{2}\hat b^\dag\hat b^\dag\hat b\hat b, \end{equation} and the perturbation as \begin{equation} \hat V = \hbar g (\hat b^\dag\hat a+\hat b\hat a^\dag). \end{equation} In this nonlinear oscillator approximation for the transmon, the transmon eigenstates are number states $\hat b^\dag\hat b \ket{j} = j\ket j$, with $j=0,1,\dots,\infty$. Moreover, the coupling operator is $\hat B = \hbar g \hat b$, and thus \begin{equation} g_{j,j+1} = g \braket{j|\hat b|j+1} = g\sqrt{j+1} = g_{j,j+1}^*, \end{equation} while all other matrix elements $g_{ij}$ are zero. We consequently find \begin{subequations} \begin{align} &\Lambda_{j} = \chi_{j-1,j} = \frac{j g^2}{\omega_{q} - (E_C/\hbar)(j - 1) - \omega_r},\\ &\begin{aligned} \chi_{j} ={}& \chi_{j-1,j} - \chi_{j,j+1} \\ ={}& g^2\left(\frac{j}{\omega_j-\omega_{j-1}-\omega_r} - \frac{j+1}{\omega_{j+1}-\omega_j-\omega_r}\right), \end{aligned} \end{align} \end{subequations} for $j>0$, while for $j=0$ we have $\Lambda_0 = 0$ and $\chi_0 = -\chi_{01} = -g^2/\Delta$ where $\Delta \equiv \omega_q-\omega_r$. In the two-level approximation of \cref{eq:SW:twolevel}, this becomes~\cite{Koch2007} \begin{subequations}\label{eq:SW:transmonshifts} \begin{align} \omega_r' ={}& \omega_r - \frac{\chi_{12}}{2} = \omega_r - \frac{g^2}{\Delta-E_C/\hbar},\\ \omega_{q}' ={}& \omega_1-\omega_0 + \chi_{01} = \omega_{q} + \frac{g^2}{\Delta},\\ \chi ={}& \chi_{01} - \frac{\chi_{12}}{2} = - \frac{g^2 E_C/\hbar}{\Delta\left(\Delta-E_C/\hbar\right)}, \end{align} \end{subequations} which are the results quoted in~\cref{eq:HQubitDispersiveParametersSW}. Recall that this Schrieffer-Wolff perturbation theory is only valid if the perturbation $\hat V$ is sufficiently small. 
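As a quick numerical sanity check of \cref{eq:SW:transmonshifts}, the short script below diagonalizes the Kerr-nonlinear-oscillator model of the transmon coupled to a single cavity mode in a truncated Fock basis and compares the resulting dispersive shift with the perturbative formula. The parameter values are illustrative choices rather than those of a particular experiment.
\begin{verbatim}
# Numerical check of the dispersive-shift formula for the
# Kerr-nonlinear-oscillator (transmon) model. Illustrative parameters.
import numpy as np

# Parameters in GHz (hbar = 1; the 2*pi factors cancel in the ratios).
wr, wq, EC, g = 7.0, 5.0, 0.3, 0.1
Nc, Nq = 8, 8                 # cavity / transmon Fock-space truncation

def destroy(N):
    return np.diag(np.sqrt(np.arange(1, N)), 1)

a = np.kron(destroy(Nc), np.eye(Nq))   # cavity lowering operator
b = np.kron(np.eye(Nc), destroy(Nq))   # transmon lowering operator
ad, bd = a.conj().T, b.conj().T

H = wr * ad @ a + wq * bd @ b - 0.5 * EC * bd @ bd @ b @ b \
    + g * (bd @ a + ad @ b)

evals, evecs = np.linalg.eigh(H)

def dressed_energy(nc, nq):
    """Energy of the eigenstate with maximum overlap with the bare
    state |nc> (cavity) x |nq> (transmon)."""
    bare_index = nc * Nq + nq
    k = np.argmax(np.abs(evecs[bare_index, :]))
    return evals[k]

# 2*chi is the qubit-state-dependent cavity pull.
chi_exact = 0.5 * ((dressed_energy(1, 1) - dressed_energy(0, 1))
                   - (dressed_energy(1, 0) - dressed_energy(0, 0)))

Delta = wq - wr
chi_pert = -g**2 * EC / (Delta * (Delta - EC))

print(f"chi (exact diagonalization) = {chi_exact*1e3:.3f} MHz")
print(f"chi (perturbative formula)  = {chi_pert*1e3:.3f} MHz")
\end{verbatim}
For dispersive parameters such as these the two estimates agree closely, and the agreement degrades as $g/|\Delta|$ or the photon number is increased, in line with the validity condition discussed below.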
Following \textcite{Bravyi2011schrieffer}, a more precise statement is that we require $2||\hat V|| < \Delta_\text{min}$, where $\Delta_\text{min}$ is the smallest energy gap between any of the bare energy eigenstates $\ket{n} \otimes \ket j$, where $\ket n$ is a number state for the oscillator. Here, $\hat V = g(\hat b^\dag\hat a + \hat b\hat a^\dag)$ is formally unbounded but physical states have finite excitation numbers. Therefore, replacing the operator norm by $\braket{n,j|\hat V^\dagger \hat V|n,j}^{1/2}$ and using $\Delta_\text{min} = |\Delta- j E_C/\hbar|$ corresponding to the minimum energy gap between neighboring states $\ket{n,j}$ and $\ket{n\pm 1,j\mp 1}$, we find that a more precise criterion for the validity of the above perturbative results is \begin{equation} n \ll n_\text{crit} \equiv \frac{1}{2j + 1} \left(\frac{|\Delta-j E_C/\hbar|^2}{4 g^2} - j\right). \end{equation} Setting $j=0$, this gives the familiar expression $n_\text{crit} = (\Delta/2g)^2$, while setting $j=1$ gives a more conservative estimate. As quoted in the main text, the appropriate small parameter is therefore $\bar n/n_\mathrm{crit}$, with $\bar n$ the average oscillator photon number. For the second-order effective Hamiltonian $\hat H_\text{disp}$ to be an accurate description of the system, $\bar n/n_\mathrm{crit}$ must be significantly smaller than unity (it is difficult to make a precise statement but the criterion $\bar n/n_\mathrm{crit}\lesssim 0.1$ is often used). \subsubsection{The Jaynes-Cummings model} \label{sec:AppendixDispersiveTLS} It is interesting to contrast the above result in which the transmon is treated as a multilevel system with the result obtained if the artificial atom is truncated to a two-level system \emph{before} performing the Schrieffer-Wolff transformation. That is, we start with the Jaynes-Cummings Hamiltonian \begin{equation} \begin{split} \hat H_\mathrm{JC} = \hbar\omega_r \hat a^\daga + \frac{\hbar\omega_{q}}{2}\sz{} + \hbar g(\hat a^\dag\smm{} + \hat a\spp{}). \end{split} \end{equation} Identifying the first two terms as the unperturbed Hamiltonian $\hat H_0$ and the last term as the interaction $\hat V$, we can again apply~\cref{eq:SW:explicit_expansions}. Alternatively, the result can be found more directly from~\cref{eq:SW:twolevel} by taking $g_{01} = g_{01}^* = g$ and all other $g_{ij} = 0$. The result is \begin{equation} \omega_r' = \omega_r,\quad \omega_{q}' = \omega_{q} + \frac{g^2}{\Delta},\quad \chi = \frac{g^2}{\Delta}, \end{equation} with $\Delta = \omega_q - \omega_r$ as before. We see that the results agree with~\cref{eq:SW:transmonshifts} only in the limit $E_C/\hbar \gg \Delta,\, g$. Importantly, since $E_C$ is relatively small compared to the detuning $\Delta$ in most transmon experiments, the value for $\chi$ predicted from the Jaynes-Cummings model is far from the multi-level case. Moreover, following the same argument as above, we find that the Schrieffer-Wolff transformation is valid for photon numbers $\bar n<n_\text{crit}$ with $n_\text{crit} = (\Delta/2g)^2-j$, where $j=0,1$ for the ground and excited qubit states, respectively. It is interesting to note that the transformation used here to approximately diagonalize the Jaynes-Cummings Hamiltonian can be obtained by Taylor expanding the generator $\Lambda(\hat N_T)$ of the unitary transformation \cref{eqn:JCdiag_transform} which exactly diagonalizes $\hat H_\mathrm{JC}$. 
This exercise also leads to the conclusion that $\bar n/n_\mathrm{crit}$, with $n_\mathrm{crit} = (\Delta/2g)^2$, is the appropriate small parameter. Alternatively, $\hat H_\text{disp}$ can also be obtained simply by Taylor expanding the diagonal form \cref{eq:HJCdiagonal} of $\hat H_\mathrm{JC}$ \cite{Boissonneault2010}.
\subsection{Bogoliubov approach to the dispersive regime}\label{sec:AppendixDispersiveTransformation}
We derive the results presented in~\cref{sec:dispersive:bb}. Our starting point is thus the transmon-resonator Hamiltonian~\cref{eq:HTransmonJC} and our final result is the dispersive Hamiltonian of \cref{eq:HTransmonDispersive}. It is first useful to express \cref{eq:HTransmonJC} as a sum of a linear and a nonlinear part, $\hat H = \hat H_\mathrm{L} + \hat H_\mathrm{NL}$, where
\begin{align}
\hat H_\mathrm{L} ={}& \hbar\omega_r \hat a^\dag\hat a + \hbar\omega_q \hat b^\dag\hat b + \hbar g (\hat b^\dag\hat a+\hat b\hat a^\dag),\label{eq:app:HTransmonDispersiveL}\\
\hat H_\mathrm{NL} ={}& -\frac{E_C}{2} \hat b^\dag\hat b^\dag\hat b\hat b.\label{eq:app:HTransmonDispersiveNL}
\end{align}
The linear Hamiltonian $\hat H_\mathrm{L}$ can be diagonalized exactly with the Bogoliubov transformation
\begin{equation}
\hat U = \exp\left[\Lambda (\hat a^\dagger \hat b - \hat a \hat b^\dagger) \right].
\end{equation}
Under this unitary transformation, the annihilation operators transform as $\hat U^\dagger \hat a \hat U = \cos(\Lambda)\hat a + \sin(\Lambda)\hat b$, $\hat U^\dagger \hat b \hat U = \cos(\Lambda)\hat b - \sin(\Lambda)\hat a$, leading to
\begin{equation}
\begin{aligned}
\hat H_\mathrm{L}' ={}& \hat U^\dagger \hat H_\mathrm{L} \hat U = \hbar\tilde \omega_r \hat a^\dagger \hat a + \hbar\tilde \omega_q \hat b^\dagger \hat b \\
&+ \hbar\left[g\cos(2\Lambda) -\frac{\Delta}{2}\sin(2\Lambda)\right](\hat a^\dagger \hat b + \hat a \hat b^\dagger),
\end{aligned}
\end{equation}
where
\begin{align}
\tilde\omega_r ={}& \cos^2(\Lambda)\omega_r + \sin^2(\Lambda)\omega_q - g\sin(2\Lambda),\\
\tilde\omega_q ={}& \cos^2(\Lambda)\omega_q + \sin^2(\Lambda)\omega_r + g\sin(2\Lambda).
\end{align}
To cancel the last term of $\hat H_\mathrm{L}'$, we take $\Lambda = \half \arctan(2\lambda)$ with $\lambda = g/\Delta$ and $\Delta = \omega_q-\omega_r$ to obtain the diagonal form
\begin{equation}
\begin{aligned}
\hat H_\mathrm{L}' ={}& \hbar\tilde\omega_r \hat a^\dagger \hat a + \hbar\tilde\omega_q \hat b^\dagger \hat b,
\end{aligned}
\end{equation}
with the mode frequencies
\begin{align}
\tilde\omega_r ={}& \half\left(\omega_r + \omega_q - \sqrt{\Delta^2+4g^2}\right),\\
\tilde\omega_q ={}& \half\left(\omega_r + \omega_q + \sqrt{\Delta^2+4g^2}\right).
\end{align}
The same transformation on $\hat H_\mathrm{NL}$ gives
\begin{equation}
\begin{aligned}
\hat H_\mathrm{NL}' ={}& \hat U^\dagger \hat H_\mathrm{NL} \hat U \\
={}& -\frac{E_C}{2}\cos^4(\Lambda)(\hat b^\dagger)^2 \hat b^2 -\frac{E_C}{2} \sin^4(\Lambda) (\hat a^\dagger)^2 \hat a^2\\
&-2E_C\cos^2(\Lambda)\sin^2(\Lambda)\hat a^\dagger \hat a \hat b^\dagger \hat b\\
&+ E_C\cos^3(\Lambda)\sin(\Lambda)\left(\hat b^\dagger \hat b\,\hat a^\dagger \hat b + \text{H.c.} \right) \\
&+ E_C \cos(\Lambda)\sin^3(\Lambda)\left(\hat a^\dagger \hat a\, \hat a \hat b^\dagger + \text{H.c.}\right)\\
&-\frac{E_C}{2} \cos^2(\Lambda)\sin^2(\Lambda)[(\hat a^\dagger)^2 \hat b^2 + \text{H.c.}].
\end{aligned}
\end{equation}
Note that, at this stage, the transformation is exact. In the dispersive regime, we expand the mode frequencies and $\hat H_\mathrm{NL}'$ in powers of $\lambda$.
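For the linear part, expanding $\sqrt{\Delta^2+4g^2} = \Delta\left(1+2\lambda^2-2\lambda^4+\dots\right)$ (taking $\Delta>0$ for definiteness) gives the familiar dispersive repulsion of the two mode frequencies,
\begin{align}
\tilde\omega_r \simeq{}& \omega_r - \frac{g^2}{\Delta} + \frac{g^4}{\Delta^3},\\
\tilde\omega_q \simeq{}& \omega_q + \frac{g^2}{\Delta} - \frac{g^4}{\Delta^3}.
\end{align}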
For the nonlinear part of the Hamiltonian, this yields
\begin{equation}\label{eq:app:dispersive:H_NL_lambda}
\begin{aligned}
\hat H_\mathrm{NL}' ={}& -\frac{E_C}{2}(\hat b^\dagger)^2 \hat b^2 -\lambda^4\frac{E_C}{2} (\hat a^\dagger)^2 \hat a^2\\
&-2\lambda^2E_C\hat a^\dagger \hat a \hat b^\dagger \hat b\\
&+ \lambda E_C (\hat b^\dagger \hat b\,\hat a^\dagger \hat b + \text{H.c.})\\
&+ \lambda^3E_C (\hat a^\dagger \hat a \,\hat a \hat b^\dagger + \text{H.c.})\\
&-\lambda^2\frac{E_C}{2}[(\hat a^\dagger)^2 \hat b^2 + \text{H.c.}] + \mathcal{O}(\lambda^5).
\end{aligned}
\end{equation}
The magnitude $2\lambda^2 E_C$ of the cross-Kerr term $\hat a^\dagger \hat a \hat b^\dagger \hat b$ in this expression does not coincide with Eq.~(3.12) of \textcite{Koch2007}. To recover the result of \textcite{Koch2007}, we apply an additional transformation to eliminate the third line of \cref{eq:app:dispersive:H_NL_lambda}. This term is important because it corresponds, roughly, to an exchange interaction $\hat a^\dagger \hat b + \hat b^\dagger \hat a$ with an additional number operator $\hat b^\dagger \hat b$, which distinguishes the different transmon levels. To eliminate this term, we apply a Schrieffer-Wolff transformation to second order with the generator $\hat S = \lambda' (\hat b^\dagger \hat b\,\hat a^\dagger \hat b - \text{H.c.})$ where $\lambda' = \lambda E_C/[\Delta + E_C(1-2\lambda^2)]$. Neglecting the last two lines of \cref{eq:app:dispersive:H_NL_lambda} and omitting a correction of order $\lambda^2$, we arrive at~\cref{eq:HTransmonDispersive}, which agrees with \textcite{Koch2007}.
\subsection{\label{sec:AppendixDrivenTransmon}Off-resonantly driven transmon}
We derive \cref{eq:HacStarkDrive} describing the ac-Stark shift resulting from an off-resonant drive on a transmon qubit. Our starting point is \cref{eq:singlequbitdrive}, which takes the form
\begin{equation}
\hat H(t) = \hbar \omega_q \hat b^\dagger \hat b - \frac{E_C}{2}(\hat b^\dagger)^2 \hat b^2 + \hbar \epsilon(t)\hat b^\dagger + \hbar \epsilon^*(t) \hat b,
\end{equation}
where we have defined $\epsilon(t) = \varepsilon(t) e^{-i\omega_\mathrm{d} t -i\phi_\mathrm{d}}$. To account for a possible time-dependence of the drive envelope $\varepsilon(t)$, it is useful to apply the time-dependent displacement transformation
\begin{equation}
\hat U(t) = e^{\alpha^*(t) \hat b - \alpha(t) \hat b^\dagger}.
\end{equation}
Under $\hat U(t)$, $\hat b$ transforms to $\hat U^\dagger \hat b \hat U = \hat b - \alpha(t)$, while
\begin{equation}
\hat U^\dagger \dot{\hat U} = \dot \alpha^*(t) \hat b - \dot \alpha(t) \hat b^\dagger.
\end{equation}
Using these expressions, the transformed Hamiltonian becomes
\begin{equation}
\begin{aligned}
\hat H' ={}& \hat U^\dagger \hat H \hat U - i\hbar\hat U^\dagger \dot{\hat U} \\
\simeq{}& \hbar\omega_q (\hat b^\dagger \hat b - \alpha^* \hat b - \alpha \hat b^\dagger)\\
&- \frac{E_C}{2} \left[(\hat b^\dagger)^2 \hat b^2 + 4|\alpha|^2 \hat b^\dagger \hat b\right]\\
&+ \hbar\epsilon\hat b^\dagger + \hbar \epsilon^* \hat b -i\hbar(\dot \alpha^* \hat b - \dot \alpha \hat b^\dagger),
\end{aligned}
\end{equation}
where we have dropped fast-rotating terms and a scalar. The choice
\begin{equation}
\dot \alpha(t) = -i\omega_q \alpha(t) + i\epsilon(t),
\end{equation}
cancels the linear drive term, leaving
\begin{equation}\label{eq:gates:Stark}
\begin{aligned}
\hat H'(t) \simeq{}& [\hbar\omega_q- 2E_C |\alpha(t)|^2] \hat b^\dagger \hat b - \frac{E_C}{2} (\hat b^\dagger)^2 \hat b^2.
\end{aligned}
\end{equation}
Taking a constant envelope $\varepsilon(t)=\varepsilon$ for simplicity, such that in steady state $|\alpha(t)|^2 = (\varepsilon/\delta_q)^2$ with $\delta_q = \omega_{q}-\omega_\mathrm{d}$ the qubit-drive detuning, the above expression takes the compact form
\begin{equation}
\begin{aligned}
\hat H''(t) \simeq{}& \half \left(\hbar\omega_q - E_C \frac{\Omega_R^2}{2 \delta_q^2}\right) \sz{},
\end{aligned}
\end{equation}
in the two-level approximation, with $\Omega_R = 2\varepsilon$ the Rabi frequency; this is \cref{eq:HacStarkDrive} of the main text. It is instructive to obtain the same result now using the Schrieffer-Wolff approach. Assuming a constant envelope $\varepsilon(t)=\varepsilon$ and with $\phi_\mathrm{d}=0$ for simplicity, our starting point is
\begin{equation}
\hat H = \hbar \delta_q \hat b^\dagger \hat b - \frac{E_C}{2}(\hat b^\dagger)^2 \hat b^2 + \hbar \varepsilon(\hat b^\dagger + \hat b),
\end{equation}
in a frame rotating at $\omega_\mathrm{d}$ and where $\delta_q = \omega_{q}-\omega_\mathrm{d}$. We treat the drive as a perturbation and apply the second order formula~\cref{eq:SW:explicit_expansions} to obtain
\begin{equation}
\begin{aligned}
\hat H_U \simeq{}& \hbar \delta_q \hat b^\dagger \hat b - \frac{E_C}{2}(\hat b^\dagger)^2 \hat b^2 \\
+ \hbar|\varepsilon|^2 \sum_j &\left( \frac{j}{\delta_q - \frac{E_C(j-1)}{\hbar}} - \frac{j+1}{\delta_q - \frac{E_C j}{\hbar}} \right) \ket j \bra j\\
\simeq{} \hbar \delta_q& \hat b^\dagger \hat b - \frac{E_C}{2}(\hat b^\dagger)^2 \hat b^2 - 2E_C \frac{|\varepsilon|^2}{\delta_q^2} \hat b^\dag\hat b - \frac{\hbar|\varepsilon|^2}{\delta_q},
\end{aligned}
\end{equation}
where $\ket j$ is used to label transmon states, as before. In the last approximation we have kept only terms to $\mathcal{O}(j E_C /\hbar\delta_q)$. This agrees with~\cref{eq:gates:Stark} for $|\alpha|^2 = |\varepsilon/\delta_q|^2$. More accurate expressions can be obtained by going to higher order in perturbation theory \cite{Schneider2018b}.
\section{\label{app:inout}Input-output theory}
Following closely \textcite{Yurke1984,Yurke2004}, we derive the input-output equations of \cref{sec:inout}. As illustrated in \cref{fig:inout}, we consider an LC oscillator located at $x=0$, capacitively coupled to a semi-infinite transmission line extending from $x=0$ to $\infty$. In analogy with~\cref{eq:HamiltonianResonatorContinous}, the Hamiltonian for the transmission line is
\begin{equation}
\hat H_\text{tml} = \int_{-\infty}^{\infty} dx\, \theta(x) \bigg\{ \frac{\hat Q_\text{tml}(x)^2}{2 c} + \frac{\left[\partial_x \hat \Phi_\text{tml}(x)\right]^2}{2l} \bigg\},
\end{equation}
where $c$ and $l$ are, respectively, the capacitance and inductance per unit length, and $\theta(x)$ the Heaviside step function. The flux and charge operators satisfy the canonical commutation relation $[\hat \Phi_\text{tml}(x),\hat Q_\text{tml}(x')] = i\hbar\delta(x-x')$. On the other hand, the Hamiltonian of the LC oscillator of frequency $\omega_r=1/\sqrt{L_\mathrm{r}C_\mathrm{r}}$ is $\hat H_S = \hat Q_\mathrm{r}^2/(2C_\mathrm{r}) +\hat \Phi_\mathrm{r}^2/(2L_\mathrm{r})$ and the interaction Hamiltonian takes the form
\begin{equation}\label{eq:inout:H_int}
\hat H_\text{int} = \int_{-\infty}^\infty dx\, \delta(x) \frac{C_\kappa}{c C_\mathrm{r}} \hat Q_\mathrm{r} \hat Q_\text{tml}(x),
\end{equation}
where $C_\kappa$ is the coupling capacitance between the oscillator and the line. In deriving~\cref{eq:inout:H_int}, we have neglected renormalizations of $c$ and $C_\mathrm{r}$ due to $C_\kappa$ (c.f.~\cref{sec:AppendixTRcoupling}).
The total Hamiltonian is thus $\hat H = \hat H_S + \hat H_\text{tml} + \hat H_\text{int} = \int_{-\infty}^\infty dx \, \mathcal H$, where we have introduced the Hamiltonian density $\mathcal H$ in the obvious way. Using these results, Hamilton's equations for the field in the transmission line take the form
\begin{align}
&\begin{aligned}
\dot{\hat \Phi}_\text{tml}(x) ={}& \theta(x) \frac{\hat Q_\text{tml}(x)}{c} + \delta(x) \frac{C_\kappa}{C_\mathrm{r} c}\hat Q_\mathrm{r},
\end{aligned}\\
&\begin{aligned}
\dot{\hat Q}_\text{tml}(x) ={}& \partial_x \left[\theta(x) \frac{\partial_x\hat \Phi_\text{tml}(x)}{l}\right].
\end{aligned}
\end{align}
These two equations can be combined into a wave equation for $\hat \Phi_\text{tml}$ which, for $x>0$, reads
\begin{equation}\label{eq:inout:waveeq}
\ddot{\hat \Phi}_\text{tml}(x) = v^2\partial_x^2 \hat \Phi_\text{tml}(x),
\end{equation}
where $v = 1/\sqrt{lc}$ is the speed of light in the line. At the location $x=0$ of the oscillator, we instead find
\begin{equation}
\begin{aligned}
\ddot{\hat \Phi}_\text{tml}(x) ={}& \theta(x)v^2\left[\delta(x) \partial_x \hat \Phi_\text{tml}(x) + \partial_x^2 \hat \Phi_\text{tml}(x) \right]\\
&+ \delta(x) \frac{C_\kappa}{C_\mathrm{r}c}\dot{\hat Q}_\mathrm{r},
\end{aligned}
\end{equation}
where we have used $\partial_x\theta(x) = \delta(x)$. We integrate the last equation over $-\varepsilon < x < \varepsilon$ and subsequently take $\varepsilon\to 0$ to find the boundary condition
\begin{equation}\label{eq:inout:boundary}
\begin{aligned}
v^2 \partial_x \hat \Phi_\text{tml}(x=0) = - \frac{C_\kappa}{C_\mathrm{r}c}\dot{\hat Q}_\mathrm{r}.
\end{aligned}
\end{equation}
From~\cref{eq:inout:waveeq}, we find that the general solution for the flux and charge fields, defined as $\hat Q_\text{tml}(x,t) = c\partial_t\hat \Phi_\text{tml}(x,t)$, can be written for $x>0$ as $\hat \Phi_\text{tml}(x,t) = \hat \Phi_\text{L}(x,t) + \hat \Phi_\text{R}(x,t)$ and $\hat Q_\text{tml}(x,t) = \hat Q_\text{L}(x,t) + \hat Q_\text{R}(x,t)$, with the subscript L/R denoting left- and right-moving fields
\begin{subequations}\label{eq:inout:fields}
\begin{align}
&\begin{aligned}
\hat\Phi_\text{L/R}(x,t) ={}& \int_0^\infty d\omega\, \sqrt{\frac{\hbar}{4 \pi\omega cv}} e^{\pm i\omega x/v + i\omega t} \hat b_{\text{L/R}\omega}^\dagger \\
& + \text{H.c.},
\end{aligned}\\
&\begin{aligned}
\hat Q_\text{L/R}(x,t) ={}& i \int_0^\infty d\omega\, \sqrt{\frac{\hbar\omega c}{4\pi v}} e^{\pm i\omega x/v + i\omega t} \hat b_{\text{L/R}\omega}^\dagger \\
&- \text{H.c.}
\end{aligned}
\end{align}
\end{subequations}
In this expression, we introduced the operators $\hat b_{\nu\omega}$ satisfying $[\hat b_{\nu\omega}, \hat b_{\mu\omega'}^\dagger] = \delta_{\nu\mu}\delta(\omega-\omega')$ for $\nu,\mu=\text{L},\text{R}$. Because of the boundary condition at $x=0$, the left- and right-moving fields are not independent. To see this, we first note that, following from the form of $\hat \Phi_\text{tml}(x,t)$,
\begin{equation}\label{eq:inout:Ohmslaw}
\begin{aligned}
Z_\text{tml} \frac{\partial_x \hat \Phi_\text{tml}(x,t)}{l} ={}& \dot{\hat \Phi}_\text{L}(x,t) - \dot{\hat \Phi}_\text{R}(x,t),
\end{aligned}
\end{equation}
with $Z_\text{tml} = \sqrt{l/c}$ the characteristic impedance of the transmission line. Noting that $\hat I(x) = \partial_x \hat \Phi_\text{tml}(x)/l$ is the current and defining voltages $\hat V_\text{L/R}(x) = \dot{\hat \Phi}_\text{L/R}(x)$, we can recognize~\cref{eq:inout:Ohmslaw} as Ohm's law.
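To make the next step explicit, note that evaluating~\cref{eq:inout:Ohmslaw} at $x=0$ and using $v^2=1/(lc)$ allows the left-hand side of the boundary condition~\cref{eq:inout:boundary} to be rewritten as
\begin{equation}
v^2 \partial_x \hat \Phi_\text{tml}(x=0) = \frac{1}{lc}\,\partial_x \hat \Phi_\text{tml}(x=0) = \frac{\dot{\hat \Phi}_\text{L}(0,t) - \dot{\hat \Phi}_\text{R}(0,t)}{Z_\text{tml}\,c},
\end{equation}
so that \cref{eq:inout:boundary} directly relates the left- and right-moving voltages at $x=0$ to $\dot{\hat Q}_\mathrm{r}$.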
Using~\cref{eq:inout:boundary}, we finally arrive at the boundary condition of \cref{eq:voltage_inout} at $x=0$,
\begin{equation}\label{eq:inout:inoutrel0}
\begin{aligned}
\hat V_\text{out}(t) - \hat V_\text{in}(t) = Z_\text{tml} \frac{C_\kappa}{C_\mathrm{r}} \dot{\hat Q}_\mathrm{r},
\end{aligned}
\end{equation}
where we have introduced the standard notation $\hat V_\text{in/out}(t) = \hat V_\text{L/R}(x=0,t)$. Using the mode expansion of the fields in~\cref{eq:inout:fields} together with \cref{eq:HatPhiQ} for the LC oscillator charge operator in terms of the ladder operator $\hat a$, \cref{eq:inout:inoutrel0} can be expressed as
\begin{equation}\label{eq:inout:inoutrel1}
\begin{aligned}
& \int_0^\infty d\omega\, \sqrt{\frac{\omega }{4 \pi cv}} e^{- i(\omega-\omega_r) t} \left(\hat b_{\text{R}\omega} - \hat b_{\text{L}\omega} \right) \\
={}& -i \omega_r Z_\text{tml} \frac{C_\kappa}{C_\mathrm{r}} \sqrt{\frac{\omega_r C_\mathrm{r}}{2}} \hat a,
\end{aligned}
\end{equation}
where we have neglected terms rotating at $\omega+\omega_r$. After some rearrangement, this can be written in the form of the standard input-output boundary condition~\cite{Collett1984,Gardiner1985}
\begin{equation}\label{eq:inout:inoutrel}
\hat b_\mathrm{out}(t) - \hat b_\mathrm{in}(t) = \sqrt{\kappa} \hat a(t),
\end{equation}
with input and output fields defined as
\begin{subequations}
\begin{align}
\hat b_\mathrm{in}(t) ={}& \frac{-i}{\sqrt{2\pi}} \int_{-\infty}^\infty d\omega\, \hat b_{\text{L}\omega}e^{-i(\omega-\omega_r) t},\label{eq:bin_appendix}\\
\hat b_\mathrm{out}(t) ={}& \frac{-i}{\sqrt{2\pi}} \int_{-\infty}^\infty d\omega\, \hat b_{\text{R}\omega}e^{-i(\omega-\omega_r) t},\label{eq:bout_appendix}
\end{align}
\end{subequations}
and the photon loss rate $\kappa$ given by
\begin{equation}
\kappa = \frac{Z_\text{tml} C_\kappa^2 \omega_r^2}{C_\mathrm{r}}.
\end{equation}
There are two further approximations which are made when going from~\cref{eq:inout:inoutrel1} to~\cref{eq:inout:inoutrel}: we have extended the range of integration over frequency from $[0,\infty)$ to $(-\infty,\infty)$, and we have replaced the factor $\sqrt{\omega}$ by $\sqrt{\omega_r}$ inside the integrand. Both approximations are based on the assumption that only terms with $\omega\simeq\omega_r$ contribute significantly to the integral in~\cref{eq:inout:inoutrel1}. Moreover, we rewrite~\cref{eq:inout:Ohmslaw} as
\begin{equation}
\begin{aligned}
\partial_x \hat \Phi_\text{tml}(x,t) ={}& Z_\text{tml} \left[ \hat Q_\text{L}(x,t) - \hat Q_\text{R}(x,t) \right]\\
={}& Z_\text{tml} \left[ 2\hat Q_\text{L}(x,t) - \hat Q_\text{tml}(x,t) \right],
\end{aligned}
\end{equation}
where in the last equality we have used $\hat Q_\text{tml}(x,t) = \hat Q_\text{L}(x,t) + \hat Q_\text{R}(x,t)$. At $x=0$, combining this with~\cref{eq:inout:boundary} gives
\begin{equation}
\begin{aligned}
\hat Q_\text{tml}(x=0,t) = 2\hat Q_\text{L}(x=0,t) + \frac{1}{v} \frac{C_\kappa}{C_\mathrm{r}} \dot{\hat Q}_\mathrm{r}(t).
\end{aligned}
\end{equation}
Using this result in the Heisenberg-picture equations of motion for the LC oscillator,
\begin{align}
&\dot{\hat \Phi}_\mathrm{r} = \frac{i}{\hbar}[\hat H,\hat \Phi_\mathrm{r}] = \frac{\hat Q_\mathrm{r}}{C_\mathrm{r}} + \frac{C_\kappa}{C_\mathrm{r} c}\hat Q_\text{tml}(x=0),\\
&\dot{\hat Q}_\mathrm{r} = \frac{i}{\hbar}[\hat H,\hat Q_\mathrm{r}] = -\frac{\hat \Phi_\mathrm{r}}{L_\mathrm{r}},
\end{align}
we arrive at a single equation of motion for the oscillator charge
\begin{equation}\label{eq:inout:inouteom0}
\ddot{\hat Q}_\mathrm{r} = - \omega_r^2\left[ \hat Q_\mathrm{r} + \frac{C_\kappa}{c} \left( \frac{1}{v} \frac{C_\kappa}{C_\mathrm{r}} \dot{\hat Q}_\mathrm{r} + 2\hat Q_\text{in} \right) \right].
\end{equation}
Here, $\hat Q_\text{in}(t) \equiv \hat Q_\text{L}(x=0,t)$ is the incoming (left-moving) charge field evaluated at the position of the oscillator. Again writing $\hat Q_\mathrm{r}$ in terms of bosonic creation and annihilation operators, it is possible to express \cref{eq:inout:inouteom0} in the form of the familiar Langevin equation \cref{eq:inouteom} for the mode operator $\hat a(t)$. This standard expression is obtained after neglecting fast-rotating terms and making the following ``slowly varying envelope'' approximations \cite{Yurke2004}
\begin{subequations}
\begin{align}
\frac{d^2}{d t^2} \left[\hat a e^{-i\omega_r t}\right] \simeq{}& -\omega_r^2 \hat a e^{-i\omega_r t} - 2i\omega_r \dot{\hat a}e^{-i\omega_r t},\\
\frac{d}{d t} \left[\hat a e^{-i\omega_r t}\right] \simeq{}& -i\omega_r \hat a e^{-i\omega_r t},\\
\frac{d}{d t} \left[\hat b_{-\omega} e^{-i\omega t}\right] \simeq{}& -i\omega_r \hat b_{-\omega} e^{-i\omega t}.
\end{align}
\end{subequations}
\Cref{eq:inouteom} can be viewed as a Heisenberg-picture analog of the Markovian master equation~\cref{eq:ME_harmonic}.
\end{document}
\begin{document} \title{The Toucher-Isolator game} \author[1]{Chris Dowden\thanks{Supported by Austrian Science Fund (FWF): P27290.}} \author[1]{Mihyun Kang\thanks{Supported by Austrian Science Fund (FWF): P27290 and W1230.}} \author[2]{Mirjana Mikala\v{c}ki\thanks{Partly supported by Ministry of Education, Science and Technological Development, Republic of Serbia, Grant nr. 174019}} \author[2]{Milo\v{s} Stojakovi\'{c}\samethanks} \affil[1] {Institute of Discrete Mathematics, Graz University of Technology, Austria. Email: [email protected], [email protected].} \affil[2] {Department of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad, Serbia. Email: [email protected], [email protected].} \setlength{\unitlength}{1cm} \maketitle \begin{abstract} We introduce a new positional game called `Toucher-Isolator', which is a quantitative version of a Maker-Breaker type game. The playing board is the set of edges of a given graph $G$, and the two players, Toucher and Isolator, claim edges alternately. The aim of Toucher is to `touch' as many vertices as possible (i.e.~to maximise the number of vertices that are incident to at least one of her chosen edges), and the aim of Isolator is to minimise the number of vertices that are so touched. We analyse the number of untouched vertices $u(G)$ at the end of the game when both Toucher and Isolator play optimally, obtaining results both for general graphs and for particularly interesting classes of graphs, such as cycles, paths, trees, and $k$-regular graphs. We also provide tight examples. \end{abstract} \section{Introduction} \subsection{Background and motivation} One of the most fundamental and enjoyable mathematical activities is to play and analyse games, ranging from simple examples such as snakes and ladders and noughts and crosses to much more complex games like chess and bridge. Many of the most natural and interesting games to play involve pure skill, perfect information, and a sequential order of play. These are known formally as `combinatorial' games, see e.g.~\cite{winning}, and popular examples include Connect Four, Hex, noughts and crosses, draughts, chess, and go. Often, a combinatorial game might consist of two players alternately `claiming' elements of the playing board (e.g.~noughts and crosses, but not chess) with the intention of forming specific winning sets, and such games are called `positional' combinatorial games (for a comprehensive study, see~\cite{beckbook} or~\cite{milosbook}). In particular, much recent research has involved positional games in which the board is the set of edges of a graph, and where the aim is to claim edges in order to form subgraphs with particular properties. A pioneering paper in this area was that of Chv\'{a}tal and Erd\H{o}s~\cite{chv}, in which the primary target was to form a spanning tree. Subsequent work has then also involved other standard graph structures and properties, such as cliques~\cite{bal, geb12}, perfect matchings~\cite{mik, hef09}, Hamilton cycles~\cite{hef09,kriv11}, planarity~\cite{hef08}, and given minimum degree~\cite{geb09}. Part of the appeal of these games is that there are several different versions. Sometimes, in the so-called strong games, both players aim to be the first to form a winning set (c.f.~three-in-a-row in a game of noughts and crosses). In others, only Player $1$ tries to do this, and Player $2$ simply seeks to prevent her. This latter class of games are known as `Maker-Breaker' positional games. 
A notable result here is the Erd\H{o}s-Selfridge Theorem~\cite{erdself}, which establishes a simple but general condition for the existence of a winning strategy for Breaker in a wide class of such problems. A quantitative generalisation of this format then involves games in which Player $1$ aims to form as many winning sets as possible, and Player $2$ tries to prevent this (i.e.~Player $2$ seeks to minimise the number of winning sets formed by Player $1$). In this paper, we introduce a new quantitative version of a Maker-Breaker style positional game, which we call the `Toucher-Isolator' game. Here, the playing board is the set of edges of a given graph, the two players claim edges alternately, the aim of Player $1$ (Toucher) is to `touch' as many vertices as possible (i.e.~to maximise the number of vertices that are incident to at least one of her edges), and the aim of Player $2$ (Isolator) is to minimise the number of vertices that are touched by Toucher (i.e.~to claim all edges incident to a vertex, and do so for as many vertices as possible). This problem is thus simple to formulate and seems very natural, with connections to other interesting games, such as claiming spanning subgraphs, matchings, etc. In particular, we note that it is related to the well-studied Maker-Breaker vertex isolation game (introduced by Chv\'{a}tal and Erd\H{o}s~\cite{chv}), where Maker's goal is to claim all edges incident to a vertex, and it is hence also related to the positive min-degree game (see~\cite{balplu, milosbook, hef11}), where Maker's goal is to claim at least one edge incident to every vertex. Our Toucher-Isolator game can be thought of as a quantitative version of these games, where Toucher now wants to claim at least one edge incident to \emph{as many} vertices as possible, while Isolator aims to isolate \emph{as many} vertices as possible. However, the game has never previously been investigated, and so there is a vast amount of unexplored territory here, with many exciting questions. What are the best strategies for Toucher and Isolator? How do the results differ depending on the type of graph chosen? Which graphs provide the most interesting examples? \subsection{Results} Given a graph $G=(V(G),E(G))$, we use $u(G)$ to denote the number of untouched vertices at the end of the game when both Toucher and Isolator play optimally. We obtain both upper and lower bounds on $u(G)$, some of which are applicable to all graphs and some of which are specific to particular classes of graphs (e.g.~cycles or trees). For every one of these, we also demonstrate that the bounds are tight by providing examples of graphs which satisfy them exactly (in most cases, we in fact give a family of tight examples to show that there are infinitely many values of $n=|V(G)|$ for which equality holds). We shall now present all of these results, the proofs of which will be given later. Clearly, one of the key parameters in our game will be the degrees of the vertices (although, as we shall observe later, the degree sequence alone does not fully determine the value of $u(G)$). In our bounds for general $G$, perhaps the most significant is the upper bound of Theorem~\ref{gen2}. Here, we find that it suffices just to consider the vertices with degree at most three (we again reiterate that all our bounds are tight).
\begin{Theorem} \label{gen2} For any graph $G$, we have \begin{displaymath} d_{0} + \frac{1}{2}d_{1} - 1 \leq u(G) \leq d_{0} + \frac{3}{4}d_{1} + \frac{1}{2}d_{2} + \frac{1}{4}d_{3}, \end{displaymath} where $d_{i}$ denotes the number of vertices with degree exactly $i$. \end{Theorem} A notable consequence of this result is that there will be no untouched vertices for any graph with minimum degree at least four. We shall later see (in Theorem~\ref{u>0example}) that this is not always true for graphs with minimum degree three. For certain degree sequences, the bounds given in Theorem~\ref{gen2} can in fact both be improved by our next result. \begin{Theorem} \label{gen1} For any graph $G$, we have \begin{displaymath} \sum_{v \in V(G)} 2^{-d(v)} - \frac{|E(G)|+7}{8} \leq u(G) \leq \sum_{v \in V(G)} 2^{-d(v)}, \end{displaymath} where $d(v)$ denotes the degree of vertex $v$. Equivalently, we have \begin{displaymath} \sum_{i \ge 0} 2^{-i} d_i - \frac{|E(G)|+7}{8} \leq u(G) \leq \sum_{i \ge 0} 2^{-i} d_i, \end{displaymath} where $d_{i}$ again denotes the number of vertices with degree exactly $i$. \end{Theorem} Note also that $|E(G)|$ will be small if the degrees are small, and so Theorem~\ref{gen1} then provides a fairly narrow interval for the value of $u(G)$ (observe that Theorem~\ref{gen2} already provides a narrow interval if the degrees are large). Moving on from these general bounds, one very natural particular graph to consider is the cycle $C_{n}$ on $n$ vertices. It is fascinating to play the game on such a graph and to try to determine the optimal strategies and the proportion of untouched vertices. We again obtain tight upper and lower bounds, both for $C_n$ and for the closely related game on $P_n$ (the path on $n$ vertices). \begin{Theorem} \label{cyc2} For all $n$, we have \begin{displaymath} \frac{3}{16} (n-3) \leq u(C_{n}) \leq \frac{n}{4}. \end{displaymath} \end{Theorem} \begin{Theorem} \label{path1} For all $n$, we have \begin{displaymath} \frac{3}{16} (n-2) \leq u(P_{n}) \leq \frac{n+1}{4}. \end{displaymath} \end{Theorem} We also extend the game to general $2$-regular graphs (i.e.~unions of disjoint cycles). Our main achievement here is to obtain a \emph{tight} lower bound of $u(G) \ge \frac{n-3}{6}$, which (by a comparison with the lower bound of Theorem~\ref{cyc2}) also demonstrates that $u(G)$ is not solely determined by the degree sequence. \begin{Theorem} \label{2reg2} For any $2$-regular graph $G$ with $n$ vertices, we have \begin{displaymath} \frac{n-3}{6} \leq u(G) \leq \frac{n}{4}. \end{displaymath} \end{Theorem} An interesting and natural extension of the game on paths is obtained by considering general trees, although this additional freedom in the structure can make the problem significantly more challenging. Here, we derive the following tight bounds. \begin{Theorem} \label{tree} For any tree $T$ with $n>2$ vertices, we have \begin{displaymath} \frac{n+2}{8} \leq u(T) \leq \frac{n-1}{2}. \end{displaymath} \end{Theorem} As mentioned, it follows from Theorem~\ref{gen2} that there will be no untouched vertices in $k$-regular graphs if $k \geq 4$, so it is intriguing to consider the $3$-regular case. We observe that there are $3$-regular graphs for which $u(G)=0$, and one might expect that this could be true for all such graphs. However, we in fact manage to construct a class of examples for which a constant proportion of vertices remain out of Toucher's reach. 
\begin{Theorem} \label{u>0example} For all even $n \ge 4$, there exists a $3$-regular graph $G$ with $n$ vertices satisfying \begin{displaymath} u(G) \ge \left \lfloor \frac{n}{24} \right \rfloor. \end{displaymath} \end{Theorem} \subsection{Techniques and outline of the paper} One of our key techniques is to analyse an appropriate \emph{`Danger'} function, building on an idea first introduced by Erd\H{o}s and Selfridge~\cite{erdself} to prove a general criterion for Breaker's win, the celebrated Erd\H{o}s-Selfridge Criterion. The same approach was later adapted by Beck~\cite{beck82} for Maker, resulting in the so-called Weak Win Criterion. In the quantitative context of our Toucher-Isolator game, it is still useful to define the Danger function in a similar manner. \begin{Definition} \label{Dangerdef} We shall say that a vertex has Danger $0$ if any of its edges have been taken by Toucher, and Danger $2^{-k}$ if all but $k$ of its edges have been taken by Isolator and the remaining $k$ edges have not yet been taken by anyone (note that this includes the case when $k=0$, and so a vertex has Danger $1$ if all of its edges have been taken by Isolator). \end{Definition} Note that the Danger function can be interpreted as the probability that a vertex will be untouched if all of its remaining edges are assigned to Toucher and Isolator independently and uniformly at random. An equivalent definition is also obtained if we update the graph $G$ throughout the game by removing the edges claimed by Isolator, keep track of the vertices $U(G) \subseteq V(G)$ untouched by Toucher, and define the total Danger to be $\sum_{v\in U(G)} 2^{-d(v)}$. The following observation will be key. \begin{Observation} \label{Dangerobs1} The total Danger at the start of the game is $\sum_{v} 2^{-d(v)}$, and the total Danger at the end of the game is precisely the number of untouched vertices. \end{Observation} Hence, bounds for $u(G)$ can sometimes be obtained by investigating how the total Danger changes with each move. Here, a further observation is crucial. \begin{Observation} \label{Dangerobs2} Whenever Toucher takes an edge, the total Danger will decrease by exactly the sum of the Dangers of the two vertices incident to this edge (since both of these Dangers will fall to zero). Similarly, whenever Isolator takes an edge, the total Danger will increase by exactly the sum of the Dangers of the two vertices incident to this edge (since both of these Dangers will double). \end{Observation} Another standard method that will be used throughout is `partition of the board'. Here, we divide the graph up into various segments, we focus on one particular player, and we try to optimise that player's strategy subject to the constraint that he/she must always take an edge from the same segment that his/her opponent has just played in (this then provides bounds for the overall optimum strategy, where there are no such constraints). The main advantage of this idea is that it enables us to split the whole graph into simpler pieces that can be analysed more easily. However, we must choose the division of the graph in a rather careful manner in order to achieve substantial results.
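As a quick illustration of the Danger function at work, consider the cycle $C_{n}$: every vertex has degree $2$, so by Observation~\ref{Dangerobs1} the total Danger at the start of the game is $n/4$, and Theorem~\ref{gen1} (whose proof in Section~\ref{gen} is based precisely on tracking this quantity) immediately gives
\begin{displaymath}
\frac{n}{4} - \frac{n+7}{8} = \frac{n-7}{8} \leq u(C_{n}) \leq \frac{n}{4},
\end{displaymath}
an interval that Theorem~\ref{cyc2} will narrow from below to $\frac{3}{16}(n-3)$.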
The remainder of the paper is structured as follows: in Section~\ref{gen}, we prove the general bounds applicable to all graphs, as stated in Theorem~\ref{gen2} and Theorem~\ref{gen1}; in Section~\ref{cycles}, we focus on the case when the graph is a cycle, proving Theorem~\ref{cyc2} (the proof of Theorem~\ref{path1} on paths is very similar, and so is left to the appendix); in Section~\ref{2reg}, we generalise this to any $2$-regular graph, obtaining Theorem~\ref{2reg2}; in Section~\ref{trees}, we investigate trees, proving Theorem~\ref{tree}; in Section~\ref{3reg}, we derive results for $3$-regular graphs, including Theorem~\ref{u>0example}; and in Section~\ref{discussion}, we discuss some interesting remaining questions. \section{General bounds} \label{gen} As mentioned, in this section we shall now derive general bounds applicable to any graph $G$, proving Theorem~\ref{gen2} and Theorem~\ref{gen1}. In each case, we shall also observe that there are straightforward tight examples for infinitely many values of $|V(G)|$. We shall start with the proof of the upper bound of Theorem~\ref{gen2}, followed by tight examples, and then give the proof of the corresponding lower bound, again followed by tight examples. After this, we shall then use the same pattern for the proof of Theorem~\ref{gen1} (and also for future sections). We begin with perhaps the most interesting proof of this section, which uses a variant of the partition of the board strategy involving the pairing of edges. \begin{proof}[Proof of upper bound of Theorem~\ref{gen2}] We will provide Toucher with a pairing strategy to touch enough vertices for the statement to hold. To do this, we will define a collection of disjoint pairs of edges, and Toucher's strategy will be to wait (and play arbitrarily) until Isolator claims an edge within a pair, and then immediately respond by claiming the other edge. This way, Toucher will certainly claim at least one edge in every pair. We start by adding an auxiliary vertex and connecting it to all odd degree vertices of $G$. This will create an even graph, and so each of its components has an Eulerian tour. For each of these Eulerian tours, we then arbitrarily choose one of two orientations. Once the auxiliary vertex is removed, we are thus left with an orientation of $G$. Let $V_i$ be the set of vertices with degree $i$, and let $V_i^{(j)}$ be the set of vertices with degree $i$ and $j$ incoming edges. We shall use $d_i^{(j)}$ to denote $\left| V_i^{(j)} \right|$, and we note that $\left| V_i \right|=d_i$. Also, observe that for even $i$ we have $V_i = V_i^{\left(\frac{i}{2}\right)}$, while for odd $i$ we have $V_i=V_i^{\left(\frac{i+1}{2}\right)} \cup V_i^{\left(\frac{i-1}{2}\right)}$. For each vertex that has at least two incoming edges, we may choose two such edges arbitrarily and pair them. Note that we can do this for all vertices in $V_3^{(2)} \cup \left(\cup_{i\geq 4} V_i \right)$. Next, for all the vertices in $V_1^{(1)} \cup V_2 \cup V_3^{(1)}$ (observe that these each have exactly one incoming edge), let us collect all incoming edges and pair them up arbitrarily. If $\left| V_1^{(1)} \cup V_2 \cup V_3^{(1)} \right|$ is odd, then there will be one unpaired edge here, which Toucher should claim with her very first move of the game (before Isolator has made any moves). Note that by treating only the incoming edges at every vertex, we ensure that all our edge pairs are pairwise disjoint. Let us now consider the number of vertices that Toucher will touch following this pairing strategy. 
She certainly touches all vertices in $V_3^{(2)} \cup \left(\cup_{i\geq 4} V_i \right)$ and half (rounded up) of the vertices in $V_1^{(1)} \cup V_2 \cup V_3^{(1)}$, so counting those that remain then gives \begin{equation} u(G) \leq d_0 + d_1^{(0)} + \frac{d_1^{(1)}}{2} + \frac{d_2}{2} + \frac{d_3^{(1)}}{2}. \label{i:1} \end{equation} Finally, note that if we were to use the same orientation of $G$, but pair the \emph{outgoing} edges instead of the incoming edges, then exactly the same analysis gives \begin{equation} u(G) \leq d_0 + d_1^{(1)} + \frac{d_1^{(0)}}{2} + \frac{d_2}{2} + \frac{d_3^{(0)}}{2}. \label{i:2} \end{equation} Summing (\ref{i:1}) and (\ref{i:2}) (and dividing by two) then completes the proof. \end{proof} For this bound, it is trivial to note the following tight examples. \begin{Proposition} Any graph with minimum degree at least four will provide a tight example to the upper bound in Theorem~\ref{gen2}. \qed \end{Proposition} Before we move on to the proof of Theorem~\ref{gen1}, which uses a Danger function approach, let us briefly give the proof of the lower bound of Theorem~\ref{gen2}, and then also provide corresponding tight examples. \begin{proof}[Proof of lower bound of Theorem~\ref{gen2}] Let $X$ denote the set of edges whose endpoints both have degree $1$, and let $Y$ denote the set of edges with exactly one endpoint of degree $1$. Note that $d_{1} = |Y| + 2|X|$. By giving priority to the edges in $X$, followed by the edges in $Y$, Isolator will be guaranteed to take at least $\left \lfloor \frac{|X|+|Y|}{2} \right \rfloor$ of these edges in total, including at least $\left \lfloor \frac{|X|}{2} \right \rfloor$ of the edges in $X$. Hence, \begin{eqnarray*} u(G) & \geq & d_{0} + \left \lfloor \frac{|X|+|Y|}{2} \right \rfloor + \left \lfloor \frac{|X|}{2} \right \rfloor \\ & \geq & d_{0} + \frac{|X|+|Y|}{2} - \frac{1}{2} + \frac{|X|}{2} - \frac{1}{2} \\ & = & d_{0} + \frac{d_{1}}{2} - 1. \end{eqnarray*} \end{proof} \begin{Proposition} Any graph consisting of an odd number of $K_{2}$ components will provide a tight example to the lower bound in Theorem~\ref{gen2}. \qed \end{Proposition} We may now proceed with the proof of Theorem~\ref{gen1}, again starting with the upper bound. As mentioned, this bound is obtained via an analysis of the Danger function. \begin{proof}[Proof of upper bound of Theorem~\ref{gen1}] Our proof is an extension of that of the acclaimed Erd\H{o}s-Selfridge Theorem~\cite{erdself}, which establishes conditions under which a player can obtain a `winning set' of edges. In the context of our game, we can consider Isolator as having obtained such a winning set if he has claimed all of the edges incident to a vertex (thus isolating it). However, for our purposes, it is crucial that we now also find a way to keep track of the number of these winning sets. Here, the Danger function will play a vital role, with the key observation being that the total Danger at the end of the game is precisely the number of untouched vertices (see Observation~\ref{Dangerobs1}). Let us begin by recalling (from Observation~\ref{Dangerobs2}) that whenever Toucher takes an edge, the Dangers of the two incident vertices will both fall to zero, and that the total Danger will hence decrease by their sum. By contrast, whenever Isolator takes an edge, the total Danger will increase by exactly the sum of the Dangers of the two incident vertices. 
Hence, let us consider the strategy where Toucher always chooses the edge which maximises the sum of the Dangers of the two vertices incident to it. By this maximality condition (and Observation~\ref{Dangerobs2}), it then follows that in each pair of moves (one from Toucher and then one from Isolator), the total Danger can never increase. Note furthermore that if $|E(G)|$ is odd, then the game will end with one final (unpaired) move by Toucher, which also cannot increase the total Danger. Thus, recalling from Observation~\ref{Dangerobs1} that the total Danger at the start of the game is $\sum_{v} 2^{-d(v)}$, and again remembering that the total Danger at the end of the game is exactly the number of untouched vertices, we hence obtain the desired bound. \end{proof} Again, it is simple to identify tight examples. \begin{Proposition} Any graph consisting of an even number of $K_{2}$ components will provide a tight example to the upper bound in Theorem~\ref{gen1}. \qed \end{Proposition} \begin{Remark} Note that the upper bound of Theorem~\ref{gen1} will be better than the upper bound of Theorem~\ref{gen2} if \begin{displaymath} \sum_{i \ge 4} 2^{3-i} d_i < 2d_1 + 2d_2 + d_3. \end{displaymath} \end{Remark} \begin{Remark} In some cases, it is possible to combine the Danger function technique used to prove the upper bound of Theorem~\ref{gen1} with the pairing approach used to prove the upper bound of Theorem~\ref{gen2}. In particular, if our graph $G$ contains an induced subgraph $H$ with $\delta(H)\geq 4$, then Toucher could employ a pairing strategy on $E(H)$ to make sure that all vertices in $V(H)$ are touched, while still having full liberty when playing on the edges of $E(G) \setminus E(H)$, with the aim of maximising the number of touched vertices. This would automatically improve the upper bound of Theorem~\ref{gen1} from $\sum_{v \in V(G)} 2^{-d(v)}$ to $\sum_{v \in V(G) \setminus V(H)} 2^{-d(v)}$. Note furthermore that such a graph $H$ must exist as soon as $|E(G)|\geq 3|V(G)|$ (see~\cite{erd64}). A similar approach to improving the upper bound is to repeatedly look for individual vertices that can be taken care of by a pair of edges. In particular, if there is a vertex $v$ with neighbours $u_1$ and $u_2$ such that $d(v) < d(u_1) -1$ and $d(v) < d(u_2)$, then we could pair the edges $vu_1$ and $vu_2$ (thus taking care of touching $v$), and then use the Danger function technique on the edge set $E(G) \setminus \{ vu_1, vu_2 \}$. Thus, the total Danger at the start of the game (and hence our upper bound for $u(G)$ in Theorem~\ref{gen1}) would decrease by $2^{-d(v)}-\left(2^{-d(u_1)}+2^{-d(u_2)}\right)$. \end{Remark} Our final bound of this section is obtained by a proof which again uses an adaptation of the Danger function approach of Erd\H{o}s and Selfridge~\cite{erdself} and Beck~\cite{beck82}. Our arguments here will mirror those for the upper bound, looking at the total Danger from the point of view of Isolator instead of Toucher. \begin{proof}[Proof of lower bound of Theorem~\ref{gen1}] Note, by Observation~\ref{Dangerobs2}, that the total Danger will decrease by at most $1$ with Toucher's first move, since the two vertices incident to the chosen edge can only have had Danger at most $\frac{1}{2}$ each. Suppose that Isolator then chooses the edge that maximises the sum of the Dangers of the two vertices incident to it. Let us suppose that this sum is $r$, say, and hence that Isolator's move causes the total Danger to increase by $r$. 
If Toucher's response is to take an edge that is disjoint to Isolator's choice, then the total Danger will decrease back by at most $r$, by the maximality condition. If Toucher's edge instead shares a common vertex with Isolator's edge, then the total Danger will still only decrease by at most $r + \frac{1}{4}$, since the Danger of this common vertex can only have increased by at most $\frac{1}{4}$ (from $\frac{1}{4}$ to $\frac{1}{2}$) as a result of Isolator's move. Hence, in this pair of moves, one from Isolator and then one from Toucher, the total Danger can only have decreased by at most $\frac{1}{4}$ altogether. We can consider the $|E(G)|$ moves of the game as Toucher's first move (which we have seen decreases the total Danger by at most $1$), followed by $\left \lfloor \frac{|E(G)|-1}{2} \right \rfloor$ subsequent pairs of moves (which we have seen each decrease the total Danger by at most $\frac{1}{4}$ if Isolator always uses the given strategy), followed possibly (if $|E(G)|$ is even) by one final move from Isolator (which cannot decrease the total Danger). Thus, if Isolator uses the given strategy, then the total Danger at the end of the game (and hence the number of untouched vertices, by Observation~\ref{Dangerobs1}) will be at least \begin{displaymath} \sum_{v \in V(G)} 2^{-d(v)} - 1 - \frac{1}{4} \left \lfloor \frac{|E(G)|-1}{2} \right \rfloor. \qedhere \end{displaymath} \end{proof} \begin{Proposition} Any graph consisting of $P_{3}$ components plus exactly one $P_{2}$ component will provide a tight example to the lower bound in Theorem~\ref{gen1}. \end{Proposition} \begin{proof} Let $x$ denote the number of $P_{3}$ components in such a graph, and note that we then have $|E(G)|=2x+1$, $d_{1}=2x+2$, $d_{2}=x$, and $d_{i}=0$ for $i>2$. Hence, \[ \sum_{v \in V(G)} 2^{-d(v)} - \frac{|E(G)|+7}{8} = \frac{1}{2} (2x+2) + \frac{1}{4} x - \frac{(2x+1)+7}{8} = x. \] Toucher can ensure that the number of untouched vertices is only $x$ by taking the only edge in the $P_{2}$ component in her first move, and then always immediately taking the remaining edge from any $P_{3}$ component on which Isolator plays. \end{proof} \begin{Remark} \label{genremark} Note that the lower bound of Theorem~\ref{gen1} will be better than the lower bound of Theorem~\ref{gen2} if \begin{displaymath} |E(G)| < 1 + \sum_{i \ge 2} 2^{3-i} d_i. \end{displaymath} As $2|E(G)|=\sum_{i \ge 1} id_{i}$, this will occur if $d_{2}$ is sufficiently large (e.g.~consider a path or a cycle, in which case the lower bound of Theorem~\ref{gen2} is ineffective). \end{Remark} \begin{Remark} \label{genremark2} Although the bounds given in this section involve only the degrees of the vertices, we shall see examples later (in Remark~\ref{degseqremark} and Section~\ref{3reg}) which show that $u(G)$ is not determined by the degree sequence alone. It would be interesting to know of any particular properties or parameters of the graph that can tighten the interval in which $u(G)$ must be located. \end{Remark} \section{Cycles} \label{cycles} In this section, we consider the specific case when our graph is a cycle. We shall start by applying Theorem~\ref{gen1} to immediately obtain the upper bound in Theorem~\ref{cyc2}, and then the majority of this section will be devoted to deriving the lower bound. \begin{proof}[Proof of upper bound of Theorem~\ref{cyc2}] This follows from Theorem~\ref{gen1}. \end{proof} \begin{Proposition} The graph $C_{4}$ provides a tight example to the upper bound in Theorem~\ref{cyc2}. 
\end{Proposition} \begin{proof} After Toucher's first move, Isolator can take the opposite edge (and then either of the two remaining edges with his second move). \end{proof} We may now start to work towards the lower bound in Theorem~\ref{cyc2}. Note that Theorem~\ref{gen1} immediately provides a lower bound of around $\frac{n}{8}$, but we shall aim to do significantly better. The key result here is Lemma~\ref{3of16lemma}, which will enable Isolator to guarantee three untouched vertices from every sixteen edges. \begin{Lemma} \label{3of16lemma} Isolator can guarantee that the number of untouched internal vertices in $P_{17}$ will be at least three. \end{Lemma} \begin{proof} The proof will consist of dividing up the sixteen edges of $P_{17}$ into various segments. Consequently, it will be useful to first establish two statements concerning segments of length three and five, respectively. \begin{Claim}\label{cl1} If it is Isolator's move and there is a segment consisting of three consecutive free edges, then he can isolate an internal vertex from this segment. \end{Claim} \begin{proof} Let the edges of this segment be denoted by $e_a, e_b, e_c$. Isolator claims the edge $e_b$. In the following move, Toucher cannot claim both $e_a$ and $e_c$, so one of them is free for Isolator to claim in his following move. Hence, he can isolate one internal vertex. \end{proof} \begin{Claim}\label{cl2} If it is Isolator's move and there is a segment consisting of five consecutive free edges $e_a, e_b, e_c, e_d, e_f$, then he can guarantee that at least one of the following will occur: \begin{itemize} \item [(a)] after Isolator and Toucher have each had two moves, one internal vertex from this segment will now be isolated and neither of Toucher's moves will have taken place outside this segment; \item [(b)] after Isolator and Toucher have each had three moves, two internal vertices from this segment will now be isolated and exactly one of Toucher's moves will have taken place outside this segment; \item [(c)] after Isolator and Toucher have each had three moves, two internal vertices from this segment will now be isolated, exactly two of Toucher's moves will have taken place outside this segment, and neither $e_{a}$ nor $e_{f}$ will have been claimed by Toucher; \item [(d)] after Isolator has had four moves and Toucher has had three moves, three internal vertices from this segment will now be isolated. \end{itemize} \end{Claim} \begin{proof} Let Isolator claim the central edge $e_{c}$ with his first move. After this, let Isolator then use the strategy of trying to extend this edge into a string of consecutive edges (working solely within this segment), by always choosing an edge immediately adjacent to his current string until this is no longer possible. At this point, it must then be the case that the `left-most' edge of Isolator's string is either $e_{a}$ or is adjacent to an edge of Toucher, and similarly the `right-most' edge of Isolator's string is either $e_{f}$ or is adjacent to an edge of Toucher. Note also that the string must certainly contain at least two edges. If Isolator's string contains exactly two edges, then these must either be $e_{b}$ and $e_{c}$ or $e_{c}$ and $e_{d}$, and it must be that Toucher has claimed the edges either side of this string with her two moves. Hence, we have (a). If Isolator's string contains exactly three edges, then observe that Toucher must have claimed at least one of the other two edges in this segment. 
If (at the end of Toucher's third move) Toucher has in fact claimed both of these other two edges, then we have (b). If (at the end of Toucher's third move) Toucher has only claimed one of these other two edges, then it can only be that this edge is $e_{b}$ (and the string is $e_{c},e_{d},e_{f}$) or $e_{d}$ (and the string is $e_{a},e_{b},e_{c}$), so we have (c). Finally, if Isolator's string contains at least four edges, then we have (d). \end{proof} We may now prove the lemma. We shall denote the sixteen edges of $P_{17}$ by $\{e_1, e_2, \dots, e_{16}\}$, and wlog we may suppose that Toucher claims one of the edges in $\{e_1, e_2, \dots, e_8\}$ with her first move. We differentiate between the following cases. \paragraph{Case 1:} Toucher claimed one of the edges in $\{e_1, e_2, e_3\}$. Isolator splits the free edges $\{e_4,e_5,\dots, e_{16}\}$ into three sequences of consecutive edges $S_1=\{e_4,e_5,\dots,e_8\}$, $S_2=\{e_9,e_{10},\dots, e_{13}\}$, and $S_3=\{e_{14},e_{15},e_{16}\}$. Isolator plays first in $S_1$. If Claim~\ref{cl2} (a) is true for $S_{1}$, then Isolator isolates one internal vertex in $S_{1}$, and the edges in $S_2$ and $S_3$ are all still free. So Isolator then plays in $S_2$. By Claim~\ref{cl2}, either he can isolate two more internal vertices there and is done, or he isolates one internal vertex in $S_2$ and all edges in $S_3$ are still free, so by Claim~\ref{cl1} he can also isolate one internal vertex in $S_3$. If Claim~\ref{cl2} (b) or (c) are true for $S_1$, then Isolator can isolate two internal vertices in $S_{1}$, and at most two of the edges in $\{e_8\}\cup S_2 \cup S_3$ can have been claimed by Toucher. Hence, there must exist a segment of three consecutive edges not claimed by Toucher among the nine edges in $\{e_8\}\cup S_2 \cup S_3 = \{e_{8},e_{9},\ldots,e_{16}\}$, in which case Isolator can isolate another internal vertex there (by applying Claim~\ref{cl1}). If Claim~\ref{cl2} (d) is true for $S_{1}$, then Isolator can isolate three internal vertices in $S_{1}$. \paragraph{Case 2:} Toucher claimed one of the edges in $\{e_4, e_5\}$. Isolator splits the free edges $\{e_1,e_2, e_3, e_6, e_7, \dots, e_{16}\}$ into three sequences of consecutive edges $S_1=\{e_6,e_7, \dots, e_{10}\}$, $S_2=\{e_{11},e_{12}, \dots, e_{16}\}$, and $S_3=\{e_1,e_2,e_3\}$ (note that this time $|S_{2}|=6$, but $S_{2}$ and $S_{3}$ are no longer adjacent). Isolator again plays first in $S_1$. If Claim~\ref{cl2} (a) or (d) are true for $S_{1}$, then the proof is exactly the same as with Case~$1$. If Claim~\ref{cl2} (b) or (c) hold for $S_{1}$, then Isolator can isolate two internal vertices in $S_{1}$, and at most two of the edges in $S_{2} \cup S_{3}$ can have been claimed by Toucher. Since $|S_{2}|=6$ and $|S_{3}|=3$, there must then exist a segment of three consecutive free edges in either $S_{2}$ or $S_{3}$, in which case we can apply Claim~\ref{cl1} and Isolator is done. \paragraph{Case 3:} Toucher claimed the edge $e_6$. Isolator splits the free edges into three sequences of consecutive edges $S_1=\{e_1, e_2, \ldots, e_5\}$, $S_2=\{e_{11}, e_{12}, \ldots , e_{16}\}$, and $S_3=\{e_7, e_8, e_9, e_{10}\}$. Note that we again have $|S_{1}|=5$, $|S_{2}|=6$, and $|S_{3}| \geq 3$, so we may apply exactly the same proof as with Case~$2$. \paragraph{Case 4:} Toucher claimed one of the edges in $\{e_7,e_8\}$. Isolator splits the free edges into three sequences of consecutive edges $S_1=\{e_9, e_{10}, \dots, e_{13}\}$, $S_2=\{e_1, e_2, \dots, e_6\}$, and $S_3=\{e_{14}, e_{15}, e_{16}\}$.
Again, we have $|S_{1}|=5$, $|S_{2}|=6$, and $|S_{3}| \geq 3$, so we may again apply exactly the same proof as with Cases~$2$ and~$3$. \end{proof} We may now prove the lower bound from Theorem~\ref{cyc2}. Since Lemma~\ref{3of16lemma} already guarantees three untouched vertices for every sixteen edges, the main substance of the proof is to deal satisfactorily with the `leftover' edges when $n$ is not exactly divisible by $16$. \begin{proof}[Proof of lower bound of Theorem~\ref{cyc2}] After Toucher has made her first move, let Isolator then partition the $n$ edges into segments of $16$ consecutive edges, together with one `leftover' segment of $1$ to $16$ consecutive edges, so that the edge claimed by Toucher is the last edge in the leftover segment. Let Isolator then use the strategy of always responding in the same segment in which Toucher played her previous move. By Lemma~\ref{3of16lemma}, Isolator can thus guarantee that the number of untouched internal vertices in each $16$-edge segment will be at least three. Hence, if we use $k$ to denote the number of edges in the leftover segment, it suffices to show that Isolator can also guarantee isolating at least $\frac{3}{16}(k-3)$ internal vertices here. If $k \leq 3$, then there is nothing to prove. If $k \in \{4,5,6,7,8\}$, then we need to show that Isolator can isolate at least one internal vertex. Since Toucher's edge is the last one in this segment, there exist at least three consecutive free edges, so we may simply apply Claim~\ref{cl1}. If $k \in \{9,10,11,12,13\}$, then we need to show that Isolator can isolate at least two internal vertices. Since Toucher's edge is the last one in this segment, there exist at least eight consecutive free edges, so we may split these into two sequences of consecutive edges $S_{1}$ and $S_{2}$ with $|S_{1}|=5$ and $|S_{2}|=3$. Isolator then plays first in $S_{1}$. By Claim~\ref{cl2}, either he can isolate two internal vertices in $S_{1}$ and is done, or he isolates one internal vertex in $S_{1}$ and all edges in $S_{2}$ are still free, so by Claim~\ref{cl1} he can then also isolate one internal vertex in $S_{2}$. If $k \in \{14,15,16\}$, then we need to show that Isolator can isolate at least three internal vertices. To achieve this, we may split the thirteen consecutive free edges into three adjacent sequences of consecutive edges $S_{1}$, $S_{2}$, and $S_{3}$ with $|S_{1}|=|S_{2}|=5$ and $|S_{3}|=3$, and argue exactly as in Case~$1$ of Lemma~\ref{3of16lemma}. \end{proof} \begin{Proposition} The graph $C_{3}$ provides a tight example to the lower bound in Theorem~\ref{cyc2}. \qed \end{Proposition} In the appendix, we use similar arguments to this section to prove Theorem~\ref{path1} on paths. \section{$\mathbf{2}$-regular graphs} \label{2reg} In this section, we now generalise our playing board from a cycle to a collection of disjoint cycles, i.e.~any $2$-regular graph. Again, we shall start by applying Theorem~\ref{gen1} to immediately obtain the upper bound in Theorem~\ref{2reg2}, and then we will work towards deriving the lower bound. \begin{proof}[Proof of upper bound of Theorem~\ref{2reg2}] This follows from Theorem~\ref{gen1}. \end{proof} \begin{Proposition} \label{C4cpts} Any graph consisting of $C_{4}$ components will provide a tight example to the upper bound in Theorem~\ref{2reg2}. 
\end{Proposition} \begin{proof} Note that Isolator can certainly isolate one vertex from each such component by always immediately taking the opposite edge in any $C_{4}$ on which Toucher plays and then taking the fourth edge as soon as Toucher takes the third edge. \end{proof} The proof of the lower bound in Theorem~\ref{2reg2} will involve treating the components differently depending on their size modulo $6$, so we shall find it useful to first prove three lemmas related to this. We begin by applying Theorem~\ref{cyc2} to obtain a result specific to the case when a cycle has length $k \in \{4,5,6\} \bmod 6$. \begin{Lemma} \label{2reglem1} Let $k \in \{ 4,5,6 \} \bmod 6$. Then \begin{displaymath} u(C_{k}) \geq \frac{k}{6}. \end{displaymath} \end{Lemma} \begin{proof} Let us write $k$ as $6r+s$, where $s \in \{ 4,5,6 \}$. Then, by Theorem~\ref{cyc2}, we have \[ u(C_{k}) \geq \frac{3}{16} (k-3) = \frac{3}{16} (6r+s-3) \geq \frac{3}{16} (6r+1) > r. \] Since $u(C_{k})$ must be an integer, we then in fact have $u(C_{k}) \geq r+1 \geq \frac{k}{6}$, and we are done. \end{proof} \begin{Remark} It is also relatively simple to give a self-contained proof of this result, rather than using Theorem~\ref{cyc2}. \end{Remark} Note that the bound of Lemma~\ref{2reglem1} is certainly not valid for all $k$ (e.g.~consider $C_{3}$). Hence, in the next two lemmas we shall deal separately with components of length $k \in \{1,2,3\} \bmod 6$. We shall find it extremely helpful to consider the case when Isolator allows Toucher to have the first two moves in such a component. \begin{Lemma} \label{2reglem2} Let $k \in \{ 1,2,3 \} \bmod 6$. Then Isolator can guarantee that the number of untouched vertices in $C_{k}$ will be at least $\frac{k-3}{6}$ even if Toucher has the first two moves (and Isolator and Toucher play alternately after this). \end{Lemma} \begin{proof} Wlog (since we have a cycle), Toucher makes her first move on edge $1$. For every $6$-edge section after this (i.e.~edges $2$--$7$, edges $8$--$13$, etc.), we can consider the six edges as two $3$-edge segments (e.g.~edges $2$--$7$ will be considered as two $3$-edge segments $2$--$4$ and $5$--$7$). Whenever Toucher plays in one of these $3$-edge segments, Isolator can then immediately take the central edge of the other $3$-edge segment, and Isolator can also always eventually take one of the edges either side of this central edge (since when Toucher takes one, Isolator can just immediately take the other). Hence, Isolator can certainly always obtain two consecutive edges in each of these $6$-edge sections of the cycle, so there will be an untouched vertex each time. Now observe that there are exactly $\left \lceil \frac{k-3}{6} \right \rceil$ such sections, since $k \in \{ 1,2,3 \} \bmod 6$, so we are done. \end{proof} We shall also find it helpful to consider the case when Isolator makes the first move in a component. \begin{Lemma} \label{2reglem3} Let $k \in \{ 1,2,3 \} \bmod 6$. Then Isolator can guarantee that the number of untouched vertices in $C_{k}$ will be at least $\frac{k+3}{6}$ if Isolator plays first (and Toucher and Isolator play alternately after this). \end{Lemma} \begin{proof} The $k=3$ case can easily be verified, so let us assume that $k \geq 7$.
Let Isolator initially use the strategy of trying to extend his first edge into a long string of consecutive edges, by always choosing an edge immediately adjacent to his current string until Toucher has `blocked' both sides of this string with edges of her own (note that these two edges of Toucher will be distinct, since $k>3$). If Isolator is able to use this strategy for the entire game, then he will finish with $\left \lceil \frac{k}{2} \right \rceil$ consecutive edges, and hence the number of untouched vertices will be $\left \lceil \frac{k}{2} \right \rceil - 1$, which is certainly greater than $\frac{k+3}{6}$ (since we are assuming that $k \geq 7$), so we are done. Thus, we may assume that this does not happen. Let us therefore consider the state of the game at the time when Isolator is about to make his first move for which he is no longer able to use this strategy (due to both sides having been blocked by Toucher). Suppose Isolator had managed to achieve a string of $j$ consecutive edges (note it must be that $j \geq 2$), and wlog let these be edges $2$ to $(j+1)$. Hence, Toucher has edges $1$ and $j+2$, and Toucher also has another $j-2$ `rogue' edges elsewhere. Let us split the edges from $j+3$ to $k$ into $3$-edge segments (i.e.~edges $j+3$ to $j+5$, edges $j+6$ to $j+8$, and so on, ignoring the final one or two edges if $k-(j+2)$ is not congruent to $0$ mod $3$). There will be at least $\frac{k-2-(j+2)}{3} = \frac{k-j-4}{3}$ such segments, at most $j-2$ of which will contain one of Toucher's rogue edges. Hence, at least $\frac{k-j-4}{3} - (j-2) = \frac{k-4j+2}{3}$ of these segments will be `unspoilt', in the sense that none of their edges have yet been taken by either player. Recall that Isolator has the next move. Hence, he can immediately take the central edge from one of the unspoilt segments, and (as in the proof of Lemma~\ref{2reglem2}) can also always eventually take one of the edges either side of this central edge, thus isolating a vertex. Whenever Toucher plays first in one of the unspoilt segments, Isolator can then immediately take the central edge of any remaining unspoilt segment, again eventually isolating a vertex. Hence, we see that Isolator will be able to guarantee at least one untouched vertex from at least half of the segments that were unspoilt. Thus, he will obtain at least $\frac{k-4j+2}{6}$ such vertices, together with the $j-1$ vertices that he already had from his string of $j$ consecutive edges. Hence, the total number of untouched vertices adds up to at least $\frac{k+2j-4}{6}$, which is at least $\frac{k}{6}$ by our observation that $j \geq 2$, and at least $\frac{k+3}{6}$ due to integrality and the fact that $k \in \{1,2,3\} \bmod 6$. \end{proof} We are now ready to use our three lemmas to complete the proof of Theorem~\ref{2reg2}. \begin{proof}[Proof of lower bound of Theorem~\ref{2reg2}] Recall from Lemma~\ref{2reglem1} that Isolator can guarantee that at least $\frac{1}{6}$ of the vertices from each component of size $k$ for $k \in \{4,5,6\} \bmod 6$ will be untouched. Hence, it only remains to deal with the other components. Let us pair up these other components into partners, with at most one such component left over (it will not matter whether the partners have the same size $\bmod$~$6$, only that the sizes belong to $\{1,2,3\} \bmod 6$). When Toucher first plays in one of a pair, let Isolator make one move in the partner. 
After this, whenever Toucher plays again anywhere in this pair, let Isolator respond in the same component as Toucher (so Toucher will have the first two moves in one of the pair, with alternate moves after this, and Isolator will have the first move in the partner, with alternate moves after this). By Lemmas~\ref{2reglem2} and~\ref{2reglem3}, if two paired components have size $k_{1}$ and $k_{2}$, respectively, then Isolator can guarantee that the number of untouched vertices in these two components will be at least $\frac{k_{1}-3}{6} + \frac{k_{2}+3}{6} = \frac{k_{1}+k_{2}}{6}$. Thus, Isolator can guarantee that at least $\frac{1}{6}$ of the vertices from each pair will be untouched. By then applying Lemma~\ref{2reglem2} as a lower bound for the number of untouched vertices in the leftover component (if one exists), we hence obtain our result. \end{proof} \begin{Proposition} \label{oddC3cpts} Any graph consisting of an odd number of $C_{3}$ components will provide a tight example to the lower bound in Theorem~\ref{2reg2}. \end{Proposition} \begin{proof} Note that Toucher can make the first move in Component~$1$, say, and can then pair up the remaining components to ensure that she can also make the first move in half of these. In every component in which Toucher made the first move, she can guarantee eventually taking a second edge and hence leaving no untouched vertices. In every other component, she can guarantee eventually taking one edge and hence leaving only one untouched vertex. \end{proof} \begin{Remark} \label{degseqremark} Note that Theorem~\ref{cyc2} implies that the lower bound of Theorem~\ref{2reg2} will not be tight for $C_n$ if $n>3$. Thus, as mentioned earlier in Remark~\ref{genremark2}, this observation together with the tight example of Proposition~\ref{oddC3cpts} shows that $u(G)$ is not solely determined by the degree sequence (see also Section~\ref{3reg}). \end{Remark} \section{Trees} \label{trees} In the previous section, we explored one way of generalising the playing board from the cycles and paths considered in Theorem~\ref{cyc2} and Theorem~\ref{path1}, by investigating general $2$-regular graphs. In this section, we consider another natural extension, by instead examining general trees. We start by proving the upper bound of Theorem~\ref{tree} and providing a family of tight examples, and then we also prove the lower bound and give a tight example. \begin{proof}[Proof of upper bound of Theorem~\ref{tree}] By Theorem~\ref{gen1}, we have \begin{displaymath} u(T) \leq \sum_{v \in V(T)} 2^{-d(v)}. \end{displaymath} Note that (since $T$ can have no vertices of degree $0$) the sum on the right-hand-side is maximised when all but one of the vertices have degree $1$, since otherwise one can always achieve a higher value by decreasing the second largest degree by $1$ and increasing the largest degree by $1$. Hence, we obtain \begin{displaymath} u(T) \leq \frac{n-1}{2} + 2^{1-n}. \end{displaymath} But since $n \geq 3$, we have $2^{1-n} < \frac{1}{2}$, so the integrality of $u(T)$ then implies that we must actually have $u(T) \leq \frac{n-1}{2}$. \end{proof} \begin{Proposition} \label{treeex1} Any star with an odd number of vertices will provide a tight example to the upper bound in Theorem~\ref{tree}. \qed \end{Proposition} We now move on to the lower bound. \begin{proof}[Proof of lower bound of Theorem~\ref{tree}] In the main part of the proof, we shall work towards showing \begin{equation} \label{tree2eqnb} u(T) \geq \frac{n+d_{1}-1}{8}. 
\end{equation} The result will then follow from a combination of~\eqref{tree2eqnb}, Theorem~\ref{path1}, and one special case that will need to be considered separately. In order to establish~\eqref{tree2eqnb}, we shall proceed by first (i)~analysing the proof of the lower bound of Theorem~\ref{gen1} to see that some aspects can be improved slightly when the graph is known to be a tree, then (ii)~obtaining a useful result on the average degree of the non-leaves, and finally (iii)~using this to optimise our bound. \paragraph{(i)} Recall that the proof of the lower bound of Theorem~\ref{gen1} utilised the concept of the Danger of a vertex. The bound obtained then followed from showing that the total Danger will decrease by at most $1$ with Toucher's first move, and then by at most $\frac{1}{4}$ with every subsequent pair of moves if Isolator uses the tactic of always choosing the edge which maximises the sum of the Dangers of the two vertices incident to it. However, it can immediately be seen that for a tree with $n>2$ vertices, the total Danger can actually only decrease by at most $\frac{3}{4}$ with Toucher's first move, since there cannot be two adjacent leaves. Hence, we can certainly add an extra $1- \frac{3}{4} = \frac{1}{4}$ to the lower bound obtained in Theorem~\ref{gen1}. We shall now also show that the total Danger can only decrease by at most $\frac{1}{8}$ with the first subsequent pair of moves, meaning that we can then add a further $\frac{1}{4} - \frac{1}{8} = \frac{1}{8}$ to this bound. To see this, first note that (with the stated tactic) Isolator will certainly take an unplayed edge $uw$ incident to a leaf $w$ on his first move. If we use $\textrm{D}(z)$ to denote the Danger of a vertex $z$ after Toucher's first move, then Isolator's move thus causes the total Danger to temporarily increase by $\frac{1}{2} + \textrm{D}(u)$. In order for the total Danger to then decrease back by more than $\frac{5}{8} + \textrm{D}(u)$ with Toucher's next move, note that she would have to take an adjacent edge $uv$ (due to the maximality condition in Isolator's strategy) satisfying $2 \textrm{D}(u) + \textrm{D}(v) > \frac{5}{8} + \textrm{D}(u)$, i.e.~$\textrm{D}(u) + \textrm{D}(v) > \frac{5}{8}$. Since $u$ cannot be a leaf (as it is adjacent to the leaf $w$), this is only possible if $\textrm{D}(u) = \frac{1}{4}$ and $\textrm{D}(v) = \frac{1}{2}$. But in this case, the entire tree $T$ would consist of just the path $wuv$, which would contradict the fact that Toucher has already been able to take one edge somewhere with her first move! Hence, we find that we are indeed able to add the promised increments to the lower bound given in Theorem~\ref{gen1} if $T$ is a tree (with $|V(T)|>2$), and we thus obtain \begin{equation} u(T) \geq \sum_{v \in V(T)} 2^{-d(v)} - \frac{|E(T)|+7}{8} + \frac{1}{4} + \frac{1}{8} = \frac{d_{1}}{2} + \sum_{v:d(v) \geq 2} 2^{-d(v)} - \frac{n}{8} - \frac{3}{8}. \label{tree2eqnc} \end{equation} \paragraph{(ii)} We shall now work towards our aforementioned result on the average degree of the non-leaves of $T$. Let us first recall that (since $n>2$) any two leaves must be non-adjacent, and so it is then clear that $u(T) \geq \frac{d_{1}-1}{2}$. Hence,~\eqref{tree2eqnb} is certainly satisfied if $d_{1} \geq \frac{n}{3} + 1$, so we may assume that $d_{1} < \frac{n}{3} + 1$. Now let $x$ denote the average degree of the $n-d_{1}$ non-leaves, and observe that $d_{1} + x(n-d_{1}) = 2n-2$, and $ x = 1 + \frac{n-2}{n-d_{1}}. 
$ Thus, since $d_{1} < \frac{n}{3} + 1 < \frac{n}{2} + 1$, we have $x<3$. \paragraph{(iii)} We shall now utilise our bound on $x$ in conjunction with~\eqref{tree2eqnc}. Note that (for given $d_{1}$) the sum $\sum_{v:d(v) \geq 2} 2^{-d(v)}$ is minimised when the non-leaves all have degrees differing by at most $1$, since otherwise one can always achieve a lower value by increasing the smallest non-leaf degree by $1$ and decreasing the largest non-leaf degree by $1$. Hence, since $x<3$, we find that the sum $\sum_{v:d(v) \geq 2} 2^{-d(v)}$ is minimised (for given $d_{1}$) when the non-leaves all have degree $2$ or $3$. In this case, we have $d_{1} + 2d_{2} + 3d_{3} = 2(n-1)$ and $d_{2} = n-d_{1}-d_{3}$, and so we obtain $d_{2} = n+2-2d_{1}$ and $d_{3}=d_{1}-2$. Thus,~\eqref{tree2eqnc} then gives \[ u(T) \geq \frac{d_{1}}{2} + \frac{n+2-2d_{1}}{4} + \frac{d_{1}-2}{8} - \frac{n}{8} - \frac{3}{8} = \frac{n+d_{1}-1}{8}, \] as desired. \\ If $d_{1} > 2$, then we are done. If not, then $T$ must be a path, and we can look to apply our lower bound $u(P_{n}) \geq \frac{3}{16} (n-2)$ from Theorem~\ref{path1}. We certainly have $\frac{3}{16} (n-2) \geq \frac{n+2}{8}$ for $n \geq 10$, and it can also be checked that $\left \lceil \frac{3}{16} (n-2) \right \rceil \geq \frac{n+2}{8}$ for $n \in \{3,4,5,6,8,9\}$, leaving only the case when $T=P_{7}$. For this final case, it suffices to show that Isolator can always guarantee that at least two of the vertices will be untouched, and it can be checked that this is indeed so. \end{proof} \begin{Proposition} \label{treeex2} The graph $P_{6}$ provides a tight example to the lower bound in Theorem~\ref{tree}. \end{Proposition} \begin{proof} This follows immediately from Theorem~\ref{path1}. \end{proof} \section{$\mathbf{3}$-regular graphs} \label{3reg} Recall that in Section~\ref{2reg} we considered the case when our playing board is a $2$-regular graph. The natural generalisation of this is to consider $k$-regular graphs for $k>2$. However, we already know from Theorem~\ref{gen2} that $u(G)=0$ for all $k$-regular $G$ when $k>3$. Hence, it only remains to now deal with the case when $k=3$. We start by giving an upper bound for $u(G)$, then we focus on constructing $3$-regular examples with $u(G)>0$ (proving Theorem~\ref{u>0example}), and then finally we observe that there are also $3$-regular examples with $u(G) = 0$. As mentioned, we begin with our best known upper bound, which is a direct consequence of Theorem~\ref{gen1}. \begin{Corollary} \label{3regcor} For any $3$-regular graph $G$ with $n$ vertices, we have \begin{displaymath} u(G) \leq \frac{n}{8}. \end{displaymath} \end{Corollary} \begin{proof} This follows immediately from Theorem~\ref{gen1}. \end{proof} It is not at all straightforward to construct any $3$-regular graphs with $u(G)>0$. However, the following example shows that there do indeed exist such graphs. \begin{Proposition} \label{3regex} The $3$-regular graph $G$ depicted in Figure~\ref{3regfig1} will have an untouched vertex. \begin{figure} \caption{A $3$-regular graph $G$ satisfying $u(G) \geq 1$.} \label{3regfig1} \end{figure} \end{Proposition} \begin{proof} First, let us observe that $G$ consists of three identical blocks $H_{1}$, $H_{2}$, and $H_{3}$, together with the edges $e_{12}$, $e_{13}$, and $e_{23}$. Thus, by symmetry, wlog Toucher makes her first move somewhere in $H_{1} \cup e_{12} \cup e_{13} $. Let Isolator then take the edge $e_{23}$, and note that wlog Toucher's next move is not in $H_{3}$. 
From this point on, we shall just focus on the graph $H_{3}$, as shown in Figure~\ref{3regfig2}. Recall that Isolator has already taken the edge $e_{23}$, all the edges in $H_{3}$ are as yet unplayed (it will not matter whether or not the edge $e_{13}$ has been taken), and Isolator has the next move. Let Isolator use this move to take the internal edge $u_1u_2$ marked with an $I$. \begin{figure} \caption{The graph $H_{3}$.} \label{3regfig2} \end{figure} \paragraph{Case (i):} Toucher does not take one of the `inner' edges (i.e.~those labelled with a $1$ or a $2$) in her next move. Then all these inner edges are still unplayed, and the inner vertices $u_{1}$, $u_{2}$, and $u_{3}$ are still untouched. Thus, Isolator may then take $1a$, Toucher is forced to take $1b$ (to avoid $u_{1}$ becoming isolated), Isolator may then take $2a$, Toucher is forced to take $2b$ (to avoid $u_{2}$ becoming isolated), and Isolator may then take $2c$ and hence isolate $u_{3}$. \paragraph{Case (ii):} Toucher takes one of the edges labelled with a $1$ in her next move. Then all the edges labelled with a $2$ or a $3$ are still unplayed, and the vertices $u_{2}$, $v_{1}$, $v_{2}$, and $v_{4}$ are still untouched. Thus, Isolator may then take $2b$, Toucher is forced to take $2a$ (to avoid $u_{2}$ becoming isolated), Isolator may then take $3b$, Toucher is forced to take $3d$ (to avoid $v_{1}$ becoming isolated), and Isolator may then take $3a$ and hence isolate $v_{4}$. \paragraph{Case (iii):} Toucher takes one of the edges labelled with a $2$ in her next move. Then all the edges labelled with a $1$ or a $3$ are still unplayed, and the vertices $u_{1}$, $v_{3}$, and $v_{4}$ are still untouched. Thus, Isolator may then take $1b$, Toucher is forced to take $1a$ (to avoid $u_{1}$ becoming isolated), Isolator may then take $3a$, Toucher is forced to take $3c$ (to avoid $v_{3}$ becoming isolated), and Isolator may then take $3b$ and hence isolate $v_{4}$. \end{proof} Using Proposition~\ref{3regex}, we may now prove Theorem~\ref{u>0example}. \begin{proof}[Proof of Theorem~\ref{u>0example}] Note that the graph $G$ in Proposition~\ref{3regex} has $24$ vertices. Hence, we may simply take $\left \lfloor \frac{n}{24} \right \rfloor$ components identical to $G$, and any $3$-regular graph on the other vertices. \end{proof} Recall that in the $2$-regular case (see Theorem~\ref{2reg2}), the only graph for which $u(G)=0$ is the triangle. However, it turns out that there are infinitely many $3$-regular graphs for which there will be no untouched vertices. \begin{Proposition} Any graph consisting of $K_{4}$ components will have no untouched vertices. \end{Proposition} \begin{proof} Whenever Isolator plays first in a component (taking the edge $v_{1}v_{2}$, say), Toucher can then immediately take the non-adjacent edge from this same component (let us denote this edge by $v_{3}v_{4}$). Wlog (by symmetry), when Isolator plays again in this component he takes the edge $v_{1}v_{3}$, in which case Toucher can then immediately take the edge $v_{1}v_{4}$. Whenever Isolator takes one of the two remaining edges in this component (note that both of these will be incident to $v_{2}$), Toucher can then immediately take the final edge and will hence have touched all four vertices. \end{proof} \section{Discussion} \label{discussion} Perhaps the most interesting unresolved issue concerns the asymptotic proportion of untouched vertices in $C_{n}$ and $P_{n}$.
We have shown in Theorem~\ref{cyc2} and Theorem~\ref{path1} that this is somewhere between $\frac{3}{16}$ and $\frac{1}{4}$, but where exactly? Could it perhaps be $\frac{1}{5}$? One intuitive reason for this is that Isolator needs two moves to isolate one vertex, but Toucher can touch four vertices in this time, so we might expect that there should consequently be four times as many touched vertices as untouched. However, we have not managed to turn this reasoning into a formal argument. Throughout this paper, whenever we have derived a bound, we have also tried to give tight examples that hold for infinitely many values of $n$. However, in the case of our lower bound for $u(T)$ in Theorem~\ref{tree}, we only managed to provide one tight example, in Proposition~\ref{treeex2}. Hence, it would be interesting to know whether there are other tight examples, or if in fact this lower bound can be improved for large $n$. Also, what type of tree is most suitable for Toucher? Recall that we showed in Proposition~\ref{treeex1} that stars are the best choice for Isolator. As we have seen in Remark~\ref{degseqremark} and Section~\ref{3reg}, we cannot hope to obtain exact results just by looking at the degree sequence of the graph. Hence, we are curious to know if any other properties or parameters of the graph can be utilised to give more precise bounds. Finally, what is the largest possible proportion of untouched vertices for a $3$-regular graph? By Theorem~\ref{u>0example} and Corollary~\ref{3regcor}, we know that this is between $\frac{1}{24}$ and $\frac{1}{8}$. \appendix \section{Paths} \label{paths} We now prove Theorem~\ref{path1} on paths (recall that this result was used in the proofs of both Theorem~\ref{tree} and Proposition~\ref{treeex2}). Clearly, the games on $P_n$ and $C_n$ are very closely related (in fact, the game on $P_{n}$ is exactly equivalent to a game on $C_{n}$ in which Isolator has the first move), and so the proofs here are similar to those for cycles. \begin{proof}[Proof of upper bound of Theorem~\ref{path1}] For $1< n\leq 4$, knowing that Toucher is the first to play, the result follows easily. For $n>4$, we add a slight refinement to the analysis given in the proof of Theorem~\ref{gen1}, again considering the strategy where Toucher always chooses the edge which maximises the sum of the Dangers of the two vertices incident to it. At the beginning of the game, the total Danger is $\sum_{v\in V(P_n)} 2^{-d(v)}=\frac{n+2}{4}$. Going through all possible cases, we see that Toucher will always decrease the total Danger by at least $\frac{6}{4}$ in her first two moves, while Isolator will only increase it by at most $\frac{5}{4}$ in his first two moves. Therefore, after these first two pairs of moves, the total Danger will have decreased by at least $\frac{1}{4}$. Thus, continuing as in the proof of Theorem~\ref{gen1}, we hence obtain \begin{displaymath} u(P_n)\leq \sum_{v\in V(P_n)} 2^{-d(v)} - \frac{1}{4}=\frac{n+1}{4}. \qedhere \end{displaymath} \end{proof} \begin{Proposition} The graph $P_{3}$ provides a tight example to the upper bound in Theorem~\ref{path1}. \qed \end{Proposition} For the lower bound, the key ingredient is Lemma~\ref{3of16lemma}, which enables Isolator to guarantee three untouched vertices from every sixteen edges. As with $C_n$, the main remaining issue is to deal with the leftover portion when the number of edges is not divisible by $16$. 
This time, the argument is further complicated by the fact that Isolator will need to take advantage of the two leaves. \begin{proof}[Proof of the lower bound of Theorem~\ref{path1}.] Let $k \in \{0,1,\ldots,15\}$ denote the value of $(n-1) \bmod 16$. If $k \in \{0,1\}$, let $x=0$; if $k \in \{2,3,4,5,6\}$, let $x=1$; if $k \in \{7,8,9,10,11\}$, let $x=2$; and if $k \in \{12,13,14,15\}$, let $x=6$. Let $y = k-x \geq 0$. Before Toucher makes her first move, let Isolator partition the $n-1$ edges of $P_{n}$ into a `left-end' segment of $x$ consecutive edges, middle segments each of $16$ consecutive edges, and a `right-end' segment of $y$ consecutive edges. Let Isolator then use the strategy of always responding in the same segment in which Toucher played her previous move. By Lemma~\ref{3of16lemma}, Isolator can thus guarantee that the number of untouched internal vertices in each $16$-edge segment will be at least three. Note that the statement of the theorem is equivalent to $u(P_{n}) \geq \frac{3}{16}(|E(P_{n})|-1)$. Hence, since $k$ is equal to the total number of edges in the two end segments, it now suffices to show that Isolator can guarantee isolating at least $\frac{3}{16}(k-1)$ vertices here. Throughout the remainder of the proof, note that we shall use the word `leaf' solely for the two leaves in $P_{n}$. Moreover, we shall not attempt to isolate the right-most vertex of the left-end segment or the left-most vertex of the right-most segment. If $k \leq 1$, then there is nothing to prove. If $k \in \{2,3,4,5,6\}$, then we need to show that Isolator can isolate at least one vertex. Recall $x=1$, so $y = k-x \geq 1$. Hence, as soon as Toucher takes an edge incident to one of the leaves, Isolator can simply take the edge incident to the other leaf, thus isolating it. If $k \in \{7,8,9,10,11\}$, then we need to show that Isolator can isolate at least two vertices. Recall $x=2$, so $y = k-x \geq 5$. As soon as Toucher takes an edge from one of the end segments (let us use $A$ to denote this segment), let Isolator take the edge incident to the leaf in the other end segment (let us use $B$ to denote this segment), thus isolating it. After Toucher's second move, we may assume that Toucher's two edges consist of the edge adjacent to Isolator's edge in Segment~$B$ and the edge incident to the leaf in Segment~$A$ (since otherwise Isolator could then take one of these, and we would be done). Hence, since $y \geq 5$, the right-end segment must certainly still contain three consecutive free edges, so we are done by Claim~\ref{cl1}. If $k \in \{12,13,14,15\}$, then we need to show that Isolator can isolate at least three vertices. Recall $x=6$, so $y \geq 6$ too. As soon as Toucher takes an edge from one of the end segments (let us again use $A$ to denote this segment), let Isolator take the edge incident to the leaf in the other end segment (let us again use $B$ to denote this segment), thus isolating it. Let us denote the first six edges in $A$, starting from the leaf, as $a_{1}, a_{2}, \ldots, a_{6}$, and let us similarly denote the first six edges in $B$, starting from the leaf, as $b_{1}, b_{2}, \ldots, b_{6}$. Hence, Isolator has claimed $b_{1}$. If Toucher's first two edges consist of $b_{2}$ and $a_{1}$, then $A$ still contains five consecutive free edges and $B$ still contains four consecutive free edges. 
By Claim~\ref{cl2}, either Isolator can then isolate two internal vertices in $A$ and is done, or he can isolate one internal vertex in $A$ and then also one internal vertex in $B$ (using Claim~\ref{cl1}), and is again done. If Toucher's first two edges are not $b_{2}$ and $a_{1}$, then Isolator may claim one of these with his second move, thus isolating a second vertex. If Isolator takes $b_{2}$, then we may assume that Toucher's first three edges include both $b_{3}$ and $a_{1}$ (since otherwise Isolator could then also take one of these, and we would be done), so at least one of $A$ or $B$ will still contain three consecutive free edges, and so we may then just apply Claim~\ref{cl1}. Similarly, if Isolator takes $a_{1}$, then we may assume that Toucher's first three edges include both $a_{2}$ and $b_{2}$, and we can then use exactly the same argument. \end{proof} \begin{Proposition} The graph $P_{2}$ provides a tight example to the lower bound in Theorem~\ref{path1}. \qed \end{Proposition} \end{document}
\begin{document} \maketitle \tableofcontents \section{Introduction} Kuranishi structures were introduced to symplectic topology by Fukaya and Ono \cite{FO}, and recently refined by Joyce \cite{J}, in order to extract homological data from compactified moduli spaces of holomorphic maps in cases where geometric regularization approaches such as perturbations of the almost complex structure do not yield a smooth structure on the moduli space. These geometric methods generally cannot handle curves that are nowhere injective. The first instance in which it was important to overcome these limitations was the case of nowhere injective spheres, which are then multiply covered and have nontrivial isotropy.\footnote{This is not the case for discs. For example, a disc with boundary on the equator can wrap two and a half times around the sphere. This holomorphic curve, called the lantern, has trivial isotropy.} Because of this, the development of virtual transversality techniques in \cite{FO}, and the related work by Li and Tian \cite{LT}, was focussed on dealing with finite isotropy groups, while some topological and analytic issues were not resolved. The goal of this paper is to explain these issues and provide the beginnings of a framework for resolving them. To that end we focus on the most fundamental issues, which are already present in applying virtual transversality techniques to moduli spaces of holomorphic spheres without nodes or nontrivial isotropy. We give a general survey of regularization techniques in symplectic topology in Section~\ref{s:fluff}, pointing to some general analytic issues in Sections~\ref{ss:geom}--\ref{ss:kur}, and discussing the specific algebraic and topological issues of the Kuranishi approach in Sections~\ref{ss:alg} and \ref{ss:top}. In the main body of the paper we provide an abstract framework of Kuranishi atlases which separates the analytic and topological issues as outlined in the following. The main analytic issue in each regularization approach is in the construction of transition maps for a given moduli space, where one has to deal with the lack of differentiability of the reparametrization action on infinite dimensional function spaces discussed in Section~\ref{s:diff}. When building a Kuranishi atlas on a moduli space, this issue also appears in a sum construction for basic charts on overlaps, and has to be dealt with separately for each specific moduli space. We explain the construction of basic Kuranishi charts, their sums, and transition maps in the case of spherical Gromov--Witten moduli spaces in Section~\ref{s:construct}, outlining the proof of a more precise version of the following in Proposition~\ref{prop:A1} and Theorem~\ref{thm:A2}. \medskip \noindent {\bf Theorem A.}\,\,{\it Let $(M,{\omega},J)$ be a symplectic manifold with tame almost complex structure, and let ${\mathcal M}(A,J)$ be the space of simple $J$-holomorphic maps $S^2\to M$ in class $A$ with one marked point, modulo reparametrization. If ${\mathcal M}(A,J)$ is compact (e.g.\ if $A$ is ``${\omega}$-minimal''), then there exists an open cover ${\mathcal M}(A,J)= \bigcup_{i=1,\ldots,N} F_i$ by ``footprints'' of basic Kuranishi charts $({\bf K}_i)_{i=1,\ldots,N}$. Moreover, for any tuple $({\bf K}_i)_{i\in I}$ of basic charts whose obstruction spaces $E_i$ satisfy a ``transversality condition'' we can construct transition data as follows.
There exists a ``sum chart'' ${\bf K}_I$ with obstruction space $\prod_{i\in I}E_i$ and footprint $F_{I}=\bigcap_{i\in I} F_i \subset {\mathcal M}(A,J)$, such that a restriction of each basic chart ${\bf K}_i|_{F_I}$ includes into ${\bf K}_I$ by a coordinate change.} \medskip The notions of a Kuranishi chart, a restriction, and a coordinate change are defined in detail in Section~\ref{s:chart}. Throughout, we simplify the discussion by assuming that all isotropy groups are trivial. In that special case our definitions largely follow \cite{FO,J}, though instead of working with germs, we define a Kuranishi atlas as a covering family of basic charts together with transition data satisfying a cocycle condition involving an inclusion requirement on the domains of the coordinate changes. In addition to Theorem~A, this requires the construction of further coordinate changes satisfying the cocycle condition. At this point one could already use ideas of \cite{LiuT} to reduce the cover and construct compatible transverse perturbations of the sections in each Kuranishi chart. However, there is no guarantee that the perturbed zero set modulo transition maps is a closed manifold, in particular Hausdorff. This is an essential requirement in the construction of a {\it virtual moduli cycle} (VMC for short), which, as its name indicates, is a cycle in an appropriate homology theory representing the {\it virtual fundamental class} (or VFC) $[X]^{vir}_{\mathcal K}$ of $X$. We remedy this situation in the main technical part of this paper by constructing a {\it virtual neighbourhood} of the moduli space, with paracompact Hausdorff topology, in which the perturbed zero set modulo transition maps is a closed subset. The construction of this virtual neighbourhood requires a tameness property of the domains of the coordinate changes, in particular a strong cocycle condition which requires equality of domains. However, the coordinate changes arising from sum constructions as in Theorem~A can usually only be made to satisfy a weak cocycle condition on the overlap of domains. On the other hand, these constructions naturally provide an additivity property for the obstruction spaces. In Sections~\ref{ss:tame} and \ref{ss:Kcobord} we develop these notions, proving in Propositions~\ref{prop:proper} and~\ref{prop:cobord2} that a suitable shrinking of such an additive weak Kuranishi atlas induces a tame Kuranishi atlas that is well defined up to cobordism. We can then ``reduce'' the Kuranishi atlas in order to facilitate the construction of sections. As a result, in Section~\ref{s:VMC} we obtain a precise version of the following.\footnote{To see why we use rational, rather than integer, \v{C}ech homology see Remark~\ref{rmk:Cech}.} \medskip \noindent {\bf Theorem B.}\,\,{\it Let ${\mathcal K}$ be an oriented, $d$-dimensional, weak, additive Kuranishi atlas with trivial isotropy groups on a compact metrizable space $X$. Then ${\mathcal K}$ determines a cobordism class of smooth, oriented, compact manifolds, and an element $[X]^{vir}_{\mathcal K}$ in the \v{C}ech homology group $\check{H}_d(X;{\mathbb Q})$. Both depend only on the cobordism class of ${\mathcal K}$.} \medskip Making our constructions applicable to general holomorphic curve moduli spaces will require two generalizations of Theorem~B, which we are working on in \cite{MW:ku2}. Firstly, a groupoid version of the theory (extending ideas from \cite{FO,J}) will allow for nontrivial isotropy groups.
Secondly, allowing the structure maps in the Kuranishi category to be stratified smooth rather than smooth permits the construction of Kuranishi charts using standard gluing analysis, as established e.g.\ in \cite{MS}. With this framework in place, we hope to extend Theorem~A to giving a completely detailed construction of a Kuranishi atlas for spherical Gromov--Witten invariants in \cite{MW:gw}, by combining the ideas of finite dimensional reduction in \cite{FO} with the explicit obstruction bundles in \cite{LT} and the analytic results in \cite{MS}. { } \noindent {\bf Organization:} The following remarks together with Sections~\ref{s:fluff} and \ref{s:diff} provide a survey of regularization techniques in symplectic topology and their pitfalls. Section~\ref{s:construct} continues this discussion for the specific example of Kuranishi atlases for genus zero Gromov--Witten moduli spaces, and also outlines an approach to proving Theorem~A. All of these sections are essentially self-contained and can be read in any order. The main technical parts of the paper, Sections~\ref{s:chart}, \ref{s:Ks}, and \ref{s:VMC} , are independent of the previous sections, but strongly build on each other towards a proof of Theorem~B. Much of the work here concerns the topological underpinnings of the theory. An introduction and outline for these technical parts can be found in Sections~\ref{ss:kur} and \ref{ss:top}. Since this project revisits fifteen year old, much used theories, let us explain some motivations, relations to other work, and give an outlook for further development. \noindent {\bf Background:} In a 2009 talk at MSRI \cite{w:msritalk}, KW posed some basic questions on the currently available abstract transversality approaches. DM, who had been uneasily aware for some time of analytic problems with the approach of Liu--Tian~\cite{LiuT}, the basis of her expository article \cite{Mcv}, decided that now was the time to clarify the constructions once and for all. The issue here is the lack of differentiability of the reparametrization action, which enters if constructions are transferred between different infinite dimensional local slices of the action, or if a differentiable Banach manifold structure on a quotient space of maps by reparametrization is assumed. The same issue is present for Deligne--Mumford type spaces of domains and maps, and is discussed in detail in Section~\ref{s:diff}. In studying the construction of a virtual moduli cycle for Gromov--Witten invariants via a Kuranishi atlas we soon encountered the same differentiability issue. Extrapolating remarks by Aleksey Zinger and Kenji Fukaya, we next realized that the geometric construction of obstruction spaces in \cite{LT} could be used to construct smoothly compatible Kuranishi charts. However, in making these constructions explicit, we needed to resolve ambiguities in the definition of a Kuranishi structure, concerning the precise meaning of germ of coordinate changes and the cocycle condition, discussed in Section~\ref{ss:alg}. More generally, we found it difficult to find a definition of Kuranishi structure that on the one hand clearly has a virtual fundamental class (a generalization of Theorem B), and on the other hand arises from fairly simple analytic techniques for holomorphic curves (a generalization of Theorem A). 
One issue that we will touch on only briefly in Section~\ref{ss:approach} is the lack of smoothness of the standard gluing constructions, which affects the smoothness of the Kuranishi charts near nodal or broken curves. A more fundamental topological issue is the necessity to ensure that the zero set of a transverse perturbation is not only smooth, but also remains compact as well as Hausdorff, and does not acquire boundary in the regularization. These properties, as far as we are aware nowhere previously addressed in the literature, are crucial for obtaining a global triangulation and thus a fundamental homology class. Another topological issue is the necessity of refining the cover by Kuranishi charts to a ``good cover'', in which the unidirectional transition maps allow for an iterative construction of perturbations. (These topological issues will be discussed in Section~\ref{ss:top}.) So we decided to start from the very basics and give a definition of Kuranishi atlas and a completely explicit construction of a VFC in the simplest nontrivial case. \noindent {\bf Relation to other Kuranishi notions:} As this work was nearing completion, we alerted Fukaya et al and Joyce to some of the issues we had uncovered, and the ensuing discussion eventually resulted in \cite{FOOO12} and some parts of \cite{JD}. To clarify the relation between these approaches and ours, we now use the language of atlases, which in fact is more descriptive. While the previous definitions of Kuranishi structures in \cite{FO,J} are algebraically inconsistent as explained in Section~\ref{ss:alg}, our approach is compatible with the notions of \cite{FOOO,FOOO12}. Indeed we show in Remark~\ref{rmk:otherK} how to construct a Kuranishi structure in this sense from a weak Kuranishi atlas. Similarly, when there is no isotropy there is a relation between the ideas behind a ``good coordinate system" and our notion of a reduction. However, as will be clear when \cite{MW:ku2} is completed, the two approaches differ significantly when there is nontrivial isotropy. One can make an analogy with the development of the theory of orbifolds: The approach of \cite{FOOO12} is akin to Satake's definition of a $V$-manifold, while our definitions are much closer to the idea of describing an orbifold as the realization of an \'etale proper groupoid. In our view, weak atlases in the sense of Definition~\ref{def:Kwk} are the natural outcomes of constructions of compatible finite dimensional reductions, and we see a clear abstract path from an atlas to a VMC. Constructing a weak atlas involves checking only a finite number of consistency conditions for the coordinate changes, while uncountably many such conditions must be checked if one tries to construct a Kuranishi structure directly. We will further compare our approach to that of Fukaya et al in Remark~\ref{rmk:otherK}. Another approach to constructing virtual fundamental classes from local finite dimensional reductions is proposed in \cite{JD} using so-called ``d-orbifolds'' which have more algebraic properties than Kuranishi structures. 
We cannot comment on the details of this approach apart from noting that it does not offer a direct approach to regularizing moduli spaces, but instead seems to require special types of Kuranishi structures or polyfold Fredholm sections as a starting point.\footnote{\cite[Thm.15.6]{JD} claims ``virtual class maps'' for d-orbifolds under an additional ``(semi)effectiveness'' assumption, which in our understanding could only be obtained from the constructions of \cite[Thm.16.1]{JD} under the assumption of obstruction spaces invariant under the isotropy action. However, this is almost a case of equivariant transversality in which e.g.\ the polyfold setup should allow for a global finite dimensional reduction as a smooth section of an orbifold bundle along the lines of \cite{DZ}.} \noindent {\bf Outlook:} Based on our results in the present paper, we have a good idea of how to generalize and apply these constructions to all genus zero Gromov--Witten moduli spaces. We believe that Kuranishi atlases for other moduli spaces of closed holomorphic curves can be constructed analogously, though each case would require a geometric construction of local slices as well as obstruction bundles specific to the setup, and careful gluing analysis. An alternative route is provided by the construction of Kuranishi structures from a proper Fredholm section in a polyfold bundle, as announced in \cite{DZ}. In fact, this approach would induce a smooth, rather than stratified smooth, Kuranishi structure. The case of moduli spaces with boundary given as the fiber product of other moduli spaces, as required for the construction of $A_\infty$-structures, is beyond the scope of our project. While finishing this manuscript, we learned that Jake Solomon \cite{Sol} has been developing an approach to dealing with boundaries and making ``pull-push'' constructions as required for chain level theories. Such a framework will need to generalize our notion of Kuranishi cobordism on $X\times [0,1]$ to underlying moduli spaces with a much less natural ``boundary and corner stratification'', in particular facing a further complication in the already highly nontrivial construction of relative perturbations in Proposition~\ref{prop:ext2}. Moreover, it has to solve the additional task of constructing regularizations that respect the fiber product structure on the boundary. This issue, also known as constructing coherent perturbations, has to be addressed separately in each specific geometric setting, and requires a hierarchy of moduli spaces which permits one to construct the regularizations iteratively. In the construction of the Floer differential on a finitely generated complex, such an iteration can be performed using an energy filtration thanks to the algebraically simple gluing operation. In more `nonlinear' algebraic settings, such as $A_\infty$ structures, one needs to artificially deform the gluing operation, e.g.\ when dealing with homotopies of data \cite{SEID}. It seems to us that in such situations, especially when one needs to understand the moduli space's boundary and corners, it is more efficient to use the polyfold theory of Hofer--Wysocki--Zehnder [HWZ1--4] since this gives a cleaner structure.
However, in the Gromov--Witten setting, the less technologically sophisticated approach via Kuranishi atlases still has value for applications, especially in very geometric situations such as \cite{McT}, or in situations in which the symplectic setting is very close to that of complex algebraic geometry, such as in \cite{Z2} or \cite{Mcu}. Another fundamental issue surfaced when we tried to understand how Floer's proof of the Arnold conjecture is extended to general symplectic manifolds using abstract regularization techniques. In the language of Kuranishi structures, it argues that a Kuranishi space $X$ of virtual dimension $0$, on which $S^1$ acts such that the fixed points $F\subset X$ are isolated solutions, allows for a perturbation whose zero set is given by the fixed points. At that abstract level, the argument in both \cite{FO} and \cite{LiuT} is that $(X{\smallsetminus} F)/S^1$ can be equipped with a Kuranishi structure of virtual dimension $-1$, which induces a trivial fundamental cycle for $X{\smallsetminus} F$. However, they give no details of this construction. It seems that any such pullback construction would require the $S^1$-action to extend to an ambient space $X{\smallsetminus} F\subset {\mathcal B}$ such that the Kuranishi structure for $(X{\smallsetminus} F)/S^1$ has domains in the quotient space ${\mathcal B}/S^1$. While polyfold theory offers a direct approach to this question of equivariant transversality, we believe that any VMC approach will require, as a starting point, the construction of a virtual neighbourhood as introduced in the present paper. Such a VMC approach -- which however we cannot comment on -- is now announced in \cite{FOOO12}. \noindent {\bf Acknowledgements:} We would like to thank Mohammed Abouzaid, Kenji Fukaya, Tom Mrowka, Kaoru Ono, Yongbin Ruan, Dietmar Salamon, Bernd Siebert, Cliff Taubes, Gang Tian, and Aleksey Zinger for encouragement and enlightening discussions about this project. We moreover thank MSRI, IAS, and BIRS for hospitality. \section{Regularizations of holomorphic curve moduli spaces} \label{s:fluff} One of the central technical problems in the theory of holomorphic curves, which provides many of the modern tools in symplectic topology, is to construct algebraic structures by extracting homological information from moduli spaces of holomorphic curves in general compact symplectic manifolds $(M,{\omega})$. We will refer to this technique as {\it regularization} and note that it requires two distinct components. On the one hand, some perturbation technique is used to achieve {\it transversality}, which gives the moduli space a smooth structure that induces a count or chain. On the other hand, some cobordism technique is used to achieve {\it invariance}, i.e.\ independence of the resulting algebraic structure from the choices involved in achieving transversality. The aim of this section is to give an overview of the different regularization approaches in the example of genus zero Gromov--Witten invariants $\bigl\langle {\alpha}_1,\ldots,{\alpha}_k \bigr\rangle_{A} \in {\mathbb Q}$. These are defined as a generalized count of $J$-holomorphic genus $0$ curves in class $A\in H_2(M;{\mathbb Z})$ that meet $k$ representing cycles of the homology classes ${\alpha}_i\in H_*(M)$. This number should be independent of the choice of $J$ in the contractible space of ${\omega}$-compatible almost complex structures, and of the cycles representing ${\alpha}_i$.
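For orientation, we recall the expected dimension behind this count (a standard index computation, stated here only as background, with $\dim_{\mathbb R} M = 2n$): the space of genus zero $J$-holomorphic curves in class $A$ with $k$ marked points has virtual dimension
$$
2n + 2\langle c_1({\rm T}M), A\rangle + 2k - 6 ,
$$
so the above count is expected to be nonzero only if the codimensions of the constraints ${\alpha}_1,\ldots,{\alpha}_k$ add up to this number.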
For complex structures $J$ one can work in the algebraic setting, in which the curves are cut out by holomorphic functions on $M$, but general symplectic manifolds do not support an integrable $J$. For non-integrable $J$, the approach introduced by Gromov \cite{GRO} is to view the (pseudo-)holomorphic curves as maps to $M$ satisfying the Cauchy--Riemann PDE, modulo reparametrizations by automorphisms of the complex domain. To construct the Gromov--Witten moduli spaces of holomorphic curves, one starts out with the typically noncompact quotient space $$ {\mathcal M}_k(A,J) := \bigl\{ \bigl( f: S^2 \to M, {\bf z}\in (S^2)^k {\smallsetminus} \Delta \bigr) \,\big|\, f_*[S^2]=A , \; {\overline{\partial}}_J f = 0 \bigr\} / {\rm PSL}(2,{\mathbb C}) $$ of equivalence classes of tuples $(f,{\bf z})$, where $f$ is a $J$-holomorphic map, the marked points ${\bf z}= (z_1,\ldots, z_k)$ are pairwise distinct, and the equivalence relation is given by the reparametrization action ${\gamma}\cdot(f,{\bf z})=(f\circ{\gamma},{\gamma}^{-1}({\bf z}))$ of the M\"obius group ${\rm PSL}(2,{\mathbb C})$. This space is contained (but not necessarily dense) in the compact moduli space ${\overline{\mathcal M}}_{k}(A,J)$ formed by the equivalence classes of $J$-holomorphic genus $0$ stable maps $f:{\Sigma} \to M$ in class $A$ with $k$ pairwise distinct marked points. There is a natural evaluation map \begin{equation} \label{eq:ev} {\rm ev}: {\overline{\mathcal M}}_{k}(A,J)\to M^k, \quad [{\Sigma},f,(z_1,\ldots,z_k)]\mapsto \bigl(f(z_1),\ldots,f(z_k)\bigr), \end{equation} and one expects the Gromov--Witten invariant $$ \bigl\langle {\alpha}_1,\ldots,{\alpha}_k \bigr\rangle_{A}:={\rm ev}_*[{\overline{\mathcal M}}_{k}(A,J)]\cap ({\alpha}_1\times\ldots\times {\alpha}_k) $$ to be defined as the intersection number of a homology class ${\rm ev}_*[{\overline{\mathcal M}}_{k}(A,J)]\in H_*(M^k;{\mathbb Q})$ with the class ${\alpha}_1\times\ldots\times {\alpha}_k$. The construction of this homology class requires a {\it regularization} of ${\overline{\mathcal M}}_{k}(A,J)$. In the following Sections~\ref{ss:geom}--\ref{ss:kur} we give a brief overview of the approaches using geometric means or an abstract polyfold setup, and review the fundamental ideas behind Kuranishi atlases. Sections~\ref{ss:alg} and~\ref{ss:top} then discuss the algebraic and topological issues in constructing a virtual fundamental class from a Kuranishi atlas. \subsection{Geometric regularization} \hspace{1mm}\\ \label{ss:geom} For some special classes of symplectic manifolds, the regularization of holomorphic curve moduli spaces can be achieved by a choice of the almost complex structure $J$, or more generally a perturbation of the Cauchy--Riemann equation ${\overline{\partial}}_J f =0$ that preserves the symmetry under reparametrizations, and whose Hausdorff compactification is given by nodal solutions. Note that these properties are generally not preserved by perturbations of a nonlinear Fredholm operator such as ${\overline{\partial}}_J$, so this approach requires a class of perturbations that preserves the geometric properties of ${\overline{\partial}}_J$. The construction of Gromov--Witten invariants from a regularization of ${\overline{\mathcal M}}_{k}(A,J)$ most easily fits into this approach if $A$ is a homology class for which ${\omega}(A)>0$ is minimal, since then $A$ cannot be represented by a multiply covered or nodal holomorphic sphere.
For short, we call such $A$ ${\omega}$-{\it minimal}. In this case ${\mathcal M}_{k}(A,J')$ is smooth for generic $J'$, and compact if $k\le 3$. More generally, this approach applies to all spherical Gromov--Witten invariants in semipositive symplectic manifolds, since in this case it is possible to compactify the image ${\rm ev}({\mathcal M}_{k}(A,J'))$ by adding codimension-$2$ strata. Full details for this construction can be found in \cite{MS}. The most general form of this {\bf geometric regularization approach} proceeds in the following steps. \begin{enumlist} \item{\bf Fredholm setup:} Write the (not necessarily compact) moduli space ${\mathcal M} = {\sigma}^{-1}(0)/{\rm Aut}$ as the quotient, by an appropriate reparametrization group ${\rm Aut}$, of the zero set of an equivariant smooth Fredholm section ${\sigma}:\widehat{\mathcal B}\to\widehat{\mathcal E}$ of a Banach vector bundle $\widehat{\mathcal E}\to\widehat{\mathcal B}$. For example, ${\mathcal M}={\mathcal M}_k(A,J)$ is cut out from $\widehat{\mathcal B}=W^{m,p}(S^2,M)$ by the Cauchy--Riemann operator ${\sigma}={\overline{\partial}}_J$, which is equivariant with respect to ${\rm Aut}={\rm PSL}(2,{\mathbb C})$. \item {\bf Geometric perturbations:} Find a Banach manifold ${\mathcal P}\subset{\Gamma}^{\rm Aut}(\widehat{\mathcal E})$ of {\it equivariant} sections for which the perturbed sections ${\sigma}+p$ have the same compactness properties as ${\sigma}$. For example, the contractible set ${\mathcal J}^\ell$ of compatible ${\mathcal C}^\ell$-smooth almost complex structures for $\ell\geq m$ provides equivariant sections $p={\overline{\partial}}_{J'}-{\overline{\partial}}_J$ for all $J'\in{\mathcal J}^\ell$. Moreover, $J'$-holomorphic curves also have a Gromov compactification ${\overline{\mathcal M}}_k(A,J')$. \item{\bf Sard--Smale:} Check transversality of the section $(p,f)\mapsto ({\sigma} + p)(f)$ to deduce that the universal moduli space $$ {\textstyle \bigcup_{p\in{\mathcal P}}} \; \{p\}\times ({\sigma} + p)^{-1}(0) \;\subset\; {\mathcal P}\times\widehat{\mathcal B} $$ is a differentiable Banach manifold. (In the example it is ${\mathcal C}^\ell$-differentiable.) Then the regular values of the projection to ${\mathcal P}$ (guaranteed by the Sard--Smale theorem for sufficiently high differentiability of the universal moduli space) provide a comeagre subset ${\mathcal P}^{\rm reg}\subset{\mathcal P}$ for which the perturbed sections ${\sigma}_p:={\sigma}+p$ are transverse to the zero section. For holomorphic curves and perturbations given by ${\mathcal J}^\ell$, this transversality holds if all holomorphic maps are {\it somewhere injective}. For ${\omega}$-minimal $A$ this weak form of injectivity is a consequence of unique continuation (cf.\ \cite[Chapter~2]{MS}), but for general Gromov--Witten moduli spaces this step only applies to the subset ${\mathcal M}_k^*(A,J)$ of simple (i.e.\ not multiply covered) curves. \item {\bf Quotient:} For $p\in {\mathcal P}^{\rm reg}$, the perturbed zero set ${\sigma}_p^{-1}(0)\subset \widehat{\mathcal B}$ is a smooth manifold by the implicit function theorem. If, moreover, the action of ${\rm Aut}$ on ${\sigma}_p^{-1}(0)$ is smooth, free, and properly discontinuous, then the moduli space ${\mathcal M}^p := {\sigma}_p^{-1}(0) / {\rm Aut}$ is a smooth manifold. For holomorphic curves, the smoothness of the action can be achieved if all solutions of ${\overline{\partial}}_{J'}f=0$ are smooth.
For that purpose one can use e.g.\ Taubes' trick to find regular perturbations given by smooth $J'$. \item {\bf Compactification:} For Gromov--Witten moduli spaces with $A$ ${\omega}$-minimal and $k\le 3$, the previous steps already give ${\mathcal M}_k(A,J')$ the structure of a compact smooth manifold. Thus the Gromov--Witten invariants can be defined using its fundamental class $[{\mathcal M}_{k}(A,J')]$. In the semipositive case, the previous steps give ${\mathcal M}_k^*(A,J')$ a smooth structure such that ${\rm ev}:{\mathcal M}_k^*(A,J')\to M^k$ defines a pseudocycle. Indeed, its image is compact up to ${\rm ev}\bigl({\overline{\mathcal M}}_{k}(A,J) {\smallsetminus} {\mathcal M}_{k}^*(A,J)\bigr)$, which is given by the images of nodal and multiply covered maps. Since the underlying simple curves are regular and of lower Fredholm index, these additional sets are smooth and of codimension at least $2$. A more general approach for showing this pseudocycle property is to use {\bf gluing techniques}, which also apply to moduli spaces whose regularization is expected to have boundary. Generally, one obtains a compactification ${\overline{\mathcal M}}{}^p$ of the perturbed moduli space by constructing gluing maps into ${\mathcal M}^p$, whose images cover the complement of a compact set, and which are compatible on their overlaps. In the Gromov--Witten case the gluing construction provides local homeomorphisms $$ \bigl((1,\infty)\times S^1\bigr)^{N} \times {\overline{\mathcal M}}{}^{*N}_k(A,J') \;\hookrightarrow\; {\mathcal M}_{k}(A,J') $$ to neighbourhoods of the strata ${\overline{\mathcal M}}{}^{*N}_k(A,J')$ of simple stable curves with $N$ nodes. The Gromov compactification ${\overline{\mathcal M}}{}^p={\overline{\mathcal M}}{}_k^*(A,J')$ is then constructed by completing each cylinder to a disc $\bigl((1,\infty)\times S^1\bigr) \cup \{\infty\}$, where we identify the added set $\{\infty\}\times {\overline{\mathcal M}}{}^{*N}_k(A,J')$ with the stratum ${\overline{\mathcal M}}{}^{*N}_k(A,J') \subset {\overline{\mathcal M}}_{k}(A,J)$. Then ${\overline{\mathcal M}}_{k}(A,J)$ is compact up to stable maps containing multiply covered components. \item {\bf Invariance:} To prove that invariants extracted from the perturbed moduli space ${\overline{\mathcal M}}{}^p$ are well defined, one chooses ${\mathcal P}$ to be a connected neighbourhood of the zero section and constructs a cobordism between ${\overline{\mathcal M}}{}^{p_0}$ and ${\overline{\mathcal M}}{}^{p_1}$ for any regular pair $p_0,p_1\in{\mathcal P}^{\rm reg}$ by repeating the last five steps for the section $[0,1]\times\widehat{\mathcal B}\to\widehat{\mathcal E}$, $(t,b)\mapsto ({\sigma}+p_t)(b)$ for any smooth path $(p_t)_{t\in[0,1]}$ in ${\mathcal P}$. In the semipositive Gromov--Witten example, the same argument is applied to find a pseudochain with boundary ${\rm ev}({\mathcal M}_{k}^*(A,J_0)) \sqcup {\rm ev}({\mathcal M}_{k}^*(A,J_1))$. \end{enumlist} \begin{remark} \rm For Gromov--Witten theory, the evaluation map \eqref{eq:ev} generally does not represent a well defined rational homology class in $M^k$. Although ${\overline{\mathcal M}}_{k}(A,J)$ is compact and has a well understood formal dimension $d$ (given by the Fredholm index of ${\overline{\partial}}_J$ minus the dimension of the automorphism group), it need not be a manifold or orbifold of dimension $d$ for any~$J$.
Indeed, it may contain subsets of dimension larger than $d$ consisting of stable maps with a component that is a multiple cover on which $c_1(f^*{\rm T}M)$ is negative. In the case of spherical Gromov--Witten theory on manifolds with $[{\omega}]\in H^2(M;{\mathbb Q})$, it is possible to avoid this problem by first finding a consistent way to ``stabilize the domain'' to obtain a global description of the moduli space that involves no reparametrizations, and then allowing a richer class of perturbations; cf.\ \cite{CM, Io}. This approach has recently been extended in \cite{IoP}. \end{remark} The main nontrivial steps in the geometric approach, which need to be performed in careful detail for any given moduli space, are the following. \begin{itemlist} \item Each setting requires a different, precise definition of a Banach space of perturbations. Note in particular that spaces of maps with compact support in a given open set are not compact. The proof of transversality of the universal section is very sensitive to the specific geometric setting, and in the case of varying $J$ requires each holomorphic map to have suitable injectivity properties. \item The gluing analysis is a highly nontrivial Newton iteration scheme and would benefit from an abstract framework, which does not seem to be available at present. In particular, it requires surjective linearized operators, and so only applies after perturbation. Moreover, gluing of noncompact spaces requires uniform quadratic estimates, which do not hold in general. Finally, injectivity of the gluing map does not follow from the Newton iteration and needs to be checked in each geometric setting. \end{itemlist} \subsection{Approaches to abstract regularization} \hspace{1mm}\\ \label{ss:approach} In order to obtain a regularization scheme that is generally applicable to holomorphic curve moduli spaces, it seems to be necessary to work with abstract perturbations $f\mapsto p(f)$ that need not be differential operators. Thus we recast the question more abstractly into one of regularizing a compactification of the quotient of the zero set of a Fredholm operator.\footnote{As pointed out by Aleksey Zinger, the ``Gromov compactification'' of a moduli space of holomorphic curves in fact need not even be a compactification in the sense of containing the holomorphic curves with smooth domains as a dense subset. For example, it could contain an isolated nodal curve.} From this abstract differential geometric perspective, the geometric regularization scheme provides a highly nontrivial generalization of the well known finite dimensional regularization based on Sard's theorem, see e.g.\ \cite[ch.2]{GuillP}. \label{finite reg} \noindent {\bf Finite Dimensional Regularization Theorem:} {\it Let $E\to B$ be a finite dimensional vector bundle, and let $s:B\to E$ be a smooth section such that $s^{-1}(0)\subset B$ is compact. Then there exists a compactly supported, smooth perturbation section $p:B\to E$ such that $s+p$ is transverse to the zero section, and hence $(s+p)^{-1}(0)$ is a smooth manifold. Moreover, $[(s+p)^{-1}(0)]\in H_*(B;{\mathbb Z})$ is independent of the choice of such perturbations.} \begin{remark}\rm \label{equivariant BS} \begin{itemlist} \item Using multisections, this theorem generalizes to equivariant sections under a finite group action, yielding orbifolds as regularized spaces and thus a well defined rational homology class $[(s+p)^{-1}(0)]\in H_*(B;{\mathbb Q})$.
\item For nontrivial Lie groups $G$ acting by bundle maps on $E$, equivariance and transversality are in general contradictory requirements on a section. Only if $G$ acts smoothly, freely, and properly on $B$ and $E$, can one obtain $G$-equivariant transverse sections by pulling back transverse sections of $E/G\to B/G$. \item Finite dimensional regularization also holds for noncompact zero sets $s^{-1}(0)$, but the homological invariance of the zero set fails in the simplest examples. \item There have been several attempts to extend this theorem to the case of a Fredholm section $s:\widehat{\mathcal B}\to \widehat {\mathcal E}$ of a Banach (orbi)bundle. The paper by Lu--Tian~\cite{LuT} develops some abstract analysis, which we have not studied in detail since it does not apply to Gromov--Witten moduli spaces; see below. Similarly, Chen--Tian present in~\cite[\S5]{CT} an idea of a \lq\lq Fredholm system" that in its global form (even after replacing the nonsensical properness assumption by compactness of the zero set) is irrelevant to most Gromov--Witten moduli spaces, and when localized runs into the same problems to do with smoothness of coordinate changes and lack of suitable cut off functions that we discuss in detail in Remark~\ref{LTBS} below. \end{itemlist} \end{remark} Note here that no typical moduli space of holomorphic curves, nor even the moduli spaces in gauge theory or Morse theory, has a currently available description as the zero set of a Fredholm section in a Banach groupoid bundle. In the case of holomorphic curves or Morse trajectories, the first obstacle to such a description is the differentiability failure of the action of ${\rm Aut}$ on any Sobolev space of maps $\widehat{\mathcal B}$ explained in Section~\ref{ss:nodiff}. In gauge theory, the action of the gauge group typically is smooth, but in all theories the typical moduli spaces are compactified by gluing constructions, for which there is not even a natural description as a zero set in a topological vector bundle. In comparison, the geometric regularization approach works with a smooth section ${\sigma}:\widehat{\mathcal B}\to\widehat{\mathcal E}$ of a Banach bundle, which has a noncompact solution set ${\sigma}^{-1}(0)$ and is equivariant under the action of a noncompact Lie group. From an abstract topological perspective, the nontrivial achievement of this approach is that it produces equivariant transverse perturbations and a well defined homology class by compactifying quotients of perturbed spaces, rather than by directly perturbing the compactified moduli space. {\beta}gin{remark}\rm Another notable analytic feature of the perturbations obtained by altering $J$ is that they preserve the compactness and Fredholm properties of the nonlinear differential operator, despite changing it nonlinearly in highest order. Indeed, in local coordinates, ${\sigma}= {\partial}artial_s + J {\partial}artial_t$ is a first order operator, and changing $J$ to $J'$ amounts to adding another first order operator $p=(J'-J){\partial}artial_t$. This preserves the Fredholm operator since it preserves ellipticity of the symbol. In general, one retains Fredholm properties only with lower order perturbations, i.e.\ by adding a compact operator to the linearization. For the Cauchy--Riemann operator, that would mean an operator involving no derivatives, e.g.\ $p(f)=X\circ f$ given by a vector field. 
Note also that the compactness properties of solution sets of nonlinear operators are generally not even preserved under lower order perturbations that are supported in a neighbourhood of a compact solution set, since in the infinite dimensional setting such neighbourhoods are never compact. \end{remark} This discussion shows that a regularization scheme for general holomorphic curve moduli spaces needs to work with more abstract perturbations and directly on the compactified moduli space -- i.e.\ after quotienting and taking the Gromov compactification. The following approaches are currently used in symplectic topology. \begin{enumlist} \item The {\bf global obstruction bundle approach}, as in \cite{LiuT, Sieb, Mcv}, aims to extend successful techniques from algebraic geometry and gauge theory to the holomorphic curve setting, by means of a weak orbifold structure on a suitably stratified Banach space completion of the space of equivalence classes of smooth stable maps. \item The {\bf Kuranishi approach}, introduced by Fukaya--Ono \cite{FO} and implicitly Li--Tian \cite{LT} in the 1990s, aims to construct a virtual fundamental class from finite dimensional reductions of the equivariant Fredholm problem and gluing maps near nodal curves. \item The {\bf polyfold approach}, developed by Hofer--Wysocki--Zehnder since 2000, aims to generalize the finite dimensional regularization theorem so that it applies directly to the compactified moduli space, by expressing it as the zero set of a smooth section. \end{enumlist} The first two approaches construct a virtual fundamental class $[\overline{\mathcal M}_{k}(A,J)]^{\rm vir}$ and are hence also referred to as virtual transversality approaches. They have been used for concrete calculations of Gromov--Witten invariants, e.g.\ in \cite{McT,Mcu,Z}, by building a VMC using geometrically meaningful perturbations. The third approach is more functorial and produces a VMC with significantly more structure, e.g.\ as a smooth orbifold in the case of Gromov--Witten invariants \cite{HWZ:gw}. This allows one to define e.g.\ symplectic field theory (SFT) invariants on chain level. The book \cite{FOOO} uses the Kuranishi approach to a similar end in the construction of chain level Lagrangian Floer theory. We will make no further comments on chain level theories. Instead, let us compare how the different approaches handle the fundamental analytic issues. \noindent {\bf Dividing by the automorphism group:} Unlike the smooth action of the (infinite dimensional) gauge group on Sobolev spaces of connections, the reparametrization groups (though finite dimensional) do not act differentiably on any known Banach space completion of spaces of smooth maps (or pairs of domains and maps from them); see Section~\ref{s:diff}. In the global obstruction bundle approach this causes a significant differentiability failure in the relation between local charts in Liu--Tian \cite{LiuT}, and hence in the survey article McDuff~\cite{Mcv} and subsequent papers such as Lu~\cite{GLu}. For more details of the problems here see Remark~\ref{LTBS}. This differentiability issue was not mentioned in \cite{FO,LT}. However, as we explain in detail in Section~\ref{s:construct} below, it needs to be addressed when defining charts that combine two or more basic charts, since this must be done in the Fredholm setting {\it before} passing to a finite dimensional reduction. We make this explicit in our notion of ``sum chart'', but the same construction is used implicitly in \cite{FO,FOOO}.
In this setting, it can be overcome by working with special obstruction bundles, as we outline in Section~\ref{ss:gw}. In the polyfold approach, this issue is resolved by replacing the notion of smoothness in Banach spaces by a notion of scale-smoothness which applies to the reparametrization action. To implement this, one must redevelop linear as well as nonlinear functional analysis in the scale-smooth category. \noindent {\bf Gromov compactification:} Sequences of holomorphic maps can develop various kinds of singularities: bubbling (energy concentration near a point), breaking (energy diverging into a noncompact end of the domain -- sometimes also induced by stretching in the domain), and buildings (parts of the image diverging into a noncompact end of the target -- sometimes also induced by stretching the domain at a hypersurface). These limits are described as tuples of maps from various domains, capturing the Hausdorff limit of the images. Note that in quotienting by reparametrizations, the relevant group for the limit object is a substantially larger product of various reparametrization groups. In the geometric and virtual regularization approaches, charts near the singular limit objects are constructed by gluing analysis, which involves a pregluing construction and a Newton iteration. The pregluing creates from a tuple of holomorphic maps a single map from a nonsingular domain, which solves the Cauchy--Riemann equation up to a small error. The Newton iteration then requires quadratic estimates for the linearized Cauchy--Riemann operator to find a unique exact solution nearby. In principle, the construction of a continuous gluing map should always be possible along the lines of \cite{MS}, though establishing the quadratic estimates is nontrivial in each setting. However, additional arguments specific to each setting are needed to prove surjectivity, injectivity, and openness of the gluing map. Moreover, while homeomorphisms onto their image suffice for the geometric regularization approach, the virtual regularization approaches all require stronger differentiability of the gluing map; e.g.\ smoothness in \cite{FO,FOOO,J}. None of \cite{LT,LiuT,FO,FOOO} give all details for the construction of a gluing map. In particular, \cite{FO,FOOO} construct gluing maps with image in a space of maps, but give few details on the induced map to the quotient space, even in the nonnodal case as discussed in Remark~\ref{FOglue}. For closed nodal curves, \cite[Chapter~10]{MS} constructs continuous gluing maps in full detail, but does not claim that the glued curves depend differentiably on the gluing parameter $a\in {\mathbb C}$ as $a\to 0$. By rescaling $|a|$, it is possible to establish more differentiability for $a\to 0$. For example, Ruan~\cite{Ruan} uses local ${\mathcal C}^1$ gluing maps. However, as pointed out by Chen--Li~\cite{ChenLi}, this ${\mathcal C}^1$ structure is not intrinsic, so it may not be preserved under coordinate changes. This problem was ignored in \cite{FO}, but discussed both in the appendix to \cite{FOOO} and more recently in \cite{FOOO12}. The polyfold approach reinterprets the pregluing construction as the chart map for an ambient space $\widetilde {\mathcal B}$ which contains the compactified moduli space, essentially making the quadratic estimates part of the definition of a Fredholm operator on this space. The Newton iteration is replaced by an abstract implicit function theorem for transverse Fredholm operators in this setting.
The injectivity and surjectivity issues then only need to be dealt with at the level of pregluing. Here injectivity fails dramatically but in a way that can be reinterpreted in terms of a generalization of a Banach manifold chart, where the usual model domain of an open subset in a Banach space is replaced by a relatively open subset in the image of a scale-smooth retraction of a scale-Banach space. This makes it necessary to redevelop differential geometry in the context of retractions and scale-smoothness. \subsection{Regularization via polyfolds} \hspace{1mm}\\ {\lambda}bel{ss:poly} In the setting of holomorphic maps with trivial isotropy (but allowing for general compactifications by e.g.\ nodal curves), the result of the entirely abstract development of scale-smooth nonlinear functional analysis and retraction-based differential geometry is the following direct generalization of the finite dimensional regularization theorem, see \cite{HWZ3}. The following is the relevant version for trivial isotropy, in which ambient spaces have the structure of an M-polyfold --- a generalization of the notion of Banach manifold, essentially given by charts in open subsets of images of retraction maps, and scale-smooth transition maps between the ambient spaces of the retractions. \noindent {\bf M-polyfold Regularization Theorem:} {\it Let $\widetildelde{{\mathcal E}}\to\widetildelde{{\mathcal B}}$ be a strong M-polyfold bundle, and let $s:\widetildelde{{\mathcal B}}\to\widetildelde{{\mathcal E}}$ be a scale-smooth Fredholm section such that $s^{-1}(0)\subset\widetildelde{{\mathcal B}}$ is compact. Then there exists a class of perturbation sections $p:\widetildelde{{\mathcal B}}\to\widetildelde{{\mathcal E}}$ supported near $s^{-1}(0)$ such that $s+p$ is transverse to the zero section, and hence $(s+p)^{-1}(0)$ carries the structure of a smooth finite dimensional manifold. Moreover, $[(s+p)^{-1}(0)]\in H_*(\widetildelde{\mathcal B},{\mathbb Z})$ is independent of the choice of such perturbations. } For dealing with nontrivial, finite isotropies, \cite{HWZ3} transfers this theory to a groupoid setting to obtain a direct generalization of the orbifold version of the finite dimensional regularization theorem. It is these groupoid-type ambient spaces $\widetildelde{\mathcal B}$, whose object and morphism spaces are M-polyfolds, that are called polyfolds. These abstract regularization theorems should be compared with the definition of Kuranishi atlas and the abstract construction of a virtual fundamental class for any Kuranishi atlas that will be outlined in the following sections. While the language of polyfolds and the proof of the regularization theorems in \cite{HWZ1,HWZ2,HWZ3} is highly involved, it seems to be developed in full detail and is readily quotable. A survey of the basic philosophy and language is now available in \cite{gffw}. Just as in the construction of a Kuranishi atlas for a given holomorphic curve moduli space discussed in Section~\ref{s:construct} below, the application of the polyfold regularization approach still requires a description of the compactified moduli space as the zero set of a Fredholm section in a polyfold bundle. It is here that the polyfold approach promises the most revolutionary advance in regularization techniques. Firstly, fiber products of moduli spaces with polyfold descriptions are naturally described as zero sets of a Fredholm section over a product of polyfolds. 
For example, one can obtain a polyfold setup for the PSS morphism by combining the polyfold setup for SFT with a smooth structure on Morse trajectory spaces, see \cite{afw:arnold}. Secondly, Hofer--Wysocki--Zehnder are currently working on formalizing a ``modular'' approach to the polyfold axioms in such a way that the analytic setup can be given locally in domain and target for every singularity type. With that, the polyfold setup for a new moduli space that combines previously treated singularities in a different way would merely require a Deligne--Mumford type theory for the underlying spaces of domains and targets. {\beta}gin{remark} \rm {\lambda}bel{polyfold BS checklist} While the polyfold framework is a very powerful method for constructing algebraic invariants from holomorphic curve moduli spaces, it also has some pitfalls in geometric applications. {\beta}gin{itemlist} \item Some caution is required with arguments involving the geometric properties of solutions after regularization. The reason for this is that the perturbed solutions do not solve a PDE but an abstract compact perturbation of the Cauchy--Riemann equation. Essentially, one can only work with the fact that the perturbed solutions can be made to lie arbitrarily close to the unperturbed solutions in any metric that is compatible with the scale-topology (e.g.\ any ${\mathcal C}^k$-metric in the case of closed curves). \item Despite reparametrizations acting scale-smoothly on spaces of maps, the question of equivariant regularization for smooth, free, proper actions remains nontrivial due to the interaction with retractions, i.e.\ gluing constructions. In the example of the $S^1$-action on spaces of Floer trajectories for an autonomous Hamiltonian, the unregularized compactified Floer trajectory spaces of virtual dimension $0$ may contain broken trajectories. The corresponding stratum of the quotient space, $$ {\overlineerline {\Mm}}(p_-,p_+)/S^1 \;\supset\; {\textstyle \bigcup_q} \bigl({\overlineerline {\Mm}}(p_-,q) \tildemes {\overlineerline {\Mm}}(q,p_+)\bigr)/S^1 , $$ is an $S^1$-bundle over the fiber product ${\overlineerline {\Mm}}(p_-,q)/S^1\tildemes {\overlineerline {\Mm}}(q,p_+)/S^1$ of quotient spaces, rather than the fiber product itself. Due to these difficulties, as yet, there is no quotient theorem for polyfolds, and hence no understanding of when a description of ${\overlineerline {\Mm}}$ as zero set of an $S^1$-equivariant Fredholm section would induce a description of ${\overlineerline {\Mm}}/S^1$ as zero set of a Fredholm section with smaller Fredholm index. Moreover, such a quotient would not even immediately induce an equivariant regularization of the Floer trajectory spaces compatible with gluing. \end{itemlist} \end{remark} \subsection{Regularization via Kuranishi atlases} \hspace{1mm}\\ {\lambda}bel{ss:kur} Continuing the notation of Section~\ref{ss:geom}, the basic idea of regularization via Kuranishi atlases is to describe the compactified moduli space ${\overlineerline {\Mm}}$ by local finite dimensional reductions of the ${\rm Aut}$-equivariant section ${\sigma}:\widehat{\mathcal B} \to \widehat{\mathcal E}$, and by gluing maps near the nodal curves. There are different ways to formalize the construction. 
We will proceed via a notion of {\bf Kuranishi atlas} in the following steps.{ } {\beta}gin{enumlist} \item {\bf Compactness:} Equip the compactified moduli space ${\overlineerline {\Mm}}$ with a compact, metrizable topology; namely as Gromov compactification of ${\mathcal M}={\sigma}^{-1}(0)/{\rm Aut}$. { } \item {\bf Equivariant Fredholm setup:} As in the geometric approach, a significant subset ${\mathcal M}\subset{\overlineerline {\Mm}}$ of the compactified moduli space is given as the zero set of a Fredholm section modulo a finite dimensional Lie group, \[ {\beta}gin{aligned} &{\partial}hantom{{\mathcal M}} \\ &{\partial}hantom{{\mathcal M}} \\ & {\mathcal M} = \frac{{\sigma}^{-1}(0)}{{\rm Aut}} , \end{aligned} \qquad\qquad \xymatrix{ \widehat{\mathcal E} \ar@(ul,dl)_{\textstyle \rm Aut}\ar@{->}[d] \\ \widehat{\mathcal B} \ar@(ul,dl)_{\textstyle \rm Aut} \ar@/_1pc/[u]_{\textstyle {\sigma}} } \] One can now relax the assumption of $\rm Aut$ acting freely to the requirement that the isotropy subgroup ${\Gamma}_{f}:=\{ {\gamma} \in {\rm Aut} \,|\, {\gamma}\cdot f = f \}$ be finite for every solution $f\in{\sigma}^{-1}(0)$.{ } \item {\bf Finite dimensional reduction:} Construct {\bf basic Kuranishi charts} for every ${[f]\in{\mathcal M}}$, \[ {\beta}gin{aligned} &{\partial}hantom{{\mathcal M}} \\ &{\partial}hantom{{\mathcal M}} \\ {\mathcal M} \; \overlineerset{{\partial}si_f}{\longhookleftarrow} \;\frac{\tilde s_f^{-1}(0)}{{\Gamma}_f} , \end{aligned} \qquad\qquad \xymatrix{ *+[r]{\widetildelde E_f } \ar@(ul,dl)_{\textstyle {\Gamma}_f} \ar@{->}[d] \\ *+[r]{U_f} \ar@(ul,dl)_{\textstyle {\Gamma}_f} \ar@/_1pc/[u]_{\textstyle \tilde s_f} } \] which depend on a choice\footnote{ In practice the Kuranishi data will be constructed from many choices, including that of a representative. So we try to avoid false impressions by using the subscript $f$ rather than $[f]$. } of representative $f$, and consist of the following data: {\beta}gin{itemize} \item the {\bf domain} $U_f$ is a finite dimensional smooth manifold (constructed from a local slice of the ${\rm Aut}$-action on a thickened solution space $\{g \,|\, {\overline {{\partial}}_J} g \in \widehat E_{f}\}$); \item the {\bf obstruction bundle} $\widetilde E_f =\widehat E_f|_{U_f}\to U_f$, a finite rank vector bundle (constructed from the cokernel of the linearized Cauchy--Riemann operator at $f$), which is isomorphic $\widetilde E_f\cong U_f\tildemes E_f$ to a trivial bundle, whose fiber $E_f$ we call the {\bf obstruction space}; \item the {\bf section} $\tilde s_{f}: U_{f} \to \widetildelde E_{f}$ (constructed from $g \mapsto {\overline {{\partial}}_J} g$), which induces a smooth map $s_f : U_f\to E_f$ in the trivialization; \item the {\bf isotropy group} ${\Gamma}_{f}$ acting on $U_{f}$ and $E_{f}$ such that $\tilde s_{f}$ is equivariant; \item the {\bf footprint map} ${\partial}si_{f}: \tilde s_{f}^{-1}(0)/{\Gamma}_{f} \to {\overlineerline {\Mm}}$, a homeomorphism to a neighbourhood of $[f]\in{\overlineerline {\Mm}}$ (constructed from $\{g \,|\, {\overline {{\partial}}_J} g=0 \} \ni g \mapsto [g]$). 
\end{itemize} (See Section~\ref{ss:Kchart} for a detailed outline of this construction for ${\overlineerline {\Mm}}_1(A,J)$ with ${\Gamma}_f=\{\rm id\}$.){ } \item {\bf Gluing:} Construct basic Kuranishi charts covering ${\overlineerline {\Mm}}{\smallsetminus} {\mathcal M}$ by combining finite dimensional reductions with gluing analysis similar to the geometric approach.{ } \item {\bf Compatibility:} Given a finite cover of ${\overlineerline {\Mm}}$ by the footprints of basic Kuranishi charts $\bigl({\bf K}_i = (U_i,E_i,{\Gamma}_i,s_i,{\partial}si_i)\bigr)_{i=1,\ldots,N}$, construct transition data satisfying suitable compatibility conditions. In the case of trivial isotropies ${\Gamma}_i=\{{\rm id}\}$, any notion of compatibility will have to induce the following minimal transition data for any element $[g]\in {\rm im\,}{\partial}si_i\cap {\rm im\,}{\partial}si_j\subset {\overlineerline {\Mm}}$ in an overlap of two footprints: {\beta}gin{itemize} \item a {\bf transition Kuranishi chart} ${\bf K}^{ij}_g = (U^{ij}_g,E^{ij}_g,s^{ij}_g,{\partial}si^{ij}_g)$ whose footprint ${\rm im\,}{\partial}si^{ij}_g \subset {\rm im\,}{\partial}si_i\cap {\rm im\,}{\partial}si_j$ is a neighbourhood of $[g]\in{\overlineerline {\Mm}}$; \item {\bf coordinate changes} $\Phi^{i,ij}_g : {\bf K}_i \to {\bf K}^{ij}_g$ and $\Phi^{j,ij}_g :{\bf K}_j \to {\bf K}^{ij}_g$ consisting of embeddings and linear injections $$ {\partial}hi^{\bullet,ij}_g :\; U_\bullet \supset V^{\bullet,ij}_g\; \longhookrightarrow\; U^{ij}_g , \qquad \widehat{\partial}hi^{\bullet,ij}_g :\; E_\bullet \; \longhookrightarrow\; E^{ij}_g \qquad \text{for}\; \bullet = i,j $$ which extend ${\partial}hi^{\bullet,ij}_g|_{{\partial}si_\bullet^{-1}({\rm im\,}{\partial}si^{ij}_g)} = ({\partial}si^{ij}_g)^{-1}\circ {\partial}si_\bullet$ to open subsets $V^{\bullet,ij}_g \subset U_\bullet$ such that $$ s^{ij}_g \circ {\partial}hi^{\bullet,ij}_g \; =\; \widehat{\partial}hi^{\bullet,ij}_g\circ s_\bullet \qquad \text{for}\; \bullet = i,j . $$ \end{itemize} Further requirements on the domains, an index or ``tangent bundle" condition, coordinate changes between multiple overlaps, and cocycle conditions are discussed in Section~\ref{ss:top}. Sections~\ref{s:diff}, \ref{s:construct} discuss the relevant smoothness issues. The collection of such data -- basic charts, transition charts and coordinate changes -- will be called a {\bf Kuranishi atlas}. { } \item {\bf Abstract Regularization:} For suitable transition data, (multivalued) perturbations $s_f': U_f\to E_f$ of the sections in the Kuranishi charts (both basic charts and the transition charts) should yield the following regularization theorem: { } \noindent {\it Any Kuranishi atlas ${\mathcal K}$ on a compact space ${\overlineerline {\Mm}}$ induces a virtual fundamental class $[{\overlineerline {\Mm}}]_{\mathcal K}^{\rm vir}$.} \item {\bf Invariance:} Prove that $[{\overlineerline {\Mm}}]_{\mathcal K}^{\rm vir}$ is independent of the different choices in the previous steps, in particular the choice of local slices and obstruction bundles. This involves the construction of a Kuranishi atlas for ${\overlineerline {\Mm}}\tildemes [0,1]$ that restricts to two given choices ${\mathcal K}^0$ on ${\overlineerline {\Mm}}\tildemes \{0\}$ and ${\mathcal K}^1$ on ${\overlineerline {\Mm}}\tildemes \{1\}$. Then an abstract cobordism theory for Kuranishi atlases should imply $[{\overlineerline {\Mm}}]_{{\mathcal K}^0}^{\rm vir}=[{\overlineerline {\Mm}}]_{{\mathcal K}^1}^{\rm vir}$. 
\end{enumlist} The construction of a Kuranishi atlas for a given holomorphic curve moduli space is explained in more detail in Section~\ref{s:construct}. The rest of the paper (Sections~\ref{s:chart},~\ref{s:Ks} and~\ref{s:VMC}) then discusses the abstract regularization theorem underlying this approach. For that purpose we restrict to the case of trivial isotropy groups ${\Gamma}_{f}=\{{\rm id}\}$ in all Kuranishi charts. This simplifies constructions in two ways. First, it guarantees the existence of restrictions of Kuranishi charts to any open subset of the footprint. (In general, restrictions of Kuranishi charts with nontrivial isotropy will only exist as generalized Kuranishi charts whose domain is a groupoid.) Second, for trivial isotropy one can construct the virtual fundamental class from the zero sets of perturbed sections $s_f + \nu_f \approx s_f$ that are transverse, $s_f+\nu_f{\partial}itchfork 0$, rather than replacing each ${\Gamma}_f$-equivariant section $s_f$ with a transverse multisection. \subsection{Algebraic issues in the use of germs for Kuranishi structures} \hspace{1mm}\\ {\lambda}bel{ss:alg} A natural approach, adopted in \cite{FO,J} for formalizing the compatibility of Kuranishi charts is to work with germs of charts and coordinate changes. Recent discussions have led to an agreement that this approach has serious algebraic issues in making sense of a cocycle condition for germs of coordinate changes, which we explain here. This issue is rooted in the fact that only the footprints of Kuranishi charts have invariant meaning, so that the coordinate changes between Kuranishi charts are fixed by the charts only on the zero sets. Thus in the definition of a germ of charts the traditional equivalence of maps with common restriction to a smaller domain is extended by equivalence of maps that are intertwined by a diffeomorphism of the domains. This leads to an ambiguity in the definition of germs of coordinate changes between germs of charts. As a result, germs of coordinate changes are defined as conjugacy classes of coordinate changes with respect to diffeomorphisms of the domains that fix the zero sets. However, in this setting the composition of germs is ill defined, so there is no meaningful cocycle condition. Alternatively, one might want to view (charts, coordinate changes, equivalences of coordinate changes) as a $2$-category with ill-defined $2$-composition. Either way, there is no general procedure for extracting the data necessary for a construction of a VFC: a finite set of charts and coordinate changes that satisfy the cocycle condition. In the following, we spell out in complete detail the usual definitions of germs and point out the algebraic issues that arise from the equivalence under conjugation. { } To simplify notation let us (falsely) pretend that all obstruction spaces are finite rank subspaces $E_f\subset{\mathcal E}$ of the same space and the linear maps $\widehat{\partial}hi$ in the coordinate changes are restrictions of the identity. We moreover assume that all isotropy groups are trivial ${\Gamma}_f=\{{\rm id}\}$ and only consider germs of charts and coordinate changes at a fixed point $p\in{\overlineerline {\Mm}}$. In the following all neighbourhoods are required to be open. { }{\mathbb N}I To the best of our understanding, \cite{FO,J} define a germ of Kuranishi chart as follows. 
{\beta}gin{itemlist} \item A Kuranishi chart consists of a neighbourhood $U\subset{\mathbb R}^k$ of $0$ for some $k\in{\mathbb N}$, a map $s:U\to E \subset{\mathcal E}$ with $s(0)=0$, and an embedding ${\partial}si:s^{-1}(0)\to{\overlineerline {\Mm}}$ with ${\partial}si(0)=p$, $$ {\overlineerline {\Mm}} \;\overlineerset{{\partial}si}{\longhookleftarrow}\; s^{-1}(0) \;\subset\; U \;\overlineerset{s}{\longrightarrow}\; E. $$ \item Two Kuranishi charts $(U_1,s_1,{\partial}si_1)$, $(U_2,s_2,{\partial}si_2)$ are equivalent if the transition map $$ {\partial}si_2^{-1}\circ{\partial}si_1 :\; s_1^{-1}(0) \;\supset\; {\partial}si_1^{-1}({\rm im\,}{\partial}si_2) \;\longrightarrow \; {\partial}si_2^{-1}({\rm im\,}{\partial}si_1) \;\subset\; s_2^{-1}(0) $$ extends to a diffeomorphism $\theta: U'_1\to U'_2$ between neighbourhoods $U_i' \subset U_i$ of $0$ that intertwines the sections $s_1|_{U'_1}= s_2|_{U'_2}\circ \theta$. \item A germ of Kuranishi chart at $p$ is an equivalence class of Kuranishi charts. \item[$\mathbf{\bigtriangleup} \hspace{-2.08mm} \raisebox{.3mm}{$\scriptscriptstyle !$}\,$] Note that $s_1= s_2\circ \theta$ does not necessarily determine the diffeomorphism $\theta$ except on the (usually singular and not dense) zero set. Hence there may exist auto-equivalences, i.e.\ a nontrivial diffeomorphism $\theta: U'_1\to U'_2$ between restrictions $U'_1,U'_2\subset U$ of the same Kuranishi chart $(U,\ldots)$, satisfying $\theta|_{s^{-1}(0)}={\rm id}$ and $s= s\circ \theta$. \end{itemlist} Next, one needs to define the notion of a coordinate change between two germs of Kuranishi charts $[U_I,s_I,{\partial}si_I]$ and $[U_J,s_J,{\partial}si_J]$.\footnote{ In the notation of the previous section, an example of a required coordinate change is one for index sets $I=\{i\}$, $J=\{i,j\}$, where $[U_I, \ldots]$ denotes the germ at $[f]$ induced by $(U_i,\ldots)$, and $[U_J, \ldots]$ denotes the germ at $[f]$ induced by $(U^{ij}_f, \ldots)$. } It is here that ambiguities in the compatibility conditions appear, so we give what seems like the most natural definition, which is at least closely related to \cite{FO,J}. {\beta}gin{itemlist} \item A coordinate change $(U_{IJ},{\partial}hi_{IJ}) : (U_I,s_I,{\partial}si_I)\to (U_J,s_J,{\partial}si_J)$ between Kuranishi charts consists of a neighbourhood $U_{IJ}\subset U_I$ of $0$ and an embedding ${\partial}hi_{IJ}:U_{IJ} \hookrightarrow U_J$ that extends the natural transition map and intertwines the sections, $$ {\partial}hi_{IJ}|_{s_J^{-1}(0)\cap U_{IJ}} \;=\; {\partial}si_J^{-1}\circ{\partial}si_I , \qquad s_J|_{U_{IJ}} \; =\; s_I \circ {\partial}hi_{IJ} . $$ \item Let $(U_{I,1},s_{I,1},{\partial}si_{I,1}){\sigma}m(U_{I,2},s_{I,2},{\partial}si_{I,2})$ and $(U_{J,1},s_{J,1},{\partial}si_{J,1}){\sigma}m (U_{J,2},s_{J,2},{\partial}si_{J,2})$ be two pairs of equivalent Kuranishi charts. 
Then two coordinate changes {\beta}gin{align*} & (U_{IJ,1},{\partial}hi_{IJ,1}) : (U_{I,1},s_{I,1},{\partial}si_{I,1})\to (U_{J,1},s_{J,1},{\partial}si_{J,1}) \\ \text{and}\quad & (U_{IJ,2},{\partial}hi_{IJ,2}) : (U_{I,2},s_{I,2},{\partial}si_{I,2})\to (U_{J,2},s_{J,2},{\partial}si_{J,2}) \end{align*} are equivalent if there exist diffeomorphisms $\theta_I: U'_{I,1}\to U'_{I,2}$ and $\theta_J: U'_{J,1}\to U'_{J,2}$ between smaller neighbourhoods of $0$ as in the definition of equivalence of Kuranishi charts (i.e.\ $s_{I,1}|_{U'_{I,1}}= s_{I,2}|_{U'_{I,2}}\circ \theta_I$ and $s_{J,1}|_{U'_{J,1}}= s_{J,2}|_{U'_{J,2}}\circ \theta_J$) that intertwine the coordinate changes on a neighbourhood of $0$, $$ \theta_J \circ {\partial}hi_{IJ,1} = {\partial}hi_{IJ,2} \circ \theta_I . $$ \item A germ of coordinate changes between germs of Kuranishi structures at $p$ is an equivalence class of coordinate changes. \item[$\mathbf{\bigtriangleup} \hspace{-2.08mm} \raisebox{.3mm}{$\scriptscriptstyle !$}\,$] As a special case, two coordinate changes ${\partial}hi_{IJ}, {\partial}hi'_{IJ} : (U_{I},\ldots )\to (U_{J},\ldots)$ between the same Kuranishi charts are equivalent if there exist auto-equivalences $\theta_I: U'_{I,1}\to U'_{I,2}$ and $\theta_J: U'_{J,1}\to U'_{J,2}$ such that $\theta_J \circ {\partial}hi_{IJ} = {\partial}hi'_{IJ} \circ \theta_I $. \item[$\mathbf{\bigtriangleup} \hspace{-2.08mm} \raisebox{.3mm}{$\scriptscriptstyle !$}\,$] Given a germ of coordinate change $[U_{IJ},{\partial}hi_{IJ}] : [U_I,\ldots]\to [U_J,\ldots]$ and choices of representatives $(U'_I,\ldots), (U'_J,\ldots)$ of the germs of charts, a representative of the coordinate change now only exists between suitable restrictions $(U''_I\subset U'_I,\ldots), (U''_J\subset U'_J,\ldots)$, and even with fixed choice of restrictions may not be uniquely determined. \end{itemlist} { }{\mathbb N}I Finally, it remains to make sense of the cocycle condition for germs of coordinate changes. At this point \cite{FO,J} simply write equations such as $[\Phi_{JK}]\circ [\Phi_{IJ}] = [\Phi_{IK}]$ on the level of conjugacy classes of maps, which do not make strict sense. The following is an attempt to phrase the cocycle condition on the level of germs, but we will see that it falls short of implying the existence of compatible choices of representatives that is required for the construction of a VMC. {\beta}gin{itemlist} \item Let $[U_{I},s_{I},{\partial}si_{I}]$, $[U_{J},s_{J},{\partial}si_{J}]$, and $[U_{K},s_{K},{\partial}si_{K}]$ be germs of Kuranishi charts. Then we say that a triple of germs of coordinate changes $[U_{IJ},{\partial}hi_{IJ}], [U_{JK},{\partial}hi_{JK}],[U_{IK},{\partial}hi_{IK}]$ satisfies the cocycle condition if there exist representatives of the coordinate changes between representatives $(U_{I},\ldots)$, $(U_{J},\ldots)$, $(U_{K},\ldots)$ of the charts, {\beta}gin{align*} (U_{IJ},{\partial}hi_{IJ}) : \;(U_{I},s_{I},{\partial}si_{I})&\to (U_{J},s_{J},{\partial}si_{J}) , \\ (U_{JK},{\partial}hi_{JK}) : (U_{J},s_{J},{\partial}si_{J})&\to (U_{K},s_{K},{\partial}si_{K}) , \\ (U_{IK},{\partial}hi_{IK}) : \;(U_{I},s_{I},{\partial}si_{I})&\to (U_{K},s_{K},{\partial}si_{K}) , \end{align*} such that on a neighbourhood of $0$ we have {\beta}gin{align} {\lambda}bel{algcc} {\partial}hi_{JK} \circ {\partial}hi_{IJ} = {\partial}hi_{IK} . 
\end{align} \item[$\mathbf{\bigtriangleup} \hspace{-2.08mm} \raisebox{.3mm}{$\scriptscriptstyle !$}\,$] Note that the above cocycle condition for some choice of representatives does not imply a cocycle condition for different choices of representatives. For example, suppose that ${\partial}hi_{IJ},{\partial}hi_{JK},{\partial}hi_{IK}$ satisfy \eqref{algcc}, and consider other representatives $$ {\partial}hi_{IJ}' = \theta_J \circ {\partial}hi_{IJ} \circ \theta_I^{-1} , \quad {\partial}hi_{JK}' = {\widetildelde h}eta_K \circ {\partial}hi_{JK} \circ {\widetildelde h}eta_J^{-1} $$ given by auto-equivalences $\theta_I,\theta_J,{\widetildelde h}eta_J, {\widetildelde h}eta_K$. Then these fit into a cocycle condition $$ {\partial}hi_{JK}' \circ {\partial}hi_{IJ}' \;=\; \bigl( {\widetildelde h}eta_K \circ {\partial}hi_{JK} \circ {\widetildelde h}eta_J^{-1} \bigr) \circ \bigl( \theta_J \circ {\partial}hi_{IJ} \circ \theta_I^{-1} \bigr) \;=\; {\partial}hi'_{IK} \;\in\; [{\partial}hi_{IK}] $$ only if ${\widetildelde h}eta_J=\theta_J$ and ${\partial}hi'_{IK} = {\widetildelde h}eta_K \circ {\partial}hi_{IK} \circ \theta_I^{-1}$. That is, the choice of one representative in the cocycle condition between three germs of coordinate changes essentially fixes the choice of the other two representatives. This causes problems as soon as one considers the compatibility of four or more coordinate changes. \end{itemlist} { } Now suppose that a Kuranishi structure on ${\overlineerline {\Mm}}$ is given by germs of charts at each point and germs of coordinate changes between each suitably close pair of points, satisfying a cocycle condition. Then the fundamentally important first step towards the construction of a VMC is the claim of \cite[Lemma~6.3]{FO} that any such Kuranishi structure has a ``good coordinate system". The latter, though the definitions in \cite{FO,FOOO} are slightly ambiguous, is a finite cover of ${\overlineerline {\Mm}}$ by partially ordered charts (where two charts should be comparable iff the footprints intersect) with coordinate changes according to the partial order, and satisfying a weak cocycle condition. In order to extract such a finite cover from a tuple of germs of charts and germs of coordinate changes, one makes a choice of representative in each equivalence class of charts and picks a finite subcover. The first nontrivial step is to make sure that these representatives were chosen sufficiently small for coordinate changes between them to exist in the given germs of coordinate changes. The second crucial step is to make specific choices of representatives of the coordinate changes such that the cocycle condition is satisfied. However, \cite[(6.19.4)]{FO} does not address the need to choose specific, rather than just sufficiently small, representatives. In order to reduce the number of constraints, this would require a rather special structure of the overlaps of charts. In general, the choice of a representative for $[{\partial}hi_{AB}]$ would affect the choice of representatives for $[{\partial}hi_{CA}]$ or $[{\partial}hi_{AC}]$ for all $C$ with $\dim U_C\leq \dim U_B$, and for $[{\partial}hi_{BC}]$ or $[{\partial}hi_{CB}]$ when $\dim U_C \geq \dim U_A$. These are algebraic issues, governed by the intersection pattern of the charts. One approach to solving these algebraic issues could be to replace the definition of Kuranishi structure by that of a good coordinate system. 
However, we know of no direct way to construct such ordered covers and explicit cocycle conditions for a given moduli space ${\overlineerline {\Mm}}$. We solve both problems by defining the notion of a Kuranishi atlas as a weaker version of a good coordinate system --- without a partial ordering on the charts, but satisfying an explicit cocycle condition --- that can in practice be constructed. We then construct a good coordinate system in Proposition~\ref{prop:red} by an abstract refinement of the Kuranishi atlas. {\beta}gin{rmk}\rm One potential attraction of the notion of germs of Kuranishi charts is that for moduli spaces ${\overlineerline {\Mm}}$ arising from a Fredholm problem, there could be the notion of a \lq\lq natural germ" of charts at a point $[f]\in{\overlineerline {\Mm}}$ given by the finite dimensional reductions at any representative $f$. However, the present definition of germ does not provide a notion of equivalence between finite dimensional reductions with obstruction spaces of different dimension. So the only natural choice would be to require obstruction spaces to have minimal rank at $f$. But with such a choice it is not clear how to make compatible choices of the needed coordinate changes. As we will see, given two different charts at $[f]$ there is usually no natural choice of a coordinate change from one to another; the natural maps arise by including each of them into a bigger chart (here called their sum). Such a construction takes one quickly out of the class of minimal germs. \end{rmk} \subsection{Topological issues in the construction of a virtual fundamental class} \hspace{1mm}\\ {\lambda}bel{ss:top} After one has solved the analytic issues involved in constructing compatible basic Kuranishi charts as defined in Section~\ref{ss:kur} for a given moduli space ${\overlineerline {\Mm}}$, the further difficulties in constructing the virtual fundamental class $[{\overlineerline {\Mm}}]^{\rm virt}$ are all essentially topological, though their solution will impose further requirements on the construction of a Kuranishi atlas. The basic idea for constructing a VMC is to make transverse perturbations $s_i+\nu_i\approx s_i$ of the section in each basic chart, such that the smooth zero sets modulo a relation given by the transition maps provide a regularization of the moduli space $$ {\overlineerline {\Mm}} \,\!^\nu := \; \quotient{{{\underline{n}}derlineerset{{i=1,\ldots,N}}{\textstyle \bigsqcup} }\; (s_i+\nu_i)^{-1}(0)} { {\sigma}m} $$ There are various notions of regularizations; the common features (in the case of trivial isotropy and empty boundary) are that ${\overlineerline {\Mm}}\,\!^\nu$ should be a CW complex with a distinguished homology class $[{\overlineerline {\Mm}}\,\!^\nu]$ (e.g.\ arising from an orientation and triangulation), and that in some sense this class should be independent of the choice of perturbation~$\nu$. For example, \cite{FO,FOOO} require that for any CW complex $Y$ and continuous map $f:{\overlineerline {\Mm}} \to Y$ that extends compatibly to the Kuranishi charts, the induced map $f:{\overlineerline {\Mm}}\,\!^\nu \to Y$ is a cycle, whose homology class is independent of the choice of $\nu$ and extension of $f$. 
The basic issues in any regularization are that we need to make sense of the equivalence relation and ensure that the zero set of a transverse perturbation is not just locally smooth (and hence can be triangulated locally), but also that the transition data glues these local charts to a compact Hausdorff space without boundary. These properties are crucial for obtaining a global triangulation and thus well defined cycles. For simplicity we aim here for the strongest version of regularization, giving $\overline{\mathcal M}\,\!^\nu$ the structure of an oriented, compact, smooth manifold, which is unique up to cobordism. That is, we wish to realize $\overline{\mathcal M}\,\!^\nu$ as an abstract compact manifold as follows. (We simplify here by deferring the discussion of orientations to the end of this section.) \begin{definition} \label{def:mfd} An abstract compact smooth manifold of dimension $d$ consists of \begin{itemlist} \item[{\bf (charts)}] a finite disjoint union $\underset{i=1,\ldots,N}{\bigsqcup} V_i$ of open subsets $V_i\subset {\mathbb R}^d$, \item[{\bf (transition data)}] for every pair $i,j\in\{1,\ldots,N\}$ an open subset $V_{ij} \subset V_i$ and a smooth embedding $\phi_{ij} : V_{ij} \hookrightarrow V_j$ such that $V_{ji} = \phi_{ij}(V_{ij})$ and $V_{ii} = V_i$, \end{itemlist} satisfying the {\bf cocycle condition} $$ \phi_{jk} \circ \phi_{ij} = \phi_{ik} \qquad\text{on}\;\; \phi_{ij}^{-1}(V_{jk}) \subset V_{ik} \qquad\quad \forall i,j,k\in\{1,\ldots,N\}, $$ and such that the induced topological space \begin{equation} \label{quotient} \quotient{{\textstyle \underset{i=1,\ldots,N}{\bigsqcup}} V_i}{\sim} \qquad\text{with}\quad x \sim y \;:\Leftrightarrow\; \exists \; i,j : y=\phi_{ij}(x) \end{equation} is Hausdorff and compact. \end{definition} Note here that it is easy to construct examples of charts and transition data that satisfy the cocycle condition but fail to induce a Hausdorff space: e.g.\ $V_1=V_2=(0,2)$ with $V_{12}=V_{21}=(0,1)$ and $\phi_{12}(x)=\phi_{21}(x)=x$ does not separate the points $1\in V_1$ and $1\in V_2$. However, if we rephrase the data of charts and transition maps in terms of groupoids, then, as we now show, the Hausdorff property of the quotient is simply equivalent to a properness condition. In this paper we take a groupoid to be a topological category whose morphisms are invertible, whose spaces of objects and morphisms are smooth manifolds, and whose structure maps (encoding source, target, composition, identity, and inverse) are local diffeomorphisms. Such groupoids are often called {\it \'etale}. For further details see e.g.\ \cite{Moe,Mbr}.
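As a minimal sanity check on Definition~\ref{def:mfd}, and in contrast to the non-Hausdorff example above, consider the following standard presentation of the circle by two charts (stated here purely for illustration):
$$
V_1=V_2=(0,3), \qquad V_{12}=V_{21}=(0,1)\cup(2,3), \qquad
\phi_{12}(x)=\phi_{21}(x)=\begin{cases} x+2, & x\in(0,1),\\ x-2, & x\in(2,3),\end{cases}
$$
together with $V_{ii}=V_i$ and $\phi_{ii}={\rm id}$. One checks directly that $V_{21}=\phi_{12}(V_{12})$, that the cocycle condition holds (e.g.\ $\phi_{21}\circ\phi_{12}={\rm id}$ on $\phi_{12}^{-1}(V_{21})=V_{12}\subset V_{11}$), and that the quotient \eqref{quotient} is homeomorphic to $S^1\cong{\mathbb R}/4{\mathbb Z}$ via $V_1\ni x\mapsto x$ and $V_2\ni y\mapsto y-2$; in particular it is compact and Hausdorff. The only difference from the non-Hausdorff example above is that here the gluing region is large enough at both ends of the charts, which is reflected in the properness condition of Remark~\ref{rmk:grp} below.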
{\beta}gin{rmk}{\lambda}bel{rmk:grp}\rm A collection of charts and transition data satisfying the cocycle condition as in Definition~\ref{def:mfd} induces a topological groupoid ${\bf G}$, that is a category with {\beta}gin{itemlist} \item the topological space of objects ${\rm Obj}={\rm Obj}_{\bf G} = {\textstyle{\bigsqcup}_{i=1,\ldots,N}} V_i$ induced by the charts, \item the topological space of morphisms ${\rm Mor}={\rm Mor}_{\bf G}= {\textstyle{\bigsqcup}_{i,j=1,\ldots,N}} V_{ij}$ induced by the transition domains, with {\beta}gin{itemize} \item[-] source map $s: {\rm Mor} \to {\rm Obj}$, $(x\in V_{ij}) \mapsto (x\in V_i)$, and target map $ t: {\rm Mor} \to {\rm Obj}$, $(x\in V_{ij}) \mapsto ({\partial}hi_{ij}(x)\in V_j)$ induced by the transition maps, \item[-] composition ${\rm Mor} \leftsub{t}{\tildemes}_s {\rm Mor}\to {\rm Mor}$, $\bigl( (x\in V_{ij}), (y\in V_{jk})\bigr) \mapsto (x\in V_{ik})$ if ${\partial}hi_{ij}(x)=y$, which is well defined by the cocycle condition, \item[-] identities ${\rm Obj} \to {\rm Mor}, x \mapsto x\in V_{ii} \cong V_i$, and inverses ${\rm Mor} \to {\rm Mor}, (x\in V_{ij}) \mapsto ({\partial}hi_{ij}(x)\in V_{ji})$, again well defined by the cocycle condition. \end{itemize} \end{itemlist} Moreover, ${\bf G}$ has the following properties. {\beta}gin{itemlist} \item[{\bf (nonsingular)}] For every $x\in{\rm Obj}$ the isotropy group ${\rm Mor}(x,x)=\{\rm id_x\}$ is trivial. \item[{\bf (smooth)}] The object and morphism spaces ${\rm Obj}$ and ${\rm Mor}$ are smooth manifolds. \item[{\bf (\'etale)}] All structure maps are local diffeomorphisms. \end{itemlist} {\mathbb N}I The quotient space \eqref{quotient} is now given as the {\it realization} of the groupoid ${\bf G}$, that is $$ |{\bf G}| := {\rm Obj}_{\bf G}/{\sigma}m \qquad\text{with}\qquad x {\sigma}m y \; \Leftrightarrow\; {\rm Mor}(x,y) \neq \emptyset . $$ This realization is a compact manifold iff ${\bf G}$ has the following additional properties. {\beta}gin{itemlist} \item[{\bf (proper)}] The product of the source and target map $s\tildemes t : {\rm Mor} \to {\rm Obj} \tildemes {\rm Obj}$ is proper, i.e.\ preimages of compact sets are compact. \item[{\bf (compact)}] $|{\bf G}|$ is compact. \end{itemlist} \end{rmk} Now let $({\bf K}_i = (U_i,E_i,s_i,{\partial}si_i))_{i=1,\ldots,N}$ be a cover of a compact moduli space ${\overlineerline {\Mm}}$ by basic Kuranishi charts with footprints $F_i: = {\partial}si_i(s_i^{-1}(0))\subset {\overlineerline {\Mm}}$. Our guiding idea is to make from these charts two categories, the base category called ${\bf B}_{\mathcal K}$, formed from the domains $U_i$, and the bundle category ${\bf E}_{\mathcal K}$ formed from the obstruction bundles. The morphism spaces in both will arise from some type of transition maps between the basic charts. The projections $U_i\tildemes E_i \to U_i$ and sections $s_i$ should then induce a projection functor ${\partial}i_{\mathcal K}:{\bf E}_{\mathcal K}\to {\bf B}_{\mathcal K}$ and a section functor $s_{\mathcal K}: {\bf B}_{\mathcal K} \to {\bf E}_{\mathcal K}$. Further, the footprint maps ${\partial}si_i$ induce a surjection ${\partial}si_{\mathcal K}: s_{\mathcal K}^{-1}(0)\to {\overlineerline {\Mm}}$ from the zero set onto the moduli space. This induces natural morphisms in the subcategory $s_{\mathcal K}^{-1}(0)$, given by $$ {\partial}si_j^{-1} \circ {\partial}si_i \, : \; s_i^{-1}(0)\cap {\partial}si_i^{-1}(F_i\cap F_j) \;\longrightarrow\; s_j^{-1}(0) . 
$$ If we use only these morphisms and their lifts to $s_i^{-1}(0)\tildemes\{0\}\subset U_i\tildemes E_i$, then composition in the categories ${\bf B}_{\mathcal K}, {\bf E}_{\mathcal K}$ is well defined, ${\partial}i_{\mathcal K}, s_{\mathcal K}, {\partial}si_{\mathcal K}$ are functors, and $|{\partial}si_{\mathcal K}|:|s_{\mathcal K}^{-1}(0)|\to {\overlineerline {\Mm}}$ is a homeomorphism, which identifies the unperturbed moduli space ${\overlineerline {\Mm}}$ with a subset of the realization $|{\bf B}_{\mathcal K}|$. However, these morphism spaces may be highly singular, so that the structure maps are merely local homeomorphisms between topological spaces. This structure is insufficient for a regularization by transverse perturbation of the section $s_{\mathcal K}$, hence a Kuranishi atlas requires an extension of the transition maps ${\partial}si_j^{-1} \circ {\partial}si_i$ to diffeomorphisms between submanifolds of the domains. Recall here that the domains of the charts $U_i$ may not have the same dimension, since one can only expect the Kuranishi charts ${\bf K}_i $ to have constant index $d= \dim U_i-\dim E_i$. Hence transition data is generally given by ``transition charts'' ${\bf K}^{ij}$ and coordinate changes ${\bf K}_i \to {\bf K}^{ij} \leftarrow {\bf K}_j$ which in particular involve embeddings from open subsets $U_i^{ij}\subset U_i, U_j^{ij}\subset U_j$ into $U^{ij}$. (Here we simplify the notion from Section~\ref{ss:kur} by assuming that a single transition chart covers the overlap $F_i\cap F_j$.) Now one could appeal to Sard's theorem to find a transverse perturbation in each basic chart, $$ \nu=(\nu_i:U_i\to E_i)_{i=1,\ldots,N} \qquad\text{with}\quad s_i+\nu_i {\partial}itchfork 0 \quad\forall i=1,\ldots,N , $$ and use this to regularize ${\overlineerline {\Mm}} \cong |s_{\mathcal K}^{-1}(0)|=\qu{s_{\mathcal K}^{-1}(0)}{{\sigma}m}$. Here the relation ${\sigma}m$ is given by morphisms, so the regularization ought to be the realization $|(s_{\mathcal K}+\nu)^{-1}(0)| = \qu{(s_{\mathcal K} + \nu)^{-1}(0)}{{\sigma}m}$ of a subcategory. Hence the perturbations $\nu_i$ need to be compatible with the morphisms, i.e.\ transition maps. Given such compatible transverse perturbations, one obtains the charts and transition data for an abstract manifold as in Definition~\ref{def:mfd}, but still needs to verify the cocycle condition, Hausdorffness, compactness, and an invariance property to obtain a generalization of the finite dimensional regularization theorem on page {\partial}ageref{finite reg} to Kuranishi atlases along the following lines. \noindent {\bf Kuranishi Regularization:} {\it Let $s_{\mathcal K}: {\bf B}_{\mathcal K}\to{\bf E}_{\mathcal K}$ be a Kuranishi section of index~$d$ such that $|s_{\mathcal K}^{-1}(0)|$ is compact. Then there exists a class of smooth perturbation functors $\nu:{\bf B}_{\mathcal K}\to{\bf E}_{\mathcal K}$ such that the subcategory ${\bf Z}_\nu:=(s_{\mathcal K}+\nu)^{-1}(0)$ carries the structure of an abstract compact smooth manifold of dimension $d$ in the sense of Definition~\ref{def:mfd}. Moreover, $[\, |{\bf Z}_\nu | \,]\in H_d({\bf B}_{\mathcal K},{\mathbb Z})$ is independent of the choice of such perturbations. } However, in general there is no theorem of this precise form, since the topological issues discussed below require various refinements of the perturbation construction. 
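To fix ideas, here is a minimal illustration of the intended regularization in the case of a single basic chart with trivial isotropy (a toy model, not one of the moduli spaces discussed above): take
$$
U={\mathbb R},\qquad E={\mathbb R},\qquad s(x)=x^2,\qquad \psi : s^{-1}(0)=\{0\}\;\xrightarrow{\;\cong\;}\;\overline{\mathcal M}=\{{\rm pt}\},
$$
so that the index is $d=\dim U-\dim E=0$. The section $s$ is not transverse to $0$ at its unique zero, but the constant perturbation $\nu\equiv-{\varepsilon}$ for small ${\varepsilon}>0$ gives $s+\nu\pitchfork 0$ with zero set $\{\pm\sqrt{\varepsilon}\}$, a compact $0$-dimensional manifold. With the orientations determined by ${\rm d}(s+\nu)$ the two points carry opposite signs, so the resulting class in $H_0(U;{\mathbb Z})$ vanishes for every such choice, consistent with the fact that $s$ also admits nowhere vanishing transverse perturbations such as $\nu\equiv+{\varepsilon}$. The difficulties listed below all stem from carrying out this elementary scheme compatibly over many charts of different dimensions.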
\noindent {\bf Compatibility:} In order to obtain well defined transition maps, i.e.\ a space of morphisms in ${\bf Z}_\nu$ with well defined composition, the perturbations $\nu_i$ clearly need to be compatible. Since $(s_i + \nu_i)^{-1}(0)$ and $(s_j + \nu_j)^{-1}(0)$ are not naturally identified via $\psi_j^{-1} \circ \psi_i$ for $\nu\not\equiv 0$, this requires that one include in ${\bf B}_{\mathcal K}$ the choice of specific transition data between the basic charts. Next, since the intersection of these embeddings $\phi_i^{ij}:U_i^{ij} \to U^{ij}$ and $\phi_j^{ij}:U_j^{ij} \to U^{ij}$ in the ``transition chart'' ${\bf K}^{ij}$ is not controlled, the direct transition map $(\phi_j^{ij})^{-1} \circ \phi_i^{ij}$ may not have a smooth extension that is defined on an open set. Therefore, we do not want to consider such maps to be morphisms in ${\bf B}_{\mathcal K}$, since that would violate the \'etale property. Instead, we include ${\bf K}^{ij}$ into the set of charts, and ask that the pushforward of each perturbation $\nu_i,\nu_j$ extend to a perturbation $\nu^{ij}$. But now one must consider triple composites, and so on. The upshot is that, as well as the system of basic charts $({\bf K}_i)_{i=1,\ldots,N}$ with footprints $F_i$, one is led to consider a full collection of transition charts $({\bf K}_I)_{I\subset\{1,\ldots,N\}}$ with footprints $F_I := \bigcap_{i\in I} F_i$. To make a category, each of these ``sum charts'' should have a chosen domain $U_I$, which is a smooth manifold, and the objects in ${\bf B}_{\mathcal K}$ should be $\sqcup_I U_I$. Further, the morphisms in the category should come from coordinate changes between these charts, which in particular involve embeddings $\phi_{IJ}:U_{IJ} \to U_J$ of open subsets $U_{IJ}\subset U_I$. Thus the space ${\rm Mor}_{{\bf B}_{\mathcal K}}$ will be the disjoint union of the domains $U_{IJ}$ of these coordinate changes over all relevant pairs $I,J$.\footnote{ See Definitions~\ref{def:Ku} and~\ref{def:catKu} for detailed definitions of the categories ${\bf B}_{\mathcal K}$ and ${\bf E}_{\mathcal K}$ along these lines. It is worth noting here that we do not require the domains of charts to be open subsets of Euclidean space. We could achieve this for the basic charts, since these can be arbitrarily small. However, for the transition charts one may need to make a choice between having a single sum chart for each overlap and having sum charts whose domains are topologically trivial. We construct the former in Theorem~\ref{thm:A2}. } For this to form a category, all composites must exist, which is equivalent to the cocycle condition $\phi_{JK}\circ\phi_{IJ}=\phi_{IK}$, including the condition that domains be chosen such that $\phi_{IJ}^{-1}(U_{JK}) \subset U_{IK}$. However, natural constructions as in Section~\ref{ss:gw} only satisfy the cocycle condition on the overlap $\phi_{IJ}^{-1}(U_{JK}) \cap U_{IK}$. Thus already the construction of an equivalence relation from the transition data requires a refinement of the choice of domains, which we achieve in Theorem~\ref{thm:K} by iteratively choosing subsets of each $U_I$ and $U_{IJ}$.
If we assume the cocycle condition, then ${\bf B}_{\mathcal K}$ satisfies all properties of a nonsingular groupoid except that \begin{itemize} \item[-] we do not assume that inverses exist; \item[-] the \'etale condition is relaxed to require that the structure maps are smooth embeddings (as spelled out in Definition~\ref{def:map}) rather than diffeomorphisms. \end{itemize} We write $|{\mathcal K}|$ for the realization ${\rm Obj}({\bf B}_{\mathcal K})/\!\sim$ of ${\bf B}_{\mathcal K}$, where $\sim$ is the equivalence relation generated by the morphisms, and denote by $\pi_{\mathcal K}:{\rm Obj}({\bf B}_{\mathcal K})\to|{\mathcal K}|$ the projection. We show in Lemma~\ref{le:Knbhd1} that the natural inclusion ${\iota}_{\mathcal K}: \overline{\mathcal M}\to |{\mathcal K}|$ is a homeomorphism onto its image. Therefore we think of $|{\mathcal K}|$ as a {\it virtual neighbourhood} of $\overline{\mathcal M}$. This categorical framework and the resulting virtual neighbourhood of the moduli space are new in the Kuranishi context. The approach of both \cite{FO} (which \cite{FOOO} builds on, though using different definitions) and \cite{J} is to work with equivalence classes of charts at every point. It runs into the algebraic difficulties discussed in Section~\ref{ss:alg}. \noindent {\bf Hausdorff property:} For a category such as ${\bf B}_{\mathcal K}$, it is no longer true that the properness of $s\times t$ implies that its realization $|{\mathcal K}|$ is Hausdorff; cf.\ Example~\ref{ex:Haus}. Therefore, the easiest way to ensure that the realization of the perturbed zero set $|{\bf Z}_\nu|$ is Hausdorff is to make $|{\mathcal K}|$ Hausdorff and check that the inclusion $|{\bf Z}_\nu|\to |{\mathcal K}|$ is continuous. The Hausdorff property (or more general properness conditions in the case of nontrivial isotropy) is not addressed in \cite{FO,FOOO,J}. Our attempts to deal with these requirements motivated the introduction of categories and a virtual neighbourhood. Given this framework, most of Section~\ref{s:Ks} is devoted to finding a way to shrink the domains of the charts so as to not only achieve the cocycle condition but also ensure that $|{\mathcal K}|$ is Hausdorff. To this end we introduce the notion of {\it tameness} in Definition~\ref{def:tame}, which is a very strong form of the cocycle condition that gives great control over the morphisms in ${\bf B}_{\mathcal K}$, cf.\ Lemma~\ref{le:Ku2}. We can achieve this if the original Kuranishi charts are additive, that is, if the obstruction spaces $E_i$ of the basic charts are suitably transverse. We then show in Proposition~\ref{prop:Khomeo} that the realization of a tame Kuranishi atlas is not only Hausdorff, but also has the property that the natural maps $U_I\to |{\mathcal K}|$ are homeomorphisms onto their images. This means that we can construct a perturbation over $|{\mathcal K}|$ by working with its pullbacks to each chart. \noindent {\bf Compactness:} Unfortunately, even when we can make $|{\mathcal K}|$ Hausdorff, it is almost never locally compact or metrizable. In fact, a typical local model is the subset $S$ of ${\mathbb R}^2$ formed by the union of the line $y=0$ with the half plane $x>0$, with ${\iota}_{\mathcal K}(\overline{\mathcal M}) = \{y=0\}$, but given the topology as a quotient of the disjoint union $\{y=0\}\sqcup \{x>0\}$.
As we show in Example~\ref{ex:Khomeo}, the quotient topology on $S$ is not metrizable, and even in the weaker subspace topology from ${\mathbb R}^2$ the zero set ${\iota}_{\mathcal K}({\overlineerline {\Mm}})$ does not have a locally compact neighbourhood in $|{\mathcal K}|$. Therefore ``sufficiently small'' perturbations $\nu$ cannot guarantee compactness of the perturbed zero set $|{\bf Z}_\nu|$. Instead, the challenge is to find subsets of $|{\mathcal K}|$ containing ${\iota}_{\mathcal K}({\overlineerline {\Mm}})$ that are compact and -- while not open -- are still large enough to contain the zero sets of appropriately perturbed sections $s+\nu$. Similar to the Hausdorff property, this compactness is asserted in \cite{FO,FOOO,J} by quoting an analogy to the construction of an Euler class of orbibundles. However, we demonstrate in Examples~\ref{ex:Haus} and \ref{ex:Khomeo} that nontrivial Kuranishi atlases (involving domains of different dimension) -- unlike orbifolds -- never provide locally compact Hausdorff ambient spaces for the perturbation theory. To solve the compactness issue, we introduce in Section \ref{ss:red} precompact subsets ${\partial}i_{\mathcal K}({\mathcal C})$ of $|{\mathcal K}|$ with these properties; cf.\ Proposition~\ref{prop:zeroS0}. In fact, for reasons explained below, we are forced to consider nested pairs ${\mathcal C}\subset {\mathcal V}\subset {\rm Obj}({\bf B}_{\mathcal K})$ of such subsets, where the perturbation $\nu$ is defined over ${\mathcal V}$ so that the realization of its zero set is contained in ${\partial}i_{\mathcal K}({\mathcal C})$. One can think of ${\partial}i_{\mathcal K}({\mathcal C})$ as a kind of neighbourhood of ${\iota}_{\mathcal K}({\overlineerline {\Mm}})$, but, even though ${\mathcal C}$ is an open subset of ${\rm Obj}({\bf B}_{\mathcal K})$, the image ${\partial}i_{\mathcal K}({\mathcal C})$ is not open in $|{\mathcal K}|$ because the different components of ${\rm Obj}({\bf B}_{\mathcal K})$ have different dimensions. For example, if $|{\mathcal K}|=S$ as above then ${\partial}i_{\mathcal K}({\mathcal C})$ could be the union of $\{y=0, x<2\}$ with the set $\{x>1, |y|<1\}$. (In this example, since ${\overlineerline {\Mm}}$ is not compact, we cannot expect ${\partial}i_{\mathcal K}({\mathcal C})$ to be precompact, but its closure is locally compact.) { }{\mathbb N}I {\bf Construction of sections:} We aim to construct the perturbation $\nu$ by finding a compatible family of local perturbations $\nu_I$ in each chart ${\bf K}_I$. Thus, if basic charts ${\bf K}_i$ and ${\bf K}_j$ have nontrivial overlap and we start by defining $\nu_i$, the most naive approach is to try to extend the partially defined perturbation $\nu_i\circ ({\partial}hi_i^{ij})^{-1} \circ {\partial}hi_j^{ij}$ over $U_j$. But, as seen above, the image of $({\partial}hi_j^{ij})^{-1} \circ {\partial}hi_i^{ij}$ might be too singular to allow for an extension, and since ${\partial}hi_i^{ij}$ and ${\partial}hi_j^{ij}$ have overlapping images in $U^{ij}$ it does not help to rephrase this in terms of finding an extension of the pushforwards of these sections to $U^{ij}$. Thus one needs some notion of a \lq\lq good coordinate system" on ${\overlineerline {\Mm}}$ such as in \cite{FO}, in which all compatibility conditions between the perturbations are given by pushforwards with embeddings. 
That is, two charts ${\bf K}_I$ and ${\bf K}_J$ of a \lq\lq good coordinate system" have either no overlap or all morphisms are given by an embedding ${\partial}hi_{IJ}$ or ${\partial}hi_{JI}$. An approach towards extracting a ``good coordinate system'' from a Kuranishi atlas is given in \cite{FO} and built on by \cite{FOOO,J} but does not address compatibility with overlaps or the cocycle condition. We achieve this ordering by constructing a {\it reduction} ${\mathcal V}\subset{\rm Obj}({\bf B}_{\mathcal K})$ in Proposition~\ref{prop:cov2}. This does not provide another Kuranishi atlas or collection of compatible charts (though see Proposition~\ref{prop:red}), but merely is a subset of the domain spaces that covers the unperturbed moduli space ${\partial}i_{\mathcal K}\bigl({\mathcal V}\cap s_{\mathcal K}^{-1}(0)\bigr)={\overlineerline {\Mm}}$ and whose parts project to disjoint subsets ${\partial}i_{\mathcal K}({\mathcal V}\cap U_I)\cap{\partial}i_{\mathcal K}({\mathcal V}\cap U_J)=\emptyset$ in the virtual neighbourhood $|{\mathcal K}|$ unless there is a direct coordinate change between ${\bf K}_I$ and ${\bf K}_J$. Since the unidirectional coordinate changes induce an ordering, this allows for an iterative approach to constructing compatible perturbations $\nu_I$. However, this construction in Proposition~\ref{prop:ext} is still very delicate and requires great control over the perturbation $\nu$ since, to ensure compactness of the zero set, we must construct it so that ${\partial}i_{\mathcal K}\bigl((s+\nu)^{-1}(0)\bigr)$ lies in a precompact but generally not open set ${\partial}i_{\mathcal K}({\mathcal C})$. In particular, this construction requires a suitable metric on ${\partial}i_{\mathcal K}({\mathcal V})$, cf.\ Definition~\ref{def:metric}, which raises the additional difficulty of working with different topologies since -- as explained above -- the natural quotient topologies are almost never metrizable. { }{\mathbb N}I{\bf Regularity of sections:} In order to deduce the existence of transverse perturbations in a single chart from Sard's theorem, the section must be ${\mathcal C}^k$, where $k\ge 1$ is larger than the index of the Kuranishi atlas. (This was overlooked in \cite{FO} and comments of \cite{J}.) For applications to pseudoholomorphic curve moduli spaces this means that either a refined gluing theorem with controls of the derivatives must be proven, or a theory of stratified smooth Kuranishi is needed, which we are developing in \cite{MW:ku2}. Moreover, when extending a transverse section $({\partial}hi_{IJ})_*(s_I+\nu_I)$ from the image of the embedding ${\partial}hi_{IJ}$ to the rest of $U_J$, we must control its behavior in directions normal to the submanifold ${\rm im\,} {\partial}hi_{IJ}$ so that zeros of $s_I+\nu_I:U_I\to E_I$ in $U_{IJ}$ correspond to transverse zeros of $s_J+\nu_J:U_J\to E_J$. That is, the derivative ${\rm d} (s_J+\nu_J)$ must induce a surjective map from the normal bundle of ${\partial}hi_{IJ}(U_{IJ})\subset U_J$ to $E_J/\widehat{\partial}hi_{IJ}(E_I)$. If this is to be satisfied at the intersection of several embeddings to $U_J$, then the construction of transverse sections necessitates a tangent bundle condition, which was introduced in \cite{J} and then adopted in \cite{FOOO}. We reformulate it as an {\it index condition} relating the kernel and cokernels of ${\rm d} s_I$ and ${\rm d} s_J$ and can then extend transverse perturbations by requiring that ${\rm d} \nu_J=0$ in the normal directions to all embeddings ${\partial}hi_{IJ}$. 
(See the notion of {\it admissible sections} in Definition~\ref{def:sect}.) { }{\mathbb N}I{\bf Uniqueness up to cobordism:} Another crucial requirement on the perturbation constructions is that the resulting manifold (in the case of trivial isotropy) is unique modulo cobordism. This requires considerable effort since it does not just pertain to nearby sections of one bundle, but to sections constructed with respect to different metrics in different shrinkings and reductions of the charts. Finally, in applications to pseudoholomorphic curve moduli spaces, a notion of equivalence between different Kuranishi atlases is needed. Contrary to the finite dimensional charts for transverse moduli spaces, or the Banach manifold charts for ambient spaces of maps, two Kuranishi charts for the same moduli space may not be directly compatible. Section \ref{ss:Kcobord} instead introduces a notion of {\it commensurability} by a common extension. In the application to Gromov-Witten moduli spaces \cite{MW:gw}, we expect to obtain this equivalence from an infinite dimensional index condition relating the linearized Kuranishi section to the linearized Cauchy-Riemann operator. In order to prove invariance of the abstract VFC construction, however, we need to work with a weaker notion of cobordism of Kuranishi atlases -- a very special case of Kuranishi atlas with boundary, namely for ${\overlineerline {\Mm}}\tildemes[0,1]$. As a result, we need to repeat all shrinking, reduction, and perturbation constructions in Section~\ref{s:VMC} in a relative setting to interpolate between fixed data for ${\overlineerline {\Mm}}\tildemes\{0,1\}$. Again, the rather general categorical setting -- rather than a base manifold with boundary -- introduces unanticipated subtleties into these constructions. { } {\mathbb N}I {\bf Orientability:} The dimension condition $\dim U_I-\dim E_I = \dim U_J-\dim E_J=:d$ together with the fact that each $s_I+\nu_I$ is transverse to $0$ implies that the zero sets of $s_I+\nu_I$ and $s_J+\nu_J$ both have dimension $d$, so that the embedding ${\partial}hi_{IJ}$ does restrict to a local diffeomorphism between these local zero sets. Thus, if all the above conditions hold, then the zero sets $(s_I+\nu_I)^{-1}(0)$ and morphisms induced by the coordinate changes do form an \'etale proper groupoid ${\bf Z}_\nu$. To give its realization $|{\bf Z}_\nu|$ a well defined fundamental cycle, it remains to orient the local zero sets compatibly, i.e.\ to pick compatible nonvanishing sections of the determinant line bundles ${\Lambda}^d\bigl(\ker {\rm d}(s_I+\nu_I)\bigr)$. These should be induced from a notion of orientation of the Kuranishi atlas, i.e.\ of sections of the unperturbed determinant line bundles ${\Lambda}^{\rm max} \ker {\rm d} s_I \otimes \bigl({\Lambda}^{\rm max} {\rm coker\,} {\rm d} s_I\bigr)^*$, which are compatible with fiberwise isomorphisms induced by the embeddings ${\partial}hi_{IJ}: U_{IJ}\to U_J$. To construct this {\it determinant line bundle} ${\delta}t(s_{\mathcal K})$ of the Kuranishi atlas in Proposition~\ref{prop:det0}, we have to compare trivializations of determinant line bundles that arise from stabilizations by trivial bundles of different dimension. As recently pointed out by Zinger~\cite{Z3}, there are several ways to choose local trivializations that are compatible with all necessary structure maps. We shall use one that is different from both the original and the revised construction in \cite[Theorem~A.2.2]{MS}, since these lead to sign incompatibilities. 
Our construction, though, does seem to coincide with ordering conventions in the construction of a canonical K-theory class on the space of linear operators between fixed finite dimensional spaces.\footnote{Thanks to Thomas Kragh for illuminating discussions on the topic of determinant bundles. } Finally, we use intermediate determinant bundles ${\Lambda}^{\rm max} {\rm T} U_I \otimes \bigl({\Lambda}^{\rm max} E_I\bigr)^*$ in Proposition~\ref{prop:orient1} to transfer an orientation of ${\delta}t(s_{\mathcal K})$ to ${\bf Z}_\nu$. { } Putting everything together, we finally conclude in Theorem~\ref{thm:VMC1} that every oriented weak additive $d$-dimensional Kuranishi atlas ${\mathcal K}$ with trivial isotropy determines a unique cobordism class of oriented $d$-dimensional compact manifolds, that is represented by the zero sets of a suitable class of admissible sections. Theorem~\ref{thm:VMC2} interprets this result in more intrinsic terms, defining a \v{C}ech homology class on ${\overlineerline {\Mm}}$, which we call the {\it virtual fundamental class} (VFC). This is a stronger notion than in \cite{FO,FOOO}, where a virtual fundamental cycle is supposed to associate to any ``strongly continuous map'' $f:{\overlineerline {\Mm}}\to Y$ a cycle in $Y$. On the other hand, our notion of VFC does not yet provide a ``pull-push'' construction as needed for e.g.\ the construction of a chain level $A_\infty$ algebra in \cite{FOOO} by pullback of cycles via evaluation maps ${\rm ev}_1,\ldots,{\rm ev}_k:{\overlineerline {\Mm}}\to L$ and push forward by another evaluation ${\rm ev}_0:{\overlineerline {\Mm}}\to L$. Finally, note that our definition of a Kuranishi atlas is designed to make it possible both to construct them in applications, such as Gromov--Witten moduli spaces, and to prove that they have natural virtual fundamental cycles. Its basic ingredients (charts, coordinate changes) are closely related to those in \cite{FO,FOOO, J}, yet we already need small variations. We make essential changes to almost all global notions and constructions and compare our notion of Kuranishi atlas to the various notions of Kuranishi structures in Remark~\ref{rmk:otherK}. {\beta}gin{rmk}\rm This paper makes rather few references to Li--Tian~\cite{LT} because that deals mostly with gluing and isotropy; in other respects it is very sketchy. For example, it does not mention any of the analytic details in Section \ref{s:construct} below. Its Theorem~1.1 constructs the oriented Euler class of a \lq\lq generalized Fredholm bundle" $[s:\widetildelde{\mathcal B}\to \widetildelde{\mathcal E}]$, avoiding the Hausdorff question by assuming that there is a global finite dimensional bundle $\widetildelde{\mathcal B}\tildemes F$ that maps onto the local approximations. However, in the Gromov--Witten situation this is essentially never the case (even if there is no gluing) since $\widetildelde{\mathcal B}$ is a quotient of the form $\widehat{\mathcal B}/G$. Therefore, we must work in the situation described in \cite[Remark~3]{LT}, and here they just say that the extension to this case is easy, without further comment. Also, the proof that the structure described in Remark 3 actually exists even in the simple Gromov--Witten case that we consider in Section \ref{s:construct} lacks almost all detail. The only reference to this question is on page 79 in the course of the proof of Proposition 2.2 (page 38 in the arxiv preprint). 
Their idea is to build a global object from a covering family of basic charts using sum charts (see condition (iv) at the beginning of Section~1) and partitions of unity to extend sections. The paper \cite{LiuT} explains this idea with much more clarity, but unfortunately, because it does not pass to finite dimensional reductions, it makes serious analytic errors; cf.\ Remark~\ref{LTBS}. As we point out in this remark, there are also serious difficulties with using partitions of unity in this context that cannot be easily circumvented by passing to finite dimensional reductions. Therefore at present it is unclear to us whether this construction can be correctly carried out. \end{rmk} {\rm sect}ion{Differentiability issues in abstract regularization approaches} {\lambda}bel{s:diff} Any abstract regularization procedure for holomorphic curve moduli spaces needs to deal with the fundamental analytic difficulty of the reparametrization action, which has been often overlooked in symplectic topology. We thus explain in Section~\ref{ss:nodiff} the relevant differentiability issues in the example of spherical curves with unstable domain. In a nutshell, the reparametrization $f\mapsto f\circ{\gamma}$ with a fixed diffeomorphism ${\gamma}$ is smooth on infinite dimensional function spaces, but the action $({\gamma},f)\mapsto f\circ{\gamma}$ of any nondiscrete family of diffeomorphisms fails even to be differentiable in any standard Banach space topology. In geometric regularization techniques, this difficulty is overcome by regularizing the space of parametrized holomorphic maps in such a way that it remains invariant under reparametrizations. Then the reparametrization action only needs to be considered on a finite dimensional manifold, where it is smooth. It has been the common understanding that by stabilizing the domain or working in finite dimensional reductions one can overcome this differentiability failure in more general situations. We will explain in Section~\ref{ss:DMdiff} that reparametrizations nevertheless need to be dealt with in establishing compatibility of constructions in local slices, in particular between charts near nodal curves and local slices of regular curves. In particular, we will show the difficulties in the global obstruction bundle approach in Section~\ref{ss:nodiff}, and for the Kuranishi atlas approach will see explicitly in Section \ref{ss:Kcomp} that the action on infinite dimensional function spaces needs to be dealt with when establishing compatibility of local finite dimensional reductions. Finally, Section~\ref{ss:eval} explains additional smoothness issues in dealing with evaluation maps. \subsection{Differentiability issues arising from reparametrizations} \hspace{1mm}\\ {\lambda}bel{ss:nodiff} The purpose of this section is to explain the implications of the fact that the action of a nondiscrete automorphism group ${\rm Aut}({\Sigma})$ on a space of maps $\{ f: {\Sigma} \to M \}$ by reparametrization is not continuously differentiable in any known Banach metric. In particular, the space $$ \{ f: {\Sigma} \to M \,|\, f_*[{\Sigma}]\neq 0 \}/{\rm Aut}({\Sigma}), $$ of equivalence classes of (nonconstant) smooth maps from a fixed domain modulo repara\-metrization of the domain, has no known completion with differentiable Banach orbifold structure. 
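As a minimal illustration of this failure (using the translation action on the circle in place of $G_\infty$; this aside is not needed in what follows), consider, for $1<p<\infty$, the action
$$
\tau:\;{\mathbb R}\times L^p(S^1)\;\longrightarrow\; L^p(S^1), \qquad (s,f)\;\longmapsto\; f(\cdot+s) .
$$
This map is continuous, but the difference quotients $\frac1s\bigl(f(\cdot+s)-f\bigr)$ converge in $L^p(S^1)$ if and only if $f\in W^{1,p}(S^1)$, in which case the limit is $f'$. Thus the candidate derivative in the group direction always costs one degree of regularity of the map being reparametrized; the same mechanism appears for the action of $G_\infty$ in \eqref{eq:actiond} below.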
We discuss the issue in the concrete case of the moduli space $\overline{\mathcal M}_{1}(A,J)$ of $J$-holomorphic spheres with one marked point.\footnote{ In order to understand how any given abstract regularization technique deals with the differentiability issues caused by reparametrizations, one can test it on the example of spheres with one marked point. This is a realistic test case since sphere bubbles will generally appear in any compactified moduli space (before regularization). } For the sake of simplicity let us assume that the nonzero class $A\in H_2(M)$ is such that it excludes bubbling and multiply covered curves a priori, so that no nodal solutions are involved and all isotropy groups are trivial. In that case one can describe the moduli space
\begin{align*}
\overline{\mathcal M}_{1}(A,J) &:= \bigl\{ (z_1,f) \in S^2 \times {\mathcal C}^\infty(S^2,M) \,\big|\, f_*[S^2]=A, \overline{\partial}_J f = 0 \bigr\} / {\rm PSL}(2,{\mathbb C}) \\
& \cong \bigl\{ f \in {\mathcal C}^\infty(S^2,M) \,\big|\, f_*[S^2]=A, \overline{\partial}_J f = 0 \bigr\} / G_\infty
\end{align*}
as the zero set of the section $f\mapsto \overline{\partial}_J f$ in an appropriate bundle over the quotient
$$ \widehat{\mathcal B}/G_\infty \quad\text{of}\quad \widehat{\mathcal B}:= \bigl\{ f \in {\mathcal C}^\infty(S^2,M) \,\big|\, f_*[S^2]=A \bigr\}$$
by the reparametrization action $f\mapsto f\circ\gamma$ of
$$G_\infty := \{\gamma\in {\rm PSL}(2,{\mathbb C}) \,|\, \gamma(\infty)=\infty\}. $$
The quotient space $\widehat{\mathcal B}/G_\infty$ inherits the structure of a Fr\'echet manifold, but note that the action on any Sobolev completion
\begin{align} \label{action}
\theta: G_\infty \times W^{k,p}(S^2,M) \to W^{k,p}(S^2,M), \quad (\gamma,f) \mapsto f\circ\gamma
\end{align}
does not even have directional derivatives at maps $f_0\in W^{k,p}(S^2,M) \smallsetminus W^{k+1,p}(S^2,M)$ since the differential\footnote{ Here the tangent space to the automorphism group ${\rm T}_{\rm Id}G_\infty \subset\Gamma({\rm T} S^2)$ is the finite dimensional space of holomorphic (and hence smooth) vector fields $X:S^2 \to {\rm T} S^2$ that vanish at $\infty\in S^2$. }
\begin{align}\label{eq:actiond}
{\rm D}\theta ({\rm Id},f_0) : \; {\rm T}_{\rm Id}G_\infty \times W^{k,p}(S^2, f_0^*{\rm T} M) &\;\longrightarrow\; W^{k,p}(S^2, f_0^*{\rm T} M) \\
(X,\xi) \qquad\qquad\qquad\;\;\; &\;\longmapsto\; \;\; \xi + {\rm d} f_0 \circ X \nonumber
\end{align}
is well defined only if ${\rm d} f_0$ is of class $W^{k,p}$. In fact, even at smooth points $f_0\in{\mathcal C}^\infty(S^2,M)$, this ``differential'' only provides directional derivatives of \eqref{action}, for which the rate of linear approximation depends noncontinuously on the direction. Hence \eqref{action} is not classically differentiable at any point.

\begin{remark} \rm \label{BSdiff}
To the best of our knowledge, the differentiability failure of \eqref{action} persists in all other completions of ${\mathcal C}^\infty(S^2,M)$ to a Banach manifold -- e.g.\ using H\"older spaces. The restriction of \eqref{action} to ${\mathcal C}^\infty(S^2,M)$ does have directional derivatives, and the differential is continuous in the ${\mathcal C}^\infty$ topology. Hence one could try to deal with \eqref{action} as a smooth action on a Fr\'echet manifold. Alternatively, one could equip ${\mathcal C}^\infty(S^2,M)$ with a (noncomplete) Banach metric.
Then $$ {\widetildelde h}eta:G_\infty \tildemes {\mathcal C}^\infty(S^2,M) \to {\mathcal C}^\infty(S^2,M) $$ has a bounded differential operator $$ {\rm D}{\widetildelde h}eta ({\gamma},f) : {\rm T}_{\gamma} G_\infty \tildemes {\mathcal C}^\infty(S^2, f^*{\rm T} M) \to {\mathcal C}^\infty(S^2, {\gamma}^*f^*{\rm T} M). $$ However, the differential fails to be continuous with respect to $f\in {\mathcal C}^\infty(S^2,M)$ in the Banach metric. Now continuous differentiability could be achieved by restricting to a submanifold ${\mathcal C}\subset{\mathcal C}^\infty(S^2,M)$ on which the map $f\mapsto {\rm d} f$ is continuous. However, in e.g.\ a Sobolev or H\"older metric, the identity operator ${\rm T}{\mathcal C}\to{\rm T}{\mathcal C}$ would then be compact, so that ${\mathcal C}$ would have to be finite dimensional. Finally, one could observe that the action \eqref{action} is in fact ${\mathcal C}^\ell$ when considered as a map $G_\infty \tildemes W^{k+\ell,p}(S^2,M) \to W^{k,p}(S^2,M)$. This might be useful for fixing the differentiability issues in the virtual regularization approaches with additional analytic arguments. In fact, this is essentially the definition of scale-smoothness developed in \cite{HWZ1} to deal with reparametrizations directly in the infinite dimensional setting. \end{remark} It has been the common understanding that virtual regularization techniques deal with the differentiability failure of the reparametrization action by working in finite dimensional reductions, in which the action is smooth. We will explain below for the global obstruction bundle approach, and in Section \ref{ss:Kcomp} for the Kuranishi atlas approach, that the action on infinite dimensional spaces nevertheless needs to be dealt with in establishing compatibility of the local finite dimensional reductions. In fact, as we show in Section~\ref{s:construct}, the existence of a consistent set of such finite dimensional reductions with finite isotropy groups for a Fredholm section that is equivariant under a nondifferentiable group action is highly nontrivial. For most holomorphic curve moduli spaces, even the existence of not necessarily compatible reductions relies heavily on the fact that, despite the differentiability failure, the action of the reparametrization groups generally do have local slices. However, these do not result from a general slice construction for Lie group actions on a Banach manifold, but from an explicit geometric construction using transverse slicing conditions. We now explain this construction, and subsequently show that it only defers the differentiability failure to the transition maps \eqref{transition} between different local slices. In order to construct local slices for the action of $G_\infty$ on a Sobolev completion of $\widehat{\mathcal B}$, $$ \widehat{\mathcal B}^{k,p}:= \bigl\{ f \in W^{k,p}(S^2,M) \,\big|\, f_*[S^2]=A \bigr\}, $$ we will assume $(k-1)p>2$ so that $W^{k,p}(S^2)\subset {\mathcal C}^1(S^2)$. Then any element of $\widehat{\mathcal B}^{k,p}/G_\infty$ can be represented as $[f_0]$, where the parametrization $f_0\in W^{k,p}(S^2,M)$ is chosen so that ${\rm d} f_0(t)$ is injective for $t=0, 1 \in S^2={\mathbb C}\cup\{\infty\}$. With such a choice, a neighbourhood of $[f_0]\in \widehat{\mathcal B}^{k,p}/G_\infty$ can be parametrized by $[\exp_{f_0}(\xi)]$, for $\xi$ in a small ball in the subspace $$ \bigl \{ \xi\in W^{k,p}(S^2,f_0^*{\rm T} M) \ | \ \xi(t)\in {\rm im\,} {\rm d} f_0(t)^{\partial}erp \; \text{for}\; t=0,1\bigr\}. 
$$ Moreover, the map $\xi \mapsto [\exp_{f_0}(\xi)]$ is injective up to an action of the finite {\bf isotropy group} $$ G_{f_0} = \{ {\gamma} \in G_\infty \,|\, f_0\circ{\gamma} = f_0 \} . $$ In other words, for sufficiently small ${\varepsilon}>0$, a $G_{f_0}$-quotient of {\beta}gin{equation}{\lambda}bel{eq:slice} {\mathcal B}_{f_0}:= \bigl\{ f\in \widehat{\mathcal B}^{k,p} \,\big|\, d_{W^{k,p}}(f,f_0)<{\varepsilon} , f(0)\in Q_{f_0}^0 , f(1) \in Q_{f_0}^1 \bigr\} \end{equation} is a local slice for the action of $G_\infty$, where for some ${\delta}lta>0$ {\beta}gin{equation} {\lambda}bel{eq:hypsurf} Q_{f_0}^t=\bigl\{\exp_{f_0(t)} (\xi) \,\big|\, \xi \in {\rm im\,} {\rm d} f_0(t)^{\partial}erp , |\xi|<{\delta}lta \bigr\} \subset M \end{equation} are codimension $2$ submanifolds transverse to the image of $f_0$ in two extra marked points $t=0,1$. For simplicity we will in the following assume that the isotropy group $G_{f_0} =\{{\rm id}\}$ is trivial, and that the submanifolds $Q_{f_0}^{t}$ can be chosen so that $f_0^{-1}(Q_{f_0}^{t})$ is unique for $t=0,1$. Then, for sufficiently small ${\varepsilon}>0$, the intersections ${\rm im\,} f{\partial}itchfork Q_{f_0}^t$ are unique and transverse for all elements of ${\mathcal B}_{f_0}$. This proves that ${\mathcal B}_{f_0}$ is a {\it local slice} to the action of $G_\infty$ in the following sense. {\beta}gin{lemma} {\lambda}bel{lem:slice} For every $f_0\in\widehat{\mathcal B}^{k,p}$ such that ${\rm d} f_0(t)$ is injective for $t=0,1$ and $G_{f_0} =\{{\rm id}\}$, there exist ${\varepsilon},{\delta}lta>0$ such that ${\mathcal B}_{f_0} \to \widehat{\mathcal B}^{k,p}/G_\infty$, $f\mapsto [f]$ is a homeomorphism to its image. \end{lemma} {\beta}gin{remark} \rm {\lambda}bel{rmk:unique} If $f_0$ is pseudoholomorphic with closed domain, then trivial isotropy implies somewhere injectivity, see \cite[Chapter~2.5]{MS}; however this is not true for general smooth maps or other domains. Thus to prove Lemma~\ref{lem:slice} for general $f_0$ with trivial isotropy, we must deal with the case of non-unique intersections. In that case one obtains unique transverse intersections for $f\approx f_0$ in a neighbourhood of the chosen points in $f_0^{-1}(Q_{f_0}^{t})$ and can prove the same result. We defer the details to \cite{MW:gw}, where we also prove an orbifold version of Lemma~\ref{lem:slice} in the case of nontrivial isotropy. In that case, one must define the local action of the isotropy group with some care. However, it is always defined by a formula such as \eqref{transition}, and so in general is no more differentiable than the transition maps \eqref{transition} below. \end{remark} The topological embeddings ${\mathcal B}_{f}\to \widehat{\mathcal B}^{k,p}/G_\infty$ of the local slices provide a cover of $\widehat{\mathcal B}^{k,p}/G_\infty$ by Banach manifold charts. The transition map between two such Banach manifold charts centered at $f_0$ and $f_1$ is given in terms of the local slices by {\beta}gin{equation}{\lambda}bel{transition} {\Gamma}mma_{f_0,f_1} :\; {\mathcal B}_{f_0,f_1}:= {\mathcal B}_{f_0}\cap G_\infty{\mathcal B}_{f_1} \; \longrightarrow \; {\mathcal B}_{f_1}, \qquad f\longmapsto f\circ {\gamma}mma_f , \end{equation} where ${\gamma}mma_f\in G_\infty$ is uniquely determined by ${\gamma}mma_f(t)\in f^{-1}(Q_{f_1}^{t})$ for $t=0,1$ by our choice of ${\mathcal B}_{f_1}$. 
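In the present example the transition maps can be made quite explicit (the notation $w_t(f)$ is introduced only for this aside): every element of $G_\infty$ is an affine map $z\mapsto az+b$ with $a\in{\mathbb C}^*$ and $b\in{\mathbb C}$, and if we write $w_t(f)\in f^{-1}(Q_{f_1}^{t})$ for the intersection points above, then
$$
\gamma_f(z) \;=\; w_0(f) \,+\, \bigl(w_1(f)-w_0(f)\bigr)\, z
$$
is the unique such map with $\gamma_f(t)=w_t(f)$ for $t=0,1$ (provided $w_0(f)\neq w_1(f)$, which holds whenever the slicing submanifolds $Q_{f_1}^0$ and $Q_{f_1}^1$ are disjoint). In particular $\gamma_f$ depends on $f$ only through the two intersection points.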
Here the differentiability of the map {\beta}gin{equation} {\lambda}bel{gf} W^{k,p}(S^2,M)\to G_\infty, \quad f\mapsto {\gamma}mma_f \end{equation} is determined by that of the intersection points with the slicing conditions for $t=0,1$, $$ W^{k,p}(S^2,M)\to S^2, \quad f \mapsto f^{-1}(Q_{f_1}^{t}) . $$ By the implicit function theorem, these maps are ${\mathcal C}^\ell$-differentiable if $k>\ell + 2/p$ such that $W^{k,p}(S^2)\subset {\mathcal C}^\ell(S^2)$. However, the transition map also involves the action \eqref{action}, and thus is non-differentiable at some simple examples of $f\in W^{k,p}{\smallsetminus} W^{k+1,p}$, no matter how we pick $k,p$. {\beta}gin{lemma} {\lambda}bel{le:Gsmooth} Let $B\subset {\mathcal B}_{f_0}$ be a finite dimensional submanifold of ${\mathcal B}_{f_0}$ with the $W^{k,p}$-topology, and assume that it lies in the subset of smooth maps, $B\subset{\mathcal C}^\infty(S^2,M)\cap {\mathcal B}_{f_0}$. Then the transition map \eqref{transition} restricts to a smooth map $$ B \cap G_\infty{\mathcal B}_{f_1} \; \longrightarrow \; {\mathcal B}_{f_1}, \qquad f \;\longmapsto\; {\Gamma}mma_{f_0,f_1}(f) = f\circ {\gamma}mma_f . $$ \end{lemma} {\beta}gin{proof} Since $B$ is finite dimensional, all norms on ${\rm T} B$ are equivalent. In particular, we equip $B$ with the $W^{k,p}$-topology in which it is a submanifold of ${\mathcal B}_{f_0}$. Then the embeddings $B\to {\mathcal C}^\ell(S^2,M)$ for all $\ell\in{\mathbb N}$ are continuous and hence the above discussion shows that the map $B\to G_\infty$, $f\mapsto{\gamma}_f$ given by restriction of \eqref{gf} is smooth. To prove smoothness of ${\Gamma}mma_{f_0,f_1}|_{B \cap G_\infty{\mathcal B}_{f_1}}$ it remains to establish smoothness of the restriction of the action ${\widetildelde h}eta$ in \eqref{action} to $$ {\widetildelde h}eta_B \,: \; G_\infty \tildemes B \;\longrightarrow \; W^{k,p}(S^2,M), \qquad ({\gamma},f) \;\longmapsto\; f\circ {\gamma} . $$ For that purpose first note that continuity in $f\in B$ is elementary since, after embedding $M\hookrightarrow {\mathbb R}^N$, this is a linear map in $f$. Continuity in ${\gamma}$ for fixed $f\in{\mathcal C}^\infty(S^2,M)$ follows from uniform bounds on the derivatives of $f$ (and could also be extended to infinite dimensional subspaces of $W^{k,p}(S^2,M)$ by density of the smooth maps). This proves continuity of ${\widetildelde h}eta_B$. Generalizing \eqref{eq:actiond}, with ${\rm T}_{{\gamma}_0} G_\infty\subset{\Gamma}({\gamma}_0^*{\rm T} S^2)$ the space of holomorphic (and hence smooth) sections $X:S^2 \to {\gamma}_0^*{\rm T} S^2$ that vanish at $\infty\in S^2$, the differential of ${\widetildelde h}eta_B$ is {\beta}gin{align*} {\rm D}{\widetildelde h}eta_B ({\gamma}_0,f_0) : \; {\rm T}_{{\gamma}_0} G_\infty \tildemes {\rm T}_{f_0} B &\;\longrightarrow\; W^{k,p}(S^2, f_0^*{\rm T} M) \\ {(X,\xi)} \qquad&\;\longrightarrow\; \;\; \xi\circ{\gamma}_0 + {\rm d} f_0 \circ X . \end{align*} It exists and is a bounded operator at all $({\gamma}_0,f_0)\in G_\infty \tildemes B$ since by assumption $f_0$ is smooth, so it remains to analyze the regularity of this operator family under variations in $G_\infty \tildemes B$. 
Denoting by $L(E,F)$ the space of bounded linear operators $E\to F$, the second term, $$ B \;\to\; L\bigl({\rm T}_{{\gamma}_0} G_\infty , W^{k,p}(S^2, f_0^*{\rm T} M)\bigr), \quad f_0 \;\mapsto\; ({\rm d} f_0)_* \qquad\text{given by}\; ({\rm d} f_0)_* X = {\rm d} f_0 \circ X , $$ is smooth on the finite dimensional submanifold $B$ because ${\mathcal C}^\ell(S^2,{\mathbb R}^N) \to W^{k,p}({\rm T} S^2,{\mathbb R}^N)$, $f_0\mapsto {\rm d} f_0$ is a bounded linear map for sufficiently large $\ell$, and the ${\mathcal C}^\ell$-norm on $B\subset {\mathcal C}^\infty(S^2,{\mathbb R}^N)$ is equivalent to the $W^{k,p}$-norm. The first term, {\beta}gin{equation}{\lambda}bel{thetag} G_\infty \;\to\; L\bigl({\rm T}_{f_0} B , W^{k,p}(S^2, f_0^*{\rm T} M)\bigr) , \quad {\gamma}_0 \;\mapsto\; \theta_{{\gamma}_0} \qquad\text{given by}\;\theta_{{\gamma}_0}(\xi) = \xi\circ{\gamma}_0 , \end{equation} is of the same type as ${\widetildelde h}eta_B$, hence continuous by the above arguments. This proves continuous differentiability of ${\widetildelde h}eta_B$. Then continuous differentiability of the first term \eqref{thetag} follows from the same general statement about differentiability of reparametrization by $G_\infty$, and thus implies continuous differentiability of ${\rm D}{\widetildelde h}eta_B$. Iterating this argument, we see that all derivatives of ${\widetildelde h}eta_B$ are continuous, and hence ${\widetildelde h}eta_B$ is smooth, as claimed. Note however that this argument crucially depends on the finite dimensionality of $B$ to obtain continuity for the second term of ${\rm D}{\widetildelde h}eta_B$. \end{proof} An important observation here is that the Cauchy--Riemann operator $$ {\overline {{\partial}}_J} : \; \widehat{\mathcal B}^{k,p} \;\longrightarrow\; \widehat{\mathcal E}:= {\textstyle \bigcup_{f\in\widehat{\mathcal B}^{k,p}}} W^{k-1,p}(S^2,{\Lambda}mbda^{0,1}f^*{\rm T} M) $$ restricts to a smooth section ${\overline {{\partial}}_J}:{\mathcal B}_{f_i}\to\widehat{\mathcal E}|_{{\mathcal B}_{f_i}}$ in each local slice. The bundle map $$ \widehat{\Gamma}_{f_0,f_1} : \; \widehat{\mathcal E}|_{{\mathcal B}_{f_0,f_1}} \;\longrightarrow\; \widehat{\mathcal E}|_{{\mathcal B}_{f_1}}, \qquad \widehat{\mathcal E}_f \;\ni\; \eta \;\longmapsto\; \eta \circ {\rm d} {\gamma}_f^{-1} \;\in\; \widehat{\mathcal E}_{f\circ{\gamma}_f} , $$ intertwines the Cauchy--Riemann operators in different local slices, $$ \widehat{\Gamma}_{f_0,f_1} \circ {\overline {{\partial}}_J} = {\overline {{\partial}}_J} \circ {\Gamma}_{f_0,f_1} . $$ However, general perturbations of the form ${\overline {{\partial}}_J} + \nu : {\mathcal B}_{f_1} \to \widehat{\mathcal E}|_{{\mathcal B}_{f_1}}$, where $\nu$ is a ${\mathcal C}^1$ section of the bundle $\widehat{\mathcal E}|_{{\mathcal B}_{f_1}}$, are {\it not} pulled back to ${\mathcal C}^1$ sections of $\widehat{\mathcal E}|_{{\mathcal B}_{f_0,f_1}}$ by $\widehat{\Gamma}_{f_0,f_1}$ since {\beta}gin{equation}{\lambda}bel{trans2} \widehat{\Gamma}_{f_0,f_1}^{-1}\circ \nu \circ {\Gamma}_{f_0,f_1} : \; f \;\longmapsto\; \nu(f\circ{\gamma}_f) \circ {\rm d}{\gamma}_f^{-1} \end{equation} does not depend differentiably on the points $f$ in the base ${\mathcal B}_{f_0}$. In equations \eqref{graphsp} and \eqref{graph} we give a geometric construction of a special class of sections $\nu$ that do behave well under this pullback. 
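To locate the loss of differentiability more precisely (a heuristic computation only; the notation $Y_\xi$ is introduced just for this aside), formally differentiate $f\mapsto f\circ\gamma_f$ at $f$ in a direction $\xi$. By the chain rule this produces
$$
\xi\circ\gamma_f \;+\; {\rm d} f\,(Y_\xi), \qquad\text{where}\quad Y_\xi \;=\; \tfrac{d}{ds}\big|_{s=0}\,\gamma_{f_s} \quad\text{for a path } f_s \ \text{with}\ \tfrac{d}{ds}\big|_{s=0} f_s=\xi
$$
is a smooth vector field along $\gamma_f$ (smooth because $G_\infty$ is finite dimensional and \eqref{gf} is differentiable for suitable $k,p$). Exactly as in \eqref{eq:actiond}, the term ${\rm d} f\,(Y_\xi)$ is of class $W^{k,p}$ only if $f\in W^{k+1,p}$. This is the precise sense in which \eqref{transition} and \eqref{trans2} fail to be continuously differentiable on the infinite dimensional local slices, while Lemma~\ref{le:Gsmooth} shows that the problem disappears on finite dimensional submanifolds of smooth maps.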
{\beta}gin{remark}\rm {\lambda}bel{LTBS} The lack of differentiability in \eqref{transition} and \eqref{trans2} poses a significant problem in the global obstruction bundle approach to regularizing holomorphic curve moduli spaces. This approach views the Cauchy--Riemann operator ${\overline {{\partial}}_J}:\widetilde{\mathcal B}\to\widetilde{\mathcal E}$ as a section of a topological vector bundle over an ambient space $\widetilde{\mathcal B}$ of stable $W^{k,p}$-maps modulo reparametrization, with a $W^{k,p}$-version of Gromov's topology. It requires a ``partially smooth structure'' on this space, in particular a smooth structure on each stratum. For example, the open stratum in the present Gromov--Witten example is $\widehat{\mathcal B}^{k,p}/G_\infty\subset\widetilde{\mathcal B}$, for which smooth orbifold charts, isotropy actions, and transition maps are explicitly claimed in \cite[Proposition~2.15]{GLu} and implicitly in \cite{LiuT}. The latter paper does not even prove continuity of isotropy and transition maps, though an argument was supplied by Liu for the 2003 revision of \cite[\S6]{Mcv}. However, continuity does not suffice to preserve the differentiability of perturbation sections in local trivializations of $\widetilde{\mathcal E}\to\widetilde{\mathcal B}$ under pullback by isotropy or transition maps. Another serious problem with this approach is its use of cutoff functions to extend sections defined on infinite dimensional local slices such as ${\mathcal B}_{f_0}$ to other local slices. Since these cutoff sections are still intended to give Fredholm perturbations of ${\overline {{\partial}}_J}$, the cutoff functions must be ${\mathcal C}^1$ and remain so under coordinate changes. The paper \cite{LiuT} gives no details here. A construction is given in \cite[Appendix~D]{LuT}, but this paper implicitly assumes an invariant notion of smoothness on the strata of $\widetilde{\mathcal B}$. It is possible that one can avoid these problems by first passing to a finite dimensional reduction as in \cite{LT}. However, as far as we are aware, details of such an approach have not been worked out. If they were to be worked out, we would consider them more as an approach of Kuranishi type than an approach using a global obstruction bundle. The global obstruction bundle approach requires a smooth structure on the ambient infinite dimensional spaces such as $\widehat{\mathcal B}^{k,p}/G_\infty$. One way to resolve this issue would be to use the scale calculus of polyfold theory, in which the action \eqref{action} and the coordinate changes \eqref{transition} are scale-smooth, hence $\widehat{\mathcal B}^{k,p}/G_\infty$ has the structure of a scale-Banach manifold -- the simplest nontrivial example of a polyfold. It is conceivable that the constructions of \cite{LiuT,Mcv} can be made rigorous by replacing Banach spaces with scale-Banach spaces, smoothness with scale-smoothness, and all standard calculus results (e.g.\ chain rule and implicit function theorem) with their correlates in scale calculus. It is however also conceivable that the construction of a global obstruction bundle near nodal holomorphic curves requires a compatibility of strata-wise smooth structures, along the lines of a polyfold structure on $\widetilde{\mathcal B}$. Siebert \cite[Theorem 5.1]{Sieb} also aims for a Banach orbifold structure on a space of equivalence classes of maps. However, his notion is that of topological orbifold, i.e.\ with continuous transition maps. 
Indeed, his construction of local slices uses a (problematic for other reasons) averaged version of the slicing condition in \eqref{eq:slice}; thus the transition maps have the same form as \eqref{transition}, and hence fail to be differentiable.\footnote{ Instead, Siebert realized that differentiability does hold in all but finitely many directions. This local classical differentiability can also be observed in all current applications of the polyfold approach. Differing from this approach, the construction of a ``localized Euler class'' in \cite[Thm.1.21]{Sieb} requires a section whose differential varies continuously in the operator norm, even in the nondifferentiable directions. However, at least in the fairly standard analytic setup of e.g.\ \cite{HWZ:gw, w:fred}, this is not the case. }
\end{remark}

\subsection{Differentiability issues in general holomorphic curve moduli spaces} \hspace{1mm}\\ \label{ss:DMdiff}
The purpose of this section is to explain that the differentiability issues discussed in the previous section pertain to any holomorphic curve moduli space for which regularization is a nontrivial question. The only exceptions to the differentiability issues are compactified moduli spaces that can be expressed as subspaces of tuples of maps and complex structures on a {\it fixed domain},
\begin{equation}\label{safe}
\overline{\mathcal M} = \bigl\{ (f,j) \in {\mathcal C}^\infty(\Sigma,M)\times {\mathfrak C}_\Sigma \,\big|\, \overline{\partial}_{j,J} f = 0 \bigr\} ,
\end{equation}
where ${\mathfrak C}_\Sigma$ is a compact manifold of complex structures on a fixed smooth surface $\Sigma$. In particular, this does not allow one to divide out by any equivalence relation of the type
\begin{equation} \label{rep}
(f,j) \sim (f\circ\phi, \phi^* j) \qquad \forall \phi\in{\rm Diff}(\Sigma) .
\end{equation}
For moduli spaces of this form, regularization can be achieved by the simplest geometric approach; namely choosing a generic domain-dependent almost complex structure $J:\Sigma \to {\mathcal J}(M,\omega)$, with no further quotient or compactification needed. One rare example of this setting is the $3$-pointed spherical Gromov--Witten moduli space $\overline{\mathcal M}_3(A,J)$ for a class $A$ which excludes bubbling by energy arguments, since the parametrization can be fixed by putting the marked points at $0,1,\infty\in {\mathbb C}\cup\{\infty\}\cong S^2$, thus setting $\Sigma=S^2$ and ${\mathfrak C}_\Sigma=\{i\}$ in \eqref{safe}. A similar setup exists for tori with $1$ marked point or disks with $3$ marked points in the absence of bubbling, but we are not aware of further meaningful examples. Generally, the compactified holomorphic curve moduli spaces are of the form
$$ \bigl\{ (\Sigma,{\bf z},f) \,\big|\, (\Sigma,{\bf z}) \in {\mathfrak R}, f: \Sigma \to M, \overline{\partial}_{J} f = 0 \bigr\} / \sim $$
with
$$ (\Sigma,{\bf z}, f) \sim (\Sigma' , \phi^{-1}({\bf z}), f\circ\phi) \qquad \forall \phi:\Sigma'\to\Sigma . $$
Here ${\mathfrak R}$ is some space of Riemann surfaces $\Sigma$ with a fixed number $k\in{\mathbb N}_0$ of pairwise distinct marked points ${\bf z}\in\Sigma^k$, which contains regular as well as broken or nodal surfaces. In important examples (Floer differentials and one point Gromov--Witten invariants arising from disks or spheres) all domains $(\Sigma,{\bf z})$ are unstable, i.e.\ have infinite automorphism groups.
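For instance (this aside merely makes the automorphism groups explicit), for one-pointed spheres and for Floer cylinders one has
$$
{\rm Aut}\bigl(S^2,\{\infty\}\bigr)\;=\;\{z\mapsto az+b \,|\, a\in{\mathbb C}^*,\, b\in{\mathbb C}\}\;=\;G_\infty,
\qquad
{\rm Aut}\bigl({\mathbb R}\times S^1\bigr)\;\supset\;{\mathbb R}\ \ (\text{translations}),
$$
so in neither case can the parametrization be fixed by the marked points alone.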
If the regular domains are stable, unless bubbling is a priori excluded, ${\mathfrak R}/\!\!{\sigma}m$ is still not a Deligne--Mumford space since one has to allow nodal domains $({\Sigma},{\bf z})$ with unstable components to describe sphere or disk bubbles. On the complement of nodal surfaces, these moduli spaces have local slices of the form \eqref{safe} with additional marked points ${\bf z}\in {\Sigma}^k {\smallsetminus} {\Delta}lta$. In the case of unstable domains, the slices are constructed by stabilizing the domain with additional marked points given by intersections of the map with auxiliary hypersurfaces. In the case of stable domains, the slices are constructed by pullback of the complex structures to a fixed domain ${\Sigma}$, or fixing some of the marked points. In fact, stable spheres, tori, and disks have a single slice covering the interior of the Deligne--Mumford space $\{({\Sigma},{\bf z}) \;\text{regular, stable}\}/\!\!{\sigma}m$ given by fixing the surface and letting all but $3$ resp.\ $1$ marked point vary. Using such slices, the differentiability issue of reparametrizations still appears in many guises: {\beta}gin{enumerate} \item The transition maps between different local slices -- arising from different choices of fixed marked points or auxiliary hypersurfaces -- are reparametrizations by biholomorphisms that vary with the marked points or the maps. The same holds for local slices arising from different reference surfaces, unless the two families of diffeomorphisms to the reference surface are related by a fixed diffeomorphism, and thus fit into a single slice. \item A local chart for ${\mathfrak R}$ near a nodal domain is constructed by gluing the components of the nodal domain to obtain regular domains. Transferring maps from the nodal domain to the nearby regular domains involves reparametrizations of the maps that vary with the gluing parameters. \item The transition map between a local chart near a nodal domain and a local slice of regular domains is given by varying reparametrizations. This happens because the local chart produces a family of Riemann surfaces that varies with gluing parameters, whereas the local slice has a fixed reference surface. \item Infinite automorphism groups act on unstable components of nodal domains. \end{enumerate} The geometric regularization approach deals with issues (i), (iii), and (iv) by dealing with the biholomorphisms between domains only after equivariant transversality is achieved. This is possible only in restricted geometric settings; in particular it fails unless multiply covered spheres can be excluded in (iv). Similarly, the geometric approach deals with issue (ii) by making gluing constructions only on finite dimensional spaces of smooth solutions that are cut out transversely. We show in Remark~\ref{LTBS} and Section~\ref{ss:Kcomp} that these issues are highly nontrivial to deal with in abstract regularization approaches. In the polyfold approach described in Section~\ref{ss:poly}, it is solved by introducing the notion of scale-smoothness for maps between scale-Banach spaces, in which the reparametrization action is smooth. The other approaches have no systematic way of dealing with a symmetry group that acts nondifferentiably. 
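Concretely (this merely restates the observation of Remark~\ref{BSdiff}), the mechanism exploited by the scale calculus is that for every $\ell\in{\mathbb N}_0$ the reparametrization action
$$
G_\infty\times W^{k+\ell,p}(S^2,M)\;\longrightarrow\; W^{k,p}(S^2,M), \qquad (\gamma,f)\;\longmapsto\; f\circ\gamma
$$
is of class ${\mathcal C}^\ell$, even though no single Sobolev level $W^{k,p}(S^2,M)$ carries a ${\mathcal C}^1$ action; scale-smoothness packages exactly this gain of differentiability along the scale of Sobolev completions.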
{\beta}gin{remark}\rm One notable partial solution of the differentiability issues is the construction of Cieliebak-Mohnke \cite{CM} for Gromov--Witten moduli spaces in integral symplectic manifolds.\footnote { Ionel lays the foundations for a related approach in \cite{Io}.} They use a fixed set of symplectic hypersurfaces to construct a global slice to the equivalence relation \eqref{rep}. This reduces the differentiability issues to the gluing analysis near nodal curves, where the construction of a pseudocycle does not require differentiability. This method fits into the geometric approach as described in Section~\ref{ss:geom} by working with a larger set of perturbations. The existence of suitable hypersurfaces is a special geometric property of the symplectic manifold and the type of curves considered. \end{remark} \subsection{Smoothness issues arising from evaluation maps} \hspace{1mm}\\ {\lambda}bel{ss:eval} Another less dramatic differentiability issue in the regularization of holomorphic curve moduli spaces arises from evaluating maps at varying marked points. This concerns evaluation maps of the form $$ {\rm ev_i}: \; \bigl\{ \bigl({\Sigma}, {\bf z}=(z_1,\ldots,z_k) ,f \bigr) \,\big|\, \ldots \bigr\}/\!\!{\sigma}m \;\;\longrightarrow\; M , \qquad [{\Sigma},{\bf z},f] \;\longmapsto\; f(z_i) $$ in situations when they need to be regularized while the moduli space is being constructed, e.g.\ if they need to be transverse to submanifolds of $M$ or are involved in its definition via fiber products. In those cases, the evaluation map needs to be included in the setup of a Fredholm section. However, on infinite dimensional function spaces its regularity depends on the Banach norm on the function space. As a representative example, the map {\beta}gin{equation} {\lambda}bel{evmap} {\rm ev}: \; S^2 \tildemes {\mathcal C}^\infty(S^2,M) \;\longrightarrow\; M , \qquad (z,f) \;\longmapsto\; f(z) \end{equation} is ${\mathcal C}^\ell$ with respect to a Banach norm on ${\mathcal C}^\infty(S^2,M)$ only if the corresponding Banach space of functions, e.g.\ ${\mathcal C}^k(S^2)$ or $W^{k,p}(S^2)$, embeds continuously to ${\mathcal C}^\ell(S^2)$, e.g.\ if $k\geq\ell$ resp.\ $(k-\ell)p>2$. This can be seen from the explicit form of the differential $$ {\rm D}_{(z_0,f_0)}{\rm ev}: \; T_{z_0}S^2 \tildemes {\mathcal C}^\infty(S^2,f_0^*{\rm T} M) \;\longrightarrow\; {\rm T}_{f_0(z_0)}M , \qquad (Z,\xi) \;\longmapsto\; {\rm d} f_0 (Z) + \xi(z_0) , $$ whose regularity is ruled by the regularity of ${\rm d} f_0$. We will encounter this issue in the construction of a smooth domain for a Kuranishi chart in Section~\ref{ss:gw}, where the evaluation maps are used to express the slicing conditions that provide local slices to the reparametrization group. There we are able to deal with the lack of smoothness of \eqref{evmap} by first constructing a ``thickened solution space'', which is a finite dimensional manifold consisting of smooth maps and marked points that do not satisfy the slicing condition yet. Then the slicing conditions can be phrased in terms of the evaluation restricted to a finite dimensional submanifold of ${\mathcal C}^\infty(S^2,M)$. This operator is smooth, but now it is nontrivial to establish its transversality. {\beta}gin{lemma} {\lambda}bel{le:evsmooth} Let $B\subset W^{k,p}(S^2,M)$ be a finite dimensional submanifold, and assume that it lies in the subset of smooth maps, $B\subset{\mathcal C}^\infty(S^2,M)$. 
Then the evaluation map \eqref{evmap} restricts to a smooth map
$$ {\rm ev}_B \; : \; S^2 \times B \; \longrightarrow \; M , \qquad (z,f) \;\longmapsto\; f(z) . $$
\end{lemma}
\begin{proof}
We will prove this by an iteration similar to the proof of Lemma~\ref{le:Gsmooth}, with Step $k$ asserting that maps of the type
\begin{equation}\label{type}
{\rm Ev} \;:\; {\mathbb C} \times {\mathcal C}^k({\mathbb C}, {\mathbb R}^n ) \; \longrightarrow \; {\mathbb R}^n , \qquad (z,f) \;\longmapsto\; f(z)
\end{equation}
are ${\mathcal C}^k$. In Step $0$ this proves continuity of the evaluation \eqref{evmap} on Sobolev spaces $W^{k,p}(S^2,M)$ that continuously embed into ${\mathcal C}^0(S^2,M)$. In Step $k$ this proves that ${\rm ev}_B$ is ${\mathcal C}^k$ if we can check that the inclusion $B\hookrightarrow {\mathcal C}^k(S^2, M)$ is smooth when $B$ is equipped with the $W^{k,p}$-topology. Indeed, embedding $M\hookrightarrow{\mathbb R}^N$, this is the restriction of a linear map, which is bounded (and hence smooth) since $B$ is finite dimensional. Hence to prove smoothness of ${\rm ev}_B$ it remains to perform the iteration.

Continuity in Step $0$ holds since we can estimate, given $\varepsilon>0$,
\begin{align*}
\bigl| f(z) - f'(z') \bigr| &\;\le\; 2 \| f - g\|_{{\mathcal C}^0} + \bigl| g(z) - g(z') \bigr| + \bigl|f(z') - f'(z')\bigr| \\
&\;\le\; \tfrac 12 \varepsilon + \|{\rm d} g\|_\infty |z-z'| + \|f - f'\|_{{\mathcal C}^0} \;\le\; \varepsilon ,
\end{align*}
where we pick $g\in{\mathcal C}^1({\mathbb C},{\mathbb R}^n)$ sufficiently close to $f$, and then obtain the $\varepsilon$-estimate for $(f',z')$ sufficiently close to $(f,z)$.

To see that Step $k$ implies Step $k+1$ we express the differential ${\rm D}_{(z_0,f_0)}\,{\rm Ev}:(Z,\xi)\mapsto \xi(z_0) + {\rm d}_{z_0} f_0(Z)$ as a sum of two operator families. The first family,
$$ {\mathbb C} \;\longrightarrow\; L\bigl( {\mathcal C}^{k+1}({\mathbb C}, {\mathbb R}^n ) , {\mathbb R}^n \bigr) , \qquad z_0 \;\longmapsto\; {\rm Ev}(z_0,\cdot\,) , $$
can be written as the composition of ${\mathbb C} \to L\bigl( {\mathcal C}^{k}({\mathbb C}, {\mathbb R}^n ) , {\mathbb R}^n \bigr)$, $z_0 \mapsto {\rm Ev}(z_0,\cdot\,)$, which is ${\mathcal C}^k$ by Step $k$, and the bounded linear operator $L\bigl( {\mathcal C}^k({\mathbb C}, {\mathbb R}^n ) , {\mathbb R}^n \bigr) \to L\bigl( {\mathcal C}^{k+1}({\mathbb C}, {\mathbb R}^n ) , {\mathbb R}^n \bigr)$. The second family,
$$ {\mathbb C} \times {\mathcal C}^{k+1}({\mathbb C}, {\mathbb R}^n ) \;\longrightarrow\; L\bigl( {\mathbb C} , {\mathbb R}^n \bigr) , \qquad (z_0, f_0) \;\longmapsto\; {\rm d}_{z_0} f_0, $$
can be written as the composition of the linear map
\begin{equation}\label{bugger}
{\mathbb C} \times {\mathcal C}^{k+1}({\mathbb C}, {\mathbb R}^n ) \;\longrightarrow\; {\mathbb C}\times {\mathcal C}^k\bigl({\mathbb C}, L({\mathbb C},{\mathbb R}^n) \bigr), \qquad (z_0, f_0) \;\longmapsto\; (z_0, {\rm d} f_0) ,
\end{equation}
which is a bounded linear operator and hence smooth, and the evaluation map
$$ {\mathbb C} \times {\mathcal C}^k\bigl({\mathbb C}, L({\mathbb C},{\mathbb R}^n) \bigr) \;\longrightarrow\; L({\mathbb C},{\mathbb R}^n), \qquad (z_0, \eta) \;\longmapsto\; \eta(z_0) , $$
which is of the type \eqref{type} dealt with in Step $k$, hence also ${\mathcal C}^k$ by the iteration assumption.
This proves that the differential of evaluation maps of type ${\rm Ev} : {\mathbb C} \times {\mathcal C}^{k+1}({\mathbb C},{\mathbb R}^n) \to {\mathbb R}^n$ is ${\mathcal C}^k$, i.e.\ the maps are ${\mathcal C}^{k+1}$, which finishes the iteration step and hence the proof of smoothness of ${\rm ev}_B$. Again note that this argument makes crucial use of the finite dimensionality of $B$ to obtain continuity of the embeddings $B\hookrightarrow {\mathcal C}^k(S^2, M)$ to prove that ${\rm ev}_B$ is ${\mathcal C}^k$. Here the increase in the differentiability index $k$ is necessary to obtain boundedness of \eqref{bugger} in the iteration step.
\end{proof}

\section{On the construction of compatible finite dimensional reductions} \label{s:construct}

This section gives a general outline of the construction of a Kuranishi atlas on a given moduli space of holomorphic curves, concentrating on the issues of dividing by the reparametrization action and making charts compatible. We thus use the example of the Gromov--Witten moduli space $\overline{\mathcal M}_{1}(A,J)$ of $J$-holomorphic curves of genus $0$ with one marked point, and assume that the nonzero class $A\in H_2(M)$ is such that it excludes bubbling and multiply covered curves a priori. (For example, $A$ could be $\omega$-minimal as assumed in Section~\ref{ss:geom}.) This allows us to use the framework of smooth Kuranishi atlases with trivial isotropy that is developed in Sections~\ref{s:chart}--\ref{s:VMC} of this paper. The additional difficulties of finite isotropy groups and nodal curves require a stratified smooth groupoid setting and will be developed in \cite{MW:ku2,MW:gw}. We do not claim that this is a general procedure for regularizing other moduli spaces of holomorphic curves, but it does provide a guideline for similar constructions.

Recall that in this simplified setting the compactified Gromov--Witten moduli space
\begin{align*}
\overline{\mathcal M}_{1}(A,J) &:= \bigl\{ (z_1=\infty, f) \in S^2\times {\mathcal C}^\infty(S^2,M) \,\big|\, f_*[S^2]=A, \overline{\partial}_J f = 0 \bigr\} / G_\infty
\end{align*}
is the solution space of the Cauchy--Riemann equation modulo reparametrization by
$$ G_\infty := \{\gamma\in {\rm PSL}(2,{\mathbb C}) \,|\, \gamma(\infty)=\infty\}. $$
We begin by discussing the construction of basic Kuranishi charts for $\overline{\mathcal M}_{1}(A,J)$ in Section~\ref{ss:Kchart}, where we find that an abstract approach runs into differentiability issues in reducing to a local slice of the action of $G_\infty$. However, this can be overcome by using the infinite dimensional local slices that are constructed geometrically in Section~\ref{ss:nodiff}. In Section~\ref{ss:Kcomp} we discuss the compatibility of a pair of basic Kuranishi charts, showing again that a sum chart and coordinate changes cannot be constructed abstractly (e.g.\ from the given basic charts), but require specifically constructed obstruction bundles, which transfer well under the action of $G_\infty$. Finally in Section~\ref{ss:gw} we give an outline of the construction of a full Kuranishi atlas for $\overline{\mathcal M}_1(A,J)$.
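For orientation (a standard index computation that is not needed below; we assume $\dim_{\mathbb R} M=2n$ and write $c_1(A)$ for $\langle c_1({\rm T} M,J),A\rangle$), the expected dimension of this moduli space is
$$
d \;=\; {\rm ind}\,{\rm D}_f\overline{\partial}_J \;-\; \dim_{\mathbb R} G_\infty \;=\; \bigl(2n+2c_1(A)\bigr)-4 ,
$$
and this $d$ is the dimension of the zero sets of the transverse perturbations produced from a $d$-dimensional Kuranishi atlas as in Theorem~\ref{thm:VMC1}.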
\subsection{Construction of basic Kuranishi charts} {\lambda}bel{ss:Kchart} \hspace{1mm}\\ The construction of basic Kuranishi charts for the Gromov--Witten moduli space ${\overlineerline {\Mm}}_{1}(A,J)$ requires local finite dimensional reductions of the Cauchy--Riemann operator {\beta}gin{equation} {\lambda}bel{eq:dbar} {\overline {{\partial}}_J} : \; \widehat{\mathcal B}^{k,p}= W^{k,p}(S^2,M) \;\longrightarrow\; \widehat{\mathcal E}:= {\textstyle \bigcup_{f\in\widehat{\mathcal B}^{k,p}}} W^{k-1,p}(S^2,{\Lambda}mbda^{0,1}f^*{\rm T} M), \end{equation} and simultaneously a reduction of the noncompact Lie group $G_\infty$ to a finite isotropy group; namely the trivial group in the case considered here. We begin by giving an abstract construction of a finite dimensional reduction for an abstract equivariant Fredholm section. Note that by the previous discussion, holomorphic curve moduli spaces do not exactly fall into this abstract setting. However, our purpose is to demonstrate the need to deal with the reparametrization group in infinite dimensional settings. {\beta}gin{remark}\rm To simplify the reading of the following sections, let us explain our notational philosophy. We use curly letters for locally noncompact spaces and roman letters for finite dimensional spaces. We also use the hat superscript to denote spaces on which an automorphism group acts, or the slicing conditions are not (yet) applied. For example, ${\mathcal B}_{f_0} \subset \widehat {\mathcal B}^{k,p}$ is an infinite dimensional local slice in a Banach manifold $\widehat {\mathcal B}^{k,p}$ of maps, the {\it local thickened solution space} $\widehat U$ is a finite dimensional submanifold of $\widehat {\mathcal B}$, and $U\subset \widehat U$ is the subset satisfying a slicing condition. For bundles we again use curly letters if the fibers are infinite dimensional and roman letters if they are finite dimensional, with hats indicating that the base is infinite dimensional and tildes indicating that it is finite dimensional. For example, $\widehat{\mathcal E}\to\widehat{\mathcal B}^{k,p}$ is a bundle with infinite dimensional fibers over a Banach manifold, while $\widehat E \subset \widehat{\mathcal E}|_{\widehat{\mathcal B}}$ has finite dimensional fibers $\widehat E |_f$ over points $f\in\widehat{\mathcal B}$ in an open subset $\widehat{\mathcal B} \subset \widehat{\mathcal B}^{k,p}$. We will always write the fiber at a point as a restriction $\widetilde E|_f$, since we require subscripts for other purposes. Namely, when constructing a finite dimensional reduction near a point $f_0$, we use $f_0$ as subscript for the domains $U_{f_0}$ and restrictions of the bundles $\widetilde E_{f_0}=\widehat E|_{U_{f_0}}$. Moreover, we denote by $E_{f_0}$ a finite dimensional vector space isomorphic to the fibers $(\widetilde E_{f_0})|_f$ of $\widetilde E_{f_0}$. Finally, the symbol $\approx$ is used to mean ``sufficiently close to". Thus for ${\gamma}\in G_\infty$, the set $\{{\gamma}\approx id\}$ is a neighbourhood of the identity. \end{remark} {\beta}gin{lemma} {\lambda}bel{le:fobs} Suppose that ${\sigma}:\widehat{\mathcal B}\to\widehat{\mathcal E}$ is a smooth Fredholm section that is equivariant under the smooth, free, proper action of a finite dimensional Lie group $G$. 
For any $f\in{\sigma}^{-1}(0)$ let $E_f\subset \widehat{\mathcal E}|_f$ be a finite rank complement of ${\rm im\,} {\rm D}_f{\sigma}\subset \widehat{\mathcal E}|_f$, and let ${\rm T}_f (Gf)^{\partial}erp \subset \ker{\rm D}_f{\sigma}$ be a complement of the tangent space of the $G$-orbit inside the kernel. There exists a smooth map $s_f: W_f \to E_f$ on a neighbourhood $W_f\subset {\rm T}_f (Gf)^{\partial}erp$ of~$0$ and a homeomorphism ${\partial}si_f: s_f^{-1}(0)\to {\sigma}^{-1}(0)/G$ to a neighbourhood of $[f]$. \end{lemma} {\beta}gin{proof} Let $\widehat E \subset\widehat{\mathcal E}|_{\widehat {\mathcal V}}$ be the trivial extension of $E_f\subset \widehat{\mathcal E}|_f$ given by a local trivialization $\widehat{\mathcal E}|_{\widehat {\mathcal V}} \cong \widehat{\mathcal V} \tildemes \widehat{\mathcal E}|_f$ over an open neighbourhood $\widehat{\mathcal V}\subset\widehat{\mathcal B}$ of $f$. Then $\Pi\circ{\sigma} : \widehat{\mathcal V} \to \widehat{\mathcal E}_{\widehat{\mathcal V}}/\widehat E$ is a smooth Fredholm operator that is transverse to the zero section. Thus by the implicit function theorem the thickened solution space $$ \widehat U_f := \{ \, g\in \widehat{\mathcal V} \,|\, {\sigma}(g)\in \widehat E \, \} \; \subset \widehat{\mathcal B} $$ is a submanifold of finite dimension ${\rm ind}\,{\rm D}_f{\sigma} + {\rm rk}\,E_f$. In particular, for small $\widehat{\mathcal V}$, there is an exponential map ${\rm T}_f \widehat U_f \supset \widehat W_f \to \widehat U_f$. More precisely, this is a diffeomorphism $$ \exp_f: \; \ker {\rm D}_f {\sigma} \;\supset\; \widehat W_f \;\overlineerset{\cong}{\longrightarrow} \; \{\, g\in \widehat{\mathcal V} \,|\, {\sigma}(g)\in \widehat E \,\} \;=\; \widehat U_f $$ from a neighbourhood $\widehat W_f\subset \ker {\rm D}_f {\sigma}$ of $0$ with $\exp_f(0)=f$ and ${\rm d}_0\exp_f : \ker{\rm D}_f{\sigma} \to {\rm T}_f \widehat{\mathcal B}$ the inclusion. Note here that we chose the minimal obstruction space $E_f$ so that $$ {\rm T}_f \widehat U_f \;=\; ({\rm D}_f{\sigma})^{-1}(E_f) \;=\; \ker {\rm D}_f(\Pi\circ{\sigma}) \;=\; \ker {\rm D}_f{\sigma}. $$ Via this exponential map we then obtain maps {\beta}gin{align*} \widehat s : \;\widehat W_f &\to \exp_f^*\widehat E, \qquad\qquad\;\, \xi\mapsto {\sigma}(\exp_f(\xi)) , \\ \widehat{\partial}si : \;\widehat s^{-1}(0) &\to {\sigma}^{-1}(0)/G, \quad\quad \xi\mapsto [\exp_f(\xi)] \end{align*} such that the section $\widehat s$ is smooth and $\widehat{\partial}si$ is continuous with image $[\widehat{\mathcal V}\cap{\sigma}^{-1}(0)]$. Restricting to the complement of the infinitesimal action, $W_f:= \widehat W_f \cap {\rm T}_f (Gf)^{\partial}erp$, and trivializing $\exp_f^*\widehat E \cong \widehat W_f \tildemes E_f$ we obtain a smooth map $s_f$ and a continuous map ${\partial}si_f$, {\beta}gin{align*} s_f:= \widehat s|_{{\rm T}_f (Gf)^{\partial}erp} \;\; &: \; \quad W_f \to E_f, \\ {\partial}si_f:= \widehat{\partial}si_f|_{{\rm T}_f (Gf)^{\partial}erp} &: \; s_f^{-1}(0) \to {\sigma}^{-1}(0)/G . \end{align*} We need to check that ${\partial}si_f$ is injective i.e.\ that every orbit of $G$ in $\widehat W_f$ intersects $\exp_f(s_f^{-1}(0))$ at most once. We claim that this holds for $\widehat {\mathcal V}$ sufficiently small. By contradiction suppose $s_f^{-1}(0) \ni \xi_i, \xi'_i\to 0$, ${\gamma}_i\in G{\smallsetminus}\{{\rm id}\}$ satisfy ${\gamma}_i\cdot \exp_f(\xi_i)= \exp_f( \xi'_i)$. 
By continuity of $\exp_f$ this implies $({\gamma}_i\cdot\exp_f(\xi_i),\exp_f(\xi'_i))\to (f,f)$, and properness of the action implies ${\gamma}_i\to{\gamma}_\infty\in G$ for a subsequence. Since the action is also free, we have ${\gamma}_\infty={\rm id}$. This will constitute a contradiction once we have proven that the ``local action'' $\{{\gamma}\approx{\rm id}\} \tildemes W_f \to {\sigma}^{-1}(0)/G$ is injective on a sufficiently small neighbourhood of $({\rm id},0)$. So far we have only used the differentiability of the $G$-action at a fixed point $f\in\widehat{\mathcal B}$ to define ${\rm T}_f(Gf)$. However, the proof of injectivity of the local action as well as local surjectivity of ${\partial}si_f$ will rely heavily on the continuous differentiability of the $G$-action $G\tildemes\widehat{\mathcal B} \to \widehat{\mathcal B}$. (Intuitively, the problem is that our slice is given by a condition involving a derivative of the $G$ action at $f$, and so is well behaved only if this derivative varies continuously with $f$.) To finish the proof of the homeomorphism property of ${\partial}si_f$ we pick $\widehat{\mathcal V}$ sufficiently small such that $\widehat U_f$ is covered by a single submanifold chart (i.e.\ a chart for $\widehat{\mathcal B}$ in a Banach space, within which $\widehat U_f$ is mapped to a finite dimensional subspace). Then we can extend $\exp_f$ to an exponential map on the ambient space, i.e.\ a diffeomorphism from a neighbourhood $\widehat{\mathcal W}_f\subset {\rm T}_f\widehat{\mathcal B}$ of $\widehat W_f$, $$ {\rm Exp}_f: \; \widehat{\mathcal W}_f \;\overlineerset{\cong}{\longrightarrow} \; \widehat{\mathcal V} \qquad \text{with} \quad {\rm Exp}_f|_{\widehat W_f} = \exp_f , \quad {\rm d}_0{\rm Exp}_f ={\rm id}_{{\rm T}_f \widehat{\mathcal B}}. $$ Note that the existence of such an extension at least requires continuous differentiability of $\widehat U_f$ resp.\ $\exp_f$. Next, we also crucially use the continuous differentiability of the action $G\tildemes\widehat{\mathcal B}\to\widehat{\mathcal B}$ to deduce that, for $\widehat {\mathcal V}$ sufficiently small, by the implicit function theorem {\beta}gin{equation} {\lambda}bel{GBS} \{ {\gamma} \in G \,|\, {\gamma} \approx {\rm id} \} \;\tildemes\; \bigr(\widehat{\mathcal W}_f \cap {\rm T}_f (Gf)^{\partial}erp\bigl) \;\longrightarrow\; \widehat{\mathcal B} , \qquad ({\gamma}, \xi ) \;\longmapsto\; {\gamma}\cdot {\rm Exp}_f(\xi) \end{equation} is a diffeomorphism to a neighbourhood of $f\in\widehat{\mathcal B}$. The injectivity of \eqref{GBS} then implies that ${\gamma}_i\cdot \exp_f(\xi_i) \neq \exp_f( \xi'_i)$ for ${\gamma}_i\neq{\rm id}$, which finishes the proof of injectivity of ${\partial}si_f$. More generally, the local diffeomorphism \eqref{GBS} implies that $$ \Psi : \,\widehat{\mathcal W}_f \cap {\rm T}_f (Gf)^{\partial}erp \;\to\; \widehat{\mathcal B}/G, \qquad \xi\mapsto [{\rm Exp}_f(\xi)] $$ is a homeomorphism to a neighbourhood ${\mathcal U} \subset \widehat{\mathcal B}/G$ of $[f]$ (which in general is a proper subset of $[\widehat{\mathcal V}]$). In particular, its image contains $\widehat{\partial}si_f(\widehat s_f^{-1}(0))\cap{\mathcal U}=[{\sigma}^{-1}(0)]\cap{\mathcal U}$, and by construction $$ \Psi \bigl( \widehat{\mathcal W}_f \cap {\rm T}_f (Gf)^{\partial}erp \bigr) \;\cap\; \widehat{\partial}si_f(\widehat s_f^{-1}(0)) \;=\; \widehat{\partial}si_f\Bigl( \widehat{\mathcal W}_f \cap {\rm T}_f (Gf)^{\partial}erp \cap \widehat s_f^{-1}(0) \Bigr) \;=\; {\partial}si_f( s_f^{-1}(0) ) . 
$$ This finally implies that the restriction ${\partial}si_f = \Psi|_{s_f^{-1}(0)}$ is a homeomorphism from $s_f^{-1}(0)$ to the neighbourhood ${\mathcal U}\cap[{\sigma}^{-1}(0)] \subset {\sigma}^{-1}(0)/G$ of $[f]$, which completes the proof. \end{proof} {\beta}gin{remark} \rm {\lambda}bel{FOglue} The above proof translates the construction of basic Kuranishi charts in \cite{FO} in the absence of nodes and Deligne--Mumford parameters into a formal setup. In \cite[12.23]{FO} this construction is described in the presence of nodes, in which case the construction of $\exp_f$ involves gluing analysis rather than just an exponential map. Then the injectivity of ${\partial}si_f$ is analogous to the claim of \cite[12.24]{FO}, where an argument is only given in the case of nontrivial Deligne--Mumford parameters. In the case of a remaining nondiscrete automorphism group such as $G_\infty$, an abstract argument would have to proceed along the lines of Lemma~\ref{le:fobs}. However, the map $({\gamma},\xi)\mapsto {\gamma}\cdot\exp_f(\xi)$ is continuously differentiable only if $\exp_f$ is ${\mathcal C}^1$ (which excludes most current gluing constructions) and has image in the smooth maps (which requires a very special construction of $\widehat E$). Moreover, one would at least need ${\rm im\,} {\rm d}_0\exp_f({\rm T}_f (G_\infty f)^{\partial}erp ) + {\rm T}_f (G_\infty f)$ to have maximal rank. This is not necessarily satisfied even for smooth gluing constructions of $\exp_f$ since e.g.\ ${\rm d}_0\exp_f$ could have nontrivial kernel. (In fact, one obvious method for making the gluing map smooth is to scale the gluing parameter such that ${\rm d}_0\exp_f$ vanishes in that direction at the node.) But note that this maximal rank does not seem to be sufficient to achieve the homeomorphism property of ${\partial}si_f$. Even in the absence of nodes, \cite{FO} construct the maps $\widehat s$ and $\widehat{\partial}si$ on a ``thickened Kuranishi domain'' analogous to $\widehat W_f$ and thus need to make the same restriction to an ``infinitesimal local slice'' as in Lemma~\ref{le:fobs}. Again, the argument for injectivity of ${\partial}si_f$ given in Lemma~\ref{le:fobs} does not apply due to the differentiability failure of the reparametrization action of $G=G_\infty$ discussed in Section~\ref{ss:nodiff}. One could apply the same argument to the embedding obtained by restricting \eqref{GBS} to the finite dimensional subspace $\widehat W_f$, as long as $\widehat U_f$ is contained in the smooth maps, and use the additional geometric information that the action of $G_\infty$ restricts to a smooth map from any finite dimensional submanifold consisting of smooth maps to $\widehat{\mathcal B}$. (It does not restrict to a smooth action unless we find a finite dimensional, $G_\infty$-invariant submanifold of $\widehat{\mathcal B}$ consisting of smooth maps.) Finally, the claim that ${\partial}si_f$ has open image in ${\sigma}^{-1}(0)/G$ is analogous to \cite[12.25]{FO}, which asserts that ``$\widehat{\partial}si(\widehat s^{-1}(0)\cap \exp_f(W_f))=\widehat{\partial}si(\widehat s^{-1}(0))$ by definition''.\footnote{ Arguments towards a weaker localized version are now proposed in \cite[Prop.34.2]{FOOO12}. } A natural approach to proving this would use $G_\infty$-invariance of $\widehat U_f$. However, $G_\infty$-invariance of $\widehat U_f$ requires $G_\infty$-equivariance of $\widehat E$, i.e.\ an equivariant extension of $E_f$ to the infinite dimensional domain $\widehat{\mathcal V}$. 
A general construction of such extensions does not exist due to the differentiability failure of the $G_\infty$-action. And again, the arguments of Lemma~\ref{le:fobs} do not apply since they use a local diffeomorphism to the infinite dimensional quotient space $\widehat{\mathcal B}/G$. Now a finite dimensional version of these arguments would require an embedding of a finite dimensional slice into a $G_\infty$-invariant, smooth target space that contains a neighbourhood of $f$ in the solution set ${\sigma}^{-1}(0)$. But there is no suitable candidate for such a space. The unperturbed solution space ${\sigma}^{-1}(0)$ is $G$-invariant, so contains $\{{\gamma}\approx {\rm id}\} \cdot \exp_f(s_f^{-1}(0))$, but may be highly singular, while the thickened solution space $\widehat U_f$ is smooth but generally not invariant under $G_\infty$, and so does not contain $\{{\gamma}\approx {\rm id}\}\cdot \exp_f(W_f)$. Finally, some argument for the continuity of ${\partial}si_f^{-1}$ is needed, though not mentioned in \cite{FO}; in Lemma~\ref{le:fobs} this also requires differentiability of the $G$-action on $\widehat{\mathcal B}$. \end{remark} In contrast to the differentiability failure of the reparametrization action discussed above, note that the gauge action on spaces of connections is generally smooth. Hence Lemma~\ref{le:fobs} applies in gauge theoretic settings, with an infinite dimensional group $G$, and abstractly provides finite dimensional reductions or Kuranishi charts for the moduli spaces. On the other hand, the differentiability issues in the construction of Kuranishi charts (and in particular coordinate changes between them) can only be resolved by using a geometrically explicit local slice ${\mathcal B}_{f}\subset \widehat{\mathcal B}^{k,p}$ as in \eqref{eq:slice}. This is briefly mentioned in various places throughout the literature, e.g.\ \cite[Appendix]{FO}, but we could not find the analytic details that will be given in the following. More precisely, the construction of a Kuranishi chart near $[f_0]\in{\overlineerline {\Mm}}_1(A,J)$ will depend on the choice~of {\beta}gin{itemize} \item a representative $f_0\in[f_0]$; \item hypersurfaces $Q^0:=Q_{f_0}^0,Q^1:=Q_{f_0}^1 \subset M$ as in \eqref{eq:hypsurf}, and ${\varepsilon}_{f_0}>0$ inducing a local slice $$ {\mathcal B}_{f_0}:= \bigl\{ f\in \widehat{\mathcal B}^{k,p} \,\big|\, d_{W^{k,p}}(f,f_0)<{\varepsilon}_{f_0} , f(0)\in Q_{f_0}^0 , f(1) \in Q_{f_0}^1 \bigr\} \;\subset \; \widehat{\mathcal B}^{k,p}; $$ \item an obstruction space $E_{f_0}\subset\widehat{\mathcal E}|_{f_0}$ that covers the cokernel of the linearization at $f_0$ of the Cauchy-Riemann section \eqref{eq:dbar}, that is ${\rm im\,} {\rm D}_{f_0}{\overline {{\partial}}_J} + E_{f_0} = \widehat{\mathcal E}|_{f_0}$; \item an extension of $E_{f_0}$ to a trivialized finite rank obstruction bundle $\widehat{\mathcal V}_{f_0}\tildemes E_{f_0} \cong \widehat E_{f_0} \subset\widehat{\mathcal E}|_{\widehat{\mathcal V}_{f_0}}$ over a neighbourhood $\widehat{\mathcal V}_{f_0}\subset\widehat {\mathcal B}^{k,p}$ of the slice ${\mathcal B}_{f_0}$. \end{itemize} With that we can construct the Kuranishi chart as a local finite dimensional reduction of the Cauchy--Riemann operator ${\overline {{\partial}}_J} : {\mathcal B}_{f_0} \to \widehat{\mathcal E}|_{{\mathcal B}_{f_0}}$ in the slice to the action of $G_\infty$. Note in the following that this construction requires the extension of the obstruction bundle $\widehat E_{f_0}$ to an open set of $\widehat{\mathcal B}^{k,p}$. 
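Before stating the construction, let us also record a rough dimension count, under the assumption (suggested by the transversality identity $({\rm T}_{f_0(t)} Q_{f_0}^{t})^{\perp} = {\rm im\,}{\rm d} f_0(t)$ used in the proof below) that the hypersurfaces $Q_{f_0}^0,Q_{f_0}^1$ have real codimension two in $M$, and that $G_\infty$ is, as the notation suggests, the four dimensional group of M\"obius transformations of $S^2$ fixing $\infty$. The two slicing conditions $f(0)\in Q_{f_0}^0$, $f(1)\in Q_{f_0}^1$ then cut out $2+2=4=\dim_{\mathbb R} G_\infty$ real dimensions from the thickened solution space $\bigl\{ f\in \widehat{\mathcal B}^{k,p} \,\big|\, d_{W^{k,p}}(f,f_0)<{\varepsilon}_{f_0}, \; {\overline{\partial}}_J f \in \widehat E_{f_0} \bigr\}$, which by the argument of Lemma~\ref{le:fobs} has dimension ${\rm ind}\,{\rm D}_{f_0}{\overline{\partial}}_J + {\rm rk}\, E_{f_0}$, so that one expects
$$
\dim U_{f_0} \;=\; {\rm ind}\,{\rm D}_{f_0}{\overline{\partial}}_J \;+\; {\rm rk}\, E_{f_0} \;-\; \dim_{\mathbb R} G_\infty ,
$$
in agreement with the abstract finite dimensional reduction of Lemma~\ref{le:fobs}.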
{\beta}gin{prop} {\lambda}bel{prop:A1} For a sufficiently small slice ${\mathcal B}_{f_0}$, the subspace of generalized holomorphic maps with respect to the obstruction bundle $\widehat E_{f_0}$ is a finite dimensional manifold {\beta}gin{equation}{\lambda}bel{Uf0} U_{f_0}:=\bigl\{ f\in{\mathcal B}_{f_0} \,\big|\, {\overline {{\partial}}_J} f \in \widehat E_{f_0} \bigr\} . \end{equation} Moreover, $\widetilde E_{f_0}:=\widehat E_{f_0}|_{U_{f_0}}\cong U_{f_0}\tildemes E_{f_0}$ forms the bundle of a Kuranishi chart, whose smooth section and footprint map (a homeomorphism to a neighbourhood of $[f_0]$) are $$ {\beta}gin{array}{rll} \tilde s_{f_0} \,:\; U_{f_0} &\to \; \widehat E_{f_0}|_{U_{f_0}}, & \quad f\mapsto {\overline {{\partial}}_J} f , \\ {\partial}si_{f_0} \,: \; \tilde s_{f_0}^{-1}(0) = \bigl\{ f\in {\mathcal B}_{f_0} \,\big|\, {\overline {{\partial}}_J} f =0\bigr\}&\to\; {\overlineerline {\Mm}}_{1}(A,J),& \quad f\mapsto [f] . \end{array} $$ \end{prop} {\beta}gin{proof} We combine the local slice conditions and the perturbed Cauchy--Riemann equation to express $U_{f_0}$ as the zero set of {\beta}gin{align*} \widehat{\mathcal B}^{k,p} \;\supset\; \bigl\{ f \,\big|\, d_{W^{k,p}}(f,f_0)<{\varepsilon}_{f_0} \bigr\} &\;\longrightarrow\; \bigl( \widehat{\mathcal E} / \widehat E_{f_0}\bigr) \tildemes ({\rm T}_{f_0(0)} Q^{0})^{\partial}erp\tildemes ({\rm T}_{f_0(1)} Q^{1})^{\partial}erp ,\\ f \quad &\longmapsto \Bigl([{\overline {{\partial}}_J} f], \Pi^{\partial}erp_{Q^{0}}(f(0)), \Pi^{\partial}erp_{Q^{1}}(f(1))\Bigr), \end{align*} with projections $\Pi^{\partial}erp_{Q^t}$ near $f_0(t)$ along $Q^t$ to $T_{f_0(t)}(Q^t)^{\partial}erp$. Since the choice of $\widehat E_{f_0}$ guarantees that the linearized Cauchy-Riemann operator ${\rm D}_f{\overline {{\partial}}_J}$ maps onto $\widehat{\mathcal E}_f/\widehat E_{f_0}$ for $f=f_0$, and thus for nearby $f\approx f_0$, we obtain transversality of the full operator for sufficiently small ${\varepsilon}_{f_0}>0$ if the linearized operator at $f_0$ maps the kernel of ${\rm D}_{f_0}{\overline {{\partial}}_J}$ onto the second and third factor. That is, we claim surjectivity of the map $$ R_{f_0} : \; \ker{\rm D}_{f_0}{\overline {{\partial}}_J} \;\ni\; {\delta}lta f \mapsto \bigl({\rm d}\Pi^{\partial}erp_{Q^{0}}({\delta}lta f(0)), {\rm d}\Pi^{\partial}erp_{Q^{1}}({\delta}lta f(1))\bigr) . $$ To check this, we can use the inclusion ${\rm T}_{f_0} (G_\infty\cdot f_0)\subset\ker{\rm D}_{f_0}{\overline {{\partial}}_J}$ of a tangent space to the orbit together with the surjectivity of the infinitesimal action on two points, $$ {\rm T}_{\rm id} G_\infty \; \to \; {\rm T}_0 S^2 \tildemes {\rm T}_1 S^2 ,\qquad \xi \; \mapsto \; \bigl(\xi(0), \xi(1) \bigr) . $$ Combining these facts with $({\rm T}_{f_0(t)} Q^{t})^{\partial}erp ={\rm im\,} {\rm d} f_0(t)$ we obtain transversality from $$ R_{f_0} \bigl( {\rm T}_{f_0} (G_\infty\cdot f_0) \bigr) \;=\; \bigl({\rm d}\Pi^{\partial}erp_{Q^{0}} \tildemes {\rm d}\Pi^{\partial}erp_{Q^{1}}\bigr) \bigl({\rm im\,} {\rm d} f_0(0) \tildemes {\rm im\,} {\rm d} f_0(1)\bigr) . $$ This approach circumvents the differentiability failure of the $G_\infty$-action by working with the explicit local slice ${\mathcal B}_{f_0}$, which is analytically better behaved. Moreover, the homeomorphism ${\partial}si_{f_0}$ is given by restriction of the local homeomorphism ${\mathcal B}_{f_0}\to\widehat{\mathcal B}^{k,p}/G_\infty$ from Lemma~\ref{lem:slice}. 
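To make the surjectivity of the infinitesimal action used above concrete: if, as the notation suggests, $G_\infty$ consists of the M\"obius transformations fixing $\infty\in S^2$, i.e.\ the affine maps $z\mapsto az+b$, then ${\rm T}_{\rm id} G_\infty$ is spanned by the holomorphic vector fields $\xi(z)=\alpha z+\beta$ with $\alpha,\beta\in{\mathbb C}$, and the evaluation map above becomes
$$
\xi \;\longmapsto\; \bigl(\xi(0),\xi(1)\bigr) \;=\; (\beta,\alpha+\beta) ,
$$
which is visibly a linear isomorphism onto ${\rm T}_0 S^2\times{\rm T}_1 S^2\cong{\mathbb C}^2$.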
Finally, we need to find a trivialization of the obstruction bundle $\widetilde E_{f_0}:=\widehat E_{f_0}|_{U_{f_0}}\cong U_{f_0}\times E_{f_0}$. For that purpose we choose ${\varepsilon}_{f_0}>0$ even smaller. The effect of this on the bundle $\widetilde E_{f_0}$ is a restriction to smaller neighbourhoods of $f_0$. Thus for sufficiently small ${\varepsilon}_{f_0}>0$ the bundle over a smaller domain $U_{f_0}$ can be trivialized. \end{proof} A Kuranishi chart in the exact sense of Definition~\ref{def:chart} can be obtained from the trivialization $\widetilde E_{f_0}\cong U_{f_0} \times E_{f_0}$. However, to emphasize the geometric meaning of our constructions we continue to use the notation for Kuranishi charts given in Section~\ref{ss:kur} in terms of a bundle $\widetilde E_f\to U_f$ with section $\tilde s$. \subsection{Compatibility of Kuranishi charts} \label{ss:Kcomp} \hspace{1mm}\\ As in Section~\ref{ss:kur} we oversimplify the formalism by saying that basic Kuranishi charts $$ \bigl( \; \tilde s_{f_i} : U_{f_i}\to \widetilde E_{f_i} \;,\; \psi_{f_i} : \tilde s_{f_i}^{-1}(0)\hookrightarrow {\overline {\Mm}}_{1}(A,J) \;\bigr) \qquad \text{for}\; i=0,1 , $$ as constructed in the previous section from obstruction bundles $\widehat E^i:=\widehat E_{f_i}$ over neighbourhoods of local slices ${\mathcal B}_{f_i}$, are {\bf compatible} if the following transition data exists for every element in the overlap $[g_{01}]\in {\rm im\,}\psi_{f_0}\cap {\rm im\,}\psi_{f_1}\subset {\overline {\Mm}}_{1}(A,J)$: \begin{enumerate} \item a Kuranishi chart $\quad\displaystyle \bigl( \; \tilde s_{g_{01}} : U_{g_{01}}\to \widetilde E_{g_{01}} \;,\; \psi_{g_{01}} : \tilde s_{g_{01}}^{-1}(0)\hookrightarrow {\overline {\Mm}}_{1}(A,J) \;\bigr) $\\ whose footprint ${\rm im\,}\psi_{g_{01}} \subset {\rm im\,}\psi_{f_0}\cap {\rm im\,}\psi_{f_1}$ is a neighbourhood of $[g_{01}]\in{\overline {\Mm}}_1(A,J)$; \item for $i=0,1$ the transition map arising from the footprints, $$ \phi|_{\psi_{f_i}^{-1}({\rm im\,}\psi_{g_{01}})} := \; \psi_{g_{01}}^{-1}\circ \psi_{f_i} : \; \tilde s_{f_i}^{-1}(0) \;\supset\; \psi_{f_i}^{-1}({\rm im\,}\psi_{g_{01}}) \; \overset{\cong}{\longrightarrow}\; \tilde s_{g_{01}}^{-1}(0) $$ extends to a coordinate change consisting of an open neighbourhood $V_i\subset U_{f_i}$ of $\psi_{f_i}^{-1}({\rm im\,}\psi_{g_{01}})$ and an embedding and linear injection in the trivialization $\widetilde E_{f_i}\cong U_{f_i}\times E_{f_i}$ that intertwine the sections $\tilde s_\bullet$, $$ \phi :\; U_{f_i} \supset V_i \; \longhookrightarrow\; U_{g_{01}} , \qquad \widehat\phi :\; E_{f_i} \; \longrightarrow\; E_{g_{01}} . $$ \end{enumerate} For notational convenience we will continue to construct the Kuranishi charts such that the domains have a canonical embedding $U \hookrightarrow {\mathcal B}^{k,p}/G_\infty$ (given by $f\mapsto [f]$ from a local slice ${\mathcal B}\subset {\mathcal B}^{k,p}$) which identifies the homeomorphism $\psi : s^{-1}(0) \hookrightarrow {\overline {\Mm}}_1(A,J)$ with the identity on ${\overline {\Mm}}_1(A,J)\subset {\mathcal B}^{k,p}/G_\infty$. However, we will not use this ambient space for other purposes, since it has no direct generalization in the case of nodal curves.
In particular, the new domain $U_{g_{01}}$ cannot be constructed as an overlap of the domains $U_{f_i}$ since only the intersection of the possibly highly singular footprints ${\rm im\,}{\partial}si_{f_0}\cap {\rm im\,}{\partial}si_{f_1}\subset{\overlineerline {\Mm}}_1(A,J)$ has invariant meaning. Indeed, because the bundles $\widehat E^0, \widehat E^1$ may be quite different, the intersection $[U_{f_0}]\cap[U_{f_1}]\subset {\mathcal B}^{k,p}/G_\infty$ may only contain the intersection of footprints. Moreover, the domains $U_{f_0},U_{f_1}\subset {\mathcal B}^{k,p}$ have no relation to each other beyond the fact that they are both spaces of perturbed solutions of the Cauchy--Riemann equation in a local slice. Hence the Kuranishi chart (i) cannot be abstractly induced from the basic Kuranishi charts but needs to be constructed as another finite dimensional reduction of the Cauchy--Riemann operator. With such a chart given, the transition map ${\partial}si_{g_{01}}^{-1}\circ {\partial}si_{f_i}$ between the zero sets is well defined, but its extension to a neighbourhood of ${\partial}si_{f_i}^{-1}({\rm im\,}{\partial}si_{g_{01}})\subset \tilde s_{f_i}^{-1}(0)$ in the domain $U_{f_i}$ also needs to be constructed. In fact, the need for this extension guides the construction of the chart. For the rest of this section we will assume that the Kuranishi chart required in (i) can be constructed in the same way as the basic charts in Section~\ref{ss:Kchart}, and explain which extra requirements are necessary to guarantee the existence of a coordinate change (ii). The chart (i) will be determined by the following data: {\beta}gin{itemize} \item a representative $g_{01}\in[g_{01}]$; \item hypersurfaces $Q_{g_{01}}^0,Q_{g_{01}}^1 \subset M$ and ${\varepsilon}_{g_{01}}>0$ inducing a local slice ${\mathcal B}_{g_{01}}\subset\widehat{\mathcal B}^{k,p}$; \item a finite rank subspace $E_{g_{01}}\subset\widehat{\mathcal E} |_{g_{01}}$ such that ${\rm im\,} {\rm D}_{g_{01}}{\overline {{\partial}}_J} + E_{g_{01}} = \widehat{\mathcal E}|_{g_{01}}$; \item an extension to a trivialized finite rank subbundle $\widehat{\mathcal V}_{g_{01}}\tildemes E_{g_{01}} \cong \widehat E^{01}: = \widehat E_{g_{01}} \subset\widehat{\mathcal E}|_{\widehat{\mathcal V}_{g_{01}}}$ over a neighbourhood $\widehat{\mathcal V}_{g_{01}}\subset\widehat{\mathcal B}^{k,p}$ of ${\mathcal B}_{g_{01}}$. \end{itemize} {\mathbb N}I The coordinate change (ii) requires the construction of the following for $i=0,1$ {\beta}gin{itemize} \item open neighbourhoods $V_i\subset U_{f_i}$ of ${\partial}si_{f_i}^{-1}({\rm im\,}{\partial}si_{g_{01}})$; \item embeddings ${\partial}hi_i : V_i \hookrightarrow U_{g_{01}}$ and a bundle map $\widehat{\partial}hi_i : \widetilde E_{f_i}|_{V_i} \to \widetilde E_{g_{01}}$ covering ${\partial}hi_i$ and constant on the fibers in a trivialization, such that $$ \widehat{\partial}hi_i \circ \tilde s_{f_i} = \tilde s_{g_{01}} \circ {\partial}hi_i , \qquad {\partial}si_{f_i} = {\partial}si_{g_{01}} \circ {\partial}hi_i . $$ \end{itemize} In the explicit construction, we have $V_i\subset U_{f_i} \subset {\mathcal B}_{f_i}$ and $U_{g_{01}} \subset{\mathcal B}_{g_{01}}$ both identified with subsets of ${\mathcal B}^{k,p}/G_\infty$, and in this identification the embedding ${\partial}hi_i:V_i \hookrightarrow U_{g_{01}}$ is required to restrict to the identity on ${\rm im\,}{\partial}si_{g_{01}}\subset{\rm im\,}{\partial}si_{f_i}$. 
So the natural extension of ${\partial}hi_i$ to a neighbourhood of ${\partial}si_{f_i}^{-1}({\rm im\,}{\partial}si_{g_{01}})\subset U_{f_i}$ should lift the identity on ${\mathcal B}^{k,p}/G_\infty$. That is, with the domains $V_i\subset U_{f_i}$ still to be determined, we fix ${\partial}hi_i$ to be the transition map \eqref{transition} between the local slices, $$ {\partial}hi_i:= {\Gamma}_{f_i,g_{01}}|_{V_i} : \; V_i \to {\mathcal B}_{g_{01}} , \quad f \mapsto f\circ{\gamma}^{01}_f , $$ where ${\gamma}^{01}_f\in G_\infty$ is determined by $f\circ{\gamma}^{01}_f\in{\mathcal B}_{g_{01}}$ . Now in order for ${\partial}hi_i(V_i)$ to take values in $U_{g_{01}}$ we must have {\beta}gin{equation}{\lambda}bel{givestrans} {\overline {{\partial}}_J} f \in \widehat E^i|_{f} \;\Longrightarrow\; {\overline {{\partial}}_J} f\circ{\rm d}{\gamma}^{01}_f \in \widehat E^{01}|_{f\circ{\gamma}^{01}_f} \qquad\forall \; f\in V_i. \end{equation} In particular for all $g\in \tilde s_{g_{01}}^{-1}(0)$ we must have {\beta}gin{equation} {\lambda}bel{E01 req} \widehat E^0|_{g\circ {\gamma}^0_g} \circ ({\rm d}{\gamma}^0_g)^{-1} \;+\; \widehat E^1|_{g\circ{\gamma}^1_g} \circ ({\rm d}{\gamma}^1_g)^{-1} \;\subset\; \widehat E^{01}|_{g} , \end{equation} where ${\gamma}^i_g\in G_\infty$ is determined by $g\circ{\gamma}^i_g\in{\mathcal B}_{f_i}$ and $\widehat E^i\subset\widehat {\mathcal E}|_{\widehat{\mathcal V}_{f_i}}$ is the obstruction bundle extending $E_{f_i}$. Note here that we at least have to construct $\widehat E^{01}\to{\mathcal B}_{g_{01}}$ as a smooth obstruction bundle over an infinite dimensional slice, since this induces the smooth structure on the domain $U_{g_{01}} =\{g\in{\mathcal B}_{g_{01}} \,|\, {\overline {{\partial}}_J} g \in \widehat E^{01} \}$. (In fact, the proof of Lemma~\ref{Uf0} uses the obstruction bundle over an open set in $\widehat{\mathcal B}^{k,p}$.) However, we encounter several obstacles in constructing $\widehat E^{01}$ such that \eqref{E01 req} is satisfied near ${g_{01}\in{\mathcal B}_{g_{01}}}$. { } {\beta}gin{itemlist} \item[{\bf \qquad\, 1.)}] The left hand side of \eqref{E01 req} involves the pullbacks of $(0,1)$-forms by the transition map ${\Gamma}_{g_{01}, f_i} : {\mathcal B}_{g_{01}} \to {\mathcal B}_{f_i}$ between local slices. In fact, it is no surprise that the reparametrizations enter crucially, since $\widehat E^0$ and $\widehat E^1$ are bundles over neighbourhoods of the local slices ${\mathcal B}_{f_0}$ and ${\mathcal B}_{f_1}$ respectively, which may have no intersection in $\widehat{\mathcal B}^{k,p}$ at all, although they do have an open intersection in the quotient $\widehat{\mathcal B}^{k,p}/G_\infty$. Since the transition maps are not continuously differentiable, the pullback bundles $$ {\Gamma}_{g_{01}, f_i}^*\widehat E^i \,:= \;{\textstyle \bigcup_{g\in{\mathcal B}_{g_{01}}}} \widehat E^i|_{g\circ{\gamma}^i_g} \circ ({\rm d}{\gamma}^i_g)^{-1} $$ will not be differentiable in general. Thus we must find a special class of obstruction bundles, on which the pullback by reparametrizations acts smoothly. \item[{\bf \qquad\, 2.)}] Even if the pullback bundles ${\Gamma}_{g_{01}, f_0}^*\widehat E^0$ and ${\Gamma}_{g_{01}, f_1}^*\widehat E^1$ are differentiable, their fibers can have wildly varying intersections over ${\mathcal B}_{g_{01}}$. Here the diameter of the local slice can be chosen arbitrarily small, but it will always be locally noncompact. 
So it is unclear whether there even exists a finite rank subbundle of $\widehat{\mathcal E}|_{{\mathcal B}_{g_{01}}}$ that contains both pullback bundles. To ensure this we must assume transversality at $g_{01}$, $$ \bigl(\widehat E^0|_{g_{01}\circ({\gamma}^0_{g_{01}})^{-1}} \circ{\rm d}{\gamma}^0_{g_{01}} \bigr) \cap \bigl( \widehat E^1|_{g_{01}\circ({\gamma}^1_{g_{01}})^{-1}} \circ{\rm d}{\gamma}^1_{g_{01}} \bigr) \;=\; \{ 0 \} . $$ \end{itemlist} If the requirements in 1.) and 2.) are satisfied, then the sum of obstruction bundles {\beta}gin{align*} \widehat E^{01} &\,:=\; {\Gamma}_{g_{01}, f_0}^*\widehat E^0 \oplus {\Gamma}_{g_{01}, f_1}^*\widehat E^1\\ &\;=\; {\textstyle \bigcup_{g\in{\mathcal B}_{g_{01}}} } \bigl\{ \nu^0\circ ({\rm d}{\gamma}^0_g)^{-1} + \nu^1\circ ({\rm d}{\gamma}^1_g)^{-1} \,\big|\, \nu^i\in \widehat E^i|_{g\circ{\gamma}^{01}_g} \bigr\} \end{align*} is a smooth, finite rank subbundle of $\widehat{\mathcal E}$ over a local slice ${\mathcal B}_{g_{01}}$ of sufficiently small diameter ${\varepsilon}_{g_{01}}>0$. Under these assumptions, the constructions of Section~\ref{ss:Kchart} provide a Kuranishi chart for a neighbourhood of $[g_{01}]\in {\overlineerline {\Mm}}_1(A,J)$, which we also call {\bf sum chart} since it is given by a sum of obstruction bundles. Its domain and section are $$ \tilde s_{g_{01}} : \; U_{g_{01}} :=\{g\in{\mathcal B}_{g_{01}} \,|\, {\overline {{\partial}}_J} g \in \widehat E^{01} \} \;\to\; \widehat E^{01} , \qquad g \mapsto {\overline {{\partial}}_J} g , $$ and the embedding ${\mathcal B}_{g_{01}}\to\widehat{\mathcal B}^{k,p}/G_\infty$ of the local slice restricts to a homeomorphism into the moduli space, $$ {\partial}si_{g_{01}}: \tilde s_{g_{01}}^{-1}(0) \to {\overlineerline {\Mm}}_1(A,J) , \qquad g\mapsto [g]. $$ Moreover, we already fixed the embeddings ${\partial}hi_i = {\Gamma}_{f_i,g_{01}}$ and can read off from \eqref{givestrans} the corresponding embedding of obstruction bundles $$ \widehat{\partial}hi_i: \widehat E^i|_{f} \to \widehat E^{01}|_{f\circ{\gamma}_f}, \qquad \nu \mapsto \nu\circ {\rm d}{\gamma}_f. $$ Since this should be a constant linear map $E_{f_i}\to E_{g_{01}}$ in some trivialization $\widehat E^{01}\cong U_{g_{01}}\tildemes E^{01}_{g_{01}}$, the trivialization map $T^{01}(g) :\widehat E^{01}|_g \to E_{g_{01}}$ must be given by $$ T^{01}(g) \,:\; \sum_{i=0,1} \nu^i\circ ({\rm d}{\gamma}^i_g)^{-1} \;\mapsto\; \sum_{i=0,1} \Bigl( T^i(g_{01}\circ{\gamma}^i_{g_{01}} ) ^{-1} \,T^i(g\circ{\gamma}^i_g ) \; \nu^i \Bigr) \circ ({\rm d}{\gamma}^i_{g_{01}})^{-1} $$ in terms of the trivializations $T^i(f) :\widehat E^i|_f\overlineerset{\cong}\to E_{f_i}$ of its factors. In fact, this shows exactly what it means for the sum bundle $\widehat E^{01} = {\Gamma}_{g_{01}, f_0}^*\widehat E^0 \oplus {\Gamma}_{g_{01}, f_1}^*\widehat E^1$ to be smooth. { } We now summarize the preceding discussion in the context of a tuple of $N$ charts $\bigl({\bf K}_i = (U_{f_i},E_{f_i},s_{f_i},{\partial}si_{f_i})\bigr)_{i=1,\ldots,N}$. 
Generalizing conditions (i) and (ii) at the beginning of this section, we find that if these arise from obstruction bundles $\widehat E^i\to \widehat{\mathcal V}_{f_i}$ over neighbourhoods of local slices ${\mathcal B}_{f_i}$, the minimally necessary compatibility conditions require us to construct for every index subset $I\subset\{1,\ldots,N\}$ and every element $[g_0]\in\bigcap_{i\in I}{\rm im\,}{\partial}si_i\subset{\overlineerline {\Mm}}_1(A,J)$ in the overlap of footprints {\beta}gin{enumerate} \item a {\bf sum chart} ${\bf K}_{I,g_0}$ with obstruction space $E_{I,g_0} \cong {\partial}rod_{i\in I} E_{f_i}$, whose footprint ${\rm im\,}{\partial}si_{I,g_0} \subset \bigcap_{i\in I}{\rm im\,}{\partial}si_{f_i}$ is a neighbourhood of $[g_0]$; \item coordinate changes $\bigl({\bf K}_i \to {\bf K}_{I,g_0}\bigr)_{i\in I}$ that extend the transition maps ${\partial}si_{I,g_0}^{-1}\circ {\partial}si_{f_i}$. \end{enumerate} {\mathbb N}I The construction of a virtual fundamental class $[{\overlineerline {\Mm}}_1(A,J)]^{\rm vir}$ from a cover by compatible basic Kuranishi charts $\bigl({\bf K}_i \bigr)_{i=1,\ldots,N}$ in addition requires fixed choices of the above transition data, and further coordinate changes ${\bf K}_{I,g_0}\to{\bf K}_{J,h_0}$ satisfying a cocycle condition; see Section~\ref{ss:top}. The main difficulty is to ensure that the sum charts are well defined. The details of their construction are dictated by the existence of coordinate changes from the basic charts. This construction is so canonical that coordinate changes between different sum charts exist essentially automatically, and satisfy the weak cocycle condition. By the discussion in the case of two charts, the following conditions on the choice of basic Kuranishi charts $\bigl({\bf K}_i = (U_{f_i},E_{f_i},s_{f_i},{\partial}si_{f_i})\bigr)_{i=1,\ldots,N}$ ensure the existence of the sum charts (i) and transition maps (ii). {\beta}gin{itemlist} \item[{\bf \qquad\, Sum Condition I:}] {\it For every $i\in \{1,\ldots,N\}$ let $T^i(f) :\widehat E^i|_f\overlineerset{\cong}\to E_{f_i}$ be induced by the trivialization of the obstruction bundle. Then for every $[g_0]\in{\rm im\,}{\partial}si_i \cap\bigcap_{j\neq i}{\rm im\,}{\partial}si_j$ and representative $g_0$ with sufficiently small local slice ${\mathcal B}_{g_0}$, the map} {\beta}gin{align*} {\mathcal B}_{g_{0}} \tildemes E_{f_i} &\;\longrightarrow\; \quad \widehat{\mathcal E} \\ (g, \nu_i )\quad & \;\longmapsto \; \bigl( T^i(g\circ{\gamma}^i_g ) \, \nu_i \bigr) \circ ({\rm d}{\gamma}^i_g)^{-1} \end{align*} {\it is required to be smooth, despite the differentiability failure of $g\mapsto g\circ{\gamma}^i_g$.} An approach for satisfying this condition will be given in the next section. \item[{\bf \qquad\, Sum Condition II:}] {\it For every $I\subset\{1,\ldots,N\}$ and $[g]\in\bigcap_{i\in I}{\rm im\,}{\partial}si_i$ we must ensure transversality of the vector spaces $\widehat E^i|_{g\circ({\gamma}^i_{g})^{-1}} \circ{\rm d}{\gamma}^i_{g} = \bigl( T^i(g\circ({\gamma}^i_{g})^{-1})^{-1} E_{f_i} \bigr) \circ{\rm d}{\gamma}^i_{g}$ for $i\in I$. That is, their sum needs to be a direct sum, $$ \sum_{i\in I} \widehat E^i|_{g\circ({\gamma}^i_{g})^{-1}} \circ{\rm d}{\gamma}^i_{g} \;=\; \bigoplus_{i\in I} \widehat E^i|_{g\circ({\gamma}^i_{g})^{-1}} \circ{\rm d}{\gamma}^i_{g} \quad\subset\;\widehat {\mathcal E}|_g . 
$$ } This means that, no matter how the obstruction bundles are constructed for each chart, the choices for a tuple need to be made ``transverse to each other'' along the entire intersection of the footprints before transition data can be constructed. \end{itemlist} \subsection{Sum construction for genus zero Gromov--Witten moduli spaces} \hspace{1mm}\\ \label{ss:gw} The purpose of this section is to explain the basic ideas of our project \cite{MW:gw} of constructing a Kuranishi atlas for the genus zero Gromov--Witten moduli spaces by combining the geometric perturbations of \cite{LT} with the gluing analysis of \cite{MS}. A natural idea (suggested to us by e.g.\ Kenji Fukaya, see \cite[Appendix]{FO}, and Cliff Taubes) for dealing with the failure of differentiability in the pullback construction for obstruction bundles is to introduce varying marked points, so that the pullback by ${\Gamma}_{g_{01},f_i}$ no longer depends on the infinite dimensional space of maps but only on a finite number of parameters. For the sum construction of two Kuranishi charts $(U_{f_i},\ldots)_{i=0,1}$ arising from finite rank bundles $\widehat E^i\to \widehat{\mathcal V}_{f_i}$ over neighbourhoods of local slices ${\mathcal B}_{f_i}$, let us for simplicity of notation work in a slice ${\mathcal B}_{g_{01}}\subset{\mathcal B}_{f_0}$ so that ${\gamma}^0_g\equiv {\rm id}$. Thus we construct the domain of the sum chart as $$ U_{g_{01}} = \bigl\{ \bigl( g , \underline{w} \bigr) \in {\mathcal B}_{g_{01}}\times (S^2)^2 \,\big|\, {\overline {{\partial}}_J} g \in \widehat E^0 + {\Gamma}_{\underline w}^* \widehat E^1 , \underline{w}=(w^{0} , w^{1}) \in D_{01}, g(w^{t})\in Q_{f_1}^{t} \bigr\} . $$ Here $D_{01}\subset (S^2)^2$ is a neighbourhood of $\underline{w}_{01}:=(w_{01}^0, w_{01}^1)$ with $w_{01}^t=g_{01}^{-1}(Q_{f_1}^t)$, and ${\Gamma}_{\underline w} : g\mapsto g\circ {\gamma}_{\underline w}$ is the reparametrization with \begin{equation}\label{gaw} {\gamma}_{\underline w}\in G_\infty \quad \text{given by} \quad {\gamma}_{\underline w}(t)=w^{t} \quad \text{for}\; t=0,1. \end{equation} Observe that, with varying marked points, the map $(g,w) \mapsto g(w)$ still only has the regularity of $g$, see Section~\ref{ss:eval}. So the above $U_{g_{01}}$ is not cut out by a single smooth Fredholm section. However, we may now consider the intermediate {\it thickened solution space} $$ \widehat U_{g_{01}} = \bigl\{ ( g , \underline{w} ) \in {\mathcal B}_{g_{01}}\times D_{01} \,\big|\, {\overline {{\partial}}_J} g \in \widehat E^0 + {\Gamma}_{\underline w}^* \widehat E^1 \bigr\} \;\subset\; {\mathcal B}_{f_0}\times (S^2)^2, $$ where we have not yet imposed the slicing conditions at the points $\underline{w}$. Then the domain $U_{g_{01}}={\rm ev}^{-1}(Q_{f_1}^0\times Q_{f_1}^1)$ is cut out by the slicing conditions, which use the evaluation map on the finite dimensional thickened solution space: $$ {\rm ev} : \; \widehat U_{g_{01}} \to (S^2)^2, \qquad ( g , w^0, w^1 ) \mapsto ( g(w^0) , g(w^1) ) .
$$ To check that this map is transverse to $Q_{f_1}^0\tildemes Q_{f_1}^1$ at $(g_{01}, w_{01}^0, w_{01}^1)$, note that $\{0\}\tildemes ({\rm T} S^2)^2$ is tangent to the thickened solution space at this point (crucially using the fact that the solution space $\{g \,|\, {\overline {{\partial}}_J} g=0\}$ is $G_\infty$-invariant so that there is an infinitesimal action at $g_{01}$). Moreover, at every point in the local slice $g\in{\mathcal B}_{f_1}$ we have ${\rm im\,} {\rm d}_t g {\partial}itchfork {\rm T}_{g(t)} Q_{f_1}^t$, in particular at $g_{01}\circ{\gamma}_{{\underline{n}}derline w_{01}}$ with ${\rm im\,} {\rm d}_t (g_{01}\circ{\gamma}_{{\underline{n}}derline w_{01}})= {\rm im\,} {\rm d}_{w_{01}^t}g_{01}$. Moreover, the evaluation map is smooth if we can ensure that the thickened solution space $\widehat U_{g_{01}}\subset {\mathcal C}^\infty(S^2,M)\tildemes (S^2)^2$ contains only smooth functions. Continuing the list of conditions on the choice of summable obstruction bundles from the previous section, this adds the following regularity requirement. {\beta}gin{itemlist} \item[{\bf \qquad\, Sum Condition III:}] {\it The obstruction bundles $\widehat E^i\subset\widehat{\mathcal E}|_{\widehat{\mathcal V}_{f_i}}$ need to satisfy regularity,} $$ {\overline {{\partial}}_J} g \in {\textstyle \sum_i } \, {\Gamma}_{{\underline{n}}derline w_i}^* \widehat E^i \;\Longrightarrow \; g \in{\mathcal C}^\infty(S^2,M) . $$ By elliptic regularity for ${\overline {{\partial}}_J}$, this holds if $\widehat E^i|_{W^{\ell,p}\cap \widehat{\mathcal V}_{f_i}}\in W^{\ell,p}\cap\widehat{\mathcal E}$ for all $\ell\in{\mathbb N}$, or in terms of the trivializations $T^i(f):\widehat E^i_f \to E_{f_i}$ if the elements of $E_{f_i}$ are smooth $1$-forms in $\widehat{\mathcal E}|_{f_i}$ and $$ f\in W^{\ell,p} \; \Longrightarrow \;{\rm im\,} T^i(f)\subset W^{\ell,p} . $$ This means that sections of $\widehat E^i$ are lower order, compact perturbations for ${\overline {{\partial}}_J}$, i.e.\ they are $sc^+$ in the language of scale calculus \cite{HWZ1}. \end{itemlist} { }{\mathbb N}I Finally, we need to ensure smoothness of the thickened solution space $\widehat U_{g_{01}}$, which can be viewed as the zero set of the section $$ {\mathcal B}_{g_{01}} \tildemes D_{01} \;\longrightarrow \; \widehat{\mathcal E} / ( \widehat E^0 + {\Gamma}mma^* \widehat E^1 ) , \qquad (g,{\underline{n}}derline w) \;\longmapsto \; {\overline {{\partial}}_J} g . $$ Here the form of the summed obstruction bundle, {\beta}gin{align*} {\Gamma}^* \widehat E^1 &\;=\; { {\underline{n}}derlineerset{{\underline{n}}derline w \in (S^2)^2}{\textstyle{\bigcup}}}{\Gamma}mma_{{\underline{n}}derline w}^* \widehat E^1 \; \longrightarrow \; {\mathcal B}_{g_{01}}\tildemes D_{01}, \\ \bigl({\Gamma}^* \widehat E^1 \bigr)|_{(g,{\underline{n}}derline w)} &\;=\; \bigl\{ \nu \circ {\rm d} {\gamma}_{{\underline{n}}derline w}^{-1} \,\big|\, {\gamma}_{{\underline{n}}derline w} (t)= w^{t} , \nu \in \widehat E^1|_{g\circ{\gamma}_{{\underline{n}}derline w}} \bigr\}, \end{align*} is dictated by fixing the natural embedding ${\partial}hi_1 : U_{f_1}\cap G_\infty U_{g_{01}} \to U_{g_{01}}$ given by $f\mapsto (f\circ{\gamma}_f^{-1}, {\gamma}_f(0), {\gamma}_f(1))$, where $f\circ{\gamma}_f^{-1}\in{\mathcal B}_{g_{01}}$. Its inverse map is $(g,{\underline{n}}derline w)\mapsto g\circ{\gamma}_{{\underline{n}}derline w}$, which maps to a neighbourhood of ${\mathcal B}_{f_1}$. 
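To see concretely why the dependence on the varying marked points is finite dimensional and smooth, note that if, as the notation suggests, $G_\infty$ is the group of M\"obius transformations of $S^2={\mathbb C}\cup\{\infty\}$ fixing $\infty$, then the reparametrization \eqref{gaw} is the affine map
$$
{\gamma}_{\underline w}(z) \;=\; w^0 + (w^1-w^0)\, z ,
$$
which is well defined as long as $w^0,w^1\in{\mathbb C}$ are distinct (as holds on a sufficiently small neighbourhood $D_{01}$ of $\underline w_{01}$), and which depends holomorphically, in particular smoothly, on $\underline w$. It is this finite dimensional smooth dependence on the marked points, rather than any differentiability of the reparametrization action on the maps $g$ themselves, that the construction below exploits.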
While the extension of $\widehat E^1$ to a neighbourhood of ${\mathcal B}_{f_1}\subset\widehat{\mathcal B}^{k,p}$ so far was mostly for convenience in the proof of Lemma~\ref{Uf0}, it now becomes crucial for the construction of this ``decoupled sum bundle''. In fact, as in that lemma, we will also extend $\widehat E_{g_{01}}= \widehat E^0 + {\Gamma}mma^* \widehat E^1$ to a neighbourhood $\widehat{\mathcal V}_{g_{01}}$ of ${\mathcal B}_{g_{01}}$ to induce the smooth structure on $\widehat U_{g_{01}}$. With this setup, Sum Condition I becomes smoothness of the map involving the trivialization $T^1(f):\widehat E^1(f)\to E_{f_1}$, {\beta}gin{equation}{\lambda}bel{wantsmooth} \widehat{\mathcal V}_{g_{01}} \tildemes D_{01} \tildemes E_{f_1} \;\longrightarrow\; \widehat{\mathcal E} , \qquad (g, {\underline{n}}derline w ,\nu ) \;\longmapsto \; \bigl( T^1(g\circ{\gamma}_{{\underline{n}}derline w} ) \, \nu \bigr) \circ {\rm d} {\gamma}_{{\underline{n}}derline w}^{-1} . \end{equation} This still involves reparametrizations $(g,{\gamma}_{{\underline{n}}derline w})\mapsto g\circ {\gamma}_{{\underline{n}}derline w}$, which are not differentiable in any Sobolev topology on $\widehat{\mathcal V}_{g_{01}}$, since ${\underline{n}}derline w\in D_{01}$ and thus ${\gamma}_{{\underline{n}}derline w}$ is allowed to vary. Thus the compatibility of Kuranishi atlases requires a very special form of the trivialization $T^1$, i.e.\ very special obstruction bundles $\widehat E^i$. { }{\mathbb N}I {\bf Geometric construction of obstruction bundles:} To solve the remaining differentiability issue, we now follow the more geometric approach of \cite{LT} and construct obstruction bundles by pulling back finite rank subspaces {\beta}gin{equation}{\lambda}bel{graphsp} E^i\subset {\mathcal C}^\infty({\rm Hom}^{0,1}_J(S^2,M)) \end{equation} of the space of smooth sections of the bundle over $S^2\tildemes M$ of $(j,J)$-antilinear maps ${\rm T} S^2 \to {\rm T} M$. Given such a subspace and a neighbourhood $\widehat{\mathcal V}_{f_i}$ of a local slice, we hope to obtain an obstruction bundle {\beta}gin{equation}{\lambda}bel{graph} \widehat E^i := \;{\textstyle \bigcup_{f\in\widehat{\mathcal V}_{f_i}} } \bigl\{ \nu|_{\operatorname{graph} f} \;\big|\; \nu \in E^i\bigr\} \;\subset\; \widehat{\mathcal E}|_{\widehat{\mathcal V}_{f_i}} \end{equation} by restriction to the graphs $\nu|_{\operatorname{graph} f} \in \widehat{\mathcal E}|_f = W^{k-1,p}(S^2, {\Lambda}mbda^{0,1}f^* {\rm T} M )$ given by $$ \nu|_{\operatorname{graph} f} (z) = \nu(z,f(z)) \in {\rm Hom}^{0,1}_J({\rm T}_zS^2,{\rm T}_{f(z)}M) . $$ The disadvantage of this construction is that we need to assume injectivity of the map $$ E^i\ni \nu\mapsto \nu|_{\operatorname{graph} f}\in \widehat{\mathcal E}|_f $$ for each $f\in\widehat{\mathcal V}_{f_i}$ to obtain fibers of constant rank. On the other hand, the inverse trivialization of the obstruction bundle $$ (T^i)^{-1}: \widehat{\mathcal V}_{f_i} \tildemes E^i \to \widehat E^i|_f , \qquad (f,\nu) \mapsto \nu|_{\operatorname{graph} f} $$ is now a smooth map, satisfying the regularity requirement in Sum Condition III, since on the finite dimensional space $E^i$ consisting of smooth sections the composition on the domain with $f\in\widehat{\mathcal V}_{f_i}\subset W^{k,p}(S^2,M)$ is smooth. 
In fact, the pullback ${\Gamma}^*\widehat E^1$ in \eqref{wantsmooth} now takes the special form, with ${\gamma}_{{\underline{n}}derline w}$ from \eqref{gaw}, $$ (g, {\underline{n}}derline w ,\nu ) \mapsto {\gamma}_{{\underline{n}}derline w}^*\nu |_{\operatorname{graph} g}, \qquad {\gamma}_{{\underline{n}}derline w}^*\nu (z,x) = \nu ( {\gamma}_{{\underline{n}}derline w}^{-1}(z) , x ) \circ {\rm d}_z {\gamma}_{{\underline{n}}derline w}^{-1} . $$ This eliminates composition on the domain of infinite dimensional function spaces. Indeed, we now have $$ {\gamma}_{{\underline{n}}derline w}^*\nu |_{\operatorname{graph} g} (z) = \nu ( {\gamma}_{{\underline{n}}derline w}^{-1}(z) , g(z) ) \circ {\rm d}_z {\gamma}_{{\underline{n}}derline w}^{-1} , $$ whose derivatives in the directions of $g$ and ${\underline{n}}derline{w}$ take forms that, unlike \eqref{eq:actiond}, do not involve derivatives of $g$. Moreover, we will later make use of the special transformation of these obstruction bundles under the action of ${\gamma}\in G_\infty$, {\beta}gin{equation} {\lambda}bel{Eequivariant} {\gamma}_{{\underline{n}}derline w}^*\nu |_{\operatorname{graph} g} \circ {\rm d}{\gamma} \;=\; \nu \bigl( {\gamma}_{{\underline{n}}derline w}^{-1}\circ{\gamma} (\cdot) , g\circ {\gamma} (\cdot) \bigr) \circ {\rm d} {\gamma}_{{\underline{n}}derline w}^{-1} \circ {\rm d} {\gamma} \;=\; ({\gamma}^{-1}\circ{\gamma}_{{\underline{n}}derline w})^*\nu |_{\operatorname{graph} g\circ {\gamma}} . \end{equation} Thus we have replaced Sum Conditions I--III, including the highly nontrivial smoothness requirement in the previous section, by the following requirement for the compatibility of the geometrically constructed obstruction bundles. {\beta}gin{itemlist} \item[{\bf \qquad Sum Condition I$'$:}] {\it For every $i\in \{1,\ldots,N\}$ the obstruction bundle $\widehat E^i \subset \widehat{\mathcal E}|_{\widehat{\mathcal V}_{f_i}}$ is given by \eqref{graph} from a subspace $E^i\subset{\mathcal C}^\infty({\rm Hom}^{0,1}_J(S^2,M))$ such that} $$ E^i \to \widehat{\mathcal E}|_f , \;\; \nu \mapsto \nu|_{\operatorname{graph} f} \quad \text{is injective} \quad \forall \; f\in\widehat{\mathcal V}_{f_i} . $$ \item[{\bf \qquad Sum Condition II$'$:}] {\it For every $I\subset\{1,\ldots,N\}$, $[g]\in\bigcap_{i\in I}{\rm im\,}{\partial}si_i$ with representative $g\in{\mathcal B}_{f_{i_0}}$ for some $i_0\in I$, and marked points ${\underline{n}}derline w_i \in D_{i_0 i}\subset (S^2)^2$ in neighbourhoods of $(g^{-1}(Q_{f_i}^t))_{t=0,1}$ resp.\ $D_{i_0 i_0}=\{(0,1)\}$, we must ensure linear independence of $\bigl\{ {\gamma}_{{\underline{n}}derline w_i}^* \nu^i |_{\operatorname{graph} g} \;\big|\; \nu^i\in E^i \bigr\}$ for $i\in I$. That is, their sum must be a direct sum} $$ \sum_{i\in I} \bigl\{ {\gamma}_{{\underline{n}}derline w_i}^* \nu^i |_{\operatorname{graph} g} \;\big|\; \nu^i\in E^i \bigr\} \;=\; \bigoplus_{i\in I} \bigl\{ {\gamma}_{{\underline{n}}derline w_i}^* \nu^i |_{\operatorname{graph} g} \;\big|\; \nu^i\in E^i \bigr\} \quad\subset\;\widehat {\mathcal E}|_g. $$ \end{itemlist} Satisfying these two conditions always requires making the choices of the obstruction spaces $E^i$ ``suitably generic''. If they are satisfied, then they provide a construction of sum charts and coordinate changes as we will state next. 
At this point, we can also incorporate a further requirement from Section~\ref{ss:top} into the compatibility condition (i) for a tuple of charts $({\bf K}_i)_{i=1,\ldots,N}$ by constructing a single sum chart ${\bf K}_{I,g_0}={\bf K}_I$ for each $I\subset\{1,\ldots, N\}$, whose footprint is the entire overlap of footprints $F_I:={\rm im\,}{\partial}si_I = \bigcap_{i\in I} {\rm im\,}{\partial}si_i$. Moreover, we construct coordinate changes between any pair of tuples $I,J\subset\{1,\ldots,N\}$ with nonempty overlap $F_I\cap F_J\neq\emptyset$ that are, up to a choice of domains, directly induced from the basic charts. Thus our construction naturally satisfies the weak cocycle condition, i.e.\ equality on overlap of domains as in Section~\ref{ss:top}. Note here that $J\subset\{1,\ldots,N\}$ has a very different meaning from the almost complex structure which determines the Gromov--Witten moduli space ${\overlineerline {\Mm}}_1(A,J)$. To avoid confusion, we will sometimes abbreviate $\overlineerline{\partial}artial:={\overline {{\partial}}_J}$. For the construction of sum charts, we will moreover make the following simplifying assumption that all intersections with the slicing hypersurfaces are unique. This can be achieved in sufficiently small neighbourhoods of any holomorphic sphere with trivial isotropy, see Remark~\ref{rmk:unique}. {\beta}gin{itemlist} \item[{\bf \qquad Sum Condition IV$'$:}] {\it For every $i\in \{1,\ldots,N\}$ we assume that the representative $[f_i]$, slicing conditions $Q^t_{f_i}$, size ${\varepsilon}>0$ of local slice ${\mathcal B}_{f_i}$, and its neighbourhood $\widehat{\mathcal V}_{f_i}\subset\widehat{\mathcal B}^{k,p}$ are chosen such that for all $g\in \widehat{\mathcal V}_{f_i}$ and $t=0,1$ the intersection $g^{-1}(Q^t_{f_i}) =: \{w_i^t(g)\}$ is a unique point and transverse, i.e.\ ${\rm im\,}{\rm d}_{w_i^t(g)}g{\partial}itchfork {\rm T}_{w_i^t(g)} Q^t_{f_i}$. } Then the same holds for $g\in G_\infty \widehat{\mathcal V}_{f_i}$. Hence for any $i_0\in I \subset \{1,\ldots,N\}$ the local slice ${\mathcal B}_{f_{i_0}}$ embeds topologically (as a homeomorphism to its image, with inverse given by the projection ${\mathcal B}_{f_{i_0}} \tildemes (S^2)^{2|I|} \to {\mathcal B}_{f_{i_0}}$) into a space of maps and marked points by {\beta}gin{align} {\lambda}bel{embed} {\iota}ta_{i_0, I} : \; {\mathcal B}_{f_{i_0}} &\;\longhookrightarrow\; \widehat{\mathcal B}^{k,p} \tildemes (S^2)^{2|I|} \\ g &\;\longmapsto\; \bigl( g , {\underline{n}}derline w(g) \bigr) , \qquad\qquad\quad {\underline{n}}derline w(g):= \bigl( g^{-1}(Q_{f_i}^t) \bigr)_{i\in I,t=0,1}. \nonumber \end{align} \end{itemlist} {\mathbb N}I Note that the elements of ${\rm im\,} {\iota}_{i_0,I}$ have the form $\bigl(g,{\underline{n}}derline w(g)=({\underline{n}}derline w_i)_{i\in I}\bigr)$ with ${\underline{n}}derline w_{i_0} = (0,1)$. In the following we denote by ${\underline{n}}derline w = ({\underline{n}}derline w_i)_{i\in I}\in (S^2)^{2|I|}$ any tuple of ${\underline{n}}derline w_i = (w_i^0, w_i^1)\in S^2\tildemes S^2$, even if it is not determined by a map $g$. Then ``$\forall i, t$'' will be shorthand for ``$\forall i\in I, t\in \{0,1\}$''. 
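For instance, for a pair $I=\{i_0,j\}$ the embedding \eqref{embed} records a map in the local slice together with its intersection points with the slicing hypersurfaces of both charts,
$$
{\iota}_{i_0,I}(g) \;=\; \Bigl( g \,,\; \underline w_{i_0}=(0,1) \,,\; \underline w_j=\bigl(g^{-1}(Q_{f_j}^0), g^{-1}(Q_{f_j}^1)\bigr) \Bigr) ,
$$
so that, up to the fixed component $\underline w_{i_0}=(0,1)$, the domain $U_I$ constructed in the theorem below is parametrized in the same way as the two chart sum domain $U_{g_{01}}$ above.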
{\beta}gin{thm} {\lambda}bel{thm:A2} Suppose that the tuple of basic Kuranishi charts $$ \bigl({\bf K}_i = (U_{f_i},E_{f_i},s_{f_i},{\partial}si_{f_i})\bigr)_{i=1,\ldots,N} $$ is constructed as in Proposition~\ref{prop:A1} from local slices ${\mathcal B}_{f_i}$ and subspaces $$ E^i\subset {\mathcal C}^\infty({\rm Hom}^{0,1}_J(S^2,M)), $$ that induce obstruction bundles $\widehat E^i$ over neighbourhoods $\widehat{\mathcal V}_{f_i}\subset\widehat{\mathcal B}^{k,p}$ of ${\mathcal B}_{f_i}$. Assume moreover that this data satisfies Sum Conditions {\rm I}$'$, {\rm II}$'$, and {\rm IV}$'$. Then for every index subset $I\subset\{1,\ldots,N\}$ with nonempty overlap of footprints $$ F_I := \;{\textstyle \bigcap_{i\in I}}{\rm im\,} {\partial}si_i \;\neq\; \emptyset $$ we obtain the following transition data. {\beta}gin{enumerate} \item Corresponding to each choice of $i_0\in I$ and sufficiently small open set $$ \widehat{\mathcal W}_{I,i_0} \subset \Bigl(\widehat{\mathcal B}^{k,p} \tildemes (S^2)^{2|I|}\Bigr)\cap \bigl\{(g,{\underline{n}}derline w) \,\big|\, {\underline{n}}derline w_{i_0} = (0,1)\bigr\} $$ that covers a neighbourhood of the footprint $F_I$ in the sense that {\beta}gin{align*} \quad \bigl\{ ( g , {\underline{n}}derline w ) \in \widehat{\mathcal W}_{I,i_0} \,\big|\, {\overline {{\partial}}_J} g = 0 , \; g(w_i^t)\in Q_{f_i}^t,\; \forall i, t \,\bigr\} = {\iota}ta_{i_0,I}( {\partial}si_{i_0}^{-1}(F_I) ) \end{align*} there is a {\bf sum chart} ${\bf K}_{I}: = {\bf K}_{I,i_0}$ with {\beta}gin{itemize} \item domain \[ \qquad\qquad U_{I} := \bigl\{ \bigl( g , {\underline{n}}derline w \bigr) \in \widehat{\mathcal W}_{I,i_0}\,\big|\, {\overline {{\partial}}_J} g \in {\textstyle \sum_{i\in I}} {\Gamma}mma_{{\underline{n}}derline w_i}^* \widehat E^i, \, g(w_i^t)\in Q_{f_i}^t \ \forall i,t \, \bigr\} , \] \item obstruction space $\displaystyle \; E_I:= {\textstyle {\partial}rod_{i\in I}} E^i$, \item section $\displaystyle \; s_{I} : U_{I} \to E_I , \; ( g , {\underline{n}}derline{w}) \mapsto (\nu^i)_{i\in I}$ given by $$ {\overline {{\partial}}_J} g = {\textstyle \sum_{i\in I}} \, {\gamma}_{{\underline{n}}derline w_i}^* \nu^i |_{\operatorname{graph} g} , $$ \item footprint map $\displaystyle\; {\partial}si_{I} : s_{I}^{-1}(0) \overlineerset{\cong}\to F_I, \; (g,{\underline{n}}derline w) \mapsto [g]$. 
\end{itemize}{ } \item For every $I\subset J$ and choice of $i_0\in I, j_0\in J$ as above, a coordinate change $\widehat\Phi_{IJ}: {\bf K}_{I} \to {\bf K}_{J}$ is given by {\beta}gin{itemize} \item a choice of domain $\displaystyle \; V_{IJ} \subset U_{I}$ such that {\beta}gin{align} {\lambda}bel{choice VIJ} \qquad\qquad\qquad V_{IJ}\cap s_{I}^{-1}(0) = {\partial}si_{I}^{-1}(F_J), \qquad V_{IJ} \subset {\iota}ta_{i_0,I} \bigl( {\Gamma}_{f_{j_0},f_{i_0}} \bigl( {\iota}ta_{j_0,J}^{-1} ( U_{J} ) \bigr)\bigr) \end{align} with the embeddings \eqref{embed} and the reparametrization ${{\Gamma}_{f_{j_0},f_{i_0}}:{\mathcal B}_{f_{j_0}}\to {\mathcal B}_{f_{i_0}}}$ as in \eqref{transition}, \item embedding ${\partial}hi_{IJ} := {\iota}ta_{j_0,J} \circ {\Gamma}_{f_{i_0},f_{j_0}} \circ {\iota}ta_{i_0,I}^{-1}$, that is \footnote{ This map also equals $\; {\partial}hi_{IJ}\bigl( g , (w_i^t)_{i\in I,t=0,1} \bigr) \;=\; \bigl(\, g\circ {\gamma} \,,\, ({\gamma}^{-1}(w_i^t))_{i\in I,t=0,1} \cup (g^{-1}(Q^t_{f_j}))_{j\in J{\smallsetminus} I, t=0,1} \,\bigr)$, where ${\gamma}={\gamma}_{{\underline{n}}derline w_{j_0}}={\gamma}_g\in G_\infty$ is determined by ${\Gamma}_{f_{i_0},f_{j_0}}(g) = g\circ {\gamma}_g$ or equivalently ${\gamma}_{{\underline{n}}derline w_{j_0}}(t)=w_{j_0}^t \;\forall t$. } $$ \qquad\qquad {\partial}hi_{IJ} : V_{IJ} \to U_{J}, \quad \bigl( g , {\underline{n}}derline w \bigr) \mapsto \bigl(\, {\Gamma}_{f_{i_0},f_{j_0}}(g) \,,\, (g^{-1}(Q^t_{f_j}))_{j\in J, t=0,1} \,\bigr) , $$ \item linear embedding $\widehat{\partial}hi_{IJ}:E_I \hookrightarrow E_J$ given by the natural inclusion. \end{itemize} \end{enumerate} Moreover, any choice of $i_0\in I$ and open sets $\widehat{\mathcal W}_{I,i_0}$ for each $F_I\neq\emptyset$, and domains $V_{IJ}$ for each $F_I\cap F_J\neq\emptyset$ forms a {\it weak Kuranishi atlas} ${({\bf K}_I, \widehat\Phi_{IJ})}$ in the sense of Definition~\ref{def:Kwk}; in particular satisfying the weak cocycle condition $$ {\partial}hi_{JK} \circ {\partial}hi_{IJ} = {\partial}hi_{IK} \qquad\text{on}\;\; V_{IK} \cap {\partial}hi_{IJ}^{-1}(V_{JK}) . $$ \end{thm} {\beta}gin{proof} The sum charts ${\bf K}_{I}$ will be constructed as in Proposition~\ref{prop:A1}. In fact, let us begin by showing that the necessary choices of neighbourhoods in (i) always exist. Since $F_I\subset{\overlineerline {\Mm}}_1(A,J)$ is open and ${\partial}si_{i_0}$ is a homeomorphism to $s_{i_0}^{-1}(0)\subset U_{i_0}\subset{\mathcal B}_{f_{i_0}}$, there exists an open set ${\mathcal B}_{I,i_0}\subset{\mathcal B}_{f_{i_0}}$ such that ${\mathcal B}_{I,i_0}\cap s_{i_0}^{-1}(0) = \bigl\{ g \in {\mathcal B}_{I,i_0} \,\big|\, {\overline {{\partial}}_J} g = 0 \bigr\} = {\partial}si_{i_0}^{-1}(F_I)$. Next, since ${\iota}ta_{I,i_0}$ is an embedding to $\widehat{\mathcal B}^{k,p} \tildemes \bigl\{ ({\underline{n}}derline w_{i}) \in (S^2)^{2|I|} \,\big|\, {\underline{n}}derline w_{i_0} = (0,1) \bigr\}$, it contains an open set $\widehat{\mathcal W}_{I,i_0}$ such that $\widehat{\mathcal W}_{I,i_0} \cap{\rm im\,}{\iota}ta_{I,i_0} = {\iota}ta_{i_0,I}( {\mathcal B}_{I,i_0})$. Together, this implies the requirement in (i). Note moreover that elements $(g,{\underline{n}}derline w)\in {\rm im\,} {\iota}ta_{i_0,I}$ satisfy $g(w_i^t)\in Q^t_{f_i}$ and hence $\widehat{\mathcal W}_{I,i_0}$ can be chosen such that $g(w_i^t)$ lies in a given neighbourhood of the hypersurface $Q^t_{f_i}$ near $f_i(t)$ for any $(g,{\underline{n}}derline w)\in \widehat{\mathcal W}_{I,i_0}$. 
Next, note that Sum Condition II$'$ is assumed to be satisfied for $g\in{\partial}si_{i_0}^{-1}(F_I)\subset{\mathcal B}_{f_{i_0}}$, and hence continues to hold for $\bigl(g,({\underline{n}}derline w_i) \bigr) \in\widehat{\mathcal W}_{I,i_0}$ in a sufficiently small neighbourhood of ${\iota}ta_{i_0,I}({\partial}si_{i_0}^{-1}(F_I))$. Thus we obtain a well defined bundle {\beta}gin{equation}{\lambda}bel{HEI} \widehat E_I \; \to \; \widehat{\mathcal W}_{I,i_0} , \qquad \widehat E_I|_{(g,{\underline{n}}derline w)}:= {\textstyle \sum_{i\in I}} \bigl( {\Gamma}mma_{{\underline{n}}derline w_i}^* \widehat E^i \bigr)|_{g} \;\subset\; \widehat{\mathcal E}|_g. \end{equation} In order to construct a Kuranishi chart ${\bf K}_{I}$ with footprint $F_I$ from $\widehat E_I$ along the lines of Proposition~\ref{prop:A1}, we need to express the domain $U_I$ as the zero set of a smooth transverse Fredholm operator. Recall here from Section~\ref{ss:eval} that ${\mathcal C}^\infty(S^2,M) \tildemes S^2 \ni (g, w_i^t) \mapsto g(w_i^t) \in M$ is not smooth in any standard Banach norm. Hence we first construct the thickened solution space \[ \widehat U_{I} := \bigl\{ ( g , {\underline{n}}derline{w} ) \in \widehat{\mathcal W}_{I,i_0} \,\big|\, \; {\overline {{\partial}}_J} g \in \widehat E_I|_{(g,{\underline{n}}derline w)} \bigr\}, \] which is the zero set of the smooth Fredholm operator $$ \widehat{\mathcal W}_{I,i_0} \;\longrightarrow\; \bigcup_{(g, {\underline{n}}derline w)}\quotient{\widehat{\mathcal E}|_g }{ \widehat E_I |_{(g,{\underline{n}}derline w)}} , \qquad ( g , {\underline{n}}derline{w}) \; \longmapsto\; [{\overline {{\partial}}_J} g] . $$ We can achieve transversality of this operator by choosing $\widehat{\mathcal W}_{I,i_0}$ to be a sufficiently small neighbourhood of ${\iota}ta_{i_0,I}({\partial}si_{i_0}^{-1}(F_I))$, since ${\overline {{\partial}}_J}$ is transverse to $\widehat{\mathcal E}/\widehat E^{i_0}$ over ${\partial}si_{i_0}^{-1}(F_I) \subset \widehat{\mathcal V}_{f_{i_0}}$, and for $( g , {\underline{n}}derline{w})\in {\iota}ta_{i_0,I}({\partial}si_{i_0}^{-1}(F_I))$ we have $\widehat E^{i_0}|_g \subset \widehat E_I |_{(g,{\underline{n}}derline w)}$. Finally, the domain $U_{I}\subset\widehat U_{I}$ is the zero set of the map {\beta}gin{align} {\lambda}bel{BQW} \widehat U_{I} \quad &\;\longrightarrow\; {\underline{n}}derlineerset{i\in I}{ \textstyle {{\partial}rod}}\bigl( ({\rm T}_{f_i(0)} Q_{f_i}^{0})^{\partial}erp\tildemes ({\rm T}_{f_i(1)} Q_{f_i}^{1})^{\partial}erp \bigr) , \\ ( g , {\underline{n}}derline{w}) &\; \longmapsto\; {\underline{n}}derlineerset{i\in I}{ \textstyle {{\partial}rod}} \bigl( \Pi^{\partial}erp_{Q_{f_i}^{0}}(g(w^0_i)), \Pi^{\partial}erp_{Q_{f_i}^{1}}(g(w^1_i))\bigr), \nonumber \end{align} which is well defined for sufficiently small choice of $\widehat{\mathcal W}_{I,i_0}$, such that the $g(w^t_i)$ lie in the domain of definition of the projections $\Pi^{\partial}erp_{Q_{f_i}^{t}}$. Moreover, this map is smooth, since by the regularity in Sum Condition III (which is satisfied by construction) we have $\widehat U_I\subset{\mathcal C}^\infty(S^2,M)$. To see that it is transverse, it suffices to consider any given point $(g ,{\underline{n}}derline{w}) \in {\iota}ta_{i_0,I}({\partial}si_{i_0}^{-1}(F_I))$, since transversality at these points persists in an open neighbourhood, and then $\widehat{\mathcal W}_{I,i_0}$ can be chosen sufficiently small to achieve transversality on all of $\widehat U_I$. 
At these points we understand some parts of the tangent space ${\rm T}_{(g ,{\underline{n}}derline{w})}\widehat U_I$ because $\{ (f, {\underline{n}}derline v ) \in \widehat{\mathcal W}_{I,i_0} \,|\, {\overline {{\partial}}_J} f=0 \}$ is a subset of $\widehat U_I$ which contains ${\iota}ta_{i_0,I}({\partial}si_{i_0}^{-1}(F_I))$. Hence we have $\bigl({\delta}lta g, ({\delta}lta w_i)_{i\in I}\bigr) \in {\rm T}_{(g, {\underline{n}}derline w)} \widehat U_I$ for any ${\delta}lta g \in \ker{\rm D}_g{\overline {{\partial}}_J}$ and ${\delta}lta w_i \in {\rm T}_{w_i}(S^2)^2$ with ${\delta}lta w_{i_0}=0$. In particular, we have ${\rm T}_{g}(G_\infty g) \tildemes \{0\} \subset {\rm T}_{(g, {\underline{n}}derline w)} \widehat U_I$ since $\{ (f, {\underline{n}}derline v ) \in \widehat{\mathcal W}_{I,i_0} \,|\, {\overline {{\partial}}_J} f=0 \}$ is invariant under the action ${\gamma} : (f,{\underline{n}}derline v)\mapsto (f\circ {\gamma} , {\underline{n}}derline v)$ of $\{{\gamma}\approx{\rm id}\}\subset G_\infty$, unlike the thickened solution space $\widehat U_I$ itself. (Neither space is invariant under the more natural action $(f,{\underline{n}}derline v)\mapsto (f\circ {\gamma} , {\gamma}^{-1}({\underline{n}}derline v))$ that will be important below, since at the moment ${\underline{n}}derline w_{i_0}$ is fixed.) Now the $i_0$ component of the linearized operator of \eqref{BQW} at any point simplifies, since the marked points $w^t_{i_0}=t$ are fixed, to {\beta}gin{equation}{\lambda}bel{linop0} {\rm T}_{(g, {\underline{n}}derline w)} \widehat U_I \;\ni\; \bigl({\delta}lta g, ({\delta}lta w_i)_{i\in I}\bigr) \;\mapsto\; \bigl( {\rm d}\Pi^{\partial}erp_{Q^{0}_{f_{i_0}}} {\delta}lta g (0) , {\rm d}\Pi^{\partial}erp_{Q^{1}_{f_{i_0}}} {\delta}lta g (1) \bigr) . \end{equation} At points with ${\overline {{\partial}}_J} g=0$, its restriction to ${\rm T}_{g}(G_\infty g) \tildemes \{0\} \subset {\rm T}_{(g, {\underline{n}}derline w)} \widehat U_I$ is surjective by the same argument as in Proposition~\ref{prop:A1}, which uses the fact that ${\rm im\,}{\rm d}_t g$ projects onto $({\rm T}_{f_{i_0}(t)} Q_{f_{i_0}}^{t})^{\partial}erp$ by the construction of the local slice ${\mathcal B}_{f_{i_0}}$ at $g\approx f_{i_0}$. Next, the $j\in I{\smallsetminus}\{i_0\}$ component of the linearized operator for fixed ${\delta}lta g$ is {\beta}gin{equation}{\lambda}bel{linopi} {\rm T}_{(g, {\underline{n}}derline w)} \widehat U_I \;\ni\; \bigl({\delta}lta g, ({\delta}lta w_i)_{i\in I}\bigr) \;\mapsto\; \Bigl( {\rm d}\Pi^{\partial}erp_{Q^{t}_{f_j}} \bigl({\delta}lta g (w^t_j ) + {\rm d}_{w^t_j} g ( {\delta}lta w^t_j) \bigr)\Bigr)_{t=0,1} . \end{equation} We claim that this is surjective for any given ${\delta}lta g \subset {\rm T}_{g}(G_\infty g)$ (given by the surjectivity requirements for $i_0$), just by variation of ${\delta}lta w_j$. Indeed, for $(g, {\underline{n}}derline w)\in {\iota}ta_{i_0,I}({\partial}si_{i_0}^{-1}(F_I))$ we have $\bigl({\delta}lta g, ({\delta}lta w_i)_{i\in I}\bigr) \in {\rm T}_{(g, {\underline{n}}derline w)} \widehat U_I$ for any ${\delta}lta g \subset {\rm T}_{g}(G_\infty g)$ and ${\delta}lta w_i \in {\rm T}_{w_i}(S^2)^2$. Moreover, we have ${\rm im\,}{\rm d}_{w^t_j} g = {\rm im\,} {\rm d}_{t} (g\circ{\gamma}_{{\underline{n}}derline w_j})$, which projects onto $({\rm T}_{f_j(t)} Q_{f_j}^{t})^{\partial}erp$ by the construction of the local slice ${\mathcal B}_{f_j}$ at $g\circ{\gamma}_{{\underline{n}}derline w_j}\approx f_j$. 
This proves surjectivity of \eqref{linopi} for $j\neq i_0$ by variation of ${\delta}lta w_j$, and together with the surjectivity of\eqref{linop0} by variation of ${\delta}lta g$ proves transversality of \eqref{BQW} for sufficiently small $\widehat{\mathcal W}_{I, i_0}$. Now that the domain $U_I$ is equipped with a smooth structure, we can construct a Kuranishi atlas ${\bf K}_I$ as in Proposition~\ref{prop:A1} by pulling back the smooth section $$ \tilde s_I : U_I \;\to\; \widehat E_I|_{U_I} , \qquad (g, {\underline{n}}derline w) \;\mapsto\; {\overline {{\partial}}_J} g $$ to the trivialization $\widehat E_I|_{U_I}\cong U_I \tildemes E_I$ given by construction of the sum bundle. The induced homeomorphism $$ {\partial}si_I : \; \tilde s_I^{-1}(0) \; \overlineerset{\cong}{\longrightarrow} \; F_I \;\subset\; {\overlineerline {\Mm}}_1(A,J) , \qquad (g, {\underline{n}}derline w)\;\mapsto\; [g] $$ maps $\tilde s_I^{-1}(0) \subset {\rm im\,}{\iota}ta_{i_0,I}$ to the desired footprint since we chose the neighbourhoods $\widehat{\mathcal W}_{I, i_0}$ and ${\mathcal B}_{I,i_0}:={\iota}ta_{i_0,I}^{-1}(\widehat{\mathcal W}_{I, i_0})\subset {\mathcal B}_{f_{i_0}}$ such that $$ {\partial}si_I( \tilde s_I^{-1}(0) ) \;=\; {\partial}r \bigl( {\iota}ta_{i_0,I}^{-1}(\tilde s_I^{-1}(0)) \bigr) \;=\; {\partial}r \bigl( {\iota}ta_{i_0,I}^{-1}(\widehat{\mathcal W}_{I,i_0}) \cap {\overline {{\partial}}_J}^{-1}(0) \bigr) \;=\; {\partial}r \bigl( {\partial}si_{i_0}^{-1}(F_I) \bigr) \;=\; F_I , $$ where ${\partial}r :\widehat{\mathcal B}^{k,p}\to \widehat{\mathcal B}^{k,p}/G_\infty$ denotes the quotient. This finishes the construction for (i). To construct the coordinate changes, we can now forget the marked points, which were only a technical means to obtaining smooth sum charts. For that purpose fix a pair $i_0\in I$ and note that the forgetful map $\Pi_I:\widehat{\mathcal B}^{k,p}\tildemes (S^2)^{2|I|}\to\widehat{\mathcal B}^{k,p}$ is a left inverse to the embedding ${\iota}_{i_0,I}:{\mathcal B}_{f_{i_0}}\hookrightarrow \widehat{\mathcal B}^{k,p}\tildemes (S^2)^{2|I|}$ from \eqref{embed}, whose image contains the smooth finite dimensional domain $U_I\subset{\mathcal C}^\infty(S^2,M)\tildemes (S^2)^{2|I|}$. Hence it restricts to a topological embedding to a space of perturbed holomorphic maps in the slice, {\beta}gin{align}{\lambda}bel{UB} \Pi_I|_{U_I} : \; U_I \;\longrightarrow\; B_{I,i_0} :=&\; \bigl\{ g\in {\mathcal B}_{f_{i_0}} \,\big|\, \exists {\underline{n}}derline w \in (S^2)^{2|I|} : (g,{\underline{n}}derline w) \in U_I \bigr\} \\ =&\; \bigl\{ g\in {\mathcal B}_{I,i_0} \,\big|\, {\overline {{\partial}}_J} g \in \widehat E_I |_{(g,{\underline{n}}derline w(g))} \bigr\} . \nonumber \end{align} In fact, this is a smooth embedding since the forgetful map is smooth and we can check that the differential of the forgetful map $\Pi_I|_{U_I}$ is injective. Indeed, its kernel at $(g,{\underline{n}}derline w)$ is the vertical part of the tangent space ${\rm T}_{(g,{\underline{n}}derline w)} U_{I} \cap \bigl( \{0\} \tildemes {\rm T}_{{\underline{n}}derline w} (S^2)^{2|I|}\bigr)$, which in terms of the linearized operators \eqref{linopi} is given by the kernel of $$ {\rm T}_{{\underline{n}}derline w} (S^2)^{2|I|} \; \ni \; ({\delta}lta w^t_i )_{i\in I, t=0,1} \;\longmapsto\; \Bigl( {\rm d}_{g(w^t_i)}\Pi^{\partial}erp_{Q^{t}_{f_i}} \bigl( {\rm d}_{w^t_i} g ( {\delta}lta w^t_i) \bigr)\Bigr)_{i\in I, t=0,1} \;\in \; {\underline{n}}derlineerset{i\in I, t=0,1}{\textstyle{\partial}rod} {\rm im\,} {\rm d}_t f_i . 
$$ This operator is injective (and hence surjective) since by Sum Condition IV$'$ $$ \bigl({\rm im\,}{\rm d}_{w^t_i} g\bigr)\; {\partial}itchfork\; \bigl({\rm T}_{w^t_i}Q^{t}_{f_i}\bigr)=\ker{\rm d}_{g(w^t_i)}\Pi^{\partial}erp_{Q^{t}_{f_i}}. $$ Thus ${\iota}_{i_0,I}: B_{I,i_0}\to U_I$ is a diffeomorphism, and since it also intertwines the Cauchy--Riemann operator on the domains and the projection to ${\overlineerline {\Mm}}_1(A,J)$, this forms a map $$ \widehat\Pi_{I,i_0}:=\bigl(\Pi_I|_{U_I} , {\rm id}_{E_I} \bigr): \; {\bf K}_I \longrightarrow {\bf K}^B_I, $$ from the sum chart ${\bf K}_I=\bigl(\, U_I \,,\, \bigcup_{(g,{\underline{n}}derline w)\in U_I} \widehat E_I|_{(g,{\underline{n}}derline w)} \,,\, \tilde s_I(g,{\underline{n}}derline w)={\overline {{\partial}}_J} g \,,\, {\partial}si_I(g,{\underline{n}}derline w)=[g] \,\bigr)$ to the Kuranishi chart $$ {\bf K}^B_I: = \bigl(\, B_{I,i_0} \,,\, {\textstyle \bigcup_{g\in B_{I,i_0}}} \widehat E_I|_{(g,{\underline{n}}derline w(g))} \,,\, \tilde s(g)={\overline {{\partial}}_J} g \,,\, {\partial}si(g)=[g] \,\bigr). $$ (Here we indicated the obstruction bundles before trivialization to $E_I$.) The inverse map $\widehat\Pi_{I,i_0}^{-1}:=\bigl({\iota}_{i_0,I} , {\rm id}_{E_I} \bigr)$ is also a map between Kuranishi charts, and both are coordinate changes since the index condition is automatically satisfied when ${\partial}hi_{IJ}$ and $\widehat{\partial}hi_{IJ}$ are both diffeomorphisms. Indeed, in this case, both target and domain in the tangent bundle condition \eqref{tbc} are trivial. Next, we will obtain further coordinate changes $\widehat\Phi^I_{i_0 j_0} : (B_{I,i_0},\ldots) \to (B_{I,j_0},\ldots)$ for different choices of index $i_0,j_0\in I$. Here the choices of neighbourhoods $\widehat{\mathcal W}_{I,\bullet}$ induce neighbourhoods in the local slices ${\mathcal B}_{I,\bullet}\subset{\mathcal B}_{f_\bullet}$ such that $B_{I,\bullet} :=\; \bigl\{ g\in {\mathcal B}_{f_\bullet} \,\big|\, {\overline {{\partial}}_J} g \in \widehat E_I |_{(g,{\underline{n}}derline w(g))} \bigr\}$. These domains are intertwined by the transition map between local slices ${\Gamma}_{f_{i_0},f_{j_0}}$. Indeed, using the $G_\infty$-equivariance of the obstruction bundles \eqref{Eequivariant}, we have $$ {\overline {{\partial}}_J} g = {\textstyle \sum_{i\in I}} \, {\gamma}_{{\underline{n}}derline w_i(g)}^*\nu^i |_{\operatorname{graph} g} \quad\Longrightarrow\quad {\overline {{\partial}}_J}(g\circ{\gamma}) = {\textstyle \sum_{i\in I}}\, {\gamma}_{{\underline{n}}derline w_i (g\circ {\gamma})}^*\nu^i |_{\operatorname{graph} g\circ{\gamma}} , $$ where ${\gamma}^{-1}\circ{\gamma}_{{\underline{n}}derline w_i} = {\gamma}_{{\underline{n}}derline w_i(g\circ{\gamma})}$ since ${\gamma}^{-1}({\gamma}_{{\underline{n}}derline w_i}(t))= {\gamma}^{-1}( w_i^t)= w_i^t(g\circ{\gamma})$. Thus we obtain a well defined map ${\Gamma}_{f_{i_0},f_{j_0}} : B_{I,i_0} \cap G_\infty{\mathcal B}_{I,j_0} \to B_{I,j_0}$. It is a topological embedding with open image, since its inverse is ${\Gamma}_{f_{j_0},f_{i_0}} |_{B_{I,j_0} \cap G_\infty{\mathcal B}_{I,i_0}}$. In fact, it is a local diffeomorphism since both maps are smooth by Lemma~\ref{le:Gsmooth}. The above also shows that this diffeomorphism intertwines the sections, given by the Cauchy--Riemann operator, and the footprint maps, given by the projection $g\mapsto [g]\in{\overlineerline {\Mm}}_1(A,J)$. 
Since the index condition is automatic as above, we obtain the required coordinate change by $\widehat\Phi^I_{i_0 j_0}:=\bigl(\, {\Gamma}_{f_{i_0},f_{j_0}} \,,\, {\rm id}_{E_I} \,\bigr)$ with domain $B_{I,i_0} \cap G_\infty{\mathcal B}_{I,j_0} \subset B_{I,i_0}$. With these preparations, a natural coordinate change for $I\subsetneq J$ and any choice of $i_0\in I$, $j_0\in J$ arises from the composition of the above coordinate changes (all of which are local diffeomorphisms on the domains) with another natural coordinate change $\widehat\Phi^{i_0}_{IJ} : (B_{I,j_0},\ldots) \to (B_{J,j_0},\ldots)$ given by the inclusion $$ {\partial}hi^{j_0}_{IJ} := {\rm id}_{{\mathcal B}_{j_0}} : \; B_{I,j_0} \cap {\mathcal B}_{J,j_0} \;\hookrightarrow\; B_{J,j_0} . $$ Again, this naturally intertwines the sections and footprint maps with $$ s_I^{-1}(0)\cap B_{I,j_0} \cap {\mathcal B}_{J,j_0}= {\partial}si_I^{-1}(F_J). $$ To check the index condition for this embedding together with the linear embedding $\widehat{\partial}hi^{j_0}_{IJ} := {\rm id}_{E_I} : E_I \hookrightarrow E_J$ we express the tangent spaces to both domains in terms of the linearization of the Cauchy--Riemann operator on the local slice $\overlineerline{\partial}artial: {\mathcal B}_{j_0} \to \widehat{\mathcal E}|_{{\mathcal B}_{j_0}}$. Comparing $$ {\rm T}_g B_{I,j_0} = ({\rm D}_g \overlineerline{{\partial}artial}) ^{-1} \Bigl( \textstyle{\sum_{i\in I}} ({\Gamma}_{{\underline{n}}derline w_i(g)}^*\widehat E^i)|_g \Bigr) , \qquad {\rm T}_g B_{J,j_0} = ({\rm D}_g \overlineerline{\partial}artial) ^{-1} \Bigl( {\textstyle \sum_{j\in J}} ({\Gamma}_{{\underline{n}}derline w_j(g)}^*\widehat E^j)|_g \Bigr) $$ as subsets of ${\rm T}_g{\mathcal B}_{f_{j_0}}$, we can identify $$ \quotient{{\rm T}_g B_{J,j_0}}{{\rm d}_g{\partial}hi^{j_0}_{IJ} \bigl( {\rm T}_g B_{I,j_0} \bigr)} \;=\; \quotient{ ({\rm D}_g \overlineerline{\partial}artial) ^{-1} \left( \textstyle{\sum_{j\in J{\smallsetminus} I}} ({\Gamma}_{{\underline{n}}derline w_j(g)}^*\widehat E^j)|_g\right) }{\ker {\rm D}_g\overlineerline{\partial}artial } $$ to see that the linearized section (given by the linearized Cauchy Riemann operator together with the trivialization of obstruction bundles) satisfies the tangent bundle condition \eqref{tbc} {\beta}gin{eqnarray*} {\rm D}_{g} \overlineerline{\partial}artial \;:\; \quotient{{\rm T}_g B_{J,j_0}}{{\rm d}_g{\partial}hi^{j_0}_{IJ} \bigl( {\rm T}_g B_{I,j_0} \bigr)} \;&\stackrel{\cong} \longrightarrow\; & {\textstyle\sum_{j\in J{\smallsetminus} I}} ({\Gamma}_{{\underline{n}}derline w_j(g)}^*\widehat E^j)|_g \\ &&\qquad\quad \;\cong\; \quotient{\widehat E_J|_{(g,({\underline{n}}derline w_j(g))_{j\in J})}} {\widehat E_I|_{(g,({\underline{n}}derline w_i(g))_{i\in I})}}. \end{eqnarray*} Finally, we can compose the coordinate changes to $$ \widehat\Phi_{IJ} \,:= \; \widehat\Pi_{J,j_0}^{-1} \circ \widehat\Phi^J_{i_0 j_0} \circ \widehat\Phi^{i_0}_{IJ} \circ \widehat\Pi_{I,i_0} \; : \;\; {\bf K}_I \; \to \; {\bf K}_J . $$ By Lemma~\ref{le:cccomp} this defines a coordinate change with the maximal domain $$ {\iota}ta_{i_0,I}(B_{I,i_0} \cap G_\infty {\mathcal B}_{J,j_0}) \;=\; {\iota}ta_{i_0,I} \bigl( {\Gamma}_{f_{j_0},f_{i_0}} \bigl( {\iota}ta_{j_0,J}^{-1} ( U_{J} ) \bigr)\bigr), $$ which we can restrict to any smaller choice of $V_{IJ}$ containing ${\partial}si_I^{-1}(F_J)$. 
The linear embedding, after the fixed trivialization of the bundle, is the trivial embedding $\widehat{\partial}hi_{IJ}: E_I\hookrightarrow E_J$, whereas the nonlinear embedding ${\partial}hi_{IJ}:={\iota}ta_{i_0,I}^{-1}\circ {\Gamma}_{f_{i_0},f_{j_0}}\circ {\iota}ta_{j_0,J}: V_{IJ} \to U_J$ of domains is given by the restriction to $V_{IJ}$ of the composition $$ U_I \;\overlineerset{{\iota}ta_{i_0,I}}{\longhookleftarrow}\; B_{I,i_0} \cap G_\infty {\mathcal B}_{J,j_0} \; \xrightarrow[\cong]{{\Gamma}_{f_{i_0},f_{j_0}}}\; B_{I,j_0} \cap {\mathcal B}_{J,j_0} \;\overlineerset{{\rm id}_{{\mathcal B}_{j_0}}}{\longhookrightarrow}\; B_{J,j_0} \;\overlineerset{{\iota}ta_{j_0,J}}{\longhookrightarrow}\; U_J . $$ This completes the proof of (ii). Finally, the cocycle condition on the level of the linear embeddings $\widehat{\partial}hi_{IJ}$ holds trivially, whereas the weak cocycle condition for the embeddings between the domains follows, since ${\iota}ta_{j_0,J}^{-1}\circ{\iota}ta_{j_0,J}={\rm id}_{{\mathcal B}_{J,j_0}}$ from the cocycle property of the local slices, $$ {\Gamma}_{f_{j_0},f_{k_0}}\circ {\Gamma}_{f_{i_0},f_{j_0}} = {\Gamma}_{f_{i_0},f_{k_0}} \qquad\text{on}\; {\mathcal B}_{f_{i_0}}\cap \bigl(G_\infty\cdot{\mathcal B}_{f_{j_0}}\bigr) \cap \bigl(G_\infty \cdot {\mathcal B}_{f_{k_0}}\bigr) . $$ This completes the proof of Theorem~\ref{thm:A2}. \end{proof} Note that we crucially use the triviality of the isotropy groups, in particular in the proof of the cocycle condition. Nontrivial isotropy groups cause additional indeterminacy, which has to be dealt with in the abstract notion of Kuranishi atlases. The construction of Kuranishi atlases with nontrivial isotropy groups for Gromov--Witten moduli spaces will in fact require a sum construction already for the basic Kuranishi charts. We will give a more detailed proof of Theorem~\ref{thm:A2} in \cite{MW:gw}, where we will also treat nodal curves and deal with the case of isotropy or, more generally, nonunique intersections with the hypersurfaces $Q^t_{f_i}$. Further, we will show that Sum Conditions I$'$ and II$'$ can always be satisfied by perturbing and shrinking a given set of basic charts. Finally, in the language of Section~\ref{ss:top}, note that the Kuranishi charts and transition data that we construct only satisfy the weak cocycle condition. However, our obstruction bundles are naturally additive. Therefore we obtain an additive weak Kuranishi atlas. As we show in Theorem~\ref{thm:K} and Section~\ref{s:VMC}, this is precisely what we need to define the virtual fundamental class $[{\overlineerline {\Mm}}_1(A,J)]^{\rm virt}$. {\beta}gin{remark} \rm {\lambda}bel{rmk:smart} (i) The actual idea behind the choice of local slice conditions and the introduction of further marked points is of course a stabilization of the domain in order to obtain a theory over the Deligne--Mumford moduli space of stable genus zero Riemann surfaces with marked points. So one might want to rewrite this approach invariantly and, when summing two charts for example, work over the Deligne--Mumford moduli space with five marked points instead of taking the points $(\infty,0,1,w_i^0,w_i^1)$. When properly handled, this approach does give a good framework for discussing coordinate changes. However one does need to take care not to obscure the analytic problems by introducing these further abstractions and notations. Moreover, this abstraction does not yield another approach to constructing the coordinate changes. 
If there is a rigorous approach using the Deligne--Mumford formalism, then in a local model near $(\infty,0,1, w^0_{01},w^1_{01})$, it would take exactly the form discussed above.{ } {\mathbb N}I (ii) The abstraction to equivalence classes of maps and marked points modulo automorphisms becomes crucial when one wants to extend the above approach to construct finite dimensional reductions near nodal curves, because the Gromov compactification exactly mirrors the construction of Deligne--Mumford space. While we will defer the details of this construction to \cite{MW:gw}, let us note that the genus zero Deligne--Mumford spaces are defined by equivalence classes of pairwise distinct marked points on the sphere. Hence we will need to make sure that the marked points $w^0,w^1\in S^2$ that we read off from intersection with the hypersurfaces $Q_{f_1}^{0},Q_{f_1}^{1}$ are disjoint from each other and from $\infty, 0,1$. Thus, in order to be summable near nodal curves, the basic Kuranishi charts must be constructed from local slices with pairwise disjoint slicing conditions $Q_{f_i}^t$. { } {\mathbb N}I (iii) In view of Sum Condition II$'$ and the previous remark, one cannot expect any two given basic Kuranishi charts to have summable obstruction bundles and hence be compatible. This requires a perturbation of the basic Kuranishi charts, which is possible only when dealing with a compactified solution space, since each perturbation may shrink the image of a chart. { } {\mathbb N}I (iv) This discussion also shows that even a simple moduli space such as ${\overlineerline {\Mm}}_{1}(A,J)$ does not have a canonical Kuranishi atlas. Hence the construction of invariants from this space also involves constructing a Kuranishi atlas on the product cobordism ${\overlineerline {\Mm}}_{1}(A,J)\tildemes [0,1]$ intertwining any two Kuranishi atlases for ${\overlineerline {\Mm}}_{1}(A,J)$ arising from different choices of basic charts and transition data. Note here that one could construct basic charts of the ``wrong dimension'' by simply adding trivial finite dimensional factors to the abstract domains or obstruction spaces. A natural and necessary condition for constructing a well defined cobordism class of Kuranishi atlases with the ``expected dimension'' for ${\overlineerline {\Mm}}_{1}(A,J)$ is the following {\bf Fredholm index condition for charts} pointed out to us by Dietmar Salamon:{ } {\it Each Kuranishi chart must in some sense identify the kernel $\ker{\rm d} s_{f_i}$ and cokernel $({\rm im\,}{\rm d} s_{f_i})^{\partial}erp$ of the finite dimensional reduction with the kernel modulo the infinitesimal action $\ker{\rm d}_{f_i}{\overline {{\partial}}_J}/{\scriptstyle {\rm T}_{f_i}(G_\infty f_i)}$ and cokernel $({\rm im\,}{\rm d}_{f_i}{\overline {{\partial}}_J})^{\partial}erp$ of the Cauchy--Riemann operator.}{ } In fact, one might argue that this identification should be part of a Kuranishi atlas on a moduli space. However, this would require giving the abstract footprint of a Kuranishi atlas more structure than that of a compact metrizable topological space, in order to keep track of the kernel and cokernel of the Fredholm operator that arises by linearization from the PDE that defines the moduli space. Whether the index condition picks out a unique cobordism class of Kuranishi atlases on $X$ is an interesting open question. 
{ } {\mathbb N}I (v) The Fredholm index condition for Kuranishi charts, once rigorously formulated, should imply that any map between charts which satisfy the index condition should also satisfy the index condition for coordinate changes in Definition~\ref{def:change} (a reformulation of the tangent bundle condition introduced by Joyce). Conversely, a map between charts that satisfies the index condition for coordinate changes should also preserve the Fredholm index condition for charts. More precisely, if $\widehat\Phi_{IJ} :{\bf K}_I \to {\bf K}_J$ is a map satisfying the index condition, and one of the charts ${\bf K}_I$ or ${\bf K}_J$ satisfies the Fredholm index condition, then both charts satisfy the Fredholm index condition (pending a rigorous definition of the latter). \end{remark} {\rm sect}ion{Kuranishi charts and coordinate changes with trivial isotropy } {\lambda}bel{s:chart} Throughout this chapter, $X$ is assumed to be a compact and metrizable space. This section defines Kuranishi charts with trivial isotropy for $X$ and coordinate changes between them. The case of nontrivial isotropy is a fairly straightforward generalization using the language of groupoids, but the purpose of this paper is to clarify fundamental topological issues in the simplest example. Hence we assume throughout that the charts have trivial isotropy and drop this qualifier from the wording. Our definitions are motivated by \cite{FO}; an additional reference is \cite{J}. Differing from both approaches, we work exclusively with charts whose domain and section are fixed, rather than with germs of charts as discussed in Section~\ref{ss:alg}. We moreover restrict our attention to charts without boundaries or corners. \cite[App.~A]{FOOO} also dispenses with germs and uses essentially the same basic definitions. However, our insistence on specifying the domain is new, as are our notion of Kuranishi atlas and interpretation in terms of categories in Section~\ref{s:Ks} and our construction of the virtual fundamental class in Section~\ref{s:VMC}. \subsection{Charts, maps, and restrictions}{\lambda}bel{ss:chart} {\beta}gin{defn}{\lambda}bel{def:chart} Let $F\subset X$ be a nonempty open subset. A {\bf Kuranishi chart} for $X$ with {\bf footprint} $F$ is a tuple ${\bf K} = (U,E,s,{\partial}si)$ consisting of {\beta}gin{itemize} \item the {\bf domain} $U$, which is an open smooth $k$-dimensional manifold; \item the {\bf obstruction space} $E$, which is a finite dimensional real vector space; \item the {\bf section} $U\to U\tildemes E, x\mapsto (x,s(x))$ which is given by a smooth map $s: U\to E$; \item the {\bf footprint map} ${\partial}si : s^{-1}(0) \to X$, which is a homeomorphism to the footprint ${{\partial}si(s^{-1}(0))=F}$. \end{itemize} The {\bf dimension} of ${\bf K}$ is $\dim {\bf K}: = \dim U-\dim E$. \end{defn} More generally, one could work with an obstruction bundle over the domain, but this complicates the notation and, by not fixing the trivialization, makes coordinate changes less unique. In the application to holomorphic curve moduli spaces, there are natural choices of trivialized obstruction bundles. The section $s$ is then given by the generalized Cauchy--Riemann operator, and elements in the footprint are $J$-holomorphic maps modulo reparametrization. { }{\mathbb N}I Since we aim to define a regularization of $X$, the most important datum of a Kuranishi chart is its footprint. 
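To give a minimal illustration (a toy example, not one arising from the moduli problem above): if $X$ is a single point, then $U={\mathbb R}$, $E={\mathbb R}$, $s(x)=x^2$, together with ${\partial}si$ sending $0\in s^{-1}(0)$ to the point of $X$, define a Kuranishi chart with footprint $F=X$ and dimension $\dim{\bf K}=1-1=0$; the same footprint is equally well described by the chart whose domain is a single point, with $E'=\{0\}$ and $s'=0$.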
So, as long as the footprint is unchanged, we can vary the domain $U$ and section $s$ without changing the chart in any important way. Nevertheless, we will always work with charts that have a fixed domain and section. In fact, our definition of a map between Kuranishi charts crucially involves these domains. The following definition is very general; the actual coordinate changes involved in a Kuranishi atlas will be a combination of a restriction, as defined below, and a map that satisfies an extra index condition. {\beta}gin{defn}{\lambda}bel{def:map} A {\bf map} $\widehat\Phi : {\bf K} \to {\bf K}'$ between Kuranishi charts is a pair $({\partial}hi,\widehat{\partial}hi)$ consisting of an embedding ${\partial}hi :U \to U'$ and a linear injection $\widehat {\partial}hi :E \to E'$ such that {\beta}gin{enumerate} \item the embedding restricts to ${\partial}hi|_{s^{-1}(0)}={\partial}si'^{-1} \circ{\partial}si : s^{-1}(0) \to s'^{-1}(0)$, the transition map induced from the footprints in $X$; \item the embedding intertwines the sections, $s' \circ {\partial}hi = \widehat{\partial}hi \circ s$, on the entire domain $U$. \end{enumerate} That is, the following diagrams commute: {\beta}gin{equation} {\beta}gin{array} {ccc} {U\tildemes E}& \stackrel{{\partial}hi\tildemes \widehat{\partial}hi} \longrightarrow & {U'\tildemes E'} {\partial}hantom{\int_Quark} \\ {\partial}hantom{sp} \uparrow {s}&&\uparrow {s'} {\partial}hantom{spac}\\ {\partial}hantom{s}{U} & \stackrel{{\partial}hi} \longrightarrow &{U'} {\partial}hantom{spacei} \end{array} \qquad {\beta}gin{array} {ccc} {s^{-1}(0)} & \stackrel{{\partial}hi} \longrightarrow &{s'^{-1}(0)} {\partial}hantom{\int_Quark} \\ {\partial}hantom{spa} \downarrow{{\partial}si}&&\downarrow{{\partial}si'} {\partial}hantom{space} \\ {\partial}hantom{s}{X} & \stackrel{{\rm Id}} \longrightarrow &{X}. {\partial}hantom{spaceiiii} \end{array} \end{equation} \end{defn} The dimension of the obstruction space $E$ typically varies as the footprint $F\subset X$ changes. Indeed, the maps ${\partial}hi, \widehat{\partial}hi$ need not be surjective. However, as we will see in Definition \ref{def:change}, the maps allowed as coordinate changes are carefully controlled in the normal direction. Since we only defined maps of Kuranishi charts that induce an inclusion of footprints, we now need to define a notion of restriction of a Kuranishi chart to a smaller subset of its footprint. {\beta}gin{defn} {\lambda}bel{def:restr} Let ${\bf K}$ be a Kuranishi chart and $F'\subset F$ an open subset of the footprint. A {\bf restriction of ${\bf K}$ to $\mathbf{\emph F\,'}$} is a Kuranishi chart of the form $$ {\bf K}' = {\bf K}|_{U'} := \bigl(\, U' \,,\, E'=E \,,\, s'=s|_{U'} \,,\, {\partial}si'={\partial}si|_{s'^{-1}(0)}\, \bigr) $$ given by a choice of open subset $U'\subset U$ of the domain such that $U'\cap s^{-1}(0)={\partial}si^{-1}(F')$. In particular, ${\bf K}'$ has footprint ${\partial}si'(s'^{-1}(0))=F'$. \end{defn} The following lemma shows that we may easily restrict to any open subset of the footprint. Moreover it provides a tool for restricting to precompact domains, which we require for refinements of Kuranishi atlases in Sections~\ref{ss:shrink} and \ref{ss:red}. Here and throughout we will use the notation $V'\sqsubset V$ to mean that the inclusion $V'\hookrightarrow V$ is {\it precompact}. That is, ${\rm cl}_V(V')$ is compact, where ${\rm cl}_V(V')$ denotes the closure of $V'$ in the relative topology of $V$. 
If both $V'$ and $V$ are contained in a compact space $X$, then $V'\sqsubset V$ is equivalent to the inclusion $\overline{V'}:={\rm cl}_X(V')\subset V$ of the closure of $V'$ with respect to the ambient topology.
{\beta}gin{lemma}{\lambda}bel{le:restr0} Let ${\bf K}$ be a Kuranishi chart. Then for any open subset $F'\subset F$ there exists a restriction ${\bf K}'$ to $F'$ whose domain $U'$ is such that $\overline{U'}\cap s^{-1}(0) = {\partial}si^{-1}(\overline{F'})$. If moreover $F'\sqsubset F$ is precompact, then $U'$ can be chosen to be precompact. \end{lemma}
{\beta}gin{proof} Since $F'\subset F$ is open and ${\partial}si:s^{-1}(0)\to F$ is a homeomorphism in the relative topology of $s^{-1}(0)\subset U$, there exists an open set $V\subset U$ such that ${\partial}si^{-1}(F')=V\cap s^{-1}(0)$. We claim that $U'\subset V$ can be chosen so that in addition its closure intersects $s^{-1}(0)$ in ${\partial}si^{-1}(\overline{F'})$. To arrange this we define $$ U' := \bigl\{x\in V \,\big|\, d(x, {\partial}si^{-1}(F')) < d(x, {\partial}si^{-1}(F{\smallsetminus} F'))\bigr\}, $$ where $d(x,A):= \inf_{a\in A} d(x,a)$ denotes the distance between the point $x$ and the subset $A\subset U$ with respect to any metric $d$ on the finite dimensional manifold $U$. Then $U'$ is an open subset of $V$. By construction, its intersection with $s^{-1}(0)={\partial}si^{-1}(F)$ is $V\cap {\partial}si^{-1}(F')={\partial}si^{-1}(F')$. To see that $\overline{U'} \cap s^{-1}(0) \subset {\partial}si^{-1}(\overline{F'})$, consider a sequence $x_n\in U'$ that converges to $x_\infty\in {\partial}si^{-1}(F)$. If $x_\infty\in {\partial}si^{-1}(F{\smallsetminus} F')$ then by definition of $U'$ there are points $y_n\in {\partial}si^{-1}(F')$ such that $d(x_n,y_n)< d(x_n,x_\infty)$. This implies $d(x_n,y_n)\to 0$, hence we also get convergence $y_n\to x_\infty$, which proves $x_\infty\in \overline{{\partial}si^{-1}(F')} = {\partial}si^{-1}(\overline{F'})$, where the last equality is by the homeomorphism property of ${\partial}si$. Finally, the same homeomorphism property implies the inclusion ${\partial}si^{-1}(\overline{F'}) = \overline{{\partial}si^{-1}(F')} \subset \overline{U'} \cap s^{-1}(0)$, and thus equality. This proves the first statement.
The second statement will hold if we show that when $F'\sqsubset F$ is precompact, we may choose $V$, and hence $U'\subset V$, to be precompact in $U$. For that purpose we use the homeomorphism property of ${\partial}si$ and relative closedness of $s^{-1}(0)\subset U$ to deduce that ${\partial}si^{-1}(F')\subset U$ is a precompact set in a finite dimensional manifold, hence has a precompact open neighbourhood $V\sqsubset U$. To see this, note that each point in $\overline{{\partial}si^{-1}(F')}$ is the center of a precompact ball, by the Heine--Borel theorem; and a continuity and compactness argument provides a finite covering of $\overline{{\partial}si^{-1}(F')}$ by precompact balls. \end{proof}
\subsection{Coordinate changes} {\lambda}bel{ss:coord} \hspace{1mm}\\ The following notion of coordinate change is key to the definition of Kuranishi atlases. It involves a special kind of map from an intermediate chart that is obtained by restriction. Here we begin using notation that will also appear in our definition of Kuranishi atlases. For now, ${\bf K}_I=(U_I,E_I,s_I,{\partial}si_I)$ and ${\bf K}_J=(U_J,E_J,s_J,{\partial}si_J)$ just denote different Kuranishi charts for the same space $X$.
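Since the coordinate changes below are assembled from restrictions and maps of charts, it may help to keep the underlying bookkeeping in mind. The following purely schematic sketch (in Python, with hypothetical names; open sets are modelled naively by membership predicates, and none of the smoothness or footprint conditions are checked) records the tuple of Definition~\ref{def:chart} and the restriction operation of Definition~\ref{def:restr}.
{\beta}gin{verbatim}
from dataclasses import dataclass
from typing import Callable

@dataclass
class Chart:              # schematic stand-in for K = (U, E, s, psi)
    dim_U: int            # dimension of the domain U
    in_U: Callable        # membership test for the open domain U
    dim_E: int            # dimension of the obstruction space E
    s: Callable           # section s : U -> E
    psi: Callable         # footprint map psi : s^{-1}(0) -> X

    def dim(self):        # dim K = dim U - dim E
        return self.dim_U - self.dim_E

def restrict(K: Chart, in_Uprime: Callable) -> Chart:
    # Restriction K|_{U'}: only the domain shrinks; E, s, psi are reused.
    return Chart(K.dim_U,
                 lambda x: K.in_U(x) and in_Uprime(x),
                 K.dim_E, K.s, K.psi)
\end{verbatim}
The footprint requirement $U'\cap s^{-1}(0)={\partial}si^{-1}(F')$ of Definition~\ref{def:restr} is of course a constraint on the choice of the smaller domain which such a sketch does not verify.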
{\beta}gin{figure}[htbp] \centering \includegraphics[width=5in]{virtfig2.pdf} \caption{A coordinate change in which $\dim U_{J} =\dim U_I+ 1$. Both $U_{IJ}$ and its image ${\partial}hi(U_{IJ})$ are shaded.} {\lambda}bel{fig:2} \end{figure} {\beta}gin{defn}{\lambda}bel{def:change} Let ${\bf K}_I$ and ${\bf K}_J$ be Kuranishi charts such that $F_I\cap F_J$ is nonempty. A {\bf coordinate change} from ${\bf K}_I$ to ${\bf K}_J$ is a map $\widehat\Phi: {\bf K}_I|_{U_{IJ}}\to {\bf K}_J$, which satisfies the {\bf index condition} in (i),(ii) below, and whose domain is a restriction of ${\bf K}_I$ to $F_I\cap F_J$. That is, the {\bf domain} of the coordinate change is an open subset $U_{IJ}\subset U_I$ such that ${\partial}si_I(s_I^{-1}(0)\cap U_{IJ}) = F_I\cap F_J$. {\beta}gin{enumerate} \item The embedding ${\partial}hi:U_{IJ}\to U_J$ underlying the map $\widehat\Phi$ identifies the kernels, $$ {\rm d}_u{\partial}hi \bigl(\ker{\rm d}_u s_I \bigr) = \ker{\rm d}_{{\partial}hi(u)} s_J \qquad \forall u\in U_{IJ}; $$ \item the linear embedding $\widehat{\partial}hi:E_I\to E_J$ given by the map $\widehat\Phi$ identifies the cokernels, $$ \forall u\in U_{IJ} : \qquad E_I = {\rm im\,}{\rm d}_u s_I \oplus C_{u,I} \quad \Longrightarrow \quad E_J = {\rm im\,} {\rm d}_{{\partial}hi(u)} s_J \oplus \widehat{\partial}hi(C_{u,I}). $$ \end{enumerate} \end{defn} Note that coordinate changes are in general {\it unidirectional} since the maps $U_{IJ}\to U_J$ and $E_{IJ}=E_I\to E_J$ are not assumed to have open images. Note also that the footprint of the intermediate chart ${\bf K}_I|_{U_{IJ}}$ is always the full intersection $F_I\cap F_J$. By abuse of notation, we often denote a coordinate change by $\widehat\Phi: {\bf K}_I\to {\bf K}_J$, without specific mention of its domain $U_{IJ}$, even though it is really just a map from the restriction ${\bf K}_I|_{U_{IJ}}$ of the chart ${\bf K}_I$ to $F_I\cap F_J$. This should cause no confusion with Definition~\ref{def:map} since the symbol $\widehat\Phi:{\bf K}_I\to{\bf K}_J$ can represent a map only in case $F_I\subset F_J$, in which case a map is the special case of a coordinate change with domain $U_{IJ}=U_I$. So in either case, the symbol $\widehat\Phi: {\bf K}_I\to {\bf K}_J$ means the choice of a domain $U_{IJ}\subset U_I$ and a map $\widehat\Phi:{\bf K}_I|_{U_{IJ}}\to{\bf K}_J$ from the restriction of ${\bf K}_I$ with domain $U_{IJ}$. The following lemma shows that the index condition is in fact equivalent to a tangent bundle condition which was first introduced, in a weaker version, by \cite{FO}, and formalized in the present version by \cite{J}. We have chosen to present it as an index condition, since that is closer to the basic motivating question of how to associate canonical (equivalence classes of) Kuranishi atlases to moduli spaces described in terms of nonlinear Fredholm operators; see Remark~\ref{rmk:smart}~(iv). 
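In concrete terms, at a fixed point $u\in U_{IJ}$ the index condition is a pair of linear algebra conditions on the four linearizations ${\rm d}_u{\partial}hi$, $\widehat{\partial}hi$, ${\rm d}_u s_I$, ${\rm d}_{{\partial}hi(u)} s_J$. The following sketch (illustrative only, with hypothetical names; it assumes these linearizations are handed over as matrices) tests the condition by dimension counts, using for the cokernel part the reformulation \eqref{inftame} established in Lemma~\ref{le:change} below.
{\beta}gin{verbatim}
import numpy as np

def index_condition_holds(dphi, hatphi, dsI, dsJ, tol=1e-9):
    # dphi   : d_u phi        (dim U_J x dim U_I)
    # hatphi : hat phi        (dim E_J x dim E_I)
    # dsI    : d_u s_I        (dim E_I x dim U_I)
    # dsJ    : d_{phi(u)} s_J (dim E_J x dim U_J)
    # compatibility of the sections, differentiated:
    assert np.allclose(dsJ @ dphi, hatphi @ dsI)
    rk = lambda A: np.linalg.matrix_rank(A, tol)
    # dphi(ker ds_I) lies in ker ds_J automatically, so the kernels are
    # identified exactly when their dimensions agree:
    kernels = dsI.shape[1] - rk(dsI) == dsJ.shape[1] - rk(dsJ)
    # E_J = im ds_J + im hatphi :
    spans = rk(np.hstack([dsJ, hatphi])) == dsJ.shape[0]
    # im ds_J  meet  im hatphi  =  hatphi(im ds_I), tested via dimensions:
    meet = rk(dsJ) + rk(hatphi) - rk(np.hstack([dsJ, hatphi]))
    cokernels = meet == rk(hatphi @ dsI)
    return kernels and spans and cokernels

# Toy check: s_I(x) = x, s_J(x,y) = (x,y), phi(x) = (x,0), hatphi(e) = (e,0):
dphi = hatphi = np.array([[1.], [0.]])
print(index_condition_holds(dphi, hatphi, np.array([[1.]]), np.eye(2)))   # True
# With s_J(x,y) = (x, y^2) instead, ds_J at phi(0) = (0,0) degenerates and
# the test fails:
print(index_condition_holds(dphi, hatphi, np.array([[1.]]),
                            np.array([[1., 0.], [0., 0.]])))              # False
\end{verbatim}
Note that in the failing example the two charts have the same dimension, so the dimension count alone does not detect the failure.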
{\beta}gin{lemma} {\lambda}bel{le:change} The index condition is equivalent to the {\bf tangent bundle condition}, which requires isomorphisms for all $v={\partial}hi(u)\in{\partial}hi(U_{IJ})$, {\beta}gin{equation}{\lambda}bel{tbc} {\rm d}_v s_J : \;\quotient{{\rm T}_v U_J}{{\rm d}_u{\partial}hi ({\rm T}_u U_I)} \;\stackrel{\cong}\longrightarrow \; \quotient{E_J}{\widehat{\partial}hi(E_I)}, \end{equation} or equivalently at all (suppressed) base points as above {\beta}gin{equation}{\lambda}bel{inftame} E_J={\rm im\,}{\rm d} s_J + {\rm im\,}\widehat{\partial}hi_{IJ} \qquad\text{and}\qquad {\rm im\,}{\rm d} s_J \cap {\rm im\,}\widehat{\partial}hi_{IJ} = \widehat{\partial}hi_{IJ}({\rm im\,}{\rm d} s_I). \end{equation} Moreover, the index condition implies that ${\partial}hi(U_{IJ})$ is an open subset of $s_J^{-1}(\widehat{\partial}hi(E_I))$, and that the charts ${\bf K}_I, {\bf K}_J$ have the same dimension. \end{lemma}
{\beta}gin{proof} We will suppress most base points in the notation. To see that the tangent bundle condition \eqref{tbc} implies the index condition, first note that the compatibility with sections, $\widehat{\partial}hi\circ{\rm d} s_I = {\rm d} s_J \circ {\rm d}{\partial}hi$, implies $$ \widehat{\partial}hi({\rm im\,}{\rm d} s_I)\subset{\rm im\,}{\rm d} s_J , \qquad {\rm d}{\partial}hi \bigl(\ker{\rm d} s_I \bigr) \subset \ker{\rm d} s_J . $$ Since ${\partial}hi$ and $\widehat{\partial}hi$ are embeddings, this implies dimension differences $d,d'\geq 0$ in $$ \dim{\rm im\,}{\rm d} s_I + d = \dim{\rm im\,}{\rm d} s_J, \qquad \dim\ker{\rm d} s_I + d' = \dim\ker{\rm d} s_J . $$ The fact that \eqref{tbc} is an isomorphism implies that the Kuranishi charts have equal dimensions, and hence {\beta}gin{align*} \dim E_J - \dim E_I &= \dim U_J - \dim U_I \\ &= \dim \ker{\rm d} s_J + \dim{\rm im\,}{\rm d} s_J - \dim \ker{\rm d} s_I - \dim{\rm im\,}{\rm d} s_I \\ &= d+d' . \end{align*} Moreover, if we pick a representative space $C_I$ for the cokernel, i.e.\ $E_I = {\rm im\,}{\rm d} s_I \oplus C_{I}$, then the surjectivity of ${\rm d} s_J$ in \eqref{tbc} gives {\beta}gin{equation}{\lambda}bel{EJ} E_J = {\rm im\,} {\rm d} s_J + \widehat{\partial}hi(E_I) = {\rm im\,} {\rm d} s_J + \bigl( \widehat{\partial}hi(C_I) \oplus \widehat{\partial}hi({\rm im\,}{\rm d} s_I) \bigr) = {\rm im\,} {\rm d} s_J + \widehat{\partial}hi(C_I) , \end{equation} where $$ \dim \widehat{\partial}hi(C_I) = \dim E_I - \dim{\rm im\,}{\rm d} s_I = \dim E_J - \dim{\rm im\,}{\rm d} s_J - d' . $$ Thus the sum \eqref{EJ} must be direct and $d'=0$, which implies the identification of cokernels and kernels.
Conversely, to see that the index condition implies the tangent bundle condition, let again $C_I\subset E_I$ be a complement of ${\rm im\,}{\rm d} s_I$. Then compatibility of the sections $s_J \circ{\partial}hi = \widehat{\partial}hi \circ s_I$ implies $$ \widehat{\partial}hi(E_I) = \widehat{\partial}hi ({\rm im\,}{\rm d} s_I) \oplus \widehat{\partial}hi(C_I) = {\rm d} s_J({\rm im\,}{\rm d}{\partial}hi) \oplus \widehat{\partial}hi(C_I) . $$ Moreover, let $N_u\subset {\rm T} U_J$ be a complement of ${\rm im\,} {\rm d}_u {\partial}hi$; then the identification of cokernels takes the form $$ E_J = {\rm d} s_J(N_u) \oplus {\rm d} s_J({\rm im\,}{\rm d}{\partial}hi) \oplus \widehat{\partial}hi(C_I) = {\rm d} s_J(N_u) \oplus \widehat{\partial}hi(E_I). $$ This shows that \eqref{tbc} is surjective, and for injectivity it remains to check injectivity of ${\rm d} s_J |_{N_u}$.
The latter holds since the identification of kernels implies $\ker{\rm d} s_J \subset {\rm im\,}{\rm d}{\partial}hi$. To check the equivalence of \eqref{inftame} and \eqref{tbc} note that the first condition in \eqref{inftame} is the surjectivity of \eqref{tbc}, while the injectivity is equivalent to ${\rm im\,}{\rm d} s_J \cap {\rm im\,}\widehat{\partial}hi_{IJ} \subset {\rm d} s_J({\rm im\,}{\rm d}{\partial}hi_{IJ})$. The latter equals $\widehat{\partial}hi_{IJ}({\rm im\,}{\rm d} s_I)$ by the compatibility $s_J\circ{\partial}hi_{IJ}=\widehat{\partial}hi_{IJ}\circ s_I$ of sections. So \eqref{inftame} implies \eqref{tbc}, and for the converse it remains to check that \eqref{tbc} implies equality of the above inclusion. This follows from a dimension count as in \eqref{EJ} above. Finally, to see that ${\partial}hi(U_{IJ})$ is an open subset of $s_J^{-1}(\widehat{\partial}hi(E_I))$, we may choose the complements $N_u$ above such that ${\mathbb N}n:=\bigcup_{u\in U_{IJ}} N_u\subset {\rm T}U_J|_{{\partial}hi(U_{IJ})}$ is a normal bundle to ${\partial}hi(U_{IJ})$. Then ${\partial}hi(U_{IJ})$ has a neighbourhood that is diffeomorphic to a neighbourhood of the zero section in ${\mathbb N}n$, and we may pull back $s_J$ to a smooth map ${\mathbb N}n\to E_J/\widehat{\partial}hi(E_I)$. It satisfies the assumptions of the implicit function theorem on the zero section by \eqref{tbc}, and hence for a sufficiently small open neighbourhood ${\mathcal U}\subset{\mathbb N}n$ of ${\partial}hi(U_{IJ})$ we have $s_J^{-1}(\widehat{\partial}hi(E_I))\cap{\mathcal U} ={\partial}hi(U_{IJ})$. \end{proof} The next lemmas provide restrictions and compositions of coordinate changes. {\beta}gin{lemma} {\lambda}bel{le:restrchange} Let $\widehat\Phi:{\bf K}_I|_{U_{IJ}}\to {\bf K}_J$ be a coordinate change from ${\bf K}_I$ to ${\bf K}_J$, and let ${\bf K}'_I={\bf K}_I|_{U'_I}$, ${\bf K}'_J={\bf K}_J|_{U'_J}$ be restrictions of the Kuranishi charts to open subsets $F'_I\subset F_I, F'_J\subset F_J$ with $F_I'\cap F_J'\ne \emptyset$. Then a {\bf restricted coordinate change} from ${\bf K}'_I$ to ${\bf K}'_J$ is given by $$ \widehat\Phi|_{U'_{IJ}} := \bigl( \, {\partial}hi|_{U'_{IJ}} \,, \,\widehat{\partial}hi \, \bigr) \; : \; {\bf K}'_I|_{U'_{IJ}} \to {\bf K}'_J $$ for any choice of open subset $U'_{IJ}\subset U_{IJ}$ of the domain such that $$ U'_{IJ} \subset U'_I\cap {\partial}hi^{-1}(U'_J) , \qquad {\partial}si_I(s_I^{-1}(0)\cap U'_{IJ}) = F'_I \cap F'_J . $$ \end{lemma} {\beta}gin{proof} First note that restricted domains $U'_{IJ}\subset U_{IJ}$ always exist since we can choose e.g.\ $U'_{IJ} = U'_I \cap{\partial}hi^{-1}(U'_J)$, which is open in $U_{IJ}$ by the continuity of ${\partial}hi$ and has the required footprint since $$ {\partial}si_I\bigl(s_I^{-1}(0) \cap U'_I \bigr) \cap {\partial}si_I\bigl(s_I^{-1}(0) \cap {\partial}hi^{-1}(U'_J) \bigr) \;=\; F'_I\cap {\partial}si_J\bigl(s_J^{-1}(0) \cap U'_J \bigr) \;=\; F'_I\cap F'_J . $$ Next, ${\bf K}'_I|_{U'_{IJ}}={\bf K}_I|_{U'_{IJ}}$ is a restriction of ${\bf K}'_I$ to $F'_I\cap F'_J$ since it has the required footprint $$ {\partial}si'_I({s_I'}\,\!^{-1}(0)\cap U'_{IJ}) \;=\; {\partial}si_I\bigl(s_I^{-1}(0) \cap U'_{IJ} \bigr) \;=\; F'_I\cap F'_J . 
$$ Finally, $\widehat\Phi' := ({\partial}hi|_{U'_{IJ}} , \widehat{\partial}hi )$ is a map since it satisfies the conditions of Definition~\ref{def:map}, {\beta}gin{enumerate} \item ${\partial}hi' |_{{s'_I}\,\!^{-1}(0)} = {\partial}hi|_{s_I^{-1}(0)\cap U'_{IJ}} = {\partial}si_J^{-1} \circ {\partial}si_I |_{s_I^{-1}(0)\cap U'_{IJ}} = {{\partial}si'_J}\,\!^{-1} \circ {\partial}si'_I $ ; \item $s'_J \circ {\partial}hi' = s_J|_{U'_J} \circ {\partial}hi|_{U'_{IJ}} = \widehat{\partial}hi \circ s_{IJ}|_{U'_{IJ}} = \widehat{\partial}hi' \circ s'_{IJ}$. \end{enumerate} This completes the proof since the index condition is preserved under restriction. \end{proof} {\beta}gin{lemma} {\lambda}bel{le:cccomp} Let ${\bf K}_I,{\bf K}_J,{\bf K}_K$ be Kuranishi charts such that $ F_I\cap F_K \subset F_J$, and let $\widehat\Phi_{IJ}: {\bf K}_I\to {\bf K}_J$ and $\widehat\Phi_{JK}: {\bf K}_J\to {\bf K}_K$ be coordinate changes. (That is, we are given restrictions ${\bf K}_I|_{U_{IJ}}$ to $F_I\cap F_J$ and ${\bf K}_J|_{U_{JK}}$ to $F_J\cap F_K$ and maps $\widehat\Phi_{IJ}: {\bf K}_I|_{U_{IJ}}\to {\bf K}_J$, $\widehat\Phi_{JK}: {\bf K}_J|_{U_{JK}}\to {\bf K}_K$ satisfying the index condition.) Then the following holds. {\beta}gin{itemize} \item The domain $U_{IJK}:={\partial}hi_{IJ}^{-1}(U_{JK}) \subset U_I$ defines a restriction ${\bf K}_I|_{U_{IJK}}$ to $F_I \cap F_K$. \item The compositions ${\partial}hi_{IJK}:={\partial}hi_{JK}\circ{\partial}hi_{IJ}: U_{IJK}\to U_K$ and $\widehat{\partial}hi_{JK}\circ\widehat{\partial}hi_{IJ}: E_I\to E_K$ define a map $\widehat\Phi_{IJK}:{\bf K}_I|_{U_{IJK}}\to{\bf K}_K$ in the sense of Definition~\ref{def:map}. \item $({\partial}hi_{IJK},\widehat{\partial}hi_{IJK})$ satisfy the index condition, so define a coordinate change. \end{itemize} We denote the induced {\bf composite coordinate change} $\widehat\Phi_{IJK}=({\partial}hi_{IJK},\widehat{\partial}hi_{IJK})$ by $$ \widehat\Phi_{JK}\circ \widehat\Phi_{IJ} :=\widehat\Phi_{IJK} : \; {\bf K}_I|_{U_{IJK}} \; \to\; {\bf K}_K. $$ \end{lemma} {\beta}gin{proof} In order to check that $\bigl(U_{IJK},E_I,s_I|_{U_{IJK}}, {\partial}si_I|_{s_I^{-1}(0)\cap U_{IJK}}\bigr)$ is the required restriction, we need to verify that it has footprint $F_I\cap F_K$. Indeed, ${\partial}si_I\bigl(s_I^{-1}(0)\cap U_{IJK}\bigr) =F_I\cap F_K$ holds since we may decompose ${\partial}si_I={\partial}si_J\circ{\partial}hi_{IJ}$ on $s_I^{-1}(0)\cap U_{IJ}$ with $U_{IJK}\subset U_{IJ}$, and then combine the identities $$ {\partial}hi_{IJ}( s_I^{-1}(0) \cap U_{IJK} ) ={\partial}hi_{IJ}( s_{IJ}^{-1}(0) ) \cap U_{JK} \subset s_J^{-1}(0), $$ $$ {\partial}si_J\bigl( {\partial}hi_{IJ}( s_{IJ}^{-1}(0) ) \bigr) = {\partial}si_I( s_{IJ}^{-1}(0) ) = F_I\cap F_J , $$ $$ {\partial}si_J( s_{J}^{-1}(0)\cap U_{JK}) = F_J\cap F_K . $$ Finally, our assumption $F_I\cap F_K\subset F_J$ ensures that $ (F_I\cap F_J)\cap ( F_J\cap F_K) = F_I\cap F_K $. This proves (i). To prove (ii) we check the conditions of Definition~\ref{def:map}, noting that injectivity and homomorphisms are preserved under composition. {\beta}gin{itemize} \item[-] On $U_{IJK}\cap s_{I}^{-1}(0)$ we have $$ {\partial}hi_{IJK} = {\partial}hi_{JK} \circ {\partial}hi_{IJ} = \bigl( {\partial}si_K^{-1}\circ{\partial}si_{J} \bigr) \circ \bigl({\partial}si_{J}^{-1} \circ {\partial}si_{I}\bigr) = {\partial}si_K^{-1}\circ {\partial}si_{I} . 
$$ \item[-] The sections are intertwined, $$ s_K \circ {\partial}hi_{IJK} = s_K \circ {\partial}hi_{JK} \circ {\partial}hi_{IJ} = \widehat{\partial}hi_{JK} \circ s_J \circ {\partial}hi_{IJ} = \widehat{\partial}hi_{JK}\circ \widehat{\partial}hi_{IJ} \circ s_{I} = \widehat{\partial}hi_{IJK} \circ s_{IK} . $$ \end{itemize} Next, the index condition is preserved under composition by the following. {\beta}gin{itemize} \item[-] The kernel identifications ${\rm d}{\partial}hi_{IJ} \bigl(\ker{\rm d} s_I \bigr) = \ker{\rm d} s_J $ and ${\rm d}{\partial}hi_{JK} \bigl(\ker{\rm d} s_J \bigr) = \ker{\rm d} s_K$ imply $$ {\rm d}\bigl({\partial}hi_{JK} \circ {\partial}hi_{IJ} \bigr) \bigl(\ker{\rm d} s_I \bigr) = \bigl({\rm d}{\partial}hi_{JK} \circ {\rm d}{\partial}hi_{IJ} \bigr) \bigl(\ker{\rm d} s_I \bigr) = \ker{\rm d} s_K . $$ \item[-] Assuming $E_I = {\rm im\,}{\rm d} s_I \oplus C_{I}$, the cokernel identification of $\Phi_{IJ}$ implies $ E_J = {\rm im\,} {\rm d} s_J \oplus C_J$ with $C_J=\widehat{\partial}hi(C_I)$, so that the cokernel identification of $\Phi_{JK}$ implies $$ E_K = {\rm im\,} {\rm d} s_K \oplus \widehat{\partial}hi_K(C_J) = {\rm im\,} {\rm d} s_K \oplus (\widehat{\partial}hi_K\circ\widehat{\partial}hi_J)(C_I). $$ \end{itemize} \end{proof} Finally, we introduce two notions of equivalence between coordinate changes that may not have the same domain, and show that they are compatible with composition. {\beta}gin{defn} {\lambda}bel{def:overlap} Let $\widehat\Phi^{\alpha} :{\bf K}_I|_{U^{\alpha}_{IJ}}\to {\bf K}_J$ and $\widehat\Phi^{\beta}:{\bf K}_I|_{U^{\beta}_{IJ}}\to {\bf K}_J$ be coordinate changes. {\beta}gin{itemlist} \item We say the coordinate changes are {\bf equal on the overlap} and write $\widehat\Phi^{\alpha}\approx\widehat\Phi^{\beta}$, if the restrictions of Lemma~\ref{le:restrchange} to $U'_{IJ}:=U^{\alpha}_{IJ}\cap U^{\beta}_{IJ}$ yield equal maps $$ \widehat\Phi^{\alpha}|_{U'_{IJ}} = \widehat\Phi^{\beta}|_{U'_{IJ}} . $$ \item We say that $\widehat\Phi^{\beta}$ {\bf extends} $\widehat\Phi^{\alpha}$ and write $\widehat\Phi^{\alpha}\subset\widehat\Phi^{\beta}$, if $U_{IJ}^{\alpha}\subset U_{IJ}^{\beta}$ and the restriction of Lemma~\ref{le:restrchange} yields equal maps $$ \widehat\Phi^{\beta}|_{U_{IJ}^{\alpha}} = \widehat\Phi^{\alpha} . $$ \end{itemlist} \end{defn} {\beta}gin{lemma} {\lambda}bel{le:cccompoverlap} Let ${\bf K}_I,{\bf K}_J,{\bf K}_K$ be Kuranishi charts such that $F_I\cap F_K\subset F_J$, and suppose $\widehat\Phi^{\alpha}_{IJ}\approx \widehat\Phi^{\beta}_{IJ}:{\bf K}_I\to {\bf K}_J$ and $\widehat\Phi^{\alpha}_{JK}\approx\widehat\Phi^{\beta}_{JK}: {\bf K}_J\to {\bf K}_K$ are coordinate changes that are equal on the overlap. Then their compositions as defined in Lemma~\ref{le:cccomp} are equal on the overlap, $\widehat\Phi^{\alpha}_{JK}\circ \widehat\Phi^{\alpha}_{IJ}\approx \widehat\Phi^{\beta}_{JK}\circ \widehat\Phi^{\beta}_{IJ}$. Moreover, if $\widehat\Phi^{\alpha}_{IJ}\subset \widehat\Phi^{\beta}_{IJ}$ and $\widehat\Phi^{\alpha}_{JK}\subset\widehat\Phi^{\beta}_{JK}$ are extensions, then $\widehat\Phi^{\alpha}_{JK}\circ \widehat\Phi^{\alpha}_{IJ}\subset \widehat\Phi^{\beta}_{JK}\circ \widehat\Phi^{\beta}_{IJ}$ is an extension as well. 
\end{lemma}
{\beta}gin{proof} The overlap of the domains of ${\partial}hi^{\alpha}_{IJK}={\partial}hi^{\alpha}_{JK}\circ{\partial}hi^{\alpha}_{IJ}$ and ${\partial}hi^{\beta}_{IJK}={\partial}hi^{\beta}_{JK}\circ{\partial}hi^{\beta}_{IJ}$ is {\beta}gin{align*} U^{\alpha}_{IJK} \cap U^{\beta}_{IJK} \;=\; ({\partial}hi_{IJ}^{\alpha})^{-1}(U^{\alpha}_{JK}) \cap ({\partial}hi_{IJ}^{\beta})^{-1}(U^{\beta}_{JK}) \;\subset\; U^{\alpha}_{IJ}\cap U^{\beta}_{IJ}. \end{align*} By assumption, the injective maps ${\partial}hi^{\alpha}_{IJ}$ and ${\partial}hi^{\beta}_{IJ}$ agree on all points in $U^{\alpha}_{IJ}\cap U^{\beta}_{IJ}$, and hence also map $U^{\alpha}_{IJK} \cap U^{\beta}_{IJK}$ to $U^{\alpha}_{JK} \cap U^{\beta}_{JK}$. The first claim follows because on this latter domain we have ${\partial}hi^{\alpha}_{JK}={\partial}hi^{\beta}_{JK}$. To prove the second claim it remains to note that $$ U^{\alpha}_{IJK} = ({\partial}hi_{IJ}^{\alpha})^{-1}(U^{\alpha}_{JK}) = ({\partial}hi_{IJ}^{\beta})^{-1}(U^{\alpha}_{JK}) \subset ({\partial}hi_{IJ}^{\beta})^{-1}(U^{\beta}_{JK}) = U^{\beta}_{IJK} . $$ \end{proof}
{\rm sect}ion{Kuranishi atlases with trivial isotropy}{\lambda}bel{s:Ks} With the preliminaries of Section~\ref{s:chart} in hand, we can now define a notion of Kuranishi atlas with trivial isotropy on a compact metrizable space $X$, which will be fixed throughout this section. As before, we work exclusively in the case of trivial isotropy and hence drop this qualifier from the wording. We first define the notions of Kuranishi atlas ${\mathcal K}$ and Kuranishi neighbourhood $|{\mathcal K}|$, showing in Examples~\ref{ex:Haus} and~\ref{ex:Knbhd} that $|{\mathcal K}|$ need not be Hausdorff and that the maps from the domains of the charts into $|{\mathcal K}|$ need not be injective. Further, as in Example~\ref{ex:Khomeo}, $|{\mathcal K}|$ need not be metrizable or locally compact. Moreover, in practice one can construct only weak Kuranishi atlases in the sense of Definition~\ref{def:Kwk}, although they do often have the additivity property of Definition~\ref{def:Ku2}. The main result of this section is Theorem~\ref{thm:K}, which states that given a weak additive Kuranishi atlas one can construct a Kuranishi atlas ${\mathcal K}$, whose neighbourhood $|{\mathcal K}|$ is Hausdorff and has the injectivity property, and which moreover is well defined up to a natural notion of cobordism. This theorem is proved in Sections~\ref{ss:tame},~\ref{ss:shrink} and \ref{ss:Kcobord}.
\subsection{Covering families and transition data}{\lambda}bel{ss:Ksdef} \hspace{1mm}\\ We begin by introducing the notion of a Kuranishi atlas. There are various ways that one might try to define a ``Kuranishi structure'', but in practice every such structure on a compact moduli space of holomorphic curves is constructed from a covering family of basic charts with certain compatibility conditions akin to our notion of Kuranishi atlas. We express the compatibility in terms of a further collection of charts for overlaps, and will discuss three different versions of a cocycle condition. We compare our definition with others in Remark~\ref{rmk:otherK}. The basic building blocks of our notion of Kuranishi atlases are the following. {\beta}gin{defn}{\lambda}bel{def:Kfamily} Let $X$ be a compact metrizable space. {\beta}gin{itemlist} \item A {\bf covering family of basic charts} for $X$ is a finite collection $({\bf K}_i)_{i=1,\ldots,N}$ of Kuranishi charts for $X$ whose footprints cover $X=\bigcup_{i=1}^N F_i$.
\item {\bf Transition data} for a covering family $({\bf K}_i)_{i=1,\ldots,N}$ is a collection of Kuranishi charts $({\bf K}_J)_{J\in{\mathcal I}_{\mathcal K},|J|\ge 2}$ and coordinate changes $(\widehat\Phi_{I J})_{I,J\in{\mathcal I}_{\mathcal K}, I\subsetneq J}$ as follows: {\beta}gin{enumerate} \item ${\mathcal I}_{\mathcal K}$ denotes the set of subsets $I\subset\{1,\ldots,N\}$ for which the intersection of footprints is nonempty, $$ F_I:= \; {\textstyle \bigcap_{i\in I}} F_i \;\neq \; \emptyset \;; $$ \item ${\bf K}_J$ is a Kuranishi chart for $X$ with footprint $F_J=\bigcap_{i\in J}F_i$ for each $J\in{\mathcal I}_{\mathcal K}$ with $|J|\ge 2$, and for one element sets $J=\{i\}$ we denote ${\bf K}_{\{i\}}:={\bf K}_i$; \item $\widehat\Phi_{I J}$ is a coordinate change ${\bf K}_{I} \to {\bf K}_{J}$ for every $I,J\in{\mathcal I}_{\mathcal K}$ with $I\subsetneq J$. \end{enumerate} \end{itemlist} \end{defn} The transition data for a covering family automatically satisfies a cocycle condition on the zero sets, where due to the footprint maps to $X$ we have for $I\subset J \subset K$ $$ {\partial}hi_{J K}\circ {\partial}hi_{I J} = {\partial}si_K^{-1}\circ{\partial}si_J\circ{\partial}si_J^{-1}\circ{\partial}si_I = {\partial}si_K^{-1}\circ{\partial}si_I = {\partial}hi_{I K} \qquad \text{on}\; s_I^{-1}(0)\cap U_{IK} . $$ Since there is no natural ambient topological space into which the entire domains of the Kuranishi charts map, the cocycle condition on the complement of the zero sets has to be added as axiom. For the linear embeddings between obstruction spaces, we will always impose $\widehat{\partial}hi_{J K}\circ \widehat{\partial}hi_{I J} = \widehat{\partial}hi_{I K}$. However for the embeddings between domains of the charts there are three natural notions of cocycle condition with varying requirements on the domains of the coordinate changes. {\beta}gin{defn} {\lambda}bel{def:cocycle} Let ${\mathcal K}=({\bf K}_I,\widehat\Phi_{I J})_{I,J\in{\mathcal I}_{\mathcal K}, I\subsetneq J}$ be a tuple of basic charts and transition data. Then for any $I,J,K\in{\mathcal I}_K$ with $I\subsetneq J \subsetneq K$ we define the composed coordinate change $\widehat\Phi_{J K}\circ \widehat\Phi_{I J} : {\bf K}_{I} \to {\bf K}_{K}$ as in Lemma~\ref{le:cccomp} with domain ${\partial}hi_{IJ}^{-1}(U_{JK})\subset U_I$. 
We then use the notions of Definition~\ref{def:overlap} to say that the triple of coordinate changes $\widehat\Phi_{I J}, \widehat\Phi_{J K}, \widehat\Phi_{I K}$ satisfies the {\beta}gin{itemlist} \item {\bf weak cocycle condition} if $\widehat\Phi_{J K}\circ \widehat\Phi_{I J} \approx \widehat\Phi_{I K}$, i.e.\ the coordinate changes are equal on the overlap, in particular if {\beta}gin{equation*} \qquad {\partial}hi_{J K}\circ {\partial}hi_{I J} = {\partial}hi_{I K} \qquad \text{on}\;\; {\partial}hi_{IJ}^{-1}(U_{JK}) \cap U_{IK} ; \end{equation*} \item {\bf cocycle condition} if $\widehat\Phi_{J K}\circ \widehat\Phi_{I J} \subset \widehat\Phi_{I K}$, i.e.\ $\widehat\Phi_{I K}$ extends the composed coordinate change, in particular if {\beta}gin{equation}{\lambda}bel{eq:cocycle} \qquad {\partial}hi_{J K}\circ {\partial}hi_{I J} = {\partial}hi_{I K} \qquad \text{on}\;\; {\partial}hi_{IJ}^{-1}(U_{JK}) \subset U_{IK} ; \end{equation} \item {\bf strong cocycle condition} if $\widehat\Phi_{J K}\circ \widehat\Phi_{I J} = \widehat\Phi_{I K}$ are equal as coordinate changes, in particular if {\beta}gin{equation}{\lambda}bel{strong cocycle} \qquad {\partial}hi_{J K}\circ {\partial}hi_{I J} = {\partial}hi_{I K} \qquad \text{on}\; \; {\partial}hi_{IJ}^{-1}(U_{JK}) = U_{IK} . \end{equation} \end{itemlist} \end{defn} The relevance of the these versions is that the weak cocycle condition can be achieved in practice by constructions of finite dimensional reductions for holomorphic curve moduli spaces, whereas the strong cocycle condition is needed for our construction of a virtual moduli cycle in Section~\ref{s:VMC} from perturbations of the sections in the Kuranishi charts. The cocycle condition is an intermediate notion which is too strong to be constructed in practice and too weak to induce a VMC, but it does allow us to formulate Kuranishi atlases categorically. This in turn gives rise, via a topological realization of a category, to a virtual neighbourhood of $X$ into which all Kuranishi domains map. In the following we use the intermediate cocycle condition to develop these concepts. {\beta}gin{defn}{\lambda}bel{def:Ku} A {\bf Kuranishi atlas of dimension $\mathbf d$} on a compact metrizable space $X$ is a tuple $$ {\mathcal K}=\bigl({\bf K}_I,\widehat\Phi_{I J}\bigr)_{I, J\in{\mathcal I}_{\mathcal K}, I\subsetneq J} $$ consisting of a covering family of basic charts $({\bf K}_i)_{i=1,\ldots,N}$ of dimension $d$ and transition data $({\bf K}_J)_{|J|\ge 2}$, $(\widehat\Phi_{I J})_{I\subsetneq J}$ for $({\bf K}_i)$ as in Definition~\ref{def:Kfamily}, that satisfy the {\it cocycle condition} $\widehat\Phi_{J K}\circ \widehat\Phi_{I J} \subset \widehat\Phi_{I K}$ for every triple $I,J,K\in{\mathcal I}_K$ with $I\subsetneq J \subsetneq K$. \end{defn} {\beta}gin{rmk}{\lambda}bel{rmk:Ku}\rm We have assumed from the beginning that $X$ is compact and metrizable. Some version of compactness is essential in order for $X$ to define a VMC, but one might hope to weaken the metrizability assumption. However, for our charts to model open subsets of $X$ we require the footprint maps ${\partial}si_i: s_i^{-1}(0)\to X$ to be homeomorphisms onto open subsets $F_i \subset X$. Hence any space $X$ with a Kuranishi atlas is Hausdorff, and by compactness has a finite covering by the footprints $F_i$. Since each of these are regular and second countable, it follows that $X$ must also be regular and second countable, and hence metrizable. For details of these arguments, see Proposition~\ref{prop:Ktopl1}~(iv). 
\end{rmk} It is useful to think of the domains and obstruction spaces of a Kuranishi atlas as forming the following categories. {\beta}gin{defn}{\lambda}bel{def:catKu} Given a Kuranishi atlas ${\mathcal K}$ we define its {\bf domain category} ${\bf B}_{\mathcal K}$ to consist of the space of objects\footnote{ When forming categories such as ${\bf B}_{\mathcal K}$, we take always the space of objects to be the disjoint union of the domains $U_I$, even if we happen to have defined the sets $U_I$ as subsets of some larger space such as ${\mathbb R}^2$ or a space of maps as in the Gromov--Witten case. Similarly, the morphism space is a disjoint union of the $U_{IJ}$ even though $U_{IJ}\subset U_I$ for all $J\supset I$.} $$ {\rm Obj}_{{\bf B}_{\mathcal K}}:= \bigcup_{I\in {\mathcal I}_{\mathcal K}} U_I \ = \ \bigl\{ (I,x) \,\big|\, I\in{\mathcal I}_{\mathcal K}, x\in U_I \bigr\} $$ and the space of morphisms $$ {\rm Mor}_{{\bf B}_{\mathcal K}}:= \bigcup_{I,J\in {\mathcal I}_{\mathcal K}, I\subset J} U_{IJ} \ = \ \bigl\{ (I,J,x) \,\big|\, I,J\in{\mathcal I}_{\mathcal K}, I\subset J, x\in U_{IJ} \bigr\}. $$ Here we denote $U_{II}:= U_I$ for $I=J$, and for $I\subsetneq J$ use the domain $U_{IJ}\subset U_I$ of the restriction ${\bf K}_I|_{U_{IJ}}$ to $F_J$ that is part of the coordinate change $\widehat\Phi_{IJ} : {\bf K}_I|_{U_{IJ}}\to {\bf K}_J$. Source and target of these morphisms are given by $$ (I,J,x)\in{\rm Mor}_{{\bf B}_{\mathcal K}}\bigl((I,x),(J,{\partial}hi_{IJ}(x))\bigr), $$ where ${\partial}hi_{IJ}: U_{IJ}\to U_J$ is the embedding given by $\widehat\Phi_{I J}$, and we denote ${\partial}hi_{II}:={\rm id}_{U_I}$. Composition is defined by $$ \bigl(I,J,x\bigr)\circ \bigl(J,K,y\bigr) := \bigl(I,K,x\bigr) $$ for any $I\subset J \subset K$ and $x\in U_{IJ}, y\in U_{JK}$ such that ${\partial}hi_{IJ}(x)=y$. The {\bf obstruction category} ${\bf E}_{\mathcal K}$ is defined in complete analogy to ${\bf B}_{\mathcal K}$ to consist of the spaces of objects ${\rm Obj}_{{\bf E}_{\mathcal K}}:=\bigcup_{I\in{\mathcal I}_{\mathcal K}} U_I\tildemes E_I$ and morphisms $$ {\rm Mor}_{{\bf E}_{\mathcal K}}: = \bigl\{ (I,J,x,e) \,\big|\, I,J\in{\mathcal I}_{\mathcal K}, I\subset J, x\in U_{IJ}, e\in E_I \bigr\}. $$ \end{defn} We also express the further parts of a Kuranishi atlas in categorical terms: {\beta}gin{itemlist} \item The obstruction category ${\bf E}_{\mathcal K}$ is a bundle over ${\bf B}_{\mathcal K}$ in the sense that there is a functor ${\partial}r_{\mathcal K}:{\bf E}_{\mathcal K}\to{\bf B}_{\mathcal K}$ that is given on objects and morphisms by projection $(I,x,e)\mapsto (I,x)$ and $(I,J,x,e)\mapsto(I,J,x)$ with locally trivial fiber $E_I$. \item The sections $s_I$ induce a smooth section of this bundle, i.e.\ a functor $s_{\mathcal K}:{\bf B}_{\mathcal K}\to {\bf E}_{\mathcal K}$ which acts smoothly on the spaces of objects and morphisms, and whose composite with the projection ${\partial}r_{\mathcal K}: {\bf E}_{\mathcal K} \to {\bf B}_{\mathcal K}$ is the identity. More precisely, it is given by $(I,x)\mapsto (I,x,s_I(x))$ on objects and by $(I,J,x)\mapsto (I,J,x,s_I(x))$ on morphisms. \item The zero sets of the sections $\bigcup_{I\in{\mathcal I}_{\mathcal K}} \{I\}\tildemes s_I^{-1}(0)\subset{\rm Obj}_{{\bf B}_{\mathcal K}}$ form a very special strictly full subcategory $s_{\mathcal K}^{-1}(0)$ of ${\bf B}_{\mathcal K}$. 
Namely, ${\bf B}_{\mathcal K}$ splits into the subcategory $s_{\mathcal K}^{-1}(0)$ and its complement (given by the full subcategory with objects $\{ (I,x) \,|\, s_I(x)\ne 0 \}$) in the sense that there are no morphisms of ${\bf B}_{\mathcal K}$ between the underlying sets of objects. (This is since given any morphism $(I,J,x)$ we have $s_I(x)=0$ if and only if $s_J({\partial}hi_{IJ}(x))=\widehat{\partial}hi_{IJ}(s_I(x))=0$.) \item The footprint maps ${\partial}si_I$ give rise to a surjective functor ${\partial}si_{\mathcal K}: s_{\mathcal K}^{-1}(0) \to {\bf X}$ to the category ${\bf X}$ with object space $X$ and trivial morphism spaces. It is given by $(I,x)\mapsto {\partial}si_I(x)$ on objects and by $(I,J,x)\mapsto {\rm id}_{{\partial}si_I(x)}$ on morphisms. \end{itemlist} {\beta}gin{lemma}{\lambda}bel{le:Kcat} The categories ${\bf B}_{{\mathcal K}}$ and ${\bf E}_{{\mathcal K}}$ are well defined. \end{lemma} {\beta}gin{proof} We must check that the composition of morphisms in ${\bf B}_{{\mathcal K}}$ is well defined and associative. To see this, note that the composition $\bigl(I,J,x\bigr)\circ \bigl(J,K,y\bigr)$ only needs to be defined for $x={\partial}hi_{IJ}^{-1}(y)\in{\partial}hi_{IJ}^{-1}(U_{JK})$, i.e.\ for $x\in U_{IJK}$ in the domain of the composed coordinate change $\Phi_{JK}\circ\Phi_{IJ}$, which by axiom (d) is contained in the domain of $\Phi_{IK}$, and hence $\bigl(I,K,x\bigr)$ is a well defined morphism. With this said, identity morphisms are given by $\bigl(I,I,x\bigr)$ for all $x\in U_{II}=U_I$, and the composition is associative since for any $I\subset J \subset K\subset L$, and $x\in U_{IJ}, y\in U_{JK}, z\in U_{KL}$ the three morphisms $\bigl(I,J,x\bigr), \bigl(J,K,y\bigr),\bigl(K,L,z\bigr)$ are composable iff $y={\partial}hi_{IJ}(x)$ and $z={\partial}hi_{JK}(y)$. In that case we have $$ \bigl(I,J,x\bigr)\circ \Bigl(\bigl(J,K,y\bigr) \circ \bigl(K,L,z)\bigr) \Bigr) \\ \;=\; \bigl(I,J,x\bigr)\circ \bigl(J,L,{\partial}hi_{IJ}(x)\bigr) \;=\; \bigl(I,L,x\bigr) $$ and $z={\partial}hi_{JK}({\partial}hi_{IJ}(x))={\partial}hi_{IK}(x)$, hence $$ \Bigl( \bigl(I,J,x\bigr)\circ \bigl(J,K,y\bigr) \Bigr) \circ \bigl(K,L,z\bigr) \;=\; \bigl(I,K,x\bigr)\circ \bigl(K,L,{\partial}hi_{IK}(x)\bigr) \;=\; \bigl(I,L,x\bigr) $$ \end{proof} {\beta}gin{remark}{\lambda}bel{rmk:Kgroupoid}\rm Because ${\mathcal K}$ has trivial isotropy, all sets of morphisms in ${\bf B}_{\mathcal K}$ between fixed objects consist of at most one element. However, ${\bf B}_{\mathcal K}$ cannot be completed to an \'etale groupoid\footnote { The notion of \'etale groupoid is reviewed in Remark~\ref{rmk:grp}.} since the inclusion of inverses of the coordinate changes, and their compositions, may yield singular spaces of morphisms. Indeed, coordinate changes ${\bf K}_I\to{\bf K}_K$ and ${\bf K}_J\to{\bf K}_K$ with the same target chart are given by embeddings ${\partial}hi_{IK}:U_{IK}\to U_K$ and ${\partial}hi_{JK}:U_{JK}\to U_K$, whose images may not intersect transversely (for example, often their intersection is contained only in the zero set $s_K^{-1}(0)$), yet this intersection would be a component of the space of morphisms from $U_{I}$ to $U_{J}$. Moreover, since the map ${\partial}hi_{IJ}:U_{IJ}\to U_J$ underlying a coordinate change could well have target of higher dimension than its domain, the target map $t:{\rm Mor}_{{\bf B}_{\mathcal K}}\to {\rm Obj}_{{\bf B}_{\mathcal K}}$ is not in general a local diffeomorphism. 
However, the spaces of objects and morphisms in ${\bf B}_{\mathcal K}$ are smooth manifolds, and all structure maps are smooth embeddings. Thus ${\bf B}_{\mathcal K}$ is in some ways similar to the \'etale groupoids considered in \cite{Mbr}. \end{remark} The categorical formulation of a Kuranishi atlas ${\mathcal K}$ allows us to construct a topological space $|{\mathcal K}|$ which contains a homeomorphic copy ${\iota}_{{\mathcal K}}(X)\subset |{\mathcal K}|$ of $X$ and hence may be viewed as a virtual neighbourhood of $X$. {\beta}gin{defn} {\lambda}bel{def:Knbhd} Let ${\mathcal K}$ be a Kuranishi atlas for the compact space $X$. Then the {\bf Kuranishi neighbourhood} or {\bf virtual neighbourhood} of $X$, $$ |{\mathcal K}| := {\rm Obj}_{{\bf B}_{\mathcal K}}/{\scriptstyle{\sigma}m} $$ is the topological realization\footnote { As is usual in the theory of \'etale groupoids we take the realization of the category ${\bf B}_{\mathcal K}$ to be a quotient of its space of objects rather than the classifying space of the category ${\bf B}_{\mathcal K}$ (which is also sometimes called the topological realization), cf.\ \cite{ALR}.} of the category ${\bf B}_{\mathcal K}$, that is the quotient of the object space ${\rm Obj}_{{\bf B}_{\mathcal K}}$ by the equivalence relation generated by $$ {\rm Mor}_{{\bf B}_{\mathcal K}}\bigl((I,x),(J,y)\bigr) \ne \emptyset \quad \Longrightarrow \quad (I,x) {\sigma}m (J,y) . $$ We denote by ${\partial}i_{\mathcal K}:{\rm Obj}_{{\bf B}_{\mathcal K}}\to |{\mathcal K}|$ the natural projection $(I,x)\mapsto [I,x]$, where $[I,x]\in|{\mathcal K}|$ denotes the equivalence class containing $(I,x)$. We moreover equip $|{\mathcal K}|$ with the quotient topology, in which ${\partial}i_{\mathcal K}$ is continuous. Similarly, we define $$ |{\bf E}_{\mathcal K}|:={\rm Obj}_{{\bf E}_{\mathcal K}} /{\scriptstyle{\sigma}m} $$ to be the topological realization of the obstruction category ${\bf E}_{\mathcal K}$. The natural projection ${\rm Obj}_{{\bf E}_{\mathcal K}}\to |{\bf E}_{\mathcal K}|$ is still denoted ${\partial}i_{\mathcal K}$. \end{defn} {\beta}gin{lemma} {\lambda}bel{le:realization} The functor ${\rm pr}_{\mathcal K}:{\bf E}_{\mathcal K}\to{\bf B}_{\mathcal K}$ induces a continuous map $$ |{\rm pr}_{\mathcal K}|:|{\bf E}_{\mathcal K}| \to |{\mathcal K}|, $$ which we call the {\bf obstruction bundle} of ${\mathcal K}$, although its fibers generally do not have the structure of a vector space.\footnote { Proposition~\ref{prop:linear} shows that the fibers do have a natural linear structure when ${\mathcal K}$ satisfies a natural additivity condition on its obstruction spaces as well as taming conditions on its domains. Both conditions are necessary, see Example~\ref{ex:nonlin} and Remark~\ref{rmk:LIN}. } However, it has a continuous zero section $$ |0_{\mathcal K}| : \; |{\mathcal K}| \to |{\bf E}_{\mathcal K}| , \quad [I,x] \mapsto [I,x,0] . $$ Moreover, the section $s_{\mathcal K}:{\bf B}_{\mathcal K}\to {\bf E}_{\mathcal K}$ descends to a continuous section $$ |s_{\mathcal K}|:|{\mathcal K}|\to |{\bf E}_{\mathcal K}| . $$ These maps are sections in the sense that $|{\partial}r_{\mathcal K}|\circ|s_{\mathcal K}| = |{\partial}r_{\mathcal K}|\circ |0_{\mathcal K}|= {\rm id}_{|{\mathcal K}|}$. 
Moreover, there is a natural homeomorphism from the realization of the subcategory $s_{\mathcal K}^{-1}(0)$ to the zero set of the section, with the relative topology induced from $|{\mathcal K}|$, $$ \bigr| s_{\mathcal K}^{-1}(0)\bigr| \;=\; \quotient{s_{\mathcal K}^{-1}(0)}{{\sigma}m_{\scriptscriptstyle s_{\mathcal K}^{-1}(0)}} \;\overlineerset{\cong}{\longrightarrow}\; |s_{\mathcal K}|^{-1}(0) \,:=\; \bigl\{[I,x] \,\big|\, s_I(x)=0 \bigr\} \;\subset\; |{\mathcal K}| . $$ \end{lemma} {\beta}gin{proof} The zero section is induced by the functor $0_{\mathcal K}: {\mathcal K} \to {\bf E}_{\mathcal K}$ given by the map $(I,x) \mapsto (I,x,0)$ on objects and $(I,J,x) \mapsto (I,J,x,0)$ on morphisms. The existence and continuity of $|{\partial}r_{\mathcal K}|$, $|0_{\mathcal K}|$, and $|s_{\mathcal K}|$ then follows from continuity of the maps induced by ${\partial}r_{\mathcal K}$, $0_{\mathcal K}$, and $s_{\mathcal K}$ on the object space and the following general fact: Any functor $f:A\to B$, which is continuous on the object space, induces a continuous map between the realizations (where these are given the quotient topology of each category). Indeed, $|f|:|A|\to|B|$ is well defined since the functoriality of $f$ ensures $a{\sigma}m a' {\mathbb R}ightarrow f(a){\sigma}m f(a')$. Then by definition we have ${\partial}i_B\circ f = |f| \circ {\partial}i_A$ with the projections ${\partial}i_A:A\to |A|$ and ${\partial}i_B:B\to |B|$. To prove continuity of $|f|$ we need to check that for any open subset $U\subset |B|$ the preimage $|f|^{-1}(U)\subset|A|$ is open, i.e.\ by definition of the quotient topology, ${\partial}i_A^{-1}\bigl(|f|^{-1}(U)\bigr)\subset A$ is open. But ${\partial}i_A^{-1}\bigl(|f|^{-1}(U)\bigr) = f^{-1}\bigl({\partial}i_B^{-1}(U)\bigr)$, which is open by the continuity of ${\partial}i_B$ (by definition) and $f$ (by assumption). Next, recall that the equivalence relation ${\sigma}m$ on ${\rm Obj}_{{\bf B}_{\mathcal K}}$ that defines $|{\mathcal K}|$ is given by the embeddings ${\partial}hi_{IJ}$, their inverses, and compositions. Since these generators intertwine the zero sets $s_I^{-1}(0)$ and the footprint maps ${\partial}si_I:s_I^{-1}(0)\to F_I$, we have the useful observations {\beta}gin{align} {\lambda}bel{eq:useful1} {\partial}si_I(x)={\partial}si_J(y) \quad &\Longrightarrow \quad \; (I,x){\sigma}m (J,y) , \\ {\lambda}bel{eq:useful2} (I,x){\sigma}m (J,y) , \; s_I(x)=0 \quad &\Longrightarrow \quad \; s_J(y)=0, \; {\partial}si_I(x)={\partial}si_J(y). \end{align} In particular, \eqref{eq:useful2} implies that the equivalence relation ${\sigma}m$ on ${\rm Obj}_{{\bf B}_{\mathcal K}}$ that defines $|{\mathcal K}|=\qu{{\rm Obj}_{{\bf B}_{\mathcal K}}}{{\sigma}m}$ restricted to the objects $\{(I,x) \,|\, s_I(x)=0\}$ of the subcategory $s_{\mathcal K}^{-1}(0)$ coincides with the equivalence relation ${\sigma}m_{\scriptscriptstyle s_{\mathcal K}^{-1}(0)}$ generated by the morphisms of $s_{\mathcal K}^{-1}(0)$. Hence the map $\bigr| s_{\mathcal K}^{-1}(0)\bigr| \to |s_{\mathcal K}|^{-1}(0)$, $[(I,x)]_{\scriptscriptstyle s_{\mathcal K}^{-1}(0)} \mapsto [(I,x)]_{{\mathcal K}}$ is a bijection. It also is continuous because it is the realization of the functor $s_{\mathcal K}^{-1}(0)\to {\bf B}_{\mathcal K}$ given by the continuous embedding of the object space. To check that the inverse is continuous, consider an open subset $Z\subset \bigr| s_{\mathcal K}^{-1}(0)\bigr|$, that is with open preimage ${\partial}i_{\mathcal K}^{-1}(Z)\subset \{(I,x) \,|\, s_I(x)=0\}$. 
The latter is given the relative topology induced from ${\rm Obj}_{{\bf B}_{\mathcal K}}$, hence we have ${\partial}i_{\mathcal K}^{-1}(Z) = {\mathcal W}\cap \{(I,x) \,|\, s_I(x)=0\}$ for some open subset ${\mathcal W}\subset{\rm Obj}_{{\bf B}_{\mathcal K}}$. Now we need to check that ${\mathcal W}$ can be chosen so that ${\mathcal W}={\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}({\mathcal W}))$, i.e.\ ${\partial}i_{\mathcal K}({\mathcal W})\subset|{\mathcal K}|$ is open, and ${\partial}i_{\mathcal K}({\mathcal W})\cap |s_{\mathcal K}|^{-1}(0)=Z$. For that purpose note that each footprint ${\partial}si_I({\partial}i_{\mathcal K}^{-1}(Z)\cap U_I)\subset X$ is open since ${\partial}si_I$ is a homeomorphism from $s_I^{-1}(0)$ to an open subset of $X$ and $Z_I:= {\partial}i_{\mathcal K}^{-1}(Z)\cap U_I \subset s_I^{-1}(0)$ is open by assumption. Hence the finite union $\bigcup_{I\in{\mathcal I}_{\mathcal K}}{\partial}si_I(Z_I)\subset X$ is open, thus has a closed complement, so that each preimage $C_J := {\partial}si_J^{-1}\bigl( X{\smallsetminus} \bigcup_I {\partial}si_I(Z_I)\bigr)\subset U_J$ is also closed by the homeomorphism property of the footprint map ${\partial}si_I$. Moreover, by \eqref{eq:useful1} and \eqref{eq:useful2} the morphisms in ${\bf B}_{\mathcal K}$ on the zero sets are determined by the footprint functors, so that we have ${\partial}si_J^{-1}({\partial}si_I(Z_I))={\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}(Z_I))\cap U_J={\partial}i_{\mathcal K}^{-1}(Z \cap {\partial}i_{\mathcal K}(U_I))\cap U_J$ for each $I,J\in{\mathcal I}_{\mathcal K}$, and thus $C_J = s_J^{-1}(0) {\smallsetminus} {\partial}i_{\mathcal K}^{-1}(Z)$. With that we obtain an open set ${\mathcal W}: = {\textstyle \bigcup_{I\in{\mathcal I}_{\mathcal K}}} \bigl( U_I {\smallsetminus} C_I \bigr) \subset {\rm Obj}_{{\bf B}_{\mathcal K}}$ such that ${\partial}i_{\mathcal K}({\mathcal W})\subset|{\mathcal K}|$ is open since ${\mathcal W}$ is invariant under the equivalence relation by ${\partial}i_{\mathcal K}$, namely $$ {\partial}i_{\mathcal K}({\mathcal W}) \;=\; {\textstyle \bigcup_{I\in{\mathcal I}_{\mathcal K}}} {\partial}i_{\mathcal K}( U_I ) {\smallsetminus} \bigl( {\partial}i_{\mathcal K}(s_I^{-1}(0)) {\smallsetminus} Z\bigr) \;=\; |{\mathcal K}| {\smallsetminus} \bigl( |s_{\mathcal K}^{-1}(0)| {\smallsetminus} Z\bigr) $$ so that its preimage is ${\mathcal W}$ and hence open, by the identity $$ {\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}({\mathcal W})) \;=\; {\textstyle \bigcup_{I\in{\mathcal I}_{\mathcal K}}} U_I \cap {\partial}i_{\mathcal K}^{-1}\bigl( |{\mathcal K}| {\smallsetminus} ( |s_{\mathcal K}^{-1}(0)| {\smallsetminus} Z ) \bigr) \;=\; {\textstyle \bigcup_{I\in{\mathcal I}_{\mathcal K}}} U_I {\smallsetminus} \bigl( s_I^{-1}(0) {\smallsetminus} Z \bigr). $$ Finally, using the above, we check that ${\partial}i_{\mathcal K}({\mathcal W})$ has the required intersection {\beta}gin{align*} {\partial}i_{\mathcal K}({\mathcal W}) \cap |s_{\mathcal K}|^{-1}(0) \;=\; \bigl( |{\mathcal K}| {\smallsetminus} \bigl( |s_{\mathcal K}^{-1}(0)| {\smallsetminus} Z \bigr) \bigr) \cap |s_{\mathcal K}|^{-1}(0) \;=\; Z . \end{align*} This proves the homeomorphism between $\bigl|s_{\mathcal K}^{-1}(0)\bigr|$ and $ |s_{\mathcal K}|^{-1}(0)$. \end{proof} The next lemma shows that the zero set $\bigl|s_{\mathcal K}^{-1}(0)\bigr|\cong |s_{\mathcal K}|^{-1}(0)$ is also naturally homeomorphic to $X$, and hence $X$ embeds into the virtual neighbourhood $|{\mathcal K}|$. 
{\beta}gin{lemma} {\lambda}bel{le:Knbhd1} The footprint functor ${\partial}si_{\mathcal K}: s_{\mathcal K}^{-1}(0) \to {\bf X}$ descends to a homeomorphism $|{\partial}si_{\mathcal K}| : |s_{\mathcal K}|^{-1}(0) \to X$. Its inverse is given by $$ {\iota}_{{\mathcal K}}:= |{\partial}si_{\mathcal K}|^{-1} : \; X\;\longrightarrow\; |s_{\mathcal K}|^{-1}(0) \;\subset\; |{\mathcal K}|, \qquad p \;\mapsto\; [(I,{\partial}si_I^{-1}(p))] , $$ where $[(I,{\partial}si_I^{-1}(p))]$ is independent of $I\in{\mathcal I}_{\mathcal K}$ with $p\in F_I$. \end{lemma} {\beta}gin{proof} To begin, recall that ${\partial}si_{\mathcal K}$ is a surjective functor from $s_{\mathcal K}^{-1}(0)$ to ${\bf X}$ with objects $X$ (i.e.\ the footprints $F_I = {\partial}si_I (s_I^{-1}(0))$ cover $X$). Hence the argument of Lemma~\ref{le:realization} proves that $|{\partial}si_{\mathcal K}|$ is well defined, surjective, and continuous when $|{\partial}si_{\mathcal K}|$ is considered as a map from the quotient space $|s_{\mathcal K}^{-1}(0)|$ (with its quotient topology rather than the relative topology induced by $|{\mathcal K}|$) to $X$. The map $|{\partial}si_{\mathcal K}|={\iota}_{\mathcal K}^{-1}$ considered here is given by composing this realization of the functor ${\partial}si_{\mathcal K}$ with the natural homeomorphism $\bigl|s_{\mathcal K}^{-1}(0)\bigr|\overlineerset{\cong}{\to} |s_{\mathcal K}|^{-1}(0)$ from Lemma~\ref{le:realization}. So it remains to check continuity of ${\iota}_{{\mathcal K}}$ with respect to the subspace topology on $|s_{\mathcal K}|^{-1}(0)\subset|{\mathcal K}|$. For that purpose we need to consider an open subset $V\subset|{\mathcal K}|$, that is ${\partial}i_{\mathcal K}^{-1}(V)\subset {\rm Obj}_{{\bf B}_{\mathcal K}}$ is open. Since ${\rm Obj}_{{\bf B}_{\mathcal K}}$ is a disjoint union that means ${\partial}i_{\mathcal K}^{-1}(V)=\bigcup_{I\in{\mathcal I}_{\mathcal K}} \{I\}\tildemes W_I$ is a union of open subsets $W_I\subset U_I$. So in the relative topology $(W_I\cap s_I^{-1}(0))\subset s_I^{-1}(0)$ is open, as is its image under the homeomorphism ${\partial}si_I: s_I^{-1}(0) \to F_I \subset X$. Therefore $$ {\iota}_{{\mathcal K}}^{-1}(V) \;=\; |{\partial}si_{{\mathcal K}}|(V) \;=\;{\partial}si_{\mathcal K}\left(s_{\mathcal K}^{-1}(0)\cap {\textstyle \bigcup_{I\in{\mathcal I}_{\mathcal K}}} \{I\}\tildemes W_I\right) \,=\; {\textstyle \bigcup_{I\in{\mathcal I}_{\mathcal K}}} {\partial}si_I(W_I\cap s_I^{-1}(0)) $$ is open in $X$ since it is a union of open subsets. This completes the proof. \end{proof} Note that the injectivity of ${\iota}_{{\mathcal K}}:X\to|{\mathcal K}|$ could be seen directly from the injectivity property \eqref{eq:useful2} of the equivalence relation ${\sigma}m$ on $s_{\mathcal K}^{-1}(0)\subset{\rm Obj}_{{\bf B}_{\mathcal K}}$. In particular, this property implies injectivity of the projection of the zero sets in fixed charts, ${\partial}i_{\mathcal K} :s_I^{-1}(0) \to |{\mathcal K}|$. This injectivity however only holds on the zero set. On $U_I{\smallsetminus} s_I^{-1}(0)$, the projections ${\partial}i_{\mathcal K}: U_I\to |{\mathcal K}|$ need not be injective, as the following example shows. {\beta}gin{figure}[htbp] \centering \includegraphics[width=4in]{virtfig3.pdf} \caption{ The lift ${\partial}i^{-1}(U_1\cap U_2)$ is shown as two light grey strips in ${\mathbb R}\tildemes {\mathbb R}$, intersecting the dark grey region $U_3$ in the two shaded sets $V_3^1, V_3^2$. The domains $U_1,U_2\subset S^1\tildemes{\mathbb R}$ lift injectively to the dashed sets. 
The points $x_3^1\neq x_3^2 \in U_3$ have the same image in $|{\mathcal K}|\subset S^1\times{\mathbb R}$. In Example~\ref{ex:nonlin} we will add another chart with domain $V\cup V^2_3$, where $V$ is barred. } \label{fig:3} \end{figure} \begin{example}[Failure of Injectivity]\label{ex:Knbhd}\rm The circle $X=S^1={\mathbb R}/{\mathbb Z}$ can be covered by a single ``global'' Kuranishi chart ${\bf K}_0$ of dimension $1$ with domain $U_0= S^1\times {\mathbb R}$, obstruction space $E_0 ={\mathbb R}$, section map $s_0 = {\rm pr}_{\mathbb R}$, and footprint map $\psi_0= {\rm pr}_{S^1}$. A slightly more complicated Kuranishi atlas (involving transition charts but still no cocycle conditions) can be obtained from the open cover $S^1=F_1\cup F_2\cup F_3$ with $F_i=(\frac i3,\frac{i+2}3) \subset {\mathbb R}/{\mathbb Z}$ such that all pairwise intersections $F_{ij}:=F_i\cap F_j \neq \emptyset$ are nonempty, but the triple intersection $F_1\cap F_2\cap F_3$ is empty. We obtain a covering family of basic charts $\bigl({\bf K}_i:={\bf K}_0|_{U_i}\bigr)_{i=1,2,3}$ with these footprints by restricting ${\bf K}_0$ to the open domains $U_i:=F_i\times (-1,1)\subset S^1\times {\mathbb R}$. Similarly, we obtain transition charts ${\bf K}_{ij}:={\bf K}_0|_{U_i\cap U_j}$ and coordinate changes $\widehat\Phi_{i, ij}:= \widehat\Phi_{0,0}|_{U_i\cap U_j}$ by restricting the identity map $\widehat\Phi_{0,0}=({\rm id}_{U_0}, {\rm id}_{E_0}):{\bf K}_0 \to {\bf K}_0$ to the overlap $U_{ij}:= U_i\cap U_j$. These are well defined for any pair $i,j\in\{1,2,3\}$ (and satisfy all cocycle conditions), but for a Kuranishi atlas it suffices to restrict to $i<j$. That is, the transition charts ${\bf K}_{12},{\bf K}_{13},{\bf K}_{23}$ and corresponding coordinate changes $\widehat\Phi_{1,12}, \widehat\Phi_{2,12}, \widehat\Phi_{1,13}, \widehat\Phi_{3,13}, \widehat\Phi_{2,23}, \widehat\Phi_{3,23}$ form transition data, for which the cocycle condition is vacuous. The realization of this Kuranishi atlas is $|{\mathcal K}|=U_1\cup U_2\cup U_3\subset S^1\times {\mathbb R}$, and the maps $U_i\to |{\mathcal K}|$ are injective. However, keeping the same basic charts ${\bf K}_1, {\bf K}_2$, and transition data for $i,j\in\{1,2\}$, we may choose ${\bf K}_3$ to have the same form as ${\bf K}_0$ but with domain $U_3\subset (0,2)\times {\mathbb R}$ such that the projection $\pi:{\mathbb R}\times {\mathbb R} \to S^1\times {\mathbb R}$ embeds $U_3\cap ({\mathbb R}\times\{0\})=(1,\frac 53)\times\{0\}$ onto $F_3\times\{0\}$. We can moreover choose $U_3$ so large that the inverse image of $U_1\cap U_2$ meets $U_3$ in two components $\pi^{-1}(U_1\cap U_2)\cap U_3 = V_3^1 \sqcup V_3^2$ with $\pi(V_3^1)=\pi(V_3^2)$, but there are continuous lifts $\pi^{-1}: U_i \cap \pi(U_3) \to U_3$ with $V^i_3\subset\pi^{-1}(U_i)$; cf.\ Figure~\ref{fig:3}. These intersections $V^i_3\subset U_3$ necessarily lie outside of the zero section $s_3^{-1}(0)=F_3\times\{0\}$, though their closure might intersect it. Then it remains to construct transition data from ${\bf K}_i$ for $i=1,2$ to ${\bf K}_3$. We choose the transition charts as restrictions ${\bf K}_{i3}:= {\bf K}_3|_{U_{i3}}$ of ${\bf K}_3$ to the domains $U_{i3}:= \pi^{-1}(U_i)\cap U_3$, with transition maps $\widehat\Phi_{3,i3}:=\widehat\Phi_{3,3}|_{U_{i3}}$.
Finally, we construct the transition maps $\widehat\Phi_{i,i3}:{\bf K}_i|_{U_{i,i3}}\to{\bf K}_3$ for $i=1,2$ by the identity $\widehat\phi_{i,i3}:={\rm id}_{E_i}$ on the identical obstruction spaces $E_i=E_3=E_0$ and the lift $\phi_{i,i3}:= \pi^{-1}$ on the domain $U_{i,i3}: = U_i\cap \pi(U_3)$. This again defines a Kuranishi atlas with vacuous cocycle condition, but the map $\pi_{\mathcal K}: U_3\to |{\mathcal K}|$ is not injective. Indeed, any point $x_3^1\in V_3^1\subset U_3$ is identified, $[x_3^1]=[x_3^2]\in|{\mathcal K}|$, with the corresponding point $x_3^2\in V_3^2$ satisfying $\pi(x_3^1)=\pi(x_3^2)= y \in S^1\times{\mathbb R}$. Indeed, denoting by $(ij, z)$ the point $z$ considered as an element of $U_{ij}$ (which is just a simplified version of the previous notation $(I,x)$ for a point $x\in U_I$), we have \begin{equation}\label{eq:equiv123} (3,x_3^1)\sim (13, x_3^1)\sim (1,y)\sim(12,y)\sim(2,y)\sim (23,x_3^2)\sim (3,x_3^2), \end{equation} where each equivalence is induced by the relevant coordinate change. Since there are such points $x_3^1$ arbitrarily close to the zero set $s_3^{-1}(0)=F_3\times\{0\}$, the projection $\pi_{\mathcal K}:U_3\to |{\mathcal K}|$ is not injective on any neighbourhood of the zero set $s_3^{-1}(0)$. \end{example} We next adapt the above example so that the fibers of the bundle $|{\rm pr}_{\mathcal K}|: |{\bf E}_{\mathcal K}|\to |{\mathcal K}|$ also fail to have a linear structure. (Remark~\ref{rmk:LIN} describes another scenario where linearity fails.) \begin{example} [Failure of Linearity] \label{ex:nonlin}\rm We can build on the construction of Example~\ref{ex:Knbhd} to obtain a Kuranishi atlas ${\mathcal K}'$ of dimension $0$ on $S^1$ with four basic charts, in which the fibers of $|{\rm pr}_{{\mathcal K}'}|$ are not even contractible. The first three basic charts and the associated transition charts are obtained by replacing the obstruction space ${\mathbb R}$ with ${\mathbb C}$. That is, for $I\in \{1,2,3,12,13,23\}$, we identify ${\mathbb R}\subset{\mathbb C}$ to define the charts $$ {\bf K}_I': = \bigl( \, U'_I:=U_I \,,\, E'_I:={\mathbb C} \,,\, s_I':= s_I \,,\, \psi_I':=\psi_I \bigr) , $$ where $(U_I,E_I={\mathbb R},s_I,\psi_I)$ are the charts of Example~\ref{ex:Knbhd} which yield a noninjective Kuranishi atlas. We also define coordinate changes by the same domains and nonlinear embeddings as before, that is we set $U_{i,ij}':=U_{i,ij}$ and $\phi_{i,ij}':=\phi_{i,ij}$ for $i<j \in \{1,2,3\}$. However, we now have a choice for the linear embeddings $\widehat\phi\,'_{i,ij}$ since compatibility with the sections only requires $\widehat\phi\,'_{i,ij}|_{{\mathbb R}}={\rm id}_{\mathbb R}$. We will take $\widehat\phi\,'_{i,ij} = {\rm id}_{{\mathbb C}}$ except for $\widehat\phi\,'_{2,23}(\alpha +\hat\iota \beta) :=\alpha + 2 \hat\iota \beta$ (where we denote the imaginary unit by $\hat\iota$ to prevent confusion with the index $i$). As above the cocycle condition is trivially satisfied since there are no triple intersections of footprints.
Then with $x_3^i\in U'_3$ as in Example~\ref{ex:Knbhd} the chain of equivalences \eqref{eq:equiv123} lifts to the obstruction space $E_3'={\mathbb C}$ as \begin{equation} \label{fiber2} (3, x_3^1,\alpha + \hat\iota \beta) \;\sim\; (3,x_3^2,\alpha + 2\hat\iota \beta) . \end{equation} In order to also obtain the equivalences \begin{equation} \label{fiber1} (3, x_3^1,\alpha + \hat\iota \beta) \;\sim\; (3,x_3^2,\alpha + \hat\iota \beta) \end{equation} we add another basic chart ${\bf K}_4'= {\bf K}_3'|_{U'_4}$ with domain indicated in Figure~\ref{fig:3}, $$ U'_4: = V \cup V_3^2 ,\qquad V: = \pi^{-1} \bigl(F_{13}\times {\mathbb R}\bigr) \cap U'_3. $$ This chart has footprint $F'_4=F_{13}$, so it requires no compatibility with ${\bf K}'_2$, and for $I\subset\{1,3,4\}$ we always have $F'_I = F_{13}$. We define the transition charts as restrictions $$ {\bf K}'_{14}: = {\bf K}'_1|_{\pi(U'_4)},\qquad {\bf K}'_{34} = {\bf K}'_3|_{U'_4} , \qquad {\bf K}'_{134}: = {\bf K}'_3|_V . $$ Then we obtain the coordinate changes $\widehat\Phi'_{IJ}$ for $I\subsetneq J \subset\{1,3,4\}$ by setting $\widehat\phi\,'_{IJ} := {\rm id}_{\mathbb C}$ and $\phi_{IJ}'$ equal to the identity, to $\pi$, or to the lift $\pi^{-1}$, as appropriate, on the domains \begin{align*} U'_{1,14} := \pi(U'_4), \qquad & U'_{4,14}= U'_{3,34}= U'_{4,34} := U'_4, \\ U'_{1,134} = U'_{14,134} := \pi(V), \qquad & U'_{3,134} = U'_{4,134} = U'_{13,134} := V . \end{align*} To see that the cocycle condition holds, note that we only need to check it for the triples $(i,34,134), i=3,4$, $(j,14,134), j=1,4$, and $(k, 13,134), k=1,3$, and in all of these cases both $\phi_{JK}'\circ\phi_{IJ}'$ and $\phi_{IK}'$ have equal domain, given by $V$ or $\pi(V)$. This provides another chain of morphisms between the same objects as in \eqref{eq:equiv123}, $$ (3,x_3^2)\sim (34,x_3^2)\sim (4,x_3^2) \sim (14,y) \sim (1,y)\sim (13,x_3^1)\sim (3,x_3^1), $$ whose lift to the obstruction space is \eqref{fiber1} since $\widehat\phi\,'_{IJ}= {\rm id}_{\mathbb C}$ for all coordinate changes involved. Therefore, the fiber of $|{\rm pr}_{{\mathcal K}'}|: |{\bf E}_{{\mathcal K}'}|\to |{\mathcal K}'|$ over $[3,x_3^1]=[3,x_3^2]$ is $$ |{\rm pr}_{{\mathcal K}'}|^{-1}([3,x_3^1]) \;=\;\; \quotient{{\mathbb C}}{\scriptstyle \bigl( \alpha + \hat\iota \beta \;\sim\; \alpha + 2\hat\iota \beta \bigr)} \quad \cong\; {\mathbb R} \times S^1 , $$ which does not have the structure of a vector space, and in fact is not even contractible. \end{example} Finally, we give a simple example where $|{\mathcal K}|$ is not Hausdorff in any neighbourhood of $\iota_{\mathcal K}(X)$ even though the map $s\times t:{\rm Mor}_{{\bf B}_{\mathcal K}}\to {\rm Obj}_{{\bf B}_{\mathcal K}}\times {\rm Obj}_{{\bf B}_{\mathcal K}}$ is proper; cf.\ Section~\ref{ss:top}. \begin{example}[Failure of Hausdorff property] \label{ex:Haus}\rm We construct a Kuranishi atlas for $X := {\mathbb R}$ by starting with a basic chart whose footprint $F_1={\mathbb R}$ already covers $X$, $$ {\bf K}_1 := \bigl(\, U_1={\mathbb R}^2 \,,\, E_1={\mathbb R} \,,\, s_1(x,y)=y \,,\, \psi_1(x,0)=x \,\bigr) .
$$ We then construct a second basic chart ${\bf K}_2:={\bf K}_1|_{U_2}$ with footprint $F_2=(0,\infty)\subset{\mathbb R}$ and the transition chart ${\bf K}_{12}:={\bf K}_1|_{U_{12}}$ as restrictions of ${\bf K}_1$ to the domains $$ U_2: = \{-y<x\le0\}\cup \{x>0\} , \qquad U_{12} := \{ x>0\} . $$ This induces coordinate changes $\widehat\Phi_{i,12}:=\widehat\Phi_{1,1}|_{U_{i,12}} : {\bf K}_i|_{U_{i,12}}\to{\bf K}_{12}$ for $i=1,2$ given by restriction of the trivial coordinate change $\bigl({\partial}hi_{1,1}={\rm id}_{{\mathbb R}^2} , \widehat{\partial}hi_{1,1}={\rm id}_{\mathbb R}\bigr)$ to $U_{i,12} := U_{12}$. This defines a Kuranishi atlas since there are no compositions of coordinate changes for which a cocycle condition needs to be checked. Moreover, $s\tildemes t$ is proper because on each of the finitely many connected components of ${\rm Mor}_{{\bf B}_{\mathcal K}}$ the target map $t$ restricts to a homeomorphism to a connected component of ${\rm Obj}_{{\bf B}_{\mathcal K}}$. (For example, $t:{\rm Mor}_{{\bf B}_{\mathcal K}} \supset U_{i,12} \to U_{12} \subset {\rm Obj}_{{\bf B}_{\mathcal K}}$ is the identity.) On the other hand the images in $|{\mathcal K}|$ of the points $(0,y)\in U_1$ and $(0,y)\in U_2$ for $y>0$ have no disjoint neighbourhoods since for every $x>0$ $$ \bigl( 1, (x,y)\bigr) {\sigma}m \bigl( 12, (x,y)\bigr) {\sigma}m \bigl( 2, (x,y)\bigr) . $$ Therefore ${\iota}_{\mathcal K}(X)$ does not have a Hausdorff neighbourhood in $|{\mathcal K}|$. \end{example} In Section~\ref{ss:shrink} below we will achieve both the injectivity and the Hausdorff property by a subtle shrinking of the domains of charts and coordinate changes. However, we are still unable to make the Kuranishi neighbourhood $|{\mathcal K}|$ locally compact or even metrizable, due to the following natural example. {\beta}gin{example} [Failure of metrizability and local compactness] {\lambda}bel{ex:Khomeo} \rm For simplicity we will give an example with noncompact $X = {\mathbb R}$. (A similar example can be constructed with $X = S^1$.) We construct a Kuranishi atlas ${\mathcal K}$ on $X$ by two basic charts, ${\bf K}_1 = (U_1={\mathbb R}, E_1=\{0\}, s=0,{\partial}si_1={\rm id})$ and $$ {\bf K}_2 = \bigl(U_2=(0,\infty)\tildemes {\mathbb R},\ E_2={\mathbb R}, \ s_2(x,y)= y,\ {\partial}si_2(x,y)= x\bigr), $$ one transition chart ${\bf K}_{12} = {\bf K}_2|_{U_{12}}$ with domain $U_{12} := U_2$, and the coordinate changes $\widehat\Phi_{i,12}$ induced by the natural embeddings of the domains $U_{1,12} := (0,\infty)\hookrightarrow (0,\infty)\tildemes\{0\}$ and $U_{2,12} := U_2\hookrightarrow U_2$. Then as a set $|{\mathcal K}| = \bigl(U_1\sqcup U_2\sqcup U_{12}\bigr)/{\sigma}m$ can be identified with $\bigl({\mathbb R}\tildemes\{0\}\bigr) \cup \bigl( (0,\infty)\tildemes{\mathbb R}\bigr) \subset {\mathbb R}^2$. However, the quotient topology at $(0,0)\in|{\mathcal K}|$ is strictly stronger than the subspace topology. That is, for any $O\subset{\mathbb R}^2$ open the induced subset $O\cap|{\mathcal K}|\subset|{\mathcal K}|$ is open, but some open subsets of $|{\mathcal K}|$ cannot be represented in this way. In fact, for any ${\varepsilon}>0$ and continuous function $f:(0,{\varepsilon})\to (0,\infty)$, the set $$ U_{f,{\varepsilon}} \, :=\; \bigl\{ [x] \,\big|\, x\in U_1, |x|< {\varepsilon} \} \;\cup\; \bigl\{ [(x,y)] \,\big|\, (x,y)\in U_2, |x|< {\varepsilon} , |y|<f(x)\} \;\subset\; |{\mathcal K}| $$ is open in the quotient topology. 
Moreover these sets form a basis for the neighbourhoods of $[(0,0)]$ in the quotient topology. To see this, let $V\subset |{\mathcal K}|$ be open in the quotient topology. Then, since $\pi_{\mathcal K}^{-1}(V)\cap U_1$ is a neighbourhood of $0$, there is ${\varepsilon}>0$ so that $\{ (x,0) \,|\, |x|<{\varepsilon} \} \subset V$. Further, define $f:\{x\in{\mathbb R} \,|\, 0<x<{\varepsilon} \} \to (0,\infty)$ by $f(x) := \sup \{{\delta} \,|\, B_{\delta}(x,0)\subset V\}$, where $B_{\delta}(x,0)$ is the open ball in ${\mathbb R}^2$ with radius ${\delta}$. Then $f(x)>0$ for all $0<x<{\varepsilon}$ because $\pi_{\mathcal K}^{-1}(V)\cap U_2$ is a neighbourhood of $(x,0)$. The triangle inequality implies that $f(x')\ge f(x) - |x'-x|$ for all $0<x,x'<{\varepsilon}$. Hence $|f(x)- f(x')|\le |x'-x|$, so that $f$ is continuous. Thus we have constructed a neighbourhood $U_{f,{\varepsilon}}\subset |{\mathcal K}|$ of $[(0,0)]$ of the above type with $U_{f,{\varepsilon}}\subset V$. We will use this to see that the point $[(0,0)]$ does not have a countable neighbourhood basis in the quotient topology. Indeed, suppose for contradiction that $(U_k)_{k\in{\mathbb N}}$ is such a basis. Then by the above we can iteratively find $1>{\varepsilon}_k>0$ and $f_k:(0,{\varepsilon}_k)\to(0,\infty)$ so that $U_{f_k,{\varepsilon}_k}\subset U_k\cap U_{f_{k-1},\frac 12 {\varepsilon}_{k-1}}$ (with $U_{f_0,\frac 12 {\varepsilon}_0}$ replaced by $|{\mathcal K}|$). In particular, the inclusion $U_{f_k,{\varepsilon}_k}\subset U_{f_{k-1},\frac 12 {\varepsilon}_{k-1}}$ implies ${\varepsilon}_k < {\varepsilon}_{k-1}$. Now there exists a continuous function $g:(0,1)\to (0,\infty)$ such that $g(\frac 12{\varepsilon}_k) < f_k(\frac 12 {\varepsilon}_k)$ for all $k\in{\mathbb N}$. Then the neighbourhood $U_{g,1}$ does not contain any of the $U_k$ because $U_{g,1}\supset U_k \supset U_{f_k,{\varepsilon}_k}$ would imply that $g(\frac 12{\varepsilon}_k) \geq f_k(\frac 12 {\varepsilon}_k)$. This contradicts the assumption that $(U_k)_{k\in{\mathbb N}}$ is a neighbourhood basis of $[(0,0)]$, hence there exists no countable neighbourhood basis. Note also that the point $[(0,0)]\in|{\mathcal K}|$ has no compact neighbourhood with respect to the subspace topology from ${\mathbb R}^2$, and hence neither with respect to the stronger quotient topology on $|{\mathcal K}|$. \end{example} \begin{rmk}\label{rmk:Khomeo}\rm For the Kuranishi atlas in Example~\ref{ex:Khomeo} there exists an exhausting sequence $\overline{{\mathcal A}^n}\subset \overline{{\mathcal A}^{n+1}}$ of closed subsets of $\bigcup_{I\in {\mathcal I}_{\mathcal K}} U_I$ with the following properties: \begin{itemize} \item each $\pi_{\mathcal K}(\overline{{\mathcal A}^n})$ contains $\iota_{\mathcal K}(X)$; \item each $\pi_{\mathcal K}(\overline{{\mathcal A}^n})\subset |{\mathcal K}|$ is metrizable and locally compact in the subspace topology; \item $\bigcup_{n\in{\mathbb N}} \overline{{\mathcal A}^n} = \bigcup_{I\in {\mathcal I}_{\mathcal K}} U_I$. \end{itemize} For example, we can take $\overline{{\mathcal A}^n}$ to be the disjoint union of the closed sets $$ \overline{A_1^n}= [-n,n]\subset U_1, \qquad \overline{A_{2}^n} : = \{(x,y)\in U_2 \,\big|\, x \geq \tfrac 1n, |y| \leq n\}, $$ and any closed subset $\overline{A_{12}^n} \subset \overline{A_2^n}$. However, in the limit $[(0,0)]$ becomes a ``bad point'' because its neighbourhoods have to involve open subsets of $U_2$.
In fact, if we altered Example~\ref{ex:Khomeo} to a Kuranishi atlas for the compact space $X=S^1$, then we could choose $\overline{{\mathcal A}^n}$ compact, so that the subspace and quotient topologies on ${\partial}i_{\mathcal K}(\overline{{\mathcal A}^n})$ coincide by Proposition~\ref{prop:Ktopl1}~(ii). We emphasize the subspace topology above because that is the one inherited by (open) subsets of $\overline{{\mathcal A}^n}$. For example, the quotient topology on ${\partial}i_{\mathcal K}({\mathcal A}^n)$, where ${\mathcal A}^n: = \bigcup_I {\rm int}(\overline{A_I^n})$ has the same bad properties at $[(\frac 1n,0)]$ as the quotient topology on $|{\mathcal K}|$ has at $[(0,0)]$, while the subspace topology on ${\partial}i_{\mathcal K}({\mathcal A}_n)$ is metrizable. We prove in Proposition~\ref{prop:Ktopl1} that a similar statement holds for all ${\mathcal K}$, though there we only consider a fixed set $\overline{\mathcal A}$ since we have no need for an exhaustion of the domains. \end{rmk} We end by comparing our choice of definition with the notions of Kuranishi structures in the current literature. {\beta}gin{rmk}{\lambda}bel{rmk:otherK}\rm (i) We defined the notion of a Kuranishi atlas so that it is relatively easy to construct from an equivariant Fredholm section. The only condition that is difficult to satisfy is the cocycle condition since that involves making compatible choices of all the domains $U_{IJ}$. However, we show in Theorem~\ref{thm:K} that, provided the obstruction bundles satisfy an additivity condition, one can always construct a Kuranishi atlas from a tuple of charts and coordinate changes that satisfy the weak cocycle condition in Definition~\ref{def:cocycle}, which is much easier to satisfy in practice. The additivity condition is also naturally satisfied by the sum constructions for finite dimensional reductions of holomorphic curve moduli spaces in e.g.\ \cite{FO} and Section~\ref{s:construct}. { } {\mathbb N}I (ii) A Kuranishi structure in the sense of \cite{FO,J} is given in terms of germs of charts at every point of $X$ and some set of coordinate changes. While this is a natural idea, we were not able to find a meaningful notion of compatible coordinate changes; see the discussion in Section~\ref{ss:alg}. Recently, there seems to be a general understanding in the field that explicit charts and coordinate changes are needed. { } {\mathbb N}I (iii) A Kuranishi structure in the sense of \cite[App.~A]{FOOO} consists of a Kuranishi chart ${\bf K}_p$ at every point $p\in X$ and coordinate changes ${\bf K}_q|_{U_{qp}}\to {\bf K}_p$ whenever $q\in F_p$, and requires the weak cocycle condition. The idea from \cite{FO} for constructing such a Kuranishi structure also starts with a finite covering family of basic charts $({\bf K}_i)$. Then the chart at $p$ is obtained by a sum construction from the charts ${\bf K}_i$ with $p\in F_i$. We outline in Section~\ref{s:construct} how the analytic aspects of this sum construction can be made rigorous in the case of genus zero Gromov--Witten moduli spaces. Moreover, the construction of the sum charts and coordinate changes needs to be essentially canonical in order to achieve even the weak cocycle condition. In the case of trivial isotropy, an abstract weak Kuranishi atlas in the sense of Definition~\ref{def:Ku2} induces a Kuranishi structure in the sense of \cite[App.~A]{FOOO} as follows. 
Given a covering family of basic charts $({\bf K}_i)_{i=1,\dots,N}$ with footprints $F_i$ and transition data $({\bf K}_I,\widehat\Phi_{IJ})$, choose a family of compact subsets $C_i\subset F_i$ that also cover $X$. Then for any $p\in X$ one obtains a Kuranishi chart ${\bf K}_p$ by choosing a restriction of ${\bf K}_{I_p}$ to $F_p:=\cap_{i\in I_p} F_i{\smallsetminus} \cup_{i\notin I_p} C_i$, where $I_p: = \{i \,|\, p\in C_i\}$. This construction guarantees that for $q\in F_p$ we have $I_q\subset I_p$ and thus can restrict the coordinate change $\widehat\Phi_{I_q I_p}$ to a coordinate change from ${\bf K}_q$ to ${\bf K}_p$. The weak cocycle condition is preserved by these choices. Note however that neither this notion of a Kuranishi structure nor a weak Kuranishi atlas is sufficient for our approach to the construction of a VMC. (We start from a weak Kuranishi atlas with extra additivity condition as in (i). This allows us to achieve the strong cocycle condition and a tameness property by a shrinking procedure. The latter are crucial to achieve compactness and Hausdorff properties of perturbed solution spaces.) Essentially, Fukaya et al use the same approach for constructing a Kuranishi structure. However, instead of formulating the notion of a (weak) atlas, they first work on the level of the infinite family of charts $({\bf K}_p)_{p\in X}$ and only later rebuild a finite covering by ``orbifold charts'' to form a ``good coordinate system". There is some justification for this approach when there is isotropy, since in this case the notion of a weak Kuranishi atlas, although very natural, involves some new ideas such as coverings involved in the coordinate changes, see~\cite{MW:ku2}. However, when there is no isotropy it seems cleaner to work directly with the charts in the finite covering family rather than passing to the uncountably many small charts ${\bf K}_p$. { } {\mathbb N}I (iv) Some recent work uses the notion of ``good coordinate system" from \cite{FO,FOOO,J} instead of a Kuranishi structure, which is introduced there as intermediate step in the construction of a VMC. The early versions of this notion had serious problems since the proof of existence (in \cite[Lemma 6.3]{FO}) is based on notions of germs and does not address the cocycle condition. Moreover, it required a totally ordered set of charts but does not clarify the relationship between order and overlaps. In its most recent version in~\cite{FOOO12} (and in the case when there is no isotropy), a good coordinate system requires a finite cover of $X$ by a partially ordered set of charts $({\bf K}_I)_{I\in{\mathcal P}}$ and coordinate changes ${\bf K}_I \to {\bf K}_J$ for $I\leq J$, where the order is compatible with the overlaps of the footprints in the sense that $F_I\cap F_J\ne \emptyset$ implies $I\leq J$ or $J\leq I$. Moreover, a complicated set of extra conditions must be satisfied to ensure that the resulting quotient space is well behaved. The arguments for the existence of a good coordinate system are also very complicated because they must deal with two problems at once: In the language used here, the resulting type of atlas is in particular required to be both tame and reduced. In our approach these questions are separated in order to clarify exactly what choices and constructions are needed. Thus we first establish tameness and then tackle the problem of reduction. 
In the case of trivial isotropy, after constructing a Kuranishi atlas ${\mathcal K}$ with additivity and the strong cocycle condition, we then refine the cover to obtain data with the most important properties of a ``good coordinate system" as in \cite{FOOO12}. More precisely, we construct in Proposition~\ref{prop:red} a Kuranishi atlas ${\mathcal K}^{\mathcal V}$, with basic charts ${\bf K}^{\mathcal V}_I$ for $I\in{\mathcal I}_{\mathcal K}$ given by restriction of the charts in ${\mathcal K}$ to precompact subsets of the domain, such that the overlaps of footprints are compatible with the partial ordering by the inclusion relation on ${\mathcal I}_{\mathcal K}$. We will show that the realization $|{\mathcal K}^{\mathcal V}|$ injects continuously into $|{\mathcal K}|$ and inherits the Hausdorff property from $|{\mathcal K}|$ as well as the homeomorphism property of the natural maps $U^{\mathcal V}_C\to |{\mathcal K}^{\mathcal V}|$ from the domain of each Kuranishi chart in ${\mathcal K}^{\mathcal V}$. Here the advantage of constructing ${\mathcal K}^{\mathcal V}$ via ${\mathcal K}$ is that ${\mathcal K}$ has fewer basic charts and coordinate changes, each with large domain, which makes it relatively easy to analyze the properties of its realization $|{\mathcal K}|$. On the other hand, ${\mathcal K}^{\mathcal V}$ has smaller domains but many more coordinate changes, which makes it hard to deduce properties such as Hausdorffness directly. However, good topological properties transfer from ${\mathcal K}$ to ${\mathcal K}^{\mathcal V}$ because its coordinate changes are given by restriction from ${\mathcal K}$, hence are not independent of each other. In fact, it turns out to be easier and perhaps more natural to deal with an associated subcategory ${\bf B}_{\mathcal K}|_{\mathcal V}$ of ${\bf B}_{\mathcal K}$, rather than with the Kuranishi atlas ${\mathcal K}^{\mathcal V}$ itself; cf.\ Definition~\ref{def:vicin}. \end{rmk} \subsection{Additivity, Tameness and the Hausdorff property} {\lambda}bel{ss:tame} \hspace{1mm}\\ We begin by introducing the notion of an additive weak Kuranishi atlas, which can be constructed in practice on compactified holomorphic curve moduli spaces, as outlined in Theorem~A and Section~\ref{s:construct}. In contrast, we then introduce tameness conditions for Kuranishi atlases that imply the Hausdorff property of the Kuranishi neighbourhood. Finally, we provide tools for refining Kuranishi atlases to achieve the tameness condition. {\beta}gin{defn}{\lambda}bel{def:Kwk} A {\bf weak Kuranishi atlas of dimension $\mathbf d$} is a covering family of basic charts of dimension $d$ with transition data ${\mathcal K}=\bigl({\bf K}_I,\widehat\Phi_{I J}\bigr)_{I, J\in{\mathcal I}_{\mathcal K}, I\subsetneq J}$ as in Definition \ref{def:Kfamily}, that satisfy the {\it weak cocycle condition} $\widehat\Phi_{J K}\circ \widehat\Phi_{I J} \approx \widehat\Phi_{I K}$ for every triple $I,J,K\in{\mathcal I}_K$ with $I\subsetneq J \subsetneq K$. \end{defn} This weaker notion of Kuranishi atlas is crucial for two reasons. Firstly, in the application to moduli spaces of holomorphic curves, it is not clear how to construct Kuranishi atlases that satisfy the cocycle condition. Secondly, it is hard to preserve the cocycle condition while manipulating Kuranishi atlases, for example by shrinking as we do below. 
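For quick reference, we juxtapose the two cocycle conditions that the constructions below mediate between. This display merely collects the condition of Definition~\ref{def:Kwk} (in the form in which it is used in the proof of Lemma~\ref{le:tame0} below) and the strong cocycle condition \eqref{strong cocycle} appearing in Lemma~\ref{le:tame0}; it imposes no additional requirement. For all $I\subsetneq J\subsetneq K$ in ${\mathcal I}_{\mathcal K}$ one asks that
\begin{align*}
\text{(weak)} \qquad & \phi_{JK}\circ \phi_{IJ} \;=\; \phi_{IK} \quad \text{on the overlap} \;\; U_{IJ}\cap \phi_{IJ}^{-1}(U_{JK})\cap U_{IK} , \\
\text{(strong)} \qquad & \phi_{JK}\circ \phi_{IJ} \;=\; \phi_{IK} \quad \text{with equality of domains} \;\; U_{IJ}\cap \phi_{IJ}^{-1}(U_{JK}) \;=\; U_{IK} ,
\end{align*}
together with the corresponding identities for the linear maps $\widehat\phi_{IJ}$.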
Note that if ${\mathcal K}$ is only a weak Kuranishi atlas then we cannot define its domain category ${\bf B}_{{\mathcal K}}$ precisely as in Definition~\ref{def:catKu} since the given set of morphisms is not closed under composition. We will deal with this by simply not considering this category unless ${\mathcal K}$ is a Kuranishi atlas, i.e.\ satisfies the standard cocycle condition \eqref{eq:cocycle}. On the other hand, the constructions of transition data in practice, e.g.\ in Section~\ref{s:construct}, use a sum construction for basic charts, which has the effect of adding the obstruction bundles, and thus yields the following additivity property. Here we simplify the notation by writing $\widehat\Phi_{i I}:= \widehat\Phi_{\{i\} I}$ for the coordinate change ${\bf K}_i ={\bf K}_{\{i\}} \to {\bf K}_I$ where $i\in I$. \begin{defn}\label{def:Ku2} Let ${\mathcal K}$ be a weak Kuranishi atlas. We say that ${\mathcal K}$ is {\bf additive} if for each $I\in {\mathcal I}_{\mathcal K}$ the linear embeddings $\widehat\phi_{i I}:E_i \to E_I$ induce an isomorphism $$ {\textstyle \prod_{i\in I}} \;\widehat\phi_{iI}: \; {\textstyle \prod_{i\in I}} \; E_i \;\stackrel{\cong}\longrightarrow \; E_I , \qquad\text{or equivalently} \qquad E_I = {\textstyle \bigoplus_{i\in I}} \; \widehat\phi_{iI}(E_i) . $$ In this case we abbreviate notation by $s_J^{-1}(E_I): = s_J^{-1}\bigl(\widehat\phi_{IJ}(E_I)\bigr)$ and $s_J^{-1}(E_\emptyset) := s_J^{-1}(0)$. \end{defn} The additivity property is useful since it extends the automatic control of transition maps on the zero sets $s_J^{-1}(0)$ to a weaker control on larger parts of the Kuranishi domains $U_J$ as follows. \begin{lemma} Let ${\mathcal K}$ be an additive weak Kuranishi atlas. Then for any $H,I,J \in {\mathcal I}_{\mathcal K}$ with $H,I\subset J$ we have \begin{align} \label{eq:addd} \widehat\phi_{IJ}(E_I) \; \cap \; \widehat\phi_{HJ}(E_H) &\;=\; \widehat\phi_{(I\cap H) J}(E_{I\cap H}) , \\ \label{eq:CIJ} s_J^{-1}(E_I) \;\cap\; s_J^{-1}(E_H) &\;=\; s_J^{-1}(E_{I\cap H}) . \end{align} In particular, we deduce \begin{equation}\label{eq:CIJ0} H\cup I\subset J, \; I\cap H=\emptyset \quad \Longrightarrow \quad s_J^{-1}(E_I)\cap s_J^{-1}(E_H) = s_J^{-1}(0) . \end{equation} \end{lemma} \begin{proof} Generally, for $H,I\subset J$ we have a direct sum ${\textstyle\bigoplus_{i\in I\cup H}}\,\widehat\phi_{iJ}(E_i) \subset E_J$ and hence \begin{align*} \widehat\phi_{IJ}(E_I)\cap \widehat\phi_{HJ}(E_H) &= \widehat\phi_{IJ}\Bigl({\textstyle\bigoplus_{i\in I}}\,\widehat\phi_{iI}(E_i)\Bigr)\cap \widehat\phi_{HJ}\Bigl({\textstyle\bigoplus_{i\in H}}\,\widehat\phi_{iH}(E_i)\Bigr) \\ &= \Bigl({\textstyle\bigoplus_{i\in I}}\,\widehat\phi_{iJ}(E_i)\Bigr)\cap \Bigl({\textstyle\bigoplus_{i\in H}}\,\widehat\phi_{iJ}(E_i)\Bigr) \\ &={\textstyle\bigoplus_{i\in I\cap H}}\,\widehat\phi_{iJ}(E_i) = \widehat\phi_{(I\cap H) J} \Bigl( {\textstyle\bigoplus_{i\in I\cap H}}\,\widehat\phi_{i (I\cap H)}(E_i) \Bigr) \\ & = \widehat\phi_{(I\cap H) J} (E_{I\cap H}) . \end{align*} This proves \eqref{eq:addd}. Applying $s_J^{-1}$ to both sides and recalling our abbreviations then implies \eqref{eq:CIJ}. If moreover $I\cap H = \emptyset$ then $E_{I\cap H}=E_\emptyset = \{0\}$, which implies \eqref{eq:CIJ0}.
\end{proof} Before stating the main theorem, we introduce a notion of metrics on Kuranishi atlases that will be useful in the construction of perturbations in Section~\ref{ss:const}. \begin{defn}\label{def:metric} A Kuranishi atlas ${\mathcal K}$ is said to be {\bf metrizable} if there is a bounded metric $d$ on the set $|{\mathcal K}|$ such that for each $I\in {\mathcal I}_{\mathcal K}$ the pullback metric $d_I:=(\pi_{\mathcal K}|_{U_I})^*d$ on $U_I$ induces the given topology on the manifold $U_I$. In this situation we call $d$ an {\bf admissible metric} on $|{\mathcal K}|$. A {\bf metric Kuranishi atlas} is a pair $({\mathcal K},d)$ consisting of a metrizable Kuranishi atlas together with a choice of admissible metric $d$. For a metric Kuranishi atlas, we denote the ${\delta}$-neighbourhoods of subsets $Q\subset |{\mathcal K}|$ resp.\ $A\subset U_I$ for ${\delta}>0$ by \begin{align*} B_{\delta}(Q) &\,:=\; \bigl\{w\in |{\mathcal K}|\ | \ \exists q\in Q : d(w,q)<{\delta} \bigr\}, \\ B^I_{\delta}(A) &\,:=\; \bigl\{x\in U_I\ | \ \exists a\in A : d_I(x,a)<{\delta} \bigr\}. \end{align*} \end{defn} We next show that if $d$ is an admissible metric on $|{\mathcal K}|$, then the metric topology on $|{\mathcal K}|$ is weaker (has fewer open sets) than the quotient topology, which generally is not metrizable by Example~\ref{ex:Khomeo}. \begin{lemma}\label{le:metric} Suppose that $d$ is an admissible metric on the virtual neighbourhood $|{\mathcal K}|$ of a Kuranishi atlas ${\mathcal K}$. Then the following hold. \begin{enumerate} \item[(i)] The identity ${\rm id}_{|{\mathcal K}|} :|{\mathcal K}| \to (|{\mathcal K}|,d)$ is continuous as a map from the quotient topology to the metric topology on $|{\mathcal K}|$. \item[(ii)] In particular, each set $B_{\delta}(Q)$ is open in the quotient topology on $|{\mathcal K}|$, so that the existence of an admissible metric implies that $|{\mathcal K}|$ is Hausdorff. \item[(iii)] The embeddings $\phi_{IJ}$ that are part of the coordinate changes for $I\subsetneq J\in{\mathcal I}_{\mathcal K}$ are isometries when considered as maps $(U_{IJ},d_I)\to (U_J,d_J)$. \end{enumerate} \end{lemma} \begin{proof} Since the neighbourhoods of the form $B_{\delta}(Q)$ define the metric topology, it suffices to prove that these are also open in the quotient topology, i.e.\ that each subset $U_I\cap \pi_{\mathcal K}^{-1}(B_{\delta}(Q))$ is open in $U_I$. So consider $x\in U_I$ with $\pi_{\mathcal K}(x)\in B_{\delta}(Q)$. By hypothesis there is $q\in Q$ and ${\varepsilon}>0$ such that $d(\pi_{\mathcal K}(x),q)<{\delta}-{\varepsilon}$, and compatibility of metrics and the triangle inequality then imply the inclusion $\pi_{\mathcal K}(B^I_{\varepsilon}(x))\subset B_{\delta}(Q)$. Thus $B^I_{\varepsilon}(x)$ is a neighbourhood of $x\in U_I$ contained in $U_I\cap \pi_{\mathcal K}^{-1}(B_{\delta}(Q))$. This proves the openness required for (i) and (ii). Since every metric space is Hausdorff, $|{\mathcal K}|$ is therefore Hausdorff in the quotient topology as stated in (ii). Claim (iii) is immediate from the construction. \end{proof} One might hope to achieve the Hausdorff property by constructing an admissible metric, but the existence of the latter is highly nontrivial.
Instead, in a refinement process that will take up the next two sections, we will first construct a Kuranishi atlas whose virtual neighbourhood has the Hausdorff property, then prove metrizability of certain subspaces, and finally obtain an admissible metric by pullback to a further refined Kuranishi atlas. This process will prove the following theorem whose formulation uses the notions of shrinking from Definition~\ref{def:shr}, tameness from Definition~\ref{def:tame}, and cobordism from Definition~\ref{def:Kcobord}. The formulation below is somewhat informal; more precise statements may be found in the results quoted in its proof. {\beta}gin{thm}{\lambda}bel{thm:K} Let ${\mathcal K}$ be an additive weak Kuranishi atlas on a compact metrizable space $X$. Then an appropriate shrinking of ${\mathcal K}$ provides a metrizable tame Kuranishi atlas ${\mathcal K}'$ with domains $(U'_I\subset U_I)_{I\in{\mathcal I}_{{\mathcal K}'}}$ such that the realizations $|{\mathcal K}'|$ and $|{\bf E}_{{\mathcal K}'}|$ are Hausdorff in the quotient topology. In addition, for each $I\in {\mathcal I}_{{\mathcal K}'} = {\mathcal I}_{\mathcal K}$ the projection maps ${\partial}i_{{\mathcal K}'}: U_I'\to |{\mathcal K}'|$ and ${\partial}i_{{\mathcal K}'}:U'_I\tildemes E_I\to |{\bf E}_{{\mathcal K}'}|$ are homeomorphisms onto their images and fit into a commutative diagram $$ {\beta}gin{array}{ccc} U_I'\tildemes E_I & \stackrel{{\partial}i_{{\mathcal K}'}}\longhookrightarrow & |{\bf E}_{{\mathcal K}'}| \quad \\ \downarrow & & \;\; \downarrow \scriptstyle |{\partial}r_{{\mathcal K}'}| \\ U_I' & \stackrel{{\partial}i_{{\mathcal K}'}} \longhookrightarrow &|{\mathcal K}'| \quad \end{array} $$ where the horizontal maps intertwine the vector space structure on $E_I$ with a vector space structure on the fibers of $|{\partial}r_{{\mathcal K}'}|$. Moreover, any two such shrinkings are cobordant by a metrizable tame Kuranishi cobordism whose realization also has the above Hausdorff, homeomorphism, and linearity properties. \end{thm} {\beta}gin{proof} The key step is Proposition~\ref{prop:proper}, which establishes the existence of a tame shrinking. As we show in Proposition~\ref{prop:metric}, the existence of a metric tame shrinking is an easy consequence. Uniqueness up to metrizable tame cobordism is proven in Proposition~\ref{prop:cobord2}. By Proposition~\ref{prop:Khomeo}, tameness implies the Hausdorff and homeomorphism properties. The diagram commutes since it arises as the realization of commuting functors to ${\partial}r_{{\mathcal K}'}:{\bf E}_{{\mathcal K}'}\to{\bf B}_{{\mathcal K}'}$, and we prove the linearity property in Proposition~\ref{prop:linear}. Finally, the last statement follows from Lemma~\ref{le:cob0} where we show that the realization of every tame Kuranishi cobordism has the Hausdorff, homeomorphism, and linearity properties. \end{proof} The Hausdorff property for the Kuranishi neighbourhood $|{\mathcal K}|$ will require the following control of the domains of coordinate changes, which we will achieve in Section~\ref{ss:shrink} by a shrinking from an additive weak Kuranishi atlas. 
{\beta}gin{defn}{\lambda}bel{def:tame} A weak Kuranishi atlas is {\bf tame} if it is additive, and for all $I,J,K\in{\mathcal I}_{\mathcal K}$ we have {\beta}gin{align}{\lambda}bel{eq:tame1} U_{IJ}\cap U_{IK}&\;=\; U_{I (J\cup K)}\qquad\qquad\;\;\;\;\,\qquad\forall I\subset J,K ;\\ {\lambda}bel{eq:tame2} {\partial}hi_{IJ}(U_{IK}) &\;=\; U_{JK}\cap s_J^{-1}\bigl(\widehat{\partial}hi_{IJ}(E_I)\bigr) \qquad\forall I\subset J\subset K. \end{align} Here we allow equalities, using the notation $U_{II}:=U_I$ and ${\partial}hi_{II}:={\rm Id}_{U_I}$. Further, to allow for the possibility that $J\cup K\notin{\mathcal I}_{\mathcal K}$, we define $U_{IL}:=\emptyset$ for $L\subset \{1,\ldots,N\}$ with $L\notin {\mathcal I}_{\mathcal K}$. Therefore \eqref{eq:tame1} includes the condition $$ U_{IJ}\cap U_{IK}\ne \emptyset \quad \Longrightarrow \quad F_J\cap F_K \ne \emptyset \qquad \bigl( \quad \Longleftrightarrow\quad J\cup K\in {\mathcal I}_{\mathcal K} \quad\bigr). $$ \end{defn} The first tameness condition \eqref{eq:tame1} extends the identity for footprints ${\partial}si_I^{-1}(F_J)\cap {\partial}si_I^{-1}(F_K) = {\partial}si_I^{-1}(F_{J\cup K})$ to the domains of the transition maps in $U_I$. In particular, with $J\subset K$ it implies nesting of the domains of the transition maps, {\beta}gin{equation}{\lambda}bel{eq:tame4} U_{IK}\subset U_{IJ} \qquad\forall I\subset J \subset K. \end{equation} The second tameness condition \eqref{eq:tame2} extends the control of transition maps between footprints and zero sets ${\partial}hi_{IJ}({\partial}si_I^{-1}(F_K)) = {\partial}si_J^{-1}(F_K) = U_{JK}\cap s_J^{-1}(0) $ to the Kuranishi domains. In particular, with $J=K$ it controls the image of the transition maps, {\beta}gin{equation}{\lambda}bel{eq:tame3} {\rm im\,}{\partial}hi_{IJ}:= {\partial}hi_{IJ}(U_{IJ}) = s_J^{-1}(\widehat{\partial}hi_{IJ}(E_I)) \qquad\forall I\subset J. \end{equation} This implies that the image of ${\partial}hi_{IJ}$ is a closed subset of $U_J$, and is a much strengthened form of the "infinitesimal tameness" $ {\rm im\,}{\rm d}_y{\partial}hi_{IJ}=({\rm d} s_J)^{-1}\bigr(\widehat{\partial}hi_{IJ}(E_I)\bigl)$ provided at the points $y\in {\rm im\,}{\partial}hi_{IJ}$ by Definition \ref{def:change}. The next lemma shows, somewhat generalized, that every tame weak Kuranishi atlas in fact satisfies the strong cocycle condition, so in particular is a Kuranishi atlas. {\beta}gin{lemma}{\lambda}bel{le:tame0} Suppose that the weak Kuranishi atlas ${\mathcal K}$ satisfies the tameness conditions \eqref{eq:tame1}, \eqref{eq:tame2} for all $I,J,K\in{\mathcal I}_{\mathcal K}$ with $|I|\leq k$. Then for all $I\subset J\subset K$ with $|I|\leq k$ the strong cocycle condition \eqref{strong cocycle} is satisfied, i.e.\ ${\partial}hi_{JK}\circ {\partial}hi_{IJ}={\partial}hi_{IK}$ with equality of domains $$ U_{IJ}\cap {\partial}hi_{IJ}^{-1}(U_{JK}) = U_{IK} . $$ \end{lemma} {\beta}gin{proof} From the tameness conditions \eqref{eq:tame1} and \eqref{eq:tame3} we obtain for all $I\subset J\subset K$ with $|I|\leq k$ $$ {\partial}hi_{IJ}(U_{IK}) = U_{JK}\cap s_J^{-1}(\widehat{\partial}hi_{IJ}(E_I)) = U_{JK}\cap {\partial}hi_{IJ}(U_{IJ}) . $$ Applying ${\partial}hi_{IJ}^{-1}$ to both sides and using \eqref{eq:tame4} implies equality of the domains. Then the weak cocycle condition ${\partial}hi_{JK}\circ {\partial}hi_{IJ}={\partial}hi_{IK}$ on the overlap of domains is identical to the strong cocycle condition. \end{proof} The above remarks do not use additivity. 
However, we formulated Definition~\ref{def:tame} so that tameness implies additivity, because the most useful consequences come by using additivity. In particular, we obtain a limited transversality for the embeddings of the domains involved in coordinate changes. This property is crucial to guarantee the existence of coherent (i.e.\ compatible with coordinate changes) perturbations of the canonical section $s_{\mathcal K}$ of a tame Kuranishi atlas. However, due to further technical complications, we will not use it directly in the constructions of Section~\ref{ss:const}. {\beta}gin{lemma}{\lambda}bel{le:phitrans} If ${\mathcal K}$ is a tame Kuranishi atlas, then for any $H,I,J\in{\mathcal I}_{\mathcal K}$ with $H \cap I\ne \emptyset$ and $H\cup I\subset J$ the two submanifolds ${\rm im\,} {\partial}hi_{H J}$ and ${\rm im\,} {\partial}hi_{IJ}$ of ${\rm im\,} {\partial}hi_{(H\cup I) J}$ intersect transversally in ${\rm im\,} {\partial}hi_{(H\cap I)J}$. \end{lemma} {\beta}gin{proof} We will make crucial use of tameness, which identifies $$ {\rm im\,} {\partial}hi_{L J}=s_J^{-1}(E_L):=s_J^{-1}\bigl(\widehat{\partial}hi_{LJ}(E_L)\bigr) $$ for $L=H,I,H\cap I, H\cup I$, with the preimages under $s_J$ of the images of the linear embeddings, ${\rm im\,}\widehat{\partial}hi_{LJ}= \widehat{\partial}hi_{LJ}(E_L) \subset E_J$. The inclusions $s_J^{-1}\bigl({\rm im\,}\widehat{\partial}hi_{LJ}\bigr) \subset s_J^{-1}\bigl({\rm im\,}\widehat{\partial}hi_{(H\cup I)J}\bigr)$ for $L=H,I$ then follow from the linear cocycle condition $\widehat{\partial}hi_{LJ} = \widehat{\partial}hi_{(H\cup I)J} \circ \widehat{\partial}hi_{L(H\cup I)}$, which implies ${\rm im\,}\widehat{\partial}hi_{LJ} \subset {\rm im\,}\widehat{\partial}hi_{(H\cup I)J}$. The intersection identity $s_J^{-1}\bigl({\rm im\,}\widehat{\partial}hi_{HJ}\bigr)\cap s_J^{-1}\bigl({\rm im\,}\widehat{\partial}hi_{IJ}\bigr) = s_J^{-1}\bigl({\rm im\,}\widehat{\partial}hi_{(H\cap I)J}\bigr)$ follows by applying $s_J^{-1}$ to the additivity property \eqref{eq:addd}. To prove the transversality of intersection, we use additivity and the linear cocycle condition to obtain the decomposition {\beta}gin{align*} {\rm im\,}\widehat{\partial}hi_{(H\cup I)J} &\;\cong\; \quotient{{\rm im\,}\widehat{\partial}hi_{HJ}}{{\rm im\,}\widehat{\partial}hi_{(H\cap I)J}} \;\oplus\; {\rm im\,}\widehat{\partial}hi_{(H\cap I)J} \;\oplus\;\quotient{{\rm im\,}\widehat{\partial}hi_{IJ}}{{\rm im\,}\widehat{\partial}hi_{(H\cap I)J}} . \end{align*} Applying the isomorphism ${\rm d} s_J^{-1}$ to a complement of $\ker{\rm d} s_J \subset {\rm T} U_J$, and adding this kernel to the middle factor, this implies {\beta}gin{align*} {\rm d} s_J^{-1}\bigl({\rm im\,}\widehat{\partial}hi_{(H\cup I)J}\bigr) &\;\cong\; \frac{{\rm d} s_J^{-1}\bigl({\rm im\,}\widehat{\partial}hi_{HJ}\bigr)}{{\rm d} s_J^{-1}\bigl({\rm im\,}\widehat{\partial}hi_{(H\cap I)J}\bigr)} \;\oplus\; {\rm d} s_J^{-1}\bigl({\rm im\,}\widehat{\partial}hi_{(H\cap I)J}\bigr) \;\oplus\;\frac{{\rm d} s_J^{-1}\bigl({\rm im\,}\widehat{\partial}hi_{IJ}\bigr)}{{\rm d} s_J^{-1}\bigl({\rm im\,}\widehat{\partial}hi_{(H\cap I)J}\bigr)} . \end{align*} Since ${\rm im\,}\widehat{\partial}hi_{(H\cap I)J} \subset {\rm im\,}\widehat{\partial}hi_{LJ}$ for $L=H,I$ this implies transversality {\beta}gin{align*} {\rm d} s_J^{-1}\bigl({\rm im\,}\widehat{\partial}hi_{(H\cup I)J}\bigr) \;=\; {\rm d} s_J^{-1}\bigl({\rm im\,}\widehat{\partial}hi_{HJ}\bigr) \;+\; {\rm d} s_J^{-1}\bigl({\rm im\,}\widehat{\partial}hi_{IJ}\bigr) \end{align*} as claimed. 
\end{proof} We now show that the additivity and tameness conditions give us very useful control over the equivalence relation ${\sigma}m$ on ${\rm Obj}_{{\bf B}_{\mathcal K}}$, given in Definition~\ref{def:Knbhd} by abstractly inverting the morphisms. We reformulate it here with the help of a partial order given by the morphisms -- more precisely the embeddings ${\partial}hi_{IJ}$ that are part of the coordinate changes. {\beta}gin{definition} {\lambda}bel{def:preceq} Let ${\partial}receq$ denote the partial order on ${\rm Obj}_{{\bf B}_{\mathcal K}}$ given by $$ (I,x){\partial}receq(J,y) \quad :\Longleftrightarrow \quad {\rm Mor}_{{\bf B}_{\mathcal K}}((I,x),(J,y))\neq\emptyset . $$ That is, we have $(I,x){\partial}receq (J,y)$ iff $I\subset J$, $x\in U_{IJ}$, and $y={\partial}hi_{IJ}(x)$. Moreover, for any $I,J\in{\mathcal I}_{\mathcal K}$ and a subset $S_I\subset U_I$ we denote the subset of points in $U_J$ that are equivalent to a point in $S_I$ by $$ {\varepsilon}_J(S_I) \,:=\; {\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}(S_I)) \cap U_J \;=\; \bigl\{y\in U_J \,\big|\, \exists\, x\in S_I : (I,x){\sigma}m (J,y) \bigr\} \;\subset\; U_J. $$ There is a similar partial order on ${\rm Obj}_{{\bf E}_{\mathcal K}}$ given by $$ (I,x,e){\partial}receq(J,y,f) \quad :\Longleftrightarrow \quad {\rm Mor}_{{\bf E}_{\mathcal K}}((I,x,e),(J,y,f))\neq\emptyset . $$ \end{definition} The notation ${\varepsilon}_J(S_I)$ will obtain a more useful interpretation in Lemma \ref{le:Ku2} below. The relation ${\partial}receq$ on ${\rm Obj}_{{\bf E}_{\mathcal K}}$ is very similar to that on ${\rm Obj}_{{\bf B}_{\mathcal K}}$. Indeed, $(I,x,e){\partial}receq (J,y,f)$ implies $(I,x){\partial}receq (J,y)$. Conversely, if $(I,x){\partial}receq (J,y)$ then for every $e\in E_I$ there is a unique $f\in E_J$ such that $(I,x,e){\partial}receq (J,y,f)$. Thus, to ease notation, we mostly work with the relation on ${\rm Obj}_{{\bf B}_{\mathcal K}}$, though any statement about it has an immediate analog for the relation on ${\rm Obj}_{{\bf E}_{\mathcal K}}$ (and vice versa). {\beta}gin{lemma} {\lambda}bel{lem:eqdef} The equivalence relation ${\sigma}m$ on ${\rm Obj}_{{\bf B}_{\mathcal K}}$ of Definition~\ref{def:Knbhd} is equivalently defined by $(I,x){\sigma}m (J,y)$ iff there is a finite tuple of objects $(I_0, x_0), \ldots, (I_k, x_k)\in{\rm Obj}_{{\bf B}_{\mathcal K}}$ such that {\beta}gin{align}{\lambda}bel{eq:ch} &(I,x) = (I_0,x_0){\partial}receq (I_1,x_1) \succeq (I_2,x_2) {\partial}receq \ldots (I_k, x_k)=(J,y) \\ \text{or}\qquad & (I,x) = (I_0,x_0)\succeq (I_1,x_1) {\partial}receq (I_2,x_2) \succeq \ldots (I_k, x_k)=(J,y) . \nonumber \end{align} \end{lemma} {\beta}gin{proof} The relation ${\partial}receq$ is transitive by the cocycle condition in Definition~\ref{def:Ku}~(d), and antisymmetric since the transition maps are directed. In particular, we have $(I,x){\partial}receq (I,y)$ iff $x=y$. The two definitions of ${\sigma}m$ are equivalent since, if \eqref{eq:ch} had consecutive morphisms $(I_{\ell-1},x_{\ell-1}){\partial}receq (I_{\ell},x_{\ell}) {\partial}receq (I_{\ell+1},x_{\ell+1})$, these could be composed to a single morphism $(I_{\ell-1},x_{\ell-1}){\partial}receq (I_{\ell+1},x_{\ell+1})$ by the cocycle condition. Similarly, any consecutive morphisms $(I_{\ell-1},x_{\ell-1})\succeq (I_{\ell},x_{\ell}) \succeq (I_{\ell+1},x_{\ell+1})$ can be composed to a single morphism $(I_{\ell-1},x_{\ell-1})\succeq (I_{\ell+1},x_{\ell+1})$.
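To illustrate the reduction, a chain such as $(I_0,x_0){\partial}receq (I_1,x_1){\partial}receq (I_2,x_2)\succeq (I_3,x_3)\succeq (I_4,x_4)$ first collapses to $(I_0,x_0){\partial}receq (I_2,x_2)\succeq (I_4,x_4)$, which after renumbering is of the first form in \eqref{eq:ch} with $k=2$.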
\end{proof} When ${\mathcal K}$ is tame, the definition of ${\sigma}m$ in terms of ${\partial}receq$ simplifies, and the relation then has good topological properties, as follows. Note that, because tame Kuranishi atlases are additive, we now denote $s_J^{-1}(E_I):=s_J^{-1}(\widehat{\partial}hi_{IJ}(E_I))\subset U_J$. {\beta}gin{lemma} {\lambda}bel{le:Ku2} Let ${\mathcal K}$ be a tame Kuranishi atlas. {\beta}gin{enumerate} \item[(a)] For $(I,x),(J,y)\in{\rm Obj}_{{\bf B}_{\mathcal K}}$ the following are equivalent. {\beta}gin{enumerate} \item[(i)] $(I,x){\sigma}m (J,y)$; \item[(ii)] there exists $z\in U_{I\cup J}$ such that $(I,x){\partial}receq (I\cup J,z) \succeq (J,y)$; \item[(iii)] there exists $w\in U_{I\cap J}$ such that $(I,x)\succeq (I\cap J,w) {\partial}receq (J,y)$. \end{enumerate} \item[(b)] For $(I,x,e),(J,y,f)\in{\rm Obj}_{{\bf E}_{\mathcal K}}$ the following are equivalent. {\beta}gin{enumerate} \item[(i)] $(I,x,e){\sigma}m (J,y,f)$; \item[(ii)] $(I,x){\sigma}m(J,y)$ and $\widehat{\partial}hi_{I (I\cup J)}(e) = g = \widehat{\partial}hi_{J (I\cup J)}(f)$ for some $g\in E_{I\cup J}$; \item[(iii)] $(I,x){\sigma}m(J,y)$ and $\widehat{\partial}hi_{(I\cap J)I}^{-1}(e) = d = \widehat{\partial}hi_{(I\cap J) J}^{-1}(f)$ for some $d\in E_{I\cap J}$. \end{enumerate} \item[(c)] ${\partial}i_{\mathcal K}:U_I \to |{\mathcal K}|$ and ${\partial}i_{\mathcal K}: U_I\tildemes E_I \to |{\bf E}_{\mathcal K}|$ are injective for each $I\in{\mathcal I}_{\mathcal K}$, that is $(I,x,e){\sigma}m (I,y,f)$ implies $x=y$ and $e=f$. In particular, the elements $z$ and $w$ in (a) resp.\ $g$ and $d$ in (b) are automatically unique. \item[(d)] For any $I,J\in{\mathcal I}_{\mathcal K}$ and $S_I\subset U_I$ we have $$ {\varepsilon}_J(S_I) \;=\; {\partial}hi_{J(I\cup J)}^{-1}\bigl( {\partial}hi_{I (I\cup J)}(S_I) \bigr) \;=\; {\partial}hi_{(I\cap J) J}\bigl( {\partial}hi_{(I\cap J)I}^{-1}(S_I) \bigr) ; $$ in particular $$ {\varepsilon}_J(U_I) \;:=\; U_J\cap {\partial}i_{\mathcal K}^{-1}\bigl({\partial}i_{\mathcal K}(U_I)\bigr) \;=\; U_{J(I\cup J)}\cap s_J^{-1}(E_{I\cap J}). $$ \item[(e)] If $S_I\subset U_I$ is closed, then ${\varepsilon}_J(S_I) \subset U_J$ is closed. In particular we have $\overline{{\varepsilon}_J(A_I)} \subset {\varepsilon}_J(\overline{A_I})$ for any subset $A_I\subset U_I$. \end{enumerate} \end{lemma} {\beta}gin{proof} We first prove the following intermediate results: \noindent {\bf Claim 1:} {\it Suppose that $(I,x,e){\partial}receq (K,z,g) \succeq (J,y,f)$ for some $(K,z,g)\in{\rm Obj}_{{\bf E}_{\mathcal K}}$. Then there exist $w\in U_{I\cap J}$ and $d\in E_{I\cap J}$ such that $(I,x,e)\succeq (I\cap J,w,d) {\partial}receq (J,y,f)$.} { } {\mathbb N}I Indeed, tameness \eqref{eq:tame3} and additivity \eqref{eq:CIJ} imply $$ z \in {\partial}hi_{IK}(U_{IK})\cap {\partial}hi_{JK}(U_{JK}) = s_K^{-1}(E_I)\cap s_K^{-1}(E_J) = s_K^{-1}(E_{I\cap J}) = {\partial}hi_{(I\cap J) K}(U_{(I\cap J)K}). $$ Therefore we have $z={\partial}hi_{(I\cap J)K}(w)$ for some $w\in U_{(I\cap J)K}$. We also have $z={\partial}hi_{IK}(x)$ by assumption, and Lemma \ref{le:tame0} implies that ${\partial}hi_{(I\cap J)K}(w) = {\partial}hi_{IK}\bigl( {\partial}hi_{(I\cap J)I}(w)\bigr)$, so the elements $x$ and ${\partial}hi_{(I\cap J)I}(w)$ of $U_{IK}$ have the same image under ${\partial}hi_{IK}$. Since the latter is an embedding we deduce $x={\partial}hi_{(I\cap J) I}(w)$.
Similarly, $y={\partial}hi_{(I\cap J) J}(w)$ follows from ${\partial}hi_{(I\cap J)K} = {\partial}hi_{JK}\circ {\partial}hi_{(I\cap J)J}$. Moreover, we have $\widehat{\partial}hi_{IK}(e) = g = \widehat{\partial}hi_{JK}(f)$ by assumption, so $g\in \widehat{\partial}hi_{IK}(E_I) \cap \widehat{\partial}hi_{JK}(E_J) = \widehat{\partial}hi_{(I\cap J) K}(E_{I\cap J})$ by additivity \eqref{eq:addd}. Now applying $\widehat{\partial}hi_{(I\cap J) K}^{-1}$ together with the cocycle conditions $\widehat{\partial}hi_{(I\cap J) K}= \widehat{\partial}hi_{\bullet K} \circ \widehat{\partial}hi_{(I\cap J) \bullet}$ for $\bullet = I, J$ we obtain $\widehat{\partial}hi_{(I\cap J)I}^{-1}(e) = \widehat{\partial}hi_{(I\cap J) J}^{-1}(f) = \widehat{\partial}hi_{(I\cap J) K}^{-1}(g) =:d$. \noindent {\bf Claim 2:} {\it Suppose that $(I,x,e)\succeq (H,w,d) {\partial}receq (J,y,f)$ for some $(H,w,d)\in{\rm Obj}_{{\bf E}_{\mathcal K}}$. Then there exist $z\in U_{I\cup J}$ and $g\in E_{I\cup J}$ such that $(I,x,e){\partial}receq (I\cup J,z,g) \succeq (J,y,f)$.} { }{\mathbb N}I Indeed, \eqref{eq:tame1} implies that $w\in U_{H (I\cup J)}$, so that $z:={\partial}hi_{H (I\cup J)}(w)\in U_{I\cup J}$ is defined. We also have $x = {\partial}hi_{HI}(w)$ by assumption, which together with Lemma \ref{le:tame0} implies $$ z = {\partial}hi_{H (I\cup J)}(w) = {\partial}hi_{I (I\cup J)}\bigl({\partial}hi_{H I}(w)\bigr) = {\partial}hi_{I (I\cup J)}(x) . $$ Similarly, $z= {\partial}hi_{J (I\cup J)}(y)$ follows from $y={\partial}hi_{HJ}(w)$ and the strong cocycle condition \eqref{strong cocycle}, proved in Lemma~\ref{le:tame0}. Moreover, we have $e=\widehat{\partial}hi_{HI}(d)$ and $f=\widehat{\partial}hi_{HJ}(d)$, so we can apply the cocycle conditions $\widehat{\partial}hi_{H(I\cup J)}= \widehat{\partial}hi_{\bullet (I\cup J)}\circ \widehat{\partial}hi_{H \bullet}$ for $\bullet = I, J$ to obtain $$ \widehat{\partial}hi_{I(I\cup J)}(e) = \widehat{\partial}hi_{J(I\cup J)}(f) = \widehat{\partial}hi_{H (I\cup J)}(d) =:g . $$ Now to prove part (a), observe that Claims 1 and 2 imply the analogous statements on ${\rm Obj}_{{\bf B}_{\mathcal K}}$ by picking the zero vector for each of $e,f,g,d$. With that, (ii) ${\mathbb R}ightarrow $ (iii) follows from Claim 1, and (iii) ${\mathbb R}ightarrow$ (i) holds by definition of the equivalence relation ${\sigma}m$. The implication (i) ${\mathbb R}ightarrow$ (ii) is proven by noting as above that consecutive morphisms in \eqref{eq:ch} in the same direction can be composed to a single morphism by the cocycle condition. Combining this with Claims 1 and 2 above, we see that any tuple of morphisms \eqref{eq:ch} can be replaced by two morphisms $(I,x)\succeq (H,w) {\partial}receq (J,y)$ or $(I,x){\partial}receq (K,z) \succeq (J,y)$. We then use once more Claim 2, or Claims 1 and 2, to deduce the existence of morphisms $(I,x){\partial}receq (I\cup J,z) \succeq (J,y)$. This proves (a), and (b) is proven in complete analogy. Next, part (c) is a consequence of (i)${\mathbb R}ightarrow$(ii) since ${\partial}hi_{II}={\rm Id}_{U_I}$ and $\widehat{\partial}hi_{II}={\rm Id}_{E_I}$. The formulas for ${\varepsilon}_J(S_I)$ in (d) follow immediately from the equivalent definitions of ${\sigma}m$ in~(a).
In case $S_I=U_I$ we can moreover use \eqref{eq:tame3} and \eqref{eq:CIJ} to obtain {\beta}gin{align*} {\varepsilon}_J(U_I) & \;=\; {\partial}hi_{J(I\cup J)}^{-1}\bigl( s_{I\cup J}^{-1}(E_I) \bigr) \;=\; {\partial}hi_{J(I\cup J)}^{-1}\bigl( s_{I\cup J}^{-1}(E_I) \cap s_{I\cup J}^{-1}(E_J) \bigr) \\ &\;=\; {\partial}hi_{J(I\cup J)}^{-1}\bigl( s_{I\cup J}^{-1}(E_{I\cap J}) \bigr) \;=\; U_{J(I\cup J)} \cap s_J^{-1}(E_{I\cap J}) . \end{align*} Finally, if $S_I\subset U_I$ is closed, then ${\varepsilon}_J(S_I) = {\partial}hi_{(I\cap J) J}\bigl( {\partial}hi_{(I\cap J)I}^{-1}(S_I) \bigr) \subset {\rm im\,}{\partial}hi_{(I\cap J) J}$ is closed in ${\rm im\,}{\partial}hi_{(I\cap J) J}$ since the transition maps are homeomorphisms onto their images. The tameness assumption \eqref{eq:tame3} ensures that ${\rm im\,}{\partial}hi_{(I\cap J) J}\subset U_J$ is closed, and hence ${\varepsilon}_J(S_I)\subset U_J$ is closed. Now for any subset $A_I\subset U_I$ we have ${\varepsilon}_J(A_I)$ contained in the closed subset ${\varepsilon}_J(\overline{A_I})\subset U_J$, hence by definition the closure $\overline{{\varepsilon}_J(A_I)} \subset U_J$ is contained in ${\varepsilon}_J(\overline{A_I})$. This finishes the proof of (e). \end{proof} With these preparations we show in the following propositions that the additivity and tameness conditions imply the Hausdorff, homeomorphism, and linearity properties claimed in Theorem~\ref{thm:K}. {\beta}gin{prop}{\lambda}bel{prop:Khomeo} Suppose that the Kuranishi atlas ${\mathcal K}$ is tame. Then $|{\mathcal K}|$ and $|{\bf E}_{\mathcal K}|$ are Hausdorff, and for each $I\in{\mathcal I}_{\mathcal K}$ the quotient maps ${\partial}i_{{\mathcal K}}|_{U_I}:U_I\to |{\mathcal K}|$ and ${\partial}i_{{\mathcal K}}|_{U_I\tildemes E_I}:U_I\tildemes E_I\to |{\bf E}_{\mathcal K}|$ are homeomorphisms onto their images. \end{prop} {\beta}gin{proof} The claims for $|{\mathcal K}|$ follow from those for $|{\bf E}_{\mathcal K}|$ since $|{\mathcal K}|$ can be identified with the zero section of the bundle ${\partial}r: |{\bf E}_{\mathcal K}| \to |{\mathcal K}|$, which is a closed subset. However, to avoid carrying along unnecessary notation, we first prove the statement for $|{\mathcal K}|$, and then sketch the necessary extensions of the argument for $|{\bf E}_{\mathcal K}|$. Since each component of ${\rm Obj}_{{\bf B}_{\mathcal K}}=\bigcup_{I\in {\mathcal I}_{\mathcal K}} U_I$ is Hausdorff and locally compact, its quotient $|{\mathcal K}|={\rm Obj}_{{\bf B}_{\mathcal K}}/\hspace{-1.5mm}{\sigma}m$ is Hausdorff exactly if the equivalence relation ${\sigma}m$ is closed. Since ${\mathcal I}_{\mathcal K}$ is finite, it suffices to consider sequences $U_I \ni x^\nu\to x^\infty$ and $U_J\ni y^\nu\to y^\infty$ of equivalent objects $(I,x^\nu){\sigma}m (J, y^\nu)$ for all $\nu\in{\mathbb N}$ and check that $(I, x^\infty){\sigma}m (J, y^\infty)$. For that purpose denote $H:=I\cap J$; then by Lemma~\ref{le:Ku2}(a) there is a sequence $w^\nu\in U_H$ such that $x^\nu = {\partial}hi_{HI}(w^\nu)$ and $y^\nu = {\partial}hi_{HJ}(w^\nu)$. Now it follows from the tameness condition \eqref{eq:tame3} that $x^\infty$ lies in the relatively closed subset ${\partial}hi_{HI}(U_{HI})=s_I^{-1}(E_H)\subset U_I$, and since ${\partial}hi_{HI}$ is a homeomorphism onto its image we deduce convergence $w^\nu\to w^\infty\in U_{HI}$ to a preimage of $x^\infty={\partial}hi_{HI}(w^\infty)$. Then by continuity of the transition map we obtain ${\partial}hi_{HJ}(w^\infty) = y^\infty$, so that $(I, x^\infty){\sigma}m (J, y^\infty)$ as claimed.
Thus $|{\mathcal K}|$ is Hausdorff. To show that ${\partial}i_{\mathcal K}|_{U_I}$ is a homeomorphism onto its image, first recall that it is injective by Lemma \ref{le:Ku2}~(c). It is moreover continuous since $|{\mathcal K}|$ is equipped with the quotient topology. Hence it remains to show that ${\partial}i_{\mathcal K}|_{U_I}$ is an open map to its image, i.e.\ for any given open subset $S_I\subset U_I$ we must find an open subset ${\mathcal W}\subset |{\mathcal K}|$ such that ${\mathcal W}\cap {\partial}i_{{\mathcal K}}(U_I) = {\partial}i_{\mathcal K}(S_I)$. Since $|{\mathcal K}|$ is given the quotient topology, ${\mathcal W}$ is open precisely if ${\partial}i_{{\mathcal K}}^{-1}({\mathcal W})\cap U_J \subset U_J$ is open for each $J\in{\mathcal I}_{\mathcal K}$. Equivalently, we could find open subsets $W_J\subset U_J$ such that $W_I=S_I$ and $$ {\partial}i_{{\mathcal K}}^{-1} \Bigl( {\partial}i_{\mathcal K} \Bigl( {\textstyle \bigcup_{J\in{\mathcal I}_{\mathcal K}} W_J } \Bigr) \Bigr) = {\textstyle \bigcup_{J\in{\mathcal I}_{\mathcal K}} W_J } , $$ since then ${\mathcal W}:={\partial}i_{\mathcal K}\bigl( \bigcup_{J\in{\mathcal I}_{\mathcal K}} W_J \bigr)\subset |{\mathcal K}|$ is the required open set. To construct these sets we order ${\mathcal I}_{\mathcal K}=\{I_1,I_2,\ldots,I_N\}$ in any way such that $I_1=I$. Then we will iteratively define open sets $W_{I_\ell}\subset U_{I_\ell}$ and not necessarily open sets $$ {\mathcal W}_\ell:= {\partial}i_{\mathcal K} \bigl( {\textstyle \bigcup_{k=1}^\ell } W_{I_k}\bigr)\subset|{\mathcal K}| $$ such that {\beta}gin{equation} {\lambda}bel{wwell} {\partial}i_{{\mathcal K}}^{-1}({\mathcal W}_\ell)\cap U_{I_k} = W_{I_k} \quad \forall \;k=1,\ldots,\ell . \end{equation} Obviously, we also need to begin with $W_{I_1}:=S_I$, which is given as open. Then ${\mathcal W}_1={\partial}i_{\mathcal K}(W_{I_1})$ satisfies \eqref{wwell} for $\ell=1$ as we check with the help of Lemma~\ref{le:Ku2}~(c), $$ {\partial}i_{{\mathcal K}}^{-1}({\mathcal W}_1)\cap U_{I_1} = {\partial}i_{{\mathcal K}}^{-1}({\partial}i_{\mathcal K} (W_{I_1}))\cap U_{I_1} = {\varepsilon}_{I_1}(W_{I_1}) = W_{I_1} . $$ Next, suppose that the open sets $W_{I_1},\ldots, W_{I_\ell}$ are defined and satisfy \eqref{wwell}. Then we define $$ W_{I_{\ell+1}}: = U_{I_{\ell+1}}\;{\smallsetminus}\; {\textstyle \bigcup_{n\leq\ell} {\varepsilon}_{I_{\ell+1}}(U_{I_n}{\smallsetminus} W_{I_n}) } $$ by removing those points from the domain $U_{I_{\ell+1}}$ that would violate \eqref{wwell}. This defines an open subset $W_{I_{\ell+1}}\subset U_{I_{\ell+1}}$ since by previous constructions $W_{I_n}$ is open, so $U_{I_n}{\smallsetminus} W_{I_n}\subset U_{I_n}$ is closed, and by Lemma~\ref{le:Ku2}~(d) this implies closedness of ${\varepsilon}_{I_{\ell+1}}(U_{I_n}{\smallsetminus} W_{I_n})$. To verify \eqref{wwell} with $\ell$ replaced by $\ell+1$ first note that for $k=1,\ldots,\ell$ we have $$ {\partial}i_{\mathcal K}^{-1}\bigl({\partial}i_{\mathcal K}(W_{I_{\ell+1}})\bigr)\cap U_{I_k} = {\varepsilon}_{I_k}(W_{I_{\ell+1}}) \subset {\varepsilon}_{I_k}\bigl( U_{I_{\ell+1}}{\smallsetminus} \; {\varepsilon}_{I_{\ell+1}}(U_{I_k}{\smallsetminus} W_{I_k}) \bigr) \subset W_{I_k} $$ since the complicated expression consists of those $x\in U_{I_k}$ for which there exists $y \in U_{I_{\ell+1}}$ with $x {\sigma}m y$ and $y\not{\sigma}m z$ for any $z\in U_{I_k}{\smallsetminus} W_{I_k}$, which implies $x\in W_{I_k}$. 
Conversely, we have $$ {\partial}i_{\mathcal K}^{-1}({\mathcal W}_\ell)\cap U_{I_{\ell+1}} \subset W_{I_{\ell+1}} $$ since for every $n\leq\ell$ the intersection ${\partial}i_{\mathcal K}^{-1}({\mathcal W}_\ell)\cap {\varepsilon}_{I_{\ell+1}}(U_{I_n}{\smallsetminus} W_{I_n})$ is empty. Indeed, otherwise there exist $x\in U_{I_{\ell+1}}$ and $y\in U_{I_n}{\smallsetminus} W_{I_n}$ with $x{\sigma}m y$, and on the other hand ${\partial}i_{\mathcal K}(x)\in{\mathcal W}_\ell$. However, this implies $y\in {\partial}i_{\mathcal K}^{-1}({\mathcal W}_\ell)$ in contradiction to \eqref{wwell}. Using these two inclusions we can now finish the proof by verifying the iteration of \eqref{wwell}, $$ {\partial}i_{\mathcal K}^{-1}({\mathcal W}_{\ell+1})\cap U_{I_k} = \bigl({\partial}i_{\mathcal K}^{-1}({\mathcal W}_\ell)\cap U_{I_k}\bigr) \cup \bigl({\partial}i_{\mathcal K}^{-1}\bigl({\partial}i_{\mathcal K}(W_{I_{\ell+1}})\bigr)\cap U_{I_k}\bigr) = W_{I_k} $$ holds for $k=1,\ldots,\ell$, and for $k=\ell+1$ we have $$ {\partial}i_{\mathcal K}^{-1}({\mathcal W}_{\ell+1})\cap U_{I_{\ell+1}} = \bigl({\partial}i_{\mathcal K}^{-1}({\mathcal W}_\ell)\cap U_{I_{\ell+1}}\bigr) \cup \bigl({\partial}i_{\mathcal K}^{-1}\bigl({\partial}i_{\mathcal K}(W_{I_{\ell+1}})\bigr)\cap U_{I_{\ell+1}}\bigr) = W_{I_{\ell+1}} . $$ This completes the proof of the statements about $|{\mathcal K}|$. The analogous statements for $|{\bf E}_{\mathcal K}|$ hold by using part (b) of Lemma~\ref{le:Ku2} instead of part (a). \end{proof} {\beta}gin{prop}{\lambda}bel{prop:linear} Let ${\mathcal K}$ be a tame Kuranishi atlas. Then there exists a unique linear structure on the fibers of $|{\partial}r_{{\mathcal K}}|: |{\bf E}_{\mathcal K}| \to |{\mathcal K}|$ such that for every $I\in{\mathcal I}_{\mathcal K}$ the embedding ${\partial}i_{{\mathcal K}} : U_I\tildemes E_I \to |{\bf E}_{{\mathcal K}}|$ is linear on the fibers. \end{prop} {\beta}gin{proof} For fixed $p\in |{\mathcal K}|$ denote the union of index sets for which $p\in {\partial}i_{\mathcal K}(U_I)$ by $$ I_p:= \bigcup_{I\in{\mathcal I}_{\mathcal K}, p\in {\partial}i_{\mathcal K}(U_I)} I \qquad \subset \{1,\ldots,N\} . $$ To see that $I_p\in{\mathcal I}_{\mathcal K}$ we repeatedly use the observation that Lemma~\ref{le:Ku2}~(a) implies $$ p \in {\partial}i_{\mathcal K}(U_I)\cap{\partial}i_{\mathcal K}(U_J) \quad{\mathbb R}ightarrow\quad (I,{\partial}i_{\mathcal K}^{-1}(p)\cap U_I){\sigma}m (J,{\partial}i_{\mathcal K}^{-1}(p)\cap U_J) \quad{\mathbb R}ightarrow\quad I\cup J \in {\mathcal I}_{\mathcal K} . $$ Moreover, $x_p:={\partial}i_{\mathcal K}^{-1}(p)\cap U_{I_p}$ is unique by Lemma~\ref{le:Ku2}~(c). Next, any element in the fiber $[I,x,e]\in |{\partial}r_{{\mathcal K}}|^{-1}(p)$ is represented by some vector over $(I,x)\in {\partial}i_{\mathcal K}^{-1}(p)$, so we have $I\subset I_p$ and ${\partial}hi_{I I_p}(x)=x_p$, and hence $(I,x, e){\sigma}m (I_p,x_p,\widehat{\partial}hi_{I I_p}(e))$. Thus ${\partial}i_{\mathcal K}:\{x_p\}\tildemes E_{I_p}\to |{\partial}r_{\mathcal K}|^{-1}(p)$ is surjective, and by Lemma~\ref{le:Ku2}~(c) also injective. Thus the requirement of linearity for this bijection induces a unique linear structure on the fiber $|{\partial}r_{\mathcal K}|^{-1}(p)$. 
To see that this is compatible with the injections ${\partial}i_{\mathcal K}:\{x\}\tildemes E_{I}\to |{\partial}r_{\mathcal K}|^{-1}(p)$ for $(I,x){\sigma}m(I_p,x_p)$ note again that $I\subset I_p$ since $I_p$ was defined to be maximal, and hence by Lemma~\ref{le:Ku2}~(b)~(ii) the embedding factors as ${\partial}i_{\mathcal K}|_{\{x\}\tildemes E_{I}} = {\partial}i_{\mathcal K}|_{\{x_p\}\tildemes E_{I_p}} \circ \widehat{\partial}hi_{I I_p}$, where $\widehat{\partial}hi_{I I_p}$ is linear by definition of coordinate changes. Thus ${\partial}i_{\mathcal K}|_{\{x\}\tildemes E_{I}}$ is linear as well. \end{proof} {\beta}gin{rmk}{\lambda}bel{rmk:LIN}\rm It is tempting to think that additivity alone is enough to imply that the fibers of $|{\partial}r_{\mathcal K}|:|{\bf E}_{\mathcal K}|\to|{\mathcal K}|$ are vector spaces. However, if the first tameness condition \eqref{eq:tame1} fails because there is $x\in (U_{IJ}\cap U_{IK}) {\smallsetminus} U_{I(J\cup K)}$, then both $E_J$ and $E_K$ embed into the fiber $|{\partial}r_{\mathcal K}|^{-1}([I,x]))$, but may not be summable, since such sums are well defined by additivity only in $E_{J\cup K}$. \end{rmk} We end this section with further topological properties of the Kuranishi neighbourhood of a tame Kuranishi atlas that will be useful when constructing an admissible metric in Section~\ref{ss:shrink} and eventually the virtual fundamental class in Section~\ref{s:VMC}. For that purpose we need to be careful in differentiating between the quotient and subspace topology on subsets of the Kuranishi neighbourhood, as follows. {\beta}gin{definition} {\lambda}bel{def:topologies} For any subset ${\mathcal A}\subset {\rm Obj}_{{\bf B}_{\mathcal K}}$ of the union of domains of a Kuranishi atlas ${\mathcal K}$, we denote by $$ \|{\mathcal A}\|:={\partial}i_{\mathcal K}({\mathcal A})\subset|{\mathcal K}| , \qquad\qquad |{\mathcal A}|:={\partial}i_{\mathcal K}({\mathcal A})\cong \quot{{\mathcal A}}{{\sigma}m} $$ the set ${\partial}i_{\mathcal K}({\mathcal A})$ equipped with its subspace topology induced from the inclusion ${\partial}i_{\mathcal K}({\mathcal A})\subset|{\mathcal K}|$ resp.\ its quotient topology induced from the inclusion ${\mathcal A}\subset {\rm Obj}_{{\bf B}_{\mathcal K}}$ and the equivalence relation ${\sigma}m$ on ${\rm Obj}_{{\bf B}_{\mathcal K}}$ (which is generated by all morphisms in ${\bf B}_{\mathcal K}$, not just those between elements of ${\mathcal A}$). \end{definition} {\beta}gin{remark} {\lambda}bel{rmk:hom} \rm In many cases we will be able to identify different topologies on subsets of the Kuranishi neighbourhood $|{\mathcal K}|$ by appealing to the following elementary {\bf nesting uniqueness of compact Hausdorff topologies}: Let $f:X\to Y$ be a continuous bijection from a compact topological space $X$ to a Hausdorff space $Y$. Then $f$ is in fact a homeomorphism. Indeed, it suffices to see that $f$ is a closed map, i.e.\ maps closed sets to closed sets, since that implies continuity of $f^{-1}$. But any closed subset of $X$ is also compact, and its image in $Y$ under the continuous map $f$ is also compact, hence closed since $Y$ is Hausdorff. In particular, if $Z$ is a set with nested compact Hausdorff topologies ${\mathcal T}_1\subset{\mathcal T}_2$, then ${\rm id}_Z: (Z,{\mathcal T}_2)\to (Z,{\mathcal T}_1)$ is a continuous bijection, hence homeomorphism, i.e.\ ${\mathcal T}_1={\mathcal T}_2$. \end{remark} {\beta}gin{prop}{\lambda}bel{prop:Ktopl1} Let ${\mathcal K}$ be a tame Kuranishi atlas. 
{\beta}gin{enumerate} \item For any subset ${\mathcal A}\subset {\rm Obj}_{{\bf B}_{\mathcal K}}$ the identity map ${\rm id}_{{\partial}i_{\mathcal K}({\mathcal A})}: |{\mathcal A}| \to \|{\mathcal A}\|$ is continuous. \item If ${\mathcal A} \sqsubset {\rm Obj}_{{\bf B}_{\mathcal K}}$ is precompact, then both $|\overline{\mathcal A}|$ and $\|\overline{\mathcal A}\|$ are compact. In fact, the quotient and subspace topologies on ${\partial}i_{\mathcal K}(\overline{\mathcal A})$ coincide, that is $|\overline{\mathcal A}|=\|\overline{\mathcal A}\|$ as topological spaces. \item If ${\mathcal A} \sqsubset {\mathcal A}' \subset {\rm Obj}_{{\bf B}_{\mathcal K}}$, then ${\partial}i_{\mathcal K}(\overline{{\mathcal A}}) = \overline{{\partial}i_{\mathcal K}({\mathcal A})}$ and ${\partial}i_{\mathcal K}({\mathcal A}) \sqsubset {\partial}i_{\mathcal K}({\mathcal A}')$ in the topological space $|{\mathcal K}|$. \item If ${\mathcal A} \sqsubset {\rm Obj}_{{\bf B}_{\mathcal K}}$ is precompact, then $\|\overline{{\mathcal A}}\|=|\overline{\mathcal A}|$ is metrizable; in particular this implies that $\|{\mathcal A}\|$ is metrizable. \end{enumerate} \end{prop} {\beta}gin{proof} To prove (i) recall that openness of ${\mathcal U}\subset{\partial}i_{\mathcal K}({\mathcal A})$ in the subspace topology implies the existence of an open subset ${\mathcal W} \subset |{\mathcal K}|$ with ${\mathcal W} \cap {\partial}i_{\mathcal K}({\mathcal A})={\mathcal U}$. Then we have ${\mathcal A} \cap {\partial}i_{\mathcal K}^{-1}({\mathcal U})={\mathcal A} \cap {\partial}i_{\mathcal K}^{-1}({\mathcal W})$, where ${\partial}i_{\mathcal K}^{-1}({\mathcal W}) \subset \bigcup_{I\in{\mathcal I}_{\mathcal K}}U_I$ is open by definition of the quotient topology on $|{\mathcal K}|$. However, that exactly implies openness of ${\mathcal A} \cap {\partial}i_{\mathcal K}^{-1}({\mathcal U})\subset{\mathcal A}$ and thus of ${\mathcal U}$ in the quotient topology. This proves continuity. The compactness assertions in (ii) follow from the compactness of $\overline{\mathcal A}$ together with the fact that both ${\partial}i_{\mathcal K}: {\mathcal A} \to |{\mathcal K}|$ and ${\partial}i_{\mathcal K}: {\mathcal A} \to {{\mathcal A}}/{{\sigma}m}$ are continuous maps. Moreover, $\|{\mathcal A}\|$ is Hausdorff because its topology is induced by the Hausdorff topology on $|{\mathcal K}|$. Therefore the identity map $|\overline{\mathcal A}|\to\|\overline{\mathcal A}\|$ is a continuous bijection from a compact space to a Hausdorff space, and hence a homeomorphism by Remark~\ref{rmk:hom}, which proves the equality of topologies. In (iii), the continuity of ${\partial}i_{\mathcal K}$ implies ${\partial}i_{\mathcal K}(\overline {\mathcal A})\subset \overline{{\partial}i_{\mathcal K}({\mathcal A})}$ for the closure in $|{\mathcal K}|$. On the other hand, the compactness of $\overline{\mathcal A}$ implies that ${\partial}i_{\mathcal K}(\overline {\mathcal A})$ is compact by (ii), in particular it is closed and contains ${\partial}i_{\mathcal K}({\mathcal A})$, hence also contains $ \overline{{\partial}i_{\mathcal K}({\mathcal A})}$. This proves equality ${\partial}i_{\mathcal K}(\overline {\mathcal A})=\overline{{\partial}i_{\mathcal K}({\mathcal A})}$. The last claim of (iii) then holds because $\overline{{\partial}i_{\mathcal K}({\mathcal A})}= {\partial}i_{\mathcal K}(\overline {\mathcal A})\subset {\partial}i_{\mathcal K}({\mathcal A}')$, and ${\partial}i_{\mathcal K}(\overline {\mathcal A})$ is compact by (ii). 
To prove the metrizability in (iv), we will use Urysohn's metrization theorem, which says that any regular and second countable topological space is metrizable. Here $\|\overline{{\mathcal A}}\|\subset |{\mathcal K}|$ is regular (i.e. points and closed sets have disjoint neighbourhoods) since it is a compact subset of a Hausdorff space. So it remains to establish second countability, i.e.\ to find a countable base for the topology, namely a countable collection of open sets, such that any other open set can be written as a union of part of the collection. For that purpose first recall that each $U_I$ is a manifold, so is second countable by definition. This property is inherited by the subsets $\overline{A}_I\subset U_I$ for $I\in{\mathcal I}_{\mathcal K}$, and by their images ${\partial}i_{\mathcal K}(\overline{A}_I)\subset|{\mathcal K}|$ via the homeomorphisms ${\partial}i_{\mathcal K}|_{U_I}$ of Proposition~\ref{prop:Khomeo}. Moreover, each ${\partial}i_{\mathcal K}(\overline{A}_I)$ is compact since it is the image under the continuous map ${\partial}i_{\mathcal K}$ of the closed subset $\overline{A}_I=\overline{{\mathcal A}}\cap U_I$ of the compact set $\overline{{\mathcal A}}$. So, in order to prove second countability of the finite union $\|\overline{{\mathcal A}}\|=\bigcup_{I\in{\mathcal I}_{\mathcal K}} {\partial}i_{\mathcal K}(\overline{A}_I)$ iteratively, it remains to establish second countability for a union of two compact second countable subsets, as follows. { } {\mathbb N}I {\bf Claim:} Let $B,C \subset Y$ be compact subsets of a Hausdorff space $Y$ such that $B,C $ are second countable in their subspace topologies. Then $B \cup C$ is second countable in the subspace topology. { } To prove this claim, let $(V_i^B)_{i\in{\mathbb N}}$ resp.\ $(V_i^C)_{i\in{\mathbb N}}$ be countable neighbourhood bases for $B$ and $C$. Then $(V_i^B \cap (B{\smallsetminus} C))_{i\in{\mathbb N}}$ resp.\ $(V_i^C \cap (C{\smallsetminus} B))_{i\in{\mathbb N}}$ are countable neighbourhood bases for the open subsets $B{\smallsetminus} C \subset B$ resp.\ $C{\smallsetminus} B \subset C$. To finish the construction of a countable neighborhood basis for $B\cup C$ it then suffices to find a countable collection of open sets $W_j\subset B\cup C$ with the property {\beta}gin{equation}{\lambda}bel{eq:RAB} R\subset B\cup C \quad\text{open}\qquad \Longrightarrow \qquad R\cap (B\cap C) \; \subset {\textstyle {\underline{n}}derlineerset{W_j\subset R}\bigcup} W_j . \end{equation} For then $R$ will be the union of these $W_j$ together with all the sets $V^B_i{\smallsetminus} C$ and $V^C_i{\smallsetminus} B$ that are contained in $R$. To construct the $W_j$, choose metrics $d^B, d^C$ on $B,C$ respectively (which are guaranteed by Urysohn's metrization theorem). Since $B\cap C$ is compact, the restrictions of the metrics to this intersection are equivalent, i.e.\ there is ${\kappa}ppa>1$ such that $$ \tfrac 1{\kappa}ppa d^B(x,y)\le d^C(x,y)\le {\kappa}ppa \ d^B(x,y) \qquad\forall \ x,y\in B\cap C. $$ For any subset $S\subset B$ and ${\varepsilon}>0$ denote the ${\varepsilon}$-neighbourhood of $S$ in $B$ by $$ {\mathbb N}n^B_{\varepsilon}(S): = \bigl\{ y\in B \,\big|\, \ {\textstyle \inf_{s\in S}} d^B(y,s)<{\varepsilon} \bigr\} $$ and similarly define ${\mathbb N}n^C_{\varepsilon}(T)$ for $T\subset C$. These are open sets by the triangle inequality. 
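In more detail, if $y\in{\mathbb N}n^B_{\varepsilon}(S)$ then $d^B(y,s)<{\varepsilon}$ for some $s\in S$, and any $y'\in B$ with $d^B(y',y)<{\varepsilon}-d^B(y,s)$ satisfies $d^B(y',s)\leq d^B(y',y)+d^B(y,s)<{\varepsilon}$; thus ${\mathbb N}n^B_{\varepsilon}(S)$ contains the ball in $B$ of radius ${\varepsilon}-d^B(y,s)$ around $y$, and the same argument applies to ${\mathbb N}n^C_{\varepsilon}(T)$.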
Moreover, the triangle inequality for $d^C$ together with the above equivalence of metrics gives the following nesting of neighbourhoods for all $S\subset B\cap C$ and ${\varepsilon},{\delta}lta>0$, {\beta}gin{equation} {\lambda}bel{skewtriangle} {\mathbb N}n^C_{\varepsilon}\bigl({\mathbb N}n^B_{\delta}(S)\cap C\bigr) \;\subset\; {\mathbb N}n^C_{{\kappa}ppa{\delta}+{\varepsilon}}(S). \end{equation} Now for any $S \subset B\cap C$ and ${\varepsilon}>0$ we define an ${\varepsilon}$-neighbourhood in $B\cup C$ by $$ W_{\varepsilon}(S)\,:=\; {\mathbb N}n^B_{\varepsilon}(S) \;\cup\; \Bigl({\mathbb N}n^C_{\varepsilon}\bigl({\mathbb N}n^B_{\varepsilon}(S)\cap C)\bigr) \;{\smallsetminus}\; \bigl( B \,{\smallsetminus}\, {\mathbb N}n^B_{\varepsilon}(S)\bigr)\Bigr) \;\subset\; B\cup C. $$ Note here that $S = S\cap B\subset {\mathbb N}n^B_{\varepsilon}(S)$ already implies the inclusion $S \subset W_{\varepsilon}(S)$. Moreover, the definition is made to satisfy the nesting property {\beta}gin{equation} {\lambda}bel{eq:incl} T \subset S \quad\Longrightarrow \quad W_{\varepsilon}(T) \subset W_{\varepsilon}(S) . \end{equation} To see that $W_{\varepsilon}(S)\subset B\cup C$ is open, it suffices to check relative openness of the intersections with $B$ and $C$, since then both $B{\smallsetminus} W_{\varepsilon}(S)$ and $C{\smallsetminus} W_{\varepsilon}(S)$ are compact in the relative topology, so is their union $(B{\smallsetminus} W_{\varepsilon}(S))\cup(C{\smallsetminus} W_{\varepsilon}(S))= (B\cup C){\smallsetminus} W_{\varepsilon}(S)$, and hence $W_{\varepsilon}(S)\subset B\cup C$ is the complement of a closed subset. Indeed, $W_{\varepsilon}(S) \cap B = {\mathbb N}n^B_{\varepsilon}(S)$ is open since the ${\varepsilon}$-neighbourhoods were constructed open, and we use the inclusion $T\subset {\mathbb N}n^C_{\varepsilon}(T)$ for $T={\mathbb N}n^B_{\varepsilon}(S)\cap C$ to express $$ W_{\varepsilon}(S) \cap C \;=\; {\mathbb N}n^C_{\varepsilon}\bigl({\mathbb N}n^B_{\varepsilon}(S)\cap C)\bigr) \;{\smallsetminus}\; \bigl(B \,{\smallsetminus}\, {\mathbb N}n^B_{\varepsilon}(S)\bigr) $$ as complement of a closed set in an open set. This shows openness of $W_{\varepsilon}(S) \cap C$ and hence of $W_{\varepsilon}(S)$. Moreover, the equivalence of metrics gives for any $S\subset B\cap C$ $$ W_{\varepsilon}(S)\cap C\;\subset\; {\mathbb N}n^C_{\varepsilon}\bigl({\mathbb N}n^B_{\varepsilon}(S)\cap C)\bigr) \;\subset\; {\mathbb N}n^C_{\varepsilon}\bigl({\mathbb N}n^C_{{\kappa}ppa{\varepsilon}}(S)\cap C)\bigr) \;\subset\; {\mathbb N}n^C_{({\kappa}ppa+1){\varepsilon}}(S). $$ Next, recall that compact metrizable spaces are separable. Since we can equip $B\cap C$ with either metric $d^B$ or $d^C$, we obtain a dense sequence $(x_n)_{n\in{\mathbb N}}\subset B\cap C$. Now we claim that the countable collection $$ ( W_j )_{j\in{\mathbb N}} \;=\; \bigl( W_{1/m}(x_n) \bigr)_{m,n\in{\mathbb N}} $$ satisfies \eqref{eq:RAB}. To see this, we must check that for every $r\in R\cap (B\cap C)$ there is $m,n\in{\mathbb N}$ with $r\in W_{1/m}(x_n)$. For that purpose first choose ${\varepsilon}>0$ so that ${\mathbb N}n^B_{\varepsilon}(r)\subset R\cap B$ and ${\mathbb N}n^C_{\varepsilon}(r)\subset R\cap C$. Then choose $m\in{\mathbb N}$ so that $m>(2{\kappa}ppa+1)/{\varepsilon}$, and choose $x_n\in {\mathbb N}n^B_{1/m}(r)$. 
Then we use \eqref{eq:incl} and the equivalence of metrics to obtain the inclusion {\beta}gin{align*} W_{1/m}(x_n) &\;\subset\; W_{1/m}\bigl({\mathbb N}n^B_{1/m}(r)\bigr) \\ & \;\subset\; {\mathbb N}n^B_{1/m}\bigl({\mathbb N}n^B_{1/m}(r)\bigr)\; \cup \; {\mathbb N}n^C_{1/m}\bigl({\mathbb N}n^B_{1/m}\bigl({\mathbb N}n^B_{1/m}(r)\bigr) \cap C\bigr)\\ &\; \subset \; {\mathbb N}n^B_{2/m}(r)\; \cup \; {\mathbb N}n^C_{(2{\kappa}ppa+1)/m}(r) \; \subset \; {\mathbb N}n^B_{{\varepsilon}}(r)\; \cup \; {\mathbb N}n^C_{{\varepsilon}}(r) \;\subset\; R \end{align*} as required. This proves \eqref{eq:RAB} and hence the claim, which finishes the proof of (iv). In particular, $\|{\mathcal A}\|$ is metrizable in the subspace topology, by restriction of a metric on ${\partial}i_{\mathcal K}(\overline{\mathcal A})\subset|{\mathcal K}|$. \end{proof} \subsection{Shrinkings and tameness} {\lambda}bel{ss:shrink} \hspace{1mm}\\ The purpose of this section is to prove Theorem~\ref{thm:K} and Proposition~\ref{prop:metric} by giving a general construction of a metric, tame Kuranishi atlas starting from an additive, weak Kuranishi atlas. The construction will be a suitable shrinking of the footprints along with the domains of charts and transition maps, as follows. {\beta}gin{defn}{\lambda}bel{def:shr0} Let $(F_i)_{i=1,\ldots,N}$ be an open cover of a compact space $X$. We say that $(F_i')_{i=1,\ldots,N}$ is a {\bf shrinking} of $(F_i)$ if $F_i'\sqsubset F_i$ are precompact open subsets, which cover $X= \bigcup_{i=1,\ldots,N} F'_i$, and are such that for all subsets $I\subset \{1,\ldots,N\}$ we have {\beta}gin{equation} {\lambda}bel{same FI} F_I: = {\textstyle\bigcap_{i\in I}} F_i \;\ne\; \emptyset \qquad\Longrightarrow\qquad F'_I: = {\textstyle\bigcap_{i\in I}} F'_i \;\ne\; \emptyset . \end{equation} \end{defn} Recall here that precompactness $V'\sqsubset V$ is defined as the relative closure of $V'$ in $V$ being compact. If $V$ is contained in a compact space $X$, then $V'\sqsubset V$ is equivalent to the closure $\overline{V'}$ in the ambient space being contained in $V$. {\beta}gin{defn}{\lambda}bel{def:shr} Let ${\mathcal K}=({\bf K}_I,\widehat\Phi_{I J})_{I, J\in{\mathcal I}_{\mathcal K}, I\subsetneq J}$ be a weak Kuranishi atlas. We say that a weak Kuranishi atlas ${\mathcal K}'=({\bf K}_I',\widehat\Phi_{I J}')_{I, J\in{\mathcal I}_{{\mathcal K}'}, I\subsetneq J}$ is a {\bf shrinking} of ${\mathcal K}$~if {\beta}gin{enumerate} \item the footprint cover $(F_i')_{i=1,\ldots,N'}$ is a shrinking of the cover $(F_i)_{i=1,\ldots,N}$, in particular the numbers $N=N'$ of basic charts agree, and so do the index sets ${\mathcal I}_{{\mathcal K}'} = {\mathcal I}_{\mathcal K}$; \item for each $I\in{\mathcal I}_{\mathcal K}$ the chart ${\bf K}'_I$ is the restriction of ${\bf K}_I$ to a precompact domain $U_I'\subset U_I$ as in Definition \ref{def:restr}; \item for each $I,J\in{\mathcal I}_{\mathcal K}$ with $I\subsetneq J$ the coordinate change $\widehat\Phi_{IJ}'$ is the restriction of $\widehat\Phi_{IJ}$ to the open subset $U'_{IJ}: = {\partial}hi_{IJ}^{-1}(U'_J)\cap U'_I$ as in Lemma~\ref{le:restrchange}. \end{enumerate} \end{defn} {\beta}gin{rmk}\rm {\lambda}bel{rmk:shrink} (i) Note that any shrinking of an additive weak Kuranishi atlas preserves the weak cocycle condition (since it only requires equality on overlaps) and also the additivity condition. 
Moreover, a shrinking is determined by the choice of domains $U'_I\sqsubset U_I$ and so can be considered as the restriction of ${\mathcal K}$ to the subset $\bigcup_{I\in{\mathcal I}_{\mathcal K}} U_I'\subset{\rm Obj}_{{\mathcal B}_{\mathcal K}}$. However, for a shrinking to satisfy a stronger form of the cocycle condition (such as tameness) the domains $U'_{IJ}:= {\partial}hi_{IJ}^{-1}(U'_J)\cap U'_I$ of the coordinate changes must satisfy appropriate compatibility conditions, so that the domains $U_I'$ can no longer be chosen independently of each other. Since the relevant conditions are expressed in terms of the $U_{IJ}'$, we will find that the construction of a tame shrinking in Proposition~\ref{prop:proper} can be achieved by iterative choice of these sets $U_{IJ}'$. These will form shrinkings in each step, though we prove it only up to the level of the iteration. Our construction is made possible only because of the additivity conditions on ${\mathcal K}$. { } {\mathbb N}I (ii) Given two tame shrinkings ${\mathcal K}^0$ and ${\mathcal K}^1$ of the same weak Kuranishi atlas, one might hope to obtain a ``common refinement'' ${\mathcal K}^{01}$ by intersection $U^{01}_{IJ}:=U^0_{IJ}\cap U^1_{IJ}$ of the domains. This could in particular simplify the proof of compatibility of the Kuranishi atlases ${\mathcal K}^0,{\mathcal K}^1$ in Proposition~\ref{prop:cobord2}. However, for this to be a valid approach the footprint covers $(F^0_i)_{i=1,\ldots,N}$ and $(F^1_i)_{i=1,\ldots,N}$ would have to be comparable in the sense that their intersections still cover $X=\bigcup_{i=1,\ldots,N} (F^0_i\cap F^1_i)$ and have the same index set, i.e.\ $F^0_I \cap F^1_I \neq \emptyset$ for all $I\in{\mathcal I}_{\mathcal K}$. Once this is satisfied, one can check that ${\mathcal K}^{01}$ defines another tame shrinking of ${\mathcal K}$. \end{rmk} We can now prove the main result of this section. Note that by the above remark, the main challenge is to achieve the tameness conditions \eqref{eq:tame1}, \eqref{eq:tame2}. {\beta}gin{prop} {\lambda}bel{prop:proper} Every additive weak Kuranishi atlas ${\mathcal K}$ has a shrinking ${\mathcal K}'$ that is a tame Kuranishi atlas -- for short called a {\bf tame shrinking}. \end{prop} {\beta}gin{proof} Since $X$ is compact and metrizable and the footprint open cover $(F_i)$ is finite, it has a shrinking $(F_i')$ in the sense of Definition~\ref{def:shr0}. In particular we can ensure that $F_I'\ne \emptyset$ whenever $F_I\ne \emptyset$ by choosing ${\delta}>0$ so that every nonempty $F_I$ contains some ball $B_{\delta}(x_I)$ and then choosing the $F_i'$ to contain $B_{{\delta}/2}(x_I)$ for each $I\ni i$ (i.e.\ $F_I\subset F_i$). Then we obtain $F'_I\neq\emptyset$ for all $I\in {\mathcal I}_{\mathcal K}$ since $B_{{\delta}/2}(x_I)\subset \bigcap_{i\in I} F_i =F_I'$. In another preliminary step we now find precompact open subsets $U_I^{(0)}\sqsubset U_I$ and open sets $U_{IJ}^{(0)}\subset U_{IJ}\cap U_I^{(0)}$ for all $I,J\in{\mathcal I}_{\mathcal K}$ such that {\beta}gin{equation}{\lambda}bel{eq:U(0)} U_I^{(0)}\cap s_I^{-1}(0) = {\partial}si_I^{-1}(F_I'),\qquad U_{IJ}^{(0)}\cap s_I^{-1}(0) = {\partial}si_I^{-1}(F'_I\cap F_J'). \end{equation} The restricted domains $U_I^{(0)}$ are provided by Lemma~\ref{le:restr0}. Next, we may take {\beta}gin{equation}{\lambda}bel{eq:UIJ(0)} U_{IJ}^{(0)}:= U_{IJ}\cap U_I^{(0)}\cap {\partial}hi_{IJ}^{-1}( U_J^{(0)}), \end{equation} which is open because $U_J^{(0)}$ is open and ${\partial}hi_{IJ}$ is continuous. 
It has the required footprint $$ U_{IJ}^{(0)}\cap s_I^{-1}(0) =U_{IJ}\cap {\partial}si_I^{-1}(F_I')\cap {\partial}hi_{IJ}^{-1} \bigl({\partial}si_J^{-1}(F_J')\bigr) = {\partial}si_I^{-1}(F_I'\cap F_J') = {\partial}si_I^{-1}(F_J'). $$ Therefore, this defines an additive weak Kuranishi atlas with footprints $F_I'$, which satisfies the conditions of Definition~\ref{def:shr} and so is a shrinking of ${\mathcal K}$. We will construct the required shrinking ${\mathcal K}'$ by choosing possibly smaller domains $U_I'\subset U_I^{(0)}$ and $U_{IJ}'\subset U_{IJ}^{(0)}$ but will preserve the footprints $F_I'$. We will also arrange $U_{IJ}' = U_I'\cap {\partial}hi_{IJ}^{-1}(U_J')$, so that ${\mathcal K}'$ is a shrinking of the original ${\mathcal K}$. Since ${\mathcal K}'$ is automatically additive, we just need to make sure that it satisfies the tameness conditions~\eqref{eq:tame1} and~\eqref{eq:tame2}. By Lemma \ref{le:tame0} it will then satisfy the cocycle condition and hence will be a Kuranishi atlas. We will construct the domains $U_I', U_{IJ}'$ by a finite iteration, starting with $U_I^{(0)}, U_{IJ}^{(0)}$. Here we streamline the notation by setting $U_{I}^{(k)}:=U_{II}^{(k)}$ and extend the notation to all pairs of subsets $I\subset J\subset\{1,\ldots,N\}$ by setting $U_{IJ}^{(k)}=\emptyset$ if $J\notin{\mathcal I}_{\mathcal K}$. (Note that $J\in{\mathcal I}_{\mathcal K}$ and $I\subset J$ implies $I\in{\mathcal I}_{\mathcal K}$.) Then in the $k$-th step we will construct open subsets $U_{IJ}^{(k)}\subset U_{IJ}^{(k-1)}$ for all $I\subset J\subset\{1,\ldots,N\}$ such that the following holds. {\beta}gin{enumerate} \item The zero set conditions $U_{IJ}^{(k)}\cap s_I^{-1}(0) = {\partial}si_I^{-1}(F_J')$ hold for all $I\subset J$. \item The first tameness condition \eqref{eq:tame1} holds for all $I\subset J,K$ with $|I|\le k$, that is $$ U_{IJ}^{(k)}\cap U_{IK}^{(k)}= U_{I (J\cup K)}^{(k)} . $$ In particular, we have $U_{IK}^{(k)} \subset U_{IJ}^{(k)}$ for $I\subset J \subset K$ with $|I|\le k$. \item The second tameness condition \eqref{eq:tame2} holds for all $I\subset J\subset K$ with $|I|\le k$, that is $$ {\partial}hi_{IJ}(U_{IK}^{(k)}) = U_{JK}^{(k)}\cap s_J^{-1}(E_I) . $$ In particular we have ${\partial}hi_{IJ}(U_{IJ}^{(k)}) = U_{J}^{(k)}\cap s_J^{-1}(E_I)$ for all $I\subset J$ with $|I|\le k$. \end{enumerate}{ } Note that after the $k$-th step, the domains $U^{(k)}_{IJ}$ form a shrinking ``up to order $k$'' in the sense that {\beta}gin{equation}{\lambda}bel{kclaim} U_{IJ}^{(k)} = U_I^{(k)} \cap {\partial}hi_{IJ}^{-1}(U_J^{(k)}) \qquad \forall\; |I|\le k , I\subsetneq J . \end{equation} Indeed, for any such pair $I\subsetneq J$, property (iii) with $J=K$ implies $$ {\partial}hi_{IJ}(U_{IJ}^{(k)}) \;=\; U_{J}^{(k)}\cap s_J^{-1}(E_I) \;=\; U_{J}^{(k)}\cap {\rm im\,}{\partial}hi_{IJ}, $$ where the second equality is due to the first implying $U_{J}^{(k)}\cap s_J^{-1}(E_I) \subset {\rm im\,}{\partial}hi_{IJ}$, and ${\rm im\,} {\partial}hi_{IJ}\subset s_J^{-1}(E_I)$, which follows from $s_J\circ{\partial}hi_{IJ}=\widehat{\partial}hi_{IJ}\circ s_I$. Since ${\partial}hi_{IJ}$ is injective, this implies $U_{IJ}^{(k)}={\partial}hi_{IJ}^{-1}(U_J^{(k)})$. Now \eqref{kclaim} follows since (ii) with $K=I$ implies $U_{IJ}^{(k)} \subset U_{II}^{(k)} = U_I^{(k)}$. Thus, when the iteration is complete, that is when $k=M: =\max_{I\in {\mathcal I}_{\mathcal K}} |I|$, then ${\mathcal K}'$ is a shrinking of ${\mathcal K}$. 
Moreover, the tameness conditions hold on ${\mathcal K}'$ by (ii) and (iii), and Lemma~\ref{le:tame0} implies that ${\mathcal K}'$ satisfies the strong cocycle condition. Hence ${\mathcal K}'$ is the desired tame Kuranishi atlas. So it remains to implement the iteration. Our above choice of the domains $U_{IJ}^{(0)}$ completes the $0$-th step since conditions (ii) (iii) are vacuous. Now suppose that the $(k-1)$-th step is complete for some $k\geq 1$. Then we define $U_{IJ}^{(k)}:=U_{IJ}^{(k-1)}$ for all $I\subset J$ with $|I|\leq k-1$. For $|I|=k$ we also set $U_{II}^{(k)}:=U_{II}^{(k-1)}$. This ensures that (i) and (ii) continue to hold for $|I|<k$. In order to preserve (iii) for triples $H\subset I\subset J$ with $|H|<k$ we then require that the intersection $U_{IJ}^{(k)}\cap s_I^{-1}(E_H)= U_{IJ}^{(k-1)}\cap s_I^{-1}(E_H)$ is fixed. In case $H=\emptyset$, this is condition (i), and since $U_{IJ}^{(k)}\subset U_{IJ}^{(k-1)}$ it can generally be phrased as inclusion (i$'$) below. With that it remains to construct the open sets $U_{IJ}^{(k)}\subset U_{IJ}^{(k-1)}$ as follows. {\beta}gin{itemize} \item[(i$'$)] For all $H\subsetneq I\subset J$ with $|H|<k$ and $|I|\geq k$ we have $U_{IJ}^{(k-1)}\cap s_I^{-1}(E_H)\subset U_{IJ}^{(k)}$. Here we include $H=\emptyset$, in which case the condition says that $U_{IJ}^{(k-1)}\cap s_I^{-1}(0)\subset U_{IJ}^{(k)}$ (which implies $U_{IJ}^{(k)}\cap s_I^{-1}(0) = {\partial}si_I^{-1}(F_J')$, as explained above). \item[(ii$'$)] For all $I\subset J,K$ with $|I|= k$ we have $U_{IJ}^{(k)}\cap U_{IK}^{(k)}= U_{I (J\cup K)}^{(k)}$. \item[(iii$'$)] For all $I\subsetneq J\subset K$ with $|I|=k$ we have ${\partial}hi_{IJ}(U_{IK}^{(k)}) = U_{JK}^{(k)}\cap s_J^{-1}(E_I)$. \end{itemize} Here we also used the fact that (iii) is automatically satisfied for $I=J$ and so stated (iii$'$) only for $J\supsetneq I$. By construction, the domains $U^{(k)}_{II}$ for $|I|=k$ already satisfy (i$'$), so we may now do this iteration step in two stages: { }{\mathbb N}I {\bf Step A}\; will construct $U_{IK}^{(k)}$ for $|I|=k$ and $I\subsetneq K$ satisfying (i$'$),(ii$'$) and {\beta}gin{itemize} \item[(iii$''$)] $U_{IK}^{(k)} \subset {\partial}hi_{IJ}^{-1}(U_{JK}^{(k-1)})$ for all $I\subsetneq J\subset K$ . \end{itemize} {\bf Step B} will construct $U_{JK}^{(k)}$ for $|J|>k$ and $J\subset K$ satisfying (i$'$) and (iii$'$). { } {\mathbb N}I {\bf Step A:} We will accomplish this construction by applying Lemma \ref{le:set} below for fixed $I$ with $|I|=k$ to the complete metric space $U:=U_I$, its precompact open subset $U':=U^{(k)}_{II}$, the relatively closed subset $$ Z := \bigcup_{H\subsetneq I} \bigl( U_{II}^{(k-1)} \cap s_I^{-1}(E_H) \bigr) \;=\; \bigcup_{H\subsetneq I} {\partial}hi_{HI}\bigl(U_{HI}^{(k-1)}\bigr) \;\subset\; U' $$ and the relatively open subsets for all $i\in\{1,\ldots,N\}{\smallsetminus} I$ $$ Z_i := U^{(k-1)}_{I (I\cup\{i\})} \cap Z \;=\; \bigcup_{H\subsetneq I} \bigl( U_{I(I\cup\{i\})}^{(k-1)} \cap s_I^{-1}(E_H) \bigr) \;=\; \bigcup_{H\subsetneq I} {\partial}hi_{HI}\bigl(U^{(k-1)}_{H (I\cup\{i\})} \bigr) . $$ Here, by slight abuse of language, we define ${\partial}hi_{\emptyset I}\bigl(U_{\emptyset J}^{(k-1)}\bigr): = U_{IJ}^{(k-1)} \cap s_I^{-1}(0) $. Note that $Z\subset U'$ is relatively closed since $U_{II}^{(k-1)}=U_{II}^{(k)}=U'$, and $Z_i\subset Z$ is relatively open since $U^{(k-1)}_{I (I\cup\{i\})} \subset U'$ is open. 
Also the above identities for $Z$ and $Z_i$ in terms of ${\partial}hi_{HI}$ follow from (iii) for the triples $H\subset I \subset I$ and $H\subset I \subset I\cup\{i\}$ with $|H|<|I|=k$. To understand the choice of these subsets, note that in case $k=1$ with $I=\{i_0\}$ the set $Z$ is given by $H=\emptyset$ and $U^{(0)}_{II}=U^{(0)}_{i_0} \sqsubset U_{i_0}$, hence $Z = U^{(0)}_{i_0} \cap s_{i_0}^{-1}(0) = {\partial}si_{i_0}^{-1}(F_{i_0}')$ is the preimage of the shrunk footprint and for $i\neq{i_0}$ we have $Z_{i} = {\partial}si_{i_0}^{-1}(F_{i_0}'\cap F'_i)$. When $k\ge 1$, the index sets $K\subset\{1,\ldots,N\}$ containing $I$ as proper subset are in one-to-one correspondence with the nonempty index sets $K'\subset\{1,\ldots,N\}{\smallsetminus} I$ via $K=I\cup K'$ and give rise to the relatively open sets $$ Z_{K'} \,:=\; \bigcap_{i\in K'} Z_i \;=\; Z \cap \bigcap_{i\in K'} U^{(k-1)}_{I (I\cup\{i\})} \;=\; Z \cap U^{(k-1)}_{IK} . $$ Here we used (ii) for $|H| < k$. We may also use the identity $Z_i=\bigcup_{H\subsetneq I} \ldots$ together with (ii) and (iii) for $|H|<k$ to identify these sets with {\beta}gin{equation}{\lambda}bel{eq:ZzK} Z_{K'} = \bigcup_{H\subsetneq I} {\partial}hi_{HI}\Bigl(\; {\underline{n}}derlineerset{i\in K'}{\textstyle \bigcap} U^{(k-1)}_{H (I\cup\{i\})} \Bigr) = \bigcup_{H\subsetneq I} {\partial}hi_{HI}\bigl( U^{(k-1)}_{H (I\cup K')} \bigr) = \bigcup_{H\subsetneq I} \bigl( U_{IK}^{(k-1)} \cap s_I^{-1}(E_H) \bigr) , \end{equation} which explains their usefulness: If we construct the new domains such that $Z_{K'}\subset U_{IK}^{(k)}$, then this implies the inclusion $U_{IK}^{(k-1)}\cap s_I^{-1}(E_H) \subset Z_{K'} \subset U_{IK}^{(k)}$ required by (i$'$). Finally, in order to achieve the inclusion condition $U_{IK}^{(k)} \subset {\partial}hi_{IJ}^{-1}(U_{JK}^{(k-1)})$ of (iii$''$), we fix the open subsets $W_{K'}$ for all $I\subsetneq K = I \cup K'$ as {\beta}gin{equation}{\lambda}bel{eq:WwK} W_{K'}: = \bigcap_{I\subset J\subset K} \bigl( {\partial}hi_{IJ}^{-1} (U^{(k-1)}_{JK}) \cap U^{(k-1)}_{IJ} \bigr) \;\subset\; U' . \end{equation} If we require $U_{IK}^{(k)}\subset W_{K'}$ then this ensures (iii$''$) as well as $U_{IK}^{(k)}\subset U_{IK}^{(k-1)}$. The latter follows from the inclusion $W_{K'}\subset U_{IK}^{(k-1)}$, which holds by definition \eqref{eq:WwK} with $J=K$. Now if we can ensure that $W_{K'}\cap Z = Z_{K'}$, then Lemma \ref{le:set} provides choices of open subsets $U_{IK}^{(k)}\subset U'$ satisfying (ii$'$) and the inclusions $Z_{K'}\subset U_{IK}^{(k)}\subset W_{K'}$. The latter imply (i$'$) and the desired inclusion (iii$''$), as discussed above. Hence it remains to check that the sets $W_{K'}$ in \eqref{eq:WwK} do satisfy the conditions $W_{K'}\cap Z = Z_{K'}$. To verify this, first note that $W_{K'}$ is contained in $U^{(k-1)}_{IJ}$ for all $J\supset I$, in particular for $J=K$, and hence $$ W_{K'}\cap Z \;\subset\; U^{(k-1)}_{IK} \cap Z \;=\; Z_{K'}. $$ It remains to check $Z_{K'} \subset W_{K'}$. By~\eqref{eq:WwK} and the expression for $Z_{K'}$ in the middle of~\eqref{eq:ZzK}, it suffices to show that for all $H\subsetneq I \subset J \subset K$ $$ {\partial}hi_{HI}\bigl( U^{(k-1)}_{H K} \bigr) \;\subset\; {\partial}hi_{IJ}^{-1} (U^{(k-1)}_{JK}) \cap U^{(k-1)}_{IJ} . 
$$ But, (ii) for $H\subset J\subset K$ and (iii) for $H\subset I \subset J$ imply $$ {\partial}hi_{HI}( U^{(k-1)}_{H K} ) \subset {\partial}hi_{HI}( U^{(k-1)}_{H J} ) \subset U^{(k-1)}_{IJ}, $$ so it remains to check that ${\partial}hi_{HI}\bigl( U^{(k-1)}_{H K} \bigr) \subset {\partial}hi_{IJ}^{-1} ( U^{(k-1)}_{JK} )$. For that purpose we will use the weak cocycle condition ${\partial}hi_{IJ}^{-1}\circ{\partial}hi_{HJ}={\partial}hi_{HI}$ on $U_{H J}\cap {\partial}hi_{HI}(U_{IJ})$. Note that $U^{(k-1)}_{H K}$ lies in this domain since, by (ii) for $|H|<k$, it is a subset of $U^{(k-1)}_{H J}$, which by (iii) for $H\subset I\subset J$ is contained in ${\partial}hi_{HI}^{-1}(U_{IJ})$. This proves the first equality in $$ {\partial}hi_{HI}\bigl( U^{(k-1)}_{H K} \bigr) ={\partial}hi_{IJ}^{-1}\bigl( {\partial}hi_{HJ}\bigl( U^{(k-1)}_{H K} \bigr) \bigr) \subset {\partial}hi_{IJ}^{-1} ( U^{(k-1)}_{JK} ), $$ and the last inclusion holds by (iii) for $H\subset J\subset K$. This finishes Step A. { }{\mathbb N}I {\bf Step B:} The crucial requirement on the construction of the open sets $U_{JK}^{(k)}\subset U_{JK}^{(k-1)}$ for $|J|\geq k+1$ and $J\subset K$ is (iii$'$), that is $$ U_{JK}^{(k)}\cap s_J^{-1}(E_I) = {\partial}hi_{IJ}(U_{IK}^{(k)}) $$ for all $I\subsetneq J$ with $|I|=k$. Here $U_{IK}^{(k)}$ is fixed by Step A and satisfies ${\partial}hi_{IJ}(U_{IK}^{(k)})\subset U_{JK}^{(k-1)}\cap s_J^{-1}(E_I)$ by (iii$''$), where the second part of the inclusion is automatic by ${\partial}hi_{IJ}$ mapping to $s_J^{-1}(E_I)$. Hence the maximal subsets $U_{JK}^{(k)}\subset U_{JK}^{(k-1)}$ satisfying (iii$'$) are {\beta}gin{equation}{\lambda}bel{eq:UJK(k)} U_{JK}^{(k)}\,:=\; U_{JK}^{(k-1)}{\smallsetminus} \bigcup_{I\subset J, |I|= k} \bigl( s_J^{-1}(E_I){\smallsetminus} {\partial}hi_{IJ}(U^{(k)}_{IJ}) \bigr) . \end{equation} It remains to check that these subsets are open and satisfy (i$'$). Here $U_{JK}^{(k)}$ is open since $s_J^{-1}(E_I)\subset U_J$ is closed and ${\partial}hi_{IJ}(U^{(k)}_{IJ})\subset s_J^{-1}(E_I)$ is relatively open by the index condition in Definition~\ref{def:change}. Finally, condition (i$'$), namely $$ U_{JK}^{(k-1)}\cap s_J^{-1}(E_H) \subset U_{JK}^{(k)}, $$ follows from the following inclusions for all $H\subsetneq I \subsetneq J \subset K$ with $|I|=k$. On the one hand we have $U_{JK}^{(k-1)}\cap s_J^{-1}(E_H) \subset s_J^{-1}(E_I)$ from the additivity of ${\mathcal K}$; on the other hand (iii) for $|H|<k$ together with the weak cocycle condition on $U_{HK}^{(k-1)}\subset U_{HJ}\cap{\partial}hi_{HI}^{-1}(U_{IJ})$ and (i$'$) for $|I|=k$ imply {\beta}gin{align*} U_{JK}^{(k-1)}\cap s_J^{-1}(E_H) = {\partial}hi_{HJ}( U_{HK}^{(k-1)} ) &= {\partial}hi_{IJ}\bigl({\partial}hi_{HI}( U_{HK}^{(k-1)} )\bigr) \\ &= {\partial}hi_{IJ}\bigl(U_{IK}^{(k-1)}\cap s_I^{-1}(E_H) \bigr) \subset {\partial}hi_{IJ}(U^{(k)}_{IJ}) . \end{align*} Hence no points of $ U_{JK}^{(k-1)}\cap s_J^{-1}(E_H)$ are removed from $U_{JK}^{(k-1)}$ when we construct $U_{JK}^{(k)}$. This finishes Step B and hence the $k$-th iteration step. { } Since the order of $I\in{\mathcal I}_{\mathcal K}$ is bounded $|I|\leq N$ by the number of basic Kuranishi charts, this iteration provides a complete construction of the shrinking domains after at most $N$ steps. In fact, we obtain $U'_I=U^{(|I|-1)}_{II}$ and $U'_{IJ}=U^{(|I|)}_{IJ}$ since the iteration does not alter these domains in later steps. 
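To illustrate the bookkeeping, suppose for instance that there are three basic charts and that all footprint intersections are nonempty, so that $M=3$. Then the $k=1$ stage constructs the domains $U^{(1)}_{\{i\}K}$ over the basic charts in Step A and shrinks the domains $U^{(1)}_{JK}$ with $|J|\geq 2$ via \eqref{eq:UJK(k)} in Step B; the $k=2$ stage does the same for the domains $U^{(2)}_{IK}$ with $|I|=2$ and for $U^{(2)}_{\{1,2,3\}\{1,2,3\}}$; and the $k=3$ stage leaves all domains unchanged, since there is no $K\supsetneq \{1,2,3\}$ and no $J$ with $|J|>3$.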
\end{proof} {\beta}gin{lemma} {\lambda}bel{le:set} Let $U$ be a complete metric space, $U'\subset U$ a precompact open set, and $Z\subset U'$ a relatively closed subset. Suppose we are given a finite collection of relatively open subsets $Z_i\subset Z$ for $i=1,\ldots,N$ and open subsets $W_K\subset U'$ with $$ W_K\cap Z= Z_K: = {\textstyle\bigcap_{i\in K}} Z_i $$ for all index sets $K\subset\{1,\ldots,N\}$. Then there exist open subsets $U_K\subset W_K$ with $U_K\cap Z=Z_K$ and $U_J\cap U_K = U_{J\cup K}$ for all $J,K\subset\{1,\ldots,N\}$. \end{lemma} {\beta}gin{proof} Let us first introduce a general construction of an open set $U_f\subset U'$ associated to any lower semi-continuous function $f:\overlineerline{Z}\to [0,\infty)$, where $\overlineerline{Z}\subset U$ denotes the closure in $U$. By assumption, $\overlineerline{Z}$ is compact, hence the distance function $\overlineerline{Z}\to [0,\infty), z\mapsto d(x,z)$ for fixed $x\in U'$ achieves its minumum on a nonempty compact set $M_x\subset \overlineerline{Z}$. Hence we have $$ d(x,Z) := \inf_{z\in Z} d(x,z) = d(x,\overlineerline{Z}) = \min_{z\in\overlineerline Z} d(x,z) = \min_{z\in M_x} d(x,z) $$ for the distance between $x$ and the set $Z$, or equivalently the closure $\overlineerline{Z}$. { } {\mathbb N}I {\bf Claim:} {\it For any lower semi-continuous function $f:\overlineerline{Z}\to [0,\infty)$ the set} $$ U_f := \bigl\{ x\in U' \,\big|\, d(x,Z) < \inf f|_{M_x} \bigr\} \subset U' $$ {\it is open (in $U'$ or equivalently in $U$) and satisfies} {\beta}gin{equation} {\lambda}bel{eq:supp} U_f \cap Z = {\rm supp\,} f \cap Z = \bigl\{ z\in Z \,\big|\, f(z)>0 \bigr\}. \end{equation} {\mathbb N}I {\it Proof of Claim.} For $x\in Z$ we have $d(x,Z)=0$ and $M_x=\{x\}$, so $d(x,Z) < \inf f|_{M_x}$ is equivalent to $0<f(x)$. We prove openness of $U_f\subset U'$ by checking closedness of $U'{\smallsetminus} U_f$. Thus, we consider a convergent sequence $U'\ni x_i\to x_\infty\in U'$ with $d(x_i, Z)\geq \inf f|_{M_{x_i}}$ and aim to prove $d(x_\infty, Z)\geq \inf f|_{M_{x_\infty}}$. Since $f$ is lower semi-continuous and each $M_{x_i}$ is compact, we may choose a sequence $z_i\in\overlineerline{Z}$ with $z_i\in M_{x_i}$ and $f(z_i)=\inf f|_{M_{x_i}}$. (Indeed, for fixed $i$ any minimizing sequence $z^\nu_i\in M_{x_i}$ with $\lim_{\nu\to\infty}f(z^\nu_i) = \inf f|_{M_{x_i}}$ has a convergent subsequence $z^\nu_i\to z_i\in M_{x_i}$ and the limit satisfies $f(z_i) \leq \lim f(z^\nu_i) = \inf f|_{M_{x_i}}$, hence $f(z_i) = \inf f|_{M_{x_i}}$.) Since $\overlineerline{Z}$ is compact, we may moreover choose a subsequence, again denoted by $(x_i)$ and $(z_i)$, such that $z_i\to z_\infty\in\overlineerline{Z}$ converges. Then by continuity of the distance functions we deduce $z_\infty\in M_{x_\infty}$ from $$ d(x_\infty, z_\infty) = \lim d(x_i,z_i) = \lim d(x_i, Z) = d(x_\infty, Z) , $$ and finally the lower semi-continuity of $f$ implies the claim $$ d(x_\infty, Z) = \lim d(x_i,Z) \geq \lim f(z_i) \geq f(z_\infty) \geq \inf f|_{M_{x_\infty}} . $$ We now use this general construction to define the sets $U_K:=\bigcap_{i\in K} U_{f_i}$ as intersections of the subsets $U_{f_i}\subset U'$ arising from functions $f_i: \overlineerline{Z}\to [0,\infty)$ defined by $$ f_i(z) := \min \bigl\{ d( z, U' {\smallsetminus} W_J) \,\big|\, J\subset\{1,\ldots,N\} : \; i\in J, \; d(z,Z{\smallsetminus} Z_i) = d(z,Z{\smallsetminus} Z_J) \bigr\}. $$ To check that $f_i$ is indeed lower semi-continuous, consider a sequence $z_\nu\to z_\infty\in \overlineerline{Z}$. 
Then $f_i(z_\nu)= d(z_\nu,U'{\smallsetminus} W_{J^\nu})$ for some index sets $J^\nu$ with $i\in J^\nu$ and $d(z_\nu,Z{\smallsetminus} Z_i) = d(z_\nu,Z{\smallsetminus} Z_{J^\nu})$. Since the set of all index sets is finite, we may choose a subsequence, again denoted $(z_\nu)$, for which $J^\nu=J$ is constant. Then in the limit we also have $d(z_\infty,Z{\smallsetminus} Z_i) = d(z_\infty,Z{\smallsetminus} Z_J)$ and hence $$ f_i(z_\infty) \leq d(z_\infty, U'{\smallsetminus} W_J) = \lim d(z_\nu, U'{\smallsetminus} W_J) = \lim f_i(z_\nu). $$ Thus $f_i$ is lower semi-continuous. Therefore, the above Claim implies that each $U_{f_i}$ is open, and hence also that each $U_K$ is open as the finite intersection of open sets. The intersection property holds by construction: $$ U_J\cap U_K = \bigcap_{i\in J} U_{f_i} \cap \bigcap_{i\in K} U_{f_i} = \bigcap_{i\in J\cup K} U_{f_i} = U_{J\cup K}. $$ To obtain $U_K\cap Z = \bigcap_{i\in K} \bigl( U_{f_i} \cap Z \bigr) = Z_K$ it suffices to verify that $U_{f_i}\cap Z = Z_i$. In view of \eqref{eq:supp}, and unravelling the meaning of $f_i(z)>0$ for $z\in Z$, that means we have to prove the following equivalence for $z\in Z$, $$ z\in Z_i \quad \Longleftrightarrow \quad d(z, U'{\smallsetminus} W_J)>0 \quad \forall J\subset\{1,\ldots N\} : i\in J ,\; d(z,Z{\smallsetminus} Z_i) = d(z,Z{\smallsetminus} Z_J). $$ Assuming the right hand side, we may choose $J=\{i\}$ to obtain $d(z,U'{\smallsetminus} W_{\{i\}})>0$, and hence, since $U'{\smallsetminus} W_{\{i\}}$ is closed, $z\in Z {\smallsetminus} (U'{\smallsetminus} W_{\{i\}})= Z_i$. On the other hand, $z\in Z_i$ implies $d(z,Z{\smallsetminus} Z_i)>0$ since $z\in Z$ and $Z{\smallsetminus} Z_i \subset Z$ is relatively closed. So for any $J$ with $d(z,Z{\smallsetminus} Z_i) = d(z,Z{\smallsetminus} Z_J)$ we obtain $d(z,Z{\smallsetminus} Z_J)>0$. Hence $z\in Z_J\subset W_J$, so that $d(z, U'{\smallsetminus} W_J)>0$. This proves the desired equivalence, and hence $U_K\cap Z = \bigcap_{i\in K}Z_i = Z_K$. Finally, we need to check that $U_K\subset W_K$. Unravelling the construction, note that $U_K$ is the set of all $x\in U'$ that satisfy {\beta}gin{equation}{\lambda}bel{eq:UK} d(x, Z) < d( z , U' {\smallsetminus} W_J) \end{equation} for all $z\in M_x$ and all $J\subset\{1,\ldots,N\}$ such that there exists $i\in J\cap K $ satisfying $d(z,Z{\smallsetminus} Z_i) = d(z,Z{\smallsetminus} Z_J)$. Now suppose by contradiction that there exists a point $x\in U_K{\smallsetminus} W_K$, and pick $z\in M_x$. Then $d(x,Z) = d(x, z) \geq d( z , U' {\smallsetminus} W_K)$ since $x\in U'{\smallsetminus} W_K$. This contradicts \eqref{eq:UK} with $J=K$. On the other hand, the condition $d(z,Z{\smallsetminus} Z_i) = d(z,Z{\smallsetminus} Z_J)$ for $J=K$ is always satisfied for some $i\in K$ since we have $d(z,Z{\smallsetminus} Z_K) = \min_{j\in K} d(z,Z{\smallsetminus} Z_j)$. This provides the contradiction and hence proves $U_K\subset W_K$. \end{proof} This lemma completes the proof that every weak additive ${\mathcal K}$ has a tame shrinking. We will return to these ideas in Section~\ref{ss:Kcobord} when discussing cobordisms. We end this section by Proposition~\ref{prop:metric}, which constructs admissible metrics on certain tame shrinkings by pullback with the map in the following lemma. {\beta}gin{lemma}{\lambda}bel{le:injtameshr} Let ${\mathcal K}'$ be a tame shrinking of a tame Kuranishi atlas ${\mathcal K}$. 
Then the natural map ${\iota}:|{\mathcal K}'|\to |{\mathcal K}|$ induced by the inclusion of domains ${\iota}_I:U'_I\to U_I$ is injective. \end{lemma} {\beta}gin{proof} We write $U_I, U_{IJ}$ for the domains of the charts and coordinate changes of ${\mathcal K}$ and $U_I', U_{IJ}'$ for those of ${\mathcal K}'$, so that $U_I'\subset U_I, U_{IJ}'\subset U_{IJ}$ for all $I,J\in {\mathcal I}_{\mathcal K} = {\mathcal I}_{{\mathcal K}'}$. Suppose that ${\partial}i_{\mathcal K}(I,x) = {\partial}i_{\mathcal K}(J,y)$ where $x\in U_I', y\in U_J'$. Then we must show that ${\partial}i_{{\mathcal K}'}(I,x) = {\partial}i_{{\mathcal K}'}(J,y)$. Since ${\mathcal K}$ is tame, Lemma~\ref{le:Ku2}~(a) implies that there is $w\in U_{I\cap J}$ such that ${\partial}hi_{(I\cap J)I} (w)$ is defined and equal to $x$. Hence $x\in s_I^{-1}(E_{I\cap J})\cap U_I' ={\partial}hi_{(I\cap J)I} (U'_{(I\cap J)I})$ by the tameness equation \eqref{eq:tame3} for ${\mathcal K}'$. Therefore $w\in U'_{(I\cap J)I}$. Similarly, because ${\partial}hi_{(I\cap J)J} (w)$ is defined and equal to $y$, we have $w\in U'_{(I\cap J)J}$. Then by definition of ${\partial}i_{{\mathcal K}'}$ we deduce ${\partial}i_{{\mathcal K}'}(I,x) = {\partial}i_{{\mathcal K}'}(I\cap J,w) ={\partial}i_{{\mathcal K}'}(J,y)$. \end{proof} In order to construct metric tame Kuranishi atlases, we will find it useful to consider tame shrinkings ${\mathcal K}_{sh}$ of a weak Kuranishi atlas ${\mathcal K}$ that are obtained as shrinkings of an intermediate tame shrinking ${\mathcal K}'$ of ${\mathcal K}$. For short we will call such ${\mathcal K}_{sh}$ a {\bf preshrunk tame shrinking} of ${\mathcal K}$. {\beta}gin{prop}{\lambda}bel{prop:metric} Let ${\mathcal K}$ be an additive weak Kuranishi atlas. Then every preshrunk tame shrinking of ${\mathcal K}$ is metrizable. In particular, ${\mathcal K}$ has a metrizable tame shrinking. \end{prop} {\beta}gin{proof} First use Proposition~\ref{prop:proper} to construct a tame shrinking ${\mathcal K}'$ of ${\mathcal K}$ with domains $(U_I'\subset U_I)_{I\in {\mathcal I}_{\mathcal K}}$, and then use this result again to construct a tame shrinking ${\mathcal K}_{sh}$ of ${\mathcal K}'$ with domains $(U_I^{sh}\sqsubset U_I')_{I\in {\mathcal I}_{\mathcal K}}$. We claim that ${\mathcal K}_{sh}$ is metrizable. For that purpose we apply Proposition~\ref{prop:Ktopl1}~(iv) to the precompact subset ${\mathcal A}: = \bigcup_{I\in{\mathcal I}_{\mathcal K}} U_I^{sh}$ of ${\rm Obj}_{{\bf B}_{{\mathcal K}'}}$ to obtain a metric $d'$ on ${\partial}i_{{\mathcal K}'}(\overline{\mathcal A})$ that induces the relative topology on the subset ${\partial}i_{{\mathcal K}'}(\overline{\mathcal A})$ of $|{\mathcal K}'|$, that is $\bigl( {\partial}i_{{\mathcal K}'}(\overline{\mathcal A}) , d'\bigr) = \|\overline{\mathcal A}\|$. Further, since ${\partial}i_{{\mathcal K}'}(\overline{\mathcal A})$ is compact, the metric $d'$ must be bounded. Now, by Lemma~\ref{le:injtameshr} the natural map ${\iota}: |{\mathcal K}_{sh}|\to |{\mathcal K}'|$ is injective, with image ${\partial}i_{{\mathcal K}_{sh}}({\mathcal A})$, so that the pullback $d:={\iota}^* d'$ is a bounded metric on $|{\mathcal K}_{sh}|$ that is compatible with the relative topology induced by $|{\mathcal K}'|$; in other words ${\iota} : \bigl(|{\mathcal K}_{sh}|, d \,\bigr) \to \|{\mathcal A}\|$ is an isometry. 
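Explicitly, the pullback metric is given by $$ d(p,q) \;:=\; ({\iota}^*d')(p,q) \;=\; d'\bigl({\iota}(p),{\iota}(q)\bigr) \qquad \text{for } p,q\in |{\mathcal K}_{sh}| , $$ and it is indeed a metric rather than merely a pseudometric precisely because ${\iota}$ is injective.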
Next, note that the pullback metric $d_I$ on $U^{sh}_I$ induces the usual topology since ${\partial}i_{{\mathcal K}_{sh}}: U^{sh}_I \to {\partial}i_{{\mathcal K}_{sh}}(U^{sh}_I) \subset \bigl( |{\mathcal K}_{sh}| , d \bigr)$ is a homeomorphism to its image. Indeed, by Lemma~\ref{le:injtameshr} it can also be written as ${\partial}i_{{\mathcal K}_{sh}}|_{U^{sh}_I} = {\iota}^{-1} \circ {\partial}i_{{\mathcal K}'} \circ {\iota}_I$ with the embedding ${\iota}_I: U^{sh}_I \to U'_I$. The latter is a homeomorphism to its image, as is ${\partial}i_{{\mathcal K}'} : U'_I \to {\partial}i_{{\mathcal K}'}(U'_I) \subset |{\mathcal K}'|$ by Proposition~\ref{prop:Khomeo}, and ${\iota}^{-1}$ by the definition of the metric topology on $|{\mathcal K}_{sh}|$. \end{proof} \subsection{Cobordisms of Kuranishi atlases}{\lambda}bel{ss:Kcobord} \hspace{1mm}\\ Since there are many choices involved in constructing a Kuranishi atlas, and holomorphic curve moduli spaces in addition depend on the choice of an almost complex structure, it is important to have suitable notions of equivalence. Since we are only interested here in constructing the VMC as cobordism class, resp.\ the VFC as a homology class, a notion of uniqueness up to cobordism will suffice for our purposes, and should e.g.\ arise from paths of almost complex structures. We will defer this general notion of cobordism to \cite{MW:ku2} and concentrate here on developing tools for constructing well defined VMC/VFC for a fixed compact moduli space. This requires several types of results. Within the abstract theory, we firstly need to prove the uniqueness part of Theorem~\ref{thm:K}, saying that metric tame shrinkings of additive weak Kuranishi atlases are unique up to a suitable notion of cobordism, which will be the content of this section. Secondly, we prove in Theorems~\ref{thm:VMC1} and \ref{thm:VMC2} that cobordant Kuranishi atlases induce the same VMC/VFC. On the other hand, for any given holomorphic curve moduli space, we need to construct Kuranishi atlases that are canonical up to a suitable notion of equivalence. Here the most natural notion of equivalence would be along the lines of Morita equivalence, cf.\ Remark~\ref{rmk:Morita}. However, our constructions of Kuranishi atlases for a fixed Gromov--Witten moduli space in \cite{MW:gw} will depend on choices (in particular slicing conditions and obstruction spaces for the basic charts) only up to the following simpler notion of commensurability, in the sense that any two choices will yield Kuranishi atlases that are both commensurate to a third. Finally, we require another abstract result to imply that commensurate Kuranishi atlases induce the same VFC. In this section we hence fix a compact metrizable space $X$ and introduce the notions of commensurability and cobordism of Kuranishi atlases on $X$. {\beta}gin{defn}{\lambda}bel{def:Kcomm} Two (weak) Kuranishi atlases ${\mathcal K}^0,{\mathcal K}^1$ on the same compact space $X$ are {\bf commensurate} if there exists a common extension ${\mathcal K}^{01}$.
This means that ${\mathcal K}^{01}$ is a (weak) Kuranishi atlas on $X$ with basic charts $({\bf K}^{\alpha}_i)_{({\alpha},i)\in{\mathbb N}n^{01}}$, where $$ {\mathbb N}n^{01} := {\mathbb N}n^0 \sqcup {\mathbb N}n^1 ; \qquad {\mathbb N}n^{\alpha} := \bigl\{ ({\alpha},i) \,\big|\, 0 \leq i \leq N^{\alpha} \bigr\} , $$ and transition data $({\bf K}^{01}_I,\widehat\Phi^{01}_{IJ})_{I\subset{\mathbb N}n^{01}, I\subsetneq J}$ such that ${\bf K}^{01}_I={\bf K}^0_I$ and $\widehat\Phi^{01}_{IJ}=\widehat\Phi^{\alpha}_{IJ}$ whenever $I,J\subset{\mathbb N}n^{\alpha}$ for fixed ${\alpha}=0$ or ${\alpha}=1$. Moreover, if ${\mathcal K}^0,{\mathcal K}^1$ are additive, we say they are {\bf additively commensurate} if there exists a common extension ${\mathcal K}^{01}$ that in addition is additive. \end{defn} We will see that commensurability is stronger than cobordism, but note that it does not satisfy transitivity, hence is not an equivalence relation. In order to define the notion of cobordism such that it is transitive, we will need a special form of charts and coordinate changes at the boundary that allows for gluing of cobordisms. Here, in order to avoid having to deal with general Kuranishi atlases with boundary, we restrict our work to a notion of cobordism of Kuranishi atlases on the same space $X$, which involves Kuranishi atlases on the space $X\tildemes[0,1]$, whose ``boundary'' $X\tildemes\{0,1\}$ has a natural collar structure. We deal with this boundary by requiring all charts and coordinate changes in a sufficiently small collar to be of product form as introduced below. These notions of collars and product forms will also be used for introducing a more general notion of ``cobordism with Kuranishi atlas'' in \cite{MW:ku2}. {\beta}gin{defn} {\lambda}bel{def:Cchart} {\beta}gin{itemlist} \item Let ${\bf K}^{\alpha}=(U^{\alpha},E^{\alpha},s^{\alpha},{\partial}si^{\alpha})$ be a Kuranishi chart on $X$, and let $A\subset[0,1]$ be a relatively open interval. Then we define the {\bf product chart} for $X\tildemes[0,1]$ with footprint $F_I^{\alpha}\tildemes A$ as $$ {\bf K}^{\alpha} \tildemes A :=\bigl(U^{\alpha} \tildemes A, E^{\alpha}, s^{\alpha}\circ{\rm pr}_{U^{\alpha}} ,{\partial}si^{\alpha}\tildemes{\rm id}_{A} \bigr) . $$ \item A {\bf Kuranishi chart with collared boundary} on $X\tildemes[0,1]$ is a tuple ${\bf K} = (U,E ,s ,{\partial}si )$ of domain, obstruction space, section, and footprint map as in Definition~\ref{def:chart}, with the following boundary variations and collar form requirement: {\beta}gin{enumerate} \item The footprint $F ={\partial}si (s ^{-1}(0))\subset X\tildemes[0,1]$ intersects the boundary $X\tildemes\{0,1\}$. \item The domain $U $ is a smooth manifold whose boundary splits into two parts ${\partial}artial U = {\partial}artial^0 U \sqcup {\partial}artial^1 U $ such that ${\partial}artial^{\alpha} U $ is nonempty iff $F $ intersects $X\tildemes\{{\alpha}\}$. 
\item If ${\partial}artial^{\alpha} U \neq\emptyset$ then there is a relatively open neighbourhood $A^{\alpha}\subset [0,1]$ of ${\alpha}$ and an embedding ${\iota}ta^{\alpha}:{\partial}artial^{\alpha} U \tildemes A^{\alpha} \hookrightarrow U $ onto a neighbourhood of ${\partial}artial^{\alpha} U \subset U $ such that $$ \bigl( \, {\partial}artial^{\alpha} U \tildemes A^{\alpha} \,,\, E \,,\, s \circ {\iota}ta^{\alpha} \,,\, {\partial}si \circ {\iota}ta^{\alpha} \, \bigr) \; = \; {\partial}artial^{\alpha}{\bf K} \tildemes A^{\alpha} $$ is the product of $A^{\alpha}$ with some Kuranishi chart ${\partial}artial^{\alpha}{\bf K} $ for $X$ with footprint $F\subset X$ such that $(X\tildemes A^{\alpha}) \cap F = F\tildemes A^{\alpha}$. \end{enumerate} \item For any Kuranishi chart with collared boundary ${\bf K} $ on $X\tildemes[0,1]$ we call the uniquely determined Kuranishi charts ${\partial}artial^{\alpha}{\bf K} $ for $X$ the {\bf restrictions of ${\bf K} $ to the boundary} for ${\alpha}=0,1$. \end{itemlist} \end{defn} We now define a coordinate change between charts on $X\tildemes [0,1]$ that may have boundary. Because in a Kuranishi atlas there is a coordinate change ${\bf K}_I\to {\bf K}_J$ only when $F_I\supset F_J$, we will restrict to this case here. (Definition~\ref{def:change} considered a more general scenario.) In other words, we need not consider coordinate changes from a chart without boundary to a chart with boundary. {\beta}gin{defn} {\lambda}bel{def:Ccc} {\beta}gin{itemlist} \item Let $\widehat\Phi^{\alpha}_{IJ}:{\bf K}^{\alpha}_I\to{\bf K}^{\alpha}_J$ be a coordinate change between Kuranishi charts for $X$, and let $A_I,A_J\subset[0,1]$ be relatively open intervals. Then the {\bf product coordinate change} $\widehat\Phi^{\alpha}_{IJ}\tildemes {\rm id}_{A_I\cap A_J} : {\bf K}^{\alpha}_I\tildemes A_I \to {\bf K}^{\alpha}_J\tildemes A_J$ is given by $$ {\partial}hi^{\alpha}_{IJ}\tildemes {\rm id}_{A_I\cap A_J}: \; U^{\alpha}_{IJ}\tildemes (A_I\cap A_J) \;\to\; U^{\alpha}_J\tildemes A_J, \qquad \widehat{\partial}hi^{\alpha}_{IJ}: E_I^{\alpha} \to E_J^{\alpha}. $$ \item Let ${\bf K} _I,{\bf K} _J$ be Kuranishi charts on $X\tildemes[0,1]$ such that only ${\bf K} _I$ or both ${\bf K} _I,{\bf K} _J$ have collared boundary. 
Then a {\bf coordinate change with collared boundary} $\widehat\Phi_{IJ} :{\bf K} _I\to{\bf K} _J$ is a tuple $\widehat\Phi_{IJ} = (U_{IJ} ,{\partial}hi_{IJ} ,\widehat{\partial}hi_{IJ} )$ of domain and embeddings as in Definition~\ref{def:change}, with the following boundary variations and collar form requirement: {\beta}gin{enumerate} \item The domain is a relatively open subset $U _{IJ}\subset U _I$ with boundary components ${\partial}artial^{\alpha} U _{IJ}:= U _{IJ} \cap {\partial}artial^{\alpha} U _I$; \item If $F_J\cap (X\tildemes \{{\alpha}\}) \ne \emptyset$ for ${\alpha} = 0$ or $1$, there is a relatively open neighbourhood $B^{\alpha}\subset [0,1]$ of ${\alpha}$ such that {\beta}gin{align*} ({\iota}ta_I^{\alpha})^{-1}(U _{IJ}) \cap \bigl({\partial}artial^{\alpha} U _I \tildemes B^{\alpha}\bigr) &\;=\; {\partial}artial^{\alpha} U _{IJ} \tildemes B^{\alpha} , \\ ({\iota}ta_J^{\alpha})^{-1}({\rm im\,}{\partial}hi _{IJ}) \cap \bigl({\partial}artial^{\alpha} U _J \tildemes B^{\alpha} \bigr) &\;=\; {\partial}hi _{IJ}({\partial}artial^{\alpha} U _{IJ}) \tildemes B^{\alpha} , \end{align*} and $$ \bigl( \, {\partial}artial^{\alpha} U _{IJ} \tildemes B^{\alpha} \,,\, ({\iota}ta_J^{\alpha})^{-1}\circ {\partial}hi _{IJ} \circ {\iota}ta_I^{\alpha} \,,\, \widehat{\partial}hi _{IJ} \, \bigr) \;=\; {\partial}artial^{\alpha}\widehat\Phi_{IJ} \tildemes {\rm id}_{B^{\alpha}} , $$ where ${\partial}artial^{\alpha}\widehat\Phi_{IJ} : {\partial}artial^{\alpha}{\bf K} _I \to {\partial}artial^{\alpha}{\bf K} _J$ is a coordinate change. \item If ${\partial}^{\alpha} F_J= \emptyset$ but ${\partial}^{\alpha} F_I\ne \emptyset$ for ${\alpha} = 0$ or $1$ there is a neighbourhood $B^{\alpha}\subset [0,1]$ of ${\alpha}$ such that $$ U _{IJ}\cap {\iota}ta_I^{\alpha} \bigl({\partial}artial^{\alpha} U _I \tildemes B^{\alpha} \bigr) = \emptyset. $$ \end{enumerate} \item For any coordinate change with collared boundary $\widehat\Phi_{IJ} $ on $X\tildemes[0,1]$ we call the uniquely determined coordinate changes ${\partial}^{\alpha} \widehat\Phi_{IJ}$ for $X$ the {\bf restrictions of $\widehat\Phi_{IJ} $ to the boundary} for ${\alpha}=0,1$. \end{itemlist} \end{defn} {\beta}gin{defn}{\lambda}bel{def:CKS} A {\bf (weak) Kuranishi cobordism} on $X\tildemes [0,1]$ is a tuple $$ {\mathcal K}^{[0,1]} = \bigl( {\bf K}^{[0,1]}_{I} , \widehat\Phi_{IJ}^{[0,1]} \bigr)_{I,J\in {\mathcal I}_{{\mathcal K}^{[0,1]}}} $$ of basic charts and transition data as in Definition~\ref{def:Ku} resp.\ \ref{def:Kwk}, with the following boundary variations and collar form requirements: {\beta}gin{itemlist} \item The charts of ${\mathcal K}^{[0,1]}$ are either Kuranishi charts with collared boundary or standard Kuranishi charts whose footprints are precompactly contained in $X\tildemes(0,1)$. \item The coordinate changes $\widehat\Phi_{IJ}^{[0,1]}: {\bf K}^{[0,1]}_{I} \to {\bf K}^{[0,1]}_{J}$ are either standard coordinate changes on $X\tildemes(0,1)$ between pairs of standard charts, or coordinate changes with collared boundary between pairs of charts, of which at least the first has collared boundary. \end{itemlist} Moreover, we call ${\mathcal K}^{[0,1]}$ {\bf additive resp.\ tame} if it satisfies the additivity condition of Definition~\ref{def:Ku2}, resp.\ the additivity and tameness conditions of Definition~\ref{def:tame}. 
\end{defn} {\beta}gin{remark} {\lambda}bel{rmk:restrict}\rm Any (weak) Kuranishi cobordism ${\mathcal K}^{[0,1]}$ induces by restriction two (weak) Kuranishi atlases ${\partial}artial^{\alpha}{\mathcal K}^{[0,1]}$ on $X$ for ${\alpha}=0,1$ with {\beta}gin{itemize} \item basic charts ${\partial}^{\alpha}{\bf K}_i$ given by restriction of basic charts of ${\mathcal K}^{[0,1]}$ with $F_i\cap X\tildemes\{{\alpha}\}\neq\emptyset$; \item index set ${\mathcal I}_{{\partial}^{\alpha}{\mathcal K}^{[0,1]}}=\{I\in{\mathcal I}_{{\mathcal K}^{[0,1]}}\,|\, F_I\cap X\tildemes\{{\alpha}\}\neq\emptyset\}$; \item transition charts ${\partial}^{\alpha}{\bf K}_I$ given by restriction of transition charts of ${\mathcal K}^{[0,1]}$; \item coordinate changes ${\partial}^{\alpha}\widehat\Phi_{IJ}$ given by restriction of coordinate changes of ${\mathcal K}^{[0,1]}$. \end{itemize} If ${\mathcal K}^{[0,1]}$ is additive or tame, then so are the restrictions ${\partial}artial^{\alpha}{\mathcal K}^{[0,1]}$. Finally, ${\mathcal K}^{[0,1]}$ provides a cobordism from ${\partial}^0{\mathcal K}^{[0,1]}$ to ${\partial}^1{\mathcal K}^{[0,1]}$ in the sense of the following Definition~\ref{def:Kcobord}. \end{remark} With this language in hand, we can now introduce the cobordism relation between Kuranishi atlases. While such notions exist (and generally are equivalence relations) for all flavours of (additive/weak/tame) Kuranishi atlases, we restrict ourselves here to the cobordism relation under which the VMC/VFC will be well defined, namely additive cobordism for weak atlases. In contrast, it will be important to prove the existence of many kinds of cobordisms as in Propositions~\ref{prop:cobord2} and~\ref{prop:cov2}. {\beta}gin{defn}{\lambda}bel{def:Kcobord} Two additive weak Kuranishi atlases ${\mathcal K}^0, {\mathcal K}^1$ on $X$ are {\bf additively cobordant} if there exists an additive weak Kuranishi cobordism ${\mathcal K}^{[0,1]}$ from ${\mathcal K}^0$ to ${\mathcal K}^1$. That is, ${\mathcal K}^{[0,1]}$ is an additive weak Kuranishi cobordism on $X\tildemes[0,1]$, which restricts to ${\partial}artial^0{\mathcal K}^{[0,1]}={\mathcal K}^0$ on $X\tildemes\{0\}$ and to ${\partial}^1{\mathcal K}^{[0,1]}={\mathcal K}^1$ on $X\tildemes\{1\}$. More precisely, there are injections ${\iota}ta^{\alpha}:{\mathcal I}_{{\mathcal K}^{\alpha}} \hookrightarrow {\mathcal I}_{{\mathcal K}^{[0,1]}}$ for ${\alpha}=0,1$ such that ${\rm im\,}{\iota}ta^{\alpha}={\mathcal I}_{{\partial}artial^{\alpha}{\mathcal K}}$ and for all $I,J\in{\mathcal I}_{{\mathcal K}^{\alpha}}$ we have $$ {\bf K}^{\alpha}_I = {\partial}^{\alpha} {\bf K}^{[0,1]}_{{\iota}ta^{\alpha}(I)}, \qquad \widehat\Phi^{\alpha}_{IJ} = {\partial}^{\alpha} \widehat\Phi^{[0,1]}_{{\iota}ta^{\alpha}(I) {\iota}ta^{\alpha} (J)} . $$ \end{defn} In the following we will usually identify the index sets ${\mathcal I}_{{\mathcal K}^{\alpha}}$ of cobordant Kuranishi atlases with the restricted index set ${\mathcal I}_{{\partial}artial^{\alpha}{\mathcal K}}$ in the cobordism index set ${\mathcal I}_{{\mathcal K}^{[0,1]}}$, so that ${\mathcal I}_{{\mathcal K}^0}, {\mathcal I}_{{\mathcal K}^1}\subset {\mathcal I}_{{\mathcal K}^{[0,1]}}$ are the not necessarily disjoint subsets of charts whose footprints intersect $X\tildemes\{0\}$ resp.\ $X\tildemes \{1\}$. {\beta}gin{example} {\lambda}bel{ex:triv}\rm Let ${\mathcal K}= \bigl( {\bf K}_I, \widehat\Phi_{IJ}\bigr)_{I,J\in{\mathcal I}_{\mathcal K}}$ be an additive weak Kuranishi atlas on $X$. 
Then the {\bf product Kuranishi cobordism} ${\mathcal K}\tildemes [0,1]$ from ${\mathcal K}$ to ${\mathcal K}$ is the weak Kuranishi cobordism on $X\tildemes [0,1]$ consisting of the product charts ${\bf K}_I\tildemes [0,1]$ and the product coordinate changes $\widehat\Phi_{IJ}\tildemes {\rm id}_{[0,1]}$ for $I,J\in{\mathcal I}_{\mathcal K}$. Note that ${\mathcal K}\tildemes [0,1]$ inherits additivity from ${\mathcal K}$. Similarly, if ${\mathcal K}$ is tame, then so is ${\mathcal K}\tildemes [0,1]$. If ${\mathcal K}$ is a Kuranishi atlas, so that the Kuranishi neighbourhood $|{\mathcal K}|$ is defined, then there is a natural bijection from the quotient $|{\mathcal K}\tildemes [0,1]|$ to the product $|{\mathcal K}|\tildemes [0,1]$. This map is continuous. However, it is not clear that it is always a homeomorphism.\footnote{At this time, we have neither a proof nor a counterexample. We hope to resolve this question, but none of the following depends on this.} Further, when we come to put metrics on the product ${\mathcal K}\tildemes [0,1]$ we will sometimes consider metrics that define a topology on $|{\mathcal K}\tildemes [0,1]|$ that is not a product. Nevertheless we will often denote the realization of ${\mathcal K}\tildemes [0,1]$ as $|{\mathcal K}|\tildemes [0,1]$ with the understanding that this is an identification of sets, not of topological spaces. \end{example} In order to discuss further the collar structure near the boundary of Kuranishi cobordisms, we denote for $1\ge{\varepsilon}>0$ the collar neighbourhoods of $0,1\in[0,1]$ by {\beta}gin{equation}{\lambda}bel{eq:Naleps} A_{\varepsilon}^0: = [0,{\varepsilon}) \qquad\text{and}\qquad A_{\varepsilon}^1: = (1-{\varepsilon},1] . \end{equation} We will see that the footprints of the Kuranishi charts in a Kuranishi cobordism are collared in the following sense. {\beta}gin{defn}{\lambda}bel{def:collared} We say that an open subset $F\subset X\tildemes [0,1]$ is {\bf collared} if there is ${\varepsilon}>0$ such that for ${\alpha}\in \{0,1\}$ we have $$ F \cap (X\tildemes A^{\alpha}_{\varepsilon})\ne \emptyset \;\; \Longleftrightarrow\;\; F \cap (X\tildemes A^{\alpha}_{\varepsilon}) = {\partial}^{\alpha} F \tildemes A^{\alpha}_{\varepsilon} . $$ Here we denote by $$ {\partial}artial^{\alpha} F := {\partial}r_{X} \bigl( F \cap (X\tildemes\{{\alpha}\}) \bigr) $$ the image of the intersection with the ``boundary component'' $X\tildemes\{{\alpha}\}$ under its projection to $X$, noting that this is just a convenient notation, not a topological boundary. \end{defn} {\beta}gin{remark} \rm {\lambda}bel{rmk:Ceps} Since the index set ${\mathcal I}_{{\mathcal K}^{[0,1]}}$ in Definition~\ref{def:CKS} is finite, there exists a uniform {\bf collar width} ${\varepsilon}>0$ such that all collar embeddings ${\iota}^{\alpha}_I$ are defined on $A^{\alpha} = A_{\varepsilon}^{\alpha}$, all coordinate changes are of collar form on $B^{\alpha}=A_{\varepsilon}^{\alpha}$, and all charts without collared boundary have footprint contained in $X\tildemes ({\varepsilon},1-{\varepsilon})$. In particular, all footprints are collared in the sense of Definition~\ref{def:collared}. \end{remark} {\beta}gin{rmk}\rm {\lambda}bel{rmk:cobordreal} Let ${\mathcal K}^{[0,1]}$ be a Kuranishi cobordism from ${\mathcal K}^0$ to ${\mathcal K}^1$.
Its associated categories ${\bf B}_{{\mathcal K}^{[0,1]}}, {\bf E}_{{\mathcal K}^{[0,1]}}$ with projection, section, and footprint functor, as well as their realizations $|{\mathcal K}^{[0,1]}|, |{\bf E}_{{\mathcal K}^{[0,1]}}|$ are defined as for Kuranishi atlases without boundary in Section~\ref{ss:Ksdef}. We will see that these also have collared boundaries with ${\varepsilon}>0$ from Remark~\ref{rmk:Ceps}~(i). {\beta}gin{itemlist} \item We can think of the virtual neighbourhood $|{\mathcal K}^{[0,1]}|$ of $X\tildemes[0,1]$ as a ``cobordism'' from $|{\mathcal K}^0|$ to $|{\mathcal K}^{1}|$ in the following sense: There are natural functors ${\bf B}_{{\mathcal K}^{\alpha}}\tildemes A^{\alpha}_{\varepsilon}\to {\bf B}_{{\mathcal K}^{[0,1]}}$ given by the inclusions ${\iota}ta^{\alpha}_I : U^{\alpha}_I\tildemes A^{\alpha}_{\varepsilon} \hookrightarrow U^{[0,1]}_{I}$ on objects and ${\iota}ta^{\alpha}_I : U^{\alpha}_{IJ}\tildemes A^{\alpha}_{\varepsilon}\hookrightarrow U^{[0,1]}_{IJ}$ on morphisms, where $A^{\alpha}_{\varepsilon}$ is defined in \eqref{eq:Naleps}. The axioms on the interaction of the coordinate changes with the collar neighbourhoods then imply that the functors map to full subcategories that split ${\bf B}_{{\mathcal K}^{[0,1]}}$ in the sense that there are no morphisms between any other object and this subcategory. Hence they induce ``collar neighbourhoods'' of the ``boundaries'' $|{\mathcal K}^{\alpha}|$ on the topological realization $|{\mathcal K}^{[0,1]}|$, i.e.\ topological embeddings $$ \rho^0: |{\mathcal K}^0| \tildemes [0,{\varepsilon}) \longhookrightarrow |{\mathcal K}^{[0,1]}| , \qquad \rho^1: |{\mathcal K}^1| \tildemes (1-{\varepsilon},1] \longhookrightarrow |{\mathcal K}^{[0,1]}| $$ such that for all $0<{\varepsilon}'<{\varepsilon}$ at ${\alpha}=0$ (and similarly for ${\alpha}=1$) we have $$ {\partial}artial \bigl( |{\mathcal K}^{[0,1]}| {\smallsetminus} \rho^0(|{\mathcal K}^0| \tildemes [0,{\varepsilon}') ) \bigr) = \rho^0(|{\mathcal K}^0| \tildemes \{{\varepsilon}'\}). $$ However, recall from Example~\ref{ex:triv} that the topology on the collars $|{\mathcal K}|\tildemes A^{\alpha}_{\varepsilon}$ is the quotient topology from $|{\mathcal K}\tildemes A^{\alpha}_{\varepsilon}|$ which may not be the product topology. In view of this remark, we shall write $$ {\partial}^{\alpha} |{\mathcal K}^{[0,1]}| \,:= \; \rho^{\alpha}\bigl(|{\mathcal K}^{\alpha}|\tildemes \{{\alpha}\}\bigr) \;\subset\; |{\mathcal K}^{[0,1]}| $$ for the ${\alpha}$-boundary of the Kuranishi cobordism neighbourhood $|{\mathcal K}^{[0,1]}|$, which is homeomorphic to the Kuranishi neighbourhood $|{\mathcal K}^{\alpha}|$ of the boundary ${\partial}^{\alpha}{\mathcal K}^{[0,1]}$ via $\rho^{\alpha}(\cdot,{\alpha})$. \item The obstruction bundle ``with boundary'' $|{\partial}r_{{\mathcal K}^{[0,1]}}|: |{\bf E}_{{\mathcal K}^{[0,1]}}|\to |{\mathcal K}^{[0,1]}|$ can also be thought of as a \lq\lq cobordism" between the obstruction bundles $|{\partial}r_{{\mathcal K}^{\alpha}}|: |{\bf E}_{{\mathcal K}^{\alpha}}|\to |{\mathcal K}^{\alpha}|$ in the sense that $\rho^{{\alpha}\;*} |{\bf E}_{{\mathcal K}^{[0,1]}}| = |{\bf E}_{{\mathcal K}^{{\alpha}}}| \tildemes A^{\alpha}_{\varepsilon}$. 
\item The embeddings $\rho^{\alpha}$ extend the natural map between footprints $$ |s_{{\mathcal K}^{[0,1]}}|^{-1}(0) \;\; \xleftarrow{{\iota}ta_{{\mathcal K}^{[0,1]}}}\;\; X\tildemes A_{\varepsilon}^{\alpha} \;\;\xrightarrow{{\iota}ta_{{\mathcal K}^{{\alpha}}}\tildemes{\rm id}}\;\; |s_{{\mathcal K}^{{\alpha}}}|^{-1}(0)\tildemes A_{\varepsilon}^{\alpha} . $$ \item The footprint functor to $X\tildemes[0,1]$ for ${\mathcal K}^{[0,1]}$ induces a continuous surjection $$ {\rm pr}_{[0,1]}\circ {\partial}si_{{\mathcal K}^{[0,1]}} : \; s_{{\mathcal K}^{[0,1]}}^{-1}(0) \;\to\; X\tildemes [0,1] \;\to\; [0,1] . $$ In general we do not assume that this extends to a functor ${\bf B}_{{\mathcal K}^{[0,1]}}\to [0,1]$. However, all the Kuranishi cobordisms that we construct explicitly do have this property. \end{itemlist} \end{rmk} {\beta}gin{lemma}{\lambda}bel {le:cob0} Let ${\mathcal K}^{[0,1]}$ be a tame Kuranishi cobordism. Then its realization $|{\mathcal K}^{[0,1]}|$ has the Hausdorff, homeomorphism, and linearity properties stated in Theorem~\ref{thm:K}. \end{lemma} {\beta}gin{proof} These properties are proven by precisely the same arguments as in Proposition~\ref{prop:Khomeo} and Proposition~\ref{prop:linear}. The fact that some charts have collared boundaries is irrelevant in this context. \end{proof} We now turn to the question of constructing additive cobordisms. As we saw, additivity is an essential hypothesis in Proposition~\ref{prop:Khomeo}, which establishes that $|{\mathcal K}|$ has the Hausdorff and homeomorphism properties. However, note that when manipulating charts other than by shrinking the domains, one may easily destroy this property. For example, when constructing cobordisms one must guard against taking two products of the same basic chart, e.g.\ ${\bf K}_i\tildemes [0,\frac 13)$ and ${\bf K}_i\tildemes (\frac 14,\frac 12)$. The natural transition chart for the overlap of footprints $F_i\tildemes (\frac 14,\frac 13)$ is ${\bf K}_i\tildemes (\frac 14,\frac 13)$, but it fails additivity since $E_i \not\cong E_i \tildemes E_i$ unless the obstruction space is trivial. This means that in the following we have to construct cobordisms with great care. {\beta}gin{lemma}{\lambda}bel{le:cobord1} {\beta}gin{enumerate} \item Additive cobordism is an equivalence relation on the set of additive weak Kuranishi atlases on a fixed compact space $X$. \item Any two commensurate additive weak Kuranishi atlases are additively cobordant. \end{enumerate} \end{lemma} {\beta}gin{proof} Given an additive weak Kuranishi atlas ${\mathcal K}$ on $X$, the product atlas ${\mathcal K}\tildemes [0,1]$ on $X\tildemes[0,1]$ of Example~\ref{ex:triv} is an additive weak Kuranishi cobordism from ${\mathcal K}$ to ${\mathcal K}$. This shows that the cobordism relation is reflexive. Next, suppose that ${\mathcal K}^{[0,1]}$ is an additive weak Kuranishi cobordism from ${\mathcal K}^0$ to ${\mathcal K}^1$ as in Definition~\ref{def:Kcobord}. We may compose every footprint map ${\partial}si^{[0,1]}_{I'}$ with the reflection $X\tildemes[0,1]\to X\tildemes[0,1], (x,t) \mapsto (x, 1-t)$ to define another additive weak Kuranishi atlas for $X\tildemes [0,1]$, which restricts to ${\mathcal K}^1$ near $X\tildemes\{0\}$ and to ${\mathcal K}^0$ near $X\tildemes\{1\}$, thus proving that the cobordism relation is symmetric. 
Similarly, given an additive weak Kuranishi cobordisms from ${\mathcal K}^1$ to ${\mathcal K}^2$, we may compose the footprint maps with the shift $X\tildemes[0,1]\to X\tildemes[1,2], (x,t) \mapsto (x, 1+t)$ to construct an additive weak Kuranishi atlas with boundary ${\mathcal K}^{[1,2]}$ for $X\tildemes [1,2]$, which restricts to ${\mathcal K}^1$ on $X\tildemes\{1\}$ and to ${\mathcal K}^2$ on $X\tildemes\{2\}$. We may concatenate this with any Kuranishi cobordism ${\mathcal K}^{[0,1]}$ from ${\mathcal K}^0$ to ${\mathcal K}^1$ to obtain a Kuranishi atlas with boundary ${\mathcal K}^{[0,2]}$ on $X\tildemes [0,2]$, which restricts to ${\mathcal K}^0$ near $X\tildemes\{0\}$ and to ${\mathcal K}^2$ near $X\tildemes\{2\}$. More precisely, we define ${\mathcal K}^{[0,2]}$ as follows. {\beta}gin{itemlist} \item The index set $\displaystyle \; {\mathcal I}_{{\mathcal K}^{[0,2]}} :=\;{\mathcal I}_{[0,1)} \;\sqcup\; {\mathcal I}_{{\mathcal K}^1} \;\sqcup\; {\mathcal I}_{(1,2]} \;$ is given by ${\mathcal I}_{[0,1)}:={\mathcal I}_{[0,1]}{\smallsetminus}{\iota}ta^1({\mathcal I}_{{\mathcal K}^1})$, ${\mathcal I}_{(1,2]}:={\mathcal I}_{[1,2]}{\smallsetminus}{\iota}ta^1({\mathcal I}_{{\mathcal K}^1})$. \item The charts are ${\bf K}^{[0,2]}_{I} := {\bf K}^{[0,1]}_{I}$ for $I\in{\mathcal I}_{[0,1)}$, and ${\bf K}^{[0,2]}_{I} := {\bf K}^{[1,2]}_{I}$ for $I\in{\mathcal I}_{(1,2]}$. For $I\in{\mathcal I}_{{\mathcal K}^1}$ denote by $I^{01}={\iota}ta^1(I)\in{\mathcal I}_{{\mathcal K}^{[0,1]}}$, $I^{12}={\iota}ta^1(I)\in{\mathcal I}_{{\mathcal K}^{[1,2]}}$ the labels of the charts that restrict to ${\bf K}_I$. In particular, this implies ${\partial}artial^1 U^{[0,1]}_{I^{01}} = U^1_I ={\partial}artial^1 U^{[1,2]}_{I^{12}}$. Then define the glued chart (possibly with collared boundary at $X\tildemes\{0\}$ or $X\tildemes\{2\}$) $$ \qquad {\bf K}^{[0,2]}_{I} := {\bf K}^{[0,1]}_{I^{01}}{\underline{n}}derlineerset{\scriptscriptstyle U^1_I}{\cup} {\bf K}^{[1,2]}_{I^{12}} := \left( U^{[0,1]}_{I^{01}} {\underline{n}}derlineerset{\scriptscriptstyle U^1_I}{\cup} U^{[1,2]}_{I^{12}} \, , \, E^1_I \, , \left\{ {\beta}gin{aligned} s^{[0,1]}_{I^{01}}\;,\; {\partial}si^{[0,1]}_{I^{01}} \quad &\text{on} \; U^{[0,1]}_{I^{01}}\\ s^{[1,2]}_{I^{12}}\;,\; {\partial}si^{[1,2]}_{I^{12}} \quad &\text{on} \;U^{[1,2]}_{I^{12}} \end{aligned} \right\} \right). $$ Here the domain is the boundary connected sum $$ U^{[0,2]}_{I} \;:=\; U^{[0,1]}_{I^{01}} {\underline{n}}derlineerset{\scriptscriptstyle U^1_I}{\cup} U^{[1,2]}_{I^{12}} \;:=\; \quotient{ U^{[0,1]}_{I^{01}} \sqcup U^{[1,2]}_{I^{12}} }{ \scriptstyle {\iota}ta^1_{I^{01}}(x,1) {\sigma}m {\iota}ta^1_{I^{12}}(x,1)\quad \forall x\in U^1_I } . $$ Due to the collar requirements, this domain inherits a smooth structure with an embedded product $U^1_I\tildemes (1-{\varepsilon},1+{\varepsilon})$ for some ${\varepsilon}>0$. Moreover the sections and footprint maps fit smoothly since their pullbacks to this product agree on $U^1_I\tildemes \{1\}$. Finally, the bundles are identical $E^{[0,1]}_{I^{01}} = E^1_I = E^{[1,2]}_{I^{12}}$. \item The coordinate changes are $\widehat\Phi^{[0,2]}_{IJ} := \widehat\Phi^{[0,1]}_{IJ}$ for $I,J\in{\mathcal I}_{[0,1)}$, and $\widehat\Phi^{[0,2]}_{IJ} := {\bf K}^{[1,2]}_{IJ}$ for $I,J\in{\mathcal I}_{(1,2]}$, and the following. 
{\beta}gin{itemize} \item[-] For $I,J\in{\mathcal I}_{{\mathcal K}^1}$ the coordinate charts corresponding to $I^{01}, J^{01}\in{\mathcal I}_{{\mathcal K}^{[0,1]}}$, $I^{12},J^{12}\in{\mathcal I}_{{\mathcal K}^{[1,2]}}$ fit together to give a glued coordinate change (possibly with collared boundary at $X\tildemes\{0\}$ or $X\tildemes\{2\}$) $$ \qquad \widehat\Phi^{[0,2]}_{IJ} := \left( U^{[0,1]}_{I^{01}J^{01}} {\underline{n}}derlineerset{\scriptstyle U^1_{IJ}}\cup U^{[1,2]}_{I^{12}J^{12}} \; , \, \left\{ {\beta}gin{aligned} {\partial}hi^{[0,1]}_{I^{01}} \;\quad &\text{on} \; U^{[0,1]}_{I^{01}J^{01}}\\ {\partial}hi^{[1,2]}_{I^{12}} \;\quad &\text{on} \; U^{[1,2]}_{I^{12}J^{12}} \end{aligned} \right\} \, , \, \widehat{\partial}hi^1_{IJ} \, \right) . $$ Here the embeddings of domains fit smoothly since as before their pullbacks to the product $U^1_{IJ}\tildemes (1-{\varepsilon},1+{\varepsilon})\subset U^{[0,2]}_{IJ}$ agree on $U^1_{IJ}\tildemes \{1\}$. Moreover, the linear embeddings are identical $\widehat{\partial}hi^{[0,1]}_{I^{01}J^{01}} = \widehat{\partial}hi^1_{IJ} = \widehat{\partial}hi^{[1,2]}_{I^{12}J^{12}}$. \item[{-}] For $J\in{\mathcal I}_{[0,1)}$ and $I\in{\mathcal I}_{{\mathcal K}^1}$ corresponding to $I^{01}\in{\mathcal I}_{{\mathcal K}^{[0,1]}}$ with $I^{01}\subsetneq J$ the coordinate change $\widehat\Phi^{[0,2]}_{IJ} := \widehat\Phi^{[0,1]}_{I^{01} J}$ is well defined with domain $U^{[0,1]}_{I^{01}J} \subset U^{[0,2]}_{IJ}$; similarly for $J\in{\mathcal I}_{(1,2]}$, $I\in{\mathcal I}_{{\mathcal K}^1}$. \end{itemize} \end{itemlist} Note that we need not construct coordinate changes from $I\in{\mathcal I}_{[0,1)}$ (or $I\in{\mathcal I}_{(1,2]}$) to $J\in{\mathcal I}_{{\mathcal K}^1}$ since in these cases $F_J$ is not a subset of $F_I$. Now we may define the basic charts in ${\mathcal K}^{[0,2]}$ to consist of the basic charts in ${\mathcal I}_{[0,1)}$ and ${\mathcal I}_{(1,2]}$ whose footprints are disjoint from $X\tildemes\{1\}$, together with one glued chart for each basic chart in ${\mathcal I}_{{\mathcal K}^1}$ (which is constructed from a pair of charts in ${{\mathcal K}^{[0,1]}}$ and ${{\mathcal K}^{[1,2]}}$ with matching collared boundaries). The further charts and coordinate changes constructed above then cover exactly the overlaps of the new basic charts, since the charts from ${\mathcal I}_{[0,1)}$ have no overlap with those arising from ${\mathcal I}_{(1,2]}$. The weak cocycle condition for charts or coordinate changes in ${\mathcal I}_{[0,1)}\sqcup{\mathcal I}_{(1,2]}$ then follows directly from the corresponding property of ${\mathcal K}^{[0,1]}$ and ${\mathcal K}^{[1,2]}$. Furthermore, for $I\in{\mathcal I}_{{\mathcal K}^{1}}$ the glued chart ${\bf K}^{[0,2]}_{I}={\bf K}^{[0,1]}_{I^{01}}{\underline{n}}derlineerset{{\bf K}^1_I}{\cup} {\bf K}^{[1,2]}_{I^{12}}$ has restrictions (up to natural pullbacks) {\beta}gin{align*} &{\bf K}^{[0,2]}_{I}|_{{\rm int}(U^{[0,1]}_{I^{01}})} = {\bf K}^{[0,1]}_{I^{01}}|_{{\rm int}(U^{[0,1]}_{I^{01}})}, \qquad {\bf K}^{[0,2]}_{I}|_{{\rm int}(U^{[1,2]}_{I^{12}})} = {\bf K}^{[1,2]}_{I^{12}}|_{{\rm int}(U^{[1,2]}_{I^{12}})}, \\ &{\bf K}^{[0,2]}_{I}|_{U^1_I \tildemes (1-{\varepsilon},1+{\varepsilon})} = {\bf K}^1_{I} \tildemes (1-{\varepsilon},1+{\varepsilon}) . 
\end{align*} The cocycle condition for any tuple of coordinate changes can be checked separately for these restrictions (which cover the entire domain of ${\bf K}^{[0,2]}_{I}$) and hence follow from the corresponding property of ${\mathcal K}^{[0,1]}$, ${\mathcal K}^{[1,2]}$, and ${\mathcal K}^{1}$. Similarly, the additivity condition for the charts in ${\mathcal I}_{[0,1)}$ or ${\mathcal I}_{(1,2]}$ follows directly from the additivity of ${\mathcal K}^{[0,1]}$ or ${\mathcal K}^{[1,2]}$. However, the additivity for a chart ${\bf K}^{[0,2]}_I$ with $I\in{\mathcal I}_{{\mathcal K}^1}$ is a little more subtle in that it requires additivity of the obstruction bundle $E^{02}_{I}$ with respect to all basic charts ${\bf K}^{[0,2]}_i$ whose footprint contains $F^{[0,2]}_{I}=F^{[0,1]}_{I^{01}}\cup F^{[1,2]}_{I^{12}}$. However, note that these are exactly the glued basic charts corresponding to the basic charts ${\bf K}^1_i$ whose footprint contains $F^{1}_{I}$. So additivity follows from additivity for ${\bf K}^1_I$, $$ E^{[0,2]}_{I} \;=\; E^{1}_{I} \;=\; {\underline{n}}derlineerset{F^1_i \supset F^1_I}{\bigoplus} \widehat{\partial}hi^{1}_{i I}(E^{1}_i) \;=\; {\underline{n}}derlineerset{F^{[0,2]}_i \supset F^{[0,2]}_{I}}{\bigoplus} \widehat{\partial}hi^{[0,2]}_{i I}(E^{[0,2]}_i). $$ Thus ${\mathcal K}^{[0,2]}$ is an additive weak Kuranishi cobordism with restrictions $$ {\partial}^0{\mathcal K}^{[0,2]}={\partial}^1{\mathcal K}^{[0,1]}={\mathcal K}^0,\qquad {\partial}^2{\mathcal K}^{[0,2]}={\partial}^2{\mathcal K}^{[1,2]}={\mathcal K}^2. $$ Here we write ${\partial}^2$ for the restriction over $X\tildemes\{2\}$ defined analogously to ${\partial}^0,{\partial}^1$. Finally, we compose the footprint maps of ${\mathcal K}^{[0,2]}$ with the rescaling $X\tildemes[0,2]\to X\tildemes[0,1], (x,t) \mapsto (x, \tfrac 12 t)$ to obtain an additive weak Kuranishi cobordism from ${\mathcal K}^0$ to ${\mathcal K}^2$. { } To prove (ii) consider additive weak Kuranishi atlases ${\mathcal K}^0,{\mathcal K}^1$ and a common additive weak extension ${\mathcal K}^{01}$. Then an additive weak Kuranishi cobordism ${\mathcal K}^{[0,1]}$ from ${\mathcal K}^0$ to ${\mathcal K}^1$ is given by {\beta}gin{itemize} \item index set $\displaystyle \; {\mathcal I}_{{\mathcal K}^{[0,1]}} := {\mathcal I}_{{\mathcal K}^{01}} = \bigl\{ I\subset{\mathbb N}n^{01} \,\big|\, {\textstyle\bigcap_{i\in I}} F^{01}_i \neq \emptyset \bigr\}$; \item charts $\displaystyle\; {\bf K}^{[0,1]}_I := {\bf K}^{01}_I \tildemes A_I$ with $A_I= [0,\tfrac 23)$ for $I\subset{\mathbb N}n^0$, $A_I = (\tfrac 13,1]$ for $I \subset {\mathbb N}n^1$, and $A_I=(\tfrac 13,\tfrac 23)$ otherwise; \item coordinate changes $\displaystyle\; \widehat\Phi^{[0,1]}_{IJ} := \widehat\Phi^{01}_{IJ} \tildemes (A_I\cap A_J)$. \end{itemize} This proves (ii) since additivity and weak cocycle conditions follow from the corresponding properties of ${\mathcal K}^{01}$. In particular, note that additivity makes use of the fact that ${\mathcal K}^{[0,1]}$ has the same index set as the additive Kuranishi atlas ${\mathcal K}^{01}$. \end{proof} The final task in this section is to construct tame cobordisms between different tame shrinkings in order to establish the uniqueness claimed in Theorem~\ref{thm:K}. It will also be useful to have suitable metrics on these cobordisms, since they are used in the construction of perturbations. We therefore begin by discussing the notion of metric tame Kuranishi cobordism. 
One difficulty here is that we are dealing with an arbitrary distance function, not a length metric such as a Riemannian metric. Hence we must prove various elementary results that would be clear in the Riemannian case. {\beta}gin{defn}{\lambda}bel{def:mCKS} A {\bf metric tame Kuranishi cobordism} on $X\tildemes [0,1]$ is a tame Kuranishi cobordism ${\mathcal K}^{[0,1]}$ equipped with a metric $d^{[0,1]}$ on $\bigl|{\mathcal K}^{[0,1]}\,\!\bigr|$ that satisfies the admissibility conditions of Definition~\ref{def:metric} and has a metric collar as follows: There is ${\varepsilon}>0$ such that for ${\alpha}=0,1$ with ${\mathcal K}^{\alpha}:={\partial}^{\alpha}{\mathcal K}^{[0,1]}$ the collaring maps $\rho^{\alpha}: |{\mathcal K}^{\alpha}|\tildemes A^{\alpha}_{\varepsilon}\to \bigl|{\mathcal K}^{[0,1]}\bigr|$ of Remark~\ref{rmk:cobordreal} are defined and pull back $d^{[0,1]}$ to the product metric {\beta}gin{equation} {\lambda}bel{eq:epsprod} (\rho^{\alpha})^* d^{[0,1]} \bigl((x,t),(x',t')\bigr) \;=\; d^{\alpha}(x,x')+ |t'-t| \qquad \text{on} \;\; |{\mathcal K}^{\alpha}|\tildemes A^{\alpha}_{\varepsilon} , \end{equation} where the metric $d^{\alpha}$ on $|{\mathcal K}^{\alpha}| = |{\partial}^{\alpha}{\mathcal K}^{[0,1]}|$ is given by pullback of the restriction of $d^{[0,1]}$ to ${\partial}^{\alpha} |{\mathcal K}^{[0,1]}| = \rho^{\alpha}\bigl(|{\mathcal K}^{\alpha}|\tildemes\{{\alpha}\}\bigr)$, which we denote by $$ d^{\alpha} \,:=\; d^{[0,1]}|_{|{\partial}^{\alpha} {\mathcal K}^{[0,1]}|} \,:=\; \rho^{\alpha}(\cdot, {\alpha})^* d^{[0,1]}. $$ In addition, we require for all $y\in |{\mathcal K}^{[0,1]}|{\smallsetminus} \rho^{\alpha} \bigl( |{\mathcal K}^{\alpha}|\tildemes A^{\alpha}_{\varepsilon}\bigr)$ {\beta}gin{equation}{\lambda}bel{eq:epscoll} d^{[0,1]}\bigl( y , \rho^{\alpha}(x,{\alpha} + t )\bigr) \;\ge\; {\varepsilon} - |t| \qquad \forall \; (x,{\alpha} + t)\in |{\mathcal K}^{\alpha}|\tildemes A^{\alpha}_{\varepsilon} . \end{equation} More generally, we call a metric on $\bigl|{\mathcal K}^{[0,1]}\,\!\bigr|$ {\bf admissible} if it satisfies the conditions of Definition~\ref{def:metric}, and {\bf ${\varepsilon}$-collared} if it satisfies \eqref{eq:epsprod} and \eqref{eq:epscoll}. \end{defn} Condition \eqref{eq:epscoll} controls the distance between points $\rho^{\alpha}(x,{\alpha} + t)$ in the collar and points $y$ outside of the collar. In particular, if ${\delta}<{\varepsilon}-|t|$, then the ${\delta}$-ball around $\rho^{\alpha}(x,{\alpha} + t)$ is contained in the ${\varepsilon}$-collar, while the ${\delta}$-ball around $y$ does not intersect the $|t|$-collar $\rho^{\alpha}\bigl(|{\mathcal K}^{{\alpha}}|\tildemes A^{\alpha}_{|t|}\bigr)$. {\beta}gin{example} {\lambda}bel{ex:mtriv}\rm {\mathbb N}I (i) Any admissible metric $d$ on $|{\mathcal K}|$ for a Kuranishi atlas ${\mathcal K}$ induces an admissible collared metric $d + d_{\mathbb R}$ on $|{\mathcal K}\tildemes[0,1]|\cong |{\mathcal K}| \tildemes [0,1]$, given by $$ \bigr(d + d_{\mathbb R}\bigl)\bigl( (x,t) , (x',t') \bigr) = d(x,x') + |t'-t| . $$ For short, we call $d + d_{\mathbb R}$ a {\bf product metric.} {\mathbb N}I (ii) Let $d$ be an admissible collared metric on $|{\mathcal K}|$ for a general Kuranishi cobordism ${\mathcal K}$, and let $b$ be an upper bound of $d^{\alpha}:=d\big|_{|{\partial}^{\alpha}{\mathcal K}|}$ for ${\alpha}=0,1$. 
Then we claim that for any ${\kappa}>b$ the truncated metric $\min(d,{\kappa})$ given by $(x,y) \mapsto \min(d(x,y),{\kappa})$ is an admissible ${\varepsilon}'$-collared metric for ${\varepsilon}':=\min({\varepsilon},{\kappa}-b)$. Indeed, the metric $\min(d,{\kappa})$ is also admissible because it induces the same topology on each $U_I$ as $d$. Further, the product form on the collar is preserved since {\beta}gin{equation}{\lambda}bel{eq:trunc} |t-t'| < {\kappa}-b \qquad\Longrightarrow\qquad d^{\alpha}(x,x') + |t-t'| \;\le\; b + |t-t'| \;<\; {\kappa} , \end{equation} and \eqref{eq:epscoll} holds since for $(x,{\alpha} + t)\in |{\partial}^{\alpha}{\mathcal K}|\tildemes A^{\alpha}_{{\varepsilon}'}$ we have \[ d\bigl( y , \rho^{\alpha}(x,{\alpha} + t) \bigr) \;\ge\; {\beta}gin{cases} {\varepsilon} - |t| \ge {\varepsilon}' - |t| &\quad\text{if} \; y\in |{\mathcal K}|{\smallsetminus} \rho^{\alpha}\bigl( |{\partial}^{\alpha}{\mathcal K}|\tildemes A^{\alpha}_{\varepsilon}\bigr) , \\ |t' - t| \ge {\varepsilon}' - |t| &\quad\text{if} \; y=\rho^{\alpha}(x',{\alpha}+t')\in \rho^{\alpha}\bigl( |{\partial}^{\alpha}{\mathcal K}|\tildemes ( A^{\alpha}_{\varepsilon}{\smallsetminus} A^{\alpha}_{{\varepsilon}'} )\bigr) \end{cases} \] by \eqref{eq:epscoll} for $d$ resp.\ the product form of the metric on the ${\varepsilon}$-collar, and moreover ${\kappa} \ge {\kappa} - b - |t| \geq {\varepsilon}' - |t|$. Finally, the restrictions of this truncated metric are by ${\kappa}>b$ $$ \min(d,{\kappa})\big|_{|{\partial}^{\alpha}{\mathcal K}|} \;=\; \min(d^{\alpha},{\kappa}) \;=\; d^{\alpha} \qquad\text{for}\; {\alpha}=0,1 . $$ {\mathbb N}I (iii) Let $({\mathcal K},d)$ be a metric tame Kuranishi cobordism with collar width ${\varepsilon}>0$. Then for any $0<{\delta}<{\varepsilon}$ the ${\delta}$-neighbourhood of the inclusion of the Kuranishi space $X\tildemes[0,1]$, {\beta}gin{equation}{\lambda}bel{eq:metcoll2} {\mathcal W}_{\delta}: = B_{\delta}\bigl({\iota}_{\mathcal K}(X\tildemes [0,1])\bigr) = \bigl\{y\in |{\mathcal K}| \, \big| \, \exists z\in {\iota}_{\mathcal K}(X\tildemes [0,1]) : d(y, z)<{\delta}\bigr\} \end{equation} has collar form with collars of width ${\varepsilon}-{\delta}$, that is $$ {\mathcal W}_{\delta} \cap \rho^{\alpha}\bigl(|{\partial}^{\alpha}{\mathcal K}|\tildemes A^{\alpha}_{{\varepsilon}-{\delta}}\bigr) = \rho^{\alpha}\bigl(B_{\delta}({\iota}_{{\partial}^{\alpha}{\mathcal K}}(X))\tildemes A^{\alpha}_{{\varepsilon}-{\delta}}\bigr). $$ This holds because \eqref{eq:epscoll} implies that $B_{\delta}\bigl({\iota}_{\mathcal K}\bigl(X\tildemes ([0,1]{\smallsetminus} A^{\alpha}_{\varepsilon}) \bigr)\bigr)$ does not intersect the collar $\rho^{\alpha}\bigl(|{\partial}^{\alpha}{\mathcal K}|\tildemes A^{\alpha}_{{\varepsilon}-{\delta}}\bigr)$, and on the other hand $B_{\delta}\bigl({\iota}_{\mathcal K}\bigl(X\tildemes A^{\alpha}_{\varepsilon} \bigr)\bigr)\cap \rho^{\alpha}\bigl(|{\partial}^{\alpha}{\mathcal K}|\tildemes A^{\alpha}_{\varepsilon}\bigr)$ has product form because the metric in the collar is a product metric. \end{example} Recall that an admissible metric $d$ on the virtual neighbourhood $|{\mathcal K}|$ of a Kuranishi cobordism has the property that its pullback to each chart $U_I$ defines the given topology on that manifold. However, even if the metric is also collared, this implies little else about the induced topology on $|{\mathcal K}|$.
For example, if we consider a product cobordism ${\mathcal K}\tildemes [0,1]$, then an admissible (collared) metric $d$ on $|{\mathcal K}\tildemes [0,1]|$ need not define the product topology on $|{\mathcal K}\tildemes [0,1]| \cong |{\mathcal K}|\tildemes [0,1]$; all we know is that the pullback metrics $d_I$ on each domain $U_I\tildemes [0,1]$ give the product topology. The next lemma gives useful techniques for converting such a metric to one of product form in the collar, and for proving uniqueness of admissible metrics up to cobordism. Here, as in Lemma~\ref{le:cobord1}, we will consider product Kuranishi atlases ${\mathcal K}\tildemes A$ for various intervals $A\subset {\mathbb R}$, that is with domain category ${\rm Obj}_{{\bf B}_{\mathcal K}}\tildemes A$. Their virtual neighbourhoods $|{\mathcal K}\tildemes A|$ are canonically identified with $|{\mathcal K}|\tildemes A$ by Remark~\ref{rmk:cobordreal}, and hence have continuous injections $|{\mathcal K}|\tildemes B \hookrightarrow |{\mathcal K}\tildemes A|$ for any $B\subset A$ with respect to the quotient topologies on the realizations of the categories ${\rm Obj}_{\mathcal K}$ resp.\ ${\rm Obj}_{\mathcal K}\tildemes A$. However, for admissible metrics on $|{\mathcal K}|$ and $|{\mathcal K}\tildemes A|$, we do not require these injections to remain continuous. {\beta}gin{prop} {\lambda}bel{prop:metcoll} Let ${\mathcal K}$ be a metrizable tame Kuranishi atlas. {\beta}gin{enumerate} \item Given any admissible metric $d$ on $|{\mathcal K} \tildemes [0,1]|$ and ${\varepsilon}>0$, there is another admissible metric $D$ on $|{\mathcal K} \tildemes [0,1]|$ that restricts to $d$ on $|{\mathcal K}| \tildemes [{\varepsilon},1-{\varepsilon}]$ and restricts to the product $d_{\alpha} + d_{\mathbb R}$ on $|{\mathcal K}|\tildemes A^{\alpha}_{\varepsilon}$, where $d_{\alpha}$ is the restriction of $d$ to $|{\mathcal K}|\tildemes \{{\alpha}\}\cong |{\mathcal K}|$. Moreover, $D$ is ${\varepsilon}$-collared in the sense of \eqref{eq:epscoll}. \item Let $d_1$ be an admissible metric on $|{\mathcal K}|$, and suppose for $A = [0,1], [1,2]$ that $d_A$ are admissible metrics on $|{\mathcal K} \tildemes A|$ that for some ${\kappa}>0$ restrict on $|{\mathcal K}| \tildemes \bigl([1-{\kappa},1+{\kappa}]\cap A\bigr)$ to the product metric $d_1+d_{\mathbb R}$. Then there exists an admissible $\frac{\kappa}2$-collared metric $D$ on $|{\mathcal K} \tildemes [0,2]|$ whose boundary restrictions are $$ \qquad D\big|_{|{\mathcal K}|\tildemes \{0\}} = \min(d_{[0,1]}\big|_{|{\mathcal K}|\tildemes \{0\}}, \tfrac {\kappa} 2) \quad\text{and}\quad D\big|_{|{\mathcal K}|\tildemes \{2\}} = \min(d_{[1,2]}\big|_{|{\mathcal K}|\tildemes \{2\}}, \tfrac {\kappa} 2). $$ \item Suppose that $d$ and $d'$ are admissible metrics on $|{\mathcal K}|$, where $d'$ is bounded by~$1$. Then, for any $0<{\varepsilon} <\frac 14$, there is an admissible ${\varepsilon}$-collared metric $D$ on $|{\mathcal K}\tildemes [0,1]|$ that restricts to $d + d_{\mathbb R}$ on $|{\mathcal K}|\tildemes [0,{\varepsilon}]$ and to $d+d' + d_{\mathbb R}$ on $|{\mathcal K}|\tildemes [1-{\varepsilon},1]$. \item If $d^0$ and $d^1$ are any two admissible metrics on $|{\mathcal K}|$, then there exists an admissible collared metric $D$ on $|{\mathcal K}\tildemes[0,1]|$ with restrictions $D|_{|{\mathcal K}\tildemes\{{\alpha}\}|}=d^{\alpha}$ at the boundaries ${\partial}^{\alpha} |{\mathcal K}\tildemes[0,1]|=|{\mathcal K}|\tildemes\{{\alpha}\}\cong|{\mathcal K}|$ for ${\alpha}=0,1$. 
\item Finally, suppose that ${\mathcal K}^{[0,1]}$ is a Kuranishi cobordism and $d$ is an admissible (not necessarily collared) metric on $|{\mathcal K}^{[0,1]}|$. Then there exists an admissible collared metric $D$ on $|{\mathcal K}^{[0,1]}|$ with $D|_{|{\partial}^{\alpha} {\mathcal K}^{[0,1]}|}=d|_{|{\partial}^{\alpha} {\mathcal K}^{[0,1]}|}$ for ${\alpha}=0,1$. \end{enumerate} \end{prop} {\beta}gin{proof} The metric required by (i) can be obtained by first rescaling the given admissible metric to $|{\mathcal K}|\tildemes [{\varepsilon},1-{\varepsilon}]$, and then making collaring constructions to extend it to $|{\mathcal K}|\tildemes[{\varepsilon},1]$ and then to $|{\mathcal K}|\tildemes[0,1]$. We will moreover see that the collar width ${\varepsilon}$ plays no special role other than complicating the notation. So it suffices to consider a given admissible metric $d$ on $|{\mathcal K}|\tildemes[0,1]$ and extend it by a collaring construction to a metric $D$ on $|{\mathcal K}|\tildemes[0,2]$ that restricts to $d_1 + d_{\mathbb R}$ on $|{\mathcal K}|\tildemes[1,2]$ with the metric $d_1:=d|_{|{\mathcal K}|\tildemes\{1\}}$ on $|{\mathcal K}|\tildemes\{1\}\cong|{\mathcal K}|$ and satisfying \eqref{eq:epscoll} with ${\varepsilon}=1$, that is $D(y,(x,2-t))\ge 1 - t$ for $y \in |{\mathcal K}|\tildemes[0,1]$ and $0\le t < 1$. This metric can be constructed by symmetric extension of {\beta}gin{equation}{\lambda}bel{eq:DDD} D\bigl( (x,t) , (x',t') \bigr) \; :=\; \left\{{\beta}gin{array}{ll} d\bigl( (x,t) , (x',t') \bigr) &\mbox{ if } t,t'\le 1,\\ d_1(x,x') + |t-t'|, &\mbox{ if } t,t'\ge 1,\\ d\bigl( (x,t) , (x',1) \bigr) + |t'-1| &\mbox{ if } t \le 1 \le t'. \end{array}\right. \end{equation} This is well defined, positive definite, and symmetric. The triangle inequality {\beta}gin{equation}{\lambda}bel{eq:tri} D\bigl( (x,t) , (x'',t'') \bigr)\le D\bigl( (x,t) , (x',t') \bigr) + D\bigl( (x',t') , (x'',t'') \bigr) \end{equation} directly follows from the construction in case $t,t',t''\le 1$ or $t,t',t''\ge 1$. In the case $t' \le 1 < t, t''$ it follows from the triangle inequalities for both $d$ and $d_{\mathbb R}$, {\beta}gin{align*} D\bigl( (x,t) , (x'',t'') \bigr) &\;=\; d\bigl( (x,1) , (x'',1) \bigr) + |t''-t| \\ &\;\le\; d\bigl( (x,1) , (x',t') \bigr) + d\bigl( (x',t') , (x'',1) \bigr) + |t-1|+ |t''-1| \\ &\;=\; D\bigl( (x,t) , (x',t') \bigr) + D\bigl( (x',t') , (x'',t'') \bigr) . \end{align*} Similarly, if $t\le 1<t', t''$ we have {\beta}gin{align*} D\bigl( (x,t) , (x'',t'') \bigr)&\; =\; d\bigl( (x,t) , (x'',1) \bigr) + |t''-1| \\ &\;\le d\bigl( (x,t) , (x',1) \bigr) + d\bigl( (x',1) , (x'',1) \bigr)+ |t'-1|+ |t''-t'| \\ &\; =\; D\bigl( (x,t) , (x',t') \bigr) + D\bigl( (x',t') , (x'',t'') \bigr). \end{align*} In the case $t, t''\le 1< t'$ the triangle inequality for $d$ implies that for $D$, {\beta}gin{align*} D\bigl( (x,t) , (x'',t'') \bigr)&\; =\; d\bigl( (x,t) , (x'',t'') \bigr) \\ &\;\le\; d\bigl( (x,t) , (x',1) \bigr) + d\bigl( (x',1) , (x'',t'') \bigr)+ 2|t'-1|\\ &\; =\; D\bigl( (x,t) , (x',t') \bigr) + D\bigl( (x',t') , (x'',t'') \bigr). \end{align*} Finally, for $t', t''\le 1< t$ we have {\beta}gin{align*} D\bigl( (x,t) , (x'',t'') \bigr)&\; =\; d\bigl( (x,1) , (x'',t'') \bigr) + |t-1| \\ &\;\le\; d\bigl( (x,1) , (x',t') \bigr) + d\bigl( (x',t') , (x'',t'') \bigr)+ |t-1| \\ &\; =\; D\bigl( (x,t) , (x',t') \bigr) + D\bigl( (x',t') , (x'',t'') \bigr). \end{align*} This proves that $D$ is a well defined metric on $|{\mathcal K}|\tildemes [0,2]$. Boundedness follows from that of $d$. 
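Note also that the three cases in \eqref{eq:DDD} agree on the overlaps of their domains, which is precisely the statement that $D$ is well defined: for $t\le 1$ and $t'=1$ the first and third formulas both give $d\bigl( (x,t) , (x',1) \bigr)$, while for $t=1$ and $t'\ge 1$ the second and third formulas both give $d_1(x,x') + |t'-1|$, since $d_1$ is by definition the restriction of $d$ to the slice $t=1$.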
We next check that the topology given by the pullbacks $D_I$ of $D$ to $U_I\tildemes [0,2]$ is the standard product topology. Since $d$ pulls back to the product topology on $U_I\tildemes [0,1]$ by hypothesis, as does $d_1+d_{\mathbb R}$ on $U_I\tildemes [1,2]$, the topology given by $D_I$ restricts to the product topology on both $U_I\tildemes [0,1]$ and on $U_I\tildemes [1,2]$. On the other hand, the product topology on $U_I\tildemes [0,2]$ is the quotient topology obtained from the product topologies on the disjoint union $U_I\tildemes [0,1] \;\sqcup\; U_I\tildemes [1,2]$ by identifying the two copies of $U_I\tildemes \{1\}$. Since this is precisely the topology given by $D_I$, the metric $D$ is admissible as claimed. Finally, the collaring requirement \eqref{eq:epscoll} holds since for $y=(x,t)\in |{\mathcal K}|\tildemes [0,1]$ and $(x',2-t)\in |{\mathcal K}|\tildemes (1,2]$ we have $D\bigl( y , (x',2-t) \bigr) = d\bigl( y , (x',1) \bigr) + |2-t - 1| \ge 1-t$. This completes the proof of (i). To prove (ii), we first replace each metric $d_A$ by $\min(d_A,\frac {{\kappa}}2)$ so that the metrics $d_A$ and $d_1$ are bounded by $\frac {\kappa} 2$. We next replace each $d_A$ by the metric $D_A'$ constructed as in (i) that restricts to $d_A$ on $|{\mathcal K}| \tildemes \bigl(A{\smallsetminus} [1-{\kappa},1+{\kappa}]\bigr)$, and to $d_1+d_{\mathbb R}$ on $|{\mathcal K}| \tildemes \bigl([1-{\kappa},1+{\kappa}]\cap A\bigr)$, and for $A=[0,1]$ (and similarly for $A=[1,2]$) satisfies $$ D_A'\bigl( (x,t),(x',t') \bigr) = d_A\bigl( (x,t),(x,1-{\kappa})\bigr) + |t'-(1-{\kappa})| \qquad \forall\; t\le 1-{\kappa} \le t'\le 1. $$ Next we set $D_A: = \min(D_A',{\kappa})$, which by Example~\ref{ex:mtriv} is still $\frac {\kappa} 2$-collared. Finally, we claim that $$ D\bigl((x,t),(x',t')\bigr): = \left\{ {\beta}gin{array}{ll} D_A\bigl((x,t),(x',t')\bigr) &\mbox{ if } t,t'\in A,\\ \min \bigl( \, d_1(x,x') +|t-t'| \, ,\, {\kappa} \,\bigr) {\partial}hantom{\int_A^B} \!\!\!\! &\mbox{ if } t,t'\in [1-{\kappa},1+{\kappa}], \\ {\kappa} &\mbox{ otherwise } \end{array}\right. $$ is the required metric on $|{\mathcal K}\tildemes [0,2]|$. Indeed, $D$ is well defined since $D_A$ restricts to $\min(d_1+d_{\mathbb R},{\kappa})$ on $|{\mathcal K}| \tildemes \bigl([1-{\kappa},1+{\kappa}]\cap A\bigr)$ by construction. Further $D$ has the required restrictions, and is symmetric, positive definite, and bounded by ${\kappa}$. To see that it satisfies the triangle inequality \eqref{eq:tri} we need only check triples with $D\bigl( (x,t) , (x',t') \bigr) + D\bigl( (x',t') , (x'',t'') \bigr)<{\kappa}$, and (by symmetry) $t\le t''$. The proof of (i) shows that $D$ satisfies \eqref{eq:tri} whenever $t,t',t''\le 1+{\kappa}$ or $t,t',t''\ge 1-{\kappa}$. Otherwise at least one of $t,t',t''$ is less than $1-{\kappa}$ while another is larger than $1+{\kappa}$. This means that the points in at least one of the pairs $\{t,t'\}$, $\{t,t''\}$, and $\{t',t''\}$ lie in different components of the complement of the interval $[1-{\kappa},1+{\kappa}]$, and thus the corresponding points in $|{\mathcal K}\tildemes[0,2]|$ have distance ${\kappa}$. In particular, \eqref{eq:tri} holds by the upper bound on $D$ except in the second case. 
Assuming w.l.o.g.\ $t\le t''$, this reduces our considerations to the case $t<1-{\kappa}\le t' \le 1+{\kappa} < t''$, in which either $|t'-(1-{\kappa})|\ge{\kappa}$ or $|t'-(1+{\kappa})|\ge{\kappa}$, and thus $D\bigl( (x,t) , (x',t') \bigr) \ge {\kappa}$ or $D\bigl( (x',t') , (x'',t'') \bigr) \ge {\kappa}$, which again proves \eqref{eq:tri}. Thus in all cases $D$ satisfies the triangle inequality, and so is a metric as claimed. Further, $D$ is admissible because, by (i), its pullback induces the product topology on each $U_I\times [0,1+{\kappa}]$ and $U_I\times [1-{\kappa},2]$, and hence on $U_I\times [0,2]$. This proves (ii).

To prove (iii), we choose a smooth nondecreasing function ${\beta}: [0,1]\to [0,1]$ such that ${\beta}|_{[0,{\varepsilon}]}=0$, ${\beta}|_{[1-{\varepsilon},1]}=1$, and whose derivative satisfies $\sup {\beta}'<2$. (At this point we need to know that $(1-{\varepsilon})-{\varepsilon} > \frac 12$, which holds by the assumption ${\varepsilon}<\frac 14$.) For $r\in [0,1]$ we then obtain a metric $d_r$ on $|{\mathcal K}|$ by
$$
d_r(x,x'): = d(x,x') + {\beta}(r) d'(x,x')
$$
and note that $d_r(x,x') \le d_s(x,x')$ whenever $r\le s$. Moreover, each $d_r$ is admissible on $|{\mathcal K}|$ since its pullbacks to the charts are analogous sums, and the sum of two metrics that induce the same topology also induces this topology. Now we claim that
$$
D\bigl( (x,t) , (x',t') \bigr) = d_{\min(t,t')}(x,x') + |t-t'|
$$
provides the required metric $D$ on $|{\mathcal K} \times [0,1]|$. This is evidently symmetric and positive definite, and by symmetry it suffices to check the triangle inequality \eqref{eq:tri} for $t\le t''$. In the case $t'< t\le t''$ we use $0\le {\beta}(t) - {\beta}(t') \le 2 (t-t')$ and $d'\le 1$ to obtain
\begin{align*}
& \; D\bigl( (x,t) , (x',t') \bigr) + D\bigl( (x',t') , (x'',t'') \bigr)\\
&\qquad \qquad = \; d(x,x') + d(x',x'') + {\beta}(t') d'(x,x') + {\beta}(t') d'(x',x'') + |t-t'| + |t'-t''|\\
&\qquad \qquad \ge\; d(x,x'') + {\beta}(t') d'(x,x'') + 2|t-t'| + |t-t''| \\
&\qquad \qquad\ge\; d(x,x'') + {\beta}(t) d'(x,x'') + |t-t''| \; =\; D\bigl( (x,t) , (x'',t'') \bigr).
\end{align*}
In the other cases $t\le t'\le t''$ resp.\ $t\le t''\le t'$, we can use the monotonicity ${\beta}(t')\geq {\beta}(t)$ resp.\ ${\beta}(t'')\geq {\beta}(t)$ to check \eqref{eq:tri}. Therefore $D$ is a metric on the product virtual neighbourhood. It is admissible because each $d_r$ is admissible on $|{\mathcal K}|$, so that the pullback metric on each set $U_I\times [0,1]$ induces the product topology. Finally, it is ${\varepsilon}$-collared by construction. In particular, it satisfies \eqref{eq:epscoll} due to the term $|t-t'|$ in its formula. This proves (iii).

To prove (iv), let $C>0$ be a common upper bound for $d^0$ and $d^1$ and set ${\kappa}:=\frac 16$. By (iii) there is a ${\kappa}$-collared metric $d_{[0,1]}$ on $|{\mathcal K}|\times [0,1]$ that equals $\frac {\kappa}{3 C} d^0 +d_{\mathbb R}$ on $|{\mathcal K}|\times [0,{\kappa}]$ and $\frac {\kappa} {3 C} (d^0 +d^1)+d_{\mathbb R}$ on $|{\mathcal K}|\times [1-{\kappa}, 1]$. Similarly, there is a ${\kappa}$-collared metric $d_{[1,2]}$ on $|{\mathcal K}|\times [1,2]$ that equals $\frac {\kappa}{3C} (d^0 +d^1)+d_{\mathbb R}$ on $|{\mathcal K}|\times [1,1+{\kappa}]$ and $\frac {\kappa}{3 C} d^1+d_{\mathbb R}$ on $|{\mathcal K}|\times [2-{\kappa}, 2]$.
These satisfy the assumptions of (ii), so that we obtain a collared metric $D'$ on $|{\mathcal K}|\times [0,2]$ that restricts to $\min\{ \frac {\kappa} {3 C} d^0+d_{\mathbb R}, \frac{\kappa} 2 \}= \frac {\kappa}{3 C } d^0+d_{\mathbb R}$ on $|{\mathcal K}|\times [0,\frac{\kappa}6]$ since $\frac {\kappa} {3 C} d^0 \le \frac{\kappa}3$. Similarly, it restricts to $\min\{ \frac {\kappa}{3C} d^1+d_{\mathbb R} , \frac{{\kappa}}2 \} = \frac {\kappa}{3 C} d^1+d_{\mathbb R}$ on $|{\mathcal K}|\times [2-\frac{\kappa} 6,2]$. Moreover, $D'$ satisfies \eqref{eq:epscoll} with ${\varepsilon}={\kappa}=\frac 16$ by (iii). Now define $D$ on $|{\mathcal K}|\times [0,1]$ to be the pullback of $\frac{3C}{\kappa} D'$ by a rescaling map $(x,t)\mapsto (x,{\beta}(t))$ with ${\beta}(t) = \frac {{\kappa} t}{3C}$ for $t$ near $0$ and ${\beta}(t) = 2- \frac {{\kappa}(1- t)}{3C}$ for $t$ near $1$. Then $D$ restricts to $d^0+d_{\mathbb R}$ near $|{\mathcal K}|\times \{0\}$ and to $d^1+d_{\mathbb R}$ near $|{\mathcal K}|\times \{1\}$, and is collared by construction. Hence it provides the required metric tame cobordism $({\mathcal K}\times[0,1],D)$ from $({\mathcal K},d^0) $ to $({\mathcal K},d^1)$.

To prove (v) let $2{\varepsilon}>0$ be the collar width of ${\mathcal K}:={\mathcal K}^{[0,1]}$. Then we first push forward the metric $d$ on $|{\mathcal K}|$ by the bijection
$$
F : |{\mathcal K}| \;\overset{\cong}{\longrightarrow}\; |{\mathcal K}| {\smallsetminus} \Bigl( \rho^0\bigl(|{\partial}^0{\mathcal K}|\times [0,{\varepsilon})\bigr) \;\cup\; \rho^1\bigl(|{\partial}^1{\mathcal K}|\times (1-{\varepsilon},1]\bigr) \Bigr)
$$
given by $F : \rho^0(x, t) \mapsto \rho^0(x, {\varepsilon} + \frac 12 t )$ and $F : \rho^1(x, 1-t) \mapsto \rho^1(x, 1-{\varepsilon} - \frac 12 t )$ for $t\in[0,2{\varepsilon})$, and the identity on the complement of the $2{\varepsilon}$-collars. The push forward $F_*d$ is admissible since $F$ pulls back to homeomorphisms supported in the collars of the domains $U_I$ of ${\mathcal K}$. Next, let us denote $d_{\alpha}:=d|_{|{\partial}^{\alpha}{\mathcal K}|}$, so that $d_0$ equals the restriction $(F_*d)|_{\rho^0(|{\partial}^0{\mathcal K}|\times\{{\varepsilon}\})}$, pulled back via $\rho^0(\cdot, {\varepsilon})$, and similarly for $d_1$ via $\rho^1(\cdot, 1-{\varepsilon})$. Then we apply the same collaring construction as in (i) to extend $F_*d$ to an admissible collared metric on $|{\mathcal K}|$ given by symmetric extension of
\[
D\bigl( y , y' \bigr) \; :=\; \left\{\begin{array}{ll}
d\bigl( y , y' \bigr) &\mbox{ if } y,y' \in {\rm im\,} F ,\\
\rho^{\alpha}_*(d_{\alpha} + d_{\mathbb R}) (y,y') &\mbox{ if } y,y'\in \rho^{\alpha}\bigl(|{\partial}^{\alpha}{\mathcal K}|\times A^{\alpha}_{\varepsilon}\bigr) ,\\
d\bigl( y , \rho^0(x', {\varepsilon}) \bigr) + |{\varepsilon} - t' | &\mbox{ if } y\in {\rm im\,} F, y'=\rho^0(x',t') , \\
d\bigl( y , \rho^1(x', 1-{\varepsilon}) \bigr) + |1-{\varepsilon} - t' | &\mbox{ if } y\in {\rm im\,} F, y'=\rho^1(x',t') , \\
d\bigl( \rho^0(x, {\varepsilon}) , \rho^1(x', 1-{\varepsilon}) \bigr) &\mbox{ if } y=\rho^0(x,t), y'=\rho^1(x',t'). \\
\qquad + |{\varepsilon} - t |+ |1-{\varepsilon} - t' | &
\end{array}\right.
\]
Viewing this as a two stage extension to $\rho^{\alpha}\bigl(|{\partial}^{\alpha}{\mathcal K}|\times A^{\alpha}_{\varepsilon}\bigr)$ for ${\alpha}=0,1$, the proof of the triangle inequality and admissibility is the same as in (i), and the restrictions are $\rho^{\alpha}_*d_{\alpha}$ on $\rho^{\alpha}\bigl(|{\partial}^{\alpha}{\mathcal K}|\times \{{\alpha}\}\bigr)$, as required. This finishes the proof.
\end{proof}

We are now in a position to prove the uniqueness part of Theorem~\ref{thm:K}, namely that different tame shrinkings of the same additive weak Kuranishi atlas are cobordant. This is a crucial ingredient in establishing that the virtual fundamental class associated to a given Kuranishi atlas is well defined. Since in practice one only associates a well defined cobordism class of Kuranishi atlases to a given moduli space, we need the full generality of the following result. For this, we must revisit the construction of shrinkings in the proof of Proposition~\ref{prop:proper}.

\begin{remark}\rm
In the case of shrinkings ${\mathcal K}^0,{\mathcal K}^1$ of a fixed Kuranishi atlas ${\mathcal K}$ there is an easier construction of a cobordism in one special case: If the shrinkings of the footprint covers $(F^{\alpha}_I)$ are compatible in the sense that their intersection $(F^0_I\cap F^1_I)$ is also a shrinking (i.e.\ covers $X$ and has the same index set of nonempty intersections of footprints), then by Remark~\ref{rmk:shrink} the intersection of domains $U^0_{IJ}\cap U^1_{IJ}$ defines another shrinking of~${\mathcal K}$. Thus one obtains an additive tame shrinking of the product Kuranishi cobordism ${\mathcal K}\times [0,1]$ by
$$
U_{IJ}^{[0,1]} := \bigl( U^0_{IJ} \times [0,\tfrac 13) \bigr) \;\cup\; \bigl( \bigl( U^0_{IJ}\cap U^1_{IJ} \bigr) \times [\tfrac 13,\tfrac 23] \bigr) \;\cup\; \bigl( U^1_{IJ} \times (\tfrac 23,1] \bigr) .
$$
\end{remark}

\begin{prop}\label{prop:cobord2}
Let ${\mathcal K}^{[0,1]}$ be an additive weak Kuranishi cobordism on $X\times [0,1]$, and let ${\mathcal K}^0_{sh}, {\mathcal K}^1_{sh}$ be preshrunk tame shrinkings of ${\partial}^0{\mathcal K}^{[0,1]}$ and ${\partial}^1{\mathcal K}^{[0,1]}$, which are hence metrizable as in Proposition~\ref{prop:metric}. Then there is a preshrunk tame shrinking of ${\mathcal K}^{[0,1]}$ that provides a metrizable tame Kuranishi cobordism from ${\mathcal K}^0_{sh}$ to ${\mathcal K}^1_{sh}$.
\end{prop}

\begin{proof}
As in the proof of Proposition~\ref{prop:metric} we first construct a tame shrinking between any pair of tame shrinkings ${\mathcal K}^0, {\mathcal K}^1$ of ${\partial}^0{\mathcal K}^{[0,1]}$ and ${\partial}^1{\mathcal K}^{[0,1]}$. We will use this first to obtain a tame shrinking ${\mathcal K}'$ of ${\mathcal K}^{[0,1]}$ with ${\partial}^{\alpha}{\mathcal K}'={\mathcal K}^{\alpha}$ (tame shrinkings chosen such that the ${\mathcal K}^{\alpha}_{sh}$ are precompact shrinkings of the ${\mathcal K}^{\alpha}$), and then to obtain a precompact tame shrinking ${\mathcal K}^{[0,1]}_{sh}$ of ${\mathcal K}'$ with ${\partial}^{\alpha}{\mathcal K}^{[0,1]}_{sh}={\mathcal K}^{\alpha}_{sh}$. This Kuranishi cobordism ${\mathcal K}^{[0,1]}_{sh}$ supports an admissible metric $d_{sh}$ by the same argument as in Proposition~\ref{prop:metric}. Finally we may arrange that it is collared by Proposition~\ref{prop:metcoll}~(v). Hence it remains to carry out the first construction.
We write the index set as the union ${\mathcal I}_{{\mathcal K}^{[0,1]}}= {\mathcal I}_0 \cup {\mathcal I}_{(0,1)}\cup {\mathcal I}_1$ of ${\mathcal I}_{\alpha} := {\mathcal I}_{{\partial}^{\alpha}{\mathcal K}^{[0,1]}} \subset {\mathcal I}_{{\mathcal K}^{[0,1]}}$ and ${\mathcal I}_{(0,1)}:={\mathcal I}_{{\mathcal K}^{[0,1]}}{\smallsetminus} ({\mathcal I}_0\cup{\mathcal I}_1)$. Since the footprint of a chart of ${\mathcal K}^{[0,1]}$ might intersect both $X\times\{0\}$ and $X\times\{1\}$, the sets ${\mathcal I}_0$ and ${\mathcal I}_1$ may not be disjoint, though they are both disjoint from ${\mathcal I}_{(0,1)}$, which indexes the charts with precompact footprint in $X\times (0,1)$. We will denote the charts of the Kuranishi cobordism ${\mathcal K}^{[0,1]}$ by ${\bf K}_I=(U_I,\ldots)$, while ${\bf K}^{\alpha}_I=(U^{\alpha}_I,\ldots) = {\partial}^{\alpha} {\bf K}_I |_{U^{\alpha}_I}$ denotes the charts of the shrinking ${\mathcal K}^{\alpha}$ of ${\partial}^{\alpha}{\mathcal K}^{[0,1]}$ with domains $U^{\alpha}_{IJ}\subset \partial^{\alpha} U_{IJ}$. Recall moreover that by definition of a shrinking the index sets ${\mathcal I}_{{\mathcal K}^{\alpha}}={\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}={\mathcal I}_{\alpha}$ coincide. We suppose that the charts and coordinate changes of ${\mathcal K}^{[0,1]}$ have uniform collar width $5{\varepsilon}>0$ as in Remark~\ref{rmk:Ceps}. Then the footprints have induced $5{\varepsilon}$-collars
$$
(X\times A^{\alpha}_{5{\varepsilon}}) \cap F_I = \partial^{\alpha} F_I \times A^{\alpha}_{5{\varepsilon}} \qquad\text{with}\qquad \partial^{\alpha} F_I = {\rm pr}_X\bigl( F_I\cap (X\times\{{\alpha}\}) \bigr).
$$
By construction of the shrinkings ${\mathcal K}^{\alpha}$ of ${\partial}^{\alpha}{\mathcal K}^{[0,1]}$, we have precompact inclusions $F^{\alpha}_I \sqsubset \partial^{\alpha} F_I$. Now one can form a $3{\varepsilon}$-collared shrinking $(F'_i\sqsubset F_i)_{i=1,\ldots,N}$ of the cover of $X\times[0,1]$ by the footprints of the basic charts by first choosing an arbitrary shrinking $(F_i'')$ as in Definition~\ref{def:shr0}, and then adding $3{\varepsilon}$-collars to the boundary charts. Namely, for $i\in {\mathcal I}_0\cup {\mathcal I}_1$ we define
$$
F_i': = \Bigl(F_i''\cup \bigl(F^{\alpha}_i\times A^{\alpha}_{4 {\varepsilon}} \bigr)\Bigr)\; {\smallsetminus} \; \Bigl( \bigl( X{\smallsetminus} F^{\alpha}_i \bigr)\times \overline{A^{\alpha}_{3{\varepsilon}}} \Bigr) .
$$
By construction, these sets still cover $X\times A^{\alpha}_{4{\varepsilon}}$, and together with $F'_i:=F''_i$ for $i\in{\mathcal I}_{(0,1)}$ they cover $X \times (3{\varepsilon}, 1-3{\varepsilon})$, and hence all of $X\times[0,1]$. Moreover, each $F'_i$ is open with $3{\varepsilon}$-collar $F_i'\cap \bigl(X\times A^{\alpha}_{3{\varepsilon}} \bigr) = F^{\alpha}_i\times A^{\alpha}_{3{\varepsilon}}$ and has compact closure in $F_i$ because $F_i''$ does by construction and $F^{\alpha}_i \times A^{\alpha}_{4{\varepsilon}} \sqsubset \partial^{\alpha} F_i \times A^{\alpha}_{5{\varepsilon}} \subset F_i $. Next, the induced footprints $F'_I=\bigcap_{i\in I} F'_i$ also have $3{\varepsilon}$-collars, and $F_I\neq\emptyset$ implies $F'_I\neq\emptyset$ since either $F_I\cap (X\times A^{\alpha}_{5{\varepsilon}})\neq \emptyset$ so that $\emptyset\neq F^{\alpha}_I\times A^{\alpha}_{4{\varepsilon}} \subset F'_I$, or $F_I\subset X \times (5{\varepsilon}, 1-5{\varepsilon})$ so that $\emptyset\neq F''_I\subset F'_I$.
Hence $(F_i')_{i=1,\ldots,N}$ is a shrinking of the footprint cover with $3{\varepsilon}$-collars. We now carry through the proof of Proposition~\ref{prop:proper}, in the $k$-th step choosing domains $ U^{(k)}_{IJ}\subset U^{(k-1)}_{IJ}\subset U_{IJ}$ for $I, J\in {\mathcal I}_{{\mathcal K}^{[0,1]}}$ satisfying the conditions (i$'$), (ii$'$), (iii$'$) as well as the following collar requirement, which ensures that the resulting shrinking of ${\mathcal K}^{[0,1]}$ is a Kuranishi cobordism between the given tame atlases ${\mathcal K}^0$ and ${\mathcal K}^1$:
\begin{equation}\label{collar}
(\iota^{\alpha}_I)^{-1} \bigl( U^{(k)}_{IJ} \bigr) \;=\; U^{{\alpha}}_{IJ} \times A^{\alpha}_{\varepsilon} \qquad\forall \; {\alpha} \in\{0,1\}, \; I\subset J\in {\mathcal I}_{\alpha} .
\end{equation}
For $k=0$ we must first choose precompact sets $U^{(0)}_I\sqsubset U_I$ satisfying the zero set condition \eqref{eq:U(0)}, namely $U_I^{(0)}\cap s_I^{-1}(0) = \psi_I^{-1}(F_I')$. For that purpose we apply Lemma~\ref{le:restr0} to
$$
F_I'\cap (X\times(2{\varepsilon},1-2 {\varepsilon})) \;\sqsubset \; \psi_I \Bigl(s_I^{-1}(0){\smallsetminus} \bigcup_{{\alpha}=0,1} \iota^{\alpha}_I({\partial}^{\alpha} U_I\times \overline{A^{\alpha}_{{\varepsilon}}})\Bigr)
$$
to find
$$
U_I' \;\sqsubset \; U_I \;{\smallsetminus}\; \bigcup_{\alpha} \iota^{\alpha}_I({\partial}^{\alpha} U_I\times \overline{A^{\alpha}_{{\varepsilon}}}) \quad\text{with}\quad U_I'\cap s_I^{-1}(0) = \psi_I^{-1}\bigl(F_I'\cap (X\times(2{\varepsilon},1-2{\varepsilon}))\bigr) .
$$
Then we add the image under $\iota^{\alpha}_I$ of the precompact subsets $U_I^{\alpha}\times A^{\alpha}_{3{\varepsilon}} \sqsubset \partial^{\alpha} U_I \times A^{\alpha}_{4{\varepsilon}} \subset U_I$, which have footprint $F_I^{\alpha}\times A^{\alpha}_{3{\varepsilon}} = F_I' \cap (X\times A^{\alpha}_{3{\varepsilon}})$, to obtain the required domains
$$
U_I^{(0)} := U_I'\;\cup\; \bigcup_{{\alpha}=0,1} \iota^{\alpha}_I(U_I^{\alpha}\times A^{\alpha}_{ 3{\varepsilon}}) \quad\sqsubset\; U_I
$$
with boundary ${\partial}^{\alpha} U_I^{(0)} = U_I^{\alpha}$ and collar width ${\varepsilon}$. Next, the domains $U_{IJ}^{(0)}$ for $I\subsetneq J$ are determined by \eqref{eq:UIJ(0)} and satisfy \eqref{collar} since both $U_{IJ}$ and $U_{I}^{(0)}, U_{J}^{(0)}$ have ${\varepsilon}$-collars, and $\phi_{IJ}$ has product form on the collar. For $I\in{\mathcal I}_{(0,1)}$ these constructions also apply, and reproduce the construction without boundary, if we denote ${\rm im\,}\iota^{\alpha}_I:=\emptyset$ and recall that the footprints are contained in $X\times (5{\varepsilon},1-5{\varepsilon})$. In the following we will use the same conventions and hence need not mention ${\mathcal I}_{(0,1)}$ separately.

Now in each iterative step for $k\geq 1$ there are two adjustments of the domains. First, in Step~A the domains $U^{(k)}_{IK}\subset W_{K'}$ for $|I|=k$ are chosen using Lemma~\ref{le:set}, where $W_{K'}$ is given by~\eqref{eq:WwK}. In order to give these sets ${\varepsilon}$-collars, we denote the sets provided by Lemma~\ref{le:set} by $V_{IK}^{(k)}\subset W_{K'}$ and define
\begin{equation}\label{eq:UIJeps1}
U_{IK}^{(k)} \,:=\; V_{IK}^{(k)} \cup U_{IK}^{0,1,{\varepsilon}} \qquad \text{with}\quad U_{IK}^{0,1,{\varepsilon}} \,:=\; \iota^0_I\bigl(U_{IK}^0\times A^0_{\varepsilon}\bigr) \;\cup\; \iota^1_I\bigl(U_{IK}^1\times A^1_{\varepsilon} \bigr) .
\end{equation}
This set is open and satisfies \eqref{collar} because $V_{IK}^{(k)}$ is a subset of $U^{(k-1)}_{IK}$, which by induction hypothesis has the required ${\varepsilon}$-collar. We now show that \eqref{eq:UIJeps1} satisfies the requirements of Step A.
\begin{itemlist}
\item[(i$'$)] holds since $U_{IJ}^{(k-1)}\cap \bigl(s_I\bigr)^{-1}(E_H) \;\subset\; V_{IJ}^{(k)}\subset U_{IJ}^{(k)}$, where the first inclusion holds by construction of $V_{IJ}^{(k)}$.
\item[(ii$'$)] holds since $V_{IJ}^{(k)}\cap V_{IK}^{(k)}= V_{I (J\cup K)}^{(k)}$ by construction and $U_{IJ}^{0,1,{\varepsilon}}\cap U_{IK}^{0,1,{\varepsilon}} = U_{I (J\cup K)}^{0,1,{\varepsilon}}$ by the tameness of the collars, so
\begin{align*}
U_{IJ}^{(k)}\cap U_{IK}^{(k)} &\;=\; \bigl(V_{IJ}^{(k)}\cap V_{IK}^{(k)}\bigr) \;\cup\; \bigl(U_{IJ}^{0,1,{\varepsilon}}\cap V_{IK}^{(k)}\bigr) \;\cup\; \bigl(V_{IJ}^{(k)}\cap U_{IK}^{0,1,{\varepsilon}} \bigr) \;\cup\; \bigl(U_{IJ}^{0,1,{\varepsilon}} \cap U_{IK}^{0,1,{\varepsilon}} \bigr) \\
&\;=\; V_{I(J\cup K)}^{(k)} \;\cup\; U_{I(J\cup K)}^{0,1,{\varepsilon}} \;=\; U_{I (J\cup K)}^{(k)} .
\end{align*}
Here the two mixed intersections are subsets of the collar $U^{0,1,{\varepsilon}}_{IJ}\cap U^{0,1,{\varepsilon}}_{IK}$ due to $V_{I\bullet}^{(k)}\subset U_{I\bullet}^{(k-1)}$.
\item[(iii$''$)] holds since $V_{IK}^{(k)} \subset (\phi_{IJ})^{-1}(V_{JK}^{(k-1)})$ by construction and $U^{0,1,{\varepsilon}}_{IK} \subset (\phi^{[0,1]}_{IJ})^{-1}(U^{0,1,{\varepsilon}}_{JK})$ by the tameness of the shrinkings $U^0_\bullet, U^1_\bullet$.
\end{itemlist}
\noindent
This completes Step A. In Step B the domains $U^{(k)}_{JK}$ for $|J|>k$ are constructed by \eqref{eq:UJK(k)}, namely
$$
U_{JK}^{(k)}\, :=\; U_{JK}^{(k-1)}{\smallsetminus} \bigcup_{I\subset J, |I|= k} \bigl( s_J^{-1}(E_I){\smallsetminus} \phi_{IJ}(U^{(k)}_{IJ}) \bigr) .
$$
We must check that this removes no points in the collars, i.e.\
$$
\iota^{\alpha}_J(U_{JK}^{\alpha}\times A^{\alpha}_{\varepsilon})\cap s_J^{-1}(E_I) \;\subset\; \phi_{IJ}(U^{(k)}_{IJ}) .
$$
But in this collar $s_J$ and $\phi_{IJ}$ have product form induced from the corresponding maps in the Kuranishi atlases ${\mathcal K}^{\alpha}$, where tameness implies $U_{JK}^{\alpha}\cap (s_J^{\alpha})^{-1}(E_I) = \phi_{IJ}^{\alpha} (U^{\alpha}_{IJ})$. Since the $U^{(k)}_{IJ}$ already have ${\varepsilon}$-collar by construction, this guarantees the above inclusion. Thus, with these modifications, the $k$-th step in the proof of Proposition~\ref{prop:proper} carries through. After a finite number of iterations, we find a tame shrinking ${\mathcal K}'$ of ${\mathcal K}^{[0,1]}$ with given restrictions ${\partial}^{\alpha}{\mathcal K}'={\mathcal K}^{\alpha}$ for ${\alpha}=0,1$. This completes the proof.
\end{proof}

\begin{rmk}\label{rmk:Morita}\rm
In some sense, the ``correct'' notion of equivalence between Kuranishi atlases should generalize that of Morita equivalence for groupoids.
In other words, one should develop an appropriate notion of ``refinement'' of a Kuranishi atlas (e.g.\ by replacing each chart by a tuple of charts obtained by restriction to a finite cover of its footprint) and then should say that two Kuranishi atlases ${\mathcal K}, {\mathcal K}'$ on ${\bf X}$ are equivalent if there is a diagram
$$
{\mathcal K}'\longleftarrow {\mathcal K}''\longrightarrow {\mathcal K},
$$
where ${\mathcal K}''$ is a ``refinement'' of ${\mathcal K}$, and an arrow ${\mathcal K}^1\to{\mathcal K}^2$ means (at a minimum) that there are functors ${\bf B}_{{\mathcal K}^1} \to {\bf B}_{{\mathcal K}^2}$ and ${\bf E}_{{\mathcal K}^1} \to {\bf E}_{{\mathcal K}^2}$, which commute with the section functors $s_{{\mathcal K}^{\alpha}}:{\bf B}_{{\mathcal K}^{\alpha}}\to{\bf E}_{{\mathcal K}^{\alpha}}$ and the footprint functors $\psi_{{\mathcal K}^{\alpha}}: s_{{\mathcal K}^{\alpha}}^{-1}(0)\to X$. We do not pursue this formal line of reasoning here. However, it will be useful to develop the notion of a particular kind of refinement (called a reduction) in order to construct sections; see Proposition~\ref{prop:red}.
\end{rmk}

\section{From Kuranishi atlases to the Virtual Fundamental Class}\label{s:VMC}

In this section we assume that ${\mathcal K}$ is an oriented, tame Kuranishi atlas (as throughout with trivial isotropy, and with the notion of orientation to be defined) of dimension $d$ on a compact metrizable space $X$, and construct the virtual moduli cycle (VMC) and virtual fundamental class (VFC). As a preliminary step, Section~\ref{ss:red} provides reductions of the cover of the Kuranishi neighbourhood $|{\mathcal K}|$ by the images of the domains $\pi_{\mathcal K}(U_I)$. The goal here is to obtain a cover by a partially ordered set of Kuranishi charts, with coordinate changes governed by the partial order. This will allow for an iterative construction of perturbations. In Section~\ref{ss:sect} we introduce the notion of transverse perturbations in a reduction, and -- assuming their existence -- construct the VMC as a closed manifold up to compact cobordism, so far unoriented, from the associated perturbed zero sets. One difficulty here is to ensure compactness of the zero set despite the fact that ${\iota}_{\mathcal K}(X)\subset |{\mathcal K}|$ may not have a precompact neighbourhood. Transverse perturbations are then constructed in Section~\ref{ss:const}, and orientations will be established in Section~\ref{ss:vorient}. Finally, we construct the virtual fundamental cycle in Section~\ref{ss:VFC}.

\subsection{Reductions and Covers}\label{ss:red} \hspace{1mm}\\
The cover of $X$ by the footprints $(F_I)_{I\in {\mathcal I}_{\mathcal K}}$ of all the Kuranishi charts (both the basic charts and those that are part of the transitional data) is closed under intersection. This makes it easy to express compatibility of the charts, since the overlap of footprints of any two charts ${\bf K}_I$ and ${\bf K}_J$ is covered by another chart ${\bf K}_{I\cup J}$. However, this yields so many compatibility conditions that a construction of compatible perturbations in the Kuranishi charts may not be possible. For example, a choice of perturbation in the chart ${\bf K}_I$ also fixes the perturbation in each chart ${\bf K}_J$ over $\phi_{J (I\cup J)}^{-1}\bigl( {\rm im\,} \phi_{I (I\cup J)}\bigr) \subset U_J$, whenever $I\cup J\in{\mathcal I}_{\mathcal K}$.
Since we do not assume transversality of the coordinate changes, this subset of $U_J$ need not be a submanifold, and hence the perturbation may not extend smoothly to $U_J$. We will avoid these difficulties, and also make a first step towards compactness, by reducing the domains of the Kuranishi charts to precompact subsets $V_I\sqsubset U_I$ such that all compatibility conditions between ${\bf K}_I|_{V_I}$ and ${\bf K}_J|_{V_J}$ are given by direct coordinate changes $\widehat\Phi_{IJ}$ or $\widehat\Phi_{JI}$. The left diagram in Figure~\ref{fig:1} illustrates a typical family of sets $V_I$ for $I\subset\{1,2,3\}$ with the appropriate intersection properties. As we explain in Remark~\ref{rmk:nerve} this reduction process is analogous to replacing the star cover of a simplicial set by the star cover of its first barycentric subdivision. This method was used in the current context by Liu--Tian~\cite{LiuT}.

\begin{figure}[htbp]
\centering
\includegraphics[width=4in]{virtfig1.pdf}
\caption{ The right diagram shows the first barycentric subdivision of the triangle with vertices $1,2,3$. It has three new vertices labelled $ij$ at the barycenters of the three edges and one vertex labelled $123$ at the barycenter of the triangle. The left is a schematic picture of a reduction of the cover as in Lemma~\ref{le:cov0}. The black sets are examples of multiple intersections of the new cover, which correspond to the simplices in the barycentric subdivision. E.g.\ $V_2\cap V_{23}\cap V_{123}$ corresponds to the triangle with vertices $2, 23, 123$, whereas $V_1\cap V_{123}$ corresponds to the edge between $1$ and $123$. }
\label{fig:1}
\end{figure}

\begin{rmk}\label{rmk:nerve}\rm
In algebraic topology it is often useful to consider the nerve ${\mathcal N}: = {\mathcal N}({\mathcal U})$ of an open cover ${\mathcal U}: = (F_i)_{i=1,\ldots,N}$ of a space $X$, namely the simplicial complex with one vertex for each open subset $F_i$ and a $k$-simplex for each nonempty intersection of $k+1$ subsets.\footnote{ A simplicial complex is defined in \cite[\S2.1]{Hat} as a finite set of vertices (or $0$-simplices) $V$ and a subset of the power set ${\mathcal I}\subset 2^V$, whose $(k+1)$-element sets are called $k$-simplices for $k\geq 0$. The only requirements are that any subset $\tau \subsetneq {\sigma}$ of a simplex ${\sigma}\in{\mathcal I}$ is also a simplex $\tau\in{\mathcal I}$, and that each simplex is linearly ordered, compatible with a partial order on $V$. In our case the ordering is provided by the linear order on $V=\{1,\dots,N\}$. Then the $j$-th face of a $k$-simplex ${\sigma}: = \{i_0,\dots, i_k\}$, where $i_0<\dots<i_k$, is given by the subset of ${\sigma}$ obtained by omitting its $j$-th vertex $i_j$. This provides the order in which faces are identified when constructing the realization of the simplicial complex. } We denote its set of simplices by
$$
{\mathcal I}_{\mathcal U}: = \bigl\{ I\subset \{1,\ldots,N\} \,\big|\, \cap_{i\in I} F_i\ne \emptyset \bigr\} .
$$
This combinatorial object is often identified with its realization, the topological space
$$
|{\mathcal N}| :=\; \quotient{{\textstyle \coprod_{I\in {\mathcal I}_{\mathcal U}}} \{I\} \times {\Delta}^{|I|-1} }{\sim}
$$
where $\sim$ is the equivalence relation under which the $|I|$ codimension $1$ faces of the simplex $\{I\}\times{\Delta}^{|I|-1}$ are identified with $\{I{\smallsetminus} \{i\}\} \times {\Delta}^{|I|-2}$ for $i\in I$.
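For instance, for a cover by two sets $F_1,F_2$ with $F_1\cap F_2\ne\emptyset$ we have
$$
{\mathcal I}_{\mathcal U} \;=\; \bigl\{ \{1\},\{2\},\{1,2\} \bigr\} ,
$$
so that $|{\mathcal N}|$ is assembled from two vertices and one $1$-simplex whose two faces are identified with these vertices; that is, $|{\mathcal N}|$ is homeomorphic to a closed interval.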
The realization $|{\mathcal N}|$ has a natural open cover by the stars $St(v)$ of its vertices $v$, where $St(v)$ is the union of all (open) simplices whose closures contain $v$. Notice that the nerve of the star cover of $|{\mathcal N}|$ can be identified with ${\mathcal N}$. Next, let ${\mathcal N}_1:={\mathcal N}_1({\mathcal U})$ be the first barycentric subdivision of ${\mathcal N}$. That is, ${\mathcal N}_1$ is a simplicial complex with one vertex $v_I$ at the barycenter of each simplex $I\in{\mathcal I}_{\mathcal U}$ and a $k$-simplex for each {\it chain} $I_0\subsetneq I_1\subsetneq\ldots\subsetneq I_k$ of simplices $I_0,\ldots,I_k\in{\mathcal I}_{\mathcal U}$. This linear order on each simplex is induced from the partial order on the set of vertices ${\mathcal I}_{\mathcal U}$ given by the inclusion relation for subsets of $\{1,\ldots,N\}$. Further, the star $St(v_I)$ of the vertex $v_I$ in ${\mathcal N}_1$ is the union of all simplices given by chains that contain $I$ as one of their elements. Hence two stars $St(v_I), St(v_J)$ have nonempty intersection if and only if $I\subset J $ or $J\subset I$, because this is a necessary and sufficient condition for there to be a chain containing both $I$ and $J$. For example, in the right-hand diagram in Figure~\ref{fig:1} the stars of the vertices $v_{12}$ and $v_{13}$ are disjoint, as are the stars of $v_1$ and $v_2$. As before, the nerve of the star cover of $|{\mathcal N}_1|$ can be identified with ${\mathcal N}_1$ itself. In particular, each nonempty intersection of sets in the star cover of $|{\mathcal N}_1|$ corresponds to a simplex in $|{\mathcal N}_1|$, namely to a chain $I_0\subset \ldots \subset I_k$ in the poset ${\mathcal I}_{\mathcal U}$. Therefore the indexing set for this cover is the set ${\mathcal C}$ of chains in ${\mathcal I}_{\mathcal U}$; cf.\ Hatcher~\cite[p.119ff]{Hat}.

Now suppose that ${\mathcal U} = (F_i)_{i=1,\ldots,N}$ is the footprint cover provided by the basic charts of a tame Kuranishi atlas. Then ${\mathcal I}_{\mathcal U}={\mathcal I}_{\mathcal K}$ is the index set of the Kuranishi atlas. Hence ${\mathcal K}$ consists of one basic chart for each vertex of the nerve ${\mathcal N}({\mathcal U})$ and one transition chart ${\bf K}_I$ for each simplex in ${\mathcal N}({\mathcal U})$. We are aiming to construct from the original cover $(F_i)$ of $X$ a {\it reduced} cover $(Z_I)_{I\in {\mathcal I}_{\mathcal U}}$ of $X$ whose pattern of intersections mimics that of the star cover of $|{\mathcal N}_1({\mathcal U})|$. In particular, we will require $\overline{Z_I}\cap \overline{Z_J}= \emptyset$ unless $I\subset J$ or $J\subset I$. Next, we will aim to construct corresponding subsets $V_I\subset U_I$ with $V_I\cap s_I^{-1}(0)=\psi_I^{-1}(Z_I)$ and $\pi_{\mathcal K}(\overline{V_I})\cap \pi_{\mathcal K}(\overline{V_J})= \emptyset$ unless $I\subset J$ or $J\subset I$. We will see in Proposition~\ref{prop:red} that such a reduction gives rise to a Kuranishi atlas ${\mathcal K}^{\mathcal V}$ that has one {\it basic} chart for each vertex in ${\mathcal N}_1({\mathcal U})$, i.e.\ for each element in ${\mathcal I}_{\mathcal K}$, and one transition chart for each simplex in ${\mathcal N}_1({\mathcal U})$, i.e.\ for each chain $C$ of elements in the poset ${\mathcal I}_{\mathcal K}$.
\end{rmk}

We will prove the existence of the following type of reduction in Proposition~\ref{prop:cov2} below. As always, we denote the closure of a set $Z\subset X$ by $\overline Z$.
\begin{defn}\label{def:vicin}
A {\bf reduction} of a tame Kuranishi atlas ${\mathcal K}$ is an open subset ${\mathcal V}=\bigcup_{I\in {\mathcal I}_{\mathcal K}} V_I \subset {\rm Obj}_{{\bf B}_{\mathcal K}}$, i.e.\ a tuple of (possibly empty) open subsets $V_I\subset U_I$, satisfying the following conditions:
\begin{enumerate}
\item $V_I\sqsubset U_I $ for all $I\in{\mathcal I}_{\mathcal K}$, and if $V_I\ne \emptyset$ then $V_I\cap s_I^{-1}(0)\ne \emptyset$;
\item if $\pi_{\mathcal K}(\overline{V_I})\cap \pi_{\mathcal K}(\overline{V_J})\ne \emptyset$ then $I\subset J$ or $J\subset I$;
\item the zero set $\iota_{\mathcal K}(X)=|s_{\mathcal K}|^{-1}(0)$ is contained in $ \pi_{\mathcal K}({\mathcal V}) \;=\; {\textstyle{\bigcup}_{I\in {\mathcal I}_{\mathcal K}} }\;\pi_{\mathcal K}(V_I). $
\end{enumerate}
Given a reduction ${\mathcal V}$, we define the {\bf reduced domain category} ${\bf B}_{\mathcal K}|_{\mathcal V}$ and the {\bf reduced obstruction category} ${\bf E}_{\mathcal K}|_{\mathcal V}$ to be the full subcategories of ${\bf B}_{\mathcal K}$ and ${\bf E}_{\mathcal K}$ with objects $\bigcup_{I\in {\mathcal I}_{\mathcal K}} V_I$ resp.\ $\bigcup_{I\in {\mathcal I}_{\mathcal K}} V_I\times E_I$, and denote by $s|_{\mathcal V}:{\bf B}_{\mathcal K}|_{\mathcal V}\to {\bf E}_{\mathcal K}|_{\mathcal V}$ the section given by restriction of $s_{\mathcal K}$.
\end{defn}

Uniqueness of the VFC will be based on the following relative notion of reduction.

\begin{defn}\label{def:cvicin}
Let ${\mathcal K}$ be a tame Kuranishi cobordism. Then a {\bf cobordism reduction} of ${\mathcal K}$ is an open subset ${\mathcal V}=\bigcup_{I\in{\mathcal I}_{{\mathcal K}}}V_I\subset {\rm Obj}_{{\bf B}_{{\mathcal K}}}$ that satisfies the conditions of Definition~\ref{def:vicin} and, in the notation of Section~\ref{ss:Kcobord}, has the following collar form:
\begin{enumerate}
\item[(iv)] For each ${\alpha}\in\{0,1\}$ and $I\in {\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}\subset{\mathcal I}_{{\mathcal K}}$ there exists ${\varepsilon}>0$ and a subset $\partial^{\alpha} V_I\subset \partial^{\alpha} U_I$ such that $\partial^{\alpha} V_I\ne \emptyset$ iff $V_I \cap \psi_I^{-1}\bigl( \partial^{\alpha} F_I \times \{{\alpha}\}\bigr)\ne \emptyset$, and
$$
(\iota^{\alpha}_I)^{-1} \bigl( V_I \bigr) \cap \bigl( \partial^{\alpha} U_I \times A^{\alpha}_{\varepsilon} \bigr) \;=\; \partial^{\alpha} V_I \times A^{\alpha}_{\varepsilon} .
$$
\end{enumerate}
We call $\partial^{\alpha}{\mathcal V} := \bigcup_{I\in{\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}} \partial^{\alpha} V_I \subset {\rm Obj}_{{\bf B}_{{\partial}^{\alpha}{\mathcal K}}}$ the {\bf restriction} of ${\mathcal V}$ to ${\partial}^{\alpha}{\mathcal K}$.
\end{defn}

\begin{remark}\rm
The restrictions $\partial^{\alpha}{\mathcal V}$ of a reduction ${\mathcal V}$ of a Kuranishi cobordism ${\mathcal K}$ are reductions of the restricted Kuranishi atlases ${\partial}^{\alpha}{\mathcal K}$ for ${\alpha}=0,1$. In particular, condition (i) holds because part~(iv) of Definition~\ref{def:cvicin} implies that if ${\partial}^{\alpha} V_I\ne \emptyset$ then ${\partial}^{\alpha} V_I \cap \psi_I^{-1}\bigl( \partial^{\alpha} F_I\bigr)\ne \emptyset$.
\end{remark}

The notions of reductions make sense for general Kuranishi atlases and cobordisms; however, we will throughout assume additivity and tameness.
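For instance, for an atlas with two basic charts and ${\mathcal I}_{\mathcal K}=\bigl\{\{1\},\{2\},\{1,2\}\bigr\}$, condition (ii) of Definition~\ref{def:vicin} requires
$$
\pi_{\mathcal K}(\overline{V_1})\cap \pi_{\mathcal K}(\overline{V_2}) \;=\; \emptyset ,
$$
while each of $\pi_{\mathcal K}(\overline{V_1})$ and $\pi_{\mathcal K}(\overline{V_2})$ is allowed to intersect $\pi_{\mathcal K}(\overline{V_{12}})$; cf.\ the schematic reduction in the left diagram of Figure~\ref{fig:1}.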
In some ways, the closest we come in this paper to constructing a ``good cover'' in the sense of \cite{FO,J} is the category ${\bf B}_{\mathcal K}|_{\mathcal V}$. However, it is not a Kuranishi atlas. For completeness, we show in Proposition~\ref{prop:red} that there is an associated Kuranishi atlas ${\mathcal K}^{\mathcal V}$ together with a faithful functor ${\iota}^{\mathcal V}:{\bf B}_{{\mathcal K}^{\mathcal V}}\to {\bf B}_{\mathcal K}|_{\mathcal V}$ that induces an injection $|{\mathcal K}^{\mathcal V}|\to \pi_{\mathcal K}({\mathcal V}) \subset|{\mathcal K}|$. Since the extra structure in ${\mathcal K}^{\mathcal V}$ has no real purpose for us, we use the simpler category ${\bf B}_{\mathcal K}|_{\mathcal V}$ instead. Its realization $|{\bf B}_{\mathcal K}|_{\mathcal V}|$ also injects into $|{\mathcal K}|$, with image $|{\mathcal V}|=\pi_{\mathcal K}({\mathcal V})$ by a special case of the following result. In particular, this identifies the quotient topologies $|{\mathcal V}|\cong |{\bf B}_{\mathcal K}|_{\mathcal V}|$ given by $\pi_{\mathcal K}$ resp.\ generated by the morphisms of ${\bf B}_{\mathcal K}|_{\mathcal V}$. Here, as before, we define the realization $|{\bf C}|$ of a category ${\bf C}$ to be the quotient ${\rm Obj}_{{\bf C}}/\!\!\sim$, where $\sim$ is the equivalence relation generated by the morphisms in ${\bf C}$.

\begin{lemma}\label{lem:full}
Let ${\mathcal K}$ be a tame Kuranishi atlas with reduction ${\mathcal V}$, and suppose that ${\bf C}$ is a full subcategory of the reduced domain category ${\bf B}_{\mathcal K}|_{\mathcal V}$. Then the map $|{\bf C}|\to |{\mathcal K}|$, induced by the inclusion of object spaces, is a continuous injection. In particular, the realization $|{\bf C}|$ is homeomorphic to its image $|{\rm Obj}_{{\bf C}}|=\pi_{\mathcal K}({\rm Obj}_{{\bf C}})$ with the quotient topology in the sense of Definition~\ref{def:topologies}.
\end{lemma}

\begin{proof}
The map $|{\bf C}|\to |{\mathcal K}|$ is well defined because $(I,x) \sim_{{\bf C}} (J,y)$ implies $(I,x) \sim_{{\bf B}_{\mathcal K}} (J,y)$ since the morphisms in ${\bf C}$ are a subset of those in ${\bf B}_{\mathcal K}$. In order for $|{\bf C}|\to |{\mathcal K}|$ to be injective we need to check the converse implication, that is we consider objects $(I,x),(J,y)\in {\rm Obj}_{{\bf C}}$, identify them with points $x\in V_I$ and $y\in V_J$, and assume $\pi_{\mathcal K}(I,x) = \pi_{\mathcal K}(J,y)$. Then we have $I\subset J$ or $J\subset I$ by Definition~\ref{def:vicin}~(ii), so that Lemma~\ref{le:Ku2}~(a) implies either $y=\phi_{IJ}(x)$ or $x=\phi_{JI}(y)$. Since ${\bf C}$ is a full subcategory of ${\bf B}_{\mathcal K}|_{\mathcal V}$ and hence of ${\bf B}_{\mathcal K}$, the corresponding morphism $(I,J,x)$ (or $(J,I,y)$) belongs to ${\bf C}$. Hence $(I,x) \sim_{{\bf B}_{\mathcal K}} (J,y)$ implies $(I,x) \sim_{{\bf C}} (J,y)$, so that $|{\bf C}|\to |{\mathcal K}|$ is injective. In fact, this shows that the relations $\sim_{{\bf B}_{\mathcal K}}$ and $\sim_{{\bf C}}$ agree on ${\rm Obj}_{{\bf C}}\subset{\rm Obj}_{{\bf B}_{\mathcal K}}$, and thus $|{\bf C}|\to |\pi_{\mathcal K}({\rm Obj}_{{\bf C}})|$ is a homeomorphism with respect to the quotient topology on $\pi_{\mathcal K}({\rm Obj}_{{\bf C}})$.
Finally, Proposition~\ref{prop:Ktopl1}~(i) asserts that the identity map $|\pi_{\mathcal K}({\rm Obj}_{{\bf C}})| \to \|\pi_{\mathcal K}({\rm Obj}_{{\bf C}})\|\subset|{\mathcal K}|$ is continuous from this quotient topology to the relative topology induced by $|{\mathcal K}|$, which finishes the proof.
\end{proof}

\begin{example}\rm
The inclusion $|{\bf C}| \hookrightarrow |{\mathcal K}|$ does {\it not} hold for arbitrary full subcategories of ${\bf B}_{\mathcal K}$. For example, the full subcategory ${\bf C}$ with objects $\bigcup_{i=1,\dots, N} U_i$ (the union of the domains of the basic charts) has only identity morphisms, so that $|{\bf C}| = {\rm Obj}_{{\bf C}}$ equals $|{\mathcal K}|$ only if there are no transition charts.
\end{example}

In order to prove the existence and uniqueness up to cobordism of reductions, we start by analyzing the induced footprint cover of $X$. Since the induced vicinity contains the zero set ${\iota}_{\mathcal K}(X)$, the further conditions on reductions imply that the reduced footprints $Z_I = \psi_I(V_I\cap s_I^{-1}(0))$ form a reduction of the footprint cover $X=\bigcup_{i=1,\ldots,N} F_i$ in the sense of the following lemma. This lemma makes the first step towards existence of reductions by showing how to reduce the footprint cover. We will use the fact that every compact Hausdorff space is a {\it shrinking space} in the sense that every open cover has a precompact shrinking -- in the sense of Definition~\ref{def:shr0} without requiring condition \eqref{same FI}.

\begin{lemma}\label{le:cov0}
For any finite open cover of a compact Hausdorff space $X=\bigcup_{i=1,\ldots,N} F_i$ there exists a {\bf cover reduction} $\bigl(Z_I\bigr)_{I\subset \{1,\ldots,N\}}$ in the following sense: The $Z_I\subset X$ are (possibly empty) open subsets satisfying
\begin{enumerate}
\item $Z_I\sqsubset F_I := \bigcap_{i\in I} F_i$ for all $I$;
\item if $\overline{Z_I}\cap \overline{Z_J}\ne \emptyset$ then $I\subset J$ or $J\subset I$;
\item $X\,=\, \bigcup_{I} Z_I$.
\end{enumerate}
\end{lemma}

\begin{proof}
Since $X$ is compact Hausdorff, we may choose precompact open subsets $F_i^0\sqsubset F_i$ that still cover $X$. Next, any choice of precompactly nested sets
\begin{equation}\label{eq:FGI}
F_i^0\,\sqsubset\, G_i^1 \,\sqsubset\, F_i^1 \,\sqsubset\, G_i^2 \,\sqsubset\,\ldots \,\sqsubset\, F_i^{N} = F_i
\end{equation}
yields further open covers $X=\bigcup_{i=1,\ldots,N} F_i^n$ and $X=\bigcup_{i=1,\ldots,N} G_i^n$ for $n=1,\ldots,N$. Now we claim that the required cover reduction can be constructed by
\begin{equation}\label{eq:ZGI}
Z_I \,: =\; \Bigl( {\textstyle\bigcap_{i\in I}} G_i^{|I|} \Bigr) \;{\smallsetminus}\; {\textstyle \bigcup_{j\notin I}} \overline{F^{|I|}_j} .
\end{equation}
To prove this we will use the following notation: Given any open cover $X=\bigcup_{i=1,\ldots,N} H_i$ of $X$, we denote the intersections of the covering sets by $H_I: = \bigcap_{i\in I} H_i$ for all $I\subset \{1,\ldots,N\}$. This convention will apply to define $F_I^k$ resp.\ $G_I^k$ from the $F_i^k$ resp.\ $G_i^k$, but it does not apply to the sets $Z_I$ constructed above, since in particular the $Z_i$ generally do not cover $X$. With this notation we have
$$
Z_I \,: =\; G_I^{|I|} \;{\smallsetminus}\; {\textstyle \bigcup_{j\notin I}} \overline{F^{|I|}_j} \qquad\text{for all}\;\; I\subset \{1,\ldots,N\}.
$$
These sets are open since they are the complement of a finite union of closed sets in the open set $G_I^{|I|}$.
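For instance, when $N=2$ this construction gives
$$
Z_{\{1\}}=G_1^1{\smallsetminus}\overline{F_2^1}, \qquad Z_{\{2\}}=G_2^1{\smallsetminus}\overline{F_1^1}, \qquad Z_{\{1,2\}}=G_1^2\cap G_2^2 ,
$$
and the disjointness $\overline{Z_{\{1\}}}\cap\overline{Z_{\{2\}}}=\emptyset$ is immediate since $\overline{Z_{\{2\}}}\subset \overline{G_2^1}\subset F_2^1$ while $\overline{Z_{\{1\}}}\subset X{\smallsetminus} F_2^1$.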
The precompact inclusion $Z_I\sqsubset F_I$ in (i) holds since $G_I^{|I|}\sqsubset F_I$. To prove the covering in (iii) let $x\in X$ be given. Then we claim that $x \in Z_{I_x}$ for
$$
I_x := \underset{I\subset\{1,\ldots,N\}, x \in G^{|I|}_I}{\textstyle \bigcup} I \;\;\;\subset\;\;\; \{1,\ldots, N\} .
$$
Indeed, we have $x\in G^{|I_x|}_{I_x}$ since $i\in I_x$ implies $x\in G^{|I|}_i$ for some $|I|\leq |I_x|$, and hence $x\in G^{|I_x|}_i$ since $G_i^{|I|}\subset G_i^{|I_x|}$. On the other hand, for all $j\notin I_x$ we have $x\notin G^{|I_x|+1}_{I_x\cup j}$ by definition. However, $x\in G^{|I_x|+1}_{I_x}$ by the nesting of the covers, so for every $j\notin I_x$ we obtain $x\in X{\smallsetminus} G^{|I_x|+1}_j$, which is a subset of $X{\smallsetminus} \overline{F^{|I_x|}_j}$. This proves $x \in Z_{I_x}$ and hence~(iii). To prove the intersection property (ii), suppose to the contrary that $x\in \overline{Z_I}\cap \overline{Z_J}$ where $|I|\le |J|$ but $I{\smallsetminus} J\ne \emptyset$. Then given $i\in I{\smallsetminus} J$, we have $x\in \overline{Z_I} \subset \overline{G^{|I|}_I}\subset F^{|J|}_i$ since $|I|\le |J|$, which contradicts $x\in \overline{Z_J} \subset X{\smallsetminus} F^{|J|}_i$. Thus the sets $Z_I$ form a cover reduction.
\end{proof}

To construct cobordism reductions with given boundary restrictions we need the following notion of collared cobordism of cover reductions.

\begin{defn}\label{def:cobred}
Given a finite open cover $X=\bigcup_{i=1,\ldots,N} F_i$ of a compact Hausdorff space, we say that two {\bf cover reductions $(Z_I^0)_{I\subset \{1,\ldots,N\}}, (Z_I^1)_{I\subset \{1,\ldots,N\}}$ are collared cobordant} if there exists a family of open subsets $Z_I\sqsubset F_I\times [0,1]$ satisfying conditions (i),(ii),(iii) in Lemma~\ref{le:cov0} for the cover $X\times[0,1]=\bigcup_{i=1,\ldots,N} F_i\times[0,1]$, and which in addition are collared in the sense of Definition~\ref{def:collared}, namely:
\begin{enumerate}
\item[(iv)] There is ${\varepsilon}>0$ such that $Z_I\cap \bigl(X\times A^{\alpha}_{\varepsilon}\bigr)= Z_I^{\alpha}\times A^{\alpha}_{\varepsilon}$ for all $I\subset\{1,\ldots,N\}$ and ${\alpha}=0,1$.
\end{enumerate}
\end{defn}

\begin{lemma}\label{le:cobred1}
The relation of collared cobordism for cover reductions is reflexive, symmetric, and transitive.
\end{lemma}

\begin{proof}
The proof is similar to (but much easier than) that of Lemma~\ref{le:cobord1}.
\end{proof}

With these preparations we can prove uniqueness of cover reductions up to collared cobordism, and also provide reductions for footprint covers of Kuranishi cobordisms.

\begin{lemma}\label{le:cobred}
\begin{enumerate}
\item Any cover $X\times[0,1]=\bigcup_{i=1,\ldots,N} F_i$ by collared open sets $F_i\subset X\times [0,1]$ has a cover reduction $(Z_I)_{I\subset\{1,\ldots,N\}}$ by collared sets $Z_I\subset X\times [0,1]$.
\item Any two cover reductions $(Z_I^0)_{I\subset\{1,\ldots,N\}}$, $(Z_I^1)_{I\subset\{1,\ldots,N\}}$ of an open cover $X=\bigcup_{i=1,\ldots,N} F_i$ are collared cobordant.
\end{enumerate}
\end{lemma}

\begin{proof}
To prove (i) first note that any finite open cover $X\times [0,1]=\bigcup_{i=1,\ldots,N} F_i$ by sets with collar form $\partial^{\alpha} F_i \times A^{\alpha}_{\varepsilon}$ near the boundary can be shrunk to sets $F_i'\sqsubset F_i$ that also have collar form near the boundary.
Indeed, taking a common ${\varepsilon}>0$, one can first choose a shrinking $F^{\alpha}_i \sqsubset\partial^{\alpha} F_i$ of the covers of the ``boundary components'' $X\times\{{\alpha}\}$ and a general shrinking $F''_i \sqsubset F_i$; then the required shrinking is given by
\begin{equation}\label{eq:cobred1}
F'_i:= \bigl(F^0_i \times [0,\tfrac {\varepsilon} 2)\bigr) \;\cup\; \bigl(F^1_i \times (1-\tfrac {\varepsilon} 2,1]\bigr) \;\cup\; \Bigl(F''_i \cap \bigl(X \times (\tfrac {\varepsilon} 4, 1-\tfrac {\varepsilon} 4 )\bigr)\Bigr) .
\end{equation}
Hence we may choose nested covers $F_i^0\sqsubset\ldots\sqsubset F_i^k\sqsubset G_i^{k+1}\sqsubset \ldots \sqsubset F_i$ as in \eqref{eq:FGI} of $X\times [0,1]$ that have collar form near the boundary. Then the sets $(Z_I)$ defined by intersections in \eqref{eq:ZGI} also have collar form near the boundary, and the arguments of Lemma~\ref{le:cov0} prove (i).

To prove (ii), we first transfer to the standard form constructed in Lemma~\ref{le:cov0}.

{ }
\noindent {\bf Claim A:} {\em Any cover reduction $(Z_I)$ of a finite open cover $X=\bigcup_i F_i$ is collared cobordant to a cover reduction constructed from nested covers $F_i^0\sqsubset\ldots\sqsubset F_i^k\sqsubset G_i^{k+1}\sqsubset \ldots\sqsubset F_i$ by \eqref{eq:ZGI}. }

{ }
To prove this claim, choose a shrinking $Z_I^0\sqsubset Z_I$ such that $X = \bigcup_I Z^0_I$. Then these covers induce precompactly nested open covers
$$
F_i^0: = {\textstyle \bigcup_{I\ni i}} Z_I^0 \quad \sqsubset \quad F_i': = {\textstyle \bigcup_{I\ni i}} Z_I \quad \sqsubset \quad F_i = {\textstyle \bigcup_{I\ni i}} F_I .
$$
As in \eqref{eq:FGI} we can choose interpolating sets
$$
F_i^0\sqsubset \ldots\sqsubset F_i^k\sqsubset G_i^{k+1}\sqsubset \ldots \sqsubset F_i^{2N} = F_i',
$$
and let $\bigl(Z_I' := G_I^{|I|}{\smallsetminus} \bigcup_{j\notin I} \overline{F_j^{|I|}}\bigr)$ be the resulting cover reduction of $(F_i')$ and hence of $(F_i)$. We claim that the union $(Z_I'':= Z^0_I\cup Z'_I)$ is a cover reduction of $(F_i)$ as well. Since $Z_I^0\sqsubset Z_I\sqsubset F_I$ we only have to check the mixed terms in the intersection axiom (ii) of Lemma~\ref{le:cov0}, i.e.\ we need to verify
$$
\Bigl(J{\smallsetminus} I\ne \emptyset,\; I{\smallsetminus} J\ne \emptyset\Bigr)\; \Longrightarrow\; \Bigl((\overline{Z^0_I}\cup \overline{Z'_I})\cap (\overline{Z^0_J}\cup \overline{Z'_J}) = \emptyset\Bigr).
$$
Indeed, for $j\in J{\smallsetminus} I$ we obtain $Z^0_J\subset F_j^0\sqsubset F_j^{|I|}\subset X{\smallsetminus} Z'_I$ so that $\overline{Z^0_J}\cap \overline{Z_I'}=\emptyset$. Conversely, $\overline{Z^0_I}\cap \overline{Z_J'}=\emptyset$ follows from the existence of $i\in I{\smallsetminus} J$. Now we have a chain of inclusions between cover reductions of $(F_i)$, namely $Z_I^0\subset Z_I$, $Z_I^0\subset Z_I''$, and $Z_I'\subset Z_I''$. We claim that this induces collared cobordisms $(Z_I^0)\sim(Z_I)$, $(Z_I^0)\sim(Z_I'')$, and $(Z_I')\sim(Z_I'')$, so that Lemma~\ref{le:cobred1} implies that $(Z_I)$ is collared cobordant to $(Z_I')$, which is constructed by \eqref{eq:ZGI}. To check this last claim, consider any two cover reductions $Z_I'\subset Z_I''$ of $(F_i)$ and note that a collared cobordism is given by
$$
\bigl(Z_I'\times [0,\tfrac 23)\bigr)\cup \bigl(Z_I''\times (\tfrac 13,1]\bigr) \;\subset\; X\times [0,1] .
$$
Indeed, these open subsets are collared and form a cover reduction since each of $(Z_I'),(Z_I'')$ satisfies the axioms (i)--(iii), while the mixed intersections relevant for (ii) satisfy
$$
\Bigl(\overline{Z_I'}\times [0,\tfrac 23]\Bigr)\cap \Bigl(\overline{Z_J''}\times [\tfrac 13,1]\Bigr) \subset \Bigl(\overline{Z_I'} \cap \overline{Z_J''} \Bigr) \times [\tfrac 13,\tfrac 23] ,
$$
which is empty unless $I\subset J$ or $J\subset I$. This proves Claim A.

{ }
Now to prove (ii) it suffices to consider cover reductions $(Z^{\alpha}_I)$ that are constructed from nested covers $F_i^{0,{\alpha}} \sqsubset\ldots\sqsubset F_i^{k,{\alpha}} \sqsubset G_i^{k+1,{\alpha}}\sqsubset \ldots\sqsubset F_i$ as in \eqref{eq:FGI}. We extend these to nested covers of the product cover $X\times [0,1]=\bigcup_i F_i\times [0,1]$ by choosing constants
$$
\tfrac 13={\varepsilon}_{2N}<\ldots<{\varepsilon}_{0} < \tfrac 12<{\delta}_{0}<\ldots<{\delta}_{2N} = \tfrac 23,
$$
and then setting
\begin{align*}
F_i^{k}: & = \Bigl(F_i^{k,0}\times [0,{\delta}_{2k})\Bigr) \cup \Bigl(F_i^{k,1}\times ({\varepsilon}_{2k}, 1]\Bigr) \;\sqsubset\; F_i\times [0,1],\\
G_i^{k}: & = \Bigl(G_i^{k,0}\times [0,{\delta}_{2k-1})\Bigr) \cup \Bigl( G_i^{k,1}\times ({\varepsilon}_{2k-1}, 1] \Bigr)\;\sqsubset\; F_i\times [0,1].
\end{align*}
Since these sets satisfy the nested property in \eqref{eq:FGI} and have collar form near the boundary given by the nested covers $G_i^{k,{\alpha}}\sqsubset F_i^{k,{\alpha}}$, the cover reduction $(Z_I)$ defined in \eqref{eq:ZGI} is a collared cobordism between the reductions $(Z^0_I)$ and $(Z^1_I)$.
\end{proof}

We now prove existence and uniqueness of reductions.

\begin{prop}\label{prop:cov2}
\begin{itemize}
\item[(a)] Every tame Kuranishi atlas ${\mathcal K}$ has a reduction ${\mathcal V}$.
\item[(b)] Every tame Kuranishi cobordism ${\mathcal K}^{[0,1]}$ has a cobordism reduction ${\mathcal V}^{[0,1]}$.
\item[(c)] Let ${\mathcal V}^0,{\mathcal V}^1$ be reductions of a tame Kuranishi atlas ${\mathcal K}$. Then there exists a cobordism reduction ${\mathcal V}$ of ${\mathcal K}\times[0,1]$ such that ${\partial}^{\alpha}{\mathcal V} = {\mathcal V}^{\alpha}$ for ${\alpha} = 0,1$.
\end{itemize}
\end{prop}

\begin{proof}
For (a) we begin by using Lemma~\ref{le:cov0} to find a cover reduction $(Z_I)_{I\subset \{1,\ldots,N\}}$ of the footprint cover $X=\bigcup_{i=1,\ldots,N} F_i$. Since $Z_I\subset F_I=\emptyset$ for $I\notin{\mathcal I}_{\mathcal K}$, we can index the potentially nonempty sets in this cover reduction by $(Z_I)_{I\in{\mathcal I}_{\mathcal K}}$. Then Lemma~\ref{le:restr0} provides precompact open sets $W_I\sqsubset U_I$ for each $I\in{\mathcal I}_{\mathcal K}$ with $Z_I\ne \emptyset$, satisfying
\begin{equation}\label{eq:wwI}
W_I\cap s_I^{-1}(0) = \psi_I^{-1}(Z_I),\qquad \overline{W_I}\cap s_I^{-1}(0) = \psi_I^{-1}(\overline{Z_I}).
\end{equation}
The set ${\mathcal V}=(W_I)_{I\in{\mathcal I}_{\mathcal K}}$ now satisfies condition (iii) in Definition~\ref{def:vicin}, namely $\bigcup_I \pi_{{\mathcal K}}(W_I)$ contains $\bigcup_I \pi_{{\mathcal K}}\bigl(\psi_I^{-1}(Z_I)\bigr)$, which covers $\iota_{\mathcal K}(X)$. We will construct the reduction by choosing $V_I \subset W_I$ so that (ii) is satisfied, while the intersection with the zero set does not change, i.e.\
\begin{equation}\label{eq:zeroV}
V_I\cap s_I^{-1}(0) = \psi_I^{-1}(Z_I),
\end{equation}
which guarantees (iii).
Further, condition (i) holds automatically since $V_I\subset W_I$, and we define $V_I:=\emptyset$ when $Z_I=\emptyset$. To begin the construction of the $V_I$, define
$$
{\mathcal C}(I):=\{J\in {\mathcal I}_{\mathcal K} \,|\, I\subset J \;\text{or}\; J\subset I \},
$$
and for each $J\notin{\mathcal C}(I)$ define
$$
Y_{IJ} \,:= \overline{W_I}\cap \pi_{\mathcal K}^{-1}(\pi_{\mathcal K}(\overline{W_J})) \; = \; \overline{W_I}\cap {\varepsilon}_I(\overline{W_J}) ,
$$
and note that ${\varepsilon}_I(\overline{W_J})\subset U_I$ is closed by Lemma~\ref{le:Ku2}~(d). Now if $J\notin{\mathcal C}(I)$, then using $\psi_I^{-1}(\overline{Z_I}) =s_I^{-1}(0)\cap \overline{W_I}$ we obtain
\begin{align*}
\psi_I^{-1}(\overline{Z_I}) \cap Y_{IJ}&\;\subset \;\psi_I^{-1}(\overline{Z_I})\cap s_I^{-1}(0)\cap {\varepsilon}_I(\overline{W_J}) \\
&\;=\; \psi_I^{-1}(\overline{Z_I}) \cap {\varepsilon}_I\bigl(s_J^{-1}(0) \cap \overline{W_J} \bigr)\\
&\;= \; \psi_I^{-1}(\overline{Z_I})\cap {\varepsilon}_I\bigl(\psi_J^{-1}(\overline{Z_J})\bigr) \;= \; \psi_I^{-1}(\overline{Z_I}) \cap \psi_I^{-1}(\overline{Z_J})\;=\; \emptyset,
\end{align*}
where the first equality holds because $s$ is compatible with the coordinate changes. The inclusion $Y_{IJ}\subset \overline{W_I}\subset U_I$ moreover ensures that $Y_{IJ}$ is compact, so has a nonzero Hausdorff distance from the closed set $\psi_I^{-1}(\overline{Z_I})$. Thus we can find closed neighbourhoods ${\mathcal N}(Y_{IJ})\subset U_I$ of $Y_{IJ}$ for each $J\notin{\mathcal C}(I)$ such that
$$
{\mathcal N}(Y_{IJ})\cap \psi_I^{-1}(\overline{Z_I})= \emptyset.
$$
We now claim that we obtain a reduction by removing these neighbourhoods,
\begin{equation}\label{eq:QIVI}
V_I := W_I \;{\smallsetminus} {\textstyle\bigcup_{J\notin {\mathcal C}(I)}} {\mathcal N}(Y_{IJ}).
\end{equation}
Indeed, each $V_I\subset W_I$ is open, and $V_I\cap s_I^{-1}(0) = \psi_I^{-1}(Z_I) \;{\smallsetminus} \bigcup_{J\notin {\mathcal C}(I)} {\mathcal N}({Y_{IJ}}) = \psi_I^{-1}(Z_I)$ by construction of ${\mathcal N}({Y_{IJ}})$. Moreover, because $\overline{V_I}\subset \overline{W_I}$ for all $I$, we have for all $J\notin{\mathcal C}(I)$
$$
\overline{V_I}\cap \pi_{\mathcal K}^{-1}(\pi_{\mathcal K}(\overline{V_J}))\;\subset\; \overline{V_I}\cap \overline{W_I}\cap \pi_{\mathcal K}^{-1}(\pi_{\mathcal K}(\overline{W_J})) \;\subset\; {\overline{V_I}}\cap Y_{IJ} \;=\; \emptyset .
$$
This completes the proof of (a).

To prove (b) we use Lemma~\ref{le:cobred}~(i) to obtain a collared cover reduction $(Z_I)_{I\in {\mathcal I}_{{\mathcal K}^{[0,1]}}}$ of the footprint cover of ${\mathcal K}^{[0,1]}$, which is collared by Remark~\ref{rmk:Ceps}. Then the arguments for (a) also show that ${\mathcal K}^{[0,1]}$ has a reduction $(V_I')_{I\in {\mathcal I}_{{\mathcal K}^{[0,1]}}}$. It remains to adjust it to achieve collar form near the boundary as in Definition~\ref{def:cvicin}~(iv). For that purpose note that the footprint of the reduction is given by the cover reduction, that is \eqref{eq:zeroV} provides the identity
$$
V'_I\cap s_I^{-1}(0) = \psi_I^{-1}(Z_I) \qquad\forall I\in {\mathcal I}_{{\mathcal K}^{[0,1]}} .
$$
In particular, if ${\partial}^{\alpha} Z_I\ne \emptyset$ then ${\partial}^{\alpha} V'_I\ne \emptyset$, though the converse may not hold.
Now choose ${\varepsilon}>0$ less than or equal to half the collar width of ${\mathcal K}^{[0,1]}$ in Remark~\ref{rmk:Ceps} and so that condition (iv) in Definition~\ref{def:cobred} holds with $A^{\alpha}_{2{\varepsilon}}$ for the footprint reduction $(Z_I)_{I\in {\mathcal I}_{{\mathcal K}^{[0,1]}} }$, in particular
\begin{equation}\label{eq:Zcoll}
\psi_I^{-1}(Z_I) \cap \iota^{\alpha}_I\bigl({\partial}^{\alpha} U_I\times A^{\alpha}_{2{\varepsilon}} \bigr) \;=\; \psi_I^{-1}(\partial^{\alpha} Z_I \times A^{\alpha}_{2{\varepsilon}} ) \qquad \forall {\alpha}\in\{0,1\}, I\in {\mathcal I}_{{\mathcal K}^{\alpha}} .
\end{equation}
Then we set $V_I:=V'_I$ for interior charts $I\in {\mathcal I}_{{\mathcal K}^{[0,1]}}{\smallsetminus}({\mathcal I}_{{\mathcal K}^0}\cup {\mathcal I}_{{\mathcal K}^1})$. If $I\in{\mathcal I}_{{\mathcal K}^{\alpha}}$ for ${\alpha}=0$ or ${\alpha}=1$ (or both) we define $V^{\alpha}_I \subset {\partial}^{\alpha} U_I$ so that, with ${\varepsilon}_0: = {\varepsilon}$ and ${\varepsilon}_1:=1-{\varepsilon}$,
$$
\iota^{\alpha}_I\bigl( V^{\alpha}_I \times \{{\varepsilon}_{\alpha}\} \bigr) = \left\{ \begin{array}{ll}
V_I' \cap \iota^{\alpha}_I \bigl({\partial}^{\alpha} U_I\times \{{\varepsilon}_{\alpha}\} \bigr) & \mbox{ if } {\partial}^{\alpha} Z_I\ne \emptyset,\\
\emptyset & \mbox{ if } {\partial}^{\alpha} Z_I = \emptyset.
\end{array}\right.
$$
Since $({\partial}^{\alpha} Z_I)_{I\in{\mathcal I}_{{\mathcal K}^{\alpha}}}$ is a cover reduction of the footprint cover of ${\mathcal K}^{\alpha}$, and
$$
\psi_I^{-1}(Z_I) \cap \iota^{\alpha}_I \bigl({\partial}^{\alpha} U_I\times \{{\varepsilon}_{\alpha}\} \bigr) = \psi_I^{-1}(\partial^{\alpha} Z_I \times \{{\varepsilon}_{\alpha}\}),
$$
this defines reductions $(V^{\alpha}_I)_{I\in{\mathcal I}_{{\mathcal K}^{\alpha}}}$ of ${\mathcal K}^{\alpha}$. With that, we obtain collared subsets of $U_I$ for each $I\in {\mathcal I}_{{\mathcal K}^0}\cup {\mathcal I}_{{\mathcal K}^1}$ by
\begin{equation}\label{eq:Vcoll}
V_I:= \biggl( \, V_I' \;{\smallsetminus} \bigcup_{{\alpha}=0,1} \iota^{\alpha}_I\bigl({\partial}^{\alpha} U_I\times \overline{A^{\alpha}_{\varepsilon}}\,\bigr) \biggr) \;\cup\; \bigcup_{{\alpha}=0,1} \iota^{\alpha}_I\bigl(V^{\alpha}_I\times \overline{A^{\alpha}_{\varepsilon}}\bigr) \quad\subset\; U_I .
\end{equation}
These are open subsets of $U_I$ because, firstly, each ${\partial}^{\alpha} U_I\times \overline{A^{\alpha}_{\varepsilon}}$ is a relatively closed subset of the domain of the embedding $\iota^{\alpha}_I$. Secondly, each point in the boundary of $\iota^{\alpha}_I\bigl(V^{\alpha}_I\times \overline{A^{\alpha}_{\varepsilon}}\bigr)\subset U_I$ is of the form $\iota^{\alpha}_I(x,{\varepsilon}_{\alpha})$ and, since the collar width is $2{\varepsilon}$, has neighbourhoods $\iota^{\alpha}_I\bigl({\mathcal N}_x \times ({\varepsilon}_{\alpha} -{\varepsilon}, {\varepsilon}_{\alpha} + {\varepsilon}) \bigr)\subset V_I$ for any neighbourhood ${\mathcal N}_x\subset V^{\alpha}_I$ of $x$. Moreover, $V_I$ has the same footprint as $V_I'$, since the adjustment only happens on the collars $\iota^{\alpha}_I({\partial}^{\alpha} U_I\times \overline{A^{\alpha}_{{\varepsilon}}})$, where the footprint is of product form and thus preserved by the construction. Therefore the sets $(V_I)_{I\in {\mathcal I}_{{\mathcal K}^{[0,1]}}}$ satisfy conditions (i) and (iii) in Definition~\ref{def:vicin}.
They also satisfy (ii) because, in the notation of Remark~\ref{rmk:cobordreal}, we can check this separately in the collars $\rho^{\alpha}(|{\mathcal K}^{\alpha}|\tildemes{\overline{A^{\alpha}_{\varepsilon}}})$ of $|{\mathcal K}^{[0,1]}|$ (where it holds because $(V^{\alpha}_I)$ is a reduction) and in their complements (where it holds because $(V'_I)$ is a reduction). Finally, condition (iv) in Definition~\ref{def:cvicin} holds by construction, namely $({\iota}ta^{\alpha}_I)^{-1} \bigl( V_I \bigr) \cap \bigl( {\partial}artial^{\alpha} U_I \tildemes A^{\alpha}_{\varepsilon} \bigr) = V_I^{\alpha} \tildemes A^{\alpha}_{\varepsilon}$, and the condition $\bigl( \; {\partial}artial^{\alpha} V_I\ne \emptyset \; {\mathbb R}ightarrow \; V_I \cap {\partial}si_I^{-1}\bigl( {\partial}artial^{\alpha} F_I \tildemes \{{\alpha}\}\bigr)\ne \emptyset \;\bigr)$ holds since ${\partial}artial^{\alpha} V_I = V_I^{\alpha} \ne \emptyset$ implies ${\partial}^{\alpha} Z_I \neq \emptyset$ by definition of $V_I^{\alpha}$. This completes the proof of (b). To prove (c) we will use transitivity for cobordism reductions of the product cobordism ${\mathcal K}\tildemes[0,1]$. More precisely, just as in the proof of Lemma~\ref{le:cobord1}, we can adjoin and rescale cobordism reductions within ${\mathcal K}\tildemes[0,1]$. In particular, given cobordism reductions ${\mathcal V}^{[0,1]}\subset {\rm Obj}_{{\bf B}_{\mathcal K}}\tildemes [0,1]$ and ${\mathcal V}^{[1,2]}\subset {\rm Obj}_{{\bf B}_{\mathcal K}}\tildemes [1,2]$ with identical collars ${\partial}^1{\mathcal V}^{[0,1]}={\partial}^1{\mathcal V}^{[1,2]}$ near ${\rm Obj}_{{\bf B}_{\mathcal K}}\tildemes \{1\}$, we define a Kuranishi atlas ${\mathcal K}\tildemes [0,2]$ with domain category ${\rm Obj}_{{\bf B}_{\mathcal K}}\tildemes [0,2]$ as in that lemma. Then, extrapolating notation to the reduction $V^{[0,2]}_I:=V^{[0,1]}_{I^{01}}\cup V^{[1,2]}_{I^{12}} \subset U_I \tildemes [0,2]$ for $I\in{\mathcal I}_{{\mathcal K}^1}$ defines a cobordism reduction ${\mathcal V}^{[0,2]}$ of ${\mathcal K}\tildemes [0,2]$ with ${\partial}^0 {\mathcal V}^{[0,2]}={\partial}^0 {\mathcal V}^{[0,1]}$ and ${\partial}^2 {\mathcal V}^{[0,2]}={\partial}^2 {\mathcal V}^{[1,2]}$. Just as in the proof of additivity in Lemma~\ref{le:cobord1}, the resulting sets satisfy the separation condition (ii) of Definition~\ref{def:vicin} because the reductions ${\mathcal V}^{[0,1]}, {\mathcal V}^{[1,2]}$, and ${\mathcal V}^1={\partial}artial^1 {\mathcal V}^{[0,1]} = {\partial}artial^1 {\mathcal V}^{[1,2]}$ do. Now given two cobordism reductions ${\mathcal V}$ and ${\mathcal V}'$ of ${\mathcal K}\tildemes[0,1]$ with ${\partial}^1{\mathcal V}={\partial}^0{\mathcal V}$ we can shift ${\mathcal V}'$ in the domains to ${\rm Obj}_{{\bf B}_{\mathcal K}}\tildemes[1,2]$, glue it to ${\mathcal V}$ as above, and rescale the result back to a reduction ${\mathcal V}''\subset {\rm Obj}_{{\bf B}_{\mathcal K}}\tildemes[0,1]$ with ${\partial}^0{\mathcal V}''={\partial}^0{\mathcal V}$ and ${\partial}^1{\mathcal V}''={\partial}^1{\mathcal V}'$. Similarly, we can apply the isomorphism on ${\rm Obj}_{{\bf B}_{\mathcal K}}\tildemes[0,1]$ that reverses the inverval $[0,1]$ to turn any cobordism reduction ${\mathcal V}$ of ${\mathcal K}\tildemes[0,1]$ into a cobordism reduction ${\mathcal V}'$ with ${\partial}^0{\mathcal V}'={\partial}^1{\mathcal V}$ and ${\partial}^1{\mathcal V}'={\partial}^0{\mathcal V}$. Based on this, we will prove (c) in several stages. 
{ }{\mathbb N}I {\bf Step 1:} {\it The result holds if ${\mathcal V}^0 \subset {\mathcal V}^1$ and $V^0_I\cap s_I^{-1}(0) = V^1_I\cap s_I^{-1}(0)$ for all $I\in {\mathcal I}_{\mathcal K}$. } {\mathbb N}I In this case the footprints $Z_I := {\partial}si_I\bigl(V^{\alpha}_I\cap s_I^{-1}(0)\bigr)$ are the same for ${\alpha}=0,1$ by assumption. Hence the sets for $I\in {\mathcal I}_{\mathcal K}$ $$ V_I \, : =\; \bigl( V^0_I\tildemes [0,\tfrac 23)\bigr) \cup\bigl( V^1_I\tildemes (\tfrac 13,1]\bigr) \;\subset\; U_I \tildemes [0,1] $$ form the required cobordism reduction. In particular, they satisfy the intersection condition (ii) over the interval $(\tfrac 13,\tfrac 23)$ because $V^0_I\subset V^1_I$ for all $I$. Their footprints are $Z_I\tildemes [0,1]$ and so cover $X\tildemes [0,1]$, and they satisfy the collar form requirement (iv) because $V_I\ne \emptyset$ iff $V^1_I\ne \emptyset$, and the latter implies $Z_I\ne \emptyset$ by condition (i) for ${\mathcal V}^1$. { }{\mathbb N}I {\bf Step 2:} {\it The result holds if all footprints coincide, i.e.\ $V^0_I\cap s_I^{-1}(0) = V^1_I\cap s_I^{-1}(0)$. } {\mathbb N}I Note that ${\mathcal V}^{01}:= \bigcup_{I\in{\mathcal I}_{\mathcal K}} V^0_I\cap V^1_I$ is another reduction of ${\mathcal K}$ since it has the common footprints $Z_I$ as above, thus covers ${\iota}_{\mathcal K}(X)$. So Step 1 for ${\mathcal V}^{01}\subset{\mathcal V}^{\alpha}$ (together with reflexivity for ${\alpha}=0$) provides cobordism reductions ${\mathcal V}$ and ${\mathcal V}'$ of ${\mathcal K}\tildemes[0,1]$ with ${\partial}^0{\mathcal V}={\mathcal V}^0$, ${\partial}^1{\mathcal V}={\mathcal V}^{01}={\partial}^0{\mathcal V}'$, and ${\partial}^1{\mathcal V}'={\mathcal V}^1$. Now transitivity provides the required reduction with boundaries ${\mathcal V}^0,{\mathcal V}^1$. { }{\mathbb N}I {\bf Step 3:} {\it The result holds for all reductions ${\mathcal V}^0,{\mathcal V}^1\subset{\rm Obj}_{{\bf B}_{\mathcal K}}$. } {\mathbb N}I First use Lemma~\ref{le:cobred}~(ii) to obtain a family of collared sets $Z_I\sqsubset F_I\tildemes [0,1]$ for $I\in{\mathcal I}_{\mathcal K}$ that form a cover reduction of $X\tildemes[0,1]=\bigcup_{i=1,\ldots,N} F_i \tildemes[0,1]$ and restrict to the cover reductions ${\partial}artial^{\alpha} Z_I = {\partial}si_I(V_I^{\alpha} \cap s_I^{-1}(0))$ induced by the reductions ${\mathcal V}^{\alpha}$ for ${\alpha}=0,1$. Next, as in the proof of (b), we construct a cobordism reduction ${\mathcal V}'$ of ${\mathcal K}\tildemes [0,1]$ with footprints $Z_I$. Its restrictions ${\partial}artial^{\alpha}{\mathcal V}'$ are reductions of ${\mathcal K}^{\alpha}$ with the same footprint ${\partial}artial^{\alpha} Z_I$ as ${\mathcal V}^{\alpha}$ for ${\alpha}=0,1$. Now Step 2 provides further cobordism reductions ${\mathcal V}$ and ${\mathcal V}''$ with ${\mathcal V}^0={\partial}^0{\mathcal V}$, ${\partial}^1{\mathcal V}={\partial}^0{\mathcal V}'$, ${\partial}^1{\mathcal V}'={\partial}^0{\mathcal V}''$, and ${\partial}^1{\mathcal V}''={\mathcal V}^1$, so that another transitivity construction provides the required cobordism reduction with boundaries ${\mathcal V}^0,{\mathcal V}^1$. \end{proof} Finally, we need to construct nested pairs of reductions ${\mathcal C}\sqsubset {\mathcal V}$, where either ${\mathcal C}$ or ${\mathcal V}$ is given. Since these will be used extensively in the construction of perturbations towards the VMC, we introduce this notion formally. {\beta}gin{definition}{\lambda}bel{def:nest} Let ${\mathcal K}$ be a Kuranishi atlas (or cobordism). 
Then we call a pair of subsets ${\mathcal C},{\mathcal V}\subset{\rm Obj}_{{\bf B}_{\mathcal K}}$ a {\bf nested (cobordism) reduction} if both are (cobordism) reductions of ${\mathcal K}$ and ${\mathcal C}\sqsubset {\mathcal V}$. \end{definition} In our later constructions, the roles of ${\mathcal V}$ and ${\mathcal C}$ will be quite different: we will define the perturbation $\nu$ on the domains of ${\mathcal V}$, while ${\mathcal C}$ will provide a precompact set ${\partial}i_{\mathcal K}({\mathcal C})\subset |{\mathcal K}|$ that will contain the perturbed zero set ${\partial}i_{\mathcal K}\bigl((s_{\mathcal V} + \nu)^{-1}(0)\bigr)$. As explained in Proposition~\ref{prop:Ktopl1}, the subset ${\partial}i_{\mathcal K}({\mathcal C}) \subset|{\mathcal K}|$ has two different topologies, its quotient topology and the subspace topology. If $({\mathcal K},d)$ is metric, there might conceivably be a third topology, namely that induced by restriction of the metric. Although we will not use this explicitly, let us show that the metric topology on ${\partial}i_{\mathcal K}({\mathcal C})$ agrees with the subspace topology, so that we only have two different topologies in play. {\beta}gin{lemma}{\lambda}bel{le:Zz} Let ${\mathcal C}$ be a reduction of a metric tame Kuranishi atlas $({\mathcal K},d)$. Then the metric topology on ${\partial}i_{\mathcal K}({\mathcal C})$ equals the subspace topology. \end{lemma} {\beta}gin{proof} Since every reduction ${\mathcal C} \subset{\rm Obj}_{{\bf B}_{\mathcal K}}$ is precompact, the continuity of ${\partial}i_{\mathcal K}:{\rm Obj}_{{\bf B}_{\mathcal K}}\to |{\mathcal K}|$ (to $|{\mathcal K}|$ with its quotient topology) and of ${\rm id}_{|{\mathcal K}|}: |{\mathcal K}| \to (|{\mathcal K}|,d)$ from Lemma~\ref{le:metric} imply that ${\partial}i_{\mathcal K}(\overline{\mathcal C})\subset|{\mathcal K}|$ is compact in both topologies. Thus the identity map ${\rm id}_{{\partial}i_{\mathcal K}(\overline{\mathcal C})}: |{\mathcal K}|\supset {\partial}i_{\mathcal K}(\overline{\mathcal C}) \to \bigl({\partial}i_{\mathcal K}(\overline{\mathcal C}), d \bigr)$ is a continuous bijection from the compact space ${\partial}i_{\mathcal K}(\overline{\mathcal C})$ with the subspace topology to the Hausdorff space $\bigl({\partial}i_{\mathcal K}(\overline{\mathcal C}), d \bigr)$ with the induced metric. But this implies that ${\rm id}_{{\partial}i_{\mathcal K}(\overline{\mathcal C})}$ is a homeomorphism, see Remark~\ref{rmk:hom}, and hence restricts to a homeomorphism ${\rm id}_{{\partial}i_{\mathcal K}({\mathcal C})} : |{\mathcal K}|\supset {\partial}i_{\mathcal K}({\mathcal C}) \to \bigl({\partial}i_{\mathcal K}({\mathcal C}), d \bigr)$. Thus, the relative and metric topologies on ${\partial}i_{\mathcal K}(\overline{\mathcal C})$ agree. \end{proof} {\beta}gin{lemma}{\lambda}bel{le:delred} Let ${\mathcal V}$ be a (cobordism) reduction of a metric Kuranishi atlas (or cobordism) ${\mathcal K}$. {\beta}gin{enumerate} \item There exists ${\delta}>0$ such that ${\mathcal V} \sqsubset \bigcup_{I\in{\mathcal I}_{\mathcal K}} B^I_{\delta}(V_I)$ is a nested (cobordism) reduction, and moreover $$ B_{\delta}({\partial}i_{\mathcal K}(\overline{V_I}))\cap B_{\delta}({\partial}i_{\mathcal K}(\overline{V_J})) \neq \emptyset \qquad \Longrightarrow \qquad I\subset J \;\text{or} \; J\subset I. $$ \item If ${\mathcal V}$ is a reduction of a Kuranishi atlas ${\mathcal K}$, then there exists a nested reduction ${\mathcal C}\sqsubset {\mathcal V}$. 
\item If ${\mathcal V}$ is a cobordism reduction of the Kuranishi cobordism ${\mathcal K}$, and if ${\mathcal C}^{\alpha}\sqsubset {\partial}^{\alpha} {\mathcal V}$ for ${\alpha} = 0,1$ are nested cobordism reductions of the boundary restrictions ${\partial}^{\alpha}{\mathcal K}$, then there is a nested cobordism reduction ${\mathcal C}\sqsubset {\mathcal V}$ such that ${\partial}^{\alpha} {\mathcal C} = {\mathcal C}^{\alpha}$ for ${\alpha}=0,1$. \end{enumerate} \end{lemma} {\beta}gin{proof} To prove (i) for a Kuranishi atlas ${\mathcal K}$ we need to find ${\delta}>0$ so that {\beta}gin{itemize} \item[a)] $\displaystyle{\partial}hantom{\int\!\!\!\!\!\!} B^I_{\delta}(V_I)\sqsubset U_I$ for all $I\in{\mathcal I}_{\mathcal K}$; \item[b)] if $B_{\delta}({\partial}i_{\mathcal K}(\overline{V_I}))\cap B_{\delta}({\partial}i_{\mathcal K}(\overline{V_J})\ne \emptyset$ then $I\subset J$ or $J\subset I$. \end{itemize} The latter is a strengthened version of the separation condition (ii) in Definition~\ref{def:vicin}, due to the inclusion ${\partial}i_{\mathcal K}(B^I_{\delta}(W))\subset B_{\delta}({\partial}i_{\mathcal K}(W))$ by compatibility of the metrics for any $W\subset U_I$. The other conditions $B^I_{\delta}(V_I)\cap s_I^{-1}(0)\neq\emptyset$ and ${\iota}_{\mathcal K}(X)\subset{\partial}i_{\mathcal K}({\mathcal A})$ for a reduction ${\mathcal A} := \bigcup_{I\in{\mathcal I}_{\mathcal K}} B^I_{\delta}(V_I)\sqsubset{\rm Obj}_{{\bf B}_{\mathcal K}}$ then follow directly from the inclusion ${\mathcal V}\subset{\mathcal A}$, and the latter is precompact since each component $V_I\sqsubset B^I_{\delta}(V_I)$ is precompact. In order to obtain cobordism reductions from this construction, recall that, by definition of a metric Kuranishi cobordism, it carries a product metric in the collar neighbourhoods ${\iota}ta^{\alpha}_I({\partial}artial^{\alpha} U_I \tildemes A^{\alpha}_{\varepsilon})\subset U_I$ of the boundary, which ensures that for ${\delta}<\frac {\varepsilon} 2$ the ${\delta}$-neighbourhood of a collared set (i.e.\ with $({\iota}ta^{\alpha}_I)^{-1}(V_I)={\partial}artial^{\alpha} V_I \tildemes A^{\alpha}_{\varepsilon}$) is collared. More precisely, with $B^{{\partial}^{\alpha} I}_{\delta}(W)\subset {\partial}^{\alpha} U_{I}$ denoting neighbourhoods in the corresponding domain of ${\partial}^{\alpha}{\mathcal K}$ we have $$ B^I_{\delta}(V_I) \cap {\iota}ta^{\alpha}_I\bigl({\partial}artial^{\alpha} U_I \tildemes A^{\alpha}_{\frac {\varepsilon} 2}\bigr) \;=\; {\iota}ta^{\alpha}_I\bigl( B^{{\partial}^{\alpha} I}_{\delta}({\partial}artial^{\alpha} V_I) \tildemes A^{\alpha}_{\frac{\varepsilon} 2} \bigr) . $$ So it remains to find ${\delta}lta>0$ satisfying a) and b). Property a) for sufficiently small ${\delta}lta>0$ follows from the precompactness $V_I\sqsubset U_I$ in Definition \ref{def:vicin}~(i) and a covering argument based on the fact that, in the locally compact manifold $U_I$, every $p\in\overline{V_I}$ has a compact neighbourhood $\overline{B^I_{{\delta}lta_p}(p)}$ for some ${{\delta}lta_p>0}$. To check b) recall that by Definition~\ref{def:vicin} the subsets ${\partial}i_{\mathcal K}(\overline{V_I})$ and ${\partial}i_{\mathcal K}(\overline{V_J})$ of $|{\mathcal K}|$ are disjoint unless $I\subset J$ or $J\subset I$. 
Since each ${\partial}i_{\mathcal K}|_{U_I}$ maps continuously to the quotient topology on $|{\mathcal K}|$, and the identity to the metric topology is continuous by Lemma~\ref{le:metric}, the ${\partial}i_{\mathcal K}(\overline{V_I})$ are also compact subsets of the metric space $|({\mathcal K},d)|$. Hence b) is satisfied if we choose ${\delta}>0$ less than half the distance between each disjoint pair ${\partial}i_{\mathcal K}(\overline{V_I}),{\partial}i_{\mathcal K}(\overline{V_J})$. For (ii) choose any shrinking $(Z_I')_{I\in {\mathcal I}_{\mathcal K}}$ of the footprint cover $\bigl(Z_I = {\partial}si_I(V_I\cap s_I^{-1}(0))\bigr)_{I\in {\mathcal I}_{\mathcal K}}$ of ${\mathcal V}$ as in Definition~\ref{def:shr0}. By Lemma~\ref{le:restr0} there are open subsets $C_I\sqsubset V_I$ such that $C_I\cap s_I^{-1}(0) = {\partial}si_I^{-1}(Z_I')$. This guarantees ${\iota}_{\mathcal K}(X)\subset {\partial}i_{\mathcal K}({\mathcal C})$, and since the $(V_I)$ satisfy the separation condition (ii) in Definition~\ref{def:vicin}, so do the sets $(C_I)$. The same construction works if ${\mathcal K}$ is a cobordism, but we also require that ${\mathcal C}$ be collared. For this we must start with a collared shrinking $(Z_I')_{I\in {\mathcal I}_{\mathcal K}}$ of the footprint cover $\bigl(Z_I = {\partial}si_I(V_I\cap s_I^{-1}(0))\bigr)_{I\in {\mathcal I}_{\mathcal K}}$ of the cobordism reduction ${\mathcal V}$, which exists by Lemma~\ref{le:cobred}~(i). Then we choose $C_I'\sqsubset V_I$ such that $C_I'\cap s_I^{-1}(0) = {\partial}si_I^{-1}(Z_I')$ as above. Finally we adjust each $C_I'$ to be a product in the collar by the method described in \eqref{eq:Vcoll}. To prove (iii), we proceed as above, starting with a collared shrinking $(Z_I')_{I\in {\mathcal I}_{\mathcal K}}$ of the footprint cover $\bigl(Z_I = {\partial}si_I(V_I\cap s_I^{-1}(0))\bigr)_{I\in {\mathcal I}_{\mathcal K}}$ of the cobordism reduction ${\mathcal V}$ that extends the shrinkings at ${\alpha}=0,1$ determined by the reductions ${\mathcal C}^{\alpha}$. This exists by the construction \eqref{eq:cobred1} in the proof of Lemma~\ref{le:cobred}. Then choose collared open subsets $C_I'\sqsubset V_I$ as above with $C_I'\cap s_I^{-1}(0) = {\partial}si_I^{-1}(Z_I')$ for all $I$. Finally, let $2{\varepsilon}>0$ be less than the collar width of the sets $V_I$, $Z_I'$, and $C_I'$; then we adjust $C_I'$ in these collars so that they have the needed restrictions by setting $$ C_I: = \Bigl(C_I'{\smallsetminus} {\iota}^{\alpha}_I({\partial}^{\alpha} V_I\tildemes [0,2{\varepsilon}])\Bigr) \cup \Bigl({\iota}^{\alpha}_I\bigl(C^{\alpha}_I\tildemes [0,2{\varepsilon})\cup {\partial}^{\alpha}(C_I')\tildemes ({\varepsilon},2{\varepsilon}]\bigr)\Bigr). $$ Note that because ${\mathcal C}^{\alpha}$ and ${\partial}^{\alpha} {\mathcal C}'$ are both contained in ${\partial}^{\alpha}{\mathcal V}$, their union is also a reduction. The proof of openness is the same as for \eqref{eq:Vcoll}. \end{proof} We end this subsection by showing that a reduction ${\mathcal V}$ of a tame Kuranishi atlas ${\mathcal K}$ canonically induces a (generally neither additive nor tame) Kuranishi atlas ${\mathcal K}^{\mathcal V}$ whose realization $|{\mathcal K}^{\mathcal V}|$ maps bijectively to ${\partial}i_{\mathcal K}({\mathcal V})$. This result is not used in the construction of the VMC or VFC. {\beta}gin{prop}{\lambda}bel{prop:red} Let ${\mathcal V}$ be a reduction of a tame Kuranishi atlas ${\mathcal K}$.
Then there exists a canonical Kuranishi atlas ${\mathcal K}^{\mathcal V}$ which satisfies the strong cocycle condition. Moreover, there exists a canonical faithful functor ${\iota}^{\mathcal V}: {\bf B}_{{\mathcal K}^{\mathcal V}}\to {\bf B}_{{\mathcal K}}$ which induces a continuous injection $|{\mathcal K}^{\mathcal V}|\to |{\mathcal K}|$ with image ${\partial}i_{\mathcal K}({\mathcal V})$. \end{prop} {\beta}gin{proof} To begin the construction of ${\mathcal K}^{\mathcal V}$, note first that by condition (i) in Definition~\ref{def:vicin} the footprint $Z_I: = {\partial}si_I\bigl(V_I\cap s_I^{-1}(0)\bigr)$ is nonempty whenever $V_I\ne \emptyset$. Further, by (iii) the sets $(Z_I)_{I\in{\mathcal I}_{\mathcal K}}$ cover $X$. Hence we can use the tuple of nontrivial reduced Kuranishi charts $({\bf K}_I|_{V_I})_{I\in {\mathcal I}_{\mathcal K}, V_I\ne \emptyset}$ as the covering family of basic charts in ${\mathcal K}^{\mathcal V}$. Then the index set of the new Kuranishi atlas is {\beta}gin{equation}{\lambda}bel{eq:IKV} {\mathcal I}_{{\mathcal K}^{\mathcal V}}= \bigl\{C\subset {\mathcal I}_{\mathcal K} \,\big|\, Z_C: = {\textstyle \bigcap_{I\in C}} Z_I\neq \emptyset \bigr\}. \end{equation} By Definition~\ref{def:vicin}~(ii), each such subset $C\subset {\mathcal I}_{\mathcal K}$ that indexes basic charts with $\bigcap_{I\in C} Z_I\neq \emptyset$ can be totally ordered into a chain $I_1\subsetneq I_2\subsetneq \ldots\subsetneq I_n$ of elements in ${\mathcal I}_{\mathcal K}$; cf.\ Figure~\ref{fig:1} and Remark~\ref{rmk:nerve}. Therefore ${\mathcal I}_{{\mathcal K}^{\mathcal V}}$ can be identified with the set ${\mathcal C}\subset 2^{{\mathcal I}_{\mathcal K}}$ of linearly ordered chains $C\subset {\mathcal I}_{\mathcal K}$ such that $Z_C\ne \emptyset$. For $C = \bigl( I^C_1\subsetneq I^C_2\subsetneq \ldots\subsetneq I^C_{n^C}=:I^C_{\rm max} \bigr) \in{\mathcal I}_{{\mathcal K}^{\mathcal V}}$ we define the transition chart by restriction of the chart for the maximal element $I^C_{\rm max}$ to the intersection of the domains of the chain: {\beta}gin{equation}{\lambda}bel{eq:domKC} {\bf K}_C^{\mathcal V} := {\bf K}_{I^C_{\rm max}}\big|_{V_C} \qquad\text{with}\quad V_C := \bigcap_{1\le k\le n^C}\; {\partial}hi_{I_k^C I^C_{\max}}\bigl(V_{I_k^C}\cap U_{I_k^C I^C_{\max}}\bigr) \;\subset\; V_{I^C_{\max}} . \end{equation} By Lemma~\ref{le:Ku2}~(a), this domain satisfies ${\partial}i_{\mathcal K}(V_C)= \bigcap_{I\in C} {\partial}i_{\mathcal K}(V_I)$ and hence can be expressed as {\beta}gin{equation}{\lambda}bel{eq:redu2} V_C \;=\; {\textstyle \bigcap_{I\in C}} \; {\varepsilon}_{I^C_{\max}}(V_I) \;=\; U_{I^C_{\max}} \cap {\partial}i_{\mathcal K}^{-1} \Bigl( {\textstyle \bigcap_{I\in C}} \, {\partial}i_{\mathcal K}(V_I) \Bigr). \end{equation} Next, coordinate changes are required only between $C,D\in{\mathcal I}_{{\mathcal K}^{\mathcal V}}$ with $C\subset D$ so that $Z_C\supset Z_D\ne \emptyset$ and, by the above, ${\partial}i_{\mathcal K}(V_D)\subset{\partial}i_{\mathcal K}(V_C)$.
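For instance (purely to unwind \eqref{eq:domKC} and \eqref{eq:redu2}), take $I\subsetneq J$ in ${\mathcal I}_{\mathcal K}$ with $Z_I\cap Z_J\ne\emptyset$ and consider the chains $C=\{I\}\subset D=\{I,J\}$. With the usual convention $U_{II}=U_I$, ${\partial}hi_{II}={\rm id}$, we get $V_C=V_I$, while $I^D_{\rm max}=J$ and
$$
V_D \;=\; {\partial}hi_{IJ}\bigl(V_I\cap U_{IJ}\bigr)\cap V_J \;=\; {\varepsilon}_{J}(V_I)\cap V_J , \qquad {\partial}i_{\mathcal K}(V_D) \;=\; {\partial}i_{\mathcal K}(V_I)\cap{\partial}i_{\mathcal K}(V_J) \;\subset\; {\partial}i_{\mathcal K}(V_C) .
$$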
Since the inclusion $C\subset D$ implies the inclusion of maximal elements $I^C_{\rm max} \subset I^D_{\rm max}$, we can define the coordinate change as the restriction $$ \widehat\Phi_{CD}^{\mathcal V}:=\widehat\Phi_{I^C_{\rm max} I^D_{\rm max}}\big|_{U^{\mathcal V}_{CD}} \;: \; {\bf K}_C^{\mathcal V} = {\bf K}_{I^C_{\rm max}}\big|_{V_C} \; \longrightarrow\; {\bf K}_D^{\mathcal V} ={\bf K}_{I^D_{\rm max}}\big|_{V_D} $$ in the sense of Lemma~\ref{le:restrchange} with maximal domain {\beta}gin{equation}{\lambda}bel{eq:UCD} U^{\mathcal V}_{CD}: = V_C\cap ({\partial}hi_{I^C_{\rm max} I^D_{\rm max}})^{-1}(V_D) \;=\; V_C\cap {\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}(V_D)) . \end{equation} Using \eqref{eq:redu2} and the fact that $C\subset D$ we can also rewrite this domain as {\beta}gin{equation}{\lambda}bel{domUCD} U^{\mathcal V}_{CD} \;=\; U_{I^C_{\max}}\cap {\partial}i_{\mathcal K}^{-1}\Bigl({\textstyle \bigcap_{I\in D}} {\partial}i_{\mathcal K}(V_I)\Bigr) \;=\; U_{I^C_{\max}}\cap {\partial}i_{\mathcal K}^{-1}\bigl({\partial}i_{\mathcal K}(V_D)\bigr). \end{equation} Hence, again using Lemma~\ref{le:Ku2}~(a), the domain of the composed coordinate change $\widehat\Phi_{DE}\circ \widehat\Phi_{CD}$ for $C\subset D\subset E$ is {\beta}gin{align*} U^{\mathcal V}_{CD} \cap ({\partial}hi^{\mathcal V}_{I^C_{\rm max} I^D_{\rm max}})^{-1}\bigl( U^{\mathcal V}_{DE} \bigr) &\;=\; U_{I^C_{\max}} \;\cap\; {\partial}i_{\mathcal K}^{-1}\bigl({\partial}i_{\mathcal K}(V_D)\bigr) \;\cap\; ({\partial}hi^{\mathcal V}_{I^C_{\rm max} I^D_{\rm max}})^{-1}\Bigl( {\partial}i_{\mathcal K}^{-1}\bigl({\partial}i_{\mathcal K}(V_E)\bigr) \Bigr) \\ &\;=\; U_{I^C_{\max}} \;\cap\; {\partial}i_{\mathcal K}^{-1}\bigl({\partial}i_{\mathcal K}(V_D) \cap {\partial}i_{\mathcal K}(V_E)\bigr)\\ & \;=\; U_{I^C_{\max}} \;\cap\; {\partial}i_{\mathcal K}^{-1}\bigl( {\partial}i_{\mathcal K}(V_E)\bigr) , \end{align*} which equals the domain of $\widehat\Phi_{CE}$. Now the strong cocycle condition for ${\mathcal K}^{\mathcal V}$ follows from the strong cocycle condition for ${\mathcal K}$, which holds by Lemma~\ref{le:tame0}. In particular, ${\mathcal K}^{\mathcal V}$ is a Kuranishi atlas. Next, the inclusions $V_C\hookrightarrow U_{I^C_{\max}}$ induce a continuous map on the object spaces $$ {\iota}ta^{\mathcal V} \,:\; {\rm Obj}_{{\bf B}_{{\mathcal K}^{\mathcal V}}} = {\textstyle \bigcup_{C\in{\mathcal I}_{{\mathcal K}^{\mathcal V}}}} V_C \; \longrightarrow \; {\textstyle \bigcup_{I\in{\mathcal I}_{\mathcal K}}} U_I = {\rm Obj}_{{\bf B}_{\mathcal K}}. $$ Since $V_C\subset V_{I^C_{\max}}$ for all $C$, this map has image $\bigcup_{I\in {\mathcal I}_{\mathcal K}} V_I$. It is generally not injective. However, because the coordinate changes in ${{\mathcal K}^{\mathcal V}}$ are restrictions of those in ${\mathcal K}$, this map on object spaces extends to a functor ${\iota}ta^{\mathcal V} : {\bf B}_{{\mathcal K}^{\mathcal V}}\to {\bf B}_{{\mathcal K}}$. This shows that the induced map $|{\iota}ta^{\mathcal V}|:|{{\mathcal K}^{\mathcal V}}|\to |{\mathcal K}|$ is continuous with image ${\partial}i_{\mathcal K}({\mathcal V})$. Moreover, the functor ${\iota}ta^{\mathcal V}$ is faithful, i.e.\ for each $(C,x), (D,y)\in {\rm Obj}_{{\bf B}_{{\mathcal K}^{\mathcal V}}}$ the map $$ {\rm Mor}_{{\mathcal K}^{\mathcal V}}\bigl((C,x),(D,y)\bigr)\;\longrightarrow\; {\rm Mor}_{{\mathcal K}}\bigl({\iota}^{\mathcal V}(C,x), {\iota}^{\mathcal V}(D,y)\bigr) $$ is injective. 
To prove that $|{\iota}ta^{\mathcal V}|$ is injective we need to show for $x\in V_C, y\in V_D$ that $$ (I^C_{\max}, x){\sigma}m_{\mathcal K} (I^D_{\max},y) \;\Longrightarrow\; (C,x){\sigma}m_{{\mathcal K}^{\mathcal V}} (D,y). $$ To see this, note that by assumption and \eqref{eq:redu2} we have ${\partial}i_{\mathcal K}(V_I)\ni {\partial}i_{\mathcal K}(x) = {\partial}i_{\mathcal K}(y) \in {\partial}i_{\mathcal K}(V_J)$ for all $I\in C$ and $J\in D$. In particular, for each $I^C_k\in C, I^D_\ell\in D$ the intersection ${\partial}i_{\mathcal K}(V_{I^C_{k}})\cap {\partial}i_{\mathcal K}(V_{I^D_{\ell}})$ is nonempty. Hence, by Definition~\ref{def:vicin}~(ii), the elements $I^C_1,\ldots,I^C_{\max}, I^D_1,\ldots,I^D_{\max}$ of ${\mathcal I}_{\mathcal K}$ can be ordered into a chain $E:=C\vee D$ (after removing repeated elements) with maximal element $I^{E}_{\max}=I^{C}_{\max}$ or $I^{E}_{\max}=I^{D}_{\max}$, and such that ${\partial}i_{\mathcal K}(x)={\partial}i_{\mathcal K}(y)\in \bigcap_{I\in C\vee D} {\partial}i_{\mathcal K}(V_I)={\partial}i_{\mathcal K}(V_{C\vee D})$. In particular, $V_{C\vee D}$ is nonempty, so we have $E=C\vee D \in{\mathcal I}_{{\mathcal K}^{\mathcal V}}$ and $x\in V_C\cap {\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}(V_E)) \subset U_{I^C_{\max}I^E_{\max}}$ lies in the domain of ${\partial}hi^{\mathcal V}_{C (C\vee D)}={\partial}hi_{I^C_{\max}I^E_{\max}}$, whereas $y\in V_D\cap {\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}(V_E)) \subset U_{I^D_{\max}I^E_{\max}}$ lies in the domain of ${\partial}hi^{\mathcal V}_{D (C\vee D)}={\partial}hi_{I^D_{\max}I^E_{\max}}$. Now Lemma~\ref{le:Ku2}~(a) for $I^C_{\max}\subset I^E_{\max}$ and $I^D_{\max}\subset I^E_{\max}$ implies ${\partial}hi^{\mathcal V}_{C (C\vee D)}(x) = {\partial}hi^{\mathcal V}_{D (C\vee D)}(y)$. This proves $(C,x){\sigma}m_{{\mathcal K}^{\mathcal V}} (D,y)$ as required, and thus completes the proof. \end{proof} {\beta}gin{remark}{\lambda}bel{rmk:KVv} \rm The resulting Kuranishi atlas ${\mathcal K}^{\mathcal V}$ is far from additive. In fact, the above proof shows that ${\mathcal K}^{\mathcal V}$ has the property that for any two charts ${\bf K}_C^{\mathcal V}, {\bf K}_D^{\mathcal V}$ with intersecting footprints $Z_C\cap Z_D\ne \emptyset$, we must have $I^C_{\rm max} \subset I^D_{\rm max}$ or $I^D_{\rm max} \subset I^C_{\rm max}$, though possibly neither $C\subset D$ nor $D\subset C$. Assuming w.l.o.g.\ that $I^C_{\rm max} \subset I^D_{\rm max}$, there is a direct coordinate change $$ \widehat\Phi_{I^C_{\rm max} I^D_{\rm max}}|_{V_C\cap {\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}(V_D))} \;: \; {\bf K}_C^{\mathcal V} = {\bf K}_{I^C_{\rm max}}\big|_{V_C} \; \longrightarrow\; {\bf K}_D^{\mathcal V} = {\bf K}_{I^D_{\rm max}}\big|_{V_D} . $$ It embeds one of the obstruction bundles as a summand of the other; in this case $E_C=E_{I^C_{\rm max}}\hookrightarrow \widehat\Phi_{I^C_{\rm max} I^D_{\rm max}}(E_C)\subset E_D = E_{I^D_{\rm max}}$. Such a coordinate change is not explicitly included in the Kuranishi atlas ${\mathcal K}^{\mathcal V}$ unless $C\subset D$. It does, however, appear, as in Lemma~\ref{le:Ku2}, as the composite of a coordinate change ${\bf K}_C^{\mathcal V} \to {\bf K}_{C\vee D}^{\mathcal V}$ with the inverse of a coordinate change ${\bf K}_D^{\mathcal V} \to {\bf K}_{C\vee D}^{\mathcal V}$. In this respect the subcategory ${\bf B}_{{\mathcal K}}|_{\mathcal V}$ has a much simpler structure, since the components $V_I$ of its space of objects correspond to chains $C=(I)$ with just one element.
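As a minimal illustration of the composite description above, suppose $C=\{I\}$ and $D=\{J\}$ are one-element chains with $I\subsetneq J$ and $Z_I\cap Z_J\ne\emptyset$, so that neither $C\subset D$ nor $D\subset C$, while $C\vee D=\{I,J\}$ has maximal element $J$. Then
$$
{\bf K}_C^{\mathcal V} = {\bf K}_I\big|_{V_I} \;\longrightarrow\; {\bf K}_{C\vee D}^{\mathcal V} \;\longleftarrow\; {\bf K}_D^{\mathcal V} = {\bf K}_J\big|_{V_J} ,
$$
where the left arrow is the restriction of $\widehat\Phi_{IJ}$ and the right arrow is simply an inclusion of domains (both charts being restrictions of ${\bf K}_J$); inverting that inclusion on the overlap recovers the direct coordinate change displayed above.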
\end{remark} \subsection{Perturbed zero sets}{\lambda}bel{ss:sect} \hspace{1mm}\\ Throughout this section, ${\mathcal K}$ is a fixed tame Kuranishi atlas on a compact metrizable space $X$ or a Kuranishi cobordism on $X\tildemes[0,1]$. We begin by introducing sections in a reduction and an infinitesimal version of an admissibility condition for sections in \cite[A.1.21]{FOOO}.\footnote { In fact, even the canonical section $(s_I)$ may not satisfy an identity $s_J = \widehat{\partial}hi_{IJ}(s_I) \oplus {\rm id}_{E_J/\widehat{\partial}hi_{IJ}(E_I)}$ in a tubular neighbourhood of ${\partial}hi_{IJ}(U_{IJ})\subset U_J$ identified with $U_{IJ}\tildemes E_J/\widehat{\partial}hi_{IJ}(E_I)$ in the way described in \cite[A.1.21]{FOOO}. (The new definition used in \cite{FOOO12} is closer to ours.) } {\beta}gin{defn}{\lambda}bel{def:sect} A {\bf reduced section} of ${\mathcal K}$ is a smooth functor $\nu:{\bf B}_{\mathcal K}|_{\mathcal V}\to{\bf E}_{\mathcal K}|_{\mathcal V}$ between the reduced domain and obstruction categories of some reduction ${\mathcal V}$ of ${\mathcal K}$, such that ${\partial}r_{\mathcal K}\circ\nu$ is the identity functor. That is, $\nu=(\nu_I)_{I\in{\mathcal I}_{\mathcal K}}$ is given by a family of smooth maps $\nu_I: V_I\to E_I$ such that for each $I\subsetneq J$ we have a commuting diagram {\beta}gin{equation}{\lambda}bel{eq:comp} \xymatrix{ V_I\cap {\partial}hi_{IJ}^{-1}(V_J) \ar@{->}[d]_{{\partial}hi_{IJ}} \ar@{->}[r]^{\qquad\nu_I} & E_I \ar@{->}[d]^{\widehat{\partial}hi_{IJ}} \\ V_J \ar@{->}[r]^{\nu_J} & E_J. } \end{equation} We say that a reduced section $\nu$ is an {\bf admissible perturbation} of $s_{\mathcal K}|_{\mathcal V}$ if {\beta}gin{equation}{\lambda}bel{eq:admiss} {\rm d}_y \nu_J({\rm T}_y V_J) \subset{\rm im\,}\widehat{\partial}hi_{IJ} \qquad \forall \; I\subsetneq J, \;y\in V_J\cap{\partial}hi_{IJ}(V_I\cap U_{IJ}) . \end{equation} \end{defn} {\beta}gin{rmk}{\lambda}bel{rmk:sect} \rm (i) Each reduced section $\nu:{\bf B}_{\mathcal K}|_{\mathcal V}\to{\bf E}_{\mathcal K}|_{\mathcal V}$ induces a continuous map $|\nu|: |{\mathcal V}|\to |{\bf E}_{\mathcal K}|$ such that $|{\partial}r_{\mathcal K}|\circ |\nu| = {\rm id}$, where $|{\partial}r_{\mathcal K}|$ is as in Theorem~\ref{thm:K}. Each such map has the further property that $|\nu|\big|_{{\partial}i_{\mathcal K}(V_I)}$ takes values in ${\partial}i_{\mathcal K}(U_I\tildemes E_I)$. { } {\mathbb N}I (ii) More generally, a section ${\sigma}$ of ${\mathcal K}$ is a functor ${\bf B}_{\mathcal K}\to{\bf E}_{\mathcal K}$ that satisfies the conditions of Definition~\ref{def:sect} with ${\mathcal V}$ replaced by ${\rm Obj}_{{\bf B}_{\mathcal K}}$. But these compatibility conditions are now much more onerous. For example, except in the most trivial cases, the set $V_{12}\cap \bigcap_{i=1,2} {\partial}hi_{i,12}(U_{i,12}{\smallsetminus} V_i)$ is nonempty, so that there is $x\in V_{12}$ with ${\partial}i_{\mathcal K}(x)\in \bigl({\partial}i_{\mathcal K}(U_1)\cup {\partial}i_{\mathcal K}(U_2)\bigl){\smallsetminus} \bigl( {\partial}i_{\mathcal K}(V_1) \cup {\partial}i_{\mathcal K}(V_2)\bigr)$. A reduced section $\nu$ could take any value $\nu_{12}(x) \in E_{12}\cong \widehat{\partial}hi_{1,12}(E_1)\oplus \widehat{\partial}hi_{2,12}(E_2)$. On the other hand, a section ${\sigma}$ of ${\mathcal K}$ would have ${\sigma}(x)\in \bigcap_{i=1,2} \widehat{\partial}hi_{i,12}(E_i)=\{0\}$ since the compatibility conditions imply that $\nu_{12}|_{{\rm im\,}{\partial}hi_{i,12}}$ takes values in $\widehat{\partial}hi_{i,12}(E_i)$. 
We cannot achieve transversality under such conditions, which explains why we consider reduced sections. \end{rmk} Note that the zero section $0_{\mathcal K}$, given by $U_I\to 0 \in E_I$, restricts to an admissible perturbation $0_{\mathcal V}:{\bf B}_{\mathcal K}|_{\mathcal V}\to{\bf E}_{\mathcal K}|_{\mathcal V}$ in the sense of the above definition. Similarly, the canonical section $s:= s_{\mathcal K}$ of the Kuranishi atlas restricts to a section $s|_{\mathcal V}: {\bf B}_{\mathcal K}|_{\mathcal V}\to{\bf E}_{\mathcal K}|_{\mathcal V}$ of any reduction. However, the canonical section is generally not admissible. In fact, as we saw in Lemma~\ref{le:change}, for all $y\in V_J\cap {\partial}hi_{IJ}(V_I\cap U_{IJ})$ the map $$ {\rm pr}_{E_I}^{\partial}erp\circ {\rm d}_y s_J \,: \;\; \quotient{{\rm T}_y U_J} {{\rm T}_y ({\partial}hi_{IJ}(U_{IJ}))} \; \longrightarrow \; \quotient{E_J}{\widehat{\partial}hi_{IJ}(E_I)} $$ is an isomorphism by the index condition \eqref{tbc}, while for an admissible section it is identically zero. So for any reduction ${\mathcal V}$ and admissible perturbation $\nu$, the sum $$ s+\nu:=(s_I|_{V_I}+\nu_I)_{I\in{\mathcal I}_{\mathcal K}} \,:\; {\bf B}_{\mathcal K}|_{\mathcal V} \;\to\; {\bf E}_{\mathcal K}|_{\mathcal V} $$ is a reduced section that satisfies the index condition $$ {\rm pr}_{E_I}^{\partial}erp\circ{\rm d}_y (s_J+\nu_J)\,: \;\; \quotient{{\rm T}_y U_J}{{\rm T}_y ({\partial}hi_{IJ}(U_{IJ}))} \;\stackrel{\cong}\longrightarrow \; \quotient{E_J}{\widehat{\partial}hi_{IJ}(E_I)} \qquad\forall \; y\in V_J\cap {\partial}hi_{IJ}(V_I\cap U_{IJ}). $$ We use this in the following lemma to show that transversality of the sections in Kuranishi charts is preserved under coordinate changes. However, admissibility only becomes essential in the discussion of orientations; cf.\ Proposition~\ref{prop:orient}. {\beta}gin{lemma}{\lambda}bel{le:transv} Let ${\mathcal V}$ be a reduction of ${\mathcal K}$, and $\nu$ an admissible perturbation of $s_{\mathcal K}|_{\mathcal V}$. If $z\in V_I$ and $w\in V_J$ map to the same point in the virtual neighbourhood ${\partial}i_{\mathcal K}(z)={\partial}i_{\mathcal K}(w)\in|{\mathcal K}|$, then $z$ is a transverse zero of $s_I|_{V_I}+\nu_I$ if and only if $w$ is a transverse zero of $s_J|_{V_J}+\nu_J$. \end{lemma} {\beta}gin{proof} Note that, since the equivalence relation ${\sigma}m$ on ${\rm Obj}_{{\mathcal B}_{\mathcal K}}$ is generated by ${\partial}receq$ and its inverse $\succeq$, it suffices to suppose that $(I,z){\partial}receq (J,w)$, i.e.\ $w={\partial}hi_{IJ}(z)$. Now $s_J(w)=\widehat{\partial}hi_{IJ}(s_I(z))=0$ iff $s_I(z)=0$ since $\widehat{\partial}hi_{IJ}$ is injective. Next, $z$ is a transverse zero of $s|_{{\mathcal V}}+\nu$ exactly if ${\rm d}_z (s_I+\nu_I): {\rm T}_z U_I\to E_I$ is surjective. On the other hand, we have splittings ${\rm T}_w U_J \cong {\rm im\,}{\rm d}_z{\partial}hi_{IJ} \oplus \tfrac{{\rm T}_w U_J} {{\rm im\,}{\rm d}_z{\partial}hi_{IJ}} $ and $E_J \cong \widehat{\partial}hi_{IJ}(E_I) \oplus \tfrac{E_J}{\widehat{\partial}hi_{IJ}(E_I)}$ with respect to which the differential at $w$ has product form {\beta}gin{equation}{\lambda}bel{eq:dnutrans} {\rm d}_w (s_J + \nu_J) \cong \bigl(\; \widehat{\partial}hi_{IJ}\circ{\rm d}_z (s_I+\nu_I) \circ ({\rm d}_z{\partial}hi_{IJ})^{-1} \,,\, {\rm d}_w s_J \, \bigr) , \end{equation} by the admissibility condition on $\nu_J$. Here the second factor is an isomorphism by the index condition \eqref{tbc}. 
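To spell out why $s+\nu$ satisfies the index condition displayed above: \eqref{eq:admiss} gives ${\rm pr}_{E_I}^{\partial}erp\circ {\rm d}_y\nu_J=0$, so that
$$
{\rm pr}_{E_I}^{\partial}erp\circ{\rm d}_y (s_J+\nu_J) \;=\; {\rm pr}_{E_I}^{\partial}erp\circ{\rm d}_y s_J ,
$$
and hence both induce the same map on the quotient, which is an isomorphism by \eqref{tbc}.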
Since $\widehat{\partial}hi_{IJ}$ and $({\rm d}_z{\partial}hi_{IJ})^{-1}$ are isomorphisms on the relevant domains, this proves equivalence of the transversality statements. \end{proof} {\beta}gin{defn}{\lambda}bel{def:sect2} A {\bf transverse perturbation} of $s_{\mathcal K}|_{{\mathcal V}}$ is a reduced section $\nu:{\bf B}_{\mathcal K}|_{\mathcal V}\to{\bf E}_{\mathcal K}|_{\mathcal V}$ whose sum with the canonical section $s|_{\mathcal V}$ is transverse to the zero section $0_{\mathcal V}$, that is $s_I|_{V_I}+\nu_I{\partial}itchfork 0$ for all $I\in{\mathcal I}_{\mathcal K}$. Given a transverse perturbation $\nu$, we define the {\bf perturbed zero set} $|{\bf Z}_\nu|$ to be the realization of the full subcategory ${\bf Z}_\nu$ of ${\bf B}_{\mathcal K}$ with object space $$ (s + \nu)^{-1}(0) := {\textstyle \bigcup_{I\in {\mathcal I}_{\mathcal K}}}(s_I|_{V_I}+\nu_I)^{-1}(0) \;\subset\;{\rm Obj}_{{\bf B}_{\mathcal K}} . $$ That is, we equip $$ |{\bf Z}_\nu| : = \bigl|(s + \nu)^{-1}(0)\bigr| \,=\; \quotient{ {\textstyle\bigcup_{I\in{\mathcal I}_{\mathcal K}} (s_I|_{V_I}+\nu_I)^{-1}(0) }}{\!{\sigma}m} $$ with the quotient topology generated by the morphisms of ${\bf B}_{\mathcal K}|_{\mathcal V}$. By Lemma~\ref{lem:full} this is equivalent to the quotient topology induced by ${\partial}i_{\mathcal K}$, and the inclusion $(s+\nu)^{-1}(0)\subset{\mathcal V} = {\rm Obj}_{{\bf B}_{\mathcal K}|_{\mathcal V}}$ induces a continuous injection, which we denote by {\beta}gin{equation}{\lambda}bel{eq:Zinject} i_\nu \,:\; |{\bf Z}_\nu| \;\longrightarrow\; |{\mathcal K}| . \end{equation} \end{defn} To see that the above is well defined, recall that the canonical section restricts to a reduced section $s|_{\mathcal V}: {\bf B}_{\mathcal K}|_{\mathcal V}\to{\bf E}_{\mathcal K}|_{\mathcal V}$, so that the sum $s|_{\mathcal V}+\nu$ is a reduced section as well, with a well defined zero set $(s + \nu)^{-1}(0)$. Moreover, since ${\bf Z}_\nu$ is the realization of a full subcategory of ${\bf B}_{\mathcal K}|_{\mathcal V}$, Lemma~\ref{lem:full} asserts that the map $i_\nu$ is a continuous injection to $|{\mathcal K}|$, and moreover a homoeomorphism from $|{\bf Z}_\nu|$ to ${\partial}i_{\mathcal K}\bigl((s+\nu)^{-1}(0)\bigr)=|(s + \nu)^{-1}(0)|$ with respect to the quotient topology in the sense of Definition~\ref{def:topologies}. In particular, the continuous injection to the Hausdorff space $|{\mathcal K}|$ implies Hausdorffness of $|{\bf Z}_\nu|$. However, the image of $i_\nu$ is ${\partial}i_{\mathcal K}\bigl((s+\nu)^{-1}(0)\bigr)$ with the relative topology induced by $|{\mathcal K}|$, that is $$ i_\nu ( |{\bf Z}_\nu| ) = \|(s+\nu)^{-1}(0)\| . $$ So the perturbed zero set is equipped with two Hausdorff topologies -- the quotient topology on $|{\bf Z}_\nu|\cong|(s+\nu)^{-1}(0)|$ and the relative topology on $\|(s+\nu)^{-1}(0)\|\subset|{\mathcal K}|$. It remains to achieve local smoothness and compactness in one of the topologies. We will see below that local smoothness follows from transversality of the perturbation, though only in the topology of $|{\bf Z}_\nu|$, which may contain smaller neighbourhoods than $\|(s + \nu)^{-1}(0)\|$. On the other hand, compactness of $\|(s + \nu)^{-1}(0)\|$ is easier to obtain than that of $|{\bf Z}_\nu|$, which may have more open covers. 
For the first, one could use the fact that $\|(s+\nu)^{-1}(0)\|\subset\|{\mathcal V}\|$ is precompact in $|{\mathcal K}|$ by Proposition~\ref{prop:Ktopl1}~(iii), so it would suffice to deduce closedness of $\|(s+\nu)^{-1}(0)\|\subset|{\mathcal K}|$. This would follow if the continuous map $|s+\nu|:\|{\mathcal V}\| \to |{\bf E}_{\mathcal K}|_{\mathcal V}|$ had a continuous extension to $|{\mathcal K}|$ with no further zeros. However, such an extension may not exist. In fact, generally $\|{\mathcal V}\|\subset |{\mathcal K}|$ fails to be open, ${\iota}_{\mathcal K}(X)\subset |{\mathcal K}|$ does not have any precompact neighbourhoods (see Example~\ref{ex:Khomeo}), and even those assumptions would not guarantee the existence of an extension. So compactness of either $\|(s+\nu)^{-1}(0)\|$ or $|{\bf Z}_\nu|$ will not hold in general without further hypotheses on the perturbation that force its zero set to be ``away from the boundary" of $\|{\mathcal V}\|$ in some appropriate sense. For that purpose we introduce the following extra assumption on the perturbed section that will directly imply compactness of $|{\bf Z}_\nu|$. {\beta}gin{defn}{\lambda}bel{def:precomp} A reduced section $\nu: {\bf B}_{\mathcal K}|_{\mathcal V}\to {\bf E}_{\mathcal K}|_{\mathcal V}$ is said to be {\bf precompact} if its domain is part of a nested reduction ${\mathcal C}\sqsubset {\mathcal V}$ such that $ {\partial}i_{\mathcal K}\bigl((s + \nu)^{-1}(0)\bigr) \;\subset\; {\partial}i_{\mathcal K}({\mathcal C}). $ \end{defn} The smoothness properties follow more directly from transversality of the perturbation. The next lemma shows that for transverse perturbations the object space $\bigcup_I (s_I|_{V_I}+\nu_I)^{-1}(0) \subset \bigcup_I V_I$ of ${\bf Z}_\nu$ is a smooth submanifold of dimension $d: = \dim {\mathcal K}$, and that the morphisms spaces are given by local diffeomorphisms. Hence the category ${\bf Z}_\nu$ can be extended to a groupoid by adding the inverses to the space of morphisms. {\beta}gin{lemma} {\lambda}bel{le:stransv} Let $\nu: {\bf B}_{\mathcal K}|_{\mathcal V}\to{\bf E}_{\mathcal K}|_{\mathcal V}$ be a transverse perturbation of $s_{\mathcal K}|_{\mathcal V}$. Then the domains of the perturbed zero set $(s_I|_{V_I}+\nu_I)^{-1}(0)\subset V_I$ are submanifolds for all $I\in{\mathcal I}_{\mathcal K}$; and for $I\subset J$ the map ${\partial}hi_{IJ}$ induces a diffeomorphism from ${V_J\cap U_{IJ}\cap (s_I|_{V_I}+\nu_I)^{-1}(0)}$ to an open subset of $(s_J|_{V_J}+\nu_J)^{-1}(0)$. \end{lemma} {\beta}gin{proof} The submanifold structure of $(s_I|_{V_I}+\nu_I)^{-1}(0)\subset U_I$ follows from the transversality and the implicit function theorem, with the dimension given by the index $d:=\dim U_I-\dim E_I$. For $I\subset J$ the embedding ${\partial}hi_{IJ}:U_{IJ}\to U_J$ then restricts to a smooth embedding {\beta}gin{equation}{\lambda}bel{eq:ZphiIJ} {\partial}hi_{IJ}: U_{IJ}\cap(s_I|_{V_I}+\nu_I)^{-1}(0) \to (s_J|_{V_J}+\nu_J)^{-1}(0) \end{equation} by the functoriality of the perturbed sections. Since this restriction of ${\partial}hi_{IJ}$ to this solution set is an embedding from an open subset of a $d$-dimensional manifold into a $d$-dimensional manifold, it has open image and is a diffeomorphism to this image. \end{proof} Assuming for now that precompact transverse perturbations exist (as we will show in Proposition~\ref{prop:ext}), we can now deduce smoothness and compactness of the perturbed zero set. 
{\beta}gin{prop} {\lambda}bel{prop:zeroS0} Let ${\mathcal K}$ be a tame $d$-dimensional Kuranishi atlas with a reduction $ {\mathcal V}\sqsubset {\mathcal K}$, and suppose that $\nu: {\bf B}_{\mathcal K}|_{\mathcal V} \to {\bf E}_{\mathcal K}|_{\mathcal V}$ is a precompact transverse perturbation. Then $|{\bf Z}_\nu| = |(s+ \nu)^{-1}(0)|$ is a smooth closed $d$-dimensional manifold. Moreover, its quotient topology agrees with the subspace topology on ${\|(s+ \nu)^{-1}(0)\|\subset|{\mathcal K}|}$. \end{prop} {\beta}gin{proof} By Lemma~\ref{le:stransv}, the realization $|{\bf Z}_\nu|$ is made from the (disjoint) union $\bigcup_I \bigl(s_I|_{V_I}+\nu_I)^{-1}(0)\bigr)$ of $d$-dimensional manifolds via an equivalence relation given by the smooth local diffeomorphisms \eqref{eq:ZphiIJ}. From this we can deduce that $|{\bf Z}_\nu|$ is second countable (i.e.\ its topology has a countable basis of neighbourhoods). Indeed, a basis is given by the projection of countable bases of each manifold $(s_I|_{V_I}+\nu_I)^{-1}(0)$ to the quotient. The images are open in the quotient space since the relation between different components of $(s+\nu)^{-1}(0)$ is given by local diffeomorphisms, taking open sets to open sets. In other words: The preimage of an open set in $|{\bf Z}_\nu|$ is a disjoint union of open subsets of $(s_I|_{V_I}+\nu_I)^{-1}(0)$. This can be used to express any open set as a union of the basis elements. It also shows that $|{\bf Z}_\nu|$ is locally smooth, since any choice of lift $x\in (s_I|_{V_I}+\nu_I)^{-1}(0)$ of a given point $[x]\in |{\bf Z}_\nu|$ lies in some chart ${\mathbb N}n \hookrightarrow {\mathbb R}^d$, where ${\mathbb N}n\subset (s_I|_{V_I}+\nu_I)^{-1}(0)$ is open; thus as above $[{\mathbb N}n]\subset|{\bf Z}_\nu|$ is open and provides a local homeomorphism to ${\mathbb R}^d$ near $[x]$. Moreover, as noted above, the continuous injection $|{\bf Z}_\nu| \to |{\mathcal K}|$ from Lemma~\ref{lem:full} transfers the Hausdorffness of $|{\mathcal K}|$ from Proposition~\ref{prop:Khomeo} to the realization $|{\bf Z}_\nu|$. Thus $|{\bf Z}_\nu|$ is a second countable Hausdorff space that is locally homeomorphic to a $d$-manifold. Hence it is a $d$-manifold, where we understand all manifolds to have empty boundary since the charts are to open sets in ${\mathbb R}^d$. In order to prove compactness, it now suffices to prove sequential compactness, since every manifold is metrizable. (In fact, second countability suffices for the equivalence of compactness and sequential compactness, see \cite[Theorem~5.5]{Kel}.) For that purpose recall that $\nu$ is assumed to be precompact, in particular there exists a precompact subset ${\mathcal C}\sqsubset {\mathcal V}$ so that {\beta}gin{equation}{\lambda}bel{eq:C} (s_I|_{V_I}+\nu_I)^{-1}(0) \;\subset\; {\partial}i_{\mathcal K}^{-1}\bigl({\partial}i_{\mathcal K}({\mathcal C})\bigr) \cap V_I \qquad \forall I\in {\mathcal I}_{\mathcal K}. \end{equation} Now to prove sequential compactness, consider a sequence $(p_k)_{k\in{\mathbb N}}\subset |{\bf Z}_\nu|$. In the following we will index all subsequences by $k\in{\mathbb N}$ as well. By finiteness of ${\mathcal I}_{\mathcal K}$ there is a subsequence of $(p_k)$ that has lifts in $(s_I|_{V_I}+\nu_I)^{-1}(0)$ for a fixed $I\in{\mathcal I}_{\mathcal K}$. 
In fact, by \eqref{eq:C}, and using the language of Definition~\ref{def:preceq}, the subsequence lies in $$ V_I \cap {\partial}i_{\mathcal K}^{-1}\bigl({\partial}i_{\mathcal K}({\mathcal C})\bigr) \;=\; V_I \cap {\textstyle \bigcup_{J\in{\mathcal I}_{\mathcal K}}} {\varepsilon}_I(C_J) \;\subset\; U_I . $$ Here ${\varepsilon}_I(C_J)=\emptyset$ unless $I\subset J$ or $J\subset I$, due to the intersection property (ii) of Definition~\ref{def:vicin} and the inclusion $C_J\subset V_J$. So we can choose another subsequence and lifts $(x_k)_{k\in{\mathbb N}}\subset V_I$ with ${\partial}i_{\mathcal K}(x_k)=p_k$ such that either $$ (x_k)_{k\in{\mathbb N}}\subset V_I \cap {\partial}hi_{IJ}^{-1}(C_J) \qquad\text{or}\qquad (x_k)_{k\in{\mathbb N}}\subset V_I \cap {\partial}hi_{JI}(C_J\cap U_{JI}) $$ for some $I\subset J$ or some $J\subset I$. In the first case, compatibility of the perturbations \eqref{eq:comp} implies that there are other lifts ${\partial}hi_{IJ}(x_k)\in (s_J|_{V_J}+\nu_J)^{-1}(0)\cap C_J$, which by precompactness $\overline{C_J}\sqsubset V_J$ have a convergent subsequence ${\partial}hi_{IJ}(x_k)\to y_\infty \in (s_J|_{V_J}+\nu_J)^{-1}(0)$. Thus we have found a limit point in the perturbed zero set $p_k = {\partial}i_{\mathcal K}({\partial}hi_{IJ}(x_k)) \to {\partial}i_{\mathcal K}(y_\infty) \in |{\bf Z}_\nu|$, as required for sequential compactness. In the second case we use the relative closedness of ${\partial}hi_{JI}(U_{JI})=s_I^{-1}(E_J)\subset U_I$ and the precompactness $V_I\sqsubset U_I$ to find a convergent subsequence $x_k\to x_\infty \in \overline{V_I} \cap {\partial}hi_{JI}(U_{JI})$. Since ${\partial}hi_{JI}$ is a homeomorphism to its image, this implies convergence of the preimages $y_k:= {\partial}hi_{JI}^{-1}(x_k) \to {\partial}hi_{JI}^{-1}(x_\infty) =: y_\infty \in U_{JI}$. By construction and compatibility of the perturbations \eqref{eq:comp}, this subsequence $(y_k)$ of lifts of ${\partial}i_{\mathcal K}(y_k)=p_k$ moreover lies in $(s_J|_{V_J}+\nu_J)^{-1}(0)\cap C_J$. Now precompactness of $C_J\sqsubset V_J$ implies $y_\infty\in V_J$, and continuity of the section implies $y_\infty\in (s_J|_{V_J}+\nu_J)^{-1}(0)$. Thus we again have a limit point $p_k = {\partial}i_{\mathcal K}(y_k) \to {\partial}i_{\mathcal K}(y_\infty) \in |{\bf Z}_\nu|$. This proves that the perturbed zero set $|{\bf Z}_\nu|$ is sequentially compact, and hence compact. Therefore it is a closed manifold, as claimed. Finally, the map \eqref{eq:Zinject} is now a continuous bijection between the compact space $|{\bf Z}_\nu|$ and the Hausdorff space $\|(s+\nu)^{-1}(0)\|\subset|{\mathcal K}|$ with the relative topology induced by $|{\mathcal K}|$. As such it is automatically a homeomorphism $|{\bf Z}_\nu|\cong \|(s+\nu)^{-1}(0)\|$, see Remark~\ref{rmk:hom}. \end{proof} We now extend these results to a tame Kuranishi cobordism ${\mathcal K}^{[0,1]}$ from ${\mathcal K}^0$ to ${\mathcal K}^1$ with cobordism reduction ${\mathcal V}$. Recall from Definition~\ref{def:cvicin} that ${\mathcal V}$ induces reductions ${\partial}artial^{\alpha}{\mathcal V} := \bigcup_{I\in{\mathcal I}_{{\mathcal K}^{\alpha}}} {\partial}artial^{\alpha} V_I \subset {\rm Obj}_{{\bf B}_{{\mathcal K}^{\alpha}}}$ of ${\mathcal K}^{\alpha}$ for ${\alpha}=0,1$, where we identify the index set ${\mathcal I}_{{\mathcal K}^{\alpha}}\cong{\iota}^{\alpha}({\mathcal I}_{{\mathcal K}^{\alpha}})$ with a subset of ${\mathcal I}_{{\mathcal K}^{[0,1]}}$.
{\beta}gin{defn} {\lambda}bel{def:csect} A {\bf reduced cobordism section} of $s_{{\mathcal K}^{[0,1]}}|_{{\mathcal V}}$ is a reduced section $\nu:{\bf B}_{{\mathcal K}^{[0,1]}}|_{\mathcal V}\to{\bf E}_{{\mathcal K}^{[0,1]}}|_{{\mathcal V}}$ as in Definition~\ref{def:sect} that in addition has product form in a collar neighbourhood of the boundary. That is, for ${\alpha}=0,1$ and $I\in {\mathcal I}_{{\mathcal K}^{\alpha}}\subset{\mathcal I}_{{\mathcal K}^{[0,1]}}$ there is ${\varepsilon}>0$ and a map $\nu_I^{\alpha}: {\partial}^{\alpha} V_I\to E_I$ such that $$ \nu_I \bigl( {\iota}_I^{\alpha} ( x,t ) \bigr) = \nu_I^{\alpha} (x) \qquad \forall\, x\in {\partial}^{\alpha} V_I , \ t\in A^{\alpha}_{\varepsilon} . $$ A {\bf precompact, transverse cobordism perturbation} of $s_{{\mathcal K}^{[0,1]}}|_{{\mathcal V}}$ is a reduced cobordism section $\nu$ as above that satisfies the transversality condition $s|_{V_I}+\nu_I {\partial}itchfork 0$ on the interior of the domains $V_I$, and whose domain is part of a nested cobordism reduction ${\mathcal C}\sqsubset {\mathcal V}$ such that ${\partial}i_{\mathcal K}\bigl((s + \nu)^{-1}(0)\bigr)\subset {\partial}i_{\mathcal K}({\mathcal C})$. We moreover call such $\nu$ {\bf admissible} if it satisfies \eqref{eq:admiss}. \end{defn} The collar form ensures that the transversality of the perturbation extends to the boundary of the domains, as follows. {\beta}gin{lemma}{\lambda}bel{le:ctransv} If $\nu:{\bf B}_{{\mathcal K}^{[0,1]}}|_{\mathcal V}\to{\bf E}_{{\mathcal K}^{[0,1]}}|_{{\mathcal V}}$ is a precompact, transverse cobordism perturbation of $s_{{\mathcal K}^{[0,1]}}|_{{\mathcal V}}$, then the {\bf restrictions} $\nu|_{{\partial}artial^{\alpha}{\mathcal V}} := \bigl( \nu_I^{\alpha} \bigr)_{I\in{\mathcal I}_{{\mathcal K}^{\alpha}}}$ for ${\alpha}=0,1$ are precompact, transverse perturbations of the restricted canonical sections $s_{{\mathcal K}^{\alpha}}|_{{\partial}artial^{\alpha}{\mathcal V}}$. If in addition $\nu$ is admissible, then so are the restrictions $\nu|_{{\partial}artial^{\alpha}{\mathcal V}}$. Moreover, each perturbed section $s|_{V_I}+\nu_I : V_I \to E_I$ for $I\in{\mathcal I}_{{\mathcal K}^0}\cup{\mathcal I}_{{\mathcal K}^1}\subset {\mathcal I}_{{\mathcal K}^{[0,1]}}$ is transverse to $0$ as a map on a domain with boundary. That is, the kernel of its differential is transverse to the boundary ${\partial}artial V_I = \bigcup_{{\alpha}=0,1}{\iota}ta^{\alpha}_I ({\partial}artial^{\alpha} V_I\tildemes \{{\alpha}\})$. \end{lemma} {\beta}gin{proof} Precompactness transfers to the restriction since the restrictions of the nested cobordism reduction are nested reductions ${\partial}^{\alpha}{\mathcal C} \sqsubset {\partial}^{\alpha}{\mathcal V}$ for ${\alpha}=0,1$. Similarly, admissibility transfers immediately by pullback of \eqref{eq:admiss} to the boundaries via ${\iota}^{\alpha}_I : {\partial}^{\alpha} V_I \tildemes \{{\alpha}\} \to V_I$. Transversality in a collar neighbourhood of the boundary ${\iota}^{\alpha}_I({\partial}artial^{\alpha}{\mathcal V}\tildemes A^{\alpha}_{\varepsilon})\subset V_I$ is equivalent to transversality of the restriction $s|_{{\partial}artial^{\alpha} V_I}+\nu^{\alpha}_I {\partial}itchfork 0$ since ${\rm d} \bigl( \nu_I \circ {\iota}_I^{\alpha} \bigr) = 0 {\rm d} t + {\rm d}\nu_I^{\alpha}$. 
Moreover, transversality of $s|_{V_I}+\nu_I : V_I \to E_I$ at the boundary of $V_I$, as a map on a domain with boundary, is equivalent under pullback with the embeddings ${\iota}^{\alpha}_I$ to transversality of $f := \bigl( s|_{V_I}+\nu_I \bigr)\circ{\iota}^{\alpha}_I : {\partial}artial^{\alpha} V_I \tildemes A^{\alpha}_{\varepsilon} \to E_I$. For the latter, the kernel $\ker{\rm d}_{s,x} f = \ker{\rm d}_x \nu^{\alpha}_I \tildemes {\mathbb R}$ is indeed transverse to the boundary ${\rm T}_x {\partial}artial^{\alpha} V_I\tildemes \{{\alpha}\}$ in ${\rm T}_x {\partial}artial^{\alpha} V_I \tildemes {\mathbb R}$. \end{proof} With that, we can show that precompact transverse perturbations of the Kuranishi cobordism induce smooth cobordisms (up to orientations) between the perturbed zero sets of the restricted perturbations. {\beta}gin{lemma} {\lambda}bel{le:czeroS0} Let $\nu: {\bf B}_{{\mathcal K}^{[0,1]}}|_{{\mathcal V}^{[0,1]}} \to {\bf E}_{{\mathcal K}^{[0,1]}}|_{{\mathcal V}^{[0,1]}}$ be a precompact, transverse cobordism perturbation. Then $|{\bf Z}_{\nu}|$, defined as in Definition~\ref{def:sect2}, is a compact manifold whose boundary ${\partial}|{\bf Z}_\nu|$ is diffeomorphic to the disjoint union of $|{\bf Z}_{\nu^0}|$ and $|{\bf Z}_{\nu^1}|$, where $\nu^{\alpha} :=\nu|_{{\partial}artial^{\alpha}{\mathcal V}}$ are the restricted transverse perturbations of $s_{{\mathcal K}^{\alpha}}|_{{\partial}^{\alpha} {\mathcal V}}$ for ${\alpha}=0,1$. \end{lemma} {\beta}gin{proof} The topological properties of $|{\bf Z}_{\nu}|$ follow from the arguments in Proposition~\ref{prop:zeroS0}, and smoothness of the zero sets follows as in Lemma~\ref{le:stransv}. However, the zero sets $(s_I|_{V_I}+\nu_I)^{-1}(0)$ for $I\in{\mathcal I}_{{\mathcal K}^{\alpha}}\subset{\mathcal I}_{{\mathcal K}^{[0,1]}}$ are now submanifolds with boundary, by the implicit function theorem on the interior of $V_I$ together with the smooth product structure on the collar neighbourhoods ${\iota}_I^{\alpha} ({\partial}^{\alpha} V_I \tildemes A^{\alpha}_{\varepsilon})$ of the boundary. The latter follows from the smoothness of $(s_I|_{{\partial}^{\alpha} V_I}+\nu^{\alpha}_I)^{-1}(0)$ from Lemma~\ref{le:stransv} and the embedding $$ (s_I|_{V_I}+\nu_I)^{-1}(0)\;\cap\; {\iota}_I^{\alpha} ({\partial}^{\alpha} V_I \tildemes \{{\alpha}\}) \;=\; {\iota}^{\alpha}_I \bigl( (s_I|_{{\partial}^{\alpha} V_I}+\nu^{\alpha}_I)^{-1}(0) \tildemes\{{\alpha}\} \bigr) . $$ This gives $(s+\nu)^{-1}(0)$ the structure of a compact manifold with two disjoint boundary components for ${\alpha}=0,1$ given by $$ {\partial}artial^{\alpha} \bigl( (s+\nu)^{-1}(0) \bigr) \;=\; {\underline{n}}derlineerset{{I\in{\mathcal I}_{{\mathcal K}^{\alpha}}}}{\textstyle\bigcup} {\iota}^{\alpha}_I \bigl( (s_I|_{{\partial}^{\alpha} V_I}+\nu^{\alpha}_I)^{-1}(0) \tildemes\{{\alpha}\} \bigr) , $$ which are diffeomorphic via ${\partial}artial^{\alpha} {\mathcal V} \ni (I,x) \mapsto {\iota}ta^{\alpha}_I(x,{\alpha})$ to the submanifolds $(s+\nu^{\alpha})^{-1}(0)\subset {\partial}artial^{\alpha} {\mathcal V}$ given by the restricted perturbations $\nu^{\alpha}=\bigl(\nu^{\alpha}_I\bigr)_{I\in{\mathcal I}_{{\mathcal K}^{\alpha}}}$. By the collar form of the coordinate changes in ${\mathcal K}^{[0,1]}$ this induces fully faithful functors $j^{\alpha}$ from ${\bf Z}_{\nu^{\alpha}}$ to the full subcategories of ${\bf Z}_\nu$ with objects ${\partial}artial^{\alpha} \bigl( (s+\nu)^{-1}(0) \bigr)$. 
Moreover, as in Lemma~\ref{le:stransv}, the morphisms are given by restrictions of the embeddings ${\partial}hi_{IJ}$, which are in fact local diffeomorphisms, and hence can be inverted to give ${\bf Z}_\nu$ the structure of a groupoid. Again using the collar form of the coordinate changes, there are no morphisms between ${\partial}artial^{\alpha} \bigl( (s+\nu)^{-1}(0) \bigr)$ and its complement in $(s+\nu)^{-1}(0)$, so the realization $|{\bf Z}_{\nu}|$ inherits the structure of a compact manifold with boundary ${\partial}artial |{\bf Z}_{\nu}| = \bigcup_{{\alpha}=0,1} {\partial}artial^{\alpha} |{\bf Z}_{\nu}|$ with two disjoint boundary components $$ {\partial}artial^{\alpha} |{\bf Z}_{\nu}| \,:=\; {\partial}artial^{\alpha} |{\mathcal K}^{[0,1]}| \cap |{\bf Z}_{\nu}| \;=\; \Bigl| {\textstyle\bigcup}_{I\in{\mathcal I}_{{\mathcal K}^{\alpha}}} {\iota}^{\alpha}_I \bigl( (s_I|_{{\partial}^{\alpha} V_I}+\nu^{\alpha}_I)^{-1}(0) \tildemes\{{\alpha}\} \bigr) \Bigr| . $$ Since the fully faithful functors $j^{\alpha}$ are diffeomorphisms between the object spaces, they then descend to diffeomorphisms to the boundary components, $$ |j^{\alpha}| \,:\; |{\bf Z}_{\nu^{\alpha}}| \;=\; \Bigl| {\textstyle\bigcup}_{I\in{\mathcal I}_{{\mathcal K}^{\alpha}}} (s_I^{\alpha}+\nu^{\alpha}_I)^{-1}(0) \Bigr| \;\overlineerset{\cong}{\longrightarrow}\; {\partial}artial^{\alpha} |{\bf Z}_{\nu}| . $$ Thus $|{\bf Z}_{\nu}|$ is a (not yet oriented) cobordism between $|{\bf Z}_{\nu^0}|$ and $|{\bf Z}_{\nu^1}|$, as claimed. \end{proof} \subsection{Construction of perturbations} {\lambda}bel{ss:const} \hspace{1mm}\\ In this section, we let $({\mathcal K},d)$ be a metric tame Kuranishi atlas (or cobordism) and ${\mathcal V}$ a (cobordism) reduction, and construct precompact transverse (cobordism) perturbations of the canonical section $s_{\mathcal K}|_{\mathcal V}$. In fact, we will construct a transverse perturbation with perturbed zero set contained in ${\partial}i_{\mathcal K}({\mathcal C})$ for any given nested (cobordism) reduction ${\mathcal C} \sqsubset {\mathcal V}$. This will be accomplished by an intricate construction that depends on the choice of two suitable constants ${\delta},{\sigma}>0$ depending on ${\mathcal C}\sqsubset{\mathcal V}$. Consequently, the corresponding uniqueness statement requires not only the construction of transverse cobordism perturbations $\nu$ of $s_{\mathcal K}|_{\mathcal V}$ in a nested cobordism reduction ${\mathcal C} \sqsubset {\mathcal V}$, and with given restrictions $\nu|_{{\partial}^{\alpha}{\mathcal V}}$, but also an understanding of the dependence on the choice of constants ${\delta},{\sigma}>0$. We begin by describing the setup, which will be used to construct perturbations for both Kuranishi atlases and Kuranishi cobordisms. It is important to have this in place before describing the iterative constructions because, firstly, the iteration depends on the above choice of constants, and secondly, even the statements about uniqueness and existence of perturbations in cobordisms need to take this intricate setup into consideration. 
First, we construct compatible norms on the obstruction spaces by using the additivity isomorphisms given by Definition~\ref{def:Ku2}, {\beta}gin{equation} {\lambda}bel{eq:iI} {\textstyle {\partial}rod_{i\in I}} \;\widehat{\partial}hi_{iI}: \; {\textstyle {\partial}rod_{i\in I}} \; E_i \;\stackrel{\cong}\longrightarrow \; E_I \;=\;\oplus_{i\in I} \widehat{\partial}hi_{iI}(E_i) , \end{equation} which gives a unique decomposition of any map $f_I:{\rm dom}(f_I)\to E_I$ into components $\bigl(f^i_I : {\rm dom}(f_I) \to \widehat{\partial}hi_{iI}(E_i)\bigr)_{i\in I}$ determined by $f_I = \sum_{i\in I} f^i_I$. Now we choose norms $\|\cdot\|$ on the basic obstruction spaces $E_i$ for $i=1,\dots N$, and extend them to all obstruction spaces $E_I$ by $$ \| e \| \;=\; \Bigl\| {\textstyle \sum_{i\in I}} \widehat{\partial}hi_{iI} (e_i) \Bigr\| \,:=\; \max_{i\in I} \| e_i\| \qquad \forall e= {\textstyle \sum_{i\in I}} \widehat{\partial}hi_{iI} (e_i) \in E_I. $$ Here we chose the maximum norm on the Cartesian product since this guarantees estimates of the components $\|e_i\| \leq \|e\|$. This construction guarantees that each embedding $\widehat{\partial}hi_{IJ}:(E_I,\|\cdot\|) \to (E_J,\|\cdot\|)$ is an isometry by the cocycle condition $\widehat{\partial}hi_{IJ}\circ\widehat{\partial}hi_{iI}=\widehat{\partial}hi_{iJ}$. Moreover, we will throughout use the supremum norm for functions, that is for any $f_I:{\rm dom}(f_I)\to (E_I,\|\cdot\|)$ we denote $$ \bigl\| f_I \bigr\| \,:=\; \sup_{x\in {\rm dom}(f_I)} \bigl\| f_I(x) \bigr\| \;=\; \sup_{x\in {\rm dom}(f_I)} \max_{i\in I} \, \bigl\| f^i_I(x) \bigr\| \;=:\, \max_{i\in I}\, \bigl\| f^i_I \bigr\| . $$ Next, recall from Lemma~\ref{le:metric} (which holds in complete analogy for metric Kuranishi cobordisms) that the metric $d$ on $|{\mathcal K}|$ induces metrics $d_I$ on each domain $U_I$ such that the coordinate changes ${\partial}hi_{IJ}:(U_{IJ},d_I)\to (U_J,d_J)$ are isometries. Moreover, Lemma~\ref{le:delred}~(i) provides ${\delta}>0$ so that $B_{2{\delta}}(V_I)\sqsubset U_I$ for all $I\in{\mathcal I}_{\mathcal K}$, and $B_{2{\delta}}({\partial}i_{\mathcal K}(\overline{V_I}))\cap B_{2{\delta}}({\partial}i_{\mathcal K}(\overline{V_J})) = \emptyset$ unless $I\subset J$ or $J\subset I$, and hence the precompact neighbourhoods {\beta}gin{equation}{\lambda}bel{eq:VIk} V_I^k \,:=\; B^I_{2^{-k}{\delta}}(V_I) \;\sqsubset\; U_I \qquad \text{for} \; k \geq 0 \end{equation} form further (cobordism) reductions, all of which contain ${\mathcal V}$. Here we chose separation distance $2{\delta}$ so that compatibility of the metrics ensures the strengthened version of the separation condition (ii) in Definition~\ref{def:vicin} for $I\not\subset J$ and $J\not\subset I$, {\beta}gin{equation}{\lambda}bel{desep} B_{\delta}\bigl({\partial}i_{\mathcal K}(V^k_I)\bigr) \cap B_{\delta}\bigl({\partial}i_{\mathcal K}(V^k_J)\bigr) \subset B_{{\delta} + 2^{-k}{\delta}}\bigl({\partial}i_{\mathcal K}(V_I)\bigr) \cap B_{{\delta} + 2^{-k}{\delta}}\bigl({\partial}i_{\mathcal K}(V_J)\bigr) = \emptyset . 
\end{equation} In case $I\subsetneq J$, Lemma~\ref{le:Ku2} then gives the identities {\beta}gin{align}{\lambda}bel{eq:N}\notag V^k_I \cap {\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}(V^k_J))&\;=\;V^k_I \cap {\partial}hi_{IJ}^{-1}(V^k_J) ,\\ V^k_J \cap {\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}(V^k_I))&\;=\;V^k_J \cap {\partial}hi_{IJ}(V^k_I \cap U_{IJ}) \;=:\, N^k_{JI} \end{align} for the sets on which we will require compatibility of the perturbations $\nu_I$ and $\nu_J$. The analogous identities hold for any combinations of the nested precompact open sets $$ C_I \;\sqsubset\; V_I \;\sqsubset\; \ldots V^{k'}_I \;\sqsubset\; V^{k}_I \ldots \;\sqsubset\; V^0_I , $$ where $k'>k >0$ are any positive reals. For the sets $N^k_{JI}\subset U_J$ introduced in \eqref{eq:N} above, note that by the compatibility of metrics we have inclusions for any $H\subsetneq J$, $$ {\partial}i_{\mathcal K}(B^J_{\delta}(N^k_{JH})) \subset B_{\delta} \bigl({\partial}i_{\mathcal K}(N^k_{JH})\bigr) \subset B_{\delta} \bigl({\partial}i_{\mathcal K}({\partial}hi_{HJ}(V^k_H\cap U_{HJ}))\bigr) \subset B_{{\delta}}\bigl( {\partial}i_{\mathcal K}(V^k_H) \bigr) . $$ So \eqref{desep} together with the injectivity of ${\partial}i_{\mathcal K}|_{U_J}$ implies for any $H,I \subsetneq J$ {\beta}gin{equation}{\lambda}bel{Nsep} B^J_{\delta}(N^k_{JH}) \cap B^J_{\delta}(N^k_{JI}) \neq \emptyset \qquad \Longrightarrow \qquad H\subset I \;\text{or} \; I\subset H. \end{equation} Moreover, we have precompact inclusions for any $k'>k\geq 0$ {\beta}gin{equation} {\lambda}bel{preinc} N^{k'}_{JI} \;=\;V^{k'}_J \cap {\partial}hi_{IJ}(V^{k'}_I\cap U_{IJ}) \;\sqsubset\; V^k_J \cap {\partial}hi_{IJ}(V^k_I\cap U_{IJ}) \;=\;N^k_{JI} , \end{equation} since ${\partial}hi_{IJ}$ is an embedding to the relatively closed subset $s_J^{-1}(E_I)\subset U_J$ and thus $\overline{{\partial}hi_{IJ}(V^{k'}_I\cap U_{IJ})} = {\partial}hi_{IJ}\bigl(\,\overline{V^{k'}_I}\cap U_{IJ}\bigr) \subset {\partial}hi_{IJ}(V^k_I\cap U_{IJ})$. Next, we abbreviate $$ N^k_J \, := \;{\textstyle \bigcup_{J\supsetneq I}} N^k_{JI} \;\subset\; V^k_J , $$ and will call the union $N^{|J|}_J$ the {\it core} of $V^{|J|}_J$, since it is the part of this set on which we will prescribe $\nu_J$ in an iteration by a compatibility condition with the $\nu_I$ for $I\subsetneq J$. In this iteration we will be working with quarter integers between $0$ and $$ M \,:=\; M_{\mathcal K} \,:=\; \max_{I\in{\mathcal I}_{\mathcal K}} |I| , $$ and need to introduce another constant $\eta_0>0$ that controls the intersection with ${\rm im\,}{\partial}hi_{IJ}={\partial}hi_{IJ}(U_{IJ})$ for all $I\subsetneq J$ as in Figure~\ref{fig:4}, {\beta}gin{equation}{\lambda}bel{eq:useful} {\rm im\,} {\partial}hi_{IJ} \;\cap\; B^J_{2^{-k-\frac 12}\eta_0} \bigl( N_{JI}^{k+\frac 34} \bigr) \;\subset\; N^{k+\frac 12}_{JI} \qquad \forall \; k\in \{0,1,\ldots,M\}. \end{equation} Since ${\partial}hi_{IJ}$ is an isometric embedding, this inclusion holds whenever $2^{-k-\frac 12}\eta_0 + 2^{-k-\frac 34}{\delta} \leq 2^{-k-\frac 12}{\delta}$ for all $k$. To minimize the number of choices in the construction of perturbations, we may thus simply fix $\eta_0$ in terms of ${\delta}$ by {\beta}gin{equation}{\lambda}bel{eq:eta0} \eta_0 \,:=\; (1 - 2^{-\frac 14} ) {\delta}. 
\end{equation} Then we also have $2^{-k}\eta_0 + 2^{-k-\frac 12}{\delta} < 2^{-k}{\delta}$, which provides the inclusions {\beta}gin{equation}{\lambda}bel{eq:fantastic} B^I_{\eta_k}\Bigl(\;\overline{V_I^{k+\frac 12}}\;\Bigr) \;\subset\; V_I^k \qquad\text{for} \;\; k\geq 0, \; \eta_k:=2^{-k}\eta_0 . \end{equation} {\beta}gin{figure}[htbp] \centering \includegraphics[width=3in]{virtfig4.pdf} \caption{ This figure illustrates the nested sets $V^{k+1}_J \sqsubset V^k_J$ and $ N^{{\lambda}'}_{JI}\sqsubset N^{{\lambda}}_{JI}\subset {\rm im\,}({\partial}hi_{IJ})\cap V^k_J$ for $k+1>{\lambda}'=k+\frac 34 > {\lambda} =k+\frac 12>k$, the shaded neighbourhood $B^J_{\eta}(N^{{\lambda}'}_{JI})$ for $\eta = 2^{-{\lambda}}\eta_0$, and the inclusion given by \eqref{eq:useful}. } {\lambda}bel{fig:4} \end{figure} Continuing the preparations, let a nested (cobordism) reduction ${\mathcal C} \sqsubset {\mathcal V}$ be given. Then we denote the open subset contained in $U_J\cap {\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}({\mathcal C}))$ for $J\in{\mathcal I}_{\mathcal K}$ by {\beta}gin{equation} {\lambda}bel{ticj} \widetilde C_J \,:=\; {\textstyle \bigcup_{K\supset J}} \, {\partial}hi_{JK}^{-1}(C_K) \;\subset\; U_J . \end{equation} The assumption ${\iota}_{\mathcal K}(X)\subset{\partial}i_{\mathcal K}({\mathcal C})$ implies $s_J^{-1}(0)\subset {\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}({\mathcal C}))$, and so by \eqref{eq:N} the zero set $s_J^{-1}(0)$ is contained in $\widetilde C_J \;\cup\; {\textstyle \bigcup_{I\subsetneq J}} {\partial}hi_{IJ}(C_I\cap U_{IJ})$, the union of an open set $\widetilde C_J$ and a set in which the perturbed zero sets will be controlled by earlier iteration steps. {\beta}gin{defn} {\lambda}bel{def:admiss} Given a reduction ${\mathcal V}$ of a metric Kuranishi atlas (or cobordism) $({\mathcal K},d)$, we set ${\delta}_{\mathcal V}>0$ to be the maximal constant such that every $0<{\delta}<{\delta}_{\mathcal V}$ satisfies the reduction properties of Lemma~\ref{le:delred}, that is {\beta}gin{align} {\lambda}bel{eq:de1} B_{2{\delta}}(V_I)\sqsubset U_I\qquad &\forall I\in{\mathcal I}_{\mathcal K} , \\ {\lambda}bel{eq:dedisj} B_{2{\delta}}({\partial}i_{\mathcal K}(\overline{V_I}))\cap B_{2{\delta}}({\partial}i_{\mathcal K}(\overline{V_J})) \neq \emptyset &\qquad \Longrightarrow \qquad I\subset J \;\text{or} \; J\subset I . \end{align} Given a nested reduction ${\mathcal C}\sqsubset{\mathcal V}$ of a metric Kuranishi atlas $({\mathcal K},d)$ and $0<{\delta}<{\delta}_{\mathcal V}$, we set $\eta_{|J|-\frac 12} := 2^{-|J|+\frac 12} \eta_0 = 2^{-|J|+\frac 12} (1 - 2^{-\frac 14} ) {\delta}$ and {\beta}gin{equation*} {\sigma}({\delta},{\mathcal V},{\mathcal C}) \,:=\; \min_{J\in{\mathcal I}_{\mathcal K}} \inf \Bigl\{ \; \bigl\| s_J(x) \bigr\| \;\Big| \; x\in \overline{V^{|J|}_J} \;{\smallsetminus}\; \Bigl( \widetilde C_J \cup {\textstyle \bigcup_{I\subsetneq J}} B^J_{\eta_{|J|-\frac 12}}\bigl(N^{|J|-\frac14}_{JI}\bigr) \Bigr) \Bigr\} . \end{equation*} \end{defn} In this language, the setup developed in this section shows that for any metric Kuranishi atlas or cobordism $({\mathcal K},d)$ we have ${\delta}_{\mathcal V}>0$. We note some further properties of these constants.
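For later use we record the elementary arithmetic behind the choice \eqref{eq:eta0} of $\eta_0$; nothing here goes beyond the two estimates already used above. With $\eta_0=(1-2^{-\frac 14}){\delta}$ we have, for every $k\geq 0$,
$$ 2^{-k-\frac 12}\eta_0 \;+\; 2^{-k-\frac 34}{\delta} \;=\; 2^{-k-\frac 12}\bigl( \eta_0 + 2^{-\frac 14}{\delta} \bigr) \;=\; 2^{-k-\frac 12}{\delta} , $$
so the sufficient condition for \eqref{eq:useful} is satisfied (with equality), and
$$ 2^{-k}\eta_0 \;+\; 2^{-k-\frac 12}{\delta} \;=\; 2^{-k}\bigl( 1 - 2^{-\frac 14} + 2^{-\frac 12} \bigr){\delta} \;<\; 2^{-k}{\delta} $$
since $2^{-\frac 12}<2^{-\frac 14}$, which is the inequality that gives \eqref{eq:fantastic}.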
Note first that there is no general relation between ${\sigma}({\delta},{\mathcal V},{\mathcal C})$ and ${\sigma}({\delta}',{\mathcal V},{\mathcal C})$ for $0<{\delta}'<{\delta}<{\delta}_{\mathcal V}$ since both $V^{|J|}_J$ and $B^J_{\eta_{|J|-\frac 12}}\bigl(N^{|J|-\frac14}_{JI}\bigr)$ grow with growing ${\delta}$, and hence the domains of the infimum are not nested in either way. {\beta}gin{lemma}{\lambda}bel{le:admin} {\beta}gin{enumerate} \item Let ${\mathcal C}\sqsubset{\mathcal V}$ be a nested reduction of a metric Kuranishi atlas or cobordism, and let ${\delta}<{\delta}_{\mathcal V}$. Then we have ${\sigma}({\delta},{\mathcal V},{\mathcal C})>0$. \item For any reduction ${\mathcal V}$ of a metric Kuranishi atlas we have ${\delta}_{\mathcal V}={\delta}_{{\mathcal V}\tildemes[0,1]}$. \item Given a metric Kuranishi cobordism $({\mathcal K},d)$ we equip the Kuranishi atlases ${\partial}^{\alpha}{\mathcal K}$ for ${\alpha}=0,1$ with the restricted metrics $d\big|_{|{\partial}^{\alpha}{\mathcal K}|}$. Then for any cobordism reduction ${\mathcal V}$ we have ${\delta}_{{\partial}^{\alpha}{\mathcal V}}\geq {\delta}_{\mathcal V}$ for ${\alpha}=0,1$. \item In the setting of (iii), let ${\varepsilon}>0$ be the collar width of $({\mathcal K},d)$. Then the neighbourhood of radius $r<{\varepsilon}$ of any ${\varepsilon}$-collared set $W\subset U_I$ (i.e.\ with $W\cap {\iota}^{\alpha}_I({\partial}^{\alpha} U_I \tildemes A^{\alpha}_{\varepsilon})={\iota}^{\alpha}_I({\partial}^{\alpha} W \tildemes A^{\alpha}_{\varepsilon})$) is $({\varepsilon}-r)$-collared, {\beta}gin{equation} {\lambda}bel{eq:Wnbhd} B^I_r(W) \cap {\iota}^{\alpha}_I\bigl({\partial}^{\alpha} U_I \tildemes A^{\alpha}_{{\varepsilon}-r}\bigr) = {\iota}^{\alpha}_I\bigl( B^{I,{\alpha}}_r({\partial}^{\alpha} W) \tildemes A^{\alpha}_{{\varepsilon}-r} \bigr) , \end{equation} with ${\partial}^{\alpha} B^I_r(W) = B^{I,{\alpha}}_r({\partial}^{\alpha} W)$, where we denote by $B^{I,{\alpha}}_r$ the neighbourhoods in ${\partial}^{\alpha} U_I$ induced by pullback of the metric $d_I$ with ${\iota}^{\alpha}_I:{\partial}^{\alpha} U_I \tildemes\{{\alpha}\} \to U_I$. \item If in (iv) the collared sets $W\subset U_I$ are obtained as products with $[0,1]$ in a product Kuranishi cobordism ${\mathcal K}={\mathcal K}'\tildemes[0,1]$, then \eqref{eq:Wnbhd} holds for any $r>0$ with ${\varepsilon}-r$ replaced by $1$. \end{enumerate} \end{lemma} {\beta}gin{proof} To check statement (i) it suffices to fix $J\in{\mathcal I}_{\mathcal K}$ and consider the continuous function $\|s_J\|$ over the compact set $\overline{V^{|J|}_J} \;{\smallsetminus}\; \bigl( \widetilde C_J \cup {\textstyle \bigcup_{I\subsetneq J}} B^J_{\eta_{|J|-\frac 12}}\bigl(N^{|J|-\frac14}_{JI}\bigr) \bigr)$. We claim that its infimum is positive since the domain is disjoint from $s_J^{-1}(0)$. Indeed, the reduction property ${\iota}_{\mathcal K}(X)\subset{\partial}i_{\mathcal K}({\mathcal C})$ implies $s_J^{-1}(0)\subset \widetilde C_J \cup \bigcup_{I\subsetneq J} {\partial}hi_{IJ}(C_I\cap U_{IJ})$, the intersections $\overline{V^{|J|}_J}\cap {\partial}hi_{IJ}(C_I\cap U_{IJ})$ are contained in $N^{k}_{JI}$ for any $k<|J|$ since $V^{|J|}_J \sqsubset V^k_J$ and $C_I\sqsubset V_I \subset V^k_I$, and we have $N^{k}_{JI}\subset B^J_{\eta_{|J|-\frac 12}}\bigl(N^{|J|-\frac14}_{JI}\bigr)$ for $k\geq |J|-\frac 14$. Statement (ii) holds since all sets and metrics involved are of product form.
Statement (iii) follows by pullback with ${\iota}^{\alpha}_I|_{{\partial}^{\alpha} U_I \tildemes\{{\alpha}\}}$ since the $2^{-k}{\delta}$-neighbourhood $({\partial}^{\alpha} V_I)^k$ of the boundary ${\partial}^{\alpha} V_I$ within ${\partial}^{\alpha} U_I$ is always contained in the boundary ${\partial}^{\alpha} V^k_I$ of the $2^{-k}{\delta}$-neighbourhood of the domain $V_I$. To check (iv) note in particular the product forms $({\iota}^{\alpha}_I)^{-1}(V_I)={\partial}^{\alpha} V_I \tildemes A^{\alpha}_{\varepsilon}$ and $({\iota}^{\alpha}_I)^*d_I = d^{\alpha}_I + d_{\mathbb R}$ on the ${\varepsilon}$-collars, where $d^{\alpha}_I$ denotes the metric on ${\partial}^{\alpha} U_I$ induced from the restriction of the metric on $|{\mathcal K}|$ to $|{\partial}^{\alpha}{\mathcal K}|$. Then for any ${\partial}^{\alpha} W \subset {\partial}^{\alpha} U_I$ the product form of the metric implies product form of the $r$-neighbourhoods $$ B^I_r\bigl({\iota}^{\alpha}_I({\partial}^{\alpha} W\tildemes A^{\alpha}_{\varepsilon})\bigr)\cap {\rm im\,}{\iota}^{\alpha}_I \;=\; {\iota}^{\alpha}_I\bigl( B^{I,{\alpha}}_r({\partial}^{\alpha} W)\tildemes A^{\alpha}_{\varepsilon}\bigr) . $$ Moreover, for any $W' \subset U_I{\smallsetminus}{\rm im\,}{\iota}^{\alpha}_I$ and $r < {\varepsilon}$, the collaring condition \eqref{eq:epscoll} on the metric implies that {\beta}gin{equation}{\lambda}bel{eq:nob} B^I_r(W') \cap {\iota}^{\alpha}_I({\partial}^{\alpha} U_I\tildemes A^{\alpha}_{{\varepsilon}-r}) = \emptyset . \end{equation} The identity \eqref{eq:Wnbhd} now follows from applying the above identities with $W'=W {\smallsetminus} {\rm im\,}{\iota}^{\alpha}_I$. In the product case (v), the complement of the (closed) collars is empty, so there is no need for the second identity and hence for the restriction $r<{\varepsilon}$. \end{proof} In the case of a metric tame Kuranishi atlas we will construct transverse perturbations $\nu = \bigl(\nu_I : V_I \to E_I \bigr)_{I\in{\mathcal I}_{\mathcal K}}$ by an iteration which constructs and controls each $\nu_I$ over the larger set ${V_I^{|I|}}$. In order to prove uniqueness of the VMC, we will moreover need to interpolate between any two such perturbations by a similar iteration. We will use the following definition to keep track of the refined properties of the sections in this iteration. {\beta}gin{defn} {\lambda}bel{a-e} Given a nested reduction ${\mathcal C}\sqsubset{\mathcal V}$ of a metric tame Kuranishi atlas $({\mathcal K},d)$ and constants $0<{\delta}<{\delta}_{\mathcal V}$ and $0<{\sigma}\le{\sigma}({\delta},{\mathcal V},{\mathcal C})$, we say that a perturbation $\nu$ of $s_{\mathcal K}|_{\mathcal V}$ is {\bf $({\mathcal V},{\mathcal C},{\delta},{\sigma})$-adapted} if the sections $\nu_I:V_I\to E_I$ extend to sections over ${V^{|I|}_I}$ (also denoted $\nu_I$) so that the following conditions hold for every $k=1,\ldots, M$ with $$ M_{\mathcal K}:= \max_{I\in{\mathcal I}_{\mathcal K}} |I|, \qquad \eta_k:=2^{-k}\eta_0=2^{-k} (1-2^{-\frac 14}){\delta} . $$ {\beta}gin{itemize} \item[a)] The perturbations are compatible in the sense that the commuting diagrams in Definition~\ref{def:sect} hold on $\bigcup_{|I|\leq k} {V^k_I}$, that is $$ \qquad \nu_I \circ {\partial}hi_{HI} |_{{V^k_H}\cap {\partial}hi_{HI}^{-1}({V^k_I})} \;=\; \widehat{\partial}hi_{HI} \circ \nu_H |_{{V^k_H}\cap {\partial}hi_{HI}^{-1}({V^k_I})} \qquad \text{for all} \; H\subsetneq I , |I|\leq k . 
$$ \item[b)] The perturbed sections are transverse, that is $(s_I|_{{V^k_I}} + \nu_I) {\partial}itchfork 0$ for each $|I|\leq k$. \item[c)] The perturbations are {\it strongly admissible} with radius $\eta_k$, that is for all $H\subsetneq I$ and $|I|\le k$ we have $$ \qquad \nu_I\bigl( B^I_{\eta_k}(N^{k}_{IH})\bigr) \;\subset\; \widehat{\partial}hi_{HI}(E_H) \qquad \text{with}\;\; N^k_{IH} = V^k_I \cap {\partial}hi_{HI}(V^k_H\cap U_{HI}) . $$ In particular, the perturbations are admissible along the core $N^k_I$, that is we have ${\rm im\,}{\rm d}_x\nu_I \subset {\rm im\,}\widehat{\partial}hi_{HI}$ at all $x\in N^k_{IH}$. \item[d)] The perturbed zero sets are contained in ${\partial}i_{\mathcal K}^{-1}\bigl({\partial}i_{\mathcal K}({\mathcal C})\bigr)$; more precisely $$ (s_I |_{{V^k_I}}+ \nu_I)^{-1}(0) \;\subset\; {V^k_I} \cap {\partial}i_{\mathcal K}^{-1}\bigl({\partial}i_{\mathcal K}({\mathcal C})\bigr) \qquad \forall |I|\leq k, $$ or equivalently $s_I + \nu_I \neq 0$ on ${V^k_I} {\smallsetminus} {\partial}i_{\mathcal K}^{-1}\bigl({\partial}i_{\mathcal K}({\mathcal C})\bigr)$. \item[e)] The perturbations are small, that is $\sup_{x\in {V^k_I}} \| \nu_I (x) \| < {\sigma}$ for $|I|\leq k$. \end{itemize} Given a metric Kuranishi atlas $({\mathcal K},d)$, we say that a perturbation $\nu$ is {\bf adapted} if it is a $({\mathcal V},{\mathcal C},{\delta},{\sigma})$-adapted perturbation $\nu$ of $s_{\mathcal K}|_{\mathcal V}$ for some choice of nested reduction ${\mathcal C}\sqsubset{\mathcal V}$ and constants $0<{\delta}<{\delta}_{\mathcal V}$ and $0<{\sigma}\le{\sigma}({\delta},{\mathcal V},{\mathcal C})$. \end{defn} Next, we note some simple properties of these notions; in particular the fact that adapted perturbations are automatically admissible, precompact, and transverse. {\beta}gin{lemma}{\lambda}bel{le:admin2} {\beta}gin{enumerate} \item Any $({\mathcal V},{\mathcal C},{\delta},{\sigma})$-adapted perturbation $\nu$ of $s_{\mathcal K}|_{\mathcal V}$ is an admissible, precompact, transverse perturbation with ${\partial}i_{\mathcal K}( (s+\nu)^{-1}(0))\subset{\partial}i_{\mathcal K}({\mathcal C})$. \item If $\nu$ is a $({\mathcal V},{\mathcal C},{\delta},{\sigma})$-adapted perturbation, then it is also $({\mathcal V},{\mathcal C},{\delta}',{\sigma}')$-adapted for any ${\delta}'\le{\delta}$ and ${\sigma}'\in \bigl[{\sigma}, {\sigma}({\delta}',{\mathcal V},{\mathcal C})\bigr)$. \end{enumerate} \end{lemma} {\beta}gin{proof} To check statement (i), first note that $\nu$ is an admissible reduced section in the sense of Definition~\ref{def:sect} by a) and c), and is transverse by b). Restriction of d) implies that it satisfies the zero set condition $s_I+\nu_I \neq 0$ on $V_I {\smallsetminus} {\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}({\mathcal C}))$, and hence ${\partial}i_{\mathcal K}( (s+\nu)^{-1}(0))\subset{\partial}i_{\mathcal K}({\mathcal C})$, which in particular implies precompactness in the sense of Definition~\ref{def:precomp}. Statement (ii) holds because the domains in a)-e) for ${\delta}'$ are included in those for ${\delta}$, and ${\sigma}$ only appears in the inequality of e). \end{proof} Using these notions, we now prove a refined version of the existence of admissible, precompact, transverse perturbations in every metric tame Kuranishi atlas. {\beta}gin{prop}{\lambda}bel{prop:ext} Let $({\mathcal K},d)$ be a metric tame Kuranishi atlas with nested reduction ${\mathcal C} \sqsubset {\mathcal V}$.
Then for any $0<{\delta}<{\delta}_{\mathcal V}$ and $0<{\sigma}\le{\sigma}({\delta},{\mathcal V},{\mathcal C})$ there exists a $({\mathcal V},{\mathcal C},{\delta},{\sigma})$-adapted perturbation $\nu$ of $s_{\mathcal K}|_{{\mathcal V}}$. In particular, $\nu$ is admissible, precompact, and transverse, and its perturbed zero set $|{\bf Z}_\nu|=|(s+\nu)^{-1}(0)|$ is compact with ${\partial}i_{\mathcal K}\bigl((s+\nu)^{-1}(0)\bigr)$ contained in ${\partial}i_{\mathcal K}({\mathcal C})$. \end{prop} {\beta}gin{proof} We will construct $\nu_I: V^{|I|}_I\to E_I$ by an iteration over $k=0,\ldots,M= \max_{I\in{\mathcal I}_{\mathcal K}} |I|$, where in step $k$ we will define $\nu_I : V^k_I \to E_I$ for all $|I| = k$ that, together with the $\nu_I|_{V^k_I}$ for $|I|<k$ obtained by restriction from earlier steps, satisfy conditions a)-e) of Definition~\ref{a-e}. Restriction to $V_I\subset V^{|I|}_I$ then yields a $({\mathcal V},{\mathcal C},{\delta},{\sigma})$-adapted perturbation $\nu$ of $s_{\mathcal K}|_{\mathcal V}$, which by Lemma~\ref{le:admin2}~(i) is automatically an admissible, precompact, transverse perturbation with ${\partial}i_{\mathcal K}( (s+\nu)^{-1}(0))\subset{\partial}i_{\mathcal K}({\mathcal C})$. Compactness of $|(s+\nu)^{-1}(0)|$ then follows from Proposition~\ref{prop:zeroS0}. So it remains to perform the iteration. For $k=0$ the conditions a)-e) are trivially satisfied since there are no index sets $I\in{\mathcal I}_{\mathcal K}$ with $|I|\leq 0$. Now suppose that $\bigl(\nu_I : V^k_I\to E_I\bigr)_{I\in{\mathcal I}_{\mathcal K}, |I|\leq k}$ are constructed such that a)-e) hold. In the next step we can then construct $\nu_J$ independently for each $J\in{\mathcal I}_{\mathcal K}$ with $|J|=k+1$, since for any two such $J,J'$ we have ${\partial}i_{\mathcal K}(V_J^{k+1}) \cap {\partial}i_{\mathcal K}(V_{J'}^{k+1})=\emptyset$ unless $J=J'$ by \eqref{desep}, and so the constructions for $J\neq J'$ are not related by the commuting diagrams in condition a). { }{\mathbb N}I {\bf Construction for fixed $\mathbf {|J|=k+1}$:} We begin by noting that a) requires for all $I\subsetneq J$ {\beta}gin{equation} {\lambda}bel{some nu} \nu_J|_{N^{k+1}_{JI}} \;=\; \nu_J|_{V^{k+1}_J \cap {\partial}hi_{IJ}(V^{k+1}_I\cap U_{IJ})} \;=\; \widehat{\partial}hi_{IJ}\circ \nu_I\circ{\partial}hi_{IJ}^{-1} . \end{equation} To see that these conditions are compatible, we note that for $H\neq I\subsetneq J$ with ${\partial}hi_{HJ}(V^k_H\cap U_{IJ}) \cap {\partial}hi_{IJ}(V^k_I\cap U_{IJ})\neq \emptyset $ property \eqref{Nsep} implies $H\subsetneq I$ or $I\subsetneq H$. Assuming w.l.o.g.\ the first, we obtain compatibility from the strong cocycle condition \eqref{strong cocycle} and property a) for $H\subsetneq I$, {\beta}gin{align*} &\widehat{\partial}hi_{HJ}\circ \nu_H\circ{\partial}hi_{HJ}^{-1} |_{V^k_J \cap {\partial}hi_{IJ}(V^k_I\cap U_{IJ})\cap {\partial}hi_{HJ}(V^k_H\cap U_{HJ})} \\ &= \widehat{\partial}hi_{IJ}\circ \bigl(\widehat{\partial}hi_{HI}\circ \nu_H \bigr) |_{{\partial}hi_{HJ}^{-1}(V^k_J) \cap {\partial}hi_{HI}^{-1}(V^k_I) \cap V^k_H} \circ {\partial}hi_{HI}^{-1} \circ {\partial}hi_{IJ}^{-1} \\ &= \widehat{\partial}hi_{IJ}\circ \bigl(\nu_I \circ {\partial}hi_{HI}\bigr) \circ {\partial}hi_{HI}^{-1} \circ {\partial}hi_{IJ}^{-1} \;=\; \widehat{\partial}hi_{IJ}\circ \nu_I\circ{\partial}hi_{IJ}^{-1} . 
\end{align*} Here we checked compatibility on the domains $N^k_{JI}$, thus defining a map {\beta}gin{equation}{\lambda}bel{eq:nuJ'} \mu_J \,:\; N^k_J = {\textstyle \bigcup_{I\subsetneq J}} N^k_{JI} \;\longrightarrow\; E_J , \qquad \mu_J|_{N^k_{JI}} := \widehat{\partial}hi_{IJ}\circ \nu_I\circ{\partial}hi_{IJ}^{-1} . \end{equation} Note moreover that by the compatible construction of norms on the obstruction spaces we have $$ \|\mu_J\| \,:=\; \sup_{y\in N^k_J} \|\mu_J(y)\| \;\leq\; \sup_{I\subsetneq J} \sup_{x\in V^k_I} \|\nu_I(x)\| \;<\; {\sigma}. $$ The construction of $\nu_J$ on $V^{k+1}_J$ then has three more steps. {\beta}gin{itemize} \item {\bf Construction of extension:} We construct an extension of the restriction of $\mu_J$ from \eqref{eq:nuJ'} to the enlarged core $N_J^{k+\frac 12}$. More precisely, we construct a smooth map $\widetilde\nu_J : V^k_J \to E_J$ that satisfies {\beta}gin{equation}{\lambda}bel{tinu} \widetilde\nu_J|_{N_J^{k+\frac 12}} \;=\; \mu_J|_{N_J^{k+\frac 12}} , \qquad\quad \|\widetilde\nu_J \| \;\leq \; \|\mu_J\| \;<\; {\sigma} , \end{equation} and the strong admissibility condition on a larger domain than required in c), {\beta}gin{equation}{\lambda}bel{value} \widetilde\nu_J \bigl( B^J_{\eta_{k+\frac 12}}\bigl(N^{k+\frac 12}_{JI}\bigr) \bigr) \;\subset\; \widehat{\partial}hi_{IJ}(E_I) \qquad \forall\; I\subsetneq J . \end{equation} In case $|J|=1$ there are no proper subsets $I\subsetneq J$, and we achieve the analogous conditions by simply setting $\widetilde\nu_J:=0$. \item {\bf Zero set condition:} We show that \eqref{value} and the control over $\|\widetilde\nu_J\|$ imply the strengthened control of d) over the zero set of $s_J + \widetilde\nu_J$, in particular $$ \bigl(s_J|_{{V^{k+1}_J}} + \widetilde\nu_J\bigr)^{-1}(0) \;{\smallsetminus}\; B^J_{\eta_{k+\frac 12}}\bigl(N^{k+\frac34}_J\bigr) \;\subset\; \widetilde C_J . $$ In case $|J|=1$ the core is empty, so this applies with the whole zero set contained in the open set $\widetilde C_J\subset U_J$. \item {\bf Transversality:} We make a final perturbation $\nu_{\partial}itchfork$ to obtain transversality for $s_J + \widetilde\nu_J + \nu_{\partial}itchfork$, while preserving conditions a),c),d), and then set $\nu_J: = \widetilde\nu_J + \nu_{\partial}itchfork$. Moreover, taking $\|\nu_{\partial}itchfork\|< {\sigma} - \|\widetilde\nu_J\|$ ensures e). \end{itemize} { }{\mathbb N}I {\bf Construction of extensions:} To construct $\widetilde\nu_J$ in case $|J|\geq 2$ it suffices, in the notation of \eqref{eq:iI}, to extend each component $\mu^j_J$ for fixed $j\in J$. For that purpose we iteratively construct smooth maps $\widetilde\mu_\ell^j: W_\ell \to \widehat{\partial}hi_{jJ}(E_j)$ on the open sets {\beta}gin{equation}{\lambda}bel{eq:W} W_\ell \,:=\; {\textstyle \bigcup _{I\subsetneq J,|I|\le \ell}}\, B^J_{r_\ell}(N^{k+\frac 12}_{JI}) \;=\; B^J_{r_\ell}\bigl( {\textstyle \bigcup _{I\subsetneq J,|I|\le \ell}}\,N^{k+\frac 12}_{JI} \bigr) \;\subset\; U_J \end{equation} with the radii $r_\ell:= \eta_k - \frac {\ell+1} {k+1} ( \eta_k-\eta_{k+\frac 12})$, that satisfy the extended compatibility, admissibility, and smallness conditions {\beta}gin{enumerate} \item[(E:i)] $\widetilde\mu^j_\ell |_{N^{k+\frac 12}_{JI}} = \mu_J^j|_{N^{k+\frac 12}_{JI}}$ for all $I\subsetneq J$ with $|I|\leq \ell$ and $j\in I$; \item[(E:ii)] $\widetilde\mu^j_\ell |_{B^J_{r_\ell}(N^{k+\frac 12}_{JI})} = 0$ for all $I\subsetneq J$ with $|I|\leq \ell$ and $j\notin I$; \item[(E:iii)] $\bigl\|\widetilde\mu^j_\ell \bigr\| \leq \|\mu^j_J\|$.
\end{enumerate} Note here that the radii form a nested sequence $\eta_k=r_{-1} > r_0>r_1 >\ldots > r_k = \eta_{k+\frac 12}$ and that when $\ell=k$ the function $\widetilde\mu^j_k$ will satisfy (E:i),(E:ii) for all $I\subsetneq J$, and is defined on $W_k=B^J_{\eta_{k+\frac 12}}(N^{k+\frac 12}_J)\sqsupset N^{k+\frac 12}_J$. So, after this iteration, we can define $\widetilde\nu_J:= {\beta} {\textstyle\sum_{j\in J}} \, \widetilde\mu^j_k$, where ${\beta}:U_J \to [0,1]$ is a smooth cutoff function with ${\beta}|_{N^{k+\frac 12}_J}\equiv 1$ and ${\rm supp\,}{\beta}\subset B^J_{\eta_{k+\frac 12}}(N^{k+\frac 12}_J)$, so that $\widetilde\nu_J$ extends trivially to $U_J{\smallsetminus} W_k$. This has the required bound by (E:iii) and satisfies \eqref{value} since $\widetilde\nu_J^j |_{B^J_{\eta_{k+\frac 12}}(N^{k+\frac 12}_{JI})}\equiv 0$ for all $j\notin I$ by (E:ii). Finally, it has the required values on $N^{k+\frac 12}_J = \bigcup_{I\subsetneq J}N^{k+\frac 12}_{JI}$ since for each $I\subsetneq J$ the conditions (E:i), (E:ii) on $N^{k+\frac 12}_{JI}$ together with the fact $\mu_J(N_{JI}^{k+\frac 12}) \subset \widehat{\partial}hi_{IJ}(E_I)$ guarantee $$ \widetilde \nu_J|_{N_{JI}^{k+\frac12}} \;=\; {\textstyle\sum_{j\in J}} \, \widetilde\mu_k^j|_{N_{JI}^{k+\frac12}} \;=\; {\textstyle\sum_{j\in I}} \, \mu_J^j|_{N_{JI}^{k+\frac12}} \;=\; \mu_J|_{N_{JI}^{k+\frac12}} . $$ So it remains to perform the iteration over $\ell$, in which we now drop $j$ from the notation. For $\ell=0$ the conditions are vacuous since $W_0=\emptyset$. Now suppose that the construction is given on $W_\ell$. Then we cover $W_{\ell+1}$ by the open sets $$ B_L': = W_{\ell+1}\cap B^J_{r_{\ell-1}}(N^{k+\frac 12}_{JL}) \qquad \text{for}\; L\subsetneq J, \; |L|=\ell+1, $$ whose closures are pairwise disjoint by \eqref{Nsep} with $r_{\ell-1}<{\delta}$, and an open set $C_{\ell+1}\subset U_J$ covering the complement, $$ C_{\ell+1} \,:=\; W_{\ell+1} \;{\smallsetminus}\; {\textstyle \bigcup _{|L| = \ell+1}\, \overline{B^J_{r_{\ell}}(N^{k+\frac 12}_{JL})}} \;\sqsubset\; W_\ell \;{\smallsetminus}\; {\textstyle \bigcup _{|L| = \ell+1}\, \overline{B^J_{r_{\ell+1}}(N^{k+\frac 12}_{JL})}} \;=:\, C_\ell , $$ which has a useful precompact inclusion into $C_\ell$, as defined above, by $r_{\ell+1}<r_\ell$. This decomposition is chosen so that each $B^J_{r_{\ell+1}}(N^{k+\frac 12}_{JL})$ for $|L|=\ell+1$ (on which the conditions (E:i),(E:ii) for $I=L$ are nontrivial) has disjoint closure from $\overline{C_{\ell+1}}$ (a compact subset of the domain of $\widetilde\mu_\ell$). Now pick a precompactly nested open set $C_{\ell+1} \sqsubset C' \sqsubset C_\ell$, in particular with $\overline{C'}\cap \overline{B^J_{r_{\ell+1}}(N^{k+\frac 12}_{JL})} = \emptyset$ for all $|L|=\ell +1$. Then we will obtain a smooth map $\widetilde\mu_{\ell+1}: W_{\ell+1} \to \widehat{\partial}hi_{jJ}(E_j)$ by setting $\widetilde\mu_{\ell+1}|_{C_{\ell+1}} := \widetilde \mu_\ell|_{C_{\ell+1}}$ and separately constructing smooth maps $\widetilde\mu_{\ell+1}: B'_L \to \widehat{\partial}hi_{jJ}(E_j)$ for each $|L|=\ell+1$ such that $\widetilde\mu_{\ell+1}=\widetilde \mu_\ell$ on $B'_L \cap C'$. Indeed, this ensures equality of all derivatives on the intersection of the closures $\overline{B_L'} \cap \overline{C_{\ell+1}}$, since this set is contained in $\overline{B_L'} \cap C'$, which is a subset of $\overline{B_L' \cap C'}$ because $C'$ is open, and by construction we will have $\widetilde\mu_{\ell+1}=\widetilde \mu_\ell$ with all derivatives on $\overline{B_L' \cap C'}$.
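To fix ideas, consider the case $|J|=3$, i.e.\ $k=2$, the smallest case in which two stages of this iteration are nontrivial. The interpolating radii are
$$ r_{-1}=\eta_2 \;>\; r_0 = \eta_2 - \tfrac 13 \bigl(\eta_2-\eta_{2+\frac 12}\bigr) \;>\; r_1 = \eta_2 - \tfrac 23 \bigl(\eta_2-\eta_{2+\frac 12}\bigr) \;>\; r_2 = \eta_{2+\frac 12} , $$
so the extension is first constructed on the $r_1$-neighbourhoods of the cores $N^{2+\frac 12}_{JI}$ with $|I|=1$, and then on $W_2=B^J_{\eta_{2+\frac 12}}(N^{2+\frac 12}_J)$, where in the second stage the previously constructed map only needs to be matched on the sets $B'_L\cap C'$ with $C'\sqsubset W_1$.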
So it remains to construct the extension $\widetilde\mu_{\ell+1}|_{B'_L}$ for a fixed $L\subsetneq J$. For that purpose note that the subset on which this is prescribed as $\widetilde\mu_\ell$, can be simplified by the separation property \eqref{Nsep}, {\beta}gin{equation} {\lambda}bel{simply} B'_L\cap C' \;\subset\; \Bigl( B^J_{r_{\ell-1}}(N^{k+\frac 12}_{JL}) {\smallsetminus} \overline{B^J_{r_{\ell+1}}(N^{k+\frac 12}_{JL})} \; \Bigr) \;\cap\; {\textstyle \bigcup _{I\subsetneq L} }B^J_{r_\ell}(N^{k+\frac 12}_{JI}) \;\subset\; W_\ell. \end{equation} To ensure (E:i) and (E:ii) for $|I|\leq \ell+1$ first note that $\widetilde\mu_{\ell+1}|_{C_{\ell+1}}$ inherits these properties from $\widetilde\mu_\ell$ because $C_{\ell+1}$ is disjoint from $B^J_{r_{\ell+1}}(N^{k+\frac 12}_{JI})$ for all $|I|=\ell+1$. It remains to fix $L\subset J$ with $|L|=\ell+1$ and construct the map $\widetilde\mu_{\ell+1}: B'_L \to \widehat{\partial}hi_{jJ}(E_j)$ as extension of $\widetilde\mu_\ell|_{B'_L\cap C'}$ so that it satisfies properties (E:i)--(E:iii) for all $|I|\le\ell+1$. In case $j\notin L$ we have $\widetilde\mu_\ell|_{B'_L\cap C'}=0$ by iteration hypothesis (E:ii) for each $I\subsetneq J$. So we obtain a smooth extension by $\widetilde\mu_{\ell+1}:=0$, which satisfies (E:ii) and (E:iii), whereas (E:i) is not relevant. In case $j\in L$ the conditions (E:i),(E:ii) only require consideration of $I\subsetneq L$ since otherwise $B'_L \cap B^J_{r_\ell}(N^{k+\frac 12}_{JI})=\emptyset$ by \eqref{Nsep}. So we need to construct a bounded smooth map $\widetilde\mu_{\ell+1} : B'_L=W_{\ell+1} \cap B^J_{r_{\ell-1}}(N^{k+\frac 12}_{JL}) \to \widehat{\partial}hi_{jJ}(E_j)$ that satisfies {\beta}gin{itemize} \item[(i)] $\widetilde\mu_{\ell+1}|_{N^{k+\frac 12}_{JL}} = \mu_J^j|_{N^{k+\frac 12}_{JL}}$; \item[(i$'$)] $\widetilde\mu_{\ell+1}|_{N^{k+\frac 12}_{JI}} = \mu_J^j|_{N^{k+\frac 12}_{JI}}$ for all $I\subsetneq L$ with $j\in I$; \item[(ii)] $\widetilde\mu_{\ell+1}|_{B^J_{r_{\ell+1}}(N^{k+\frac 12}_{JI})} = 0$ for all $I\subsetneq L$ with $j\notin I$; \item[(iii)] $\bigl\| \widetilde\mu_{\ell+1}\bigr\| \leq \|\mu_J^j\|$; \item[(iv)] $\widetilde\mu_{\ell+1}|_{B'_L\cap C'} =\widetilde\mu_{\ell}|_{B'_L\cap C'}$. \end{itemize} Because every open cover of $B'_L$ has a locally finite subcovering, such extensions can be patched together by partitions of unity. Hence it suffices, for the given $j\in L\subsetneq J$, to construct smooth maps $\widetilde\mu_z: B^J_{r_z}(z)\to \widehat\Phi_{jJ}(E_j)$ on some balls of positive radius $r_z>0$ around each fixed $z\in B'_L$, that satisfy the above requirements. {\mathbb N}I $\bullet$ For $z\in W_\ell {\smallsetminus} \overline{N^{k+\frac 12}_{JL}}$ we find $r_z>0$ such that $B^J_{r_z}(z)\subset W_\ell {\smallsetminus} \overline{N^{k+\frac 12}_{JL}}$ lies in the domain of $\widetilde\mu_\ell$ and the complement of $N^{k+\frac 12}_{JL}$, so that $\widetilde\mu_z:=\widetilde\mu_\ell|_{B^J_{r_z}(z)}$ is well defined and satisfies all conditions with $\|\widetilde\mu_z\|\leq \|\widetilde\mu_\ell\|$. {\mathbb N}I $\bullet$ For $z\in B'_L{\smallsetminus} \bigl( W_\ell \cup \overline{N^{k+\frac 12}_{JL}}\bigr)$, we claim that there is $r_z>0$ such that $B^J_{r_z}(z)$ is disjoint from the closed subsets $\bigcup_{I\subset L} \overline{N^{k+\frac 12}_{JI}}$ and $\overline{C'}\subset C_\ell\subset W_\ell$. This holds because $\bigcup_{I\subsetneq L} \overline{N^{k+\frac 12}_{JI}}\subset W_\ell$ by \eqref{eq:W}. 
Then $\widetilde\mu_z:=0$ satisfies all conditions since its domain is in the complement of the domains on which (i), (i$'$), and (iv) are relevant. {\mathbb N}I $\bullet$ Finally, for $z\in \overline{N^{k+\frac 12}_{JL}}$ recall that $\overline{N^{k+\frac 12}_{JL}} \sqsubset N^k_{JL}$ is a compact subset of the smooth submanifold $N^k_{JL}=V^k_J\cap {\partial}hi_{LJ}(V^k_L\cap U_{LJ}) \subset U_J$. So we can choose $r_z>0$ such that $B^J_{r_z}(z)$ lies in a submanifold chart for $N^k_{JL}$. Then we define $\widetilde\mu_z$ by extending $\mu_J^j|_{B^J_{r_z}(z)\cap N^k_{JL}}$ to be constant in the normal directions. This guarantees (i) and $\|\widetilde\mu_z\|\leq \|\mu_J^j\|$, and we will choose $r_z$ sufficiently small to satisfy the further conditions. First, $N^{k}_{JL}$ is disjoint from $C_{\ell}\sqsupset C'$, so we can ensure that $B^J_{r_z}(z)$ lies in the complement of $C'$, and hence condition (iv) does not apply. To address (i$'$) and (ii) recall that for every $I\subsetneq L$ the strong cocycle condition of Lemma~\ref{le:tame0} implies that $N^{k+\frac 12}_{JI}\subset{\rm im\,}{\partial}hi_{IJ} ={\partial}hi_{LJ}(U_{LJ}\cap{\rm im\,}{\partial}hi_{IL})$ is a submanifold of ${\rm im\,}{\partial}hi_{LJ}$, and by assumption $z$ lies in the open subset $N^k_{JL}\subset{\rm im\,}{\partial}hi_{LJ}$. In case $j\in I$ and $z\in N^{k+\frac 12}_{JI}\cap \overline{N^{k+\frac 12}_{JL}}$, we can thus choose $r_z$ sufficiently small to ensure that $B^J_{r_z}(z)\cap N^{k+\frac 12}_{JI}$ is contained in the open neighbourhood $N^k_{JL}\subset{\rm im\,}{\partial}hi_{LJ}$ of~$z$. Then $\widetilde\mu_z$ satisfies (i$'$) by $\widetilde\mu_z=\mu^j_J$ on $B^J_{r_z}(z)\cap N^{k+\frac 12}_{JI}$. In case $j\notin I$ condition (ii) requires $\widetilde\mu_z$ to vanish on $B^J_{r_z}(z)\cap B^J_{r_{\ell+1}}(N^{k+\frac 12}_{JI})$. Here we have $r_{\ell+1}\leq r_1 <\eta_k$, so if $z\notin B^J_{\eta_k}(N^{k+\frac 12}_{JI})$, then we can make this intersection empty by choice of $r_z$. It remains to consider the case $z\in B^J_{\eta_k}(N^{k+\frac 12}_{JI})\cap \overline{N^{k+\frac 12}_{JL}}$, where $I\subset L\subset J$ as above. We pick $x_J\in N^{k+\frac 12}_{JI}$ with $d_J(z,x_J)< \eta_k$, then we have $x_J={\partial}hi_{IJ}(x_I)$ for some $x_I\in V^{k+\frac 12}_I\cap U_{IJ}$. By tameness we also have $x_I\in U_{IL}$, and compatibility of the metrics then implies $d(z_L,x_L)=d(z,x_J)< \eta_k$ for $x_L:={\partial}hi_{IL}(x_I)$ and $z_L:={\partial}hi_{LJ}^{-1}(z)\in \overline{V^{k+\frac 12}_L}$. This shows that $x_L$ lies in both ${\partial}hi_{IL}(V^{k+\frac 12}_I\cap U_{IJ})$ and $B_{\eta_k}( \overline{V^{k+\frac 12}_L} )$, where the latter is a subset of $V_L^k$ by \eqref{eq:fantastic}, and hence we deduce $x_L\in N^k_{LI}$. From that we obtain $\nu^j_L|_{B^L_{\eta_k}(x_L)} = 0$ by the induction hypothesis c), i.e.\ $\nu_L\bigl( B^L_{\eta_k}(N^{k}_{LI})\bigr) \;\subset\; \widehat{\partial}hi_{IL}(E_I)$. This implies that the function $\mu^j_J$ of \eqref{eq:nuJ'} vanishes on $$ {\partial}hi_{LJ}(B^L_{\eta_k}(x_L)\cap U_{LJ})= B^J_{\eta_k}(x_J) \cap {\partial}hi_{LJ}(U_{LJ}) . $$ Since $d_J(z,x_J) < \eta_k$ this set contains $z$, and thus $B^J_{r_z}(z)\cap{\partial}hi_{LJ} (U_{LJ})$ for $r_z>0$ sufficiently small. With that we have $\mu_J^j|_{B^J_{r_z}(z)\cap N^k_{JL}}=0$ and hence $\widetilde\mu_z = 0$, so that (ii) is satisfied.
This completes the construction of $\widetilde\mu_z$ in this last case, and hence of $\widetilde\mu_{\ell+1}$, and thus by iteration finishes the construction of the extension $\widetilde\nu_J$. { } {\mathbb N}I {\bf Zero set condition:} For the extended perturbation constructed above, we have $\bigl\|\widetilde \nu_J\bigr\| \leq \|\mu_J\| \le \sup_{I\subsetneq J} \sup_{x\in V^k_I} \|\nu_I(x)\|< {\sigma}$ by induction hypothesis e). We first consider the part of the perturbed zero set near the core, and then look at the \lq\lq new part". By \eqref{value}, the zero set near the core $(s_J + \widetilde\nu_J)^{-1}(0)\cap B^J_{\eta_{k+\frac 12}}\bigl(N^{k+\frac34}_{JI}\bigr)$ consists of points with $s_J(x) = -\widetilde\nu_J(x)\in \widehat{\partial}hi_{IJ}(E_I)$, so must lie within $s_J^{-1}\bigl(\widehat{\partial}hi_{IJ}(E_I)\bigr) = {\partial}hi_{IJ}(U_{IJ})$. Hence \eqref{eq:useful}, applied with radius $2^{-k-\frac 12}\eta_0 = \eta_{k+\frac 12}$, implies for all $I\subsetneq J$ the inclusion {\beta}gin{equation} {\lambda}bel{eq:zeroset} (s_J + \widetilde\nu_J)^{-1}(0)\;\cap\; B^J_{\eta_{k+\frac 12}}\bigl(N^{k+\frac34}_{JI}\bigr) \;\subset\; N^{k+\frac12}_{JI} . \end{equation} Thus the inductive hypothesis d) together with the compatibility condition $\widetilde\nu_J =\mu_J$ on $N^{k+\frac12}_{JI}\subset {\partial}hi_{IJ}(V^k_I)$ from \eqref{tinu}, with $\mu_J$ given by \eqref{eq:nuJ'}, imply that $s_J + \widetilde\nu_J\ne 0$ on $N^{k+\frac12}_{JI}{\smallsetminus} {\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}({\mathcal C}))$. Therefore $$ (s_J + \widetilde\nu_J)^{-1}(0)\;\cap\; B^J_{\eta_{k+\frac 12}}\bigl(N^{k+\frac34}_{JI}\bigr) \subset {\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}({\mathcal C})). $$ Next, by Definition~\ref{def:admiss} we have $$ {\sigma} \le {\sigma}({\delta},{\mathcal V},{\mathcal C}) \le \| s_J(x) \| \qquad\forall x\in \overline{V^{k+1}_J} \;{\smallsetminus}\; \Bigl( \widetilde C_J \cup {\textstyle \bigcup_{I\subsetneq J}} B^J_{\eta_{k+\frac 12}}\bigl(N^{k+\frac34}_{JI}\bigr) \Bigr) . $$ Thus if $x$ is in the complement in $\overline{V^{k+1}_J}$ of the neighbourhoods $B^J_{\eta_{k+\frac 12}}\bigl(N^{k+\frac34}_{JI}\bigr)$ which cover the core, then either $x\in \widetilde C_J$ or $\|s_J(x)\|\geq {\sigma} > \|\widetilde\nu_J(x)\|$. In particular, we obtain the inclusion {\beta}gin{equation}{\lambda}bel{eq:include} \bigl(s_J|_{\overline{V^{k+1}_J}} + \widetilde\nu_J\bigr)^{-1}(0) \;{\smallsetminus}\; {\textstyle\bigcup_{I\subsetneq J} } B^J_{\eta_{k+\frac 12}}\bigl(N^{k+\frac34}_{JI}\bigr) \;\subset\; \widetilde C_J. \end{equation} From this we can deduce a slightly stronger version of d) at level $k+1$, namely $$ (s_J|_{\overline{V^{k+1}_J}}+\widetilde\nu_J)^{-1}(0) \; \subset\; {\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}({\mathcal C})) \qquad \forall \; |J|\le k+1 . $$ Indeed, the zero set of $(s+\widetilde\nu_J)|_{\overline{V^{k+1}_J}}$ consists of an ``old part'' given by \eqref{eq:zeroset}, which lies in the enlarged core $N^{k+\frac 12}_J$, where by the above arguments we have $s_J + \widetilde\nu_J\ne 0$ on $N^{k+\frac12}_{JI}{\smallsetminus} {\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}({\mathcal C}))$. The ``new part'' given by \eqref{eq:include} is in fact contained in the open part $\widetilde C_J\subset U_J$ of ${\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}({\mathcal C}))$.
{ } {\mathbb N}I {\bf Transversality:} Since the perturbation $\widetilde\nu_J$ was constructed to be strongly admissible and hence admissible, the induction hypothesis b) together with Lemma~\ref{le:transv} and \eqref{tinu} imply that the transversality condition is already satisfied on the enlarged core, $(s_J + \widetilde\nu_J)|_{N^{k+\frac12}_J} {\partial}itchfork 0$. In addition, \eqref{eq:zeroset} also implies that the perturbed section $s_J+\widetilde\nu_J$ has no zeros on $B^J_{\eta_{k+\frac 12}}\bigl(N^{k+\frac34}_{JI}\bigr) {\smallsetminus} N^{k+\frac 12}_{JI}$, so that we have transversality $$ (s_J + \widetilde\nu_J)|_{B^J_{\eta_{k+\frac 12}}(N^{k+\frac34}_J)} \; {\partial}itchfork \; 0 $$ on a neighbourhood $B:= B^J_{\eta_{k+\frac 12}}(N^{k+\frac34}_J) = \bigcup_{I\subsetneq J} B^J_{\eta_{k+\frac 12}}(N^{k+\frac34}_{JI})$ of the core $N:=N_J^{k+1}=\bigcup_{I\subsetneq J} N^{k+1}_{JI}$, on which compatibility a) requires $\nu_J|_N=\widetilde\nu_J|_N$. In fact, $B$ also precompactly contains the neighbourhood $B':= B^J_{\eta_{k+1}}(N^{k+1}_J)$ of $N$, so that strong admissibility c) can be satisfied by requiring $\nu_J|_{B'}=\widetilde\nu_J|_{B'}$. To sum up, the smooth map $\widetilde\nu_J : V^{k+1}_J \to E_J$ fully satisfies the compatibility a), strong admissibility c), and strengthened zero set condition d). Moreover, $s_J+\widetilde\nu_J$ extends to a smooth map on the compact closure $\overline{V^{k+1}_J}\subset U_J$, where it satisfies transversality $(s_J+\widetilde\nu_J)|_B{\partial}itchfork 0$ on the open set $B \subset \overline{V^{k+1}_J}$ and the zero set condition from \eqref{eq:include}, $$ (s_J+\widetilde\nu_J)^{-1}(0) \cap (\overline{V^{k+1}_J}{\smallsetminus} B)\; \subset\; O: = \overline{V^{k+1}_J} \cap \widetilde C_J . $$ The latter can be phrased as $\| s_J+\widetilde\nu_J \| > 0$ on $( \overline{V^{k+1}_J}{\smallsetminus} B ) {\smallsetminus} O$, which is compact since $O$ is relatively open in $\overline{V^{k+1}_J}$. Since $z \mapsto \| s_J(z) +\widetilde\nu_J(z) \|$ is continuous, it remains nonvanishing on $W{\smallsetminus} O$ for some relatively open neighbourhood $W\subset \overline{V^{k+1}_J}$ of $\overline{V^{k+1}_J}{\smallsetminus} B$. This extends the zero set condition to $(s_J+\widetilde\nu_J)^{-1}(0) \cap W \subset O$. We can moreover choose $W$ disjoint from the neighbourhood of the core $B' \sqsubset B$. Now we wish to find a smooth perturbation $\nu_{\partial}itchfork:\overline{V^{k+1}_J} \to E_J$ supported in $W$ that satisfies the following: {\beta}gin{enumerate} \item[(T:i)] it provides transversality $(s_J+\widetilde\nu_J+\nu_{\partial}itchfork)|_W {\partial}itchfork 0$; \item[(T:ii)] the perturbed zero set satisfies the inclusion $(s_J+\widetilde\nu_J+\nu_{\partial}itchfork)^{-1}(0) \cap W \subset O$; \item[(T:iii)] the perturbation is small: $\|\nu_{\partial}itchfork\|< {\sigma} - \|\widetilde\nu_J\|$. \end{enumerate} To see that this exists, note that for $\nu_{\partial}itchfork=0$ transversality holds outside the compact subset $\overline{V^{k+1}_J}{\smallsetminus} B$ of $W$. Hence by the Transversality Extension theorem in \cite[Chapter~2.3]{GuillP} we can fix a nested open precompact subset $\overline{V^{k+1}_J}{\smallsetminus} B \subset P \sqsubset W$ and achieve transversality everywhere on $W$ by adding an arbitrarily small perturbation supported in $P$. This immediately provides (T:i).
Moreover, since $\|s_J +\widetilde\nu_J\|$ has a positive minimum on the compact set $\overline{P}{\smallsetminus} O$, we can choose $\nu_{\partial}itchfork$ sufficiently small to satisfy (T:ii) and (T:iii). Setting $$ \nu_J:=\widetilde\nu_J + \nu_{\partial}itchfork \,:\; V^{k+1}_J \to E_J $$ then finishes the construction since the choice of $\nu_{\partial}itchfork$ ensures the zero set inclusion d) and transversality b) on $W$; the previous constructions for $\nu_J|_{V^{k+1}_J{\smallsetminus} W}=\widetilde\nu_J|_{V^{k+1}_J{\smallsetminus} W}$ ensure b), c), and d) on $V^{k+1}_J{\smallsetminus} W \supset B'$, and compatibility a) on the core $N \subset B'\subset V^{k+1}_J{\smallsetminus} W$; and we achieve smallness e) by the triangle inequality $$ \|\widetilde\nu_J+\nu_{\partial}itchfork\| \;\leq\; \|\widetilde\nu_J\| + \|\nu_{\partial}itchfork\| \;<\; \|\widetilde\nu_J\| + \bigl( {\sigma} - \|\widetilde\nu_J\| \bigr) \;=\; {\sigma} . $$ This completes the iterative step and hence the construction of the required $({\mathcal V},{\mathcal C},{\delta},{\sigma})$-adapted perturbation. The last claim follows from Proposition~\ref{prop:zeroS0}. \end{proof} In order to prove uniqueness up to cobordism of the VMC, we moreover need to construct transverse cobordism perturbations with prescribed boundary values as in Definition~\ref{def:csect}. We will perform this construction by an iteration as in Proposition~\ref{prop:ext}, with adjusted domains $V^k_J$ obtained by replacing ${\delta}$ with $\frac 12 {\delta}$. This is necessary since as before the construction of $\nu_J$ will proceed by extending the given perturbations from previous steps, $\mu_J$, and now also the given boundary values $\nu^{\alpha}_J$, and then restricting to a precompact subset. However, the $({\mathcal V},{\mathcal C},{\delta},{\sigma})$-adapted boundary values $\nu^{\alpha}_J$ on ${\partial}^{\alpha} V_J$ only extend to admissible, precompact, transverse perturbations in a collar of $V^{|J|}_J$. Hence the construction of $\nu_J$ by precompact restriction does not allow us to define it on the whole of this collar. Instead, we achieve this construction by restriction to $V^{|J|+1}_J\sqsubset V^{|J|}_J$, which by \eqref{eq:VIk} is the analog of $V^{|J|}_J$ when ${\delta}$ is replaced by $\frac 12 {\delta} $. This means that, firstly, we have to adjust the smallness condition for the iterative construction of perturbations by introducing a variation of the constant ${\sigma}({\delta},{\mathcal V},{\mathcal C})$ of Definition~\ref{a-e}. Secondly, we need a further smallness condition on adapted perturbations if we wish to extend these to a Kuranishi cobordism. Fortunately, the latter construction will only be used on product Kuranishi cobordisms, which leads to the following definitions. {\beta}gin{defn} {\lambda}bel{a-e rel} {\beta}gin{enumerate} \item Let $({\mathcal K},d)$ be a metric tame Kuranishi cobordism with nested cobordism reduction ${\mathcal C}\sqsubset{\mathcal V}$, and let $0<{\delta}<\min\{{\varepsilon},{\delta}_{\mathcal V}\}$, where ${\varepsilon}$ is the collar width of $({\mathcal K},d)$ and the reductions ${\mathcal C},{\mathcal V}$.
Then we set {\beta}gin{align*} {\sigma}' ({\delta},{\mathcal V},{\mathcal C}) &\,:=\; \min_{J\in{\mathcal I}_{\mathcal K}} \inf \Bigl\{ \; \bigl\| s_J(x) \bigr\| \;\Big| \; x\in \overline{V^{|J|+1}_J} \;{\smallsetminus}\; \Bigl( \widetilde C_J \cup {\textstyle \bigcup_{I\subsetneq J}} B^J_{\eta_{|J|+\frac 12}}\bigl(N^{|J|+\frac34}_{JI}\bigr) \Bigr) \Bigr\} , \\ \qquad{\sigma}_{\rm rel}({\delta},{\mathcal V},{\mathcal C}) &\,:=\; \min\bigl\{ {\sigma}({\delta},{\partial}^0{\mathcal V},{\partial}^0{\mathcal C}), \,{\sigma}({\delta},{\partial}^1{\mathcal V},{\partial}^1{\mathcal C}), \,{\sigma}'({\delta},{\mathcal V},{\mathcal C}) \bigr\} . \end{align*} \item Given a metric Kuranishi atlas $({\mathcal K},d)$, we say that a perturbation $\nu$ is {\bf strongly adapted} if it is a $({\mathcal V},{\mathcal C},{\delta},{\sigma})$-adapted perturbation $\nu$ of $s_{\mathcal K}|_{\mathcal V}$ for some choice of nested reduction ${\mathcal C}\sqsubset{\mathcal V}$ and constants $0<{\delta}<{\delta}_{\mathcal V}$ and $$ \qquad 0\;<\;{\sigma}\;\le\;{\sigma}_{\rm rel}({\delta},{\mathcal V}\tildemes[0,1],{\mathcal C}\tildemes[0,1]) \;=\; \min\bigl\{ {\sigma}({\delta},{\mathcal V},{\mathcal C}) , {\sigma}'({\delta},{\mathcal V}\tildemes[0,1],{\mathcal C}\tildemes[0,1]) \bigr\} . $$ \end{enumerate} \end{defn} Recalling the definition of ${\sigma}({\delta},{\mathcal V},{\mathcal C})$, and the product structure of all sets and maps involved in the definition of ${\sigma}'({\delta},{\mathcal V}\tildemes[0,1],{\mathcal C}\tildemes[0,1])$, we may rewrite the condition on ${\sigma}>0$ in the definition of strong adaptivity as $$ {\sigma} \,\le\; \bigl\| s_J(x) \bigr\| \qquad\forall\; x\in \overline{V^{k}_J} \;{\smallsetminus}\; \Bigl( \widetilde C_J \cup {\textstyle \bigcup_{I\subsetneq J}} B^J_{\eta_{k-\frac 12}}\bigl(N^{k-\frac14}_{JI}\bigr) \Bigr) ,\; J\in{\mathcal I}_{\mathcal K}, \; k\in\{|J|,|J|+1\} . $$ Although the construction of transverse cobordism perturbations with fixed boundary values in part (ii) of the following Proposition will be used only on product cobordisms, we state it here in generality, since we use it to construct transverse cobordism perturbations without fixed boundary values in part (i). {\beta}gin{prop}{\lambda}bel{prop:ext2} Let $({\mathcal K},d)$ be a metric tame Kuranishi cobordism with nested cobordism reduction ${\mathcal C}\sqsubset{\mathcal V}$, and let $0<{\delta}<\min\{{\varepsilon},{\delta}_{\mathcal V}\}$, where ${\varepsilon}$ is the collar width of $({\mathcal K},d)$ and the reductions ${\mathcal C},{\mathcal V}$. Then we have ${\sigma}_{\rm rel}({\delta},{\mathcal V},{\mathcal C})>0$ and the following holds. {\beta}gin{enumerate} \item Given any $0<{\sigma}\le {\sigma}_{\rm rel}({\delta},{\mathcal V},{\mathcal C})$, there exists an admissible, precompact, transverse cobordism perturbation $\nu$ of $s_{\mathcal K}|_{\mathcal V}$ with ${\partial}i_{\mathcal K}\bigl((s+\nu)^{-1}(0)\bigr)\subset {\partial}i_{\mathcal K}({\mathcal C})$, whose restrictions $\nu|_{{\partial}^{\alpha}{\mathcal V}}$ for ${\alpha}=0,1$ are $({\partial}^{\alpha}{\mathcal V},{\partial}^{\alpha}{\mathcal C},{\delta},{\sigma})$-adapted perturbations of $s_{{\partial}^{\alpha}{\mathcal K}}|_{{\partial}^{\alpha}{\mathcal V}}$.
\item Given any perturbations $\nu^{\alpha}$ of $s_{{\partial}^{\alpha}{\mathcal K}}|_{{\partial}^{\alpha}{\mathcal V}}$ for ${\alpha}=0,1$ that are $({\partial}^{\alpha}{\mathcal V},{\partial}^{\alpha}{\mathcal C},{\delta},{\sigma})$-adapted with ${\sigma}\le {\sigma}_{\rm rel}({\delta},{\mathcal V},{\mathcal C})$, the perturbation $\nu$ of $s_{\mathcal K}|_{\mathcal V}$ in (i) can be constructed to have boundary values $\nu|_{{\partial}^{\alpha}{\mathcal V}}=\nu^{\alpha}$ for ${\alpha}=0,1$. \item In the case of a product cobordism ${\mathcal K}\tildemes[0,1]$ with product metric and product reductions ${\mathcal C}\tildemes[0,1]\sqsubset{\mathcal V}\tildemes[0,1]$, both (i) and (ii) hold without requiring ${\delta}$ to be bounded in terms of the collar width. \end{enumerate} \end{prop} {\beta}gin{proof} The positivity ${\sigma}_{\rm rel}({\delta},{\mathcal V},{\mathcal C})>0$ follows from ${\sigma}({\delta},{\partial}^{\alpha}{\mathcal V},{\partial}^{\alpha}{\mathcal C})>0$ by Lemma~\ref{le:admin}~(i), and ${\sigma}'>0$ by the arguments of Lemma~\ref{le:admin}~(i) applied to the shifted domains. Next, we reduce (i) for given $0<{\sigma} \le {\sigma}_{\rm rel}({\delta},{\mathcal V},{\mathcal C})$ to (ii). For that purpose recall that ${\delta}<{\delta}_{{\partial}^{\alpha}{\mathcal V}}$ by Lemma~\ref{le:admin}~(iii) and ${\sigma} \le {\sigma}({\delta},{\partial}^{\alpha}{\mathcal V},{\partial}^{\alpha}{\mathcal C})$ by definition of ${\sigma}_{\rm rel}({\delta},{\mathcal V},{\mathcal C})$. Hence Proposition~\ref{prop:ext} provides $({\partial}^{\alpha}{\mathcal V},{\partial}^{\alpha}{\mathcal C},{\delta},{\sigma})$-adapted perturbations $\nu^{\alpha}$ of $s_{{\partial}^{\alpha}{\mathcal K}}|_{{\partial}^{\alpha}{\mathcal V}}$ for ${\alpha}=0,1$. Now (ii) provides a cobordism perturbation $\nu$ with the given restrictions $\nu|_{{\partial}^{\alpha}{\mathcal V}}=\nu^{\alpha}$, which are $({\partial}^{\alpha}{\mathcal V},{\partial}^{\alpha}{\mathcal C},{\delta},{\sigma})$-adapted by construction. So (i) follows from~(ii). To prove (ii) recall that, by assumption, the given perturbations $\nu^{\alpha}$ of $s_{{\partial}^{\alpha}{\mathcal K}}|_{{\partial}^{\alpha}{\mathcal V}}$ for ${\alpha}=0,1$ extend to $\nu^{\alpha}_I : ({\partial}^{\alpha} V_I)^{|I|} \to E_I$ for all $I\in{\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}$ which satisfy conditions a)-e) of Definition~\ref{a-e} with the given constant ${\sigma}$. Here by Lemma~\ref{le:admin}~(iv) the domains of $\nu^{\alpha}_I$ are $({\partial}^{\alpha} V_I)^{|I|} = {\partial}^{\alpha} V_I^{|I|}$, and these are the boundaries of the reductions $V_I^k$ which have collars $$ V_I^k \cap {\iota}^{\alpha}_I\bigl({\partial}^{\alpha} U_I \tildemes A^{\alpha}_{{\varepsilon}-2^{-k}{\delta}}\bigr) = {\iota}^{\alpha}_I\bigl( {\partial}^{\alpha} V_I^k \tildemes A^{\alpha}_{{\varepsilon}-2^{-k}{\delta}} \bigr) , $$ where the requirement $2^{-k}{\delta}<{\varepsilon}$ of Lemma~\ref{le:admin} for $k>0$ is ensured by the assumption ${\delta}<{\varepsilon}$. In the case of a product cobordism with product reduction this holds for any ${\delta}>0$ with ${{\varepsilon}-2^{-k}{\delta}}$ replaced by ${\varepsilon}:=1$. The same collar form holds for $C_I \sqsubset V_I$, and hence for any set such as $N^k_{JI}$ or $\widetilde C_I$ constructed from these. 
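For instance, spelled out for the core pieces and using the collar form of the coordinate changes, this reads
$$ N^k_{JI} \;\cap\; {\iota}^{\alpha}_J\bigl({\partial}^{\alpha} U_J \tildemes A^{\alpha}_{{\varepsilon}-2^{-k}{\delta}}\bigr) \;=\; {\iota}^{\alpha}_J\Bigl( \bigl( ({\partial}^{\alpha} V_J)^k \cap {\partial}^{\alpha}{\partial}hi_{IJ}\bigl(({\partial}^{\alpha} V_I)^k\cap {\partial}^{\alpha} U_{IJ}\bigr) \bigr) \tildemes A^{\alpha}_{{\varepsilon}-2^{-k}{\delta}} \Bigr) , $$
where ${\partial}^{\alpha}{\partial}hi_{IJ}$ denotes the corresponding coordinate change of the boundary atlas ${\partial}^{\alpha}{\mathcal K}$, so that the set in brackets is precisely the core of the boundary reduction ${\partial}^{\alpha}{\mathcal V}$ formed with the same constant ${\delta}$; here we combined \eqref{eq:Wnbhd} with the collar form of ${\partial}hi_{IJ}$ and of its domain $U_{IJ}$.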
Now ${\delta}<{\varepsilon}$ also ensures $2^{-k}{\varepsilon} \le {\varepsilon} - 2^{-k}{\delta}$ for $k\geq 1$ (since $2^{-k}{\varepsilon}+2^{-k}{\delta} < 2^{-k+1}{\varepsilon}\le{\varepsilon}$), so that we may denote the $2^{-k}{\varepsilon}$-collar of $V_I^k$ by $$ N^k_{I,{\alpha}} \,:=\; {\iota}^{\alpha}_I\bigl({\partial}^{\alpha} V_I^k \tildemes A^{\alpha}_{2^{-k}{\varepsilon}} \bigr) \;\subset\; V^k_I $$ and note the precompact inclusion $N^{k'}_{I,{\alpha}} \sqsubset N^k_{I,{\alpha}}$ for $k'>k$. We will now construct the required cobordism perturbation $\nu$ by an iteration as in Proposition~\ref{prop:ext} with adjusted domains obtained by replacing ${\delta}$ with $\frac 12 {\delta}$. This is necessary since the given boundary value $\nu^{\alpha}_J$ by assumption only extends to a map $\nu^{\alpha}_J : V^{|J|}_J \to E_J$, but as before the construction of $\nu_J$ will proceed by restriction to a precompact subset of the domain of an extension $\widetilde\nu_J$, where this agrees both with the push forward of previously defined $(\nu_I)_{I\subsetneq J}$ and with the given boundary perturbations $\nu^{\alpha}_J$ in collar neighbourhoods. We achieve this by restriction to $V^{|J|+1}_J\sqsubset V^{|J|}_J$. That is, in the $k$-th step we construct $\nu_J : V^{k+1}_J \to E_J$ for each $|J| = k$ that, together with the $\nu_I|_{V^{k+1}_I}$ for $|I|< k$ obtained by restriction from earlier steps, satisfies the following. {\beta}gin{itemize} \item[a)] The perturbation is compatible with coordinate changes and collars, that is $$ \quad \nu_J |_{N^{k+1}_{JI}} \;=\; \widehat{\partial}hi_{IJ} \circ \nu_I \circ {\partial}hi_{IJ}^{-1} |_{N^{k+1}_{JI}} \qquad \text{on}\quad N^{k+1}_{JI} = V^{k+1}_J \cap {\partial}hi_{IJ}(V^{k+1}_I\cap U_{IJ}) $$ for all $I\subsetneq J$, and for each ${\alpha}\in\{0,1\}$ with $J\in{\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}$ we have $$ \nu_J |_{N^{k+1}_{J,{\alpha}}} \;=\; ({\iota}^{\alpha}_J)^*\nu^{\alpha}_J \qquad\text{on}\quad N^{k+1}_{J,{\alpha}} = {\iota}^{\alpha}_J\bigl({\partial}^{\alpha} V_J^{k+1} \tildemes A^{\alpha}_{2^{-k-1}{\varepsilon}} \bigr), $$ where we abuse notation by defining $({\iota}^{\alpha}_J)^*\nu^{\alpha}_J : {\iota}^{\alpha}_J(x,t) \mapsto \nu^{\alpha}_J(x)$. ${\partial}hantom{\bigg(}$ \item[b)] The perturbed section is transverse, that is $(s_J|_{{V^{k+1}_J}} + \nu_J) {\partial}itchfork 0$. \item[c)] The perturbation is {\it strongly admissible} with radius $\eta_{k+1}= 2^{-k-1}(1-2^{-\frac 14}){\delta}$, $$ \qquad \nu_J\bigl( B^J_{\eta_{k+1}}(N^{k+1}_{JI})\bigr) \;\subset\; \widehat{\partial}hi_{IJ}(E_I) \qquad\forall \;I\subsetneq J . $$ \item[d)] The perturbed zero set is contained in ${\partial}i_{\mathcal K}^{-1}\bigl({\partial}i_{\mathcal K}({\mathcal C})\bigr)$; more precisely $$ (s_J |_{{V^{k+1}_J}}+ \nu_J)^{-1}(0) \;\subset\; {V^{k+1}_J} \cap {\partial}i_{\mathcal K}^{-1}\bigl({\partial}i_{\mathcal K}({\mathcal C})\bigr) . $$ \item[e)] The perturbation is small, that is $\sup_{x\in {V^{k+1}_J}} \| \nu_J (x) \| < {\sigma}$. \end{itemize} The final perturbation $\nu=(\nu_I|_{V_I})_{I\in{\mathcal I}_{\mathcal K}}$ of $s_{\mathcal K}|_{\mathcal V}$ then has product form on collars of width $2^{-M_{\mathcal K}}{\varepsilon}$ and thus is a cobordism perturbation, whose boundary restrictions are the given $\nu^{\alpha}$ by construction. Moreover, $\nu$ will be admissible by c), transverse by b), and precompact by d) with ${\partial}i_{\mathcal K}( (s+\nu)^{-1}(0))\subset{\partial}i_{\mathcal K}({\mathcal C})$. Compactness of $|(s+\nu)^{-1}(0)|$ then follows from Lemma~\ref{le:czeroS0}.
For $k=0$, there are no indices $|J|=0$ to be considered. Now suppose that $\bigl(\nu_I : V^{|I|+1}_I\to E_I\bigr)_{I\in{\mathcal I}_{\mathcal K}, |I|< k}$ are constructed such that a)-e) hold. Then for the iteration step it suffices as before to construct $\nu_J$ for a fixed $J\in{\mathcal I}_{\mathcal K}$ with $|J|=k$. In the following three construction steps we then unify the cases of $J\in{\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}$ for none, one, or both indices ${\alpha}$ by interpreting the collars $N^k_{J,{\alpha}}$ as empty sets unless $J\in{\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}$. { }{\mathbb N}I {\bf Construction of extension for fixed $|J|=k$:} For each $k\geq 1$ we will construct an extension of a restriction of $$ \quad\mu_J : N_J^k \cup N^k_{J,0} \cup N^k_{J,1} \;\longrightarrow\; E_J , \qquad \mu_J|_{N^{k}_{JI}} := \widehat{\partial}hi_{IJ}\circ \nu_I\circ{\partial}hi_{IJ}^{-1}, \qquad \mu_J|_{N^k_{J,{\alpha}}} := ({\iota}^{\alpha}_J)^*\nu^{\alpha}_J , $$ where $N^k_{J,{\alpha}} = {\iota}^{\alpha}_J\bigl({\partial}^{\alpha} V_J^k \tildemes A^{\alpha}_{2^{-k}{\varepsilon}} \bigr)$ is a collar of $V^k_J$. More precisely, we construct a smooth map $\widetilde\nu_J : V^k_J \to E_J$ that satisfies {\beta}gin{equation}{\lambda}bel{ctinu} \widetilde\nu_J|_{N_{k+\frac 12}} = \mu_J|_{N_{k+\frac 12}} \qquad\text{on}\;\; N_{k+\frac 12} := N_J^{k+\frac 12} \cup N^{k+\frac 12}_{J,0} \cup N^{k+\frac 12}_{J,1} , \end{equation} the bound $\|\widetilde\nu_J \| \leq \|\mu_J\| < {\sigma}$, and the strong admissibility condition {\beta}gin{equation}{\lambda}bel{cvalue} \widetilde\nu_J \bigl( B^J_{\eta_{k+\frac 12}}\bigl(N^{k+\frac 12}_{JI}\bigr) \bigr) \;\subset\; \widehat{\partial}hi_{IJ}(E_I) \qquad \forall\; I\subsetneq J . \end{equation} We proceed as in Proposition~\ref{prop:ext} for fixed $j\in J$ by iteratively constructing smooth maps $\widetilde\mu^j_\ell: W_\ell \to \widehat{\partial}hi_{jJ}(E_j)$ for $\ell=0,\ldots,k-1$ on the adjusted open sets {\beta}gin{equation}{\lambda}bel{eq:cW} W_\ell \,:=\; N^{k_\ell}_{J,0} \;\cup\; N^{k_\ell}_{J,1} \;\cup\; {\textstyle \bigcup _{I\subsetneq J,|I|\le \ell}}\, B^J_{r_\ell}(N^{k+\frac 12}_{JI}) \end{equation} with $r_\ell:= \eta_k - \frac {\ell+1} {k} ( \eta_k-\eta_{k+\frac 13})$ and $k_\ell:= k + \frac {\ell+1} {3k}$, that satisfy the conditions {\beta}gin{enumerate} \item[(E:i)] $\widetilde\mu^j_\ell |_{N^{k+\frac 12}_{JI}} = \mu_J^j|_{N^{k+\frac 12}_{JI}}$ for all $I\subsetneq J$ with $|I|\leq \ell$ and $j\in I$; \item[(E:ii)] $\widetilde\mu^j_\ell |_{B^J_{r_\ell}(N^{k+\frac 12}_{JI})} = 0$ for all $I\subsetneq J$ with $|I|\leq \ell$ and $j\notin I$; \item[(E:iii)] $\bigl\|\widetilde\mu^j_\ell \bigr\| \leq \|\mu^j_J\|$; \item[(E:iv)] $\widetilde\mu^j_\ell = ({\iota}^{\alpha}_J)^*\nu^{{\alpha},j}_J$ on $N^{k_\ell}_{J,{\alpha}} = {\iota}^{\alpha}_J\bigl({\partial}^{\alpha} V_J^{k_\ell} \tildemes A^{\alpha}_{2^{-k_\ell}{\varepsilon}} \bigr)$ for ${\alpha}\in\{0,1\}$ with $J\in{\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}$. \end{enumerate} These requirements make sense because $\eta_{k+\frac 12}< r_\ell < \eta_k$ and $B^J_{\eta_{k}}(N^{k+\frac 12}_J) \subset V^k_J$ by \eqref{eq:fantastic}, so that the domain in (E:ii) is included in $V^k_J$ and is larger than that in \eqref{cvalue}. 
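To verify the inequalities $\eta_{k+\frac 12}<r_\ell<\eta_k$, note that $\eta_m$ is a fixed positive multiple of $2^{-m}$ (cf.\ the radius $\eta_{k+1}=2^{-k-1}(1-2^{-\frac 14})$ in c) above), so that $m\mapsto\eta_m$ is strictly decreasing; hence for $\ell=0,\ldots,k-1$
$$
r_\ell \;=\; \eta_k - \tfrac{\ell+1}{k}\,\bigl(\eta_k-\eta_{k+\frac 13}\bigr) \;\in\; \bigl[\,\eta_{k+\frac 13}\,,\,\eta_k\,\bigr) ,
$$
and in particular $\eta_{k+\frac 12} < \eta_{k+\frac 13} \le r_\ell < \eta_k$.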
After this iteration, we then obtain the extension $\widetilde\nu_J:= {\beta}ta {\textstyle\sum_{j\in J}} \, \widetilde\mu^j_{k-1}$ by multiplication with a smooth cutoff function ${\beta}ta:V^k_J \to [0,1]$ with ${\beta}ta|_{N_{k+\frac 12}}\equiv 1$ and ${\rm supp\,}{\beta}ta\subset N^{k+\frac 13}_{J,0} \cup N^{k+\frac 13}_{J,1} \cup B^J_{\eta_{k+\frac 13}}(N^{k+\frac 12}_J)$, where the latter contains the closure of $N_{k+\frac 12}=N^{k+\frac 12}_{J,0} \cup N^{k+\frac 12}_{J,1} \cup N^{k+\frac 12}_J$ in $V^k_J$, so that $\widetilde\nu_J$ extends trivially to $V^k_J{\smallsetminus} W_{k-1}$. For the start of iteration at $\ell=0$, the domain is $W_0= N^{k_0}_{J,0} \;\cup\; N^{k_0}_{J,1}$ with $k_0 = k + \frac 1{3k}$. Conditions (E:i) and (E:ii) are vacuous since there are no index sets with $|I|\le 0$, and we can satisfy (E:iii) and (E:iv), by setting $\widetilde\mu^j_0 ({\iota}ta^{\alpha}(x,t)) := \nu^{{\alpha},j}_J(x)$. Next, if the construction is given on $W_\ell$, then we cover $W_{\ell+1}$ by the open sets $B_L': = W_{\ell+1}\cap B^J_{r_{\ell-1}}(N^{k+\frac12}_{JL})$ for $L\subsetneq J$, $|L|=\ell+1$ and $C_{\ell+1}\subset W_\ell$ given below, and pick an open subset $C'\subset V^k_J$ such that $$ C_{\ell+1} \,:=\; W_{\ell+1} \;{\smallsetminus}\; {\textstyle \bigcup _{|L| = \ell+1}\, \overline{B^J_{r_{\ell}}(N^{k+\frac12}_{JL})}} \;\sqsubset\; C' \;\sqsubset\; W_\ell \;{\smallsetminus}\; {\textstyle \bigcup _{|L| = \ell+1}\, \overline{B^J_{r_{\ell+1}}(N^{k+\frac12}_{JL})}} \;=:\, C_\ell . $$ As before, this guarantees that $C'$ and $B^J_{r_{\ell+1}}(N^{k+\frac 12}_{JL})$ have disjoint closures for all $|L|=\ell +1$. Then we set $\widetilde\mu_{\ell+1}|_{C_{\ell+1}} := \widetilde \mu_\ell|_{C_{\ell+1}}$, which inherits properties (E:i)--(E:iv) from $\widetilde\mu_\ell$ because $C_{\ell+1}$ is still disjoint from $B^J_{r_{\ell+1}}(N^{k+\frac 12}_{JL})$ for any $|I|=\ell+1$, and we have $N^{k_{\ell+1}}_{J,{\alpha}}\subset N^{k_\ell}_{J,{\alpha}}$. So it remains to construct $\widetilde\mu_{\ell+1}: B'_L \to \widehat{\partial}hi_{jJ}(E_j)$ for a fixed $L\subset J$, $|L|=\ell+1$ such that $\widetilde\mu_{\ell+1}=\widetilde \mu_\ell$ on $B'_L \cap C'$. In case $j\notin L$ condition (E:iv) prescribes $\widetilde\mu^j_\ell = ({\iota}^{\alpha}_J)^*\nu^{{\alpha},j}_J$ on the intersection $$ B'_L \cap N^{k_{\ell+1}}_{J,{\alpha}} \subset {\iota}^{\alpha}_J\bigl(B^{J,{\alpha}}_{r_{\ell-1}}({\partial}^{\alpha} N^{k+\frac12}_{JL}) \tildemes A^{\alpha}_{2^{-k_{\ell+1}}{\varepsilon}} \bigr)\bigr). $$ Because $r_{\ell-1}<\eta_k$, strong admissibility for $\nu^{\alpha}_J$ on $B^{J,{\alpha}}_{\eta_k}({\partial}^{\alpha} N^k_{JL})$ implies that $({\iota}^{\alpha}_J)^*\nu^{{\alpha},j}_J=0$ on this intersection. Moreover, $B'_L\cap C'$ again is a subset of $\bigcup _{I\subsetneq L} B^J_{r_\ell}(N^{k+\frac 12}_{JI})$, where we have $\widetilde\mu_\ell|_{B'_L\cap C'}=0$ by iteration hypothesis (E:ii) for each $I\subsetneq J$. Thus $\widetilde\mu_{\ell+1}:=0$ satisfies all extension properties (E:i)--(E:iv) in this case. 
In case $j\in L$ we may again patch together extensions by partitions of unity, so that it suffices to construct smooth maps $\widetilde\mu_z: B^J_{r_z}(z)\to \widehat{\partial}hi_{jJ}(E_j)$ on balls of positive radius $r_z>0$ around each fixed $z\in B'_L$, that satisfy {\beta}gin{itemize} \item[(i)] $\widetilde\mu_z = \mu_J^j$ on $B^J_{r_z}(z)\cap N^{k+\frac 12}_{JI}$ for all $I\subset L$ with $j\in I$ (including $I=L$); \item[(ii)] $\widetilde\mu_z = 0$ on $B^J_{r_z}(z) \cap B^J_{r_{\ell+1}}(N^{k+\frac 12}_{JI})$ for all $I\subsetneq L$ with $j\notin I$; \item[(iii)] $\bigl\| \widetilde\mu_z\bigr\| \leq \|\mu_J^j\|$; \item[(iv)] $\widetilde\mu_z = ({\iota}^{\alpha}_J)^*\nu^{{\alpha},j}_J$ on $B^J_{r_z}(z) \cap N^{k_{\ell+1}}_{J,{\alpha}}$ for ${\alpha}\in\{0,1\}$ with $J\in{\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}$; \item[(v)] $\widetilde\mu_z=\widetilde\mu_{\ell}$ on $B^J_{r_z}(z)\cap B'_L\cap C'$. \end{itemize} For $z\in V^k_J {\smallsetminus} \overline{N^{k_{\ell+1}}_{J,{\alpha}}}$, this is accomplished by the same constructions as in Proposition~\ref{prop:ext} by choosing $r_z>0$ such that $B^J_{r_z}(z)\cap N^{k_{\ell+1}}_{J,{\alpha}}=\emptyset$. For $z\in \overline{N^{k_{\ell+1}}_{J,{\alpha}}}\subset N^{k_\ell}_{J,{\alpha}}$ we choose $r_z>0$ such that $B^J_{r_z}(z)\subset N^{k_{\ell}}_{J,{\alpha}}$. Then $\widetilde\mu_z := \widetilde\mu_\ell|_{B^J_{r_z}(z)}$ satisfies (v) by construction and (i)-(iv) by iteration hypothesis. { } {\mathbb N}I {\bf Zero set condition:} For the extended perturbation constructed above, we have $\bigl\|\widetilde \nu_J\bigr\| \leq \max\{ \max_{I\subsetneq J} \|\nu_I\| , \|\nu^0_J\| , \|\nu^1_J\| \} < {\sigma} $ by induction hypothesis e). From \eqref{cvalue} and \eqref{eq:useful} we then obtain as in Proposition~\ref{prop:ext} {\beta}gin{equation} {\lambda}bel{eq:czeroset} (s_J |_{V^k_J} + \widetilde\nu_J)^{-1}(0)\;\cap\; B^J_{\eta_{k+\frac 12}}\bigl(N^{k+\frac34}_{JI}\bigr) \;\subset\; N^{k+\frac12}_{JI} . \end{equation} Next, recall that we allowed only ${\sigma}>0$ such that $$ {\sigma} \;\leq\; \inf \Bigl\{ \; \bigl\| s_J(x) \bigr\| \;\Big| \; x\in \overline{V^{|J|+1}_J} \;{\smallsetminus}\; \Bigl( \widetilde C_J \cup {\textstyle \bigcup_{I\subsetneq J}} B^J_{\eta_{|J|+\frac 12}}\bigl(N^{|J|+\frac34}_{JI}\bigr) \Bigr) \Bigr\} . $$ Hence the same arguments as in the proof of Proposition~\ref{prop:ext} provide the inclusion {\beta}gin{equation}{\lambda}bel{eq:cinclude} \bigl(s_J|_{\overline{V^{k+1}_J}} + \widetilde\nu_J\bigr)^{-1}(0) \;{\smallsetminus}\; {\textstyle\bigcup_{I\subsetneq J} } B^J_{\eta_{k+\frac 12}}\bigl(N^{k+\frac34}_{JI}\bigr) \;\subset\; \widetilde C_J. \end{equation} Together with the induction hypothesis on $\widetilde\nu_J =\mu_J=\widehat{\partial}hi_{IJ}\circ\nu_I\circ{\partial}hi_{IJ}^{-1}$ on $N^{k+\frac12}_{JI}$ this implies the zero set condition $(s_J|_{\overline{V^{k+1}_J}}+\widetilde\nu_J)^{-1}(0) \subset{\partial}i_{\mathcal K}^{-1}({\partial}i_{\mathcal K}({\mathcal C}))$. { } {\mathbb N}I {\bf Transversality:} Admissibility together with induction hypothesis b) implies transversality $(s_J + \widetilde\nu_J)|_{N^{k+\frac12}_J} {\partial}itchfork 0$ on the enlarged core. Together with transversality of $\nu^{\alpha}_J$ and \eqref{eq:czeroset} we obtain transversality on the open set $$ (s_J + \widetilde\nu_J)|_{B} \; {\partial}itchfork \; 0 , \qquad B:= B^J_{\eta_{k+\frac 12}}(N^{k+\frac34}_J) \cup N^{k+\frac 12}_{0,J} \cup N^{k+\frac 12}_{1,J} \;\subset\; V^k_J .
$$ Now $B$ precompactly contains the neighbourhood $B':= B^J_{\eta_{k+1}}(N^{k+1}_J)\cup N^{k+1}_{0,J} \cup N^{k+1}_{1,J} \subset V^k_J$ of the core and collar $N:= N^{k+1}_J \cup N^{k+1}_{0,J} \cup N^{k+1}_{1,J}$, so that compatibility with the coordinate changes and collars in a) and strong admissibility in c) can be satisfied by requiring $\nu_J|_{B'}=\widetilde\nu_J|_{B'}$. In this abstract setting, we can finish the iterative step word for word as in Proposition~\ref{prop:ext}. This completes the construction of the required perturbation in case (ii) and thus finishes the proof. \end{proof} \subsection{Orientations} {\lambda}bel{ss:vorient} \hspace{1mm}\\ This section develops the theory of orientations of Kuranishi atlases. We use the method of determinant line bundles as in e.g.\ \cite[App.A.2]{MS}, but encountered compatibility issues of sign conventions in the literature, e.g.\ in all editions of \cite{MS}. We resolve these by using a different set of conventions, most closely related to K-theory, and thank Thomas Kragh for helpful discussions. As shown in the recent work of \cite{Z3}, these conventions are consistent with some important naturality properties, a fact which may prove useful in the future development of Kuranishi atlases. While the relevant bundles and sections could just be described as tuples of bundles and sections over the domains of the Kuranishi charts, related by lifts of the coordinate changes, we take this opportunity to develop a general framework of vector bundles over Kuranishi atlases, which are now no longer assumed to be additive or tame. {\beta}gin{defn} {\lambda}bel{def:bundle} A {\bf vector bundle} ${\Lambda}=\bigl({\Lambda}_I,\widetilde{\partial}hi_{IJ}\bigr)_{I,J\in{\mathcal I}_{\mathcal K}}$ {\bf over a weak Kuranishi atlas} ${\mathcal K}$ is a collection $({\Lambda}_I \to U_I)_{I\in {\mathcal I}_{\mathcal K}}$ of vector bundles together with lifts $\bigl(\widetilde {\partial}hi_{IJ}: {\Lambda}_I|_{U_{IJ}}\to {\Lambda}_J\bigr)_{I\subsetneq J}$ of the coordinate changes ${\partial}hi_{IJ}$, that are linear isomorphisms on each fiber and satisfy the weak cocycle condition $\widetilde {\partial}hi_{IK} = \widetilde {\partial}hi_{JK}\circ \widetilde {\partial}hi_{IJ}$ on ${\partial}hi^{-1}_{IJ}(U_{JK})\cap U_{IK}$ for all triples $I\subset J\subset K$. A {\bf section} of a bundle ${\Lambda}$ over ${\mathcal K}$ is a collection of smooth sections ${\sigma}=\bigl( {\sigma}_I: U_I\to {\Lambda}_I \bigr)_{I\in{\mathcal I}_{\mathcal K}}$ that are compatible with the bundle maps $\widetilde{\partial}hi_{IJ}$. In particular, for a vector bundle ${\Lambda}$ with section ${\sigma}$ there are commutative diagrams for each $I\subset J$, \[ \xymatrix{ {\Lambda}_I|_{U_{IJ}} \ar@{->}[d] \ar@{->}[r]^{\;\;\widetilde{\partial}hi_{IJ}} & {\Lambda}_J \ar@{->}[d] \\ U_{IJ}\ar@{->}[r]^{{\partial}hi_{IJ}} & U_J } \qquad\qquad\qquad \xymatrix{ {\Lambda}_I|_{U_{IJ}} \ar@{->}[r]^{\;\;\widetilde{\partial}hi_{IJ}} & {\Lambda}_J \\ U_{IJ} \ar@{->}[u]^{{\sigma}_I} \ar@{->}[r]^{{\partial}hi_{IJ}} & U_J \ar@{->}[u]_{{\sigma}_J} . } \] \end{defn} The following notion of a product bundle will be the first example of a bundle over a Kuranishi cobordism.
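A trivial example of Definition~\ref{def:bundle}, over the atlas ${\mathcal K}$ itself rather than over a cobordism and recorded here only for concreteness, is given for any fixed $m\geq 0$ by
$$
{\Lambda}_I \,:=\; U_I\tildemes{\mathbb R}^m \;\to\; U_I , \qquad \widetilde{\partial}hi_{IJ} \,:=\; {\partial}hi_{IJ}\tildemes{\rm id}_{{\mathbb R}^m} ,
$$
for which the weak cocycle condition is inherited from that of the coordinate changes ${\partial}hi_{IJ}$. A section of this bundle amounts to a collection of smooth maps $f_I:U_I\to{\mathbb R}^m$ (the fibre components of the ${\sigma}_I$) with $f_J\circ{\partial}hi_{IJ}=f_I$ on $U_{IJ}$.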
{\beta}gin{defn} {\lambda}bel{def:prodbun} If ${\Lambda}=\bigl({\Lambda}_I,\widetilde{\partial}hi_{IJ}\bigr)_{I,J\in{\mathcal I}_{\mathcal K}}$ is a bundle over ${\mathcal K}$ and $A\subset [0,1]$ is an interval, then the {\bf product bundle} ${\Lambda}\tildemes A$ over ${\mathcal K}\tildemes A$ is the tuple $\bigl({\Lambda}_I\tildemes A,\widetilde{\partial}hi_{IJ}\tildemes {\rm id}_A\bigr)_{I,J\in{\mathcal I}_{\mathcal K}}$. Here and in the following we denote by ${\Lambda}_I\tildemes A\to U_I\tildemes A$ the pullback bundle under the projection $U_I\tildemes A\to U_I$. \end{defn} {\beta}gin{defn} {\lambda}bel{def:cbundle} A {\bf vector bundle over a weak Kuranishi cobordism} ${\mathcal K}$ is a collection ${\Lambda}=\bigl({\Lambda}_I,\widetilde{\partial}hi_{IJ}\bigr)_{I,J\in{\mathcal I}_{\mathcal K}}$ of vector bundles and bundle maps as in Definition~\ref{def:bundle}, together with a choice of isomorphism from its restriction to a collar of the boundary to a product bundle. More precisely, this requires for ${\alpha}=0,1$ the choice of a {\bf restricted vector bundle} ${\Lambda}|_{{\partial}^{\alpha}{\mathcal K}}= \bigl( {\Lambda}^{\alpha}_I \to {\partial}artial^{\alpha} U_I, \widetilde{\partial}hi^{\alpha}_{IJ}\bigr)_{I,J \in {\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}}$ over ${\partial}^{\alpha}{\mathcal K}$, and, for some ${\varepsilon}>0$ less than the collar width of ${\mathcal K}$, a choice of lifts of the embeddings ${\iota}^{\alpha}_I$ for $I\in{\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}$ to bundle isomorphisms $\tilde{\iota}^{\alpha}_I : {\Lambda}^{\alpha}_I\tildemes A^{\alpha}_{\varepsilon} \to {\Lambda}_I|_{{\rm im\,}{\iota}^{\alpha}_I}$ such that, with $A: = A^{\alpha}_{\varepsilon}$, the following diagrams commute \[ \xymatrix{ {\Lambda}_I^{\alpha}\tildemes A \ar@{->}[d] \ar@{->}[r]^{\tilde{\iota}^{\alpha}_I} & {\Lambda}_I|_{{\rm im\,}{\iota}^{\alpha}_I} \ar@{->}[d] \\ {\partial}artial^{\alpha} U_I\tildemes A \ar@{->}[r]^{{\iota}^{\alpha}_I} & {\rm im\,}{\iota}^{\alpha}_I \subset U_I } \qquad\qquad \xymatrix{ {\Lambda}_I^{\alpha}|_{{\partial}^{\alpha} U_{IJ}} \tildemes A \ar@{->}[r]^{\tilde{\iota}^{{\alpha}}_I} \ar@{->}[d]_{\widetilde{\partial}hi^{\alpha}_{IJ}\tildemes{\rm id}_A} & {\Lambda}_I |_{{\iota}^{\alpha}_I({\partial}^{\alpha} U_{IJ} \tildemes A)} \ar@{->}[d]^{\widetilde{\partial}hi_{IJ}} \\ {\Lambda}_J^{\alpha}\tildemes A \ar@{->}[r]^{\tilde{\iota}^{\alpha}_{J}} & {\Lambda}_J|_{{\rm im\,}{\iota}^{\alpha}_J} } \] A {\bf section} of a vector bundle ${\Lambda}$ over a Kuranishi cobordism as above is a compatible collection $\bigl({\sigma}_I:U_I\to {\Lambda}_I\bigr)_{I\in{\mathcal I}_{\mathcal K}}$ of sections as in Definition~\ref{def:bundle} that in addition have product form in the collar. That is we require that for each ${\alpha}=0,1$ there is a {\bf restricted section} ${\sigma}|_{{\partial}^{\alpha}{\mathcal K}}= ( {\sigma}^{\alpha}_I :{\partial}artial_{\alpha} U_I \to {\Lambda}^{\alpha}_I)_{I\in{\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}}$ of ${\Lambda}|_{{\partial}^{\alpha}{\mathcal K}}$ such that for ${\varepsilon}>0$ sufficiently small we have $(\tilde{\iota}^{\alpha}_I)^*{\sigma}_I = {\sigma}^{\alpha}_I\tildemes {\rm id}_{A^{\alpha}_{\varepsilon}}$. \end{defn} In Definition~\ref{def:bundle} we implicitly worked with an isomorphism $(\tilde{\iota}^{\alpha}_I)_{I\in{\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}}$, which satisfies all but the product structure requirements of the following notion of isomorphisms on Kuranishi cobordisms. 
{\beta}gin{defn} {\lambda}bel{def:buniso} An {\bf isomorphism} $\Psi: {\Lambda}\to {\Lambda}'$ between vector bundles over ${\mathcal K}$ is a collection $(\Psi_I: {\Lambda}_I\to {\Lambda}'_I)_{I\in {\mathcal I}_{\mathcal K}}$ of bundle isomorphisms covering the identity on $U_I$, that intertwine the transition maps, i.e.\ $\widetilde{\partial}hi'_{IJ}\circ\Psi_I|_{U_{IJ}} = \Psi_J \circ \widetilde {\partial}hi_{IJ}|_{U_{IJ}}$ for all $I\subset J$. If ${\mathcal K}$ is a Kuranishi cobordism then we additionally require $\Psi$ to have product form in the collar. That is we require that for each ${\alpha}=0,1$ there is a restricted isomorphism $\Psi|_{{\partial}^{\alpha}{\mathcal K}}= ( \Psi^{\alpha}_I :{\Lambda}^{\alpha}_I \to {\Lambda}'_I\,\!\!^{\alpha})_{I\in{\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}}$ from ${\Lambda}|_{{\partial}^{\alpha}{\mathcal K}}$ to ${\Lambda}'|_{{\partial}^{\alpha}{\mathcal K}}$ such that for ${\varepsilon}>0$ sufficiently small we have $\tilde{\iota}'_I\,\!\!^{\alpha} \circ \bigl(\Psi^{\alpha}_I \tildemes {\rm id}_A\bigr) = \Psi_I \circ \tilde{\iota}^{{\alpha}}_I$ on ${\partial}artial^{\alpha} U_I\tildemes A^{\alpha}_{\varepsilon}$. \end{defn} {\beta}gin{remark}\rm In the newly available language, Definition~\ref{def:cbundle} of a bundle on a Kuranishi cobordism requires isomorphisms (without product structure on the collar) for ${\alpha}=0,1$ from the product bundle ${\Lambda}|_{{\partial}^{\alpha}{\mathcal K}}\tildemes A^{\alpha}_{\varepsilon}$ to the ${\varepsilon}$-collar restriction $({\iota}ta^{\alpha}_{\varepsilon})^*{\Lambda} := \bigl(({\iota}ta^{\alpha}_I)^*{\Lambda}_I , ({\iota}ta^{\alpha}_J)^* \circ \widetilde{\partial}hi_{IJ} \circ ({\iota}ta^{\alpha}_I)_* \bigr)_{I,J\in{\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}}$, given by the collection of pullback bundles and isomorphisms under the embeddings ${\iota}ta^{\alpha}_I : {\partial}artial^{\alpha} U_I \tildemes A^{\alpha}_{\varepsilon} \to U_I$. \end{remark} Note that, although the compatibility conditions are the same, the canonical section $s_{\mathcal K} = ( s_I : U_I\to E_I)_{I\in{\mathcal I}_{\mathcal K}}$ of a Kuranishi atlas does not form a section of a vector bundle since the obstruction spaces $E_I$ are in general not of the same dimension, hence no bundle isomorphisms $\widetilde{\partial}hi_{IJ}$ as above exist. Nevertheless, we will see that, there is a natural bundle associated with the section $s_{\mathcal K}$, namely its determinant line bundle, and that this line bundle is isomorphic to a bundle constructed by combining the determinant lines of the obstruction spaces $E_I$ and the domains $U_I$. Here and in the following we will exclusively work with finite dimensional vector spaces. First recall that the determinant line of a vector space $V$ is its maximal exterior power $\Lambda^{\rm max}\, V := \wedge^{\dim V}\,V$, with $\wedge^0\,\{0\} :={\mathbb R}$. More generally, the {\bf determinant line of a linear map} $D:V\to W$ is defined to be $$ {\delta}t(D):= \Lambda^{\rm max}\,\ker D \otimes \bigl( \Lambda^{\rm max}\, \bigl( \qu{W}{{\rm im\,} D} \bigr) \bigr)^*. $$ In order to construct isomorphisms between determinant lines, we will need to fix various conventions, in particular pertaining to the ordering of factors in their domains and targets. 
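As an elementary illustration of the determinant line of a linear map (the map below is chosen purely for concreteness), consider
$$
D\,:\;{\mathbb R}^3\to{\mathbb R}^2 , \qquad D(x_1,x_2,x_3) \,:=\; (x_1,0) .
$$
Here $\ker D={\rm span}(e_2,e_3)$ and ${\rm im\,} D={\mathbb R}\tildemes\{0\}$, so that $\qu{{\mathbb R}^2}{{\rm im\,} D}$ is spanned by the class $[f_2]$ of the second standard basis vector $f_2\in{\mathbb R}^2$, and
$$
{\delta}t(D)\;=\;\Lambda^{\rm max}\,\ker D\otimes\bigl(\Lambda^{\rm max}\,\bigl(\qu{{\mathbb R}^2}{{\rm im\,} D}\bigr)\bigr)^* \;=\;{\mathbb R}\cdot\bigl(e_2\wedge e_3\bigr)\otimes[f_2]^*
$$
is a real line, as is ${\delta}t(D)$ for every linear map $D$ between finite dimensional vector spaces.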
We begin by noting that every isomorphism $F: Y \to Z$ between finite dimensional vector spaces induces an isomorphism {\beta}gin{equation}{\lambda}bel{eq:laphi} {\Lambda}_F :\; \Lambda^{\rm max}\, Y \;\overlineerset{\cong}{\longrightarrow}\; \Lambda^{\rm max}\, Z , \qquad y_1\wedge\ldots \wedge y_k \mapsto F(y_1)\wedge\ldots \wedge F(y_k) . \end{equation} For example, if $I\subsetneq J$ and $x\in U_{IJ}$, it follows from the index condition in Definition~\ref{def:change} that the map for $x\in U_{IJ}$ {\beta}gin{equation}{\lambda}bel{eq:bunIJ} {\Lambda}_{IJ}(x): = {\Lambda}_{{\rm d}_x{\partial}hi_{IJ}} \otimes \bigl({\Lambda}_{[\widehat{\partial}hi_{IJ}^{-1}]}\bigr)^* \, :\; {\delta}t({\rm d}_x s_I) \to {\delta}t({\rm d}_{{\partial}hi_{IJ}(x)} s_J) \end{equation} is an isomorphism, induced by the isomorphisms ${\rm d}{\partial}hi_{IJ}:\ker{\rm d} s_I\to\ker{\rm d} s_J$ and $[\widehat{\partial}hi_{IJ}] : \qu{E_I}{{\rm im\,}{\rm d} s_I}\to\qu{E_J}{{\rm im\,}{\rm d} s_J}$. With this, we can define the determinant bundle ${\delta}t(s_{\mathcal K})$ of a Kuranishi atlas. A second, isomorphic, determinant line bundle ${\delta}t({\mathcal K})$ with fibers $\Lambda^{\rm max}\, {\rm T}_x U_I \otimes \bigl( \Lambda^{\rm max}\, E_I \bigr)^*$ will be constructed in Proposition~\ref{prop:orient}. {\beta}gin{defn} {\lambda}bel{def:det} The {\bf determinant line bundle} of a weak Kuranishi atlas (or cobordism) ${\mathcal K}$ is the vector bundle ${\delta}t(s_{\mathcal K})$ given by the line bundles $$ {\delta}t({\rm d} s_I):=\bigcup_{x\in U_I} {\delta}t({\rm d}_x s_I) \;\to\; U_I \qquad \text{for}\; I\in{\mathcal I}_{\mathcal K}, $$ and the isomorphisms ${\Lambda}_{IJ}(x)$ in \eqref{eq:bunIJ} for $I\subsetneq J$ and $x\in U_{IJ}$. \end{defn} To show that ${\delta}t(s_{\mathcal K})$ is well defined, in particular that $x\mapsto {\Lambda}_{IJ}(x)$ is smooth, we introduce some further natural\footnote{ Here a ``natural" isomorphism is one that is functorial, i.e.\ it commutes with the action on both sides induced by a vector space isomorphism.} isomorphisms and fix various ordering conventions. {\beta}gin{itemlist} \item For any subspace $V'\subset V$ the {\bf splitting isomorphism} {\beta}gin{equation}{\lambda}bel{eq:VW} \Lambda^{\rm max}\, V\cong \Lambda^{\rm max}\, V'\otimes \Lambda^{\rm max}\,\bigl( \qu{V}{V'}\bigr) \end{equation} is given by completing a basis $v_1,\ldots,v_k$ of $V'$ to a basis $v_1,\ldots,v_n$ of $V$ and mapping $v_1\wedge \ldots \wedge v_n \mapsto (v_1\wedge \ldots \wedge v_k) \otimes ([v_{k+1}]\wedge \ldots\wedge [v_n])$. \item For each isomorphism $F:Y\overlineerset{\cong}{\to} Z$ the {\bf contraction isomorphism} {\beta}gin{equation} {\lambda}bel{eq:quotable} \mathfrak{c}_F \,:\; \Lambda^{\rm max}\, Y \otimes \bigl( \Lambda^{\rm max}\, Z \bigr)^* \;\overlineerset{\cong}{\longrightarrow}\; {\mathbb R} , \end{equation} is given by the map $\bigl(y_1\wedge\ldots \wedge y_k\bigr) \otimes \eta \mapsto \eta\bigl(F(y_1)\wedge \ldots \wedge F(y_k)\bigr)$. 
\item For any space $V$ we use the {\bf duality isomorphism} {\beta}gin{equation}{\lambda}bel{eq:dual} \Lambda^{\rm max}\, V^* \;\overlineerset{\cong}{\longrightarrow}\; (\Lambda^{\rm max}\, V)^*, \qquad v_1^*\wedge\dots\wedge v_n^* \;\longmapsto\; (v_1\wedge\dots\wedge v_n)^* , \end{equation} which corresponds to the natural pairing $$ \Lambda^{\rm max}\, V \otimes \Lambda^{\rm max}\, V^* \;\overlineerset{\cong}{\longrightarrow}\; {\mathbb R} , \qquad \bigl(v_1\wedge\dots\wedge v_n\bigr) \otimes \bigl(\eta_1\wedge\dots\wedge \eta_n\bigr) \;\mapsto\; {\partial}rod_{i=1}^n \eta_i(v_i) $$ via the general identification (which in the case of line bundles $A,B$ maps $\eta\neq0$ to a nonzero homomorphism, i.e.\ an isomorphism) {\beta}gin{equation}{\lambda}bel{eq:homid} {\rm Hom}(A\otimes B,{\mathbb R}) \;\overlineerset{\cong}{\longrightarrow}\; {\rm Hom}(B, A^*) ,\qquad H \;\longmapsto\; \bigl( \; b \mapsto H(\cdot \otimes b) \; \bigr) . \end{equation} \end{itemlist} { }{\mathbb N}I Next, we combine the above isomorphisms to obtain a more elaborate contraction isomorphism. {\beta}gin{lemma} {\lambda}bel{lem:get} Every linear map $F:V\to W$ together with an isomorphism ${\partial}hi:K\to \ker F$ induces an isomorphism {\beta}gin{align}{\lambda}bel{Cfrak} \mathfrak{C}^{{\partial}hi}_F \,:\; \Lambda^{\rm max}\, V \otimes \bigl(\Lambda^{\rm max}\, W \bigr)^* &\;\overlineerset{\cong}{\longrightarrow}\; \Lambda^{\rm max}\, K \otimes \bigl(\Lambda^{\rm max}\, \bigl( \qu{W}{F(V)}\bigr) \bigr)^* \end{align} given by {\beta}gin{align} (v_1\wedge\dots v_n)\otimes(w_1\wedge\dots w_m)^* &\;\longmapsto\; \bigl({\partial}hi^{-1}(v_1)\wedge\dots {\partial}hi^{-1}(v_k)\bigr)\otimes \bigl( [w_1]\wedge\dots [w_{m-n+k}] \bigr)^* , \notag \end{align} where $v_1,\ldots,v_n$ is a basis for $V$ with ${\rm span}(v_1,\ldots,v_k)=\ker F$, and $w_1,\dots, w_m$ is a basis for $W$ whose last $n-k$ vectors are $w_{m-n+i}=F(v_i)$ for $i=k+1,\ldots,n$. In particular, for every linear map $D:V\to W$ we may pick ${\partial}hi$ as the inclusion $K=\ker D\hookrightarrow V$ to obtain an isomorphism $$ \mathfrak{C}_{D} \,:\; \Lambda^{\rm max}\, V \otimes \bigl(\Lambda^{\rm max}\, W \bigr)^* \;\overlineerset{\cong}{\longrightarrow}\; {\delta}t(D) . $$ \end{lemma} {\beta}gin{proof} We will construct $\mathfrak{C}^{{\partial}hi}_F$ by composition of several isomorphisms. As a first step let $F(V)^{\partial}erp\subset W^*$ be the annihilator of $F(V)$ in $W^*$, then the splitting isomorphism \eqref{eq:VW} identifies $\Lambda^{\rm max}\, W^*$ with $\Lambda^{\rm max}\, ( F(V)^{\partial}erp )\otimes \Lambda^{\rm max}\, \bigl(\qu{W^*}{F(V)^{\partial}erp}\bigr)$. Next, we apply \eqref{eq:laphi} to the isomorphisms $F(V)^{\partial}erp \overlineerset{\cong}{\to} \bigl(\qu{W}{F(V)}\bigr)^*$ and $\qu{W^*}{F(V)^{\partial}erp}\overlineerset{\cong}{\to} F(V)^*$, and apply the duality isomorphism \eqref{eq:dual} in all factors to obtain the isomorphism $$ S_W \,:\; \bigl(\Lambda^{\rm max}\, W\bigr)^* \;\overlineerset{\cong}{\longrightarrow}\; \bigl(\Lambda^{\rm max}\, \bigl(\qu{W}{F(V)}\bigr)\bigr)^* \otimes \bigl( \Lambda^{\rm max}\, F(V) \bigr)^* $$ given by $(w_1\wedge \ldots \wedge w_m)^* \mapsto ([w_1]\wedge \ldots\wedge [w_\ell])^* \otimes (w_{\ell+1}\wedge \ldots \wedge w_m)^*$ for any basis $w_1,\ldots,w_m$ of $W$ whose last elements $w_{\ell+i}$ for $i=1,\ldots,m-\ell=n-k$ span $F(V)$. 
On the other hand, we apply the splitting isomorphism \eqref{eq:VW} for $\ker F\subset V$ and \eqref{eq:laphi} for ${\partial}hi^{-1}: \ker F\to K$ to obtain an isomorphism $$ S_V \,:\; \Lambda^{\rm max}\, V \;\overlineerset{\cong}{\longrightarrow}\; \Lambda^{\rm max}\, K \otimes \Lambda^{\rm max}\, \bigl(\qu{V}{{\partial}hi(K)}\bigr) $$ given by $v_1\wedge \ldots \wedge v_n \mapsto ({\partial}hi^{-1}(v_1)\wedge \ldots\wedge {\partial}hi^{-1}(v_k)) \otimes ([v_{k+1}]\wedge \ldots \wedge [v_n])$ for any basis $v_1,\ldots,v_n$ of $V$ such that $v_1,\ldots,v_k$ spans $\ker F$. Next, note that $F$ descends to an isomorphism $[F] : \qu{V}{{\partial}hi(K)} \overlineerset{\cong}{\to} F(V)$, so we wish to apply the contraction isomorphism $$ \mathfrak{c}_{[F]} : \Lambda^{\rm max}\, \bigl(\qu{V}{{\partial}hi(K)}\bigr) \otimes \bigl( \Lambda^{\rm max}\, F(V) \bigr)^* \to {\mathbb R} $$ from \eqref{eq:quotable}. Since these factors do not appear adjacent after applying $S_V\otimes S_W$, we compose $S_W$ with an additional reordering isomorphism -- noting that we do not introduce signs in switching factors here -- $$ R \,:\; A \otimes B \overlineerset{\cong}{\longrightarrow}\; B \otimes A , \qquad a \otimes b \; \longmapsto\; b \otimes a . $$ Finally, using the natural identification $\Lambda^{\rm max}\, K \otimes {\mathbb R}\otimes \bigl(\Lambda^{\rm max}\, \bigl(\qu{W}{F(V)}\bigr)\bigr)^* \cong \Lambda^{\rm max}\, K \otimes \bigl(\Lambda^{\rm max}\, \bigl(\qu{W}{F(V)}\bigr)\bigr)^*$ we obtain an isomorphism $$ \bigl( {\rm id}_{ \Lambda^{\rm max}\, K }\otimes \mathfrak{c}_{[F]} \otimes {\rm id}_{ (\Lambda^{\rm max}\, (\qu{W}{F(V)}))^*} \bigr) \circ \bigl( S_V \otimes (R\circ S_W) \bigr) . $$ To see that it coincides with $\mathfrak{C}_F^{\partial}hi$ as described in the statement, note that -- using the bases as above -- it maps $(v_1\wedge \ldots \wedge v_n) \otimes(w_1\wedge \ldots \wedge w_m)^*$ to $({\partial}hi^{-1}(v_1)\wedge \ldots\wedge {\partial}hi^{-1}(v_k)) \otimes ([w_1]\wedge \ldots\wedge [w_\ell])^*$ multiplied by the factor $(w_{\ell+1}\wedge \ldots \wedge w_m)^*\bigl(F(v_{k+1})\wedge \ldots \wedge F(v_n)\bigr)$, and that the latter equals $1$ if we choose $w_{\ell+i}=F(v_i)$ for $i=1,\ldots, n-k$. Note here that the existence of the isomorphism $[F]$ implies $m-\ell = n-k$, so that $m-n=\ell-k$, and hence $w_{m-n+(k+i)}=w_{\ell+i}$. \end{proof} {\beta}gin{prop}{\lambda}bel{prop:det0} For any weak Kuranishi atlas, ${\delta}t(s_{\mathcal K})$ is a well defined line bundle over ${\mathcal K}$. Further, if ${\mathcal K}$ is a weak Kuranishi cobordism, then ${\delta}t(s_{\mathcal K})$ can be given product form on the collar of ${\mathcal K}$ with restrictions ${\delta}t(s_{\mathcal K})|_{{\partial}^{\alpha}{\mathcal K}} = {\delta}t(s_{{\partial}^{\alpha}{\mathcal K}})$ for ${\alpha}= 0,1$. The required bundle isomorphisms from the product ${\delta}t(s_{{\partial}^{\alpha}{\mathcal K}})\tildemes A^{\alpha}_{\varepsilon}$ to the collar restriction $({\iota}^{\alpha}_{\varepsilon})^*{\delta}t(s_{\mathcal K})$ are given in \eqref{orient map}. \end{prop} {\beta}gin{proof} To see that ${\delta}t(s_{\mathcal K})$ is a line bundle over ${\mathcal K}$, we first note that each topological bundle ${\delta}t({\rm d} s_I)$ is a smooth line bundle, since it has compatible local trivializations ${\delta}t({\rm d} s_I)\cong\Lambda^{\rm max}\,\ker({\rm d} s_I \oplus R_I)$ induced from constant linear injections $R_I:{\mathbb R}^{N}\to E_I$ which locally cover the cokernel, see e.g.\ \cite[Appendix~A.2]{MS}.
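The role of the stabilizing injections $R_I$ can be seen in a one dimensional model (included here only as an illustration): for $s:{\mathbb R}\to{\mathbb R}$, $s(x)=x^2$, the kernel $\ker{\rm d}_x s$ jumps from $\{0\}$ for $x\neq 0$ to ${\mathbb R}$ at $x=0$, whereas with $R:={\rm id}_{\mathbb R}$ the stabilized kernels
$$
\ker({\rm d}_x s\oplus R) \;=\; \bigl\{ (v,r)\in{\mathbb R}\oplus{\mathbb R} \,\big|\, 2xv+r=0 \bigr\} \;=\; {\mathbb R}\cdot(1,-2x)
$$
form a smoothly varying family of lines, and it is from such families that ${\delta}t({\rm d} s_I)$ inherits its local trivializations.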
There are various natural ways to define these maps; the crucial choice is the sign in equation \eqref{Cfrak}. \footnote{ See \cite{Z3} for a discussion of the different conventions. Changing the sign in \eqref{Cfrak} for example by the factor $(-1)^{n-k}$ affects the local trivializations (and hence the topology of the determinant bundle) because \eqref{Cfrak} is applied below to the family of operators $F_x, x\in U_I,$ the dimension of whose kernels varies with $x$.} At each point $x\in U_I$ we will use the contraction map $\mathfrak{C}^{{\partial}hi_x}_{F_x}$ of Lemma~\ref{lem:get} for the linear map $F_x$ and isomorphism to its kernel ${\partial}hi_x$, where {\beta}gin{align*} F_x \,:\; \ker({\rm d}_x s_I \oplus R_I) &\; \to \; {\rm im\,} R_I \;\subset\; E_I , \qquad\qquad\qquad\qquad (v,r) \;\mapsto\; {\rm d}_x s_I(v) , \\ {\partial}hi_x \,:\;\;\, K := \ker {\rm d}_x s_I &\;\to\; \ker( {\rm d}_x s_I \oplus R_I) \; \subset \; {\rm T}_x U_I\oplus {\mathbb R}^N , \qquad k\mapsto (k,0) . \end{align*} Note here that $\ker({\rm d}_x s_I \oplus R_I)=\bigl\{(v,r)\in {\rm T}_x U_I\oplus {\mathbb R}^N \,\big|\, {\rm d}_x s_I (v) = - R_I(r) \bigr\}$, so that $F_x$ indeed maps to ${\rm im\,} R_I$ with $F_x(v,r)=-R_I(r)$, and its image is ${\rm im\,} F_x = {\rm im\,} {\rm d}_x s_I \cap {\rm im\,} R_I$. If we restrict $x$ to an open set $O\subset U_I$ on which ${\rm d}_x s_I\oplus R_I$ is surjective, then the inclusion ${\rm im\,} R_I \hookrightarrow E_I$ induces an isomorphism $$ {\iota}_x \,: \; \qu{{\rm im\,} R_I}{{\rm im\,} {\rm d}_x s_I \cap {\rm im\,} R_I} \; \overlineerset{\cong}{\longrightarrow} \; \qu{E_I}{{\rm im\,} {\rm d}_x s_I } . $$ Indeed, ${\iota}_x$ is surjective since $E_I = {\rm im\,} {\rm d}_x s_I + {\rm im\,} R_I$ and injective by construction. Hence \eqref{eq:laphi} together with dualization defines an isomorphism ${\Lambda}_{{\iota}_x}^* : \bigl( \Lambda^{\rm max}\, \qu{E_I}{{\rm im\,} {\rm d}_x s_I }\bigr)^* \to \bigl( \qu{{\rm im\,} R_I}{{\rm im\,} {\rm d}_x s_I \cap {\rm im\,} R_I} \bigr)^*$, which we invert and compose with the contraction isomorphism of Lemma~\ref{lem:get} to obtain isomorphisms {\beta}gin{align}\notag T_{I,x} \, := \; \bigl({\rm id}_{\Lambda^{\rm max}\,\ker {\rm d}_x s_I}\otimes ({\Lambda}_{{\iota}_x}^*)^{-1}\bigr) \circ {\mathfrak C}^{{\partial}hi_x}_{F_x} \;:\;\;\; & \Lambda^{\rm max}\, \ker({\rm d}_x s_I \oplus R_I) \otimes \bigl( \Lambda^{\rm max}\, {\rm im\,} R_I \bigr)^* \\ &\;\overlineerset{\cong}{\longrightarrow}\; \Lambda^{\rm max}\,\ker {\rm d}_x s_I \otimes \bigl(\Lambda^{\rm max}\, \bigl(\qu{E_I}{{\rm im\,} {\rm d}_x s_I }\bigr)\bigr)^*. 
{\lambda}bel{eq:TIx} \end{align} Precomposing this with the isomorphism ${\mathbb R}\cong \Lambda^{\rm max}\, {\mathbb R}^N\stackrel{{\Lambda}_{R_I}}\cong \Lambda^{\rm max}\, {\rm im\,} R_I$ from \eqref{eq:laphi}, we obtain a trivialization of ${\delta}t({\rm d} s_I)|_O$ given by isomorphisms {\beta}gin{align}\notag \widehat T_{I,x} \,:\; \Lambda^{\rm max}\, \ker({\rm d}_x s_I \oplus R_I) &\;\overlineerset{\cong}{\longrightarrow}\; \qquad\qquad {\delta}t({\rm d}_x s_I) \\ {\lambda}bel{eq:HatTx} \overline v_1\wedge \ldots \wedge \overline v_n &\;\longmapsto\; (v_1\wedge\dots v_k)\otimes \bigl( [R_I(e_1)]\wedge\dots [R_I(e_{N-n+k})] \bigr)^*, \end{align} where $\overline v_i=(v_i,r_i)$ is a basis of $\ker({\rm d}_x s_I \oplus R_I)$ such that $v_1,\ldots,v_k$ span $\ker {\rm d}_x s_I$ (and hence $r_1=\ldots=r_k=0$), and $e_{1},\ldots, e_{N}$ is a positively ordered normalized basis of ${\mathbb R}^N$ (that is $e_1\wedge\ldots e_N = 1 \in {\mathbb R} \cong \Lambda^{\rm max}\,{\mathbb R}^N$) such that $R_I(e_{N-n+i}) = {\rm d}_x s_I(v_i)$ for $i = k+1,\ldots, n$. In particular, the last $n-k$ vectors span ${\rm im\,} {\rm d}_x s_I \cap {\rm im\,} R_I \subset E_I$, and thus the first $N-n+k$ vectors $[R_I(e_1)],\dots, [R_I(e_{N-n+k})]$ span the cokernel $\qu{E_I}{{\rm im\,}{\rm d}_x s_I}\cong\qu{{\rm im\,} R_I}{{\rm im\,}{\rm d}_x s_I\cap {\rm im\,} R_I}$. Next, we show that these trivializations do not depend on the choice of injection $R_I:{\mathbb R}^N\to E_I$. Indeed, given another injection $R_I':{\mathbb R}^{N'}\to E_I$ that also maps onto the cokernel of ${\rm d} s_I$, we can choose a third injection $R_I'':{\mathbb R}^{N''}\to E_I$ that is surjective, and compare it to both of $R_I, R_I'$. Hence it suffices to consider the following two cases: {\beta}gin{itemlist} \item $N=N'$ and $R_I = R_I'\circ {\iota}$ for a bijection ${\iota}: {\mathbb R}^{N} \overlineerset{\cong}{\to} {\mathbb R}^{N'}$; \item $N<N'$ and $R_I=R_I'\circ{\partial}r$ for the canonical projection ${\partial}r: {\mathbb R}^{N'}\to {\mathbb R}^N\tildemes\{0\} \cong{\mathbb R}^N$. \end{itemlist} In the second case denote by ${\iota}: {\mathbb R}^{N}\to {\mathbb R}^N\tildemes\{0\}\subset{\mathbb R}^{N'}$ the canonical injection, then in both cases we have $R_I = R_I'\circ {\iota}$, and thus ${\rm id} \tildemes {\iota}$ induces an injection $\ker({\rm d} s_I\oplus R_I)\to \ker({\rm d} s_I\oplus R_I')$ so that there is a well defined quotient bundle $\qu{\ker({\rm d} s_I\oplus R_I')}{\ker({\rm d} s_I\oplus R_I)} \to U_I$. In case $N<N'$ we claim that an appropriately scaled choice of local trivialization for this quotient over an open set $O\subset U_I$, on which both trivializations of ${\delta}t({\rm d} s_I)|_O$ are defined, induces a bundle isomorphism $\Psi: \Lambda^{\rm max}\, \ker({\rm d} s_I\oplus R_I)|_O\to \Lambda^{\rm max}\, \ker({\rm d} s_I\oplus R_I')|_O$ that is compatible with the trivializations $\widehat T_I$ and $\widehat T_I': \Lambda^{\rm max}\, \ker({\rm d} s_I\oplus R'_I)|_O \to {\delta}t({\rm d} s_I)|_O$ constructed as in \eqref{eq:HatTx}, that is $\widehat T_I = \widehat T_I' \circ \Psi$. To define $\Psi$, let $n:=\dim \ker({\rm d} s_I\oplus R_I)$ and fix a trivialization of the quotient, that is a family of smooth sections $\bigl(\overline v^\Psi_{i}=(v^\Psi_{i},r^\Psi_{i})\bigr)_{i=n+1, \ldots,n'}$ of $\ker({\rm d} s_I\oplus R_I')|_O$ with $n':= n+N'-N$, that induces a basis for the quotient space at each point $x\in O$. 
Here we may want to rescale $\overline v^\Psi_{n'}$ by a nonzero real, as discussed below. Then for fixed $x\in O$, any choice of basis $(\overline v_i)_{i=1,\ldots,n}$ of $\ker({\rm d}_x s_I\oplus R_I)$ induces a basis $({\rm id} \tildemes {\iota})(\overline v_1),\dots, ({\rm id} \tildemes {\iota})(\overline v_n), \overline v^\Psi_{n+1}, \ldots, \overline v^\Psi_{n'}$ of $ \ker({\rm d}_x s_I\oplus R_I')$, and we define $\Psi$ by $$ \Psi_x \,: \; \overline v_1\wedge\dots\wedge \overline v_n \;\mapsto\; ({\rm id} \tildemes {\iota})(\overline v_1)\wedge\dots\wedge ({\rm id} \tildemes {\iota})(\overline v_n)\wedge \overline v^\Psi_{n+1}(x)\wedge \ldots \wedge \overline v^\Psi_{n'}(x) , $$ which varies smoothly with $x\in O$. It remains to show that, for appropriate choice of the sections $\overline v^\Psi_i$, we have $\widehat T_{I,x} = \widehat T_{I,x}' \circ \Psi_x$ for any fixed $x\in O$. For that purpose we express the trivializations $\widehat T_{I,x}$ and $\widehat T'_{I,x}$ as in \eqref{eq:HatTx}. This construction begins by choosing a basis $(\overline v_i)_{i=1,\dots,n}$ of $\ker ({\rm d}_x s_I \oplus R_I)$, where the first $k$ elements $\overline v_i=(v_i,0)$ span $\ker {\rm d}_x s_I\tildemes\{0\}$. A compatible choice of basis $(\overline v'_i)_{i=1,\dots,n'}$ for $\ker ({\rm d}_x s_I \oplus R'_I)$ is given by $\overline v'_i := ({\rm id} \tildemes {\iota})(\overline v_i)$ for $i=1,\dots,n$, and $\overline v'_i:= \overline v^\Psi_i$ for $i=n+1,\ldots,n'$. Note here that $\overline v'_i = \overline v_i$ for $i=1,\ldots,k$. Next, one chooses a positively ordered normalized basis $e_{1},\ldots, e_{N}$ of ${\mathbb R}^N$ such that $R_I(e_{N-n+i}) = {\rm d}_x s_I(v_i)$ for $i = k+1,\ldots, n$. Then the first $N-n+k$ vectors $[R_I(e_1)],\dots, [R_I(e_{N-n+k})]$ coincide with $[R'_I({\iota}(e_1))],\dots, [R'_I({\iota}(e_{N'-n'+k}))]$ and span the cokernel $\qu{E_I}{{\rm im\,}{\rm d}_x s_I}$, and the last $n-k$ vectors span ${\rm im\,} {\rm d}_x s_I \cap {\rm im\,} R_I \subset E_I$. So we obtain a corresponding basis $e'_1,\ldots, e'_{N'}$ of ${\mathbb R}^{N'}$ by taking $e'_i = {\iota}(e_i)$ for $i=1,\ldots,N$ and $e'_{N+i} = (R'_I)^{-1}\bigl( {\rm d}_x s_I(v^\Psi_{n+i}(x))\bigr)$ for $i=1,\ldots, N'-N = n'-n$. To obtain the correct definition of $\widehat T'_{I,x}$, we then rescale $v^\Psi_{n'}$ by the reciprocal of {\beta}gin{align*} {\lambda}(x) &\,:=\; {\iota}(e_1)\wedge \ldots \wedge {\iota}(e_N) \wedge (R'_I)^{-1}\bigl( {\rm d}_x s_I(v^\Psi_{n+1}(x))\bigr) \wedge\ldots\wedge (R'_I)^{-1}\bigl( {\rm d}_x s_I(v^\Psi_{n'}(x))\bigr) \\ & \;\in\; \Lambda^{\rm max}\, {\mathbb R}^{N'} \;\cong \; {\mathbb R} , \end{align*} such that $e'_1,\ldots, e'_{N'-1}, {\lambda}(x)^{-1} e'_{N'}$ becomes positively ordered and normalized. Note here that ${\lambda}:O\to{\mathbb R}$ is a smooth nonvanishing function of $x$, depending only on the sections $v^\Psi_{n+1}(x), \ldots, v^\Psi_{n'}(x)$ since ${\iota}(e_i)=(e_i,0)$ are a positively ordered normalized basis of ${\mathbb R}^N\tildemes\{0\}\subset{\mathbb R}^{N'}$ for all $x\in O$. Thus $v^\Psi_{n+1}(x), \ldots, {\lambda}(x)^{-1} v^\Psi_{n'}(x)$ defines a smooth trivialization of the quotient bundle $\qu{\ker({\rm d} s_I\oplus R_I')}{\ker({\rm d} s_I\oplus R_I)} \to O$, for which the induced map $\Psi$ now provides the claimed compatibility.
Indeed, we have by construction {\beta}gin{align} \bigl( \widehat T'_{I,x}\circ\Psi_x \bigr) \bigl( \overline v_1\wedge\dots\wedge \overline v_n\bigr) &\;=\; (v_1\wedge\dots\wedge v_k)\otimes \bigl([R_I'(e_1')]\wedge\dots\wedge [R_I'(e_{N'-n'+k}')] \bigr)^* \notag \\ \notag &\;=\; (v_1\wedge\dots\wedge v_k)\otimes \bigl([R_I(e_1)]\wedge\dots\wedge [R_I(e_{N-n+k})]\bigr)^* \\ {\lambda}bel{tpsit} &\;=\; \widehat T_{I,x}\bigl(\overline v_1\wedge\dots\wedge \overline v_n\bigr). \end{align} In case $N=N'$ we define an isomorphism $\Psi$ as above, which however does not depend on any choice of vectors $\overline v^\Psi_i$. Then in the above calculation of $\widehat T_{I,x}$ and $\widehat T'_{I,x}$, the factor ${\lambda} = {\iota}(e_1) \wedge \ldots \wedge {\iota}(e_{N})$ is constant (equal to the determinant of ${\iota} = (R_I')^{-1}\circ R_I$), and hence ${\lambda}^{-1}\Psi$ intertwines the trivializations $\widehat T_I$ and $\widehat T_I'$. This completes the proof that the local trivializations of ${\delta}t({\rm d} s_I)$ do not depend on the choice of $R_I$. In particular, ${\delta}t({\rm d} s_I)$ is a smooth line bundle over $U_I$ for each $I\in{\mathcal I}_{\mathcal K}$. To complete the proof that ${\delta}t(s_{\mathcal K})$ is a vector bundle we must check that the lifts ${\Lambda}_{IJ}$ given in \eqref{eq:bunIJ} of the coordinate changes $\Phi_{IJ}$ are smooth bundle isomorphisms. Since the ${\Lambda}_{IJ}(x)$ are constructed to be fiberwise isomorphisms, and the weak cocycle condition for the coordinate changes transfers directly to these bundle maps, the nontrivial step is to check that ${\Lambda}_{IJ}(x)$ varies smoothly with $x\in U_{IJ}$. For that purpose note that any trivialization $\widehat T_I$ near a given point $x_0\in U_{IJ}$ using a choice of $R_I$ as above induces a trivialization $\widehat T_J$ of ${\delta}t({\rm d} s_J)$ near ${\partial}hi_{IJ}(x_0)\in U_J$ using the injection $R_J:=\widehat{\partial}hi_{IJ}\circ R_I$, since by the index condition $\widehat{\partial}hi_{IJ}$ identifies the cokernels. We claim that these local trivializations transform ${\Lambda}_{IJ}(x)$ into the isomorphisms ${\Lambda}_{{\rm d}_x {\partial}hi_{IJ} \oplus {\rm id}_{{\mathbb R}^{N}} }$ of \eqref{eq:laphi} induced by the smooth family of isomorphisms $$ {\rm d}_x{\partial}hi_{IJ} \oplus {\rm id}_{{\mathbb R}^{N}} \,:\; \ker({\rm d}_x s_I \oplus R_I) \;\overlineerset{\cong}{\longrightarrow}\; \ker\bigl({\rm d}_{{\partial}hi_{IJ}(x)} s_J \oplus (\widehat{\partial}hi_{IJ}\circ R_I)\bigr) . $$ Note here that these embeddings are surjective since for $(v,z)\in {\rm T} U_J \tildemes {\mathbb R}^{N}$ with ${\rm d} s_J(v)=-\widehat{\partial}hi_{IJ}(R_I(z))$ the tangent bundle condition ${\rm im\,}{\rm d} s_J \cap {\rm im\,}\widehat{\partial}hi_{IJ}={\rm d} s_J ({\rm im\,}{\rm d}{\partial}hi_{IJ})$ from Lemma~\ref{le:change}, the partial index condition $\ker{\rm d} s_J \subset {\rm im\,}{\rm d}{\partial}hi_{IJ}$, and injectivity of $\widehat{\partial}hi_{IJ}$ imply $v\in {\rm im\,}{\rm d}{\partial}hi_{IJ}$, say $v={\rm d}{\partial}hi_{IJ}(w)$, with ${\rm d} s_I(w)=-R_I(z)$. Moreover, ${\rm d}_x{\partial}hi_{IJ}$ varies smoothly with $x\in U_{IJ}$, and hence ${\Lambda}_{{\rm d}_x{\partial}hi_{IJ} \oplus {\rm id}_{{\mathbb R}^{N}} }$ varies smoothly with $x\approx x_0$.
So to prove smoothness of ${\Lambda}_{IJ}$ near $x_0$ it suffices to prove the transformation as claimed, i.e.\ at fixed $x\in U_{IJ}$ {\beta}gin{equation}{\lambda}bel{eq:etrans} \widehat T_{J,{\partial}hi_{IJ}(x)}\circ {\Lambda}_{{\rm d}_x{\partial}hi_{IJ} \oplus {\rm id}_{{\mathbb R}^{N}}} \;=\; {\Lambda}_{IJ}(x) \circ \widehat T_{I,x} . \end{equation} For that purpose we may simply compare the explicit maps given in \eqref{eq:HatTx}. So let $\overline v_i=(v_i,r_i)$ be a basis of $\ker({\rm d}_x s_I \oplus R_I)$ such that $v_1,\ldots,v_k$ span $\ker {\rm d}_x s_I$. Then, correspondingly, $\overline v'_i=\bigl({\rm d}_x{\partial}hi_{IJ} \oplus {\rm id}_{{\mathbb R}^{N}}\bigr)(\overline v_i)$ is a basis of $\ker({\rm d}_{{\partial}hi_{IJ}(x)} s_J \oplus R_J)$ such that $v'_i={\rm d}_x{\partial}hi_{IJ}(v_i)$ for $i=1,\ldots,k$ span $\ker {\rm d}_{{\partial}hi_{IJ}(x)} s_J$. Next, let $e_{1},\ldots, e_{N}$ be a positively ordered normalized basis of ${\mathbb R}^N$ such that $R_I(e_{N-n+i}) = {\rm d}_x s_I(v_i)$ for $i = k+1,\ldots, n$. Then, correspondingly, we have $$ R_J(e_{N-n+i}) = \widehat{\partial}hi_{IJ}\bigl(R_I(e_{N-n+i})\bigr) = \widehat{\partial}hi_{IJ}\bigl({\rm d}_x s_I(v_i)\bigr) = {\rm d}_{{\partial}hi_{IJ}(x)}s_J \bigl( {\rm d}_x{\partial}hi_{IJ}(v_i)\bigr) = {\rm d} s_J ( v'_i) . $$ Using these bases in \eqref{eq:HatTx} we can now verify \eqref{eq:etrans}, {\beta}gin{align*} & \bigl({\Lambda}_{IJ}(x) \circ \widehat T_{I,x}\bigr)\bigl(\overline v_1\wedge \ldots \wedge \overline v_n\bigr) \\ &\;=\; {\Lambda}_{IJ}(x) \bigl( (v_1\wedge\ldots \wedge v_k)\otimes \bigl( [R_I(e_1)]\wedge\ldots\wedge [R_I(e_{N-n+k})] \bigr)^* \bigr) \\ &\;=\; \bigl( {\rm d}_x{\partial}hi_{IJ}(v_1)\wedge\ldots\wedge {\rm d}_x{\partial}hi_{IJ}(v_k)\bigr) \otimes \bigl( [\widehat{\partial}hi_{IJ}(R_I(e_1))]\wedge\ldots \wedge [\widehat{\partial}hi_{IJ}(R_I(e_{N-n+k}))] \bigr)^* \\ &\;=\; ( v'_1\wedge\ldots\wedge v'_k) \otimes \bigl( [R_J(e_1)]\wedge\ldots\wedge [R_J(e_{N-n+k})] \bigr)^* \\ &\;=\; \widehat T_{J,{\partial}hi_{IJ}(x)}\bigl(\overline v'_1\wedge \ldots \wedge \overline v'_n\bigr) \;=\; \bigl( \widehat T_{J,{\partial}hi_{IJ}(x)}\circ \bigl({\rm d}_x{\partial}hi_{IJ} \oplus {\rm id}_{{\mathbb R}^{N}}\bigr) \bigr)\bigl(\overline v_1\wedge \ldots \wedge \overline v_n\bigr). \end{align*} This finishes the construction of ${\delta}t(s_{\mathcal K})$ for a weak Kuranishi atlas ${\mathcal K}$. In the case of a weak Kuranishi cobordism ${\mathcal K}$, we moreover have to construct bundle isomorphisms from the product bundles ${\delta}t(s_{{\partial}^{\alpha}{\mathcal K}})\tildemes A^{\alpha}_{\varepsilon}$ to the collar restrictions in order to prove that ${\delta}t(s_{\mathcal K})$ is a line bundle in the sense of Definition~\ref{def:bundle} with the claimed restrictions. That is, we have to construct bundle isomorphisms $\tilde{\iota}^{\alpha}_I : {\delta}t({\rm d} s^{\alpha}_I)\tildemes A^{\alpha}_{\varepsilon} \to {\delta}t({\rm d} s_I)|_{{\rm im\,}{\iota}^{\alpha}_I}$ for ${\alpha}=0,1$, $I\in{\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}$, and ${\varepsilon}>0$ less than the collar width of ${\mathcal K}$, and check the identities ${\Lambda}_{IJ} \circ \tilde{\iota}^{{\alpha}}_I = \tilde{\iota}^{\alpha}_{J} \circ \bigl({\Lambda}^{\alpha}_{IJ}\tildemes{\rm id}_{A^{\alpha}_{\varepsilon}}\bigr)$.
For that purpose recall that $(s_I\circ{\iota}^{\alpha}_I )(x,t) = s^{\alpha}_I(x)$ for $(x,t)\in{\partial}artial^{\alpha} U_I \tildemes A^{\alpha}_{\varepsilon}$, so that we have a trivial identification ${\rm id}_{E_I} : {\rm im\,} {\rm d}_x s^{\alpha}_I \to {\rm im\,}{\rm d}_{{\iota}^{\alpha}_I(x,t)} s_I$ of the images and an isomorphism ${\rm d}_{(x,t)}{\iota}^{\alpha}_I :\ker {\rm d}_x s^{\alpha}_I \tildemes{\mathbb R} \to \ker{\rm d}_{{\iota}^{\alpha}_I(x,t)} s_I$. The latter gives rise to an isomorphism given by wedging with the canonical positively oriented unit vector $1\in{\mathbb R} ={\rm T}_t A^{\alpha}_{\varepsilon}$, {\beta}gin{equation} {\lambda}bel{wedge 1} \wedge_1 \,:\; \Lambda^{\rm max}\, \ker {\rm d}_x s^{\alpha}_I \;\to\; \Lambda^{\rm max}\, \bigl( \ker {\rm d}_x s^{\alpha}_I \tildemes{\mathbb R} \bigr) , \qquad \eta \;\mapsto\; 1\wedge \eta. \end{equation} Here and throughout we identify vectors $\eta_i\in\ker {\rm d} s^{\alpha}_I$ with $(\eta_i,0)\in \ker {\rm d} s^{\alpha}_I \tildemes{\mathbb R}$ and also abbreviate $1:= (0,1)\in \ker {\rm d} s^{\alpha}_I \tildemes{\mathbb R}$. This map now composes with the induced isomorphism ${\Lambda}_{{\rm d}_{(x,t)}{\iota}^{\alpha}_I}$ from \eqref{eq:laphi} and can be combined with the identity on the cokernel factor to obtain fiberwise isomorphisms {\beta}gin{equation}{\lambda}bel{orient map} \tilde{\iota}^{\alpha}_I (x,t) := \bigl( {\Lambda}_{{\rm d}_{(x,t)}{\iota}^{\alpha}_I} \circ \wedge_1 \bigr) \otimes \bigl({\Lambda}_{{\rm id}_{E_I}}\bigr)^* \;:\; {\delta}t({\rm d}_x s^{\alpha}_I)\tildemes A^{\alpha}_{\varepsilon} \;\to\; {\delta}t({\rm d}_{{\iota}^{\alpha}_I(x,t)} s_I) . \end{equation} These isomorphisms vary smoothly with $(x,t)\in {\partial}^{\alpha} U_I\tildemes A^{\alpha}_{\varepsilon}$ since the compatible local trivializations $\Lambda^{\rm max}\,\ker({\rm d} s^{\alpha}_I \oplus R_I) \to {\delta}t({\rm d} s^{\alpha}_I)$ and $\Lambda^{\rm max}\,\ker({\rm d} s_I \oplus R_I) \to {\delta}t({\rm d} s_I)$ transform $\tilde{\iota}^{\alpha}_I(x,t)$ to ${\Lambda}_{{\rm d}_{(x,t)}{\iota}^{\alpha}_I\oplus{\rm id}_{{\mathbb R}^N}}\circ \wedge_1$. Moreover, $(x,t)\to \tilde{\iota}^{\alpha}_I(x,t)$ lifts ${\iota}^{\alpha}_I$ and thus defines the required bundle isomorphism $\tilde{\iota}^{\alpha}_I : {\delta}t({\rm d} s^{\alpha}_I)\tildemes A^{\alpha}_{\varepsilon} \to {\delta}t({\rm d} s_I)|_{{\rm im\,}{\iota}^{\alpha}_I}$ for each $I\in{\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}$. Finally, the isomorphisms \eqref{orient map} intertwine ${\Lambda}_{IJ} = {\Lambda}_{{\rm d}{\partial}hi_{IJ}} \otimes \bigl({\Lambda}_{[\widehat{\partial}hi_{IJ}]^{-1}}\bigr)^*$ and ${\Lambda}^{\alpha}_{IJ} = {\Lambda}_{{\rm d}{\partial}hi^{\alpha}_{IJ}} \otimes \bigl({\Lambda}_{[\widehat{\partial}hi_{IJ}]^{-1}}\bigr)^*$ by the product form of the coordinate changes ${\partial}hi_{IJ}\circ{\iota}^{\alpha}_I = {\iota}^{\alpha}_ J\circ ({\partial}hi^{\alpha}_{IJ}\tildemes {\rm id}_{A^{\alpha}_{\varepsilon}})$, and because ${\rm d}_{{\iota}^{\alpha}_I(x,t)}{\partial}hi_{IJ}$ maps ${\rm d}_{(x,t)} {\iota}_I^{\alpha} (1)$ to ${\rm d}_{({\partial}hi^{\alpha}_{IJ}(x),t)} {\iota}_J^{\alpha} (1)$, both of which are wedged on by \eqref{orient map} from the left hand side. (For an example of a detailed calculation see the end of the proof of Proposition~\ref{prop:orient1}.) This finishes the proof. \end{proof} We next use the determinant bundle ${\delta}t(s_{\mathcal K})$ to define the notion of an orientation of a Kuranishi atlas. 
{\beta}gin{defn}{\lambda}bel{def:orient} A weak Kuranishi atlas or Kuranishi cobordism ${\mathcal K}$ is {\bf orientable} if there exists a nonvanishing section ${\sigma}$ of the bundle ${\delta}t(s_{\mathcal K})$ (i.e.\ with ${\sigma}_I^{-1}(0)=\emptyset$ for all $I\in{\mathcal I}_{\mathcal K}$). An {\bf orientation} of ${\mathcal K}$ is a choice of nonvanishing section ${\sigma}$ of ${\delta}t(s_{\mathcal K})$. An {\bf oriented Kuranishi atlas or cobordism} is a pair $({\mathcal K},{\sigma})$ consisting of a Kuranishi atlas or cobordism and an orientation ${\sigma}$ of ${\mathcal K}$. For an oriented Kuranishi cobordism $({\mathcal K},{\sigma})$, the {\bf induced orientation of the boundary} ${\partial}^{\alpha}{\mathcal K}$ for ${\alpha}=0$ resp.\ ${\alpha}=1$ is the orientation of ${\partial}^{\alpha}{\mathcal K}$, $$ {\partial}^{\alpha}{\sigma} \,:=\; \Bigl( \bigl( (\tilde{\iota}^{\alpha}_I)^{-1} \circ{\sigma}_I \circ {\iota}^{\alpha}_I \bigr)\big|_{{\partial}artial^{\alpha} U_I \tildemes\{{\alpha}\} } \Bigr)_{I\in{\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}} $$ given by the isomorphism $(\tilde{\iota}^{\alpha}_I)_{I\in{\mathcal I}_{{\partial}^{\alpha}{\mathcal K}}}$ in \eqref{orient map} between a collar neighbourhood of the boundary in ${\mathcal K}$ and the product Kuranishi atlas ${\partial}^{\alpha}{\mathcal K}\tildemes A^{\alpha}_{\varepsilon}$, followed by restriction to the boundary ${\partial}^{\alpha} {\mathcal K}={\partial}^{\alpha}\bigl({\partial}^{\alpha} {\mathcal K}\tildemes A^{\alpha}_{\varepsilon}\bigr)$, where we identify ${\partial}artial^{\alpha} U_I \tildemes\{{\alpha}\} \cong {\partial}artial^{\alpha} U_I$. With that, we say that two oriented weak Kuranishi atlases $({\mathcal K}^0,{\sigma}^0)$ and $({\mathcal K}^1,{\sigma}^1)$ are {\bf oriented cobordant} if there exists a weak Kuranishi cobordism ${\mathcal K}^{[0,1]}$ from ${\mathcal K}^0$ to ${\mathcal K}^1$ and a section ${\sigma}$ of ${\delta}t(s_{{\mathcal K}^{[0,1]}})$ such that ${\partial}artial^{\alpha}{\sigma}={\sigma}^{\alpha}$ for ${\alpha}=0,1$. \end{defn} {\beta}gin{rmk}{\lambda}bel{rmk:orientb}\rm Here we have defined the induced orientation on the boundary ${\partial}^{\alpha} {\mathcal K}$ of a cobordism so that it is completed to an orientation of the collar by adding the positive unit vector $1$ along $A^{\alpha}_{\varepsilon}\subset {\mathbb R}$ rather than the more usual outward normal vector. Further, although we write the collar as $U^{\alpha}_I\tildemes A^{\alpha}_{\varepsilon}$, formula \eqref{wedge 1} above shows that if $\eta_1,\dots, \eta_n$ is a positively ordered basis for ${\rm T}_x U^{\alpha}_I$ then $1, \eta_1,\dots, \eta_n$ is a positively ordered basis for ${\rm T}_x (U^{\alpha}_I\tildemes A^{\alpha}_{\varepsilon})$. While the first convention merely simplifies notation, the second is necessary for compatibility checks in Proposition~\ref{prop:det0} as well as Proposition~\ref{prop:orient1}~(i) below; cf.\ the proof of \eqref{oclaim} below. The alternative convention of adding the normal vector as last vector leads to sign ambiguities between ${\delta}t(s_{\mathcal K})$ and the determinant line bundle ${\delta}t(s|_{\mathcal V}+\nu)$ for a perturbation, since the dimensions of the kernels of ${\rm d} s_I$ and ${\rm d} s_I+\nu_I$ need not be the same. \end{rmk} {\beta}gin{lemma}{\lambda}bel{le:cK} Let $({\mathcal K},{\sigma})$ be an oriented weak Kuranishi atlas or cobordism. 
{\beta}gin{enumerate}\item The orientation ${\sigma}$ induces a canonical orientation ${\sigma}|_{{\mathcal K}'}:=({\sigma}_I|_{U'_I})_{I\in{\mathcal I}_{{\mathcal K}'}}$ on each shrinking ${\mathcal K}'$ of ${\mathcal K}$ with domains $\bigl(U'_I\subset U_I\bigr)_{I\in{\mathcal I}_{{\mathcal K}'}}$. \item In the case of a Kuranishi cobordism ${\mathcal K}$, the restrictions to boundary and shrinking commute, that is $({\sigma}|_{{\mathcal K}'})|_{{\partial}^{\alpha}{\mathcal K}'} = ({\sigma}|_{{\partial}^{\alpha}{\mathcal K}})|_{{\partial}^{\alpha}{\mathcal K}'}$. \item In the case of a weak Kuranishi atlas ${\mathcal K}$, the orientation ${\sigma}$ on ${\mathcal K}$ induces an orientation ${\sigma}^{[0,1]}$ on ${\mathcal K}\tildemes[0,1]$, which induces the given orientation ${\partial}^{\alpha}{\sigma}^{[0,1]}={\sigma}$ of the boundaries ${\partial}^{\alpha}({\mathcal K}\tildemes[0,1]) = {\mathcal K}$ for ${\alpha}=0,1$. \end{enumerate} \end{lemma} {\beta}gin{proof} By definition, ${\delta}t(s_{{\mathcal K}'})$ is the line bundle over ${\mathcal K}'$ consisting of the bundles ${\delta}t({\rm d} s'_I)={\delta}t({\rm d} s_I)|_{U'_I}$ and the transition maps ${\Lambda}'_{IJ}={\Lambda}_{IJ}|_{U'_{IJ}}$. The restricted sections ${\sigma}_I|_{U'_I}$ of ${\delta}t({\rm d} s'_I)$ are hence compatible with the transition maps ${\Lambda}'_{IJ}$ and have product form near the boundary in the case of a cobordism. Since they are nonvanishing, they define an orientation of ${\mathcal K}'$. Commutation of restrictions holds since both $({\sigma}|_{{\mathcal K}'})|_{{\partial}^{\alpha}{\mathcal K}'}$ and $({\sigma}|_{{\partial}^{\alpha}{\mathcal K}})|_{{\partial}^{\alpha}{\mathcal K}'}$ are given by ${\sigma}_I|_{{\partial}^{\alpha} U'_I}$ with ${\partial}^{\alpha} U'_I = {\partial}^{\alpha} U_I \cap U'_I$. For part~(iii) we consider an oriented, additive, weak Kuranishi atlas $({\mathcal K},{\sigma})$ and begin by constructing an induced orientation of the product cobordism ${\mathcal K}\tildemes[0,1]$. For that purpose we use the bundle isomorphisms $$ \tilde {\iota}_I:=\wedge_1 \otimes ({\Lambda}_{{\rm id}_{E_I}})^* \,:\; {\delta}t({\rm d} s_I)\tildemes [0,1] \;\to\; {\delta}t({\rm d} s'_I) $$ with $s'_I(x,t)=s_I(x)$, covering ${\iota}_I={\rm id}_{U_I\tildemes[0,1]}$. These coincide with the maps defined in \eqref{orient map} for the interval $[0,1]$ instead of $A^{\alpha}_{\varepsilon}$, so the proof of Proposition~\ref{prop:det0} shows that they provide an isomorphism $\tilde {\iota}$ from the product bundle ${\delta}t(s_{\mathcal K})\tildemes [0,1]$ to the determinant bundle of the product ${\delta}t(s_{{\mathcal K}\tildemes [0,1]})$. Now an orientation ${\sigma}$ of ${\mathcal K}$ determines an orientation ${\sigma}^{[0,1]}:=\tilde{\iota}_*{\sigma}$ of the product ${\mathcal K}\tildemes [0,1]$ given by $(\tilde{\iota}_*{\sigma})_I(x,t)=\tilde{\iota}_I(x,t)\bigl({\sigma}_I(x)\bigr)$. Further, using $\tilde{\iota}^{\alpha}:=\tilde{\iota}$ to define the collar structure on ${\delta}t(s_{{\mathcal K}\tildemes [0,1]})$, the restrictions to both boundaries are ${\partial}^{\alpha}( \tilde{\iota}_*{\sigma}) ={\sigma}$ since $\tilde{\iota}_I^{-1}\circ (\tilde{\iota}_*{\sigma})_I \circ {\iota}^{\alpha}_I|_{U_I\tildemes\{{\alpha}\}} = {\sigma}_I$. 
\end{proof} The arguments of Proposition~\ref{prop:det0} equally apply for any reduction ${\mathcal V}$ of a Kuranishi atlas ${\mathcal K}$ and an admissible perturbation $\nu$ to define a line bundle ${\delta}t(s|_{\mathcal V} + \nu)$ over ${\mathcal V}$ (or more accurately over the Kuranishi atlas ${\mathcal K}^{\mathcal V}$ defined in Proposition~\ref{prop:red}). Instead of setting up a direct comparison between the bundles ${\delta}t(s|_{\mathcal V} + \nu)$ for different $\nu$, we will work with a ``more universal" determinant bundle ${\delta}t({\mathcal K})$ over ${\mathcal K}$. This will allow us to obtain compatible orientations of the determinant bundles over the perturbed zero set ${\delta}t(s|_{\mathcal V} + \nu)|_{(s+\nu)^{-1}(0)}$ for different transverse perturbations $\nu$. We will construct the bundle ${\delta}t({\mathcal K})$ from the determinant bundles of the zero sections in each chart. However, since the zero section $0_{\mathcal K}$ does not satisfy the index condition, we need to construct different transition maps for ${\delta}t({\mathcal K})$, which will now depend on the section $s_{\mathcal K}$. For this purpose, we again use contraction isomorphisms from Lemma~\ref{lem:get}. On the one hand, this provides families of isomorphisms {\beta}gin{equation}{\lambda}bel{Cds} \mathfrak{C}_{{\rm d}_x s_I} \,:\; \Lambda^{\rm max}\, {\rm T}_x U_I \otimes \bigl( \Lambda^{\rm max}\, E_I \bigr)^* \;\overlineerset{\cong}{\longrightarrow}\; {\delta}t({\rm d}_x s_I) \qquad\text{for} \; x\in U_I . \end{equation} In fact, as we will see in the proof of Proposition~\ref{prop:orient} below, these maps are essentially the special cases of $T_{I,x}$ in \eqref{eq:TIx} in which $R_I$ is surjective. On the other hand, recall that the tangent bundle condition \eqref{tbc} implies that ${\rm d} s_J$ restricts to an isomorphism $\qu{{\rm T}_y U_J}{{\rm d}_x{\partial}hi_{IJ}({\rm T}_x U_I)}\overlineerset{\cong}{\to} \qu{E_J}{\widehat{\partial}hi_{IJ}(E_I)}$ for $y={\partial}hi_{IJ}(x)$. Therefore, if we choose a smooth normal bundle $N_{IJ} =\bigcup_{x\in U_{IJ}} N_{IJ,x} \subset {\partial}hi_{IJ}^* {\rm T} U_J $ to the submanifold ${\rm im\,} {\partial}hi_{IJ} \subset U_J$, then the subspaces ${\rm d}_y s_J(N_{IJ,x})$ (where we always denote $y:={\partial}hi_{IJ}(x)$ and vary $x\in U_{IJ}$) form a smooth family of subspaces of $E_J$ that are complements to $\widehat{\partial}hi_{IJ}(E_I)$. Hence letting ${\partial}r_{N_{IJ}}(x) : E_J \to {\rm d}_y s_J(N_{IJ,x}) \subset E_J$ be the smooth family of projections with kernel $\widehat{\partial}hi_{IJ}(E_I)$, we obtain a smooth family of linear maps $$ F_x \,:= \; {\partial}r_{N_{IJ}}(x) \circ {\rm d}_y s_J \,:\; {\rm T}_y U_J \;\longrightarrow\; E_J \qquad\text{for}\; x\in U_{IJ}, y={\partial}hi_{IJ}(x) $$ with images ${\rm im\,} F_x={\rm d}_y s_J(N_{IJ,x})$, and isomorphisms to their kernel $$ {\partial}hi_x \,:= \; {\rm d}_x{\partial}hi_{IJ} \,:\; {\rm T}_x U_I \;\overlineerset{\cong}{\longrightarrow}\; \ker F_x = {\rm d}_x{\partial}hi_{IJ}({\rm T}_x U_I) \;\subset\; {\rm T}_y U_J . $$ By Lemma~\ref{lem:get} these induce isomorphisms $$ \mathfrak{C}^{{\partial}hi_x}_{F_x} \,:\; \Lambda^{\rm max}\, {\rm T}_{{\partial}hi_{IJ}(x)} U_J \otimes \bigl(\Lambda^{\rm max}\, E_J \bigr)^* \;\overlineerset{\cong}{\longrightarrow}\; \Lambda^{\rm max}\, {\rm T}_x U_I \otimes \Bigl(\Lambda^{\rm max}\, \Bigl(\qq{E_J} {{\rm im\,} F_x} \Bigr) \Bigr)^* . 
$$
We may combine this with the dual of the isomorphism $\Lambda^{\rm max}\, \bigl(\qu{E_J}{{\rm d}_y s_J(N_{IJ,x})} \bigr) \cong \Lambda^{\rm max}\, E_I$ induced via \eqref{eq:laphi} by ${\rm pr}^\perp_{N_{IJ}}(x) \circ\widehat\phi_{IJ} : E_I \to \qu{E_J}{{\rm d}_y s_J(N_{IJ,x})}$ to obtain isomorphisms
\begin{align}\label{CIJ}
\mathfrak{C}_{IJ}(x) \,: \; \Lambda^{\rm max}\, {\rm T}_{\phi_{IJ}(x)} U_J \otimes \bigl(\Lambda^{\rm max}\, E_J \bigr)^* \;\overset{\cong}{\longrightarrow}\; \Lambda^{\rm max}\, {\rm T}_x U_I \otimes \bigl(\Lambda^{\rm max}\, E_I \bigr)^*
\end{align}
for $x\in U_{IJ}$, given by $\mathfrak{C}_{IJ}(x) := \bigl( {\rm id}_{\Lambda^{\rm max}\, {\rm T}_x U_I} \otimes \Lambda_{({\rm pr}^\perp_{N_{IJ}}(x)\circ\widehat\phi_{IJ})^{-1}}^* \bigr) \circ \mathfrak{C}^{\phi_x}_{F_x}$.

\begin{prop}\label{prop:orient}
\begin{enumerate}
\item Let ${\mathcal K}$ be a weak Kuranishi atlas. Then there is a well defined line bundle $\det({\mathcal K})$ over ${\mathcal K}$ given by the line bundles $\Lambda^{\mathcal K}_I := \Lambda^{\rm max}\, {\rm T} U_I\otimes \bigl(\Lambda^{\rm max}\, E_I\bigr)^* \to U_I$ for $I\in{\mathcal I}_{\mathcal K}$ and the transition maps $\mathfrak{C}_{IJ}^{-1}: \Lambda^{\mathcal K}_I |_{U_{IJ}} \to \Lambda^{\mathcal K}_J |_{{\rm im\,}\phi_{IJ}}$ from \eqref{CIJ} for $I\subsetneq J$. In particular, the latter isomorphisms are independent of the choice of normal bundle $N_{IJ}$. Furthermore, the contractions $\mathfrak{C}_{{\rm d} s_I}: \Lambda^{\mathcal K}_I \to \det({\rm d} s_I)$ from \eqref{Cds} define an isomorphism $\Psi^{s_{\mathcal K}}:=\bigl(\mathfrak{C}_{{\rm d} s_I}\bigr)_{I\in{\mathcal I}_{\mathcal K}}$ from $\det({\mathcal K})$ to $\det(s_{\mathcal K})$.
\item If ${\mathcal K}$ is a weak Kuranishi cobordism, then the determinant bundle $\det({\mathcal K})$ defined as in (i) can be given a product structure on the collar such that its boundary restrictions are $\det({\mathcal K})|_{\partial^\alpha{\mathcal K}} = \det(\partial^\alpha{\mathcal K})$ for $\alpha= 0,1$. Further, the isomorphism $\Psi^{s_{\mathcal K}}: \det({\mathcal K}) \to \det(s_{\mathcal K})$ defined as in (i) has product structure on the collar with restrictions $\Psi^{s_{\mathcal K}}|_{\partial^\alpha {\mathcal K}}=\Psi^{s_{\partial^\alpha{\mathcal K}}}$ for $\alpha=0,1$.
\end{enumerate}
\end{prop}

\begin{proof}
To begin, note that each $\Lambda^{\mathcal K}_I = \Lambda^{\rm max}\, {\rm T} U_I \otimes \bigl(\Lambda^{\rm max}\, E_I\bigr)^*$ is a smooth line bundle over $U_I$, since it inherits local trivializations from the tangent bundle ${\rm T} U_I\to U_I$. Next, we will show that the isomorphisms $\mathfrak{C}_{{\rm d}_x s_I}$ from \eqref{Cds} are smooth in this trivialization, where $\det({\rm d} s_I)$ is trivialized via the maps $\widehat T_{I,x}$ as in Proposition~\ref{prop:det0} using an isomorphism $R_I: {\mathbb R}^{N}\to E_I$.
For that purpose we introduce the isomorphisms
$$
G_x: {\rm T}_x U_I\to \ker({\rm d}_x s_I \oplus R_I),\qquad v\mapsto \bigl(v, -R_I^{-1}({\rm d}_x s_I(v))\bigr),
$$
and claim that the associated maps on determinant lines fit into a commutative diagram with $\mathfrak{C}_{{\rm d}_x s_I}$ and the version of the trivialization $T_{I,x}$ from \eqref{eq:TIx}
\begin{equation}\label{ccord}
\xymatrix{
\Lambda^{\rm max}\, {\rm T}_x U_I \otimes \bigl( \Lambda^{\rm max}\, E_I \bigr)^* \ar@{->}[d]_{\Lambda_{G_x}\otimes {\rm id}} \ar@{->}[r]^{ \qquad\quad\mathfrak{C}_{{\rm d}_x s_I}} & \det({\rm d}_x s_I) \ar@{->}[d]^{{\rm id}} \\
\Lambda^{\rm max}\, \ker ({\rm d}_x s_I \oplus R_I) \otimes \bigl(\Lambda^{\rm max}\, E_I \bigr)^* \ar@{->}[r]^{ \qquad\qquad\quad\;\; T_{I,x}} & \det ({\rm d}_x s_I). }
\end{equation}
Here the trivialization $\widehat T_{I,x}$ of $\det({\rm d}_x s_I)$ is given by precomposing $T_{I,x}$ with the $x$-independent isomorphism $\Lambda_{R_I^{-1}}^* : \bigl( \Lambda^{\rm max}\, {\mathbb R}^N \bigr)^* \to \bigl(\Lambda^{\rm max}\, E_I\bigr)^*$ in the second factor. Since $G_x$ varies smoothly with $x$ in any trivialization of ${\rm T} U$, this will prove that $\mathfrak C_{{\rm d} s_I}$ is smooth with respect to the given trivializations.

To prove \eqref{ccord} we use the explicit formulas from Lemma~\ref{lem:get} and \eqref{eq:HatTx} at a fixed $x\in U_I$. So let $v_1,\ldots,v_n$ be a basis for ${\rm T}_x U_I$ with ${\rm span}(v_1,\ldots,v_k)=\ker {\rm d}_x s_I$, and let $w_1,\dots, w_N$ be a basis for $E_I$ with $w_{N-n+i}={\rm d}_x s_I(v_i)$ for $i=k+1,\ldots,n$. Then $\overline v_i:=G_x(v_i)=\bigl(v_i,-R_I^{-1}({\rm d}_x s_I(v_i))\bigr)$ is a corresponding basis of $\ker({\rm d}_x s_I \oplus R_I)$. In this setting we can verify \eqref{ccord}:
\begin{align*}
\mathfrak{C}_{{\rm d}_x s_I} \bigl( (v_1\wedge\dots v_n)\otimes(w_1\wedge\dots w_N)^* \bigr)
&\;=\; \bigl(v_1\wedge\dots v_k\bigr)\otimes \bigl( [w_1]\wedge\dots [w_{N-n+k}] \bigr)^* \\
&\;=\; T_{I,x}\bigl( (\overline v_1\wedge \ldots \overline v_n)\otimes(w_1\wedge\dots w_N)^* \bigr) \\
&\;=\; T_{I,x}\bigl( \Lambda_{G_x}(v_1\wedge \ldots v_n)\otimes(w_1\wedge\dots w_N)^* \bigr).
\end{align*}
This proves the smoothness of the isomorphisms $\mathfrak C_{{\rm d} s_I}$, so that we can define preliminary transition maps
\begin{equation}\label{tiphi}
\widetilde\phi_{IJ}:= \mathfrak{C}_{{\rm d} s_J}^{-1} \circ \Lambda_{IJ} \circ \mathfrak{C}_{{\rm d} s_I} \,:\; \Lambda^{\mathcal K}_I|_{U_{IJ}}\to \Lambda^{\mathcal K}_J \qquad\text{for}\; I\subsetneq J \in {\mathcal I}_{\mathcal K}
\end{equation}
by the transition maps \eqref{eq:bunIJ} of $\det(s_{\mathcal K})$ and the isomorphisms \eqref{Cds}. These define a line bundle $\Lambda^{\mathcal K}:=\bigl(\Lambda^{\mathcal K}_I, \widetilde\phi_{IJ} \bigr)_{I,J\in{\mathcal I}_{\mathcal K}}$ since the weak cocycle condition follows directly from that for the $\Lambda_{IJ}$. Moreover, this automatically makes the family of bundle isomorphisms $\Psi^{\mathcal K}:=\bigl(\mathfrak{C}_{{\rm d} s_I}\bigr)_{I\in{\mathcal I}_{\mathcal K}}$ an isomorphism from $\Lambda^{\mathcal K}$ to $\det(s_{\mathcal K})$. It remains to see that $\Lambda^{\mathcal K}=\det({\mathcal K})$ and $\Psi^{\mathcal K}=\Psi^{s_{\mathcal K}}$, i.e.\ we claim equality of transition maps $\widetilde\phi_{IJ}=\mathfrak{C}_{IJ}^{-1}$.
This also shows that $\mathfrak{C}_{IJ}^{-1}$, and thus $\det({\mathcal K})$, is independent of the choice of normal bundle $N_{IJ}$ in \eqref{CIJ}. So to finish the proof of (i), it suffices to establish the following commuting diagram at a fixed $x\in U_{IJ}$ with $y=\phi_{IJ}(x)$,
\begin{equation}\label{cclaim}
\xymatrix{
\Lambda^{\rm max}\, {\rm T}_x U_I \otimes \bigl( \Lambda^{\rm max}\, E_I \bigr)^* \ar@{->}[r]^{ \qquad\quad\mathfrak{C}_{{\rm d}_x s_I}} & \det({\rm d}_x s_I) \ar@{->}[d]^{\Lambda_{IJ} (x)} \\
\Lambda^{\rm max}\, {\rm T}_y U_J \otimes \bigl( \Lambda^{\rm max}\, E_J \bigr)^* \ar@{->}[r]^{\qquad\quad\mathfrak{C}_{{\rm d}_y s_J}} \ar@{->}[u]^{\mathfrak{C}_{IJ} (x)} & \det({\rm d}_y s_J) . }
\end{equation}
Using \eqref{ccord} for surjective maps $R_I:{\mathbb R}^N\to E_I$ and $R_J:{\mathbb R}^{N'}\to E_J$, and the compatibility of the trivialization $\widehat T_{J,y}$ with the trivialization $\widehat T'_{J,y}$ arising from $R'_J:=\widehat\phi_{IJ}\circ R_I:{\mathbb R}^N\to E_J$, we can expand this diagram to
\[
\xymatrix{
\Lambda^{\rm max}\, {\rm T}_x U_I \otimes \bigl( \Lambda^{\rm max}\, E_I \bigr)^* \ar@{->}[r]^{\Lambda_{G_x}\otimes {\rm id} \qquad} & \;\; \Lambda^{\rm max}\, \ker ({\rm d}_x s_I\oplus R_I) \otimes \bigl( \Lambda^{\rm max}\, E_I \bigr)^* \ar@{->}[r]^{ \qquad\qquad\qquad T_{I,x} } \ar@{->}[d]_{ \Lambda_{{\rm d}_x \phi_{IJ} \oplus {\rm id}_{{\mathbb R}^N}} \otimes ( \Lambda_{R'_J\,\!^{-1}}^* \circ \Lambda_{R_I}^*) }& \det({\rm d}_x s_I) \ar@{->}[d]^{\Lambda_{IJ} (x)} \\
& \;\; \Lambda^{\rm max}\, \ker ({\rm d}_y s_J\oplus R'_J) \otimes \bigl( \Lambda^{\rm max}\, {\rm im\,} R'_J \bigr)^* \ar@{->}[d]_{\Psi_y\otimes ( \Lambda_{R_J^{-1}}^* \circ \Lambda_{R'_J}^*) } \ar@{->}[r]^{ \qquad\qquad\qquad T'_{J,y}} & \det({\rm d}_y s_J) \\
\Lambda^{\rm max}\, {\rm T}_y U_J \otimes \bigl( \Lambda^{\rm max}\, E_J \bigr)^* \ar@{->}[uu]^{\mathfrak{C}_{IJ}(x)} \ar@{->}[r]^{\Lambda_{G_y}\otimes {\rm id} \qquad } & \;\; \Lambda^{\rm max}\, \ker ({\rm d}_y s_J\oplus R_J) \otimes \bigl( \Lambda^{\rm max}\, E_J \bigr)^* \ar@{->}[r]^{ \qquad\qquad\qquad T_{J,y}} & \det({\rm d}_y s_J) \ar@{->}[u]_{{\rm id}} . }
\]
Here the upper right square commutes by \eqref{eq:etrans}. To make the lower right square precise, and in particular to choose suitable $R_J$, we note that $E_J={\rm im\,}{\rm d}_y s_J + {\rm im\,} R'_J$ and ${\rm d}_y s_J ({\rm T}_y{\rm im\,}\phi_{IJ}) \subset \widehat\phi_{IJ}(E_I) = {\rm im\,} R'_J$, so that given any normalized basis $e_1,\ldots,e_N\in {\mathbb R}^N$ we can complete the corresponding vectors $R'_J(e_i)$ to a basis for $E_J$ by adding the vectors ${\rm d}_y s_J(v^\Psi_{N+1}), \ldots, {\rm d}_y s_J(v^\Psi_{N'})$, where $v^\Psi_{N+1}, \ldots, v^\Psi_{N'}\in N_{IJ,x}$ is a basis of the normal space $N_{IJ,x}\subset {\rm T}_y U_J$ to ${\rm T}_y{\rm im\,}\phi_{IJ}$ that was used to define $\mathfrak{C}_{IJ}(x)$. Thus $R'_J$ extends to a smooth family of bijections
\begin{align*}
R_J: = R_{J,x} \,:\quad {\mathbb R}^N\times {\mathbb R}^{N'-N} &\; \overset{\cong}{\longrightarrow}\; \widehat\phi_{IJ}(E_I) \oplus {\rm d}_y s_J(N_{IJ,x}) \;=\; E_J , \\
(\underline{r} ; r_{N+1},\ldots,r_{N'}) &\;\longmapsto \; \widehat\phi_{IJ}\bigl( R_I(\underline{r}) \bigr) + {\textstyle \sum_{i=N+1}^{N'}} r_i \cdot {\rm d}_y s_J(v^\Psi_i) .
\end{align*}
We may choose the vectors $v^\Psi_{i}$ so that the $e_{i}:= R_J^{-1}\bigl({\rm d}_y s_J(v^\Psi_{i})\bigr)$ for $i=N+1,\ldots, N'$ extend $e_1,\ldots, e_N \in {\mathbb R}^N\cong {\mathbb R}^N \times\{0\}$ to a normalized basis of ${\mathbb R}^{N'}$. Further, the vectors $\overline v^\Psi_{i}:= \bigl( v^\Psi_i , - e_i \bigr)$ span the complement of the embedding $\iota: \ker ({\rm d}_y s_J\oplus R'_J) \hookrightarrow \ker ({\rm d}_y s_J\oplus R_J)$, $(v,\underline{r}) \mapsto (v,\underline{r},0)$. Hence \eqref{tpsit} (with $\lambda(x)=1$) shows that the isomorphism
\begin{align*}
\Psi_y \,:\; \Lambda^{\rm max}\, \ker ({\rm d}_y s_J\oplus R'_J) &\; \overset{\cong}{\longrightarrow}\; \Lambda^{\rm max}\, \ker ({\rm d}_y s_J\oplus R_J) ,\\
\overline v_1 \wedge\ldots \wedge \overline v_n &\;\longmapsto \; \iota(\overline v_1)\wedge\ldots \iota(\overline v_n) \wedge \overline v^\Psi_{N+1} \wedge \ldots \overline v^\Psi_{N'}
\end{align*}
intertwines the trivializations $T'_{J,y}$ and $T_{J,y}$, that is, the lower right square in the above diagram commutes.

Now to prove that the entire diagram commutes it remains to identify $\mathfrak{C}_{IJ}(x)$ with the map given by composition of the other isomorphisms, which is the tensor product of $\Lambda_{R_I^{-1}}^*\circ \Lambda_{R_J}^*$ (composed via $\Lambda^{\rm max}\, {\mathbb R}^N\cong {\mathbb R}\cong\Lambda^{\rm max}\, {\mathbb R}^{N'}$) on the obstruction spaces with the inverse of
\begin{align*}
\Lambda^{\rm max}\, {\rm T}_x U_I &\;\overset{\cong}{\longrightarrow}\; \Lambda^{\rm max}\, {\rm T}_y U_J \\
v_1 \wedge\ldots \wedge v_n &\;\longmapsto \; {\rm d}_x\phi_{IJ}(v_1) \wedge\ldots {\rm d}_x\phi_{IJ} (v_{n}) \wedge v^\Psi_{N+1} \wedge \ldots v^\Psi_{N'} .
\end{align*}
Here we used the fact that $\iota \circ \bigl({\rm d}_x\phi_{IJ}\oplus{\rm id}_{{\mathbb R}^N}\bigr) \circ G_x = G_y \circ {\rm d}_x\phi_{IJ}$ and $\overline v^\Psi_{i}= G_y( v^\Psi_i )$. Note moreover that we chose the vectors $v^\Psi_{i}\in {\rm T}_y U_J$ to span the complement of ${\rm im\,}{\rm d}_x\phi_{IJ}$, and hence ${\rm d}_x\phi_{IJ}(v_1),\ldots,{\rm d}_x\phi_{IJ} (v_{n}), v^\Psi_{N+1},\ldots ,v^\Psi_{N'}$ forms a basis of ${\rm T}_y U_J$. Moreover, note that $w_i=R_J(e_i)$ is a basis for $E_J$ whose last $N'-N$ vectors are $w_{i}={\rm d}_y s_J(v^\Psi_i)\in {\rm d}_y s_J(N_{IJ,x})$ for $i=N+1,\ldots,N'$. In these bases the explicit formulas \eqref{CIJ} and \eqref{Cfrak} give
\begin{align*}
\mathfrak{C}_{IJ}(x) \;:\; & \bigl( {\rm d}_x\phi_{IJ}(v_1) \wedge\ldots {\rm d}_x\phi_{IJ} (v_{n}) \wedge v^\Psi_{N+1} \wedge \ldots v^\Psi_{N'}\bigr) \otimes \bigl(R_J(e_1)\wedge\dots R_J(e_{N'})\bigr)^* \\
&\;\mapsto\; (v_1\wedge\dots v_n)\otimes \Lambda_{({\rm pr}^\perp_{N_{IJ}}\circ\widehat\phi_{IJ})^{-1}}^* \bigl( [\widehat\phi_{IJ}(R_I(e_1))]\wedge\dots [\widehat\phi_{IJ}(R_I(e_N))] \bigr)^* \\
&\;=\; (v_1\wedge\dots v_n)\otimes \bigl( R_I(e_1) \wedge\dots R_I(e_N) \bigr)^* .
\end{align*}
Here in the second factor we have $\Lambda_{R_I^{-1}}\bigl( R_I(e_1) \wedge\dots R_I(e_N) \bigr) =1\in \Lambda^{\rm max}\,{\mathbb R}^N$ and $\Lambda_{R_J^{-1}}\bigl( R_J(e_1) \wedge\dots R_J(e_{N'}) \bigr)= 1 \in \Lambda^{\rm max}\,{\mathbb R}^{N'}$, so this proves that \eqref{cclaim} commutes.
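To see the content of this construction in the simplest terms, it may help to record a toy example; the charts below are chosen purely for illustration, and the signs depend on the normalization fixed in Lemma~\ref{lem:get}. Take $U_I={\mathbb R}^2$ with coordinates $(x_1,x_2)$, $E_I={\mathbb R}$ with basis vector $e$, and $s_I(x_1,x_2)=x_2\,e$, together with $U_J={\mathbb R}^3$, $E_J={\mathbb R}^2$ with basis $e_1,e_2$, $s_J(x_1,x_2,x_3)=x_2\,e_1+x_3\,e_2$, coordinate change $\phi_{IJ}(x_1,x_2)=(x_1,x_2,0)$, and $\widehat\phi_{IJ}(e)=e_1$; these data satisfy the compatibility and tangent bundle conditions. Here $\ker{\rm d} s_I={\mathbb R}\,\partial_{x_1}$ and ${\rm d} s_I$ is surjective, so \eqref{Cds} is the isomorphism
$$
\mathfrak{C}_{{\rm d} s_I}\,:\; (\partial_{x_1}\wedge\partial_{x_2})\otimes e^* \;\longmapsto\; \pm\,\partial_{x_1} \;\in\;\det({\rm d} s_I)\cong\Lambda^{\rm max}\,\ker{\rm d} s_I ,
$$
and similarly $\mathfrak{C}_{{\rm d} s_J}$ sends $(\partial_{x_1}\wedge\partial_{x_2}\wedge\partial_{x_3})\otimes(e_1\wedge e_2)^*$ to $\pm\,\partial_{x_1}$. Any normal bundle $N_{IJ}$ to ${\rm im\,}\phi_{IJ}=\{x_3=0\}$ is spanned along the image by a vector of the form $\partial_{x_3}+a\,\partial_{x_1}+b\,\partial_{x_2}$, which ${\rm d} s_J$ maps to $b\,e_1+e_2$, a complement of $\widehat\phi_{IJ}(E_I)={\mathbb R} e_1$ for every choice of $a,b$; the resulting map \eqref{CIJ} sends $(\partial_{x_1}\wedge\partial_{x_2}\wedge\partial_{x_3})\otimes(e_1\wedge e_2)^*$ to $\pm(\partial_{x_1}\wedge\partial_{x_2})\otimes e^*$ with a sign that does not depend on $(a,b)$, in accordance with the independence of the choice of $N_{IJ}$ just proved.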
For part (ii) the same arguments apply to define a bundle $\det({\mathcal K})$ and isomorphism $\Psi^{s_{\mathcal K}}$, for which it remains to establish the product structure on a collar. However, we may use the isomorphisms $\mathfrak{C}_{{\rm d} s_I}:\Lambda^{\mathcal K}_I \to \det({\rm d} s_I)$ and $\mathfrak{C}_{{\rm d} s_I|_{\partial^\alpha U_I}}:\Lambda^{\partial^\alpha {\mathcal K}}_I \to \det({\rm d} s_I|_{\partial^\alpha U_I})$ to pull back the isomorphisms $\tilde\iota^\alpha_I : \det({\rm d} s_I)|_{{\rm im\,}\iota^\alpha_I} \to \det({\rm d} s_I|_{\partial^\alpha U_I}) \times A^\alpha_{\varepsilon}$ from Proposition~\ref{prop:det0} to isomorphisms
$$
\bigl( \mathfrak{C}_{{\rm d} s_I|_{\partial^\alpha U_I}} \times {\rm id}_{A^\alpha_{\varepsilon}} \bigr)^{-1} \circ \tilde\iota^\alpha_I \circ \mathfrak{C}_{{\rm d} s_I} \,:\; \Lambda^{\mathcal K}_I |_{{\rm im\,}\iota^\alpha_I} \;\overset{\cong}{\longrightarrow}\; \Lambda^{\partial^\alpha{\mathcal K}}_I \times A^\alpha_{\varepsilon} .
$$
This provides the product structure for $\det({\mathcal K})$, and moreover this construction was made such that $\Psi^{s_{\mathcal K}}$ has product form in the same collar, and the restrictions are, as claimed, given by pullback of the restrictions of $\det(s_{\mathcal K})$. This completes the proof.
\end{proof}

\begin{prop}\label{prop:orient1}
\begin{enumerate}
\item Let $({\mathcal K},\sigma)$ be an oriented, tame Kuranishi atlas with reduction ${\mathcal V}$, and let $\nu$ be an admissible, precompact, transverse perturbation of $s_{\mathcal K}|_{\mathcal V}$. Then the zero set $|{\bf Z}_\nu|$ is an oriented closed manifold.
\item Let $({\mathcal K}^{[0,1]},\sigma)$ be an oriented, tame Kuranishi cobordism with reduction ${\mathcal V}$ and admissible, precompact, transverse perturbation $\nu$. Then the corresponding zero set $|{\bf Z}_{\nu}|$ is an oriented cobordism from $|{\bf Z}_{\nu^0}|$ to $|{\bf Z}_{\nu^1}|$ for $\nu^\alpha:=\nu|_{\partial^\alpha {\mathcal V}}$, with boundary orientations induced as in (i) by $\sigma^\alpha:=\partial^\alpha \sigma$.
\end{enumerate}
\end{prop}

\begin{proof}
We first show that the local zero sets $Z_I: = (s|_{V_I} + \nu)^{-1}(0)\subset V_I$ have a natural orientation. By Lemma~\ref{le:stransv} they are submanifolds, and by transversality we have ${\rm im\,} ({\rm d}_z s_I + {\rm d}_z\nu_I)=E_I$ for each $z\in Z_I$, and thus $\Lambda^{\rm max}\, \qu{E_I}{{\rm im\,} ({\rm d}_z s_I + {\rm d}_z\nu_I)}= \Lambda^{\rm max}\,\{0\} = {\mathbb R}$, so that we have a natural isomorphism between the orientation bundle of $Z_I$ and the restriction of the determinant line bundle
\begin{align*}
\Lambda^{\rm max}\, {\rm T} Z_I &\;=\; {\textstyle \bigcup_{z\in Z_I}} \Lambda^{\rm max}\, \ker ({\rm d}_z s_I + {\rm d}_z\nu_I) \\
&\; \cong\; {\textstyle \bigcup_{z\in Z_I}} \Lambda^{\rm max}\, \ker ({\rm d}_z s_I + {\rm d}_z\nu_I) \otimes {\mathbb R} \;=\; \det(s_I|_{V_I}+\nu_I)|_{Z_I} .
\end{align*}
Combining this with Proposition~\ref{prop:orient} and Lemma~\ref{lem:get} we obtain isomorphisms
$$
\mathfrak{C}^\nu_I(z) := \mathfrak{C}_{{\rm d} (s_I +\nu_I)} \circ \mathfrak{C}_{{\rm d} s_I}^{-1} \,:\; \det({\rm d} s_I)|_{z} \;\longrightarrow\; \Lambda^{\rm max}\, {\rm T}_z Z_I \qquad \text{for} \; z\in Z_I .
$$
To see that these are smooth, recall that smoothness of $\mathfrak{C}_{{\rm d} s_I}$ was proven in Proposition~\ref{prop:orient}. The same arguments apply to $\mathfrak{C}_{{\rm d} (s_I +\nu_I)}$. Further, for $I\subsetneq J$ and $z\in Z_I\cap U_{IJ}$ these isomorphisms are intertwined by the transition maps
$$
\Lambda_{IJ}(z)=\Lambda_{{\rm d}_z\phi_{IJ}} \otimes \bigl( \Lambda_{\widehat\phi_{IJ}^{-1}}\bigr)^* : \det({\rm d}_z s_I) \to \det({\rm d}_{\phi_{IJ}(z)} s_J)
$$
and $\Lambda_{{\rm d}_z\phi_{IJ}} : \Lambda^{\rm max}\, {\rm T}_z Z_I \to \Lambda^{\rm max}\, {\rm T}_{\phi_{IJ}(z)} Z_J$. To see this, one combines the commuting diagram \eqref{cclaim} with the analogous diagram over $Z_I\cap U_{IJ}$
\[
\xymatrix{
\Lambda^{\rm max}\, {\rm T} U_I \otimes \bigl( \Lambda^{\rm max}\, E_I \bigr)^* \ar@{->}[rr]^{ \mathfrak{C}_{{\rm d} (s_I+\nu_I)}\qquad} & & \det({\rm d} (s_I+\nu_I)) = \Lambda^{\rm max}\, {\rm T} Z_I \otimes {\mathbb R} \ar@{->}[d]^{\Lambda_{IJ} = \Lambda_{{\rm d}\phi_{IJ}} \otimes {\rm id}_{\mathbb R} } \\
\Lambda^{\rm max}\, {\rm T} U_J \otimes \bigl( \Lambda^{\rm max}\, E_J \bigr)^* \ar@{->}[rr]^{\mathfrak{C}_{{\rm d} (s_J+\nu_J)}\qquad} \ar@{->}[u]^{\mathfrak{C}_{IJ}} & & \det({\rm d} (s_J+\nu_J)) = \Lambda^{\rm max}\, {\rm T} Z_J \otimes {\mathbb R}. }
\]
The latter diagram commutes by the arguments in Proposition~\ref{prop:orient} applied to $s_\bullet+\nu_\bullet$, because ${\rm d} s_J$ and ${\rm d} (s_J+\nu_J)$ induce the same map $\mathfrak{C}_{IJ}(z)$ for $z\in Z_I\cap U_{IJ}$. Indeed, the admissibility of $\nu$ implies that ${\rm im\,}{\rm d}_{\phi_{IJ}(z)} \nu_J\subset \widehat\phi_{IJ}(E_I)$, so that $F_z = {\rm pr}_{N_{IJ}}(z) \circ {\rm d}_{\phi_{IJ}(z)} s_J ={\rm pr}_{N_{IJ}}(z) \circ {\rm d}_{\phi_{IJ}(z)} (s_J+\nu_J)$ in the construction of $\mathfrak{C}_{IJ}(z)$.

Now the orientation $\bigl( \sigma_I : U_I \to \det({\rm d} s_I) \bigr)_{I\in{\mathcal I}_{\mathcal K}}$ of ${\mathcal K}$ induces nonvanishing sections $\sigma^\nu_I := \mathfrak{C}^\nu_I \circ \sigma_I : Z_I \to \Lambda^{\rm max}\, {\rm T} Z_I$ which, by the above discussion and the compatibility $\Lambda_{IJ}\circ \sigma_I = \sigma_J$, are related by $\Lambda_{{\rm d}\phi_{IJ}} \circ\sigma^\nu_I = \sigma^\nu_J$, i.e.\ the orientations $\sigma^\nu_I$ in the charts of $|{\bf Z}_\nu|$ are compatible with the transition maps $\phi_{IJ}|_{Z_I\cap U_{IJ}}$. Hence this determines an orientation of $|{\bf Z}_\nu|$. This proves~(i).

For a Kuranishi cobordism, the above constructions provide an orientation $|\sigma^\nu| : |{\bf Z}_\nu| \to \Lambda^{\rm max}\, {\rm T} |{\bf Z}_\nu|$ on the manifold with boundary $|{\bf Z}_\nu|$. Moreover, Lemma~\ref{le:czeroS0} provides diffeomorphisms to the boundary components for $\alpha=0,1$
$$
|j^\alpha| \,:\; |{\bf Z}_{\nu^\alpha}| \;=\; \Bigl| \underset{I\in{\mathcal I}_{{\mathcal K}^\alpha}}{\textstyle\bigcup} (s_I+\nu^\alpha_I)^{-1}(0) \Bigr| \;\overset{\cong}{\longrightarrow}\; \partial^\alpha |{\bf Z}_{\nu}| \;\subset \; \partial |{\bf Z}_{\nu}| ,
$$
which in the charts are given by $j^\alpha_I := \iota^\alpha_I (\cdot,\alpha) : Z^\alpha_I \to Z_I$.
The latter lift to isomorphisms of determinant line bundles
$$
{\tilde j}^\alpha_I := \Lambda_{{\rm d}\iota^\alpha_I}\circ \wedge_1 \,:\; \Lambda^{\rm max}\, {\rm T} Z^\alpha_I \;\overset{\cong}{\longrightarrow}\; \Lambda^{\rm max}\, {\rm T} Z_I |_{{\rm im\,}\iota^\alpha_I} ,
$$
given by the same expression as the restriction to $Z^\alpha_I\times \{\alpha\}$ of the map \eqref{orient map} on $\det({\rm d} s^\alpha_I)\times A^\alpha_{\varepsilon}$ in the case of trivial cokernel. These are the expressions in the charts of an isomorphism of determinant line bundles
$$
|\tilde j^\alpha| := \Lambda_{{\rm d} |\iota^\alpha|}\circ \wedge_1 \,:\; \Lambda^{\rm max}\, {\rm T} |{\bf Z}_{\nu^\alpha}| \;\overset{\cong}{\longrightarrow}\; \Lambda^{\rm max}\, {\rm T} |{\bf Z}_\nu| \bigr|_{|j^\alpha|( |{\bf Z}_{\nu^\alpha}|) } ,
$$
which consists of the isomorphism induced by the collar neighbourhood embedding $|\iota^\alpha| : |{\bf Z}_{\nu^\alpha}|\times A^\alpha_{\varepsilon} \to |{\bf Z}_\nu|$, given by $\iota^\alpha_{\varepsilon}|_{Z^\alpha_I \times A^\alpha_{\varepsilon}}$ in the charts, together with the canonical isomorphism between the determinant line bundle of the boundary and the boundary restriction of the determinant line bundle of the collar neighbourhood,
$$
\wedge_1 \,:\; \Lambda^{\rm max}\, {\rm T} |{\bf Z}_{\nu^\alpha}| \;\to\; \Lambda^{\rm max}\, {\rm T} \bigl( |{\bf Z}_{\nu^\alpha}| \times A^\alpha_{\varepsilon} \bigr) \big|_{|{\bf Z}_{\nu^\alpha}| \times \{\alpha\}} \;=\; \bigl( \Lambda^{\rm max}\, {\rm T} |{\bf Z}_{\nu^\alpha}| \bigr) \times {\mathbb R} , \quad \eta \mapsto 1\wedge\eta .
$$
Here, as before, we identify vectors $\eta_i \in {\rm T} |{\bf Z}_{\nu^\alpha}|$ with $(\eta_i ,0) \in {\rm T} |{\bf Z}_{\nu^\alpha}| \times {\mathbb R}$ and abbreviate $1:=(0,1) \in {\rm T} |{\bf Z}_{\nu^\alpha}| \times {\mathbb R}$. The latter corresponds to the exterior normal ${\rm d} |\iota^1| (0,1) \in {\rm T} |{\bf Z}_\nu| \big|_{\partial^1 |{\bf Z}_\nu|}$ and the interior normal ${\rm d} |\iota^0| (0,1) \in {\rm T} |{\bf Z}_\nu| \big|_{\partial^0 |{\bf Z}_\nu|}$. Hence the boundary orientations\footnote{Here, contrary to the special choice for Kuranishi cobordisms discussed in Remark~\ref{rmk:orientb}, we use a more standard orientation convention for manifolds with boundary. Namely, a positively ordered basis $\eta_1,\dots,\eta_k$ for the tangent space to the boundary is extended to a positively ordered basis $\eta_{out},\eta_1,\dots,\eta_k$ for the whole manifold by adjoining an outward unit vector $\eta_{out}$ as its first element.} $\partial^\alpha |\sigma^\nu| :|{\bf Z}_{\nu^\alpha}| \to \Lambda^{\rm max}\, {\rm T} |{\bf Z}_{\nu^\alpha}|$ induced from $|\sigma^\nu|$ on the two components for $\alpha=0,1$ differ by a sign,
$$
\partial^0 |\sigma^\nu| := - \, |\tilde j^0|^{-1} \circ |\sigma^\nu| \circ |j^0| ,\qquad \partial^1 |\sigma^\nu| := |\tilde j^1|^{-1} \circ |\sigma^\nu| \circ |j^1| .
$$
On the other hand, the restricted orientations $\sigma^\alpha:=\partial^\alpha \sigma$ of ${\mathcal K}^\alpha$ also induce orientations $|\sigma^{\nu_\alpha}|$ of the boundary components $|{\bf Z}_{\nu^\alpha}|$ by the construction in (i).
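Before comparing these two families of orientations, it may help to spell out the sign convention of the footnote in the most elementary case. For the interval $[0,1]$, oriented by the section $\partial_t$ of $\Lambda^{\rm max}\,{\rm T}[0,1]={\rm T}[0,1]$, the outward unit vector at $t=1$ is $\partial_t$ and at $t=0$ it is $-\partial_t$, so the induced boundary orientation is $+1$ at $\{1\}$ and $-1$ at $\{0\}$; briefly,
$$
\partial\,[0,1] \;=\; \{1\} \,-\, \{0\}
$$
as oriented $0$-manifolds. In the collar notation above, the vector $1=(0,1)$ is the interior normal along $\partial^0$ and the exterior normal along $\partial^1$, which is exactly why an extra minus sign appears in the definition of $\partial^0|\sigma^\nu|$ but not in that of $\partial^1|\sigma^\nu|$.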
Now to prove the claim that $|{\bf Z}_\nu|$ is an oriented cobordism from $\bigl(|{\bf Z}_{\nu^0}|,|\sigma^{\nu_0}|\bigr)$ to $\bigl(|{\bf Z}_{\nu^1}|,|\sigma^{\nu_1}|\bigr)$, it remains to check that $\partial^0 |\sigma^\nu| = - |\sigma^{\nu_0}|$ and $\partial^1 |\sigma^\nu| = |\sigma^{\nu_1}|$. This is equivalent to $|\sigma^{\nu^\alpha}| = |\tilde j^\alpha|^{-1} \circ |\sigma^\nu| \circ |j^\alpha|$ for both $\alpha=0,1$. So, recalling the construction of $\partial^\alpha\sigma_I=(\tilde\iota^\alpha_I)^{-1} \circ \sigma_I \circ \iota^\alpha_I\big|_{Z^\alpha_I \times\{\alpha\} }$ and $\sigma^\nu_I=\mathfrak{C}^\nu_I \circ \sigma_I \big|_{Z_I}$ in local charts, we must show the following identity over $Z^\alpha_I \times\{\alpha\}\cong Z^\alpha_I$ for all $I\in{\mathcal I}_{\partial^\alpha{\mathcal K}}$
\begin{equation}\label{oclaim}
\mathfrak{C}_{{\rm d} (s^\alpha_I +\nu^\alpha_I)} \circ \mathfrak{C}_{{\rm d} s^\alpha_I}^{-1} \circ \bigl((\tilde\iota^\alpha_I)^{-1} \circ \sigma_I \circ \iota^\alpha_I\bigr) \; =\; (\tilde j^\alpha_I)^{-1} \circ \bigl( \mathfrak{C}_{{\rm d} (s_I +\nu_I)} \circ \mathfrak{C}_{{\rm d} s_I}^{-1} \circ \sigma_I \bigr) \circ j^\alpha_I .
\end{equation}
We will check this at a fixed point $z\in Z^\alpha_I$ in two steps. We first show that the contraction isomorphisms $\mathfrak{C}_{{\rm d} s^\alpha_I}(z)$ and $\mathfrak{C}_{{\rm d} s_I}(\iota^\alpha_I(z,\alpha))$ intertwine the collar isomorphism $\tilde\iota^\alpha_I = \bigl( \Lambda_{{\rm d} \iota^\alpha_I} \circ \wedge_1 \bigr) \otimes \bigl(\Lambda_{{\rm id}_{E_I}}\bigr)^*$ from $\det({\rm d} s^\alpha_I)$ to $\det({\rm d} s_I)$ with the analogous collar isomorphism
$$
\widetilde I^\alpha_I := \bigl( \Lambda_{{\rm d} \iota^\alpha_I} \circ \wedge_1 \bigr) \otimes \bigl(\Lambda_{{\rm id}_{E_I}}\bigr)^* \,:\; \Lambda^{\rm max}\, {\rm T} \partial^\alpha U_I \otimes \bigl( \Lambda^{\rm max}\, E_I \bigr)^* \;\to\; \Lambda^{\rm max}\, {\rm T} U_I \otimes \bigl( \Lambda^{\rm max}\, E_I \bigr)^* .
$$
Indeed, we can use the product form of $s_I$ in terms of $s^\alpha_I$ to check the corresponding identity of maps $\Lambda^{\rm max}\, {\rm T} \partial^\alpha U_I \otimes \bigl( \Lambda^{\rm max}\, E_I \bigr)^* \big|_{Z^\alpha_I}\to \det({\rm d} s_I)\big|_{\iota^\alpha_I(Z^\alpha_I)}$ at a fixed vector of the form $\bigl( \eta_{\ker} \wedge \eta_{\perp} \bigr) \otimes \bigl( \zeta_{{\rm coker\,}} \wedge \zeta_{{\rm im\,}} \bigr)$ with $\eta_{\ker} \in \Lambda^{\rm max}\,\ker{\rm d} s^\alpha_I$ and $\zeta_{{\rm im\,}} \in \bigl(\Lambda^{\rm max}\, {\rm im\,}{\rm d} s^\alpha_I \bigr)^*$:
\begin{align*}
& \Bigl( \tilde\iota^\alpha_I \circ \mathfrak{C}_{{\rm d} s^\alpha_I} \Bigr) \Bigl( \bigl( \eta_{\ker} \wedge \eta_{\perp} \bigr) \otimes \bigl( \zeta_{{\rm coker\,}} \wedge \zeta_{{\rm im\,}} \bigr) \Bigr) \\
&\;=\; \tilde\iota^\alpha_I\Bigl( \mathfrak{c}_{{\rm d} s^\alpha_I}\bigl( \eta_{\perp} , \zeta_{{\rm im\,}} \bigr) \cdot \eta_{\ker} \otimes \zeta_{{\rm coker\,}} \Bigr)\\
&\;=\; \mathfrak{c}_{{\rm d} s^\alpha_I}\bigl( \eta_{\perp} , \zeta_{{\rm im\,}} \bigr) \cdot \Lambda_{{\rm d} \iota^\alpha_I}(1\wedge \eta_{\ker}) \otimes \zeta_{{\rm coker\,}} \\
&\;=\; \mathfrak{c}_{{\rm d} s_I}\bigl( \Lambda_{{\rm d} \iota^\alpha_I} (\eta_{\perp}) , \zeta_{{\rm im\,}}\bigr) \cdot \Lambda_{{\rm d} \iota^\alpha_I} \bigl( 1\wedge \eta_{\ker} \bigr) \otimes \zeta_{{\rm coker\,}} \\
&\;=\; \mathfrak{C}_{{\rm d} s_I} \Bigl( \bigl( \Lambda_{{\rm d} \iota^\alpha_I}(1\wedge \eta_{\ker}) \wedge \Lambda_{{\rm d} \iota^\alpha_I}(\eta_{\perp}) \bigr) \otimes \bigl( \zeta_{{\rm coker\,}} \wedge \zeta_{{\rm im\,}} \bigr) \Bigr) \\
&\;=\; \mathfrak{C}_{{\rm d} s_I} \Bigl( \Lambda_{{\rm d} \iota^\alpha_I} \bigl( 1\wedge \eta_{\ker} \wedge \eta_{\perp} \bigr) \otimes \bigl( \zeta_{{\rm coker\,}} \wedge \zeta_{{\rm im\,}} \bigr) \Bigr) \\
&\;=\; \Bigl( \mathfrak{C}_{{\rm d} s_I} \circ \widetilde I^\alpha_I \Bigr) \Bigl( \bigl( \eta_{\ker} \wedge \eta_{\perp} \bigr) \otimes \bigl( \zeta_{{\rm coker\,}} \wedge \zeta_{{\rm im\,}} \bigr) \Bigr).
\end{align*}
Secondly, we check that the contraction isomorphisms for the surjective maps ${\rm d}_{\iota^\alpha_I(z,\alpha)}(s_I +\nu_I)$ and ${\rm d}_z (s^\alpha_I +\nu^\alpha_I)$ intertwine $\widetilde I^\alpha_I (z)$ with the boundary isomorphism ${\tilde j}^\alpha_I = \Lambda_{{\rm d}\iota^\alpha_I}\circ \wedge_1$ from $\Lambda^{\rm max}\, {\rm T}_z Z^\alpha_I$ to $\Lambda^{\rm max}\, {\rm T}_{\iota^\alpha_I(z,\alpha)}Z_I$.
For that purpose we also use the product form of $\nu_I$ in terms of $\nu^\alpha_I$ to check the corresponding identity of maps $\Lambda^{\rm max}\, {\rm T} U_I \otimes \bigl( \Lambda^{\rm max}\, E_I \bigr)^* \big|_{\iota^\alpha_I(Z^\alpha_I)} \to \Lambda^{\rm max}\, {\rm T} Z^\alpha_I$ at a fixed vector of the form $\bigl( \Lambda_{{\rm d}\iota^\alpha_{\varepsilon}}(1\wedge \eta_{\ker}) \wedge \Lambda_{{\rm d}\iota^\alpha_{\varepsilon}} (\eta_{\perp}) \bigr) \otimes \zeta$ with $\eta_{\ker} \in \Lambda^{\rm max}\,\ker{\rm d} (s^\alpha_I + \nu^\alpha_I)$:
\begin{align*}
& \Bigl( (\tilde j^\alpha_I)^{-1} \circ \mathfrak{C}_{{\rm d} (s_I +\nu_I)} \Bigr) \Bigl( \bigl( \Lambda_{{\rm d}\iota^\alpha_{\varepsilon}}(1\wedge \eta_{\ker}) \wedge \Lambda_{{\rm d}\iota^\alpha_{\varepsilon}}(\eta_{\perp}) \bigr) \otimes \zeta \Bigr) \\
&\;=\; \mathfrak{c}_{{\rm d} (s_I +\nu_I)} \bigl(\Lambda_{{\rm d}\iota^\alpha_{\varepsilon}}( \eta_{\perp}) , \zeta \bigr) \cdot (\tilde j^\alpha_I)^{-1} \bigl( \Lambda_{{\rm d}\iota^\alpha_{\varepsilon}}(1\wedge \eta_{\ker})\bigr) \\
&\;=\; \mathfrak{c}_{{\rm d} (s^\alpha_I +\nu^\alpha_I)} \bigl( \eta_{\perp} ,\zeta \bigr) \cdot \eta_{\ker} \\
&\;=\; \mathfrak{C}_{{\rm d} (s^\alpha_I +\nu^\alpha_I)} \Bigl( \bigl(\eta_{\ker} \wedge \eta_{\perp} \bigr) \otimes \zeta \Bigr) \\
& \;=\; \Bigl( \mathfrak{C}_{{\rm d} (s^\alpha_I +\nu^\alpha_I)} \circ \bigl(\widetilde I^\alpha_I \bigr)^{-1} \Bigr) \Bigl( \bigl( \Lambda_{{\rm d}\iota^\alpha_{\varepsilon}}(1\wedge \eta_{\ker}) \wedge \Lambda_{{\rm d}\iota^\alpha_{\varepsilon}}(\eta_{\perp}) \bigr) \otimes \zeta \Bigr) .
\end{align*}
This proves \eqref{oclaim} and hence finishes the proof.
\end{proof}

\subsection{Construction of Virtual Moduli Cycle and Fundamental Class}\label{ss:VFC} \hspace{1mm}\\
We are finally in a position to prove Theorem~B. We begin with its first part, which defines the virtual moduli cycle (VMC) as a cobordism class of closed oriented manifolds. After a discussion of \v{C}ech homology, we then construct the virtual fundamental class (VFC) as a \v{C}ech homology class.

\begin{thm}\label{thm:VMC1}
Let $X$ be a compact metrizable space.
\begin{enumerate}
\item Let ${\mathcal K}$ be an oriented, additive weak Kuranishi atlas of dimension $D$ on $X$. Then there exists a preshrunk tame shrinking ${\mathcal K}_{\rm sh}$ of ${\mathcal K}$, an admissible metric on $|{\mathcal K}_{\rm sh}|$, a reduction ${\mathcal V}$ of ${\mathcal K}_{\rm sh}$, and a strongly adapted, admissible, precompact, transverse perturbation $\nu$ of $s_{{\mathcal K}_{\rm sh}}|_{\mathcal V}$.
\item For any choice of data as in (i), the perturbed zero set $|{\bf Z}_\nu|$ is an oriented compact manifold (without boundary) of dimension $D$.
\item Let ${\mathcal K}^0,{\mathcal K}^1$ be two oriented, additive weak Kuranishi atlases on $X$ that are oriented, additively cobordant. Then, for any choices of strongly adapted perturbations $\nu^\alpha$ as in (i) for $\alpha=0,1$, the perturbed zero sets are cobordant (as oriented closed manifolds),
$$
|{\bf Z}_{\nu^0}| \;\sim\; |{\bf Z}_{\nu^1}|.
$$
\end{enumerate}
\end{thm}

\begin{proof}
Proposition~\ref{prop:proper} provides a shrinking ${\mathcal K}'$ of ${\mathcal K}$, which is a tame Kuranishi atlas.
Then Proposition~\ref{prop:metric}, again using this theorem, provides a precompact tame shrinking ${\mathcal K}_{\rm sh}$ of ${\mathcal K}'$, in other words a preshrunk shrinking of ${\mathcal K}$, and equips it with an admissible metric. The orientation of ${\mathcal K}$ then induces an orientation of ${\mathcal K}_{\rm sh}$ by Lemma~\ref{le:cK}. Moreover, Proposition~\ref{prop:cov2}~(a) provides a reduction ${\mathcal V}$ of ${\mathcal K}_{\rm sh}$, and by Lemma~\ref{le:delred} we find another reduction ${\mathcal C}$ with precompact inclusion ${\mathcal C}\sqsubset {\mathcal V}$, i.e.\ a nested reduction. Then we may apply Proposition~\ref{prop:ext} with $\sigma=\sigma_{\rm rel}(\delta,{\mathcal V}\times[0,1],{\mathcal C}\times[0,1])$ to obtain a strongly adapted, admissible, transverse perturbation $\nu$ with $\pi_{\mathcal K}\bigl((s+\nu)^{-1}(0)\bigr) \subset \pi_{{\mathcal K}}({\mathcal C})$. This proves (i).

Part (ii) holds in this setting since Proposition~\ref{prop:zeroS0} shows that $|{\bf Z}_\nu|$ is a smooth closed $D$-dimensional manifold, which is oriented by Proposition~\ref{prop:orient1}~(i).

To prove (iii) we will use transitivity of the cobordism relation for oriented closed manifolds to prove increasing independence of choices in the following Steps 1--4.

\medskip\noindent
{\bf Step 1:} {\it For a fixed oriented, metric, tame Kuranishi atlas $({\mathcal K},d)$, nested reductions ${\mathcal C}\sqsubset{\mathcal V}$, and $0<\delta<\delta_{\mathcal V}$, $0<\sigma\leq \sigma_{\rm rel}(\delta,{\mathcal V}\times[0,1],{\mathcal C}\times[0,1])$, the cobordism class of $|{\bf Z}_\nu|$ is independent of the choice of $({\mathcal V},{\mathcal C},\delta,\sigma)$-adapted perturbation~$\nu$.}

\medskip
To prove this we fix $({\mathcal K},d)$, ${\mathcal C}\sqsubset{\mathcal V}$, $\delta$, and $\sigma$, consider two $({\mathcal V},{\mathcal C},\delta,\sigma)$-adapted perturbations $\nu^0,\nu^1$, and need to find an oriented cobordism $|{\bf Z}_{\nu^0}| \sim |{\bf Z}_{\nu^1}|$. For that purpose we apply Proposition~\ref{prop:ext2}~(ii) to the Kuranishi cobordism ${\mathcal K}\times[0,1]$ with product metric and nested product reductions ${\mathcal C}\times[0,1]\sqsubset {\mathcal V}\times[0,1]$ to obtain an admissible, precompact, transverse cobordism perturbation $\nu^{01}$ of $s_{{\mathcal K}\times[0,1]}|_{{\mathcal V}\times[0,1]}$ with boundary restrictions $\nu^{01}|_{{\mathcal V}\times\{\alpha\}}=\nu^\alpha$ for $\alpha=0,1$. Here we use the fact that $\delta_{{\mathcal V}\times[0,1]}=\delta_{\mathcal V}>\delta$ by Lemma~\ref{le:admin}~(ii). Moreover, by Lemma~\ref{le:cK}~(iii) the orientation of ${\mathcal K}$ induces an orientation of ${\mathcal K}\times[0,1]$, whose restriction to the boundaries $\partial^\alpha( {\mathcal K}\times[0,1]) ={\mathcal K}$ equals the given orientation on ${\mathcal K}$. Finally, Lemma~\ref{le:czeroS0} together with Proposition~\ref{prop:orient1}~(ii) imply that $|{\bf Z}_{\nu^{01}}|$ is the required oriented cobordism from $|{\bf Z}_{\nu^{01}|_{{\mathcal V}\times\{0\}}}|=|{\bf Z}_{\nu^0}|$ to $|{\bf Z}_{\nu^{01}|_{{\mathcal V}\times\{1\}}}|=|{\bf Z}_{\nu^1}|$.
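Step 1 is the Kuranishi-atlas version of a standard homotopy argument from finite-dimensional differential topology, which may be worth recalling as a guide; the bundle and sections below are purely illustrative and are not part of the construction above. If $E\to M$ is an oriented vector bundle over a closed oriented manifold, $s$ is a smooth section, and $\nu^0,\nu^1$ are perturbations with $s+\nu^\alpha$ transverse to the zero section, then any section $\nu^{01}$ of the pullback of $E$ to $M\times[0,1]$ with $\nu^{01}|_{M\times\{\alpha\}}=\nu^\alpha$ and with $s+\nu^{01}$ transverse (for instance a generic perturbation, relative to the boundary, of $(x,t)\mapsto(1-t)\,\nu^0(x)+t\,\nu^1(x)$) has zero set
$$
\widetilde Z\;:=\;\bigl\{(x,t)\in M\times[0,1]\;\big|\;s(x)+\nu^{01}(x,t)=0\bigr\},\qquad
\partial\widetilde Z\;=\;\bigl((s+\nu^0)^{-1}(0)\times\{0\}\bigr)\,\cup\,\bigl((s+\nu^1)^{-1}(0)\times\{1\}\bigr),
$$
a compact oriented cobordism between the two perturbed zero sets. Proposition~\ref{prop:ext2} plays the role of this genericity statement here, with the additional requirements of admissibility, precompactness, and compatibility with the boundary restrictions.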
\medskip\noindent
{\bf Step 2:} {\it For a fixed oriented, metric, tame Kuranishi atlas $({\mathcal K},d)$ and nested reductions ${\mathcal C}\sqsubset{\mathcal V}$, the cobordism class of $|{\bf Z}_\nu|$ is independent of the choice of strongly adapted perturbation~$\nu$ with respect to ${\mathcal C}\sqsubset{\mathcal V}$.}

\medskip
To prove this we fix $({\mathcal K},d)$ and ${\mathcal C}\sqsubset{\mathcal V}$ and consider two $({\mathcal V},{\mathcal C},\delta^\alpha,\sigma^\alpha)$-adapted perturbations $\nu^\alpha$ for $0<\delta^\alpha<\delta_{\mathcal V}$, $0<\sigma^\alpha\leq \sigma_{\rm rel}(\delta^\alpha,{\mathcal V}\times[0,1],{\mathcal C}\times[0,1])$, and $\alpha=0,1$. Then we need to find an oriented cobordism $|{\bf Z}_{\nu^0}| \sim |{\bf Z}_{\nu^1}|$. To do this, first note that we evidently have $\delta:=\max(\delta^0,\delta^1)<\delta_{\mathcal V}=\delta_{{\mathcal V}\times[0,1]}$ by Lemma~\ref{le:admin}~(ii) with respect to the product metric on $|{\mathcal K}|\times[0,1]$. Next, we have $\delta=\delta^\alpha$ for some $\alpha\in\{0,1\}$ and hence $\sigma_{\rm rel}(\delta,{\mathcal V}\times[0,1],{\mathcal C}\times[0,1])=\sigma_{\rm rel}(\delta^\alpha,{\mathcal V}\times[0,1],{\mathcal C}\times[0,1])\ge\sigma^\alpha\geq \min(\sigma^0,\sigma^1)$. Now choose $\sigma \leq \min\{ \sigma^0,\sigma^1, \sigma_{\rm rel}(\delta,{\mathcal V}\times[0,1],{\mathcal C}\times[0,1]) \}$. Then Proposition~\ref{prop:ext2}~(i) provides an admissible, precompact, transverse cobordism perturbation $\nu^{01}$ of $s_{{\mathcal K}\times[0,1]}|_{{\mathcal V}\times[0,1]}$, whose restrictions $\nu^{01}|_{{\mathcal V}\times\{\alpha\}}$ for $\alpha=0,1$ are $({\mathcal V},{\mathcal C},\delta,\sigma)$-adapted perturbations of $s_{\mathcal K}|_{\mathcal V}$. Since $\delta^\alpha\leq \delta$ and $\sigma \leq \sigma^\alpha \leq \sigma_{\rm rel}(\delta^\alpha,{\mathcal V}\times[0,1],{\mathcal C}\times[0,1])$, they are also $({\mathcal V},{\mathcal C},\delta^\alpha,\sigma^\alpha)$-adapted. Then, as in Step~1, the perturbed zero set $|{\bf Z}_{\nu^{01}}|$ is an oriented cobordism from $|{\bf Z}_{\nu^{01}|_{{\mathcal V}\times\{0\}}}|$ to $|{\bf Z}_{\nu^{01}|_{{\mathcal V}\times\{1\}}}|$. Moreover, for fixed $\alpha\in\{0,1\}$ both the restriction $\nu^{01}|_{{\mathcal V}\times\{\alpha\}}$ and the given perturbation $\nu^\alpha$ are $({\mathcal V},{\mathcal C},\delta^\alpha,\sigma^\alpha)$-adapted, so that Step 1 provides cobordisms $|{\bf Z}_{\nu^0}| \sim |{\bf Z}_{\nu^{01}|_{{\mathcal V}\times\{0\}}}|$ and $|{\bf Z}_{\nu^{01}|_{{\mathcal V}\times\{1\}}}| \sim |{\bf Z}_{\nu^1}|$. By transitivity of the cobordism relation this proves $|{\bf Z}_{\nu^0}| \sim |{\bf Z}_{\nu^1}|$ as claimed.

\medskip\noindent
{\bf Step 3:} {\it For a fixed oriented, tame Kuranishi atlas ${\mathcal K}$, the cobordism class of $|{\bf Z}_\nu|$ is independent of the choice of admissible metric and strongly adapted perturbation~$\nu$.
}

\medskip
To prove this we fix ${\mathcal K}$ and consider two $({\mathcal V}^\alpha,{\mathcal C}^\alpha,\delta^\alpha,\sigma^\alpha)$-adapted perturbations $\nu^\alpha$ with respect to nested reductions ${\mathcal C}^\alpha\sqsubset{\mathcal V}^\alpha$ and constants $0<\delta^\alpha<\delta_{{\mathcal V}^\alpha}$, $0<\sigma^\alpha\leq \sigma_{\rm rel}(\delta^\alpha,{\mathcal V}^\alpha\times[0,1],{\mathcal C}^\alpha\times[0,1])$ for $\alpha=0,1$. To find an oriented cobordism $|{\bf Z}_{\nu^0}| \sim |{\bf Z}_{\nu^1}|$ we begin by using Proposition~\ref{prop:metcoll}~(iv) to find an admissible metric $d$ on $|{\mathcal K}\times[0,1]|$ with $d|_{|{\mathcal K}|=|\partial^\alpha({\mathcal K}\times[0,1])|}=d^\alpha$. Next, we use Proposition~\ref{prop:cov2}~(c) and Lemma~\ref{le:delred} to find a nested cobordism reduction ${\mathcal C}\sqsubset{\mathcal V}$ of ${\mathcal K}\times[0,1]$ with $\partial^\alpha{\mathcal C}={\mathcal C}^\alpha$ and $\partial^\alpha{\mathcal V}={\mathcal V}^\alpha$. If we now pick any $0<\delta<\delta_{\mathcal V}$ smaller than the collar width of $d$, ${\mathcal V}$, and ${\mathcal C}$, then we automatically have $\delta<\delta_{{\mathcal V}^\alpha}$ by Lemma~\ref{le:admin}~(iii). Then for any $0<\sigma\leq\sigma_{\rm rel}(\delta,{\mathcal V},{\mathcal C})$ Proposition~\ref{prop:ext2}~(i) provides an admissible, precompact, transverse cobordism perturbation $\nu^{01}$ of $s_{{\mathcal K}\times[0,1]}|_{\mathcal V}$ whose restrictions $\nu^{01}|_{\partial^\alpha{\mathcal V}}$ for $\alpha=0,1$ are $({\mathcal V}^\alpha,{\mathcal C}^\alpha,\delta,\sigma)$-adapted perturbations of $s_{{\mathcal K}}|_{{\mathcal V}^\alpha}$. As in Step 1, the perturbed zero set $|{\bf Z}_{\nu^{01}}|$ is an oriented cobordism from $|{\bf Z}_{\nu^{01}|_{\partial^0{\mathcal V}}}|$ to $|{\bf Z}_{\nu^{01}|_{\partial^1{\mathcal V}}}|$. Moreover, we may pick $\sigma$ such that $\sigma\leq\sigma_{\rm rel}(\delta,{\mathcal V}^\alpha\times[0,1],{\mathcal C}^\alpha\times[0,1])$ for $\alpha=0,1$; then each $\nu^{01}|_{\partial^\alpha{\mathcal V}}$ is strongly adapted with respect to $d^\alpha$ and ${\mathcal C}^\alpha\sqsubset{\mathcal V}^\alpha$. Then Step 2 applies for $\alpha=0$ as well as $\alpha=1$ to provide cobordisms $|{\bf Z}_{\nu^0}| \sim |{\bf Z}_{\nu^{01}|_{\partial^0{\mathcal V}}}|$ and $|{\bf Z}_{\nu^{01}|_{\partial^1{\mathcal V}}}| \sim |{\bf Z}_{\nu^1}|$, which proves the claim by transitivity.

\medskip\noindent
{\bf Step 4:} {\it Let ${\mathcal K}^{01}$ be an oriented, additive, weak Kuranishi cobordism, and for $\alpha=0,1$ let $\nu^\alpha$ be strongly adapted perturbations of some preshrunk tame shrinking ${\mathcal K}_{\rm sh}^\alpha$ of $\partial^\alpha{\mathcal K}^{01}$ with respect to some choice of admissible metric on $|\partial^\alpha{\mathcal K}^{01}|$. Then there is an oriented cobordism of compact manifolds $|{\bf Z}_{\nu^0}| \sim |{\bf Z}_{\nu^1}|$.}

\medskip
This is proven along the lines of (i) and (ii) by first using Proposition~\ref{prop:cobord2} to find a preshrunk tame shrinking ${\mathcal K}_{\rm sh}$ of ${\mathcal K}^{01}$ with $\partial^\alpha{\mathcal K}_{\rm sh}={\mathcal K}^\alpha_{\rm sh}$, and an admissible metric $d$ on $|{\mathcal K}_{\rm sh}|$.
If we equip ${\mathcal K}_{\rm sh}$ with the orientation induced by ${\mathcal K}^{01}$, then by Lemma~\ref{le:cK} the induced boundary orientation on $\partial^\alpha{\mathcal K}_{\rm sh}={\mathcal K}^\alpha_{\rm sh}$ agrees with that induced by shrinking from ${\mathcal K}^\alpha$. Next, Proposition~\ref{prop:cov2}~(c) provides a reduction ${\mathcal V}$ of ${\mathcal K}_{\rm sh}$, and by Lemma~\ref{le:delred}~(ii) we find a nested cobordism reduction ${\mathcal C}\sqsubset {\mathcal V}$. Now we may apply Proposition~\ref{prop:ext2}~(i) with
$$
\sigma\;=\; \min\bigl\{ \sigma_{\rm rel}(\delta,\partial^0{\mathcal V}\times[0,1],\partial^0{\mathcal C}\times[0,1]) , \sigma_{\rm rel}(\delta,\partial^1{\mathcal V}\times[0,1],\partial^1{\mathcal C}\times[0,1]) , \sigma_{\rm rel}(\delta,{\mathcal V},{\mathcal C}) \bigr\}
$$
to find an admissible, precompact, transverse cobordism perturbation $\nu^{01}$ of $s_{{\mathcal K}_{\rm sh}}|_{\mathcal V}$, whose restrictions $\nu^{01}|_{\partial^\alpha{\mathcal V}}$ for $\alpha=0,1$ are $(\partial^\alpha{\mathcal V},\partial^\alpha{\mathcal C},\delta,\sigma)$-adapted perturbations of $s_{{\mathcal K}^\alpha_{\rm sh}}|_{\partial^\alpha{\mathcal V}}$. In particular, these are strongly adapted by the choice of $\sigma$. Also, as in the previous steps, $|{\bf Z}_{\nu^{01}}|$ is an oriented cobordism from $|{\bf Z}_{\nu^{01}|_{\partial^0{\mathcal V}}}|$ to $|{\bf Z}_{\nu^{01}|_{\partial^1{\mathcal V}}}|$. Finally, Step 3 applies to the fixed oriented, tame Kuranishi atlases ${\mathcal K}^\alpha_{\rm sh}$ for fixed $\alpha\in\{0,1\}$ to provide cobordisms $|{\bf Z}_{\nu^0}| \sim |{\bf Z}_{\nu^{01}|_{\partial^0{\mathcal V}}}|$ and $|{\bf Z}_{\nu^{01}|_{\partial^1{\mathcal V}}}| \sim |{\bf Z}_{\nu^1}|$. By transitivity, this finishes the proof of Theorem~\ref{thm:VMC1}.
\end{proof}

One possible definition of the virtual fundamental class (VFC) is as the cobordism class of the zero set $|{\bf Z}_\nu|$ constructed in the previous theorem. If we think of this as an abstract manifold, and hence as representing an element in the $D$-dimensional oriented cobordism ring, it contains rather little information. These notions are barely sufficient for the basic constructions of e.g.\ Floer differentials $\partial_F$ from counts of moduli spaces with $D=0$, and proofs of algebraic relations such as $\partial_F\circ\partial_F=0$ by cobordisms with $D=1$. However, in the case of interest to us, in which $X = {\overline{\mathcal M}}_{g,k}(A,J)$ is the Gromov--Witten moduli space of $J$-holomorphic curves of genus $g$, homology class $A$, and with $k\geq 1$ marked points, we explained in Section~\ref{s:construct} how to construct the domains $U_I$ of the Kuranishi charts for $X$ to have elements that are $k$-pointed stable maps to $(M,\omega)$, so that there are evaluation maps ${\rm ev}_I: U_I \to M^k$. Further, the coordinate changes can be made compatible with these evaluation maps, and Kuranishi cobordisms can be constructed so that the evaluation maps extend over them. Hence, after shrinking to a tame Kuranishi atlas (or cobordism) ${\mathcal K}_{\rm sh}$, there is a continuous evaluation map
$$
{\rm ev}: |{\mathcal K}_{\rm sh}| \to M^k
$$
both for the fixed tame shrinking used to define $|{\bf Z}_\nu|$ and for any shrinking of a weak Kuranishi cobordism compatible with evaluation maps.
Therefore, for any admissible, precompact, transverse perturbation $\nu$, the map ${\rm ev}:|{\bf Z}_\nu|\to M^k$ can be considered as a cycle (the virtual moduli cycle VMC) in the singular homology $H_D(M^k)$ of $M^k$, or even as a cycle in the bordism theory of $M^k$. Similarly, in this case a possible definition of the VFC is as the corresponding singular homology (or bordism) class in $M^k$.

Finally, we will explain how to interpret the VFC more intrinsically as an element in the rational \v{C}ech homology $\check{H}_D(X;{\mathbb Q})$ of the compact metrizable space $X$, i.e.\ in the Gromov--Witten example the moduli space itself. As a first step, we associate to every oriented, metric, tame Kuranishi atlas a $D$-dimensional homology class in any open neighbourhood ${\mathcal W}\subset |{\mathcal K}|$ of $\iota_{\mathcal K}(X)$ within the virtual neighbourhood. For that purpose recall from \eqref{eq:Zinject} that for any precompact, transverse perturbation $\nu$ of $s_{\mathcal K}|_{\mathcal V}$, the inclusion $(s+\nu)^{-1}(0)\subset{\mathcal V} = {\rm Obj}_{{\bf B}_{\mathcal K}|_{\mathcal V}}$ induces a continuous injection $i_\nu: |{\bf Z}_\nu| \to |{\mathcal K}|$, which we now compose with the continuous bijection $|{\mathcal K}| \to (|{\mathcal K}|,d)$ from Lemma~\ref{le:metric} to obtain a continuous injection
\begin{equation}\label{ionu}
\iota_\nu \,:\; |{\bf Z}_\nu| \;\longrightarrow\; \bigl(|{\mathcal K}|,d\bigr) .
\end{equation}
Since $|{\bf Z}_\nu|$ is compact and the restriction of the metric topology to the image $\iota_\nu(|{\bf Z}_\nu|)\subset(|{\mathcal K}|,d)$ is Hausdorff, this map is in fact a homeomorphism onto its image; see Remark~\ref{rmk:hom}, and compare with Proposition~\ref{prop:zeroS0}, which notes that $i_\nu:|{\bf Z}_\nu|\to |{\mathcal K}|$ is a homeomorphism onto its image. If moreover $|{\bf Z}_\nu|$ is oriented, then it has a fundamental class $\bigl[|{\bf Z}_\nu|\bigr]\in H_D(|{\bf Z}_\nu|)$. Now we obtain a homology class by pushing forward into any appropriate subset of $(|{\mathcal K}|,d)$,
$$
[\iota_\nu] := (\iota_\nu)_* \bigl[|{\bf Z}_\nu|\bigr] \in H_D({\mathcal W}) \qquad\text{for} \quad \iota_\nu(|{\bf Z}_\nu|) \subset {\mathcal W} \subset \bigl(|{\mathcal K}|,d\bigr) .
$$
Analogously, any precompact, transverse perturbation $\nu^{01}$ of a metric, tame Kuranishi cobordism $\bigl({\mathcal K}^{01}, d^{01}\bigr)$ gives rise to a topological embedding
\begin{equation}\label{cionu}
\iota_{\nu^{01}} \,:\; |{\bf Z}_{\nu^{01}}| \;\longrightarrow\; \bigl( |{\mathcal K}^{01}| , d^{01}\bigr) .
\end{equation}
Now by Lemma~\ref{le:czeroS0} the boundary of the cobordism $|{\bf Z}_{\nu^{01}}|$ has two disjoint (but not necessarily connected) components
$$
\partial |{\bf Z}_{\nu^{01}}| \;=\; \partial^0 |{\bf Z}_{\nu^{01}}| \;\cup\; \partial^1|{\bf Z}_{\nu^{01}}|, \qquad \partial^\alpha |{\bf Z}_{\nu^{01}}| \,:=\; \partial^\alpha |{\mathcal K}^{01}| \cap |{\bf Z}_{\nu^{01}}| .
$$
In fact, we also showed there that the embeddings $J^\alpha:=\rho^\alpha(\cdot,\alpha): |\partial^\alpha{\mathcal K}^{01}| \to \partial^\alpha|{\mathcal K}^{01}| \subset |{\mathcal K}^{01}|$ restrict to diffeomorphisms
$$
J^\alpha|_{|{\bf Z}_{\nu^\alpha}|}= |j^\alpha| \,:\; |{\bf Z}_{\nu^\alpha}| \;\longrightarrow\;\partial^\alpha|{\bf Z}_{\nu^{01}}| \qquad\text{with}\quad \iota_{\nu^{01}} \circ |j^\alpha| = J^\alpha \circ \iota_{\nu^\alpha} ,
$$
where $\nu^\alpha:= \nu^{01}|_{\partial^\alpha{\mathcal K}^{01}}$ are the restricted perturbations of the Kuranishi atlases $\partial^\alpha{\mathcal K}^{01}$. Moreover, Proposition~\ref{prop:orient1}~(ii) asserts that the boundary orientations on $\partial^\alpha |{\bf Z}_{\nu^{01}}|$ (which are induced by the orientation of $|{\bf Z}_{\nu^{01}}|$ arising from the orientation of ${\mathcal K}^{01}$) are related to the orientations of $|{\bf Z}_{\nu^\alpha}|$ (which are induced by the orientation of $\partial^\alpha{\mathcal K}^{01}$ obtained by restriction from the orientation of ${\mathcal K}^{01}$) by
$$
|j^0| \,:\; |{\bf Z}_{\nu^0}|^- \;\overset{\cong}{\longrightarrow}\; \partial^0 |{\bf Z}_{\nu^{01}}| \qquad \text{and}\qquad |j^1| \,:\; |{\bf Z}_{\nu^1}| \;\overset{\cong}{\longrightarrow}\; \partial^1 |{\bf Z}_{\nu^{01}}|.
$$
In terms of the fundamental classes this yields the identity
\begin{align*}
|j^1|_*\bigl[ |{\bf Z}_{\nu^1}|\bigr] - |j^0|_*\bigl[ |{\bf Z}_{\nu^0}|\bigr] &\;=\; \bigl[\partial^1 |{\bf Z}_{\nu^{01}}|\bigr] + \bigl[\partial^0 |{\bf Z}_{\nu^{01}}|\bigr] \\
&\;=\; \bigl[\partial |{\bf Z}_{\nu^{01}}|\bigr] \;=\; \delta \bigl[|{\bf Z}_{\nu^{01}}|\bigr] \;\in\; H_D( \partial |{\bf Z}_{\nu^{01}}|)
\end{align*}
for the boundary map $\delta: H_{D+1}\bigl( |{\bf Z}_{\nu^{01}}| , \partial |{\bf Z}_{\nu^{01}}| \bigr) \to H_D( \partial |{\bf Z}_{\nu^{01}}|)$ that is part of the long exact sequence of the pair $\partial |{\bf Z}_{\nu^{01}}|\subset |{\bf Z}_{\nu^{01}}|$. Inclusion into $|{\bf Z}_{\nu^{01}}|$ now provides, by exactness of this sequence, $|j^1|_*\bigl[ |{\bf Z}_{\nu^1}|\bigr] - |j^0|_*\bigl[ |{\bf Z}_{\nu^0}|\bigr] = 0 \in H_D( |{\bf Z}_{\nu^{01}}|)$. Finally, we can push this forward with $\iota_{\nu^{01}}$ to $H_D(|{\mathcal K}^{01}|)$ and use the fact that $\iota_{\nu^{01}} \circ |j^\alpha| = J^\alpha \circ \iota_{\nu^\alpha}$ to obtain
\begin{align*}
0 &\;=\; (\iota_{\nu^{01}})_*|j^1|_*\bigl[ |{\bf Z}_{\nu^1}|\bigr] - (\iota_{\nu^{01}})_*|j^0|_*\bigl[ |{\bf Z}_{\nu^0}|\bigr] \\
&\;=\; J^1_*(\iota_{\nu^{1}})_*\bigl[ |{\bf Z}_{\nu^1}|\bigr] - J^0_*(\iota_{\nu^{0}})_*\bigl[ |{\bf Z}_{\nu^0}|\bigr] \;=\; J^1_*\bigl[\iota_{\nu^{1}}\bigr] - J^0_*\bigl[\iota_{\nu^{0}}\bigr] .
\end{align*}
The same holds in $H_D({\mathcal W}^{01})$ for any subset ${\mathcal W}^{01}\subset\bigl(|{\mathcal K}^{01}|,d^{01}\bigr)$ that contains $\iota_{\nu^{01}}(|{\bf Z}_{\nu^{01}}|)$, that is
\begin{equation}\label{homologous}
J^0_* [\iota_{\nu^0}] \;=\; J^1_* [\iota_{\nu^1}] \;\in\; H_D({\mathcal W}^{01}) \qquad\text{when} \quad \iota_{\nu^{01}}(|{\bf Z}_{\nu^{01}}|) \subset {\mathcal W}^{01} \subset |{\mathcal K}^{01}| .
\end{equation}
This will be crucial for proving independence of the VFC from choices.
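The lowest-dimensional case $D=0$ may clarify what \eqref{homologous} asserts; the following reformulation is an elementary illustration rather than part of the argument. A compact oriented $0$-manifold is a finite collection of points with signs, and its fundamental class in the singular homology of a path-connected space is the signed count of its points times the class of a point. Thus, if ${\mathcal W}^{01}$ is path-connected (otherwise one argues on each path component), \eqref{homologous} states that
$$
\sum_{z\in|{\bf Z}_{\nu^0}|}{\rm sign}(z)\;=\;\sum_{z\in|{\bf Z}_{\nu^1}|}{\rm sign}(z)\;\in\;{\mathbb Z},
$$
where ${\rm sign}(z)=\pm 1$ is determined by the orientation of $|{\bf Z}_{\nu^\alpha}|$. That is, the signed count of perturbed solutions is independent of the choice of perturbation, which is the elementary mechanism behind the invariance of the counts mentioned above in connection with Floer differentials.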
In the case of a product cobordism ${\mathcal K}^{01}={\mathcal K}\times[0,1]$ with product metric we can identify $|{\mathcal K}^{01}|\cong |{\mathcal K}|\times [0,1]$, so that \eqref{cionu} also induces a cycle
\begin{equation}\label{pionu}
{\rm pr}_{|{\mathcal K}|}\circ \iota_{\nu^{01}} \, :\; |{\bf Z}_{\nu^{01}}|\; \longrightarrow\; (|{\mathcal K}|,d) ,
\end{equation}
whose boundary restrictions are $\iota_{\nu^\alpha} \circ |j^\alpha|^{-1}$, so that the above argument directly gives
\begin{equation}\label{pomologous}
[\iota_{\nu^0}] = [\iota_{\nu^1}] \in H_D({\mathcal W}) \qquad\text{when} \quad \iota_{\nu^{01}}(|{\bf Z}_{\nu^{01}}|) \subset {\mathcal W}\times[0,1].
\end{equation}
Now we can associate a well defined virtual fundamental class to any choice of open neighbourhood ${\mathcal W}$ of $X$ in the virtual neighbourhood induced by a fixed tame Kuranishi atlas.

\begin{lemma}\label{le:VMC1}
Let $({\mathcal K},d)$ be an oriented, metric, tame Kuranishi atlas and let ${\mathcal W}\subset |{\mathcal K}|$ be an open subset with respect to the metric topology such that $\iota_{\mathcal K}(X)\subset {\mathcal W}$. Then there exists a strongly adapted perturbation $\nu$ in a suitable reduction of ${\mathcal K}$ such that $\pi_{\mathcal K}((s+\nu)^{-1}(0))\subset {\mathcal W}$. More precisely, there exists $\nu$ adapted with respect to a nested reduction ${\mathcal C}\sqsubset{\mathcal V}$ such that $\pi_{\mathcal K}({\mathcal C})\subset{\mathcal W}$. For any such perturbation, the inclusion of the perturbed zero set $\iota_\nu : |{\bf Z}_\nu|\hookrightarrow {\mathcal W} \subset (|{\mathcal K}|,d)$ defines a singular homology class
$$
A^{({\mathcal K},d)}_{{\mathcal W}} \,:=\; \bigl[\iota_\nu : |{\bf Z}_\nu| \to {\mathcal W} \bigr] \;\in\; H_D({\mathcal W})
$$
that is independent of the choice of reductions and perturbation.
\end{lemma}

\begin{proof}
To see that the required perturbations exist, choose any nested reduction ${\mathcal C} \sqsubset {\mathcal V}$ of ${\mathcal K}$. Then we obtain a further precompact open set
$$
{\mathcal C}_{\mathcal W} \,:=\; {\mathcal C} \cap \pi_{\mathcal K}^{-1}({\mathcal W}) \;\sqsubset\; {\mathcal V}
$$
which, after discarding components $C_I \cap \pi_{\mathcal K}^{-1}({\mathcal W})$ that have empty intersection with $s_I^{-1}(0)$, forms another nested reduction since $\iota_{\mathcal K}(X)\subset\pi_{\mathcal K}({\mathcal C})\cap {\mathcal W}$. Now Proposition~\ref{prop:ext} guarantees the existence of a $({\mathcal V},{\mathcal C},\delta,\sigma)$-adapted perturbation $\nu$ for sufficiently small $\delta,\sigma>0$, and by choice of $\sigma$ we can ensure that $\nu$ is also strongly adapted. By Proposition~\ref{prop:zeroS0} and Proposition~\ref{prop:orient1}~(i) its perturbed zero set is an oriented manifold $|{\bf Z}_\nu|$. Moreover, the image of $\iota_\nu : |{\bf Z}_\nu| \to (|{\mathcal K}|,d)$ is $\pi_{\mathcal K}\bigl((s+\nu)^{-1}(0)\bigr) \subset\pi_{\mathcal K}({\mathcal C}) \subset {\mathcal W}$, so that by the discussion above $\iota_\nu : |{\bf Z}_\nu| \to {\mathcal W}$ defines a cycle $\bigl[\iota_\nu \bigr] \in H_D({\mathcal W})$.
To prove independence of the choices, we need to show that $\bigl[\iota_{\nu^0}\bigr]=\bigl[\iota_{\nu^1}\bigr]$ for any given $({\mathcal V}^\alpha,{\mathcal C}^\alpha,\delta^\alpha,\sigma^\alpha)$-adapted perturbation $\nu^\alpha$ of $s_{\mathcal K}|_{{\mathcal V}^\alpha}$ with $\pi_{\mathcal K}({\mathcal C}^\alpha)\subset {\mathcal W}$, by adapting Steps 1--3 in the proof of Theorem~\ref{thm:VMC1} so that the cycles $\iota_{\nu^{01}}:|{\bf Z}_{\nu^{01}}| \to |{\mathcal K}\times[0,1]|$ given by \eqref{cionu} for the respective cobordism perturbations $\nu^{01}$ of $s_{{\mathcal K}\times[0,1]}|_{{\mathcal V}^{[0,1]}}$ take values in ${\mathcal W}\times[0,1]\subset |{\mathcal K}\times[0,1]|$. Note that here we use the product metric on $|{\mathcal K}|\times [0,1]$, so that ${\mathcal W}\times [0,1]$ is open. Then in each step the composite map ${\rm pr}_{|{\mathcal K}|} \circ\iota_{\nu^{01}} : |{\bf Z}_{\nu^{01}}| \to |{\mathcal K}|$ takes values in ${\mathcal W}$, so that \eqref{pomologous} applies to give $\bigl[\iota_{\nu^{01}|_{\partial^0{\mathcal V}^{[0,1]}}}\bigr]=\bigl[\iota_{\nu^{01}|_{\partial^1{\mathcal V}^{[0,1]}}}\bigr]\in H_D({\mathcal W})$. By transitivity of equality in $H_D({\mathcal W})$, Steps 1--3 then prove $\bigl[\iota_{\nu^0}\bigr]=\bigl[\iota_{\nu^1}\bigr]$.

In Steps 1 and 2, the required inclusion is automatic since the perturbations are constructed so that $|(s+\nu^{01})^{-1}(0)| \subset \pi_{{\mathcal K}\times[0,1]}({\mathcal C}\times[0,1])\subset{\mathcal W}\times[0,1]$, where the second inclusion follows from $\pi_{\mathcal K}({\mathcal C})\subset{\mathcal W}$. In Step 3, we have a fixed metric $d^0=d^1=d$ but are given nested reductions ${\mathcal C}^\alpha\sqsubset{\mathcal V}^\alpha$ for $\alpha=0,1$ with $\pi_{\mathcal K}({\mathcal C}^\alpha)\subset{\mathcal W}$. Then we first equip $|{\mathcal K}\times[0,1]|\cong |{\mathcal K}|\times[0,1]$ with the product metric and choose a nested cobordism reduction ${\mathcal C}^{[0,1]}\sqsubset {\mathcal V}^{[0,1]}$ such that $\partial^\alpha {\mathcal C}^{[0,1]}={\mathcal C}^\alpha$ and $\partial^\alpha {\mathcal V}^{[0,1]}={\mathcal V}^\alpha$, and then replace ${\mathcal C}^{[0,1]}$ by ${\mathcal C}': = {\mathcal C}^{[0,1]}\cap \pi_{{\mathcal K}\times [0,1]}^{-1}( {\mathcal W}\times [0,1])$, discarding components $C'_I$ with ${C'_I\cap s_I^{-1}(0)=\emptyset}$. Note that this construction preserves the collar boundary $\partial^\alpha{\mathcal C}'=\partial^\alpha {\mathcal C}^{[0,1]}={\mathcal C}^\alpha$ since $\pi_{\mathcal K}({\mathcal C}^\alpha) \subset{\mathcal W}$. In fact, ${\mathcal C}'\sqsubset{\mathcal V}^{[0,1]}$ is another nested cobordism reduction since $\iota_{{\mathcal K}\times[0,1]}(X\times[0,1]) \subset {\mathcal W}\times[0,1]$. Using the nested reduction ${\mathcal C}'\sqsubset{\mathcal V}^{[0,1]}$ in choosing the cobordism perturbation $\nu$ then ensures that $\iota_\nu: |{\bf Z}_\nu|\to |{\mathcal K}\times[0,1]|=|{\mathcal K}|\times[0,1]$ takes values in $\pi_{{\mathcal K}\times [0,1]}({\mathcal C}')\subset {\mathcal W}\times [0,1]$, as required to finish the proof.
\end{proof} To construct the VFC as a homology class in ${\iota}_{\mathcal K}(X)$ for tame Kuranishi atlases, and later in $X$, we use rational \v{C}ech homology, rather than integral \v{C}ech or singular homology, because it has the following continuity property. {\beta}gin{remark} {\lambda}bel{rmk:Cech} Let $X$ be a compact subset of a metric space $Y$, and let $({\mathcal W}_k\subset Y)_{k\in{\mathbb N}}$ be a sequence of open subsets that is nested, ${\mathcal W}_k\subset{\mathcal W}_{k-1}$, such that $X=\bigcap_{k\in{\mathbb N}} {\mathcal W}_k$. Then the system of maps $\check{H}_n(X;{\mathbb Q}) \to \check{H}_n({\mathcal W}_{k+1};{\mathbb Q})\to \check{H}_n({\mathcal W}_k;{\mathbb Q})$ induces an isomorphism $$ \check{H}_n(X;{\mathbb Q}) \;\overlineerset{\cong}{\longrightarrow}\; {\underline{n}}derlineerset{\leftarrow }\lim\, \check{H}_n({\mathcal W}_k;{\mathbb Q}) . $$ {\rm To see that singular homology does not have this property, let $X\subset {\mathbb R}^2$ be the union of the line segment $\{0\}\tildemes [-1,1]$, the graph $\{(x,{\sigma}n \tfrac {\partial}i x) \,|\, 0<x\le 1\}$, and an embedded curve joining $(0,1)$ to $(1,0)$ that is otherwise disjoint from the line segment and graph. Then $H_1^{sing}(X;{\mathbb Q}) = 0$ since it is the abelianization of the trivial fundamental group. However, $X$ has arbitrarily small neighbourhoods $U\subset{\mathbb R}^2$ with $H_1^{sing}(U;{\mathbb Q}) ={\mathbb Q}$. Note that we cannot work with integral \v{C}ech homology since it does not even satisfy the exactness axiom (long exact sequence for a pair), because of problems with the inverse limit operation; see the discussion of \v{C}ech cohomology in Hatcher~\cite{Hat}, and \cite[Proposition~3F.5]{Hat} for properties of inverse limits. However, rational \v{C}ech homology does satisfy the exactness axiom, and because it is dual to \v{C}ech cohomology has the above stated continuity property by \cite[Ch.6~Exercises~D]{Span}. Further, rational \v{C}ech homology equals rational singular homology for finite simplicial complexes. Hence the fundamental class of a closed oriented $n$-manifold $M$ can be considered as an element $[M]\in \check{H}_n(M;{\mathbb Q})$ in rational \v{C}ech homology and therefore pushes forward under a continuous map $f:M\to X$ to a well defined element $f_*([M])\in \check{H}_n(X;{\mathbb Q})$. Note finally that if one wants an integral theory with this continuity property, the correct theory to use is the Steenrod homology theory developed in \cite{Mi}. } \end{remark} Using this continuity property of rational \v{C}ech homology, we can finish the proof of Theorem~B. {\beta}gin{thm}{\lambda}bel{thm:VMC2} Let ${\mathcal K}$ be an oriented, additive weak Kuranishi atlas of dimension $D$ on a compact, metrizable space $X$. {\beta}gin{enumerate} \item Let ${\mathcal K}_{\rm sh}$ be a preshrunk tame shrinking of ${\mathcal K}$ and $d$ an admissible metric on $|{\mathcal K}_{\rm sh}|$. Then there exists a nested sequence of open sets ${\mathcal W}_{k+1}\subset {\mathcal W}_k\subset \bigl(|{\mathcal K}_{\rm sh}|, d\bigr)$ such that $\bigcap_{k\in{\mathbb N}}{\mathcal W}_k = {\iota}_{{\mathcal K}_{\rm sh}}(X)$. Moreover, for any such sequence there is a sequence $\nu_k$ of strongly adapted perturbations of $s_{{\mathcal K}_{\rm sh}}$ with respect to nested reductions ${\mathcal C}_k\sqsubset{\mathcal V}_k$ such that ${\partial}i_{\mathcal K}({\mathcal C}_k)\subset{\mathcal W}_k$ for all $k$. 
Then the embeddings $$ {\iota}_{\nu_k} \,:\; |{\bf Z}_{\nu_k}| \;\hookrightarrow \; {\mathcal W}_k \;\subset\; \bigl(|{\mathcal K}_{\rm sh}|, d\bigr) $$ induce a \v{C}ech homology class $$ {\underline{n}}derlineerset{\leftarrow}\lim\, \bigl[ \, {\iota}_{\nu_k} \, \bigr] \;\in\; \check{H}_D\bigl({\iota}_{{\mathcal K}_{\rm sh}}(X);{\mathbb Q} \bigr), $$ for the subspace ${\iota}_{{\mathcal K}_{\rm sh}}(X) =|s_{{\mathcal K}_{\rm sh}}|^{-1}(0)$ of the metric space $\bigl(|{\mathcal K}_{\rm sh}|,d\bigr)$. \item The bijection $|{\partial}si_{{\mathcal K}_{\rm sh}}| = {\iota}_{{\mathcal K}_{\rm sh}}^{-1}: {\iota}_{{\mathcal K}_{\rm sh}}(X) \to X$ from Lemma~\ref{le:Knbhd1} is a homeomorphism with respect to the metric topology on ${\iota}_{{\mathcal K}_{\rm sh}}(X)$ so that we can define the {\bf virtual fundamental class} of $X$ as the pushforward $$ [X]^{\rm vir}_{\mathcal K} \,:=\; |{\partial}si_{{\mathcal K}_{\rm sh}}|_* \bigl( \, {\underline{n}}derlineerset{\leftarrow}\lim\, [\, {\iota}_{\nu_k} \,] \, \bigr) \;\in\; \check{H}_D(X;{\mathbb Q}) . $$ It is independent of the choice of shrinkings, metric, nested open sets, reductions, and perturbations in (i), and in fact depends on the weak Kuranishi atlas ${\mathcal K}$ on $X$ only up to oriented, additive cobordism. \end{enumerate} \end{thm} {\beta}gin{proof} The existence of shrinkings and metric is guaranteed by Theorem~\ref{thm:VMC1}~(i). We then obtain nested open sets converging to ${\iota}_{{\mathcal K}_{\rm sh}}(X)$ by e.g.\ taking the $\frac 1k$-neighbourhoods ${\mathcal W}_k = B_{\frac 1 k}({\iota}_{{\mathcal K}_{\rm sh}}(X))$. Given any such nested open sets $({\mathcal W}_k)_{k\in{\mathbb N}}$, the existence of adapted perturbations $\nu_k$ with respect to some nested reductions ${\mathcal C}_k\sqsubset{\mathcal V}_k$ with ${\partial}i_{{\mathcal K}_{\rm sh}}({\mathcal C}_k)\subset{\mathcal W}_k$ is proven in Lemma~\ref{le:VMC1}. The latter also shows that the embeddings ${\iota}_{\nu_k} : |{\bf Z}_{\nu_k}|\to {\mathcal W}_k$ define homology classes $A^{({\mathcal K}_{\rm sh},d)}_{{\mathcal W}_k} = [{\iota}_{\nu_k} ]\in H_D({\mathcal W}_k)$, which are independent of the choice of reductions ${\mathcal C}_k\sqsubset {\mathcal V}_k$ and adapted perturbations $\nu_k$. In particular, the push forward $H_D({\mathcal W}_{k+1})\to H_D({\mathcal W}_k)$ by inclusion ${\mathcal I}_{k+1}:{\mathcal W}_{k+1}\to{\mathcal W}_k$ maps $A^{({\mathcal K}_{\rm sh},d)}_{{\mathcal W}_{k+1}}=[{\iota}_{\nu_{k+1}} ]$ to $A^{({\mathcal K}_{\rm sh},d)}_{{\mathcal W}_k}$ since any adapted perturbation $\nu_{k+1}$ with respect to a nested reduction ${\mathcal C}_{k+1}\sqsubset{\mathcal V}_{k+1}$ with ${\mathcal C}_{k+1}\subset{\partial}i_{{\mathcal K}_{\rm sh}}^{-1}({\mathcal W}_{k+1})$ can also be used as adapted perturbation $\nu_k:=\nu_{k+1}$. Then we obtain ${\iota}_{\nu_k} = {\mathcal I}_{k+1} \circ {\iota}_{\nu_{k+1}}$, and hence $A^{({\mathcal K}_{\rm sh},d)}_{{\mathcal W}_k}= [{\iota}_{\nu_k}] = ({\mathcal I}_{k+1})_* [{\iota}_{\nu_{k+1}}]$. This shows that the homology classes $A^{({\mathcal K}_{\rm sh},d)}_{{\mathcal W}_k}$ form an inverse system and thus have a well defined inverse limit, completing the proof of (i), $$ A^{({\mathcal K}_{\rm sh},d)}_{({\mathcal W}_k)_{k\in{\mathbb N}}} \,:=\; {\underline{n}}derlineerset{\leftarrow}\lim\, \bigl[ \,{\iota}_{\nu_k} \, \bigr] \;\in\; \check{H}_D\bigl({\iota}_{{\mathcal K}_{\rm sh}}(X);{\mathbb Q} \bigr). 
$$ This defines $A^{({\mathcal K}_{\rm sh},d)}_{({\mathcal W}_k)_{k\in{\mathbb N}}}$ as \v{C}ech homology class in the topological space $\bigl( {\iota}_{{\mathcal K}_{\rm sh}}(X), d\bigr)$. Towards proving (ii), recall first that $|{\partial}si_{{\mathcal K}_{\rm sh}}|:{\iota}_{{\mathcal K}_{\rm sh}}(X)\to X$ is a homeomorphism with respect to the relative topology induced from the inclusion ${\iota}_{{\mathcal K}_{\rm sh}}(X)\subset|{\mathcal K}_{\rm sh}|$ by Lemma~\ref{le:Knbhd1}. That the latter is equivalent to the metric topology on ${\iota}_{{\mathcal K}_{\rm sh}}(X)$ follows as in Remark~\ref{rmk:hom} from the continuity of the identity map $|{\mathcal K}_{\rm sh}| \to \bigl(|{\mathcal K}_{\rm sh}|,d\bigr)$ (see Lemma~\ref{le:metric}), which restricts to a continuous bijection from the compact set ${\iota}_{{\mathcal K}_{\rm sh}}(X)\subset|{\mathcal K}_{\rm sh}|$ to the Hausdorff space $\bigl( {\iota}_{{\mathcal K}_{\rm sh}}(X), d\bigr)$, and thus is a homeomorphism. To establish the independence of choices, we then argue as in Steps 2--4 in the proof of Theorem~\ref{thm:VMC1}~(iii), with Lemma~\ref{le:VMC1} playing the role of Step 1. { }{\mathbb N}I {\bf Step 2:} {\it Let $({\mathcal K},d)$ be an oriented, metric, tame Kuranishi atlas, and let $({\mathcal W}^{\alpha}_k)_{k\in{\mathbb N}}$ for ${\alpha}=0,1$ be two nested sequences of open sets ${\mathcal W}^{\alpha}_{k+1}\subset {\mathcal W}^{\alpha}_k\subset \bigl(|{\mathcal K}|,d\bigr)$ whose intersection is $\bigcap_{k\in{\mathbb N}}{\mathcal W}^{\alpha}_k = {\iota}_{{\mathcal K}}(X)$. Then we have $A^{({\mathcal K},d)}_{({\mathcal W}^0_k)_{k\in{\mathbb N}}} =A^{({\mathcal K},d)}_{({\mathcal W}^1_k)_{k\in{\mathbb N}}}$, and hence $$ A^{({\mathcal K},d)} \,:=\; A^{({\mathcal K},d)}_{({\mathcal W}_k)_{k\in{\mathbb N}}} \;\in\; \check{H}_D\bigl({\iota}_{\mathcal K}(X);{\mathbb Q} \bigr) , $$ given by any choice of nested open sets $({\mathcal W}_k)_{k\in{\mathbb N}}$ converging to ${\iota}_{\mathcal K}(X)$, is a well defined \v{C}ech homology class. } { } To see this note that the intersection ${\mathcal W}_k:={\mathcal W}^0_k\cap {\mathcal W}^1_k$ is another nested sequence of open sets with $\bigcap_{k\in{\mathbb N}}{\mathcal W}_k = {\iota}_{{\mathcal K}}(X)$. We may choose a sequence of adapted perturbations $\nu_k$ with respect to nested reductions ${\mathcal C}_k\sqsubset{\mathcal V}_k$ with ${\partial}i_{\mathcal K}({\mathcal C}_k)\subset{\mathcal W}_k$ to define $A^{({\mathcal K},d)}_{{\mathcal W}_k}=[{\iota}_{\nu_k}]$. The perturbations $\nu_k$ then also fit the requirements for the larger open sets ${\mathcal W}_k^{\alpha}$ and hence the inclusions ${\mathcal I}^{\alpha}_k : {\mathcal W}_k\to{\mathcal W}^{\alpha}_k$ push $A^{({\mathcal K},d)}_{{\mathcal W}_k}=[{\iota}_{\nu_k}]\in H_D({\mathcal W}_k)$ forward to $A^{({\mathcal K},d)}_{{\mathcal W}^{\alpha}_k}=[{\mathcal I}^{\alpha}_k\circ{\iota}_{\nu_k}]\in H_D({\mathcal W}^{\alpha}_k)$. Hence, by the definition of the inverse limit, we have equality $$ A^{({\mathcal K},d)}_{({\mathcal W}^0_k)_{k\in{\mathbb N}}} \;=\; A^{({\mathcal K},d)}_{({\mathcal W}_k)_{k\in{\mathbb N}}} \;=\; A^{({\mathcal K},d)}_{({\mathcal W}^1_k)_{k\in{\mathbb N}}} \;\in\; \check{H}_D\bigl({\iota}_{{\mathcal K}}(X);{\mathbb Q} \bigr) . $$ { }{\mathbb N}I {\bf Step 3:} {\it Let ${\mathcal K}$ be an oriented, metrizable, tame Kuranishi atlas with two admissible metrics $d^0,d^1$. 
Then we have $A^{({\mathcal K},d^0)}= A^{({\mathcal K}_,d^1)}$, and hence $$ [X]^{\rm vir}_{\mathcal K} \,:=\; |{\partial}si_{\mathcal K}|_* A^{({\mathcal K}_{\rm sh},d)} \;\in\; \check{H}_D\bigl(X;{\mathbb Q} \bigr) , $$ given by any choice of metric, is a well defined \v{C}ech homology class. } { } As in Step 3 of Theorem~\ref{thm:VMC1}, we find an admissible collared metric $d$ on $|{\mathcal K}\tildemes [0,1]|$ with $d|_{|{\mathcal K}|\tildemes\{{\alpha}\}}=d^{\alpha}$. Next, we proceed exactly as in the following Step 4 in the special case ${\mathcal K}_{\rm sh}={\mathcal K}\tildemes[0,1]$ and ${\mathcal K}^0_{\rm sh}={\mathcal K}^1_{\rm sh}={\mathcal K}$ to find strongly admissible perturbations $\nu^{\alpha}_k$ of $(|{\mathcal K}|,d^{\alpha})$ that define the \v{C}ech homology classes $A^{({\mathcal K},d^{\alpha})} ={\underline{n}}derlineerset{\leftarrow}\lim\, \bigl[ \,{\iota}_{\nu^{\alpha}_k} \, \bigr] \in \check H_D({\iota}_{\mathcal K}(X))$ and satisfy the identity $$ J^0_*\,\Bigl( {\underline{n}}derlineerset{\leftarrow}\lim\, \bigl[ {\iota}_{\nu^0_k} \bigr] \Bigr)\;=\; J^1_* \,\Bigl({\underline{n}}derlineerset{\leftarrow}\lim\, \bigl[ {\iota}_{\nu^1_k} \bigr] \Bigr) \; \in\; \check H_D( {\iota}_{{\mathcal K}_{\rm sh}}(X\tildemes[0,1]) ) $$ with the topological embeddings $J^{\alpha}=\rho^{\alpha}(\cdot,{\alpha}): ( |{\mathcal K}| , d^{\alpha} ) \to ( |{\mathcal K}_{\rm sh}|, d)$. To proceed we claim that the push forwards by $J^{\alpha}$ restrict to the same isomorphism {\beta}gin{equation}{\lambda}bel{J01} \bigl(J^0\big|_{{\iota}_{\mathcal K}(X)}\bigr)_* = \bigl(J^1\big|_{{\iota}_{\mathcal K}(X)}\bigr)_* \;: \; \check H_D( {\iota}_{\mathcal K}(X) ) \;\longrightarrow\; \check H_D( {\iota}_{{\mathcal K}\tildemes[0,1]}(X\tildemes[0,1]) ) \end{equation} on the compact set ${\iota}_{\mathcal K}(X)$, on which the two metric topologies induced by $d^0,d^1$ are the same, since they both agree with the relative topology from ${\iota}_{\mathcal K}(X)\subset|{\mathcal K}|$. Indeed, the restrictions $J^{\alpha}\big|_{{\iota}_{\mathcal K}(X)}$ for ${\alpha} = 0,1$ are homotopic via the continuous family of maps $J^t :{\iota}_{\mathcal K}(X)\to {\iota}_{{\mathcal K}_{sh}}(X\tildemes [0,1])$ that arises from the continuous family of maps\footnote { We pointed out after Example~\ref{ex:mtriv} that the metric topology on $|{\mathcal K}\tildemes [0,1]|$ may not be a product topology in the canonical identification with $|{\mathcal K}|\tildemes [0,1]$. In fact, the metrics $d^0$ and $d^1$ may well induce different topologies on $|{\mathcal K}|$. We avoid these issues by homotoping maps to $X\tildemes[0,1]$, which always has the product topology by Lemma~\ref{le:Knbhd1} and the remarks just before Step 2. } $I^t :X\to X\tildemes [0,1]$, $x\mapsto (x,t)$ by composition with the embeddings ${\iota}_{\mathcal K}$ and ${\iota}_{{\mathcal K}_{sh}}$, i.e.\ \[ J^t \,: \xymatrix{ {\iota}_{\mathcal K}(X) \ar@{->}[r]^{\quad |{\partial}si_{\mathcal K}|} & X \ar@{->}[r]^{I^t\quad\;\;} & X\tildemes [0,1] \ar@{->}[r]^{{\iota}_{{\mathcal K}_{sh}}\quad\;\;} &{\iota}_{{\mathcal K}_{sh}}(X\tildemes [0,1]). } \] These maps are continuous because ${\iota}_{\mathcal K}=|{\partial}si_{\mathcal K}|^{-1}$ and similarly ${\iota}_{{\mathcal K}_{sh}}$ are homeomorphisms to their image with respect to the metric topology by the argument at the beginning of the proof of (ii). Moreover, each $J^t$ is a homotopy equivalence because, up to homeomorphisms, it equals to the homotopy equivalence $I^t$. 
This proves \eqref{J01}, which we can then use to deduce the claimed identity $$ A^{({\mathcal K},d^0)} \;=\; {\underline{n}}derlineerset{\leftarrow}\lim\, \bigl[ \, {\iota}_{\nu^0_k} \, \bigr] \;=\; {\underline{n}}derlineerset{\leftarrow}\lim\, \bigl[ \, {\iota}_{\nu^1_k} \, \bigr] \;=\;A^{({\mathcal K},d^1)} \quad\in \check H_D( {\iota}_{\mathcal K}(X) ). $$ { }{\mathbb N}I {\bf Step 4:} {\it Let ${\mathcal K}^{01}$ be an oriented, additive, weak Kuranishi cobordism, and let ${\mathcal K}_{\rm sh}^{\alpha}$ be preshrunk tame shrinkings of ${\partial}^{\alpha}{\mathcal K}^{01}$ for ${\alpha}=0,1$. Then we have $$ [X]^{\rm vir}_{{\mathcal K}_{\rm sh}^0}=[X]^{\rm vir}_{{\mathcal K}_{\rm sh}^1}. $$ } { } As in Step 4 of Theorem~\ref{thm:VMC1}, we find a preshrunk tame shrinking ${\mathcal K}_{\rm sh}$ of ${\mathcal K}^{01}$ with ${\partial}^{\alpha}{\mathcal K}_{\rm sh}={\mathcal K}^{\alpha}_{\rm sh}$, and an admissible collared metric $d$ on $|{\mathcal K}_{\rm sh}|$. We denote its restrictions to $|{\mathcal K}^{\alpha}_{\rm sh}|$ by $d^{\alpha}:= d|_{|{\partial}^{\alpha}{\mathcal K}_{\rm sh}|}$. Next, we proceed as in Lemma~\ref{le:VMC1} by choosing a nested cobordism reduction ${\mathcal C} \sqsubset {\mathcal V}$ of ${\mathcal K}_{\rm sh}$ and constructing nested cobordism reductions ${\mathcal C}_k \sqsubset {\mathcal V}$ by $$ {\mathcal C}_k \,:=\; {\mathcal C} \cap {\partial}i_{{\mathcal K}_{\rm sh}}^{-1}\bigl({\mathcal W}^{01}_k) \;\sqsubset\; {\mathcal V}\qquad\text{with}\quad {\mathcal W}^{01}_k: = B_{\frac 1k}({\iota}_{{\mathcal K}_{\rm sh}}(X\tildemes[0,1]) \bigr) \;\subset\;|{\mathcal K}_{\rm sh}|, $$ in addition discarding components $C_k\cap V_I$ that have empty intersection with $s_I^{-1}(0)$. Indeed, each ${\mathcal W}^{01}_k$ and hence ${\mathcal C}_k$ is collared by Example~\ref{ex:mtriv}, with boundaries given by the $\frac 1k$-neighbourhoods ${\partial}^{\alpha} {\mathcal W}^{01}_k = B_{\frac 1k}^{d^{\alpha}}({\iota}_{{\mathcal K}^{\alpha}_{\rm sh}}(X) \bigr)\subset|{\mathcal K}^{\alpha}_{\rm sh}|$ with respect to the metrics $d^{\alpha}$ on $|{\mathcal K}^{\alpha}_{\rm sh}|$. With that, Proposition~\ref{prop:ext2}~(i) guarantees the existence of admissible, precompact, transverse cobordism perturbations $\nu^{01}_k$ with $|(s + \nu^{01}_k)^{-1}(0)| \subset {\mathcal W}^{01}_k$, and with boundary restrictions $\nu^{\alpha}_k:= \nu^{01}_k|_{{\partial}^{\alpha}{\mathcal V}}$ that are strongly admissible perturbations of $({\mathcal K}^{\alpha}_{\rm sh},d^{\alpha})$ for ${\alpha}=0,1$. Note here that these boundary restrictions satisfy the requirements of (i) since $\bigcap_{k\in{\mathbb N}} {\partial}^{\alpha} {\mathcal W}^{01}_k = {\iota}_{{\mathcal K}^{\alpha}_{\rm sh}}(X)$, thus they define the \v{C}ech homology classes $$ A^{({\mathcal K}^{\alpha}_{\rm sh},d^{\alpha})} \;=\; {\underline{n}}derlineerset{\leftarrow}\lim\, \bigl[ \, {\iota}_{\nu^{\alpha}_k} \, \bigr] \;\in\; \check{H}_D\bigl({\iota}_{{\mathcal K}^{\alpha}_{\rm sh}}(X);{\mathbb Q} \bigr) . 
$$ On the other hand, the homology classes $J^{\alpha}_* \bigl[ {\iota}_{\nu^{\alpha}_k}\bigr]$ also form two inverse systems in $H_D(|{\mathcal K}_{\rm sh}|)$, and as in \eqref{homologous} the chains ${\iota}_{\nu^{(01)}_k}: |{\bf Z}_{\nu^{(01)}_k}| \to {\mathcal W}^{01}_k$ induce identities in the singular homology of ${\mathcal W}^{01}_k$, $$ J^0 _* \bigl[ {\iota}_{\nu_k^0} \bigr] \;=\; J^1_* \bigl[ {\iota}_{\nu^1_k} \bigr] \; \in\; H_D({\mathcal W}^{01}_k) , $$ with the topological embeddings $J^{\alpha}=\rho^{\alpha}(\cdot,{\alpha}): ( |{\mathcal K}^{\alpha}_{\rm sh}| , d^{\alpha} ) \to ( |{\mathcal K}_{\rm sh}|, d)$. Thus taking the inverse limit -- which commutes with push forward -- we obtain $$ J^0_*\,\Bigl( {\underline{n}}derlineerset{\leftarrow}\lim\, \bigl[ {\iota}_{\nu^0_k} \bigr] \Bigr)\;=\; J^1_* \,\Bigl({\underline{n}}derlineerset{\leftarrow}\lim\, \bigl[ {\iota}_{\nu^1_k} \bigr] \Bigr) \; \in\; \check H_D( {\iota}_{{\mathcal K}_{\rm sh}}(X\tildemes[0,1]) ) . $$ So further push forward with the inverse $|{\partial}si_{{\mathcal K}_{\rm sh}}|$ of the homeomorphism ${\iota}_{{\mathcal K}_{\rm sh}}$ implies $$ \bigl(|{\partial}si_{{\mathcal K}_{\rm sh}}| \circ J^0\bigr)_*\,\Bigl( {\underline{n}}derlineerset{\leftarrow}\lim\, \bigl[ {\iota}_{\nu^0_k} \bigr] \Bigr)\;=\; \bigl(|{\partial}si_{{\mathcal K}_{\rm sh}}| \circ J^1\bigr)_* \,\Bigl({\underline{n}}derlineerset{\leftarrow}\lim\, \bigl[ {\iota}_{\nu^1_k} \bigr] \Bigr) \; \in\; \check H_D(X\tildemes[0,1] ) , $$ where the homeomorphism $|{\partial}si_{{\mathcal K}_{\rm sh}}|$ is related to the analogous $|{\partial}si_{{\mathcal K}^{\alpha}_{\rm sh}}| : {\iota}_{{\mathcal K}^{\alpha}_{\rm sh}}(X) \to X$ by $J^{\alpha}$ and the embedding $I^{\alpha}:X\to X\tildemes \{{\alpha}\}$ by $$ |{\partial}si_{{\mathcal K}_{\rm sh}}| \circ J^{\alpha} \big|_{{\iota}_{{\mathcal K}_{\rm sh}}(X)} \;=\; I^{\alpha} \circ |{\partial}si_{{\mathcal K}^{\alpha}_{\rm sh}}| . $$ Now $I^0_* = I^1_* : \check H_D( X) \to \check H_D( X\tildemes[0,1] )$ are the same isomorphisms, because the two maps $I^0, I^1$ are both homotopy equivalences and homotopic to each other. Hence the equality of $I^0_* |{\partial}si_{{\mathcal K}^0_{\rm sh}}| _* \, {\underline{n}}derlineerset{\leftarrow}\lim\, \bigl[ {\iota}_{\nu^0_k}\bigr] = I^1_* |{\partial}si_{{\mathcal K}^1_{\rm sh}}| _* \, {\underline{n}}derlineerset{\leftarrow}\lim\, \bigl[ {\iota}_{\nu^1_k}\bigr]$ in $\check H_D(X\tildemes[0,1] )$ implies as claimed $$ [X]^{\rm vir}_{{\mathcal K}^0_{\rm sh}} \;=\; |{\partial}si_{{\mathcal K}^0_{\rm sh}}| _* \,\Bigl( {\underline{n}}derlineerset{\leftarrow}\lim\, \bigl[ {\iota}_{\nu^0_k} \bigr] \Bigr) \;=\; |{\partial}si_{{\mathcal K}^1_{\rm sh}}| _* \,\Bigl( {\underline{n}}derlineerset{\leftarrow}\lim\, \bigl[ {\iota}_{\nu^1_k} \bigr] \Bigr) \;=\; [X]^{\rm vir}_{{\mathcal K}^1_{\rm sh}}. $$ This proves Step 4. { } Finally, Step 4 implies uniqueness of the virtual fundamental cycle $[X]^{\rm vir}_{\mathcal K}\in \check H_D( X) $ for an oriented, additive weak Kuranishi atlas ${\mathcal K}$, since for any two choices of preshrunk tame shrinkings ${\mathcal K}^{\alpha}_{\rm sh}$ of ${\mathcal K}$ we can apply Step 4 to ${\mathcal K}^{01}={\mathcal K}\tildemes[0,1]$ to obtain $[X]^{\rm vir}_{{\mathcal K}_{\rm sh}^0}=[X]^{\rm vir}_{{\mathcal K}_{\rm sh}^1}$. 
Moreover, given cobordant oriented, additive, weak Kuranishi atlases ${\mathcal K}^0,{\mathcal K}^1$ there exists by assumption an oriented, additive, weak Kuranishi cobordism ${\mathcal K}^{01}$ with ${\partial}^{\alpha}{\mathcal K}^{01}={\mathcal K}^{\alpha}$. If we pick any preshrunk tame shrinkings ${\mathcal K}_{\rm sh}^{\alpha}$ of ${\mathcal K}^{\alpha}$ to define $[X]^{\rm vir}_{{\mathcal K}^{\alpha}}=[X]^{\rm vir}_{{\mathcal K}_{\rm sh}^{\alpha}}$, then Step 4 implies the claimed uniqueness under cobordism, $$ [X]^{\rm vir}_{{\mathcal K}^0}\;=\;[X]^{\rm vir}_{{\mathcal K}_{\rm sh}^0}\;=\;[X]^{\rm vir}_{{\mathcal K}_{\rm sh}^1}\;=\;[X]^{\rm vir}_{{\mathcal K}^1}. $$ This completes the proof of Theorem~\ref{thm:VMC2}. \end{proof} {\beta}gin{thebibliography}{CCCCC} \bibitem[AFW]{afw:arnold} P.\ Albers, J.\ Fish, and K.\ Wehrheim, {\it A proof of the Arnold conjecture by polyfold techniques}, in preparation. \bibitem[ALR]{ALR} A.\ Adem, J.\ Leida, and Y.\ Ruan, {\it Orbifolds and stringy topology, } Cambridge Tracts in Mathematics {\bf 171}, Cambridge University Press, 2007. \bibitem[CL]{ChenLi} Bohui Chen and An-Min Li, {\it Symplectic virtual localization of Gromov--Witten classes}, arXiv:DG/0610370. \bibitem[CT]{CT} Bohui Chen and Gang Tian, Virtual manifolds and localization, {\it Acta Math.\ Sinica}, {\bf 26} (1), 1--24. \bibitem[CM]{CM} K.~Cieliebak and K.~Mohnke, Symplectic hypersurfaces and transversality in Gromov--Witten theory, {\it J. Symplectic Geom.} {\bf 3} (2005), no. 4, 589--654. \bibitem[FO]{FO} K.\ Fukaya and K.\ Ono, Arnold conjecture and Gromov--Witten invariants, {\it Topology} {\bf 38} (1999), 933--1048. \bibitem[FOOO]{FOOO} K.\ Fukaya, Y.-G.\ Oh, H.\ Ohta, and K.\ Ono, {\it Lagrangian Intersection Theory, Anomaly and Obstruction, Parts I and II}, AMS/IP Studies in Advanced Mathematics, Amer.\ Math.\ Soc.\ and Internat.\ Press. \bibitem[FOOO12]{FOOO12} K.\ Fukaya, Y.-G.\ Oh, H.\ Ohta, and K.\ Ono, {\it Technical detail on Kuranishi structure and Virtual Fundamental Chain}, arXiv:1209.4410. \bibitem[G]{GRO} M.~Gromov, Pseudo holomorphic curves in symplectic manifolds, {\it Invent. Math.} {\bf 82} (1985), 307--347. \bibitem[GFFW]{gffw} R.~Golovko, O.~Fabert, J.~Fish, K.~Wehrheim, {\it Polyfolds: A First and Second Look}, arXiv:1210.6670. \bibitem[GL]{GLu} GuangCun Lu, Virtual moduli cycles and Gromov--Witten invariants of noncompact symplectic manifolds, {\it Commun.\ Math.\ Phys.} {\bf 261} (2006), 43--131. \bibitem[GP]{GuillP} V.\ Guillemin, A.\ Pollack, {\it Differential topology}, Prentice-Hall, 1974. \bibitem[H]{Hat} A.\ Hatcher, {\it Algebraic Topology}, Cambridge University Press, 2002. \bibitem[HWZ1]{HWZ1} H.~Hofer, K.~Wysocki, and E.~Zehnder, A general Fredholm theory I, A splicing based differential geometry, {\it J. Eur. Math. Soc. (JEMS)} {\bf 9} (2007), no 4, 841--876. \bibitem[HWZ2]{HWZ2} H.~Hofer, K.~Wysocki, and E.~Zehnder, A general Fredholm theory II, Implicit Function theorems, {\it Geom. Funct. Anal.} {\bf 19} (2009), no. 1, 206--293. \bibitem[HWZ3]{HWZ3} H.~Hofer, K.~Wysocki, and E.~Zehnder, A general Fredholm theory III, Fredholm functors and polyfolds, {\it Geom. Topol.} {\bf 13} (2009), no. 4, 2279--2387. \bibitem[HWZ4]{HWZ:gw} H.~Hofer, K.~Wysocki, and E.~Zehnder, {\it Applications of Polyfold theory I: The Polyfolds of Gromov--Witten theory}, arXiv:1107.2097. 
\bibitem[I]{Io} E.~Ionel, {\it GW invariants relative normal crossings divisors}, arXiv:1103.3977 \bibitem[IP]{IoP} E.~Ionel and T.~Parker, {\it A natural Gromov--Witten fundamental class}, arXiv:1302.3472 \bibitem[J]{J} D.\ Joyce, {\it Kuranishi homology and Kuranishi cohomology}, arXiv:0710.5634. \bibitem[JD]{JD} D.\ Joyce, {\it D-manifolds, d-orbifolds and derived differential geometry: a detailed summary}, arXiv:1208.4948. \bibitem[K]{Kel} J.\ Kelley, {\it General Topology}, Graduate Texts in Mathematics {\bf 26}, Springer, 1955. \bibitem[LiT]{LT} Jun Li and Gang Tian, Virtual moduli cycles and Gromov--Witten invariants for general symplectic manifolds, {\it Topics in Symplectic $4$-manifolds (Irvine CA 1996)}, Internat.\ Press, Cambridge, MA (1998), 47--83. \bibitem[LiuT]{LiuT} Gang Liu and Gang Tian, Floer homology and Arnold conjecture, {\it Journ.\ Diff.\ Geom.}, {\bf 49} (1998), 1--74. \bibitem[LuT]{LuT} GuangCun Lu and Gang Tian, Constructing virtual Euler cycles and classes, {\it Int.\ Math.\ Res.\ Surveys } (2008), DOI: 10.1093/imrsur/rym001. \bibitem[Mc1]{Mcv} D.\ McDuff, The virtual moduli cycle, {\it Amer.\ Math.\ Soc.\ Transl.} (2) {\bf 196} (1999), 73 -- 102. 2003 revision on {\tt http://www.math.columbia.edu/${\sigma}m$dusa/ } \bibitem[Mc2]{Mbr} D.\ McDuff, Branched manifolds, groupoids, and multisections, {\it Journal of Symplectic Geometry} {\bf 4} (2007), 259--315. \bibitem[Mc3]{Mcu} D.\ McDuff, Hamiltonian $S^1$-manifolds are uniruled, {\it Duke Math.\ Journ.} {\bf 146} (2009), 449--507. \bibitem[McS]{MS} D.\ McDuff and D.A.\ Salamon, {\it $J$-holomorphic curves and symplectic topology}, Colloquium Publications {\bf 52}, American Mathematical Society, Providence, RI, (2004), 2nd edition (2012). \bibitem[McT]{McT} D.\ McDuff and S.\ Tolman, Topological properties of Hamiltonian circle actions, {\it International Mathematics Research Papers}. {\bf vol 2006}, Article ID 72826, 1--77. \bibitem[McW1]{MW:ku2} D.\ McDuff and K.\ Wehrheim, {\it Smooth Kuranishi atlases with nontrivial isotropy}, work in progress. \bibitem[McW2]{MW:gw} D.\ McDuff and K.\ Wehrheim, {\it Kuranishi atlases for spherical Gromov--Witten moduli spaces}, work in progress. \bibitem[Mi]{Mi} J.\ Milnor, On the Steenrod homology theory, {\it Collected Works}, Amer.Math.Soc. \bibitem[Mo]{Moe} I.\ Moerdijk, Orbifolds as groupoids, an introduction. {\it Orbifolds in Mathematics and Physics}, edited by Adem, {\it Contemp.\ Math} {\bf 310}, AMS (2002), 205--222. \bibitem[R]{Ruan} Y.\ Ruan, Virtual neighborhoods and pseudoholomorphic curves, {\it Turkish J.\ Math.} {\bf 23} (1999), no 1, 161--231. \bibitem[Se]{SEID} P.~Seidel, {\it Fukaya Categories and Picard--Lefschetz theory}, Zurich Lectures in Advanced Mathematics, European Math. Soc. (EMS), Zurich, 2008. \bibitem[Si]{Sieb} B. Siebert, Symplectic Gromov--Witten invariants, in {\it New Trends in Algebraic Geometry, (Warwick 1996)}, 375--424, London Math Soc. Lecture Notes Ser {\bf 264}, Cambridge Univ. Press, Cambridge, 1999. \bibitem[So]{Sol} J.~Solomon, in preparation. \bibitem[Sp]{Span} E. Spanier, {\it Algebraic Topology}, McGraw Hill (1966), reprinted by Springer--Verlag. \bibitem[W1]{w:msritalk} K.\ Wehrheim, {\it Analytic foundations: polyfold structures for holomorphic disks}, {\tt www.msri.org/web/msri/online-videos/-/video/showVideo/3779}. \bibitem[W2]{w:fred} K.~Wehrheim, {\it Fredholm notions in scale calculus and Hamiltonian Floer theory}, arXiv:1209.4040. 
\bibitem[Y]{DZ} Dingyu Yang, {\it A forgetful functor from polyfold Fredholm structures to Kuranishi structure equivalence classes}, in preparation. \bibitem[Z1]{Z} A.\ Zinger, {\it Pseudocycles and Integral Homology}, arXiv:AT/0605535. \bibitem[Z2]{Z2} A.~Zinger, Enumerative vs. Symplectic Invariants and Obstruction Bundles, {\it J. Symplectic Geom.} {\bf 2} (2004), no. 4, 445--543. \bibitem[Z3]{Z3} A.~Zinger, {\it The Determinant Line Bundle for Fredholm Operators: Construction and Properties}, preprint, 2013. \end{thebibliography} \appendix {\rm sect}ion{Some comments on recent discussions} {\lambda}bel{app} Unfortunately, the topic of regularization and Kuranishi structures has recently appeared more like a ``political mine field" than a research question in need of clarification. We are working on taking the non-mathematical parts of this discussion offline since that seems much more appropriate to us. At this point in time, however, we still feel the need to clarify some impressions given by \cite{FOOO12}, and will thus comment briefly on the recent discussions. {\beta}gin{itemlist} \item In connection with talks at IAS, Princeton in March 2012 --- as our work was nearing completion --- the second author raised some basic questions (see below) in a small online discussion group including the cited authors. The purpose of these questions was to pinpoint some foundational issues in the work of Fukaya--Ono \cite{FO} and the subsequent work of Fukaya--Oh--Ohta--Ono \cite{FOOO} and Joyce \cite{J}. During this discussion, and with our feedback on various versions, Fukaya et al developed a revision of their approach which can now be found in the mathematical parts of \cite{FOOO12}. Our manuscript was sent to the group at several stages prior to posting on the arxiv. However, we only learned from the arxiv about \cite{FOOO12} and its criticism as well as portrayal of the discussion. We suggest that -- should these private email communications be of scientific interest -- then a complete transcript ought to be released, with permission of all authors. \item In \cite{FOOO12}, the authors have reworked many of the details of the foundations of their approach to Kuranishi structures. In particular, as in \cite{FOOO}, they no longer use germs. They also made significant changes to the definition of a good coordinate system in order to deal with issues mentioned in Section~\ref{ss:top} and give a much more detailed construction of the VMC. (Here \cite{FO} only gives a brief analogy with the construction for orbifold bundles, in which e.g.\ the allowed size of perturbation $\frac{{\varepsilon}ilon}{10}$ is not shown to be positive, and Hausdorffness resp.\ compactness are not addressed. Note however, that the Kuranishi setting does not immediately induce an ambient space, let alone a locally compact Hausdorff space.) We have read \cite{FOOO12} and its earlier versions to some extent and could imagine that their definitions are now adequate to prove the results concerning the topic of this paper, i.e.\ smooth Kuranishi structures with trivial isotropy. However their discussion of some foundational issues, for example the role of reductions, shrinkings and cobordisms, and the different topologies on the Kuranishi quotient neighbourhood, are still so convoluted or inexplicit that we were unable to verify all proofs. Similar comments apply to the construction of sum charts for Gromov-Witten moduli spaces. The further issues raised by our questions were discussed in much less detail in the email group. 
In particular, we cannot comment on the issues of smoothness of gluing and $S^1$-equivariant regularization. More to the point, we feel that it is not our place to referee \cite{FOOO12}. Instead, there should be a wider engagement with these issues in the community. \item We have not removed criticisms of the arguments in \cite{FO, FOOO}, since for many years these have been the only available references on this topic (and still are the only published sources). So we think it important to give a coherent account of the construction in the simplest possible case, showing where arguments have been lacking and how one might hope to fill them. \item There are still some significant differences between the notion of a Kuranishi structure in \cite{FOOO12} and ours. To clarify these, we have changed the name of the object we construct, calling it a ``Kuranishi atlas" instead of a Kuranishi structure. We have also added some explanatory remarks to the beginning of this paper (cf.\ the paragraph in Section 1 called ``Relation to other Kuranishi notions") and have rewritten Remark~\ref{rmk:otherK}. We will comment more on this in \cite{MW:ku2}, once we have extended our definitions to the case with isotropy, since in this case the approaches diverge further. We believe that comparisons of the ease of the different approaches only make sense once their rigor is established and hope that a refereeing process for all Kuranishi type approaches can do the latter. \item Finally, we will provide the list of questions that we posed initially, since we hope that these may serve as guiding questions for anyone who wishes to evaluate the literature for themselves. \end{itemlist} {\mathbb N}I {\bf 1.)} Please clarify, with all details, the definition of a Kuranishi structure. And could you confirm that a special case of your work proves the following? {\beta}gin{enumerate} \item The Gromov-Witten moduli space ${\overlineerline {\Mm}}_1(J,A)$ of $J$-holomorphic curves of genus 0, fixed homology class $A$, with 1 marked point has a Kuranishi structure. \item For any compact space $X$ with Kuranishi structure and continuous map $f:X\to M$ to a manifold $M$ (which suitably extends to the Kuranishi structure), there is a well defined $f_*[X]^{\rm vir}\in H_*(M)$. \end{enumerate} { }{\mathbb N}I {\bf 2.)} The following seeks to clarify certain parts in the definition of Kuranishi structures and the construction of a cycle. {\beta}gin{enumerate} \item What is the precise definition of a germ of coordinate change? \item What is the precise compatibility condition for this germ with respect to different choices of representatives of the germs of Kuranishi charts? \item What is the precise meaning of the cocycle condition? \item What is the precise definition of a good coordinate system? \item How is it constructed from a given Kuranishi structure? \item Why does this construction satisfy the cocycle condition? \end{enumerate} { }{\mathbb N}I {\bf 3.)} Let $X$ be a compact space with Kuranishi structure and good coordinate system. Suppose that in each chart the isotropy group ${\Gamma}_p=\{{\rm id}\}$ is trivial and $s^\nu_p:U_p\to E_p$ is a transverse section. 
What further conditions on the $s^\nu_p$ do you need (and how do you achieve them) in order to ensure that the perturbed zero set $X^\nu=\cup_p (s^\nu_p)^{-1}(0) / {\sigma}m$ carries a global triangulation, in particular {\beta}gin{enumerate} \item $X^\nu$ is compact, \item $X^\nu$ is Hausdorff, \item $X^\nu$ is closed, i.e.\ if $X^\nu=\bigcup_n {\Delta}lta_n$ is a triangulation then $\sum_n f({\partial}artial {\Delta}lta_n) = \emptyset$. \end{enumerate} { }{\mathbb N}I {\bf 4.)} For the Gromov-Witten moduli space ${\overlineerline {\Mm}}_1(J,A)$ of $J$-holomorphic curves of genus 0 with 1 marked point, suppose that $A\in H_2(M)$ is primitive so that ${\overlineerline {\Mm}}_1(J,A)$ contains no nodal or multiply covered curves. {\beta}gin{enumerate} \item Given two Kuranishi charts $(U_p, E_p, {\Gamma}_p=\{{\rm id}\}, \ldots)$ and $(U_q, E_q, {\Gamma}_q=\{{\rm id}\},\ldots)$ with overlap at $[r]\in{\overlineerline {\Mm}}_1(J,A)$, how exactly is a sum chart $(U_r,E_r,\ldots)$ with $E_r {\sigma}meq E_p\tildemes E_q$ constructed? \item How are the embeddings $U_p \supset U_{pr} \hookrightarrow U_r$ and $U_q \supset U_{qr} \hookrightarrow U_r$ constructed? \item How is the cocycle condition proven for triples of such embeddings? \end{enumerate} { }{\mathbb N}I {\bf 5.)} How is equality of Floer and Morse differential for the Arnold conjecture proven? {\beta}gin{enumerate} \item Is there an abstract construction along the following lines: Given a compact topological space $X$ with continuous, proper, free $S^1$-action, and a Kuranishi structure for $X/S^1$ of virtual dimension $-1$, there is a Kuranishi structure for $X$ with $[X]^{\rm vir}=0$. \item How would such an abstract construction proceed? \item Let $X$ be a space of Hamiltonian Floer trajectories between critical points of index difference $1$, in which breaking occurs (due to lack of transversality). How is a Kuranishi structure for $X/S^1$ constructed? \item If the Floer differential is constructed by these means, why is it chain homotopy equivalent to the Floer differential for a non-autonomous Hamiltonian? \end{enumerate} \end{document}
\begin{document} \title{Exponential decay of the volume for Bernoulli percolation: a proof via stochastic comparison} \abstract{ Let us consider subcritical Bernoulli percolation on a connected, transitive, infinite and locally finite graph. In this paper, we propose a new (and short) proof of the exponential decay property for the volume of clusters. We neither rely on differential inequalities nor on the BK inequality. We rather use stochastic comparison techniques, which are inspired by (and simpler than) those proposed in \cite{Van22} to study the diameter of clusters. We are also very inspired by the adaptation of these techniques in~\cite{Eas22}. } {\footnotesize \tableofcontents } \section{Introduction} Let us consider \textit{Bernoulli bond percolation} on a connected, locally finite, vertex-transitive and countably infinite graph $\mathcal{H}=(\mathcal{V},\mathcal{E})$ (e.g.\ the hypercubic lattice $\mathbb{Z}^d$). ``Locally finite'' means that the degree of each vertex is finite, and ``vertex-transitive'' means that for all vertices $v,w$, there exists an automorphism of $\mathcal{H}$ that maps~$v$ to~$w$. This model is defined as follows: each edge is declared \textit{open} with some probability $p$ and \textit{closed} otherwise, independently of the other edges. We let $\mu_p$ denote the corresponding product probability measure on $\{0,1\}^{\mathcal{E}}$, where $1$ means open and $0$ means closed, i.e.\ \[ \mu_p=\big( p\delta_1+(1-p)\delta_0\big)^{\otimes \mathcal{E}}. \] In percolation theory, one is interested in the connectivity properties of the graph obtained by keeping only the open edges. We fix a vertex $o \in \mathcal{V}$ once and for all, we let $\mathcal{C}_o$ denote the set of all vertices that are connected to $o$ by a path made of open edges, and we let $|\mathcal{C}_o|$ denote its cardinality. The \textit{critical point} is defined as follows: \[ p_c = \inf \big\{ p \in [0,1] : \mu_p\big[|\mathcal{C}_o|=+\infty\big]>0 \big\}. \] Let \[ \psi_n(p) = \mu_p\big[|\mathcal{C}_o| \ge n\big], \quad \forall n \ge 0. \] In this paper, we give a new and short proof of the following theorem, which states that the volume of $\mathcal{C}_o$ has an exponential tail in the subcritical regime. \begin{theorem}\label{thm:exp'} For every $p<p_c$, there exist $c,C>0$ such that \[ \forall n \ge 0, \quad \psi_n(p) \le Ce^{-cn}. \] \end{theorem} Theorem \ref{thm:exp'} was proven by Kesten \cite{Kes81} for all parameters $p$ at which the expectation of $|\mathcal{C}_o|$ is finite (see also \cite{AN84}). Sharpness of Bernoulli percolation -- which means that the expectation of $|\mathcal{C}_o|$ is actually finite in the whole subcritical phase -- was then shown independently by Menshikov \cite{Men86} and Aizenman and Barsky \cite{AB87}\footnote{More precisely, the sharpness property was first proven for graphs with subexponential growth; see \cite{AV08} for the first extension to graphs with exponential growth.}. We refer to \cite{DT16,DT17,DRT19,Van22} for more recent proofs of sharpness and to \cite{Hut20} for a direct proof of Theorem \ref{thm:exp'}. Concerning the more recent proofs of sharpness, Duminil-Copin and Tassion \cite{DT16,DT17} have proposed a very short proof with a ``branching processes flavour'' and Duminil-Copin, Raoufi and Tassion \cite{DRT19} have proposed a proof via the so-called OSSS inequality, that extends to several dependent percolation models. 
For more about sharpness of percolation -- and percolation in general --, see for instance the books \cite{Gri99,BR06} or the lecture notes \cite{Dum17}. The proofs from all the works mentioned above are based on differential inequalities; except in the (not for publication) work \cite{Van22}, where we proposed a new proof of exponential decay \textit{for the diameter} (see Section \ref{sec:motiv} for our motivations for finding a proof without differential inequalities). This proof is based on stochastic comparison techniques inspired both by a work of Russo in the early 80's (see~\cite[Section~3]{Rus82}) and by works that combine exploration procedures and coupling techniques in percolation theory (see e.g.\ \cite{DM22}, see also~\cite{GS14} for more about the application of exploration procedures to percolation theory). In the present paper, drawing on a recent work by Easo \cite{Eas22} (which, among other things, adapts some techniques from \cite{Van22} to the study of the volume of percolation clusters on finite transitive graphs), we observe that these stochastic comparison techniques seem in fact more suitable to the study of the volume than to that of the diameter, and we prove Theorem \ref{thm:exp'}. In order to study the volume of clusters, it is relevant to introduce what is often referred to as a ``ghost field''. By this, we just mean that, given some parameter $h \in (0,+\infty)$, we color independently every vertex \textit{green} with probability $1-e^{-h}$. We let $\nu_{p,h}$ denote the product probability measure on $\{0,1\}^{\mathcal{E}} \times \{0,1\}^{\mathcal{V}}$ with this additional feature, where we assign the number $1$ to every green vertex, i.e. \[ \nu_{p,h} = \big( p\delta_1+(1-p)\delta_0\big)^{\otimes \mathcal{E}} \otimes \big( (1-e^{-h})\delta_1+e^{-h}\delta_0\big)^{\otimes \mathcal{V}}. \] We let $\mathcal{G}$ denote the set of green vertices (this notation is the reason why our graph is denoted by $\mathcal{H}$ and not $\mathcal{G}$!), and we consider the so-called \textit{magnetization} (this terminology comes from a comparison with analogous quantities that appear in the Ising model): \[ m_h(p)=\nu_{p,h} \big[ \mathcal{C}_o \cap \mathcal{G} \ne \emptyset \big]. \] One can note that, for every $p$, $m_h(p)$ goes to $\mu_p[|\mathcal{C}_o|=+\infty]$ as $h$ goes to $0$. Theorem \ref{thm:exp'} is a direct consequence of the following \textit{near-critical} sharpness result, which can also be deduced from \cite{Hut20} (with different constants). \begin{theorem}\label{thm:exp} Let $p\in[0,1]$, $h\in(0,+\infty)$, and let $q=p(1-m_h(p))$. Then, \[ \forall n \ge 0, \quad \psi_n(q) \le \frac{1}{1-m_h(p)} \psi_n(p) e^{-hn}. \] \end{theorem} \begin{proof}[Proof of Theorem \ref{thm:exp'} by using Theorem \ref{thm:exp}] Let $p<p_c$ and let us choose some $p'<p_c$ and $h\in(0,+\infty)$ such that $p'(1-m_h(p'))\ge p$. Then, \[ \psi_n(p) \overset{\text{Thm.\ \ref{thm:exp}}}{\le} \frac{1}{1-m_h(p')}\psi_n(p')e^{-hn} \le Ce^{-hn}, \] with $C=1/(1-m_h(p'))$. \end{proof} Let us note that Theorem \ref{thm:exp} is the analogue of \cite[Theorem 1.1]{Van22} for the volume. Surprisingly (maybe), the proof of the former is simpler. Theorem \ref{thm:exp} also implies the so-called \textit{mean-field lower bound}, that was proven by Chayes and Chayes~\cite{CC87}. \begin{theorem}\label{thm:mf} For every $p\ge p_c$, \[ \mu_p\big[|\mathcal{C}_o|=+\infty\big]\ge (p-p_c)/p. 
\] \end{theorem} \begin{proof}[Proof of Theorem \ref{thm:mf} by using Theorem \ref{thm:exp}] By Theorem \ref{thm:exp}, we have $p(1-m_h(p))\le p_c$ for every $p$ and $h$. The result follows by fixing $p$ and letting $h$ go to $0$. \end{proof} \noindent \textbf{Organization of the rest of the paper:} \begin{itemize*} \item Some ideas behind the proof are provided in Section \ref{sec:ideas}; \item The proofs are written in Section \ref{sec:gen} (where we propose a general criterion that is not specific to percolation theory) and in Section~\ref{sec:proof} (that contains the proof of Theorem \ref{thm:exp}); \item In Section \ref{sec:motiv}, we discuss our motivations as well as the possibility to prove new results via the stochastic domination techniques proposed in the present paper. \end{itemize*} \noindent \textbf{Acknowledgments:} I am extremely grateful to Philip Easo for very inspiring discussions and for his work~\cite{Eas22}. I also thank Sébastien Martineau, for comments that helped a lot to improve this manuscript, Tom Hutchcroft for interesting discussions about \eqref{eq:in_exp}, Vincent Beffara and Stephen Muirhead for discussions that helped to simplify some parts of the proof, Barbara Dembin and Jean-Baptiste Gouéré for inspiring feedbacks, and Hugo Duminil-Copin, who encouraged me to include a discussion about possible new results (i.e.\ Section \ref{sec:motiv}). Moreover, I thank Loren Coquille who noticed that there was still a (discrete) differential inequality flavour in \cite{Van22} -- such a thing does not appear anymore in the present paper. Finally, I would like to thank Vincent Tassion, who recommended to combine coupling techniques from~\cite{Van22} with ghost field techniques, which led to the present paper. \section{Some ideas behind the proof}\label{sec:ideas} How can one control the volume of subcritical clusters? One can first notice that Theorem \ref{thm:exp'} is equivalent to the fact that decreasing $p$ has the following \textit{regularizing effect}: \[ \text{Let $p_0$ such that $|\mathcal{C}_o|$ is finite $\mu_{p_0}$-a.s. Then, $\forall p<p_0$, $\psi_n(p)$ is exponentially small.} \] In order to prove this, we will show that, given some $\varepsilon>0$ and $p_0$ such that $\mu_{p_0}\big[|\mathcal{C}_o|=+\infty]=0$, \textit{decreasing $p_0$ by $\varepsilon$ has more effect than conditioning on some well chosen disconnection event~$A$}, essentially in the sense that $\mu_{p_0-\varepsilon}$ is stochastically smaller than $\mu_{p_0}[\,\cdot\mid A]$. This disconnection event will be $A=\{ \mathcal{C}_o \cap \mathcal{G} = \emptyset\}$ for some well chosen parameter $h$. So the question is now: How can one compare the effect of conditioning on an event to that of decreasing a little $p$? One can actually do this by exploring a configuration with law $\mu_{p_0}[\, \cdot \mid A]$ and estimating (at every step of the exploration) the probability that the next edge to be explored has some influence on $A$. If this probability is always small, then decreasing $p$ will have more effect than conditioning on~$A$. \section{A general criterion to compare the effect of decreasing $p$ to that of conditioning on an event}\label{sec:gen} \paragraph{A. Some ``classical'' notions.} Let $E$ be a finite set and let $\mu_{p,E}$ be the product Bernoulli law of parameter $p$ on $\{0,1\}^E$, i.e.\ $\mu_{p,E}=\big( p \delta_1 + (1-p)\delta_0 \big)^{\otimes E}$. 
Moreover, fix some probability space $(M,\mathcal{F},\rho)$ (that we will just use to include the information about the ghost field in Section \ref{sec:proof}), and let \[ \nu_{p,E}=\mu_{p,E} \otimes \rho. \] In all this section, both $\{0,1\}^E$ and $\{0,1\}^E\times M$ will be equipped with the product $\sigma$-algebra. We denote by $\omega$ the elements of $\{0,1\}^E$ and by $\eta$ the elements of $M$. Moreover, we let $\vec{E}$ be the set of all orderings $(e_1,\dots,e_{|E|})$ of $E$. \begin{definition}\label{defi:expl} An \textit{exploration} of $E$ is a map \[ \begin{array}{rl} \textbf{\textup{e}} : \{0,1\}^E & \longrightarrow \vec{E}\\ \omega & \longmapsto (\textbf{\textup{e}}_1,\dots,\textbf{\textup{e}}_{|E|}) \end{array} \] such that $\textbf{\textup{e}}_1$ does not depend on $\omega$ and $\textbf{\textup{e}}_{k+1}$ only depends on $(\textbf{\textup{e}}_1,\dots,\textbf{\textup{e}}_k)$ and $(\omega_{\textbf{\textup{e}}_1},\dots,\omega_{\textbf{\textup{e}}_k})$. Given an exploration $\textbf{\textup{e}}$, $(x,e) \in \{0,1\}^E \times \vec{E}$ and $k \in \{0,\dots,|E|\}$, we let $\textup{Expl}_k(e,x)$ denote the event that $(\textbf{\textup{e}},\omega)$ equals $(e,x)$ at least until step $k$, i.e.\ \[ \textup{Expl}_k(e,x)=\big\{\forall j \in \{1,\dots,k\}, \textbf{\textup{e}}_j=e_j\text{ and }\omega_{e_j}=x_{e_j}\big\} \subseteq \{0,1\}^E. \] We also let $\textup{Expl}(e,x)=\textup{Expl}_{|E|}(e,x)$. \end{definition} We recall that an \textit{increasing} subset $A\subseteq\{0,1\}^E$ is a subset that satisfies ($\omega \in A$ and $\forall e \in E, \quad \omega'_e \ge \omega_e$) $\mathbb{R}ightarrow$ $\omega'\in A$. Moreover, given two probability measures $\mu,\nu$ on $\{0,1\}^E$, we say that $\mu$ is \textit{stochastically smaller} than $\nu$ if $\mu[A]\le \nu[A]$ for every increasing subset $A$, and we denote this property by $\mu \preceq \nu$. Finally, we say that an edge $e \in E$ is \textit{pivotal} for a measurable set $A \subseteq \{0,1\}^E \times M$ and an element~$(\omega,\eta)$ if changing only the edge $e$ in $\omega$ (and not changing $\eta$) modifies~$1_A$. \paragraph{B. The main intermediate lemma.} Now that we have all the necessary notions, let's start the proof! The finite set $E$ is fixed in the whole section. The key result of this section is the following lemma. \begin{lemma}\label{lem:coupl} Let $p,\varepsilon \in [0,1]$, write $q=p(1-\varepsilon)$, and consider an exploration of $E$. Moreover, let $A \subseteq \{0,1\}^E \times M$ be a measurable set such that $\nu_{p,E}[A]>0$. Assume that the following holds for every $k \in \{0,\dots,|E|-1\}$ and every $(x,e)\in \{0,1\}^E \times \vec{E}$ such that $\nu_{p,E}\big[A \cap \textup{Expl}(e,x)\big]>0$: \[ \nu_{p,E} \big[ \text{$e_{k+1} $ pivotal for $A$} \; \big| \; A \cap \textup{Expl}_k(e,x) \big] \le \varepsilon. \] Then, \[ \mu_{q,E} \preceq \nu_{p,E} \big[ \omega \in \cdot \; \big| \; A \big]. \] \end{lemma} \begin{remark}\label{rk:formal} \begin{itemize*} \item Recall that we use the letter $\omega$ to denote the elements of $\{0,1\}^E$. As a result, $\nu_{p,E} [ \omega \in \cdot \mid A]$ denotes the marginal law on $\{0,1\}^E$ of $\nu_{p,E}[ \, \cdot \mid A]$; \item Let us take the opportunity of this remark to note that $\nu_{p,E} [ \, \cdot \mid \textup{Expl}_k(e,x) ]$ equals $\mu_{p,E}^k \otimes \rho$, where $\mu_{p,E}^k$ is the probability measure on $\{0,1\}^E$ that assigns the value $x_{e_j}$ to $e_j$ for every $j \le k$ and that is the product Bernoulli measure of parameter $p$ on the other coordinates. 
\end{itemize*} \end{remark} The proof of Lemma \ref{lem:coupl} is based on the following lemma (see e.g.\ \cite[(7.64)]{Gri99} and \cite[Lemma 2.1]{DRT19} for related results). \begin{lemma}\label{lem:gen} Let $q \in [0,1]$ and consider an exploration of $E$. Moreover, let $\mu$ be a probability measure on $\{0,1\}^E$ and assume that the following holds for every $k \in \{0,\dots,|E|-1\}$ and every $(x,e)\in \{0,1\}^E \times \vec{E}$ such that $\mu\big[\textup{Expl}(e,x)\big]>0$: \[ \forall k \in \{0,\dots,|E|-1\}, \quad \mu \big[ \omega_{e_{k+1}} = 1 \; \big| \; \textup{Expl}_k(e,x) \big] \ge q. \] Then, $\mu_{q,E} \preceq \mu$. \end{lemma} \begin{proof}[Proof of Lemma \ref{lem:gen}] The result can for instance be proven via a straightforward induction on $|E|$, by writing \[ \mu[A]=\mu^0 \big[A^0\big] \times \mu[\omega_{\textbf{\textup{e}}_1}=0]+\mu^1 \big[A^1\big] \times \mu[\omega_{\textbf{\textup{e}}_1}=1], \] where $A^i=\big\{x\in \{0,1\}^{E\setminus \{\textbf{\textup{e}}_1\}} : (x'_{\textbf{\textup{e}}_1}=i \text{ and } x' = x \text{ outside of $\textbf{\textup{e}}_1$}) \mathbb{R}ightarrow x' \in A \big\}$, and $\mu^i =\mu [ \, \cdot \mid \omega_{\textbf{\textup{e}}_1}=i ]$. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:coupl}] Let us prove that the hypothesis of Lemma \ref{lem:gen} holds with $\mu=\nu_{p,E}[\omega \in \cdot \mid A]$. Let $(x,e)$ as in the statement of Lemma \ref{lem:coupl}. The second point of Remark \ref{rk:formal} implies that, under $\nu_{p,E} \big[ \, \cdot \mid \textup{Expl}_k(e,x) \big]$, $\omega_{e_{k+1}}$ is independent of the event $A \cap \{ \text{$e_{k+1} $ is not piv. for $A$} \}$ (because the latter is measurable with respect to $\eta$ and $\omega_{|E\setminus\{e_{k+1}\}}$). As a result, \begin{multline*} \nu_{p,E} \big[ \omega_{e_{k+1}} = 1 \; \big| \; A \cap \textup{Expl}_k(e,x) \big]\\ \ge \frac{\nu_{p,E} \big[ \omega_{e_{k+1}} = 1, A, \text{$e_{k+1} $ is not piv. for $A$} \; \big| \; \textup{Expl}_k(e,x) \big]}{\nu_{p,E} \big[ A \; \big| \; \textup{Expl}_k(e,x) \big]}\\ = p \times \nu_{p,E} \big[ \text{$e_{k+1} $ is not piv. for $A$} \; \big| \; A \cap \textup{Expl}_k(e,x) \big]\ge p(1-\varepsilon). \end{multline*} We end the proof by applying Lemma \ref{lem:gen}. \end{proof} \section{Proof of Theorem \ref{thm:exp}}\label{sec:proof} Let $\mathcal{H}_n=(\mathcal{V}_n,\mathcal{E}_n)$ denote the graph $\mathcal{H}$ restricted to the ball of radius $n$ around~$o$ (for the graph distance). (We restrict our attention to a finite subgraph since, for simplification, we have chosen to consider $E$ finite in Section \ref{sec:gen} -- one could also prove Lemma \ref{lem:coupl_perco} below in infinite volume.) We define $\mu_{p,\mathcal{E}_n}$ as in Section \ref{sec:gen} and we let $\nu_{p,h,\mathcal{H}_n}$ denote the product probability measure which is the analogue of $\nu_{p,h}$ on the graph $\mathcal{H}_n$ (so this is a measure on $\{0,1\}^{\mathcal{E}_n}\times\{0,1\}^{\mathcal{V}_n}$). We still use the notations~$\mathcal{C}_o$ and~$\mathcal{G}$ in the context of percolation on $\mathcal{H}_n$, and we denote by $\omega$ the elements of~$\{0,1\}^{\mathcal{E}_n}$. \begin{lemma}\label{lem:coupl_perco} Let $p \in [0,1]$ and $h \in (0,+\infty)$, and write $q=p(1-m_h(p))$. Then, \[ \forall n \ge 0, \quad \mu_{q,\mathcal{E}_n} \preceq \nu_{p,h,\mathcal{H}_n} \big[ \omega \in \cdot \; \big| \; \mathcal{C}_o \cap \mathcal{G} = \emptyset \big]. 
\] \end{lemma} \begin{proof} Let us apply Lemma \ref{lem:coupl} to $E=\mathcal{E}_n$, $M=\{0,1\}^{\mathcal{V}_n}$, $\nu_{p,E} = \nu_{p,h,\mathcal{H}_n}$ and $A=\{\mathcal{C}_o\cap\mathcal{G}=\emptyset\}$. To this purpose, we define an exploration of $\mathcal{E}_n$ by fixing an ordering of $\mathcal{E}_n$ (which will just be used to make arbitrary decisions) and: \begin{itemize*} \item By first revealing $\mathcal{C}_o$ (i.e.\ by revealing iteratively the edge of smallest index that is connected to $o$ by open edges that have already been revealed); \item Then, by revealing all the other edges (once again by revealing iteratively the edge of smallest index). \end{itemize*} Let $(x,e) \in \{0,1\}^{\mathcal{E}_n}\times\vec{\mathcal{E}}_n$ such that $\nu_{p,h,\mathcal{H}_n}\big[\textup{Expl}(e,x)\big]>0$. We observe that, if $\textup{Expl}_k(e,x)$ holds and $e_{k+1}$ is pivotal for $\{\mathcal{C}_o \cap \mathcal{G} = \emptyset\}$, then one can write $e_{k+1}=\{v_{k+1},w_{k+1}\}$ where: \begin{itemize*} \item $v_{k+1}$ is connected to $o$ by open edges in $\{e_1,\dots,e_k\}$ but $w_{k+1}$ is not; \item $w_{k+1}$ is connected to a green vertex by open edges that do not belong to $\{e_1,\dots,e_{k+1}\}$. \end{itemize*} We let $B_{k+1}$ denote the event of the second item. By this observation and then the Harris--FKG inequality\footnote{If one does not want to use the Harris--FKG inequality in this paper, one can continue to explore the configuration with the same rule as before, except that $e_{k+1}$ is not revealed until no other unrevealed edge is connected to $o$, and one can compute the pivotal probability conditionally on this further information.} (see for instance \cite[Theorem 2.4]{Gri99} or \cite[Lemma~3]{BR06}) applied to $\nu_{p,h,\mathcal{H}_n} \big[ \, \cdot \mid \textup{Expl}_k(x,e)\big]$ (which is a product of Bernoulli laws), we obtain that \begin{multline*} \nu_{p,h,\mathcal{H}_n} \big[\text{$e_{k+1}$ piv.\ for $\{\mathcal{C}_o \cap \mathcal{G} = \emptyset\}$} \; \big| \; \mathcal{C}_o \cap \mathcal{G} = \emptyset, \textup{Expl}_k(x,e) \big]\\ \le \nu_{p,h,\mathcal{H}_n} \big[ B_{k+1} \; \big| \; \mathcal{C}_o \cap \mathcal{G} = \emptyset, \textup{Expl}_k(x,e) \big]\\ \overset{\text{Harris--FKG}}{\le} \nu_{p,h,\mathcal{H}_n} \big[ B_{k+1} \; \big| \; \textup{Expl}_k(x,e) \big] \le m_h(p). \end{multline*} We conclude by applying Lemma \ref{lem:coupl}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:exp}] We note that $\psi_n(q)=\mu_{q,\mathcal{E}_n} \big[ |\mathcal{C}_o|\ge n \big]$. As a result, Lemma \ref{lem:coupl_perco} implies that $\psi_n(q)$ is less than or equal to \begin{align*} \nu_{p,h,\mathcal{H}_n} \big[ |\mathcal{C}_o|\ge n \; \big| \; \mathcal{C}_o \cap \mathcal{G} = \emptyset \big] &\overset{\text{Bayes}}{=} \frac{\psi_n(p) \times \nu_{p,h,\mathcal{H}_n}\big[ \mathcal{C}_o \cap \mathcal{G} = \emptyset \; \big| \; |\mathcal{C}_o| \ge n \big]}{1-\nu_{p,h,\mathcal{H}_n}\big[ \mathcal{C}_o \cap \mathcal{G} \neq \emptyset \big]}\\ & \hspace{0.2cm}\le \frac{\psi_n(p)}{1-m_h(p)} e^{-hn}. \qedhere \end{align*} \end{proof} \section{Some motivations and some consequences}\label{sec:motiv} I am not proving new results in this paper: even the quantitative result (i.e.\ Theorem~\ref{thm:exp}) can be deduced from earlier works (see \cite{Hut20}, where Hutchcroft combines OSSS and ghost field techniques). My initial goal, in the (not for publication) work \cite{Van22} and the present paper, was to propose a proof of sharpness \textit{without differential inequalities}. 
As explained in \cite{DMT21} (where the authors propose a new proof -- also without differential inequalities -- of some scaling relations for 2D Bernoulli percolation), finding approaches that ``[do] not rely on interpretations (using Russo’s formula) of the derivatives of probabilities of increasing events using so-called pivotal edges" is interesting because ``such formulas are unavailable in most dependent percolation models''. I have to admit that on the one hand I have found a proof of exponential decay without differential equations, but I have not succeeded in extending the proof to other percolation models. Gradually, my aim became more to propose a new point of view on sharpness properties, and as simple a proof of a sharpness phenomenon as I could do. During discussions about such motivations, several colleagues asked me whether these techniques could also be used to prove new results for Bernoulli percolation. This was not my aim but the question is legitimate! I think that the answer is yes and I would like to propose four examples. \paragraph{A. A new inequality for critical exponents.} In \cite{Van22}, we use similar stochastic domination techniques in order to prove an analogue of Theorem \ref{thm:exp} for the diameter. More precisely, we prove that \begin{equation}\label{eq:van22} \forall n \ge m, \quad \theta_{2n}\big( p-2\theta_m(p) \big) \le C \frac{\theta_n(p)}{2^{n/m}}, \end{equation} where $\theta_n(p)$ is the probability that there is a path from $o$ to the sphere of radius $n$ around $o$ (for the graph distance). Let $\eta_1$ and $\nu$ denote the one-arm and the correlation length exponents (so in particular $\eta_1$ is the exponent that describes the quantity $\theta_n(p_c)$, see for instance \cite{DM21} for precise definitions and for other inequalities involving these exponents). \eqref{eq:van22} implies the following inequality: \begin{equation}\label{eq:in_exp} \eta_1 \nu \le 1. \end{equation} Similar inequalities for critical exponents that describe the volume of clusters can be proven by using Theorem \ref{thm:exp}, and were already proven in \cite{Hut20}. However, \eqref{eq:in_exp} is new to my knowledge, and the reason why previous techniques (such as those from \cite{Hut20}) do not seem to imply \eqref{eq:in_exp} is that the calculations behind them only seem to work when the exponent of the connection probability is at most~$1$ (see e.g.\ \cite[Section~4.1]{Hut20}). The critical exponent that describes $\psi_n(p_c)$ (which is $1/\delta$ in \cite{Hut20}) is always at most $1$, but this is not the case of $\eta_1$, which is $2$ on trees for instance. \paragraph{B. The existence of a percolation threshold for finite graphs.} In \cite{Eas22}, P.\ Easo proves the existence of a percolation threshold for sequences of finite graphs by using several techniques including stochastic comparison techniques inspired by \cite{Van22}.\footnote{On the other hand, Easo writes: ``[...] we could instead adapt Hutchcroft’s proof of sharpness from \cite{Hut20}, rather than Vanneuville’s new proof. However, the adaption we found uses the universal tightness result from \cite{Hut21} and consequently yields a slightly weaker final bound.''} The present paper is inspired by both \cite{Van22} and \cite{Eas22}. \paragraph{C. A new inequality for arm events in the plane, and an application to exceptional times in dynamical percolation.} In this paragraph, we consider Bernoulli percolation on a planar (symmetric) lattice. 
Let $\alpha_k(n)$ denote the probability of the $k$-arm event at scale $n$, at the critical parameter (see e.g.\ \cite[Chapter 2]{GS14} for this terminology). The OSSS inequality implies that \begin{equation}\label{eq:524} n^2\alpha_4(n)\alpha_2(n) \ge c. \end{equation} (See for instance \cite[Chapter 12, Theorem 40]{GS14}.) I believe that one can prove the following by using stochastic domination techniques similar to those proposed in the present paper: \begin{equation}\label{eq:524delta} n^2 \alpha_4(n)\alpha_2(n) \ge cn^c. \end{equation} The latter inequality is not merely a slight improvement of \eqref{eq:524}. Indeed, as explained in \cite{TV22}, \eqref{eq:524delta} implies that -- if one considers dynamical percolation at the critical parameter -- there exist \textit{exceptional times} when there are both primal and dual unbounded components. The inequality \eqref{eq:524} is known for very general planar lattices whereas \eqref{eq:524delta} is known (to my knowledge) only for site percolation on the regular triangular lattice (\cite{SW01}) and bond percolation on $\mathbb{Z}^2$ (see \cite{DMT21}, where the authors use parafermionic observables). I believe that one could prove \eqref{eq:524delta} by combining the following two results/observations: \begin{enumerate*} \item As proven by Kesten \cite{Kes81}, the size of the near-critical window for planar percolation is $1/(n^2\alpha_4(n))$; \item The size of the near-critical window is much less than $\alpha_2(n)$. How can one prove this? Let us define an exploration of some percolation event by following an interface that separates a primal macroscopic cluster from a dual macroscopic cluster. I believe that results analogous to Lemma \ref{lem:coupl} (together with RSW techniques) imply that the near-critical window is less than the infimum -- on every vertex $x$ and on every path $\gamma$ -- of the probability of the $4$-arm event at $x$, conditionally on the event $\{x \text{ belongs to the interface}$ and $\text{$\gamma$ is the interface stopped at $x$}\}$. This probability is less than the $2$-arm probability in a space that has the topology of the half-plane, which can be strongly expected to be much smaller than $\alpha_2(n)$. \end{enumerate*} \paragraph{D. A question based on the above paragraph.} Can one find other contexts (e.g.\ non-planar) where general stochastic domination lemmas such as Lemma~\ref{lem:coupl} are more quantitative than the OSSS inequality? \end{document}
\begin{document} \title[Perrin-Riou's conjecture]{B\lowercase{eilinson}-K\lowercase{ato} \lowercase{and} B\lowercase{eilinson}-F\lowercase{lach elements}, C\lowercase{oleman}-R\lowercase{ubin}-S\lowercase{tark classes}, H\lowercase{eegner} \lowercase{points} \lowercase{and} \lowercase{the} {P}\lowercase{errin}-{R}\lowercase{iou} {C}\lowercase{onjecture} } \author{K\^az\i m B\"uy\"ukboduk} \address{K\^az\i m B\"uy\"ukboduk\newline UCD School of Mathematics and Statistics\\ University College Dublin\\ Ireland \newline Ko\c{c} University, Mathematics \\ Rumeli Feneri Yolu, 34450 \\ Istanbul, Turkey} \email{[email protected]} \keywords{Abelian Varieties, Modular Forms, Iwasawa Theory, Birch and Swinnerton-Dyer Conjecture} \maketitle \begin{abstract} Our first goal in this note is to explain that a weak form of Perrin-Riou's conjecture on the non-triviality of Beilinson-Kato classes follows as an easy consequence of the Iwasawa main conjectures, and deduce its refined versions in the supersingular case from this fact and a variety of Gross-Zagier formulae. Our second goal is to set up a conceptual framework in the context of $\Lambda$-adic Kolyvagin systems to treat analogues of Perrin-Riou's conjectures for higher motives of higher rank. We apply this general discussion in order to establish a link between Heegner points on a general class of CM abelian varieties and the (conjectural) Coleman-Rubin-Stark elements we introduce here. This is a higher dimensional version of Rubin's results on rational points on CM elliptic curves. \end{abstract} \section{Summary of Contents and Background} \label{sec:intro} We have three goals in this article. The first is to record a rather simple (but somehow overlooked) proof of Perrin-Riou's conjecture (under very mild hypotheses) on the non-vanishing of the $p$-adic Beilinson-Kato class associated to an elliptic curve $E_{/\mathbb{Q}}$, when $E$ has ordinary (i.e., good ordinary or multiplicative) or supersingular reduction at $p$. This generalizes (a fragment of) the forthcoming work of Bertolini and Darmon for a good ordinary prime $p$ and of Venerucci for a \emph{split} multiplicative prime $p$. In the supersingular case, we also explain how to deduce an explicit formula for a point of infinite order on $E(\mathbb{Q})$ in terms of the special values of the two $p$-adic $L$-functions (attached to the two $p$-stabilizations of the associated eigenform). We remark that a similar formula was already proved by Kurihara and Pollack assuming the truth of the $p$-adic Birch and Swinnerton-Dyer conjecture; our argument here is based on Kobayashi's $p$-adic Gross-Zagier formula (which essentially settles the $p$-adic Birch and Swinnerton-Dyer conjecture in this set up). As a side benefit (which was our initial motivation to release the first portion of this note), we remark that these results (in the case of multiplicative reduction) render our previous work~\cite{kbbmtt} on the Mazur-Tate-Teitelbaum conjecture fully self-contained and, in some sense, they simplify the proof of the main results therein: the results here allow us to by-pass the need to appeal to a two-variable exceptional zero formula, as considered in \cite{venerucciinventiones}. In the second part of this article (Section~\ref{subsec:KS}), we first recast this approach relying on the theory of $\Lambda$-adic Kolyvagin systems.
We explain how this yields a proof of an extension of Perrin-Riou conjecture concerning the non-vanishing of the $p$-distinguished twists of Beilinson-Flach elements (Corollary~\ref{cor:PRBFelement}). In the third and final portion of this note, we establish a precise link between Heegner points on a general class of CM abelian varieties and the (conjectural) Coleman-Rubin-Stark elements we introduce here associated to these CM abelian varieties (c.f. Theorem \ref{thm:maincolemanRSheegnerPR} in the main text). This is a higher dimensional version of Rubin's results on rational points on CM elliptic curves (where he compares elliptic units to Heegner points on CM elliptic curves). \subsubsection*{\textbf{{Part I. Perrin-Riou's conjecture for Beilinson-Kato elements.}}}Let $E$ be an elliptic curve defined over $\mathbb{Q}$ and let $N$ denote its conductor. Fix a prime $p>3$ and let $S$ denote the set consisting of all rational primes dividing $Np$ and the archimedean place. In this set up, Kato \cite{ka1} has constructed an Euler system $\textbf{c}^{\textup{BK}}=\{c_F^{\textup{BK}}\}$ where $F$ runs through abelian extensions of $\mathbb{Q}$, $c_F^{\textup{BK}}\in H^1(F,T_p(E))$ is unramified away from the primes dividing $Np$ and $T_p(E)$ is the $p$-adic Tate module of $E$. Kato's explicit reciprocity laws show that the class $c_\mathbb{Q}^{\textup{BK}}\in H^1(\mathbb{Q},T_p(E))$ is non-crystalline at $p$ (and in particular, non-zero) precisely when $L(E/\mathbb{Q},1)\neq 0$, where $L(E/\mathbb{Q},s)$ is the Hasse-Weil $L$-function of $E$. Perrin-Riou in \cite[\S 3.3.2]{pr93grenoble} predicts the following assertion to hold true. Let $\textup{res}_p: H^1(G_{\mathbb{Q},S},T)\rightarrow H^1(\mathbb{Q}_p,T)$ denote the restriction map. \begin{equation}gin{conj} \label{conj:PR} The class $\textup{res}_p\left(c_\mathbb{Q}^{\textup{BK}}\right) \in H^1(\mathbb{Q}_p,T_p(E))$ is non-torsion if and only if $L(E/\mathbb{Q},s)$ has at most a simple zero at $s=1$. \end{conj} This is the conjecture (and its extensions in other settings) we address in the current article. We will explain below how to deduce Conjecture~\ref{conj:PR} as an easy corollary of the work of Kato, Skinner-Urban and Wan on the main conjectures of Iwasawa theory of elliptic curves. \begin{equation}gin{thm}[Kato, Skinner-Urban, Wan] \label{thm:main} Suppose that $E$ is an elliptic curve such that the residual representation $$\overline{\rho}_E:\,G_{\mathbb{Q},S}\longrightarrow \textup{Aut}(E[p])$$ is surjective. Then the ``if" part of Perrin-Riou's Conjecture~\ref{conj:PR} holds true in the following cases: \begin{equation}gin{itemize} \item[(a)] $E$ has good ordinary reduction at $p$. \item[(b)] $E$ has good supersingular reduction at $p$ and $N$ is square-free. \item[(c)] $E$ has multiplicative reduction at $p$ and there exists a prime $\ell \mid\mid N$ such that there $\overline{\rho}_E$ is ramified at $\ell$. \end{itemize} \end{thm} As per the ``only if'' direction, one may deduce the following as a rather straightforward consequence of the recent results due to Skinner, Skinner-Zhang and Venerucci in the case of $p$-ordinary reduction and due to Kobayashi and Wan in the case of $p$-supersingular reduction. We state it here for the sake of completeness. 
\begin{thm}[Skinner, Skinner-Zhang, Venerucci] \label{thm:mainonlyif} In the situation of Theorem~\ref{thm:main}, the ``only if'' part of Perrin-Riou's conjecture holds true for all cases $(\textup{a})$, $(\textup{b})$ and $(\textup{c})$ if we further assume: \begin{itemize} \item in the case of $(\textup{a})$, that $N$ is square-free and either $E$ has non-split multiplicative reduction at one odd prime or split multiplicative reduction at two odd primes; \item in the case of $(\textup{c})$ that \begin{itemize} \item $p$ does not divide $\textup{ord}_p(\Delta_E)$ and, when $E$ has split multiplicative reduction at $p$, the Galois representation $E[p]$ is not finite at $p$, \item for all primes $\ell \mid\mid N$ such that $\ell \equiv \pm 1 \mod p$, the prime $p$ does not divide $\textup{ord}_\ell(\Delta_E)$, \item there exist at least two prime factors $\ell \mid\mid N$ such that $p$ does not divide $\textup{ord}_\ell(\Delta_E)$. \end{itemize} \end{itemize} \end{thm} We remark that in the situation of (a), the hypotheses in Theorem~\ref{thm:mainonlyif} may be slightly altered if we rely on the work of Zhang~\cite[Theorem~{1.3}]{zhangconversetokolyCambridge} on the converse of the Gross-Zagier-Kolyvagin theorem, in place of the work of Skinner. This would, on the one hand, allow us to relax the condition on the conductor $N$; on the other hand, it would force us to introduce additional hypotheses (see Theorem 1.1 of loc.\ cit.). In a variety of cases, we will be able to refine Theorem~\ref{thm:main} and deduce that the square of the logarithm of a suitable Heegner point agrees with the logarithm of the Beilinson-Kato class $\mathbf{BK}_1$ up to an explicit non-zero algebraic factor, and verify some of the hypothetical conclusions in \cite[\S3.3.3]{pr93grenoble}\footnote{Given Theorem~\ref{thm:main} and Kobayashi's $p$-adic Gross-Zagier formula, Perrin-Riou's article seems to contain an unconditional proof of Corollaries~\ref{cor:introsupersingularPRenhanced} and \ref{cor:introRubinsformula}. After an e-mail exchange with F. Castella, we came to believe that it would be useful to provide a somewhat detailed discussion on these results (which benefits from the recent developments in the theory of triangulordinary Selmer groups). We are grateful to F. Castella for his inquiry regarding this point.}. We record here the following three results in the cases when $E$ has good supersingular or bad non-split multiplicative reduction at $p$ (see Remark~\ref{rem:goodordcase} below for the developments concerning the good ordinary case). For a detailed discussion and proofs, we refer the reader to Section~\ref{sec:heegner}. \begin{cor}[Perrin-Riou] \label{cor:introsupersingularPRenhanced} Suppose that $E$ has good supersingular reduction at $p$ and verifies the hypotheses of Theorem~\ref{thm:main}, as well as that its conductor $N$ is square-free. Then, $$\log_E\left(\textup{res}_p(\mathbf{BK}_1)\right)=-(1-1/\alpha)(1-1/\beta)\cdot C(E)\cdot\log_E\left(\textup{res}_p(P)\right)^2$$ for a suitably chosen Heegner point $P \in E(\mathbb{Q})$, where $\log_E$ stands for the coordinate of the Bloch-Kato logarithm associated to $E$ with respect to a suitably normalized N\'eron differential on $E$ and $C(E)\in \mathbb{Q}^\times$ is given in (\ref{eqn:GZ}). \end{cor} This is Theorem~\ref{thm:mainsupersingularexplicit} below.
Its proof that we present here follows the general view point that the works \cite{benoisheights, benoiscomplex, benoisbuyukboduk} offer on the theory of heights on triangulordinary Selmer groups, which in a certain (perhaps very subjective) sense of the word simplifies the discussion in \cite[\S4]{prGZ} and \cite[\S3.3.3]{pr93grenoble}. \begin{equation}gin{rem} \label{rem:goodordcase} The treatment of the good ordinary case with the strategy outlined here requires a $p$-adic Gross-Zagier formula at critical slope. After recent developments (that we shall outline below), this formulae is now within our reach and it is the subject of our forthcoming joint work with R. Pollack and S. Sasaki~\cite{BPS-CriticalGZ}. We sketch here our strategy in \cite{BPS-CriticalGZ} to prove the desired Gross-Zagier formula for the critical slope $p$-adic $L$-function attached to the $p$-stabilization $f^{\begin{equation}ta}_E$, where $f_E$ is the eigenform attached to $E$ and $v_p(\begin{equation}ta)=1$. As the first step one considers a Coleman family ${\mathbf f}$ deforming $f$ over a suitable affinoid $A$, with constant slope $1$ and $U_p$-eigenvalue ${\bbbeta}$. Let $\mathscr{Z}$ denote the set of integers $k>2$ which are congruent to $2$ modulo $p-1$. For $k\in \mathscr{Z}$, we let ${\bf{f}}(k)$ denote the specialization of $\bf{f}$ with trivial wild character, so that ${\bf{f}}(k)$ is a $p$-stabilized $p$-old eigenform, which is the $p$-stabilization of a newform ${\bf{f}}(k)^{\circ}$ of level $N$ associated to the root $\bbbeta(k)$ of the Hecke polynomial of ${\bf{f}}(k)^{\circ}$ at $p$. Notice that ${\bf{f}}(k)$ is of non-critical slope. Moreover, $\bbbeta(k)$ has smaller $p$-adic valuation than the other root of the Hecke polynomial of ${\bf{f}}(k)^{\circ}$ at $p$ whenever $k\geq 4$. The first step is to interpolate Heegner cycles (or rather the Generalized Heegner cycles of Bertolini-Darmon-Prasanna) in our Coleman family; this is one of the tasks we will be carrying out in our forthcoming article. We note that Jetchev-Loeffler-Zerbes have independently announced the construction of big Generalized Heegner cycles over a Coleman family (seemingly with a method different than ours). The second step is to prove a $p$-adic Gross-Zagier formulae for non-ordinary newforms ${\bf{f}}(k)^{\circ}$, corresponding to their stabilization with respect to the eigenvalue $\bbbeta(k)$. This has been recently announced by S. Kobayashi. Thanks to the work of Benois, $p$-adic (cyclotomic) height pairings that appear in step two interpolate along the Coleman family $\bf{f}$, giving rise to an $A$-adic (cyclotomic) height pairings. Combining this fact with Steps 1 and 2 (along with the density of $\mathscr{Z}$ in $A$), one obtains an $A$-adic Gross-Zagier formula for the two variable $p$-adic $L$-function $L_p({\bf f}/K):=L_p({\bf f})L_p({\bf f}\otimes \chi_K)$ (where the two $p$-adic $L$-functions on the right are those constructed Pollack-Stevens and Bella\"{i}che) expressing its cyclotomic derivative of $L_p({\bf f}/K)$ as the $A$-adic height of the big Generalized Heegner cycle. On specializing to $f^\begin{equation}ta_E={\bf f}(2)$, we obtain the desired $p$-adic Gross-Zagier formula at critical slope. Before we end this remark, we note that Bertolini and Darmon have readily announced a proof of Corollary~\ref{cor:introsupersingularPRenhanced} in the case of good ordinary reduction. Their methods are disjoint from what we have sketched within this remark (though see also Remark~\ref{rem:darmonvenerucci} below). 
\end{rem} Corollary~\ref{cor:introsupersingularPRenhanced} combined with the Gross-Zagier formula, Kobayashi's $p$-adic Gross-Zagier formula in this set up and Perrin-Riou's analysis in \cite[\S2.2.2]{pr93grenoble} yields the following result, which allows us to determine a global point in terms of the special values of the associated $p$-adic $L$-functions and validates Formula 3.3.4 of \cite{pr93grenoble}. Let $\omega_\mathcal{L}pha,\omega_\begin{equation}ta \in D_{\textup{cris}}(V)$ denote the canonical elements given as in Section~\ref{subsubsec:Ehasgoodreductionatp}. We set $\delta_E:=[\omega_\begin{equation}ta,\omega_\mathcal{L}pha]/C(E)$, where $[\,,\,]: D_{\textup{cris}}(V)\times D_{\textup{cris}}(V)\rightarrow \mathbb{Q}_p$ is the canonical pairing. \begin{equation}gin{cor}[Gross-Zagier, Kobayashi, Perrin-Riou] \label{cor:introRubinsformula} Let $\exp_V$ denote the Bloch-Kato exponential map. Then under the hypotheses of Corollary~\ref{cor:introsupersingularPRenhanced}, $$P:=\exp_{V}\left(\omega^*\cdot\sqrt{\delta_E\left((1-1/\mathcal{L}pha)^{-2}\cdot L_{p,\mathcal{L}pha}^\mathrm{pr}ime(E/\mathbb{Q},1)-(1-1/\begin{equation}ta)^{-2}\cdot L_{p,\begin{equation}ta}^\mathrm{pr}ime(E/\mathbb{Q},1)\right)}\right)$$ is a $\mathbb{Q}$-rational point on $E$ of infinite order. \end{cor} See \cite[Section 2.7]{kuriharapollack} (and of course, Perrin-Riou's article \cite{pr93grenoble}) for a similar formula which is verified assuming the truth of the $p$-adic Birch and Swinnerton-Dyer conjecture. We instead rely on a Rubin-style formula and the Gross-Zagier formulae of \cite{grosszagier,kobayashiGZ}. The reader will of course notice that Kobayashi's work in op. cit. implies the $p$-adic Birch and Swinnerton-Dyer conjecture up to non-zero rational constants. We may also deduce the following version of Corollary~\ref{cor:introsupersingularPRenhanced} (relying on Disegni's work in place of Kobayashi's) in the case when $E$ has non-split multiplicative reduction at $p$. \begin{equation}gin{cor}[Disegni, Gross-Zagier, Nekov\'a\v{r}] \label{cor:mainenhanced} Suppose that $E$ is an elliptic curve with non-split-multiplicative reduction at $p$ and verifies the hypotheses of Theorem~\ref{thm:main}, also that there exists a prime $\ell \mid\mid N$ such that there $\overline{\rho}_E$ is ramified at $\ell$. Assume that $r_{\textup{an}}=1$ and further that Nekov\'a\v{r}'s $p$-adic height pairing associated to the canonical splitting of the Hodge-filtration on the semi-stable Dieudonn\'e module $D_{\textup{st}}(V)$ is non-vanishing. Then, $$\log_E\left(\textup{res}_p(\mathbf{BK}_1)\right)\cdot\log_E\left(\textup{res}_p(P)\right)^{-2}\in \overline{\mathbb{Q}}^\times\,,$$ where $P$ is any generator of $E(\mathbb{Q})/E(\mathbb{Q})_{\textup{tor}}$\,. \end{cor} \begin{equation}gin{rem} \label{rem:darmonvenerucci} Bertolini and Darmon have announced that in their forthcoming work \cite{BDBSD3}, they prove a result similar to Corollaries \ref{cor:introsupersingularPRenhanced} and \ref{cor:mainenhanced} in the situation of (a). Also, using different techniques then those of \cite{BDBSD3}, Venerucci \cite{venerucciinventiones} gave a proof of the result above when $E$ has split multiplicative reduction at $p$. They may then deduce the conclusions of Theorems~\ref{thm:main} and \ref{thm:mainonlyif} directly from these results (in the respective setting). 
In contrast, our exposition here is based on the observation that the weak Perrin-Riou Conjecture~\ref{conj:PR} on the non-vanishing of the Beilinson-Kato class is an immediate consequence of Iwasawa main conjectures. Following Perrin-Riou's original strategy, we then exploit a variety of Gross-Zagier formulae to deduce the full Perrin-Riou conjecture (that relates the Beilinson-Kato class to Heegner points). In particular, the proof of full Perrin-Riou conjecture in the situation of (a) also follows from Theorem~\ref{thm:main} thanks to a $p$-adic Gross-Zagier formula for the critical slope $p$-adic $L$-function (along with Perrin-Riou's $p$-adic Gross-Zagier formula for the slope-zero $p$-adic $L$-function), whose proof we have outlined in Remark~\ref{rem:goodordcase} and which is the subject of~\cite{BPS-CriticalGZ}. Concerning Corollaries \ref{cor:introsupersingularPRenhanced} and \ref{cor:mainenhanced}, we would like to underline\footnote{We would like to thank Henri Darmon for an enlightening exchange regarding this point.} a common key feature of the three approaches (in \cite{BDBSD3, venerucciinventiones} and the original approach of Perrin-Riou that we take as a base here) towards it, despite their apparent differences: All three works make crucial use of a suitable $p$-adic Gross-Zagier formula, allowing the comparison of Heegner points with Beilinson-Kato elements. For the approach in \cite{BDBSD3}, this formula is provided by \cite{BDprasannaduke} (where the relevant $p$-adic Gross-Zagier formula is proved by exploiting Waldspurger's formula and it resembles Katz's proof of the $p$-adic Kronecker limit formula) and for Venerucci's approach in \cite{venerucciinventiones}, it is provided by \cite{BDHida2007} (where the authors use Hida deformations and the \v{C}erednik-Drinfeld uniformization of Shimura curves). In the proof of Corollary~\ref{cor:mainenhanced} here, we rely on the $p$-adic Gross-Zagier formulae of Perrin-Riou~\cite{prGZ} in the situation of (a), of Kobayashi~\cite{kobayashiGZ} in the situation of (b) and the recent work of Disegni~\cite{disegniGZ} when $E$ has non-split-multiplicative reduction at $p$. \end{rem} \begin{equation}gin{rem} \label{rem:selfdualmodforms} Our arguments here easily adapt to treat also higher weight eigenforms; however, our conclusion in that situation is not as satisfactory as in the case of elliptic curves. For this reason, here we shall only provide a brief overview of our results towards Perrin-Riou's conjecture in that level of generality. We say that a Galois representation $V$ (with coefficients in a finite extension $K$ of $\mathbb{Q}_p$) is \emph{essentially self-dual} if it has a self-dual Tate-twist $V(r)$ and we say that an elliptic eigenform $f$ (of even weight $2k$ and level $N$, with $N$ coprime to $p$) is essentially self-dual if Deligne's representation $W_f$ associated to $f$ is. In this case, we set $V_f=W_f(k)$; this is necessarily the self-dual twist of $W_f$. Fix a Galois-stable $\frak{o}_K$-lattice $T_f$ contained in $V_f$ and let $\texttt{k}$ denote the residue field of $\frak{o}_K$. We will set $\rho_f: G_{\mathbb{Q},S}\rightarrow \textup{GL}(T_f)$ (where $S$ is the set consisting of all rational primes dividing $Np$ and the archimedean place) and $\overline{\rho}_f:= \rho_f\otimes \texttt{k}$. 
If the conditions that \begin{equation}gin{itemize} \item $\overline{\rho}_f$ is absolutely irreducible, \item $f$ is $p$-distinguished (namely, the semi-simplification of $\overline{\rho}_f\big{|}_{G_{\mathbb{Q}_p}}$ is non-scalar), \end{itemize} simultaneously hold true, then \begin{equation}gin{align*}\textup{ord}_{s=k}\,L(f,s)=1 \implies& \hbox{ either } \log_{V_f}\mathbf{BK}_1\neq 0\,,\\ & \hbox{ or else \,} \textup{res}_p:H^1_f(\mathbb{Q},V_f)\rightarrow H^1_f(\mathbb{Q}_p,V_f) \hbox{ is the zero map}.\end{align*} Here $H^1_f(\mathbb{Q}_p,V_f) \subset H^1(\mathbb{Q}_p,V_f)$ is the image of the Bloch-Kato exponential map $\exp_{V_f}$ and $H^1_f(\mathbb{Q},V_f)$ is the Bloch-Kato Selmer group. Note in particular that the main result of \cite{benoisbuyukboduk} towards Perrin-Riou's conjecture for $p$-non-crystalline semistable modular forms escapes the methods of the current article. \end{rem} \subsubsection*{\textbf{Part II. $\Lambda$-adic Kolyvagin systems, Beilinson-Flach elements and Coleman-Rubin-Stark elements}} The general theory of $\Lambda$-adic Kolyvagin systems yields a relatively simple criterion to verify the Perrin-Riou conjecture in a great level of generality. This is the content of our Theorem~\ref{thm:PRwithKS} below. Combined with the work of Wan~\cite{wanpordinaryindefinite} and Howard~\cite{benGL2, bentwistedGZ}, it yields an easy proof of the following statement (which is Corollary~\ref{cor:PRBFelement} in the main body of this note): \begin{equation}gin{thm} \label{thm:introPRBF} Let $E/\mathbb{Q}$ be a non-CM elliptic curve that has good ordinary reduction at $p$ and assume that the residual representation $\overline{\rho}_E$ is absolutely irreducible. Let $K/\mathbb{Q}$ be an imaginary quadratic extension that satisfies the weak Heegner hypothesis for $E$ and where $p$ splits completely. Suppose further that $p$ does not divide $\textup{ord}_\ell(j(E))$ whenever $\ell$ is a prime of split multiplicative reduction. If the twisted $L$-function $L(E/K,\mathcal{L}pha,s)$ associated to the base change of $E/K$ and a $p$-distinguished character $\mathcal{L}pha$ vanishes at $s=1$ to exact order $1$, then the corresponding Beilinson-Flach element $\textup{BF}_1 \in H^1_f(K,T_p(E)\otimes\mathcal{L}pha^{-1})$ is non-trivial. \end{thm} We remark that Bertolini and Darmon have announced the proof of an analogue of full Perrin-Riou conjecture in this set up, that enables them to compare the logarithms of Beilinson-Flach elements and Heegner points. Theorem~\ref{thm:introPRBF} is a considerably weaker version of their result. On the other hand, Theorem~\ref{thm:introPRBF} may be extended to cover the case of elliptic curves with good supersingular reduction as well, if one relied on another preprint of Wan on Iwasawa main conjectures at supersingular primes. The view point offered by the theory of $\Lambda$-adic Kolyvagin systems enables us to address similar problems concerning a CM abelian variety $A$ of dimension $g$. In this situation, a slightly stronger version of Rubin-Stark conjectures equips us with a rank-$g$ Euler system. Making use of Perin-Riou's \emph{extended logarithm maps} and the methods of \cite{kbbCMabvar,kbbleiPLMS}, we may obtain Kolyvagin systems (Theorem~\ref{thm:mainRS}) out of these classes (that we call the \emph{Coleman-Rubin-Stark Kolyvagin systems}), with which we may apply Theorem~\ref{thm:PRwithKS}. 
Our \emph{Explicit Reciprocity Conjecture}~\ref{conj:explicitreciprocityforRS} for these classes is a natural generalization of the Coates-Wiles explicit reciprocity law for elliptic units, and predicts in a rather precise manner how the conjectural Rubin-Stark elements should be related to the Hecke $L$-values attached to the CM abelian variety $A$. All this combined with the results of \cite{kbbCMabvar} allows us to prove Theorem~\ref{thm:introPRColemanPR} below, which establishes an explicit link between the Coleman-Rubin-Stark elements and Heegner points. Let $A$ be an abelian variety which has complex multiplication by an order of a CM field $K$ whose index in the maximal order is prime to $p$ and defined over the maximal totally real subfield $K_+$ of $K$. We assume that $A$ has good ordinary reduction at every prime above $p$, that the prime $p$ is unramified in $K_+/\mathbb{Q}$ and that $A$ verifies the non-anomaly hypothesis (\ref{eqn:hna}) at $p$. One may than associate $A$ a Hilbert modular CM form $\phi$ on $\textup{Res}_{K_+/\mathbb{Q}}\,\textup{GL}_2$ of weight $2$. See Section~\ref{subsec:CMabvarPRStark} for a detailed discussion of these objects. We will assume that there exists a degree one prime of $K_+$ above $p$ (we believe that it should be possible to get around of this assumption with more work). Let $\langle\,,\,\rightarrowngle$ denote the $p$-adic height pairing introduced in Definition~\ref{def:heightparing} and \emph{assume that it is non-zero}. Denote by $\frak{C}$ the Coleman-Rubin-Stark element associated to the CM form $\phi$ (that is given as in Definition~\ref{def:ColemanRS}). \begin{equation}gin{thm} \label{thm:introPRColemanPR} If the Hecke $L$-series of the associated to CM form $\phi$ vanishes to exact order $1$ at $s=1$, then the Coleman-Rubin-Stark class $\frak{C}$ is non-trivial and we have $$\log_{\omega}\left(\frak{C}\right) \equiv \log_{\omega}\left(P_\phi\right)^2 \mod \overline{\mathbb{Q}}^\times\,,$$ where $P_\phi$ is a Heegner point on $A$ and $\log_\omega$ is a certain coordinate of the Bloch-Kato logarithm $($with respect to a suitably chosen N\'eron differential form$)$ given as in $($\ref{eqn:logomegadefine}$)$ below. \end{thm} See Theorem~\ref{thm:maincolemanRSheegnerPR} below for a more precise version of this statement. We remark that Burungale and Disegni~\cite{burungaledisegni} recently proved the generic non-triviality of $p$-adic heights. Relying on this result, one may generically by-pass this hypothesis for the twisted variants of Theorem~\ref{thm:introPRColemanPR}. \begin{equation}gin{rem} We were informed by D. Disegni that in a future version of \cite{burungaledisegni}, the authors will be proving a complementary version of the formula in Theorem~\ref{thm:introPRColemanPR}, which will express the right hand side in terms of the special values of a suitable $p$-adic $L$-function. \end{rem} \subsection{Notation} For a number field $K$, we define $K_S$ to be the maximal extension of $K$ unramified outside a finite set of places $S$ of $K$ that contains all archimedean places as well as all those lying above $p$. Set $G_{K,S}:=\textup{Gal}(K_S/K)$. Let $\bbmu_{p^{\infty}}$ denote the $p$-power roots of unity. For a complete local noetherian $\mathbb{Z}_p$-algebra $R$ and an $R[[G_{\mathbb{Q},S}]]$-module $X$ which is free of finite rank over $R$, we define $X^*:=\textup{Hom}(X, \bbmu_{p^{\infty}})$ and refer to it as the Cartier dual of $X$. 
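For instance, if $X=T_p(E)$ is the $p$-adic Tate module of an elliptic curve $E$, the Weil pairing identifies $T_p(E)$ with $\textup{Hom}(T_p(E),\mathbb{Z}_p(1))$, so that $$X^*=\textup{Hom}(T_p(E),\bbmu_{p^{\infty}})\cong T_p(E)\otimes \mathbb{Q}_p/\mathbb{Z}_p\cong E[p^{\infty}]\,;$$ this identification will be used repeatedly in Part I.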
For any ideal $I$ of $R$, we denote by $X[I]$ the $R$-submodule of $X$ killed by all elements of $I$. Let $A/K$ be an abelian variety of dimension $g$ which has good reduction outside $S$. We let $T_p(A)$ denote the $p$-adic Tate-module of $A$; this is a free $\mathbb{Z}_p$-module of rank $2g$ which is endowed with a continuous $G_{K,S}$-action. We define the \emph{cyclotomic deformation} $\mathbb{T}_p(A)$ of $T_p(A)$ by setting $\mathbb{T}_p(A):=T_p(A)\otimes\Lambda$ (where we let $G_{K,S}$ act diagonally); here $\Lambda:=\mathbb{Z}_p[[\Gamma]]$ is the cyclotomic Iwasawa algebra, with $\Gamma=\textup{Gal}(K_\mathrm{cyc}/K)$ the Galois group of the cyclotomic $\mathbb{Z}_p$-extension $K_\mathrm{cyc}/K$. We let $K_n/K$ denote the unique subextension of $K_\mathrm{cyc}/K$ of degree $p^n$ (and Galois group $\Gamma_n:=\Gamma/\Gamma^{p^n}\cong \mathbb{Z}/p^n\mathbb{Z}$). When $K=\mathbb{Q}$, we write $\mathbb{Q}_\infty$ in place of $\mathbb{Q}_\mathrm{cyc}$. In the first part of this article the number field $K$ will be $\mathbb{Q}$, whereas in the second part it will be either a more general totally real field or a CM field. In Part II, we will also work with a general continuous $G_K$-representation $T$ which is unramified outside $S$ and which is free of finite rank over a finite flat extension $\frak{o}$ of $\mathbb{Z}_p$. We will denote by $L$ the field of fractions of $\frak{o}$ and by $\mathbb{T}$ the $G_{K,S}$-representation $T\otimes \Lambda$. Fix a topological generator $\gamma$ of $\Gamma$. We will denote by $\textup{pr}_0$ the augmentation map $\Lambda\rightarrow \mathbb{Z}_p$ (which is induced from $\gamma\mapsto 1$) and, by slight abuse, also any map induced by it. \subsubsection{Selmer structures} \label{sec:SelmerandKS} Given a general Galois representation $\mathbb{T}$, we let $\mathcal{F}_\Lambda$ denote the \emph{canonical Selmer structure} on $\mathbb{T}$ defined by setting $H^1_{\mathcal{F}_\Lambda}(K_\lambda, \mathbb{T})=H^1(K_\lambda,\mathbb{T})$ for every prime $\lambda$ of $K$. In the notation of \cite[Definition 2.1.1]{mr02} we have $\Sigma(\mathcal{F})=S$ for each of the Selmer structures above. \begin{define} \label{define:dualSelmer} For every prime $\lambda$ of $K$, there is the perfect local Tate pairing $$\langle\,,\,\rangle_{\lambda,\textup{Tate}}\,:H^1(K_\lambda,X) \times H^1(K_\lambda,X^*) \rightarrow H^2(K_\lambda,\bbmu_{p^{\infty}})\stackrel{\sim}{\longrightarrow}\mathbb{Q}_p/\mathbb{Z}_p,$$ where the second map is the invariant map of local class field theory. For a Selmer structure $\mathcal{F}$ on $X$, define the \emph{dual Selmer structure} $\mathcal{F}^*$ on $X^*$ by setting $H^1_{\mathcal{F}^*}(K_\lambda,X^*):=H^1_\mathcal{F}(K_\lambda,X)^\perp$, the orthogonal complement of $H^1_\mathcal{F}(K_\lambda,X)$ with respect to the local Tate pairing. \end{define} Given a Selmer structure $\mathcal{F}$ on $X$, we may propagate it to the subquotients of $X$ (via \cite[Example 1.1.2]{mr02}). We denote the propagation of $\mathcal{F}$ to a subquotient still by $\mathcal{F}$. \tableofcontents \section{Part I. Proof of Perrin-Riou's conjecture for elliptic curves over $\mathbb{Q}$} \label{sec:PRconjproof} Let $E/\mathbb{Q}$ be an elliptic curve as above and recall the Beilinson-Kato Euler system $\mathbf{c}^\mathbf{BK}=\{c_F^\mathbf{BK}\}$. We write $$\mathbb{BK}_1:=\{c_{\mathbb{Q}_n}^{\textup{BK}}\}\in \varprojlim H^1(G_{\mathbb{Q}_n,S},T_p(E)) =H^1(\mathbb{Q},\mathbb{T}_p(E)),$$ where the last equality follows from \cite[Lemma 5.3.1(iii)]{mr02}.
We also set $\mathbf{BK}_1:=c_\mathbb{Q}^{\textup{BK}}$ so that $\textup{pr}_0\left(\mathbb{BK}_1\right)=\mathbf{BK}_1$. It follows from the non-vanishing results of Rohrlich~\cite{rohrlich84cyclo} and Kato's explicit reciprocity laws for the Beilinson-Kato elements that $\mathbb{BK}_1$ never vanishes. Let us denote the order of vanishing of the Hasse-Weil $L$-function $L(E/\mathbb{Q},s)$ by $r_{\textup{an}}$ and call it the \emph{analytic rank of $E$}. \subsection{Preliminaries} \label{subsec:prelim} We first explain how the ``only if'' part of Perrin-Riou's conjecture (Theorem~\ref{thm:mainonlyif}) may be deduced as an easy corollary of the work of Skinner, Skinner-Zhang and Venerucci. The argument we present here involves some of the reduction steps which we rely on for the proof of the ``if'' part of the conjecture and we pin these down in this portion of our article. \begin{equation}gin{proof}[Proof of Theorem~\ref{thm:mainonlyif}] Suppose first that $\textup{res}_p^s(\mathbf{BK}_1)\neq 0$, where $\textup{res}_p^s$ is the singular projection given as the compositum of the arrows $$H^1(\mathbb{Q},T)\rightarrow H^1(\mathbb{Q}_p,T)\twoheadrightarrow H^1(\mathbb{Q}_p,T)/H^1_f(\mathbb{Q}_p,T)=:H^1_s(\mathbb{Q}_p,T)\,.$$ Kato's explicit reciprocity law shows that $r_{\textup{an}}=0$. We may therefore assume without loss of generality that $\mathbf{BK}_1$ is crystalline at $p$, namely that $\mathbf{BK}_1\in H^1_f(\mathbb{Q},T)$. Since $\mathbf{BK}_1 \neq 0$, it follows from \cite[Corollary 5.2.13(i)]{mr02} that $H^1_{\mathcal{F}c^*}(\mathbb{Q},T^*)$ is finite. Recall that $\mathcal{F}_{\textup{str}}$ denotes the Selmer structure on $T$ given by \begin{equation}gin{itemize} \item $H_{\mathcal{F}_{\textup{str}}}(\mathbb{Q}_\ell,T)=H_{\mathcal{F}}(\mathbb{Q}_\ell,T)$, if $\ell\neq p$, \item $H_{\mathcal{F}_{\textup{str}}}(\mathbb{Q}_p,T)=0$. \end{itemize} We contend to verify that $H^1_{\mathcal{F}_{\textup{str}}}(\mathbb{Q},T)=0$. Assume on the contrary that $H^1_{\mathcal{F}_{\textup{str}}}(\mathbb{Q},T)$ is non-trivial. Since module $H^1(G_{\mathbb{Q},S},T)$ is torsion free under our running hypothesis on the image of $\overline{\rho}_E$, this amounts to saying that $H^1_{\mathcal{F}_{\textup{str}}}(\mathbb{Q},T)$ has positive rank. Recall further that the propagation of the Selmer structure $\mathcal{F}_{\textup{str}}$ to the quotients $T/p^nT$ (in the sense of \cite{mr02}) is still denoted by $\mathcal{F}_{\textup{str}}$. Recall that $T^*\cong E[p^\infty]$ and note for any positive integer $n$ that we may identify the quotient $T/p^nT$ with $E[p^n]$. By \cite[Lemma 3.7.1]{mr02}, we have an injection $$H_{\mathcal{F}_{\textup{str}}}(\mathbb{Q},T)/p^nH_{\mathcal{F}_{\textup{str}}}(\mathbb{Q},T) \hookrightarrow H_{\mathcal{F}_{\textup{str}}}(\mathbb{Q},T/p^nT)=H_{\mathcal{F}_{\textup{str}}}(\mathbb{Q},E[p^n])$$ induced from the projection $T\rightarrow T/p^nT$. This shows that \begin{equation}\label{eqn:lowerboundonstr} \textup{length}_{\mathbb{Z}_p}\left(H_{\mathcal{F}_{\textup{str}}}(\mathbb{Q},E[p^n])\right)\geq n. \end{equation} As above, we let $\mathcal{F}c=\mathcal{F}_1$ denote the canonical Selmer structure on $T$, given by \begin{equation}gin{itemize} \item $H_{\mathcal{F}_{\textup{can}}}(\mathbb{Q}_\ell,T)=H_{\mathcal{F}}(\mathbb{Q}_\ell,T)$, if $\ell\neq p$, \item $H_{\mathcal{F}_{\textup{can}}}(\mathbb{Q}_p,T)=H^1(\mathbb{Q}_p,T)$. 
\end{itemize} It follows from \cite[Lemma I.3.8(i)]{r00} (together with the discussion in \cite[\S6.2]{mr02}) that we have an inclusion $$H_{\mathcal{F}_{\textup{str}}}(\mathbb{Q}_\ell,E[p^n])\subset H_{\mathcal{F}_{\textup{can}}^*}(\mathbb{Q}_\ell,E[p^n])$$ for every $\ell$. Here, $E[p^n]$ is identified with $T$ on the left and with $T^*[p^n]$ on the right and also viewed as a submodule of $T\otimes \mathbb{Q}_p/\mathbb{Z}_p$). Furthermore, the index of $H_{\mathcal{F}_{\textup{str}}}(\mathbb{Q}_\ell,E[p^n])$ within $H_{\mathcal{F}_{\textup{can}}^*}(\mathbb{Q}_\ell,E[p^n])$ is bounded independently of $n$ (in fact, bounded by the order of $E(\mathbb{Q}_p)[p^\infty]$). This in turn shows that together with (\ref{eqn:lowerboundonstr}) that \begin{equation} \label{eqn:lowerboundoncan} \textup{length}_{\mathbb{Z}_p}\left(H_{\mathcal{F}_{\textup{can}}^*}(\mathbb{Q},E[p^n])\right)\geq n. \end{equation} However, $H_{\mathcal{F}_{\textup{can}}^*}(\mathbb{Q},E[p^\infty])$ is finite and therefore the length of $$H_{\mathcal{F}_{\textup{can}}^*}(\mathbb{Q},E[p^n])\cong H_{\mathcal{F}_{\textup{can}}^*}(\mathbb{Q},E[p^\infty])[p^n]$$ (where the isomorphism is thanks to \cite[Lemma 3.5.3]{mr02}, which holds true here owing to our assumption on the image of $\overline{\rho}_E$) is bounded independently of $n$. This contradicts (\ref{eqn:lowerboundoncan}) and shows that $H^1_{\mathcal{F}_{\textup{str}}}(\mathbb{Q},T)=0$. Thence, the map $$\textup{res}_p:\,H^1_f(\mathbb{Q},T)\longrightarrow H^1_f(\mathbb{Q}_p,T)$$ is injective. The module $H^1_f(\mathbb{Q}_p,T)$ is free of rank one and we conclude that $H^1_f(\mathbb{Q},T)$ has also rank one. When we are in the situation of (a) or (c), the proof now follows from the converse of the Kolyvagin-Gross-Zagier theorem proved in \cite{skinnerKGZ,skinnerzhang}. In the situation of (b), it follows from the works Kobayashi~\cite{kobayashiGZ} and Wan~\cite{xinwansupersingularmainconj} that a suitable Heegner point $P \in E(\mathbb{Q})$ is non-trivial. This in turn implies (relying on the classical Gross-Zagier formula) that $r_{\textup{an}}=1$, as desired. \end{proof} \begin{equation}gin{prop} \label{prop:strisfinite} If $r_{\textup{an}}\leq 1$, then $H^1_{\mathcal{F}c^*}(\mathbb{Q},T^*)$ is finite. \end{prop} \begin{equation}gin{proof} If $r_\textup{an}=0$, it follows that $\textup{res}_p^s(\mathbf{BK})$ and in particular that $\mathbf{BK}_1\neq0$. The conclusion of the proposition follows from \cite[Corollary 5.2.13(i)]{mr02}. Suppose now that $r_\textup{an}=1$. In this case, \begin{equation}gin{align*} H^1_{\mathcal{F}_\textup{str}}(\mathbb{Q},T)&=\ker(H^1_f(\mathbb{Q},T)\longrightarrow H^1_f(\mathbb{Q}_p,T))\\ &=\ker\left(E(\mathbb{Q})\,\widehat{\otimes}\, \mathbb{Z}_p \longrightarrow E(\mathbb{Q}_p)\,\widehat{\otimes}\, \mathbb{Z}_p\right)=0 \end{align*} where the second equality follows from the finiteness of the Tate-Shafarevich group \cite{koly90groth} and the final equality by the Gross-Zagier theorem. As in the proof of Theorem~\ref{thm:mainonlyif}, we may use this conclusion to deduce that the order of $H_{\mathcal{F}_{\textup{can}}^*}(\mathbb{Q},T^*)[p^n]\cong H_{\mathcal{F}_{\textup{can}}^*}(\mathbb{Q},E[p^n])$ is bounded independently of $n$. The proof follows. \end{proof} \subsection{Main conjectures and Perrin-Riou's conjecture} We recall in this section Kato's formulation of the Iwasawa main conjecture for the elliptic curve $E$ and record results towards this conjecture. 
It follows from Kato's reciprocity laws and Rohrlich's \cite{rohrlich84cyclo} non-vanishing theorems that the class $\mathbb{BK}_1$ is non-vanishing and the $\Lambda$-module $H^1(\mathbb{Q},\mathbb{T})$ is of rank one (as predicted by the weak Leopoldt conjecture for $E$). For two ideals $I, J \subset \Lambda$, we write $I\doteqdot J$ to mean that $I=p^{e} J$ for some integer $e$. \begin{conj} \label{ref:conjKatomain} $\textup{char}\left(H^1_{\mathcal{F}_\Lambda^*}(\mathbb{Q},\mathbb{T}^*)^\vee\right)\doteqdot\textup{char}\left(H^1(\mathbb{Q},\mathbb{T})/\Lambda\cdot\mathbb{BK}_1\right)$\,. \end{conj} This assertion is equivalent (via Kato's reciprocity laws, and up to powers of $p$) to the classical formulation of the Iwasawa main conjecture for $E$. \begin{thm}[Kato, Kobayashi, Skinner-Urban, Wan] \label{thm:mainconjistrue} In the setting of Theorem~\ref{thm:main}, Conjecture~\ref{ref:conjKatomain} holds true in the following cases: \begin{itemize} \item[(a)] $E$ has good ordinary reduction at $p$. \item[(b)] $E$ has good supersingular reduction at $p$ and $N$ is square-free. \item[(c)] $E$ has multiplicative reduction at $p$ and there exists a prime $\ell \mid\mid N$ such that $\overline{\rho}_E$ is ramified at $\ell$. \end{itemize} \end{thm} \begin{proof} In the setting of (a), see the works of Kato and Skinner-Urban \cite{ka1,skinnerurban} (as well as the enhancement of the latter due to Wan~\cite{xinwanSigmaHilbert}, which lifts certain local hypotheses of Skinner and Urban), and in the situation of (b), the works of Kobayashi and Wan~\cite{kobayashi2003,xinwansupersingularmainconj}. In the setting of (c), the works of Skinner and Kato~\cite{skinner,ka1} yield the desired conclusion. Note that Kato has stated his divisibility result towards Conjecture~\ref{ref:conjKatomain} only when $E$ has good ordinary reduction at $p$. We refer the reader to \cite{rubinkatodurham} for the slightly more general version of his theorem required to treat the non-crystalline semistable case as well. \end{proof} We are now ready to present a proof of Theorem~\ref{thm:main}, which we deduce as a direct consequence of Theorem~\ref{thm:mainconjistrue}. \begin{proof}[Proof of Theorem~\ref{thm:main}] The natural map $$H^1_{\mathcal{F}_{\Lambda}^*}(\mathbb{Q},\mathbb{T}^*)^\vee/(\gamma-1)\cdot H^1_{\mathcal{F}_{\Lambda}^*}(\mathbb{Q},\mathbb{T}^*)^\vee\longrightarrow H^1_{\mathcal{F}c^*}(\mathbb{Q},T^*)^\vee$$ has finite kernel and cokernel by the proof of Proposition~5.3.14 of \cite{mr02} (applied with the height one prime $\frak{P}=(\gamma-1)$). We note that the requirement on the $\Lambda$-module $H^2(\mathbb{Q}_{S}/\mathbb{Q},\mathbb{T})[\gamma-1]$ is not necessary for the portion of this proposition concerning us. We further remark that since $H^0(\mathbb{Q}_p,T^*)$ is finite, it follows that $H^2(\mathbb{Q}_p,\mathbb{T})[\gamma-1]$ is pseudo-null and the proposition indeed applies. It follows from Proposition~\ref{prop:strisfinite} that $(\gamma-1)$ is prime to the characteristic ideal of $H^1_{\mathcal{F}_{\Lambda}^*}(\mathbb{Q},\mathbb{T}^*)^\vee$, and by Theorem~\ref{thm:mainconjistrue} that it is also prime to $\textup{char}\left(H^1(\mathbb{Q},\mathbb{T})/\Lambda\cdot\mathbb{BK}_1\right)$.
This tells us that $$\left\{c_{\mathbb{Q}_n}^\mathbf{BK}\right\}=\mathbb{BK}_1 \notin (\gamma-1)H^1(\mathbb{Q},\mathbb{T})=\ker\left(H^1(\mathbb{Q},\mathbb{T})\longrightarrow H^1(\mathbb{Q},T)\right)\,,$$ which amounts to saying that $c_{\mathbb{Q}}^{\mathbf{BK}}\neq 0$. Furthermore, our running hypothesis that $r_{\textup{an}}=1$ together with Kato's explicit reciprocity laws implies that $c_{\mathbb{Q}}^{\mathbf{BK}} \in H^1_f(\mathbb{Q},T)$. The proof of Proposition~\ref{prop:strisfinite} now shows that $\mathrm{res}_p(c_{\mathbb{Q}}^{\mathbf{BK}})\neq 0$ (as otherwise, the module $H^1_{\mathcal{F}_{\textup{str}}}(\mathbb{Q},T)$ would have been non-zero). \end{proof} \subsection{Logarithms of Heegner points and Beilinson-Kato classes} \label{sec:heegner} As before, we suppose that $E$ is an elliptic curve such that the residual representation $\overline{\rho}_E$ is surjective. Assume further that $p$ does not divide $\textup{ord}_\ell(j(E))$ whenever $\ell \mid N$ is a prime of split multiplicative reduction. Assume also that one of the following conditions hold true. \begin{equation}gin{itemize} \item[(a)] $E$ has good ordinary reduction at $p$. \item[(b)] $E$ has good supersingular reduction at $p$ and $N$ is square-free. \item[(c)] $E$ has multiplicative reduction at $p$ and there exists a prime $\ell \mid\mid N$ such that there $\overline{\rho}_E$ is ramified at $\ell$. \end{itemize} With this set up, we will be able refine the conclusion of Theorem~\ref{thm:main} in a variety of cases and deduce that the square of the logarithm of a suitable Heegner point agrees with the logarithm of the Beilinson-Kato class $\mathbf{BK}_1$ up to an explicit non-zero algebraic factor. In these cases, we will therefore justify some of the hypothetical conclusions in \cite[\S3.3.3]{pr93grenoble} (see also Footnote 1). We fix a Weierstrass minimal model $\mathcal{E}_{/\mathbb{Z}}$ of $E$. Let $\omega_\mathcal{E}$ be a N\'eron differential that is normalized as in \cite[\S 3.4]{PR-ast} and is such that we have $\Omega_E^+:=\int_{E(\mathbb{C})^+}\omega_\mathcal{E}>0$ for the real period $\Omega_E^+$. Suppose till the end of this Introduction that $L(E,s)$ has a simple zero at $s=1$. In this situation, $E(\mathbb{Q})$ has rank one and the N\'eron-Tate height $ \langle P,P \rightarrowngle_{\infty}$ of any generator $P$ of the free part of $E(\mathbb{Q})$ is related via the Gross-Zagier theorem to the first derivative of $L(E,s)$ at $s=1$: \begin{equation} \label{eqn:GZ} \frac{L^\mathrm{pr}ime(E,1)}{\Omega_E^+}=C(E)\cdot \langle P,P \rightarrowngle_{\infty} \end{equation} with $C(E)\in \mathbb{Q}^\times$. Let $D_{\textup{cris}}(V)$ be the crystalline Dieudonn\'e module of $V$ and we define the element $\omega_{\textup{cris}}\in D_{\textup{cris}}(V)$ as that corresponds $\omega_\mathcal{E}$ under the comparison isomorphism. We let $\mathcal{H}\subset \Lambda\otimes_{\mathbb{Z}_p}\mathbb{Q}_p$ denote Perrin-Riou's ring of distributions and let $$\frak{Log}_V: H^1(\mathbb{Q}_p,\mathbb{T})\otimes_{\Lambda}\mathcal{H}\longrightarrow \mathcal{H}\otimes D_{\textup{cris}}(V)$$ be Perrin-Riou's extended dual exponential map and write $\mathcal{L}_{\mathbf{BK}}$ as a shorthand for the element $\frak{Log}_V\left(\textup{res}_p(\mathbb{BK}_1)\right) \in \mathcal{H}\otimes D_{\textup{cris}}(V)$. We also let $$\log_V: H^1_f(\mathbb{Q}_p,V)\stackrel{\sim}{\longrightarrow} D_{\textup{dR}}(V)/\textup{Fil}^0\, D_{\textup{dR}}(V)$$ denote Bloch-Kato logarithm. 
\subsubsection{$E$ has good reduction at $p$} \label{subsubsec:Ehasgoodreductionatp} In this case, $D_{\textup{cris}}(V)$ is a two-dimensional vector space. Let $\alpha^{-1},\beta^{-1} \in \overline{\mathbb{Q}}_p$ be the eigenvalues of the crystalline Frobenius $\varphi$ acting on $D_{\textup{cris}}(V)$. Extending the base field if necessary, let $D_\alpha$ and $D_\beta$ denote the corresponding eigenspaces. Set $\omega_{\textup{cris}}=\omega_\alpha+\omega_\beta$ with $\omega_\alpha \in D_\alpha$ and $\omega_\beta\in D_\beta$. On projecting $\mathcal{L}_{\mathbf{BK}}$ onto either of these vector spaces we obtain $\mathcal{L}_{\mathbf{BK},?} \in \mathcal{H}$ (so that $\mathcal{L}_{\mathbf{BK},?}\cdot\omega_?$ is the projection of $\mathcal{L}_{\mathbf{BK}}$ onto $\mathcal{H}\otimes D_?$) for $?=\alpha,\beta$. \begin{thm}[Kato, Kobayashi, Perrin-Riou] \label{thm:mainsupersingularexplicit} Suppose that $E$ has good supersingular reduction at $p$ and $N$ is square-free. For $?=\alpha,\beta$, \begin{itemize} \item[(i)] the Amice transform of the distribution $\mathcal{L}_{\mathbf{BK},?}$ is the Manin-Vishik, Amice-Velu $p$-adic $L$-function $L_{p,?}(E/\mathbb{Q},s)$ associated to the pair $(E,D_?)$; \item[(ii)] when $r_{\textup{an}}=1$, one of the two $p$-adic $L$-functions vanishes at $s=1$ to degree $1$; \item[(iii)] still when $r_{\textup{an}}= 1$, at least one of the associated $p$-adic height pairings $\langle\,,\,\rangle_{p,?}$ is non-degenerate; \item[(iv)] $\log_E\left(\textup{res}_p(\mathbf{BK}_1)\right)=-(1-1/\alpha)(1-1/\beta)\cdot C(E)\cdot\log_E\left(\textup{res}_p(P)\right)^2$\,. \end{itemize} \end{thm} Note that the quantity $(1-1/\alpha)(1-1/\beta)=(1+1/p)$ belongs to $\mathbb{Q}^\times$. \begin{rem} Kobayashi has in fact proved in \cite[Corollaries 1.3(ii) and 4.9]{kobayashiGZ} that when $r_{\textup{an}}=1$, \emph{both} $p$-adic $L$-functions vanish at $s=1$ to degree $1$ and \emph{both} of the associated $p$-adic height pairings $\langle\,,\,\rangle_{p,?}$ are non-degenerate. The reason why we recall his results in this weaker form is that we hope to apply the strategy here to prove Theorem~\ref{thm:mainsupersingularexplicit}(iv) also in the case when $E$ has good ordinary reduction at $p$. In that case, if we had a $p$-adic Gross-Zagier formula for the critical slope $p$-adic $L$-function, we could have proceeded precisely in this manner, even though we do not know that both $p$-adic height pairings in this set up are non-trivial. See Remark~\ref{rem:mainthmgoodordinary} below for a further discussion concerning this point. \end{rem} \begin{proof}[Proof of Theorem~\ref{thm:mainsupersingularexplicit}] The first assertion is due to Kato\footnote{This is accomplished after suitably normalizing $\mathbb{BK}_1$ and throughout this work, we implicitly assume that we have done so.}, see \cite{ka1}. It follows from \cite[Proposition 2.2.2]{pr93grenoble} and Theorem~\ref{thm:main} that $\mathcal{L}_{\mathbf{BK}}^\prime(\mathds{1})\neq 0$. Thence, for at least one of $\alpha$ or $\beta$ (say it is $\alpha$) we have $$\textup{ord}_{s=1} L_{p,\alpha}(E/\mathbb{Q},s)=1\,.$$ This completes the proof of the second assertion.
The third follows from the $p$-adic Gross-Zagier formula of Kobayashi in~\cite{kobayashiGZ}. We will deduce\footnote{ Although it might be possible to prove (iv) relying only on the technology available in \cite{pr93grenoble}, we decided that we will stick to the current exposition (that benefits from the recent developments in the theory of triangulordinary Selmer groups) as our later discussion that concerns higher dimensional motives is in line with the approach we present here.} the last portion from the discussion in \cite[\S 3.3.3]{pr93grenoble}, as enhanced by Benois~\cite{benoiscomplex, benoisheights} building the work of Pottharst, combined with Kobayashi's $p$-adic Gross-Zagier formula. Let $\omega_{\textup{cris}}$ and $\omega_\mathcal{L}pha,\omega_\begin{equation}ta$ be as above. Let us write $\frak{Log}_{V,\mathcal{L}pha} \in \textup{Hom}_{\Lambda}(H^1(\mathbb{Q},\mathbb{T}),\mathcal{H})$ for the $\omega_\mathcal{L}pha$-coordinate of the projection of the image of $\frak{Log}_{V}$ parallel to $D_\begin{equation}ta$. We likewise define $\frak{Log}_{V,\begin{equation}ta}$; note that $$\frak{Log}_{V}=\frak{Log}_{V,\mathcal{L}pha}\cdot \omega_\mathcal{L}pha+\frak{Log}_{V,\begin{equation}ta}\cdot\omega_\begin{equation}ta\,.$$ Let $\omega^* \in D_{\textup{cris}}(V)/\textup{Fil}^0D_{\textup{cris}}(V)$ denote the unique element such that $[\omega,\omega^*]=1$, where $$[\,,\,]\,: D_{\textup{cris}}(V)\otimes D_{\textup{cris}}(V) \longrightarrow \mathbb{Q}_p$$ the canonical pairing. Define $\log_{\omega^*}(\mathbf{BK}_1)$ so that $$\log_{V}(\mathrm{res}_p(\mathbf{BK}_1))=\log_{\omega^*}(\mathrm{res}_p(\mathbf{BK}_1))\cdot\omega^*\,;$$ note that with our choice of $\omega$, we have $\log_{\omega^*}=\log_E$. The dual basis of $\{\omega_\mathcal{L}pha,\omega_\begin{equation}ta\}$ with respect to the pairing $[\,,\,]$ is $\{\omega_\begin{equation}ta^*,\omega_\mathcal{L}pha^*\}$, where $\omega_\begin{equation}ta^*$ (respectively, $\omega_\mathcal{L}pha^*$) is the image of $\omega^*$ under the inverse of the isomorphism $s_{D_\begin{equation}ta}:\,D_\begin{equation}ta\stackrel{\sim}{\rightarrow} D_{\textup{cris}}(V)/\textup{Fil}^0D_{\textup{cris}}(V)$ (respectively, under the inverse of $s_{D_\mathcal{L}pha}$). Suppose that $\mathcal{L}pha$ is such that $\langle\,,\,\rightarrowngle_{p,\mathcal{L}pha}$ is non-degenerate. 
Then, \begin{align} \notag(1-1/\alpha)^2\cdot C(E)\cdot \langle P,P\rangle_{p,\alpha}&=\frac{d\,L_{p,\alpha}(E/\mathbb{Q},s)}{ds}\Big{|}_{s=1}\\ \label{eqn:BKcalculation2}&=\frak{Log}_{V,\alpha}(\partial_\alpha\mathbb{BK}_1)(\mathds{1})\\ \notag&=\left[\exp^*(\partial_\alpha{\mathbf{BK}}_1),(1-p^{-1}\varphi^{-1})(1-\varphi)^{-1}\cdot\omega_\beta^*\right]\\ \notag&=(1-p^{-1}\beta)(1-1/\beta)^{-1}\left[\exp^*(\partial_\alpha{\mathbf{BK}}_1),\omega^*\right]\\ \notag&=(1-1/\alpha)(1-1/\beta)^{-1}\frac{\left[\exp^*(\partial_\alpha{\mathbf{BK}}_1),\log_V(\mathrm{res}_p(\mathbf{BK}_1))\right]}{\log_{\omega^*}(\mathrm{res}_p(\mathbf{BK}_1))}\\ \label{eqn:BKcalculation9}&=-(1-1/\alpha)(1-1/\beta)^{-1}\frac{\langle\mathbf{BK}_1,\mathbf{BK}_1\rangle_{p,\alpha}}{\log_{\omega^*}(\mathrm{res}_p(\mathbf{BK}_1))} \end{align} where the first equality is Kobayashi's formula and the second will follow from the definition of $\frak{Log}_{V,\alpha}$ and the fact that it maps the Beilinson-Kato class to the Amice-Velu, Manin-Vishik distribution as soon as we define the \emph{derivative} $\partial\mathbb{BK}_1$ of the Beilinson-Kato class; we take care of this at the very end. The third equality will also follow from the explicit reciprocity laws of Perrin-Riou (as proved by Colmez) (c.f., the discussion in \cite[Section 2.1]{kbbleiintegralMC}) once we define the projection $\partial{\mathbf{BK}}_1\in H^1_s(\mathbb{Q}_p,V)$ of the derived Beilinson-Kato class $\partial\mathbb{BK}_1$. The fourth and fifth equalities follow from the definitions (using the fact that $\alpha\beta=p$). We now explain (\ref{eqn:BKcalculation9}) together with the definitions of the objects $\partial\mathbb{BK}_1$ and $\partial{\mathbf{BK}}_1$. Let $D_{\textup{rig}}^\dagger(V)$ denote the $(\varphi,\Gamma)$-module attached to $V$ and $\mathbb{D}_\alpha \subset D_{\textup{rig}}^\dagger(V)$ the saturated $(\varphi,\Gamma)$-submodule of $D_{\textup{rig}}^\dagger(V)$ attached to $D_\alpha$ by Berger. Set $\widetilde{\mathbb{D}}_\alpha:=D_{\textup{rig}}^\dagger(V)/\mathbb{D}_\alpha$, which is also a $(\varphi,\Gamma)$-module of rank one. Given a $(\varphi,\Gamma)$-module $\mathbb{D}$, one may define the cohomology group $H^1(\mathbb{D})$ (respectively, the Iwasawa cohomology group $H^1_\mathrm{Iw}(\mathbb{D})$) of $\mathbb{D}$ making use of the Fontaine-Herr complex associated to $\mathbb{D}$ (c.f., \cite[Section 1.2]{benoiscomplex}). A well-known result of Herr yields canonical isomorphisms $$H^1_\mathrm{Iw}({D}_\textup{rig}^\dagger(V))\cong H^1 (\mathbb{Q}_p,\mathbb{T})\otimes\mathcal{H}\,\,,\,\, H^1({D}_\textup{rig}^\dagger(V))\cong H^1 (\mathbb{Q}_p,V)\,.$$ Furthermore, we have an exact sequence $$0\rightarrow H^1_\mathrm{Iw}(\mathbb{D}_\alpha) \longrightarrow H^1_\mathrm{Iw}(D_{\textup{rig}}^{\dagger}(V))\stackrel{{\pi_{/f}}}{\longrightarrow} H^1_\mathrm{Iw}(\widetilde{\mathbb{D}}_\alpha)$$ and we set $\textup{Res}_p^{/f}(\mathbb{BK}_1):=\pi_{/f}\circ \mathrm{res}_p(\mathbb{BK}_1) \in H^1_\mathrm{Iw}(\widetilde{\mathbb{D}}_\alpha)$.
It turns also that we have the following commutative diagram: $$\xymatrix{\ar[d] H^1_\mathrm{Iw}(D_{\textup{rig}}^{\dagger}(V))\ar[r]^{{\pi_{/f}}}\ar[d]_{\textup{pr}_0}& H^1_\mathrm{Iw}(\widetilde{\mathbb{D}}_\mathcal{L}pha)\ar[d]^{\textup{pr}_0}\\ H^1(\mathbb{Q}_p,V)\ar[r]& H^1_{/f}(\mathbb{Q}_p,V)}$$ so that $\textup{pr}_0\circ\textup{Res}_p^{/f}(\mathbb{BK}_1)=\mathrm{res}_p^{/f}(\mathbf{BK}_1)=0$. Here, $H^1_{/f}(\mathbb{Q}_p,V):=H^1(\mathbb{Q}_p,V)/H^1_{f}(\mathbb{Q}_p,V)$ is the singular quotient. The vanishing of $\textup{pr}_0\circ\textup{Res}_p^{/f}(\mathbb{BK}_1)$ shows that $$\textup{Res}_p^{/f}(\mathbb{BK}_1) \in \ker(\textup{pr}_0:H^1_\mathrm{Iw}(\widetilde{\mathbb{D}}_\mathcal{L}pha)\rightarrow H^1_{/f}(\mathbb{Q}_p,V))=(\gamma-1)\cdot H^1_\mathrm{Iw}(\widetilde{\mathbb{D}}_\mathcal{L}pha),$$ so that there is an element $\partial\mathbb{BK}_1 \in H^1_\mathrm{Iw}(\widetilde{\mathbb{D}}_\mathcal{L}pha)$ with the property that $$\log_p\chi_{\mathrm{cyc}}(\gamma)\cdot\textup{Res}_p^{/f}\left(\mathbb{BK}_1\right)=(\gamma-1)\cdot\partial\mathbb{BK}_1\,,$$ and as a matter of fact, this element is uniquely determined in our set up. Berger's reinterpretation of the Perrin-Riou map $\frak{Log}_{V,\mathcal{L}pha}$ shows that it factors through $\pi_{/f}$ and therefore also that $$\frak{Log}_{V,\mathcal{L}pha}(\partial_\mathcal{L}pha\mathbb{BK}_1)(\mathds{1})=\frac{d}{ds}\,L_{p,\mathcal{L}pha}(E/\mathbb{Q},s)\Big{|}_{s=1}$$ as we have claimed in (\ref{eqn:BKcalculation2}). The element $\partial\mathbf{BK}_1 \in H^1(\mathbb{Q}_p,V)$ is simply $\textup{pr}_0(\partial\mathbb{BK}_1)$. Also attached to $\mathbb{D}_\mathcal{L}pha$, one may construct an extended Selmer group $\mathbf{R}^1\Gamma(V,\mathbb{D}_\mathcal{L}pha)$, which is the cohomology of a Selmer complex $\mathbf{R}\Gamma(V,\mathbb{D}_\mathcal{L}pha)$ (c.f., \cite[Section 2.3]{benoiscomplex}). It follows from \cite[Proposition 11]{benoiscomplex} that this Selmer group agrees in our set up with the classical Bloch-Kato Selmer group. It comes equipped with a $p$-adic height pairing (Section 4.2 of loc. cit.) and as a matter of fact, this height pairing agrees with $\langle\,,\,\rightarrowngle_{p,\mathcal{L}pha}$ by \cite[Theorem 5.2.2]{benoisheights} and (\ref{eqn:BKcalculation9}) follows from the Rubin-style formula proved in \cite[Theorem 4.13]{benoisbuyukboduk}. Using now the fact that $\langle \cdot,\cdot \rightarrowngle_{p,\mathcal{L}pha}$ and $\log_{\omega^*}(\mathrm{res}_p\,(\,\cdot\,))^2$ are both non-trivial quadratic forms on the one dimensional $\mathbb{Q}_p$-vector space $E(\mathbb{Q})\otimes\mathbb{Q}_p$ and combining with (\ref{eqn:BKcalculation9}), we conclude that $$\frac{\langle P,P\rightarrowngle_{p,\mathcal{L}pha}}{\log_{\omega^*}(P)^2}=\frac{\langle \mathbf{BK}_1,\mathbf{BK}_1\rightarrowngle_{p,\mathcal{L}pha}}{\log_{\omega^*}(\mathbf{BK}_1)^2}=-(1-1/\mathcal{L}pha)(1-1/\begin{equation}ta)\cdot C(E)\cdot \frac{\langle P,P\rightarrowngle_{p,\mathcal{L}pha}}{\log_{\omega^*}(\mathbf{BK}_1)}$$ \end{proof} \begin{equation}gin{rem} \label{rem:mainthmgoodordinary} Much of what we have recorded above for a supersingular prime $p$ applies verbatim for a good ordinary prime as well. Suppose that $\mathcal{L}pha$ is the root of the Hecke polynomial which is a $p$-adic unit, so that $v_p(\begin{equation}ta)=1$. 
In this case, we again have two $p$-adic $L$-functions: the projection of $\mathcal{L}_{\mathbf{BK}}$ to $D_\alpha$ yields the Mazur-Tate-Teitelbaum $p$-adic $L$-function $L_{p,\alpha}(E/\mathbb{Q},s)$, whereas its projection to $D_\beta$ should agree\footnote{We remark that this conclusion does not formally follow directly from Kato's explicit reciprocity laws. See \cite{loefflerzerbes}; see also \cite{hansen}, where a proof of this was announced.} with the \emph{critical slope} $p$-adic $L$-function $L_{p,\beta}(E/\mathbb{Q},s)$ of Bella\"iche and Pollack-Stevens. The analogous statements to those in Theorem~\ref{thm:mainsupersingularexplicit} therefore reduce to checking that one of the following holds true: \begin{itemize} \item[a)] There exists a $p$-adic Gross-Zagier formula for the critical slope $p$-adic $L$-function $L_{p,\beta}(E/\mathbb{Q},s)$, or \item[b)] $\textup{ord}_{s=1}L_{p,\beta}(f,s)\geq \textup{ord}_{s=1}L_{p,\alpha}(f,s)\,.$ \end{itemize} We suspect that the latter statement may be studied through a \emph{critical slope} main conjecture and its relation with the ordinary main conjecture. We will pursue this direction in a future joint work with R. Pollack. \end{rem} \subsubsection{$E$ has non-split-multiplicative reduction at $p$} In this case, $D_{\textup{cris}}(V)$ is one-dimensional and we have $\mathcal{L}_{\mathbf{BK}}=\mathcal{L}_{\textup{MTT}}\cdot\omega_{\textup{cris}}$, where $\mathcal{L}_{\textup{MTT}}\in \Lambda$ is the Mazur-Tate-Teitelbaum measure. The following is a consequence of the $p$-adic Gross-Zagier formula (c.f., \cite{disegniGZ}), the Rubin-style formula proved in \cite[Proposition 11.3.15]{nek} and our main Theorem~\ref{thm:main}: \begin{thm}[Disegni, Gross-Zagier, Nekov\'a\v{r}] \label{thm:mainmultiplicative} Suppose that Nekov\'a\v{r}'s $p$-adic height pairing associated to the canonical splitting of the Hodge filtration on the semi-stable Dieudonn\'e module $D_{\textup{st}}(V)$ is non-vanishing. Then, $$\log_V\left(\textup{res}_p(\mathbf{BK}_1)\right)\cdot\log_V\left(\textup{res}_p(P)\right)^{-2}\in \overline{\mathbb{Q}}^\times\,.$$ \end{thm} \begin{rem} \label{rem:modularforms} Our arguments so far easily adapt to prove that analogous conclusions hold true for an \emph{essentially self-dual} elliptic modular form $f$ which verifies the hypotheses of Remark~\ref{rem:selfdualmodforms} and for which the natural map $$\textup{res}_p:H^1_f(\mathbb{Q},V_f)\rightarrow H^1_f(\mathbb{Q}_p,V_f)$$ is non-zero. Here, $V_f$ is the self-dual twist of Deligne's representation, as in Remark~\ref{rem:selfdualmodforms}. \end{rem} \section{Part II. $\Lambda$-adic Kolyvagin systems, Beilinson-Flach elements, Coleman-Rubin-Stark elements and Heegner points} \label{subsec:KS} In this section, we recast the proof of Theorem~\ref{thm:main} in terms of the theory of $\Lambda$-adic Kolyvagin systems (and in great generality), with the hope that it will provide us with further insight into the analogues of Perrin-Riou's predictions in other situations. \subsection{The set up} Let $\mathscr{M}$ denote a self-dual motive over either a totally real or a CM number field $K$, with coefficients in a number field $L$. Let $L(\mathscr{M},s)$ denote its $L$-function and write $r_{\textup{an}}(\mathscr{M})$ for its order of vanishing at the central critical point (a quantity which is conditional on the expected functional equation and analytic continuation).
Let $V$ denote its $p$-adic realization (a finite-dimensional vector space over $E$, a completion of $L$ at a prime above $p$, endowed with a continuous $G_K$-action) and $T\subset V$ an $\frak{o}_E$-lattice. We write $\overline{T}$ for $T/\frak{m}_ET$, where $\frak{m}_E$ is the maximal ideal of $\frak{o}_E$, and $T^*:=\textup{Hom}(T,\bbmu_{p^\infty})$ for the Cartier dual of $T$. We will set $2r:=[K:\mathbb{Q}]\dim_E V$ (note, by the self-duality of $\mathscr{M}$, that the quantity on the right is indeed even) and let $S$ denote the finite set of places of $K$ which consists of all primes above $p$, all archimedean places and all places at which $V$ is ramified. We will assume throughout Section~\ref{subsec:KS} that $V$ is \emph{Panchishkin ordinary}, in the sense that for each prime $\frak{p}$ of $K$ above $p$ the following three conditions hold true: \begin{itemize} \item[(HP0)] There is a direct summand $F^+_{\frak{p}}T\subset T$ (as an $\frak{o}_E$-submodule) of rank $ r_{\frak{p}}$, which is stable under the $G_{K_\frak{p}}$-action and such that ${\sum_{\frak{p}\mid p} r_{\frak{p}}=r\,.}$ \item[(HP1)] The Hodge-Tate weights of the subspace $F_\frak{p}^+V:=F_\frak{p}^+T\otimes E$ are strictly negative. \item[(HP2)] The Hodge-Tate weights of the quotient $V/F_\frak{p}^+V$ are non-negative. \end{itemize} \begin{rem} When $r=1$, we may in fact drop the Panchishkin condition on $T$ without any further work. \end{rem} \begin{example} \label{example:selfdualmotive} Suppose that $A/K$ is an abelian variety of dimension $g$ which has good ordinary reduction at all primes above $p$. Then $r=g[K:\mathbb{Q}]$, the $p$-adic realization of the motive $h^1(A)(1)$ is $V_p(A):=T_p(A)\otimes \mathbb{Q}_p$ (where $T_p(A)$ is the $p$-adic Tate module), and $V_p(A)$ is Panchishkin ordinary. \end{example} For positive integers $\alpha$ and $k$, we set $R_{k,\alpha}:=\Lambda/(p^k,(\gamma-1)^\alpha)$ and $T_{k,\alpha}:=\mathbb{T}\otimes R_{k,\alpha}$. \subsection{Module of $\Lambda$-adic Kolyvagin systems} \label{subsec:KSway} Throughout this section, we shall assume that the hypotheses \textup{\textbf{(H.0)}}\,-\,\textup{\textbf{(H.3)}} of \cite[Section 3.5]{mr02} are in effect, as well as the following two hypotheses:\\ \textbf{(H.nA)} $H^0(K_\frak{p},\overline{T})=0$ for every prime $\frak{p}$ of $K$ above $p$. \\ \textbf{(H.Tam)} We have $H^0\left(G_{K_\lambda}/I_\lambda,H^0(I_\lambda,V/T)\big{/}H^0(I_\lambda,V/T)_\textup{div}\right)=0$ for every prime $\lambda \in S$ (where $I_\lambda \subset G_{K_\lambda}$ stands for the inertia subgroup). \begin{define} We define the Greenberg Selmer structure $\mathcal{F}_\textup{Gr}$ by the local conditions $$H^1_{\mathcal{F}_{\textup{Gr}}}(K_\frak{p},\mathbb{T}):=\textup{im}\left(H^1(K_{\frak{p}},F_\frak{p}^+T\otimes\Lambda)\rightarrow H^1(K_{\frak{p}},\mathbb{T})\right)$$ for every prime $\frak{p}$ above $p$, and by setting $H^1_{\mathcal{F}_{\textup{Gr}}}(K_\lambda,\mathbb{T}):=H^1(K_\lambda,\mathbb{T})$ for $\lambda\nmid p$. \end{define} Under the hypothesis \textbf{(H.nA)} (and the self-duality assumption on $T$), it follows that $H^1(K_p,\mathbb{T})$ is a free $\Lambda$-module of rank $2r$ and that $$H^1_{\mathcal{F}_{\textup{Gr}}}(K_{p},\mathbb{T}):=\oplus_{\frak{p}\mid p}H^1_{\mathcal{F}_{\textup{Gr}}}(K_\frak{p},\mathbb{T})\subset H^1(K_{p},\mathbb{T})$$ is a direct summand of rank $r$.
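For the reader's convenience, let us sketch where the rank $2r$ comes from (this is standard and is only recorded here because it is used repeatedly below). For each prime $\frak{p}\mid p$, the local Euler characteristic formula for Iwasawa cohomology, combined with \textbf{(H.nA)} and the self-duality of $T$ (which together force the vanishing of the relevant $H^0$ and $H^2$ terms along the cyclotomic tower), shows that $H^1(K_\frak{p},\mathbb{T})$ is a free $\Lambda$-module of rank $[K_\frak{p}:\mathbb{Q}_p]\cdot\dim_E V$. Summing over the primes of $K$ above $p$, we find
$$\textup{rank}_\Lambda\, H^1(K_p,\mathbb{T})=\sum_{\frak{p}\mid p}[K_\frak{p}:\mathbb{Q}_p]\cdot \dim_E V=[K:\mathbb{Q}]\dim_E V=2r\,.$$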
We fix a direct summand $H^1_+(K_p,\mathbb{T})\supset H^1_{\mathcal{F}_{\textup{Gr}}}(K_{p},\mathbb{T})$ of rank $r+1$. We will write $H^1_f(K_p,T)$ for the image of $H^1_{\mathcal{F}_{\textup{Gr}}}(K_p,\mathbb{T})$ under $\textup{pr}_0$, and we will denote by $H^1_f(K,T)$ the Selmer group determined by the propagation of the Selmer structure $\mathcal{F}_\textup{Gr}$ to $T$. Let $H^1_+(K_p,T)$ denote the image of $H^1_+(K_p,\mathbb{T})$ under $\textup{pr}_0$. The $\frak{o}_E$-module $H^1_+(K_p,T)$ is a direct summand of $H^1(K_p,T)$ of rank $r+1$ (thanks to our hypothesis {\textup{\textbf{(H.nA)}}}). Using the fact that $\mathcal{F}_\textup{Gr}$ is self-dual, this determines (via local Tate duality) a direct summand $H^1_-(K_p,T)\subset H^1_{\mathcal{F}_{\textup{Gr}}}(K_p,T)$ of rank $r-1$ and allows us to define a Selmer structure $\mathcal{F}_-$ on $T$ (which is given by the local conditions determined by $H^1_-(K_p,T)\subset H^1(K_p,T)$ at $p$, and by propagating $\mathcal{F}_\textup{Gr}$ at every other place). \begin{define} We define the Selmer structure $\mathcal{F}_+$ by the local conditions $$H^1_{\mathcal{F}_{+}}(K_p,\mathbb{T})=H^1_+(K_p,\mathbb{T})$$ and by setting $H^1_{\mathcal{F}_{+}}(K_\lambda,\mathbb{T}):=H^1(K_\lambda,\mathbb{T})$ for $\lambda\nmid p$. \end{define} Given a Selmer structure $\mathcal{F}$ on $\mathbb{T}$, we let $\chi(\mathcal{F},\mathbb{T}):=\chi(\mathcal{F},\overline{T})$ denote the core Selmer rank (in the sense of \cite[Definition 4.1.11]{mr02}) of the propagation of the Selmer structure $\mathcal{F}$ to $\overline{T}$. It follows from the discussion in \cite[\S4.1 and \S5.2]{mr02} that $\chi(\mathcal{F}_\Lambda,\mathbb{T})=r$. \begin{prop} \label{prop:genericcoreselmerrank} We have $\chi(\mathcal{F}_\textup{Gr},\mathbb{T})=0$, whereas $\chi(\mathcal{F}_+,\mathbb{T})=1$. \end{prop} \begin{proof} These follow from global Euler characteristic formulae, together with the fact that $\chi(\mathcal{F}_\Lambda,\mathbb{T})=r$. \end{proof} \begin{define} \label{def:kolprimestransversecond} Given positive integers $\alpha$ and $k$, we let $\mathcal{P}_{k,\alpha}$ (respectively, $\mathcal{P}_j$) denote the set of \emph{Kolyvagin primes} for $T_{k,\alpha}$, as introduced in \cite[Section 2.4]{kbb}. We set $\mathcal{P}:=\mathcal{P}_{1,1}$. \end{define} Given an integer $j\geq k+\alpha$, one may define the module $\textbf{\textup{KS}}(\mathcal{F}_+,T_{k,\alpha},\mathcal{P}_j)$ of Kolyvagin systems for the Selmer structure $\mathcal{F}_+$ on the artinian module $T_{k,\alpha}$ as in the paragraph following Definition 3.3 in \cite{kbb} (after replacing the Selmer structure $\mathcal{F}_c$ of loc. cit. by our more general Selmer structure $\mathcal{F}_+$). \begin{define} The $\Lambda$-module $$\textbf{\textup{KS}}(\mathcal{F}_+,\mathbb{T},\mathcal{P}):=\varprojlim_{k,\alpha}\left(\varinjlim_{j\geq k+\alpha}\textbf{\textup{KS}}(\mathcal{F}_+,T_{k,\alpha},\mathcal{P}_j)\right) $$ is called the module of $\Lambda$-adic Kolyvagin systems.
\end{define} \begin{thm} \label{thm:mainkbb} Under the hypotheses \textup{\textbf{(H.0)}}\,-\,\textup{\textbf{(H.3)}} of \cite[Section 3.5]{mr02} and assuming \textup{\textbf{(H.Tam)}} and \textup{\textbf{(H.nA)}}, the natural map $\textbf{\textup{KS}}(\mathcal{F}_+,\mathbb{T},\mathcal{P})\rightarrow \textbf{\textup{KS}}(\mathcal{F}_+,\overline{T},\mathcal{P})$ is surjective, the $\Lambda$-module $\textbf{\textup{KS}}(\mathcal{F}_+,\mathbb{T},\mathcal{P})$ is free of rank one, and it is generated by any $\Lambda$-adic Kolyvagin system whose projection to $\textbf{\textup{KS}}(\mathcal{F}_+,\overline{T},\mathcal{P})$ is non-zero. \end{thm} \begin{proof} This is the main theorem of \cite{kbb}; one only needs to replace $\mathcal{F}_c$ in loc. cit. with $\mathcal{F}_+$ (but that entails no complications). \end{proof} \begin{rem} \label{rem:whathappenswhenpnonanomalous} When $T=T_p(E)$ is the $p$-adic Tate module of an elliptic curve $E/\mathbb{Q}$, the hypothesis \textup{\textbf{(H.Tam)}} is equivalent to the requirement that $p$ does not divide $\textup{ord}_\ell(j(E))$ whenever $\ell$ is a prime of split multiplicative reduction. Still when $T=T_p(E)$, the hypothesis that $H^2(\mathbb{Q}_p,\overline{T})=0$ is equivalent to the requirement that $E(\mathbb{Q}_p)[p]=0$. This is precisely the condition that the prime $p$ be \emph{non-anomalous} for $E$ in the sense of \cite{mazur-anom}. It is easy to see that a prime $p>5$ is non-anomalous for $E$ as soon as one of the following holds: \begin{itemize} \item $E$ has supersingular reduction at $p$, \item $E$ has non-split-multiplicative reduction at $p$, \item $E$ has split-multiplicative reduction at $p$ and $p>7$, \item $E$ has good ordinary reduction at $p$ and $a_p(E)\neq 1$, \item $E$ possesses a non-trivial $\mathbb{Q}$-rational torsion point. \end{itemize} In our supplementary note \cite[Appendix A]{kbbArxiv1}, we are able to lift the non-anomaly hypothesis on $T_p(E)$ and prove the following in this setting: \begin{thm} \label{thm:mainKS} Suppose that $E$ is an elliptic curve such that the residual representation $$\overline{\rho}_E:\,G_{\mathbb{Q},S}\longrightarrow \textup{Aut}(E[p])$$ is surjective. Assume further that $p$ does not divide $\textup{ord}_\ell(j(E))$ whenever $\ell \mid N$ is a prime of split multiplicative reduction. Then, \\\\ {\upshape{\textbf{i)}}} the $\Lambda$-module $\textbf{\textup{KS}}(\mathbb{T}_p(E))$ of $\Lambda$-adic Kolyvagin systems contains a free $\Lambda$-module of rank one with finite index; \\ {\upshape{\textbf{ii)}}} there exists a $\Lambda$-adic Kolyvagin system $\pmb{\kappa} \in \textbf{\textup{KS}}(\mathbb{T}_p(E))$ with the property that $\textup{pr}_0(\pmb{\kappa}) \in \textbf{\textup{KS}}(T_p(E))$ is non-zero. \end{thm} \end{rem} Let $\mathrm{res}_{+/f}$ denote the compositum of the arrows $$H^1_{\mathcal{F}_+}(K,\mathbb{T})\longrightarrow H^1_{+}(K_p,\mathbb{T})\longrightarrow {H^1_{+}(K_p,\mathbb{T})}\big{/}{H^1_{\mathcal{F}_\textup{Gr}}(K_p,\mathbb{T})}\,.$$ \begin{define} Given a $\Lambda$-adic Kolyvagin system $\bbkappa \in \textbf{\textup{KS}}(\mathcal{F}_+,\mathbb{T},\mathcal{P})$ for which $\bbkappa_1\neq 0$, we define its \emph{defect} by setting $$\delta(\bbkappa):=\textup{char}\left(H^1(K,\mathbb{T})/\Lambda\cdot\bbkappa_1\right)\big{/}\textup{char}\left(H^1_{\mathcal{F}_+^*}(K,\mathbb{T}^*)^\vee\right)\,.$$ Observe that $\delta(\bbkappa)\subset \Lambda$ thanks to the general Kolyvagin system machinery.
Whenever $\mathrm{res}_{+/f}\left(\bbkappa_1\right)\neq 0$, we also define the \emph{Kolyvagin-constructed $p$-adic $L$-function} $$\frak{L}_p(\bbkappa):=\textup{char}\left(H^1_{+/f}(K_p,\mathbb{T})\big{/}\mathrm{res}_{+/f}(\bbkappa_1)\right)\,.$$ Note in this case (using Poitou-Tate global duality) that the defect of $\bbkappa$ is given as the quotient $\delta(\bbkappa)=\frak{L}_p(\bbkappa)/\textup{char}(H^1_{\mathcal{F}_{\textup{Gr}}^*}(K,\mathbb{T}^*)^\vee)\,.$ \end{define} \begin{rem} \label{rem:defectcouldbesmall} It follows from Theorem~\ref{thm:mainkbb} that any generator of the module $\textbf{\textup{KS}}(\mathcal{F}_+,\mathbb{T},\mathcal{P})$ of $\Lambda$-adic Kolyvagin systems is $\Lambda$-primitive (in the sense of \cite[Definition 5.3.9]{mr02}). It then follows from \cite[5.3.10(iii)]{mr02} that for any generator $\bbkappa$ of $\textbf{\textup{KS}}(\mathcal{F}_+,\mathbb{T},\mathcal{P})$ we have $\delta(\bbkappa)=\Lambda$. \end{rem} \begin{example} \label{example:ellipticcurve} Suppose $E/\mathbb{Q}$ is an elliptic curve and $T=T_p(E)$. Let $\bbkappa^\mathbf{BK} \in \textbf{\textup{KS}}(\mathcal{F}_\Lambda,\mathbb{T})$ denote the $\Lambda$-adic Kolyvagin system associated to $E$ that descends from the Beilinson-Kato elements via \cite[Theorem 5.3.3]{mr02}. It follows from Theorem~\ref{thm:mainconjistrue} (in all cases where it applies) that $\delta(\bbkappa^{\mathbf{BK}})$ is generated by a power of $p$; in particular, it is prime to $(\gamma-1)$. \end{example} Recall the direct summand $H^1_-(K_p,T)\subset H^1_{\mathcal{F}_{\textup{Gr}}}(K_p,T)$ of rank $r-1$ and define the map $\mathrm{res}_{f/-}$ as the compositum of the arrows $$H^1_f(K,T)\longrightarrow H^1_f(K_p,T)\longrightarrow H^1_f(K_p,T)\big{/}H^1_-(K_p,T)\,.$$ \begin{thm} \label{thm:PRwithKS} Assume that \textup{\textbf{(H.Tam)}}, \textup{\textbf{(H.nA)}} and the hypotheses \textup{\textbf{(H.0)}}\,-\,\textup{\textbf{(H.3)}} of Mazur and Rubin in \cite{mr02} hold true.\\ \textbf{\textup{i)}} Suppose that the map $\mathrm{res}_{f/-}: H^1_f(K,T)\rightarrow H^1_{f/-}(K_p,T)$ is injective. \begin{itemize} \item[(a)] For any non-trivial $\bbkappa \in \textbf{\textup{KS}}(\mathcal{F}_+,\mathbb{T},\mathcal{P})$, we have $\bbkappa_1\neq 0$. \item[(b)] For any $\bbkappa$ whose defect $\delta(\bbkappa)$ is prime to $(\gamma-1)$, we have $\kappa_1\neq 0$. \end{itemize} \textbf{\textup{ii)}} Conversely, if $\kappa_1\neq 0$ for some $\kappa \in\textbf{\textup{KS}}(\mathcal{F}_+,T,\mathcal{P})$, then the map $\mathrm{res}_{f/-}$ is injective. \end{thm} See the discussion in Remark~\ref{rem:PRconjforEbyKS} and Conjecture~\ref{conj:analyticrankvslocalizationmapinjective} pertaining to the injectivity of the map $\mathrm{res}_{f/-}$. \begin{proof} The requirement that the map $\mathrm{res}_{f/-}$ be injective is equivalent to asking that $H^1_{\mathcal{F}_-}(K,T)=0$. The proof of Theorem~\ref{thm:mainonlyif} adapts without difficulty (on replacing $\mathcal{F}_{\textup{str}}$ with $\mathcal{F}_-$ and $\mathcal{F}_c^*$ with $\mathcal{F}_+^*$) to show that $H^1_{\mathcal{F}_+^*}(K,T^*)$ is finite. It follows from \cite[Corollary 5.2.13(i)]{mr02} that for every non-trivial Kolyvagin system ${\kappa}\in \textbf{\textup{KS}}(\mathcal{F}_+,T,\mathcal{P})$, its initial term $\kappa_1$ is non-zero.
Theorem~\ref{thm:mainkbb} shows that ${\kappa}$ lifts to a $\Lambda$-adic Kolyvagin system $\bbkappa \in \textbf{\textup{KS}}(\mathcal{F}_+,\mathbb{T},\mathcal{P})$, and we have $\bbkappa_1\neq0$ (as $\textup{pr}_0(\bbkappa_1)=\kappa_1\neq 0$). Since the $\Lambda$-module $\textbf{\textup{KS}}(\mathcal{F}_+,\mathbb{T},\mathcal{P})$ is cyclic and $H^1(K,\mathbb{T})$ is torsion-free under our running assumptions, (a) follows. Let now $\bbchi$ be a $\Lambda$-primitive Kolyvagin system (such a $\bbchi$ exists and generates the module $\textbf{\textup{KS}}(\mathcal{F}_+,\mathbb{T},\mathcal{P})$ by Theorem~\ref{thm:mainkbb}). Let $g \in \Lambda$ be such that $\bbkappa=g\cdot\bbchi$. It follows from \cite[Theorem 5.3.10(iii)]{mr02} and the choice of $\bbkappa$ that $g$ is prime to $(\gamma-1)$. Furthermore, since $\textup{pr}_0(\bbchi_1)\neq 0$ by the discussion above, it follows that $\kappa_1=\textup{pr}_0(g)\cdot \textup{pr}_0(\bbchi_1)\neq 0$. This completes the proof of (b). To prove (ii), note that $H^1_{\mathcal{F}_+^*}(K,T^*)$ is finite by our assumption and \cite[Theorem 5.2.2]{mr02}. The proof of Theorem~\ref{thm:mainonlyif} shows (after suitable alterations, as we have pointed out in the first paragraph of this proof) that this implies the vanishing of $H^1_{\mathcal{F}_-}(K,T)$. This is precisely what we desired to prove. \end{proof} \begin{rem} \label{rem:PRconjforEbyKS} For $T=T_p(E)$ with $E$ an elliptic curve as in Example~\ref{example:ellipticcurve} above, the map $\mathrm{res}_{f/-}$ in the statement of Theorem~\ref{thm:PRwithKS} is simply the localization map at $p$. It is easy to see, using the work of Gross-Zagier, Kolyvagin and Skinner (under additional mild hypotheses), that this map is injective if and only if $r_{\textup{an}}(E)=1$. In particular, Perrin-Riou's conjecture in this set up follows from Theorem~\ref{thm:PRwithKS} and the discussion in Example~\ref{example:ellipticcurve}. \end{rem} The discussion in Remark~\ref{rem:PRconjforEbyKS} leads us to the following prediction: \begin{conj} \label{conj:analyticrankvslocalizationmapinjective} There exists a choice of the direct summand $H^1_+(K_p,\mathbb{T})$ such that the map $\mathrm{res}_{f/-}$ is injective if and only if $r_{\textup{an}}(\mathscr{M})\leq 1$. \end{conj} \subsection{Example: Perrin-Riou's conjecture for Beilinson-Flach elements} Let $E/\mathbb{Q}$ be a non-CM elliptic curve with conductor $N$ and let $f_E \in S_2(\Gamma_0(N))$ denote the associated newform. Assume that the residual representation $\overline{\rho}_E:G_K\rightarrow \textup{Aut}(E[p])$ is absolutely irreducible and suppose that $E$ has good ordinary reduction at the prime $p$. Fix an embedding $\iota_p:\overline{\mathbb{Q}}\rightarrow \mathbb{C}_p$. Suppose $K$ is an imaginary quadratic extension of $\mathbb{Q}$ that satisfies the weak Heegner hypothesis for $E$, so that the order of vanishing $r_{\textup{an}}(E/K):=\textup{ord}_{s=1}L(E/K,s)$ is odd. Suppose further that $p$ does not divide $\textup{ord}_\ell(j(E))$ whenever $\ell \mid N$ is a prime of split multiplicative reduction. We will assume throughout this subsection that the prime $p$ splits in $K$ and write $p=\wp\wp^c$ as a product of primes of $K$, where $\wp$ is the prime induced by $\iota_p$. Fix forever an auxiliary modulus $\frak{f}$ of $K$ which is prime to $p$ and such that the order of the ray class group of $K$ modulo $\frak{f}$ is prime to $p$.
Fix also a ring class character $\alpha$ modulo $\frak{f}p^\infty$ of finite order, for which we have $\alpha(\wp)\neq \alpha(\wp^c)$. We let $\frak{o}$ denote the finite flat extension of $\mathbb{Z}_p$ in which $\alpha$ takes its values and write $T=T_p(E)\otimes \alpha^{-1}$ for the free $\frak{o}$-module of rank $2$ on which $G_K$ acts diagonally. Set $\mathscr{D}(T):=\textup{Hom}(T,\frak{o})(1)\cong T_p(E)\otimes\alpha$. Let $F^+T\subset T$ denote the Greenberg subspace of $T$; this is a free $\frak{o}$-module of rank one such that the $G_{\mathbb{Q}_p}$-action on the quotient $T/F^+T$ is unramified. In this set up, one may define the Greenberg Selmer structure $\mathcal{F}_{\textup{Gr}}$ on $\mathbb{T}$ as above, as well as modify this Selmer structure appropriately so as to apply Theorem~\ref{thm:mainkbb}. \begin{define} \label{ref:FplusforBF} We define the Selmer structure $\mathcal{F}_+$ on $\mathbb{T}$ by relaxing the local conditions at the prime $\wp$. \end{define} It follows from the discussion in \cite[Section 3.2]{kbbleiBFpord} (with the choice $f_1=f_E$ in loc. cit.) that the Beilinson-Flach Euler system of Lei-Loeffler-Zerbes~\cite{LLZ2} gives rise to a Kolyvagin system $\bbkappa^{\textup{BF}}=\{\kappa^{\textup{BF}}_\eta\}_\eta \in \textbf{\textup{KS}}(\mathcal{F}_+,\mathbb{T})$ (where the indices run through square-free ideals $\eta$ of $\frak{o}_K$ that are products of appropriately chosen Kolyvagin primes), which we call the \emph{Beilinson-Flach Kolyvagin system}. Then $\mathbb{BF}_1:=\kappa_1^{\textup{BF}}\in H^1_{\mathcal{F}_+}(K,\mathbb{T})$ and we write $\textup{BF}_1 \in H^1_{\mathcal{F}_+}(K,T)$ for its image. The explicit reciprocity laws for the Beilinson-Flach elements in \cite{KLZ2} show that $\textup{res}_\wp^{s}(\textup{BF}_1)\not= 0$ if and only if $r_{\textup{an}}(E,\alpha):=\textup{ord}_{s=1}L(E,\alpha,s)$ equals $0$. Here, $$\mathrm{res}_\wp^s: H^1(K,T)\longrightarrow H^1(K_\wp,T)/H^1_{\mathcal{F}_\textup{Gr}}(K_\wp,T)$$ is the singular projection. In this particular situation, the map $\mathrm{res}_{f/-}$ is simply the map $$\mathrm{res}_{f/-}=\mathrm{res}_\wp:\,H^1_f(K,T)\longrightarrow H^1_{\mathcal{F}_{\textup{Gr}}}(K_\wp,T).$$ When $r_{\textup{an}}(E,\alpha)=0$, then $\textup{res}_\wp^{s}(\textup{BF}_1)\not= 0$ and the Kolyvagin system machinery shows that $H^1_f(K,T)=0$, so that $\mathrm{res}_\wp$ is injective for this trivial reason. \begin{prop} \label{prop:resisinjectiveBF} If $r_{\textup{an}}(E,\alpha)=1$, then the map $\mathrm{res}_\wp:\,H^1_f(K,T)\rightarrow H^1_{\mathcal{F}_{\textup{Gr}}}(K_\wp,T)$ is injective. \end{prop} \begin{proof} Let $V_p(E):=T_p(E)\otimes\mathbb{Q}_p$ and $V:=T\otimes \mathbb{Q}_p$. We let $M$ denote the finite abelian extension of $K$ that is cut out by $\alpha$. Note that we have $$H^1_f(K,V)\stackrel{\sim}{\longrightarrow}H^1_f(M,V)^{G_K}\stackrel{\sim}{\longrightarrow} \left(H^1_f(M,V_p(E))\otimes\alpha^{-1}\right)^{G_K}\stackrel{\sim}{\longrightarrow}H^1_f(M,V_p(E))^\alpha$$ where the first arrow follows from the inflation-restriction sequence and the rest are self-evident.
It follows from \cite{bentwistedGZ} and (a mild extension of) the main results of \cite{benGL2} that $H^1_f(M,V_p(E))^\alpha$ is a $1$-dimensional $\textup{Frac}(\frak{o})=L$-vector space, and moreover that $$E(M)^{\alpha}\stackrel{\sim}{\longrightarrow} H^1_f(M,V_p(E))^\alpha$$ where for an abelian group $X$, we write $X^{\alpha}:=\left(X\,\hat{\otimes}\, L(\alpha^{-1})\right)^{G_K}$ with $L(\alpha^{-1})$ being the $1$-dimensional $L$-vector space on which $G_K$ acts by $\alpha^{-1}$. We therefore have the following commutative diagram: $$\xymatrix{E(M)^\alpha\ar[d]^{\mathrm{res}_\wp}\ar[r]^(.4){\sim}&H^1_f(M,V_p(E))^\alpha\ar[r]^(.57){\sim}\ar[d]^{\mathrm{res}_\wp}& H^1_f(K,V)\ar[d]^{\mathrm{res}_\wp}\\ \left(\bigoplus_{\frak{p}\mid \wp} E(M_\frak{p})\right)^{\alpha}\ar[r]^(.5){\sim}&H^1_{\mathcal{F}_{\textup{Gr}}}(M_\wp,V_p(E))^\alpha\ar[r]^(.57){\sim}&H^1_{\mathcal{F}_{\textup{Gr}}}(K_\wp,V) }$$ The left-most vertical arrow is evidently injective (as its source is spanned by an $M$-rational point of infinite order) and hence the right-most vertical arrow is injective as well. \end{proof} The following statement is the Perrin-Riou conjecture for Beilinson-Flach elements. \begin{cor} \label{cor:PRBFelement} Assume that the residual representation $\overline{\rho}_E$ $($afforded by $E[p]$$)$ is absolutely irreducible. Suppose also that there exists a rational prime $q \in S$ which does not split in $K/\mathbb{Q}$ and such that $\overline{\rho}_E$ is ramified at $q$. If $r_{\textup{an}}(E,\alpha)=1$, then $\mathrm{res}_\wp(\textup{BF}_1)$ is non-zero. \end{cor} \begin{proof} The main theorem of \cite{wanpordinaryindefinite} (which applies in our set up) shows that the defect $\delta(\bbkappa^{\textup{BF}})$ of the Beilinson-Flach Kolyvagin system is prime to $(\gamma-1)$, and the proof follows from Theorem~\ref{thm:PRwithKS}(i) and Proposition~\ref{prop:resisinjectiveBF}. \end{proof} \subsection{CM Abelian Varieties and Perrin-Riou-Stark elements} \label{subsec:CMabvarPRStark} Let $K$ be a CM field and $K_+$ its maximal totally real subfield, which has degree $g$ over $\mathbb{Q}$. Fix a complex conjugation $c \in \textup{Gal}(\overline{\mathbb{Q}}/K_+)$ lifting the generator of $\textup{Gal}(K/K_+)$. Fix forever an odd prime $p$ unramified in $K/\mathbb{Q}$ and an embedding $\iota_p: \overline{\mathbb{Q}} \hookrightarrow \overline{\mathbb{Q}}_p$. \subsubsection{CM types and $p$-ordinary abelian varieties} \label{subsubsec:CMtypesabvar} Fix a \emph{$p$-ordinary CM-type} $\Sigma$; this means that the embeddings $\Sigma_p:=\{\iota_p\circ \sigma\}_{\sigma \in \Sigma}$ induce exactly half of the places of $K$ over $p$. Identify $\Sigma_p$ with the associated subset of primes $\{\frak{p}_1,\cdots,\frak{p}_s\}$ of $K$ above $p$ and set $\Sigma_p^c=\{\frak{p}_1^c,\cdots,\frak{p}_s^c\}$. Note that the disjoint union $\Sigma_p \sqcup \Sigma_p^c$ is the set of all primes of $K$ above $p$. It follows that there exists an abelian variety which has CM by $K$, has good ordinary reduction at $p$, and whose CM-type is $\Sigma$. Fix such an abelian variety $A$ and assume that the index of the order $\textup{End}_K(A)$ inside the maximal order $\mathcal{O}_K$ is prime to $p$. \emph{We will assume that $A$ is principally polarized and that $K$ contains the reflex field of the CM pair $(K,\Sigma)$.} Let $\mathcal{A}=\mathcal{A}_{/\frak{o}_K}$ denote the N\'eron model of $A$.
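For orientation, we record the simplest (classical) instance of this setup, purely for illustration: take $K$ to be an imaginary quadratic field, so that $K_+=\mathbb{Q}$ and $g=1$, and suppose that $p=\frak{p}\frak{p}^c$ splits in $K$. Either embedding of $K$ into $\overline{\mathbb{Q}}$ constitutes a $p$-ordinary CM-type $\Sigma$, with $\Sigma_p$ and $\Sigma_p^c$ each consisting of a single one of the two primes of $K$ above $p$, and the reflex field of $(K,\Sigma)$ is $K$ itself. An elliptic curve $A$ with CM by $\mathcal{O}_K$ and good reduction at $p$ then has good \emph{ordinary} reduction at $p$ precisely because $p$ splits in $K$.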
\subsubsection{Gr\"ossencharacters of CM abelian varieties} \label{subsubsec:grossenkaraktere} The $p$-adic Tate-module $T_p(A)$ of $A$ is a free $\mathbb{Z}_p$-module of rank $2g$ on which $G_K$ acts continuously. As explained in the Remark on page 502 of \cite{serretate}, $T_p(A)$ is free of rank one over $\mathcal{O}_K\otimes\mathbb{Z}_p=\mathrm{pr}od_{\frak{q}} \frak{o}_\frak{q}$, where the product is over the primes of $K$ that lie above $p$ and $\frak{o}_\frak{q}$ stands for the valuation ring of $K_\frak{q}$. We thus have a decomposition $T_p(A)=\bigoplus_{\frak{q}}T_\frak{q}(A)$, where each $T_\frak{q}(A)=\varprojlim A[\frak{q}^n]$ is a free $\frak{o}_\frak{q}$-module of rank one. The $G_K$-action on $T_p(A)$ gives rise to characters $\psi_\frak{q}: G_K \rightarrow \mathcal{O}_\frak{q}^\times.$ It follows from \cite[\S 2]{ribetcompositio} that each $\psi_\frak{q}$ is surjective for $p$ large enough; we fix until the end a prime $p$ satisfying this condition. We thence obtain a decomposition $$T_p(A)\otimes_{\mathbb{Z}_p}\overline{\mathbb{Q}}_p=\bigoplus_{\frak{q}\mid p}\bigoplus_{\sigma: K_\frak{q} \hookrightarrow \overline{\mathbb{Q}}_p} V_{\frak{q}}^\sigma$$ where $V_{\frak{q}}^\sigma$ is the one-dimensional $\overline{\mathbb{Q}}_p$-vector space on which $G_K$ acts via the character $\psi_\frak{q}^\sigma$, which is the compositum $$G_K\stackrel{\psi_\frak{q}}{\longrightarrow} \frak{o}_\frak{q}^\times \stackrel{\sigma}{\longrightarrow} \overline{\mathbb{Q}}_p^\times\,.$$ Fix embeddings $j_\infty:\overline{\mathbb{Q}}\hookrightarrow \mathbb{C}$ and $j_p:\overline{\mathbb{Q}}\hookrightarrow \mathbb{C}_p$ extending $\iota_p$. We write $\boldsymbol{\mathscr{S}}=\Sigma\cup\Sigma^c$ for the set of all embeddings of $K$ into $\overline{\mathbb{Q}}$. Theory of CM associates a Gr\"ossencharacter character $${\bbpsi}: \mathbb{A}_K/K^\times\longrightarrow K^{\times},$$ to $A$, which in turn induces Hecke characters $$\psi_\tau\,: \mathbb{A}_K/K^\times \stackrel{{\bbpsi}}{\longrightarrow} K^\times\stackrel{j_\infty\circ\tau}{\longrightarrow} \mathbb{C}^\times$$ as well as gives rise to its $p$-adic avatars $$\psi_\tau^{(p)}\,:\mathbb{A}_K/K^\times \stackrel{{\bbpsi}}{\longrightarrow}K^\times \stackrel{j_p\circ\tau}{\longrightarrow} \mathbb{C}_p^\times.$$ Furthermore, the two sets $\{\textup{rec}\circ \psi_\tau^{(p)}\}_{\tau\in \frak{J}}$ and $\{\psi_\frak{q}^\sigma\}_{\frak{q},\sigma}$ of $p$-adic Hecke characters may be identified, where $\textup{rec}:\mathbb{A}_K/K^\times\rightarrow G_K$ is the reciprocity map. Since we assumed that the field $K$ contains the reflex field of $(K,\Sigma)$, the Hasse-Weil $L$-function $L(A/K,s)$ of $A$ then factors into a product of Hecke $L$-series\footnote{Until the end of this article we shall write $L(\star,s)$ for the motivic $L$-functions and $L(s,\star)$ for the automorphic $L$-functions (so that the latter is centered at $s=1/2$).} $$L(A/K,s+1/2)=\mathrm{pr}od_{\tau\in\mathscr{S}} L(s,\psi_\tau^u) \in K\otimes \mathbb{C}\cong\mathbb{C}^{\mathscr{S}}$$ where $\psi^u$ is the unitarization of $\psi$. Fix $\varepsilon \in \Sigma$ and identify $K$ with $K^{\varepsilon}$. This choice (together with the chosen embeddings $j_\infty$ and $j_p$) in turn fixes a prime $\frak{p} \in \Sigma_p$ and $\sigma: K_{\frak{p}} \hookrightarrow \overline{\mathbb{Q}}_p$ in a way that $\textup{rec}\circ\psi_\varepsilon^{(p)}=\psi_\frak{p}^\sigma$. 
We set $L=\sigma(K_\frak{p})$ and $\frak{o}:=\sigma(\mathcal{O}_{{\frak{p}}})$, and define the $p$-adic Hecke character $\psi:=\psi_{\frak{p}}^\sigma: G_K \twoheadrightarrow \frak{o}^\times\,.$ We set $\psi^*=\chi_{\textup{cyc}}\psi^{-1}$ and $T:=\frak{o}(\psi^*)$; note that we have $T^*\cong A[\varpi^{\infty}]$ (where $\varpi \in \frak{m}_L$ is a uniformizer). In this situation, our non-anomaly hypothesis {\textbf{\textup{(H.nA)}}} translates into the requirement that \begin{equation}\label{eqn:hna} A(K_\frak{q})[\varpi]=0 \hbox{ for every prime } \frak{q} \hbox{ of } K \hbox{ above } p,\end{equation} which we assume throughout this subsection. \emph{We assume in addition that $A$ arises as the base-change of an abelian variety (which we still denote by $A$, by slight abuse) that has real multiplication by $K_+$.} This condition implies that $\bbpsi$ is self-dual, in the sense that for each $\tau\in\mathscr{S}$ there is a functional equation with sign $\epsilon(\bbpsi/K)=\pm1$ (which does not depend on the choice of $\tau$) relating the value $L(s,\psi_\tau)$ to the value $L(1-s,\psi_\tau)$. We let $\frak{P}$ denote the prime of $K_+$ below the prime $\frak{p}$ we have fixed above (so that $\frak{P}\frak{o}_K=\frak{p}\frak{p}^c$) and let $\varpi_{\frak{P}}:=\varpi\varpi^c \in K_{+,\frak{P}}$ be a uniformizer. We write $T_{\frak{P}}(A):=\varprojlim A[\varpi_{\frak{P}}^n]$ for the $\frak{P}$-adic Tate module of $A$ and set $\mathbb{T}_{\frak{P}}(A):=T_{\frak{P}}(A)\otimes\Lambda$. By the theory of complex multiplication and our assumption on $A$, we have $T_{\frak{P}}(A)=\textup{Ind}_{K/K_{+}}T$. For every prime $\frak{Q}$ of $K_+$ above $p$, we have a $p$-ordinary filtration $F^+_{\frak{Q}}T_\frak{P}(A)\subset T_\frak{P}(A)$ (which also gives rise to the filtration $F^+_{\frak{Q}}\mathbb{T}_\frak{P}(A)\subset \mathbb{T}_\frak{P}(A)$) when $T_\frak{P}(A)$ is restricted to $G_{K_{+,\frak{Q}}}$. We finally let $\phi$ denote the Hilbert modular CM form of weight two associated to $\psi_\epsilon$, $L(\phi,s)$ the associated Hecke $L$-function and $r_{\textup{an}}(\phi)$ the order of vanishing of $L(\phi,s)$ at the central critical point $s=1$. In what follows, we write $\Lambda_{\frak{o}}$ in place of $\Lambda\otimes\frak{o}$. \subsubsection{Perrin-Riou-Coleman maps and Selmer structures} We introduce the Selmer structures with which we shall apply the theory of Section~\ref{subsec:KSway}. \begin{define} We define the Greenberg submodule $$H^1_{\textup{Gr}}(K_p,\mathbb{T}):=\bigoplus_{\frak{q}\in \Sigma_p^c} H^1(K_{\frak{q}},\mathbb{T}) \subset H^1(K_p,\mathbb{T})\,.$$ The semi-local Shapiro lemma induces an isomorphism \begin{equation} \label{semilocalshapiro} \frak{sh}:\,H^1(K_{+,p},\mathbb{T}_\frak{P}(A)) \stackrel{\sim}{\longrightarrow} H^1(K_p,\mathbb{T}) \end{equation} under which $$H^1_{\textup{Gr}}(K_{+,p},\mathbb{T}_{\frak{P}}(A)):=\bigoplus_{\wp\mid p} \textup{im}\left(H^1(K_{+,\wp},F^+_{\wp}\mathbb{T}_\frak{P}(A))\rightarrow H^1(K_{+,\wp},\mathbb{T}_\frak{P}(A))\right)$$ maps isomorphically onto $H^1_{\textup{Gr}}(K_p,\mathbb{T})$. \end{define} For each prime $\wp$ of $K_+$ above $p$, we let $D_\wp(T_{\frak{P}}(A))$ denote the Dieudonn\'e module of $T_{\frak{P}}(A)$ considered as a $G_{K_{+,\wp}}$-representation. As explained in \cite{kbbleiPLMS}, we may (and we will) think of this as an $\frak{o}$-module of rank $2f_{\wp}$ (where $f_{\wp}=[K_{+,\wp}:\mathbb{Q}_p]$).
We also let $$\exp^*:\, H^1_{/f}(K_{+,\wp},T_{\frak{P}}(A))\longrightarrow \textup{Fil}^0\,D_\wp(T_{\frak{P}}(A))$$ denote the Bloch-Kato dual exponential map, $$\log_{A,\wp}: H^1(K_{+,\wp},T_{\frak{P}}(A))\longrightarrow D_\wp(T_{\frak{P}}(A))/\textup{Fil}^0\,D_\wp(T_{\frak{P}}(A))$$ the inverse of the Bloch-Kato exponential map, and \begin{equation}\label{eqn:naturalhodgepairing}\left[\,,\,\right]\,: D_\wp(T_{\frak{P}}(A)) \times D_\wp(T_{\frak{P}}(A)) \longrightarrow \frak{o} \end{equation} the canonical perfect pairing induced from the Weil pairing (thanks to which we have an identification $D_\wp(T_{\frak{P}}(A))^*:=D_{\wp}(T_{\frak{P}}(A)^\mathcal{D}(1))\cong D_\wp(T_{\frak{P}}(A))$), where for an $\frak{o}$-module $M$ we write $M^{\mathcal{D}}$ for its $\frak{o}$-linear dual. We let $D_\wp(T_{\frak{P}}(A))_{[-1]}=D_\wp(F^+T_\frak{P}(A))$ denote the subspace of $D_\wp(T_{\frak{P}}(A))$ on which the crystalline Frobenius $\varphi$ acts with slope $-1$, and let $$\omega^*_{\wp}=\{\omega_{i,\wp}^*\}_{i=1}^{f_\wp} \subset \textup{Fil}^0D_\wp(T_{\frak{P}}(A))^*\otimes_{\mathbb{Z}_p}\mathbb{Q}_p$$ denote a fixed basis (of the latter space) corresponding, under the comparison isomorphism, to the N\'eron differential\footnote{This is a top degree invariant form on a N\'eron model of $A$ and, as such, it only determines the top-degree exterior product $\wedge \omega^*:=\omega_{1,\wp}^*\wedge \cdots\wedge \omega_{f_\wp,\wp}^*$ uniquely.} on $A$. Since we have $$D_\wp(T_{\frak{P}}(A))_{[-1]}\, \cap\, \textup{Fil}^0D_\wp(T_{\frak{P}}(A))=0,$$ we may choose a basis $\omega_\wp=\{\omega_{j,\wp}\}_{j=1}^{f_\wp}$ of $D_\wp(T_{\frak{P}}(A))_{[-1]}$ such that $[\omega_{i,\wp}^*,\omega_{j,\wp}]=\delta_{i,j}$, where $\delta_{i,j}$ is the Kronecker delta. Let $\omega$ denote the collection $\{\omega_\wp\}_{\wp\,\mid\, p}$ of these distinguished bases. The pairing (\ref{eqn:naturalhodgepairing}) induces a commutative diagram $$ \xymatrix{ D_\wp(T_{\frak{P}}(A))_{[-1]}\ar[d]& \times &D_\wp(T_{\frak{P}}(A)/F^+T_\frak{P}(A)) \ar[r]^(.77){\left[\,,\,\right]}& \frak{o}\ar@{=}[d]\\ D_\wp(T_{\frak{P}}(A))& \times &D_\wp(T_{\frak{P}}(A)) \ar[r]^(.65){\left[\,,\,\right]}\ar[u]& \frak{o} }$$ which in turn allows us to associate to each $\omega_{j,\wp}\in D_\wp(T_{\frak{P}}(A))_{[-1]}$ a map (which we still denote by the same symbol) \,$\omega_{j,\wp}: D_\wp(T_{\frak{P}}(A)) \rightarrow \frak{o}$ that factors through the quotient $D_\wp(T_{\frak{P}}(A)/F^+T_{\frak{P}}(A))$. With a slight abuse, we also write $\omega$ for the map $$\omega=\oplus_{j=1}^{f_\wp} \omega_{j,\wp}:\, D_\wp(T_{\frak{P}}(A)) \longrightarrow \frak{o}^{\oplus f_\wp}$$ which likewise factors through $D_\wp(T_{\frak{P}}(A)/F^+T_{\frak{P}}(A))$. \begin{thm}[Perrin-Riou, \cite{PRmap}] \label{thm:PRmapintegralproperties} There exists a $\Lambda_\frak{o}$-equivariant map $$\frak{L}^{(\wp)}_\omega\,:\,H^1(K_{+,\wp},\mathbb{T}_{\frak{P}}(A)) \longrightarrow \Lambda_\frak{o}^{\oplus f_{\wp}}$$ which interpolates (in a sense we will not make precise here) the dual exponential maps along the cyclotomic Iwasawa tower. Furthermore, the kernel of the map $\frak{L}^{(\wp)}_\omega$ is precisely the Greenberg submodule $H^1_\textup{Gr}(K_{+,\wp},\mathbb{T}_{\frak{P}}(A))$, and the map is pseudo-surjective.
\end{thm} \begin{proof} To simplify notation, we shall write $V$ in place of $\mathbb{Q}_p\otimes T_\frak{P}(A)$\,;\, $F^+\,V$ in place of $\mathbb{Q}_p\otimes F^+_\frak{P}\,T_\frak{P}(A)$\,;\, $D(V)$ in place of $D_\wp(T_{\frak{P}}(A))\otimes\mathbb{Q}_p$ and, finally, $\Phi$ in place of $K_{+,\wp}$. Only in this proof, let us also write $\Lambda$ for $\Lambda_\frak{o}$. We recall once again that $\Phi/\mathbb{Q}_p$ is unramified. Let $\mathcal{H}(\Gamma)\subset L[[\Gamma]]$ denote Perrin-Riou's ring of distributions. Then Perrin-Riou's original construction yields a map $$\frak{L}_V:\, H^1(\Phi,V\otimes\Lambda)\longrightarrow \mathcal{H}(\Gamma)\otimes D(V)\,.$$ Let $\frak{L}_{V,\omega}: H^1(\Phi,V\otimes\Lambda)\rightarrow \mathcal{H}(\Gamma)^{\oplus f_\wp}$ denote the map $\omega\circ \frak{L}_V$, where $\omega: D(V)\rightarrow L^{\oplus f_\wp}$ is the map we have introduced above (which in fact factors through $D(V/F^+V)$). It follows from the discussion in \cite[Section 3]{jaycycloIwasawa} that the following diagram commutes up to units in $\mathcal{H}(\Gamma)$: $$\xymatrix{H^1(\Phi,V\otimes\Lambda)\ar[rr]^(.6){\frak{L}_{V,\omega}}\ar@{->>}[d]&&\mathcal{H}(\Gamma)^{\oplus f_\wp}\ar[d]^{\bf{id}}\\ H^1(\Phi,V/F^+V\otimes\Lambda)\ar[rr]^(.6){\frak{L}_{V/F^+V,\omega}}&&\mathcal{H}(\Gamma)^{\oplus f_\wp} }$$ Here $\frak{L}_{V/F^+V,\omega}:=\omega\,\circ\,\frak{L}_{V/F^+V}$ and $\frak{L}_{V/F^+V}$ is the Perrin-Riou map for $V/F^+V$. We remark that we may take the right vertical arrow to be the identity map since the Hodge-Tate weights of $V$ are $0$ and $1$, and since $$H^1(\Phi,V/F^+V\otimes\Lambda)_{\textup{tor}}=H^2(\Phi,V/F^+V\otimes\Lambda)=0$$ thanks to our assumption (\ref{eqn:hna}). A suitable $p$-power multiple of the compositum of the arrows $$H^1(\Phi,V\otimes\Lambda)\rightarrow H^1(\Phi,V/F^+V\otimes\Lambda) \stackrel{\frak{L}_{V/F^+V,\omega}}{\longrightarrow}\mathcal{H}(\Gamma)^{\oplus f_\wp}$$ is the map we denote by $\frak{L}_\omega^{(\wp)}$ above; the fact that this compositum takes values in $\mathcal{H}_0(\Gamma)^{\oplus f_\wp}$ (where $\mathcal{H}_0(\Gamma)=\Lambda[1/p]$) follows from the fact that $D(V/F^+V)$ has slope $0$. It also follows from \cite[Section 3]{jaycycloIwasawa} that the lower horizontal map in the diagram above is injective, and therefore the portion of Theorem~\ref{thm:PRmapintegralproperties} concerning the kernel of $\frak{L}_{\omega}^{(\wp)}$ is proved. Finally, Theorem 3.4 of loc. cit. shows that the map $$H^1(\Phi,V/F^+V\otimes\Lambda)\otimes_\Lambda\mathcal{H}(\Gamma) \stackrel{\frak{L}_{V/F^+V,\omega}}{\longrightarrow}\mathcal{H}(\Gamma)^{\oplus f_\wp}$$ is surjective, and the portion concerning the image of $\frak{L}_{\omega}^{(\wp)}$ follows as well. \end{proof} We shall write $$\frak{L}^{(\wp,i)}_\omega\,:\,H^1(K_{+,\wp},\mathbb{T}_{\frak{P}}(A)) \longrightarrow \Lambda_\frak{o}^{\oplus (f_{\wp}-1)}$$ for the map obtained from $\frak{L}^{(\wp)}_\omega$ by omitting the summand in the target which corresponds to $\omega_{i,\wp}$. Finally, we set $$\frak{L}_\omega:=\oplus_{\wp\mid p}\,\frak{L}_\omega^{(\wp)}:\,H^1(K_{+,p},\mathbb{T}_\frak{P}(A))\longrightarrow \Lambda_{\frak{o}}^g,$$ where $g=[K_{+}:\mathbb{Q}]$ is also the dimension of the abelian variety $A$. We note that $\ker\left(\frak{L}_\omega\right)=H^1_\textup{Gr}(K_{+,p},\mathbb{T}_\frak{P}(A))$. \begin{define} \label{def:plusselmergroupfortheabvar} We fix a prime $\frak{Q}$ of $K_+$ above $p$, as well as an integer $1\leq i\leq f_{\frak{Q}}$.
We define the map $$\frak{L}_{(\frak{Q},i)}:=\bigoplus_{\frak{Q}\neq\wp\,\mid\, p}\frak{L}^{(\wp)}_\omega \,\oplus\, \frak{L}^{(\frak{Q},i)}_\omega\,:\,H^1(K_{+,p},\mathbb{T}_\frak{P}(A))\longrightarrow \Lambda_{\frak{o}}^{g-1}\,.$$ We set $H^1_{+}(K_{+,p},\mathbb{T}_\frak{P}(A)):=\ker(\frak{L}_{(\frak{Q},i)})$ and define $H^1_{+}(K_{p},\mathbb{T})$ as the isomorphic image of $H^1_{+}(K_{+,p},\mathbb{T}_\frak{P}(A))$ under Shapiro's morphism $\frak{sh}$. We define the Selmer structure $\mathcal{F}_+$ on $\mathbb{T}$ by requiring that $$H^1_{\mathcal{F}_+}(K_p,\mathbb{T})=H^1_{+}(K_{p},\mathbb{T}) \hbox{ and } H^1_{\mathcal{F}_+}(K_\lambda,\mathbb{T})=H^1(K_{\lambda},\mathbb{T}) \hbox{ for } \lambda \nmid p\,. $$ \end{define} In this situation, Theorem~\ref{thm:mainkbb} applies since we assumed (\ref{eqn:hna}). Furthermore, \emph{if we assume the truth of the Perrin-Riou-Stark Conjecture} proposed in \cite{kbbleiPLMS} (which is a slightly stronger form of the Rubin-Stark conjectures), \emph{as well as Leopoldt's conjecture for all subextensions of $K(A[\varpi])/K$}, we may obtain a natural generator of this module using the main results of \cite{kbbCMabvar,kbbleiPLMS}; this is what we explain in the next paragraph. \subsubsection{The Coleman-Rubin-Stark element} \label{subsubsec:CRSelement} Let $K_\mathrm{cyc}=K\mathbb{Q}_\infty$ denote the cyclotomic $\mathbb{Z}_p$-extension of $K$. Since we assumed that $p$ is unramified in $K/\mathbb{Q}$, we may canonically identify $\Gamma$ with $\textup{Gal}(K_\mathrm{cyc}/K)$. Let $K_\infty$ denote the maximal $\mathbb{Z}_p$-power extension of $K$ and $\Gamma_K=\textup{Gal}(K_\infty/K)$ its Galois group over $K$. Let $\omega_\psi$ denote the character of $G_K$ giving its action on $A[\varpi]$; it is the unique character of $G_K$ with the properties that $\langle\psi\rangle:=\psi\omega_\psi^{-1}$ factors through $\Gamma_K$ and that $\omega_\psi$ is trivial on $\Gamma_K$. Let $T_{\omega_\psi}=\frak{o}(1)\otimes\omega_\psi^{-1}$ and define $H^1_\infty(K,T_{\omega_\psi}):=\varprojlim H^1(F,T_{\omega_\psi})$, where the projective limit is over all finite subextensions $F$ of $K_\infty/K$. Assume the truth of the \emph{Perrin-Riou-Stark Conjecture 4.14} of \cite{kbbleiPLMS} (with the Dirichlet character $\omega_\psi$) and let\footnote{We invite the interested reader to consult \cite[Remark 4.13]{kbbleiPLMS} and the discussion that precedes this remark for the desired integrality properties of the Rubin-Stark elements.} $\frak{S}_\infty^{\omega_\psi} \in \wedge^{g} H^1_\infty(K,T_{\omega_\psi})$ denote the element whose existence is predicted by the said conjecture. As in Definition 4.16 of loc. cit., we may twist this element to obtain the twisted Perrin-Riou-Stark element $\frak{S}_\infty \in \wedge^g H^1_\infty(K,T)$ as well as its projection $$\frak{S}_\mathrm{cyc}=\frak{S}_\mathrm{cyc}^{(1)}\wedge\cdots\wedge \frak{S}_\mathrm{cyc}^{(g)} \in \wedge^g H^1(K,\mathbb{T})\cong \wedge^g H^1(K_+,\mathbb{T}_\frak{P}(A))$$ (where we denote the image of $\frak{S}_\mathrm{cyc}$ under the isomorphism above still by the same symbol).
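Let us record, for the reader's convenience, the elementary identity that makes the twisting step above meaningful (it is immediate from the definitions recalled in this paragraph): since $T=\frak{o}(\psi^*)=\frak{o}(\chi_{\textup{cyc}}\psi^{-1})$ and $T_{\omega_\psi}=\frak{o}(1)\otimes\omega_\psi^{-1}$, we have an isomorphism of $G_K$-representations
$$T_{\omega_\psi}\otimes\langle\psi\rangle^{-1}=\frak{o}\left(\chi_{\textup{cyc}}\,\omega_\psi^{-1}\cdot\psi^{-1}\omega_\psi\right)=\frak{o}(\chi_{\textup{cyc}}\,\psi^{-1})=T\,,$$
and the twisting character $\langle\psi\rangle^{-1}$ factors through $\Gamma_K$; this is what allows one to pass from classes over $K_\infty$ with coefficients in $T_{\omega_\psi}$ to classes with coefficients in $T$.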
We finally set $$\mathbb{RS}^\psi_{/f}:=\textup{res}_{/f}^{\otimes g} (\frak{S}_\mathrm{cyc}) \in \wedge^g H^1_{/f}(K_p,\mathbb{T})\cong \wedge^g H^1_{/f}(K_{+,p},\mathbb{T}_\frak{P}(A))\,.$$ \begin{thm} \label{thm:mainRS} There exists a generator $\bbkappa^{\textup{CRS}}$ (the Coleman-adapted Rubin-Stark Kolyvagin system) of the free $\Lambda$-module $\textbf{\textup{KS}}(\mathcal{F}_+,\mathbb{T},\mathcal{P})$ whose initial term $\kappa_1^{\textup{CRS}} \in H^1_{\mathcal{F}_+}(K_p,\mathbb{T})$ has the following property: $$\frak{L}_{(\frak{Q},i)}\left(\mathrm{res}_{/f}\left({\kappa_1^{\textup{CRS}} }\right)\right)=\frak{L}_\omega^{\otimes g}\left(\mathbb{RS}_{/f}^\psi\right)\,.$$ \end{thm} \begin{proof} This is precisely the content of Theorem A.11 and Proposition A.8 of \cite{kbbleiPLMS} (except that, for our purposes, it suffices to restrict our attention to the case when $\Lambda$ in loc. cit. also has Krull dimension $2$). In order to apply these results, we simply choose $\frak{L}_\omega$ in place of $\Psi$ and the summand in the target of $\frak{L}_\omega^{(\frak{Q})}$ that corresponds to $\omega_{i,\frak{Q}}$ in place of $L(1)$ in loc. cit. Note in this case that the Selmer structure $\mathcal{F}_+$ above corresponds to the Selmer structure denoted by $\mathcal{F}_L$ in loc. cit. \end{proof} \begin{define} \label{def:ColemanRS} We let $\frak{C}_\infty \in H^1_{\mathcal{F}_+}(K_+,\mathbb{T}_\frak{P}(A))$ denote the element that corresponds to $\kappa_1^{\textup{CRS}}$ under the isomorphism $\frak{sh}$ and we let $\frak{C} \in H^1_{\mathcal{F}_+}(K_+,T_\frak{P}(A))$ (which we call the \emph{Coleman-Rubin-Stark class}) denote its obvious projection. \end{define} The following is an immediate consequence of Theorem~\ref{thm:PRwithKS} and Theorem~\ref{thm:mainRS}: \begin{cor} \label{cor:PRconjforRSelements} The map $\mathrm{res}_{f/-}: H^1_f(K_+,T_{\frak{P}}(A))\rightarrow H^1_{f/-}(K_p,T_\frak{P}(A))$ is injective if and only if the Coleman-Rubin-Stark class $\frak{C}$ is non-trivial. \end{cor} \subsubsection{Katz' $p$-adic $L$-function and an explicit reciprocity conjecture for Rubin-Stark elements} We recall here the definition of the $p$-adic $L$-function of Katz and Hida-Tilouine and propose an extension of the Coates-Wiles reciprocity law for elliptic units to a reciprocity law concerning the Perrin-Riou-Stark elements. \begin{define} A pair $(m_0,d)$ (where $m_0 \in \mathbb{Z}$ and $d=\sum_{\sigma\in \Sigma} d_\sigma\sigma \in \mathbb{Z}^\Sigma$) is called \emph{$\Sigma$-critical} if either $m_0>0$ and $d_\sigma\geq 0$ for every $\sigma\in\Sigma$, or else $m_0\leq 1$ and $d_\sigma\geq 1-m_0$ for every $\sigma \in \Sigma$. Likewise, a Gr\"ossencharacter $\bblambda$ is called \emph{$\Sigma$-critical} if its infinity type equals $\sum_{\sigma \in \Sigma}\left( (m_0+d_\sigma)\sigma-d_{\sigma}\sigma^c\right) \in \mathbb{Z}^{\mathscr{S}}$ for some $\Sigma$-critical pair $(m_0,d)$. \end{define} Let $\mathscr{O}$ denote the $p$-adic completion of the maximal unramified extension of $\mathbb{Q}_p$. Recall that $\Gamma_K$ denotes the Galois group of the maximal $\mathbb{Z}_p$-power extension of $K$; by class field theory, we have $\Gamma_K\cong \mathbb{Z}_p^{1+g+\delta}$, where $\delta$ is Leopoldt's defect (which equals zero whenever we assume Leopoldt's conjecture). The following statement was proved by Hida and Tilouine in \cite{hidatilouine}, extending a previous construction due to Katz.
In our discussion below, we will mostly stick to the exposition in \cite{hsiehanticyclopadicL} and we shall rely on Hsieh's notation (up to perhaps minor alterations). Let $\frak{f} \subset \frak{o}_K$ denote the conductor of $\bbpsi$. Write $\frak{f}=\frak{f}^+\frak{f}^-$, where $\frak{f}^+$ (resp., $\frak{f}^-$) is a product of primes that split (resp., that remain inert or ramify) over $K_+$. The following is Proposition~4.9 in \cite{hsiehanticyclopadicL}. \begin{thm}[Katz, Hida-Tilouine] There exists an element $\mathscr{L}^{\Sigma}_{p}\in \mathscr{O}[[\Gamma_K]]$ $($Katz' $p$-adic $L$-function$)$ that is uniquely determined by the following interpolation property on the $p$-adic avatars $\bblambda^{(p)}=(\lambda_\tau^{(p)})$ of the $\Sigma$-critical characters $\bblambda=(\lambda_\tau)$ of infinity type $m_0\Sigma+(1-c)d \in \mathbb{Z}^\mathscr{S}$ and of conductor dividing $\frak{f}p^\infty$: $$\frac{\mathscr{L}^{\Sigma}_{p}(\bblambda^{(p)})}{\Omega_p^{m_0\Sigma+2d}}=t\cdot\frac{\pi^d\,\Gamma_{\Sigma}(m_0\Sigma+d)}{\sqrt{|D_{K_+}|_{\mathbb{R}}}\,\textup{im}(\delta)^d}\cdot\mathscr{E}_p(\bblambda)\mathscr{E}_{\frak{f}^+}(\bblambda)\cdot\prod_{\frak{q}\mid \frak{f}p}(1-\bblambda(\frak{q}))\cdot \frac{L(m_0/2,\bblambda^u)}{\Omega_\infty^{m_0\Sigma+2d}} $$ where the equality takes place in $\iota_p(\overline{\mathbb{Q}})^\mathscr{S}$ and where \begin{itemize} \item $\Omega_p=(\Omega_p(\sigma))_{\sigma} \in \mathscr{O}^\Sigma$ and $\Omega_\infty=(\Omega_\infty(\sigma))_{\sigma} \in \mathbb{C}^\Sigma$ are the periods attached to a N\'eron differential $\omega$ on the abelian scheme $\mathcal{A}$, as in \cite[Chapter II]{katz78}; \item $t$ is a certain fixed power of $2$ (which can be made explicit); \item $\delta \in K_+$ is the element chosen as in \cite[0.9a-b]{hidatilouine} and \cite[Section 3.1]{hsiehanticyclopadicL}; \item $\mathscr{E}_p(\bblambda)$ and $\mathscr{E}_{\frak{f}^+}(\bblambda)$ are products of modified Euler factors, defined in \cite[(4.16)]{hsiehanticyclopadicL} and denoted by $Eul_p$ and $Eul_{\frak{f}^+}$ in loc. cit.; \item $\bblambda^u:=\bblambda/|\bblambda|_{\mathbb{A}}$ is the unitarization of $\bblambda$ $($where this terminology is borrowed from Hida and Tilouine \cite[pp. 231-232]{hidatilouine}$)$. \end{itemize} \end{thm} In particular, when the Gr\"ossencharacter $\bblambda$ has infinity type $\Sigma$ (so that $m_0=1$ and $d=0$) and conductor $\frak{f}$, the interpolation formula simplifies to $$\frac{\mathscr{L}^{\Sigma}_{p}(\bblambda^{(p)})}{\Omega_p^{\Sigma}}=t\cdot\mathscr{E}_p(\bblambda)\mathscr{E}_{\frak{f}^+}(\bblambda)\cdot\prod_{\wp |p}(1-\bblambda(\wp)) \cdot\frac{L(1/2,\bblambda^u)}{\sqrt{|D_{K_+}|_{\mathbb{R}}}\,\Omega_\infty^{\Sigma}}\,.$$ We call the pullback $\mathscr{L}^{\Sigma}_{p,\bbpsi}$ of $\mathscr{L}^{\Sigma}_{p}$ along the character $\bbpsi$ (where $\bbpsi$ is the Gr\"ossencharacter associated to $A$) the \emph{$\bbpsi$-branch of $\mathscr{L}^{\Sigma}_{p}$}. In more concrete terms, we have $$\mathscr{L}^{\Sigma}_{p,\bbpsi}(\chi):=\mathscr{L}^{\Sigma}_{p}(\chi\bbpsi^{(p)})$$ for every character $\chi$ of finite order. Recall the $p$-adic Hecke character $\psi$ we have fixed above, which is associated to our choice of $\epsilon\in \Sigma$ (and the fixed embeddings $j_\infty$ and $j_p$). We fix the modulus $\frak{f}$ to be the conductor of $\psi$.
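As a quick sanity check (not spelled out in loc. cit.), note that a Gr\"ossencharacter of infinity type $\Sigma$ corresponds to the pair $(m_0,d)=(1,0)$, which is indeed $\Sigma$-critical in the sense of the definition above, since $m_0=1>0$ and $d_\sigma=0\geq0$ for every $\sigma\in\Sigma$. Specializing the interpolation formula to this pair, the exponents $m_0\Sigma+2d$ reduce to $\Sigma$, the value $L(m_0/2,\bblambda^u)$ becomes $L(1/2,\bblambda^u)$, and the factors involving $d$ become trivial (reading $\Gamma_\Sigma(m_0\Sigma+d)$ as $\prod_{\sigma\in\Sigma}\Gamma(m_0+d_\sigma)$, which equals $1$ here; this reading of the notation of loc. cit. is an assumption on our part), so that one recovers the simplified formula displayed above.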
\begin{cor} \label{cor:padicLfunctionforA} For every character $\chi$ of $\Gamma_K$ of finite order, we have $$\frac{\mathscr{L}^{\Sigma}_{p}(\chi\psi)}{\Omega_p(\epsilon)}=\mathscr{E}(\chi\psi)\prod_{\wp|p}(1-\chi_\epsilon\psi_\epsilon(\wp)) \cdot\frac{L(1/2,\chi_\epsilon\psi_\epsilon^u)}{\Omega_\infty(\epsilon)}$$ where $\mathscr{E}(\chi\psi):=t\cdot\mathscr{E}_p(\chi\psi)\cdot\mathscr{E}_{\frak{f}^+}(\chi\psi)$ is a product of Euler factors, up to an explicit power of $2$. \end{cor} \begin{rem} \label{rem:comparedisegnitohsiehepsilontau} Note that the Euler-like factors $1-\chi_\epsilon\psi_\epsilon(\wp)$ are all equal to $1$ so long as $\chi$ is not the trivial character. \end{rem} \begin{define} We let $\mathscr{L}_\textup{cyc}^\Sigma \in \mathscr{O}[[\Gamma]]$ denote the measure obtained by restricting the $\bbpsi$-branch $\mathscr{L}^{\Sigma}_{p,\bbpsi}$ of the Katz $p$-adic $L$-function to the cyclotomic characters of finite order. \end{define} \begin{define} Let $n$ be a positive integer, let $z=\{z_n\}\in H^1(K_+,\mathbb{T}_\frak{P}(A))$ be an arbitrary element and let $\chi$ be a primitive character of $\Gamma_n$. Fix a prime $\wp$ of $K_+$ lying above $p$. Set $K_n^+:=K_+\mathbb{Q}_n$ and denote by $\wp_n$ the unique prime of $K_n^{+}$ above $\wp$. For every $1\leq j\leq f_{\wp}$\,, we define the \emph{Perrin-Riou symbol} $[z,\wp,j,\chi]$ by setting $$[z,\wp,j,\chi]:=\frac{1}{\tau(\chi)}\left[\sum_{\gamma\in\Gamma_n}\chi(\gamma)\exp^*_n(\mathrm{res}_{\wp_n}\left(z_n\right)^\gamma)\,,\, \varphi^{-n}(\omega_{j,\wp})\right]$$ where $\tau(\chi)$ is the Gauss sum and $$\exp^*_n:\, H^1(K^+_{n,\wp_n},T_{\frak{P}}(A))\longrightarrow \mathbb{Q}_{p,n}\otimes_{\mathbb{Z}_p}\textup{Fil}^0D_\wp(T_\frak{P}(A))$$ is the dual exponential map. On fixing an ordering of the primes of $K_+$ above $p$, we let $\mathbb{R}_{\chi}(z,\omega,\chi)$ denote the $1\times g$ matrix given by $$\mathbb{R}_{\chi}\left(z,\omega,\chi\right)=\left([z,\wp,j,\chi]\right)_{\substack{\wp\,\mid\, p\\1\leq j\leq f_\wp}}\,.$$ For an element $\mathbf{z}=z_1\wedge\cdots\wedge z_g \in \wedge^g H^1(K_+,\mathbb{T}_\frak{P}(A))$, we further define the $g\times g$ matrix $\mathbb{M}(\mathbf{z},\omega,\chi)$ by setting $$\mathbb{M}(\mathbf{z},\omega,\chi)=\left(\mathbb{R}_{\chi}\left(z_i,\omega,\chi\right)\right)_{i=1}^g\,. $$ We remark that this definition in fact depends only on the element $\wedge \omega^*$ (which corresponds to a N\'eron differential on $A$) and not on the choice of a basis representing this differential. \end{define} We propose the following explicit reciprocity conjecture for the twisted Perrin-Riou-Stark element $$\frak{S}_\mathrm{cyc}=\frak{S}_\mathrm{cyc}^{(1)}\wedge\cdots\wedge \frak{S}_\mathrm{cyc}^{(g)} \in \wedge^g H^1(K_+,\mathbb{T}_\frak{P}(A))$$ as a natural extension of the Coates-Wiles explicit reciprocity law for elliptic units. \begin{conj} \label{conj:explicitreciprocityforRS} There exists a choice of a N\'eron differential on $A$ such that $$\det\mathbb{M}(\frak{S}_\mathrm{cyc},\omega,\chi)= \mathscr{E}(\chi\psi)\cdot\frac{L(1/2,\chi_\epsilon\psi_\epsilon)}{\Omega_\infty(\epsilon)}$$ for every primitive character $\chi$ of $\Gamma_n$\,. \end{conj} Whenever we assume the truth of Conjecture~\ref{conj:explicitreciprocityforRS}, we shall implicitly assume also that we are working with a basis $\omega$ that comes attached to the appropriate choice of a N\'eron differential.
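To orient the reader, we unwind the conjecture in the simplest case $g=1$ (so that $K$ is imaginary quadratic and $A$ is an elliptic curve with CM by $K$; this is meant only as an illustration). The matrix $\mathbb{M}(\frak{S}_\mathrm{cyc},\omega,\chi)$ is then $1\times 1$ and its determinant is the single Perrin-Riou symbol $[\frak{S}_\mathrm{cyc},\wp,1,\chi]$, so that the conjecture predicts
$$\frac{1}{\tau(\chi)}\left[\sum_{\gamma\in\Gamma_n}\chi(\gamma)\exp^*_n\big(\mathrm{res}_{\wp_n}(\frak{S}_{\mathrm{cyc},n})^\gamma\big)\,,\,\varphi^{-n}(\omega_{1,\wp})\right]= \mathscr{E}(\chi\psi)\cdot\frac{L(1/2,\chi_\epsilon\psi_\epsilon)}{\Omega_\infty(\epsilon)}$$
(where $\frak{S}_{\mathrm{cyc},n}$ denotes the component of $\frak{S}_\mathrm{cyc}$ at the $n$-th layer, in the notation of the preceding definition). This has the shape of the Coates-Wiles explicit reciprocity law referred to above, with $\frak{S}_\mathrm{cyc}$ playing the role of the elliptic units.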
\begin{cor} \label{cor:explicitreciprocityconj} If the Explicit Reciprocity Conjecture~\ref{conj:explicitreciprocityforRS} holds true, then we have $${\displaystyle\frak{L}^{\otimes g}_\omega\left(\mathbb{RS}_{/f}^\psi\right)=\frac{\mathscr{L}_\textup{cyc}^\Sigma}{\Omega_p(\epsilon)}}\,.$$ \end{cor} \begin{proof} This follows at once by making use of the displayed equation (5) in \cite{kbbleiintegralMC}. \end{proof} \subsubsection{Perrin-Riou's conjecture for CM abelian varieties} \label{subsubsec:PRconj} We may finally turn our attention to Conjecture~\ref{conj:analyticrankvslocalizationmapinjective} in this set up. \emph{We will assume until the end of Section~\ref{sec:CMabvarcolemanRS} that there exists a degree one prime of $K_+$ above $p$} (mostly for brevity; we expect that one should be able to get around this assumption with more work), and we choose the prime $\frak{Q}$ fixed above to be such a prime of degree one. Note in this case that $i=f_{\frak{Q}}=1$ and also that $$\ker\left(\frak{L}^{(\frak{Q},1)}\right)=H^1(K_{+,\frak{Q}},\mathbb{T}_\frak{P}(A))\oplus\bigoplus_{\frak{Q}\neq \frak{p}\mid p}H^1_{\mathcal{F}_\textup{Gr}}(K_{+,\frak{p}},\mathbb{T}_\frak{P}(A)).$$ Also in this situation, the set $\omega_{\frak{Q}}$ is a singleton and, by slight abuse, we denote its only element also by $\omega_{\frak{Q}}$. \begin{define} \label{def:heightparing} We let $$\langle\,,\,\rangle: H^1_f(K_+,T_\frak{P}(A))\otimes H^1_f(K_+,T_\frak{P}(A))\longrightarrow L$$ denote the $p$-adic height pairing of Perrin-Riou~\cite[Section 2.3]{PRheights} associated to the canonical unit-root splitting of the Hodge filtration, the cyclotomic character and Iwasawa's branch of the $p$-adic logarithm. \end{define} The following is the version of the Perrin-Riou conjecture in this set up: \begin{thm} \label{thm:PRconjforRSforreal} If the Explicit Reciprocity Conjecture~\ref{conj:explicitreciprocityforRS} holds true and if either $r_{\textup{an}}(\phi)=0$, or else $r_{\textup{an}}(\phi)=1$ and the $p$-adic height pairing $\langle\,,\,\rangle$ is non-zero, then the Coleman-Rubin-Stark class $\frak{C}$ is non-trivial. \end{thm} \begin{rem} \label{rem:ifonlyKolyvaginLogachev} If one could extend the main theorem of Kolyvagin and Logachev \cite{kolyvaginlogachev} to cover CM abelian varieties, one could prove Theorem~\ref{thm:PRconjforRSforreal} without assuming either the explicit reciprocity conjecture or the non-triviality of the $p$-adic height pairing. \end{rem} \begin{proof}[Proof of Theorem~\ref{thm:PRconjforRSforreal}] When $r_{\textup{an}}(\phi)=0$, it follows from the interpolation formula for the $p$-adic $L$-function that $\mathds{1}(\mathscr{L}_\textup{cyc}^\Sigma)\neq 0$. The assertion in this case follows on combining Theorem~\ref{thm:mainRS} and Corollary~\ref{cor:explicitreciprocityconj}. Suppose now that $r_{\textup{an}}(\phi)=1$ and the $p$-adic height pairing $\langle\,,\,\rangle$ is non-zero. On choosing an auxiliary totally imaginary extension $F/K_+$ in a suitable manner (so that the non-vanishing results of Friedberg and Hoffstein~\cite{friedberghoffstein} apply) and relying on the Gross-Zagier formula of Yuan-Zhang-Zhang in~\cite{YYZ} and its $p$-adic variant due to Disegni~\cite{disegniGZ}, we conclude that $\mathscr{L}_\textup{cyc}^{\Sigma} \not\equiv 0 \mod J^2$.
Combined with the standard application of the Coleman-Rubin-Stark $\Lambda$-adic Kolyvagin system $\bbkappa^{\textup{CRS}}$ (and a control argument for Greenberg Selmer groups), it follows that the $\frak{o}$-module $H^1_f(K_{+},T_\frak{P}(A))$ is of rank one and that $\Sha(A/K_+)[\frak{P}^\infty]$ is finite. Hence the map $\mathrm{res}_{f/-}$ may be given explicitly via the commutative diagram $$\xymatrix{H^1_{\mathcal{F}_\textup{Gr}}(K_{+},T_\frak{P}(A))\ar[r]^{\mathrm{res}_{f/-}} &H^1_{\mathcal{F}_\textup{Gr}}(K_{+,\frak{Q}},T_\frak{P}(A))\\ A(K_{+})\widehat{\otimes}\,\frak{o}\ar[r]^{\mathrm{res}_{f/-}} \ar[u]_{\cong}& A(K_{+,\frak{Q}})\widehat{\otimes}\,\frak{o}\ar[u]_{\cong}}$$ and it is evidently injective. The proof follows by Corollary~\ref{cor:PRconjforRSelements}. \end{proof} In what follows, we let $D_\frak{P}(A)$ be a shorthand for the Dieudonn\'e module of the $G_{K_{+,\frak{Q}}}$-representation $V_{\frak{P}}(A)$. Observe that $K_{+,\frak{Q}}=\mathbb{Q}_p$ thanks to our choice of $\frak{Q}$. \begin{thm} \label{thm:ercimpliesonlyif} The Explicit Reciprocity Conjecture~\ref{conj:explicitreciprocityforRS} implies the ``only if'' portion of Conjecture~\ref{conj:analyticrankvslocalizationmapinjective} whenever the $p$-adic height pairing of Definition~\ref{def:heightparing} is non-zero, $p$ is prime to $w_{2}(K_+):=\#H^0(K_+,\mathbb{Q}/\mathbb{Z}(2))$ and $D_{\frak{P}}(A)^{\varphi=1}=0$. \end{thm} \begin{proof} By Corollary~\ref{cor:PRconjforRSelements}, we may assume that the Coleman-Rubin-Stark element $\frak{S}$ is non-trivial and, given that, we aim to prove that $r_{\textup{an}}(\phi)\leq 1$ in our set up. As above, let $\omega_{\mathcal{A}}$ denote a N\'eron differential on $\mathcal{A}$ and let $$\omega^*_\frak{Q} \in \textup{Fil}^0D_\frak{P}(A)^*\cong \textup{Fil}^0D_\frak{P}(A)$$ denote the element that corresponds to $\omega_{A}$ under the comparison isomorphism. Let $D_{[-1]} \subset D_{\frak{P}}(A)$ denote the subspace of $D_{\frak{P}}(A)$ on which $\varphi$ acts with slope $-1$. Then the space $D_{[-1]}$ is one-dimensional and $D_{[-1]}\cap \textup{Fil}^0 D_{\frak{P}}(A)=\{0\}$. Since $\textup{Fil}^0 D_{\frak{P}}(A)$ is the exact orthogonal complement of $\textup{Fil}^0 D_{\frak{P}}(A)^*$ under the pairing $[\,,\,]$ above, there exists a unique element $\omega_\frak{Q} \in D_{[-1]}$ with $[\omega_\frak{Q},\omega_\frak{Q}^*]=1$ (which in fact spans $D_{[-1]}$ as an $L$-vector space). We will denote the image of $\omega_\frak{Q}$ under the isomorphism $D_{[-1]}\stackrel{\sim}{\rightarrow}D_{\frak{P}}(A)/\textup{Fil}^0 D_{\frak{P}}(A)$ by $\overline{\omega}_\frak{Q}$. As explained in \cite[Section 2.1]{kbbleiintegralMC}, we have \begin{align}\notag \mathds{1}\left(\frak{L}^{\frak{Q},1}(z_\infty)\right)&=\left[\exp^*(z_0),(1-p^{-1}\varphi^{-1})(1-\varphi)^{-1}\omega_\frak{Q}\right]\\ \label{en:leadingtermformulaforPRlog}&=(1-1/\alpha)(1-\alpha/p)^{-1}\left[\exp^*(z_0),\omega_\frak{Q}\right] \end{align} for every $z_\infty=\{z_n\}_{n\geq 0}\in H^1_{/f}(K_{+,\frak{Q}},\mathbb{T}_{\frak{P}}(A))$, where $\alpha$ is the $p$-unit eigenvalue for $\varphi$ acting on $D_{\frak{P}}(A)$. Note that $\alpha\neq1$ by assumption. \\ \emph{Case 1.} $\textup{res}_{/f}(\frak{C})\neq 0$. Under our running hypotheses, it follows from Corollary~\ref{cor:explicitreciprocityconj} and (\ref{en:leadingtermformulaforPRlog}) that $\mathds{1}(\mathscr{L}^\Sigma_{\textup{cyc}})\neq 0$.
The interpolation property for this $p$-adic $L$-function now shows that $r_{\textup{an}}(\phi)=0$. \\ \emph{Case 2.} $\textup{res}_{/f}(\frak{C})=0$. This means that $\frak{S} \in H^1_f(K_+,T_{\frak{P}}(A))$ and in turn, also that \begin{align*} \frak{S}_{\infty} &\in \ker\left(H^1_{/f}(K_{+,\frak{Q}},\mathbb{T}_{\frak{P}}(A))\rightarrow H^1_{/f}(K_{+,\frak{Q}},T_{\frak{P}}(A))\right)\\ &=(\gamma-1)H^1_{/f}(K_{+,\frak{Q}},\mathbb{T}_{\frak{P}}(A))\,. \end{align*} It follows by Theorem~\ref{thm:mainRS} that $\mathds{1}(\mathscr{L}_\textup{cyc}^\Sigma)=0$ and the interpolation formula shows (as none of the Euler-like factors in its statement vanish) that $L(1/2,\psi_\varepsilon)=0\,.$ In this case, the discussion in \cite[\S11.3.14]{nek} applies and allows us to construct the derivative $\frak{dS}_\infty \in H^1_{/f}(K_{+,\frak{Q}},T_\frak{P}(A))$ of the class $\frak{S}_\infty$ (our element $\frak{dS}_\infty$ corresponds to the image of the element $(Dx_\mathrm{Iw})_{\frak{Q}}$ in loc. cit. under the cyclotomic character). It then follows from Nekov\'a\v{r}'s Rubin-style formula \cite[Proposition 11.3.15]{nek} (whose $p$-adic height pairing compares, via \S11.3 of loc. cit., to the one introduced by Perrin-Riou) that \begin{equation}\label{eqn:rubinsformual} \langle\frak{S},\frak{S}\rangle=-\left[\exp^*\left(\frak{dS}_\infty\right),\log_{A,\frak{Q}}(\frak{S})\right]_{D_\frak{P}(A)}\,. \end{equation} For $c \in H^1_f(K_{+,\frak{Q}},T_\frak{P}(A))$, let us define $\log_{\omega}(c) \in L$ so that \begin{equation}\label{eqn:logomegadefine} \log_{A,\frak{Q}}(c)=\log_{\omega}(c)\cdot \overline{\omega}_{\frak{Q}}\,. \end{equation} Combining (\ref{en:leadingtermformulaforPRlog}), (\ref{eqn:rubinsformual}), Corollary~\ref{cor:explicitreciprocityconj} and the defining property of the Coleman-Rubin-Stark elements in Theorem~\ref{thm:mainRS}, we conclude that \begin{equation}\label{eqn:truerubinstyleformula} -\log_{\omega}(\frak{S})\frac{\left(\mathscr{L}_\textup{cyc}^{\Sigma}\right)^\prime(\mathds{1})}{\Omega_p(\epsilon)}=(1-1/\alpha)(1-\alpha/p)^{-1}\langle\frak{S},\frak{S}\rangle\,. \end{equation} Here, ${\left(\mathscr{L}_\textup{cyc}^{\Sigma}\right)^\prime(\mathds{1})}:=\displaystyle{\lim_{s\rightarrow1}\chi_{\textup{cyc}}^{s-1}(\mathscr{L}_\textup{cyc}^\Sigma)/(s-1)}$ is the derivative of the cyclotomic restriction of the Katz $p$-adic $L$-function, taken along the cyclotomic character. By our assumption that the $p$-adic height pairing is non-vanishing, it follows that $\mathscr{L}_\textup{cyc}^{\Sigma} \not\in J^2$, where $J\subset \mathscr{O}[[\Gamma]]$ is the augmentation ideal. The Kolyvagin system method (applied with the Kolyvagin system $\bbkappa^{\textup{CRS}}$) implies in our situation that $H^1_f(K_+,T_{\frak{P}}(A))=H^1_{\mathcal{F}_{\textup{Gr}}}(K_+,T_\frak{P}(A))$ has rank one. By the parity result of Nekov\'a\v{r} \cite[Theorem 12.2.8 (3)]{nek}, it follows that the sign of the functional equation for $L(s,\phi)$ equals $-1$. Using the generic non-vanishing results of Friedberg and Hoffstein~\cite{friedberghoffstein} and the $p$-adic Gross-Zagier formula of Disegni~\cite{disegniGZ} for a suitably chosen totally imaginary extension $F/K_+$ along with the fact that $\mathscr{L}_\textup{cyc}^{\Sigma} \not\in J^2$, we conclude that there exists a non-trivial Heegner point $P \in A(F)$. The main results of \cite{YYZ} imply that $r_{\textup{an}}(\phi_F)=1$, which in turn shows that $r_{\textup{an}}(\phi)=1$ as well.
\end{proof} \subsection{Logarithms of Heegner points and Perrin-Riou-Stark elements} \label{sec:CMabvarcolemanRS} We continue with our discussion on CM abelian varieties, and our aim in this subsection is to compare the Bloch-Kato logarithm of the Coleman-Rubin-Stark element $\frak{C}$ to the square of the logarithm of a global point on our abelian variety, up to a non-zero algebraic factor. This is a generalized form of Perrin-Riou's predictions for elliptic curves (and Beilinson-Kato elements) that we revisited in Section~\ref{sec:heegner} in the context of elliptic curves defined over $\mathbb{Q}$. Throughout this section, we assume that $r_{\textup{an}}(\phi)=1$. We also keep working with the assumptions and conventions we have set at the start of Sections~\ref{subsubsec:CMtypesabvar}, \ref{subsubsec:CRSelement}, \ref{subsubsec:PRconj} and throughout Section~\ref{subsubsec:grossenkaraktere}. We fix a quadratic and purely imaginary extension $E$ of $K_+$ such that the relative discriminant $\Delta_{E/K_{+}}$ is totally odd and relatively prime to $D_{K_+}Np$ (where $D_{K_+}$ is the discriminant of $K_+/\mathbb{Q}$). We let $\eta=\eta_{E/K_+}$ denote the quadratic character associated to $E/K_+$. Let $N$ denote the level of the normalised new Hilbert eigenform $\phi$ of weight $2$ and suppose that $\eta(N)=(-1)^{g-1}$. Let $\phi_\eta$ denote the twisted weight 2 form, and suppose that $r_{\textup{an}}(\phi_\eta)=0$. We note that there are infinitely many choices for the field $E$ simultaneously verifying all these conditions (thanks to \cite{friedberghoffstein}); we pick one. The work of Yuan-Zhang-Zhang \cite{YYZ} applies in this situation and equips us with a \emph{Heegner point} $P_\phi \in A(K_+)$. Also in this case, the work of Manin, Dabrowski, Dimitrov and Januscewski associates to the $p$-ordinary stabilisation of $\phi$ a $p$-adic $L$-function $L_p(\phi,\cdot) \in \Lambda_\frak{o}$ which is characterized by the following interpolation property (cf. Theorem 4.4.1 of \cite{disegnithesispaper}): For every non-trivial character $\chi$ of $\Gamma$ of finite order\footnote{Since we assumed that $K_+/\mathbb{Q}$ is unramified, it follows that any non-trivial character is ramified at all primes of $K_+$ above $p$.} and conductor $\frak{f}_\chi$ \begin{equation}\label{eqn:interpolationdmitrov} L_p(\phi,\chi)=\chi(D_{K_+})\,{\tau(\bar{\chi})\,\mathbf{N}(\frak{f}_\chi)^{1/2}}\,{\alpha_{\frak{f_\chi}}^{-1}}\cdot\frac{L(\phi,\bar{\chi},1)}{\Omega_\phi^{+}} \end{equation} where \begin{itemize} \item $\tau(\bar{\chi})$ is a Gauss sum that is normalized by Disegni in loc. cit.\,; \item $\alpha_{\frak{f_\chi}}:=\prod_{\frak{q}\mid p} a_{\frak{q}}(\phi)^{v_{\frak{q}}(\frak{f}_\chi)}$, where $a_{\frak{q}}(\phi)$ is the $p$-unit root of the Hecke polynomial at $\frak{q}$\,; \item $\Omega_\phi^+$ is the real period defined by Shimura and Yoshida~\cite{shimurayoshida}. \end{itemize} There is likewise a $p$-adic $L$-function $L_p(\phi_\eta,\cdot)$ associated to a $p$-ordinary stabilisation of the twisted form $\phi_\eta$ (and a corresponding real period $\Omega_{\phi_\eta}^+$) as well as a $p$-adic $L$-function $L_p(\phi_E,\cdot) \in \Lambda_\frak{o}$ attached (by Panchishkin, Hida and Disegni) to the base change $\phi_E$.
The Artin formalism yields a factorization \begin{equation} \label{eqn:factorizationofpadicL} L_p(\phi_E,\chi\circ {N}_{E/K_{+}})=\chi(\Delta_{E/K_{+}})^2\frac{\Omega_\phi^+\Omega_{\phi_\eta}^+}{D_E^{-1/2}\Omega_\phi}L_p(\phi,\chi)L_p(\phi_\eta,\chi) \end{equation} for every character $\chi$ as above, where $\Omega_\phi=(8\pi)^2\langle \phi,\phi\rangle_{N}$ is the Shimura period. We remark that the ratio ${\Omega_\phi^+\Omega_{\phi_\eta}^+}/{\Omega_\phi}$ is always an algebraic number (that in fact belongs to the Hecke field). Set \begin{align*} C(\phi,E,\epsilon)&:=\mathscr{E}_{\frak{f}^+}(\psi_\epsilon)\prod_{\frak{q}|p}\left(1-{1}/{a_\frak{q}(\phi)}\right)^2\, D_F^{-1}D_E^{1/2}\frac{\Omega_{\phi}}{\Omega_\infty(\epsilon)}\cdot L(\phi_\eta,1)^{-1}\\ &=\mathscr{E}_{\frak{f}^+}(\psi_\epsilon)\prod_{\frak{q}|p}\left(1-{1}/{a_\frak{q}(\phi)}\right)^2\, D_F^{-1}D_E^{1/2}\frac{\Omega_\phi}{\Omega_\phi^+\Omega_{\phi_\eta}^+}\,\frac{\Omega_\phi^+}{\Omega_\infty(\epsilon)}\, \Omega_{\phi_\eta}^+L(\phi_\eta,1)^{-1} \in \overline{\mathbb{Q}}^\times\,, \end{align*} where $\mathscr{E}_p(\psi_\epsilon)$ is as above; it is a product of certain modified root numbers at the primes above $p$ (given as in \cite[Section 2.3]{burungaledisegni}). \begin{thm} \label{thm:maincolemanRSheegnerPR} ${\log_{\omega}(\frak{C})}=(1-1/\alpha)^{-1}(1-\alpha/p)\cdot C(\phi,E,\epsilon)\cdot{\log_{\omega}(P_\phi)^2}\,.$ \end{thm} \begin{proof} We start by observing that \begin{align*} \frac{\langle P_\phi,P_\phi\rangle}{\log_{\omega}(P_\phi)^2}&=\frac{\langle \frak{C},\frak{C}\rangle}{\log_{\omega}(\frak{C})^2}=-\frac{\left(\mathscr{L}_\textup{cyc}^{\Sigma}\right)^\prime(\mathds{1})}{\Omega_p(\epsilon)\cdot \log_{\omega}(\frak{C})}\cdot(1-1/\alpha)^{-1}(1-\alpha/p)\\ &=\mathscr{E}_{\frak{f}^+}(\psi_\epsilon)\,(1-1/\alpha)^{-1}(1-\alpha/p)\cdot\frac{\Omega_\phi^+}{\Omega_{\infty}(\epsilon)}\cdot\frac{L_p^\prime(\phi,\mathds{1})}{\log_{\omega}(\frak{C})} \end{align*} where the first equality on the first line holds because $H^1_f(K_+,T_\frak{P}(A))$ has rank one by the proof of Theorem~\ref{thm:PRconjforRSforreal} and $\langle\cdot,\cdot\rangle/\log_{\omega}(\cdot)^2$ is a non-trivial quadratic form on this space (thanks to our assumption that the $p$-adic height pairing is non-zero); the second equality on the first line is (\ref{eqn:truerubinstyleformula}) and finally, the equality on the second line follows from the Claim below. Indeed, the factor $\mathscr{E}_{\frak{f}^+}(\chi_\epsilon\psi_\epsilon)\,\chi(D_{K_+})$ that appears in the statement of the Claim varies analytically in $\chi$ and tends to $\mathscr{E}_{\frak{f}^+}(\psi_\epsilon)$ as $\chi$ tends to the trivial character $\mathds{1}$. We therefore conclude that \begin{equation}\label{GZZhang1} \frac{L_p^\prime(\phi,\mathds{1})}{\langle P_\phi,P_\phi\rangle}=\mathscr{E}_{\frak{f}^+}(\psi_\epsilon)^{-1}\,(1-1/\alpha)(1-\alpha/p)^{-1}\,\frac{\Omega_{\infty}(\epsilon)}{\Omega_\phi^+}\,\frac{\log_{\omega}(\frak{C})}{\log_{\omega}(P_\phi)^2}\,.
\end{equation} On the other hand, \begin{align} \notag\frac{L_p^\prime(\phi,\mathds{1})}{\langle P_\phi,P_\phi\rangle}&=\frac{L_p^\prime(\phi_E,\mathds{1})}{\langle P_\phi,P_\phi\rangle}\cdot\frac{D_E^{1/2}\Omega_{\phi}}{\Omega_{\phi}^+}\cdot (\Omega_{\phi_\eta}^+\cdot L_p(\phi_\eta,\mathds{1}))^{-1}\\ \notag&=\frac{L_p^\prime(\phi_E,\mathds{1})}{\langle P_\phi,P_\phi\rangle}\cdot\frac{D_E^{1/2}\Omega_{\phi}}{\Omega_{\phi}^+}\cdot\left({\prod_{\frak{q}|p}\left(1-\frac{1}{\eta(\frak{q})a_\frak{q}(\phi)}\right)^2}L(\phi_\eta,1)\right)^{-1}\\ \label{GZZhang2}&=\prod_{\frak{q}|p}\left(1-{1}/{a_\frak{q}(\phi)}\right)^2D_F^{-1}D_E^{1/2}\,\frac{\Omega_{\phi}}{\Omega_{\phi}^+}\, L(\phi_\eta,1)^{-1} \end{align} where the first equality follows from the factorization (\ref{eqn:factorizationofpadicL}) of the base-change $p$-adic $L$-function, the second from the interpolation formula for the twisted $p$-adic $L$-function and the last from the $p$-adic Gross-Zagier formula of Disegni~\cite[Theorem B]{disegniGZ}. Using (\ref{GZZhang1}) together with (\ref{GZZhang2}), we conclude that $$\frac{\log_{\omega}(\frak{C})}{\log_{\omega}(P_\phi)^2}=(1-1/\alpha)^{-1}(1-\alpha/p)\,\mathscr{E}_{\frak{f}^+}(\psi_\epsilon)\prod_{\frak{q}|p}\left(1-{1}/{a_\frak{q}(\phi)}\right)^2\, D_F^{-1}D_E^{1/2}\frac{\Omega_{\phi}}{\Omega_\infty(\epsilon)}\cdot L(\phi_\eta,1)^{-1}$$ and the proof follows. \end{proof} \begin{claim} For all sufficiently ramified characters $\chi$ of $\Gamma$ the following identity holds: $$\frac{\mathscr{L}_\textup{cyc}^{\Sigma}(\chi)}{\Omega_p(\epsilon)}=\mathscr{E}_{\frak{f}^+}(\chi_\epsilon\psi_\epsilon)\,\chi(D_{K_+})\,\Omega_{\phi}^+\,L_p(\phi,\chi^{-1})\,.$$ \end{claim} \begin{proof} This will follow once we match the interpolation factors in Corollary~\ref{cor:padicLfunctionforA} for $\mathscr{L}_\textup{cyc}^{\Sigma}(\chi)$ and (\ref{eqn:interpolationdmitrov}) for $L_p(\phi,\chi^{-1})$. More precisely, we would like to verify that $\mathscr{E}_{p}(\chi_\epsilon\psi_\epsilon)$ equals ${\tau({\chi})\,\mathbf{N}(\frak{f}_\chi)^{1/2}}\,{\alpha_{\frak{f_\chi}}^{-1}}$ for all sufficiently ramified characters $\chi$. The proof of this claim is essentially contained in \cite[Section 2.3]{burungaledisegni} (although we also rely on Hsieh's exposition in \cite[Section 4.7]{hsiehanticyclopadicL}); we provide an outline here for the sake of completeness. For all such $\chi$, the local $L$-factors are trivial and, by Tate's local functional equation, it follows that $$\mathscr{E}_{p}(\chi_\epsilon\psi_\epsilon)=\prod_{\wp\in \Sigma_p^c}\tau(\psi_{\epsilon,\wp}\chi_{\epsilon,\wp}\,, \Psi_\wp)\,,$$ where $\tau(\psi_{\epsilon,\wp}\chi_{\epsilon,\wp}, \Psi_\wp)$ are Gauss sums which are (un)normalized as in \cite{disegniGZ} and $\Psi_\wp$ are suitably determined local additive characters. Let $\alpha_\frak{P}$ be the local character of $K_{+,\frak{P}}^\times$ such that the local constituent of $\phi$ at $\frak{P}=\wp\wp^c$ is the irreducible principal series $\pi(\alpha_\frak{P},\beta_\frak{P})$ for some other local character $\beta_\frak{P}$.
Then $\tau(\psi_{\epsilon,\wp}\chi_{\epsilon,\wp}\,, \Psi_\wp)=\tau(\alpha_{\frak{P}}\chi_{\epsilon,\frak{P}}, \Psi_\frak{P})$, where we write $\chi_{\epsilon,\frak{P}}$ for the local character $\chi_{\epsilon,\wp}$ of $K_{+,\frak{P}}^\times=K_{\wp}^\times$ and we similarly define the additive character $\Psi_\frak{P}$. We therefore infer that $$\mathscr{E}_{p}(\chi_\epsilon\psi_\epsilon)=\prod_{\frak{P}\mid p}\tau(\alpha_{\frak{P}}\chi_{\epsilon,\frak{P}}, \Psi_\frak{P})\,.$$ The $\tau(\chi)$ of \cite{disegnithesispaper} is precisely the normalization of $\prod_{\frak{P}\mid p}\tau(\chi_{\frak{P}},\Psi_{\frak{P}})$, so that $$\tau(\chi)\,\mathbf{N}(\frak{f}_\chi)^{1/2}=\prod_{\frak{P}\mid p}\tau(\chi_{\epsilon,\frak{P}},\Psi_{\frak{P}})\,.$$ Finally, as explained in the proof of Lemma A.1.1 of \cite{disegniGZ} we have $$\tau(\chi_{\epsilon,\frak{P}},\Psi_{\frak{P}})= a_{\frak{q}}(\phi)^{v_{\frak{q}}(\frak{f}_\chi)}\,\tau(\alpha_{\frak{P}}\chi_{\epsilon,\frak{P}}, \Psi_\frak{P})$$ and hence, \begin{align*}{{\alpha_{\frak{f_\chi}}^{-1}}\,\tau({\chi})\,\mathbf{N}(\frak{f}_\chi)^{1/2}}&=\alpha_{\frak{f_\chi}}^{-1}\,\prod_{\frak{P}\mid p}\tau(\chi_{\epsilon,\frak{P}},\Psi_{\frak{P}})\\ &=\prod_{\frak{P}\mid p}\tau(\alpha_{\frak{P}}\chi_{\epsilon,\frak{P}}, \Psi_\frak{P})=\mathscr{E}_{p}(\chi_\epsilon\psi_\epsilon)\,. \end{align*} \end{proof} \begin{rem} \label{rem:chiportions} Assuming that the sign of the functional equation for the Hecke character $\psi_\epsilon^*:=\psi_{\epsilon}\circ c$ equals $+1$, Burungale and Disegni proved in \cite{burungaledisegni} that the $p$-adic height pairing $$\langle\,,\,\rangle_\chi: H^1_f(K,T_\wp(A)\otimes\chi)\otimes H^1_f(K,T_{\wp^c}(A)\otimes\chi^{-1})\longrightarrow M$$ is non-vanishing for almost all anticyclotomic Hecke characters $\chi$ of $K$ of finite order (where $M$ is a finite extension of $\mathbb{Q}_p$ in which $\chi$ takes its values). \end{rem} \end{document}
\begin{document} \title{Reducible Correlations in Dicke States} \author{Preeti Parashar and Swapan Rana} \address{Physics and Applied Mathematics Unit, Indian Statistical Institute,\\ 203 B T Road, Kolkata-700 108, India} \ead{[email protected], swapan\[email protected]} \begin{abstract} We apply a simple observation to show that the generalized Dicke states can be determined from their reduced subsystems. In this framework, it is sufficient to calculate the expression for only the diagonal elements of the reduced density matrices in terms of the state coefficients. We prove that the correlation in generalized Dicke states $|GD_N^{(\ell)}\rangle$ can be reduced to the $2\ell$-partite level. Application to the Quantum Marginal Problem is also discussed. \end{abstract} \pacs{03.67.-a, 03.65.Ud, 03.67.Mn} \submitto{J. Phys. A: Math. Theor. (\textbf{Fast Track Communication})} \textbf{\emph{Introduction}}: Entanglement is one of the most fascinating non-classical features of quantum theory and has been harnessed for various practical applications. Although bipartite entanglement is well understood, gaining insight into multipartite entanglement is still quite a challenge. There are various perspectives from which to study entanglement at the multi-party level, such as its characterization by means of LOCC, its ability to reject local realism and hidden variable theories, etc. A particularly interesting point of view is that of ``parts and whole". This approach basically deals with the question: how much knowledge about the quantum system can be acquired from that of its subsystems? To be precise, it asks whether an unknown state can be determined \emph{uniquely} if all its reduced density matrices (RDMs) are specified. In other words, this means asking whether higher order correlations are determined by lower order ones. It turns out that the most entangled states are the ones which cannot be determined from their RDMs. The determination of a state from its RDMs implies that the correlation present in the state is reducible to lower order ones. In an interesting work \cite{LPW} it was shown that except for the GHZ class ($a|000\rangle+b|111\rangle$), all $3$-qubit pure states are determined by their $2$-qubit RDMs. This was further generalized \cite{WL} to the $N$-qubit case to show that GHZ is the most entangled class of states. In these works, the knowledge of $(N-1)$-party RDMs was employed to characterize the $N$-party state. However, in the general scenario there may exist states which can be determined by fewer than $(N-1)$-party RDMs, i.e., a generic correlation can be reduced beyond the $(N-1)$-partite level. For example, we have recently shown \cite{PR} that the $N$-qubit $W$ class of states are determined by just their bipartite RDMs. Though some partial progress has been made in this direction \cite{others1}, there is no general technique to know which class of states can be determined by $K$-partite RDMs for $K<N-1$. Answering this question will lead to the classification of quantum states in terms of the various kinds of reducible correlations they can exhibit \cite{LPW, WL, PR}. A natural way to solve the problem is to determine all the RDMs from the given state and from an arbitrary state (which is supposed to have the same set of RDMs) and then compare the corresponding RDMs. But this is practically very difficult, as we need to solve several second-degree equations involving complex numbers.
In this communication we provide some interesting examples of states which can be determined by their $K$-partite RDMs for $K < N-1$. In particular, we shall consider the Dicke states, which are genuinely entangled and have been widely studied from both theoretical and experimental points of view \cite{Dref1}. The simplest Dicke state is the $W$ state, which was studied at the qubit level recently \cite{PR}. In the present work, as a first step, we shall extend this result to the arbitrary $d$-dimensional (i.e., $N$-qudit) $W$-state. Next, we shall focus on the $N$-qubit Dicke states and study their reducible correlations. This result is further generalized to $d$ dimensions. Another interesting application of our technique, to the Quantum Marginal Problem, will also be mentioned. Our proof is based on the simple fact that the RDMs of a \emph{pure} state can be constructed only from the expressions of the diagonal elements. This facilitates easy computation of the RDMs. In addition, if some of the diagonal entries are zero, then this constrains some diagonals of the arbitrary state to vanish, thereby reducing the number of unknowns. So first, let us rewrite some known observations and notational conventions to construct RDMs, in a slightly different way, for later convenience. \textbf{Observation }: \emph{To calculate the Reduced Density Matrix from a generic pure state, it is sufficient to calculate the expression for the diagonal elements in terms of the state coefficients. All off-diagonal elements will be obtained from these expressions.} A density matrix, being Hermitian, can be identified by its upper-half elements $a_{ij}~~\forall i\le j$. So we do not need to calculate the lower-half elements. Using the `lexicographically-ordered' basis \{$|00...0\rangle$, $|00...1\rangle$, ... ,$|00...\overline{d-1}\rangle$, ... , $|\overline{d-1}~\overline{d-1}...\overline{d-1}\rangle$\} of ${\mathbb{C}^d}^{\otimes N}$, an $N$-qudit pure state $|\psi\rangle_N^d$ (i.e., an $N$-partite pure quantum state where each of the parties has a $d$-level system) can be expressed as \begin{equation}\label{psi1} |\psi\rangle_N^d=\sum_{i=0}^{d^N-1}c_i|D_N(i)\rangle,~ \sum_{i=0}^{d^N-1}|c_i|^2=1,\end{equation} where $D_N(x)\equiv$ ``Representation of the decimal number $x$ as an $N$-bit string in the $d$-base number system". To have a grip on the coefficient corresponding to a basis vector, we are using the $d$-base number system to represent the basis vector, so that the suffix of its coefficient can be obtained by converting it into a decimal number and vice-versa. [Note that for $d\geq11$, we need at least two bits (digits) to represent $d-1$ in the decimal number system. But we wish to restrict ourselves to using one bit to represent one level. So we are using the $d$-base number system to represent the bases. That's why a `bar' is used over $d-1$ to indicate that it is of the $d$-base number system (and so it consists of one bit)]. Throughout the discussion, we will use the $d$-base number system to represent only the bases and decimal numbers elsewhere. However, when there is no ambiguity, we will write $|i\rangle$ instead of $|D_N(i)\rangle$ -- it should always be understood that the bases are in the $d$-base number system. Now let us calculate the $M$-partite marginal (RDM) $ \rho^{i_1i_2...i_M}_\psi=Tr(|\psi\rangle_N^d\langle\psi|)$, where the trace is taken over the remaining $N-M$ parties. Clearly, it will be a matrix of order $d^M\times d^M$.
So, retaining only the upper-half entries, we can write \begin{equation}\label{rhopsi1} \rho^{i_1i_2...i_M}_\psi~=~\sum_{i=0}^{d^M-1}\sum_{j=i}^{d^M-1}r_{ij}|D_M(i)\rangle\langle D_M(j)|.\end{equation} Since the RDM is obtained by tracing over $N-M$ parties (the space of these parties has dimension $d^{N-M}$), each $r_{ij}$ will be a sum of $d^{N-M}$ terms, each of which is of the form $c_k\bar{c}_l$. Thus $r_{ij}=\sum_{p=0}^{d^{N-M}-1}c_{k_p}\bar{c}_{l_p}$. To get the expression of $r_{ij}$ (i.e., to see explicitly which $c_k$'s and $c_l$'s will appear in the sum), let us fix one $i$ and one $j$. Let $D_M(i)=s_1s_2...s_M$ and $D_M(j)=t_1t_2...t_M$. In an $N$-bit string, let us now put the $s_j$'s at the $i_j$th places respectively. Then the suffixes $k$ will be obtained by converting the $N$-bit $d$-base numbers, obtained by filling all the remaining $N-M$ places of the above string arbitrarily with 0, 1, 2,..,$\overline{d-1}$, into decimal numbers. \begin{figure} \caption{Least suffix (or the first term) in $r_{ij}$.} \end{figure} For an illustration, let $k_0$ be the decimal number obtained by converting the $N$-bit $d$-base number having $s_j$ fixed at the $i_j$th place ($\forall j=1(1)M$) and zero at all the remaining $N-M$ places [see Fig.~1 for an illustration]. Then $k_0=\sum\limits_{j=1}^Ms_j.d^{N-i_j}$ . Similarly, let $l_0=\sum\limits_{j=1}^Mt_j.d^{N-i_j}$. Then the first term (ordered by suffix) of the sum in the expression of $r_{ij}$ will be $c_{k_0}.\overline{c}_{l_0}$. In a similar way the other $c_{k_p}.\overline{c}_{l_p}~~(p=0(1)(d^{N-M}-1))$ terms can be calculated. For the last term, which is the term with the highest suffix, we have $k_{d^{N-M}-1} =k_0+(d-1)\sum\limits_{j=1;j\neq i_1,i_2,..,i_M}^Nd^{N-j}$. Note that $r_{ii}=\sum\limits_{p=0}^{d^{N-M}-1}|c_{k_p}|^2$ and $r_{jj}=\sum\limits_{p=0}^{d^{N-M}-1}|c_{l_p}|^2$. Since $r_{ij}=\sum\limits_{p=0}^{d^{N-M}-1}c_{k_p}.\overline{c}_{l_p}$, it follows that each off-diagonal element $r_{ij}$ can be obtained by summing the products of the corresponding complex numbers appearing in the expression of $r_{ii}$ and the conjugates of the complex numbers appearing in the expression of $r_{jj}$. Hence, it is sufficient to calculate the expression for only the diagonal elements. $\Box$ \textbf{\emph{Remark 1}:} If we start with a mixed state $\left[r_{ij}\right]_{j\ge i=0}^{d^N-1}$ and we wish to calculate an RDM $\left[R_{ij}\right]_{j\ge i=0}^{d^M-1}$, following the same procedure, it can be shown that if $R_{ii}=\sum_{s=0}^{d^{N-M}-1}r_{(k_s)(k_s)}$ and $R_{jj}=\sum_{s=0}^{d^{N-M}-1}r_{(l_s)(l_s)}$, then $R_{ij}=\sum_{s=0}^{d^{N-M}-1}r_{(k_s)(l_s)}$. So $R_{ii}=0$ would imply $r_{(k_s)(j)}=0~~\forall s,j$! Thus, it is always helpful to first calculate the expression of the diagonal elements (and compare them). We shall now apply the above observation to arrive at our main results. As a natural extension of the work on the $N$-qubit $W$ state \cite{PR}, first we shall consider the generalized $d$-dimensional $W$ state. This would also serve as a good demonstration of the technique and be useful in understanding the proof for the generalized Dicke states. \textbf{\emph{I.
$N$-qudit generalized $W$-state}}: The $N$-qudit generalized $W$-state is defined as \cite{Sanders}\begin{equation} |W\rangle_N^d=\sum_{i=1}^{d-1}(a_{1i}|i0...00\rangle+...+a_{Ni}|00...0i\rangle) \nonumber \end{equation} However, we will write this state in our notation as \begin{equation}\label{dw1} |W\rangle_N^d=\sum_{i=0}^{N-1}\sum_{j=1}^{d-1}w_{jd^i}|D_N(jd^i)\rangle , \sum_{i=0}^{N-1}\sum_{j=1}^{d-1}|w_{jd^i}|^2=1.\end{equation} \textbf{Theorem 1:} \emph{$N$-qudit generalized $W$-states are determined by their bipartite marginals.} We shall prove this by showing that there does not exist any other $N$-qudit density matrix having the same bipartite marginals except \begin{eqnarray}\label{dw2} |W\rangle_N^d\langle W|&=& \sum_{i=0}^{N-1}\sum_{j=1}^{d-1}\sum_{k=j}^{d-1}w_{jd^i}\bar{w}_{kd^i}|D_N(jd^i)\rangle\langle D_N(kd^i)|\nonumber\\ &+&\sum_{i=0}^{N-1}\sum_{j=1}^{d-1}\sum_{l=i+1}^{N-1}\sum_{m=1}^{d-1}w_{jd^i}\bar{w}_{md^l}|D_N(jd^i)\rangle\langle D_N(md^l)|.\end{eqnarray} [Note that though the above expression looks rather cumbersome, the matrix form can be easily visualized, as there are non-zero elements only at the $(jd^i+1, kd^l+1)$ positions where $j,k=1(1)(d-1);~i,l=0(1)(N-1)\mbox{ and } jd^i\le kd^l$, since we are considering only the upper-half elements. These elements are the coefficients of $|D_N(jd^i)\rangle\langle D_N(kd^l)|$ and are given by $w_{jd^i}\bar{w}_{kd^l}$.] \textbf{\emph{Proof}:} 1.\hspace {1.5 em} Each bipartite marginal $\rho^{JK}_W$ will be a matrix of order $d^2 \times d^2$, where $J \in \left\{1,2,...,N-1\right\}$ and $K \in \left\{2,3,...,N\right\}$. As discussed earlier, to determine $\rho^{JK}_W$, we need to find the expressions of the $d^2$ diagonal elements of $\rho^{JK}_W$, i.e., the coefficients of $|ij\rangle\langle ij|~~\forall i,j=0(1)(d-1)$. [Note that, while in the bases, $i,j$ should be understood as $d$-base numbers]. Now each basis state in (\ref{dw1}) has exactly one nonzero entry (rather `bit') with value between 1 and $d-1$. So there will be no basis term $|ij\rangle\langle ij|$ of $\rho^{JK}_W$ having both $i,j$ as nonzero numbers. Therefore, the coefficient of $|ij\rangle\langle ij|$ in $\rho^{JK}_W$ should be zero $\forall i,j=1(1)(d-1)$. Hence we need to consider only the coefficients of $|0i\rangle\langle 0i|$ and $|i0\rangle\langle i0|$~~$\forall i=0(1)(d-1)$. Again, for $i\neq0$, there is exactly one basis state in (\ref{dw1}) containing `$i$' at the $J$th place (from left to right), having the coefficient $w_{id^{N-J}}$. Therefore, the coefficient of $|i0\rangle\langle i0|$ is $|w_{id^{N-J}}|^2$. Similarly, the coefficient of $|0i\rangle\langle 0i|$ in $\rho^{JK}_W$ is $|w_{id^{N-K}}|^2$~~$\forall i=1(1)(d-1)$. From normalization ($Tr(\rho^{JK}_W)=1$), the coefficient of $|00\rangle\langle00|$ in $\rho^{JK}$ is obtained as $$1-\sum_{i=1}^{d-1}(|w_{id^{N-J}}|^2+|w_{id^{N-K}}|^2).$$ We know that the non-diagonal terms (the coefficients of $|0i\rangle\langle0j|$, $|0i\rangle\langle j0|$ and $|i0\rangle\langle j0|$; $i\neq j$) will be determined from the above expressions of the diagonal terms. Thus,\\ \begin{eqnarray} \label{gwrdm}\rho^{JK}_W&=& (1-\sum_{i=1}^{d-1}(|w_{id^{N-J}}|^2+|w_{id^{N-K}}|^2))|00\rangle\langle00| +\sum_{i=1}^{d-1}\sum_{j=i}^{d-1}w_{id^{N-K}}\bar{w}_{jd^{N-K}}|0i\rangle\langle0j|\nonumber\\ &+&\sum_{i=1}^{d-1}\sum_{j=1}^{d-1}w_{id^{N-K}}\bar{w}_{jd^{N-J}}|0i\rangle\langle j0| +\sum_{i=1}^{d-1}\sum_{j=i}^{d-1}w_{id^{N-J}}\bar{w}_{jd^{N-J}}|i0\rangle\langle j0| .
\end{eqnarray} 2.\hspace {1.5 em} Now let us suppose that there exists an $N$-qudit density matrix (possibly mixed, hence the subscript M)\begin{equation} \rho^{12...N}_M~=~\sum_{i=0}^{d^N-1}\sum_{j=i}^{d^N-1}r_{ij}|D_N(i)\rangle\langle D_N(j)|\end{equation} which has the same bipartite marginals as $|W\rangle_N^d$. Here $r_{ii}\ge0~\forall i=0(1)(d^N-1)$, since the diagonal elements of a Positive Semi Definite (PSD) matrix are non-negative. [If a PSD matrix $A$ had a diagonal element $d_i<0$, then taking $|\psi\rangle =[0,0,..,0,1,0,...0]^T$ with the 1 at the $i$th position, we would have $\langle\psi|A|\psi\rangle=d_i<0$, contradicting the assumption that $A$ is PSD]. We first wish to calculate the diagonal elements of the bipartite marginal $\rho^{JK}_M$ of $\rho^{12...N}_M$. Each diagonal element of $\rho^{JK}_M$ is the sum of $d^{N-2}$ diagonal elements $r_{ss}$ of $\rho^{12...N}_M$. Out of the $d^2$ diagonal elements (coefficients of $|ij\rangle\langle ij|~~\forall i,j=0(1)(d-1)$) of $\rho^{JK}_M$, let us first calculate the coefficient of $|ij\rangle\langle ij|~~\forall i,j=1(1)(d-1)$. To see explicitly which $r_{ss}$'s will appear in the sum, we observe that the suffixes $s$ will vary over the decimal numbers obtained by converting the $N$-bit $d$-base numbers having $i$ fixed at the $J$th place \& $j$ fixed at the $K$th place and arbitrarily 0,1,...,$\overline{d-1}$ at the remaining ($N$-2) places. Hence the terms $r_{ss}$ for the suffixes $s=0$ and $s=k.d^{l-1}, k=1,2,..,(d-1); l=1,2,...,N$ will not appear in the expression (sum) of the coefficient of $|ij\rangle\langle ij|$ in $\rho^{JK}_M$ for any $J,K$, as for these $s$, $D_N(s)$ can have at most one nonzero entry (but we need at least two). 3a.\hspace {1.5 em} As can be seen from eqn.(\ref{gwrdm}), there is no term $|ij\rangle\langle ij|$ for $ij\neq0$ in $\rho^{JK}_W$. Therefore, the coefficient of $|ij\rangle\langle ij|$ for $ij\neq0$ in $\rho^{JK}_M$ should vanish. Since these coefficients are sums of non-negative $r_{ss}$'s, each $r_{ss}$ appearing there should individually be zero. Therefore, from step (2), the only non-zero diagonal elements of $\rho^{12...N}_M$ are $r_{ii}$ for $i=0~ \& ~ i=j.d^{k-1}~\forall j=1(1)(d-1),k=1(1)N$. 3b.\hspace {1.5 em} Next, comparing the coefficient of $|0i\rangle\langle 0i|$ from $\rho^{JK}_W$ and $\rho^{JK}_M$, we get $r_{(id^{N-K})(id^{N-K})}=|w_{id^{N-K}}|^2$ for all $i=1(1)(d-1)$. Similarly, comparing the coefficient of $|i0\rangle\langle i0|$, we get $r_{(id^{N-J})(id^{N-J})}=|w_{id^{N-J}}|^2$ for all $i=1(1)(d-1)$. Since these results hold for all possible (parties) $J$ and $K$, we can write them in combined form as $r_{(jd^i)(jd^i)}=|w_{jd^i}|^2~\forall j=1(1)(d-1),i=0(1)(N-1)$. 3c.\hspace {1.5 em} Finally, from the normalization condition $\sum\limits_{i=0}^{d^N-1}r_{ii}=1=\sum\limits_{i=0}^{N-1}\sum\limits_{j=1}^{d-1}|w_{jd^i}|^2$, we get $r_{00}~=~0$. Thus, collecting the results of steps 3a \& 3b, it follows that\begin{equation}\label{cond1} r_{(jd^i)(jd^i)} = |w_{jd^i}|^2, \forall j=1(1)(d-1),i=0(1)(N-1)\end{equation} and all other $r_{ii}$ in $\rho^{12...N}_M$ are zero. 4.\hspace {1.5 em} Now we will use the fact that if a diagonal element of a PSD matrix is zero, then all elements in the row and column containing that element should be zero \cite{PSD2}. Hence from the result of step 3c it follows that $\rho^{12...N}_M$ has nonzero elements only at the $(jd^i+1, kd^l+1)$ positions where $j,k=1(1)(d-1);~i,l=0(1)(N-1)\mbox{ and } jd^i\le kd^l$.
These elements are the coefficients of $|D_N(jd^i)\rangle\langle D_N(kd^l)|$ and are given by $r_{(jd^i)(kd^l)}$. Therefore, $\rho^{12...N}_M$ has the same form as $|W\rangle_N^d\langle W|$ given in (\ref{dw2}). Moreover, from (\ref{cond1}), they have the same diagonal elements. 5.\hspace {1.5 em} The non-diagonal elements of $\rho^{12...N}_M$ are at the $(jd^i+1, kd^l+1)$ places with $jd^i<kd^l$. Now, $jd^i<kd^l$ may happen in two ways: either $i<l$, or $j<k$ (when $i=l$). For $i<l$, the non-diagonal element at $(jd^i+1, kd^l+1)$ is found to be $w_{jd^i} \bar {w}_{kd^l}$ by comparing the coefficients of $|0j\rangle\langle k0|$ from $\rho^{(N-l)(N-i)}_M$ and $\rho^{(N-l)(N-i)}_W$. For $i=l$ (and hence $j<k$), the same can be achieved by comparing the coefficients of $|0j\rangle\langle0k|$ from $\rho^{J(N-i)}_M$ and $\rho^{J(N-i)}_W$ with any $J\ne i$. Thus $r_{(jd^i)(kd^l)} =w_{jd^i}\bar{w}_{kd^l}$ and hence $\rho^{12...N}_M~=~|W\rangle_N^d\langle W|$. $\Box$ \textbf{\emph{II. $N$-qubit generalized Dicke states}}: The generalized Dicke states are defined by \begin{equation} \label{gdnl1} |GD_N^{(\ell)}\rangle=\sum_ia_i|i\rangle \end{equation} where $i=|i_1i_2...i_N\rangle$ and the sum varies over all permutations of $\ell$ 1s and $N-\ell$ 0s. When all the coefficients are equal, they are known as Dicke states, which have many interesting properties such as permutational invariance and robustness against decoherence, measurement and particle loss. Some important applications of Dicke states include telecloning, quantum secret sharing, open-destination teleportation and quantum games \cite{Dref2}. In particular, the implementation and various interesting applications of $W$ states and their connection with Dicke states have been studied in \cite{Wref1}. Thus Dicke states serve as a good test bed for exploring multiparty correlations. Let us first rewrite the state in (\ref{gdnl1}) (to have a grip on the coefficients) as \begin{equation} \label{gdnl2} |GD_N^{(\ell)}\rangle=\sum\limits_{N-1\geq i_1>i_2>...>i_{\ell}\geq0}a_{2^{i_1}+2^{i_2}+... +2^{i_{\ell}}}|B_N(2^{i_1}+2^{i_2}+...+2^{i_{\ell}})\rangle\end{equation} where $B_N(x)$ is the binary representation of the decimal number $x$ in an $N$-bit string and the $a_i$'s are arbitrary non-zero complex numbers satisfying the normalization condition. Retaining only the upper-half elements, we can write \begin{equation}\label{gdnldens} |GD_N^{(\ell)}\rangle\langle GD_N^{(\ell)}|=\sum\limits_{i\le j}g_{ij}|B_N(i)\rangle\langle B_N(j)|\end{equation} where $g_{ij}=a_i\bar{a}_j$ and $i,j$ vary over the decimal numbers obtained by converting the $N$-bit binary numbers having $\ell$ 1s (and $N-\ell$ 0s). In matrix form, $|GD_N^{(\ell)}\rangle\langle GD_N^{(\ell)}|$ will have non-zero entries ($g_{ij}$) only at the ($i+1$, $j+1$) positions. Since $1\le\ell\le N-1$, as far as the entanglement is concerned it is sufficient to take $\ell\le\lfloor\frac{N}{2}\rfloor$ (the integer part of $\frac{N}{2}$), as the states corresponding to the other values of $\ell$ are LU-equivalent to these states. Any property of these latter states will follow from the corresponding former states obtained by interchanging $0$ and $1$ throughout the bases. For example, the two classes $|GD_N^{(N-2)}\rangle$ and $|GD_N^{(2)}\rangle$ have the same properties. We shall now prove an interesting property of these states.
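Before stating it, we note that the above bookkeeping is easy to check numerically. The following short sketch (written in Python; the sizes $N=6$, $\ell=2$, the random coefficients and the helper \texttt{marginal} are all ours and purely illustrative) builds $|GD_6^{(2)}\rangle$ from the binary labels and verifies the key fact exploited in steps 3 and 4 of the proof below: an off-diagonal element whose two basis labels differ in $2\ell$ bit positions is invisible to every $(\ell+1)$-partite marginal, while some $2\ell$-partite marginal does detect it.
\begin{verbatim}
import itertools
import numpy as np

N, ell = 6, 2          # illustrative sizes; Theorem 2 below needs ell < floor(N/2)
dim = 2 ** N
rng = np.random.default_rng(0)

# decimal labels of the N-bit strings containing exactly `ell` ones
labels = [sum(2 ** (N - 1 - p) for p in pos)
          for pos in itertools.combinations(range(N), ell)]

# random normalised coefficients a_i  ->  pure state |GD_N^(ell)>
a = rng.normal(size=len(labels)) + 1j * rng.normal(size=len(labels))
a /= np.linalg.norm(a)
psi = np.zeros(dim, dtype=complex)
psi[labels] = a
rho = np.outer(psi, psi.conj())

def marginal(mat, keep):
    """Reduced density matrix on the qubits listed in `keep` (0 = leftmost bit)."""
    t = mat.reshape([2] * (2 * N))
    n = N
    for q in sorted(set(range(N)) - set(keep), reverse=True):
        t = np.trace(t, axis1=q, axis2=n + q)   # trace out qubit q
        n -= 1
    d = 2 ** len(keep)
    return t.reshape(d, d)

# perturb an off-diagonal entry whose labels differ in 2*ell = 4 bit positions
i, j = 0b000011, 0b110000
rho_pert = rho.copy()
rho_pert[i, j] += 0.05
rho_pert[j, i] += 0.05            # keep the matrix Hermitian

# every (ell+1)-partite marginal is blind to this entry ...
for keep in itertools.combinations(range(N), ell + 1):
    assert np.allclose(marginal(rho, keep), marginal(rho_pert, keep))

# ... but some 2*ell-partite marginal does see it
print(any(not np.allclose(marginal(rho, keep), marginal(rho_pert, keep))
          for keep in itertools.combinations(range(N), 2 * ell)))   # True
\end{verbatim}
The same \texttt{marginal} helper also reproduces the diagonal bookkeeping of the Observation stated earlier.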
\textbf{Theorem 2 :}\emph{ For $1\le\ell<\lfloor\frac{N}{2}\rfloor$, the generalized Dicke state $|GD_N^{(\ell)}\rangle$ is uniquely determined by its $2\ell$-partite marginals.} Note that we have excluded the case $\ell=\lfloor\frac{N}{2}\rfloor$. The reason for this exclusion will be described later. We will prove this theorem in two parts---first we shall show that if any density matrix has the same $(\ell+1)$-partite marginals as those of $|GD_N^{(\ell)}\rangle$, then it must share the same diagonal elements with $|GD_N^{(\ell)}\rangle\langle GD_N^{(\ell)}|$. But there will be some (off-diagonal) elements in a general density matrix which will never appear in any ($\ell+1$)-partite marginal. So, to include these elements, we have to consider RDMs of more parties. In the second part we will show that it is sufficient to consider the $2\ell$-partite marginals to prove the uniqueness (i.e., that the two matrices share the same off-diagonals). \textbf{\emph{Proof}:} 1.\hspace {1.5 em} If possible, let there exist an $N$-qubit density matrix (possibly mixed)\begin{equation}\label{rhom2} \rho_M^{12...N}=\sum\limits_{i=0}^{2^N-1}\sum\limits_{j=i}^{2^N-1}r_{ij}|B_N(i)\rangle\langle B_N(j)|\end{equation} having the same $(\ell+1)$-partite marginals as those of $|GD_N^{(\ell)}\rangle$. We shall prove that $\rho_M^{12...N}$ and $|GD_N^{(\ell)}\rangle\langle GD_N^{(\ell)}|$ share the same diagonals. 2.\hspace {1.5 em} We will first consider the diagonal elements of the RDMs. Since every basis term of $|GD_N^{(\ell)}\rangle$ has exactly $\ell$ nonzero entries (each equal to 1), the coefficient of $$|i_1i_2...i_{\ell+1}\rangle\langle i_1i_2...i_{\ell+1}|$$ in which every $i_k$ is 1 should be zero in any $(\ell+1)$-partite marginal. This constrains the form of $\rho_M^{12...N}$ in eqn (\ref{rhom2}) to have some vanishing coefficients ($r_{ij}$). In (\ref{rhom2}), only those $r_{ij}$ will be non-zero for which both $B_N(i)$ and $B_N(j)$ have at most $\ell$ 1s. We shall now show that only those $r_{ii}$ in (\ref{rhom2}) are non-zero for which $B_N(i)$ has exactly $\ell$ 1s. a).\hspace {1.5 em} Let us consider the coefficient of $|i_1i_2...i_{\ell+1}\rangle\langle i_1i_2...i_{\ell+1}|$, where $\ell$ of the $i_j$'s are 1 and only one is zero, in the RDM of some parties $J_1,~J_2,...,J_{\ell+1}$. There is exactly one basis term in $|GD_N^{(\ell)}\rangle$ having $i_k$ at the $J_k$th place, with the coefficient $a_i$ where $i=2^{N-J_{\ell+1}}+...+2^{N-J_1}$. So, when the RDM is calculated from $|GD_N^{(\ell)}\rangle$, the coefficient is $g_{ii}=|a_i|^2$. Again, there is exactly one non-zero $r_{ii}$ in (\ref{rhom2}) such that $B_N(i)$ has $i_k$ at the $J_k$th place (since $B_N(i)$ can have at most $\ell$ 1s in order for $r_{ij}\ne0$). Therefore, comparing the coefficients (of this term), $r_{ii}=g_{ii}$. Considering all permutations of this term and all possible sets of $(\ell+1)$ parties, it follows that $r_{ii}=g_{ii}$ for all decimal $i$ such that $B_N(i)$ has $\ell$ 1s. b).\hspace {1.5 em} Now we will show that all other $r_{ii}$, corresponding to which $B_N(i)$ has fewer than $\ell$ 1s, should be 0. First consider those $r_{ii}$'s corresponding to which $B_N(i)$ has $(\ell-1)$ 1s. Then, comparing the coefficients of $|i_1i_2...i_{\ell+1}\rangle\langle i_1i_2...i_{\ell+1}|$ where $(\ell-1)$ of the $i_j$'s are 1, from the RDMs (considering all possible sets of parties and using the result of step a) above), we get $r_{ii}=0$.
Similarly, all other $r_{ii}$'s, corresponding to which $|B_N(i)\rangle$ has fewer than $\ell$ 1s, should be zero. Finally, from normalization ($\sum r_{ii}=\sum g_{ii}=1)$, it follows that $r_{00}=0$. Thus, collecting the results of a) and b), it follows that $\rho_M^{12...N}$ in (\ref{rhom2}) reduces to the same form as $|GD_N^{(\ell)}\rangle\langle GD_N^{(\ell)}|$ in (\ref{gdnldens}) and they have the same diagonal elements $r_{ii}=g_{ii}$. The only remaining task to prove the uniqueness is to show that they have the same non-diagonal elements too. 3.\hspace {1.5 em} Consider a non-diagonal element $r_{ij}$ with $i=|...i_1..i_{\ell}...\rangle$ and $j=|...j_1..j_{\ell}...\rangle$ (each of the $i_k$ and $j_k$ being 1). Since $\ell<\lfloor\frac{N}{2}\rfloor$, there will be some terms $|i\rangle\langle j|$ in the density matrix the coefficients ($r_{ij}$ or $a_i\bar{a}_j$) of which will never occur in any $(\ell+1)$-partite marginal. For example, the coefficient of $|000\ldots011\rangle\langle110\ldots0|$, or of $|010\ldots01\rangle\langle100\ldots010|$, will never appear in any tripartite marginal. Generically, those $r_{ij}$'s $(i<j)$ for which the Hamming distance between $B_N(i)$ and $B_N(j)$ is greater than $\ell+1$ will never occur in any $(\ell+1)$-partite marginal, because partial tracing over the remaining parties will yield 0. Thus, the elements $r_{ij}$ with $j=\bar{i}=$ \emph{complement} of $i\equiv2^N-1-i$ (these are the elements on the secondary diagonal of the density matrix) will never occur. Since these $r_{ij}$'s never occur in any $(\ell+1)$-partite marginal, they are unconstrained elements (i.e., they can take any values and need not be $a_i\bar{a}_j$, which is required for the uniqueness of the two density matrices). So, there exists an infinite number of $2^N\times2^N$ hermitian, unit-trace matrices sharing the same diagonals and $(\ell+1)$-partite marginals with $|GD_N^{(\ell)}\rangle\langle GD_N^{(\ell)}|$. However, all such matrices may not be valid density matrices because of the semi-positivity restriction ($\rho\ge0$). So, for some particular choices of the coefficients $a_i$, there may (or may not) exist a valid density matrix other than $|GD_N^{(\ell)}\rangle\langle GD_N^{(\ell)}|$. Therefore, there is an ambiguity about the general case: what is the minimum number of parties whose RDMs can generically determine the $|GD_N^{(\ell)}\rangle$ state uniquely? 4.\hspace {1.5 em} To answer this question, we observe that the maximum possible Hamming distance between $B_N(i)$ and $B_N(j)$ is $2\ell$. Therefore, if we consider the $2\ell$-partite marginals, each $r_{ij}$ will appear in some RDM and hence will be constrained to satisfy some relation with the $a_i$'s. We shall now show that considering the $2\ell$-partite marginals indeed yields $r_{ij}=a_i\bar{a}_j$. To prove it, consider a non-diagonal element $r_{ij}$ with $i=|i_1i_2\ldots i_N\rangle$ and $j=|j_1j_2\ldots j_N\rangle$. Let the $\ell$ 1's in $i$ be at the $I_k$th places (counting from left to right) and those in $j$ at the $J_k$th places. If the two sets $\{I_k\}$ and $\{J_k\}$ are disjoint (i.e., $\{I_k\}\cap\{J_k\}=\Phi$), then we get a set of $2\ell$ parties $\{I_k,J_k\}$ and we can arrange all the $I_k$ and $J_k$'s (since $I_k, J_k \in \{1,2\ldots,N\}$) in increasing order. Let us call them $\{P_k\}$ (i.e., $P_1<P_2<\ldots<P_{2\ell}$).
If $\{I_k\}\cap\{J_k\}\ne\Phi$, we can add any number(s) from $\{1,2,\ldots,N\}$ to the set $\{P_k\}$ (maintaining the order) so that it contains $2\ell$ elements. Let $s_k$ be the $P_k$th bit (from left to right) in $B_N(i)$ and let the corresponding bit in $B_N(j)$ be $t_k$. Then, comparing the coefficient of $|s_1s_2\ldots s_{2\ell}\rangle\langle t_1t_2\ldots t_{2\ell}|$ from the RDM $\rho^{P_1P_2\ldots P_{2\ell}}$, it follows that $r_{ij}=a_i\bar{a}_j$, and hence the proof. $\Box$ \textbf{\emph{Remark 2}:} It is worth mentioning that Theorem 2 can be viewed as a sufficient condition. It states that it is sufficient to consider the $2\ell$-partite marginals to determine $|GD_N^{(\ell)}\rangle$. However, it may happen (e.g., for some specific state in this class) that the state $|GD_N^{(\ell)}\rangle$ can be determined from fewer than $2\ell$-partite marginals. In this sense, we do not know whether this is an optimal bound. We have used the $2\ell$-partite marginals to rule out the possibility of another density matrix having different off-diagonals but sharing the same diagonals. The off-diagonals $r_{ij}$ are arbitrary but are constrained to satisfy the requirement that the resulting matrix should be PSD. This automatically puts some restrictions on the off-diagonals, e.g., $|r_{ij}|\le\sqrt{r_{ii}r_{jj}}$. There is a possibility of reducing the number of parties using some further properties of density (PSD) matrices (or using some different techniques). In the present technique, $2\ell$-partite marginals are sufficient. A limitation of the present technique is that if the maximum Hamming distance (between the bases) is $N$, then it gives only the trivial result. In the case $\ell=\lfloor\frac{N}{2}\rfloor$, for odd $N$ the technique reproduces the result of \cite{WL}, and for even $N$ it gives no useful result. That's why we have excluded this case in Theorem 2. Another interesting issue is the number of RDMs needed to identify a state. For example, it has been shown by Diosi \cite{others1} that among pure states, only two (out of three) bipartite marginals are sufficient to determine a generic three-qubit pure state ($GHZ$ and its LU equivalents are the only exception). If we restrict ourselves only to pure states, then the number of RDMs can be considerably reduced. The result is stated more precisely in the following theorem. \textbf{Theorem 3 :}\emph{ Among arbitrary pure states, the generalized Dicke state $|GD_N^{(\ell)}\rangle$ is uniquely determined by its $(\ell+1)$-partite marginals. Moreover, only $^{N-1}C_{\ell}$ of them (out of $^NC_{\ell+1}$), having one party common to all, are sufficient.} \textbf{\emph{Proof}:} Let us take the first party as the common one and consider the RDMs $\rho^{1i_2i_3\ldots i_{\ell+1}}$. The claim for the diagonal part has already been proved in the first part of the proof of Theorem 2. The claim for the non-diagonal part follows by comparing the coefficients of $|B_{\ell+1}(i)\rangle \langle B_{\ell+1}(j)|$, where $|B_{\ell+1}(i)\rangle$ and $|B_{\ell+1}(j)\rangle$ have exactly $\ell$ 1s. $\Box$ \textbf{\emph{Remark 3}:} We wish to mention here that, as we are considering the most general class of Dicke states, it is not possible to determine the states from fewer than $(\ell+1)$-partite marginals. It may happen that for some specific choices of the coefficients, $|GD_N^{(\ell)}\rangle$ is uniquely determined from fewer than $(\ell+1)$-partite marginals, but in general, not all states can be determined.
For example, the following two states \begin{eqnarray} ~~~~~|GD_4^{(2)}\rangle &=& r_3e^{i\theta_3}(|3\rangle+|12\rangle) +r_5e^{i\theta_5}(|5\rangle+|10\rangle) + r_6e^{i\theta_6}(|6\rangle+|9\rangle)\nonumber\\ \mbox{and~} |GD_4^{(2)'}\rangle &=& r_3e^{-i\theta_3}(|3\rangle+|12\rangle) +r_5e^{-i\theta_5}(|5\rangle+|10\rangle) + r_6e^{-i\theta_6}(|6\rangle+|9\rangle) \nonumber\end{eqnarray} are not determinable, since they share the same bipartite marginals. [$r_i,~\theta_i$ are real and the basis state $|x\rangle$ should be read as $|B_4(x)\rangle$]. \textbf{\emph{III. Generalization to $d$-dimension}}: The generalized $d$-dimensional Dicke states are defined by \begin{equation}\label{ggd1} |D_N(k_0, k_1,...,k_{d-1})\rangle=\sum_ic_i|i\rangle\end{equation} where \begin{equation} \label{cggd1} i=|\underbrace{0...0}_{k_0}\underbrace{1...1}_{k_1}.... \underbrace{\overline{d-1}...\overline{d-1}}_{k_{d-1}}\rangle\end{equation} and the index $i$ varies over all distinct permutations of $k_0$ 0s, $k_1$ 1s, ..., and $k_{d-1}$ copies of $\overline{d-1}$; $k_0+k_1+...+k_{d-1}=N$. These states are genuinely entangled. Using the same technique as in the proof of Theorem 2, we can prove the following result about the reducible correlations in these states. \textbf{Theorem 4 :}\emph{ If $K$ $(\equiv2\ell<N)$ is the maximum Hamming distance between the bases in (\ref{cggd1}), then the state given by (\ref{ggd1}) is uniquely determined by its $K$-partite RDMs.} As an example, any state of the class $|D_{2009}(2004,2,3)\rangle$ is determined by its $10$-partite RDMs. \textbf{\emph{IV. Quantum Marginal Problem}}: The basic issue concerning the Quantum Marginal Problem (QMP) is the following: does there exist a joint quantum state consistent with a given set of RDMs? It is known that a general solution to the QMP would provide a solution to the \emph{N-representability problem} in quantum chemistry, e.g., to calculate the binding energies of complex molecules \cite{bookqmp}. A particular class of QMP is `Symmetric Extension', which has direct application in Quantum Key Sharing, Quantum Cryptography, etc.\ \cite{Symextn}. Although plenty of literature is available \cite{paperqmp}, there is no general method to get exact solutions. One needs to calculate the marginals from an arbitrary state (which is the expected joint state) and then compare them with the given ones. For a large number of marginals, the problem becomes very difficult as we need to solve several complex equations. However, if there is some symmetry in the RDMs (e.g., for $W$ states and Dicke states, all RDMs have a similar form with many vanishing elements), the technique presented in our work can be applied to find a solution. As an interesting example, it was mentioned in \cite{LPW} that for the set of RDMs $\{\rho^{AB},~\rho^{BC},~\rho^{AC}\}$ where (in the computational basis)\begin{equation}\rho^{AB}=\rho^{BC}=\rho^{AC}=|\psi^-\rangle\langle\psi^-| \end{equation} (with $|\psi^-\rangle = (|01\rangle - |10\rangle)/\sqrt{2}$) there exists no consistent 3-qubit state. We can prove this easily using our technique. If possible, let \begin{equation} \rho^{ABC}~=~\sum_{i=0}^{7}\sum_{j=i}^{7}r_{ij}|B_3(i)\rangle\langle B_3(j)|\end{equation} where $B_3(x)$ is the binary representation of $x$ in a 3-bit string, have the given marginals. Then, comparing the first and last diagonal elements (i.e., the coefficients of $|00\rangle\langle00|$ and $|11\rangle\langle11|$) of the RDMs, we get $r_{ii}=0~\forall i=0(1)7$, which is impossible since $\rho^{ABC}$ must have unit trace!
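This diagonal bookkeeping can also be checked mechanically. The short sketch below (again in Python; the constraint matrix and all variable names are ours and purely illustrative) encodes the linear map from the diagonal of a putative $\rho^{ABC}$ to the diagonals of its three bipartite marginals and confirms that matching $|\psi^-\rangle\langle\psi^-|$ on all three pairs forces every diagonal entry, and hence the trace, to vanish.
\begin{verbatim}
import numpy as np

# diag(rho^{ABC}) -> diag(rho^{XY}) is linear: e.g. q_{ab} = sum_c p_{abc}
pairs = [(0, 1), (1, 2), (0, 2)]             # AB, BC, AC
target = np.array([0.0, 0.5, 0.5, 0.0])      # diagonal of |psi^-><psi^-|

rows, rhs = [], []
for (x, y) in pairs:
    for xy in range(4):                      # 2-qubit diagonal index, bits (x, y)
        row = np.zeros(8)
        for p in range(8):
            bits = [(p >> (2 - k)) & 1 for k in range(3)]   # bits of A, B, C
            if 2 * bits[x] + bits[y] == xy:
                row[p] = 1.0
        rows.append(row)
        rhs.append(target[xy])

# rows with right-hand side 0 are sums of non-negative p_i, so those p_i must vanish
forced_zero = set()
for row, value in zip(rows, rhs):
    if value == 0:
        forced_zero.update(np.nonzero(row)[0])

print(sorted(forced_zero))   # [0, 1, 2, 3, 4, 5, 6, 7]: trace 0, not 1 -> no such state
\end{verbatim}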
\textbf{\emph{Conclusions}}: Though the general framework is still far away, through this work we have made considerable progress towards understanding the nature of reducible correlations. It has been shown that the correlations in some classes of multipartite states can be reduced to lower order ones. This provides some insight into the characterization of multiparty entanglement; for example, the determination of the generalized $W$ state from its bipartite RDMs proves that the entanglement therein is necessarily of a bipartite nature. The large class of generalized Dicke states $|GD_N^{(\ell)}\rangle$ has been shown to be determined by their $2\ell$-partite marginals, where $1\le \ell <\lfloor \frac{N}{2}\rfloor$. Thus, these states carry information at most at the $2\ell$-partite level, and it cannot be reduced beyond the $(\ell+1)$-partite level. In general, the entangled states which are determined by their $K$-party RDMs can be used as resources for performing information-related tasks, especially if some of the parties do not cooperate. In such situations, it is not necessary that each party cooperates with all the others; cooperation with only $K-1$ parties is sufficient. The $K$-partite residual entanglement would serve the purpose. For example, because of the bipartite nature of its entanglement, the $N$-qubit $W$-state is very robust against the loss of $(N-2)$ parties. Recently it has been shown that the $(N-1)$-qudit RDMs uniquely determine the Groverian measure of entanglement of an $N$-qudit pure state \cite{EJ}. So it is likely that for the pure states which are determined by their $K$-partite RDMs, the entanglement measure may be characterized by these RDMs. However, this requires further investigation. Finally, we have shown by an example that our approach can be applied to the Quantum Marginal Problem, at least for simple (low-dimensional) cases. \section*{References} \end{document}
\begin{document} \title{A Phononic Bus for Coherent Interfaces Between a Superconducting Quantum Processor, Spin Memory, and Photonic Quantum Networks} \author{Tom\'{a}\v{s} Neuman$^{+}$} \affiliation{John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA} \author{Matt Eichenfield$^{+}$} \affiliation{Sandia National Laboratories, Albuquerque, NM, USA} \author{Matthew Trusheim$^{+}$} \affiliation{John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA}\affiliation{Massachusetts Institute of Technology, Cambridge, MA, USA}\affiliation{CCDC Army Research Laboratory, Adelphi, MD 20783, USA} \author{Lisa Hackett} \affiliation{Sandia National Laboratories, Albuquerque, NM, USA} \author{Prineha Narang} \email{[email protected]} \affiliation{John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA} \author{Dirk Englund} \email{[email protected]} \affiliation{Massachusetts Institute of Technology, Cambridge, MA, USA}\affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, USA} \begin{abstract} We introduce a method for high-fidelity quantum state transduction between a superconducting microwave qubit and the ground state spin system of a solid-state artificial atom, mediated via an acoustic bus connected by piezoelectric transducers. Applied to present-day experimental parameters for superconducting circuit qubits and diamond silicon vacancy centers in an optimized phononic cavity, we estimate quantum state transduction with fidelity exceeding 99\% at a MHz-scale bandwidth. By combining the complementary strengths of superconducting circuit quantum computing and artificial atoms, the hybrid architecture provides high-fidelity qubit gates with long-lived quantum memory, high-fidelity measurement, large qubit number, reconfigurable qubit connectivity, and high-fidelity state and gate teleportation through optical quantum networks. \end{abstract} \date{\today} \maketitle \section{Introduction} Hybrid quantum systems have the potential to optimally combine the unique advantages of disparate physical qubits. In particular, while superconducting (SC) circuits have high-fidelity and high-speed initialization and logic gates \cite{neeley2010generation, pop2014coherent,Ofek2016, narla2016entanglement,Lu2017-uc, barends2019diabaticgates, Arute2019-cd, Kjaergaard_2019}, challenges remain in improving qubit (i) coherence times, (ii) long-range connectivity, (iii) qubit number, and (iv) readout fidelity. A hybrid system may satisfy these challenges by delegating different tasks to constituent physical platforms. Here, we propose an approach to enable such scalable solid-state quantum computing platforms, based fundamentally on a mechanism for high-fidelity qubit transduction between a SC circuit and a solid-state artificial atom (AA). Mediating this transduction is an acoustic bus \cite{Chu2017-wl,kuzyk2018scaling, li2019honeycomb, Bienfait2019transferSAW} that couples to the SC qubit and an AA electron spin via a combination of piezoelectric transduction and strong spin-strain coupling. Applied to present-day experimental parameters for SC flux qubits and silicon vacancy (SiV$^-$) centers in diamond, we estimate quantum state transfer with fidelity exceeding 99\% at a MHz-scale bandwidth. 
Hyperfine coupling to local ${^{13}}$C nuclear-spin qubits enables coherence times exceeding a minute \cite{bradley2019register}, while excited orbital states enable long-distance state transfer across quantum networks by optically heralded entanglement. Moreover, the scheme is extensible to large numbers of spin qubits with deterministic addressability, potentially enabling integration of large-scale quantum memory. Noting that SiV$^-$ single-shot optical readout fidelity has been experimentally demonstrated to exceed 99.9\% \cite{Bhaskar2019-gd}, this approach successfully addresses challenges (i-iv). By combining the complementary strengths of SC circuit quantum computing and artificial atoms, this hybrid SC-AA architecture has the essential elements for extensible quantum information processors: a high-fidelity quantum processing unit (QPU), a bus to scalable quantum memory, and a high-fidelity connection to long-range optical quantum networks. Our approach, schematically depicted in Fig.\,\ref{fig:schematicfig1}, combines four quantum interfaces [QIs, marked as QI1-QI4 in Fig.\,\ref{fig:schematicfig1}(a)] between physical modalities: a microwave photon-to-phonon interface, coupling of a phonon to an AA electron spin, coupling of the electron spin to a nuclear spin, and finally coupling of the electron spin to the optical photon. Previous work has investigated these quantum interfaces separately, including the piezoelectric transduction from the microwave circuit to the phonon~\cite{schuetz2015universaltransducers, Manenti2017, arrangoiz2018superconducting, Bienfait2019transferSAW, hann2019hardwareefficient, Higginbotham2018harnessing, sletten2019phononfock, wu2020microwave}, spin-strain coupling in solid-state quantum emitters \cite{falk2014electrically, golter2016optomechNV, kuzyk2018scaling, lemonde2018phononnetworks,chen2018orbital, maity2018alignment, meesala2018strainsiv, udvarheliy2018spinstrain, li2019honeycomb}, hyperfine interactions of electron spins with nearby nuclei \cite{de2010universal, Childress281, taminiau2014universal,Waldherr2014, bradley2019register, nguyen2019nuclearoptics}, and spin-dependent optical transitions~\cite{Pfaff532, Bernien2013, Evans2018-vh,Awschalom2018-en}. The last of these, the optical response of AAs conditioned on the electron spin state, can be used to generate heralded entanglement~\cite{humphreys2018deterministic, rozpkedek2019near, bhaskar2019experimental} and thus allows for networking (e.g., connecting the device to the quantum internet) via quantum-state teleportation. As compared to optomechanical~\cite{stannigel2010transducer,stannigel2011optomechanicaltransducer, Bochmann2013} and electro-optical~\cite{Rueda2016} transduction schemes, quantum teleportation circumvents the direct conversion of quantum states into photons and thus minimizes the infidelity associated with undetected (unheralded) photon loss. Recent experiments have demonstrated the strain-mediated driving of an AA electron spin ground state with a classical phonon field~\cite{Whiteley2019-xe, Maity2020straincontrol}. Using the strain-spin coupling rates measured in those experiments to inform a theoretical model, and introducing a new phononic cavity design that achieves the strong coupling regime between a single phonon and an AA spin, we estimate that quantum state transduction is possible with near-unity fidelity, as shown below. The \textit{Article} is structured as follows.
Section~\ref{sec:II} develops a general model for phonon-to-spin transduction using a quantum master-equation approach, followed in Section~\ref{sec:III} by experimentally informed model parameters. Section~\ref{sec:IV} introduces designs for mechanical cavities that achieve strong phonon-spin coupling and efficient quantum state transfer through a combination of high zero-point strain amplitude at AA sites and high expected mechanical quality factors. In Section~\ref{sec:V}, we numerically evaluate the master equation describing SC-electron spin transfer and demonstrate a state transfer infidelity below $\sim 1$\%, and even below 0.1\% (sufficient for the fault-tolerance threshold) using more speculative techniques. In Section~\ref{sec:VI}, we elaborate on using the AA's optical transitions to realize optical interconnects and -- via heralded entanglement with other networked quantum memories -- to enable on-demand, long-range state and gate teleportation with near-unity fidelity. \begin{figure*} \caption{Quantum Memory (QM) and Interconnect Architecture. (a) A SC quantum processing unit (QPU) is connected via piezoelectric `quantum interface 1' (QI1) to a phononic BUS. The phonon couples to electronic spin-orbit states of an AA, forming quantum interface 2 (QI2). The AA's fine-structure states can further couple to a nuclear spin to realize a QM via quantum interface 3 (QI3), or they connect to photons via quantum interface 4 (QI4), which finally connects to the quantum internet (blue dots: photonic interconnects). (b) A physical realization of the scheme outlined in (a). A superconducting qubit is connected via a phononic or microwave multiplexer (mux) to a series of phononic or microwave waveguides that are each interfaced with a mechanical cavity hosting one or many AAs whose electronic fine-structure (spin-orbit) states serve as qubits. The spin-orbit states of each AA interact with the spin states of a nearby $^{13}$C nuclear spin. \label{fig:schematicfig1} \end{figure*} \section{Theoretical model of the quantum-state transduction}\label{sec:II} To estimate the state-transfer fidelity we theoretically model the quantum state transfer from the SC qubit to the electron-spin qubit using the quantum-master-equation approach. As in Fig.\,\ref{fig:schematicfig1}, the SC qubit is directly coupled to a discrete mechanical mode of a phononic cavity via a tunable electromechanical transducer. In Appendix\,\ref{app:indirect} we describe an alternative coupling scheme in which the interaction between the SC qubit and the mechanical mode of the cavity is mediated by guided modes of a microwave \cite{wu2020microwave} or phononic waveguide \cite{Fang2016transductionphonons, Bienfait2019transferSAW}. These guided modes mediate the state transfer between the SC qubit and the discrete phononic mode. The couplings into and out of the waveguide are time-modulated to release (``pitch'') and later catch a wavepacket of propagating waveguide modes. Finally, the strain of the mechanical mode interacts with spin levels of the electronic fine-structure states of a diamond AA. By controlling this coupling, the quantum state is transduced to the spin state of the AA electron.
We start our theoretical description from the Hamiltonian schematically depicted in Fig.\,\ref{fig:schematicfig1}(a): \begin{align} H_{\rm sc-e}&=\hbar\omega_{\rm sc}\sigma_{\rm sc}^\dagger \sigma_{\rm sc} + \hbar \omega_{\rm p} b^\dagger b + \hbar \omega_{\rm e}\sigma_{\rm e}^\dagger \sigma_{\rm e} \nonumber\\ &+ \hbar g_{\rm sc-p}(t) (\sigma_{\rm sc}b^\dagger + \sigma_{\rm sc}^\dagger b)\nonumber\\ &+\hbar g_{\rm p-e}(t) (\sigma_{\rm e} b^\dagger + \sigma_{\rm e}^\dagger b). \end{align} Here $\sigma_{\rm sc}$ ($\sigma_{\rm sc}^\dagger$) is the superconducting qubit two-level lowering (raising) operator, $\sigma_{\rm e}$ ($\sigma_{\rm e}^\dagger$) is the electron spin lowering (raising) operator, and $b$ ($b^\dagger$) is the annihilation (creation) operator of the phonon. The frequencies $\omega_{\rm sc}$, $\omega_{\rm p}$, and $\omega_{\rm e}$ correspond to the SC, phonon, and electron-spin excitation, respectively. The SC couples to the phonon mode via the coupling rate $g_{\rm sc-p}$, and the phonon couples to the electron spin via $g_{\rm p-e}$. The operators $\sigma_{\rm sc}$ ($\sigma_{\rm sc}^\dagger$) describe the SC system in a two-level approximation and can be identified with the annihilation (creation) operators of the qubit flux appearing in the circuit cavity-QED description of the device \cite{blais2004circuitqed, devoret2004superconducting}. Throughout the paper we assume that all effective couplings in the system are resonant and thus $\omega_{\rm sc}=\omega_{\rm p}=\omega_{\rm e}$. We account for system losses by adding to the Liouville equation of motion for the density matrix $\rho$ the Lindblad superoperators $\gamma_{c_i}\mathcal{L}_{c_i}(\rho)$: \begin{align} \frac{\rm d}{{\rm d}t}\rho = \frac{1}{{\rm i}\hbar}[H_{\rm sc-e}, \rho]+\sum_i \gamma_{c_i}\mathcal{L}_{c_i}(\rho),\label{eq:mastereqdirect} \end{align} where \begin{align} \gamma_{c_i}\mathcal{L}_{c_i}(\rho)=\frac{\gamma_{c_i}}{2} \left(2 c_i\rho c_i^\dagger-\lbrace c_i^\dagger c_i,\rho \rbrace \right), \label{eq:lindblad} \end{align} with $c_i\in \{ \sigma_{\rm sc}, b,\sigma_{\rm e}^\dagger \sigma_{\rm e} \}$, and $\gamma_{c_i}\in\{ \gamma_{\rm sc}, \gamma_{\rm p}, \gamma_{\rm e}\}$ representing the decay (decoherence) rates of the respective excitations. We note that the Lindblad superoperators $\mathcal{L}_{\sigma_{\rm sc}}(\rho)$ and $\mathcal{L}_{b}(\rho)$ describe the $T_1$ processes including the qubit decay, whereas $\mathcal{L}_{\sigma^\dagger_{\rm e}\sigma_{\rm e}}(\rho)$ describes pure dephasing of the electron spin (a $T_2$ process) considering the long-lived character of the spin excitation. We do not include pure dephasing ($T_2$) processes of the SC qubit and the mechanical mode, but consider rates of the $T_1$ processes corresponding to the experimentally achievable $T_2$ times (since $T_1\sim T_2$ for phonons and SC qubits). We do not include thermal occupation of modes as we consider the system to be cooled to $\sim$mK temperatures. For high-fidelity state transfer without coherent reflections, it is necessary to switch the magnitude of the Jaynes-Cummings couplings $g_{\rm p-e}$ and $g_{\rm sc-p}$ in a sequence that allows for step-wise transfer of the quantum state to the mechanical mode and finally to the electron spin. To that end we first switch on the coupling $g_{\rm sc-p}$ between the SC qubit and the mechanical mode while keeping the phonon-electron-spin coupling $g_{\rm p-e}$ switched off.
After completing the state transfer to the mechanical mode, we switch off $g_{\rm sc-p}$ and apply a state-transfer pulse $g_{\rm p-e}$, completing the procedure. Each of the pulses represents a SWAP gate (up to a local phase), so the state-transfer protocol can be inverted by interchanging the pulse order. In particular, we assume that each coupling has a smooth time dependence given by \begin{align} g_{\rm sc-p}(t)&=g_{\rm scp}\,{\rm sech}(2 g_{\rm scp}[t-\tau_{\rm scp}])\label{eq:pulsescp}\\ g_{\rm p-e}(t)&=g_{\rm pe}\,{\rm sech}(2 g_{\rm pe}[t-\tau_{\rm pe}]),\label{eq:pulsepe} \end{align} where $g_{\rm scp}$, $g_{\rm pe}$ are time-independent amplitudes and $\tau_{\rm scp}$, $\tau_{\rm pe}$ are time delays of the respective pulses. We choose the smoothly varying pulses over rectangular pulses to account for the bandwidth limitation of experimentally achievable time-dependent couplings. In our simulations we adjust $\Delta\tau_{\rm sc-p-e}=\tau_{\rm pe}-\tau_{\rm scp}$ to optimize the state-transfer fidelity $\mathcal{F}$ defined as: \begin{align} \mathcal{F}=\left| {\rm Tr} \left \{ \sqrt{\sqrt{\rho_{\rm i}}\rho_{\rm f}\sqrt{\rho_{\rm i}}} \right\} \right|,\label{eq:fidelity} \end{align} where $\rho_{\rm i}$ ($\rho_{\rm f}$) is the density matrix of the initial state of the SC qubit (final state stored in the electron spin). Due to the finite simulation time, we approximate the ideally infinite temporal extent of the applied pulses by applying them at a sufficient delay after the start of the simulation. \section{Physical Transducer Parameterization}\label{sec:III} \subsection{SC Transducer Parameterization}\label{sec:III_A} The values of the coupling and loss parameters govern the system performance. Coupling rates between a microwave (MW) resonator and a phononic cavity \cite{wu2020microwave} of up to $\sim 100$ MHz have been demonstrated experimentally in a MW cavity resonantly coupled to a discrete phononic mode via a piezoelectric coupler. Optimizing the coupling requires matching the MW line impedance with the phonon waveguide impedance~\cite{siddiqui2018lambwave}. For tunable coupling between the SC qubit and the mechanical mode of the cavity, the MW resonator can be substituted by the SC qubit itself as in a recent experimental demonstration \cite{Bienfait2019transferSAW}. Using a Josephson junction with externally controllable flux as a tunable microwave switch mated \cite{chen2014tunecoupler,geller2015tunable,zeuthen2018electrooptomech, Bienfait2019transferSAW} to the piezoelectric coupler thus enables controllable coupling between the SC qubit and the phonon. Based on recent experiments \cite{Bienfait2019transferSAW}, we assume that the coupling between the SC qubit and the mechanical mode can reach up to $g_{\rm scp}/(2\pi)=50$\,\si{\mega\hertz}. We conservatively assume SC qubit coherence times on the order of microseconds ($\gamma_{\rm sc}/(2\pi)=10$\,\si{\kilo\hertz}), while best-case SC coherence times approach milliseconds \cite{Kjaergaard_2019}. \subsection{Spin-Strain Transducer Parameterization}\label{sec:III_B} We consider that the spin qubit is formed by the two low-energy fine-structure states of the SiV$^-$ as described in Appendix\,\ref{app:strain}. These two states have distinct orbital and spin character, which impedes direct coupling of the spin-qubit transition to either strain or magnetic fields. Generally, a combination of applied strain and magnetic field is thus necessary to address the SiV$^-$ spin qubit.
We thus control the spin-strain coupling via locally applied magnetic fields to realize the effective controllable Jaynes-Cummings interaction introduced in Sec.\,\ref{sec:II} [Eq.\,\eqref{eq:pulsepe}]. Several strategies have been devised to engineer the effective spin-strain coupling \cite{meesala2018strainsiv, nguyen2019strainsi} that generally rely on the application of external static or oscillating magnetic fields and optical drives as we detail in Appendix\,\ref{app:strain}. All of these approaches are perturbative in character and the maximum achievable value of the resulting effective spin-strain coupling $g_{\rm pe}$ is therefore decreased with respect to the bare strain coupling measured for fine-structure spin-allowed transitions $g_{\rm orb}$ to $g_{\rm pe}\approx 0.1 g_{\rm orb}$. The spin-strain interaction of group-IV quantum emitters in diamond has been measured at 1\,\si{\peta\hertz}/strain \cite{nguyen2019strainsi}. We estimate that for an efficient state transfer between the mechanical mode and the electron-spin states, the spin-mechanical coupling $g_{\rm orb}$ would need to reach a value of approximately $g_{\rm orb}/(2\pi)\approx 10$\,\si{\mega\hertz} (leading to the effective phonon-electron-spin $g_{\rm pe}/(2\pi)\approx 1$~\si{\mega\hertz}). To that end a mechanical resonance with zero-point strain of $\sim 10^{-9}-10^{-8}$ and a high quality factor is needed. In Section\,\ref{sec:IV} we design (opto-)mechanical cavities that fulfill both of these requirements. \section{Realization of cavity for strong phonon-spin coupling}\label{sec:IV} This section introduces a mechanical cavity that allows fast and efficient phonon-mediated quantum-state transduction to and from the electron spin. We model the cavities through a series of finite-element numerical simulations \cite{Eichenfield2009} (performed using Comsol Multiphysics \cite{comsol}) of the mechanical resonance within the continuum description of elasticity. These simulations use absorbing perfectly matched layers at the boundaries. We obtain the optical response of the diamond cavity from a solution of Maxwell's equations in the materials described via their linear-response dielectric function. We describe two architectures of high-$Q$ mechanical cavities whose zero-point strain field gives rise to the phonon-spin strong coupling required in the transduction scheme. (i) The first design is a silicon cavity with a thin (100 nm) layer of diamond heterogeneously integrated to the silicon substrate [shown in Fig.\,\ref{fig:phononspin1}]. This takes advantage of mature design and fabrication of silicon nanophononics \cite{Eichenfield2009, Safavi-Naeini2011, Chan2011}, exceptionally small decoherence rates of microwave frequency phonons in suspended single crystal silicon \cite{maccabe2019phononic}, and new techniques in heterogeneous integration of diamond nanoscale membranes \cite{mouradian2015scalable, wan2019largescale}. (ii) The second design is an all-diamond optomechanical cavity that at the same time supports an optical and phononic mode for mechanical and optical addressing of the electron spin [shown in Fig.\,\ref{fig:phononspin2}]. As depicted in Fig.\,\ref{fig:phononspin1}(a), the silicon cavity is embedded in a phononic crystal to minimize the cavity loss; it is also weakly coupled to a phononic waveguide that mediates the interaction of the cavity with the SC circuit. 
Simultaneous acoustic and microwave electrical impedance matching to such wavelength-scale structures has been demonstrated \cite{eichenfield2013design, siddiqui2018lambwave} using thin piezoelectric films, enabling coupling into the waveguide from the superconducting system with low insertion loss. The cavity is separated from the waveguide by a series of barrier holes to allow tuning the coupling rate between the discrete cavity mode and the guided phonons. Fig.\,\ref{fig:phononspin1}(b) details this cavity. The silicon platform forming the base of the cavity is covered with a thin layer (100 nm) of diamond hosting the defects. To analyze the cavity mechanical properties, we calculate the distribution of the elastic energy density of a mechanical mode of the cavity resonant at $\omega_{\rm p}/(2\pi)=2.0$\,\si{\giga\hertz}. The energy density is concentrated in the thin constriction formed by the diamond layer, ensuring efficient phonon-spin coupling. Figure\,\ref{fig:phononspin1}(d) shows the calculated bare phonon-spin coupling $g_{\rm orb}=(\epsilon_{xx}-\epsilon_{yy})d$ corresponding to the strain field in the ground state of the resonator, i.e., the zero-point strain. Here $d\approx 1$\,\si{\peta\hertz}/strain is the strain susceptibility of the defect electron spin, and $\epsilon_{xx}$ ($\epsilon_{yy}$) are the components of the zero-point strain expressed in the coordinate system of the defect (see Appendix\,\ref{app:transformed}). As we show in Fig.\,\ref{fig:phononspin1}(d), $g_{\rm orb}/(2\pi)$ reaches up to $5.4$\,\si{\mega\hertz} and we thus estimate the effective coupling $g_{\rm pe}/(2\pi)\approx 0.5$\,\si{\mega\hertz}. An equally important figure of merit characterizing the cavity performance is the cavity coupling to the waveguide modes. The distribution of the mechanical energy flux in the cavity in Fig.\,\ref{fig:phononspin1}(e) shows that the cavity mode interacts with the waveguide modes, introducing a decay rate $\kappa_{\rm p}$ of the cavity mode. Fig.\,\ref{fig:phononspin1}(f) plots $\kappa_{\rm p}$ as a function of the number of barrier holes on a logarithmic scale. As shown, $\kappa_{\rm p}$ decreases exponentially with the number of separating barrier holes, from $\sim 10^7$\,\si{\hertz} to $\sim 1$\,\si{\hertz} for seven holes. For a larger number of holes, the cavity decay rate becomes practically limited by the intrinsic material properties of silicon and diamond and could be as low as $\sim 0.1$\,\si{\hertz} \cite{maccabe2019phononic}, assuming no additional loss due to the introduction of the diamond nanomembrane. Fig.\,\ref{fig:phononspin2} shows the all-diamond optomechanical cavity, which consists of a diamond beam with an array of elliptical holes of varying sizes. The hole array simultaneously produces a phononic and photonic cavity that concentrates both the mechanical strain of the phononic mode and the optical electric field of the electromagnetic mode on the AA in the cavity center. The distribution of the elastic energy density of a phononic mode of frequency $\omega_{\rm p}/(2\pi)=17.2$\,\si{\giga\hertz} shown in Fig.\,\ref{fig:phononspin2}(a) reveals that the mechanical energy is dominantly concentrated around the center of the beam. Using the calculated values of zero-point strain of this mode we calculate the achievable bare coupling strength $g_{\rm orb}$ and show the result in Fig.\,\ref{fig:phononspin2}(b).
The maximal achievable effective phonon-spin qubit coupling in the diamond cavity thus reaches up to $g_{\rm pe}/(2\pi)\approx 0.1 g_{\rm orb}/(2\pi)=2.4$\,\si{\mega\hertz}. This diamond cavity furthermore offers the possibility to increase the efficiency of optical addressing of the diamond AAs by concentrating light of a vacuum wavelength $\lambda_{\rm opt}=732$\,\si{\nano\meter} into an optical mode that is spatially overlapping with the cavity mechanical mode. The high calculated optical quality factor $Q_{\rm opt}=10^6$ can be used to increase the efficiency of optical addressing of the diamond AA, as discussed further in Sec.\,\ref{sec:VI}. In summary, we have designed (opto)mechanical cavities that sustain mechanical modes whose zero-point strain fluctuations enable \textit{strong coupling} between an AA spin and a single quantum of mechanical motion. The feasibility of such devices marks an important practical step towards transducers relying on spin-strain interactions. Having established the achievable values of couplings and decay rates governing the dynamics of the system, we now proceed to analyse the numerical results of our quantum-state transduction protocol. \begin{figure*} \caption{Silicon phononic cavity. (a) A mechanical resonator embedded in a phononic crystal is separated from a phononic waveguide by a number of barrier holes. It is capped by a thin diamond layer placed over the silicon layer. (b) A close-up of one quarter of the silicon-diamond structure with dimensions $a=800$\,nm, $h=0.94a$, and $w=0.2a$. The silicon (diamond) layer thickness is $t_{\rm Si} \label{fig:phononspin1} \end{figure*} \begin{figure} \caption{Diamond optomechanical cavity. (a) Distribution of elastic energy density of a mode of the diamond cavity resonant at $\omega_{\rm p} \label{fig:phononspin2} \end{figure} \section{Numerical Analysis of SC-Emitter Quantum State Transfer}\label{sec:V} As discussed in Sec.\,\ref{sec:IV}, mechanical resonators with high quality factors exceeding $Q\sim 10^7$ have been demonstrated experimentally~\cite{maccabe2019phononic}. The limiting time-scale for high-fidelity state transfer is therefore the decoherence of the SC and electron spin qubits, so it is necessary to transfer the SC population rapidly into the phononic mode. The long-lived phonon then allows transduction into the AA electron spin levels of the emitter, where the qubit can be addressed optically, or is further transferred to the quantum memory, the nuclear spin \cite{bradley2019register, nguyen2019nuclearoptics}. We numerically evaluate the master equation, and show the results of the time evolution of such a system in Fig.\,\ref{fig:rabiscsp}(a). In our simulation we consider $g_{\rm scp}/(2\pi)=50$\,\si{\mega\hertz}, $g_{\rm pe}/(2\pi)=1$\,\si{\mega\hertz}, $\gamma_{\rm sc}/(2\pi)=10^{-5}$\,\si{\giga\hertz} \cite{Kjaergaard_2019}, $\gamma_{\rm p}/(2\pi)=10^{-7}$\,\si{\giga\hertz}, $\gamma_{\rm e}/(2\pi)=10^{-5}$\,\si{\giga\hertz}. The SC qubit is initialized in the excited state while the rest of the system is considered to be in the ground state. We let the system evolve in time and apply the series of control pulses [Eqs.\,\eqref{eq:pulsescp} and \eqref{eq:pulsepe}, shown in Fig.\,\ref{fig:rabiscsp}(b) as a blue line and a red dashed line, respectively] to transfer the initial population of the SC qubit (full blue line) sequentially to the phonon (red dashed line) and then to the electron spin (black dash-dotted line), as shown in Fig.\,\ref{fig:rabiscsp}(a).
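For concreteness, the following minimal Python sketch (using the QuTiP library; the time window, pulse delays, and the truncation of the phonon mode to a single excitation are illustrative assumptions of ours rather than the settings of the production simulation) shows how the master equation of Eq.\,\eqref{eq:mastereqdirect} with the pulse envelopes of Eqs.\,\eqref{eq:pulsescp} and \eqref{eq:pulsepe} can be integrated and the fidelity of Eq.\,\eqref{eq:fidelity} evaluated.
\begin{verbatim}
import numpy as np
from qutip import basis, destroy, qeye, tensor, mesolve, fidelity

# Rates in rad/ns (i.e., 2*pi x frequency in GHz), following the values above
g_scp, g_pe = 2*np.pi*50e-3, 2*np.pi*1e-3
gam_sc, gam_p, gam_e = 2*np.pi*1e-5, 2*np.pi*1e-7, 2*np.pi*1e-5

# Hilbert space: SC qubit (2) x phonon (2, single-excitation truncation) x spin (2)
sm_sc = tensor(destroy(2), qeye(2), qeye(2))   # SC lowering operator
b     = tensor(qeye(2), destroy(2), qeye(2))   # phonon annihilation operator
sm_e  = tensor(qeye(2), qeye(2), destroy(2))   # electron-spin lowering operator

tau_scp, tau_pe = 50.0, 800.0                  # pulse delays in ns (illustrative)

def coeff_scp(t, args=None):
    # sech pulse g_scp*sech(2*g_scp*(t - tau_scp)); clipped argument avoids overflow
    return g_scp/np.cosh(min(abs(2.0*g_scp*(t - tau_scp)), 700.0))

def coeff_pe(t, args=None):
    return g_pe/np.cosh(min(abs(2.0*g_pe*(t - tau_pe)), 700.0))

# Resonant Jaynes-Cummings terms with time-dependent coefficients
H = [[sm_sc*b.dag() + sm_sc.dag()*b, coeff_scp],
     [sm_e*b.dag() + sm_e.dag()*b, coeff_pe]]

# Lindblad operators: T1 decay of the SC qubit and phonon, pure dephasing of the spin
c_ops = [np.sqrt(gam_sc)*sm_sc, np.sqrt(gam_p)*b, np.sqrt(gam_e)*sm_e.dag()*sm_e]

# SC qubit excited, phonon and spin in the ground state
psi0  = tensor(basis(2, 1), basis(2, 0), basis(2, 0))
tlist = np.linspace(0.0, 2000.0, 4001)         # ns
rho_f = mesolve(H, psi0, tlist, c_ops).states[-1]

# Transfer fidelity: reduced electron-spin state vs. the initial SC-qubit state
print("F =", fidelity(rho_f.ptrace(2), basis(2, 1)))
\end{verbatim}
With the parameters listed above, such a sketch reproduces the qualitative sequential population transfer of Fig.\,\ref{fig:rabiscsp}(a); it is not a substitute for the full calculation reported here.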
To further analyse the transduction we calculate the state-transfer fidelity $\mathcal{F}$ defined in Eq.\,\eqref{eq:fidelity} as a function of the phonon-spin coupling $g_{\rm pe}$ and the electron-spin dephasing rate $\gamma_{\rm e}$. We vary $g_{\rm pe}/(2\pi)$ in the range from $100$\,kHz, representing a conservative estimate of the phonon-spin coupling rate, to $10$\,MHz, which exceeds the value we estimate for the silicon phononic cavity by an order of magnitude. The effect of the electron-spin coherence on the overall state-transfer fidelity [together with the infidelity $\log{(1-\mathcal{F})}$] is shown in Fig.\,\ref{fig:rabiscsp}(c) [Fig.\,\ref{fig:rabiscsp}(d)]. When calculating $\mathcal{F}(g_{\rm pe}, \gamma_{\rm e})$ we set $\gamma_{\rm p}/(2\pi)=10^{-7}$\,\si{\giga\hertz}, i.e., we consider a high-quality resonance of the phononic cavity. We consider $\gamma_{\rm e}/(2\pi)=10^{-4}$\,GHz as a conservative upper bound of the electron-spin decoherence rate. However, progress in quantum technology indicates that the lower value considered in our calculations, $\gamma_{\rm e}/(2\pi)=10^{-6}$\,\si{\giga\hertz}, can be achieved in state-of-the-art systems \cite{nguyen2019strainsi}. Our calculation shows that for high transfer fidelity (infidelity of less than $\sim 1$\%) the electron-spin decoherence rate should not exceed $\gamma_{\rm e}/(2\pi)\approx 10^{-5}$\,\si{\giga\hertz}, well within the experimentally accessible range, indicating that electro-mechanical state transfer is potentially achievable in present-day systems. \begin{figure} \caption{State transfer from the SC qubit to the electron spin. The system begins in the excited state of the SC qubit, then evolves according to the master equation Eq.\,\eqref{eq:mastereqdirect} \label{fig:rabiscsp} \end{figure} \section{Quantum Interfacing}\label{sec:VI} The AA electron-spin qubit serves as the network bus, mediating coupling not just to phonons but also to photons and nuclear spins. In particular, the spin-dependent optical transitions enable photon-mediated coupling of the quantum device to, for example, distant quantum memories in a quantum network, as illustrated in Fig.\,\ref{fig:schematicfig1}(a). One approach would be to perform spin-to-photon conversion by optically addressing the electron spin after the SC qubit has been transduced to it. This could be performed via a variety of spin-photon interfacing procedures, including direct optical excitation of the quantum emitter \cite{becker2018alloptical}, or a spin-photon controlled-phase gate mediated by a cavity mode. However, the fidelity of this approach is intrinsically limited by the achievable emitter-cavity cooperativity and the detuning between spin states. In particular, current experiments have demonstrated entanglement fidelities of 0.94 and heralding efficiencies of 0.45 \cite{bhaskar2019experimental}. The photon loss associated with this direct spin-to-photon transduction also destroys the quantum state that was to be transported. An alternative approach first entangles a nearby nuclear spin with the quantum network target. This can be achieved using the procedure of repeat-until-success optical heralding developed for deterministic state teleportation \cite{Pfaff532, humphreys2018deterministic, rozpkedek2019near, bhaskar2019experimental}. This scheme never actually transduces the SC qubit to the optical domain and thus avoids photon transmission losses. Instead, electron-nuclear spin gates can be used to teleport the qubit across a quantum network.
This second approach can achieve near-unity state-transfer fidelity and efficiency provided that entangled qubit pairs shared between nodes of the quantum network can be prepared on demand. This preparation of on-demand entanglement has been recently realized for diamond NV centers~\cite{humphreys2018deterministic}. So far, spin-spin teleportation fidelities of up to 0.84 have been reported~\cite{Hensen2015, humphreys2018deterministic} using the NV center in diamond. Ongoing experimental and theoretical advances promise to enable near-unity teleportation fidelity, including through environmentally insensitive quantum emitters (such as the SiV considered in this work) and entanglement schemes to improve noise- and error-resilience. The hyperfine interactions of the electron spin with nearby spins of nuclear isotopes are often an unwanted source of electron-spin decoherence, hindering the ability to maintain and control the electron-spin qubits over long time scales. Dynamical decoupling techniques \cite{de2010universal, farfurnik2015decoupling} have been applied to mitigate this decoherence and reach $\sim1$\,\si{\milli\second} to $\sim10$\,\si{\milli\second} coherence times in SiV systems \cite{christle2015isolated, sukachev2017sivspinqubit, becker2018alloptical}. However, recent theoretical and experimental work shows that the nuclear spins can be used as a resource as their quantum state can be selectively addressed and controlled via the quantum state of the electron spin itself \cite{taminiau2014universal,Waldherr2014, bradley2019register} with high fidelity. Combined with the extraordinarily long (exceeding $\sim$1\,\si{\second}) coherence times of these nuclear spins, it has been proposed that the nuclear-spin bath could serve as a quantum register \cite{taminiau2014universal, bradley2019register, nguyen2019nuclearoptics} and could store quantum states and thus serve as a QM. In Appendix\,\ref{app:nucspin} we describe how the protocol developed in \cite{bradley2019register} can implement a quantum SWAP gate allowing for state transfer from the electron-spin qubit to a single nuclear spin of a nearby $^{13}$C atom. Assuming electron-spin pure dephasing of $\gamma_{\rm e}/(2\pi)=10$\,\si{\kilo\hertz}, nuclear-spin pure dephasing of $\gamma_{\rm n}/(2\pi)=1$\,\si{\hertz}, a moderate electron-spin hyperfine coupling $A_{\parallel}=500$\,\si{\kilo\hertz}, and a conservative value of an external microwave drive Rabi frequency $\Omega_{\rm mw}/(2\pi)\approx 3.9$\,\si{\kilo\hertz}, we estimate that the state-transfer fidelity $\mathcal{F}_{\rm en}$ of this process could reach $\mathcal{F}_{\rm en}\approx 0.9975$. The compactness of this diamond QM further opens up the possibility to scale the system. Using a mechanical or microwave switching network, each SC qubit could be selectively coupled to a large number of mechanical cavities depending on the experimental architecture. As each additional coupled cavity introduces a decay channel, low-loss, high-isolation switching is required. As an example, we consider a pitch-and-catch scheme [Appendix\,\ref{app:indirect}] wherein the quantum state is launched into a mechanical waveguide with controllable coupling to many phononic resonators. For high-fidelity state transfer with $\mathcal{F} > 0.99$, the total insertion loss of all switches must remain below 0.04 dB, which may prove experimentally challenging.
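As a rough consistency check of this budget (under the simplifying assumption, ours rather than a detailed model of the switch network, that the state-transfer infidelity contributed by the switches is approximately the fraction of the excitation lost in them), the quoted number follows from \begin{align} 1-\mathcal{F}\approx 1-10^{-\mathrm{IL}/(10\,\mathrm{dB})}\;\Rightarrow\;\mathrm{IL}\lesssim -10\,\mathrm{dB}\times\log_{10}(0.99)\approx 0.044\,\mathrm{dB},\nonumber \end{align} to be distributed over all switches in the signal path.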
Considering experimentally achievable AA densities, we estimate that about 10 AAs could be individually addressed within the mechanical mode volume of $\sim 10^{7}$\,\si{\nano\meter^3} of each waveguide. These can be individually optically addressed due to the inhomogeneous distribution of their optical and microwave transitions \cite{bersin2019individual, neuman2019selective}, induced by natural variations in local static strain within the diamond crystal. Each color center enables high-fidelity coupling to $\sim 10$ nuclear spins \cite{bradley2019register}. Allowing for, say, 10 parallel QM interconnects from the QPU would thus provide a total QM capacity of about $\sim 10\times 10\times 10 = 1000$ qubits (one kqubit). Introducing spatial multiplexing (e.g., using microwave switches) would multiply the QM capacity further. As we show in Appendix\,\ref{app:molmersor}, the proposed architecture coupling a large number of electron-spin qubits to a shared mechanical mode further opens the opportunity for efficient phonon-mediated spin-entangling quantum gates \cite{molmer1999multiparticleentanglement, sorensen1999quantumcomputation1, kuzyk2018scaling, li2019honeycomb}. These gates enable preparation of highly entangled many-spin states, such as the Greenberger–Horne–Zeilinger (GHZ) state, that can serve as resources for further quantum-state manipulation. Specifically, an $N$-electron-spin GHZ state coupled to the same phononic cavity would increase the phase sensitivity to strain $N$-fold \cite{Leibfried1476}. Thus, a GHZ state prepared in advance of the SC-to-spin transduction would speed up the controlled-phase gates, yielding an $N$-fold speedup for an $N$-spin GHZ state. Combined with local gates acting on the spins and the phonon, a GHZ state could be used to boost the speed and fidelity of the SWAP gate. \section{Conclusions and Outlook}\label{sec:VII} We introduced an architecture for high-bandwidth, high-fidelity quantum state transduction between superconducting microwave qubits and AA spin qubits at rates far exceeding intrinsic system decay and decoherence. The resulting hybrid architecture combines the favorable attributes of quantum memories with those of SC quantum information processors, enabling a wide range of functionalities currently unavailable to stand-alone superconducting or spin-based architectures. Strong coupling of a single defect center spin to a high-quality mechanical cavity, the key element of our proposal, remains to be experimentally demonstrated. Nevertheless, our analysis shows the experimental feasibility of the proposal in state-of-the-art mechanical systems. Further experimental challenges exist in the demonstration of controllable electro-mechanical and mechanical-mechanical couplers that are necessary for the cascaded state transfer. Rapid development of micro-mechanical systems indicates that the above-mentioned experimental challenges can be solved in the foreseeable future. Looking further ahead, reaching fault-tolerant quantum information processing will likely require gate and measurement errors below 0.1\%. Fortunately, there appear to be several avenues to speed up, and thus improve the fidelity of, the quantum state transfer between the phonon and the spin encoding. These include further strain concentration (e.g., through thinner diamond patterning), identifying different AAs with increased strain coupling, state distillation, and the use of pre-prepared spin GHZ states; to that end, fast and reliable spin-entangling protocols must be developed both theoretically and experimentally.
To summarize our key results, the SC-AA hybrid architecture combines the complementary strengths of SC circuit quantum computing and artificial atoms, realizing the essential elements of an extensible quantum information processing architecture. There are, of course, components that need to be realized and assembled into one system, which will diminish certain performance metrics, at least near-term. Nonetheless, even our basic performance considerations show that these different capabilities -- QPU, QM, bus, and quantum network port -- should leverage distinct physical modalities in a hybrid system, much like a classical computing system. \section{Transduction from a SC qubit to a spin qubit via a waveguide}\label{app:indirect} The main text discusses a scheme in which the SC qubit is directly electro-mechanically coupled to a phononic cavity. Alternatively, the mechanical mode of the phononic cavity can be coupled to the microwave circuit via a microwave or phononic waveguide. For example, this waveguide may serve as an interconnect to a SC qubit of a quantum computer that is physically separated from the phononic cavity by a large distance, or it might represent a guided phonon wave connecting a piezoelectric coupler (interdigital transducer, IDT) with a discrete high-$Q$ mechanical mode of a phononic cavity surrounded by a phononic crystal. We break the transduction of the qubit stored in the SC device to the spin into two steps and describe in this appendix the `pitch-and-catch' state transfer of the SC state to the mechanical resonator via the waveguide. The transduction of the quantum state stored in the phonon into the electron spin via an effective Jaynes-Cummings interaction can be performed as described in the main text. \subsection{State transfer from the SC qubit to the phononic cavity via a waveguide} As schematically shown in Fig.\,\ref{fig:releasecatch}(a), we assume that the SC qubit is coupled to a waveguide which is electro-mechanically coupled to the phononic cavity (or, alternatively, to a phononic waveguide mechanically coupled to a discrete mechanical mode of a cavity). Such a system can be described by the following Hamiltonian \cite{Milonni1983recurrences}: \begin{align} H_{\rm sc-m-p}&=\hbar\omega_{\rm sc}\sigma^\dagger_{\rm sc}\sigma_{\rm sc}+\hbar\omega_{\rm p}b^\dagger b +\sum_k \hbar \omega_{k}a^\dagger_k a_k \nonumber\\ &+\sum_{k}\hbar g_{\rm sc-m}(t) (\sigma^\dagger_{\rm sc} a_k+\sigma_{\rm sc}a_k^\dagger)\nonumber\\ &+\sum_k \hbar g_{\rm m-p}(t) (b^\dagger a_k+b a^\dagger_k), \end{align} where $a_k$ ($a^\dagger_k$) are the annihilation (creation) operators of a waveguide mode $k$ of frequency $\omega_k$. The SC qubit and the mechanical mode are coupled to the waveguide via controllable time-dependent couplings $g_{\rm sc-m}(t)$ \cite{chen2014tunecoupler, Bienfait2019transferSAW, geller2015tunable} and $g_{\rm m-p}(t)$, respectively. The coupling $g_{\rm m-p}$ can be realized either as a tunable IDT coupler or as a tunable mechanical interconnect based, e.g., on interferometric modulation of the coupling in analogy with optical implementations \cite{Tanaka2007,Kumar2011}, although an implementation of such a controllable phononic coupler has yet to be demonstrated. The quantum state stored in the SC device can be released into the waveguide and subsequently absorbed by the phononic cavity.
To accomplish the pitch-and-catch state transfer with high fidelity we need to ensure that the processes of phonon emission by the SC qubit and phonon absorption by the phononic cavity are mutually time-reversed. To that end the pulse emitted by the SC qubit has to be time-symmetric and the couplings have to fulfill $g_{\rm sc-m}(t)=g_{\rm m-p}(-[t-\tau])$ \cite{cirac1997transfer}, where $\tau$ is the delay time due to the finite length of the waveguide. \begin{figure} \caption{Pitch-and-catch scheme for state transfer between the SC qubit and the phononic cavity. (a) The general scheme describing a SC qubit (of frequency $\omega_{\rm sc} \label{fig:releasecatch} \end{figure} For concreteness, we consider a waveguide of length $L$ supporting phononic (or electromagnetic) modes of the form $\propto \cos(k_j x)$, with $k_j=(N_0+j)\pi/L$ and $x$ being a position along the waveguide, where $N_0$ is a mode number that, in connection with the waveguide length $L$ and the mode velocity $c$ (assuming a linear dispersion), determines the central frequency of the selected set of modes. This function may represent a vector potential in a MW transmission line or, e.g., the mechanical displacement of a phononic wave. The free spectral range of this finite waveguide is $\delta=c\pi/L$ and the spontaneous decay of each qubit into the waveguide modes occurs at the rate (assuming time-independent couplings $g_{\rm sc-m}=g_{\rm m-p}\equiv g_{\rm qm}$): \begin{align} \kappa_{\rm sc}=\frac{2\pi g_{\rm qm}^2}{\delta}.\label{eq:defkapsc} \end{align} The objective of releasing a perfectly symmetric microwave pulse with a shape proportional to ${\rm sech}(\kappa_{\rm sc}t/2)$ can be achieved if we modulate the coupling constant in time via an electromechanical coupler \cite{Bienfait2019transferSAW}: \begin{align} g_{\rm sc-m}(t)=g_{\rm qm}\sqrt{\frac{e^{\kappa_{\rm sc}t}}{1+e^{\kappa_{\rm sc}t}}}. \end{align} The wave packet released by the superconducting qubit can then be fully absorbed by the phononic cavity if the time-reversed delayed coupling is: \begin{align} g_{\rm m-p}(t)=g_{\rm qm}\sqrt{\frac{e^{-\kappa_{\rm sc}(t-\tau)}}{1+e^{-\kappa_{\rm sc}(t-\tau)}}}.\label{eq:defgmp} \end{align} We demonstrate the pitch-and-catch scheme in Fig.\,\ref{fig:releasecatch}(b-d). Figure\,\ref{fig:releasecatch}(b) shows the time dependence of the populations of the SC qubit (full blue), the phonon (red dashed), and the MW photon in the waveguide (black dash-dotted). As shown, the pitch-and-catch scheme leads to an almost perfect transfer of population from the SC qubit to the phonon [final phonon population $\langle b^\dagger(t_{\rm end})b(t_{\rm end})\rangle\approx 1$]. The sequence of time-dependent couplings shown in Fig.\,\ref{fig:releasecatch}(c) first releases a fully symmetric propagating wave packet [a snapshot of the photon intensity is shown in Fig.\,\ref{fig:releasecatch}(d) as a function of position along the waveguide], which is subsequently perfectly absorbed by the receiving qubit.
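To make the pulse shaping explicit, the following short Python sketch (the parameter values are illustrative assumptions of ours, not the values underlying Fig.\,\ref{fig:releasecatch}) evaluates the release and catch couplings defined above, checks the time-reversal condition $g_{\rm sc-m}(t)=g_{\rm m-p}(-[t-\tau])$, and computes the corresponding time-dependent decay rates used in the cascaded master equation below.
\begin{verbatim}
import numpy as np

# Illustrative parameters (assumptions), in rad/ns and ns
g_qm  = 2*np.pi*5e-3             # bare qubit-waveguide coupling, 2*pi x 5 MHz
delta = 2*np.pi*50e-3            # free spectral range delta = c*pi/L
kappa = 2*np.pi*g_qm**2/delta    # emission rate kappa_sc = 2*pi*g_qm^2/delta
tau   = 500.0                    # pitch-to-catch delay

def g_sc_m(t):
    """Release ('pitch') coupling, shaping a sech-like emitted wave packet."""
    return g_qm*np.sqrt(np.exp(kappa*t)/(1.0 + np.exp(kappa*t)))

def g_m_p(t):
    """Time-reversed and delayed 'catch' coupling."""
    return g_qm*np.sqrt(np.exp(-kappa*(t - tau))/(1.0 + np.exp(-kappa*(t - tau))))

t = np.linspace(-200.0, 700.0, 1801)
# Time-reversal condition required for complete absorption of the wave packet
assert np.allclose(g_sc_m(t), g_m_p(-(t - tau)))

# Time-dependent decay rates entering the cascaded master equation
kappa_sc_t = 2*np.pi*g_sc_m(t)**2/delta
kappa_p_t  = 2*np.pi*g_m_p(t)**2/delta
\end{verbatim}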
We cast the model outlined above into the form of a master equation \cite{gardiner1993driving} for the density matrix $\rho_{\rm scp}$ describing the SC qubit and the phononic cavity in the single-excitation basis, while only effectively accounting for the modes of the MW waveguide: \begin{align} \frac{\partial \rho_{\rm scp}}{\partial t}&=-\frac{{\rm i}}{\hbar}[H_{\rm sc}+H_{\rm p}, \rho_{\rm scp}]\nonumber\\ +&\kappa_{\rm sc}(t)\mathcal{L}_{\sigma_{\rm sc}}(\rho_{\rm scp})+\gamma_{\rm sc}\mathcal{L}_{\sigma_{\rm sc}}(\rho_{\rm scp})\nonumber\\ +&\kappa_{\rm p}(t)\mathcal{L}_{b}(\rho_{\rm scp})+\gamma_{\rm p}\mathcal{L}_{b}(\rho_{\rm scp})\nonumber\\ +&\sqrt{\kappa_{\rm p}(t)\kappa_{\rm sc}(t)}\nonumber\\ \times&\left( e^{{\rm i}\phi}[\sigma_{\rm sc}\rho_{\rm scp}, b^\dagger]+e^{-{\rm i}\phi}[b, \rho_{\rm scp}\sigma^\dagger_{\rm sc}] \right),\label{eq:mastercascade} \end{align} with $H_{\rm sc}=\hbar\omega_{\rm sc}\sigma^\dagger_{\rm sc}\sigma_{\rm sc}$, and $H_{\rm p}=\hbar\omega_{\rm p}b^\dagger b$. It is understood that the density matrix of the phonon is evaluated at a later time $t+\tau$ (in the following we always set $\tau=0$\,s for simplicity) and the phase accumulated due to the propagation of the photon wave packet is absorbed in the definition of $\phi$. The respective time-dependent decay rates are given by: \begin{align} \kappa_{\rm sc}(t)&=\frac{2\pi g_{\rm sc-m}^2(t-\tau_{\rm pc})}{\delta},\\ \kappa_{\rm p}(t)&=\frac{2\pi g_{\rm m-p}^2(t-\tau_{\rm pc})}{\delta}, \end{align} in accordance with Eqs.\,\eqref{eq:defkapsc}-\eqref{eq:defgmp}. We consider that the pulses are applied at a later time $\tau_{\rm pc}$ to ensure smooth dynamics. The resulting time-dependent populations shown in Fig.\,\ref{fig:releasecatch}(e) perfectly capture the pitch-and-catch scheme described previously in the framework of the Schr\"{o}dinger equation [cf. populations in Fig.\,\ref{fig:releasecatch}(b)]. Lastly, we note that by effectively eliminating the waveguide we neglect the waveguide propagation losses that could further decrease the state-transfer fidelity. Nevertheless, we estimate that for phonon decay rates $\sim 1$\,\si{\hertz} achieved in state-of-the-art acoustical systems, speed of sound $c\sim 10^3$\,\si{\meter\second^{-1}}, and waveguide length $L\sim 1$\,\si{\milli\meter}, the propagation losses are so small as to result in near-unity transmission, $\sim e^{-10^{-6}}\approx 0.999999$. We integrate the master-equation description of the pitch-and-catch scheme [Eq.\,\eqref{eq:mastercascade}] to describe the full dynamics of the state transfer from the SC qubit to the electron spin and show the result of the state-transfer protocol in Fig.\,\ref{fig:releasecatch}(e). \section{Effects of strain on SiV negative center}\label{app:strain} \begin{figure} \caption{Coupling between fine-structure states of a SiV$^-$ defect and strain. (a) Fine structure states described in Appendix\,\ref{app:strain} \label{fig:strainspin} \end{figure} The effects of strain on a SiV$^-$ center have been considered theoretically and experimentally in the literature \cite{meesala2018strainsiv, nguyen2019strainsi}. The theory predicts that the strain effects can be divided into three categories according to the transformation properties of the strain field under the symmetry operations of the D$_{3{\rm d}}$ point group. Based on symmetry, the strain can be classified as $\epsilon_{\rm A_{1g}}$, $\epsilon_{\rm E_{gx}}$, and $\epsilon_{\rm E_{gy}}$.
These strain components then give rise to the longitudinal, $\alpha$, and transverse, $\beta$ and $\gamma$, strain coupling to the spin-orbit states of the color center: \begin{align} \alpha&=t_\perp (\epsilon_{xx}+\epsilon_{yy})+t_\parallel \epsilon_{zz}\sim\epsilon_{\rm A_{1g}},\\ \beta&=d(\epsilon_{xx}-\epsilon_{yy}) + f\epsilon_{zx}\sim\epsilon_{\rm E_{gx}},\label{eq:strainbeta}\\ \gamma&=-2 d(\epsilon_{xy}) + f\epsilon_{yz}\sim\epsilon_{\rm E_{gy}},\label{eq:straingamma} \end{align} where $z$ is oriented along the high-symmetry axis of the defect $[111]$, $x$ is oriented along $[\bar{1}\bar{1}2]$, and $y$ is defined by $[\bar{1}10]$. The respective values of the constants $t_\parallel$, $t_\perp$, $d$, and $f$ have been estimated to be in the range of $1$\,PHz to $2$\,PHz (we transform the relevant tensor components into the coordinate system defined by $[100]$, $[010]$, and $[001]$ in Appendix \,\ref{app:transformed}). We will use these values to estimate all necessary constants to design a potential transducer. As has been shown \cite{meesala2018strainsiv, maity2018alignment, lemonde2018phononnetworks, nguyen2019strainsi, Maity2020straincontrol}, the A$_{\rm 1g}$ strain uniformly shifts all the fine-structure-state energies and we thus disregard its effects in the following discussion. We further consider that the Hamiltonian of the fine-structure states of a SiV$^-$ in a longitudinal magnetic field is (neglecting the Jahn-Teller effect and the Orbital Zeeman effect \cite{lemonde2018phononnetworks}, for simplicity): \begin{align} H_{\rm tot}=\left( \begin{array}{cccc} B_z \gamma _{\rm S} & 0 & -{\rm i} \lambda & 0 \\ 0 & -B_z \gamma _{\rm S} & 0 & {\rm i} \lambda \\ {\rm i} \lambda & 0 & B_z \gamma _{\rm S} & 0 \\ 0 & -{\rm i} \lambda & 0 & -B_z \gamma _{\rm S} \\ \end{array} \right).\label{eq:hamfine} \end{align} The Hamiltonian is expressed in the basis of spin-orbit states $\{|e_y\uparrow \rangle, |e_y\downarrow \rangle, |e_x\uparrow \rangle, |e_x\downarrow\rangle \}$ \cite{meesala2018strainsiv, nguyen2019strainsi}. Here $\lambda$ is the spin-orbit coupling strength ($\lambda/(2\pi \hbar)\approx 23$\,GHz), $B_z$ is the magnetic field applied along the high-symmetry axis of the defect, and $\gamma_{\rm S}/(2\pi)\approx 28$\,GHz/T is the spin gyromagnetic ratio. The Hamiltonian in Eq.\,\eqref{eq:hamfine} can be diagonalized to obtain the eigenfrequencies: \begin{align} \nu_1&=-B_z \gamma _{\rm S}-\lambda ,\\ \nu_2&=-B_z \gamma _{\rm S}+\lambda ,\\ \nu_3&=B_z \gamma _{\rm S}+\lambda ,\\ \nu_4&=B_z \gamma _{\rm S}-\lambda. \end{align} and the corresponding eigenstates: \begin{align} |\psi_1\rangle&=\frac{1}{\sqrt{2}}(-{\rm i} |e_y\downarrow \rangle+|e_x\downarrow\rangle),\\ |\psi_2\rangle&=\frac{1}{\sqrt{2}}({\rm i} |e_y\downarrow \rangle+|e_x\downarrow\rangle),\\ |\psi_3\rangle&=\frac{1}{\sqrt{2}}(-{\rm i} |e_y\uparrow \rangle+|e_x\uparrow\rangle),\\ |\psi_4\rangle&=\frac{1}{\sqrt{2}}({\rm i} |e_y\uparrow \rangle+|e_x\uparrow\rangle). \end{align} The structure of the fine-structure states is schematically depicted in Fig.\,\ref{fig:strainspin}(a). The two lowest-energy states $|\psi_4\rangle$ and $|\psi_1\rangle$ can be conveniently used as the spin-qubit states. 
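As a quick numerical cross-check of the level structure above, the short Python sketch below (the value of $B_z$ is an illustrative assumption) diagonalizes the Hamiltonian of Eq.\,\eqref{eq:hamfine} and recovers the eigenfrequencies $\pm B_z\gamma_{\rm S}\pm\lambda$.
\begin{verbatim}
import numpy as np

lam  = 2*np.pi*23.0    # spin-orbit coupling, 2*pi x 23 GHz, in rad/ns
gamS = 2*np.pi*28.0    # spin gyromagnetic ratio, 2*pi x 28 GHz/T
Bz   = 0.1             # longitudinal magnetic field in tesla (illustrative)

# H_tot in the basis {|e_y up>, |e_y down>, |e_x up>, |e_x down>}
H = np.array([[ Bz*gamS, 0.0,     -1j*lam,  0.0    ],
              [ 0.0,    -Bz*gamS,  0.0,     1j*lam ],
              [ 1j*lam,  0.0,      Bz*gamS, 0.0    ],
              [ 0.0,    -1j*lam,   0.0,    -Bz*gamS]])

evals = np.linalg.eigvalsh(H)
# Expected eigenfrequencies nu_{1..4} = +/- Bz*gamS +/- lambda
expected = np.sort([-Bz*gamS - lam, -Bz*gamS + lam, Bz*gamS + lam, Bz*gamS - lam])
assert np.allclose(np.sort(evals), expected)
print(evals/(2*np.pi))   # eigenfrequencies in GHz
\end{verbatim}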
In this basis the transverse-strain Hamiltonian becomes: \begin{align} H_{\beta}=\left( \begin{array}{cccc} 0 & {\beta} & 0 & 0 \\ {\beta} & 0 & 0 & 0 \\ 0 & 0 & 0 & {\beta} \\ 0 & 0 & {\beta} & 0 \\ \end{array} \right) \end{align} and \begin{align} H_{\gamma}=\left( \begin{array}{cccc} 0 & {\rm i} \gamma & 0 & 0 \\ -{\rm i} \gamma & 0 & 0 & 0 \\ 0 & 0 & 0 & {\rm i} \gamma \\ 0 & 0 & -{\rm i} \gamma & 0 \\ \end{array} \right), \end{align} where $\beta$ and $\gamma$ are the strain components shown in Eq.\,\eqref{eq:strainbeta} and Eq.\,\eqref{eq:straingamma}. The spin degree of freedom thus cannot be flipped by the sole application of a transverse strain. Considering that the transition $|\psi_1\rangle\leftrightarrow|\psi_4\rangle$ is spin-forbidden and the states $|\psi_1\rangle$ and $|\psi_4\rangle$ have distinct orbital character, it is necessary to apply a combination of a transverse magnetic field and strain to couple to the spin qubit. We next consider possible scenarios that enable the transition $|\psi_1\rangle \leftrightarrow|\psi_4\rangle$, including (i) the application of a quasi-static magnetic field, (ii) a microwave drive, and (iii) an optical Raman scheme. \subsection{Quasi-static magnetic field} To allow the spin-qubit states to couple to strain we add to the system a perturbation in the form of an $x$-polarized magnetic field: \begin{align} H_{B_x}=\left( \begin{array}{cccc} 0 & 0 & B_x \gamma _{\rm S} & 0 \\ 0 & 0 & 0 & B_x \gamma _{\rm S} \\ B_x \gamma _{\rm S} & 0 & 0 & 0 \\ 0 & B_x \gamma _{\rm S} & 0 & 0 \\ \end{array} \right). \end{align} In the lowest order of perturbation theory, this Hamiltonian causes the following modification to the system eigenstates: \begin{align} |\psi'_1\rangle&\approx|\psi_1\rangle+\frac{B_x \gamma _{\rm S}}{\nu_1-\nu_3}|\psi_3\rangle,\label{eq:psiprime1}\\ |\psi'_2\rangle&\approx|\psi_2\rangle+\frac{B_x \gamma _{\rm S}}{\nu_2-\nu_4}|\psi_4\rangle,\\ |\psi'_3\rangle&\approx|\psi_3\rangle+\frac{B_x \gamma _{\rm S}}{\nu_3-\nu_1}|\psi_1\rangle,\\ |\psi'_4\rangle&\approx|\psi_4\rangle+\frac{B_x \gamma _{\rm S}}{\nu_4-\nu_2}|\psi_2\rangle.\label{eq:psiprime4} \end{align} The two lowest-lying spin states $|\psi_4\rangle$ and $|\psi_1\rangle$ are therefore modified to $|\psi'_4\rangle$ and $|\psi'_1\rangle$, which can be coupled via strain. In particular, in the lowest order of perturbation theory, this coupling can be estimated as: \begin{align} \langle \psi'_{4}| H_{\beta} |\psi'_1\rangle &\approx \beta\left( \frac{B_x \gamma _{\rm S}}{\nu_1-\nu_3}+\frac{B_x \gamma _{\rm S}}{\nu_4-\nu_2} \right)\nonumber\\ &=\beta\left( \frac{B_x \gamma _{\rm S}}{-2B_z\gamma_{\rm S}-2\lambda}+\frac{B_x \gamma _{\rm S}}{2B_z\gamma_{\rm S}-2\lambda} \right). \end{align} Similarly for the $\gamma$ component of strain we obtain: \begin{align} \langle \psi'_{4}| H_{\gamma} |\psi'_1\rangle &\approx -{\rm i}\gamma\left( \frac{B_x \gamma _{\rm S}}{\nu_1-\nu_3}+\frac{B_x \gamma _{\rm S}}{\nu_4-\nu_2} \right)\nonumber\\ &=-{\rm i}\gamma\left( \frac{B_x \gamma _{\rm S}}{-2B_z\gamma_{\rm S}-2\lambda}+\frac{B_x \gamma _{\rm S}}{2B_z\gamma_{\rm S}-2\lambda} \right). \end{align} Based on our simulations we further consider $\beta/(2\pi)\sim 10$\,MHz [$\gamma/(2\pi)\sim 10$\,MHz], and we obtain that the direct coupling of the states $|\psi'_1\rangle$ and $|\psi'_4\rangle$, $g_{\rm pe}=\Gamma_{\rm pe}B_x$, is of the order of $\Gamma_{\rm pe}/(2\pi)\sim 5$\,MHz/T.
A moderate magnetic bias field of 0.2 T would therefore be required to achieve the coupling rate $g_{\rm pe}/(2\pi)\sim 1$\,MHz used in the state-transfer analysis. The frequency of the spin transition $|\psi_{1}'\rangle\leftrightarrow |\psi_{4}'\rangle$ can be tuned by an external field $B_{z}$ to achieve resonant spin-phonon interaction. The pulsed modulation of the coupling could be realized by modulating the value of the magnetic field $B_x(t)$. \subsection{Microwave drive} Another way to induce a resonant interaction between the lowest-lying spin states $|\psi_1\rangle$ and $|\psi_4\rangle$, considering that $\lambda$ is the dominant energy scale, is to use a microwave drive at an appropriate frequency $\omega_{\rm d}$ (that we determine later) to drive the spin transition between the states $|\psi_4\rangle$ and $|\psi_2\rangle$, which share the same orbital character, as shown in \cite{lemonde2018phononnetworks}. This scheme is schematically depicted in Fig.\,\ref{fig:strainspin}(c). The orbital transitions $|\psi_1\rangle \to |\psi_2\rangle$ and $|\psi_4\rangle \to |\psi_3\rangle$ are coupled to the acoustic phonon via the strain susceptibility with a rate $g_{\rm orb}\approx 2\pi\times 10$\,MHz. We further introduce the shorthand notation $\sigma_{ij}=|\psi_i\rangle\langle\psi_j |$ and write the effective Hamiltonian of the system under consideration: \begin{align} H_{\rm sys}&=\Delta \sigma_{22}+\omega_{\rm B}\sigma_{44}+\Omega(t)\left( e^{{\rm i}[\theta(t)+\omega_{\rm d}t]}\sigma_{42}+{\rm H.c.}\right)\nonumber\\ &+g_{\rm orb}\left (b^\dagger\sigma_{12}+{\rm H.c.}\right)+\omega_{\rm p} b^\dagger b.\label{eq:ham4level} \end{align} Here we neglect any influence of the off-resonant state $|\psi_{3}\rangle$; $\Delta=E_2-E_1$, $\omega_{\rm B}=E_4-E_1$, $\omega_{\rm p}$ is the phonon frequency, $b$ ($b^{\dagger}$) is the phonon annihilation (creation) operator, and $\Omega(t)$ and $\theta(t)$ are the amplitude and phase envelopes of the external microwave drive, respectively. The Hamiltonian in Eq.\,\eqref{eq:ham4level} can be approximated in such a way that the Raman-mediated coupling of the two lowest spin states to the phonon becomes explicit. To that end we first introduce the interaction picture given by the Hamiltonian: \begin{align} H_{\rm ip}=\omega_{\rm B}\sigma_{44}+\Delta\sigma_{22}+\omega_{\rm p} b^\dagger b. \end{align} This leads to the following rotating-frame Hamiltonian: \begin{align} H_{\rm rot}&=\Omega(t)\left[ \sigma_{42}e^{{\rm i}[\theta(t)+(\omega_{\rm d}+\omega_{\rm B}-\Delta)t]}+{\rm H.c.}\right]\nonumber \\ &+g_{\rm orb}\left [b^\dagger\sigma_{12}e^{{\rm i}(\omega_{\rm p}-\Delta)t}+{\rm H.c.}\right].\label{eq:mwdriveham} \end{align} We further set $\omega_{\rm d}=\omega_{\rm p}-\omega_{\rm B}$ to ensure a resonant drive. Adiabatic elimination can be applied to the Hamiltonian in Eq.\,\eqref{eq:mwdriveham} to obtain the effective coupling constant between the phonon and the electron spin: \begin{align} g_{\rm p-e}\approx g_{\rm eff}^{\rm mw}= g_{\rm orb}\frac{\Omega (t)e^{{\rm i}\theta(t)}}{\delta}, \end{align} with $\delta=\omega_{\rm p}-\Delta$. To ensure the validity of the adiabatic approximation we further require that $|\Omega|< |\delta|$ and we therefore estimate $g_{\rm pe}/(2\pi)=g_{\rm eff}^{\rm mw}/(2\pi)\approx 0.1 g_{\rm orb}/(2\pi)\approx 1$\,MHz. The microwave drive employed in this scheme ensures the resonant character of the phonon-spin coupling and eliminates the necessity to tune the magnitude of the magnetic field $B_z$ (i.e.
of $\omega_{\rm B}$). \subsection{Optical Raman drive} Finally, an optical Raman drive has been proposed \cite{lemonde2018phononnetworks} to enable resonant coupling between the transition connecting the lowest-energy spin-orbit states and the cavity phonon, as shown in Fig.\,\ref{fig:strainspin}(d). The Hamiltonian describing this Raman scheme, expressed in the basis of states perturbed by the magnetic field [Eqs.\,\eqref{eq:psiprime1} to \eqref{eq:psiprime4}] and an optically accessible excited state, $|\rm E\uparrow\rangle$, can be written as: \begin{align} H_{\rm Raman}&=\Delta\sigma'_{22}+\omega_{\rm B}\sigma'_{44}+\omega_{\rm p} b^\dagger b+\omega_{\rm E}\sigma'_{\rm EE}\nonumber\\ &+\Omega_{\rm A}(\sigma'_{\rm 2E}e^{{\rm i}[\theta_{\rm A}(t)+\omega_{\rm A}t]}+{\rm H.c.})\nonumber\\ &+\Omega_{\rm C}(\sigma'_{\rm 4E}e^{{\rm i}[\theta_{\rm C}(t)+\omega_{\rm C}t]}+{\rm H.c.})\nonumber\\ &+g_{\rm orb}(\sigma'_{12}b^\dagger + \sigma'_{21}b), \end{align} where $\sigma'_{ij}=|\psi'_{i}\rangle\langle \psi'_{j}|$ and $|\psi'_{\rm E}\rangle\equiv|{\rm E}\uparrow\rangle$ is an electronic excited state of SiV$^{-}$. Here $\Omega_{\rm A}$ and $\Omega_{\rm C}$ are related to the amplitudes of the two pumping lasers and are proportional to the dipole coupling elements between the respective states, and $\theta_{\rm A}$ ($\theta_{\rm C}$) are slowly varying phases. The respective laser-drive frequencies $\omega_{\rm A}$ and $\omega_{\rm C}$ are adjusted so that $\omega_{\rm p}=\omega_{\rm B}+\omega_{\rm A}-\omega_{\rm C}$. Under such conditions, it is possible to obtain the following effective phonon-electron-spin coupling: \begin{align} g_{\rm p-e}\approx g_{\rm eff}^{\rm Raman}= \frac{\Omega_{\rm A}e^{{\rm i}\theta_{\rm A}(t)}\Omega_{\rm C}e^{-{\rm i}\theta_{\rm C}(t)}g_{\rm orb}}{(\omega_{\rm p}-\Delta)(\omega_{\rm C}-\omega_{\rm E}+\omega_{\rm p})}. \end{align} Since $g_{\rm eff}^{\rm Raman}$ has been obtained perturbatively, it is necessary that $\Omega_{\rm A}\Omega_{\rm C}/[(\omega_{\rm p}-\Delta)(\omega_{\rm C}-\omega_{\rm E}+\omega_{\rm p})]\ll 1$, and the effective phonon-electron-spin coupling is thus substantially reduced. The advantage of this scheme is in the tunability of the externally applied lasers, which can be used to rapidly adjust the condition for the resonant phonon-spin coupling or modulate the magnitude of the coupling strength. Notice also that in order for this scheme to be efficient, the phonon frequency must be close to the transition frequency $\Delta$. Lastly, we mention that the different strain susceptibilities of the ground and excited electronic-state manifolds could also be used to induce a ground-state spin-strain coupling. This scheme has, for example, been described in Ref. \cite{kuzyk2018scaling} for a nitrogen-vacancy color center. \section{Coordinate transformation of the strain tensor components}\label{app:transformed} In Appendix\,\ref{app:strain} we discuss the effects of strain on the fine-structure states of a SiV$^-$ color center and express the strain tensor in the internal system of coordinates of the color center defined with respect to the diamond crystallographic directions as: $z$ along $[111]$, $x$ along $[\bar{1}\bar{1}2]$, and $y$ along $[\bar{1}10]$. However, in applications it is more natural to consider the strain tensor in the set of coordinates defined by the basis vectors of the diamond cubic lattice.
For convenience, we therefore transform the relevant tensor components that yield the electron-spin-phonon coupling into this coordinate system, using the numbered indices 1, 2, and 3 to denote the $[100]$, $[010]$, and $[001]$ directions, respectively:
\begin{align}
\epsilon_{xx}-\epsilon_{yy} &=(-\epsilon _{11}-\epsilon _{22}+2 \epsilon _{33}+2 [\epsilon _{12}+ \epsilon _{21}]\nonumber\\
&-[\epsilon _{13}+\epsilon_{31}]-[\epsilon _{23}+\epsilon _{32}])/3\\
\epsilon_{zx}&=-(\epsilon _{11}+\epsilon _{22}-2 \epsilon _{33}-2\epsilon _{13}-2\epsilon _{23}\nonumber\\
&+\epsilon_{12}+\epsilon _{21}+\epsilon _{31}+\epsilon _{32})/(3\sqrt{2})\\
\epsilon_{xy}&=\frac{\epsilon _{11}-\epsilon _{12}+\epsilon _{21}-\epsilon _{22}-2 \epsilon _{31}+2 \epsilon _{32}}{2 \sqrt{3}}\\
\epsilon_{yz}&=\frac{-\epsilon _{11}-\epsilon _{12}-\epsilon _{13}+\epsilon _{21}+\epsilon _{22}+\epsilon _{23}}{\sqrt{6}}.
\end{align}
This form is convenient for expressing the strain in a diamond slab etched along the $(100)$ crystallographic plane of diamond, which we consider in the design of the phononic cavity.
\section{State transfer from the electron spin to the nuclear spin}\label{app:nucspin}
To complete the chain of state-transfer steps leading to the transduction of a state stored in an SC qubit to a nuclear-spin qubit, we discuss, following Ref.~\cite{bradley2019register}, an example of a state-transfer protocol that can be applied to perform the step connecting the electronic and nuclear-spin qubits. We assume that the nuclear spin, described by the Hamiltonian
\begin{align}
H_{\rm nn}=\frac{\hbar\omega_{\rm L}}{2}\sigma_{z}^{\rm n},
\end{align}
is coupled to the electron spin via a longitudinal interaction:
\begin{align}
H_{\rm e-n}=\frac{A_\parallel}{4} \sigma^{\rm e}_{z}\sigma^{\rm n}_z,
\end{align}
where $\sigma_{z}^{\rm e}$ and $\sigma_{z}^{\rm n}$ are the electron-spin and nuclear-spin Pauli $z$ operators, respectively. This interaction Hamiltonian results from the hyperfine interaction between the electronic and the nuclear spin. The nuclear spin is furthermore driven by a microwave field of frequency $\omega_{\rm mw}=\omega_{\rm L}+\frac{A_{\parallel}}{2}$, amplitude $\Omega_{\rm mw}$, and adjustable phase $\theta_{\rm mw}$:
\begin{align}
H_{\rm mw}=\Omega_{\rm mw}\left[\sigma_{\rm n}e^{{\rm i}(\theta_{\rm mw}+\omega_{\rm mw}t)}+\sigma^\dagger_{\rm n}e^{-{\rm i}(\theta_{\rm mw}+\omega_{\rm mw}t)}\right].
\end{align}
This drive is conditionally resonant when the electron spin is in state $|1_{\rm e}\rangle$ and is off-resonant when the electron is in $|0_{\rm e}\rangle$. After transforming the total Hamiltonian $H_{\rm nn}+H_{\rm e-n}+H_{\rm mw}$ into an interaction picture and considering the conditional character of the drive, we obtain the effective Hamiltonian $H_{\rm en}$:
\begin{align}
H_{\rm en}&=-\hbar {A_{\parallel}}\sigma^\dagger_{\rm n}\sigma_{\rm n}|0_{\rm e}\rangle\langle 0_{\rm e}|\nonumber\\
&+\hbar\Omega_{\rm mw}\left[\cos(\theta_{\rm mw})\sigma^{\rm n}_x+\sin(\theta_{\rm mw})\sigma^{\rm n}_y\right]|1_{\rm e}\rangle\langle 1_{\rm e}|.\label{eq:hamen}
\end{align}
Importantly, $H_{\rm en}$ describes the time evolution of the system accurately only if $\Omega_{\rm mw}\ll A_{\parallel}$.
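The conditional dynamics generated by $H_{\rm en}$ can be illustrated with a minimal numerical sketch. The short Python script below is an illustration only ($\hbar=1$; the evolution times are arbitrary choices, and the parameter values $A_{\parallel}/(2\pi)=500$\,kHz and $\Omega_{\rm mw}/(2\pi)\approx 3.9$\,kHz are those quoted later in this appendix). It evolves the two nuclear-spin blocks of Eq.\,\eqref{eq:hamen} separately and confirms the behavior described in the following paragraph: the transverse nuclear Bloch components rotate at a rate set by $A_{\parallel}$ when the electron is in $|0_{\rm e}\rangle$, while the nuclear spin undergoes a Rabi rotation at angular velocity $2\Omega_{\rm mw}$ when the electron is in $|1_{\rm e}\rangle$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Pauli matrices and the nuclear "number" operator (hbar = 1)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
n_n = np.diag([0.0, 1.0]).astype(complex)        # sigma_n^dag sigma_n

A_par = 2 * np.pi * 500e3    # A_parallel/(2 pi) = 500 kHz
Omega = 2 * np.pi * 3.9e3    # Omega_mw/(2 pi) ~ 3.9 kHz (<< A_parallel)
theta = 0.0                  # microwave phase theta_mw

# H_en is block diagonal in the electron state, so the two nuclear-spin
# blocks can be evolved separately.
H_e0 = -A_par * n_n                                      # electron in |0_e>
H_e1 = Omega * (np.cos(theta) * X + np.sin(theta) * Y)   # electron in |1_e>

def bloch(psi):
    return [float(np.real(psi.conj() @ P @ psi)) for P in (X, Y, Z)]

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)      # |+_n>
zero = np.array([1, 0], dtype=complex)                   # |0_n>

for t in (0.25e-6, 0.60e-6, 32e-6):
    x0, y0, z0 = bloch(expm(-1j * H_e0 * t) @ plus)  # precession about z
    x1, y1, z1 = bloch(expm(-1j * H_e1 * t) @ zero)  # Rabi rotation about x
    print(f"t = {t*1e6:5.2f} us:  <sx>_0e = {x0:+.3f}"
          f" (cos A||t = {np.cos(A_par*t):+.3f}),"
          f"  <sz>_1e = {z1:+.3f} (cos 2Wt = {np.cos(2*Omega*t):+.3f})")
\end{verbatim}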
When the electron spin is in $|0_{\rm e}\rangle$ the nuclear spin undergoes a free precession with an angular velocity $-A_{\parallel}$, and when the electron spin is in $|1_{\rm e}\rangle$ the nuclear spin rotates around an axis ${\bf e}_{\theta_{\rm mw}}=\cos(\theta_{\rm mw}){\bf e}_x+\sin(\theta_{\rm mw}){\bf e}_y$ (${\bf e}_x, {\bf e}_y$ being unit vectors along $x$ and $y$, respectively) with angular velocity $2\Omega_{\rm mw}$. We next consider that the electron spin is periodically flipped via a dynamical decoupling sequence of the form $(\tau - \pi - 2\tau - \pi - \tau)^{N/2}$, where $N$ is an (even) number of pulses applied to the system. The total duration of the pulse sequence is $T_{N}=2N\tau$ and we consider that the gate applied to the nuclear spin is completed at $t=T_{N}$. The phase $\theta_{\rm mw}$ of the microwave drive must be adjusted after each pulse $k$ as: \begin{align} \theta_{\rm mw}&=(k-1)\phi_{k}+\phi_{c}+\phi_0 & {\rm for}\; k\;{\rm odd},\nonumber\\ \theta_{\rm mw}&=(k-1)\phi_{k}+\phi_0 & {\rm for}\; k\;{\rm even},\nonumber\\ \end{align} where $\phi_k=-(2-\delta_{1k})\tau A_{\parallel}$, and $\phi_c=0$ for unconditional rotations of the nuclear spin ($\phi_c=\pi$ for conditional rotations of the nuclear spin). The angle of rotation $\varphi$ of the nuclear spin about the axis determined by $\cos(\phi_0){\bf e}_x+\sin(\phi_0){\bf e}_y$ is $\varphi=2\Omega_{\rm mw}\tau N$. The Rabi frequency $\Omega_{\rm mw}$ must therefore be appropriately adjusted in order to achieve the desired rotation angle $\varphi$. We denote the unconditional gate implemented by the above described protocol as $R_{\phi_0, \varphi}^{\rm n}$ and the conditional gate as $C_{\phi_0, \varphi}^{\rm n}$. The conditional gate rotates the nuclear spin by an angle $-\varphi$ if the electron spin is initially in $|1_{\rm e}\rangle$. The following sequence of controlled and uncontrolled rotations produces a SWAP gate exchanging the states of the electron and the nuclear spin: \begin{align} |\psi_{\rm f}\rangle =CX^{\rm n}\cdot H^{\rm e}\cdot H^{\rm n}\cdot CX^{\rm n}\cdot H^{\rm e}\cdot H^{\rm n}\cdot CX^{\rm n}|\psi_{\rm i}\rangle, \end{align} where $CX^{\rm n}$ is the controlled not gate conditionally flipping the nuclear spin, $H^{\rm s}$ is the single-qubit Hadamard gate acting on the electron qubit, ${\rm s}={\rm e}$, or the nuclear qubit, ${\rm s}={\rm n}$. The single and two-qubit gates outlined above can be constructed from the conditional rotation of the nuclear spin and local qubit operations. In particular, the Hadamard gate acting on the nuclear spin can be constructed as $H^{\rm n}=R^{\rm n}_{0, \pi}\cdot R^{\rm n}_{\frac{\pi}{2}, \frac{\pi}{2}}$. Similarly, $CX^{\rm n}=S_{\frac{\pi}{2}}\cdot R^{\rm n}_{\rm 0, \frac{\pi}{2}}\cdot C^{\rm n}_{\rm 0, \frac{\pi}{2}}$, with $S_{\frac{\pi}{2}}=\sigma_{\rm e}\sigma^\dagger_{\rm e}+{\rm i}\sigma^\dagger_{\rm e}\sigma_{\rm e}$ (a rotation around the $z$ axis). Note that the time-duration of the single-qubit rotations applied to the electron spin is mainly dependent on the intensity of the applied pulses and we treat it as practically instantaneous. On the other hand, the gates applied to the nuclear spin rely on a free time evolution of the system limited by $\Omega_{\rm mw}\ll A_{\parallel}$. This sets the limit to the achievable state-transfer fidelity $\mathcal{F}_{\rm en}$ when spin dephasing is taken into account. 
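As a quick sanity check of the gate sequence above, the following short Python snippet (an illustration only; it uses ideal CNOT and Hadamard matrices rather than the conditional-rotation constructions of $CX^{\rm n}$ and $H^{\rm n}$ described in the text) verifies that three controlled-NOT gates interleaved with Hadamards on both qubits compose to a SWAP of the electron and nuclear qubits.
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
H  = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X  = np.array([[0, 1], [1, 0]])

# Register ordered as (electron) x (nuclear); CX^n flips the nuclear
# spin conditioned on the electron being in |1_e>.
CXn = np.kron(np.diag([1, 0]), I2) + np.kron(np.diag([0, 1]), X)
He, Hn = np.kron(H, I2), np.kron(I2, H)

# |psi_f> = CX^n . H^e . H^n . CX^n . H^e . H^n . CX^n |psi_i>
U = CXn @ He @ Hn @ CXn @ He @ Hn @ CXn

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
print(np.allclose(U, SWAP))   # True: the sequence exchanges the two qubits
\end{verbatim}
In the protocol itself, $CX^{\rm n}$ and $H^{\rm n}$ are built from the conditional and unconditional rotations described above, so this identity should be read as a check of the structure of the sequence rather than of its physical implementation.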
We phenomenologically account for pure dephasing of both the electron and the nuclear spin via the Lindblad superoperators $\gamma_{\rm e}\mathcal{L}_{\sigma^\dagger_{\rm e}\sigma_{\rm e}}(\rho)$ and $\gamma_{\rm n}\mathcal{L}_{\sigma^\dagger_{\rm n}\sigma_{\rm n}}(\rho)$ [see Eq.\,\eqref{eq:lindblad}], which together with $H_{\rm en}$ [Eq.\,\eqref{eq:hamen}] describe the dynamics of the system. We estimate the fidelity of the state transfer performed by the SWAP gate for a moderate value of the longitudinal spin-spin coupling $A_{\parallel}/(2 \pi)= 500$\,\si{\kilo\hertz} and we set the drive amplitude to $\Omega_{\rm mw}/(2 \pi)\approx 3.9$\,\si{\kilo\hertz}. We further consider $\gamma_{\rm e}/(2 \pi)= 10$\,\si{\kilo\hertz} and $\gamma_{\rm n}/(2 \pi) = 1$\,\si{\hertz}. With these values we estimate $\mathcal{F}_{\rm en}\approx 0.9975$, as given in the main text.
\section{Two-qubit gates applicable to the electron spins}\label{app:molmersor}
\begin{figure}
\caption{M{\o}lmer--S{\o}rensen-type population dynamics of two electron-spin qubits coupled to a common phonon mode; see the text for details.}
\label{fig:molmersorensen}
\end{figure}
One of the advantages of the architecture proposed in this paper is that the color-center electron spins can be used to prepare non-classical many-body quantum-mechanical states that can be further utilized for processing of quantum information, quantum teleportation, or speedup of quantum-state transduction. In this appendix we suggest a gate that could be used to generate a maximally entangled (Bell) state, i.e.\ a two-qubit GHZ state, of a pair of electron-spin qubits coupled to a common vibrational mode. In Appendix\,\ref{app:strain} we have shown that the electron-spin states can be coupled to a strain field via effective controllable coupling schemes. This leads to an effective interaction between a mode of an acoustic cavity and two electron spins:
\begin{align}
H_{\rm eff}&=\hbar\omega_{\rm e1}\sigma_{\rm e1}^\dagger \sigma_{\rm e1}+\hbar\omega_{\rm e2}\sigma_{\rm e2}^\dagger \sigma_{\rm e2}+ \hbar\omega_{\rm p} b^\dagger b \nonumber\\
& +\hbar g_{\rm eff}(\sigma_{\rm e1}+\sigma_{\rm e1}^\dagger) (b+b^\dagger)\nonumber\\
&+\hbar g_{\rm eff}(\sigma_{\rm e2}+\sigma_{\rm e2}^\dagger) (b+b^\dagger).\label{eq:gateham1}
\end{align}
Here $\sigma_{\rm e1}=| 0_1\rangle\langle 1_1 |$ ($\sigma_{\rm e2}=| 0_2\rangle\langle 1_2 |$) are the lowering operators of the respective two-level spin systems, $b$ ($b^\dagger$) is the annihilation (creation) operator of the shared phonon mode, $\omega_{\rm e1}$ and $\omega_{\rm e2}$ are the frequencies of the respective spins, and $\omega_{\rm p}$ is the frequency of the phonon mode. The effective coupling $g_{\rm eff}$ can be realized as described in Appendix\,\ref{app:strain}. It is convenient to transform the Hamiltonian in Eq.\,\eqref{eq:gateham1} into the interaction picture:
\begin{align}
H_{\rm eff}^{\rm I}&=g_{\rm eff}(\sigma_{\rm e1}e^{-{\rm i}\omega_{\rm e1}t}+\sigma_{\rm e1}^\dagger e^{{\rm i}\omega_{\rm e1}t}) (b e^{-{\rm i}\omega_{\rm p}t}+b^\dagger e^{{\rm i}\omega_{\rm p}t})\nonumber\\
&+g_{\rm eff}(\sigma_{\rm e2}e^{-{\rm i}\omega_{\rm e2}t}+\sigma_{\rm e2}^\dagger e^{{\rm i}\omega_{\rm e2}t}) (b e^{-{\rm i}\omega_{\rm p}t}+b^\dagger e^{{\rm i}\omega_{\rm p}t}).\label{eq:gateham1INT}
\end{align}
Next, we assume that the coupling $g_{\rm eff}$ can be modulated in time as $g_{\rm eff}(t)=\frac{g_{\rm eff}^0}{4}(e^{{\rm i}\omega_1 t}+e^{{\rm i}\omega_2 t}+{\rm H.c.})$, where H.c.\ stands for the Hermitian conjugate.
We assume equal spin frequencies, $\omega_{\rm e1}=\omega_{\rm e2}\equiv\omega_{\rm e}$, and a situation where $\omega_{\rm p}> \omega_{\rm e}$; we therefore select the two drive frequencies as $\omega_1=\omega_{\rm e}+\omega_{\rm p}-\delta_{\rm MS}$ and $\omega_2=\omega_{\rm p}-\omega_{\rm e}-\delta_{\rm MS}$, with $\delta_{\rm MS}$ a small detuning. The interaction-picture Hamiltonian [Eq.\,\eqref{eq:gateham1INT}] then becomes (keeping only the slowly oscillating terms in the RWA):
\begin{align}
H_{\rm eff}^{\rm RWA}&\approx g^0_{\rm eff}[(\sigma_{\rm e1}+\sigma_{\rm e2})b^\dagger e^{-{\rm i}(\omega_{\rm e}-\omega_{\rm p}+\omega_{2})t}\nonumber\\
&+ (\sigma_{\rm e1}^\dagger +\sigma_{\rm e2}^\dagger)b^\dagger e^{{\rm i}(\omega_{\rm e}+\omega_{\rm p}-\omega_{1})t}+{\rm H.c.}].
\label{eq:gatehamdrivenINT}
\end{align}
From this Hamiltonian we can obtain the effective coupling $g_{\rm MS}$ between the state $|{0_1}\rangle\otimes|{0_2}\rangle\otimes |0 \rangle\equiv|{\rm gg},0\rangle$ and the doubly excited state $|1_1\rangle\otimes|{1_2}\rangle\otimes |0 \rangle\equiv|{\rm ee},0\rangle$ (more generally $|{\rm gg},n\rangle$ and $|{\rm ee},n\rangle$, with $n$ the number of phonons):
\begin{align}
g_{\rm MS}\approx \frac{(g_{\rm eff}^0)^2}{8\delta_{\rm MS}}.
\end{align}
We plot the resulting dynamics of the populations of the states $|{\rm gg},0\rangle$ and $|{\rm ee},0\rangle$ in Fig.\,\ref{fig:molmersorensen}. The population of the state $|{\rm gg}, 0\rangle$ (blue line) coherently transfers into the population of $|{\rm ee},0\rangle$ (orange line). For comparison we plot the expression $0.5[\cos(2g_{\rm MS}t)+1]$ as the yellow line in Fig.\,\ref{fig:molmersorensen}. We consider that both electron spins and the phonon are subject to decoherence as described in the main text. If we stop the time evolution at $t\approx 0.675$\,\si{\micro\second} we obtain a highly entangled Bell state, the two-qubit GHZ state (up to a phase factor).
\end{document}
\begin{document} \title{Maximum Matching in Online Preemptive Model} \begin{abstract} We investigate the power of randomized algorithms for the maximum cardinality matching (MCM) and the maximum weight matching (MWM) problems in the online preemptive model. In this model, the edges of a graph are revealed one by one and the algorithm is required to always maintain a valid matching. On seeing an edge, the algorithm has to either accept or reject the edge. If accepted, then the adjacent edges are discarded. The complexity of the problem is settled for deterministic algorithms~\cite{mcgregor,varadaraja}. Almost nothing is known for randomized algorithms. A lower bound of $1.693$ is known for MCM with a trivial upper bound of two. An upper bound of $5.356$ is known for MWM. We initiate a systematic study of the same in this paper with an aim to isolate and understand the difficulty. We begin with a primal-dual analysis of the deterministic algorithm due to~\cite{mcgregor}. All deterministic lower bounds are on instances which are trees at every step. For this class of (unweighted) graphs we present a randomized algorithm which is $\frac{28}{15}$-competitive. The analysis is a considerable extension of the (simple) primal-dual analysis for the deterministic case. The key new technique is that the distribution of primal charge to dual variables depends on the ``neighborhood'' and needs to be done after having seen the entire input. The assignment is asymmetric: in that edges may assign different charges to the two end-points. Also the proof depends on a non-trivial structural statement on the performance of the algorithm on the input tree. The other main result of this paper is an extension of the deterministic lower bound of Varadaraja~\cite{varadaraja} to a natural class of randomized algorithms which decide whether to accept a new edge or not using {\em independent} random choices. This indicates that randomized algorithms will have to use {\em dependent} coin tosses to succeed. Indeed, the few known randomized algorithms, even in very restricted models follow this. We also present the best possible $\frac{4}{3}$-competitive randomized algorithm for MCM on paths. \end{abstract} \section{Introduction} Matching has been a central problem in combinatorial optimization. Indeed, algorithm design in various models of computations, sequential, parallel, streaming, etc., have been influenced by techniques used for matching. We study the maximum cardinality matching (MCM) and the maximum weight matching (MWM) problems in the online preemptive model. In this model, edges $e_1,\dots,$ $e_m$ of a graph, possibly weighted, are presented one by one. An algorithm is required to output a matching $M_i$ after the arrival of each edge $e_i$. This model constrains an algorithm to accept/reject an edge as soon as it is revealed. If accepted, the adjacent edges, if any, have to be discarded from $M_i$. An algorithm is said to have a \textit{competitive ratio} $\alpha$ if the cost of the matching maintained by the algorithm is at least $\frac{1}{\alpha}$ times the cost of the offline optimum over all inputs. The deterministic complexity of this problem is settled. For maximum cardinality matching (MCM), it is an easy exercise to prove a tight bound of two. The weighted version (MWM) is more difficult. Improving an earlier result of Feigenbaum et al, McGregor~\cite{mcgregor} gave a deterministic algorithm together with an ingenious analysis to get a competitive ratio of $3+2\sqrt{2}\approx5.828$. 
Later, this was proved to be optimal by Varadaraja~\cite{varadaraja}. Very little is known about the power of randomness for this problem. Recently, Epstein et al.~\cite{epstein} proved a lower bound of $1+\ln 2 \approx1.693$ on the competitive ratio of randomized algorithms for MCM. This is the best lower bound known even for MWM. Epstein et al.~\cite{epstein} also give a $5.356$-competitive randomized algorithm for MWM. In this paper, we initiate a systematic study of the power of randomness for this problem. Our main contribution is perhaps to throw some light on where the difficulty lies. We first give an analysis of McGregor's algorithm using the traditional primal-dual framework (see Appendix~\ref{pd}). All lower bounds for deterministic algorithms (both for MCM and MWM) employ {\em growing trees}. That is, the input graph is a tree at every stage. It is then natural to start our investigation with this class of inputs. For this class, we give a randomized algorithm (that uses two bits of randomness) that is $\frac{28}{15}$-competitive. While this result is modest, the analysis is already considerably more involved than the traditional primal-dual analysis. In the traditional primal-dual analysis of the matching problem, the primal charge (every selected edge contributes one to the charge) is distributed (perhaps equally) to the two end-points. In the online case, this is usually done as the algorithm proceeds. Our assignment depends on the structure of the final tree, so this assignment happens at the end. Our charge distribution is {\em not} symmetric. It depends on the position of the edge in the tree (we make this clear in the analysis) as well as on the behavior of neighboring edges. The main technical lemma shows that the charge distribution will depend on a neighborhood of distance at most four. We also note that these algorithms are (restricted versions of) randomized greedy algorithms even in the offline setting. Obtaining an approximation ratio less than two for general graphs, even in the offline setting, is a notorious problem. See \cite{poloczek,chan} for a glimpse of the difficulty. The optimal maximal matching algorithm for MCM and McGregor's~\cite{mcgregor} optimal deterministic algorithm for MWM are both local algorithms. The choice of whether a new edge should be accepted or rejected is based only on the weight of the new edge and the weight of the conflicting edges, if any, in the current matching. It is natural to add randomness to such local algorithms, and to ask whether they do better than the known deterministic lower bounds. An obvious way to add randomness is to accept/reject the new edge with a certain probability, which depends only on the new edge and the conflicting edges in the current matching. The choice of adding a new edge is independent of the previous coin tosses used by the algorithm. We call such algorithms {\em randomized local algorithms}. We show that randomized local algorithms cannot do better than optimal deterministic algorithms. This indicates that randomized algorithms may have to use dependent coin tosses to get better approximation ratios. Indeed, the algorithm by Epstein et al. does this. So do our randomized algorithms. The randomized algorithm of Epstein et al.~\cite{epstein} works as follows. For a parameter $\theta$, they round the weights of the edges to powers of $\theta$ randomly, and then they update the matching using a deterministic algorithm.
The weights get distorted by a factor $\frac{\theta\ln\theta}{\theta-1}$ in the rounding step, and the deterministic algorithm has a competitive ratio of $2+\frac{2}{\theta-2}$ on \textit{$\theta$-structured graphs}, i.e., graphs with edge weights being powers of $\theta$. The overall competitive ratio of the randomized algorithm is $\frac{\theta \ln \theta}{\theta-1}\cdot \left(2+\frac{2}{\theta-2}\right)$, which is minimized at $\theta\approx5.356$. A natural approach to reducing this competitive ratio is to improve the approximation ratio for $\theta$-structured graphs. However, we prove that the competitive ratio $2+\frac{2}{\theta-2}$ is tight for $\theta$-structured graphs, as long as $\theta\geq 4$, for deterministic algorithms. One (minor) contribution of this paper is a randomized algorithm for MCM on paths that achieves a competitive ratio of $\frac{4}{3}$, with a matching lower bound. The other (minor) contribution of this paper is to highlight model-specific bounds. There is a difference in the models in which the lower and upper bounds have been proved, and this may be one reason for the large gaps.
\begin{comment}
\subsection{Organization of the paper}
Due to space restrictions, the two main results, the upper bound on growing trees and the lower bound for randomized algorithms which use independent coin tosses, are presented in the main body of the paper. The analysis of McGregor's algorithm, an optimal randomized algorithm for MCM on paths, and the algorithm for bounded-degree trees are moved to the appendix. In section~\ref{lb}, we present various lower bounds for specific types of algorithms and inputs.
\end{comment}
\section{Barely Random Algorithms for MCM}
In this section, we present barely random algorithms, that is, algorithms that use a constant number of random bits, for MCM on growing trees. The ideal way to read the paper, for a reader of leisure, is to first read our analysis of McGregor's algorithm (presented in Appendix~\ref{pd}), then the analysis of the algorithm for trees with maximum vertex degree three (presented in Appendix~\ref{ub2}), and then this section. The dual-variable management, which is the key contribution, gets progressively more complicated. It is local in the first two cases. Appendix~\ref{sa_gt} also gives an example which shows why a non-local analysis is needed. Here are the well-known primal and dual formulations of the matching problem. The primal LP relaxation is known to be exact for bipartite graphs. For general graphs, odd set constraints have to be added, but they are not needed in this paper.
\begin{center}
\begin{tabular}{c|c}
Primal LP & Dual LP \\\hline
$\max \sum_e x_e$ & $\min \sum_v y_v$\\
$\forall v:\sum_{v\in e}x_e \leq 1$ & $\forall e: y_u + y_v \geq 1$\\
$x_e\geq 0$ & $y_v \geq 0$
\end{tabular}
\end{center}
\subsection{Randomized Algorithm for MCM on Growing Trees}\label{ub3}
In this section, by using only two bits of randomness, we beat the deterministic lower bound of $2$ for MCM on growing trees.
\begin{algorithm}[H]
\caption{Randomized Algorithm for Growing Trees}
\begin{enumerate}
\item The algorithm maintains four matchings: $M_1,M_2,M_3,$ and $M_4$.
\item On receipt of an edge $e$, the processing happens in two phases.
\begin{enumerate}
\item {\bf The augment phase.} The new edge $e$ is added to each $M_i$ in which there are no edges adjacent to $e$.
\item {\bf The switching phase.} For $i=2,3,4$, in order, $e$ is added to $M_i$ (if it was not added in the previous phase) and the conflicting edge is discarded, provided it decreases the quantity $\sum_{i,j\in[4],i\neq j}|M_i\cap M_j|$. \end{enumerate} \item Output matching $M_i$ with probability $\frac{1}{4}$. \end{enumerate} \end{algorithm} We begin by assuming (we justify this below) that all edges that do not belong to any matching are leaf edges. This helps in simplifying the analysis. Suppose that there is an edge $e$ which does not belong to any matching, but is not a leaf edge. By removing $e$, the tree is partitioned into two subtrees. The edge $e$ is added to the tree in which it has $4$ neighboring edges. (There must be such a subtree, see next para.) Each tree is analysed separately. We will say that a vertex(/an edge) is {\em covered } by a matching $M_i$ if there is an edge in $M_i$ which is incident on(/adjacent to) the vertex(/edge). We also say that an edge is {\em covered } by a matching $M_i$ if it belongs to $M_i$. We begin with the following observations. \begin{itemize} \item After an edge is revealed, its end points are covered by all $4$ matchings. \item An edge $e$ that does not belong to any matching has $4$ edges incident on one of its end points such that each of these edges belong to a distinct matching. This holds when the edge is revealed, and does not change subsequently. \end{itemize} An edge is called {\em internal} if there are edges incident on both its end points. An edge is called {\em bad} if its end points are covered by only $3$ matchings. We begin by proving some properties about the algorithm. The key structural lemma that keeps ``influences'' of bad edges local is given below. The two assertions in the Lemma have to be proved together by induction. \begin{lemma}\label{internal} \begin{enumerate} \item An internal edge is covered by at least four matchings (when counted with multiplicities). It is not necessary that these four edges be in distinct matchings. \item If $p,q$ and $r$ are three consecutive vertices on a path, then bad edges cannot be incident on all $3$ of these vertices, (as in figure~\ref{IC}). \end{enumerate} \end{lemma} The proof of this lemma is in the Appendix~\ref{pf_inbad}. \begin{figure} \caption{Forbidden Configuration} \label{IC} \end{figure} \begin{theorem} The randomized algorithm for finding MCM on growing trees is $\frac{28}{15}$-competitive. \end{theorem} A local analysis like the one in Appendix~\ref{ub2} will not work here. For a reason, see Appendix~\ref{sa_gt}. The analysis of this algorithm proceeds in two steps. Once all edges have been seen, we impose a partial order on the vertices of the tree and then with the help of this partial order, we distribute the primal charge to the dual variables, and use the primal-dual framework to infer the competitive ratio. If every edge had four adjacent edges in some matching (counted with multiplicities) then the distribution of dual charge is easy. However we do have edges which have only three adjacent edges in matchings. We would like the edges in matchings to contribute more to the end-points of these edges. Then, the charge on the other end-point would be less and we need to balance this through other edges. Details follow.\\ \textbf{Ranks:} Consider a vertex $v$. Let $v_1,\dots,v_k$ be the neighbors of $v$. 
For each $i$, let $d_i$ denote the maximum distance from $v$ to any leaf if there was no edge between $v$ and $v_i$.The rank of $v$ is defined as the minimum of all the $d_i$. Observe that the rank of $v$ is one plus the second highest rank among the neighbors of $v$. Thus there can be at most one neighbor of vertex $v$ which has rank at least the rank of $v$. All leaves have rank $0$. Rank $1$ vertices have at most one non-leaf neighbor. \begin{lemma}\label{tree1} There exists an assignment of the primal charge amongst the dual variables such that the dual constraint for each edge $e\equiv (u,v)$ is satisfied at least $\frac{15}{28}$ in expectation, i.e. $\mathbb{E}[y_u+y_v]\geq \frac{15}{28}$. \end{lemma} \begin{proof} Consider an edge $e\equiv (u,v)$ where rank of $u$ is $i$ and rank of $v$ is $j$. We will show that $y_u+y_v\geq 2+\epsilon$ for such an edge, when summed over all four matchings. The value of $\epsilon$ is chosen later. The proof is by induction on the lexicographic order of $<j, i>$, $j\geq i$. \\ \textbf{Dual Variable Management:} Consider an edge $e$ from a vertex of rank $i$ to a vertex of rank $j$, such that $i\leq j$. This edge will distribute its primal weight between its end-points. The exact values are discussed in the proof of the claim below. In general, we look to transfer all of the primal charge to the higher ranked vertex. But this does not work and we need a finer strategy. This is detailed below. \begin{itemize} \item If $e$ does not belong to any matching, then it does not contribute to the value of dual variables. \item If $e$ belongs to a single matching then, depending on the situation, one of $0$, $\epsilon$ or $2\epsilon$ of its primal charge will be assigned to the rank $i$ vertex and rest will be assigned to the rank $j$ vertex. The small constant $\epsilon$ is determined later. \item If $e$ belongs to two matchings, then at most $3\epsilon$ of its primal charge will be assigned to the rank $i$ vertex as required. The rest is assigned to the rank $j$ vertex. \item If $e$ belongs to three or four matchings, then its entire primal charge is assigned to the rank $j$ vertex. \end{itemize} The analysis breaks up into six cases. \textbf{Case 1.} Suppose $e$ does not belong to any matching. Then it must be a leaf edge. Hence, $i=0$. There must be $4$ edges incident on $v$ besides $e$, each belonging to a distinct matching. Of these $4$, at least $3$ say $e_1$, $e_2$, and $e_3$, must be from lower ranked vertices to the rank $j$ vertex $v$. The edges $e_1$, $e_2$, and $e_3$, each assign a charge of $1-2\epsilon$ to $y_v$. Therefore, $y_u+y_v\geq 3-6\epsilon \geq 2+\epsilon$. \textbf{Case 2.} Suppose $e$ is a bad edge that belongs to a single matching. Since no internal edge can be a bad edge, $i=0$. This implies (Lemma~\ref{internal}) that, there is an edge $e_1$ from a rank $j-1$ vertex to $v$, which belongs to a single matching. Also, there is an edge $e_2$, from $v$ to a higher ranked vertex, which also belongs to a single matching. The edge $e$ assigns a charge of $1$ to $y_v$. If $e_1$ assigns a charge of $1$ (or $1-\epsilon$) to $y_v$, then $e_2$ assigns $\epsilon$ (or $2\epsilon$ respectively) to $y_v$. In either case, $y_u+y_v=2+\epsilon$. The key fact is that $e_1$ could not have assigned $2\epsilon$ to a lower ranked vertex. Since, then, by Lemma~\ref{internal}, $e$ cannot be a bad edge. \textbf{Case 3.} Suppose $e$ is not a bad edge, and it belongs to a single matching. \\ \textit{Case 3(a).} $i=0$. There are two sub cases. 
\begin{itemize} \item There is an edge $e_1$ from some rank $j-1$ vertex to $v$ which belongs to $2$ matchings, or there are two other edges $e_2$ and $e_3$ from some lower ranked vertices to $v$, each belonging to separate matchings. The edge $e$ assigns a charge of $1$ to $y_v$. Either $e_1$ assigns a charge of at least $2-3\epsilon$ to $y_v$, or $e_2$ and $e_3$ assign a charge of at least $1-2\epsilon$ each, to $y_v$. In either case, $y_u+y_v\geq 3-4\epsilon\geq 2+\epsilon$. \item There is one edge $e_1$, from a rank $j-1$ vertex to $v$, which belongs to a single matching, and there is one edge $e_2$, from $v$ to a higher ranked vertex, which belongs to $2$ matchings. The edge $e$ assigns a charge of $1$ to $y_v$. If $e_1$ assigns a charge of $1$ (or $1-\epsilon$ or $1-2\epsilon$) to $y_v$, then $e_2$ assigns $\epsilon$ (or $2\epsilon$ or $3\epsilon$ respectively) to $y_v$. In either case, $y_u+y_v=2+\epsilon$. \end{itemize} \textit{Case 3(b).} $i>0$. There are two sub cases. \begin{itemize} \item There are at least two edges $e_1$ and $e_2$ from lower ranked vertices to $u$, and one edge $e_3$ from $v$ to a higher ranked vertex. Each of these edges are in one matching only (not necessarily the same matching). \item There is one edge $e_4$ from a vertex of lower rank to $u$, at least one edge $e_5$ from a lower ranked vertex to $v$, and one edge $e_6$ from $v$ to a vertex of higher rank. All these edges belong to a single matching (not necessarily the same). \end{itemize} The edge $e$ assigns a charge of $1$ among $y_u$ and $y_v$. If $e_1$ and $e_2$ assign a charge of at least $1-2\epsilon$ each, to $y_u$, then $y_u+y_v\geq3-4\epsilon\geq 2+\epsilon$. Similarly, if $e_4$ assigns a charge of at least $1-2\epsilon$ to $y_u$, and $e_5$ assigns a charge of at least $1-2\epsilon$ to $y_v$, then $y_u+y_v\geq3-4\epsilon\geq 2+\epsilon$. \textbf{Case 4.} Suppose $e$ is a bad edge that belongs to two matchings. Then $i=0$. This implies that there is an edge $e_1$, from $v$ to a vertex of higher rank which belongs to a single matching. The edge $e$ assigns a charge of $2$ to $y_v$, and the edge $e_1$ assigns a charge of $\epsilon$ to $y_v$. Thus, $y_u+y_v=2+\epsilon$. \textbf{Case 5.} Suppose $e$ is not a bad edge and it belongs to two matchings. This means that either there is an edge $e_1$ from a lower ranked vertex to $u$, which belongs to at least one matching, or there is an edge from some lower ranked vertex to $v$ that belongs to at least one matching, or there is an edge from $v$ to some higher ranked vertex which belongs to two matchings. The edge $e$ assigns a charge of $2$ among $y_u$ and $y_v$. The neighboring edges assign a charge of $\epsilon$ to $y_u$ or $y_v$ (depending on which vertex it is incident), to give $y_u+y_v\geq 2+\epsilon$. \textbf{Case 6.} Suppose, $e$ belongs to $3$ or $4$ matchings, then trivially $y_u+y_v\geq 2+\epsilon$. From the above conditions, the best value for the competitive ratio is obtained when $\epsilon=\frac{1}{7}$, yielding $\mathbb{E}[y_u+y_v]\geq \frac{15}{28}$. \end{proof} Lemma~\ref{tree1} implies that the competitive ratio of the algorithm is at most $\frac{28}{15}$. \section{Lower Bounds}\label{lb} \begin{comment} \subsection{Lower bound for MCM} In this section we prove a lower bound of $2$ on competitive ratio for maximum cardinality matching in the online preemptive model for specific type of algorithms. 
The algorithms that we consider are the ones which take decision of accepting or rejecting a new edge based only on current matching stored by the algorithm. The decision is completely independent of the previous coin tosses done by the algorithm. \begin{theorem} No randomized algorithm which makes decision of accepting or rejecting a new edge based on current matching stored by it can have a competitive ratio better than 2 for maximum cardinality matching in online preemptive model. \end{theorem} \begin{proof} To prove the above theorem we use Yao's principle, which says \begin{align*} \sup_d \min_{A\in \mathcal{A}} C(A,d) = \inf_R \max_{G\in \mathcal{G}_n} E(R,G) \end{align*} Here, $\mathcal{G}_n$ is the set of all undirected graphs on $n$ vertices. $\mathcal{A}$ is family of all deterministic algorithms. $d$ is a distribution on $\mathcal{G}_n$. $R$ is a distribution on $\mathcal{A}$. $E(R,G)$ denotes the expected cost of $R$ on input $G$. Given a distribution $d$, $C(A,d)$ denotes the average cost of deterministic algorithm $A$. From the above description, a randomized algorithm can be interpreted as a distribution on set of deterministic algorithms. To prove a lower bound $L$ on how randomized algorithms of certain type can perform, one can give a distribution on class of inputs and show that no corresponding deterministic algorithm can have a better competitive ratio than $C$ on it. We give a distribution on class of inputs such that no deterministic algorithm, which makes a decision of accepting or rejecting the new edge based only on its current matching, will have a competitive ratio better than 2. The input graph contains $4$ layers of vertices, namely $A, B, C, D$, each layer containing $n$ vertices. Let the vertices be named $a_1,\dots,a_n, b_1,\dots, b_n, c_1,\dots, c_n, d_1,\dots, d_n$. The entire input looks like this - a matching between layers $A$ and $B$, a complete bipartite graph between layers $B$ and $C$, and a matching between layers $C$ and $D$. The edges are revealed in following order: First, all the edges from $b_1$ are revealed in a random order (i.e. a random permutation of all the edges from $b_1$), then all edges from $b_2$ are revealed in random order and so on, till all edges from $b_n$ are revealed in random order. Then edges between layers $C$ and $D$ are revealed one by one in arbitrary order. \begin{claim} Expected size of matching held by any deterministic algorithm which takes decision of accepting or rejecting a new edge solely on its current matching is at most $n+\log(n)$. \end{claim} \noindent To see that the above claim holds, we make some important observations. \begin{itemize} \item First, the deterministic algorithm will not switch if the new edge has two conflicting edges in the current matching. If it does, it will only get a worse size of matching. \item Second, (if some vertex $v$ is matched by the deterministic algorithm and) if the algorithm makes a switch on receiving some edge on $v$, then it switches on every new edge on $v$. If the algorithm does not switch on receiving a new edge on $v$, then it will never switch no matter how many edges are presented on $v$. This is due to fact that decisions made by the algorithm are based only on its current matching and are independent of any labeling or number of edges received till now on a particular vertex. \end{itemize} From the above observations, the probability that $b_1$ is matched to $a_1$ is $1/n$. 
This is because, the edge between $a_1$ and $b_1$ has to be either the first one or the last one to be revealed from $b_1$, for it to be held in the final matching. For, the algorithms which do not switch, this edge has to be the first one and for the algorithms that do switch, this edge has to be the last one. This event happens with probability $1/n$. Similarly, the probability that $b_2$ is matched to $a_2$ is $1/(n-1)$ and the probability that $b_i$ is matched to $a_i$ is $1/(n-i+1)$. Thus, the expected size of matching between layers $A$ and $B$ is $\sum_{i>0} \frac{1}{n-i+1}$ which is $\log(n)$. After all the edges from vertices in layer $B$ are revealed, the expected size of matching between layers $B$ and $C$ is $n-\log(n)$. And after the edges between layers $C$ and $D$ are revealed, the expected size of matching held by the algorithm will be $n+\log(n)$. Now, the size of optimum matching is $2n$. Therefore, as $n\rightarrow\infty$, the competitive ratio of any deterministic algorithm is at least 2. This proves the theorem. \end{proof} \end{comment} \subsection{Lower Bound for MWM}\label{sec_lb_mwm} In this section, we prove a lower bound on the competitive ratio of a natural class of randomized algorithms in the online preemptive model for MWM. The algorithms in this class, which we call \textit{local} algorithms, have the property that their decision to accept or to reject a new edge is completely determined by the weights of the new edge and the conflicting edges in the matching maintained by the algorithm. Indeed, the optimal deterministic algorithm by McGregor \cite{mcgregor} is a local algorithm. The notion of locality can be extended to randomized algorithms as well. In case of {\em randomized local algorithms}, the event that a new edge is accepted is independent of all such previous events, given the current matching maintained by the algorithm. Furthermore, the probability of this event is completely determined by the weight of the new edge and the conflicting edges in the matching maintained by the algorithm. Given that the optimal $(3+2\sqrt{2})$-competitive deterministic algorithm for MWM is a local algorithm, it is natural to ask whether randomized local algorithms can beat the deterministic lower bound of $(3+2\sqrt{2})$ by Varadaraja \cite{varadaraja}. We answer this question in the negative, and prove the following theorem. \begin{theorem}\label{thm_local} No randomized local algorithm for the MWM problem can have a competitive ratio less than $\alpha=3+2\sqrt{2}\approx5.828$. \end{theorem} Note that the randomized algorithm by Epstein et al. \cite{epstein} does not fall in this category, since the decision of accepting or rejecting a new edge is also dependent on the outcome of the coins tossed at the beginning of the run of the algorithm. (For details, see Section 3 of \cite{epstein}.) In order to prove Theorem \ref{thm_local}, we will crucially use the following lemma, which is a consequence of Section 4 of \cite{varadaraja}. \begin{lemma}\label{lem_varadaraja} If there exists an infinite sequence $(x_n)_{n\in\mathbb{N}}$ of positive real numbers such that for all $n$, $\beta x_n\geq\sum_{i=1}^{n+1}x_i+x_{n+1}$, then $\beta\geq3+2\sqrt{2}$. \end{lemma} \subsubsection{Characterization of local randomized algorithms} Suppose, for a contradiction, that there exists a randomized local algorithm $\mathcal{A}$ with a competitive ratio $\beta<\alpha=3+2\sqrt{2}$, $\beta\geq1$. 
Define the constant $\gamma$ to be
\[\gamma=\frac{\beta\left(1-\frac{1}{\alpha}\right)}{\left(1-\frac{\beta}{\alpha}\right)}=\frac{\beta(\alpha-1)}{\alpha-\beta}\geq1>\frac{1}{\alpha}.\]
For $i=0,1,2$, if $w$ is the weight of a new edge and it has $i$ conflicting edges, in the current matching, of weights $w_1,\ldots,w_i$, then $f_i(w_1,\ldots,w_i,w)$ gives the probability of switching to the new edge. The behavior of $\mathcal{A}$ is completely described by these three functions. We need the following key lemma to state our construction of the adversarial input. The lemma states (informally) that given an edge of weight $w_1$, there exist weights $x$ and $y$, close to each other, such that if an edge of weight $x$ (respectively $y$) is adjacent to an edge of weight $w_1$, the probability of switching is at least (respectively at most) $\delta$.
\begin{lemma}\label{lem_xy}
For every $\delta\in(0,1/\alpha)$, $\epsilon>0$, and $w_1$, there exist $x$ and $y$ such that $f_1(w_1,x)\geq\delta$, $f_1(w_1,y)\leq\delta$, $x-y\leq\epsilon$, and $w_1/\alpha\leq y\leq x\leq\gamma w_1$.
\end{lemma}
The proof of this lemma can be found in Appendix (section~\ref{sec_lemmas}).
\subsubsection{The adversarial input}
The adversarial input is parameterized by four parameters: $\delta\in(0,1/\alpha)$, $\epsilon>0$, $m$, and $n$, where $m$ and $n$ determine the graph and $\delta$ and $\epsilon$ determine the weights of its edges. Define the infinite sequences $(x_i)_{i\in\mathbb{N}}$ and $(y_i)_{i\in\mathbb{N}}$, as functions of $\epsilon$ and $\delta$, as follows. $x_1=1$, and for all $i$, having defined $x_i$, let $x_{i+1}$ and $y_i$ be such that $f_1(x_i,x_{i+1})\geq\delta$, $f_1(x_i,y_i)\leq\delta$, $x_{i+1}-y_i\leq\epsilon$, and $x_i/\alpha\leq y_i\leq x_{i+1}\leq\gamma x_i$. Lemma \ref{lem_xy} ensures that such $x_{i+1}$ and $y_i$ exist. Furthermore, by induction on $i$, it is easy to see that for all $i$,
\begin{equation}\label{eqn_bounds}
1/\alpha^i\leq y_i\leq x_{i+1}\leq\gamma^i
\end{equation}
These sequences will be the weights of the edges in the input graph. Given $m$ and $n$, the input graph contains several layers of vertices, namely $A_1,A_2,\dots,A_{n+1},A_{n+2}$ and $B_1,B_2,\dots,B_{n+1}$, each layer containing $m$ vertices. The vertices in the layer $A_i$ are named $a^i_1,a^i_2,\ldots,a^i_m$, and those in layer $B_i$ are named analogously. We have a complete bipartite graph $J_i$ between layers $A_i$ and $A_{i+1}$ and an edge between $a^i_j$ and $b^i_j$ for every $i$, $j$ (that is, a matching $M_i$ between $A_i$ and $B_i$). For $i=1$ to $n$, the edges $\{(a^i_j,a^{i+1}_{j'})|1\leq j,j'\leq m\}$, in the complete bipartite graph between $A_i$ and $A_{i+1}$, have weight $x_i$, and the edges $\{(a^i_j,b^i_j)|1\leq j\leq m\}$, in the matching between $A_i$ and $B_i$, have weight $y_i$. The edges in the complete bipartite graph $J_{n+1}$ have weight $x_n$, and those in the matching $M_{n+1}$ have weight $y_n$. Note that the weights $x_i$ and $y_i$ depend on $\epsilon$ and $\delta$, but are independent of $m$ and $n$. Clearly, the weight of the maximum weight matching in this graph is bounded from below by the weight of the matching $\bigcup_{i=1}^{n+1}M_i$. Since $y_i\geq x_{i+1}-\epsilon$, we have
\begin{equation}\label{eqn_opt_lb}
\opt\geq m\left(\sum_{i=1}^ny_i+y_n\right)\geq m\left(\sum_{i=2}^{n+1}x_i+x_{n+1}-(n+1)\epsilon\right)
\end{equation}
The edges of the graph are revealed in $n+1$ phases. In the $i^{\text{\tiny{th}}}$ phase, the edges in $J_i\cup M_i$ are revealed as follows.
The phase is divided into $m$ sub phases. In the $j^{\text{\tiny{th}}}$ sub phase of the $i^{\text{\tiny{th}}}$ phase, edges incident on $a^i_j$ are revealed, in the order $(a^i_j,a^{i+1}_1),(a^i_j,a^{i+1}_2),\ldots,(a^i_j,a^{i+1}_m),$ $(a^i_j,b^i_j)$. \subsubsection{Analysis of the lower bound} The overall idea of bounding the weight of the algorithm's matching is as follows. In each phase $i$, we will prove that as many as $m-O(1)$ edges of $J_i$ and only $\delta m+O(1)$ edges of $M_i$ are picked by the algorithm. Furthermore, in the ${i+1}^{\text{\tiny{th}}}$ phase, since $m-O(1)$ edges from $J_{i+1}$ are picked, all but $O(1)$ edges of the edges picked from $J_i$ are discarded. Thus, the algorithm ends up with $\delta m+O(1)$ edges from each $M_i$, and $O(1)$ edges from each $J_i$, except possibly $J_n$ and $J_{n+1}$. The algorithm can end up with at most $m$ edges from $J_n\cup J_{n+1}$, since the size of the maximum matching in $J_n\cup J_{n+1}$ is $m$. Thus, the weight of the algorithm's matching is at most $mx_n$ plus a quantity that can be neglected for large $m$ and small $\delta$. Let $X_i$ (resp. $Y_i$) be the set of edges of $J_i$ (resp. $M_i$) held by the algorithm at the end of input. Then we have, \begin{lemma}\label{lem_Y} For all $i=1$ to $n$ \[E[|Y_i|]\leq\delta m+\frac{1-\delta}{\delta}\] \end{lemma} \begin{lemma}\label{lem_X} For all $i=1$ to $n-1$ \[E[|X_i|]\leq\frac{1-\delta}{\delta}\] \end{lemma} \begin{lemma}\label{lem_Yn} \[E[|Y_{n+1}|]\leq\delta m+\frac{1-\delta}{\delta}\] \end{lemma} The proof of the above lemmas can be found in Appendix (section~\ref{sec_lemmas}). We are now ready to prove Theorem \ref{thm_local}. The expected weight of the matching held by $\mathcal{A}$ is \[E[\alg]\leq\sum_{i=1}^ny_iE[|Y_i|]+y_nE[|Y_{n+1}|]+\sum_{i=1}^{n-1}x_iE[|X_i|]+x_nE[|X_n\cup X_{n+1}|]\] Using Lemmas \ref{lem_Y}, \ref{lem_Yn}, \ref{lem_X}, and the facts that $y_i\leq x_{i+1}$ for all $i$ and $E[|X_n\cup X_{n+1}|]\leq m$ (since $X_n\cup X_{n+1}$ is a matching in $J_n\cup J_{n+1}$), we have \[E[\alg]\leq\left(\delta m+\frac{1-\delta}{\delta}\right)\left(\sum_{i=2}^{n+1}x_i+x_{n+1}\right)+\frac{1-\delta}{\delta}\sum_{i=1}^{n-1}x_i+mx_n\] Since the algorithm is $\beta$-competitive, for all $n$, $m$, $\delta$ and $\epsilon$ we must have $E[\alg]$ $\geq \opt/\beta$. 
From the above and equation (\ref{eqn_opt_lb}), we must have \begin{center} \begin{tabular}{ccc} $\left(\delta m+\frac{1-\delta}{\delta}\right)\left(\sum_{i=2}^{n+1}x_i+x_{n+1}\right)$ & \multirow{2}{*}{$\geq$}& \multirow{2}{*}{$\frac{m}{\beta}\left(\sum_{i=2}^{n+1}x_i+x_{n+1}-(n+1)\epsilon\right)$}\\ $+\frac{1-\delta}{\delta}\sum_{i=1}^{n-1}x_i+mx_n$ \end{tabular} \end{center} Since the above holds for arbitrarily large $m$, ignoring the terms independent of $m$ (recall that $x_i$'s are functions of $\epsilon$ and $\delta$ only), we have for all $\delta$ and $\epsilon$, \[\delta\left(\sum_{i=2}^{n+1}x_i+x_{n+1}\right)+x_n\geq\frac{1}{\beta}\left(\sum_{i=2}^{n+1}x_i+x_{n+1}-(n+1)\epsilon\right)\] that is, \[x_n\geq\frac{1}{\beta}\left(\sum_{i=2}^{n+1}x_i+x_{n+1}-(n+1)\epsilon\right)-\delta\left(\sum_{i=2}^{n+1}x_i+x_{n+1}\right)\] Taking limit inferior as $\delta\rightarrow0$ in the above inequality, and noting that limit inferior is super-additive we get for all $\epsilon$, \begin{center} \begin{tabular}{ccl} \multirow{2}{*}{$\liminf_{\delta\rightarrow0}x_n$} & \multirow{2}{*}{$\geq$} &$\frac{1}{\beta}\left(\sum_{i=2}^{n+1}\liminf_{\delta\rightarrow0}x_i+\liminf_{\delta\rightarrow0}x_{n+1}-(n+1)\epsilon\right)$\\ &&$-\limsup_{\delta\rightarrow0}\delta\left(\sum_{i=2}^{n+1}x_i+x_{n+1}\right)$ \end{tabular} \end{center} \begin{comment} \[\liminf_{\delta\rightarrow0}x_n\geq\frac{1}{\beta}\left(\sum_{i=2}^{n+1}\liminf_{\delta\rightarrow0}x_i+\liminf_{\delta\rightarrow0}x_{n+1}-(n+1)\epsilon\right)-\limsup_{\delta\rightarrow0}\delta\left(\sum_{i=2}^{n+1}x_i+x_{n+1}\right)\] \end{comment} Recall that $x_i$'s are functions of $\epsilon$ and $\delta$, and that from equation (\ref{eqn_bounds}), $1/\alpha^i\leq x_{i+1}\leq \gamma^i$, where the bounds are independent of $\delta$. Thus, all the limits in the above inequality exist. Moreover, $\lim_{\delta\rightarrow0}\delta\left(\sum_{i=2}^{n+1}x_i+x_{n+1}\right)$ exists and is $0$, for all $\epsilon$. This implies $\limsup_{\delta\rightarrow0}\delta\left(\sum_{i=2}^{n+1}x_i+x_{n+1}\right)=0$ and we get for all $\varepsilon$, \[\liminf_{\delta\rightarrow0}x_n\geq\frac{1}{\beta}\left(\sum_{i=2}^{n+1}\liminf_{\delta\rightarrow0}x_i+\liminf_{\delta\rightarrow0}x_{n+1}-(n+1)\epsilon\right)\] Again, taking limit inferior as $\epsilon\rightarrow0$, and using super-additivity, \[\liminf_{\epsilon\rightarrow0}\liminf_{\delta\rightarrow0}x_n\geq\frac{1}{\beta}\left(\sum_{i=2}^{n+1}\liminf_{\epsilon\rightarrow0}\liminf_{\delta\rightarrow0}x_i+\liminf_{\epsilon\rightarrow0}\liminf_{\delta\rightarrow0}x_{n+1}\right)\] Note that the above holds for all $n$. Finally, let $\overline{x_n}=\liminf_{\epsilon\rightarrow0}\liminf_{\delta\rightarrow0}x_{n+1}$. Then we have the infinite sequence $(\overline{x_n})_{n\in\mathbb{N}}$ such that for all $n$, $\beta\overline{x_n}\geq\sum_{i=1}^{n+1}\overline{x_i}+\overline{x_{n+1}}$. Thus, by Lemma \ref{lem_varadaraja}, we have $\beta\geq3+2\sqrt{2}$. \subsection{Lower Bound for $\theta$ structured graphs} Recall that an edge weighted graph is said to be $\theta$-structured if the weights of the edges are powers of $\theta$. The following bound applies to any deterministic algorithm for MWM on $\theta$-structured graphs. \begin{theorem}\label{thm_theta} No deterministic algorithm can have a competitive ratio less than $2+\frac{2}{\theta-2}$ for MWM on $\theta$-structured graphs, for $\theta\geq 4$. \end{theorem} The proof of the above theorem can be found in Appendix (section~\ref{theta_bnd}). 
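Before moving on, we note that the content of Lemma~\ref{lem_varadaraja} is easy to probe numerically. The short Python sketch below (an illustration only, not part of any proof) builds, for a given $\beta$, the sequence that satisfies the constraint of the lemma with equality at every step, i.e.\ it always picks the largest admissible next term. For the values tested, the sequence stays positive when $\beta\geq 3+2\sqrt{2}\approx 5.828$ and is forced below zero after finitely many terms when $\beta$ is smaller.
\begin{verbatim}
import math

def first_nonpositive(beta, n_max=500):
    # Greedy sequence: x_1 = 1 and x_{n+1} = (beta*x_n - sum_{i<=n} x_i)/2,
    # the largest value allowed by beta*x_n >= sum_{i<=n+1} x_i + x_{n+1}.
    x, s = 1.0, 1.0
    for n in range(2, n_max + 1):
        x = (beta * x - s) / 2.0
        if x <= 0:
            return n          # the sequence cannot be continued past here
        s += x
    return None               # stayed positive for all n_max terms

for beta in (5.0, 5.5, 5.8, 3 + 2 * math.sqrt(2), 5.9):
    print(f"beta = {beta:.4f}: first non-positive term: "
          f"{first_nonpositive(beta)}")
\end{verbatim}
Since choosing each term as large as the constraint permits can only help in extending a positive sequence, the collapse of this greedy sequence for $\beta<3+2\sqrt{2}$ is consistent with the statement of the lemma.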
\section{Randomized Algorithm for Paths}\label{ub}
When the input graph is restricted to be a collection of paths, every new edge that arrives connects two (possibly empty) paths. Our algorithm consists of several cases, depending on the lengths of the two paths.
\begin{algorithm}[H]
\caption{Randomized Algorithm for Paths}
\begin{algorithmic}[1]
\STATE $M=\emptyset$. \COMMENT{$M$ is the matching stored by the algorithm.}
\FOR{each new edge $e$}
\STATE Let $L_1\geq L_2$ be the lengths of the two (possibly empty) paths $P_1,P_2$ that $e$ connects.
\STATE If $L_1>0$ (resp. $L_2>0$), let $e_1$ (resp. $e_2$) be the edge on $P_1$ (resp. $P_2$) adjacent to $e$.
\IF{$e$ is a disjoint edge \COMMENT{$L_1=L_2=0$ } }
\STATE{$M=M\cup \{e\}$.}
\ELSIF{$e$ is revealed on a disjoint edge $e_1$ \COMMENT{$L_1=1,L_2=0$. $e_1\in M$}}
\STATE{with probability $\frac{1}{2}$, $M=M\setminus\{e_1\}\cup\{e\}$}.
\ELSIF{$e$ is revealed on an end point of a path of length $>1$ \COMMENT{$L_1>1,L_2=0$}}
\STATE{if $e_1\notin M$, $M=M\cup \{e\}$ }.
\ELSIF{$e$ joins two disjoint edges \COMMENT{$L_1=L_2=1$. $e_1,e_2\in M$}}
\STATE{with probability $\frac{1}{2}$, $M=M\setminus \{e_1,e_2\}\cup \{e\}$}.
\ELSIF{$e$ joins a path and a disjoint edge \COMMENT{$L_1>1,L_2=1$. $e_2\in M$}}
\STATE{if $e_1\notin M$, $M=M\setminus\{e_2\}\cup\{e\}$}.
\ELSIF{$e$ joins two paths of length $>1$ \COMMENT{$L_1>1,L_2>1$}}
\STATE{if $e_1\notin M$ and $e_2\notin M$, $M=M\cup \{e\}$}.
\ENDIF
\STATE Output $M$.
\ENDFOR
\end{algorithmic}
\end{algorithm}
The following simple observations can be made by looking at the algorithm:
\begin{itemize}
\item All isolated edges belong to $M$ with probability one.
\item The end vertex of any path of length $>1$ is covered by $M$ with probability $\frac{1}{2}$, and this is independent of the end vertex of any other path being covered.
\item For a path of length $2,3,$ or $4$, each maximal matching is present in $M$ with probability $\frac{1}{2}$.
\end{itemize}
\begin{theorem}\label{th1}
The randomized algorithm for finding MCM on path graphs is $\frac{4}{3}$-competitive.
\end{theorem}
The proof of the above theorem can be found in Appendix (section~\ref{opt_paths}).
\begin{comment}
\section{Conclusion}
In this paper, we present a barely random algorithm for MCM on growing trees. The next step is to use the same algorithm to prove a competitive ratio better than $2$ for trees, where edges may be revealed in an arbitrary order. The deterministic algorithm due to McGregor~\cite{mcgregor} uses a single parameter $\gamma$. It will be interesting to see if there exists a barely random algorithm that performs provably better than the best known randomized algorithm~\cite{epstein} for MWM, which chooses between the outputs of two algorithms with some probability, where the two algorithms use different values for the parameter $\gamma$.
\section*{Acknowledgment}
The first author thanks Sagar Kale for reviewing Section \ref{sec_lb_mwm}, correcting several mistakes, and helping us improve the presentation substantially.
\end{comment}
~\nocite{*}
\section*{Appendices}
\appendix
\section{A Primal-Dual Analysis of a deterministic algorithm for MWM}\label{pd}
In this section, we present a primal-dual analysis for the deterministic algorithm due to~\cite{mcgregor} for the maximum weight matching problem in the online preemptive model. The algorithm is as follows.
\begin{algorithm}
\caption{Deterministic Algorithm for MWM}
\begin{enumerate}
\item Fix a parameter $\gamma$.
\item If the new edge $e$ has weight greater than $(1+\gamma)$ times the weight of the edges currently adjacent to $e$, then include $e$ and discard the adjacent edges.
\end{enumerate}
\end{algorithm}
\subsection{Analysis}
\begin{lemma}~\cite{mcgregor}
The competitive ratio of this algorithm is $(1+\gamma)(2+\frac{1}{\gamma})$.
\end{lemma}
We use the primal-dual technique to prove the same competitive ratio. This analysis technique is different from the one in~\cite{naor}. In~\cite{naor}, the primal variables, once set to a certain value, are never changed, whereas in our analysis the primal variables may change during the run of the algorithm. The primal and dual LPs we use for the maximum weight matching problem are as follows.
\begin{center}
\begin{tabular}{c|c}
Primal LP & Dual LP \\\hline
$\max \sum_e w_e x_e$ & $\min \sum_v y_v$\\
$\forall v: \sum_{v\in e}x_e \leq 1$ & $\forall e: y_u + y_v \geq w_e$\\
$x_e\geq 0$ & $y_v \geq 0$
\end{tabular}
\end{center}
We maintain both primal and dual variables along with the run of the algorithm. On processing an edge, we maintain the following invariants.
\begin{itemize}
\item The dual LP is always feasible.
\item For each edge $e\equiv(u,v)$ in the current matching, $y_u\geq(1+\gamma)w(e)$ and $y_v\geq(1+\gamma)w(e)$.
\item The change in cost of the dual solution is at most $(1+\gamma)(2+\frac{1}{\gamma})$ times the change in cost of the primal solution.
\end{itemize}
These invariants imply that the competitive ratio of the algorithm is $(1+\gamma)(2+\frac{1}{\gamma})$. We start with $\vec{0}$ as the initial primal and dual solutions. Consider a round in which an edge $e$ of weight $w$ is given. Assume that all the above invariants hold before this edge is given. Whenever an edge $e\equiv(u,v)$ is accepted by the algorithm, we assign the value $x_e=1$ to the primal variable and $y_u=\max(y_u,(1+\gamma)w(e))$, $y_v=\max(y_v,(1+\gamma)w(e))$ to the dual variables of its end points. Whenever an edge is rejected, we do not change the corresponding primal or dual variables. Whenever an edge $e$ is evicted, we change its primal variable to $x_e=0$. The dual variables never decrease. Hence, if a dual constraint is feasible once, it remains so. We will now show that the invariants are always satisfied. There are three cases.
\begin{enumerate}
\item If the edge $e\equiv(u,v)$ has no conflicting edges in the current matching, then it is accepted by the algorithm in the current matching $M$. We assign $x_e=1,y_u=\max(y_u,(1+\gamma)w(e))$ and $y_v=\max(y_v,(1+\gamma)w(e))$. Hence, $y_u\geq (1+\gamma)w(e)$ and $y_v\geq (1+\gamma)w(e)$, and the dual constraint $y_u+y_v\geq w(e)$ is feasible. The change in the dual cost is at most $2(1+\gamma)w(e)$. The change in the primal cost is $w(e)$. So, the change in the dual cost is at most $(1+\gamma)(2+\frac{1}{\gamma})$ times the change in the cost of the primal solution.
\item If the edge $e\equiv(u,v)$ has conflicting edges $X(M,e)$ and $w(e)\leq (1+\gamma)w(X(M,e))$, then it is rejected by the algorithm. In that case, by the second invariant, $y_u+y_v\geq (1+\gamma)w(X(M,e))\geq w(e)$, so the dual constraint for edge $e$ is satisfied.
\item If the edge $e\equiv(u,v)$ has conflicting edges $X(M,e)$ and $w(e)> (1+\gamma)w(X(M,e))$, then it is accepted by the algorithm in the current matching $M$ and $X(M,e)$ is/are evicted from $M$. We only need to show that the change in dual cost is at most $(1+\gamma)(2+\frac{1}{\gamma})$ times the change in the primal cost. The change in primal cost is $w(e)-w(X(M,e))$.
The change in dual cost is at most $2(1+\gamma)w(e)-(1+\gamma)w(X(M,e))$. Hence the ratio is at most
\begin{align*}
& \frac{2(1+\gamma)w(e)-(1+\gamma)w(X(M,e))}{w(e)-w(X(M,e))}\\
=&2(1+\gamma) + \frac{(1+\gamma)w(X(M,e))}{w(e)-w(X(M,e))}\\
<&2(1+\gamma) + \frac{w(e)}{w(e)-w(X(M,e))}\\
\leq &2(1+\gamma)+(1+\frac{1}{\gamma})\\
=&(1+\gamma)(2+\frac{1}{\gamma})
\end{align*}
\end{enumerate}
Here, the management of the dual variables was straightforward. The introduction of randomization complicates matters considerably, and we are only able to analyze the algorithm in the very restricted setting of paths and ``growing trees''.
\section{Barely Random Algorithms for MCM}\label{ra_paths}
\subsection{Randomized Algorithm for Paths}
\begin{algorithm}
\caption{Barely Random Algorithm for Paths}
\begin{enumerate}
\item The algorithm maintains two matchings: $M_1$ and $M_2$.
\item On receipt of an edge $e$, the processing happens in two phases.
\begin{enumerate}
\item The augment phase. Here, the new edge $e$ is added to each $M_i$ in which there is no edge sharing an end point with $e$.
\item The switching phase. Edge $e$ is added to $M_2$ and the conflicting edge is discarded, provided it decreases the quantity $|M_1\cap M_2|$.
\end{enumerate}
\item Output a matching $M_i$ with probability $\frac{1}{2}$.
\end{enumerate}
\end{algorithm}
\begin{theorem}\label{br_paths}
The barely random algorithm for finding the MCM on paths is $\frac{3}{2}$-competitive, and no barely random algorithm can do better.
\end{theorem}
We prove this theorem using the following lemma.
\begin{lemma}
The dual constraint for each edge is satisfied at least $\frac{2}{3}$ in expectation.
\end{lemma}
$M_1$ and $M_2$ are valid matchings and hence correspond to valid primal solutions. For each edge $e\equiv(u,v)$ in some matching $M_i$, we distribute a charge of $x_e=1$ amongst the dual variables $y_u$ and $y_v$ of its vertices. We prove that for each edge $e$, $y_u+y_v\geq \frac{4}{3}$. Thus, $\mathbb{E}[y_u+y_v]\geq \frac{2}{3}$. Hence, this algorithm has a competitive ratio of $\frac{3}{2}$. All the dual variables are initialized to $0$. Suppose $e\equiv(u,v) \in M_i$ for some $i\in[2]$. Then the distribution of the primal charge $x_e$ amongst the dual variables $y_u$ and $y_v$ is done as follows. If there is an edge incident on $u$ which does not belong to any matching, and there is an edge incident on $v$ which does belong to some matching, then $e$ transfers a primal charge of $\frac{2}{3}$ to $y_u$ and the rest is transferred to $y_v$. Else, the primal charge of $e$ is transferred equally amongst $y_u$ and $y_v$. We look at three cases and prove that $y_u+y_v\geq \frac{4}{3}$ for each edge $e\equiv (u,v)$.
\begin{enumerate}
\item The edge $e$ is not present in any matching.
\begin{enumerate}
\item If there are no other edges on either of its end points in the input graph, then this edge has to be covered by both $M_1$ and $M_2$. So this case is not possible.
\item If there is no edge on one end point (say $u$) in the input graph, then $e$ has to belong to some matching. So, this case is not possible.
\item If there are edges incident on both end points of $e$ in the input graph, then each of them has to be covered by some matching. So, $y_u+y_v\geq 2\cdot\frac{2}{3}=\frac{4}{3}$.
\end{enumerate}
\item The edge $e\equiv(u,v)$ is present in a single matching.
\begin{enumerate}
\item If there are no other edges on either of its end points in the input graph, then this edge has to be covered by both $M_1$ and $M_2$. So this case is not possible.
\item If there is no edge on one end point (say $u$) in the input graph, then the edge on the other end point must be covered by the other matching. Otherwise, edge $e$ would have been covered by both matchings. So, $y_u+y_v\geq 1 +\frac{1}{3}=\frac{4}{3}$. \item If there are two edges incident on the end points of $e$ in the input graph, then at least one of them has to be covered by the other matching. Else, edge $e$ would have been covered by both matchings. So, $y_u+y_v\geq 1 +\frac{1}{3}=\frac{4}{3}$. \end{enumerate} \item If the edge $e\equiv(u,v)$ is present in both the matchings, then $y_u+y_v=2\geq \frac{4}{3}$. \end{enumerate} This proves the above claim. As a corollary, we have a $\frac{3}{2}$-competitive randomized algorithm for the MCM on paths.
\begin{proof} (of the second part of Theorem~\ref{br_paths}) Suppose $U$ is the set of matchings used by a barely random algorithm $\mathcal{A}$. The following input is given to this algorithm. Reveal two edges $x_1$ and $y_1$ such that they share an end point. Let $S$ be the set of matchings to which $x_1$ is added, and $\bar{S}$ be the set of matchings to which $y_1$ is added. Here, $U=S\cup \bar{S}$. Now give two more edges $x_2$ and $y_2$, disjoint from the previous edges, such that $x_2$ and $y_2$ share an end point. Wlog, $x_2$ will be added to the set of matchings $S$, and $y_2$ will be added to the set of matchings $\bar{S}$. Give an edge between $y_2$ and $x_1$. Continue the input similarly for $i>2$. It can be seen that the expected increase in the size of the optimum matching is $\frac{3}{2}$, whereas the increase in the size of the matching held by the algorithm is $1$. Thus, we get a lower bound of $\frac{3}{2}$ on the competitive ratio of any barely random algorithm. \end{proof}
\subsection{Randomized Algorithm for Growing Trees with maximum degree $3$}\label{ub2} In this section, we give a barely random algorithm for growing trees with maximum degree $3$. We beat the lower bound of $2$ on the performance of any deterministic algorithm for MCM, for this class of inputs. The edges are revealed in an online fashion such that one new vertex is revealed per edge (except for the first edge). Any vertex in the input graph has maximum degree $3$.
\begin{algorithm}[H] \caption{Randomized Algorithm for Growing Trees with $\Delta=3$} \begin{enumerate} \item The algorithm maintains $3$ matchings $M_1,M_2,M_3$. \item On receipt of an edge $e$, the processing happens in two phases. \begin{enumerate} \item The augment phase. Here, the new edge $e$ is added to each $M_i$ such that there is no edge in $M_i$ sharing an end point with $e$. \item The switching phase. For $i=2,3$, in order, $e$ is added to $M_i$ and the conflicting edge is discarded, provided it decreases the quantity $\sum_{i,j\in[3],i\neq j}|M_i\cap M_j|$. \end{enumerate} \item Output a matching $M_i$ with probability $\frac{1}{3}$. \end{enumerate} \end{algorithm}
\begin{theorem}\label{br_trees} The barely random algorithm for finding the MCM on growing trees with maximum degree $3$ is $\frac{12}{7}$-competitive. \end{theorem} We make the following simple observations. \begin{itemize} \item There cannot be an edge which is not in any matching. \item Call an edge ``bad'' if its end points are covered by only two matchings. Indeed, an edge neither of whose end points is a leaf cannot be ``bad''. \item An edge incident on a vertex of degree $3$ cannot be ``bad'', because there will be a distinct edge belonging to every matching.
\end{itemize} We begin by proving a few simple lemmas regarding the algorithm. \begin{lemma} There cannot be ``bad'' edges incident on both vertices of an edge. \end{lemma} \begin{proof} Note that a ``bad'' edge is created when an edge is revealed on a leaf node of another edge which currently belongs to two or three matchings and finally belongs to only one matching. Let $e$ be an edge which currently belongs to three matchings, which means it is the first edge revealed. Now if an edge $e_1$ is revealed on a vertex of $e$, then $e_1$ would be added to one matching, and $e$ would be removed from that matching (in the switching phase of the algorithm). For $e_1$ to be a bad edge, $e$ should be switched out of one more matching. This can happen only if there are two more edges revealed on the other vertex of $e$. This means there cannot be ``bad'' edges on both sides of $e$. Now let $e$ belong to two matchings. Then $e$ already has a neighboring edge $e_2$ which belongs to some matching. When $e_1$ is revealed on the leaf vertex of $e$, it will be added to one matching, in the augment phase. Now for $e_1$ to be ``bad'', $e$ should switch out of some matching. This can only happen if there is one more edge $e_3$ revealed on the common vertex of $e$ and $e_2$. Again, the lemma holds. \end{proof} \begin{lemma}\label{3star} If a vertex has three edges incident on it, then at most one of these edges can have a ``bad'' neighboring edge. \end{lemma} \begin{proof} Out of the three edges incident on a vertex, only one could have belonged to two matchings at any step during the run of the algorithm. Hence, only that edge which belonged to two matchings at some stage during the run of the algorithm can have a ``bad'' neighboring edge. \end{proof} \begin{proof} (of Theorem~\ref{br_trees}) $M_1, M_2, M_3$ are valid matchings and hence correspond to valid primal solutions. For each edge $e\equiv(u,v)$ in some matching $M_i$, we distribute a charge of $x_e=1$ amongst the dual variables $y_u$ and $y_v$ of its end points. We prove that for each edge $e$, $y_u+y_v\geq \frac{7}{4}$. Thus, $\mathbb{E}[y_u+y_v]\geq \frac{7}{12}$. Hence, this algorithm has a competitive ratio of $\frac{12}{7}$. All the dual variables are initialized to $0$. Suppose $e\equiv(u,v) \in M_i$ for some $i\in[3]$. Then the distribution of the primal charge $x_e$ amongst the dual variables $y_u$ and $y_v$ is done as follows. If there is a ``bad'' edge incident on $u$, then edge $e$ transfers $\frac{3}{4}$ of its primal charge to $y_u$ and the rest of it to $y_v$. Else, edge $e$ transfers its primal charge equally between $y_u$ and $y_v$. We look at three cases and then prove that $y_u+y_v\geq \frac{7}{4}$ for each edge $e\equiv (u,v)$. \begin{enumerate} \item Edge $e\equiv(u,v)$ is ``bad''. $e\in M_i$ for some $i\in [3]$. $e$ will have some neighboring edge $e_1$ such that $e_1\in M_j$ for some $j\in [3]$ with $i\neq j$. Let the common vertex between $e$ and $e_1$ be $v$. Then $e_1$ will transfer $\frac{3}{4}$ of its primal charge to $y_v$. Thus, $y_u+y_v=\frac{7}{4}$. \item Edge $e\equiv(u,v)$ is present in a single matching and not ``bad''. This case has four sub cases. \begin{enumerate} \item $e$ has one neighboring edge $e_1$. Then $e_1$ should belong to two matchings. \item $e$ has two neighboring edges $e_1$ and $e_2$, both belonging to only one matching. If these are both on the same side of $e$, then at most one of them could have a ``bad'' neighboring edge (by lemma~\ref{3star}).
If these are on opposite sides of $e$, then none of them can have a ``bad'' neighboring edge. \item $e$ has three neighboring edges $e_1$, $e_2$, and $e_3$, such that $e_1$ and $e_2$ are on one side of $e$, and $e_3$ is on another side of $e$. At most one of $e_1$ and $e_2$ can have a ``bad'' neighboring edge (by lemma~\ref{3star}). \item $e$ has four neighboring edges $e_1$, $e_2$, $e_3$, and $e_4$, such that $e_1$ and $e_2$ are on one side of $e$, and $e_3$ and $e_4$ are on another side of $e$. \end{enumerate} We can see that in all the above sub cases, $y_u+y_v\geq \frac{7}{4}$. \item Edge $e\equiv (u,v)$ belongs to two or three matchings. Then, $y_u+y_v\geq \frac{7}{4}$ trivially. \end{enumerate} This proves that we have a $\frac{12}{7}$-competitive randomized algorithm for finding MCM on growing trees with maximum degree $3$. \end{proof} \subsection{Example showing need of non-local analysis}\label{sa_gt} Consider input graph as a $4$-regular tree with large number of vertices, and an extra edge on every vertex other than the leaf vertices. Every edge other than the extra edges will belong some matching. For every edge that belongs to some matching, there will one edge on each of its end points which does not belong to any matching. If the rule for distributing primal charge among dual variables is similar to one described in section~\ref{ub2}, then for each edge belonging to some matching will transfer its primal charge equally amongst both its end points. For each edge which does not belong to any matching, $y_u+y_v=2$, which will imply only a competitive ratio of $2$. We wish to get a competitive ratio better than $2$. So we need some other idea. \subsection{Proof of Lemma~\ref{internal}}\label{pf_inbad} \begin{proof} Consider an edge $(u,v)$ revealed at $u$. \begin{enumerate} \item When revealed it is not put in any matching. This means that there are four covered edges incident on $u$. (Call an edge covered if it belongs to some matching.) This situation cannot change as more edges are revealed. Thus the edge will remain covered by four matchings, and can never become a bad edge. \item When revealed it is put in one matching. This means that there are three matching edges on at least two covered edges incident on $u$. If there were three covered edges incident on $u$ then they remain covered edges. So suppose otherwise. Then there are two covered edges of which one is in two matchings. Hence there will always be three matching edges covering $u$. If an edge is revealed at $v$ then there will be four matching edges covering the given edge. The edge may become bad if $v$ stays a leaf and if one of the matchings on the edge with two of them, switches. \item When revealed it is put in two matchings. Then there are two matching edges at $u$ and at least one covered edge. If there are two covered edges, they remain so. Of the two copies of the edge in matchings, one may switch to a new edge but will always remain adjacent to this edge. Hence there will always be three matching edges covering $u$. If an edge is revealed at $v$ then there will be four matching edges covering the given edge. The edge may become bad if $v$ stays a leaf and if one of the matchings on the edge with two of them, switches. \item When revealed it is put in three matchings. Then there is one covered edge at $u$. If one more edge is now revealed on $u$, then we are back to case $3$. If a new edge is revealed on $v$, it replaces $(u,v)$ in one of the matchings. 
Now, even if more edges are revealed on either side of $(u,v)$, it continues to be covered by four matchings. \item When revealed it is put in four matchings. If a new edge is revealed either on $u$ or $v$, then this case reduces to case $2$. \end{enumerate} This completes the proof of the first part of lemma. For the second part of lemma, consider a leaf edge present on each of the vertices $p,q,$ and $r$. Suppose the leaf edge incident on $q$ is bad. When this edge was revealed, there must have been some edge incident on $q$, either $(p,q)$ or $(q,r)$, which belonged to two matchings. Wlog, assume $(p,q)$ belonged to two matchings. Then for a matching to switch out this edge, there need to be three edges incident on $p$, and hence the leaf edge incident on $p$ cannot be a bad edge. \begin{comment} (part $1$ of lemma~\ref{internal}) \begin{enumerate} \item If $e$ does not belong to any matching then it must have been so when it was revealed. In that case, it must have had four distinct neighboring edges each belonging to exactly one matching. This situation cannot change as more edges are revealed. \item If $e$ belongs to single matching at the end of input, then it must have belonged to at least $1$ matching when it was revealed. If it belonged to exactly one matching when it was revealed, then it must have had three distinct neighboring edges, each belonging to exactly one matching and the lemma follows. If it belonged to at least two matchings when it was revealed, then every edge adjacent to $e$ must have been in at least one matching. For $e$ to finally belong to a single matching, there should have been to more edges revealed on its end points, each belonging to at least one matching. Thus, the lemma follows. \item If $e$ is in at least two matchings then every edge adjacent to $e$ must be in at least one matching and the lemma follows. \end{enumerate} This ends the proof of part \textit{1} of the lemma. \end{proof} \begin{proof} (part $2$ of lemma~\ref{internal}) It is clear that only a leaf edge can be a ``bad'' edge. Also, when an edge is revealed, its vertices are covered by all four matchings. Later on, the total number of matchings covering vertices of that edge may fall below four. For an edge $e$ to be ``bad'', it has to be revealed in one of the following ways. \begin{enumerate} \item On the common vertex $u$ of two edges $e_1$ and $e_2$, such that both these edges belong to two matchings. No more edges are revealed on $u$, and all these edges finally belong to only one matching each. \item On the common vertex $u$ of two edges $e_1$ and $e_2$ such that $e_1$ belongs to two matchings and $e_2$ belongs to a single matching. No more edges are revealed on $u$, and all these edges finally belong to only one matching each. \item On a leaf vertex $u$ of an edge $e_1$ which belongs to two or three or four matchings, and later on $e_1$ belongs to only one matching. No more edges are revealed on $u$, and $e_1$ finally belong to only one matching. \item On a leaf vertex $u$ of an edge $e_1$ which belongs to two or three or four matchings. Then an edge $e_2$ is revealed on $u$, such that $e$ is switched out of one of the two matchings it belongs to and $e_2$ is added to it. No more edges are revealed on $u$, and all these edges finally belong to only one matching each. (This case is very similar to case 3). \end{enumerate} These are the only possible cases which can result in an edge being ``bad''. Following is the list of remaining possible cases in which an edge is revealed. 
But it will not become a ``bad'' edge in any of these cases. \begin{itemize} \item If egde $e$ is revealed on edge a vertex of degree at least $3$, then its end points will definitely be covered by all four matchings. \item If edge $e$ is revealed on a leaf vertex of an edge belonging to only one matching, then it would belong to three matchings due to the augment phase. Now, no matter which edges are revealed on the non-leaf vertex of this edge, its vertices will continue to be covered by four matchings. \item If edge $e$ is revealed on the common vertex $u$ of two edges $e_1$ and $e_2$ which both belong to one matching each, then $e$ would continue to be covered by all four matchings at the end of input. \end{itemize} We go through various cases to prove the lemma. \begin{itemize} \item Let $e_1$ and $e_2$ be two edges which are incident on $p$ such that both these edges belong to two matchings. Now, when $e$ is revealed on $p$, wlog, $e_1$ will switch out of some matching, and $e$ will be added to it, (as described in case 1). Say there is another edge $e_3$ on other vertex (besides $p$) of $e_1$, (vertex $q$), which could belong to two or three matchings. When an edge $e_4$ is revealed on $q$, it will belong to one matching due to the switching phase. Consider the other vertex (besides $q$) of $e_3$, (vertex $r$). When one or two edges are revealed on $r$, neither of them can be a ``bad'' edge. \item Let $e_1$ and $e_2$ be two edges which are incident on $p$ such that, $e_1$ belongs to two matchings, and $e_1$ belongs to a single matching. Now, when $e$ is revealed on $p$, $e$ will be added to a single matching in the augment phase, (as described in case 2). Because $e_2$ belonged to a single matching, it means it already must be having three neighboring edges on the other vertex (besides $p$). Say there is another edge $e_3$ on other vertex (besides $p$) of $e_1$, (vertex $q$), which would belong to two matchings. When an edge $e_4$ is revealed on $q$, it will belong to one matching due to the switching phase. Consider the other vertex (besides $q$) of $e_3$, (vertex $r$). When one or two edges are revealed on $r$, neither of them can be a ``bad'' edge. \item Let $e_1$ be an edge incident on leaf vertex $p$ belonging to two or three or four matchings. When an edge $e$ is revealed on $p$, it would belong to two matchings, (as described in case 3). Say there is an edge $e_2$ on the other vertex (besides $p$) of $e_1$, (vertex $q$). This edge would belong to one or two matchings. So when edge $e_3$ is revealed on $q$, it will belong to a single matching, by either augment or switching phase. Consider the other vertex (besides $q$) of $e_2$, (vertex $r$). When one or two edges are revealed on $r$, neither of them can be a ``bad'' edge. \end{itemize} \end{comment} \end{proof} \section{Proof of lemmas from section~\ref{sec_lb_mwm}}\label{sec_lemmas} \begin{lemma}\label{lem_f0} For every $w>0$, $f_0(w)>1/\alpha$. \end{lemma} \begin{proof} If not, then a single edge of weight $w$ results in algorithm's expected cost $wf_0(w)\leq w/\alpha<w/\beta$, whereas the optimum is $w$. This contradicts $\beta$-competitiveness. \end{proof} \begin{lemma}\label{lem_f1_lb} For every $w_1$ and $w\leq w_1/\alpha$, $f_1(w_1,w)=0$. \end{lemma} \begin{proof} If $f_1(w_1,w)>0$ for some $w_1$ and $w$ such that $w\leq w_1/\alpha$, then the adversary's input is a star, with a single edge of weight $w_1$ followed by a large number $n$ of edges of weight $w$. 
Regardless of whether the first edge of weight $w_1$ is accepted or not, the algorithm holds an edge of weight $w$, with probability approaching $1$ as $n\rightarrow\infty$, in the end. The optimum is $w_1\geq\alpha w>\beta w$, thus, contradicting $\beta$-competitiveness. \end{proof} \begin{lemma}\label{lem_f1_ub} For every $w_1$, and $w\geq\gamma w_1$, $f_1(w_1,w)\geq1/\alpha$. \end{lemma} \begin{proof} Suppose $f_1(w_1,w)<1/\alpha$ for some $w_1$ and $w$ such that $w\geq\gamma w_1$. The adversary's input is a star, with a large number $n$ of edges of weight $w_1$, followed by a single edge of weight $w$. The algorithm must hold an edge of weight $w_1$, before the edge of weight $w$ is given, with probability approaching $1$ as $n\rightarrow\infty$. Therefore, in the end, the algorithm's cost is $w$ with probability less than $1/\alpha$, and at most $w_1$ otherwise. Thus, the expected weight of the edge held by the algorithm is less than $w/\alpha+w_1(1-1/\alpha)$, whereas the adversary holds the edge of weight $w$. Since the algorithm is $\beta$-competitive and $\beta<\alpha$, we have \[w<\beta\left[\frac{1}{\alpha}\cdot w+\left(1-\frac{1}{\alpha}\right)w_1\right]\text{ }\Rightarrow\text{ }\left(1-\frac{\beta}{\alpha}\right)w<w_1\beta\left(1-\frac{1}{\alpha}\right)\text{ }\Rightarrow\text{ }w<\gamma w_1\] This is a contradiction. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem_xy}] By Lemma \ref{lem_f1_lb}, $f_1(w_1,w_1/\alpha)=0$, and by Lemma \ref{lem_f1_ub}, $f_1(w_1,$ $\gamma w_1)\geq1/\alpha$. Take a finite sequence of points, increasing from $w_1/\alpha$ to $\gamma w_1$, such that the difference between any two consecutive points is at most $\epsilon$, and observe the value of $f_1(w_1,z)$ at each such point $z$. Since $f_1(w_1,w_1/\alpha)<\delta$ and $f_1(w_1,\gamma w_1)>\delta$, there must exist two consecutive points in the sequence, say $y$ and $x$, such that $f_1(w_1,y)\leq\delta$ and $f_1(w_1,x)\geq\delta$. Furthermore, $x-y\leq\epsilon$ and $w_1/\alpha\leq y\leq x\leq\gamma w_1$, by construction. \end{proof} \begin{lemma}\label{lem_preempt} For every $i$, $j$, the probability that $a^i_j$ is not matched to any vertex in $A_{i+1}$, in the $j^{\text{\tiny{th}}}$ sub phase of the $i^{\text{\tiny{th}}}$ phase, just before the edge $(a^i_j,b^i_j)$ is revealed, is at most $(1-\delta)^{m-j+1}$. \end{lemma} \begin{proof} Consider the $j^{\text{\tiny{th}}}$ sub phase of the $i^{\text{\tiny{th}}}$ phase, in which, the edges $(a^i_j,a^{i+1}_1),$ $(a^i_j,a^{i+1}_2),$ $\ldots,(a^i_j,a^{i+1}_m),$ $(a^i_j,b^i_j)$ are revealed. Before this sub phase, the number of unmatched vertices in $A_{i+1}$ must be at least $m-j+1$. Call this set $A'$. If $a^i_j$ was matched at the end of phase $i-1$, then the weight of edge incident on $a^i_j$, at the beginning of the current phase, is $x_{i-1}$. For each vertex $a^{i+1}_{j'}\in A'$, given that $a^i_j$ did not get matched to any of $a^{i+1}_1,\ldots,a^{i+1}_{j'-1}$, the probability that $a^i_j$ gets matched to $a^{i+1}_{j'}$ is $f_1(x_{i-1},x_i)\geq\delta$. Thus, the probability of $a^i_j$ not getting matched to any vertex in $A'\subseteq A_{i+1}$, in the current sub phase, is at most $(1-\delta)^{m-j+1}$. Note that this argument applies even if $a^i_j$ was not matched at the beginning of the current phase, due to Lemma \ref{lem_f0} and since $\delta<1/\alpha<f_0(x_i)$. 
\end{proof} \begin{proof} [Proof of Lemma~\ref{lem_Y}] First, observe that the sequence in which the edges are revealed ensures that no edge adjacent to any edge $e\in M_i$ appears after $e$. Thus, if $e$ is picked when it is revealed, it is never preempted, and is maintained till the end of the input. Hence, $Y_i$ is also the set of edges of $M_i$ that were picked as soon as they were revealed. When the edge $(a^i_j,b^i_j)$ is given, the algorithm picks it with probability at most $\delta$ (since $f_1(x_i,y_i)\leq\delta$) if $a^i_j$ was matched to some vertex in $A_{i+1}$. By Lemma \ref{lem_preempt}, the probability of $a^i_j$ not being matched to any vertex in $A_{i+1}$ is at most $(1-\delta)^{m-j+1}$. Thus, the probability that the edge $(a^i_j,b^i_j)$ appears in $Y_i$ is at most $\delta+(1-\delta)^{m-j+1}$. Hence, $E[|Y_i|]\leq\delta m+\sum_{j=1}^m(1-\delta)^{m-j+1}\leq\delta m+(1-\delta)/\delta$. \end{proof} \begin{proof} [Proof of Lemma~\ref{lem_X}] Consider the set $A'$ of all vertices $a^{i+1}_j$ which remain matched to some vertex in $A_i$ at the end of the input. Then clearly, $|A'|=|X_i|$. Let us find the probability that a vertex $a^{i+1}_j$ appears in $A'$. For this to happen, it is necessary that $a^{i+1}_j$ not be matched to any vertex in $A_{i+2}$, in the $j^{\text{\tiny{th}}}$ sub phase of the $(i+1)^{\text{\tiny{st}}}$ phase. By Lemma \ref{lem_preempt}, this happens with probability at most $(1-\delta)^{m-j+1}$. Thus, $E[|X_i|]=\sum_{j=1}^m(1-\delta)^{m-j+1}\leq(1-\delta)/\delta$. \end{proof} \begin{lemma}\label{lem_preempt1} For every $j$, the probability that $a^{n+1}_j$ is not matched to any vertex in $A_n\cup A_{n+2}$, in the $j^{\text{\tiny{th}}}$ sub phase of the $(n+1)^{\text{\tiny{st}}}$ phase, just before the edge $(a^{n+1}_j,b^{n+1}_j)$ is revealed, is at most $(1-\delta)^{m-j+1}$. \end{lemma} \begin{proof} This proof is analogous to the proof of Lemma \ref{lem_preempt}. If $a^{n+1}_j$ was matched to some vertex in $A_n$ at the end of the $n^{\text{\tiny{th}}}$ phase, then it will continue to remain matched to some vertex in $A_n\cup A_{n+2}$, until the edge $(a^{n+1}_j,b^{n+1}_j)$ is revealed. Otherwise $a^{n+1}_j$ will get matched to some vertex in $A_{n+2}$ with probability at least $1-(1-\delta)^{m-j+1}$, and remain unmatched with probability at most $(1-\delta)^{m-j+1}$. \end{proof} \begin{proof} [Proof of Lemma~\ref{lem_Yn}] This proof is analogous to the proof of Lemma \ref{lem_Y}. Again, the sequence in which the edges are revealed ensures that no edge adjacent to any edge $e\in M_{n+1}$ appears after $e$. Thus, if $e$ is picked when it is revealed, it is never preempted. Hence, $Y_{n+1}$ is also the set of edges of $M_{n+1}$ that were picked as soon as they were revealed. When the edge $(a^{n+1}_j,b^{n+1}_j)$ is given, the algorithm picks it with probability at most $\delta$ (since $f_1(x_n,y_n)\leq\delta$) if $a^{n+1}_j$ was matched to some vertex in $A_n\cup A_{n+2}$. Thus, the probability that the edge $(a^{n+1}_j,b^{n+1}_j)$ appears in $Y_{n+1}$ is at most $\delta+(1-\delta)^{m-j+1}$. Hence, $E[|Y_{n+1}|]\leq\delta m+\sum_{j=1}^m(1-\delta)^{m-j+1}\leq\delta m+(1-\delta)/\delta$. \end{proof} \section{Lower Bound for $\theta$ structured graphs}\label{theta_bnd} The overall idea of the adversarial strategy is as follows. The input graph is a tree whose edges are partitioned into $n+1$ layers which are numbered $0$ through $n$ from bottom to top. Every edge in layer $i$ has weight $\theta^i$. The edges are revealed bottom-up.
The edges in layer $i$ are given in such a manner that all the edges in layer $i-1$ held by the algorithm will be preempted. This ensures that in the end, the algorithm's matching contains only one edge, whereas the adversary's matching contains $2^{n-i}$ edges from layer $i$, for each $i$. Let $\mathcal{A}$ be any deterministic algorithm for maximum matching in the online preemptive model. The adversarial strategy uses a recursive function, which takes $n\in\mathbb{N}$ as a parameter. For a given $n$, this recursive function, given by Algorithm \ref{alg_theta}, constructs a tree with $n+1$ layers by giving weighted edges to the algorithm in an online manner, and returns the tree, the adversary's matching in the tree, and a vertex from the tree. \begin{algorithm} \caption{{\sc{MakeTree}}$(n)$} \begin{algorithmic}[1] \IF{$n=0$} \WHILE{true} \STATE Take fresh vertices $v$, $v_1$, $v_2$, and give the edges $(v_1,v_2)$, $(v,v_1)$ with weight $1$. \STATE $T:=\{(v_1,v_2),(v,v_1)\}$. \IF{algorithm picks $(v_1,v_2)$} \RETURN $(T,\{(v,v_1)\},v_2)$ \ELSIF{algorithm picks $(v,v_1)$} \RETURN $(T,\{(v_1,v_2)\},v)$ \ELSE \STATE Discard $T$ and retry. \ENDIF \ENDWHILE \ELSE \WHILE{true} \STATE $(T_1,M_1,v_1)$ $:=$ {\sc{MakeTree}}$(n-1)$ \STATE $(T_2,M_2,v_2)$ $:=$ {\sc{MakeTree}}$(n-1)$ \STATE Give the edge $(v_1,v_2)$ with weight $\theta^n$. \IF{algorithm picks $(v_1,v_2)$} \STATE Take a fresh vertex $v$, and give the edge $(v,v_1)$ with weight $\theta^n$. \STATE $T:=T_1\cup T_2\cup\{(v_1,v_2),(v,v_1)\}$. \IF{algorithm replaces $(v_1,v_2)$ by $(v,v_1)$} \RETURN $(T,M_1\cup M_2\cup\{(v_1,v_2)\},v)$. \ELSE \RETURN $(T,M_1\cup M_2\cup\{(v,v_1)\},v_2)$ \ENDIF \ELSE \STATE \COMMENT{algorithm does not pick $(v_1,v_2)$} \STATE Discard the constructed tree and retry. \ENDIF \ENDWHILE \ENDIF \end{algorithmic} \label{alg_theta} \end{algorithm} Let us prove a couple of properties about the behavior of the algorithm and the adversary, when the online input is generated by the call {\sc{MakeTree}}$(n)$. \begin{lemma}\label{lem_theta_adv} Suppose that the call {\sc{MakeTree}}$(n)$ returns $(T',M',v')$. Then \begin{enumerate} \item $M'$ is a matching in $T'$. \item $M'$ does not cover the vertex $v'$. \item The weight of $M'$ is $\sum_{i=0}^n\theta^i2^{n-i}=(\theta^{n+1}-2^{n+1})/(\theta-2)$. \end{enumerate} \end{lemma} \begin{proof} By induction on $n$. For $n=0$, the claim is obvious from the description of {\sc{MakeTree}}. Assume that the claim holds for $n-1$, and consider the call {\sc{MakeTree}}$(n)$, which returns $(T',M',v')$. Then, by induction hypothesis, the two recursive calls must have returned $(T_1,M_1,v_1)$ and $(T_2,M_2,v_2)$ satisfying the conditions of the lemma. Suppose the algorithm replaced $(v_1,v_2)$ by $(v,v_1)$ in its matching. Since $v_1$ and $v_2$ were respectively left uncovered by $M_1$ and $M_2$, $M=M_1\cup M_2\cup\{(v_1,v_2)\}$ is a matching in $T$, and $M'$ does not cover $v'=v$. The case when the algorithm did not replace $(v_1,v_2)$ by $(v,v_1)$ is analogous. In either case, the additional edge in $M'$, apart from edges in $M_1$ and $M_2$ has weight $\theta^n$, and $M_1$, $M_2$ themselves have weight $\sum_{i=0}^{n-1}\theta^i2^{n-1-i}$, by induction hypothesis. Thus, the weight of $M'$ is $\theta^n+2\sum_{i=0}^{n-1}\theta^i2^{n-1-i}=\sum_{i=0}^n\theta^i2^{n-i}$. \end{proof} \begin{lemma}\label{lem_theta_alg} When the call {\sc{MakeTree}}$(n)$ returns $(T,M,v)$, the algorithm's matching contains exactly one edge from $T$. This edge is incident on $v$ and has weight $\theta^n$. 
\end{lemma} \begin{proof} By induction on $n$. Again, the claim is obvious for $n=0$. Assume that the claim holds for $n-1$, and consider the call {\sc{MakeTree}}$(n)$, which returns $(T',M',v')$. At the end of the two recursive calls, which return $(T_1,M_1,v_1)$ and $(T_2,M_2,v_2)$, the algorithm will have exactly one edge $e_1$ from $T_1$ incident on $v_1$, and one edge $e_2$ from $T_2$ incident on $v_2$, by the induction hypothesis. If the algorithm does not pick the next edge $(v_1,v_2)$, then the tree is discarded. If the algorithm picks that edge, then it must preempt $e_1$ and $e_2$. Thereafter, if the algorithm replaces $(v_1,v_2)$ by $(v,v_1)$ in its matching, then $v'=v$. Otherwise, if the algorithm keeps $(v_1,v_2)$, then $v'=v_2$. In either case, the algorithm is left with exactly one edge, which is incident on $v'$, and which has weight $\theta^n$. \end{proof} The adversary's strategy is given by Algorithm \ref{alg_adv}, where $n\geq1$ is a parameter.
\begin{algorithm} \caption{{\sc{Adv}}$(n)$} \begin{algorithmic}[1] \WHILE{true} \STATE $(T_1,M_1,v_1)$ $:=$ {\sc{MakeTree}}$(n-1)$ \STATE $(T_2,M_2,v_2)$ $:=$ {\sc{MakeTree}}$(n-1)$ \STATE Give the edge $(v_1,v_2)$ with weight $\theta^n$. \IF{algorithm picks $(v_1,v_2)$} \STATE Take a fresh vertex $v$, and give the edge $(v,v_1)$ with weight $\theta^n$. \IF{algorithm replaces $(v_1,v_2)$ by $(v,v_1)$} \STATE Take a fresh vertex $v'$ and give the edge $(v,v')$ with weight $\theta^n$. \STATE $T:=T_1\cup T_2\cup\{(v_1,v_2),(v,v_1),(v,v')\}$. \RETURN $M_1\cup M_2\cup\{(v_1,v_2),(v,v')\}$ \ELSE \STATE \COMMENT{algorithm still has $(v_1,v_2)$} \STATE Take a fresh vertex $v'$ and give the edge $(v_2,v')$ with weight $\theta^n$. \STATE $T:=T_1\cup T_2\cup\{(v_1,v_2),(v,v_1),(v_2,v')\}$. \RETURN $M_1\cup M_2\cup\{(v,v_1),(v_2,v')\}$ \ENDIF \ELSE \STATE \COMMENT{algorithm does not pick $(v_1,v_2)$} \STATE Discard the constructed tree and retry. \ENDIF \ENDWHILE \end{algorithmic} \label{alg_adv} \end{algorithm}
\begin{lemma}\label{lem_discard} When a tree $T$ is discarded in a call to {\sc{MakeTree}}$(n)$ or\\ {\sc{Adv}}$(n)$, $\adv(T)\geq\left(2+\frac{2}{\theta-2}\right)\cdot\alg(T)$, where $\alg(T)$ and $\adv(T)$ are respectively the total weights of the edges of the algorithm's and the adversary's matchings in $T$. \end{lemma} \begin{proof} For $n\geq1$, consider the two calls to {\sc{MakeTree}}, which returned $(T_1,M_1,v_1)$ and $(T_2,M_2,v_2)$ before the edge $(v_1,v_2)$ is revealed. By Lemma \ref{lem_theta_alg}, the algorithm had exactly one edge in each of $T_1$ and $T_2$, and this edge had weight $\theta^{n-1}$. The tree was discarded because the algorithm did not pick the edge $(v_1,v_2)$. Thus, $\alg(T)=2\theta^{n-1}$. On the other hand, the adversary picks the matching $M_1\cup M_2\cup\{(v_1,v_2)\}$ which, by Lemma \ref{lem_theta_adv}, has weight $\adv(T)=\theta^n+2\sum_{i=0}^{n-1}\theta^i2^{n-1-i}=\sum_{i=0}^n\theta^i2^{n-i}=(\theta^{n+1}-2^{n+1})/(\theta-2)$. Thus, \begin{align*} \frac{\adv(T)}{\alg(T)}&=\frac{\theta^{n+1}-2^{n+1}}{2\theta^{n-1}(\theta-2)}=\frac{\theta}{2}\times\frac{1-\left(\frac{2}{\theta}\right)^{n+1}}{1-\frac{2}{\theta}} \geq\frac{\theta}{2}\times\frac{1-\left(\frac{2}{\theta}\right)^2}{1-\frac{2}{\theta}}\\ &=\frac{\theta}{2}\times\left(1+\frac{2}{\theta}\right)=\frac{\theta}{2}+1 \geq\left(2+\frac{2}{\theta-2}\right) \end{align*} The last inequality follows from the fact that $\theta\geq4$. Finally, note that when the discard happens in a call to {\sc{MakeTree}}$(0)$, $\alg(T)=0$ and $\adv(T)=1$.
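(At the boundary value $\theta=4$ the last inequality holds with equality, since $\frac{\theta}{2}+1=3=2+\frac{2}{\theta-2}$; for $\theta>4$ it is strict, because $\frac{\theta}{2}+1$ is increasing in $\theta$ while $2+\frac{2}{\theta-2}$ is decreasing.)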
\end{proof} Now we are ready to prove Theorem \ref{thm_theta}. \begin{proof}[Proof of Theorem \ref{thm_theta}] For $n\geq1$, give the adversarial input by calling {\sc{Adv}}$(n)$. If the call does not terminate, then an unbounded number of trees are discarded, and by Lemma \ref{lem_discard}, a lower bound of $\left(2+\frac{2}{\theta-2}\right)$ is forced on each discarded tree. If the call terminates, then suppose $T$ is the final tree constructed. Let $(T_1,M_1,v_1)$ and $(T_2,M_2,v_2)$ be returned by the two calls to {\sc{MakeTree}}$(n-1)$. By the description of {\sc{Adv}} and Lemma \ref{lem_theta_alg}, it is clear that the algorithm holds only one edge of $T$ in the end, and this edge has weight $\theta^n=\alg(T)$. On the other hand, the adversary's matching contains $M_1$ and $M_2$, and two edges of weight $\theta^n$, where by Lemma \ref{lem_theta_adv}, the weight of $M_1$ and $M_2$ is $(\theta^n-2^n)/(\theta-2)$ each. Thus, $\adv(T)=2\theta^n+2(\theta^n-2^n)/(\theta-2)$. Therefore, \[\frac{\adv(T)}{\alg(T)}=2+\frac{2(\theta^n-2^n)}{\theta^n(\theta-2)}=2+\frac{2\left(1-\left(\frac{2}{\theta}\right)^n\right)}{\theta-2}\] This approaches $\left(2+\frac{2}{\theta-2}\right)$ as $n\rightarrow\infty$. Furthermore, this lower bound is also forced on the trees discarded during the execution of {\sc{Adv}}$(n)$. Thus, the algorithm cannot have a competitive ratio less than $\left(2+\frac{2}{\theta-2}\right)$. \end{proof}
\section{Proof of Theorem~\ref{th1}}\label{opt_paths} Theorem~\ref{th1} can be proved using the following lemma. \begin{lemma} For any (maximal) path $P$ of length $n>0$, \begin{itemize} \item if $n$ is even then $\mathbb{E}[|M\cap P|] \geq (3/4)(n/2)+1/4 = p_0(n)$ (say). \item if $n$ is odd then $\mathbb{E}[|M\cap P|]\geq (3/4)(n/2)+ 3/8 = p_1(n)$ (say). \end{itemize} \end{lemma} \begin{proof} For $n=1$ and $n=2$, $ \mathbb{E}[|M\cap P|]=1$, and for $n=3$, $ \mathbb{E}[|M\cap P|]=\frac{3}{2}$. Thus the lemma holds when $n\leq 3$. We will induct on the number of edges in the input. (The case $n=1$ covers the base case.) Suppose the lemma is true before the arrival of the new edge $e$. We prove that the lemma holds even after $e$ has been processed. We may assume that the length $n$ of the new path $P$ resulting from the addition of $e$ is at least $4$. \begin{enumerate} \begin{comment} \item If a path of length $5$ is formed by joining two paths of length $2$ each, by a new edge: With probability $3/4$ atleast one of the joining edges of two paths will be present and with probability $1/4$, none of them will be present. \begin{align*} E\left[path_{5}^{odd}\right]&= \frac{3}{4}\left(E\left[path_{2}^{even}\right]+E\left[path_{2}^{even}\right]\right) + \frac{1}{4}\left(E\left[path_{e}^{even}\right]+E\left[path_{2}^{even}\right] + 1\right)\\ &= 2+\frac{1}{4}\\ &= \frac{3}{4}\left(\frac{5+1}{2}\right) \end{align*} \end{comment} \item If $n$ is even and $L_2=0$ (therefore $L_1$ is odd, and $L_1 \geq 3$), then $\Pr[e_1\notin M]=\frac{1}{2}$. Therefore, $e$ is added to $M$ with probability $\frac{1}{2}$. \begin{align*} \mathbb{E}\left[|M\cap P|\right] \geq p_1(n-1)+\frac{1}{2} \geq p_0(n) \end{align*} \item If $n$ is even, $L_1=n-2$ and $L_2=1$.
\begin{align*} \mathbb{E}\left[|M\cap P|\right]&\geq p_0(n-2)+1\\ &= \frac{3}{4}\left(\frac{n-2}{2}\right)+\frac{1}{4}+1\\ &\geq p_0(n) \end{align*} \begin{comment} \item If path of length $n$, such that $n$ is even, is formed by joining a path of length $n-3$ and a path of length $2$ by a new edge: With probability $3/4$ atleast one of the joining edges of two paths will be present and with probability $1/4$, none of them will be present. \begin{align*} E\left[path_{n}^{even}\right]&= \frac{3}{4}\left(E\left[path_{n-3}^{odd}\right]+E\left[path_{2}^{even}\right]\right) + \frac{1}{4}\left(E\left[path_{n-3}^{odd}\right]+E\left[path_{2}^{even}\right] + 1\right)\\ &\geq \frac{3}{4}\left(\frac{n-2}{2}\right)+1+\frac{1}{4}\\ &= \frac{3}{4}\left(\frac{n}{2}\right)+\frac{1}{2} \end{align*} \end{comment} \item If $n$ is even, and $L_2>1$, where $n=L_1+L_2+1$, $L_1$ is even, and $L_2$ is odd. $\Pr[e_1\notin M, e_2\notin M]=\frac{1}{4}$ \begin{align*} \mathbb{E}\left[|M\cap P|\right]&\geq p_0(L_1)+p_1(L_2)+\frac{1}{4}\\ &= \frac{3}{4}\left(\frac{L_1+L_2+1}{2}\right)+\frac{1}{4}+\frac{3}{8}-\frac{3}{8}+\frac{1}{4}\\ &\geq p_0(n) \end{align*} \item If $n$ is odd, and $L_2=0$, (therefore $L_1$ is even, and $L_1 \geq 3$), $\Pr[e_1\notin M]=\frac{1}{2}$. Therefore, $e$ is added to $M$ with probability $\frac{1}{2}$. \begin{align*} \mathbb{E}\left[|M\cap P|\right] \geq p_0(n-1)+\frac{1}{2} = p_1(n) \end{align*} \item If $n$ is odd, $L_1=n-2$ and $L_2=1$. \begin{align*} \mathbb{E}\left[|M\cap P|\right]&\geq p_1(n-2)+1\\ &= \frac{3}{4}\left(\frac{n-2}{2}\right)+\frac{3}{8}+1\\ &\geq p_1(n) \end{align*} \begin{comment} \item If path of length $n$, such that $n$ is odd, is formed by joining a path of length $n-3$ and a path of length $2$ by a new edge: With probability $3/4$ atleast one of the joining edges of two paths will be present and with probability $1/4$, none of them will be present. \begin{align*} E\left[path_{n}^{odd}\right]&= \frac{3}{4}\left(E\left[path_{n-3}^{even}\right]+E\left[path_{2}^{even}\right]\right) + \frac{1}{4}\left(E\left[path_{n-3}^{even}\right]+E\left[path_{2}^{even}\right] + 1\right)\\ &\geq \frac{3}{4}\left(\frac{n-3}{2}\right)+\frac{1}{2}+1+\frac{1}{4}\\ &\geq \frac{3}{4}\left(\frac{n+1}{2}\right) \end{align*} \end{comment} \item If $n$ is odd, and $L_2>1$, where $n=L_1+L_2+1$, $L_1$ is even, and $L_2$ is even. $\Pr[e_1\notin M, e_2\notin M]=\frac{1}{4}$ \begin{align*} \mathbb{E}\left[|M\cap P|\right]&\geq p_0(L_1)+p_0(L_2)+\frac{1}{4}\\ &= \frac{3}{4}\left(\frac{L_1+L_2+1}{2}\right)+\frac{1}{4}+\frac{1}{4}-\frac{3}{8}+\frac{1}{4}\\ &= p_1(n) \end{align*} \item If $n$ is odd, and $L_2>1$, where $n=L_1+L_2+1$, $L_1$ is odd, and $L_2$ is odd. $\Pr[e_1\notin M, e_2\notin M]=\frac{1}{4}$ \begin{align*} \mathbb{E}\left[|M\cap P|\right]&\geq p_1(L_1)+p_1(L_2)+\frac{1}{4}\\ &= \frac{3}{4}\left(\frac{L_1+L_2+1}{2}\right)+\frac{3}{8}+\frac{3}{8}-\frac{3}{8}+\frac{1}{4}\\ &\geq p_1(n) \end{align*} \end{enumerate} This completes the induction and hence implies a $\frac{4}{3}$-competitive ratio for this algorithm. \end{proof} \end{document}
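As an illustration of the preemptive rule analyzed above, the acceptance test and the accompanying dual updates can be sketched in a few lines of Python; the function and variable names (\texttt{process\_edge}, \texttt{gamma}, and the dictionaries \texttt{M} and \texttt{y}) are purely illustrative.

\begin{verbatim}
# Sketch of the deterministic preemptive rule for online MWM:
# accept a new edge iff its weight exceeds (1 + gamma) times the total
# weight of the matched edges it conflicts with, evicting those edges.
# M maps a matched vertex to its matched edge (u, v, w); y holds duals.

def process_edge(M, y, u, v, w, gamma):
    conflicts = {M[x] for x in (u, v) if x in M}   # matched edges sharing u or v
    if conflicts and w <= (1 + gamma) * sum(e[2] for e in conflicts):
        return False                               # reject: duals already cover e
    for (a, b, _) in conflicts:                    # evict the conflicting edges
        del M[a], M[b]
    M[u] = M[v] = (u, v, w)                        # accept e into the matching
    y[u] = max(y.get(u, 0.0), (1 + gamma) * w)     # keep y_u >= (1+gamma) w(e)
    y[v] = max(y.get(v, 0.0), (1 + gamma) * w)
    return True
\end{verbatim}

For instance, with $\gamma=1$ a new edge is accepted only if it weighs more than twice the matched weight it displaces, and the resulting competitive ratio is $(1+\gamma)(2+\frac{1}{\gamma})=6$.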
\begin{document} \title{ Stable $3$-spheres in $\mathbb{C}^{3}$} \author{ Isabel M.C.\ Salavessa} \date{} \protect\footnotetext{\!\!\!\!\!\!\!\!\!\!\!\!\! {\bf MSC 2000:} Primary: 53C42; 53C38; 58E35. Secondary: 35J20; 49R50\\ {\bf ~~Key Words:} Stability, Parallel Mean curvature, Calibration, Cauchy-Riemann inequality, Spherical harmonics.\\[1mm] Partially supported by Funda\c{c}\~{a}o para a Ci\^{e}ncia e Tecnologia, through the plurianual project of Centro de F\'{\i}sica das Interac\c{c}\~{o}es Fundamentais and programs PTDC/MAT/101007/2008, PTDC/MAT/118682/2010. } \maketitle ~~~\\[-10mm] {\footnotesize Centro de F\'{\i}sica das Interac\c{c}\~{o}es Fundamentais, Instituto Superior T\'{e}cnico, Technical University of Lisbon, Edif\'{\i}cio Ci\^{e}ncia, Piso 3, Av.\ Rovisco Pais, 1049-001 Lisboa, Portugal;~ [email protected]}\\[5mm] {\small {\bf Abstract:} By only using spectral theory of the Laplace operator on spheres, we prove that the unit 3-dimensional sphere of a 2-dimensional complex subspace of $\mathbb{C}^3$ is an $\Omega$-stable submanifold with parallel mean curvature, when $\Omega$ is the K\"{a}hler calibration of rank $4$ of $\mathbb{C}^3$.}\\[1mm] \textbf{1. Introduction}\\ \par In 2000, Frank Morgan introduced the notion of multi-volume for an $m$-dimensional submanifold $M$ of a Euclidean space $\mathbb{R}^{m+n}$, as a volume enclosed by orthogonal projections onto axis $(m+1)$-planes. He characterized stationary submanifolds for the area functional with prescribed multi-volume as submanifolds with mean curvature vector $H$ prescribed by a constant multivector $\xi\in \wedge_{m+1}\mathbb{R}^{m+n}$, namely $H=\xi\lfloor \vec{S}$, where $\vec{S}$ is the unit tangent plane of $M$, and proved the existence of a minimizer among rectifiable currents, as well as their regularity under general conditions on the boundary. In this setting, a question has arisen on conditions for $\|H\|$ to be constant. In (Salavessa, 2010) we extended the variational characterization of hypersurfaces with constant mean curvature $\|H\|$ to submanifolds with higher codimension, when the ambient space is any Riemannian manifold $\bar{M}^{m+n}$, as discovered by Barbosa, do Carmo and Eschenburg (1984, 1988) for the case $n=1$. This generalization amounts to defining an ``enclosed'' $(m+1)$-volume of an $m$-dimensional immersed submanifold $F:M^m\to \bar{M}^{m+n}$, $m\geq 2$, as the $\Omega$-volume defined by each one-parameter variation family $F(x,t)=F_t(x)$ of $F(x,0)=F(x)$, where $\Omega$ is a semi-calibration on the ambient space $\bar{M}$, that is, an $(m+1)$-form $\Omega$ which satisfies $|\Omega(e_0,e_1, \ldots, e_{m})|\leq 1$, for any orthonormal system $e_i$ of $T\bar{M}$. A submanifold with calibrated extended tangent space $H\oplus TM$ is a critical point of the area functional, for compactly supported $\Omega$-volume preserving variations, if and only if it has constant mean curvature $\|H\|$. In this case we have $H=\|H\|\, \Omega\, \lfloor \vec{S}$. From a deeper inspection of this proof, one can see that the initial assumption of calibrated extended tangent space can be dropped, since it will appear as a consequence of being a critical point itself. This, as well as its relation to Morgan's formalism, will be explained in detail in a future paper. Assuming that $M$ has parallel mean curvature $H$, a second variation is then computed, and its non-negativeness defines stability of $M$.
This corresponds to the non-negativeness of the quadratic form associated with the $L^2$-self-adjoint $\Omega$-Jacobi operator $\mathcal{J}_{\Omega}(W)=\mathcal{J}(W)+m\|H\|C_{\Omega}(W)$, acting on sections in the twisted normal bundle $H^1_{0,T}(NM)= {\cal F}\oplus H^1_0(E)$, where the set ${\cal F}$ of $H^1_0$-functions with zero mean value is identified with the set of sections of the form $f\nu$, with $f\in {\cal F}$ and $\nu=H/\|H\|$, and where $E$ is the orthogonal complement of $\nu$ in the normal bundle. This Jacobi operator is the usual one, but with an extra term, namely a multiple of a first order differential operator $C_{\Omega}(W)$ that depends on $\Omega$. The twisted normal bundle is the $H^1$-completion of the vector space generated by the set $\mathcal{F}_{\Omega}$ of compactly supported infinitesimal $\Omega$-volume preserving variations, and, in general, we do not know whether it is larger than $\mathcal{F}_{\Omega}$ itself. Thus, $\Omega$-stability implies that the area functional of $F_t$ decreases when $t$ approaches $t_0=0$, for any family of $\Omega$-volume preserving variations $F_t$ of $F$, but we do not know whether the converse always holds. In case the ambient space is the Euclidean space $\mathbb{R}^{m+n}$, then a unit $m$-sphere of an $\Omega$-calibrated Euclidean subspace $\mathbb{R}^{m+1}$ of $\mathbb{R}^{m+n}$ is $\Omega$-stable if and only if, for any $(n-1)$-tuple of functions $f_{\alpha}\in C^{\infty}(\mathbb{S}^m)$, $2\leq \alpha \leq n$, the following integral inequality holds: \begin{equation} \sum_{\alpha<\beta}-2m\int_{\mathbb{S}^m} f_{\alpha}\xi(W_{\alpha},W_{\beta})(\nabla f_{\beta})dM\leq \sum_{\alpha}\int_{\mathbb{S}^m}\|\nabla f_{\alpha}\|^2dM, \end{equation} where $W_{\alpha}$ is a fixed global parallel orthonormal (o.n.) frame of $\mathbb{R}^{n-1}$, the orthogonal complement of $\mathbb{R}^{m+1}$ spanned by $\mathbb{S}^m$, and $\xi$ is the $T^*\mathbb{S}^m$-valued 2-form on $\mathbb{R}^{n-1}_{/\mathbb{S}^{m}}$ $$\xi(W,W')(X)=\Omega(W, W',*X),\quad W,W'\in \mathbb{R}^{n-1}, X\in T^*\mathbb{S}^m$$ where $*: T\mathbb{S}^m\to\wedge^{m-1}T\mathbb{S}^m$ is the star operator. If (1) holds and \begin{equation} \bar{\nabla}_{W}\Omega(W, e_1, \ldots, e_m)=0, \quad \forall W\in N\mathbb{S}^m, \end{equation} where $e_i$ is an o.n.\ frame of $T\mathbb{S}^m$, then in (Salavessa, 2010, proposition 4.5) we have shown that for each $\alpha<\beta$, $\xi(W_{\alpha},W_{\beta})$ must be co-exact as a 1-form on $\mathbb{S}^m$, that is, $$ \xi_{\alpha\beta}:=\xi(W_{\alpha},W_{\beta})=\delta \omega_{\alpha\beta}, $$ for some globally defined 2-form $\omega_{\alpha\beta}$ on $\mathbb{S}^m$. This is the case when $\Omega$ is a parallel $(m+1)$-form on $\mathbb{R}^{m+n}$. Using these forms $\omega_{\alpha\beta}$, the stability condition (1) is translated into the \em long $\Omega$-Cauchy-Riemannian integral inequality: \em \begin{equation} \sum_{\alpha<\beta}-2m\int_{\mathbb{S}^m}\omega_{\alpha\beta}(\nabla f_{\alpha},\nabla f_{\beta}) dM\leq \sum_{\alpha}\int_{\mathbb{S}^m}\|\nabla f_{\alpha}\|^2dM.
\end{equation} If we fix $\alpha<\beta$, and set $f=f_{\alpha}$, $h=f_{\beta}$, and $f_{\gamma}=0$ $\forall \gamma \neq \alpha,\beta$, (1) reduces to \begin{equation} -2m\int_{\mathbb{S}^m}f\xi_{\alpha\beta}(\nabla h) dM\leq \int_{\mathbb{S}^m}\|\nabla f\|^2dM + \int_{\mathbb{S}^m}\|\nabla h\|^2dM, \end{equation} and if we replace $f$ by $cf$, and $h$ by $c^{-1}h$, where $c^2= \|\nabla h\|_{L^2}/\|\nabla f\|_{L^2}$, then we obtain the corresponding equivalent \em short $\Omega$-Cauchy-Riemannian integral inequality \em \begin{eqnarray} -m\int_{\mathbb{S}^m}\omega_{\alpha\beta}(\nabla f, \nabla h) dM\leq \sqrt{\int_{\mathbb{S}^m}\|\nabla f\|^2dM} \sqrt{ \int_{\mathbb{S}^m}\|\nabla h\|^2dM}, \end{eqnarray} holding for all functions $f,h\in C^{\infty}(\mathbb{S}^m)$. The $\Omega$-stability of a submanifold with calibrated extended tangent space and parallel mean curvature depends on the curvature of the ambient space and on the calibration $\Omega$ (Salavessa, 2010). It always holds on Euclidean spheres if $C_{\Omega}$ vanishes. This last condition is equivalent to the condition (2) and $\xi\equiv 0$ ((Salavessa, 2010), Lemma 4.4). In the case $n=2$ the latter condition is satisfied, but for $n \geq 3$ the operator $C_{\Omega}$ may not vanish for spheres, even if $\Omega$ is parallel. If $C_{\Omega}$ does not vanish, spheres of calibrated vector subspaces may not be $\Omega$-stable. We first consider $\Omega$ any parallel $(m+1)$-form on $\mathbb{R}^{m+n}$. Laplace spherical harmonics of $\mathbb{S}^m$ of degree $l$ are the eigenfunctions for the closed eigenvalue problem with respect to the Laplacian operator corresponding to the eigenvalue $\lambda_l=l(l+m-1)$, and they are just the harmonic homogeneous polynomial functions of degree $l$ of $\mathbb{R}^{m+1}$ restricted to $\mathbb{S}^m$. We denote by $E_{\lambda_l}$ the finite-dimensional subspace of $H^1( \mathbb{S}^m)$ spanned by these $\lambda_l$-eigenfunctions. In the first theorem we show how each 1-form $\xi_{\alpha\beta}$ transforms a spherical harmonic $f$ into another spherical harmonic $h$:\\[1mm] {\bf Theorem 1.1.} \em If $\Omega$ is parallel, then for each $f\in E_{\lambda_l}$, $h=\xi_{\alpha\beta}(\nabla f)$ is also in $E_{\lambda_l}$, and it is $L^2$-orthogonal to $f$. \em \\ In this paper we study the stability of the unit 3-sphere of a 2-dimensional complex subspace of $\mathbb{C}^3$ with respect to the K\"{a}hler calibration. In this case $C_{\Omega}$ does not vanish. Let $\varpi$ be the K\"{a}hler form of $\mathbb{C}^3=\mathbb{R}^6$, and $\Omega$ the K\"{a}hler calibration of rank 4, $$ \varpi=dx^{12}+ dx^{34}+dx^{56}, \quad~~~~~ \Omega=\frac{1}{2}\varpi^2.$$ The unit sphere of $\mathbb{R}^4\times\{0\}$ is immersed into $\mathbb{R}^6=\mathbb{C}^3$, by the inclusion map $\phi=(\phi_1,\ldots,\phi_4,0):\mathbb{S}^3 \to \mathbb{C}^3$. We have only one of those 1-forms $$\xi:=\xi_{56}= *(d\phi^1\wedge d\phi^2+ d\phi^3\wedge d\phi^4)= \phi^1 d\phi^2-\phi^2 d\phi^1+ \phi^3 d\phi^4-\phi^4 d\phi^3,$$ and $\xi= \delta \omega,$ with $\omega=\frac{1}{2}*\xi=\frac{1}{2} (d\phi^1\wedge d\phi^2+ d\phi^3\wedge d\phi^4)=\frac{1}{2}\phi^*\varpi$. Our main theorem is the following:\\[2mm] {\bf Theorem 1.2.} \em Three-dimensional spheres of $\mathbb{C}^2$ are $\Omega$-stable submanifolds of $\mathbb{C}^3$ with parallel mean curvature, where $\Omega= \frac{1}{2}\varpi^2$ is the K\"{a}hler calibration of rank $4$.
\em \\[2mm] The Cauchy-Riemann inequality version of the $\Omega$-stability is described in the following corollary:\\[2mm] {\bf Corollary 1.1.} \em The Cauchy-Riemann inequality $$-\int_{\mathbb{S}^3} \varpi(\nabla f,\nabla h)dM \leq \frac{2}{3}\sqrt{\int_{\mathbb{S}^3}\|\nabla f\|^2dM} \sqrt{\int_{\mathbb{S}^3}\|\nabla h\|^2dM}$$ holds for any smooth functions $f$ and $h$ of $~\mathbb{S}^3$, with equality if and only if $f, h\in E_{\lambda_1}$, with $f=\sum_i\mu_i\phi_i$ and $h=\sum_i\sigma_i\phi_i$, where $\sigma_2=-\mu_1$, $ \sigma_1=\mu_2$, $\sigma_4=-\mu_3$, $\sigma_3=\mu_4$.\em \\[1mm] Finally, we state that the 3-sphere is the unique smooth closed submanifold that solves the $\Omega$-isoperimetric problem among a certain class of immersed submanifolds:\\[2mm] {\bf Theorem 1.3.} \em The unit 3-sphere of a complex 2-dimensional subspace of $\mathbb{C}^3$ is the unique closed immersed 3-dimensional submanifold $\phi: M\to \mathbb{C}^{3}$ with parallel mean curvature, trivial normal bundle, and complex extended tangent space $H\oplus TM$, that is $\Omega$-stable for the K\"{a}hler calibration of rank 4, and satisfies the inequality $$\int_M S(2 + h \|H\|)dM \leq 0,$$ where $h$ and $S$ are the height functions $ h=\langle \phi, \nu\rangle$ and $S= \sum_{ij} \langle \phi, (B(e_i,e_j))^F \rangle B^{\nu}(e_i,e_j).$ \em \\[2mm] {\bf Remark. } On a closed K\"{a}hler manifold $(M,J)$ with K\"{a}hler form $\varpi(X,Y)=g(JX,Y)$, if $f,h:M\to \mathbb{R}$ are smooth functions, then by the Cauchy-Schwarz inequality, $$\left|\int_M\varpi(\nabla f, \nabla h)dM\right| \leq \sqrt{\int_{M}\|\nabla f\|^2dM} \sqrt{\int_{M}\|\nabla h\|^2dM}, $$ with equality if and only if $\nabla h=\pm J\nabla f$, or equivalently $f\pm ih:M\to \mathbb{C}$ is a holomorphic function. If this is the case, then $f$ and $h$ are constant functions. On the other hand, globally defined functions, sufficiently close to holomorphic functions defined on a sufficiently large open set, are expected to satisfy an almost equality. This is not the case of $\mathbb{S}^3$, which is not a complex manifold, and somehow explains the coefficient $2/3$ in Corollary 1.1.\\[2mm] {\bf Remark.} In the case of $3$-spheres in $\mathbb{C}^3$ we have only one form $\xi_{\alpha\beta}$, that is, the long Cauchy-Riemann inequality is the short one. We wonder if a general proof of short Cauchy-Riemann inequalities can always be obtained for Euclidean $m$-spheres in $\mathbb{R}^{m+n}$, by using the spectral theory of spheres, when $\Omega$ is any parallel calibration. Note that (4) is immediately satisfied for $f,h\in E_{\lambda_l}$, if $\lambda_l\geq m^2$, that is $l\geq m$, so it remains to consider the cases $l\leq m-1$. For 3-spheres we have to consider polynomial functions up to order $l=2$, while for 2-spheres we have to consider only the case $l=1$. A related remark is given at the end of section 3.\\[1mm] \textbf{2. Preliminaries}\\ We consider an oriented Riemannian manifold $M$ of dimension $m$, with Levi-Civita connection $\nabla$ and Ricci tensor $Ricci^M:TM \to TM$. In what follows $e_1, \ldots, e_m$ denotes a local direct o.n. frame.\\[2mm] {\bf Lemma 2.1.} \em Let $\xi$ be a co-exact 1-form on a Riemannian manifold $M$, with $\xi=\delta \omega$, where $\omega$ is a 2-form. Then for any function $f\in C^{2}(M)$, $$\xi(\nabla f)=div(\nabla^{\omega}f),$$ where $\nabla^{\omega}f= \sum_i\omega(\nabla f,e_i)e_i~$.
Moreover, for any $f,h\in C^{\infty}_0(M)$ $$\int_M f \xi(\nabla h)dM =\int_{M}\omega(\nabla f, \nabla h)dM=- \int_M h \xi(\nabla f)dM.$$ \em \\[2mm] {\bf Proof.} We may assume at a point $x_0$, $\nabla e_i=0$. Then at $x_0$ \begin{eqnarray*} \xi(\nabla f)&=&\delta \omega(\nabla f)=-\sum_i\nabla_{e_i}\omega(e_i, \nabla f) =\sum_i-\nabla_{e_i}(\omega(e_i, \nabla f))+ \omega(e_i, \nabla_{e_i}\nabla f)\\ &=& div (\nabla^{\omega} f)+ \sum_{ij}Hess f(e_i,e_j)\omega(e_i,e_j). \end{eqnarray*} The last equality proves the first equality of the lemma, because $Hess f(e_i,e_j)$ is symmetric on $i,j$ and $\omega(e_i,e_j)$ is skew-symmetric. The other equalities of the lemma follow from $div(f X)= \langle \nabla f, X\rangle+f div(X)$, holding for any vector field $X$ and function $f$. \qed\\ The $\delta$ and star operators acting on $p$-forms on an oriented Riemannian $m$-manifold $M$ satisfy $\delta=(-1)^{mp+m+1}*d*$, $**=(-1)^{p(m-p)}Id$, and for a 1-form $\xi$ the DeRham Laplacian $\Delta$ and the rough Laplacian $\bar{\Delta}$ are related by the following formulas $$ \begin{array}{l} \Delta \xi (X)= (d\delta + \delta d)\xi (X)= -\bar{\Delta}\xi (X) +\xi(Ricci^M(X)),\\[1mm] \bar{\Delta}\xi (X)= trace \nabla^2 \xi(X)= \sum_i \nabla_{e_i}\nabla_{e_i}\xi(X) -\nabla_{\nabla_{e_i}e_i}\xi(X). \end{array} $$ If $\xi=\delta \omega$, then $\delta \xi=0$, and so $\Delta \xi(X)=\delta d \xi (X)= -\sum_i \nabla_{e_i}(d\xi) (e_i,X)$. We also recall the following well-known formula (see e.g. Salavessa \& Pereira do Vale (2006)) for $f\in C^{\infty}(M),$ $$(\bar{\Delta}df)(X)=\sum_i\nabla^2_{e_i,e_i}df (X) =g(\nabla (\Delta f), X) +df(Ricci^M(X)).$$ Thus, \begin{equation} \begin{array}{l} \bar{\Delta} (\nabla f) = \nabla (\Delta f)+ Ricci^M(\nabla f),\\[1mm] (\bar{\Delta }\xi)(\nabla f)=-(\delta d\xi)(\nabla f) +\xi(Ricci^M(\nabla f)). \end{array} \end{equation} Now we suppose that $M$ is an immersed oriented hypersurface of a Riemannian manifold ${M'}$, with Riemannian metric $\langle, \rangle$, defined by an immersion $\phi:M\to {M'}$ with unit normal $\nu$, second fundamental form $B$ and corresponding Weingarten operator $A$ in the $\nu$ direction, given by $$B(e_i,e_j)= \langle A(e_i),e_j\rangle =\langle {\nabla'}_{e_i}e_j,\nu\rangle = -\langle e_j, \nabla'_{e_i}\nu\rangle, $$ where ${\nabla'}$ denotes the Levi-Civita connection on ${M'}$. The scalar mean curvature of $M$ is given by $$ H=\frac{1}{m} Trace\, B= \sum_i\frac{1}{m} B(e_i,e_i).$$ The curvature operator of ${M'}$, ${R'}(X,Y,Z,W)=\langle -{\nabla'}_X{\nabla'}_YZ+{\nabla'}_Y{\nabla'}_XZ +{\nabla'}_{[X,Y]}Z, W\rangle$, can be seen as a self-adjoint operator of wedge bundles ${R'}: \wedge^2 T{M'} \to \wedge^2 T{M'}$, $$\langle {R'}(u\wedge v), z\wedge w\rangle ={R'}(u,v,z,w),$$ and so $R'(u\wedge v)=\sum_{i<j} R'(u, v, e_i, e_j)e_i\wedge e_j$, where $$\langle u\wedge v, z\wedge w\rangle= det\left[\begin{array}{cc} \langle u, z\rangle & \langle u,w\rangle\\ \langle v, z\rangle & \langle v, w\rangle \end{array}\right].$$ In what follows, we suppose that $\hat{\xi}$ is a parallel $(m-1)$-form on ${M'}$, and $\xi$ is given by $$\xi=*\phi^*\hat{\xi}$$ where $*$ is the star operator on $M$. In this case $\xi$ is obviously co-closed, but not necessarily co-exact. We employ the usual inner products in $p$-forms and morphisms.\\[2mm] {\bf Lemma 2.2.} \em Assume $m\geq 3$.
Then for all $i,j$ $$\begin{array}{l} (\nabla_{e_i}\xi) (e_j)= \sum_{k}-B(e_i,e_k)\hat{\xi}(\nu, *(e_k\wedge e_j))= -\hat{\xi}(\nu, *(A(e_i)\wedge e_j)),\\[1mm] \Delta\xi(e_j)= \delta\, d \xi(e_j) =\hat{\xi}\left( \nu, *\big(e_j\wedge ( m\nabla H -[Ricci^{{M'}}(\nu)]^T)\big) + {R'}(e_j\wedge \nu)\right) + \xi(\Theta_B(e_j)), \end{array}$$ where $[Ricci^{{M'}}(\nu)]^T=\sum_k Ricci^{{M'}}(\nu, e_k)e_k$ and $\Theta_B:T{M}\to T{M}$ is the morphism given by $\Theta_B=\|B\|^2 Id + mH A -2A^2$. \em \\[2mm] {\bf Proof.} We fix a point $x_0\in M$ and take $e_i$ a local o.n. frame s.t. $\nabla e_i(x_0)=0$. We will compute $d\xi(e_i,e_j)$, at $x$ on a neighbourhood of $x_0$. Recall that for any $p$-form $\sigma$, we have $*\sigma=\sigma *$, where the star operator on the r.h.s. can be seen as acting on $\wedge^{m-p}TM$, with $*e_i=(-1)^{i-1}e_1\wedge \ldots \wedge \hat{e}_i\wedge \ldots e_m$, and for $i<j$, $*(e_i\wedge e_j)=(-1)^{i+j-1}e_1\wedge\ldots \wedge \hat{e}_i \wedge \ldots\wedge \hat{e}_j \wedge \ldots \wedge e_m$. Using the fact that $\hat{\xi}$ is a parallel form on $M'$, we have for $x$ near $x_0$, \begin{eqnarray*} \begin{array}{rl} \nabla_{e_i}(\xi (e_j))=&\sum_{k\neq j}(-1)^{j-1}\hat{\xi}(e_1, \ldots ,{\nabla'}_{e_i}e_k, \ldots, \hat{e}_j, \ldots, e_m)\\[1mm] =& \sum_{k<j}(-1)^{k+j}\hat{\xi}( {\nabla'}_{e_i}e_k, e_1, \ldots ,\hat{e}_k, \ldots, \hat{e}_j, \ldots, e_m)\\ &+\sum_{k>j}(-1)^{k+j-1}\hat{\xi}( {\nabla'}_{e_i}e_k, e_1, \ldots ,\hat{e}_j, \ldots, \hat{e}_k, \ldots, e_m)\\[1mm] =&\sum_{k<j}-\langle \nabla_{e_i}e_k,e_j\rangle \hat{\xi}(*e_k) -B(e_i,e_k)\hat{\xi}(\nu, *(e_k\wedge e_j))\\ &+\sum_{k>j}-\langle \nabla_{e_i}e_k,e_j\rangle \hat{\xi}(*e_k) +B(e_i,e_k)\hat{\xi}(\nu, *(e_j\wedge e_k))\\[1mm] =& \xi(\nabla_{e_i}e_j)+ \sum_{k\neq j}-B(e_i,e_k)\hat{\xi}(\nu, *(e_k\wedge e_j)). \end{array} \end{eqnarray*} Hence, $(\nabla_{e_i}\xi) (e_j) = \sum_{k\neq j}-B(e_i,e_k)\hat{\xi}(\nu, *(e_k\wedge e_j))$, which proves the first sequence of equalities of the lemma. Now, \begin{eqnarray*} d\xi(e_i,e_j) &=& (\nabla_{e_i}\xi) (e_j)-(\nabla_{e_j}\xi) (e_i)\\ &=& \sum_{k\neq j}-B(e_i,e_k)\hat{\xi}(\nu, *(e_k\wedge e_j)) +\sum_{k\neq i}B(e_j,e_k)\hat{\xi}(\nu, *(e_k\wedge e_i)), \end{eqnarray*} and by Codazzi's equation, $$ \begin{array}{l} (\nabla_{e_i}B)(e_j,e_k)= (\nabla_{e_j}B)(e_i,e_k)-{R'}(e_i,e_j,e_k,\nu)\\ \sum_i (\nabla_{e_i}B)(e_i,e_k)=m\nabla_{e_k}H- Ricci^{{M'}}(e_k,\nu). \end{array}$$ Note that $B_{ik}=(\nabla_{e_j}B)(e_i,e_k)$ is a symmetric matrix, and if we define $A_{ki}=\hat{\xi}(\nu,* (e_k\wedge e_i))$ (set to zero if $k=i$), then $A_{ik}$ is skew-symmetric. Thus, $\sum_{k\neq i} B_{ik}A_{ki}=\sum_{k,i}B_{ik}A_{ki}=0.$ Furthermore, if we set $C_{ik}=-{R'}(e_i,e_j,e_k,\nu)$, then $C_{ik}-C_{ki}={R'}(e_k,e_i,e_j,\nu)$.
Hence, $$\sum_{i}\sum_{k\neq i} C_{ik}A_{ki}=\sum_{ik} C_{ik}A_{ki}= \sum_{ik} \frac{1}{2}((C_{ik}+ C_{ki})+ (C_{ik}- C_{ki}))A_{ki} =\sum_{ki}\frac{1}{2}{R'}(e_k,e_i,e_j,\nu)A_{ki}.$$ Therefore, for each $j$, at $x_0$ \betagin{eqnarray*} \lefteqn{-\delta d \xi (e_j) = \sum_i \nabla_{e_i}(d\xi(e_i,e_j))}\\ &=& \sum_{k\neq j}\sum_i-(\nabla_{e_i}B)(e_i,e_k)\mbox{\lie h}at{\xi}(\nu,*(e_k\wedge e_j)) -B(e_i,e_k)\nabla_{e_i}(\mbox{\lie h}at{\xi}(\nu,*(e_k\wedge e_j)))\\ &&+ \sum_{k\neq i}\sum_j(\nabla_{e_i}B)(e_j,e_k)\mbox{\lie h}at{\xi}(\nu,*e_k\wedge e_i)) +B(e_j,e_k)\nabla_{e_i}(\mbox{\lie h}at{\xi}(\nu, *(e_k\wedge e_i))\\ &=&\sum_{k\neq j} (-m\nabla_{e_k}H + Ricci^{{M'}}(e_k, \nu)) \mbox{\lie h}at{\xi}(\nu, *(e_k\wedge e_j)) +\sum_{k,i} \frac{1}{2} {R'}(e_k,e_i,e_j,\nu)\mbox{\lie h}at{\xi}(\nu,*(e_k\wedge e_i)) + S \end{eqnarray*} where \betagin{eqnarray*} \betagin{array}{lcl} S&=&\sum_i\sum_{k<j}(-1)^{k+j}B(e_i,e_k)\mbox{\lie h}at{\xi}({\nabla'}_{e_i}\nu, e_1, \ldots, \mbox{\lie h}at{e}_k, \ldots, \mbox{\lie h}at{e}_j, \ldots, e_m)\\ &&+ \sum_i\sum_{k>j}(-1)^{k+j-1}B(e_i,e_k)\mbox{\lie h}at{\xi}({\nabla'}_{e_i}\nu, e_1, \ldots, \mbox{\lie h}at{e}_j, \ldots, \mbox{\lie h}at{e}_k, \ldots, e_m)\\ &&+\sum_i\sum_{k<i}(-1)^{k+i-1}B(e_j,e_k)\mbox{\lie h}at{\xi}({\nabla'}_{e_i}\nu, e_1, \ldots, \mbox{\lie h}at{e}_k, \ldots, \mbox{\lie h}at{e}_i, \ldots, e_m)\\ &&+ \sum_i\sum_{k>i}(-1)^{k+i}B(e_j,e_k)\mbox{\lie h}at{\xi}({\nabla'}_{e_i}\nu, e_1, \ldots, \mbox{\lie h}at{e}_i, \ldots, \mbox{\lie h}at{e}_k, \ldots, e_m) \end{array}\\ \betagin{array}{lcl} =&&\sum_{i}\sum_{k<j}-B(e_i,e_k)B(e_i, e_k)\xi(e_j)+ B(e_i,e_j)B(e_i, e_k)\xi(e_k)\\ &&+\sum_{i}\sum_{k>j}B(e_i,e_j)B(e_i, e_k)\xi(e_k)- B(e_i,e_k)B(e_i, e_k)\xi(e_j)\\ &&+ \sum_{i}\sum_{k<i}B(e_i,e_k)B(e_j, e_k)\xi(e_i) -B(e_i,e_i)B(e_j, e_k)\xi(e_k)\\ &&+\sum_{i}\sum_{k>i}-B(e_i,e_i)B(e_j, e_k)\xi(e_k)+ B(e_i,e_k)B(e_j, e_k)\xi(e_i). \end{array} \end{eqnarray*} At this point we may assume that at $x_0$ the basis $e_i$ diagonalizes the second fundamental form, that is, $B(e_i,e_j)=\lambda_{i}\delta_{ij}$. Then, \betagin{eqnarray*} \betagin{array}{lcl} S &=& \sum_i\sum_{k<j}-\delta_{ik}\lambda_i^2\xi(e_j) + \delta_{ij}\delta_{ik}\lambda_i^2\xi(e_k)+\sum_i\sum_{k>j} \delta_{ij}\delta_{ik}\lambda_i^2\xi(e_k)- \delta_{ik}\lambda_i^2\xi(e_j)\\ &&+\sum_i\sum_{k<i}\delta_{ik}\delta_{jk}\lambda_k^2\xi(e_i) -\delta_{ii}\delta_{jk}\lambda_i\lambda_j\xi(e_k) +\sum_i\sum_{k>i} -\delta_{ii}\delta_{jk}\lambda_i\lambda_j\xi(e_k) +\delta_{ik}\delta_{jk}\lambda_k^2\xi(e_i)\\[1mm] &=& \sum_{i<j}-\lambda_i^2\xi(e_j)+ \sum_{i>j}-\lambda_i^2\xi(e_j) +\sum_{j<i}-\lambda_i\lambda_j\xi(e_j)+ \sum_{j>i}-\lambda_i\lambda_j\xi(e_j) \\[1mm] &=& \sum_{i\neq j}-\lambda_i^2\xi(e_j)-\lambda_i\lambda_j\xi(e_j) = \sum_{i}-\lambda_i^2\xi(e_j)-\lambda_i\lambda_j\xi(e_j) + (\lambda_j^2 +\lambda_j^2)\xi(e_j)\\[1mm] &=& -\|B\|^2\xi(e_j)-mH\xi(A(e_j))+2\xi(A^2(e_j)), \end{array} \end{eqnarray*} and the second sequence of equalities of the lemma is proved.\qed\\[2mm] If we suppose that $\Theta_B= \mu(x)Id$, taking $e_i$ a diagonalizing o.n. basis of the second fundamental form, $B(e_i,e_j)=\lambda_i \delta_{ij}$, then each $\lambda_i$ satisfies the quadratic equation $$2\lambda_i^2-mH\lambda_i + (\mu-\|B\|^2)=0,$$ which implies that we have at most two distinct possible principal curvatures $\lambda_{\pm}$. 
Moreover, from the above equation, summing over $i$, we derive that $\mu(x)$ must satisfy $\mu(x)=\frac{m-2}{m}\|B\|^2+ m H^2$, and so $$\lambda_{\pm}=\frac{1}{4}\Big(mH\pm \sqrt{\frac{16}{m}\|B\|^2+ m(m-8)H^2}\, \Big).$$ Note that, from $\|B\|^2\geq m \|H\|^2$, we have $\frac{16}{m}\|B\|^2+ m(m-8)H^2\geq (m-4)^2H^2$, and so there are one or two distinct principal curvatures. If $M$ is totally umbilical, then $\|B\|^2=mH^2$ and $\mu=2(m-1)\|H\|^2$. The previous lemma leads to the following conclusion:\\[1mm] {\bf Lemma 2.3.} \em Assuming ${M'}=\mathbb{R}^{m+1}$, $m\geq 3$, and taking $M$ a hypersurface with constant mean curvature, with $\Theta_B= \mu(x)Id$, where $\mu(x)$ is a smooth function on $M$, we get $\mu(x)=\frac{m-2}{m}\|B\|^2+ m H^2$ and $$ \Delta \xi =\mu \xi.$$ Furthermore, $\xi$ is an eigenform for the de Rham Laplacian operator, that is, $\mu(x)$ is constant, if and only if $\|B\|$ is constant. In case $M$ is the unit $m$-sphere $\mathbb{S}^m$, then $\Theta_B=\mu Id$, with $\mu=2(m-1)$, and taking $\nu_x=-x$ as unit normal, then, at each $x\in \mathbb{S}^m$, $$\begin{array}{l} (\nabla_{e_i}\xi) (e_j) = \hat{\xi}(x, *(e_i\wedge e_j))\\ d\xi(e_i,e_j) = 2\hat{\xi}(x, *(e_i\wedge e_j))\\ \Delta\xi=\delta d\xi =2(m-1)\xi.\end{array}$$ \em {\bf Lemma 2.4.} \em If $f\in C^{\infty}(\mathbb{S}^{m})$, then $\Delta(\xi(\nabla f))=\xi(\nabla \Delta f).$ \em \\[2mm] \noindent {\bf Proof. } We fix a point $x_0\in \mathbb{S}^m$ and take $e_i$ a local o.n.\ frame of the sphere s.t.\ $\nabla e_i(x_0)=0$. Let $f\in C^{\infty}(\mathbb{S}^{m})$. The following computations are at $x_0$. Using the above formulas (6) and the previous lemma, we have \begin{eqnarray*} \Delta(\xi(\nabla f))&=& \sum_i{\nabla_{e_i}}(\nabla_{e_i}(\xi(\nabla f)))= \sum_i{\nabla_{e_i}}\Big((\nabla_{e_i}\xi)(\nabla f) + \xi(\nabla_{e_i}\nabla f)\Big)\\ &=& (\bar{\Delta}\xi )(\nabla f) + \sum_i\Big(2(\nabla_{e_i}\xi) (\nabla_{e_i}\nabla f) +\xi(\nabla_{e_i}\nabla_{e_i}\nabla f)\Big)\\ &=& -2(m-1)\xi(\nabla f) +\xi(\nabla \Delta f) + 2(m-1)\xi(\nabla f) + \sum_i 2(\nabla_{e_i}\xi)(\nabla_{e_i}\nabla f). \end{eqnarray*} Since $Hess\, f(e_i,e_j)$ is symmetric in $i,j$ and, by Lemma 2.3, $(\nabla_{e_i}\xi)(e_j)$ is skew-symmetric, we have $$ \sum_i (\nabla_{e_i}\xi)(\nabla_{e_i}\nabla f)=\sum_{ij} Hess\, f(e_i,e_j) (\nabla_{e_i}\xi)(e_j)=0,$$ and the lemma is proved. \qed \\ \textbf{3. Proof of Theorem 1.1.}\\ We denote by $\nabla$ the Levi-Civita connection of $\mathbb{S}^m$ induced by the flat connection $\bar{\nabla}$ of $\mathbb{R}^{m+n}$. We are considering a parallel calibration $\Omega$ on $\mathbb{R}^{m+n}$. We fix $\alpha <\beta$ and define the 1-form on $\mathbb{S}^{m}$ $$\xi=\xi(W_{\alpha},W_{\beta})= * \phi^*\hat{\xi}=\delta\omega,$$ where $\hat{\xi}=\hat{\xi}_{\alpha\beta}$ and $\omega= \omega_{\alpha\beta}$. We recall that the eigenvalues of the Laplacian of $\mathbb{S}^m$ (the closed eigenvalue problem) are given by $\lambda_l={l(l+m-1)}$, with $l=0,1,2,\ldots$. We denote by $E_{\lambda_l}$ the eigenspace of dimension $m_l$ corresponding to the eigenvalue $\lambda_l$, and by $E_{\lambda_l}^+$ the $L^2$-orthogonal complement of the sum of the eigenspaces $E_{\lambda_i}$, $i=1,\ldots,l-1$, so that it is the sum of all eigenspaces $E_{\lambda}$ with $\lambda\geq \lambda_l$.
If $f\in E_{\lambda_l}$ and $h\in E_{\lambda_s}$, then $$\int_{\mathbb{S}^m}fh\, dM=0 ~~\mbox{if}~ l\neq s \mbox{~~~~and~~~~} \int_{\mathbb{S}^m}\langle \nabla f, \nabla h\rangle\, dM= \delta_{ls} \lambda_l\int_{\mathbb{S}^m}fh\, dM.$$ There exists an $L^2$-orthonormal basis $\psi_{l,\sigma}$ of $L^2(\mathbb{S}^m)$ consisting of eigenfunctions ($1\leq \sigma\leq m_l$). The Rayleigh characterization of $\lambda_l$ is given by $$\lambda_l=\inf_{f\in E_{\lambda_l}^{+}}\frac{\int_{\mathbb{S}^m} \|\nabla f\|^2dM}{\int_{\mathbb{S}^m}f^2dM},$$ and the infimum is attained for $f\in E_{\lambda_l}$. Each eigenspace $E_{\lambda_l}$ consists exactly of the restrictions to $\mathbb{S}^m$ of the harmonic homogeneous polynomial functions of degree $l$ on $\mathbb{R}^{m+1}$, and it has dimension $m_l=\binom{m+l}{m}-\binom{m+l-2}{m}$. Thus, each eigenfunction $\psi\in E_{\lambda_l}$ is of the form $\psi=\sum_{|a|=l} \mu_{a} \phi^{a}$, where the $\mu_{a}$ are scalars and $a=(a_1,\ldots, a_{m+1})$ denotes a multi-index of length $|a|=a_1+ \ldots +a_{m+1}=l$ and $$\phi^{a}= \phi_1^{a_1}\cdot\ldots\cdot\phi_{m+1}^{a_{m+1}}.$$ From $\nabla \phi_i= \epsilon_i^{\top}$ and $\sum_i \phi_i^2=1$, we see that \begin{equation} \left\{ \begin{array}{lcl} \langle \nabla \phi_i,\nabla\phi_j\rangle =\delta_{ij}- \phi_i\phi_j&& \|\nabla \phi_i\|^2= 1-\phi_i^2\\[1mm] \int_{\mathbb{S}^m}\phi^2_idM=\frac{1}{m+1}|\mathbb{S}^m| && \int_{\mathbb{S}^m}\|\nabla \phi_i\|^2dM =\lambda_1\int_{\mathbb{S}^m} \phi_i^2dM =\frac{m}{m+1}|\mathbb{S}^m|. \end{array}\right. \end{equation} We also denote by $\int_{\mathbb{S}^m}\phi^2dM$ any of the integrals $\int_{\mathbb{S}^m}\phi_i^2dM$, $i=1,\ldots, m+1$. We recall the following:\\[2mm] {\bf Lemma 3.1.} \em If $P:\mathbb{S}^m\to \mathbb{R}$ is a homogeneous polynomial function of degree $l$, then $$\int_{\mathbb{S}^m}P(x)dM=\frac{1}{\lambda_l}\int_{\mathbb{S}^m} \Delta^0P(x)dM.$$ In particular, $$\int_{\mathbb{S}^m}\phi^a dM=\sum_{1\leq i\leq m+1}\frac{a_i(a_i-1)}{l(l+m-1)} \int_{\mathbb{S}^m}\phi^{a-2\epsilon_i}dM,$$ where the terms with $a_i<2$ are considered to vanish. Thus, if some $a_i$ is odd, this integral vanishes. \em \\[1mm] \noindent {\bf Proof of Theorem 1.1}. By Lemma 2.4, if $f\in E_{\lambda_k}$ then $ \xi(\nabla f)\in E_{\lambda_k}$. From $$\int_{\mathbb{S}^m}f \xi(\nabla f)dM=\int_{\mathbb{S}^m}\omega( \nabla f, \nabla f)dM=0$$ we conclude that $f$ and $h= \xi(\nabla f)$ are $L^2$-orthogonal.\qed\\[4mm] {\bf Remark. } Let us consider $f,h\in E_{\lambda_l}$, and take the globally defined vector field on $\mathbb{S}^m$, $\xi^{\sharp}=\sum_j\xi(e_j)e_j$. From Lemma 2.2, we have $$ \langle \nabla h, \nabla(\xi(\nabla f))\rangle =-\hat{\xi}( \nu,*(\nabla h\wedge \nabla f))+ Hess\, f(\nabla h,\xi^{\sharp}). $$ By Theorem 1.1, $\xi(\nabla f)\in E_{\lambda_l}$ as well. The term $Hess\, f(\nabla h, \xi^{\sharp})$ is a sum of polynomial functions of degree $2l-3 + k_{\xi}$, where $k_{\xi}$ depends on $\xi^{\sharp}$, when expressed in terms of the $\phi_i$. Let us suppose that all $k_{\xi}$ are even. Then by Lemma 3.1, $\int_{\mathbb{S}^m} Hess\, f(\nabla h, \xi^{\sharp})dM=0$.
Since $\lambda_l\geq m$, and taking into consideration that $\Omega$ is a semi-calibration, \begin{eqnarray*} -\int_{\mathbb{S}^m} h\xi(\nabla f)dM &=& -\frac{1}{\lambda_l} \int_{\mathbb{S}^m} \langle \nabla h,\nabla (\xi(\nabla f))\rangle dM = \frac{1}{\lambda_l} \int_{\mathbb{S}^m}\hat{\xi}(\nu,*(\nabla h\wedge \nabla f))dM\\ &\leq & \frac{1}{\lambda_l}\int_{\mathbb{S}^m} \|\nabla h\|\, \| \nabla f\|dM\leq \frac{1}{m} \|\nabla f\|_{L^2}\|\nabla h\|_{L^2}. \end{eqnarray*} Thus, in this case the short Cauchy-Riemann inequality holds. An inspection of $\xi$ is required for each particular case of $\Omega$. A general proof of the short Cauchy-Riemann integral inequality, under appropriate conditions on $\Omega$, will be developed in a future paper.\\[1mm] \textbf{4. 3-spheres of $\mathbb{C}^2$ in $\mathbb{C}^3$}\\ \par In this section we specialize the Cauchy-Riemann inequalities to the case $m=n=3$, and for $\mathbb{R}^6= \mathbb{C}^3$ we consider the K\"{a}hler calibration $\frac{1}{2}\mathbf{\varpi}^2$ that calibrates the complex two-dimensional subspaces, that is, $$\Omega= dx^{1234}+ dx^{1256}+ dx^{3456}.$$ Thus, fixing $W_5=\epsilon_5$ and $W_6=\epsilon_6$ we have $\hat{\xi}:= \hat{\xi}_{56}= dx^{12}+ dx^{34}$, and $$\xi:=\xi_{56}= *\phi^*\hat{\xi}=*(d\phi^{12}+d\phi^{34}).$$ The volume element of $\mathbb{S}^m$ is $Vol_{S^m}=\sum_{i} (-1)^{i-1}\phi_i d\phi^{1\ldots\hat{i}\ldots m} $, and $*\xi$ is the unique 2-form s.t.\ $\xi\wedge * \xi= \|\xi\|^2Vol_{S^m}$. Using (7) we see that $\|\xi\|=\|*\xi\|=1$. Hence $$\begin{array}{l} \xi= \phi_1d\phi^2- \phi_2d\phi^1+ \phi_3d\phi^4- \phi_4d\phi^3\\ *\xi= d\phi^1\wedge d\phi^2+ d\phi^3\wedge d\phi^4=\frac{1}{2} d\xi=: d*\omega. \end{array}$$ Therefore, we may take $*\omega= \frac{1}{2}\xi$, that is $$\omega=\frac{1}{2}*\xi=\frac{1}{2}(d\phi^1\wedge d\phi^2 + d\phi^3\wedge d\phi^4)= \frac{1}{2}\phi^*\varpi. $$ Hence, to prove Theorem 1.2 and Corollary 1.1 we have to verify that, for any functions $f,h\in C^{\infty} (\mathbb{S}^3)$, one of the following equivalent inequalities holds: \begin{eqnarray} \int_{\mathbb{S}^3} -3\omega(\nabla f,\nabla h)dM =\int_{\mathbb{S}^3} -3f\xi(\nabla h)dM\leq \|\nabla f\|_{L^2} \|\nabla h\|_{L^2}\\ \int_{\mathbb{S}^3} -6\omega(\nabla f,\nabla h)dM =\int_{\mathbb{S}^3} -6f\xi(\nabla h)dM\leq \|\nabla f\|_{L^2}^2+ \|\nabla h\|_{L^2}^2.\nonumber \end{eqnarray} By Theorem 1.1 we only need to consider both $f,h\in E_{\lambda_l}$, for some $l$. Note that $\lambda_3=15$ and, since $\Omega$ is a calibration, $|\xi(X)|\leq \|X\|$.\\[2mm] {\bf Lemma 4.1.} \em If $f, h\in E_{\lambda_3}^+$ are nonzero, (8) holds, with strict inequality. \em \\[2mm] {\bf Proof.} By the Cauchy-Schwarz inequality and the Rayleigh characterization, $$\int_{\mathbb{S}^3} -3f\xi(\nabla h)dM\leq 3\|f\|_{L^2} \|\nabla h\|_{L^2}\leq \frac{3}{\sqrt{\lambda_3}} \|\nabla f\|_{L^2}\|\nabla h\|_{L^2} < \|\nabla f\|_{L^2}\|\nabla h\|_{L^2}, $$ with strict inequality in the last step, since neither $f$ nor $h$ may be constant.\qed\par We now verify that (8) holds for $f,h \in E_{\lambda_1}$ and $f,h\in E_{\lambda_2}$.
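The verification below rests on the elementary integrals over $\mathbb{S}^3$ recorded in (7), in Lemma 3.1 and in the next display. As a quick numerical spot-check (not part of the proof), these values can be reproduced by Monte Carlo integration; the Python sketch below assumes only that uniform points on $\mathbb{S}^3$ are obtained by normalizing Gaussian vectors, and the sample size is an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_S3(n):
    # Uniform points on the unit 3-sphere in R^4: normalize Gaussian vectors.
    x = rng.standard_normal((n, 4))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

phi = sample_S3(1_000_000)                   # columns play the role of phi_1,...,phi_4
# Averages over S^3, i.e. integrals divided by |S^3|:
print(np.mean(phi[:, 0]**2))                 # expect 1/4            (eq. (7))
print(np.mean(phi[:, 0]**4))                 # expect 1/8 = (1/2)(1/4)
print(np.mean(phi[:, 0]**2 * phi[:, 1]**2))  # expect 1/24 = (1/6)(1/4)
print(np.mean(phi[:, 0]**3 * phi[:, 1]))     # expect 0 (odd exponent, Lemma 3.1)
\end{verbatim}
The empirical values agree with the exact ones up to the expected Monte Carlo error.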
From (7) and Lemma 3.1, we have for $i\neq j$ \betagin{eqnarray} \betagin{array}{lcl} \int_{\mathbb{S}^3}\phi^2dM=\frac{1}{4}|\mathbb{S}^3|,&& \int_{\mathbb{S}^3}\phi^2_i\phi_j^2dM= \frac{1}{ 6}\int_{\mathbb{S}^3}\phi^2dM \\ \int_{\mathbb{S}^3}\phi^4dM=\frac{1}{2}\int_{\mathbb{S}^3}\phi^2dM,&& \int_{\mathbb{S}^3}\|\nabla \phi\|^2dM= 3\int_{\mathbb{S}^3}\phi^2dM\\[1mm] \omega(\nabla \phi_1,\nabla \phi_2) =\frac{1}{2}(1-\phi_1^2-\phi_2^2) && \omega(\nabla \phi_1,\nabla \phi_3) =\frac{1}{2}(-\phi_2\phi_3+\phi_1\phi_4) \\ \omega(\nabla \phi_1,\nabla \phi_4) =\frac{1}{2}(-\phi_2\phi_4-\phi_1\phi_3) && \omega(\nabla \phi_2,\nabla \phi_3) =\frac{1}{2}(\phi_1\phi_3+\phi_4\phi_2) \\ \omega(\nabla \phi_2,\nabla \phi_4) =\frac{1}{2}(\phi_1\phi_4-\phi_2\phi_3) & &\omega(\nabla \phi_3,\nabla \phi_4) =\frac{1}{2}(1-\phi_3^2-\phi_4^2). \end{array} \end{eqnarray} and moreover\\[2mm] {\bf Lemma 4.2.} $$\betagin{array}{l} 3\int\omega(\nabla\phi_1,\nabla\phi_2)=3\int\phi^2=\|\nabla \phi_1\|_{L^2} \|\nabla \phi_2\|_{L^2}=\|\nabla \phi\|_{L^2}^2\\ 3\int\omega(\nabla\phi_3,\nabla\phi_4)=3\int\phi^2 =\|\nabla \phi_3\|_{L^2} \|\nabla \phi_4\|_{L^2}=\|\nabla \phi\|_{L^2}^2\\ -3\int\omega(\nabla\phi_i,\nabla\phi_j)=0~~~\mbox{for other }ij\\[1mm] -3\int\phi_k\omega(\nabla\phi_i,\nabla\phi_j)=0~~~\forall i,j,k\\ -3\int\phi_1^2\omega(\nabla\phi_1,\nabla\phi_2)= -3\int\phi_2^2\omega(\nabla\phi_1,\nabla\phi_2)= -\frac{1}{2}\int \phi^2 \\ -3\int\phi_3^2\omega(\nabla\phi_1,\nabla\phi_2)= -3\int\phi_4^2\omega(\nabla\phi_1,\nabla\phi_2)= -\int \phi^2 \\ -3\int\phi_1^2\omega(\nabla\phi_3,\nabla\phi_4)= -3\int\phi_2^2\omega(\nabla\phi_3,\nabla\phi_4)= -\int \phi^2 \\ -3\int\phi_3^2\omega(\nabla\phi_3,\nabla\phi_4)= -3\int\phi_4^2\omega(\nabla\phi_3,\nabla\phi_4)= -\frac{1}{2}\int \phi^2 \\[2mm] -3\int\phi_1\phi_4\omega(\nabla\phi_1,\nabla\phi_3)= -3\int\phi_1\phi_3\omega(\nabla\phi_2,\nabla\phi_3)= -\frac{1}{4}\int\phi^2\\ -3\int\phi_1\phi_3\omega(\nabla\phi_1,\nabla\phi_4)= -3\int\phi_2\phi_3\omega(\nabla\phi_2,\nabla\phi_4) =\frac{1}{4}\int\phi^2\\ -3\int\phi_2\phi_3\omega(\nabla\phi_1,\nabla\phi_3)= -3\int\phi_2\phi_4\omega(\nabla\phi_1,\nabla\phi_4)= \frac{1}{4}\int\phi^2\\ -3\int\phi_2\phi_4\omega(\nabla\phi_2,\nabla\phi_3)= -3\int\phi_1\phi_4\omega(\nabla\phi_2,\nabla\phi_4)= -\frac{1}{4}\int\phi^2\\ -3\int\phi_i\phi_j\omega(\nabla\phi_k,\nabla\phi_s)=0~~\mbox{for other cases.} \\ \end{array} $$ \noindent {\bf Lemma 4.3.} \em If $f, h\in E_{\lambda_1}$, that is $f=\sum_i\mu_i\phi_i$, $h=\sum_j\sigma_j\phi_j$, for some constant $\mu_i, \sigma_j$, then (8) holds, with equality if and only if $\sigma_2=-\mu_1$, $\sigma_1=\mu_2$, $\sigma_4=-\mu_3$, $\sigma_3= \mu_4$. \em \\[2mm] {\bf Proof. } Using the previous lemma, $$\betagin{array}{lcl} -3\int\omega(\nabla f,\nabla h)dM &=& (\mu_1\sigma_2-\mu_2\sigma_1)\int -3\omega(\nabla \phi_1,\nabla \phi_2) +(\mu_3\sigma_4-\mu_4\sigma_3)\int -3\omega(\nabla \phi_3,\nabla \phi_4)\\ &=& -(\mu_1\sigma_2-\mu_2\sigma_1+\mu_3\sigma_4-\mu_4\sigma_3) \|\nabla \phi\|_{L^2}^2\\ &\leq& \frac{1}{2}(\sum_i \mu_i^2+ \sigma_i^2)\|\nabla \phi\|_{L^2}^2 =\frac{1}{2}(\|\nabla f\|_{L^2}^2+ \|\nabla h\|_{L^2}^2). \end{array} $$ The equality case follows immediately. \qed \\[2mm] {\bf Lemma 4.4.} \em If $f, h\in E_{\lambda_2}$ are nonzero, then (8) holds with strict inequality. \em \\[2mm] {\bf Proof. } Set $f=\sum_i\alphapha_i\phi_i^2+ \sum_{i<j}A_{ij}\phi_i\phi_j$, and $h=\sum_i\betata_i\phi_i^2+ \sum_{i<j}B_{ij}\phi_i\phi_j$, where $\alphapha_i, A_{ij}$, $\betata_i, B_{ij}$ are constants. 
Now we compute {\small \betagin{eqnarray*} \lefteqn{-3\int \omega(\nabla f, \nabla h)=}\\ &&-3\int \omega(\nabla \phi_1, \nabla \phi_2) [(2\alphapha_1\phi_1+ A_{12}\phi_2+A_{13}\phi_3+A_{14}\phi_4) (2\betata_2\phi_2+ B_{12}\phi_1+B_{23}\phi_3+B_{24}\phi_4)\\ &&\quad \quad \quad -(2\alphapha_2\phi_2+ A_{12}\phi_1+A_{23}\phi_3+A_{24}\phi_4) (2\betata_1\phi_1+ B_{12}\phi_2+B_{13}\phi_3+B_{14}\phi_4)]\\ &&-3\int \omega(\nabla \phi_1, \nabla \phi_3) [(2\alphapha_1\phi_1+ A_{12}\phi_2+A_{13}\phi_3+A_{14}\phi_4) (2\betata_3\phi_3+ B_{13}\phi_1+B_{23}\phi_2+B_{34}\phi_4)\\ &&\quad \quad \quad -(2\alphapha_3\phi_3+ A_{13}\phi_1+A_{23}\phi_2+A_{34}\phi_4) (2\betata_1\phi_1+ B_{12}\phi_2+B_{13}\phi_3+B_{14}\phi_4)]\\ &&-3\int \omega(\nabla \phi_1, \nabla \phi_4) [(2\alphapha_1\phi_1+ A_{12}\phi_2+A_{13}\phi_3+A_{14}\phi_4) (2\betata_4\phi_4+ B_{14}\phi_1+B_{24}\phi_2+B_{34}\phi_3)\\ &&\quad \quad \quad -(2\alphapha_4\phi_4+ A_{14}\phi_1+A_{24}\phi_2+A_{34}\phi_3) (2\betata_1\phi_1+ B_{12}\phi_2+B_{13}\phi_3+B_{14}\phi_4)]\\ &&-3\int \omega(\nabla \phi_2, \nabla \phi_3) [(2\alphapha_2\phi_2+ A_{12}\phi_1+A_{23}\phi_3+A_{24}\phi_4) (2\betata_3\phi_3+ B_{13}\phi_1+B_{23}\phi_2+B_{34}\phi_4)\\ &&\quad \quad \quad -(2\alphapha_3\phi_3+ A_{13}\phi_1+A_{23}\phi_2+A_{34}\phi_4) (2\betata_2\phi_2+ B_{12}\phi_1+B_{24}\phi_4+B_{23}\phi_3)]\\ &&-3\int \omega(\nabla \phi_2, \nabla \phi_4) [(2\alphapha_2\phi_2+ A_{12}\phi_1+A_{23}\phi_3+A_{24}\phi_4) (2\betata_4\phi_4+ B_{14}\phi_1+B_{24}\phi_2+B_{34}\phi_3)\\ &&\quad \quad \quad -(2\alphapha_4\phi_4+ A_{14}\phi_1+A_{24}\phi_2+A_{34}\phi_3) (2\betata_2\phi_2+ B_{12}\phi_1+B_{24}\phi_4+B_{23}\phi_3)]\\ &&-3\int \omega(\nabla \phi_3, \nabla \phi_4) [(2\alphapha_3\phi_3+ A_{13}\phi_1+A_{23}\phi_2+A_{34}\phi_4) (2\betata_4\phi_4+ B_{14}\phi_1+B_{24}\phi_2+B_{34}\phi_3)\\ &&\quad \quad \quad -(2\alphapha_4\phi_4+ A_{14}\phi_1+A_{24}\phi_2+A_{34}\phi_3) (2\betata_3\phi_3+ B_{13}\phi_1+B_{23}\phi_2+B_{34}\phi_4)]. 
\end{eqnarray*}} Thus, using Lemma 4.2, {\small \betagin{eqnarray*} \lefteqn{-3\int \omega(\nabla f, \nabla h)=}\\ &-3\int \omega(\nabla \phi_1, \nabla \phi_2)& [2\alphapha_1B_{12}\phi_1^2+ 2\betata_2A_{12}\phi_2^2 +A_{13}B_{23}\phi_3^2+A_{14}B_{24}\phi_4^2\\ &&-2\betata_1A_{12}\phi_1^2 -2\alphapha_2B_{12}\phi_2^2 -A_{23}B_{13}\phi_3^2-A_{24}B_{14}\phi_4^2]\\ &-3\int \omega(\nabla \phi_3, \nabla \phi_4)& [A_{13}B_{14}\phi_1^2+ A_{23}B_{24}\phi_2^2 +2\alphapha_{3}B_{34}\phi_3^2+2\betata_{4}A_{34}\phi_4^2\\ &&-A_{14}B_{13}\phi_1^2 -A_{24}B_{23}\phi_2^2 -2\betata_{3}A_{34}\phi_3^2-2\alphapha_{4}B_{34}\phi_4^2]\\ &-3\int \omega(\nabla \phi_1, \nabla \phi_3)& [2\alphapha_1B_{34}\phi_1\phi_4+ A_{14}B_{13}\phi_1\phi_4 -A_{13}B_{14}\phi_1\phi_4-2\betata_1A_{34}\phi_1\phi_4\\ &&+2\betata_3A_{12}\phi_2\phi_3 +A_{13}B_{23}\phi_2\phi_3 -A_{23}B_{13}\phi_2\phi_3-2\alphapha_3B_{12}\phi_2\phi_3]\\ &-3\int \omega(\nabla \phi_1, \nabla \phi_4)& [2\alphapha_1B_{34}\phi_1\phi_3+ A_{13}B_{14}\phi_1\phi_3 -A_{14}B_{13}\phi_1\phi_3-2\betata_1A_{34}\phi_1\phi_3\\ &&+2\betata_4A_{12}\phi_2\phi_4 +A_{14}B_{24}\phi_2\phi_4 -A_{24}B_{14}\phi_2\phi_4-2\alphapha_4B_{12}\phi_2\phi_4]\\ &-3\int \omega(\nabla \phi_2, \nabla \phi_3)& [2\betata_3A_{12}\phi_1\phi_3+ A_{23}B_{13}\phi_1\phi_3 -A_{13}B_{23}\phi_1\phi_3-2\alphapha_3B_{12}\phi_1\phi_3\\ &&+2\alphapha_2B_{34}\phi_2\phi_4 +A_{24}B_{23}\phi_2\phi_4 -A_{23}B_{24}\phi_2\phi_4-2\betata_2A_{34}\phi_2\phi_4]\\ &-3\int \omega(\nabla \phi_2, \nabla \phi_4)& [2\betata_4A_{12}\phi_1\phi_4+ A_{24}B_{14}\phi_1\phi_4 -A_{14}B_{24}\phi_1\phi_4-2\alphapha_4B_{12}\phi_1\phi_4\\ &&+2\alphapha_2B_{34}\phi_2\phi_3 +A_{23}B_{24}\phi_2\phi_3 -A_{24}B_{23}\phi_2\phi_3-2\betata_2A_{34}\phi_2\phi_3] \end{eqnarray*}} {\small \betagin{eqnarray*} \betagin{array}{rll} =&\int \phi^2\La{\{}& -\frac{1}{2}[2\alphapha_1B_{12}+2\betata_2A_{12}-2\betata_1A_{12}-2\alphapha_2B_{12} +2\alphapha_3B_{34}+2\betata_4A_{34}-2\betata_3A_{34}-2\alphapha_4B_{34}] \\ &&-[A_{13}B_{23}+A_{14}B_{24}-A_{23}B_{13}-A_{24}B_{14}+ A_{13}B_{14}+A_{23}B_{24}-A_{14}B_{13}-A_{24}B_{23}]\\ &&+\frac{1}{4}\La{[}-2\alphapha_1B_{34}-A_{14}B_{13}+A_{13}B_{14}+ 2\betata_1A_{34} + 2\betata_3A_{12}+A_{13}B_{23}-A_{23}B_{13}-2\alphapha_3B_{12} \\[1mm] &&\quad\quad+2\alphapha_1B_{34}+A_{13}B_{14}-A_{14}B_{13}- 2\betata_1A_{34} + 2\betata_4A_{12}+A_{14}B_{24}-A_{24}B_{14}-2\alphapha_4B_{12}\\[1mm] &&\quad\quad-2\betata_3A_{12}-A_{23}B_{13}+A_{13}B_{23}+ 2\alphapha_3B_{12} - 2\alphapha_2B_{34}-A_{24}B_{23}+A_{23}B_{24}+2\betata_2A_{34}\\[1mm] &&\quad\quad -2\betata_41A_{12}-A_{24}B_{14}+A_{14}B_{24}+ 2\alphapha_4B_{12} + 2\alphapha_2B_{34}+A_{23}B_{24}-A_{24}B_{23}-2\betata_2A_{34}\La{]}~~\La{\}} \\[2mm] =&\int\phi^2\La{\{}&-[\alphapha_1B_{12}+\betata_2A_{12}-\betata_1A_{12}-\alphapha_2B_{12} +\alphapha_3B_{34}+\betata_4A_{34}-\betata_3A_{34}-\alphapha_4B_{34}] \\ &&-[A_{13}B_{23}+A_{14}B_{24}-A_{23}B_{13}-A_{24}B_{14}+ A_{13}B_{14}+A_{23}B_{24}-A_{14}B_{13}-A_{24}B_{23}]\\ &&+\frac{1}{2}\La{[} -A_{14}B_{13}+A_{13}B_{14}+A_{13}B_{23}-A_{23}B_{13} +A_{14}B_{24}-A_{24}B_{14}-A_{24}B_{23}+A_{23}B_{24}\La{]}~~\La{\}}\\[2mm] =&\int\phi^2\La{\{}& [-\alphapha_1B_{12}-\betata_2A_{12}+\betata_1A_{12}+\alphapha_2B_{12} -\alphapha_3B_{34}-\betata_4A_{34}+\betata_3A_{34}+\alphapha_4B_{34}]\\ &&+ \frac{1}{2}[-A_{13}B_{23}-A_{14}B_{24}+A_{23}B_{13}+A_{24}B_{14}- A_{13}B_{14}-A_{23}B_{24}+A_{14}B_{13}+A_{24}B_{23}]~~ \La{\}} \end{array} \end{eqnarray*}} and applying the same lemmas we see that $$\|\nabla f\|_{L^2}^2=\left [2(\sum_k 
\alphapha_k^2)-\frac{4}{3}(\sum_{i<j} \alphapha_i \alphapha_j)+ \frac{4}{3}(\sum_{i<j} A^2_{ij})\right]\int \phi^2.$$ Hence, we have to verify if the following inequality is true: \betagin{eqnarray} [-\alphapha_1B_{12}-\betata_2A_{12}+\betata_1A_{12}+\alphapha_2B_{12} -\alphapha_3B_{34}-\betata_4A_{34}+\betata_3A_{34}+\alphapha_4B_{34}]\label{10}\\ +\frac{1}{2} [-A_{13}B_{23}-A_{14}B_{24}+A_{23}B_{13}+A_{24}B_{14}- A_{13}B_{14}-A_{23}B_{24}+A_{14}B_{13}+A_{24}B_{23}]\label{11}\\ +\frac{2}{3}(\sum_{i<j}\alphapha_i\alphapha_j + \betata_i\betata_j)\label{12}\\ \leq \sum_k (\alphapha_k^2 + \betata_k^2) + \frac{2}{3}(\sum_{i<j} A^2_{ij}+ B_{ij}^2).\label{13} \end{eqnarray} This is equivalent to prove the inequalities \betagin{eqnarray} (\ref{11})&\leq & \frac{2}{3} (A_{13}^2+A_{14}^2+A_{23}^2+A_{24}^2+B_{13}^2+B_{14}^2+B_{23}^2+B_{24}^2) \label{14}\\[2mm] (\ref{10})+(\ref{12})&\leq & \sum_k (\alphapha_k^2 + \betata_k^2) + \frac{2}{3}( A_{12}^2+A_{34}^2+B_{12}^2+B_{34}^2).\label{15} \end{eqnarray} Note that \betagin{eqnarray*} 2\times(\ref{11}) &\leq & (A_{13}^2+A_{14}^2+A_{23}^2+A_{24}^2+B_{13}^2 +B_{14}^2+B_{23}^2+B_{24}^2)\\ &\leq& \frac{4}{3}(A_{13}^2+A_{14}^2+A_{23}^2+A_{24}^2+B_{13}^2 +B_{14}^2+B_{23}^2+B_{24}^2), \end{eqnarray*} and so inequality (\ref{14}) holds, with equality if and only if $$A_{13}=A_{14}=A_{23}=A_{24}=B_{13}=B_{14}=B_{23}=B_{24}=0.$$ Now \betagin{eqnarray} 3\times (\ref{10}) &=& 3(\alphapha_2-\alphapha_1)B_{12}- 3(\betata_2-\betata_1)A_{12} + 3(\alphapha_4-\alphapha_3)B_{34}+ 3(-\betata_4+\betata_3)A_{34} \nonumber \\ &\leq& \frac{3}{2}( (\alphapha_2-\alphapha_1)^2+ (\betata_2-\betata_1)^2 + (\alphapha_4-\alphapha_3)^2+ (-\betata_4+\betata_3)^2)\nonumber \\ &&+ \frac{3}{2}(A_{12}^2+ A_{34}^2+B_{12}^2+B_{34}^2) \nonumber \\ &\leq &\frac{3}{2}( (\alphapha_2-\alphapha_1)^2+ (\betata_2-\betata_1)^2 + (\alphapha_4-\alphapha_3)^2+ (-\betata_4+\betata_3)^2)\label{16}\\ &&+ 2(A_{12}^2+ A_{34}^2+B_{12}^2+B_{34}^2).\label{17} \end{eqnarray} We will prove that \betagin{eqnarray} \label{18} (\ref{16})+ 3\times(\ref{12}) &\leq & \sum_k3(\alphapha_k^2+\betata_k^2), \end{eqnarray} with equality iff $\alphapha_1=\alphapha_2=\alphapha_3=\alphapha_4$ and $\betata_1=\betata_2=\betata_3=\betata_4$, which proves that (\ref{15}) holds. Furthermore, from (\ref{17}) we see that equality in (\ref{15}) is achieved iff $$A_{12}= A_{34}=B_{12}=B_{34}=0,\mbox{~~~ and for all~} i,j ~~~\alphapha_i=\alphapha_j,~~~\betata_i=\betata_j.$$ In order to prove (\ref{18}) we only have to show that $$\frac{3}{2}((\alphapha_2-\alphapha_1)^2+ (\alphapha_4-\alphapha_3)^2) +2\sum_{i<j}\alphapha_i\alphapha_j\leq 3\sum_k\alphapha_k^2,$$ or equivalently, that $$-2\alphapha_1\alphapha_2-2\alphapha_3\alphapha_4+4\alphapha_1\alphapha_3+4\alphapha_1\alphapha_4 +4\alphapha_2\alphapha_3+4\alphapha_2\alphapha_4\leq 3\sum_k \alphapha_k^2.$$ But this is just $$(\alphapha_1-\alphapha_3)^2+(\alphapha_3-\alphapha_2)^2+ (\alphapha_2-\alphapha_4)^2 +(\alphapha_4-\alphapha_1)^2+(\alphapha_1+\alphapha_2-\alphapha_3-\alphapha_4)^2\mbox{\lie g}eq 0,$$ with equality to zero iff $\alphapha_i=\alphapha_j$ $\forall ij$. We have proved that inequality (8) is satisfied, with equality iff $f=\alphapha(\sum_k\phi_k^2)= \alphapha$ constant and $ h$ constant, and so they must vanish. \qed\\[1mm] Theorem 1.1, with Lemmas 4.1, 4.3 and 4.4, prove that (8) holds for any pair of functions $(f,h)$, and so Theorem 1.2 is proved. 
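The sum-of-squares identity used in the final step of the proof can also be checked symbolically; the short sketch below assumes the sympy package and simply expands both sides.
\begin{verbatim}
import sympy as sp

a1, a2, a3, a4 = sp.symbols('a1 a2 a3 a4', real=True)

lhs = -2*a1*a2 - 2*a3*a4 + 4*a1*a3 + 4*a1*a4 + 4*a2*a3 + 4*a2*a4
rhs = 3*(a1**2 + a2**2 + a3**2 + a4**2)
sos = ((a1 - a3)**2 + (a3 - a2)**2 + (a2 - a4)**2
       + (a4 - a1)**2 + (a1 + a2 - a3 - a4)**2)

# rhs - lhs is exactly the sum of squares above, hence nonnegative,
# with equality iff a1 = a2 = a3 = a4.
print(sp.expand(rhs - lhs - sos))   # prints 0
\end{verbatim}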
Corollary 1.1 follows from these lemmas.\par In (Salavessa, 2010, Theorem 4.2) a uniqueness theorem was obtained for a class of closed $m$-dimensional submanifolds with parallel mean curvature and calibrated extended tangent space in a Euclidean space $\mathbb{R}^{m+n}$, satisfying an integral height inequality. We recall that result for the case $\Omega$ parallel. We denote by $B^{\nu}$ the $\nu$-component of the second fundamental form $B$ and by $B^F$ the $F$-component, $B= B^{\nu}+ B^F$, where $F$ is the orthogonal complement of $\nu$ in the normal bundle.\\[2mm] {\bf Theorem 4.1.} \em If $\Omega$ is a parallel calibration of rank $(m+1)$ on $\mathbb{R}^{m+n}$, and $\phi:M\to \mathbb{R}^{m+n}$ is an immersed closed $\Omega$-stable $m$-dimensional submanifold with parallel mean curvature and calibrated extended tangent space, and \begin{equation} \int_M S(2 + h \|H\|)dM \leq 0, \end{equation} where $ h=\langle \phi, \nu\rangle$ and $S= \sum_{ij} \langle \phi, (B(e_i,e_j))^F \rangle B^{\nu}(e_i,e_j),$ then $\phi$ is pseudo-umbilical and $S=0$. Furthermore, if $NM$ is a trivial bundle, then the minimal calibrated extension of $M$ is a Euclidean space $\mathbb{R}^{m+1}$, and $M$ is a Euclidean $m$-sphere. \em \\[1mm] \noindent Theorem 1.3 is an immediate consequence of Theorem 1.2 and the above theorem.\\ \par \textbf{Acknowledgements} \\[-2mm] The author would like to thank Dr.\ Ana Cristina Ferreira and the Universidade do Minho, Braga, for their hospitality during the \emph{Third Minho Meeting on Mathematical Physics} at Centro de Matem\'{a}tica da UM, in November 2011, where the final part of this work was completed. \\ \par \textbf{References}\\[3mm] {\small J.L.\ Barbosa and M.\ do Carmo (1984). {\sl Stability of hypersurfaces with constant mean curvature}, Math.\ Z., 185(3), 339-353.}\\[1mm] {\small J.L.\ Barbosa, M.\ do Carmo and J.\ Eschenburg (1988). {\sl Stability of hypersurfaces of constant mean curvature in Riemannian manifolds}, Math.\ Z., 197(1), 123-138.}\\[1mm] {\small F.\ Morgan (2000). {\sl Perimeter-minimizing curves and surfaces in $\mathbb{R}^n$ enclosing prescribed multi-volume}, Asian J.\ Math., 4(2), 373-382.}\\[1mm] {\small I.M.C.\ Salavessa (2010). {\sl Stability of submanifolds with parallel mean curvature in calibrated manifolds}, Bull.\ Braz.\ Math.\ Soc., NS, 41(4), 495-530.}\\[1mm] {\small I.M.C.\ Salavessa \& A.\ Pereira do Vale (2006). {\sl Transgression forms in dimension 4}, IJGMMP, 3(4-5), 1221-1254.} \end{document}
\begin{document} \title{An Improved Asymptotic Key Rate Bound for a Mediated Semi-Quantum Key Distribution Protocol} \begin{abstract} Semi-quantum key distribution (SQKD) protocols allow for the establishment of a secret key between two users Alice and Bob, when one of the two users (typically Bob) is limited or ``classical'' in nature. Recently it was shown that protocols exist when both parties are limited/classical in nature if they utilize the services of a quantum server. These protocols are called mediated SQKD protocols. This server, however, is untrusted and, in fact, adversarial. In this paper, we reconsider a mediated SQKD protocol and derive a new proof of unconditional security for it. In particular, we derive a new lower bound on its key rate in the asymptotic scenario. Furthermore, we show this new lower bound is an improvement over prior work, thus showing that the protocol in question can tolerate higher rates of error than previously thought. \end{abstract} \section{Introduction} Quantum key distribution (QKD) protocols are designed to allow two users, Alice ($A$) and Bob ($B$), to establish a shared secret key, secure against even an all-powerful adversary Eve ($E$). Since the creation, in 1984, of the BB84 protocol \cite{QKD-BB84}, there have been several protocols developed which achieve this end. For a general survey, the reader is referred to \cite{QKD-survey}. Semi-quantum key distribution (SQKD) protocols, first introduced in 2007 by Boyer et al.\ \cite{SQKD-first}, have the same goal: the establishment of a secret key, secure against an all-powerful adversary. However now, instead of allowing both $A$ and $B$ to manipulate quantum resources (e.g., prepare and measure qubits in a variety of bases) as is permissible in a typical QKD protocol, only $A$ is allowed such liberties while $B$ is limited to performing certain ``classical'' or ``semi-quantum'' operations (the operations $B$ is limited to are discussed shortly). In this scenario, $A$ is called the \emph{quantum user} while $B$ is called the \emph{classical user} (in a fully quantum protocol, such as BB84 \cite{QKD-BB84}, both $A$ and $B$ are fully quantum). Such protocols are theoretically interesting as they attempt to answer the question ``how quantum does a protocol need to be in order to gain an advantage over a classical one?'' \cite{SQKD-first,SQKD-second} These SQKD protocols, by necessity, rely on a two-way quantum communication channel - one which permits a qubit to travel from the quantum user $A$, to the classical user $B$, then back to $A$. The limited classical $B$, upon receiving a qubit, is able to do one of two things: he may \emph{measure and resend} the qubit or \emph{reflect} the qubit. Measuring and resending involves taking the qubit received from $A$ and subjecting it to a $Z$ basis measurement (the $Z$ basis being $\{\ket{0}, \ket{1}\}$). His result $\ket{r}$, for $r \in \{0,1\}$, is then resent to $A$. Reflecting the qubit involves $B$ allowing the qubit to simply pass through his lab undisturbed, in which case he learns nothing about its state. Thus, the classical user is only able to work directly with the computational $Z$ basis - he cannot, for example, perform a measurement in the Hadamard $X$ basis (denoted $\{\ket{\pm} = 1/\sqrt{2}(\ket{0}\pm\ket{1})\}$). This two-way quantum channel, of course, greatly complicates the security analysis of these protocols as the attacker $E$ is allowed two opportunities to interact with the traveling qubit.
While several SQKD protocols have been proposed \cite{SQKD-first,SQKD-second,SQKD-3,SQKD-4,SQKD-Single-Security,SQKD-cl-A,SQKD-random-prepare}, up until recently, the bulk of security proofs for such semi-quantum protocols have focused on the notion of \emph{robustness}. This property, introduced in \cite{SQKD-first,SQKD-second}, requires that any attack which causes $E$ to potentially gain information, by necessity causes a disturbance which $A$ and $B$, with non-zero probability, may detect. Note that robustness says nothing about the amount of information gained (which may be high) nor the probability of detection (which may be low). Recently, some work has been accomplished moving beyond robustness. In particular \cite{SQKD-information} derived expressions relating the disturbance of $E$'s attack to her information gain for the SQKD protocol of \cite{SQKD-first}, assuming she is limited to performing \emph{individual attacks} (those attacks where $E$ is limited to performing the same attack operation each iteration of the protocol and is forced to measure her quantum memory before $A$ and $B$ utilize their key for any purpose - these are weaker attacks than \emph{collective attacks} - where $E$ performs the same operation each iteration but can postpone her measurement until any future time of her choice - and \emph{general attacks} - those attacks where $E$ is allowed to do anything within the laws of physics; the reader is referred to \cite{QKD-survey} for more information on these possible attack models). Also, \cite{SQKD-cl-A} described a new SQKD protocol and computed a similar relation between the disturbance and information gain of $E$'s attack, though again assuming individual attacks. Our work recently has been in the proof of unconditional security (making no assumptions on the type of attack $E$ uses) of several SQKD protocols. In particular, in \cite{SQKD-Krawec-SecurityProof}, we proved the security of Boyer et al.,'s original SQKD protocol \cite{SQKD-first} showing that $A$ and $B$ are able to distill a secure secret key so long as the noise in the channel is less than $5.34\%$. We have also, in \cite{SQKD-Single-Security}, derived a series of security results for single state SQKD protocols (single state protocols were introduced in \cite{SQKD-lessthan4}, though without proofs of unconditional security). While that particular work stopped short of unconditional security proofs, the security results we prove there can be applied toward that goal \cite{SQKD-Krawec-dissertation}. Finally, and also the topic of this current paper, in \cite{SQKD-MultiUser}, we designed a new \emph{mediated SQKD} protocol. Such a protocol allows two limited classical users $A$ and $B$ to establish a secure secret key with the help of a quantum server (denoted $C$ for ``Center''). With such a system, one could envision a QKD infrastructure consisting of several limited ``classical'' users, utilizing the services of this quantum server in order to distill secret keys (each key known only to a pair of classical users, not the server). If the quantum server $C$ were honest, such a goal would be trivial. Instead, we assumed the server is untrusted - indeed, we can assume the server is the all-powerful quantum adversary. 
In our original work in \cite{SQKD-MultiUser}, we proved our mediated protocol's unconditional security by computing a lower bound on its key rate (to be defined shortly, roughly speaking, the key rate is the ratio of the number of secret key bits to the number of qubits sent) in the asymptotic scenario (as the number of qubits sent approaches infinity). We showed that, if the server is ``semi-honest,'' that is the server follows our protocol correctly but afterwards tries to learn something about the key, then $A$ and $B$ may distill a secret key so long as the noise in the channel is less than $19.9\%$. If the server is fully adversarial (that is, the server does not follow the protocol and instead performs any operations he likes within the laws of quantum physics), then $A$ and $B$ may distill a secret key so long as the noise in the channel is less than $10.65\%$ (a number that is close to BB84's $11\%$ \cite{QKD-renner-keyrate}). In this paper, we reconsider our security proof and improve on these tolerated noise levels by computing a new bound on the protocol's key rate using an alternative method of proof. In our original proof, we utilized a bound on the Jensen-Shannon Divergence from \cite{QC-info-trace-bound} to find bounds on the quantum mutual information between $A$ and $C$'s system. Here, we will use a technique adapted from \cite{QKD-keyrate-general} (which we also successfully applied in our proof of security for a different SQKD protocol in \cite{SQKD-Krawec-SecurityProof}) which allows us to bound the conditional von Neumann entropy between $A$ and $C$'s system. This new technique provides a far more optimistic bound. Indeed, as we will see, if the server is semi-honest, using our new key rate bound in this paper, we will see that $A$ and $B$ may distill a secret key so long as the noise in the quantum channel is less than $22.05\%$. If the server $C$ is adversarial, then we can withstand up to $12.5\%$. Furthermore, our new proof of security does not make any assumptions concerning the symmetry of $E$'s attack as was done in our original proof. Thus the contributions of this paper are two-fold. First, we improve our original security proof finding a more optimistic bound on our mediated SQKD protocol's tolerated noise level. Our new proof in this paper also requires fewer assumptions. Secondly, we describe an alternative proof method, which as we've already shown in \cite{SQKD-Krawec-SecurityProof}, can be applied to other semi-quantum protocols (and perhaps other QKD protocols utilizing a two-way channel). We also show how this alternative method can produce more optimistic bounds than produced by bounding the Jensen-Shannon divergence as done in our original proof \cite{SQKD-MultiUser}. Thus, this observation can be useful in other QKD protocol proofs and, perhaps, may lead to insight into various quantum information theoretic bounds. Furthermore, it may be possible to adapt the techniques we use in this proof to work with other QKD protocols which utilize a two-way quantum communication channel. \section{The Protocol} We now review the mediated protocol of \cite{SQKD-MultiUser}. This protocol is designed to allow two classical users $A$ and $B$ (users who can only measure and resend in the $Z$ basis or reflect qubits) to distill a secret key, known only to themselves, with the help of a quantum server $C$. Note that, since $A$ and $B$ can only make $Z$ basis measurements, they must rely on the server to perform measurements in alternative bases. 
However, neither $A$ nor $B$ trust $C$; indeed we will prove the security of this protocol assuming $C$ is adversarial. Since this paper is concerned only with devising an improved key rate bound, we do not alter the protocol in any way. This mediated protocol assumes the existence of a quantum communication channel connecting Alice ($A$) to the server $C$ and Bob ($B$) to $C$. We do not require a quantum channel directly connecting $A$ and $B$. Besides this, we assume an authenticated classical channel connects $A$ and $B$. The server $C$, and indeed any other third party eavesdropper, may listen to the messages sent on this channel, but they may not send messages of their own. Finally, we assume a classical channel connects the server to either $A$ or $B$ (or both). See Figure \ref{fig:diagram}. Using similar arguments as in \cite{SQKD-MultiUser}, this channel connecting $C$ to $A$ or $B$ need not be authenticated; though, indeed, better security bounds may be achieved if it is authenticated as we discuss later. We will assume that any message sent from $C$ is received by both parties. That is, $C$ cannot send two different messages $m_A \ne m_B$ to $A$, respectively $B$, without being caught. Such a mechanism is easy to achieve: any message sent from $C$ to $A$ (or $B$) is then forwarded by $A$ (or $B$) to the other honest party $B$ (or $A$) using the authenticated channel. \begin{figure} \caption{A diagram of the scenario we consider. Here, $A$ and $B$ are the honest, classical users who wish to establish a secret key; $E$ is a third party eavesdropper; and $C$ is the untrusted, potentially adversarial quantum server. A quantum channel (the parallel lines in the center of the diagram) connects $C$ to $A$ and also $C$ to $B$; these channels pass through the third party eavesdropper (which may be a single entity or two separate entities). An authenticated classical channel connects $A$ to $B$ (the dotted line); only $A$ and $B$ may write to this channel, however all parties can read from it. Finally a classical channel connects the server $C$ to $A$ and $B$; this channel is not necessarily authenticated and the eavesdroppers may write their own messages on it.} \label{fig:diagram} \end{figure} Our protocol will utilize the Bell basis, the states of which we denote: \begin{align*} \ket{\Phi^+} &= \frac{1}{\sqrt{2}}(\ket{00} + \ket{11})\\ \ket{\Phi^-} &= \frac{1}{\sqrt{2}}(\ket{00} - \ket{11})\\ \ket{\Psi^+} &= \frac{1}{\sqrt{2}}(\ket{01} + \ket{10})\\ \ket{\Psi^-} &= \frac{1}{\sqrt{2}}(\ket{01} - \ket{10}) \end{align*} We first describe the protocol assuming an honest server $C$. The quantum communication stage of our protocol repeats the following process: \begin{enumerate} \item $C$ prepares $\ket{\Phi^+}$, sending one qubit to $A$, the other to $B$. \item $A$ and $B$ (the classical users) choose, independently of each other, to either reflect the qubit back to $C$ or to measure and resend it. If they measure in the $Z$ basis and resend, they save their measurement results as their potential raw key bit. However, $A$ and $B$ do not yet reveal their choice of operation. \item Regardless of $A$ and $B$'s choice, $C$ will receive two qubits back from them. $C$ then performs a Bell measurement. If this measurement produces the result $\ket{\Phi^-}$, $C$ sends the message ``$-1$'' to both $A$ and $B$ using the classical channel. Otherwise, for all other measurement results, $C$ sends the message ``$+1$''. 
\item $A$ and $B$ now divulge, using the authenticated classical channel, their choice in step 2 (but not their measurement results). If $A$ and $B$ both measure and resent, and if $C$ sends the message ``$-1$'', they will \emph{accept} this iteration; that is to say, they will both use this iteration's measurement results to contribute towards their raw key. They will not accept this iteration (i.e., it will not be used to contribute to the raw key) if $C$ sent the message ``$+1$''. Otherwise, if $A$ and $B$ both reflected, it should be the case that $C$ sends ``$+1$''; any other message by $C$ is counted as an error. All other cases are discarded. \end{enumerate} Assuming an honest $C$ and the lack of any channel noise or third-party eavesdropper, it is clear that the above protocol is correct (i.e., $A$ and $B$ will agree on the same raw key). Indeed, if $A$ and $B$ both reflect, the state arriving back to $C$ on step 3 is $\ket{\Phi^+}$; thus $C$ should always respond with the message ``$+1$''. Alternatively, if $A$ and $B$ both measure and resend, the state arriving at $C$ is $\ket{i,i}$ (where $i \in \{0,1\}$ is $A$ and $B$'s measurement result and potential raw key bit for this iteration). $C$ will then perform a Bell basis measurement (projecting the state into one of the Bell basis states). This measurement results in outcome $\ket{\Phi^+}$ or $\ket{\Phi^-}$ each with probability $1/2$. Only if $C$ reports ``$-1$'' (i.e., he measures $\ket{\Phi^-}$) will $A$ and $B$ use their measurement results ``$i$'' as their raw key bit. (Note that if they also use those iterations where $C$ sends ``$+1$'', this opens a potential easy attack strategy for $C$: he can always measure in the computational basis and send ``$+1$'' each iteration.) Let $p_M$ be the probability that $A$ or $B$ measure and resend. Note that, without noise and assuming an honest $C$, only $p_Mp_M/2 = p_M^2/2$ iterations are expected to be accepted. However, to improve efficiency, we may use a technique from \cite{QKD-BB84-Modification} (which was meant to improve the key rate of the BB84 protocol). Namely, we may set $p_M$ arbitrarily close to 1. Thus, in the asymptotic scenario, we can expect $1/2$ of the qubits sent to contribute to the raw key. It is an open question as to whether or not a mediated SQKD protocol can be designed which improves this rate to something larger than $1/2$. \section{Security Proof} While this protocol's unconditional security was proven in \cite{SQKD-MultiUser}, we will now provide an alternative proof of security which, as we demonstrate later, provides a more optimistic lower bound on the protocol's key rate (we will define the key rate in the asymptotic scenario shortly). Note that, to prove the security of this protocol, we must not only consider an adversarial $C$, but also the existence of third-party, all-powerful eavesdroppers (as shown in Figure \ref{fig:diagram}). Our proof will work in stages: we will first prove security against an adversarial $C$, assuming $C$ is limited to collective attacks (those where he performs the same attack operation each iteration, but, unlike with an independent attack discussed earlier, he is free to postpone the measurement of his quantum memory until any future time of his choice) and assuming there are no third-party eavesdroppers. Following this, we will prove security against general attacks (those attacks where no assumptions are made other than $C$ follows the laws of physics) and assuming the existence of third-party eavesdroppers. 
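Before fixing notation for the proof, the correctness discussion above (an honest server and a noiseless channel) can be illustrated by a short classical simulation of the quantum communication stage. The Python sketch below is purely illustrative: the branching probabilities are exactly those stated in the protocol description, while the function name and sample size are arbitrary choices. It confirms that accepted iterations always yield matching raw key bits, and that roughly a $p_M^2/2$ fraction of iterations is accepted.
\begin{verbatim}
import random

def run(iterations=100_000, p_M=0.9):
    key_A, key_B, accepted = [], [], 0
    for _ in range(iterations):
        a_measures = random.random() < p_M
        b_measures = random.random() < p_M
        if a_measures and b_measures:
            # Measuring |Phi^+> in the Z basis gives perfectly correlated bits.
            i = random.randint(0, 1)
            # A Bell measurement of |i,i> yields |Phi^+> or |Phi^-> with prob. 1/2 each.
            c_sends_minus = random.random() < 0.5
            if c_sends_minus:              # iteration is accepted
                key_A.append(i)
                key_B.append(i)
                accepted += 1
        elif (not a_measures) and (not b_measures):
            # An honest C receives |Phi^+> back and always sends "+1": no error recorded.
            pass
        # mismatched choices are discarded
    assert key_A == key_B                  # correctness: identical raw keys
    return accepted / iterations           # approximately p_M**2 / 2

print(run())   # roughly 0.405 for p_M = 0.9
\end{verbatim}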
\subsection{Notation} We denote by $H(\cdot)$ the Shannon entropy function. Given a set $\{p_1, p_2, \cdots, p_n\}$ where $\sum_ip_i = 1$ and $p_i \ge 0$, then: \[ H(p_1, p_2, \cdots, p_n) = -\sum_{i=1}^np_i\log p_i, \] where all logarithms in this paper are base two unless otherwise specified. We will occasionally use the notational shortcut $H(\{p_i\}_i)$ to mean $H(p_1, p_2, \cdots, p_n)$. Furthermore, when $n=2$ (which forces $p_2 = 1-p_1$), we will occasionally write $h(p_1)$ to mean $H(p_1, p_2)$. Given a density operator $\rho$, we write $S(\rho)$ to be its von Neumann entropy. Let $\{\lambda_1, \lambda_2, \cdots, \lambda_n\}$ be the eigenvalues of finite dimensional $\rho$. Then $S(\rho) = -\sum_i\lambda_i\log\lambda_i$. When $\rho$ acts on a bipartite system $\mathcal{H}_A\otimes\mathcal{H}_B$, we will often write $\rho_{AB}$. Then, later, if we write $\rho_A$ we take that to mean the result of tracing out the $\mathcal{H}_B$ portion of $\rho_{AB}$ (i.e., $\rho_A = tr_B\rho_{AB}$). Similarly for $\rho_B$. Similarly, for multi-partite systems (e.g., $\rho_{ABC}$ acts on a tripartite system and $\rho_{BC} = tr_A\rho_{ABC}$). Given $\rho_{AB}$, we write $S(AB)$ to mean $S(\rho_{AB})$ and $S(B)$ to mean $S(\rho_B)$. Finally, we write $S(A|B)$ to be the conditional von Neumann entropy defined: $S(A|B) = S(AB) - S(B) = S(\rho_{AB}) - S(\rho_B)$. \subsection{Modeling the Protocol} For now, we will consider only an adversarial center $C$ and not third-party eavesdroppers. That is, the system in question consists only of $A$, $B$, and $C$ and no one else (the case when there are third-party attackers will be considered later). We will also assume $C$ employs collective attacks. Assuming this, a single iteration of our protocol may be described as a closed system in the Hilbert space: \[ \mathcal{H} = \mathcal{H}_A\otimes\mathcal{H}_B\otimes\mathcal{H}_{T_A}\otimes\mathcal{H}_{T_B}\otimes\mathcal{H}_C\otimes\mathcal{H}_{cl}, \] where: \begin{itemize} \item $\mathcal{H}_A$ and $\mathcal{H}_B$ are $A$ and $B$'s private registers storing their raw key bit. \item $\mathcal{H}_{T_A}$ and $\mathcal{H}_{T_B}$ are two dimensional spaces modeling the qubit channel connecting $C$ to $A$ and $C$ to $B$ respectively (they are the \emph{transit} space). \item $\mathcal{H}_C$ is $C$'s private quantum system. \item $\mathcal{H}_{cl}$ is a two-dimensional subspace, spanned by orthonormal basis $\{\ket{+1}, \ket{-1}\}$ used to model the classical message $C$ sends on this iteration. \end{itemize} We model the protocol in the same way as in \cite{SQKD-MultiUser}, the details of which we now quickly review. At start, $C$, who we now assume is fully adversarial, prepares, not necessarily $\ket{\Phi^+}$ as prescribed by the protocol, but instead the state $\ket{\phi_0} = \sum_{i,j}\alpha_{i,j}\ket{i,j}_{T_A,T_B}\otimes\ket{c_{i,j}}$ where the $\ket{c_{i,j}}$ are arbitrary normalized, though not necessarily orthogonal, states in $\mathcal{H}_C$. As shown in our original proof, there is no advantage to $C$ in sending the above state, versus sending the far simpler state: $\ket{{\psi}_0} = \sum_{i,j}\alpha_{i,j}\ket{i,j}_{T_A,T_B}$. That is, there is no advantage to $C$ in preparing a state, on step (1) of the protocol, that is entangled with his private quantum memory. The proof of this, which may be found in \cite{SQKD-MultiUser}, uses a technique introduced in \cite{SQKD-Single-Security}. Thus, when analyzing the security of this protocol, we may assume the state $C$ sends is unentangled with $\mathcal{H}_C$. 
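The entropy functions defined in the Notation subsection, together with the partial trace, are the only quantities needed in the computations that follow. For concreteness, here is a minimal numerical sketch (using numpy; purely illustrative and not part of the proof) of $S(\rho)$ and of the conditional entropy $S(A|B)$, evaluated on the state $\ket{\Phi^+}$ that an honest server would prepare.
\begin{verbatim}
import numpy as np

def S(rho):
    # von Neumann entropy (base 2) of a density matrix.
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def cond_entropy_A_given_B(rho_AB, dA, dB):
    # S(A|B) = S(AB) - S(B), with the B subsystem traced out explicitly.
    rho_B = np.trace(rho_AB.reshape(dA, dB, dA, dB), axis1=0, axis2=2)
    return S(rho_AB) - S(rho_B)

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)      # |Phi^+>
rho = np.outer(phi_plus, phi_plus.conj())
print(S(rho))                                       # 0.0 (pure state)
print(cond_entropy_A_given_B(rho, 2, 2))            # -1.0 (maximally entangled)
\end{verbatim}
The negative value of $S(A|B)$ for $\ket{\Phi^+}$ is a reminder that, unlike its classical counterpart, conditional von Neumann entropy may be negative.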
Following $A$ and $B$'s operation (either measuring and resending, or reflecting), the qubits return to $C$. The server, $C$, is now permitted to perform any operation of his choice, potentially entangling the qubits with his private quantum memory. Since he must also send a single classical bit to both $A$ and $B$ (from our earlier discussion, it is impossible for $C$ to send different messages to $A$ and $B$ thus we may assume he sends only a single message and it is received by both), his attack is modeled as a \emph{quantum instrument} \cite{QC-Instrument} $\mathcal{I}$ which acts on density operator $\rho = \rho_{T_AT_BC}$ as follows: \begin{equation}\label{eq:instrument} \mathcal{I}(\rho) = \ket{+1}\bra{+1}_{cl}\otimes\sum_{i=1}^{N_0}E_{i,0}\rho E_{i,0}^* + \ket{-1}\bra{-1}_{cl}\otimes\sum_{i=1}^{N_1}E_{i,1}\rho E_{i,1}^*, \end{equation} where the $E_{i,j}$ satisfy: \[ \sum_{i=1}^{N_0} E_{i,0}^*E_{i,0} + \sum_{i=1}^{N_1}E_{i,1}^*E_{i,1} = I. \] (Above, $I$ is the identity operator.) We may assume, without loss of generality that $N_0$ and $N_1$ are both finite. Finally, it can be shown (see \cite{SQKD-MultiUser} for details) that this attack may be represented, without loss of power to $C$, by a unitary operator $U = U_\mathcal{I}$ acting on a larger, though still finite dimensional, Hilbert space. Providing $C$ with this larger space potentially increases his power, thus providing us with a lower bound on the security of our protocol. Working with unitary attack operators turns out to be far simpler than working with attacks of the form in Equation \ref{eq:instrument}. This unitary attack operator, $U$, acts on $\mathcal{H}_{T_A}\otimes\mathcal{H}_{T_B}\otimes\mathcal{H}_C\otimes\mathcal{H}_{cl}$, where $\mathcal{H}_C$ has been suitably expanded. After $A$ and $B$'s operation, $C$ will apply unitary $U$. $C$ will then perform a projective measurement on the $\mathcal{H}_{cl}$ system in the $\{\ket{+1}, \ket{-1}\}$ basis. This measurement determines the message he sends. The post-measurement state represents the state of his ancilla in the event he sends that particular message. Such a procedure is mathematically equivalent to his use of $\mathcal{I}$ as proven in \cite{SQKD-MultiUser}. Clearly, a single iteration of the protocol, conditioning on the event the iteration is accepted (i.e., $A$ and $B$ both measure and resend, and $C$ sends ``$-1$'') may be described by a density operator of the form: \[ \rho_{ABC} = \sum_{i,j}p_{i,j}\ketbra{i,j}_{AB}\otimes\sigma_C^{(i,j)}, \] where $p_{i,j}$ is the probability that $A$ and $B$'s raw key bit is $i$ and $j$ respectively and $\sigma_C^{(i,j)}$ is the state of $C$'s quantum memory in that event. Following $N$ iterations of the quantum communication stage, assuming collective attacks, the overall system is in the state $\rho_{ABC}^{\otimes N}$. $A$ and $B$ will then perform parameter estimation, error correction, and privacy amplification protocols (see \cite{QKD-survey} for more information on these, now standard, processes) resulting in a secret key of size $\ell(N) \le N$ (possibly $\ell(N) = 0$ if $C$ has too much information on the raw key). We are interested in the ratio of secret key bits to raw key bits as the latter approaches infinity; that is, we are interested in the \emph{key rate}, denoted $r$, in the \emph{asymptotic scenario} \cite{QKD-survey}: \[ r = \lim_{N \rightarrow \infty} \frac{\ell(N)}{N}. 
\] For a state of the form $\rho^{\otimes N}$, it was shown in \cite{QKD-Winter-Keyrate,QKD-renner-keyrate,QKD-renner-keyrate2} that the key rate is: \begin{equation} r = \inf(I(A:B) - I(A:C)) = \inf(S(A|C) - S(A|B)). \end{equation} Here $I(A:B)$ is the (classical) mutual information held between $A$ and $B$'s system; $I(A:C)$ is the quantum mutual information between $A$ and $C$; the conditional entropy $S(A|C)$ was defined earlier in the notation section; $H(A|B)$ is the conditional Shannon entropy defined: $H(A|B) = H(AB) - H(B)$. Finally, the infimum is over all attack operators which induce the observed statistics (e.g., the observed error rate). Note that the first equation, involving mutual information, is due to \cite{QKD-Winter-Keyrate}, the second, equivalent version, is from \cite{QKD-renner-keyrate,QKD-renner-keyrate2} (showing the two are equal is trivial). While in our previous paper \cite{SQKD-MultiUser}, we used the key rate equation based on mutual information, in this paper we will bound $r$ by bounding the conditional entropy. Though these two equations produce equal results, the technique we use to bound the latter, as we show in this paper, allows us to provide a more optimistic lower bound on $r$. \subsection{The New Key Rate Bound} We must now describe the state of $A$, $B$, and $C$'s system after a single iteration of the protocol, conditioning on the event that this iteration is used to contribute towards the raw key (i.e., both $A$ and $B$ measure and resend, and $C$ sends ``$-1$''). Let $\ket{\psi_0} = \sum_{i,j}\alpha_{i,j}\ket{i,j}_{T_A,T_B}$ be the initial state prepared and sent by $C$ (from our earlier discussion this is without loss of generality), where the qubit $T_A$ is sent to $A$ and the qubit $T_B$ is sent to $B$. Now, assume both $A$ and $B$ measure and resend (other cases, though potentially useful for parameter estimation, do not contribute towards the raw key and thus are not considered for the time being). After $A$ and $B$'s measure and resend operation, the state is clearly: \[ \rho_1 = \sum_{i,j\in\{0,1\}} |\alpha_{i,j}|^2\ketbra{i,j}_{A,B}\otimes\ketbra{i,j}_{T_A,T_B}. \] At this point, the transit system returns to $C$'s control where he will perform an arbitrary, without loss of generality unitary, operator acting on $\mathcal{H}_{T_A}\otimes\mathcal{H}_{T_B}\otimes\mathcal{H}_C\otimes\mathcal{H}_{cl}$ (since we are working with collective attacks, the latter two subspaces are assumed to be cleared to some ``zero'' state). We write $U$'s action on basis states as follows: \begin{equation}\label{eq:U-action} \ket{i,j} \overset{U}\mapsto \ket{+1, e_{i,j}} + \ket{-1, f_{i,j}}, \end{equation} where $\ket{e_{i,j}}$ and $\ket{f_{i,j}}$ are arbitrary, not necessarily normalized nor orthogonal, states in $\mathcal{H}_{T_A}\otimes\mathcal{H}_{T_B} \otimes \mathcal{H}_C$. Unitarity of $U$ imposes various restrictions on these states which will become important later in our analysis. Note that, unlike our analysis in our original paper, we do not make any symmetry assumptions at this point. 
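For concreteness, the decomposition in Equation \ref{eq:U-action} can be produced numerically from any attack operator: one applies $U$ to $\ket{i,j}$ (with $\mathcal{H}_C$ and $\mathcal{H}_{cl}$ cleared) and reads off the components attached to $\ket{+1}$ and $\ket{-1}$. The sketch below is illustrative only; the dimension chosen for $\mathcal{H}_C$ and the random unitary standing in for $U$ are arbitrary assumptions. It also checks one consequence of unitarity, namely $\braket{e_{i,j}|e_{i,j}} + \braket{f_{i,j}|f_{i,j}} = 1$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
dC  = 4                       # dimension of C's (expanded) private memory, arbitrary
dim = 2 * 2 * dC * 2          # T_A (x) T_B (x) H_C (x) H_cl

# A random unitary standing in for the attack operator U (QR of a complex Gaussian).
Z = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
U, _ = np.linalg.qr(Z)

def attack_output(i, j):
    # Apply U to |i,j> with the H_C and H_cl registers in a fixed "zero" state,
    # then split the result into its |+1> and |-1> branches.
    inp = np.zeros(dim, dtype=complex)
    inp[np.ravel_multi_index((i, j, 0, 0), (2, 2, dC, 2))] = 1.0
    out = (U @ inp).reshape(2, 2, dC, 2)
    e_ij = out[..., 0].ravel()   # component sent along with the message "+1"
    f_ij = out[..., 1].ravel()   # component sent along with the message "-1"
    return e_ij, f_ij

for i in range(2):
    for j in range(2):
        e, f = attack_output(i, j)
        norm = np.vdot(e, e).real + np.vdot(f, f).real
        assert abs(norm - 1.0) < 1e-10   # <e_ij|e_ij> + <f_ij|f_ij> = 1
\end{verbatim}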
Following $C$'s operation, and conditioning on the event that $C$ sends ``$-1$'' (if $C$ sends ``$+1$'' the iteration is discarded and thus does not contribute to the raw key - though such iterations will be important later for parameter estimation as we discuss shortly), the final state is: \begin{equation}\label{eq:final-state-ABC} \rho_{ABC} = \frac{1}{p_a} \sum_{i,j}|\alpha_{i,j}|^2 \ketbra{i,j}_{AB} \otimes \ketbra{f_{i,j}}_C, \end{equation} where: \begin{equation}\label{eq:pa} p_a = \sum_{i,j}|\alpha_{i,j}|^2 \braket{f_{i,j}|f_{i,j}}. \end{equation} (Note that we have disregarded the $\mathcal{H}_{cl}$ portion of the above state as it is projected to $\ketbra{-1}_{cl}$ and thus not needed; we have also abused notation slightly by ``absorbing'' the $\mathcal{H}_{T_A}\otimes\mathcal{H}_{T_B}$ subspaces into $\mathcal{H}_C$.) Our goal is to compute a bound on $S(A|C) - H(A|B)$. We will first bound $S(A|C)$. However, the high dimensionality of the system proves a hindrance. To overcome this, we will employ a technique first proposed in \cite{QKD-keyrate-general}, and later adapted successfully by us to the key rate computation of a different (not a mediated) SQKD protocol in \cite{SQKD-Krawec-SecurityProof}. This technique requires us to condition on a new random variable of our choice. By appending a carefully chosen auxiliary system, we can simplify the entropy computations. Due to the strong subadditivity of von Neumann entropy, it holds that, for any tripartite system $\mathcal{H}_{A}\otimes\mathcal{H}_{C}\otimes\mathcal{H}_X$, we have: \[ S(A|C) \ge S(A|CX) \Longrightarrow S(A|C) - H(A|B) \ge S(A|CX) - H(A|B), \] thus allowing us to find a lower bound on the key rate of this mediated SQKD protocol. Appending this system $\mathcal{H}_X$ and providing $C$ access to it, though unrealistic, does allow us to compute a lower bound on the key rate; the realistic case, then, where $C$ does not have this additional information $\mathcal{H}_X$, can only be better for $A$ and $B$. The system we append, denoted $\mathcal{H}_X$, is two-dimensional and spanned by orthonormal basis states $\{\ket{C}, \ket{W}\}$, where $\ket{C}$ will be used to denote the event that $A$ and $B$'s raw key bit is correct, while $\ket{W}$ will be used to describe the event that their raw key bits are wrong. Incorporating this system into $\rho_{ABC}$ yields the state: \begin{align} \rho_{ABCX} &= &&\frac{1}{p_a}\ketbra{C}\otimes\left(|\alpha_{0,0}|^2\ketbra{0,0}_{AB}\otimes\ketbra{f_{0,0}}\right.\label{eq:final-state-ABCX}\\ &&&\left.+ |\alpha_{1,1}|^2\ketbra{1,1}_{AB}\otimes\ketbra{f_{1,1}}\right) \notag\\ \notag\\ &+&&\frac{1}{p_a}\ketbra{W}\otimes\left(|\alpha_{0,1}|^2\ketbra{0,1}_{AB}\otimes\ketbra{f_{0,1}}\right.\notag\\ &&&\left.+|\alpha_{1,0}|^2\ketbra{1,0}_{AB}\otimes\ketbra{f_{1,0}}\right).\notag \end{align} Given such a state, we may more readily compute $S(A|CX) = S(ACX) - S(CX)$. Indeed, it is not difficult to see that: \begin{equation}\label{eq:entropy-ACX} S(ACX) = H\left( \left\{ \frac{1}{p_a}|\alpha_{i,j}|^2 \braket{f_{i,j}|f_{i,j}}\right\}_{i,j}\right). \end{equation} (Choosing a suitable basis, we may write $\rho_{ACX}$ - which is the result of tracing out $B$ from Equation \ref{eq:final-state-ABCX} - as a diagonal matrix with diagonal elements equal to $\frac{1}{p_a} |\alpha_{i,j}|^2\braket{f_{i,j}|f_{i,j}}$.) We must now compute an upper bound on $S(CX)$, which will provide us with a lower bound on the key rate equation. Indeed, if $S(CX) \le \eta$ then $S(A|C) \ge S(A|CX) \ge S(ACX) - \eta$.
Define the following: \begin{align} p_C &= \frac{1}{p_a}(|\alpha_{0,0}|^2\braket{f_{0,0}|f_{0,0}} + |\alpha_{1,1}|^2\braket{f_{1,1}|f_{1,1}})\label{eq:pc}\\ \notag\\ p_W &= \frac{1}{p_a}(|\alpha_{0,1}|^2\braket{f_{0,1}|f_{0,1}} + |\alpha_{1,0}|^2\braket{f_{1,0}|f_{1,0}})\label{eq:pw}. \end{align} Clearly, $p_C$ is the probability that $A$ and $B$'s raw key bit is correct (they match) while $p_W$ is the probability that they are wrong. For the following, assume that $p_C$ and $p_W$ are both strictly positive. The case when $p_W$ is zero is similar, as we will comment later. Of course if $p_C = 0$ then there is far too much noise and $A$ and $B$ should abort. (If $p_C = 0$ all their key bits are wrong!) Note that $p_C$ and $p_W$ are parameters that may be estimated by $A$ and $B$. Also note that $A$ and $B$ may estimate the quantities $\braket{f_{i,j}|f_{i,j}}$ which are simply the probabilities that $C$ sends ``$-1$'' in the event $A$ and $B$ both measure and resend $\ket{i,j}$. Tracing out $A$ and $B$ from Equation \ref{eq:final-state-ABCX} yields: \begin{align} \rho_{CX} &= \frac{1}{p_a}\ketbra{C}\otimes\left( |\alpha_{0,0}|^2 \ketbra{f_{0,0}} + |\alpha_{1,1}|^2 \ketbra{f_{1,1}} \right)\label{eq:final-state-CX}\\ &+\frac{1}{p_a}\ketbra{W} \otimes \left( |\alpha_{0,1}|^2 \ketbra{f_{0,1}} + |\alpha_{1,0}|^2 \ketbra{f_{1,0}} \right).\notag \\ \notag\\ &= p_C\ketbra{C} \otimes \sigma_C + p_W\ketbra{W} \otimes \sigma_W,\notag \end{align} where $\sigma_C$ and $\sigma_W$ are the following (unit trace - recall we are assuming for now that both $p_C$ and $p_W$ are non-zero) density operators: \begin{align} \sigma_C &= \frac{|\alpha_{0,0}|^2 \ketbra{f_{0,0}} + |\alpha_{1,1}|^2\ketbra{f_{1,1}}}{|\alpha_{0,0}|^2\braket{f_{0,0}|f_{0,0}} + |\alpha_{1,1}|^2\braket{f_{1,1}|f_{1,1}}}.\\ \notag\\ \sigma_W &= \frac{|\alpha_{0,1}|^2 \ketbra{f_{0,1}} + |\alpha_{1,0}|^2\ketbra{f_{1,0}}}{|\alpha_{0,1}|^2\braket{f_{0,1}|f_{0,1}} + |\alpha_{1,0}|^2\braket{f_{1,0}|f_{1,0}}}. \end{align} It is not difficult to show (see, for instance, \cite{SQKD-Krawec-SecurityProof} for a proof) that the entropy of such a system is simply: \begin{align} S(CX) &= H(p_C, p_W) + p_WS(\sigma_W) + p_CS(\sigma_C)\notag\\ &\le H(p_C, p_W) + p_W + p_CS(\sigma_C).\label{eq:entropy-bound} \end{align} The inequality above follows from the fact that $S(\sigma_W) \le \log \dim \sigma_W$. Since $\sigma_C$ and $\sigma_W$ are both supported on (at most) two-dimensional subspaces, $S(\sigma_W) \le 1$. Note that it is not difficult to show that if $p_W = 0$, then $|\alpha_{0,1}|^2\ketbra{f_{0,1}} + |\alpha_{1,0}|^2\ketbra{f_{1,0}} \equiv 0$. Thus, this term does not appear in Equation \ref{eq:final-state-CX}, and so the bound in Equation \ref{eq:entropy-bound} holds even in this case. Obviously if the noise is small, then $p_W$ should also be small. All that remains, therefore, is to upper bound $S(\sigma_C)$. We may write, without loss of generality, $\ket{f_{0,0}} = x\ket{f}$ and $\ket{f_{1,1}} = y\ket{f} + z\ket{\zeta}$, where $\braket{f|f} = \braket{\zeta|\zeta} = 1$, $\braket{f|\zeta} = 0$, and $x,y,z \in \mathbb{C}$.
This of course implies: \begin{align} &|x|^2 = \braket{f_{0,0}|f_{0,0}}\label{eq:identity-ev-1}\\ &|y|^2+|z|^2 = \braket{f_{1,1}|f_{1,1}}\label{eq:identity-ev-2}\\ &x^*y = \braket{f_{0,0}|f_{1,1}} \Longrightarrow |y|^2 ={|\braket{f_{0,0}|f_{1,1}}|^2}/{|x|^2}.\label{eq:identity-ev-3} \end{align} Using this $\{\ket{f}, \ket{\zeta}\}$ basis, we may write $\sigma_C$ as: \[ \sigma_C = q_0 \left( \begin{array}{cc} |\alpha_{0,0}|^2 |x|^2 + |\alpha_{1,1}|^2|y|^2 & |\alpha_{1,1}|^2y^*z \\[2mm] |\alpha_{1,1}|^2yz^* & |\alpha_{1,1}|^2|z|^2\end{array}\right), \] where: \[ q_0 = (|\alpha_{0,0}|^2\braket{f_{0,0}|f_{0,0}} + |\alpha_{1,1}|^2\braket{f_{1,1}|f_{1,1}})^{-1} = [|\alpha_{0,0}|^2|x|^2 + |\alpha_{1,1}|^2(|y|^2+|z|^2)]^{-1}. \] The eigenvalues of this matrix are easily computed to be: \begin{align*} \lambda_{\pm} &= \frac{1}{2} \pm \frac{q_0}{2}\sqrt{\left(|\alpha_{0,0}|^2|x|^2 + |\alpha_{1,1}|^2|y|^2 - |\alpha_{1,1}|^2|z|^2\right)^2 + 4 |\alpha_{1,1}|^4|y|^2|z|^2}\\ &=\frac{1}{2} \pm \frac{q_0}{2}\sqrt{ \left( |\alpha_{0,0}|^2F_{0,0} + |\alpha_{1,1}|^2[2|y|^2 - F_{1,1}] \right)^2 + 4 |\alpha_{1,1}|^4|y|^2(F_{1,1} - |y|^2)}, \end{align*} where above we have defined $F_{i,i} = \braket{f_{i,i}|f_{i,i}}$ and have used the identities \ref{eq:identity-ev-1} and \ref{eq:identity-ev-2}. Now, define $\Delta = |\alpha_{0,0}|^2F_{0,0} - |\alpha_{1,1}|^2F_{1,1}$ and, continuing, we have: \begin{align*} \lambda_{\pm} &= \frac{1}{2} \pm \frac{q_0}{2}\sqrt{\left(\Delta + 2 |\alpha_{1,1}|^2|y|^2\right)^2 + 4|\alpha_{1,1}|^4|y|^2F_{1,1} - 4|\alpha_{1,1}|^4|y|^4}\\ &= \frac{1}{2} \pm \frac{q_0}{2}\sqrt{\Delta^2 + 4|\alpha_{1,1}|^2|y|^2\left(\Delta + |\alpha_{1,1}|^2F_{1,1}\right)}\\ &= \frac{1}{2} \pm \frac{q_0}{2}\sqrt{\Delta^2 + 4|\alpha_{1,1}|^2|y|^2|\alpha_{0,0}|^2|x|^2}. \end{align*} Finally, using identity \ref{eq:identity-ev-3} (and that $|x|^2 = F_{0,0}$), we have: \begin{equation} \lambda_{\pm} = \frac{1}{2} \pm \frac{q_0}{2}\sqrt{\Delta^2 + 4|\alpha_{0,0}|^2|\alpha_{1,1}|^2|\braket{f_{0,0}|f_{1,1}}|^2}. \end{equation} Note that $\Delta, q_0, |\alpha_{0,0}|^2$, and $|\alpha_{1,1}|^2$ are all parameters that $A$ and $B$ may estimate. Computing a bound on $|\braket{f_{0,0}|f_{1,1}}|^2$ may be achieved using the error rate when both $A$ and $B$ reflect as we demonstrate later. Combining everything together yields the bound: \begin{align*} S(A|C) \ge H\left( \left\{ \frac{1}{p_a}|\alpha_{i,j}|^2 \braket{f_{i,j}|f_{i,j}}\right\}_{i,j}\right) - H(p_C, p_W) - p_W - p_CH(\lambda_+, \lambda_-). \end{align*} Note that $H(\lambda_+, \lambda_-) = h(\lambda_+)$ takes its maximum when $\lambda_+ = \frac{1}{2}$. It is not difficult to see that, as $|\braket{f_{0,0}|f_{1,1}}|^2 \ge 0$ increases, $\lambda_+$ increases. Furthermore, when $|\braket{f_{0,0}|f_{1,1}}|^2 = 0$, $\lambda_+ \ge \frac{1}{2}$. Thus, as $|\braket{f_{0,0}|f_{1,1}}|^2$ increases, $h(\lambda_+)$ necessarily decreases. By finding a lower-bound $\mathcal{F}$ (which will be a function of certain observed parameters as we will soon discuss) such that $|\braket{f_{0,0}|f_{1,1}}|^2 \ge \mathcal{F}$, we will have an upper-bound on $h(\lambda_+)$ and thus a lower bound on $S(A|CX)$.
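As a quick numerical sanity check of this closed form (purely illustrative, and not part of the proof: the dimension, amplitudes and states in the sketch below are arbitrary choices of ours, not tied to any particular attack), one may compare $\lambda_{\pm}$ against a direct diagonalization of $\sigma_C$:
\begin{verbatim}
# Numerical check of the closed form for the eigenvalues of sigma_C.
# Random |f_00>, |f_11> and arbitrary amplitudes are used; the two non-zero
# eigenvalues returned by numpy should match lambda_{+/-} from the formula.
import numpy as np

rng = np.random.default_rng(1)
dim = 5                                   # dimension of H_C (arbitrary)
f00 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
f11 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
a00, a11 = 0.7, 0.55                      # stand-ins for |alpha_00|, |alpha_11|

F00 = np.vdot(f00, f00).real              # <f_00|f_00>
F11 = np.vdot(f11, f11).real              # <f_11|f_11>
inner = np.vdot(f00, f11)                 # <f_00|f_11>

q0 = 1.0 / (a00**2 * F00 + a11**2 * F11)
sigmaC = q0 * (a00**2 * np.outer(f00, f00.conj())
               + a11**2 * np.outer(f11, f11.conj()))

Delta = a00**2 * F00 - a11**2 * F11
lam_p = 0.5 + 0.5 * q0 * np.sqrt(Delta**2 + 4 * a00**2 * a11**2 * abs(inner)**2)

eigs = np.sort(np.linalg.eigvalsh(sigmaC))[::-1]
print(eigs[:2])            # the two non-zero eigenvalues of sigma_C
print(lam_p, 1 - lam_p)    # should agree with the line above
\end{verbatim}
With this in mind, we return to bounding $h(\lambda_+)$.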
That is to say, if we define: \begin{equation} \tilde{\lambda} = \frac{1}{2} + \frac{q_0}{2}\sqrt{\Delta^2 + 4|\alpha_{0,0}|^2|\alpha_{1,1}|^2\mathcal{F}}, \end{equation} then: \[ \frac{1}{2} \le \tilde{\lambda} \le \lambda_+ \Longrightarrow h(\lambda_+) \le h(\tilde{\lambda}), \] which implies: \begin{equation}\label{eq:entropy-bound-final} S(A|C) \ge H\left( \left\{ \frac{1}{p_a}|\alpha_{i,j}|^2 \braket{f_{i,j}|f_{i,j}}\right\}_{i,j}\right) - H(p_C, p_W) - p_W - p_Ch(\tilde{\lambda}). \end{equation} This concludes our bound on $S(A|C)$. A bound on $\mathcal{F}$ (needed to bound $h(\lambda_+)$) can be easily determined from the probability that $C$ sends ``$-1$'' if both $A$ and $B$ reflect (note that $C$ should send ``$+1$'' if they both reflect - $C$ sending ``$-1$'' in this event is counted as an error). In the next section, we will demonstrate this in two specific cases, comparing our new result with our older one from \cite{SQKD-MultiUser}. Of course, computing $H(A|B)$, the last remaining term from the key rate equation, is easy after parameter estimation. Indeed, let $p(a,b)$ be the probability of $A$'s raw key bit being $a$ and $B$'s raw key bit being $b$. From Equation \ref{eq:final-state-ABC}, we see these values are: \begin{equation}\label{eq:pab} p(i,j) = \frac{1}{p_a}|\alpha_{i,j}|^2 \braket{f_{i,j}|f_{i,j}}. \end{equation} Also, if we let $p(b)$ be the probability that $B$'s raw key bit is $b$ (i.e., $p(b) = p(0,b) + p(1,b)$), then our final key rate bound is found to be: \begin{align*} r &\ge H\left( \left\{ \frac{1}{p_a}|\alpha_{i,j}|^2 \braket{f_{i,j}|f_{i,j}}\right\}_{i,j}\right) - H(p_C, p_W) - p_W - p_Ch(\tilde{\lambda})\\ & - H\left(\{p(a,b)\}_{a,b}\right) + h(p(0)). \end{align*} Using Equation \ref{eq:pab}, this can be simplified to: \begin{equation}\label{eq:key-rate-bound} r \ge h(p(0)) - H(p_C, p_W) - p_W - p_Ch(\tilde{\lambda}). \end{equation} Note that all terms in the above expression, with the exception of $\tilde{\lambda}$, are directly observable by $A$ and $B$. Indeed, $|\alpha_{i,j}|^2$ is simply the probability that $A$ and $B$ measure $\ket{i,j}$. The quantity $\braket{f_{i,j}|f_{i,j}}$ is the probability that $C$ sends ``$-1$'' in the event $A$ and $B$ measure and resend $\ket{i,j}$. If $A$ and $B$ divulge complete information about certain randomly chosen iterations (in particular their choices and measurement results if applicable), these parameters are easily estimated. The only parameter that cannot be directly observed is $\tilde{\lambda}$; in particular the quantity $|\braket{f_{0,0}|f_{1,1}}|^2$ upon which that eigenvalue depends. They may, however, estimate it using the probability that $C$ sends ``$-1$'' if $A$ and $B$ both reflect, an event whose probability $A$ and $B$ can observe. We will show how this is done in the next section when we actually evaluate the above key rate expression for certain scenarios. \subsection{General Attacks and Third Party Eavesdroppers} We considered only collective attacks above. However, if $A$ and $B$ permute their raw key bits, using a randomly chosen (and publicly disclosed) permutation, the protocol becomes permutation invariant \cite{QKD-renner-keyrate}. Thus the results in \cite{QKD-general-attack,QKD-general-attack2} apply, showing that, to prove security against general attacks (where $C$ is allowed to perform any operation of his choice - perhaps altering his attack operator each iteration), it is sufficient to prove security against collective attacks.
Furthermore, since we are considering the asymptotic scenario in this paper, our key rate bound is equivalent in both cases. Finally, it is clear that any attack by a third-party eavesdropper (including attacks whereby the eavesdropper alters $C$'s classical messages - recall that this channel is not authenticated) can simply be ``absorbed'' into $C$'s attack operator $U$. Thus our bound holds even in this case. \section{Evaluation} Our key rate bound applies in the most general of cases. $A$ and $B$ must simply observe certain parameters and, based on these, they may determine a lower bound on their secret key fraction (bounding $|\braket{f_{0,0}|f_{1,1}}|^2$ may also be achieved using these parameters as we discuss shortly). Of course, due to its reliance on many parameters, it is difficult to visualize this bound here in this paper. However, we may consider certain scenarios which $A$ and $B$ may encounter and evaluate our bound based on these particular scenarios. First, we will consider the case of a semi-honest server and a noisy quantum channel. Later we will consider an adversarial server whose attack is ``symmetric'' (a common assumption in QKD security proofs). These two scenarios were considered in our earlier work \cite{SQKD-MultiUser} and so will allow us to compare our new results with our old, showing the superiority of our new bound in both of these scenarios. \subsection{Bounding $|\braket{f_{0,0}|f_{1,1}}|^2$} We now show how to bound the quantity $|\braket{f_{0,0}|f_{1,1}}|^2$ based on the probability that $C$ sends ``$-1$'' if $A$ and $B$ had sent the Bell state $\ket{\Phi^+}$ to $C$. Obviously this is not a parameter that is directly observable ($A$ and $B$ are classical and so cannot prepare $\ket{\Phi^+}$). However, it may be bounded, as we show later, using the error rate in those iterations where $A$ and $B$ reflected (i.e., the probability that $C$ sends ``$-1$'' if $A$ and $B$ both reflect). Consider the action of $C$'s attack operator $U$ on Bell basis states. This is: \begin{align*} U\ket{\Phi^+} &= \ket{+1, g_0} + \ket{-1, h_0}\\ U\ket{\Phi^-} &= \ket{+1, g_1} + \ket{-1, h_1}\\ U\ket{\Psi^+} &= \ket{+1, g_2} + \ket{-1, h_2}\\ U\ket{\Psi^-} &= \ket{+1, g_3} + \ket{-1, h_3}, \end{align*} where each $\ket{g_i}$ and $\ket{h_i}$ are linear functions of $\ket{e_{k,l}}$ and $\ket{f_{k,l}}$ respectively (see Equation \ref{eq:U-action}). In particular, we have $\ket{h_0} = \frac{1}{\sqrt{2}}(\ket{f_{0,0}} + \ket{f_{1,1}})$. Imagine, for the time being, that, in addition to the other parameters mentioned in the last section, $A$ and $B$ are also able to estimate the parameter $\braket{h_0|h_0}$: \begin{equation}\label{eq:h0} \braket{h_0|h_0} = \frac{1}{2}(\braket{f_{0,0}|f_{0,0}} + \braket{f_{1,1}|f_{1,1}} + 2Re\braket{f_{0,0}|f_{1,1}}). \end{equation} This then provides an estimate of $|\braket{f_{0,0}|f_{1,1}}|^2$. Indeed, if $Re\braket{f_{0,0}|f_{1,1}} = t$, then of course $|\braket{f_{0,0}|f_{1,1}}|^2 = Re^2\braket{f_{0,0}|f_{1,1}} + Im^2\braket{f_{0,0}|f_{1,1}} \ge Re^2\braket{f_{0,0}|f_{1,1}} = t^2$. Thus: \begin{align} \braket{h_0|h_0} &= \eta\notag\\ \Rightarrow |\braket{f_{0,0}|f_{1,1}}|^2 &\ge \left(\frac{1}{2}\braket{f_{0,0}|f_{0,0}} + \frac{1}{2}\braket{f_{1,1}|f_{1,1}} - \eta\right)^2\label{eq:bound-equal-f00f11}. \end{align} This quantity $\braket{h_0|h_0}$ is simply the probability that $C$ sends ``$-1$'' if $A$ and $B$ jointly send the state $\ket{\Phi^+}$ (this should be small).
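(As an aside, the inequality just derived is easy to confirm numerically; the following sketch is ours, uses randomly drawn, entirely artificial vectors, and is only meant as an illustration.)
\begin{verbatim}
# Check that |<f_00|f_11>|^2 >= (F_00/2 + F_11/2 - <h_0|h_0>)^2 whenever
# |h_0> = (|f_00> + |f_11>)/sqrt(2); the right-hand side equals
# Re^2<f_00|f_11>, so the inequality always holds.
import numpy as np

rng = np.random.default_rng(7)
for _ in range(1000):
    f00 = rng.normal(size=4) + 1j * rng.normal(size=4)
    f11 = rng.normal(size=4) + 1j * rng.normal(size=4)
    h0 = (f00 + f11) / np.sqrt(2)
    F00, F11 = np.vdot(f00, f00).real, np.vdot(f11, f11).real
    eta = np.vdot(h0, h0).real
    assert abs(np.vdot(f00, f11))**2 >= (0.5*F00 + 0.5*F11 - eta)**2 - 1e-9
print("inequality holds on 1000 random instances")
\end{verbatim}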
Obviously they cannot estimate this parameter directly in reality (neither of them is quantum). However, as we will show later, they are able to compute an upper bound on it. In particular, if they bound $\braket{h_0|h_0} \le \eta$, then, for $\eta$ small enough (and recall it should be small since $C$ is supposed to send ``$+1$'' if he receives $\ket{\Phi^+}$), it holds that: \begin{align} \braket{h_0|h_0} &\le \eta\notag\\ \Rightarrow Re\braket{f_{0,0}|f_{1,1}} &\le \eta - \frac{1}{2}\braket{f_{0,0}|f_{0,0}} - \frac{1}{2}\braket{f_{1,1}|f_{1,1}} \le 0\notag\\ \Rightarrow Re^2\braket{f_{0,0}|f_{1,1}} &\ge \left(\frac{1}{2}\braket{f_{0,0}|f_{0,0}} + \frac{1}{2}\braket{f_{1,1}|f_{1,1}} - \eta\right)^2\notag\\ \Rightarrow |\braket{f_{0,0}|f_{1,1}}|^2 &\ge \left(\frac{1}{2}\braket{f_{0,0}|f_{0,0}} + \frac{1}{2}\braket{f_{1,1}|f_{1,1}} - \eta\right)^2.\label{eq:bound-general-f00f11} \end{align} (We see that $\eta$ must be small enough so that the right hand side of the second inequality is non-positive.) Both of these identities will be useful in the subsequent examples where we bound $\eta$ as a function of $Q$ - the probability that $A$ and $B$'s measurement results differ - and $p_w$ - the probability that $C$ sends the wrong message when $A$ and $B$ both reflect (i.e., he sends ``$-1$''); with $Q = p_w = 0$ implying $\eta = 0$. \subsection{Semi-Honest Center} In this first example, we will consider the case where $C$ is \emph{semi-honest} - that is, he always follows the protocol exactly, preparing the correct state in step one of the protocol, performing the correct measurement in step three, and reporting, honestly, his measurement result. Beyond that, however, he is free to do whatever he likes; for instance, he can listen in on the public communication channel to try and gain additional information on the key. This is not only a practically relevant scenario to analyze, but it also allows us to compare our new key rate bound with our old from \cite{SQKD-MultiUser}. Besides the semi-honest server, we will also assume a noisy quantum channel modeled using two independent depolarization channels, one for the forward direction (qubits traveling from $C$ to $A$ and $B$) with parameter $p$, the other for the reverse (qubits returning from $A$ and $B$ to $C$) with parameter $q$. That is, if the joint qubit state is $\rho$ (a density operator acting on a four-dimensional Hilbert space), then the depolarization channel with parameter $p$ is: \[ \mathcal{E}_p(\rho) = (1-p)\rho + \frac{p}{4}I, \] where $I$ is the identity operator. Furthermore, for the remainder of this sub-section, we will relabel the Bell basis states as follows: $\ket{\Phi^+} = \ket{\phi_0}$, $\ket{\Phi^-} = \ket{\phi_1}$, $\ket{\Psi^+} = \ket{\phi_2}$, and $\ket{\Psi^-} = \ket{\phi_3}$. To compute a key rate bound, we need only compute a few parameters. First are the probabilities that $C$ sends ``$-1$'' if $A$ and $B$ measure and resend the value $\ket{i,j}$ (i.e., we estimate the value $\braket{f_{i,j}|f_{i,j}}$). These are easy to compute given our assumptions of a semi-honest server and depolarization channels. For instance, if $A$ and $B$ measure and resend $\ket{0,0}$, then the state arriving at $C$'s lab is: \[ \mathcal{E}_q(\ketbra{0,0}) = (1-q)\ketbra{0,0} + \frac{q}{4}\sum_{i,j\in\{0,1\}}\ketbra{i,j}. \] $C$ now performs a Bell measurement of this system, with the message ``$-1$'' being sent only if he measures $\ketbra{\phi_1}$ (i.e., $\ketbra{\Phi^-}$).
It is easy to see that the probability of him sending this message, given state $\mathcal{E}_q(\ketbra{0,0})$ is: \[ \frac{1-q}{2} + \frac{q}{4} = \frac{2-q}{4}. \] Thus $\braket{f_{0,0}|f_{0,0}} = (2-q)/4$. Using a similar process, the remaining $\braket{f_{i,j}|f_{i,j}}$ may be computed: \begin{align} \braket{f_{0,0}|f_{0,0}} &= \frac{2-q}{4} = \braket{f_{1,1}|f_{1,1}}\label{eq:dc:f00}\\ \braket{f_{0,1}|f_{0,1}} &= \frac{q}{4} = \braket{f_{1,0}|f_{1,0}}.\label{eq:dc:f01} \end{align} Next, we compute $|\alpha_{i,j}|^2$ (the probability that $A$ and $B$ measure $\ket{i,j}$). Since we are assuming $C$ is semi-honest and thus he prepared $\ket{\phi_0}$ initially, these are computed using the state: \[ \mathcal{E}_p(\ketbra{\phi_0}) = (1-p)\ketbra{\phi_0} + \frac{p}{4}\sum_{i=0}^3\ketbra{\phi_i}. \] Clearly: \begin{align*} |\alpha_{0,0}|^2 &= \frac{2-p}{4} = |\alpha_{1,1}|^2\\ |\alpha_{0,1}|^2 &= \frac{p}{4} = |\alpha_{1,0}|^2. \end{align*} Thus, the value of $p$ may be estimated by $A$ and $B$ using the observable parameters $|\alpha_{i,j}|^2$, while $q$ can be estimated using the observable parameters $\braket{f_{i,j}|f_{i,j}}$. Furthermore, we have that $Q$, which is the probability that $A$ and $B$'s measurement results are different, is $Q = |\alpha_{0,1}|^2 + |\alpha_{1,0}|^2 = p/2$. $A$ and $B$ are now able to compute $p_a$, $p_C$, and $p_W$ in terms of $p$ and $q$. All that remains is to bound $|\braket{f_{0,0}|f_{1,1}}|^2$. We will use our above discussion and instead determine a value $\eta$ such that $\braket{h_0|h_0}$ (which is the probability that $C$ sends ``$-1$'' if the joint state leaving $A$ and $B$'s lab is $\ket{\phi_0}$) is upper bounded by $\eta$. In the general case, which we consider next, this value $\eta$ can only be estimated (since $A$ and $B$ are classical users, they cannot prepare a Bell state to directly observe $\braket{h_0|h_0}$). However, given the assumptions in this sub-section, they may in fact compute a value for $\eta$ based on $q$. Indeed, if the state leaving $A$ and $B$ is $\ket{\phi_0}$, then the system, when it reaches $C$, has evolved via: \[ \mathcal{E}_q(\ketbra{\phi_0}) = (1-q)\ketbra{\phi_0} + \frac{q}{4}\sum_{i=0}^3\ketbra{\phi_i}. \] Since $C$ is semi-honest, he will only send ``$-1$'' if he measures $\ket{\phi_1}$. This probability is simply $\braket{h_0|h_0} = q/4 = \eta$. We may now use Equation \ref{eq:bound-equal-f00f11} to lower bound the quantity $|\braket{f_{0,0}|f_{1,1}}|^2$. Using these values and Equation \ref{eq:key-rate-bound}, we see that the key rate of this mediated protocol, in the event $p=q=2Q$ (i.e., the noise in the forward and reverse channels is equivalent and the probability that $A$ or $B$'s measurement results are wrong is $Q$), remains positive for all $Q \le 22.05\%$ as shown in Figure \ref{fig:keyrate-semihonest}. This is an improvement over our original bound of $Q \le 19.9\%$. Note that, to achieve this high tolerance level in the presence of third-party eavesdroppers, the classical channel connecting $C$ to $A$ or $B$ needs to be authenticated. This requirement is not necessary in the next scenario we consider where $C$ is fully adversarial. \begin{figure} \caption{A graph of our key rate lower bound when the server $C$ is semi-honest and there is a noisy quantum channel, modeled by two independent depolarization channels, connecting the users. 
This graph assumes the noise in the forward direction (when the qubits travel from $C$ to $A$ and $B$) is equal to the noise in the reverse (when qubits return from $A$ and $B$ to $C$). Observe that the key rate remains positive for all $Q \le 22.05\%$, where $Q$ is the probability that $A$ and $B$'s measurement results differ.} \label{fig:keyrate-semihonest} \end{figure} \subsection{Adversarial Server, Symmetric Attack} Of course the above section assumed $C$ was semi-honest - that is he followed the protocol exactly. Now, we assume the server $C$ is adversarial. He is allowed to prepare any state he likes on step (1) of the protocol, and he may perform any arbitrary operation (allowed by the laws of quantum physics) when the qubits return to him, sending any message of his choice based on his operation. The only assumption we make in this section is that $C$'s attack is symmetric in that it can be parameterized by only a few parameters (to be discussed). We make this assumption first, so that we can compare our new key rate bound with our old one from \cite{SQKD-MultiUser} (which assumed a symmetric attack); and second, so that we can better visualize the key rate bound by reducing the many parameters to only a few. Note that in our original proof of security in \cite{SQKD-MultiUser}, we assumed throughout that $C$'s attack was symmetric (the key rate bound we derived there was based on this assumption). In our new bound in this paper, however, we made no such assumptions - Equation \ref{eq:key-rate-bound} works even in the most general case. So, while assuming a symmetric attack is not required for our new proof in this paper, it is easier to visualize our key rate bound and it also allows us to compare our new results with our old. The first assumption we make is that we may parameterize the values $|\alpha_{i,j}|^2$ using only a single parameter $Q$. Namely, we have: \begin{align*} |\alpha_{0,0}|^2 &= \frac{1-Q}{2} = |\alpha_{1,1}|^2\\ |\alpha_{0,1}|^2 &= \frac{Q}{2} = |\alpha_{1,0}|^2. \end{align*} Next, we assume that the probability that $C$ sends ``$-1$'' in the event $A$ and $B$ measure $\ket{0,0}$ is equal to the probability he sends that same message if they both measure $\ket{1,1}$. Similarly for the case $\ket{0,1}$ and $\ket{1,0}$. That is: \begin{align*} \braket{f_{0,0}|f_{0,0}} &= \braket{f_{1,1}|f_{1,1}} = \mathcal{F}_{=}\\ \braket{f_{0,1}|f_{0,1}} &= \braket{f_{1,0}|f_{1,0}} = \mathcal{F}_{\ne} \end{align*} We make no assumptions regarding the comparative relationship between $\mathcal{F}_=$ and $\mathcal{F}_{\ne}$. Using similar arguments as in \cite{SQKD-MultiUser} (in particular see Equation 25 from that source, along with its derivation), we may find the following upper-bound on $\braket{h_0|h_0}$: \[ \braket{h_0|h_0} \le \left(\frac{ \sqrt{1-Q}\left(\sqrt{Q\mathcal{F}_{\ne}} + \sqrt{p_w} \right) } {1-Q}\right)^2 = \eta, \] where $p_w$ is the probability that $C$ sends the wrong message (namely ``$-1$'') in the event both $A$ and $B$ reflect. This is our value for $\eta$ allowing us to bound $|\braket{f_{0,0}|f_{1,1}}|^2$ using Equation \ref{eq:bound-general-f00f11} from the previous section. However, to evaluate this bound, we must determine what to set $\mathcal{F}_=$ and $\mathcal{F}_{\ne}$ to. Naturally, in practice, these values are simply observed by $A$ and $B$. However, for this paper, to evaluate our bound, we will consider two examples. 
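Before working through these two examples, we note that the bound in Equation \ref{eq:key-rate-bound} is straightforward to evaluate numerically once the observed parameters are fixed. The following minimal Python sketch is our own illustration (it is not part of the security proof; the helper names and the scan step are arbitrary choices): it evaluates the bound for the semi-honest depolarization scenario of the previous subsection, where $p = q = 2Q$, and recovers the $\approx 22\%$ noise threshold quoted there. The same shape of computation, with $\eta$ taken from the bound above, applies to the two examples that follow.
\begin{verbatim}
# Sketch: evaluate r >= h(p(0)) - H(p_C,p_W) - p_W - p_C h(lambda~)
# for the semi-honest scenario with depolarization noise p = q = 2Q.
# All parameter expressions are those derived in the previous subsection.
import math

def h(x):                                   # binary Shannon entropy
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x*math.log2(x) - (1-x)*math.log2(1-x)

def key_rate(Q):
    a_eq, a_ne = (1-Q)/2, Q/2     # |alpha_00|^2 = |alpha_11|^2 and |alpha_01|^2 = |alpha_10|^2
    F_eq, F_ne = (1-Q)/2, Q/2     # <f_ij|f_ij> for i = j and i != j
    p_a = 2*a_eq*F_eq + 2*a_ne*F_ne
    p_C = 2*a_eq*F_eq / p_a
    p_W = 2*a_ne*F_ne / p_a
    eta = Q/2                     # <h_0|h_0> = q/4 with q = 2Q
    rootF = max(F_eq - eta, 0.0)  # = F_00/2 + F_11/2 - eta, since F_00 = F_11
    q0 = 1.0 / (2*a_eq*F_eq)      # Delta = 0 by symmetry
    lam = 0.5 + 0.5*q0*2*a_eq*rootF   # lambda~ with F = rootF^2
    p0 = (a_eq*F_eq + a_ne*F_ne) / p_a  # Pr[A's key bit = 0] (equals 1/2 here)
    # H(p_C, p_W) = h(p_C) since p_C + p_W = 1
    return h(p0) - h(p_C) - p_W - p_C*h(lam)

Q = 0.0
while key_rate(Q + 0.0005) > 0:
    Q += 0.0005
print("bound positive up to Q ~", round(Q, 4))   # expect roughly 0.22
\end{verbatim}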
First, in order to compare with our previous bound from our original paper, we will use $\mathcal{F}_{\ne} = Q$ and: \[ \mathcal{F}_= = \frac{\tilde{p}_a - Q^2}{1-Q}, \] where $\tilde{p}_a$ is the desired probability of acceptance. It is trivial to check that, when setting these values thusly, we have $p_a = \tilde{p}_a$ (see Equation \ref{eq:pa}). We have to set this parameter this way since, in our original proof, we evaluated our key rate equation assuming $p_a$ was $0.5$, then $0.4$, and finally $0.3$ (our original proof did not establish a clear relationship between the value of $p_a$ and the other observed parameters as we did in this new proof). Using these values, we see that, when $Q = p_w$ and $\tilde{p}_a = .5$, the key rate expression is positive for all $Q \le 12.5\%$; when $\tilde{p}_a = .4$ it is positive for $Q \le 10.8\%$; and for $\tilde{p}_a = .3$ it remains positive for $Q \le 8.86\%$. See Figure \ref{fig:keyrate-adv-1}. Compare this with our old bound from \cite{SQKD-MultiUser} - there, when $\tilde{p}_a = .5$ our old bound remained positive for $Q \le 10.65\%$, while for $\tilde{p}_a = .3$, the old bound was positive only for $Q \le 5.25\%$. \begin{figure} \caption{Showing a graph of our key rate lower bound when the server is fully adversarial and when $C$'s attack is such that $p_a$, the probability that any particular iteration is accepted by $A$ and $B$, is equal to $0.5$, $0.4$, and $0.3$. Here, $Q$ represents the probability that $A$ and $B$'s measurement results are different. This is done to compare with our old bound from \cite{SQKD-MultiUser}.} \label{fig:keyrate-adv-1} \end{figure} The second example we consider is based on values determined from the depolarization channel example in the previous section. We will assume the noise in both channels is equal. In that case, we have $\mathcal{F}_{\ne} = Q/2$ while $\mathcal{F}_= = 1/2 - Q/2$ (see Equations \ref{eq:dc:f00} and \ref{eq:dc:f01} and recall that $p = q = 2Q$). Using these values, we see that, when $Q = p_w$, the key rate remains positive for all $Q \le 13.04\%$. See Figure \ref{fig:keyrate-adv-2}. \begin{figure} \caption{Showing a graph of our key rate lower bound when the server is fully adversarial and when $C$'s attack is such that certain parameters agree with the depolarization example (see text). Here, $Q$ represents the probability that $A$ and $B$'s measurement results are different. Note that the key rate remains positive for all $Q \le 13.04\%$.} \label{fig:keyrate-adv-2} \end{figure} For comparison, BB84 can tolerate up to $11\%$ error while the six-state BB84 can tolerate up to $12.6\%$ (both bounds without pre-processing since we have not considered pre-processing in our proof) \cite{QKD-renner-keyrate,QKD-renner-keyrate2}. B92 \cite{QKD-B92} was recently shown to tolerate up to $6.5\%$ error assuming a depolarization channel \cite{QKD-B92-Improved}. We stress that, in practice, there is no need to make such assumptions to determine $\mathcal{F}_=$ and $\mathcal{F}_{\ne}$: these are parameters that are observed directly. We made assumptions only to visualize our key rate bound and to compare it with our previous work. \section{Conclusion} We have provided a new proof of security for the mediated semi-quantum key distribution protocol presented in \cite{SQKD-MultiUser}. While a proof of security was provided in that original source, we have demonstrated that our new key rate bound provides a more optimistic rate.
Indeed we see that in every scenario considered, our new proof demonstrates that the tolerated noise level of the protocol (the maximal amount of noise before $A$ and $B$ should abort) is strictly larger than that given by the old key rate bound. Our new proof also does not make any assumptions about the attack used by $C$. Note that, while we only evaluated our key rate bound in two specific examples - the semi-honest case and the symmetric adversarial case - our proof works in even the most general of scenarios (i.e., a non-symmetric adversarial attack). Our work in this paper has shown that this mediated SQKD protocol can tolerate noise levels surpassing the maximal tolerated noise thresholds of many other fully quantum protocols. Finally, the proof technique we used may hold utility in security proofs of other protocols (quantum or semi-quantum) which rely on a two-way quantum communication channel. Many open problems remain. Most important, perhaps, is that we considered only the perfect qubit scenario; dealing with problems such as multi-photon attacks is an interesting area of research (in any two-way quantum protocol). Also, can a mediated SQKD protocol be designed that is more efficient? Recall that, in the absence of noise and with an honest server, only half of the sent qubits can contribute to the raw key. Finally, can a mediated SQKD protocol be designed without the need for $C$ to prepare and measure in the Bell basis? As mentioned in \cite{SQKD-MultiUser}, the answer to this seems to be positive; however, the security proof is more involved. Perhaps the techniques we developed and used in this paper can be adapted towards this goal. \end{document}
\begin{document} \title{Smooth numbers in Beatty sequences} \author{Roger Baker} \address{Department of Mathematics\newline \indent Brigham Young University\newline \indent Provo, UT 84602, U.S.A} \email{[email protected]} \begin{abstract} Let $\t$ be an irrational number of finite type and let $\psi \ge 0$. We consider numbers in the Beatty sequence of integer parts, \[\mathcal B(x) = \{\lfloor\t n + \psi\rfloor : 1 \le n \le x\}.\] Let $C > 3$. Writing $P(n)$ for the largest prime factor of $n$ and $|\ldots|$ for cardinality, we show that \[|\{n\in \mathcal B(x) : P(n) \le y\}| = \frac 1\t\, \Psi(\t x, y)\ (1 + o(1))\] as $x\to\infty$, uniformly for $y \ge (\log x)^C$. Here $\Psi(X,y)$ denotes the number of integers up to $X$ with $P(n) \le y$. The range of $y$ extends that given by Akbal \cite{akb}. The work of Harper \cite{harp} plays a key role in the proof. \end{abstract} \keywords{Beatty sequence, exponential sums over smooth numbers.} \subjclass[2020]{Primary 11N25; secondary 11L03} \maketitle \section{Introduction}\label{sec:intro} A positive integer $n$ is said to be \textit{$y$-smooth} if $P(n)$, the largest prime factor of $n$, is at most $y$. We write $\mathfrak S(y)$ for the set of $y$-smooth numbers in $\mathbb N$ and \[\Psi(x,y) = |\mathfrak S(y) \cap [1,x]|,\] where $|\ldots|$ denotes cardinality. Let $\t > 1$ be an irrational number and $\psi \in [0,\infty)$. Arithmetic properties of the Beatty sequence \[\mathcal B(x) = \{\lfloor\t n + \psi\rfloor : 1 \le n \le x\}\] (where $\lfloor$\ \ $\rfloor$ denotes integer part), have been studied in \cite{akb, bakzhao, bankshpar, har}, for example. One may conjecture that for $x$ large and $y=y(x)$ not too small, say $x \ge y \ge (\log x)^C$ where $C > 1$, we have \begin{equation}\label{eq1.1} |\{n\in \mathcal B(x) : P(n) \le y\}| = \frac 1\t\, \Psi (\t x, y) \quad (1 + o(1)) \end{equation} where $o(1)$ denotes a quantity tending to 0 as $x$ tends to infinity. Banks and Shparlinski \cite{bankshpar} obtained \eqref{eq1.1} (in a slightly different form) uniformly for \[\exp((\log x)^{2/3+\e}) \le y \le x.\] (We write $\e$ for an arbitrary positive number.) Under the additional condition that $\t$ is of finite type, Akbal \cite{akb} obtained \eqref{eq1.1} uniformly for \begin{equation}\label{eq1.2} \exp((\log\log x)^{5/3+\e}) \le y \le x. \end{equation} In the present paper we extend the range \eqref{eq1.2}. \begin{thm}\label{thm:irrationalnofinitetype} Let $\t > 1$ be an irrational number of finite type and $\psi \ge 0$. Then \eqref{eq1.1} holds uniformly for \[(\log x)^{3+\e} \le y \le x.\] \end{thm} We recall that an irrational number $\t$ is said to be of finite type if ($\|\ldots\|$ denoting distance to the nearest integer) we have \[\|m\t\| \ge \frac c{m^\k} \quad (m \in \mathbb N)\] for some $c > 0$ and $\k > 0$. We note that if $\t$ is of finite type, then so is $\t^{-1}$. Theorem \ref{thm:irrationalnofinitetype} depends on an estimate for the exponential sum \[S(\t)= \sum_{\substack{n \le x\\[1mm] n\in\, \mathfrak S(y)}} e(n\t).\] Akbal \cite{akb} uses the estimate of Fouvry and Tenenbaum \cite{fouten}: for $3 \le y \le \sqrt x$, $q \in \mathbb N$, $(a,q)=1 $, and $\d \in\mathbb R$, \[S\left(\frac aq + \d\right) \ll x (1 + |\d x|) \log^3x \left(\frac{y^{1/2}}{x^{1/4}} + \frac 1{q^{1/2}} + \left(\frac{qy}x\right)^{1/2}\right).\] This is unhelpful when, say, $y = (\log x)^C$ $(C > 1)$ since the trivial bound for $S(\t)$ is $x^{1-1/C +o(1)}$ (see e.g. \cite{hilten}).
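Before describing the tools used in the proof, we remark that \eqref{eq1.1} is easy to test numerically on modest ranges. The following Python sketch is purely illustrative and plays no role in the argument; the choices $\t = \sqrt 2$, $\psi = 0$, $x = 2\cdot 10^4$ and $y = 30$ in it are arbitrary.
\begin{verbatim}
# Illustration of (1.1): count y-smooth members of the Beatty sequence B(x)
# and compare with Psi(theta*x, y)/theta.  theta = sqrt(2) is of finite type.
import math

def largest_prime_factor(n):
    largest, p = 1, 2
    while p * p <= n:
        while n % p == 0:
            largest, n = p, n // p
        p += 1
    return max(largest, n) if n > 1 else largest

def Psi(X, y):        # number of y-smooth integers up to X
    return sum(1 for n in range(1, int(X) + 1) if largest_prime_factor(n) <= y)

theta, psi, x, y = math.sqrt(2), 0.0, 20000, 30
beatty_smooth = sum(1 for n in range(1, x + 1)
                    if largest_prime_factor(math.floor(theta * n + psi)) <= y)
print(beatty_smooth, Psi(theta * x, y) / theta)
\end{verbatim}
The theorem asserts that the ratio of the two printed quantities tends to $1$ as $x \to \infty$.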
We use a procedure of Harper \cite[Theorem 1]{harp} and, to state his result, we introduce some notation. For $2 \le y \le x$, let $u = \frac{\log x}{\log y}$ and let $\a = \a(x,y)$ be the solution of \[\sum_{p \le y} \, \frac{\log p}{p^\a - 1} = \log x.\] For convenience, when $\t = \frac aq + \d$ as above, we write \[L = 2(1 + |\d x|)\] and \begin{equation}\label{eq1.3} M = u^{3/2} \log u \log x (\log L)^{1/2}(\log q y)^{1/2}. \end{equation} In Theorem 1 of \cite{harp} it is shown that whenever \begin{equation}\label{eq1.4} q^2 L^2 y^3 \le x, \end{equation} we have \begin{equation}\label{eq1.5} S\left(\frac aq + \d\right) \ll \Psi(x,y)(q(1+|\d x|))^{-\frac 12 + \frac 32 (1 - \a(x,y))}M. \end{equation} We cannot use this bound directly since \eqref{eq1.4} is too restrictive. We adapt Harper's argument to obtain \begin{thm}\label{thm:multiplicfunc} Let $f$ be a completely multiplicative function, $|f(n)|\le 1$ $(n \in \mathbb N)$. Let \[S(f,\t) = \sum_{\substack{n \le x\\[1mm] n\in \mathfrak S(y)}} f(n) e(\t n).\] Let $q \in \mathbb N$, $(a,q) = 1$, and $\d \in \mathbb R$. Then, with $\a = \a(x,y)$, we have \[S\left(f,\frac aq + \d\right) \ll \Psi(x,y) \left\{ (q(1+|\d x|))^{-\frac 12 + \frac 32\, (1 - \a)} M + x^{-\a/2}(q Ly^3)^{\frac 12} \sqrt{\log y\log q}\right\}.\] \end{thm} To save space, we refer frequently to \cite{harp} in our proof of Theorem \ref{thm:multiplicfunc} in Section 2. Theorem \ref{thm:irrationalnofinitetype} is deduced in a straightforward manner from Theorem \ref{thm:multiplicfunc} in Section 3. The factor $f(n)$ in Theorem \ref{thm:multiplicfunc} is not needed elsewhere in the paper, but it requires no significant effort to include it. I would like to thank Adam Harper for helpful comments concerning his proof of \eqref{eq1.5}. \section{Proof of Theorem \ref{thm:multiplicfunc}}\label{sec:proofthm2} \begin{lem}\label{lem:2leyl3x} Let $2 \le y \le x$ and $d \ge 1$. Then we have \[\Psi \left(\frac xd, y\right) \ll \frac 1{d^{\a(x,y)}}\ \Psi(x,y).\] \end{lem} \begin{proof} See de la Bret\`eche and Tenenbaum \cite[Th\'eor\`eme 2.4 (i)]{bretten}. \end{proof} We write $p(n)$ for the smallest prime factor of $n\in \mathbb N$. We begin the proof of Theorem \ref{thm:multiplicfunc} by noting that the result is trivial for $qLy^3 \ge x^\a$. Suppose now that $qLy^3 < x^\a$. Every $y$-smooth number in $[qLy^2,x]$ can be written uniquely in the form $mn$, where \[qLy < m \le qLy^2, \ \frac m{P(m)} \le qLy, \ m \in \mathfrak S(y)\] and \[\frac{qLy^2}m \le n \le \frac xm \ , \ p(n) \ge P(m), \ \ n \in \mathfrak S(y).\] (We take $m$ to consist of the product of the smallest prime factors of the number.)
With $\t = \frac aq + \d$, we have \begin{align} S(f,\t) &= \sum_{\substack{qLy^2 \le n \le x\\[1mm] n \in \mathfrak S(y)}} f(n) e(n\t) + O(\Psi(qLy^2, y))\label{eq2.1}\\ &= U + O(\Psi(qLy^2,y))\notag \end{align} where \[U = \sum_{\substack{ qLy < m \le qLy^2\\[1mm] m/P(m) \le qLy\\[1mm] m\in \mathfrak S(y)}} \ \sum_{\substack{ \frac{qLy^2}m \le n \le \frac xm\\[1mm] p(n) \ge P(m)\\[1mm] n \in \mathfrak S(y)}} f(mn) e(mn\t).\] We now decompose $U$ as \begin{equation}\label{eq2.2} U = \sum_{0 \le j \le \frac{\log y}{\log 2}} U_j \end{equation} where \[U_j = \sum_{p \le y} \ \ \sum_{\substack{ 2^j qLy < m \le q L y \min(2^{j+1},p)\\[1mm] P(m) = p}} f(m) \ \sum_{\substack{ \frac{qLy^2}m \le n \le \frac xm\\[1mm] p(n) \ge p\\[1mm] n \in \mathfrak S(y)}} f(n) e (mn\t),\] noting that if $P(m) = p \le y$, then $m$ is $y$-smooth, and the condition $\frac m{P(m)} \le qLy$ can be written as $m \le qLyp$. We apply the Cauchy-Schwarz inequality to $U_j$. Let $\sum\limits_m$ denote \[\sum_{\substack{ 2^j qLy < m \le qLy\, \min(2^{j+1},p)\\[1mm] P(m) = p}}.\] We obtain \begin{align*} U_j &\le \sqrt{\sum_{p\le y} \ \sum_m 1} \ \sqrt{ \sum_{2^j \le p \le y}\ \, \sum_{\frac{2^jq Ly}p < m' \le \frac{qLy}p\, \min(2^{j+1},p)}\Bigg|\sum_{\frac{qLy^2}{m'p} \le n \le \frac x{m'p}, p(n) \ge p, n \in \mathfrak S(y)} f(n) e(m'pn\t)\Bigg|^2}\\[2mm] &\ll \sqrt{\Psi(2^{j+1}qLy, y)} \ \, \sqrt{ \sum_{2^j \le p \le y}\ \, \sum_{\substack{ n_1, n_2 \le \frac x{2^jqLy}\\[1mm] p(n_1), p(n_2) \ge p\\[1mm] n_1, n_2 \in \mathfrak S(y)}} \ \min \left\{\frac{2^{j+1}qLy}p, \frac 1{\|(n_1-n_2)p\t\|}\right\}} \end{align*} For the last step, we open the square and sum the geometric progression over $m'$. We may restrict the sum over primes to $p \ge 2^j$, since otherwise the sum over $m$ is empty. Our final bound here for $U_j$ exactly matches \cite{harp}. Let \[\mathfrak T_j(r) : = \max_{1\le b\le r-1} \ \sum_{\substack{ n_1, n_2 \le \frac x{2^jqLy}\\[1mm] n_1, n_2 \in \mathfrak S(y)\\[1mm] n_1 - n_2 \equiv b \operatorname{mod} r}} \, 1.\] Just as in \cite{harp}, after distinguishing the cases $p\mid q$ and $p\nmid q$, we arrive at \begin{equation}\label{eq2.3} U_j \ll \sqrt{\Psi(2^{j+1}q Ly, y)} \, (\sqrt S_1 + \sqrt S_2), \end{equation} with \begin{align*} S_1 = S_1(j): = \sum_{\substack{ 2^j \le p\le y\\[1mm] p\nmid q}} \ & \mathfrak T_j(q) \sum_{b=1}^{q-1} \min \left\{ \frac{2^{j+1}qLy}p\, , \frac qb\right\}\\[2mm] &+ \sum_{\substack{2^j \le p \le y\\[1mm] p\mid q}} \mathfrak T_j \left(\frac qp\right) \ \sum_{b=1}^{(q/p)-1} \min \left\{\frac{2^{j+1}qLy}p , \frac q{pb}\right\} \end{align*} and \begin{gather*} S_2 = S_2(j) : = \sum_{\substack{ 2^j \le p \le y\\[1mm] p\nmid q}} \frac 1p \ \sum_{\substack{ n_1,n_2\le \frac x{2^jqLy}\\[1mm] n_1-n_2 \equiv 0 \operatorname{mod} q\\[1mm] n_1,n_2 \in \mathfrak S(y)}} \min\left\{2^{j+1}qLy, \frac 1{|(n_1-n_2)\d|}\right\}\\[2mm] + \sum_{\substack{ 2^j \le p \le y\\[1mm] p\mid q}} \frac 1p \ \sum_{\substack{ n_1, n_2 \le \frac x{2^jqLy}\\[1mm] n_1-n_2 \equiv 0\operatorname{mod}{q/p}\\[1mm] n_1, n_2\in \mathfrak S(y)}} \min\left\{2^{j+1}qLy, \frac 1{|(n_1-n_2)\d|}\right\}. \end{gather*} We now depart to an extent from the argument in \cite{harp}; we have extra terms in the upper bounds in Lemma \ref{lem:upperbounds} below which arise because we do not have the upper bound \eqref{eq1.4}. \begin{lem}\label{lem:upperbounds} Let $(\log x)^{1.1} \le y \le x^{1/3}$, $q \ge 1$, and $L = 2(1 + |\d x|)$.
Then for any $j$, $0 \le j \le \frac{\log y}{\log 2}$, and any prime $p\mid q$, we have \begin{align} \mathfrak T_j(q) &\ll \frac{\Psi(x/2^j qLy, y)^2}q \ q^{1-\a(x,y)} \log x + y \Psi\left(\frac x{2^jqLy}, y\right)\label{eq2.4}\\ \intertext{and} \mathfrak T_j\left(\frac qp\right) &\ll \frac{\Psi(x/2^j qLy, y)^2}{q/p} \ \left(\frac qp\right)^{1-\a(x,y)}\log x\label{eq2.5}\\ &\hskip 2.25in + y\Psi\left(\frac x{2^jqLy},y\right).\notag \end{align} Under the same hypotheses, we have \begin{align} \sum_{\substack{ n_1,n_2 \le \frac x{2^jqLy}\\[1mm] n_1-n_2 \equiv 0 \operatorname{mod} q\\[1mm] n_1, n_2 \in \mathfrak S(y)}} &\min \left\{2^{j+1} qLy, \frac 1{|(n_1-n_2)\d|}\right\}\label{eq2.6}\\[2mm] &\ll 2^j y\, \Psi \left(\frac x{2^j qLy}, y\right)^2 (qL)^{1-\a(x,y)} \log x \log L \notag\\[2mm] &\hskip .75in + 2^j qLy^2 \Psi\left(\frac x{2^jqLy},y\right)\notag \end{align} and, for any prime $p\mid q$, \begin{align} \sum_{\substack{ n_1,n_2 \le \frac x{2^jqLy}\\[1mm] n_1-n_2 \equiv 0 \operatorname{mod}{q/p}\\[1mm] n_1, n_2 \in \mathfrak S(y)}} &\min \left\{2^{j+1} qLy, \frac 1{|(n_1-n_2)\d|}\right\}\label{eq2.7}\\[2mm] &\ll p2^j y\, \Psi \left(\frac x{2^j qLy}, y\right)^2 \left(\frac{qL}p\right)^{1-\a(x,y)} \log x \log L \notag\\[2mm] &\hskip .75in + 2^j qLy^2\, \Psi\left(\frac x{2^jqLy},y\right).\notag \end{align} \end{lem} \begin{proof} We have \[\mathfrak T_j(q) = \max_{1\le b\le q-1} \, \sum_{\substack{ n_1\le x/2^jqLy\\[1mm] n_1\in \mathfrak S(y)}} \ \sum_{\substack{ n_2 \le x/2^j qLy\\[1mm] n_2\equiv n_1-b \operatorname{mod} q\\[1mm] n_2 \in \mathfrak S(y)}} 1,\] and just as in \cite{harp} the inner sum is \[\ll y + \frac{\Psi(x/2^jqLy)}q \, q^{1-\a(x,y)} \log x.\] This leads to the bound \eqref{eq2.4} on summing over $n_1$. The bound \eqref{eq2.5} follows in exactly the same way. To prove \eqref{eq2.6}, \eqref{eq2.7} we distinguish two cases. If $|\d| \le 1/x$, then $L = 2(1 + |\d x|) \asymp 1$, and the bounds can be proved exactly as above on bounding $\min\left\{2^{j+1} qLy, \frac 1{|(n_1-n_2)\d|}\right\}$ by $2^{j+1} qLy \ll 2^j qy$. Now suppose $|\d| > 1/x$, so that $L\asymp |\d x|$. We partition the sum in \eqref{eq2.6} dyadically. Let us use $\sum^\dag$ to denote a sum over pairs of integers $n_1$, $n_2 \le \frac x{2^jqLy}$ that are $y$-smooth and satisfy $n_1- n_2 \equiv 0\operatorname{mod} q$. Then we have \begin{align*} \sideset{}{^\dag}\sum_{|n_1 - n_2| \le \frac x{2^j qL^2y}} &\min\left\{2^{j+1} qLy, \frac 1{|(n_1-n_2)\d|}\right\}\\ &\ll 2^j qLy \sum_{\substack{ n_1 \le \frac x{2^j qLy}\\[1mm] n_1 \in \mathfrak S(y)}} \ \sum_{\substack{ |n_2 - n_1| \le \frac x{2^j qL^2y}\\[1mm] n_2 \in \mathfrak S(y)\\[1mm] n_2\equiv n_1 \operatorname{mod} q}} 1. \end{align*} Following \cite{harp}, but as above \textit{not} absorbing a term $y$, the last expression is \begin{align*} &\ll 2^j qLy \sum_{\substack{ n_1 \le \frac x{2^jqLy}\\[1mm] n_1\in \mathfrak S(y)}} \left\{\frac{\Psi(x/2^jqLy,y)}{qL}\, (qL)^{1-\a} \log x + y\right\}\\[2mm] &\ll 2^j y \Psi(x/2^j qLy, y)^2 (qL)^{1-\a} \log x + 2^j q Ly^2 \Psi(x/2^j qLy, y). 
\end{align*} Similarly, for any $r$, $0 \le r \le (\log L)/\log 2$, we have \begin{align*} &\sideset{}{^\dag}\sum_{\frac{2^r x}{2^jqL^2y} < |n_1-n_2|\le \frac{2^{r+1}x}{2^jqL^2y}} \min\left\{2^{j+1} qLy, \frac 1{|(n_1-n_2)\d|}\right\}\\[2mm] &\hskip 1in \ll \frac{2^jqLy}{2^r} \ \sum_{\substack{ n_1 \le \frac x{2^jqLy}\\[1mm] n_1\in \mathfrak S(y)}} \ \sum_{\substack{ |n_2 - n_1| \le \frac{2^{r+1}x}{2^jqL^2y}\\[1mm] n_2\in \mathfrak S(y)\\[1mm] n_2\equiv n_1\operatorname{mod} q}} 1\\[2mm] &\qquad\ll \frac{2^jqLy}{2^r} \ \sum_{\substack{ n_1 \le \frac x{2^jqLy}\\[1mm] n_1 \in \mathfrak S(y)}} \left\{\Psi\left(\frac x{2^jqLy},y\right)\left(\frac{2^r}{qL}\right)^\a \log x + y\right\}\\[2mm] &\qquad\ll 2^j y \Psi\left(\frac x{2^jqLy}, y\right)^2 \left(\frac{2^r}{qL}\right)^\a \log x + \frac{2^jqLy^2}{2^r} \, \Psi\left(\frac x{2^jqLy}, y\right). \end{align*} The bound \eqref{eq2.6} follows on summing over $r$. The bound \eqref{eq2.7} follows in exactly the same way; we lose a factor $p^\a$ in the first term on the right-hand side in \eqref{eq2.7} because of the weaker congruence condition. This completes the proof of Lemma \ref{lem:upperbounds}. \end{proof} We now assemble our bounds to prove Theorem \ref{thm:multiplicfunc}. As noted in \cite{harp}, for any $p \le y$ we have \begin{equation}\label{eq2.8} \sum_{b=1}^{q-1} \min\left\{\frac{2^{j+1}qLy}p, \frac qb\right\} \ll q \log q \end{equation} and, for $p\mid q$, \begin{equation}\label{eq2.9} \sum_{b=1}^{(q/p)-1} \min\left\{\frac{2^{j+1}qLy}p, \frac q{pb}\right\} \ll \frac qp \log q. \end{equation} Let \[A_j = \Bigg(\sum_{\substack{ 2^j \le p \le y\\[1mm] p\nmid q}} \frac 1p + \sum_{\substack{ 2^j \le p \le y\\[1mm] p\mid q}} 1\Bigg) 2^jy \Psi \left( \frac x{2^jqLy}\, , y\right) (qL)^{1-\a} \log x \log L.\] We deduce from \eqref{eq2.1}--\eqref{eq2.3}, Lemma \ref{lem:upperbounds}, and \eqref{eq2.8}--\eqref{eq2.9} that \begin{equation}\label{eq2.10} S\left(f, \frac aq +\d\right) \ll \Psi(qLy^2, y) + A + B_1 + B_2 \end{equation} where \begin{align*} A &= \sum_{0 \le j \le \frac{\log y}{\log 2}} \sqrt{\Psi(2^{j+1}qLy,y)} \sqrt{\frac y{\log y}\, \log q \Psi\left(\frac x{2^jqLy},y\right)^2 q^{1-\a} \log x}\\[2mm] &\quad + \sum_{0 \le j \le \frac{\log y}{\log 2}} \sqrt{\Psi(2^{j+1}qLy, y)} \sqrt{A_j},\\[2mm] B_1 &= \sum_{0 \le j \le \frac{\log y}{\log 2}} \sqrt{\Psi(2^{j+1}q L y, y)} \ \sqrt{\sum_{2^j \le p \le y} q \log q \cdot y \Psi\left(\frac x{2^jqLy}, y\right)},\\ \intertext{and} B_2 &= \sum_{0 \le j \le \frac{\log y}{\log 2}} \sqrt{\Psi(2^{j+1}qLy, y)} \sqrt{\frac{qLy^3}{\log y}\, \Psi\left(\frac x{2^jqy}, y\right)}. \end{align*} For the estimation of $A$ we can appeal to \cite{harp}: \begin{equation}\label{eq2.11} A \ll \Psi(x,y)(qL)^{-\frac 12 + \frac 32\, (1-\a(x,y))}M. \end{equation} The $\Psi$ functions in $B_1$ and $B_2$ are estimated using Lemma \ref{lem:2leyl3x}. Thus \begin{align*} \sqrt{\Psi\left(\frac x{2^jqLy}, y\right)\Psi(2^{j+1}qLy,y)} &\ll \Psi(x,y)(2^jqLy)^{-\a/2} \left(\frac x{2^{j+1}qLy}\right)^{-\a/2}\\[2mm] &\ll \Psi(x,y)x^{-\a/2}, \end{align*} leading to (a slightly wasteful) bound \begin{equation}\label{eq2.12} B_1 + B_2 \ll \Psi(x,y) \sqrt{y^3 qL\log q \log y}\ x^{-\a/2}. \end{equation} The remaining term to be estimated is $\Psi(qLy^2, y)$, and Lemma \ref{lem:2leyl3x} gives \begin{equation}\label{eq2.13} \Psi(qLy^2,y) \ll \left(\frac x{qLy^2}\right)^{-\a} \Psi(x,y).
\end{equation} This term can be absorbed into the right-hand side of \eqref{eq2.12} since \begin{equation}\label{eq2.14} (qLy^3)^{\a - \frac 12} \ll (qLy^3)^{\a/2} \ll x^{\a/2}. \end{equation} Theorem \ref{thm:multiplicfunc} follows on combining \eqref{eq2.10}--\eqref{eq2.14}. \section{Proof of Theorem \ref{thm:irrationalnofinitetype}}\label{sec:proofthm1} \begin{lem}\label{lem:letu1dotsu} Let $u_1,\ldots, u_N \in \mathbb R$. Then for any $J \in \mathbb N$ and any $\rho \le \s \le \rho + 1$, we have \begin{align*} \big| |\{1 \le n \le N &: u_n \in [\rho,\s] \operatorname{mod} 1\}| - (\s-\rho)N\big|\\ &\qquad \le \frac N{J+1} + 3 \sum_{j=1}^J \ \frac 1j \, \Bigg|\sum_{n=1}^N e(ju_n)\Bigg|. \end{align*} \end{lem} \begin{proof} See e.g. \cite{rcb}, Theorem 2.1. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:irrationalnofinitetype}] In view of the result of \cite{akb} cited above, we may suppose that \begin{equation}\label{eq3.1} (\log x)^{3+\e} \le y \le \exp((\log \log x)^2). \end{equation} We note that $\lfloor \t n + \psi\rfloor = m$ for some $m$ if and only if \begin{equation}\label{eq3.2} 0 < \left\{\frac{m+1-\psi}\t\right\} \le \frac 1\t, \end{equation} so that, applying Lemma \ref{lem:letu1dotsu}, \begin{align*} \sum_{\substack{n\in \mathcal B(x)\\[1mm] P(n) \le y}} 1 &= \sum_{\substack{ m \le \t x\\[1mm] m\in \mathfrak S(y)\\[1mm] \text{\eqref{eq3.2} holds}}} 1 + O(1)\\[2mm] &= \t^{-1} \Psi(\t x, y) + O\Bigg( \frac{\Psi(\t x, y)}{\log x} + \sum_{j=1}^{[\log x]} \ \frac 1j \Bigg|\sum_{\substack{ m \le \t x\\[1mm] m\in \mathfrak S(y)}} e\left(\frac{jm}\t\right)\Bigg|\Bigg). \end{align*} For our purposes, then, it suffices to show that \begin{equation}\label{eq3.3} \sum_{\substack{ m \le \t x\\[1mm] m\in \mathfrak S(y)}} e(j\t^{-1}m) \ll \Psi(\t x,y)(\log x)^{-2} \end{equation} uniformly for $1 \le j \le \log x$. We deduce this from Theorem \ref{thm:multiplicfunc}. By Dirichlet's theorem there is $q \in \mathbb N$, $1 \le q \le x^{1/2}$, and $a \in \mathbb N$ with $(a,q)= 1$ such that \[\d : = j\t^{-1} - \frac aq \in \left[- \frac 1{qx^{1/2}}, \frac 1{qx^{1/2}}\right].\] Now \[|qj\t^{-1}-a| \ge \frac{c_1}{(qj)^\k}\] for positive constants $c_1$ and $\k$, hence \[x^{-1/2} \ge q|\d| \ge \frac{c_1}{(qj)^\k}\] giving \[q \gg x^{1/2\k}(\log x)^{-1}.\] We apply Theorem \ref{thm:multiplicfunc} with $j\t^{-1}$ in place of $\t$, and $\t x$ in place of $x$. Note that \[qL = 2q(1 + |\d \t x|) \ll x^{1/2}\] so that, in view of \eqref{eq3.1}, \begin{equation}\label{eq3.4} qLy^3 \ll x^{5/9}. \end{equation} Now, since $y \ge (\log x)^{3 + \e}$, we have \[\a(\t x, y) \ge \frac 23 + \eta \quad (\eta = \eta(\e) > 0);\] see \cite{hilten}, for example. Thus, abbreviating $\a(\t x, y)$ to $\a$, \[q^{-\frac 12 + \frac 32\, (1-\a)} \ll q^{-\eta} \ll x^{-\eta/3\k}.\] Now, with $M$ as in \eqref{eq1.3} with $\t x$ in place of $x$, \begin{equation}\label{eq3.5} q^{-\frac 12 + \frac 32\, (1-\a)}M \ll x^{-\eta/4\k}. \end{equation} Next, recalling \eqref{eq3.4}, \begin{equation}\label{eq3.6} (\t x)^{-\a/2}(qLy^3)^{1/2}\sqrt{\log y\log q} \ll x^{-1/20}. \end{equation} Combining \eqref{eq3.5}, \eqref{eq3.6} we obtain \eqref{eq3.3}. This completes the proof of Theorem \ref{thm:irrationalnofinitetype}. \end{proof} \end{document}
\begin{document} \title{Lerch's $\Phi$ and the Polylogarithm at the Positive Integers} \date{June 15, 2020} \author{Jose Risomar Sousa} \maketitle \usetagform{Tags} \begin{abstract} We review the closed-forms of the partial Fourier sums associated with $HP_k(n)$ and create an asymptotic expression for $HP(n)$ as a way to obtain formulae for the full Fourier series (if $b$ is such that $|b|<1$, we get a surprising pattern, $HP(n) \sim H(n)-\sum_{k\ge 2}(-1)^k\zeta(k)b^{k-1}$). Finally, we use the found Fourier series formulae to obtain the values of the Lerch transcendent function, $\Phi(e^m,k,b)$, and by extension the polylogarithm, $\mathrm{Li}_{k}(e^{m})$, at the positive integers $k$. \end{abstract} \tableofcontents \section{Introduction} Since the Basel problem was posed in 1650, scholars have been eager to find closed-forms for similar infinite series, especially Dirichlet series. In this article, we create formulae for the Lerch transcendent function, $\Phi(e^m,k,b)$, and the polylogarithm, $\mathrm{Li}_{k}(e^{m})$, that hold at the positive integers $k$. Conversely, a formula for the Hurwitz zeta function at the negative integers, $\zeta(-k,b)$, is also created, to complement a formula at the positive integers produced in \citena{Hurwitz}.\\ The advantage of formulae that only hold at the positive integers is that we expect them to be simpler and easier to work with. It's an obvious statement if, for example, we think about the closed-forms of the zeta function at the positive integers greater than one, $\zeta(k)$, and its general integral, valid for $\Re{(k)}>1$.\\ The formulae derived here are based on new expressions for the generalized harmonic progressions: \begin{equation} \nonumber HP_{k}(n)=\sum_{j=1}^{n}\frac{1}{(a\,j+b)^{k}} \text{,} \end{equation} \noindent which have been extensively studied in two previous papers, and vary depending on whether the parameters, $a$ and $b$, are integer\citesup{GHP} or complex\citesup{CHP}. When $a=1$ and $b=0$, we have a notable particular case, the generalized harmonic numbers, $H_k(n)$.\\ In \citena{GHP} we derived expressions for the partial Fourier sums, $C^m_{k}(a,b,n)$ and $S^m_{k}(a,b,n)$, associated with $HP_{k}(n)$, which we reproduce in the next section, with a short description.\\ \indent Our objective in this paper is to obtain the limit of those expressions as $n$ gets large, and then combine them to obtain the Lerch transcendent function, $\Phi$, at the positive integers.\\ \indent In the process, we need to obtain the limit of $HP(n)-H(n)$, with $2b$ a non-integer complex number. Since this limit can also be attained by means of the digamma function, $\psi(n)$, this is just a new, more interesting way of deriving that limit.\\ In section \secrefe{int_lim}, we review the limits of the integrals that appear in the expressions of $HP_k(n)$ as $n$ tends to infinity, which are central to this solution.\\ The process of obtaining the limits of $C^m_{2k}(b,n)$ and $S^m_{2k+1}(b,n)$ is much simpler than that of $C^m_{2k+1}(b,n)$ and $S^m_{2k}(b,n)$, since the latter involve the limit of $HP(n)$, which is not finite. \section{The partial Fourier sums} The subsequent expressions are the partial sums of the Fourier series associated with the generalized harmonic progressions from \citena{GHP}, and hold for all complex $m$, $a$ and $b$ and for all integer $n\geq 1$.\\ \indent By definition $HP_{0}(n)=0$ for all positive integer $n$, so such terms actually have no effect on the sums.
If $b=0$, we can discard any term that has a null denominator and the equation still holds (technically, we take the limit as $b$ tends to 0, as we see in section \secrefe{part}). \subsection{$C^m_{2k}(a,b,n)$ and $S^m_{2k+1}(a,b,n)$} \label{Final_1} For all integer $k \geq 1$: \begingroup \footnotesize \begin{multline} \nonumber \sum_{j=1}^{n}\frac{1}{(a j+b)^{2k}}\cos{\frac{2\pi(a j+b)}{m}}=-\frac{1}{2b^{2k}}\left(\cos{\frac{2\pi b}{m}}-\sum_{j=0}^{k}\frac{(-1)^j}{(2j)!}\left(\frac{2\pi b}{m}\right)^{2j}\right) \\+\frac{1}{2(a n+b)^{2k}}\left(\cos{\frac{2\pi(a n+b)}{m}}-\sum_{j=0}^{k}\frac{(-1)^j}{(2j)!}\left(\frac{2\pi(a n+b)}{m}\right)^{2j}\right)+\sum_{j=1}^{k}\frac{(-1)^{k-j}}{(2k-2j)!}\left(\frac{2\pi}{m}\right)^{2k-2j}HP_{2j}(n) \\+\frac{(-1)^k}{2(2k-1)!}\left(\frac{2\pi}{m}\right)^{2k}\int_{0}^{1}(1-u)^{2k-1}\left(\sin{\frac{2\pi(a n+b)u}{m}}-\sin{\frac{2\pi b u}{m}}\right)\cot{\frac{\pi a u}{m}}\,du \end{multline} \endgroup\\ \indent For all integer $k \geq 0$: \begingroup \footnotesize \begin{multline} \nonumber \sum_{j=1}^{n}\frac{1}{(a j+b)^{2k+1}}\sin{\frac{2\pi(a j+b)}{m}}=-\frac{1}{2b^{2k+1}}\left(\sin{\frac{2\pi b}{m}}-\sum_{j=0}^{k}\frac{(-1)^j}{(2j+1)!}\left(\frac{2\pi b}{m}\right)^{2j+1}\right) \\+\frac{1}{2(a n+b)^{2k+1}}\left(\sin{\frac{2\pi(a n+b)}{m}}-\sum_{j=0}^{k}\frac{(-1)^j}{(2j+1)!}\left(\frac{2\pi(a n+b)}{m}\right)^{2j+1}\right)+\sum_{j=1}^{k}\frac{(-1)^{k-j}}{(2k+1-2j)!}\left(\frac{2\pi}{m}\right)^{2k+1-2j}HP_{2j}(n) \\+\frac{(-1)^k}{2(2k)!}\left(\frac{2\pi}{m}\right)^{2k+1}\int_{0}^{1}(1-u)^{2k}\left(\sin{\frac{2\pi(a n+b)u}{m}}-\sin{\frac{2\pi b u}{m}}\right)\cot{\frac{\pi a u}{m}}\,du \end{multline} \endgroup \subsubsection{The limits of $C^m_{2k}(n)$ and $S^m_{2k+1}(n)$} \label{facil} For comparison purposes, let's review some limits that we derived previously for the particular cases $C^m_{2k}(n)$ and $S^m_{2k+1}(n)$ (that is, $a=1$ and $b=0$). We expect the limits of the more general expressions to coincide with them.\\ At infinity these particular cases become Fourier series (denoted here by $C^m_{2k}$ and $S^m_{2k+1}$), which have limits given by: \begingroup \small \begin{eqleft} \nonumber C^m_{2k}=\sum_{j=1}^{\infty}\frac{1}{j^{2k}}\cos{\frac{2\pi j}{m}}=\sum_{j=0}^{k}\frac{(-1)^{k-j}}{(2k-2j)!}\left(\frac{2\pi}{m}\right)^{2k-2j}\zeta(2j)+\frac{(-1)^k\abs{m}}{4(2k-1)!}\left(\frac{2\pi}{m}\right)^{2k} \text{ (} \forall \text{ integer }k\geq 1\text{)} \end{eqleft} \begin{eqleft} \nonumber S^m_{2k+1}=\sum_{j=1}^{\infty}\frac{1}{j^{2k+1}}\sin{\frac{2\pi j}{m}}=\sum_{j=0}^{k}\frac{(-1)^{k-j}}{(2k+1-2j)!}\left(\frac{2\pi}{m}\right)^{2k+1-2j}\zeta(2j)+\frac{(-1)^k\abs{m}}{4(2k)!}\left(\frac{2\pi}{m}\right)^{2k+1} \text{ (} \forall\text{ integer } k\geq 0\text{)} \end{eqleft} \endgroup\\ \indent These limits only hold for real $\abs{m}\ge 1$ ($k=0$ and $\abs{m}=1$ are exceptions and also trivial cases). So for $S^1_{1}=0$ the formula breaks down (see section \secrefe{int_lim} to know why). Both these results are known in the literature, they're rewrites of equations that feature in \citena{dois} (page 805). 
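As an independent numerical check of these two limit formulae (our own illustration, not needed in what follows; the choices $k=1$, $m=3$ and the truncation point are arbitrary), one can compare truncated series against the closed forms, using only $\zeta(0)=-1/2$ and $\zeta(2)=\pi^2/6$:
\begin{verbatim}
# Check of the limits C^m_{2k} and S^m_{2k+1} for k = 1, m = 3.
import math

m, N = 3.0, 200000
C2_series = sum(math.cos(2*math.pi*j/m) / j**2 for j in range(1, N + 1))
S3_series = sum(math.sin(2*math.pi*j/m) / j**3 for j in range(1, N + 1))

z0, z2 = -0.5, math.pi**2/6          # zeta(0), zeta(2)
w = 2*math.pi/m
C2_closed = -(w**2/2)*z0 + z2 - abs(m)*w**2/4     # k = 1 case of the first formula
S3_closed = -(w**3/6)*z0 + w*z2 - abs(m)*w**3/8   # k = 1 case of the second formula
print(C2_series, C2_closed)          # both approximately -pi^2/18 for m = 3
print(S3_series, S3_closed)
\end{verbatim}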
\subsection{$C^m_{2k+1}(a,b,n)$ and $S^m_{2k}(a,b,n)$} \label{Final_2} For all integer $k \geq 0$: \begingroup \footnotesize \begin{multline} \nonumber \sum_{j=1}^{n}\frac{1}{(a j+b)^{2k+1}}\cos{\frac{2\pi(a j+b)}{m}}=-\frac{1}{2b^{2k+1}}\left(\cos{\frac{2\pi b}{m}}-\sum_{j=0}^{k}\frac{(-1)^j}{(2j)!}\left(\frac{2\pi b}{m}\right)^{2j}\right)\\+\frac{1}{2(a n+b)^{2k+1}}\left(\cos{\frac{2\pi(a n+b)}{m}}-\sum_{j=0}^{k}\frac{(-1)^j}{(2j)!}\left(\frac{2\pi(a n+b)}{m}\right)^{2j}\right)+\sum_{j=0}^{k}\frac{(-1)^{k-j}}{(2k-2j)!}\left(\frac{2\pi}{m}\right)^{2k-2j}HP_{2j+1}(n)\\+\frac{(-1)^k}{2(2k)!}\left(\frac{2\pi}{m}\right)^{2k+1}\int_{0}^{1}(1-u)^{2k}\left(\cos{\frac{2\pi(a n+b)u}{m}}-\cos{\frac{2\pi b u}{m}}\right)\cot{\frac{\pi a u}{m}}\,du \end{multline} \endgroup For all integer $k \geq 1$: \begingroup \footnotesize \begin{multline} \nonumber \sum_{j=1}^{n}\frac{1}{(a j+b)^{2k}}\sin{\frac{2\pi(a j+b)}{m}}=-\frac{1}{2b^{2k}}\left(\sin{\frac{2\pi b}{m}}-\sum_{j=0}^{k-1}\frac{(-1)^j}{(2j+1)!}\left(\frac{2\pi b}{m}\right)^{2j+1}\right)\\+\frac{1}{2(a n+b)^{2k}}\left(\sin{\frac{2\pi(a n+b)}{m}}-\sum_{j=0}^{k-1}\frac{(-1)^j}{(2j+1)!}\left(\frac{2\pi(a n+b)}{m}\right)^{2j+1}\right)-\sum_{j=0}^{k-1}\frac{(-1)^{k-j}}{(2k-1-2j)!}\left(\frac{2\pi}{m}\right)^{2k-1-2j}HP_{2j+1}(n) \\-\frac{(-1)^k}{2(2k-1)!}\left(\frac{2\pi}{m}\right)^{2k}\int_{0}^{1}(1-u)^{2k-1}\left(\cos{\frac{2\pi(a n+b)u}{m}}-\cos{\frac{2\pi b u}{m}}\right)\cot{\frac{\pi a u}{m}}\,du \end{multline} \endgroup \subsubsection{The limits of $C^m_{2k+1}(n)$ and $S^m_{2k}(n)$} \label{dificil} The limits of $C^m_{2k+1}(n)$ and $S^m_{2k}(n)$ for real $\abs{m}\ge 1$ are given by: \begingroup \small \begin{multline} \nonumber C^m_{2k+1}=\sum_{j=1}^{\infty}\frac{1}{j^{2k+1}}\cos{\frac{2\pi j}{m}}=\sum_{j=1}^{k}\frac{(-1)^{k-j}}{(2k-2j)!}\left(\frac{2\pi}{m}\right)^{2k-2j}\zeta(2j+1)+\frac{(-1)^k}{(2k)!}\left(\frac{2\pi}{m}\right)^{2k}\log{\abs{m}}\\-\frac{(-1)^k}{2(2k)!}\left(\frac{2\pi}{m}\right)^{2k+1}\int_{0}^{1}(1-u)^{2k}\cot{\frac{\pi u}{m}}-m(1-u)\cot{\pi u}\,du \text{ (} \forall\text{ integer } k\geq 0\text{)} \end{multline} \begin{multline} \nonumber S^m_{2k}=\sum_{j=1}^{\infty}\frac{1}{j^{2k}}\sin{\frac{2\pi j}{m}}=-\sum_{j=1}^{k-1}\frac{(-1)^{k-j}}{(2k-1-2j)!}\left(\frac{2\pi}{m}\right)^{2k-1-2j}\zeta(2j+1)-\frac{(-1)^k }{(2k-1)!}\left(\frac{2\pi}{m}\right)^{2k-1}\log{\abs{m}}\\+\frac{(-1)^k}{2(2k-1)!}\left(\frac{2\pi}{m}\right)^{2k}\int_{0}^{1}(1-u)^{2k-1}\cot{\frac{\pi u}{m}}-m(1-u)\cot{\pi u}\,du \text{ (} \forall\text{ integer } k\geq 1\text{)} \end{multline} \endgroup\\ \indent The exception is $C^1_1=\infty$, since integral $\int_{0}^{1}\cot{\pi u}-(1-u)\cot{\pi u}\,du$ diverges, which means that $H(n)$ diverges. These results are probably original. \section{The limits of the integrals} \label{int_lim} \indent In \citena{GHNR} we introduced the following theorems, whose validity we now fully extend. 
For all real $k \geq 0$ and real $m$: \begin{eqleft} \nonumber \textbf{Theorem 1}\lim_{n\to\infty}\int_{0}^{1}(1-u)^{k}\sin{\frac{2\pi n u}{m}}\cot{\frac{\pi u}{m}}\,du= \begin{cases} 1, & \text{if}\ k=0 \text{ and }\abs{m}=1\\ \abs{\frac{m}{2}}, & \text{if }\abs{m}\ge 1 \end{cases} \end{eqleft}\\ \indent Another result we need is in the following theorem, which holds for all real $k \geq 0$ and real $\abs{m}\geq 1$ (except $k=0$ and $\abs{m}=1$, for which the integral doesn't converge): \begin{eqleft} \nonumber \textbf{Theorem 2} \lim_{n\to\infty}\int_{0}^{1}(1-u)^{k}\cos{\frac{2\pi n u}{m}}\cot{\frac{\pi u}{m}}-m\,(1-u)\cos{2\pi n u}\cot{\pi u}\,du=\frac{m\log{\abs{m}}}{\pi} \end{eqleft}\\ \indent A direct consequence of theorem 2 and of the various possible formulae for $H(n)$ (see \citena{GHNR}), the below result is useful to handle the half-integers in \secrefe{C_lim} and \secrefe{Final}: \begin{equation} \nonumber \lim_{n\to\infty}\int_{0}^{1}(1-u)^{k}\cos{\frac{2\pi n u}{m}}\cot{\frac{\pi u}{m}}-\frac{m}{2}\,(1-u)\cos{\pi n u}\cot{\frac{\pi u}{2}}\,du=\frac{m}{\pi}\log{\frac{\abs{m}}{2}} \end{equation} We don't provide a proof for these results due to the scope, but they should be simple.\\ Though these limits shouldn't converge for non-real complex $m$, when they are linearly combined like $l_1+\bm{i}\,l_2$ their infinities cancel out giving a finite value. This property is what allows our final formula from section \secrefe{Final} to converge nearly always, even when the parameters are not real. \section{$HP(n)$ asymptotic behavior} Here we figure out the relationship between $HP(n)$ and $H(n)$.\\ For this exercise, we make use of the sine-based $HP(n)$ formula from \citena{CHP}, which is: \begin{equation} \nonumber \sum_{j=1}^{n}\frac{1}{j+b}=-\frac{1}{2b}+\frac{1}{2(n+b)}+\frac{\pi}{\sin{2\pi b}}\int_{0}^{1}\left(\sin{2\pi(n+b)u}-\sin{2\pi b u}\right)\cot{\pi u}\,du \end{equation}\\ \indent We can ``expand" the sine (that is, use the identity $\sin(x+y)=\sin{x}\cos{y}+\cos{x}\sin{y}$), getting: \begin{multline} \nonumber \small \sum_{j=1}^{n}\frac{1}{j+b}=-\frac{1}{2b}+\frac{1}{2(n+b)}+\frac{\pi}{\sin{2\pi b}}\int_{0}^{1}\left(\cos{2\pi bu}\sin{2\pi n u}+\sin{2\pi bu}\cos{2\pi n u}-\sin{2\pi b u}\right)\cot{\pi u}\,du \end{multline}\\ \indent We take the first part of the integrand, change the variables, expand $\cos{2\pi b(1-u)}$ (with identity $\cos(x+y)=\cos{x}\cos{y}-\sin{x}\sin{y}$), and by means of the theorem 1 we can conclude that: \begin{equation} \label{eq:1o_lim} \small \lim_{n\to\infty}\frac{\pi}{\sin{2\pi b}}\int_{0}^{1}\cos{2\pi b(1-u)}\sin{2\pi n(1-u)}\cot{\pi(1-u)}\,du=\frac{\pi}{2}\left(\cot{2\pi b}+\csc{2\pi b}\right) \end{equation}\\ \indent Now we need to work out the second part of the integrand. 
We make a change of variables, expand $\sin{2\pi b(1-u)}$, and when using theorem 2 we need to avoid the case $k=0$ and $m=1$ (since that integral doesn't converge), which leads us to: \begin{multline} \nonumber \small \int_{0}^{1}\left(\sin{2\pi b(1-u)}\cos{2\pi n(1-u)}-\sin{2\pi b(1-u)}\right)\cot{\pi(1-u)}\,du=\\ \sin{2\pi b}\int_{0}^{1}(\cos{2\pi bu}-1-u(\cos{2\pi b}-1))\cos{2\pi n(1-u)}\cot{\pi(1-u)}\,du \\ -\cos{2\pi b}\int_{0}^{1}(\sin{2\pi bu}-u\sin{2\pi b})\cos{2\pi n(1-u)}\cot{\pi(1-u)}\,du \\ +\int_{0}^{1}(-\sin{2\pi b(1-u)}+(\sin{2\pi b})(1-u)\cos{2\pi n(1-u)})\cot{\pi(1-u)}\,du \end{multline}\\ \indent The two first integrals on the right-hand side cancel out, per theorem 2, when $n$ goes to infinity, leaving only the third integral to be figured.\\ But if we look back at the expression for $H(n)$ from \citena{GHNR}, we notice it matches part of the last integral: \begin{equation} \label{eq:H(n)} \sum_{j=1}^{n}\frac{1}{j}=\frac{1}{2n}+\pi\int_{0}^{1} u\left(1-\cos{2\pi n (1-u)}\right)\cot{\pi(1-u)}\,du \text{,} \end{equation}\\ \noindent which means the last integral can be further split: \begin{multline} \nonumber \small \frac{\pi}{\sin{2\pi b}}\int_{0}^{1}(-\sin{2\pi b(1-u)}+(\sin{2\pi b})(1-u)\cos{2\pi n(1-u)})\cot{\pi(1-u)}\,du=\\ \frac{\pi}{\sin{2\pi b}}\int_{0}^{1}(-\sin{2\pi b(1-u)}-u\sin{2\pi b}+\sin{2\pi b}\cos{2\pi n(1-u)})\cot{\pi(1-u)}\,du\\ +\pi\int_{0}^{1} u\left(1-\cos{2\pi n (1-u)}\right)\cot{\pi(1-u)}\,du \end{multline}\\ \indent At this point, there's only the limit of the first integral on the right-hand side left to figure out, but fortunately that integral is constant for all integer $n$\footnote{It stems from $\int_{0}^{1}(1-\cos{2\pi n\,u})\cot{\pi\,u}\,du=0$ for all integer $n$.}. Therefore, after simplifying \eqrefe{1o_lim} further, we conclude that for sufficiently large $n$: \begin{equation} \label{eq:approx} \sum_{j=1}^{n}\frac{1}{j+b} \sim -\frac{1}{2b}+\frac{\pi}{2}\cot{\pi b}-\pi\int_{0}^{1}\left(\frac{\sin{2\pi b u}}{\sin{2\pi b}}-u\right)\cot{\pi u}\,du+H(n) \end{equation}\\ \indent Coincidentally, the above integral is identical to the generating function of the zeta function at the odd integers, that we've seen in \citena{GHNR}: \begin{equation} \nonumber \sum_{k=1}^{\infty}\zeta(2k+1)x^{2k+1}=-\pi x\int_{0}^{1}\left(\frac{\sin{2\pi x u}}{\sin{2\pi x}}-u\right)\cot{\pi u}\,du \end{equation}\\ \indent That means that for sufficiently large $n$ and $0<\abs{b}<1$ we can write the interesting approximation: \begin{equation} \nonumber \sum_{j=1}^{n}\frac{1}{j+b} \sim H(n)-\sum_{k=2}^{\infty}(-1)^k\zeta(k)b^{k-1} \end{equation}\\ \indent Now, since formula \eqrefe{approx} clearly doesn't hold at the half-integers, for such $b$ we can resort to a different integral representation for the $\zeta(2k+1)$ generating function\citesup{GHNR}, which leads to: \begin{equation} \nonumber \sum_{j=1}^{n}\frac{1}{j+b} \sim -\frac{1}{2b}+\frac{\pi}{2}\int_{0}^{1}\left(-1+u+\cos{\pi b u}\right)\cot{\frac{\pi u}{2}}\,du+H(n) \text{,} \end{equation}\\ \noindent since $\cot{\pi b}$ is zero for all half-integer $b$. 
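A minimal numerical sketch of this last approximation, assuming the \texttt{mpmath} library and one arbitrary choice of $b$ and $n$:
\begin{verbatim}
# Sketch: numerical test of
#   sum_{j=1}^{n} 1/(j+b)  ~  H(n) - sum_{k>=2} (-1)^k zeta(k) b^(k-1),  0 < |b| < 1,
# for one arbitrary choice of b and n (the gap should shrink as n grows).
from mpmath import mp, zeta, nsum, inf

mp.dps = 25
b, n = mp.mpf('0.3'), 10**4

lhs = sum(1/(j + b) for j in range(1, n + 1))
H_n = sum(mp.one/j for j in range(1, n + 1))
rhs = H_n - nsum(lambda k: (-1)**int(k) * zeta(k) * b**(k - 1), [2, inf])

print(lhs, rhs, lhs - rhs)   # difference is roughly of order b/n
\end{verbatim}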
\section{The full Fourier series} \label{new_lim} Although the expressions of $C^m_{k}(a,b,n)$ and $S^m_{k}(a,b,n)$ hold for all positive integers $k$ and $n$ and complex $m$, $a$ and $b$, the limits that we find next are constrained by the requirements of the theorems 1 and 2 from section \secrefe{int_lim}.\\ Without loss of generality, let's set $a=1$ to simplify the calculations: \begin{equation} \nonumber C^m_{k}(b,n)=\sum_{j=1}^{n}\frac{1}{(j+b)^k}\cos{\frac{2\pi(j+b)}{m}} \text{ and } S^m_{k}(b,n)=\sum_{j=1}^{n}\frac{1}{(j+b)^k}\sin{\frac{2\pi(j+b)}{m}} \end{equation}\\ \indent And since $k=0$ and $\abs{m}=1$ leads to trivial cases, we're not going to account for them in the following reasoning (so remember the final formulae may not be true for $k=0$ and $\abs{m}=1$). \subsection{The limit of $C^m_{2k+1}(b,n)$} \label{C_lim} The limit of $C^m_{2k+1}(b,n)$ is much harder to figure out than the limit of $S^m_{2k+1}(b,n)$, which should come as no surprise given the limits we've seen in sections \secrefe{facil} and \secrefe{dificil} for the particular cases. So, without further ado, let's see how to go about it: \begingroup \small \begin{multline} \nonumber \lim_{n\to\infty}C^m_{2k+1}(b,n)=-\frac{1}{2b^{2k+1}}\left(\cos{\frac{2\pi b}{m}}-\sum_{j=0}^{k}\frac{(-1)^j (\frac{2\pi b}{m})^{2j}}{(2j)!}\right) +\sum_{j=1}^{k}\frac{(-1)^{k-j}(\frac{2\pi}{m})^{2k-2j}}{(2k-2j)!}\zeta(2j+1,b+1) \\+\lim_{n\to\infty}\frac{(-1)^k}{2(2k)!}\left(\frac{2\pi}{m}\right)^{2k+1}\left(\frac{m}{\pi}\sum_{j=1}^{n}\frac{1}{j+b}+\int_{0}^{1}(1-u)^{2k}\left(\cos{\frac{2\pi(n+b)u}{m}}-\cos{\frac{2\pi b u}{m}}\right)\cot{\frac{\pi u}{m}}\,du\right) \end{multline}\\ \endgroup \indent Now, if we recall the approximation we found for $HP(n)$ in \eqrefe{approx}, $HP(n)\sim c+H(n)$ for large $n$ (where $c$ is the part that doesn't depend on $n$), we only need to solve the limit: \begin{multline} \nonumber \lim_{n\to\infty}\left(\frac{m}{\pi}(c+H(n))+\int_{0}^{1}(1-u)^{2k}\left(\cos{\frac{2\pi(n+b)u}{m}}-\cos{\frac{2\pi b u}{m}}\right)\cot{\frac{\pi u}{m}}\,du\right) \end{multline} \indent We can expand the cosine in the last integral: \begin{equation} \nonumber \int_{0}^{1}(1-u)^{2k}\left(\cos{\frac{2\pi bu}{m}}\left(-1+\cos{\frac{2\pi nu}{m}}\right)-\sin{\frac{2\pi b u}{m}}\sin{\frac{2\pi n u}{m}}\right)\cot{\frac{\pi u}{m}}\,du \end{equation} \indent But due to theorem 1, the below limit is 0 (we just need to expand the first sine): \begin{equation} \nonumber \lim_{n\to\infty}\int_{0}^{1}u^{2k}\sin{\frac{2\pi b(1-u)}{m}}\sin{\frac{2\pi n(1-u)}{m}}\cot{\frac{\pi(1-u)}{m}}\,du=0 \end{equation} \indent Now, by replacing $H(n)$ with its equation \eqrefe{H(n)} and adding it up to what's left in the integral: \begin{equation} \nonumber \int_{0}^{1}(1-u)^{2k}\cos{\frac{2\pi bu}{m}}\left(-1+\cos{\frac{2\pi nu}{m}}\right)\cot{\frac{\pi u}{m}}+m(1-u)\left(1-\cos{2\pi n u}\right)\cot{\pi u}\,du \end{equation}\\ \indent Looking at theorem 2, we can recombine the terms conveniently into an integral that converges as $n$ goes to infinity: \begin{equation} \nonumber \lim_{n\to\infty}\int_{0}^{1}(1-u)^{2k}\cos{\frac{2\pi bu}{m}}\cos{\frac{2\pi nu}{m}}\cot{\frac{\pi u}{m}}-m(1-u)\cos{2\pi n u}\cot{\pi u}\,du=\frac{m\log{\abs{m}}}{\pi} \text{,} \end{equation} \noindent which is justified by the following (only one piece shown): \begin{multline} \nonumber \cos{\frac{2\pi b}{m}}\int_{0}^{1}u^{2k}\cos{\frac{2\pi bu}{m}}\cos{\frac{2\pi n(1-u)}{m}}\cot{\frac{\pi(1-u)}{m}}-m\cos{\frac{2\pi b}{m}}\,u\cos{2\pi n(1-u)}\cot{\pi(1-u)}\,du \\ 
\rightarrow \left(\cos{\frac{2\pi b}{m}}\right)^2\frac{m\log{\abs{m}}}{\pi} \text{,} \end{multline}\\ \noindent whereas the remaining integral converges on its own.\\ \indent Let's summarize the result. The below limit doesn't change if we pick different formulae for $H(n)$, which is useful to figure out how the formula changes for the half-integers $b$: \begin{multline} \nonumber \lim_{n\to\infty}\left(\frac{m}{\pi}H(n)+\int_{0}^{1}(1-u)^{2k}\left(\cos{\frac{2\pi(n+b)u}{m}}-\cos{\frac{2\pi b u}{m}}\right)\cot{\frac{\pi u}{m}}\,du\right)\\ =\frac{m\log{\abs{m}}}{\pi} -\int_{0}^{1}(1-u)^{2k}\cos{\frac{2\pi bu}{m}}\cot{\frac{\pi u}{m}}-m(1-u)\cot{\pi u}\,du\\=\frac{m}{\pi}\log{\frac{\abs{m}}{2}}-\int_{0}^{1}(1-u)^{2k}\cos{\frac{2\pi bu}{m}}\cot{\frac{\pi u}{m}}-\frac{m}{2}(1-u)\cot{\frac{\pi u}{2}}\,du \end{multline} \subsubsection{Non-integer $2b$} \indent After we put everything together, the conclusion is that for all integer $k\ge 0$ and real $\abs{m}\ge 1$: \begingroup \small \begin{multline} \nonumber \sum_{j=1}^{\infty}\frac{\cos{\frac{2\pi(j+b)}{m}}}{(j+b)^{2k+1}}=-\frac{1}{2b^{2k+1}}\left(\cos{\frac{2\pi b}{m}}-\sum_{j=0}^{k-1}\frac{(-1)^j}{(2j)!}\left(\frac{2\pi b}{m}\right)^{2j}\right) +\sum_{j=1}^{k}\frac{(-1)^{k-j}}{(2k-2j)!}\left(\frac{2\pi}{m}\right)^{2k-2j}\zeta(2j+1,b+1) \\+\frac{(-1)^k\pi}{2(2k)!}\left(\frac{2\pi}{m}\right)^{2k}\left(\cot{2\pi b}+\csc{2\pi b}\right)+\frac{(-1)^k}{(2k)!}\left(\frac{2\pi}{m}\right)^{2k}\log{\abs{m}}\\-\frac{(-1)^k}{2(2k)!}\left(\frac{2\pi}{m}\right)^{2k+1}\int_{0}^{1}(1-u)^{2k}\cos{\frac{2\pi bu}{m}}\cot{\frac{\pi u}{m}}+m\left(-1+\frac{\sin{2\pi b u}}{\sin{2\pi b}}\right)\cot{\pi u}\,du \end{multline} \endgroup\\ \indent As we can see, it takes a really convoluted function to generate this simple Fourier series. 
\subsubsection{Half-integer $b$} At the half-integers $b$, the formula reduces to: \begingroup \small \begin{multline} \nonumber \sum_{j=1}^{\infty}\frac{\cos{\frac{2\pi(j+b)}{m}}}{(j+b)^{2k+1}}=-\frac{1}{2b^{2k+1}}\left(\cos{\frac{2\pi b}{m}}-\sum_{j=0}^{k-1}\frac{(-1)^j}{(2j)!}\left(\frac{2\pi b}{m}\right)^{2j}\right) +\sum_{j=1}^{k}\frac{(-1)^{k-j}}{(2k-2j)!}\left(\frac{2\pi}{m}\right)^{2k-2j}\zeta(2j+1,b+1) \\+\frac{(-1)^k}{(2k)!}\left(\frac{2\pi}{m}\right)^{2k}\log{\frac{\abs{m}}{2}}-\frac{(-1)^k}{2(2k)!}\left(\frac{2\pi}{m}\right)^{2k+1}\int_{0}^{1}(1-u)^{2k}\cos{\frac{2\pi bu}{m}}\cot{\frac{\pi u}{m}}-\frac{m}{2}\cos{\pi b u}\cot{\frac{\pi u}{2}}\,du \end{multline} \endgroup \subsubsection{Integer $b$} For integer $b$: \begingroup \small \begin{multline} \nonumber \sum_{j=1}^{\infty}\frac{\cos{\frac{2\pi(j+b)}{m}}}{(j+b)^{2k+1}}=-\frac{1}{2b^{2k+1}}\left(\cos{\frac{2\pi b}{m}}-\sum_{j=0}^{k}\frac{(-1)^j}{(2j)!}\left(\frac{2\pi b}{m}\right)^{2j}\right) +\sum_{j=1}^{k}\frac{(-1)^{k-j}}{(2k-2j)!}\left(\frac{2\pi}{m}\right)^{2k-2j}\zeta(2j+1,b+1) \\-\frac{(-1)^k}{(2k)!}\left(\frac{2\pi}{m}\right)^{2k}H(b)+\frac{(-1)^k}{(2k)!}\left(\frac{2\pi}{m}\right)^{2k}\log{\abs{m}}\\-\frac{(-1)^k}{2(2k)!}\left(\frac{2\pi}{m}\right)^{2k+1}\int_{0}^{1}(1-u)^{2k}\cos{\frac{2\pi bu}{m}}\cot{\frac{\pi u}{m}}-m(1-u)\cot{\pi u}\,du \end{multline} \endgroup \subsection{The limit of $S^m_{2k+1}(b,n)$} In the case of $S^m_{2k+1}(b,n)$, regardless of integer or half-integer we have: \begingroup \small \begin{multline} \nonumber \lim_{n\to\infty}S^m_{2k+1}(b,n)=-\frac{1}{2b^{2k+1}}\left(\sin{\frac{2\pi b}{m}}-\sum_{j=0}^{k-1}\frac{(-1)^j}{(2j+1)!}\left(\frac{2\pi b}{m}\right)^{2j+1}\right) +\sum_{j=1}^{k}\frac{(-1)^{k-j}}{(2k+1-2j)!}\left(\frac{2\pi}{m}\right)^{2k+1-2j}\zeta(2j,b+1)\\ +\lim_{n\to\infty}\frac{(-1)^k}{2(2k)!}\left(\frac{2\pi}{m}\right)^{2k+1}\int_{0}^{1}(1-u)^{2k}\left(\sin{\frac{2\pi(n+b)u}{m}}-\sin{\frac{2\pi b u}{m}}\right)\cot{\frac{\pi u}{m}}\,du \end{multline} \endgroup\\ \indent This one is much simpler and we can easily deduce the limit of the integral by means of the theorem 1, without even having to expand the sine in the integrand. Thus, for all integer $k\ge 0$ and real $\abs{m}\ge 1$: \begingroup \small \begin{multline} \nonumber \sum_{j=1}^{\infty}\frac{\sin{\frac{2\pi(j+b)}{m}}}{(j+b)^{2k+1}}=-\frac{1}{2b^{2k+1}}\left(\sin{\frac{2\pi b}{m}}-\sum_{j=0}^{k-1}\frac{(-1)^j}{(2j+1)!}\left(\frac{2\pi b}{m}\right)^{2j+1}\right) +\sum_{j=1}^{k}\frac{(-1)^{k-j}}{(2k+1-2j)!}\left(\frac{2\pi}{m}\right)^{2k+1-2j}\zeta(2j,b+1)\\ +\frac{(-1)^k\abs{m}}{4(2k)!}\left(\frac{2\pi}{m}\right)^{2k+1}-\frac{(-1)^k}{2(2k)!}\left(\frac{2\pi}{m}\right)^{2k+1}\int_{0}^{1}(1-u)^{2k}\sin{\frac{2\pi b u}{m}}\cot{\frac{\pi u}{m}}\,du \end{multline} \endgroup \subsection{$C^m_{2k}(b)$ and $S^m_{2k}(b)$} \indent The next two formulae, $C^m_{2k}(b)$ and $S^m_{2k}(b)$, are analogs and don't require further explanations. 
\begingroup \small \begin{multline} \nonumber \sum_{j=1}^{\infty}\frac{\cos{\frac{2\pi(j+b)}{m}}}{(j+b)^{2k}}=-\frac{1}{2b^{2k}}\left(\cos{\frac{2\pi b}{m}}-\sum_{j=0}^{k-1}\frac{(-1)^j}{(2j)!}\left(\frac{2\pi b}{m}\right)^{2j}\right) +\sum_{j=1}^{k}\frac{(-1)^{k-j}}{(2k-2j)!}\left(\frac{2\pi}{m}\right)^{2k-2j}\zeta(2j,b+1)\\ +\frac{(-1)^k\abs{m}}{4(2k-1)!}\left(\frac{2\pi}{m}\right)^{2k}-\frac{(-1)^k}{2(2k-1)!}\left(\frac{2\pi}{m}\right)^{2k}\int_{0}^{1}(1-u)^{2k-1}\sin{\frac{2\pi b u}{m}}\cot{\frac{\pi u}{m}}\,du \end{multline} \endgroup \begingroup \small \begin{multline} \nonumber \sum_{j=1}^{\infty}\frac{\sin{\frac{2\pi(j+b)}{m}}}{(j+b)^{2k}}=-\frac{1}{2b^{2k}}\left(\sin{\frac{2\pi b}{m}}-\sum_{j=0}^{k-2}\frac{(-1)^j}{(2j+1)!}\left(\frac{2\pi b}{m}\right)^{2j+1}\right) -\sum_{j=1}^{k-1}\frac{(-1)^{k-j}}{(2k-1-2j)!}\left(\frac{2\pi}{m}\right)^{2k-1-2j}\zeta(2j+1,b+1) \\-\frac{(-1)^k\pi}{2(2k-1)!}\left(\frac{2\pi}{m}\right)^{2k-1}\left(\cot{2\pi b}+\csc{2\pi b}\right)-\frac{(-1)^k\log{\abs{m}}}{(2k-1)!}\left(\frac{2\pi}{m}\right)^{2k-1}\\+\frac{(-1)^k}{2(2k-1)!}\left(\frac{2\pi}{m}\right)^{2k}\int_{0}^{1}(1-u)^{2k-1}\cos{\frac{2\pi bu}{m}}\cot{\frac{\pi u}{m}}+m\left(-1+\frac{\sin{2\pi b u}}{\sin{2\pi b}}\right)\cot{\pi u}\,du \end{multline} \endgroup \section{Lerch's $\Phi$ at the positive integers} In this section we find out the values of the Lerch transcendent function, $\Phi(e^m,k,b)$, at the positive integers $k$. \subsection{Partial Lerch's $\Phi$ sums, $E^{m}_{k}(b,n)$} It's straightforward to derive an expression for the partial sums of Lerch's $\Phi$ function using the formulae from \secrefe{Final_1} and \secrefe{Final_2}. If $\bm{i}$ is the imaginary unit, we just make: \begin{equation} \nonumber E^{2\pi\bm{i}/m}_{k}(b,n)=\sum_{j=1}^{n}\frac{e^{2\pi\bm{i}(j+b)/m}}{(j+b)^{k}}=C^m_{k}(b,n)+\bm{i} S^m_{k}(b,n) \end{equation}\\ \indent Omitting the calculations and making a simple transformation ($m:=2\pi\bm{i}/m$) (to bring the variables into the domain of the real numbers, which are easier to understand), we can produce a single formula for both the odd and even powers: \begin{multline} \nonumber \sum _{j=1}^{n}\frac{e^{m(j+b)}}{(j+b)^{k}}=-\frac{e^{m b}}{2b^{k}}+\frac{e^{m(n+b)}}{2(n+b)^{k}}+\frac{1}{2b^{k}}\sum_{j=0}^{k}\frac{(m b)^j}{j!}-\frac{1}{2(n+b)^{k}}\sum_{j=0}^{k}\frac{(m (n+b))^j}{j!}\\+\sum_{j=1}^{k}\frac{m^{k-j}}{(k-j)!}HP_{j}(n)+\frac{m^{k}}{2(k-1)!}\int_0^1(1-u)^{k-1}\left(e^{m(n+b) u}-e^{m b u}\right)\coth{\frac{m u}{2}}\,du \end{multline}\\ \indent From this new equation, it's easy to see that as $n$ goes to infinity, the sum on the left-hand side converges only if $\Re{(m)}<0$. However, we can obtain an analytic continuation for this sum, by removing the second term on the right-hand side, which explodes out to infinity if $\Re{(m)}>0$. Perhaps not surprisingly, this analytic continuation coincides with the Lerch $\Phi$ function. 
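Since the right-hand side of the partial-sum formula involves only elementary functions, finite sums and a one-dimensional integral, it can be compared against the left-hand side directly. A minimal \texttt{mpmath} sketch, assuming $HP_{j}(n)=\sum_{i=1}^{n}(i+b)^{-j}$ (the harmonic-type partial sums used above) and one arbitrary parameter choice:
\begin{verbatim}
# Sketch: compare both sides of the partial-sum formula for
#   E_k^m(b, n) = sum_{j=1..n} e^{m(j+b)} / (j+b)^k
# at one arbitrary parameter choice.  HP_j(n) = sum_{i=1..n} 1/(i+b)^j.
from mpmath import mp, exp, coth, quad, factorial

mp.dps = 30
k, n, b, m = 3, 50, mp.mpf('0.25'), mp.mpc(-0.4, 1.3)

lhs = sum(exp(m*(j + b))/(j + b)**k for j in range(1, n + 1))

HP = lambda j: sum(1/(i + b)**j for i in range(1, n + 1))
T  = lambda x: sum(x**j/factorial(j) for j in range(k + 1))   # truncated exp series

I = quad(lambda u: (1 - u)**(k - 1)*(exp(m*(n + b)*u) - exp(m*b*u))*coth(m*u/2), [0, 1])

rhs = (-exp(m*b)/(2*b**k) + exp(m*(n + b))/(2*(n + b)**k)
       + T(m*b)/(2*b**k) - T(m*(n + b))/(2*(n + b)**k)
       + sum(m**(k - j)/factorial(k - j)*HP(j) for j in range(1, k + 1))
       + m**k/(2*factorial(k - 1))*I)

print(abs(lhs - rhs))   # should be tiny, limited only by the working precision
\end{verbatim}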
\subsection{Partial polylogarithm sums, $E^{m}_{k}(0,n)$} \label{part} \indent When $b=0$, we have one interesting particular case: \begin{multline} \label{eq:ab} \sum _{j=1}^n \frac{e^{m j}}{j^{k}}=\frac{e^{m n}}{2n^{k}}-\frac{1}{2n^{k}}\sum_{j=0}^{k}\frac{(m n)^j}{j!}+\sum_{j=1}^{k}\frac{m^{k-j}}{(k-j)!}H_{j}(n)\\ +\frac{m^{k}}{2(k-1)!}\int_0^1(1-u)^{k-1}\left(e^{m n u}-1\right)\coth{\frac{m u}{2}}\,du \end{multline}\\ \indent To obtain this expression, we take the limit as $b$ tends to 0: \begin{equation} \nonumber \lim_{b\to 0}-\frac{e^{m b}}{2b^{k}}+\frac{1}{2b^{k}}\sum_{j=0}^{k}\frac{(m b)^j}{j!}=0 \end{equation} \subsection{Lerch's $\Phi$} \label{Final} The limits we found in section \secrefe{new_lim} allow us to find the below infinite sum: \begin{equation} \nonumber \sum_{j=1}^{\infty}\frac{e^{\bm{i} 2\pi(j+b)/m}}{(j+b)^{k}}=\lim_{n\to\infty}C^m_{k}(b,n)+\bm{i} S^m_{k}(b,n) \end{equation} \subsubsection{Non-integer $2b$} After we carry out all the algebraic calculations, we find that for all integer $k \ge 1$ and all complex $m$ (except $m$ such that $\Re{(m)}>=0$ and $\abs{\Im{(m)}}>2\pi$): \begin{multline} \nonumber \sum _{j=1}^{\infty}\frac{e^{m(j+b)}}{(j+b)^{k}}=-\frac{1}{2b^{k}}\left(e^{m b}-\sum_{j=0}^{k-2}\frac{(m b)^j}{j!}\right)+\sum_{j=2}^{k}\frac{m^{k-j}}{(k-j)!}\zeta(j,b+1)\\+\frac{\pi\,m^k}{2(k-1)!}\cot{\pi b}-\frac{m^{k-1}}{(k-1)!}\log{\left(-\frac{m}{2\pi}\right)}\\-\frac{m^{k}}{2(k-1)!}\int_0^1(1-u)^{k-1}e^{m\,b\,u}\coth{\frac{m u}{2}}+\frac{2\pi}{m}\left(-1+\frac{\sin{2\pi b u}}{\sin{2\pi b}}\right)\cot{\pi u}\,du \end{multline}\\ \indent The infinite sum on the left-hand side converges whenever $\Re{(m)}<0$, whereas the expression on the right-hand size, $E^{m}_{k}(b)$, is well defined always, except when $2b$ is an integer. At $m=0$, although improper, the expression has a limit.\\ This sum is related to the Lerch function by the below relation: \begin{equation} \nonumber \Phi(e^m,k,b)=\frac{1}{b^k}+e^{-m\,b}\sum _{j=1}^{\infty}\frac{e^{m(j+b)}}{(j+b)^{k}} \text{} \end{equation} \subsubsection{Half-integer $b$} For half-integer $b$, the formula is only slightly different. For all integer $k \ge 1$ and all complex $m$ (except $m$ such that $\Re{(m)}>=0$ and $\abs{\Im{(m)}}>2\pi$): \begin{multline} \nonumber \sum _{j=1}^{\infty}\frac{e^{m(j+b)}}{(j+b)^{k}}=-\frac{1}{2b^{k}}\left(e^{m b}-\sum_{j=0}^{k-2}\frac{(m b)^j}{j!}\right)+\sum_{j=2}^{k}\frac{m^{k-j}}{(k-j)!}\zeta(j,b+1)\\-\frac{m^{k-1}}{(k-1)!}\log{\left(-\frac{m}{\pi}\right)}-\frac{m^{k}}{2(k-1)!}\int_0^1(1-u)^{k-1}e^{m\,b\,u}\coth{\frac{m u}{2}}-\frac{\pi}{m}\cos{\pi b u}\cot{\frac{\pi u}{2}}\,du \end{multline} \subsubsection{Integer $b$} When $b$ is a positive integer, $E^{m}_{k}(b)$ becomes an incomplete polylogarithm series, which we cover next. Therefore it's very simple to derive its formula, we just need to subtract the missing part from the full polylogarithm. 
A similar reasoning is used if $b$ is a negative integer.\\ Nonetheless, the formula when $b$ is a positive integer is: \begin{multline} \nonumber \sum _{j=1}^{\infty}\frac{e^{m(j+b)}}{(j+b)^{k}}=-\frac{1}{2b^{k}}\left(e^{m b}-\sum_{j=0}^{k-2}\frac{(m b)^j}{j!}\right)+\sum_{j=2}^{k}\frac{m^{k-j}}{(k-j)!}\zeta(j,b+1)\\-\frac{m^{k-1}}{(k-1)!}\log{\left(-\frac{m}{2\pi}\right)}-\frac{m^{k-1}}{(k-1)!}\left(H(b)-\frac{1}{2b}\right)\\-\frac{m^{k}}{2(k-1)!}\int_0^1(1-u)^{k-1}e^{m\,b\,u}\coth{\frac{m u}{2}}-\frac{2\pi}{m}(1-u)\cot{\pi u}\,du \end{multline} \subsection{The polylogarithm, $\mathrm{Li}_{k}(e^{m})$} The limit of $E^{m}_{k}(0,n)$ when $n$ tends to infinity is the limit of the expression we just found when $b$ tends to 0, and it relies on the following two notable limits: \begin{equation} \nonumber \lim_{b\to 0}-\frac{1}{2b^{k}}\left(e^{m b}-\sum_{j=0}^{k-2}\frac{(m b)^j}{j!}\right)+\frac{\pi\,m^k}{2(k-1)!}\cot{\pi b}=-\frac{m^k}{2\,k!} \text{, and } \lim_{b\to 0}\frac{\sin{2\pi b u}}{\sin{2\pi b}}=u \end{equation}\\ \indent Therefore, for all integer $k\ge 1$ and all complex $m$ (except $m$ such that $\Re{(m)}>=0$ and $\abs{\Im{(m)}}>2\pi$): \begin{multline} \nonumber \sum _{j=1}^{\infty} \frac{e^{m j}}{j^{k}}=-\frac{m^{k-1}}{(k-1)!}\log{\left(-\frac{m}{2\pi}\right)}+\sum_{\substack{j=0 \\ j\neq 1}}^{k}\frac{m^{k-j}}{(k-j)!}\zeta(j)\\ -\frac{m^{k}}{2(k-1)!}\int_{0}^{1}(1-u)^{k-1}\coth{\frac{m u}{2}}-\frac{2\pi}{m}(1-u)\cot{\pi u}\,du \end{multline}\\ \indent This infinite sum is known as the polylogarithm, $\mathrm{Li}_{k}(e^{m})$, and the formula on the right-hand side provides an analytic continuation for it for when $\Re{(m)}>0$.\\ Note how the first limit fit perfectly into the second sum (together with the other $\zeta(j)$ values, except for the singularity). And it's easy to show that when $m$ goes to 0, the formula we found goes to $\zeta(k)$, if $k\ge 2$.\\ The formula we found for $\mathrm{Li}_{k}(e^{m})$ allows us to deduce the following power series for $e^m$: \begin{equation} \nonumber \lim_{k\to\infty} \sum_{\substack{j=2}}^{k}\frac{m^{k-j}}{(k-j)!}\zeta(j)=e^m \end{equation} \subsection{The Hurwitz zeta function, $\zeta(-k,b)$} The literature teaches us that the Hurwitz zeta function is related to the polylogarithm function by means of a relatively simple relation: \begin{equation} \nonumber \frac{(2\pi)^k}{(k-1)!}\zeta(1-k,b)=\bm{i}^{-k}\,\mathrm{Li}_{k}(e^{2\pi\bm{i}\,b})+\bm{i}^{k}\,\mathrm{Li}_{k}(e^{-2\pi\bm{i}\,b}) \text{,} \end{equation}\\ \noindent which holds roughly speaking for $\abs{\Re{(b)}}\le 1$.\\ Since the formula we have for $\mathrm{Li}_{k}(e^{m})$ holds at the positive integers $k$, this relation allows us to obtain a formula for $\zeta(-k,b)$ that holds at the negative integers $-k$. Without showing the simple but long calculations involved, we conclude that despite the constraints of the initial relation, the below formula holds for every $b$: \begin{equation} \nonumber \zeta(-k,b)=\frac{b^k}{2}+2\,k!\,b^{k+1}\sum_{j=0}^{\floor{(k+1)/2}}\frac{(-1)^{j}(2\pi\,b)^{-2j}\zeta(2j)}{(k+1-2j)!}=-\frac{B_{k+1}(b)}{k+1} \text{,} \end{equation}\\ \noindent where $B_{k+1}(b)$ are the Bernoulli polynomials, whose relation with the Hurwitz zeta is also known from literature. So we conclude we found an alternative expression for said Bernoulli polynomials. \end{document}
\begin{document}
\special{papersize=8.5in,11in}
\setlength{\pdfpageheight}{\paperheight}
\setlength{\pdfpagewidth}{\paperwidth}

\CopyrightYear{2016}
\publicationrights{licensed}
\conferenceinfo{LICS '16,}{July 05 - 08, 2016, New York, NY, USA}
\copyrightdata{978-1-4503-4391-6/16/07}
\reprintprice{\$15.00}
\copyrightdoi{http://dx.doi.org/10.1145/2933575.2935315}
\preprintfooter{2-way Visibly Pushdown Automata \& Transducers}

\title{Two-Way Visibly Pushdown Automata and Transducers
\titlenote{Emmanuel Filiot is a research associate at FNRS. This work is supported by the ARC project \emph{Transform} (French speaking community of Belgium), the Belgian FNRS PDR project \emph{Flare}, and the French ANR project ExStream. This work has been carried out thanks to the support of the ARCHIMEDE Labex (ANR-11-LABX-0033), the A*MIDEX project (ANR-11-IDEX-0001-02) funded by the Investissements d'Avenir French Government program, managed by the French National Research Agency (ANR) and the PHC project VAST (35961QJ) funded by Campus France and WBI.}}

\authorinfo{Luc Dartois \and Emmanuel Filiot}
{Universit\'e Libre de Bruxelles, Belgium}
{[email protected], [email protected]}
\authorinfo{Pierre-Alain Reynier \and Jean-Marc Talbot}
{Aix-Marseille Universit\'e, CNRS, LIF UMR 7279, 13000, Marseille, France}
{[email protected]\\ [email protected]}

\maketitle

\begin{abstract}
\input abstract
\end{abstract}

\category{F.4.3}{Mathematical Logic and Formal Languages}{Formal Languages}
\keywords Transductions, Pushdown automata, Logic.

\input introduction
\input twovpa
\input twovpt
\input expressiveness
\input discussion
\input appendix

\bibliographystyleappendix{plainnat}
\bibliographyappendix{papers}
\end{document}
\begin{document}

{\large \bf Linearizability of Systems of Ordinary Differential Equations Obtained by Complex Symmetry Analysis}

\begin{center}
M. Safdar$^{a}$, Asghar Qadir$^{a}$, S. Ali$^{b,c}$\\
$^{a}$Center For Advanced Mathematics and Physics, National University of Sciences and Technology, Campus H-12, 44000, Islamabad, Pakistan\\
$^{b}$School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Campus H-12, 44000, Islamabad, Pakistan\\
$^{c}$Present Address: Department of Mathematics, Brock University, L2S3A1 Canada\\
[email protected], [email protected], sajid\[email protected]
\end{center}

{\bf Abstract}. Five equivalence classes had been found for systems of two second-order ordinary differential equations, transformable to linear equations (linearizable systems) by a change of variables \cite{sb}. An ``optimal (or simplest) canonical form'' of linear systems had been established to obtain the symmetry structure, namely with 5, 6, 7, 8 and 15 dimensional Lie algebras. For those systems that arise from a scalar complex second-order ordinary differential equation, treated as a pair of real ordinary differential equations, a ``reduced optimal canonical form'' is obtained. This form yields three of the five equivalence classes of linearizable systems of two dimensions. We show that there exist $6$, $7$ and $15$-dimensional algebras for these systems and illustrate our results with examples.\\
{\bf Keywords:} Canonical forms, complex symmetry algebra, equivalence classes, linearizability.

\section{Introduction}
Lie used algebraic symmetry properties of differential equations to extract their solutions \cite{lie1, lie, lie2, lie3}. One method developed was to transform the equation to linear form by changing the dependent and independent variables invertibly. Such transformations are called \emph{point transformations} and the transformed equations are said to be \emph{linearized}. Equations that can be so transformed are said to be {\it linearizable}. Lie proved that the necessary and sufficient condition for a scalar nonlinear ordinary differential equation (ODE) to be linearizable is that it must have eight Lie point symmetries. He exploited the fact that all scalar linear second-order ODEs are equivalent under point transformations \cite{sur}, i.e. every linearizable scalar second-order ODE is reducible to the free particle equation. While the situation is not so simple for scalar linear ODEs of order $n\geq3$, it was proved that there are three equivalence classes with $n+1$, $n+2$ or $n+4$ infinitesimal symmetry generators \cite{lea}. For linearization of systems of two nonlinear ODEs, we will first consider the equivalence of the corresponding linear systems under point transformations. Nonlinear systems of two second-order ODEs that are linearizable to systems of ODEs with constant coefficients were proved to have three equivalence classes \cite{gor}. They have $7$, $8$ or $15$-dimensional Lie algebras. This result was extended to those nonlinear systems which are equivalent to linear systems of ODEs with constant or variable coefficients \cite{sb}.
They obtained an ``optimal'' canonical form of the linear systems involving three parameters, whose specific choices yielded five equivalence classes, namely with $5$, $6$, $7$, $8$ or $15$-dimensional Lie algebras. Geometric methods were developed to transform nonlinear systems of second-order ODEs \cite{aa,mq1,mq2} to a system of the free particle equations by treating them as geodesic equations and then projecting those equations down from an $m\times m$ system to an $(m-1)\times (m-1)$ system. In this process the originally homogeneous quadratically semi-linear system in $m$ dimensions generically becomes a non-homogeneous, cubically semi-linear system in $(m-1)$ dimensions. When used for $m=2$ the Lie conditions for the scalar ODE are recovered precisely. The criterion for linearizability is simply that the manifold for the (projected) geodesic equations be flat. The symmetry algebra in this case is $sl(n+2,\mathbb{R})$ and hence the number of generators is $n^2+4n+3$. Thus for a system of two equations to be linearizable by this method it must have 15 generators.

A scalar complex ODE involves two real functions of two real variables, yielding a system of two partial differential equations (PDEs) \cite{saj, saj1}. By restricting the independent variable to be real we obtain a system of ODEs. Complex symmetry analysis (CSA) provides the symmetry algebra for systems of two ODEs with the help of the symmetry generators of the corresponding complex ODE. This is not a simple matter of doubling the generators for the scalar complex ODE. The inequivalence of these systems with the above mentioned systems obtained earlier (by geometric means) \cite{mq2} has been proved \cite{saf2}. \emph{Thus their symmetry structures are not the same}.

We prove that a general two-dimensional system of second-order ODEs corresponds to a scalar complex second-order ODE if the coefficients of the system satisfy Cauchy-Riemann equations (CR-equations). We provide the full symmetry algebra for the systems of ODEs that correspond to linearizable scalar complex ODEs. For this purpose we derive a \emph{reduced optimal canonical form} for linear systems obtainable from a complex linear equation. We prove that this form provides three equivalence classes of linearizable systems of two second-order ODEs, while there exist five linearizable classes \cite{sb} by real symmetry analysis. This difference arises due to the fact that in CSA we invoke equivalence of {\it scalar} second-order ODEs to obtain the reduced optimal form, while in real symmetry analysis equivalence of linear {\it systems} of two ODEs was used to derive their optimal form. The nonlinear systems transformable to one of the three equivalence classes we provide here are characterized by complex transformations of the form
\begin{eqnarray*}
T:(x,u(x))\rightarrow (\chi(x),U(x,u)).
\end{eqnarray*}
Indeed, these complex transformations generate these linearizable classes of two dimensional systems. Note that not all the complex linearizing transformations for scalar complex equations provide the corresponding real transformations for systems.

The plan of the paper is as follows. In the next section we present the preliminaries for determining the symmetry structures. The third section deals with the conditions derived for systems that can be obtained by CSA. In section four we obtain the reduced optimal canonical form for systems associated with complex linear ODEs.
The theory developed to classify linearizable systems of ODEs transformable to this reduced optimal form is given in the fifth section. Applications of the theory are given in the next section. The last section summarizes and discusses the work.

\section{Preliminaries}
The simplest form of a second-order equation has the maximal-dimensional algebra, $sl(3,\mathbb{R})$. To discuss the equivalence of systems of two linear second-order ODEs, we need to use the following result for the equivalence of a general system of $n$ linear homogeneous second-order ODEs with $2n^{2}+n$ arbitrary coefficients and some canonical forms that have fewer arbitrary coefficients \cite{wf}. Any system of $n$ second-order non-homogeneous linear ODEs
\begin{eqnarray}
\ddot{\textbf{u}}=\textbf{A} \dot{\textbf{u}}+\textbf{B} \textbf{u}+\textbf{c},\label{1}
\end{eqnarray}
can be mapped invertibly to one of the following forms
\begin{equation}
\ddot{\textbf{v}}=\textbf{C} \dot{\textbf{v}},\label{2}
\end{equation}
\begin{equation}
\ddot{\textbf{w}}=\textbf{D} \textbf{w},\label{3}
\end{equation}
where $\textbf{A}$, $\textbf{B}$, $\textbf{C}$, $\textbf{D}$ are $n\times n$ matrix functions, $\textbf{u}$, $\textbf{v}$, $\textbf{w}$, $\textbf{c}$ are vector functions and dot represents differentiation relative to the independent variable $t$. For a system of two second-order ODEs ($n=2$) there are a total of $10$ coefficients for the system represented by equation ($\ref{1}$). It is reducible to the first and second canonical forms, ($\ref{2}$) and ($\ref{3}$) respectively. Thus a system with $4$ arbitrary coefficients of the form
\begin{eqnarray}
\ddot{w_{1}}=d_{11}(t)w_{1}+d_{12}(t)w_{2},\nonumber\\
\ddot{w_{2}}=d_{21}(t)w_{1}+d_{22}(t)w_{2},\label{4}
\end{eqnarray}
can be obtained by using the equivalence of ($\ref{1}$) and the counterpart of the Laguerre-Forsyth second canonical form ($\ref{3}$). This result demonstrates the equivalence of systems of two ODEs having $10$ and $4$ arbitrary coefficients respectively. The number of arbitrary coefficients can be further reduced to three by the change of variables \cite{sb}
\begin{eqnarray}
\tilde{y}=w_{1}/\rho(t),\hspace{2mm}\tilde{z}=w_{2}/\rho(t), \hspace{2mm}x=\int^{t}\rho^{-2}(s)ds,\label{5}
\end{eqnarray}
where $\rho$ satisfies
\begin{equation}
\rho^{\prime\prime}-\frac{d_{11}+d_{22}}{2}\rho=0,\label{6}
\end{equation}
to the linear system
\begin{eqnarray}
\tilde{y}^{\prime\prime}=\tilde{d}_{11}(x)\tilde{y}+\tilde{d}_{12}(x)\tilde{z},\nonumber\\
\tilde{z}^{\prime\prime}=\tilde{d}_{21}(x)\tilde{y}-\tilde{d}_{11}(x)\tilde{z},\label{7}
\end{eqnarray}
where
\begin{eqnarray}
\tilde{d}_{11}=\frac{\rho^{3}(d_{11}-d_{22})}{2},\hspace{2mm}\tilde{d}_{12}=\rho^{3}d_{12},\hspace{2mm}\tilde{d}_{21}=\rho^{3}d_{21}.\label{8}
\end{eqnarray}
This procedure of reduction of arbitrary coefficients for linearizable systems simplifies the classification problem enormously.
System ($\ref{7}$) is called the \emph{optimal canonical form} for linear systems of two second-order ODEs, as it has the fewest arbitrary coefficients, namely three.

\section{Systems of ODEs obtainable by CSA}
Following the classical Lie procedure, one uses point transformations
\begin{equation}
X=X(x,y,z),\hspace{2mm}Y=Y(x,y,z),\hspace{2mm}Z=Z(x,y,z),\label{9}
\end{equation}
to map the general linearizable system of two second-order ODEs \cite{jm}, which is (at most) cubically semi-linear in both the dependent variables,
\begin{eqnarray}
y^{\prime\prime}=\omega_{1}(x,y,z,y^{\prime},z^{\prime}),\nonumber \\
z^{\prime\prime}=\omega_{2}(x,y,z,y^{\prime},z^{\prime}),\label{10}
\end{eqnarray}
where prime denotes differentiation relative to $x$, to the simplest form
\begin{equation}
Y^{\prime\prime}=0,\hspace{2mm}Z^{\prime\prime}=0,\label{11}
\end{equation}
where the prime now denotes differentiation with respect to $X$ and the mappings ($\ref{9}$) are invertible. The derivatives transform as
\begin{eqnarray}
Y^{\prime}=\frac{D_{x}(Y)}{D_{x}(X)}=F_{1}(x,y,z,y^{\prime},z^{\prime}),\nonumber \\
Z^{\prime}=\frac{D_{x}(Z)}{D_{x}(X)}=F_{2}(x,y,z,y^{\prime},z^{\prime}),\label{12}
\end{eqnarray}
and
\begin{equation}
Y^{\prime\prime}=\frac{D_{x}(F_{1})}{D_{x}(X)},~~ Z^{\prime\prime}=\frac{D_{x}(F_{2})}{D_{x}(X)},\label{13}
\end{equation}
where $D_{x}$ is the total derivative operator. This yields
\begin{equation}
\begin{tabular}{l}
$y^{\prime\prime}+\alpha_{11}y^{\prime 3}+\alpha_{12}y^{\prime 2}z^{\prime}+\alpha_{13}y^{\prime}z^{\prime 2}+\alpha_{14}z^{\prime 3}+\beta_{11}y^{\prime 2}+\beta_{12}y^{\prime}z^{\prime}+\beta_{13}z^{\prime 2}$ \\
$+\gamma_{11}y^{\prime}+\gamma_{12}z^{\prime}+\delta_{1}=0,$ \\ \\
$z^{\prime\prime}+\alpha_{21}y^{\prime 3}+\alpha_{22}y^{\prime 2}z^{\prime}+\alpha_{23}y^{\prime}z^{\prime 2}+\alpha_{24}z^{\prime 3}+\beta_{21}y^{\prime 2}+\beta_{22}y^{\prime}z^{\prime}+\beta_{23}z^{\prime 2}$ \\
$+\gamma_{21}y^{\prime}+\gamma_{22}z^{\prime}+\delta_{2}=0,$\label{14}
\end{tabular}
\end{equation}
the coefficients being functions of the independent and dependent variables.
System ($\ref{14}$) is the most general candidate for two second-order ODEs that may be linearizable. Another candidate for the linearizability of two dimensional systems, obtainable from the most general form of a complex linearizable equation
\begin{eqnarray}
u^{\prime\prime}+E_{3}(x,u)u^{\prime 3}+E_{2}(x,u)u^{\prime 2}+E_{1}(x,u)u^{\prime}+E_{0}(x,u)=0,\label{15}
\end{eqnarray}
where $u$ is a complex function of the real independent variable $x$, is also cubically semi-linear, i.e. a system of the form
\begin{equation}
\begin{tabular}{l}
$y^{\prime\prime}+\bar\alpha_{11}y^{\prime 3}-3\bar\alpha_{12}y^{\prime 2}z^{\prime}-3\bar\alpha_{11}y^{\prime}z^{\prime 2}+\bar\alpha_{12}z^{\prime 3}+\bar\beta_{11}y^{\prime 2}-2\bar\beta_{12}y^{\prime}z^{\prime}-\bar\beta_{11}z^{\prime 2}$ \\
$+\bar\gamma_{11}y^{\prime}-\bar\gamma_{12}z^{\prime}+\bar\delta_{11}=0,$ \\ \\
$z^{\prime\prime}+\bar\alpha_{12}y^{\prime 3}+3\bar\alpha_{11}y^{\prime 2}z^{\prime}-3\bar\alpha_{12}y^{\prime}z^{\prime 2}-\bar\alpha_{11}z^{\prime 3}+\bar\beta_{12}y^{\prime 2}+2\bar\beta_{11}y^{\prime}z^{\prime}-\bar\beta_{12}z^{\prime 2}$ \\
$+\bar\gamma_{12}y^{\prime}+\bar\gamma_{11}z^{\prime}+\bar\delta_{12}=0,$\label{16}
\end{tabular}
\end{equation}
where the coefficients $\bar\alpha_{1i}$, $\bar\beta_{1i}$, $\bar\gamma_{1i}$ and $\bar\delta_{1i}$ for $i=1,2$ are functions of $x$, $y$ and $z$.
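As a quick check of the coefficient pattern in ($\ref{16}$), write $u=y+iz$ and $E_{3}=\bar\alpha_{11}+i\bar\alpha_{12}$; the real part of the cubic term of ($\ref{15}$) is
\begin{eqnarray*}
\Re\left[(\bar\alpha_{11}+i\bar\alpha_{12})(y^{\prime}+iz^{\prime})^{3}\right]=\bar\alpha_{11}y^{\prime 3}-3\bar\alpha_{12}y^{\prime 2}z^{\prime}-3\bar\alpha_{11}y^{\prime}z^{\prime 2}+\bar\alpha_{12}z^{\prime 3},
\end{eqnarray*}
which reproduces the cubic terms of the first equation of ($\ref{16}$); the imaginary part gives those of the second.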
Clearly, the system ($\ref{16}$) corresponds to ($\ref{15}$) if the coefficients $\bar\alpha_{1i},~\bar\beta_{1i},~\bar\gamma_{1i}$ and $\bar\delta_{1i}$ satisfy the CR-equations, i.e. $\alpha_{11,y}=\alpha_{12,z},~\alpha_{12,y}=-\alpha_{11,z}$, and vice versa. It is obvious as ($\ref{15}$) generates a system by breaking the complex coefficients $E_{j}$, for $j=0,1,2,3$, into real and imaginary parts
\begin{eqnarray}
E_{3}=\bar\alpha_{11}+i\bar\alpha_{12},~~ E_{2}=\bar\beta_{11}+i\bar\beta_{12},~~ E_{1}=\bar\gamma_{11}+i\bar\gamma_{12},~~ E_{0}=\bar\delta_{11}+i\bar\delta_{12},\label{17}
\end{eqnarray}
where all the functions are analytic. Hence we can state the following theorem.\\
\newline \textbf{Theorem 1.} \textit{A general two dimensional system of second-order ODEs} ($\ref{10}$) \textit{corresponds to a complex equation}
\begin{eqnarray}
u^{\prime\prime}=\omega(x,u,u^{\prime}),
\end{eqnarray}
\textit{if and only if $\omega_{1}$ and $\omega_{2}$ satisfy the CR-equations}
\begin{eqnarray}
\omega_{1,y}=\omega_{2,z},~~\omega_{1,z}=-\omega_{2,y},\nonumber\\
\omega_{1,y^{\prime}}=\omega_{2,z^{\prime}},~~\omega_{1,z^{\prime}}=-\omega_{2,y^{\prime}}.
\end{eqnarray}
\\ For the correspondence of both the cubic forms ($\ref{14}$) and ($\ref{16}$) of two dimensional systems we state the following theorem.\\
\newline \textbf{Theorem 2.} \textit{A system of the form} ($\ref{14}$) \textit{corresponds to} ($\ref{16}$) \textit{if and only if the coefficients $\alpha_{ij}$, $\beta_{ik}$, $\gamma_{il}$ and $\delta_{i}$ satisfy the following conditions
\begin{eqnarray}
\alpha_{11}=-\frac{1}{3}\alpha_{13}=\frac{1}{3}\alpha_{22}=-\alpha_{24},\nonumber\\
-\frac{1}{3}\alpha_{12}=\alpha_{14}=\alpha_{21}=-\frac{1}{3}\alpha_{23},\nonumber\\
\beta_{11}=\frac{1}{2}\beta_{22}=-\beta_{13},\nonumber\\
\beta_{21}=-\frac{1}{2}\beta_{12}=-\beta_{23},\nonumber\\
\gamma_{11}=\gamma_{22},\quad \gamma_{21}=-\gamma_{12},\label{18}
\end{eqnarray}
where $i=l=1,2$, $j=1,...,4$ and $k=1,2,3$.}\\
\newline \textbf{Proof.} It can be trivially proved if we rewrite the above equations as $\bar\alpha_{1i}$, $\bar\beta_{1i}$ and $\bar\gamma_{1i}$, respectively. These coefficients correspond to complex coefficients of ($\ref{15}$) if and only if they satisfy the CR-equations.\\
\newline Thus Theorems 1 and 2 identify those two dimensional systems which are obtainable from complex equations.

\section{Reduced optimal canonical forms}
The simplest forms for linear systems of two second-order ODEs corresponding to complex scalar ODEs can be established by invoking the equivalence of scalar second-order linear ODEs. Consider a general linear scalar complex second-order ODE
\begin{equation}
u^{\prime\prime}=\zeta_{1}(x)u^{\prime}+\zeta_{2}(x)u+\zeta_{3}(x),\label{21}
\end{equation}
where prime denotes differentiation relative to $x$ and $u(x)=y(x)+iz(x)$ is a complex function of the real independent variable $x$. As all the linear scalar second-order ODEs are equivalent, equation ($\ref{21}$) is equivalent to the following scalar second-order complex ODEs
\begin{equation}
u^{\prime\prime}=\zeta_{4}(x)u^{\prime},\label{22}
\end{equation}
\begin{equation}
u^{\prime\prime}=\zeta_{5}(x)u,\label{23}
\end{equation}
where all the three forms ($\ref{21}$), ($\ref{22}$) and ($\ref{23}$) are transformable to each other. Indeed these three forms are reducible to the free particle equation. These three complex scalar linear ODEs belong to the same equivalence class, i.e. all have eight Lie point symmetry generators.
In this paper we prove that the systems obtainable by these forms using CSA have more than one equivalence class. To extract systems of two linear ODEs from ($\ref{22}$) and ($\ref{23}$) we put $\zeta_{4}(x)=\alpha_{1}(x)+i\alpha_{2}(x)$ and $\zeta_{5}(x)=\alpha_{3}(x)+i\alpha_{4}(x)$ to obtain two linear forms of system of two linear second-order ODEs
\begin{eqnarray}
y^{\prime\prime}=\alpha_{1}(x)y^{\prime}-\alpha_{2}(x)z^{\prime},\nonumber\\
z^{\prime\prime}=\alpha_{2}(x)y^{\prime}+\alpha_{1}(x)z^{\prime},\label{24}
\end{eqnarray}
and
\begin{eqnarray}
y^{\prime\prime}=\alpha_{3}(x)y-\alpha_{4}(x)z,\nonumber\\
z^{\prime\prime}=\alpha_{4}(x)y+\alpha_{3}(x)z,\label{25}
\end{eqnarray}
thus we state the following theorem.\\
\newline \textbf{Theorem 3.} \textit{If a system of two second-order ODEs is linearizable via invertible complex point transformations then it can be mapped to one of the two forms} ($\ref{24}$) \textit{or} ($\ref{25}$).

Notice that here we have \emph{only two arbitrary coefficients} in both the linear forms, while the minimum number obtained before was three, i.e. a system of the form ($\ref{7}$). The reason we can reduce further is that we are dealing with the special classes of linear systems of ODEs that correspond to the scalar complex ODEs. In fact ($\ref{25}$) can be reduced further by the change of variables
\begin{eqnarray}
Y=y/\rho(t),\hspace{2mm}Z=z/\rho(t),\hspace{2mm} x=\int^{t}\rho^{-2}(s)ds,\label{26}
\end{eqnarray}
where $\rho$ satisfies
\begin{equation}
\rho^{\prime\prime}-\alpha_{3}\rho=0,\label{27}
\end{equation}
to
\begin{eqnarray}
Y^{\prime\prime}=-\beta(x)Z,~~~~ Z^{\prime\prime}=\beta(x)Y,\label{28}
\end{eqnarray}
where $\beta=\rho^{3}\alpha_{4}$. We state this result in the form of a theorem.\\
\newline \textbf{Theorem 4.} \textit{Any linear system of two second-order ODEs of the form} ($\ref{25}$) \textit{with two arbitrary coefficients is transformable to a simplest system of two linear ODEs} ($\ref{28}$) \textit{with one arbitrary coefficient via real point transformations} ($\ref{26}$) \textit{and} ($\ref{27}$).

Equation ($\ref{28}$) is the \emph{reduced optimal canonical form} for systems associated with complex ODEs, with just one coefficient which is an arbitrary function of $x$.
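As a consistency check, writing $u=y+iz$ in ($\ref{22}$) and ($\ref{23}$) and separating real and imaginary parts gives
\begin{eqnarray*}
y^{\prime\prime}+iz^{\prime\prime}=(\alpha_{1}+i\alpha_{2})(y^{\prime}+iz^{\prime})=(\alpha_{1}y^{\prime}-\alpha_{2}z^{\prime})+i(\alpha_{2}y^{\prime}+\alpha_{1}z^{\prime}),
\end{eqnarray*}
which is precisely ($\ref{24}$), and similarly $(\alpha_{3}+i\alpha_{4})(y+iz)=(\alpha_{3}y-\alpha_{4}z)+i(\alpha_{4}y+\alpha_{3}z)$, which is ($\ref{25}$).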
The equivalence of systems ($\ref{24}$) and ($\ref{25}$) can be established via invertible point transformations, so we state the following theorem.\\
\newline \textbf{Theorem 5.} \textit{Two linear forms of the systems of two second-order ODEs} ($\ref{24}$) \textit{and} ($\ref{25}$) \textit{are equivalent via invertible point transformations}
\begin{eqnarray}
y=M_{1}(x)y_{1}-M_{2}(x)y_{2}+y^{*},\nonumber\\
z=M_{1}(x)y_{2}+M_{2}(x)y_{1}+z^{*},\label{29}
\end{eqnarray}
\textit{of the dependent variables only, where} $M_{1}(x)$, $M_{2}(x)$ \textit{are two linearly independent solutions of}
\begin{eqnarray}
\alpha_{1}M_{1}-\alpha_{2}M_{2}=2M^{\prime}_{1},\nonumber\\
\alpha_{1}M_{2}+\alpha_{2}M_{1}=2M^{\prime}_{2},\label{30}
\end{eqnarray}
\textit{and} $y^{*}$, $z^{*}$ \textit{are the particular solutions of} ($\ref{24}$).\\
\newline {\bf Proof.} Differentiating the set of equations ($\ref{30}$) and using the result in the linear form ($\ref{24}$), routine calculations show that ($\ref{24}$) can be mapped to ($\ref{25}$) where
\begin{eqnarray}
\alpha_{3}(x)=\frac{1}{M^{2}_{1}+M^{2}_{2}}[M_{1}(\alpha_{1}M^{\prime}_{1}-\alpha_{2}M^{\prime}_{2}-M^{\prime\prime}_{1})+M_{2}(\alpha_{1}M^{\prime}_{2}+\alpha_{2}M^{\prime}_{1}-M^{\prime\prime}_{2})],\nonumber\\
\alpha_{4}(x)=\frac{1}{M^{2}_{1}+M^{2}_{2}}[M_{1}(\alpha_{1}M^{\prime}_{2}+\alpha_{2}M^{\prime}_{1}-M^{\prime\prime}_{2})-M_{2}(\alpha_{1}M^{\prime}_{1}-\alpha_{2}M^{\prime}_{2}-M^{\prime\prime}_{1})].\nonumber\\
\label{31}
\end{eqnarray}
Thus the linear form ($\ref{24}$) is reducible to ($\ref{28}$).\\
\newline \textbf{Remark 1.} Any nonlinear system of two second-order ODEs that is linearizable by complex methods can be mapped invertibly to a system of the form ($\ref{28}$) with one coefficient which is an arbitrary function of the independent variable.
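In complex notation this computation can be read as substituting $u=Mw+u^{*}$, with $M=M_{1}+iM_{2}$ and $w=y_{1}+iy_{2}$, into $u^{\prime\prime}=\zeta_{4}u^{\prime}$ and using $2M^{\prime}=\zeta_{4}M$, which leaves $w^{\prime\prime}=\zeta_{5}w$ with $\zeta_{5}=(\zeta_{4}M^{\prime}-M^{\prime\prime})/M$. A minimal SymPy sketch, treating the derivatives as independent real symbols, checks that ($\ref{31}$) is exactly the real/imaginary decomposition of this $\zeta_{5}$:
\begin{verbatim}
# Sketch: check that eq. (31) gives the real and imaginary parts of
#   zeta5 = (zeta4*M' - M'')/M,  M = M1 + i*M2,  zeta4 = alpha1 + i*alpha2.
# Derivatives are modelled as independent real symbols (M1p, M1pp, ...).
import sympy as sp

M1, M2, M1p, M2p, M1pp, M2pp, a1, a2 = sp.symbols(
    'M1 M2 M1p M2p M1pp M2pp alpha1 alpha2', real=True)

M, Mp, Mpp = M1 + sp.I*M2, M1p + sp.I*M2p, M1pp + sp.I*M2pp
zeta4 = a1 + sp.I*a2
zeta5 = (zeta4*Mp - Mpp)/M

den = M1**2 + M2**2
alpha3 = (M1*(a1*M1p - a2*M2p - M1pp) + M2*(a1*M2p + a2*M1p - M2pp))/den
alpha4 = (M1*(a1*M2p + a2*M1p - M2pp) - M2*(a1*M1p - a2*M2p - M1pp))/den

# zeta5 - (alpha3 + i*alpha4) should simplify to zero
print(sp.simplify(zeta5 - (alpha3 + sp.I*alpha4)))
\end{verbatim}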
\section{Symmetry structure of linear systems obtained by CSA}
To use the reduced canonical form \cite{hs} for deriving the symmetry structure of linearizable systems associated with the complex scalar linearizable ODEs, we obtain a system of PDEs whose solution provides the symmetry generators for the corresponding linearizable systems of two second-order ODEs.\\
\newline \textbf{Theorem 6.} \textit{Linearizable systems of two second-order ODEs reducible to the linear form} ($\ref{28}$) \textit{via invertible complex point transformations, have} $6$, $7$ \textit{or $15$-dimensional Lie point symmetry algebras.}\\
\newline \textbf{Proof.} The symmetry conditions provide the following set of PDEs for the system ($\ref{28}$)
\begin{eqnarray}
\xi_{xx}=\xi_{xy}=\xi_{yy}=0=\eta_{1,zz}=\eta_{2,yy},\label{32}\\
\eta_{1,yy}-2\xi_{xy}=\eta_{1,yz}-\xi_{xz}=\eta_{2,yz}-\xi_{xy}=\eta_{2,zz}-2\xi_{xz}=0,\label{33}\\
\xi_{xx}-2\eta_{1,xy}-3\beta(x)\xi_{,y}z+\beta(x)\xi_{,z}y=\eta_{1,xz}+\beta(x)\xi_{,z}z=0,\label{34}\\
\xi_{xx}-2\eta_{2,xz}+3\beta(x)\xi_{,z}y-\beta(x)\xi_{,y}z=\eta_{2,xy}-\beta(x)\xi_{,y}y=0,\label{35}\\
\eta_{1,xx}+\beta(x)(\eta_{1,z}y+2\xi_{,x}z-\eta_{1,y}z+\eta_{2})+\beta^{\prime}(x)z\xi=0,\label{36}\\
\eta_{2,xx}+\beta(x)(\eta_{2,z}y-2\xi_{,x}y-\eta_{2,y}z-\eta_{1})-\beta^{\prime}(x)y\xi=0.\label{37}
\end{eqnarray}
Equations ($\ref{34}$)-($\ref{37}$) involve an arbitrary function of the independent variable and its first derivatives.
Using equations ($\ref{32}$) and ($\ref{33}$) we have the following solution set
\begin{eqnarray}
\xi=\gamma_{1}(x)y+\gamma_{2}(x)z+\gamma_{3}(x),\nonumber\\
\eta_{1}=\gamma^{\prime}_{1}(x)y^{2}+\gamma^{\prime}_{2}(x)yz+\gamma_{4}(x)y+\gamma_{5}(x)z+\gamma_{6}(x),\nonumber\\
\eta_{2}=\gamma^{\prime}_{1}(x)yz+\gamma^{\prime}_{2}(x)z^{2}+\gamma_{7}(x)y+\gamma_{8}(x)z+\gamma_{9}(x).\label{38}
\end{eqnarray}
Using equations ($\ref{34}$) and ($\ref{35}$), we get
\begin{eqnarray}
\beta(x)\gamma_{1}(x)=0=\beta(x)\gamma_{2}(x).\label{39}
\end{eqnarray}
Now assuming $\beta(x)$ to be zero, a non-zero constant or an arbitrary function of $x$ will generate the following cases.\\
\newline\textbf{Case 1.1.} $\beta(x)=0$.\\
The set of determining equations ($\ref{32}$)-($\ref{37}$) will reduce to a trivial system of PDEs
\begin{eqnarray}
\eta_{1,xx}=\eta_{1,xz}=\eta_{1,zz}=0,\nonumber\\
\eta_{2,xx}=\eta_{2,xy}=\eta_{2,yy}=0,\nonumber\\
2\xi_{,xy}-\eta_{1,yy}=0=2\xi_{,xz}-\eta_{2,zz},\nonumber\\
\xi_{,xz}-\eta_{1,yz}=0=\xi_{,xy}-\eta_{2,yz},\nonumber\\
\xi_{,xx}-2\eta_{1,xy}=0=\xi_{,xx}-2\eta_{2,xz},\label{40}
\end{eqnarray}
which can be extracted classically for the system of free particle equations. Solving it we find a $15$-dimensional Lie point symmetry algebra.\\
\newline\textbf{Case 1.2.} $\beta(x)\neq0$.\\
Then ($\ref{39}$) implies $\gamma_{1}(x)=\gamma_{2}(x)=0$ and ($\ref{38}$) reduces to
\begin{eqnarray}
\xi=\gamma_{3}(x),\nonumber\\
\eta_{1}=(\frac{\gamma^{\prime}_{3}(x)}{2}+c_{3})y+c_{1}z+\gamma_{6}(x),\nonumber\\
\eta_{2}=c_{2}y+(\frac{\gamma^{\prime}_{3}(x)}{2}+c_{4})z+\gamma_{9}(x).\label{41}
\end{eqnarray}
Here two subcases arise.\\
\newline\textbf{Case 1.2.1.} $\beta(x)$ \emph{is a non-zero constant}.\\
As equations ($\ref{36}$) and ($\ref{37}$) involve the derivatives of $\beta(x)$, which will now be zero, equations ($\ref{34}$)-($\ref{37}$) and ($\ref{41}$) yield a $7$-dimensional Lie algebra. The explicit expressions of the symmetry generators involve trigonometric functions. But for a simple demonstration of the algorithm consider $\beta(x)=1$.
Now taking $\beta(x)$ to be zero, a non-zero constant, or an arbitrary function of $x$ generates the following cases.\\
\newline\textbf{Case 1.1.} $\beta(x)=0$.\\
The set of determining equations ($\ref{32}$)--($\ref{37}$) reduces to the system of PDEs
\begin{eqnarray}
\eta_{1,xx}=\eta_{1,xz}=\eta_{1,zz}=0,\nonumber\\
\eta_{2,xx}=\eta_{2,xy}=\eta_{2,yy}=0,\nonumber\\
2\xi_{,xy}-\eta_{1,yy}=0=2\xi_{,xz}-\eta_{2,zz},\nonumber\\
\xi_{,xz}-\eta_{1,yz}=0=\xi_{,xy}-\eta_{2,yz},\nonumber\\
\xi_{,xx}-2\eta_{1,xy}=0=\xi_{,xx}-2\eta_{2,xz},\label{40}
\end{eqnarray}
which is the set obtained classically for the system of free particle equations. Solving it we find a $15$-dimensional Lie point symmetry algebra.\\
\newline\textbf{Case 1.2.} $\beta(x)\neq0$.\\
Then ($\ref{39}$) implies $\gamma_{1}(x)=\gamma_{2}(x)=0$ and ($\ref{38}$) reduces to
\begin{eqnarray}
\xi=\gamma_{3}(x),\nonumber\\
\eta_{1}=(\frac{\gamma^{\prime}_{3}(x)}{2}+c_{3})y+c_{1}z+
\gamma_{6}(x),\nonumber\\
\eta_{2}=c_{2}y+(\frac{\gamma^{\prime}_{3}(x)}{2}+c_{4})z+
\gamma_{9}(x).\label{41}
\end{eqnarray}
Here two subcases arise.\\
\newline\textbf{Case 1.2.1.} $\beta(x)$ \emph{is a non-zero constant}.\\
Since equations ($\ref{36}$) and ($\ref{37}$) involve the derivative of $\beta(x)$, which is now zero, equations ($\ref{34}$)--($\ref{37}$) and ($\ref{41}$) yield a $7$-dimensional Lie algebra. The explicit expressions of the symmetry generators involve trigonometric functions; for a simple demonstration of the algorithm consider $\beta(x)=1$. The solution of the set of determining equations is
\begin{equation}
\xi=C_{1},\nonumber
\end{equation}
and
\begin{eqnarray}
\eta_{1}=C_{2}y+[-C_{4}e^{x/\sqrt{2}}-C_{3}e^{-x/\sqrt{2}}]
\sin({x/\sqrt{2}})+C_{6}e^{x/\sqrt{2}}\cos({x/\sqrt{2}})+ \nonumber\\
C_{5}e^{-x/\sqrt{2}}\cos({x/\sqrt{2}})+C_{7}z,\nonumber\\
\eta_{2}=[-C_{6}e^{x/\sqrt{2}}+C_{5}e^{-x/\sqrt{2}}]
\sin({x/\sqrt{2}})-C_{4}e^{x/\sqrt{2}}\cos({x/\sqrt{2}})-C_{2}z+ \nonumber\\
C_{3}e^{-x/\sqrt{2}}\cos({x/\sqrt{2}})+C_{7}y.\label{42}
\end{eqnarray}
This yields a $7$-dimensional symmetry algebra.\\
\newline\textbf{Case 1.2.2.1.} $\beta(x)=x^{-2}$, $x^{-4}$ or $(x+1)^{-4}$.\\
Equations ($\ref{34}$)--($\ref{37}$) and ($\ref{41}$) yield a $7$-dimensional Lie algebra. Thus $7$-dimensional algebras can also be associated with linear forms having variable coefficients, apart from the linear forms with constant coefficients.\\
\newline\textbf{Case 1.2.2.2.} $\beta(x)=x^{-1}$, $x^{2}$, $x^{2}\pm C_{0}$ or $e^{x}$.\\
Using equations ($\ref{34}$)--($\ref{37}$) and ($\ref{41}$), we arrive at a $6$-dimensional Lie point symmetry algebra. The explicit expressions involve special functions; e.g., for $\beta(x)=x^{-1}$, $x^{2}$, $x^{2}\pm C_{0}$ we get Bessel functions. Similarly, for $\beta(x)=e^{x}$ there are six symmetries, including the generators $y\partial_{y}-e^{x}z\partial_{z}$, $z\partial_{z}+e^{x}y\partial_{y}$; the remaining four generators come from the solution of an ODE of order four.\\
Thus there is only a $6$-, $7$- or $15$-dimensional algebra for linearizable systems of two second-order ODEs transformable to ($\ref{28}$) via invertible complex point transformations. We do not investigate the remaining two linear forms ($\ref{24}$) and ($\ref{25}$), because these are transformable to the system ($\ref{28}$), i.e., all these forms have the same symmetry structure. The linear forms providing $6$- or $7$-dimensional algebras here correspond to linear forms extractable from ($\ref{7}$) with a $6$- or $7$-dimensional algebra, respectively. Consider ($\ref{7}$) with all the coefficients taken to be non-zero constants, i.e., $\tilde{d}_{11}(x)=a_{0}$, $\tilde{d}_{12}(x)=b_{0}$ and $\tilde{d}_{21}(x)=c_{0}$, where
\begin{eqnarray}
a_{0}^{2}+b_{0}c_{0}\neq 0.\label{43}
\end{eqnarray}
This system provides seven symmetry generators. The linear form ($\ref{28}$) also provides a $7$-dimensional algebra with constant coefficients satisfying ($\ref{43}$), while an $8$-dimensional symmetry algebra was extracted \cite{sb} by assuming
\begin{eqnarray}
a_{0}^{2}+b_{0}c_{0}=0.\label{44}
\end{eqnarray}
Such linear forms cannot be obtained from ($\ref{28}$).
These two examples explain why a $7$-dimensional algebra can be obtained from ($\ref{28}$), but a linear form with an $8$-dimensional algebra is not obtainable from it.\\
\newline
To prove these observations, consider arbitrary point transformations of the form
\begin{eqnarray}
\tilde{y}=a(x)y+b(x)z,~~~\tilde{z}=c(x)y+d(x)z.\label{45}
\end{eqnarray}
\newline\textbf{Case a.} If $a(x)=a_{0}$, $b(x)=b_{0}$, $c(x)=c_{0}$ and $d(x)=d_{0}$ are constants then ($\ref{45}$) implies
\begin{eqnarray}
\tilde{y}^{\prime\prime}=a_{0}y^{\prime\prime}+b_{0}z^{\prime\prime},\nonumber\\
\tilde{z}^{\prime\prime}=c_{0}y^{\prime\prime}+d_{0}z^{\prime\prime}.\label{46}
\end{eqnarray}
Using ($\ref{7}$) and ($\ref{25}$) in the above equation we find
\begin{eqnarray}
(a_{0}d_{0}-b_{0}c_{0})y^{\prime\prime}=((a_{0}d_{0}+b_{0}c_{0})\tilde{d}_{11}(x)+c_{0}d_{0}\tilde{d}_{12}(x)-a_{0}b_{0}\tilde{d}_{21}(x))y+\nonumber\\
(2b_{0}d_{0}\tilde{d}_{11}(x)+d^{2}_{0}\tilde{d}_{12}(x)-b^{2}_{0}\tilde{d}_{21}(x))z,\nonumber\\
(a_{0}d_{0}-b_{0}c_{0})z^{\prime\prime}=((a_{0}d_{0}+b_{0}c_{0})\tilde{d}_{11}(x)+c_{0}d_{0}\tilde{d}_{12}(x)-a_{0}b_{0}\tilde{d}_{21}(x))z+\nonumber \\
(2a_{0}c_{0}\tilde{d}_{11}(x)+c^{2}_{0}\tilde{d}_{12}(x)-a^{2}_{0}\tilde{d}_{21}(x))y,\label{47}
\end{eqnarray}
where $a_{0}d_{0}-b_{0}c_{0}\neq0$. Using ($\ref{25}$), ($\ref{47}$) and the linear independence of the $\tilde{d}$'s gives
\begin{eqnarray}
a_{0}b_{0}=c_{0}d_{0}=0,\nonumber\\
a^{2}_{0}-b^{2}_{0}=c^{2}_{0}-d^{2}_{0}=0,\nonumber\\
a_{0}d_{0}+b_{0}c_{0}=a_{0}c_{0}-b_{0}d_{0}=0,\label{48}
\end{eqnarray}
which has only the trivial solution $a_{0}=b_{0}=c_{0}=d_{0}=0$; this is inconsistent with ($\ref{47}$), where $a_{0}d_{0}-b_{0}c_{0}\neq 0$ is required.
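That ($\ref{48}$) forces all four constants to vanish can also be confirmed by a short computation; the following \texttt{sympy} snippet is an added illustrative check, not part of the original argument.
\begin{verbatim}
# Check that the algebraic system (48) admits only the trivial solution,
# contradicting the requirement a0*d0 - b0*c0 != 0.
import sympy as sp

a0, b0, c0, d0 = sp.symbols('a0 b0 c0 d0')
system = [a0*b0, c0*d0,
          a0**2 - b0**2, c0**2 - d0**2,
          a0*d0 + b0*c0, a0*c0 - b0*d0]
print(sp.solve(system, [a0, b0, c0, d0], dict=True))
# -> only a0 = b0 = c0 = d0 = 0
\end{verbatim}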
\noindent\textbf{Case b.} If $a(x)$, $b(x)$, $c(x)$ and $d(x)$ are arbitrary functions of $x$ then
\begin{eqnarray}
\tilde{y}^{\prime\prime}=a(x)y^{\prime\prime}+b(x)z^{\prime\prime}+a^{\prime\prime}(x)y+b^{\prime\prime}(x)z+2a^{\prime}(x)y^{\prime}+2b^{\prime}(x)z^{\prime},\nonumber\\
\tilde{z}^{\prime\prime}=c(x)y^{\prime\prime}+d(x)z^{\prime\prime}+c^{\prime\prime}(x)y+d^{\prime\prime}(x)z+2c^{\prime}(x)y^{\prime}+2d^{\prime}(x)z^{\prime}.\label{49}
\end{eqnarray}
Thus we obtain
\begin{eqnarray}
(ad-bc)y^{\prime\prime}=[(ad+bc)\tilde{d}_{11}+cd\tilde{d}_{12}-ab\tilde{d}_{21}-a^{\prime\prime}d+c^{\prime\prime}b]y+(2bd\tilde{d}_{11}+\nonumber\\
d^{2}\tilde{d}_{12}-b^{2}\tilde{d}_{21}-b^{\prime\prime}d+d^{\prime\prime}b)z-2d(a^{\prime}y^{\prime}+b^{\prime}z^{\prime})+2b(c^{\prime}y^{\prime}+d^{\prime}z^{\prime}),\label{50}\\
(ad-bc)z^{\prime\prime}=(2ac\tilde{d}_{11}+c^{2}\tilde{d}_{12}-a^{2}\tilde{d}_{21}-a^{\prime\prime}c+c^{\prime\prime}a)y+[(ad+bc)\tilde{d}_{11}+\nonumber\\
cd\tilde{d}_{12}-ab\tilde{d}_{21}-b^{\prime\prime}c+d^{\prime\prime}a]z-2c(a^{\prime}y^{\prime}+b^{\prime}z^{\prime})+2a(c^{\prime}y^{\prime}+d^{\prime}z^{\prime}).\label{51}
\end{eqnarray}
Comparing the coefficients as before and using the linear independence of the $\tilde{d}$'s we obtain
\begin{eqnarray}
a^{\prime}(x)=b^{\prime}(x)=c^{\prime}(x)=d^{\prime}(x)=0,\label{52}
\end{eqnarray}
which implies that this case reduces to a system of the form ($\ref{47}$), leaving us again with the same result. Thus we have the theorem.\\
\newline
\textbf{Theorem 7.} \textit{The linear forms for systems of two second-order ODEs obtainable by CSA are in general inequivalent to those linear forms obtained by real symmetry analysis.}

Before presenting some illustrative applications of the theory developed, we refine Theorem 6 by using Theorem 7 to make the following remark.\\
\newline
\textbf{Remark 2.} There are \emph{only} $6$-, $7$- or $15$-dimensional algebras for linearizable systems obtainable from scalar complex linearizable ODEs, i.e., there are no $5$- or $8$-dimensional Lie point symmetry algebras for such systems.

\section{Applications}
Consider a system of non-homogeneous geodesic-type differential equations
\begin{eqnarray}
y^{\prime\prime}+y^{\prime 2}-z^{\prime 2}=\Omega_{1}(x,y,z,y^{\prime},z^{\prime}),\nonumber \\
z^{\prime\prime}+2y^{\prime}z^{\prime}=\Omega_{2}(x,y,z,y^{\prime},z^{\prime}),\label{53}
\end{eqnarray}
where $\Omega_{1}$ and $\Omega_{2}$ are linear functions of the dependent variables and their derivatives. This system corresponds to a complex scalar equation
\begin{eqnarray}
u^{\prime\prime}+u^{\prime 2}= \Omega(x,u,u^{\prime}),\label{54}
\end{eqnarray}
which is transformable either to the free particle equation or to one of the linear forms ($\ref{21}$)--($\ref{23}$), by means of the complex transformations
\begin{eqnarray}
\chi=\chi(x), ~U(\chi)=e^{u};\label{55}
\end{eqnarray}
these linear forms are further transformable to the free particle equation by utilizing another set of invertible complex point transformations. Generally, the system ($\ref{53}$) is transformable, via an invertible change of variables obtainable from ($\ref{55}$), to a system of free particle equations or to a linear system of the form
\begin{eqnarray}
Y^{\prime\prime}=\widetilde{\Omega}_{1}(\chi, Y, Z, Y^{\prime}, Z^{\prime})-\widetilde{\Omega}_{2}(\chi, Y, Z, Y^{\prime}, Z^{\prime}),\nonumber \\
Z^{\prime\prime}=\widetilde{\Omega}_{2}(\chi, Y, Z, Y^{\prime}, Z^{\prime})+\widetilde{\Omega}_{1}(\chi, Y, Z, Y^{\prime}, Z^{\prime}),\label{56}
\end{eqnarray}
where $\widetilde{\Omega}_{1}$ and $\widetilde{\Omega}_{2}$ are linear functions of the dependent variables and their derivatives. The linear form ($\ref{56}$) can be mapped to a maximally symmetric system if and only if there exist invertible complex transformations of the form ($\ref{55}$); otherwise these forms cannot be reduced further.
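The correspondence between the complex equation ($\ref{54}$) and the real system ($\ref{53}$) amounts to writing $u=y+iz$, $\Omega=\Omega_{1}+i\Omega_{2}$ and separating real and imaginary parts. The following \texttt{sympy} sketch (an added illustration, not part of the original text) performs this splitting for the left-hand sides.
\begin{verbatim}
# Split u'' + u'^2 into real and imaginary parts with u = y + i z,
# recovering the left-hand sides of the system (53).
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y', real=True)(x)
z = sp.Function('z', real=True)(x)
u = y + sp.I*z

expr = sp.expand(sp.diff(u, x, 2) + sp.diff(u, x)**2)
re_part = expr.coeff(sp.I, 0)      # y'' + y'^2 - z'^2
im_part = expr.coeff(sp.I, 1)      # z'' + 2 y' z'
print(re_part)
print(im_part)
\end{verbatim}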
This is why we obtain three equivalence classes, namely with $6$-, $7$- and $15$-dimensional algebras, for systems corresponding to linearizable complex equations, even though those complex equations form only one equivalence class. We first consider an example of a nonlinear system that admits a $15$-dimensional algebra and can be mapped to the free particle system using ($\ref{55}$). Then we consider three applications to nonlinear systems of quadratically semi-linear ODEs transformable to ($\ref{56}$) via ($\ref{55}$) that are not further reducible to the free particle system.\\
\newline
\textbf{1.} Consider ($\ref{53}$) with
\begin{eqnarray}
\Omega_{1}=-\frac{2}{x}y^{\prime}, \nonumber\\
\Omega_{2}=-\frac{2}{x}z^{\prime};\label{58}
\end{eqnarray}
it admits a $15$-dimensional algebra. The real linearizing transformations
\begin{eqnarray}
\chi(x)=\frac{1}{x},~Y=e^{y}\cos(z),~Z=e^{y}\sin(z),\label{57}
\end{eqnarray}
obtainable from the complex transformations ($\ref{55}$) with $U(\chi)=Y(\chi)+iZ(\chi)$, map the above nonlinear system to $Y^{\prime\prime}=0$, $Z^{\prime\prime}=0$. Moreover, the solution of ($\ref{58}$) corresponds to the solution of the associated complex equation
\begin{eqnarray}
u^{\prime\prime}+u^{\prime 2}+\frac{2}{x}u^{\prime}=0.\label{59}
\end{eqnarray}
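The claim that ($\ref{59}$) is mapped to $U^{\prime\prime}=0$ by $\chi=1/x$, $U=e^{u}$ can be verified directly; the following \texttt{sympy} sketch (an added illustration, not part of the original text) performs the check via the chain rule.
\begin{verbatim}
# Verify that chi = 1/x, U = exp(u) maps  u'' + u'^2 + (2/x) u' = 0  (eq. 59)
# to the free particle equation U'' = 0 (primes on U denote d/dchi).
import sympy as sp

x = sp.symbols('x', positive=True)
u = sp.Function('u')(x)
chi, U = 1/x, sp.exp(u)

dU  = sp.diff(U, x)/sp.diff(chi, x)      # dU/dchi by the chain rule
d2U = sp.diff(dU, x)/sp.diff(chi, x)     # d^2U/dchi^2

lhs59 = sp.diff(u, x, 2) + sp.diff(u, x)**2 + (2/x)*sp.diff(u, x)
print(sp.simplify(d2U - x**4*sp.exp(u)*lhs59))   # 0, so U''=0 iff (59) holds
\end{verbatim}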
\noindent\textbf{2.} Now consider $\Omega_{1}$ and $\Omega_{2}$ to be linear functions of the first derivatives $y^{\prime},~z^{\prime}$, i.e., system ($\ref{53}$) with
\begin{eqnarray}
\Omega_{1}=c_{1}y^{\prime}-c_{2}z^{\prime},\nonumber\\
\Omega_{2}=c_{2}y^{\prime}+c_{1}z^{\prime},\label{60}
\end{eqnarray}
which admits a $7$-dimensional algebra, provided $c_{1}$ and $c_{2}$ are not both zero. It is associated with the complex equation
\begin{eqnarray}
u^{\prime\prime}+u^{\prime 2}-cu^{\prime}=0.
\end{eqnarray}
The transformations ($\ref{55}$) generate the real transformations
\begin{eqnarray}
\chi(x)= x,~Y=e^{y}\cos(z),~Z=e^{y}\sin(z),\label{61}
\end{eqnarray}
which map the nonlinear system to a linear system of the form ($\ref{24}$), i.e.,
\begin{eqnarray}
Y^{\prime\prime}=c_{1}Y^{\prime}-c_{2}Z^{\prime},\nonumber\\
Z^{\prime\prime}=c_{2}Y^{\prime}+c_{1}Z^{\prime},\label{62}
\end{eqnarray}
which also has a $7$-dimensional symmetry algebra and corresponds to
\begin{eqnarray}
U^{\prime\prime}-cU^{\prime}=0.\label{66}
\end{eqnarray}
All linear second-order ODEs are transformable to the free particle equation; thus we can invertibly transform the above equation to $\widetilde{U}^{\prime\prime}=0$ using
\begin{eqnarray}
(\chi(x), U)\rightarrow(\widetilde{\chi}=\alpha+\beta e^{c\chi(x)},\widetilde{U}=U),
\end{eqnarray}
where $\alpha$, $\beta$ and $c$ are complex. But these complex transformations cannot generate real transformations that reduce the corresponding system ($\ref{62}$) to a maximally symmetric system.\\
\newline
\textbf{3.} A system with a $6$-dimensional Lie algebra is obtainable from ($\ref{53}$) by introducing a linear function of $x$ in the above coefficients, i.e., by taking
\begin{eqnarray}
\Omega_{1}=(1+x)(c_{1}y^{\prime}-c_{2}z^{\prime}),\nonumber\\
\Omega_{2}=(1+x)(c_{2}y^{\prime}+c_{1}z^{\prime}),\label{63}
\end{eqnarray}
in ($\ref{53}$). The same transformations ($\ref{61}$) then convert the above system into the linear system
\begin{eqnarray}
Y^{\prime\prime}=(1+\chi) \left (c_{1}Y^{\prime}-c_{2}Z^{\prime} \right ),\nonumber\\
Z^{\prime\prime}=(1+\chi) \left (c_{2}Y^{\prime}+c_{1}Z^{\prime} \right ),\label{64}
\end{eqnarray}
and the two systems ($\ref{63}$) and ($\ref{64}$) agree on the dimension (namely six) of their symmetry algebras. Again, the above system is a special case of the linear system ($\ref{24}$).\\
\newline
\textbf{4.} If we choose $\Omega_{1}=c_{1},~ \Omega_{2}=c_{2}$, where $c_{i}$ $(i=1,2)$ are non-zero constants, then under the same real transformations ($\ref{61}$) the nonlinear system ($\ref{53}$) takes the form
\begin{eqnarray}
Y^{\prime\prime}=c_{1}Y-c_{2}Z,\nonumber\\
Z^{\prime\prime}=c_{2}Y+c_{1}Z.\label{65}
\end{eqnarray}
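As with the previous examples, the passage from ($\ref{53}$) to ($\ref{65}$) under the transformations ($\ref{61}$) can be confirmed by direct substitution. The sketch below (an added \texttt{sympy} illustration, not part of the original text) checks that the residuals of the linear system ($\ref{65}$) are $e^{y}$ times rotations of the residuals of the nonlinear system, so that solutions are mapped to solutions.
\begin{verbatim}
# Check Example 4: with Y = exp(y) cos z, Z = exp(y) sin z (and chi = x),
# the residuals of Y'' = c1 Y - c2 Z, Z'' = c2 Y + c1 Z are exp(y) times
# rotations of the residuals of  y''+y'^2-z'^2 = c1,  z''+2y'z' = c2.
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = sp.Function('y')(x)
z = sp.Function('z')(x)
Y = sp.exp(y)*sp.cos(z)
Z = sp.exp(y)*sp.sin(z)

# residuals of the nonlinear system (53) with Omega_1 = c1, Omega_2 = c2
r1 = sp.diff(y, x, 2) + sp.diff(y, x)**2 - sp.diff(z, x)**2 - c1
r2 = sp.diff(z, x, 2) + 2*sp.diff(y, x)*sp.diff(z, x) - c2

# residuals of the linear system (65)
R1 = sp.diff(Y, x, 2) - (c1*Y - c2*Z)
R2 = sp.diff(Z, x, 2) - (c2*Y + c1*Z)

print(sp.simplify(R1 - sp.exp(y)*(sp.cos(z)*r1 - sp.sin(z)*r2)))   # 0
print(sp.simplify(R2 - sp.exp(y)*(sp.sin(z)*r1 + sp.cos(z)*r2)))   # 0
\end{verbatim}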
\section{Conclusion}
The classification of linearizable systems of two second-order ODEs was obtained by using the equivalence properties of systems of two linear second-order ODEs \cite{sb}. The ``optimal canonical form'' of the corresponding linear systems of two second-order ODEs, to which a linearizable system can be mapped, is crucial. This canonical form was obtained by invertible transformations; the invertibility of these mappings ensures that the symmetry structure is preserved. That optimal canonical form of the linear systems of two second-order ODEs led to five linearizable classes with respect to Lie point symmetry algebras, with dimensions $5$, $6$, $7$, $8$ and $15$. Systems of two second-order ODEs appearing in CSA correspond to some scalar complex second-order ODE. We proved the existence of a reduced optimal canonical form for such linear systems of two ODEs. This reduced canonical form provided three equivalence classes, namely with $6$-, $7$- or $15$-dimensional point symmetry algebras. Two cases are thus eliminated in the theory of complex symmetries: those of $5$- and $8$-dimensional algebras. The systems corresponding to a linearized complex scalar ODE involve one arbitrary coefficient which can only cover \emph{three} possibilities: (a) it is zero; (b) it is a non-zero constant; (c) it is a non-constant function. The non-existence of $5$- and $8$-dimensional algebras for the linear forms appearing due to CSA has been proved by showing that these forms are not equivalent to those provided by the real symmetry approach for systems \cite{sb} with $5$ and $8$ generators. Work is in progress \cite{saf3} to find complex methods of solving a class of $2$-dimensional \emph{nonlinearizable} systems of second-order ODEs. Such a class is also obtainable from linearizable scalar complex second-order ODEs that are transformable to the free particle equation via an invertible change of the dependent and independent variables of the form
\begin{eqnarray}
\chi=\chi(x,u), ~U(\chi)=U(x,u).
\end{eqnarray}
Notice that these transformations are different from ($\ref{55}$). The real transformations corresponding to the complex transformations above cannot be used to linearize the real system, but the linearizability of the complex scalar equations can be used to provide solutions for the corresponding systems.\\
\newline\textbf{Acknowledgements}\newline
The authors are grateful to Fazal Mahomed for useful comments and discussion on this work. MS is most grateful to NUST for providing financial support.

\begin{center}{\bf REFERENCES}\end{center}
\renewcommand{\refname}{}
\begin{thebibliography}{99}

\bibitem{saj} S. Ali, F.M. Mahomed and A. Qadir, Criteria for systems of two second-order differential equations by complex methods, {\it Nonlinear Dynamics} (to appear).

\bibitem{saj1} S. Ali, F.M. Mahomed and A. Qadir, Complex Lie symmetries for scalar second-order ordinary differential equations, {\it Nonlinear Analysis: Real World Applications} \textbf{10} (2009) 3335--3344.

\bibitem{saf} S. Ali, F.M. Mahomed and M. Safdar, Complete classification for systems of two second-order ordinary differential equations by using complex symmetry analysis, work in progress.

\bibitem{saf3} S. Ali, A. Qadir and M. Safdar, Symmetry solutions of two-dimensional systems not solvable by symmetry methods, work in progress.

\bibitem{aa} A.V. Aminova and N.A.M. Aminov, Projective geometry of systems of differential equations: general conceptions, {\it Tensor N.S.} \textbf{62} (2000) 65--86.

\bibitem{gor} V.M. Gorringe and P.G.L. Leach, Lie point symmetries for systems of second order linear ordinary differential equations, {\it Quaestiones Mathematicae} \textbf{11}(1) (1988) 95--117.

\bibitem{ibr} N.H. Ibragimov, {\it Elementary Lie Group Analysis and Ordinary Differential Equations}, Wiley, Chichester (1999).

\bibitem{lie1} S. Lie, {\it Differential Equations}, Chelsea, New York (1967).

\bibitem{lie} S. Lie, Klassifikation und Integration von gew\"ohnlichen Differentialgleichungen zwischen $x$, $y$, die eine Gruppe von Transformationen gestatten, {\it Arch. Math.} \textbf{VIII, IX} (1883) 187.

\bibitem{lie2} S. Lie, {\it Lectures on Differential Equations with Known Infinitesimal Transformations}, Teubner, Leipzig (1891) (in German; Lie's lectures written up by G. Scheffers).

\bibitem{lie3} S. Lie, Theorie der Transformationsgruppen, {\it Math. Ann.} \textbf{16} (1880) 441.

\bibitem{sur} F.M. Mahomed, Symmetry group classification of ordinary differential equations: Survey of some results, {\it Math. Meth. Appl. Sci.} \textbf{30} (2007) 1995--2012.
\bibitem{lea} F.M. Mahomed and P.G.L. Leach, Symmetry Lie algebra of $n$th order ordinary differential equations, {\it J. Math. Anal. Appl.} \textbf{151} (1990) 80--107.

\bibitem{mq1} F.M. Mahomed and A. Qadir, Linearization criteria for a system of second-order quadratically semi-linear ordinary differential equations, {\it Nonlinear Dynamics} \textbf{48} (2007) 417--422.

\bibitem{mq2} F.M. Mahomed and A. Qadir, Invariant linearization criteria for systems of cubically semi-linear second-order ordinary differential equations, {\it J. Nonlinear Math. Phys.} \textbf{16} (2009) 1--16.

\bibitem{jm} J. Merker, Characterization of the Newtonian free particle system in $m\geq2$ unknown dependent variables, {\it Acta Applicandae Mathematicae} \textbf{92} (2006) 125--207.

\bibitem{saf2} M. Safdar, A. Qadir and S. Ali, Inequivalence of classes of linearizable systems of cubically semi-linear ordinary differential equations obtained by real and complex symmetry analysis, {\it Math. Comp. Appl.} (to appear).

\bibitem{hs} H. Stephani, {\it Differential Equations: Their Solution Using Symmetries}, Cambridge University Press (1996).

\bibitem{sb} C. Wafo Soh and F.M. Mahomed, Symmetry breaking for a system of two linear second-order ordinary differential equations, {\it Nonlinear Dynamics} \textbf{22} (2000) 121--133.

\bibitem{wf} C. Wafo Soh and F.M. Mahomed, Linearization criteria for a system of second-order ordinary differential equations, {\it Int. J. Non-Linear Mech.} \textbf{36} (2001) 671--677.

\end{thebibliography}
\end{document}